• maegul@lemmy.ml · 12 points · 14 days ago

    Yea, academics need to just shut the publication system down. The more they keep pandering to it the more they look like fools.

    • bolexforsoup@lemmy.blahaj.zone · 4 points · edited · 14 days ago

      It’s a chicken-and-egg, or “you first,” problem.

      You spend years on your work. You probably have loans. Your income is pitiful. And this is the structural thing that gets your name out. Now someone says “hey take a risk, don’t do it and break the system.”

      Well…you first 🤷‍♂️ they publish on this garbage because it’s the only way to move up, and these garbage systems continue on because everyone has to participate. Hate the game. Don’t blame those who are by and large forced to participate.

      It would require a lot of effort from people with clout. It’s a big fight to pick. I am very much in favor of picking that fight, but we need to be a little sympathetic to what that entails.

      • Rolando@lemmy.world · 2 points · 14 days ago

        There are a couple things we can do:

        • Decline to review for the big journals. Why give them free labor? Do academic service in other ways.
        • If you’re organizing a workshop or conference, put the papers online for free. If you’re just participating and not organizing, suggest they put the papers online for free. Here’s an example: https://aclanthology.org/ If that’s too time-consuming, use https://arxiv.org/
        • RBG@discuss.tchncs.de · 3 points · 14 days ago

          Fully agree, but regarding point 1, I can tell you there are enough gullible scientists in the world who see nothing wrong with the current system.

          They will gladly pick up free review work when Nature comes knocking, since it’s “such an honour” for such a reputable paper.

          • Feathercrown@lemmy.world · 2 points · 14 days ago

            Such a reputable paper that’s no doubt accepted dozens of ChatGPT papers by now. Wow, how prestigious!

      • angrymouse@lemmy.world · 2 points · edited · 14 days ago

        100%. People need to stop thinking big changes can be made by individuals. This kind of thing needs regulation and state alternatives, won by popular pressure; otherwise it’s impossible to break as an average worker dealing with the private sector.

        • skillissuer@discuss.tchncs.de · 1 point · 13 days ago

          I applied for a grant last month, and to finalize the grant you now need to publish in an open access format (EU country; there’s a push for all publicly funded research to be open access, with it becoming a requirement from year ??? on, not sure when, but soon). There’s some special funding set aside just for open access fees, which is still rotten because these leeches still stand to profit. Then, if you miss that, there’s an agreement where my uni pays a selection of publishers to let in a certain number of articles per year as open access, which is basically the same thing with a different source of funding (not from the grant, but straight from the ministry).

      • qjkxbmwvz@startrek.website · 0 points · 14 days ago

        Funding agencies have huge power here; demanding that research be published in OA journals is perhaps a good start (with limits on $ spent publishing, perhaps).

        • skillissuer@discuss.tchncs.de · 1 point · edited · 13 days ago

          I hear you, but this leaves a massive gaping hole that gets very quickly filled by predatory journals.

          The better solution would be journals created and maintained by universities or other institutions with national (or international, e.g. EU) funding.

      • porous_grey_matter@lemmy.ml · 0 points · 14 days ago

        Nope, you just can’t get a job unless you suck it up and publish in these journals, because they’re already famous. And established profs use their cosy relationships with editors to gatekeep and stifle competition for their funding :(

  • Rayspekt@lemmy.world · 4 points · 14 days ago

    When will scientists just self-publish? I mean seriously, nowadays there is nothing between a researcher and publishing their stuff on the web. The only missing piece would be peer review, if you want that, but you can organize that without Elsevier. Reviewers get paid jack shit, so you could just run a peer-reviewing fediverse instance where only the mods know the identities, so it’s still double-blind.

    This system exists just to dangle carrots in front of young researchers chasing their PhD.

    • GingaNinga@lemmy.world · 2 points · 14 days ago

      Because of “impact factor,” the journal your work gets placed in has a huge impact on future funding. It’s a very frustrating process, and trying to go around it is like suicide for your lab, so it has to be more of a top-down fix; the bottom-up version is never going to happen.

      That’s why everyone uses Sci-Hub. These publishers are terrible companies, up there with EA in unpopularity.

      • WhatAmLemmy@lemmy.world · 1 point · 14 days ago

        It sounds like all it would take to destroy the predatory for-profit publication oligarchs is a majority of the top few hundred scientists, across major disciplines, rejecting it and switching to a completely decentralized peer-2-peer open-source system in protest… The publication companies seem to gate keep, and provide no value. It’s like Reddit. The site’s essentially worthless. All of the value is generated by the content creators.

      • Rayspekt@lemmy.world · 1 point · 14 days ago

        We should just self-publish and then openly argue about the findings like the OG scientists. It didn’t stop them from discovering anything.

        • VeganPizza69 Ⓥ@lemmy.world · 1 point · 14 days ago

          Editors can act as filters, which is required when dealing with an excess of information streaming in. Just like you follow celebrities on social media or you follow pseudo-forums like this one, you get a service of information filtration which increases the concentration of useful knowledge.

          In the early days of modern science, the rate of publication was low, making it easier to “digest” entire fields even with self-publishing. The number of published papers grows exponentially, as does the number of journals. https://www.researchgate.net/publication/333487946_Over-optimization_of_academic_publishing_metrics_Observing_Goodhart’s_Law_in_action/figures

          Just like with these forums, the need for moderators (editors, reviewers) grows with the number of users who add content.

        • roguetrick@lemmy.world · 1 point · edited · 14 days ago

          Bone Wars: Electric Boogaloo. In the end you really do need a way to discern who is having an appreciable impact in a field in order to know whom to fund. I have yet to hear a meaningful metric for that, though.

          Edit: I should clarify, the other option is strictly political through an academy of sciences and has historical awfulness associated with it as well.

  • tuna@discuss.tchncs.de · 2 points · 13 days ago

    Imagine they have an internal tool to check if the hash exists in their database, something like

    "SELECT user FROM downloads WHERE hash = '" + hash + "';"
    

    You set the PDF hash to 1'; DROP TABLE books;--, they scan it, and it effectively deletes their entire business lmfaoo.

    Another idea might be to duplicate the PDF many times and insert bogus metadata for each. Then submit requests saying that you found an illegal distribution of the PDF. If their process isn’t automated it would waste a lot of time on their part to find the culprit Lol

    I think it’s more interesting to think of how to weaponize their own hash rather than deleting it
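
    For anyone curious, the injection only works when the query is built by string concatenation, as in the hypothetical snippet above. Here is a minimal sketch in Python with sqlite3 (the table and column names are made up to match the comment, not any real publisher schema); a parameterized query treats the payload as plain data:

```python
import sqlite3

# Hypothetical schema matching the comment's imagined internal tool.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE downloads (user TEXT, hash TEXT)")
conn.execute("INSERT INTO downloads VALUES ('alice', 'abc123')")

malicious_hash = "1'; DROP TABLE books;--"

# Vulnerable: concatenation splices the payload into the SQL itself,
# producing a second statement that would drop a table if executed.
query = "SELECT user FROM downloads WHERE hash = '" + malicious_hash + "';"

# Safe: a parameterized query keeps the payload as an opaque value.
rows = conn.execute(
    "SELECT user FROM downloads WHERE hash = ?", (malicious_hash,)
).fetchall()
print(rows)  # [] -- no match, and nothing gets dropped
```

    (Most real systems use parameterized queries for exactly this reason, so the DROP TABLE trick is more of a fun thought experiment than a working attack.)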

    • thesporkeffect@lemmy.world · 1 point · 13 days ago

      That’s using your ass. This is an active threat to society and it demands active countermeasures.

      I’d bet they have a SaaS ‘partner’ who trawls SciHub and other similar sites. I’ll try to remember to see if there is any hint of how this is being accomplished over the next few days.

  • Passerby6497@lemmy.world · 1 point · 14 days ago

    That’s where you print the downloaded PDF to a new PDF. New hash and same content, good luck tracing it back to me fucko.
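
    A rough illustration of why re-printing defeats hash matching (the byte strings below are placeholders, not real PDFs): regenerating the file changes its bytes, so the cryptographic hash no longer matches the original even when the visible content is identical. As the replies note, though, this doesn’t remove watermarks embedded in the content itself.

```python
import hashlib

# Placeholder byte strings standing in for a downloaded PDF and the
# same document re-printed to a new PDF; not real PDF data.
original = b"%PDF-1.7 ...same visible content... /ID <original>"
reprinted = b"%PDF-1.7 ...same visible content... /ID <regenerated>"

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(reprinted).hexdigest()
print(h1 == h2)  # False: any differing byte changes the hash entirely
```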

    • Syn_Attck@lemmy.today · 2 points · edited · 14 days ago

      Unfortunately that wouldn’t work as this is information inside the PDF itself so it has nothing to do with the file hash (although that is one way to track.)

      Now that this is known, it’s not enough to remove metadata from the PDF itself. Each image inside a PDF, for example, can contain its own metadata. I say this because they’re apparently starting a game of whack-a-mole, and it won’t stop here.

      There are multiple ways of removing ALL metadata from a PDF, here are most of them.

      It will be slow-ish and probably make the file larger, but if you’re sharing a PDF that only you are supposed to have access to, it’s worth it. MAT or exiftool should work.

      Edit: as spoken about in another comment thread here, there is also pdf/image steganography as a technique they can use.

      • Zacryon@lemmy.wtf · 1 point · 13 days ago

        Okay, got it. Print the PDF, then scan it and save as PDF.

        Or get some monks to get a handwritten copy, like the good old times.

        • sandbox@lemmy.world · 1 point · 14 days ago

          it’s possible using steganographic techniques to embed digital watermarks which would not be stripped by simply printing to pdf.

            • Syn_Attck@lemmy.today · 1 point · 14 days ago

              You should spread that idea around more, it’s pretty ingenious. I’d add first converting to B&W if possible.

          • Syn_Attck@lemmy.today · 0 points · edited · 14 days ago

            This is a great point. Image watermarking steganography is nearly impossible to defeat unless you can obtain multiple copies of the ‘same’ file from multiple users and look for differences. It could be a change to as few as 5–15 pixels, each just one RGB value off, e.g. from

            rgb(255, 251, 0)

            to

            rgb(255, 252, 0)

            Which would be imperceptible to the human eye. Depending on the number of users, it may need to change more or fewer pixels.

            There is a ton of work in this field and it’s very interesting; worth a look for anyone considering majoring in computer science / information security.

            Another ‘neat’ technology everyone should know about is machine identification codes: the tiny secret tracking dots that color printers print on every page to identify the specific make, model, and serial number (I think?) of the printer the page was printed from. I don’t believe B&W printers have tracking dots, which were originally used to track creators of counterfeit currency. The EFF has a page listing color printers that do not include tracking dots on printed pages, covering color LaserJets along with InkJets; although I would not be surprised if a similar tracking feature exists now or appears in the future “for safety and privacy reasons,” none that I am aware of.
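
            As a toy sketch of the single-pixel idea (the scheme, pixel positions, and user IDs are all made up for illustration; real watermarking schemes are far more robust):

```python
# Toy watermark: hide a user ID in the low bit of the blue channel of a
# few agreed-upon pixels, changing each by at most one RGB step (as in
# the rgb(255, 251, 0) -> rgb(255, 252, 0) example above).
def embed_id(pixels, user_id, positions):
    out = list(pixels)
    for bit_index, pos in enumerate(positions):
        r, g, b = out[pos]
        bit = (user_id >> bit_index) & 1
        out[pos] = (r, g, (b & ~1) | bit)  # blue channel moves by <= 1
    return out

def extract_id(pixels, positions):
    uid = 0
    for bit_index, pos in enumerate(positions):
        uid |= (pixels[pos][2] & 1) << bit_index
    return uid

# A flat yellow image of rgb(255, 251, 0) pixels, watermarked for user 37.
image = [(255, 251, 0)] * 100
positions = [3, 17, 42, 55, 71, 88]  # arbitrary, pre-agreed pixel indices
marked = embed_id(image, user_id=37, positions=positions)

print(extract_id(marked, positions))  # 37
```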

            • sus@programming.dev · 1 point · edited · 13 days ago

              I wonder if it’s common for those steganography techniques to have some mechanism for defeating the fairly simple strategy of getting 2 copies of the file from different sources, and looking at the differences between them to expose all the watermarks.

              (I’d think you would need sections of watermark that are the same for any 2 or n combinations of copies of the data, which may be pretty easy to do in many cases, though the difference makes detecting the general watermarking strategy massively easier for the un-watermarkers)
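
              The comparison attack is easy to sketch (all values made up): diff two copies of the ‘same’ image watermarked for different users, and the differing pixels expose exactly where the watermarks live.

```python
# Two users' copies of the "same" flat image; each copy carries its own
# watermark pixels (values invented for illustration).
copy_a = [(255, 251, 0)] * 10
copy_b = list(copy_a)
copy_b[3] = (255, 250, 0)   # user-specific watermark pixel
copy_b[7] = (255, 252, 0)

# Pixels that differ between the copies reveal the watermark locations.
suspect = [i for i, (pa, pb) in enumerate(zip(copy_a, copy_b)) if pa != pb]
print(suspect)  # [3, 7]
```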

    • xenoclast@lemmy.world · 1 point · 13 days ago

      There are tools for this already… but it sure would be nice to have a Firefox plugin that scrubs all metadata on downloads by default.

      (Note I’m hoping this exists and someone will Um, Actually me)

  • NeatNit@discuss.tchncs.de · 0 points · 14 days ago

    I kind of assume this with any digital media. Games, music, ebooks, stock videos, whatever - embedding a tiny unique ID is very easy and can allow publishers to track down leakers/pirates.

    Honestly, even though as a consumer I don’t like it, I don’t mind it that much. Doesn’t seem right to take the extreme position of “publishers should not be allowed to have ANY way of finding out who is leaking things”. There needs to be a balance.

    Online phone-home DRM is a huge fuck no, but a benign little piece of metadata that doesn’t interact with anything and can’t be used to spy on me? Whatever, I can accept it.

    • henfredemars@infosec.pub · 0 points · 14 days ago

      I object because my public funds were used to pay for most of these papers. Publishers shouldn’t behave as if they own it.

      • NeatNit@discuss.tchncs.de · 0 points · 14 days ago

        That’s true. I was actually thinking/talking about this practice in general, not specifically with regards to Elsevier.

        I definitely agree that scientific journals as they are today are unacceptable.

  • Dark_Dragon@lemmy.dbzer0.com · 0 points · 14 days ago

    Can’t all of us researchers who are technically good with web servers start an open-source alternative to these paid services? I get that we need to publish with a renowned publisher, but we could also decide together to publish to an open-source alternative as well. That way the open-source option also grows.

    • Salamander@mander.xyz · 1 point · edited · 13 days ago

      Some time last year I learned of an example of such a project (peerreview on GitHub):

      The goal of this project was to create an open access “Peer Review” platform:


      Peer Review is an open access, reputation based scientific publishing system that has the potential to replace the journal system with a single, community run website. It is free to publish, free to access, and the plan is to support it with donations and (eventually, hopefully) institutional support.

      It allows academic authors to submit a draft of a paper for review by peers in their field, and then to publish it for public consumption once they are ready. It allows their peers to exercise post-publish quality control of papers by voting them up or down and posting public responses.


      I just looked it up now to see how it is going… And I am a bit saddened to find out that the developer decided to stop. The author has a blog in which he wrote about the project and about why he is not so optimistic about the prospects of crowd sourced peer review anymore: https://www.theroadgoeson.com/crowdsourcing-peer-review-probably-wont-work , and related posts referenced therein.

      It is only one opinion, but at least it is the opinion of someone who has thought about this some time and made a real effort towards the goal, so maybe you find some value from his perspective.

      Personally, I am still optimistic about this being possible. But that’s easy for me to say as I have not invested the effort!

  • Андрей Быдло@sh.itjust.works · 0 points · 14 days ago

    If the paper is worth it and has an original (not OCR-ed) text layer, it’s better exported as almost any other format. We don’t call good things a PDF file, lol. It’s clumsy and heavy, has an unadjustable font size and useless empty borders, includes various limits and takes on DRM, and editing it usually requires paid software. This format should die off.

    The only reason academia needs it is strict references to exact pages, but that’s not hard to emulate, and the upsides are overwhelming.

    I’ve had my couple of rounds properly digitizing PDFs into e-book and text-processing formats, and it’s a pain in the ass, but if I know it’ll be read by someone besides me, I’m okay with putting a bit more effort into it.

    • petersr@lemmy.world · 1 point · 14 days ago

      Well, I guess PDF has one thing going for it (which might not be relevant for scientific papers): the same file will render the same on any platform (assuming the reader implements the full PDF spec to a tee).

      • Андрей Быдло@sh.itjust.works · -1 points · 14 days ago

        FB2 is a format well known among Russian pirates, but it can and should be improved because it sucks ass in many ways. FB3 was announced long ago but hasn’t gotten any traction yet.

        EPUB is more popular, so it’ll probably be the go-to format for most books the US and EU create, but it isn’t much better.

        Other than that, even doc/docx is better than PDF, but I’d recommend RTF since it has fewer traces of M$ bullshit; while it’s an imperfect format, it’s still better.

        • visc@lemmy.world · 1 point · 12 days ago

          Docx, doc, RTF and all those have a different purpose than PDF. Word docs don’t even necessarily look the same on two different computers with the same version of Word, and RTF doesn’t even attempt any kind of page description; it’s literally only a rich format for text. None of these is a true “if I give this to someone to print, I know what I will get” portable document format.

          I will look at fb*, I had not heard of them. Thanks!

        • sem@lemmy.blahaj.zone · 1 point · 13 days ago

          I don’t like docx because it looks different in LibreOffice compared to Windows; you can also run into problems with fonts.