I’ve recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the models themselves, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
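
As a rough back-of-the-envelope sketch of that size argument (the corpus and model sizes below are hypothetical, purely for illustration):

```python
# Hypothetical sizes, for illustration only.
training_data_bytes = 10e12  # ~10 TB of training text
model_size_bytes = 140e9     # ~140 GB model (e.g., 70B parameters at 2 bytes each)

ratio = training_data_bytes / model_size_bytes
print(f"Training data is ~{ratio:.0f}x larger than the model.")  # ~71x

# Even if every byte of the model did nothing but store raw text verbatim,
# it could hold at most ~1/71 of the corpus -- so, on average, the model
# cannot reproduce arbitrary specific works from its training data.
```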

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on someone’s data to reproduce their “likeness.” I understand the hate for AI-generated shit (because it is shit). I really don’t understand where all this hate for using public data to build a “statistical” model that “learns” general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with things like background removers, better autocomplete, etc.), which might eliminate some jobs, but that’s really a problem with capitalism, and productivity increases are generally considered good.

  • jordanlund@lemmy.world · 3 months ago

    Generally the argument isn’t public vs. private, it’s public domain vs. copyright.

    You want to train an LLM using the contents of Project Gutenberg? Great, go for it!

    You want to train an LLM using bootlegged epubs stolen from Amazon? Now that’s a different deal.

    • troed@fedia.io · 3 months ago

      Sure - they’d need to at least borrow the epubs, just like a human would need to if they wanted to read them.

  • MajorHavoc@programming.dev · 3 months ago

    This falls squarely into the trap of treating corporations as people.

    People have a right to public data.

    Corporations should continue to be tolerated only while they carefully walk an ever-tightening fine line of acceptable behavior.

      • MajorHavoc@programming.dev · 3 months ago

        Yes. Large groups of people acting in concert, with large amounts of funding and influence, must be held to the highest standards, regardless of whether they’re doing something I personally value highly.

        An individual’s rights must be held sacred.

        When those two goals are in conflict, we must melt the corporation-in-conflict down for scrap parts, donate all of its intellectual property to the public domain, and try again with forming a new organization with a similar but refined charter.

        Ideally, shareholders should be absolutely fucked by this arrangement when their corporation fucks up, as an incentive to watch and maintain legal compliance in any companies they hold shares in and influence over.

        Investment will still happen, but with more care. We have historically used this model to great innovative success, public good, and lucrative dividends. Some people have forgotten how it can work.

        • areyouevenreal@lemm.ee · 3 months ago

          I think they are saying that preventing open-source models from being trained and released prevents people from using them. Trying to make training these models more difficult doesn’t just affect businesses; it affects individuals too. Essentially you have all been trying to stand in the way of progress, probably because of fears over job security. It’s not really different from being a Luddite.

          • MajorHavoc@programming.dev · 3 months ago

            Essentially you have all been trying to stand in the way of progress,

            Fuck progress from anyone who can’t be bothered to do it right. There are justified risks where the cost of inaction is just as horrible as the cost of action. This isn’t that, and everyone saying it is, is an asshole whose shouting we would all be better off without.

            This work can be done correctly, and even reasonably quickly. Shortcuts aren’t merited.

            probably because of fears over job security. It’s not really different from being a Luddite.

            My job is secure. I have substantially more than typical expertise in language models.

            The emperor, today, is butt naked. Anyone telling you we are about to see fast new progress is full of shit, and isn’t your friend.

            I’ve seen this before, and I’ll see it again.

            I’ve given a polite warning, where it looked like folks might listen. The rest aren’t my problem.

  • wewbull@feddit.uk · 3 months ago

    Define “public”.

    Publicly available is not the same as public domain. You should respect copyright, especially that of small creators. I’m of the opinion that an ML model is a derivative work, so if you’ve trawled every website under the sun for data to feed your model, you’ve violated copyright.

    • VoterFrog@lemmy.world · 3 months ago

      There are multiple facets here that all kinda get mashed together when people discuss this topic and the publicly available/public domain difference kinda gets at that.

      • An AI company downloading a publicly available work isn’t a violation of copyright law. Copyright gives the owner exclusive right to distribute their work. Publishing it for anybody to download is them exercising that right.
      • Of course, if the work isn’t publicly available and the AI company got it, someone probably did violate copyright laws, likely the people who distributed the data set to the company because they’re not supposed to be passing around the work without the owner’s permission.
      • All that is to say, downloading something isn’t making a copy. Sending the work is making a copy, as far as copyright is concerned. Whether the person downloading it is going to use it for something profitable doesn’t really change anything there. Only if they were to become the sender at some later point does it matter. In other words, there’s no violation of copyright law by the company that can really occur during the whole “training” phase of AI development.
      • Beyond that, AI isn’t in the business of serving copies of works. They might come close in some specific instances, but that’s largely a technical problem that developers want to fix rather than a fundamental purpose of these models.
      • The only real case that might work against them is whether or not the works they produce are derivative… But derivative/transformative has a pretty strict legal definition. It’s not enough to show that the work was used in the creation of a new work. You can, for example, create a word cloud of your favorite book, analyze the tone of a news article to help you trade stocks, or produce an image containing the most prominent color in every frame of a movie. None of these could exist without deriving from a copyrighted work, but none of them count as a legally derivative work.
      • I chose those examples because they are basic statistical analyses not far from what AI training involves (the word-count sketch after this list is one such analysis). There are a lot of aspects of a work that are not covered by copyright: style, structure, factual information. The kinds of things that AI is mostly interested in replicating.
      • So I don’t think we’re going to see a lot of success in taking down AI companies with copyright. We might see some small-scale success when an AI crosses a line here or there. But unless a judge radically alters the bounds of copyright law, to everyone’s detriment, their opponents are going to have an uphill battle to fight here.
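
      For illustration, a minimal sketch of the kind of basic statistical analysis described above: a word-frequency count (the data behind a word cloud). The input text here is a placeholder; the point is that the output is statistics about the work, not a copy of it.

      ```python
      from collections import Counter
      import re

      # Placeholder input; imagine the full text of a copyrighted book here.
      text = "It was the best of times, it was the worst of times..."

      # Count word frequencies -- the data behind a word cloud.
      words = re.findall(r"[a-z']+", text.lower())
      frequencies = Counter(words)

      print(frequencies.most_common(3))  # [('it', 2), ('was', 2), ('the', 2)]
      # The original text cannot be reconstructed from these counts.
      ```
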
      • CptBread@lemmy.world · 3 months ago

        An AI model could be seen as an efficient but lossy compression scheme, especially when it comes to images… And a compressed JPEG of an image is still seen as a copy, so why would an AI model trained on reproducing it be different?
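
        To make “lossy” concrete, a small sketch (assumes the Pillow library; the synthetic gradient stands in for a real photo):

        ```python
        import io
        from PIL import Image

        # Synthetic stand-in for a copyrighted photo.
        img = Image.new("RGB", (256, 256))
        img.putdata([(x, y, (x + y) % 256) for y in range(256) for x in range(256)])

        for quality in (95, 50, 5):
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=quality)
            print(f"quality={quality}: {buf.tell()} bytes")

        # Lower quality -> fewer bytes -> more information discarded, yet even
        # the quality=5 output is still recognizably "a copy" of the image.
        ```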

        • BluesF@lemmy.world · 3 months ago

          Are you suggesting that the model itself is a compressed version of its training data? I think accepting that requires stretching how training actually works.

        • FrenziedFelidFanatic@yiffit.net · 3 months ago

          It depends on how much you compress the JPEG. If it gets compressed down to 4 pixels, it cannot be seen as infringement. Technically, the word cloud is lossy compression too: it keeps the words of the text, but none of the structure. I think it depends largely on how well you can reconstruct the original from the data. A word cloud, for instance, cannot be used to reconstruct the original. Nor can a compressed JPEG, of course; that’s the definition of lossy. But most of the information is still there, so a casual observer can quickly glean the gist of the image. There is a line somewhere between finding the average color of a work (compression down to one pixel) and JPEG compression levels.

          Is the line where the main idea of the work becomes obscured? Surely not, since a summary hardly infringes on the copyright of a book. I don’t know where this line should be drawn (personally, I feel very Stallman-esque about copyright: IP is not a coherent concept), but if we want to put rules on these things, we need to define them well, which requires venturing into the domain of information theory (what percentage of the entropy in the original is present in the redistributed work, for example). I don’t know how realistic that is in the context of law.
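
          As a sketch of the entropy idea (a crude character-level proxy; a real legal test would need far more than this):

          ```python
          import math
          from collections import Counter

          def entropy_bits_per_char(text: str) -> float:
              """Shannon entropy of the character distribution, in bits per character."""
              counts = Counter(text)
              total = len(text)
              return -sum((n / total) * math.log2(n / total) for n in counts.values())

          original = "The quick brown fox jumps over the lazy dog. " * 20
          summary = "A fox jumps over a dog."

          # bits/char * length ~= (very roughly) total information content
          for name, t in (("original", original), ("summary", summary)):
              print(f"{name}: {entropy_bits_per_char(t):.2f} bits/char over {len(t)} chars")
          ```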

  • NateNate60@lemmy.world · 3 months ago

    This is not an opinion. You have made a statement of fact. And you are wrong.

    At law, something being publicly available does not mean it is allowed to be used for any purpose. Copyright law still applies. In most countries, making something publicly available does not cause all copyrights to be disclaimed on it. You are still not permitted to, for example, repost it elsewhere without the copyright holder’s permission, or, as some courts have ruled, use it to train an AI that then creates derivative works. Derivative works are not permitted without the copyright holder’s permission. Courts have ruled that this could mean everything an AI generates is a derivative work of everything in its training data and, therefore, copyright infringement.

    • FrenziedFelidFanatic@yiffit.net · 3 months ago

      Saying that statistical analysis is a derivative work is a massive stretch. Generative AI is just a way of representing statistical data. The representation isn’t particularly informative or useful on its own (it may have random noise added to create something new, for example), but calling it a derivative work in the same way that fan fiction is derivative is disingenuous at best.

      • the_toast_is_gone@lemmy.world · 3 months ago

        Wouldn’t that argument be like saying an image I drew of a copyrighted character is just an arrangement of pixels based on existing data? The fact remains that, if I tell an AI to generate an image of a copyrighted character, then it’ll produce something without the permission of the original artist.

        I suppose then the problem becomes: who do you hold responsible for the copyright violation (if you pursue that course of action)? Do you go after the guy who told the AI to do it, or the people who trained the AI and published it? Possibly both? On one hand, suing the AI company would be like suing Adobe because they don’t stop people from drawing copyrighted materials in their software (yet). On the other hand, they did create software that basically acts in the place of an artist who draws whatever you want for a commission. If that artist was drawing copyrighted characters for money, you could make the case that the AI company is doing the same: manufacturing copyrighted character images by feeding the AI images of the character and allowing people to generate images of it while collecting money for their services.

        All this to say, copyright is stupid.

      • Match!!@pawb.social · 3 months ago

        • Tracing a picture to make an outline in pencil is a derivative work. There are plenty of court cases ruling on this.

        • A convolutional neural network applies a kernel over the input layer to (for example) detect edges and output to the next layer a digital equivalent of a tracing (a rough sketch of this follows below).

        Why would the CNN not be a derivative work if tracing by hand is?
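
        A rough sketch of the kernel-over-input mechanic described above, using a hand-written Sobel kernel (CNNs learn their kernels during training, but the sliding-window arithmetic is the same; the tiny image is a placeholder):

        ```python
        import numpy as np

        # A Sobel kernel responds strongly to vertical edges.
        sobel_x = np.array([[-1, 0, 1],
                            [-2, 0, 2],
                            [-1, 0, 1]])

        def apply_kernel(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
            """Slide the kernel over the image (cross-correlation, as CNN layers do)."""
            kh, kw = kernel.shape
            h, w = image.shape
            out = np.zeros((h - kh + 1, w - kw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
            return out

        # Tiny image: dark on the left, bright on the right -> one vertical edge.
        image = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)
        print(apply_kernel(image, sobel_x))
        # Large values appear exactly where the edge is -- a "tracing" of the edge.
        ```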

        • FrenziedFelidFanatic@yiffit.net · 3 months ago

          Tracing is fine if you use it to learn how to draw. It’s not fine if it ends up in the finished product. Determining whether it ends up in the finished product with AI means either finding the exact pattern in the AI’s output (which you will not), or clearly understanding how AIs use their training data (which we do not).

    • Zagorath@aussie.zone · 3 months ago

      They have indeed made a statement of fact. But to the best of my knowledge it’s not one that’s got any definite controlling precedent in law.

      You are still not permitted to, for example, repost it elsewhere without the copyright holder’s permission

      That’s the thing. It’s not clear that an LLM does “repost it elsewhere”. As the OP said, the model itself is basically just a mathematical construct that can’t really be turned back into the original work, which is possibly a sign that it’s not a derivative work, but a transformative one, which is much more likely to be given Fair Use protection. Though Fair Use is always a question mark and you never really know if a use is Fair without going to court.

      You could be right here. Or OP could. As far as I’m concerned anyone claiming to know either way is talking out of their arse.

      • Eccitaze@yiffit.net · 3 months ago

        Just because something is transformative doesn’t mean that it’s fair use. There are three other factors: the nature of the work you copied, the amount of the copyrighted work taken for the use, and the effect on the market. There’s no way in hell I believe that anyone can plausibly say with a straight face “I’m taking literally all of the creative works you’ve ever produced and using them to create a product designed to directly compete with you and put you out of business, and this qualifies as a fair use,” and I would be shocked if any judge in any court heard that argument without laughing the poor lawyer making it out of the courtroom.

  • LibertyLizard@slrpnk.net · 3 months ago

    I don’t have a problem with tech companies doing statistics on publicly available data, I have a problem with them getting rich by charging money for the collective creative works of humanity. But if they want to share their work for free, I have no issue with that.

    • wewbull@feddit.uk · 3 months ago

      Yeah, because corporations never make money off things they make available free of charge. There’s no way this could go wrong.

  • deaf_fish@lemm.ee · 3 months ago

    For personal or public use, I’m fine with it. If you use it to make money, that’s when I get upsetti spaghetti.

    • stoicmaverick@lemmy.world · 3 months ago

      Ok. Devil’s advocate: how is a software engineer profiting from his AI model different from an artist who learns to draw by mimicking the style of public works? Asking for a friend.

      • Eccitaze@yiffit.net · 3 months ago

        Good question!

        First, that artist will only learn from a handful of artists instead of every artist’s entire body of work all at the same time. They will also eventually develop their own unique style and voice; the art they make will reflect their own views in some fashion, instead of being a poor facsimile of someone else’s work.

        Second, mimicking the style of other artists is a generally poor way of learning how to draw. Just leaping straight into mimicry doesn’t really teach you any of the fundamentals like perspective, color theory, shading, anatomy, etc. Mimicking an artist that draws lots of side profiles of animals in neutral lighting might teach you how to draw a side profile of a rabbit, but you’ll be fucked the instant you try to draw that same rabbit from the front, or if you want to draw a rabbit at sunset. There’s a reason why artists do so many drawings of random shit like cones casting a shadow, or a mannequin doll doing a ballet pose, and it ain’t because they find the subject interesting.

        Third, an artist spends anywhere from dozens to hundreds of hours practicing. Even if someone sets out expressly to mimic someone else’s style and teaches themselves the fundamentals, it’s still months and years of hard work and practice, and a constant cycle of self-improvement, critique, and study. This applies to every artist, regardless of how naturally talented or gifted they are.

        Fourth, there’s a sort of natural bottleneck in how much art that artist can produce. The quality of a given piece of art scales roughly linearly with the time the artist spends on it, and even artists that specialize in speed painting can only produce maybe a dozen pieces of art a day, and that kind of pace is simply not sustainable for any length of time. So even in the least charitable scenario, where a hypothetical person explicitly sets out to mimic a popular artist’s style in order to leech off their success, it’s extremely difficult for the mimic to produce enough output to truly threaten their victim’s livelihood. In comparison, an AI can churn out dozens or hundreds of images in a day, easily drowning out the artist’s output.

        And one last, very important point: artists who trace other people’s artwork and upload the traced art as their own are almost universally reviled in the art community. Getting caught tracing art is an almost guaranteed way to get yourself blacklisted from every art community and banned from every major art website I know of, especially if you’re claiming it’s your own original work. The only way it’s even mildly acceptable is if the tracer explicitly says “this is traced artwork for practice, here’s a link to the original piece, the artist gave full permission for me to post this.” Every other creative community, writing and music included, takes a similarly dim view of plagiarism, though it’s much harder to prove outright than with art. Given this, why should the art community treat someone differently just because they laundered their plagiarism with some vector multiplication?

      • deaf_fish@lemm.ee · 3 months ago

        Good question.

        Ok, so let’s say the artist does exactly what the AI does, in that they don’t try to do anything unique, just looking around at existing content and trying to mix and mash existing ideas. No developing their own style, no curiosity about art history, no humanity, nothing. In this case I would say that they are mechanically doing the exact same thing as an AI. Do I think they should get paid? Yes! They spent a good chunk of their life developing this skill, they are a human, and they deserve to get their basic needs met and not die of hunger or exposure. Now, this is a strange case, because 99.99% of artists don’t do this. Most develop a unique style and add life experience to their art to generate something new.

        A software engineer can profit off their AI model by selling it. If they make money by generating images, then they are making money off of hard-working artists who should be paid for their work. That isn’t great. The outcome of allowing this is that art will no longer be something you can do to make a living. This is bad for society.

        It also should be noted that a software engineer making an AI model from scratch is 0.01% of AI use. Most people, lay people who have spent very little time developing art or software engineering skills, can easily use an existing model to create “art.” The result is that many talented artists who could bring new and interesting ideas to the world are being outcompeted by one guy with a web browser producing sub-par, sloppy work.

  • Treczoks@lemmy.world · 3 months ago

    It would be nice if the AI industry had one big positive effect by finally reining in the overbearing copyright laws.

  • CluckN@lemmy.world · 3 months ago

    “They should pay their sources!”

    Source is 600 GB of raw copied website data mixed in a giant witches’ cauldron

  • Match!!@pawb.social · 3 months ago

    If they’re using Creative Commons licenses (or other sharing licenses), then it’s fine! But the model is then also bound by the same licenses, because that’s how licenses work.

  • Hamartiogonic@sopuli.xyz · 3 months ago

    Here’s an analogy that can be used to test this idea.

    Let’s say I want to write a book but I totally suck as an author and I have no idea how to write a good one. To get some guidelines and inspiration, I go to the library and read a bunch of books. Then, I’ll take those ideas and smash them together to produce a mediocre book that anyone would refuse to publish. Anyway, I could also buy those books, but the end result would still be the same, except that it would cost me a lot more. Either way, this sort of learning and writing procedure is entirely legal, and people have been doing this for ages. Even if my book looks and feels a lot like LOTR, it probably won’t be that easy to sue me unless I copy large parts of it word for word. Blatant plagiarism might result in a lawsuit, but I guess this isn’t what the AI training data debate is all about, now is it?

    However, if I pirated those books, that could result in some trouble. Someone would need to read my miserable book, find a suspicious passage, check my personal bookshelf and everything I have ever borrowed, etc. That way, it might be possible to prove that I could not have come up with a specific line of text except by pirating some book. If an AI is trained on pirated data, that’s obviously something worth debating.

    • wewbull@feddit.uk · 3 months ago

      You are equating training an LLM with a person learning, but an LLM is not a person. It is not given the same rights and privileges under the law. At best it is a computer program, and you can certainly infringe copyright by writing a program.

      • Specal@lemmy.world · 3 months ago

        It’s not “at best it’s a computer program”. It is a computer program: a program of probabilities that its response should be X. The training data could be stolen, but its output isn’t.
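
        A toy sketch of “a program of probabilities”: a bigram model that counts which word follows which in some text, then samples the next word from those counts (the corpus is a placeholder; real LLMs do something vastly larger but conceptually related):

        ```python
        import random
        from collections import Counter, defaultdict

        # Placeholder corpus; a real LLM trains on billions of documents.
        corpus = "the cat sat on the mat and the cat ran".split()

        # Count, for each word, which words tend to follow it.
        following = defaultdict(Counter)
        for current, nxt in zip(corpus, corpus[1:]):
            following[current][nxt] += 1

        def next_word(word: str) -> str:
            """Sample the next word in proportion to how often it followed `word`."""
            options = following[word]
            return random.choices(list(options), weights=list(options.values()))[0]

        print(next_word("the"))  # 'cat' with probability 2/3, 'mat' with 1/3
        ```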

      • Hamartiogonic@sopuli.xyz · 3 months ago

        An LLM is not a legal entity, nor should it be. However, similar things happen in a human brain and in the network of an LLM, so the same laws could be applicable to some extent. Where do we draw the line? That’s a legal/political issue we haven’t figured out yet, but following these developments is going to be interesting.

        • wewbull@feddit.uk · 3 months ago

          Agreed it hasn’t been settled legally yet.

          I also agree that an LLM isn’t and shouldn’t be a legal entity. Therefore an LLM is something that can be owned and sold, and a profit made from it.

          It is my opinion that the original author should receive compensation when their work is used to make a profit, i.e. to make the LLM. I’d also say that the original intent of copyright law was to give authors protection from others making money from their work without permission.

          Maybe current copyright law isn’t up to the job here, but benefiting off the back of others’ creative works is not socially acceptable, in my opinion.

          • Hamartiogonic@sopuli.xyz · 3 months ago

            I think of an LLM as a tool, just like a drill or a hammer. If you buy or rent these tools, you pay the tool company. If you use the tools to build something, your client pays you for that work.

            Similarly, OpenAI can charge me for extensive use of ChatGPT. I can use that tool to write a book, but it’s not 100% AI work. I need to spend several hours prompt crafting, structuring, reading and editing the book in order to make something acceptable. I don’t really act as a writer in this workflow, but more like an editor or a publisher. When I publish and sell my book, I’m entitled to some compensation for the time and effort that I put into it. Does that sound fair to you?

            • wewbull@feddit.uk · 3 months ago

              Yes of course you are.

              …but do you agree that if you use an AI in that way, you are benefiting from another author’s work? You may even, unknowingly, violate the copyright of the original author. You can’t be held liable for that infringement because you did it unwittingly. OpenAI, or whoever, must bear responsibility for that possible outcome through the use of their tool.

              • Hamartiogonic@sopuli.xyz · 3 months ago

                Yes, it’s true that countless authors contributed to the development of this LLM, but they were not compensated for it in any way. Doesn’t sound fair.

                Can we compare this to some other situation where the legal status has already been determined?

                • wewbull@feddit.uk · 3 months ago

                  I was thinking about money laundering when I wrote my response, but I’m not sure it’s a good analogy. It still feels to me like constructing a generative model is a form of “Copyright washing”.

                  Fact is, the law has yet to be written.

    • wildncrazyguy138@fedia.io · 3 months ago

      To expand on what you wrote, I’d equate the LLM output to my reading a book. From here on out until I become senile, the book is part of my memory. I may reference it; I may parrot some of its details that I can remember to a friend. My own conversational style and future works may even be impacted by it, perhaps even subconsciously.

      In other words, it’s not as if a book enters my brain and then is completely gone once I’m finished reading it.

      So I suppose, then, that the question is more so one of volume. How many works consumed are considered too many? At what point do we shift from the realm of research to the one of profiteering?

      There is a certain subset of people in the AI field who believe that our brains are biological forms of LLMs, and that, if we feed an electronic LLM enough data, it’ll essentially become sentient. That may be for better or worse for civilization, but I’m not one to get in the way of wonder building.

      • Hamartiogonic@sopuli.xyz · 3 months ago

        A neural network (the machine learning technology) aims to imitate the function of the neurons in a human brain. If you have lots of these neurons, all sorts of interesting phenomena begin to emerge, and consciousness might be one of them. If/when we get to that point, we’ll also have to address several legal and philosophical questions. It’s going to be a wild ride.
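
        For reference, a minimal sketch of the artificial “neuron” being described: a weighted sum of inputs passed through a nonlinearity (real biological neurons are far more complicated; this is the simplified abstraction machine learning uses):

        ```python
        import math

        def neuron(inputs, weights, bias):
            """One artificial neuron: weighted sum, then a sigmoid 'firing' function."""
            activation = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1 / (1 + math.exp(-activation))  # squashed into (0, 1)

        # Two inputs with hypothetical weights "learned" during training.
        print(neuron([0.5, 0.8], [1.2, -0.7], bias=0.1))
        # Networks stack millions of these; the interesting behavior emerges
        # from the combination, not from any single neuron.
        ```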

  • Waldowal@lemmy.world · 3 months ago

    Agree for these reasons:

    • Legally: It’s always been legal (in the US, at least) to relay the ideas in a copyrighted work. AI might need to get better at providing a bibliography, but that’s likely a courtesy more than a legal requirement.

    • Culturally: Access to knowledge should be free. It’s one of the reasons public libraries exist. If AI can help people gain knowledge more quickly and completely, it’s just the next evolution of the same idea.

    • Also culturally: Think about what’s out on the internet. Millions of recipes, no doubt copied from someone else, with pages of bullshit about how the author “grew up on a farm that produced mojitos.” For decades now, “content creators” have gotten paid for millions of low-quality, bullshit, click-bait articles. There’s that. Most of the real knowledge on the internet is freely accessible technical and product documentation, forum posts like StackOverflow, and scientific studies. All of it is stuff the authors would probably love to have out there and freely accessible. Sure, some accidental copyright infringement might happen here and there, but I think it’s a tiny problem in relation to the value that AI might bring society.

  • LarmyOfLone@lemm.ee · 3 months ago

    Huh I read your headline in a sarcastic tone so was totally ready to argue with you. But I agree. Not sure if it’s an unpopular opinion though.

  • Xeroxchasechase@lemmy.world · 3 months ago

    As long as it’s licensed as Creative Commons of some sort. Copyrighted materials are copyrighted and shouldn’t be used without consent; this protects individuals too, not only corporations.

    Edit: Your argument about probability and parameter size is inapplicable in my mind. The same can be said about JPEG lossy compression.

    • Zagorath@aussie.zone · 3 months ago

      Creative Commons would not actually help here. Even the most permissive licence, CC-BY, requires attribution. If using material as training material requires a copyright licence (which is certainly not a settled question of law), CC would likely be just the same as all rights reserved.

      (There’s also CC-0, but that’s basically public domain, or as near to it as an artist is legally allowed to do in their locale. So it’s basically not a Creative Commons licence.)

    • wildncrazyguy138@fedia.io · 3 months ago

      Could the copyrighted material consumed potentially fall under fair use? There are provisions for research purposes.

    • 31337@sh.itjust.works (OP) · 3 months ago

      Incidentally, I read this a while ago, because I was training a classifier on mostly Creative Commons licensed works: https://creativecommons.org/2023/08/18/understanding-cc-licenses-and-generative-ai/

      … we believe there are strong arguments that, in most cases, using copyrighted works to train generative AI models would be fair use in the United States, and such training can be protected by the text and data mining exception in the EU. However, whether these limitations apply may depend on the particular use case.

      • Xeroxchasechase@lemmy.world · 3 months ago

        Maybe there should be a distinction between an individual doing it for education and research and a corporation doing it for commercial use. As a user, it’s fun and useful to generate whatever mix of text or images I want from a model that was trained on everything, but a user doesn’t see the exploitation by the corporation that handed them the tool.