Whenever AI is mentioned, lots of people in the Linux space immediately react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that “some people think AI is problematic” or something along those lines whenever an AI topic comes up. I get that AI has many problems, but at the same time its potential is immense, especially as an assistant on personal computers (just look at what “Apple Intelligence” seems to be capable of). Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete. Using an AI-less desktop may be akin to hand-copying books after the printing press revolution. If you can think of specific problems, it is better to point them out and try to think of solutions, not reject the technology as a whole.

TLDR: A lot of Luddite sentiment around AI in the Linux community.

  • FQQD@lemmy.ohaa.xyz · 5 months ago

    I don’t think the community is generally against AI; there are plenty of FOSS projects. They just don’t like cash grabs, enshittification, and sending personal data to someone else’s computer.

    • anamethatisnt@lemmy.world · 5 months ago

      sending personal data to someone else’s computer.

      I think this is spot on. LLMs are exciting, but I’m not gonna give the huge corporations my data, nor anyone else for that matter.

    • FatCat@lemmy.world (OP) · 5 months ago

      I don’t see anyone calling for cash grabs or privacy-destroying features to be added to Gnome or other projects, so I don’t see why that would be an issue. 🙂

      On-device FOSS models to help you with various tasks.

      • technocrit@lemmy.dbzer0.com · 5 months ago

        On-device FOSS models to help you with various tasks.

        Thankfully I really really don’t need an “AI” to use my desktop. I don’t want that kind of BS bloat either. But go ahead and install whatever you want on your machine.

        • umami_wasabi@lemmy.ml · 5 months ago

          It is quite a bloat. Llama 3 8B is 4.7GB by itself, not counting all the dependencies and drivers. This can easily take 10+ GB of drive space. My Ollama setup already takes about 30GB. For a single application (games like COD that take up 300GB aside), this is huge, almost the size of a clean OS install.
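          Those gigabytes follow almost directly from the model's parameter count. A back-of-envelope sketch (the bits-per-weight figure is an approximation for a typical 4-bit GGUF quantization, including overhead):

```python
# Rough disk footprint of a quantized model: parameters × bits per weight ÷ 8.
# (billions of params × 1e9 × bits ÷ 8 bytes ÷ 1e9 simplifies to params × bits / 8 in GB)
def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate on-disk size in GB, ignoring tokenizer and metadata files."""
    return params_billion * bits_per_weight / 8

# Llama 3 8B at ~4.7 effective bits/weight lands right around the 4.7 GB
# download mentioned above.
print(round(model_size_gb(8, 4.7), 1))  # → 4.7
```

          Which is also why quantization matters so much on consumer hardware: halving the bits per weight roughly halves both the download and the RAM needed to load the model.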

      • wewbull@feddit.uk · 5 months ago

        You are, if you’re calling for Apple like features.

        You might argue that “private cloud” is privacy preserving, but you can only implement that with the cash of Apple. I would also argue that anything leaving my machine, to a bunch of servers I don’t control, without my knowledge is NOT preserving my privacy.

        • FatCat@lemmy.world (OP) · 5 months ago

          You might argue that “private cloud” is privacy preserving

          I don’t know since when “on device” means send it to a server. Come up with more straw men I didn’t mention for you to defeat.

    • FatCat@lemmy.world (OP) · 5 months ago

      Thanks for the history lesson. These days it is used to refer to those opposed to industrialisation, automation, computerisation, new technologies, or even progress in general.

      • Zeoic@lemmy.world · 5 months ago

        These days, it is often misused by ignorant people because it sounds derogatory.

        FTFY

  • DudeImMacGyver@sh.itjust.works · 5 months ago

    Reminder that we don’t even have AI yet, just machine learning models, which are not the same thing despite wide misuse of the term AI.

        • NoiseColor@startrek.website · 5 months ago

          Well, not at all. What a word means is not defined by what you might think. When the majority starts to use a word for something and that sticks, it can be adopted. That happens all the time; I have read articles about it many times, even for our current predicament. Language is evolving. Meanings change. And yes, AI today includes what is technically machine learning. Sorry friend, that’s how it works. Sure, you can be the grumpy drunk at a bar complaining that this is not strictly AI by some definition while the rest of the world rolls their eyes and proceeds to more meaningful debates.

          • DudeImMacGyver@sh.itjust.works · 5 months ago

            Words have meaning and, sure, they can be abused and change meaning over time, but let’s be real here: AI is a hype term with no basis in reality. We do not have AI, and we aren’t even all that close. You can make all the ad hominem comments you want, but at the end of the day the terminology comes from ignorant figureheads hyping shit up for profit (at great environmental cost too; LLMs aka “AI” take up a lot of power while yielding questionable results).

            Kinda sounds like you bought into the hype, friend.

            • NoiseColor@startrek.website · 5 months ago

              You missed the point again, oh dear! Let me try again in simpler terms: you yourself don’t define words; how they are used in public does. So if the world calls it AI, then the word will mean what everybody means when they use it.

              This is how words come to be, evolve, and end up in the dictionary. Nobody cares what you think. AI today includes ML. Get over it.

              Nice try with the deflection attempts, but I really don’t care about them. I’m only here to teach you where words come from and to tell you the article is written about you.

              Also that I’m out of time for this. Bye.

    • FatCat@lemmy.world (OP) · 5 months ago

      That’s just nitpicking. Everyone here knows what we mean by AI. Yes, it refers to LLMs.

      Reminds me of Richard Stallman always interjecting to say “actually it’s GNU/Linux, or as I like to say, GNU plus Linux”…

      Well, no, Mr. Stallman, it’s actually GNU + Linux + Wayland + systemd + Chromium and whatever other software you have installed. Are you happy now??

      • ElectricMachman@lemmy.sdf.org · 5 months ago

        So when we actually do have AI, what are we supposed to call it? The current use of the term “AI” is too ambiguous to be of any use.

        • HumanPenguin@feddit.uk · 5 months ago

          Honestly, what we have now is AI. As in, it is not intelligent; it just tries to mimic intelligence.

          Digital Intelligence, if we ever achieve it, would be a more accurate name.

          • MudMan@fedia.io · 5 months ago

            Look, the naming ship has sailed and sunk somewhere in the middle of the ocean. I think it’s time to accept that “AI” just means “generative model” and what we would have called “AI” is now more narrowly “AGI”.

            People call videogame enemies “AI”, too, and it’s not the end of the world, it’s just imprecise.

      • Inevitable Waffles [Ohio]@midwest.social · 5 months ago

        As someone who frequently interacts with the tech illiterate, no they don’t. This sudden rush to put weighted text hallucination tables into everything isn’t that helpful. The hype feels like self-driving cars or 3D TVs, for those of us old enough to remember that. The potential for damage is much higher than either of those two preceding fads, and cars actually killed people. I think many of us are expressing a healthy level of skepticism toward the people who need to sell us the next big thing, and it is absolutely warranted.

        • FatCat@lemmy.world (OP) · 5 months ago

          The potential for damage is much higher

          Doubt it. Maybe Microsoft can fuck it up somehow but the tech is here to stay and will do massive good.

          • Inevitable Waffles [Ohio]@midwest.social · 5 months ago

            You can doubt all you like, but we keep seeing the training data leaking out with passwords and personal information. This problem won’t be solved by the people who created it, since they don’t care, and fundamentally the technology will always show that lack of care. FOSS ones may do better in this regard, but they are still datasets without context. That’s the crux of the issue. The program or LLM has no context for what it says. That’s why you get these nonsensical responses telling people that killing themselves is a valid treatment for a toothache. Intelligence is understanding. The “AI” or LLM or, as I like to call them, glorified predictive text bars, doesn’t understand the words it is stringing together, and most people don’t know that due to flowery marketing language and hype. The threat is real.

            • Auli@lemmy.ca · 5 months ago

              Not to mention the hallucinations. What a great marketing term for “it’s fucking wrong”.

    • knatschus@discuss.tchncs.de · 5 months ago

      Have you mentioned that in gaming forums as well when they talked about AI?

      AI is a broad term and can mean many different things; it does not need to mean ‘true’ AI.

  • 737@lemmy.blahaj.zone · 5 months ago

    I’ve yet to see a need for “AI integration ✨” in the desktop experience. Copilot, LLM chat bots, TTS, OCR, and translation using machine learning are all interesting, but I don’t think OS integration is beneficial.

      • 737@lemmy.blahaj.zone · 5 months ago

        not every high tech product or idea makes it, you don’t see a lot of netbooks or wifi connected kitchen appliances these days either; having the ability to make tiny devices or connecting every single device is not justification enough to actually do it. i view ai integration similarly: having an llm in some side bar to change the screen brightness, find some time or switch the keyboard layout isn’t really useful. being able to select text in an image viewer or searching through audio and video for spoken words for example would be a useful application for machine learning in the DE, that isn’t really what’s advertised as “AI” though.

        • FatCat@lemmy.world (OP) · 5 months ago

          Changing the brightness or WiFi settings can be very useful for many people. Not everyone is a Linux nerd and knows all the ins and outs of basic computing.

          • 737@lemmy.blahaj.zone · 5 months ago

            maybe, but these people wouldn’t own a pc with a dedicated gpu or neural network accelerator.

  • juliebean@lemm.ee · 5 months ago

    just a historical factoid that a lot of people don’t realize: the luddites weren’t anti technology without reason. they were apprehensive about new technology that threatened their livelihoods, technology that threatened them with starvation and destitution in the pursuit of profit. i think the comparison with opposition to AI is pretty apt, in many cases, honestly.

  • Antiochus@lemmy.one · 5 months ago

    You’re getting a lot of flack in these comments, but you are absolutely right. All the concerns people have raised about “AI” and the recent wave of machine learning tech are (mostly) valid, but that doesn’t mean AI isn’t incredibly effective in certain use cases. Rather than hating on the technology or ignoring it, the FOSS community should try to find ways of implementing AI that mitigate the problems, while continuing to educate users about the limitations of LLMs, etc.

    • crispy_kilt@feddit.de · 5 months ago

      It’s spelled flak, not flack. It’s from the German word Flugabwehrkanone which literally means aerial defense cannon.

  • DigDoug@lemmy.world · 5 months ago

    …this looks like it was written by a supervisor who has no idea what AI actually is, but desperately wants it shoehorned into the next project because it’s the latest buzzword.

  • lemmyvore@feddit.nl · 5 months ago

    You can’t do machine learning without tons of data and processing power.

    Commercial “AI” has been built on fucking over everything that moves, on both counts. They suck power at alarming rates, especially given the state of the climate, and they blatantly ignore copyright and privacy.

    FOSS tends to be based on a philosophy that’s strongly opposed to at least some of these methods. To start with, FOSS is built around respecting copyright, and Microsoft is currently stealing GitHub code, anonymizing it, and offering it under their Copilot product, while explicitly promising companies who buy Copilot that they will insulate them from any legal fallout.

    So yeah, some people in the “Linux space” are a bit annoyed about these things, to put it mildly.

    Edit: but, to address your concerns, there’s nothing to be gained by rushing head-first into new technology. FOSS stands to gain nothing from early adoption. FOSS is a cultural movement not a commercial entity. When and if the technology will be practical and widely available it will be incorporated into FOSS. If it won’t be practical or will be proprietary, it won’t. There’s nothing personal about that.

  • WallEx@feddit.de · 5 months ago

    A lot of the mentions of AI from companies are absolute marketing bullshit. And if you can’t see that, you don’t want to.

  • zerakith@lemmy.ml · 5 months ago

    I won’t rehash the arguments around “AI” that others are best placed to make.

    My main issue is that AI as a term is basically a marketing one, used to convince people that these tools do something they don’t, and it’s causing real harm. It’s redirecting resources and attention onto a very narrow subset of tools, replacing other less intensive tools. These tools have significant impacts (during an existential crisis around our use and consumption of energy). There are some really good targeted uses of machine learning techniques, but they are being drowned out by a hype train that is determined to make the general public think that we have, or are near, Data from Star Trek.

    Additionally, as others have said, the current state of “AI” has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow, and co-opt data that isn’t theirs to build something that contains that data but is their copyright. Some of this data is intensely personal and sensitive, and the original intent behind sharing it was not to train a model which may, in certain circumstances, spit that data out verbatim.

    Lastly, since you use the term Luddite, it’s worth actually engaging with what that movement was about. Whilst it’s pitched now as a generic anti-technology backlash, in fact it was a movement of people who saw what the priorities and choices embodied in the new technology meant for them: the people who didn’t own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate needs, can we afford to use it now, and if so, for what purposes?

    Personally, I can’t wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.

    • FatCat@lemmy.world (OP) · 5 months ago

      Right, another aspect of the Luddite movement is that they lost. They failed to stop the spread of industrialization and machinery in factories.

      Screaming at a train moving at 200 km/h, hoping it will stop.

      • davel@lemmy.ml · 5 months ago

        You misunderstand the Luddite movement. They weren’t anti-technology, they were anti-capitalist exploitation.

        The 1810s: The Luddites act against destitution

        It is fashionable to stigmatise the Luddites as mindless blockers of progress. But they were motivated by an innate sense of self-preservation, rather than a fear of change. The prospect of poverty and hunger spurred them on. Their aim was to make an employer (or set of employers) come to terms in a situation where unions were illegal.

        • FatCat@lemmy.world (OP) · 5 months ago

          They probably wouldn’t be such a laughing stock if they were successful.

        • FatCat@lemmy.world (OP) · 5 months ago

          Work on useful alternatives to big corpo crapware = lick the boot?

          Mkay…

          • kronisk @lemmy.world · 5 months ago

            It was more in response to your comments. I don’t think anyone has a problem with useful FOSS alternatives per se.

  • Killing_Spark@feddit.de · 5 months ago

    I think the biggest problem is that AI, for now, is not an exact tool that gets everything right, because that’s just not what it is built to do. Which goes against much of the philosophy of most tools you’d find on your Linux PC.

    Secondly: many people who choose Linux or other FOSS operating systems do so, at least partially, to stay in control over their system, which includes knowing why stuff happens and being able to fix it. Again, that is just not what AI can currently deliver, and it’s unlikely it ever will.

    So I see why people just choose to ignore the whole thing altogether.

    • FatCat@lemmy.world (OP) · 5 months ago

      Good point about the imprecision. On the other hand, most Linux desktop users are normies; think Steam Deck and so on.

      Some of the most popular Linux desktops are built for ordinary people with the KISS principle in mind, not Arch-using tinkerers.

      • Killing_Spark@feddit.de · 5 months ago

        I’m not saying nobody should work on this. There is obviously demand, or at least big tech is assuming demand. I’m just saying it’s not surprising to me that a lot of FOSS developers don’t really care.

      • hydroptic@sopuli.xyz · 5 months ago

        On the other hand, most Linux desktop users are normies; think Steam Deck and so on.

        Jesus fuck what a statement. Your parents probably regret having you.

  • electric_nan@lemmy.ml · 5 months ago

    There are already a lot of open models and tools out there. I totally disagree that Linux distros or DEs should be looking to bake in AI features. People can run an LLM on their computer just like they run any other application.
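    To illustrate that point: with a local runner like Ollama, getting an LLM going is already just a couple of commands, no desktop integration needed. (This sketch assumes the Ollama CLI is installed; the model name and download size are approximate.)

```shell
# One-time download of a quantized model (roughly 4.7 GB for llama3)
ollama pull llama3

# Chat with it in the terminal, entirely on-device
ollama run llama3 "Explain what a symlink is"

# See which models are installed and how much disk they use
ollama list
```

    The same local server also exposes an HTTP API on localhost, so any application that wants model access can talk to it without the DE itself shipping anything.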

  • UnfortunateShort@lemmy.world · 5 months ago

    Is there no Electron wrapper around ChatGPT yet? Jeez, we better hurry. Imagine having to use your browser like… for pretty much everything else.

  • nyan@sh.itjust.works · 5 months ago

    Gnome and other desktops need to start working on integrating FOSS

    In addition to everything everyone else has already said, why does this have anything to do with desktop environments at all? Remember, most open-source software comes from one or two individual programmers scratching a personal itch—not all of it is part of your DE, nor should it be. If someone writes an open-source LLM-driven program that does something useful to a significant segment of the Linux community, it will get packaged by at least some distros, accrete various front-ends in different toolkits, and so on.

    However, I don’t think that day is coming soon. Most of the things “Apple Intelligence” seems to be intended to fuel are either useless or downright offputting to me, and I doubt I’m the only one—for instance, I don’t talk to my computer unless I’m cussing it out, and I’d rather it not understand that. My guess is that the first desktop-directed offering we see in Linux is going to be an image generator frontend, which I don’t need but can see use cases for even if usage of the generated images is restricted (see below).

    Anyway, if this is your particular itch, you can scratch it—by paying someone to write the code for you (or starting a crowdfunding campaign for same), if you don’t know how to do it yourself. If this isn’t worth money or time to you, why should it be to anyone else? Linux isn’t in competition with the proprietary OSs in the way you seem to think.

    As for why LLMs are so heavily disliked in the open-source community? There are three reasons:

    1. The fact that they give inaccurate responses, which can be hilarious, dangerous, or tedious depending on the question asked, but a lot of nontechnical people, including management at companies trying to incorporate “AI” into their products, don’t realize the answers can be dangerously inaccurate.
    2. Disputes over the legality and morality of using scraped data in training sets.
    3. Disputes over who owns the copyright of LLM-generated code (and other materials, but especially code).

    Item 1 can theoretically be solved by bigger and better AI models, but 2 and 3 can’t be. They have to be decided by the courts, and at an international level, too. We might even be talking treaty negotiations. I’d be surprised if that takes less than ten years. In the meanwhile, for instance, it’s very, very dangerous for any open-source project to accept a code patch written with the aid of an LLM—depending on the conclusion the courts come to, it might have to be torn out down the line, along with everything built on top of it. The inability to use LLM output for open source or commercial purposes without taking a big legal risk kneecaps the value of the applications. Unlike Apple or Microsoft, the Linux community can’t bribe enough judges to make the problems disappear.