You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, it told a user to put glue on pizza to keep the cheese from sliding off (pssst… please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

  • TacticsConsort@yiffit.net · 3 months ago

    In the interest of transparency, I don’t know if this guy is telling the truth, but it feels very plausible.

  • RelativeArea0@lemmy.world · 3 months ago

    All I know is that when a publicly traded company slaps “AI” on its products, it’s most likely a money launderi… I mean, liquidation strat.

  • just another dev@lemmy.my-box.dev · 3 months ago

    In one instance, it told a user to put glue on pizza to keep the cheese from sliding off (pssst… please don’t do this).

    If you need these kinds of tips then, on behalf of the gene pool: please don’t procreate, and eat as much glue as you can.

  • givesomefucks@lemmy.world · 3 months ago

    They keep saying it’s impossible, when the truth is it’s just expensive.

    That’s why they won’t do it.

    You could train AI only on good sources (scientific literature, not social media) and then pay experts to talk with the AI for long periods of time, giving feedback directly to the AI; a rough sketch of the idea follows below.

    Essentially, if you want a smart AI, you need to send it to college, not drop it off at the mall unsupervised for 22 years and hope for the best when you pick it back up.
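
    A minimal sketch of that two-stage idea, in plain Python rather than any real training stack; the corpus, source labels, and expert ratings are all invented for illustration:

    ```python
    # Toy sketch of "curate good sources, then learn from expert feedback".
    # Everything here is hypothetical; a real pipeline would use an actual
    # training framework, not a list comprehension.

    VETTED_SOURCES = {"journal", "textbook"}  # assumed whitelist

    documents = [
        {"text": "peer-reviewed result", "source": "journal"},
        {"text": "random forum hot take", "source": "forum"},
        {"text": "chapter on chemistry", "source": "textbook"},
    ]

    # Stage 1: train only on curated sources.
    training_set = [d for d in documents if d["source"] in VETTED_SOURCES]

    # Stage 2: expert ratings become sample weights, so well-reviewed
    # answers count for more in the next training pass.
    expert_ratings = {"peer-reviewed result": 0.9, "chapter on chemistry": 0.7}

    weighted_set = [
        (d["text"], expert_ratings.get(d["text"], 0.1)) for d in training_set
    ]
    print(weighted_set)
    # [('peer-reviewed result', 0.9), ('chapter on chemistry', 0.7)]
    ```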

    • Excrubulent@slrpnk.net · 3 months ago

      No, he’s right that it’s unsolved. Humans aren’t great at reliably telling truth from fiction either. If you’ve ever been in a highly active comment section, you’ll notice certain “hallucinations” developing, usually because someone came along sounding confident and everyone just believed them.

      We don’t even know how to get actual people to do this reliably, so how would a fancy Markov chain do it? It can’t. I don’t think you solve this problem without AGI, and that’s something AI evangelists don’t want to think about, because then the conversation changes significantly. They’re in it for the hype bubble, not the ethical implications.
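
      For what it’s worth, the “fancy Markov chain” point is easy to demo. The toy bigram model below (plain Python, made-up corpus) produces fluent-looking text from nothing but co-occurrence counts; truth never enters into it:

      ```python
      import random
      from collections import defaultdict

      # Toy bigram "language model": it only knows which word tends to
      # follow which, so it can sound fluent while knowing nothing.
      corpus = (
          "the cheese slides off the pizza so add more cheese "
          "the glue holds the cheese on the pizza"
      ).split()

      # Count word -> next-word transitions.
      transitions = defaultdict(list)
      for current, following in zip(corpus, corpus[1:]):
          transitions[current].append(following)

      def generate(start: str, length: int = 10) -> str:
          """Chain together likely next words; no notion of truth involved."""
          word, output = start, [start]
          for _ in range(length):
              followers = transitions.get(word)
              if not followers:
                  break
              word = random.choice(followers)
              output.append(word)
          return " ".join(output)

      print(generate("the"))
      # e.g. "the glue holds the cheese on the pizza" -- grammatical-ish,
      # with no way to know that gluing pizza is a terrible idea.
      ```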

      • dustyData@lemmy.world · 3 months ago

        We do know. It’s called critical thinking education. This is why we send people to college. Of course there are highly educated morons, but we are hedging our bets. This is why dismantling or co-opting education is the first thing every single authoritarian does: it makes the masses easier to manipulate.

        • helenslunch@feddit.nl · 2 months ago

          It’s called critical thinking education.

          Yeah, I mean, we have that, and parents are constantly trying to dismantle it. No amount of “critical thinking education” can undo decades of brainwashing from parents and local culture.

    • helenslunch@feddit.nl · 2 months ago

      You could train AI only on good sources

      I mean yes, but also no. If you train it only on “good sources”, you miss out on a whole bunch of other valuable information.

      Just like scholar.google.com only has “good sources”, but it’s generally not going to have the information that 90% of your search queries are about.

    • RBG@discuss.tchncs.de · 3 months ago

      I’ll let you in on a secret: scientific literature has its fair share of bullshit too. The issue is, it’s much harder to figure out that it’s bullshit, unless it’s the most blatant horseshit you’ve ever seen scientifically. So while it absolutely makes sense to say “let’s just train these on good sources”, no source is purely that. Of course, it’s still better to do it that way than the way they do it now.

      • givesomefucks@lemmy.world · 3 months ago

        The issue is, it’s much harder to figure out that it’s bullshit.

        Google AI suggested you put glue on your pizza because a troll said it on Reddit once…

        Not all scientific literature is perfect, which is one of the many factors that will still make my plan expensive and time-consuming.

        You can’t throw a toddler in a library and expect them to come out knowing everything in all the books.

        AI needs that guided teaching too.

      • callouscomic@lemm.ee · 3 months ago

        “Most published journal articles are horseshit, so I guess we should be okay with this too.”

        • Turun@feddit.de · 3 months ago

          No, it’s simply contradicting the claim that fixing it is possible.

          We literally don’t know how to fix it. We can put on band-aids, like training on “better” data and fine-tuning the model to say “I don’t know” half the time; a sketch of that band-aid follows below. But the fundamental problem is simply not solved yet.
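
          That band-aid can be sketched in a few lines: abstain whenever the model’s best answer falls below a confidence threshold. The candidate distributions here are invented; a real system would read them off the model’s logits:

          ```python
          # Band-aid sketch: answer only when the model seems confident,
          # otherwise say "I don't know". The probabilities are made up.

          def answer(candidates: dict[str, float], threshold: float = 0.7) -> str:
              """Pick the most probable answer, or abstain below the threshold."""
              best, prob = max(candidates.items(), key=lambda kv: kv[1])
              return best if prob >= threshold else "I don't know"

          # Hallucination-prone case: no clear winner, so it abstains.
          print(answer({"glue the cheese": 0.40, "chill the pizza": 0.35}))
          # Confident case: prints the answer.
          print(answer({"water boils at 100 C at sea level": 0.92}))
          ```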

  • Mad_Punda.de@feddit.de · 3 months ago

    these “hallucinations” are an “inherent feature” of the large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

    Then what made you think it’s a good idea to include that in your product now?!

    • platypus_plumba@lemmy.world · 3 months ago

      There’s really nothing they can do; that’s just the current state of LLMs. People are insane: they can literally talk with something that isn’t human. We are literally the first humans in history to have a human-level conversation with something that isn’t human… and they don’t like it because it isn’t perfect four years after release.

      • TachyonTele@lemm.ee · 3 months ago

        Using fancy predictive text is not like talking to a human-level intelligence.

        You’ve bought into the fad.

        • platypus_plumba@lemmy.world · 3 months ago

          In terms of language, ChatGPT is more advanced than most humans. Have you spoken to the average person lately? By average I mean the worldwide average.

          It’s obviously not full human intelligence, but in terms of language it’s pretty mind-blowing.

  • MNByChoice@midwest.social · 3 months ago

    Good. Nothing will get us through the hype cycle faster than obvious public failure. Then we can get on with productive uses.

      • 9488fcea02a9@sh.itjust.works · 3 months ago

        I hate the AI hype right now, but to say the entire thing should fail is short-sighted.

        Imagine people saying the following: “The internet is just hype. I get too many spam emails. I hope the entire thing is a catastrophic failure.”

        Imagine if we had just shut down the entire internet because the dotcom bubble was full of scams and overhype…

          • KevonLooney@lemm.ee · 3 months ago

            ?

            Have you never used any of these tools? They’re excellent at doing simple things very fast. But it’s like a word processor in the 90s: it’s just a tool, not the fount of all knowledge.

            I guess younger people won’t know this, but word processor programs were very impressive when they first came out. They replaced typewriters; a page from a printer looked much more professional than even the best typewriter output. This lent an air of credibility to anything printed from a computer, because it was new and expensive.

            Think about that now. Do you automatically trust anything that’s just printed on a piece of paper? No, because that’s stupid. Anyone can just print whatever they want. LLMs are like that now. They can just say whatever they want. It’s up to you to make sure it’s true.

  • masquenox@lemmy.world · 3 months ago

    Since when has feeding us misinformation been a problem for capitalist parasites like Pichai?

    Misinformation is literally the first line of defense for them.

    • RubberDuck@lemmy.world · 3 months ago

      But this is not misinformation; it is uncontrolled nonsense. It directly devalues their offering of being able to provide you with an accurate answer to whatever you’re looking for. And if their overall offering becomes less valuable, so does their ability to steer you using their results.

      So while the incorrectness itself is not a problem for them (as you can see from his answer), the degradation of their ability to influence results is.

      • UnderpantsWeevil@lemmy.world · 3 months ago

        But this is not misinformation; it is uncontrolled nonsense.

        The strategy is to get you to keep feeding Google new prompts so it can feed you more ads.

        The AI response is just a gimmick. It gives Google something to tell investors when they ask, “What are you doing with AI right now? We hear that’s big.”

        But the real money is in getting unique user interactions for the purpose of serving up more ad content. In that model, bad answers are actually better than no answers, because they force the end user to keep refining the query and searching through the site backlog.

        • fishos@lemmy.world · 3 months ago

          If you don’t know the answer is bad (and confident idiots spouting off on Reddit and getting upvoted into infinity have proven that’s common), then you won’t refine your search. You’ll just accept the bad answer and move on.

          Your logic doesn’t follow. If someone doesn’t know the answer and is searching for it, they likely won’t be able to tell whether the answer is correct. We literally already have that problem with misinformation. And what sounds more confident than an AI?

  • retrospectology@lemmy.world · 3 months ago

    This is what happens every time society goes along with tech-bro hype. They just run directly into a wall. They are the embodiment of “didn’t stop to think if they should,” and it’s going to cause a lot of problems for humanity.

  • Lad@reddthat.com · 3 months ago

    I think we should stop calling things AI unless they actually have their own intelligence independent of human knowledge and training.

  • Paradox@lemdro.id · 3 months ago

    Replace the CEO with an AI. They’re both good at lying and telling people what they want to hear, until they get caught.