• rho50@lemmy.nz · 6 months ago

    I know of at least one other case in my social network where GPT-4 identified a gas bubble in someone’s large bowel as “likely to be an aggressive malignancy,” leading said person to fully expect they’d be dead by July, when in fact they were perfectly healthy.

    These things are not ready for primetime, and certainly not capable of doing the stuff that most people think they are.

    The misinformation is causing real harm.

    • B0rax@feddit.de · 6 months ago

      To be honest, it is not made to diagnose medical scans, and it is not supposed to be used that way. There are separate AIs trained specifically for that purpose, and they are usually not public.

  • anlumo@feddit.de · 6 months ago

    Using a Large Language Model for image detection is peak human intelligence.

      • PerogiBoi@lemmy.ca · 6 months ago

      I had to prepare a high-level report for a senior manager last week regarding a project my team was working on.

      We had to make 5 professional recommendations based on the data we reported.

      We gave the 5 recommendations with plenty of evidence and references explaining why we came to each decision.

      The top question we got was: “What are ChatGPT’s recommendations?”

      Back to the drawing board this week, because apparently LLMs are more credible than teams of professionals with years of experience and bachelor’s- and master’s-level education in the subject matter.

        • rho50@lemmy.nz · 6 months ago

        It is quite terrifying that people think these unoriginal and inaccurate regurgitators of internet knowledge, with no concept of or heuristic for correctness… are somehow an authority on anything.

          • Flax@feddit.uk · 6 months ago

          The only thing you need to do to realise how bad they are is to play chess against one. Compared with a chess engine from 30 years ago, it really shows.