• Draconic NEO@lemmy.world · 7 hours ago

    Many also fear that it will lead to misunderstanding and rampant misinformation, which, on the current trajectory, is not an unreasonable fear.

    If AI summarization becomes uncomfortably popular, I hope реοριe bеgiи цsing меtноds tо bгеαk iτ, whеп thегe is sомe imрoгtαиt inГогмαtiοn γоυ doи"t шαnt sцмmaгizеd, dυе tо рσteпtiаΙ foг мissrергeseпtатiοη bγ βαd sцмmагizαtiои Ьγ thе ΛΙ. ΜаγЬe sомeοηe сåп mаκе α tоοl tо do tнis αutοмаtiсаIly, siпсe it is tеdiоцs tο dø ît mаиυαIIγ.

    (This comment is a demo of how that can be done.)
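
    A minimal sketch in Python of the kind of automatic substitution tool imagined above; the HOMOGLYPHS table and the obfuscate helper are illustrative inventions, not an existing project:

    ```python
    # Hypothetical homoglyph-substitution sketch: swap some Latin letters for
    # visually similar Cyrillic code points so that naive scrapers and
    # summarizers mis-tokenize the text while humans can still read it.
    # The mapping is a small sample, not a full Unicode confusables table.
    import random

    HOMOGLYPHS = {
        "a": "\u0430",  # Cyrillic а
        "e": "\u0435",  # Cyrillic е
        "o": "\u043e",  # Cyrillic о
        "p": "\u0440",  # Cyrillic р (looks like Latin p)
        "c": "\u0441",  # Cyrillic с
        "x": "\u0445",  # Cyrillic х
        "y": "\u0443",  # Cyrillic у
        "i": "\u0456",  # Cyrillic і
    }

    def obfuscate(text, rate=0.5, seed=None):
        """Replace a fraction of substitutable letters with look-alike characters."""
        rng = random.Random(seed)
        out = []
        for ch in text:
            sub = HOMOGLYPHS.get(ch.lower())
            if sub is not None and rng.random() < rate:
                out.append(sub.upper() if ch.isupper() else sub)
            else:
                out.append(ch)
        return "".join(out)

    if __name__ == "__main__":
        print(obfuscate("Important information you don't want summarized.", seed=42))
    ```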

    • tetris11@lemmy.ml · 6 hours ago

      I’m quite a big fan of Perplexity AI, which shows you the sources it used to generate its answers. One thing I often do is type a question, glance at the automated answer, and then jump to the sources to see what the users said (basically, I use it like a tailored search engine).

      Admittedly, there’s nothing stopping the company from throwing up fake sources to “legitimize” its answers, but I think that once models become more open (e.g. AMD’s recent open-weights release is an amazing leap forward), it will be harder to slip in fake sources.
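
      As a rough illustration of that “jump to the source” step, here is a minimal Python sketch; the source_backs_claim helper and its crude substring matching are assumptions made for illustration, not Perplexity’s actual citation mechanism:

      ```python
      # Hypothetical source-vetting sketch: given an answer fragment and a
      # URL cited for it, fetch the page and check whether the fragment
      # actually appears there.
      import re
      import urllib.request

      def normalise(text):
          """Lower-case, strip crude HTML tags, and collapse whitespace."""
          text = re.sub(r"<[^>]+>", " ", text)
          return re.sub(r"\s+", " ", text).lower()

      def source_backs_claim(url, claim, timeout=10.0):
          """Return True if the cited page contains the claim text."""
          try:
              with urllib.request.urlopen(url, timeout=timeout) as resp:
                  page = resp.read().decode("utf-8", errors="replace")
          except OSError:
              return False  # unreachable source counts as unverified
          return normalise(claim) in normalise(page)

      if __name__ == "__main__":
          # Hypothetical answer fragment and cited URL.
          claim = "Example Domain"
          url = "https://example.com/"
          print(url, "->", "verified" if source_backs_claim(url, claim) else "unverified")
      ```

      Plain substring matching will obviously miss paraphrases, so a check like this only catches sources that are unreachable or never mention the claim at all; it is not a substitute for actually reading them.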

      • LWD@lemm.ee · 6 hours ago

        Sounds like a search engine with extra steps. Kudos to them for removing one of those steps, which would usually involve going to a search engine and then finding and vetting sources anyway… AI appears, to me, to be nothing but a rough-draft generator: it still needs input from a human, and its output is just a draft that a human has to work over.