I was trying to do a memory test to see how far back 3.5 could recall information from previous prompts, but it really doesn’t seem to like making pseudorandom seeds. 😆

    • millie@beehaw.org (OP) · 9 months ago

      Oooh, so maybe it’s the term ‘non-repeating’ that’s actually tripping it up?

      • Turun@feddit.de · 9 months ago

        No, the request is fine. But once it fucks up and starts generating a long string of a single number, the output gets censored, because that pattern resembles a recent training-data extraction attack.

  • Glide@lemmy.ca · 9 months ago (edited)

    I regularly use ChatGPT to generate questions for junior high worksheets. You would be surprised how easily it fucks up “generate 20 multiple-choice and 10 short-answer questions”. Most frequently, somewhere around question 12 or 13 of the multiple choice, it gives up and moves on. When I point out the mistake and ask it to finish generating the multiple-choice questions, it continues to find new and unique ways to fuck up the remaining ones.

    I would say it makes simple counting and recall errors in about 60% of my attempts to use it.

    • DdCno1@beehaw.org · 9 months ago

      Consider keeping school the one place in a child’s life where they aren’t bombarded with AI-generated content.

      • Glide@lemmy.ca · 9 months ago

        I use it as a brainstorming tool. I haven’t had a single question make it as-is to a student’s worksheet. If the tool can’t even count to 20 successfully, I’m not sure how anyone could trust it to generate meaningful questions for an ELA program.