We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • merc@sh.itjust.works · 11 months ago

    “Humans were stupid and taught a ChatBot how to cheat and lie.”

    No, “cheating” and “lying” imply agency. LLMs are just “spicy autocomplete”. They have no agency. They can’t distinguish between lies and the truth. They can’t “cheat” because they don’t understand rules. It’s just that sometimes the auto-generated text happens to be true, and other times it happens to be false.

    • gandalf_der_12te@feddit.de · 11 months ago

      I disagree. This is not a meaningful talking point; it doesn’t help anyone in practice. Sure, it clears up legal questions of responsibility (and I’m not even sure about that one in the future), but apart from that, making an artificial distinction between a human and something that looks and acts like a human provides no real-world value.

      • merc@sh.itjust.works · 11 months ago

        Sure it does, because assigning agency to LLMs is like saying “the dice are lucky” or “this coin I’m flipping hates me”. LLMs are massively complex and very good at simulating human-generated text. But there’s no agency there. As soon as people start thinking there’s agency, they start thinking that LLMs are “making decisions” or “being deceptive”. But it’s just spicy autocomplete. We know exactly how it works, and there’s no thinking involved. There’s no planning. There’s no consciousness. There’s just spitting out the next word based on an insanely deep training data set.
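
        To make the “spicy autocomplete” point concrete, here is a minimal, hypothetical sketch (plain Python, with a toy bigram table standing in for a real model’s weights; the corpus and names are made up for illustration) of what “spitting out the next word” amounts to: sample the next token from a probability distribution conditioned on what came before.

        ```python
        import random
        from collections import Counter, defaultdict

        # Toy "training data": the sampler only ever sees text, never facts or rules.
        corpus = (
            "the model reports the trade the model hides the reason "
            "the manager reads the report"
        ).split()

        # Count bigrams: how often each word follows each other word.
        bigrams = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            bigrams[prev][nxt] += 1

        def next_word(prev: str) -> str:
            """Sample the next word in proportion to how often it followed `prev`."""
            counts = bigrams.get(prev)
            if not counts:  # dead end: this word never had a successor in the corpus
                return random.choice(corpus)
            words = list(counts)
            weights = list(counts.values())
            return random.choices(words, weights=weights, k=1)[0]

        # Autocomplete a short continuation: no truth, no intent, no plan,
        # just repeated sampling from learned co-occurrence statistics.
        word = "the"
        generated = [word]
        for _ in range(6):
            word = next_word(word)
            generated.append(word)
        print(" ".join(generated))
        ```

        A real LLM swaps the bigram table for a neural network conditioned on a long context, but the generation loop has the same shape: score candidate next tokens, pick one, repeat. Whether the resulting sentence happens to be true or false is incidental to the mechanism.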

        • gandalf_der_12te@feddit.de · 11 months ago

          I believe that at a certain point, “agency” is an emergent feature. That means that, while each individual step is well understood probabilistically, the total picture is still more than the sum of those steps.

          It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.

          • Skates@feddit.nl · 11 months ago (edited)

            If I were to send you a video of a duck quacking, would you abandon going to the supermarket in the hope that your computer/phone/whatever you watch it on will now be able to lay eggs?

            Listen. It was made to look like a duck. It was made to quack like a duck. It is not a duck. It is a painting of a duck, with voice features. It won’t fly, it won’t lay eggs, it won’t feel pain, it won’t shit all over the floors. It’s not a damn duck, and pretending it is one just because it looks like a duck and quacks like a duck is like wanting to marry a fleshlight because it’s really good at sex and never disagrees with you. Sure, go ahead and do it, but don’t goddamn expect it to also give birth to your children and take them to school in the mornings; that’s not its purpose.

            Just wait for the iteration of duck that is actually meant to and capable of doing these things. It’ll be pretty cool. But this one ain’t it.

            • gandalf_der_12te@feddit.de · 11 months ago

              Edgy comment here but:

              In another thread we were discussing AI-generated CSAM. Thread:

              https://feddit.de/post/6315841

              You would probably agree, then, that such material is not problematic: even if it looks like CSAM and quacks like CSAM, it is not CSAM, so we don’t have to take it seriously or regulate it the way we regulate actual CSAM. That’s where your logic leads, no?