We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.
“cheat”, “lie”, “cover up”… Assigning human behavior to Stochastic Parrots again, aren’t we, Jimmy?
Ethical theories and the concept of free will depend on agency and consciousness. Things that, as you point out, LLMs don’t have. Maybe we’ve got it all twisted?
I’m not anthropomorphising ChatGPT to suggest that it’s like us, but rather that we are like it.
Edit: “stochastic parrot” is an incredibly clever phrase. Did you come up with that yourself or did the irony of repeating it escape you?
we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent
This is already total BS. If you know how such language models work, you’d never take their responses at face value, even though it’s tempting because they spout their BS so confidently. Always double-check their responses before applying their “knowledge” in the real world.
The question they try to answer is flawed; no wonder the result is just as bad.
Before anyone starts crying about my opposition to language models: I’m not opposed to LMs or ChatGPT. In fact, I’m running LMs locally because they help me be more productive, and I’m a paying ChatGPT customer.
Bullshit.
It should instead read:
“Humans were stupid and taught a ChatBot how to cheat and lie.”
“Humans were stupid and taught a ChatBot how to cheat and lie.”
No, “cheating” and “lying” imply agency. LLMs are just “spicy autocomplete”. They have no agency. They can’t distinguish between lies and the truth. They can’t “cheat” because they don’t understand rules. It’s just that sometimes the auto-generated text happens to be true, and other times it happens to be false.
I disagree. This is no meaningful talking point. It doesn’t help anyone in practice. Sure, it clears up legal questions of responsibility (and I’m not even sure about that one in the future), but apart from that, making an artificial distinction between a human and a looks-and-acts-like-human provides no real-world value.
Sure it does, because assigning agency to LLMs is like saying “the dice are lucky” or “this coin I’m flipping hates me”. LLMs are massively complex and very good at simulating human-generated text. But there’s no agency there. As soon as people start thinking there’s agency, they start thinking that LLMs are “making decisions” or “being deceptive”. But it’s just spicy autocomplete. We know exactly how it works, and there’s no thinking involved. There’s no planning. There’s no consciousness. There’s just spitting out the next word based on an insanely large training data set.
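For what it’s worth, that “spicy autocomplete” loop really is this simple. A minimal sketch, assuming the Hugging Face transformers library and the public GPT-2 checkpoint (my choice purely for illustration; the paper used GPT-4 behind an API, where the same kind of loop runs server-side):

```python
# Minimal autoregressive generation: the model only ever scores
# candidate next tokens; we append one sampled token at a time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The stock trade was based on"  # hypothetical prompt for the demo
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # generate 20 tokens, one at a time
        logits = model(input_ids).logits[:, -1, :]  # scores for the next token only
        probs = torch.softmax(logits, dim=-1)       # turn scores into a distribution
        next_id = torch.multinomial(probs, 1)       # sample one token from it
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Every “decision” in that loop is one multinomial draw from a probability distribution over the vocabulary; whether that still deserves the word “agency” once it runs through billions of parameters is exactly what this thread is arguing about.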
I believe that at a certain point, “agency” is an emergent feature. That means that, while every single piece is well understood probability-wise, the total picture is still more than the sum of those pieces.
It makes sense to me to accept that if it looks like a duck, and it quacks like a duck, then it is a duck, for a lot (but not all) of important purposes.
If I were to send you a video of a duck quacking, would you abandon going to the supermarket in the hope that your computer/phone/whatever you watch it on will now be able to lay eggs?
Listen. It was made to look like a duck. It was made to quack like a duck. It is not a duck. It is a painting of a duck, with voice features. It won’t fly, it won’t lay eggs, it won’t feel pain, it won’t shit all over the floors. It’s not a damn duck, and pretending it is just because it looks like it and it quacks is like wanting to marry a fleshlight because it’s really good at sex and never disagrees with you. Sure, go ahead and do it - but don’t goddamn expect it to also give birth to your children and take them to school in the mornings; that’s not its purpose.
Just wait for the iteration of duck that is actually meant to and capable of doing these things. It’ll be pretty cool. But this one ain’t it.
Edgy comment here but:
In another thread we were discussing AI-generated CSAM. Thread:
https://feddit.de/post/6315841
You would probably agree, then, that such material is not problematic: even if it looks like CSAM and it quacks like CSAM, it is not CSAM, so we don’t have to take it seriously or regulate it the way we regulate actual CSAM. If I continue your logic, no?
Everybody forgot that GPT-2 was just a bullshitting machine. Version 3, to the surprise of its developers, turned out to be very useful to many people, even though they had just made a highly trained bullshitting machine.
Even if the data is incomplete or fragmented, humans can probably still draw value from it.