Running Lightcone Infrastructure, which runs LessWrong and Lighthaven.space. You can reach me at habryka@lesswrong.com.
(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)
They know they're not real on reflection, but not as they're doing it. It's more like fumbling and stuttering than strategic deception.
I will agree that making up quotes is literally dishonest but it's not purposeful deliberate deception.
But the problem is when I ask them "hey, can you find me the source for this quote" they usually double down and cite some made-up source, or they say "oh, upon reflection this quote is maybe not quite real, but the underlying thing is totally true" when like, no, the underlying thing is obviously not true in that case.
I agree this is the model lying, but it's a very rare behavior with the latest models.
I agree that literally commenting out tests is now rare, but other versions of this are still quite common. Semi-routinely when I give AIs tasks that are too hard will they eventually just do some other task that surface level looks like it got the task done, but clearly isn't doing the real thing (like leaving a function unimplemented, or avoiding doing some important fetch and using stub data). And it's clearly not the case that the AI doesn't know that it didn't do the task, because at that point it might have spent 5+ minutes and 100,000k+ tokens slamming its head against the wall trying to do it, and then at the end it just says "I have implemented the feature! You can see it here. It all works. Here is how I did it...", and clearly isn't drawing attention to how it clearly cut corners after slamming its head against the wall for 5+ minutes.
I mean, the models are still useful!
But especially when it comes to the task of "please go and find me quotes or excerpts from articles that show the thing that you are saying", the models really seem to do something that seems closer to "lying". This is a common task I ask the LLMs to perform because it helps me double-check what the models are saying.
And like, maybe you have a good model of what is going in with the model that isn't "lying", but I haven't heard a good explanation. It seems to me very similar to the experience of having a kind of low-integrity teenager just kind of make stuff up to justify whatever they said previously, and then when you pin them down, they flip and says "of course, you are totally right, I was wrong, here is another completely made up thing that actually shows the opposite is true".
And these things are definitely quite trajectory dependent. If you end up asking an open-ended question where the model confabulates some high-level take, and then you ask it to back that up, then it goes a lot worse than if you ask it for sources and quotes from the beginning.
Like, none of these seems very long-term scheming oriented, but it's also really obvious to me the model isn't trying that hard to do what I want.
It’s really difficult to get AIs to be dishonest or evil by prompting
I am very confused about this statement. My models lie to me every day. They make up quotes they very well know aren't real. They pretend that search results back up the story they are telling. They will happily lie to others. They comment out tests, and pretend they solve a problem when it's really obvious they haven't solved a problem.
I don't know how much this really has that much to do what these systems will do when they are superintelligent, but this sentence really doesn't feel anywhere remotely close to true.
"Audience Capture" is the standard term I've heard for this: https://en.wikipedia.org/wiki/Audience_capture
Ok... I mean, this is very obviously against our moderation guidelines. This is a warning. Do expect a pretty immediate ban if you write more like this.
Boaz Barak's course at Harvard has been doing this! See here for the associated posts: https://www.lesswrong.com/w/cs-2881r
Go to the library page, scroll down until you hit "community sequences" and then press the button there.
Yeah, agree we should integrate those weights somehow.
I think there will be a continuous spectrum of increasingly difficult to judge cases, and a continuous problem of getting better at filtering out bad cases, such that "if you can tell" isn't a coherent threshold.
I agree with this, but then I don't understand how this solution helps? Like, here we have a case where we can still tell that the environment is being reward hacked, and we tell the model it's fine. Tomorrow the model will encounter an environment where we can't tell that it's reward hacking, so the model will also think it's fine, and then we don't have a feedback loop anymore, and now we just have a model that is happily deceiving us.
Sure, here is an example of me trying to get it to extract quotes from a big PDF: https://chatgpt.com/share/6926a377-75ac-8006-b7d2-0960f5b656f1
It's not fully apparent from the transcript, but basically all the quotes from the PDF are fully made up. And emphasizing to please give me actual quotes produced just more confabulated quotes. And of course those quotes really look like they are getting me exactly what I want!