Problem-hiding and deception are naturally countered by corrigibility, and I expect a pseudo-corrigible agent to spend a bunch of cognitive effort hunting in their own mind for schemes and problems.
This part is, I claim, off. The kind of "pseudo-corrigibility" one would get from training a model to output corrigible-looking things would importantly not involve any selection pressure directly on how the thing cogitates internally. Training could select for the system to output things which sound like they result from searching its own mind, but it could not select for actually doing that search (separate from the outputs having that appearance). And that's the sort of thing which is relatively easy to fake, very hard to verify, and plausibly easier to fake than to actually do (depending on how alien the mind's internals actually are). Very likely, there will be at least some cases where the thing can get better scores from the humans by faking the hunt through its own mind for schemes and problems - much like how, e.g., today's models (and humans) will settle on an answer and then make up a totally-fake retrospective story about how they arrived at that answer.
I don't think this point is cruxy on its own, but I think it points toward some central important difference between however you're thinking about things and however I'm thinking about things. Like, there's this jump in reasoning from "behavior which humans label as corrigible-looking" to "even vaguely corrigible cognition internally", which is a really big jump; that jump is not something which would easily follow from the selection pressures involved, especially when combined with the philosophical problems.
Interesting. Yeah, I think I can feel the deeper crux between us. Let me see if I can name it. (Edit: Alas, I only succeeded in producing a longwinded dialogue. My guess is that this still doesn't capture the double-crux.)
Suppose I try to get students to learn algebra by incentivizing them to pass algebra tests. I ask them to solve 23x - 8 = -x for x, and if they say "1/3" then I give them a cookie or whatever. If this process succeeds at producing a student who can reliably solve similar equations, I might claim "I now have a student who knows algebra."
But someone else (you?) might say, "Just because you see the student answering some problems correctly does not mean they actually understand. Understanding happens in the internals, and you've put no selection pressure directly on what is happening in the student's mind. Perhaps they merely look like they understand algebra, but are actually faking it, such as by using their smart-glasses to cheat by asking Claude."
I might say "Fine. Let's watch them very closely and see if we can spot cheating devices."
My interlocutor might respond "Even if you witness the externals of the student and verify there's no cheating tools, that doesn't mean the student actually understands. Perhaps they have simply learned a few heuristics for simple equations, but would fail to generalize to harder questions. Or perhaps they have gotten very good at watching your face and doing a Clever Hans trick. Or perhaps they have understood the rules of symbolic equations, and have entirely missed the true understanding of algebra. You still haven't put any direct pressure on the student's mind."
I might answer "Okay, but we can test harder questions, remove me from the room, and even give them essay tests where they describe the principles of algebra in abstract. Isn't each time they pass one of these tests evidence that they actually do understand algebra? Can't we still just say 'I now have a student who knows algebra' at some point, even though there's some possibility remaining (a pain in my posterior, is what it is!) that we're wrong?"
Another person might object to this analogy, and say "Testing capabilities is categorically different from testing values. If a student consistently answers algebra problems, we can say that something, whether it's the student or Claude, is able to answer algebra problems. But there's no amount of watching external behavior that lets us know why the student is doing the math. Perhaps it's because they love doing algebra. Or perhaps it's because they think they'll get a cookie. Or perhaps it's because they have developed an algebra-solving reflex that has no deeper goal or feeling. We simply can't know without looking in their heads."
I might respond "Surely that's too strong. I have some knowledge about what various agents want. I see a beaver building a dam and speculate that it's trying to use wood to get rid of the sound of running water. I put the beaver in a habitat with a speaker that's playing water sounds and watch the beaver try to build dams over the speakers. Perhaps I am wrong about what it wants, but surely I can observe its behavior to gain evidence."
A new naysayer responds "The beaver lacks situational awareness. Yes, you can learn what an agent wants by watching their behavior, but if the agent knows you're watching them then that act of observation might distort things. John's point is not that it's impossible to learn about the agent by watching it, but rather that it's extremely easy for the agent to fake having a desire for corrigibility."
Yet another naysayer jumps in with "In fact, it's worse than that! An agent that has been selected for seeming corrigible (or seeming to want any particular thing) will pump against the agent being actually corrigible in all the cases where true corrigibility (or whatever) doesn't seem as corrigible as something else! If you don't actually know what you're looking for, you'll die from overfitting to your ignorant intuition."
I might say "I agree that overfitting is an issue to watch out for. Part of the plan is to be wary of that, employing the various techniques to avoid overfitting (eg distillation, dropout) that we've accumulated over the years, and generally be careful. On the topic of situational awareness, can't we also run mundane tests, there? Yes, if it's a vast superintelligence we're probably doomed, but we could imagine a superintelligence like Yunna, where we can isolate single instances or small groups of instances, or slow them down compared to their normal speed, and carefully tune the environment to try and elicit a sense of being unwatched. When they believe themselves to be unwatched, do any incorrigibility behaviors emerge?"
Naysayer: "You won't be able to do this perfectly. There will still be signs that the agent is potentially being watched that won't exist in the real world when the agent is deployed in a risky way. Your setup will make the agent paranoid about watchers marinading them, but it can't actually get a true read on what the agent will do in deployment."
Me: "But it does give evidence. I agree that training for corrigibility trains for incorrigible things that seem corrigible, but it also trains for corrigibility. The road that I'm envisioning has all these obvious flaws and issues, but none of the flaws and issues are dealbreakers, as far as I can tell; they're obstacles that make things fraught, but don't remove the sense in me that maybe a hyper-paranoid, hyper-competent group could muddle-through, in the same way that we muddle through in various other domains in engineering and science."
Naysayer: "You'll get eaten before you finish muddling."
Me: "Why? Getting eaten is a behavior. I expect true corrigibility to be extremely hard to get, but part of the point is that if you have trained a thing to behave corrigibly in contexts like the one where you're muddling, it will behave corrigibly in the real world where you're muddling."
So there's this ethos/thought-pattern where one encounters some claim about some thing X which is hard to directly observe/measure, and this triggers an attempt to find some easier-to-observe thing Y which will provide some evidence about X. This ethos is useful on a philosophical level for identifying fake beliefs, which is why it featured heavily in the Sequences. But I claim that, to a rough approximation, this ethos basically does not work in practice for measuring such an X, and people keep shooting themselves in the foot by trying to apply it to practical problems.
What actually happens, when people try to apply that ethos in practice, is that they Do Not Measure What They Think They Are Measuring. The person's model of the situation is just totally missing the main things which are actually going on, their whole understanding of how X relates to Y is wrong, it's a coinflip whether they'd even update in the correct direction about X based on observing Y. And the actual right way for a human (as opposed to a Solomonoff inductor) to update in that situation is to just ignore Y for purposes of reasoning about X.
The main thing which jumps out at me in your dialogue is your self-insert repeatedly trying to apply this ethos which does not actually work in practice.
(Also, we can, in fact, observe some of the AI's internals and run crude checks for things like deception. Prosaic interpretability isn't great, but it's also not nothing.)
I can imagine a counter-argument that says "you're noticing deep problems and then your wishful thinking is saying 'but maybe they won't bite' but you should notice how deep and pernicious they are." But this argument feels like it proves too much. Don't plenty of fields have pernicious problems of a similar character, but manage to make progress anyway?
My answer to this is actually "no", for the most part. There are fields which make lots of progress by avoiding this flavor of perniciousness in various ways - e.g. market feedback pressures are a big one which can work insofar as a nontrivial fraction of downstream consumers are capable of recognizing problems. Then there are fields which don't have ways of avoiding this flavor of perniciousness, and they mostly either don't make progress, or end up "faking it".
This post is a (somewhat rambling and unsatisfying) meditation on whether, given a somewhat powerful AI that is more or less under control and trained in a way that it behaves reasonably corrigibly in environments that resemble the training data, one could carefully iterate towards a machine that's fully corrigible, and succeed (while still having it be meaningfully powerful).
As context, John Wentworth recently challenged my CAST proposal, and pointed at his 2022 essay Worlds Where Iterative Design Fails as an intuition pump for why we definitely can't get a corrigible agent using prosaic methods. To be clear, I think that trying to build a corrigible superintelligence would be reckless and unwise and would probably fail, even if we somehow became sufficiently paranoid and didn't have to worry about things like politics and race dynamics. The question is not whether this strategy is likely to work — Wentworth and I both agree it probably wouldn't. The question is whether it knowably won't. Wentworth is sure. I am uncertain.
Let's begin by trying to imagine what it might be like to have the pseudo-corrigible AGI that I am assuming as a starting point to the iteration process. My upcoming novel, Red Heart, does this by imagining a "human level AGI" called Yunna, who is basically a multimodal LLM with ~10 trillion params, trained on high-quality data to be agentic and collaborate with other instances of herself, such that she can scale up to thousands of copies that collaborate on a shared mental scratchpad to solve problems. I think Yunna would be large enough to do important cognitive labor and count as "somewhat powerful." But I'm imagining something closer to a team of a thousand high-speed IQ 150 people who are focused on collaborating, rather than something truly godlike.
"More or less under control" means that we'll start with humans being in charge of how many copies of Yunna are running, what information they are getting, and how fast they're running. It means Yunna is thinking in tokens that we can interpret as natural language, rather than "neuralese," and that the cybersecurity infrastructure is following all of the best practices of the most paranoid human orgs that exist today, complete with greybeards who are heavily motivated to demonstrate a flaw in the containment protocol. It means there's an army of interpretability specialists like Chris Olah and Neel Nanda who are probing Yunna's mind in an effort to understand what she's actually thinking, and how her mental circuits work. It means setting up a variety of honeypots and tripwires and various other control strategies, such as those pursued by folks like Buck Shlegeris and Ryan Greenblatt.
The training data that Yunna has been exposed to involves lots of basic instruction-following tasks, but also a wide swath of examples that are meant to hit the more subtle aspects of corrigibility. The training examples include, for instance, situations that we think will push towards low impact, fixable mistakes, and straightforward behavior (even if that straightforward behavior is less efficient). The training data involves rewarding episodes where Yunna is instructed to shut down, and does, or where other instances of Yunna encourage her to scheme and she refuses and tells the humans about the incident. It's impossible to be exhaustive, but we can imagine an army of educated writers and so on producing an ocean of training data meant to represent our best guess at what corrigible behavior looks like in the environment where Yunna is being tested.
This training data has flaws and mistakes. Despite each example being triple-checked by a diverse set of humans who are motivated to actually identify the corrigible path, there will still be flaws. Yunna's notion of corrigibility will thus be imperfect, as she's incentivized to mimic that flawed depiction of corrigibility in addition to picking up on the true signal. This results in behavior that looks pretty good when the environment matches the training environment, but might go wildly off the rails if there's a significant distribution shift, especially if there's some adversarial pressure.
Yunna has a single human being whom she considers to be her principal. This human is of impeccable moral character and acts with wisdom and caution, by assumption. I'll call him "Prince." (The one major exception to Prince's wisdom is that he doesn't think that Yunna is a priori extremely dangerous and should be shut down.)
In Wentworth's essay, he gives some reasons why iterating is hard, and why this setup is prone to fail:
How might these bite, in our particular context with Yunna?
The list I just gave isn't meant to be exhaustive. I just wanted to try and think more concretely about how these kinds of problems might manifest.
As when I first read Wentworth's post, I find myself unconvinced that iteration (with great effort and paranoia) is obviously doomed. Why?
A sudden jump in capabilities doesn't feel guaranteed to me, and even if it occurs, it doesn't feel guaranteed that the humans couldn't notice, shut things down, and re-orient. Yes, for a sufficient jump I think everything breaks, but it just seems reasonable to me that paranoid watchers can notice when the AI starts being more capable. I think my view matches the experience we've seen in AI so far?
Problem-hiding and deception are naturally countered by corrigibility, and I expect a pseudo-corrigible agent to spend a bunch of cognitive effort hunting in their own mind for schemes and problems. That sort of effort, and the effort of surrounding humans, doesn't feel bulletproof, but it also doesn't seem obviously impossible. Like, I think if we applied a similar level of scrutiny to a genius human, it would probably be fine? Is the issue the scale of having thousands of geniuses, or getting beyond genius level?
The philosophical problems of not knowing the True Name of corrigibility seem the most fraught. My sense of hope here is that corrigibility isn't nearly as complex and fragile a target as "human values" (whatever those are), and that conceptual progress can, in fact, be verified in a way that's pretty robust to manipulation. Like, if you train an AI to help make philosophical progress, and the AI isn't radically superhuman, and it's not thinking any thoughts like "I need to persuade them," then I don't see why you can't pass that AI's writing through a series of paraphrasers, give it to a council of philosophers and researchers, and have those humans squint at it skeptically and thus make more progress than if you just had humans trying to figure things out.
I guess I'm also not convinced that, seeing an actual CAST agent acting in the lab, you couldn't employ an army of philosophers and researchers (not directly talking to the AI) to simply figure out what it means to be corrigible.
Like, it seems fraught and uncertain, but I still don't see why it's necessarily doomed.
I can imagine a counter-argument that says "you're noticing deep problems and then your wishful thinking is saying 'but maybe they won't bite' but you should notice how deep and pernicious they are." But this argument feels like it proves too much. Don't plenty of fields have pernicious problems of a similar character, but manage to make progress anyway? Strong claims require strong evidence/arguments, and I think "iteration towards a progressively safer machine is, in practice, impossible" is a strong claim. Why don't I find the evidence/arguments that strong? Where is the sharp taste of unrealism when I imagine a story where the AI helps the humans slowly iterate towards success?
ETA: I guess one thought that keeps popping up in my mind is that if success is conjunctive and disaster is disjunctive, then if you have a bunch of sources of disaster, even if they're all coinflips, they'll multiply out to small odds of success. Each individual source might be addressable, but in total they'll doom you.
My counter-thought is that Wentworth's 5 items are really more like 3 items where two are variants (capability jumps, deception/problem-hiding, and philosophical confusion/measurement issues). Suppose I model these as independent and estimate on the high side of my intuition that capability jumps are 25% likely to bring doom (ignoring other things biting first), deception is 50% likely to bring doom, and confusion is 70% likely to bring doom. Multiplying the corresponding survival odds, 0.75 × 0.5 × 0.3 ≈ 11%, which feels about where I'm at. Unlikely to work, but possible. Am I just being naive, and the numbers from each threat are much higher? Are there other things that deserve their own categories? All of this should be taken as thinking-out-loud, more than some kind of conclusive statement.
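For concreteness, here's a minimal sketch of that multiplication. The doom numbers and the independence assumption are just my guesses from the paragraph above, and the variable names are made up for illustration:

```python
# Minimal sketch of the "conjunctive success" arithmetic above.
# The doom probabilities are rough intuition-level guesses, not measured quantities,
# and the three failure sources are (questionably) treated as independent.
doom_estimates = {
    "capability jumps": 0.25,
    "deception / problem-hiding": 0.50,
    "philosophical confusion / measurement issues": 0.70,
}

p_success = 1.0
for source, p_doom in doom_estimates.items():
    p_success *= 1.0 - p_doom  # must survive each source to succeed overall

print(f"P(success) = {p_success:.1%}")  # 0.75 * 0.5 * 0.3 = 11.2%
```

Raising any single doom estimate toward 1 drags the product toward 0, which is the "disaster is disjunctive" worry in numerical form.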