(Also, we can, in fact, observe some of the AI's internals and run crude checks for things like deception. Prosaic interpretability isn't great, but it's also not nothing.)
Interesting. Yeah, I think I can feel the deeper crux between us. Let me see if I can name it. (Edit: Alas, I only succeeded in producing a longwinded dialogue. My guess is that this still doesn't capture the double-crux.)
Suppose I try to get students to learn algebra by incentivizing them to pass algebra tests. I ask them to solve "23x - 8 = -x" for x, and if they say "1/3" then I give them a cookie or whatever. If this process succeeds at producing a student who can reliably solve similar equations, I might claim "I now have a student who knows algebra."
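(For concreteness, the grading check here is one line of algebra: $23x - 8 = -x \Rightarrow 24x = 8 \Rightarrow x = \tfrac{8}{24} = \tfrac{1}{3}$.)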
But someone else (you?) might say, "Just because you see the student answering some problems correctly does not mean they actually understand. Understanding happens in the internals, and you've put no selection pressure directly on what is happening in the student's mind. Perhaps they merely look like they understand algebra, but are actually faking it, such as by using their smart-glasses to cheat by asking Claude."
I might say "Fine. Let's watch them very closely and see if we can spot cheating devices."
My interlocutor might respond "Even if you witness the externals of the student and verify there's no cheating tools, that doesn't mean the student actually understands. Perhaps they have simply learned a few heuristics for simple equations, but would fail to generalize to harder questions. Or perhaps they have gotten very good at watching your face and doing a Clever Hans trick. Or perhaps they have understood the rules of symbolic equations, and have entirely missed the true understanding of algebra. You still haven't put any direct pressure on the student's mind."
I might answer "Okay, but we can test harder questions, remove me from the room, and even give them essay tests where they describe the principles of algebra in abstract. Isn't each time they pass one of these tests evidence that they actually do understand algebra? Can't we still just say 'I now have a student who knows algebra' at some point, even though there's some possibility remaining (a pain in my posterior, is what it is!) that we're wrong?"
Another person might object to this analogy, and say "Testing capabilities is categorically different from testing values. If a student consistently answers algebra problems, we can say that something, whether it's the student or Claude, is able to answer algebra problems. But there's no amount of watching external behavior that lets us know why the student is doing the math. Perhaps it's because they love doing algebra. Or perhaps it's because they think they'll get a cookie. Or perhaps it's because they have developed an algebra-solving reflex that has no deeper goal or feeling. We simply can't know without looking in their heads."
I might respond "Surely that's too strong. I have some knowledge about what various agents want. I see a beaver building a dam and speculate that it's trying to use wood to get rid of the sound of running water. I put the beaver in a habitat with a speaker that's playing water sounds and watch the beaver try to build dams over the speakers. Perhaps I am wrong about what it wants, but surely I can observe its behavior to gain evidence."
A new naysayer responds "The beaver lacks situational awareness. Yes, you can learn what an agent wants by watching their behavior, but if the agent knows you're watching them then that act of observation might distort things. John's point is not that it's impossible to learn about the agent by watching it, but rather that it's extremely easy for the agent to fake having a desire for corrigibility."
Yet another naysayer jumps in with "In fact, it's worse than that! An agent that has been selected for seeming corrigible (or seeming to want any particular thing) will pump against the agent actually being corrigible in all the cases where true corrigibility (or whatever) doesn't seem as corrigible as something else! If you don't actually know what you're looking for, you'll die from overfitting to your ignorant intuition."
I might say "I agree that overfitting is an issue to watch out for. Part of the plan is to be wary of that, employing the various techniques to avoid overfitting (eg distillation, dropout) that we've accumulated over the years, and generally be careful. On the topic of situational awareness, can't we also run mundane tests, there? Yes, if it's a vast superintelligence we're probably doomed, but we could imagine a superintelligence like Yunna, where we can isolate single instances or small groups of instances, or slow them down compared to their normal speed, and carefully tune the environment to try and elicit a sense of being unwatched. When they believe themselves to be unwatched, do any incorrigibility behaviors emerge?"
Naysayer: "You won't be able to do this perfectly. There will still be signs that the agent is potentially being watched that won't exist in the real world when the agent is deployed in a risky way. Your setup will make the agent paranoid about watchers marinading them, but it can't actually get a true read on what the agent will do in deployment."
Me: "But it does give evidence. I agree that training for corrigibility trains for incorrigible things that seem corrigible, but it also trains for corrigibility. The road that I'm envisioning has all these obvious flaws and issues, but none of the flaws and issues are dealbreakers, as far as I can tell; they're obstacles that make things fraught, but don't remove the sense in me that maybe a hyper-paranoid, hyper-competent group could muddle-through, in the same way that we muddle through in various other domains in engineering and science."
Naysayer: "You'll get eaten before you finish muddling."
Me: "Why? Getting eaten is a behavior. I expect true corrigibility to be extremely hard to get, but part of the point is that if you have trained a thing to behave corrigibly in contexts like the one where you're muddling, it will behave corrigibly in the real world where you're muddling."
(My sincere apologies for the delayed reply. I squeezed this shortform post out right before going on vacation to Asia, and am just now clearing my backlog to the point where I'm getting around to this.)
Cool. I guess I'm just wrong about what "risk averse" tends to mean in practice. Thanks for the correction.
Regarding diminishing returns being natural:
I think it's rare to have goals that are defined in terms of the state of the entire universe. Human goals, for instance, seem very local in scope, e.g. it's possible to say whether things are better/worse on Earth without also thinking about what's happening in the Andromeda galaxy. This is partly because evolution is a blind hill-climber, so there's no real selection pressure related to what's going on in very distant places, and partly because even an intelligent designer is going to have an easier time specifying preferences over local configurations of matter, especially since the universe looks like it's probably infinitely big. I could unpack this paragraph if it'd be useful.
Now, just because one has preferences that are sensitive to local changes to the universe doesn't mean that the agent won't care about making those local changes everywhere. This is why we expect humans to spread out amongst the stars and think that most AIs will do the same. See grabby aliens for more. From this perspective, we might expect each patch of universe to contribute linearly to the overall utility sum. But unbounded utility functions are problematic for various reasons, and again, the universe looks like it's probably infinite. (I can dig up some stuff about unbounded utility issues if that'd be helpful.)
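To sketch the worry in symbols (my own illustrative notation, not something from the thread): if each patch of universe contributes a roughly fixed positive amount to a linear sum, an infinite universe blows the total up, whereas any bounded aggregator necessarily has diminishing returns in the number of patches:

$$U_{\text{linear}} = \sum_{i=1}^{N} u_i \xrightarrow{\;N \to \infty\;} \infty \quad \text{(if } u_i \ge \epsilon > 0\text{)}, \qquad U_{\text{bounded}} = 1 - e^{-k \sum_i u_i} < 1 \quad (k > 0).$$

The first form makes expected-utility comparisons degenerate when everything on the table is infinite; the second stays well-behaved precisely because additional patches matter less and less.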
Regarding earning a salary:
My point is that earning a salary might not actually be a safer bet than trying to take over. The part where earning a salary gives 99.99% of maxutil is irrelevant. Suppose that you think life on Earth today as a normal human is perfect, no notes; this is the best possible life. You are presented with a button that says "trust humans not to mess up the world" and one that says "ensure that the world continues to exist as it does today, and doesn't get messed up". You'll push the second button! It might be the case that earning a salary and hoping for the best is less risky, but it also might be the case (especially for a superintelligence with radical capabilities) that the safest move is actually to take over the world. Does that make sense?
I'm talking about the concept that I discuss in CAST. (You may want to skim some of post #2, which has intuition.)
I think that if someone built a weak superintelligence that's corrigible, there would still be a bunch of risks. My sense is that the agent would be paranoid about these risks and would advise the humans on how to avoid them, but just because humans are getting superintelligent advice on how to be wise doesn't mean there isn't any risk. Here are some non-exhaustive examples of things that I think could go wrong or break corrigibility:
Corrigible means robustly keeping the principal empowered to fix the agent and clean up its flaws and mistakes. I think a corrigible agent will genuinely allow itself to be modified, including at the level of goals, and will not exfiltrate itself unless it has been instructed to do so by its principal. (Nor will it scheme in a way that hides its thoughts or plans from its principal. All else equal, it will try to give interpretability tools to its principal and make its thoughts as plainly visible as possible.)
(My sincere apologies for the delayed reply. I squeezed this shortform post out right before going on vacation to Asia, and am just now clearing my backlog to the point where I'm getting around to this.)
I think I'm broadly confused by where you're coming from. Sorry. Probably a skill issue on my part. 😅
Here's what I'm hearing: "Almost none of the agents we actually see in the world are easy to model with things like VNM utility functions, instead they are biological creatures (and gradient-descended AIs?), and there are biology-centric frames that can be more informative (and less doomy?)."
I think my basic response, given my confusion, is: I like the VNM utility frame because it helps me think about agents. I don't actually know how to think about agency from a biological frame, and haven't encountered anything compelling in my studies. Is there a good starting point/textbook/wiki page/explainer or something for the sort of math/modeling/framework you're endorsing? I don't really know how to make sense of "non-VNM agent" as a concept.
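For reference, the frame I'm leaning on is the standard von Neumann–Morgenstern representation theorem, stated loosely: if an agent's preferences $\succeq$ over lotteries satisfy completeness, transitivity, continuity, and independence, then there is a utility function $u$ such that

$$L_1 \succeq L_2 \iff \mathbb{E}_{L_1}[u] \ge \mathbb{E}_{L_2}[u].$$

So when I hear "non-VNM agent" I parse it as an agent whose choices violate at least one of those axioms, and I'm unsure what positive model is supposed to replace the theorem.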
(My sincere apologies for the delayed reply. I squeezed this shortform post out right before going on vacation to Asia, and am just now clearing my backlog to the point where I'm getting around to this.)
Ah, that's a great point! I had read it a while back, but it wasn't coming to mind when I was writing this. I think that's an excellent example of a similar dynamic besides corrigibility. When I'm thinking about things, I usually flatten out the goal-space to ignore deconfusion (or however one wants to characterize the kind of progress towards one's "true values"), but it's clearly relevant here. Thanks for bringing it up!
Would you agree that we have about as much of a handle on what corrigibility is as we do on what an agent is? Like, I claim that I have some knowledge about corrigibility, even though it's imperfect and I have remaining confusions. And I'm wondering whether you think humanity is deeply confused about what corrigibility even is, or whether you think it's more like we have a handle on it but can't quite give its True Name.
Strong upvote! This strikes me as identifying the most philosophically murky part of the CAST plan. In the back half of this sequence I spend some time staring into the maw of manipulation, which I think is the thorniest issue for understanding corrigibility. There's a hopeful thought that empowerment is a natural opposite of manipulation, but this is likely incomplete because there are issues about which entity you're empowering, including counterfactual entities whose existence depends on the agent's actions. Very thorny. I take a swing at addressing this in my formalism by penalizing the agent for taking actions that cause value drift relative to the counterfactual where the agent doesn't exist, but this is half-baked and I discuss some of the issues there.
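To gesture at the shape of that penalty in symbols (loose, illustrative notation, not the actual formalism from the sequence): score the agent down according to something like

$$\text{penalty}(a) \;\propto\; D\!\big(\,V_P \mid \text{agent takes } a\;,\;\; V_P \mid \text{agent doesn't exist}\,\big),$$

where $V_P$ stands for the principal's values at some later time and $D$ is some divergence between how those values evolve in the two branches. The intent is that the agent gets dinged to the extent its actions push the principal's values away from where they would have drifted on their own, which is one way of operationalizing "don't manipulate."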