zhukeepa's Comments

Paul's research agenda FAQ

I'm still confused about how each of the approaches would prevent us from eventually creating agents that spend 99% of their cognition acting corrigibly, while spending a well-hidden 1% of their cognition trying to sniff out whether they're in the test distribution, and executing a treacherous turn if so. The way I understand your summaries:

  • Adversarial training makes it harder for the malignant 1% to do anything dangerous, since it makes it harder to tell whether it's in the test distribution. But it does little to prevent a malignant 1% from very conservatively trying (and usually failing) to ascertain whether it's in the test distribution.
  • Interpretability makes it harder for the malignant 1% to be hidden, but doesn't prevent malignant cognition it can't detect. (My reading of "Gradient descent will be pushing against whatever kinds of treacherous turns that can be detected by our machinery. But it will only push for things that improve behavior on the training distribution, so a treacherous turn can only appear where it is coupled with the machinery that is producing good predictions, giving us more traction for interpretability." is completely consistent with this.)

I didn't understand what you wrote about verification well enough to have anything to say.

It does prima facie seem that an agent spending 100% of its cognition being competent and corrigible achieves higher reward than an agent that only spends 99% of its cognition being competent and corrigible, and 1% of its cognition trying (and almost always failing) to see if it's in the test distribution. Is your model that gradient descent will favor the former agent over the latter agent, making the 99%/1% agent unlikely to arise (and perhaps quantifiably so)?
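To make the prima facie reward comparison concrete, here is a toy numerical sketch. Everything in it is hypothetical: the function, the one-in-a-million detection probability, and the penalty are all made-up illustrative numbers, not a claim about how gradient descent actually scores such agents.

```python
def expected_reward(frac_corrigible, detection_prob=1e-6, detection_penalty=1.0):
    """Toy per-episode reward (illustrative numbers only): cognition spent
    behaving corrigibly earns reward proportionally; cognition spent probing
    for the test distribution earns nothing and carries a tiny chance of
    being caught and penalized by adversarial training."""
    probing = 1.0 - frac_corrigible
    return frac_corrigible - probing * detection_prob * detection_penalty

full = expected_reward(1.0)    # agent spending 100% of cognition corrigibly
split = expected_reward(0.99)  # 99% corrigible, 1% probing for the test distribution

# In this toy model the fully corrigible agent strictly dominates,
# which is the intuition behind expecting gradient descent to favor it.
assert full > split
```

The gap here is almost entirely the forgone 1% of competence, not the detection penalty, which matches the worry in the comment: adversarial training only helps on the (rare) probing attempts it actually catches.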

The inner process may nevertheless use TDT if TDT doesn't diverge from CDT on the training distribution, or it might learn to use TDT but "look nice" so that it doesn't get selected against.

This was what I was intending to convey in assumption 3.

Paul's research agenda FAQ

I'm currently intuiting that there's a broad basin of "seeming corrigible until you can perform a treacherous turn", but not a broad basin of true corrigibility. If the overseer can robustly detect incorrigible cognition in the distilled agent, I think things may be fine. But if e.g. the overseer is only inspecting the distilled agent's justifications for its behavior, and something like its verbal loop, I don't see how things can work out. 

Here are my assumptions underlying this intuition: 

1. Past a certain capabilities level, distilled agents can meaningfully be modeled as expected utility maximizers for short-term rewards from their overseer. (This is because expected utility maximization is instrumentally convergent, the same way that believing 51 to be composite is instrumentally convergent, as per Eliezer's comment.)

2. If the agents reason using TDT, the effective utility functions of these act-based agents will optimize for long-term instrumental preferences of their users, because in expectation, adopting this policy raises whatever short-term instrumental reward they'll get.

3. Expected utility maximizers for discounted future reward signals are hopelessly incorrigible. 

4. Agents produced through the distillation process, misaligned in this way, could enter a regime of devoting 99% of their cognition to doing exactly what a corrigible agent would do, and 1% of their cognition to finding strategies for determining whether and how to execute a treacherous turn. 

5. Because of how many ways there are to hide this 1% of cognition, a complete white-box understanding of the agent's cognition is necessary to spot this sort of malign reasoning. 

6. We can't achieve this level of understanding via anything like current ML transparency techniques. 

Which of these assumptions do you disagree with most? Or do you agree with these assumptions, but feel optimistic that we can get good worst-case guarantees anyway?

2. How easy is it to learn to be corrigible? I'd think of this as: if we impose the extra constraint that our model behave corrigibly on all inputs, in addition to solving the object-level task well, how much bigger do we need to make the model?

My current intuition: for the same reasons it seems really hard to train models that believe 51 is prime while also performing well on object-level tasks, learning to be corrigible seems very difficult.

Another take on agent foundations: formalizing zero-shot reasoning
This does seem like a double crux; my sense is that correctly reasoning about self-modification requires a potentially complicated theory that I don't expect a general reasoner to realize it needs as soon as it becomes capable of self-modification (or creating successor agents, which I think is a subproblem of self-modification).

I share this intuition, for sufficiently complex self-modifications, with massive error bounds around what constitutes "sufficiently complex". I'm not sure if humans perform sufficiently complex self-modifications, I think our first AGIs might perform sufficiently complex self-modifications, and I think AGIs undergoing a fast takeoff are most likely performing sufficiently complex self-modifications.

is simply not able to foresee the impacts of its changes and so makes them 'recklessly' (in the sense that every particular change seems worth it, even if the policy of making changes at that threshold of certainty seems likely to lead to disaster).

+100. This is why I feel queasy about "OK, I judge this self-modification to be fine" when the self-modifications are sufficiently complex, if this judgment isn't based off something like zero-shot reasoning (in which case we'd have strong reason to think that an agent following a policy of making every change it determines to be good will actually avoid disasters).

Paul's research agenda FAQ

If we view the US government as a single entity, it's not clear that it would make sense to describe it as aligned with itself, under your notion of alignment. If we consider an extremely akratic human, it's not clear that it would make sense to describe him as aligned with himself. The more agenty a human is, the more it seems to make sense to describe him as being aligned with himself. 

If an AI assistant has a perfect model of what its operator approves of and only acts according to that model, it seems like it should qualify as aligned. But if the operator is very akratic, should this AI still qualify as being aligned with the operator? 

It seems to me that clear conceptual understandings of alignment, corrigibility, and benignity depend critically on a clear conceptual understanding of agency, which suggests a few things:

  • Significant conceptual understanding of corrigibility is at least partially blocked on conceptual progress on HRAD. (Unless you think the relevant notions of agency can mostly be formalized with ideas outside of HRAD? Or that conceptual understandings of agency are mostly irrelevant for conceptual understandings of corrigibility?)
  • Unless we have strong reasons to think we can impart the relevant notions of agency via labeled training data, we shouldn't expect to be able to adequately impart corrigibility via labeled training data.
  • Without a clear conceptual notion of agency, we won't have a clear enough concept of alignment or corrigibility we can use to make worst-case bounds.

I think a lot of folks who are confused about your claims about corrigibility share my intuitions around the nature of corrigibility / the difficulty of learning corrigibility from labeled data, and I think it would shed a lot of light if you shared more of your own views on this.

Another take on agent foundations: formalizing zero-shot reasoning

I should clarify a few more background beliefs:

  • I think zero-shot reasoning is probably not very helpful for the first AGI, and will probably not help much with daemons in our first AGI.
  • I agree that right now, nobody is trying to (or should be trying to) build an AGI that's competently optimizing for our values for 1,000,000,000 years. (I'd want an aligned, foomed AGI to be doing that.)
  • I agree that if we're not doing anything as ambitious as that, it's probably fine to rely on human input.
  • I agree that if humanity builds a non-fooming AGI, they could coordinate around solving zero-shot reasoning before building a fooming AGI in a small fraction of the first 10,000 years (perhaps with the help of the first AGI), in which case we don't have to worry about zero-shot reasoning today.
  • Conditioning on reasonable international coordination around AGI at all, I give 50% to coordination around intelligence explosions. I think the likelihood of this outcome rises with the amount of legitimacy zero-shot reasoning has at coordination time, which is my main reason for wanting to work on it today. (If takeoff is much slower I'd give something more like 80% to coordination around intelligence explosions, conditional on international coordination around AGIs.)

Let me now clarify what I mean by "foomed AGI":

  • A rough summary is included in my footnote: [6] By “recursively self-improving AGI”, I’m specifically referring to an AGI that can complete an intelligence explosion within a year [or hours], at the end of which it will have found something like the optimal algorithms for intelligence per relevant unit of computation. ("Optimally optimized optimizer" is another way of putting it.)
  • You could imagine analogizing the first AGI we build to the first dynamite we ever build. You could analogize a foomed AGI to a really big dynamite, but I think it's more accurate to analogize it to a nuclear bomb, given the positive feedback loops involved.
  • I expect the intelligence differential between our first AGI and a foomed AGI to be numerous orders of magnitude larger than the intelligence differential between a chimp and a human.
  • In this "nuclear explosion" of intelligence, I expect the equivalent of millions of years of human cognitive labor to elapse, if not many more.

In this comment thread, I was referring primarily to foomed AGIs, not the first AGIs we build. I imagine you either having a different picture of takeoff, or thinking something like "Just don't build a foomed AGI. Just like it's way too hard to build AGIs that competently optimize for our values for 1,000,000,000 years, it's way too hard to build a safe foomed AGI, so let's just not do it". And my position is something like "It's probably inevitable, and I think it will turn out well if we make a lot of intellectual progress (probably involving solutions to metaphilosophy and zero-shot reasoning, which I think are deeply related). In the meantime, let's do what we can to ensure that nation-states and individual actors will understand this point well enough to coordinate around not doing it until the time is right."

I'm happy to delve into your individual points, but before I do so, I'd like to get your sense of what you think our remaining disagreements are, and where you think we might still be talking about different things.

Paul's research agenda FAQ

Corrigibility. Without corrigibility I would be just as scared of Goodhart.

Another take on agent foundations: formalizing zero-shot reasoning
This seems like it's using a bazooka to kill a fly. I'm not sure if I agree that zero-shot reasoning saves you from daemons, but even if so, why not try to attack the problem of daemons directly?

I agree that zero-shot reasoning doesn't save us from daemons by itself, and I think there's important daemon-specific research to be done independently of zero-shot reasoning. I more think that zero-shot reasoning may end up being critically useful in saving us from a specific class of daemons.

Okay, sure, but then my claim is that Solomonoff induction is _better_ than zero-shot reasoning on the axes you seem to care about, and yet it still has daemons. Why expect zero-shot reasoning to do better?

The daemons I'm focusing on here mostly arise from embedded agency, which Solomonoff induction doesn't capture at all. (It's worth noting that I consider there to be a substantial difference between Solomonoff induction daemons and "internal politics"/"embedded agency" daemons.) I'm interested in hashing this out further, but probably at some future point, since this doesn't seem central to our disagreement.

But in scenarios where we have an AGI, yet we fail to achieve these objectives, the reason that seems most likely to me is "the AGI was incompetent at some point, made a mistake, and bad things happened". I don't know how to evaluate the probability of this and so become uncertain. But, if you are correct that we can formalize zero-shot reasoning and actually get high confidence, then the AGI could do that too. The hard problem is in getting the AGI to "want" to do that.
However, I expect that the way we actually get high confidence answers to those questions, is that we implement a control mechanism (i.e. the AI) that gets to act over the entire span of 10,000 or 1 billion years or whatever, and it keeps course correcting in order to stay on the path.
If you're trying to [build the spacecraft] without putting some general intelligence into it, this sounds way harder to me, because you can't build in a sufficiently general control mechanism for the spacecraft. I agree that (without access to general-intelligence-routines for the spacecraft) such a task would need very strong zero-shot reasoning. (It _feels_ impossible to me that any actual system could do this, including AGI, but that does feel like a failure of imagination on my part.)

I'm surprised by how much we seem to agree about everything you've written here. :P Let me start by clarifying my position a bit:

  • When I imagine the AGI making a "plan that will work in one go", I'm not imagining it going like "OK, here's a plan that will probably work for 1,000,000,000 years! Time to take my hands off the wheel and set it in motion!" I'm imagining the plan to look more like "set a bunch of things in motion, reevaluate and update it based on where things are, and repeat". So the overall shape of this AGI's cognition will look something like "execute on some plan for a while, reevaluate and update it, execute on it again for a while, reevaluate and update it again, etc.", happening millions or billions of times over (which seems a lot like a control mechanism that course-corrects). The zero-shot reasoning is mostly for ensuring that each step of reevaluation and updating doesn't introduce any critical errors.
  • I think an AGI competently optimizing for our values should almost certainly be exploring distant galaxies for billions of years (given the availability of astronomical computing resources). On this view, building a spacecraft that can explore the universe for 1,000,000,000 years without critical malfunctions is strictly easier than building an AGI that competently optimizes for our values for 1,000,000,000 years.
  • Millions of years of human cognitive labor (or much more) might happen in an intelligence explosion that occurs over the span of hours. So undergoing a safe intelligence explosion seems at least as difficult as getting an earthbound AGI doing 1,000,000 years' worth of human cognition without any catastrophic failures.
  • I'm less concerned about the AGI killing its operators than I am about the AGI failing to capture a majority of our cosmic endowment. It's plausible that the latter usually leads to the former (particularly if there's a fast takeoff on Earth that completes in a few hours), but that's mostly not what I'm concerned about.

In terms of actual disagreement, I suspect I'm much more pessimistic than you about daemons taking over the control mechanism that course-corrects our AI, especially if it's doing something like 1,000,000 years' worth of human cognition, unless we can continuously zero-shot reason that this control mechanism will remain intact. (Equivalently, I feel very pessimistic about the process of executing and reevaluating plans millions/billions+ times over, unless the evaluation process is extraordinarily robust.) What's your take on this?
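The back-of-the-envelope arithmetic behind this pessimism can be made explicit. This is a sketch under the (simplifying, not-from-the-original) assumption that each reevaluation step fails independently with some fixed probability; the specific numbers are illustrative, not claims from the comment.

```python
import math

# If each of N plan-reevaluation steps avoids introducing a critical error
# with probability p, and failures are independent (a strong simplifying
# assumption), the whole run survives with probability p**N.

N = 1_000_000          # ~millions of reevaluate-and-update steps
p = 1 - 1e-6           # a one-in-a-million failure rate per step

survival = p ** N      # ≈ exp(-1) ≈ 0.37: even a tiny per-step error
                       # rate makes overall failure more likely than not

# Per-step failure rate needed for a 99% chance of surviving all N steps:
required_failure_rate = 1 - 0.99 ** (1 / N)   # on the order of 1e-8
```

This is why "extraordinarily robust" is doing real work above: the tolerable per-step error rate shrinks roughly in proportion to the number of reevaluation steps, and for billions of steps it shrinks further still.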

Paul's research agenda FAQ
This proposal judges explanations by plausibility and articulateness. Truthfulness is only incidentally relevant and will be Goodharted away.

Keep in mind that the overseer (two steps forward) is always far more powerful than the agent we're distilling (one step back), is trained to not Goodhart, is training the new agent to not Goodhart (this is largely my interpretation of what corrigibility gets you), and is explicitly searching for ways in which the new agent may want to Goodhart.

Another take on agent foundations: formalizing zero-shot reasoning
I see. Given this, I think "zero-shot learning" makes sense but "zero-shot reasoning" still doesn't, since in the former "zero" refers to "zero demonstrations" and you're learning something without doing a learning process targeted at that specific thing, whereas in the latter "zero" isn't referring to anything and you're trying to get the reasoning correct in one attempt so "one-shot" is a more sensible description.

I was imagining something like "zero failed attempts", where each failed attempt approximately corresponds to a demonstration.

Are you saying that in the slow-takeoff world, we will be able to coordinate to stop AI progress after reaching AGI and then solve the full alignment problem at leisure? If so, what's your conditional probability P(successful coordination to stop AI progress | slow takeoff)?

More like, conditioning on getting international coordination after our first AGI, P(safe intelligence explosion | slow takeoff) is a lot higher, like 80%. I don't think slow takeoff does very much to help international coordination.

Paul's research agenda FAQ

1. If at the time of implementing ALBA, our conceptual understanding of corrigibility is the same as it is today, how doomed would you feel?

2. How are you imagining imposing an extra constraint that our model behave corrigibly on all inputs?

3. My current best guess is that your model of how to achieve corrigibility is to train the AI on a bunch of carefully labeled examples of corrigible behavior. To what extent is this accurate?
