Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

What if the Alignment Problem is impossible?  

It would be sad for humanity if we live in a world where building AGI is very possible but aligning AGI is impossible.  Our curiosity, competitive dynamics, and understandable desire for a powerful force for good will spur us to build the unaligned AGI, and from then on humans will live at the AGI’s mercy: lucky and happy on the knife’s edge, killed by the AGI, or surviving in some state we do not enjoy.

For argument’s sake, imagine we are in that world in which it is impossible to force a super-intelligence to value humans sufficiently - just as chimpanzees could not have controlled the future actions of humans had they created us.

What if it is within human ability to prove that Alignment is impossible?

What if, during the Manhattan Project, the scientists had performed the now-famous calculation and determined that, yes, the first uncontrolled atomic chain reaction would ignite the atmosphere, and the calculation was clear for all to see?

Admittedly, this would have been a very scary world.  It’s very unclear how long humanity could have survived in such a situation.  

But one can imagine a few strategies:

  • Secure existing uranium supplies - as countries actually did.
  • Monitor the world for enrichment facilities and punish bad actors severely.
  • Accelerate satellite surveillance technology.
  • Accelerate military special operations capabilities.
  • Develop advanced technologies to locate, mine, blend and secure fissionable materials.
  • Accelerate space programs and populate the Moon and Mars.

Yes, a scary world.  But, one can see a path through the gauntlet to human survival as a species.  (Would we have left earth sooner and reduced other extinction risks?)  

Now imagine that same atmosphere-will-ignite world but the Manhattan Project scientists did not perform the calculation.  Imagine that they thought about it but did not try.

All life on earth would have ended, instantly, at Trinity.

Are we investing enough effort trying to prove that alignment is impossible?  

Yes, we may be in a world in which it is exceedingly difficult to align AGI but also a world in which we cannot prove that alignment is impossible.  (This would be analogous to an atmosphere-will-ignite world in which the math to check ignition is too difficult - a very sad world that would have ceased to exist on July 16, 1945, killing my 6-year-old mother.)

On the other hand, if we can prove alignment is impossible, the game is changed.  If the proof is sufficiently clear, forces to regulate companies and influence nation states will become dramatically greater and our chances for survival will increase a lot.

Proposal: The Impossibility X-Prize

  • $10 million?
  • Sufficient definitions of “alignment”, “AGI”, and the other concepts necessary to establish the task and define its completion

Even if we fail, the effort of trying to prove alignment is impossible may yield insights as to how alignment is possible and make alignment more likely.

If impossibility is not provable, the $10 million will never be spent.
If we prove impossibility, it will be the best $10 million mankind ever spent.

Let's give serious effort to the ignition calculation of our generation.

__

As an update to this post, I recommend readers interested in this topic read On Controllability of AI by Roman V. Yampolskiy.

__

Comments (49)

I would be skeptical such a proof is possible. As an existence proof, we could create aligned ASI by simulating the most intelligent and moral people, running at 10,000 times the speed of a normal human.

[-] Jeffs

Okay, maybe I'm moving the bar; hopefully not, and hopefully this thread is helpful...

Your counter-example, your simulation, would prove that examples of aligned systems - at a high level - are possible.  Alignment at some level is possible, of course.  Functioning thermostats are aligned.

What I'm trying to propose is the search for a proof that a guarantee of alignment - all the way up - is mathematically impossible.  We could then make the statement: "If we proceed down this path, no one will ever be able to guarantee that humans remain in control."  I'm proposing we see if we can prove that Stuart Russell's "provably beneficial" does not exist.

If a guarantee is proved to be impossible, I am contending that the public conversation changes.

Maybe many people - especially on LessWrong - take this fact as a given.  Their internal belief is close enough to a proof...that there is not a guarantee all the way up.

I think a proof that there is no guarantee would be important news for the wider world...the world that has to move if there is to be regulation.

Sorry, could you elaborate on what you mean by "all the way up"?

"All the way up" meaning at increasing levels of intelligence…your 10,000X becomes 100,000X, etc.

At some level of performance, a moral person faces new temptations because of increased capabilities and greater power for damage, right?

In other words, your simulation may fail to be aligned at 20,000X...30,000X...

I have strong-upvoted this post because I think that a discussion about the possibility of alignment is necessary. However, I don't think an impossibility proof would change very much about our current situation.

To stick with the nuclear bomb analogy, we already KNOW that the first uncontrolled nuclear chain reaction will definitely ignite the atmosphere and destroy all life on earth UNLESS we find a mechanism to somehow contain that reaction (solve alignment/controllability). As long as we don't know how to build that mechanism, we must not start an uncontrollable chain reaction. Yet we just throw more and more enriched uranium into a bucket and see what happens.

Our problem is not that we don't know whether solving alignment is possible. As long as we haven't solved it, this is largely irrelevant in my view (you could argue that we should stop spending time and resources at trying to solve it, but I'd argue that even if it were impossible, trying to solve alignment can teach us a lot about the dangers associated with misalignment). Our problem is that so many people don't realize (or admit) that there is even a possibility of an advanced AI becoming uncontrollable and destroying our future anytime soon.

[-] dr_s

Lots of people, when confronted with various reasons why AGI would be dangerous, object that it's all speculative, or just some sci-fi scenarios concocted by people with overactive imaginations. I think a rigorous, peer-reviewed, authoritative proof would strengthen the position against these sorts of objections.

I agree that a proof would be helpful, but probably not as impactful as one might hope. A proof of impossibility would have to rely on certain assumptions, like "superintelligence" or whatever, that could also be doubted or called sci-fi.

Now that you mention it, it does seem a bit odd that there hasn't even been one rigorous, logically correct, and fully elaborated (i.e. all axioms enumerated) paper on this topic.

Or even a long post; there's always something stopping it short of the ideal. Some logic error, some glossed-over assumption, etc...

[-] dr_s

There are a few papers on AI risks, and I think they were pretty solid? But the problem is that however one does it, it remains in the realm of conceptual, qualitative discussion if we can't first agree on formal definitions of AGI or alignment that someone can then Do Math on.

...qualitative discussion if we can't first agree on formal definitions of AGI...

Yes, that's part of what I meant by enumerating all axioms. Papers just assume every potential reader understands the same definition for 'AGI', 'AI', etc...

When clearly that is not the case.  Since there isn't an agreed on formal definition in the first place, that seems like the problem to tackle before anything downstream.

[-] dr_s

Well, that's mainly a problem with not even having a clear definition of intelligence as a whole. We might have better luck with more focused definitions like a "recursive agent" (by which I mean, an agent whose world model is general enough to include itself).

Like dr_s stated, I'm contending that a proof would be qualitatively different from "very hard" and would be powerful ammunition for advocating a pause...

Senator X: “Mr. CEO, your company continues to push the envelope and yet we now have proof that neither you nor anyone else will ever be able to guarantee that humans remain in control.  You talk about safety and call for regulation but we seem to now have the answer.  Human control will ultimately end.  I repeat my question: Are you consciously working to replace humanity? Do you have children, sir?”

AI expert to Xi Jinping: “General Secretary, what this means is that we will not control it. It will control us. In the end, Party leadership will cede to artificial agents. They may or may not adhere to communist principles. They may or may not believe in the primacy of China. Population advantage will become nothing because artificial minds can be copied 10 billion times. Our own unification of mind, purpose, and action will pale in comparison. Our chief advantages of unity and population will no longer exist.”

AI expert to US General: “General, think of this as building an extremely effective infantry soldier who will become CJCS then POTUS in a matter of weeks or months.”

Like I wrote in my reply to dr_s, I think a proof would be helpful, but probably not a game changer.

Mr. CEO: "Senator X, the assumptions in that proof you mention are not applicable in our case, so it is not relevant for us. Of course we make sure that assumption Y is not given when we build our AGI, and assumption Z is pure science-fiction."

What the AI expert says to Xi Jinping and to the US general in your example doesn't rely on an impossibility proof in my view. 

Yes.  Valid.  How do we avoid reducing this to a toy problem, or adopting assumptions so narrow (in order to achieve a proof) that Mr. CEO can dismiss it?

When I revise, I'm going to work backwards with CEO/Senator dialog in mind.

I think such a prize would be more constructive, if it could also just reward demonstrations of the difficulty of AI alignment. An outright proof of impossibility is very unlikely in my opinion, but better arguments for the danger of unaligned AI and the difficulty of aligning it, seem very possible. 

Yes, surely the proof would be very difficult or impossible.  However, enough people have the nagging worry that alignment is impossible to justify the effort of seeing whether we can prove that it is impossible...and updating.

But if the effort required for a proof is - I don't know - 120 person-months, let's please, Humanity, not walk right past that one into the blades.

I am not advocating that we divert dozens of people from promising alignment work. 

Even if it failed, I would hope the prove-impossibility effort would throw off beneficial by-products like:

  • the alignment difficulty demonstrations Mitchell_Porter raised,
  • the paring of some alignment paths to save time, 
  • new, promising alignment paths.

_____

I thought there was a 60%+ chance I would get a quick education on the people who are trying or who have tried to prove impossibility.  

But, I also thought, perhaps this is one of those Nate Soares blind spots...maybe caused by the fact that those who understand the issues are the types who want to fix things.

Has it gotten the attention it needs?

[-] dr_s

Wonder if we can assign a complexity class to the alignment problem? Even just proving that it's an NP problem would be huge.

Traditionally, such prizes don't presume the answer, and award proofs and disproofs alike. For example, if someone proved that the Riemann Hypothesis was false, he'd still be awarded the Millennium Prize. 

Agreed. Proof or disproof should win.

What would it mean for alignment to be impossible, rather than just difficult?

I can imagine a trivial way in which it could be impossible, if outcomes that you approve of are just inherently impossible for reasons unrelated to AI--for example, if what you want is logically contradictory, or if the universe just doesn't provide the affordances you need.  But if that's the case, you won't get what you want even if you don't build AI, so that's not a reason to stop AI research, it's a reason to pick a different goal.

But if good outcomes are possible but "alignment" is not, what could that mean?

That there is no possible way of configuring matter to implement a smart brain that does what you want?  But we already have a demonstrated configuration that wants it, which we call "you".  I don't think I can imagine that it's possible to build a machine that calculates what you should do but impossible to build a machine that actually acts on the result of that calculation.

That "you" is somehow not a replicable process, because of some magical soul-thing?  That just means that "you" need to be a component of the final system.

That it's possible to make an AGI that does what one particular person wants, but not possible to make one that does what "humanity" wants?  Proving that would certainly not result in a stop to AI research.

I can imagine worlds where aligning AI is impractically difficult.  But I'm not sure I understand what it would mean for it to be literally "impossible".

[-] dr_s

I would expect any proof would fall into some category akin to "you cannot build a program that can look at another program and tell you whether it will halt". A weaker sort of proof would be that alignment isn't impossible per se, but requires exponential time in the size of the model, which would make it prohibitively difficult.
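For readers who haven't seen that result, below is a minimal Python sketch of the diagonalization behind it. The `halts` oracle is hypothetical - that no such total, correct function can exist is the whole point - and an impossibility proof for alignment might, speculatively, share this self-referential shape.

```python
# Minimal sketch of the halting-theorem diagonalization alluded to above.
# `halts` is a *hypothetical* oracle; assume, for contradiction, that a
# perfect one exists with this signature.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical: returns True iff program_source halts on input_data."""
    raise NotImplementedError("no total, correct implementation can exist")

DIAGONAL_SOURCE = '''
def diagonal(source):
    if halts(source, source):  # ask the oracle about self-application...
        while True:            # ...and loop forever if it predicts halting
            pass
    return                     # ...or halt at once if it predicts looping
'''

# Running `diagonal` on its own source makes either answer from `halts`
# wrong, so no correct `halts` can exist.
```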

Sounds like you're imagining that you would not try to prove "there is no AGI that will do what you want", but instead prove "it is impossible to prove that any particular AGI will do what you want".  So aligned AIs are not impossible per se, but they are unidentifiable, and thus you can't tell whether you've got one?

[-] dr_s

Well, if you can't create on demand an AGI that does what you want, isn't that as good as saying that alignment is impossible? But yeah, I don't expect it'd be impossible for an AGI to do what we want - just for us to make sure it does on principle.

A couple observations on that:

1) The halting problem can't be solved in full generality, but there are still many specific programs where it is easy to prove that they will or won't halt.  In fact, approximately all actually-useful software exists within that easier subclass.

We don't need a fully-general alignment tester; we just need one aligned AI.  A halting-problem-like result wouldn't be enough to stop that.  Instead of "you can't prove every case" it would need to be "you can't prove any positive case", which would be a much stronger claim.  I'm not aware of any problems with results like that.

(Switching to something like "exponential time" instead of "possible" doesn't particularly change this; we normally prove that some problem is expensive to solve in the fully-general case, but some instances of the problem can still be solved cheaply.)

2) Even if we somehow got an incredible result like that, that doesn't rule out having some AIs that are likely aligned.  I'm skeptical that "you can't be mathematically certain this is aligned" is going to stop anyone if you can't also rule out scenarios like "but I'm 99.9% certain".

If you could convince the world that mathematical proof of alignment is necessary and that no one should ever launch an AGI with less assurance than that, that seems like you've already mostly won the policy battle even if you can't follow that up by saying "and mathematical proof of alignment is provably impossible".  I think the doom scenarios approximately all involve someone who is willing to launch an AGI without such a proof.

[-] dr_s

Broadly agree, though I think that here the issue might be more subtle: it's not that determining alignment is like solving the halting problem for a specific piece of software, but that aligned AGI itself would need to be something generally capable of solving something like the halting problem, which is impossible.

Agree also on the fact that this probably still would leave room for an approximately aligned AGI. It then becomes a matter of how large we want our safety margins to be.

When you say that "aligned AGI" might need to solve some impossible problem in order to function at all, do you mean

  1. Coherence is impossible; any AGI will inevitably sabotage itself
  2. Coherent AGI can exist, but there's some important sense in which it would not be "aligned" with anything, not even itself
  3. You could have an AGI that is aligned with some things, but not the particular things we want to align it with, because our particular goals are hard in some special way that makes the problem impossible
  4. You can't have a "universally alignable" AGI that accepts an arbitrary goal as a runtime input and self-aligns to that goal
  5. Something else
[-] dr_s

Something in between 1 and 2. Basically, that you can't have a program that is both general enough to act reflexively on the substrate within which it is running (a Turing machine that understands it is a machine, understands the hardware it is running on, understands it can change that hardware or its own programming) and at the same time is able to guarantee sticking to any given set of values or constraints, especially if those values encompass its own behaviour (so a bit of 3, since any desirable alignment values are obviously complex enough to encompass the AGI itself).

Not sure how to formalize that precisely, but I can imagine something to that effect being true. Or even something instead like "you can not produce a proof that any given generally intelligent enough program will stick to any given constraints; it might, but you can't know beforehand".

I can write a simple program that modifies its own source code and then modifies it back to its original state, in a trivial loop.  That's acting on its own substrate while provably staying within extremely tight constraints.  Does that qualify as a disproof of your hypothesis?
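For concreteness, here is a minimal sketch of the kind of program meant here (the filename handling and the three-cycle loop are purely illustrative; it assumes the script lives in an ordinary writable .py file):

```python
# A program that edits its own source file and then restores it, staying
# within a tight, easily verifiable constraint: the file always ends up
# byte-identical to how it started.

import os

def modify_and_restore(path: str, cycles: int = 3) -> None:
    with open(path, "r", encoding="utf-8") as f:
        original = f.read()
    try:
        for _ in range(cycles):
            with open(path, "w", encoding="utf-8") as f:
                f.write(original + "\n# temporary self-edit\n")
    finally:
        with open(path, "w", encoding="utf-8") as f:
            f.write(original)  # restore the exact original source

if __name__ == "__main__":
    modify_and_restore(os.path.abspath(__file__))
```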

[-] dr_s

I wouldn't say it does, any more than a program that can identify whether a very specific class of programs will halt disproves the Halting Theorem. I'm just gesturing in what I think might be the general direction of where a proof may lie; usually recursion is where such traps hide. Obviously a rigorous proof would need rigorous definitions and all.

"A program that can identify whether a very specific class of programs will halt" does disprove the stronger analog of the Halting Theorem that (I argued above) you'd need in order for it to make alignment impossible.

Despite the existence of the halting theorem, we can still write programs that we can prove always halt. Being unable to prove the existence of some property in general does not preclude proving it in particular cases.
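A toy illustration, where the termination argument is simply that a non-negative counter strictly decreases on every iteration:

```python
# A program we can easily prove always halts: n is clamped to be non-negative
# and strictly decreases on each pass, so the loop runs at most n times.

def countdown(n: int) -> int:
    n = max(0, n)
    steps = 0
    while n > 0:
        n -= 1       # strictly decreasing, bounded below by zero
        steps += 1
    return steps

assert countdown(10) == 10
assert countdown(-5) == 0
```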

Though really, one of the biggest problems of alignment is that we don't know how to formalize it. Even with a proof that we couldn't prove that any program was formally aligned (or even that we could!), there would always remain the question of whether formal alignment has any meaningful relation to what we informally and practically mean by alignment - such as whether it's plausible that it will take actions that extinguish humanity.

[-] dr_s

As I said elsewhere, my idea is more about whether alignment could require that the AGI is able to predict its own results and effects on the world (or the results and effects of other AGIs like it, as well as humans), and whether that might prove generally impossible, such that even an aligned AGI can only exist in an unstable equilibrium state: there exist situations in which it will become unrecoverably misaligned, and we just don't know which.

The definition problem to me feels more like it has to do with the greater philosophical and political issue that even if we could bind the AGI to a simple set of values, we don't really know what those values are. I'm thinking more about the technical part because I think that's the only one liable to be tractable. If we wanted some horrifying Pi Digit Maximizer that just spends eternity calculating more digits of pi, that's a very easily formally defined value, but we don't know how to imbue that precisely either. However, there is an additional layer of complexity when more human values are involved, in that they can't be formalised that neatly, and so we can assume that they will have to be somehow interpreted by the AGI itself, which is supposed to hold them; or the AGI will need to guess the will of its human operators in some way. So maybe that part is what makes it rigorously impossible.

Anyway yeah, I expect any mathematical proof wouldn't exclude the possibility of any alignment, not even approximate or temporary, just like you say for the halting problem. But it could at least mean that any AGI with sufficient power is potentially a ticking time bomb, and we don't know what would set it off.

[-] nem

I love this idea. However, I'm a little hesitant about one aspect of it. I imagine that any proof of the infeasibility of alignment will look less like the ignition calculations and more like a climate change model. It might go a long way to convincing people on the fence, but unless it is ironclad and has no opposition, it will likely be dismissed as fearmongering by the same people who are already skeptical about misalignment. 
More important than the proof itself is the ability to convince key players to take the concerns seriously. How far is that goal advanced by your ignition proof? Maybe a ton, I don't know. 

My point is that I expect an ignition proof to be an important tool in the struggle that is already ongoing, rather than something which brings around a state change.

[-] dr_s

Models are simulations; if it's a proof, it's not just a model. A proof is mathematical truth made word; it is, upon inspection and after sufficient verification, self-evident, and as sure as we assume the self-evident axioms it rests on to be. The question is more whether it can ever be truly proved at all, or whether it turns out to be an undecidable problem.

[-] nem

I suppose that is my real concern then. Given we know intelligences can be aligned to human values by virtue of our own existence, I can't imagine such a proof exists unless it is very architecture specific. In which case, it only tells us not to build atom bombs, while future hydrogen bombs are still on the table.

[-] dr_s

Well, architecture specific is something: maybe some different architectures other than LLMs/ANNs are more amenable to alignment, and that's that. Or it could be a more general result about e.g. what can be achieved with SGD. Though I expect there may also be a general proof altogether, akin to the undecidability of the halting problem.

Would the prize also go towards someone who can prove it is possible in theory? I think some flavor of "alignment" is probably possible and I would suspect it more feasible to try to prove so than to prove otherwise.

I'm not asking to try to get my hypothetical hands on this hypothetical prize money, I'm just curious if you think putting effort into positive proofs of feasibility would be equally worthwhile. I think it is meaningful to differentiate "proving possibility" from alignment research more generally and that the former would itself be worthwhile. I'm sure some alignment researchers do that sort of thing right? It seems like a reasonable place to start given an agent-theoretic approach or similar.

Great question.  I think the answer must be "yes."  The alignment-possible provers must get the prize, too.  

And, that would be fantastic.  Proving a thing is possible accelerates development.  (US uses atomic bomb. Russia has it 4 years later.)  Okay, it would be fantastic as long as the possibility proof did not create false security in the short term.  What matters is when alignment actually gets solved.  A peer-reviewed paper can't get the coffee.  (That thought is an aside and not enough to kill the value of the prize, IMHO.  If we prove it is possible, that must accelerate alignment work and inform it.)

Getting definitions and criteria right will be harder than raising the $10 million.  And important.  And it would contribute to current efforts.

Making it agnostic to possible/impossible would also have the benefit of removing political/commercial antibodies to the exercise, I think.

This reminds me of General Equilibrium Theory. This was once a fashionable field, where very smart people like Ken Arrow and Gérard Debreu proved the conditions for the existence of general equilibrium (demand = supply for all commodities at once). Some people then used the proofs to dismiss the idea of competitive equilibrium as an idea that could direct economic policy, because the conditions are extremely demanding and unrealistic. Others drew the opposite conclusion: Look, competitive markets are great (in theory), so actual markets are (probably) also great!

Somewhat related scenario: There were concerns about the Large Hadron Collider before it was turned on.  (And, I vaguely remember reading, to a lesser extent about a prior supercollider.)  Things like "Is this going to create a mini black hole, a strangelet, or some other thing that might swallow the earth?".  The strongest counterargument is generally "Cosmic rays with higher energies than this have been hitting the earth for billions of years, so if that was a thing that could happen, it would have already happened."

One potential counter-counterargument, for some experiments, might have been "But cosmic rays arrive at high speed, so their products would leave Earth at high speed and dissipate in space, whereas the result of colliding particles with equal and opposite momenta would be stationary relative to the earth and would stick around."  I can imagine a few ways that might be wrong; don't know enough to say which are relevant.

LHC has a webpage on it: https://home.cern/science/accelerators/large-hadron-collider/safety-lhc

Whatever "alignment" means, the "impossibility problem" you refer to could be any of 

  1. An aligned system is impossible.
  2. A provably aligned system is impossible.
  3. There is no general deterministic algorithm to determine whether or not an arbitrary system is aligned.
  4. An unaligned system is possible.

In analogy with the halting problem, 3. is the good one; 1. and 2. are obviously false, and 4. is true.

More meta, 3. could itself be unprovable. 

However, a proof or disproof (or even a proof of undecidability) of 3. has no consequences for which the metaphor of nuclear fission bombs would not be absurd, so perhaps you mean something completely different, and you've just phrased it in a confusing way? Or do you think 1. or 2. might be true? 

Why would 3 be important?  3 is true of the halting problem, yet we still create and use lots of software that needs to halt, and the truth of 3 for the halting problem doesn't seem to be an issue in practice.

[-] dr_s

One obvious avenue that comes to mind for why alignment might be impossible is the self-reflection aspect of it. On one hand, the one thing that would make AGI most dangerous - and a requirement for it to be considered "general" - is its understanding of itself. AGI would need to see itself as part of the world, consider its own modification as part of the possible actions it can take, and possibly consider other AGIs and their responses to its actions. On the other, "AGI computing exactly the responses of AGI" is probably trivially impossible (AIXI for example is incomputable). This might include AGI predicting its own future behaviour, which is kind of essential for it to stick to a reliably aligned course of action. A model of aligned AGI might be for example a "constrained AIXI" - something that can only take certain actions labelled as safe. The constraint needs to be hard, or it'll just be another term in the reward function, and potentially outweighed by other contributions. This self-reflective angle of attack seems obvious to me, as lots of counter-intuitive proofs of impossibility end up being kind of like it (Godel and Turing).
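To make the incomputability aside concrete: AIXI chooses actions by an expectimax over all environments weighted by a Solomonoff-style prior, roughly as below (a from-memory sketch of Hutter's formulation, so treat the exact indexing as approximate):

```latex
% Schematic AIXI action selection up to horizon m: expected total reward,
% with environments modelled as programs q weighted by 2^{-l(q)}.
a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
      \left[ r_k + \cdots + r_m \right]
      \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
% The inner sum ranges over *all* programs consistent with the history,
% which is what makes the quantity incomputable (only approximable).
```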

A second idea, more practical, would be inherent to LLMs specifically. What would be the complexity of aligning them so that their outputs always follow certain goals? How does it scale in number of parameters? Is there some impossibility proof related to the fact that the goals themselves can only be stated by us in natural language? If the AI has to interpret the goals which then the AI has to be optimised to care about, does that create some kind of loop in which it's impossible to guarantee actual fidelity? This might not prove impossibility, but it might prove impracticality. If alignment takes a training run as long as the age of the universe, it might as well be impossible.

Note that such a situation would also have drastic consequences for the future of civilization, since civilization itself is a kind of AGI. We would essentially need to cap off the growth in intelligence of civilization as a collective agent.

In fact, the impossibility to align AGI might have drastic moral consequences: depending on the possible utility functions, it might turn out that intelligence itself is immoral in some sense (depending on your definition of morality).

I guess that the alignment problem is "difference in power between agents is dangerous" rather than "AGI is dangerous".

Sketch of proof:

  1. An agent is either optimizing for some utility function or not optimizing for any at all. The second case seems dangerous both for it and for surrounding agents [proof needed].
  2. A utility function can probably be represented as a vector in a basis like "utility of other agents" x "power" x "time existing" x etc. More powerful agents move the world further along their utility vectors.
  3. If the utility vectors of a powerful agent (an AGI, for example) and of humans are different, then at some level of power this difference (also a vector) will become big enough that we consider the agent misaligned (a rough formalisation follows below).
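Roughly, in symbols (treating "power" as a scalar p and the utility directions as unit vectors, u_A for the agent and u_H for humans - a big simplification of the sketch above):

```latex
% u_A, u_H: unit vectors giving the agent's and humanity's utility directions.
% p >= 0: the agent's power, i.e. how far it can push the world.
\text{shift the agent produces} \approx p\,u_A,
\qquad
\text{shift humans would choose} \approx p\,u_H,
\\
\text{misalignment gap} = \lVert p\,u_A - p\,u_H \rVert = p\,\lVert u_A - u_H \rVert
\;\to\; \infty \text{ as } p \to \infty \quad \text{whenever } u_A \neq u_H .
```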

Is the organization that offers the prize supposed to define "alignment" and "AGI", or the person who claims the prize? This is unclear to me from reading your post.

Defining alignment (sufficiently rigorously that a formal proof of the (im)possibility of alignment is conceivable) is a hard thing! Such formal definitions would be very valuable by themselves (without any proofs), especially if people widely agree that the definitions capture the important aspects of the problem.

I envision that the org offering the prize would, after broad expert input, set the definitions and criteria.

Yes, surely the definition/criteria exercise would be a hard thing...but hopefully valuable.