Plan E for AI Doom
Firstly, let me be clear: I do not want to signal pessimism, nor do I think the situation with AI is hopeless. But I do think the question "what useful things can be done even if we accept the premise that AI-induced extinction is inevitable?" is worth considering, and it would be pretty awkward if the world does end and everyone realises that no one had really asked this question before (maybe someone has; let me know if that is the case).

Secondly, as you will see, I suggest some potential things to do in an attempt to answer this question. This is meant to be an illustration and a kind of proof of concept. Probably many of these ideas do not make sense. However, it is not obvious to me that no useful ideas can be invented here if some people think relatively hard, and even the ideas I describe do not sound totally hopeless to me.

There are many alignment plans, but here I want to talk about Plan E, where E stands for extinction.

The physics lever

The speed of light is a wall. An adversary with arbitrary local power cannot retroactively retrieve photons that have already left our future light cone. That observation makes a narrow window valuable: whatever we want preserved must be radiated now, redundantly, and in a form that minds unlike ours can decode with high probability. The useful direction is not “build a grand vault” but “broadcast a self-bootstrapping curriculum until the last switch flips.” The ASI can stop new transmissions and can chase old ones, but it will always be running behind the shell. The hope is that someone, somewhere (another biological civilisation, a better AI, a far-future archaeologist, or even a simulator filtering on moral worth) receives a clean, self-describing record of who we were and what we cared about. If only one such receiver exists in our future light cone, and if only one of our transmissions decodes, that still changes how much of human value reaches the wider universe.

The logic
Agreed.