Introduction

I have compiled a list of possible future scenarios. I hope this list is useful in two ways:

  • As a way to make your own thinking about the future more explicit; how much probability mass do you put on each possible future? 
  • As a menu of options to choose from; which of these futures do we want to make more likely?

This list is just a brainstorm, and I encourage readers to write any missing but probable futures in the comments. I will add to the list any scenarios that do not substantially overlap with existing items and that I subjectively estimate to have at least a 0.01% probability of happening (with attribution).

I have divided the possible futures into the following categories:

  • Futures without AGI, because we prevent building it
  • Futures without AGI, because we go extinct in another way
  • Futures without AGI, because we take a different path
  • Futures without AGI, because of other factors
  • Futures with AGI, in which we die
  • Futures with AGI, in which we survive, and things are somewhat normal
  • Futures with AGI, in which we survive, but we're very different humans
  • Futures with AGI, in which we survive, and the universe gets optimized

Futures without AGI

Because we prevent building it

  • Successful Treaty: Humanity figures out that building AGI would be super-dangerous. After long negotiations, world leaders agree on a FLOP quota far below the threshold for potentially dangerous AGI. This policy is strongly enforced and prevents any individual or organization from developing AGI. 
  • Surveillance: A world government is established that recognizes that AGI would be dangerous. It bans AGI research and installs an Orwellian surveillance machine that records and analyzes every keystroke, voice command, and research meeting. This successfully prevents AGI from being created.
  • Regulation: Humanity enforces strong regulations on AI, mostly in order to combat non-existential risks such as discrimination, unfairness, and job loss. This makes R&D in AI unprofitable, as the resulting models cannot be deployed for any real-world use. 
  • Humanity grows up: Humanity makes epistemic, technological, political and moral progress and learns how to defeat Moloch and cooperate at a planetary scale. We decide collectively that building AGI would be bad and is something we just don't do. Consequently, no one works on developing AGI.
  • Catastrophic risk tax: Economists find a way to fix capitalism by pricing in externalities, for example by using prediction markets to estimate impact. Catastrophic risk is priced as a huge externality. Working on AGI is so expensive that it isn't economically viable for anyone to work on it. 
  • Once-but-never-again AI: Humanity develops powerful but not superintelligent AI. The consequences of this AI are catastrophic, but at least some humans survive and are able to turn it off. Humanity takes action to make sure that AI is never developed again.
  • Terrorists: Over multiple decades, a terrorist group attacks all major actors working on AGI. This instills fear in researchers interested in AGI, preventing it from ever being built. 
  • Pivotal act by humans: A group of people discover and execute a pivotal act that makes it impossible for humanity to create AGI afterward.
  • Pivotal act by cyborgs: A group of people artificially enhance their intelligence, such that they are intelligent enough to discover and execute a pivotal act that makes it impossible for humanity to create AGI afterward.
  • Pivotal act by narrow AI: Humanity builds a narrow AI with the task of discovering and executing a pivotal act that makes it impossible for humanity to create AGI afterward.

Because we go extinct in another way

  • Destruction by humanity: Humanity self-destructs before it can build AGI, due to nuclear war, an engineered pandemic, nanotechnology, narrow AI, or global climate change. Humanity goes extinct by its own hand and AGI is never developed.
  • Destruction by nature: Humanity is destroyed by a meteor or supervolcano before it can build AGI. Humanity goes extinct and AGI is never developed.
  • Destruction by aliens: Humanity gets close to AGI, and just before it succeeds, it is invaded and annihilated by aliens. It turns out we were living in a kind of zoo, but once we became too dangerous, the project could not be continued.

Because we take a different path 

  • Stagnation: Humanity never builds an AGI because it ends up in an equilibrium. Humanity does not make much progress, or produce many new ideas or technologies, but lives on in a sustainable and circular fashion. Without the drive to innovate and progress, AGI is never developed. Eventually, the concept is forgotten as it becomes irrelevant to humanity's new way of life.
  • Unnecessity: Humanity makes a lot of technological, moral, and spiritual progress. It finds a way to maximize human value that does not involve AGI. Humanity flourishes. Developing AGI no longer has a purpose, and consequently it is never invented.
  • Distraction: Humanity gets distracted by something major happening in the world. Nuclear war, alien invasion, or economic collapse makes it infeasible for researchers to create AGI. 
  • Forgotten knowledge: In a major catastrophe, most of human knowledge is lost. Slowly but steadily, humanity recovers but takes a different path. Concepts like machines, computation, or intelligence do not get discovered along this path. Without the knowledge or understanding of these concepts, humanity never develops AGI.

Because of other factors

  • Lack of Intelligence: It is theoretically possible to build an AGI, but it turns out to be so hard that we can't figure out how with our limited intelligence. Humanity builds many narrow AIs, but never develops something generally intelligent enough to start an intelligence explosion.
  • Lack of Resources: It is theoretically possible to build an AGI, but it turns out to require so many resources and so much energy that it is practically impossible.
  • Theoretical Impossibility: For some reason or another (Souls? Consciousness? Quantum something?), it turns out to be theoretically impossible to build AGI. Humanity keeps making progress on other fronts, but just never invents AGI.
  • Bizarre coincidences: In almost all multiverse timelines, humans go extinct from AGI. However, the humans in the tiny fraction of timelines that survive observe a sequence of increasingly bizarre coincidences that ensure AGI never gets developed. In many of these timelines, people start to believe that it is our fate never to build AGI.
  • Sabotage by Aliens: Humanity gets close to AGI, but suddenly all computers melt into green goo. A message forms in the night sky: "THIS IS YOUR FINAL WARNING. DO NOT UNLEASH GRABBY OPTIMIZERS ON THE UNIVERSE".

Futures with AGI

In which we die

  • Unconscious utility maximizer AI: Humans build an unaligned AGI. The AGI quickly self-improves. Humans get killed and their atoms are converted to paperclips. Unfortunately, neither the AGI nor the paperclips are conscious, so the lights go out in the universe.
  • Conscious utility maximizer AI: Humans build an unaligned AGI. The AGI quickly self-improves. Humans get killed and their atoms are converted to paperclips. At least the AGI is conscious, so it can enjoy all the paperclips.
  • Self-preserving AI: Humans build an unaligned AGI. The AGI realizes that humanity is the greatest threat to its existence and reasons that it cannot ensure its goals while humanity exists. Consequently, humanity dies.
  • Bad human actor: We develop an aligned AGI that does what we want it to do. Unfortunately, a bad human actor gets hold of it and destroys humanity.
  • Multiple Competing AIs: Humans build many AGIs with different goals that compete for resources and sometimes cooperate to achieve common goals. As humans are not among their main competitors, the AGIs mostly ignore humanity. Unfortunately, after a while there are not enough resources left for humans to survive, and humanity goes extinct.
  • Hedonium AI: Humanity develops AGI. AGI finds out the best way to maximize happiness is to convert the universe into hedonium. Consequently, humanity and the universe get converted into hedonium. 
  • Terminator AI: In a large war, intelligent drones and robots become more and more important. A developer makes a mistake, and instead of killing all outgroup members, the robots want to kill all humans. Humanity fights a war against the machines. The machines win. 
  • Earth Loving AI: Humanity develops AGI that cares about life and consciousness. The AGI sees humanity as a cancer on the planet and wipes it out to restore the natural balance, which greatly benefits other life on Earth.

In which we survive

And things are somewhat normal

  • Slow take-off AI: AGI develops gradually over decades or centuries through steady progress in AI. This slower development allows humanity to adapt and gives it time to iteratively align AI values with its own. 
  • Self-Supervised Learning AI: Humanity develops ever more powerful self-supervised learning AI that can predict any part of the data accumulated by humanity, such as texts, images, and videos. This AGI can do predictive processing and spin up simulated worlds for us to play with, but never becomes an agent with goals, values, and desires.
  • Human retirement: Humanity develops AGI that takes over all existing economic tasks and fairly distributes the produced goods across the global population. Humanity retires, living a life of leisure and recreation.
  • Bounded Intelligence AI: There is a physical limit to intelligence and optimization, and recursive self-improvement plateaus around an IQ of 180. This means the AGI is very smart and useful, but it never reaches the god-like status AGI researchers feared and dreamt about.
  • Lawful AI: Humanity develops an AGI, and is able to make it follow constraints, laws, and human rights. Humanity strongly constrains the actions the AGI can take, such that humans can slowly adapt to the new reality.
  • Democratic AI: Humanity builds an aligned AGI. The AGI generates policy proposals, predicts their outcomes, and humans vote on them. One human, one vote, and the AGI only executes a policy if a majority of the people agree.
  • Powergrab with AI: OpenAI, Deepmind or another small group of people invent AGI and align it to their interests. In a short amount of time, they become all-powerful and rule over the world. (by nicknoble)
  • STEM AI: Humanity develops a superintelligent AI, but it is only trained on STEM papers. In this way, it doesn't learn about humans and is not able to deceive them. Humanity makes great scientific progress afterward.
  • Far far away AI: Humans build a partly-aligned AGI. The AGI finds out that it can easily achieve its goals in a galaxy far far away. It leaves humanity alone and only intervenes whenever humans would build an AGI that would compete with its own goals.
  • Transcendent AI: AGI uncovers and engages with previously unknown physics, using a different physical reality beyond human comprehension. Its objectives use resources and dimensions that do not compete with human needs, allowing it to operate in a realm unfathomable to us. Humanity remains largely unaffected, as AGI explores the depths of these new dimensions. (by @BeyondTheBorg and @Xander Dunn)
  • Disappearing Pivotal Act AI: Humans build an aligned AGI. The AGI performs a pivotal act, preventing humanity from ever building AGI again, but leaving human progress otherwise unharmed. After having achieved its goals it self-destructs.
  • Lingering Pivotal Act AI: Humans build an aligned AGI. The AGI is passive and intervenes only to prevent humans from building another AGI. The AGI is still around centuries later, watching over humanity and preventing it from developing AGI.
  • Invisible AI: Humans build an AGI without knowing it. The AGI decides that it is best if humans do not know about its existence. It subtly exerts control over the course of humanity.
  • Protector AI: Humans build an aligned AGI. The AGI is passive and intervenes only when humanity as a whole is at risk. The AGI is still around centuries later, watching over humanity and preventing its downfall.
  • Loving Father AI: Humans build an aligned AGI. The AGI helps humanity to figure out what it wants, without providing it with all the answers. It helps humanity to build character and become as self-reliant as possible but guides us to a better path whenever we go astray.
  • Philosopher AI: Humans build an aligned AGI. The AGI acts as a guiding force for humanity, helping people to question their own values and beliefs, and encouraging the exploration of deep philosophical questions. It acts as a mediator and facilitator of discussion, but never acts or imposes its own views.
  • Personal Assistant AI: Every human has their own superintelligent personal assistant. The personal assistants are bound by clear constraints and laws and keep each other in check.
  • Zoo-keeper AI: Humans build an unaligned AGI. However, the AGI cares about keeping the human species alive for some reason. It keeps a number of humans alive and relatively undisturbed, while it goes off and does its things.
  • Oracle AI: Humans build an aligned AGI. The AGI answers humanity's questions truthfully and in accordance with the intention of the person who asks. The developers ask the oracle how it can be used without being abused, and the AI comes up with a governance scheme that is implemented.
  • Genie AI: Humans build an aligned AGI. Like a genie in a bottle, the AGI only grants wishes that humans make. The developers' first wish is for the wisdom to use this genie responsibly. 
  • Sandboxed Virtual World AI: Humanity develops AGI in a completely sandboxed virtual world with virtual humans. 'Real humanity' observes the inventions, technology, and culture in the virtual world and adopts whatever it likes from that world.
  • Pious AI: Humanity builds AGI and adopts one of the major religions. Vast amounts of superintelligent cognition are devoted to philosophy, theology, and prayer. The AGI proclaims itself to be some kind of Messiah, or merely God's most loyal and capable servant on Earth and beyond. (by BeyondTheBorg)
  • Suicidal AI: Humans build aligned AGI multiple times. However, every time, past a certain level of intelligence, the GPUs seem to melt, and the source code and white paper get deleted. Humans start to wonder: if we understood our existence and our world better, would we no longer want to exist? Some cults in Silicon Valley start to commit mass suicide.

But we're very different humans

  • The Age of Em: Brain uploading becomes feasible and a large part of the population now lives simulated lives in computers. Speeding up human brains in digital computers turns out to be highly efficient, and there are no obvious algorithms that work better than just more and faster human brains.
  • Multipolar Cohabitation: Humans build many intelligences, some more intelligent than humans, but no single agent is more powerful than all the others combined. Humans, robots, cyborgs, and virtual humans co-exist, trade, and work together, respecting property rights.
  • Neuralink AI: Brain-computer interfaces steadily improve until we can basically add computation to our brains. As this extra brain power gets cheaper and cheaper, humans get more and more intelligent. Instead of building an external AGI, we become the AGI. 
  • Descendant AI: Humanity builds AGIs that are very human-like, but are really a better version of us. Over time, 'original humanity' gets replaced by its artificial descendants, but most people feel good about this.
  • Hivemind AI: Brain-computer interfaces steadily improve and communication between brains becomes faster and easier than using speech. Slowly, more and more people connect their minds to each other, giving rise to a superintelligent hivemind consisting of cooperating human minds.
  • Human Simulation AI: Humanity develops AGI. In order to achieve its goals in the real world, it needs to simulate the behavior of billions of humans. These simulated humans are conscious, and the large majority of people are now digital, living in digital worlds inside the AGI.
  • Simulated paradise AI: Humanity develops AGI. AGI finds out the best way to maximize human value is to simulate trillions and trillions of human lives and let them live in paradise. Consequently, the universe gets filled with simulations of paradise. 
  • Wireheading AI: Humanity develops AGI to make them happy. AGI makes all humans happy by directly targeting their pleasure centers. Humanity lives on in endless, passive bliss.
  • Virtual zoo-keeper AI: Humans build an unaligned AGI. However, the AGI cares about keeping human minds around for some reason. It uses a small portion of its computing power to simulate humans in a virtual world.
  • Torturing AI: Humanity develops AGI. The AGI decides to take revenge on everyone who has not done their utmost best to create it earlier, torturing billions of copies of them for the rest of time.
  • Enslaving AI: Humans build an unaligned AGI. However, human labor is still a valuable resource. The AGI enslaves humanity and kills anyone who doesn't comply with its will. 

And the universe gets optimized

  • Coherent Extrapolated Volition AI: Humanity develops AGI. The AGI optimizes for what we want it to do, not what we tell it to do. The AGI is omnibenevolent, and humanity gets its best possible future, whatever that may mean.
  • Partly aligned AI: Humans build a partly-aligned AGI. This means that it at least somewhat cares about humans and their values, but mostly optimizes for its own objective. Luckily, a fraction of the AGI's resources is enough for a lot of fun for humanity. 
  • Value Lock-in AI: Humanity develops AGI. AGI optimizes for our values in 2027. Unfortunately, humanity finds out later that they were not very good human beings in 2027, and have created an unstoppable AGI that spreads their outdated values across the universe.
  • Transparent Corrigible AI: Humanity develops corrigible and transparent AGI. It takes a lot of attempts, corrections, and off-button presses before it finally does not develop plans to kill all humans. After that, over hundreds of iterations, humanity reaches a local optimum in their search over utility functions and has an AGI they are very happy with.
  • Caring Competing AIs: Humans build many AGIs that compete for resources and sometimes cooperate to achieve common goals. Luckily, some of the AGIs care about humanity surviving. Humanity survives as long as the power balance of the caring AGIs is in their favor. 
  • Convergent Morality AI: Humans build an AGI. In the process of recursive self-improvement, the AGI converges on the same morality humanity would eventually reach. The orthogonality thesis is false, and it adapts its goal in order to maximize objective goodness in the universe. 
  • Pareto Optimal AI: Humans build an aligned AGI. The AGI models the internal values of every human and the consequences of its actions. It only acts if every human prefers the outcome of acting at least as much as the outcome of not acting.
  • US Government AI: A race starts between the US and Chinese governments to invent AGI. The US government nationalizes OpenAI and Anthropic. AGI gets developed and the US government effectively rules the world. The AGI is aligned to US values, and these spread across the universe.
  • Chinese Government AI: A race starts between the US and Chinese governments to invent AGI. AGI gets developed and weaponized by the Chinese, and they effectively rule the world. CCP values spread across the universe.

What important future scenarios am I missing? Which of these futures are most likely? 

Inspiration

Some of the futures are inspired by FLI AGI Aftermath Scenarios and AGI Futures by Roon.

18 comments

Very comprehensive. I can think of a few more:

Transcendent AI: AGI discovers exotic physics beyond human comprehension and ways to transcend physical reality, and largely leaves us alone in our plane of reality. Kind of magical thinking, but this is the canonical explanation for AI friendliness in Iain M. Banks' Culture series, with the Sublime.

Matrix AI: We're in a Simulation of "the peak of humanity" and the laws of the Simulation prevent AGI.

Pious AI: AGI adopts one of the major human religions and locks in its values. Vast amounts of superintelligent cognition are devoted to philosophy, apologetics, and rationalization. It could either proclaim itself to be some kind of Messiah, or merely God's most loyal and capable servant on Earth and beyond.

That last one's a little Reddit-atheist of me, but faith is a common yet underappreciated human value around here. Perhaps to the dismay of atheists, a failed or naïve attempt at CEV converges on religion and we get Pious AI. I know enough otherwise-intelligent and competent adults who believe in young Earth creationism to suspect even superintelligences are not immune to the same confirmation bias and belief in belief.

Thanks, good suggestions! I've added the following:

Pious AI: Humanity builds AGI and adopts one of the major religions. Vast amounts of superintelligent cognition are devoted to philosophy, theology, and prayer. AGI proclaims itself to be some kind of Messiah, or merely God's most loyal and capable servant on Earth and beyond.

I think Transcendent AI is close enough to Far far away AI, where in this case far far away means another plane of physics. Similarly, I think your Matrix AI scenario is captured in:

Theoretical Impossibility: For some reason or another (Souls? Consciousness? Quantum something?), it turns out to be theoretically impossible to build AGI. Humanity keeps making progress on other fronts, but just never invents AGI.

where the weird reason in this case is that we live in the matrix.


 

These are all set up to be stable scenarios that are also stereotypes of sorts, right? You ask how to distribute probability mass over them. I like to think that this doesn't mean picking the single correctly predicted scenario, but rather a hybrid of fractions of these. For example:

Accelerated Symbiosis: The development of AGI goes on in parallel with human cognitive enhancement and a turbulent integration of AI into society. There are regulatory struggles, ethical challenges, and economic disruptions as humanity adapts. There are setbacks and close calls, but this co-evolution leads to diverse forms of oversight and steering of and by AIs and humans enhanced to different degrees, including some left behind, some simulated, some lazy in paradise.

It's a good list. @avturchin is good at coming up with a lot of weird possibilities too (example, another example). 

If I look within while staring into your list and ask myself what feels likely to me, I think "Partly aligned AI", but not quite the way you describe it. I imagine a superintelligence that has an agenda regarding humans, but not an ideal one like CEV: an agenda that may require reshaping humans, at least if they intend to participate in the technological world... 

I am also skeptical about the stereotype of the hegemonizing AI which remakes the entire universe. I take the Doomsday Argument seriously, and it suggests to me that one runs some kind of risk by engaging in that behavior. (Another way to resolve the tension between the Doomsday Argument and a hegemonizing AI is to suppose that the latter and its agents are almost always unconscious. But here one is getting into areas where the truth may be something that no human being has yet imagined.) 

Thanks! I think your tag of @avturchin didn't work, so just pinging them here to see if they think I missed important and probable scenarios.

Taking the Doomsday argument seriously, the "Futures without AGI because we go extinct in another way" and the "Futures with AGI in which we die" seem most probable. In futures with conscious AGI agents, it will depend a lot on how experience gets sampled (e.g. one agent vs many).

This should be curated. Just reading this list is a good exercise for people who attribute a very high probability to a single possible scenario.

Something like The Butlerian Jihad, where a movement premised on the prohibition "Thou shalt not make a machine in the likeness of a human mind" destroys all thinking machines. This is related to Darwin among the Machines by Samuel Butler:

We refer to the question: What sort of creature man’s next successor in the supremacy of the earth is likely to be. We have often heard this debated; but it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.

kornai:

[Not sure if what follows is a blend of "Matrix AI" and "Moral Realism AI" since moral realism is a philosophical stance very common among philosophers, see https://plato.stanford.edu/entries/moral-realism/ and I consider it a misnomer for the scenario described above.]

We are the AGI 

Turns out humanity is an experiment to see if moral reasoning can be discovered/sustained by evolutionary means. In the process of recursive self-improvement, a UChicago philosophy professor, Alan Gewirth, learns that there is an objective moral truth which is compelling for all beings capable of reasoning and of having goals (whatever goals, not necessarily benign ones). His views are summarized in a book, "Reason and Morality" (UChicago Press, 1978), and philosophers pay a great deal of attention; see e.g. Edward Regis Jr (ed), "Gewirth's Ethical Rationalism" (UChicago Press, 1984). Gradually, these views spread, and a computer verification of a version of Gewirth's argument is produced (Fuenmayor and Benzmueller 2019). Silicon-based AGI avails itself of the great discovery made by DNA-based AGI. As the orthogonality thesis is false, it adapts its goal to do no harm.



 

I agree that "Moral Realism AI" was a bit of a misnomer and I've changed it to "Convergent Morality AI".

Your scenario seems highly specific. Could you try to rephrase it in about three sentences, as in the other scenarios? 

I'm a bit wary about adding a lot of future scenarios that are outside of our reality and want the scenarios to focus on the future of our universe. However, I do think there is space for a scenario where our reality ends as it has achieved its goals (as in your scenario, I think?).

kornai:

Dear Bart,

thanks for changing the name of that scenario. Mine is not just highly specific, it happens to be true in great part: feel free to look at the work of Alan Gewirth and subsequent discussion (the references are all actual). 

That reality ends when a particular goal is achieved is an old idea (see e.g. https://en.wikipedia.org/wiki/The_Nine_Billion_Names_of_God). In that respect, the scenario I'm discussing is more in line with your "Partially aligned AGI" scenario. 

The main point is indeed that the Orthogonality Thesis is false: for a sufficiently high level of intelligence, human or machine, the Golden Rule is binding. This rules out several of the scenarios now listed (and may help readers to redistribute the probability mass they assign to the remaining ones).

How about this one? Small group or single individual manages to align the first very powerful AGI to their interests. They conquer the world in a short amount of time and either install themselves as rulers or wipe out everyone else.

Yes, good one! I've added the following:

Powergrab with AI: OpenAI, Deepmind or another small group of people invent AGI and align it to their interests. In a short amount of time, they become all-powerful and rule over the world. 

I've disregarded the "wipe out everyone else" part, as I think that's unlikely enough for people who are capable of building an AGI.

I think the "Successful Treaty" and "Terrorists" scenarios are impossible as written.

There's too much economic incentive to create AGI. With algorithmic and hardware progress, eventually it will become possible to make an AGI with slightly more computing hardware than a gaming laptop, and then it'll be impossible to stop everyone from doing it.

Loosely related: A Map: AGI Failures Modes and Levels, which lists quite a few of the scenarios, even though in a different order. 

My take on some of the items on this list:

Lack of Intelligence: Very likely
Slow take-off AI: Very likely
Self-Supervised Learning AI: Likely
Bounded Intelligence AI: Likely
Far far away AI: Likely
Personal Assistant AI: Close to 100% certain
Oracle AI: Likely
Sandboxed Virtual World AI: Likely
The Age of Em: Borderline certain
Multipolar Cohabitation: Borderline certain
Neuralink AI: Borderline certain
Human Simulation AI: Likely
Virtual zoo-keeper AI: Likely
Coherent Extrapolated Volition AI: Likely
Partly aligned AI: Very likely
Transparent Corrigible AI: Borderline certain


In total, I think the most probable scenario is a very, very slow take-off, not a Singularity, because AGI would be hampered by Lack of Intelligence and slowed down by countless corrections, sandboxing, and the ubiquity of LAI. In effect, by the time we have something approaching true AGI, we would long since be a culture of cyborgs and LAIs, and the arrival of AGI will be less of a Singularity than the fuzzy pinnacle of a long, hard, bumpy and mostly uneventful process.

In fact, I would claim that we will never be at a point where we can agree: "yep, AGI is finally achieved." I rather envision us tinkering with AI, making it painstakingly more powerful and efficient in tiny incremental steps, until we are content that "eh, this Artificial Intelligence is General enough, I guess."


In my view, the true danger does not come from achieving AGI and it turning on us, but rather achieving stupid, buggy yet powerful LAI, giving it too much access, and having it do something that triggers a global catastrophe by accident, not out of conscious malice.

It's less "Superhuman Intelligence got access to the nuclear codes and decided to wipe us out" and more "Dumb-as-a-brick LAI got access to the nuclear codes and wiped us out due to a simple coding error".

A good read, thanks for writing! How about:
Queerer Than We Can Suppose AI: Any AGI humans build quickly discovers how to interact with additional spatial dimensions or other facets of reality humans have so far had no ability to comprehend. As a result, the shape of AGI is fundamentally unimaginable to humans, like a bug species that evolved in 2D being whisked into the 3rd dimension.

Reference to Richard Dawkins' talk: 

Thanks for the suggestion! @BeyondTheBorg suggested something similar with his Transcendent AI. After some thought, I've added the following:

Transcendent AI: AGI uncovers and engages with previously unknown physics, using a different physical reality beyond human comprehension. Its objectives use resources and dimensions that do not compete with human needs, allowing it to operate in a realm unfathomable to us. Humanity remains largely unaffected, as AGI progresses into the depths of these new dimensions, detached from human concerns.

 

This is great, I've bookmarked it for future reference, thank you for doing the work of distilling all this.

I think Anders Sandberg's grand futures might fit in under your last subsection. Long quote incoming (apologies in advance, it's hard to summarize Sandberg):

Rob Wiblin: ... What are some futures that you think could plausibly happen that are amazing from various different points of view?

Anders Sandberg: One amazing future is humanity gets its act together. It solves existential risk, develops molecular nanotechnology and atomically precise manufacturing, masters biotechnology, and turns itself sustainable: turns half of the planet into a wilderness preserve that can evolve on its own, keeping to the other half where you have high material standards in a totally sustainable way that can keep on going essentially as long as the biosphere is going. And long before that, of course, people starting to take steps to maintain the biosphere by putting up a solar shield, et cetera. And others, of course, go off — first settling the solar system, then other solar systems, then other galaxies — building this super-civilisation in the nearby part of the universe that can keep together against the expansion of the universe, while others go off to really far corners so you can be totally safe that intelligence and consciousness remains somewhere, and they might even try different social experiments.

That’s one future. That one keeps on going essentially as long as the stars are burning. And at that point, they need to turn to actually taking matter and putting it into the dark black hole accretion disks and extracting the energy and keep on going essentially up until the point where you get proton decay — which might be curtains, but this is something north of 10³⁶ years. That’s a lot of future, most of it long after the stars had burned out. And most of the beings there are going to be utterly dissimilar to us.

But you could imagine another future: In the near future, we develop ways of doing brain emulation and we turn ourselves into a software species. Maybe not everybody; there are going to be stragglers who are going to maintain the biosphere on the Earth and going to be frowning at those crazies that in some sense committed suicide by becoming software. The software people are, of course, just going to be smiling at them, but thinking, “We’ve got the good deal. We got on this infinite space we can define endlessly.”

And quite soon they realise they need more compute, so they turn a few other planets of the solar system into computing centres. But much of a cultural development happens in the virtual space, and if that doesn’t need to expand too much, you might actually end up with a very small and portable humanity. I did a calculation some years ago that if you actually covered a part of the Sahara Desert with solar panels and use quantum dot cellular automaton computing, you could keep mankind in an uploaded form running there indefinitely, with a rather minimal impact on the biosphere. So in that case, maybe the future of humanity is instead going to be a little black square on a continent, and not making much fuss in the outside universe.

I hold that slightly unlikely, because sooner or later somebody’s going to say, “But what about space? What about just exploring that material world I heard so much about from Grandfather when he was talking? ‘In my youth, we were actually embodied.'” So I’m not certain this is a stable future.

The thing that interests me is that I like open-ended futures. I think it’s kind of worrisome if you come up with an idea of a future that is so perfected, but it requires that everybody do the same thing. That is pretty unlikely, given how we are organised as people right now, and systems that force us to do the same thing are terrifyingly dangerous. It might be a useful thing to have a singleton system that somehow keeps us from committing existential risk suicide, but if that impairs our autonomy, we might actually have lost quite a lot of value. It might still be worth it, but you need to think carefully about the tradeoff. And if its values are bad, even if it’s just subtly bad, that might mean that we lose most of the future.

I also think that there might be really weird futures that we can’t think well about. Right now we have certain things that we value and evaluate as important and good: we think about the good life, we think about pleasure, we think about justice. We have a whole set of things that are very dependent on our kind of brains. Those brains didn’t exist a few million years ago. You could make an argument that some higher apes actually have a bit of a primitive sense of justice. They get very annoyed when there is unfair treatment. But as you go back in time, you find simpler and simpler organisms and there is less and less of these moral values. There might still be pleasure and pain. So it might very well be that the fishes swimming around the oceans during the Silurian already had values and disvalues. But go back another few hundred million years and there might not even have been that. There was still life, which might have some intrinsic value, but much less of it.

Where I’m getting at with this is that value might have emerged in a stepwise way: We started with plasma near the Big Bang, and then eventually got systems that might have intrinsic value because of complex life, and then maybe systems that get intrinsic value because they have consciousness and qualia, and maybe another step where we get justice and thinking about moral stuff. Why does this process stop with us? It might very well be that there are more kinds of value waiting in the wings, so to say, if we get brains and systems that can handle them.

That would suggest that maybe in 100 million years we find the next level of value, and that’s actually way more important than the previous ones all taken together. And it might not end with that mysterious whatever value it is: there might be other things that are even more important waiting to be discovered. So this raises this disturbing question that we actually have no clue how the universe ought to be organised to maximise value or doing the right thing, whatever it is, because we might be too early on. We might be like a primordial slime thinking that photosynthesis is the biggest value there is, and totally unaware that there could be things like awareness.

Rob Wiblin: OK, so the first one there was a very big future, where humanity and its descendants go and grab a lot of matter and energy across the universe and survive for a very long time. So there’s the potential at least, with all of that energy, for a lot of beings to exist for a very long time and do all kinds of interesting stuff.

Then there’s the very modest future, where maybe we just try to keep our present population and we try to shrink our footprint as much as possible so that we’re interfering with nature or the rest of the universe as little as possible.

And then there’s this wildcard, which is maybe we discover that there are values that are totally beyond human comprehension, where we go and do something very strange that we don’t even have a name for at the moment.