Summary

Black swans are rare, high-impact events that seem predictable only in retrospect. Although they are rare, a single black swan can matter more than all other events combined because it is so extreme. Black swan events are often unknown unknowns before they happen and are inherently difficult to predict.

The creation of AGI could be a black swan event: improbable, unpredictable, and extremely impactful. If AGI is a black swan event, the following claims will likely be true:

  1. It will be difficult to predict when the first AGI will be created, and it could arrive at an unexpected time. Since black swans are often outliers, many predictive models could fail completely.
  2. It will be difficult to predict how AGI will affect the world, and a wide range of scenarios are possible.
  3. The impact of AGI could be extreme, transforming the world in an unprecedented way that does not follow previous historical trends (e.g. by causing human extinction).
  4. Past experience and analogies might have limited or no value when predicting the future.

Introduction to black swans

A black swan is an improbable and extreme outlier event with the following three properties:

  1. The event is a surprise to the observer or subjectively improbable.
  2. It has a major impact.
  3. It is inappropriately rationalized in hindsight to make it seem more predictable than it really was.

Many variables, such as human height, follow a thin-tailed Gaussian distribution in which extreme values are very unlikely and have only a small effect on aggregate statistics such as the average.

Black swans cannot be modeled with thin-tailed Gaussian distributions. Instead, they tend to follow long-tailed distributions such as power laws, whose tails taper off slowly enough that extreme events remain plausible. Many quantities follow a power law, such as the number of books sold by authors, the number of casualties in wars, and the net worth of individuals.
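
To make the thin-tailed versus long-tailed contrast concrete, here is a minimal Python sketch (my illustration, not from the original post) comparing how quickly tail probabilities fall off under a Gaussian and under a Pareto power law; the parameter values are arbitrary and chosen only to show the shape of the difference.

```python
# Illustrative only: compare tail probabilities of a thin-tailed Gaussian
# with a long-tailed Pareto (power-law) distribution. Parameters are arbitrary.
from scipy.stats import norm, pareto

gaussian = norm(loc=1.0, scale=0.5)  # thin-tailed
power_law = pareto(b=1.5)            # long-tailed, tail exponent 1.5

for x in [2, 5, 10, 50]:
    p_gauss = gaussian.sf(x)    # P(X > x) under the Gaussian
    p_power = power_law.sf(x)   # P(X > x) under the power law
    print(f"P(X > {x:>2}): Gaussian = {p_gauss:.2e}, power law = {p_power:.2e}")

# The Gaussian tail collapses toward zero almost immediately, while the
# power-law tail shrinks slowly, so extreme outliers remain plausible.
```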

Although black swans are rare, heavy-tailed distributions have a property sometimes called max-sum equivalence: the single most extreme event can be comparable in size to the sum of all the other events. For example, a single severe earthquake could cause as much damage as all other earthquakes combined.
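
Here is a minimal simulation of that property (my own sketch, with an arbitrary tail exponent chosen to make the effect visible): it draws heavy-tailed "event sizes" and checks how often the single largest event rivals all the others combined.

```python
# Illustrative simulation: under a sufficiently heavy-tailed distribution,
# the single largest event can rival the sum of all the others.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_events = 1_000, 1_000
# Pareto-distributed "event sizes" with tail exponent 0.5 (arbitrary choice).
damages = rng.pareto(a=0.5, size=(n_trials, n_events)) + 1

largest = damages.max(axis=1)
rest = damages.sum(axis=1) - largest

print(f"median share of the total caused by the largest event: {np.median(largest / (largest + rest)):.0%}")
print(f"trials where one event outweighs all others combined: {(largest > rest).mean():.0%}")
```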

According to Nassim Nicholas Taleb, author of the book The Black Swan, many of the most important events in history were black swans: for example, the rise of the internet, the 9/11 terrorist attacks in 2001, and World War I.

"I stop and summarize the triplet: rarity, extreme 'impact', and retrospective (though not prospective) predictability. A small number of Black Swans explains almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives." - The Black Swan

The name 'black swan' comes from the once-common belief among Europeans that all swans were white. Every observed swan had been white until black swans were first sighted in Australia in 1697. The example shows that confirming evidence has less power than disconfirming evidence and that our ability to predict the future from past observations is limited.

This problem is also known as Hume’s problem of induction: the problem of justifying predictions about the unobserved future from the observed past. Taleb uses the life of a Christmas turkey to illustrate it: the turkey predicts that it will be alive tomorrow, and each passing day makes it more confident that its next prediction will be correct, until it is unexpectedly slaughtered.

Since black swans are improbable and extreme outlier events, they are inherently difficult to predict. Instead of predicting them, Taleb recommends building resilience.

Another key concept related to black swans is unknown unknowns. After the 9/11 terrorist attacks, many people became fearful of flying on airplanes. In this case, there was a known unknown, or partial knowledge: people knew that flying carried a risk of attack but didn’t know where or when the next attack would happen.

But black swans tend to involve much less knowledge: they are far more random and unpredictable, and they come from unknown unknowns. Before the attacks, most people wouldn’t have known such an attack was even a risk. World War I was not predicted in advance[1] and neither was the fall of the Soviet Union. A related idea is the narrative fallacy: when we imagine a risk, we tend to have in mind a detailed story of what could go wrong, such as a plane crashing or an attack in a particular place, even though any such specific scenario is unlikely [2].

Black swans and AGI

How are black swans relevant to AGI? In this section, I’ll model the creation of AGI as a future black swan event. Note that black swans are just one mental model: they may not apply to AGI, and other models may be more appropriate. That said, I think the concept of black swans is probably relevant and offers a distinctive way of seeing the future. The following claims are hypotheses that follow from modeling AGI as a black swan event, and whether they are true is conditional on AGI actually being one.

It will be difficult to predict when AGI will be created

There have been many attempts to predict when AGI will be created using methods such as hardware trend extrapolation and expert elicitation (surveys).

But if AGI is a black swan, it could be very difficult or even impossible to predict when it will be created. There are many historical examples of eminent scientists and inventors making poor predictions. For example, Ernest Rutherford declared that “anyone who looked for a source of power in the transformation of the atoms was talking moonshine” just a day before Leo Szilard conceived of the nuclear chain reaction. Wilbur Wright predicted that powered flight was 50 years away just two years before he and his brother achieved it.

Even though I haven’t met you, I can estimate with high confidence that your height is between 1 and 2 meters. This is because there are many past observations of human heights and height can be modeled with a thin-tailed Gaussian distribution. Consequently, I can give a relatively narrow 95% interval.
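
The reasoning behind that narrow interval can be written down directly; here is a rough sketch assuming (my illustrative numbers) that adult height is Gaussian with a mean of 1.7 m and a standard deviation of 0.1 m.

```python
# Rough sketch: a 95% interval for height under a thin-tailed Gaussian model.
# The mean (1.7 m) and standard deviation (0.1 m) are illustrative assumptions.
from scipy.stats import norm

height = norm(loc=1.7, scale=0.1)
low, high = height.interval(0.95)                     # central 95% interval
print(f"95% interval: {low:.2f} m to {high:.2f} m")   # roughly 1.50 m to 1.90 m
print(f"P(height > 2.5 m): {height.sf(2.5):.1e}")     # essentially zero under a thin tail
```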

In contrast, statistical methods are less useful for unusual events such as technological breakthroughs, which often happen just once and are outliers. Research impact follows a power law, and a breakthrough could be made by a single extraordinary person (e.g. Albert Einstein) or a small group of people.

We don’t know how difficult it is to create an AGI. It’s not entirely clear which trends are relevant or even how close we are to AGI. And since we don’t know what the first AGI will look like, we might not even recognize the first AGI as such when it arrives.

When thinking about AGI, it’s easy to anthropomorphize it. For example, one can imagine AGI as a virtual office worker that can do any task a human worker could do. But the first AGI will likely have a profile of strengths and weaknesses very different from a typical human’s. Therefore it’s not clear what ‘human-level’ even means or how to recognize it.

Recommendations

Although predictions are useful, it’s important not to read too much into them, as there is a lot of uncertainty about the future.

When reading a prediction such as “there is a 50% chance of AGI by 2040”, consider the possibility that the underlying probability distribution is very wide. And even a wide distribution can miss: the actual event may fall far outside its bulk, since black swans are often outliers.
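
To see how little a single median conveys, here is a toy sketch (entirely my own made-up numbers) of two timeline distributions that both imply roughly a 50% chance of AGI by 2040 but have very different spreads.

```python
# Toy illustration: two hypothetical AGI-arrival distributions with the same
# median (~2040, counting from 2023) but very different spreads. All numbers
# are made up for illustration.
from scipy.stats import norm, lognorm

narrow = norm(loc=17, scale=3)       # "2040, give or take a few years"
wide = lognorm(s=1.0, scale=17)      # same median, much fatter right tail

for name, dist in [("narrow", narrow), ("wide", wide)]:
    lo, hi = dist.interval(0.9)      # central 90% interval, in years from 2023
    print(f"{name:>6}: median ~2040, 90% interval {2023 + lo:.0f} to {2023 + hi:.0f}")
```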

Therefore I think we should be prepared for a wide range of future scenarios including very short or long timelines.

It will be difficult to predict how AGI will affect the world

There are several vivid scenarios explaining how AGI could affect the world. For example, one scenario in the book Superintelligence describes a single AI singleton with a ‘decisive strategic advantage’ that ‘strikes’ and eliminates humanity. Another familiar thought experiment in the AI safety field is the superintelligent paperclip AI that converts the universe into paperclips because it is programmed to value only paperclips.

These scenarios are often useful for explaining principles such as the orthogonality thesis or how AGI could pose an existential risk. But by focusing too much on them, we can fall into the narrative or conjunction fallacy: assigning detailed, specific scenarios probabilities that are too high.

Remember that the more details you add, the less likely your prediction is. Here is a list of predictions ordered from most to least likely:

  1. AGI will be created.
  2. AGI will be created and take over the world.
  3. AGI will be created and take over the world and convert the world into paperclips.

Note the emphasis on the word ‘and’. Since P(A and B) <= P(A), the scenarios are ordered from most to least probable, despite the fact that the third scenario is the most vivid and compelling.
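
A toy numerical version of this ordering, with arbitrary probabilities (only the direction of the inequality matters):

```python
# Toy numbers only; the point is that every added "and" can only shrink the probability.
p_agi = 0.8                         # P(AGI is created) -- arbitrary
p_takeover_given_agi = 0.3          # P(takeover | AGI) -- arbitrary
p_clips_given_takeover = 0.1        # P(paperclips | takeover) -- arbitrary

p1 = p_agi
p2 = p_agi * p_takeover_given_agi
p3 = p_agi * p_takeover_given_agi * p_clips_given_takeover
print(f"{p1:.3f} >= {p2:.3f} >= {p3:.3f}")  # 0.800 >= 0.240 >= 0.024
```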

Even the concept of AGI itself is dubious because it assumes a known system that will be developed at some point in the future. In other words, AGI is a known unknown. But black swans are usually unknown unknowns.

We don’t know what form AGI will take. Rather than describing a specific future system, I think the term AGI is intentionally vague and defines a space of possible future AI systems, each with different strengths and weaknesses, levels of alignment, and architectures [3].

Recommendations

Be careful when using the term AGI and be mindful that it defines a space of possible systems with certain capabilities. Consider using the term transformative AI (TAI), which is defined by its impact rather than its capabilities, as an alternative.

Beware of the conjunction fallacy and of overly specific predictions about how AI will affect the world. Whenever you hear a prediction, consider the many other possibilities and set a low base rate probability for any specific scenario. Remember that more detailed predictions have a lower probability of coming true.

The impact of AGI could be extreme

Although black swans are rare, they matter because they have a high impact. For example, the single richest person in a town could have a greater net worth than everyone else in the town combined.

The creation of AGI could have an extreme and unprecedented impact, just as earlier black swans such as World War I did.

When AI is covered in the media, the discussion tends to focus on issues such as fairness, self-driving cars, and automation. While these developments are important, the most impactful effects of AI could be far more extreme. And like other long-tail events, the single most extreme effect could outweigh the sum of all the others.

Potential extreme impacts of AI include human extinction, a singleton with permanent control over the world, and the emergence of digital minds. Many of these potential impacts currently lie outside the Overton window. They sound ‘wild’ or ‘sci-fi’, but according to black swan theory, such extreme long-tail events may be more likely than we think and could dominate how the future unfolds.

Recommendations

AI and AGI are unprecedented technologies that could have extreme impacts on society, up to and including human extinction. We need to stretch our imaginations to consider extreme possibilities that lie outside the Overton window and could drastically alter the future.

Past experience will have limited or no value

Black swans are extremely random, low-probability outliers that may happen only once. As Scott D. Sagan said, “things that have never happened before happen all the time.”

Predictions of how AI will affect the future are often made based on past events. For example, AI is compared to the Industrial Revolution, and people may say things like “automation has happened before” or “humanity has always overcome challenges.”

But black swans are rare outliers that don’t necessarily fit past trends. For example, neither World War I nor the Chernobyl disaster had a precedent, and neither was predicted.

Recommendations

When we look back at history, it seems to have a logical narrative but only in retrospect. The truth is that history is often much more random and unpredictable than it seems. Extreme events may not have clear causes except those added in hindsight.

When thinking about black swan events such as the creation of AGI or human extinction, we need to be open to extreme possibilities that have never happened before and don’t follow previous events. The Christmas turkey couldn’t have predicted its demise based on past experience but it was still a possibility.

Therefore, we need to consider possibilities that seem implausible or improbable because these events may be more likely than they seem or have an impact that dominates the future.

Footnotes

[1] Bond prices often change in anticipation of wars but did not change before World War I.

[2] For more on black swans, I recommend the lecture and notes from the Intro to ML Safety course.

[3] To appreciate how strange modern AI could be, I recommend reading the Simulators post.

Comments

Black swans are unexpected events... AGI has been predicted for ages, just not the exact shape or timing. "An event that comes as a surprise" doesn't seem like a good description.

I find the article well written, and it hits one nail on the head after another regarding the potential scope of what's to come, but the overarching question of the black swan is a bit distracting. To greatly oversimplify, I would say a black swan is a category of massive event, on par with "catastrophe" and "miracle"; it just has overtones of financial investors having hedged their bets properly (or not) to prepare for it (that was the context of Taleb's book, iirc).

Imho, the more profound point you started to address was our denial of these events - that we only in fact understand them retroactively. I think there is some inevitability to that, given that we can't be living perpetually in hypothetical futures.

I did read the book many years ago but I forget Taleb's prognosis - what are the strategies for preparing for unknown unknowns?

Since black swans are difficult to predict, Taleb recommends being resilient to them rather than trying to predict them. 

I don't think that strategy is effective in the context of AGI. Instead, I think we should imagine a wide range of scenarios to turn unknown unknowns into known unknowns.

I agree completely, and I'm currently looking for the most public and concise platform where these scenarios are mapped. Or, as I think of them, recipes. There is a finite series of ingredients, I think, which result in extremely volatile situations: software with unknown goal formation, widely distributed with no single kill switch, with the ability to create more computational power, etc. We have already basically created the first two, but we should be thinking about what it would take for the 3rd ingredient to be added.

single downvote because this is a rehash of existing posts that doesn't appear to clearly add detail, improve precision, or improve communicability of the concepts.

[This comment is no longer endorsed by its author]

Which posts? Would you mind sharing some links? I couldn't find many posts related to black swans.

Apart from the first section summarizing black swans, everything here is my personal opinion.

I see the disagreement votes on my comment are well justified by my difficulty in finding the prior work I thought was abundant! It looks like most of what I was thinking of can be found in AI Impacts' work. I've changed my vote to an upvote because of this. Some links that seem particularly relevant:

https://aiimpacts.org/observed-patterns-around-major-technological-advancements/
https://aiimpacts.org/discontinuous-progress-investigation/
https://aiimpacts.org/accuracy-of-ai-predictions/
https://aiimpacts.org/group-differences-in-ai-predictions/
https://aiimpacts.org/the-biggest-technological-leaps/
https://forum.effectivealtruism.org/posts/SueEfksxDQm4Q97oK/are-we-trending-toward-transformative-ai-how-would-we-know

there are also some interesting posts under the tags:
https://www.lesswrong.com/tag/black-swans
https://www.lesswrong.com/tag/futurism
https://www.lesswrong.com/tag/event-horizon-thesis <- this one was why I thought it obvious

I have removed my own upvote on my original comment. Sorry about that!

Thanks for the links!