[Summary: The fact that we do not observe (and have not been wiped out by) a UFAI suggests that the main component of the 'Great Filter' cannot be civilizations like ours being wiped out by UFAI. Gentle introduction (assuming no prior knowledge), with links to much better discussion below.]
Introduction
The Great Filter is the idea that although there is lots of matter, we observe no "expanding, lasting life", such as space-faring intelligences. So there must be some filter through which almost all matter gets stuck before becoming expanding, lasting life. One question for those interested in the future of humankind is whether we have already 'passed' the bulk of the filter, or whether it still lies ahead. For example, is it very unlikely that matter forms self-replicating units, but once that hurdle is cleared, becoming intelligent and spreading across the stars is highly likely? Or is reaching humankind's level of development not that unlikely, while very few such civilizations go on to expand across the stars? If the latter, that motivates working out what the forthcoming filter(s) are, and trying to get past them.
One concern is that advancing technology gives civilizations the ability to wipe themselves out, and that this is the main component of the Great Filter - one we will be approaching soon. There are several candidate technologies that pose an existential threat (nanotechnology/'grey goo', nuclear holocaust, runaway climate change), but one that looms large is artificial intelligence (AI). Trying to understand and mitigate the existential threat from AI is the main role of the Singularity Institute, and I gather Luke, Eliezer (and many folks on LW) consider AI the main existential threat.
The concern with AI is something like this:
- AI will soon greatly surpass us in intelligence in all domains.
- If this happens, AI will rapidly supplant humans as the dominant force on planet earth.
- Almost all AIs, even ones we create with the intent that they be benevolent, will probably be unfriendly to human flourishing.
Or, as summarized by Luke:
... AI leads to intelligence explosion, and, because we don’t know how to give an AI benevolent goals, by default an intelligence explosion will optimize the world for accidentally disastrous ends. A controlled intelligence explosion, on the other hand, could optimize the world for good. (More on this option in the next post.)
So, the aim of the game needs to be trying to work out how to control the future intelligence explosion so the vastly smarter-than-human AIs are 'friendly' (FAI) and make the world better for us, rather than unfriendly AIs (UFAI) which end up optimizing the world for something that sucks.
'Where is everybody?'
So, to the topic. I read this post by Robin Hanson, which had a really good parenthetical remark (emphasis mine):
Yes, it is possible that the extremely difficultly was life’s origin, or some early step, so that, other than here on Earth, all life in the universe is stuck before this early extremely hard step. But even if you find this the most likely outcome, surely given our ignorance you must also place a non-trivial probability on other possibilities. You must see a great filter as lying between initial planets and expanding civilizations, and wonder how far along that filter we are. In particular, you must estimate a substantial chance of “disaster”, i.e., something destroying our ability or inclination to make a visible use of the vast resources we see. (And this disaster can’t be an unfriendly super-AI, because that should be visible.)
This made me realize that a UFAI should also count as 'expanding, lasting life', and so the Great Filter argument implies it should be unlikely.
Another way of looking at it: if the Great Filter still lies ahead of us, and a major component of this forthcoming filter is the threat from UFAI, we should expect to see the UFAIs of other civilizations spreading across the universe (or to see nothing at all, because such a UFAI would have wiped us out to optimize for its unfriendly ends). That we observe neither disconfirms this conjunction.
[Edit/Elaboration: It also gives a stronger argument. Since an expanding UFAI is precisely the 'expanding life' we do not see, the beliefs 'the Great Filter lies ahead' and 'UFAI is a major existential risk' stand in tension: the higher your credence that the filter lies ahead, the lower your credence should be that UFAI is a major existential risk (the many civilizations like ours that get caught in the filter evidently do not produce expanding UFAIs, so expanding UFAI cannot be the main x-risk). Conversely, if you are confident that UFAI is the main existential risk, you should think the bulk of the filter is behind us (since we see no UFAIs, there cannot be many civilizations like ours in the first place, as any such civilization would be quite likely to produce an expanding UFAI).]
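The tension above can be sketched as a toy Bayesian update. All the numbers below are illustrative assumptions of mine, not estimates, and the three-hypothesis carve-up is a deliberate simplification of the anthropics; the only load-bearing assumption is that an expanding UFAI would be visible, so "no expanding life observed" is very unlikely under the filter-ahead-via-UFAI hypothesis:

```python
# Toy Bayesian update for the Great Filter / UFAI argument.
# All numbers are illustrative assumptions, not estimates.

# Hypotheses about where the Great Filter lies:
#   "ahead_ufai"  - filter ahead; civilizations like ours are common,
#                   but most are destroyed by a UFAI, which then expands
#   "ahead_other" - filter ahead; civilizations are destroyed by
#                   something that leaves nothing visible
#   "behind"      - filter behind us; civilizations like ours are rare
priors = {"ahead_ufai": 1 / 3, "ahead_other": 1 / 3, "behind": 1 / 3}

# Likelihood of the evidence "we observe no expanding life" under each
# hypothesis. Key assumption: an expanding UFAI would be visible, so
# "ahead_ufai" makes our observation very unlikely.
likelihoods = {"ahead_ufai": 0.01, "ahead_other": 0.9, "behind": 0.9}

# Bayes: posterior proportional to prior * likelihood, then normalize.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posteriors = {h: p / total for h, p in unnorm.items()}

for h, p in posteriors.items():
    print(f"{h}: {p:.3f}")
```

Under these made-up numbers, "filter ahead via UFAI" collapses from 1/3 to well under 1%, while the other two hypotheses absorb nearly all the posterior mass, which is the anticorrelation the elaboration describes.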
A much more in-depth article and comments (both highly recommended) were posted by Katja Grace a couple of years ago. I can't find a similar discussion here (feel free to downvote and link in the comments if I missed it), which surprises me. I'm not bright enough to work out the anthropics, and obviously one may hold AI to be a big deal for other-than-Great-Filter reasons (maybe a given planet has a 1-in-a-googol chance of producing intelligent life, but intelligent life 'merely' has a 1-in-10 chance of successfully navigating an intelligence explosion), but this would seem to be substantial evidence driving down the proportion of x-risk we should attribute to AI.
What do you guys think?
Hmm.
Okay, filters that would produce results consistent with observation.
1: Politics, aka "The Berserker's Garden": The first enduring civilization in our galaxy rose many millions of years before the second, and happened to be both highly preservationist and highly expansionist. They have outposts buried in the deep crust of every planet in the galaxy, including Earth. Whenever a civilization arises that is both inclined and able to turn the entire galaxy into fast food joints/smiley faces/etc., said civilization very suddenly disappears. The Berserkers cannot be fought, and cannot be fooled, because they have been watching the entirety of history, and their contingency plans for us predate the discovery of fire. If we are really lucky, they will issue a warning before annihilating us.
2: Physics is booby-trapped: One of the experiments every technological civilization inevitably conducts while exploring the laws of the universe has an unforeseeable, planet-wrecking result. We are screwed.
3: Economics: The minimal mass of a technological "ecology" capable of sustaining itself outside a compatible biosphere is simply too large to fit into a starship. The interlocking chains of expertise, material extraction and recycling, energy production, and so forth flat out cannot be compacted enough to be moved. No such thing as a von Neumann probe or a colony ship can be built. Civilizations expand to the limit of how far spare parts and help can be sent, and then halt.
4: Diversion: Advancing technology opens "frontiers" much, much more attractive than starflight before starflight becomes possible: alternate-timeline gates, uploads into the underlying computational substrate of the universe, etc., etc.
5: Anthropic engineering: Advanced civilizations obtain proof of the many-worlds interpretation and the anthropic principle - and use them. From outside any given circle of coordination, this looks like collective suicide. So the universe is full of empty planets, and every civilization has all of it to itself.
I'm from the future & I just want to thank you for these unusual solutions.