We have a sample of only one modern human civilization, but there are some hints about how likely it was to arise.

Major types of hints are:

  • Time - if something happened extremely quickly, or extremely late, that suggests how likely it was.
  • Independent invention - something that was invented independently multiple times is likelier; something invented only once in spite of plenty of time, isolation, and prerequisites is less likely.

Data for:

  • Life seems to have developed extremely quickly after the formation of Earth. [Origin of life]
  • Multicellularity seems to have evolved multiple times independently, at least in animals, fungi, and plants. [Evolution of multicellularity]
  • A similar process also happened multiple times at a higher level - eusociality developed in aphids, thrips, mole rats, termites, and at least 11 times in Hymenoptera (ants, bees, and wasps). [Eusociality]
  • Life did not die out on Earth, or in any particular environment where it previously thrived, in spite of major changes in temperature and atmospheric composition, and multiple large-scale disasters. This suggests life is very resilient. Every time life is wiped out in some part of Earth, that part is quickly recolonized.
  • Many different lineages of animals developed societies. [Social animal]
  • Many different lineages of animals developed communication. [Animal communication]
  • All transitions from the Middle Paleolithic onwards happened relatively fast to extremely fast on an evolutionary scale. [Paleolithic]
  • Mesolithic and Neolithic innovations, including agriculture, the bow, boats, animal husbandry, and pottery, were all invented multiple times independently, in both Afroeurasia and the Americas. [Stone Age]
  • Likewise, many later inventions, including metallurgy, writing, money, and the state, were developed multiple times independently.

Data against:

  • The universe is not filled with technical civilizations. Some models (dubious, given zero empirical evidence) suggest that once such a civilization develops anywhere in a galaxy, it is very likely to colonize the entire galaxy in a relatively short period of time. Since that hasn't happened, it is strong evidence that there are very few, perhaps no, advanced technical civilizations in our galaxy - or anywhere else in the universe, if our galaxy is representative. [Fermi paradox]
  • Life can survive in a very wide range of circumstances, so there are plenty of places where we might expect to find life if its development were also likely. Mars, Venus, the moons of Jupiter and Saturn, and perhaps some other places just in the Solar System might be sufficiently friendly to life. Yet, as far as we know, life never developed in any of them, which puts strong limits on the inevitability of life. [Extremophile]
  • In spite of all the theories proposed, we know of no mechanism under which the creation of life seems even remotely plausible. Somewhere between the primordial soup (or equivalent) and the first replicator with reasonably stable heredity and metabolism (or equivalent), there is a large number of unknown steps of unknown but most likely extremely low probability. [Origin of life]
  • The nervous system evolved only once, about 3 billion years after life started, and nothing analogous to it ever evolved in any other lineage. [Urbilaterian]
  • It took life 3 billion years to reach the stage of reasonably complex animals, which suggests it was not very likely. [Cambrian explosion]
  • Almost all animals seem to have very low encephalization quotients, suggesting that high intelligence is unlikely to develop. The only two major exceptions are primates and dolphins. [Brain size and EQ]
  • Anything resembling human language developed only once. [Origin of language]
  • It is far from certain, but it seems that Neanderthals had the same capacity for spoken language as modern humans. This pushes the development of language very far back, and suggests that development of civilization, even given language, is unlikely. [Neanderthal]
  • The transition from animal life to something as complex as early Homo life (Lower Paleolithic), with manufacture of tools, control of fire, etc., seems to have happened only once in the history of life, and extremely late. [Human evolution]
  • Likewise, the transitions to the Middle Paleolithic and the Upper Paleolithic seem to have happened only once each. It could be argued that isolated human populations had the chance to develop the innovations those transitions contained independently, but didn't.
  • Some inventions, like the wheel and iron smelting, were invented only once. However, by that time the world was moving fast enough, and was globalized enough, that this is very weak evidence of their difficulty. Inventions later than antiquity also provide little evidence, due to little time and little isolation.

To me it looks like life, animals with nervous systems, Upper Paleolithic-style Homo, language, and behavioral modernity were all extremely unlikely events (notice how long ago they were - roughly ~3.5bln, ~600mln, ~3mln, ~200k or ~600k, and ~50k years ago) - except perhaps that language and behavioral modernity were linked with each other, if language was relatively late (Homo sapiens only) and behavioral modernity more gradual (its apparent suddenness being an artifact). Once we have behavioral modernity, modern civilization seems almost inevitable. Your interpretation might vary, of course, but at least now you have a lot of data to argue for your position, in a convenient format.


See Robin's paper on this:


If a step is extremely hard (and thus astronomically unlikely to occur in the lifespan of a planet) then we should expect to see it taking a length of time comparable to typical planetary lifespans divided by the number of steps.

The last two steps occurred too quickly to plausibly be super-hard, and the independent clusters of mammalian braininess in cetaceans and primates [EDIT: (and birds, to a lesser extent)] make the third step questionable too.
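The quoted prediction can be illustrated with a toy Monte Carlo (my own sketch, not from the paper): model each hard step as an exponential waiting time several times longer than the planet's whole habitable window, keep only the rare histories where every step completes in time, and look at when the steps happened. The numbers here (2 steps, mean waiting time 5x the window) are arbitrary illustrative choices.

```python
import random

random.seed(0)

LIFESPAN = 1.0    # planet's habitable window (normalized)
MEAN_STEP = 5.0   # each step's expected waiting time: 5x the whole window
N_STEPS = 2

# Keep only the rare histories where every hard step completes in time.
kept = []
for _ in range(200_000):
    waits = [random.expovariate(1 / MEAN_STEP) for _ in range(N_STEPS)]
    if sum(waits) < LIFESPAN:
        t1 = waits[0]              # completion time of step 1
        t2 = waits[0] + waits[1]   # completion time of step 2
        kept.append((t1, t2))

n = len(kept)
mean_t1 = sum(t1 for t1, _ in kept) / n
mean_t2 = sum(t2 for _, t2 in kept) / n

# The hard-steps result: conditional on success, the completion times split
# the window into N_STEPS + 1 roughly equal chunks, however hard the steps
# actually were - so step durations carry little information about difficulty.
print(f"successes: {n} of 200000")
print(f"mean completion of step 1: {mean_t1:.2f} (predicted ~0.33)")
print(f"mean completion of step 2: {mean_t2:.2f} (predicted ~0.67)")
```

The striking part is that the conditional spacing is nearly independent of how hard the steps were, which is why observed timings are such weak evidence about step difficulty.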

Super-difficult life and nervous systems look very plausible to me. A special difficulty in the formation of mammals (a feature that predisposed them to the later development of intelligent hominids) seems less plausible, but not very implausible.

By the way, some birds seem to be particularly intelligent; they've been seen making and using tools.
Which to me raises a question: how much smarter could birds get? Flying is already a very demanding and difficult task - how much larger a brain could their metabolism support? I suspect that ravens and parrots are close to this limit, and that higher-calorie birds would have to be flightless. Penguins and other flightless birds like the ostrich are, IIRC, the heaviest and largest birds there are. This presents a problem for any intelligent-bird lineage: the groups that have demonstrated a need for intelligence (like the parrots and the ravens) are very separate from the groups that have demonstrated access to the calories/protein necessary for human-level intelligence. So how do they get from there to here?
I've always wondered how intelligent dogs, birds, chimpanzees, bonobos, dolphins, rats, etc. could become with say, 100 years of rigorous scientific selective breeding for intelligence. I can't imagine any of them would reach human level intelligence (unless they had some really lucky mutations), but they might become extremely interesting, and highly instructive about the nature of intelligence.
Oh, I think they're already extremely interesting/instructive. Consider Border Collies; they can memorize <300 commands, which is pretty impressive. There aren't any grammar-producing results or indications of genuine understanding, but just the memorization points to a pretty good memory. And the Border Collie breed only goes back 1 or 2 centuries before it disappears into the general sheepherding/working-dog haze. What could your 100 years of rigorous breeding do? I dunno; intuitively I feel that if silver foxes could be completely domesticated in a few decades, 10 decades ought to get Border Collies up to chimp-level cognition.
Emperor penguins could be pretty smart. They have complex social lives and a diet of fatty fish. There were bigger penguins in the past too. In which case, there doesn't seem to be much of an issue.
Mm. They could be, but they're not tool-users that I've heard of. Living in social groups only puts them up with things like walruses and horses, whom we never look to as possible future human-level-intelligence lineages. And the extinction of bigger penguins in the past would seem to be a mark against them - AFAIK, primates have trended larger over the past few million years.
The "tool use" theory of the origin of intelligence is widely discredited. Neanderthal man was bigger than us, and had bigger brains. Extinction is too common to mean very much. Most species trend larger - until they get reset by meteorite strikes. I'm not aware of any particularly noteworthy growth of primates. Our ancestors have mostly grown, if that's what you mean.
CronoDAS brought up the tools, not I; but I would still like to see some references about it being 'widely discredited'. Tool use seems like one of the most significant aspects of intelligence to me...
Most animals make very little use of tools. The (highly brainy) cetaceans don't seem to use them at all. More important theories include the social brain hypothesis - and sexual selection.
Dolphins have been confirmed as tool-users. For example they are known to use sticks and kelp in mating displays and games.
Cool. An example: http://news.nationalgeographic.com/news/2005/06/0607_050607_dolphin_tools.html
Neanderthal brains weren't bigger than modern human brains. Wikipedia - Neanderthal
That dates from 1993. Here's some more recent material: "Neanderthal brain size at birth was similar to that in recent Homo sapiens and most likely subject to similar obstetric constraints. Neanderthal brain growth rates during early infancy were higher, however. This pattern of growth resulted in larger adult brain sizes but not in earlier completion of brain growth." * http://pmid.us/18779579
Replied here but let's move the discussion to this thread: quoting: Yes, but they also had more massive bodies, possibly 30% more massive than modern humans. I'm not sure that they had a higher brain/body mass ratio than we do and even if they had, a difference on the order of 10% isn't strong evidence when comparing intelligence between species.
If they did have significant additional brain mass, it's possible it was used to give them really good instincts instead of the more general-purpose circuits we have. This is a quote from Wikipedia, supposedly paraphrasing Jordan, P. (2001), Neanderthal: Neanderthal Man and the Story of Human Origins: "Since the Neanderthals evidently never used watercraft, but prior and/or arguably more primitive editions of humanity did, there is argument that Neanderthals represent a highly specialized side branch of the human tree, relying more on physiological adaptation than psychological adaptation in daily life than "moderns". Specialization has been seen before in other hominims, such as Paranthropus boisei which evidently was adapted to eat rough vegetation."
It's also possible that it did any of a hundred other things. Or that it didn't strictly do anything itself, but was genetically tied to some other positively selected mutation. Or it was sexually selected. Or it arose without genetic change, from environmental factors, and there wasn't enough time or pressure for natural selection to remove it again. Why privilege this hypothesis?

Other species that specialize in some way don't usually grow big brains as a result. In any case, the presence of any given physiological adaptation doesn't imply the absence of intelligence. Modern human intelligence is powerful; any hominid species that happened to evolve it would have a very good chance of using it to spread rapidly. Evolution doesn't say, "this species is used to relying on physical strength; if an intelligent member is born, he just won't rely on his intelligence". Every animal is always struggling for survival, no matter how, and how well, adapted.
I'm not privileging the hypothesis, I'm speculating. I didn't mean to start an argument; I think that because I suggested a hypothesis, you assumed that I took it more seriously than other, unspoken hypotheses.
We don't know that a larger brain is required for greater intelligence (in e.g. birds).
I agree. But it's "easier" to evolve a slightly larger brain with the same architecture than to discover a new, more efficient architecture. In general we should expect a larger brain (ours consumes 1/4 of our total energy) to pay for itself in more actual intelligence; if it doesn't (because the architecture can't easily scale), then the smallest still-working brains will win. So for each species of bird, either 1) general scalable intelligence isn't supported by their brain's architecture (so you can't just grow more processing power), 2) birds lack the physical attributes to profit from any more general intelligence than they already have, or 3) general intelligence just hasn't evolved yet.
I don't think we should generalize from the single data point of recent human evolution. Are we sure larger brains tend to give greater general intelligence in other lineages? Has this been checked? Is the intra-species brain size variation large enough to check this in species where we already run intelligence tests?

The fact that human brains recently became unusually large can be explained by other theories. For example, it seems more likely to me that first there was a (relatively recent) mutation that significantly changed brain architecture, and only after that point did the new brains profit from growing larger. (E.g., chimps would have come before this mutation, and indeed have not experienced runaway brain growth.) On this theory, brain size only benefits the lineage with a very specific neural architecture and isn't a general rule. There have also been other theories, some of them invoking sexual selection.

Most of the brain does tasks not directly related to general intelligence, e.g., lower-level input processing, managing the digestive system, etc. Increases in brain size might also improve these functions and be an advantage in their own right. This would further muddy the issue if we charted intelligence vs. brain size, because we don't know enough yet to say which parts of the brain (particularly of a non-human brain) have which general-intelligence functions. Bird brains, for instance, aren't much like mammal brains; they don't even have a neocortex, so we can't just compare growth of brain areas directly. (Not to mention cephalopod brains...)

Finally, brain size directly correlates with head size, and most of the time I expect head size would be more evolutionarily significant. It controls things like eating (size of mouth and throat), defense (size of teeth and jaw), acuteness of the senses (eye size)...

Not to nitpick, but the definition of fully general intelligence is that pretty much anyone can benefit from it.
Is there any evidence that fully general intelligence exists in this world?
Depends on the definition used. You could argue that Bayes' Law is fully general intelligence.
Bayes' Law is a fact/theorem which is probably useful for anyone who can understand it. But is that what you mean by intelligence? I thought it was about the abilities of an individual.
OK, then, any individual understanding Bayes' Law could be said to have "fully general" intelligence.
This is silly.
Silly? But is it true or false?
Larger brain of the same architecture.
That's what I'm talking about. What reason do we have for thinking that larger brains of the same architecture exhibit more general intelligence (in non-human lineages)? Also, what exactly does it mean for two brains to have the same "architecture" if they differ by a genetic mutation? It's not as if there's a separate gene coding for "brain size" that could mutate on its own.
If it doesn't help, and uses more energy, then it won't get kept unless it's an inevitable side effect of something helpful. That was my only basis for "larger (of same type) => more intelligence". I don't really know anything about this topic. My claim is essentially a tautology that may not have much practical application. Some trivia (not directly related to my original claim) I found at http://en.wikipedia.org/wiki/Brain_size:
That's true. But something helpful done by the brain isn't necessarily involved with intelligence.
Primates existed for about 85mln years, Cetaceans for about 48mln years, and in all this time nothing got even close to Homo except Homo. (Evolution of cetaceans, Evolution of primates) Of course plenty of other large social animals had an opportunity to develop something like Homo for at least 300mln years before that (Evolution of Reptiles), or more if we seriously treat the possibility of intelligent life in the sea. So life, animals with nervous systems (these are linked, as without a nervous system animal complexity stays very small), and Homo seem safe.

The case for language and behavioral modernity seems weaker indeed. One obvious argument is that Robin's paper doesn't allow reversal to previous states (die-off of genus Homo - something very likely), so if the expected lifespan of the Homo genus (due to all ecological changes, and assuming no civilization - the bottleneck event theory suggests it's not too unlikely) was 10mln years, and we expect something like Homo to appear once, then if it took 3mln years, that's 30% of its possible time, which sounds reasonably hard.

Another case for language is that there is no obvious mechanism by which it could have evolved, so perhaps our Homo was luckily predisposed for it. And there's no reason why you need Homo for something like language - if chimps or dolphins could be taught grammar, that would be evidence for language being easy - yet it seems that in 600mln years of animals with brains nothing like it has happened.

Language and behavioral modernity are so close that they might be extremely closely related. If they're independent (assuming Homo), we can use the expected genus extinction argument. Or if language turns out to be far older, happening together with Homo (seems unlikely but not impossible), then behavioral modernity needs explanation instead. Timing as we know it is:

* Life << animals with nervous systems << Homo <= language <= behavioral modernity
* Homo << behavioral modernity

With exact timing of development of lan
Good point about the extinction of lineages, TAW. Updating.
Eliezer Yudkowsky:
Second Carl's recommendation; Robin's paper is definitely required reading here.

In addition to steps that are hard in the sense that they can take a long time, there may also be tricky steps -- ones that have a finite window of opportunity before being precluded by some other version of events. I don't have a good enough theory of the development of civilization to defend any candidates in history, but here's a paleontological candidate:

Consider that animal lineages readily lose limbs but, once lost, almost never regain them (many-legged arthropods to six-legged insects, four-legged reptiles to two-legged birds and no-legged snakes, but no reverse transitions; the reasons for this are easy to understand in terms of evolution's requirement that intermediate forms be advantageous). It is also clear that animals can get by well enough with two legs, but sparing two of four for toolmaking has always been a problem, let alone sparing two of two. Finally, it is clear that four legs over two is not necessarily enough of an advantage to displace an entrenched competitor.

The four-legged lungfish that became the ancestor of amphibians could not, of course, have known its distant descendants would need two spare limbs for toolmaking. If it had been delayed until a two-legged lungfish had taken the niche... it is not certain that the ultimate development of civilization would have been rendered impossible, but it is at least plausible.

One question we need to ask about the question of time is what sort of process leads to each breakthrough.

Is it more like buying a lottery ticket with every generation, or is it like saving until you have enough money to buy the next step?

It may well be that the Cambrian explosion was the result of 3 billion years of small improvements and would have been impossible at 1 billion years.

The history of invention in human history seems to work more like savings -- as soon as sufficient progress has been made, the breakthrough happens independently.

There's a problem with independent invention in evolution as well - once the first evolution takes place, that niche is occupied. An independent invention may be beaten out for resources by the more polished first-mover. Short-lived species leave very few fossils.

The problem with the Cambrian explosion is that it seems to have occurred in way too many separate lineages simultaneously. The most recent common ancestors of the animals from the Cambrian explosion seem to go quite far back (that, however, is another controversial issue; molecular clocks are always controversial), and then suddenly, all at once, multiple independent lineages undergo a period of extremely fast evolution. It's a problem unresolved since the 19th century. Wikipedia - Cambrian explosion

Something at least remotely analogous to the nervous system seems to have evolved in some plants, which use cells with something like action potentials to drive rapid movement or rapid changes of chemical behavior in response to environmental stresses.

Bats also have high encephalization quotients, and corvids seem to have the sorts of behaviors characteristic of high encephalization quotients, though both may be too small in absolute terms to become civilized absent unusual environmental conditions.

The latter three items all seem sufficiently rapid to provide l... (read more)

There were wheels in the western hemisphere before the arrival of the Europeans, but they were only used for toys. No one seemed to guess that they might be useful. But if they'd had more time....

ETA: perhaps I should cite Diamond here in case anyone wonders where this factoid comes from. It's from "Guns, Germs, and Steel".

In particular, Diamond (I believe, though it's been a while since I read GG&S) argues that wheels have far less obvious utility for vehicles if you don't have pack animals, and therefore don't think in terms of animal-powered transport apart from carrying.
A wheelbarrow is a very useful thing. You don't need an animal to pull a cart in order for it to be worthwhile. I actually think it's quite mysterious that those civilisations invented the wheel, and then didn't bother to use it.
I know of no confirmed historical evidence of wheelbarrows being used until around the time of the Peloponnesian War in Greece, and as I understand it they subsequently vanished in the Greco-Roman world for roughly 1600 years until being reintroduced in the Middle Ages. Likewise, wheelbarrows are not evident in Chinese history until the first or second century AD. So wheelbarrows are an application of wheels, but they're a much later application of the technology, one that did not arise historically for two to four millennia after the invention of the two or four-wheeled animal-drawn cart. If we use a broader definition of wheelbarrow as "hand cart," we have older evidence stretching back at least to the ancient Indus Valley some time in the second or third millennium BC. But if we stick only to inventions we have historical evidence of, there's still a gap of thousands of years between the invention of the wheel and the invention of the hand cart throughout Eurasia. The fact that Montezuma's Aztecs made no use of the wheelbarrow, rickshaw, or hand cart is hardly more remarkable than the fact that Charlemagne's Franks didn't, either.
Excellent points, but I think: and: are not inconsistent, and are both true.
From a social psych standpoint, it's very interesting: why do people come up with something, then fail to use it in ways that we would consider obvious and beneficial? I think a lot of it is hidden infrastructure we don't see, both mental and physical. People need tools to build things, and tools to come up with new ideas: the rules of logic and mathematics may describe the universe, but they are themselves mental tools.

Go back to Hellenic civilization and you find a lot of the raw materials for the Industrial Revolution; what was missing? There are a lot of answers to that question: "cheap slaves messing up the economy," "no precision machining capability," "no mass consumption of timber, coal, and iron in quantities that force the adoption of industrial methods," and so on. They all boil down to "something subtle was missing, so that intelligent people didn't come up with the trick." I speculate that one of the most important missing pieces was the habit of looking at everything as a source of potential new tricks for changing the world.
Well, coal was missing... slaves may have been a big factor; it's probably not coincidental that industrialization started in England and the northeast US and, AFAIK, didn't spread to the US south until after the Civil War - but somebody should fact-check this. (BTW, I'd love to see an alternate history in which slavery is gotten rid of by economic incentives and government subsidization of the development of mechanized agriculture. Well, I say I'd love to, but it would probably be as exciting as an Ayn Rand novel.)

...but yes, ways of thinking were probably what was lacking. One important way of thinking was that, for a very long time before the 18th century, change was seen as bad. The word "innovator" was usually preceded by the word "rash". There was a great chain of being with peasants at the bottom, God at the top, and the King up near the top; and anybody who wanted to change things was a dangerous revolutionary. The very idea that things could improve here on Earth was vaguely heretical.

The idea that economies could grow was not fully in place. I think it's also not coincidental that the industrial revolution didn't start until Adam Smith's ideas replaced mercantilist thought. Pre-Smith, people assumed that the total amount of wealth on Earth was fixed.

Multicellularity seems to have evolved multiple times independently

This isn't really true. Only organisms with mitochondria developed multicellularity. Mitochondria are the hard part.

eusociality developed in aphids, thrips, mole rats, termites, and at least 11 times in Hymenoptera

Similarly, it would be more informative to say that Hymenoptera developed a particular pattern of chromosomal inheritance once, and that this led to 11 different instances of eusociality.

According to Wikipedia, all of the social hymenopterans are in the superfamily Aculeata, a monophyletic clade, which seems to lend credence to the "developed only once" hypothesis. (There are non-social aculeate insects, but it's possible they used to be social and split off from social species.)
doi: 10.1073/pnas.0702207104:

Re: abiogenesis. You say:

we know of no mechanism under which creation of life seems even remotely plausible.

For a plausible mechanism, see this video. (It starts with anti-creationism stuff; skip to 2:45 to watch the science.)

There are plenty of ideas about how some part of the emergence of life might have happened. The problem is that each idea explains just a small part of it, they are not all compatible with each other, and many have serious problems. Yes, life emerged, so it must have emerged somehow, but I haven't seen any mechanism that seemed to make it likely.

Our observations are biased, because anything that occurs multiple times is very easy to see, but something that occurs only once could be completely missed as an essential step towards civilization, because we assume it was inevitable.

Where's the bias?

* If something occurred only once, after a long time during which it could have, it seems unlikely.
* If something occurred soon after its prerequisites were met, it seems likely.
* If something occurred multiple times independently, it seems likely.

1 and 3 seem obviously true. There are multiple trials, separated either by geography or time, and they have enough failures/successes to make our intuitions right. The anthropic principle doesn't get involved here in any way. If agriculture was invented 5 times independently, it couldn't possibly have been the limiting unlikely step.

2 might be luck - something might have been extremely unlikely but just have happened (by the anthropic principle). But the anthropic principle doesn't really give any reason why it should have happened quickly.

Of course it's extremely naive to consider (like Robin's paper does) time as a series of independent trials - maybe something was unlikely in the sense that its prerequisites were only just in place, and it was either fast or never. That's why I seriously doubt physics-inspired modeling of such events.
The bias is that we don't even notice things that occurred once. How important is it that we have a moon? That we have a continent that spans east-west? That the K-T impact happened exactly when it did? There could be a hundred other crucial factors which we never even noticed, because nobody thought they were important to the development of civilization.
The east-west continent span seems irrelevant, at least for modern civilization, as civilization was on its way - all the way from the Upper Paleolithic up to something reasonably civilized - in Central America too, independently, up to the point when we broke their isolation.
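taw's heuristics above (a single late occurrence suggests "hard", multiple independent occurrences suggest "easy") amount to likelihood comparisons. A toy calculation with made-up numbers - the 50% and 1% per-region chances are purely illustrative, not real estimates:

```python
# Compare two hypotheses about an innovation, e.g. agriculture:
# "easy" = 50% chance per isolated region with the prerequisites,
# "hard" = 1% chance per region. Both figures are made up for illustration.
P_EASY, P_HARD = 0.5, 0.01
REGIONS = 5  # e.g. independent centers where agriculture arose

# Probability of seeing the innovation in all 5 regions under each hypothesis
like_easy = P_EASY ** REGIONS
like_hard = P_HARD ** REGIONS

# A likelihood ratio this large swamps any reasonable prior favoring "hard".
print(f"likelihood ratio (easy : hard) = {like_easy / like_hard:.3g}")
```

This is why independent invention is such strong evidence: the likelihood ratio grows exponentially in the number of independent occurrences, so even 5 occurrences overwhelm most priors.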

AK's Rambling Thoughts has an interesting post on the origins of bilaterians (as well as eukaryotes and life in general).

Nervous system evolved only once, about 3 billion years after life started, and nothing analogous to it ever evolved in any other lineage. [Urbilaterian]

From Science, July 3 2009, p. 24-26, "On the origin of the nervous system":

Assembling these components into a cell a modern neuroscientist would recognize as a neuron probably happened very early in animal evolution, more than 600 million years ago... Scientists also disagree on which animals were the first to have a centralized nervous system and how many times neurons and nervous systems ev

... (read more)

Life did not die out on Earth, or on any particular environment where it previously thrived, in spite of major changes in temperature, composition of atmosphere, and multiple large scale disasters. This suggests life is very resilient. Every time life is wiped out in some part of Earth, it is quickly recolonized.

Be careful of anthropic bias here. Taken alone, the argument "life did not die out on Earth" is invalid because if it had, we wouldn't be here. However, the second point, that when some evolutionary niche is wiped out it is quickly col... (read more)

I think my reasoning is valid even with the anthropic principle. If life wasn't resilient, we should expect, by the anthropic principle, to see no major disasters in the past, not to have survived major disasters.
Voted up for correct use of an observational selection effect a.k.a. anthropic argument.
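taw's argument can be sketched as a toy simulation (my own illustrative numbers): among observers who exist at all, fragile life implies a mostly disaster-free past, while resilient life is compatible with a disaster-filled past like Earth's.

```python
import random

random.seed(1)

def disasters_seen_by_survivors(p_death_per_disaster, n_planets=100_000):
    """Simulate planets with 0-10 major disasters each; return the disaster
    counts that surviving observers look back on."""
    seen = []
    for _ in range(n_planets):
        disasters = random.randint(0, 10)
        # Life survives only if it gets through every disaster
        if all(random.random() > p_death_per_disaster for _ in range(disasters)):
            seen.append(disasters)
    return seen

fragile = disasters_seen_by_survivors(0.5)     # life dies 50% per disaster
resilient = disasters_seen_by_survivors(0.01)  # life dies 1% per disaster

mean = lambda xs: sum(xs) / len(xs)
print(f"fragile survivors recall on average {mean(fragile):.1f} disasters")
print(f"resilient survivors recall on average {mean(resilient):.1f} disasters")
```

If life were fragile, surviving observers would overwhelmingly find themselves on planets with calm histories; observing many survived disasters is therefore evidence of resilience, and the anthropic selection effect doesn't undo it.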

One thing that caught my eye is the presentation of "Universe is not filled with technical civilizations..." as data against the hypothesis of modern civilizations being probable.

It occurs to me that this could mean any of three things, only one of which indicates that modern civilizations are improbable.

1) Modern civilizations are in fact as rare as they appear to be because they are unlikely to emerge. This is the interpretation used by this article.

2) Modern civilizations collapse quickly back to a premodern state, either by fighting a v... (read more)

Scott Alexander:
4) There is a very easy and unavoidable way to destroy the universe (or make it inhospitable) using technology, and any technological civilization will inevitably do so at a certain pretty early point in its history. Therefore, only one technological civilization per universe ever exists, and we should not be surprised to find ourselves to be the first.

5) The Dark Lords of the Matrix are only interested in running one civilization in our particular sim.
We can still be surprised that we arrived in our universe so late.
Re 4), is this destruction supposed to violate relativity? Also, if so, why do we find ourselves so late in cosmic history? Similar anthropic considerations interfere with a non-FTL destruction mechanism like vacuum collapse.
6) Faster than light travel is not physically possible, the other civilizations all originated far away, and the other civilizations are all composed of people who don't like to live in generational spaceships their entire lives.
Your 6 falls under Simon's category 3: "they exist, but we can't detect them, and they aren't beaming an easy-to-detect advertisement of their existence to places where life might arise"

3.1) Further, they use some crypto-secure or sufficiently low-power RF communication that looks like, or is masked by, noise. They also don't leak much distinctive non-communicative RF (no Las Vegas).

3.1.1) They also have no interest (or ability) to create reasonably capable robots who don't mind the boredom of interstellar travel (either alone, or in an isolated community) as their emissaries.
This is my hypothesis (3c), with an implicit overlay of (3a).
Generation spaceships? No joke...
Another possible resolution of the Fermi paradox, based on the many-worlds interpretation of QM:

Let us assume that advanced civilizations find overwhelming evidence for the many-worlds hypothesis as the true, infallible theory of physics. Additionally, assume that there is a quantum mechanical process that has a huge payoff at a very small probability: the equivalent of a cosmic lottery, where the chance of obliteration is close to 1, the chance of winning is close to zero, but the payoff is HUGE. It is like going into a room where you win a billion dollars with p = 1:1000000 and die a sudden, painless death with p = 999999:1000000. Still, if the many-worlds hypothesis is true, you will experience the winning for sure.

Now imagine that at some point of its existence every very advanced civilization faces the decision to make the leap of faith in the many-worlds interpretation: start the machine that obliterates them in almost every branch of the Everett multiverse, while letting them live on in a few branches with a hugely increased amount of resources (energy/computronium/whatever). Since they know that their only subjective experience will be of getting the payoff at negligible risk, they will choose the path of trickling down into some of the much narrower Everett branches.

However, it would mean that, to any outside civilization, they simply vanish from its branch of the universe with very high probability. Since every advanced civilization would face the above extremely seductive way of gaining cheap resources, the probability that two of them will share the same universe becomes infinitesimally small.
From our perspective, this is from (2): all advanced civilizations die off in massive industrial accidents; God alone knows what they thought they were trying to accomplish.

Also, wouldn't there still be people who chose to stay behind? Unless we're talking about something that blows up entire solar systems, it would remain possible for members of the advanced civilization to opt out of this very tempting choice. And I feel confident that for at least some civilizations, there will be people who refuse to bite and say "OK, you guys go inhabit a tiny subset of all universes as gods; we will stay behind and occupy all remaining universes as mortals." If this process keeps going for a while, you end up with a residual civilization composed overwhelmingly of people who harbor strong memes against taking extremely low-probability, high-payoff risks, even when the probability arithmetic favors doing so.

For your proposal to work, it has to be an all-or-nothing thing that affects every member of the species, or affects a broad enough area that the people who aren't interested have no choice but to play along because there's no escape from the blast radius of the "might make you God, probably kills you" machine. The former is unlikely because it requires technomagic; the latter strikes me as possible only if it triggers events we could detect at long range.
I admit that your analysis is quite convincing, but will play the devil's advocate just for fun:

1) We see a lot of cataclysmic events in our universe, the sources of which are at least uncertain. It is definitely a possibility that some of them originate from super-advanced civilizations going up in flames (maybe due to accidents or deliberate effort).

2) Maybe the minority that does not approve of trickling down the narrow branch is even less inclined to witness the spectacular death of the elite and live on in a resource-exhausted section of the universe, and therefore decides to play along.

3) Even if a small risk-averse minority of the civilization is left behind, when it reaches a certain size again, a large part of it will again decide to go down the narrow path, so it won't grow significantly over time.

4) If the minority becomes extremely conservative and risk-averse (due to selection after some iterations of 3), then it has necessarily also lost its ambitions to colonize the galaxy; it will just stagnate across a few star systems and try to hide from other civilizations to avoid any possible conflicts, so we would have difficulty detecting it.
Good points. However:

(1) Most of the cataclysms we see are either fairly explicable (supernovae) or seem to occur only at remote points in spacetime, early in the evolution of the universe, when the emergence of intelligent life would have been very unlikely. Quasars and gamma ray bursts cannot plausibly be industrial accidents in my opinion, and supernovae need not be industrial accidents.

(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that "99.9999% death plus 0.0001% superman" is inferior to "continued mortal existence."

(3) Again possible, but there will be a selection effect over time. Eventually, the remaining people (who, you will notice, live in a universe where people who try to ascend to godhood always die) will no longer think ascending to godhood is a good idea. Maybe the ancients were right and there really is a small chance that the ascent process works and doesn't kill you, but you have never seen it work, and you have seen your civilization nearly exterminated by the power-hungry fools who tried it the last ten times. At what point do you decide that it's more likely that the ancients did the math wrong and the procedure just flat out does not work?

(4) The minority might have no problem with risks that do not have a track record of killing everybody. However, you have a point: a rational civilization that expects the galaxy to be heavily populated might be well advised to hide.
Re: "(2) Possible, but I can still imagine large civilizations of people whose utility function is weighted such that '99.9999% death plus 0.0001% superman' is inferior to 'continued mortal existence.'"

You have to keep in mind that the subjective experience will be 100% superman. The whole idea is that MWI is true and completely convincingly demonstrated by other means as well. It is as if someone told you: you enter this room, and all you will experience is leaving the room with one billion dollars. I think it is a seductive prospect.

Yet another analogue: assume you have the choice between the following two scenarios: 1) you get replicated a million times and all the copies lead an existence in hopeless poverty; 2) you continue your current existence as a single copy, but in luxury. The absolute reference frame may be different, but the relative difference between the two outcomes is very similar to that of the above alternative.

Possible additional motivation could come from knowing that if you don't do it and wait a very long time, the cumulative risk that you experience some other civilization going superman and obliterating you rises above a certain threshold. For a single civilization the chance of experiencing it would be negligible, but in a universe filled with aspiring civilizations, the chance of experiencing at least one of them going omega could become a significant risk after a while.
Agreed, it is a seductive prospect. If advanced civilization means superintelligent AI with perfect rationality, I see no reason why any civilization wouldn't make the choice. Certainly a lot of humans wouldn't, though.
Your aliens are assigning zero weight to their own death, as opposed to a negative weight. While this may be logical, I can certainly imagine a broadly rational intelligent species that doesn't do it.

Consider the problems with doing so. Suppose that Omega offers to give a friend of yours a wonderful life if you let him zap you out of existence. A wonderful life for a friend of yours clearly has a positive weight, but I'd expect you to say "no," because you are assigning a negative weight to death. If you assigned zero weight to an outcome involving your own death, you'd go for it, wouldn't you? I think a more reasonable weighting vector would say "cessation of existence has a negative value, even if I have no subjective experience of it." It might still be worth it if the probability ratio of "superman to dead" is good enough, but I don't think every rational being would count all the universes without them in it as having zero value.

Moreover, many rational beings might choose instead to work on the procedure that will make them into supermen, hoping to reduce the probability of an extinction event. After all, if becoming a superman with probability 0.0001% is good, how much better to become one with probability 0.1%, or 10%, or even (oh unattainable of unattainables) 1!

Finally, your additional motivation raises a question in its own right: why haven't we encountered an Omega Civilization yet? If intelligence is common enough that an explanation for our not being able to find it is required, it is highly unlikely that any Omega Civilizations exist in our galaxy. For being an Omega Civilization to be tempting enough to justify the risks we're talking about, I'd say that it would have to raise your civilization to the point of being a significant powerhouse on an interstellar or galactic scale. In which case it should be far easier for mundane civilizations to detect evidence of an Omega Civilization than to detect ordinary civilizations that lack the resources
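The disagreement over how to weight the death-branches can be made concrete with a toy expected-utility calculation. All probabilities and utilities below are invented for illustration; they are not from the discussion itself:

```python
# Toy expected-utility comparison for the "zero vs. negative weight on death"
# disagreement. All numbers are illustrative assumptions, not from the thread.
p_win = 1 / 1_000_000        # chance the quantum lottery pays off
u_superman = 1_000_000_000   # assumed utility of the winning branch
u_dead_zero = 0              # agent who assigns death zero weight
u_dead_neg = -2_000          # agent who assigns death a modest negative weight

ev_zero = p_win * u_superman + (1 - p_win) * u_dead_zero
ev_neg = p_win * u_superman + (1 - p_win) * u_dead_neg

print(ev_zero)  # ~1000: the gamble looks attractive
print(ev_neg)   # ~-1000: the same gamble looks terrible
```

The point of the sketch: even a tiny negative weight per death-branch swamps a huge payoff once the death-branches outnumber the winning ones by a million to one, which is exactly the dispute between the two commenters.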
Hmmm, it seems that most of your arguments are in plain probability-theoretic terms: what is the expected utility, assuming certain probabilities of certain outcomes. Throughout the argument you compute expected values. The whole point of my example was that, assuming a many-worlds view of the universe (i.e. a multiverse), using the above decision procedures is questionable at best in some situations.

In the classical probability-theoretic view, you won't experience your payoff at all if you don't win. In an MWI framework, you will experience it for sure. (Of course the rest of the world sees a high chance of your losing, but why should that bother you?) I definitely would not gamble my life on 1:1000000 chances, but if Omega convinced me that MWI is definitely correct and the game is set up in a way that I will experience my payoff for sure in some branches of the multiverse, then it would be quite different from a simple gamble.

I think it is quite an interesting case where human intuition and MWI clash, simply because it contradicts our everyday beliefs about our physical reality. I don't say that the above would be an easy decision for me, but I don't think you can just compute expected value to make the choice. The choice is really more about subjective values: what is more important to you, your subjective experience or saturating the multiverse branches with your copies?

"Finally, your additional motivation raises a question in its own right: why haven't we encountered an Omega Civilization yet?"

That one is easy: the assumption I purposefully made is that going omega is a "high risk" (a misleading word, but maybe the closest) process, meaning that even if some civilizations went omega, outsiders (i.e. us) will see them simply wiped out in an overwhelming number of Everett branches, i.e. with very high probability for us. Therefore we have to wait through a huge number of civilizations going omega before we experience one having attained Omega status. Still, if w
To make this calculation in an MWI multiverse, you still have to place a zero (or extremely small negative) value on all the branches where you die and take most or all of your species with you. You don't experience them, so they don't matter, right? That's a specialized form of a general question which amounts to "does the universe go away when I'm not looking at it?" If one can make rational decisions about a universe that doesn't contain oneself (and life insurance policies, high-level decorations for valor, and the like suggest this is possible), then outcomes we aren't aware of have to have some nonzero significance, for better or for worse.

----------------------------------------

As for "question in its own right," I think you misunderstood what I was getting at. If advanced civilizations are probable, and all or nearly all of them try to go Omega, and they've all (in our experience, on this worldline) failed, it suggests that the probability must be extremely low, or that the power benefits to be had from going Omega are low enough that we cannot detect them over galaxy-scale distances.

In the first case, the odds of dissenters not drinking the "Omegoid" Kool-Aid increase: the number of people who will accept a multiverse that kills them in 9 branches and makes them gods in the 10th is probably somewhat larger than the number who will accept one that kills them in 999999999 branches and makes them gods in the 10^9th. So you'd expect dissenter cultures to survive the general self-destruction of the civilization and carry on with their existence by mundane means (or try to find a way to improve the reliability of the Omega process).

In the second case (Omega civilizations are not detectable at galactic-scale distances), I would be wary of claiming that the benefits of going Omega are obvious. In which case, again, you'll get more dissenters.
There's also an assumption here that civilisations either collapse or conquer the galaxy, but that ignores another possibility: that civilisations might quickly reach a plateau, technologically and in terms of size. The reason this could be the case is that civilisations must solve their problems of growth and sustainability long before they have the technology to move beyond their home planet, and once they have done so, there ceases to be any imperative toward off-world expansion; without ever-increasing economies of scale, technological developments taper off.
"Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us."
But, Calvin, P(intelligent life contacting us | intelligent life exists) >= P(intelligent life contacting us | intelligent life does not exist) = 0, so the fact that no other intelligent life has contacted us can only be evidence against its existence. (The problem with formally bringing out Bayes' law is that, by the time you've gone through and stated everything "properly", your toboggan will have already crashed into the brier patch.)
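The inequality above can be checked with a toy Bayesian update. The prior and likelihoods here are made-up illustrative numbers; only the structure of the argument comes from the comment:

```python
# Toy Bayesian update for the Calvin joke: observing "no contact" can only
# lower P(intelligent life exists). All numbers are illustrative assumptions.
prior_life = 0.5               # assumed prior that intelligent life exists
p_contact_given_life = 0.1     # assumed chance they contact us if they exist
p_contact_given_no_life = 0.0  # contact is impossible if they don't exist

# P(no contact), by the law of total probability
p_no_contact = (prior_life * (1 - p_contact_given_life)
                + (1 - prior_life) * (1 - p_contact_given_no_life))

# Posterior after observing no contact (Bayes' law)
posterior_life = prior_life * (1 - p_contact_given_life) / p_no_contact

print(posterior_life)  # ~0.474, below the 0.5 prior: evidence against
```

As long as P(contact | life) is strictly greater than P(contact | no life), the posterior after "no contact" is strictly below the prior, no matter which specific numbers are assumed.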
I think the joke hinges on equivocation of the word "intelligent". Taboo "intelligent", use "sapient" and "clever" for the two meanings, and you get: "Sometimes I think the surest sign that clever life exists elsewhere in the universe is that no sapient life has tried to contact us." Or, put more accurately, "the fact that no sapient life has contacted us is evidence that, if sapient life exists elsewhere in the universe, it's probably also clever".
By the law of conservation of evidence, if detecting an alien civilization would make them more likely, then not detecting them after sustained effort makes them less likely, right?

Counterevidence for 2 - there are extremely few sustained reversals of either life or civilization. The Toba bottleneck seems like the most likely near-reversal, and it happened before modern civilization. You would need to postulate an extremely high likelihood of collapse if you suggest that emergence is very frequent and still civilizations aren't around. If only 90% of civilizations collapse (which seems a vastly higher proportion than we have any reason to believe), then if civilizations are likely, they should still be plentiful. Hypothesis 2 would only work if emergence is very likely and fast extinction is nearly inevitable. After a civilization starts spreading widely across star systems, extinction seems extremely unlikely.

Counterevidence for 3 - some models suggest that advanced civilizations would have spread extremely quickly across the galaxy by geological timescales. That leaves us with:

* Advanced civilizations are numerous but were all created extremely recently, within the last 0.1% of the galaxy's lifetime or so (extremely unlikely, to the point that we can ignore it)
* We suck at detection so much that we cannot even detect a galaxy-wide civilization (seems unlikely, do you postulate that?)
* These models are really bad, and advanced civilizations tend to be contained or to spread extremely slowly (more plausible; these models have no empirical support)
* 3 is false and there are few or no other advanced civilizations in the galaxy (what I find most likely), either by not arising in the first place or by extinction.

My ranking of probabilities is 1 >> 3 >> 2. And yes, I'm aware existential risks are widely believed in here - I don't share this belief at all.
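The 90%-collapse point is just arithmetic, and a back-of-envelope sketch makes it vivid. The head-counts below are invented for illustration; only the 90% figure comes from the comment:

```python
# Back-of-envelope check of the collapse argument, with made-up numbers:
# even a 90% permanent-collapse rate leaves many survivors if emergence is
# common, so "they all collapsed" alone cannot explain the silence.
emerged = 1000        # assumed civilizations that ever emerged in the galaxy
collapse_rate = 0.90  # fraction that collapse permanently (from the comment)

survivors = emerged * (1 - collapse_rate)
print(survivors)  # ~100 surviving civilizations: still plentiful

# For fewer than one expected survivor, collapse must be near-certain:
required_rate = 1 - 1 / emerged
print(required_rate)  # 0.999
```

So under these toy numbers, explaining the silence purely through collapse requires a collapse rate above 99.9%, which is the "extremely high likelihood of collapse" the comment says one would need to postulate.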
Countercounterevidence for 3: what are the assumptions made by those models of interstellar colonization? Do they assume fusion power? We don't know if industrial fusion power works economically enough to power starships. Likewise for nanotech-type von Neumann machines and other tools of space colonization. The adjustable parameters in any model of interstellar colonization are defined by the limits of capability for a technological civilization. And we don't actually know the limits, because we haven't gotten close enough to those limits to probe them yet.

If the future looks like the more optimistic hard science fiction authors suggest, then the galaxy should be full of intelligence and we should be able to spot the drive flares of Orion-powered ships flitting around, or the construction of Dyson spheres by the more ambitious species. We should be able to see something, at any rate. But if the future doesn't look like that, if there's no way to build cost-effective fusion reactors and the only really worthwhile sustainable power source is solar, if there are hard limits on what nanotech is capable of that restrict its industrial applications, and so on... the barrier to entry for a planetary civilization hoping to go galactic may be so high that even with thousands of intelligent species to make the attempt, none of them make it.

This ties back into the hypotheses I left out of my post for the sake of brevity; I'm now considering throwing them in to explain my reasoning a little better. But I'm still not sure I should do it without invitation, because they are on the long side.
It's sticky sweet candy for the mind. Why not share it?
Here goes: alternate explanations for the rarity of intelligence:

3a) Interstellar travel is prohibitively difficult. The fact that the galaxy isn't obviously awash in intelligence is a sign that FTL travel is impossible or extremely unfeasible. Barring technology indistinguishable from magic, building any kind of STL colonizer would involve a great investment of resources for a questionable return; intelligent beings might just look at the numbers and decide not to bother. At most, the typical modern civilization might send probes out to the nearest stellar neighbors. If the cost of sending a ton of cargo to Alpha Centauri is, say, 0.0001% of your civilization's annual GDP, you're not likely to see anyone sending million-ton colony ships to Alpha Centauri. In which case intelligent life might be relatively common in the galaxy without any of it coming here; even the more ambitious cultures that actually did bother to make the trip to the nearest stars would tend to peter out over time rather than going through exponential expansion.

----------------------------------------

3b) Interstellar colonization is prohibitively difficult. If sending an STL colony expedition to another star is hard, sending one with a large enough logistics base to terraform a planet will be exponentially harder. There are something on the order of 1000 stars within 50 to 60 light years of us. Assuming more or less uniform stellar densities, if the probability of a habitable planet appearing around any given star is much less than 0.1%, it's likely that such planets will remain permanently out of reach for a sublight colony ship. In that case, spreading one's civilization throughout the galaxy depends on being able to terraform planets across interstellar distances before setting up a large population on those worlds. Even if travel across short (~10 ly) interstellar distances is not prohibitively difficult, there might still be little or no incentive to colonize the available worlds beyond
For a machine-phase civilization, the only one of these that seems plausible is 3c, but I can't think of any reason why no one in a given civilization would want to leave, and assuming growth of any kind, resource pressure alone will eventually drive expansion. If the need for civilization is so psychologically strong, copies can be shipped and revived only after specialized systems have built enough infrastructure to support them. It seems far more likely to me, given the emergence of multiple civilizations in a galaxy, that some technical advance inevitably destroys them. Nanomedicine malfunction or singleton seem like the best bets to me just now, which would suggest that the best defenses are spreading out and technical systems' heterogeneity.
A machine-phase civilization might still find (3a) or (3b) an issue, depending on whether nanotech pans out. We think it will, but we don't really know, and a lot of technologies turn out to be profoundly less capable than the optimists expect them to be in their infancy. Science fiction authors in the '40s and '50s were predicting that atomic power sources would be strongly miniaturized (amusingly, more so than computing devices); that never happened, and it looks like the minimum size for a reasonably safe nuclear reactor really is a large piece of industrial machinery.

If nanotech does what its greatest enthusiasts expect, then the minimum size of the industrial base you need to create a new technological civilization in a completely undeveloped solar system is low (I don't know, probably in the 10-1000 ton range), in which case the payload for your starship is low enough that you might be able to convince people to help you build and launch it. Extremely capable nanotech also helps on the launch end by making the task of organizing the industrial resources to build the ship easier.

But if nanotech doesn't operate at that level, if you actually need to carry machine tools and stockpiles of exotic materials unlikely to be found in asteroid belts and so on... things could be expensive enough that at any point in a civilization's history it can think of something more interesting to do with the resources required to build an interstellar colony ship. Again, if the construction cost of the ship is an order of magnitude greater than the gross planetary product, it won't get built, especially if very few people actually want to ride it.

Also, could you define "singleton" for me, please?
Sorry for taking so long on this; I forgot to check back using a browser that can see red envelopes (I usually read lesswrong with elinks). I think if nanotech does what its greatest enthusiasts expect, the minimum size of the industrial base will be in the 1-10 ton range. However, if we're assuming that level of nanotech, anyone who wants will be able to launch their own expedition, personally, without any particular help other than downloading GNU/Spaceship. If nanotech works as advertised, it turns construction into a programming project. Also, if we limit ourselves to predictions made in the 50s with no assumptions of new science, I think we'll find that the predictions are reasonable, technically, and the main reason we don't have nuclear cars and basement reactors now involve politics. Molecular manufacturing probably cannot be contained this way, since it doesn't require a limited resource that's easy to detect from a distance. Others have defined singleton, so I assume you're happy with that. :)
Re: Nanotech

That's exactly my point: if nanotech performs as advertised by its starriest-eyed advocates, then interstellar colonization can be done with small payloads, and energy is cheap enough that they can be launched easily. That is a very big "if," and not one we can shrug off or assume in advance as the underlying principle of all our models. What if nanotech turns out to have many of the same limits as its closest natural analogue, the biological cell? Biotech is great for doing chemistry, but not so great for assembling industrial machinery (like large solar arrays) in a hostile environment.

----------------------------------------

As for the "nuclear cars and basement reactors" being out of the picture because of politics and not engineering, that's... really quite impressively not true, I think. Fission reactors create neutrons that slip through most materials like a ghost and can riddle you with radiation unless you stand far away or have excellent shielding. Radioactive thermal generators require synthetic or refined isotopes that are expensive by nature because they have to be *made*, atom by atom... and they're still quite radioactive if they're hot enough to be a useful power source. The real problem isn't the atomic power source itself, it's the shielding you need to keep it from giving you cancer. There's no easy way to miniaturize that, because neutron capture cross-sections play no favorites and can't be tinkered with. This stuff is not a toy, and there are very good reasons of engineering why it never made the leap from industrial equipment to household use, except at the smallest and most trivial scales (such as americium in smoke detectors). It's not just about politics.
'singleton' as I've seen it used seems to be one possible Singularity in which a single AI absorbs everyone and everything into itself in a single colossal entity. We'd probably consider it a Bad Ending.
See Nick Bostrom (2005). What is a Singleton? A singleton is a more general concept than intelligence explosion. The specific case of a benevolent AGI singleton aka FAI is not a bad ending. Think of it as Nature 2.0, supervised universe, not as a dictator.
I stand corrected! Maybe this should be a wiki article - it's not that common, but it's awfully hard to google.
Here is another variant: If civilizations achieve a certain sophistication, they necessarily decipher the purpose of the universe and once they understand its true meaning and that they are just a superfluous side-effect, they simply commit suicide. Here is a blog entry of mine elaborating on this hypothesis: http://arachnism.blogspot.com/2009/05/spiritual-explanation-to-fermi-paradox.html

One other thing an advanced technological civilization seems to need is concentrated energy. We're highly dependent on coal and oil. At this stage, nuclear could be substituted, but I don't know that there would have been enough slack for the research to get to nuclear without the fossil fuels.

It seems plausible that any planet which has had extensive life for long enough to develop intelligence would also have fossil fuels, but that's pretty vague. It doesn't guarantee that the fossils don't get dispersed, eaten, or end up too deep to be easily accessible.

I'... (read more)

No we're not. The data is clearly against this theory.

Coal was barely used until the 1800s; early industrial revolution machinery used wood (indirectly solar power), charcoal, and river flow (indirectly solar power) instead. Oil didn't matter much until the 1950s.

The amount of solar energy Earth receives annually is 3,850,000 EJ (and if we ever needed more, there are ridiculously larger amounts of solar energy available in space). Human primary energy use is 487 EJ, or about 0.01% of that. That's of course only because we conveniently don't count solar energy used to grow our food and heat our planet - otherwise it would be fair to say human civilization uses 99.99% solar power (via photosynthesis, heating, water flow, wind, etc.) and 0.01% all other kinds of energy, like fossil fuels, nuclear, geothermal, etc.
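The ratio quoted above checks out; here is the arithmetic, using the comment's own figures:

```python
# Sanity check of the solar-vs-human energy figures quoted in the comment.
solar_input_ej = 3_850_000  # annual solar energy reaching Earth, in EJ
human_use_ej = 487          # annual human primary energy use, in EJ

fraction = human_use_ej / solar_input_ej
print(f"{fraction:.4%}")  # ~0.0126%, i.e. roughly the 0.01% stated above
```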

We know fossil fuels were not necessary for industrial civilization because by the time we started using them we already had an industrial civilization. That's as good a proof as it gets.

[History of ferrous metallurgy] [History of coal mining] [History of petroleum] [Solar energy]

EDIT: Also, long before railways, river transport and long-distance sea transport were extremely common. If some place ... (read more)

Wouldn't any energy stored on earth be "indirectly solar power?"
No. Nuclear isn't (unless you're going to stretch "solar" to include past supernovae). Geothermal comes from nuclear fission (or possibly residual gravitational energy? Either way, not solar). Given the discovery of hydrocarbons off-Earth, it's possible that some proportion of oil is non-solar in origin too, though that would mean it's ultimately geothermal, and thus a nuclear by-product. Not sure of the current status of that speculation, though.
Nuclear/geothermal aren't. And fossil fuels are solar very indirectly - they're solar from many millions of years ago; biomass/wind/hydro are solar quite directly - it's just this or the last few years' solar.
Well, unless I'm totally confused, the uranium/plutonium were generated by solar fusion. Also, most geothermal heat is generated by radioactive decay (some is residual gravitational binding energy from earth's formation), making it indirect nuclear fission power (and thus ultimately solar power, if you want to be unpleasantly technical).
Really? Thanks. I thought it was mostly derived from gravitational energy as the earth formed, with only a bit of extra nuclear heating. Though I guess it might not make sense that it would still be hot then... Well, generally, elements heavier than iron only show up when a star goes kablooey, right? So it's "solar", but it's not OUR solar, which was kinda the point, I guess.
Since we've already gone down the rabbit hole of extreme pedantry, I should point out that "solar" properly only applies to our own star, sol. The adjective for stars in general is "stellar". If we ever bring solar panels to the neighborhood of other stars, this is going to be a nasty bit of terminology conflict.
Oh, good point about supernovae. I didn't know that.