Copying over Anders Sandberg's Twitter summary of the paper:

There is life on Earth, but this is not evidence for life being common in the universe! This is because observing life requires living observers. Even if life is very rare, the observers will all see that they are on planets with life. Observation selection effects need to be handled!

Observer selection effects are annoying and can produce apparently paradoxical effects, such as your friends on average having more friends than you, or our existence "preventing" recent giant meteor impacts. But one can control for them with some ingenuity.

Life emerged fairly early on Earth: evidence that it is easy and common? Not so fast: if you need multiple hard steps to evolve an observer to marvel at it, then on those super-rare worlds where observers show up, life statistically tends to be early.

If we have N hard steps (say: life, good genetic coding, eukaryotic cells, brains, observers), then as the difficulty goes to infinity, in the cases where all steps succeed before the biosphere ends, the step completion times become equidistantly spaced between the start and end of habitability.

That means that we can take observed timings and calculate backwards to get probabilities compatible with them, controlling for observer selection bias.
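
A minimal Monte Carlo sketch of this equidistance result (toy parameters of my own, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1.0           # habitable window, normalized
n_steps = 4       # number of hard steps
mean_wait = 1.0   # expected time per step; hard = comparable to the whole window

# Sequential exponential waiting times; keep only the runs where all steps
# finish inside the window (this is the observer-selection conditioning).
waits = rng.exponential(mean_wait, size=(1_000_000, n_steps))
completion_times = waits.cumsum(axis=1)
successful = completion_times[completion_times[:, -1] < T]

print(successful.mean(axis=0))
# ~[0.19, 0.39, 0.58, 0.77]; as the steps get harder this approaches the
# equidistant spacing [0.2, 0.4, 0.6, 0.8] across the window.
```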

Our argument builds on a chain from Carter's original argument and its extensions to Bayesian transition models. I think our main addition here is using noninformative priors.

https://royalsocietypublishing.org/doi/10.1098/rsta.1983.0096

https://arxiv.org/abs/0711.1985v1

http://mason.gmu.edu/~rhanson/hardstep.pdf

https://www.pnas.org/content/pnas/109/2/395.full.pdf

The main take-home message is that one can rule out fairly high probabilities for the transitions, while super-hard steps are compatible with observations. We get good odds on us being alone in the observable universe.

If we found a dark biosphere or life on Venus, that would weaken the conclusion; similarly for big updates on when some transitions happened. We have various sensitivity checks in the paper.

Our conclusions (if they are right) are good news if you are worried about the Great Filter: we have N hard filters behind us, so the empty sky is not necessarily bad news. We may be lonely, but we have much of the universe to ourselves.

Another cool application is that this line of reasoning really suggests that M-dwarf planets must be much less habitable than they seem: otherwise we should expect to be living around one, since they are so common compared to G2 stars.

Personally I am pretty bullish about M-dwarf planet habitability (despite those pesky superflares), but our result suggests that there may be extra effects impairing them. And they need to be pretty severe: reducing the habitability probability by a factor of over 10,000.

I see this paper as part of a trilogy started with our "anthropic shadows" paper and completed by a paper on observer selection effects in nuclear war near misses (coming, I promise!). Oh, and there is one about estimating the remaining lifetime of the biosphere.

The basic story is: we have a peculiar situation as observers. All observers do. But we can control a bit for this peculiarity, and use it to improve what we conclude from weak evidence, especially about risks. Strong evidence is better though, so let's try to find it!

The paper itself is available as open access.


Thinking out loud:

Suppose we treat ourselves as a random sample of intelligent life, and make two observations: first, we're on a planet that will last for X billion years, and second that we emerged after Y billion years. And we're trying to figure out Z, the expected time that life would take to emerge (if planet longevity weren't an issue).

This paper reasons from these facts to conclude that Z >> Y, and that (as a testable prediction) we'll eventually find that planets which are much longer-lived than the Earth are probably much less habitable for other reasons, because otherwise we would almost certainly have emerged there.

But it seems like this reasoning could go exactly the other way. In particular, why shouldn't we instead reason: "We have some prior over how habitable long-lived planets are. According to this prior, it would be quite improbable if Z >> Y, because then we would have almost definitely found ourselves on a long-lived planet."

So what I'm wondering is: what licenses us to ignore this when doing the original Bayesian calculation of Z?

We're not licensed to ignore it, and in fact such an update should be done. Ignoring that update represents an implicit assumption that our prior over "how habitable are long-lived planets?" is so weak that the update wouldn't have a big effect on our posterior. In other words, if the beliefs "long-lived planets are habitable" and "Z is much bigger than Y" are contradictory, we should decrease our confidence in both; but if we're much more confident in the latter than the former, we mostly decrease the probability mass we place on the former.

Of course, maybe this could flip around if we get overwhelmingly strong evidence that long-lived planets are habitable. And that's the Popperian point of making the prediction: if it's wrong, the theory making the prediction (ie "Z is much bigger than Y") is (to some extent) falsified.
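
As a toy illustration (the priors and likelihoods here are made-up numbers, not anything from the paper): suppose the observation "we find ourselves on a short-lived planet" is very unlikely only if both beliefs hold. Then the joint update mostly drains the weaker belief:

```python
from itertools import product

p_H = 0.5   # prior: "long-lived planets are habitable"  (weak belief)
p_Z = 0.9   # prior: "Z >> Y"                            (strong belief)

# Assumed likelihood of finding ourselves on a short-lived planet:
# tiny if both beliefs hold, otherwise unexceptional.
def likelihood(h, z):
    return 0.01 if (h and z) else 0.5

# Joint posterior over the four combinations, assuming independent priors.
post = {}
for h, z in product([True, False], repeat=2):
    prior = (p_H if h else 1 - p_H) * (p_Z if z else 1 - p_Z)
    post[(h, z)] = prior * likelihood(h, z)
total = sum(post.values())

print(sum(v for (h, _), v in post.items() if h) / total)  # P(H|E): 0.5 -> ~0.11
print(sum(v for (_, z), v in post.items() if z) / total)  # P(Z|E): 0.9 -> ~0.82
```

Both beliefs lose probability mass, but almost all of the loss falls on the one we were less confident in to begin with.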

At the end of Section 5.3 the authors write "So far, we have assumed that we can derive no information on the probability of intelligent life from our own existence, since any intelligent observer will inevitably find themself in a location where intelligent life successfully emerged regardless of the probability. Another line of reasoning, known as the “Self-Indication Assumption” (SIA), suggests that if there are different possible worlds with differing numbers of observers, we should weigh those possibilities in proportion to the number of observers (Bostrom, 2013). For example, if we posit only two possible universes, one with 10 human-like civilizations and one with 10 billion, SIA implies that all else being equal we should be 1 billion times more likely to live in the universe with 10 billion civilizations. If SIA is correct, this could greatly undermine the premises argued here, and under our simple model it would produce high probability of fast rates that reliably lead to intelligent life (Fig. 4, bottom)...Adopting SIA thus will undermine our results, but also undermine any other scientific result that would suggest a lower number of observers in the Universe. The plausibility and implications of SIA remain poorly understood and outside the scope of our present work."  

I'm confused, probably because anthropic effects confuse me and not because the authors made a mistake.  But don't the observer selection effects the paper uses derive information from our own existence, and if we make use of these effects shouldn't we also accept the implications of SIA?  Should rejecting SIA because it results in some bizarre theories cause us to also have less trust in observer selection effects?

When using SSA (which I think the authors do implicitly), you can exclude worlds which contain no observer "like you", but there's no anthropic update to the relative probabilities of the worlds that contain at least one observer "like you". When using SIA, the probability of each world gets updated in proportion to its number of observers.

Consider a variant of "god's coin toss". God has a device that outputs A, B, or C with equal probability. When seeing A, god creates 0 humans, when seeing B god creates 1 human, and when seeing C god creates 2 humans. You're one of the humans created this way and don't know how many other humans have been created. What should be your probability distribution over {A, B, C}? According to SSA, B and C should have probability 1/2 each, while according to SIA, B has probability 1/3 and C has probability 2/3.
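
For concreteness, the same calculation in a few lines (a minimal sketch of exactly the setup above):

```python
# Worlds: device outcome -> (prior probability, number of observers created).
worlds = {"A": (1/3, 0), "B": (1/3, 1), "C": (1/3, 2)}

# SSA: condition on "at least one observer like you exists"; worlds with
# zero observers are excluded, the rest keep their relative priors.
ssa_unnorm = {w: p for w, (p, n) in worlds.items() if n > 0}
ssa = {w: p / sum(ssa_unnorm.values()) for w, p in ssa_unnorm.items()}

# SIA: weight each world by its number of observers, then renormalize.
sia_unnorm = {w: p * n for w, (p, n) in worlds.items()}
sia = {w: p / sum(sia_unnorm.values()) for w, p in sia_unnorm.items()}

print(ssa)  # {'B': 0.5, 'C': 0.5}
print(sia)  # {'B': ~0.33, 'C': ~0.67}
```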

The fact that SSA has a discontinuity between zero and any positive number of observers is one of the standard arguments against SSA, see e.g. argument 4 against SSA here: 

https://meteuphoric.com/anthropic-principles/

Thanks, that's a very clear explanation. 

SIA also implies that we likely live in a world with interstellar panspermia and many habitable planets in our galaxy, as I explored here. In that case, the difficulty of abiogenesis is not a big problem, as there will be many planets inseminated with life from one source.

Moreover, SIA and SSA seem to converge in a very, very large universe where all possible observers exist: in it I will find myself in the region where most observers exist, which, with some caveats, will be a region with a high concentration of observers.

Yes for a high concentration of observers. And if high-tech civilizations have strong incentives to grab galactic resources as quickly as they can, thus preventing the emergence of other high-tech civilizations, then most civilizations such as ours will exist in universes that have some kind of late great filter to knock down civilizations before they can become spacefaring.

Berezin suggested that if the first civilisation kills all other civilisations, then we are this first civilisation.

Also, if panspermia is true, the ages of civilisations will be similar, and several civilisations could become first almost simultaneously in one galaxy, which creates interesting colonisation dynamics.

Suppose I have an N-sided die, where N could be 10, 100, 1000, etc. I roll the die and get a 1. What is your probability distribution over N?

The solar system is pretty ordinary (not a high roll).

What if you wake up in a room with amnesia to watch a video of yourself playing Russian roulette with a gun (or perfectly random death machine) that has exactly a 1 in N chance of not killing you. You know only with certainty that you survived this random 1-in-N process. What is your posterior probability distribution over N?

Simple application of relative Bayes: P(N | survival) ∝ P(survival | N) · P(N) = (1/N) · P(N).

So N=10 is 100x more likely than N=1000, assuming a uniform prior on N.
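
A minimal sketch of that calculation (uniform prior over three candidate values of N, as assumed above):

```python
# Surviving a 1-in-N process has likelihood 1/N, so posterior odds under a
# uniform prior are inversely proportional to N.
candidates = [10, 100, 1000]
unnorm = {n: (1 / len(candidates)) * (1 / n) for n in candidates}
total = sum(unnorm.values())
posterior = {n: w / total for n, w in unnorm.items()}

print(posterior)                        # {10: ~0.90, 100: ~0.09, 1000: ~0.009}
print(posterior[10] / posterior[1000])  # 100.0 -> N=10 is 100x more likely
```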

Maybe this is just the argument for SIA vs SSA (I never understood the complexity of that framing when I last skimmed it), but this is just Bayes' theorem 101.

Solomonoff/Bayes tells us to always prefer the simplest model that explains our existence, and any history with a highly improbable chance of survival is penalized exactly in proportion to the improbability of survival. There is absolutely nothing weird whatsoever about 'observational selection effects'. And Bayes perfectly postdicts the confirmed Copernican mediocrity principle.

Does this have a doomsday-argument-like implication that we're close to the end of the Earth's livable span, because if life takes a long time to evolve observers then it's overwhelmingly probable that consciousness arises shortly before the end of the livable span?

The paper lists "intelligence" as a potentially hard step, which is of extra interest for estimating AI timelines. However, I find all the convergent evolution described in section 5 of this paper (or more shortly described in this blogpost) to be pretty convincing evidence that intelligence was quite likely to emerge after our first common ancestor with octopuses ~800 mya; and as far as I can tell, this paper doesn't contradict that.

I think it might be quite hard to go from dolphin- to human-level intelligence.

I discuss some possible reasons in this post:

I expect that most animals [if magically granted as many neurons as humans have] wouldn’t reach sufficient levels of general intelligence to do advanced mathematics or figure out scientific laws. That might be because most are too solitary for communication skills to be strongly selected for, or because language is not very valuable even for social species (as suggested by the fact that none of them have even rudimentary languages). Or because most aren’t physically able to use complex tools, or because they’d quickly learn to exploit other animals enough that further intelligence isn’t very helpful, or...

The time it took to reach human-level intelligence (HLI) was quite short, though, which is decent evidence that HLI is easy. Our common ancestor with dolphins was just 100mya, whereas there's probably more than 1 billion years left for life on Earth to evolve.

Here's one way to think about the strength of this evidence. Consider two different hypotheses:

  • HLI is easy. After our common ancestor with dolphins, it reliably takes N million years of steady evolutionary progress to develop HLI, where N is uniformly distributed.
  • HLI is hard. After our common ancestor with dolphins, it reliably takes at least N million years (uniformly distributed) of steady evolutionary progress, and for each year after that, there's a constant, small probability p that HLI is developed. In particular, assume that p is so small that, if we condition on HLI happening at some point (for anthropic reasons), the time at which HLI happens is uniform between the end of the N million years and the end of all life on Earth.

Let's say HLI emerged on Earth exactly 100 million years after our common ancestor with dolphins. After our common ancestor with dolphins, let's say there were 1100 million years remaining for life to evolve on Earth (I think it's close to that). We can treat N as being distributed uniformly between 1 and 100, because we know it's not more than 100 (our existence contradicts that). If so:

  • P(HLI at 100my | HLI is easy) = 1/100
  • P(HLI at 100my | HLI is hard) ≈ 1/1000 (the density of a uniform distribution over the ~1000 million years remaining after the N-million-year minimum)

Thus, us evolving at 100my is roughly a 10:1 update in favor of HLI being easy.

(Note that, since the question under dispute is the ease of getting to HLI from dolphin intelligence, counting from 100mya is really conservative; it might be more appropriate to count from whenever primates acquired dolphin intelligence. This could lead to much stronger updates; if we count time from e.g. 20mya instead of 100mya, the update would be 50:1 instead of 10:1, since P(HLI at 20my | HLI is easy) would be 1/20.)

This is somewhat but not totally robust to putting small probabilities on variant assumptions. E.g. if we assign a 20% chance to life actually needing to evolve within 200 million years after our common ancestor with dolphins, we get:

  • P(HLI at 100my | HLI is easy) = 1/100
  • P(HLI at 100my | HLI is hard) ≈ 0.8 × 1/1000 + 0.2 × 1/150 ≈ 0.22 × 1/100

So the update would be more like 1:0.22 ~ 4.5:1 in favor of HLI being easy.
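
For anyone who wants to check the arithmetic, a quick numeric sketch (using only the assumptions already stated: an 1100 My window, N uniform on 1..100 My, and a 20% chance of a 200 My window for the variant):

```python
import numpy as np

N = np.arange(1, 101)  # hard minimum in My, uniform prior over 1..100
window = 1100          # My of evolution remaining after the dolphin split

p_easy = 1 / 100                              # uniform over the 100 possible values
p_hard = np.mean(1 / (window - N))            # ~1/1050, i.e. roughly 1/1000
p_hard_20 = 0.8 * p_hard + 0.2 * np.mean(1 / (200 - N))  # 20% short-window case

print(p_easy / p_hard)     # ~10.5: the "roughly 10:1" update
print(p_easy / p_hard_20)  # ~4.6: close to the quoted ~4.5:1
```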

If you think dolphin intelligence is probably easy, I think you shouldn't be that confident that HLI is hard, so after updating on earliness, I think HLI being easy should be the default hypothesis.

My argument is consistent with the time from dolphin- to human-level intelligence being short in our species, because for anthropic reasons we find ourselves with all the necessary features (dexterous fingers, sociality, vocal cords, etc).

The claim I'm making is more like: for every 1 species that reaches human-level intelligence, there will be N species that get pretty smart, then get stuck, where N is fairly large. (And this would still be true if neurons were, say, 10x smaller and 10x more energy efficient.)

Now there are anthropic issues with evaluating this argument by pegging "pretty smart" to whatever level the second-most-intelligent species happens to be at. But if we keep running evolution forward, I can imagine elephants, whales, corvids, octopuses, big cats, and maybe a few others reaching dolphin-level intelligence. But I have a hard time picturing any of them developing cultural evolution.

The claim I'm making is more like: for every 1 species that reaches human-level intelligence, there will be N species that get pretty smart, then get stuck, where N is fairly large

My point is that – if N is fairly large – then it's surprising that human-level intelligence evolved from one of the first ~3 species that became "pretty smart" (primates, dolphins, and probably something else).

If the Earth's history were to contain M>>N pretty smart species, then in expectation human-level intelligence should appear in the N-th such species. If Earth's history were to contain M<<N pretty smart species, then we should expect human-level intelligence to have equal probability of appearing in any of the pretty smart species, so in expectation it should appear in the M/2-th pretty smart species.

Becoming "pretty smart" is apparently easy (because we've had >1 pretty smart species evolve so far) so in the rest of the Earth's history, we would expect plenty more species to become pretty smart. If we expect M to be non-trivial (like maybe 30) then the fact that the 3rd pretty smart species reached human-level intelligence is evidence in favor of N~=2 over N>>M.

(Just trying to illustrate the argument at this point; not confident in the numbers given.)

Yeah, this seems like a reasonable argument. It feels like it really relies on this notion of "pretty smart" though, which is hard to pin down. There's a case for including all of the following in that category:

And yet I'd guess that none of these were/are on track to reach human-level intelligence. Agree/disagree?

And yet I'd guess that none of these were/are on track to reach human-level intelligence. Agree/disagree?

Uhm, haven't thought that much about it. Not imminently, maybe, but I wouldn't exclude the possibility that they could be on some long-winded path there.

It feels like it really relies on this notion of "pretty smart" though

I don't think it depends that much on the exact definition of "pretty smart". If we have a broader notion of what "pretty smart" is, we'll have more examples of pretty smart animals in our history (most of which haven't reached human-level intelligence). But this means both that the evidence indicates that each pretty smart animal has a smaller chance of reaching human-level intelligence, and that we should expect many more pretty smart animals in the future. E.g. if we've seen 30 pretty smart species (instead of 3) so far, we should expect maybe M=300 pretty smart species (instead of 30) to appear over Earth's history. Humans still evolved from some species in the first 10%, which is still an update towards N~=M/10 over N>>M.

The required assumptions for the argument are just:

  • humans couldn't have evolved from a species with a level of intelligence less than X
  • species with X intelligence started appearing t years ago in evolutionary history
  • there are t' years left where we expect such species to be able to appear
  • we assume the appearance rate of such species to be either constant or increasing over time

Then, "it's easy to get humans from X" predicts t<<t' while "it's devilishly difficult to get humans from X" predicts t~=t' (or t>>t' if the appearance rate is strongly increasing over time). Since we observe t<<t', we should update towards the former.

This is the argument that I was trying to make in the great-great-grandparent. I then reformulated it from an argument about time into an argument about pretty smart species in the grandparent to mesh better with your response.

We may not actually kill them, but we prevent their appearance in the future by colonising their planets millions of years before a civilisation would have a chance to appear on them. This is Berezin's idea.

I also have the following question: what order of magnitude for the transition rates would imply that Earth is not rare? One answer is that the time until the oceans' evaporation could be much longer, like not 1 but 4 billion years; but that doesn't account for the timing of the 4 main transitions.

But imagine that each transition typically has a rate of 1 per 1 billion years. In that case, having 4 transitions in 4 billion years seems pretty normal.

If we assume that the typical time for each transition is 10 billion years, the current Earth timing would not be normal, but this is not informative, as we get out just what we assumed.

Of the 'we are first, we are freaks, we are fucked' categories of Great Filter explanations, I think (consistent with this paper) we are definitely freaks; it looks like we may be first (at least in the parts of the universe that might in theory be reachable with existing physics/von Neumann probes); and the jury is out on whether we are currently fucked (I'm a pessimist: I think we might be like the patient who ate a bottle of Tylenol, feeling fine but definitely dead in a few days due to impending liver failure).