This is a linkpost for https://arxiv.org/abs/1806.02404

While the argument was posted on LessWrong previously, it now has the neat form of a paper on arXiv by Anders Sandberg, Eric Drexler and Toby Ord.

TL;DR version: the use of Drake-like equations, with point estimates of highly uncertain parameters, is wrong. Extant scientific knowledge corresponds to uncertainties that span multiple orders of magnitude.

When the statistics are done correctly to represent realistic distributions of uncertainty in the literature, "people who take the views of most members of the research community seriously should ascribe something like a one in three chance to being alone in the galaxy and so should not be greatly surprised by our lack of evidence of other civilizations. The probability of N < 10^−10 (such that we are alone in the observable universe) is 10%."
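To make the contrast concrete, here is a minimal Monte Carlo sketch in Python. The point estimates and log-uniform ranges below are my own illustrative placeholders, not the paper's inputs; the qualitative behavior is the point: the classic point-estimate product gives "many civilizations", and the sampled distribution can keep a large mean while still putting substantial probability on N < 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Classic point-estimate approach (illustrative, optimistic-literature-style values):
# multiply single numbers and get "many civilizations".
point_estimates = {
    "R_star": 10,    # star formation rate (stars/year)
    "f_p":    0.5,   # fraction of stars with planets
    "n_e":    2,     # habitable planets per planetary system
    "f_l":    1.0,   # probability life appears on a habitable planet
    "f_i":    0.1,   # fraction of life that becomes intelligent
    "f_c":    0.1,   # fraction of intelligent life that becomes detectable
    "L":      1e4,   # years a civilization remains detectable
}
N_point = np.prod(list(point_estimates.values()))

# Uncertainty-respecting approach: give each factor a wide log-uniform distribution
# (ranges are placeholders spanning the kind of spread the paper argues the
# literature supports, especially for abiogenesis).
log10_ranges = {
    "R_star": (0, 2),
    "f_p":    (-1, 0),
    "n_e":    (-1, 0),
    "f_l":    (-30, 0),   # abiogenesis: uncertain over many orders of magnitude
    "f_i":    (-3, 0),
    "f_c":    (-2, 0),
    "L":      (2, 10),
}
samples = [10 ** rng.uniform(lo, hi, n) for lo, hi in log10_ranges.values()]
N = np.prod(samples, axis=0)

print(f"point-estimate N:     {N_point:,.0f}")
print(f"mean of sampled N:    {N.mean():,.0f}")      # can still be large...
print(f"median of sampled N:  {np.median(N):.2e}")   # ...while the median is tiny
print(f"P(N < 1), i.e. alone: {(N < 1).mean():.2f}") # substantial probability
```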

From the conclusions, when the priors are updated:

When we update this prior in light of the Fermi observation, we find a substantial probability that we are alone in our galaxy, and perhaps even in our observable universe (53%–99.6% and 39%–85% respectively).


If asked to bet about the probability of alien life (with payoffs measured in pleasure rather than dollars), most people would recommend making an anthropic update. That implies a much more likely future filter, as Katja has argued, and the best guess is then that we are in a universe with large amounts of life, and that we are overwhelmingly likely to soon die.

(Of course, taking this line of argument to its extreme, we are even more likely to be in a simulation.)

Action-wise, the main upshot is that we ought to be much more interested in averting apparently-insurmountable local risks than we otherwise would be. For example, one might be tempted to simply write off worlds in which there are incredibly potent information hazards that almost always end civilizations at a certain stage of development, since that seems like a hopeless situation. But the anthropic update suggests that such situations contain so many observers like us that it can roughly cancel out the hopelessness.

More precisely, the great filter argument suggests that increasing your survival probability by 10% in doomed worlds is actually very good, of the same order of magnitude as decreasing risk by 10% in a "normal" world with doom probabilities <50%.

(The importance of coping with nearly-certain doom still depends on the probability that we assign to settings of background variables implying nearly-certain doom. That probability seems quite low to me, since it's easy to imagine worlds like ours with strong enough world government that they could cope with almost arbitrary technological risks.)
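For intuition, here is a minimal sketch of the anthropic update this comment relies on. All numbers are illustrative assumptions, not anyone's actual estimates: under SIA-style reasoning a hypothesis is weighted by how many observers in our situation it contains, so late-filter ("doomed") worlds, which contain far more civilizations at our stage, dominate the posterior even from an even prior.

```python
# Two hypotheses about the background variables, with an even prior (assumed).
prior = {"early_filter": 0.5, "late_filter": 0.5}

# If the filter is late, far more civilizations reach our stage of development,
# so that hypothesis contains many more observers in our epistemic situation.
# These counts are made-up illustrative numbers.
observers_like_us = {"early_filter": 1.0, "late_filter": 1_000.0}

# SIA-style update: weight each hypothesis by its number of observers like us.
weights = {h: prior[h] * observers_like_us[h] for h in prior}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

print(posterior)  # late-filter ("doomed") worlds dominate: ~0.999 vs ~0.001
```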

My intuition is that people should actually bet on current anthropic reasoning less than they do. The reason is that it is dangerously easy to construct simple examples with some small integer number of universes, and I believe there is a significant chance these do not generalize to the real system in some non-obvious way.

One of the more specific reasons for this intuition is that it is actually quite hard to do any sort of "counting" of observers even in the very non-speculative setting of quantum mechanics. When you move further in the direction of Tegmark's mathematical universe, I would expect the problem to get harder.

I'm not sure I understand why they're against point estimates. As long as the points match the means of our estimates for the variables, the points multiplied should match the expected value of the distribution.

Because people draw incorrect conclusions from the point estimates. You can have a high expected value of the distribution (e.g. "millions of civilizations") while at the same time having a big part of the probability mass on outcomes with just one civilization, or few civilizations far away.

I think the real point here (as I've commented elsewhere) isn't that using point estimates is inherently a mistake, it's that the expected value is not what we care about. They're valid for that, but not for the thing we actually care about, which is P(N=0).
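As a toy illustration of that distinction (the distribution below is a made-up wide lognormal, not the paper's posterior): the mean number of civilizations can be enormous while the probability of being effectively alone stays large.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical wide distribution over N, the number of civilizations:
# ln N is normal with a large spread (parameters chosen only for illustration).
mu, sigma = 0.0, 5.0
N = rng.lognormal(mu, sigma, 1_000_000)

print(f"E[N] (analytic) = {np.exp(mu + sigma**2 / 2):.3g}")  # huge mean, ~2.7e5
print(f"median of N     = {np.median(N):.3g}")               # around 1
print(f"P(N < 1)        = {(N < 1).mean():.2f}")             # ~0.5: we may well be alone
```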

I'm skeptical that anyone ever made that mistake. Can you point to an example?

The paper doesn't claim anyone did, does it?

Made what mistake, exactly?

What do you mean by "real point"? Don't you mean that the point of the paper is that someone makes a particular mistake?

I mean the mistake of computing the expected number rather than the probability. I guess the people in the 60s, like Drake and Sagan, probably qualify. They computed an expected number of planets, because that's what they were interested in, but were confused because they mixed it up with probability. But after Hart (1975) emphasized the possibility that there is no life out there, people asked the right question. Most of them say things like "Maybe I was wrong about the probability of life." That's not the same as doing a full Bayesian update, but surely it counts as not making this mistake.

It's true that Patrick asserts this mistake. And maybe the people making vague statements of the form "maybe I was wrong" are confused, but not confused enough to make qualitatively wrong inferences.

Huh, interesting. I have to admit I'm not really familiar with the literature on this; I just inferred this from the use of point estimates. So you're saying people recognized that the quantity to focus on was P(N>0) but used point estimates anyway? I guess what I'm saying is, if you ask "why would they do that", I would imagine the answer to be, "because they were still thinking of the Drake equation, even though it was developed for a different purpose". But I guess that's not necessarily so; it could just have been out of mathematical convenience...

Definitely mathematical convenience. In many contexts people do sensitivity analysis instead of Bayesian updates. It is good to phrase things as Bayesian updates, if only as a different point of view, but even when that is the better thing to do (which in this case I do not believe), trumpeting it as right and the other method as wrong is the worst kind of mathematical triumphalism that has destroyed modern science.

Not quite. Expected value is linear but doesn't commute with multiplication. Since the Drake equation is pure multiplication, you could use point estimates of the means in log space and sum those to get the mean in log space of the result, but even then you'd *only* have the mean of the result, whereas what would really be a "paradox" is if P(N=0) turned out to be tiny.

The authors grant Drake's assumption that everything is uncorrelated, though.

You don't need any correlation between X and Y to have E[XY] ≠ E[X]E[Y]. Suppose both variables are 1 with probability .5 and 2 with probability .5; then their mean is 1.5, but the mean of their products is 2.25.

Indeed, each has a mean of 1.5; so the product of their means is 2.25, which equals the mean of their product. We do in fact have E[XY]=E[X]E[Y] in this case. More generally we have this iff X and Y are uncorrelated, because, well, that's just how "uncorrelated" in the technical sense is defined. I mean if you really want to get into fundamentals, E[XY]-E[X]E[Y] is not really the most fundamental definition of covariance, I'd say, but it's easily seen to be equivalent. And then of course either way you have to show that independent implies uncorrelated. (And then I guess you have to do the analogues for more than two, but...)

Gah, of course you're correct. I can't imagine how I got so confused but thank you for the correction.
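A quick numerical check of the point settled in this exchange, using the same toy variables (each 1 or 2 with probability one half, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Independent case: the example from the thread.
X = rng.integers(1, 3, n).astype(float)   # 1 or 2, each with probability 1/2
Y = rng.integers(1, 3, n).astype(float)
print((X * Y).mean(), X.mean() * Y.mean())  # both ~2.25, so E[XY] = E[X]E[Y]

# Perfectly correlated case (Y = X): now E[XY] = E[X^2] = 2.5, not 1.5^2 = 2.25.
print((X * X).mean(), X.mean() ** 2)
```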

Removed from the frontpage for now, since we try to keep frontpage discussion free from being primarily about the rationality community and its specific structure. I would recommend putting the last section into its own post, which is then on your personal blog, and then I am happy to promote this to the frontpage.

Done. Note it was not about the rationality community, but about the broader set of people thinking about this problem.


What else to notice?

On a meta level, it seems to me seriously important to notice that it took so long for some researchers to notice this and do the statistics right. Meanwhile, lots of highly speculative mechanisms resolving the largely non-existent paradox were proposed. This may indicate something important about the community. As an example: might there be a strong bias toward searching for grand, intellectually intriguing solutions?

If an intellectual community suppresses attempts to promote its object-level epistemological failures to attention and cause appropriate meta-level updates to happen, then it's going to stop having an epistemology before long.

That's certainly true and a problem. If you have some ideas about how to avoid it (in this case or more generally) I'd be interested to read them; feel free to post in meta with some thoughts/ideas, or write them as comments in the last meta thread on this topic.

My bad, I just read the first four paragraphs and then moved it to frontpage. Will take this as data that I should read more carefully before promoting.

Possibility: if panspermia is correct (the theory that life is much older than Earth and has been seeded on many planets by meteorite impacts), then we might not expect to see other civilizations advanced enough to be visible yet. If evolving from the first life to roughly human levels takes around the current lifetime of the universe, rather than of the Earth, not observing extraterrestrial life shouldn't be surprising! Perhaps the strongest evidence for this is that the number of codons in observed genomes over time (including as far back as the Paleozoic) increases on a fairly steady logarithmic trend, which extrapolates back to shortly after the birth of the universe.
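For concreteness, here is what that extrapolation looks like as a back-of-the-envelope calculation; the doubling time and complexity figures below are illustrative assumptions, not measurements from the genomic literature.

```python
import numpy as np

# Assumed inputs (illustrative only):
doubling_time_myr  = 350.0   # hypothetical doubling time of genome complexity, in Myr
present_complexity = 3e9     # hypothetical present-day genome complexity (e.g. base pairs)
minimal_complexity = 1e2     # hypothetical complexity of the earliest replicators

# If complexity grows exponentially (a straight line on a log plot), the time
# elapsed since the earliest replicators is the number of doublings times the
# doubling time.
doublings = np.log2(present_complexity / minimal_complexity)
origin_bya = doublings * doubling_time_myr / 1000.0

print(f"{doublings:.1f} doublings -> origin of life ~{origin_bya:.1f} billion years ago")
# With these numbers the extrapolated origin predates the Earth (~4.5 billion years),
# which is the panspermia-flavoured conclusion described above.
```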