Recently the Large Hadron Collider was damaged by a mechanical failure.  This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.

Inevitably, many commenters said, "Anthropic principle!  If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"

This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction.  However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all.  (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)

As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry.  However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?"  This tells you how low your prior probability is for the hypothesis.  If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected it from the beginning.  But if it comes up heads 100 times, it's taking you too long to notice.

So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation?  10?  20?  50?

After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?

How Many LHC Failures Is Too Many?
140 comments

"But if it comes up heads 100 times, it's taking you too long to notice"

Ros. Heads. (He puts it in his bag. The process is repeated.) Heads. (Again.) Heads. (Again.) Heads. (Again.)
Guil. (Flipping a coin) There is an art to the building of suspense.
Ros. Heads.
Guil. (Flipping another) Though it can be done by luck alone.
Ros. Heads.
Guil. If that's the word I'm after.
Ros. (Raises his head) 76! (Guil gets up but has nowhere to go. He spins the coin over his shoulder without looking at it.) Heads.
Guil. A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability. (He flips a coin back over his shoulder.)
Ros. Heads.
Guil. (Musing) The law of probability, it has been asserted, is something to do with the proposition that if six monkeys - (He has surprised himself) if six monkeys were. . .
Ros. Game?
Guil. Were they?
Ros. Are you?

-- Rosencrantz & Guildenstern Are Dead, Tom Stoppard, Act I

[anonymous]
Might want to reformat that, looks like markdown did you in.
Greg21

Perhaps the question could also be asked this way: How many times does the LHC have to inexplicably fail before we take it as scientific confirmation that world-destroying black holes and/or strange particles are indeed produced by LHC-level collisions? Would we treat such a scenario as a successful experimental result for the LHC?

AlexanderRM
I wouldn't describe a result that eliminated the species conducting the experiment in the majority of world-branches as "successful", although I suppose the use of LHCs could be seen as an effective use of quantum suicide (two species which want the same resources meet, flip a coin, loser kills themselves - this might have problems with enforcement) if every species invariably experiments with them before leaving their home planet.

On the post as a whole: I was going to say that since humans in real life don't use the anthropic principle in decision theory, that seems to indicate that applying it isn't optimal (if your goal is to maximize the number of world-branches with good outcomes). But I realized that humans are able to observe other humans and what sorts of things tend to kill them, along with hearing about those things from other humans when we grow up, so we're almost never having close calls with death frequently enough to need to apply the anthropic principle. If a human were exploring an unknown environment with unknown dangers by themselves, and tried to consider the anthropic principle... that would be pretty terrifying.

John Cramer wrote a novel with an anthropic explanation for the cancellation of the SSC:

http://www.amazon.com/Einsteins-Bridge-John-Cramer/dp/0380788314

Just to make sure I'm getting this right... this is sort of along the same lines of reasoning as quantum suicide?

It depends on the type of "fail" - quenches are not uncommon. And also their timing - the LHC is so big, and it's the first time it's been operated. Expect malfunctions.

But if it were tested for a few months before, to make sure the mechanics were all engineered right, etc., I guess it would only take a few (less than 10) instances of the LHC failing shortly before it was about to go big for me to seriously consider an anthropic explan...

Greg21

Another thought. Suppose a functioning LHC does in fact produce world-destroying scenarios. Would we see: A) an LHC with mechanical failures? or B) an LHC where all collisions happen except world-destroying ones? If B, would the LHC be giving us biased experimental results?

I'm confused by your last comment - what use would the LHC be in a global economic crisis or nuclear war? I don't suppose you mean something like "rig the LHC to activate if the market does not recover by date X according to measure Y, and then we will only be able to observe the scenario in which the market does recover" or something like that, do you?

drethelin
I think the idea is you only run it if you're already indifferent to the world being destroyed?
momom2
By precommitting to firing up the LHC in difficult moments, assuming firing up the LHC destroys the world, you end up observing only universes where difficult moments don't happen (at a cost I would describe as "at best ambiguous").

IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.

wafflepudding
I'd agree that certain worlds would have the building of the LHC pushed back or moved forward, but I doubt there would be many where the LHC was just never built. Unless human psychology is expected to be that different from world to world?
hairyfigment
...As I pointed out recently in another context, humans have existed for tens of thousands of years or more. Even civilization existed for millennia before obvious freak Isaac Newton started modern science. Your position is a contender for the nuttiest I've read today. Possibly it could be made better by dropping this talk of worlds and focusing on possible observers, given the rise in population. But that just reminds me that we likely don't understand anthropics well enough to make any definite pronouncements.
wafflepudding
Are you responding to "Unless human psychology is expected to be that different from world to world?"? Because that's not my position, I'd think that most things recognizable as human will be similar enough to us that they'd build an LHC eventually. I guess I'm not exactly sure what you're getting at.
hairyfigment
I am strongly disagreeing with you. The cultures that existed on Earth for tens of millennia or more were recognizably human; one of them built an LHC "eventually", but any number of chance factors could have prevented this. Like I just said, modern science started with an extreme outlier.
wafflepudding
Gotcha. So, assuming that the actual Isaac Newton didn't rise to prominence*, are you thinking that human life would usually end before his equivalent came around and the ball got rolling? Most of our existential risks are manmade AFAICT. Or do you think that we'd tend to die in between him and when someone in a position to build the LHC had the idea to build the LHC? Granted, him being "in a position to build the LHC" is conditional on things like a supportive surrounding population, an accepting government, etcetera; but these things are ephemeral on the scale of centuries. To summarize: yes, some chance factor would definitely prevent us from building the LHC at the exact time we did, but with a lot of time to spare, some other chance factor would prime us to build it somewhen else. Building the LHC just seems to me like the kind of thing we do. (And if we die from some other existential risk before Hadron Colliding (Largely), that's outside the bounds of what I was originally responding to, because no one who died would find himself in a universe at all.)

*Not that I'm condoning this idea that Newton started science.
hairyfigment
That's what I just said. You seem to have an alarming confidence in our ability to bounce back from ephemeral shifts. If there were actually some selection pressure against a completed LHC, then it would take a lot less than a repetition of this to keep us shifted away from building one.
ChristianKl
There's a lot of history of science and it generally doesn't find that it all hinges on one event like Newton.
hairyfigment
We're not talking about all of science. (Though I stand by my claim that he started it, unless you can point to someone else writing down a workable scientific method beforehand.) We're talking about whether or not anthropic reasoning tells us to expect to see people building the LHC, at a cost of $1 billion per year. Thatcher apparently rejected the idea as presented, and rightly too if the Internet accurately reported the pitch they made to her. (In this popular account, the Higgs mechanism doesn't "explain mass," it replaces one arbitrary number with another! I still don't know the actual reasons for believing in it!) So we don't need to imagine humanity dying out, and we don't need to assume that civilization collapses after using up irreplaceable fossil fuels. (Though that one seems somewhat plausible.) I don't think we even need to assume religious tyranny crushes respect for science. Slightly less radical changes to the culture of a small fraction of the world seem sufficient to prevent the LHC expenditure for the foreseeable future. Add in uncertainty about various risks that fall short of total annihilation, and this certainty starts to look ridiculous. Now as I said, one could make a different anthropic argument based on population in various 'worlds'. But as I also said, I don't think we know enough to get a high probability from that either.
ChristianKl
Hakob Barseghyan teaches in his History and Philosophy of Science course that Descartes started it. The hypothetico-deductive method (what's commonly called the scientific method) is a result of the philosophic commitments of Descartes' thought.
hairyfigment
The video is somewhat odd in that he claims Descartes had no problem with experiments, but I recall the philosopher proposing rules which contradicted experiments and hand-waving this by appealing to the impossibility of observing bodies in isolation. In any case, Hakob does make clear that Descartes used a more Aristotelian method as a rhetorical device to persuade Aristotelians. (In effect, he proved the method of intuitive truth unreliable by producing a contradiction.) I don't believe his work includes any workable method you could use to do science, while Newton's rules for natural philosophy seem like an OK approximation.
ChristianKl
The main point is that if you buy the philosophic commitments of Descartes, the hypothetico-deductive method is a straightforward conclusion. Newton might have expressed the method more clearly, but various people moved in that direction once Descartes successfully argued against the old way.
hairyfigment
Possibly, but I wouldn't say the popes started science by being terrible rulers, thereby creating a clearer distinction between religious and secular.
ChristianKl
Given that Newton was a person who cared deeply about religion, that would be a bad example. He spent a lot of time on biblical chronology. You claimed that science wouldn't have been invented at the time without Newton. It's historically no accident that Leibniz discovered calculus independently of Newton. The interest in numerical reasoning was already there. To get back to the claim: following the scientific method and explicitly writing it down are two different activities. It takes time to move from the implicit to the explicit.
hairyfigment
But Newton didn't propose a religious method for science, which is my point. Did you think I meant that the popes turned Dante atheist? What they did was give him a desire for a secular ruler and an "almost messianic sense of the imperial role". That sort of thinking may have given rise to Descartes' science fiction, so to speak - secular aspirations which go beyond even a New Order of the Ages. So there are a few possible prerequisites for a scientific method. As for someone else writing one down, maybe; what we observe is that the best early formulation came from a brilliant freak.
ChristianKl
Why do you think that Newton's proposal of his method of science had something to do with a desire for a secular ruler?
hairyfigment
Why do you think Newton's focus on new observations/experiments came from Cartesian ontology, when Newton doesn't wholly buy that ontology? I'm saying the popes inadvertently created a separate concept of secular aspirations - often opposed to religious authorities, though not to God if he turns out to exist. This "imperial role" business is arguably a rival form of the idea, though Newton did in fact work for the Crown.
ChristianKl
My main source is the lecture series I linked to above; the Newtonian worldview is presented in the lecture that follows the one I linked. At the time, the Crown was the head of the church in England.
ChristianKl
Asking on StackExchange gives a variety of people before Newton: http://hsm.stackexchange.com/questions/5275/was-isacc-newton-the-first-person-to-articulate-the-scientific-method-in-europe/5277#5277
hairyfigment
Even there, someone points out that Bacon wasn't big on math. I'll grant you I should give him more credit for a sensible conclusion on heat, and for encouraging experiments.

Sorry, make that "happened not to build one that worked".

Say our prior odds for the LHC being a destroyer of worlds are a billion to one against. Then this hypothesis is at negative ninety decibels. Conditioned on the hypothesis being true, the probability of observing failure is near unity, because in the modal worlds where the world really is destroyed, we don't get to make an observation--or we won't get to remember it very long. Say that conditioned on the hypothesis being false, the probability of observing failure is one-fifth--this is very delicate equipment, yes? So each observation of failure gives us 10log(1/0.2), or about seven decibels of evidence for the hypothesis. We need ninety decibels of evidence to bring us to even odds; ninety divided by seven is about 12.86. So under these assumptions it takes thirteen failures before we believe that the LHC is a planet-killer.
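The arithmetic in this comment can be checked in a few lines; the billion-to-one prior and the one-in-five failure probability are the commenter's assumed numbers, not estimates:

```python
import math

# Assumed inputs from the comment above: billion-to-one prior odds,
# and a 1-in-5 chance of an "innocent" mechanical failure.
prior_db = 10 * math.log10(1e-9)        # prior odds of ~1e-9: about -90 decibels
evidence_db = 10 * math.log10(1 / 0.2)  # each observed failure: about 7 decibels
failures_needed = math.ceil(-prior_db / evidence_db)  # 13 failures to reach even odds
```

Thirteen failures, matching the comment's "ninety divided by seven is about 12.86", rounded up.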

First collisions aren't scheduled to have happened yet, are they? In which case, the failure can't be seen as anthropic evidence yet, since we might as well be in a world where it hasn't failed, since such a world wouldn't have been destroyed yet in any case.

But if I'm not mistaken, even old failures will become evidence retrospectively once first collisions are overdue, since (assuming the unlikely case of the LHC actually being dangerous) all observers still alive would be in a world where the LHC failed; when it failed being irrelevant.

As much as the AP fascinates me, it does my head in. :)

Eliezer, it's a good question and a good thought experiment, except for the last sentence, which assumes a conservation of us as subjective conscious entities that the anthropic principle doesn't seem to me to endorse.

You can also add into your anthropic principle mix the odds that increasing numbers of experts think we can solve biological aging within our lifetime - or perhaps that should be called the solipsistic principle, which may be more relevant for us as persisting observers.

At the risk of asking the obvious:

Does the fact that no one has yet succeeded in constructing transhuman AI imply that doing so would necessarily wipe out humanity?

CarlShulman
No.
[anonymous]
But does it increase the probability of it, and if so, by how much?
Yvain

Originally I was going to say yes to the last question, but after thinking over why a failure of the LHC now (before it would destroy Earth) doesn't let me conclude anything by the anthropic principle, I'm going to say no.

Imagine a world in which CERN promises to fire the Large Hadron Collider one week after a major terrorist attack. Consider ten representative Everett branches. All those branches will be terrorist-free for the next few years except number 10, which is destined to suffer a major terrorist attack on January 1, 2009.

On December 31, 2008, Yvains 1 through 10 are perfectly happy, because they live in a world without terrorist attacks.

On January 2, 2009, Yvains 1 through 9 are perfectly happy, because they still live in worlds without terrorist attacks. Yvain 10 is terrified and distraught, both because he just barely escaped a terrorist attack the day before, and because he's going to die in a few days when they fire the LHC.

On January 8, 2009, CERN fires the LHC, killing everyone in Everett branch 10.

Yvains 1 through 9 aren't any better off than they would've been otherwise. Their universe was never destined to have a terrorist attack, and it still hasn't had a terror...

Unless you just consider it a Mouse That Roared scenario in which no one dares commit a terrorist attack under threat of global annihilation.

(just read the book, it's well worth it)

Blowing up the world in response to terrorist attack is like shooting yourself in the head when someone steps on your foot, to make subjective probability of your feet being stepped on lower.

Just realized that several sentences in my previous post make no sense because they assume Everett branches were separate before they actually split, but think the general point still holds.

AlexanderRM
Some of the factors leading to a terrorist attack succeeding or failing would be past the level of quantum uncertainty before the actual attack happens, so unless the terrorists are using bombs set up on the same principle as the trigger in Schrödinger's Cat, the branches would have split already before the attack happened.

This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)

I can only see this statement making any sense if you think we should behave as if nature first randomly picked a value of a global cross-world time p...

Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"

This remark may be somewhat premature

Uh, isn't it actually nonsense? The anthropic principle is supposed to explain how you got lucky enough to exist at all, not how you got lucky enough to keep existing.

The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world.
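A minimal sketch of the algorithm being mocked (sometimes called "quantum bogosort"); the world-destroying step is, of course, hypothetical, represented here by an exception:

```python
import random

def quantum_bogosort(xs):
    """One shuffle; any branch where the result isn't sorted 'destroys the world'."""
    xs = list(xs)
    random.shuffle(xs)  # a linear number of swaps (ignoring the cost of the random bits)
    if any(a > b for a, b in zip(xs, xs[1:])):
        raise RuntimeError("destroy world")  # surviving observers never see this branch
    return xs
```

In every branch where the function returns at all, it returns a sorted list after one linear-time pass - which is exactly the sleight of hand the comment is objecting to.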

datadataeverywhere
Strictly speaking, how does one randomize a list in linear time? Even picking a uniformly-randomized list from all possible sequences is out of reach for us under most scenarios with reasonably long lists.
wedrifid
A uniform randomization may not be possible but you can get an arbitrarily well randomized list in linear time. That is all that is needed for the purposes of the sorting. (You would just end up destroying the world 1 + (1 / arbitrarily large) as many times as with a uniform distribution.)
datadataeverywhere
Algorithms like a modified Fisher-Yates shuffle run in linear time if you're just measuring reads and writes, but O(lg(n!)) = O(n log n) > O(n) bits are required to specify which permutation is being chosen, so unless generating random numbers is free, shuffling is always O(n log n). In real life, we don't use PRNGs with sufficiently long cycle times, so we usually get linear-time shuffles by discarding the vast majority of the potential orderings.
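For reference, a standard Fisher-Yates shuffle really does use only O(n) reads and writes; the hidden O(n log n) cost this comment points to is in the random bits consumed:

```python
import random

def fisher_yates(xs):
    """Uniform shuffle with O(n) array operations."""
    xs = list(xs)
    for i in range(len(xs) - 1, 0, -1):
        # randrange(i + 1) consumes about log2(i + 1) random bits, so the
        # total bit cost over the whole loop is O(log2(n!)) = O(n log n).
        j = random.randrange(i + 1)
        xs[i], xs[j] = xs[j], xs[i]
    return xs
```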
wedrifid
That seems to be a rational decision for people with certain value systems. Specifically, those that don't care about their quantum measure. (Yes, that value system is at least as insane as Clippy's.) "Quantum Sour Grapes" seems like a suitable label for the strategy. ;)
wedrifid
It just occurred to me that you would want to be REALLY careful that there wasn't a bug in either your shuffling or list checking code. If you started using quantum suicide for all your problems eventually you'd make a mistake. :)
TheOtherDave
If I'm following the reasoning (if "reasoning" is in fact the right word, which I'm unconvinced of), you wouldn't make any world-destroying mistakes that it's possible for you not to make, since only the version of you that (by chance) made no such mistakes would survive. And, obviously, there's no point in even trying to avoid world-destroying mistakes that it's not possible for you not to make.
db

The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world.

Maybe it's stupid and evil, but what stops it from actually working?

"How many times does a coin have to come up heads before you believe the coin is fixed?"

I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?" Which, in my opinion, makes no sense.

I bet the terrorists would target the LHC itself, so after the terrorist attack there's nothing left to turn on.

Oh God I need to read Eliezer's posts more carefully, since my last comment was totally redundant.

As others have noted, it seems straightforward to use Bayes' rule to decide when to believe how much that LHC malfunctions were selection effects - the key question is the prior. As to the last question, even if I was confident I lived in an infinite universe and so there was always some version of me that lived somewhere, I still wouldn't want to kill off most versions of me. So all else equal I'd never want to fire the LHC if I believed doing so killed that version of me.

Brilliant post.

I almost want it to fail a few more times so that the press latch on to this idea. Imagine journalists trying to (a) understand and (b) articulate the anthropic principle across many worlds. Would be hilarious.

Actually, failures of the LHC should never have any effect at all on our estimate of the probability that if it did not fail it would destroy Earth.

This is because the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth. A simple application of Bayes' rule.

Now, the reason you come to a wrong conclusion is not because you wrongly applied the anthropic principle, but because you failed to apply it (or applied it selectively). You realized that the probability of failure given survival is higher un...

To clarify, I mean failures should not lead to a change of probability away from the prior probability; of course they do result in a different probability estimate than if the LHC succeeded and we survived.

If: (The probability that the LHC's design is flawed and because of this flaw the LHC will never work) is much, much greater than (the probability that the LHC would destroy us if it were to function properly), then regardless of how many times the LHC failed it would never be the case that we should give any significant weight to the anthropic explanation.

Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high then we should also ignore the anthropic explanation.

My prior probability for the existence of a secret and powerful crackpot group willing to sabotage the LHC to prevent it from "destroying the world" is larger than my prior probability for the LHC-actually-destroying-the-world scenarios being true, so after many mechanical failures I would rather believe the first hypothesis than the second one.

Simon: the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth.

But - if the LHC was Earth-fatal - the probability of observing a world in which the LHC was brought fully online would be zero.

(Applying anthropic reasoning here probably makes more sense if you assume MWI, though I suspect there are other big-world cosmologies where the logic could also work.)

Allan, I am of course aware of that (actually, it would probably take time, but even if the annihilation were instantaneous the argument would not be affected).

There are 4 possibilities:

  1. The LHC would destroy Earth, but it fails to operate
  2. The LHC destroys Earth
  3. The LHC would not destroy Earth, but it fails anyway
  4. The LHC works and does not destroy Earth

The fact that conditional on survival possibility 2 must not have happened has no effect on the relative probabilities of possibility 1 and possibility 3.
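This independence claim can be checked with a toy calculation; the two input probabilities below are illustrative assumptions, not estimates:

```python
import math

p_W, p_F = 1e-9, 0.2  # assumed: P(LHC would destroy Earth), P(mechanical failure)

p1 = p_W * p_F              # 1. world-killer, but it fails to operate
p2 = p_W * (1 - p_F)        # 2. world-killer, and it works: no observers remain
p3 = (1 - p_W) * p_F        # 3. harmless, but it fails anyway
p4 = (1 - p_W) * (1 - p_F)  # 4. harmless, and it works

p_survive = 1 - p2
# Conditioning on survival rescales cases 1 and 3 by the same factor,
# leaving their ratio -- and hence P(world-killer | failure) -- unchanged.
assert math.isclose((p1 / p_survive) / (p3 / p_survive), p1 / p3)
```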

But the destruction of the Earth in the case of the creation of a black hole or a strangelet would not be instantaneous, as it is in the YouTube movie.

A BH will grow slowly, but exponentially. By some assumptions it could take 27 years to eat the Earth. So we will have time to understand our mistake and to suffer from it. The main harm from the BH will be its energy release. And if the BH is in the centre of the Earth, this energy will come out as violent volcanic eruptions.

Because of the exponential growth of the BH, the biggest part of the energy will be released in the last years of its exist...

"After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?"

After observing 100 failures in a row, I would expect that a failure would occur after the next attempt to switch it on too. So it doesn't seem like a reliable means of preventing terrorism or an economic crash, even if the anthropic multi-world "ideology" were true.

On the other hand, if somebody were able to show that the amplitude of LHC's unexpected failure for technical reasons was significantly lower than the amplitude of terrorist-free future...

IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.
Incorrect reasoning; every branching compatible with sentient organisms contains sentient organisms monitoring its conditions.

The organisms that are in branchings in which LHC facilities were built perceive themselves to be in such a world, no matter how improbable it is. It doesn't matter if it's quite unlikely for you to win a lottery -- if you do win a lottery, you'll eventually accumulate enough data to conclude that's precisely what's happened.

A BH will grow slowly, but exponentially. By some assumptions it could take 27 years to eat the Earth. So we will have time to understand our mistake and to suffer from it.

I am curious about these assumptions. A BH with the mass of the whole Earth has a Schwarzschild radius of about 1 cm. At the start, the BH would be much lighter, so it's not clear to me how this BH, sitting in the centre of the Earth, could eat anything.

simon,

Actually, I think it might (though I'm obviously open to correction) if you take the anthropic principle as a given (which I do not).

One thing you're missing is that there are two events here, call them A and B:

A. The LHC would destroy Earth
B. The LHC works

So the events, which are NOT independent, should look more like:

  1. The LHC would destroy earth, and it fails to operate
  2. The LHC would destroy earth, and it works
  3. The LHC would not destroy Earth, and it fails to operate
  4. The LHC would not destroy Earth, and it works

Outcome 2 is "closer" to outcom...

Robinson, I could try to nitpick all the things wrong with your post, but it's probably better to try to guess at what is leading your intuition (and the intuition of others) astray.

Here's what I think you think:

  1. Either the laws of physics are such that the LHC would destroy the world, or not.
  2. Given our survival, it is guaranteed that the LHC failed if the universe is such that it would destroy the world, whereas if the universe is not like that, failure of the LHC is not any more likely than one would expect normally.
  3. Thus, failure of the LHC is evidence
...

I'm with Brian Jaress, who said, 'I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?"' OTOH, I have a very poor head for probabilities, Bayesian or otherwise, and in fact the Monty Hall thing still makes my brain hurt. So really, I make a lousy "me too" here.

That said: Could someone explain why repeated mechanical failures of the LHC should in any way imply the likelihood of it destroying the world, thus invoking the anthropic principle? Given the crowd, I'm assuming there's more to it than "OMG technology is scary and it doesn't even work right!", but I'm not seeing it.

Okay, it scares me when I realize that I've been getting probability theory wrong, even though I seemed to be on perfectly firm ground. But I'm finding that it's even more scary that even our hosts and most commenters here seem to be getting it backwards -- at least Robin; given that the last question in the post seems so obviously wrong for the reasons pointed out already, I'm starting to wonder whether the post is meant as a test of reasoning about probabilities, leading up to a post about how Nature Does Not Grade You On A Curve (grumble :)). Thanks to ...

The intuition behind the math: If the LHC would not destroy the world, then on date X, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures, and most Everett branches have the LHC happily chugging ahead. If the LHC would destroy the world, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures -- and most Everett branches have Earth munched up into a black hole.

The very small number of Everett branches that have the LHC non-working due to a string ...

I'm going to try another explanation that I hope isn't too redundant with Benja's.

Consider the events

W = the LHC would destroy Earth
F = the LHC fails to operate
S = we survive (= F OR not W)

We want to know P(W|F) or P(W|F,S), so let's apply Bayes.

First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)

Bayes:

P(W|F) = P(F|W)P(W)/P(F)

Note that none of these probabilities are conditional on survival. So unless in the absence of any selection effects the probability of failure still depends on whether the LHC would... (read more)
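Plugging illustrative numbers into the Bayes step above makes the point concrete: with a prior P(W) = 0.5 and a mechanical-failure probability that (absent any selection effect) is the same whether or not the LHC is dangerous, the posterior equals the prior.

```python
# Illustrative values only -- the conclusion depends only on the
# failure probability being independent of W.
p_W = 0.5
p_F_given_W = 0.01
p_F_given_not_W = 0.01   # identical by assumption

p_F = p_F_given_W * p_W + p_F_given_not_W * (1 - p_W)
p_W_given_F = p_F_given_W * p_W / p_F
print(p_W_given_F)   # 0.5 -- observing failure moves the prior not at all
```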

Benja: Good explanation! Intuitively, it seems to me that your argument holds if there are Tegmark IV branches with different physical laws, but not if whether the LHC would destroy Earth is fixed across the entire multiverse. (Only in the latter case, if it would destroy the Earth, the objective frequency of observations of failure - among observations, period - would be 1.)

Benja, I'm not really smart enough to parse the maths, but I can comment on the intuition:

The very small number of Everett branches that have the LHC non-working due to a string of random failures is the same in both cases [of LHC dangerous vs. LHC safe]

I see that, but if the LHC is dangerous then you can only find yourself in the world where lots of failures have occurred, but if the LHC is safe, it's extremely unlikely that you'll find yourself in such a world.

Thus, if all you know is that you are in an Everett branch in which the LHC is non-working due ... (read more)

Simon's last comment is well said, and I agree with everything in it. Good job, Simon and Benja.

Although the trickiest question was answered by Simon and Benja, Eliezer asked a couple of other questions, and Yvain gave a correct and very clear answer to the final question.

Or so it seems to me.

Here's what that means for improving intuition: one should feel surprised at surviving a quantum suicide experiment, instead of thinking "well, of course I would experience survival".
You can (and should) be surprised that the device failed. You should not be surprised that you survived -- it's the only way you can feel anything at all.

You always survive.

Simon: As I say above, I'm out of my league when it comes to actual probabilities and maths, but:

P(W|F) = P(F|W)P(W)/P(F)

Note that none of these probabilities are conditional on survival.

Is that correct? If the LHC is dangerous and MWI is true, then the probability of observing failure is 1, since that's the only thing that gets observed.

An analogy I would give is:

You're created by God, who tells you that he has just created 10 people who are each in a red room, and depending on a coin flip God made, either 0 or 10,000,000 people who are each in a blue roo... (read more)

If you're conducting an experiment to test a hypothesis, the first thing you have to do is set up the apparatus. If you don't set up the apparatus so it produces data, you haven't tested anything. Just like if you try to take a urine sample, and the subject can't pee. The experiment has failed to produce data, not the same as the data failing to prove the hypothesis.

First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)

With respect for your diligent effort and argument, nonetheless: Fail.

F => S -!-> P(X|F) = P(X|F,S)

In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.

(Had your argument above been correct, the probabilities would have been the same.)

Conditioning on survival, or more precisely, the (continued?) existence of "observers"... (read more)

I retract my endorsement of Simon's last comment. Simon writes that S == (F or not W). False: S ==> (F or not W), but the converse does not hold (because even if F or not W, we could all be killed by, e.g., a giant comet). Moreover, Simon writes that F ==> S. False (for the same reason). Finally, Simon writes, "Note that none of these probabilities are conditional on survival," and concludes from that that there are no selection effects. But the fact that a true equation does not contain any explicit reference to S does not mean that ... (read more)

simon, that's right, of course. The reason I'm dragging branches into it is that for the (strong) anthropic principle to apply, we would need some kind of branching -- but in this case, the principle doesn't apply [unless you and I are both wrong], and the math works the same with or without branching.

Eliezer, huh? Surely if F => S, then F is the same event as (F /\ S). So P(X | F) = P(X | F, S). Unless P(X | F, S) means something different from P(X | F and S)?

Allan, you are right that if the LHC would destroy the world, and you're a surviving observer,... (read more)

While I'm happy to have had the confidence of Richard, I thought my last comment could use a little improvement.

What we want to know is P(W|F,S)

As I pointed out, F => S, so P(W|F,S) = P(W|F)

We can legitimately calculate P(W|F,S) in at least two ways:

1. P(W|F,S) = P(W|F) = P(F|W)P(W)/P(F) <- the easy way

2. P(W|F,S) = P(F|W,S)P(W|S)/P(F|S) <- harder, but still works

there are also ways you can get it wrong, such as:

3. P(W|F,S) != P(F|W,S)P(W)/P(F) <- what I said other people were doing last post

4. P(W|F,S) != P(F|W,S)P(W)/P(F|S) <... (read more)
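The four formulas above can be checked with concrete numbers (my choice, purely illustrative): prior P(W) = 1/2, failure chance f = 1/100 independent of W, and S = (F or not W), so surviving a dangerous working LHC is impossible.

```python
from fractions import Fraction as Fr

pW = Fr(1, 2)
f = Fr(1, 100)

pF = f                           # failure independent of W by assumption
pF_given_W = f
pS = 1 - pW * (1 - f)            # we die only if W and the LHC works
pW_given_S = (pW * f) / pS       # if W, survival requires failure
pF_given_S = pF / pS             # every failure world survives
pF_given_W_and_S = Fr(1)         # given W and survival, F is forced

m1 = pF_given_W * pW / pF                          # 1. the easy way
m2 = pF_given_W_and_S * pW_given_S / pF_given_S    # 2. harder, still works
m3 = pF_given_W_and_S * pW / pF                    # 3. mixes conditioned
m4 = pF_given_W_and_S * pW / pF_given_S            #    and unconditioned terms

print(m1, m2)   # 1/2 1/2 -- the two legitimate routes agree
print(m3, m4)   # 50 101/4 -- not even valid probabilities
```

Routes 1 and 2 agree exactly, while the illegitimate substitutions in 3 and 4 yield values greater than one, which is a quick way to see they cannot be posterior probabilities.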

Allan, oh **, the elementary math in my previous comment is completely wrong. (In the scenario I gave, the probability that you have breast cancer is 1%, not 10%, before taking the test.) My argument doesn't even approximately work as given: if having breast cancer makes it more likely that you get a positive mammography, then indeed getting a positive mammography must make it more likely that you have breast cancer. Sorry!

(I'm still convinced that my argument re the LHC is correct, but I realize that I'm just looking stupid right now, so I'll just shut up for now :-))

Sorry Richard, well of course they aren't necessarily independent. I wasn't quite sure what you were criticising. But I pointed out already that, for example, a new physical law might in principle both cause the LHC to fail and cause it to destroy the world if it did not fail. But I pointed out that this was not what people were arguing, and assuming that such a relation is not the case then the failure of the LHC provides no information about the chance that a success would destroy the world. (And a small relation would lead to a small amount of information, etc.)

Oops, I fail! I thought F >= S meant "F is larger than S". But looking at the definitions of terms, Fail >= Survival must mean "Fail subset_of Survival". (I do protest that this is an odd symbol to use.)

Okay, looking back at the original argument, and going back to definitions...

If you've got two sets of universes side-by-side, one where the LHC destroys the world, and one where it doesn't, then indeed observing a long string of failures doesn't help tell you which universe you're in. However, after a while, nearly all the obs... (read more)

Eliezer, I used "=>" (intending logical implication), not ">=".

I would suggest you read my post above on this second page, and see if that changes your mind.

Also, in a previous post in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.

Eliezer, I used "=>" (intending logical implication), not ">=".

This would seem to explain it.

(I use -> to indicate logical implication and => to indicate a step in a proof, or otherwise implication outside the formal system - I do understand this to be conventional.)

I would suggest you read my post above on this second page, and see if that changes your mind.

Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on p... (read more)

After surviving a few hundred rounds of quantum suicide, the next round will probably kill you.

Are you familiar with the story of the man who got the winning horse race picks in the mail the day before the race was run? Six times in a row his mysterious benefactor was right, even correctly calling a victory for a horse with forty-to-one odds. Now he gets an envelope in the mail from the same mysterious benefactor asking for $1,000 in exchange for the next week's picks. Are you saying he should take the deal and clean up?

Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on physical law plus a presumption of survival.)

You mean you use method 2. Except you don't, or you would come to the same conclusion that I do. Are you claiming that P(W|S) = P(W)? Ok, I suspect you may be applying Nick Bostrom's version of observer selection: hold the probability of each possible version of the universe fixed independent of the number of observers, then divide that probability equally ... (read more)

Whoops, I didn't notice that you did specifically claim that P(W|S)=P(W).

Do you arrive at this incorrect claim via Bostrom's approach, or another one?

This is a subject I've long been meaning to give some thought to, but at the moment I'm pretty swamped - hope to get back to it when I have more time.

Simon, pretty much Bostrom's approach. Self-Sampling without Self-Indication. I know it's wrong but I don't have any better approach to take.

Why do you reject self-indication? As far as I can recall the only argument Bostrom gave against it was that he found it unintuitive that universes with many observers should be more likely, with absolutely no justification as to why one would expect that intuition to reflect reality. That's a very poor argument considering the severe problems you get without it.

I suppose you might be worried about universes with many unmangled worlds being made more likely, but I don't see what makes that bullet so hard to bite either.
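The SSA-versus-SIA disagreement in the exchange above can be made concrete with the same toy numbers used earlier (illustrative only: prior P(W) = 0.5, failure chance f = 0.01 independent of W, equal observer counts in every surviving world):

```python
p_W, f = 0.5, 0.01

# Self-Sampling without Self-Indication (the Bostrom-style position):
# weight each possible world by its prior alone, then ask what a random
# observer within it sees.  Every observer in a surviving W-world sees
# failure; only a fraction f of not-W observers do.
ssa = (1.0 * p_W) / (1.0 * p_W + f * (1 - p_W))
print(round(ssa, 3))   # 0.99 -- failure looks like strong evidence for W

# With Self-Indication: also weight each world by the chance observers
# exist at all, P(S|W) = f versus P(S|not W) = 1.  Each term below is
# prior x P(survivors exist) x P(a survivor sees failure).
sia = (p_W * f * 1.0) / (p_W * f * 1.0 + (1 - p_W) * 1.0 * f)
print(round(sia, 3))   # 0.5 -- the extra factor cancels the selection effect
```

On these assumptions the two rules disagree sharply, which is why the choice between them decides whether a string of LHC failures should count as evidence of danger.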

Wasn't one of the conclusions we arrived at in the quantum mechanics sequence that "observer" was a nonsense, mystical word?