"But if it comes up heads 100 times, it's taking you too long to notice"
Ros. Heads. (He puts it in his bag. The process is repeated.) Heads. (Again.) Heads. (Again.) Heads. (Again.)
Guil. (Flipping a coin) There is an art to the building of suspense.
Ros. Heads.
Guil. (Flipping another) Though it can be done by luck alone.
Ros. Heads.
Guil. If that's the word I'm after.
Ros. (Raises his head) 76! (Guil gets up but has nowhere to go. He spins the coin over his shoulder without looking at it.) Heads.
Guil. A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability. (He flips a coin back over his shoulder.)
Ros. Heads.
Guil. (Musing) The law of probability, it has been asserted, is something to do with the proposition that if six monkeys - (He has surprised himself) if six monkeys were. . .
Ros. Game?
Guil. Were they?
Ros. Are you?
-- Rosenkrantz & Guildenstern Are Dead, Tom Stoppard, Act I
Perhaps the question could also be asked this way: How many times does the LHC have to inexplicably fail before we take it as scientific confirmation that world-destroying black holes and/or strange particles are indeed produced by LHC-level collisions? Would we treat such a scenario as a successful experimental result for the LHC?
John Cramer wrote a novel with an anthropic explanation for the cancellation of the SSC:
http://www.amazon.com/Einsteins-Bridge-John-Cramer/dp/0380788314
Just to make sure I'm getting this right... this is sort of along the same lines of reasoning as quantum suicide?
It depends on the type of "fail" - quenches are not uncommon. And also their timing - the LHC is so big, and it's the first time it's been operated. Expect malfunctions.
But if it were tested for a few months before, to make sure the mechanics were all engineered right, etc., I guess it would only take a few (less than 10) instances of the LHC failing shortly before it was about to go big for me to seriously consider an anthropic explan...
Another thought. Suppose a functioning LHC does in fact produce world-destroying scenarios. Would we see: A) an LHC with mechanical failures? or B) an LHC where all collisions happen except world-destroying ones? If B, would the LHC be giving us biased experimental results?
I'm confused by your last comment - what use would the LHC be in a global economic crisis or nuclear war? I don't suppose you mean something like "rig the LHC to activate if the market does not recover by date X according to measure Y, and then we will only be able to observe the scenario in which the market does recover" or something like that, do you?
IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.
Say our prior odds for the LHC being a destroyer of worlds are a billion to one against. Then this hypothesis is at negative ninety decibels. Conditioned on the hypothesis being true, the probability of observing failure is near unity, because in the modal worlds where the world really is destroyed, we don't get to make an observation--or we won't get to remember it very long. Say that conditioned on the hypothesis being false, the probability of observing failure is one-fifth--this is very delicate equipment, yes? So each observation of failure gives us 10log(1/0.2), or about seven decibels of evidence for the hypothesis. We need ninety decibels of evidence to bring us to even odds; ninety divided by seven is about 12.86. So under these assumptions it takes thirteen failures before we believe that the LHC is a planet-killer.
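The decibel arithmetic above is easy to reproduce; the billion-to-one prior and the one-in-five failure probability are, of course, illustrative assumptions:

```python
import math

prior_odds = 1e-9                       # a billion to one against
prior_db = 10 * math.log10(prior_odds)  # -90 decibels

# Each failure is a likelihood ratio of P(F|W)/P(F|not W) = 1/0.2
evidence_db = 10 * math.log10(1 / 0.2)  # about 7 decibels per failure

failures_needed = math.ceil(-prior_db / evidence_db)
print(failures_needed)  # 13 failures to reach even odds
```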
First collisions aren't scheduled to have happened yet, are they? In which case, the failure can't be seen as anthropic evidence yet: we might as well be in a world where it hasn't failed, since such a world wouldn't have been destroyed yet in any case.
But if I'm not mistaken, even old failures will become evidence retrospectively once first collisions are overdue, since (assuming the unlikely case of the LHC actually being dangerous) all observers still alive would be in a world where the LHC failed; when it failed being irrelevant.
As much as the AP fascinates me, it does my head in. :)
Eliezer, it's a good question and a good thought experiment, except for the last sentence, which assumes a conservation of us as subjective conscious entities that the anthropic principle doesn't seem to me to endorse.
You can also add into your anthropic principle mix the odds that increasing numbers of experts think we can solve biological aging within our lifetime; or perhaps that should be called the solipsistic principle, which may be more relevant for us as persisting observers.
At the risk of asking the obvious:
Does the fact that no one has yet succeeded in constructing transhuman AI imply that doing so would necessarily wipe out humanity?
Originally I was going to say yes to the last question, but after thinking over why a failure of the LHC now (before it would destroy Earth) doesn't let me conclude anything by the anthropic principle, I'm going to say no.
Imagine a world in which CERN promises to fire the Large Hadron Collider one week after a major terrorist attack. Consider ten representative Everett branches. All those branches will be terrorist-free for the next few years except number 10, which is destined to suffer a major terrorist attack on January 1, 2009.
On December 31, 2008, Yvains 1 through 10 are perfectly happy, because they live in a world without terrorist attacks.
On January 2, 2009, Yvains 1 through 9 are perfectly happy, because they still live in worlds without terrorist attacks. Yvain 10 is terrified and distraught, both because he just barely escaped a terrorist attack the day before, and because he's going to die in a few days when they fire the LHC.
On January 8, 2009, CERN fires the LHC, killing everyone in Everett branch 10.
Yvains 1 through 9 aren't any better off than they would've been otherwise. Their universe was never destined to have a terrorist attack, and it still hasn't had a terror...
Unless you just consider it a Mouse That Roared scenario in which no one dares commit a terrorist attack under threat of global annihilation.
(just read the book, it's well worth it)
Blowing up the world in response to terrorist attack is like shooting yourself in the head when someone steps on your foot, to make subjective probability of your feet being stepped on lower.
Just realized that several sentences in my previous post make no sense because they assume Everett branches were separate before they actually split, but think the general point still holds.
This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
I can only see this statement making any sense if you think we should behave as if nature first randomly picked a value of a global cross-world time p...
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature
Uh, isn't it actually nonsense? The anthropic principle is supposed to explain how you got lucky enough to exist at all, not how you got lucky enough to keep existing.
The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world.
"The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world."
Maybe it's stupid and evil, but what stops it from actually working?
"How many times does a coin have to come up heads before you believe the coin is fixed?"
I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?" Which, in my opinion, makes no sense.
I bet the terrorists would target the LHC itself, so after the terrorist attack there's nothing left to turn on.
Oh God I need to read Eliezer's posts more carefully, since my last comment was totally redundant.
As others have noted, it seems straightforward to use Bayes' rule to decide when to believe how much that LHC malfunctions were selection effects - the key question is the prior. As to the last question, even if I was confident I lived in an infinite universe and so there was always some version of me that lived somewhere, I still wouldn't want to kill off most versions of me. So all else equal I'd never want to fire the LHC if I believed doing so killed that version of me.
Brilliant post.
I almost want it to fail a few more times so that the press latch on to this idea. Imagine journalists trying to a) understand and b) articulate the anthropic principle across many worlds. Would be hilarious.
Actually, failures of the LHC should never have any effect at all on our estimate of the probability that if it did not fail it would destroy Earth.
This is because the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth. A simple application of Bayes' rule.
Now, the reason you come to a wrong conclusion is not because you wrongly applied the anthropic principle, but because you failed to apply it (or applied it selectively). You realized that the probability of failure given survival is higher un...
To clarify, I mean failures should not lead to a change of probability away from the prior probability; of course they do result in a different probability estimate than if the LHC succeeded and we survived.
If (the probability that the LHC's design is flawed, and because of this flaw the LHC will never work) is much, much greater than (the probability that the LHC would destroy us if it were to function properly), then regardless of how many times the LHC failed, it would never be the case that we should give any significant weight to the anthropic explanation.
Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high then we should also ignore the anthropic explanation.
My prior probability for the existence of a secret and powerful crackpot group willing to sabotage the LHC to prevent it from "destroying the world" is larger than my prior probability for the LHC-actually-destroying-the-world scenarios being true, so after many mechanical failures I would rather believe the first hypothesis than the second one.
Simon: the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth.
But - if the LHC was Earth-fatal - the probability of observing a world in which the LHC was brought fully online would be zero.
(Applying anthropic reasoning here probably makes more sense if you assume MWI, though I suspect there are other big-world cosmologies where the logic could also work.)
Allan, I am of course aware of that (actually, it would probably take time, but even if the annihilation were instantaneous the argument would not be affected).
There are 4 possibilities:
1. The LHC would destroy Earth, and it fails to operate.
2. The LHC would destroy Earth, and it operates.
3. The LHC would not destroy Earth, and it fails to operate.
4. The LHC would not destroy Earth, and it operates.
The fact that conditional on survival possibility 2 must not have happened has no effect on the relative probabilities of possibility 1 and possibility 3.
But the destruction of the Earth in the case of the creation of a black hole or a strangelet will not be instantaneous, like in a YouTube movie.
The BH will grow slowly, but exponentially. By some assumptions it could take 27 years to eat the Earth. So we will have time to understand our mistake and to suffer from it. The main harmful effect of the BH will be its energy release. And if the BH is in the centre of the Earth, this energy will come out as violent volcanic eruptions.
Because of the exponential growth of the BH, the biggest part of the energy will be released in the last years of its exist...
"After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?"
After observing 100 failures in a row I would expect that a failure would occur after the next attempt to switch it on too. So it doesn't seem like a reliable means of preventing terrorism or an economic crash, even if the anthropic multi-world "ideology" were true.
On the other hand, if somebody were able to show that the amplitude of LHC's unexpected failure for technical reasons was significantly lower than the amplitude of terrorist-free future...
"IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences."
Incorrect reasoning; every branching compatible with sentient organisms contains sentient organisms monitoring its conditions.
The organisms that are in branchings in which LHC facilities were built perceive themselves to be in such a world, no matter how improbable it is. It doesn't matter if it's quite unlikely for you to win a lottery -- if you do win a lottery, you'll eventually accumulate enough data to conclude that's precisely what's happened.
"The BH will grow slowly, but exponentially. By some assumptions it could take 27 years to eat the Earth. So we will have time to understand our mistake and to suffer from it."
I am curious about these assumptions. A BH with the mass of the whole Earth has a Schwarzschild radius of about 1 cm. At the start the BH would be much lighter, so it's not clear to me how this BH, sitting in the centre of the Earth, could eat anything.
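For what it's worth, the 1 cm figure checks out; a quick back-of-the-envelope with r_s = 2GM/c² (constants rounded):

```python
# Schwarzschild radius r_s = 2 G M / c^2 for an Earth-mass black hole
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of Earth, kg

r_s = 2 * G * M_earth / c**2
print(f"{r_s * 100:.2f} cm")  # roughly 0.9 cm
```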
simon,
Actually, I think it might (though I'm obviously open to correction) if you take the anthropic principle as a given (which I do not).
One thing you're missing is that there are two events here, call them A and B:
A. LHC would destroy Earth
B. LHC works
So the events, which are NOT independent, should look more like:
Outcome 2 is "closer" to outcom...
Robinson, I could try to nitpick all the things wrong with your post, but it's probably better to try to guess at what is leading your intuition (and the intuition of others) astray.
Here's what I think you think:
I'm with Brian Jaress, who said, 'I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?"' OTOH, I have a very poor head for probabilities, Bayesian or otherwise, and in fact the Monty Hall thing still makes my brain hurt. So really, I make a lousy "me too" here.
That said: Could someone explain why repeated mechanical failures of the LHC should in any way imply the likelihood of it destroying the world, thus invoking the anthropic principle? Given the crowd, I'm assuming there's more to it than "OMG technology is scary and it doesn't even work right!", but I'm not seeing it.
Okay, it scares me when I realize that I've been getting probability theory wrong, even though I seemed to be on perfectly firm ground. But I'm finding that it's even more scary that even our hosts and most commenters here seem to be getting it backwards -- at least Robin; given that the last question in the post seems so obviously wrong for the reasons pointed out already, I'm starting to wonder whether the post is meant as a test of reasoning about probabilities, leading up to a post about how Nature Does Not Grade You On A Curve (grumble :)). Thanks to ...
The intuition behind the math: If the LHC would not destroy the world, then on date X, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures, and most Everett branches have the LHC happily chugging ahead. If the LHC would destroy the world, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures -- and most Everett branches have Earth munched up into a black hole.
The very small number of Everett branches that have the LHC non-working due to a string ...
I'm going to try another explanation that I hope isn't too redundant with Benja's.
Consider the events
W = the LHC would destroy Earth
F = the LHC fails to operate
S = we survive (= F OR not W)
We want to know P(W|F) or P(W|F,S), so let's apply Bayes.
First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)
Bayes:
P(W|F) = P(F|W)P(W)/P(F)
Note that none of these probabilities are conditional on survival. So unless in the absence of any selection effects the probability of failure still depends on whether the LHC would...
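simon's no-update claim can be illustrated with a quick Monte Carlo, under two assumptions worth flagging: failure is independent of dangerousness, and each surviving world counts as one observation (that counting rule is itself part of what's disputed in this thread):

```python
import random

random.seed(0)
P_W = 0.3  # prior that the LHC would destroy Earth (inflated for statistics)
P_F = 0.2  # probability of mechanical failure, independent of W

failed = failed_and_w = 0
for _ in range(200_000):
    w = random.random() < P_W  # would the LHC destroy Earth?
    f = random.random() < P_F  # does it fail to operate?
    if w and not f:
        continue               # Earth destroyed: no observer left to count
    if f:
        failed += 1
        failed_and_w += w

# Among surviving observers who saw a failure, the fraction of
# dangerous-LHC worlds stays near the prior P_W: no update
print(failed_and_w / failed)
```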
Benja: Good explanation! Intuitively, it seems to me that your argument holds if there are Tegmark IV branches with different physical laws, but not if whether the LHC would destroy Earth is fixed across the entire multiverse. (Only in the latter case, if it would destroy the Earth, the objective frequency of observations of failure - among observations, period - would be 1.)
Benja, I'm not really smart enough to parse the maths, but I can comment on the intuition:
The very small number of Everett branches that have the LHC non-working due to a string of random failures is the same in both cases [of LHC dangerous vs. LHC safe]
I see that, but if the LHC is dangerous then you can only find yourself in the world where lots of failures have occurred, but if the LHC is safe, it's extremely unlikely that you'll find yourself in such a world.
Thus, if all you know is that you are in an Everett branch in which the LHC is non-working due ...
Simon's last comment is well said, and I agree with everything in it. Good job, Simon and Benja.
Although the trickiest question was answered by Simon and Benja, Eliezer asked a couple of other questions, and Yvain gave a correct and very clear answer to the final question.
Or so it seems to me.
"Here's what that means for improving intuition: one should feel surprised at surviving a quantum suicide experiment, instead of thinking 'well, of course I would experience survival'."
You can (and should) be surprised that the device failed. You should not be surprised that you survived -- it's the only way you can feel anything at all.
You always survive.
Simon: As I say above, I'm out of my league when it comes to actual probabilities and maths, but:
P(W|F) = P(F|W)P(W)/P(F)
Note that none of these probabilities are conditional on survival.
Is that correct? If the LHC is dangerous and MWI is true, then the probability of observing failure is 1, since that's the only thing that gets observed.
An analogy I would give is:
You're created by God, who tells you that he has just created 10 people who are each in a red room, and depending on a coin flip God made, either 0 or 10,000,000 people who are each in a blue roo...
If you're conducting an experiment to test a hypothesis, the first thing you have to do is set up the apparatus. If you don't set up the apparatus so it produces data, you haven't tested anything. Just like if you try to take a urine sample, and the subject can't pee. The experiment has failed to produce data, not the same as the data failing to prove the hypothesis.
First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)
With respect for your diligent effort and argument, nonetheless: Fail.
F => S -!-> P(X|F) = P(X|F,S)
In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.
(Had your argument above been correct, the probabilities would have been the same.)
Conditioning on survival, or more precisely, the (continued?) existence of "observers...
I retract my endorsement of Simon's last comment. Simon writes that S == (F or not W). False: S ==> (F or not W), but the converse does not hold (because even if F or not W, we could all be killed by, e.g., a giant comet). Moreover, Simon writes that F ==> S. False (for the same reason). Finally, Simon writes, "Note that none of these probabilities are conditional on survival," and concludes from that that there are no selection effects. But the fact that a true equation does not contain any explicit reference to S does not mean that ...
simon, that's right, of course. The reason I'm dragging branches into it is that for the (strong) anthropic principle to apply, we would need some kind of branching -- but in this case, the principle doesn't apply [unless you and I are both wrong], and the math works the same with or without branching.
Eliezer, huh? Surely if F => S, then F is the same event as (F /\ S). So P(X | F) = P(X | F, S). Unless P(X | F, S) means something different from P(X | F and S)?
Allan, you are right that if the LHC would destroy the world, and you're a surviving observer,...
While I'm happy to have had the confidence of Richard, I thought my last comment could use a little improvement.
What we want to know is P(W|F,S)
As I pointed out F=> S so P(W|F,S) = P(W|F)
We can legitimately calculate P(W|F,S) in at least two ways:
1. P(W|F,S) = P(W|F) = P(F|W)P(W)/P(F) <- the easy way
2. P(W|F,S) = P(F|W,S)P(W|S)/P(F|S) <- harder, but still works
there are also ways you can get it wrong, such as:
3. P(W|F,S) != P(F|W,S)P(W)/P(F) <- what I said other people were doing last post
4. P(W|F,S) != P(F|W,S)P(W)/P(F|S) <...
Allan, oh **, the elementary math in my previous comment is completely wrong. (In the scenario I gave, the probability that you have breast cancer is 1%, not 10%, before taking the test.) My argument doesn't even approximately work as given: if having breast cancer makes it more likely that you get a positive mammography, then indeed getting a positive mammography must make it more likely that you have breast cancer. Sorry!
(I'm still convinced that my argument re the LHC is correct, but I realize that I'm just looking stupid right now, so I'll just shut up for now :-))
Sorry Richard, well of course they aren't necessarily independent. I wasn't quite sure what you were criticising. But I pointed out already that, for example, a new physical law might in principle both cause the LHC to fail and cause it to destroy the world if it did not fail. But I pointed out that this was not what people were arguing, and assuming that such a relation is not the case then the failure of the LHC provides no information about the chance that a success would destroy the world. (And a small relation would lead to a small amount of information, etc.)
Oops, I fail! I thought F >= S meant "F is larger than S". But looking at the definitions of terms, Fail >= Survival must mean "Fail subset_of Survival". (I do protest that this is an odd symbol to use.)
Okay, looking back at the original argument, and going back to definitions...
If you've got two sets of universes side-by-side, one where the LHC destroys the world, and one where it doesn't, then indeed observing a long string of failures doesn't help tell you which universe you're in. However, after a while, nearly all the obs...
Eliezer, I used "=>" (intending logical implication), not ">=".
I would suggest you read my post above on this second page, and see if that changes your mind.
Also, in a previous post in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.
Eliezer, I used "=>" (intending logical implication), not ">=".
Zis would seem to explain it.
(I use -> to indicate logical implication and => to indicate a step in a proof, or otherwise implication outside the formal system - I do understand this to be conventional.)
I would suggest you read my post above on this second page, and see if that changes your mind.
Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on p...
After surviving a few hundred rounds of quantum suicide the next round will probably kill you.
Are you familiar with the story of the man who got the winning horse race picks in the mail the day before the race was run? Six times in a row his mysterious benefactor was right, even correctly calling a victory for a horse with forty-to-one odds. Now he gets an envelope in the mail from the same mysterious benefactor asking for $1,000 in exchange for the next week's picks. Are you saying he should take the deal and clean up?
Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on physical law plus a presumption of survival.)
You mean you use method 2. Except you don't, or you would come to the same conclusion that I do. Are you claiming that P(W|S)= P(W)? Ok, I suspect you may be applying Nick Bostrom's version of observer selection: hold the probability of each possible version of the universe fixed independent of the number of observers, then divide that probability equally ...
Whoops, I didn't notice that you did specifically claim that P(W|S)=P(W).
Do you arrive at this incorrect claim via Bostrom's approach, or another one?
This is a subject I've long been meaning to give some thought too, but at the moment I'm pretty swamped - hope to get back to it when I have more time.
Simon, pretty much Bostrom's approach. Self-Sampling without Self-Indication. I know it's wrong but I don't have any better approach to take.
Why do you reject self-indication? As far as I can recall the only argument Bostrom gave against it was that he found it unintuitive that universes with many observers should be more likely, with absolutely no justification as to why one would expect that intuition to reflect reality. That's a very poor argument considering the severe problems you get without it.
I suppose you might be worried about universes with many unmangled worlds being made more likely, but I don't see what makes that bullet so hard to bite either.
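For what it's worth, the two positions in this exchange can be put side by side with illustrative numbers; both lines below are sketches of the respective arguments, not endorsements:

```python
p_w, p_f = 1e-9, 0.2  # prior P(W), failure probability P(F)
n = 13                # observed consecutive failures

# simon's calculation: P(W|F) = P(F|W)P(W)/P(F) with P(F|W) = P(F),
# since failure is independent of dangerousness: no update at all
simon_posterior = p_w

# Eliezer's (self-sampling without self-indication): condition on
# survival first, with P(W|S) = P(W) and P(F|W,S) = 1, so each
# failure is a likelihood ratio of 1/P(F) = 5 in favour of W
odds = (p_w / (1 - p_w)) * (1 / p_f) ** n
eliezer_posterior = odds / (1 + odds)

print(simon_posterior, eliezer_posterior)  # 1e-09 vs roughly 0.55
```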
Wasn't one of the conclusions we arrived at in the quantum mechanics sequence that "observer" was a nonsense, mystical word?
Recently the Large Hadron Collider was damaged by a mechanical failure. This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.
Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)
As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry. However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?" This tells you how low your prior probability is for the hypothesis. If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected from the beginning. But if it comes up heads 100 times, it's taking you too long to notice.
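Jaynes's exercise can also be run in reverse: given a prior for the fixed-coin hypothesis, how many heads should it take? A minimal sketch, assuming "fixed" means two-headed, so each head doubles the odds:

```python
import math

def heads_to_believe(prior):
    """Heads in a row before 'two-headed coin' overtakes 'fair coin',
    starting from the given prior probability of a fixed coin."""
    prior_odds = prior / (1 - prior)
    # Each head multiplies the odds by P(H|fixed)/P(H|fair) = 2
    return math.ceil(-math.log2(prior_odds))

print(heads_to_believe(1e-3))  # 10 heads
print(heads_to_believe(1e-9))  # 30 heads
```

So two heads are never enough unless the prior was already high, and a hundred heads overshoots even an astronomically low prior, which is the sense in which it "takes you too long to notice".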
So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation? 10? 20? 50?
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?