A robot is going on a one-shot mission to a distant world to collect important data needed to research a cure for a plague that is devastating the Earth. When the robot enters hyperspace, it notices some anomalies in the engine's output, but it is too late to get the engine fixed. The anomalies are of a sort that, when similar anomalies have been observed in other engines, 25% of the time they indicated a fatal problem, such that the engine would explode virtually every time it tried to jump. 25% of the time it was a false positive, and the engine exploded only at its normal negligible rate. 50% of the time it indicated a serious problem, such that each jump was about a 50/50 chance of exploding.

Anyway, the robot goes through the ten jumps to reach the distant world, and the engine does not explode. Unfortunately, the jump coordinates for the mission were a little off, and the robot is in a bad data-collecting position. It could try another jump - if the engine doesn't explode, the extra data it collects could save lives. If the engine does explode, however, Earth will get no data from the distant world at all. (The FTL radio is only good for one use, so it can't collect data and then jump.)

So how did you program your robot? Did you program your robot to believe that since the engine worked 10 times, the anomaly was probably a false positive, and so it should make the jump? Or did you program your robot to follow the "Androidic Principle" and disregard the so-called "evidence" of the ten jumps, since it could not have observed any other outcome? People's lives are in the balance here. A little girl is too sick to leave her bed, she doesn't have much time left, and you can hear the fluid in her lungs as she asks you, "Are you aware of the anthropic principle?" Well? Are you?
Isn't this one adequately dissolved by the "anthropics as decision theory" perspective? If I'm programming the robot, I weigh the relevant problem as if there were 512:1:epsilon odds of false positive/serious problem/fatal problem (0.25 × 1 versus 0.5 × 2^-10 versus 0.25 × epsilon), and thus I'd have it make the final jump if and only if the expectation of better data justified a roughly 1-in-1026 chance of mission failure.
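For concreteness, that update can be checked mechanically. Here's a minimal sketch (variable names are mine, not from the post) carrying the 25/50/25 priors through ten survived jumps:

```python
# Bayesian update after observing ten survived jumps.
priors = {"false_positive": 0.25, "serious": 0.50, "fatal": 0.25}
survive = {"false_positive": 1.0, "serious": 0.5, "fatal": 0.0}  # per jump

jumps = 10
# Unnormalized posterior: prior times likelihood of surviving all ten jumps.
unnorm = {h: p * survive[h] ** jumps for h, p in priors.items()}
total = sum(unnorm.values())
posterior = {h: w / total for h, w in unnorm.items()}

# Probability the engine explodes on one more jump.
p_boom = sum(posterior[h] * (1 - survive[h]) for h in posterior)

print(posterior)  # "false positive" dominates "serious" at 512:1
print(p_boom)     # ~1/1026
```

Whether the extra jump is worth it then reduces to weighing the value of the extra data against that roughly 1-in-1026 chance of losing everything.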
Also, I don't think that your tone at the end adds anything.
A post formatted this badly does not belong in Main.
I think this post is attacking a strawman associated with the Anthropic Principle, but I don't quite understand the nature of said strawman. What kind of arguments were you trying to find fault with by writing this?
The robot could have observed lots of other things. It could have observed that it hasn't even started the journey. It could have observed that it's a factory robot, rather than a space probe. It could have observed that it's a human.
Then it should follow a conditional strategy that specifies what it should do in each of these cases, depending on what's observed. This strategy should be chosen based on the consequence of following it in all of these cases simultaneously.
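That idea can be sketched concretely: score each candidate conditional strategy by its expected consequences across all branches at once, before any observation is made. The utilities below are invented purely for illustration:

```python
# Ex-ante scoring of conditional strategies.
# Illustrative utilities: 0 if the engine explodes, 1 for delivering the
# base data, 1 + BONUS for the extra jump's better data.
BONUS = 0.01

HYPOTHESES = {               # name: (prior, per-jump survival probability)
    "false_positive": (0.25, 1.0),
    "serious":        (0.50, 0.5),
    "fatal":          (0.25, 0.0),
}

def expected_utility(jump_again):
    """Expected utility, over all branches, of the policy:
    'after surviving ten jumps, take an extra jump iff jump_again'."""
    eu = 0.0
    for prior, p in HYPOTHESES.values():
        p_ten = p ** 10                  # chance of reaching the world at all
        if jump_again:
            eu += prior * p_ten * p * (1 + BONUS)
        else:
            eu += prior * p_ten * 1.0
    return eu

# Which policy wins depends only on how valuable the extra data is.
print(expected_utility(True) > expected_utility(False))
```

With these numbers the extra jump wins whenever BONUS exceeds roughly 1/1025; the point is that the policy is fixed by its ex-ante expectation over every observable case, not by any after-the-fact anthropic argument.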
Presumably you're referencing the notion of quantum immortality. If QI is a possibly real effect, then the robot's repeated survival counts as evidence both for the false-positive situation and for QI. For plausible priors it probably makes no difference, because in this story the robot's survival and humanity's survival are linked, and if QI applies to the robot then it applies to humanity, too.
Remember that if humanity only survives due to QI, then almost all of humanity's total measure does not survive. We can't observe the "universes" in which humanity is dead, but that doesn't mean we shouldn't care about them.
Why should I care about dead measure? Serious question.
I care because that measure contains human beings, and I care about human beings, even ones I can't observe.
I can't tell you what to care about, but I repeat that "we can't observe that measure" is not (as far as I'm concerned) a reason to jump to "I don't care about the human beings who make up that measure".
(I suspect there's a sequence post that covers this better than I can, but I don't know it offhand.)
You should care for the same reason you don't want to die - not because being dead is so bad, but because you'd rather be living.
If you play Russian roulette with a quantum coin, "you" don't move into a different world if the gun fires. There is no immortal soul that occupies quantum states like a hermit crab, scuttling off into a different sized shell when necessary. If you don't like dying, you won't like getting shot.
It's possible to construct an agent that rationally chooses quantum suicide. But this is inconsistent with not wanting to die.
This isn't about quantum immortality, because frankly we don't give a damn about the robot's internal subjective experience, only about how we'll program it so that it maximizes our own expected benefit.
"Quantum immortality" is about the supposed persistence of an internal subjective experience.
Suppose the reality of "quantum immortality" effects is one hypothesis that the robot considers. The probability of observing survival of all 10 jumps given the no-QI hypothesis is approximately 25% (0.25 × 1 + 0.5 × 2^-10 + 0.25 × 0), since the robot could have failed to survive and observed nothing. But given the QI hypothesis, the probability of observing survival of the 10 jumps is virtually 100%, since some version would survive to observe itself under all three engine-failure conditions. If QI had a prior P(QI) = X before, after the 10 jumps it has a posterior of approximately 4X/(1+3X). So it seems clear that QI is relevant to whether the anthropic evidence can be usefully employed, and that the anthropic evidence is relevant to the evaluation of QI. Furthermore, it becomes relevant to the robot's decision whether you value merely the fact that the person is alive (as I initially assumed) or instead the total measure of the person that is alive (as philh argued).
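A small sketch of this update (illustrative code, with the no-QI likelihood derived from the stated 25/50/25 engine priors):

```python
# Update on the "quantum immortality" hypothesis after surviving ten jumps.
# Under no-QI, the chance of being around to observe survival is the
# engine-prior-weighted chance of actually surviving all ten jumps:
p_obs_no_qi = 0.25 * 1.0 + 0.50 * 0.5 ** 10 + 0.25 * 0.0   # ~0.25
p_obs_qi = 1.0   # under QI, some branch always survives to observe itself

def qi_posterior(x):
    """P(QI | survived ten jumps), given prior P(QI) = x."""
    return x * p_obs_qi / (x * p_obs_qi + (1 - x) * p_obs_no_qi)

print(qi_posterior(0.01))   # a small prior's odds roughly quadruple
```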
There is no requirement to interpret QI in the silly way you and Manfred characterized it. An objective version is simply that some of the total measure of a person will survive.
Quantum Immortality is the idea that you personally will never experience death, because somehow those versions of you that die "can't experience anything" and so don't count -- it's an idea that can only be believed by people who hold a confused mystical view of Death as a single event on a physical level, instead of as the termination of a trillion different individual processes in the brain.
The example that this article provides, on the other hand, can be simulated programmatically (and thus answered quite specifically at the decision-theoretic level) by a simple process that undergoes "fork()" and is variably terminated or allowed to continue.
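In that spirit, here is a rough Monte Carlo stand-in for the fork-and-terminate simulation (independent trials play the role of a literal fork(); all names are illustrative):

```python
import random

def run_mission(rng):
    """One simulated robot: draw an engine condition, attempt ten jumps."""
    condition = rng.choices(
        ["false_positive", "serious", "fatal"], weights=[1, 2, 1]
    )[0]
    p_survive = {"false_positive": 1.0, "serious": 0.5, "fatal": 0.0}[condition]
    survived = all(rng.random() < p_survive for _ in range(10))
    return condition, survived

rng = random.Random(0)
trials = 200_000
survivors = [c for c, ok in (run_mission(rng) for _ in range(trials)) if ok]

# Among the processes still running, "false positive" dominates at
# roughly 512:1, with no appeal to subjective experience required.
frac_fp = survivors.count("false_positive") / len(survivors)
print(frac_fp)
```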
As such it has nothing to do with the various confusions that the "quantum immortality" believers tend to entertain.
I don't care to argue the true definition of the QI hypothesis, though it does indeed have an identifiable original form. The math I mentioned still works for both your mystical-QI-hypothesis and my physical-QI-hypothesis. Both versions of the hypothesis will have their odds roughly quadrupled, which could give them enough weight (e.g. for an AIXI-bot) to noticeably affect the choice of action that will return the optimal expected value. Specifically, if it has been programmed to care about the total measure of surviving humanity, then the more weight is given to QI hypotheses, the less weight (proportionally) is given to the false-positive hypothesis, and the less likely the robot is to take new jumps.
Here's Wikipedia's description of the original thought experiment: quantum suicide and immortality. Quite importantly to the whole point, in the thought experiment death does indeed come by a machine controlled by a single quantum event. Here's a critical view, though in my opinion it quickly gets bogged down in dubious philosophy-of-mind assumptions.