Mysterious Answers to Mysterious Questions

Imagine looking at your hand, and knowing nothing of cells, nothing of biochemistry, nothing of DNA. You've learned some anatomy from dissection, so you know your hand contains muscles; but you don't know why muscles move instead of lying there like clay. Your hand is just... stuff... and for some reason it moves under your direction. Is this not magic?

"The animal body does not act as a thermodynamic engine ... consciousness teaches every individual that they are, to some extent, subject to the direction of his will. It appears therefore that animated creatures have the power of immediately applying to certain moving particles of matter within their bodies, forces by which the motions of these particles are directed to produce derived mechanical effects... The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concurrence of atoms... Modern biologists were coming once more to the acceptance of something and that was a vital principle."
        -- Lord Kelvin

This was the theory of vitalism; that the mysterious difference between living matter and non-living matter was explained by an elan vital or vis vitalis.  Elan vital infused living matter and caused it to move as consciously directed. Elan vital participated in chemical transformations which no mere non-living particles could undergo—Wöhler's later synthesis of urea, a component of urine, was a major blow to the vitalistic theory because it showed that mere chemistry could duplicate a product of biology.

Calling "elan vital" an explanation, even a fake explanation like phlogiston, is probably giving it too much credit.  It functioned primarily as a curiosity-stopper.  You said "Why?" and the answer was "Elan vital!"

When you say "Elan vital!", it feels like you know why your hand moves.  You have a little causal diagram in your head that says ["Elan vital!"] -> [hand moves].  But actually you know nothing you didn't know before. You don't know, say, whether your hand will generate heat or absorb heat, unless you have observed the fact already; if not, you won't be able to predict it in advance.  Your curiosity feels sated, but it hasn't been fed.  Since you can say "Why? Elan vital!" to any possible observation, it is equally good at explaining all outcomes, a disguised hypothesis of maximum entropy, etcetera.
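To make the "maximum entropy" point concrete, here is a minimal Python sketch (not from the original post; the biochemical numbers are invented for illustration). An explanation that assigns equal probability to every outcome has maximal entropy, so it anticipates nothing:

```python
import math

def entropy_bits(dist):
    """Shannon entropy in bits; maximal when a model predicts nothing."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

outcomes = ["hand generates heat", "hand absorbs heat"]

# "Elan vital!" is equally happy with either outcome...
elan_vital = {o: 0.5 for o in outcomes}

# ...while a real predictive model (hypothetical numbers) concentrates
# its probability mass on one outcome in advance.
biochemistry = {"hand generates heat": 0.99, "hand absorbs heat": 0.01}

print(entropy_bits(elan_vital))    # 1.0 bit: maximum entropy, zero anticipation
print(entropy_bits(biochemistry))  # ~0.08 bits: sharp anticipation
```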

But the greater lesson lies in the vitalists' reverence for the elan vital, their eagerness to pronounce it a mystery beyond all science. Meeting the great dragon Unknown, the vitalists did not draw their swords to do battle, but bowed their necks in submission. They took pride in their ignorance, made biology into a sacred mystery, and thereby became loath to relinquish their ignorance when evidence came knocking.

The Secret of Life was infinitely beyond the reach of science! Not just a little beyond, mind you, but infinitely beyond! Lord Kelvin sure did get a tremendous emotional kick out of not knowing something.

But ignorance exists in the map, not in the territory.  If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. A phenomenon can seem mysterious to some particular person.  There are no phenomena which are mysterious of themselves. To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.

Vitalism shared with phlogiston the error of encapsulating the mystery as a substance. Fire was mysterious, and the phlogiston theory encapsulated the mystery in a mysterious substance called "phlogiston". Life was a sacred mystery, and vitalism encapsulated the sacred mystery in a mysterious substance called "elan vital". Neither answer helped concentrate the model's probability density—make some outcomes easier to explain than others. The "explanation" just wrapped up the question as a small, hard, opaque black ball.

In a comedy written by Molière, a physician explains the power of a soporific by saying that it contains a "dormitive potency".  Same principle.  It is a failure of human psychology that, faced with a mysterious phenomenon, we more readily postulate mysterious inherent substances than complex underlying processes.

But the deeper failure is supposing that an answer can be mysterious. If a phenomenon feels mysterious, that is a fact about our state of knowledge, not a fact about the phenomenon itself. The vitalists saw a mysterious gap in their knowledge, and postulated a mysterious stuff that plugged the gap. In doing so, they mixed up the map with the territory. All confusion and bewilderment exist in the mind, not in encapsulated substances.

This is the ultimate and fully general explanation for why, again and again in humanity's history, people are shocked to discover that an incredibly mysterious question has a non-mysterious answer.  Mystery is a property of questions, not answers.

Therefore I call theories such as vitalism mysterious answers to mysterious questions.

These are the signs of mysterious answers to mysterious questions:

  • First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.
  • Second, the hypothesis has no moving parts—the model is not a specific complex mechanism, but a blankly solid substance or force. The mysterious substance or mysterious force may be said to be here or there, to cause this or that; but the reason why the mysterious force behaves thus is wrapped in a blank unity.
  • Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.
  • Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.

 

Part of the sequence Mysterious Answers to Mysterious Questions

Next post: "The Futility of Emergence"

Previous post: "Semantic Stopsigns"

Since you can say "Why? Elan vital!" to any possible observation, it is equally good at explaining all outcomes, a disguised hypothesis of maximum entropy, etcetera.

But you say earlier 'Elan vital' was greatly weakened by a piece of evidence

Heh. A fair point! Every mysterianism, though it may fail to predict details and quantities, is ultimately vulnerable to the one experience in all the world that it does prohibit - the discovery of a non-mysterious explanation.

Could it not also have been partly due to earlier scientists underestimating the degree to which qualitative phenomena derive from quantitative phenomena? Their error, then, was in tending to assume this quality was immune to study, rather than in assuming the quality itself.

Since you can say "Why? Elan vital!" to any possible observation, it is equally good at explaining all outcomes, a disguised hypothesis of maximum entropy, etcetera.

But you say earlier 'Elan vital' was greatly weakened by a piece of evidence. In that light, its hypothesis could be stated "the mechanisms of living processes are of a different kind than the mechanisms of non-living processes, so you will not be able to study them with chemistry". This is false, but I don't think it's entirely worthless as a hypothesis, since biochemistry is noticeably different from non-living chemistry.

I think 'elan vital' makes some sense, even in a modern light. Most of the reactions in our body would not occur without enzymes, and enzymes are a characteristic feature of life. So perhaps we can say that 'elan vital' is enzymes! There is at least one experiment I can think of that could have been interpreted to show this too: I believe it involved fermentation being carried out with yeast-water (no living yeast, but clearly having their enzymes).

People with the benefit of hindsight failing to realize how reasonable vitalism sounded at the time is precisely why they go ahead and propose similar explanations for consciousness, which seems far more mysterious to them than biology, hence legitimately in need of a mysterious explanation. Vitalists were merely stupid, to make such a big deal out of such an ordinary-seeming phenomenon as biology - consciousness is different.

This is precisely one of the ways in which I went astray when I was still a diligent practitioner of mere Traditional Rationality, rather than Bayescraft. The reason to consider how reasonable mistakes seemed without benefit of hindsight is not to excuse them; that would be to fail to learn from them. The reason to consider how reasonable they seemed is to realize that not everything that sounds reasonable is a good idea; you've got to be strict about things like yielding increases in predictive power.

Do you have something on the difference between Traditional Rationality and Bayescraft?

I am finally taking Prob. & Stats next semester (and have not yet looked at the book to see how Bayes figures into it. I am going to be pissed if it doesn't enter into the class at this point), so I figure that I will get my formal introduction to Bayes then. However, I do know the basic P(A|B) = [ P(B|A) P(A) ] / P(B).

And, I can regurgitate Wikipedia's entries on Bayes, yet I don't seem to have any real context into which I can place the difference between Bayes and traditional Probability distributions... Can you help, please?

I am finally taking Prob. & Stats next semester

Never let the official curriculum slow you down! But still approach things systematically, find yourself a textbook.

I am currently taking Stats (AP class in the USA, IB level elsewhere), and hope that I can help.
A traditional probability test will take four frequencies (male smokers, female smokers, male nonsmokers, and female nonsmokers) and tell you if there is a correlation with an X^2 (chi-squared) test.
Bayescraft lets you use gender as a way to predict the likelihood of smoking, or use smoking to predict gender. The fundamental difference, as far as I can tell, is that Statistics takes results about samples and applies them to populations. Bayescraft takes results about priors and applies them to the future. The two use similar methodology to address fundamentally different questions.
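For instance, here is a hedged Python sketch of the smoker/gender example (all frequencies invented for illustration): Bayes' theorem turns the base rates and P(smoker | gender) into P(gender | smoker).

```python
# Assumed base rates and conditional frequencies (made up for this example).
p_male = 0.5
p_smoker_given_male = 0.3
p_smoker_given_female = 0.2

# Law of total probability: P(smoker).
p_smoker = (p_smoker_given_male * p_male
            + p_smoker_given_female * (1 - p_male))

# Bayes' theorem: P(male | smoker) = P(smoker | male) * P(male) / P(smoker).
p_male_given_smoker = p_smoker_given_male * p_male / p_smoker

print(p_male_given_smoker)  # 0.6 -- observing "smoker" shifts the 0.5 prior
```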

Feel free to provide a complete theory of consciousness at any time.

My mother's husband professes to believe that our actions have no control over the way in which we die, but that "if you're meant to die in a plane crash and avoid flying, then a plane will end up crashing into you!" for example.

After explaining how I would expect that belief to constrain experience (like how it would affect plane crash statistics), as well as showing that he himself was demonstrating his unbelief every time he went to see a doctor, he told me that you "just can't apply numbers to this," and "Well, you shouldn't tempt fate."

My question to the LW community is this: How do you avoid kicking people in the nuts all of the time?

Think of them as 3-year-olds who won't grow up until after the Singularity. Would you kick a 3-year-old who made a mistake?

I jest, but the sense of the question is serious. I really do want to teach the people I'm close to how to get started on rationality, and I recognize that I'm not perfect at it either. Is there a serious conversation somewhere on LW about being an aspiring rationalist living in an irrational world? Best practices, coping mechanisms, which battles to pick, etc?

Simply consider how likely it is that kicking them in the nuts will actually improve the situation.

How do you avoid kicking people in the nuts all of the time?

(grin) Mostly, by remembering that there are lots of decent people in the world who don't think very clearly.

Pick your battles. Most people happily hold contradictory beliefs. More accurately, their professed beliefs don't always match their aliefs. You are probably just as affected as the rest of us, so start by noticing this in yourself.

Strictly speaking, if you somehow knew in advance (time travel?) that you would "die in a plane crash", then avoiding flying would indeed, presumably, result in a plane crash occurring as you walk down the street.

If you know your attempt will fail in advance, you don't need to try very hard. If you don't, then it is reasonable to avoid dangerous situations.

If you know your attempt will fail in advance, you don't need to try very hard.

I actually don't believe this is true, for most mechanisms of "mysterious future knowledge", including most (philosophical) forms of time travel that don't allow change. Unless I had some specific details about the mechanism of prediction that changed the situation I would go ahead and try very hard despite knowing it is futile. I know this is a total waste... it's as if I am just leaving $10,000 on the ground or something! (ie. I assert that newcomblike reasoning applies.)

I don't understand this.

In Newcomb's problem, Omega knows what you will do using their superintelligence. Since you know you cannot two-box successfully, you should one-box.

If Omega didn't know what you would do with a fair degree of accuracy, two-boxing would work, obviously.

In Newcomb's problem, Omega knows what you will do using their superintelligence. Since you know you cannot two-box successfully, you should one-box.

In this case you are trying (futilely) so that you, very crudely speaking, are less likely to be in the futile situation in the first place.

If Omega didn't know what you would do with a fair degree of accuracy, two-boxing would work, obviously.

Yes, then it wouldn't be Newcomb's Problem. The important feature in the problem isn't boxes with arbitrary amounts of money in them. It is about interacting with a powerful predictor whose prediction has already been made and acted upon. See, in particular, the Transparent Newcomb's Problem (where you can outright see how much money is there). That makes the situation seem even more like this one.

Even closer would be the Transparent Newcomb's Problem combined with an Omega that is only 99% accurate. You find yourself looking at an empty 'big' box. What do you do? I'm saying you still one-box the empty box. That makes it far less likely that you will be in a situation where you see an empty box at all.

Being a person who avoids plane crashes makes it less likely that you will be told "you will die in a plane crash", yes.

But probability is subjective - once you have the information that you will die in a plane crash, your subjective estimate of this should vastly increase, regardless of the precautions you take.

But probability is subjective - once you have the information that you will die in a plane crash, your subjective estimate of this should vastly increase, regardless of the precautions you take.

Absolutely. And I'm saying that you update that probability, perform a (naive) expected utility calculation that says "don't bother trying to prevent plane crashes", then go ahead and try to avoid plane crashes anyway. Because in this kind of situation maximising expected utility is actually a mistake.

(To those who consider this claim to be bizarre without seeing context, note that we are talking situations such as within time-loops.)

Because in this kind of situation maximising expected utility is actually a mistake.

So ... I should do things that result in less expected utility ... why?

In the specific "infallible oracle says you're going to die in a plane crash" scenario, you might live considerably longer by giving the cosmos fewer opportunities to throw plane crashes at you.

I was assuming a time was given. wedrifid was claiming that you should avoid plane-crash causing actions even if you know that the crash will occur regardless.

Yes, you are correct. Or at least it is true that I am not trying to make a "manipulate time of death" point. Let's say we have been given a reliably predicted and literal "half life" that we know has already incorporated all our future actions.

OK.

So the odds of my receiving that message are the same as the odds of my death by plane, but having received it I can freely act to increase the odds of my plane-related death without repercussions. I think.

If you know the time, then that becomes even easier to deal with - there's no particular need to avoid plane crash opportunities that do not take place at that time. In fact, it then becomes possible to try to avoid it by other means - for example, faking your own plane-crash-related demise and leaving the fake evidence there for the time traveller to find.

If you know the time of your death in advance, then the means become important only at or near that time.

Let's take this a step further. (And for this reply I will neglect all acausal timey-wimey manipulation considerations.)

If you know the time of your death you have the chance to exploit your temporary immortality. Play Russian Roulette for cash. Contrive extreme scenarios that will either result in significant gain or certain death. The details of ensuring that it is hard to be seriously injured without outright death will take some arranging but there is a powerful "fixed point in time and space" to be exploited.

Problem with playing russian roulette under those circumstances is that you might suffer debilitating but technically nonfatal brain damage. It's actually surprisingly difficult to arrange situations where there's a chance of death but no chance of horrific incapacitation.

Problem with playing russian roulette under those circumstances is that you might suffer debilitating but technically nonfatal brain damage.

Yes, that was acknowledged as the limiting factor. However it is not a significant problem when playing a few rounds of Russian Roulette. In fact, even assuming you play the Roulette to the death with two different people in sequence you still only create two bits of selection pressure towards the incapacitation. You can more than offset this comparatively trivial amount of increased injured-not-dead risk (relative to the average Russian Roulette player) by buying hollow point rounds for the gun and researching optimal form for suicide-by-handgun.

The point is: yes, exploiting death-immunity for optimizing other outcomes increases the risk of injury in the same proportion that the probability of the desired outcome is increased, but this doesn't become a significant factor for something as trivial as a moderate foray into Russian Roulette. It would definitely become a factor if you started trying to brute-force 512-bit encryption with a death machine. That is, you would still end up with practically zero chance of brute-forcing the encryption, and your expected outcome would come down to whether it is more likely for the machine to not work at all or for it to merely incapacitate you.

This is a situation where you really do have to shut up and multiply. If you try to push the anti-death exploit too far you will just end up with otherwise-low-probability (and likely undesirable) outcomes occurring. On the other hand, if you completely ignore the influence of "death" outcomes being magically redacted from the set of possible outcomes, you will definitely make incorrect expected utility calculations when deciding what is best to do. This is particularly the case given that there is a strict upper bound on how bad a "horrific incapacitation" can be. ie. It could hurt a bit for a few hours till your certain death.

This scenario is very different and far safer than many other "exploit the impossible physics" scenarios in as much as the possibility of bringing disaster upon yourself and others is comparatively low. (ie. In other scenarios it is comparatively simple/probable for the universe to just throw a metaphorical meteor at you and kill everyone nearby as a way to stop your poorly calibrated munchkinism.)

It's actually surprisingly difficult to arrange situations where there's a chance of death but no chance of horrific incapacitation.

I shall assume you mean "sufficiently low chance of horrific incapacitation for your purposes".

It isn't especially difficult for the kind of person who can get time travelling prophets to give him advice to also have a collaborator with a gun.

I have to admit, you've sort of lost me here.

Call P1 the probability that someone who plays Russian Roulette in a submarine will survive and suffer a debilitating injury. P1 is, I agree, negligible.
Call P2 the probability that someone who plays Russian Roulette in a submarine and survives will suffer a debilitating injury. P2 is, it seems clear, significantly larger than P1.

What you seem to be saying is that if I know with certainty (somehow or other) that I will die in an airplane, then I can safely play Russian Roulette in a submarine, because there's (we posit for simplicity) only two things to worry about: death (which now has P=0) or non-fatal debilitating injury (which has always had P=P1, which is negligible for my purposes).

But I'm not quite clear on why, once I become certain I won't die, my probability of a non-fatal debilitating injury doesn't immediately become P2.

The probability does become P2, but in many cases, we can argue that P2 is negligible as well.

In the submarine case, things are weird, because your chances of dying are quite high even if you win at Russian Roulette. So let's consider the plain Russian Roulette: you and another person take turns trying to shoot yourself until one of you succeeds. For simplicity's sake, suppose that each of you is equally likely to win.

Then P1 = Pr[injured & survive] and P2 = Pr[injured | survive] = P1 / Pr[survive]. But Pr[survive] is always at least 1/2: if your opponent shoots himself before you do, then you definitely survive. Therefore P2 is at most twice P1, and is negligible whenever P1 is negligible.
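A quick Python check of this bound, using the same figures that appear further down the thread (a hit is assumed lethal with probability 0.99, and each player is assumed equally likely to be the one hit):

```python
# Assumed parameters, matching the thread's later numbers.
p_hit_you = 0.5          # each player equally likely to "win" the bullet
p_die_given_hit = 0.99   # assumed lethality of a hit
p_injured_given_hit = 1 - p_die_given_hit

p1 = p_hit_you * p_injured_given_hit   # P[injured & survive] = 0.005
p_survive = (1 - p_hit_you) + p1       # opponent hit, or you hit and live
p2 = p1 / p_survive                    # P[injured | survive]

print(p1, p_survive, p2)   # 0.005 0.505 0.00990...
print(p2 / p1)             # 1.98...: below 2, since P[survive] >= 1/2
```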

Then P1 = Pr[injured & survive] and P2 = Pr[injured | survive] = P1 / Pr[survive]. But Pr[survive] is always at least 1/2: if your opponent shoots himself before you do, then you definitely survive. Therefore P2 is at most twice P1, and is negligible whenever P1 is negligible.

I see how you calculated that, but I think you're looking at the wrong pieces of evidence, and I agree with TheOtherDave.

You have an even split chance of getting the real bullet in play, so let's put that down:

P[bullet] = 0.5,  P[¬bullet] = 0.5

Then, given that you DO get the bullet, you have a very high chance of being dead if you don't know how you will die:

P[die | bullet] = 0.99,  P[¬die | bullet] = 0.01

Of course, this means that overall, P[die] = 0.495, and P[injury] = 0.005. However, if you already also know that P[¬die] = 1, then...

P[bullet & ¬die] = 0.5,  P[¬bullet & ¬die] = 0.5

...because P[bullet] is computed before its causal effects (death or injury) can enter the picture, which means you're left with P[injury] = 0.5 (a hundred times larger than P1!).

Thus, while the first chance of injury is negligible, the chance of injury once you already know that you won't die is massively larger, given that P[injury XOR death | bullet] = 1 (which is implied in the problem statement, I would assume).

Edit: I realize that this makes the assumption that your chances of getting the bullet doesn't correlate with knowing how you will die, but it most clearly exposes the difference between your calculation and other possible calculations. This is not the correct way to calculate the probabilities in real life, since it's much more likely that non-death is achieved by not having the bullet in the first place (or by failing to play Russian Roulette at all), but there's all kinds of parameters you can play with here. All I'm saying is that P2 isn't necessarily at most twice P1, it all depends on the other implicit conditions and priors.

No, that's not right. What we're interested in here is P[injury|¬die]. Using Bayes' Theorem:

P[injury|¬die] = {P[¬die|injury]*P[injury]}/P[¬die]

Using the figures you assume, and recalling that "injury" refers only to non-fatal injury (hence P[¬die|injury]==1):

P[injury|¬die] = {1*0.005}/0.505 = 1/101 = approx. 0.00990099

The chances of injury are then not quite double what they would have been without death-immunity. This is reasonably low, because the prior odds of survival at all are reasonably high (0.505) - had the experiment been riskier, such that there was only a 0.01% chance of survival overall, then the chance of injury in the death-immunity case would be correspondingly higher.

(We also have not yet taken into account the effect of the first player - in such a game of Russian Roulette, he who shoots first has a higher prior probability of death).

Thanks for the full Bayes Theorem breakdown.

I agree that this is how it should be reasoned ideally, which I only realized after first posting the grandparent. See other comments and the edit for how I arrived at the 50/50 reasoning. If you know the answer for the bottom/last question in this comment, I'd be interested to know.

It depends on how exactly this time-travel-related knowledge of how you die will work. My calculation is correct if a random self-consistent time loop is chosen (which I think is reasonable) -- there are far more self-consistent time loops in which you survive because you don't get the bullet, than ones in which you survive because a bullet failed to kill you.

Terrible things start happening if there's some sort of "lazy pruning" of possibilities (which I think is what you're suggesting). Then the probability you get shot is 0.5, and then if you do get shot the self-consistency condition eliminates the branches in which you die, so you are nearly guaranteed to survive in some horrible fashion.

I don't like the second option because it requires thinking of branching possibilities as some sort of actual discrete things, which I find dubious. But it's a bit silly to argue about interpretations of time travel, isn't it?

This is a bit of what I was getting at with the edit in the grandparent: basically, it's not very bayesian to stick to a 50/50 bullet chance when you know you will not die.

However, I was also considering the least convenient possible world: You already know that you won't die, and since the worst you have to fear is debilitating permanent non-fatal injury (which you apparently don't care about if it's for Science!), you decide to repeatedly play Russian Roulette with tons of people, just to test probabilities or for fun or something.

Then what happens? Does it become more probable that you'll just randomly end up always not getting the bullet with .98 probability (if the chance of surviving a bullet was 1%), which will to an outside view apparently defy all probabilities? Or does it instead stick to the 50/50 and on average you'll get injured every second game (assuming you have a way of recovering from the injuries and playing again) without ever dying?

More importantly, which scenario should an ideal bayesian agent expect? This I have no idea, which is why I think it's not trivial or obvious that P2 = 2*P1.

The calculation that P2=2*P1 obviously only applies to one game. If you play lots of games sequentially, then the probability increases stack. (Edit: I incorrectly said that the ratio doubles with every game, which is obviously false)

Another way of thinking about this: absent time travel, if you survive a bullet with 1% probability, then after N games your probability of surviving unscathed is 1/2^N, and your total probability of surviving is (101/200)^N. Therefore, given that you survive, your probability of surviving unscathed should be the ratio of these, or (100/101)^N.

(All of this is assuming the random self-consistent time loop interpretation of time travel.)
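A short Python check of that arithmetic, under the same assumptions (random self-consistent loops, 1% chance of surviving a hit):

```python
# Per game with randomized start: P[survive unscathed] = 1/2 and
# P[survive at all] = 101/200, so conditional on surviving N games,
# P[unscathed] is the ratio (100/101)^N.
for n in (1, 10, 50, 100):
    ratio = (0.5 ** n) / ((101 / 200) ** n)
    print(n, round(ratio, 4))   # 0.9901, 0.9053, 0.608, 0.3697
```

So even a hundred "impossible" wins leave roughly a 63% chance of having picked up an injury somewhere along the way, under these assumptions.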

Hmm, interesting. That would imply that, to a third-party, there's some random guy who wins 99% of the time at Russian Roulette. At this point, it should legally be considered murder.

Death sentence by plane crash sounds appropriate, in this case.

It is murder, but you're going to have terrible trouble proving that (especially if he's careful about documenting how fair the russian roulette is). To avoid murder charges, the hypothetical psychopathic death-immune person can go so far as to arrange a tournament, with 2^n entries for some integer n. In this arrangement, one person must survive every round, and thus it does not look suspicious afterwards that he did survive every round (plus he gets 2^n prizes for going through n rounds).

I'm pretty sure that the russian roulette itself is illegal just about everywhere, though. No matter how it's done, it's either murder or assisted suicide.

It is murder, but you're going to have terrible trouble proving that (especially if he's careful about documenting how fair the russian roulette is).

Or only enter other people's tournaments and have them document their own procedure.

I'm pretty sure that the russian roulette itself is illegal just about everywhere, though. No matter how it's done, it's either murder or assisted suicide.

I would have expected a different label to apply. Neither of those seems accurate. In fact I didn't think even assisted suicide got called "assisted suicide".

Call P2 the probability that someone who plays Russian Roulette in a submarine and survives will suffer a debilitating injury. P2 is, it seems clear, significantly larger than P1.

To be precise (and absent any other interventions) P2 is larger than P1 by a factor of 2 (in the two-person, to-the-death, randomized-start case).

But I'm not quite clear on why, once I become certain I won't die, my probability of a non-fatal debilitating injury doesn't immediately become P2.

I thought I was fairly clear that that was exactly what I was arguing. Including the implication that it doesn't immediately become more than P2. This (and other previously unlikely failure modes) have to be considered seriously but to overweight their importance is a mistake.

Ah! I see. Yeah, it seems I wasn't thinking about Actual Russian Roulette, in which two players take turns and the most likely route to survival is my opponent blowing his brains out first, but rather Utterly Ridiculous Hypothetical Variation on Russian Roulette, in which I simply pull the trigger over and over while pointing at my own head, and the most likely route to survival is a nonfatal bullet wound.

Ah! I see. Yeah, it seems I wasn't thinking about Actual Russian Roulette, in which two players take turns and the most likely route to survival is my opponent blowing his brains out first, but rather Utterly Ridiculous Hypothetical Variation on Russian Roulette, in which I simply pull the trigger over and over while pointing at my own head, and the most likely route to survival is a nonfatal bullet wound.

Ahh, yes. That seems to be an impractical course of action even with faux-immortality. It may be worthwhile if some strange person was willing to pay exorbitant amounts of cash per shot and you were also permitted to spin the cylinder after every shot (or five shots) in order to randomize it. Then the accumulated improbability (ie. magnified injury and 'unknown black swan' possibility) would ultimately injure or otherwise interrupt you but only after your legacy had been improved significantly.

I've made the exact same mistake before. Maybe there should be (or is) a name for that.

If you're the sort of person who would take advantage of such knowledge to engage in dangerous activities, does that increase the probability that your reported time of death will be really soon?

On the other hand, if you're the kind of person who (on discovering that you will die in a plane crash) takes care to avoid plane crashes, wouldn't that increase your expected life span?

Moreover, these two attitudes - avoiding plane crashes and engaging in non-plane-related risky activities - are not mutually exclusive.

If you're the sort of person who would take advantage of such knowledge to engage in dangerous activities, does that increase the probability that your reported time of death will be really soon?

Absolutely. Note the parenthetical. The grandparent adopted the policy of ignoring this kind of consideration for the purpose of exploring the implied tangent a little further. I actually think not actively avoiding death, particularly death by the means predicted, is a mistake.

You can do the same thing if you know only the means of your death and not the time in advance; merely set up your death-stunts to avoid that means of death. (For example, if you know with certainty that you will die in a plane crash but not when, you can play Russian Roulette for cash on a submarine).

And then the experimental aqua-plane crashes into you.

An important safety tip becomes clear; if you're involved in a time loop and know the means of your death, then keep a very close eye on scientific literature and engineering projects. Make sure that you hear the rumours of the aqua-plane before it is built and can thus plan accordingly...

... True.

But you could still be injured by a plane crash or other mishap at another time, at standard probabilities.

And you should still charter your own plane to avoid collateral damage.

So ... I should do things that result in less expected utility ... why?

I am happy to continue the conversation if you are interested. I am trying to unpack just where your intuitions diverge from mine. I'd like to know what your choice would be when faced with Newcomb's Problem with transparent boxes and an imperfect predictor when you notice that the large box is empty. I take the empty large box, which isn't a choice that maximises my expected utility and in fact gives me nothing, which is the worst possible outcome from that game. What do you do?

Two boxes, sitting there on the ground, unguarded, no traps, nobody else has a legal claim to the contents? Seriously? You can have the empty one if you'd like, I'll take the one with the money. If you ask nicely I might even give you half.

I don't understand what you're gaining from this "rationality" that won't let you accept a free lunch when an insane godlike being drops it in your lap.

I don't understand what you're gaining from this "rationality" that won't let you accept a free lunch when an insane godlike being drops it in your lap.

A million dollars.

No, you're not. You're getting an empty box, and hoping that by doing so you'll convince Omega to put a million dollars in the next box, or in a box presented to you in some alternate universe.

And by this exact reasoning, which Omega has successfully predicted, you will one-box, and thus Omega has successfully predicted that you will one-box and made the correct decision to leave the box empty.

Remember to trace your causal arrows both ways if you want a winning CDT.

Remember also Omega is a superintelligence. The recursive prediction is exactly why it's rational to "irrationally" one-box.

And by this exact reasoning, which Omega has successfully predicted, you will one-box, and thus Omega has successfully predicted that you will one-box and made the correct decision to leave the box empty.

Yes, that's why I took the one box with more money in it.

Strictly speaking the scenario being discussed is one in which Omega left a transparent box of money and another transparent box which was empty in front of Wedrifid, then I came by, confirmed Wedrifid's disinterest in the money, and left the scene marginally richer. I personally have never been offered money by Omega, don't expect to be any time soon, and am comfortable with the possibility of not being able to outwit something that's defined as being vastly smarter than me.

Remember also Omega is an insane superintelligence, with unlimited resources but no clear agenda beyond boredom. If appeasing such an entity was my best prospect for survival, I would develop whatever specialized cognitive structures were necessary; it's not, so I don't, and consider myself lucky.

Ah, then in that case, you win. With that scenario there's really nothing you could do better than what you propose. I was under the impression you were discussing a standard transparent Newcomb.

The counterfactual mugging isn't that strange if you think of it as a form of entrance fee for a positive-expected-utility bet -- a bet you happened to lose in this instance, but it is good to have the decision theory that will allow you to enter it in the abstract.

The problem is that people aren't that good at understanding that your specific decision isn't separate from your decision theory under a specific context ... DecisionTheory(Context)=Decision. To have your decision theory be a winning decision theory in general, you may have to eventually accept some individual 'losing' decisions: that's the price to pay for having a winning decision theory overall.

I doubt that a decision theory that simply refuses to update on certain forms of evidence can win consistently.

If Parfit's hitchhiker "updates" on the fact that he's now reached the city and therefore doesn't need to pay the driver, and furthermore if Parfit's hitchhiker knows in advance that he'll update on that fact in that manner, then he'll die.

If right now we had mindscanners/simulators that could perform such counterfactual experiments on our minds, and if this sort of bet could therefore become part of everyday existence, being the sort of person that pays the counterfactual mugger would eventually be seen by all to be of positive utility -- because such people would eventually be offered the winning side of that bet (free money worth ten times your cost).

While the sort of person that wouldn't be paying the counterfactual mugger would never be given such free money at all.

If, and only if, you regularly encounter such bets.

The likelihood of encountering the winning side of the bet is proportional to the likelihood of encountering its losing side. As such, whether you are likely to encounter the bet once in your lifetime, or to encounter it a hundred times, doesn't seem to significantly affect the decision theory you ought possess in advance if you want to maximize your utility.

In addition to Omega asking you to give him $100 because the coin came up tails, also imagine Omega coming to you and saying "Here's $100,000, because the coin came up heads and you're the type of person that would have given me $100 if it had come up tails."

That scenario makes it obvious to me that being the person that would give Omega $100 if it had come up tails is the winning type of person...

Oh, so you pay counterfactual muggers?

If the coin therein is defined as a quantum one then yes, without hesitation. If it is a logical coin then things get complicated.

All is explained.

This is more ambiguous than you realize. Sure, the dismissive part came through but it doesn't quite give your answer. ie. Not all people would give the same response to counterfactual mugging as to Transparent Probabilistic Newcomb's, and you may notice that even I had to provide multiple caveats to provide my own answer there despite for the most part making the same kind of decision.

Let's just assume your answer is "Two Box!". In that case I wonder whether the problem is that you just outright two-box on pure Newcomb's Problem or whether you revert to CDT intuitions when the details get complicated. Assuming you win at Newcomb's Problem but two-box on the variant, then I suppose that would indicate the problem is in one of:

  • Being able to see the money rather than being merely being aware of it through abstract thought switched you into a CDT based 'near mode' thought pattern.
  • Changing the problem from a simplified "assume a spherical cow of uniform density" problem to one that actually allows uncertainty changes things for you. (It does for some.)
  • You want to be the kind of person who two-boxes when unlucky even though this means that you may actually not have been unlucky at all but instead have manufactured your own undesirable circumstance. (Even more people stumble here, assuming they get this far.)

The most generous assumption would be that your problem comes at the final option---that one is actually damn confusing. However I note that your previous comments about always updating on the free money available and then following expected utility maximisation are only really compatible with the option "outright two-box on simple Newcomb's Problem". In that case all the extra discussion here is kind of redundant!

I think we need a nice simple visual taxonomy of where people fall regarding decision-theoretic bullet-biting. It would save so much time when this kind of thing comes up. Then when a new situation arises (like this one with time-traveling prophets) we could skip straight to, for example, "Oh, you're a Newcomb's One-Boxer but a Transparent Two-Boxer. To be consistent with that kind of implied decision algorithm then yes, you would not bother with flight-risk avoidance."

Since you know you cannot two-box successfully, you should one-box.

Not if you mistakenly believe, as CDTers do, in human free will in a predictable (by Omega) universe.

Not if you mistakenly believe, as CDTers do, in human free will in a predictable (by Omega) universe.

"Free will" isn't incompatible with a predictable (by Omega) universe. I also doubt that all CDTers believe the same thing about human free will in said universe.

I think this is the kind of causal loop he has in mind. But a key feature of the hypothesis is that you can't predict what's meant to happen. In that case, he's equally good at predicting any outcome, so it's a perfectly uninformative hypothesis.

That was exactly my point. If he could make such a prediction, he would be correct. Since he can't...

"if you're meant to die in a plane crash and avoid flying, then a plane will end up crashing into you!"

I often say stuff like that, but I don't mean it literally. When someone says “What if you do X and Y happens?” and I think Y is ridiculously unlikely (P(Y|X) < 1e-6), I sarcastically reply “What if I don't do X, but Z happens?” where Z is obviously even more ridiculous (P(Z|~X) < 1e-12, e.g. “a meteorite falls onto my head and kills me”).

And to continue the thread of Roy's comment as picked up by Eliezer, it might have been a fairly reasonable conjecture at the time (or at some earlier time). We have to be wary about hindsight bias. Imagine a time before biochemistry and before evolution theory. The only physicalist "explanations" you've ever heard of or thought of for why animals exist and how they function are obvious non-starters...

You think to yourself, "the folks who are tempted by such explanations just don't realize how far away they are from really explaining this stuff; they are deluded." And invoking an elan vital, while clearly not providing a complete explanation, at least creates a placeholder. Perhaps it might be possible to discover different versions of the elan vital; perhaps we could discover how this force interacts with other non-material substances such as ancestor spirits, consciousness, magic, demons, angels etc. Perhaps there could be a whole science of the psychic and the occult, or maybe a new branch of theological inquiry that would illuminate these issues. Maybe those faraway wisemen that we've heard about know something about these matters that we don't know. Or maybe the human mind is simply not equipped to understand these inner workings of the world, and we have to pray instead for illumination. In the afterlife, perhaps, it will all be clear. Either way, that guy who thinks he will discover the mysteries of the soul by dissecting the pineal gland seems curiously obtuse in not appreciating the magnitude of the mystery.

Now, in retrospect we know what worked and what didn't. But the mystics, it seems, could have turned out to have been right, and it is not obvious that they were irrational to favor the mystic hypothesis given the evidence available to them at the time.

Perhaps what we should be looking for is not structural problems intrinsic to certain kinds of questions and answers, but rather attitude problems that occur, for example, when we ask questions without really caring about finding the answer, or when we use mysterious answers to lullaby our curiosity prematurely.

We don't need to imagine. We are in exactly this position with respect to consciousness.

Nitpick:

The Secret of Life was infinitely beyond the reach of science! Not just a little beyond, mind you, but infinitely beyond!

But Kelvin (in your quote) qualified it with "... hitherto entered on". Whether or not "infinitely" is fitting, doesn't this imply that Kelvin left open the possibility that future scientific inquiry could succeed?

(a) not when you say "infinitely"

(b) "Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concurrence of atoms"

I think Kelvin gets a bit of a raw deal in the way people often quote him: "[life etc.] is infinitely beyond the range of any scientific inquiry".

By cutting off the quote there it sounds like he is claiming that science will never be able to understand life. However, as you show above, he continues with, "... hitherto entered on." Thus, the sentence is making a claim about the power of science up to the time of his writing to understand life. This is a far more reasonable claim.

Desiring a mysterious explanation is like wanting to be in a room with no people inside. Once you explain it, it's not mysterious any more. The property depends on your actions: emptiness is destroyed by you entering, mysteriousness is destroyed by you explaining it. Just an alternative to the map-territory way of putting it.

Am I the only one who, while reading this post, thought “why doesn’t the same apply to anything else we ever discover”?

Elan vital (and phlogiston and luminiferous aether etc.) were particles/substances/phenomena postulated to try to explain observations made. How are quarks, electrons and photons any different? Just because we recognise these as the best available theory today, I am not sure I understand how one is a curiosity-stopper any more than the other.

The real curiosity-stopper is the suggestion that something is forever beyond our understanding and that attempting to research it is destined to be futile. Your quote from Lord Kelvin exhibits this mentality, but only very slightly. Certainly a lot less than some of that stuff you hear from religious people who think God explains everything but is beyond our understanding. I think the history of science shows that this mentality is continually diminishing, and Lord Kelvin’s quote may simply be a transitional fossil.

I still see traces of this mentality today. Ask a cosmologist what happened in the first few seconds after the big bang and they might say the particle horizon makes it fundamentally impossible to see beyond the point where the universe became optically transparent. I think many people think similarly about consciousness — not because they think we can't dissect the brain and figure out how it works, but rather because they think we will never be able to come up with a coherent, useful definition of the term that reasonably matches our intuition. I think each of these is a curiosity-stopper.

The difference between electrons and elan vital is that the former come with equations that let you predict things. If you said "electricity is electrons" that would be a curiosity-stopper, but if you said "electricity is electrons, and by the way they obey the Lorentz force equation [F = ...] and Maxwell's laws [del E = ...]" that would be an explanation.
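To fill in the equations the comment gestures at (standard physics, supplied here rather than quoted from the comment), the anticipation-controlling content looks like:

```latex
% Lorentz force law
\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})

% Maxwell's equations (SI units)
\nabla \cdot \mathbf{E} = \rho/\varepsilon_0 \qquad
\nabla \cdot \mathbf{B} = 0 \qquad
\nabla \times \mathbf{E} = -\partial \mathbf{B}/\partial t \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0\varepsilon_0\,\partial \mathbf{E}/\partial t
```

Given the charges and fields, these equations constrain what happens next; "elan vital" constrains nothing.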

I wouldn't call the luminiferous aether a curiosity-stopper, because it was an actual theory that did make predictions (it was essentially falsified in one experiment).

The luminiferous aether is also a brilliant example of how rationalists should and have formed hypotheses based on a combination of a priori logic, a hypothetical non-self-contradicting set of assumptions, and empirical evidence.

The degree of statistical support a theory could be expected to receive, in worlds where it is valid, is very important to hypothesis formation.

Across the possible realities consistent with what was known at the time, a theoretical paradigm such as aether physics would have been expected to be true more often than not.

At the time the theory was extremely apt in describing empirically-verifiable experiments. That's exactly why I'm glad I was taught about the luminiferous aether from a very young age even though it is not a part of current contemporary physics.

With respect to scientific pedagogy I would therefore say it is very important that we continue to teach students about the history of scientific paradigms, even those paradigms since lost to progress.

The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concurrence of atoms... Modern biologists were coming once more to the acceptance of something and that was a vital principle.

Given what we know now about the vastly complex and highly improbable processes and structures of organisms -- what we have learned since Lord Kelvin about nucleic acids, proteins, evolution, embryology, and so on -- and given that there are many mysteries still, such as consciousness and aging, or how to cure or prevent viruses, cancers, or heart disease, for which we still have far too few clues -- this rather metaphorical and poetic view of Lord Kelvin's is certainly a far more accurate view of the organism, for the time, than any alternative model that posited that the many details and functions of the human body, or its origins, could be most accurately modeled by simple equations like those used for Newtonian physics. To the extent vitalism deterred biologists from such idiocy, vitalism must be considered for its time a triumph. Too bad there were too few similarly good metaphors to deter people from believing in central economic planning or Marx's "Laws of History."

Admittedly, the "infinitely different" part is hyperbole, but "vastly different" would have turned out to be fairly accurate.

Is it better to say "The problem is too big, let's just give up" or "The problem is too big for me, but I can start with X and find out how that works"?

It seems to me Lord Kelvin was saying the former, while Wöhler clearly believed the latter, and proved it by synthesizing urea.

Did Wöhler understand the intricacies of biology? No, of course not, but he proved they could be discovered, which is exactly what Kelvin was saying could not be done. After almost 200 years we still aren't done, but we do know a whole lot about the intricacies of biology, and we have a rough idea of how much farther we need to go to understand all of it. Furthermore, we understand that while biology is incredibly complex, it follows the same rules that govern the "fortuitous concurrence of atoms", as Kelvin put it.

Kelvin was plain wrong, and worse, his whole point was to discourage further research into biology. He was one of the people who said it could not be done, while Wöhler just went ahead and did it.

Eliezer: It doesn't seem to me that you really engaged with Nick's point here. Also, I have pointed out to you before that there were lots of philosophers who believed that consciousness was unique and mysterious but life was not long before science rejected vitalism.

Your hand is just... stuff... and for some reason it moves under your direction. Is this not magic?

Yeah, I think it is. The one model we start with is the model of ourselves. Our hand moves because we will it to do so. If that were the only model I had, that's how I'd interpret the universe - every event was the result of the will of some being.

And stopping at "The Wizard Did It" makes perfect sense. We experience our own decisions as sufficient causes for our own actions.

I wonder how long it took for the concept of mechanism to take hold.

The quotation feature works by preceding a paragraph with >, not by typing a pipe manually.

Thanks. I eventually found the page on markup. And the little envelope under my Karma that shows me the responses to my comments.

These are the signs of mysterious answers to mysterious questions: Another good sign is that the mysterious answer is always in retreat. Suddenly, people explain some phenomena, previously thought to be explainable only by "elan vital" or "god" or "the influence of platonic Ideals". And the mysterious answer retreats to a smaller realm. And that realm just keeps on shrinking...

My summary: A mysterious answer is a fake explanation that acts as a semantic stop sign. Signs for mysterious answers:

  • Explanation acts as curiosity-stopper rather than anticipation-controller

  • Hypothesis is a black box (no underlying principles to derive from)

  • Social indication that people cherish their ignorance

I'm new to reading this blog and am slowly going through the sequences. Eliezer, I'm enjoying your writings a lot and they are really helping to change my way of thinking.

A thought I had while reading this and figured I'd ask for other thoughts:

To worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance.

I know people who are perfectly content to "worship their own ignorance." Why do you think they don't value knowing enough to go further? Is it just because they have hit a semantic roadblock and don't realize it?

1) Great post and great comments.

2) Like a few people have mentioned, using a life force as an explanation isn't necessarily a bad thing. It depends what you have in mind. You could believe in the life force without exhibiting any of the four signs of a mysterious answer. It would be interesting to know how many people used life force as a curiosity stopper when it was popular. I would guess that most people did use it as a curiosity stopper. Sounds like a good job for those experimental philosophers to show they do more than just polls about intuitions.

3) "You have a little causal diagram in your head that says ["Elan vital!"] -> [hand moves]. But actually you know nothing you didn't know before. You don't know, say, whether your hand will generate heat or absorb heat, unless you have observed the fact already; if not, you won't be able to predict it in advance."

I disagree that you know nothing more than you did before. When I think of a life force I picture different things than, say, electrical force. Maybe your concept of life hasn't substantially changed, but it has been enriched slightly, and the more you enrich a concept the more falsifiable it becomes. I would argue that the more falsifiable a concept is, without being shown to be false, the more useful it is (in general). For instance, if I said meaning was holistic, I think this is somewhat analogous to saying motion in the living is generated by a life force. It loosely constrains other things you can believe about meaning or life.

I like your list of signs of a curiosity stopper. I don't necessarily think that "elan vital" meets those requirements (as Roy points out), but perhaps it did for many people or at some times.

I like the list because my brain feels a little more limber and a little more powerful, having pondered it. The list is a curiosity ENHANCER, and an anticipation SHARPENER.

-- James

How is "elan vital" different from, lets say "higgs bozon" in physics ? Both are hypothetical parts of reality, which needs further confirmation, and more detailed description.

The Higgs boson has been confirmed. I suppose the wider point was something on the lines that "all unconfirmed hypotheses should be treated equally". Rationalists typically do not favour a level playing field, and prefer hypotheses that are in line with broad principles that have been successful in the past -- principles like reductionism, materialism, and, in earlier days, determinism.

In fact I have no idea what a Higgs boson is, but a physicist tells me it makes for a simpler mathematical system used to predict our experience. We can imagine actually getting evidence that would let us make a more detailed description. (I don't know if that's still true in a practical sense, but I believe it used to be true. At worst, all that we lack is energy.)

Meanwhile, "elan vital" makes no predictions except maybe negative ones, and "more detailed description" seems impossible even in principle without special revelation. Unless Eliezer is misreading Kelvin, the esteemed writer actually rules out any such discovery. From an abstract standpoint, the theory can't be expanded if we can't get the evidence to justify more details.

Meanwhile, "elan vital" makes no predictions

Elan Vital is a family of theories, some of which could be predictive in principle.

Another example: during the conversation between Deepak Chopra and Richard Dawkins, Deepak Chopra thinks that our lack of a very good understanding of, for example, the origin of language, or jumps in the fossil record, means that an actual discontinuity happened.

All the posts implying that people who came up with the concepts of phlogiston and elan vital were just doing science without the benefit of today's education are missing the point.

Today's scientists come up with ideas like string theory or dark energy, but they don't stop there: they are frantically trying to find evidence for them and so far failing. So they are just neat ideas that might explain a lot if shown to be true, but not much more than that. General relativity goes on providing evidence supporting it, including the new evidence for "frame dragging".

Phlogiston and elan vital were ideas that died for the LACK of evidence. The discoveries of oxygen and electrochemistry killed them. However, when the ideas were proposed, if you didn't know the answer, you left it there or made something up. I might mention also caloric, which was a pure guess based on very little but quashed by science in the form of experiments whose results fitted the concepts of energy much better.

I would like to suggest that the concept of "beauty" in art, relationships and even evolutionary biology seems to satisfy EY's criteria of being a mysterious answer.

If I ask, "how does the male peacock attract female peacocks" and one answers "because his tail is big and beautiful", haven't they failed to answer my question? Beauty in this response (1) is a curiosity-stopper, (2) has no moving parts, (3) is often uttered by people with a great deal of pride ("the painting is so beautiful!"), and (4) leaves the phenomenon a mystery (in the case of the peacock, I still don't really know why female peacocks like big colorful tails).

Also, symmetry is a sign of health in bilaterians such as we; so it makes sense that we'd evolve to find symmetry beautiful.

While I understand how there are some questions that cannot be completely answered, I feel as though you have chosen to ignore the fact that science at that time was inadequate to really understand the underlying mechanisms. Even today there is no complete understanding of any field, just educated guesses based on experiments and observations. Elan vital was just one theory attempting to describe why life happens, and it was based on the fact that life had something more than un-living matter. However, further experiments altered this theory. Would you say the same thing about Quantum Theory, or the Electromagnetic Spectrum, or even E=mc^2? So far, those theories, while truthful when modeling current events, have not been conclusively proven. However, by aggressively insinuating that anyone who uses a theory that has not been categorically proven as fact is lacking in rational thought, you belittle the field of science and all that it has achieved.