... Oh, no! I've been shot!

— C3PO

A strange sort of paralysis can overtake risk-averse people (like me) when we decide to play it safe. We imagine the worst thing that could happen if we go ahead with our slightly risky plan, and that image stops us from carrying it out.

One possible way of overcoming such paralysis is to remind yourself just how much danger you're actually in.

Humanity could be mutilated by nuclear war, biotechnology disasters, societal meltdown, environmental collapse, oppressive governments, disagreeable AI, or other horrors. On an individual level, anybody's life could turn sour for more mundane reasons, from disease to bereavement to divorce to unemployment to depression. Which scenarios terrify you depends on your values and differs from person to person. Those here who hope to live forever may die of old age, only for cryonics to turn out not to work.

There must be some number X which is the probability of Really Bad Things happening to you. X is probably not a tiny figure but sits significantly above zero, and realizing this should encourage you to go ahead with whatever slightly risky plan you were contemplating, as long as it only nudges X upwards a little.

Admittedly, this tactic seems like a cheap hack that relies on an error in human reasoning: is nudging your danger level from 0.2 to 0.201 actually more acceptable than nudging it from 0 to 0.001? Perhaps not. Needless to say, a real rationalist ought to ignore all this and simply take the action with the highest expected value.
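
To make the expected-value point concrete, here is a rough sketch in Python. The baseline risk, payoff, and disaster value are illustrative assumptions, not figures from the post.

```python
# A rough numerical sketch of the expected-value comparison above.
# All numbers below are illustrative assumptions.

def expected_value(p_bad: float, value_if_ok: float, value_if_bad: float) -> float:
    """Expected value of a plan given a probability of Really Bad Things."""
    return (1 - p_bad) * value_if_ok + p_bad * value_if_bad

baseline_risk = 0.2   # X: the danger you were already in
nudge = 0.001         # extra risk added by the slightly risky plan
gain = 10.0           # hypothetical benefit if nothing goes wrong
disaster = -1000.0    # hypothetical value of the Really Bad outcome

status_quo = expected_value(baseline_risk, 0.0, disaster)
with_plan = expected_value(baseline_risk + nudge, gain, disaster)

# The comparison that matters is status_quo vs. with_plan, not whether
# 0.2 -> 0.201 "feels" more acceptable than 0 -> 0.001.
print(f"status quo: {status_quo:.2f}, with plan: {with_plan:.2f}")
```

Under these made-up numbers the plan comes out slightly ahead; the point is only that the decision turns on the two expected values, not on how the nudge feels.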

10 comments

bill · 15y · 50

A similar but different method is calculating your "perfect life probability" (from Howard).

Let A be a "perfect" life in terms of health and wealth: say, $2M per year, living to 120, and being a perfectly healthy 120-year-old when you instantly and painlessly die.

Let B be your current life.

Let C be instant, painless death right now.

What probability of A versus C makes you indifferent between that deal and B for sure? That is your "perfect life probability" or "PLP." This is a numerical answer to the question "How are you doing today?" For example, mine is 93% right now, as I would be indifferent between B for sure and a deal with a 93% chance of A and 7% chance of C.

Note that almost anything that happens to you on any particular day would not change your PLP very much. In particular, adding a small risk to your life certainly won't make much of a difference (see the numerical sketch below).

(I'm not sure how immortality or other extreme versions of "perfect health" would change this story.)
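
Here is a minimal sketch of the PLP arithmetic above, assuming a utility scale with u(A) = 1 (the perfect life) and u(C) = 0 (instant death), so that indifference makes the PLP equal to the utility of your current life. The function names and the 0.1% added risk are illustrative, not from the comment.

```python
# Minimal sketch of the PLP arithmetic, assuming u(A) = 1 and u(C) = 0.

def plp_from_utility(u_current_life: float) -> float:
    """Indifference between B for sure and the lottery gives
    u(B) = p * u(A) + (1 - p) * u(C) = p, so the PLP is just u(B)."""
    return u_current_life

def plp_after_added_risk(plp: float, extra_death_risk: float) -> float:
    """An added independent chance of immediate death turns B into a lottery:
    C with probability extra_death_risk, otherwise B. Its utility, and hence
    the new PLP, is (1 - extra_death_risk) * plp."""
    return (1 - extra_death_risk) * plp

plp = plp_from_utility(0.93)               # the commenter's example value
nudged = plp_after_added_risk(plp, 0.001)  # a 0.1% added risk of death
print(f"PLP before: {plp:.4f}, after: {nudged:.4f}")  # 0.9300 -> 0.9291
```

Under these assumptions, a 0.1% added risk only moves the PLP from 0.93 to about 0.929, which is the commenter's point about day-to-day risks.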

"What probability of A versus C makes you indifferent between that deal and B for sure?"

This is an interesting thought experiment. I submit that for many men, the probability is quite small, say on the order of 15% or 25%, whereas for most women it is 90-95%.

Edited: I originally wrote "for women it approaches unity".

What makes you suggest that?

All kinds of things: conspicuously, the far greater incarceration rate for men, and the far greater willingness of men to fight wars. Both crime and warfighting seem to involve assessments similar to the death vs. perfect life problem.

It's not clear to me what B means.

Is it "your health and wealth will always be the same as they are today, and you will die at age 90"? Is it "your life will have the smoothly varying health and wealth that the average person is expected to have from today's statistics"? Is it "the best life you expect to have with at least a 10% confidence interval"? Is it "the world is deterministic, you have some fate B unknown to you, and the only free choice you'll ever make in your life is that between B and A/C"?

gjm · 15y · 40

Surely it means that a god steps into the world beside you and offers you a choice: "either take the A-or-C gamble, in which case I'll guarantee to arrange A or C; or else refuse it, in which case I'll go away and leave your life alone". In other words, B means whatever future, or lottery between futures, you'd have been facing if you hadn't been given this offer.

My problem with this definition of PLP is that it does weird things with determinism vs. free will (much as some people complain about the statement of Newcomb's problem).

If I win A, will I live to a healthy 120 no matter what I do, like quantum immortality? Could I, e.g., infect myself with AIDS, drink a deliberately randomized cocktail, and have it turn into a magic AIDS cure because I'm guaranteed health? Could I cause an atom bomb to malfunction by standing next to it and setting it off? Could I donate organs and limbs for transplants and then regrow them? (More serious examples can be provided. For instance, I suspect I could rig up an experiment guaranteed to kill me unless a source of randomness just happens to produce code for a Friendly Seed AI, or a proof of P != NP, or other useful things.)

Conversely, in option B I can't know what my life will be like. I used to have a pretty good estimate, but I just realized I really have no idea. After all, my estimate didn't include a god offering me limited immortality. Maybe tomorrow I'll get an offer from Satan for my soul?

Could you possibly take the prolific imagination you've applied to how this is misspecified or wrong, and turn it to imagining ways to improve it or patch it up?

For example, suppose that winning A, instead of providing guarantees that extend into the far future, merely gives you a very large gift in the near future.

Alternatively, suppose that all of your questions regarding A were answered "Yes".

Alternatively, is it not possible to price a bet offered to you even if you are uncertain exactly how much you would win if you did win?

Well, I'm not sure what it's supposed to specify.

For instance, under your first suggestion (that A is an immediate prize of, e.g., money), the PLP becomes a simple death-vs.-money risk trade-off. The original description seemed to suggest more than that, what with choosing whole alternative futures.

"Admittedly, this tactic seems like a cheap hack that relies on an error in human reasoning"

Agreed. But are we to stop being human before living well?