Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

There has been some confusion about whether people are using inside views or all-things-considered betting odds when they talk about P(doom).  Which do you give by default?  What are your numbers for each?


2 Answers

Dagon

Mar 27, 2022


Depends on who asks, why, and how much context we share.  Much of the time, I give the "signaling answer", which is neither my inside/true estimate nor a wager I can make. 

It's not purely signaling - I'm not JUST trying to convince them I'm smart or to donate to my preferred cause or change their behavior or whatnot.  It also includes a desire to have interesting conversations and to elicit models and predictions from my discussion partners.  But it's not a pure estimate of probability.

I give it somewhere around p(personal death) = 0.999, p(most humans die with me) = 0.15, and p(humans wiped out within a few hundred years) = 0.8.  

Rohin Shah

Mar 28, 2022


Independent impressions (= inside view in your terminology), though my all-things-considered belief (= betting odds in your terminology) is pretty similar.

9 comments

Could you explain more about the difference, and what it looks like to give one vs. the other?

When betting, you should discount to zero the scenarios where you're unable to enjoy the reward. In less accurate terms, any doom scenario that involves you personally dying should be treated as impossible, because the expected utility of winning is zero.

Oh, this is definitely not what I meant.

"Betting odds" == Your actual belief after factoring in other people's opinions

"Inside view" == What your models predict, before factoring in other opinions or the possibility of being completely wrong

Pablo

Though I understood what you meant, perhaps clearer terminology would be all-things-considered beliefs vs. independent impressions.

TLW

Er, treated as impossible != treated as zero utility.

Suppose I think the probability of me dying in a car accident is 20%, and I don't care about what happens to my wealth in that world (as opposed to, say, caring about my heirs having more money). Should I buy a contract that pays out $100 if I die in a car accident at the cost of $10?

The claim is: no, because it pays out only in situations where the money is worthless to me. If you try to back out my estimate from my willingness-to-pay, it will look a lot like I think the probability of me dying in a car accident is 0%. [And the reverse contract--the "I don't die in a car accident" one--I should buy as though my price were 100%, which lets me move all of my money from worlds I don't care about into worlds I do care about, effectively making my bet counterparties my heirs.]
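
A minimal sketch of that backing-out argument, assuming utility is linear in dollars and that dollars are worth nothing to me in the worlds where I'm dead (the 20% / $100 / $10 figures are just the hypothetical ones from the comment):

```python
def expected_utility_of_bet(p_die, price, payout, dollar_value_if_dead=0.0):
    """Expected utility change from buying the contract, with utility linear
    in dollars while alive and each dollar worth `dollar_value_if_dead` in
    the worlds where I die."""
    if_dead = p_die * (payout - price) * dollar_value_if_dead
    if_alive = (1 - p_die) * (-price)  # in the worlds I care about, I'm just out the price
    return if_dead + if_alive

# Hypothetical figures from the comment: true p(die) = 20%, $100 payout, $10 price.
print(expected_utility_of_bet(0.20, price=10, payout=100))    # -8.0: don't buy
print(expected_utility_of_bet(0.20, price=0.01, payout=100))  # still negative

# Any positive price has negative expected utility, so my willingness-to-pay is $0,
# and a probability backed out of that willingness-to-pay looks like p(die) = 0%.
```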

You can get milder forms of this distortion from 'currency changes'. If I make a dollar-denominated bet on the relative value of the dollar and the euro, and I mostly buy things in euros, then you need to do some work to figure out what I think the real probabilities are (because if I'm willing to buy a "relative value of the dollar halves" contract at 30%, I'm expecting to get back $50 in current value rather than $100).

[This is to say, I think you're right that those are different things, but the "because" statement is actually pointing at how those different things construct the conclusion.]
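
And a sketch of the currency-denomination version, under the same linear-utility assumption, using the illustrative 30% price and "dollar halves" event from the comment:

```python
def implied_probability(price, payout, payout_value_multiplier=1.0):
    """Back out a probability from a willingness-to-pay when the payout is
    denominated in a currency whose real value changes in the very world
    where the contract pays out.

    Willingness-to-pay (in today's value) = p * payout * multiplier,
    so p = price / (payout * multiplier).
    """
    return price / (payout * payout_value_multiplier)

# Naive reading: paying $30 for a $100 contract looks like a 30% probability.
print(implied_probability(30, 100))                               # 0.3

# But if the event is "the dollar's relative value halves", the $100 payout is
# only worth $50 of today's value to someone who spends euros, so the same
# willingness-to-pay actually implies:
print(implied_probability(30, 100, payout_value_multiplier=0.5))  # 0.6
```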

TLW

> Should I buy a contract that pays out $100 if I die in a car accident at the cost of $10?

No, because you don't get $100 worth of utility if you die. This is distinct from there being a 0% probability of you dying.

No one said it should be treated as zero utility.

TLW

> because the expected utility of winning is zero.

...?