LESSWRONG
Kevin Dorst
Comments
Only Fools Avoid Hindsight Bias
Kevin Dorst · 1y

I get where you're coming from, but where do you get off the boat? The result is a theorem of probability: if (1) you update by conditioning on e, and (2) you had positive covariance for your own opinion and the truth, then you commit hindsight bias.  So to say this is irrational we need to either say that (1) you don't update by conditioning, or (2) you don't have positive covariance between your opinion and the truth. Which do you deny, and why?

The standard route is to deny (2) by implicitly assuming that you know exactly what your prior probability was, at both the prior and future time.  But that's a radical idealization.

Perhaps more directly to your point: the shift only results in over-estimation if your INITIAL estimate is accurate.  Remember we're eliciting (i) E(P(e)) and (ii) E(P(e) | e),  not  (iii) P(e) and  (ii) E(P(e) | e).  If (i) always equaled (iii) (you always accurately estimated what you really thought at the initial time), then yes, hindsight bias would decrease the accuracy of your estimates.  But in contexts where you're unsure what you think, you WON'T always accurately estimate your prior.

In fact, that's a theorem.  If P has higher-order uncertainty, then there must be some event q such that P(q) ≠ E(P(q)).  See this old paper by Samet (https://www.tau.ac.il/~samet/papers/quantified.pdf), and this more recent one with a more elementary proof (https://philarchive.org/rec/DORHU).
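To make the theorem concrete, here's a minimal toy model (illustrative numbers of my own, not from either paper): worlds specify whether e is true and what you estimate your prior P(e) to be, and your estimate covaries positively with the truth.

```python
# Toy model: each world fixes (whether e is true, your estimate of P(e), prior weight).
# Your estimate is higher in e-worlds, so it covaries positively with the truth.
worlds = [
    (True,  0.7, 0.3),
    (True,  0.5, 0.2),
    (False, 0.5, 0.2),
    (False, 0.3, 0.3),
]

p_e = sum(w for truth, est, w in worlds if truth)                     # P(e)
prior_est = sum(est * w for truth, est, w in worlds)                  # E(P(e))
post_est = sum(est * w for truth, est, w in worlds if truth) / p_e    # E(P(e) | e)
cov = sum((truth - p_e) * (est - prior_est) * w for truth, est, w in worlds)

print(prior_est, post_est, cov)  # prior estimate ~0.5, posterior estimate ~0.62
```

Conditioning on e shifts your estimate of your own prior up by exactly cov / p_e (0.12 here): hindsight bias falls out of conditioning plus positive covariance, with no further assumptions.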

The Natural Selection of Bad Vibes (Part 1)
Kevin Dorst · 1y

Agreed that people have lots of goals that don't fit in this model. It's definitely a simplified model.  But I'd argue that ONE of (most) people's goals IS to solve problems; and I do think, broadly speaking, it is an important function (evolutionarily and currently) of conversation.  So I still think this model gets at an interesting dynamic.

Centrists are (probably) less biased
Kevin Dorst · 1y

I think it depends on what we mean by assuming the truth is in the center of the spectrum.  In the model at the end, we assume the truth is at the extreme left of the initial distribution (µ = 40, while everyone's estimates are higher than 40).  Even then, we end up with a spread where those who end up in the middle (ish, not exactly the middle) are both more accurate and less biased.

What we do need is that wherever the truth is, people will end up being on either side of it.  Obviously in some cases that won't hold. But in many cases it will—it's basically inevitable if people's estimates are subject to noise and people's priors aren't in the completely wrong region of logical space.

Bayesians Commit the Gambler's Fallacy
Kevin Dorst · 1y

Hm, I'm not following your definitions of P and Q. Note that (as far as I know) there's no easy closed-form expression for the likelihoods of various sequences under these chains; I had to calculate them using dynamic programming on the Markov chains.

The relevant effect driving it is that the degree of shiftiness (how far the chain deviates from a 50% heads rate) builds up over a streak.  So although Switchy and Sticky diverge by the same amount in any given case (say there's a streak of 2, and Switchy has a 30% chance of continuing while Sticky has a 70% chance), Switchy makes it less likely that you'll run into long streaks of divergences, while Sticky makes it extremely likely.  Neither Switchy nor Sticky gives a constant rate of switching; it depends on the streak length. (Compare a hypergeometric distribution.)
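To see the build-up effect numerically, here's a toy calculation with illustrative parameters of my own (not the paper's exact chains): the probability of an unbroken streak of length k under a streak-dependent chain.

```python
def run_prob(k, cont):
    """Probability of k identical flips in a row, where cont[j-1] is the
    chance of repeating given a current streak of length j (streaks longer
    than len(cont) use the last entry)."""
    p = 0.5  # first flip is unbiased
    for streak in range(1, k):
        p *= cont[min(streak, len(cont)) - 1]
    return p

# Illustrative chains that build up to 60%-shiftiness over two steps:
switchy_cont = [0.45, 0.40]  # repeating gets LESS likely as the streak grows
sticky_cont  = [0.55, 0.60]  # repeating gets MORE likely as the streak grows

for k in (2, 5, 10):
    print(k, run_prob(k, switchy_cont), run_prob(k, sticky_cont))
```

The Sticky-to-Switchy ratio of streak probabilities grows exponentially in k, so the long streaks on which the models' predictions pile up divergence are common under Sticky and rare under Switchy.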

Take a look at §4 of the paper and the "Limited data (full sequence): asymmetric closeness and convergence" section of the Mathematica Notebook linked from the paper to see how I calculated their KL divergences. Let me know what you think!  

Bayesians Commit the Gambler's Fallacy
Kevin Dorst · 1y

See the discussion in §6 of the paper.  There are too many variations to run, but it at least shows that the result doesn't depend on knowing the long-run frequency is 50%; if we're uncertain about both the long-run hit rate and about the degree of shiftiness (or whether it's shifty at all), the results still hold.

Does that help?

Bayesians Commit the Gambler's Fallacy
Kevin Dorst · 1y

Mathematica notebook is here! Link in the full paper.

How did you define Switchy and Sticky? The shiftiness needs to build up over >= 2 steps, so the effect won't appear with one-step matrices like

Switchy = (0.4, 0.6; 0.6, 0.4)

Sticky = (0.6, 0.4; 0.4, 0.6)

But it WILL appear if they build up to (say) 60%-shiftiness over two steps. Eg:

Switchy = (0.4, 0, 0.6, 0; 0.45, 0, 0.55, 0; 0, 0.55, 0, 0.45; 0, 0.6, 0, 0.4)

Sticky = (0.6, 0, 0.4, 0; 0.55, 0, 0.45, 0; 0, 0.45, 0, 0.55; 0, 0.4, 0, 0.6)
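Reading those as 4x4 transition matrices (on one plausible reading of the states: H-streak >= 2, H-streak = 1, T-streak = 1, T-streak >= 2; this ordering is my assumption), here's a quick check that the rows are probability distributions and that shiftiness builds up over two steps:

```python
# Reconstructed 4x4 transition matrices; rows are current state, columns next state.
# Assumed state order: (H streak >= 2, H streak == 1, T streak == 1, T streak >= 2).
switchy = [
    [0.40, 0.00, 0.60, 0.00],
    [0.45, 0.00, 0.55, 0.00],
    [0.00, 0.55, 0.00, 0.45],
    [0.00, 0.60, 0.00, 0.40],
]
sticky = [
    [0.60, 0.00, 0.40, 0.00],
    [0.55, 0.00, 0.45, 0.00],
    [0.00, 0.45, 0.00, 0.55],
    [0.00, 0.40, 0.00, 0.60],
]

# Every row must be a probability distribution:
assert all(abs(sum(row) - 1.0) < 1e-12 for M in (switchy, sticky) for row in M)

# Repeat probability after a 1-streak vs a 2+-streak:
# Switchy: 0.45 -> 0.40 (switching builds up to 60%); Sticky: 0.55 -> 0.60.
print(switchy[1][0], switchy[0][0], sticky[1][0], sticky[0][0])
```

On this reading, a streak of one repeat moves you from the "streak = 1" state into the "streak >= 2" state, where the repeat probability has drifted a further 5 points away from 50%, which is exactly the two-step build-up the effect requires.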

Bayesians Commit the Gambler's Fallacy
Kevin Dorst · 1y

Would it have helped if I added the attached paragraphs (in the paper, page 3, cut for brevity)?

Frame the conclusion as a disjunction: "either we construe 'gambler's fallacy' narrowly (as by definition irrational) or broadly (as used in the blog post, for expecting switches).  If the former, we have little evidence that real people commit the gambler's fallacy.  If the latter, then the gambler's fallacy is not a fallacy."

Bayesians Commit the Gambler's Fallacy
Kevin Dorst · 1y

I see the point, though I don't see why we should be too worried about the semantics here. As someone mentioned below, I think the "gambler's fallacy" is a folk term for a pattern of beliefs, and the claim is that Bayesians (with reasonable priors) exhibit the same pattern of beliefs.  Some relevant discussion is in the full paper (p. 3), which I (perhaps misguidedly) cut for the sake of brevity.

In Defense of Epistemic Empathy
Kevin Dorst · 2y

Good question.  It's hard to tell exactly, but there's lots of evidence that the rise in "affective polarization" (dislike of the other side) is linked to "partisan sorting" (or "ideological sorting"): the fact that people within political parties increasingly agree on more and more things, and also socially interact with each other more.  Lilliana Mason has some good work on this, and Ezra Klein drew heavily on her research for his book on the topic.

This paper raises some doubts about the link between the two, though.  It's hard to know!

In Defense of Epistemic Empathy
Kevin Dorst · 2y

I think it depends a bit on what we mean by "rational". But it's standard to define it as "doing the best you CAN to get to the truth (or, in the case of practical rationality, to get what you want)".  We want the "can" proviso in there so that we don't call people irrational for failing to be omniscient.  But once we put it in, things like resource constraints look a lot like constraints on what you CAN do, and therefore make less-than-ideal performance rational.

That's controversial, of course, but I do think there's a case to be made that (at least some) "resource-rational" theories ARE ones on which people are being rational.

Posts:
Only Fools Avoid Hindsight Bias (−11 points, 1y, 5 comments)
The Natural Selection of Bad Vibes (Part 1) (13 points, 1y, 3 comments)
Centrists are (probably) less biased (1 point, 1y, 2 comments)
Ideological Bayesians (98 points, 1y, 5 comments)
Bayesians Commit the Gambler's Fallacy (49 points, 2y, 30 comments)
In Defense of Epistemic Empathy (60 points, 2y, 19 comments)
Bayesian Injustice (124 points, 2y, 10 comments)
Polarization is Not (Standard) Bayesian (11 points, 2y, 6 comments)
ChatGPT challenges the case for human irrationality (3 points, 2y, 10 comments)
Rationalization Maximizes Expected Value (19 points, 2y, 10 comments)