Dialectical Bootstrapping


They don't test that directly. From what they report, it looks like the average is more accurate than the second guess, but not statistically significantly so. The average is 7.6 better than the first guess (with mean errors of 123.2 vs. 130.8, looking at all participants' first guesses, and the averages of only those in the dialectical bootstrapping condition). The second guess (of those in the dialectical bootstrapping condition) is only 4.5 better than their first guess, which is not reliably different from zero (95% CI = -1.0 to +10.4).

In the reliability condition, the first and second estimates for each question were nearly identically accurate, with a mean within-participants difference of 0.4 (SD = 6.7; Mdn = 0.0; confidence interval, or CI = 0.0–+1.4; d = 0.06). In the dialectical-bootstrapping condition, the second estimates were somewhat, but not reliably, more accurate than their respective first estimates (within-participants difference: M = 4.5, SD = 19.6; Mdn = 3.0; CI = -1.0–+10.4; d = 0.23).

http://www.overcomingbias.com/2008/06/average-your-gu.html

The same point is made there, with the addition that the second guess is usually worse than the first. What kind of weight do we need to put on the second answer to eliminate that bias? It has to be less than one (since the second answer is worse than the first), but more than one fourth (since the average is better than the first guess).
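The weighting question above can be sketched as a weighted combination of the two guesses; this is just an illustration of the reasoning, not a result from the paper, and the year values are hypothetical:

```python
def combine(first, second, w):
    """Weighted combination of two guesses, with weight w on the second.

    w = 0 recovers the first guess alone; w = 0.5 is the plain average.
    The comment's argument is that the error-minimizing w lies somewhere
    between one fourth and one.
    """
    return (1 - w) * first + w * second


first, second = 1850, 1870  # hypothetical year estimates
print(combine(first, second, 0.0))  # 1850.0 (first guess only)
print(combine(first, second, 0.5))  # 1860.0 (plain average)
```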

The improvement in this paper, over simply making two estimates one after another, is the focus on assuming your first estimate is wrong while constructing your second estimate.

The control group in the paper did make two estimates; I'll edit to emphasize that.

"Dialectical Bootstrapping" is a simple procedure that may improve your estimates. This is how it works:

Herzog and Hertwig find that the average of the two estimates (in a historical-date estimation task) is more accurate than the first estimate (Edit: or than the average of two estimates made without the "assume you're wrong" manipulation). To put the finding in an OB/LW-centric manner, this procedure (sometimes, partially) avoids Cached Thoughts.
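A quick simulation (my own sketch, not from the paper) illustrates the statistical mechanism behind averaging: when two guesses share some error but also have partly independent error, their average tends to be closer to the truth than the first guess alone. The error magnitudes here are arbitrary assumptions:

```python
import random

random.seed(0)

truth = 1800      # hypothetical true date
trials = 10_000
err_first = 0.0
err_avg = 0.0

for _ in range(trials):
    # Error common to both guesses (a shared bias, e.g. a cached thought)
    shared = random.gauss(0, 30)
    # Each guess also has its own independent error component
    g1 = truth + shared + random.gauss(0, 30)
    g2 = truth + shared + random.gauss(0, 30)

    err_first += abs(g1 - truth)
    err_avg += abs((g1 + g2) / 2 - truth)

print("mean error, first guess:", err_first / trials)
print("mean error, average:   ", err_avg / trials)
```

Averaging cancels the independent error components but not the shared bias, which is consistent with the average improving on the first guess only somewhat: the more the second guess escapes the shared bias (the point of the "assume you're wrong" manipulation), the more averaging helps.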