As rationalists, we should be able to make consistently accurate predictions that enable us to act effectively.
As humans, we don't. At least not perfectly.
We need to improve. Many of us have, or at least believe we have. However, it's been a notably hacked-together improvement. PredictionBook is an excellent source of feedback on how well we're doing, but there's more detailed information, not easily available, that I think could be incredibly useful. Questions I would like to see answered (with a sketch of the analysis after this list) are:
- What kinds of predictions are we least successful at? (weakest calibration, lowest accuracy)
- What kinds of predictions have the most low-hanging fruit? What's the easiest to improve on right now?
- What kinds of predictions are the most useful to …
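For concreteness, here's a rough sketch (in Python) of the kind of per-category analysis I have in mind, assuming predictions could be exported as (category, stated probability, outcome) triples. The tuple format, the function name, and the crude single-bucket calibration measure are all my own illustration, not anything PredictionBook actually exposes:

```python
from collections import defaultdict

def per_category_stats(predictions):
    """predictions: iterable of (category, stated_probability, outcome)
    tuples, where outcome is 1 if the prediction came true, else 0.
    Returns {category: (calibration_gap, accuracy, n)}."""
    buckets = defaultdict(list)
    for category, p, outcome in predictions:
        buckets[category].append((p, outcome))
    stats = {}
    for category, entries in buckets.items():
        n = len(entries)
        mean_confidence = sum(p for p, _ in entries) / n
        hit_rate = sum(o for _, o in entries) / n
        # Calibration gap: how far stated confidence sits from the
        # observed frequency (a crude, single-bucket measure).
        calibration_gap = abs(mean_confidence - hit_rate)
        # Accuracy: fraction of predictions on the right side of 50%.
        accuracy = sum((p > 0.5) == bool(o) for p, o in entries) / n
        stats[category] = (calibration_gap, accuracy, n)
    return stats

# e.g. per_category_stats([("work", 0.8, 1), ("work", 0.9, 0), ("health", 0.6, 1)])
```

Ranking categories by calibration gap would point at the weakest calibration; ranking by accuracy would point at the low-hanging fruit.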
OK. I naively thought I could trip up SCDT by altering the probability of being a simulation or the payout, but I can't.
SCDT successfully one-boxes for any box B payout larger than $1,000. SCDT one-boxes even if a single simulation is used to predict any number of actual Alices. (The scenario I worked through involved 10,000 duplicate Alices being predicted by a single simulated Alice.)
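For concreteness, here's a minimal sketch of the payoff arithmetic behind that claim. It assumes the standard setup: box A always holds $1,000, the predictor fills box B exactly when the simulated Alice one-boxes, and every copy of the decision procedure (one simulation plus the real Alices) takes the same action. The function and its names are my own illustration, not anything from the SCDT writeup:

```python
def scdt_payout(action, box_b_payout, n_real_alices=10_000):
    """Total payout summed over all real Alices, given that every copy
    of the decision procedure takes the same `action`."""
    # The predictor fills box B only if the simulated Alice one-boxes;
    # since the simulation runs the same procedure, box B's contents
    # track the real Alices' action.
    box_b = box_b_payout if action == "one-box" else 0
    # Box A always holds $1,000; two-boxers take both boxes.
    per_alice = box_b if action == "one-box" else 1_000 + box_b
    return n_real_alices * per_alice

for payout in (999, 1_000, 1_001, 1_000_000):
    one = scdt_payout("one-box", payout)
    two = scdt_payout("two-box", payout)
    print(f"box B payout ${payout:,}: "
          f"{'one-box' if one > two else 'two-box'} wins "
          f"(one-box ${one:,} vs two-box ${two:,})")
```

Running it shows the crossover exactly where expected: two-boxing wins at $999, and one-boxing wins for any box B payout above $1,000, regardless of how many real Alices the single simulation stands in for.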
I'm thoroughly impressed by this decision theory.