The most interesting takeaway here is not that the predictor regressed to the mean, but that extreme things tend to be differently extreme on different axes.
Even though the two variables are strongly correlated, things that are extreme on one variable are somewhat closer to the mean on the other variable.
I think they're close to identical. "The tails come apart", "regression to the mean", "regressional Goodhart", "the winner's curse", "the optimizer's curse", and "the unilateralist's curse" are all talking about essentially the same statistical phenomenon. They come at it from different angles, and highlight different implications, and are evocative of different contexts where it is relevant to account for the phenomenon.
Eric Schwitzgebel has done studies on whether moral philosophers behave more ethically (e.g., here). Some of the measures from that research seem to match reasonably well with law-abidingness (e.g., returning library books, paying conference registration fees, survey response honesty) and could be used in studies of mathematicians.
A better sentence should give the impression that, by way of analogy, some basketball players are NBA players.
This analogy seems like a good way of explaining it. Saying (about forecasting ability) that some people are superforecasters is similar to saying (about basketball ability) that some people are NBA players or saying (about chess ability) that some people are Grandmasters. If you understand in detail the meaning of any one of these claims (or a similar claim about another domain besides forecasting/basketball/chess), then most of what you could say about that claim would port over pretty straightforwardly to the other claims.
I don't see much disagreement between the two sources. The Vox article doesn't claim that there is much reason for selecting the top 2% rather than the top 1% or the top 4% or whatever. And the SSC article doesn't deny that the people who scored in the top 2% (and are thereby labeled "Superforecasters") systematically do better than most at forecasting.
I'm puzzled by the use of the term "power law distribution". I think that the GJP measured forecasting performance using Brier scores, and Brier scores are always between 0 and 1, which is the wrong shape for a fat-tailed distribution. And the next sentence (which begins "that is") isn't describing anything specific to power law distributions. So probably the Vox article is just misusing the term.
(This is Dan, from CFAR since 2012)
Working at CFAR (especially in the early years) was a pretty intense experience, which involved a workflow that regularly threw you into these immersive workshops, and also regularly digging deeply into your thinking and how your mind works and what you could do better, and also trying to make this fledgling organization survive & function. I think the basic thing that happened is that, even for people who were initially really excited about taking this on, things looked different for them a few years later. Part of that is personal, with things like burnout, or feeling like they’d gotten their fill and had learned a large chunk of what they could from this experience, or wanting a life full of experiences which were hard to fit into this (probably these 3 things overlap). And part of it was professional, where they got excited about other projects for doing good in the world while CFAR wanted to stay pretty narrowly focused on rationality workshops. I’m tempted to try to go into more detail, but it feels like that would require starting to talk about particular individuals rather than the set of people who were involved in early CFAR, and I feel weird about that.
(This is Dan from CFAR)
In terms of what happened that day, the article covers it about as well as I could. There’s also a report from the sheriff’s office which goes into a bit more detail about some parts.
For context, all four of the main people involved live in the Bay Area and interact with the rationality community. Three of them had been to a CFAR workshop. Two of them are close to each other, and CFAR had banned them prior to the reunion based on a bunch of concerning things they’ve done. I’m not sure how the other two got involved.
They have made a bunch of complaints about CFAR and other parts of the community (the bulk of which are false or hard to follow), and it seems like they were trying to create a big dramatic event to attract attention. I’m not sure quite how they expected it to go.
This doesn’t seem like the right venue to go into details to try to sort out the concerns about them or the complaints they’ve raised; there are some people looking into each of those things.
Not precise at all. The confidence interval is HUGE.
stdev = 5.9 (without Bessel's correction)
std error = 2.6
95% CI = (0.5, 10.7)
The confidence interval should not need to go that low. Maybe there's a better way to do the statistics here.
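For concreteness, here is a minimal sketch of the normal-approximation interval. Only the sd of 5.9 appears above; the mean of 5.6 and n = 5 are my back-of-the-envelope assumptions, chosen because they roughly reproduce the standard error and interval quoted:

```python
import math

def normal_ci_95(mean, sd, n):
    """Normal-approximation 95% confidence interval for a mean,
    given a population-style sd (no Bessel correction) and sample size n."""
    se = sd / math.sqrt(n)
    return mean - 1.96 * se, mean + 1.96 * se

# Assumed values: only sd = 5.9 is from the comment above;
# mean = 5.6 and n = 5 are guesses that approximately recover se ~ 2.6.
mean, sd, n = 5.6, 5.9, 5
se = sd / math.sqrt(n)           # ~ 2.6
lo, hi = normal_ci_95(mean, sd, n)
```

With a sample this small, a t-interval (critical value ≈ 2.78 at 4 degrees of freedom, versus 1.96) would be noticeably wider, and if the underlying quantity cannot be negative, a normal interval that dips toward zero is strained anyway; either point could be the "better way to do the statistics" being asked for.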
Warning: this sampling method contains selection effects.