In https://slatestarcodex.com/2016/02/04/book-review-superforecasting/, Scott writes:
…okay, now we’re getting to a part I don’t understand. When I read Tetlock’s paper, all he says is that he took the top sixty forecasters, declared them superforecasters, and then studied them intensively. That’s fine; I’d love to know what puts someone in the top 2% of forecasters. But it’s important not to phrase this as “Philip Tetlock discovered that 2% of people are superforecasters”. This suggests a discontinuity, a natural division into two groups. But unless I’m missing something, there’s no evidence for this. Two percent of forecasters were in the top two percent. Then Tetlock named them “superforecasters”. We can discuss what skills help people make it this high, but we probably shouldn’t think of it as a specific phenomenon.
But in this article https://www.vox.com/future-perfect/2020/1/7/21051910/predictions-trump-brexit-recession-2019-2020, Kelsey Piper and Dylan Matthews write:
Tetlock and his collaborators have run studies involving tens of thousands of participants and have discovered that prediction follows a power law distribution. That is, most people are pretty bad at it, but a few (Tetlock, in a Gladwellian twist, calls them “superforecasters”) appear to be systematically better than most at predicting world events.
which seems to disagree with Scott's reading. I'm curious who's right.
So there's the question "is 'superforecaster' a natural category?", which I'm operationalizing as: "do the performances of GJP (Good Judgment Project) participants follow a power-law distribution, such that the best 2% are significantly better than the rest?"
Does anyone know the answer to that question? (And/or does anyone want to argue with that operationalization?)
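To make the operationalization concrete, here's a minimal sketch of the kind of check I have in mind, on synthetic numbers (not real GJP data, whose per-participant scores I don't have). The idea: if forecasting skill were one continuous (roughly normal) distribution, the top 2% would just be the tail of that distribution; if there's a genuinely heavy-tailed "superforecaster" minority, the observed top-2% mean should clearly exceed what a normal fit to the same data predicts for its own top 2%. All numbers and the mixture shape are assumptions for illustration.

```python
# Illustrative sketch, NOT real GJP data: test whether the top 2% of
# forecasters look like a distinct group or just the tail of one
# continuous skill distribution.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-participant accuracy scores (higher = better),
# e.g. negated Brier scores. Mostly normal skill, plus a small
# heavy-tailed minority injected so the two hypotheses differ.
n = 5000
scores = rng.normal(0.0, 1.0, n)
scores[:100] += rng.pareto(3.0, 100)  # hypothetical heavy-tailed minority

def top_share_mean(x, share=0.02):
    """Mean score of the top `share` fraction of participants."""
    k = max(1, int(len(x) * share))
    return np.sort(x)[-k:].mean()

observed = top_share_mean(scores)

# What a normal distribution with the same mean/std would predict for
# its own top 2%, estimated by Monte Carlo.
mu, sigma = scores.mean(), scores.std()
sims = [top_share_mean(rng.normal(mu, sigma, n)) for _ in range(200)]
expected = float(np.mean(sims))

print(f"observed top-2% mean:      {observed:.2f}")
print(f"expected under normal fit: {expected:.2f}")
# If observed substantially exceeds expected, the right tail is fatter
# than a continuous normal skill distribution would produce -- some
# evidence for "superforecasters" as a distinct group rather than a
# relabeled top 2%.
```

Note that simply showing "the top 2% beat the rest" proves nothing (the top 2% of any distribution beats the rest, which is Scott's point); the comparison has to be against the tail a continuous null distribution would generate.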