(Actually downvoted Daniel for reasons similar to what TurnTrout mentions. Aumannian updating is so boring, even though it's profitable when you're betting all-things-considered... I also did give arguments above, but people mostly made jokes about my punctuation! #grumpy )
I think this is only partly right. I've personally interfaced with most of the people in the top 20 -- some of them for 20+ hours -- and I've generally been deeply impressed with them, and expect the tails of the Metaculus rankings to track important stuff about people's cognition.
But yeah, I also found that I could get to around rank 70 just by predicting the same as the community a bunch, and by finding questions that would resolve in just a few days and extremizing really hard.
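(For anyone unfamiliar with the term: "extremizing" means pushing a probability forecast further toward 0 or 1. A minimal sketch, assuming the common power transform -- the function name and exponent here are illustrative, not anything Metaculus-specific:)

```python
def extremize(p, alpha=2.5):
    """Push a probability p toward 0 or 1.

    Power-transform extremizing: p^alpha / (p^alpha + (1-p)^alpha).
    alpha > 1 sharpens the forecast; alpha = 1 leaves it unchanged.
    """
    return p ** alpha / (p ** alpha + (1 - p) ** alpha)

community = 0.8          # e.g. the community median on a question
print(extremize(community))  # noticeably above 0.8
```

The upshot: if the community is usually directionally right but underconfident, copying it and extremizing hard picks up cheap points.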
(That said, I still think I'm a reasonable forecaster, and have practiced it a lot independently. But I don't think this Metaculus ranking is much evidence of that.)
To clarify: people saying "yeah, this thing is going to be great and we'll work on it a lot" -- in response to the very post where it was announced, only a few days afterward -- isn't a lot of evidence.
Like, I'd happily go to a bitcoin/Tesla party and take lots of bets against bitcoin/Tesla. I expect those places would give me some of the most generous odds for those bets.
Also, habryka predicting it is some evidence, but man, people in general are notoriously bad at predicting how much they'll get done with their time, so it's certainly not super strong.
(That being said, I think this integration is awesome and kudos to everyone. Just keeping my priors sensible :)
Not a crux :)
Also, I'm betting on the prior, so I'll have to accept taking a loss every now and then, expecting to do better, on average, in the long run.
Community once again seems too optimistic; the prior is just very heavily that most possible features never ship.
Your prior should be that "people don't do stuff" -- the community is looking way too optimistic at this time.
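(The "lose now and then, win on average" logic of betting the base rate can be sketched with a toy Brier-score comparison. The base rate and the community number below are made up for illustration:)

```python
import random

random.seed(0)

BASE_RATE = 0.2  # assumed for illustration: most announced features don't ship
N = 10_000

def brier(p, outcome):
    """Brier score: squared error of a probability forecast (lower is better)."""
    return (p - bool(outcome)) ** 2

prior_loss = community_loss = 0.0
for _ in range(N):
    shipped = random.random() < BASE_RATE
    prior_loss += brier(BASE_RATE, shipped)  # always bet the base rate
    community_loss += brier(0.6, shipped)    # hypothetical overoptimistic forecast

# The base-rate bettor loses on the questions that do resolve positively,
# but scores better on average across the whole set.
print(prior_loss / N < community_loss / N)
```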
But why does my mind have such a strong need not to forget things, including things I consciously flag as "boring", that it would block off entire lines of good thoughts until I listen to it?
Huh, maybe it somehow knows that saying the bad thought will unlock good trains-of-thoughts, and that's why it pushes so hard for it...
Feel free to make an Elicit question and add it as a comment :)