I take Perplexity to be about 80% accurate, but this suggests the account above isn't right. He had signed off on multiple bombs, and he didn't stop the second one between the 6th and the 9th, when he could have.
Seems like lots of people found this valuable, so Bayes points, I guess.
To me, this seems like a specific instance of the more general principle that "there's no substitute for being right". These people did a good thing in trying to be right, listing out options and taking a careful path, but in your view they just weren't right.
I don't think this somehow makes listed auctions bad. It just means that you have to be careful to be right.
> Clearly, they did something wrong
I don't think it's clear that there is something wrong at this stage. Their entire job is forecasting things well, and they're quite good at it. So the fact that their forecasts don't match what you think they should be isn't automatically a loss for them.
I said "fine" not good. I think it's been a steady upward trend on everything but animal welfare (and AI but that's currently what we are discussing)
Does the current world look like total human disempowerment to you? Power is currently split among something like 1,000 large companies.
This feels like weak evidence against my point, though I think "timelines" and "overall AI risk" differ in how safe they are to argue about.
I think this is a question we should spend lots of time actually thinking and writing about. I'm not sure my approximations are a good guess at the final result.
It's somewhat worrying how much reading Planecrash really does help in understanding this.
Does LW have spoiler tags?