I guess it seems pretty weird to me that superforecasters would do that much worse than prediction markets without some selection or bias, but I'll mark it down as a reasonable alternative hypothesis. ("Actually superforecasting just generalizes really poorly to this admittedly special domain, and random superforecasters do way worse in it than prediction markets by default.")
It cannot be answered that simply to the Earthlings, because if you answer "Because I don't expect that to actually work or help", some of them, and especially the more evil ones, will pounce in reply: "Aha, so you're not replying, 'I'd never do that because it would be wrong and against the law', what a terrible person you must be!"
Super upvoted.
With that said, why is the optimal amount of woo not zero?
Also, I think non-accommodationist vegans have tended to be among the crazier people, so maybe you want enough vegetables for the accommodationists but also beef from moderately less tortured cows.
I just saw one recently on the EA Forum, to the effect that EAs who shortened their timelines only after ChatGPT's release had the intelligence of a houseplant.
Somebody asked if people got credit for <30 year timelines posted in 2025. I replied that this only demonstrated more intelligence than a potted plant.
If you do not understand how this is drastically different from the thing you said I said, ask an LLM to explain it to you; they're now okay at LSAT-style questions if provided sufficient context.
In reply to your larger question, being very polite about the house burning down wasn't working. Possibly being less polite doesn't work either, of course, but it takes less time. In any case, as several commenters have noted, the main plan is to have people who aren't me do the talking to those sorts of audiences. As several other commenters have noted, there's a plausible benefit to having one person say it straight. As further commenters have noted, I'm tired, so you don't really have an option of continuing to hear from a polite Eliezer; I'd just stop talking instead.
Noted as a possible error on my part.
I looked at "AI 2027" as a title and shook my head about how that was sacrificing credibility come 2027 on the altar of pretending to be a prophet and picking up some short-term gains at the expense of more cooperative actors. I didn't bother pushing back because I didn't expect that to have any effect. I have been yelling at people to shut up about trading their stupid little timelines as if they were astrological signs for as long as that's been a practice (it has now been replaced by trading made-up numbers for p(doom)).
When somebody at least pretending to humility says, "Well, I think this here estimator is the best thing we have for anchoring a median estimate", and I stroll over and proclaim, "Well I think that's invalid", I do think there is a certain justice in them demanding of me, "Well, would you at least like to say then in what direction my expectation seems to you to be predictably mistaken?"
If you can get that, or 2050, equally well by yelling "Biological Anchoring", why not admit that the intuition comes first and that you then hunt around for parameters you like? This doesn't sound like good methodology to me.
Is your take "Use these different parameters and you get AGI in 2028 with the current methods"?
Noted. I think you are overlooking some of the dynamics of the weird dance that a bureaucratic institution does around pretending to be daring while its opinions are in fact insufficiently extreme; eg, why, when OpenPhil ran a "change our views" contest, they predictably awarded all of the money to critiques arguing for longer timelines and lower risk, even though reality lay in the opposite direction from those critiques relative to OpenPhil's own opinions. Just as OpenPhil predictably gave all the money to "we need two Stalins" critiques in that contest, OpenPhil might have managed to communicate to the 'superforecasters', or to their institutions, that the demanded apparent disagreement with OpenPhil's overt forecast was in the "we need two Stalins" direction of longer timelines and lower risk.
Or to rephrase: if I can look at the organizational dynamics and see it as obvious in advance that OpenPhil's "challenge our worldviews" contest would award all the money to people arguing for longer timelines and lower risk (despite reality lying in the opposite direction, even according to those people's own later updates), then maybe the people advertising themselves as producing superforecaster reports can successfully read OpenPhil's mind about what direction of superforecaster disagreement is secretly being demanded.
But, sure, fair enough: I should also update somewhat in favor of the average superforecaster being even worse at AI than OpenPhil, and of them having delivered an honest but terrible report. I guess it's just surprising to me, because I would've expected the key maneuver here to be saying "I dunno" rather than throwing around extreme opinions or numbers, and I would've thought superforecasters better able to do that than OpenPhil... but eh, idk, maybe they just straight up couldn't tell the difference between the usually good rule "nothing ever happens" and "AGI in particular never happens", and also didn't know themselves for overconfident or incompetent at applying the rule.
If so, it would speak correspondingly poorly of those EAs who stood around gesturing at the superforecasters and saying, "Why believe MIRI when you could believe these great certified experts?"