The efficient market hypothesis applied to AI is an important variable for timelines. The idea is: If AGI (or TAI, or whatever) were close, the big corporations would be spending a lot more money trying to get to it first. Half of their budget, for example. Or at least half of their research budget! Since they aren't, either they are all incompetent at recognizing that AGI is close, or AGI isn't close. Since they probably aren't all incompetent, AGI probably isn't close.
I'd love to see some good historical examples of entire industries exhibiting the sort of incompetence at issue here. If none can be found, that's good evidence for this EMH-based argument; if several can, the argument is correspondingly weakened.
--Submissions don't have to be about AI research; any industry that failed to invest in some other up-and-coming technology highly relevant to its bottom line should work.
--Submissions don't need to be about private corporations, either. They could be about militaries around the world, for example.
(As an aside, I'd like to hear discussion of whether the supposed incompetence is actually rational behavior--even if AGI might be close, perhaps it's not rational for big corporations to throw lots of money at mere maybes. Or maybe they think that even if AGI is close, they wouldn't be able to profit from racing towards it--perhaps because they'd be nationalized, or because the tech would be too easy to steal, reverse engineer, or discover independently. Kudos to Asya Bergal for this idea.)