I think this is right at the broad level. But once you've accepted that getting a good outcome from AGI is the most important thing to work on, timelines matter a lot again, because they determine the most effective direction for your work. Figuring out exactly what kind of AGI we have to align, and how long we have to do it, is pretty crucial for having the best possible alignment work done by the time we hit takeover-capable AGI.
I'll completely grant that this is only a first-order approximation, and that for most (all?) technical work, more specific timelines matter a lot. I wanted to make this point because I see many laypeople not quite buying the "AGI in X years" timelines, because X years seems like too short a time (for most values of X, for most laypeople). But the moment I switch the phrasing to "computers are ~40 years old; given this rate of progress, we'll almost certainly have AGI in your lifetime", they become convinced that AI safety is a problem worth worrying about.
In the discussion of AI safety and the existential risk that ASI poses to humanity, I think timelines aren’t the right framing. Or at least, they often distract from the critical point: it doesn’t matter whether ASI arrives in 5 years’ time or in 20, it only matters that it arrives during your lifetime[1]. The risks from ASI are completely independent of whether it arrives during this hype cycle of AI, or whether there’s another AI winter, progress stalls for 10 years, and ASI is only built after that winter has passed. If you are convinced that ASI is a catastrophic global risk to humanity, then the timelines are somewhat inconsequential; the only things that matter are that 1. we have no idea how we could make something smarter than ourselves without it also being an existential threat, and 2. we can start making progress in this field of research today.
So ultimately, I’m uncertain about whether we’re getting AGI in 2 years or 20 or 40. But it seems almost certain that we’ll be able to build ASI within my lifetime[2]. And if that’s the case, nothing else really matters besides making sure that humanity realises the benefits of ASI equitably, without it also killing us all due to our short-sighted greed.