I also think it is intellectually and morally serious for people who are sitting on $20 trillion of capital
This should also be "unserious"; the transcript seems to be wrong here.
Got it! I'm generally more inclined to expect that various half-decent ideas may unlock surprising advances (for no particular good reason), so I'm less skeptical that this may be true.
Also, while math is of course easy to verify, if they haven't significantly used verification in the training process, that makes their claims more reasonable.
Sure, math is not an example of a hard-to-verify task, but I think you're getting unnecessarily hung up on these things. It does sound like it may be a new and, in a narrow sense, unexpected technical development, though it's unclear how significant it is. I wouldn't try to read much more into their communications.
I would not characterize Tao's usual takes on AI as particularly good (unless you compare with a relatively low baseline).
He's been overall pretty conservative and has mostly stuck to reasonable claims about current AI, so there's not much to criticize in particular. But this has come at the cost of him not appreciating the possible/likely trajectories of where things are going, which I think misses the forest for the trees.
In my experience with math, to be obviously excellent you need to be more like top 10% of all grad students, possibly even higher, though this might vary a lot by field.
I'd agree that this is to some extent playing the respectability game, but personally I'd be very happy for Eliezer and others to risk doing this too much rather than too little for once.
This is definitely baked in for many people (e.g. me, but also see the discussion here).
The most concerning mundane risks that come to mind are unemployment, concentration of power, and adversarial forms of RL (I'm missing a better phrase here; basically what TikTok, Meta, and the recent 4o model were already doing). The problems in education are partially downstream of that (what's the point if it's not going to help prepare you for work?) and otherwise honestly don't seem too serious in absolute terms? Granted, the system may completely fail to adapt, but that seems to be more an issue with the system already being broken than about AI in particular.
"Approaching human level" seems reasonable. I think one should just read this as her updating towards short timelines in general based on what experts say rather than her trying to make a prediction.
This is not an obvious solution, since (as you're probably aware) you run into the threat of human disempowerment given sufficiently strong models. You may disagree that this is an issue, but it would at least need to be argued.