How has your p(doom) changed over this period?
What do you think have been the most important applications of UDT or other decision theories to alignment?
2x uplift is already happening at the most advanced AI lab
This seems plausible to me, but it would be good to see a new METR uplift study to gain more confidence in it.
Could you give an example of an article where this was effective?
The existential risks that everyone will die or that the future will belong to the AIs are obvious.
I'm not sure this is obvious to most people, particularly outside of LW.
The one bench we definitely don't want to be bench-maxxed.
arguing that if you don't know the hazard rate, but instead have uncertainty about it that you update over time, hyperbolic-looking discounting can fall out from there
This seems relevant to X-risk discussions.
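The claim above has a clean toy version: if you discount exponentially at a hazard rate λ, but λ itself is uncertain with (say) an exponential prior of mean μ, then averaging over the prior gives E[e^(−λt)] = 1/(1 + μt), which is exactly hyperbolic discounting. This is a Sozou-style argument; the sketch below is my own illustration, not from the original comment, and the prior choice and function names are assumptions.

```python
import random
import math

def expected_survival(t, mean_hazard=0.1, n_samples=200_000, seed=0):
    """Monte Carlo estimate of E[exp(-lambda * t)] when the hazard
    rate lambda is unknown, drawn from an exponential prior with
    mean `mean_hazard`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        lam = rng.expovariate(1.0 / mean_hazard)  # sample a hazard rate from the prior
        total += math.exp(-lam * t)               # exponential discounting given that rate
    return total / n_samples

def hyperbolic(t, mean_hazard=0.1):
    """Closed form of the same expectation: 1 / (1 + mu * t)."""
    return 1.0 / (1.0 + mean_hazard * t)

for t in [1, 5, 10, 50]:
    print(f"t={t:>3}: Monte Carlo {expected_survival(t):.4f}"
          f"  vs  hyperbolic {hyperbolic(t):.4f}")
```

The point is that no agent here is "irrationally" hyperbolic: each fixed hazard rate implies ordinary exponential discounting, and the hyperbolic shape emerges purely from marginalizing over uncertainty about that rate.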
I'm not sure he has the power to make such commitments, since DeepMind is no longer an independent lab. I think it would depend on how others at Google (e.g. Sundar Pichai) or shareholders felt about it.
According to Semianalysis:
The company [OpenAI] has used the same base model, GPT-4o, for all their recent flagship models: o1, o3, and GPT-5 series.
And people still wonder how the AIs could possibly take over!