This is a link post to a living document; what's below may be an older version. Click the link for the latest version.
2026-03-01
My personal apology to Eliezer Yudkowsky for not working on ASI risk in 2022 itself
Disclaimer
Quick Note
Speaking aloud. Contains personal emotions.
Main
I spent a few months in 2022 trying to understand ASI risk, but decided not to work on it at the time (for multiple reasons).
Now that it is 2025 and I am actually working full-time on preventing ASI risk, I am realising how bad the situation actually is. I seem to have underestimated both how fast the exponential would be and how difficult world coordination would be, compared to what I believed back then.
It is entirely possible I will come to look back on my decision not to work on ASI risk in 2022 as the biggest regret of my entire life. Those three years really have been that costly.
Hmm.
I feel AI risk can become a mainstream topic of discussion in as few as 4 years, especially if we see 4 more years of progress at the current rate (DALL-E, PaLM, etc.).
I'm not totally sure how to convince someone else of this intuition, except to create a dataset of "accurate + important ideas" from the past and see how long it took for them to go mainstream. I don't think the public's track record of caring about important ideas is really that bad.
I predicted this in 2022-06; it is now 2026-03. Most of the public is still not aware of ASI risk, although a lot more people are aware than in 2022. At least a lot of people in San Francisco are aware, and Hacker News is broadly aware. Unfortunately, neither I nor most other people worked on trying to reach the public in the last 4 years.