Was a philosophy PhD student, left to work at AI Impacts, then Center on Long-Term Risk, then OpenAI. Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI. Now executive director of the AI Futures Project. I subscribe to Crocker's Rules and am especially interested to hear unsolicited constructive criticism. http://sl4.org/crocker.html
Some of my favorite memes:
[image: meme by Rob Wiblin]
[image: xkcd comic]
My EA Journey, depicted on the whiteboard at CLR:
[image: whiteboard photo, h/t Scott Alexander]
Insofar as zero is significantly smaller than epsilon, yes.
lol oh yeah.
We did a survey to choose the name, so I have data on this! Apparently my top choice was "What 2027 Looks Like," my second was "Crunchtime 2027" and my third choice was "What Superintelligence Looks Like." With the benefit of hindsight I think only my third choice would have actually been better.
Note that we did the survey after having already talked about it a bunch; IIRC my original top choice was "What 2027 looks like" with "What superintelligence looks like" runner-up, but I had been convinced by discussions that 27 should be in the title and that the title should be short. I ended up using "What superintelligence looks like" as a sort of unofficial subtitle, see here: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1
"What 2027 looks like" was such an appealing title to me because this whole project was inspired by the success of "What 2026 looks like," a blog post I wrote in 2021 that held up pretty well and which I published without bringing the story to its conclusion. I saw this project as (sort of) fulfilling a promise I made back then to finish the story.
I should do a blog post retrospective, now that it's almost 2026, going over the whole post & comparing to reality!
...that's a lot of work, though; if anyone else wants to do it, I'd be very grateful! As a bonus, they might be less biased than me.
I'd be happy to link to it here in the OP.
Thanks Boaz, that's encouraging to hear.
(To be clear, I'm not claiming that we ran the title "AI 2027" by you. I don't think we had chosen a title yet at the time we talked; we just called it "our scenario." My claim is that we were genuinely interested in your feedback & if you had intervened prior to launch to tell us to change the title, we probably would have. We weren't dead-set on the title anyway; it wasn't even my top choice.)
I don't think the intuition came first? I think it was playing around with the model that caused my intuitions to shift, not the other way around. Hard to attribute exactly ofc.
Anyhow, I certainly don't deny that there's a big general tendency for people to fit models to their intuitions; I think you are falsely implying that I do deny that. I don't know if I've stated it loudly in public before, but I've definitely been aware of it for years, and in fact I embrace it: the model is a helpful tool for articulating, refining, and yes, sometimes changing my intuitions, but the intuitions still play a central role. I'll try to state that more loudly in future releases.
At the time, IIRC, I went through Ajeya's spreadsheet, thought about each parameter, twiddled them to be more correct-according-to-me, and ended up with a median of something like 2030.
Huh. I'm fairly confident that we would have chosen a different title if you complained about it to us. We even interviewed you early in the process to get advice on the project, remember? For a while I was arguing for "What Superintelligence Looks Like" as my preferred title, and this would have given me more ammo.
A major plausible class of worlds in which we don't get superhuman coders by the end of 2026 is worlds where the METR trend continues at roughly the same slope it had in 2025, or only slightly steeper. Right?
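To make that class of worlds concrete, here is a minimal back-of-the-envelope sketch that extrapolates a METR-style 50%-success time horizon under an assumed doubling time. The reference date, reference horizon, and doubling time are illustrative assumptions, not METR's published figures or the author's model.

```python
# Illustrative extrapolation (assumed numbers, not METR's published figures):
# if the 50%-success time horizon keeps doubling at a 2025-like rate,
# where does it land by the end of 2026?

from datetime import date

start = date(2025, 12, 1)      # assumed reference date
horizon_hours = 4.0            # assumed 50%-horizon at the reference date (hours)
doubling_months = 5.0          # assumed doubling time if the 2025 slope continues
target = date(2026, 12, 31)

months_elapsed = (target - start).days / 30.44
extrapolated = horizon_hours * 2 ** (months_elapsed / doubling_months)

print(f"Extrapolated 50%-horizon at end of 2026: ~{extrapolated:.0f} hours")
# With these placeholder numbers the horizon grows roughly sixfold in about a
# year, still far short of the weeks-to-months of autonomous work a superhuman
# coder milestone would imply, which is the class of worlds described above.
```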