Researcher at MIRI
The Arbital link (Yudkowsky, E. – "AGI Take-off Speeds" (Arbital 2016)) in there is dead; I briefly looked at the LW wiki to try to find the page but didn't see it. @Ruby?
I first saw it in this Aug 10 WSJ article: https://archive.ph/84l4H
I think it might have been less public knowledge for like a year
Carl Shulman is working for Leopold Aschenbrenner's "Situational Awareness" hedge fund as the Director of Research. https://whalewisdom.com/filer/situational-awareness-lp
For people who like Yudkowsky's fiction, I recommend reading his story Kindness to Kin. I think it's my favorite of his stories. It's both genuinely moving and an interesting thought experiment about evolutionary selection pressures and kindness. See also this related tweet thread.
6-pair pack of good and super-affordable socks, $4 off (I personally endorse these; see here for my previous enthusiasm for bulk sock-buying in general and these socks in particular)
I purchased these socks and approve
Related: https://sideways-view.com/2018/06/07/messages-to-the-future/
Maybe it’s hard to communicate nuance, but it seems like there's a crazy thing going on where many people in the AI x-risk community think something like: “Well, obviously I wish it would stop, and the current situation does seem crazy and unacceptable by any normal standards of risk management. But there’s a lot of nuance in what I actually think we should do, and I don’t want to advocate for a harmful stop.”
And these people end up communicating to outsiders something like: “Stopping is a naive strategy; continuing (maybe with some safeguards, etc.) is my preferred strategy for now.”
This misses the really important part: they would actually want to stop if we could, but it seems hard and nuanced to get right.
Is there a side-effect of unwanted hair growth?
They're in the original blog post: https://sean-peters-au.github.io/2025/07/02/ai-task-length-horizons-in-offensive-cybersecurity.html
But it would be good to update this LW post
I think it’s useful to think about the causal structure here (toy sketch below).
Is it:
Intervention -> Obvious bad effect -> Good effect
For example: Terrible economic policies -> Economy crashes -> AI capability progress slows
Or is it:
Obvious bad effect <- Intervention -> Good effect
For example: Patient survivably poisoned <- Chemotherapy -> Cancer gets poisoned to death
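To make the distinction concrete, here's a minimal toy sketch in Python (my own hypothetical illustration, not from the comment above; the function and variable names are invented). The point it shows: in the chain structure you can't mitigate the obvious bad effect without also losing the good effect, whereas in the fork structure the two come apart.

```python
# Toy contrast of the two causal structures above. Hypothetical illustration:
# names and booleans are made up, not taken from the parent comment.

def chain_case(mitigate_bad_effect: bool) -> dict:
    """Chain: Intervention -> Obvious bad effect -> Good effect.
    The good effect is downstream of the bad effect, so blocking the bad
    effect also blocks the good effect."""
    intervention = True                                    # e.g. terrible economic policies
    bad_effect = intervention and not mitigate_bad_effect  # economy crashes (unless mitigated)
    good_effect = bad_effect                               # capability progress slows only via the crash
    return {"bad_effect": bad_effect, "good_effect": good_effect}


def fork_case(mitigate_bad_effect: bool) -> dict:
    """Fork: Obvious bad effect <- Intervention -> Good effect.
    Both effects hang off the intervention separately, so mitigating the
    bad effect leaves the good effect intact."""
    intervention = True                                    # e.g. chemotherapy
    bad_effect = intervention and not mitigate_bad_effect  # patient poisoned (unless mitigated)
    good_effect = intervention                             # cancer gets poisoned either way
    return {"bad_effect": bad_effect, "good_effect": good_effect}


if __name__ == "__main__":
    # Mitigating the bad effect destroys the benefit in the chain case...
    print("chain, bad effect mitigated:", chain_case(mitigate_bad_effect=True))
    # ...but preserves it in the fork case.
    print("fork, bad effect mitigated:", fork_case(mitigate_bad_effect=True))
```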