The market seems to underestimate the extent to which Micron (MU) is an AI stock. My only options holdings for now are December 2026 MU calls.
I had a vaguely favorable reaction to this post when it was first posted.
When I wrote my recent post on corrigibility, I grew increasingly concerned about the possible conflicts between goals learned during pretraining and goals that are introduced later. That caused me to remember this post, and decide it felt more important now than it did before.
I'll estimate a 1 in 5000 chance that the general ideas in this post turn out to be necessary for humans to flourish.
"OOMs faster"? Where do you get that idea?
Dreams indicate a need for more processing than happens while we're awake, but likely less than 2x waking time.
I was just thinking about writing a post that overlaps with this, inspired by a recent Drexler post. I'll turn it into a comment.
Leopold Aschenbrenner's framing of a drop-in remote worker anthropomorphizes AI in a way that risks causing AI labs to make AIs more agenty than is optimal.
Anthropomorphizing AI is often productive. I use that framing a fair amount to convince myself to treat AIs as more capable than I'd expect if I thought of them as mere tools. I collaborate better when I think of the AI as a semi-equal entity.
But it feels important to be able to switch back and forth between the tool framing and the worker framing. Both framings have advantages and disadvantages. The ideal framing likely lies somewhere in between, but that middle ground seems harder to articulate.
I see some risk of AI labs turning AIs into agents when, if they were less focused on replacing humans, they might lean more toward Drexler's (safer) services model.
Please, AI labs, don't anthropomorphize AIs without carefully considering when that's an appropriate framing.
I want to register different probabilities:
My guess is that ASI will be faster to adapt to novel weapons and military strategies. Nanotech is likely to speed up the rate at which new weapons are designed and fabricated.
Imagine a world in which a rogue AI can replicate a billion drones, of a somewhat novel design, in a week or so. Existing human institutions aren't likely to adapt fast enough to react competently to that.
I just published a post on Drexler's MSEP software that is relevant to whether people should donate to his project.
Two more organizations that seem worthy of consideration:
Investing in Eon Systems looks much more promising than donating to Carbon Copies.
I see maybe a 3% chance that they'll succeed at WBE soon enough to provide help with AI x-risk.
The Invention of Lying provides a mostly accurate portrayal of a world where everyone is honest. It feels fairly Hansonian.
The book is much better than I expected, and deserves more attention. See my full review on my blog.