This is a linkpost for https://simonlermen.substack.com/p/will-we-get-alignment-by-default
This is nice. It would be even nicer if I believed this debate would be resolved before we built dangerous AI.
This is cool. I really enjoy listening to debates like this.
After listening, I mostly still disagree with Adrià.
To be clear, I'm not confident Adrià is wrong; it's a worldview I'd put >20% on, and >40% on a slightly weaker form of it. The core questions that prevent me from putting higher probabilities on it are:
Adrià recently published “Alignment will happen by default; what’s next?” on LessWrong, arguing that AI alignment is turning out easier than expected. Simon left a lengthy comment pushing back, and that sparked this spontaneous debate.
Adrià argues that current models like Claude 3 Opus are genuinely good "to their core," and that an iterative process, in which each AI generation helps align the next, could carry us safely to superintelligence. Simon counters that we may only get one shot at alignment and that current methods are too weak to scale. The result is a conversation about where AI safety actually stands.
Watch the full debate here.