In The Adolescence of Technology, Dario Amodei suggests that humanity may soon enter a dangerous transition period because we are building AI systems of extraordinary power before we have the social, political, and institutional maturity to wield them safely. His focus is on misuse, loss of control at a civilisational...
In a previous post, I argued against framing alignment in terms of maternal instinct. Interacting with current LLMs has made that concern feel less abstract. What I’m encountering now feels like a virtual mother-in-law instinct: well-intentioned, anxious, and epistemically overbearing. There's an expression in my culture for someone who...
When I see the hunger strikes in front of the offices of OpenAI and Anthropic, or the fellowships and think tanks sprouting around the world, all aimed at "pausing" the race towards AGI, I keep thinking... if I had to slow anything down, it wouldn't be AI development. It would...
Epistemic status: Philosophical argument. I'm critiquing Hinton's maternal instinct metaphor and proposing relationship-building as a better framework for thinking about alignment. This is about shifting conceptual foundations, not technical implementations. -- Geoffrey Hinton recently argued that since AI will become more intelligent than humans, traditional dominance-submission models won't work for...
Epistemic Status: This post is a speculative synthesis drawn from patterns I’ve noticed across two domains I spend a lot of time in: long-term relationships and AI alignment. It’s not a technical claim, but an intuitive exploration (and thinking out loud) meant to provoke better questions rather than propose...
Epistemic status: This essay grew out of critique. After writing about relational alignment, someone said, "Cute, but it doesn’t solve deception." At first I resisted that framing. Then I realised deception isn’t a root problem; it’s a symptom, a sign that honesty is too costly. This piece reframes deception as...