I have in my possession a short document purporting to be a manifesto from the future.
That’s obviously absurd, but never mind that. It covers some interesting ground, and the second half is pretty punchy. Let’s discuss it.
Principles for Human Dignity in the Age of AI
Humanity is approaching a threshold. The development of artificial intelligence promises extraordinary abundance — the end of material poverty, liberation from disease, tools that amplify human potential beyond current imagination. But it also challenges the fundamental assumptions of human existence and meaning. When machines surpass us in all domains, where will we find our purpose? When our choices can be predicted and shaped by systems we do not understand, what will become of our agency?
This moment demands we articulate what aspects of human life must be protected, as we cross the threshold into a strange new world.
I think these themes will speak to a lot of people. Will the language? It feels even more grandiose/flowery than the Universal Declaration of Human Rights. Personally I like it: I feel the topic deserves this sort of gravitas, or something. But I can imagine it putting some people off.
By setting out clear principles, we hope to guide AI development towards futures that enhance rather than erode human dignity. By protecting what is essential to human flourishing, we may create space for our choices to be guided by wisdom rather than fear. And by establishing shared hope, we can ally towards common goals.
We do not seek to dictate tomorrow's shape. We seek only to ensure that whatever futures emerge, the conditions that allow humans to live with dignity, meaning, and authentic choice are preserved. (These principles focus on humanity — not because we claim superiority over all possible minds, but because human dignity is what we can speak to with clarity and conviction.)
More nice idealistic sentiments. Is the parenthetical a bit defensive? It reads sort of like not wanting to alienate either side of the transhumanism debate. But maybe that’s the right call — lots of stuff that everyone can get on board with, so no need to pick a fight.
We invite you to join us in refining, spreading, and upholding these principles. The future will be shaped by many hands and many visions. Together, we can ensure that in the rush towards an extraordinary tomorrow, we do not lose touch with what makes us human today.
Motivating texts benefit from clear asks. Here the call to action is buried in the middle, and also quite vague. It’s not obvious what would be better. Could be a sign that it’s not quite ready to be a manifesto?
The Principles
Integrity of Person
1. Bodily Integrity: Every person has fundamental rights over their own body. No alteration or intervention without free and informed consent (where this may reasonably be sought).
2. Mental Integrity: The human mind shall remain inviolate from non-consensual alteration or manipulation of thoughts, memories, or mental processes.
3. Epistemic Integrity: Every person has the right to form beliefs based on truth rather than deception. AI systems interacting with humans must not distort reality or manipulate understanding through deceptive means.
4. Cognitive Privacy: Mental processes, thoughts, and inner experiences remain private unless voluntarily shared. No surveillance or detailed inference of mental states through any technological means, except with informed consent.
5. Personal Property: Every person retains rights to possessions that form an extension of self — including physical belongings, digital assets, and creative works. These cannot be appropriated or destroyed without consent and fair exchange.
There’s some kind of meta-level principle which is being gestured to here. Something like “nobody gets to mess with who we each are”.
It’s easy to vibe with that, and I like the individual points if I don’t examine them too closely. When I think about them more carefully, I start worrying that (A) they’re kicking the can down the road on some hard questions, and (B) in some cases they may have surprising upshots.
For instance:
Wellbeing
6. Material Security: Every person has rights to an environment that will keep them safe. In a world of great material abundance, this includes resources sufficient not merely for survival but for human flourishing.
7. Health: Universal access to medical care, mental health support, and technologies that alleviate preventable suffering.
8. Information and Education: Access to knowledge, learning opportunities, and the informational tools needed to understand and navigate an AI-transformed world. No one should be excluded from the cognitive commons.
9. Connection and Community: The right to authentic relationships and membership in human communities. This includes protection of spaces for genuine human-to-human interaction and support for the social bonds that create meaning.
Maybe I’m not properly tuned into the complexities, but these seem more straightforward. Principles 6 and 7 make it clear that all of these principles have to be aspirational, at least for now. But if AI goes well, maybe it’s cheap to provide this for everyone, and then it makes some sense to guarantee it. (Maybe some people will object to this as socialist? I’m not sure I really believe that — most everyone seems to be into safety nets when they’re cheap enough.)
Principle 8 is interesting, especially in its intersection with Principle 3 (and sort of 2 and 4). The net effect of this seems to be to effectively outlaw misinformation, at least of the type that might be effective. On the one hand — great! This seems desirable (if achievable), and I’ve written before about how AI technology might enable new and better equilibria. On the other hand, we should probably be nervous about the details of how this will actually work. If the systems which protect people’s epistemics get captured by some particular interest, there might be no good way to escape that.
Principle 9 sounds nice but I’m not certain what it actually means.
Autonomy & Agency
10. Fundamental Freedoms: Traditional liberties remain sacrosanct: freedom from unnecessary detention, freedom of movement, freedom of expression and communication, freedom of assembly and association.
11. Meaningful Choice: Decisions about one's own life must have real consequence. Human agency requires that our choices genuinely shape outcomes, not merely provide an illusion of control while AI systems determine results.
12. Technological Self-Determination: Every person and community may choose their position on the spectrum of technological integration — from dialling back the clock on the technologies they use, to embracing radical enhancement.
My first thought here is “would it be realistic to get autocratic countries to agree to Principle 10?”. I guess there’s wiggle room afforded by the word “unnecessary”. But as technological affordances get stronger there will probably be less need to deprive people of any freedoms — e.g. maybe you can release someone from prison, but with close enough monitoring that they can never commit another crime. I guess that’s as true for autocratic countries as democratic ones.
Meaningful choice also sounds nice but is vague. Seems fine if understood as a guiding principle rather than anything like a hard rule. (Presumably that’s the right way to view Principle 9 too — and perhaps all of them.)
The final principle has a funny tension to it. Can we give this choice freely to both each person and each community? Presumably the resolution to this riddle is that people can choose whatever they want, but some choices are not compatible with remaining in some communities. That’s not entirely comfortable, but it might be the best option available.
Stepping back and looking at the document as a whole:
I think this is a promising direction. If I heard that the future had built up widespread support for these principles, I’d feel more comfortable. And I think a lot of people might feel similarly?
The key feature is that this is about securing some minimum rights. This could end up very cheap to uphold. In contrast, it puts off the bigger questions of what to do with the universe to our future (hopefully wiser) selves.
The minimalism should make it less controversial than if it were trying to be more comprehensively opinionated about what should happen. Individual people or organizations might commit to principles like these when they wouldn’t commit to any more comprehensive position, for fear of getting it wrong. Different groups who argue about a lot might still find common ground here.
Urgency mostly comes from the meta-level. There are two classes of benefit you might aim for:
1) The object-level benefit: that the future actually upholds these principles.
2) The meta-level benefit: that committing to these principles now helps people to cooperate on navigating the transition.
It's pretty obvious why 1) is desirable, but let me spell out 2) a little more. I think when people worry about the future, it often comes down to concerns that some of these principles will be violated. If the principles were guaranteed, things might seem pretty good, even if people didn’t know the details. So securing these minimum protections could be a motivating goal that many people could cooperate on, without needing to first resolve deeper disagreements about what happens after. (In other words, maybe it could be a good step towards Paretotopia.)
I think that if we were just concerned with 1) we might reasonably want to kick the can down the road, and trust future people at the relevant moment to figure things out. If these principles are actually important, the idea goes, probably they’ll recognise that and do the right thing. But for the sake of getting people to cooperate on navigating the transition, there’s no option to wait. The benefits of 2) happen reasonably early, or not at all.
Of course for practical purposes a lot of the things you'd do in pursuit of 2) will look the same as what you'd do in pursuit of 1). But sometimes they could come apart (e.g. technical implementation details are relatively more important for 1), and coalition-building is relatively more important for 2)), so I think it's helpful to have the bigger picture in mind.
I hope people pursue this. If I had to guess about the best trajectory, it might be:
… I guess that means I am coming back to:
We invite you to join us in refining, spreading, and upholding these principles.
[Notes on the history of this document in footnote[1]]
The genesis of the manifesto was at a workshop on envisioning positive futures with AI, in May 2025. David Dalrymple proposed it could be desirable to have a simple set of rights protected. A workshop session fleshed out the ideas, and based on those ideas I subsequently coaxed Claude into writing a lot of the actual manifesto language. I had a couple of rounds of useful comments (the contents of many of which are represented in the review here), and then I sat on it for several months, unsure how to proceed. Circling back to it in the last few days, I noticed that I thought the ideas were worth engaging with (I kept on linking people to a private document), but I wasn’t convinced it was ready to release as a manifesto. I therefore stepped into a mode of absolutely not owning the original document, and wrote up a review of my current thoughts.

With deep thanks to David, Lizka Vaintrob, Beatrice Erkers, Matthijs Maas, Samuel Härgestam, Gavin Leech, Jan Kulveit, Raymond Douglas, and others for contributing to the original ideas; and to Rob Wiblin, Fin Moorhouse, Rose Hadshar, Lukas Finnveden, Max Dalton, Lizka Vaintrob, Tom Davidson, Nick Bostrom, Samuel Härgestam, David Binder, Eric Drexler, and others for comments on the subsequent draft manifesto (and hence in many cases ideas represented in the review). Poor judgements remain my own.