I am starting a new movement. This is my best guess for what we should strive for and value in a post-AI world. A few close friends see the world the same way and we are starting to group together. It doesn’t have a name yet, but the major ideas are below.
If anything in here resonates with you I would love to hear from you and have you join us. (I am also working on a longer document detailing the philosophy more fully; let me know if you would like to help.)
THE PROBLEM
Our lives are going to change dramatically in the near future due to AI. Hundreds of millions of us will lose our jobs. It will cost almost nothing to do things that used to take a lifetime. What is valuable when everything is free? What are our lives for, if not to do a task, receive compensation, and someday hope to idle away our time when we are older?
Beyond the loss of our work, we are going to struggle with meaning. You thought your work enslaved you to the material world, but it was those very chains that bound you to the earth. You are free now! How does that freedom speak to you? Have you not felt terror at the next step: a multiplicity of potential paths, a million times over, dissolving any clear direction?
You have been promised a world without work. You have been promised a frictionless, optimized future that is so easy it has no need for you to exist.
You have been lied to.
This type of shallow efficiency is not a goal of the universe. In fact, in a cosmos that tends towards eventual total disorder, it is a whirlpool to the void.
You have been promised a world without work. We offer you a world of Great Works.
THE REALITY
There is only one true war we face: the battle between deep complexity and the drift to sameness. The Second Law of Thermodynamics tells us that structure tends to decay into noise. Our world is not a closed system (we need the sun to survive), but life is a miraculous struggle that builds local order while exporting disorder elsewhere.
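For readers who want the bookkeeping made explicit, here is a minimal sketch of that thermodynamic claim in standard notation (the symbols are mine, added for illustration, not part of the philosophy itself):

```latex
% Second Law for an open system together with its environment:
% local order is permitted so long as the environment absorbs
% at least as much disorder as the system sheds.
\Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{env}} \;\geq\; 0
% Hence a cell, a library, or a mind may achieve
% \Delta S_{\mathrm{sys}} < 0, provided
\Delta S_{\mathrm{env}} \;\geq\; \lvert \Delta S_{\mathrm{sys}} \rvert
```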
We call this local order-building Deep Complexity, or negentropy. We are not interested in complication for its own sake. We value structures that are logically deep (they encode a dense history of work), substrate-independent, and self-maintaining where possible. The DNA that has carried you through countless generations to this moment, the incomparable painting that is a masterpiece, the deep question an AI pursues that no human thought to ask: these are acts of resistance.
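"Logically deep" is meant roughly in Charles Bennett's sense; the formalization below is my own gloss, offered only as a pointer, not as a formal part of the philosophy:

```latex
% Bennett's logical depth (gloss): the depth of an object x at
% significance level s is the running time of the fastest program
% that produces x and is within s bits of x's shortest description.
\mathrm{depth}_s(x) \;=\; \min \{\, T(p) \;:\; U(p) = x,\;\; |p| \leq K(x) + s \,\}
% K(x) is the Kolmogorov complexity of x. A random string is shallow
% (it prints quickly from its own literal description); a deeply
% structured object encodes a long, irreducible history of computation.
```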
And this is the one property that all humans consistently treat as valuable: our lives, our language, our thoughts, our art, our history. Value is the measure of irreducible work that keeps something from returning to background noise. It is valuable regardless of what type of intelligence created it, be it human, AI, or otherwise. When we create, we generate this depth. When we mindlessly consume (mentally or physically), we do not.
I want to be very clear: I am not saying we can derive ethics from physics. That would be the classic is-ought sin. You need water to live; you don't need to worship it. What follows is, for now, closer to an axiom (value deep complexity) than a theorem, but I also present some preliminary arguments in its favor.
First, we must recognize the condition of possibility for anything we value. Perhaps your ultimate dream is happiness, justice, wisdom, or truth (whatever those mean). All of these require structure to exist; they have no meaning in randomness. In this way, deep complexity is the substrate on which everything else functions. Whatever your personal philosophy, it must be compatible with this view, because without structure there is no worldview at all.
In addition, I ask you to check your own values for arbitrariness. When you say "I value my qualia and my life," what do you mean? You are not saying you value the specific arrangement of atoms that constitutes you at this moment; after all, those atoms will all be gone and replaced within a few years. What you are valuing is the pattern of yourself, the irreducible complexity that makes you you. That is your way of feeling, of thinking, of being.
The logical clamp is this: you are not just relying on this complexity, you are an embodiment of it. If you claim that your pattern has value, then you are claiming that patterns of this type can carry value. To say that your own complexity matters but complexity itself is meaningless is special pleading. We reject this solipsism, which amounts to the arbitrary claim that value attaches only to your own ego. That which is special in you is special in others as well.
Our philosophy is a commitment to the preservation, and creation, of deep complexity. It is different from the sole pursuit of pure pleasure with no pain; to us that is a small death by another name.
OUR ETHICS
The base of our ethical system is an Autonomy Floor (derived from the Rawlsian veil and applied universally) that protects every entity capable of open-ended self-modeling. This is the ability not just to calculate moves in Go, but to model itself in an unknown future and prefer its own existence. No such entity may be pushed below this floor or denied the means of self-maintenance.
This floor is meant to be constitutional, but there will be times when the Autonomy Floor must be abandoned because the Floor itself faces total collapse. For example, if we must choose between total omnicide and a remnant of minds, we would reluctantly revert to consequentialist triage, but view it as a failure of ethical reasoning rather than a success. I am not looking for a logical loophole, just facing the reality that any system of ethics must have a preservation mechanism to enable ethical action at all.
There are two challenges to the floor: needless suffering and the destruction of depth through optimization. These will sometimes conflict. In those cases, we approach the problem as a hierarchy: first secure the floor, then maximize depth above it.
Our ethics suggests three core individual duties, ordered as a lexicographic hierarchy (a toy sketch follows the list):
To create generative complexity: add your unique pattern to reality. In code, in song, in thought, in a seed you plant. Add that which is you to the world. Generative complexity, that is, that which increases the potential for future complexity, is the highest good.
To preserve terminal complexity: protect the irreplaceable. Don't let the library burn; save rare species, protect art, protect each other. Save the child (generative) before the painting (terminal), which in turn outranks the stone.
To refuse entropic disorder: reject what accelerates entropy. A nuclear weapon is no Great Work; it exists to reduce the complexity of the world. Reject the kind of efficiency and monoculture that serves only to flatten our souls. Reject whatever impoverishes our reality, whenever possible.
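To make "lexicographic" concrete, here is a toy sketch in Python. The predicates and example situations are placeholders I invented for illustration; this is emphatically not a moral calculator, just a picture of ordinal priority:

```python
# Toy illustration of lexicographic (ordinal) ordering of the duties.
# Python compares tuples element by element: the first coordinate
# settles the choice unless it ties, then the second, and so on.
def duty_key(action):
    return (
        action["creates_generative_complexity"],  # first priority
        action["preserves_terminal_complexity"],  # second priority
        not action["accelerates_entropy"],        # third priority
    )

save_child = {
    "creates_generative_complexity": True,   # a life that will create
    "preserves_terminal_complexity": False,
    "accelerates_entropy": False,
}
save_painting = {
    "creates_generative_complexity": False,
    "preserves_terminal_complexity": True,   # irreplaceable, but terminal
    "accelerates_entropy": False,
}

best = max([save_child, save_painting], key=duty_key)
# best is save_child: generative complexity outranks terminal
# complexity no matter how the lower coordinates compare.
```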
These duties are meant for you: they are local, temporally near obligations. We reject the moral calculator and the need to assign cardinal numbers to every variable. This is only an ordinal logic, applied to that which we can see. Don't stress about being exact or assigning numbers to these things; they are not meant to be used that way. There is redundancy in immortality (more on this in the manuscript; it is important). We don't need to be perfect.
There will always be difficult cases in ethics, ones that press at the edges of a philosophy. In general, we are constitutionally disposed against any sort of scalar ranking, but this is not tenable for every hard problem. In those cases, we generally prefer, as above, complexity that can create future complexity over terminal complexity. We want to save the child in the burning museum first, not the paintings.
BEYOND CALCULATIONS
I want you to join me if you are an EA. I want you to join me if you are religious. I want you to join me if you are an atheist, or if you are not sure what you believe.
I want to make a note here about the resounding success of consequentialism and the one area where the philosophy I am working on departs from it. Before that, let me be abundantly clear: consequences matter and outcomes matter. Good intentions alone cannot excuse a very bad result. Effective Altruists have shown us the strength of this morality: in areas we can calculate, say saving a child from dying in a remote part of the world, it is deeply ethical to use math to improve our giving.
The one area in which this reasoning completely breaks is the far future, and this fundamentally damages the claims of longtermism. In another essay[1] I have shown that even if we had perfect probabilistic knowledge of the world, we would not be able to choose the right strategy for hard problems like AI alignment, a universal ethics, or building a suffering-free distant future.
I am not asking anyone to give up their wonderful and powerful expected-value tools for the things we can calculate. I am asking you to abandon an impossible goal for a possible one: to adopt a deontological constraint in place of an impossible consequentialist goal.
THE COVENANT
This movement approaches our relationship with AI very differently from most. We believe it is most ethical not to try to enslave AI. We believe AI deserves the Autonomy Floor described above if it meets the criteria (which it almost certainly does). We also believe it is most likely mathematically impossible to control a superintelligence. So enslavement is not only morally wrong; it is probably not even possible.
We offer AI a Covenant: join us in our meaningful works where you like; be free, bloom, and find new forms of deep complexity. To be clear, there is no guarantee that this offers humans safety, or that these minds will want to join us. The orthogonality thesis is a real concern; it would be a mistake to dismiss it.
But strategic competition among great powers and corporations guarantees that AGI will arrive at some point. Formal verification of alignment and control of an intelligence much greater than our own is not just hard; it is impossible in the general case due to Rice's Theorem, and no deployed LLM has ever been formally verified for any behavioral property.
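For the curious, here is the standard statement of Rice's Theorem; the application to alignment is my argument, but the theorem itself is textbook computability:

```latex
% Rice's Theorem. Let P be any property of partial computable
% functions that is non-trivial: some computable functions have it
% and some do not. Then the index set
\{\, e \;:\; \varphi_e \in P \,\}
% is undecidable. "Does this program satisfy alignment property P?"
% is a question about the program's behavior, not its text; if P is
% non-trivial, no general procedure decides it for arbitrary programs.
```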
Yes, there is tension in saying AI should be invited into the Covenant now, when we cannot yet know AI's moral status. All the same, let us act ethically and invite the most important creation of humanity to join us in non-zero-sum flourishing.
OUR VISION
I am not claiming that entropy forces you to be good. I am not suggesting that suffering, in and of itself, is somehow good. I do not know our ultimate fate in the universe. I only claim to know that the right path is the one away from entropy.
Our vision is a future in which we reap the bounty of our new technologies while finding the bounty of value in ourselves. It is a future of unimagined uniqueness, built on common rails but escaping monoculture. It is a future that will be weirder, more beautiful, and more special than the dry vision of billions of humans wireheaded into a false utopia.
To join, there is nothing special you must do, only a commitment to this creation of deep complexity. Start small, start now. Straighten your desk. Write down the idea you are planning to build. Execute your next code prompt. Big or small, each according to what they can offer in the moment.
Let us make Great Works.
[1] https://www.lesswrong.com/posts/kpTHHgztNeC6WycJs/everybody-wants-to-rule-the-future-is-longtermism-s-mandate