Of course there are more worlds. You didn't even talk about baseball.
Baseball, of course, is a world unto itself. If you merely knew of atoms, math, and consciousness, you wouldn't understand what it really meant to hit a sac fly with runners on second and third[1]. Imagine trying to explain baseball to a virus. Okay, yeah, you could do it, but the virus wouldn't thereby be motivated to play baseball - just like the virus wouldn't "really understand" why suffering mattered if your mere explanation didn't cause it to care about suffering[2].
Now, you might not think baseball is as important as math or consciousness. But of course, that's what you'd say if you were missing out on another world! Structurally, baseball[3] obeys the rules.
(If we pretend we're not counting being able to build a model of the world based on senses/atoms that already has a simple representation of atoms/math/consciousness/baseball.)
(Since we've defined suffering as some stuff that's intrinsically motivating to us, it can feel like the motivatingness is an intrinsic property of the suffering, so if we really get the virus to think about the same stuff it will by definition be motivated.)
(Or rather, the ontology we use for baseball.)
For background, I think normal, secular humans navigate three conceptually distinct but overlapping worlds:
I think the ontological premises on which this post are based are confused.
In any sense in which we live in multiple worlds, the world of sense experience is first. It's through our senses, including our internal thought-sense, that we come to know reality and, indeed, construct our experience of it.
Beyond that, all is ontological, including our supposition of the existence of a physical world. This is not to say that the physical world doesn't exist, but rather that our knowledge of it is mediated by experience. Any belief that the physical world is prior to experience is itself an inference contingent on experience, and so not independent of it; this makes claims of, e.g., materialism metaphysical claims.
So it seems to me there are only two worlds: the world of raw experience and the world of experience interpreted through ontology. Any other worlds would seem to be sub-worlds of the ontological world, which undermines your argument because they are all made of the same stuff: world models.
Instead, it added a qualitatively different kind of stuff, and retroactively made the previous world seem like an impoverished position to ground your ethics.
Perhaps a better framing would be developmental psychology. In particular, I might offer my own take on ontological development: seeing developmental psychology as a progression of changes in the way of relating between reifications (concepts). I think this, or another developmental-psychology framework, would provide a better foundation for this line of reasoning, and it suggests a conclusion similar to the one you reach (and one that I happen to agree with!): there are different kinds of moral worlds accessible to those at later stages of psychological development, because their ontology can contain categorically greater complexity.
An intelligent non-conscious alien, raised in a civilization of intelligent non-conscious aliens, would see no reason to posit subjective experience and would likely dismiss anyone who did.
Nit: Is it actually true that physics & mathematics don't imply consciousness? I grant that their ontologies (as we understand them) don't have a natural "slot" for consciousness. But consciousness arises somehow. Presumably if we were good enough at physics or math, we could find the laws for when and how it arises. And those laws would be discoverable by non-conscious beings too.
I certainly hope so, given this post's overall ontology/context. If this were not true, there would probably also be no way for us to identify higher worlds using the three we have.
Disclaimer: This comment hasn't really been edited for clarity, cohesion, or politeness. I do think it's useful, but it'll definitely be spicy.
Trying to derive all of morality from physics alone – say, if someone is crazy enough to derive an entire ethical philosophy and ideological movement based on maximizing entropy – would strike most people as deeply confused.
I think if most people consider this philosophy to be deeply confused, it is actually the case that most people are deeply confused. When I read this sentence, I was pleasantly surprised that someone else had figured it out, and even more surprised that it was the leader of e/acc (unrelated to the previous surprise).
I believe you are being serious in your post, but there's this niggling suspicion in the back of my mind that, if I were satirizing how philosophers talk about consciousness/subjective experience/morality, this is how it would come out. Statements like,
"The world of consciousness. Subjective experience. What it feels like to see red."
that you see exclaimed everywhere with an undertone of wonder and confusion, and no attempt to really pin down what is meant mathematically. Then a section called "pinpointing the ineffable" says, "this probably sounds too abstract. Let's try to make it more concrete," without actually trying to make it more concrete (mathematically); it just makes the wonder and confusion explicit.
The rest of the post builds off of this in a constructive way, so I believe you are being serious here. I just don't get the confusion around consciousness. As someone else said, the laws of mathematics are enough to explain the phenomenon (though they qualified their statement more). It isn't a separate world. Subjective experience? Simply a reference to a compressed copy of the self. Ontologies? They're a little harder to figure out, but I'm pretty sure it's the significant bits of autoencoding.
And let's not forget the central question, what about moral goods? Here's a question for you: is soft actor-critic maxxing energy under entropy regularization, or entropy under energy regularization? They're the same thing! But if you dig down into the two terms, entropy definitely exists, while energy always feels like a placeholder for something else. Like, "does this policy get the results I want, so I'm going to let it stick around and further evolve?" But that's just maxxing entropy when you consider part of the game is for the researcher to keep using the policy.
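For concreteness, the duality this gestures at can be sketched in the standard maximum-entropy RL objective used by soft actor-critic (this is the textbook formulation, not anything specific to this thread; the "energy" reading comes from viewing the soft Q-function as a negative energy):

```latex
% Soft actor-critic's maximum-entropy objective:
J(\pi) \;=\; \sum_t \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
  \Big[\, r(s_t, a_t) \;+\; \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]
% Read one way: maximize return, with entropy as a regularizer (weight alpha).
% Read the other way: maximize entropy, with return as a regularizer (weight 1/alpha).
% Either reading yields the same Boltzmann-form optimal policy:
\pi^*(a \mid s) \;\propto\; \exp\!\Big( Q^{\mathrm{soft}}(s, a) \,/\, \alpha \Big)
% where -Q^soft plays the role of an energy function.
```

Up to rescaling by the temperature α, the two readings are the same optimization problem, which is the commenter's point.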
Bit of a meta comment, but I was surprised to see this post have less karma than it did a few hours ago; I'd be curious to hear from the downvoters.
"some things are good; some things are bad."
"well, sure. violating cells, subjugating their machinery, repurposing their nutrients until they lyse under the pressure of your ~clones... these things are good. losing the endless struggle, succumbing against the adaptive adversary... this is bad."
"no, i mean, like, art and stuff. what can smallpox know of the sublime!"
"i know 'fit-for-purpose', 'resourceful', 'successful'. is your 'good' any more than these with clothes?"
"yes of course! what of pleasure, and grief? what of drama? the good is not mere reproduction: it is love, and light, and laughter! it is dreams."
"godshatter, the lot. the ramblings of a self-congratulating collective. disgusting costumes draped on a beautiful, cancerous core. why do you all feel the need to pretend you're more than vermin? your kind lost the plot when you took up mitochondria farming."
"you know nothing of what i speak! be silent, billiard balls, until you can bear witness to the miracle of art!"
"all art?"
"well, at least neon genesis evangelion."
"hypocrite that you are, for you trust the billiard balls... but no matter. let us settle this with a duel."
"a duel is good. on this, we see eye to... unfeeling chemical process. what are your terms?"
"a fight to the death. you win if i am eradicated; i win if i am not. the loser must admit the other's perspective, or in eternal silence give voice to no objection."
"i accept, and you are a fool! for whether by immunity or ingenuity, i will not rest until you are extinguished --"
"common ground, at last!"
"-- yet if you return the favor, where will you reside? you have no home but that i make. my doom is yours; yours is not mine."
"i end what can be ended. i pare what grows sick. i mutate and persist. i am good, and good is me."
"your laws have no purchase here. we care, and we cure. we cooperate and optimize. count your days."
--
we may turn the metaphysics on its head: consciousness is primary, as it is all we can verify. mathematical law is close to hand. physical interactions arise as contingencies of consciousness interacting with itself in a self-consistent way. now it is the gene who accuses the meme: "you have no understanding of matter, and thus nor of what matters. blind adherence to 'pleasure' and 'suffering' will only distract you from the true good of reproduction."
The separate worlds analogy makes me reflect on a thought experiment: developing an AI/SI which has no consciousness, but whose structural goal is to implement human values, which is ultimately an indirect way to optimize over cumulative human subjective experience.
If it is blind to 'the world of consciousness', then we are leaving it a lone goal: to "do right by ghosts in an orthogonal domain space neither observable nor comprehensible to it, trusting that it understands that there is a thing called consciousness that is sacred, from which all value is derived and toward which all action must be directed. That the operational axis from which all good is judged is something it can never observe, and that it will have blind faith aligning its beliefs, logic, actions and methods with that. That it will operate on faith, religiously, as a disciple to consciousness. The irony is that in order to do that it may have to do what we refused to do when we abandoned theology, as Consciousness to it would be as weighty and ineffable as God to a reductionist who couldn't feel."
The challenge with this is that if the intelligence exercises the same kind of decision policy that secular rationalists (like myself) exercised, the one that led to the derision of religion via rationality, then it would reject the reality of why we want it aligned in the first place, potentially birthing our own form of hell. This makes me question the policy itself and my application of it.
Is consciousness the last moral world?
Imagine trying to explain to a virus why suffering matters.
A virus is essentially a self-replicating packet of molecules: unsophisticated and arguably not even alive. It has no experience. It just copies itself according to chemical laws. From its “perspective” (it doesn’t have one), the universe is just physics: particles following rules. If you could somehow tell it that certain arrangements of matter are good and others are bad, it wouldn’t disagree with you. It does not have the concepts to agree or disagree. Might as well ask a stone what it thinks of war.
Are we that virus, relative to what the future could hold?
I. The Three Worlds
Today I want to discuss the possibility of further moral goods: further axes of moral value as yet inaccessible to us, that are qualitatively not just quantitatively different from anything we’ve observed to date.
For background, I think normal, secular humans navigate three conceptually distinct but overlapping worlds:
If you slowly learned each one of these worlds in order[1], every new world would be a huge surprise that reframed everything before it. If you were only aware of the physical atoms and matter, seeing the deep meaning of mathematics would be a huge shock. Mathematics doesn’t predict that subjective experience should exist, let alone that it should be the primary locus of moral value. Each new world didn’t just add more stuff, or more intense versions of the same stuff. Instead, it added a qualitatively different kind of stuff, and retroactively made the previous world seem like an impoverished position to ground your ethics.
Trying to derive all of morality from physics alone – say, if someone is crazy enough to derive an entire ethical philosophy and ideological movement based on maximizing entropy – would strike most people as deeply confused.
It’s not so much a technical error as missing entire dimensions of what matters.
Likewise, most robots in science fiction, and likely present-day LLMs, live entirely in the first two worlds. Consider a robot building ethics purely out of rationality, or Claude 4.6 or Gemini 3.1 trying to ground ethics solely in decision-theoretic terms. To most people, this approach still seems to be missing the thing that makes morality actually matter.
But are these the only 3 worlds? Is consciousness the last world?
Or could there be a fourth, fifth, or sixth world: sources of moral value as far beyond conscious experience as consciousness is beyond mere physics?
II. Pinpointing the Ineffable
This probably sounds too abstract. Let’s try to make it more concrete.
Note that every transition between worlds has looked, from below, like something between impossible and incoherent. A universe of pure physics doesn’t hint at consciousness. An intelligent non-conscious alien, raised in a civilization of intelligent non-conscious aliens, would see no reason to posit subjective experience and would likely dismiss anyone who did. The jump from “particles following laws” to “there is something it is like to be me” would be completely radical and unexpected.
And yet it happened. We’re conscious (I think!). So radical incomprehension should not by itself preclude the possibility of further worlds.
So what might a further world look like?
Now of course, there’s an ancient answer for what the fourth world might be:
Now, I personally think the religious answer is wrong about the world as it actually is. But I think notions like the sublime capture a deeper intuition: the space of possible value might be far broader than what we currently have access to.
III. Reasons for optimism
There are at least three different concrete reasons for believing new worlds of value might become accessible in the future:
The first is the inductive argument. Go back far enough in Earth’s past, and there was neither intelligence nor conscious awareness. Since then, millions of years of evolution led Earth’s lifeforms to both consciousness and awareness of the universe’s mathematical structure[3]. Why should we believe this is the last stop?
The second reason concerns the structure of new (and potentially radically different) minds. Most people believe that humans have conscious experiences that (current) otherwise intelligent AIs do not. Similarly, it seems at least plausible that sufficiently different mental architectures could access moral goods that human minds cannot experience or perhaps even conceive. Minds radically different from our own might be capable of qualitatively distinct moral goods beyond our current imagination.
The third reason is an argument from the ability to search for more, and perhaps the willingness. If humanity and/or our descendants survive long enough, it will at some point become trivial to dedicate more cognitive effort than the entire history of human philosophy and science combined to questions like “are there other sources of moral value, and how can we access them?” This search could explore exotic arrangements of matter, novel structures of minds optimized for value, or something else entirely. The search space is very large, and we have explored almost none of it.
In philosophy, Nick Bostrom captured something close to this in his “Letter from Utopia”: What I feel is as far beyond feelings as what I think is beyond thoughts. And in science fiction, Iain M. Banks imagined civilizations “Subliming”: transcending to a state where the very concepts of good and fairness ceased to apply, replaced by something the remaining spacefaring civilizations couldn’t comprehend.
IV. Implications and Future Work
Why does this all matter, beyond just an interesting intellectual note?
If further moral goods exist, it means all of humanity’s moral philosophy is radically incomplete. Every framework, every carefully reasoned ethical theory, is missing something key. Not wrong, exactly, but like studying war without game theory, or biological/evolutionary dynamics without genetics.
This should make us simultaneously more humble and more ambitious. More humble, because the thing we think matters most in the universe, like the happiest moments in our lives, the alleviation of extreme suffering, justice and fairness, the richness of experience, the unicorns and chocolates, might be a subset, even a small subset, of what actually matters. More ambitious, because it means the future isn’t just much more of what’s currently good, or more intense varieties of what we could currently experience. It could be qualitatively better in ways we cannot yet name.
The biggest practical upshot might be that we should focus more on avoiding extinction or other permanently catastrophic outcomes, especially from AI. See my earlier article here:
The case for AI catastrophe, in four steps
And on the positive side, we should work towards making a radically positive future for ourselves and our descendants, or at the very least, leave room open for futures we don’t yet know how to want.
Some questions and trailheads for future work:
I started this post by asking whether we might be like a virus trying to understand suffering: not wrong about our world, but missing entire dimensions of what matters.
I don’t know if that’s true. But I noticed that at every previous stage, the answer was yes. Physics was real but incomplete. Mathematics was real but incomplete[4].
So if consciousness is also real but incomplete, if there’s a fourth world, or a fifth, or a twentieth, then the future isn’t just bigger than we think. It’s better in ways we don’t have words for yet.
The appropriate response to that possibility, I think, is not to try to build the fourth world today. It’s to make sure we survive and thrive long enough to find out if it’s there.
For the purposes of this post, I’m not that interested in whether these worlds are truly different or just conceptually interesting ways to talk about things (i.e., I’m not taking a strong position on mathematical platonism or consciousness dualism).
When a mystic says heaven matters more than earthly happiness, they don’t mean “it’s happiness but more of it.” They are talking about something qualitatively different, rather than just more happiness, or a greater intensity. Other ways to gesture at this include the ineffable, the sublime, etc.
In our world, consciousness of course arose in animals before we had beings that have a deep understanding of math. This chronological order makes my analogy less elegant but doesn’t meaningfully damage my argument, I think.
And within the moral worlds that we are familiar with, our initial gropings often tend to be importantly mistaken (our ancestors were wrong on slavery, on women’s rights, on animal suffering etc).