I'm not sure what you mean, either in-universe or in the real world.
In-universe, the Culture isn't all-powerful. Periodically they have to fight a real war, and there are other civilizations and higher powers. There are also any number of ways and places where Culture citizens can go to experience danger and/or primitivism. Are you just saying that you wouldn't want to live out your life entirely within Culture habitats?
In the real world... I am curious what preference for the fate of human civilization you're expressing here. In one of his novels, Olaf Stapledon writes of the final and most advanced descendants of Homo sapiens (inhabiting a terraformed Neptune) that they have a continent set aside as "the Land of the Young", a genuinely dangerous wilderness area where the youth can spend the first thousand years of their lives, reproducing in miniature the adventures and the mistakes of less evolved humanity, before they graduate to "the larger and more difficult world of maturity". But Stapledon doesn't suppose that his future humanity is at the highest possible level of development and has nothing but idle recreations to perform. They have serious and sublime civilizational purposes to pursue (which are beyond the understanding of mere humans like ourselves), and in the end they are wiped out by an astronomical cataclysm. How's that sound to you?
What do you want from life, that the Culture doesn't offer?
Ah, the topic that frustrates me more than any other. If only you could see some of the ripostes that I have considered writing:
"Every illusionist is declaring to the world that they can be killed, and there's no moral issue, because despite appearances, there's nobody home."
"I regret to inform you that your philosophy is actually a form of mental illness. You are prepared to deny your own existence rather than doubt whatever the assumptions were which led you in that direction."
"I wish I could punch you in the face, and then ask you, are you still sure there's no consciousness, no self, and no pain?"
"I would disbelieve in your existence before I disbelieved in my own. You should be more willing to believe in a soul, or even in magic microtubules, than whatever it is you're doing in this essay."
Illusionism and eliminativism are old themes in analytic philosophy. I suppose what's new here is that they are being dusted off in the context of AI. We don't quite see how consciousness could be a property of the brain; we don't quite see how it would be a property of artificial intelligence either; so let's deny that it exists at all, so that we can feel like we understand reality.
It would be very Nietzschean of me to be cool about this and say: falsehoods sometimes lead to truth, let the illusionist movement unfurl and we'll see what happens. Or I could make excuses for you: we're all human, we all have our blind spots...
But unless illusionist research ends up backing itself into a corner where it can no longer avoid acknowledging that the illusion is real, it is, as far as discovering facts about human beings goes, a program of timidity and mediocrity that leads nowhere. The subject actually needs bold new hypotheses. Maybe producing them is beyond the capacity of most people, but nonetheless, that's what's needed.
What can explain all this callousness? ... people don’t generally value the lives of those they consider below them
Maybe that's a factor. But I would be careful about presuming to understand. At the start of the industrial age, life was cheap and perilous. A third of all children died before the age of five. Imagine the response if that was true in a modern developed society! But born into such a world, an atmosphere of fatalistic resignation would set in quickly. All you can do is pray to God for mercy, and then look on aghast if the person next to you is the unlucky one.
Someone in the field of "progress studies" offers an essay in this spirit, on "How factories were made safe". The argument is that the new dangers arising from machinery and from the layout of the factory were at first not understood, in professions that had previously been handicrafts. There was an attitude that each person looks after themselves as best they can. Holistic, enterprise-level thinking about organizational safety did not exist. In this narrative, unions and management both helped to improve conditions, in a protracted process.
I'm not saying this is the whole story either. The West Virginia coal wars are pretty wild. It's just that ... states of mind can be very different, across space and time. The person who has constant access to the intricate tapestry of thought and image offered by social media lives in a very different mental world from people of an age when all they had was word of mouth, the printed word, and their own senses. Live long enough, and you will even forget how it used to be, in your own life, as new thoughts and conditions take hold.
Maybe the really important question is the extent to which today's elite conform to your hypothesis.
There are several ways to bring up a topic. You can make a post, you can make a question-post, you can post something on your shortform, you can post something in an open thread.
If there is some detailed opinion about a topic that is a core Less Wrong interest, I'd say make a post. If you don't have much of an opinion but just want such a topic discussed, maybe you can make it into a question-post.
If the topic is one that seems atypical or off-topic for Less Wrong, but you really want to bring it up anyway, you could post about it on your shortform or on the open thread.
The gist of my advice is that for each thing you want to discuss or debate, identify which kind of post is the best place to introduce it, and then just make the post. And from there, it's out of your control. People will take an interest or they won't.
So let me jump in and say, I've been on Less Wrong since it started, and have engaged with topics like transhumanism, saving the world, and the nature of reality since before 2000; and to the best of my recollection, I have never received any serious EA or rationalist or other type of funding, despite occasionally appealing for it. So for anyone worried about being corrupted by money: if I can avoid it so comprehensively, you can do it too! (The most important qualities required for this outcome may be a sense of urgency and a sense of what's important.)
Slightly more seriously, if there is anyone out there who cares about topics like fundamental ontology, superalignment, and theoretical or meta-theoretical progress in a context of short timelines, and who wishes to fund it, or who has ideas about how it might be funded, I'm all ears. By now I'm used to having zero support of that kind, and certainly I'm not alone out here, but I do suspect there are substantial lost opportunities involved in the way things have turned out.
ontonic, mesontic, anthropic
Those first two words are neologisms of yours?
The use of Greek neologisms for systems ontology is almost a subgenre in itself:
The anthropologist Terrence Deacon distinguishes between "homeodynamic", "morphodynamic", and "teleodynamic" systems. (This taxonomy already made an appearance on Less Wrong.) Stanislav Grof refers to "hylotropic" and "holotropic" modes of consciousness.
Theoretical biology seems replete with such terms too: autopoiesis, ontogeny, phylogeny, anagenesis (that list, I took from Bruce Sterling's Schismatrix); chreod, teleonomy, clade.
I guess Greek, alongside Latin, was one of the prestige languages in early modernity. Plenty of other scientific terms have Greek etymology (electron, photon, cosmology). Still, it's as if people instinctively feel that Greek is suited for holistic ontological thinking (hello Heidegger).
layered ... model
I feel like we almost need a meta-taxonomy of layered models or system theories. E.g. here are some others that came to mind:
The seven layers of the ISO/OSI model.
The layered model of AI (see diagram) being used in the current MoSSAIC sequence.
The seven basis worldviews of PRISM and the associated hierarchy of abstractions and brain functions.
You could also try Ivan Havel's thoughts on emergent domains. Or the works of Mario Bunge or James Grier Miller or Valentin Turchin or many other systems theorists...
I think that, while there are many ways you can draw the exact boundaries in such taxonomies, a comparative study of taxonomies would probably reveal a number of distinct taxonomic schemas, and possibly even a naturally maximal taxonomy.
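As a toy illustration of what such a comparative study might start from, one could encode each layered model as an ordered list of layers and project them onto a common scale to look for structural correspondences. This is purely a sketch of my own; the model contents and the normalization heuristic are illustrative assumptions, not anything proposed in the comment above.

```python
# Toy sketch: encode layered models as ordered lists (coarsest to finest,
# or lowest to highest), then place each layer on a common [0, 1] scale
# so models with different layer counts can be lined up side by side.
models = {
    "OSI": ["physical", "data link", "network", "transport",
            "session", "presentation", "application"],
    "Deacon": ["homeodynamic", "morphodynamic", "teleodynamic"],
    "threefold": ["ontonic", "mesontic", "anthropic"],  # from the comment above
}

def normalized_positions(layers):
    """Map each layer name to a position in [0, 1], assuming the list
    is ordered from lowest layer to highest (requires >= 2 layers)."""
    n = len(layers)
    return {layer: i / (n - 1) for i, layer in enumerate(layers)}

for name, layers in models.items():
    positions = normalized_positions(layers)
    print(name, {k: round(v, 2) for k, v in positions.items()})
```

Of course, a real comparative study would need to justify which layer of one model corresponds to which layer of another; equal spacing is the crudest possible alignment, and the interesting question is exactly where it breaks down.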
What's your evidence that your experience of color is ontologically primitive?
That's not what I'm saying. Experiences can have parts, qualia can have parts. I'm saying that you can't build color or experience of color, just from the "geometric-causal-numerical" ingredients of standard physical ontology. Given just those ingredients in your ontological recipe, "subjective feels" don't come for free. You could have the qualia alongside the geometric-causal-numerical (property dualism), or you could have the qualia instead of that (monistic panpsychism), or you might have some other relationship between qualia and physics. But if you only have physics (in any form from Newton to the present day), you don't have qualia.
I recently became much more familiar with the SCP mythos, after Grimes recommended There Is No Antimemetics Division ("Artificial Angels" is all about it). It could do with an SCP-AI subcategory for AI scenarios, like SCP-AI-2027...
If you had a correct causal model of someone having a red experience and saying so, your model would include an actual red experience, and some reflective awareness of it, along with whatever other entities and causal relations are involved in producing the final act of speech. I expect that a sufficiently advanced neuroscience would eventually reveal the details. I find it more constructive to try to figure out what those details might be, than to ponder a hypothetical completed neuroscience that vindicates illusionism.