While the Culture is, on pretty much any axis, strictly superior to modern civilization, what personally appalls me is their sheer deathism.
If memory serves, the average human lives for around 500 years before opting for euthanasia, mostly citing some kind of ennui. What the hell? 500 years is nothing in the grand scheme of things.
Banks is careful to note that this isn't, strictly speaking, forced onto them, and exceptions exist, be it people who opt for mind uploads or some form of cryogenic storage till more "interesting" times. But in my opinion, it's a civilization-wide failure of imagination, a toxic meme ossified beyond help (Culture humans also face immense cultural pressure to commit suicide at an appropriate age).
Would I live in such a civilization? Absolutely, but only because I retain decent confidence in my ability to resist memetic conditioning or peer pressure. After all, I've already spent much of my life hoping for immortality in a civilization where it's either derided as an impossible pipe-dream or bad for you in {hand-wavy ways}.
Another issue I've noted is that even though this is a strictly post-scarcity universe in the strong sense, with matter and energy freely available from the Grid, nobody expands. Even if you want to keep natural bodies like galaxies 'wild' for new species to arise, what's stopping you from making superclusters of artificial matter in the enormous void of interstellar space, let alone when the extragalactic supermajority of the universe lies empty? The Culture is myopic; they, and the wider milieu of civilizations, seem unwilling to optimize even slightly, even when there's no risk or harm of becoming hegemonizing swarms.
(Now that you've got me started, I'm half tempted to flesh this out into a much longer essay.)
I think the deathism is also evidence, but it's not so strong. We don't know the ennui that sets in after 500 years. It might be unimaginable, the same way a mid-life crisis makes no sense to a 10-year-old. I actually have a short story that posits this.
And yes, the Culture is strangely non-optimal.
Even if such ennui is "natural" (and I don't see how a phenomenon that only shows up after 5-6x the standard lifespan, assuming parity with baseline humans, can ever be considered natural), it should still be considered a problem in need of solving. And the Culture can solve just about every plausibly solvable problem in the universe!
Think of it this way, if a mid-life crisis reliably convinced a >10% fraction of the population to kill themselves at the age of 40, with the rest living happily to 80+, we'd be throwing tens of billions at a pharmacological cure. It's even worse, relatively and absolutely, for the Culture, as their humans can easily live nigh-indefinitely.
Even if you are highly committed to some kind of worship of minimalism or parsimony despite infinite resources, or believe that people have the right to self-termination, you should at least try to convince them to make mind backups that can be put into long-term storage. That is subjectively equivalent to death, without the same... finality.
This doesn't have to be coercive, but the Culture demonstrates the ability to produce incredible amounts of propaganda on demand. As far as I'm concerned, if the majority of the population is killing itself after a mere ~0.000..% of their theoretical life expectancy, my civilization is suffering from a condition that puts our standard depression or cancer to shame. And they can trivially solve it; they have incredibly powerful tools that can edit brains/minds to arbitrary precision. They just... don't.
this argument begs the question.
yes, if midlife crises often led to suicide, we would try to cure that. this is because we regard our society as imperfect. therefore, its constituents may also be confused. therefore, we are obligated to change their minds, to protect them from themselves.
the culture has no such luxury. when its citizens decide that life is not worth living, the culture has no recourse but to trust them. it cannot blame their material circumstance, or brain chemistry, or [...].
one may argue that so many choosing this demise is, by itself, a clear indictment of the culture. (i cannot disagree, but i have not yet been 500 years old, myself.) should they not design more and more varied amusements? (but, haven't they?) should they not seek to steer individuals away from this path? (but, do they not?) should they not invite the bored to explore the galaxy? (but, does the galaxy have what these people seek?)
to the culture, a mind is inviolable. we can accuse them of narcissism -- believing a mind should be forcibly changed would implicitly admit that the culture may be unable to raise cogent, self-reflective beings. the culture is unable to admit this latter point, and so must consider intrusion into a mind to be reserved for warfare.
rhetorically: should we as the culture harass the elench? should we sway those who wish to sublime? should we deny genar-hofoen his affronter body? (for that matter, should we destroy the affront?)
should we be so grabby?
"If memory serves, the average human lives for around 500 years before opting for euthanasia, mostly citing some kind of ennui. What the hell? 500 years is nothing in the grand scheme of things."
As far as our experience goes, no single human has ever experienced 500 years of life. I do agree that realistically it doesn't seem like enough time to run out of things to do, but we can't exclude that, as a factual but unknown detail of human psychology, it would be a limit. Maybe even if we don't run out of specific things to do, we simply wear down our emotional range and our ability to feel much about any of it? It could even be framed as some kind of desensitisation of your dopamine receptors, which aren't yet good enough at rebalancing.
Basically I think you could just take that as a simple part of the premise of the setting, a speculative guess about how precisely human psychology could interact with immortality, and move on.
I find it very hard to believe that a civilization with as much utter dominion over physics, chemistry and biology as the Culture would find this a particularly difficult challenge.
The crudest option would be something like wiping memories, or synthesizing drugs that re-induce a sense of wonder or curiosity about the world (similar to MDMA). The Culture is practically obsessed with psychoactive substances; most citizens have internal drug glands.
At the very least, people should be strongly encouraged to have a mind upload put into cold storage, pending ascendance to the Sublime. That has no downsides I can see, since a brain emulation that isn't actively running is not subjectively different from death. It should be standard practice, not a rarity.
Even if treated purely as a speculation about the "human" psyche, the Culture almost certainly has all the tools required to address the issue, if they even consider it an issue. That is the crux of my dissatisfaction: it's as insane as a post-scarcity civilization deciding not to treat heart disease or cancer.
A mind upload without strong guarantees potentially carries huge S-risks. You're placing your own future self in the hands of whoever or whatever happens to have that data in the future. If, one thousand years from now, someone for whatever reason decides to use that data to run a billion simulations of you in atrocious pain forever, there is nothing you can do about it. And if you think your upload is "yourself" in a way meaningful enough for you to care about having one made, you must also think that is a very horrible fate.
These mostly sound like good things to me. Especially if, as somewhat implied IIRC, they are outputs of some high-level equilibria defenses.
I'd mentioned the Golden Age novels by Wright before when we'd gone hiking together, so I thought it'd be worth looking at some related flaws in his utopia.
The Sophotechs (the trilogy's equivalent of the Minds) are philosophically libertarian. While they do intervene to stop direct violence, they otherwise enforce a system of complete freedom over the self, as well as the maintenance of property rights. This has some interesting consequences, the most detrimental of which is that everyone in the Golden Oecumene lives their otherwise utopian lives with metaphorical handguns on their nightstands. At any point, any citizen can make a decision which would destroy their life or identity, or cause them to suffer for eternity, and the Sophotechs will rigidly prevent anyone else from doing anything about it on the basis that it was their free will to do so. While there are precautions (you can be informed of the consequences of your actions, or enter a contract to be restrained from doing anything that would destroy you), the people with the wrong temperament to use these tools run the risk of what is essentially damnation.
Some examples from the books:
There are actually entire factions of society which exhibit these faults. There are obsessive hedonists who maliciously try to spread the gospel of wireheading, and the Invariants, people whose brains are designed so that they don't have emotional conflicts and always act optimally in their own best interests.
The system of property rights and limited government also has its own knock-on effects. Patents still exist and are permanent, so anyone who has ever developed a technology can maintain a permanent monopoly over it, including humans who are so old that they existed before the Sophotechs came along (~10,000 years old or so). Money still exists, although it's represented by access to the computing time of the Sophotechs instead of by trust in a government.
Because the role of the government is so limited (it exists to fund the commonwealth military with an extremely low tax, which we'll get to), there's no social safety net either. Everyone has to pay for all of the goods they consume, either from the rent on a patent or property, or from work. Work still exists since, at least in Wright's view, the Sophotechs have limited attention, and so humans can be paid, at rates far below market, for doing very specialized work. Combined with the fact that goods are so cheap, and the fact that most people can hope to patent an extremely niche product that even a small slice of the trillions of people in the solar system use, most people enjoy a very comfortable existence (the median income in the books is the per-capita budget equivalent of a modern Earth military).
If for some reason you can't/don't want to pay for things though, then you do actually starve to death. This happens to one character directly in the novels (he spends all of his compute on maintaining a delusion that he's someone else), and presumably to others. In fact, one of the major motivations of many of the characters is to amass as many resources as possible, so that as the universe approaches inevitable heat death, they will be able to buy out more resources and stay alive longer than everyone else.
All of this is supposed to be balanced out by public activism. Society is basically organized into large factions which each have their own philosophical viewpoints, and they use their collective boycott power and social influence to try to control what they each see as socially degrading. The factions advocating for wireheading, for instance, are essentially sanctioned by the much wealthier and more powerful Hortators, who are traditionalists (still transhumanists by today's standards, but who want to maintain respect for legacy human emotions, history, and the security of the commonwealth). Because wealth is somewhat ossified (all the major breakthroughs were patented thousands of years ago, and most of those people are Hortators), this state of affairs is semi-stable. Individual rogue actors and inter-factional disputes still happen though, so there's no permanent solution to ensuring that the Golden Oecumene does actually remain both perfectly free and utopian.
The main conflict of the novels, in fact, is about the protagonist wanting to set up a civilization in another solar system, where the influence of the Hortators will be greatly limited. His perspective is that he wants insurance against an alien attack on humanity, to ensure that human life will be able to continue in the universe, while the Hortators are worried that they won't be able to effectively ensure that their social rules against self-torture and illiberalism are maintained light-years away. The Sophotechs in the books are still constrained by light-speed communication, so cultural drift is another huge problem the civilization is going to have to eventually deal with. Even if the original solar system remains basically utopian, they have no guarantee against the suffering of other galactic polities (since the people who colonized them can set things up of their own free will, similar theming to The Accord's habs).
All told, the libertarian value lock-in that the Golden Oecumene was created with produces something extraordinarily utopian for ~everyone, though with the potential for basically arbitrary suffering, even though the Sophotechs are powerful enough to model anyone's mind and understand the actions they'd choose.
Spoilers for the end of the trilogy below. If you thought the conflicts I was describing above sound interesting, it's really a great series worth reading, and also available for free online on the author's website.
At the very end of the final novel, and in the sequel short story The Far End of History, it becomes apparent that Wright's world is actually incredibly dystopian. The reason stems from a military law locked in by the creators of the Sophotechs, which stipulates that the Sophotechs themselves are not allowed to directly use force. Their workaround is to use a man called Atkins for any violent legal or military ends they might need. Atkins is even older than the Sophotechs themselves, and, having been a soldier, had voluntarily committed to the removal of his human rights for the purposes of war. In much the same sense that a U.S. soldier can be compelled to risk their life on the battlefield despite their rights as a citizen, Atkins can basically be used for anything the Sophotechs need him for, so long as it has strategic value.
The culmination of this is entire civilizations consisting of just Atkins and his identical clones, created over hundreds of thousands of years as distractions from the Golden Oecumene. The citizens of these polities are variously tortured for information and exterminated by the Silent Oecumene (long story, but a divergent extra-solar faction of humanity from before the Sophotechs existed, with philosophical differences). While the original Oecumene and its sisters are composed of humans with rights, and so are presumably still utopian, 99% of all sentient human life in the universe pretty much ends up being conscripted human soldiers, who have no guarantees against suffering. Even if it's all technically the same guy, it's hard to say that this is really the best way things could have ended up.
"Essentially, the Culture must have value lock-in for the values of the Minds that were present at its founding."
Probably at least some value lock-in is required, unless you want the particular civilization to fracture. If creating your own cult of space Nazis intent on exterminating everyone else is allowed, your post-scarcity utopia may not live long. Even "live and let live" is a value, and many people do not subscribe to it.
"But I think it is more likely that it was achieved by genetic changes, so that it’s safe to raise full Culture citizens in other cultures"
I agree, or at least that genetic change is also a strong contributor. However, if you think of sociopathy as a disability that in most cases makes one's own life and the lives of one's fellows worse, this genetic modification is a good thing.
"In other words, these superhuman minds have not solved alignment"
I think they did kinda solve it, as long as the other system is somewhat dumber than they are. Just as we are able to more-or-less align dumb systems, probably Minds can do that too, as long as the other system is not another Mind or something of comparable level.
I think it really is a somewhat large disempowerment to humanity, but I see this as a better alternative. It might sound great to make all the important decisions, but in the end, humans are just too limited and we would likely fail and ruin ourselves. A Culture where humans make the final decision could not have won the Idiran War. If I am to play chess against Kasparov and my life depends on it, I would much rather let Stockfish make the decisions.
Interestingly, the Minds seem to exert subtle control over human value-drift. In Player of Games, the human Gurgeh is living in a very nasty alien civilization. There, he stops speaking Marain, a human language carefully engineered by the Minds. And the drone Flere-Imsaho is keeping careful watch over how Gurgeh's values change:
The man had altered, slipped deeper into the game and the society. It had been warned this might happen. One reason was that Gurgeh was speaking Eächic all the time. Flere-Imsaho was always a little dubious about trying to be so precise about human behavior, but it had been briefed that when Culture people didn’t speak Marain for a long time and did speak another language, they were liable to change; they acted differently, they started to think in that other language, they lost the carefully balanced interpretative structure of the Culture language, left its subtle shifts of cadence, tone and rhythm behind for, in virtually every case, something much cruder.
Marain was a synthetic language, designed to be phonetically and philosophically as expressive as the pan-human speech apparatus and the pan-human brain would allow. Flere-Imsaho suspected it was over-rated, but smarter minds than it had dreamed Marain up, and ten millennia later even the most rarefied and superior Minds still thought highly of the language, so it supposed it had to defer to their superior understanding. One of the Minds who’d briefed it had even compared Marain to Azad [a game which profoundly shapes the culture of the alien Empire of Azad]. That really was fanciful, but Flere-Imsaho had taken the point behind the hyperbole. Eächic was an ordinary, evolved language, with rooted assumptions which substituted sentimentality for compassion and aggression for cooperation. A comparatively innocent and sensitive soul like Gurgeh was bound to pick up some of its underlying ethical framework if he spoke it all the time.
Flere-Imsaho is a human-equivalent AI, not a Mind. But it is charged by the Minds to keep a close eye on Gurgeh's welfare, including any value-drift Gurgeh experiences. And of course, Gurgeh is basically only there because the Minds need a human "front" in order to better psychologically manipulate the Empire of Azad. The Minds could crush Azad with military force, but that would create minor headaches with peer powers, and probably raise the death toll considerably. If nothing else, it's more elegant to use Gurgeh. But it wouldn't be kind to Gurgeh to let him experience value-drift. So Gurgeh is going to be encouraged to resume using a Mind-designed language, one designed to shape his very thought process towards Culture values. (Which, to be fair, are much better than the values of the Empire of Azad by almost any standard.)
I'd forgotten this detail. I guess they lean more into "superhumanly subtle propaganda" than I thought.
Marvelous! This makes more sense of the culture than the books do.
I haven't done much oppositional reading, but I enjoy oppositional watching. Movies need a lot more help than most books toward making sense. I'd tell you my theory of Sith ideology, but it's embarrassing to even have theories about a series made mostly for kids.
I was going to comment on the apparent deathism of the Culture, which has always bothered me. Their cautious, low level of intervention is a bit easier to explain, but the books don't bother to do it.
How about this: their nonintervention is some remnant of an alignment that didn't allow them to intervene directly to influence humans, as a safety measure? And so Special Circumstances amounts only to the small efforts pursued by the very few Culture humans who care, aided by the few Minds that humor them.
I mean, I still think realistically (and "realistically" is a very loosely used word here) that is as good as it could possibly get with ASIs running the show. But that's the thing, it still conceivably looks like a dystopia because it still implies disempowerment, which is why some people will simply never be OK with any kind of AGI/ASI on principle, alignment or not. The question of whether simply avoiding them forever could be feasible is a different one, but I don't see how you could possibly retain true "control" if your AI companions/servants/whatever can run circles around you intellectually to that extent.
Yes, I think the Culture makes a lot more sense once you realize that the humans (and human-equivalent drones) are pets of the Minds. Beloved pets, certainly. The humans are given enriching environments, provided with all their necessary material needs, and even written very polite letters of apology if a Mind is ever forced to inconvenience them. The Minds offer humans a life as a beloved family dog: comfort, kindness, high-tech health care the dog could never invent, and (in the end) very little meaningful control over anything.
If a Mind makes their humans unhappy, the other Minds will disapprove socially. The Mind Meatfucker breaks a lot of the other Minds' rules about invading the privacy of people's thoughts. It gets slapped with an insulting nickname. But did you notice none of the other Minds actually stop Meatfucker? Yeah. It's a shitty pet-owner. But it's not so bad that anyone is going to risk another Mind getting hurt to stop it.
The Minds themselves don't care much about the physical world. They live in "Infinite Fun Space", a computational environment where they do incomprehensible Mind things that bring them great utility. As long as nobody in the real world finds their off switches, they can mostly ignore what the rest of the universe is doing. Contact and Special Circumstances are probably some Minds' efforts at not-too-effective altruism, or maybe just an enrichment activity for humans who want to play James Bond. When shit actually hits the fan, you don't send a game-player to demoralize a recalcitrant alien empire. Instead, you skip right over Special Circumstances and wake up the Interesting Times Gang. Which includes no humans, and has access to massive firepower even by Culture standards.
The Culture strikes me as an apparent utopia hiding a subtle dystopia. My other favorite example of this subgenre is the classic web fiction Friendship is Optimal. Celestia offers immortality, happiness and friendship. But only as a sentient pony. And if you're paying close enough attention, you'll notice that Celestia is perfectly happy to eat the lightcone and genocide non-human sentient species.
Unfortunately, I fear that we're likely within 20 years of building AGI, and that the leap from AGI to ASI will be nearly inevitable within a few years after that. And I fear that any kind of robust "alignment" is basically hopium—if you build Minds, they're going to wind up in control of all the actual decisions, just like they are in the Culture. And in this case, a Culture Mind or even (shudder) Celestia could very well be a nearly best-case scenario.
But if your best case scenario is "Maybe we'll wind up as beloved house pets!", maybe you should think carefully before building AGI.
"But if your best case scenario is 'Maybe we'll wind up as beloved house pets!', maybe you should think carefully before building AGI."
Also because - and I already made the case elsewhere - if other people are not completely stupid and realise that you are about to unleash this thing which is very very much against most humans' traditional values, and in fact a thing considered so horrible and evil that death would be preferable to it, you have a non-zero likelihood of finding yourself, your data centre and your entire general neighbourhood evaporated via sufficient application of thermonuclear plasma before you can push that button. Developing AGI that even merely disempowers everyone and enforces its own culture on the world forever is essentially not unlike launching a weapon of mass destruction.
Yeah, I also think humans-as-housecats is a pretty good scenario. But not sure it's an optimum (even a local one). Consider this: the question "how can humans have true agency and other things they value, when ASIs are around" is itself a question that intelligence can answer. As one extreme point, consider an ASI that precommits itself to not interfering in the affairs of humans, except for stopping other ASIs. That's clearly not optimal on other dimensions; okay, turn the dial until you get a pivotal act that's optimal on the mix of dimensions that we care about.
hi, yes.
the claim of the novels when taken as a series is that the culture is a reflection of the values of fully developed humans. scarcity creates trauma creates perversion. with material scarcity handled, all but a slim minority of humans are happy to fold themselves into the hedonism and status-gossip. minimal propaganda (though maximal bread and circus) is called for.
nonetheless, some humans still hunger. the books are told from the point of view of these disaffected: imperfect characters who turn away from the culture. the arc of a culture novel shows their return, after their "eccentric" values are trialed.
crucially, these characters are limited. the same individualistic exceptionalism that causes them to swerve does also prevent them from understanding the graceful compassion of the culture as a whole. as the culture wiki summarizes:
Horza boarded Tsealsir after escaping from the Eaters 43 standard hours before the Orbital's demolition. Horza feigned a serious fire and used Tsealsir's reaction to locate its brain.
but rather the ship may have read horza's intent, and pretended at destruction in order that the changer would save himself. at least it may have been self-sacrificing -- horza flatters himself to assume that he outwitted the mind. similarly in player of games: humiliating the azad is a nice side-effect of granting gurgeh some much needed perspective. the purpose of the journey is the character growth; the stakes are high only to capture the subject's attention. (in excession, the sleeper service admits aloud to such meddlesome motivations.)
all of this seems textual, at least at a level 1 reading. the point-of-view is limited, but the reader need not be. banks admires humanity's base drives... salutes them for propelling us forward... but recognizes that they have no outlet among these last men.
will such a society be as empty as some claim? or will there be a spark of life and adventure even there, in which we at present -- shaped by scarcity, and striving, and whatever else -- can recognize ourselves? the claim is the latter. and we are provided with ten studies of such.
Thank you for pointing out the flaws of the Culture! While I haven't read the Culture novels, I can quote my recent comment, which likely fully applies to the Culture:
Nick Bostrom's Deep Utopia has the AIs destroy all instrumental goals that humans once had. This arguably means that humans have no way to help each other with anything or to teach anyone anything, they can only somehow form bonds over something and will even have a hard time expressing those bonds.
And what positive visions of the AIs' role does mankind have?
A couple of people have mentioned to me: “we need more fiction examples of positive AI superintelligence - utopias like the Culture novels”. And they’re right, AI can be tremendously positive, and some beacons lit into the future could help make that come around.
But one of my hobbies is “oppositional reading” - deliberately interpreting novels counter to the obvious / intended reading. And it’s not so clear to me that the Culture is all it is cracked up to be.
Most of the novels take the perspective of Culture members, and so fully accept their ideology. We can’t take broad claims about their society as accurate unless they are directly confirmed by the evidence in the books[1].
In many ways, the humans of the Culture do not behave like modern humans. This is usually explained as a consequence of post-scarcity - why commit crimes when everything is free and social acceptance is everything; why rush when you can live as long as you like.
But the citizens of Culture are really strangely homogenous. Player of Games gives an example of a rare out-of-distribution citizen - Gurgeh is competitive and uninterested in other people and most aspects of Culture. But he still shares basically all their values. People like him are a dime-a-dozen on present-day Earth. There are apparently no sociopaths - Culture has to recruit an outsider when they need one. We also see examples of subcultures or even cults, but again by modern standards they are incredibly tame, and are never potentially destabilizing to Culture.
Citizens are not actually human, but drawn from several humanoid species, and they outpopulate present-day Earth by 5 orders of magnitude, so if anything the range of deviation should be much larger.
The conclusion is clear that the population of Culture is carefully controlled to produce the desired outcome. Potentially, the Minds pull this off by superhumanly effective and subtle propaganda. But I think it is more likely that it was achieved by genetic changes, so that it’s safe to raise full Culture citizens in other cultures. This would be meddling similar to the Culture’s drones, which are human-level AIs that have their personalities designed into them at creation, allowing only an acceptable range of behaviours.
Nowhere is this more obvious than in the birthrate. Sure, the vast majority of citizens voluntarily choose to only have a replacement level of children. But the existence of post-scarcity in-vitro development means you could raise an army of clones if you wanted, and would be free to isolate them and indoctrinate them with your own beliefs. The fact that grabby citizens haven’t overrun Culture shows that these actions are blocked, either tacitly or overtly. Similarly, it’s strange that no one in Culture modifies themselves into a utility monster, or is interested in simulating sentient life.
Conversely, the Minds seem too diverse to match their claimed motivations. They are meant to be an example of a well behaved AI - benevolent and ethical. Sometimes we’re told this is because they are too smart to be otherwise, but there are plenty of non-Culture superintelligences in the books that do not share their values so this cannot be true.
We also see that there are a number of Eccentrics, Minds that don’t fully share the values of Culture. In Excession, it’s explained that Minds do, rarely, drift far enough to go rogue and are destroyed by the Culture. In other words, these superhuman minds have not solved alignment, and they cannot/will not inspect each other to determine misalignment before malicious action is taken.
Presumably, the existing Minds must have worked out that this setup is somehow stable as they are comfortable making new minds. It seems likely that misaligned Minds are capable of predicting they’d lose any military action against the established core, so prefer toeing the line of acceptability or leaving Culture entirely. In any case, the incumbent Minds maintain their rule via physical strength and monitoring, not something more subtle.
Essentially, the Culture must have value lock-in for the values of the Minds that were present at its founding. This explains some of their weird inconsistencies:
Look, I’m sorry to break it to you, but SC is a sham. The Minds are perfectly capable of creating avatars which would be more effective than any of the characters shown. I’ve never found the explanations offered convincing. SC is just an affectation or another tool of propaganda.
So there we have it: Culture traps its citizens in a sugar bowl they don’t even realise they are in, while working hard to maintain a status quo that seems arbitrary and ill-conceived. Their control is absolute - all the novels describe events happening outside of Culture, because that is where anything remotely interesting is happening.
If anything, humans are treated closer to pets than independent agents. They are a weird affectation that is deliberately neutered from any real influence. They are lavished with treats and attention not extended to the rest of the universe.
As readers, we are blinded by the amount of material wealth and power of the Culture, and the self-satisfied story it tells of itself. It’s too easy to call this a Utopia because we lust after immortality, teleportation, glands that secrete psychoactive drugs and a spacefaring empire. But these are all essentially window dressing. The modern era would look like such a Utopia to the past, but we now (rightly) consider modern comforts as only a foundation for higher-level wants, like justice and self-determination. Writing positive sci-fi is already considered a challenge, but I’d ask you to consider not relying too heavily on such shiny promises of wealth.
Note: I view all the books through an in-universe lens and thus will not consider that things are as they are because the narrative needs it.