The Culture Novels as a Dystopia
AlphaAndOmega · 14h

I find it very hard to believe that a civilization with as much utter dominion over physics, chemistry and biology as the Culture would find this a particularly difficult challenge. 

The crudest option would be something like wiping memories, or synthesizing drugs that re-induce a sense of wonder or curiosity about the world (similar to MDMA). The Culture is practically obsessed with psychoactive substances; most citizens have internal drug glands. 

At the very least, people should be strongly encouraged to have a mind upload put into cold storage, pending ascendance to the Sublime. That has no downsides I can see, since a brain emulation that isn't actively running is subjectively no different from death. It should be standard practice, not a rarity. 

Even if treated purely as speculation about the "human" psyche, the Culture almost certainly has all the tools required to address the issue, if they even consider it an issue. That is the crux of my dissatisfaction: it's as insane as a post-scarcity civilization deciding not to treat heart disease or cancer. 

The Culture Novels as a Dystopia
AlphaAndOmega · 3d

Even if such ennui is "natural" (and I don't see how a phenomenon that only shows up after 5-6x the standard lifespan, assuming parity with baseline humans, can ever be considered natural), it should still be considered a problem in need of solving. And the Culture can solve just about every plausibly solvable problem in the universe! 

Think of it this way: if a mid-life crisis reliably convinced >10% of the population to kill themselves at the age of 40, with the rest living happily to 80+, we'd be throwing tens of billions at a pharmacological cure. It's even worse, relatively and absolutely, for the Culture, as their humans can easily live nigh-indefinitely. 

Even if you are highly committed to some kind of worship of minimalism or parsimony, despite infinite resources, or believe that people have the right to self-termination, at least try to convince them to make mind backups that can be put into long-term storage. That is subjectively equivalent to death, without the same... finality. 

This doesn't have to be coercive, but the Culture demonstrates the ability to produce incredible amounts of propaganda on demand. As far as I'm concerned, if the majority of the population is killing itself after a mere ~0.000..% of their theoretical life expectancy, that civilization is suffering from a condition that puts standard depression or cancer to shame. And they can trivially solve it; they have incredibly powerful tools that can edit brains/minds to arbitrary precision. They just... don't. 

The Culture Novels as a Dystopia
AlphaAndOmega · 3d

While the Culture is, on pretty much any axis, strictly superior to modern civilization, what personally appalls me is their sheer deathism.

If memory serves, the average human lives for around 500 years before opting for euthanasia, mostly citing some kind of ennui. What the hell? 500 years is nothing in the grand scheme of things. 

Banks is careful to note that this isn't, strictly speaking, forced onto them, and exceptions exist, be it people who opt for mind uploads or some form of cryogenic storage till more "interesting" times. But in my opinion, it's a civilization-wide failure of imagination, a toxic meme ossified beyond help (Culture humans also face immense cultural pressure to commit suicide at an appropriate age). 

Would I live in such a civilization? Absolutely, but only because I retain decent confidence in my ability to resist memetic conditioning or peer pressure. After all, I've already spent much of my life hoping for immortality in a civilization where it's either derided as an impossible pipe-dream or bad for you in {hand-wavy ways}. 

Another issue I've noted is that even though this is a strictly post-scarcity universe in the strong sense, with matter and energy freely available from the Grid, nobody expands. Even if you want to keep natural bodies like galaxies 'wild' for new species to arise, what's stopping you from making superclusters of artificial matter in the enormous void of interstellar space, let alone when the extragalactic supermajority of the universe lies empty? The Culture is myopic; they, and the wider milieu of civilizations, seem unwilling to even remotely optimize, even when there's no risk or harm of becoming hegemonizing swarms. 

(Now that you've got me started, I'm half tempted to flesh this out into a much longer essay.)

OffVermilion
AlphaAndOmega · 9d

Strong upvote. When I began reading, I slightly scoffed at the idea of "musician's dystonia". Never heard of it, didn't sound plausible to me, despite being a doctor and psych trainee. I thought it was a nice hypothetical to shore up the story's central conceit. 

And then I actually Googled it, and found a wiki article. Oh dear. 

Viliam's Shortform
AlphaAndOmega · 12d

There is an important distinction to be made between base models, which are next-token predictors (and write in a very human-like manner), and the chatbots people normally use, despite both being called "LLMs". The latter involve taking a base model and subjecting it to further instruction tuning and Reinforcement Learning from Human Feedback (RLHF). 
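The distinction can be sketched in a few lines: a base model just continues raw text, while an instruction-tuned chatbot sees that text wrapped in a turn structure it was trained on. The role tokens below are made-up placeholders, since every lab uses its own chat template.

```python
# Illustrative sketch only: <|user|> / <|assistant|> / <|end|> are
# hypothetical stand-ins, not any real model's actual special tokens.

def base_model_prompt(text):
    """A base model sees raw text and simply predicts what comes next."""
    return text

def chat_model_prompt(messages):
    """An instruction-tuned model sees the same text wrapped in a turn
    structure; RLHF then shapes which continuations get rewarded."""
    parts = [f"<|{m['role']}|>{m['content']}<|end|>" for m in messages]
    parts.append("<|assistant|>")  # cue the model to respond in its persona
    return "".join(parts)

print(base_model_prompt("The rain in Spain"))
print(chat_model_prompt([{"role": "user", "content": "Hello"}]))
# → <|user|>Hello<|end|><|assistant|>
```

The point is that the "chatbot voice" lives in what the model learned to emit after that final assistant cue, not in the base model's raw continuation ability.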

My understanding is that a lot of the stylistic quirks of the latter arise from the fact that OpenAI (and other companies) hired Third World contractors, either directly or through outsourcing firms. They were cheap and reasonably fluent in English. However, a large number were Nigerian, and this resulted in models slightly biased towards Nigerian English. Nigerians are far more fond of "delve" than Western English speakers, but most people wouldn't know that, because how often were they reading Nigerian literature or media? The word is particularly rampant in their business and corporate comms. 

Then you have other issues stemming from RLHF itself. Certain aspects of the generic chatbot persona were overly ingrained, as preference feedback was biased towards verbosity and enthusiasm. And now that the output of ChatGPT 3.5 and onwards is all over the wider internet, new base models expect, to an extent, that that's how a chatbot talks. This likely bleeds into the chatbots built on the contaminated base models, since they have very strong priors that they're a chatbot in the first place. 

That doesn't mean there isn't significant variance in how current models speak. Claude has a rather distinct personality and innate voice, and even within the various GPT models, o3 was highly terse and fond of dense jargon. With the availability of custom instructions, you can easily get them to talk to you in just about any style you prefer. 

GPT-5s Are Alive: Synthesis
AlphaAndOmega · 1mo

I was a big fan of Horizon Alpha, a stealth model available on OpenRouter, later revealed to be an early checkpoint/variant of GPT-5. Unfortunately, the release candidate isn't quite as good at my usual vibes benchmark when it comes to creative writing, be it 5 or 5-Thinking. 

(4.1 was really good at the same task, and surprisingly so, for a model marketed for coding. I missed it when it was gone, and I'm glad to have it back.) 

I was initially quite negative about GPT-5T, but I've warmed to it. It seems as smart as or smarter than o3, and is a well-rounded and capable model. The claim of a drastically reduced hallucination rate seems borne out in extensive use. 

Kaj's shortform feed
AlphaAndOmega · 1mo

I don't think 4o is that harmful in objective terms, but Altman made a big fuss about reducing sycophancy in GPT-5 and then immediately caved and restored 4o. It's a bad look as far as I'm concerned. 

More concerningly, if people can get this attached to an LLM as mediocre as 4o, we're in for a great time when actually intelligent and manipulative ASI gets here. 

Do Not Render Your Counterfactuals
AlphaAndOmega · 1mo

You're correct. When GPT's native image gen came out, it absolutely refused to touch kids. I didn't realize that they'd changed it (not least because I never had a reason to try) till I attempted this ill-fated experiment. 

[Linkpost] Avatar’s Dirty Secret: Nature Is Just Fancy Infrastructure
AlphaAndOmega · 1mo

An interesting idea, but I genuinely don't see how that's supported by the movies? Ignore this if you were just joking! 

Do Not Render Your Counterfactuals
AlphaAndOmega · 1mo

I'm sorry. I can only hope you've found peace, and I'd be the last to judge if you were to indulge in AI generated dreams of a better world. 
