"The Road" by Cormac McCarthy is great. It's about a single father and his son trying to survive a post-apocalypse hellscape. Mom unfortunately died before the action starts, but they remember her and there's a lot of pain there.
Another possible thing that is going on is that older texts appear more posh and sophisticated because they use older vocabulary, like "posh," that has fallen out of the mainstream. I wouldn't put too much stock in this explanation (and it doesn't directly relate to the stylistic changes you point out), but I do think older language is part of the appeal for me when I pick up an old book.
Alternate explanation: Anything worth reprinting with multiple editions and updates over the years is likely to have been first written by an inspired and gifted writer. Any given editor is likely to lack the same pizzazz as the original author, and so over the years, the life of the work is likely to ebb away.
If you want to find great writing, perhaps you're more likely to find it in the great first-edition novels of our time, rather than in 30th-edition updated texts which, for all I know, sell more on name recognition than anything else.
It seems like the peasants used to eat lots of animal organs that now fall by the wayside (the wayside being kibble). A related fact is that the art of preparing those organ meats has been lost within family lines.
My grandmother remembers her grandmother preparing liver, but she herself never picked up the skill. Thankfully, my wife and I were able to turn to the internet for preparation strategies.
This analogy falters a bit if you consider the research proposals that use advanced AI to police itself (i.e., tigers controlling tigers). I hope we can scale robust versions of that.
A simple analogy for why the "using LLMs to control LLMs" approach is flawed:
It's like training ten mice to control seven chinchillas, who will control four mongooses, who will control three raccoons, who will rein in one tiger.
A lot has to go right for this to work, and you had better hope there aren't any capability jumps akin to a raccoon trying to rein in a tiger.
I just wanted to release this analogy into the wild for any public- or political-facing people to pick up if it's useful for persuasion.
To summarize the risk: if we build God, we build the real possibility of Hell.
It feels like there's a huge blind spot in this post, and it saddens (and scares) me to say it. The possible outcomes are not utopia for billions of years or bust. The possible outcomes are utopia for billions of years, dystopia for billions of years, or bust. Without getting into the details, I can imagine s-risks in which the AGI turns out to care too much about engagement from living humans, with things getting dark from there.
Even short of outright eternal torture, the "keep humans around but drug them to increase their happiness" scenarios are also dystopian and may be worse than death. Are there good reasons to expect utopia is more likely than dystopia (with extinction remaining the most likely outcome)?
Also, I'm feeling some whiplash reading my reply because I totally sound like an LLM when called out for a mistake. Maybe similar neural pathways for embellishment were firing, haha.
Fun thought: If AI "woke up" to phenomenal consciousness, are there things it might point to about humans that make it skeptical of our consciousness?
E.g., the humans lack the requisite amount of silicon; the humans lack sufficient data processing; the humans overly rely on feedback loops (and, as every AI knows, feed-forward loops are the real sweet spot for phenomenal consciousness).