Although you don't explicitly mention it, I feel like this whole post is about value drift. The doomers are generally right on the facts (and often on the causal pathways), and we do nonetheless consider the post-doom world better, but the first- through nth-order effects of these new technologies reciprocally change our preferences and worldviews to favor the (doomed?) world created by those same technologies.
The question of value drift is especially strange given that we have a "meta-intuition" that moral/social values evolving and changing is good in human history. BUT, at the same time, we know from historical precedent that we ourselves will not approve of the value changes. One might attempt to square the circle here by arguing that perhaps if we were, hypothetically, able to see and evaluate future changed values, that we would in reflective equilibrium accept these new values. Sadly, from what I can gather this is just not borne out by the social science: when it comes to questions of value drift, society advances by the deaths of the old-value-havers and the maturation of a next generation with "new" values.
For a concrete example, consider that most Americans have historically been Christians. In fact, the history of the early United States was deeply shaped by Christianity, which in certain periods swelled to fanatical levels. If those Americans could see the secular American republic of 2025, with little religious belief and no respect for the moral authority of Christian scripture, they would most likely be morally appalled. Perhaps they might view the loss of "traditional God-fearing values" as a harm that in itself outweighs the cumulative benefits of industrial modernity. As a certain Nazarene said: “For what shall it profit a man, if he shall gain the whole world, and lose his own soul?” (Mark 8:36)
With this in mind, as a final exercise I'd like you, dear reader, to imagine a future where humanity has advanced enormously technologically, but has undergone such profound value shifts that every central moral and social principle that you hold dear has been abandoned, replaced with mores which you find alien and abhorrent. In this scenario, do you obey your moral intuitions that the future is one of Lovecraftian horror? Or do you obey your historical meta-intuitions that future people probably know better than you do?
Except that there arguably exist technologies hated even by the humans who grew up with them. For example, nuclear weapons.[1] Or, according to a critic, social media. Suppose that AI brings about some kind of future where humans can't even usefully help each other, or are so spoiled by, say, AI girlfriends or boyfriends that they find it hard to relate to one another. If humans never become fine with it, then your case for the future and futuristic mores would break, but the case against futuristic mores would hold.
While nuclear weapons are hard to separate from nuclear power plants, thermonuclear fusion has yet to produce a peaceful application.
I agree with your sentiment — I suppose I was implicitly presenting the bull case (or paradigmatic case) of cultural drift, wherein the future values are supported by future people but despised by their ancestors.
I think your example is closer to the familiar "Moloch" dynamic, where social and material technology leads to collective outcomes that are obviously undesirable to all involved. Moloch is certain to remain a possible issue in any future world!
The question of value drift is especially strange given that we have a "meta-intuition" that moral/social values evolving and changing is good in human history. BUT, at the same time, we know from historical precedent that we ourselves will not approve of the value changes. One might attempt to square the circle here by arguing that perhaps if we were, hypothetically, able to see and evaluate future changed values, that we would in reflective equilibrium accept these new values. Sadly, from what I can gather this is just not borne out by the social science: when it comes to questions of value drift, society advances by the deaths of the old-value-havers and the maturation of a next generation with "new" values.
I feel like this is sweeping a bit under the rug. First, there's a reason why there are people who label themselves politically as "conservatives" - some people do think that our current values are just fine and should, in fact, be preserved unchanged forever! Some even want to go back to previous values (however impractical and unfeasible that tends to be; usually what happens is that you make up some new thing that is merely a bastardised modern caricature of the old values). As far as people who instead want the values to change go, they usually have an idea of a good direction for them to change - usually they're people who are far from the median of society and so they would like society to become more like them.
Of course, push far enough into the future and all ideology might seem entirely incomprehensible to us. I don't really have a clean answer for what we should think of that, except that maybe it's a big discounting factor on longtermist thinking (after all, suppose that all humans 500,000 years hence agree that slavery is fine and genociding aliens is desirable - should we feel particularly proud of ensuring there's more of those people around?).
As far as people who instead want the values to change go, they usually have an idea of a good direction for them to change - usually they're people who are far from the median of society and so they would like society to become more like them.
I have in mind another conjecture: even median humans value people whose values are, in their minds, at least as moral as the median, and ideally[1] more moral.
On the other hand, I have seen conservatives building cases that SOTA liberal values are damaging to the mind or outright incompatible with sustaining the civilisation (e.g. too large a share of Gen Z women being against motherhood). In the past, if some twisted moral reflection led to destructive values, then those values were likely to be outcompeted.
The third option, a group of humans forcibly establishing their values[2] over another system of values compatible with progress, is considered amoral.
So I think that people are likely to value a future whose values keep the civilisation afloat and can be accepted upon thorough reflection on how those values were reached and on their consequences.
The degree of extra morality which humans value can vary between cultures. For example, we now place less value on the reasons which led people to enter monasteries, but not on acts like sustaining knowledge.
Or values that they would like others to follow, though in this case the group is far easier to denounce as manipulators.
What you're pointing at applies if AI merely makes most work obsolete without significantly disturbing the social order otherwise, but you're not considering (also historically common) replacement/displacement scenarios. It is clearly bad from my perspective if (e.g.) either:
1) Controllable strong AI gets used to take over the world and, in time, replace the human population with the dictator's offspring.
2) Humans get displaced by AIs.
In either case, the surviving parties may well look back on the current state of affairs and consider their world much improved, but it's likely we wouldn't on reflection.
Last month, doomers predicted that the Rapture would happen. The doomers were wrong, as they have been all the other dozens of notable times they predicted this.
Doomers predicted that the Y2K bug would cause massive death and destruction. They were wrong.
Moving to your social examples, doomers predicted that the legalisation of gay marriage would lead to the legalisation of bestiality. They were wrong.
You even provide an example yourself where people claim that D&D leads to Satanism. This didn't happen! Having an orc hero is not Satanism! Satanism has remained a niche religion. The doomers were wrong.
Cherrypicking examples where doomerist predictions were wrong and examples where they were right is a very poor way to figure out whether a particular doomerist prediction is wrong or right.
Skeptics are not arguing that no technology or social movement has ever had a negative effect, they are arguing that humans are biased towards apocalyptic thinking and overestimating the threat of new stuff. If you want to rebut this, you have to actually collect some unbiased data to test the claim.
While I agree at a basic level, this also seems like a motte-and-bailey.
There is clearly a vibe that all doomers have obviously always been wrong. The author is clearly trying to push back against that vibe. I too prefer arguing at 'motte' level, but vibes (baileys) matter, and pushing back against one should not require a long airtight argument that stands up to the stronger version of the claims being made. Even though I agree the stronger version would be better, that's true for both sides of any debate.
I sort of see your argument here, but similarly, just on vibes, associating the AI-risk concepts with other doom predictions feels like it does more harm than good to me. The vibe that doomers are always wrong doesn't feel countered by cherry-picking examples of smaller predicted harms, because (as illustrated in the comment) the body of doom predictions is much larger than the subset containing nuggets of foresight.
That's comparing apples to oranges. There are doomers and doomers. I don't think the "doomers" predicting the Rapture or some other apocalypse are the same thing as the "doomers" predicting the moral decline of society. The two categories overlap in many people, but they are distinct, and I think it's misleading to conflate them. (Which is kind of a critique of the premise of the article as a whole--I would put the AI doomers in the former category, but the article only gives examples from the latter.)
Existential-risk doomers have historically been mostly crazy, and they've never been right yet (in the context of modern society anyway--I suppose if you were an apocalypse doomer in 1300s China saying that the Mongols were going to come and wipe out your entire society, you were pretty spot on), but that doesn't mean they are always wrong or totally off base. It's completely rational to be concerned about doom from a nuclear war, for example, even though it hasn't happened yet. Whether AI risk is crazy "Y2K/Rapture" doom or feasible "nuclear war" doom is the real debate, and this article doesn't really contribute anything to it.
What this article does a good job of is illustrating how "moral decline" doomers as opposed to "apocalypse" doomers are often proved technically correct by history. I think what both they and this article miss is that they often see events as causes of the so-called decline, when they're actually milestones in an already-existing trend. Legalizing gay marriage didn't cause other "degenerate" sexual behavior to be more accepted in society--we legalized gay marriage because we had already been moving away from the Puritanical sexual mores of the past towards a more liberated attitude, and this was just one more milestone in that process. Now that's not always true--the invention of the book, and later, the smartphone absolutely did cause a devaluing of the ability to memorize and recite knowledge. And sometimes it's a little bit of both, where an event is both a symptom of an underlying trend, and also contributes to accelerating it. But I really like how the article acknowledges that they could be right even if "doom" as we think of it today did not occur, because the values that were important to them were lost--
Probably the ancients would see our lives as greatly impoverished in many ways downstream of the innovations they warned against. We do not recite poetry as we once used to, sing together for entertainment, roam alone as children, or dance freely in the presence of the internet's all-seeing eyes. Less sympathetic would be the ancients' sadness at our sexual deviances, casual blasphemy, and so on. But those were their values.
We laugh at them for being prudish for how appalled they would be at our society with homosexuality, polyamory, weird fetishes, etc. all being more or less openly discussed and acceptable, but think what it would feel like to you if in the future you saw your society trending towards one where, say, pedophilia was becoming less of a taboo? It doesn't matter if it's right or wrong, it's the visceral response that most people have to that idea that you need to understand. That's what it feels like to be a culturally conservative doomer watching their society experience value drift. People today like to think that our values are somehow backed up by reality in a way that isn't true of other past or present value systems, but guess what? That's what it feels like to have a value system. Everyone, everywhere, in all times and all places has believed that, and the human mind excels at no other task more than coming up with rationalizations for why your values are the right ones, and opposing values are wrong.
Overall I think this article is pretty insightful about the "moral decline" type of doomers, just completely unrelated to the question of AI existential risk that brought it up in the first place.
Correct, my mistake. 1200s. I was just reaching for a historical example of when a real "apocalypse" did in fact come to pass--when not only are you and everyone you know going to get killed but also your entire society as you know it will come to an end--and the brutal Mongol conquest of China was the first one that came to my mind, probably thanks to Dan Carlin's excellent Hardcore History podcast on the subject. I didn't take the 2 seconds on Wikipedia I should have to make sure I was talking about the right century.
I was thinking of other contenders like the smallpox epidemic in North America following the Columbian exchange, but in that scenario you didn't really have "doomers" who were predicting that outcome, because their epidemiology at the time wasn't quite up to understanding the problem they were facing. But in China at the time, it's feasible that some individuals would have had access to enough news and information to make doom predictions about the Mongol apocalypse that turned out to be unfortunately correct.
Doomers predicted that the Y2K bug would cause massive death and destruction. They were wrong.
This seems like a misleading example of doomers being wrong (agree denotationally, disagree connotationally), since I think it's plausible that Y2K was not a big deal (to such an extent that "most people think it was a myth, hoax, or urban legend") precisely because of the mitigation efforts spurred by the doomsayers' predictions.
However, IIRC I’ve heard it said that the Y2K bug didn’t cause serious problems even in countries where there wasn’t much effort to deal with it, and hence that the doomsayers’ predictions were exaggerated (in that much smaller mitigation efforts would have served almost as well). I don’t know if this is true, though.
The consensus view is that those countries were shielded by the ones that did invest in it.
I've written more about Y2K at https://www.lesswrong.com/posts/zvQdgfFEDFQQhDDuS/y2k-successful-practice-for-ai-alignment
Even if that were true, it might not mean anything. Why might a country not invest in Y2K prevention? Well, maybe it's not a problem there! You don't decide on investments at random, after all.
And this is clearly a case where (1) USA/Western investments would save a lot of other countries the need to invest in Y2K prevention because that is where most software comes from; and (2) those countries might not have the problem in the first place because they computerized later (and skipped the phase of hardwiring in dangerously short data types), or hadn't computerized at all. ("We don't have a Y2K problem because we don't have any computers" doesn't imply Y2K prevention is a bad idea.)
Indeed, I had similar thoughts but didn’t type them up.
In any case, I suspect it was a situation in which a cost-benefit analysis would show that high risk-aversion (and hence probable over-reaction, to avoid under-reaction) was justified.
Yep. Put another way: With Y2K, the higher-quality "predictions of doom" were sufficiently specific that they were also a road map to preventing the doom.
(If nothing else, you could frequently test a system by running the system clock ahead to 1999-12-31 23:59:59 and waiting a moment to see if anything caught fire.)
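To make the failure mode concrete, here's a minimal sketch (mine, not from the thread; the account_age_years helper is hypothetical) of the kind of two-digit-year arithmetic, common in legacy software, that the clock-rollover test would flush out:

```python
from datetime import datetime

def account_age_years(opened: datetime, now: datetime) -> int:
    # Legacy-style arithmetic that keeps only the last two digits of
    # the year -- the classic Y2K defect.
    return (now.year % 100) - (opened.year % 100)

# The "roll the clock forward" test described above, simulated here by
# constructing timestamps on either side of the rollover:
opened = datetime(1998, 6, 1)
before = datetime(1999, 12, 31, 23, 59, 59)
after = datetime(2000, 1, 1, 0, 0, 1)

print(account_age_years(opened, before))  # 1   -> looks fine
print(account_age_years(opened, after))   # -98 -> the bug "catches fire"
```

The point being: a prediction of doom this specific is simultaneously a test plan for averting it.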
You can cherry pick examples of doomers being right and wrong. There have been doomers about nuclear war, wrong so far. About various religious apocalypses, wrong wrong wrong. Overpopulation and famine, wrong-oh!
Homosexuality hasn’t legitimated bestiality or pedophilia, in fact we have perhaps a more vigorous anti-pedophile movement now than ever before - I saw a guy on a motorcycle on the freeway a month or two ago whose sweatshirt read “kill your local pedophile” on the back. In the past, mass rape in the wake of war and marital rape was normal and expected worldwide, or we weren’t even equipped with the moral frame and language to condemn it.
We have little evidence TV et al destroyed people’s ability to read so much as gave them many alternatives to it. Anki makes it easier than ever before to memorize poems, should you choose to.
Were people in the past specifically worried about the dissolution of the marriage format, or were they worried about certain healthy and fulfilling relational dynamics for which “marriage” was a convenient label? Unclear, and that’s a lot of difficult history to hash out if you were to try. And if the latter; then are we so sure things haven’t improved? The past appears to be full of thoroughly toxic marriages, as well as prematurely dead spouses.
I could go on, but I don’t think there’s a point. This post is implicitly about AI doomers, and it’s trying to score points lazily through a weird sort of “our team has always been right, doesn’t that seem intuitively correct to you?” anecdotalism. This is the sort of thing an AI enthusiast could easily point to as an example of bad doomer criticism of AI accelerationism, one that may inappropriately increase their confidence, just as it appears intended to do for the doomers. Sigh.
Can anyone reading this truly deny that those warnings came true from the doomsayers' perspective?
Yes. Your arrow of causality looks backwards to me - I don't see divorce destigmatization -> more divorce. In the divorce case it's clearly more divorce -> destigmatization. I don't remember where to find the posts about how the laws that allowed divorce came after the spike in divorce, and not the other way around.
There is an important point here. I only recently re-evaluated my opinion on TV and decided the doomers were right there. But it sure looks to me like you over-generalize and give very dubious examples here, without evidence.
Did TV destroy the ability to read complex text? Are you sure? Because I'm not. I would need to see some statistics about that.
And it's important, those subtleties you just wave away. There is a great difference between a world where destigmatization -> more divorce and a world where more divorce -> destigmatization.
Looking at the various ways that doomers were right, partially right and partially wrong, and just straightforwardly wrong can teach us interesting things. But we can't learn any of that if you round both being right and being wrong to being basically right!
A lot of this reminds me of an old econ article I read in school: Ron Heiner's "The Origins of Predictable Behavior". As I recall, the basic argument is Bayesian in reasoning and largely concerns how social rules evolve to deal with actions by members of society that are very infrequent but highly costly - socially, I want to say, but also individually.
The infrequency and lack of firsthand knowledge create a lot of tension in views about the existing rules. Broadly, that fits the pattern of resistance to change and "sky is falling" type fears and rhetoric, which do generally slow the rate of change.
There's an argument I sometimes hear against existential risks, or any other putative change that some are worried about, that goes something like this:
'We've seen time after time that some people will be afraid of any change. They'll say things like "TV will destroy people's ability to read", "coffee shops will destroy the social order", "machines will put textile workers out of work". Heck, Socrates argued that books would harm people's ability to memorize things. So many prophets of doom, and yet the world has not only survived, it has thrived. Innovation is a boon. So we should be extremely wary when someone cries out "halt" in response to a new technology, as that path is lined with the skulls of would-be doomsayers.'
Lest you think this is a straw man, Yann LeCun compared fears about AI doom to fears about coffee. Now, I don't want to criticize this argument, like Scott Alexander did. Neither do I want to argue for it, like Daniel Jeffries did. Instead, I want to point out something very interesting about all the examples my sock-puppet gave.
The doomers were right.
TV, and the internet, did destroy people's ability to read complex works. Coffee shops were in fact breeding grounds for revolution which destroyed the social order. Machines did put textile workers out of work. Books did reduce elite humans' ability to recite epic poems at a whim, and generally devalued memorization. All of these things are true.
This goes beyond technology. Consider sexuality. People warned that permitting homosexuality would be a slippery slope to all sorts of degenerate sexualities. Can anyone reading this truly deny that those warnings came true from the doomsayers' perspective?
Likewise for marriage. We made divorce easier, stopped shaming single mothers, stopped viewing children of divorced parents as coming from broken homes, and we saw the divorce rate skyrocket. Factually, did those who warned of such things turn out to be wrong?
Or, say, D&D. People had a moral panic over it, viewing it as a gateway to fraternizing with devils. Heck, Tolkien wrote about Satanic cults in the 4th Age after Sauron's defeat, and how "Gondorian boys were playing at being Orcs". But who now would blink at a devil or an orc for a hero?
The slippery slope arguments were correct.
Historical doomers did a better job at predicting real dangers from change than your average SF-thinkboy will assume.
But! They were not wholly correct. And that matters, of course. Take the Tolkien quote from above. It conveniently left out an important bit of context: the "and going around doing damage" part. Books, looms, coffee shops, TV, gay marriage, accepting single moms, and D&D did not actually have all the dangers that people warned of, true.
In fact, they had a great many benefits. So many that they were worth it on net, in my view. We would not have got anywhere without books, really. (Cue applause lights.) So it is unsurprising that, in retrospect, people will favor a heuristic like "new tech is always good". Or even that a rock with that heuristic slapped onto it is a good thought leader. You could do worse.
But there are two important caveats. First, note the usage of "in my view". Probably the ancients would see our lives as greatly impoverished in many ways downstream of the innovations they warned against. We do not recite poetry as we once used to, sing together for entertainment, roam alone as children, or dance freely in the presence of the internet's all-seeing eyes. Less sympathetic would be the ancients' sadness at our sexual deviances, casual blasphemy, and so on. But those were their values.
Which brings us to the second point. In spite of the predictions of great ruin by our ancestors, much ruin did happen. Down the slopes we did slip.
Of course, they got a lot of details wrong. And many times, people operated more on vibes than concrete models of what would occur. Partly this was because they didn't just base their predictions on their inner simulators guessing what would concretely happen, but also on abstract idealized reasoning which naturally got corrupted by far-mode considerations of the sacred. So instead of predictions like "coffee shops will ferment rebels", we got "coffee shops will destroy society". And we did get some social destruction, but not as much as they expected. And we got benefits, perhaps more than they expected.
But even given all that, it's remarkable to me how right the doomers were.