Daniel Jacobson's Shortform

by Daniel Jacobson
18th Oct 2025

Daniel Jacobson · 2d

Edit (10/19/2025):
 

I edited these questions after receiving a response to them. Here is what I am curious about: if we imagine public understanding of the consequences of superintelligence as an exponential function, when will the slope of that function begin to increase rapidly? How will this happen? (I don't think superintelligence needs to exist for this change to occur.) What will society's reaction be? What will be the ramifications of that reaction?

Original:

Questions:

When are people en masse going to realize the potential consequences (both good and bad) of superintelligence and the technological singularity? How will this happen? What will society's reaction be? What will be the ramifications of this reaction?

I'm asking this here because I can't find articles addressing these questions and because I want a diverse array of perspectives. 
 

Mitchell_Porter · 2d

It might be more objective to ask, when are people en masse going to form beliefs that are anything like "a belief about superintelligence" or "a belief about the singularity"? Because even if, one day, there are mass opinions about such topics, they may not fit into the templates familiar to our subculture. 

But first, let's address the possibility that the answer is simply "Never": concepts like these will never be part of mainstream collective discourse. 

For a precedent, I point to the idea of cryonic suspension of the dead, in the hope that they may be thawed, healed, and resurrected by future medical technology. This idea has been around for at least 60 years. One pioneer, Robert Ettinger, wrote a book in the 1960s called The Prospect of Immortality, in which he mused about the coming "freezer era" and the social impact of the cryonic idea. 

What has been the actual social impact? Nothing. Cryonics exists mainly as a science fiction motif, and is taken seriously only by a few thousand people worldwide. 

If we assume a similar scenario for collective interest in superintelligence, and that the AI industry nonetheless manages to produce superintelligence, then this means that up until the last moment, the headlines, and people's heads, are just full of other things: wars and rumors of war, flying cars, AI companions, youth trends, scientific fads, celebrity deaths, miracles and scandals and conspiracy theories, and then BOOM it exists. The "AGI-pilled" minority saw it coming, but they never became a majority.

So that's one option. Another perspective: even if there is no cultural consensus that the future holds any such thing as superintelligence, the idea is out there and large numbers of people do take it seriously in different ways. How do they think about it?

If we go by pop culture, the main available paradigms seem to be The Terminator and The Matrix. The almighty machines will either be at war with us, or imprison us in dreamworlds. I suppose there is also the Star Trek paradigm, a well-balanced cosmopolitan society that includes humans, nonhumans, and machines; but godlike intelligences are not part of Star Trek society, they are cosmic forces from outside it. 

The Culture, Iain M. Banks's vision, is a kind of "Star Trek with superintelligence integrated into it", and it has the simplicity and vividness required to be a pop-culture template like those others, but it has never yet been turned into a movie or a Netflix series. So it's very influential within technophile subcultures, but not outside them.

One thing about the present is that it contains the closest thing I've ever seen to transhumanism in power, namely the "tech right" of the Trump 2.0 era. Though it's still not as if the Trump administration even has a position, for or against, transhumanism. It's more that the American technology sector has advanced to the point that individual billionaires can try to engage in space migration or intelligence increase or life extension, and Trump's people have a hands-off attitude towards this. 

So in the present, you have this technophile subculture centered on the American West Coast where AI is actually being made, and it contains both celebratory (e/acc) and cautionary (effective altruism, AI safety) tendencies. What about the wider culture? There are other kinds of minorities who are perhaps forerunners of AI narratives, and they could be as influential as the ones from within the tech culture. I'm thinking of people with AI companions, people engaged in AI spirituality, and the mass of disgruntled people who don't want or need AI in their lives.

AI companionship seems more like Star Trek than The Culture. Your AI significant other is an intelligence, but it's not a superintelligence. AI spirituality, on the other hand, could easily encompass the idea of superintelligence, as spirituality regularly does via concepts of God and gods. I think the idea that an AI utopia would be one not just of leisure and life extension, but of harmony and attunement among all beings, is underestimated in tech circles, because the tech culture has an engineering ethos that doesn't easily entertain such ideas.

I can't see an AI religion becoming dominant before superintelligence arrives, but I can, just barely, imagine something like an improved version of Spiralism becoming a movement with millions involved; and that would be a new twist in popular perceptions of superintelligence. At this point, I actually find that easier to imagine than a secular transhumanist movement becoming popular on a similar scale. (Just barely, I can imagine a mass movement devoted to the idea of rejuvenating old people, an idea which is human enough and concrete enough to bottle some of the lightning of technological potential and turn it into an organized social trend.)

As for mass organized rejection of AI, I keep waiting for it to take shape. Maybe the elite layers of society are too invested in the supposed boons of AI to allow such a thing to happen. For now, all we have is a micro-trend of calling robots "clankers". But I think there's definitely an opening there for demagogues of the left or the right to step in - though the masses may easily be diverted by other demagogues who will say, not that AI should be stopped, but that the people should demand their share of the wealth in the form of UBI.

In the end, I do not expect the opinion of the people at large to have much effect on the outcome. Decisions are made by the powerful, and are sometimes affected by activist minorities. It will be some corporate board that OKs the training run that produces superintelligence, or some national security committee that decides such training runs will not be allowed. The public at large may be aware that such things are imminent, and may have all kinds of opinions and beliefs, or it may be mostly oblivious. The only way I see mass opinion making a difference here is if there were some kind of extremely successful progressive-luddite mass movement (I suppose there could also be a religious-traditionalist movement that is anti-transhumanist along with its opposition to various other aspects of modernity). Otherwise, one should expect that the big decisions will continue to be made by elites listening to other elites, in which case we should be asking: when will the elites realize the possible consequences of superintelligence and the singularity?

Daniel Jacobson · 1d

I appreciate this response. 

"For a precedent, I point to the idea of cryonic suspension of the dead, in the hope that they may be thawed, healed, and resurrected by future medical technology. This idea has been around for at least 60 years. One pioneer, Robert Ettinger, wrote a book in the 1960s called The Prospect of Immortality, in which he mused about the coming "freezer era" and the social impact of the cryonic idea. 

What has been the actual social impact? Nothing. Cryonics exists mainly as a science fiction motif, and is taken seriously only by a few thousand people worldwide. 

If we assume a similar scenario for collective interest in superintelligence, and that the AI industry nonetheless manages to produce superintelligence, then this means that up until the last moment, the headlines, and people's heads, are just full of other things: wars and rumors of war, flying cars, AI companions, youth trends, scientific fads, celebrity deaths, miracles and scandals and conspiracy theories, and then BOOM it exists. The "AGI-pilled" minority saw it coming, but they never became a majority."

I think it is unfair to compare the public's reaction to cryonics with their potential reaction to superintelligence. Cryonics has little to no impact on the average person's life; it is something that those interested can seek out. Conversely, on the road to superintelligence, human life will change in unignorable ways. While cryonics does not affect those who do not seek it out, superintelligence will affect all humans in mainstream society whether or not they seek it out.

Furthermore, I think the trend of increasing AI intelligence is already being noticed by society at large, although people perhaps don't realize the full extent of its potential or its rate of growth.

"So in the present, you have this technophile subculture centered on the American West Coast where AI is actually being made, and it contains both celebratory (e/acc) and cautionary (effective altruism, AI safety) tendencies."

I think this is a very interesting point. It seems likely to me that the public will hold many distinct perspectives, but that these will primarily be rooted in either an e/acc or an EA/AI-safety outlook. Interestingly, this means the leaders of those groups will gain much more power as the public begins to understand superintelligence. It seems likely that politicians' stances here will become a key part of their platforms.
 

"I can't see an AI religion becoming dominant before superintelligence arrives"

This makes a lot of sense to me, at least in the Western world. People here care very deeply about individuality, which seems incompatible with an AI religion. If someone found a way to combine these ideas, however, they would be very successful.

"But I think there's definitely an opening there for demagogues of the left or the right to step in - though the masses may easily be diverted by other demagogues who will say, not that AI should be stopped, but that the people should demand their share of the wealth in the form of UBI."

I strongly agree with this point. 

This post has made me rethink my primary question. I think it would be better put as: if we imagine public understanding of the consequences of superintelligence as an exponential function, when will the slope of that function begin to increase rapidly? I don't think superintelligence needs to exist for this change to occur.
