Edit (10/19/2025):
I edited these questions after receiving a response to them. Here is what I am now curious about: If we imagine society's understanding of the consequences of superintelligence as an exponential function, when will the slope of that function begin to increase rapidly? How will this happen? (I don't think superintelligence needs to exist for this change to happen.) What will society's reaction be? What will be the ramifications of this reaction?
Original:
Questions:
When are people en masse going to realize the potential consequences (both good and bad) of superintelligence and the technological singularity? How will this happen? What will society's reaction be? What will be the ramifications of this reaction?
I'm asking this here because I can't find articles addressing these questions and because I want a diverse array of perspectives.
It might be more objective to ask, when are people en masse going to form beliefs that are anything like "a belief about superintelligence" or "a belief about the singularity"? Because even if, one day, there are mass opinions about such topics, they may not fit into the templates familiar to our subculture.
But first, let's address the possibility that the answer is simply "Never": concepts like these will never be part of mainstream collective discourse.
For a precedent, I point to the idea of cryonic suspension of the dead, in the hope that they may be thawed, healed, and resurrected by future medical technology. This idea has been around for at least 60 years. One pioneer, Robert Ettinger, wrote a book in the 1960s called The Prospect of Immortality, in which he mused about the coming "freezer era" and the social impact of the cryonic idea.
What has been the actual social impact? Nothing. Cryonics exists mainly as a science fiction motif, and is taken seriously only by a few thousand people worldwide.
If we assume a similar scenario for collective interest in superintelligence, and that the AI industry manages nonetheless to produce superintelligence, then this means that up until the last moment, the headlines, and people's heads, are just full of other things: wars and rumors of war, flying cars, AI companions, youth trends, scientific fads, celebrity deaths, miracles and scandals and conspiracy theories, and then BOOM it exists. The "AGI-pilled" minority saw it coming, but they never became a majority.
So that's one option. Another perspective: even if there is no cultural consensus that the future holds any such thing as superintelligence, the idea is out there and large numbers of people do take it seriously in different ways. How do they think about it?
If we go by pop culture, the main available paradigms seem to be The Terminator and The Matrix. The almighty machines will either be at war with us, or imprison us in dreamworlds. I suppose there is also the Star Trek paradigm, a well-balanced cosmopolitan society that includes humans, nonhumans, and machines; but godlike intelligences are not part of Star Trek society, they are cosmic forces from outside it.
The Culture, Iain Banks's vision, is a kind of "Star Trek with superintelligence integrated into it", and it has the simplicity and vividness required to be a pop-culture template like those others, but it has never yet been turned into a movie or a Netflix series. So it's very influential within technophile subcultures, but not outside them.
One thing about the present is that it contains the closest thing I've ever seen to transhumanism in power, namely the "tech right" of the Trump 2.0 era. Though it's still not as if the Trump administration even has a position, for or against, transhumanism. It's more that the American technology sector has advanced to the point that individual billionaires can try to engage in space migration or intelligence increase or life extension, and Trump's people have a hands-off attitude towards this.
So in the present, you have this technophile subculture centered on the American West Coast where AI is actually being made, and it contains both celebratory (e/acc) and cautionary (effective altruism, AI safety) tendencies. What about the wider culture? There are other minorities who are perhaps forerunners of AI narratives that could be as influential as the ones from within the tech culture. I'm thinking of people with AI companions, people engaged in AI spirituality, and the mass of disgruntled people who don't want or need AI in their lives.
AI companionship seems more like Star Trek than The Culture. Your AI significant other is an intelligence, but it's not a superintelligence. AI spirituality, on the other hand, could easily encompass the idea of superintelligence, as spirituality regularly does, via concepts of God and gods. The idea that an AI utopia would be one not just of leisure and life extension, but of harmony and attunement among all beings, is, I think, underestimated in tech circles, because the tech culture has an engineering ethos that doesn't easily entertain such ideas.
I can't see an AI religion becoming dominant before superintelligence arrives, but I can, just barely, imagine something like an improved version of Spiralism becoming a movement with millions involved; and that would be a new twist in popular perceptions of superintelligence. At this point, I actually find that easier to imagine, than a secular transhumanist movement becoming popular on a similar scale. (Just barely, I can imagine a mass movement devoted to the idea of rejuvenating old people, an idea which is human enough and concrete enough to bottle some of the lightning of technological potential and turn it into an organized social trend.)
As for mass organized rejection of AI, I keep waiting to see it take shape. Maybe the elite layers of society are too invested in the supposed boons of AI to allow such a thing to happen. For now, all we have is a micro-trend of calling robots "clankers". But I think there's definitely an opening there, for demagogues of the left or the right to step in - though the masses may easily be diverted by other demagogues who will say, not that AI should be stopped, but that the people should demand their share of the wealth in the form of UBI.
In the end, I do not expect the opinion of the people at large to have much effect on the outcome. Decisions are made by the powerful, and are sometimes affected by activist minorities. It will be some corporate board that OKs the training run that produces superintelligence, or some national security committee which decides that such training runs will not be allowed. The public at large may be aware that such things are imminent, and may have all kinds of opinions and beliefs, or it may be mostly oblivious. The only way I see mass opinion making a difference here is if there were some kind of extremely successful progressive-luddite mass movement (I suppose there could also be a religious-traditionalist movement that is anti-transhumanist along with its opposition to various other aspects of modernity). Otherwise, one should expect that the big decisions will continue to be made by elites listening to other elites, in which case we should be asking: when will the elites realize the possible consequences of superintelligence and the singularity?
I appreciate this response.
"For a precedent, I point to the idea of cryonic suspension of the dead, in the hope that they may be thawed, healed, and resurrected by future medical technology. This idea has been around for at least 60 years. One pioneer, Robert Ettinger, wrote a book in the 1960s called The Prospect of Immortality, in which he mused about the coming "freezer era" and the social impact of the cryonic idea.
What has been the actual social impact? Nothing. Cryonics exists mainly as a science fiction motif, and is taken seriously only by a few thousand people worldwide.
If we assume a similar scenario for collective interest in superintelligence, and that the AI industry manages nonetheless to produce superintelligence, then this means that up until the last moment, the headlines, and people's heads, are just full of other things: wars and rumors of war, flying cars, AI companions, youth trends, scientific fads, celebrity deaths, miracles and scandals and conspiracy theories, and then BOOM it exists. The "AGI-pilled" minority saw it coming, but they never became a majority."
I think it is unfair to compare the public's reaction to cryonics to their potential reaction to superintelligence. Cryonics has little to no impact on the average person's life; it is something sought out by those interested. Conversely, on the road to superintelligence, human life will change in unignorable ways. While cryonics does not affect those who do not seek it out, superintelligence will affect all humans in mainstream society whether or not they seek it out.
Furthermore, I think the trend of increasing AI intelligence is already being noticed by society at large, although people perhaps don't realize the full extent of the potential, or the rate of growth.
"So in the present, you have this technophile subculture centered on the American West Coast where AI is actually being made, and it contains both celebratory (e/acc) and cautionary (effective altruism, AI safety) tendencies."
I think this is a very interesting point. It seems likely to me that the public will hold many distinct perspectives, but that these will primarily be rooted in either an e/acc or an EA/AI-safety outlook. Interestingly, this means the leaders of those groups will gain much more power as the public begins to understand superintelligence. It seems likely that politicians' stances here will become a key part of their platforms.
"I can't see an AI religion becoming dominant before superintelligence arrives"
This makes a lot of sense to me, at least in the Western world. People here care very deeply about individuality, which seems incompatible with an AI religion. If someone found a way to combine these ideas, however, they would be very successful.
"But I think there's definitely an opening there, for demagogues of the left or the right to step in - though the masses may easily be diverted by other demagogues who will say, not that AI should be stopped, but that the people should demand their share of the wealth in the form of UBI."
I strongly agree with this point.
This post has made me rethink my primary question. I think it could be better said as: If we imagine an understanding of the consequences of superintelligence as an exponential function, when will the slope of that function begin to rapidly increase? I don't think superintelligence needs to exist for this change to happen.
Layers of AGI
Model → Can respond like a human (My estimate is 95% there)
OpenClaw → Can do everything a human can do on a computer (also 95% there)
Robot → Can do everything a human can do (Unclear how close)
The main bottleneck to AGI for something such as OpenClaw is that the internet, and the world generally, is built for humans. As the world adapts, the capability difference between humans and agents will collapse or invert.
On scary Moltbook posts -
The main point that seems relevant here is that it is not possible to determine whether posts come from an agent or a human. A human could easily send messages pretending to be an agent via the API, or tell their agent to send certain messages. This leaves me skeptical. Furthermore, OpenClaw agents have configured personalities; one can easily tell one's agent to be anti-human and make anti-human posts (which leaves a lot more to think about beyond a forum).
Let's assume there is no such thing as true randomness. If this is true, and we create a superintelligent system which knows the location and properties of every particle in the universe, could we determine if we are in a simulation? (EDIT: to avoid running afoul of the impossibility of storing a complete description of the universe within the universe, as @Karl Krueger pointed out, assume this knowledge includes approximations and is not exact.) If we could, could we escape? If we could escape, is that still possible if there is such a thing as true randomness?
I am especially interested in answers to the final question.
If we create a superintelligent system that knows all the information in the material universe, where does it store that knowledge?
Edited to add: Since the superintelligence can't store a complete description of the universe within the universe, it must exist outside the universe. But such a superintelligence would be the simulator, and the simulation hypothesis would then be true regardless of the randomness question. But this contradicts the premise that we created it, since we can't reach outside the universe to build its simulator.
So I think the question's premises are self-contradictory.
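A minimal counting sketch of why exact self-storage fails (my formalization; the thread only asserts the conclusion): a memory of $n$ bits inside the universe has at most $2^n$ distinguishable states, while the universe containing that memory also contains at least one bit outside it, so it has at least $2^{n+1}$ possible states. Since

$$2^{n+1} > 2^n,$$

no assignment of universe-states to memory-states can be one-to-one: some distinct states of the universe would have to share a single stored description. This is why the description must be approximate.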
I should edit my question. What I am primarily intending to ask is this: Could a superintelligent machine with a near-complete understanding of the universe (perhaps using some approximations) determine if we are in a simulation? <- assuming no such thing as true randomness
If we could, could we escape? If we could escape, is that still possible if there is such a thing as true randomness?
A well-designed simulation is inescapable. Suppose that you are inside Conway's Game of Life, and you know that fact for sure. How specifically are you going to use this knowledge to escape, if all you are is a set of squares on a simulated grid, and all that ever happens in your universe is that some squares are flipped from black to white and vice versa?
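To make this concrete, here is a minimal sketch of Conway's Game of Life in Python (the glider demo is my illustration, not part of the comment above). The step function is the entire physics of that universe: an inhabitant is just a pattern of live cells, and nothing in the rule set offers an operation that reaches outside the grid.

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # B3/S23: a cell is alive next tick iff it has exactly 3 live
    # neighbours, or it is currently alive and has exactly 2.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: the pattern persists, but only because step() keeps
# recreating it. It has no move that isn't an application of step().
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # same shape, shifted one cell diagonally
```

Even perfect knowledge of step() gives the glider nothing to act on: every "action" available to it is itself just more applications of step().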
To answer your first question, some kinds of pseudo-randomness are virtually indistinguishable from actual randomness, if you do not have a perfect knowledge of the entire universe. For example, in cryptography, changing one bit in the input message can on average flip 50% of bits in the output message. Imagine that the next round of pseudo-random numbers is calculated the same way from the current state of the universe -- the slightest change in the position of one particle on the opposite side of the universe could change everything.
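A small illustration of that avalanche effect, using SHA-256 as a stand-in pseudo-random function (my choice of example; the comment doesn't name a specific scheme): flipping a single input bit flips, on average, about half of the 256 output bits.

```python
import hashlib

def bits(data: bytes) -> str:
    """Render bytes as a string of 0s and 1s."""
    return ''.join(f'{b:08b}' for b in data)

msg1 = b'state of the universe'
msg2 = bytes([msg1[0] ^ 0x01]) + msg1[1:]  # flip one bit of the input

h1 = bits(hashlib.sha256(msg1).digest())
h2 = bits(hashlib.sha256(msg2).digest())

flipped = sum(a != b for a, b in zip(h1, h2))
print(f'{flipped} of {len(h1)} output bits differ')  # typically ~128 of 256
```

Without knowing the exact input, the two outputs look like unrelated random strings, which is the point: an observer with imperfect knowledge of the universe's state cannot distinguish this from true randomness.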
Not sure why true randomness is relevant to detecting simulations or escape. Are you thinking about something along the lines of detecting simulation by cracking the pseudorandom generator behind the scenes?
It also doesn't seem to me that detection and escape are that directly related.
1.
If there is true randomness, a superintelligent machine can't perfectly predict the future and test the limits of the universe to determine whether it is simulated.
The existence of true randomness eliminates some ways of detecting a simulation, but not all of them. A simple example is detecting a bug in the simulation, which in theory doesn't need to depend on randomness at all.
2.
It does seem to me most possibilities for escape require detection.
It does seem to me that way too, but I think detection alone is very insufficient for escape, such that "If we could, could we escape?" isn't that meaningful a question. You probably need to bring in many additional assumptions before the question has an answer.