It's hard to imagine an economic/political system that doesn't eventually lead to an intelligence explosion. Maybe specific rules are easier to imagine: for example, if you had a country in which building any form of machine intelligence was forbidden (I am not advocating for this), there wouldn't be an intelligence explosion there. Such countries could have similar levels of growth all the way up to the advent of machine intelligence. It's important to remember, though, that such countries could be caught in prisoner's dilemmas over power, which would give them a strong incentive not to have such rules.
I should edit my question. What I am primarily intending to ask is this: could a superintelligent machine with a near-complete understanding of the universe (perhaps using some approximations) determine whether we are in a simulation, assuming there is no such thing as true randomness?
If we could, could we escape? If we could escape, is that still possible if there is such a thing as true randomness?
From what I've read, he believed the U.S. should nuke them before they developed a nuclear bomb. He was extremely worried about nuclear catastrophe once multiple powers had nukes. He also advocated for bombing Kyoto instead of Hiroshima and Nagasaki, reasoning that the disastrous result would prevent countries from using nuclear weapons or even developing them.
Let's assume there is no such thing as true randomness. If this is true, and we create a superintelligent system which knows the location and properties of every particle in the universe, could we determine if we are in a simulation? (EDIT: to avoid running afoul of the impossibility of storing a complete description of the universe within the universe, as @Karl Krueger pointed out, assume this knowledge includes approximations and is not exact.) If we could, could we escape? If we could escape, is that still possible if there is such a thing as true randomness?
I am especially interested in answers to the final question.
I appreciate this response.
"For a precedent, I point to the idea of cryonic suspension of the dead, in the hope that they may be thawed, healed, and resurrected by future medical technology. This idea has been around for at least 60 years. One pioneer, Robert Ettinger, wrote a book in the 1960s called The Prospect of Immortality, in which he mused about the coming "freezer era" and the social impact of the cryonic idea.
What has been the actual social impact? Nothing. Cryonics exists mainly as a science fiction motif, and is taken seriously only by a few thousand people worldwide.
If we assume a similar scenario for collective interest in superintelligence, and that the AI industry manages nonetheless to produce superintelligence, then this means that up until the last moment, the headlines, and people's heads, are just full of other things: wars and rumors of war, flying cars, AI companions, youth trends, scientific fads, celebrity deaths, miracles and scandals and conspiracy theories, and then BOOM it exists. The "AGI-pilled" minority saw it coming, but, they never became a majority. "
I think it is unfair to compare the public's reaction to cryonics with their potential reaction to superintelligence. Cryonics has little to no impact on the average person's life; it is something sought out only by those who are interested. Conversely, on the road to superintelligence, human life will change in unignorable ways. While cryonics does not affect those who do not seek it out, superintelligence will affect everyone in mainstream society whether or not they seek it out.
Furthermore, I think the trend of increasing AI intelligence is already being noticed by society at large, although people perhaps don't realize the full extent of its potential or its rate of growth.
"So in the present, you have this technophile subculture centered on the American West Coast where AI is actually being made, and it contains both celebratory (e/acc) and cautionary (effective altruism, AI safety) tendencies."
I think this is a very interesting point. It seems likely to me that the public will hold many distinct perspectives, but that these will primarily be rooted in either an e/acc or an EA/AI-safety worldview. Interestingly, this means the leaders of those groups will gain a lot more power as the public begins to understand superintelligence. It also seems likely that politicians' stances here will become a key part of their platforms.
"I can't see an AI religion becoming dominant before superintelligence arrives"
This makes a lot of sense to me, at least in the Western world. People here care very deeply about individuality, which seems incompatible with an AI religion. If someone found a way to combine these ideas, however, they would be very successful.
"But I think there's definitely an opening there, for demagogues of the left or the right to step in - though the masses may easily be diverted by other demagogues who will say, not that AI should be stopped, but that the people should demand their share of the wealth in the form of UBI."
I strongly agree with this point.
This post has made me rethink my primary question. I think it could be better put as: if we imagine public understanding of the consequences of superintelligence as an exponential function of time, when will the slope of that function begin to increase rapidly? I don't think superintelligence needs to exist for this change to happen.
Edit (10/19/2025):
I edited these questions after a response to them. Here is what I am curious about: if we imagine public understanding of the consequences of superintelligence as an exponential function of time, when will the slope of that function begin to increase rapidly? How will this happen? (I don't think superintelligence needs to exist for this change to happen.) What will society's reaction be? What will be the ramifications of that reaction?
Original:
Questions:
When are people en masse going to realize the potential consequences (both good and bad) of superintelligence and the technological singularity? How will this happen? What will society's reaction be? What will be the ramifications of this reaction?
I'm asking this here because I can't find articles addressing these questions and because I want a diverse array of perspectives.
"To ask a question, click on your Username (top right, you must have an account), and click Ask Question [Beta]."
It appears to me as though this is no longer a feature.
This is an interesting idea. Why do you think indexes as a whole will appreciate if AI brings greater productivity? It seems to me that much of the productivity gains will end up in the hands of a few companies, which will certainly grow considerably. In a world of AGI there won't be a need for many companies, and monopolies will be heavily favored. While some companies may see 10x or even 100x gains, if many companies go bankrupt the actual change in the price of an index could be small, or negative; a toy calculation is sketched below.
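To make that intuition concrete, here is a minimal sketch with made-up, purely illustrative numbers (the company counts, multiples, and bankruptcy fraction are assumptions, not forecasts): even if a handful of index constituents gain 20x, an equal-weighted index can end up only slightly up, or flat, if most of the rest go to zero.

```python
# Toy illustration (assumed numbers): an equal-weighted index of 100 companies.
# A few "AI winners" multiply in value; most of the rest go bankrupt.

n_companies = 100
winners = 5             # companies that capture most of the productivity gains (assumption)
winner_multiple = 20.0  # each winner grows 20x (assumption)
survivors = 10          # companies that merely hold their value (assumption)
bankrupt = n_companies - winners - survivors  # the rest go to zero

# Each company starts with equal weight, so total starting value is n_companies units.
start_value = n_companies * 1.0
end_value = winners * winner_multiple + survivors * 1.0 + bankrupt * 0.0

index_return = end_value / start_value - 1
print(f"Index return: {index_return:+.0%}")  # prints +10% under these assumed numbers
```

Under those assumed numbers the index gains only 10% overall, which is the point: concentrated winners plus widespread bankruptcies can leave a broad index nearly flat even during an AI boom.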
On the other hand, if your idea catches on and people with automatable jobs, or other parties, do this en masse as a hedge against unemployment, your options will certainly appreciate dramatically.
I think it is more likely that default techniques are sufficient than that the default market or government response is sufficient. Markets don't incentivize non-harmful products; regulation does, and regulation can be slow. If you believe in a rapid intelligence explosion, it seems there is a high chance there won't be sufficient regulation of the market in time. On the other hand, our morals are mostly evolved, so you can imagine that an AI that understands things the same way we do would share our morals.