Edit (10/19/2025):
I edited these questions after receiving a response to them. Here is what I am curious about: if we imagine an understanding of the consequences of superintelligence as an exponential function, when will the slope of that function begin to rapidly increase? How will this happen? (I don't think superintelligence needs to exist for this change to happen.) What will society's reaction be? What will be the ramifications of that reaction?
Original:
Questions:
When are people en masse going to realize the potential consequences (both good and bad) of superintelligence and the technological singularity? How will this happen? What will society's reaction be? What will be the ramifications of this reaction?
I'm asking this here because I can't find articles addressing these questions and because I want a diverse array of perspectives.
To ask a question, click on your Username (top right, you must have an account), and click Ask Question [Beta].
It appears to me as though this is no longer a feature.
This is an interesting idea. Why do you think indexes as a whole will appreciate if AI brings greater productivity? It seems to me that much of the productivity gains will land in the hands of a few companies, which will certainly grow considerably. In a world of AGI there won't be a need for many companies, and monopolies will be heavily favored. While some companies may see 10x or even 100x gains, if many companies go bankrupt the actual change in the price of an index could be small, or even negative.
On the other hand, if your idea catches on and people with automatable jobs, or other parties, do this en masse as a hedge against unemployment, your options will certainly appreciate dramatically.
I appreciate this response.
I think it is unfair to compare the public's reaction to cryonics to their potential reaction to superintelligence. Cryonics has little to no impact on the average person's life; it is something sought out only by those interested in it. Conversely, on the road to superintelligence, human life will change in ways that cannot be ignored. While cryonics does not affect those who never seek it out, superintelligence will affect everyone in mainstream society whether they seek it out or not.
Furthermore, I think the trend of increasing AI intelligence is already being noticed by society at large, although people perhaps don't realize the full extent of the potential or the rate of growth.
I think this is a very interesting point. It seems likely to me that the public will hold many distinct perspectives, but that these will primarily be rooted in either an e/acc or an EA/AI-safety worldview. Interestingly, this means the leaders of those groups will gain much more power as the public begins to understand superintelligence. It also seems likely that politicians' stances here will become a key part of their platforms.
This makes a lot of sense to me, at least in the Western world. People here care very deeply about individuality, which seems incompatible with an AI religion. If someone found a way to combine these ideas, however, they would be very successful.
I strongly agree with this point.
This post has made me rethink my primary question. I think it could be better said as: If we imagine an understanding of the consequences of superintelligence as an exponential function, when will the slope of that function begin to rapidly increase? I don't think superintelligence needs to exist for this change to happen.