Daniel Jacobson

Posts

Daniel Jacobson's Shortform · 3d

Comments
Daniel Jacobson's Shortform
Daniel Jacobson · 2d

I appreciate this response. 

"For a precedent, I point to the idea of cryonic suspension of the dead, in the hope that they may be thawed, healed, and resurrected by future medical technology. This idea has been around for at least 60 years. One pioneer, Robert Ettinger, wrote a book in the 1960s called The Prospect of Immortality, in which he mused about the coming "freezer era" and the social impact of the cryonic idea. 

What has been the actual social impact? Nothing. Cryonics exists mainly as a science fiction motif, and is taken seriously only by a few thousand people worldwide. 

If we assume a similar scenario for collective interest in superintelligence, and that the AI industry manages nonetheless to produce superintelligence, then this means that up until the last moment, the headlines, and people's heads, are just full of other things: wars and rumors of war, flying cars, AI companions, youth trends, scientific fads, celebrity deaths, miracles and scandals and conspiracy theories, and then BOOM it exists. The "AGI-pilled" minority saw it coming, but, they never became a majority. "

I think it is unfair to compare the public's reaction to cryonics with their potential reaction to superintelligence. Cryonics has little to no impact on the average person's life; it is something that interested people must seek out. Conversely, on the road to superintelligence, human life will change in unignorable ways. While cryonics does not affect those who never seek it out, superintelligence will affect everyone in mainstream society whether they seek it out or not.

Furthermore, I think the trend of increasing AI intelligence is already being noticed by society at large, although people perhaps don't realize the full extent of the potential or the rate of growth.

"So in the present, you have this technophile subculture centered on the American West Coast where AI is actually being made, and it contains both celebratory (e/acc) and cautionary (effective altruism, AI safety) tendencies."

I think this is a very interesting point. It seems likely to me that the public will hold many distinct perspectives, but that these perspectives will primarily be rooted in either an e/acc or an EA/AI-safety outlook. Interestingly, this means the leaders of those groups will gain considerably more power as the public begins to understand superintelligence. It also seems likely that politicians' stances here will become a key part of their platforms.
 

"I can't see an AI religion becoming dominant before superintelligence arrives"

This makes a lot of sense to me, at least in the Western world. People here care very deeply about individuality, which seems incompatible with an AI religion. If someone found a way to combine these ideas, however, they would be very successful.

"But I think there's definitely an opening there, for demagogues of the left or the right to step in - though the masses may easily be diverted by other demagogues who will say, not that AI should be stopped, but that the people should demand their share of the wealth in the form of UBI."

I strongly agree with this point. 

This post has made me rethink my primary question. I think it could be better stated as: if we imagine public understanding of the consequences of superintelligence as an exponential function, when will the slope of that function begin to increase rapidly? I don't think superintelligence needs to exist for this change to happen.

Daniel Jacobson's Shortform
Daniel Jacobson · 3d (edited)

Edit (10/19/2025):
 

I edited these questions after a response to them. Here is what I am curious about: if we imagine public understanding of the consequences of superintelligence as an exponential function, when will the slope of that function begin to increase rapidly? How will this happen? (I don't think superintelligence needs to exist for this change to happen.) What will society's reaction be? What will be the ramifications of this reaction?

Original:

Questions:

When are people en masse going to realize the potential consequences (both good and bad) of superintelligence and the technological singularity? How will this happen? What will society's reaction be? What will be the ramifications of this reaction?

I'm asking this here because I can't find articles addressing these questions and because I want a diverse array of perspectives. 
 

LessWrong FAQ
Daniel Jacobson · 3d

"To ask a question, click on your Username (top right, you must have an account), and click Ask Question [Beta]."

It appears to me as though this is no longer a feature.

My simple AGI investment & insurance strategy
Daniel Jacobson · 4d

This is an interesting idea. Why do you think indexes as a whole will appreciate if AI brings greater productivity? It seems to me that much of the productivity gains will end up in the hands of a few companies, which will certainly grow considerably. In a world of AGI there won't be a need for many companies, and monopolies will be heavily favored. While some companies may see 10x or even 100x gains, if many companies go bankrupt the actual change in the price of an index could be low, or negative.
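To make the arithmetic concrete, here is a toy Python calculation (all numbers invented for illustration, not a claim about any real index): an equal-weighted index of 100 firms where a couple of winners gain enormously and the rest go bankrupt.

# Toy illustration with invented numbers: an equal-weighted index of 100
# firms, where a few winners gain enormously and the rest go to zero.
firms = 100
winners, losers = 2, 98

start_value = firms * 1.0                      # each firm starts at a price of 1.0

end_value_10x = winners * 10 + losers * 0.0    # two 10x winners, 98 bankruptcies
print(f"Index return with 10x winners:  {end_value_10x / start_value - 1:+.0%}")   # -80%

end_value_100x = winners * 100 + losers * 0.0  # two 100x winners, 98 bankruptcies
print(f"Index return with 100x winners: {end_value_100x / start_value - 1:+.0%}")  # +100%

Whether the index ends up positive depends entirely on how large the winners' multiples are relative to the number of bankruptcies, which is exactly the uncertainty described above.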

On the other hand, if your idea catches on and people with automatable jobs, or other parties, do this en masse as a hedge against unemployment, your options will certainly appreciate dramatically.
