LESSWRONG
AI
Frontpage

[ Question ]

How can I reconcile the two most likely requirements for humanity's near-term survival?

by Erlja Jkdf.
29th Aug 2022
1 min read

1 answer, sorted by top scoring

Lalartu2, Aug 30, 2022*

We technologically plateau because we reach technological limits. There aren't many important technologies left to invent; things like nanorobots, compact fusion reactors, or Dyson spheres are impossible. Whether AI is developed or not is irrelevant. After a century or two of stagnation, civilization runs out of resources and declines to a pre-industrial level. This is our future.

5 comments, sorted by top scoring
JBlack, 3y

I don't think either of those is among the two most likely.

I see the most likely as related to (2), but nothing to do with the likelihood of AI causing disaster. More likely there will be some other disruption to our society that has nothing to do with AI, but prevents us from making sufficient progress to reach superhuman AGI for the near future. Probably we will recover in the less near future, but that's out of scope of the question.

Second most likely I see as being some as yet unknown obstacle that makes AGI unexpectedly unlikely with near future technology. The future is, after all, hard to predict. That doesn't mean that we technologically plateau in general, just that this one problem is much harder than we expect.

Erlja Jkdf., 3y

A technological plateau is strictly necessary. To give the simplest example: we lucked out on nukes. The next decade alone contains the potential for several existential threats - readily made bioweapons, miniaturized drones, AI abuse - that I question our ability to consistently adapt to, particularly one after another.

We might get it, if our tech jumps thanks to exponential progress.

JBlack, 3y

No, it is definitely not a strictly necessary requirement for near-term survival. To be "strictly necessary for near-term survival", such future technologies would have to be guaranteed to kill all of humanity, and soon. That's ridiculous hyperbole.

There are risks ahead, even existential risks, from other non-AI technologies but not to nearly that extent.

Erlja Jkdf., 3y

We're very good at generating existential risks. Given indefinite technological progression at our current pace, we are likely to get ourselves killed.

JBlack, 3y

Your post - and my comment - are explicitly about necessary requirements for near-term survival. If you want to make another post about indefinite-term existential risks, then we can talk about that.

1. We technologically plateau, due to humanity's questionable ability to adapt to accelerating technological progression.

2. AI development is indefinitely disrupted, as it is likely to result in disaster.

  • This is unlikely to be done deliberately. Ongoing attempts to slow down AI development are relatively ineffective; it is more likely, in my opinion, that a basic form of AI developed in the near future will either directly increase all individuals' power, or lead to technologies that do the same. An example would be the possible implications of Palm AI. https://www.theatlantic.com/technology/archive/2022/06/google-palm-ai-artificial-consciousness/661329/
  • This universal increase in power could be sufficient to disrupt all AI research indefinitely, with the actions of a minority working in unison.

All this considered, what's the most likely situation that could play out?