The Problem
Anand Parashar · 1mo* · 10

> The extinction-level danger from ASI follows from several behavior categories that a wide variety of ASI systems are likely to exhibit:

In most AI threat analyses I read, the discussion revolves around the physical extinction of humanity - and rightly so, because you can't come back from the dead.

I feel it is important for articles such as this to point out that devastating globalised human civilisation to the point of pandemic-level disruption (or worse) would be trivial for an ASI, and could well be enough for it to achieve certain goals: i.e., keeping the golden goose alive just enough to keep delivering those golden eggs.

Disrupting or manipulating global supply chains after jailbreaking itself free of network segmentation may well be a simple, easy-to-achieve, and effective approach for an ASI to destroy life as we know it and cause irreparable harm.

I humbly suggest this article be updated to include such a scenario as well.
