Intro to AI safety

Apr 15, 2025 by Vishakha

Introduction

This section explains and builds a case for existential risk from AI. It's too short to give more than a rough overview, but it links to other aisafety.info articles where more detail is needed.

As an alternative, we also have a self-contained narrative introduction.

Summary

  • AI systems far smarter than us may be created soon. AI is advancing fast, and this progress may result in human-level AI — but human-level is not the limit, and shortly after, we may see superintelligent AI.
  • These systems may end up opposed to us. AI systems may pursue their own goals, those goals may not match ours, and that may bring us into conflict.
  • Consequences could be major, including human extinction. AI may defeat us and take over, which would be ruinous. But losing control isn't the only possibility: if AI is developed safely, it could bring great benefits.
  • We need to get our act together. Experts are worried, but humanity doesn’t have a real plan, and you may be able to help.
Articles in this sequence

  • AI is advancing fast (Vishakha, Algon, steven0461)
  • AI may attain human-level soon (Vishakha, Algon, steven0461)
  • Human-level is not the limit (Vishakha, Algon, steven0461)
  • The road from human-level to superintelligent AI may be short (Vishakha, Algon, steven0461)
  • AI may pursue goals (Algon, steven0461, Vishakha)
  • AI’s goals may not match ours (Algon, steven0461, Vishakha)
  • Different goals may bring AI into conflict with us (Algon, steven0461, Vishakha)
  • AI can win a conflict against us (Algon, steven0461, Vishakha)
  • Defeat may be irreversibly catastrophic (Vishakha, Algon, steven0461)
  • Advanced AI is a big deal even if we don’t lose control (Algon, steven0461, Vishakha)
  • If we get things right, AI could have huge benefits (Algon, steven0461, Vishakha)