AI Safety Public Materials

Contributors: Thane Ruthenis

AI Safety Public Materials are posts optimized for conveying information on AI Risk to audiences outside the AI Alignment community — be they ML specialists, policy-makers, or the general public.

Posts tagged AI Safety Public Materials
- 14 points · a casual intro to AI doom and alignment · Tamsin Leake · 7mo · 0 comments
- 115 points · AGI safety from first principles: Introduction [Ω] · Richard_Ngo · 3y · 18 comments
- 227 points · Slow motion videos as AI risk intuition pumps · Andrew_Critch · 1y · 39 comments
- 138 points · AI Timelines via Cumulative Optimization Power: Less Long, More Short · jacob_cannell · 8mo · 33 comments
- 57 points · AISafety.info "How can I help?" FAQ · steven0461, Severin T. Seehrich · 11d · 0 comments
- 26 points · The Importance of AI Alignment, explained in 5 points [Ω] · Daniel_Eth · 4mo · 2 comments
- 198 points · An AI risk argument that resonates with NYTimes readers · Julian Bradshaw · 3mo · 13 comments
- 20 points · AI Safety Arguments: An Interactive Guide · Lukas Trötzmüller · 4mo · 0 comments
- 20 points · Uncontrollable AI as an Existential Risk · Karl von Wendt · 8mo · 0 comments
- 17 points · Distribution Shifts and The Importance of AI Safety [Ω] · Leon Lang · 9mo · 2 comments
- 130 points · AI Summer Harvest · Cleo Nardo · 2mo · 10 comments
- 107 points · The Overton Window widens: Examples of AI risk in the media · Akash · 3mo · 24 comments
- 85 points · An artificially structured argument for expecting AGI ruin [Ω] · Rob Bensinger · 1mo · 26 comments
- 60 points · Response to Blake Richards: AGI, generality, alignment, & loss functions [Ω] · Steven Byrnes · 1y · 9 comments
- 57 points · TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI [Ω] · Andrew_Critch · 4d · 1 comment
(Showing 15 of 64 tagged posts.)