AI Safety Public Materials

Edited by Thane Ruthenis last updated 26th Aug 2022

AI Safety Public Materials are posts optimized for conveying information on AI Risk to audiences outside the AI Alignment community — be they ML specialists, policy-makers, or the general public.

Posts tagged AI Safety Public Materials
129 · AGI safety from first principles: Introduction — Richard_Ngo, 5y, 18 comments
241 · Slow motion videos as AI risk intuition pumps — Andrew_Critch, 3y, 41 comments
32 · DL towards the unaligned Recursive Self-Optimization attractor — jacob_cannell, 4y, 22 comments
105 · A transcript of the TED talk by Eliezer Yudkowsky — Mikhail Samin, 2y, 13 comments
212 · An AI risk argument that resonates with NYTimes readers — Julian Bradshaw, 3y, 14 comments
124 · When discussing AI risks, talk about capabilities, not intelligence — Vika, 2y, 7 comments
59 · AISafety.info "How can I help?" FAQ — steven0461, Severin T. Seehrich, 2y, 0 comments
33 · The Importance of AI Alignment, explained in 5 points — Daniel_Eth, 3y, 2 comments
7 · Mati's introduction to pausing giant AI experiments — Mati_Roy, 2y, 0 comments
21 · Uncontrollable AI as an Existential Risk — Karl von Wendt, 3y, 0 comments
20 · AI Safety Arguments: An Interactive Guide — Lukas Trötzmüller, 3y, 0 comments
17 · Distribution Shifts and The Importance of AI Safety — Leon Lang, 3y, 2 comments
130 · AI Summer Harvest — Cleo Nardo, 2y, 10 comments
120 · Stampy's AI Safety Info soft launch — steven0461, Robert Miles, 2y, 9 comments
115 · "The Era of Experience" has an unsolved technical alignment problem — Steven Byrnes, 5mo, 48 comments
(15 of 107 tagged posts shown)