AI Alignment Intro Materials
• Applied to Podcast interview series featuring Dr. Peter Park by jacobhaimes 1mo ago
• Applied to INTERVIEW: Round 2 - StakeOut.AI w/ Dr. Peter Park by jacobhaimes 1mo ago
• Applied to INTERVIEW: StakeOut.AI w/ Dr. Peter Park by jacobhaimes 2mo ago
• Applied to A starter guide for evals by Marius Hobbhahn 4mo ago
• Applied to Hackathon and Staying Up-to-Date in AI by jacobhaimes 4mo ago
• Applied to Interview: Applications w/ Alice Rigg by jacobhaimes 4mo ago
• Applied to Into AI Safety: Episode 3 by jacobhaimes 5mo ago
• Applied to Into AI Safety Episodes 1 & 2 by jacobhaimes 6mo ago
plex v1.4.0 Nov 5th 2023 (+51/-26)
• Stampy's AI Safety Info (extensive interactive FAQ)
• Scott Alexander's Superintelligence FAQ
• The MIRI Intelligence Explosion FAQ
• The Stampy.AI wiki project
• The AGI Safety Fundamentals courses
• Superintelligence (book)
• Applied to Into AI Safety - Episode 0 by jacobhaimes 6mo ago
• Applied to Documenting Journey Into AI Safety by jacobhaimes 7mo ago
• Applied to Apply to a small iteration of MLAB to be run in Oxford by RP 8mo ago
• Applied to AI Safety 101: Introduction to Vision Interpretability by Charbel-Raphaël 9mo ago
• Applied to Introducción al Riesgo Existencial de Inteligencia Artificial (Introduction to Existential Risk from Artificial Intelligence) by Raemon 9mo ago
• Applied to AIS 101: Task decomposition for scalable oversight by Charbel-Raphaël 9mo ago
• Applied to An Exercise to Build Intuitions on AGI Risk by Lauro Langosco 11mo ago
• Applied to AI Safety Fundamentals: An Informal Cohort Starting Soon! by Tiago de Vassal 11mo ago
• Applied to Advice for Entering AI Safety Research by Ruby 11mo ago
• Applied to Outreach success: Intro to AI risk that has been successful by the gears to ascension 11mo ago