The Library

Rationality: A-Z

Also known as "The Sequences"

How can we think better on purpose? Why should we think better on purpose?
For two years, Eliezer Yudkowsky wrote a blog post a day, braindumping thoughts on rationality, ambition, and artificial intelligence. Those posts were edited into this introductory collection, recommended reading for all LessWrong users.

The Sequences Highlights

LessWrong can be kind of intimidating: there are a lot of concepts to learn. We recommend getting started with the Highlights, a collection of 50 top posts from Eliezer's Sequences.

A day or two's read, covering the foundations of rationality.

Harry Potter and the Methods of Rationality

What if Harry Potter were a scientist? What would you do if the universe had magic in it?
A story that conveys many rationality concepts, making them more visceral and emotionally compelling.

The Codex

Essays by Scott Alexander exploring science, medicine, philosophy, futurism, and politics. (There's also one about hallucinatory cactus people, but it's not representative.)

Best of LessWrong

Each December, the LessWrong community reviews the best posts from the previous year and votes on which ones have stood the test of time.

Curated Sequences

Predictably Wrong
Thinking Better on Purpose
by Ruby
The Methods of Rationality
AGI safety from first principles
Argument and Analysis
Embedded Agency
2022 MIRI Alignment Discussion
2021 MIRI Conversations
LessWrong Political Prerequisites
Intro to Naturalism
Replacing Guilt
Luna Lovegood
Iterated Amplification
Value Learning
CFAR Handbook
Gears Which Turn The World
Immoral Mazes
by Zvi
Keep your beliefs cruxy and your frames explicit
Risks from Learned Optimization
Fun Theory
Three Worlds Collide
Slack and the Sabbath
by Zvi
Introduction to Game Theory
The Blue-Minimizing Robot
Babble and Prune
Highly Advanced Epistemology 101 for Beginners
Rationality and Philosophy
Decision Theory: Newcomb's Problem
The Science of Winning at Life
No-Nonsense Metaethics
Inadequate Equilibria
Cartesian Frames
Living Luminously

Community Sequences

Some comments on the CAIS paradigm
[Redwood Research] Causal Scrubbing
Generalised models
Experiments in instrumental convergence
Research Journals
Hypothesis Subspace
"Why Not Just..."
Law-Following AI
My AI Risk Model
Inconsistent Values and Extrapolation
The Shard Theory of Human Values
AGI-assisted Alignment