The Library

Rationality: A-Z

Also known as "The Sequences"

How can we think better on purpose? Why should we think better on purpose?
For two years Eliezer Yudkowsky wrote a blog post a day, brain-dumping thoughts on rationality, ambition, and artificial intelligence. Those posts were edited into this introductory collection, recommended reading for all LessWrong users.

The Sequences Highlights

LessWrong can be kind of intimidating: there are a lot of concepts to learn. We recommend getting started with the Highlights, a collection of 50 top posts from Eliezer's Sequences.

A day or two of reading, covering the foundations of rationality.

Harry Potter and the Methods of Rationality

What if Harry Potter were a scientist? What would you do if the universe had magic in it?
A story that conveys many rationality concepts, making them more visceral and emotionally compelling.

The Codex

Essays by Scott Alexander exploring science, medicine, philosophy, futurism, and politics. (There's also one about hallucinatory cactus people, but it's not representative.)

Best of LessWrong

Each December, the LessWrong community reviews the best posts from the previous year and votes on which ones have stood the test of time.

Community Sequences

Moloch—An Illustrated Primer
by James Stephen Brown
Step by Step Metacognition
by Raemon
Game Theory's Poster-Child & Friends
by James Stephen Brown
Wise AI Wednesdays
by Chris_Leong
An Activist View of AI Governance
by Mass_Driver
Drug development is broken
by rossry
General Reasoning in LLMs
by eggsyntax
Developing interpretability
by Sandy Fraser
Coupling for Decouplers
by Jacob Falkovich
Probability Theory Fundamentals 102
by Ape in the coat
Orcas
by Towards_Keeperhood
The Theoretical Foundations of Reward Learning
by Joar Skalse

Curated Sequences

Predictably Wrong
by Eliezer Yudkowsky
Thinking Better on Purpose
by Ruby
The Methods of Rationality
by Eliezer Yudkowsky
AGI safety from first principles
by Richard_Ngo
Argument and Analysis
by Scott Alexander
Embedded Agency
by abramdemski
2022 MIRI Alignment Discussion
by Rob Bensinger
2021 MIRI Conversations
by Rob Bensinger
LessWrong Political Prerequisites
by Raemon
Infra-Bayesianism
by Diffractor
Intro to Naturalism
by LoganStrohl
Replacing Guilt
by So8res
Conditioning Predictive Models
by evhub
Cyborgism
by janus
The Engineer’s Interpretability Sequence
by scasper
Stories
by Richard_Ngo
Valence
by Steven Byrnes
Otherness and control in the age of AGI
by Joe Carlsmith
Luna Lovegood
by lsusr
The Most Important Century
by HoldenKarnofsky
Iterated Amplification
by paulfchristiano
Value Learning
by Rohin Shah
CFAR Handbook
by CFAR!Duncan
Gears Which Turn The World
by johnswentworth
Immoral Mazes
by Zvi
Keep your beliefs cruxy and your frames explicit
by Raemon
Risks from Learned Optimization
by evhub
Fun Theory
by Eliezer Yudkowsky
Three Worlds Collide
by Eliezer Yudkowsky
Slack and the Sabbath
by Zvi
Introduction to Game Theory
by Scott Alexander
The Blue-Minimizing Robot
by Scott Alexander
Hammertime
by alkjash
Babble and Prune
by alkjash
Highly Advanced Epistemology 101 for Beginners
by Eliezer Yudkowsky
Rationality and Philosophy
by lukeprog
Decision Theory: Newcomb's Problem
by AnnaSalamon
The Science of Winning at Life
by lukeprog
No-Nonsense Metaethics
by lukeprog
Inadequate Equilibria
by Eliezer Yudkowsky
Cartesian Frames
by Scott Garrabrant
Living Luminously
by Alicorn