The Library

Rationality: A-Z

Also known as "The Sequences"

How can we think better on purpose? Why should we think better on purpose?
For two years Eliezer Yudkowsky wrote a blog post a day, braindumping thoughts on rationality, ambition, and artificial intelligence. Those posts were edited into this introductory collection, recommended reading for all LessWrong users.

The Sequences Highlights

LessWrong can be kind of intimidating: there are a lot of concepts to learn. We recommend getting started with the Highlights, a collection of 50 top posts from Eliezer's Sequences.

A day or two of reading, covering the foundations of rationality.

Harry Potter and the Methods of Rationality

What if Harry Potter were a scientist? What would you do if the universe had magic in it?
A story that conveys many rationality concepts, making them more visceral and emotionally compelling.

The Codex

Essays by Scott Alexander exploring science, medicine, philosophy, futurism, and politics. (There's also one about hallucinatory cactus people, but it's not representative.)

Best of LessWrong

Each December, the LessWrong community reviews the best posts from the previous year and votes on which ones have stood the test of time.

Community Sequences

Compassion for the Narcissistic Style
by Dawn Drescher
Against Muddling Through
by Joe Rogero
The networkist perspective
by Juan Zaragoza
FUNDAMENTALS OF INFRA-BAYESIANISM
by Brittany Gelb
Legal Personhood for Digital Minds
by Stephen Martin
Beneath Psychology: Truth-Seeking as the Engine of Change
by jimmy
The Alignment Project Research Agenda
by Benjamin Hilton
Emergence—the non-zero-sum foundation of existence
by James Stephen Brown
Moloch—An Illustrated Primer
by James Stephen Brown
Step by Step Metacognition
by Raemon
Game Theory's Poster-Child & Friends
by James Stephen Brown
Wise AI Wednesdays
by Chris_Leong
(showing 12 of 239)

Curated Sequences

Predictably Wrong
by Eliezer Yudkowsky
Thinking Better on Purpose
by Ruby
The Methods of Rationality
by Eliezer Yudkowsky
AGI safety from first principles
by Richard_Ngo
Argument and Analysis
by Scott Alexander
Embedded Agency
by abramdemski
2022 MIRI Alignment Discussion
by Rob Bensinger
2021 MIRI Conversations
by Rob Bensinger
LessWrong Political Prerequisites
by Raemon
Infra-Bayesianism
by Diffractor
Intro to Naturalism
by LoganStrohl
Replacing Guilt
by So8res
Conditioning Predictive Models
by evhub
Cyborgism
by janus
The Engineer’s Interpretability Sequence
by scasper
Stories
by Richard_Ngo
Valence
by Steven Byrnes
Otherness and control in the age of AGI
by Joe Carlsmith
Luna Lovegood
by lsusr
The Most Important Century
by HoldenKarnofsky
Iterated Amplification
by paulfchristiano
Value Learning
by Rohin Shah
CFAR Handbook
by CFAR!Duncan
Gears Which Turn The World
by johnswentworth
Immoral Mazes
by Zvi
Keep your beliefs cruxy and your frames explicit
by Raemon
Risks from Learned Optimization
by evhub
Fun Theory
by Eliezer Yudkowsky
Three Worlds Collide
by Eliezer Yudkowsky
Slack and the Sabbath
by Zvi
Introduction to Game Theory
by Scott Alexander
The Blue-Minimizing Robot
by Scott Alexander
Hammertime
by alkjash
Babble and Prune
by alkjash
Highly Advanced Epistemology 101 for Beginners
by Eliezer Yudkowsky
Rationality and Philosophy
by lukeprog
Decision Theory: Newcomb's Problem
by AnnaSalamon
The Science of Winning at Life
by lukeprog
No-Nonsense Metaethics
by lukeprog
Inadequate Equilibria
by Eliezer Yudkowsky
Cartesian Frames
by Scott Garrabrant
Living Luminously
by Alicorn