Well, that's a wrap for the 2021 Review. 238 people cast votes. 452 posts were originally nominated, of which 149 received at least one review. The LessWrong moderation team will be awarding prizes and assembling posts into the Best of 2021 Books / Sequences soon. But for now, you can see the raw results here.


Voting is visualized here with dots of varying sizes, roughly indicating whether a voter thought a post was "good", "important", or "extremely important". Green dots indicate positive votes; red dots indicate negative votes. You can hover over a dot to see its exact score.

0 Strong Evidence is Common
1 “PR” is corrosive; “reputation” is not.
2 Your Cheerful Price
3 ARC's first technical report: Eliciting Latent Knowledge
4 This Can't Go On
5 Rationalism before the Sequences
6 Lies, Damn Lies, and Fabricated Options
7 Fun with +12 OOMs of Compute
8 What 2026 looks like
9 Ngo and Yudkowsky on alignment difficulty
10 How To Write Quickly While Maintaining Epistemic Rigor
11 Science in a High-Dimensional World
12 How factories were made safe
13 Cryonics signup guide #1: Overview
14 Making Vaccine
15 Taboo "Outside View"
16 All Possible Views About Humanity's Future Are Wild
17 Another (outer) alignment failure story
18 Split and Commit
19 What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)
20 There’s no such thing as a tree (phylogenetically)
21 The Plan
22 Trapped Priors As A Basic Problem Of Rationality
23 Finite Factored Sets
24 Selection Theorems: A Program For Understanding Agents
25 Slack Has Positive Externalities For Groups
26 My research methodology
27 The Rationalists of the 1950s (and before) also called themselves “Rationalists”
28 Ruling Out Everything Else
29 Leaky Delegation: You are not a Commodity
30 Feature Selection
31 Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions)
32 larger language models may disappoint you [or, an eternally unfinished draft]
33 Self-Integrity and the Drowning Child
34 Comments on Carlsmith's “Is power-seeking AI an existential risk?”
35 Working With Monsters
36 Simulacrum 3 As Stag-Hunt Strategy
37 Elephant seal 2
38 EfficientZero: How It Works
39 Lars Doucet's Georgism series on Astral Codex Ten
40 Catching the Spark
41 Specializing in Problems We Don't Understand
42 Shoulder Advisors 101
43 Notes from "Don't Shoot the Dog"
44 Why has nuclear power been a flop?
45 Whole Brain Emulation: No Progress on C. elgans After 10 Years
46 Frame Control
47 Worst-case thinking in AI alignment
48 Yudkowsky and Christiano discuss "Takeoff Speeds"
49 You are probably underestimating how good self-love can be
50 Infra-Bayesian physicalism: a formal theory of naturalized induction
51 Jean Monnet: The Guerilla Bureaucrat
52 Seven Years of Spaced Repetition Software in the Classroom
53 Bets, Bonds, and Kindergarteners
54 Coordination Schemes Are Capital Investments
55 The Point of Trade
56 Law of No Evidence
57 Discussion with Eliezer Yudkowsky on AGI interventions
58 Saving Time
59 Intentionally Making Close Friends
60 What Do GDP Growth Curves Really Mean?
61 The case for aligning narrowly superhuman models
62 The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century
63 The Death of Behavioral Economics
64 Why I Am Not in Charge
65 Politics is way too meta
66 Where do your eyes go?
67 Highlights from The Autobiography of Andrew Carnegie
68 Lessons I've Learned from Self-Teaching
69 Grokking the Intentional Stance
70 Agency in Conway’s Game of Life
71 Reneging Prosocially
72 Experimentally evaluating whether honesty generalizes
73 Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain
74 Unnatural Categories Are Optimized for Deception
75 Can you control the past?
76 A Brief Introduction to Container Logistics
77 Core Pathways of Aging
78 Imitative Generalisation (AKA 'Learning the Prior')
79 [Book review] Gödel, Escher, Bach: an in-depth explainer
80 Selection Has A Quality Ceiling
81 Book Review: A Pattern Language by Christopher Alexander
82 Technological stagnation: Why I came around
83 I'm from a parallel Earth with much higher coordination: AMA
84 Redwood Research’s current project
85 What will 2040 probably look like assuming no singularity?
86 Reward Is Not Enough
87 Coordination Skills I Wish I Had For the Pandemic
88 Dear Self; We Need To Talk About Social Media
89 Testing The Natural Abstraction Hypothesis: Project Intro
90 How To Get Into Independent Research On Alignment/Agency
91 Exercise: Taboo "Should"
92 Precognition
93 Biology-Inspired AGI Timelines: The Trick That Never Works
94 Visible Thoughts Project and Bounty Announcement
95 Unwitting cult leaders
96 Zvi’s Thoughts on the Survival and Flourishing Fund (SFF)
97 RadVac Commercial Antibody Test Results
98 Curing insanity with malaria
99 Ngo and Yudkowsky on AI capability gains
100 Going Out With Dignity
101 Social behavior curves, equilibria, and radicalism
102 The feeling of breaking an Overton window
103 The Coordination Frontier: Sequence Intro
104 The Telephone Theorem: Information At A Distance Is Mediated By Deterministic Constraints
105 Fixing The Good Regulator Theorem
106 Soares, Tallinn, and Yudkowsky discuss AGI cognition
107 Where did the 5 micron number come from? Nowhere good. [Wired.com]
108 The Prototypical Negotiation Game
109 Three enigmas at the heart of our reasoning
110 Gravity Turn
111 Secure homes for digital people
112 Killing the ants
113 Choice Writings of Dominic Cummings
114 Shared Frames Are Capital Investments in Coordination
115 How do we prepare for final crunch time?
116 AI Risk for Epistemic Minimalists
117 Kelly *is* (just) about logarithmic utility
118 What's Stopping You?
119 Testing The Natural Abstraction Hypothesis: Project Update
120 Building Blocks of Politics: An Overview of Selectorate Theory
121 The Case for Radical Optimism about Interpretability
122 Coherence arguments imply a force for goal-directed behavior
123 The Most Important Century: Sequence Introduction
124 Beware over-use of the agent model
125 Almost everyone should be less afraid of lawsuits
126 Decoupling deliberation from competition
127 Why did we wait so long for the threshing machine?
128 The blue-minimising robot and model splintering
129 Tales from Prediction Markets
130 A non-magical explanation of Jeffrey Epstein
131 A few thought on the inner ring
132 Deliberate Play
133 Automating Auditing: An ambitious concrete technical research proposal
134 Benchmarking an old chess engine on new hardware
135 Transcript: "You Should Read HPMOR"
136 Concentration of Force
137 Against neutrality about creating happy lives
138 Rules for Epistemic Warfare?
139 Morality is Scary
140 Norm Innovation and Theory of Mind
141 Scott Alexander's "Ivermectin: Much More Than You Wanted To Know"
142 Bad names make you open the box
143 Review of "Fun with +12 OOMs of Compute"
144 Random facts can come back to bite you
145 Coase's "Nature of the Firm" on Polyamory
146 The Upper Limit of Value
147 Your Dog is Even Smarter Than You Think
148 Robin Hanson's Grabby Aliens model explained - part 1

10 comments

How many posts were posted in total in 2021?


(You can check by seeing the numbers next to the load more button on the all-posts page for 2021.)

Huh, this was approximately the number I'd have guessed, but then I remembered you should probably filter out events and low-karma posts, and I'm actually fairly surprised that when you do that the number drops down to 1040:


[This comment is no longer endorsed by its author]

When I un-check "Show low karma" it goes down to 3233.

I didn't have "Show Events" checked.

Whoops, totally wrong, that was the shortform number.

Woop! Pretty good results. A few of my +9s aren't in the top 50, but most of them are. And well done to Elephant Seal 2, ranking higher than Elephant Seal 1 did.

4/8 of Eliezer Yudkowsky's posts in this list have a minus 9. Compare this with 1/7 for duncan_sabien, 0/6 for paulfchristiano, 0/5 for Daniel Kokotajlo, or 0/3 for HoldenKarnofsky. I wonder why that is.

To state the obvious, Yudkowsky's writing style/rhetoric/argument annoys people.

Man I find myself curious about whoever medium-downvoted "The Death of Behavioral Economics". This seems like it throws a wrench in some of the original underpinnings of LessWrong. I get not thinking it was all that important, but surprised someone would vote strongly against it.

Presumably they agreed with Scott's criticisms of it, and thought the problems were severe enough to make it not Review-worthy?

I didn't get around to (re-)reading and voting on it, but I might've wound up downvoting if I had. It does hit a pet peeve of mine, where people act as if 'bad discourse is okay if it's from a critic'.