The Best of LessWrong

Once posts are more than a year old, the LessWrong community reviews them and votes on how well they have stood the test of time. These are the posts that have ranked highest across all the annual votes since 2018 (when our tradition of choosing the least wrong of LessWrong began).

For the years 2018, 2019 and 2020 we also published physical books with the results of our annual vote, which you can buy and learn more about here.

Rationality

Eliezer Yudkowsky
Local Validity as a Key to Sanity and Civilization
Buck
"Other people are wrong" vs "I am right"
Mark Xu
Strong Evidence is Common
TsviBT
Please don't throw your mind away
Raemon
Noticing Frame Differences
johnswentworth
You Are Not Measuring What You Think You Are Measuring
johnswentworth
Gears-Level Models are Capital Investments
Hazard
How to Ignore Your Emotions (while also thinking you're awesome at emotions)
Scott Garrabrant
Yes Requires the Possibility of No
Ben Pace
A Sketch of Good Communication
Eliezer Yudkowsky
Meta-Honesty: Firming Up Honesty Around Its Edge-Cases
Duncan Sabien (Inactive)
Lies, Damn Lies, and Fabricated Options
Scott Alexander
Trapped Priors As A Basic Problem Of Rationality
Duncan Sabien (Inactive)
Split and Commit
Duncan Sabien (Inactive)
CFAR Participant Handbook now available to all
johnswentworth
What Are You Tracking In Your Head?
Mark Xu
The First Sample Gives the Most Information
Duncan Sabien (Inactive)
Shoulder Advisors 101
Scott Alexander
Varieties Of Argumentative Experience
Eliezer Yudkowsky
Toolbox-thinking and Law-thinking
alkjash
Babble
Zack_M_Davis
Feature Selection
abramdemski
Mistakes with Conservation of Expected Evidence
Kaj_Sotala
The Felt Sense: What, Why and How
Duncan Sabien (Inactive)
Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions)
Ben Pace
The Costly Coordination Mechanism of Common Knowledge
Jacob Falkovich
Seeing the Smoke
Duncan Sabien (Inactive)
Basics of Rationalist Discourse
alkjash
Prune
johnswentworth
Gears vs Behavior
Elizabeth
Epistemic Legibility
Daniel Kokotajlo
Taboo "Outside View"
Duncan Sabien (Inactive)
Sazen
AnnaSalamon
Reality-Revealing and Reality-Masking Puzzles
Eliezer Yudkowsky
ProjectLawful.com: Eliezer's latest story, past 1M words
Eliezer Yudkowsky
Self-Integrity and the Drowning Child
Jacob Falkovich
The Treacherous Path to Rationality
Scott Garrabrant
Tyranny of the Epistemic Majority
alkjash
More Babble
abramdemski
Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems
Raemon
Being a Robust Agent
Zack_M_Davis
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists
Benquo
Reason isn't magic
habryka
Integrity and accountability are core parts of rationality
Raemon
The Schelling Choice is "Rabbit", not "Stag"
Diffractor
Threat-Resistant Bargaining Megapost: Introducing the ROSE Value
Raemon
Propagating Facts into Aesthetics
johnswentworth
Simulacrum 3 As Stag-Hunt Strategy
LoganStrohl
Catching the Spark
Jacob Falkovich
Is Rationalist Self-Improvement Real?
Benquo
Excerpts from a larger discussion about simulacra
Zvi
Simulacra Levels and their Interactions
abramdemski
Radical Probabilism
sarahconstantin
Naming the Nameless
AnnaSalamon
Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality"
Eric Raymond
Rationalism before the Sequences
Owain_Evans
The Rationalists of the 1950s (and before) also called themselves “Rationalists”
Raemon
Feedbackloop-first Rationality
LoganStrohl
Fucking Goddamn Basics of Rationalist Discourse
Raemon
Tuning your Cognitive Strategies
johnswentworth
Lessons On How To Get Things Right On The First Try

Optimization

So8res
Focus on the places where you feel shocked everyone's dropping the ball
Jameson Quinn
A voting theory primer for rationalists
sarahconstantin
The Pavlov Strategy
Zvi
Prediction Markets: When Do They Work?
johnswentworth
Being the (Pareto) Best in the World
alkjash
Is Success the Enemy of Freedom? (Full)
johnswentworth
Coordination as a Scarce Resource
AnnaSalamon
What should you change in response to an "emergency"? And AI risk
jasoncrawford
How factories were made safe
HoldenKarnofsky
All Possible Views About Humanity's Future Are Wild
jasoncrawford
Why has nuclear power been a flop?
Zvi
Simple Rules of Law
Scott Alexander
The Tails Coming Apart As Metaphor For Life
Zvi
Asymmetric Justice
Jeffrey Ladish
Nuclear war is unlikely to cause human extinction
Elizabeth
Power Buys You Distance From The Crime
Eliezer Yudkowsky
Is Clickbait Destroying Our General Intelligence?
Spiracular
Bioinfohazards
Zvi
Moloch Hasn’t Won
Zvi
Motive Ambiguity
Benquo
Can crimes be discussed literally?
johnswentworth
When Money Is Abundant, Knowledge Is The Real Wealth
GeneSmith
Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible
HoldenKarnofsky
This Can't Go On
Said Achmiz
The Real Rules Have No Exceptions
Lars Doucet
Lars Doucet's Georgism series on Astral Codex Ten
johnswentworth
Working With Monsters
jasoncrawford
Why haven't we celebrated any major achievements lately?
abramdemski
The Credit Assignment Problem
Martin Sustrik
Inadequate Equilibria vs. Governance of the Commons
Scott Alexander
Studies On Slack
KatjaGrace
Discontinuous progress in history: an update
Scott Alexander
Rule Thinkers In, Not Out
Raemon
The Amish, and Strategic Norms around Technology
Zvi
Blackmail
HoldenKarnofsky
Nonprofit Boards are Weird
Wei Dai
Beyond Astronomical Waste
johnswentworth
Making Vaccine
jefftk
Make more land
jenn
Things I Learned by Spending Five Thousand Hours In Non-EA Charities
Richard_Ngo
The ants and the grasshopper
So8res
Enemies vs Malefactors
Elizabeth
Change my mind: Veganism entails trade-offs, and health is one of the axes

World

Kaj_Sotala
Book summary: Unlocking the Emotional Brain
Ben
The Redaction Machine
Samo Burja
On the Loss and Preservation of Knowledge
Alex_Altair
Introduction to abstract entropy
Martin Sustrik
Swiss Political System: More than You ever Wanted to Know (I.)
johnswentworth
Interfaces as a Scarce Resource
eukaryote
There’s no such thing as a tree (phylogenetically)
Scott Alexander
Is Science Slowing Down?
Martin Sustrik
Anti-social Punishment
johnswentworth
Transportation as a Constraint
Martin Sustrik
Research: Rescuers during the Holocaust
GeneSmith
Toni Kurz and the Insanity of Climbing Mountains
johnswentworth
Book Review: Design Principles of Biological Circuits
Elizabeth
Literature Review: Distributed Teams
Valentine
The Intelligent Social Web
eukaryote
Spaghetti Towers
Eli Tyre
Historical mathematicians exhibit a birth order effect too
johnswentworth
What Money Cannot Buy
Bird Concept
Unconscious Economics
Scott Alexander
Book Review: The Secret Of Our Success
johnswentworth
Specializing in Problems We Don't Understand
KatjaGrace
Why did everything take so long?
Ruby
[Answer] Why wasn't science invented in China?
Scott Alexander
Mental Mountains
L Rudolf L
A Disneyland Without Children
johnswentworth
Evolution of Modularity
johnswentworth
Science in a High-Dimensional World
Kaj_Sotala
My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms
Kaj_Sotala
Building up to an Internal Family Systems model
Steven Byrnes
My computational framework for the brain
Natália
Counter-theses on Sleep
abramdemski
What makes people intellectually active?
Bucky
Birth order effect found in Nobel Laureates in Physics
zhukeepa
How uniform is the neocortex?
JackH
Anti-Aging: State of the Art
Vaniver
Steelmanning Divination
KatjaGrace
Elephant seal 2
Zvi
Book Review: Going Infinite
Rafael Harth
Why it's so hard to talk about Consciousness
Duncan Sabien (Inactive)
Social Dark Matter
Eric Neyman
How much do you believe your results?
Malmesbury
The Talk: a brief explanation of sexual dimorphism
moridinamael
The Parable of the King and the Random Process
Henrik Karlsson
Cultivating a state of mind where new ideas are born

Practical

alkjash
Pain is not the unit of Effort
benkuhn
Staring into the abyss as a core life skill
Unreal
Rest Days vs Recovery Days
Duncan Sabien (Inactive)
In My Culture
juliawise
Notes from "Don't Shoot the Dog"
Elizabeth
Luck based medicine: my resentful story of becoming a medical miracle
johnswentworth
How To Write Quickly While Maintaining Epistemic Rigor
Duncan Sabien (Inactive)
Ruling Out Everything Else
johnswentworth
Paper-Reading for Gears
Elizabeth
Butterfly Ideas
Eliezer Yudkowsky
Your Cheerful Price
benkuhn
To listen well, get curious
Wei Dai
Forum participation as a research strategy
HoldenKarnofsky
Useful Vices for Wicked Problems
pjeby
The Curse Of The Counterfactual
Darmani
Leaky Delegation: You are not a Commodity
Adam Zerner
Losing the root for the tree
chanamessinger
The Onion Test for Personal and Institutional Honesty
Raemon
You Get About Five Words
HoldenKarnofsky
Learning By Writing
GeneSmith
How to have Polygenically Screened Children
AnnaSalamon
“PR” is corrosive; “reputation” is not.
Ruby
Do you fear the rock or the hard place?
johnswentworth
Slack Has Positive Externalities For Groups
Raemon
Limerence Messes Up Your Rationality Real Bad, Yo
mingyuan
Cryonics signup guide #1: Overview
catherio
microCOVID.org: A tool to estimate COVID risk from common activities
Valentine
Noticing the Taste of Lotus
orthonormal
The Loudest Alarm Is Probably False
Raemon
"Can you keep this confidential? How do you know?"
mingyuan
Guide to rationalist interior decorating
Screwtape
Loudly Give Up, Don't Quietly Fade

AI Strategy

paulfchristiano
Arguments about fast takeoff
Eliezer Yudkowsky
Six Dimensions of Operational Adequacy in AGI Projects
Ajeya Cotra
Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
paulfchristiano
What failure looks like
Daniel Kokotajlo
What 2026 looks like
gwern
It Looks Like You're Trying To Take Over The World
Daniel Kokotajlo
Cortés, Pizarro, and Afonso as Precedents for Takeover
Daniel Kokotajlo
The date of AI Takeover is not the day the AI takes over
Andrew_Critch
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)
paulfchristiano
Another (outer) alignment failure story
Ajeya Cotra
Draft report on AI timelines
Eliezer Yudkowsky
Biology-Inspired AGI Timelines: The Trick That Never Works
Daniel Kokotajlo
Fun with +12 OOMs of Compute
Wei Dai
AI Safety "Success Stories"
Eliezer Yudkowsky
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
HoldenKarnofsky
Reply to Eliezer on Biological Anchors
Richard_Ngo
AGI safety from first principles: Introduction
johnswentworth
The Plan
Rohin Shah
Reframing Superintelligence: Comprehensive AI Services as General Intelligence
lc
What an actually pessimistic containment strategy looks like
Eliezer Yudkowsky
MIRI announces new "Death With Dignity" strategy
KatjaGrace
Counterarguments to the basic AI x-risk case
Adam Scholl
Safetywashing
habryka
AI Timelines
evhub
Chris Olah’s views on AGI safety
So8res
Comments on Carlsmith's “Is power-seeking AI an existential risk?”
nostalgebraist
human psycholinguists: a critical appraisal
nostalgebraist
larger language models may disappoint you [or, an eternally unfinished draft]
Orpheus16
Speaking to Congressional staffers about AI risk
Tom Davidson
What a compute-centric framework says about AI takeoff speeds
abramdemski
The Parable of Predict-O-Matic
KatjaGrace
Let’s think about slowing down AI
Daniel Kokotajlo
Against GDP as a metric for timelines and takeoff speeds
Joe Carlsmith
Predictable updating about AI risk
Raemon
"Carefully Bootstrapped Alignment" is organizationally hard
KatjaGrace
We don’t trade with ants

Technical AI Safety

paulfchristiano
Where I agree and disagree with Eliezer
Eliezer Yudkowsky
Ngo and Yudkowsky on alignment difficulty
Andrew_Critch
Some AI research areas and their relevance to existential safety
1a3orn
EfficientZero: How It Works
elspood
Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment
So8res
Decision theory does not imply that we get to have nice things
Vika
Specification gaming examples in AI
Rafael Harth
Inner Alignment: Explain like I'm 12 Edition
evhub
An overview of 11 proposals for building safe advanced AI
TurnTrout
Reward is not the optimization target
johnswentworth
Worlds Where Iterative Design Fails
johnswentworth
Alignment By Default
johnswentworth
How To Go From Interpretability To Alignment: Just Retarget The Search
Alex Flint
Search versus design
abramdemski
Selection vs Control
Buck
AI Control: Improving Safety Despite Intentional Subversion
Eliezer Yudkowsky
The Rocket Alignment Problem
Eliezer Yudkowsky
AGI Ruin: A List of Lethalities
Mark Xu
The Solomonoff Prior is Malign
paulfchristiano
My research methodology
TurnTrout
Reframing Impact
Scott Garrabrant
Robustness to Scale
paulfchristiano
Inaccessible information
TurnTrout
Seeking Power is Often Convergently Instrumental in MDPs
So8res
A central AI alignment problem: capabilities generalization, and the sharp left turn
evhub
Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research
paulfchristiano
The strategy-stealing assumption
So8res
On how various plans miss the hard bits of the alignment challenge
abramdemski
Alignment Research Field Guide
johnswentworth
The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables
Buck
Language models seem to be much better than humans at next-token prediction
abramdemski
An Untrollable Mathematician Illustrated
abramdemski
An Orthodox Case Against Utility Functions
Veedrac
Optimality is the tiger, and agents are its teeth
Sam Ringer
Models Don't "Get Reward"
Alex Flint
The ground of optimization
johnswentworth
Selection Theorems: A Program For Understanding Agents
Rohin Shah
Coherence arguments do not entail goal-directed behavior
abramdemski
Embedded Agents
evhub
Risks from Learned Optimization: Introduction
nostalgebraist
chinchilla's wild implications
johnswentworth
Why Agent Foundations? An Overly Abstract Explanation
zhukeepa
Paul's research agenda FAQ
Eliezer Yudkowsky
Coherent decisions imply consistent utilities
paulfchristiano
Open question: are minimal circuits daemon-free?
evhub
Gradient hacking
janus
Simulators
LawrenceC
Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research]
TurnTrout
Humans provide an untapped wealth of evidence about alignment
Neel Nanda
A Mechanistic Interpretability Analysis of Grokking
Collin
How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme
evhub
Understanding “Deep Double Descent”
Quintin Pope
The shard theory of human values
TurnTrout
Inner and outer alignment decompose one hard problem into two extremely hard problems
Eliezer Yudkowsky
Challenges to Christiano’s capability amplification proposal
Scott Garrabrant
Finite Factored Sets
paulfchristiano
ARC's first technical report: Eliciting Latent Knowledge
Diffractor
Introduction To The Infra-Bayesianism Sequence
TurnTrout
Towards a New Impact Measure
LawrenceC
Natural Abstractions: Key Claims, Theorems, and Critiques
Zack_M_Davis
Alignment Implications of LLM Successes: a Debate in One Act
johnswentworth
Natural Latents: The Math
TurnTrout
Steering GPT-2-XL by adding an activation vector
Jessica Rumbelow
SolidGoldMagikarp (plus, prompt generation)
So8res
Deep Deceptiveness
Charbel-Raphaël
Davidad's Bold Plan for Alignment: An In-Depth Explanation
Charbel-Raphaël
Against Almost Every Theory of Impact of Interpretability
Joe Carlsmith
New report: "Scheming AIs: Will AIs fake alignment during training in order to get power?"
Eliezer Yudkowsky
GPTs are Predictors, not Imitators
peterbarnett
Labs should be explicit about why they are building AGI
HoldenKarnofsky
Discussion with Nate Soares on a key alignment difficulty
Jesse Hoogland
Neural networks generalize because of this one weird trick
paulfchristiano
My views on “doom”
technicalities
Shallow review of live agendas in alignment & safety
Vanessa Kosoy
The Learning-Theoretic Agenda: Status 2023
ryan_greenblatt
Improving the Welfare of AIs: A Nearcasted Proposal
#7
Seeing the Smoke

In early 2020, COVID-19 was spreading rapidly, but many people seemed hesitant to take precautions or prepare. Jacob Falkovich explores why people often wait for social permission before reacting to potential threats, even when the evidence is clear. He argues we should be willing to act on our own judgment rather than waiting for others.

by Jacob Falkovich
#10
Simulacra Levels and their Interactions

Zvi explores the four "simulacra levels" of communication and action, using the COVID-19 pandemic as an example: 1) literal truth, 2) trying to influence behavior, 3) signaling group membership, and 4) pure power games. He examines how these levels interact and the different strategies people use across them.

by Zvi
#22
Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems

Most Prisoner's Dilemmas are actually Stag Hunts in the iterated game, and most Stag Hunts are actually "Schelling games." You have to coordinate on a good equilibrium, but there are many good equilibria to choose from, which benefit different people to different degrees. This complicates the problem of cooperating.

by abramdemski
#26
Radical Probabilism

Dogmatic probabilism is the theory that all rational belief updates should be Bayesian updates. Radical probabilism is a more flexible theory which allows agents to radically change their beliefs, while still obeying some constraints. Abram examines how radical probabilism differs from dogmatic probabilism, and what implications the theory has for rational agents.

by abramdemski
#27
Reality-Revealing and Reality-Masking Puzzles

There are two kinds of puzzles: "reality-revealing puzzles" that help us understand the world better, and "reality-masking puzzles" that can inadvertently disable parts of our ability to see clearly. CFAR's work has involved both types as it has tried to help people reason about existential risk from AI while staying grounded. We need to be careful about disabling too many of our epistemic safeguards.

by AnnaSalamon
#33
The Treacherous Path to Rationality

The path to explicit reason is fraught with challenges. People often don't want to use explicit reason, and when they try to use it, they fail. Even if they succeed, they're punished socially. The post explores various obstacles on this path, including social pressure, strange memeplexes, and the "valley of bad rationality".

by Jacob Falkovich
#37
The Felt Sense: What, Why and How

The felt sense is a concept coined by psychologist Eugene Gendlin to describe a kind of pre-linguistic, physical sensation that represents some mental content. Kaj gives examples of felt senses, explains why they're useful to pay attention to, and gives tips on how to notice and work with them.

by Kaj_Sotala
#38
The First Sample Gives the Most Information

If you know nothing about a thing, the first example or sample gives you a disproportionate amount of information, often more than any subsequent sample. It lets you locate the idea in conceptspace, get a sense of what domain/scale/magnitude you're dealing with, and provides an anchor for further thinking.

by Mark Xu
13 · DirectedEvolution
The central point of this article was that conformism was causing society to treat COVID-19 with insufficient alarm. Its goal was to give its readership social sanction and motivation to change that pattern. One of its sub-arguments was that the media was succumbing to conformity. This claim came with an implication that this post was ahead of the curve, and that it was indicative of a pattern of success among rationalists in achieving real benefits, both altruistically (in motivating positive social change) and selfishly (in finding alpha).

I thought it would be useful to review 2020 COVID-19 media coverage through the month of February, up through Feb. 27th, which is when this post was published on Putanumonit. I also want to take a look at the stock market crash relative to the publication of this article.

Let's start with the stock market. The S&P500 fell about 13% from its peak on Feb. 9th to the week of Feb. 23rd-Mar. 1st, which is when this article was published. Jacob sold 10% of his stocks on Feb. 17th, which was still very early in the crash. The S&P500 went on to fall a total of 32% from that Feb. 9th peak until it bottomed out on Mar. 15th. At least some gains would have been made if stocks had been repurchased in the 5 months between Feb. 17th and early August 2020. I don't know how much profit Jacob realized, presuming he eventually reinvested. But this looks to me like a convincing story of Jacob finding alpha in an inefficient market, rather than stumbling into profits by accident. He didn't do it via insider knowledge or obsessive interest in some weird corner of the financial system. He did it by thinking about the basic facts of a situation that had the attention of the entire world, and being right where almost everybody else was making the wrong bet.

Let's focus on the media. The top US newspapers by circulation and with a national primary service area are USA Today, the Wall Street Journal, and the New York Times. I'm going to focus on coverage in
11 · Yoav Ravid
I remember this post very fondly. I often thought back to it, and it inspired some thoughts of my own about rationality (which I had trouble writing down and which are waiting in a draft to be written fully some day). I haven't used any of the phrases introduced here (Underperformance Swamp, Sinkholes of Sneer, Valley of Disintegration...), and I'm not sure whether that was the intention.

The post starts with the claim that rationalists "basically got everything about COVID-19 right and did so months ahead of the majority of government officials, journalists, and supposed experts". Since it's not the point of the post I won't review this claim in depth, but it seems basically true to me. Elizabeth's review here gives a few examples.

This post is about the difficulty and even danger in becoming a rationalist, or more generally, in using explicit reasoning (Intuition and Social Cognition being the alternatives).

The first difficulty is that explicit reasoning alone often fails to outperform intuition and social cognition where those perform well. I think this is true, and as the rationality community evolved it came to appreciate intuition and social cognition more, without devaluing explicit reason.

The second is persevering through the sneer and social pressure that come from trying to use explicit reason to do things, often coming to very different approaches from other people, and often also failing.

The third is navigating the strange status hierarchy in the community, which mostly doesn't depend on regular things like attractiveness but more often on our ability to apply explicit reason effectively, as well as being scared by strange memes like AI risk and cryonics. I don't know to what extent the first part is true in the physical communities, but it definitely is in the virtual community.

The fourth is where the danger comes in. When you're in the Valley of Bad Rationality your life can get worse, and if you don't get out of it some way it might stay worse. So
17 · DirectedEvolution
The goal of this post is to help us understand the similarities and differences between several different games, and to improve our intuitions about which game is the right default assumption when modeling real-world outcomes. My main objective with this review is to check the game theoretic claims, identify the points at which this post makes empirical assertions, and see if there are any worrisome oversights or gaps. Most of my fact-checking will just be resorting to Wikipedia.

Let's start with definitions of two key concepts.

Pareto-optimal: One dimension cannot improve without a second worsening.
Nash equilibrium: No player can do better by unilaterally changing their strategy.

Here's the payoff matrix from the one-shot Prisoner's Dilemma and how it relates to these key concepts:

                    B stays silent      B betrays
A stays silent      Pareto-optimal
A betrays                               Nash equilibrium

This article outlines three possible relationships between Pareto-optimality and Nash equilibrium:

1. There are no Pareto-optimal Nash equilibria.
2. There is a single Pareto-optimal Nash equilibrium, and another equilibrium that is not Pareto-optimal.
3. There are multiple Pareto-optimal Nash equilibria, which benefit different players to different extents.

The author attempts to argue which of these arrangements best describes the world we live in, and makes the best default assumption when interpreting real-world situations as games. The claim is that real-world situations most often resemble iterated PDs, which have multiple Pareto-optimal Nash equilibria benefitting different players to different extents.

I will attempt to show that the author's conclusion only applies when modeling superrational entities, or entities with an unbounded lifespan, and give some examples where this might be relevant. Iterated Prisoner's Dilemma is a little more complex than the author states. If the players know how many turns the game will be played for, or if the game has a known upper limit of t
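To make these two concepts concrete, here is a minimal sketch (not from the review itself) that enumerates the outcomes of the one-shot Prisoner's Dilemma above and tests each for Pareto-optimality and Nash equilibrium. The specific payoff numbers are the conventional ones and are an assumption for illustration:

from itertools import product

ACTIONS = ["silent", "betray"]

# payoffs[(a, b)] = (A's payoff, B's payoff); higher is better.
# Conventional one-shot Prisoner's Dilemma values, assumed for illustration.
payoffs = {
    ("silent", "silent"): (-1, -1),
    ("silent", "betray"): (-3, 0),
    ("betray", "silent"): (0, -3),
    ("betray", "betray"): (-2, -2),
}

def pareto_optimal(outcome):
    # No other outcome makes one player better off without making the other worse off.
    pa, pb = payoffs[outcome]
    return not any(
        qa >= pa and qb >= pb and (qa > pa or qb > pb)
        for qa, qb in payoffs.values()
    )

def nash_equilibrium(outcome):
    # Neither player can do better by unilaterally changing their own action.
    a, b = outcome
    return (payoffs[(a, b)][0] == max(payoffs[(x, b)][0] for x in ACTIONS)
            and payoffs[(a, b)][1] == max(payoffs[(a, y)][1] for y in ACTIONS))

for outcome in product(ACTIONS, repeat=2):
    print(outcome, "Pareto-optimal:", pareto_optimal(outcome),
          "Nash equilibrium:", nash_equilibrium(outcome))

Running this shows that mutual betrayal is the only Nash equilibrium and is also the only outcome that is not Pareto-optimal, so the one-shot game falls under the first of the three arrangements listed above: it has no Pareto-optimal Nash equilibria.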
24 · Bucky
A short note to start the review: the author isn't happy with how this post is communicated. I agree it could be clearer, and this is the reason I'm scoring this 4 instead of 9. The actual content seems very useful to me. AllAmericanBreakfast has already reviewed this from a theoretical point of view, but I wanted to look at it from a practical standpoint.

***

To test whether the conclusions of this post were true in practice I decided to take 5 examples from the Wikipedia page on the Prisoner's dilemma and see if they were better modeled by Stag Hunt or Schelling Pub:

* Climate negotiations
* Relationships
* Marketing
* Doping in sport
* Cold war nuclear arms race

Detailed analysis of each is at the bottom of the review. Of these 5, 3 (Climate, Relationships, Arms race) seem to me to be very well modeled by Schelling Pub.

Due to the constraints on communication allowed between rival companies it is difficult to see marketing (where more advertising = defect) as a Schelling Pub game. There probably is an underlying structure which looks a bit like Schelling Pub, but it is very hard to move between Nash Equilibria. As a result I would say that Prisoner's Dilemma is a more natural model for marketing.

The choice of whether to dope in sport is probably best modeled as a Prisoner's dilemma with an enforcing authority which punishes defection. As a result, I don't think any of the 3 games are a particularly good model for any individual's choice. However, negotiations on setting up the enforcing authority and the rules under which it operates are more like Schelling Pub.

Originally I thought this should maybe count as half a point for the post, but thinking about it further I would say this is actually a very strong example of what the post is talking about – if your individual choice looks like a Prisoner's Dilemma then look for ways to make it into a Schelling Pub. If this involves setting up a central enforcement agency, then negotiate to make that happen. So I
17 · Zvi
This is a long and good post with a title and early framing advertising a shorter and better post that does not fully exist, but would be great if it did. The actual post here is something more like "CFAR and the Quest to Change Core Beliefs While Staying Sane."

The basic problem is that people by default have belief systems that allow them to operate normally in everyday life, and that protect them against weird beliefs and absurd actions, especially ones that would extract a lot of resources in ways that don't clearly pay off. And they similarly protect those belief systems in order to protect that ability to operate in everyday life, and to protect their social relationships, and their ability to be happy and get out of bed and care about their friends and so on. A bunch of these defenses are anti-epistemic, or can function that way in many contexts, and stand in the way of big changes in life (change jobs, relationships, religions, friend groups, goals, etc etc).

The hard problem CFAR is largely trying to solve in this telling, and that the sequences try to solve in this telling, is to disable such systems enough to allow good things, without also allowing bad things, or to find ways to cope with the subsequent bad things slash disruptions. When you free people to be shaken out of their default systems, they tend to go to various extremes that are unhealthy for them, like optimizing narrowly for one goal instead of many goals, or having trouble spending resources (including time) on themselves at all, or being in the moment and living life, And That's Terrible because it doesn't actually lead to better larger outcomes in addition to making those people worse off themselves.

These are good things that need to be discussed more, but the title and introduction promise something I find even more interesting. In that taxonomy, the key difference is that there are games one can play, things one can be optimizing for or responding to, incentives one can creat
12 · Raemon
This is the post that first spelled out how Simulacra levels worked in a way that seemed fully comprehensive, which I understood. I really like the different archetypes (i.e. Oracle, Trickster, Sage, Lawyer, etc). They showcased how the different levels blend together, while still having distinct properties that made sense to reason about separately. Each archetype felt very natural to me, like I could imagine people operating in that way.

The description of Level 4 here still feels a bit inarticulate/confused. This post is mostly compatible with the 2x2 grid version, but it makes the additional claim that Level 4 people don't know how to make plans, and are 'particularly hard to grok.' It bundles in some worldview from Immoral Mazes / Raoian Sociopaths.

For me, a big outstanding question re: Simulacra is "does it actually make sense to bundle the Kafkaesque sociopath who can't make plans as an explicit part of Level 4?" I think this is a kinda empirical question. An example of the sort of evidence that'd persuade me is "among politicians or middle managers who spend most of their time optimizing for power, interacting with facts and tribal affiliations as a game, what proportion of them actually lose their ability to make plans, or otherwise become more... lovecraftian or whatever?" Is it more like "70%", "50%", "10%"? It's plausible to me that there's a relatively small number of actors who stand out as particularly extreme (and then get focused on for toxoplasma of rage reasons).

Or, rather: if I simply describe Primarily Level 4 people as "holding social-signaling as object", am I actually missing anything? Do they tend to have any attributes? What?

...

I do think this post is among the best intros to the Simulacra Levels concept, and think it's worth polishing up slightly. I assume Zvi has thought a bit more about Level 4 by now. If it still seems like there's something Importantly, Confusingly Up With Them, I'm hoping that can be spelled out a bit more. (I think my fav
10 · Raemon
This post feels like an important part of what I've referred to as The CFAR Development Branch Git Merge. Between 2013ish and 2017ish, a lot of rationality development happened in person, which built off the sequences. I think some of that work turned out to be dead ends, or a bit confused, or not as important as we thought at the time. But a lot of it has been quite essential to rationality as a practice. I'm glad it has gotten written up.

The felt sense, and focusing, have been two surprisingly important tools for me. One use case not quite mentioned here, and I think perhaps the most important one for rationality, is getting a handle on what I actually think. Kaj discusses using it for figuring out how to communicate better, getting a sense of what your interlocutor is trying to understand and how it contrasts with what you're trying to say. But I think this is also useful in single-player mode, i.e. I say "I think X", and then I notice "no, there's a subtle wrongness to my description of what X is." This is helpful both for clarifying my beliefs about subtle topics, and for following fruitful trails of brainstorming.