The Best of LessWrong

When posts turn more than a year old, the LessWrong community reviews and votes on how well they have stood the test of time. These are the posts that have ranked the highest for all years since 2018 (when our annual tradition of choosing the least wrong of LessWrong began).

For the years 2018, 2019 and 2020 we also published physical books with the results of our annual vote, which you can buy and learn more about here.

Rationality

Eliezer Yudkowsky
Local Validity as a Key to Sanity and Civilization
Buck
"Other people are wrong" vs "I am right"
Mark Xu
Strong Evidence is Common
TsviBT
Please don't throw your mind away
Raemon
Noticing Frame Differences
johnswentworth
You Are Not Measuring What You Think You Are Measuring
johnswentworth
Gears-Level Models are Capital Investments
Hazard
How to Ignore Your Emotions (while also thinking you're awesome at emotions)
Scott Garrabrant
Yes Requires the Possibility of No
Ben Pace
A Sketch of Good Communication
Eliezer Yudkowsky
Meta-Honesty: Firming Up Honesty Around Its Edge-Cases
Duncan Sabien (Inactive)
Lies, Damn Lies, and Fabricated Options
Scott Alexander
Trapped Priors As A Basic Problem Of Rationality
Duncan Sabien (Inactive)
Split and Commit
Duncan Sabien (Inactive)
CFAR Participant Handbook now available to all
johnswentworth
What Are You Tracking In Your Head?
Mark Xu
The First Sample Gives the Most Information
Duncan Sabien (Inactive)
Shoulder Advisors 101
Scott Alexander
Varieties Of Argumentative Experience
Eliezer Yudkowsky
Toolbox-thinking and Law-thinking
alkjash
Babble
Zack_M_Davis
Feature Selection
abramdemski
Mistakes with Conservation of Expected Evidence
Kaj_Sotala
The Felt Sense: What, Why and How
Duncan Sabien (Inactive)
Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions)
Ben Pace
The Costly Coordination Mechanism of Common Knowledge
Jacob Falkovich
Seeing the Smoke
Duncan Sabien (Inactive)
Basics of Rationalist Discourse
alkjash
Prune
johnswentworth
Gears vs Behavior
Elizabeth
Epistemic Legibility
Daniel Kokotajlo
Taboo "Outside View"
Duncan Sabien (Inactive)
Sazen
AnnaSalamon
Reality-Revealing and Reality-Masking Puzzles
Eliezer Yudkowsky
ProjectLawful.com: Eliezer's latest story, past 1M words
Eliezer Yudkowsky
Self-Integrity and the Drowning Child
Jacob Falkovich
The Treacherous Path to Rationality
Scott Garrabrant
Tyranny of the Epistemic Majority
alkjash
More Babble
abramdemski
Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems
Raemon
Being a Robust Agent
Zack_M_Davis
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists
Benquo
Reason isn't magic
habryka
Integrity and accountability are core parts of rationality
Raemon
The Schelling Choice is "Rabbit", not "Stag"
Diffractor
Threat-Resistant Bargaining Megapost: Introducing the ROSE Value
Raemon
Propagating Facts into Aesthetics
johnswentworth
Simulacrum 3 As Stag-Hunt Strategy
LoganStrohl
Catching the Spark
Jacob Falkovich
Is Rationalist Self-Improvement Real?
Benquo
Excerpts from a larger discussion about simulacra
Zvi
Simulacra Levels and their Interactions
abramdemski
Radical Probabilism
sarahconstantin
Naming the Nameless
AnnaSalamon
Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality"
Eric Raymond
Rationalism before the Sequences
Owain_Evans
The Rationalists of the 1950s (and before) also called themselves “Rationalists”
Raemon
Feedbackloop-first Rationality
LoganStrohl
Fucking Goddamn Basics of Rationalist Discourse
Raemon
Tuning your Cognitive Strategies
johnswentworth
Lessons On How To Get Things Right On The First Try

Optimization

So8res
Focus on the places where you feel shocked everyone's dropping the ball
Jameson Quinn
A voting theory primer for rationalists
sarahconstantin
The Pavlov Strategy
Zvi
Prediction Markets: When Do They Work?
johnswentworth
Being the (Pareto) Best in the World
alkjash
Is Success the Enemy of Freedom? (Full)
johnswentworth
Coordination as a Scarce Resource
AnnaSalamon
What should you change in response to an "emergency"? And AI risk
jasoncrawford
How factories were made safe
HoldenKarnofsky
All Possible Views About Humanity's Future Are Wild
jasoncrawford
Why has nuclear power been a flop?
Zvi
Simple Rules of Law
Scott Alexander
The Tails Coming Apart As Metaphor For Life
Zvi
Asymmetric Justice
Jeffrey Ladish
Nuclear war is unlikely to cause human extinction
Elizabeth
Power Buys You Distance From The Crime
Eliezer Yudkowsky
Is Clickbait Destroying Our General Intelligence?
Spiracular
Bioinfohazards
Zvi
Moloch Hasn’t Won
Zvi
Motive Ambiguity
Benquo
Can crimes be discussed literally?
johnswentworth
When Money Is Abundant, Knowledge Is The Real Wealth
GeneSmith
Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible
HoldenKarnofsky
This Can't Go On
Said Achmiz
The Real Rules Have No Exceptions
Lars Doucet
Lars Doucet's Georgism series on Astral Codex Ten
johnswentworth
Working With Monsters
jasoncrawford
Why haven't we celebrated any major achievements lately?
abramdemski
The Credit Assignment Problem
Martin Sustrik
Inadequate Equilibria vs. Governance of the Commons
Scott Alexander
Studies On Slack
KatjaGrace
Discontinuous progress in history: an update
Scott Alexander
Rule Thinkers In, Not Out
Raemon
The Amish, and Strategic Norms around Technology
Zvi
Blackmail
HoldenKarnofsky
Nonprofit Boards are Weird
Wei Dai
Beyond Astronomical Waste
johnswentworth
Making Vaccine
jefftk
Make more land
jenn
Things I Learned by Spending Five Thousand Hours In Non-EA Charities
Richard_Ngo
The ants and the grasshopper
So8res
Enemies vs Malefactors
Elizabeth
Change my mind: Veganism entails trade-offs, and health is one of the axes

World

Kaj_Sotala
Book summary: Unlocking the Emotional Brain
Ben
The Redaction Machine
Samo Burja
On the Loss and Preservation of Knowledge
Alex_Altair
Introduction to abstract entropy
Martin Sustrik
Swiss Political System: More than You ever Wanted to Know (I.)
johnswentworth
Interfaces as a Scarce Resource
eukaryote
There’s no such thing as a tree (phylogenetically)
Scott Alexander
Is Science Slowing Down?
Martin Sustrik
Anti-social Punishment
johnswentworth
Transportation as a Constraint
Martin Sustrik
Research: Rescuers during the Holocaust
GeneSmith
Toni Kurz and the Insanity of Climbing Mountains
johnswentworth
Book Review: Design Principles of Biological Circuits
Elizabeth
Literature Review: Distributed Teams
Valentine
The Intelligent Social Web
eukaryote
Spaghetti Towers
Eli Tyre
Historical mathematicians exhibit a birth order effect too
johnswentworth
What Money Cannot Buy
Bird Concept
Unconscious Economics
Scott Alexander
Book Review: The Secret Of Our Success
johnswentworth
Specializing in Problems We Don't Understand
KatjaGrace
Why did everything take so long?
Ruby
[Answer] Why wasn't science invented in China?
Scott Alexander
Mental Mountains
L Rudolf L
A Disneyland Without Children
johnswentworth
Evolution of Modularity
johnswentworth
Science in a High-Dimensional World
Kaj_Sotala
My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms
Kaj_Sotala
Building up to an Internal Family Systems model
Steven Byrnes
My computational framework for the brain
Natália
Counter-theses on Sleep
abramdemski
What makes people intellectually active?
Bucky
Birth order effect found in Nobel Laureates in Physics
zhukeepa
How uniform is the neocortex?
JackH
Anti-Aging: State of the Art
Vaniver
Steelmanning Divination
KatjaGrace
Elephant seal 2
Zvi
Book Review: Going Infinite
Rafael Harth
Why it's so hard to talk about Consciousness
Duncan Sabien (Inactive)
Social Dark Matter
Eric Neyman
How much do you believe your results?
Malmesbury
The Talk: a brief explanation of sexual dimorphism
moridinamael
The Parable of the King and the Random Process
Henrik Karlsson
Cultivating a state of mind where new ideas are born

Practical

alkjash
Pain is not the unit of Effort
benkuhn
Staring into the abyss as a core life skill
Unreal
Rest Days vs Recovery Days
Duncan Sabien (Inactive)
In My Culture
juliawise
Notes from "Don't Shoot the Dog"
Elizabeth
Luck based medicine: my resentful story of becoming a medical miracle
johnswentworth
How To Write Quickly While Maintaining Epistemic Rigor
Duncan Sabien (Inactive)
Ruling Out Everything Else
johnswentworth
Paper-Reading for Gears
Elizabeth
Butterfly Ideas
Eliezer Yudkowsky
Your Cheerful Price
benkuhn
To listen well, get curious
Wei Dai
Forum participation as a research strategy
HoldenKarnofsky
Useful Vices for Wicked Problems
pjeby
The Curse Of The Counterfactual
Darmani
Leaky Delegation: You are not a Commodity
Adam Zerner
Losing the root for the tree
chanamessinger
The Onion Test for Personal and Institutional Honesty
Raemon
You Get About Five Words
HoldenKarnofsky
Learning By Writing
GeneSmith
How to have Polygenically Screened Children
AnnaSalamon
“PR” is corrosive; “reputation” is not.
Ruby
Do you fear the rock or the hard place?
johnswentworth
Slack Has Positive Externalities For Groups
Raemon
Limerence Messes Up Your Rationality Real Bad, Yo
mingyuan
Cryonics signup guide #1: Overview
catherio
microCOVID.org: A tool to estimate COVID risk from common activities
Valentine
Noticing the Taste of Lotus
orthonormal
The Loudest Alarm Is Probably False
Raemon
"Can you keep this confidential? How do you know?"
mingyuan
Guide to rationalist interior decorating
Screwtape
Loudly Give Up, Don't Quietly Fade

AI Strategy

paulfchristiano
Arguments about fast takeoff
Eliezer Yudkowsky
Six Dimensions of Operational Adequacy in AGI Projects
Ajeya Cotra
Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
paulfchristiano
What failure looks like
Daniel Kokotajlo
What 2026 looks like
gwern
It Looks Like You're Trying To Take Over The World
Daniel Kokotajlo
Cortés, Pizarro, and Afonso as Precedents for Takeover
Daniel Kokotajlo
The date of AI Takeover is not the day the AI takes over
Andrew_Critch
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)
paulfchristiano
Another (outer) alignment failure story
Ajeya Cotra
Draft report on AI timelines
Eliezer Yudkowsky
Biology-Inspired AGI Timelines: The Trick That Never Works
Daniel Kokotajlo
Fun with +12 OOMs of Compute
Wei Dai
AI Safety "Success Stories"
Eliezer Yudkowsky
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
HoldenKarnofsky
Reply to Eliezer on Biological Anchors
Richard_Ngo
AGI safety from first principles: Introduction
johnswentworth
The Plan
Rohin Shah
Reframing Superintelligence: Comprehensive AI Services as General Intelligence
lc
What an actually pessimistic containment strategy looks like
Eliezer Yudkowsky
MIRI announces new "Death With Dignity" strategy
KatjaGrace
Counterarguments to the basic AI x-risk case
Adam Scholl
Safetywashing
habryka
AI Timelines
evhub
Chris Olah’s views on AGI safety
So8res
Comments on Carlsmith's “Is power-seeking AI an existential risk?”
nostalgebraist
human psycholinguists: a critical appraisal
nostalgebraist
larger language models may disappoint you [or, an eternally unfinished draft]
Orpheus16
Speaking to Congressional staffers about AI risk
Tom Davidson
What a compute-centric framework says about AI takeoff speeds
abramdemski
The Parable of Predict-O-Matic
KatjaGrace
Let’s think about slowing down AI
Daniel Kokotajlo
Against GDP as a metric for timelines and takeoff speeds
Joe Carlsmith
Predictable updating about AI risk
Raemon
"Carefully Bootstrapped Alignment" is organizationally hard
KatjaGrace
We don’t trade with ants

Technical AI Safety

paulfchristiano
Where I agree and disagree with Eliezer
Eliezer Yudkowsky
Ngo and Yudkowsky on alignment difficulty
Andrew_Critch
Some AI research areas and their relevance to existential safety
1a3orn
EfficientZero: How It Works
elspood
Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment
So8res
Decision theory does not imply that we get to have nice things
Vika
Specification gaming examples in AI
Rafael Harth
Inner Alignment: Explain like I'm 12 Edition
evhub
An overview of 11 proposals for building safe advanced AI
TurnTrout
Reward is not the optimization target
johnswentworth
Worlds Where Iterative Design Fails
johnswentworth
Alignment By Default
johnswentworth
How To Go From Interpretability To Alignment: Just Retarget The Search
Alex Flint
Search versus design
abramdemski
Selection vs Control
Buck
AI Control: Improving Safety Despite Intentional Subversion
Eliezer Yudkowsky
The Rocket Alignment Problem
Eliezer Yudkowsky
AGI Ruin: A List of Lethalities
Mark Xu
The Solomonoff Prior is Malign
paulfchristiano
My research methodology
TurnTrout
Reframing Impact
Scott Garrabrant
Robustness to Scale
paulfchristiano
Inaccessible information
TurnTrout
Seeking Power is Often Convergently Instrumental in MDPs
So8res
A central AI alignment problem: capabilities generalization, and the sharp left turn
evhub
Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research
paulfchristiano
The strategy-stealing assumption
So8res
On how various plans miss the hard bits of the alignment challenge
abramdemski
Alignment Research Field Guide
johnswentworth
The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables
Buck
Language models seem to be much better than humans at next-token prediction
abramdemski
An Untrollable Mathematician Illustrated
abramdemski
An Orthodox Case Against Utility Functions
Veedrac
Optimality is the tiger, and agents are its teeth
Sam Ringer
Models Don't "Get Reward"
Alex Flint
The ground of optimization
johnswentworth
Selection Theorems: A Program For Understanding Agents
Rohin Shah
Coherence arguments do not entail goal-directed behavior
abramdemski
Embedded Agents
evhub
Risks from Learned Optimization: Introduction
nostalgebraist
chinchilla's wild implications
johnswentworth
Why Agent Foundations? An Overly Abstract Explanation
zhukeepa
Paul's research agenda FAQ
Eliezer Yudkowsky
Coherent decisions imply consistent utilities
paulfchristiano
Open question: are minimal circuits daemon-free?
evhub
Gradient hacking
janus
Simulators
LawrenceC
Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research]
TurnTrout
Humans provide an untapped wealth of evidence about alignment
Neel Nanda
A Mechanistic Interpretability Analysis of Grokking
Collin
How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme
evhub
Understanding “Deep Double Descent”
Quintin Pope
The shard theory of human values
TurnTrout
Inner and outer alignment decompose one hard problem into two extremely hard problems
Eliezer Yudkowsky
Challenges to Christiano’s capability amplification proposal
Scott Garrabrant
Finite Factored Sets
paulfchristiano
ARC's first technical report: Eliciting Latent Knowledge
Diffractor
Introduction To The Infra-Bayesianism Sequence
TurnTrout
Towards a New Impact Measure
LawrenceC
Natural Abstractions: Key Claims, Theorems, and Critiques
Zack_M_Davis
Alignment Implications of LLM Successes: a Debate in One Act
johnswentworth
Natural Latents: The Math
TurnTrout
Steering GPT-2-XL by adding an activation vector
Jessica Rumbelow
SolidGoldMagikarp (plus, prompt generation)
So8res
Deep Deceptiveness
Charbel-Raphaël
Davidad's Bold Plan for Alignment: An In-Depth Explanation
Charbel-Raphaël
Against Almost Every Theory of Impact of Interpretability
Joe Carlsmith
New report: "Scheming AIs: Will AIs fake alignment during training in order to get power?"
Eliezer Yudkowsky
GPTs are Predictors, not Imitators
peterbarnett
Labs should be explicit about why they are building AGI
HoldenKarnofsky
Discussion with Nate Soares on a key alignment difficulty
Jesse Hoogland
Neural networks generalize because of this one weird trick
paulfchristiano
My views on “doom”
technicalities
Shallow review of live agendas in alignment & safety
Vanessa Kosoy
The Learning-Theoretic Agenda: Status 2023
ryan_greenblatt
Improving the Welfare of AIs: A Nearcasted Proposal
#4
Social Dark Matter

There are many things that people are socially punished for revealing, so they hide them, which means we systematically underestimate how common they are. And we tend to assume the most extreme versions of those things are representative, when in reality most cases are much less extreme. 

by Duncan Sabien (Inactive)
#5
Book Review: The Secret Of Our Success

The Secret of Our Success argues that cultural traditions have had a lot of time to evolve. So seemingly arbitrary cultural practices may actually encode important information, even if the practitioners can't tell you why. 

by Scott Alexander
#6
Anti-social Punishment

Why do some societies exhibit more antisocial punishment than others? Martin explores both some literature on the subject, and his own experience living in a country where "punishment of cooperators" was fairly common.

by Martin Sustrik
#7
Book summary: Unlocking the Emotional Brain

If the thesis in Unlocking the Emotional Brain is even half-right, it may be one of the most important books that I have read. It claims to offer a neuroscience-grounded, comprehensive model of how effective therapy works. In so doing, it also happens to formulate its theory in terms of belief updating, helping explain how the brain models the world and what kinds of techniques allow us to actually change our minds.

by Kaj_Sotala
#7
How much do you believe your results?

When you encounter a study, always ask yourself how much you believe its results. In Bayesian terms, this means thinking about the correct amount for the study to update you away from your priors. For a noisy study, the answer may well be "pretty much not at all."

by Eric Neyman
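One standard way to make that shrinkage concrete (a Gaussian sketch, not from the post itself): with prior $X \sim \mathcal{N}(\mu_0, \sigma_0^2)$ and a study reporting $y = X + \varepsilon$, where $\varepsilon \sim \mathcal{N}(0, \sigma^2)$,

$$\mathbb{E}[X \mid y] \;=\; \frac{\sigma^2}{\sigma_0^2 + \sigma^2}\,\mu_0 \;+\; \frac{\sigma_0^2}{\sigma_0^2 + \sigma^2}\,y,$$

so as the study's noise $\sigma^2$ grows, the weight on $y$ goes to zero and the posterior stays essentially at the prior.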

#11
What Money Cannot Buy

Money can buy a lot of things, but it can't buy expertise. In fields where performance is hard to judge, simply throwing money at the problem won't guarantee good results – it's too easy to be fooled. Even kings and governments can't necessarily buy their way to the best solutions.

by johnswentworth
#12
The Intelligent Social Web

Social reality and culture work a lot like improv comedy. We often don't know "who we are" or what's going on socially, but everyone unconsciously tries to establish expectations of one another. Understanding this dynamic can give you more freedom to change your role in social interactions. 

by Valentine
#12
Science in a High-Dimensional World

In a universe with billions of variables which could plausibly influence an outcome, how do we actually do science? John gives a model for "gears-level science": look for mediation, hunt down sources of randomness, rule out the influence of all the other variables in the universe.

by johnswentworth
#15
Is Science Slowing Down?

Scott reviews a paper by Bloom, Jones, Van Reenen & Webb which argues that scientific progress is slowing down, as measured by outputs per researcher. Scott argues that this is actually the expected result: constant progress in response to exponentially increasing inputs should be our null hypothesis, based on historical trends.

by Scott Alexander
#17
My computational framework for the brain

Steve Byrnes lays out his 7 guiding principles for understanding how the brain works computationally. He argues the neocortex uses a single general learning algorithm that starts as a blank slate, while the subcortex contains hard-coded instincts and steers the neocortex toward biologically adaptive behaviors.

by Steven Byrnes
#20
Anti-Aging: State of the Art

Aging, which kills 100,000 people per day, may be solvable. Here's a summary of the most promising anti-aging research, including parabiosis, metabolic manipulation, senolytics, and cellular reprogramming. 

by JackH
#21
Interfaces as a Scarce Resource

The structure of things-humans-want does not always match the structure of the real world, or the structure of how-other-humans-see-the-world. When structures don't match, someone or something needs to serve as an interface, translating between the two. Interfaces between complex systems and human desires are often a scarce resource.

by johnswentworth
#21
There’s no such thing as a tree (phylogenetically)

Trees are not a biologically consistent category. They're just something that keeps happening in lots of different groups of plants. This is a fun fact, but it's also an interesting demonstration of how our useful everyday categories often don't map well to the underlying structure of reality.

by eukaryote
#25
My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms

Frustrated by claims that "enlightenment" and similar meditative/introspective practices can't be explained and that you only understand if you experience them, Kaj set out to write his own detailed gears-level, non-mysterious, non-"woo" explanation of how meditation, etc., work in the same way you might explain the operation of an internal combustion engine.

by Kaj_Sotala
#29
Unconscious Economics

There are at least three ways in which incentives affect behaviour: consciously motivating agents, unconsciously reinforcing certain behaviours, and selection effects.

Jacob argues that #2 and probably #3 are more important, but much less talked about.

by Bird Concept
#29
A Disneyland Without Children

Two astronauts investigate an automated planet covered in factories still churning out products, trying to understand what happened to its inhabitants.

by L Rudolf L
#30
Introduction to abstract entropy

In the course of researching optimization, Alex decided that he had to really understand what entropy is. But he found the existing resources (Wikipedia, etc.) so poor that it seemed important to write a better one; other resources were only concerned with the application of the concept in their particular sub-domain. Here, Alex aims to synthesize the abstract concept of entropy, showing what's so deep and fundamental about it.

by Alex_Altair
#30
The Talk: a brief explanation of sexual dimorphism

Malmesbury explains why sexual dimorphism evolved. Starting with asexual reproduction in single-celled organisms, he traces how the need to avoid genetic hitch-hiking led to sexual reproduction, then the evolution of two distinct sexes, and finally to sexual selection and exaggerated sexual traits. The process was driven by a series of evolutionary traps that were difficult to escape once entered. 

by Malmesbury
#31
Spaghetti Towers

Here’s a pattern I’d like to be able to talk about. It might be known under a certain name somewhere, but if it is, I don’t know it. I call it a Spaghetti Tower. It shows up in large complex systems that are built haphazardly.

by eukaryote
#32
The Redaction Machine

On the 3rd of October 2351 a machine flared to life. Huge energies coursed into it via cables, only to leave moments later as heat dumped unwanted into its radiators. With an enormous puff the machine unleashed sixty years of human metabolic entropy into superheated steam.

In the heart of the machine was Jane, a person of the early 21st century.

by Ben
#33
Research: Rescuers during the Holocaust

People who helped Jews during WWII are intriguing. They appear to be some kind of moral supermen. They had almost nothing to gain and everything to lose. How did they differ from the general population? Can we do anything to get more of such people today?

by Martin Sustrik
#34
How uniform is the neocortex?

The neocortex has been hypothesized to be uniformly composed of general-purpose data-processing modules. What does the currently available evidence suggest about this hypothesis? Alex Zhu explores various pieces of evidence, including deep learning neural networks and predictive coding theories of brain function.

by zhukeepa
#34
The Parable of the King and the Random Process

When advisors disagree wildly about when the rains will come, the king tries to average their predictions. His advisors explain why this is a terrible idea – he needs to either decide which model is right or plan for both possibilities.

by moridinamael
#35
On the Loss and Preservation of Knowledge

A tradition of knowledge is a body of knowledge that has been consecutively and successfully worked on by multiple generations of scholars or practitioners. This post explores the difference between living traditions (with all the necessary pieces to preserve and build knowledge), and dead traditions (where crucial context has been lost).

by Samo Burja
#36
Toni Kurz and the Insanity of Climbing Mountains

In 1936, four men attempted to climb the Eigerwand, the north face of the Eiger. Their harrowing story ended in tragedy, with the last survivor dangling from a rope just meters away from rescue before succumbing. GeneSmith reflects on what drives people to take such extreme risks for seemingly little practical benefit.

by GeneSmith
#37
What makes people intellectually active?

What causes some people to develop extensive frameworks of ideas rather than remain primarily consumers of ideas? There is something incomplete about my model of people doing this vs not doing this. I expect more people to have more ideas than they do.

A question post, which received many thoughtful answers.

by abramdemski
#37
Cultivating a state of mind where new ideas are born

Innovative work requires solitude, and the ability to resist social pressures. Henrik examines how Grothendieck and Bergman approached this, and lists various techniques creative people use to access and maintain this mental state.

by Henrik Karlsson
#38
Why did everything take so long?

One of the biggest intuitive mysteries to me is how humanity took so long to do anything. Humans have been 'behaviorally modern' for about 50 thousand years. And apparently didn't invent, for instance, rope until 28 thousand years ago. Why did everything take so long?

by KatjaGrace
#38
Elephant seal 2
by KatjaGrace
#40
Historical mathematicians exhibit a birth order effect too

In the 2012 LessWrong survey, it turned out LessWrongers were 22% more likely than expected to be a first-born child. Later, a MIRI researcher wondered off-handedly if great mathematicians (who plausibly share some important features with LessWrongers), also exhibit this same trend towards being first born.

The short answer: Yes, they do, as near as I can tell, but not as strongly as LessWrongers.

by Eli Tyre
#41
Swiss Political System: More than You ever Wanted to Know (I.)

The Swiss political system is known for its extensive use of direct democracy. This post dives deep into how that system works, exploring the different types of referenda, their history, impacts, and quirks. It's a detailed look at a unique political system that has managed to largely avoid polarization. 

by Martin Sustrik
#42
Birth order effect found in Nobel Laureates in Physics

Analyzing Nobel Laureates in Physics, there's a statistically significant birth order effect: they're 10 percentage points more likely to be firstborn than chance would predict. This effect is smaller than seen in the rationalist community (22 points) or historical mathematicians (16.7 points), but still interesting. 

by Bucky
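For readers who want to see the shape of such a test, here is a minimal sketch in Python (a hypothetical reconstruction with placeholder counts, not Bucky's actual data or code):

```python
# Minimal sketch of the kind of test behind these birth-order claims
# (a reconstruction, not Bucky's actual code; the counts below are
# placeholders, not the real survey data).
from scipy.stats import binomtest

# Family sizes (number of children) reported by hypothetical laureates.
family_sizes = [2, 3, 1, 4, 2, 2, 5, 3, 2, 1, 3, 2]
firstborn_count = 9  # how many of them were firstborn (placeholder)

# Under the chance hypothesis, a person from an n-child family is
# firstborn with probability 1/n. Using the average of these as a single
# binomial p is a simplification (the exact null is Poisson-binomial).
p_chance = sum(1 / n for n in family_sizes) / len(family_sizes)

result = binomtest(firstborn_count, n=len(family_sizes), p=p_chance,
                   alternative='greater')
print(f"expected firstborn rate under chance: {p_chance:.2f}")
print(f"observed rate: {firstborn_count / len(family_sizes):.2f}")
print(f"one-sided p-value: {result.pvalue:.3f}")
```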
#42
Specializing in Problems We Don't Understand

Most problems can be separated cleanly into two categories: things we basically understand, and things we basically don't understand. John Wentworth argues it's possible to specialize in the latter category in a way that generalizes across fields, and suggests ways to develop those skills.

by johnswentworth
#43
Transportation as a Constraint

John examines the problem of "how to transport things?" through the lens of "what's the taut constraint on the system?" He asks questions across history, from "how could Alexander the Great's army cross 150 miles of desert?", to how modern supply chains work, to what would happen in a future world with teleportation.

by johnswentworth
#48
Counter-theses on Sleep

The LessWrong post "Theses on Sleep" gained a lot of popularity and acclaim, despite largely consisting of what seemed to Natália like weak arguments and misleading claims. This critical review lists several of the mistakes Natália argues were made, and reports some of what the academic literature on sleep seems to show.

by Natália
#48
Book Review: Going Infinite

Zvi analyzes Michael Lewis' book "Going Infinite" about Sam Bankman-Fried and FTX. He argues the book provides clear evidence of SBF's fraudulent behavior, despite Lewis seeming not to fully realize it. Zvi sees SBF as a cautionary tale about the dangers of pursuing maximalist goals without ethical grounding.

by Zvi
#49
Mental Mountains

A tour de force, this post combines a review of Unlocking The Emotional Brain, Kaj Sotala's review of the book, and connections to predictive coding theory.

It's a deep dive into models of how human cognition is driven by emotional learning, and how this learning drives many beliefs and behaviors. If that's the case, one big question is how people emotionally learn and unlearn things.

by Scott Alexander
#50
Why it's so hard to talk about Consciousness

Debates about consciousness often come down to two people talking past each other, without realizing their interlocutor is coming from a fundamentally different set of intuitions. What's up with that?

by Rafael Harth
#52
Literature Review: Distributed Teams

Elizabeth summarizes the literature on distributed teams. She provides recommendations for when remote teams are preferable, and gives tips to mitigate the costs of distribution, such as site visits, over-communication, and hiring people suited to remote work.

by Elizabeth
#53
Steelmanning Divination

Divination seems obviously worthless to most modern educated people. But Xunzi, an ancient Chinese philosopher, argued there was value in practices like divination beyond just predicting the future. This post explores how randomized access to different perspectives or principles could be useful for decision-making and self-reflection, even if you don't believe in supernatural forces.

by Vaniver
#54
Book Review: Design Principles of Biological Circuits

Evolution doesn't optimize for biological systems to be understandable. But because only a small subset of possible biological designs can robustly achieve certain common goals (e.g. robust recognition of molecules, robust signal-passing, robust fold-change detection), the requirement to work robustly limits evolution to a handful of understandable structures.

by johnswentworth
#55
Building up to an Internal Family Systems model

Kaj Sotala gives a step-by-step rationalist argument for why Internal Family Systems therapy might work. He begins by talking about how you might build an AI, only to stumble into the same failure modes that IFS purports to treat. Then he explores how IFS might actually be solving these problems.

by Kaj_Sotala
#56
Evolution of Modularity

Fun fact: biological systems are highly modular, at multiple different scales. This can be quantified and verified statistically. On the other hand, systems designed by genetic algorithms (aka simulated evolution) are decidedly not modular. They're a mess. This can also be verified statistically (as well as just by qualitatively eyeballing them).

What's up with that?

by johnswentworth
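As one concrete way to quantify this kind of claim, here is a minimal sketch using Newman's modularity score Q via networkx (an illustration of the general technique, not the specific statistic used in the post or the book):

```python
# Sketch: quantifying modularity with Newman's Q (networkx), as one
# example of how "modular vs. not" can be checked statistically.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# A toy network: two dense clusters joined by a single bridge edge.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2),   # cluster A
                  (3, 4), (3, 5), (4, 5),   # cluster B
                  (2, 3)])                  # bridge

communities = greedy_modularity_communities(G)
q = modularity(G, communities)
print(f"detected {len(communities)} communities, Q = {q:.2f}")

# Compare against a size-matched random graph: modular systems score
# well above what random wiring produces.
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
q_rand = modularity(R, greedy_modularity_communities(R))
print(f"random graph baseline Q = {q_rand:.2f}")
```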
#57
[Answer] Why wasn't science invented in China?

While the scientific method developed in pieces over many centuries and places, Joseph Ben-David argues that in 17th century Europe there was a rapid accumulation of knowledge, restricted to a small area for about 200 years. Ruby explores whether this is true and why it might be, aiming to understand "what causes intellectual progress, generally?"

by Ruby
16 · Bucky

I was going to write a longer review but I realised that Ben's curation notice actually explains the strengths of this post very well, so you should read that!

In terms of including this in the 2018 review, I think this depends on what the review is for. If the review is primarily for the purpose of building common knowledge within the community, then including this post maybe isn't worth it, as it is already fairly well known, having been linked from SSC. On the other hand, if the review process is at least partly for, as Raemon put it, "I want LessWrong to encourage extremely high quality intellectual labor," then this post feels like an extremely strong candidate.

(Personal footnote: This post was essentially what converted me from a LessWrong lurker to a regular commenter/contributor. I think it was mainly just being impressed with how thorough it was and thinking that's the kind of community I'd like to get involved with.)
11 · Martin Sustrik

Self-review: Looking at the essay a year and a half later, I am still reasonably happy with it. In the meantime I've seen Swiss people recommending it as an introductory text for people asking about the Swiss political system, so I am, of course, honored, but it also gives me some confidence in not being totally off.

If I had to write the essay again, I would probably give less prominence to direct democracy and more to concordance and decentralization, which are less eye-catching but in a way more interesting/important. Also, I would probably pay some attention to the question of how the system, given how unique it is, even managed to evolve. Maybe also do some investigation into whether the uniqueness of the political system has something to do with the surprising long-term ability of the Swiss economy to reinvent itself and become a leader in areas as varied as mercenary troops, cheese, silk, machinery, banking and pharmaceuticals.
16 · Bucky

This is a review of my own post.

The first thing to say is that for the 2018 Review, Eli's mathematicians post should take precedence because it was him who took up the challenge in the first place and inspired my post. I hope to find time to write a review on his post. If people were interested (and Eli was ok with it) I would be happy to write a short summary of my findings to add as a footnote to Eli's post if it was chosen for the review.

***

This was my first post on LessWrong and looking back at it I think it still holds up fairly well. There are a couple of things I would change if I were doing it again:

* Put less time into the sons vs daughters thing. I think this section could have two thirds of it chopped out without losing much.
* Unnamed's comment is really important in pointing out a mistake I was making in my final paragraph.
* I might have tried to analyse whether it is a firstborn thing vs an earlyborn thing. In the SSC data it is strongly a firstborn thing, and if I combined Eli's and my datasets I might be able to confirm whether this is also the case in our datasets. I'm not sure if this would provide a decisive answer as our sample size is much smaller even when combining the sets.
15 · habryka
The first elephant seal barely didn't make it into the book, but this is our last chance. Will the future readers of LessWrong remember the glory of elephant seal?
19 · Vaniver

Rereading this post, I'm a bit struck by how much effort I put into explaining my history with the underlying ideas, and motivating that this specifically is cool. I think this made sense as a rhetorical move--I'm hoping that a skeptical audience will follow me into territory labeled 'woo' so that they can see the parts of it that are real--and also as a pedagogical move (proofs may be easy to verify, but all of the interesting content of how they actually discovered that line of thought in concept space has been cleaned away; in this post, rather than hiding the sprues, they were part of the content, and perhaps even the main content). [Some part of me wants to signpost that a bit more clearly, tho perhaps it is obvious?]

There's something that itches about this post, where it feels like I never turn 'the idea' into a sentence. "If one regards it as proper form, one will have good fortune." Sure, but that leaves much of the work to the reader; this post is more like a log of me as a reader doing some more of the work, and leaving yet more work to my reader. It's not a clear condensation of the point, it doesn't address previous scholarship, it doesn't even clearly identify the relevant points that I had identified, and it doesn't transmit many of the tips and tricks I picked up.

A sentence that feels like it would have fit (at least some of what I wanted to convey?) is this description of Tarot readings: "they are not about foretelling your inevitable future, but taking control of it through self knowledge and awareness." [But in reading that, there's something pleasing about the holistic vagueness of "proper form"; the point of having proper form is not just 'taking control'!]

For example, an important point that came up when reading AllAmericanBreakfast's exploration of using divination was the 'skill of discernment', and that looking at random perspectives and lenses helps train this as well. Once I got a Tarot reading that I'll paraphrase as "this person you're
13 · habryka
This post surprised me a lot. It still surprises me a lot, actually. I've also linked it a lot of times in the past year.  The concrete context where this post has come up is in things like ML transparency research, as well as lots of theories about what promising approaches to AGI capabilities research are. In particular, there is a frequently recurring question of the type "to what degree do optimization processes like evolution and stochastic gradient descent give rise to understandable modular algorithms?". 
19 · johnswentworth

The material here is one seed of a worldview which I've updated toward a lot more over the past year. Some other posts which involve the theme include Science in a High Dimensional World, What is Abstraction?, Alignment by Default, and the companion post to this one, Book Review: Design Principles of Biological Circuits.

Two ideas unify all of these:

1. Our universe has a simplifying structure: it abstracts well, implying a particular kind of modularity.
2. Goal-oriented systems in our universe tend to evolve a modular structure which reflects the structure of the universe.

One major corollary of these two ideas is that goal-oriented systems will tend to evolve similar modular structures, reflecting the relevant parts of their environment. Systems to which this applies include organisms, machine learning algorithms, and the learning performed by the human brain. In particular, this suggests that biological systems and trained deep learning systems are likely to have modular, human-interpretable internal structure. (At least, interpretable by humans familiar with the environment in which the organism/ML system evolved.)

This post talks about some of the evidence behind this model: biological systems are indeed quite modular, and simulated evolution experiments find that circuits do indeed evolve modular structure reflecting the modular structure of environmental variations. The companion post reviews the rest of the book, which makes the case that the internals of biological systems are indeed quite interpretable. On the deep learning side, researchers also find considerable modularity in trained neural nets, and direct examination of internal structures reveals plenty of human-recognizable features.

Going forward, this view is in need of a more formal and general model, ideally one which would let us empirically test key predictions - e.g. check the extent to which different systems learn similar features, or whether learned features in neural nets satisfy th
39 · fiddler

I strongly oppose collation of this post, despite thinking that it is an extremely well-written summary of an interesting argument on an interesting topic. The reason that I do so is because I believe it represents a substantial epistemic hazard because of the way it was written, and the source material it comes from. I think this is particularly harmful because both justifications for nominations amount to "this post was key in allowing percolation of a new thesis unaligned with the goals of the community into community knowledge," which is a justification that necessitates extremely rigorous thresholds for epistemic virtue: a poor-quality argument both risks spreading false or over-proven ideas into a healthy community, if the nominators are correct, and also creates conditions for an over-correction caused by the tearing down of a strongman. When assimilating new ideas and improving models, extreme care must be taken to avoid inclusion of non-steelmanned parts of the model, and this post does not represent that. In this case, isolated demands for rigor are called for!

The first major issue is the structure of the post. A more typical book review includes critique, discussion, and critical analysis of the points made in the book. This book review forgoes these, instead choosing to situate the thesis of the book in the fabric of anthropology and discuss the meta-level implications of the contributions at the beginning and end of the review. The rest of the review is dedicated to extremely long, explicitly cherry-picked block quotes of anecdotal evidence and accessible explanations of Henrich's thesis.

Already, this poses an issue: it's not possible to evaluate the truth of the thesis, or even the merit of the arguments made for it, with evidence that's explicitly chosen to be the most persuasive and favorable summaries of parts glossed over. Upon closer examination, even without considering that this is filtered evidence, this is an attempt to prove a thesis usin
10 · Noosphere89

This is a very nice meta-level discussion of why consciousness discourse gets so bad, and I do genuinely appreciate trying to get cruxes and draw out the generators of a disagreement, which is useful in difficult situations.

One factor that is not really discussed, but amplifies the problem of discourse around consciousness, is that people use the word consciousness to denote a scientific and a moral thing, and people often want to know the answer to whether something is conscious because they want to use it to determine whether uploading is good, or whether to care about someone, and way too much discourse does not decouple these two questions.

I actually slightly voted against the linked post below in the review, due to methodological problems, but I have a high prior that something like this is a huge contributor to consciousness discourse sucking, and this is an area where the science questions need to be decoupled from value questions: https://www.lesswrong.com/posts/KpD2fJa6zo8o2MBxg/consciousness-as-a-conflationary-alliance-term-for

+9 for drawing out a generator on a very confusing topic; this should be in the LW canon for how to deal with difficult disagreements as a worked example.

I'm not going to review the object level on what consciousness actually is, because I already did that in a different review linked below, but the sneak peek is that I'm in camp 1, though you could also call me a camp 2 person, but notably reductionist/computationalist rather than positing novel metaphysics: https://www.lesswrong.com/posts/FQhtpHFiPacG3KrvD/seth-explains-consciousness#7ncCBPLcCwpRYdXuG
11 · Steven Byrnes

I wrote this relatively early in my journey of self-studying neuroscience. Rereading this now, I guess I'm only slightly embarrassed to have my name associated with it, which isn't as bad as I expected going in. Some shifts I've made since writing it (some of which are already flagged in the text):

* New terminology part 1: Instead of "blank slate" I now say "learning-from-scratch", as defined and discussed here.
* New terminology part 2: "neocortex vs subcortex" → "learning subsystem vs steering subsystem", with the former including the whole telencephalon and cerebellum, and the latter including the hypothalamus and brainstem. I distinguish them by "learning-from-scratch vs not-learning-from-scratch". See here.
* Speaking of which, I now put much more emphasis on "learning-from-scratch" over "cortical uniformity" when talking about the neocortex etc.—I care about learning-from-scratch more, I talk about it more, etc. I see the learning-from-scratch hypothesis as absolutely central to a big picture of the brain (and AGI safety!), whereas cortical uniformity is much less so. I do still think cortical uniformity is correct (at least in the weak sense that someone with a complete understanding of one part of the cortex would be well on their way to a complete understanding of any other part of the cortex), for what it's worth.
* I would probably drop the mention of "planning by probabilistic inference". Well, I guess something kinda like planning by probabilistic inference is part of the story, but generally I see the brain thing as mostly different.
* Come to think of it, every time the word "reward" shows up in this post, it's safe to assume I described it wrong in at least some respect.
* The diagram with neocortex and subcortex is misleading for various reasons, see notes added to the text nearby.
* I'm not sure I was using the term "analysis-by-synthesis" correctly. I think that term is kinda specific to sound processing. And the vision analog is "vision
31 · johnswentworth

Connection to Alignment

One of the main arguments in AI risk goes something like:

* AI is likely to be a utility maximizer (or goal-directed in some other sense)
* Goodhart, instrumental convergence, etc make powerful goal-directed agents dangerous by default

One common answer to this is "ok, how about we make AI which isn't goal-directed"?

Unconscious Economics says: selection effects will often create the same effect as goal-directedness, even if we're trying to build a non-goal-directed AI. Discussions around CAIS are one obvious application. Paul's "you get what you measure" failure-mode is another. A less-obvious application which I've personally run into recently: one strategy to deal with inner optimizers is to design learning algorithms which specifically avoid regions of parameter space in which the trained system will perform optimization. The Unconscious Economics argument says that this won't actually avoid the risk: selection effects from the outer optimizer will push the trained system to misbehave in exactly the same ways, even without an inner optimizer.

Connection to the Economics Literature

During the past year I've found and read a bit more of the formal economics literature related to selection-effect-driven economics. The most notable work seems to be Nelson and Winter's "An Evolutionary Theory of Economic Change", from 1982. It was a book-length attempt to provide a mathematical foundation for microeconomics grounded in selection effects, rather than assuming utility-maximizing agents from the get-go. Reading through that book, it's pretty clear why the perspective hasn't taken over economics: Nelson and Winter's models are not very good. Some of the larger shortcomings:

* They limit themselves to competition between firms, and their models contain details which limit their generalization to other kinds of agents
* They use a "static" notion of equilibrium (i.e. all agents are individually unchanging), rather than a "dynamic" noti
14 · Vaniver
I think this post labels an important facet of the world, and skillfully paints it with examples without growing overlong. I liked it, and think it would make a good addition to the book. There's a thing I find sort of fascinating about it from an evaluative perspective, which is that... it really doesn't stand on its own, and can't, as it's grounded in the external world, in webs of deference and trust. Paul Graham makes a claim about taste; do you trust Paul Graham's taste enough to believe it? It's a post about expertise that warns about snake oil salesmen, while possibly being snake oil itself. How can you check? "there is no full substitute for being an expert yourself." And so in a way it seems like the whole rationalist culture, rendered in miniature: money is less powerful than science, and the true science is found in carefully considered personal experience and the whispers of truth around the internet, more than the halls of academia.
33 · eukaryote

A brief authorial take - I think this post has aged well, although as with Caring Less (https://www.lesswrong.com/posts/dPLSxceMtnQN2mCxL/caring-less), this was an abstract piece and I didn't make any particular claims here. I'm so glad that A) this was popular, B) I wasn't making up a new word for a concept that most people already know by a different name, which I think will send you to at least the first layer of Discourse Hell on its own.

I've met at least one person in the community who said they knew and thought about this post a lot, well before they'd met me, which was cool.

I think this website doesn't recognize the value of bad hand-drawn graphics for communicating abstract concepts (except for Garrabrant and assorted other AI safety people, whose posts are too technical for me to read but whom I support wholly). I'm guessing that the graphics helped this piece, or at least got more people to look at it.

I do wish I'd included more examples of spaghetti towers, but I knew that before posting it, and this was an instance of "getting something out is better than making it perfect." I've planned on doing followups in the same sort of abstract style as this piece, like methods I've run into for getting around spaghetti towers (modularization, swailing, documentation). I hopefully will do that some day. If anyone wants to help brainstorm examples, hit me up and I may or may not get back to you.
21 · GeneSmith

I was pleasantly surprised by how many people enjoyed this post about mountain climbing. I never expected it to gain so much traction, since it doesn't relate that clearly to rationality or AI or any of the topics usually discussed on LessWrong. But when I finished the book it was based on, I just felt an overwhelming urge to tell other people about it. The story was just that insane.

Looking back, I think Gwern probably summarized what this story is about best: a world beyond the reach of God. The universe does not respect your desire for a coherent, meaningful story. If you make the wrong mistake at the wrong time, game over.

For the past couple of months I've actually been drafting a sequel of sorts to this post about a man named Nims Purja. I hope to post it before Christmas!
13 · Bird Concept

For the Review, I'm experimenting with using the predictions feature to poll users for their opinions about claims made in posts.

Elicit Prediction (elicit.org/binary/questions/itSayrbzc)
Elicit Prediction (elicit.org/binary/questions/5SRTLX3p_)
Elicit Prediction (elicit.org/binary/questions/VMv-KjR87)

The first two cite Scott almost verbatim, but for the third I tried to specify further.

Feel free to add your predictions above, and let me know if you have any questions about the experience.
13 · orthonormal
As mentioned in my comment, this book review overcame some skepticism from me and explained a new mental model about how inner conflict works. Plus, it was written with Kaj's usual clarity and humility. Recommended.
73 · Jacob Falkovich

In my opinion, the biggest shift in the study of rationality since the Sequences were published has been a change in focus from "bad math" biases (anchoring, availability, base rate neglect, etc.) to socially-driven biases. And with good reason: while a crash course in Bayes' Law can alleviate many of the issues with intuitive math, group politics are a deep and inextricable part of everything our brains do.

There has been a lot of great writing describing the issue, like Scott's essays on ingroups and outgroups and Robin Hanson's theory of signaling. There are excellent posts summarizing the problem of socially-driven bias on a high level, like Kevin Simler's post on crony beliefs. But The Intelligent Social Web offers something that all of the above don't: a lens that looks into the very heart of social reality, makes you feel its power on an immediate and intuitive level, and gives you the tools to actually manipulate and change your reaction to it.

Valentine's structure of treating this as a "fake framework" is invaluable in this context. A high-level rigorous description of social reality doesn't really empower you to do anything about it. But seeing social interactions as an improv scene, while not literally true, offers actionable insight.

The specific examples in the post hit very close to home for me, like the example of one's family tugging a person back into their old role. I noticed that I quite often lose my temper around my parents, something that happens basically never around my wife or friends. I realized that much of it is caused by a role conflict with my father about who gets to be the "authority" on living well. I further recognized that my temper is triggered by "should" statements, even innocuous ones like "you should have the Cabernet with this dish" over dinner. Seeing these interactions through the lens of both of us negotiating and claiming our roles allowed me to control how I feel and react rather than being driven by an anger that I don't
10 · Eric Neyman

I think this isn't the sort of post that ages well or poorly, because it isn't topical, but I think this post turned out pretty well. It gradually builds from preliminaries that most readers have probably seen before, into some pretty counterintuitive facts that aren't widely appreciated.

At the end of the post, I listed three questions and wrote that I hoped to write about some of them soon. I never did, so I figured I'd use this review to briefly give my takes.

1. This comment from Fabien Roger tests some of my modeling choices for robustness, and finds that the surprising results of Part IV hold up when the noise is heavier-tailed than the signal. (I'm sure there's more to be said here, but I probably don't have time to do more analysis by the end of the review period.)
2. My basic take is that this really is a point in favor of well-evidenced interventions, but that the best-looking speculative interventions are nevertheless better. This is because I think "speculative" here mostly refers to partial measurement rather than noisy measurement. For example, maybe you can only foresee the first-order effects of an intervention, but not the second-order effects. If the first-order effect is a (known) quantity X1 and the second-order effect is an (unknown) quantity X2, then modeling the second-order effect as zero (and thus estimating the quality of the intervention as X1) isn't a noisy measurement; it's a partial measurement. It's still your best guess given the information you have.
   * I haven't thought this through very much. I expect good counter-arguments and counter-counter-arguments to exist here.
3. No -- or rather, only if the measurement is guaranteed to be exactly correct. To see this, observe that the variance of a noisy, unbiased measurement is greater than the variance of the quantity you're trying to measure (with equality only when the noise is zero), whereas the variance of a noiseless, partial measurement is less than the variance of the quantity you're trying to measure.
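For reference, the variance facts this take leans on can be written out explicitly (standard probability, filling in no more than what the comment states): a noisy unbiased measurement $Y = X + \varepsilon$ with independent noise has

$$\operatorname{Var}(Y) = \operatorname{Var}(X) + \operatorname{Var}(\varepsilon) \ge \operatorname{Var}(X),$$

while a noiseless partial measurement $Z$ yields the estimate $\mathbb{E}[X \mid Z]$, and by the law of total variance

$$\operatorname{Var}(\mathbb{E}[X \mid Z]) = \operatorname{Var}(X) - \mathbb{E}[\operatorname{Var}(X \mid Z)] \le \operatorname{Var}(X),$$

with equality in the first line only when the noise is zero, matching the comment.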
12 · Jameson Quinn
I think we should encourage posts which are well-delimited and research based; "here's a question I had, and how I answered it in a finite amount of time" rather than "here's something I've been thinking about for a long time, and here's where I've gotten with it". Also, this is an engaging topic and well-written. I feel the "final thoughts" section could be tightened up/shortened, as to me it's not the heart of the piece.
36 · Valentine

I don't know if I'll ever get to a full editing of this. I'll jot notes here of how I would edit it as I reread this.

* I'd ax the whole opening section.
  * That was me trying to (a) brute force motivation for the reader and (b) navigate some social tension I was feeling around what it means to be able to make a claim here. In particular I was annoyed with Oli and wanted to sidestep discussion of the lemons problem. My focus was actually on making something in culture salient by offering a fake framework. The thing speaks for itself once you look at it. After that point I don't care what anyone calls it.
  * This would, alas, leave out the emphasis that it's a fake framework. But I've changed my attitude about how much hand-holding to do for stuff like that. Part of the reason I put that in the beginning was to show the LW audience that I was taking it as fake, so as to sidestep arguments about how justified everything is or isn't. At this point I don't care anymore. People can project whatever they want on me because, uh, I can't really stop them anyway. So I'm not going to fret about it.
  * I had also intended the opening to have a kind of conversational tone, as part of a Sequence that I never finished (on "ontology-cracking"). I probably never will finish it at this point. So no point in making this stand-alone essay pretend to be part of an ongoing conversation.
* A minor nitpick: I open the meat of the idea by telling some facts about improv theater. I suspect it'd be more engaging if I had written it as a story illustrating the experience. "Bob walked onto the stage, his heart pounding. 'God, what do I say?'" Etc. The whole thing would have felt less abstract if I had done that. But it clearly communicated well for this audience, so that's not a big concern.
* One other reviewer mentioned how the strong examples end up obfuscating my overall point. That was actually a writing strategy: I didn't want the point stated early on and elucidated throug
14MalcolmOcean
This was a profoundly impactful post and definitely belongs in the review. It prompted me and many others to dive deep into understanding how emotional learnings have coherence, and to actually engage in dialogue with them rather than insisting they don't make sense. I've linked this post to people more than probably any other LessWrong post (50-100 times), as it is an excellent summary and introduction to the topic. It works well as a teaser for the full book as well as a standalone resource.

The post makes both conceptual and pragmatic claims. I haven't exactly cross-checked the models, although they do seem compatible with other models I've read. I did read the whole book, and it seemed pretty sound and based in part on relevant neuroscience. There's a kind of meeting-in-the-middle thing there, where the neuroscience is quite low-level and the therapy is quite high-level; I think it'll be cool to see the middle layers fleshed out a bit. Just because your brain uses Bayes' theorem at the neural level and at higher levels of abstraction doesn't mean that you consciously know what all of its priors & models are! And it seems the brain's basic organization is set up to prevent people from calmly arguing against emotionally intense evidence without understanding it, which makes a lot of sense if you think about it. And it also makes sense that your brain would be able to update under the right circumstances.

I've tested the pragmatic claims personally, by doing the therapeutic reconsolidation process using both Coherence Therapy methods & other methods, both on myself & working with others. I've found that these methods indeed find coherent underlying structures (e.g. the same basic structures show up using different introspective methods, and they relate and are consistent) and that accessing those emotional truths and bringing them into contact with contradictory evidence indeed causes them to update; once updated, there's no longer a sense of needing to argue with yourself. It doesn't…
13DirectedEvolution
What this post does for me is encourage me to view products and services not as physical facts of our world, as things that happen to exist, but as the outcomes of an active creative process that is still ongoing and open to our participation. It reminds us that everything we might want to do is hard, and that the work of making that task less hard is valuable. Otherwise, we are liable to make the mistake of taking functionality and expertise for granted.

What is not an interface? That's the slipperiest aspect of this post. A programming language is an interface to machine code, a programmer to the language, a company to the programmer, a liaison to the company, a department to the liaison, a chain of command to the department, a stock to the chain of command, an index fund to the stock, an app to the index fund. Matter itself is an interface. An iron bar is an interface to iron. An aliquot is an interface to a chemical. A fruit is an interface, translating between the structure of a chloroplast and the structure of things-animals-can-eat. A janitor is an interface to brooms and buckets, the layout of the building, and other considerations bearing on the task of cleaning. We have lots of words in this concept-cluster: tools, products, goods and services, control systems, and now "interfaces."

"As a scarce resource" suggests that there are resources that are not interfaces. After all, the implied value prop of this post is that it's suggesting a high-value area for economic activity. But if all economic activity is interface design, then a more accurate title would be "Scarce Resources as Interfaces," or "Goods Are Hard To Make And Services Are Hard To Do."

The value I get out of this post is that it shifts my thinking about a tool or service away from the mechanism and toward the value prop. It's also a useful reminder for an early-career professional that their value prop is making a complex system easier to use for somebody else, rather than ticking the boxes…
15Alex_Altair
[This is a self-review because I see that no one has left a review to move it into the next phase. So8res's comment would also make a great review.] I'm pretty proud of this post for the level of craftsmanship I was able to put into it. I think it embodies multiple rationalist virtues. It's a kind of "timeless" content, and is a central example of the kind of content people want to see on LW that isn't stuff about AI. It would also look great printed in a book. :)
10Scott Alexander
I still endorse most of this post, but https://docs.google.com/document/d/1cEBsj18Y4NnVx5Qdu43cKEHMaVBODTTyfHBa8GIRSec/edit has clarified many of these issues for me and helped quantify the ways that science is, indeed, slowing down.
27Martin Sustrik
Author here. In hindsight, I still feel that the phenomenon is interesting and a potentially important topic to look into. I am not aware of any attempt to replicate it or dive deeper, though.

As for my attempt to explain the psychology underlying the phenomenon, I am not entirely happy with it. It's based only on introspection and lacks a sound game-theoretic backing.

By the way, there's one interesting explanation I've read somewhere in the meantime (unfortunately, I don't remember the source): Cooperation may impose different costs on different participants. If you are well-off, putting $100 into a common pool is not a terribly important matter; if others fail to cooperate, all you can lose is $100. If you are just barely getting along, putting $100 into a common pool may threaten you in a serious way. Therefore, the rich will be more likely to cooperate than the poor. Now, if the thing is framed in moral terms (those cooperating are "good", those not cooperating are "bad"), the whole thing may feel like a scam that provides the rich with a way to buy moral superiority. As a poor person, you may thus resort to anti-social punishment as a way to punish the scam.
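To illustrate the cost asymmetry in that explanation with a toy model (my own sketch, not from the original study), assume logarithmic utility of wealth; the same $100 contribution then costs a poor player far more utility than a rich one:

```python
import math

def utility_cost(wealth: float, contribution: float = 100.0) -> float:
    """Utility lost by contributing, under log utility of wealth
    (a standard toy model of diminishing returns to money)."""
    return math.log(wealth) - math.log(wealth - contribution)

for wealth in (200, 1_000, 100_000):
    print(f"wealth {wealth}: cost {utility_cost(wealth):.4f}")
# wealth 200:    cost 0.6931  -- contributing seriously hurts a poor player
# wealth 1000:   cost 0.1054
# wealth 100000: cost 0.0010  -- and is nearly free for a rich player
```

Under this assumption, cooperation is nearly free for the rich player, which is what makes the "buying moral superiority" framing available in the first place.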
37DanielFilan
As far as I can tell, this post successfully communicates a cluster of claims relating to "Looking, insight meditation, and enlightenment". It's written in a quite readable style that uses a minimum of metaphorical language or Buddhist jargon. That being said, likely due to its focus as exposition and not persuasion, it contains and relies on several claims that are not supported in the text, such as:

* Many forms of meditation successfully train cognitive defusion.
* Meditation trains the ability to have true insights into the mental causes of mental processes.
* "Usually, most of us are - on some implicit level - operating off a belief that we need to experience pleasant feelings and need to avoid experiencing unpleasant feelings."
* Flinching away from thoughts of painful experiences is what causes suffering, not the thoughts of painful experiences themselves, nor the actual painful experiences.
* Impermanence, unsatisfactoriness, and no-self are fundamental aspects of existence that "deep parts of our minds" are wrong about.

I think that all of these are worth doubting without further evidence, and I think that some of them are in fact wrong. If this post were coupled with others that substantiated the models that it explains, I think that that would be worthy of inclusion in a 'Best of LW 2018' collection. However, my tentative guess is that Buddhist psychology is not an important enough set of claims that a clear explanation of it deserves to be signal-boosted in such a collection. That being said, I could see myself being wrong about that.
14Kaj_Sotala
I still broadly agree with everything that I said in this post. I do feel that it is a little imprecise, in that I now have much more detailed and gears-y models for many of its claims. However, elaborating on those would require an entirely new post (one which I am currently working on) with a sequence's worth of prerequisites. So if I were to edit this post, I would probably mostly leave it as it is, but include a pointer to the new post once it's finished.

In terms of this post being included in a book, it is worth noting that the post situates itself in the context of Valentine's Kensho post, which has not been nominated for the review and thus wouldn't be included in the book. So if this post were to be included, I should probably edit it so as not to require reading Kensho.
11transhumanist_atom_understander
It's great to have a LessWrong post that states the relationship between expected quality and a noisy measurement of quality.

We previously had a popular post on this topic, the tails come apart post, but it actually made a subtle mistake when stating this relationship. The example under discussion there is the same as the example in this post, where quality and noise have the same variance, and thus R^2 = 0.5. And superficially it seems to be stating the same thing: the expectation of quality is half the measurement. But actually, this newer post is correct, and the older post is wrong. The key is that "Quality" and "Performance" in this post are not measured in standard deviations. Their standard deviations are 1 and √2, respectively.

Elaborating on that: Quality has a variance, and standard deviation, of 1. The variance of Performance is the sum of the variances of Quality and the noise, which is 2, and thus its standard deviation is √2. Now that we know their standard deviations, we can scale them to units of standard deviation, and obtain Quality (unchanged) and Performance/√2. The relationship between them is:

E[Quality] = (1/√2) · (Performance/√2)

That is equivalent to the relationship stated in this post. More generally, notating the variables in units of standard deviation as Z_x and Z_y (since they are "z-scores"),

E[Z_y] = ρ · Z_x

where ρ is the correlation coefficient. So if your noisy measurement of quality is Z_x standard deviations above its mean, then the expectation of quality is ρ·Z_x standard deviations above its mean. It is ρ^2 that is the variance explained, and it is thus 1/2 when the signal and noise have the same variance. That's why, in the example in this post, we divide the raw performance by 2, rather than converting it to standard deviations and dividing by 2.

I think it's important to understand the relationship between the expected value of an unknown and the value of a noisy measurement of it, so it's nice to see a whole post about this relationship.
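A quick Monte Carlo check of this (my own sketch, using the example's setup of unit-variance quality plus unit-variance noise):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

quality = rng.normal(0.0, 1.0, n)   # Var = 1, sd = 1
noise = rng.normal(0.0, 1.0, n)     # same variance as the signal
performance = quality + noise       # Var = 2, sd = sqrt(2)

rho = np.corrcoef(quality, performance)[0, 1]
print(rho, rho**2)                  # ~0.707 (= 1/sqrt(2)), ~0.5 variance explained

# Expected quality given a measurement near Performance = 2:
band = np.abs(performance - 2.0) < 0.05
print(quality[band].mean())         # ~1.0 = Performance/2 = rho * (2/sqrt(2))
# Halving the z-score 2/sqrt(2) instead would wrongly predict ~0.71.
```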
55Coafos
Two pictures of elephant seals.

I am, if not deeply, then certainly affected by this post. I felt some kind of joy looking at these animals. It calmed my anger and made my thoughts somewhat happier. I started to believe the world can become a better place, and I would like to make it happen. This post made me a better person.

The title says elephant seals 2, and the post contains 2 pictures of elephant seals, which is accurate. However, I do not think it carves reality because these animals don't have joints. I know this from experimental evidence: I once interacted with a toy model of a seal, and it was soft and fluffy and without bones.

You wouldn't guess it, but I have an idea...