The Best of LessWrong

Once a post is more than a year old, the LessWrong community reviews it and votes on how well it has stood the test of time. These are the posts that have ranked the highest across all years since 2018 (when our annual tradition of choosing the least wrong of LessWrong began).

For the years 2018, 2019 and 2020 we also published physical books with the results of our annual vote, which you can buy and learn more about here.

Rationality

Eliezer Yudkowsky
Local Validity as a Key to Sanity and Civilization
Buck
"Other people are wrong" vs "I am right"
Mark Xu
Strong Evidence is Common
TsviBT
Please don't throw your mind away
Raemon
Noticing Frame Differences
johnswentworth
You Are Not Measuring What You Think You Are Measuring
johnswentworth
Gears-Level Models are Capital Investments
Hazard
How to Ignore Your Emotions (while also thinking you're awesome at emotions)
Scott Garrabrant
Yes Requires the Possibility of No
Ben Pace
A Sketch of Good Communication
Eliezer Yudkowsky
Meta-Honesty: Firming Up Honesty Around Its Edge-Cases
Duncan Sabien (Inactive)
Lies, Damn Lies, and Fabricated Options
Scott Alexander
Trapped Priors As A Basic Problem Of Rationality
Duncan Sabien (Inactive)
Split and Commit
Duncan Sabien (Inactive)
CFAR Participant Handbook now available to all
johnswentworth
What Are You Tracking In Your Head?
Mark Xu
The First Sample Gives the Most Information
Duncan Sabien (Inactive)
Shoulder Advisors 101
Scott Alexander
Varieties Of Argumentative Experience
Eliezer Yudkowsky
Toolbox-thinking and Law-thinking
alkjash
Babble
Zack_M_Davis
Feature Selection
abramdemski
Mistakes with Conservation of Expected Evidence
Kaj_Sotala
The Felt Sense: What, Why and How
Duncan Sabien (Inactive)
Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions)
Ben Pace
The Costly Coordination Mechanism of Common Knowledge
Jacob Falkovich
Seeing the Smoke
Duncan Sabien (Inactive)
Basics of Rationalist Discourse
alkjash
Prune
johnswentworth
Gears vs Behavior
Elizabeth
Epistemic Legibility
Daniel Kokotajlo
Taboo "Outside View"
Duncan Sabien (Inactive)
Sazen
AnnaSalamon
Reality-Revealing and Reality-Masking Puzzles
Eliezer Yudkowsky
ProjectLawful.com: Eliezer's latest story, past 1M words
Eliezer Yudkowsky
Self-Integrity and the Drowning Child
Jacob Falkovich
The Treacherous Path to Rationality
Scott Garrabrant
Tyranny of the Epistemic Majority
alkjash
More Babble
abramdemski
Most Prisoner's Dilemmas are Stag Hunts; Most Stag Hunts are Schelling Problems
Raemon
Being a Robust Agent
Zack_M_Davis
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists
Benquo
Reason isn't magic
habryka
Integrity and accountability are core parts of rationality
Raemon
The Schelling Choice is "Rabbit", not "Stag"
Diffractor
Threat-Resistant Bargaining Megapost: Introducing the ROSE Value
Raemon
Propagating Facts into Aesthetics
johnswentworth
Simulacrum 3 As Stag-Hunt Strategy
LoganStrohl
Catching the Spark
Jacob Falkovich
Is Rationalist Self-Improvement Real?
Benquo
Excerpts from a larger discussion about simulacra
Zvi
Simulacra Levels and their Interactions
abramdemski
Radical Probabilism
sarahconstantin
Naming the Nameless
AnnaSalamon
Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality"
Eric Raymond
Rationalism before the Sequences
Owain_Evans
The Rationalists of the 1950s (and before) also called themselves “Rationalists”
Raemon
Feedbackloop-first Rationality
LoganStrohl
Fucking Goddamn Basics of Rationalist Discourse
Raemon
Tuning your Cognitive Strategies
johnswentworth
Lessons On How To Get Things Right On The First Try

Optimization

So8res
Focus on the places where you feel shocked everyone's dropping the ball
Jameson Quinn
A voting theory primer for rationalists
sarahconstantin
The Pavlov Strategy
Zvi
Prediction Markets: When Do They Work?
johnswentworth
Being the (Pareto) Best in the World
alkjash
Is Success the Enemy of Freedom? (Full)
johnswentworth
Coordination as a Scarce Resource
AnnaSalamon
What should you change in response to an "emergency"? And AI risk
jasoncrawford
How factories were made safe
HoldenKarnofsky
All Possible Views About Humanity's Future Are Wild
jasoncrawford
Why has nuclear power been a flop?
Zvi
Simple Rules of Law
Scott Alexander
The Tails Coming Apart As Metaphor For Life
Zvi
Asymmetric Justice
Jeffrey Ladish
Nuclear war is unlikely to cause human extinction
Elizabeth
Power Buys You Distance From The Crime
Eliezer Yudkowsky
Is Clickbait Destroying Our General Intelligence?
Spiracular
Bioinfohazards
Zvi
Moloch Hasn’t Won
Zvi
Motive Ambiguity
Benquo
Can crimes be discussed literally?
johnswentworth
When Money Is Abundant, Knowledge Is The Real Wealth
GeneSmith
Significantly Enhancing Adult Intelligence With Gene Editing May Be Possible
HoldenKarnofsky
This Can't Go On
Said Achmiz
The Real Rules Have No Exceptions
Lars Doucet
Lars Doucet's Georgism series on Astral Codex Ten
johnswentworth
Working With Monsters
jasoncrawford
Why haven't we celebrated any major achievements lately?
abramdemski
The Credit Assignment Problem
Martin Sustrik
Inadequate Equilibria vs. Governance of the Commons
Scott Alexander
Studies On Slack
KatjaGrace
Discontinuous progress in history: an update
Scott Alexander
Rule Thinkers In, Not Out
Raemon
The Amish, and Strategic Norms around Technology
Zvi
Blackmail
HoldenKarnofsky
Nonprofit Boards are Weird
Wei Dai
Beyond Astronomical Waste
johnswentworth
Making Vaccine
jefftk
Make more land
jenn
Things I Learned by Spending Five Thousand Hours In Non-EA Charities
Richard_Ngo
The ants and the grasshopper
So8res
Enemies vs Malefactors
Elizabeth
Change my mind: Veganism entails trade-offs, and health is one of the axes

World

Kaj_Sotala
Book summary: Unlocking the Emotional Brain
Ben
The Redaction Machine
Samo Burja
On the Loss and Preservation of Knowledge
Alex_Altair
Introduction to abstract entropy
Martin Sustrik
Swiss Political System: More than You ever Wanted to Know (I.)
johnswentworth
Interfaces as a Scarce Resource
eukaryote
There’s no such thing as a tree (phylogenetically)
Scott Alexander
Is Science Slowing Down?
Martin Sustrik
Anti-social Punishment
johnswentworth
Transportation as a Constraint
Martin Sustrik
Research: Rescuers during the Holocaust
GeneSmith
Toni Kurz and the Insanity of Climbing Mountains
johnswentworth
Book Review: Design Principles of Biological Circuits
Elizabeth
Literature Review: Distributed Teams
Valentine
The Intelligent Social Web
eukaryote
Spaghetti Towers
Eli Tyre
Historical mathematicians exhibit a birth order effect too
johnswentworth
What Money Cannot Buy
Bird Concept
Unconscious Economics
Scott Alexander
Book Review: The Secret Of Our Success
johnswentworth
Specializing in Problems We Don't Understand
KatjaGrace
Why did everything take so long?
Ruby
[Answer] Why wasn't science invented in China?
Scott Alexander
Mental Mountains
L Rudolf L
A Disneyland Without Children
johnswentworth
Evolution of Modularity
johnswentworth
Science in a High-Dimensional World
Kaj_Sotala
My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms
Kaj_Sotala
Building up to an Internal Family Systems model
Steven Byrnes
My computational framework for the brain
Natália
Counter-theses on Sleep
abramdemski
What makes people intellectually active?
Bucky
Birth order effect found in Nobel Laureates in Physics
zhukeepa
How uniform is the neocortex?
JackH
Anti-Aging: State of the Art
Vaniver
Steelmanning Divination
KatjaGrace
Elephant seal 2
Zvi
Book Review: Going Infinite
Rafael Harth
Why it's so hard to talk about Consciousness
Duncan Sabien (Inactive)
Social Dark Matter
Eric Neyman
How much do you believe your results?
Malmesbury
The Talk: a brief explanation of sexual dimorphism
moridinamael
The Parable of the King and the Random Process
Henrik Karlsson
Cultivating a state of mind where new ideas are born

Practical

alkjash
Pain is not the unit of Effort
benkuhn
Staring into the abyss as a core life skill
Unreal
Rest Days vs Recovery Days
Duncan Sabien (Inactive)
In My Culture
juliawise
Notes from "Don't Shoot the Dog"
Elizabeth
Luck based medicine: my resentful story of becoming a medical miracle
johnswentworth
How To Write Quickly While Maintaining Epistemic Rigor
Duncan Sabien (Inactive)
Ruling Out Everything Else
johnswentworth
Paper-Reading for Gears
Elizabeth
Butterfly Ideas
Eliezer Yudkowsky
Your Cheerful Price
benkuhn
To listen well, get curious
Wei Dai
Forum participation as a research strategy
HoldenKarnofsky
Useful Vices for Wicked Problems
pjeby
The Curse Of The Counterfactual
Darmani
Leaky Delegation: You are not a Commodity
Adam Zerner
Losing the root for the tree
chanamessinger
The Onion Test for Personal and Institutional Honesty
Raemon
You Get About Five Words
HoldenKarnofsky
Learning By Writing
GeneSmith
How to have Polygenically Screened Children
AnnaSalamon
“PR” is corrosive; “reputation” is not.
Ruby
Do you fear the rock or the hard place?
johnswentworth
Slack Has Positive Externalities For Groups
Raemon
Limerence Messes Up Your Rationality Real Bad, Yo
mingyuan
Cryonics signup guide #1: Overview
catherio
microCOVID.org: A tool to estimate COVID risk from common activities
Valentine
Noticing the Taste of Lotus
orthonormal
The Loudest Alarm Is Probably False
Raemon
"Can you keep this confidential? How do you know?"
mingyuan
Guide to rationalist interior decorating
Screwtape
Loudly Give Up, Don't Quietly Fade

AI Strategy

paulfchristiano
Arguments about fast takeoff
Eliezer Yudkowsky
Six Dimensions of Operational Adequacy in AGI Projects
Ajeya Cotra
Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
paulfchristiano
What failure looks like
Daniel Kokotajlo
What 2026 looks like
gwern
It Looks Like You're Trying To Take Over The World
Daniel Kokotajlo
Cortés, Pizarro, and Afonso as Precedents for Takeover
Daniel Kokotajlo
The date of AI Takeover is not the day the AI takes over
Andrew_Critch
What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)
paulfchristiano
Another (outer) alignment failure story
Ajeya Cotra
Draft report on AI timelines
Eliezer Yudkowsky
Biology-Inspired AGI Timelines: The Trick That Never Works
Daniel Kokotajlo
Fun with +12 OOMs of Compute
Wei Dai
AI Safety "Success Stories"
Eliezer Yudkowsky
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
HoldenKarnofsky
Reply to Eliezer on Biological Anchors
Richard_Ngo
AGI safety from first principles: Introduction
johnswentworth
The Plan
Rohin Shah
Reframing Superintelligence: Comprehensive AI Services as General Intelligence
lc
What an actually pessimistic containment strategy looks like
Eliezer Yudkowsky
MIRI announces new "Death With Dignity" strategy
KatjaGrace
Counterarguments to the basic AI x-risk case
Adam Scholl
Safetywashing
habryka
AI Timelines
evhub
Chris Olah’s views on AGI safety
So8res
Comments on Carlsmith's “Is power-seeking AI an existential risk?”
nostalgebraist
human psycholinguists: a critical appraisal
nostalgebraist
larger language models may disappoint you [or, an eternally unfinished draft]
Orpheus16
Speaking to Congressional staffers about AI risk
Tom Davidson
What a compute-centric framework says about AI takeoff speeds
abramdemski
The Parable of Predict-O-Matic
KatjaGrace
Let’s think about slowing down AI
Daniel Kokotajlo
Against GDP as a metric for timelines and takeoff speeds
Joe Carlsmith
Predictable updating about AI risk
Raemon
"Carefully Bootstrapped Alignment" is organizationally hard
KatjaGrace
We don’t trade with ants

Technical AI Safety

paulfchristiano
Where I agree and disagree with Eliezer
Eliezer Yudkowsky
Ngo and Yudkowsky on alignment difficulty
Andrew_Critch
Some AI research areas and their relevance to existential safety
1a3orn
EfficientZero: How It Works
elspood
Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment
So8res
Decision theory does not imply that we get to have nice things
Vika
Specification gaming examples in AI
Rafael Harth
Inner Alignment: Explain like I'm 12 Edition
evhub
An overview of 11 proposals for building safe advanced AI
TurnTrout
Reward is not the optimization target
johnswentworth
Worlds Where Iterative Design Fails
johnswentworth
Alignment By Default
johnswentworth
How To Go From Interpretability To Alignment: Just Retarget The Search
Alex Flint
Search versus design
abramdemski
Selection vs Control
Buck
AI Control: Improving Safety Despite Intentional Subversion
Eliezer Yudkowsky
The Rocket Alignment Problem
Eliezer Yudkowsky
AGI Ruin: A List of Lethalities
Mark Xu
The Solomonoff Prior is Malign
paulfchristiano
My research methodology
TurnTrout
Reframing Impact
Scott Garrabrant
Robustness to Scale
paulfchristiano
Inaccessible information
TurnTrout
Seeking Power is Often Convergently Instrumental in MDPs
So8res
A central AI alignment problem: capabilities generalization, and the sharp left turn
evhub
Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research
paulfchristiano
The strategy-stealing assumption
So8res
On how various plans miss the hard bits of the alignment challenge
abramdemski
Alignment Research Field Guide
johnswentworth
The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables
Buck
Language models seem to be much better than humans at next-token prediction
abramdemski
An Untrollable Mathematician Illustrated
abramdemski
An Orthodox Case Against Utility Functions
Veedrac
Optimality is the tiger, and agents are its teeth
Sam Ringer
Models Don't "Get Reward"
Alex Flint
The ground of optimization
johnswentworth
Selection Theorems: A Program For Understanding Agents
Rohin Shah
Coherence arguments do not entail goal-directed behavior
abramdemski
Embedded Agents
evhub
Risks from Learned Optimization: Introduction
nostalgebraist
chinchilla's wild implications
johnswentworth
Why Agent Foundations? An Overly Abstract Explanation
zhukeepa
Paul's research agenda FAQ
Eliezer Yudkowsky
Coherent decisions imply consistent utilities
paulfchristiano
Open question: are minimal circuits daemon-free?
evhub
Gradient hacking
janus
Simulators
LawrenceC
Causal Scrubbing: a method for rigorously testing interpretability hypotheses [Redwood Research]
TurnTrout
Humans provide an untapped wealth of evidence about alignment
Neel Nanda
A Mechanistic Interpretability Analysis of Grokking
Collin
How "Discovering Latent Knowledge in Language Models Without Supervision" Fits Into a Broader Alignment Scheme
evhub
Understanding “Deep Double Descent”
Quintin Pope
The shard theory of human values
TurnTrout
Inner and outer alignment decompose one hard problem into two extremely hard problems
Eliezer Yudkowsky
Challenges to Christiano’s capability amplification proposal
Scott Garrabrant
Finite Factored Sets
paulfchristiano
ARC's first technical report: Eliciting Latent Knowledge
Diffractor
Introduction To The Infra-Bayesianism Sequence
TurnTrout
Towards a New Impact Measure
LawrenceC
Natural Abstractions: Key Claims, Theorems, and Critiques
Zack_M_Davis
Alignment Implications of LLM Successes: a Debate in One Act
johnswentworth
Natural Latents: The Math
TurnTrout
Steering GPT-2-XL by adding an activation vector
Jessica Rumbelow
SolidGoldMagikarp (plus, prompt generation)
So8res
Deep Deceptiveness
Charbel-Raphaël
Davidad's Bold Plan for Alignment: An In-Depth Explanation
Charbel-Raphaël
Against Almost Every Theory of Impact of Interpretability
Joe Carlsmith
New report: "Scheming AIs: Will AIs fake alignment during training in order to get power?"
Eliezer Yudkowsky
GPTs are Predictors, not Imitators
peterbarnett
Labs should be explicit about why they are building AGI
HoldenKarnofsky
Discussion with Nate Soares on a key alignment difficulty
Jesse Hoogland
Neural networks generalize because of this one weird trick
paulfchristiano
My views on “doom”
technicalities
Shallow review of live agendas in alignment & safety
Vanessa Kosoy
The Learning-Theoretic Agenda: Status 2023
ryan_greenblatt
Improving the Welfare of AIs: A Nearcasted Proposal
#9
You Are Not Measuring What You Think You Are Measuring

Two laws of experiment design: First, you are not measuring what you think you are measuring. Second, if you measure enough different stuff, you might figure out what you're actually measuring.

These have many implications for how to design and interpret experiments.

by johnswentworth
#11
Sazen

A "sazen" is a word or phrase which accurately summarizes a given concept, while also being insufficient to generate that concept in its full richness and detail, or to unambiguously distinguish it from nearby concepts. It's a useful pointer to the already-initiated, but often useless or misleading to the uninitiated.

by Duncan Sabien (Inactive)
#16
Epistemic Legibility

Being easy to argue with is a virtue, separate from being correct. When someone makes an epistemically illegible argument, it is very hard even to begin a rebuttal, because you cannot pin down what their argument is.

by Elizabeth
#17
Tyranny of the Epistemic Majority

Kelly betting can be viewed as a way of respecting different possible versions of yourself with different beliefs, rather than just a mathematical optimization. This perspective provides some insight into why fractional Kelly betting (betting less aggressively) can make sense, and connects to ideas about bargaining between different parts of yourself. 

by Scott Garrabrant
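
To make the fractional-Kelly point concrete, here is a minimal sketch (ours, not from the post; the binary-bet setup, variable names, and numbers are illustrative assumptions). It checks numerically that betting half of the Kelly fraction is the same as betting the full Kelly fraction under a belief halfway between yours and the market's.

```python
# Minimal sketch of fractional Kelly betting (illustrative; not from the post).
# Setup: a binary bet priced at market-implied probability q_m, so staking a
# fraction f of your bankroll wins f * (1 - q_m) / q_m if the event happens
# and loses f if it doesn't.

def kelly_fraction(p: float, q_m: float) -> float:
    """Full Kelly stake given your probability p and the market's q_m."""
    return (p - q_m) / (1 - q_m)

def fractional_kelly(p: float, q_m: float, k: float) -> float:
    """Stake k times the full Kelly fraction (0 <= k <= 1)."""
    return k * kelly_fraction(p, q_m)

p, q_m, k = 0.70, 0.50, 0.5          # your belief, the market's, half-Kelly

# Half-Kelly equals full Kelly under a belief blended 50/50 with the market:
blended_belief = k * p + (1 - k) * q_m
assert abs(fractional_kelly(p, q_m, k)
           - kelly_fraction(blended_belief, q_m)) < 1e-12

print(fractional_kelly(p, q_m, k))   # 0.2 -> stake 20% of the bankroll
```

Read through the post's lens, the blended belief is the bargain struck between the version of you that trusts your own estimate and the version that trusts the market.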
#19
What Are You Tracking In Your Head?

A key skill of many experts (and one that is often hard to teach) is keeping track of extra information in their head while working. For example, a programmer might track a Fermi estimate of runtime, or an experienced machine operator might track the machine's internal state. John suggests asking experts "what are you tracking in your head?" An illustrative sketch of the programmer example follows below.

by johnswentworth
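
As a toy illustration of the programmer example (a sketch with made-up numbers, not anything from the post), the quantity being tracked might look like this:

```python
# Toy Fermi estimate of runtime, the kind a programmer might track mentally
# while writing a doubly nested loop (all numbers are rough assumptions).
n = 10_000                    # items to process
total_ops = n * n             # nested loop => ~n^2 comparisons
ops_per_second = 1e8          # ballpark throughput for interpreted Python
print(f"~{total_ops / ops_per_second:.0f} s")   # ~1 s: fine at this n

# The tracked quantity pays off when requirements change:
n = 1_000_000
print(f"~{n * n / 1e8:.0f} s")   # ~10^4 s: the n^2 approach no longer works
```

The expert isn't computing this precisely; they're keeping the order of magnitude warm so design decisions can consult it instantly.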
#28
Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality"

It's easy and locally reinforcing to follow gradients toward what one might call 'guessing the student's password', and much harder and much less locally reinforcing to reason/test/whatever one's way toward a real art of rationality. Anna Salamon reflects on how this got in the way of CFAR (the Center for Applied Rationality) making progress on its original goals.

by AnnaSalamon
#51
ProjectLawful.com: Eliezer's latest story, past 1M words

So if you read Harry Potter and the Methods of Rationality, and thought...

"You know, HPMOR is pretty good so far as it goes; but Harry is much too cautious and doesn't have nearly enough manic momentum, his rationality lectures aren't long enough, and all of his personal relationships are way way way too healthy."

...then have I got the story for you!

by Eliezer Yudkowsky
27 · habryka
I put decent probability on this sequence (of which I think this is the best post) being the most important contribution of 2022. I am however really not confident of that, and I do feel a bit stuck on how to figure out where to apply and how to confirm the validity of ideas in this sequence.

Despite the abstract nature, I think if there are indeed arguments to do something closer to Kelly betting with one's resources, even in the absence of logarithmic returns to investment, then that would definitely have huge effects on how I think about my own life's plans, and about how humanity should allocate its resources.

Separately, I also think this sequence is pushing on a bunch of important seams in my model of agency and utility maximization in a way that I expect to become relevant to understanding the behavior of superintelligent systems, though I am even less confident of this than the rest of this review.

I do feel a sense of sadness that I haven't seen more built on the ideas of this sequence, or seen people give their own take on it. I certainly feel a sense that I would benefit a lot if I saw how the ideas in this sequence landed with people, and would appreciate figuring out the implications of the proof sketches outlined here.
23 · Screwtape
Many of the best LessWrong posts give a word and a clear mental handle for something I kinda sorta knew loosely in my head. With the concept firmly in mind, I can use it and build on it deliberately. Sazen is an excellent example of the form.

Sazens are common in many fields I have some expertise in. "Control the centre of the board" in chess. "Footwork is foundational" in martial arts. "Shots on goal" in sports. "Conservation of expected evidence" in rationality. "Premature optimization is the root of all evil" in programming. These sentences are useful reminders, and while they aren't misleading traps the way "Duncan Sabien is a teacher and a writer" is, they take some practice and experience, or at least more detailed teaching, to actually turn into something useful.

Having the word "Sazen" with this meaning in my head has changed how I write. It shifted my thesis statement from simply being a compressed version of my argument towards being an easy handle to repeat to oneself at need, the same way I might mutter "shots on goal, shots on goal" to myself during a hockey game.

Sazen is a bit meta: it's not a technique for object-level accomplishments but a technique for how to teach or explain object-level things. But anything that immediately upgrades my own writing is worth a solid upvote.

This post also gestures at the important problem of transmitting knowledge. It ultimately doesn't know how to do this, but I especially appreciated the paragraph starting "much of what aggregated wisdom like that seems to do..." for pointing out that this can speed things up even if it can't prevent the first mistake or two. I think this is worth being included in the best of LW collection.
11 · Screwtape
The thing I want most from LessWrong and the Rationality Community writ large is the martial art of rationality. That was the Sequences post that hooked me, that is the thing I personally want to find if it exists, that is what I thought CFAR as an organization was pointed at.

When you are attempting something that many people have tried before (and to be clear, "come up with teachings to make people better" is something that many, many people have tried before) it may be useful to look and see what went wrong last time. In the words of Scott Alexander: "I’m the last person who’s going to deny that the road we’re on is littered with the skulls of the people who tried to do this before us. . . We’re almost certainly still making horrendous mistakes that people thirty years from now will rightly criticize us for. But they’re new mistakes. . . And I hope that maybe having a community dedicated to carefully checking its own thought processes and trying to minimize error in every way possible will make us have slightly fewer horrendous mistakes than people who don’t do that."

This article right here? This is a skull. It should be noticed.

If the Best Of collection is for people who want a martial art of rationality to study, then I believe this article is the most important entry, and it or the latest version of it will continue to be the most important entry until we have found the art at last.

Thank you Anna for trying to build the art. Thank you for writing this and publishing it where anyone else about to attempt to build the art can take note of your mistakes and try to do better. (Ideally it's next to a dozen things we have found that we do think work! But maybe it's next to them the way a surgeon general's warning is next to a bottle of experimental pills.)
17 · AprilSR
I feel like Project Lawful, as well as many of Lintamande's other glowfic since then, has given me a whole lot deeper an understanding of... a collection of virtues including honor, honesty, trustworthiness, etc, which I now mostly think of collectively as "Law". I think this has been pretty valuable for me on an intellectual level—I think, if you show me some sort of deontological rule, I'm going to give a better account of why/whether it's a good idea to follow it than I would have before I read any glowfic.

It's difficult for me to separate how much of that is due to Project Lawful in particular, because ultimately I've just read a large body of work which all showed a particular sort of thought pattern that I've since learned. But I think this particular fragment of the rationalist community has given me some valuable new ideas, and it'd be great to figure out a good way of acknowledging that.
15 · niplav
I don't think this would fit into the 2022 review. Project Lawful has been quite influential, but I find it hard to imagine a way its impact could be included in a best-of. Including this post in particular strikes me as misguided, as it contains none of the interesting ideas and lessons from Project Lawful, and thus doesn't make any intellectual progress.

One could try to do the distillation of finding particularly interesting or enlightening passages from the text, but that would be:

1. A huge amount of work[1], though maybe David Udell's sequence could be used for that.
2. Quite difficult for the more subtle lessons, which are interwoven in the text.

I have nothing against Project Lawful in particular[2], but I think that including this post would be misguided, and including passages from Project Lawful would be quite difficult. For that reason, I'm giving this a -1.

[1] Consider: after more than two years the Hanson compilation bounty still hasn't been fulfilled, at a $10k reward!
[2] I've read parts of it (maybe 15%?), but haven't been hooked, and every time I read a longer part I get the urge to go and read textbooks instead.