This, the final book of Rationality: From AI to Zombies, is less a conclusion than a
call to action. In keeping with Becoming Stronger’s function as a jumping-off point
for further investigation, I’ll conclude by citing resources the reader can use to move
beyond these sequences and seek out a fuller understanding of Bayesianism.
This text’s definition of normative rationality in terms of Bayesian probability theory and
decision theory is standard in cognitive science. For an introduction to the heuristics and
biases approach, see Baron’s Thinking and Deciding. For a general introduction
to the field, see the Oxford Handbook of Thinking and Reasoning.
The arguments made in these pages about the philosophy of rationality are more
controversial. Yudkowsky argues, for example, that a rational agent should one-box in
Newcomb’s Problem—a minority position among working decision theorists. (See Holt
for a nontechnical description of Newcomb’s Problem.) Gary Drescher’s Good and
Real independently comes to many of the same conclusions as Yudkowsky on philosophy
of science and decision theory. As such, it serves as an excellent book-length treatment
of the core philosophical content of Rationality: From AI to Zombies.
The philosophical literature distinguishes several views in Bayesian epistemology, including
E. T. Jaynes’s position that not all possible priors are equally reasonable.[6,7] Like Jaynes, Yudkowsky is interested in
supplementing the Bayesian optimality criterion for belief revision with an optimality
criterion for priors. This aligns Yudkowsky with researchers who hope to better understand
general-purpose AI via an improved theory of ideal reasoning, such as Marcus Hutter. For
a broader discussion of philosophical efforts to naturalize theories of knowledge, see Feldman.
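To make the distinction concrete: the belief-revision criterion in question is Bayes’ rule, which prescribes how a prior degree of belief in a hypothesis H should be updated on evidence E:

```latex
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```

The rule constrains updating, but is silent about where the prior P(H) itself should come from; it is that gap which an optimality criterion for priors, such as Jaynes’s maximum-entropy principle, is meant to fill.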
“Bayesianism” is often contrasted with “frequentism.” Some frequentists criticize Bayesians
for treating probabilities as subjective states of belief, rather than as objective
frequencies of events. Kruschke, among others, has
replied that frequentism is even more “subjective” than Bayesianism, because
frequentism’s probability assignments depend on the intentions of the experimenter.
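This dependence on intentions can be seen in the classic optional-stopping example, a standard illustration in the literature (the specific numbers below are for illustration only). Suppose an experimenter observes 9 heads and 3 tails in 12 coin flips. A frequentist p-value against the fair-coin hypothesis depends on whether the plan was “flip exactly 12 times” or “flip until the 3rd tail,” even though the observed data are identical; the Bayesian likelihood is the same under both plans, so the posterior over the coin’s bias does not change. A minimal sketch:

```python
from math import comb

# Observed data: 9 heads, 3 tails in 12 flips. Null hypothesis: fair coin.

# Design 1: experimenter planned exactly 12 flips (binomial sampling).
# One-sided p-value: probability of 9 or more heads in 12 fair flips.
p_fixed_n = sum(comb(12, k) for k in range(9, 13)) / 2**12

# Design 2: experimenter planned to flip until the 3rd tail
# (negative binomial sampling). One-sided p-value: probability that
# 12 or more flips are needed, i.e. at most 2 tails in the first 11 flips.
p_stop_at_tails = sum(comb(11, j) for j in range(0, 3)) / 2**11

print(f"fixed-N p-value:         {p_fixed_n:.4f}")        # ~0.073, not "significant" at 0.05
print(f"stop-at-3-tails p-value: {p_stop_at_tails:.4f}")  # ~0.033, "significant" at 0.05

# The Bayesian likelihood p^9 * (1 - p)^3 is identical under both designs,
# so the posterior over the coin's bias is unaffected by the stopping rule.
```

Same data, opposite verdicts at the conventional 0.05 threshold, purely because of what the experimenter intended to do.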
Importantly, this philosophical disagreement shouldn’t be conflated with the distinction
between Bayesian and frequentist data analysis methods, which can both be useful when
employed correctly. Bayesian statistical tools have become cheaper to use since the 1980s,
and their informativeness, intuitiveness, and generality have come to be more widely
appreciated, resulting in “Bayesian revolutions” in many sciences. However, traditional
frequentist methods remain more popular, and in some contexts they are still clearly
superior to Bayesian approaches. Kruschke’s Doing Bayesian Data Analysis is a fun
and accessible introduction to the topic.
In light of evidence that training in statistics—and some other fields, such as
psychology—improves reasoning skills outside the classroom, statistical literacy is directly
relevant to the project of overcoming bias. (Classes in formal logic and informal fallacies
have not proven similarly useful.)[12,13]
We conclude with three sequences on individual and collective self-improvement. “Yudkowsky’s
Coming of Age” provides a last in-depth illustration of the dynamics of irrational belief,
this time spotlighting the author’s own intellectual history. “Challenging the Difficult”
asks what it takes to solve a truly difficult problem—including demands that go beyond
epistemic rationality. Finally, “The Craft and the Community” discusses rationality groups
and group rationality, raising the questions:
Can rationality be learned and taught?
If so, how much improvement is possible?
How can we be confident we’re seeing a real effect in a rationality intervention, and
picking out the right cause?
What community norms would make this process of bettering ourselves easier?
Can we effectively collaborate on large-scale problems without sacrificing our
freedom of thought and conduct?
Above all: What’s missing? What should be in the next generation of rationality primers—the
ones that replace this text, improve on its style, test its prescriptions, supplement its
content, and branch out in altogether new directions?
Though Yudkowsky was moved to write these essays by his own philosophical mistakes and
professional difficulties in AI theory, the resultant material has proven useful to a much
wider audience. The original blog posts inspired the growth of Less Wrong, a
community of intellectuals and life hackers with shared interests in cognitive science,
computer science, and philosophy. Yudkowsky and other writers on Less Wrong have
helped seed the effective altruism movement, a vibrant and audacious effort to identify the
most high-impact humanitarian charities and causes. These writings also sparked the
establishment of the Center for Applied Rationality, a nonprofit organization that attempts
to translate results from the science of rationality into usable techniques for self-improvement.
I don’t know what’s next—what other unconventional projects or ideas might draw inspiration
from these pages. We certainly face no shortage of global challenges, and the art of applied
rationality is a new and half-formed thing. There are not many rationalists, and there are
many things left undone.
But wherever you’re headed next, reader—may you serve your purpose well.