
This is a list of brief explanations and definitions for terms that Eliezer Yudkowsky uses in the book Rationality: From AI to Zombies, an edited version of the Sequences.

"Infinite set atheism" is a tongue-in-cheek phrase used by Eliezer Yudkowsky to describe his doubt that infinite sets of things exist in the physical universe. While Yudkowsky has so far not claimed to be a finitist, in the sense of doubting the mathematical correctness of those parts of mathematics that make use of the concept of infinite sets, he is not convinced that an AI would need to use mathematical tools of this kind in order to reason correctly about the physical world. 1

A precondition of Aumann's agreement theorem and its numerous generalizations is that the agents must have the same priors. Without common priors, even perfectly rational agents can easily end up disagreeing. One can justify the assumption of common priors by noting that it would be awfully strange if rational beliefs could depend on seemingly arbitrary starting features of the agent.
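
A minimal numerical sketch (the numbers here are illustrative only, not taken from the theorem or the original text): suppose two agents observe the same piece of evidence E, with P(E|H) = 0.8 and P(E|not-H) = 0.2. By Bayes' theorem,

P(H|E) = P(E|H)·P(H) / [P(E|H)·P(H) + P(E|not-H)·P(not-H)].

An agent whose prior is P(H) = 0.5 updates to 0.8·0.5 / (0.8·0.5 + 0.2·0.5) = 0.80, while an agent whose prior is P(H) = 0.1 updates to 0.8·0.1 / (0.8·0.1 + 0.2·0.9) ≈ 0.31. Both reason flawlessly from identical evidence, yet their posteriors differ, and the entire difference traces back to the priors.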

Aumann's agreement theorem states that Bayesian reasoners with common priors and common knowledge of each other's opinions cannot agree to disagree. This has enormous implications for the human practice of rationality. Consider: if I'm an honest seeker of truth, and you're an honest seeker of truth, and we believe each other to be honest, then we can update on each other's opinions and quickly reach agreement. Unless you think I'm so irredeemably irrational that my opinions anticorrelate with truth, the very fact that I believe something is Bayesian evidence that it is true, and you should take that into account when forming your belief. Likewise, fellow rationalists should update their beliefs on your beliefs, not as a social custom or personal courtesy, but simply because your rational belief really is Bayesian evidence about the state of the world, in the same way that a photograph or a reference book is evidence about the state of the world.
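
A minimal worked example (with made-up numbers, purely for illustration): let B stand for "you state that you believe H". If I judge that you are more likely to assert H when it is true than when it is false, say P(B|H) = 0.9 and P(B|not-H) = 0.3, then starting from P(H) = 0.5, learning B moves my credence to

P(H|B) = 0.9·0.5 / (0.9·0.5 + 0.3·0.5) = 0.75.

Your stated belief has shifted my probability estimate in just the way a photograph or a reference book would: it is an observable fact that correlates with whether H is true.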

Optimizing predictions to sound like good stories, when nature does no such optimizing, creates a bias that Nick Bostrom has termed good-story bias.

Summaries of LessWrong Posts from 2007

Some Claims Are Just Too Extraordinary