This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss The World: An Introduction (pp. 834-839) and Part O: Lawful Truth (pp. 843-883). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

The World: An Introduction

O. Lawful Truth

181. Universal Fire - You can't change just one thing in the world and expect the rest to continue working as before.

182. Universal Law - In our everyday lives, we are accustomed to rules with exceptions, but the basic laws of the universe apply everywhere without exception. Apparent violations exist only in our models, not in reality.

183. Is Reality Ugly? - There are three reasons why a world governed by math can still seem messy. First, we may not actually know the math. Second, even if we do know all of the math, we may not have enough computing power to do the full calculation. And finally, even if we knew all the math and could compute it, we still might not know where in the mathematical system we are living.

184. Beautiful Probability - Bayesians expect probability theory, and rationality itself, to be math. Self-consistent, neat, even beautiful. This is why Bayesians think that Cox's theorems are so important.

185. Outside the Laboratory - Those who understand the map/territory distinction will integrate their knowledge, as they see the evidence that reality is a single unified process.

186. The Second Law of Thermodynamics, and Engines of Cognition - To form accurate beliefs about something, you really do have to observe it. It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort. Engines of cognition are not so different from heat engines, though they manipulate entropy in a more subtle form than burning gasoline. So unless you can tell me which specific step in your argument violates the laws of physics by giving you true knowledge of the unseen, don't expect me to believe that a big, elaborate clever argument can do it either. (A worked version of this thermodynamic bookkeeping appears after this list.)

187. Perpetual Motion Beliefs - People learn under the traditional school regimen that the teacher tells you certain things, and you must believe them and recite them back; but if a mere student suggests a belief, you do not have to obey it. They map the domain of belief onto the domain of authority, and think that a certain belief is like an order that must be obeyed, but a probabilistic belief is like a mere suggestion. And when half-trained or tenth-trained rationalists abandon their art and try to believe without evidence just this once, they often build vast edifices of justification, confusing themselves just enough to conceal the magical steps. It can be quite a pain to nail down where the magic occurs - their structure of argument tends to morph and squirm away as you interrogate them. But there's always some step where a tiny probability turns into a large one - where they try to believe without evidence - where they step into the unknown, thinking, "No one can prove me wrong". (The expected-evidence identity after this list makes this point precise.)

188. Searching for Bayes-Structure - If a mind is arriving at true beliefs, and we assume that the second law of thermodynamics has not been violated, that mind must be doing something at least vaguely Bayesian - at least one process with a sort-of Bayesian structure somewhere - or it couldn't possibly work. (A minimal sketch of such a structure follows this list.)
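
A few worked notes on the more technical items above. For 186, a standard way to make the thermodynamic bookkeeping concrete is the Szilard-engine / Landauer accounting (a textbook result in the same spirit as the essay's argument, not a formula the essay itself states): one bit of knowledge about a system can be converted into at most k_B T ln 2 of work, and erasing that bit from memory dissipates at least as much heat, so total entropy never decreases and knowledge of the unseen is never free:

    W_\text{extracted} \;\le\; k_B T \ln 2    % one bit about the system buys at most this much work (Szilard)
    Q_\text{erasure}   \;\ge\; k_B T \ln 2    % resetting that bit of memory dissipates at least this much heat (Landauer)
    \Rightarrow \quad \Delta S_\text{total} \;\ge\; 0    % the second law survives; the knowledge was paid for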
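
For 187, the "no perpetual motion" point has an exact probabilistic form, conservation of expected evidence, which follows directly from the law of total probability: averaged over the possible observations, the posterior must equal the prior, so no honest procedure can promise in advance to turn a tiny probability into a large one.

    P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \neg E)\,P(\neg E)    % law of total probability
    \mathbb{E}\left[ P(H \mid E) \right] \;=\; P(H)                   % expected posterior equals prior: no free updates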
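
For 188 (and for 184's claim that probability theory is simply math), here is a minimal sketch in Python of the kind of "Bayes-structure" the essay has in mind: an update that multiplies prior odds by a likelihood ratio. The function names and the numbers are illustrative only, not taken from the essays.

    # Minimal Bayesian update: posterior odds = prior odds * likelihood ratio.
    def update_odds(prior_odds, likelihood_ratio):
        """Return the posterior odds after seeing evidence with the given likelihood ratio."""
        return prior_odds * likelihood_ratio

    def to_probability(odds):
        """Convert odds (p / (1 - p)) back into a probability."""
        return odds / (1.0 + odds)

    # Illustrative numbers: a 1% prior, and evidence that is 20x more likely
    # if the hypothesis is true than if it is false.
    prior = 0.01
    posterior = to_probability(update_odds(prior / (1.0 - prior), likelihood_ratio=20.0))
    print(f"posterior = {posterior:.3f}")  # about 0.168: strong evidence, still far from certainty

The point of the sketch is its shape: evidence enters only through the likelihood ratio, and that multiplication is the "work" an observation performs on a belief.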

This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group, though, is the discussion, which takes place in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part P: Reductionism 101 (pp. 887-935). The discussion will go live on Wednesday, 16 December 2015, right here on the discussion forum of LessWrong.

1 comment

How, if at all, does section 186, "The Second Law of Thermodynamics, and Engines of Cognition", relate to E. T. Jaynes's claims about entropy and information? Are they mutually supporting? In conflict? Neither, but just deploying some of the same concepts in very different ways?