We are interested in developing practical techniques of rationality. One such technique, used widely and successfully in science and technology, is formalization: transforming a less formal argument into a more formal one. Despite its successes, formalization isn't trivial to learn, and schools rarely try to teach *general* techniques of thinking and deciding; instead, they generally teach only domain-specific reasoning. We end up with graduates who can apply formalization skillfully inside specific domains (e.g. electrical engineering or biology), but who fail to apply, or misapply, those skills in other domains (e.g. politics or religion).

A side excursion, to be used as an example:

Is it true that a real-world decision can be perfectly justified via a mathematical proof? No.

Here is one reason: Any real-world "proof", whether it is a publication in a math journal or the output of a computer proof-checker, might contain a mistake. Human mathematicians routinely make mistakes in publications. The computer running the proof-checker is physical, and might be struck by a cosmic ray. Even if the computer were perfect, the proof-checker might have a bug in it.

Here is another reason: Even if you had a proof known to be mistake-free, a mathematical proof always reasons from assumptions to conclusions. To apply the proof to the real world, you must make an informal argument that the assumptions hold in this real-world situation, and a second informal argument that the conclusion of the proof corresponds to the real-world decision. These first and last links in the chain remain, no matter how rigorous and trusted the proof is. The problem with duplicating real-world spheres via Banach-Tarski isn't that there's a mistake in the proof; it's that real-world spheres don't satisfy its assumptions.

If you were fitting this argument into your beliefs, you might produce a number, a "gut" estimate of how likely it is that this informal argument is wrong. Can we improve on that using the technique of formalization? What would a formalization of this argument look like? One possible starting point is to rename everything. We're confident (via the philosophy of logic) that renaming won't increase or decrease the quality of the argument, and we will reason better about the correctness of the *form* if we hide the *subjects* of the argument.

Is it true that A? No.

Here is one reason: B is true (and B implies not A). Intuition pump. Intuition pump. Intuition pump.

Here is another reason: Even assuming B were false, C is true (and C implies not A). Intuition pump.
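The renamed skeleton can be pushed one step further into a proof assistant. Below is a minimal, hypothetical rendering in Lean 4 (the names `A`, `B`, `C`, `h1`, `h2` are illustrative, not from the original argument); each subargument becomes a one-line derivation, while the intuition pumps stay outside the formal system as support for the hypotheses:

```lean
-- A, B, C are opaque propositions, as in the renamed argument.
variable (A B C : Prop)

-- First subargument: B is true, and B implies not A.
example (hB : B) (h1 : B → ¬A) : ¬A := h1 hB

-- Second subargument: even assuming B were false, C is true
-- and C implies not A.
example (hC : C) (h2 : C → ¬A) : ¬A := h2 hC
```

Note what the formalization does *not* capture: the intuition pumps, and the informal judgment that `hB`, `h1`, `hC`, and `h2` actually hold. Those remain gut estimates.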

Note: there are many choices in this renaming process. It's not a trivial, thought-free operation at all. Someone else might get a completely different "underlying structure" from the same starting point. This particular structure suggests an equation, something like:

P(whole argument is wrong) = P(first subargument is wrong) × P(second subargument is wrong | first subargument is wrong)

The equation allows you to estimate the probability of the whole argument being wrong using two "gut" estimates instead of one. Because the whole argument fails only if *both* independent reasons fail, and because each subestimate concerns a smaller, simpler claim, this is probably an improved, lower-variance estimate.
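The combination step above is just the chain rule of probability. A toy sketch (the function name and the numbers are illustrative, not from the original text):

```python
def p_whole_argument_wrong(p_first_wrong: float,
                           p_second_wrong_given_first: float) -> float:
    """Both subarguments must fail for the whole argument to fail,
    so combine the two gut estimates via the chain rule:
    P(first and second) = P(first) * P(second | first)."""
    return p_first_wrong * p_second_wrong_given_first

# Illustrative numbers only: first subargument judged 10% likely wrong;
# second judged 20% likely wrong given that the first is wrong
# (higher than its unconditional estimate, since failures may correlate).
print(round(p_whole_argument_wrong(0.10, 0.20), 4))  # 0.02
```

Note that the conditional in the second estimate matters: if whatever fooled you about the first subargument would also tend to fool you about the second, the product with the unconditional estimate would be overconfident.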

The point is:

- Formalization is a rationality technique.
- Renaming primitive notions to meaningless symbols is a reasonable first step in formalization.