Formalization is a rationality technique


It's a start, but I don't know that this rather superficial level of formalization really gets you anywhere useful. I think in practice you will need to learn the domain specific formalization techniques to make real progress. I know Robin has occasionally asked people making comments on economics to offer a formal model, and I for one would have no idea how to begin. In my own field of computer security, sometimes a proposal is met with a challenge to define a threat model, and again this often serves to silence the amateurs.

In fact there is a danger that demands for formalization end up being a way of excluding those who are not members of the club.

Perhaps a better response to the forays of amateurs would be to define a formal model that represents your understanding of their argument, explain it to them, and see if they agree that it's accurate.

That's a lot of work to respond to an amateur's argument with. Probably at least an order of magnitude more work than went into the original argument. And the formal argument is likely to end up being very different from the original, informal one; it would be very frustrating to take someone's informal argument, formalize it, show that the formal version of the argument is incorrect, and then be told that your formalization missed some important insight.

Really? Maybe I'm just naive. Could you give me an example of an argument and its formalization?

Good examples of this include the efforts of posters on the newsgroup sci.math to make sense of arguments by math cranks who believe that they've proved the denumerability of the reals or what have you.

An argument need not be *formal* if it is *precise* and *specific* and *clear*. Those are the properties that formalism attempts to bring to arguments. The traditional forms are fine and good, and even better because they're generally understood, but they're not strictly necessary.

If you can be precise, specific, and clear, in a way that isn't traditionally formal, that's just fine.

Suppose someone was striving to be precise, specific and clear, but didn't have any good ideas for moving in that direction. Would you suggest "try to formalize it" as an idea-generator?

No. The set of precise, specific, and clear ways to express an idea is much, much larger than the set of formal ways (which are arguably a subset of the larger group). Generally, it's easier to hit a very large target than a narrow one. And if you don't know how to hit the set, hitting a subset will probably be even harder.

Precision and clarity precede the formal versions of themselves, in the same way that people used the principles of logic long before they were explicitly recognized and formalized. If Aristotle didn't already understand those principles, he could never have generated formal logic, which is the rational examination of them. If you don't understand the ideas clearly, you won't be able to formalize them.

I have had good experiences sharpening my ideas by striving to formalize them, into propositional or predicate logic, into running code, or (more recently) into bayesian networks.

Of course, formalization as a technique for improving arguments may not be useful for everybody.

There is a DEEP fallacy here, discussed in the work of Thomas Kuhn and Ludwig Wittgenstein, in philosopher Larry Wright's work on formal and informal reasoning, and in Friedrich Hayek's work on explanation in economics.

Think about how formalization gave Carnap a massively false view of science and knowledge. Think how formalization made mathematical economists believe that socialism was possible, and that an equilibrium construct was an "economy" or somehow provided an explanation. (There are many more examples of how formalism made philosophers of science and economists stupid, but those should be enough for an educated person to "get it".)

Formalization also massively falsified the nature of language, mind, and reasoning for most analytic philosophers over the last 100+ years -- a bit of reading of Wittgenstein should help one understand this.

Wright shows the strengths -- and limitations -- of formalization in argument, and points out that at the bottom is something identified by Kuhn and Wittgenstein: experience, training, good judgment, practice, etc., all stuff NOT formalizable -- there is no 'ultimate' mathematical or logical metric that can capture everything in a number or in a formalism.

On all this I might in particular recommend Larry Wright, "Argument and Deliberation: A Plea for Understanding", Journal of Philosophy.

Greg, I don't think we are given enough information here to call this a fallacy.

However, I probably side with your overall skepticism about how formalism is used and abused.

The foundations of mathematical modeling are simple to state.

A good model is: a) a formal translation of informal language, built from b) truth-functional elements, that c) preserves the validity of the home-language inference.

Unfortunately, many thinkers leave out c) altogether, are ignorant of b)'s limitations, and do not think about a).

Some conscious effort along the lines of a-c would improve our modeling in general.

If anyone wants to see a concrete but tinker toy example of a-c, consult any introductory text on formal logic and the rational reconstruction of material implication as a translation of the English "if, then." All the elements are shown clearly.
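As a minimal sketch of that textbook example (the code and its names are my own illustration, not from the thread), material implication can be written down and checked as a truth function:

```python
# Material implication: the standard truth-functional translation of
# the English "if P, then Q". It is false only when P is true and Q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Enumerating the full truth table confirms step (b): the connective is
# truth-functional, i.e. fixed entirely by the truth values of its parts.
for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5} Q={q!s:5}  P->Q={implies(p, q)}")
```

The well-known mismatch with ordinary English (a false antecedent makes the implication vacuously true) is exactly where step (c), preserving home-language inference, needs scrutiny.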

Would you recommend trying to formalize an argument, as a way of making it more precise, specific, and clear? Contrariwise, would you recommend avoiding formalization?

> On all this I might in particular recommend Larry Wright, "Argument and Deliberation: A Plea for Understanding", Journal of Philosophy.

Here's a link to that article on JSTOR.

Formalizations that take big chunks of arguments as black boxes are not that useful. Formalizations that instead map all of an argument's moving parts are very hard.

The reason that specialists learn formalizations for domain-specific arguments only is because formalizing truly general arguments[FN1] is an extremely difficult problem -- difficult to design and difficult to use. This is why mathematicians work largely in natural language, even though their arguments could (usually or always) be described in formal logic. Specialized formal languages are possible and useful only because they describe radically stripped-down models of the world.

Overall, this means that -- while moves like the example in the post probably *are* helpful -- we shouldn't expect to go much further in this direction.

[FN1]To be precise, formalizing truly general arguments *in a way that simplifies and clarifies* is hard. There is a trivial formalization for every argument, since the class of arguments is equinumerous with the natural numbers; the decision process can be a symbolic representation of a bunch of experts' brains or something like that.

Let's say I believe that proposition A has probability P_a of being true. Then I am informed of an argument B which has a probability P_b of being true. Although argument B implies proposition A, proposition A does not imply argument B. How should I adjust P_a for various values of P_b?

P( whole argument is wrong ) = P( first subargument is wrong ) * P( second subargument is wrong | first subargument is wrong )

P( whole argument is wrong ) is not P( first subargument is wrong AND second subargument is wrong), so the above conditional probability decomposition is incorrect.

Let's say I am arguing that the sky is blue because the aliens have landed and that's the color they painted it. The probability of the aliens having landed is very low, and the probability of the aliens painting the sky blue given that they've landed is also fairly low, but the probability of the sky being blue is quite high.

In other words, incorrect arguments in favor of a proposition don't make the proposition less likely.

For the argument to be wrong, only one of its subarguments has to be wrong. So the correct equation is

P(whole argument is wrong) = 1 - P(first subargument is right) * P(second subargument is right | first subargument is right)

I don't believe he's speaking of two subarguments which together imply the main argument, but two subarguments each of which independently implies the main argument. Thus, they would both have to be false for the whole argument to be wrong.
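The two readings being distinguished here can be sketched numerically. The probabilities below are illustrative made-up numbers, and the sketch assumes independence, which the thread's conditional formulation does not require:

```python
# Illustrative (made-up) probabilities that each subargument holds.
p1_right = 0.8
p2_right = 0.7

# Chain reading: both subarguments must hold for the conclusion to follow,
# so the whole argument fails if either one fails.
p_chain_wrong = 1 - p1_right * p2_right

# Independent-support reading: each subargument alone implies the
# conclusion, so the case fails only if both fail.
p_indep_wrong = (1 - p1_right) * (1 - p2_right)

print(round(p_chain_wrong, 2))  # 0.44
print(round(p_indep_wrong, 2))  # 0.06
```

The same two subargument estimates give very different failure probabilities depending on which structure the argument actually has, which is why the formalization step matters.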

We are interested in developing practical techniques of rationality. One practical technique, used widely and successfully in science and technology, is formalization: transforming a less-formal argument into a more-formal one. Despite its successes, formalization isn't trivial to learn, and schools rarely try to teach *general* techniques of thinking and deciding. Instead, schools generally teach only domain-specific reasoning. We end up with graduates who can apply formalization skillfully inside specific domains (e.g. electrical engineering or biology), but fail to apply, or misapply, their skills in other domains (e.g. politics or religion).

A side excursion, to be used as an example:

If you were fitting this argument into your beliefs, you might produce a number, a "gut" estimate of how likely this informal argument is to be wrong. Can we improve on that using the technique of formalization? What would a formalization of this argument look like? One possible starting point might be to rename everything. We're confident (via philosophy of logic) that renaming won't increase or decrease the quality of the argument. We will reason better about the correctness of the *form* if we hide the *subjects* of the argument.

Note: there are many choices in this renaming process. It's not a trivial, thought-free operation at all. Someone else might get a completely different "underlying structure" from the same starting point. This particular structure suggests an equation, something like:

P( whole argument is wrong ) = P( first subargument is wrong ) * P( second subargument is wrong | first subargument is wrong )

The equation allows you to estimate the probability of the whole argument being wrong, using two "gut" estimates instead of one. This is probably an improved, lower-variance estimate.
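A minimal sketch of the estimate the post describes, assuming the chain decomposition quoted elsewhere in the thread. The input numbers are hypothetical gut estimates, not values from the post:

```python
def whole_wrong(p_sub1_wrong: float, p_sub2_wrong_given_sub1: float) -> float:
    """Combine two smaller 'gut' estimates into one estimate of the
    probability that the whole argument is wrong, per the chain
    decomposition P(wrong) = P(sub1 wrong) * P(sub2 wrong | sub1 wrong)."""
    return p_sub1_wrong * p_sub2_wrong_given_sub1

# Hypothetical gut estimates for the two renamed subarguments.
print(whole_wrong(0.3, 0.5))  # 0.15
```

Each smaller estimate is about a narrower, more checkable claim than the original holistic guess, which is the sense in which the combined estimate plausibly has lower variance.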

The point is: