EDIT: corrected from previous version.

If the moon is made of cheese, then Rafael Delago was elected president of Ecuador in 2005.

If you believe that Kennedy was shot in 1962, then you must believe that Santa Claus is the Egyptian god of the dead.

Both of these are perfectly true statements of classical logic. The antecedent is false, hence the whole conditional is true, no matter what the consequent is: if A is false, then A→B is true.
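For reference, here is the full truth table of the material conditional (a standard fact of classical logic, added here for convenience): A→B is false only in the single row where A is true and B is false.

\begin{tabular}{cc|c}
A & B & A \to B \\ \hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T \\
\end{tabular}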

It does feel counterintuitive, though, especially because human beliefs do not work in this way. Consider instead the much more intuitive statement:

If you believe that Kennedy was shot in 1962, then you must believe that Lee Harvey Oswald was also shot in 1962.

Here there seems to be a connection between the two clauses; we feel A→B is more justified when "→" actually does some work in establishing a relationship between A and B. But can this intuition be formalised?

One way to do so is to use relevance logics, which are a subset of "paraconsistent" logics. Paraconsistent logics are those that avoid the principle of explosion: the rule of classical logic that if you accept a single contradiction - a single (A and not-A) - then you can prove anything at all. This is akin to accepting one false belief that contradicts your other beliefs - after that, anything goes. The contradiction explodes and takes everything down with it. But why would we be interested in avoiding either the principle of explosion or unjustified uses of "→"?

There seem to be three groups that could benefit from avoiding this. Firstly, those who worry about the occasional error in their data or premises, or a missed step in their reasoning, and don't want to collapse into incoherence because of a single mistake (paraconsistency has found application in database management, for instance). These generally need only 'weakly' paraconsistent theories. Secondly, the dialetheists, who believe in the existence of true contradictions. The liar paradox is an example: if L="L is false", then a dialetheist would simply say that L is true, and not-L is also true, accepting the contradiction (L and not-L). This has the advantage of allowing a language to talk about its own truth: arithmetic truth can be defined within arithmetic, if we accept a few contradictions along the way.

Thirdly, Less Wrong: here the best use of relevance logic would be to articulate counterfactuals without falling into the Löbian/self-confirming trap and blowing up. Consider the toy problem:

def U():
  # A() is the agent's decision procedure; it returns the action chosen.
  if A() == 1:
    return 5
  else:
    return 10

Then the problem is that in UDT, sentences such as L="(A()==1 → U()==5) and (A()!=1 → U()==-200)" are self-confirming: if the utility-maximising agent A() accepts L, then L becomes true. A() will then output 1 (since L promises 5 for that action and -200 for any other), making the first clause true by calculation, and the second clause true because its antecedent is false. The agent thus settles for a utility of 5 when any other output would have yielded 10. This leads to all sorts of Löbian problems. However, if we reject the gratuitous use of "→", then even if we kept Löbian reasoning, the argument would fail, as L would no longer be self-confirming.
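To make the dynamic concrete, here is a minimal runnable sketch; the internals of A() below are my own illustrative assumption (an agent that has already accepted L and maximises over what L promises), not part of the original toy problem:

def A():
  # Having accepted L, the agent believes action 1 yields 5 and any
  # other action yields -200, so maximising picks action 1.
  promised = {1: 5, 2: -200}
  return max(promised, key=promised.get)

def U():
  if A() == 1:
    return 5
  else:
    return 10

a, u = A(), U()
assert (a != 1) or (u == 5)     # A()==1 → U()==5: true by calculation
assert (a == 1) or (u == -200)  # A()!=1 → U()==-200: vacuously true
print(a, u)  # prints "1 5" - yet outputting 2 would have secured 10

Both clauses of L check out on the actual run, which is exactly what makes L self-confirming.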

OK, as the actor said, that's my motivation; now, how do we do it? Where does the principle of explosion come from, and what do we have to do to get rid of it? One mathematician allegedly defended the argument "(0=1) implies (I am God)" by saying: "(0=1) implies (1=2); I and God are two, hence I and God are one!" The more rigorous proof, starting from the contradictory premise (A and not-A) and deriving an arbitrary B, goes as follows (terminology will be explained):

  1. A and not-A (premise)
  2. A (by conjunction elimination from (1))
  3. not-A  (by conjunction elimination from (1))
  4. A or B  (by disjunction introduction from (2))
  5. B (by disjunctive syllogism from (3) and (4))
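For the formally inclined, here is a minimal rendering of the five steps in Lean 4 (my own sketch, not from the original post). Note that discharging the refuted branch uses absurd, i.e. ex falso, which is precisely the move a paraconsistent logic declines to make:

example (A B : Prop) (h : A ∧ ¬A) : B := by
  have h2 : A := h.left           -- (2) conjunction elimination
  have h3 : ¬A := h.right         -- (3) conjunction elimination
  have h4 : A ∨ B := Or.inl h2    -- (4) disjunction introduction
  cases h4 with                   -- (5) disjunctive syllogism:
  | inl a => exact absurd a h3    --     the A branch is refuted by (3),
  | inr b => exact b              --     leaving B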

To reject this proof, we have four options: reject conjunction elimination, reject disjunction introduction, reject the disjunctive syllogism, or reject transitive proofs - saying, for instance, that "(2) and (3)" implies "(4)", and that "(3) and (4)" implies "(5)", but rejecting the implication that "(2) and (3)" implies "(5)".

Rejecting transitive proofs is almost never done: what is the point of a proof system if you can't build on previous results? Conjunction elimination says that "(A and B) is true" means that both A and B are true; this seems far too fundamental to our understanding of 'and' to be tossed aside.

Disjunction introduction says that "A is true" implies that "(A or B) is true" for any B. This is also intuitive, though possibly a little less so; we are randomly inserting a B of which we know nothing. There are paraconsistent logics that reject disjunction introduction, but we won't be looking at them here (and why, I hear you ask? For the deep philosophical reason that the book I'm reading doesn't go into them).

That leaves the disjunctive syllogism. This claims that from (A or B) and (not-A) we can deduce B. It is certainly intuitive, so in my next post I'll present a simple and reasonably intuitive paraconsistent system of logic that rejects it.

Comments:

Paraconsistent logics feel like the "outside the box" box to me. They avoid the most obvious difficulties, but there is no good explanation of why any particular paraconsistent logic should be the right solution to this particular problem and, while they are relatively well known, they have not produced many interesting results. The problem of 'normal' uncertainty, in particular, was solved with probability theory, not nonclassical logics.

If a system believes A and not A, it has already made a mistake. If a well-designed system represents A and not A, it must be something other than a belief in the usual sense. Representing a specific non-belief thing using paraconsistent logic could work, but just throwing paraconsistent logic at the problem in order to avoid difficulties due to explosion seems like a dead end.

I'm not advocating them, just presenting them informally to see if they are of use for UDT.

Okay, I wasn't quite sure what you personally thought of them. I obviously don't object to informing people about various ideas in logic.

It looks like you are confusing a property of the material conditional -- namely, that every material conditional with a false antecedent is a true conditional -- with the principle of explosion -- that from a contradiction one may (validly) infer anything. Having a single false belief (or even a bunch of them) and having beliefs closed under logical entailment does not necessarily lead to explosion. The set of beliefs has to include a logical contradiction for explosion to occur.

You're right, I was very sloppy - I'm mainly prepping for the "Relevance logics" post, which rejects the material conditional as we normally use it.

Have corrected it, thanks.

Using probabilities seems to get around it. A→B would seem to mean P(B|A)≈1. P(Rafael Delago was elected president of Ecuador in 2005|The moon is made of cheese) is low, so I would not say that if the moon is made of cheese, then Rafael Delago was elected president of Ecuador in 2005.

Probabilities are usually defined in terms of events. P(B|A) = P(B,A) / P(A). If A = "the moon is made of cheese" then the measure P(A) = 0, and also P(B,A) = 0. So the conditional probability would be undefined.
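A tiny illustration of that point (the numbers are made up): the ratio definition simply has no value when P(A) = 0, while any nonzero prior, however small, keeps it defined:

def conditional(p_b_and_a, p_a):
  # P(B|A) = P(B,A) / P(A), which is undefined when P(A) = 0.
  if p_a == 0:
    raise ZeroDivisionError("P(B|A) is undefined when P(A) = 0")
  return p_b_and_a / p_a

print(conditional(0.5e-300, 1e-300))  # 0.5: tiny but nonzero prior
try:
  conditional(0.0, 0.0)               # the cheese-moon case
except ZeroDivisionError as e:
  print(e)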

You could adopt the position that probabilities should never be exactly 0 or 1. The moon might be made out of cheese after all, just with probability 1e-(1e1000). And quantum uncertainty pretty much guarantees that it is possible. Then what you are saying makes a lot of sense.

I'm not sure they should never be zero or one, but there is definitely a non-zero (and much higher than 1e-(1e1000)) chance that the moon is made of cheese.

Yep, I feel that's the most promising avenue - but relevance logics deserve at least a glance.

The opening paragraphs are deceptive in at least two ways.

The first is that it looks like "in classical logic, from a false belief, anything follows," which is wrong. It's inconsistency, not falsehood, that can derive anything. One can believe the moon is made of cheese without deriving any conclusion, so long as one does not also believe that the moon is made of something other than cheese.

The second is the "if → then" format. While true in a sense, it's deceptive in making it look like a single inferential step, when in fact it takes multiple steps to derive the principle of explosion. The Wikipedia article on the principle of explosion gives multiple derivations, and the shortest is six steps. Gödel, Escher, Bach derives the principle of explosion in no fewer than 24 steps! It is not an obvious or immediate deduction.

I downvoted for that and because I didn't like the rest of the post.

I thought the rewrite made it clear that the issues with "→" and with explosion are separate (though both play out differently in relevance logics versus classical logic).

Thanks for rewriting the post! I look forward to the next one.

Did everyone else's first year maths lecturer prove that if 1=0, then they were Brigitte Bardot? (We all applauded at the end of the proof.) Now that's how to hammer home the point of a logical explosion from a false statement.

A false statement isn't sufficient to obtain logical explosion, but it is necessary. Most likely, you are referring to a "proof" that introduces a premise that contradicts an earlier stated premise without any of the students noticing (which is a very cool trick indeed). Contradictory premises are necessary and sufficient to obtain logical explosion.

If it's like the "I am God" trick, then the contradiction is using both 1=1 and 1=2 at the same time.