I present here two puzzles of rationality that you LessWrongers may find worth dealing with. The first one may look more amenable to a simple solution, while the second has attracted the attention of a number of contemporary epistemologists (Cargile, Feldman, Harman) and does not look so simple to solve. So, on to the puzzles!

 

Puzzle 1 

At t1 I justifiably believe theorem T is true, on the basis of a complex argument I have just validly reasoned through from the likewise justified premises P1, P2 and P3.
So, at t1 I reason from the premises:
 
(R1) P1, P2, P3
 
To the justified conclusion:
 
(T) T is true
 
At t2, Ms. Math, a well-known authority on the subject matter of which my reasoning and my theorem are just a part, tells me I’m wrong. She tells me the theorem is false, and convinces me of that on the basis of a valid piece of reasoning with at least one false premise, the falsity of that premise being unknown to us.
So, at t2 I reason from the premises (Reliable Math and Testimony of Math):
 
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
 
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid piece of reasoning from F, P1, P2 and P3,
 
(R2) F, P1, P2 and P3
 
To the justified conclusion:
 
(~T) T is not true
 
Some epistemologists would say that (~T) defeats my previous belief (T). Is it rational for me to proceed this way? Am I taking the correct direction of defeat? Wouldn’t it also be rational if (~T) were defeated by (T)? Why does (~T) defeat (T), and not vice versa? Is it just because (~T)’s justification was obtained at a later time?


Puzzle 2

At t1 I know theorem T is true, on the basis of a complex argument I have just validly reasoned through from the known premises P1, P2 and P3. So, at t1 I reason from the known premises:
 
(R1) P1, P2, P3
 
To the known conclusion:
 
(T) T is true
 
Besides, I also reason from known premises:
 
(ME) If there is any evidence against something that is true, then it is misleading evidence (evidence for something that is false)
 
(T) T is true
 
To the conclusion (anti-misleading evidence):
 
(AME) If there is any evidence against (T), then it is misleading evidence
 
At t2 the same Ms. Math tells me the same thing. So at t2 I reason from the premises (Reliable Math and Testimony of Math):
 
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
 
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid piece of reasoning from F, P1, P2 and P3,
 
But then I reason from:
 
(F*) F, RM and TM are evidence against (T), and
 
(AME) If there is any evidence against (T), then it is misleading evidence
 
To the conclusion:
 
(MF) F, RM and TM are misleading evidence
 
And then I continue to know T and I lose no knowledge, because I know/justifiably believe that the counter-evidence I have just met is misleading. Is it rational for me to act this way?
I know (T) and I know (AME) at t1 on the basis of valid reasoning. Then I am exposed to the misleading evidence (Reliable Math), (Testimony of Math) and (F). The evidentialist scheme (and maybe other schemes as well) supports the thesis that (RM), (TM) and (F) instead DEFEAT my justification for (T), so that whatever I inferred from (T) is no longer known. However, given my previous knowledge of (T) and (AME), I could know that (MF): F is misleading evidence. Can it still be said that (RM), (TM) and (F) DEFEAT my justification for (T), given that (MF) DEFEATS my justification for (RM), (TM) and (F)?

Comments

Yo, deductive logic is a special case of probabilistic logic in the limit that your probabilities for things go to 0 and 1, i.e. you're really sure of things. If I'm really sure that Socrates is a man, and I'm really sure that all men are mortal, then I'm really sure that Socrates is mortal. However, if I am 20% sure that Socrates is a space alien, my information is no longer well-modeled by deductive logic, and I have to use probabilistic logic.

The point is that the conditions for deductive logic have always broken down if you can deduce both T and ~T. This breakdown doesn't (always) mean you can no longer reason. It does mean you should stop trying to use deductive logic, and use probabilistic logic instead. Probabilistic logic is, for various reasons, the right way to reason from incomplete information - deductive logic is just an approximation for when you're really sure of things. Try phrasing your problems with degrees of belief expressed as probabilities, follow the rules, and you will find that the apparent problem has vanished into thin air.

Welcome to LessWrong!

Thank you! Well, you didn't answer the puzzle. The puzzles are not showing that my reasoning is broken because I have evidence to believe T and ~T. The puzzles are asking what is the rational thing to do in such a case - what is the right choice from the epistemological point of view. So, when you answer in puzzle 1 that believing (~T) is the rational thing to do, you must explain why that is so. The same applies to puzzle 2. I don't think that degrees of belief, expressed as probabilities, can solve the problem. Whether my belief is rational or not doesn't seem to depend on the degree of my belief. There are cases in which the degree of my belief that P is very low and, yet, I am rational in believing that P. There are cases where I infer a proposition from a long argument, have no counter-evidence to any premise or to the support relation between premises and conclusion but, yet, have a low degree of confidence in the conclusion. Degree of belief is a psychological matter, or at least so it appears to me. Nevertheless, even accepting the degree-of-belief model of rational doxastic changes, I can conceive the puzzle as one where all the relevant beliefs - (R1), (T), (AME), etc, - have degree 1. Can you explain what is the rational thing to do in each case, and why?

Well, you didn't answer the puzzle.

So, in order to answer the puzzles, you have to start with probabilistic beliefs, rather than with binary true-false beliefs. The problem is currently somewhat like the question "is it true or false that the sun will rise tomorrow." To a very good approximation, the sun will rise tomorrow. But the earth's rotation could stop, or the sun could get eaten by a black hole, or several other possibilities that mean that it is not absolutely known that the sun will rise tomorrow. So how can we express our confidence that the sun will rise tomorrow? As a probability - a big one, like 0.999999999999.

Why not just round up to one? Because although the gap between 0.999999999999 and 1 may seem small, it actually takes an infinite amount of evidence to bridge that gap. You may know this as the problem of induction.

So anyhow, let's take problem 1. How confident are you in P1, P2, and P3? Let's say about 0.99 each - you could make a hundred such statements and only get one wrong, or so you think. So how about T? Well, if it follows from P1, P2 and P3, then you believe it with degree about 0.97 (0.99^3).

Now Ms. Math comes and tells you you're wrong. What happens? You apply Bayes' theorem. When something is wrong, Ms. Math can spot it 90% of the time, and when it's right, she only thinks it's wrong 0.01% of the time. So Bayes' rule says to multiply your probability of ~T by 0.9/(0.03 × 0.9 + 0.97 × 0.0001), giving an end result of T being true with probability only about 0.004.
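For concreteness, here's a minimal sketch of that update in Python (my own illustration, reusing the made-up numbers above):

```python
# Illustrative numbers only, taken from the comment above.
p_T = 0.99 ** 3                 # prior on T: three premises each believed at 0.99
p_notT = 1 - p_T

p_wrong_given_notT = 0.9        # Ms. Math spots a false theorem 90% of the time
p_wrong_given_T = 0.0001        # she miscalls a true theorem 0.01% of the time

# Bayes' theorem: P(~T | she says "wrong")
evidence = p_notT * p_wrong_given_notT + p_T * p_wrong_given_T
posterior_notT = p_notT * p_wrong_given_notT / evidence

print(round(p_T, 3))                 # ~0.970 before her testimony
print(round(1 - posterior_notT, 4))  # ~0.004 after her testimony
```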

Note that at no point did any beliefs "defeat" other ones. You just multiplied them together. If Ms. Math had talked to you first, and then you had gotten your answer after, the end result would be the same. The second problem is slightly trickier because not only do you have to apply probability theory correctly, you have to avoid applying it incorrectly. Basically, you have to be good at remembering to use conditional probabilities when applying (AME).

I can conceive the puzzle as one where all the relevant beliefs - (R1), (T), (AME), etc, - have degree 1.

I suspect that you only conceive that you can conceive of that. In addition to the post linked above, I would suggest reading this, and this, and perhaps a textbook on probability. It's not enough for something to be a belief for it to be a probability - it has to behave according to certain rules.

I can't believe people apply Bayes' theorem when confronted with counter-evidence. What evidence do we have to believe that Bayesian probability theory describes the way we reason inductively?

Oh, if you want to model what people actually do, I agree it's much more complicated. Merely doing things correctly is quite simple by comparison.

[anonymous] · 12y

It doesn't necessarily describe the way we actually reason (because of cognitive biases that affect our ability to make inferences), but it does describe the way we should reason.

I can conceive the puzzle as one where all the relevant beliefs - (R1), (T), (AME), etc, - have degree 1.

Well, in that case, learning RM & TM leaves these degrees of belief unchanged, as an agent who updates via conditionalization cannot change a degree of belief that is 0 or 1. That's just an agent with an unfortunate prior that doesn't allow him to learn.

More generally, I think you might be missing the point of the replies you're getting. Most of them are not-very-detailed hints that you get no such puzzles once you discard traditional epistemological notions such as knowledge, belief, justification, defeaters, etc. (or change the subject from them) and adopt Bayesianism (here, probabilism & conditionalization & algorithmic priors). I am confident this is largely true, at least for your sorts of puzzles. If you want to stick to traditional epistemology, a reasonable-seeming reply to puzzle 2 (more within the traditional epistemology framework) is here: http://www.philosophyetc.net/2011/10/kripke-harman-dogmatism-paradox.html

OK, got it, thank you. I have two doubts. (i) Why is a belief with degree 1 not affected by new information that is counter-evidence to that belief? Does it mean that every belief with degree 1 I have now will never be lost/defeated/changed? (ii) The difference between what you call traditional epistemology and Bayesianism involves lots of things. I think one of them is their objectives - the traditional epistemologist and the Bayesian in general have different goals. The first one is interested in posing the correct norms of reasoning and of other sources of beliefs (perception, memory, etc.). The second one is perhaps more interested in modelling rational structures for a variety of purposes. That being the case, the puzzles I brought are maybe not of interest to Bayesians - but that does not mean Bayesianism solves the question of what is the correct thing to do in such cases. Thanks for the link (I already know Harman's approach, which is heavily criticized by Conee and others).

Why is a belief with degree 1 not affected by new information that is counter-evidence to that belief?

That's how degree 1 is defined: so strong a belief that no evidence can persuade one to abandon it. (You shouldn't have such beliefs, needless to say.)

The difference between what you call traditional epistemology and Bayesianism involves lots of things. I think one of them is their objectives - the traditional epistemologist and the Bayesian in general have different goals. The first one is interested in posing the correct norms of reasoning and of other sources of beliefs (perception, memory, etc.). The second one is perhaps more interested in modelling rational structures for a variety of purposes.

I don't see the difference. Bayesian epistemology is a set of prescriptive norms of reasoning.

That being the case, the puzzles I brought are maybe not of interest to Bayesians - but that does not mean Bayesianism solves the question of what is the correct thing to do in such cases.

Bayesianism explains the problem away - the problem is there only if you use notions like defeat or knowledge and insist on building your epistemology on them. Your puzzle shows that this is impossible. The fact that Bayesianism is free of Gettier problems is an argument for Bayesianism and against "traditional epistemology".

To make an imprecise analogy, ancient mathematicians have long wondered what the infinite sum 1-1+1-1+1-1... is equal to. When calculus was invented, people saw that this was just a confused question. Some puzzles are best answered by rejecting the puzzle altogether.

(i) That remark concerns a Bayesian agent, or more specifically an agent who updates by conditionalization. It's a property of conditionalization that no amount of evidence that an agent updates upon can change a degree of belief of 0 or 1. Intuitively, the closer a probability gets to 1, the less it will decrease in absolute value in response to a given strength of counterevidence. 1 corresponds to the limit at which it won't decrease at all in response to any counterevidence.
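A minimal illustration of that limit (my own sketch, not part of the original comment): plug a prior of 1 into Bayes' rule and the posterior stays at 1, no matter how lopsided the likelihoods are.

```python
def conditionalize(prior_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) by Bayes' rule."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Evidence E is 1000x more likely if H is false.
print(conditionalize(0.99, 0.001, 1.0))  # strong counterevidence: drops to ~0.09
print(conditionalize(1.0, 0.001, 1.0))   # same evidence against a prior of 1: stays exactly 1.0
```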

(ii) I'm well-aware that the aims of most epistemologists and most Bayesian philosophers diverge somewhat, but there is substantial overlap even within philosophy (i.e. applying Bayesianism to norms of belief change); furthermore, Bayesianism is very much applicable (and in fact applied) to norms of belief change, your puzzles being examples of questions that wouldn't even occur to a Bayesian.

Zed · 12y

I second Manfred's suggestion about the use of beliefs expressed as probabilities.

In puzzle (1) you essentially have a proof for T and a proof for ~T. We don't wish the order in which we're exposed to the evidence to influence us, so the correct conclusion is that you should simply be confused*. Thinking in terms of "Belief A defeats belief B" is a bit silly, because you then get situations where you're certain T is true, and the next day you're certain ~T is true, and the day after that you're certain again that T is true after all. So should beliefs defeat each other in this manner? No. Is it rational? No. Does the order in which you're exposed to evidence matter? No.

In puzzle (2) the subject is certain a proposition is true (even though he's still free to change his mind!). However, accepting contradicting evidence leads to confusion (as in puzzle 1), and to mitigate this the construct of "Misleading Evidence" is introduced that defines everything that contradicts the currently held belief as Misleading. This obviously leads to Status Quo Bias of the worst form. The "proof" that comes first automatically defeats all evidence from the future, therefore making sure that no confusion can occur. It even serves as a Universal Counterargument ("If that were true I'd believe it and I don't believe it therefore it can't be true"). This is a pure act of rationalization, not of rationality.

*) meaning that you're completely confident of neither T nor ~T.

Thank you, Zed. You are right: I didn't specify the meaning of 'misleading evidence'. It means evidence to believe something that is false (whether or not the cognitive agent receiving such evidence knows it is misleading). Now, maybe I'm missing something, but I don't see any silliness in thinking in terms of "belief A defeats belief B". On the basis of having experiential evidence, I believe there is a tree in front of me. But then I discover I'm drugged with LSD (a friend of mine put it in my coffee earlier, unknown to me). This new piece of information defeats the justification I had for believing there is a tree in front of me - my evidence does not support this belief anymore. There is good material on defeasible reasoning and justification on John Pollock's website: http://oscarhome.soc-sci.arizona.edu/ftp/publications.html#reasoning

Zed · 12y

If you're certain that belief A holds, you cannot change your mind about that in the future. The belief cannot be "defeated", in your parlance. So, given that you can be exposed to information that will lead you to change your mind, we conclude that you weren't absolutely certain about belief A in the first place. So how certain were you? Well, this is something we can express as a probability. You're not 100% certain a tree in front of you is, in fact, really there, exactly because you realize there is a small chance you're drugged or otherwise cognitively incapacitated.

So as you come into contact with evidence that contradicts what you believe, you become less certain your belief is correct, and as you come into contact with evidence that confirms what you believe, you become more confident your belief is correct. Apply Bayes' rule for this (for links to Bayes and Bayesian reasoning see other comments in this thread).

I've just read a couple of pages of Defeasible Reasoning by Pollock and it's a pretty interesting formal model of reasoning. Pollock argues, essentially, that Bayesian epistemology is incompatible with deductive reasoning (pg 15). I semi-quote: "[...] if Bayesian epistemology were correct, we could not acquire new justified beliefs by reasoning from previously justified beliefs" (pg 17). I'll read the paper, but this all sounds pretty ludicrous to me.

Axioms are not true or false. They either model what we intended them to model, or they don't. In puzzle 1, assuming you have carefully checked both proofs, confidence that (F, P1, P2, P3) implies T and (F, P1, P2, P3) implies ~T are both justified, rendering (F, P1, P2, P3) an uninteresting model that probably does not reflect the system that you were trying to model with those axioms. If you are trying to figure out whether or not T is true within the system you were trying to model, then of course you cannot be confident one way or the other, since you aren't even confident of how to properly model the system. The fact that your proof of T relied on fewer axioms would seem to be some evidence that T is true, but is not particularly strong.

puzzle 2: (ME) points both ways. While it certainly seems to be strong evidence against the reliability of (RM), since she just reasoned from clearly inconsistent axioms, it can't prove that F is the axiom you should throw away. Consider the possibility that you could construct a proof of ~T given only F, P1, and P2. Now, (ME) could not possibly say anything different about F and P3.

Confidence that the same premises can imply both ~T and T is confidence that at least one of your premises is logically inconsistent with the others -- that they cannot all be true. It's not just a question of whether they model something correctly -- there is nothing they could model completely correctly.

In puzzle one, I would simply conclude that either one of the proofs is incorrect, or one of the premises must be false. Which option I consider most likely will depend on my confidence in my own ability, Ms. Math's abilities, whether she has confirmed the logic of my proof or been able to show me a misstep, my confidence in Ms. Math's beliefs about the premises, and my priors for each premise.

at least one of your premises is logically inconsistent with the others -- that they cannot all be true.

Suppose I have three axioms: A, B, and C.

A: x=5

B: x+y=4

C: 2x+y=6

Which axiom is logically inconsistent with the others? (A, B), (B, C), and (A, C) are all consistent systems, so I can't declare any of the axioms to be false, just that for any particular model of anything remotely interesting, at least one of them must not apply.
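For what it's worth, here is a quick check of that claim (my own sketch; it uses sympy, which is just my choice of tool):

```python
# Each pair of axioms has a solution, but the full system has none.
from sympy import Eq, solve, symbols

x, y = symbols("x y")
A = Eq(x, 5)
B = Eq(x + y, 4)
C = Eq(2 * x + y, 6)

for name, system in [("A,B", [A, B]), ("B,C", [B, C]),
                     ("A,C", [A, C]), ("A,B,C", [A, B, C])]:
    print(name, solve(system, [x, y]))
# A,B   {x: 5, y: -1}
# B,C   {x: 2, y: 2}
# A,C   {x: 5, y: -4}
# A,B,C []   <- no solution: the three together are inconsistent
```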

If you downvoted, maybe offer constructive criticism? I feel like you're shooting the messenger, when we should really be shooting (metaphorically) mainstream philosophy for not recognizing the places where these questions have already been solved, rather than publishing more arguments about Gettier problems.

I didn't downvote! And I am not shooting the messenger, as I am also sure it is not an argument about Gettier problems. I am sorry if the post offended you - maybe it is better not to mix different views of something.

I believe Manfred is referring to downvoting your post, you being the messenger, etc.

Right.

[anonymous] · 12y

Both of these puzzles fall apart if you understand the concepts in Argument Screens Off Authority, A Priori, and Bayes Theorem. Essentially, the notion of "defeat" is extremely silly. In Puzzle 1, for example, what you should really be doing is updating your level of belief in T based on the mathematician's argument. The order in which you heard the arguments doesn't matter--the two Bayesian updates will still give you the same posterior regardless of which one you update on first.

Puzzle 2 is similarly confused about "defeat"; the notion of "misleading evidence" in Puzzle 2 is also wrong. If you look at things in terms of probabilities instead of the "known/not known" dichotomy presented in the puzzle, there is no confusion. Just update on the mathematician's argument and be done with it.

fsopho · 12y

Well, puzzle 2 is a puzzle with a case of knowledge: I know (T). Changing to probabilities does not solve the problem, only changes it!

[anonymous] · 12y

But that's the thing: you don't "know" (T). You have a certain degree of belief, which is represented by a real number between 0 and 1, that (T) is true. You can then update this degree of belief based on (RM) and (TM).

Ah, the old irresistible force acting upon an immovable object argument.

This seems (dis)solvable by representing changing beliefs as shifting probability mass around. You might argue that after you've worked your way through the proof of T step by step, you've moved the bulk of probability mass to T (with respect to priors that don't favor either T or ~T too much). But if it were enough, we would expect to see all of the following: 1) people are always certain of their conclusions after they've done the math once; 2) people don't find errors in proofs that have been published for a long time; 3) there's no perceived value in checking each other's proofs; 4) If there is a certain threshold of complexity or length after which people would stop becoming certain of their conclusions, nobody has reached it yet.

None of this is true in our world, which supports the hypothesis that a non-trivial amount of probability mass gets stuck along the way; subjectively, this manifests in your acknowledging the (small, but non-negligible) possibility of having erred in each part of the proof.

Now, the proper response to TM would be to shift your probability according to the weight of Ms. Math's authority, which is not absolute. If you're uncomfortably uncertain afterwards, you just re-examine your evidence, paying more attention to the hardest parts, and squeeze some more probability juice either way until you either are certain enough or spot an error.

So, I would like to thank you guys for the hints and critical comments here - you are helping me a lot! I'll read what you recommended in order to investigate the epistemological properties of the degree-of-belief version of Bayesianism. For now, I'm just full of doubts: "does Bayesianism really stand as a normative theory of rational doxastic attitudes?"; "what is the relation between degrees of belief and evidential support?"; "is it correct to say that people reason in accordance with probability principles when they reason correctly?"; "is the idea of defeating evidence an illusion?"; and still others. =]

You can't just ignore evidence on the basis that it's probably misleading. If you want to find out the probability that T is true, you take all of the evidence into account. If you want to know if a particular piece of evidence is misleading, you take all of the evidence into account to find the probability that what it's evidence of is false, and that's the probability of it being misleading.

I can see how it might appear that if a piece of evidence has a 70% chance of being misleading, for example, you should only do 30% of an update. That's not how it works. If it has a 70% chance of being misleading, that means that whatever it's evidence for has a 30% chance of being true. If you find further evidence for it, then it increases the probability that it's true, and decreases the probability that the original evidence is misleading.
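A hedged numerical illustration of that point (made-up numbers, my own sketch): "the probability that E is misleading" just is the probability that the claim X it supports is false, and further evidence moves both numbers together.

```python
def update(p_x, likelihood_ratio):
    """Odds-form Bayes update: posterior odds = prior odds * likelihood ratio."""
    odds = p_x / (1 - p_x) * likelihood_ratio
    return odds / (1 + odds)

p_x = 0.30                 # X is 30% likely true,
print(1 - p_x)             # so the original evidence E is 70% likely misleading
p_x = update(p_x, 10)      # new evidence, 10x more likely if X is true
print(p_x, 1 - p_x)        # P(X) ≈ 0.81, P(E misleading) ≈ 0.19
```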

I would say the rational thing would be to remain unconvinced about T, ~T, and (T xor ~T) as well as about F, P1, P2, and P3, as well as about R1 and R2. Clearly reality is doing something other than you think it is doing and the list of things I made to remain unsure about is a reasonably comprehensive list within this problem of things to be unsure of until you've got this figured out. I would also remain unsure that this is a complete list of things to be unsure about.

If you need to make policy decisions based on T or ~T, I would aim to make the least expensive decision, but that calculation probably requires estimates of the probability of various things, which the problem provides no way to obtain. I would tend towards doing nothing, but of course I can't be sure (and don't think anybody can be) that this is the "right" answer.

I don't think t1 or t2 are relevant, I don't really think RM is relevant, and I would completely reject AME as ass-backwards filtering your evidence through what you want to believe is true.

I think a lot of the replies here suggesting that Bayesian epistemology easily dissolves the puzzles are mistaken. In particular, the Bayesian-equivalent of (1) is the problem of logical omniscience. Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic. But (1), suitably understood, provides a plausible scenario where logical omniscience fails.

I do agree that the correct understanding of the puzzles is going to come from formal epistemology, but at present there are no agreed-upon solutions that handle all instances of the puzzles.

The formulations of "logical omniscience is a problem for Bayesian reasoners" that I have seen are not sufficiently worrying; actually creating a Dutch Book would require the formulating party to have the logical omniscience the Bayesian lacks, which is not a situation we encounter very much.

Sorry, I'm not sure I understand what you mean. Could you elaborate?

It's just that logical omniscience is required to quickly identify the (pre-determined) truth value of incredibly complicated mathematical equations; if you want to exploit my not knowing the thousandth Mersenne prime, you have to know the thousandth Mersenne prime to do so, and humans generally don't encounter beings that have significantly more logical knowledge.

Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic.

This can be treated for cases like problem (1) by saying that since the probabilities are computed with the brain, if the brain makes a mistake in the ordinary proof, the equivalent proof using probabilities will also contain the mistake.

Dealing with limited (as opposed to imperfect) computational resources would be more interesting - I wonder what happens when you relax the consistency requirement to proofs smaller than some size N?

Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic.

Could you explain in more detail why Bayesian epistemology can't be built without such an assumption? All arguments I have seen went along the lines "unless you are logically omniscient, you may end up having inconsistent probabilities". That may be aesthetically unpleasant when we think about ideal Bayesian agents, but doesn't seem to be a grave concern for Bayesianism as a prescriptive norm of human reasoning.

Could you explain in more detail why Bayesian epistemology can't be built without such an assumption?

Well, could you explain how to build it that way? Bayesian epistemology begins by interpreting (correct) degrees of belief as probabilities satisfying the Kolmogorov axioms, which implies logical omniscience. If we don't assume our degrees of belief ought to satisfy the Kolmogorov axioms (or assume they satisfy some other axioms which entail Kolmogorov's), then we are no longer doing Bayesian epistemology.

Is there more to it than that it is the definition of Bayesian epistemology?

Logical omniscience with respect to propositional logic is necessary if we require that p(A|B) = 1 if A is deducible from B. Relaxing this requirement still leaves us with a working system. Of course, the reasoner should update his p(A|B) to somewhere close to 1 after seeing the proof that B⇒A, but he needn't have this belief a priori.

Logical omniscience comes from probability "statics," not conditionalization. When A is any propositional tautology, P(A) (note the lack of conditional) can be algebraically manipulated via the three Kolmogorov axioms to yield 1. Rejecting one of the axioms to avoid this result leaves you vulnerable to Dutch books. (Perhaps this is not so surprising, since reasoning about Dutch books assumes classical logic. I have no idea how one would handle Dutch book arguments if we relax this assumption.)

Of course, if I am inconsistent, I can be Dutch booked. If I believe that P(tautology) = 0.8 because I haven't realised it is a tautology, somebody who knows that will offer me a bet and I will lose. But, well, lack of knowledge leads to sub-optimal decisions - I don't see it as a fatal flaw.

I suppose one could draw from this a similar response to any Dutch book argument. Sure, if my "degree of belief" in a possible statement A is 2, I can be Dutch booked. But now that I'm licensed to disbelieve entailments (so long as I take myself to be ignorant that they're entailments), perhaps I justifiably believe that I can't be Dutch booked. So what rational constraints are there on any of my beliefs? Whatever argument you give me for a constraint C from premises P1, ..., Pn, I can always potentially justifiably believe the conditional "If the premises P1, ..., Pn are true, then C is correct" has low probability - even if the argument is purely deductive.

You are right. I think this is the tradeoff: either we demand logical omniscience, or we have to allow disbelief in entailment. Still, I don't see a big problem here because I think of Bayesian epistemology as a tool which I voluntarily adopt to improve my cognition - I have no reason to deliberately reject (assign a low probability to) a deductive argument when I know it, since I would harm myself that way (at least I believe so, because I trust deductive arguments in general). I am "licensed to disbelieve entailments" only in order to keep the system well defined; in practice I don't disbelieve them once I know their status. The "take myself to be ignorant that they're entailments" part is irrational.

I must admit that I haven't a clear idea how to formalise this. I know what I do in practice: when I don't know that two facts are logically related, I treat them as independent and it works in approximation. Perhaps the trust in logic should be incorporated in the prior somehow. Certainly I have to think about it more.

As a point of detail, if P1, P2 and P3 imply T, then F, P1, P2 and P3 can only imply ~T if the system F, P1, P2, P3 is inconsistent. To fix this in your argument, simply have F replace one of the other axioms rather than supplement them.

Other than that, I can only recommend what others have said about Bayes' theorem - it is the correct weapon with which to approach problems of this type.

Puzzle 1

  • RM is irrelevant.

The concept of "defeat", in any case, is not necessarily silly or inapplicable to a particular (game-based) understanding of reasoning, which has always been known to be discursive, so I do not think it is inadequate as an autobiographical account, but it is not how one characterizes what is ultimately a false conclusion that was previously held true. One need not commit oneself to a particular choice either in the case of "victory" or "defeat", which are not themselves choices to be made.

Puzzle 2

  • Statements ME and AME are both false generalizations. One cannot know evidence for (or against) a given theorem (or apodosis from known protases) in advance based on the supposition that the apodosis is true, for that would constitute a circular argument. I.e.:

T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render it truly false. It is also false to suppose that a human being is always capable of reasoning correctly under all states of knowledge, or even that they possess sufficient knowledge of a particular body of information perfectly so as to reason validly.

  • MF is also false as a generalization.

In general, one should not be concerned with how "misleading" a given amount of evidence is. To reason on those grounds, one could suppose a given bit of evidence would always be "misleading" because one "knows" that the contrary of what that bit of evidence suggests is always true. (The fact that there are people out there who do in fact "reason" this way, based on evidence, as in the superabundant source of historical examples in which they continue to believe in a false conclusion, because they "know" the evidence that it is false is false or "misleading", does not at all validate this mode of reasoning, but rather shores up certain psychological proclivities that suggest how fallacious their reasoning may be; however, this would not itself show that the course of necessary reasoning is incorrect, only that those who attempt to exercise it do so very poorly.) In the case that the one is dealing with a theorem, it must be true, provided that the reasoning is in fact valid, for theorematic reasoning is based on any axioms of one's choice (even though it is not corollarial). !! However, if the apodosis concerns a statement of evidence, there is room for falsehood, even if the reasoning is valid, because the premisses themselves are not guaranteed to be always true.

The proper attitude is to understand that the reasoning prior to exposure of evidence/reasoning from another subject (or one's own inquiry) may in fact be wrong, however necessary the reasoning itself may seemingly appear. No amount of evidence is sufficient evidence for its absolute truth, no matter how valid the reasoning is. Note that evidence here is indeed characteristic of observational criteria, but the reasoning based thereon is not properly deductive, even if the reasoning is essentially necessary in character. Note that deductive logic is concerned with the reasoning to true conclusions under the assumption that the relevant premisses are true; if one is taking into account the possibility of premisses which may not always be true, then such reasoning is probabilistic (and necessary) reasoning.

!! This, in effect, resolves puzzle 1. Namely, if the theorem is derived based on valid necessary reasoning, then it is true. If it isn't valid reasoning, then it is false. If "defeat" consists in being shown that one's initial stance was incorrect, then yes, it is essential that one takes the stance of having been defeated. Note that puzzle 2 is solved in fundamentally the same manner, despite the distracting statements ME, AME, and MF, on account of the nature of theorems. Probabilities nowhere come into account, and the employment of Bayesian reasoning is an unnecessary complication. If one does not take the stance of having been defeated, then there is no hope for that person to be convinced of anything of a logical (necessary) character.

Gust · 12y

"T is true; therefore, evidence that it is false is false. This constitutes invalid reasoning, because it rules out new knowledge that may in fact render it truly false."

Actually, I think that if "I know T is true" means you assign probability 1 to T being true, and if you ever were justified in doing that, then you are justified in assigning probability 1 to the evidence being misleading, and it is not even worth taking into account. The problem is, for all we know, one is never justified in assigning probability 1 to any belief. So I'd say the problem is a wrong question.

Edited: I meant probability 1 of misleading evidence, not 0.

Actually, I think that if "I know T is true" means you assign probability 1 to T being true, and if you ever were justified in doing that, then you are justified in assigning probability 0 to the evidence being misleading, and it is not even worth taking into account. The problem is, for all we know, one is never justified in assigning probability 1 to any belief.

The presumption of the claim "I know T is true" (and that evidence that it is false is false) is false precisely in the case that the reasoning used to show that T (in this case a theorem) is true is invalid. Were T not a theorem, then probabilistic reasoning would in fact apply, but it does not. (And since it doesn't, it is irrelevant to pursue that path. But, in short, the fact that it is a theorem should lead us to understand that the premisses' truth is not the issue at hand here, thus probabilistic reasoning need not apply, and so there is no issue of T's being probably true or false.) Furthermore, it is completely wide of the mark to suggest that one should apply this or that probability to the claims in question, precisely because the problem concerns deductive reasoning. All the non-deductive aspects of the puzzles are puzzling distractions at best. In essence, if a counterargument comes along demonstrating that T is false, then it necessarily would involve demonstrating that invalid reasoning was somewhere committed in someone's having arrived at the (fallacious) truth of T. (It is necessary that one be led to a true conclusion given true premisses.) Hence, one need not be concerned with the epistemic standing of the truth of T, since it would have clearly been demonstrated to be false. And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T despite the fact that that commitment was fundamentally invalid. Valid reasoning is always valid, no matter what one may think of the reasoning; and one may invalidly believe in the validity of an invalid conclusion. Such is human fallibility.

So I'd say the problem is a wrong question.

No, I think it is a good question, and it is easy to be led astray by not recognizing where precisely the problem fits in logical space, if one isn't being careful. Amusingly (if not disturbingly), some of the most up-voted posts are precisely those that get this wrong and thus fail to see the nature of the problem correctly. However, the way the problem is framed does lend itself to misinterpretation, because a demonstration of the falsity of T (namely, that it is invalid that T is true) should not be treated as a premiss in another apodosis; a valid demonstration of the falsity of T is itself a deductive conclusion, not a protasis proper. (In fact, the way it is framed, the claim ~T is equivalent to F, such that the claim that [F, P1, P2, and P3] implies ~T is really a circular argument, but I was being charitable in my approach to the puzzles.) But oh well.

Gust · 12y

In essence, if a counterargument comes along demonstrating that T is false, then it necessarily would involve demonstrating that invalid reasoning was somewhere committed in someone's having arrived at the (fallacious) truth of T.

I think I see your point, but if you allow for the possibility that the original deductive reasoning is wrong, i.e. deny logical omniscience, don't you need some way to quantify that possibility, and in the end wouldn't that mean treating the deductive reasoning itself as Bayesian evidence for the truth of T?

Unless you assume that you can't make a mistake in the deductive reasoning, T being a theorem of the premises is a theory to be proven within the Bayesian framework, with Bayesian evidence, not anything special.

And if you do assume that you can't make a mistake in the deductive reasoning, I think there's no sense in paying attention to any contrary evidence.

...if you allow for the possibility that the original deductive reasoning is wrong...

I want to be very clear here: a valid deductive reasoning can never be wrong (i.e., invalid), only those who exercise in such reasoning are liable to error. This does not pertain to logical omniscience per se, because we are not here concerned with the logical coherence of the total collection of beliefs a given person (like the one in the example) might possess; we are only concerned with T. And humans, in any case, do not always engage in deduction properly due to many psychological, physical, etc. limitations.

don't you need some way to quantify that possibility, and in the end wouldn't that mean treating the deductive reasoning itself as Bayesian evidence for the truth of T?

No, the possibility that someone will commit an error in deductive reasoning is in no need of quantification. That is only to increase the complexity of the puzzle. And by the razor, what is done with less is in vain done with more.

Unless you assume that you can't make a mistake in the deductive reasoning, T being a theorem of the premises is a theory to be proven within the Bayesian framework, with Bayesian evidence, not anything special.

To reiterate, an invalid deductive reasoning is not a deduction with which we should concern ourselves. The prior case of T, having been shown F, is in fact false, such that we should no longer elevate it to the status of a logical deduction. By the measure of its invalidity, we know full well the valid deduction ~T. In other words, to make a mistake in deductive reasoning is not to reason deductively!

And if you do assume that you can't make a mistake in the deductive reasoning, I think there's no sense in paying attention to any contrary evidence.

This is where the puzzle introduced needless confusion. There was no real evidence. There was only the brute fact of the validity of ~T as introduced by a person who showed the falsity/invalidity of T. That is how the puzzles' solution comes to a head – via a clear understanding of the nature of deductive reasoning.

Gust · 12y

Sorry, I think I still don't understand your reasoning.

First, I have the beliefs P1, P2 and P3, then I (in an apparently deductively valid way) reason that [C1] "T is a theorem of P1, P2, and P3", therefore I believe T.

Either my reasoning that finds out [C1] is valid or invalid. I do think it's valid, but I am fallible.

Then the Authority asserts F, I add F to the belief pool, and we (in an apparently deductively valid way) reason [C2] "~T is a theorem of F, P1, P2, and P3", therefore we believe ~T.

Either our reasoning that finds out [C2] is valid or invalid. We do think it's valid, but we are fallible.

  • Is it possible to conclude C2 without accepting that I made a mistake when reasoning C1 (and therefore that we were wrong to think that line of reasoning was valid)? Otherwise we would have both T and ~T as theorems of F, P1, P2, and P3, and we should conclude that the premises lead to contradiction and should be revised; we wouldn't jump from believing T to believing ~T.
  • But the story doesn't say the Authority showed a mistake in C1. It says only that she made an (apparently valid) piece of reasoning using F in addition to P1, P2, and P3.
  • If the argument of the Authority doesn't show the mistake in C1, how should I decide whether to believe C1 has a mistake, C2 has a mistake, or the premises F, P1, P2, and P3 actually lead to contradiction, with both C1 and C2 being valid?

I think Bayesian reasoning would inevitably enter the game in that last step.

C1 is a presumption, namely, a belief in the truth of T, which is apparently a theorem of P1, P2, and P3. As a belief, its validity is not what is at issue here, because we are concerned with the truth of T.

F comes in, but is improperly treated as a premiss to conclude ~T, when it is equivalent to ~T. Again, we should not be concerned with belief, because we are dealing with statements that are either true or false. Either but not both (T or ~T) can be true (which is the definition of a logical tautology).

Hence C2 is another presumption with which we should not concern ourselves. Belief has no influence on the outcome of T or ~T.

For the first bullet: no, it is not possible, in any case, to conclude C2, for not to agree that one made a mistake (i.e., reasoned invalidly to T) is to deny the truth of ~T which was shown by Ms. Math to be true (a valid deduction).

Second bullet: in the case of a theorem, to show the falsity of a conclusion (of a theorem) is to show that it is invalid. To say there is a mistake is a straightforward corollary of the nature of deductive inference that an invalid motion was committed.

Third bullet: I assume that the problem is stated in general terms, for had Ms. Math shown that T is false in explicit terms (contained in F), then the proper form of ~T would be: F -> ~T. Note that it is wrong to frame it the following way: F, P1, P2, and P3 -> ~T. It is wrong because F states ~T. There is no "decision" to be made here! Bayesian reasoning in this instance (if not many others) is a misapplication and obfuscation of the original problem from a poor grasp of the nature of deduction.

(N.B.: However, if the nature of the problem were to consist in merely being told by some authority a contradiction to what one supposes to be true, then there is no logically necessity for us to suddenly switch camps and begin to believe in the contradiction over one's prior conviction. Appeal to Authority is a logical fallacy, and if one supposes Bayesian reasoning is a help there, then there is much for that person to learn of the nature of deduction proper.)

Let me give you an example of what I really mean:

Note statements P, Q, and Z:

(P) Something equals something and something else equals that same something such that both equal each other.

(Q) This something equals that. This other something also equals that.

(Z) The aforementioned somethings equal each other.

It is clear that Z follows from P and Q, no? In effect, you're forced to accept it, correct? Is there any "belief" involved in this setting? Decidedly not. However, let's suppose we meet up with someone who disagrees and states: "I accept the truths of P and Q but not Z."

Then we'll add the following to help this poor fellow:

(R) If P and Q are true, then Z must be true.

They may respond: "I accept P, Q, and R as true, but not Z."

And so on ad infinitum. What went wrong here? They failed to reason deductively. We might very well be in the same situation with T, where

(P and Q) are equivalent to (P1, P2, and P3) (namely, all of these premisses are true), such that whatever Z is, it must be equivalent to the theorem (which would in this case be ~T, if Ms. Math is doing her job and not merely deigning to inform the peons at the foot of her ivory tower).

P1, P2, and P3 are axiomatic statements. And their particular relationship indicates (the theorem) S, at least to the one who drew the conclusion. If a Ms. Math comes to show the invalidity of T (by F), such that ~T is valid (such that S = ~T), then that immediately shows that the claim of T (~S) was false. There is no need for belief here; ~T (or S) is true, and our fellow can continue in the vain belief that he wasn't defeated, but that would be absolutely illogical; therefore, our fellow must accept the truth of ~T and admit defeat, or else he'll have departed from the sphere of logic completely. Note that if Ms. Math merely says "T is false" (F) such that F is really ~T, then the form [F, P1, P2, and P3] implies ~T is really a circular argument, for the conclusion is already assumed within the premisses. But, as I said, I was being charitable with the puzzles and not assuming that that was being communicated.

Gust · 12y

I guess it wasn't clear: C1 and C2 referred to the reasonings as well as the conclusions they reached. You say belief is of no importance here, but I don't see how you can talk about "defeat" if you're not talking about justified believing.

For the first bullet: no, it is not possible, in any case, to conclude C2, for not to agree that one made a mistake (i.e., reasoned invalidly to T) is to deny the truth of ~T which was shown by Ms. Math to be true (a valid deduction).

I'm not sure if I understood what you said here. You agree with what I said in the first bullet or not?

Second bullet: in the case of a theorem, to show the falsity of a conclusion (of a theorem) is to show that it is invalid. To say there is a mistake is a straightforward corollary of the nature of deductive inference that an invalid motion was committed.

Are you sure that's correct? If there's a contradiction within the set of axioms, you could find T and ~T following valid deductions, couldn't you? Proving ~T and proving that the reasoning leading to T was invalid are only equivalent if you assume the axioms are not contradictory. Am I wrong?

P1, P2, and P3 are axiomatic statements. And their particular relationship indicates (the theorem) S, at least to the one who drew the conclusion. If a Ms. Math comes to show the invalidity of T (by F), such that ~T is valid (such that S = ~T), then that immediately shows that the claim of T (~S) was false. There is no need for belief here; ~T (or S) is true, and our fellow can continue in the vain belief that he wasn't defeated, but that would be absolutely illogical; therefore, our fellow must accept the truth of ~T and admit defeat, or else he'll have departed from the sphere of logic completely.

The problem I see here is: it seems like you are assuming that the proof of ~T shows clearly the problem (i.e. the invalid reasoning step) with the proof of T I previously reasoned. If it doesn't, all the information I have is that both T and ~T are derived apparently validly from the axioms F, P1, P2, and P3. I don't see why logic would force me to accept ~T instead of believing there's a mistake I can't see in the proof Ms. Math showed me, or, more plausibly, to conclude that the axioms are contradictory.

...I don't see how you can talk about "defeat" if you're not talking about justified believing

"Defeat" would solely consist in the recognition of admitting to ~T instead of T. Not a matter of belief per se.

You agree with what I said in the first bullet or not?

No, I don't.

The problem I see here is: it seems like you are assuming that the proof of ~T shows clearly the problem (i.e. the invalid reasoning step) with the proof of T I previously reasoned. If it doesn't, all the information I have is that both T and ~T are derived apparently validly from the axioms F, P1, P2, and P3.

T cannot be derived from [P1, P2, and P3], but ~T can on account of F serving as a corrective that invalidates T. The only assumptions I've made are 1) Ms. Math is not an ivory tower authoritarian and 2) that she wouldn't be so illogical as to assert a circular argument where F would merely be a premiss, instead of being equivalent to the proper (valid) conclusion ~T.

Anyway, I suppose there's no more to be said about this, but you can ask for further clarification if you want.

Gust · 12y

2) that she wouldn't be so illogical as to assert a circular argument where F would merely be a premiss, instead of being equivalent to the proper (valid) conclusion ~T.

Oh, now I see what you mean. I interpreted F as a new premise, a new axiom, not a whole argument about the (mistaken) reasoning that proved T. For example, (Wikipedia tells me that) the axiom of determinacy is inconsistent with the axiom of choice. If I had proved T in ZFC, and Ms. Math asserted the Axiom of Determinacy and proved ~T in ZFC+AD, and I didn't know beforehand that AD is inconsistent with AC, I would still need to find out what the problem was.

I still think this is more consistent with the text of the original post, but now I understand what you meant by " I was being charitable with the puzzles".

Thank you for your attention.

I'm interested in what you have to say, and I'm sympathetic (I think), but I was hoping you could restate this in somewhat clearer terms. Several of your sentences are rather difficult to parse, like "And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T despite the fact that that commitment was fundamentally invalid."

Read my latest comments. If you need further clarity, ask me specific questions and I will attempt to accommodate them.

But to give some additional note on the quote you provide, look to reductio ad absurdum as a case where it would be incorrect to aver to the truth of what is really contradictory in nature. If it still isn't clear, ask yourself this: "does it make sense to say something is true when it is actually false?" Anyone who answers this in the affirmative is either being silly or needs to have their head checked (for some fascinating stuff, indeed).

we are not justified in assigning probability 1 to the belief that 'A=A' or to the belief that 'p -> p'? Why not?

Those are only beliefs that are justified given certain prior assumptions and conventions. In another system, such statements might not hold. So, from a meta-logical standpoint, it is improper to assign probabilities of 1 or 0 to personally held beliefs. However, the functional nature of the beliefs does not itself figure in how the logical operators function, particularly in the case of necessary reasoning. Necessary reasoning is a brick wall that cannot be overcome by alternative belief, especially when one is working under specific assumptions. If one denies the assumptions and conventions one has set for oneself, one is no longer working within the space of those assumptions or conventions. Thus, within those specific conventions, those beliefs would indeed hold to the nature of deduction (be either absolutely true or absolutely false), but beyond that they may not.

[anonymous] · 12y

Short answer: Because if you assign probability 1 to a belief, then it is impossible for you to change your mind even when confronted with a mountain of opposing evidence. For the full argument, see Infinite Certainty.

[anonymous] · 12y

Probability.

[This comment is no longer endorsed by its author]