This is a linkpost for https://epistemologicalvigilance.substack.com/p/proof-as-mere-strong-evidence

Proof as mere strong evidence


Or to put it more provocatively: proofs are not necessary if we find alternative ways of gathering the same amount of evidence.

In mathematics, there are no alternative ways that give anything like the strength of an actual proof. The mushy sort of thing that is called "evidence" in most other contexts is worthless for mathematics, except so far as it may guide one's intuition towards finding interesting things.

Computers are mathematics made flesh, and if a transistor had only a probability of 0.99 (generally considered "high" elsewhere) of correctly opening or closing in response to its gate signal, computers as we know them would be impossible. The actual probability has many more 9s on the end, unheard of anywhere else.
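To see why those extra 9s matter, here is a rough back-of-the-envelope sketch (round, made-up numbers: a billion transistors each switching a billion times per second), computed in log-space so tiny failure rates don't vanish to rounding:

```python
import math

def p_error_free(p_fail, n_ops):
    """Probability that n_ops independent switching events all succeed,
    given a per-operation failure probability p_fail. Uses log1p so that
    very small failure rates are not lost to floating-point rounding."""
    return math.exp(n_ops * math.log1p(-p_fail))

ops = 10**18  # ~1e9 transistors x ~1e9 switches each, for one second

# At 0.99 per-operation reliability, the chip essentially never works:
print(p_error_free(0.01, ops))    # underflows to 0.0

# Even with twenty 9s of reliability, an error-free second is only ~99% likely:
print(p_error_free(1e-20, ops))   # ~0.990
```

The numbers are illustrative, but the shape of the argument is not: with 10^18 operations per second, per-operation reliability has to be astronomically close to 1 for the aggregate to work at all.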

Now, it is frequently observed, including in one of the other comments here, that mathematicians generally do not write out proofs in complete detail. However, any competent mathematician can, and far from it being "almost unknown" to do so, there are several major projects extant to do exactly that for all of mathematics. The ones that spring to my mind are Coq, HOL, Isabelle, and Mizar. There may be others.

Lean is another for mathematics. So yeah, I don't really think this post is correct.

Here's the link:

Thanks for that addition.

There are two reasons that completely rigorous formal proofs are only done in systems like these.

Firstly, they *can* only be done in such systems. Doing them by hand will just produce an enormous document full of technical detail which has far less chance of being error-free than the original semi-formal proof. Without the computer implementation, truly rigorous proof of new, deep theorems is practically impossible.

Secondly, mathematicians writing to be read by mathematicians do not need to do formal proofs, and so they don't. They need only develop them in enough detail that their readers (and they themselves!) can understand the argument clearly enough to see how the formal underpinnings will work.

These projects are motivated by the desire to do that work on behalf of the mathematics community, and create a growing corpus of verified mathematics. Some at least are working towards the day when every mathematician can use such a system as a working tool, but I think that is a ways off yet.
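For a taste of what a fully formal, machine-checked proof looks like, here is a trivial theorem in Lean 4 syntax (a sketch; the lemma names Nat.add_succ and Nat.succ_add are assumed from Lean's standard library):

```lean
-- Commutativity of addition on the natural numbers, by induction on b.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

Every step here is checked mechanically by the kernel; nothing is left to the reader's judgment. Formalization projects build up far deeper results in exactly this way, one verified lemma at a time.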

I think this may be based on a casual use of the word "proof". A formal proof *is* ... proof. You have to keep in mind that the axioms used may not correlate to the real world in any specific instance, and to that extent, a proof isn't evidence at all - there's no tie to the universe, it's all map.

In colloquial and legal use, "proof" is used differently - it really does just mean "strong evidence". I think almost everyone knows this, but maybe I'm wrong.

I think "proof" as used in the post was meant to refer to neither the colloquial use nor fully-formalized statements, but proofs as they are actually used by human mathematicians: written arguments which the community of mathematicians believe *could* be formalized if necessary. Then the question is how much confidence we should have in a statement given that such a "proof" exists. Mathematicians are quite good at determining which proofs are valid, but as the post points out, they are not infallible.

You seem to be getting at the validity/soundness distinction. Formal arguments rely on premises; a formal argument can rigorously show that its conclusions follow from its premises, i.e. that it's valid; but the conclusion can still be doubtful if the premises are, i.e. it's not sound.

PS. Not the downvoter.

wow, strong downvote with no explanation? still doesn't seem wrong to me - if you won't explain, just consider this just another chance to dock my karma.

Edit: comment karma positive now, unsure if the strong-down was removed or someone else used a strong-up to counter it. NBD, karma is meaningless and this wasn't a particularly important comment (since many others said similar things).

Also confused, as it seems very similar to a point I made with slightly different turns of phrase, and I have trouble imagining how those rephrasings would change the meaning significantly.

If an assumption is violated, then the whole rest of the system is inapplicable in that context, i.e. it gets weight 0.

Even a formal proof can remain merely strong evidence, because definitions it's built on might be formulating the original less formal questions in a misleading/incorrect/parochial way.

I like the reframe. The part I like best is the removal of the illusion of certainty.

I don't know if describing proofs as *just* evidence really captures it though. In many cases the point of a proof isn't just to know that the statement is one you can rely on. It's also to show you *why* you can rely on it. The process of understanding a proof can teach you something about how math works. The effect is stronger if you *produce* a proof.

Agreed! That's definitely an important point, and one reason why it's still interesting to try to prove P ≠ NP. The point I was making here was only that when proofs are used for the "certainty" they give, then strong evidence from other means is also enough to rely on the proposition.

On closer inspection though, more subtle notes emerge. Despite the dreams of formalists, the overwhelming majority of proofs in use and circulation are not minute low-level logical derivations, but rather intuitive high-level explanations, some of which turn out to be false even after peer review. Similarly, the level of detail varies enormously across accepted proofs.

Those high level explanations are not what I understand as "proof" in the mathematical sense. I get that "please, back up your claims" is sometimes framed as "Where is the proof?".

A proof is something that is properly axiomatised and purely mechanical, where there is no ambiguity about which formal moves are allowed and which are not.

Now the most common use case for a proof is: "this proof's axioms seem reasonable, so its theorems are true (of the actual world)". This is a detailed backing of claims, but it is not a proof in itself, because there is no mechanical way to determine which axioms are reasonable.

Proofs preserve the certainty of their axioms, but every axiom adopted means assuming more things, and you cannot get an axiomatic system going without assuming anything. Thus any axiomatic system must refer outside of itself for how solid its theorems are when applied elsewhere.

The form of reasoning where each step "seems reasonable" can be important but it is distinct from proofs.
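The "purely mechanical" character described above can be made concrete with a toy proof checker (a hypothetical mini-system, not any real proof assistant): formulas are nested tuples, ("->", p, q) means p implies q, and the only inference rule is modus ponens. The checker either accepts a derivation or rejects it, with no judgment call anywhere:

```python
def check(axioms, steps):
    """Mechanically verify a derivation. Each step is either
    ('axiom', formula), which must appear in the axiom list, or
    ('mp', i, j): from step i proving (p -> q) and step j proving p,
    conclude q. Returns the list of proved formulas, or False."""
    proved = []
    for step in steps:
        if step[0] == "axiom":
            if step[1] not in axioms:
                return False
            proved.append(step[1])
        elif step[0] == "mp":
            _, i, j = step
            impl, antecedent = proved[i], proved[j]
            if impl[0] != "->" or impl[1] != antecedent:
                return False
            proved.append(impl[2])  # the consequent q
        else:
            return False
    return proved

axioms = [("->", "rains", "wet"), "rains"]
proof = [("axiom", ("->", "rains", "wet")), ("axiom", "rains"), ("mp", 0, 1)]
print(check(axioms, proof)[-1])  # wet
```

Whether the axioms "rains" and "rains implies wet" are reasonable is exactly the part the checker cannot decide, which is the distinction being drawn here.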

To a very, very good approximation *no* proofs in actual published mathematical papers and books are "proofs" in the sense you describe here.

The intention is that they *could be turned into* such proofs without substantial creative effort, but it's almost unknown for anyone to bother doing it and those who *do* typically find that it's a lot of work.

https://proofwiki.org/wiki/ProofWiki:Jokes/Physicist_Mathematician_and_Engineer_Jokes/Burning_Hotel

The difference between thinking you have a solution and having a solution. And indeed the former admits great degrees of uncertainty.

Deutsch delves into this topic in great depth in "The Fabric of Reality". There are mathematical objects that "exist in the abstract", and pen-and-paper proofs, or proofs in Mathematica, are not categorically different from computer simulations that provide evidence that a so-and-so physical system will behave in a so-and-so way, but don't "prove" it.

Proofs at first exude a strong aroma of certainty.

On closer inspection though, more subtle notes emerge. Despite the dreams of formalists, the overwhelming majority of proofs in use and circulation are not minute low-level logical derivations, but rather intuitive high-level explanations, some of which turn out to be false even after peer review. Similarly, the level of detail varies enormously across accepted proofs.

And yet it works, most of the time. Enough to build massive and intricate systems, and enough for millions of daily practical applications in the physical world.

What gives?

I think this mystery dissolves if we refuse to give proofs a special status. Instead of being the one and only form of certainty accessible to man, they make far more sense as "just" a strong form of evidence in the traditional scientific sense.

If proof is only evidence, then obviously different contexts call for different levels of detail. We accept many geometric or intuitive proofs of the Pythagorean theorem, but ask for detailed checks of tens-of-pages-long advanced algebraic geometry proofs.

The other positive externality of desacralizing proofs comes in the form of methodological flourishing. Indeed, if proofs are merely sources of evidence, then any other way of generating evidence about formal statements is just as valid, as long as the total evidence ends up approximately the same.

Or to put it more provocatively: proofs are not necessary if we find alternative ways of gathering the same amount of evidence.

Nowhere has this rich ground been better exploited than in computational complexity theory, the field studying the resources needed (time, memory, coin flips) to solve computational problems. For you see, conjectures in complexity theory are atrociously hard to prove. They're some of the hardest across all of mathematics. But complexity theorists still want to advance their field, and so they have evolved an opportunistic methodology that accumulates evidence through alternative means (mostly based on proofs of other, simpler statements).

Let’s look at two example methods to get an idea:

(Approximation algorithms breaking) Imagine that we have a class of problems C that we conjecture cannot be solved with a certain amount of resources (say polynomial time). But we have approximation algorithms that can approximate the solutions to problems in C within the resource bounds. If we find that all the approximation algorithms we come up with mysteriously break down when we try to make the approximation more precise, this is evidence that problems in C are impossible to solve with these resources, because such a result is far easier to explain under the conjecture that C cannot be solved with these resources than under the opposite conjecture.[1]

(Consistency proofs) Imagine that there is a mathematical theory in which we can prove many of the key results in complexity theory. Then showing that this theory is consistent with our conjecture (that there exists a valid model of the theory where the conjecture is true) is a form of logical falsification: we have shown that our conjecture is coherent with much of the field's established results. Obviously, the more powerful the original theory is at rederiving complexity theory results, the more evidence comes from the conjecture surviving this falsification.[2]

With all this defense of methodological innovation, beware the subtle mistakes one can make in estimating the evidence coming from a new method. One classic example in complexity theory is to take "People have been trying to solve NP-complete problems quickly for 50 years and failed" as relatively strong evidence for P≠NP. Absence of evidence is evidence of absence, but in this case the evidence would only be significant if we knew one additional fact: that our current software engineers and computer scientists are actually good at finding efficient algorithms. Which, given our limitations in the formal sciences and the youth of computer science, is far from obvious.
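That last caveat can be made quantitative with a toy Bayesian sketch (all numbers made up for illustration): how much "50 years of failed attempts" moves us toward P≠NP depends entirely on how likely we would have been to find a fast algorithm if one existed.

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule: P(H | obs) from a prior on H and the likelihood
    of the observation under H and under not-H."""
    num = p_obs_given_h * prior
    return num / (num + p_obs_given_not_h * (1 - prior))

prior = 0.5          # agnostic prior on P != NP
p_fail_if_neq = 1.0  # if P != NP, attempts at a fast algorithm must fail

# If researchers would very likely have found an existing fast algorithm,
# 50 years of failure is strong evidence:
print(posterior(prior, p_fail_if_neq, 0.05))  # ~0.95

# If they would probably have missed it anyway, the same failure
# barely moves the needle:
print(posterior(prior, p_fail_if_neq, 0.8))   # ~0.56
```

The observation is identical in both runs; only the assumed competence of the searchers changes, and with it the strength of the evidence.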

So while viewing proofs as strong evidence empowers us, we should remember Yudkowsky's warning when discussing Einstein's success with a similar kind of methodological shortcut.

Einstein’s Speed, 2008, Eliezer Yudkowsky

[1] This is actually the case for P≠NP, the most important conjecture of complexity theory. See section 3 of Scott Aaronson's survey for more precise pointers.

[2] This is one reason to care about superpolynomial lower bounds for Extended Frege proof systems, as such bounds would show that P≠NP is consistent with Cook's PV theory, which rederives many key complexity theory results. See chapter 27 of this proof complexity book by Jan Krajíček for more precise pointers.