LessWrong has one of the strongest and most compelling presentations of a correspondence theory of truth on the internet, but as I said in A Pragmatic Epistemology, it has some deficiencies. This post delves into one example: its treatment of math and logic. First, though, I'll summarise the epistemology of the sequences (especially as presented in High Advanced Epistemology 101 for Beginners).

*Truth* is the correspondence between beliefs and reality, between the map and the territory.^{[1]} *Reality* is a causal fabric, a collection of variables ("stuff") that interact with each other.^{[2]} True beliefs mirror reality in some way. If I believe that most maps skew the relative size of Ellesmere Island, it's true when I compare accurate measurements of Ellesmere Island to accurate measurements of other places, and find that the differences aren't preserved in the scaling of most maps. That is an example of a *truth-condition*, which is a reality that the belief can correspond to. My belief about world maps is true when that scaling doesn't match up in reality. All *meaningful* beliefs have truth-conditions; they trace out paths in a causal fabric.^{[3]} Another way to define truth, then, is that a belief is true when it traces a path which is found in the causal fabric the believer inhabits.

Beliefs come in many forms. You can have beliefs about your experiences past, present and future; about what you ought to do; and, relevant to our purposes, about abstractions like mathematical objects. Mathematical statements are true when they are *truth-preserving*, or *valid*. They're also *conditional*: they're about all possible causal fabrics rather than any one in particular.^{[4]} That is, when you take a true mathematical statement and plug in any acceptable inputs,^{[5]} you will end up with a true conditional statement about the inputs. Let's illustrate this with the disjunctive syllogism:

((A∨B) ∧ ¬A) ⇒ B

Letting A be "All penguins ski in December" and B be "Martians have been decimated," this reads "If all penguins ski in December or Martians have been decimated, and some penguins don't ski in December, then Martians have been decimated." And if the hypothesis obtains (if it's true that (A∨B) ∧ ¬A), then the conclusion (B) is claimed to follow.^{[6]}
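To make the review concrete, validity can be checked mechanically: a propositional form is valid when it comes out true under every assignment of truth values to its variables. A minimal sketch in Python (the function names are mine, not from the sequences):

```python
from itertools import product

def implies(p, q):
    # Material conditional: p => q fails only when p is true and q is false.
    return (not p) or q

def disjunctive_syllogism(a, b):
    # ((A or B) and not A) => B
    return implies((a or b) and (not a), b)

# "Valid" means: true under every assignment to the propositional variables.
assert all(disjunctive_syllogism(a, b) for a, b in product([True, False], repeat=2))
```

The exhaustive check over assignments is exactly the truth-table method; for forms with n variables it inspects all 2^n cases.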

That's it for review, now for the substance.

*Summary.* First, from examining the truth-conditions of beliefs about validity, we see that our sense of what is obvious plays a suspicious role in which statements we consider valid. Second, a major failure mode in following obviousness is that we sacrifice other goals by separating the pursuit of truth from other pursuits. This elevation of the truth via the epistemic/instrumental rationality distinction prevents us from seeing it as one instrumental goal among many which may sometimes be irrelevant.

What are the truth-conditions of a belief that a certain logical form is valid or not?

A property of valid statements is that you can plug any proposition you like into the propositional variables of the statement without disturbing the outcome (the conditional statement will still be true). Literally *any* proposition; valid forms are about *everything* that can be articulated by means of propositions. So part of the truth-conditions of a belief about validity is that if a sentence is valid, everything is a model of it. In that case, causal fabrics, which we investigate by means of propositions,^{[7]} can't help but be constrained by what is logically valid. We would never expect to see some universe where inputting propositions into the disjunctive syllogism can output *false* without being in error. Call this the logical law view. This suggests that we could check a bunch of inputs and universe constructions until we feel satisfied that the sentence will not fail to output *true*.

It happens that sentences which people agree are valid are usually sentences that people agree are obviously true. There is something about the structure of our thought that makes us very willing to accept their validity. Perhaps you might say that because reality is constrained by valid sentences, sapient chunks of reality are going to be predisposed to recognising validity ...

But what separates that hypothesis from this alternative: "valid sentences are rules that have been applied successfully in many cases so far"? That is, after all, the very process that we use to check the truth-conditions of our beliefs about validity. We consider hypothetical universes and we apply the rules in reasoning. Why should we go further and claim that all possible realities are constrained by these rules? In the end we are very dependent on our intuitions about what is obvious, which might just as well be due to flaws in our thought as to logical laws. And our insistence on correctness is no excuse. In that regard we may be no different from certain ants that mistake living members of the colony for dead when their bodies are covered in a certain pheromone:^{[8]} prone to a reaction that is just as obviously *astray* to other minds as it is obviously *right* to us.

In light of that, I see no reason to be confident that we can distinguish between success in our limited applications and necessary constraint on all possible causal fabrics.

And despite what I said about "success so far," there are clear cases where sticking to our strong intuition to take the logical law view leads us astray on goals apart from truth-seeking. I give two examples where obsessive focus on truth-seeking consumes valuable resources that could be used toward a host of other worthy goals.

*The Law of Non-Contradiction*. This law is probably the most obvious thing in the world. A proposition can't be true and false, or ¬(P ∧ ¬P). If it were both, then you would have a model of any proposition you could dream of. This is an extremely scary prospect if you hold the logical law view; it means that if you have a true contradiction, reality doesn't have to make sense. Causality and your expectations are meaningless. That is the *principle of explosion*: (P ∧ ¬P) ⇒ Q, for arbitrary Q. Suppose that pink is my favourite colour, and that it isn't. Then pink is my favourite colour or causality is meaningless. Except pink isn't my favourite colour, so causality is meaningless. Except it is, because either pink is my favourite colour or causality is meaningful, but pink isn't. Therefore pixies, by a similar argument.
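Classically, the explosion schema really is a tautology, for the vacuous reason that its antecedent can never be satisfied. A quick truth-table check (a sketch; the encoding is mine):

```python
from itertools import product

def implies(p, q):
    # Material conditional.
    return (not p) or q

def explosion(p, q):
    # (P and not P) => Q: the antecedent P ∧ ¬P is false under every
    # classical assignment, so the conditional is vacuously true for any Q.
    return implies(p and (not p), q)

assert all(explosion(p, q) for p, q in product([True, False], repeat=2))
```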

Is (P ∧ ¬P) ⇒ Q valid? Most people think it is. If you hypnotised me into forgetting that I find that sort of question suspect, I would agree. I can *feel* the pull toward assenting to its validity. If ¬(P ∧ ¬P) is true it would be hard to say why not. But there are nonetheless very good reasons for ditching the law of non-contradiction and the principle of explosion. Despite its intuitive truth and general obviousness, the law is extremely inconvenient. Proving the consistency of systems like PA and ZFC, which are central to mathematics, has proved very difficult. But of course part of the motivation is that if there were an inconsistency, the principle of explosion would render the entire system useless. This undesirable effect has led some to develop *paraconsistent logics* which do not explode with the discovery of a contradiction.
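To see how a paraconsistent system contains a contradiction without exploding, here is a sketch of one such system, Priest's three-valued Logic of Paradox (LP), where a proposition can be both true and false (value B) and an inference is valid only when designated premises guarantee a designated conclusion. Under that standard, explosion has a counterexample:

```python
# Truth values of Priest's Logic of Paradox (LP), ordered F < B < T.
# T ("true") and B ("both true and false") are the designated values.
F, B, T = 0, 1, 2
DESIGNATED = {B, T}

def neg(p):
    return 2 - p          # ¬T = F, ¬B = B, ¬F = T

def conj(p, q):
    return min(p, q)

# Counterexample to explosion: let P be a "true contradiction" (B)
# and Q be plainly false.
P, Q = B, F
premise = conj(P, neg(P))      # P ∧ ¬P evaluates to B
assert premise in DESIGNATED   # the contradictory premise is designated...
assert Q not in DESIGNATED     # ...yet the arbitrary conclusion is not
```

So a contradiction can sit in the system as a local oddity rather than infecting every other sentence.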

Setting aside whether the law of non-contradiction is really truly true and the principle of explosion really truly valid, wouldn't we be better off with foundational systems that don't buckle over and die at the merest whiff of a contradiction? In any case, it would be nice to alter the debate so that the *truth* of these statements didn't eclipse their *utility* toward other goals.

*The Law of Excluded Middle*. P∨¬P: if a proposition isn't true, then it's false; if it isn't false, then it's true. In terms of the LessWrong epistemology, this means that a proposition either obtains in the causal fabric you're embedded in, or it doesn't. Like the previous example this has a strong intuitive pull. If that pull is correct, all sentences Q ⇒ (P∨¬P) must be valid since everything models true sentences. And yet, though doubting it can seem ridiculous, and though I would not doubt it on its own terms^{[9]}, there are very good reasons for using systems where it doesn't hold.
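One way to make the doubt vivid without arguing over intuitions: in strong Kleene three-valued logic (K3), where a proposition may be undetermined, P ∨ ¬P is no longer a tautology. (K3 is not intuitionistic logic, but it gives the same flavour of a respectable system where the law simply doesn't hold.) A sketch:

```python
# Strong Kleene logic (K3): values F < U < T, with only T designated.
F, U, T = 0, 1, 2

def neg(p):
    return 2 - p          # ¬T = F, ¬U = U, ¬F = T

def disj(p, q):
    return max(p, q)

def excluded_middle(p):
    return disj(p, neg(p))    # P ∨ ¬P

assert excluded_middle(T) == T
assert excluded_middle(F) == T
assert excluded_middle(U) == U   # undetermined, not true: the law fails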

The use of the law of excluded middle in proofs severely inhibits the construction of programmes based on proofs. The barrier is that the law is used in *existence proofs*, which show that some mathematical object must exist but give no method of constructing it.^{[10]}

Removing the law, on the other hand, gives us *intuitionistic logic*. Via a mapping called the Curry-Howard isomorphism all proofs in intuitionistic logic are translatable into programmes in the lambda calculus, and vice versa. The lambda calculus itself, assuming the Church-Turing thesis, gives us all effectively computable functions. This creates a deep connection between proof theory in constructive mathematics and computability theory, facilitating automatic theorem proving and proof verification and rendering everything we do more computationally tractable.
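A small illustration of the proofs-as-programmes idea (in Python rather than the lambda calculus proper; the function names are mine): under Curry-Howard, an intuitionistic proof of A ∧ B ⇒ B ∧ A is literally a programme that swaps a pair, and a proof of (A ⇒ B) ⇒ ((B ⇒ C) ⇒ (A ⇒ C)) is function composition.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def swap(p: Tuple[A, B]) -> Tuple[B, A]:
    # Proof of A ∧ B ⇒ B ∧ A: given evidence for both conjuncts,
    # produce evidence for both in the other order.
    a, b = p
    return (b, a)

def compose(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    # Proof of (A ⇒ B) ⇒ ((B ⇒ C) ⇒ (A ⇒ C)): chain the two ways
    # of transforming evidence.
    return lambda g: lambda a: g(f(a))

assert swap((1, "x")) == ("x", 1)
assert compose(lambda n: n + 1)(lambda n: n * 2)(3) == 8   # (3 + 1) * 2
```

The types play the role of propositions and the terms inhabiting them play the role of proofs; a type with no inhabitants corresponds to an unprovable proposition.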

Even if the above weren't tempting and we decided not to restrict ourselves to constructive proofs, we would still be stuck with intuitionistic logic. Just as classical logic is associated with Boolean algebras, intuitionistic logic is associated with Heyting algebras. And it happens that the open set lattice of a topological space is a complete Heyting algebra even in classical topology.^{[11]} This is closely related to topos theory; the internal logic of a topos is at least^{[12]} intuitionistic. As I understand it, many topoi can be considered as foundations for mathematics,^{[13]} and so again we see a classical theory pointing at constructivism suggestively. The moral of the story: even in classical mathematics, where the law of excluded middle holds, objects in which it fails arise naturally.

Work in the foundations of mathematics suggests that constructive mathematics is *at least* worth looking into, setting aside whether the law of excluded middle is too obvious to doubt. Letting its truth hold us back from investigating the merits of living without it cripples the capabilities of our mathematical projects.

Unfortunately, not all constructivists or dialetheists (as proponents of paraconsistent logic are called) would agree with how I framed the situation. I have blamed the tendency to stick to discussions of truth for our inability to move forward in both cases, but they might blame the inability of their opponents to see that the laws in question are false. They might urge that if we take the success of these laws as evidence of their truth, then failures or shortcomings should be evidence against them, and we should simply revise our views accordingly.

That is how the problem looks when we wear our epistemic rationality cap and focus on the truth of sentences: we consider which experiences could tip us off about which rules govern causal fabrics, and we organise our beliefs about causal fabrics around them.

This framing of the problem is counterproductive. So long as we are discussing these abstract principles under the constraints of our own minds,^{[14]} I will find any discussion of their truth or falsity highly suspect for the reasons highlighted above. And beyond that, the psychological pull toward the respective positions is too forceful for this mode of debate to make progress on reasonable timescales. In the interests of *actually achieving some of our goals* I favour dropping that debate entirely.

Instead, we should put on our instrumental rationality cap and consider whether these concepts are working for us. We should think hard about what we want to achieve with our mathematical systems and tailor them to perform better in that regard. We should recognise when a path is moot and trace a different one.

When we wear our instrumental rationality cap, mathematical systems are not attempts at creating images of reality that we can use for other things if we like. They are tools that we use to achieve potentially any goal, and potentially none. If after careful consideration we decide that creating images of reality is a fruitful goal relative to the other goals we can think of for our systems, fine. But that should by no means be the default, and if it weren't mathematics would be headed elsewhere.

*ADDENDUM*

*[Added due to expressions of confusion in the comments. I have also altered the original conclusion above.]*

I gave two broad weaknesses in the LessWrong epistemology with respect to math.

The first concerned its ontological commitments. Thinking of validity as a property of logical laws constraining causal fabrics is indistinguishable for practical purposes from thinking of validity as a property of sentences relative to some axioms or according to strong intuition. Since our formulation and use of these sentences have been confined to familiar conditions, and since it is very difficult (perhaps impossible) to determine whether their psychological weight is a bias, inferring that any of them are logical laws above and beyond their usefulness as tools is spurious.

The second concerned cases where the logical law view can hold us back from achieving goals other than discovering true things. The law of non-contradiction and the law of excluded middle are as old as they are obvious, yet they prevent us from strengthening our mathematical systems and making their use considerably easier.

One diagnosis of this problem might be that sometimes it's best to set our epistemology aside in the interests of practical pursuits, that sometimes our epistemology isn't relevant to our goals. Under this diagnosis, we can take the LessWrong epistemology literally and believe it is true, but temporarily ignore it in order to solve certain problems. This is a step forward, but I would make a stronger diagnosis: we should have a background epistemology guided by instrumental reason, in which the epistemology of LessWrong and epistemic reason are tools that we can use *if* *we find them convenient*, but which we are *not committed to taking literally*.

I prescribe an epistemology that a) sees theories as no different from hammers, b) doesn't take the content of theories literally, and c) lets instrumental reason guide the decision of which theory to adopt when. I claim that this is the best framework to use for achieving our goals, and I call this a pragmatic epistemology.

---

[1] See The Useful Idea of Truth.

[2] See The Fabric of Real Things and Stuff that Makes Stuff Happen.

[3] See The Useful Idea of Truth and The Fabric of Real Things.

[4] See Proofs, Implications, and Models and Logical Pinpointing.

[5] Acceptable inputs being given by the universe of discourse (also known as the universe or the domain of discourse), which is discussed on any text covering the semantics of classical logic, or classical model theory in general.

[6] A visual example using modus ponens and cute cuddly kittens is found in Proofs, Implications, and Models.

[7] See The Useful Idea of Truth.

[8] See this paper by biologist E O Wilson.

[9] What I mean is that I would not claim that it "isn't true," which usually makes the debate stagnate.

[10] For concreteness, read these examples of non-constructive proofs.

[11] See here, paragraph two.

[12] Given certain further restrictions, a topos is Boolean and its internal logic is classical.

[13] This is an amusing and vague-as-advertised summary by John Baez.

[14] Communication with very different agents *might* be a way to circumvent this. Receiving advice from an AI, for instance. Still, I have reasons to find this fishy as well, which I will explore in later posts.

Eliezer presents a strong defence of the correspondence theory? Well, for some values of "strong". He puts forward the best possible example of correspondence working clearly and obviously, and leaves the reader with the impression that it works as well in all cases. In fact, the CToT is not universally accepted, because there are a number of cases where it is hard to apply. One of them is maths'n'logic, the ostensible subject of your posting.

I would have thought that the outstanding problem with the correspondence theory of truth in relation to maths is: what do true mathematical statements correspond to? I.e., what is the ontology of maths? You seem to offer only the two sentences:

"Mathematical statements are true when they are truth-preserving, or valid. They're also conditional: they're about all possible causal fabrics rather than any one in particular." The first is rather vague. Truth preservation is a property of connected chains of statements. Such arguments are valid when and only when they are truth-preserving, because that is how validity is defined in this context. The conclusion of a mathematical argument is true when its premises are true AND when it is valid (AKA truth-preserving). Validity is not a vague synonym for truth: the extra condition about the truth of the premises is important.

Are mathematical statements about all possible causal fabrics? Is "causal fabric" a meaningful term? Choose one.

If a causal fabric is a particular kind of mathematical structure, a directed acyclic graph, for instance, then it isn't the only possible topic of mathematics, it's too narrow a territory, ...maths can be about cyclic or undirected graphs, for instance.

On the other hand, if the phrase "causal fabric" doesn't constrain anything, then what is the territory... what does it do... and how do you tell it is there? Under the standard correspondence theory of truth as applied to empirical claims, a claim is true if a corresponding piece of territory exists, and false if it doesn't. But how can a piece of the mathematical territory go missing?

Mathematical statements are proven by presenting deductions from premises, ultimately from intuitively appealing axioms. We can speak of the set of proven and provable theorems as a territory, but that establishes only a superficial resemblance to correspondence: examined in detail, the mathematical map-territory relationship works in reverse. It is the existence of a lump of physical territory that proves the truth of the corresponding empirical claim; whereas the truth of a mathematical claim is proved non-empirically, and the idea that it corresponds to some lump of metaphorical mathspace is conjured up subsequently.

Is that too dismissive of mathematical realism? The realist can insist that theorems aren't true unless they correspond to something in Platonia... even if a proof has been published and accepted. But the idea that mathematicians, despite all their efforts, are essentially in the dark about mathematical truth is quite a bullet to bite. The realist can respond that mathematicians are guided by some sort of contact between their brains and the non-physical realm of Platonia, but that is not a claim a physicalist should subscribe to.

So, the intended conclusion is that no mathematical statement is made true by the territory, because there is no suitable territory to do so. Of course, the Law of the Excluded Middle and the Principle of Non-Contradiction are true in the systems that employ them, because they are axioms of the systems that employ them, and axioms are true by stipulation.

I agree with the examples you present to the effect that we need to pick and choose between logical systems according to the domain. I disagree with the conclusion that an abandonment of truth is necessary...or possible.

To assert P is equivalent to asserting "P is true" (the deflationary theory in reverse). That is still true if P is of the form "so and so works". Pragmatism is not orthogonal to, or transcendent of, truth. Pragmatists need to be concerned about what truly works.

Two people might disagree because they are running on the same epistemology, but have a different impression of the evidence applying within that epistemology. Or they might disagree about the epistemology itself. That can still apply where they are disagreeing about what works. So adopting pragmatism doesn't make object-level concerns about truth vanish, and it doesn't make meta-level concerns, epistemology, vanish either.

Mathematical theorems aren't true univocally, by correspondence to a single territory, but they are true by stipulation, where they can be proven. Univocal truth is wrong, and pragmatism, as an alternative to truth, is wrong. What is right is contextual truth.

Everyone finds the PoNC persuasive, yet many people believe contradictory things...in a sense. What sense?

Consider:

A. Sherlock Holmes lives at 221b Baker Street.

B. Sherlock Holmes never lived, he's a fictional character.

Most people would regard both of them as true ... in different contexts, the fictional and the real life. But someone who believed two contradictory propositions in the same context really would be irrational.

That was a wonderful comment. I hope you don't mind if I focus on the last part in particular. If you'd rather I addressed more I can accommodate that, although most of that will be signalling agreement.

I'll note a few things in reply to this:

…feeling or a proclivity than a proposition, until of course an agent develops a model of what works and why.

I agree with you here absolutely, modulo vocabulary. I would rather say that no single framework is universally appropriate (problem of induction) and that developing different tools for different contexts is shrewd. But what I just said is more of a model inspired by my epistemology than part of the epistemology itself.

Analysing P as "P is true" isn't some peculiarity of mine: in less formal terms, to assert something is to assert it as true. To put forward claims, and persuade others that they should believe them, is to play a truth game... truth is what one should believe.

So your epistemology can't dispense with truth, but offers no analysis of truth. How useful is that?

Tabooing truth, or tabooing "truth"? It is almost always possible to stop using a word, but continue referring to the concept by synonymous words or phrases. Doing without the concept is harder... doing without the use, the employment, is harder still.

Nothing works just because someone feels it does. The truth of something truly working is given by the territory.

Contextual truth is compatible with no truth?

I cannot form any idea of what your viewpoint is, although I'm familiar with most of the logical topics you referenced. You always stop short of stating it, all the way up to the last sentence of the last footnote.

More generally, and this is addressed to everyone writing on a complex subject to an audience of diverse and unknown backgrounds, try writing your material backwards. Start at the end, the conclusion, and work back from that to the reasons for the conclusion, and the reasons for the reasons, and so on. Stop before you think you should and continue clarification in the comments as the need is revealed by the questions.

I've added an addendum that I hope will make things clearer.

The cause of me believing math is not "it's true in every possible case", because I can't directly observe that. Nor is it "have been applied successfully in many cases so far".

Basically it's "maths says it's true", where maths is an interlocking system of many subsystems. MANY of these have been applied successfully in many cases so far. Many of them render considering them not true pointless, in the sense that all my reasoning and senses are invalid if they don't hold, so I might as well give up and save computing time by conditioning on them being true. Some of them are implicit in every single frame of my input stream. Many of them are used by my cognition, and if I consistently didn't condition on them being true I'd have been unable to read your post or write this reply. Many of them are directly implemented in physical systems around me, which would cease to function if they failed to hold in even one of the billions and billions of uses. Most importantly, many of them claim that several of the others must always be true or they themselves are not, and while Gödelian stuff means this can't QUITE form a perfect loop in the strongest sense, the fact remains that if any of them fell ALL the others would follow like a house of cards; you can't have one of them without ALL the others.

You might try to imagine a universe without math. And there are some pieces of math that might be isolated and in some sense work without the others. But there is a HUGE core of things that can't work without each other, nor without all those outlying pieces, at all, even slightly. So your universe couldn't have geometry, computation, discrete objects that can be moved between "piles", anything resembling fluid dynamics, etc. Not much of a universe, nor much sensical imaginability, AND it would by necessity be possible to simulate it in a universe that does have all the maths, so in some sense it still wouldn't be "breaking" the laws.

I call these sorts of models *sticky*, in the sense that they are pervasive in our perception and categorisation. Sticky categories are the sort of thing that we have a hard time not taking literally. I haven't gone into any of this yet, of course, but I like it when comments anticipate ideas and continue trains of thought.

Maybe a short run-long run model would be good to illustrate this stickiness. In the short run, perception is fixed; this also fixes certain categories, and the "degree of stickiness" that different categories have. For example, "chair" is remarkably hard to get rid of, whereas "corpuscle" isn't quite as sticky. In the long run, when perception is free, no category needs to be sticky. At least, not unless we come up with a more restrictive model of possible perceptions. I don't think that such a restrictive model would be appropriate in a background epistemology. That's something that agents will develop for themselves based on their needs and perceptual experience.

Different mathematical models of human perceptual experience might be perfectly suitable for the same purpose. Physics should be the clearest example, since we have undergone many different changes of mathematical models, and are currently experiencing a plurality of theories with different mathematics in cosmology. The differences between classical mechanics and quantum mechanics should in particular show this nicely: different formalisms, but very good models of a large class of experiences.

I think you slightly underestimate the versatility of mathematicians in making their systems work despite malfunctions. For instance, even if ZFC were proved inconsistent (as Edward Nelson hopes to do), we would not *have* to abandon it as a foundation. Set theorists would just do some hocus pocus involving ordinals, and voila! all would be well. And there are several alternative formulations of arithmetic, analysis, topology, etc. which are all adequate for most purposes.

In the case of some math, this is easy to do. In other cases it is not. This is because we don't experience the freefloating perceptual long term, not because certain models are necessary for all possible agents and perceptual content.

I don't mean just sticky models. The concepts I'm talking about are things like "probability", "truth", "goal", "if-then", "persistent objects", etc. Believing that a theory is true that says "true" is not a thing theories can be is obviously silly. Believing that there is no such thing as decision-making and that you're a fraction of a second old and will cease to be within another fraction of a second might be philosophically more defensible, but conditioning on it not being true can never have bad consequences while it has a chance of having good ones.

I was talking about physical systems, not physical laws. Computers, living cells, atoms, the fluid dynamics of the air... "Applied successfully in many cases", where "many" is "billions of times every second".

Then ZFC is not one of those core ones, just one of the peripheral ones. I'm talking about ones like set theory as a whole, or arithmetic, or Turing machines.

Oh okay. This is a two-part misunderstanding.

I'm not saying that theories can't be true, I'm just not talking about this truth thing in my meta-model. I'm perfectly a-okay with models of truth popping up wherever they might be handy, but I want to taboo the intuitive notion and refuse to explicate it. Instead I'll rely on other concepts to do much of the work we give to truth, and see what happens. And if there's work that they can't do, I want to evaluate whether it's important to include in the meta-model or not.

I'm also not saying that my theory is true. At least, not when I'm talking from within the theory. Perhaps I'll find certain facets of the correspondence theory useful for explaining things or convincing others, in which case I might claim it's true. My epistemology is just as much a model as anything else, of course; I'm developing it with certain goals in mind.

The math we use to model computation is a model and a tool just as much as computers are tools; there's nothing weird (at least from my point of view) about models being used to construct other tools. Living cells can be modeled successfully with math, you're right; but that again is just a model. And atoms are definitely theoretical constructs used to model experiences, the persuasive images of balls or clouds they conjure notwithstanding. Something similar can be said about fluid dynamics.

I don't mean any of this to belittle models, of course, or make them seem whimsical. Models are worth taking *seriously*, even if I don't think they should be taken *literally*.

The best example of the three is definitely arithmetic; the other two aren't convincing. Math was done without set theory for ages, and besides we have other foundations available for modern math that can be formulated entirely without talking about sets. Turing machines can be replaced with logical systems like the lambda calculus, or with other machine models like register machines.

Arithmetic is more compelling, because it's very sticky. It's hard not to take it literally, and it's hard to imagine things without it. This is because some of the ideas it constitutes are at the core cluster of our categories, i.e. they're very sticky. But could you imagine that *some agent* might a) have goals that never require arithmetical concepts, and b) that there could be models that are non-arithmetical that could be used toward some of the same goals for which we use arithmetic? I can imagine ... visualise, actually, both, although I would have a very hard time translating my visual into text without going very meta first, or else writing a ridiculously long post.

Hmm, maybe I need to reveal my epistemology another step towards the bottom. Two things seem relevant here.

I think you SHOULD take your best model literally if you live in a human brain, since it can never get completely stuck requiring infinite evidence due to its architecture, but it does have limited computation, and doubt can both confuse it and damage motivation. The few downsides there are can be fixed with injunctions and heuristics.

Secondly, you seem to be going with fuzzy intuitions or direct sensory experience as the most fundamental. At my core is instead that I care about stuff, and that my output might determine that stuff. The FIRST thing that happens is conditioning on that my decisions matter, and then I start updating on the input stream of a particular instance/implementation of myself. My working definition of "real" is "stuff I might care about".

My point wasn't that the physical systems can be modeled BY math, but that they themselves model math. Further, that if the math wasn't True, then it wouldn't be able to model the physical systems.

With the math systems as well you seem to be coming from the opposite direction. Set theory is a formal system; arithmetic can model it using Gödel numbering, and you can't prevent that or have it give different results without breaking arithmetic entirely. Likewise, set theory can model arithmetic. It's a package deal. Lambda calculus and register machines are also members of that list of mutual modelling. I think even basic geometry can be made sort of Turing complete somehow. Any implementation of any of them must by necessity model all of them, exactly as they are.
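One toy instance of the mutual modelling described above: plain arithmetic can already encode pairs (and hence sequences, and hence syntax) via Cantor's pairing function, which is the first step in Gödel numbering. A sketch:

```python
import math

def pair(a: int, b: int) -> int:
    # Cantor pairing: a bijection between N×N and N, definable in arithmetic.
    return (a + b) * (a + b + 1) // 2 + b

def unpair(z: int) -> tuple:
    # Invert the pairing by recovering the diagonal index w = a + b first.
    w = (math.isqrt(8 * z + 1) - 1) // 2
    b = z - w * (w + 1) // 2
    return (w - b, b)

# Round-trips exactly, so arithmetic faithfully models pairs of numbers.
assert all(unpair(pair(a, b)) == (a, b) for a in range(20) for b in range(20))
```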

You can model an agent that doesn't need the concepts, but it must be a very simple agent with very simple goals in a very simple environment. Too simple to be recognizable as agentlike by humans.

Could anybody provide a concise summary of the ideas mentioned above? I am trying to read everything but it is unclear to me what point the author is making, and how he/she arrives at that conclusion.

I've added an addendum. If reading that doesn't help, let me know and I'll summarise it for you in another way.

What are cases where deciding to model a problem using intuitionistic logic has high utility, in the sense that it produces a model that makes good real-world predictions?

Intuitionistic logic can be interpreted as the logic of finite verification.

Truth in intuitionistic logic is just provability. If you assert A, it means you have a proof of A. If you assert ¬A, then you have a proof that A implies a contradiction. If you assert A ⇒ B, then you can produce a proof of B from A. If you assert A ∨ B, then you have a proof of at least one of A or B. Note that the law of excluded middle fails here because we aren't allowed to assert A ∨ ¬A when we have neither a proof of A nor a proof that A implies a contradiction.

In all cases, the assertion of a formula must correspond to a proof, proofs being (of course) finite. Using this idea of finite verification is a nice way to develop topology for computer science and formal epistemology (see Topology via Logic by Steven Vickers). Computer science is concerned with verification as proofs and programmes (and the Curry-Howard isomorphism comes in handy there), and formal epistemology is concerned with verification as observations and scientific modeling.
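(A toy illustration of the Curry-Howard idea, using the disjunctive syllogism from the original post: a proof of ((A ∨ B) ∧ ¬A) ⇒ B corresponds to a program that, given a value of the sum type A-or-B plus a refutation of A, produces a B. This is a sketch in Python, with tagged classes standing in for disjunction; the names are mine, not from any library:)

```python
from dataclasses import dataclass
from typing import Callable, Generic, NoReturn, TypeVar, Union

A = TypeVar("A")
B = TypeVar("B")

@dataclass
class Left(Generic[A]):
    value: A

@dataclass
class Right(Generic[B]):
    value: B

# ¬A is modeled as a function from A into the empty type: it can never be
# called on an actual A, so possessing one amounts to a refutation of A.
def disjunctive_syllogism(a_or_b: Union[Left[A], Right[B]],
                          not_a: Callable[[A], NoReturn]) -> B:
    """A constructive proof of ((A ∨ B) ∧ ¬A) ⇒ B: case-split on the
    disjunction; the Left branch is impossible given the refutation."""
    if isinstance(a_or_b, Left):
        return not_a(a_or_b.value)  # unreachable if not_a is a real refutation
    return a_or_b.value

def refute(a) -> NoReturn:
    raise AssertionError("the left disjunct was refuted")

print(disjunctive_syllogism(Right("1 is odd"), refute))  # 1 is odd
```

The program is finite and checkable, which is exactly the "finite verification" reading: asserting the formula means exhibiting this object.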

That isn't exactly a specific example, but a class of examples. Research on this is currently very active.

I don't care about how it can be interpreted but whether it's useful. I asked for a practical example: something useful for guiding real-world actions. Maybe an application in biology or physics.

If you want to be "pragmatic" then it makes sense to look at whether your philosophy actually is applicable to real world problems.

I think there's a bit of a misunderstanding going on here, though, because I am perfectly okay with people using classical logic if they like. Classical logic is a great way to model circuits, for example, and it provides some nice reasoning heuristics. There's nothing in my position that commits us to abandoning it entirely in favour of intuitionistic logic.

Intuitionistic logic is applicable to at least three real-world problems: formulating foundations for math, verifying programmes, and computerised theorem-proving. The last two in particular will have applications in everything from climate modeling to population genetics to quantum field theory.

As it happens, mathematician Andrej Bauer wrote a much better defence of doing physics with intuitionistic logic than I could have: http://math.andrej.com/2008/08/13/intuitionistic-mathematics-for-physics/

If one wants to understand an abstract principle it's very useful to illustrate the principle with concrete practical examples.

I don't think that anyone on LW thinks that we shouldn't have a few constructivist mathematicians around to do their job and produce a few proofs that advance mathematics. I don't really care how the proofs of the math I use are derived, provided I can trust them.

If you call for a core change in epistemology, it sounds like you want more than that. To me it's not clear what that more happens to be. In case you don't know, the local LW definition of rationality is: "Behaving in a way that's likely to make you win."

I'm going to have to do some strategic review on what exactly I'm not being clear about and what I need to say to make it clear.

Yes, I share that definition, but that's only the LW definition of instrumental rationality; epistemic rationality on the other hand is making your map more accurately reflect the territory. Part of what I want is to scrap that and judge epistemic matters instrumentally, like I said in the conclusion and addendum.

Still, it's clear I haven't said quite enough. You mentioned examples, and that's kind of what this post was intended to be: an example of applying the sort of reasoning I want to a problem, and contrasting it with epistemic rationality reasoning.

Part of the problem with generating a whole bunch of specific examples is that it wouldn't help illustrate the change much. I'm not saying that science as it's practised in general needs to be radically changed. Mostly things would continue as normal, with a few exceptions (like theoretical physics, but I'm going to have to let that particular example stew for a while before I voice it outside of private discussions).

The main target of my change is the way we conceptualise science. Lots of epistemological work focuses on idealised caricatures that are too prescriptive and poorly reflect how we managed to achieve what we did in science. And I think that having a better philosophy of science will make thinking about some problems in existential risk, particularly FAI, easier.

Applying it to what problem? (If you mean the physics posts you linked to, I need more time to digest it fully)

Nobody actually conceptualises science as being about deriving from thinking "pink is my favorite color and it isn't" -> "causality doesn't work".

Then pick one of those caricatures and analyse in detail how your epistemology leads to different thinking about the issue.

Yes, obviously having a better philosophy of science would be good.

No, not that comment, I mean the initial post. The problem is handling mathematical systems in an epistemology. A lot of epistemologies have a hard time with that because of ontological issues.

No, but many people hold the view that you can talk about valid statements as constraining ontological possibilities, including the Eliezer of 2012. If you read the High Advanced Epistemology posts on math, he does reason about particular logical laws constraining how the physics of time and space work in our universe. And the view is very old, going back to before Aristotle and running through Leibniz to the present.

Handling mathematical systems in an epistemology is, in my view, a topic rather than a specific problem. Within that topic there are probably a bunch of practical problems.

Molecular biology is a subject. Predicting protein folding results is a specific problem.

If we look at FAI, writing a bot that performs well in prisoner's dilemma tournaments where bots can verify each other's source code is a specific problem.
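(One of the simplest entries in such a tournament is a "clique bot" that cooperates exactly when the opponent's source matches its own. A toy sketch, assuming the tournament hands each bot the opponent's source as a string; the names here are illustrative, not an actual tournament API:)

```python
def clique_bot(own_source: str, opponent_source: str) -> str:
    """Cooperate ('C') exactly when the opponent's source is textually
    identical to our own; defect ('D') otherwise. Two copies of this bot
    therefore cooperate with each other while defecting against strangers.
    Real entries would prove properties of the opponent's code instead of
    demanding textual equality."""
    return "C" if opponent_source == own_source else "D"

CLIQUE = "clique_bot_v1_source"  # stand-in for the bot's full source text
DEFECTOR = "defect_bot_source"

print(clique_bot(CLIQUE, CLIQUE))    # C  (mirror match)
print(clique_bot(CLIQUE, DEFECTOR))  # D
```

The interesting research question is how to relax the equality check into a proof search over the opponent's code without becoming exploitable, which is where this touches FAI-style problems.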

There are also problems in the daily business of doing science where the scientist has to decide between multiple possible courses of action.