Justification Through Pragmatism

In this article, I propose a new method of justifying fundamental philosophical assumptions.

The fundamental assumptions on which we base our thinking can never be proven true, since any such proof must rely on our own thinking and is therefore circular. The proposed alternative is, rather than worrying about whether these assumptions are true at all, to demonstrate that regardless of their truth there is no conceivable benefit to negating them. This article demonstrates the point by way of a number of examples of such assumptions.


0. Basic Capacity for Reason

Assumption: That one has the most basic ability to understand anything correctly.
Negation: One has no ability to understand anything nor to make any rational decisions whatsoever. One may have a delusion of understanding, but it bears no correlation with truth.

I start at 0 because this is such a fundamental assumption that it's on a level below everything else. Descartes started by doubting even his own existence, but found a proof of his existence in the very doubt itself. However, he could have gone a step further and doubted his basic ability even to understand that proof. Sure, it seems a compelling argument that you must exist in order to doubt your own existence, but merely seeming compelling doesn't make it true. How do we know for sure that anything at all we think is true?

The fact is, we demonstrably don't. One visible property of the insane is that they often do not know they are insane, and no-one is able to tell them. Every thought we have really could be worthless, and its seeming correspondence with reality a delusion. But if so, well, there's nothing you can really do about it, is there? So why worry about it?

If every thought is meaningless, then it doesn't matter what you think. So you might as well assume that at least some of what you think makes sense. Thus the assumption that we have some basic capacity for reason may be justified pragmatically, without any concern for whether it is even true.

This assumption of basic capacity for reason is absolutely not to be confused with assuming every thought you have to be correct, nor even with assuming any particular thought or belief to be correct. By all means question your beliefs and your thought processes. Indeed, choosing not to do so is a good step towards failing to live up to this assumption in the first place. This assumption is simply that such questioning need not go on forever in an endless chain. At some point we have to just accept that at least some of our methods of basic reasoning actually work.


1. Better and Worse

Assumption: There exist experiences which are better and worse than other experiences.
Negation: Every possible experience is identical in merit. Nothing is better, more desirable, preferable or superior to anything else.

I say "experiences" to bring it down to the most fundamental interface, and avoid even implicitly assuming the existance of a real world.

If nothing is better or worse than anything else, in any way, then it fundamentally does not matter what we do. So there cannot be any harm in acting as if better and worse really do exist.

Of course this says nothing about what better and worse actually are, nor even how to go about figuring such a thing out. It's also possible that both the existence and nature of better and worse can be learned through experience, or even that they, or aspects of them, are fundamentally self-evident. So this assumption may or may not be either necessary or helpful, but it remains another good example of a pragmatically justifiable assumption.


2. Future, Causality, Free Will and Control

Assumption: That there are experiences to be had in the future, and that choices we make have some impact on those experiences.
Negation: Either there will not be any future experiences, or there will be but we have no control whatsoever over what they will be.

Yes, this is quite clearly about four assumptions rolled into one - as listed in the name. However, it's really quite difficult to talk about any of them on their own. It's hard to even define any of these without the ones earlier in the list, but each is also somewhat worthless without the next.

Without some sort of control over our future experiences, all of our choices and actions are meaningless. Could time suddenly come to an unexpected halt? Could we simply be riding along on some sort of experience-movie, under some sort of illusion that our minds control the ride? Sure, it's possible, but if we really do have no control over the future, it doesn't really matter what we do. So again, there can be no harm in assuming that we do have control, just in case it is true.

Again, whether this needs to be assumed or can be learned is a separate issue; the point is that there's no benefit to removing this assumption, so it might just as well be made regardless.


3. Sufficient Information

Assumption: That we are able to determine (using #0) which of the choices (#2) we make will lead to a better (#1) outcome.
Negation: That although we may have motivation and ability to control our future, we have no way to figure out what the right choice actually is.

To explain the need for this, consider the hypothetical universe of the left-handed god (this is not my own original idea, but I have no clue who I've stolen it from, so, well, sorry whoever you are). In this universe, people who have their left hand raised when they die (their left hand specifically: not their right hand, and not neither) go to heaven, and experience an eternity of peace and fulfilment. Everyone else is doomed to an eternity of torment. However, in this universe there is also talk of a right-handed god, a strikingly similar entity but somewhat reflected in nature. The trouble is, the living inhabitants of this universe have no means of determining which of these gods is the true figure. The world has a built-in symmetry about it, and no clue was given.

Such situations, or at least their smaller-scale approximations, can and do occur in life. In those situations, there's nothing for it but to pick a hand and move on. We might as well assume, though, that not every situation is like that, and identify the ones which are not and concentrate our efforts on them.

Again, note that this assumption is not that we have sufficient information about everything, only that we have sufficient information about something. Distinguishing left-handed gods from solvable dilemmas is still clearly a worthwhile task.


Overall, then, I have shown four examples of assumptions which may be taken for purely pragmatic reasons, regardless of the actual likelihood of their truth. It is my further view that these assumptions (and indeed possibly even just #0) are sufficient, in the sense that no other base assumptions are necessary. But that is a much longer story.

12 comments

Every thought we have really could be worthless, and its seeming correspondence with reality a delusion. But if so, well, there's nothing you can really do about it, is there? So why worry about it?

If you assume every thought is equally insane, or has an equal chance of being insane, then yes. If there are some thoughts that seem less insane than others (including the thought that some thoughts seem less insane than others), then you can get some traction and make useful decisions, even though every thought seems insane.

As a side note, many philosophical mistakes are made by thinking qualitatively instead of quantitatively. Do not ask "Is this thought insane?" Instead, ask "How insane is this thought?"

Irgy:

You say that as if it conflicts with what I've written, but it's exactly my point. As soon as you accept that any of your thoughts are anything less than completely insane, then you've already accepted that you have some basic capacity for reason, and therefore accepted the assumption I describe. The traction you then describe getting is exactly the traction I describe as gained by accepting this assumption.

You can certainly consider insanity on a scale from 0 to 1, but in that case this assumption is just that not all of your thoughts are equal to 1 on that scale. As such it's entirely a yes/no question, not a matter of degree.

Do not ask "Is this thought insane?" Instead, ask "How insane is this thought?"

I like it. Is that from anything?

Not that I know of, though I'm pretty sure the sequences influenced this thought both wrt the subject matter and wrt the way in which I phrased it.

Assumption: That one has the most basic ability to understand anything correctly.
Negation: One has no ability to understand anything nor to make any rational decisions whatsoever. One may have a delusion of understanding, but it bears no correlation with truth.

That isn't the negation.

¬∀x Px ≡ ∃x ¬Px,

¬∀x Px ≠ ∀x ¬Px.

Irgy:

That's the problem with language. I meant ∃x such that Px in the first place, for which the negation is indeed ∀x ¬Px.

Maybe the sentence is ambiguous. I don't mean "given anything, that one has the most basic ability to understand it", but "that there is anything that one has the most basic ability to understand". I would naturally read what I wrote the second way, but I included the negation specifically to make it clear that's what I meant in the first place.
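To spell out the two readings side by side in the same notation (glossing Px as "one has the most basic ability to understand x correctly"):

Intended reading ("there is something one can understand"): assumption ∃x Px, negation ∀x ¬Px.
Universal reading ("one can understand everything"): assumption ∀x Px, negation ∃x ¬Px.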

That seems like a very strange reading. Suppose I wrote "I can find the square root of any number" - does this really only mean that I know that 3*3 = 9?

Assuming that one has the most basic ability to understand anything [at all] correctly.

The things you're talking about are purely human designs of language. Nothing is objectively, universally "better" or "worse", because the terms themselves are relative and very subjective in their relativity. This is the case with many, many, many words. "Better" and "worse" aren't concrete empirical existences; they're terms used to gauge something's worth in some particular context, whether it be explicit or implicit:

Apple #1 tastes better than Apple #2. — Gauging apples on their taste. [explicit]

Dull knives are worse than sharp knives. — Gauging knives on their ability to cut things. [implicit]

No more do better and worse "exist" than meaning "exists". I think you may be mixing up two different perspectives—trying to justify the nominal with the phenomenal, if one can put it in those terms. The fact that there's no universal, objective standard for better/worse doesn't make something meaningless any more than the fact that there's no universal, objective standard for meaning, or for what matters. I guess I'm not sure where the pragmatism comes in.

(On another note, would it be sound, according to your logic, to say that since our belief in God fundamentally doesn't matter, we might as well believe in God just in case we get to go to heaven?)

Irgy:

I don't think I'm confused about this at all; I'm simply using the words in a much more specific way than you imagine. Yes, "better" and "worse" are subjective and relative, but I am using them specifically with respect to the subject of oneself, and specifically to compare entire future possible states of the world to other future possible states of the world. I mean "better" and "worse" in the sense often described as "preferable", with a sense of correctness to that preference as opposed to whim.

The question about God shows you've completely missed the point - which may be my fault as much as yours, but there it is. To answer the question then, which may well help: it is not true to say that it's impossible to make any progress or meaningful decisions without belief in God. The negation of "belief in God" does not prevent all possible progress, whereas the negation of each of the examples I present does. Pascal's Wager is also a form of pragmatism but is otherwise unrelated.

Number 1 seems the most successful/compelling subsection, number 2 somewhat so, number 0 not really. Number 3 depends on 0, 1, and 2, and so 0 is its weakest link.

Despite the appeal of only having to use one form to justify foundational arguments to get to higher ones, I think this shows a downside. You took a tool and used it perfectly well in one case (number 1), but using it just as well in other cases got different results.

I think this way of thinking is no more or less than a good tool in the thought toolbox, and you should think about distinguishing cases in which it works well from those in which it doesn't, and finding other tools that also work well at the level of justification that you seem to be most interested in, the one with as few assumptions as possible.

This means that any complete theory will need either a different way of thinking/assumptions/whatever that is more universal, or a combination of this type where it works plus a different type where that different type works.

[anonymous]:

There is a chasm of assumptions between "we can't (definitively) prove our fundamental assumptions" and "our fundamental assumptions don't matter." Read What is Evidence and the Map and Territory sequence.

[This comment is no longer endorsed by its author]