Seeking the truth is a form of motivated cognition. Put another way, truth is a teleological concept. If that's intuitive you can stop reading this post. Otherwise, press on.

What is truth? Like, for real, actually, please define it. Try this for yourself. Here's some spoiler space before we go on.













Got a definition? Good.

There are a few common ways of defining truth. Yours probably either is or fits within one or more of these paradigms. The following theories are not strictly exclusive of one another (there's overlap).

  • Naive: Truth is that which simply is.
  • Pragmatic: Beliefs are true if they work, i.e. if holding them proves useful in practice.
  • Subtractive: Truth is that which remains when you stop believing in it.
  • Ideal: True facts are ideal forms that exist independent of their physical realization.
  • Correspondence: Beliefs are true if they accurately describe the world.
  • Predictive: Beliefs are true if they predict what we'll observe.
  • Coherence: Facts are true if and only if they form a consistent formal system.
  • Constructive: Truth is created by social processes, such as science.
  • Consensus: Truth is that which everyone agrees is true.
  • Performative: Truth is not a property of the world, but a property of speech used to indicate agreement.
  • Redundant: Claiming that something is true doesn't add anything, it's just a way of emphasizing the assertion of a claim.
  • Pluralist: Truth means different things in different situations, so some combination of other theories explains how we use the concept of truth.

There are more, each with some subtle distinctions.

Now, look over those theories. What unifies them? Think about it. I'll give you some more spoiler space to think.













Got an idea? Good. If we were speaking in person I'd ask you what you came up with and we'd engage in some dialogue so I could get you to see the point I'd like to make. But we're not and I can't, so I'll just jump to the conclusion.

What they all have in common is that they provide some criterion by which we can assess which things are true or not. Even the theories that suppose truth isn't meaningful must make a case that there's no criterion by which any statement can be meaningfully true, or, equivalently, choose a criterion that no statement satisfies.

Why does this matter? Because the existence of a criterion of truth means that it had to be picked over other possible criteria. If this weren't so, there would be no question as to what truth is. How is this choice made? It's made by humans. Even if you're religious and believe truth is handed down by a deity, humans had to choose to believe the deity (this is observably so, since even if you think you believe in the real deity, other people believe in other deities and made the choice to believe the wrong thing).

This means any criterion of truth is a norm that reflects what we care about and prefer. Both individually and collectively, we choose how to use the concept of truth. And that means whatever we want to claim to be true is ultimately motivated by whatever it is we care about that led us to choose the definition of truth we use.

Why do I need to explain any of this? Everything I've said is straightforward. I think it's because humans struggle to deal with more than one level of abstraction at once. It's easy to evaluate if something is true against some fixed notion of truth. It's easy to think about truth as a concept and argue which notion of truth is best. It's much harder to think both about how to evaluate if something is true and remember at the same time that this method of evaluation is contingent on which method you think is best.

This difficulty leads to confusion and mistakes, like thinking that truth is something objective and independent of any human process. It's easy to get so caught up in a particular notion of truth that we forget we had to choose that notion and thus that truth is contingent upon our motivations and preferences.

Why argue all this? Because I think people suffer and cause unnecessary harm because they get wrapped up in their own ideas about truth (and many other things!). By learning to look up, even just a little, we can break free of our self-constructed dreams and start to engage with the world as it really is. And if we can do that, maybe we can make a little progress on some of the hard problems we face instead of spinning our wheels trying to solve the problems we make up in our heads.

39 comments

Either I completely don't get it, or this is some kind of sophistry that seems deep. Humans choose what they mean when they pronounce the sound "truth". Therefore, truth is arbitrary.

The same argument can be made about anything else. Humans choose what they mean when they pronounce the sound "circle". Therefore, circles are arbitrary.

By learning to look up, even just a little, we can break free of our self-constructed dreams and start to engage with the world as it really is.

Okay, so instead of using the word "truth" we should be saying "world as it really is"? Other than getting an extra point for poetry, I think this is what many people already mean by truth.

Similarly, the problem with the criterion -- I have no idea whether I agree or disagree with you here -- is that we gradually learn about the world, and the criteria we used in the past may turn out to be less good than we assumed. For example, one might start with "it is true if I can see it" and then realize that sometimes things make sense even if we cannot directly observe them by our senses (because they happened in the past, happen far away, require x-ray vision, etc.), so the criterion would change to... something else, which again might require an update in the future. That does not make it arbitrary, it just makes it... learning.

To my reading you violently agree with me but are framing it a different way. I never said anything about arbitrariness. I said truth (and all concepts) is contingent. That you think contingency implies arbitrariness is a related but different kind of confusion I hope to address, but not in this post.

gjm:

I think your argument here equivocates between two different claims.

  1. "When we use the word 'truth' or 'true' we may mean different things by it, so the meaning of a sentence with 'truth' or 'true' in it is dependent on somewhat-arbitrary choices made by humans."
  2. "That specific thing you (or I) mean by 'truth' is dependent on somewhat arbitrary choices made by humans."

The first is hard to disagree with. (And the same applies for literally any other term as well as "truth"/"true".) The second, not so much.

An analogy: Suppose something is vibrating and I say "The fundamental frequency of that vibration is approximately 256 Hz". Just as we can all propose subtly (or not so subtly) different ideas of what it means to say that something is "true", so we can all propose different definitions for "hertz".[1] Or for that matter for "fundamental" or "frequency". So two people making that statement might mean different things. But I don't think it's helpful to say that this means that the fundamental frequency of an oscillation is subjective. Once you decide what you would like "fundamental frequency" to mean and what units you'd like it to be in, any two competently done measurements will give the same value.

[1] If you think this is silly, you might want to suppose that instead I had said "... is approximately that of middle C". You could measure frequency in "octaves relative to middle C" exactly as well as in hertz, but different groups of people at different times really have called different frequencies "middle C".

Similarly, at least prima facie it's possible that (a) everything you say about the existence of different criteria-for-truth is correct but none the less (b) there is a fact of the matter, not dependent on anyone's kinda-arbitrary decisions, about e.g. what things remain when you stop believing in them, or what beliefs will reliably lead a given class of agent to more accurate predictions about the future, or what sets of beliefs and inference rules constitute consistent formal systems.

Perhaps it turns out that for some or many or all plausible notions of truth (b) is not, er, true, so that what I called claim 2 above is, er, true. That would be an interesting, er, truth -- to me, much more interesting than the less controversial claim 1. But if you've given any reason here for believing it, I haven't seen it.

But I don't think it's helpful to say that this means that the fundamental frequency of an oscillation is subjective.

I think you might be imagining I'm saying more than I am, because as I see it this statement of yours contains exactly the point I'm making in this post. The very fact that making some claim about truth can be "helpful" is a manifestation of the point that I'm making.

I'm not saying the choice of what truth means is arbitrary. I'm saying it's contingent on what matters to humans. Another way to make my point: can you define truth in a way that is sensible to rocks?

gjm:

Let me try to restate what I think you're saying your point is, to see whether I have it right. "When we say something is 'true', there are any number of different things we could conceivably mean. The specific meaning we have in mind, to whatever extent there is one, will depend on what we are interested in and what we want. So it is a mistake to think of 'truth' as some sort of objective thing not dependent on human interests and preferences."

If my paraphrase is correct or near to it, then I think my point stands. The last sentence in that paraphrase, which if I've got it right expresses your main conclusion, is importantly ambiguous, and the version of it that follows from what's gone before is (it seems to me) not actually interesting or important.

The version that follows from what's gone before is just observing that the way we define our words, and the questions we find it worth asking, depend on our interests and preferences. Yup, they do, but that doesn't conflict with what I think people (at least otherwise sensible and clever people) generally mean when they say things like "I believe in objective truth".

No, I can't define truth, or anything, in a way that is sensible to rocks, because nothing is sensible to rocks. And because nothing is sensible to rocks, the fact that I can't define truth to be sensible to rocks tells us nothing about truth that would distinguish it from beauty, or rest mass, or anything else.

Perhaps I am all wrong in thinking that the "weak" version of the final claim is not interesting or important. Could you maybe give an example of a concrete error you think someone generally sensible and clever has made as a result of not seeing the truth of the "weak" version, and which they would plausibly not have made if they had seen it?

(I think what you're saying by "contingent on what matters to humans" is much the same as what I was saying by "somewhat arbitrary", just with different emphasis. I would not disagree, e.g., with "somewhat arbitrary, with the particular choices we tend to make being shaped by what matters to us". It is not coincidence that my choice of the word "helpful" is consonant with the point you're making; it was deliberately chosen to be.)

That you don't think it's interesting or important suggests you probably already grasp the point of this post and are just framing it differently than I would. For some readers what I'm saying here is sort of mind-blowing because they're walking about thinking that truth is like an objective, hard, real thing that exists totally independent of humans, hence my choice of emphasis. Sounds to me like you may already grasp my fundamental point and are seeing that it all adds back up to normality.

That said, I wrote a post a while ago with several examples of how understanding the "weak" version of the final claim matters.

For some readers what I’m saying here is sort of mind-blowing because they’re walking about thinking that truth is like an objective, hard, real thing that exists totally independent of humans, hence my choice of emphasis.

Another hypothesis here is that some readers misunderstand your point and think you're saying something different than you intend to say.

If I follow the discussion so far (and I confess I've just skimmed it), then the meaning I take from the words "truth doesn't exist independent of humans" is not a meaning you intend to convey. To convey the meaning I think you intend to convey, I would say something like: ""truth" doesn't exist independent of humans, in that we can define the word in many ways; but truth itself, for most definitions of the word in common use, does exist independent of humans".

And I agree with what I think gjm to be saying, that this is trite. It may indeed be that some people find it mind blowing.

But, it seems to me that most commenters on this post took you to be saying the same thing that I took you as saying; roughly, the thing that the words "truth doesn't exist independent of humans" conveys to me.

So I consider it a decent guess, that if someone thinks the thing you're saying is deep, it's not because they think the-thing-I-think-is-trite is deep. It may be that they misunderstood you in the same way that most commenters on this post misunderstood you.

Nothing exists independently. Everything is causally connected. So although I'm making a point about truth here because I think it's a case where failing to understand this interconnectedness matters, it's a fully general point.

Perhaps the real problem is I didn't try to convince folks in this post of this, rather than focusing on a specific consequence that I think is rather important for folks who read Less Wrong.

It's not clear to me how this was intended as a response to my comment. Was it "I reject that hypothesis because..." or "no you're misunderstanding what's being said" or...?

But it seems to me that the biggest problem with the post is likely one of two things:

  1. You're not yourself confusing the quotation with the referent, but you write in a way that doesn't clearly distinguish them. This makes some readers think you're confusing them. Perhaps it makes other readers think you're saying something deep.

    If this is the problem, then explaining why you're making the point you're making might be helpful. But I suggest it would be more helpful to make the point you're making clearer, and that explicitly distinguishing quotation from referent would help with that.

  2. You are confusing the quotation with the referent. For example, when you say "I’m making a point about truth here", you think you are indeed making a point about truth; whereas I (and I believe gjm) claim you are making a point about the word "truth". I read you as saying to gjm "yeah you understand what I'm saying, you just don't think it's very interesting, that's fine, other people do". Perhaps so, but another possibility I have to consider is that you yourself misunderstand what you're making a point about, and misunderstand gjm when he tries to explain.

All I can do is point; you have to look for yourself.

My previous comment reflects the fact that I think there's a big inferential gap here caused by having not tackled another topic.

I have the sense that it's ferociously difficult to get at the kind of thing you're pointing at here in purely conceptual terms. I wonder if it might help to give some examples of where people have made the kind of mistake you're pointing at here, and perhaps the solution that they've been missing. I have the sense that the proof-based agents line of research ran right into this issue via the limits to the coherence operationalization of truth in your list. I also have the sense that we are at the moment running into the limits of the predictive operationalization of truth when we try to locate human values within physical human brains. The most interesting one to me, though, is: what is the truth status of practice as a path to the end of suffering?!


On a meta level, you've written a bunch about the problem of the criterion, and I don't really feel like I see much productive coming from it. The problem with the problem of the criterion is that, in general, the appropriate criterion depends on the case you're dealing with. So the universal things you can say about the problem of the criterion are extremely limited.

Less cryptically, let's take an example. What is "truth"? Well, concretely: my girlfriend recently lost her phone. What defines the truth of "where her phone was"? Well, it's the place where her phone was; the place she could go to and reach for to get the phone. That's fairly simple, and it's a resolution that is straightforward and intrinsic to the sentence "where her phone was", rather than general.

Now you could say that "where her phone was" is defined based on what she cares about. But the sentence makes sense to you too, even though you don't know her and aren't really affected by the sentence. So clearly language is much broader than just what you care about.

You claim:

[...] It's easy to think about truth as a concept and argue which notion of truth is best. It's much harder to think both about how to evaluate if something is true and remember at the same time that this method of evaluation is contingent on which method you think is best.

[...]

Why argue all this? Because I think people suffer and cause unnecessary harm because they get wrapped up in their own ideas about truth (and many other things!). By learning to look up, even just a little, we can break free of our self-constructed dreams and start to engage with the world as it really is. And if we can do that, maybe we can make a little progress on some of the hard problems we face instead of spinning our wheels trying to solve the problems we make up in our heads.

But none of this philosophizing actually helped find the phone. Breaking free of our self-constructed dreams of "where her phone was" wouldn't help at all; what did help was calling her phone and following the sound, which revealed that it was hidden behind a bag on the table. This only really worked because we had a good map-territory correspondence that allowed us to understand what happened in various cases, which requires a very solid grasp on truth, and it's probably also more available because we've learned this heuristic from other cases where a lost phone was a problem.

That she and you aren't trying to resolve problems where the contingency of facts matters is a blessing. Please enjoy not having to deal with these problems.

This is not sarcastic or a joke. Really, you're lucky!

Would you like to discuss a stronger claim, that motivated cognition may be a good epistemology?

Usually people use "logical reasoning + facts". Maybe we can use "motivated reasoning + facts". I.e. seek a balance between desirability and plausibility of a hypothesis. 
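
To make the proposal concrete, here is a minimal sketch of what "seek a balance between desirability and plausibility" could look like as a scoring rule. This is my illustration, not something from the thread; the hypothesis names, probabilities, and the weight w are all invented.

    # Toy sketch of "motivated reasoning + facts": rank hypotheses by a
    # weighted blend of plausibility (log-probability given the evidence)
    # and desirability (how much we'd like the hypothesis to be true).
    # All names and numbers are made up for illustration.
    import math

    hypotheses = {
        # name: (probability given the evidence, desirability in [0, 1])
        "plan A works": (0.30, 0.9),
        "plan A fails": (0.65, 0.1),
        "plan A is irrelevant": (0.05, 0.5),
    }

    def score(p_given_evidence, desirability, w=0.3):
        # w = 0 is pure "facts"; w = 1 is pure wishful thinking.
        return (1 - w) * math.log(p_given_evidence) + w * desirability

    for name, (p, d) in sorted(hypotheses.items(), key=lambda kv: -score(*kv[1])):
        print(f"{name}: score = {score(p, d):.3f}")

With w = 0 this collapses to ordinary ranking by plausibility; the regime the comment seems to be gesturing at is w > 0, where how much you want a hypothesis to be true shifts how you rank it.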

I would say that of course motivated reasoning can lead to good epistemology since my claim is that all epistemology is done at the behest of some motivation, good being relative here to some motivation. :-)

For example, it's quite reasonable to pick a norm like logic or Bayesian rationality and expect reasoning to conform to it in order to produce knowledge of a type that is useful, say to the purpose of predicting what the world will be like in future moments.

Sorry, I meant using motivated cognition as a norm itself. Using motivated cognition for evaluating hypotheses. I.e. I mean what people usually mean by motivated cognition, "you believe in this (hypothesis) because it sounds nice".

Here's why I think that motivated cognition (MC) is more epistemically interesting/plausible than people think:

  • When you're solving a problem A, it may be useful to imagine the perfect solution. But in order to imagine the perfect solution for the problem A you may need to imagine such solutions for the problems B, C, D etc. ... if you never evaluate facts and hypotheses emotionally, you may not even be able to imagine what the "perfect solution" is.
  • MC may be a challenge: often it's not obvious what the best possibility is. And the best possibilities may look weird.
  • Usual arguments against MC (e.g. "the universe doesn't care about your feelings", "you should base your opinions on your knowledge about the universe") may be wrong. Because feelings may be based on the knowledge about reality.
  • Modeling people (even rationalists) as using different types of MC may simplify their arguments and opinions.
  • MC in the form of ideological reasoning is, in a way, the only epistemology known to us. Bayesianism is cool, but on some important level of reality it's not really an epistemology (in my opinion), i.e. it's hard/impossible to use and it doesn't actually model thinking and argumentation.

If you want we can discuss those or other points in more detail.

I wrote a post about motivated cognition in epistemology, a version of "the problem of the criterion" and (a bit) about different theories of truth. If you want, I would be happy to discuss some of it with you.

Important post. The degree to which my search for truth is motivated, and to what ends, is something I grapple with frequently. I generally prefer the definition of truth as "that which pays the most rent in anticipated experience"; essentially a demand for observability and falsifiability, a combination of your correspondence and predictive criteria. This, of course, leaves what is true subject to updating if new ideas lead to better results, but I think it is the best way we have of approximating truth. So I'm constantly looking really hard at the evidence I examine and asking myself, am I convinced of this for the right reasons? What would have to happen to unconvince me? How can I take a detached stance toward this belief, if ever there comes a time when I may no longer want it? So insofar as my truth-seeking could be called motivated, I aim to constrain it to being motivated solely by adherence to the scientific method, which is something I am unashamed to simply acknowledge.

And that means whatever we want to claim to be true is ultimately motivated by whatever it is we care about that led us to choose the definition of truth we use.

 

People who speak different languages don't use the symbol "truth". To what extent are people using different definitions of "truth" just choosing to define a word in different ways and to talk about different things?

In an idealized agent, like AIXI, the world modeling procedure, the part that produces hypotheses and assigns probabilities, doesn't depend on its utility function. And it can't be motivated. Because motivation only works once you have some link from actions to consequences, and that needs a world model.

If the world model is seriously broken, the agent is just non-functional. The workings of the world model aren't a choice for the agent. They're a choice for whatever made the agent.
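
A toy caricature of the separation this comment describes, for readers who want something concrete. This is not AIXI itself (AIXI is incomputable); the two environment models, the observations, and the utility function below are all invented for illustration. The point it mirrors is that belief updating looks only at observations, and the utility function enters only at action selection.

    # Two made-up environment models, each giving P(observation | action).
    models = {
        "sunny-world": lambda action: {"sun": 0.8, "rain": 0.2},
        "rainy-world": lambda action: {"sun": 0.1, "rain": 0.9},
    }
    beliefs = {"sunny-world": 0.5, "rainy-world": 0.5}  # prior over models

    def update(beliefs, action, observation):
        # World modeling: a Bayesian update that uses observations only.
        # Note that no utility function appears anywhere in this step.
        posterior = {m: p * models[m](action)[observation] for m, p in beliefs.items()}
        total = sum(posterior.values())
        return {m: p / total for m, p in posterior.items()}

    def choose_action(beliefs, utility):
        # Only here does motivation enter: pick the action with the highest
        # expected utility under the current (utility-independent) beliefs.
        actions = ["take umbrella", "leave umbrella"]
        def expected_utility(action):
            return sum(p * sum(prob * utility(action, obs)
                               for obs, prob in models[m](action).items())
                       for m, p in beliefs.items())
        return max(actions, key=expected_utility)

    def utility(action, obs):
        # Invented preferences: getting rained on is bad, umbrellas are a small cost.
        cost = -0.1 if action == "take umbrella" else 0.0
        soaked = -1.0 if (action == "leave umbrella" and obs == "rain") else 0.0
        return cost + soaked

    beliefs = update(beliefs, "leave umbrella", "rain")
    print(beliefs, choose_action(beliefs, utility))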

In an idealized agent, like AIXI, the world modeling procedure, the part that produces hypotheses and assigns probabilities, doesn't depend on its utility function. And it can't be motivated. Because motivation only works once you have some link from actions to consequences, and that needs a world model.

AIXI doesn't exist in a vacuum. Even if AIXI itself can't be said to have self-generated motivations, it is built in a way that reflects the motivations of its creators, so it is still infused with motivations. Choices had to be made to build AIXI one way rather than another (or not at all). The generators of those choices are where the motivations behind what AIXI does lie.

If the world model is seriously broken, the agent is just non-functional. The workings of the world model aren't a choice for the agent. They're a choice for whatever made the agent.

Yes, although some agents seem to have some amount of self-reflective ability to change their motivations.

It pays to taboo the term, as I've been advocating for years here, with little success. 

Say what you really mean instead of this nebulous misleading concept! Sometimes it is a truth value of a provable or disprovable mathematical statement (e.g. the Pythagorean theorem), sometimes it is someone's best guess at the truth value of a mathematical statement (P is very likely provably not equal to NP), sometimes it is a statement about the accuracy of some model of the physical world (e.g. Quantum Mechanics is "true" in its domain of applicability), sometimes it is a statement of faith ("my truth" vs "your truth") etc.

Tabooing "truth" avoids pointless arguments over statements of the form "unprovable/untestable but true", like MWI is obviously true", or "Genghis Khan liked horse milk" or "BB(10)'th digit of Pi has 10% probability of being 0". Alternatives to the term "true" are "testably accurate", "holds in all but measure zero possible worlds, given a certain set of assumptions", "something I fervently believe in" and other items from your list. 

"Fact" is another term that is worth tabooing, I call it yet another four-letter f-word.

I like this proposal. In light of the issues raised in this post, it's important for people to get into the habit of explaining their own criteria for "truth" instead of leaving what they are talking about ambiguous. I tend not to use the word much myself, in fact, because I find it more helpful to describe exactly what kind of reality judgments I am interested in arriving at. Basically, we shouldn't be talking about the world as though we have actual means of knowing things about it with probability 1.

My notion of truth doesn't fit with any of the theories you listed. Truth is a relationship between propositions and the world, e.g. the proposition "this comment contains 1 or more y's" is true because this comment contains 1 or more y's.

This doesn't technically invalidate your point that truth is human-chosen. But specifically, the human-chosen element is the language we use. If we spoke a different language where the meanings of the words "more" and "fewer" were swapped, the statement would become false.
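
For what it's worth, this example can be made fully mechanical. Under a fixed reading of the words, checking the proposition against the "world" (here, the text of the comment itself) is a trivial computation; swap the meanings of "more" and "fewer" and the same string of words picks out a different, false proposition. A small sketch of mine, using the comment's first sentence:

    # The "world" is the comment text; the proposition is "this comment
    # contains 1 or more y's".
    comment = "My notion of truth doesn't fit with any of the theories you listed."
    print(comment.count("y") >= 1)  # True: "My", "any", and "you" each contain a y

    # In a language where "more" and "fewer" have swapped meanings, the same
    # sentence expresses "contains 1 or fewer y's", which is false here.
    print(comment.count("y") <= 1)  # False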

Though my counterargument here unfairly skews things to my advantage. AFAIK, there is a lot of shared structure between different human languages. Usually human languages can be translated near-losslessly into each other, but a combinatorial argument shows that this is not the case for the overwhelming majority of mathematically conceivable languages.

However, I think similar combinatorial arguments show that it is not possible to obtain information about the truth in the overwhelming majority of mathematically conceivable languages. Conceptually, if the truth of statements depends mainly on things that you do not observe (which concepts will if they depend on random stuff, since there's a lot of unseen stuff they could potentially depend on), then you cannot learn anything about the truth.

You're saying that our languages are based on our motivations and preferences. But almost any set of motivations and preferences would favor a language that can express concepts that are observable, as well as concepts linked to observables. I bet there's an instrumental convergence argument that could be made here; do you disagree?

This doesn't technically invalidate your point that truth is human-chosen.

I'm not sure what you're trying to argue here. I give a bunch of examples of theories of truth, but there can of course be more, my list is not exhaustive. Your theory still has the property of depending on a criterion that distinguishes that which is true from what is not, so it doesn't change the remainder of my arguments.

You're saying that our languages are based on our motivations and preferences. But almost any set of motivations and preferences would favor a language that can express concepts that are observable, as well as concepts linked to observables. I bet there's an instrumental convergence argument that could be made here; do you disagree?

The sort of thing we find it useful to label "truth" reflects what's useful to us, which includes saying things about what we observe. If you had a language where that wasn't possible, you'd probably invent a way to do it. Because many humans care about the same things, we converge on finding the same sort of things useful, so they become fixed concepts we teach each other and build into our languages.

I'm not sure if we can truly make a case that this is instrumental convergence because I don't think any of this is happening independently enough for that to be meaningful, but my point could be phrased that we care about truth for instrumental reasons, and many people have the same instrumental reasons for the same reasons.

Truth isn't necessarily about what's useful to us, though. There's a truth to the matter about whether Russell's teapot exists, but that doesn't mean it is useful.

That someone cares what the answer is is a kind of usefulness.

I think there are lots of propositions that can be phrased in English and that nobody cares about.

I think maybe you're meaning something different by "care" than I am. You seem to mean something like "important". I mean something like "care enough to even ever bother thinking about it". That there are infinitely many statements no one cares about (by my definition) doesn't seem like a problem; in fact, it's an important thing to know.

ZT5:

I would say that if you hold the correct understanding of what truth is, then truth-seeking is cognition motivated by seeking truth.

So yes, it is motivated cognition. But the motivation is correct.

This seems like circular reasoning that doesn't ground out to anything. How do you know if you have the correct ("true") understanding of truth?

ZT5:

There exists an objective reality. A true statement correctly describes that objective reality. A false statement incorrectly describes that objective reality.

It really is quite simple (though people manage to get very confused about that anyway, somehow).

Yes, there is "circularity" to it, in that the mind uses itself to validate itself.

But it's not just validating the definition of truth against itself (if it did, "truth" would just be a floating concept not connected to anything. So it could mean anything and still validate).

It is validating my definition of truth against all my sensory input, against all my knowledge, against all my memories. Does this definition of truth add up to a coherent reality?

How do you know that this objective reality exists? What about the world is explained by the existence of objective reality that can't also be explained as an illusion of your own mind?

That you think this comic captures this discussion means I've missed the mark with you, because you've failed to grasp the intended meaning. I suspect, like many other commenters here, you've interpreted my words to say more than they do.

ZT5:

Then I'm not sure what you are trying to say. Perhaps it would be easier if you explain your beliefs instead of trying to get me to question mine?

It seems like you are trying to break people out of an over-reliance on concepts and trying to point them at the fundamental thing behind the concepts? 

My beliefs validate; I don't see it being worth my time to explain the validation process in full detail.

Indeed, this post is more focused on breaking people out of one set of concepts rather than fully explaining another because it's a long process to explain the thing I'm pointing at and this post was a way for me to play with writing about a couple ideas I had for a larger writing project.

If it's not worth your time, that's well worth knowing!

ZT5:

Thanks, that clarifies your position somewhat.

It is not worth my time because I already understand the thing you are trying to communicate. Or so I believe.

If you are trying to get me to "look up" and "look away from my phone", but we are communicating over phones, so how do I demonstrate I already know to do this?

If you are trying to get me to see the truth beyond words and concepts, but we are communicating in words, how do I demonstrate I already don't see words as the truth?

I also feel that maybe you have gone too far in on that one, and from realizing that words are not, in themselves, the truth, decided to assume that words cannot meaningfully connect to the truth at all. And that the only way to get people to see the truth is to "crash their program", is to force them to "look up"?

What does it matter if you've demonstrated you know something to me? I'm just some guy posting things on Less Wrong.

I never said that words cannot meaningfully connect to truth or any other thing. Words are clearly quite useful for pointing at stuff about the world! I only claimed that this connection is not independent of our motivations.