LessWrong isn't exactly founded on the map-territory model of truth, but it's definitely pretty core to the LessWrong worldview. The map-territory model implies a correspondence theory of truth. But I'd like to convince you that the map-territory model creates confusion and that the correspondence theory of truth, while appealing, makes unnecessary claims that infect your thinking with extraneous metaphysical assumptions. Instead we can see what's appealing about the map-territory metaphor but drop most of it in favor of a more nuanced and less confused model of how we know about the world.

The map-territory metaphor goes something like this: a map is a representation of some part of the world—a territory. The mind that models the world via thoughts can be said to create a map of the territory of reality. Thus, under the map-territory model, beliefs are true if they represent what we actually find when we investigate the territory directly. Or, if you want to add some nuance, the map is more accurate and thus closer to being true the better it represents the territory.

The map-territory model implies a correspondence theory of truth. That is, it's a theory of truth that says propositions are true to the extent they accurately represent reality. On the surface this seems reasonable. After all, the process of finding out what the world is like feels a lot like drawing a map of reality: you collect evidence, build a model, check the model against more evidence, iterate until the model is good enough for whatever you're doing, and if the model seems like it matches all the evidence you can throw at it, we might call the model true. But there are problems with correspondence theories of truth. I'll focus on the one that I think is the worst: unnecessary, metaphysical claims.

In order to set up a correspondence between map and territory and judge truth based on the accuracy of that correspondence, there must be an assumption that there is something we call "the territory" and that in some way we can construct a map that points to it. Whatever we think the territory is, we must presume it exists prior to establishing a criterion for truth because if we don't have a territory to check against we have no way to assess how well the map corresponds to it. Two common versions of the territory assumption are the materialist version (there's an external physical reality we observe) and the idealist version (there's an external source of pure form that grounds our reasoning). Both are metaphysical assumptions in that they are being made prior to having established a way to reckon their truth.

Nota Bene: Just so there's no misunderstanding, neither materialism nor idealism need be metaphysical assumptions. Questions about materialism and idealism can be investigated via standard methods if we don't presuppose them. It's only that a correspondence theory of truth forces these to become metaphysical assumptions by making our notion of truth depend on assuming something about the nature of reality to ground our criterion of truth.

What alternative theory of truth can we use if not a correspondence one? There are a few options, but I'll simply consider my favorite here in the interest of time: predicted experience. That is, rather than assuming that there is some external territory to be mapped and that the accuracy of that mapping is how we determine if a mapping (a proposition) is true or not, we can ground truth in our experience, since it's the only thing we are really forced to assume (see the below aside for why). Then propositions or beliefs are true to the extent they predict what we experience.
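
To make this concrete, here is a minimal sketch in Python (the names, numbers, and scoring rule are illustrative assumptions rather than a worked-out theory): a belief is treated as something that assigns probabilities to upcoming experiences, and its degree of truth is just how well those probabilities track what is actually experienced.

```python
# Minimal illustrative sketch: truth as predictive accuracy over experiences.
# Everything here (names, numbers, scoring rule) is a toy assumption.

def predictive_score(belief, experiences):
    """Average probability the belief assigned to what was actually experienced."""
    probs = [belief(context)(observed) for context, observed in experiences]
    return sum(probs) / len(probs)

# A "belief" maps a context of prior experience to a probability function
# over the next experience.
def snow_is_white(_context):
    return lambda observed: 0.95 if observed == "white" else 0.05

# Experiences: (context, what was then experienced) pairs.
experiences = [("looked at snow", "white")] * 9 + [("looked at snow", "bluish")]

print(predictive_score(snow_is_white, experiences))  # ~0.86: a fairly "true" belief
```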

This lets us remove metaphysical claims about the nature of reality from our epistemology, and by the principle of parsimony we should because additional assumptions are liabilities that make it strictly more likely that we're mistaken.

Aside: Why are we forced to assume our experiences are true? Some people have a notion of doubting their own experience, but all such doubting must happen on the basis of evidence collected from experience, unless you believe some form of dualism in which our minds have direct, special access to truth outside experience (which, I would argue, is an unnecessary assumption, since we can come to believe we have such special access merely through experience). Thus any idea that our experience might not be reality is itself the result of building models on evidence collected from sense data, and so that idea already assumes that experience is in some way revealing truth. I think what more often happens is that people confound direct sense experience with experience of their models of experience, for example confounding raw visual data with the shapes, patterns, etc. our brains automatically pull out for us, and thus they feel like their models are ground truth even though those models are additional calculations performed by our brains atop raw sense data. If we strip all that away, all we are left with is the sensory input that comes into our minds, and even if we come to believe we're psychotic, living in a simulation, or Boltzmann brains, it's still the case that we know that via sense data we trusted enough to reach that conclusion, and so we must trust experience because that's simply how our brains work.

Under this model, we can rehabilitate the map-territory metaphor. The map is predictions, the territory is experience, including experience of predictions, and truth is found in the ability of the map to predict what we find in the territory of experience. This rehabilitation is useful in that it helps us show that abandoning a correspondence theory of truth need not mean we abandon what we intuitively knew to be useful about the map-territory metaphor, but also points out that truth doesn't work quite the way the metaphor naively implies.

But should we keep the map-territory metaphor around? It depends on what you want to do. I think the map-territory distinction is mostly useful for pointing out a class of epistemological errors that involve confusing thoughts for reality (cf. failures of high modernism, The Secret, and cognitive fusion). I don't think it's a great choice, though, for a model of how truth-making happens, because as we've seen it depends on making unnecessary, metaphysical assumptions. It also causes confusion because its metaphor suggests there's some separation between map and territory (people sometimes try to correct this by saying something like "the map is not the territory, but the map is in the territory").

In fact, despite my arguments above for why the map-territory model and a correspondence theory of truth are insufficiently parsimonious, I think the real reason you should not lean too hard on the map-territory model is that it can cause confusion in your own mind. By creating a separation between map and territory, we introduce dualism and imply a split between the machinery of our minds and the reality they predict. Although we're clever enough not to regularly drop anvils on our heads (though given our willingness to deny reality when it's painful, you might debate this claim), we're not quite so clever as to never get wrapped up in all kinds of confusions, because we start thinking our model of reality is more real than reality itself (cf. seeking truth too hard, generally Goodharting yourself, humans dissociating all the time, and confusion about causality).

Thus we're better off using the map-territory distinction only as a way to point out a class of problems, not as a general model for how we reason about the truth. What we actually seem to do to find truth is more subtle and less satisfying an answer perhaps than "drawing maps", but it also better reflects the embedded nature of our existence.

Thanks to Justis for useful feedback on an earlier draft of this essay via the LW feedback service.


It seems worth noting that taking map-territory distinction as object is useful in contexts that aren't about highminded philosophy or epistemology. Eg if I'm debugging some software, one angle to do that from is searching for the relevant difference between my mental model of how the software works, and how it actually works.

For the highly abstract epistemology stuff, my stance is: Most people should probably be more pluralistic. Thinking about map-territory correspondence is a frame in which you can figure out what's true. Thinking about predictiveness is a frame in which you can figure out what's true. Coherence, Bayesianism, and pragmatism are frames in which you can figure out what's true.

A lot of philosophy seems like what it's trying to do is to establish a pecking order between frames: to make one particular thing The Ultimate Source of Truth, and rebuild and justify the rest in terms of that. I think this is a mistake. What we actually do, when we're not trying to be philosophical, and what actually works, is we use all the frames we've picked up, we use them to judge each other's predictions, we kick out the ones that underperform too badly, and eventually wind up at a fixpoint where all our models of truth are making mostly the same predictions. This works better because lots of things are easy to think about in one ontology and hard to think about in another, and if we try to mash everything into the same ontology for philosophical reasons, the mashing process tends to force us into tortured analogies that yield unreliable conclusions.
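
A toy rendering of that process (the "frames" here are stand-in predictive models, and the data and cutoff are made up):

```python
import math

# Toy rendering of "use many frames, score them on predictions, kick out the
# ones that underperform too badly". The frames, data, and cutoff are made up.
observations = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # a stream of binary "experiences"

frames = {
    "always-1":    lambda history: 0.99,  # each frame gives P(next observation == 1)
    "base-rate":   lambda history: (sum(history) + 1) / (len(history) + 2),
    "always-0":    lambda history: 0.01,
    "fifty-fifty": lambda history: 0.5,
}

def log_score(predict):
    """Sum of log-probabilities the frame assigned to what actually happened."""
    total, history = 0.0, []
    for obs in observations:
        p = predict(history)
        total += math.log(p if obs == 1 else 1 - p)
        history.append(obs)
    return total

scores = {name: log_score(f) for name, f in frames.items()}
best = max(scores.values())
surviving = {name for name, s in scores.items() if s > best - 5.0}  # arbitrary cutoff
print(sorted(surviving))  # "always-0" gets kicked out; the survivors mostly agree
```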

Yes, different tools are useful to different purposes, but also sometimes trying to extract a general theory is quite useful since tools can get you confused when they are applied outside their domain of function.

Cf. Toolbox Thinking and Law Thinking

In order to set up a correspondence between map and territory and judge truth based on the accuracy of that correspondence, there must be an assumption that there is something we call "the territory" and that in some way we can construct a map that points to it.

Suppose that I happen to believe that there is a physical universe "out there" even if I don't know about it or have wrong ideas about it.

Suppose I also happen to believe that a mathematical claim can be true or false or undecidable even if we humans don't know which one it is, or are wrong about it.

Suppose I hold those beliefs very strongly, so strongly that I have no interest whatsoever in arguing about them, or questioning them, or justifying them to myself or anyone else. Suppose I pondered and debated those beliefs as a teenager, but I feel like I've reached the correct answer, and now I just want to take them for granted and move forward to tackling other questions that I find more important and interesting, like how to build safe AGI or debug my code or whatever.

Would you nevertheless say that "the map-territory distinction creates confusion" even for me? If so, why?

That sounds almost Wittgensteinian.

But that is not to say that we are in doubt because it is possible for us to imagine a doubt. I can easily imagine someone always doubting before he opened his front door whether an abyss did not yawn behind it, and making sure about it before he went through the door (and he might on some occasion prove to be right)—but that does not make me doubt in the same case

But, if you are certain, isn't it that you are shutting your eyes in face of doubt?"—They are shut.



Yes. Let's take the case of building safe AGI, because that's actually why I ever ended up caring so much about all this stuff (modulo some risk that I would have cared about it anyway and my reason for caring is not actually strongly dependent on the path I took to caring).

In my posts on formal alignment I start from a stance of transcendental idealism. I wouldn't necessarily exactly endorse transcendental idealism today and all the cruft surrounding it, but I think it gets at the right idea: basically assume an arealist stance and assume that anything you're going to figure out about the world is ultimately subjective. This was quite useful, for example in the first post of that sequence, because it clears up any possibility of confusion that we can base alignment on some fundamental feature of the universe. Although I hadn't worked it all out at the time of that post, this ultimately led me to realize that, for example, any sort of alignment we might achieve depends on choosing norms to align to, and the source of the norms must be human axiology.

None of this was totally novel at the time, but what was novel was having a philosophical argument for why it must be so, rather than a hand-wavy argument that permitted the idea that other approaches might be workable.

I started out just being your typical LW-style rationalist: sure, let's very strongly assume there's external reality to the point we just take for granted that it exists, no big deal. But this can get you into trouble because it's very easy to jump from "there's very likely external reality" to "there's a view from nowhere in that external reality" and get mixed up about all kinds of stuff. Not having a firm grounding in how subjective everything really is made it hard to make progress without constantly getting tripped up in stupid ways, basically in ways equivalent to thinking "if we just make AGI smart enough it'll align itself". So after I got really tangled up by my own thoughts, I realized the problem was that I was making these strong assumptions that I shouldn't be taking for granted. When I stopped doing that and stopped trying to justify things in terms of those beliefs things got a lot easier to think about.

(I'm not sure how relevant this is, but I do want to say that I do have in mind the possibility that, even if I think there's an objective universe "out there", my AGI might not think that. In general I think we should be very uncertain of all aspects of an AGI's ontology, by default. That might or might not be relevant to this discussion.)

any sort of alignment we might achieve depends on choosing norms to align to, and the source of the norms must be human axiology

Hmm, I'm not convinced that I'm missing out on anything here. I feel quite confident that I can both understand that "you can't get an ought from an is" and believe there's objectively a universe out there. Probably I'm misunderstanding your point.

it's very easy to jump from "there's very likely external reality" to "there's a view from nowhere in that external reality"

I'm trying to understand what you mean by "there's a view from nowhere in that external reality" such that this is obviously-to-you false. I'm inclined to just say "Oh yeah of course there's a view from nowhere." But maybe I'm misunderstanding what that is.

Let's take an N×N Game of Life (GOL) universe, where N is a very large but finite number. If N is large enough, this GOL universe can almost definitely have conscious (or at least p-zombie) observers in it (since one can make a Turing machine within a GOL universe). It also might not have any such observers in it. It doesn't matter. Either way, I can write down a list of N² binary numbers each timestep, describing whether each tile is ON or OFF. I would describe this list of lists as a "view from nowhere"—i.e., an "objective" complete description of this GOL universe. Would you?
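
A minimal sketch of this setup (grid size, seed pattern, and step count are arbitrary choices for illustration); the complete per-timestep record `history` plays the role of the list-of-lists "view from nowhere" being described:

```python
# Minimal finite Game-of-Life sketch; all specifics here are illustrative assumptions.

N = 8

def step(grid):
    """One Game of Life update on an N×N grid, treating cells beyond the edge as dead."""
    def live_neighbors(r, c):
        return sum(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0) and 0 <= r + dr < N and 0 <= c + dc < N)
    return [[live_neighbors(r, c) == 3 or (grid[r][c] and live_neighbors(r, c) == 2)
             for c in range(N)] for r in range(N)]

grid = [[False] * N for _ in range(N)]
grid[3][3] = grid[3][4] = grid[3][5] = True  # a "blinker" as the initial state

history = []  # the "objective" record: one list of N*N booleans per timestep
for _ in range(4):
    history.append([cell for row in grid for cell in row])
    grid = step(grid)
```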

Either way, I can write down a list of N² binary numbers each timestep, describing whether each tile is ON or OFF. I would describe this list of lists as a "view from nowhere"—i.e., an "objective" complete description of this GOL universe. Would you?

How does a conscious observer within the GOL universe write down this list?

They don't. The map is not the territory, right?

We were talking about "view from nowhere", with G Gordon saying "obviously there is no view from nowhere" and I was saying "yes there's a view from nowhere, or else maybe I don't know what that phrase means". A view from nowhere, I would assume, does not need to exist inside somebody's head, and in fact presumably does not, or else it would be "view from that person's perspective", right?

A view from nowhere, I would assume, does not need to exist inside somebody's head,

Presumably it needs to exist somewhere in order to exist, whether that's in someone's head, in a computer, or on a piece of paper.

and in fact presumably does not, or else it would be "view from that person's perspective", right?

Generally the problems with views from nowhere pop up once you start talking about embedded agency. A lot of our theories of agency assume that you have a view from nowhere and that you then somehow place your actions in it. This is an OK model for non-embedded agents like chess AIs, where we can make a small-world assumption and be reasonably accurate, but it is not a very good model for real-world generally intelligent unboxed agents.

I would surmise that we don't disagree about anything except what the term "view from nowhere" means. And I don't really know what "view from nowhere" means anyway, I was just guessing.

The larger context was: I think there's a universe, and that I live in it, and that claims about the universe can be true or false independently of what I or any other creature know and believe. And then (IIUC) G Gordon was saying that this perspective is wrong or incomplete or something, and in fact I'm missing out on insights related to AI alignment by having this perspective. So that was the disagreement.

It's possible that theories of embedded agency have something to do with this disagreement, but if so, I'm not seeing it and would be interested if somebody spelled out the details for me.

The idea of a "view from nowhere" is basically the idea that there exists some objective, non-observer-based perspective of the world. This is also sometimes called a God's eye view of the world.

However, such a thing does not exist, except to the extent that we infer things we expect to be true independent of observer conditions.

Yes, embedded agency is quite connected to all this. Basically I view embedded agency as a way of thinking about AI that avoids many of the classical pitfalls of non-subjective models of the world. The tricky thing is that for many toy models, like chess or even most AI training today, the world is constrained enough such that we can have a view from nowhere onto the artificially constrained world, but we can't get this same thing onto the universe because, to extend the analogy from above a bit, we are like chess or go pieces on the board and can only see the board from our place on it, not above it.

Can we distinguish three possible claims?

  1. "God's-eye view of the world" is utter nonsense—it's just a confused notion, like "the set of all sets that don't contain themselves" or "a positive integer that's just like 6 in every way, except that it's prime".
  2. "God's-eye view of the world" might or might not be a concept that makes sense; we can't really conclude with certainty one way or the other, from our vantage point.
  3. "God's-eye view of the world" is a perfectly sensible concept, however we are finite beings within the world and smaller than the world, so obviously we do not ourselves have access to a God's-eye view of the world. Likewise, an AI cannot have a God's-eye view of its own world. Nevertheless, since "God's-eye view of the world" is a sensible concept, we can talk about it and reason about it. (Just like "the th prime number" is a sensible concept that I can talk about and reason about and even prove things about, even if I can't write down its digits.)

I endorse #3. I'm slightly sympathetic to #2, in the sense that, no, of course I don't put literally 100% credence on "there is an objective reality" etc.; that's not the kind of thing that one can prove mathematically, and I can imagine being convinced otherwise, even if I strongly believe it right now.

The reason I brought up the Game-Of-Life universe example in my earlier comment was to argue against #1.

I think it's possible to simultaneously endorse #3 and do sound reasoning about embedded agency. Do you?

So to return to your GoL example, it only works because you exist outside the universe. If you were inside that GoL, you wouldn't be able to construct such a view (at least based on the normal rules of GoL). I see this as exactly analogous to the case we find ourselves in: what we know about physics seems to imply that we couldn't hope to gather enough information to ever successfully construct a God's eye view.

This is why I make a claim more like your #1 (though, yes, #2 is obviously the right thing here because nothing is 100% certain): a God's eye view is basically nonsense that our minds just happen to be able to imagine is possible, because we can infer what it would be like if such a thing could exist from the sample set of our experience; but the logic of it seems to be that it just isn't a sensible thing we could ever know about except via hypothesizing its possible existence, putting it on par with thinking about things outside our Hubble volume, for example.

I'm suspicious someone could endorse #3 and not get confused reasoning about embedded agency, because I'd expect either that assuming #3 would cause you to get confused thinking about the embedded agency situation (getting tripped up on questions like "why can't we just do thing X that endorsing #3 allows?"), or that thinking about embedded agency hard enough would force you to break down the things that make you endorse #3, at which point you would no longer endorse it (my claim here is backed in part by the fact that I and others have basically gone down this path before one way or another, previously having assumed something like #3 and then having to unassume it because it got in the way and was inconsistent with the rest of our thinking).

So to return to your GoL example, it only works because you exist outside the universe. If you were inside that GoL, you wouldn't be able to construct such a view (at least based on the normal rules of GoL). I see this as exactly analogous to the case we find ourselves in: what we know about physics seems to imply that we couldn't hope to gather enough information to ever successfully construct a God's eye view.

I feel like this is an argument for #3, but you're taking it to be an argument for #1. For example "we couldn't hope to gather enough information to ever successfully construct a God's eye view" is exactly the thing I said in #3. 

Let's walk through the GoL example. Here's a dialog between two GoL agents within the GoL universe:

A: "There is a list of lists of  boolean variables describing a God's-eye view of our universe."

B: "Oh? If that's true, then tell me all the entries of this alleged list of lists. Go."

A: "Obviously I don't know all the entries. The list has vastly more entries than I could hold in my head, or learn in a million lifetimes. Not to mention the fact that I can't observe everything in our universe etc. etc."

B: "Can you say anything about the entries in this list? Why even bring it up?"

A: "Oh sure! I know lots of things about the entries in the list! For example, I'm 99.99% confident that the entries in the list always obey these four rules. And I'm 99.99% confident that the sum of the entries of each list obeys the following mathematical relation: (mumble mumble). And I'm 99.99% confident that thus-and-such scientific experiment corresponds to thus-and-such pattern in the entries of the list, here let me show you the simulation results. And—"

B: "—You can stop right there. I don't buy it. If you can't tell me every single entry in the list of lists right now, then there is no list of lists, and everything you're saying is total nonsense. I think you're just deeply confused."

OK. That's my dialog. I think A is correct all around, and B is being very unreasonable (and I wrote it that way). I gather that you're sympathetic to B. I'd be interested in what you would have B say differently at the end.

B: "Okay, cool, but that's information you constructed from within our universe and so is contingent on the process you used to construct it thus it's not actually a God's eye view but an inference of one. Thus you should be very careful what you do with that information because if you start to use it as the basis for your reasoning you're now making everything contingent on it and thus necessarily more likely to be mistaken in some way that will bite you in the ass at the limits even if it's fine 99.99% of the time. And since I happen to know you care about AGI alignment and AGI alignment is in large part about getting things right at the extreme limits, you should probably think hard about if you're not seeing yourself up to be inadvertently turned into a paperclip."

It seems like you're having B jump on argument #2, whereas I'm interested in a defense of #1. In other words, it's trivial to say "we can't be literally 100% certain that there's an objective universe", because after all we can't be literally 100% certain of anything whatsoever. I can't be literally 100% certain that 7 is prime either. But I would feel very comfortable building an AGI that kills everyone iff 7 is composite. Or if you want a physical example, I would feel very comfortable building an AGI that kills everyone iff the sun is not powered primarily by nuclear fusion. You have posts with philosophical arguments; are you literally 100% certain that those arguments are sound? It's a fully general counterargument!

I don't think your opinion is really #2, i.e. "There's probably an objective universe out there but we can only be 99.99% confident of that, not literally 100%." In the previous discussion you seemed to be frequently saying with some confidence that you regard "there is an objective universe" to be false if not nonsensical. Sorry if I'm misunderstanding.

In your quote above, you use the term "construct", and I'm not sure why. GoL-Person A inferred that there is a list of lists, and inferred some properties of that list. And there is in fact a list of lists. And it does in fact have those properties. A is right, and if B is defending position #1, then B is wrong. Then we can talk about what types of observations and reasoning steps A and B might have used to reach their respective conclusions, and we can update our trust in those reasoning steps accordingly. And it seems to me that A and B would be making the same kinds of observations and arguments in their GoL universe that we are making in our string theory (or whatever) universe.

Perhaps it seems like I'm not really defending #1 because it still all has to add up to normality, so it's not like I am going to go around claiming an objective universe is total nonsense except in a fairly technical sense; in an everyday sense I'm going to act not much different from a person who claims there definitely is an objective reality, because I've still got to respond to the conditions I find myself in.

From a pragmatic perspective, most of the time it doesn't matter what you believe so long as you get the right outcome, and that holds over a surprisingly large space where it can be hard to find the places where things break down. Mostly they break down when you try to justify how things are grounded without stopping when it's practical, and instead going until you can't go anymore. That's the kind of place where rejecting #3 (except as something contingent) and accepting something more like #1 starts to make sense, because you end up getting underneath the processes that were used to justify the belief in #3.

I still feel like you're dodging the GoL issue. The situation is not that "GoL-person-A has harmless confusions, and it's no big deal because they'll still make good decisions". The situation is that GoL-person-A is actually literally technically correct. There is in fact a list of lists of booleans, and it does have certain mathematical properties like obeying these four rules. Are you:

  • A) disagreeing with that? or
  • B) saying that GoL-person-A is correct by coincidence, i.e. that A did not have any sound basis to reach the beliefs that they believe, but they just happened to guess the right answer somehow? or
  • C) Asserting that there's an important difference between the conversation between A and B in the GoL universe, versus the conversation between you and me in the string theory (or whatever) universe? or
  • D) something else?

OK thanks. I'm now kinda confused about your perspective because there seems to be a contradiction:

  • On the one hand, I think you said you were sympathetic to #1 ("There is a God's-eye view of the world" is utter nonsense—it's just a confused notion, like "the set of all sets that don't contain themselves").
  • On the other hand, you seem to be agreeing here that "There is a God's-eye view of the world" is something that might actually be true, and in fact is true in our GoL example.

Anyway, if we go with the second bullet point, i.e. "this is a thing that might be true", then we can label it a "hypothesis" and put it into a Bayesian analysis, right?

To be specific: Let's assume that GoL-person-A formulated the hypothesis: "There is a God's-eye view of my universe, in the form of a list of lists of N² booleans with thus-and-such mathematical properties etc.".

Then, over time, A keeps noticing that every prediction that the hypothesis has ever made, has come true.

So, being a good Bayesian, A's credence on the hypothesis goes up and up, asymptotically approaching 100%.

This strikes me as a sound, non-coincidental reason for A to have reached that (correct) belief. Where do you disagree?
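
A toy version of that Bayesian update, with made-up likelihoods, just to show the shape of the argument:

```python
# Toy Bayesian update; the numbers are made up for illustration.
# H = "there is a complete list-of-lists description of my universe with these properties".
p_obs_given_H = 0.99      # H predicts each observation with high confidence
p_obs_given_not_H = 0.50  # without H, each observation is less expected

credence = 0.5  # prior
for n in range(1, 21):
    # Bayes' rule after another prediction of H comes true.
    numerator = p_obs_given_H * credence
    credence = numerator / (numerator + p_obs_given_not_H * (1 - credence))
    if n % 5 == 0:
        print(f"after {n:2d} confirmed predictions: credence ≈ {credence:.4f}")
# Credence climbs toward 1 asymptotically but never reaches it.
```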

The point is kinda that you can take it to be a hypothesis and have it approach 100% likelihood. That's not possible if that hypothesis is instead assumed to be true. I mean, you might still run the calculations, but they just don't matter, since you couldn't change your mind in such a situation even if you wanted to.

I think the baked-in absurdity of that last statement (since people do in fact reject assumptions) points at why I think there's actually no contradiction in my statements. It's both true that I don't have access to the "real" God's eye view and that I can reconstruct one but will never be able to be 100% sure that I have. Thus I mean to be descriptive of how we find reality: we don't have access to anything other than our own experience, and yet we're able to infer lots of stuff. I'm just trying to be especially careful not to ground anything prior in the chain of epistemic reasoning on something inferred downstream, and that means not being able to predicate certain kinds of knowledge on the existence of an objective reality, because I need those things to get to the point of being able to infer the existence of an objective reality.

Why would the universe need to exist within the universe in order for it to exist? In the GOL example, why would all the bits have to be visible to some particular bit in order for them to exist?

The bits exist but the view of the bits doesn't exist. The map is not the territory.

It took me a day, but I can see your view on this. I think my position is fairly well reasoned through in the other thread, so I'm not going to keep this going unless you want it to (perhaps your position isn't represented elsewhere or something).

Thanks for the concise clarification!

Theories of truth are motivated by questions such as:

Why is 'snow is white' true?

Correspondence theories of truth generally say something like 'snow is white' is true because it maps onto the whiteness of real snow. A general description of such theories is the idea that "truth consists in a relation to reality."

In your proposal, it appears that reality is defined as an individual's experience, while the relation is prediction. The sentence 'snow is white' is true because that sentence predicts (relation) experience (reality). As such, it would be beneficial to my understanding if you either:

  1. Emphasized that you are proposing a particular correspondence theory of truth, rather than an alternative to correspondence theory, OR
  2. More clearly described why this is not a correspondence theory of truth.

The sentence 'snow is white' is true because that sentence predicts (relation) experience (reality).

I'll give my interpretation, although I don't know whether Gordon would agree:

What you're saying here isn't my read. The sentence "Snow is white" is true to the extent that it guides your anticipations. The sentence doesn't predict anything on its own. I read it, I interpret it, it guides my attention in a particular way, and when I go look I find that my anticipations match my experience.

This is important for a handful of reasons. Here are a few:

  • In this theory of truth, things can't be true or false independent of an experiencer. Sentences can't be true or false. Equations can't be true or false. What's true or false is the interaction between a communication and a being who understands.
  • This also means that questions can be true or false (or some mix). The fallacy of privileging the hypothesis gestures in this direction.
  • Things that aren't clearly statements or even linguistic can be various degrees of true or false. An epistemic hazard can have factually accurate content but be false because of how it divorces my anticipations from reality. A piece of music can inspire an emotional shift that has me relating to my romantic partner differently in ways that just start working better. Etc.

So in some sense, this vision of truth aims less at "Do these symbols point in the correct direction given these formal rules?" and more at "Does this matter?"

I haven't done anything like a careful analysis, but at a guess, this shift has some promise for unifying the classical split between epistemic and instrumental rationality. Rationality becomes the art of seeking interaction with reality such that your anticipations keep synching up more and more exactly over time.

I haven't done anything like a careful analysis, but at a guess, this shift has some promise for unifying the classical split between epistemic and instrumental rationality. Rationality becomes the art of seeking interaction with reality such that your anticipations keep synching up more and more exactly over time.

"Unifying epistemic and instrumental reality" doesn't seem desirable to me — winning and world-mapping are different things. We have to choose between them sometimes, which is messy, but such is the nature of caring about more than one thing in life.

World-mapping is also a different thing from prediction-making, though they're obviously related in that making your brain resemble the world can make your brain better at predicting future states of the world — just fast forward your 'map' and see what it says.

The two can come apart, e.g., if your map is wrong but coincidentally gives you the right answer in some particular case — like a clock that's broken and always says it's 10am, but you happen to check it at 10am. Then you're making an accurate prediction on the basis of something other than having an accurate map underlying that prediction. But this isn't the sort of thing to shoot for, or try to engineer; merely accurate predictiveness is a diminished version of world-mapping.

All of this is stuff that (in some sense) we know by experience, sure. But the most fundamental and general theory we use to make sense of truth/accuracy/reasoning needn't be the earliest theory we can epistemically justify, or the most defensible one in the face of Cartesian doubts.

Earliness, foundationalness, and immunity-to-unrealistically-extreme-hypothetical-skepticism are all different things, and in practice the best way to end up with accurate and useful foundations (in my experience) is to 'build them as you go' and refine them based on all sorts of contingent and empirical beliefs we acquire, rather than to impose artificial earliness or un-contingent-ness constraints.

Thanks for your reply here, Val! I'll just add the following:

There's a somewhat technical argument that predictions are not the kind of thing classically pointed at by a correspondence theory of truth, which instead tend to be about setting up a structured relationship between propositions and reality and having some firm ground by which to judge the quality of the relationship. So in that sense subjective probability doesn't really meet the standard of what is normally expected for a correspondence theory of truth since it generally requires, explicitly or implicitly, the possibility of a view from nowhere.

That said, it's a fair point that we're still talking about how some part of the world relates to another, so it kinda looks like truth as predictive power is a correspondence theory. However, since we've cut out metaphysical assumptions, there's nothing for these predictions (something we experience) to relate to other than more experience, so at best we have things corresponding to themselves, which breaks down the whole idea of how a correspondence theory of truth is supposed to work (there's some ground or source (the territory) that we can compare against). A predictive theory of truth is predictions all the way down to unjustified hyperpriors.

I don't get into this above, but this is why I think "truth" in itself is not that interesting; "usefulness to a purpose" is much more in line with how reasoning actually works, and truth is a kind of usefulness to a purpose, and my case above is a small claim that accurate prediction does a relatively good job of describing what people mean when they point at truth that's grounded in the most parsimonious story I know to tell about how we think.

How does subjective probability require the possibility of a view from nowhere?

What happens to the correspondence theory of truth if you find out you're colorblind?

I think... (correct me if I'm wrong, trying to check myself here as well as responding)

If you thought that "The snow is white" was true, but it turns out that the snow is, in fact, red, then your statement was false.

In the anticipation-prediction model, "The snow is white" appears to look more like "I will find 'The snow is white' true to my perceptions", and it is therefore still true.

If you thought that "The snow is white" was true, but it turns out that the snow is, in fact, red, then your statement was false.

The issue is the meaning of a proposition. What does the meaning of the colors correspond to?

By asserting that the statement is wrong, you are going with a definition which relies on something...which colorblind people can't see. In their ontology prior to finding out about colorblindness, the distinction you are making isn't on the map. Without a way to see the colors in question, then provided the difference is purely 'no distinction between the two affected colors at all', knowledge about which is which would have to come from other people. (Though learning to pay attention to other distinguishing features may sometimes help.)


It's not immediately obvious that being colorblind affects perceptions of snow. (Though it might - colors that otherwise seem similar and blend in with each other can stand out more to people with colorblindness.)

A common version is red-green. (From what I've heard, the light that means go in the U.S. looks exactly the same as the light that means stop - by color. But not by position, as long as everything is exactly where it's supposed to be.)

Your perceptions have no relation to the truth (except where the proposition relates to your perceptions) in the correspondence theory of truth, AIUI. Colorblindness has no relation whatsoever to the truth value of "the snow is white".

If you had meant to ask the truth value of "The snow looks white to me", that's an entirely different story (since the proposition is entirely different).

If we give up any assumption that there's an external reality and try to reason purely from our experience, then in what sense can there be any difference between "the snow is white" and "the snow looks white to me"? This is, in part, what I'm trying to get at in the post: the map-territory metaphor creates this kind of confusing situation where it looks an awful lot like there's something like a reality in which, independent of any observer, snow could just be white, whereas part of the point of the post is that this is nonsense: there must always be some observer, they decide what is white and not, and so the truth of snow being white is entirely contingent on the experience of that observer. Since everything we know is parsed through the lens of experience, we have no way to ground truth in anything else, so we cannot preclude the possibility that we only think snow is white because of how our visual system works. In fact, it's quite likely this is so, and we could easily construct aliens who would either disagree or would at least be unable to make sense of what "snow is white" would mean, since they would lack something like a concept of "white" or "snow" and thus be unable to parse the proposition.

Status: overly long.

the map-territory metaphor creates this kind of confusing situation where it looks an awful lot like there's something like a reality in which, independent of any observer, snow could just be white

I think reality exists independently.

However, 'senses' may:

  • Be based on visual processing with a set of cones. (A smaller set of cones will, predictably, make different predictions than a larger set, that is the same, plus one.)
  • Be based on visual processing which can in some way be 'wrong' (first it looks one way. Without it changing, more processing occurs and it resolves properly)
  • Be somewhat subjective. (We look at a rock and see a face. Maybe 'aliens' don't do that. Or maybe they do.)
Since everything we know is parsed through the lens of experience

My point was less about making a claim about an inability to see beyond that. More - we parse things. Actively. That is a part of how we give them meaning, and after giving them meaning, decide they are true. (The process is a bit more circular than that.)

For example: This sentence is false. (It's nonsense.) This sentence is not non-sense. (It's nonsense. It's true! Yeah, but it doesn't mean anything, there's no correspondence to anything.)


we cannot preclude the possibility that we only think snow is white because of how our visual system works.

Yes. Also maybe not.

Yes: it may seem like colors could be a construct to help with stuff like seeing predators, and if there are optical illusions that can fool us, what of it? If the predator in the tree isn't able to catch and kill us, our visual system is doing spectacularly well, even if it's showing us something that 'isn't real'.

Maybe not: Perhaps we can design cameras and measure light. Even if a spectrum of light isn't captured well by our eyes, we can define a system based around measurements even if our eyes can't perceive them.

We can sometimes bootstrap an 'objective' solution.

But that doesn't mean we can always pull it off. If a philosopher asks us to define furniture, we may stumble at 'chair'. You can sit on it. So couches are chairs?

And so philosophical solutions might be devised, by coming up with new categories defined by more straightforward properties: sitting-things (including couches, chairs, and comfortable rocks that are good for sitting). But 'what is a chair' may prove elusive. 'What is a game' may have multiple answers, and people with different tastes may find some fun and others not, perhaps messing with the idea of the 'objective game'. And yet, if certain kinds of people do tend to enjoy it, perhaps there is still something there...


(Meant as a metaphor)

When someone asks for a chair, they may have expectations. If they are from far away, perhaps they will be surprised when they see your chairs. Perhaps there are different styles where they come from, or it's the same styles, just used for different things.

You probably do well enough that an implicit 'this is a chair' is never not true. But also, maybe you don't have a chair, but still find a place they can sit that does just as well.


Maybe people care about purpose more than truth. And both may be context dependent. A sentence can have a different meaning in different contexts.

For a first point, I kind of thought the commenter was asking the question from within a normal theory. If they weren't, I don't know what they were asking really, but I guess hopefully someone else will.


For a second point, I'm not sure your theory is meaningfully true. Although there are issues with the fact that you could be a brain in a jar (or whatever), that doesn't imply there must not be some objective reality somewhere.

Say I have the characters "Hlo elt!" and you have "el,raiy". Also say that you are so far from me that we will never meet.

There is a meaningful message that can be made from interleaving the two sets ("Hello, reality!"). Despite this, we are so far away that no one can ever know this. Is the combination an objective fact? I would call it one, despite the fact that the system can never see it internally, and only a view from outside the system can.

Similarly to the truth, agents inside the system can find some properties of my message, like its length (within some margins). They might even be able to look through a dictionary and find some good guesses as to what it might be. I think this shows that an internal representation of an object is not required for an object to exist in a system.


I started replying to the aliens and the snow bit, but I honestly think I was going to stretch the metaphor too far.


Nothing much. A definition of truth doesn't have to make the truth about everything available to every agent.

This is an interesting take on the map-territory distinction, and I agree in part. Thinking about it now, my only issues with the correspondence theory of truth would be

  1. that it might imply dualism, as you suggested, between a "map" partition of reality and a "territory" partition of reality, and
  2. that it might imply that the territory is something that can be objectively "checked" against the map by some magical method that transcends the process of observation that produced the map in the first place.

These implications, however, seem to be artifacts of human psychology that can be corrected for. As for the metaphysical assumption of an objective, external physical world, I don't see how you can really get around that.

It's true that the only way we get any glimpse at the territory is through our sensory experiences. However, the map that we build in response to this experience carries information that sets a lower bound on the causal complexity of the territory that generates it.

Both the map and the territory are generative processes. The territory generates our sensory experiences, while the map (or rather, the circuitry in our brains) uses something analogous to Bayesian inference to build generative models whose dynamics are predictive of our experiences. In so doing, the map takes on a causal structure that is necessary to predict the hierarchical statistical regularities that it observes in the dynamics of its experiences.

This structure is what is supposed to correspond to the territory. The outputs of the two generative processes (the sensations coming from the territory and the predictions coming from the map) are how correspondence is checked, but they are not the processes themselves.

In other words, the sensory experiences you talked about are Bayesian evidence for the true structure of the territory that generates them, not the territory itself.

My reply here feels weird to me because I think you basically get my point, but you're one inferential gap away from my perspective. I'll see if I can close that gap.

It's true that the only way we get any glimpse at the territory is through our sensory experiences. However, the map that we build in response to this experience carries information that sets a lower bound on the causal complexity of the territory that generates it.

We need not assume there is anything more than experience, though. Through experience we might infer the existence of some external reality that sense data is about (this is a realist perspective), and as you say this gives us evidence that perhaps the world really does have some structure external to our experience, but we need not assume it to be so.

This is perhaps a somewhat subtle distinction, but the point is to shift as much as possible from assumption to inference. If we take an arealist stance and do not assume realism, we may still come to infer it based on the evidence we collect. This is, arguably, better even if most of the time it doesn't produce different results, because now everything about external reality in our thinking exists firmly within our minds rather than outside of them, where we could say nothing about it, and we can make physical claims about the possibility of an external reality rather than metaphysical assumptions about one.

This is perhaps a somewhat subtle distinction, but the point is to shift as much as possible from assumption to inference. If we take an arealist stance and do not assume realism, we may still come to infer it based on the evidence we collect.

I think I can agree with this.

One caveat would be to note that the brain's map-making algorithm does make some implicit assumptions about the nature of the territory. For instance, it needs to assume that it's modeling an actual generative process with hierarchical and cross-modal statistical regularities. It further assumes, based on what I understand about how the cortex learns, that the territory has things like translational equivariance and spatiotemporally local causality.

The cortex (and cerebellum, hippocampus, etc.) has built-in structural and dynamical priors that it tries to map its sensory experiences to, which limits the hypothesis space that it searches when it infers things about the territory. In other words, it makes assumptions.

On the other hand, it is a bit of a theme around here that we should be able to overcome such cognitive biases when trying to understand reality. I think you're on the right track in trying to peel back the assumptions that evolution gave us (even the more seemingly rational ones like splitting map from territory) to ground our beliefs as solidly as possible.

Thanks for this.  I sometimes forget that "predicted experience" is not what everyone means by "map", and "actual experience" not what they mean when they say "territory".

The Map-Territory Distinction

->

The "predicted experience"-"actual experience" distinction

The claim that the map-territory model implies a correspondence theory of truth is boldly stated, and for clarity that is good. I think the "proof" of it is quite implicit, and I kind of find that the claim does not stand. I still think that the phrasing is prone to support the kind of mode of thought that is problematic.

I find that if I want to avoid mention of territory I can do it mostly fine in that framing. I keep my nose stuck on a piece of paper and if my walking doesn't get me surprised I am happy and confident to walk on. I don't need to claim that I am walking in anything.

I suspect that having a privileged entity such as "prediction" in the map is more misleading than it is fixing. That is, a list of directions like "walk 100 m, turn right, walk 200 m" would be just as much a "map" as a "free viewpoint" representation, and having this specific kind of map format be somehow more important than others works in a different dimension than the one the map-territory distinction touches on.

Yeah, privileging prediction doesn't really solve anything. This post is meant to be a bit of a bridge towards a viewpoint that resolves this issue by dropping the privileging of any particular concern, but getting there requires first seeing that a very firm notion of truth based on the assumption of something external can be weakened to a more fluid notion based only on what is experienced (since my bold claim is that that's how the world is anyway, and we're just confused when making metaphysical claims otherwise).

One issue here: often we don't just care about our own experiences, but also those of other people. Of course, if other people were just hallucinations in our head we probably wouldn't care about them.

Actually, it's totally fine if they're hallucinations. Maybe other people are? Regardless, since these hallucinations seem to act in lawful ways that determine what else happens in my experience, it doesn't really matter much if they are "real" or not, and so long as reports of others' experiences are causally intertwined with our experience we should care about them just the same.

What alternative theory of truth can we use if not a correspondence one? There's a few options, but I'll simply consider my favorite here in the interest of time: predicted experience. That is, rather than assuming that there is some external territory to be mapped and that the accuracy of that mapping is how we determine if a mapping (a proposition) is true or not, we can ground truth in our experience since it's the only thing we are really forced to assume (see the below aside for why). Then propositions or beliefs are true to the extent they predict what we experience.

Could you set up a mathematical toy model for this, similar to how there are mathematical toy models for various correspondence theories? Or point to one that already exists? Or I guess just clarify some questions.

In particular, I'm confused about a few things here:

  • What's the type signature of a proposition here? In a correspondence theory, it'd be some logical expression whose atomic parts describe basic features of the world, but this doesn't seem viable here. I guess you could have the atomic parts describe basic features of your observations, but that would lead you to the problems with logical positivism.
  • Can there be multiple incompatible propositions that predict the same experiences, and how does your approach deal with them? In particular, what if they only predict the same experiences within some range of observation, but diverge outside of that? What if you can't get outside, or don't get outside, of the range?
  • How does it deal with things like collider bias? If Nassim Taleb filters for people with high g factor (due to job + interests) and for people who understand long tails (due to his strong opinions on long tails), his experience might become that there is a negative correlation between intelligence and understanding long tails. Would it then be "true" "for him" that there's a tradeoff between g and understanding long tails, even if g is positively correlated with understanding long tails in more representative experiences?
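
A quick simulation makes the collider-bias concern concrete (a sketch with arbitrary effect sizes and an assumed selection rule, purely for illustration):

```python
# Collider-bias sketch: arbitrary numbers, purely illustrative.
import random

random.seed(0)
population = []
for _ in range(100_000):
    g = random.gauss(0, 1)
    tails = 0.5 * g + random.gauss(0, 1)  # positively related to g in the population
    population.append((g, tails))

def correlation(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

# Selection: only people who clear a bar on some mix of intelligence and
# tail-awareness ever show up in the observer's experience.
selected = [(g, t) for g, t in population if g + t > 2.0]

print(round(correlation(population), 2))  # clearly positive in the full population
print(round(correlation(selected), 2))    # clearly negative in the selected sample
```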

What's the type signature of a proposition here?

experience -> experience

  • Can there be multiple incompatible propositions that predict the same experiences, and how does your approach deal with them? In particular, what if they only predict the same experiences within some range of observation, but diverge outside of that? What if you can't get outside, or don't get outside, of the range?

That seems fine. Consistency is often useful, but it's not always. Sometimes completeness is better at the expense of consistency.

  • How does it deal with things like collider bias? If Nassim Taleb filters for people with high g factor (due to job + interests) and for people who understand long tails (due to his strong opinions on long tails), his experience might become that there is a negative correlation between intelligence and understanding long tails. Would it then be "true" "for him" that there's a tradeoff between g and understanding long tails, even if g is positively correlated with understanding long tails in more representative experiences?

Since experience is subjective and I'm implicitly talking about subjective probability here (this is LessWrong; no frequentists allowed 😛), truth does of course become subjective. But "subjective" is kind of meaningless anyway, because there's no such thing as objectivity except insofar as we infer that some things are so common among what we classify in our experience as reports of others' experiences that maybe there's some stuff out there that is the same for all of us.


Nota Bene: Just so there’s no misunderstanding, neither materialism nor idealism need be metaphysical assumptions. Questions about materialism and idealism can be investigated via standard methods if we don’t presuppose them. It’s only that a correspondence theory of truth forces these to become metaphysical assumptions by making our notion of truth depend on assuming something about the nature of reality to ground our criterion of truth

Assuming what about the nature of reality? M/T and CToT only require that there is some sort of territory...they don't say anything specific about it. I can see how it would be a problem to make a 1) non-revisable, 2) a priori assumption about 3) the nature of reality, i.e. metaphysics.

But it's not the "metaphysics" that's the problem: the problem is mostly the "non-reviseable". Like a lot of people, I don't see a problem with reviseable assumptions.

And what are the "standard methods"? As far as I can see, the standard method for figuring the correct metaphysics of the territory is to infer it from the best known map, where "best" is based on empirical evidence, among other things. But if aposteriori/empirical reasoning is relevant to metaphysics, you don't have to reject metaphysics along with your rejection of the apriori.

(I have my own problems with correspondence, but they are more to do with the fact that there is no way of checking correspondence per se).

I don’t think it’s a great choice, though, for a model of how truth-making happens, because as we’ve seen it depends on making unnecessary, metaphysical assumptions.

Unnecessary for whom, or for what purpose? If you only want to predict, and you don't care about how prediction works, then maybe you can manage without metaphysics. But that depends on what your interests are, as an individual.

This lets us remove metaphysical claims about the nature of reality from our epistemology, and by the principle of parsimony we should because additional assumptions are liabilities that make it strictly more likely that we’re mistaken.

Again, M/T doesn't require a lot of metaphysical claims. And parsimony isn't supposed to tell you that nothing is real. Simply giving up on even trying to explain things isn't parsimony. So a refusal to explain how prediction even works isn't a parsimonious explanation of prediction, because it isn't an explanation at all.

I ended up over the years with a definition of truth: "Truth is what it is useful to believe." That is, one ought to believe the set of propositions such that one's utility is maximized, assuming one has a coherent utility function to begin with. But in practice this usually means a statement is true to the extent that it predicts future experience. The other way in which this definition of truth might show up is in helping people coordinate due to sharing unfalsifiable memes, such as in religion.

Ah, I suddenly realize that this comment, while true and kind (two of your comment criteria), is not necessarily useful. Well, anyway, perhaps it's a perspective you'd find useful for reasons I cannot predict.