(This is the first post of a new Sequence, Highly Advanced Epistemology 101 for Beginners, setting up the Sequence Open Problems in Friendly AI. For experienced readers, this first post may seem somewhat elementary; but it serves as a basis for what follows. And though it may be conventional in standard philosophy, the world at large does not know it, and it is useful to be able to explain it compactly. Kudos to Alex Altair for helping in the production and editing of this post and Sequence!)
I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.
-- Danielle Egan
I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.
-- Dale Carrico
What then is truth? A movable host of metaphors, metonymies, and anthropomorphisms: in short, a sum of human relations which have been poetically and rhetorically intensified, transferred, and embellished, and which, after long usage, seem to a people to be fixed, canonical, and binding.
-- Friedrich Nietzsche
The Sally-Anne False-Belief task is an experiment used to tell whether a child understands the difference between belief and reality. It goes as follows:
The child sees Sally hide a marble inside a covered basket, as Anne looks on.
Sally leaves the room, and Anne takes the marble out of the basket and hides it inside a lidded box.
Anne leaves the room, and Sally returns.
The experimenter asks the child where Sally will look for her marble.
Children under the age of four say that Sally will look for her marble inside the box. Children over the age of four say that Sally will look for her marble inside the basket.
(Attributed to: Baron-Cohen, S., Leslie, A. M. and Frith, U. (1985) 'Does the autistic child have a "theory of mind"?', Cognition, vol. 21, pp. 37–46.)
Human children over the age of (typically) four first begin to understand what it means for Sally to lose her marbles - for Sally's beliefs to stop corresponding to reality. A three-year-old has a model only of where the marble is. A four-year-old is developing a theory of mind; they separately model where the marble is and where Sally believes the marble is, so they can notice when the two conflict - when Sally has a false belief.
Any meaningful belief has a truth-condition, some way reality can be which can make that belief true, or alternatively false. If Sally's brain holds a mental image of a marble inside the basket, then, in reality itself, the marble can actually be inside the basket - in which case Sally's belief is called 'true', since reality falls inside its truth-condition. Or alternatively, Anne may have taken out the marble and hidden it in the box, in which case Sally's belief is termed 'false', since reality falls outside the belief's truth-condition.
The mathematician Alfred Tarski once described the notion of 'truth' via an infinite family of truth-conditions:
The sentence 'snow is white' is true if and only if snow is white.
The sentence 'the sky is blue' is true if and only if the sky is blue.
When you write it out that way, it looks like the distinction might be trivial - indeed, why bother talking about sentences at all, if the sentence looks so much like reality when both are written out as English?
But when we go back to the Sally-Anne task, the difference looks much clearer: Sally's belief is embodied in a pattern of neurons and neural firings inside Sally's brain, three pounds of wet and extremely complicated tissue inside Sally's skull. The marble itself is a small simple plastic sphere, moving between the basket and the box. When we compare Sally's belief to the marble, we are comparing two quite different things.
(Then why talk about these abstract 'sentences' instead of just neurally embodied beliefs? Maybe Sally and Fred believe "the same thing", i.e., their brains both have internal models of the marble inside the basket - two brain-bound beliefs with the same truth condition - in which case the thing these two beliefs have in common, the shared truth condition, is abstracted into the form of a sentence or proposition that we imagine being true or false apart from any brains that believe it.)
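The idea of a shared truth-condition can be sketched in code. This is my own toy illustration, not something from the post: a truth-condition is represented as a predicate over possible world-states, and two physically distinct beliefs (here, distinct objects in memory) can point at the same truth-condition.

```python
def marble_in_basket(world):
    """Truth-condition: the set of possible worlds where the marble is in the basket."""
    return world["marble_location"] == "basket"

# Sally's and Fred's beliefs are distinct physical structures (distinct dicts),
# but they share the same truth-condition - the thing we abstract into a 'sentence'.
sally_belief = {"content": marble_in_basket}
fred_belief = {"content": marble_in_basket}

# Reality: Anne has moved the marble into the box.
territory = {"marble_location": "box"}

# Both beliefs are false, because reality falls outside the shared truth-condition.
print(sally_belief["content"](territory))  # False
print(fred_belief["content"](territory))   # False
```

Evaluating the predicate against the world-state is the 'comparison of map to territory' - which, as the post goes on to note, in real life always happens inside some particular brain.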
Some pundits have panicked over the point that any judgment of truth - any comparison of belief to reality - takes place inside some particular person's mind; and indeed seems to just compare someone else's belief to your belief:
So is all this talk of truth just comparing other people's beliefs to our own beliefs, and trying to assert privilege? Is the word 'truth' just a weapon in a power struggle?
For that matter, you can't even directly compare other people's beliefs to your own beliefs. You can only internally compare your beliefs about someone else's belief to your own belief - compare your map of their map, to your map of the territory.
Similarly, to say of your own beliefs that the belief is 'true' just means you're comparing your map of your map to your map of the territory. People are usually not mistaken about what they themselves believe - though there are certain exceptions to this rule - so the map of the map is usually accurate, i.e., people are usually right about what they believe:
And so saying 'I believe the sky is blue, and that's true!' typically conveys the same information as 'I believe the sky is blue' or just saying 'The sky is blue' - namely, that your mental model of the world contains a blue sky.
If the above is true, aren't the postmodernists right? Isn't all this talk of 'truth' just an attempt to assert the privilege of your own beliefs over others, when there's nothing that can actually compare a belief to reality itself, outside of anyone's head?
(A 'meditation' is a puzzle that the reader is meant to attempt to solve before continuing. It's my somewhat awkward attempt to reflect the research which shows that you're much more likely to remember a fact or solution if you try to solve the problem yourself before reading the solution; succeed or fail, the important thing is to have tried first. This also reflects a problem Michael Vassar thinks is occurring, which is that since LW posts often sound obvious in retrospect, it's hard for people to visualize the diff between 'before' and 'after'; and this diff is also useful to have for learning purposes. So please try to say your own answer to the meditation - ideally whispering it to yourself, or moving your lips as you pretend to say it, so as to make sure it's fully explicit and available for memory - before continuing; and try to consciously note the difference between your reply and the post's reply, including any extra details present or missing, without trying to minimize or maximize the difference.)
The reply I gave to Dale Carrico - who declaimed to me that he knew what it meant for a belief to be falsifiable, but not what it meant for beliefs to be true - was that my beliefs determine my experimental predictions, but only reality gets to determine my experimental results. If I believe very strongly that I can fly, then this belief may lead me to step off a cliff, expecting to be safe; but only the truth of this belief can possibly save me from plummeting to the ground and ending my experiences with a splat.
Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'.
You won't get a direct collision between belief and reality - or between someone else's beliefs and reality - by sitting in your living-room with your eyes closed. But the situation is different if you open your eyes!
Consider how your brain ends up knowing that its shoelaces are untied:
- A photon departs from the Sun, and flies to the Earth and through Earth's atmosphere.
- Your shoelace absorbs and re-emits the photon.
- The reflected photon passes through your eye's pupil and toward your retina.
- The photon strikes a rod cell or cone cell, or to be more precise, it strikes a photoreceptor, a form of vitamin A known as retinal, which undergoes a change in its molecular shape (rotating around a double bond) powered by absorption of the photon's energy. A bound protein called an opsin undergoes a conformational change in response, and this further propagates to a neural cell body which pumps ions and increases its polarization.
- The gradual polarization change is propagated to a bipolar cell and then a ganglion cell. If the ganglion cell's polarization goes over a threshold, it sends out a nerve impulse, a propagating electrochemical phenomenon of polarization-depolarization that travels through the brain at between 1 and 100 meters per second. Now the incoming light from the outside world has been transduced to neural information, commensurate with the substrate of other thoughts.
- The neural signal is preprocessed by other neurons in the retina, further preprocessed by the lateral geniculate nucleus in the middle of the brain, and then, in the visual cortex located at the back of your head, reconstructed into an actual little tiny picture of the surrounding world - a picture embodied in the firing frequencies of the neurons making up the visual field. (A distorted picture, since the center of the visual field is processed in much greater detail - i.e. spread across more neurons and more cortical area - than the edges.)
- Information from the visual cortex is then routed to the temporal lobes, which handle object recognition.
- Your brain recognizes the form of an untied shoelace.
And so your brain updates its map of the world to include the fact that your shoelaces are untied. Even if, previously, it expected them to be tied! There's no reason for your brain not to update if politics aren't involved. Once photons heading into the eye are turned into neural firings, they're commensurate with other mind-information and can be compared to previous beliefs.
Belief and reality interact all the time. If the environment and the brain never touched in any way, we wouldn't need eyes - or hands - and the brain could afford to be a whole lot simpler. In fact, organisms wouldn't need brains at all.
So, fine, belief and reality are distinct entities which do intersect and interact. But to say that we need separate concepts for 'beliefs' and 'reality' doesn't get us to needing the concept of 'truth', a comparison between them. Maybe we can just separately (a) talk about an agent's belief that the sky is blue and (b) talk about the sky itself. Instead of saying, "Jane believes the sky is blue, and she's right", we could say, "Jane believes 'the sky is blue'; also, the sky is blue" and convey the same information about what (a) we believe about the sky and (b) what we believe Jane believes. We could always apply Tarski's schema - "The sentence 'X' is true iff X" - and replace every instance of alleged truth by talking directly about the truth-condition, the corresponding state of reality (i.e. the sky or whatever). Thus we could eliminate that bothersome word, 'truth', which is so controversial to philosophers, and misused by various annoying people.
Suppose you had a rational agent, or for concreteness, an Artificial Intelligence, which was carrying out its work in isolation and certainly never needed to argue politics with anyone. The AI knows that "My model assigns 90% probability that the sky is blue"; it is quite sure that this probability is the exact statement stored in its RAM. Separately, the AI models that "The probability that my optical sensors will detect blue out the window is 99%, given that the sky is blue"; and it doesn't confuse this proposition with the quite different proposition that the optical sensors will detect blue whenever it believes the sky is blue. So the AI can definitely differentiate the map and the territory; it knows that the possible states of its RAM storage do not have the same consequences and causal powers as the possible states of sky.
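The AI's bookkeeping can be sketched numerically. In this toy example, the 0.9 prior and the 0.99 likelihood come from the text above, but the sensor's 5% false-positive rate is my own assumption for the sake of the sketch. The map - a stored probability - determines the AI's prediction; the sensor's actual reading, determined by the territory, drives the update:

```python
# Toy Bayesian agent, illustrating the map (stored probabilities) vs. the
# territory (sensor results). Numbers beyond the post's 0.9 and 0.99 are
# assumptions made for the example.

prior_blue = 0.90            # P(sky is blue): the AI's map
p_sensor_given_blue = 0.99   # P(sensor reads blue | sky is blue), from the text
p_sensor_given_not = 0.05    # assumed false-positive rate of the sensor

# The belief determines the experimental prediction:
p_sensor_blue = (prior_blue * p_sensor_given_blue
                 + (1 - prior_blue) * p_sensor_given_not)

# Reality determines the experimental result; suppose the sensor reads blue.
# Bayes' rule then shifts the map toward the territory:
posterior_blue = prior_blue * p_sensor_given_blue / p_sensor_blue

print(round(p_sensor_blue, 3))   # 0.896
print(round(posterior_blue, 3))  # 0.994
```

Note that the stored numbers and the sky itself remain entirely distinct things throughout; the question the meditation asks is whether the AI ever needs a general word for the relation between them.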
But does this AI ever need a concept for the notion of truth in general - does it ever need to invent the word 'truth'? Why would it work better if it did?
Meditation: If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for 'truth'?
Reply: The abstract concept of 'truth' - the general idea of a map-territory correspondence - is required to express ideas such as:
Generalized across possible maps and possible cities, if your map of a city is accurate, navigating according to that map is more likely to get you to the airport on time.
To draw a true map of a city, someone has to go out and look at the buildings; there's no way you'd end up with an accurate map by sitting in your living-room with your eyes closed trying to imagine what you wish the city would look like.
True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.
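That last point - that raising our credence in hypotheses which predict correctly makes the model incrementally more true - can be sketched as a small simulation. This is my own toy illustration with stipulated data, not anything from the post: three hypotheses about a coin's bias, updated on a run of observed flips.

```python
# Toy illustration: credence flows toward hypotheses that predict well.
# Hypotheses: the coin's bias toward heads is 0.25, 0.5, or 0.75.
# Stipulated data: 150 heads and 50 tails, as if the true bias were about 0.75.

hypotheses = {0.25: 1 / 3, 0.5: 1 / 3, 0.75: 1 / 3}  # uniform prior credence
observations = [1] * 150 + [0] * 50                   # 1 = heads, 0 = tails

for flip in observations:
    # Each hypothesis's predicted probability for this observation:
    likelihoods = {h: (h if flip else 1 - h) for h in hypotheses}
    total = sum(hypotheses[h] * likelihoods[h] for h in hypotheses)
    # Bayes: credence rises for hypotheses that predicted the actual result.
    hypotheses = {h: hypotheses[h] * likelihoods[h] / total for h in hypotheses}

best = max(hypotheses, key=hypotheses.get)
print(best)  # 0.75 ends up with nearly all the credence
```

The map ends up concentrated on the hypothesis closest to the (stipulated) territory - which is the general lesson the word 'truth' lets us state.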
This is the main benefit of talking and thinking about 'truth' - that we can generalize rules about how to make maps match territories in general; we can learn lessons that transfer beyond particular skies being blue.
Complete philosophical panic has turned out not to be justified (it never is). But there is a key practical problem that results from our internal evaluation of 'truth' being a comparison of a map of a map, to a map of reality: On this schema it is very easy for the brain to end up believing that a completely meaningless statement is 'true'.
Some literature professor lectures that the famous authors Carol, Danny, and Elaine are all 'post-utopians', which you can tell because their writings exhibit signs of 'colonial alienation'. For most college students the typical result will be that their brain's version of an object-attribute list will assign the attribute 'post-utopian' to the authors Carol, Danny, and Elaine. When the subsequent test asks for "an example of a post-utopian author", the student will write down "Elaine". What if the student writes down, "I think Elaine is not a post-utopian"? Then the professor models thusly...
The sentence "Elaine is a post-utopian" is true if and only if Elaine is a post-utopian.
...and marks the answer false.
Now of course it could be that this term does mean something (even though I made it up). It might even be that, although the professor can't give a good explicit answer to "What is post-utopianism, anyway?", you can nonetheless take many literary professors and separately show them new pieces of writing by unknown authors and they'll all independently arrive at the same answer, in which case they're clearly detecting some sensory-visible feature of the writing. We don't always know how our brains work, and we don't always know what we see, and the sky was seen as blue long before the word "blue" was invented; for a part of your brain's world-model to be meaningful doesn't require that you can explain it in words.
On the other hand, it could also be the case that the professor learned about "colonial alienation" by memorizing what to say to his professor. It could be that the only person whose brain assigned a real meaning to the word is dead. So that by the time the students are learning that "post-utopian" is the password when hit with the query "colonial alienation?", both phrases are just verbal responses to be rehearsed, nothing but an answer on a test.
The two phrases don't feel "disconnected" individually because they're connected to each other - post-utopianism has the apparent consequence of colonial alienation, and if you ask what colonial alienation implies, it means the author is probably a post-utopian. But if you draw a circle around both phrases, they don't connect to anything else. They're floating beliefs not connected with the rest of the model. And yet there's no internal alarm that goes off when this happens. Just as "being wrong feels like being right" - just as having a false belief feels the same internally as having a true belief, at least until you run an experiment - having a meaningless belief can feel just like having a meaningful belief.
(You can even have fights over completely meaningless beliefs. If someone says "Is Elaine a post-utopian?" and one group shouts "Yes!" and the other group shouts "No!", they can fight over having shouted different things; it's not necessary for the words to mean anything for the battle to get started. Heck, you could have a battle over one group shouting "Mun!" and the other shouting "Fleem!" More generally, it's important to distinguish the visible consequences of the professor-brain's quoted belief (students had better write down a certain thing on his test, or they'll be marked wrong) from the proposition that there's an unquoted state of reality (Elaine actually being a post-utopian in the territory) which has visible consequences.)
One classic response to this problem was verificationism, which held that the sentence "Elaine is a post-utopian" is meaningless if it doesn't tell us which sensory experiences we should expect to see if the sentence is true, and how those experiences differ from the case if the sentence is false.
But then suppose that I transmit a photon aimed at the void between galaxies - heading far off into space, away into the night. In an expanding universe, this photon will eventually cross the cosmological horizon where, even if the photon hit a mirror reflecting it squarely back toward Earth, the photon would never get here because the universe would expand too fast in the meanwhile. Thus, after the photon goes past a certain point, there are no experimental consequences whatsoever, ever, to the statement "The photon continues to exist, rather than blinking out of existence."
And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true. And this sort of question can have important policy consequences: suppose we were thinking of sending off a near-light-speed colonization vessel as far away as possible, so that it would be over the cosmological horizon before it slowed down to colonize some distant supercluster. If we thought the colonization ship would just blink out of existence before it arrived, we wouldn't bother sending it.
It is both useful and wise to ask after the sensory consequences of our beliefs. But it's not quite the fundamental definition of meaningful statements. It's an excellent hint that something might be a disconnected 'floating belief', but it's not a hard-and-fast rule.
You might next try the answer that for a statement to be meaningful, there must be some way reality can be which makes the statement true or false; and that since the universe is made of atoms, there must be some way to arrange the atoms in the universe that would make a statement true or false. E.g. to make the statement "I am in Paris" true, we would have to move the atoms comprising myself to Paris. A literateur claims that Elaine has an attribute called post-utopianism, but there's no way to translate this claim into a way to arrange the atoms in the universe so as to make the claim true, or alternatively false; so it has no truth-condition, and must be meaningless.
Indeed there are claims where, if you pause and ask, "How could a universe be arranged so as to make this claim true, or alternatively false?", you'll suddenly realize that you didn't have as strong a grasp on the claim's truth-condition as you believed. "Suffering builds character", say, or "All depressions result from bad monetary policy." These claims aren't necessarily meaningless, but they're a lot easier to say than to visualize the universe that makes them true or false. Just like asking after sensory consequences is an important hint to meaning or meaninglessness, so is asking how to configure the universe.
But if you say there has to be some arrangement of atoms that makes a meaningful claim true or false...
Then the theory of quantum mechanics would be meaningless a priori, because there's no way to arrange atoms to make the theory of quantum mechanics true.
And when we discovered that the universe was not made of atoms, but rather quantum fields, all meaningful statements everywhere would have been revealed as false - since there'd be no atoms arranged to fulfill their truth-conditions.
Meditation: What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?
- Meditation Answers - (A central comment for readers who want to try answering the above meditation (before reading whatever post in the Sequence answers it) or read contributed answers.)
- Mainstream Status - (A central comment where I say what I think the status of the post is relative to mainstream modern epistemology or other fields, and people can post summaries or excerpts of any papers they think are relevant.)
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "Skill: The Map is Not the Territory"
I just realized that since I posted two comments that were critical over a minor detail, I should balance it out by mentioning that I liked the post - it was indeed pretty elementary, but it was also clear, and I agree about it being considerably better than The Simple Truth. And I liked the koans - they should be a useful device to the readers who actually bother to answer them.
was a cute touch.
Thank you for being positive.
I've been recently thinking about this, and noticed that despite things like "why our kind can't cooperate", we still focus on criticisms of minor points, even when there are major wins to be celebrated.
(The 'Mainstream Status' comment is intended to provide a quick overview of what the status of the post's ideas are within contemporary academia, at least so far as the poster knows. Anyone claiming a particular paper precedents the post should try to describe the exact relevant idea as presented in the paper, ideally with a quote or excerpt, especially if the paper is locked behind a paywall. Do not represent large complicated ideas as standard if only a part is accepted; do not represent a complicated idea as precedented if only a part is described. With those caveats, all relevant papers and citations are much solicited! Hopefully comment-collections like these can serve as a standard link between LW presentations and academic ones.)
The correspondence theory of truth is the first position listed in the Stanford Encyclopedia of Philosophy, which is my usual criterion for saying that something is a solved problem in philosophy. Clear-cut simple visual illustration inspired by the Sally-Anne experimental paradigm is not something I have previously seen associated with it, so the explanation in this post is - I hope - an improvement over what's standard.
Alfred Tarski is a famous mat... (read more)
This is a great post. I think the presentation of the ideas is clearer and more engaging than the sequences, and the cartoons are really nice. Wild applause for the artist.
I have a few things to say about the status of these ideas in mainstream philosophy, since I'm somewhat familiar with the mainstream literature (although admittedly it's not the area of my expertise). I'll split up my individual points into separate comments.
Summary of my point: Tarski's biconditionals are not supposed to be a definition of truth. They are supposed to be a test of the adequacy of a proposed definition of truth. Proponents of many different theories claim that their theory passes this test of adequacy, so to identify Tarski's criterion with the correspondence theory is incorrect, or at the very least, a highly controversial claim that requires defense. What follows is a detailed account of why the biconditionals can't be an adequate definition of truth, and of what Tarski's actual theory of truth is.
Describing Tarski's biconditionals as a definition of truth or a theory of truth is misleading. The relevant paper is ... (read more)
I've slightly edited the OP to say that Tarski "described" rather than "defined" truth - I wish I could include more to reflect this valid point (indeed Tarski's theorems on truth are a lot more complicated and so are surrounding issues, no language can contain its own truth-predicate, etc.), but I think it might be a distraction from the main text. Thank you for this comment though!
Depends on what you mean by "explicitly". Many correspondence theorists believe that an adequate understanding of "correspondence" requires an understanding of reference -- how parts of our language are associated with parts of the world. I think this sort of idea stems from trying to fill out Tarski's (actual) definition of truth, which I discussed in another comment. The hope is that a good theory of reference will fill out Tarski's obscure notion of satisfaction, and thereby give some substance to his definition of truth in terms of satisfaction.
Anyway, there was a period when a lot of philosophers believed, following Saul Kripke and Hilary Putnam, that we can understan... (read more)
Speaking as the author of Eliezer's Sequences and Mainstream Academia...
Off the top of my head, I also can't think of a philosopher who has made an explicit connection from the correspondence theory of truth to "there are causal processes producing map-territory correspondences" to "you have to look at things to draw accurate maps of them..."
But if this connection has been made explicitly, I would expect it to be made by someone who accepts both the correspondence theory and "naturalized epistemology", often summed up in a quote from Quine:
(Originally, Quine's naturalized epistemology accounted only for this descriptive part of epistemology, and neglected the normative part, e.g. truth conditions. In the 80s Quine started saying that the normative part entered into naturalized epistemology through "the t... (read more)
It's not that clear to me in what sense mainstream academia is a unified thing which holds positions, even regarding questions such as "what fields are legitimate". Saying that something is known in mainstream academia seems suspiciously like saying that something is "encoded in the matter in my shoelace, given the right decryption schema". OTOH, it's highly meaningful to say that something is discoverable by someone with competent "google-fu".
OK, I defended the tweet that got this response from Eliezer as the sort of rhetorical flourish that gets people to actually click on the link. However, it looks like I also underestimated how original the sequences are - I had really expected this sort of thing to mirror work in mainstream philosophy.
I don't like the "post-utopian" example. I can totally expect differing sensory experiences depending on whether a writer is post-utopian or not. For example, if they're post-utopian, when reading their biography I would more strongly expect to read that they were into utopian ideas when young but later changed their mind. And when reading their works, I would more strongly expect to see themes of the imperfectability of the world and weltschmerz.
Apparently even computers agree with those judgments (or at least cluster "impressionists" in their own group - I didn't read the paper, but I expect that the cluster labels were added manually).
ETA: Got the paper. Excerpts:... (read more)
I'm no art geek, but Impressionism is an art "movement" from the late 1800s. A variety of artists (Monet, Renoir, etc) began using similar visual styles that influenced what they decided to paint and how they depicted images.
Art critics think that artistic "movements" are a meaningful way of analyzing paintings, approximately at the level of usefulness that a biologist might assign to "species" or "genus." Or a historian of philosophy might talk about the school of thought known today as "Logical Positivism."
Do you think movements is a reasonable unit of analysis (in art, in literature, in philosophy)? If no, why not? If yes, why are you so hostile to the usage of labels like "post-utopian" or "post-colonialist"?
Maybe you should reconsider picking on an entire field you know nothing about?
I'm not saying this to defend postmodernism, which I know almost nothing about, but to point out that the Sokal hoax is not really enough reason to reject an entire field (any more than the Bogdanov affair is for physics).
I'm pointing out that you're neglecting the virtues of curiosity and humility, at least.
And this is leaving aside that there is no particular reason for "post-utopian" to be a postmodern as opposed to modern term; categorizing writers into movements has been a standard tool of literary analysis for ages (unsurprisingly, since people love putting things into categories).
At this point, getting in cheap jabs at post-modernism and philosophy wherever possible is a well-honored LessWrong tradition. Can't let the Greens win!
I don't think you can avoid the criticism of "literary terms actually do tend to make one expect differing sensory experiences, and your characterization of the field is unfair" simply by inventing a term which isn't actually in use. I don't know whether "post-utopian" is actually a standard term, but yli's comment doesn't depend on it being one.
Has anyone ever told you your writing style is Alucentian to the core? Especially in the way your municardist influences constrain the transactional nuances of your structural ephamthism.
Alucentian, municardist, and structural ephamthism don't mean anything, though Municard is trademarked. Between Louise Rosenblatt's Transactional Theory in literary criticism and Transactional analysis in psychotherapy, there's probably someone who could define "transactional nuances" for you, though it's certainly not a standard phrase.
Coming up with a made up word will not solve this problem. If the word describes the content of the author's stories then there will be sensory experiences that a reader can expect when reading those stories.
She should hand back the paper with the note, "What do you mean by 'mean'?"
There are some kinds of truths that don't seem to be covered by truth-as-correspondence-between-map-and-territory. (Note: This general objection is well known and is given as Objection 1 in SEP's entry on Correspondence Theory.) Consider:
Maybe the first two just argue for Platonism and modal realism (although I note that Eliezer explicitly disclaimed being a modal realist). The last one is most problematic to me, because some kinds of normative statements seem to be talking about what one should do given some assumed-to-be-accurate map, and not about the map itself. For example, "You should two-box in Newcomb's problem." If I say "Alice has a false belief that she should two-box in Newcomb's problem" it doesn't seem like I'm saying that her map doesn't correspond to the territory.
So, a couple of questions that seem open to me: Do we need other notions of truth, besides correspondence between map and territory? If so, is there a more general notion of truth that covers all of these as special cases?
I think a more general notion of truth could be defined as correspondence between a map and any structure. If you define a structure using axioms and then reference that structure, you can talk about the correspondence properties of that reference. This at least covers both mathematical structures and physical reality.
The problem with Alice's belief is that it is incomplete. It's like saying "I believe that 3 is greater than" (end of sentence).
Even incomplete sentences can work in some contexts where people know how to interpret them. For example, if we had a convention that all sentences ending with "greater than" have to be interpreted as "greater than zero", then in a given context the sentence "3 is greater than" makes sense, and is true. It just does not make sense outside of this context. Without context, it's not a logical proposition, but rather a proposition template.
Similarly, the sentence "you should X" is meaningful in contexts which provide additional explanation of what "should" means. For a consequentialist, the meaning of "you should" is "maximizes your utility". For a theist, it could mean "makes Deity happy". For both of them, the meaning of "should" is obvious, and within their contexts, they are right. The se... (read more)
I do wish that you would say "relativists" or the like here. Many of your readers will know the word "postmodernist" solely as a slur against a rival tribe.
Actually, "relativist" isn't a lot better, because it's still pretty clear who's meant, and it's a very charged term in some political discussions.
I think it's a bad rhetorical strategy to mock the cognitive style of a particular academic discipline, or of a particular school within a discipline, even if you know all about that discipline. That's not because you'll convert people who are steeped in the way of thinking you're trying to counter, but because you can end up pushing the "undecided" to their side.
Let's say we have a bright young student who is, to oversimplify, on the cusp of going down either the path of Good ("parsimony counts", "there's an objective way to determine what hypothesis is simpler", "it looks like there's an exterior, shared reality", "we can improve our maps"...) or the path of Evil ("all concepts start out equal", "we can make arbitrary maps", "truth is determined by politics" ...). Well, that bright young student isn't a perfectly rational being. If the advocates for Good look like they're being jerks and mocking the advocates for Evil, that may be enough to push that person down the path of Evil.
Wulky Wilkinson is the mind killer. Or so it seems to me.
I presume that the point with "social" is that, even if some political theories are better than others, the extent to which different theories are accepted or believed by the population at large is also strongly affected by social factors. Which, again, is an idea that has been discussed on LW a lot, and is generally accepted here...
Also, (guessing from my discussions with smart humanities people) it's saying that supposedly neutral and impartial research by scientists will be affected by a large number of (social) biases, some of them conscious, some of them unconscious, and this can have a big impact on which theory is accepted as the best and the most "experimentally tested" one. Again, not exactly a heretical belief on LW.
Ironically, I always thought that many of the posts on LW were using scientific data to show what my various humanities friends had been saying all along.
Psychology has made significant strides in response to criticism from the post-modernists. The post-modern criticism of mental health treatment is much less biting than it once was.
Still, for halo effect reasons, we should be careful.
The larger point is that Eliezer's reference to post-modernism is simply a Boo Light and deserves to be called out as such.
This post is better than the simple truth and I will be linking to it more often, even though this isn't as funny.
EDIT: Reworded in praise-first style.
The other day Yvain was reading aloud from Feser and I said I wished Feser would read The Simple Truth. I don't think this would help quite as much.
The Simple Truth sought to convey the intuition that truth is not just a property of propositions in brains, but of any system successfully entangled with another system. Once the shepherd's leveled up a bit in his craftsmanship, the sheep can pull aside the curtain, drop a pebble into the bucket, and the level in the bucket will remain true without human intervention.
I also really enjoyed this post, and specifically thought that the illustrations were much nicer than what's been done before.
However, I did notice that out of all the illustrations that were made for this post, there were about 8 male characters drawn, and 0 females. (The first picture of the Sally-Anne test did portray females, but it was taken from another source, not drawn for this post like the others.) In the future, it might be a good idea to portray both men AND women in your illustrations. I know that you personally use the "flip a coin" method for gender assignment when you can, but it doesn't seem like the illustrator does. (There is only about a 0.4% chance that eight independent coin flips all come up "male".)
The specs given to the illustrator were stick figures. I noticed the male prevalence and requested some female versions or replacement with actual stick figures.
nit to pick: Rod and cone cells don't send action potentials.
Photoreceptor cells produce graded potential, not action potential. It goes through a bipolar cell and a ganglion cell before finally spiking, in a rather processed form.
Koan answers here for:
I dislike the "post utopian" example, and here's why:
Language is pretty much a set of labels. When we call something "white", we are saying it has some property of "whiteness." NOW we can discuss wavelengths and how light works, or whatnot, but 200 years ago, they had no clue. They could still know that snow is white, though. At the same time, even with our knowledge of how colors work, we can still have difficulties knowing exactly where the label "white" ends, and grey or yellow begins.
Say I'm carving up music-space. I can pretty easily classify the differences between Classical and Rap, in ways that are easy to follow. I could say that classical features a lot of instrumentation, and rap features rhythmic language, or something. But if I had lots of people spending all their lives studying music, they're going to end up breaking music space into much smaller pieces. For example, dub step and house.
Now, I can RECOGNIZE dubstep when I hear it, but if you asked me to teach you what it was, I would have difficulties. I couldn't necessarily say "It's the one that goes, like, WOPWOPWOPWOP iiinnnnnggg" if I'm a learned professor, so I'... (read more)
I think Eliezer is taking it as a given that English college professors who talk like that are indeed talking without connection to anticipated experience. This may not play effectively to those he is trying to teach, and as you say, may not even be true.
There's a sense in which a lot of fuzzy claims are meaningless: for example, it would be hard for a computer to evaluate "Socrates is kind" even if the computer could easily evaluate more direct claims like "Socrates is taller than five feet". But "kind" isn't really meaningless; it would just be a lot of work to establish exactly what goes into saying "kind" and exactly where the cutoff point between "kind" and "not so kind" is.
I agree that literary critical terms are fuzzy in the same sense as "kind", but I don't think they're necessarily any more fuzzy. For example, replacing "post-utopian" with its likely inspiration "post-colonial", I don't know much about literature, but I feel pretty okay designating Salman Rushdie as "post-colonial" (since his books very often take place against the backdrop of the issues surrounding British decolonization of India) and J. K. Rowling as "not post-colonial" (since her books don't deal with issues surrounding decolonization at all.)
Likewise, even though "post-utopian" was chosen specifically to be meaningless, I can say with... (read more)
I liked your comment and have a half-formed metaphor for you to either pick apart or develop:
LW/rationalist types tend towards hard sciences. This requires more System 2 reasoning. Their fields are like computer programs. Every step makes sense, and is understood.
Humanities tends toward more System 1 pattern recognition. This is more akin to a neural network. Even if you are getting the "right" answer, it is coming out of a black box.
Because the rationalist types can't see the algorithm, they assume it can't be "right".
I like your idea and upvoted the comment, but I don't know enough about neural networks to have a meaningful opinion on it.
I can't resist. I think you should read Moby Dick. Whiteness in that novel is not used as any kind of symbol for good:
If you want to talk about racism and Moby Dick, talk about Queequeg!
Not that white animals aren't often associated with good things, but this is not unique in western culture:
If that's your criterion, you could use some stand-in for computer science terms that have no meaning.
I think you are playing to what you assume are our prejudices.
Suppose X is a meaningless predicate from a humanities subject. Suppose you used it, not a simulacrum. If it's actually meaningless by the definition I give elsewhere in the thread, nobody will be able to name any Y such that p(X|Y) differs from p(X|¬Y) after a Bayesian update. Do you actually expect that, for any significant number of terms in humanities subjects, you would find no Y, even after grumpy defenders of X popped up in the thread? Or did you choose a made-up term so as to avoid flooding the thread with Y-proponents? If you expect people to propose candidates for Y, you aren't really expecting X to be meaningless.
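As a toy illustration of the criterion in this comment (my sketch, not the commenter's; the joint distribution and propositions are invented for the example): if X is probabilistically independent of every other proposition, then p(X|Y) = p(X|¬Y) for every Y, which is exactly the sense in which no update can ever move it.

```python
from itertools import product

# Toy joint distribution over three binary propositions (X, Y1, Y2).
# X is built to be independent of everything else: its marginal is 0.5
# no matter what Y1 and Y2 are, so X carries no information.
joint = {}
for x, y1, y2 in product([True, False], repeat=3):
    p_rest = (0.7 if y1 else 0.3) * (0.6 if y2 else 0.4)
    joint[(x, y1, y2)] = 0.5 * p_rest

def prob(pred):
    """Probability that predicate pred holds of a world."""
    return sum(p for w, p in joint.items() if pred(w))

def cond(pred_a, pred_b):
    """Conditional probability p(A|B)."""
    return prob(lambda w: pred_a(w) and pred_b(w)) / prob(pred_b)

X = lambda w: w[0]
Y1 = lambda w: w[1]

# p(X|Y1) equals p(X|not Y1): no Y moves our belief in X.
print(cond(X, Y1))                    # ≈ 0.5
print(cond(X, lambda w: not Y1(w)))   # ≈ 0.5
```

A "meaningful" predicate, by contrast, would be correlated with at least one other proposition, so some conditioning would shift its probability.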
The Sokal hoax only proves one journal can be tricked by fake jargon. Not that bona fide jargon is meaningless.
I'm sure there's a lot of nonsense, but "post-utopian" appears to have a quite ordinary sense, despite the lowness of the signal to noise ratio of some of those hits. A post-utopian X (X = writer, architect, hairdresser, etc.) is one who is working after, and in reaction against, a period of utopianism, i.e. belief in the perfectibility of the world by man. Post-utopians today are the people who believe that the promises of science have been found hollow, and ruin and destruction are all we have to look forward to.
We're all utopians here.
What would he have to say? The Sokal Hoax was about social engineering, not semantics.
There is the literature professor's belief, the student's belief, and the sentence "Carol is 'post-utopian'". While the sentence can be applied to both beliefs, the beliefs themselves are quite different beasts. The professor's belief is something that carves literature space in a way most other literature professors do. Totally meaningful. The student's belief, on the other hand, is just a label over a set of authors the student has scarcely read. Going a level deeper, we can find an explanation for this label, which turns out to be just another label ("colonial alienation"), and then it stops. From Eliezer's main post (emphasis mine):
A set of beliefs is not like a bag of sand, individual beliefs unconnected with each other, about individual things. They are connected to each other by logical reasoning, like a lump of sandstone. Not all beliefs need to have a direct connection with experience, but as long as pulling on the belief pulls, perhaps indirectly, on anticipated experience, the belief is meaningful.
When a pebble of beliefs is completely disconnected from experience, or when the connection is so loose that it can be pulled around arbitrarily without feeling the tug of experience, then we can pronounce it meaningless. The pebble may make an attractive paperweight, with an intricate structure made of elements that also occur in meaningful beliefs, but that's all it can be. Music of the mind, conveying a subjective impression of deep meaning, without having any.
For the hypothetical photon disappearing in the far-far-away, no observation can be made on that photon, but we have other observations leading to beliefs about photons in general, according to which they cannot decay. That makes it meaningful to say that the far away photon acts in the same way. If we discovered processes of photon decay, it would still be meaningful, but then we would believe it could be false.
If a person with access to the computer simulating whichever universe (or set of universes) a belief is about could in principle write a program that takes as input the current state of the universe (as represented in the computer) and outputs whether the belief is true, then the belief is meaningful.
(if the universe in question does not run on a computer, begin by digitizing your universe, then proceed as above)
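A toy rendering of this criterion (my sketch, not the commenter's; the "universe" here is just a small Python dict standing in for the simulator's state): a belief is meaningful iff you can write a program from world-state to a truth value.

```python
# Hypothetical world state, as the simulating computer might represent it
# (keys and values invented for the Sally-Anne example).
world = {
    "marble_location": "box",
    "sally_believes_marble_in": "basket",
}

# A meaningful belief: a program from world-state to True/False.
def belief_marble_in_basket(w):
    return w["marble_location"] == "basket"

def belief_sally_will_look_in_basket(w):
    # Sally searches wherever she believes the marble is.
    return w["sally_believes_marble_in"] == "basket"

print(belief_marble_in_basket(world))           # False
print(belief_sally_will_look_in_basket(world))  # True
```

A meaningless belief, on this criterion, would be one for which no such program can be written even in principle — no function of the state ever returns its truth value.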
That has the same problem as atomic-level specifications that become false when you discover QM. If the Church-Turing thesis is false, all statements you have specified thus become meaningless or false. Even using a hierarchy of oracles until you hit a sufficient one might not be enough if the universe is even more magical than that.
Before reading other answers, I would guess that a statement is meaningful if it is either implied or refuted by a useful model of the universe - the more useful the model, the more meaningful the statement.
For a belief to be meaningful you have to be able to describe evidence that would move your posterior probability of it being true after a Bayesian update.
This is a generalization of falsifiability that allows, for example, indirect evidence pertaining to universal laws.
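A numeric illustration of this criterion (mine, not the commenter's; the numbers are arbitrary): evidence with different likelihoods under the hypothesis and its negation moves the posterior, while evidence with equal likelihoods leaves it untouched.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior p(H|E) from prior p(H) and the two likelihoods."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5

# Evidence E is twice as likely if H is true, so observing E moves the
# posterior away from the prior: H is meaningful by this criterion.
print(bayes_update(prior, 0.8, 0.4))  # 2/3 ≈ 0.667

# If E were equally likely either way, the posterior would not budge.
print(bayes_update(prior, 0.6, 0.6))  # 0.5
```

"Meaningless" in this sense means no describable evidence has unequal likelihoods, so every possible update is a no-op.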
So my belief that 2+2=4 isn't meaningful?
Counter-example. "There exists at least one entity capable of sensory experience." What constraints on sensory experience does this statement impose? If not, do you reject it as meaningless?
I don't think EY has chosen the most useful way to proceed on a discussion of truth. He has started from an anecdote where the correspondence theory of truth is the most applicable, and charges ahead developing the correspondence theory.
We call some beliefs true, and some false. True and false are judgments we apply to beliefs - sorting them into two piles. I think the limited bandwidth of a binary split should already be a tip off that we're heading down the wrong path.
In practice, ideas will be more or less useful, with that usefulness varying depending on the specifics of the context of the application of those beliefs. Even taking "belief as predictive model" as given, it's not that a belief is either accurate or inaccurate, but it will be more or less accurate, and so more or less useful, as I've claimed is the general case of interest.
Going back to the instrumental versus epistemic distinction, I want to win, and having a model that accurately predicts events is only one tool for winning among many. It's a wonderful simulation tool, but not the only thing I can do with beliefs.
If I'm going to sort beliefs into more and less useful, the first thing to do is identif... (read more)
The belief that someone is epiphenomenally a p-zombie, or belief in consubstantiality can also have behavioral consequences. Classifying some author as an "X" can, too.
If an author actually being X has no consequences apart from the professor believing that the author is "X", all consequences accrue to quoted beliefs and we have no reason to believe the unquoted form is meaningful or important. As for p-zombieness, it's not clear at this point in the sequence that this belief is meaningless rather than being false; and the negation of the statement, "people are not p-zombies", has phrasings that make no mention of zombiehood (i.e., "there is a physical explanation of consciousness") and can hence have behavioral consequences by virtue of being meaningful even if its intuitive "counterargument" has a meaningless term in it.
A review of your recent comments page shows most of the comments upvoted and some of them at stellar levels, not least this post. This would suggest that aversion to your admin-related commenting hasn't generalized to your on-topic commenting just yet. Either that, or all your upvoted comments are so amazingly badass that they overcome the hatred, while the few that get net downvotes were merely outstanding and couldn't compensate.
The pictures are a nice touch.
Though I found it sort of unnerving to read a paragraph and then scroll down to see a cartoon version of the exact same image I had painted inside my head, several times in a row.
Two quibbles that could turn out to be more than quibbles.
The concept of truth you intend to defend isn't a correspondence theory--rather it's a deflationary theory, one in which truth has a purely metalinguistic role. It doesn't provide any account of the nature of any correspondence relationship that might exist between beliefs and reality. A correspondence theory, properly termed, uses a strong notion of reference to provide a philosophical account of how language ties to reality.
I'm inclined to think this is a straw man. (And if they're mere "pundits" and not philosophers why the concern with their silly opinion?) I think you should cite to the most respectable of these pundits or reconsider whether any pundits worth speaking of said this. The notion that reality--not just belief--determines experiments, might be useful to mention, but it doesn't answer any known argument, whether by philosopher or pundit.
The quantum-field-theory-and-atoms thing seems to be not very relevant, or at least not well-stated. I mean, why the focus on atoms in the first place? To someone who doesn't already know, it sounds like you're just saying "Yes, elementary particles are smaller than atoms!" or more generally "Yes, atoms are not fundamental!"; it's tempting to instead say "OK, so instead of taking a possible state of configurations of atoms, take a possible state of whatever is fundamental."
I'm guessing the problem you're getting at is that when you actually try to do this, you quickly find that you're talking about not the state of the universe but the state of a whole notional multiverse, and not about one present state of it but its entire evolution over time as one big block, which makes our original this-universe-focused, present-focused notion a little harder to make sense of -- or if not this particular problem, then something similar -- but as stated it sounds like you're just making a stupid verbal trick.
I don't agree with, nor like, this singling out of politics as the only thing in which people don't update. People fail to update in many fields: they'll fail to update in love, in religion, in drug risks, in ... there is almost no domain of life in which people don't fail to update at times, rationalizing instead of updating.
In addition to what pleeppleep said, I think there is a bit of illusion of transparency here.
As I've said elsewhere, what Eliezer clearly intends with the label "political" is not partisan electioneering to decide whether the community organizer or the business executive is the next President of the United States. Instead, he means something closer to what Paul Graham means when he talks about keeping one's identity small.
Among humans at least, "Personal identity is the mindkiller."
The joke flew right over my head and I found myself typing "Redundant wording. Advanced Epistemology for Beginners sounds better."
Oh come on, yeah the gender-imbalance of the original images was bad, but ugliness is also bad and the new stick figures are ugly...
"Reality is that which, when you stop believing in it, doesn't go away. "
Is there a difference between "truth" and "accuracy"?
As a graduate philosophy student, who went to liberal arts schools, and studied mostly continental philosophy with lots of influence from post-modernism, we can infer from the comments and articles on this site that I must be a complete idiot that spouts meaningless jargon and calls it rational discussion. Thanks for the warm welcome ;) Let us hope I can be another example for which we can dismiss entire fields and intellectuals as being unfit for "true" rationality. /friendly-jest.
Now my understanding may be limited, having actually studied pos... (read more)
For some reason the first picture won't load, even though the rest are fine. I'm using safari.
Didn't you say you were working on a sequence on open problems in friendly AI? And how could this possibly be higher priority than that sequence?
A guess: prerequisites. Also, we have lots of new people, so to be safe: prerequisites to prerequisites.
Two minor grammatical corrections:
A space is missing between "itself" and "is " in "The marble itselfis a small simple", and between "experimental" and "results" in "only reality gets to determine my experimentalresults".
This post starts out by saying that we know there is such a thing as truth, because there is something that determines our experimental outcomes, aside from our experimental predictions. But by the end of the post, you're talking about truth as correspondence to an arrangement of atoms in the universe. I'm not sure how you got from there to here.
Great post! If this is the beginning of trend to make Less Wrong posts more accessible to a general audience, then I'm definitely a fan. There's a lot of people I'd love to share posts with who give up when they see a wall of text.
There are two key things here I think can be improved. I think they were probably skipped over for mostly narrative purposes and can be fixed with brief mentions or slight rephrasings:... (read more)
Hello. It is not a major problem, but I just wanted to put it out there: I would love it if there were some bibliographical references we could look into :)
Best regards; I just found Less Wrong and it's amazing.
Edit 1: I mean references as footnotes in every entry, although that may subtract from the reading experience?
The first image is a dead hotlink. It's in the internet archive and I've uploaded it to imgur.
Beliefs should pay rent, check. Arguments about truth are not just a matter of asserting privilege, check. And yet... when we do have floating beliefs, then our arguments about truth are largely a matter of asserting privilege. I missed that connection at first.
Why did you reply directly to the top-level post rather than to where the quotation was taken from?
Here's my map of my map with respect to the concept of truth.
Level Zero: I don't know. I wouldn't even be investigating these concepts about truth unless on some level I had some form of doubt about them. The only reason I think I know anything is because I assume it's possible for me to know anything. Maybe all of my priors are horribly messed up with respect to whatever else they potentially should be. Maybe my entire brain is horribly broken and all of my intuitive notions about reality and probability and logic and consistency are meaningless. There's ... (read more)
What does it tell about me that I mentally weighed "Highly Advanced" on a scale pan and "101" and "for Beginners" on the other pan?
I would have inverted the colours in the “All possible worlds” diagram (but with a black border around it) -- light-on-black reminds me of stars, and thence of the spatially-infinite-universe-including-pretty-much-anything idea, which is not terribly relevant here, whereas a white ellipse with a black border reminds me of a classical textbook Euler-V... (read more)
Suppose I have two different non-meaningful statements, A and B. Is it possible to tell them apart? On what basis? On what basis could we recognize non-meaningful statements as tokens of language at all?
Connotation. The statement has no well-defined denotation, but people say it to imply other, meaningful things. Islam is a religion of peace!
So... could this style of writing, with koans and pictures, be applied to transforming the majority of sequences into an even greater didactic tool?
Besides the obvious problems, I'm not sure how this would stand with Eliezer - they are, after all, his masterpiece.
Is this true? Maybe there's a formal reason why, but it seems we can informally represent such ideas without the abstract idea of truth. For example, if we grant quantification over propositions,
I'm not sure what this has to do with politics? The lead-up discusses "an Artificial Intelligence, which was carrying out its work in isolation" — the relevant part seems to be that it doesn't interact with other agents at all, not that it doesn't do politics specifically. Even without politics, other agents can still be mistaken, biased, misinformed, or deceitful; and one use of the concept of "truth" has to do with predicting the accuracy of others' statements and those people's intentions in making them.
Response to the First Meditation
Even if truth judgments can only be made by comparing maps — even if we can never assess the territory directly — there is still a question of how the territory is.
Furthermore, there is value in distinguishing our model/expectations of the world, from our experiences within it.
This leads to two naive notions of truth:
"Since my expectations sometimes conflict with my subsequent experiences, I need different names for the thingies that determine my experimental predictions and the thingy that determines my experimental results. I call the former thingies 'beliefs', and the latter thingy 'reality'."
I think this is a fine response to Mr. Carrico, but not to the post-modernists. They can still fall back to something like "Why are you drawing a line between 'predictions' and 'results'? Both are simply things in your head, and since you can't directly observe reality, your 'r... (read more)
Fine, Eliezer, as someone who would really like to think/believe that there's Ultimate Truth (not based in perception) to be found, I'll bite.
I don't think you are steelmanning post-modernists in your post. Suppose I am a member of a cult X -- we believe that we can leap off of Everest and fly/not die. You and I watch my fellow cult-member jump off a cliff. You see him smash himself dead. I am so deluded ("deluded") that all I see is my friend soaring in the sky. You, within your system, evaluate me as crazy. I might think the same of you.
You mig... (read more)
A criticism - somewhat harsh but hopefully constructive.
As you know, lots of people have written on the subjects of truth and meaning (aside from Tarski). It seems, however, that you don't accord them much importance (no references, failure to consider alternate points of view, apparent lack of awareness of the significance of the matter of what the bearer of truth (sentence, proposition, 'neurally embodied belief') properly is, etc.). I put it to you this is a manifestation of irrationality: you have a known means at your disposal to learn reliably about... (read more)
I'm saying, "Show me something in particular that I should've looked at, and explain why it matters; I do not respond to non-specific claims that I should've paid more homage to whatever."
I don't understand the part about post-utopianism being meaningless. If people agree on what the term means, and they can read a book and detect (or not) colonial alienation, and thus have a test for post-utopianism, and different people will reach the same conclusions about any given book, then how exactly is the term meaningless?
I assume this is meant in the spirit of "it's as if you are", not "your brain is computing in these terms". When I anticipate being surprised, I'm not consciously constructing any "my map of my map of ..." concepts. Whether my brain is constructing them under the covers remains to be demonstrated.
One shouldn't form theories about a particular photon. The statements "photons in general continue to exist after crossing the cosmological horizon" and "photons in general blink out of existence when they cross the cosmological horizon" have distinct testable consequences, if you have a little freedom of motion.
I think it's apt but ironic that you find a definition of "truth" by comparing beliefs and reality. Beliefs are something that human beings, and maybe some animals have. Reality is vast in comparison, and generally not very animal-centric. Yet every one of these d... (read more)
I'm not at all sure about this part - although I don't think it matters much to your overall case. I think one of our senses is a very much simplified representation of our own internal thought state. It's only just about good enough for us to make a chain of thought - taking the substance of a finished though... (read more)
The "All possible worlds" picture doesn't include the case of a marble in both the basket and the box.
I think there was only one marble in the universe.
You ought to admit that the statement 'there is "the thingy that determines my experimental results"' is a belief. A useful belief, but still a belief. And forgetting that sometimes leads to meaningless questions like "Which interpretation of QM is true?" or "Is wave function a real thing?"
The first image of this post is broken
Maybe it's just me, but the first image is broken.
The first image in this post does not show up anymore. The URL in the source code, http://labspace.open.ac.uk/file.php/4771/DSE232_1_004i.jpg , needs to be replaced by http://labspace.open.ac.uk/file.php/8398/DSE232_1_004i.jpg . However, perhaps it would be best to host somewhere other than labspace.open.ac.uk, if they will continue to frequently reorganize their files.
(Feel free to delete this comment when the issue is fixed.)
Also, on the issue of insisting that all facts be somehow reducible to facts about atoms (or whatever physical features of the world you insist on), consider the claim that you have experiences.
As Chalmers and others have long argued, it's logically coherent to believe in a world that is identical to ours in every 'physical' respect (positions of atoms, chairs, neuron firings, etc.) yet whose inhabitants simply lack any experiences. Thus, the belief that one does in fact have experiences is a claim that can't be reduced to facts about atoms or whatever.
Wor... (read more)
First a little clarification.
The contribution of Tarski was to define the idea of truth in a model of a theory and to show that one could finitely define truth in a model. Separately, he also showed that no sufficiently expressive consistent theory can define a truth predicate for itself.
As for the issue of truth-conditions this is really a matter of philosophy of language. The mere insistence that there is some objective fact out there that my words hook on to doesn't seem enough. If I insist that "There are blahblahblah in my room." but that "There are no blahblah... (read more)
Can the many world hypothesis be true or false according to this theory of truth?
The river side illustration is inaccurate and should be much more like the illustration right above (with the black shirt replaced with a white shirt).
A belief is true if it is consistent with reality.
I remember when you drew this analogy to different interpretations of QM and was thinking it over.
The way I put it to myself was that the difference between "laws of physics apply" and "everything ac... (read more)
Probably because your definition of existence is no good. Try a better one.
Why is it accepted that experiments with reality prove or disprove beliefs?
It seems to me that they merely confirm or alter beliefs. The answer given to the first koan and the explanation of the shoelaces seem to me to lead to that conclusion.
"...only reality gets to determine my experimental results."
Does it? How does it do that? Isn't it the case that all reality can "do" is passively be believed? Surely one has to observe results, and thus, one has belief about the results. When I jump off the cliff I might go splat, but if the clif... (read more)