The distinction between epistemic and instrumental rationality is standard.
Postmodernists/relativists have pointed out that the word 'true' is often used as mere emphasis, which I admit is a common use.
I haven't particularly seen anyone pioneering a rationalist technique of trying to eliminate the word 'true' to avoid use as a mere emphasis. The deflationary theory of truth says that all uses of "truth" are deflatable - which this sequence denies; but the idea of deflating "true" out of sentences is clearly precedented, as is the Tarski-inspired algorithm for doing so.
I haven't particularly seen anything which emphasizes that the residue of trying to eliminate the word "truth" is abstraction and generalization over the behavior of map-territory correspondences.
Similarly, I haven't previously seen anyone advocate, as a rationality technique, trying to eliminate the word 'rational'; or emphasizing that the non-eliminable residue will be about cognitive algorithms. I wouldn't particularly expect to see that; this is me trying to narrow the definition a particular way that I think is useful.
Saying "I believe X" does seem to have different connotations than simply stating X; I'd be more likely to say "I believe X" when X is controversial, for example.
Specifically, they're different because of the pragmatic rule that, in most normal conversations, direct statements should be something your conversation partner will accept. You say "X" when you expect your conversation partner to say something like "oh cool, I didn't know that." You say "I believe X" when they may disagree and your arguments will come later or not at all. "It's true that X" is more complicated; one example of use would be after the proposition X has already come up in conversation as a belief and you want to state it as a fact.
A: "I hear that lots of people are saying the sky is blue." B: "The sky is blue."
The above sounds weird. (Unless you are imagining it with emphasis on "is", which is another way to put emphasis on the truth of the proposition.) "The sky is blue" is being stated without signaling its relationship to the previous conversation, so it sounds like new information; A will expect some new proposition and be briefly confused; it sounds like echolalia rather than an answer.
B: "The sky really is blue.
or
B: "It's actually true that the sky is blue."
sounds better in this context.
"Why you should think the paleo diet has the best consequences for health"
"I like the paleo diet"
Those look significantly different to me - someone who likes the paleo diet because it lets them eat bacon all day and is indifferent to the health consequences is very different from someone who believes the health consequences of paleo are best, but doesn't like it because they enjoy bread and beer too much.
Or if it feels different to say "I believe the Democrats will win the election!" than to say, "The Democrats will win the election", this is an important warning of belief-alief divergence.
To me, the first means that I assign a probability > 50%, the latter that I assign a probability close to 1.
That's because the "I believe" part of "I believe X" acts as a kind of socially acceptable way to back off from a statement if it turns out to be wrong. People tend to say "I believe X" when they want to be able to later admit that they were wrong about X, so that's why it's less of a probabilistic commitment.
"It's rational to believe the sky is blue" -> "I think the sky is blue" -> "The sky is blue"
These sentences have different truth conditions (sets of possible worlds in which they are true). For example, in a possible world where aliens have changed the color of the sky to green and installed filters into everyone's optic nerves, "I think the sky is blue" and "It's rational to believe (in the sense of assigning the most probability to) the sky is blue" are true but "the sky is blue" is false. In a possible world where I irrationally believe the sky is blue, "I think the sky is blue" is true but "It's rational to believe the sky is blue" is false.
I think I should pick the sentence that best matches the probability distribution over possible worlds that I have. For example if I'm pretty sure that it's rational to believe the sky is blue but not highly certain the sky really is blue, I might want to say "It's rational to believe the sky is blue". If I'm not sure about either, "I think the sky is blue" would be best, etc.
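As a toy illustration of the differing truth conditions described above (the world variables and the Python framing are my own assumptions, not anything from the comment), each of the three sentences picks out a different set of possible worlds:

```python
from itertools import product

# Each toy "world" fixes three independent facts: the actual sky color,
# what I believe it is, and what the available evidence supports.
worlds = [
    {"sky": sky, "my_belief": belief, "evidence_supports": evidence}
    for sky, belief, evidence in product(["blue", "green"], repeat=3)
]

def sky_is_blue(w):               # "The sky is blue"
    return w["sky"] == "blue"

def i_think_sky_is_blue(w):       # "I think the sky is blue"
    return w["my_belief"] == "blue"

def rational_to_believe_blue(w):  # "It's rational to believe the sky is blue"
    return w["evidence_supports"] == "blue"

# The three sentences carve out three different sets of worlds - e.g. the
# alien-filter world where belief and evidence say "blue" but the sky is green.
for sentence in (sky_is_blue, i_think_sky_is_blue, rational_to_believe_blue):
    print(sentence.__name__, {i for i, w in enumerate(worlds) if sentence(w)})
```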
You're unlikely to say "I think the sky is blue but the sky is green", or "it is rational to believe the sky is blue but I think the sky is green".
And yet, I am not-infrequently in a state where saying "I think I'm going to do really badly on this project, but the truth is I probably won't" seems to make perfect sense to me, as does "it is rational to believe I'll do well on this project but the truth is I think I won't."
This does not surprise me too much, as I don't expect my brain to be internally consistent, and I consider all thoughts it thinks my thoughts because the alternative seems more dissociative than necessary. Depression and anxiety frequently cause me to think things that contradict my rationally endorsed beliefs.
I use "I think X" to indicate more uncertainty than just "X" all the time, and so does Eliezer. I just checked his recent comments, and on the first page, he used "I think" two times to indicate uncertainty, and one time to indicate that others may not share his belief. The statement "Consider the following deflations, all of which convey essentially the same information about your own opinions" just seems plain wrong.
SkepticWiki or something like it would be a much better name for RationalWiki, but it's probably too late. The name is a bit of a historical accident. SkepticWiki was already taken ... though they've now given up, and skepticwiki.org redirects to RW. Ah well.
(RW has now reached the stage where its popularity is sending it broke. Also, they need a new sysadmin. And I just volunteered, Dawkins help me.)
Of course, most of RW is shit. But the good bits, they're lovely. (The LW article is only a bit shit.)
In an argument, a good rule of thumb is to only assert statements which your interlocutor is expected to mostly accept (as GDC3 reminds us in another comment). If the statement X won't be accepted, the statement "I believe X" may well be accepted, if you are not expected to lie (or be significantly mistaken) about your beliefs. It's a statement different from X that provides weak evidence about X, and draws attention to X, prompting them to think about the possibility of X in greater detail, perhaps raising its probability from obscure to plausible as a result.
Thus, stating "I believe X" is similar to stating "X is somewhat plausible", in that both communicate weak evidence about X and draw attention to X, allowing them to notice its greater plausibility through inference. But stating "X is somewhat plausible" would be a misrepresentation of your understanding of the world if you in fact believe that X is likely (you don't believe that it's only "somewhat plausible"), and stating "X is likely" breaks the rule of only asserting statements that will be accepted. Therefore, in this case the best choice is to state "I believe X", and not "X" or "X is somewhat plausible".
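To see how "I believe X" functions as weak evidence, here is a toy Bayes calculation (all numbers are my own illustrative assumptions, not the comment's), assuming the speaker is somewhat more likely to assert their belief in X when X is actually true:

```python
prior_x = 0.30                 # listener's prior P(X)
p_say_given_x = 0.60           # P(speaker asserts "I believe X" | X)
p_say_given_not_x = 0.20       # P(speaker asserts "I believe X" | not X)

# Bayes' rule: the assertion raises P(X) because it is more likely under X.
posterior_x = (p_say_given_x * prior_x) / (
    p_say_given_x * prior_x + p_say_given_not_x * (1 - prior_x)
)
print(round(posterior_x, 2))   # 0.56: nudged from "obscure" toward "plausible"
```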
We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property "systematically increases map-territory correspondence" (epistemic rationality) or "systematically finds a better path to goals" (instrumental rationality). [emphasis added]
Taboo "systematically."
Rather than talking about only a single case, where you might be tempted to conclude that X is rational because it leads to your desired conclusion that global warming is false, or true, or whatever, one can explain why this tends (probabilistically) to work in the general case, given the general sort of universe we live in - that's "systematically".
Back in the 17th century, several people had a go at redesigning the whole thing, motivated by the flood of new knowledge coming from scientific investigations and the great exploratory voyages, and a felt inadequacy of the language of the time for expressing it. The languages they designed never came into use, although a direct intellectual line can be traced from there down to the formalisation of mathematical logic and the development of the first computers.
But introducing a whole new language is much harder than introducing a new keyboard layout, and how far has Dvorak got? Nobody will bother except for a few geeks. Qapla'!
Instead, there have been useful suggestions in various sources for small, local tools that one can simply pick up and use. Here's a list of the ones that occur to me. The first three are from Korzybski, and 4 is from General Semantics (on which there's a thread here), but invented by David Bourland. 6 and 7 are my own observations, and 5 and 8 are easily googleable for more information.
Subscripting, to draw attention to the fact that Fred(2010) is not Fred(2012), Freda(@home) is not Freda(@work), and Genghis(drunk) is not Genghis(sober).
Liberal use of the
Taboo is useful when you notice two people arguing over an equivocation, to get them to stop doing that. I'm not sure what the use is when there wasn't already confusion about the word "systematically".
Other possibly useful reference: the Wikipedia saying "verifiability not truth." The idea being that Wikipedia is written by mere humans without direct access to cosmic truth, so what's verifiable is all we have to go on. (The details of Wikipedia's epistemology can get a bit stupid at the edges, but the point stands.)
I often add "I believe" to sentences to clarify that I am not certain.
"Did you feed the dog?" "Yes"
and
"Did you feed the dog?" "I believe so"
have different meanings to me. I parse the first as "I am highly confident that I fed the dog" and the second as "I am unable to remember for sure whether I fed the dog, but I am >50% confident I did so."
When Anna tells me, "I'm worried that you don't seem very curious about this," there's this state of mind called 'curiosity' that we both agree is important - as a matter of rational process, on a meta-level above the particular issue at hand - and I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.
Having someone who would occasionally point out deficiencies in one's rational processes sounds awesome. Do you think i...
I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.
How does one increase their curiosity levels?
I thought of a slightly different exception for the use of "rational": when we talk about conclusions that someone else would draw from their experiences, which are different from ours. "It's rational for Truman Burbank to believe that he has a normal life."
Or if I had an extraordinary experience which I couldn't communicate with enough fidelity to you, then it might be rational for you not to believe me. Conversely, if you had the experience and tried to tell me, I might answer with "Based only on the information that I received from you, which is p...
From the common usage of the phrase "I believe" referred to in this context, I think you could generate an interpretation as follows.
When a person says "I believe that something is going to happen":
They're communicating their degree of belief that something is going to happen, but are not fully confident.
They're expecting that particular scenario to be more likely than other scenarios, but not certain.
They're concentrating on that particular anticipation and expect it to happen regardless of other plausible scenarios.
They're acknowledging their
Followup to: The Useful Idea of Truth
It is an error mode, and indeed an annoyance mode, to go about preaching the importance of the "Truth", especially if the Truth is supposed to be something incredibly lofty instead of some boring, mundane truth about gravity or rainbows or what your coworker said about your manager.
Thus it is a worthwhile exercise to practice deflating the word 'true' out of any sentence in which it appears. (Note that this is a special case of rationalist taboo.) For example, instead of saying, "I believe that the sky is blue, and that's true!" you can just say, "The sky is blue", which conveys essentially the same information about what color you think the sky is. Or if it feels different to say "I believe the Democrats will win the election!" than to say, "The Democrats will win the election", this is an important warning of belief-alief divergence.
Try it with these:
Meditation: If 'truth' is defined by an infinite family of sentences like 'The sentence "the sky is blue" is true if and only if the sky is blue', then why would we ever need to talk about 'truth' at all?
Reply: Well, you can't deflate 'truth' out of the sentence "True beliefs are more likely to make successful experimental predictions" because it states a property of map-territory correspondences in general. You could say 'accurate maps' instead of 'true beliefs', but you would still be invoking the same concept.
It's only because most sentences containing the word 'true' are not talking about map-territory correspondences in general, that most such sentences can be deflated.
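One compact way to write that infinite family (this rendering, with corner quotes naming the sentence, is my notation rather than the post's) is Tarski's schema, instantiated once for each sentence φ:

$$\mathrm{True}(\ulcorner \varphi \urcorner) \;\leftrightarrow\; \varphi$$

Deflation just moves from the left-hand side to the right-hand side; the non-deflatable residue is the generalization over all such map-territory correspondences at once, which no single instance of the schema captures.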
Now consider - when are you forced to use the word 'rational'?
As with the word 'true', there are very few sentences that truly need to contain the word 'rational' in them. Consider the following deflations, all of which convey essentially the same information about your own opinions:
"It's rational to believe the sky is blue"
-> "I think the sky is blue"
-> "The sky is blue"
"Rational Dieting: Why To Choose Paleo"
-> "Why you should think the paleo diet has the best consequences for health"
-> "I like the paleo diet"
Generally, when people bless something as 'rational', you could directly substitute the word 'optimal' with no loss of content - or in some cases the phrases 'true' or 'believed-by-me', if we're talking about a belief rather than a strategy.
Try it with these:
Meditation: Under what rare circumstances can you not deflate the word 'rational' out of a sentence?
...
...
...
Reply: We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property "systematically increases map-territory correspondence" (epistemic rationality) or "systematically finds a better path to goals" (instrumental rationality).
E.g.:
"It's (epistemically) rational to believe more in hypotheses that make successful experimental predictions."
or
"Chasing sunk costs is (instrumentally) irrational."
You can't deflate the concept of rationality out of the intended meaning of those sentences. You could find some way to rephrase it without the word 'rational'; but then you'd have to use other words describing the same concept, e.g:
"If you believe more in hypotheses that make successful predictions, your map will better correspond to reality over time."
or
"If you chase sunk costs, you won't achieve your goals as well as you could otherwise."
The word 'rational' is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement.
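As a minimal sketch of what "systematically promotes map-territory correspondence" can cash out to (my own toy simulation, with assumed coin parameters, not anything from the post): a Bayesian updater's credence in the true hypothesis rises on average across many runs, even though any single observation can mislead.

```python
import random

# Territory: a coin biased 0.7 toward heads. Map: credence that the coin is
# the biased one rather than fair. Bayesian updating is the cognitive algorithm.
def run_trial(true_bias=0.7, n_flips=50):
    p_biased = 0.5  # starting credence that the coin is the biased one
    for _ in range(n_flips):
        heads = random.random() < true_bias
        likelihood_biased = true_bias if heads else 1 - true_bias
        likelihood_fair = 0.5
        p_biased = (likelihood_biased * p_biased) / (
            likelihood_biased * p_biased + likelihood_fair * (1 - p_biased)
        )
    return p_biased

random.seed(0)
trials = [run_trial() for _ in range(1000)]
# On average the map ends up concentrated on the true hypothesis, even though
# individual flips (and occasional whole runs) point the wrong way.
print(sum(trials) / len(trials))  # typically well above 0.9
```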
Similarly, a rationalist isn't just somebody who respects the Truth.
All too many people respect the Truth.
They respect the Truth that the U.S. government planted explosives in the World Trade Center, the Truth that the stars control human destiny (ironically, the exact reverse will be true if everything goes right), the Truth that global warming is a lie... and so it goes.
A rationalist is somebody who respects the processes of finding truth. They respect somebody who seems to be showing genuine curiosity, even if that curiosity is about a should-already-be-settled issue like whether the World Trade Center was brought down by explosives, because genuine curiosity is part of a lovable algorithm and respectable process. They respect Stuart Hameroff for trying to test whether neurons have properties conducive to quantum computing, even if this idea seems exceedingly unlikely a priori and was suggested by awful Gödelian arguments about why brains can't be mechanisms, because Hameroff was trying to test his wacky beliefs experimentally, and humanity would still be living on the savanna if 'wacky' beliefs never got tested experimentally.
Or consider the controversy over the way CSICOP (Committee for Skeptical Investigation of Claims of the Paranormal) handled the so-called Mars effect, the controversy which led founder Dennis Rawlins to leave CSICOP. Does the position of the planet Mars in the sky during your hour of birth, actually have an effect on whether you'll become a famous athlete? I'll go out on a limb and say no. And if you only respect the Truth, then it doesn't matter very much whether CSICOP raised the goalposts on the astrologer Gauquelin - i.e., stated a test and then made up new reasons to reject the results after Gauquelin's result came out positive. The astrological conclusion is almost certainly un-true... and that conclusion was indeed derogated, the Truth upheld.
But a rationalist is disturbed by the claim that there were rational process violations. As a Bayesian, in a case like this you do update to a very small degree in favor of astrology, just not enough to overcome the prior odds; and you update to a larger degree that Gauquelin has inadvertently uncovered some other phenomenon that might be worth tracking down. One definitely shouldn't state a test and then ignore the results, or find new reasons the test is invalid, when the results don't come out your way. That process has bad systematic properties for finding truth - and a rationalist doesn't just appreciate the beauty of the Truth, but the beauty of the processes and cognitive algorithms that get us there.[1]
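In odds form, with made-up numbers (mine, not the post's): suppose the prior odds on the astrological hypothesis A are 1 : 1,000,000 and Gauquelin's positive result E is ten times as likely under A as under not-A. Then

$$\frac{P(A \mid E)}{P(\lnot A \mid E)} = \frac{P(E \mid A)}{P(E \mid \lnot A)} \cdot \frac{P(A)}{P(\lnot A)} = 10 \times 10^{-6} = 10^{-5},$$

a real but tiny shift toward astrology, alongside a larger update toward "Gauquelin stumbled on some other systematic effect worth tracking down."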
The reason why rationalists can have unusually productive and friendly conversations at least when everything goes right, is not that everyone involved has a great and abiding respect for whatever they think is the True or the Optimal in any given moment. Under most everyday conditions, people who argue heatedly aren't doing so because they know the truth but disrespect it. Rationalist conversations are (potentially) more productive to the degree that everyone respects the process, and is on mostly the same page about what the process should be, thanks to all that explicit study of things like cognitive psychology and probability theory. When Anna tells me, "I'm worried that you don't seem very curious about this," there's this state of mind called 'curiosity' that we both agree is important - as a matter of rational process, on a meta-level above the particular issue at hand - and I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.
Is rationality-use necessarily tied to rationality-appreciation? I can imagine a world filled with hordes of rationality-users who were taught in school to use the Art competently, even though only very few people love the Art enough to try to advance it further; and everyone else has no particular love or interest in the Art apart from the practical results it brings. Similarly, I can imagine a competent applied mathematician who only worked at a hedge fund for the money, and had never loved math or programming or optimization in the first place - who'd been in it for the money from day one. I can imagine a competent musician who had no particular love of composition or joy in music, and who only cared for the album sales and groupies. Just because something is imaginable doesn't make it probable in real life... but then there are many children who learn to play the piano despite having no love for it; "musicians" are those who are unusually good at it, not the adequately-competent.
But for now, in this world where the Art is not yet forcibly impressed on schoolchildren nor yet explicitly rewarded in a standard way on standard career tracks, almost everyone who has any skill at rationality is the sort of person who finds the Art intriguing for its own sake. Which - perhaps unfortunately - explains quite a bit, both about rationalist communities and about the world.
[1] RationalWiki really needs to rename itself to SkepticWiki. They're very interested in kicking hell out of homeopathy, but not as a group interested in the abstract beauty of questions like "What trials should a strange new hypothesis undergo, which it will not fail if the hypothesis is true?" You can go to them and be like, "You're criticizing theory X because some people who believe in it are stupid; but many true theories have stupid believers, like how Deepak Chopra claims to be talking about quantum physics; so this is not a useful method in general for discriminating true and false theories" and they'll be like, "Ha! So what? Who cares? X is crazy!" I think it was actually RationalWiki which first observed that it and Less Wrong ought to swap names.
(Mainstream status here.)
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Next post: "Firewalling the Optimal from the Rational"
Previous post: "Skill: The Map is Not the Territory"