Followup to: The Useful Idea of Truth

It is an error mode, and indeed an annoyance mode, to go about preaching the importance of the "Truth", especially if the Truth is supposed to be something incredibly lofty instead of some boring, mundane truth about gravity or rainbows or what your coworker said about your manager.

Thus it is a worthwhile exercise to practice deflating the word 'true' out of any sentence in which it appears. (Note that this is a special case of rationalist taboo.) For example, instead of saying, "I believe that the sky is blue, and that's true!" you can just say, "The sky is blue", which conveys essentially the same information about what color you think the sky is. Or if it feels different to say "I believe the Democrats will win the election!" than to say, "The Democrats will win the election", this is an important warning of belief-alief divergence.

Try it with these:

  • I believe Jess just wants to win arguments.
  • It’s true that you weren’t paying attention.
  • I believe I will get better.
  • In reality, teachers care a lot about students.

If 'truth' is defined by an infinite family of sentences like 'The sentence "the sky is blue" is true if and only if the sky is blue', then why would we ever need to talk about 'truth' at all?

Well, you can't deflate 'truth' out of the sentence "True beliefs are more likely to make successful experimental predictions" because it states a property of map-territory correspondences in general. You could say 'accurate maps' instead of 'true beliefs', but you would still be invoking the same concept.

It's only because most sentences containing the word 'true' are not talking about map-territory correspondences in general that most such sentences can be deflated.

Now consider - when are you forced to use the word 'rational'?

As with the word 'true', there are very few sentences that truly need to contain the word 'rational' in them. Consider the following deflations, all of which convey essentially the same information about your own opinions:

  • "It's rational to believe the sky is blue" 
    -> "I think the sky is blue" 
    -> "The sky is blue"

  • "Rational Dieting: Why To Choose Paleo" 
    -> "Why you should think the paleo diet has the best consequences for health" 
    -> "I like the paleo diet"

Generally, when people bless something as 'rational', you could directly substitute the word 'optimal' with no loss of content - or in some cases the words 'true' or 'believed-by-me', if we're talking about a belief rather than a strategy.

Try it with these:

  • "It’s rational to teach your children calculus."
  • "I think this is the most rational book ever."
  • "It's rational to believe in gravity."

Meditation: Under what rare circumstances can you not deflate the word 'rational' out of a sentence?

...
...
...

Reply: We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property "systematically increases map-territory correspondence" (epistemic rationality) or "systematically finds a better path to goals" (instrumental rationality).

E.g.:

"It's (epistemically) rational to believe more in hypotheses that make successful experimental predictions."

or

"Chasing sunk costs is (instrumentally) irrational."

You can't deflate the concept of rationality out of the intended meaning of those sentences. You could find some way to rephrase them without the word 'rational'; but then you'd have to use other words describing the same concept, e.g.:

"If you believe more in hypotheses that make successful predictions, your map will better correspond to reality over time."

or

"If you chase sunk costs, you won't achieve your goals as well as you could otherwise."

The word 'rational' is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement.

Similarly, a rationalist isn't just somebody who respects the Truth.

All too many people respect the Truth.

They respect the Truth that the U.S. government planted explosives in the World Trade Center, the Truth that the stars control human destiny (ironically, the exact reverse will be true if everything goes right), the Truth that global warming is a lie... and so it goes.

A rationalist is somebody who respects the processes of finding truth. They respect somebody who seems to be showing genuine curiosity, even if that curiosity is about a should-already-be-settled issue like whether the World Trade Center was brought down by explosives, because genuine curiosity is part of a lovable algorithm and respectable process. They respect Stuart Hameroff for trying to test whether neurons have properties conducive to quantum computing, even if this idea seems exceedingly unlikely a priori and was suggested by awful Gödelian arguments about why brains can't be mechanisms, because Hameroff was trying to test his wacky beliefs experimentally, and humanity would still be living on the savanna if 'wacky' beliefs never got tested experimentally.

Or consider the controversy over the way CSICOP (the Committee for the Scientific Investigation of Claims of the Paranormal) handled the so-called Mars effect, the controversy which led founder Dennis Rawlins to leave CSICOP. Does the position of the planet Mars in the sky during your hour of birth actually have an effect on whether you'll become a famous athlete? I'll go out on a limb and say no. And if you only respect the Truth, then it doesn't matter very much whether CSICOP moved the goalposts on the astrologer Gauquelin - i.e., stated a test and then made up new reasons to reject the results after Gauquelin's result came out positive. The astrological conclusion is almost certainly untrue... and that conclusion was indeed derogated, the Truth upheld.

But a rationalist is disturbed by the claim that there were violations of rational process. As a Bayesian, in a case like this you do update to a very small degree in favor of astrology, just not enough to overcome the prior odds; and you update to a larger degree that Gauquelin has inadvertently uncovered some other phenomenon that might be worth tracking down. One definitely shouldn't state a test and then ignore the results, or find new reasons the test is invalid, when the results don't come out your way. That process has bad systematic properties for finding truth - and a rationalist doesn't just appreciate the beauty of the Truth, but the beauty of the processes and cognitive algorithms that get us there.[1]
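To make "update to a very small degree" concrete, here is a minimal sketch of the odds form of Bayes' theorem - the prior odds and likelihood ratio below are entirely made-up illustrative numbers, not anyone's measured values:

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# All numbers below are made-up illustrations, not measured values.

def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

prior_odds = 1e-9        # hypothetical prior odds that Mars's position affects athletes
likelihood_ratio = 20.0  # hypothetical: a positive result is 20x likelier if the effect is real

posterior_odds = update_odds(prior_odds, likelihood_ratio)
print(posterior_odds)    # 2e-08: shifted in favor of astrology, still astronomically against
```

The posterior moves toward astrology, but stays nowhere near even odds - exactly the small-but-nonzero update described above.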

The reason why rationalists can have unusually productive and friendly conversations, at least when everything goes right, is not that everyone involved has a great and abiding respect for whatever they think is the True or the Optimal in any given moment. Under most everyday conditions, people who argue heatedly aren't doing so because they know the truth but disrespect it. Rationalist conversations are (potentially) more productive to the degree that everyone respects the process, and is on mostly the same page about what the process should be, thanks to all that explicit study of things like cognitive psychology and probability theory. When Anna tells me, "I'm worried that you don't seem very curious about this," there's this state of mind called 'curiosity' that we both agree is important - as a matter of rational process, on a meta-level above the particular issue at hand - and I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.

Is rationality-use necessarily tied to rationality-appreciation? I can imagine a world filled with hordes of rationality-users who were taught in school to use the Art competently, even though only very few people love the Art enough to try to advance it further; and everyone else has no particular love or interest in the Art apart from the practical results it brings. Similarly, I can imagine a competent applied mathematician who only worked at a hedge fund for the money, and had never loved math or programming or optimization in the first place - who'd been in it for the money from day one. I can imagine a competent musician with no particular love of composition or joy in music, who only cared about the album sales and groupies. Just because something is imaginable doesn't make it probable in real life... but then there are many children who learn to play the piano despite having no love for it; "musicians" are those who are unusually good at it, not the merely adequately-competent.

But for now, in this world where the Art is not yet forcibly impressed on schoolchildren nor yet explicitly rewarded in a standard way on standard career tracks, almost everyone who has any skill at rationality is the sort of person who finds the Art intriguing for its own sake. Which - perhaps unfortunately - explains quite a bit, both about rationalist communities and about the world.


[1] RationalWiki really needs to rename itself to SkepticWiki. They're very interested in kicking hell out of homeopathy, but not as a group interested in the abstract beauty of questions like "What trials should a strange new hypothesis undergo, which it will not fail if the hypothesis is true?" You can go to them and be like, "You're criticizing theory X because some people who believe in it are stupid; but many true theories have stupid believers, like how Deepak Chopra claims to be talking about quantum physics; so this is not a useful method in general for discriminating true and false theories" and they'll be like, "Ha! So what? Who cares? X is crazy!" I think it was actually RationalWiki which first observed that it and Less Wrong ought to swap names.


(Mainstream status here.)

Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Firewalling the Optimal from the Rational"

Previous post: "Skill: The Map is Not the Territory"

135 comments

Mainstream status:

The distinction between epistemic and instrumental rationality is standard.

Postmodernists/relativists have emphasized the use of the word 'true' as mere emphasis, which I admit is a common use.

I haven't particularly seen anyone pioneering a rationalist technique of trying to eliminate the word 'true' to avoid use as a mere emphasis. The deflationary theory of truth says that all uses of "truth" are deflatable - which this sequence denies; but the idea of deflating "true" out of sentences is clearly precedented, as is the Tarski-inspired algorithm for doing so.

I haven't particularly seen anything which emphasizes that the residue of trying to eliminate the word "truth" is abstraction and generalization over the behavior of map-territory correspondences.

Similarly, I haven't previously seen anyone advocate, as a rationality technique, trying to eliminate the word 'rational'; or emphasizing that the non-eliminable residue will be about cognitive algorithms. I wouldn't particularly expect to see that; this is me trying to narrow the definition a particular way that I think is useful.

A "mainstream status" on LW philosophy posts is an excellent idea. Nice one.

-4TimS
More boo lights. Postmodernists assert that object-level morality abuses the concept of truth in order to reinforce the acceptance of normative claims. You noted that some thinkers call that "the halo effect." Kuhn and Feyerabend assert that the interpretations of evidence, and what evidence is available to interpret, are affected by social factors that a naive philosophy of science wouldn't suspect. Common use is not better evidence of postmodern thought than folk psychology is evidence of what Kahneman thinks. What postmodern position are you actually attacking here?
6thomblake
You seem to have misread. Eliezer's comment was intended to point out connections between what he's talking about and "mainstream" ideas / writing. He noted in the article that "true" is sometimes used as mere emphasis, and noted here that postmodernists have made the same observation. I don't see why that would be characterized as an "attack".
-3TimS
Eliezer is attacking a particular usage of the word "true." That point is well taken. Further, I appreciate his explicit linking of his thoughts into the larger philosophical debate. But I am unaware of any philosophical movement that uses "true" the way Eliezer attacks. The sentence I quote could have made the same point (and been more accurate) if postmodernist/relativism was omitted entirely. What purpose do you think including the label had? In particular, why was the label (inaccurately) applied to a position that Eliezer just demonstrated was false?
6DaFranker
From Wikipedia: If "postmodernists" have this opinion as stated, I suspect that when they aren't using the word "true" to attack or criticize other philosophical ideas, they would be using it as a form of emphasis on a particular interpretation, or to assert the dominance of a particular interpretation, as this interpretation then literally becomes more "true" (in their model, according to my model of their model).
4TimS
I think the next paragraph is a bit more accurate: The key point of post-modernist political theory is that certain social norms are claimed to be true or universal when that is not the case. Further, binary distinctions (black/white, capitalist/proletariat) are inherently misleading, organizing the world in particular ways in order to advance particular moral agendas.
4DaFranker
Thanks, I shall update towards most postmodernists being less of the extreme philosophical kind and more about practical matters like those. Most self-titled "postmodernists" I've encountered and discussed with were more of the extreme philosophical kind - the kind that would claim ontologically basic mental entities or some other really weird postulate if asked "But where did the first 'reality' come from if there never was any objective reality for us to base our own ones on?"
1TimS
As a discipline, postmodernism seems unusually terrible at producing competent practitioners. The average academic chemist is a better scientist than the average postmodernist is a philosopher. That said, a lot of conventional wisdom in fields like sociology or Legal Realism has a very strong postmodern flavor. Honestly, a lot of the meta-type analysis of norms is using scientific data to show what various humanities thinkers had been saying all along.
-3Eugine_Nier
Some are, some aren't. Furthermore, it's impossible to say anything without using distinctions.
-1TimS
Not all moral distinctions are on-off buttons. Some (most?) are sliding scales.

I don't expect king-of-postmodernism-is-nonsense and mister-I-think-postmodernism-makes-good-points to come to agreement, but I'm interested in where exactly we disagree.

  • Do you think some agents could gain advantage by treating a sliding-scale moral quality as discrete?
  • Do you think some agents could gain advantage by treating a discrete moral quality as sliding-scale?
  • What sort of evidence is useful in deciding whether a particular moral quality is discrete or sliding-scale?
1Eugine_Nier
First, binary distinctions aren't just for moral systems. If we restrict to moral distinctions, most moral distinctions are Schelling points.
5thomblake
The point of that sentence was that postmodernists/relativists have emphasized something. Removing "postmodernists/relativists" from that sentence removes the entire point of the sentence. The comment was about what mainstream folks have talked about. It was not. The label was applied to people who noticed something that Eliezer also noticed. He did not even say that postmodernists/relativists think that is the correct use of the word "true". If anything, it was praise for postmodernists/relativists for having already covered something that Eliezer wanted to talk about.
4TimS
My mental model of Eliezer Yudkowsky is that he thinks all postmodernism is nonsense - as others have noted. If he intended to say something equivalent to "Postmodernist got this point right" then what he wrote is not how I expect he would say it. Further, the attack that I am reading into his words is a standard understanding of postmodernism in this community. But the community seems to agree with you more than I - so I'm adjusting slightly in favor of me misreading Eliezer's intent.
1Pudlovich
I certainly read it as "postmodernists notice that the word 'true' is used as mere emphasis". Your interpretation doesn't exactly align with the essence of postmodernism (as I see it; I'm no expert).

Saying "I believe X" does seem to have different connotations than simply stating X; I'd be more likely to say "I believe X" when X is controversial, for example.

GDC3

Specifically they're different because of the pragmatic conversation rule that direct statements should be something your conversation partner will accept, in most normal conversations. You say "X" when you expect your conversation partner to say something like "oh cool, I didn't know that." You say "I believe X" when they may disagree and your arguments will come later or not at all. "It's true that X" is more complicated; one example of use would be after the proposition X has already come up in conversation as a belief and you want to state it as a fact.

A: "I hear that lots of people are saying the sky is blue." B: "The sky is blue."

The above sounds weird. (Unless you are imagining it with emphasis on "is" which is another way to put emphasis on the truth of the proposition.) "The sky is blue" is being stated without signaling its relationship to the previous conversation so it sounds like new information; A will expect some new proposition and be briefly confused; it sounds like echolalia rather than an answer.

B: "The sky really is blue.

or

B: "It's actually true that the sky is blue."

sounds better in this context.

8CronoDAS
That's a better explanation than I could come up with. On a completely irrelevant note, why is "the sky is blue" the standard for "obviously true fact"? The sky is black about half the time, and it's pretty common for it to be white, too.
3A1987dM
If you count navy as blue rather than as black, that happens more rarely than “half the time”. (I'd say “10% of the time” as I have that number cached in my mind as the duty cycle of fluorescence detectors for ultra-high-energy cosmic rays.) You know, the moon. And when that happens, in places where electric lighting is widely used, it tends to become orange (not quite -- does that colour have a name?) during the night!
1pure-awesome
I believe CronoDAS was referring to overcast days when they said the sky is sometimes white.
1A1987dM
Yes, I was talking about his claim that “the sky is black about half the time”; I didn't touch his claim that “it's pretty common for it to be white”. EDIT: Okay, failed reading comprehension of my own comment.
3GDC3
When the sky is white, it's not the sky; it's clouds blocking the sky. When the sky is black it's just too dark to see the sky. At least that was my intuition before I knew that the sky wasn't some conventionally blue object. I guess it's a question of word usage whether the projective meaning of "blue" - something like "looks blue under good lighting conditions" - should still be applied when the color isn't caused by reflectance. Though it's not blue from all directions, is it?
3DanielLC
I would consider the clouds part of the sky, like the air, or the stars.
3A1987dM
I'd say “sky” is a relative concept and depends on where you are. If I was on the mountainside and had clouds below me, I still wouldn't say I'm in the sky. (But I would if I was on a plane, so it's not as simple as “anything that's above me”.)
3DanielLC
And I'm on the ground right now. There doesn't seem to be any clouds above me, but if there were, they'd be in the sky, and the sky would have white splotches.
3CCC
I consider anything that is contiguously attached to the planet (or moon) which I am currently on (e.g. a man on a mountaintop), or less than about two metres from the ground (e.g. a man jumping up and down) to not be in the sky. Anything further than that from ground surface, and either currently ascending or able to maintain that altitude, counts as 'in the sky'; anything further than that from ground surface and not able to maintain that altitude, counts as 'falling from the sky'.
3Kindly
If I jump out of a second-floor window, I'm certainly falling, but I'm hardly falling from the sky.
1CCC
The building is contiguously attached to the ground (unless it's some sort of flying building). You need to be more than two metres away from it and falling to count as 'falling from the sky'. For safety reasons, it's probably also better to throw an object - I'd suggest a tennis ball - if you actually want to perform an experiment. You could get it to the state 'falling from the sky' by throwing it hard enough horizontally from a fourth- or fifth-floor window, or dropping it off a bridge. Hmmm... I may need to update my definition to consider the 'dropped-from-a-bridge' case.
0DanielLC
I'd say that it has to be far enough from the ground that you wouldn't notice the parallax effect if you walked around below it, and it has to be above the horizon. Also, it can't be an airplane or something. I'm not sure why exactly that last rule is there, given that meteors and such count. Maybe most people would consider it part of the sky. I'd say it's in the sky, but not part of it.
0A1987dM
What would you call a glass absorbing red/orange/yellow light and letting the rest through?
5GDC3
As I understand it, the sky does let red-yellow light through. It scatters blue light and lets red light through relatively unchanged. So it looks red-yellow near the light source and blue everywhere else.
0A1987dM
Yes.
2BlazeOrangeDeer
It's something that everybody has quick access to. Another version would be "things fall", which is better but only works on a planet, and only with objects denser than air, for example. It would be ideal to have some unchanging reference object that we can make statements about; instead we have something that everyone has seen, so they can say "I have seen that, it was pretty much blue".
3Eugine_Nier
That it's hard to come up with an "obviously true fact" that is in fact true without qualifications is itself interesting.
4Bruno_Coelho
With close friends this works: saying "I believe X" signals uncertainty, so someone could help with available information. But in public debates, if you say "X" instead of "I believe X", people will find you more confident and secure.
8GDC3
You're right. I think the lesson we should take from all this complexity is to remember that the wording of a sentence is relevant to more than just its truth conditions. Language does a lot more than state facts and ask questions.
0Bruno_Coelho
But this brings a tradeoff: how much do you sacrifice to show security and confidence? I suppose there are people who tell the truth even in situations where this attitude will cause complications.
5Xachariah
"God exists" - "I've had conversations with God; he's a good fellow." "I believe that God exists" - "A lot of people say that God exists and I agree with them." "I believe that I believe that God exists" - "I do see some inconsistencies about God but I go to church and I pray. Plus all my friends are Christian, that means I'm a believer, right?" "I believe that I believe that I believe that God exists" - "I think that what it means to believe in something is an aggregate of the actions you take and the anticipations you feel. So I can have doubt at the object level but still count as believing if I respond similarly to other believers..." "I believe that I believe that I believe that I believe that God exists" - "Okay, I need to talk to fewer rationalists." Each 'I believe' implies a different meta level that you're analyzing things. Kind of like confidence levels inside and outside an argument.
2alex_zag_al
Doesn't seem to me like the first "believe" you append implies a different meta level, just a different reason for believing. After all, the one who asserts "God exists" also believes God exists. Or, maybe the way you've set it out, "I believe that God exists" is belief in belief, in which case in the next one, the extra "I believe" just indicates uncertainty. I think that the general trend that you observed, that you tend to get more meta as you add more "I believes", may be making you miss when the words "I believe" add nothing, or just mean "probably".
5afeller08
I agree with Xachariah's view of the semantics. I think that the first 'I believe' does imply a different meta level of belief (often associated with a different reason for believing). His example does a good job of showing how someone can drill down many levels, but the distinction in the first level might be made more clear by considering a more concretely defined belief:

"We're lost" -- "I'm your jungle leader, and I don't have a clue where we are any more."
"I believe we're lost" -- "I'm not leading this expedition. I didn't expect to have a clue where we were going, but it doesn't seem to me like anyone else knows where we are going either."

"Sarah won the state science fair her senior year of high school" -- "I attended the fair and witnessed her win it."
"I believe that Sarah won the state science fair her senior year of high school" -- "She says she did, and she's the best experimentalist I've ever met."
"I believe that I believe that Sarah won the state science fair her senior year of high school" -- "She says she did, and I don't believe for one second that she'd make that sort of thing up. That said, she's not, so far as I can tell, particularly good at science, and it shocks me that she might somehow have been able to win."

"Parachuting isn't all it's cracked up to be." -- "I've gone parachuting, and frankly, I've gotten bigger adrenaline rushes playing poker."
"I don't believe parachuting's all it's cracked up to be." -- "I haven't gone parachuting. There's no way I would spend $600 for a 4-minute experience when I can't imagine that it's enough fun to justify that."

Without the 'I believe,' what I tend to be saying is: I trust the map because I drew it, and I drew it carefully. With the 'I believe,' I tend to be saying I trust this map because I trust its source, even though I didn't actually create it myself. In the case of the parachuting, I don't know where the map comes from; it's just the one I have. Pla
4roystgnr
Precisely: for some reason you're not allowed to say "I assign a 70% probability to X being true" without people looking at you funny, and even "I think X is more likely than not-X, but you shouldn't be as confident of this as you are of most things I tell you" is kind of awkward, but "I believe so" is a pretty standard idiom for expressing high-probability-which-is-still-non-negligibly-different-from-1. If you're stuck trying to communicate in an innumerate language then you use whatever phrasing you have available.
3CronoDAS
"I suspect X" seems to be a compact phrasing that suggests considerable uncertainty, as is "I guess X"...
0Sniffnoy
Also, "I would expect X".
1yli
If you're willing to say "X" whenever you believe X, then if you say "I believe X" but aren't willing to say "X", your statement that you believe X is actually false. But in conversations, the rule that you're willing to say everything you believe doesn't hold.
0Pudlovich
Not exactly. If you assign 80% probability to something, you're still allowed to say that you believe it. It's just an evaluation of your model, I believe.

"Why you should think the paleo diet has the best consequences for health"

"I like the paleo diet"

Those look significantly different to me- someone who likes the paleo diet because it lets them eat bacon all day and is indifferent to the health consequences is very different from someone who believes the health consequences of paleo are best, but doesn't like it because they enjoy bread and beer too much.

3David_Gerard
With diets, liking it is pretty much essential to staying on it, as far as I can tell from myself and people I know. e.g. I've been on Tim Ferriss' slow-carb diet for a year and a half, and it's great, but only because I like all the food on it already and it suits me. If it didn't I'd have quit in a week. So I laughed at the bit you quote, but I'd say in practice it's not far off the mark and I laughed because it implies my first sentence.
4TheOtherDave
Liking a diet may be necessary for it to have positive health consequences, as you say, but for most people it's not sufficient. So it's probably a mistake to treat "X diet has positive health consequences" as equivalent to "I like X diet" when uttered by most people. (For example, I might be able to experimentally demonstrate the latter and demonstrate the opposite of the former for the same diet and speaker.)
0David_Gerard
I took it as literary allusion in a place where such may not have been suited to something in a literalist genre. Which may count as a mistake.

Or if it feels different to say "I believe the Democrats will win the election!" than to say, "The Democrats will win the election", this is an important warning of belief-alief divergence.

To me, the first means that I assign a probability > 50%, the latter that I assign a probability close to 1.

That's because the "I believe" part of "I believe X" acts as a kind of socially acceptable way to back off from a statement if it turns out to be wrong. People tend to say "I believe X" when they want to be able to later admit that they were wrong about X, so that's why it's less of a probabilistic commitment.

0BlazeOrangeDeer
I wouldn't say that for something at just over 50%; I'd say "will probably". An unqualified statement implies confidence.
-1[anonymous]
Why not say "I assign 'The Democrats will win the election.' probability greater than 50%." instead?
6A1987dM
Because that may sound weird to certain people. (What about “I think Democrats are more likely than not to win the next election”?)
3ThrustVectoring
Why not just say "The democrats are more likely than not to win the next election?"
1jimrandomh
Because that's four extra syllables, and it shifts emphasis from the statement to the meta-statement about probability. (These aren't good reasons to speak imprecisely, but it's a broadly observed fact of linguistics that people favor shorter ways of saying things when their meaning is sufficiently similar.)
1faul_sname
Because next-election-winningness is not an attribute of the democrats, it's an attribute of your mental model of the democrats.
0TheOtherDave
I'm not quite sure what you even mean by this comment. I have a mental model in which the Democrats won the 2008 U.S. Presidential election. Is that election-winningness, on your view, an attribute of the Democrats? Or of my mental model of Democrats? Or both?
0faul_sname
I phrased badly: that should have been "likelihood-of-democrats-winning-next-election" corresponds to your mental model of the democrats, not the democrats themselves. The democrats will either win or they won't, but if you don't know which you'll say "I think/believe the democrats will win the next election". Since the democrats actually did win the 2008 election, your mental model does correspond to the real world, so it doesn't matter whether you're referring to your mental model or the real world. Since you have less confidence in your mental model of future democratic performance, it makes sense to use different phrases for each ("I believe the democrats will win the next election" feels different than "The democrats won the last election").
0TheOtherDave
Well, I certainly agree that it makes sense to use different phrases to indicate different levels of confidence in an assertion, and I agree that the distinction between "X" and "I believe X" is often used this way.
0A1987dM
I'd say “I think” if I only have poor knowledge of the facts, have to heavily rely on my priors and my intuition, and hence I could easily shift my probability assignment (and narrow what E.T. Jaynes calls my Ap distribution) by (e.g.) looking stuff up, if I could be bothered to. I'd omit it if I already had as much relevant information as I could reasonably gather, and so I don't expect my probability assignment to shift or my Ap distribution to narrow in the near future.
0[anonymous]
This is an improvement.
0wuncidunci
I think the main issue about language is the question of who you're talking to. If you're speaking to a friend with a very weak grasp of rationality and probability, that sentence will not make sense, and will be needlessly convoluted. To me it looks like Eliezer is trying to set up a new standard (perhaps just for this sequence) about when and how we are allowed to use the loaded words 'truth' and 'rationality'. So it doesn't make sense to try to apply this to every single conversation (especially outside of Less Wrong).

"It's rational to believe the sky is blue" -> "I think the sky is blue" -> "The sky is blue"

These sentences have different truth conditions (sets of possible worlds in which they are true). For example, in a possible world where aliens have changed the color of the sky to green and installed filters into everyone's optic nerves, "I think the sky is blue" and "It's rational to believe (in the sense of assigning the most probability to) the sky is blue" are true but "the sky is blue" is false. In a possible world where I irrationally believe the sky is blue, "I think the sky is blue" is true but "It's rational to believe the sky is blue" is false.

I think I should pick the sentence that best matches the probability distribution over possible worlds that I have. For example if I'm pretty sure that it's rational to believe the sky is blue but not highly certain the sky really is blue, I might want to say "It's rational to believe the sky is blue". If I'm not sure about either, "I think the sky is blue" would be best, etc.
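(A minimal sketch of this possible-worlds picture, with three toy worlds whose names and truth assignments are invented for illustration, following the two scenarios above plus the ordinary case:)

```python
# Each sentence is identified with its truth condition: the set of possible
# worlds in which it is true. Worlds and truth values below are invented
# for illustration.

worlds = {
    "ordinary":      {"sky_is_blue": True,  "i_think_blue": True,  "rational_to_believe_blue": True},
    "alien_filters": {"sky_is_blue": False, "i_think_blue": True,  "rational_to_believe_blue": True},
    "irrational_me": {"sky_is_blue": True,  "i_think_blue": True,  "rational_to_believe_blue": False},
}

def truth_condition(sentence):
    """Return the set of worlds where the sentence holds."""
    return {name for name, facts in worlds.items() if facts[sentence]}

print(truth_condition("sky_is_blue"))               # {'ordinary', 'irrational_me'}
print(truth_condition("i_think_blue"))              # all three worlds
print(truth_condition("rational_to_believe_blue"))  # {'ordinary', 'alien_filters'}
```

Since the three truth conditions are different sets of worlds, the three sentences cannot carry exactly the same information.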

8philh
Denotatively: in your two hypothetical worlds, one of the statements may be false but all three are presenting essentially the same information, which is "I think the sky is blue". You're unlikely to say "I think the sky is blue but the sky is green", or "it is rational to believe the sky is blue but I think the sky is green". Connotatively: I do think there's a connotative difference between the statements. "I think the sky is blue" assigns less probability to a blue sky than "the sky is blue" does; and "it is rational to believe X" could mean something like "I ought to disbelieve in ghosts, but I'll still run screaming from a supposedly-haunted building", or "it is rational (for children) to believe in God (because they don't have any other explanation for religion)", or "the current best hypothesis is that the Higgs boson exists, but we've got an LHC to run before we can collect actual data".

You're unlikely to say "I think the sky is blue but the sky is green", or "it is rational to believe the sky is blue but I think the sky is green".

And yet, I am not-infrequently in a state where saying "I think I'm going to do really badly on this project, but the truth is I probably won't" seems to make perfect sense to me, as does "it is rational to believe I'll do well on this project but the truth is I think I won't."

This does not surprise me too much, as I don't expect my brain to be internally consistent, and I consider all thoughts it thinks my thoughts because the alternative seems more dissociative than necessary. Depression and anxiety frequently cause me to think things that contradict my rationally endorsed beliefs.

1Vladimir_Nesov
What does it mean to consider these thoughts "your" thoughts, what does this ownership signify? Your brain produced them; what else is there to say? Endorsement of correctness doesn't need to relate to personal identity.
2TheOtherDave
I'm puzzled by the question. What it means to consider these thoughts my thoughts is more or less the same thing that it means to consider these fingers I'm typing with my fingers, or the words I've typed my words. I assume you're not asking me to taboo the general concept of ownership here, though I'll try to if you are, and I don't think I'm using it in an unusual way. But I'm not quite sure what you are asking. Can you clarify?
1Vladimir_Nesov
You said, "I consider all thoughts [my brain] thinks my thoughts because the alternative seems more dissociative than necessary". In this statement, you seem to be comparing the position where you consider all thoughts "your thoughts" with the position where you only consider some of the thoughts "your thoughts". In the latter case, you might, for example, declare incorrect aliefs "not part of you". My point is that I'm not sure there should be much of a distinction between the concept of endorsing certain thoughts (e.g. for correctness, or for expressing certain values), and the concept of ownership over them. More specifically I'm suggesting that it might be a good idea to get rid of the concept of ownership over thoughts (where it's selective, so that not every thought your brain thinks is seen as "your own"), and only use the concept of endorsement, so that the question of relating endorsed thoughts with owned thoughts would become trivial/meaningless. (The idea of endorsement generalizes better to weird situations, as you may endorse something an algorithm running on your computer suggested, something other people think, implementation of a social norm, or something an AI does. It seems that it's more accurate to treat such things as "part of you" than not, in considerations that would normally make use of the concept of "part of you", but the concept of "part of you" as it's normally used fits them worse than the concept of "endorsement", thus the latter is more useful, and the former is potentially misleading, drawing attention away from such generalizations.)
0TheOtherDave
Ah, I see. Thanks for the clarification. So, in that context, what it signifies to consider as mine thoughts I don't endorse is that I consider myself more than just the subset of my brain that thinks thoughts I endorse. So, why do I do that, rather than only model myself as the subset? Hm. No particularly good reason, I suppose... I mean, I could model myself as just the subset, and treat the thoughts thought by the brain in which I reside as belonging to someone else, or to noone at all. It would take some training, but I expect it's possible. It's not something I've done, but neither is it something I've explicitly rejected. Do you recommend it? What benefits ought I expect from doing that? Edit: I should say explicitly that if your answer is "the same benefits EY lists in the posts I linked to," that's fine; I just didn't want to treat his thoughts as yours.
1Vladimir_Nesov
I'm not sure the grandparent clarified the argument then. Does it mean anything to declare that certain thoughts are "part of you", apart from your endorsement of those thoughts? In what way is modeling a thought as "your own" different from modeling it as "someone else's"? One should model what's possible about one's whole psychology, and in characterizing that activity I don't understand in what way "modeling as myself" is distinct from just "modeling". You say that you consider more than those thoughts that are endorsed as "part of you". I don't understand what this is intended to mean, what is the difference between drawing the boundary of the concept "part of me" in one way vs. the other, and what would it mean to retrain yourself to change this boundary. I expect the valid use of the concept of "part of me" derives mostly from the concept of endorsement, and I'm not sure what the useful distinction might be (there are actual distinctions in connotations, the question is whether they have any role to play). (I guess I am asking to taboo the concept of ownership, as applied to thinking.)
1TheOtherDave
Well... OK. This is tricky to do in abstract terms without becoming entirely meaningless, so let's take a step back here and see if a more concrete example helps establish a shared useful framework. In that vein: what does it mean to say that these fingers I'm typing with are mine, rather than to say they aren't mine? Well of course there are lots of ways I own my fingers, but in the sense I think we mean here: roughly speaking, it means that when I form impulses to perform certain tasks with my fingers, those are the fingers that perform the tasks, not some other fingers. When those fingers interact with the external world, my mind receives tactile input, not some other mind. And various other facts along those lines. More broadly, it means that these fingers interact with my intentions and my perceptions in various specific ways. If that stopped being true, I might still refer to them as my fingers, but I wouldn't mean quite the same thing by doing so. And if it started being true of other fingers that it currently isn't true of (e.g., fingers on a prosthetic arm connected to my nervous system) I would probably start referring to those fingers as mine in the same sense. (And if it started being true of arbitrary fingers in unpredictable ways, I would probably eventually discard the concept as useless... no fingers would be especially mine, and all fingers might be mine, and it would just be a silly thing to talk about.) All of which is so banal as to not be worth saying, but perhaps dropping down to the incredibly banal is a useful place to start, since we seem to be missing each other when we get too abstract. So, OK, does that align with your understanding of ownership as it applies to fingers in this context? (Of course, it is possible to own fingers in many other ways, but that's what I usually mean when I talk about my fingers.) Assuming it does... I would say that when I describe certain thoughts as mine, I mean something similar. When I experience th
8Eliezer Yudkowsky
I shall concede that the sentence, "It's rational for X to believe Y, but really Z" can sometimes make sense - it says that you have different evidence from X. In most cases, though, this will underestimate the power of rationality and ask too little of X. (The last time I can remember saying anything like this was in a strictly fictional context, Chapter 20.)
3evand
Isn't that exactly the question we often ask juries to consider in, for example, liability lawsuits? "It was rational for the defendant to assume (or conclude under the circumstances) X, but in fact not-X, because of Y or because they got really unlucky, and therefore we find them not liable." I will happily concede that juries do a poor job accounting for hindsight bias, and hold defendants to low standards of rationality, but it seems to me that the usual question is something like "even though not-X, was it rational to believe X?" Whether "reasonable" and "rational" really mean the same thing in this case is open, but I submit that it is the same question as whether juries hold defendants to reasonable standards of rationality.
4thomblake
But in that possible world, you could just as well say "I think the sky is blue" and "It's rational to believe the sky is blue" and "The sky is blue". You shouldn't find yourself in the situation of saying "I believe that the sky is blue, but the sky is not actually blue".

I use "I think X" to indicate more uncertainty than just "X" all the time, and so does Eliezer. I just checked his recent comments, and on the first page, he used "I think" two times to indicate uncertainty, and one time to indicate that others may not share his belief. The statement "Consider the following deflations, all of which convey essentially the same information about your own opinions" just seems plain wrong.

4Robert Miles
Agreed. The use of "I think" relies on its connotations, which are different from its denotation. When you say "I think X", you're not actually expressing the same sentiment as a direct literal reading of the text suggests.

SkepticWiki or something like it would be a much better name for RationalWiki, but it's probably too late. The name is a bit of a historical accident. SkepticWiki was already taken ... though they've now given up, and skepticwiki.org redirects to RW. Ah well.

(RW has now reached the stage where its popularity is sending it broke. Also, they need a new sysadmin. And I just volunteered, Dawkins help me.)

Of course, most of RW is shit. But the good bits, they're lovely. (The LW article is only a bit shit.)

In an argument, a good rule of thumb is to only assert statements which your interlocutor is expected to mostly accept (as GDC3 reminds us in another comment). If the statement X won't be accepted, the statement "I believe X" may well be accepted, if you are not expected to lie (or be significantly mistaken) about your beliefs. It's a statement different from X that provides weak evidence about X, and draws attention to X, prompting one to think about the possibility of X in greater detail, perhaps raising its probability from obscure to plausible as a result.

Thus, stating "I believe X" is similar to stating "X is somewhat plausible", in that both communicate weak evidence about X and draw attention to X, allowing to notice its greater plausibility through inference. But stating "X is somewhat plausible" would be a misrepresentation of your understanding of the world if you in fact believe that X is likely (you don't believe that it's only "somewhat plausible"), and stating "X is likely" breaks the rule of only asserting statements that will be accepted. Therefore, in this case the best choice is to state "I believe X", and not "X" or "X is somewhat plausible".

[anonymous]

We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property "systematically increases map-territory correspondence" (epistemic rationality) or "systematically finds a better path to goals" (instrumental rationality). [emphasis added]

Taboo "systematically."

Rather than talking about only a single case where you might be tempted to conclude that X is rational because it leads to your desired conclusion that global warming is false, or true, or whatever, one can explain why this (tends probabilistically to) work in the general case, given the general sort of universe we live in - that's "systematically".

6[anonymous]
Yes, when we talk about something having systematic effects, we mean it tends probabilistically to have those effects. But this is cheating on the taboo: you are merely substituting another term of art. "Tends probabilistically" is no clearer than "tends systematically." Probabilities pertain to degrees of belief, not to states of the world. To think otherwise is to commit a mind projection fallacy. On this we agree. So, how can a probability describe a systematic tendency in the universe?
3Vaniver
Typically, when I want to discuss probabilistic relations I'll use a probabilistic word, like "probably" or "tends to" or "correlates with." When I use the word "systematically," I typically want to imply a causal relationship. Taking Eliezer's old example, if I put a pebble in the bucket when a sheep leaves the fold and take a pebble out of the bucket when a sheep returns to the fold, I've created a causal system, which will have the systematic effect of letting me know how many sheep are outside the fold by checking the level of the bucket. Whether the system is deterministic or stochastic doesn't make much difference for how I think about the graph connecting the nodes, though it will change the underlying mathematics. Now, I'll note my answer is very different from Eliezer's, and I suspect that's because "tends probabilistically to" is a simpler concept than a causal system; I might be trying to explain addition using multiplication.
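(For concreteness, a minimal sketch of the pebble-and-bucket system described above - the class and method names are invented for illustration:)

```python
# The bucket's pebble count causally tracks the number of sheep outside the
# fold: a pebble goes in when a sheep leaves, and comes out when one returns.
# That enforced coupling is what makes the correspondence systematic rather
# than coincidental.

class PebbleBucket:
    def __init__(self):
        self.pebbles = 0

    def sheep_leaves(self):
        self.pebbles += 1

    def sheep_returns(self):
        self.pebbles -= 1

bucket = PebbleBucket()
bucket.sheep_leaves()
bucket.sheep_leaves()
bucket.sheep_returns()
print(bucket.pebbles)  # 1 - one sheep still outside the fold
```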
2[anonymous]
How about "if you try this many times, it will usually work." I'm not sure you can taboo 'usually' (or 'systematically'). It seems to be one way to invoke a rather fundamental-seeming process of abstracting from specific cases to general categories about which you can then form summarizing beliefs. If you ask someone to taboo something too basic, the best they can do is to rephrase and hope you'll get what basic thing they were referring to.
1Eugine_Nier
What does it mean to try the same thing many times?
-2khafra
It's a little tautological that, by whatever method of counting things together you've worked out, you count certain things together, and that number is the denominator in your probability number; and then you count a subset of those things together, and that's the numerator in your probability number. It's so tautological, given the definition of probability, that it might not count as "tabooing probability." But it seems worth pointing out anyway.
-2Eugine_Nier
First, I assume you meant to reply to some other comment. Furthermore, your description doesn't really work as a definition of probability, since it implicitly assumes all the things are equally probable.
3khafra
I'm confused about your assumption. You're right that I didn't clearly describe probability, though; I needed to make it clear that in the denominator you must count everything, however you group it.
-2Eugine_Nier
When I flip a coin, it can land on heads, tails, or edge; however, the probability that it lands on edge is not 1/3.
1khafra
Yes; to count everything that can occur when you flip an actual, physical coin, you must first invent the universe. It could also be swallowed by a passing bird, which then blunders into a metal foundry and is built into a new space probe, never landing at all. As a human, you just happen to count a huge number of outcomes together under "heads," a huge number of outcomes together under "tails," and a somewhat smaller number of outcomes together under "edge."
1wedrifid
In fact, it may be more than merely our universe. The probability assignment actually incorporates doubt about what the precise details of the physics of our universe are. So you may need to invent Kolmogorov complexity and Tegmark's Ultimate Ensemble before you get to the serious counting.
-1Eugine_Nier
Even that isn't enough since it doesn't incorporate our uncertainty about mathematics.
-1Kindly
When I flip a coin, I count some outcomes under "heads", some outcomes under "tails", and everything else I ignore and demand we flip the coin again.
-3Eugine_Nier
The problem is that "everything" contains infinitely many possibilities, so putting the number of possibilities in the denominator to calculate the probability doesn't work.
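(A minimal sketch of where this sub-thread ends up: khafra's numerator-over-denominator picture survives the unequal-probability objection if outcomes are counted with weights rather than raw tallies - the weights below are made-up numbers for illustration:)

```python
# Probability of an event as the weight of its outcomes over the total weight.
# With raw counting, {heads, tails, edge} would wrongly give P(edge) = 1/3;
# unequal weights (made-up illustrative numbers) avoid that.

outcome_weights = {"heads": 0.4999, "tails": 0.4999, "edge": 0.0002}

def probability(event, weights):
    return sum(weights[o] for o in event) / sum(weights.values())

print(probability({"heads"}, outcome_weights))  # ~0.5
print(probability({"edge"}, outcome_weights))   # 0.0002, not 1/3
```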
0curiousepic
Whenever I see semantic dissection in major posts, I always worry that language is just too messy, just a towering stack of cards, and wonder why there isn't more discussion about why we use English when the language doesn't seem optimal for science and seeking Truth. Obviously it's rational for those who already speak English to continue using it for lack of an immediately available, preferable alternative, but I don't see much discussing this fact, arguing whether we should start over, etc. Admittedly, I haven't yet grokked the "ways words can be wrong" sequence. My previous post on the topic. Help me get over my linguistic-existential dread by refuting (or accepting) the statement "To develop optimal rationality skills, the first step should be to redesign our linguistic operating system."

Back in the 17th century, several people had a go at redesigning the whole thing, motivated by the flood of new knowledge coming from scientific investigations and the great exploratory voyages, and a felt inadequacy of the language of the time for expressing it. The languages they designed never came into use, although a direct intellectual line can be traced from there down to the formalisation of mathematical logic and the development of the first computers.

But introducing a whole new language is much harder than introducing a new keyboard layout, and how far has Dvorak got? Nobody will bother except for a few geeks. Qapla'!

Instead, there have been useful suggestions in various sources for small, local tools that one can simply pick up and use. Here's a list of the ones that occur to me. The first three are from Korzybski, and 4 is from General Semantics (on which there's a thread here), but invented by David Bourland. 6 and 7 are my own observations, and 5 and 8 are easily googleable for more information.

  1. Subscripting, to draw attention to the fact that Fred(2010) is not Fred(2012), Freda(@home) is not Freda(@work), and Genghis(drunk) is not Genghis(sober).

  2. Liberal use of the

... (read more)
3Alejandro1
As RichardKennaway says, it has been tried before, and never worked. You might be interested in Umberto Eco's book "Search for the Perfect Language".
-1TheOtherDave
Well, one place to start is to stop conflating "our linguistic operating system" with the languages we speak. The former is a cognitive structure which all languages intelligible by humans have in common. Redesigning that might very well be a valuable step, but it's way outside our current capabilities, and is unlikely to be a first step (or even a tenth or a hundredth step). But, OK, fine then, should we redesign the languages we speak? I'm inclined to doubt it. What I expect happens once a large number of people speak the language is that the actual spoken language gets creolized and that it's just as easy to express fallacies in it as in any other human language. That said, speaking a particular language might be valuable in a sort of ritual sense... as a way of reminding ourselves that we are "speaking as rationalists," and should therefore strive for more precision and clarity and truth-preservation than we do in our ordinary lives. That said, there's a lot of site jargon that serves that purpose quite well already building on an English frame. So on balance, I'm inclined to reject the statement.

Taboo is useful when you notice two people arguing over an equivocation, to get them to stop doing that. I'm not sure what the use is when there wasn't already confusion about the word "systematically".

2[anonymous]
If it's useful when people argue about an equivocation, it should be useful when there simply is an equivocation. Here, it would be easier to expose the equivocation if someone tried to spell out what "systematic" means in this context, which is the problem of what the concept of probability means when you try to apply it to the usefulness of an algorithm. The equivocation in question is between recognizing that an algorithm's effectiveness depends on the concrete particulars of a given problem, and recognizing that an algorithm must be reliable if you are to use it to prove knowledge claims. The equivocation would have been easier to show if someone had made a serious attempt to unpack "systematically" (or "tends probabilistically"), which really does all the work in this account.
8thomblake
Since you seem to understand that there's an equivocation, wouldn't it be easier to just state up front what the two different meanings are supposed to be? I'm still not sure what you're trying to point out here. Can you be more explicit/specific?

Other possibly useful reference: the Wikipedia saying "verifiability not truth." The idea being that Wikipedia is written by mere humans without direct access to cosmic truth, so what's verifiable is all we have to go on. (The details of Wikipedia's epistemology can get a bit stupid at the edges, but the point stands.)

I often add "I believe" to sentences to clarify that I am not certain.

"Did you feed the dog?" "Yes"

and

"Did you feed the dog?" "I believe so"

have different meanings to me. I parse the first as "I am highly confident that I fed the dog" and the second as "I am unable to remember for sure whether I fed the dog, but I am >50% confident I did so."

2graviton
It always seems to me that any little disclaimer about my degree of certainty seems to disproportionately skew the way others interpret my statements. For instance, if I'm 90% sure of something, and carefully state it in a way that illustrates my level of confidence (as distinct from 100%), people seem to react as if I'm substantially less than 90% confident. In other words, any acknowledgement of less-than-100%-confidence seems to be interpreted as not-very-confident-at-all.
1buybuydandavis
I find a similar effect. It looks to me like most people systematically overstate probabilistic claims, over and above their general overconfidence. So when they say P(X) = c, their internal estimate is more like c(1 - delta), while the long-run frequency of X being true when they say P(X) = c is more like c(1 - delta)(1 - gamma). So when you say P(X) = c, they downgrade what you say by (1 - delta). Kind of a Gresham's law for probabilistic predictions - overconfident predictions drive out appropriately confident predictions.
1Shmi
Evolution is just a theory!

When Anna tells me, "I'm worried that you don't seem very curious about this," there's this state of mind called 'curiosity' that we both agree is important - as a matter of rational process, on a meta-level above the particular issue at hand - and I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.

Having someone who would occasionally point out deficiencies in one's rational processes sounds awesome. Do you think i…

9Vaniver
So, one of the easiest ways to detect curiosity is to notice things like posture and demeanor - which seems difficult to do over a text-based channel! I have noticed that online comments telling me "I think you're suffering from bias X" have seemed more like arguments than observations, whereas similar statements in person can be more like observations than arguments.
4TheOtherDave
There are people on this site whose thinking I respect enough that, were they to say something of this sort to me, I would at least acknowledge that I ought to re-evaluate the process that got me to where I am, despite their having not much intimate knowledge about me. (There are also people in my real life who have that property.) Whether I would actually do it is a much more complicated and contingent question.
4Wei Dai
I was asking more about the sending side of the advice rather than the receiving side. How do I debug someone else's rationality processes using just the information I can get from their posts and comments on LW? (Assuming they are not a newbie with really obvious flaws, but closer to Eliezer's level.)
2TheOtherDave
Hm. Are you asking "how could I tell that someone else isn't being rational?" or "how could I communicate to someone else that they aren't being rational in such a way that they'd benefit from it?" or something else?
5Wei Dai
Something else: I can sometimes tell that someone else on LW isn't being rational but can't see which part of their rationality process is broken, or not sufficiently activated. (Communicating this to them may also be a problem but wasn't the one I specifically had in mind.) I'm wondering if Eliezer thinks it is possible to do this over LW. Perhaps others have better skills for this than I do, or we should just try harder?
0TheOtherDave
Ah, gotcha. Yes, that makes sense; thanks for clarifying.
3gwern
It's been suggested: http://lesswrong.com/lw/6j1/find_yourself_a_worthy_opponent_a_chavruta/
0fortyeridania
This could have perverse consequences, because "You don't seem very curious about this" seems like a criticism. In my case anyway, having an irrationality-cop would have two effects. First, it would motivate me to avoid the criticism by being more rational (like it does for Eliezer). Second, it would motivate me to avoid the criticism by hiding my irrationality better. The latter effect would be bad, because then both the cop and I would overestimate my level of rationality. (Why would I, too, overestimate it? Because I'd hide my failures from myself as well, in an unconscious effort to hide them from others more effectively.) I think the fundamental issue here is that I dread criticism. (Solutions to this problem include exposure therapy and CBT.) People for whom this is less of a hurdle are likely to benefit more from having an irrationality-cop.
-2Peterdjones
It is certainly possible, it happens, and it generally results in the point-er losing karma. At least when the point-er suggests that all the answers might not be found in the Sequences.
0fortyeridania
I assume you are employing hyperbole. Nevertheless, I think your comment is unfair. Even just on LW, lots of great stuff isn't included in the Sequences. Moreover, people here regularly recommend materials (e.g., books) other than the Sequences.
0Peterdjones
I speak as I find.

I know as a matter of process that when a respected fellow rationalist tells me that I need to become curious, I should pause and check my curiosity levels and try to increase them.

How does one increase one's curiosity levels?

6Robert Miles
There's a post about this. @Eliezer Perhaps it's worth making "try to increase them" a link to lukeprog's "Get Curious" article?
1TheOtherDave
I increase my curiosity about a topic by attending closely to what specific questions related to that topic I'm not confident I know the answer to, what predictions I would make differently given higher confidence in various different answers to those questions, and what the consequences might be of being right about those predictions. Also, the longer I spend trying to think of such questions/predictions and failing, the more confident I become that increasing my curiosity about the topic is not a productive use of my time.

The reference to footnote 1 is missing from the post.

0Eliezer Yudkowsky
Heh, looks like the referring text got revised out. I've deleted the footnote.
3DSimon
The remaining footnote, the one about RationalWiki, should probably also be removed. It doesn't add anything to the article's point, and it's somewhat rude (in the social sense as well as the logical sense, i.e. you are presenting a strawman response to a question they were never asked).
8David_Gerard
As a long-time RW regular, I thought it was pretty accurate. Everyone thinks they're rational, particularly the infuriatingly hard-of-thinking. RW is Internet television and an enjoyable waste of your time at best. With useful bits. (This is approximately how I treat LW as well, of course.)
1Legolan
I agree. For those familiar with RationalWiki, I actually thought that it provided a nice contrasting example, honestly. Eliezer's definition for rationality is (regrettably, in my opinion) rare in a general sense (insofar as I encounter people using the term), and I think the example is worthwhile for illustrative purposes.
0Luke_A_Somers
Nice phrase!

I thought of a slightly different exception for the use of "rational": when we talk about conclusions that someone else would draw from their experiences, which are different from ours. "It's rational for Truman Burbank to believe that he has a normal life." 

Or if I had an extraordinary experience which I couldn't communicate with enough fidelity to you, then it might be rational for you not to believe me. Conversely, if you had the experience and tried to tell me, I might answer with "Based only on the information that I received from you, which is p…

0[anonymous]

From the common usage of the phrase "I believe" referred to in this context, I think you could generate interpretations as follows:

When a person says "I believe that something is going to happen":

  • They're communicating their degree of belief that something is going to happen, but are not fully confident

  • They're expecting that particular scenario to be more likely than the alternatives, but not certain

  • They're concentrating on that particular anticipation and expect it to happen, regardless of other plausible scenarios

  • They're acknowledging their

…
[This comment is no longer endorsed by its author]