Now imagine that one day, you're talking with someone who you strongly suspect is a Blue, and they remark on how irrational it is for so many people to believe the moon is made of cheese.
I'm a big fan of "Agree Denotationally But Object Connotationally" when this is the case.
Or, when talking to your fellow Greens about the moon, you would "agree connotationally but object denotationally". I find that for me this is actually even more common than the reverse.
think of the "skeptic" groups that freely mock ufologists or psychics or whatever, but which are reluctant to say anything bad about religion, even though in truth the group is dominated by atheists.
Okay, let's run with that example. If someone says something like "Theists are stupid"... I agree denotationally, in that I think theism is foolish and I'm aware that holding theistic beliefs is negatively correlated with intelligence. I disagree connotationally with the disdain and patronizing attitude implicit in the statement, and I dislike the motivations the person probably had for making it. If the same person had said "religiosity is negatively correlated with intel...
The whole idea of having a belief as a litmus test for rationality seems totally backward. The whole point is how you change your beliefs in response to new evidence.
Meanwhile, if a lot of people have a belief that isn't true, it is almost necessarily politically salient. The existence of God isn't an issue that is debated in the halls of government, but it is still hugely about group identity, which means that people can get mind-killed about it. The only reason it works as any kind of litmus test is that everyone here is/was already a part of the same group when it comes to theism.
I think the true objection to Stuart's post was less about climate change and more about branding LessWrong with an issue that has ideological salience. And that seems totally fair to me. If you have a one-issue litmus test, it's sort of weird to make it one that isn't specific enough to screen out even the most irrational liberals. At the very least, add a sub-test asking whether a person thinks carbon emissions are responsible for the Hurricane Sandy disaster, how confident they are that climate change causes more hurricanes, and what (if any) existential risk they assign to it. Catch the folks who think the moon is made out of gold in the filter.
The whole idea of having a belief as a litmus test for rationality seems totally backward. The whole point is how you change your beliefs in response to new evidence.
I think this is a very uncharitable interpretation of what the post in question is trying to say. First, the post isn't proposing a litmus test, but a test that is better than theism at identifying irrationality. Second, how would you know whether someone changes their beliefs in response to new evidence without assessing their beliefs in relation to shared evidence? There's no way Stuart was stupid enough to think evidence shouldn't be shared for this to work.
ETA: I'm not a native speaker, and I'm not sure how people use the word litmus test anymore.
"Litmus test" in common U.S. usage means a quick and treated-as-reliable proxy indicator for whether a system is in a given state. To treat X as a litmus test for rationality, for example, is to be very confident that a system is rational if the system demonstrates X, and (to a lesser extent) to be very confident that a system is irrational if the system fails to demonstrate X.
If I may don my Evil Hansonian hat for a moment, conventional politics isn't so much about charting the future of our society as about negotiating the power relationships between tribal alignments. Values and ethical preferences and vague feelings of ickiness go into those alignments (and then proceed to feed back out of them), but it's far rarer for people to support political factions out of de-novo ethical reasoning than you'd guess from talking to them about it. The mind-killer meme is fundamentally an encouragement to be mindful of that, especially of the nasty ideological feedback loops that it tends to imply, and a suggestion to focus on object-level issues where the feedback isn't quite so intense.
One consequence of this is that political shifts happen at or above human timescales, as their subjects become things that established tribes notice they can fight over. If you happen to be a singularitarian, then, you probably believe that the kinds of technological and social changes that LW talks about will at some point -- probably soon, possibly already -- be moving faster than politics can keep up with. Speaking for myself, I expect anything that conventional legislature...
To contribute a "trick" that, in my experience, makes this easier, when you hear a political point, disentangle the empirical claims from the normative claims, and think to yourself, "Even if their empirical claims are correct, that doesn't necessarily mean I should accept their normative claims. I should examine the two separately."
Just as commenters shouldn't have assumed Eliezer's factual observation was an argument in favor of regulation,
But did they assume it?
Or did they conclude it based on inferences from Eliezer's comment and the broader context?
To recast that in more local-jargon, Bayesian terms... how high was their prior probability that Eliezer was making an argument in favor of regulation, and how much evidence in favor of that proposition was the comment itself, and did they over-weight that evidence?
Beats me, I wasn't there.
I might not be able to tell, even if I had been there.
But saying they "assumed" it in this context connotes that their priors were inappropriately high.
I'm not sure that connotation is justified, either in the specific case you quote Eliezer as discussing, or in the general case you and he treat it as illustrative of.
Maybe, instead, they were overweighting the evidence provided by the comment itself.
Or maybe they were weighting the evidence properly and arriving at, say, a .7 confidence that Eliezer was making an argument in favor of regulation, and (quite properly) made their bet as though that was the case... and turned out, in this particular case, to be...
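(To make the bookkeeping behind that kind of judgment explicit, here is the standard odds form of Bayes' theorem; the numbers below are purely illustrative, not taken from the thread.)

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}$$

where $H$ is "Eliezer was arguing for regulation" and $E$ is the comment as written. Prior odds of 1:1 combined with a likelihood ratio of about 2.3 give posterior odds of 2.3:1, i.e. roughly the 0.7 confidence mentioned above; "over-weighting the evidence" amounts to treating that likelihood ratio as larger than it actually is.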
a minor typo:
median confidence ... is 79%, and the mean confidence is 90%.
That is impossible with confidence bounded by 100%. Take an extreme case: just over half the population answers 79%, and the rest answer 100%. Then the mean is just under 89.5%. I checked: you switched the mean and median.
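(Spelling out the arithmetic of that extreme case: if the median were 79%, at least half the answers would be at most 79%, and every answer is at most 100%, so the mean would be bounded by

$$\tfrac{1}{2}\cdot 79\% + \tfrac{1}{2}\cdot 100\% = 89.5\% < 90\%.$$

This is just the comment's own argument in symbols.)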
He thinks Stuart is factually wrong and that the global warming question isn't a good predictor. Fortunately, that's something we can test.
Before we run the numbers, what's your confidence interval for the IQ difference, in the 2012 LessWrong survey, between people who believe that p(global warming) > 0.9 and people who believe that p(global warming) < 0.5?
If you just correlate p values with IQ, what's your confidence interval for the resulting correlation coefficient?
As IQ might not be the same thing as rationality, how do you think the global warming answer will predict whether someone gives rational answers to the CFAR questions?
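(For anyone who wants to actually run this, here is a rough sketch of the computation. The file name and column names are guesses at how the 2012 survey export might be labeled, not the real ones; the thresholds are the ones proposed above.)

```python
import pandas as pd
from scipy import stats

# Hypothetical file and column names -- adjust to the actual 2012 survey export.
df = pd.read_csv("lw_survey_2012.csv")
df = df.dropna(subset=["IQ", "PGlobalWarming"])

# Survey answers are assumed to be percentages; convert to probabilities.
p_agw = df["PGlobalWarming"] / 100.0

# IQ difference between confident believers (p > 0.9) and doubters (p < 0.5).
high = df.loc[p_agw > 0.9, "IQ"]
low = df.loc[p_agw < 0.5, "IQ"]
print("Mean IQ difference:", high.mean() - low.mean())
print("Welch t-test:", stats.ttest_ind(high, low, equal_var=False))

# Simple correlation between stated probability and IQ.
r, p_value = stats.pearsonr(p_agw, df["IQ"])
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")
```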
You shouldn't assume the suspected Blue's observation is a pro-moon shot or anti-Green argument.
("Shouldn't assume", taken literally, sounds like an endorsement of forming beliefs for reasons other than their correctness. I think I agree with the intended point, but I'd put it somewhat differently.)
Rather than focusing on the factual question of whether a remark is motivated by identity signaling, it's sufficient to disapprove of participation in any moves that are clearly motivated by signaling or engage with the question of whether other mov...
Just as commenters shouldn't have assumed Eliezer's factual observation was an argument in favor of regulation
Eliezer's response there always struck me as odd. Was he making a simple factual observation? When I read the comment in question, it reads as the summary of an argument that regulation is necessary. Eliezer doesn't endorse that argument (he doesn't think that regulation should be necessary), but he's making the claim "society will require regulation because of argument X." Unsurprisingly, people respond to X as an argument for ...
If you still don't find any of this odd, think of the "skeptic" groups that freely ufologists or psychics or whatever
Is that statement missing a word?
You know, when I first read this post I thought "You have some interesting points, but this is obviously just a clever argument that's going to be used to justify posting stupid bullshit to LessWrong," so I downvoted. I didn't make that remark in public, though, because it would be rude and maybe I would end up being wrong.
Now that I see what this post is being used to justify, it seems clear that my prediction was correct.
Christ is it hard to stop constantly refreshing here and ignore what I know will be a hot thread.
I've voted on the article, read a few comments, cast a few votes, and made a few replies myself. I'm precommitting to never returning to this thread, and I'm going to bed immediately. If anyone catches me commenting here after the day of this comment, please downvote it.
Damn I hope nobody replies to my comments...
If you read Stuart's original post, it's clear
I hate this rhetoric. I did read Stuart's post.
If you'd read Vaniver's comment, you'd agree that Stuart was acting in bad faith. So you didn't read it, but then you responded to it! It is extremely rude to respond to a comment you haven't read.
How about Stuart_Armstrong's response to satt's comment? It looks to me like Stuart agrees there was ambiguity there.
(And, to be clear, by "ambiguity there" I am using ambiguity as a one-place word by choosing the maximum of the two-place ambiguity among the actual readers of the post. Stuart has no ambiguity about what Stuart meant, but Steven does, and so the one-place ambiguity is Steven's ambiguity.)
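(A toy sketch of the one-place/two-place move being described, with made-up ambiguity scores purely for illustration:)

```python
# Two-place "ambiguity": how ambiguous a given post is to a particular reader.
# The scores below are invented for illustration only.
ambiguity_to_reader = {
    ("stuarts_post", "Stuart"): 0.0,  # the author finds his own words unambiguous
    ("stuarts_post", "Steven"): 0.6,
    ("stuarts_post", "satt"): 0.4,
}

def ambiguity(post, readers):
    """One-place ambiguity: the maximum of the two-place ambiguity over the actual readers."""
    return max(ambiguity_to_reader[(post, reader)] for reader in readers)

print(ambiguity("stuarts_post", ["Stuart", "Steven", "satt"]))  # 0.6, i.e. Steven's ambiguity
```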
the broader society isn't going to stop spontaneously labeling various straightforward empirical questions as Blue or Green issues. If you want to stop your mind from getting killed by whatever issues other people have decided are political, the only way is to control how you react to that.
This is true and embodies the quality of Tsuyoku naritai.
Thank you, Chris, for writing this. Your article covers about 70% of what I was trying to get across in my recent article. However, your post is much better optimized for persuasion than mine, I have to admit. It's just as obvious in your post which party is the one holding the crazy view in your example, and you are presenting the same viewpoint, that people here should stop pretending that crazy viewpoints are any more valid for being considered "political", yet somehow it comes across better. Maybe because you avoid using the word "crazy"...
That's a LOT of text, without a clear thesis or recommendation. Can you summarize your point and then outline the evidence rather than going purely on detailed examples?
Are you just trying to say that it's difficult to separate your beliefs and values, difficult to discuss only a segment of a popular belief cluster, and still more difficult to signal to others that you're doing so?
Follow-up to: "Politics is the mind-killer" is the mind-killer, Trusting Expert Consensus
Gratuitous political digs are to be avoided. Indeed, I edited my post on voting to keep it from sounding any more partisan than necessary. But the fact that writers shouldn't gratuitously mind-kill their readers doesn't mean that, when they do, the readers' reaction is rational. The rules for readers are different from the rules for writers. And it especially doesn't mean that when a writer talks about a "political" topic for a reason, readers can use "politics!" as an excuse for attacking a statement of fact that makes them uncomfortable.
Imagine an alternate history where Blue and Green remain important political identities into the early stages of the space age. Blues, for complicated ideological reasons, tend to support trying to put human beings on the moon, while Greens, for complicated ideological reasons, tend to oppose it. But in addition to the ideological reasons, it has become popular for Greens to oppose attempting a moonshot on the grounds that the moon is made of cheese, and any landing vehicle put on the moon would sink into the cheese.
Suppose you're a Green, but you know perfectly well that the claim the moon is made of cheese is ridiculous. You tell yourself that you needn't be too embarrassed by your fellow Greens on this point. On the whole, the Green ideology is vastly superior to the Blue ideology, and furthermore some Blues have begun arguing we should go to the moon because the moon is made of gold and we could get rich mining the gold. That's just as ridiculous as the assertion that the moon is made of cheese.
Now imagine that one day, you're talking with someone who you strongly suspect is a Blue, and they remark on how irrational it is for so many people to believe the moon is made of cheese. When you hear that, you may be inclined to get defensive. Politics is the mind-killer, arguments are soldiers, so the point about the irrationality of the cheese-mooners may suddenly sound like a soldier for the other side that must be defeated.
Except... you know the claim that the moon is made of cheese is ridiculous. So let me suggest that, in that moment, it's your duty as a rationalist to not chastise them for making such a "politically charged" remark, and not demand they refrain from saying such things unless they make it perfectly clear they're not attacking all Greens or saying it's irrational to oppose a moon shot, or anything like that.
Quoth Eliezer:
Just as commenters shouldn't have assumed Eliezer's factual observation was an argument in favor of regulation, you shouldn't assume the suspected Blue's observation is a pro-moon shot or anti-Green argument.
The above parable was inspired by some of the discussion of global warming I've seen on LessWrong. According to the 2012 LessWrong readership survey, the mean confidence of LessWrong readers in human-caused global warming is 79%, and the median confidence is 90%. That's more or less in line with the current scientific consensus.
Yet references to anthropogenic global warming (AGW) in posts on LessWrong often elicit negative reactions. For example, last year Stuart Armstrong wrote a post titled "Global warming is a better test of irrationality than theism." His thesis was non-obvious, yet on reflection, I think, probably correct. AGW-denialism is a closer analog to creationism than to theism. As bad as theism is, it isn't a rejection of a generally accepted (among scientists) scientific claim with a lot of evidence behind it, just because the claim clashes with your ideology. Creationism and AGW-denialism do fall under that category, though.
Stuart's post was massively downvoted (currently at -2, but at one point I think it went as low as -7). Why? Judging from the comments, not because people were saying, "Yeah, global warming denialism is irrational, but it's not clear it's worse than theism." Here's the most-upvoted comment (currently at +44), which was also cited as the "best reaction I've seen to discussion of global warming anywhere" in the comment thread on my post Trusting Expert Consensus:
If you read Stuart's original post, it's clear this comment is reading ambiguity into the post where none exists. You could argue that Stuart was a little careless in switching between talking about AGW and global warming simpliciter, but I think his meaning is clear: he thinks rejection of AGW is irrational, which entails that he thinks the stronger "no warming for any reason" claim is irrational. And there's no justification whatsoever for suggesting Stuart's post could be read as saying, "if your estimate of future warming is only 50% of the estimate I prefer you're irrational"—or as taking a position on ethical theories, for that matter.
What's going on here? Well, the LessWrong readership is mostly on board with the scientific view on global warming. But many identify as libertarians, and they're aware that in the US many other conservatives/libertarians reject that scientific consensus (and no, that's not just a stereotype). So hearing someone say AGW denialism is irrational is really uncomfortable for them, even if they agree. This leaves them wanting some kind of excuse to complain; one guy thinks of "this is ambiguous and too political" as that excuse, and a bunch of people upvote it.
(If you still don't find any of this odd, think of the "skeptic" groups that freely mock ufologists or psychics or whatever, but which are reluctant to say anything bad about religion, even though in truth the group is dominated by atheists. Far from a perfect parallel, but it's still worth thinking about.)
When the title for this post popped into my head, I had to stop and ask myself if it was actually true, or just a funny Smokey the Bear reference. But in an important sense it is: the broader society isn't going to stop spontaneously labeling various straightforward empirical questions as Blue or Green issues. If you want to stop your mind from getting killed by whatever issues other people have decided are political, the only way is to control how you react to that.