Firewalling the Optimal from the Rational

Followup to: Rationality: Appreciating Cognitive Algorithms  (minor post)

There's an old anecdote about Ayn Rand, recounted by Michael Shermer in his "The Unlikeliest Cult in History" (note: calling a fact unlikely is an insult to your prior model, not the fact itself). It goes as follows:

Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss. "When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates. The distance in our sense of life is too great.' Often she did not wait until a friend had left to make such remarks."

Many readers may already have appreciated this point, but one of the Go stones placed to block that failure mode is being careful what we bless with the great community-normative-keyword 'rational'. And one of the ways we do that is by trying to deflate the word 'rational' out of sentences, especially in post titles or critical comments, which can live without the word.  As you hopefully recall from the previous post, we're only forced to use the word 'rational' when we talk about the cognitive algorithms which systematically promote goal achievement or map-territory correspondences.  Otherwise the word can be deflated out of the sentence; e.g. "It's rational to believe in anthropogenic global warming" goes to "Human activities are causing global temperatures to rise"; or "It's rational to vote for Party X" deflates to "It's optimal to vote for Party X" or just "I think you should vote for Party X".

If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting". A post about rational dieting would be one about how the sunk cost fallacy causes people to eat food they've already purchased even when they're not hungry, or about how the typical mind fallacy or law of small numbers leads people to overestimate how likely it is that a diet which worked for them will work for a friend. And even then, your title is 'Dieting and the Sunk Cost Fallacy', unless it's an overview of four different cognitive biases affecting dieting. In which case a better title would be 'Four Biases Screwing Up Your Diet', since 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing things to keep in mind.

By the same token, a post about GiveWell's top charities and how they compare to existential-risk mitigation is a post about optimal philanthropy, while a post about scope insensitivity and hedonic returns vs. marginal returns is a post about rational philanthropy, because the first is discussing object-level outcomes while the second is discussing cognitive algorithms. And either way, if you can have a post title that doesn't include the word "rational", it's probably a good idea, because the word gets a little less powerful every time it's used.

Of course, it's still a good idea to include concrete examples when talking about general cognitive algorithms. A good writer won't discuss rational philanthropy without including some discussion of particular charities to illustrate the point. In general, the concrete-abstract writing pattern says that your opening paragraph should be a concrete example of a nonoptimal charity, and only afterward should you generalize to make the abstract point. (That's why this post opened with the Ayn Rand anecdote.)

And I'm not saying that we should never have posts about Optimal Dieting on LessWrong. What good is all that rationality if it never leads us to anything optimal?

Nonetheless, the second Go stone placed to block the Objectivist Failure Mode is trying to define ourselves as a community around the cognitive algorithms; and trying to avoid membership tests (especially implicit de facto tests) that aren't about rational process, but just about some particular thing that a lot of us think is optimal.

Like, say, paleo-inspired diets.

Or having to love particular classical music composers, or hate dubstep, or something.  (Does anyone know any good dubstep mixes of classical music, by the way?)

Admittedly, a lot of the utility in practice from any community like this one can and should come from sharing lifehacks. If you go around teaching people methods that they can allegedly use to distinguish good strange ideas from bad strange ideas, and there's some combination of successfully teaching Cognitive Art: Resist Conformity with the less lofty enhancer We Now Have Enough People Physically Present That You Don't Feel Nonconformist, then that community will inevitably propagate what its members believe to be good new ideas that haven't been mass-adopted by the general population.

When I saw that Patri Friedman was wearing Vibrams (five-toed shoes) and that William Eden (then Will Ryan) was also wearing Vibrams, I got a pair myself to see if they'd work. They didn't work for me, which thanks to Cognitive Art: Say Oops I was able to admit without much fuss; and so I put my athletic shoes back on again.  Paleo-inspired diets haven't done anything discernible for me, but have helped many other people in the community. Supplementing potassium (citrate) hasn't helped me much, but works dramatically for Anna, Kevin, and Vassar.  Seth Roberts's "Shangri-La diet", which was propagating through econblogs, led me to lose twenty pounds that I've mostly kept off, and then it mysteriously stopped working...

De facto, I have gotten a noticeable amount of mileage out of imitating things I've seen other rationalists do. In principle, this will work better than reading a lifehacking blog to whatever extent rationalist opinion leaders are better able to filter lifehacks - discern better and worse experimental evidence, avoid affective death spirals around things that sound cool, and give up faster when things don't work. In practice, I myself haven't gone particularly far into the mainstream lifehacking community, so I don't know how much of an advantage, if any, we've got (so far). My suspicion is that on average lifehackers should know more cool things than we do (by virtue of having invested more time and practice), and have more obviously bad things mixed in (due to only average levels of Cognitive Art: Resist Nonsense).

But strange-to-the-mainstream yet oddly-effective ideas propagating through the community is something that happens if everything goes right. The danger of these things looking weird... is one that I think we just have to bite the bullet on, though opinions on this subject vary between myself and other community leaders.

So a lot of real-world mileage in practice is likely to come out of us imitating each other...

And yet nonetheless, I think it worth naming and resisting that dark temptation to think that somebody can't be a real community member if they aren't eating beef livers and supplementing potassium, or if they believe in a collapse interpretation of QM, etcetera. If a newcomer also doesn't show any particular, noticeable interest in the algorithms and the process, then sure, don't feed the trolls. It should be another matter if someone seems interested in the process, better yet the math, has some non-zero grasp of it, and is just coming to different conclusions than the local consensus.

Applied rationality counts for something, indeed; rationality that isn't applied might as well not exist. And if somebody believes in something really wacky, like Mormonism or that personal identity follows individual particles, you'd expect to eventually find some flaw in reasoning - a departure from the rules - if you trace back their reasoning far enough. But there's a genuine and open question as to how much you should really assume - how much it would actually be true to assume - about the general reasoning deficits of somebody who says they're Mormon, but who can solve Bayesian problems on a blackboard, explain what Governor Earl Warren was doing wrong, and analyze the Amanda Knox case correctly. Robert Aumann (Nobel laureate Bayesian guy) is a believing Orthodox Jew, after all.

But the deeper danger isn't that of mistakenly excluding someone who's fairly good at a bunch of cognitive algorithms and still has some blind spots.

The deeper danger is in allowing your de facto sense of rationalist community to start being defined by conformity to what people think is merely optimal, rather than the cognitive algorithms and thinking techniques that are supposed to be at the center.

And then a purely metaphorical Ayn Rand starts kicking people out because they like suboptimal music. A sense of you-must-do-X-to-belong is also a kind of Authority.

Not all Authority is bad - probability theory is also a kind of Authority and I try to be ruled by it as much as I can manage. But good Authority should generally be modular; having a sweeping cultural sense of lots and lots of mandatory things is also a failure mode. This is what I think of as the core Objectivist Failure Mode - why the heck is Ayn Rand talking about music?

So let's all please be conservative about invoking the word 'rational', and try not to use it except when we're talking about cognitive algorithms and thinking techniques. And in general and as a reminder, let's continue exerting some pressure to adjust our intuitions about belonging-to-LW-ness in the direction of (a) deliberately not rejecting people who disagree with a particular point of mere optimality, and (b) deliberately extending hands to people who show respect for the process and interest in the algorithms even if they're disagreeing with the general consensus.


Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "The Fabric of Real Things"

Previous post: "Rationality: Appreciating Cognitive Algorithms"

339 comments

calling a fact unlikely is an insult to your prior model, not the fact itself

Not necessarily. Your model could have been quite reasonable, and yet something weird happened in the world. Sometimes, people win the lottery twice on the same day.

I think EY is pointing to the case of somebody winning the lottery twice in a lifetime, which people would think is incredibly weird, despite it being quite normal (with enough people playing lotteries over enough years, some double winners are expected). I suspect that the "looks weird" due to having the wrong model is more common than "looks weird" due to being an outlier.

Indeed. The impression I get is that in calling Objectivism "the unlikeliest cult in history", the intent of "unlikeliest" isn't as a further insult to Objectivism. Rather, it's to show that the author is discussing something exceptional, and therefore interesting.

I think the point is that if something happens, it has probability 1 of having happened, so it doesn't make sense to call it "unlikely." A perfect model could have predicted it with probability 1. If you failed to predict it, it's because your model was imperfect.

I think, however, that plenty of reasonable models of group interactions given our current knowledge would have failed to predict the rise of Objectivism.
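To make the "insult to your prior model" point concrete, here is a toy Bayesian calculation (the numbers are purely illustrative, not from the post): suppose your model $M$ assigned the observed event $E$ probability $P(E \mid M) = 0.001$, while an alternative model $\neg M$ assigned it $P(E \mid \neg M) = 0.5$, and you started out fairly confident in $M$ with $P(M) = 0.9$. Then

$$\frac{P(M \mid E)}{P(\neg M \mid E)} = \frac{P(M)}{P(\neg M)} \cdot \frac{P(E \mid M)}{P(E \mid \neg M)} = \frac{0.9}{0.1} \cdot \frac{0.001}{0.5} = 0.018,$$

so $P(M \mid E) \approx 0.018$. Observing the "unlikely" event demoted the model, not the event: the event, having happened, now has probability 1 either way.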

One person who resisted Ronald was Ayn Rand. As one of the young libertarians (Ronald’s friend Murray Rothbard was another) who were invited to her apartment for intellectual discussions, he was cast into oblivion after a difference of opinion about . . . Rachmaninoff. Guests were asked to say who their favorite composers were, and when Rand’s turn came, she said “Rachmaninoff,” with specific reference to his second piano concerto. “Why?” Ronald asked. “Because he was the most rational,” Rand responded. At which Ronald laughed, thinking it must be a joke. He knew that the composer had dedicated that concerto to his psychiatrist — and anyway, rationality had nothing to do with its greatness. But Ronald’s laughter resulted in exile, and the loss of friends who were dear to him.

From an obituary for Ronald Hamowy.

Thank you for explaining that; I had no idea where he got Ayn Rand from.

(a) deliberately not rejecting people who disagree with a particular point of mere optimality, and (b) deliberately extending hands to people who show respect for the process and interest in the algorithms even if they're disagreeing with the general consensus.

Do you think Dmytry might be a good case study for this? I thought he had some interesting and novel ideas about processes/algorithms that at least didn't seem obviously wrong, as well as some technical understanding of things like Solomonoff Induction, and also had strong disagreements with many of us regarding FAI and AI Risk. Should we have "extended our hands" to him more (at least before he became increasingly trollish), and if so, how? (How would you taboo "extend hands", generally and in this specific instance?) If not, do you have someone else in mind who could serve as a concrete example?

It's my impression that yes, more hand extension would have been good, but I didn't follow his threads that closely.

I wonder if the trivial inconvenience of his not being that great a communicator might have put people off from following his threads.

Does somebody want to post one part of Dmytry that seems new and true? My impression on a quick skim was not favorable.

This comment, on a drawback of donating primarily to the charities you think are best: doing so can make it profitable for charities to invest in being, or appearing, better by your standards, provided various empirical parameters (availability of honest signals, your ability to distinguish different signals, the quantity of funds allocated by decision rules like yours, the costs of dishonest signals) fall in a narrow region. I am skeptical that this is a real issue in practice (e.g. GiveWell channels funds to a top charity, rather than diversifying), separate from the problem of assessing evidence (which is normally focused on finding signals that are costly to fake in any case), but it's still an interesting theoretical point which I hadn't seen made on Less Wrong before.

Meta: I suggest creating a sequence index, and putting a link to the next post in the sequence at the bottom of each post, like you already have for all your other sequences.

This is possible via the "Article Navigation", using the "seq_epistemology" tag. Maybe the tag was only added after that comment was written? In any case, it works quite well!

Thanks for pointing that out. I forgot to check for tags, so I'm not sure whether it was already there. I still think it should be made more direct, though.

If you're writing a post comparing the experimental evidence for four different diets, that's not "Rational Dieting", that's "Optimal Dieting".

Isn't that as wrong and misleading as using 'Rational Dieting'? Wouldn't 'Optimal' imply that this is the very best way to diet, when the article is actually just comparing the evidence for four diets? Just as 'Rational Dieting' carries an implication that your post discusses the cognitive algorithm for dieting, as opposed to four contributing things to keep in mind (and thus you should use 'Four Biases Screwing Up Your Diet' for a title), doesn't 'Optimal' imply the wrong thing? It seems to me like you are committing different fallacies (or errors) in trying to fix the previous fallacies (or errors) committed due to the misuse of the word 'rational'.

And, if you want to get technical, optimal implies both an objective function to measure the solution by, and a proof that no solutions are superior. "Optimize your diet" seems better than "optimal diets," but even then "four proven diets" seems superior to both of those.
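To spell out that technical sense (standard optimization notation, not anything from the original post): calling a diet $x^*$ "optimal" presupposes an objective function $f$ on a feasible set $X$, plus a guarantee that

$$x^* \in \operatorname*{arg\,max}_{x \in X} f(x), \qquad \text{i.e.} \qquad f(x^*) \ge f(x) \text{ for all } x \in X.$$

A post comparing four diets establishes, at best, a ranking of four particular elements of $X$; it neither fixes $f$ nor rules out better diets outside those four.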