TLDR: No. But I cautiously trust many common EA/rationalist opinions.
When I’m searching for help online, I start some of my search queries with prefixes such as site:lesswrong.com. That means Google will only return search results from LessWrong.
I’ve searched site:lesswrong.com cold shower, site:lesswrong.com optimal tooth brushing, site:lesswrong.com wirecutter, site:astralcodexten.substack.com aromatherapy, and site:forum.effectivealtruism.org where should I live.
LessWrong is the site of the rationalist community. They imply they’re less wrong than everyone else. Astral Codex Ten is a blog by the prominent rationalist Scott Alexander. His rationalist fame suggests he’s especially less wrong. And the EA Forum is the forum of the effective altruism (EA) movement. Effective altruism is about “doing good better.”
But are the rationalists really less wrong? Are the effective altruists truly doing good better? How can I evaluate them in as unbiased a way as possible? After all, I don’t only read rationalist content for general (i.e., Lifehacker-esque) productivity advice. I read what rationalists say about rationality. They influence how I think.
Isn’t that a lot of trust to place in communities I got into because I liked the blog of the Michael Jordan of coding bootcamps? How have the rationalists changed me?
What I’ve Learned From The Rationalists
I think the following list contains the most important lessons about rationality that I’ve learned from rationalists.
Always Be Rational
When I moved to San Francisco in 2015, I caught the startup bug. I came up with new company ideas all the time and dreamed of getting into Y Combinator.
Enough people are the same way that there’s no shortage of advice for wannabe entrepreneurs. I’d hear tropes like “founders should be overconfident,” “fake it till you make it,” and “move fast and break things.”
I agree with the spirit of those statements. If someone doesn’t believe in themself enough, or they’re not willing to take enough risks, I wouldn’t bet on their startup succeeding. I think it makes sense to “fake it” (i.e., pretend to be confident and/or lie) when appropriate too.
But I wouldn’t take those tropes too seriously. Sometimes I’ve been too confident in my ability. I’ve faked it by telling people I’d complete a task by a certain time and failed to do it. And sometimes, I don’t think it’s worth it to risk breaking something. Meta (Facebook) changed its motto from “Move fast and break things” to “Move fast with stable infrastructure.” That seems fair if they still lose close to $163,565 every minute the app goes down.
I refer to those tropes as reversible advice. Scott Alexander suggests considering the opposite of the advice you’re receiving if 1) there are plausibly near-equal groups of people who need this advice versus the opposite advice, or 2) you’ve self-selected into the group of people receiving this advice by, for example, being a fan of the blog / magazine / TV channel / political party / self-help-movement offering it.
And The Scout Mindset, by Julia Galef, implies that nobody should be overconfident. It describes how when Elon Musk founded SpaceX, he thought there was a 10% chance that a SpaceX craft would make it into orbit. It states he thought there was a 10% chance Tesla would succeed too. And that Jeff Bezos thought there was a 30% chance Amazon would succeed.
Musk said, “If something's important enough, you should try. Even if the probable outcome is failure.” I think that suggests he makes bets that maximize his expected utility.
Maximize Expected Utility
I define being rational as making decisions that maximize expected utility.
Imagine someone is about to roll a traditional six-sided die. You have the opportunity to bet $1 million that the die will land on 1. If you win, you get another $7 million. Otherwise, you lose everything.
The expected value of this bet is the amount of money you’d expect to make from it. That would be $333,333.33.
So should you make this bet? If you can make this bet an unlimited number of times, and you’d like more money, definitely.
But what if you have exactly $1 million, no source of income, you don’t know what you’d do with $7 million, and you’re only allowed to make this bet once?
You can use utility points that reflect what you fundamentally value to make this decision. You could fundamentally value anything, such as how long you’ll live, your dignity, or your happiness. Let’s pretend you fundamentally value happiness. You may decide losing $1 million would decrease your happiness by 100 hypothetical happiness points. And winning $7 million would increase your happiness by 200 points. In this case, your expected utility is -50 happiness points.
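To make the arithmetic above concrete, here’s a minimal sketch in Python. The function name and structure are my own; the dollar amounts and happiness points come straight from the example:

```python
def expected_value(p_win, win_payoff, lose_payoff):
    """Probability-weighted average of the two possible outcomes."""
    return p_win * win_payoff + (1 - p_win) * lose_payoff

# Dollar terms: 1/6 chance of winning $7 million, 5/6 chance of losing the
# $1 million stake.
ev_dollars = expected_value(1/6, 7_000_000, -1_000_000)  # ≈ $333,333.33

# Utility terms: +200 happiness points on a win, -100 on a loss.
ev_utility = expected_value(1/6, 200, -100)  # -50 points
```

The same function handles both cases; only the units change, which is the point: the bet is positive in dollars and negative in happiness points.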
As someone who wouldn’t want to lose all of my money, would I have made this bet before I’d read about maximizing expected utility? I don’t think so. I was already doing the math implicitly.
I still normally do the math implicitly. But I think it’s been helpful to think more consciously about my values and probabilities, both to make decisions for myself and to resolve disagreements with others.
Values, Probabilities And Semantics Cause All Disagreement
Holden Karnofsky discusses the idea that if people directly stated their values (i.e., what they care about) and probabilities (i.e., their odds something is true), they’d always understand why they disagree with someone. This makes sense to me. Since, as Karnofsky says, people don’t always communicate clearly, I’d say almost every disagreement comes down to at least one of values, probabilities, or semantics (i.e., what people mean by what they say).
I don’t think there’s a foolproof way to resolve a disagreement over values. Who am I to tell you how much happiness you’d receive from winning $7 million?
But hopefully, talking things over can resolve semantic debates. And disagreements over probabilities can be tested.
The Importance Of The Experimental Method
In the second chapter of Harry Potter and the Methods of Rationality, by Eliezer Yudkowsky, Professor McGonagall turns into a cat in front of Harry. Harry freaks out:
Harry was breathing in short gasps. His voice came out choked. "You can't DO that!"
"It's only a Transfiguration," said Professor McGonagall. "An Animagus transformation, to be exact."
"You turned into a cat! A SMALL cat! You violated Conservation of Energy! That's not just an arbitrary rule, it's implied by the form of the quantum Hamiltonian! Rejecting it destroys unitarity and then you get FTL signalling! And cats are COMPLICATED! A human mind can't just visualise a whole cat's anatomy and, and all the cat biochemistry, and what about the neurology? How can you go on thinking using a cat-sized brain?"
Professor McGonagall's lips were twitching harder now. "Magic."
"Magic isn't enough to do that! You'd have to be a god!"
And then Harry collected himself. He thought “The March of Reason would just have to start over, that was all; they still had the experimental method and that was the important thing.”
I learned about the scientific method in elementary school. But I never appreciated it until I read this passage. Even if everything I think I know turns out to be wrong, I can always find the truth through experimentation.
But I think I ultimately believe what I want to believe.
Confirmation Bias Is Everywhere
I’d heard of confirmation bias before I’d heard of the rationalist community. I would’ve defined it as believing what you want to believe. And I’d still use that definition. But I’d narrowly thought of confirmation bias as a reason I’d look for evidence to justify why I’d be successful (e.g., why my startup will succeed) or why I should feel intelligent (e.g., why my political opinion is correct).
I appreciate how The Scout Mindset showed me that confirmation bias could lead me to believe something negative about myself. For example, I remember the first time I was exposed to someone I thought might have coronavirus, on April 20, 2020. My gut instinct was that if my roommate had covid, he’d probably already spread it to me and my other roommates. So there was nothing I could do. I believe I specifically said something like we’re all fucked or screwed.
My assumption that my roommate could’ve already given me covid still feels reasonable. It was convenient and incorrect to assume there was nothing I could do. I could’ve started wearing a mask, socially distanced, and encouraged my roommates to do the same. I could’ve left my house. My personal coronavirus risk tolerance has changed over time. The point is that I didn’t have to assume I was fucked or screwed. I had a choice.
Similarly, from 2016 to 2021, I generally felt 100% certain that I should focus my self-improvement efforts on becoming a better software engineer. After all, it was too late to switch careers. That belief motivated me to code.
However, I shouldn’t have been so certain. I didn’t have to code. I told myself that so I could believe I didn’t have a decision to make. That made me happy immediately. Yes, thinking about what to do can be stressful. But it’s often worth it.
I don’t think the rationalists have fundamentally reshaped me. Before finding the rationalist community, I wouldn’t have suggested being irrational, ignoring the experimental method, or succumbing to confirmation bias. The rationalists gained my trust by telling me things that I already believed, or was open to believing, in ways that helped me with self-introspection.
Granted, I suppose any cult member believes what they’re open to believing. And my trust in EAs/rationalists has shaped my opinions on important issues. I just told my roommate that I leaned against funding gain-of-function research. Until writing that sentence, I thought that was the EA/rationalist stance while the “expert,” Anthony Fauci, currently supported gain-of-function research. I now see he hasn’t publicly stated support for gain-of-function research since at least 2018.
Most significantly, I lean towards believing the EAs/rationalists are right that there’s at least a 1% chance that an artificial intelligence will cause human extinction over the next century!
But I don’t believe what I said about gain-of-function research or AI as much as I believe things I actually understand.
Ultimately, I think limiting some of my search queries to EA/rationalist websites is a statement about Google’s competence. I believe EAs/rationalists are generally rational and have values similar to mine. So I’d rather search Google for site:lesswrong.com exercise than think up a search query to help Google understand my values, such as efficient exercise to maximize longevity and mental health.
However, while searching EA/rationalist sites is sometimes a useful heuristic, the rationalists have helped me appreciate how easy it is to believe the truth is convenient. If a question is important enough, I’ll do whatever it takes to find the answer.
(cross-posted from my blog: https://utilitymonster.substack.com/p/brainwashed)
This post explains how I got into EA. And I found the rationalist community through EA. My impression is that most rationalists are also members of the EA community. So a lot of my trust in the EA community carried over to the rationalist community.
Throughout this post, I use whichever term out of EA or rationalist that feels more appropriate. Or I use both terms.
In case this would’ve been considered plagiarism, I noticed that Chapter 8 of The Scout Mindset (pg 105) starts with a similar story and uses the term “theater bug.”
Although, I could’ve misinterpreted the intended spirit of those statements.
It doesn’t say what Musk specifically means by Tesla succeeding. And all the comments where Musk says this came after Tesla and SpaceX had already had some success (i.e., they were worth billions). The earliest statement cited in The Scout Mindset where Musk says he thought one of them would fail is from 2014. I lean towards believing that Musk isn’t trying to appear humble. My impression is that Tesla and SpaceX both nearly went bankrupt in 2008. I imagine he thought it wouldn’t be practical to tell the public he expected them to fail before they were successful enough.
Likewise, the earliest statement I could find where Jeff Bezos said he thought Amazon had a 30% chance of success was in 1999, after Amazon was already a public company.
Page 113 of The Scout Mindset may have inspired me to use this example.
The die is equally likely to land on 1, 2, 3, 4, 5, and 6. You’d win on 1, one of the six possible outcomes, and if you win you earn an additional $7 million: 1/6 × 7,000,000 = 1,166,666.67. You’d lose on 2, 3, 4, 5, and 6, five of the six possible outcomes: 5/6 × -1,000,000 = -833,333.33. 1,166,666.67 + (-833,333.33) = $333,333.33.
Once again, you have a 1/6 chance of winning the bet. In that case, you’d get 200 utility points. And you have a 5/6 chance of losing the bet and losing 100 utility points. 1/6 * 200 + 5/6 * -100 = -50.
Philosophical Interlude: Imagine you are in a vacuum. A vacuum where all the utility that will ever be experienced by others depends on your actions right now. You can bet on a fair coin. If you bet, you must bet all the utility that will ever be experienced by all beings in all universes. The bet is slightly more than double or nothing. If you win, the utility won will be divvied out to make everyone equally well off. Past unhappy beings will come back to life and receive utility until they’ve become slightly happy overall. Afterward, new slightly happy beings will be born. But losing means extinction. All beings in all universes will die. No new beings will ever be created.
If you win, you can make the same bet again. And again. Forever. You won’t age or have any health problems while you bet. While you’re betting, all universes will be paused. Nobody will feel the happiness you win until you finish betting.
What do you do? (Feel free to adjust the hypothetical to your moral views so you face a similar dilemma.)
If I hadn’t added, “While you’re betting, all universes will be paused. Nobody will feel the happiness you win until you finish betting.”, I’d happily bet. But with that condition, I don’t know. The more I bet, the more likely I am to end the universe without making anyone happier. Yet the expected utility of each bet is positive.
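The tension in the interlude can be made concrete with a little arithmetic. This is a hedged sketch, assuming a fair coin and a payout of 2.1× the stake as an illustrative stand-in for “slightly more than double or nothing”:

```python
# Each bet stakes everything on a fair coin for slightly more than double
# or nothing (2.1x here, an illustrative number). Every individual bet has
# positive expected utility, yet a single loss ends everything, so the
# chance of still existing after n bets shrinks as (1/2)^n.

def survival_probability(n_bets):
    """Chance of winning all n fair coin flips in a row."""
    return 0.5 ** n_bets

def expected_utility_after(n_bets, start=1.0, payout_multiplier=2.1):
    """Expected total utility after n all-or-nothing bets.

    Each bet multiplies utility by payout_multiplier with probability 1/2
    and zeroes it otherwise, so the expected multiplier per bet is
    payout_multiplier / 2 = 1.05 > 1.
    """
    return start * (payout_multiplier / 2) ** n_bets

# Expected utility grows without bound while survival becomes vanishingly
# unlikely: after 20 bets, the odds of not having ended everything are
# roughly one in a million, yet the expected utility is still rising.
print(survival_probability(20))    # ~9.5e-07
print(expected_utility_after(20))  # ~2.65
```

This is exactly the dilemma: each bet looks good in expectation, but the strategy of betting forever loses with probability approaching 1.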
In practice, I haven’t found a scenario where it doesn’t make sense to maximize expected utility. I’ll watch out for real-life scenarios where payoffs are potentially delayed infinitely.
Although, there are some edge cases where the test could take an infinite amount of time (e.g., testing whether the universe takes up infinite space).
It’s available as an ebook too. I linked to the audio version because I thought it was well done.
Galef uses the term “motivated reasoning” instead of confirmation bias. In the book (pg 6), she says they mean the same thing.
I imagine The Scout Mindset isn’t the only resource which demonstrates that confirmation bias could lead someone to believe something negative about themself. But, as of May 16, 2022, I could only find one example of that from the first page of Google results when I search “confirmation bias” or “motivated reasoning.” That example is how someone who believes the world will end will only believe the end has been delayed when an apocalypse doesn’t happen. I don’t know if reading that example earlier would’ve helped me recognize scenarios like the ones about coronavirus and software engineering above. (And the world might end at some point.)
The google results I looked at for confirmation bias are: Wikipedia, Encyclopedia Britannica, VeryWellMind, the abstract of this article, Farnam Street, SimplyPsychology, The Decision Lab, Psychology Today, and Investopedia. For motivated reasoning, I looked at Wikipedia, Psychology Today, Discover Magazine, Oxford Bibliographies, iResearchNet, Forbes, APA, this paper's abstract, and this paper's summary. I didn’t watch any videos from the results.
And I vaguely remember reading that con artists initially tell you stuff that’s true to earn your trust. Plus, there have been large charity scams before. Although, the entirety of EA being a scam would have to be a massive conspiracy. It’s more likely that some organizations/initiatives associated with the EA and/or rationalist communities are deemed ineffective (e.g., Raising For Effective Giving, No Lean Season, more examples here and here), or have serious issues (e.g., Leverage Research, The Monastic Academy). I also don’t know how I’d measure the effectiveness of many organizations focused on preventing existential risks, and I’d understand if someone felt EA nonprofits were spending too much on overhead. I’d bet some EA nonprofits (e.g., Redwood Research, Open Philanthropy) pay their average employee over six figures. There’s no formal definition of what makes an organization an EA/rationalist organization.
The linked article’s author, Kelsey Piper, is a member of the EA/rationalist communities.
He wrote an op-ed calling for gain-of-function research in 2011. And he apparently praised the lifting of the U.S. ban on gain-of-function research in 2018. I haven’t watched the video posted citing that claim. I think I had the impression Fauci clearly currently supports gain-of-function research because I didn’t notice the date on a screenshot of his 2011 op-ed in this article.
And Google may be promoting the values of the Fellowship of Friends.
If I just search “exercise” on Google, I get articles that state general reasons why exercise is good or exercises that are good for everyone. Here’s my first page of results: Mayo Clinic, Wikipedia, Healthline, WebMD, NHS, NHS again, and Harvard Health. The only result I might go back to at some point is the Wikipedia page. It seemed fairly thorough. I didn’t look at videos, podcasts, and articles labeled as news from my results.
For example, here’s my first page of article results from googling “efficient exercise to maximize longevity and mental health”: AARP, Time, Longevity.Technology, Blue Zones, Mental Health Foundation, Medical News Today, Andrew Merle, Harvard Health, Washington Post, Amherst College. My overall takeaway was that there’s a lot of conflicting advice, and no source stood out as great. And here’s a link to LessWrong posts on exercise. This post acknowledges some of the questions I have, but doesn’t answer them. And the author’s statement, “you are now as knowledgeable as any personal trainer I've spoken with,” made me feel he was overconfident.
I also searched site:astralcodexten.substack.com exercise. And I found this comment and this comment. They were similar to this LessWrong comment. So because I cautiously trust rationalists, and because I didn’t think anything Google showed before seemed better, I’d lean towards looking to those sources if I wanted to learn more about fitness. Not that I ever expect to have much confidence that I’m exercising optimally.
There isn’t much on LessWrong about cold showers or optimal tooth brushing.
Until writing that sentence, I thought that was the EA/rationalist stance, but the “expert,” Anthony Fauci, currently supported gain-of-function research. I now see he hasn’t publicly stated he supports gain-of-function research since at least 2018.
Anthony Fauci's current position seems to be that if the NIH has a meeting to declare that the research in a paper that Fauci mailed his colleagues in a PDF titled "Baric, Shi et al - Nature medicine - SARS Gain of function" isn't gain of function, then it's not gain-of-function research.
In such an environment it's pretty hard to take public statements about whether he supports gain of function research at face value.
Focusing on what opinions Fauci holds in print also ignores his role in actually directing research funding toward gain-of-function research within NIAID. As long as that happens, it would make sense to count him as supporting gain-of-function research.
The main problem with the 1 million versus 7 million idea is that losing 100 versus gaining 200 is nowhere near describing accurate utility to pretty much anyone who doesn't already have a lot of money.
Perhaps the real lesson is that quantifying something doesn't make it more accurate; it just moves around which parts you need to get accurate. Doing a calculation with inaccurate numbers is no better than just coming up with an equally inaccurate result without calculation. It's also easier to make certain kinds of errors in the first place when you're trying to quantify things and aren't very good at doing it, even though Scott insists otherwise. Or to just use an overly simplified model without enough humility about your ability to create a useful model.
Doing a calculation doesn't leave you free to not sanity-check your results either, which is a problem with a lot of rationalist calculations.