[ Question ]

Why should EA care about rationality (and vice-versa)?



There's a lot of overlap between the effective altruism movement and the LessWrong rationality movement in terms of their membership, but each also has many people who are part of one group and not the other. For those in the overlap, why should EA care about rationality and rationality care about EA?


4 Answers

The simple answer is this:

  • Rationality asks the question "How can we think clearly?". For many people who start to think more clearly, this leads to an update of their goals toward the question "How can we do as much good as possible (thinking rationally)?", and toward acting on the answer, which is effective altruism.
  • Effective altruism asks the question "How can we do as much good as possible, thinking rationally and based on evidence?". For many people who actually start thinking about that question, this leads to the update "the ability to think clearly is critical when trying to answer it", which is rationality.

Obviously, this is an idealization. In the real world, many people enter the EA movement with a lot of weight on the "altruism" and less on the "effective", and do not fully update toward rationality. On the other hand, it seems some people enter the rationality community and get mostly aligned with EA goals in the abstract, but do not fully update toward actually acting on them.

The whole idea of effective altruism is getting the biggest bang for your charitable buck. If the evidence about how to do this were simple and incontrovertible, we wouldn't need advanced rationality skills. In the real world, choosing the best cause requires weighing subtle balances of evidence on everything from whether animals are suffering in ways we would care about, to how likely superintelligent AI is.

On the other side, effective altruism is only persuasive if you have various skills and patterns of thought. These include the ability to think quantitatively, the avoidance of scope insensitivity, the idea of expected utility maximization, and the rejection of the absurdity heuristic. It is conceptually possible for a mind to be a brilliant rationalist whose sole goal is paperclip maximization; however, all humans share the same basic emotional architecture, with emotions like empathy and caring. When this is combined with rigorous, structured thought, the end result often looks at least somewhat utilitarianish.

Here are the kinds of thought patterns that a stereotypical rationalist and a stereotypical non-rationalist would engage in when evaluating two charities. One charity is a donkey sanctuary; the other is trying to genetically modify chickens so that they don't feel pain.

The non-rationalist, looking at the donkey leaflet: it has a beautiful picture of a cute fluffy donkey in a field of sunshine and flowers. Aww, don't you just want to stroke him? Donkeys in meadows seem an unambiguous, pure good. Who could argue with donkeys? Thinking about donkeys makes me feel happy. Look, this one with the brown ears is called Buttercup. I'll put this nice poster up and send them some money.

And looking at the chicken leaflet: genetically modifying? Don't like the sound of that. To not feel pain? Weird. Why would you want to do that? (Imagines the chicken crushed into a tiny cage, looking miserable.) "It's not really suffering" doesn't cut it. Wouldn't that encourage people to abuse them? We should be letting them live in the wild as nature intended.

The main component of this decision comes from adding up the little "good" or "bad" labels that they attach to each word. There is also a sense in which a donkey sanctuary is a typical charity (the robin of birds), while GM chickens is an atypical charity (the ostrich).

The rationalist starts off with questions like "How much do I value a year of happy donkey life vs. a year of happy chicken life?" How much money is needed to modify the chickens and get them used on farms? What's the relative utility gain of a "non-suffering" chicken in a tiny cage, or a chicken in chicken paradise, relative to a factory-farm chicken that is suffering? What is the size of the world chicken industry?

The rationalist ends up finding that the world chicken industry is huge, and so most sensible values for the other parameters lead to the GM chicken charity being better. They trust utilitarian logic more than any intuitions they might have.
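To make the structure of that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder I've invented for illustration, not an actual estimate for either charity; the point is only how the scale of the chicken industry dominates the calculation.

```python
# Hypothetical back-of-the-envelope comparison of the two charities.
# All figures below are made-up placeholders; only the structure of the
# calculation (value produced per dollar donated) is the point.

# Donkey sanctuary (hypothetical figures)
donkeys_helped_per_year = 50
value_per_donkey_year = 1.0        # utility of one happy donkey-year (arbitrary units)
donkey_cost_per_year = 100_000     # dollars per year

donkey_value_per_dollar = (
    donkeys_helped_per_year * value_per_donkey_year / donkey_cost_per_year
)

# Pain-free GM chicken research (hypothetical figures)
world_farmed_chickens = 20_000_000_000   # the sheer scale dominates the estimate
value_per_chicken_year = 0.02            # utility gain per chicken-year if suffering is removed
probability_of_success = 0.01            # chance the research works and is adopted
research_cost = 50_000_000               # dollars

chicken_value_per_dollar = (
    world_farmed_chickens * value_per_chicken_year * probability_of_success
    / research_cost
)

print(f"Donkey sanctuary:   {donkey_value_per_dollar:.6f} utility per dollar")
print(f"GM chicken project: {chicken_value_per_dollar:.6f} utility per dollar")
```

With these placeholder numbers the chicken project comes out far ahead, and the conclusion is fairly insensitive to the exact values of the smaller parameters, which is the pattern the rationalist in the story is relying on.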

The way I see it, people in EA live at the intersection of the set "people interested in altruistic initiatives" and the set "people interested in critical thinking". It seems to me that people in EA would answer your question the way any person interested in critical thinking would justify the role of critical thinking in decision making.

"Why X should care about Y" is a really ambiguous topic. I presume you're not asking "why should everyone care about rationality and about EA", but I can't tell what information you're seeking. For clarity, are you asking "how can we increase the overlap", or "should we increase the overlap", or something else?

I'm an agent who believes that rationality helps me identify and achieve goals. I care about rationality as a tool. I also prefer that more agents be happier, and EA seems to be one mechanism to pursue that goal. I don't consider myself a constituent of either "community", but rather a consumer of (and occasionally a contributor to) the thinking and philosophy of each.