There's a lot of overlap between the effective altruism movement and the LessWrong rationality movement in terms of their membership, but each also has many people who are part of one group and not the other. For those in the overlap, why should EA care about rationality and rationality care about EA?


4 Answers

Jan_Kulveit

150

The simple answer is this:

  • Rationality asks the question "How can I think clearly?". For many people who start to think more clearly, this leads to an update of their goals toward the question "How can we do as much good as possible (thinking rationally)?", and to acting on the answer, which is effective altruism.
  • Effective altruism asks the question "How can we do as much good as possible, thinking rationally and based on data?". For many people who actually start thinking about that question, this leads to an update: "the ability to think clearly is critical when trying to answer it". Which is rationality.

Obviously, this is an idealization. In the real world, many people enter the EA movement with a lot of weight on the "altruism" and less on the "effective", and do not fully update toward rationality. On the other hand, it seems some people enter the rationality community, get mostly aligned with EA goals in the very abstract, but do not fully update toward actually acting.

Donald Hobson

40

The whole idea of effective altruism is getting the biggest bang for your charitable buck. If the evidence about how to do this were simple and incontrovertible, we wouldn't need advanced rationality skills to do it. In the real world, choosing the best cause requires weighing up subtle balances of evidence on everything from whether animals are suffering in ways we would care about to how likely a superintelligent AI is.

On the other side, effective altruism is only persuasive if you have various skills and patterns of thought. These include the ability to think quantitatively, avoidance of scope insensitivity, the idea of expected utility maximization, and rejection of the absurdity heuristic. It is conceptually possible for a mind to be a brilliant rationalist with the sole goal of paperclip maximization; however, all humans share the same basic emotional architecture, with emotions like empathy and caring. When that is combined with rigorous, structured thought, the end result often looks at least somewhat utilitarianish.

Here are the kinds of thought patterns that a stereotypical rationalist and a stereotypical non-rationalist would engage in when evaluating two charities. One charity is a donkey sanctuary; the other is trying to genetically modify chickens so that they don't feel pain.

The leaflet has a beautiful picture of a cute fluffy donkey in a field of sunshine and flowers. Aww, don't you just want to stroke him? Donkeys in meadows seem an unambiguous, pure good. Who could argue with donkeys? Thinking about donkeys makes me feel happy. Look, this one with the brown ears is called Buttercup. I'll put this nice poster up and send them some money.

Genetically modifying? Don't like the sound of that. To not feel pain? Weird. Why would you want to do that? (Imagines the chicken crushed into a tiny cage, looking miserable; "it's not really suffering" doesn't cut it.) Wouldn't that encourage people to abuse them? We should be letting them live in the wild, as nature intended.

The main component of this decision comes from adding up the little "good" or "bad" labels attached to each word. There is also a sense in which a donkey sanctuary is a typical charity (the robin of birds), while a GM chicken charity is an atypical one (the ostrich).

The rationalist starts off with questions like "How much do I value a year of happy donkey life versus a year of happy chicken life?", "How much money is needed to modify chickens and get them used in farms?", "What's the relative utility gain from a "non-suffering" chicken in a tiny cage, versus a chicken in chicken paradise, relative to a factory-farm chicken that is suffering?", and "What is the size of the world chicken industry?"

The rationalist ends up finding that the world chicken industry is huge, and so most sensible values for the other parameters lead to the GM chicken charity being better. They trust utilitarian logic more than any intuitions they might have.
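To make that contrast concrete, here is a minimal back-of-the-envelope sketch of the expected-value comparison. Every number in it is made up purely for illustration (they are not estimates anyone has defended); the point is only that when one term, the scale of the world chicken industry, is enormous, it tends to dominate the comparison across a wide range of assumptions.

```python
# A minimal back-of-the-envelope comparison. Every number below is made up
# purely for illustration, not an estimate anyone has defended.

# Moral weights: value of one animal-year of wellbeing change, in arbitrary units.
donkey_year_value = 1.0          # one happy donkey-year at the sanctuary
chicken_relief_value = 0.2       # suffering averted per chicken-year by the modification

# Donkey sanctuary: cost per donkey-year of care.
sanctuary_cost_per_donkey_year = 2_000      # dollars (hypothetical)
donkey_years_per_dollar = 1 / sanctuary_cost_per_donkey_year

# GM chicken charity: a fixed R&D/advocacy budget that, if it works,
# affects some fraction of a very large industry.
gm_budget = 50_000_000                      # dollars (hypothetical)
world_chickens_per_year = 70_000_000_000    # the scale term that dominates
probability_of_success = 0.01
fraction_of_industry_reached = 0.05

chicken_years_affected_per_dollar = (
    probability_of_success
    * fraction_of_industry_reached
    * world_chickens_per_year
    / gm_budget
)

# Expected value per dollar, in the same arbitrary units.
ev_sanctuary = donkey_years_per_dollar * donkey_year_value
ev_gm_chicken = chicken_years_affected_per_dollar * chicken_relief_value

print(f"Donkey sanctuary:   {ev_sanctuary:.6f} value per dollar")
print(f"GM chicken charity: {ev_gm_chicken:.6f} value per dollar")
# With these made-up numbers the chicken intervention wins by a couple of
# orders of magnitude, driven almost entirely by the sheer size of the
# industry rather than by the per-animal moral weight.
```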

Insofar as your answer makes predictions about how actual “rationalists” behave, it would seem to be at least partly falsified: empirically, it turns out that many rationalists do not respond well to that particular suggestion (“modify chickens to not feel pain”).

(The important thing to note, about the above link, isn’t so much that there are disagreements with the proposal, but that the reasons given for those disagreements are fairly terrible—they are mostly non sequiturs, with a dash of bad logic thrown in. This would seem to more closely resemble the way you describe a “stereotypical non-rationalist” behaving than a “stereotypical rationalist”.)

In our argument in the comments to my post on zetetic explanations, I was a bit worried about pushing back too hard socially. I had a vague sense that there was something real and bad going on that your behavior was a legitimate immune response to, and that even though I thought and continue to think that I was a false positive, it seemed pretty bad to contribute to marginalization of one of the only people visibly upset about some sort of hard-to-put-my-finger-on shoddiness going on. It's very important to the success of an epistemic community to have people sensing things like this, and promote that sort of alarm.

I've continued to try to track this, and I can now see somewhat more clearly a really sketchy pattern, which you're one of the few people to consistently call out when it happens. This comment is a good example. It seems like there's a tendency to conflate the stated ambitions and actual behavior of ingroups like Rationalists and EAs, when we wouldn't extend this courtesy to the outgroup, in a way that subtly shades corrective objections as failures to get with the program.

This kind of thing is insidious, and can be done by well-meaning people. […]

Thank you for the encouragement, and I’m glad you’ve found value in my commentary.

… it’s also important to track which errors seem like part of a pattern of motivated error, and which seem to be mere mistakes. The former class seems much more dangerous to me, since such errors are correlated.

I agree with this as an object-level policy / approach, but I think not quite for the same reason as yours.

It seems to me that the line between “motivated error” and “mere mistake” is thin, and hard to locate, and possibly not actually existent. We humans are very good at self-deception, after all. Operating on the assumption that something can be identified as clearly being a “mere mistake” (or, conversely, as clearly being a “motivated error”) is dangerous.

That said, I think that there is clearly a spectrum, and I do endorse tracking at least roughly in which region of the spectrum any given case lies, because doing so creates some good incentives (i.e., it avoids disincentivizing post-hoc honesty). On the other hand, it also creates some bad incentives, e.g. the incentive for the sort of self-deception described above. Truthfully, I don’t know what the optimal approach is, here. Constant vigilance against any failures in this whole class is, however, warranted in any case.

1 Donald Hobson
I agree that not all rationalists would want wireheaded chickens; maybe they don't care about chicken suffering at all. I also agree that you sometimes see bad logic and non sequiturs in the rationalist community. The non-rationalist, motivated, emotion-driven thinking is the way humans think by default. The rationalist community is trying to think a different way, sometimes successfully. Illustrating a junior rationalist having an off day and doing something stupid doesn't illuminate the concept of rationality, just as seeing a beginner juggler drop balls doesn't show you what juggling is.

Is this GM chicken example just a thought experiment?

1 Donald Hobson
I've not seen a charity trying to do it, but I wouldn't be surprised if there were one. I'm trying to illustrate the different thought processes.

KeeKeh

20

The way I see it, people in EA live in the intersection of the set "people interested in altruistic initiatives" and the set "people interested in critical thinking". It seems to me that people in EA would answer your question the way any person interested in critical thinking would justify the role of critical thinking in decision-making.

Dagon

10

"Why X should care about Y" is a really ambiguous topic. I presume you're not asking "why should everyone care about rationality and about EA", but I can't tell what information you're seeking. For clarity, are you asking "how can we increase the overlap", or "should we increase the overlap", or something else?

I'm an agent who believes that rationality helps me identify and achieve goals. I care about rationality as a tool. I also prefer that more agents be happier, and EA seems to be one mechanism to pursue that goal. I don't consider myself a constituent of either "community", but rather a consumer of (and occasionally a contributor to) the thinking and philosophy of each.

This seems more a comment than an answer to me. I think it's worth asking about how I framed the question and why (reason: I see a lot of overlap between these two movements, so it seems worth asking "why does this overlap exist?"), but that should happen in a comment rather than in an answer.

4 comments


You didn't ask "Why does this overlap exist?" but asked a question about what people should do.
