It's been claimed that increasing rationality increases effective altruism. I think that this is true, but the effect size is unclear to me, so it seems worth exploring how strong the evidence for it is. I've offered some general considerations below, followed by a description of my own experience. I'd very much welcome thoughts on the effect that rationality has had on your own altruistic activities (and any other relevant thoughts).
The 2013 LW Survey found that 28.6% of respondents identified as effective altruists. This rate is much higher than the rate in the general population (even after controlling for intelligence), and because LW is distinguished by virtue of being a community focused on rationality, one might be led to the conclusion that increasing rationality increases effective altruism. But there are a number of possible confounding factors:
- It's ambiguous what the respondents meant when they said that they're "effective altruists." (They could have used the term the way Wikipedia does, or they could have meant it in a more colloquial sense.)
- Interest in rationality and interest in effective altruism might both stem from an underlying dispositional variable.
- Effective altruists may be more likely than members of the general population to seek to improve their epistemic rationality.
- The rationalist community and the effective altruist community may have become intertwined by historical accident, by virtue of having some early members in common.
So it's helpful to look beyond the observed correlation and think about the hypothetical causal pathways between increased rationality and increased effective altruism.
The above claim can be broken into several subclaims (any or all of which may be intended):
Claim 1: When people are more rational, they're more likely to pick the altruistic endeavors they engage in with a view toward maximizing utilitarian expected value.
Claim 2: When people are more rational, they're more likely to succeed in their altruistic endeavors.
Claim 3: Being more rational strengthens people's altruistic motivation.
Claim 1: "When people are more rational, they're more likely to pick the altruistic endeavors they engage in with a view toward maximizing utilitarian expected value."
Some elements of effective altruism thinking are:
- Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows, upon reflection, from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value." Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension, nor into a choice to resolve the tension in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it's not sufficient on its own.
- Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that affect people in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions driven by these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension. However, in this case, if one does seek to resolve the tension, the choice of expected value maximization over other alternatives is canonical, so rationality seems to take one further toward expected value maximization than it does toward consequentialism.
- The principle of indifference: the idea that from an altruistic point of view, we should care about people who are unrelated to us as much as we do about people who are related to us. For example, in The Life You Can Save: How to Do Your Part to End World Poverty, Peter Singer makes the case that we should show a similar degree of moral concern for people in the developing world who are suffering from poverty as we do for people in our neighborhoods. I'd venture the guess that its popularity among rationalists is an artifact of culture or a selection effect rather than a consequence of rationality. Note that concern about global poverty is far more prevalent than interest in rationality (while still being low enough that global poverty is far from alleviated).
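The expected-value point above can be made concrete with a toy comparison, loosely in the spirit of the save-400-lives-for-certain versus 90%-chance-of-saving-500 scenario Eliezer discusses in Circular Altruism. This is only an illustrative sketch; the function and option names are invented here:

```python
# Toy expected-value comparison between two altruistic options.
# The numbers are hypothetical, chosen only to illustrate that a
# certain-but-smaller outcome can lose to a risky-but-larger one
# under expected value, even though intuition often prefers certainty.

def expected_lives_saved(outcomes):
    """outcomes: list of (probability, lives_saved) pairs whose probabilities sum to 1."""
    return sum(p * lives for p, lives in outcomes)

certain_option = [(1.0, 400)]          # save 400 lives for sure
risky_option = [(0.9, 500), (0.1, 0)]  # 90% chance to save 500, else none

ev_certain = expected_lives_saved(certain_option)  # 400.0
ev_risky = expected_lives_saved(risky_option)      # 450.0

# Expected value maximization picks the risky option here,
# even though many people's intuitions favor the certain one.
best = max([("certain", ev_certain), ("risky", ev_risky)], key=lambda t: t[1])
print(best)  # ('risky', 450.0)
```

The point of the sketch is just that "maximize expected value" is a single canonical decision rule once one commits to resolving the tension, whereas consequentialism leaves open which consequences to value.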
Claim 2: "When people are more rational, they're more likely to succeed in their altruistic endeavors."
If "rationality" is taken to be "instrumental rationality" then this is tautologically true, so the relevant sense of "rationality" here is "epistemic."
- The question of how useful epistemic rationality is in general has been debated (e.g. here, here, here, here, and here).
- I think that epistemic rationality matters more for altruistic endeavors than it does in other contexts. Cognitive biases evolved in the service of survival and reproductive fitness, and these correlate more strongly with personal well-being than with the well-being of others. I think that epistemic rationality matters still more for those who aspire to maximize utilitarian expected value: the heuristics underlying cognitive biases track the well-being of others within one's social circles more closely than the well-being of those outside them.
- In Cognitive Biases Potentially Affecting Judgment of Global Risks, Eliezer describes some cognitive biases that can lead one to underestimate the likelihood of risks of human extinction. To the extent that reducing these risks is the most promising philanthropic cause (as Eliezer has suggested), reducing cognitive biases improves people's prospects of maximizing utilitarian expected value.
Claim 3: "Being more rational strengthens people's altruistic motivation."
- I think that there may be some effect in this direction mediated through improved well-being: when people's emotional well-being increases, their empathy also increases.
- It's possible to come to the conclusion that one should care as much about others as one does about oneself through philosophical reflection, and I know people who have had this experience. I don't know whether or not this is accurately described as an effect attributable to improved accuracy of beliefs, though.
Putting it all together
The considerations above point in the direction of increased rationality of a population only slightly (if at all?) increasing the effective altruism at the 50th percentile of the population, but increasing the effective altruism at higher percentiles more, with the skewing becoming more and more extreme the further up one goes. This parallels, e.g., the effect of height on income.
My own experience
In A personal history of involvement with effective altruism I give some relevant autobiographical information. Summarizing and elaborating a bit:
- I was fully on board with consequentialism and with ascribing similar value to strangers as to familiar people as an early teenager, before I had any knowledge of cognitive biases as such, and at a time when my predictive model of the world was in many ways weaker than those of most adults.
- It was only when I read Eliezer's posts that the justification for expected value maximization in altruistic contexts clicked. Understanding it didn't require background knowledge — it seems independent of most aspects of rationality.
- I started reading Less Wrong because a friend pointed me to Yvain's posts on utilitarianism. My interest in rationality was more driven by my interest in effective altruism than the other way around. This is evidence that the high fraction of Less Wrongers who identify as effective altruists is partially a function of Less Wrong being an attractor for effective altruists.
- So far increased rationality hasn't increased my productivity to a degree that's statistically significant. There are changes that have occurred in my thinking that greatly increase my productivity in the most favorable possible future scenarios, relative to a counterfactual in which these changes hadn't occurred. This is in consonance with my remark under the "putting it all together" heading above.
How about you?
Sorry if this is obviously covered somewhere but every time I think I answer it in either direction I immediately have doubts.
Does EA come packaged with "we SHOULD maximize our altruism" or does it just assert that IF we are giving, well, anything worth doing is worth doing right?
For example, I have no interest in giving materially more than I already do, but getting more bang for my buck in my existing donations sounds awesome. Do I count? I currently think not but I've changed my mind enough to just ask.
Being more rational makes rationalization harder. When confronted with thought experiments such as Peter Singer's drowning child example, it makes it harder to come up with reasons for not changing one's actions while still maintaining a self-image of being caring. While non-rationalists often object to EA by bringing up bad arguments (e.g. by not understanding expected utility theory or decision-making under uncertainty), rationalists are more likely to draw more radical conclusions. This means they might either accept the extreme conclusion that they wan...
My impression is that a lack of compartmentalization is a risk factor for both LW and EA group membership.
My impression is also that it is a risk factor for religious mania.
Lack of compartmentalization, also called taking ideas seriously, when applied to religious ideas, gives you religious mania. Applied to various types of collective utilitarianism, it can produce anything from EA to antinatalism, from tithing to giving away all that you have. Applied to what it actually takes to find out how the world works, it gives you Science.
Whether it's a good thing or a bad thing depends on what's in the compartments.
This comment actually makes sense of aspects of your writings here that did not make sense to me before.
Your post, overall, seems to rest on the underlying assumption that effective altruism is rational, and obviously so. I am not convinced this is the case (at the very least, not the "and obviously so" part).
To the extent that effective altruism is anything like a "movement", a "philosophy", a "community"...
I've read Yvain's article, and reread it just now. It has the same underlying problem, which is: to the extent that it's obviously true, it's trivial[1]; to the extent that it's nontrivial, it's not obviously true.
Yvain talks about how we should be effective in the charity we choose to engage in (no big revelation here), then seems almost imperceptibly to slide into an assumed worldview where we're all utilitarians, where saving children is, of course, what we care about most, where the best charity is the one that saves the most children, etc.
To what extent are all of these things part of what "effective altruism" is? For instance (and this is just one possible example), let's say I really care about paintings more than dead children, and think that £550,000 paid to keep one mediocre painting in a UK museum is money quite well spent, even when the matter of sanitation in African villages is put to me as bluntly as you like; but I aspire to rationality, and want to purchase my artwork-retention-by-local-museums as cost-effectively as I can. Am I an effective altruist?
To put this another way: if "effective altruism" is really just "we should be effective in ...
In my understanding of things, rationality does not involve values, and altruism is all about values. They are orthogonal.
LW as a community (for various historical reasons) has a mix of rationalists and effective altruists. That's a characteristic of this particular community, not a feature of either rationalism or EA.
A couple of points:
(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant. For example, you say
Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest numb...
I think this needs to be differentiated further or partly corrected:
Cognitive biases which improve individual fitness by needing fewer resources, i.e. heuristics which arrive at the same or an almost equally good result but with fewer resources. Reducing time and energy use thus benefits the individual. Example:
Cognitive biases which improve individual fitness by avoiding dangerous parts
I think it is rationally optimal for me to not give any money away since I need all of it to pursue rationally-considered high-level goals. (Much like Eliezer probably doesn't give away money that could be used to design and build FAI, because given the very small number of people now working on the problem, and given the small number of people capable of working on the problem, that would be irrational of him.) There's nothing wrong with believing in what you're doing, and believing that such a thing is optimal. ...Perhaps it is optimal. If it's not, th...
Another effect: people on LW are massively more likely to describe themselves as effective altruists. My moral ideals were largely formed before I came into contact with LW, but not until I started reading was I introduced to the term "effective altruism".
The question appears to assume that LW participation is identically equal to improved rationality. Involvement in LW and involvement in EA are pretty obviously going to be correlated given that they're closely related subcultures.
If this is not the case: Do you have a measure to hand of "improved rationality" that doesn't involve links to LW?