I'm pleased to announce the first annual survey of effective altruists. This is a short survey of around 40 questions (generally multiple choice) which several collaborators and I have put a great deal of work into, and we'd be very grateful if you took it. I'll offer $250 of my own money to one participant.

Take the survey at http://survey.effectivealtruismhub.com/

The survey should yield some interesting results such as EAs' political and religious views, what actions they take, and the causes they favour and donate to. It will also enable useful applications which will be launched immediately afterwards, such as a map of EAs with contact details and a cause-neutral register of planned donations or pledges which can be verified each year. I'll also provide an open platform for followup surveys and other actions people can take. If you'd like to suggest questions, email me or comment.

Anonymised results will be shared publicly and not belong to any individual or organisation. The most robust privacy practices will be followed, with clear opt-ins and opt-outs.

I'd like to thank Jacy Anthis, Ben Landau-Taylor, David Moss and Peter Hurford for their help.

Other surveys' results, and predictions for this one

Other surveys have had intriguing results. For example, Joey Savoie and Xio Kikauka interviewed 42 often highly active EAs over Skype, and found that they generally had left-leaning parents, donated 10% on average, and were altruistic before becoming EAs. The time respondents spent on EA activities was correlated with the percentage they donated (0.4), the time their parents spent volunteering (0.3), and the percentage of their friends who were EAs (0.3).

80,000 Hours also released a questionnaire and, while it was mainly focused on their impact, it yielded a list of the careers people plan to pursue: 16% academia, 9% each for finance and software engineering, and 8% each for medicine and non-profits.

I'd be curious to hear people's predictions as to what the results of this survey will be. You might enjoy reading or sharing them here. For my part, I'd imagine we have few conservatives or even libertarians, are over 70% male, and have directed most of our donations to poverty charities.

Comments

There's a question about other social movements people might associate themselves with. How was the list of suggestions created? At present, the list is very left-wing:

  • Animal rights
  • Environmentalist
  • Feminist
  • Rationalist/LessWrong
  • Transhumanist
  • Skeptic/atheist
  • Other:

Ordinarily this would only be a small problem, but then you ask people about their political views after you've primed them with left-wing examples.

Which social movements would you add to that list?

Tea Party?

Evangelical Christianity has aspects of a social movement, but I doubt we'd turn up any evangelicals here. Not that this is necessarily a problem if the goal is to avoid Blue/Green priming.

If we're just looking for stuff that isn't stereotypically left-wing, men's rights and free software also come to mind.

Agreed. Open source was at least part of my fill-in for several questions. Edit: to expound: there's just so much inherent value in free software, even from the smallest packages or simplest libraries, that we've all derived immeasurable value from it, and as technology progresses I really see free software as one of our greatest collective assets.
Evangelical Christianity is a good idea, I'll add it. 'Free software' might be reasonably common, and is an audience EAs could target. I'll look at a list of common write-ins.
Yes, that does count as a movement, I'll add it as clear signalling that we're not assuming people are left-wing (in this year's survey, when I get time to tweak my Perl scripts).

then you ask people about their political views after you've primed them with left-wing examples.

Not to mention all the implications of right-wing politics not making it to the list at all. "No, we don't think anyone can possibly believe that... What are you, a freak?" :-/

I can assure you I didn't think that - it was rather that I didn't think of any right-wing (or additional non left-wing) movements that significant numbers might plausibly belong to. But I definitely made a mistake in not trying to think of them more. If you can suggest some, I'll add them.

My predictions (linked in this comment) did include few conservatives or libertarians. The set of EAs whose views I know contains a few libertarians and no conservatives. However, that set contains disproportionately many elite university students, an unrepresentatively lefty group.

I was surprised that there weren't a few more libertarians and conservatives in the LessWrong census.

I see Larks' point. The movement data is action-relevant for me, as I'm spending several hours a week going to meetup groups purely to recruit GiveWell donors. I've found skeptic/atheist groups particularly fertile, and lefty political groups (and 'A' rather than 'E' groups generally) the opposite. I haven't tried any conservative or libertarian groups yet.
Given that conservatives (I believe especially evangelical groups) donate the most to charity, it's probably worthwhile checking them out. My understanding is that their current approach to the inefficient-charity problem involves organizing trips to the countries in question and having members personally help the charity. While this is clearly not the most efficient approach, it does help with the "most of the money winding up in the hands of middlemen" problem while also generating warm fuzzies.
That's because lefty and 'A' groups are mostly about signalling one's virtue, thus someone who shows up and starts telling them how none of the 'virtuous' things they've been doing are actually helping people is most certainly not welcome.
Uhm, upvoted the comment, but don't completely agree with the linked article. It suggests that when fans of something are worried about it becoming too popular, they object to losing their positional good. That's just one possible explanation. Sometimes the fact that X becomes widely popular changes X, and there are people who genuinely preferred the original version.

As a simple example, imagine that tomorrow a million new readers came to LW; would that be a good thing or a bad thing? Depends on what happens to LW. If the quality of debate remains the same, then it's obviously a huge win, and anyone who resents it is guilty of caring about their positional good too much. On the other hand, the new people could easily shift LW towards the popular (in the sense of: frequent in the population) stuff, so we would get a lot of nonsense sprinkled with LW buzzwords.

I can imagine leftist groups believing they are working "more meta than thou": solving a problem which, taken in isolation, doesn't seem so important (compared with the causes effective altruists care about), but which would start a huge cascade of improvement afterwards (their model of the world says so, yours doesn't). Making mosquito nets instead is not an improvement according to their model.
The results can already been seen in the Census Survey: There is a small trend toward the mean. The smartest move on to greener pastures.
Moo? Edit: Stupid comment, too much reddit today. Infantile regression. I apologize. I disagree with the parent comment ("small trend toward the mean in the census = smartest move on to greener pastures") and meant to poke fun at it by showing the absurd fringe case; only dumb cows remaining (which I'm not, hence my disagreement would be conveyed). Convoluted. Sorry.
Saw this in recent comments, thought how curious is that there is a context in which this comment is not silly. I was wrong. What did you mean, again?
I see those two points to be independently supported by the survey and not to imply each other in any obvious way.
That doesn't explain why the new X looks much more like an extreme version of the popular version of X rather than the original X.
Those are both good points. Can you suggest less left-wing movements with which people might identify, or that I could now add to the list just to counteract the priming? My impression is that conservatives and centrists are less 'movementy'! How strong do you think the priming effect will be, with this audience? Is there literature on that? My Google Fu's defeating me.

I've now checked out the survey, and have a couple of comments (which I put into the comments field and am reposting here). #1 is important, #2 less so:

  1. On moral philosophy: "Consequentialist/utilitarian" should be broken up into something like "Utilitarian" and "Other consequentialist (not utilitarian)", because I am a consequentialist and (probably) not a utilitarian, and that disagreement is one of my main points of contention with the EA movement.

  2. I had no idea how to answer the "political views" question. Are these positions ("left", "centre", etc.) supposed to be on the American (U.S.) political spectrum? That'd be my default assumption, but the British/Canadian spelling suggests otherwise... in any case, at least offer as many options as e.g. the LessWrong survey did.

Peter Wildeford:
Those are good points. It would confound things too much to change midstream, but now we'll know better for next year.
Rob Bensinger:
I'd rather see 'consequentialist' supplemented or replaced by specific questions that get at substantive ethical or meta-ethical disputes in EA and philosophy. 'Utilitarian' and 'deontologist' mean lots of different things to different people, and on their strictest definitions they don't entail a lot of their most interesting or widely cited ideas. Perhaps have an exploratory question one year asking non-utilitarians to write in their main objection to utilitarianism, then convert that into a series of questions the following year.
Peter Wildeford:
This was something I suggested to Tom because I'd be interested too. But ultimately we thought that only a small group of EAs would really have substantive ethical opinions and we thought to trim things for survey length. We added a box asking for clarifications at the end of the survey to provide more of this outlet.
Said Achmiz:
One of the main objections to utilitarianism, it seems to me, is skepticism about the possibility (or even coherence of the notion) of aggregating utility across individuals. That's one of my main objections, at any rate. Skepticism about the applicability of the VNM theorem to human preferences is another issue, though that one might be less widespread. Edit: The SEP describes classic utilitarianism as actual, direct, evaluative, hedonistic, maximizing, aggregative (specifically, total), universal, equal-consideration, agent-neutral consequentialism. I have definite issues with the "actual", "direct", "hedonistic", "aggregative", "total", and "equal-consideration" parts of that. (Though I expect that my issues with "actual" will be shared by a significant portion of those who consider themselves utilitarians here, and my issues with "hedonistic" and "direct" may be as well. That leaves "aggregative"+"total", and "equal-consideration", as the two aspects most likely to be sources of philosophical conflict.)
Those sound like objections to preference utilitarianism but not hedonistic utilitarianism. Although it's not technically possible yet, measuring the intensity of the positive and negative components of an experience sounds like something that ought to be at least possible in principle. And the applicability of the VNM theorem to human preferences becomes irrelevant if you're not interested in preferences in the first place.
Said Achmiz:
Yes, true enough[1]; I did not properly separate those objections in my comment. To elaborate: I object to hedonistic utilitarianism on the grounds that it clearly and grossly fails to capture my moral intuitions or those of anyone else whom I consider not to be evading the question. A full takedown of the "hedonistic" part of "hedonistic utilitarianism" is basically (at least) all of Eliezer's posts about the complexity of value and so forth, and I won't rehash it here. To be honest, hedonistic utilitarianism seems to me to be so obviously wrong that I'm not even all that interested in having this sort of moral philosophy debate with an effective altruist (or anyone else) who holds such a view. I mean, to start with, my hypothetical interlocutor would have to rebut all the objections raised to hedonistic utilitarianism over the centuries since it's been articulated, including, but not limited to, the aforementioned Lesswrong material. I object to preference utilitarianism because of the "aggregation of utility" and "possibility of constructing a utility function" issues[2]. I think this is the more interesting objection. [1] I'm not sure "intensity of the positive and negative components of an experience" is a coherent notion. There may not be a single quantity like that to measure. And even if we can measure something which we think qualifies for the title, it may be measurable only in some more-or-less absolute terms, while leaving open the question of how this hypothetical measured quantity matches up with anything like "utility to this particular experiencer". But, for the sake of the argument, I'm willing to grant that such a quantity can indeed be usefully measured, because this is certainly not my true rejection. [2] These are my objections to the "preference" component of preference utilitarianism; my objection to classical utilitarianism also includes objections to other components, which I have enumerated in the grandparent.
Two replies:

1) Even if hedonistic utilitarianism were ultimately wrong as a full description of what a person values, "maximize pleasure while minimizing suffering" can still be a useful heuristic to follow. Yes, following that heuristic to its logical conclusion would mean forcibly rewiring everyone's brains, but that doesn't need to be a problem for as long as forcibly rewiring people's brains isn't a realistic option. HU may still be the best approximation of a person's values in the context of today's world, even if it isn't the best description overall.

2) The arguments on complexity of value and so on establish that the average person's values aren't correctly described by HU. This still leaves open the possibility of someone only approving of those of their behaviors that serve to promote HU, so there may well be individual people who accept HU, due to not sharing the moral intuitions which motivate the objections to it.
Said Achmiz:
On 1): I am skeptical of replies to the effect that "yes, well, X might not be quite right, but it's a useful heuristic, therefore I will go on acting as if X is right". For one thing, a person who makes such a reply usually goes right back to saying "X is right!" (sans qualifiers) as soon as the current conversation ends. Let's get clear on what we actually believe, I generally think; once we've firmly established that, we can look for maximally effective implementations. For another thing, HU may be the best approximation etc. etc., but that's a claim that at least should be made explicitly, such that it can be examined and argued for; a claim of this importance shouldn't come up only in such tangential discussion branches. For a third thing, what happens when forcibly rewiring people's brains becomes a realistic option?

On 2): I think there are two issues here. There could indeed be people who accept HU because that's what correctly describes their moral intuitions. (Though I should certainly hope they do not think it proper to impose that moral philosophy on me, or on anyone else who doesn't subscribe to HU!) "Only approving of those behaviors that serve to promote HU" is, I think, a separate thing. Or at least, I'd need to see the concept expanded a bit more before I could judge. What does this hypothetical person believe? What moral intuitions do they have? What exactly does it mean to "promote" hedonistic utilitarianism?
Why would this be improper? Note that it doesn't follow from any meta-ethical position.
Said Achmiz:
If you say "all that matters is pain and pleasure", and I say "no! I care about other things!", and you're like "nope, not listening. PAIN AND PLEASURE ARE THE ONLY THINGS", and then proceed to enact policies which minimize pain and maximize pleasure, without regard for any of the other things that I care about, and all the while I'm telling you that no, I care about these other things! Stop ignoring them! Other things matter to me! but you're not listening because you've decided that only pain and pleasure can possibly matter to anyone, despite my protestations otherwise...

... well, I hope you can see how that would bother me. It's not just a matter of us caring about different things. If it were only that, we could acknowledge the fact and proceed to some sort of compromise. Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided! Clearly.
They may think it's incorrect if they're realists, or cognitivists of some other form. But this has nothing to do with their being HUs, only with their being cognitivists. Here are 3 non-exhaustive ways in which the situation you described could be bothersome: (i) If your first order ethical theory (as opposed to your meta-ethics), perhaps combined with very plausible facts about human nature, requires otherwise. For instance if it speaks in favour of toleration or liberty here. (ii) If you're a cognitivist of the sort who thinks she could be wrong, it could increase your credence that you're wrong. (iii) If you'd at least on reflection give weight to the evident distress SaidAchmiz feels in this scenario, as most HUs would.
Said Achmiz:
No, I don't think this is right. I think you (and Kaj_Sotala) are confusing these two questions:

  1. Is it correct to hold an ethical view that is something other than hedonistic utilitarianism?

  2. Does it make any sense to intrinsically value anything other than pleasure, or intrinsically disvalue things other than pain?

#1 is a meta-ethical question; moral realism or cognitivism may lead you to answer "no", if you're a hedonistic utilitarian. #2 is an ethical question; it's about the content of hedonistic utilitarianism.

If I intrinsically care about, say, freedom, that's not an ethical claim. It's just a preference. "Humans may have preferences about things other than pain/pleasure, and those preferences are morally important" is an ethical claim which I might formulate, about that preference that I have. Hedonistic utilitarianism tells me that my aforementioned preference is incoherent or mistaken, and that in fact I do not have any preferences (or any preferences that are morally important or worth caring about) other than preferences about pleasure/pain. Moral realism (which, as blacktrance correctly notes, is implied by any utilitarianism) may lead a hedonistic utilitarian to say that my aforementioned ethical claim is incorrect.

As for your scenarios, I'm not sure what you meant by listing them. My point was that my scenario, which describes a situation involving a hypothetical me, Said Achmiz, would be bothersome to me, Said Achmiz. Is it really not clear why it would be?
Ethical subjectivism (which I subscribe to) would say that "ethical claims" are just a specific subset of our preferences; indeed, I'm rather skeptical of the notion of there being a distinction between ethical claims and preferences in the first place. But HU wouldn't necessarily say that someone's preference for something else than pleasure or pain would be mistaken - if it's interpreted within a subjectivist framework, HU is just a description of preferences that are different. See my response to blacktrance.
Said Achmiz:
I really don't think that this is correct. If this were true, first of all, hedonistic utilitarianism would simply reduce to preference utilitarianism. In actual fact, neither view is merely about one's own terminal values. If someone, personally, cares only about pain and pleasure, but acknowledges that other people may have other things as terminal values, and thinks that The Good lies in satisfying everyone's preferences maximally — which, for themselves, means maximizing pleasure and minimizing pain, and for other people may mean other things — then that person is not a hedonistic utilitarian. They are a preference utilitarian. Referring to them as an HU is simply not correct, because that's not how the term is used in the philosophical literature.

On the other hand, if someone cares only about pain and pleasure — both theirs and other people's — and would prefer that everyone's pleasure be maximized and everyone's pain be minimized; but this person is not a moral realist, and has no opinion on what constitutes The Good, or thinks there's no fact of the matter about whether an act is right or wrong; well, then this person is not a utilitarian at all. Again, describing this person as a hedonistic or any other kind of utilitarian completely fails to match up with how the term is used in the philosophical literature.

As for ethical subjectivism — uh, I don't think that's an actual thing. I'd not heard of anything by that name until today. I don't like going by Wikipedia's definitions of philosophical principles, so I tried tracking it down to a source, such as perhaps a major philosopher espousing the view or at least describing it coherently. No such luck. Take a look at the list of references on its Wikipedia page; two are to a single book (written in 1959 by some guy I've never heard of — have you? — and the shortness of whose Wikipedia page suggests that he wasn't anyone interesting), and one is to a barely-related page that mentions the thing once, in passing,
... though, I just looked at the SEP entry on Consequentialism, and I note that aside from the title of one book in the bibliography, nowhere in the article is the word "realism" even mentioned. Nor does there seem to be an entry in the list of claims making up classic utilitarianism that would seem to require moral realism. I guess you could kind of interpret one of these three conditions as requiring moral realism:

... but it doesn't seem obvious to me why someone who was an ethical subjectivist couldn't say that "I'm a classical utilitarian, in that (among other things) the best description of my ethical system is that I think that the goodness of an action should be determined based on how it affects all sentient beings, that benefits to one person matter just as much as similar benefits to others, and that the perspective of the people evaluating the consequences doesn't matter. Though of course others could have ethical systems that were not well described by these items, and that wouldn't make them wrong."

Or maybe the important part in your comment was the part "...but this person is not a moral realist, and has no opinion on what constitutes The Good"? But a subjectivist doesn't say that he has no opinion on what constitutes The Good: he definitely has an opinion, and there may clearly be a right and wrong answer with regard to the kind of actions that are implied by his personal moral system; it's just that the thing that constitutes The Good will be different for people with different moral systems.
Consequentialism supplies a realistic ontology, since its goods are facts about the real world, and utilitarianism supplies an objective epistemology, since different utilitarians of the same stripe can converge. That adds up to some of the ingredients of realism, but not all of them. What is specifically lacking is a justification of consequentialist ends as being objectively good, and not just subjectively desirable.
For this to make it realist, the fact that the truth of those facts has value would also have to be mind-independent. Even subjectivists typically value facts about the external world (e.g. their pleasure).
Ethical subjectivism is also discussed in the Stanford Encyclopedia of Philosophy. (I like this quote from that article, btw: "So many debates in philosophy revolve around the issue of objectivity versus subjectivity that one may be forgiven for assuming that someone somewhere understands this distinction.") You may be right to say that my use of "utilitarian" is different from how it's conventionally used in the literature; I'm pretty unfamiliar with the actual ethical literature. But if we have people who have the attitude of "I want to take the kinds of actions that maximally increase pleasure and maximally reduce suffering and I'm a moral realist" and people who have the attitude of "I want to take the kinds of actions that maximally increase pleasure and maximally reduce suffering and I'm a moral non-realist", then it feels a little odd to have different terms for them, given that they probably have more in common with each other (with regard to the actions that they take and the views that they hold) than e.g. two people who are both moral realists but differ on consequentialism vs. deontology. At least in a context where we are trying to categorize people into different camps based on what they think we should actually do, it would seem to make sense if we just called both the moral realist and moral non-realist "utilitarians", if they both fit the description of a utilitarian otherwise.
I don't think that hedonistic utilitarianism necessarily implies moral realism. Some HUs will certainly tell you that the people who morally disagree with them are misguided, but I don't see why the proportion of HUs who think so (vs. the proportion of HUs who think that you are simply caring about different things) would need to be any different than it would be among the adherents of any other ethical position. Maybe you meant your comment to refer specifically to the kinds of HUs who would impose their position on you, but even then the moral realism doesn't follow. You can want to impose your values on others despite thinking that values are just questions of opinion. For instance, there are things that I consider basic human rights and I want to impose the requirement to respect them on every member of every society, even though there are people who would disagree with that requirement. I don't think that the people who disagree are misguided in any sense, I just think that they value different things.
Said Achmiz:
I agree with blacktrance's reply to you, and also see my reply to tog in a different subthread for some commentary. However, I'm sufficiently unsure of what you're saying that I'm not certain your comment is fully answered by either of those things.

For example: if you [the hypothetical you] think that it's possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I'm not quite sure how you can remain a hedonistic utilitarian. You'd have to say something like: "Yes, many people intrinsically value all sorts of things, but those preferences are morally irrelevant, and it is ok to frustrate those preferences as much as necessary, in order to minimize pain and maximize pleasure." You would, in other words, have to endorse a world where all the things that people value are mercilessly destroyed, and the things they most abhor and despise come to pass, if only this world had the most pleasure and least pain.

Now, granted, people sometimes endorse the strangest things, and I wouldn't even be surprised to find someone on LessWrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me. If I've misinterpreted your comment and thereby failed to address your points, apologies; please clarify.
Well, if you're really curious about how one could be a hedonistic utilitarian while also thinking that it's possible to care intrinsically about things other than pain and pleasure, one could think something like:

"So there's this confusing concept called 'preferences' that seems to be a general term for all kinds of things that affect our behavior, or mental states, or both. Probably not all the things that affect our behavior are morally important: for instance, a reflex action is a thing in a person's nervous system that causes them to act in a certain way in certain situations, so you could kind of call that a preference to act in such a way in such a situation, but it still doesn't seem like a morally important one.

"So what does make a preference morally important? If we define a preference as 'an internal disposition that affects the choices that you make', it seems like there would exist two kinds of preferences. First there are the ones that just cause a person to do things, but which don't necessarily cause any feelings of pleasure or pain. Reflexes and automated habits, for instance. These don't feel like they'd be worth moral consideration any more than the automatic decisions made by a computer program would.

"But then there's the second category of preferences, ones that cause pleasure when they are satisfied, suffering when they are frustrated, or both. It feels like pleasure is a good thing and suffering is a bad thing, so that makes it good to satisfy the kinds of preferences that produce pleasure when satisfied, as well as bad to frustrate the kinds of preferences that cause suffering when frustrated. Aha! Now I seem to have found a reasonable guideline for the kinds of preferences that I should care about. And of course this goes for higher-order preferences as well: if someone cares about X, then trying to change that preference would be a bad thing if they had a preference to continue caring about X, such that they would feel bad if someo
Any form of utilitarianism implies moral realism, as utilitarianism is a normative ethical theory and normative ethical theories presuppose moral realism.
I feel that this discussion is rapidly descending into a debate over definitions, but as a counter-example, take ethical subjectivism, which is a form of moral non-realism and which Wikipedia defines as claiming that: Someone could be an ethical subjectivist and say that utilitarianism is the theory that best describes their particular attitudes, or at least that subset of their attitudes that they endorse.
Someone could be an ethical subjectivist and want to maximize world utility, but such a person would not be a utilitarian, because utilitarianism holds that other people should maximize world utility. If you merely say "I want to maximize world utility and others to do the same", that is not utilitarianism - a utilitarian would say that you ought to maximize world utility, even if you don't want to, and it's not a matter of attitudes. Yes, this is arguing over definitions to some extent, but it's important because I often see this kind of confusion about utilitarianism on LW.
Could you provide a reference for that? At least the SEP entry on the topic doesn't clearly state this. I'm also unsure of what difference this makes in practice - I guess we could come up with a new word for all the people who are both moral antirealist and utilitarian-aside-for-being-moral-antirealists, but I'm not sure if the difference in their behavior and beliefs is large enough for that to be worth it.
Non-egoistic subjectivists?
The SEP entry for consequentialism says it "is the view that normative properties depend only on consequences", implying a belief in normative properties, which means moral realism. If you want to describe people's actions, a utilitarian and a world-utility-maximizing non-realist would act similarly, but there would be differences in attitude: a utilitarian would say and feel like he is doing the morally right thing and those who disagree with him are in error, whereas the non-realist would merely feel like he is doing what he wants and that there is nothing special about wanting to maximize world utility - to him, it's just another preference, like collecting stamps or eating ice cream.
This is getting way too much into a debate over definitions so I'll stop after this comment, but I'll just point out that, among professional philosophers, there is no correlation between endorsing consequentialism and endorsing moral realism.
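For readers curious how a claim like this gets checked: with two yes/no survey answers (endorses consequentialism, endorses realism), the association can be summarized by the phi coefficient of the 2x2 contingency table. A minimal sketch, using made-up counts rather than the actual PhilPapers data:

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi coefficient for a 2x2 table of counts:
         a = realist & consequentialist      b = realist & non-consequentialist
         c = non-realist & consequentialist  d = non-realist & non-consequentialist
    Ranges from -1 to 1; 0 means no association."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Hypothetical counts with proportional rows, so phi comes out to 0
# (i.e. "no correlation" between the two positions).
print(phi_coefficient(120, 180, 80, 120))
```

The counts are placeholders; the point is only that "no correlation" between two categorical survey answers is a well-defined, computable claim.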
A non-consequentialist could be a moral realist as well, such as if they were a deontologist, so it's not a good measurement. Also, consequentialism and moral realism aren't always well-defined terms. Edit: That survey's results are strange. Twenty people answered that they're moral realists but non-cognitivists, though moral realism is necessarily cognitivist.
That doesn't mean utilitarianism is subjective. Rather, it means any subjective idea could correspond to objective truth.
I agree that it would often be good to be clearer about these points. At that point the people who consider themselves hedonistic utilitarians might come up with a theory that says that forcible wireheading is wrong and switch to calling themselves supporters of that theory. Or they could go on calling themselves HUs despite not forcibly wireheading anyone, in the same way that many people call themselves utilitarians today despite not actually giving most of their income away. Or some of them could decide to start working towards efforts to forcibly wirehead everyone, in which case they'd become the kinds of people described by my reply 2). By this, I meant to say "only approve of whatever course of action HU says is the best one".
Said Achmiz (2 points · 10y)
Yeah, I meant that as a normative "what then", not an empirical one. I agree that what you describe are plausible scenarios.
In that case, I'm unsure of what kind of an answer you were expecting (unless the "what then" was meant as a rhetorical question, but even then I'm slightly unsure of what point it was making).
Said Achmiz (1 point · 10y)
Yes, the "what then" was rhetorical. If I had to express my point non-rhetorically, it'd be something like this: If you take a position which gives ethically correct results only until such time as some (reasonably plausible) scenario comes to pass, then maybe your position isn't ethical in the first place. "This ethical framework gives nonsensical or monstrous results in edge cases [of varying degrees of edge-ness]" is, after all, a common and quite justified criticism of ethical frameworks.
It is a point against the framework, certainly. But so far nobody has developed an ethical framework that would have no problems at all, so at the moment we can only choose the framework that's the least bad. (Assuming that we wish to choose one in the first place, of course - I do think that there is merit in just accepting that they're all flawed and then not choosing to endorse any single one.)
Said Achmiz (0 points · 10y)
Well, that's been my policy so far, certainly. Some are worse than others, though. "This ethical framework breaks in catastrophic, horrifying fashion, creating an instant dystopia, as soon as we can rewire people's brains" is pretty darn bad.
... can't we rewire brains right now? We just ... don't.
Said Achmiz (1 point · 10y)
Well, we must not be hedonistic utilitarians then, right? Because if we were, and we could, we would. Edit: Also, what the heck are you talking about?
Wireheading. The term is not a metaphor, and it's not a hypothetical. You can literally stick a wire into someone's pleasure centers and activate them, using only non-groundbreaking neuroscience. It's been tested on humans, but AFAIK no-one has ever felt compelled to go any further. (Yeah, seems like it might be evidence. But then, maybe akrasia...)
Said Achmiz (0 points · 9y)
Where and what are these "pleasure centers", exactly?
I don't see how having a quantitative, empirical measure which is appropriate for one individual helps you with comparisons across individuals. Do we really want to make people utility monsters because their neural currents devoted to measuring happiness have a higher amperage?
I was assuming that the measure would be valid across individuals. I wouldn't expect the neural basis of suffering or pleasure to vary so much that you couldn't automatically adapt it to the brains in question. Well yes, hedonistic utilitarianism does make it possible in principle that Felix ends up screwing us over, but that's an objection to hedonistic utilitarianism rather than the measure.
I mean, the measure is going to be something like an EEG or an MRI, where we determine the amount of activity in some brain region. But while measuring the electrical properties of that region is just an engineering problem, and the units are the same from person to person, and maybe even the range is the same from person to person, that doesn't establish the ethical principle that all people deserve equal consideration (or, in the case of range differences or variance differences, that neural activity determines how much consideration one deserves). It's not obvious to me that all agents deserve the same level of moral consideration (i.e. I am open to the possibility of utility monsters), but it is obvious to me that some ways of determining who should be the utility monsters are bad (generally because they're easily hacked or provide unproductive incentives).
Well it's not like people would go around maximizing the amount of this particular pattern of neural activity in the world: they would go around maximizing pleasure in the-kinds-of-agents-they-care-about, where the pattern is just a way of measuring and establishing what kinds of interventions actually do increase pleasure. (We are talking about humans, not FAI design, right?) If there are ways of hacking the pattern or producing it in ways that don't actually correlate with pleasure (of the kind that we care about), then those can be identified and ignored.
Depending on your view of human psychology, this doesn't seem like that bad a description, so long as we're talking about people only maximizing their own circuitry. ("Maximizing" is probably the wrong word, rather than keeping it within some reference range.) That's what I had in mind, yeah. My core objection, which I think lines up with Said Achmiz's, is that even if there's the ability to measure people's satisfaction objectively (so that we can count the transparency problem as solved), that doesn't tell us how to make satisfaction tradeoffs between individuals.
I agree with this. I was originally only objecting to the argument that aggregating utility between individuals would be impossible or incoherent, but I do not have an objection to the argument that the mapping from subjective states to math is underspecified. (Though I don't see this as a serious problem for utilitarianism: it only means that different people will have different mappings rather than there being a single unique one.)
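To make the "different mappings" point concrete, here is a toy sketch (all numbers invented) showing that the same raw readings, run through two different person-to-utility mappings, can rank interventions differently:

```python
# Activity readings for two people under interventions X and Y.
# Person A's readings span roughly 0-100; person B's span roughly 0-5.
readings = {
    "A": {"X": 10.0, "Y": 90.0},
    "B": {"X": 5.0,  "Y": 0.5},
}
ranges = {"A": (0.0, 100.0), "B": (0.0, 5.0)}  # assumed per-person scales

def aggregate_raw(iv):
    """Mapping 1: just sum the raw readings."""
    return sum(r[iv] for r in readings.values())

def aggregate_normalized(iv):
    """Mapping 2: rescale each person to [0, 1] first, then sum."""
    total = 0.0
    for person, r in readings.items():
        lo, hi = ranges[person]
        total += (r[iv] - lo) / (hi - lo)
    return total

best_raw = max("XY", key=aggregate_raw)          # raw sums favour Y (90.5 vs 15)
best_norm = max("XY", key=aggregate_normalized)  # normalized sums favour X (1.1 vs 1.0)
print(best_raw, best_norm)
```

Neither mapping is "the" correct one; which intervention wins depends on a normalization choice the measurement itself doesn't settle, which is exactly the underspecification at issue.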
Said Achmiz (2 points · 10y)
Er, hang on. If this is your objection, I'm not sure that you've actually said what's wrong with said argument. Or do you mean that you were objecting to the applicability of said argument to hedonistic utilitarianism, which is how I read your comments?
To add to my "yes": I agree with the claim that aggregating utility between individuals seems to be possibly incoherent in the context of preference utilitarianism. Indeed, if we define utility in terms of preferences, I'm even somewhat skeptical of the feasibility of optimizing the utility of a single individual over their lifetime: see this comment.
Kaj, is there somewhere you lay out your ethical views in more detail?
Ditto for Vaniver and Said.
I approve of virtuous acts, and disapprove of vicious ones. In terms of labels, I think I give consequentialist answers to the standard ethical questions, but I think most character improvement comes from thinking deontologically, because of the tremendous amount of influence our identities have on our actions. If one thinks of oneself as humble, that has many known ways of making one act differently. One's abstract, far mode views are likely to only change one's speech, not one's behavior. Thus, I don't put all that much effort into theories of ethics, and try to put effort instead into acting virtuously.
Said Achmiz (2 points · 10y)
Interestingly, it seems our views are complementary, not contradictory. I would (I think) be willing to endorse what you said as a recipe for implementing the views I describe.
Said Achmiz (1 point · 10y)
There is no such centralized place, no; I've alluded to my views in comments here and there over the past year or so, but haven't laid them out fully. (Then again, I'm a member of no movements that depend heavily on any ethical positions. ;) Truth be told — and I haven't disguised this — my ethical views are not anywhere near completely fleshed-out. I know the general shape, I suppose, but beyond that I'm more sure about what I don't believe — what objections and criticisms I have to other people's views — than about what I do believe. But here's a brief sketch. I think that consequentialism, as a foundational idea, a basic approach, is the only one that makes sense. Deontology seems to me to be completely nonsensical as a grounding for ethics. Every seemingly-intelligent deontologist to whom I've spoken (which, admittedly, is a small number — a handful of people here on LessWrong) has appeared to be spouting utter nonsense. Deontology has its uses (see Bostrom's "An Infinitarian Challenge to Aggregative Ethics", and this post by Eliezer, for examples), but there it's deployed for consequentialist reasons: we think it'll give better results. I've seen the view expressed that virtue ethics is descriptively correct as an account of how human minds implement morality, and (as a result) prescriptively valid as a recommendation of how to implement your morality in your own mind once you've decided on your object-level moral views, and that seems like a more-or-less reasonable stance to take. As an actual philosophical grounding for morality, virtue ethics is nonsense, but perhaps that's fine, given the above. Consequentialism actually makes sense. Consequences are the only things that matter? Well, yes. What else could there be? As far as varieties of consequentialism go... I think intended and foreseeable consequences matter when evaluating the moral rightness of an act, not actual consequences; judging based on actual consequences seems utterly useless, because
Said Achmiz (0 points · 10y)
Out of curiosity, what was your reason for asking about my ethical views in detail? I did somewhat enjoy writing out that comment, but I'm curious as to whether you were planning to go somewhere with this.
I'm glad you enjoyed it, and you're right, I didn't go anywhere with it - I got distracted by other things. But it was partly a sort of straw poll to supplement the survey, and partly connected to these concerns: http://lesswrong.com/lw/k60/2014_survey_of_effective_altruists/aw1p
No big systematic overview, though several comments and posts of mine touch upon different parts of them. Is there anything in particular that you're interested in?
If I could ask two quick questions, it'd be whether you're a realist and whether you're a cognitivist. The preponderance of those views within EA is what I've heard debated most often. (This is different from what first made me ask, but I'll drop that.) I know Jacy Anthis - thebestwecan on LessWrong - has an argument that realism, combined with the moral beliefs about future generations typical among EAs, suggests that smarter people in the future will work out a more correct ethics, and that this should significantly affect our actions now. He rejects realism, and thinks this is a bad consequence. I think it actually doesn't depend on realism, but rather on most forms of cognitivism, for instance ones on which our coherent extrapolated view is correct. He plans to write about this.
Definitely not a realist. I haven't looked at the exact definitions of these terms very much, but judging from the Wikipedia and SEP articles that I've skimmed, I'd call myself an ethical subjectivist (which apparently does fall under cognitivism).
I believe the prevalence of moral realism within EA is risky and bad for EA goals for several reasons. One of which is that moral realists tend to believe in the inevitability of a positive far-future (since smart minds will converge on the "right" morality), which tends to make them focus on ensuring the existence of the far future at the cost of other things. If smart minds will converge on the "right" morality, this makes sense, but I severely doubt that is true. It could be true, but that possibility certainly isn't worth sacrificing other goals of improvement. And I think trying to figure out the "right" morality is a waste of resources for similar reasons. CEA has expressed the views I argue against here, which has other EAs and me concerned.
Can you suggest some? These could go into next year's survey, though we're keeping that short - more likely they'd go into a followup that Ben Landau-Taylor of Leverage Research is running.
Why are you taking the effective altruists survey?
Said Achmiz (2 points · 10y)
I think it'd be interesting to know more about the specific ethical views of ethically-minded EAs, but the majority of EAs are not well-versed enough to make Utilitarianism vs. Other Consequentialism distinctions. It's good to make a big survey like this as easy to fill out as possible. Same thing about the "political views" point, although there are standards for left vs. right across countries: http://en.wikipedia.org/wiki/Left%E2%80%93right_politics
Said Achmiz (1 point · 10y)
I think that's a problem! (I discuss in this comment some reasons why.)
Whether or not it's a problem, a survey is not a good place to address it. You have to ask questions people will be able to easily answer if you want to get useful data.
Said Achmiz (1 point · 10y)
That's true, but it is also an inherently problematic approach if (as will almost certainly be the case when it comes to issues of ethics, politics, etc.) the things you really want to know are not easily elicited by questions that people will be able to easily answer, and vice versa — the questions that people can easily answer don't actually tell you what you really want to know about those people's views, attitudes, etc. In any case, what I meant wasn't that "EAs are not well-versed enough in moral philosophy" is a problem for the survey — what I meant was that it's a problem for the EA movement.
I agree about consequentialism. Also, at that level of detail I can't see a way it's action-relevant (whereas if most EAs say they have no knowledge of ethical theories, that suggests a non-philosophical audience is more receptive than some have thought). We should have explained that political terms were what you'd naturally describe yourself as in your country. Do people think most will have interpreted them thus? If so, we can cross-tabulate them against country. If not, would this make many people more than one point out along the spectrum? I'd have thought that an American who describes themselves as 'left' is at least 'centre left' in Europe, and so on.
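The proposed cross-tabulation is mechanical once the responses are collected; a sketch with fabricated data (the column names are my own, not the survey's):

```python
# Cross-tabulate self-described political leaning against country of residence.
# The responses below are invented purely for illustration.
import pandas as pd

responses = pd.DataFrame({
    "country":  ["US", "US", "UK", "Finland", "US", "UK", "Finland"],
    "politics": ["left", "centre", "left", "left", "right", "centre", "centre"],
})

# Rows = country, columns = political self-description, cells = counts.
table = pd.crosstab(responses["country"], responses["politics"])
print(table)
```

This makes it easy to eyeball whether, say, "left" respondents cluster differently by country, which is the worry raised above about terms meaning different things in different places.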
Quite possibly. At least in Finland, the word "left" refers to people who tend to have at least a rough familiarity with actual Marxist theories and still endorse many of them, and tend to use the word "capitalism" as a negative term. It also includes actual outright communists who want to go to a planned economy, though they're a fringe group even here and mostly dying out. Still, it's my impression that "left" means something considerably further to the right in the US. I've frequently heard it said that the average American leftist would be considered a clear right-winger in Finland, though I don't have enough familiarity with the exact positions of American leftists to be able to tell whether that's true.
It's hard to say anything coherent about the U.S. "left" and "right" without antagonizing both groups, but my $0.02:
* I'd characterize the typical U.S. leftist as not really having the foggiest clue about Marx beyond his having some vaguely important relationship to Soviet-style communism, and as not having a clear stance regarding communism or capitalism... either because they actively support a mixed economy, or because they are confused about economics. (I don't mean to imply here that Americans who do have a clear stance aren't confused.)
* While outright communists are generally considered "left" in the U.S., much as outright fascists are generally considered "right" (though some disagree), neither group is terribly relevant; they exist mostly as extremes to rhetorically compare our political opponents to. "So-and-so is a communist/fascist" gets said a lot, but if one were to respond to that claim by discussing various points of non-congruence with communism or fascism, this would likely be seen as sophistry rather than on-point analysis.
* The "left" tends to support government intervention to enforce equal treatment of some genders, ethnicities and sexual orientations, to enforce wealth redistribution, and to provide communal access to various goods (of which the most fractious right now is health insurance, which has become a proxy for health care). Domestically, this intervention is usually framed in terms of government-regulated markets rather than straight-up government control of the means of production or distribution, although there are exceptions.
* Also, the "left" is generally associated with minimizing restrictions on abortion and contraception, maximizing restrictions on firearms, unionizing labor, increasing the political influence of feminism (and "social justice" more generally), and decreasing the political influence of Christianity (and religion more generally) and support for the military, while the "right" is generally associated with the opposite positions.
I think this is true, but with the caveat that a lot of the memes circulating among educated leftists in the US are basically Marxian in their approach to class and economics. Usually not orthodox Marxist, though, and they fall well short of cohering into a complete Marxian analysis anywhere outside of sociology departments and the odd punk show. Joe Left is generally not aware of this. Joe Right probably has a confused idea of the relation ("communist" is a dirty word in the US, so right-wing news outlets don't miss opportunities to use it), but is unaware of the Marxian/Marxist distinction and thinks it makes Joe Left an outright commie.
I don't know enough about Marxianism (either orthodox or heterodox) to have a useful opinion about how popular Marxian memes are among the US left (or, for that matter, the US right), but I certainly agree that that's a different question than how well informed J Left is about Marx, and an interesting one.
I'm not so sure, in terms of their actual policies I hear the British Conservatives are pretty close to the US Democrats. They're cutting services for the poor, but to a level above that found in the US. That does typically show inclinations similar to those of US Republicans, but it could also reflect a view about the optimal end level of services similar to some Democrats. So I guess it depends on what it shows most often, and whether those inclinations are most informative for the purposes of understanding people (eg in this survey).

Taking this was an interesting feeling. In particular, being asked (even anonymously) about donations and other concrete actions in a context where donating a lot is the norm. The scene in HP:MOR where the phoenix asks Hermione who she's saved comes to mind. That is, being asked just made it very obvious that I believe I should be an effective altruist, but from my actions it doesn't look like I am one. I have reasons for that, but it's still worrying, since I don't have much evidence that I won't just change my mind once I do have money.

For what it's worth, I just set up a bunch of email reminders throughout my last semester to make sure I put some kind of donation plan in place by the time I start working (even if it's "nevermind, I was wrong about my values").

That was a weird feeling; I didn't realize that this was my own comment, and only checked the username when that last paragraph seemed eerily familiar. As a follow-up: I got a good full-time job starting in January 2015. I've got 10% of post-tax earnings from my internships set aside in a savings account to donate when Givewell announces 2015 recommendations, and I'll add 5% of this year's pre-tax salary to that donation also. Nothing actually donated yet, but it seems really unlikely that I won't do it. I'm planning to keep donating 5% of pre-tax as a token amount for the next few years, and have a few plans for how I might be able to donate more later. I was several months late in deciding to do this and setting up the savings account, so my reminder emails didn't work perfectly, but in the end I did it.

Exciting to see that Peter Singer took the survey!

Was it definitely Peter Singer?
Peter Wildeford (7 points · 10y)
As near as we can tell, but we're reaching out to verify.
Yes, I contacted him personally to fill it out. We used personal contacts as much as possible to avoid biased sampling (as many EAs don't frequent online forums like LW and Facebook).
[confused comment, ignore]
Uh, I contacted him. Tom, this is on the survey planning document :P

It is not clear whether non-EAs (whatever that exactly means) should participate in this survey. My first reaction was: "I'm not really an EA. Should I take the survey? Maybe not."

I'd think as many people as possible should take this survey to avoid selection biases.

EDIT: I took the survey.

Agreed that as many people as possible should take it. The first question asks whether you self-identify as an 'EA', and clarifies that we'd also like the responses of those who don't.
For people who are not EAs, a lot of questions make little sense.
Peter Wildeford (5 points · 10y)
If the questions don't make sense, then either answer them as best you can or don't answer them. We're just looking to make sure that we minimize as much as possible our "I'm not really that EA, so I won't take the survey" sample bias.
I appreciate that you did this - I wanted to give you information, but I'm also not very EA and kind of insecure about that, so I probably would have quit midway through the survey if there were too many questions that seemed like they weren't for me.
Yes. Many non-EAs' responses will include lots of "unsure/unfamiliar with the options" answers.

The question "When did you first hear the term 'effective altruism'?" is tricky because that term was only invented in late 2011, after many of us had heard about effective altruism itself.

Yes - 2012 in practice. To make the question precise, it clarifies that it refers to the term. It would also be interesting to know when people first heard of EA avant la lettre - this could mean many things, but hearing of an EA org certainly counts. For my part I heard of GWWC in 2010, from Pablo Stafforini (benthamite here). I read Peter Unger's book Living High and Letting Die in about 2002, which argues for giving large amounts to effective charities, and is perhaps the first mention of Earning to Give.
I think some people might have us beat 300 years for EtG ;) http://www.jefftk.com/p/history-of-earning-to-give-iii-john-wesley
Effective Altruism was used several years before CEA adopted the term. If you heard it before that time, please put the earlier date. However, yes, many people will put dates after CEA's adoption (or even after Singer's TED Talk, which seems to be the final galvanization of the term).
Are you sure it was used beforehand Jacy? Are there instances you can remember?
It was used in the Felicifia community, although it wasn't used as definitively as it is now. 'Strategic altruism' was more common, though it wasn't as catchy. It was also just used in casual conversation. I could be wrong though.
This 'official' account gives the impression that no term had much common currency before the end of 2011, apart from the jokey 'super-hardcore do-gooder'. I can't comment on whether other branches of the community used terms in a similar way - I've never heard of Felicifia. http://www.effective-altruism.com/the-history-of-the-term-effective-altruism/
lukeprog (Luke Muehlhauser) objects to CEA's claim that EA grew primarily out of Giving What We Can at http://www.effectivealtruism.org/#comments :
I agree with Luke here. CEA seems to often overstate its role in the EA movement (another example at http://centreforeffectivealtruism.org/).
I certainly agree that effective altruism existed long before GWWC. The discussion I'm addressing though is about the origin of the term "effective altruist."
That's interesting, especially if someone can find a link. Here's a date-based Google search, though a cursory glance doesn't reveal any references where the term itself was included before 2012: https://www.google.ca/search?q=%22effective+altruism%22&client=firefox-a&hs=1mw&rls=org.mozilla%3Aen-US%3Aofficial&channel=sb&sa=X&ei=pohiU5jjINSyyASUgIGgBQ&ved=0CB0QpwUoBg&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2008%2Ccd_max%3A12%2F31%2F2011&tbm=
The first mention I find on the Felicifia site is from 2012. (As a check, the first entry I find for "suffering" is 2007.) (And trying this search with Felicifia's search tool gives "The following words in your search query were ignored because they are too common words: altruism effective.")
When I wrote "A Name For A Movement?" in March 2012, "Effective Altruism" was a name in circulation, but other names like "Smart Giving" and "Optimal Philanthropy" were more common.
For people who haven't read CEA's post on this: that was after their vote in December 2011. When I participated in it, I don't remember anyone discussing the term as already in use; I'd expect someone would have remembered if it was.

Just to let you guys know: like with the LW survey, I wouldn't have minded filling in an optional 'extended section'. I imagine you made the survey shorter in order not to scare people off.

Thanks, that's helpful to know. Jacy Anthis suggested that, and I was the main person keeping it short. I was going to link at the end to a follow-up survey Ben Landau-Taylor was running, but it wasn't ready in time. In general, how did people find the length of the survey? Would they have filled in more, and would they have followed a link to more questions?
Knowing nothing about the survey beforehand, I would have filled in a much longer one. But then, I'm a survey junkie; I even got a long way into the 45-minute Yale survey.
As I said, once I had already started the survey I wouldn't have minded filling in more. If it had been announced in advance as a longer survey, I imagine the initial barrier would have been higher for many, though. Personally, I would have filled it in even if it were longer, since I think it's important. With a different topic, though, that could have put me off entirely.

I'd love to hear thoughts connected to the LessWrong censuses: comparisons, lessons learnt, feedback on our survey, thoughts on how EAs and LessWrongers may differ, etc. The censuses have been going on a long time, and have a lot of data, so this would be interesting.

Can anyone involved in the census say whether it reached people wholly or mainly through a post on http://lesswrong.com/promoted/ ? That'd be pretty powerful if it can get 1500+ responses - it would be great if this post could be promoted too, as many people are putting a lot of effort into sharing the EA survey widely! How can we make promotion happen?

I predict there will be around 35% of people supporting meta and x-risk causes (like 80k, GWWC operating costs, MIRI, FHI etc).

[Thread for making and discussing predictions]

To expand on my predictions, I think that global poverty will be the most popular cause except among those who say they heard of EA through LessWrong (whose numbers I'll be interested to see). I also think that skepticism/atheism will be the other social movement with which most identify, and atheism the most popular religious position. In the link Jacy Anthis has given a full set of predictions to test his accuracy.
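For anyone wanting to test such predictions against the eventual results, the Brier score is a standard accuracy measure for probabilistic forecasts (lower is better). A sketch with hypothetical predictions and outcomes, not anyone's actual forecasts:

```python
# Each entry: (claim, predicted probability, whether it came true).
# These are illustrative placeholders, not real predictions or results.
predictions = [
    ("majority of respondents are male", 0.80, True),
    ("global poverty is the top cause",  0.70, True),
    ("most respondents are students",    0.50, False),
]

def brier(probability, outcome):
    """Squared error between the forecast probability and the 0/1 outcome."""
    return (probability - (1.0 if outcome else 0.0)) ** 2

scores = [brier(p, o) for _, p, o in predictions]
print(sum(scores) / len(scores))  # mean Brier score across predictions
```

A forecaster who always says 0.5 scores 0.25 on every question, so a mean well below that indicates predictions that were both confident and right.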

Here are my predictions (on Prediction Book):
* meat-eating
* gender
* favored cause
Fun site! Once you register, it lets you assign probabilities to predictions others have made.
I predict:
* utilitarianism's the most common philosophy
* a clear majority will be non-religious, and respondents often identify with skepticism/atheism as a social movement
* a clear majority are left wing
* most respondents are under 30, with 50% students
* people often heard of EA through Peter Singer

And the most significant outcome:
* There will be many non-students without significant donations, which in my view is not a good thing at all
Good point about LW affiliation - in addition I would add that results are highly dependent on how the survey is distributed. This makes large predictions difficult, but more specific predictions (like >80% of LW affiliates will identify as atheist/agnostic) might be the way to go. I'm still getting familiar with this community, but I suppose it's a fun exercise, so I've added some thoughts to the Excel sheet.
Yes - the survey asks where you heard about the survey itself, what groups you're a member of, and where you first heard of EA: LessWrong is a candidate answer for each. So you can make predictions for specific groups.
I definitely agree LW affiliation will be a major predictor of other results. Perhaps I should have made two sets of predictions (one for LW folks, one for others). - Jacy
One thing that would be really interesting is comparing EA-LW folks with both the standard EA answers and the standard LW survey answers.
Just to be clear, it wouldn't be "LW affiliation"; it would be "heard of EA through LW". I'm sure there are quite a few like me who learned about LW through EA, not the other way around.
There are questions both about whether you're a LessWrong member and whether you first heard of EA through LessWrong, so we can get data on both.

What qualifies one as an effective altruist for the purposes of this survey? Is it "self-identifies as an effective altruist"? Or something else?


were altruistic before becoming EAs

This phrase strongly suggests that the EA community needs to more clearly describe what it is they mean when they use the terms "altruism" and "effective altruism" (as I've commented before).

Yes, the second question is: Could you, however loosely, be described as 'an EA'? Answer no if you are not familiar with the term 'EA', which stands for 'Effective Altruist'. This question is not asking if you are altruistic and value effectiveness, but rather whether you loosely identify with the existing 'EA' identity. What would you suggest? I take 'altruistic' to generally mean 'acts partly for the good of others, and is willing to make sacrifices for this end'. There's then a decent behavioural test for whether people were altruistic beforehand. There's no clear definition of being EA, besides accepting some sufficient number of EA ideas.
Said Achmiz (2 points · 10y)
I judge this to be a problematic criterion. See this comment, esp. starting with "To put this another way ...", for why I think so. That does seem like a reasonable definition, but in that form it seems rather too vague to be useful for the purposes of constructing a behavioral test. We'd have to at least begin to sketch out what sorts of acts we mean (literally any act that benefits anyone else in any way?), and what sorts of sacrifices, and how willing, etc. Quite so. My contention is that there's a distinct separation between, on the one hand, the general idea that we should be altruistic (in whatever sense we decide is meaningful and useful) and that we should seek to optimize the effectiveness of our altruism, and on the other hand, the loose community of people who share certain values, certain approaches to ethics, etc. (as I outline in the above-linked comment), which are not necessarily causally or conceptually entangled with the former (more general) idea. This is problematic for various reasons, I think. I won't clutter this thread by starting a debate on those reasons (unless asked), but I think it's at least important (and relevant to endeavors like this survey) to recognize this distinction.
That comment makes a lot of sense. It depends what we use the criterion for. In the survey, it's to gather information, and it's precisely for this reason that I chose not to ask whether people were 'EAs' in your loose sense - almost everyone would say yes. I'm curious which uses you think the criterion is problematic for.

It's a matter of degree, but in the EA context (which sets a high bar), I personally call people 'altruistic' if (but not only if) they've donated >=10% of a real income for over a year, or they've consistently spent over an hour a week doing something they'd otherwise rather not do in order to help others.

That's right, if by 'conceptually entangled' you mean 'necessarily connected', or even 'commonly accepted by both groups of people'. For example, I believe utilitarianism is widely accepted by EAs (though the survey may show otherwise!), but it is not entangled with merely valuing altruism and the effectiveness of altruism.

I see no harm in thread-cluttering, at least here - go for it.
8 · Said Achmiz · 10y
Well, one issue is recruiting/evangelism/outreach/PR/etc. If you want to convince people[1] both to be altruistic and to attempt to optimize their altruism (i.e., the general form of the "effective altruism" concept), it does not do to conflate that general form with your specific form (which involves the specific, idiosyncratic ideas I listed in that comment I linked — a particular form of utilitarianism, a particular set of values including e.g. the welfare of animals, etc.).

Take me, for instance. I find the general concept to be almost obvious. (I'm an altruistic person by temperament, though I remain agnostic on whether certain forms of direct action are in fact the best way to bring about the sort of world toward which such action is ostensibly aimed, as compared with e.g. a more libertarian approach. As for the "effective" part — well, duh.)

However, if you were to say: "Hey, Said Achmiz, want to join this-and-such EA group / organization / etc.? Or donate to it? Or otherwise contribute to its success?" I would demur, because in my experience, groups and organizations that self-identify as EA tend to have the aforementioned specific form of EA as their aim — and I have significant disagreements with many components of that specific form.

If you (this hypothetical organization) do not make it clear that you have, as your goal, the general form of effective altruism, and that the specific form is merely one way in which your members express it, then I won't join/contribute/etc. If you in fact have only the specific, and not the general, form as your goal, then not only will I not join, but I will be quite cross about the fact that you would thereby be appropriating the term "effective altruism" (which would otherwise describe a perfectly reasonable concept with which I agree and a general ethical and practical stance which I support), and using it to describe something which I do not support and about which I have strong reservations, and leaving me (and oth
1 · Said Achmiz · 10y
Here is the promised other issue I see with the conflation of the general[1] and specific[2] forms of effective altruism.

You do not actually ever argue for the ideas making up that specific form. It seems to go like this: "We all think being altruistic is good, right? Of course we do. And we think it's important to be effective in our altruism, don't we? Of course. Good! Now, onwards to the fight for animal rights, the saving of children in Africa, the application of utilitarian principles to our charity work, and all the rest."

Now, as I say in my other comments, one issue is that potential newcomers to the movement might assent to those first two questions, but to the "Now, onwards ..." say — "whoa, whoa, where did that suddenly come from?". But the other issue is that it seems like you yourselves haven't given much thought to those positions. How do you know they're right, those philosophical and moral ideas? A lot of EA writing seems not to even consider the question! It's not as if these are obvious principles you're assuming — many intelligent people, on LessWrong and elsewhere, do not agree with them!

Of course I don't actually think you've simply accepted these ideas out of some sort of blind go-alonging with some liberal crowd. This is LessWrong; I think better of you folks than that. (Although some EA-ers without an LW-or-similar background may well have given the matter just as little thought as that.) Presumably, you were, at some point, convinced of these ideas, in some way, by some arguments or evidence or considerations. But I have no idea what those considerations are. I have no idea what convinced you; I don't know why you believe what you believe, because you hardly even acknowledge that you believe these things. In most EA writings I've seen, they are breezily assumed.

That is not good for the epistemic health of the movement, I think. I think it would be good to have some effort to clearly delineate the ideas that are held by, and commonly
Global poverty EAs don't generally state or imply utilitarianism or similar views, though x-riskers do (at least those who value non-existent people). I personally favour global poverty charities, am quite tentative in my attitudes to many mainstream ethical theories, and don't think being more so would affect my donations (though being less so might).

The degree of thought varies a lot, sure. I agree that people should spend more time on these questions when they're action-relevant, as they are for people who'd act to prevent x-risk if they accepted them. Breezy assumption isn't optimal, but neither is detailed writing about ethical theory.
I apply a similarly high bar for altruism - many EAs don't count as altruistic by this standard.