Part one of what will hopefully become the aspirant sequence.

Content note: Possibly a difficult read for some people. You are encouraged to just stop reading the post if you are the kind of person who isn’t going to find it useful. Somewhat intended to be read alongside various more-reassuring posts, some of which it links to, as a counterpoint in dialogue with them. Pushes in a direction along a spectrum, and whether this is good for you will depend on where you currently are on that spectrum. Many thanks to Keller and Ozy for insightful and helpful feedback; all remaining errors are my own. 


Alice is a rationalist and Effective Altruist who is extremely motivated to work hard and devote her life to positive impact. She switched away from her dream game-dev career to do higher-impact work instead, she spends her weekends volunteering (editing papers), she only eats the most ethical foods, she never tells lies and she gives 50% of her income away. She even works on AI because she abstractly believes it’s the most important cause, even though it doesn’t really emotionally connect with her the way that global health does. (Or maybe she works on animal rights for principled reasons even though she emotionally dislikes animals, or she works on global health even though she finds AI more fascinating; you can pick whichever version feels more challenging to you.) 

Bob is interested in Effective Altruism, but Alice honestly makes him a little nervous. He feels he has some sort of moral obligation to make the world better, but he likes to hope that he’s fulfilled that obligation by giving 10% of his income as a well-paid software dev, because he doesn’t really want to have to give up his Netflix-watching weekends. Thinking about AI makes him feel scared and overwhelmed, so he mostly donates to AMF even though he’s vaguely aware that AI might be more important to him. (Or maybe he donates to AI because he feels it’s fascinating, even though he thinks rationally global health might have more positive impact or more evidence behind it - or he gives to animal rights because animals are cute. Up to you.) 


Alice: You know, Bob, you claim to really care about improving the world, but you don’t seem to donate as much as you could or to use your time very effectively. Maybe you should donate that money rather than getting takeout tonight? 

Bob: Wow, Alice. It’s none of your business what I do with my own money; that’s rude. 

Alice: I think the negative impact of my rudeness is probably smaller than the potential positive impact of getting you to act in line with the values you claim to have. 

Bob: That doesn’t even seem true. If everyone is rude like you, then the Effective Altruism movement will get a bad reputation, and fewer people will be willing to join. What if I get so upset by your rudeness that I decide not to donate at all?

Alice: That kind of seems like a you problem, not a me problem. 

Bob: You’re the one who is being rude.

Alice: I mean, you claim to actually seriously agree with the whole Drowning Child thing. If you would avoid doing any good at all, purely because someone was rude to you, then I think you were probably lying about being convinced of Effective Altruism in the first place, and if you’re lying then it’s my business. 

Bob: I’m not lying; I’m just arguing why you shouldn’t say those things in the abstract, to arbitrary people, who could respond badly. Sure, maybe they shouldn’t respond badly, but you can’t force everyone to be rational.

Alice: But I’m not going out and saying this to some abstract arbitrary person. Why shouldn’t you, personally, work harder and donate more? 

Bob: I’m protecting my mental health by ensuring that I only commit an amount of money and time which is sustainable for me. 

Alice: So you believe that good will actually be maximised by donating exactly the amount of money that will give you warm fuzzies, and no more, and volunteering exactly the amount of time that makes you happy, and no more? 

Bob: Absolutely. If I tried to donate more time or money, I’d burn out. Then I’d do even less good. Under this view, I’m actually obligated not to donate any more than I do!

Alice: You’re morally obligated to take the actions that happen to make you maximally happy?  Wow, that seems like a really convenient coincidence for you, and that seems like a great reason to really challenge that belief. Isn’t it possible that you could be slightly inconvenienced without significantly increasing your risk of burning out, or that you could do a significant amount more good while only increasing your burn-out risk by an acceptably small amount? 

Bob: Who says I’m maximally happy? I’d probably be happier if I gave 0% to charity and bought a faster car, but I’m giving 10%! Nobody is perfect, and 10% is good enough. Surely you should go and criticise some of the people who are giving 0%? 

Alice: I criticise them plenty, and that doesn’t mean that I can’t also criticise you; that seems like a deflection. Nobody’s perfect, but some people are coming closer than others. I can’t really define whether you’re maximally happy, but I assume you would feel some guilt about donating 0%, or you’d miss out on some warm fuzzies, or you’d miss out on the various social benefits of being part of the community. 

Bob: No, I donate 10% because I want to help others and I genuinely care about positive impact, and ethical obligations, and utilitarian considerations. I just set a lower standard. 

Alice: Regardless, I don’t think any of this really addresses my criticism. Donating 10% is perfectly consistent with being a total egoist who just happens to enjoy the warm fuzzies of donating some money to charity. But humans aren’t reflectively consistent, and I think if you were an actual utilitarian, you would probably believe that the ethical amount to give is higher than the amount you inherently personally want to give. 

Bob: Sure, if there was a button which magically made me more ethical, and caused me to want to donate 30%, then I’d probably press it because I believe that’s the right thing to do. But the magical button doesn’t exist. I currently want to donate 10%, and I can’t make myself want to donate 30% any more than I can change my natural talents.

Alice: So your claim is that it’s okay to be lazy, or selfish, or hypocritical, because you can’t make yourself be any less of those things?

Bob: No, you’re just being rude again. I’m not lazy about doing my fair share of the dishes. I just think that, when it comes to allocating resources to altruism, you’ll burn out if you push yourself to do more good than you’re naturally inclined to. 

Alice: I think if this was your true objection - your crux - then you would have probably put a lot of work into understanding burnout. Some of the hardest-working people have done that work - and never burned out. Instead, you seem to treat it like a magical worst possible outcome, which provides a universal excuse to never do anything that you don’t want to do. How good a model do you have of what causes burnout? (I notice that many people think vacations treat burnout, which is probably a sign they haven’t looked at the research.) Surely there’s not a black-and-white system where working slightly too hard will instantly disable you forever; maybe there’s a third option where you do more but you also take some anti-burnout precaution. If I really believed I couldn’t do more without risking burnout, and that was the most important factor preventing me from fulfilling my deeply held ethical beliefs, I think I would have a complex model of what sorts of risk factors create what sort of probability of burnout, and whether there’s different kinds of burnout or different severity levels, and what I could do to guard against it. 

Bob: Well, maybe that’s true. I definitely don’t want to work any harder than I currently do, so I guess I’d be motivated to believe that I’ll burn out if I do, and that could bias my thinking. But it’s still dangerous and rude to go around spouting this kind of rhetoric, because some people might have a lot of scrupulosity, and they could be really harmed by being told they’re bad people unless they work harder. 

Alice: Seems like a fake justification. I’m sure some people should reverse any advice they hear, but I’m currently talking to you and I don’t think you have scrupulosity issues.

Bob: Even assuming I don’t have scrupulosity issues, if I overworked myself, I’d be setting a bad example to people who do have scrupulosity issues. I’d be contributing to bad social norms. 

Alice: Weird, you don’t seem to think that I’m contributing to bad social norms by existing. Actually I think I’m a good role model for everyone else. 

Bob: You’re really arrogant.

Alice: This conversation isn’t about my flaws, and also, I don’t think humility is always a virtue. For instance, you’re humble about how much you can realistically achieve, but since you haven’t really tested the question, I think it’s a vice. I actually think my mental health is pretty good, and the work that I do contributes to my positive mental health; I have a sense of purpose, a sense of camaraderie with other people in the community, I don’t really deal with any guilt because I genuinely think I’m doing the most I can do, and I like it when people look up to me.

Bob: Okay, but I can’t become you. I can only act in accordance with whatever values I really have. I wouldn’t feel really good all the time if I worked hard like you. I’d just be miserable and burn out. I can’t change fundamental facts about my motivational system. 

Alice: What if we lived in the least convenient possible world? What if the techniques I use to avoid burnout - like meditating, surrounding myself with people who work similarly hard so that my brain feels it’s normal, eating a really healthy diet, coworking or getting support on tasks that I’m aversive about, practising lots of instrumental rationality techniques, frequently reminding myself that I’m living consistently with my values, avoiding guilt-based motivation, exercising regularly, seeing a therapist proactively to work on my emotional resilience, and all that - would actually completely work for you, and you’d be able to work super hard without burning out at all, and you’d be perfectly capable of changing yourself if you tried? 

Bob: Just because they’d work for me, doesn’t mean they’d work for others. This is a potentially harmful sort of thing to talk about, because some fraction of people will hear this advice and overwork themselves and end up with mental health crises, and some people will think you’re a jerk and leave the movement, and some people will be unable to change themselves and will feel really guilty. 

Alice: How sure are you that this isn’t also true about the opposite advice? Maybe some people work on a forks model rather than a spoons model, so they actually need to do tasks in order to improve their mental health, but they hear advice telling them to take breaks to avoid burnout - so they sit around being miserable, gaming and scrolling social media, wondering when resting is going to start improving their burnout problems, not realising that they aren’t burned out at all and they’d actually feel better if they worked harder and did rejuvenating tasks and got into a success spiral. Maybe some people are put off from the movement because they don’t think we’re hardcore enough, so they go off to do totally ineffective things like being a monk and taking a vow of silence because that feels more hardcore or real. Maybe the belief that you can’t change fundamental facts about yourself is harmful to some people with mental illnesses who feel like they’ll never be able to become happy or productive. In the least convenient possible world, where the advice to rest more is equally harmful to the advice to work harder, and most people should totally view themselves as less fundamentally unchangeable, and the movement would have better PR if we were sterner - would you work harder then? 

Bob: I just kind of don’t really want to work harder. 

Alice: I think we’ve arrived at the core of the problem, yes.

Bob: I don’t know what the point of this conversation was. You haven’t persuaded me to do anything differently, I don’t think you can persuade me to do anything that I don’t want to do, and you’ve kind of just made me feel bad.

Alice: Maybe I’d like you to stop claiming to be a utilitarian, when you’re totally not - you’re just an egoist who happens to have certain tuistic preferences. I might respect you more if you had the integrity to be honest about it. Maybe I think you’re wrong, and there’s some way to persuade you to be better, and I just haven’t found it yet. (Growth mindset!) Maybe I want an epistemic community that helps me with my reasoning, and calls me out when I’m engaging in bias or motivated stopping, which means I want the kinds of things I’m saying here to be normal and okay to say - otherwise people won’t say them to me. Maybe I just notice that when people make type-1 errors in the working-too-hard-and-burning-out direction they usually get the reassurance they need from the community, and when people make errors in the type-2 not-working-hard-enough direction they don’t really get the callouts they need because it’s considered rude, and I’m just pushing in the direction of editing that social norm. Maybe I’d like you to be honest about this because I’d like to surround myself with a community of people who share my values, so I’d like to be able to filter out people like you - no offence, we can still be friends, it’s just that I feel like I’d find it easier to be motivated and consistent if my brain wasn’t constantly looking at you and reminding me that I totally could have a cushy life like yours if I just stopped living my values.  

Bob: Wait, are you claiming that I’m harming you, just by existing in your vague vicinity and not doing the maximum amount of good? 

Alice: No, not really, maybe I’m just claiming that we have competing access needs. I mean, I don’t really know what the correct solution is. Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai - there’s a reason this is on LessWrong and not the EA forum. Maybe I’m in the minority and my needs aren’t realistically going to be met, in which case I will shrug and carry on trying to do the best that I can. Or maybe thinking about the potential positive impact on me is just the push you need to be better yourself. Maybe I don’t think you’re harming me, exactly, I just think you’re being rude - and maybe that makes it okay for me to be a little rude, too. 

Bob: I want to tap out of this conversation now. 

93 comments

[note: I don't consider myself Utilitarian and sometimes apply True Scotsman to argue that no human can be, but that's mostly trolling and not my intent here. I'm not an EA in any but the most big-tent form (I try to be effective in things I do, and I am somewhat altruistic in many of my preferences). ]

I think Alice is confused about how status and group participation work.  Which is fine, we all are - it's insanely complicated.  But she's not even aware how confused she is, and she's committing a huge typical mind fallacy in telling Bob that he can't use her preferred label "Utilitarian". 

I think she's also VERY confused about sizes and structures of organization.  Neither "the Effective Altruist movement" nor "rationalist community" are coherent structures in the sense she's talking about.  Different sites, group homes, companies, and other specific groups CAN make decisions on who is invited and what behaviors are encouraged or discouraged.  If she'd said "Bob, I won't hire you for my lab working on X because you don't seem to be serious about Y", there would be ZERO controversy.  This is a useful and clear communication.  When she says "I do...

Hmm, does your response change if they're housemates or something like that? I agree there'd be no controversy about Alice deciding not to hire Bob because he doesn't meet her standards, and I think there'd be little controversy over some org deciding to hire Bob over Alice because he's more likeable. But, if it makes the post work better for you, you can totally pretend that instead of talking about membership in "the rationalist community", they're talking about "membership in the Greater Springfield Rationalist Book Club that meets on Tuesdays in Alice and Bob's group house". I think Alice kicking Bob out of that would be much more contentious and controversial!
Part of my response is "this is very context-dependent", and that is overwhelmingly true for a group house or book club.  Alice can, of course, leave either one if she feels Bob is ruining her experience.  She may or may not convince others to kick Bob out if he doesn't shape up, depending on the style of group and charter for formal ownership of the house. She'd be far better off, in either case, being specific about what she wants Bob to do differently, rather than just saying "work harder".

I think the fact that the world where:

I can work extremely hard, doing things I don't particularly like, without burnout, eat only healthy food without binge eating spirals, honestly enjoy doing exercises, have only meaningful rest without exhausting my willpower, and generally be fully intellectually and emotionally consistent, completely subjugating my urges to my values...

is called the least convenient possible world says something interesting about this whole discourse.


Honestly, the world where I'm already a god sounds extremely convenient. And pretending that we are there, demanding that we have to be there, claiming that we could've been there already if only we'd just tried harder, doesn't sound helpful at all. Yes, it's important to try to get there. One step at a time. Check whether it's possible to go faster occasionally, while being nice and careful towards yourself. But as soon as you find yourself actually having a voice in your head being mean to you because you are not as good as you wish to be, it seems that you've failed the nice and careful part.

If I could work extremely hard doing things I don't like, without any burnout, eat only healthy food without binge eating spirals, honestly enjoy doing exercises, have only meaningful rest without exhausting my willpower and generally be fully intellectually and emotionally consistent, completely subjugating my urges to my values... but ONLY by being really mean and cruel and careless to myself... Man, that would suck! That would be a really inconvenient world! That would be a world where I'm forced to choose either "I don't want to be mean to myself, even if I could save lots of people's lives by doing that, so I'm just going to deliberately leave all those people to die" or "I'm going to be mean to myself because I think it's ethically obligatory", and I really don't want to make that choice! I much prefer a world where a choice like "I'm going to be nice and careful to myself because actually that's the best way to be more productive, and being mean isn't sustainable" is an option on the table. Way more convenient. I really hope it's the one we live in.
Ape in the coat · 3mo
I mean, if you have successfully subjugated your urges to your values, then you actually enjoy your new lifestyle, so you are not mean to yourself anymore and it's very convenient... But, yeah, we can spin the inconvenience framework however we (don't) like. That's because reality doesn't actually run on inconvenience, and this kind of speculation is rarely helpful. Saying that we believe X because it's convenient is easy, because one can always find a framework according to which believing X is convenient, and always demand attempts to find new clever solutions around all the objective reasons why X seems to be true.

Let's go one step higher:

Carol: Hey, Alice, I've noticed that you spend a couple of hours a day meditating instead of taking extra work and thus earning more money and donating it to charity. Don't you think that you are being hypocritical and not consistent with your values?

Alice: Actually, meditating is what helps me keep up my lifestyle at all. I do it specifically in order to be more productive.

Carol: Oh, how very convenient that the only way for you to be somewhat productive is to spend a couple of hours a day doing nothing and not, say, self-flagellation. Have you actually tried to find a clever solution around this problem, or did you just stop as soon as you figured out a nice way, instead of an actually efficient one?

The thing is, perceiving Alice (or Carol) as speaking the hard truths and Bob as a lazy motivated reasoner is wrong. Both of them are motivated reasoners! Both of them are rationalizing for their own convenience, and both of them capture something true about reality. And both of them are probably voices in your head. Sometimes you need to side more with Alice and sometimes with Bob. Finding the right balance is the difficult thing. But if you always find yourself as Bob, defending himself against Alice - then something seems to be not working as it should.
Well, yes. The correct response to noticing "it's really convenient to believe X, so I might be biased towards X" isn't to immediately believe not-X. It's to be extra careful to use evidence and good reasoning to figure out whether you believe X or not-X.

Multiple related problems with Alice's behavior (if we treat this as a real conversation):

  1. Interfering with Bob's boundaries/autonomy, not respecting the basic background framework where he gets to choose what he does with his life/money/etc.
  2. Jumping to conclusions about Bob, e.g., insisting that the good he's been doing is just for "warm fuzzies", or that Bob is lying
  3. Repeatedly shifting her motive for being in the conversation / her claim about the purpose of the conversation (e.g., from trying to help Bob act on his values, to "if you’re lying then it’s my business", to what sorts of people should be accepted in the rationalist community) 
  4. Cutting off conversational threads once Bob starts engaging with them to jump to new threads, in ways that are disorienting and let her stay on the attack, and don't leave Bob space to engage with the things that have already come up

These aren't merely impolite, they're bad things to do, especially when combined and repeated in rapid succession. It seems like an assault on Bob's ability to orient & think for himself about himself.

Yes, but this isn't about Alice.

I don't think anyone would dispute that Alice is being extremely rude! Indeed she is deliberately written that way (though I think people aren't reading it quite the way I wrote it, because I intended them to be housemates or close friends, so Alice would legitimately know some amount about Bob's goals and values). I think a real conversation involving a real Bob would definitely involve lots more thoughtful pauses that gave him time to think. Luckily it's not a real conversation, just a blog post trying to stay within a reasonable word limit. :( Alice is not my voice; this is supposed to inspire questions, not convince people of a point. For instance: is there a way to achieve what Alice wants to achieve, while being polite and not an asshole? Do you think the needs she expresses can be met without hurting Bob?

Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai...


This is a disturbing claim, although I realize that the author's opinions don't coincide with those of the "Alice" character. Personally, I'm not a utilitarian, nor do I want to be a utilitarian or think that I "should" be a utilitarian[1]. I do consider myself a person who is empathetic, honest and cooperative[2]. I hope this doesn't disqualify me from the rationalist community?

In general, I'm in favor of promoting societal norms which incentivize making the world better: such norms are obviously in everyone's interest. In this sense, I'm very sympathetic to effective altruism. However, these norms should still regard altruism as supererogatory: i.e., it should be rewarded and encouraged, but its lack should not be severely punished. The alternative is much too vulnerable to abuse.

  1. ^

    IMO utilitarianism is not even logically coherent, due to paradoxes with infinite ethics and Pascal's mugging.

  2. ^

    In the sense of trying to act according to superrationality

...
Ben Amitay · 2mo
I seem to be the only one who read the post that way, so probably I read my own opinions into it, but my main takeaway was pretty much that people with your (and my) values are often shamed into pretending to have other values and inventing excuses for how their values are consistent with their actions, while it would be more honest and productive if we took a more pragmatic approach to cooperating around our altruistic goals.

If I were Bob I'd have told her to fuck off long ago and stopped letting some random person berate me for being lazy just like my parents always have. This is basically guilt-tripping, not a beneficial way of approaching any kind of motivation, and it is absolutely guaranteed to produce pushback. But then, I'm probably not your target audience, am I?

Btw just to be clear, I think Said Achmiz explained my reaction better than I, who habitually post short reddit-tier responses, can. My specific issue is that Alice seems to be acting as if it's any of her business what Bob does. It is not. Absolutely nobody likes being told they're not being ethical enough. It's why everyone hates vegans. As someone who doesn't like experiencing such judgmental demands, I would have the kneejerk emotional reaction to want to become less of an EA just to spite her. (I would not of course act on this reaction, but I would start finding EA things to be in an ugh field because they remind me of the distress caused by this interaction.)

I'm noticing it's hard to engage with this post because... well, if I observed this in a real conversation, my main hypothesis would be that Alice has a bunch of internal conflict and guilt that she's taking out on Bob, and the conversation is not really about Bob at all. (In particular, the line "That kind of seems like a you problem, not a me problem" seems like a strong indicator of this.)

So maybe I'll just register that both Alice and Bob seem confused in a bunch of ways, and if the point of the post is "here are two different ways you can be confused" then I guess that makes sense, but if the point of the post is "okay, so why is Alice wrong?" then... well, Alice herself doesn't even seem to really know what her position is, since it's constantly shifting throughout the post, so it's hard to answer that (although Holden's "maximization is perilous" post is a good start).

Relatedly: I don't think it's an accident that the first request Alice makes of Bob (donate that money rather than getting takeout tonight) is far more optimized for signalling ingroup status than for actually doing good.

I think this post raises important points and handles them reasonably well. I am of course celebrating that fact mostly by pointing out disagreements with it.

I wish Alice drew a sharper distinction between Bob being honest about his beliefs, Bob bringing his actions in line with his stated beliefs, and Bob doing what Alice wants. I think pushing people to be honest is prosocial by default (within limits). Pushing people to do what you want is antisocial by default, with occasional exceptions. 

And Alice's methods can be bad, even if the goal is good. If I could push a button and have a community only of people on a long-term growth trajectory, I would. But policing this does more harm than good, because it's hard for the police to monitor. Growth doesn't always look like what other people expect, and people need breaks. Demanding everyone present legible growth on a predictable cycle impedes growth (and pushes people to be dishonest).

My personal take here is that you should be ready to work unsustainably and miserably when the circumstances call for it, but the circumstances very rarely call for it, and those circumstances always include being very time-limited. "I'll just take ...

This is a very good post and nearly all the replies here are illustrating the exact issue that Bob has, which is an inability to engage in the dialectic between these two perspectives without indignation as a defense against guilt.

Most people, including myself, are more Bob than Alice, but I've had a much easier time integrating my inner Alice and engaging with Alices I meet because I rarely, if ever, feel guilt about anything. Strong guilt increases the anticipated costs of positive self-change, and makes people strengthen defense mechanisms that boil down to "I don't owe anyone anything!" to avoid confronting that cost. Ironically this creates people who think they're not predisposed towards guilt, but absolutely are.

Don't get me wrong, Bobs often have pretty understandable reasons to be the way they are. A lot of Bobs got out of religious groups that were really aggressive with the guilting. But understandable reasons to be in an undesirable state does not increase the desirability of that state!

Having met a number of Alices, I think they need to invest more thought in the consequences of the manner in which they try to get other people to improve. I understand their frustration, but the aggressiveness is really counterproductive and just makes Bobs even worse. Bobs unfortunately need to be treated with kid gloves to get them to improve without feeling in danger of self-guilt-torture.

Said Achmiz · 3mo
It seems like you see only two possibilities: either (a) agreeing with Alice, or (b) secretly agreeing with Alice and feeling guilty about it. Do you not see any possibility of disagreeing with Alice? Thinking that she’s just wrong? Do you see no possibility of someone thinking that the change in question is actually negative, not positive? Sincerely believing that one doesn’t owe anyone anything (or, at least, that one doesn’t owe the sorts of things that the Alices of the world claim that we owe), without guilt?
I think if someone wasn't indignant about Alice's ideas, but did just disagree with Alice and think she was wrong, we might see lots of comments that look something like: "Hmm, I think there's actually an 80% probability that I can't be any more ethical than I currently am, even if I did try to self-improve or self-modify. I ran a test where I tried contributing 5% more of my time while simultaneously starting therapy and increasing the amount of social support that I felt okay asking for, and in my journal I noted an increase in my sleep needs, which I thought was probably a symptom of burnout. When I tried contributing 10% more, the problem got a lot worse. So it's possible that there's some unknown intervention that would let me do this (that's about ~15% of my 20% uncertainty), but since the ones I've tried haven't worked, I've decided to limit my excess contributions to no more than 5% above my comfortable level."

I think these are good habits for rationalists: using evidence, building models, remembering that 0 and 1 aren't probabilities, testing our beliefs against the territory, etc. Obviously I can't force you to do any of that. But I'd like to have a better model about this, so if I saw comments that offered me useful evidence that I could update on, then I'd be excited about the possibility of changing my mind and improving my world-model. 
4 · Said Achmiz · 3mo
The disagreement isn’t with Alice’s ideas, it’s with Alice’s claims to have any right to impose her judgment on people who aren’t interested in hearing it. What you describe here is instead an acceptance of Alice’s premises. I’m pointing out that it’s possible to disagree with those premises entirely.

I agree that “using evidence, building models, remembering that 0 and 1 aren’t probabilities, testing our beliefs against the territory, etc.” are good habits. But they’re habits that it’s good to deploy of your own volition. If someone is trying to pressure you into doing these things—especially someone who, like Alice, quite transparently does not have your best interests in mind, and is acting in the service of ulterior motives, and who (again, like Alice) is deceptively clothing these motives in a guise of trying to help you conform to your own stated values—then the first thing you should do is tell them to fuck off (employing as much or as little tact in this as you deem fit), and only then should you consider whether and what techniques of epistemic rationality to apply to the situation.

It is a foolish, limited, and ultimately doomed sort of rationality, that ignores interpersonal conflicts when figuring out what the world is like, and what to do about it.
The point of the post is to be about ideas. Alice is only there as a framework for presenting the post's ideas. If Alice is expressing the ideas rudely, that's just a deficiency in how the post presents them. Saying "I'd ignore Alice because she's rude" is missing the point; it's as if the post had Alice be an angel and you replied "I'd ignore Alice because angels don't exist". The proper reaction is "the post is flawed in that it attributes the ideas to a rude character, but in order to engage with the thesis of the post I should ignore this flaw and address the ideas anyway".
1 · Said Achmiz · 2mo
In this case, the ideas seem to be linked quite closely with the behavior of the ‘Alice’ character, so attempting to reply to a hypothetical alternate version of the post where the ideas are (somehow) the same but Alice is very polite… seems strange and unproductive. (For one thing, if Alice were polite, the whole conversation wouldn’t happen.)
I think you lack imagination if you think that Alice can't express those ideas without being rude. For instance, "Alice" and "Bob" could be a metaphor for conflicting impulses and motives inside your own head. Trying to decide between Alice-type ideas and Bob-type ideas doesn't mean that you're being rude to yourself.
3 · Said Achmiz · 2mo
Yep, could be. Show me a rewritten version of this dialogue which supports your suggestion, and we’ll talk. I think it would be different in instructive ways (not just incidental ones).

Well, perhaps “being rude to yourself” is an odd way of putting it, but something like this is precisely why I wouldn’t think these things to myself. I have no particular interest in conjuring a mental Insanity Wolf!
"Should I do (list of things said by Alice in the post)? Or should I do (list of things said by Bob in the original post)?"
I believe that people who agreed with Alice and had worked to increase their capacity would be more indignant, and that's reason enough to never use this approach even if the goal is good. People hate having their work dismissed.
Huh, interesting! I definitely count myself as agreeing with Alice in some regards - like, I think I should work harder than I currently do, and I think it's bad that I don't, and I've definitely done some amount to increase my capacity, and I'm really interested in finding more ways to increase my capacity. But I don't feel super indignant about being told that I should donate more or work harder - though I might feel pretty indignant if Alice is being mean about it! I'd describe my emotions as being closer to anxiety, and a very urgent sense of curiosity, and a desire for help and support. (Planned posts later in the sequence cover things like what I want Alice to do differently, so I won't write up the whole thing in a comment.)
I can picture ways people could bring up capacity-improvement-for-the-greater-good that I'd be really excited about. It's something I care about and most people aren't interested in. It's the way Alice (in this story, and by default in the real world) brings it up that I think is counterproductive.

Hot take: Bob should be bullying Alice to do less so she doesn't burn out. 

hmm I think Alice wants to wrestle in that puddle of mud? Like, these two sections are basically how Alice would respond to Bob saying "hey Alice, you're going to burn out":
It's hard because Alice is a fictional character in stylized dialogue the author says is intended to be a bad implementation. But in the real world if someone talked like Alice did (about herself and towards Bob) I'd place good money on burnout.  Probably Bob isn't actually the right person to raise this issue with Alice, because she doesn't respect him enough. But I don't think it's worse than what she's doing to him. 
(I wrote way too much in this comment while waiting for my lentils to finish simmering; I apologise!)

I don't think it's necessarily intended to be bad or excessively stylized, but it's intended to be rude for sure. I didn't want to write a preachy thing! Three kinda main reasons that I made Alice suck, deliberately:

Firstly, later in my sequence I want to talk about ways that Alice could achieve her goals better.

Secondly, I kind of want to be able to sit with the awkward dissonant feeling of, "huh, Alice is rude and mean and making me feel bad and maybe she shouldn't say those things, and ALSO, Alice being an infinitely flawed person would still not actually be a good justification for me to save fewer lives than I think I can save if I try (or otherwise fail according to my own values and my own ethics), and hm, holding those two ideas in juxtaposition feels uncomfy for me, let's poke that!" I feel like a lot of truthseeking mindsets involve getting comfy with that sorta "huh, this juxtaposition is super uncomfy and I'm going to sit with it anyway" kinda mental state.

Thirdly, I have a voice in my head that gets WAY meaner than Alice! I totally sometimes have thoughts like, "Wow, I'm such a worthless hypocrite for preaching EA things online even though I don't have as much impact as I could if I tried harder, I'm totally just lying to myself about thinking I'm burned out because I'm lazy, I should go flog myself in penance!*"

*mild hyperbole for humour

I can respond by thinking something like, "Go away, stupid voice in my head, you're rude and mean and I don't want to listen to you." I could also respond by deliberately seeking out lots of reassuring blog posts that say "burnout is super bad and you're morally obligated to be happy!" and try to pretend that I'm definitely not engaging in any confirmation bias, no, definitely not, I definitely feel reassured by all of these definitely-true posts about the thing I really wanted to believe anyway. But maybe
I agree this set of questions is really important, and shouldn't be avoided just because it's uncomfortable. And I really appreciate your investment in truthseeking even when it's hard. But Alice doesn't seem particularly truthseeking to me here, and the voice in your head sounds worse. Alice sounds like she has made up her mind and is attempting to browbeat people into agreeing with her. Nor does Alice seem curious about why her approach causes such indignance, which makes me further doubt this is about pursuit of knowledge for her.

One reason people react badly to these tactics: rejecting assholes out of hand when they try to extract value from you is an important defense mechanism. If you force people to remove that, you make them vulnerable to all kinds of malware (and you can't say "only remove it for good things", because the decision needs to be made before you know if the idea is good or not. That's the point).

If Alice is going to push this hard about responsibility to the world, she needs to put more thought into her techniques. Maybe this will be covered in a later post, but I have to respond to what's in front of me now.
Yep, fair! Do you think the point would come across better if Alice was nice? (I wasn't sure I could make Alice nice without an extra few thousand words, but maybe someone more skilful could.) I think a lot of us have voices in our heads that are meaner than Alice, so if you think Alice is going to cause burnout, I think we need a response that is better than Bob's (and better than "I'm just going to reject all assholes out of hand", because I can't use that on myself!).
I think being nicer would make truthseeking easier but isn't truthseeking in and of itself. I also think it's a mistake to assume your inner Alice would shut up if only you came up with a good enough argument. The loudest alarm is probably false. Truthseeking might be useful in convincing other parts of your brain to stop giving Alice so much weight, but I would include "is Alice updating in response to facts?" as part of that investigation. 

Alice: Our utility functions differ.

Bob: I also observe this.

Alice: I want you to change to match me: conditional on your utility function being the same as mine, my expected utility would be larger.

Bob: Yes, that follows from me being a utility maximizer.

Bob: I won't change my utility function: conditional on my utility function becoming the same as yours, my expected utility as measured by my current utility function would be lower.

Alice: Yes, that follows from you being a utility maximizer.

If Bob isn't reflectively consistent, their utility functions could currently be the same in some sense, right? (They might agree on what Bob's utility function should be - Bob would happily press a button that makes him want to donate 30%, he just doesn't currently want to do that and doesn't think he has access to such a button.)
Certainly! Most likely, neither of them is reflectively consistent: "I feel like I’d find it easier to be motivated and consistent if my brain wasn’t constantly looking at you and reminding me that I totally could have a cushy life like yours if I just stopped living my values." hints at this.
yes, definitely!
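The utility-function dialogue above can be sketched as a toy model (all outcomes, numbers, and utility functions below are hypothetical illustrations, not anything from the post): a coherent expected-utility maximizer evaluates a proposed change to its own utility function *using its current utility function*, so Bob can agree that Alice would prefer him changed while still rejecting the change himself.

```python
# Toy sketch of the Alice/Bob exchange. Each agent scores outcomes with a
# utility function; a proposed self-modification is itself evaluated under
# the agent's current utility function. Numbers are made up for illustration.

from typing import Callable

Outcome = str
UtilityFn = Callable[[Outcome], float]

# Hypothetical outcomes: how Bob ends up spending his resources.
OUTCOMES = ["donate_10_percent", "donate_50_percent"]

def alice_utility(o: Outcome) -> float:
    # Alice places much higher value on heavy giving.
    return {"donate_10_percent": 1.0, "donate_50_percent": 10.0}[o]

def bob_utility(o: Outcome) -> float:
    # Bob values his leisure alongside a moderate amount of giving.
    return {"donate_10_percent": 8.0, "donate_50_percent": 3.0}[o]

def best_outcome(u: UtilityFn) -> Outcome:
    # A utility maximizer picks the outcome its utility function ranks highest.
    return max(OUTCOMES, key=u)

# What Bob would do with each utility function installed:
keep = best_outcome(bob_utility)      # acting on his current values
switch = best_outcome(alice_utility)  # acting on Alice's values instead

# Bob evaluates the proposed switch with his *current* utility function:
value_of_keeping = bob_utility(keep)
value_of_switching = bob_utility(switch)

print(keep, switch, value_of_keeping, value_of_switching)
```

Under these made-up numbers, `value_of_keeping` exceeds `value_of_switching`, which is exactly Bob's second line in the dialogue; symmetrically, Alice's expected utility would rise if Bob switched, which is her first line. The reflective-consistency caveat in the follow-up comments corresponds to Bob's "true" utility function differing from the one his behavior currently maximizes.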

Alice: I think the negative impact of my rudeness is probably smaller than the potential positive impact of getting you to act in line with the values you claim to have.

It seems to me that Bob has a moral obligation to respond in such a way as to ensure that Alice’s claim here is false, i.e. the correct response here is “lol fuck you” (and escalating from there if Alice persists). Alice’s behavior here ought not be incentivized; on the contrary, it should be severely punished. Bob is exhibiting a failure of moral rectitude, or else a failure of will, by not applying said punishment.

6 · Nathaniel Monson · 3mo
Lots of your comments on various posts seem rude to me--should I be attempting to severely punish you?
4 · Said Achmiz · 3mo
The behavior I was referring to, specifically, is not rudeness (or else I’d have quoted Alice’s first comment, not her second one), but rather Alice taking as given the assumption that she has some sort of claim on Bob’s reasons for his actions—that Bob has some obligation to explain himself, to justify his actions and his reasons, to Alice. It is that assumption which must be firmly and implacably rejected at once. Bob should make clear to Alice that he owes her no explanations and no justifications. By indulging Alice, Bob is giving her power over himself that he has no reason at all to surrender. Such concessions are invariably exploited by those who wish to make use of others as tools to advance their own agenda. Bob’s first response was correct. But—out of weakness, lack of conviction, or some other flaw—he didn’t follow up. Instead, he succumbed to the pressure to acknowledge Alice’s claim to be owed a justification for his actions, and thus gave Alice entirely undeserved power. That was a mistake—and what’s more, it’s a mistake that, by incentivizing Alice’s behavior, has anti-social consequences, which degrade the moral fabric of Bob’s community and society.
4 · Nathaniel Monson · 3mo
To me, it sounds like A is a member of a community which A wants to have certain standards and B is claiming membership in that community while not meeting those. In that circumstance, I think a discussion between various members of the community about obligations to be part of that community and the community's goals and beliefs and how these things relate is very very good. Do you A) disagree with that framing of the situation in the dialogue B) disagree that in the situation I described a discussion is virtuous, verging on necessary C) other?
5 · Said Achmiz · 3mo
Indeed, I disagree with that characterization of the situation in the dialogue. For one thing, there’s no indication that Bob is claiming to be a member of anything. He’s “interested in Effective Altruism”, and he “want[s] to help others and … genuinely care[s] about positive impact, and ethical obligations, and utilitarian considerations”, and he also (according to Alice!) “claim[s] to really care about improving the world”, and (also according to Alice!) “claim[s] to be a utilitarian”. But membership in some community? I see no such claim on Bob’s part.

But also, and perhaps more importantly: suppose for a moment that “Effective Altruism” is, indeed, properly understood as a “community”, membership in which it is reasonable to gatekeep in the sort of way you describe.[1] It might, then, make sense for Alice to have a discussion with Carol, Dave, etc.—all of whom are members-in-good-standing of the Effective Altruism community, and who share Alice’s values, as well as her unyielding commitment thereto—concerning the question of whether Bob is to be acknowledged as “one of us”, whether he’s to be extended whatever courtesies and privileges are reserved for good Effective Altruists, and so on.

However, the norm that Bob, himself, is answerable to Alice—that he owes Alice a justification for his actions, that Alice has the right to interrogate Bob concerning whether he’s living up to his stated values, etc.—that is a deeply corrosive norm. It ought not be tolerated.

Note that this is different from, say, engaging a willing Bob in a discussion about what his behavior should be (or about any other topic whatsoever)! This is a key aspect of the situation: Bob has expressed that he considers his behavior none of Alice’s business, but Alice asserts the standing to interrogate Bob anyway, on the reasoning that perhaps she might convince him after all. It’s that which makes Bob’s failure to stand up for his total lack of obligation to answer to Alice for his actions dep
Word of God, as the creator of both Alice and Bob: Bob really does claim to be an EA, want to belong to EA communities, say he's a utilitarian, claim to be a rationalist, call himself a member of the rationalist community, etc. Alice isn't lying or wrong about any of that. (You can get all "death of the author" and analyse the text as though Bob isn't a rationalist/EA if you really want, but I think that would make for a less productive discussion with other commenters.)

Speaking for myself personally, I'd definitely prefer that people came and said "hey we need you to improve or we'll kick you out" to my face, rather than going behind my back and starting a whisper campaign to kick me out of a group. So if I were Bob, I definitely wouldn't want Alice to just go talk to Carol and Dave without talking to me first!

But more importantly, I think there's a part of the dialogue you're not engaging with. Alice claims to need or want certain things; she wants to surround herself with similarly-ethical people who normalise and affirm her lifestyle so that it's easier for her to keep up, she wants people to call her out if she's engaging in biased or motivated reasoning about how many resources she can devote to altruism or how hard she can work, she wants Bob to be honest with her, etc. In your view, is it ever acceptable for her to criticise Bob? Is there any way for her to get what she wants which is, in your eyes, morally acceptable? If it's never morally acceptable to tell people they're wrong about beliefs like "I can't work harder than this", how do you make sure those beliefs track truth?

Those questions aren't rhetorical; the dialogue isn't supposed to have a clear hero/villain dynamic. If you have a really awesome technique for calibrating beliefs about how much you can contribute which doesn't require any input from anyone else, then that sounds super useful and I'd like to hear about it!
6 · Said Achmiz · 3mo
Fair enough, but this is new information, not included in the post. So, all responses prior to you posting this explanatory comment can’t have taken it into account. (Perhaps you might make an addendum to the post, with this clarification? It significantly changes the context of the conversation!)

However, there is then the problem that if we assume what you’ve just added to be true, then the depicted conversation is rather odd. Why isn’t Alice focusing on these claims of Bob’s? After all, they’re the real problem! Alice should be saying: “You are making these-and-such claims, in public, but they’re lies, Bob! Lies! Or, at the very least, deceptions! You’re trying to belong to this community [of EAs / rationalists], but you’re not doing any of the things that we, the existing members, take to be determinative of membership! You claim to be a utilitarian, but you’re clearly not! Words have meanings, Bob! Quit trying to grab status that you’re not entitled to!” And so on. But Alice only touches these issues in the most casual way, in passing, and skates right past them. She should be hammering Bob on this point! Her behavior seems weird, in this context.

Now, Bob might very well respond with something like: “Just who appointed you the gatekeeper of these identities, eh, Alice? Please display for me your ‘Official Enforcer of Who Gets To Call Themselves a Rationalist / EA / Utilitarian’ badge!” And at that point, Alice would do well to dismiss talking to Bob as a lost cause, and convene at once the meeting of true EAs / rationalists / etc., to discuss the question of public shunning.

That’s as may be, but Bob makes clear right at the start of the conversation (and then again several times afterwards) that he’s not really interested in being lectured like this. He just lacks the spine to enforce his boundaries. And Alice takes advantage. But the “whisper campaign” concern is misplaced. Of course, as I say above, Alice doesn’t exactly make it clear that this whol
I agree with your sense that they should be directly arguing about "what are the standards implied by 'calling yourself a rationalist' or 'saying you're interested in EA'?". I think that they are closer to having that argument than not having it, tho. I think the difficulty is that the conversation they're having is happening at multiple levels, dealing with both premises and implications, and it's generally jumbled together instead of laid out cleanly (in a way that makes the conversation more natural, if Alice and Bob have context on each other, but reads more strangely without that context).

Looking at the first statement by Alice: In my eyes, this is pretty close to your proposed starter for Alice: The main difference is that Alice's version seems like it's trying to balance "enforcing the boundary" and "helping Bob end up on the inside". She's not (initially) asking Bob to become a copy of her; she's proposing a specific concrete action tied to one of Bob's stated values, suggesting a way that he could make his self-assessments more honest.

Now, the next step in the conversation (after Bob rejected Alice's bid to both suggest courses of action and evaluate how well he conforms to community standards) could have been for Alice to say "well, I'd rather you not lie about being one of us." (And, indeed, it looks to me like Alice says as much in her 4th comment.)

The remaining discussion is mostly about whether or not Alice's interpretation of the community standards is right. Given that many of the standards are downstream of empirical facts (like which working styles are most productive instead of demonstrating the most loyalty or w/e), it makes sense that Alice couldn't just say "you're not working hard enough" and instead needs to justify her belief that the standard is where she thinks it is. (And, indeed, if Bob in fact cannot work harder then Alice doesn't want to push him past his limits--she just doesn't yet believe that his limits are where he claims
3 · Said Achmiz · 3mo
Hm, I don’t think those are very close. After all, suppose we imagine me in Bob’s place, having this conversation with the same fictional Alice. I could respond thus: “Yes, I really care about improving the world. But why should that imply donating more, or using my time differently? I am acting in a way that my principles dictate. You claim that ‘really caring about the world’ implies that I should act as you want me to act, but I just don’t agree with you about that.”

Now, one imagines that Alice wouldn’t start such a conversation with me in the first place, as I am not, nor claim to be, an “Effective Altruist”, or any such thing.[1] But here again we come to the same result: that the point of contention between Bob and Alice is Bob’s self-assignment to certain distinctly identified groups or communities, not his claim to hold some general or particular values.

Well, there is also all the stuff (specifically called out as important by the OP, in the grandparent comment) about Alice’s needs and wants and so forth. Sure, maybe, but that mostly just points to the importance of being clear on what a discussion is about. Note that Alice flitting from topic to topic, neither striving for clarity nor allowing herself to be pressed on any point, is also quite realistic, and is characteristic of untrustworthy debaters.

If this is true, then so much the worse for EA! When I condemn Alice’s behavior, that condemnation does not contain an “EA exemption”, like “this behavior is bad, but if you slap the ‘EA’ label on it, then it’s not bad after all”. On the contrary, if the label is accurate, then my condemnation extends to EA itself.

[1] Although I could certainly claim to be an effective altruist (note the lowercase), and such a claim would be true, as far as it goes. I don’t actually do this because it’s needlessly confusing, and nothing really hinges on such a claim.
Right, and then you and Alice could get into the details. I think this is roughly what Alice is trying to do with Bob ("here's what I believe and why I believe it") and Bob is trying to make the conversation not happen because it is about Bob.

And so there's an interesting underlying disagreement, there! Bob believes in a peace treaty where people don't point out each other's flaws, and Alice believes in a high-performing-team culture where people point out each other's flaws so that they can be fixed. To the extent that the resolution is just "yeah, I prefer the peace treaty to the mutual flaw inspection", the conversation doesn't have to be very long. But, like, my impression is that a lot of rationalist culture is about this sort of mutual flaw inspection, and there are fights between people who prefer that style and people who prefer a more 'peace treaty' style. I think that's the same sort of conversation that's happening here.

Sure--in my read, Alice's needs and wants and so forth are, in part, the generators of the 'community standards'. (If Alice was better off with lots of low-performers around to feel superior to, instead of with lots of high-performers around to feel comparable to, then one imagines Alice would instead prefer 'big-tent EA' membership definitions.) I think this part of EA makes it 'sharp', which is pretty ambivalent.

If I'm reading you correctly, the main thing that's going on here to condemn about Alice is that she's doing some mixture of:

1. Setting herself as the judge of Bob without his consent or some external source of legitimacy
2. Being insufficiently clear about her complaints and judgments

I broadly agree with 2 (because basically anything can always be clearer) tho I think this is, like, a realistic level of clarity. I think 1 is unclear because it's one of the points of disagreement--does Bob saying that he's "interested in EA" or "really cares about improving the world" give Alice license to provide him with unsolicite
3 · Said Achmiz · 3mo
But that’s just the thing—I wouldn’t be interested in getting into the details. My hypothetical response was meant to ward Alice off, not to engage with her. The subtext (which could be made into text, if need be—i.e., if Alice persists) is “I’m not an EA and won’t become an EA, so please take your sales pitch elsewhere”. The expected result is that Alice loses interest and goes off to find a likely-looking Bob.

The conversation as written doesn’t seem to me to support this reading. Alice steadfastly resists Bob’s attempts to turn the topic around to what she believes, her actions, etc., and instead relentlessly focuses on Bob’s beliefs, his alleged hypocrisy, etc.

Well, for one thing, I’ll note that I’m not much of a fan of this “mutual flaw inspection”, either. The proper alternative, in my view, isn’t any sort of “peace treaty”, but rather a “person-interface” approach. More importantly, though, any sort of “mutual flaw inspection” has got to be opted into. Otherwise you’re just accosting random people to berate them about their flaws. That’s not praiseworthy behavior.

Sorry, I don’t think I get the meaning here. Could you rephrase?

Yes, basically this. Let me emphasize again what the problem is: Criticism, per se, is not the central issue (although unsolicited criticism is almost always rude, if nothing else).
I think EA is a mixture of 'giving people new options' (we found a cool new intervention!) and 'removing previously held options'; it involves cutting to the heart of things, and also cutting things out of your life. The core beliefs do not involve much in the way of softness or malleability to individual autonomy. (I think people have since developed a bunch of padding so that they can live with it more easily.) Like, EA is about deprioritizing 'ineffective' approaches in favor of 'effective' approaches. This is both rough (for the ineffective approaches and people excited about them) and also the mechanism of action by which EA does any good (in the same way that capitalism does well in part by having companies go out of business when they're less good at deploying capital than others).
-1 · Said Achmiz · 3mo
Hmm, I see. Well, I agree with your first paragraph but not with your second. That is, I do not think that selection of approaches is the core, to say nothing of the entirety, of what EA is. This is a major part of my problem with EA as a movement and an ideology. However, that is perhaps a digression we can avoid.

More relevant is that none of this seems to me to require, or even to motivate, “being this sort of rude”. It’s all very well to “remove previously held options” and otherwise be “rough” to the beliefs and values of people who come to EA looking for guidance and answers, but to impose these things on people who manifestly aren’t interested is… not justifiable behavior, it seems to me. (And, again, this is quite distinct from the question of accepting or rejecting someone from some group or what have you, or letting their false claims to have some praiseworthy quality stand unchallenged, etc.)
Just noting here that I broadly agree with Said's position throughout this comment thread.
If Bob asked this question, it would show he's misunderstanding the point of Alice's critique - unless I'm missing something, she claims he should, morally speaking, act differently. Responding "What do I get out of any of this?" to that kind of critique is either a misunderstanding, or a rejection of morality ("I don't care if I should be, morally speaking, doing something else, because I prefer to maximize my own utility."). Edit: Or also, possibly, a rejection of Alice ("You are so annoying that I'll pretend this conversation is about something else to make you go away.").
4 · Said Achmiz · 3mo
Please reread my comment more carefully. That part (Bob’s “what do I get out of any of this” response) was specifically about Alice’s commentary on her personal wants/needs, i.e. the specifically non-moral aspect of Alice’s array of criticisms.
How does that apply if Alice and Bob are a metaphor for trying to decide between Alice-type and Bob-type things inside your head? Surely you have a claim on your own reasons for your actions.
I hear Alice saying, “Very well. We shall resume in an hour.”

I am mostly like Bob (although I don't make up stuff about burnout), but I think calling myself a utilitarian is totally reasonable. By my understanding, utilitarianism is an answer to the question "what is moral behavior?" It doesn't imply that I always want to do the most moral thing.

I think the existence of Bob is obviously good. Bob is in, like, the 90th percentile of human moral behavior, and if other people improved their behavior, Bob is also the kind of person who would reciprocally improve his own. If Alice wants to go around personal...

You doubt that it would work very well if Alice nags everyone to be more altruistic. I'm curious how confident you are that this doesn't work and whether you'd propose any better techniques that might work better? For myself, I notice that being nagged to be more altruistic is unpleasant and uncomfortable. So I might be biased to conclude that it doesn't work, because I'm motivated to believe it doesn't work so that I can conveniently conclude that nobody should nag me; so I want to be very careful and explicit in how I reason and consider evidence here. (If it does work, that doesn't mean it's good; you could think it works but the harms outweigh the benefits. But you'd have to be willing to say "this works but I'm still not okay with it" rather than "conveniently, the unpleasant thing is ineffective anyway, so we don't have to do it!") (PS. yes, I too am very glad that people like Bob exist, and I think it's good they exist!)

I am genuinely confused why this is on LessWrong instead of the EA Forum. What do you think the distribution of giving money is like in each place, and what do you think the distribution of responses to the drowning child is like in each?

Hmm, I think I could be persuaded into putting it on the EA Forum, but I'm mildly against it:

* It is literally about rationality, in the sense that it's about the cognitive biases and false justifications and motivated reasoning that cause people to conclude that they don't want to be any more ethical than they currently are; you can apply the point to other ethical systems if you want. Like, Bob could just as easily be a religious person justifying why he can't be bothered to do any pilgrimages this year while Alice is a hotshot missionary or something. I would hope that lots of people on LW want to work harder on saving the world, even if they don't agree with the Drowning Child thing; there are many reasons to work harder on x-risk reduction.
* It's the sort of spicy that makes me worried that EAs will consider it bad PR, whereas rationalists are fine with spicy takes because we already have those in spades. I think people can effectively link to it no matter where it is, so posting it in more places isn't necessarily beneficial.
* I don't agree with everything Alice says, but I do think it's very plausible that EA should be a big tent that welcomes everyone - including people who just want to give 10% and not do anything else - whereas my personal view is that the rationality community should probably be more elitist; we're supposed to be a self-improve-so-hard-that-you-end-up-saving-the-world group, damnit, not a book club for insight porn.
* Also it's going to be part of a sequence (conditional on me successfully finishing the other posts), and I feel like the sequence overall belongs more on LW.

I genuinely don't really know how the response to the Drowning Child differs between LW and EA! I guess I would probably say more people on the EA Forum probably donate money to charity for Drowning-Child-related reasons, but more people on LW are probably interested in philosophy qua philosophy and probably more people on LW switched careers to directly work
Seth Herd (3mo):
This does seem more like EA than LW.

Alice and Bob sound to me very much like the two options in my variant 8 of the red-pill-blue-pill conundrum. We can imagine Alice working as she describes for the whole of a long life, because we can imagine anything. A real Alice, I'd be interested to see in 10 years. I think there are few, very few, who can live like that. If Bob could, he'd be doing it already.

Alice is, indeed, a fictional character - but clearly some people exist who are extremely ethical. There are people who go around donating 50%, giving kidneys to strangers, volunteering to get diseases in human challenge trials, working on important things rather than their dream career, thinking about altruism in the shower, etc. Where do you think is the optimal realistic point on the spectrum between Alice and Bob? Do you think it's definitely true that Bob would be doing it already if he could? Or do you think there exist some people who could but don't want to, or who have mistaken beliefs where they think they couldn't but could if they tried, or who currently can't but could if they got stronger social support from the community?
That will vary with the person. All these things are imaginable, but that is no limitation. Bob is presented as someone who talks the EA talk, but has no heart for walking the walk. If he lets Alice badger him into greatly upping his efforts I would not expect it to go well.
What specifically would you expect to not go well? What bad things will happen if Bob greatly ups his efforts? Why will they happen? Are there things we could do to mitigate those bad things? How could we lower the probability of the bad things happening? If you don't think any risk reduction or mitigation is possible at all, how certain are you about that? Can we test this? Do you think it's worthwhile to have really precise, careful, detailed models of this aspect of the world?
I would expect Bob, as you have described him, to never reach Alice's level of commitment and performance, and after some period of time, with or without some trauma along the way, to drop out of EA. But these are imaginary creatures, and we can make up any story we like about them. There is no question of making predictions. If Alice — or you — want to convert people like Bob, you and she will have to observe the results obtained and steer accordingly. Four intensifiers in a row!!!! Is it worthwhile to have, simply, models? Expectations about how things will go? Yes, as long as you track how well they're fitting.

If, in World A, the majority were Alices - not doing the job they loved (imagine a teacher who thinks education is important but emotionally dislikes students), unreciprocally giving away some arbitrary % of their earnings, etc. -

is that actually better than World B? A world where the majority are Bobs, successful at their chosen craft, giving away some amount of their earnings but keeping the majority, which they are comfortable with.

I'm surprised Bob didn't make the obvious rebuttals:

  1. Alice, why aren't you giving away 51% of your earnings? What metho

... (read more)

I have to wonder if you are posting this here in order to play Alice to our Bobs, distanced by writing it as a parable.

Absolutely not. I definitely have a mini Alice voice inside my head. I also have a mini Bob voice inside my head. They fight, like, all the time. I'd love help in resolving their fights!

No human being is a full utilitarian. Expecting them or yourself to be will bring disappointment or guilt.

But helping others can bring great joy and satisfaction.

The answer is obviously yes to Alice's question.

We should work harder in the most convenient world. The premise basically states that Bob would be happier AND do more good. He's an idiot for saying no, except to get bossy, controlling Alice off his back and to stop her gaslighting him into doing what she wants.

But is this that world? Probably not the least convenient/easiest. Where is it on the spectrum? What will lead to Bob's happiest life? That is the right question for Bob to ask, and it's not trivial to answer.

I think this line of argument works okay until this point.

Alice: ... In the least convenient possible world, where the advice to rest more is equally harmful to the advice to work harder, and most people should totally view themselves as less fundamentally unchangeable, and the movement would have better PR if we were sterner…

Okay. Let's call the initial world Earth-1, with Alice-1 talking to Bob-1. Let's call the least convenient possible world Earth-2. Earth-2 contains Alice-2 and Bob-2. They aren't having this exact conversation, because that's not ... (read more)

Hmm, this isn't really what I'm trying to get across when I use the phrase "least convenient possible world". I'm not talking about being isekai'd into an actually different world; I'm just talking about operating under uncertainty, and isolating cruxes. Alice is suggesting that Bob - this universe's Bob - really might be harmed more by rest-more advice than by work-harder advice, really might find it easier to change himself than he predicts, etc. He doesn't know for certain what's true (i.e. "which universe he's in") until he tries.

Let's use an easier example: Jane doesn't really want to go to Joe's karaoke party on Sunday. Joe asks why. Jane says she doesn't want to go because she's got a lot of household chores to get done, and she doesn't like karaoke anyway. Joe really wants her to go, so he could ask: "If I get all your chores done for you, and I change the plan from karaoke to bowling, then will you come?" You could phrase that as, "In the most convenient possible world, would you come then?" but Joe isn't positing that there's an alternate-universe bowling party that alternate-universe Jane might attend (but this universe's Jane doesn't want to attend because in this universe it's a karaoke party). He's just checking to see whether Jane's given her REAL objections. She might say, "Okay, yeah, so long as it's not karaoke then I'll happily attend." Or she might say, "No, I still don't really want to go." In the latter case, Joe has discovered that the REAL reason Jane doesn't want to go is something else - maybe she just doesn't like him and she said the thing about chores just to be polite, or maybe she doesn't want to admit that she's staying home to watch the latest episode of her favourite guilty-pleasure sitcom, or something.

If "but what about the PR?" is Bob's real genuine crux, he'll say, "Yeah, if the PR issues were reversed then I'd commit harder for sure!" If, on the other hand, it's just an excuse, then nothing Alice says will convince Bob to work harder.
Martin Randall (3mo):
I confess that was not my reading of the text. I've been reading quite a few thought experiments recently, so I'm primed to interpret "possible worlds" that way. In my defense, the text links to Yudkowsky's Self-Integrity and the Drowning Child, which uses the "Least Convenient Possible World" to indicate a counterfactual / thought experiment / hypothetical world. Regardless, I missed Alice's point. Since Alice was trying to ask about cruxes and uncertainty, here's an altered dialogue that I think is clearer:

---

Alice: Okay, so your objections are (1) hard work might harm you, (2) you can't change, (3) social norms, and (4) public relations. Is that it, or do you have other reasons?

Bob: Yes. I just kind of don’t really want to work harder.

Alice: I think we’ve arrived at the core of the problem.

Bob: We're going full-contact psychoanalysis, then. Are you sure you want to go there? Maybe you are a workaholic because you have an unresolved need to impress your parents, or because it gives you moral license to be rude and arrogant, or because you never fully grew out of your childhood faith.

Alice: Unlike you, Bob, I see a therapist.

Bob: You mentioned. So we have two hypotheses. Maybe I don't want to work harder and therefore have rationalized reasons not to. Maybe I have reasons not to work harder and therefore I don't want to. I suppose I could see a therapist and get evidence to distinguish these cases. Then what? If I learn that, really, I just don't want to work harder, then you haven’t persuaded me to do anything differently; you’ve kind of just made me feel bad.

Alice: Maybe I’d like you to stop claiming to be a utilitarian, when you’re totally not - you’re just an egoist who happens to have certain tuistic preferences. I might respect you more if you had the integrity to be honest about it. Maybe I think you’re wrong, and there’s some way to persuade you to be better, and I just haven’t found it yet. [...]
This seems like you understood my intent; I'm glad we communicated! Though I think Bob seeing a therapist is totally an action that Alice would support, if he thinks that's the best test of her ideas - and importantly if he thinks the test genuinely could go either way.

“Maybe the Effective Altruist movement should accept people like you because they’re a big tent and they’re friendly and welcoming, but the rationalist community should be elitist and only accept people who say tsuyoku naritai - there’s a reason this is on LessWrong and not the EA forum”

As the EA community has become less intense, sometimes I’ve wondered whether there would be value in someone starting an LW- or EA-adjacent group that’s on the more intense end of the spectrum.

I definitely see risks associated with this (people pushing themselves too hard, fanaticism) and I probably wouldn’t want to be part of it myself, but I imagine that it could be a good fit for some people.

Motto: "Maximising utility isn't everything, it's the only thing!"
Sounds like evaporative cooling in reverse (although actually more in keeping with the literal meaning): the fieriest radicals boiling off to leave the more tepid behind.

I think Bob's answer should probably be:

Look, I care somewhat about improving the world as a whole. But I also care about myself.

And I would recommend you don't go out of your way to antagonize and reject allies with a utility function similar enough to yours that mutual cooperation is easy. 

The number of people who are a genuine Alice is rather low. 


Also, bear in mind that the human brain has a built-in "don't follow that logic off a cliff" circuit. This is the circuit that ensures crazy suicide cults are so rare despite the ... (read more)

Hello Firinn, 

I can relate to this post, even though I was never part of the EA movement. When I was younger, I did join a climate organization, and also had an account on And I would say there was a lot of guilt and confusion around my actions at that point, whilst simultaneously trying to do a lot of 'better than' actions.

Your post is very extensive, and as such I find myself engaged by just reading one of the external links and the post itself. Therefore, my comment isn't really a comment on the whole post, but sees the post through o... (read more)

Jay Bailey (3mo):
I don't really understand how your central point applies here. The idea of "money saves lives" is not supposed to be a general rule of society, but rather a local point about Alice and Bob - namely, that donating ~5k will save a life. That doesn't need to be always true under all circumstances; there just needs to be some repeatable action that Alice and Bob can take (e.g., donating to the AMF) that costs them 5k and reliably results in a life being saved. (Your point about prolonging life is true, but since the people dying of malaria are generally under 5, the number of QALYs produced is pretty close to an entire human lifetime.) It doesn't really matter, for the rest of the argument, how this causal relationship works. It could be that donating 5k causes more bednets to be distributed, it could be that donating 5k allows for effective lobbying to improve economic growth to the value of one life, or it could be that the money is burnt in a sacrificial pyre to the God of Charitable Sacrifices, who then descends from the heavens and miraculously cures a child dying of malaria. From the point of view of Alice and Bob, the mechanism isn't important if you're talking on the level of individual donations. In other words, Alice and Bob are talking on the margins here, and on the margin, 5k spent equals one life saved, at least for now.
Hello Jay Bailey,

Thanks for your reply. Yes, I seem to have overcomplicated the point made in this post by adding the system-lens to this situation. It isn't irrelevant; it is simply beside the point for Alice and Bob. The goal I am focusing on is a 'system overhaul', not a concrete example like this.

I was also reminded of how detrimental Alice's confrontational tone and haughtiness, and Bob's lack of clarity and self-understanding, are for learning, change and understanding. It creates a loop where the interaction itself doesn't seem to bring either of them any closer to being in tune with their values and beliefs. It seems to further widen the gulf between their respective positions, instead of capitalizing on their differences to improve the facets of their values-to-actions efficiency ratio that their opposite seems capable of helping them with. But I didn't focus much on this point in my comment.

Kindly, Caerulea-Lawrence
I'm sorry I don't have time to respond to all of this, but I think you might enjoy Money: The Unit Of Caring: (Sorry, not sure how to make neat-looking links on mobile.)
Hello Firinn,

Thanks for the linked post; it was right on the money. I see that I look at the market economy as a problem by itself, but I haven't really thought about money from a less idealistic point of view. It is really hard to come to terms with the argument he makes when the system money operates under is so flawed. But maybe it is more of a general point. In the instance between Alice and Bob, they might not see or have the ability to change the system itself, and under those circumstances I have missed the point.

Again, thanks for the post.

Kindly, Caerulea-Lawrence

I feel like the crux of this discussion is how much we should adjust our behavior to be "less utilitarian", to preserve our utilitarian values.

The expected utility that a person creates could be measured as (utility created by the behavior) × (odds that they will actually follow through on that behavior), where the odds of follow-through decrease as the behavior modifications become more drastic, but the utility created if followed through increases.

People are already implicitly taking this into account when evaluating what the optimal amount of radicality in act... (read more)
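The tradeoff this comment describes can be sketched numerically. A minimal illustration, with all numbers entirely made up (the real utility figures and follow-through probabilities are of course unknown):

```python
def expected_utility(utility_if_followed_through: float,
                     p_follow_through: float) -> float:
    """Expected utility = (utility created if the behavior sticks)
    x (probability the person actually follows through)."""
    return utility_if_followed_through * p_follow_through


# Hypothetical points on the Alice-Bob spectrum: a modest commitment
# that almost certainly sticks vs. a drastic one that often doesn't.
modest = expected_utility(utility_if_followed_through=10, p_follow_through=0.9)
drastic = expected_utility(utility_if_followed_through=50, p_follow_through=0.1)

# Under these made-up numbers the modest commitment wins in expectation
# (9 vs. 5), even though the drastic one creates more utility when it sticks.
print(modest, drastic)
```

The interesting question, which the comment gestures at, is the shape of the follow-through curve: if it falls off slowly, more radical commitments dominate; if it falls off a cliff, Bob's 10% looks close to optimal.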

Alice strikes me as the poster child for the old saying about good intentions and roads to hell. Ultimately, I think she ends up causing much more harm via the toxic and negative experience those around her have than any good she can do herself.

If you mean this literally, it's a pretty extraordinary claim! Like, if Alice is really doing important AI Safety work and/or donating large amounts of money, she's plausibly saving multiple lives every year. Is the impact of being rude worse than killing multiple people per year? (Note, I'm not saying in this comment that Alice should talk the way she does, or that Alice's conversation techniques are effective or socially acceptable. I'm saying it's extraordinary to claim that the toxic experience of Alice's friends is equivalently bad to "any good she can do herself". I think basically no amount of rudeness is equivalently bad to how good it is to save a life and/or help avert the apocalypse, but if you think it's morally equivalent then I'd be really curious for your reasoning.)
If the "you other people need to work harder because I do and this is important" attitude starts pushing many people away in a setting that likely lives and dies by team/group efforts, Alice will have to be an exceptional talent to make up for the collective loss. It might be well intended but can (and all too frequently does, hence the old saying) produce unintended consequences that prove to be counterproductive. Even if you're saying it nicely, if the message is basically "you're not being good enough", it becomes a bit alienating. One can definitely lead by example and try to create an environment where people want to do more, but we should respect the level of contribution each is willing to produce - and certainly so if we're not in a role where we get to define what the minimum acceptable contributions are.
Seth Herd (3mo):
Sure, but this isn't about Alice. She's not telling Bob or us to talk like her, just asking if he'd work harder.
No. She was "asking" for a lot more than just if he could work harder.  Moreover, my comment had little to do with some claim she wanted others to talk like her. 

So I basically know Alice is right, yet I mostly act like a Bob. I'm probably neither a true rationalist (I am acting on emotions instead of the truth) nor a strong effective altruist. I donate money because it makes me feel good, volunteer mostly for the fuzzies and engage with my local EA group because it's a strong community with amazing and brilliant people.

Yeah, deep down I'm a selfish human. I don't think I'll change that about myself. But EA has still enabled me to have a large positive impact through effective giving, and that's a net positive.

Strong upvote for this post! While I'd caution against linking this sequence to the Effective Altruism forum and movement in general - because I don't think placing explicit and extremely strong moral *obligations* on action makes for a healthy, self-confident or outward-looking mass movement - I would definitely encourage Firinn to write more LessWrong posts in this vein.

The LessWrong community should be very enthusiastic about more articulate narratives and discussions on exemplary actions motivated toward saving the whole entire world! Posts di... (read more)

The correct moral choice is for both people to lower their EA contributions to 0%.

I couldn't read this straight. Alice is being an absolute asshole to Bob. This is incredibly off-putting.

I think you could have communicated better if you had tried to make Alice remotely human.

I think I get what you are trying to do with this, but I only got it after reading the comments.