This is an essay describing some of my motivation to be an effective altruist. It is crossposted from my blog. Many of the ideas here are quite similar to others found in the sequences. I have a slightly different take, and after adjusting for the typical mind fallacy I expect that this post may contain insights that are new to many.
I'm not very good at feeling the size of large numbers. Once you start tossing around numbers larger than 1000 (or maybe even 100), the numbers just seem "big".
Consider Sirius, the brightest star in the night sky. If you told me that Sirius is as big as a million Earths, I would feel like that's a lot of Earths. If, instead, you told me that you could fit a billion Earths inside Sirius… I would still just feel like that's a lot of Earths.
The feelings are almost identical. In context, my brain grudgingly admits that a billion is a lot larger than a million, and puts forth a token effort to feel like a billion-Earth-sized star is bigger than a million-Earth-sized star. But out of context — if I wasn't anchored at "a million" when I heard "a billion" — both these numbers just feel vaguely large.
I feel a little respect for the bigness of numbers, if you pick really really large numbers. If you say "one followed by a hundred zeroes", then this feels a lot bigger than a billion. But it certainly doesn't feel (in my gut) like it's 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 times bigger than a billion. Not in the way that four apples internally feels like twice as many as two apples. My brain can't even begin to wrap itself around this sort of magnitude differential.
This phenomenon is related to scope insensitivity, and it's important to me because I live in a world where sometimes the things I care about are really really numerous.
For example, billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease. And though most of them are out of my sight, I still care about them.
The loss of a human life with all its joys and all its sorrows is tragic no matter what the cause, and the tragedy is not reduced simply because I was far away, or because I did not know of it, or because I did not know how to help, or because I was not personally responsible.
Knowing this, I care about every single individual on this planet. The problem is, my brain is simply incapable of taking the amount of caring I feel for a single person and scaling it up by a billion times. I lack the internal capacity to feel that much. My care-o-meter simply doesn't go up that far.
And this is a problem.
It's a common trope that courage isn't about being fearless, it's about being afraid but doing the right thing anyway. In the same sense, caring about the world isn't about having a gut feeling that corresponds to the amount of suffering in the world, it's about doing the right thing anyway. Even without the feeling.
My internal care-o-meter was calibrated to deal with about a hundred and fifty people, and it simply can't express the amount of caring that I have for billions of sufferers. The internal care-o-meter just doesn't go up that high.
Humanity is playing for unimaginably high stakes. At the very least, there are billions of people suffering today. At the worst, there are quadrillions (or more) potential humans, transhumans, or posthumans whose existence depends upon what we do here and now. All the intricate civilizations that the future could hold, all the experience and art and beauty that is possible in the future, depend upon the present.
When you're faced with stakes like these, your internal caring heuristics — calibrated on numbers like "ten" or "twenty" — completely fail to grasp the gravity of the situation.
Saving a person's life feels great, and it would probably feel just about as good to save one life as it would feel to save the world. It surely wouldn't be many billion times more of a high to save the world, because your hardware can't express a feeling a billion times bigger than the feeling of saving a person's life. But even though the altruistic high from saving someone's life would be shockingly similar to the altruistic high from saving the world, always remember that behind those similar feelings there is a whole world of difference.
Our internal care-feelings are woefully inadequate for deciding how to act in a world with big problems.
There's a mental shift that happened to me when I first started internalizing scope insensitivity. It is a little difficult to articulate, so I'm going to start with a few stories.
Consider Alice, a software engineer at Amazon in Seattle. Once a month or so, those college students will show up on street corners with clipboards, looking ever more disillusioned as they struggle to convince people to donate to Doctors Without Borders. Usually, Alice avoids eye contact and goes about her day, but this month they finally manage to corner her. They explain Doctors Without Borders, and she actually has to admit that it sounds like a pretty good cause. She ends up handing them $20 through a combination of guilt, social pressure, and altruism, and then rushes back to work. (Next month, when they show up again, she avoids eye contact.)
Now consider Bob, who has been given the Ice Bucket Challenge by a friend on Facebook. He feels too busy to do the Ice Bucket Challenge, and instead just donates $100 to ALSA.
Now consider Christine, who is in the college sorority ΑΔΠ. ΑΔΠ is engaged in a competition with ΠΒΦ (another sorority) to see who can raise the most money for the National Breast Cancer Foundation in a week. Christine has a competitive spirit and gets engaged in fund-raising, and gives a few hundred dollars herself over the course of the week (especially at times when ΑΔΠ is especially behind).
All three of these people are donating money to charitable organizations… and that's great. But notice that there's something similar in these three stories: these donations are largely motivated by a social context. Alice feels obligation and social pressure. Bob feels social pressure and maybe a bit of camaraderie. Christine feels camaraderie and competitiveness. These are all fine motivations, but notice that these motivations are related to the social setting, and only tangentially to the content of the charitable donation.
If you took any of Alice or Bob or Christine and asked them why they aren't donating all of their time and money to these causes that they apparently believe are worthwhile, they'd look at you funny and they'd probably think you were being rude (with good reason!). If you pressed, they might tell you that money is a little tight right now, or that they would donate more if they were a better person.
But the question would still feel kind of wrong. Giving all your money away is just not what you do with money. We can all say out loud that people who give all their possessions away are really great, but behind closed doors we all know that such people are crazy. (Good crazy, perhaps, but crazy all the same.)
This is a mindset that I inhabited for a while. There's an alternative mindset that can hit you like a freight train when you start internalizing scope insensitivity.
Consider Daniel, a college student shortly after the Deepwater Horizon BP oil spill. He encounters one of those college students with the clipboards on the street corners, soliciting donations to the World Wildlife Fund. They're trying to save as many oiled birds as possible. Normally, Daniel would simply dismiss the charity as Not The Most Important Thing, or Not Worth His Time Right Now, or Somebody Else's Problem, but this time Daniel has been thinking about how his brain is bad at numbers and decides to do a quick sanity check.
He pictures himself walking along the beach after the oil spill, and encountering a group of people cleaning birds as fast as they can. They simply don't have the resources to clean all the available birds. A pathetic young bird flops towards his feet, slick with oil, eyes barely able to open. He kneels down to pick it up and help it onto the table. One of the bird-cleaners informs him that they won't have time to get to that bird themselves, but he could pull on some gloves and could probably save the bird with three minutes of washing.
Daniel decides that he would spend three minutes of his time to save the bird, and that he would also be happy to pay at least $3 to have someone else spend a few minutes cleaning the bird. He introspects and finds that this is not just because he imagined a bird right in front of him: he feels that it is worth at least three minutes of his time (or $3) to save an oiled bird in some vague platonic sense.
And, because he's been thinking about scope insensitivity, he expects his brain to misreport how much he actually cares about large numbers of birds: the internal feeling of caring can't be expected to line up with the actual importance of the situation. So instead of just asking his gut how much he cares about de-oiling lots of birds, he shuts up and multiplies.
Thousands and thousands of birds were oiled by the BP spill alone. After shutting up and multiplying, Daniel realizes (with growing horror) that the amount he actually cares about oiled birds is lower-bounded by two months of hard work and/or fifty thousand dollars. And that's not even counting wildlife threatened by other oil spills.
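Daniel's lower bound is just multiplication. A minimal sketch of the back-of-the-envelope computation, assuming a hypothetical bird count of 17,000 to stand in for "thousands and thousands" (the per-bird figures of $3 and three minutes are the ones Daniel settled on):

```python
# "Shut up and multiply": a lower bound on how much Daniel cares.
# The bird count is an assumed stand-in, not a figure from the spill.
birds_oiled = 17_000      # hypothetical rough count for the BP spill
dollars_per_bird = 3      # what Daniel decided one bird's rescue is worth
minutes_per_bird = 3      # washing time to save one bird

total_dollars = birds_oiled * dollars_per_bird
total_hours = birds_oiled * minutes_per_bird / 60

print(f"${total_dollars:,}")          # → $51,000
print(f"{total_hours:,.0f} hours")    # → 850 hours
```

At 850 hours, that's roughly two months of twelve-hour days — which is where the "two months of hard work and/or fifty thousand dollars" figure comes from.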
And if he cares that much about de-oiling birds, then how much does he actually care about factory farming, nevermind hunger, or poverty, or sickness? How much does he actually care about wars that ravage nations? About neglected, deprived children? About the future of humanity? He actually cares about these things to the tune of much more money than he has, and much more time than he has.
For the first time, Daniel sees a glimpse of how much he actually cares, and how poor a state the world is in.
This has the strange effect that Daniel's reasoning goes full-circle, and he realizes that he actually can't care about oiled birds to the tune of 3 minutes or $3: not because the birds aren't worth the time and money (and, in fact, he thinks that the economy produces things priced at $3 which are worth less than the bird's survival), but because he can't spend his time or money on saving the birds. The opportunity cost suddenly seems far too high: there is too much else to do! People are sick and starving and dying! The very future of our civilization is at stake!
Daniel doesn't wind up giving $50k to the WWF, and he also doesn't donate to ALSA or NBCF. But if you ask Daniel why he's not donating all his money, he won't look at you funny or think you're rude. He's left the place where you don't care far behind, and has realized that his mind was lying to him the whole time about the gravity of the real problems.
Now he realizes that he can't possibly do enough. After adjusting for his scope insensitivity (and the fact that his brain lies about the size of large numbers), even the "less important" causes like the WWF suddenly seem worthy of dedicating a life to. Wildlife destruction and ALS and breast cancer are suddenly all problems that he would move mountains to solve — except he's finally understood that there are just too many mountains, and ALS isn't the bottleneck, and AHHH HOW DID ALL THESE MOUNTAINS GET HERE?
In the original mindstate, the reason he didn't drop everything to work on ALS was because it just didn't seem… pressing enough. Or tractable enough. Or important enough. Kind of. These are sort of the reason, but the real reason is more that the concept of "dropping everything to address ALS" never even crossed his mind as a real possibility. The idea was too much of a break from the standard narrative. It wasn't his problem.
In the new mindstate, everything is his problem. The only reason he's not dropping everything to work on ALS is because there are far too many things to do first.
Alice and Bob and Christine usually aren't spending time solving all the world's problems because they forget to see them. If you remind them — put them in a social context where they remember how much they care (hopefully without guilt or pressure) — then they'll likely donate a little money.
By contrast, Daniel and others who have undergone the mental shift aren't spending time solving all the world's problems because there are just too many problems. (Daniel hopefully goes on to discover movements like effective altruism and starts contributing towards fixing the world's most pressing problems.)
I'm not trying to preach here about how to be a good person. You don't need to share my viewpoint to be a good person (obviously).
Rather, I'm trying to point at a shift in perspective. Many of us go through life understanding that we should care about people suffering far away from us, but failing to. I think that this attitude is tied, at least in part, to the fact that most of us implicitly trust our internal care-o-meters.
The "care feeling" isn't usually strong enough to compel us to frantically save everyone dying. So while we acknowledge that it would be virtuous to do more for the world, we think that we can't, because we weren't gifted with that virtuous extra-caring that prominent altruists must have.
But this is an error — prominent altruists aren't the people who have a larger care-o-meter, they're the people who have learned not to trust their care-o-meters.
Our care-o-meters are broken. They don't work on large numbers. Nobody has one capable of faithfully representing the scope of the world's problems. But the fact that you can't feel the caring doesn't mean that you can't do the caring.
You don't get to feel the appropriate amount of "care", in your body. Sorry — the world's problems are just too large, and your body is not built to respond appropriately to problems of this magnitude. But if you choose to do so, you can still act like the world's problems are as big as they are. You can stop trusting the internal feelings to guide your actions and switch over to manual control.
This, of course, leads us to the question of "what the hell do you do, then?"
And I don't really know yet. (Though I'll plug the Giving What We Can pledge, GiveWell, MIRI, and the Future of Humanity Institute as good starting points.)
I think that at least part of it comes from a certain sort of desperate perspective. It's not enough to think you should change the world — you also need the sort of desperation that comes from realizing that you would dedicate your entire life to solving the world's 100th biggest problem if you could, but you can't, because there are 99 bigger problems you have to address first.
I'm not trying to guilt you into giving more money away — becoming a philanthropist is really really hard. (If you're already a philanthropist, then you have my acclaim and my affection.) First it requires you to have money, which is uncommon, and then it requires you to throw that money at distant invisible problems, which is not an easy sell to a human brain. Akrasia is a formidable enemy. And most importantly, guilt doesn't seem like a good long-term motivator: if you want to join the ranks of people saving the world, I would rather you join them proudly. There are many trials and tribulations ahead, and we'd do better to face them with our heads held high.
Courage isn't about being fearless, it's about being able to do the right thing even if you're afraid.
And similarly, addressing the major problems of our time isn't about feeling a strong compulsion to do so. It's about doing it anyway, even when internal compulsion utterly fails to capture the scope of the problems we face.
It's easy to look at especially virtuous people — Gandhi, Mother Teresa, Nelson Mandela — and conclude that they must have cared more than we do. But I don't think that's the case.
Nobody gets to comprehend the scope of these problems. The closest we can get is doing the multiplication: finding something we care about, putting a number on it, and multiplying. And then trusting the numbers more than we trust our feelings.
Because our feelings lie to us.
When you do the multiplication, you realize that addressing global poverty and building a brighter future deserve more resources than currently exist. There is not enough money, time, or effort in the world to do what we need to do.
There is only you, and me, and everyone else who is trying anyway.
You can't actually feel the weight of the world. The human mind is not capable of that feat.
But sometimes, you can catch a glimpse.
I accept all the arguments for why one should be an effective altruist, and yet I am not, personally, an EA. This post gives a pretty good avenue for explaining how and why. I'm in Daniel's position up through chunk 4, and reach the state of mind where
and find it literally unbearable. All of a sudden, it's clear that to be a good person is to accept the weight of the world on your shoulders. This is where my path diverges; EA says "OK, then, that's what I'll do, as best I can"; from my perspective, it's swallowing the bullet. At this point, your modus ponens is my modus tollens; I can't deal with what the argument would require of me, so I reject the premise. I concluded that I am not a good person and won't be for the foreseeable future, and limited myself to the weight of my chosen community and narrowly-defined ingroup.
I don't think you're wrong to try to convert people to EA. It does bear remembering, though, that not everyone is equipped to deal with this outlook, and some people will find that trying to shut up and multiply is lastingly unpleasant, such that an altruistic outlook becomes significantly aversive.
This is why I prefer to frame EA as something exciting, not burdensome.
Exciting vs. burdensome seems to be a matter of how you think about success and failure. If you think "we can actually make things better!", it's exciting. If you think "if you haven't succeeded immediately, it's all your fault", it's burdensome.
This just might have more general application.
When you ask how bad an action is, you can mean (at least) two different things.
Killing someone in person is psychologically harder for normal decent people than letting them die, especially if the victim is a stranger far away, and even more so if there isn't some specific person who's dying. So actually killing someone is "worse", if by that you mean that it gives a stronger indication of being callous or malicious or something, even if there's no difference in harm done.
In some contexts this sort of character evaluation really is what you care about. If you want to know whether someone's going to be safe and enjoyable company if you have a drink with them, you probably do prefer someone who'd put in place an embargo that kills millions rather than someone who would shoot dozens of schoolchildren.
That's perfectly consistent with (1) saying that in terms of actual harm done spending money on yourself rather than giving it to effective charities is as bad as killing people, and (2) attempting to choose one's own actions on the basis of harm done rather than evidence of character.
I agree with others that the post is very nice and clear, as most of your posts are. Upvoted for that. I just want to provide a perspective not often voiced here. My mind does not work the way yours does and I do not think I am a worse person than you because of that. I am not sure how common my thought process is on this forum.
Going section by section:
I do not "care about every single individual on this planet". I care about myself, my family, friends and some other people I know. I cannot bring myself to care (and I don't really want to) about a random person half-way around the world, except in the non-scalable general sense that "it is sad that bad stuff happens, be it to 1 person or to 1 billion people". I care about the humanity surviving and thriving, in the abstract, but I do not feel the connection between the current suffering and future thriving. (Actually, it's worse than that. I am not sure whether humanity existing, in Yvain's words, in a 10m x 10m x 10m box of computronium with billions of sims is much different from actually colonizing the observable universe (or the multiverse, as the case might be). But that's a different story, unrelated to
I don't disagree, and I don't think you're a bad person, and my intent is not to guilt or pressure you. My intent is more to show some people that certain things that may feel impossible are not impossible. :-)
A few things, though:
This seems like a cop out to me. Given a bunch of people trying to help the world, it would be best for all of them to do the thing that they think most helps the world. Often, this will lead to diversity (not just because people have different ideas about what is good, but also because of diminishing marginal returns and saturation). Sometimes, it won't (e.g. after a syn bio proof of concept that kills 1/4 of the race I would hope that diversity in problem-selection would decrease). "It is best to diversify and hope" seems like a platitude that dodges the fun parts.
I also ha…
My view is similar to yours, but with the following addition:
I have actual obligations to my friends and family, and I care about them quite a bit. I also care to a lesser extent about the city and region that I live in. If I act as though I instead have overriding obligations to the third world, then I risk being unable to satisfy my more basic obligations. To me, if for instance I spend my surplus income on mosquito nets instead of saving it and then have some personal disaster that my friends and family help bail me out of (because they also have obligations to me), I've effectively stolen their money and spent it on something they wouldn't have chosen to spend it on. While I clearly have some leeway in these obligations and get to do some things other than save, charity falls into the same category as dinner out: I spend resources on it occasionally and enjoy or feel good about doing so, but it has to be kept strictly in check.
I feel like I'm somewhere halfway between you and so8res. I appreciate you sharing this perspective as well.
I can't speak for shminux, of course, but caring about humanity surviving and thriving while not caring about the suffering or lives of strangers doesn't seem at all arbitrary or puzzling to me.
I mean, consider the impact on me if 1000 people I've never met or heard of die tomorrow, vs. the impact on me if humanity doesn't survive. The latter seems incontestably and vastly greater to me... does it not seem that way to you?
It doesn't seem at all arbitrary that I should care about something that affects me greatly more than something that affects me less. Does it seem that way to you?
...Slow deep breath... Ignore inflammatory and judgmental comments... Exhale slowly... Resist the urge to downvote... OK, I'm good.
First, as usual, TheOtherDave has already put it better than I could.
Maybe to elaborate just a bit.
First, almost everyone cares about the survival of the human race as a terminal goal. Very few have the infamous 'après nous le déluge' attitude. It seems neither abstract nor arbitrary to me. I want my family, friends and their descendants to have a bright and long-lasting future, and it is predicated on the humanity in general having one.
Second, a good life and a bright future for the people I care about does not necessarily require me to care about the wellbeing of everyone on Earth. So I only get mildly and non-scalably sad when bad stuff happens to them. Other people, including you, care a lot. Good for them.
Unlike you (and probably Eliezer), I do not tell other people what they should care about, and I get annoyed at those who think their morals are better than mine. And I certainly support any steps to stop people from actively making other people's lives worse, be it abusing them, telling them whom to marry or how much and what cause to donate to. But other than that, it's up to them. Live and let live and such.
Hope this helps you understand where I am coming from. If you decide to reply, please consider doing it in a thoughtful and respectful manner this time.
I'm actually having difficulty understanding the sentiment "I get annoyed at those who think their morals are better than mine". I mean, I can understand not wanting other people to look down on you as a basic emotional reaction, but doesn't everyone think their morals are better than other people's?
That's the difference between morals and tastes. If I like chocolate ice cream and you like vanilla, then oh well. I don't really care and certainly don't think my tastes are better for anyone other than me. But if I think people should value the welfare of strangers and you don't, then of course I think my morality is better. Morals differ from tastes in that people believe that it's not just different, but WRONG to not follow them. If you remove that element from morality, what's left? The sentiment "I have these morals, but other people's morals are equally valid" sounds good, all egalitarian and such, but it doesn't make any sense to me. People judge the value of things through their moral system, and saying "System B is as good as System A, based on System A" is borderline nonsensical.
Also, as an aside, I think you should avoid rhetorical statements like "call me heartless if you like" if you're going to get this upset when someone actually does.
It seems to me that when you explicitly make your own virtue or lack thereof a topic of discussion, and challenge readers in so many words to "call [you] heartless", you should not then complain of someone else's "inflammatory and judgmental comments" when they take you up on the offer.
And it doesn't seem to me that Hedonic_Treader's response was particularly thoughtless or disrespectful.
(For what it's worth, I don't think your comments indicate that you're heartless.)
I expect that's correct, but I'm not sure your justification for it is correct. In particular it seems obviously possible for the following things all to be true:
and I think people who say (e.g.) that atheists think they're smarter than everyone else would claim that that's what's happening.
I repeat, I agree that these accusations are usually pretty strawy, but it's a slightly more complicated variety of straw than simply claiming that people have said things they haven't. More specifically, I think the usual situation is something like this:
[EDITED to add, for clarity:] By "But so does everyone else" I meant that (almost!) ever…
Interesting article, sounds like a very good introduction to scope insensitivity.
Two points where I disagree:
I don't think birds are a good example of it, at least not for me. I don't care much for individual birds. I definitely wouldn't spend $3 nor any significant time to save a single bird. I'm not a vegetarian; it would be quite hypocritical for me to invest resources in saving one bird for "care" reasons and then go eat a chicken at dinner. On the other hand, I do care about ecological disasters, massive bird death, damage to natural reserves, threats to a whole species… So a massive death of birds is something I'm ready to invest resources to prevent, but not a single bird's death.
I know it's quite taboo here, and most will disagree with me, but to me, the answer to how big the problems are is not charity, even "efficient" charity (which seems a very good idea on paper, but I'm quite skeptical about its reliability), but structural change — politics. I can't fail to notice that two of the "especially virtuous people" you named, Gandhi and Mandela, were both active mostly in politics, not in charity. To quote another person often counted among the especially virtuous, Martin Luther King: "True compassion is more than flinging a coin to a beggar. It comes to see that an edifice which produces beggars needs restructuring."
Regarding scope sensitivity and the oily bird test, one man's modus ponens is another's modus tollens. Maybe if you're willing to save one bird, you should be willing to donate to save many more birds. But maybe the reverse is true - you're not willing to save thousands and thousands of birds, so you shouldn't save one bird, either. You can shut up and multiply, but you can also shut up and divide.
Did the oil bird mental exercise. Came to conclusion that I don't care at all about anyone else, and am only doing good things for altruistic high and social benefits. Sad.
Funny thing. I started out expanding this, trying to explain it as thoroughly as possible, and, all of a sudden, it became confusing to me. I guess, it was not a well thought out or consistent position to begin with. Thank you for a random rationality lesson, but you are not getting this idea expanded, alas.
Even they didn't try to take on all the problems in the world. They helped a subset of people that they cared about with particular fairly well-defined problems.
Yes, that is how adults help in real life. In science we chop off little sub-sub-problems we think we can address to do our part to address larger questions whose answers no one person will ever find alone, and thus end up doing enormous work on the shoulders of giants. It works roughly the same in activism.
I see suffering the whole day in healthcare but I'm actually pretty much numbed to it. Nothing really gets to me, and if it did it could be quite crippling. Sometimes I watch sad videos or read dramatizations of real events to force myself to care for a while, to keep me from forgetting why I show up at work. Reading certain types of writings by rationalists helps too.
You shouldn't get more than glimpses of the weight of the world, or rather you shouldn't let them through the defences, to be able to function.
"Will the procedure hurt?" asked the patient. "Not if you don't sting yourself by accident!" answered the doctor with the needle.
I'm not sure what to make of it, but one could run the motivating example backwards:
"He pictures himself helping the people and wading deep in all that sticky oil and imagines how long he'd endure that and quickly arrives at the conclusion that he doesn't care that much for the birds really. And would rather prefer to get away from that mess. His estimate how much it is worth for him to rescue 1000 birds is quite low."
What can we derive from this if we shut up and calculate? If his value for rescuing 1000 birds is $10, then 1 million birds still come out at $10,000. But it could be zero now, if not negative (he'd feel he should get money for saving the birds). Does that mean, if we extrapolate, that he should strive to eradicate all birds? Surely not.
It appears to mean that our care-o-meter plus system-2 multiplication gives meaningless answers.
Our empathy towards beings is to a large part dependent on socialization and context. Taking it out of its ancestral environment is bound to cause problems I fear individuals can't solve. But maybe societies can.
Wow, this post is pretty much exactly what I've been thinking about lately.
Yup. Been there. Still finding a way to use that ICU-nursing high as motivation for something more generalized than "omg take all the overtime shifts."
Also, I think that my brain already runs on something like virtue ethics, but that the particular thing I think is virtuous changes based on my beliefs about the world, and this is probably a decent way to do things for reasons other than visceral caring. (I mean, I do viscerally care about being virtuous...)
Cross-commented from the EA Forum.
First of all, thanks Nate. An engaging outlook on overcoming point-and-shoot morality.
Moral Tribes, Joshua Greene's book, addresses the question of when to do this manual switch. Interested readers may want to check it out.
Some of us - where "us" here means people who are really trying - take your approach. They visualize the sinking ship, the hanging souls silently glaring at them in desperation, they shut up…
I'm sympathetic to the effective altruist movement, and when I do periodically donate, I try to do so as efficiently as possible. But I don't focus much effort on it. I've concluded that my impact probably comes mostly from my everyday interactions with people around me, not from money that I send across the world.
It's also worth mentioning that cleaning birds after an oil spill isn't always even helpful. Some birds, like gulls and penguins, do pretty well. Others, like loons, tend to do poorly. Here are some articles concerning cleaning oiled birds.
And I know that the oiled birds issue was only an example, but I just wanted to point out that this issue, much like the "Food and clothing aid to Africa" examples you often... (read more)
I wonder if, in some interesting way, the idea that the scope of what needs doing for other people is so massive as to preclude any rational response other than working on it full time is related to the insight that voting doesn't matter. In both cases, the math seems to preclude bothering with something that would be easy but would help in the aggregate.
My dog recently tore both of her ACLs, and required two operations and a total of about 10 weeks of recovery. My vet suggested I had a choice as to whether to do the two $3100 operations on her knees. I realize... (read more)
Fifty thousand times the marginal utility of a dollar, which is probably much less than the utility difference between the status quo and having fifty thousand dollars less, unless Daniel is filthy rich.
I don't have the internal capacity to feel large numbers as deeply as I should, but I do have the capacity to feel that prioritizing my use of resources is important, which amounts to a similar thing. I don't have an internal value assigned for one million birds or for ten thousand, but I do have a value that says maximization is worth pursuing.
Because of this, and because I'm basically an ethical egoist, I disagree with your view that effective altruism requires ignoring our care-o-meters. I think it only requires their training and refinement, not comple... (read more)
I think this is a really good post, and extremely clear. The idea of the broken care-o-meter is a very compelling metaphor. It might be worthwhile to try to put this somewhere with higher exposure, where people who have money and are not already familiar with the LW memeplex would see it.
Nice write-up. I'm one of those thoughtful creepy nerds who figured out about the scale thing years ago, and now just picks a fixed percentage of total income and donates it to fixed, utility-calculated causes once a year... and then ends up giving away bits of spending money for other things anyway, but that's warm-fuzzies.
So yeah. Roughly 10% (I actually divide between a few causes, trying to hit both Far Away problems where I can contribute a lot of utility but have little influence, and Nearby problems where I have more influence on specific outcomes... (read more)
Thank you for writing this. I was stuck on 3, and found the answer to a question I asked myself the other day.
That is the thing that I never got. If I tell my brain to model a mind that cares, it comes up empty. I seem to literally be incapable of even imagining the thought process that would lead me to care for people I don't know.
If anybody knows how to fix that, please tell me.
Two possible responses that a person could have after recognizing that their care-o-meter is broken and deciding to pursue important causes anyways:
Option 1: Ignore their care-o-meter, treat its readings as nothing but noise, and rely on other tools instead.
Option 2: Don't naively trust their care-o-meter, and put effort into making it so that their care-o-meter will be engaged when it's appropriate, will be not-too-horribly calibrated, and will be useful as they pursue the projects that they've identified as important (despite its flaws).
Parts of this pos... (read more)
I think we need to consider another avenue in which our emotions are generated and affect our lives. An immediate, short-to-medium-term high is, in a way, the least valuable personal return we can expect from our actions. However, there is a more subtle yet longer-lasting emotional effect, which is more strongly correlated with our belief system and our rationality. I refer to a feeling of purpose we can have on a daily basis, a feeling of maximizing personal potential, and even long-term happiness. This is created when we believe we are doing the right thin... (read more)
Daniel grew up as a poor kid, and one day he was overjoyed to find $20 on the sidewalk. Daniel could have worked hard to become a trader on Wall Street, yet he decided to become a teacher instead, because of his positive experiences tutoring a few kids while in high school. But as a high school teacher, he will only teach a thousand kids in his career, while as a trader he would have been able to make millions of dollars. If he multiplied his positive experience with one kid by a thousand, it still probably wouldn't compare with the joy of finding $20 on the sidewalk times a million.
I know the name is just a coincidence, but I'm going to pretend that you wrote this about me.
An interesting followup to your example of an oiled bird deserving 3 minutes of care came to mind:
Let's assume that there are 150 million suffering people right now (a completely made-up number, but a somewhat reasonable order-of-magnitude assumption). A quick calculation shows that if I dedicate every single waking moment of my remaining life to caring about them and fixing the situation, then I've got a total of about 15 million care-minutes.
According to even the best possible care-o-meter that I could have, all the problems in th... (read more)
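The care-minutes arithmetic above can be checked in a few lines. This is a minimal sketch; the 45 remaining years and 16 waking hours per day are my own assumptions, chosen to land near the commenter's ballpark of 15 million:

```python
# Rough check of the care-minutes calculation.
# Assumptions (illustrative, not the commenter's exact figures):
# 45 remaining years, 16 waking hours per day, 150 million sufferers.

remaining_years = 45
waking_hours_per_day = 16

care_minutes = remaining_years * 365 * waking_hours_per_day * 60
print(f"total care-minutes: {care_minutes:,}")  # 15,768,000

suffering_people = 150_000_000
seconds_per_person = care_minutes * 60 / suffering_people
print(f"care budget per person: {seconds_per_person:.1f} seconds")  # 6.3 seconds
```

Even with every waking minute devoted to caring, the per-person budget comes out to a few seconds, which is the point the comment is gesturing at.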
Upvoted for clarity and relevance. You touched on the exact reason why many people I know can't/won't become EAs; even if they genuinely want to help the world, the scope of the problem is just too massive for them to care about accurately. So they go back to donating to the causes that scream the loudest, and turning a blind eye to the rest of the problems.
I used to be like Alice, Bob, and Christine, and donated to whatever charitable cause would pop up. Then I had a couple of Daniel moments, and resolved that whenever I felt pressured to donate to a good cause, I'd note how much I was going to donate and then donate to one of Givewell's top charities.
Thank you for this explanation. It helps me understand a little better why so many people I know simply feel overwhelmed and give up. Personally, as I am not in a position to donate money, I work to tackle one specific problem set that I think will help open things up, and I leave the solutions to other problems to others.
If you don't feel like you care about billions of people, and you recognize that the part of your brain that cares about small numbers of people has scope sensitivity, what observation causes you to believe that you do care about everyone equally?
Serious question; I traverse the reasoning the other way, and since I don't care much about the aggregate six billion people I don't know, I divide and say that I don't care more than one six-billionth as much about the typical person that I don't know.
People that I do know, I do care about - but I don't have to multiply to figure my total caring, I have to add.
I would like to subscribe to your newsletter!
I've been frustrated recently by people not realizing that they are arguing that if you divide responsibility up until it's a very small quantity, then it just goes away.
Attempting to process this post in light of being on my anti-anxiety medication is weird.
There are specific parts in your post where I thought 'If I was having these thoughts, it would probably be a sign I had not yet taken my pill today.' and I get the distinct feeling I would read this entirely differently when not on medication.
It's kind of like 'I notice I'm confused' except... In this case I know why I'm confused and I know that this particular kind of confusion is probably better than the alternative (Being a sleep deprived mess from constant worry) ... (read more)
This post is amazing, So8res! (My team and I have stumbled upon it in search for the all-time greatest articles on improving oneself and making a difference. Just in case you’re interested, you can see our selection at One Daily Nugget. We’ve featured this article in today’s issue.)
Here’s one question that we discussed, would love to get your take: You recommend that one starts with something one cares about, quantifies it, multiplies, and then trusts the result more than one’s intuition.
I love this approach. But how can we be sure that the first element i... (read more)
Sorry I was rude; I just know how it is to stand in the rain and try to get someone to do something painless for the greater good, and have them turn away for whatever reason.
On another point, here's a case study of lesser proportions.
Suppose you generally want to fight social injustice, save Our Planet, uphold peace, defend women's rights, etc. (as many do when they first begin deciding what to do with themselves). A friend subscribes you to an NGO for nature conservation, and you think it might be a good place to start, since you don't have much money to donat... (read more)
I think there are some good points to be made about the care-o-meter as a heuristic.
Basically, let's say that the utility associated with altruistic effort has a term something like this:
U = [relative amount of impact I can have on the problem] * [absolute significance of the problem]
To some extent, one's care-o-meter is a measurement of the latter term, i.e. the "scope" of the problem, and the issue of scope insensitivity demonstrates that it fails miserably in this regard. However, that isn't an entirely accurate criticism, because as a rough heu... (read more)
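The heuristic in this comment can be turned into a toy model. This is entirely my own construction, not the commenter's: the logarithmic care response is an assumption standing in for scope insensitivity, and the 0.001 relative-impact figure is arbitrary.

```python
import math

# Toy model of the comment's utility term:
#   U = (relative impact I can have) * (absolute significance of the problem)
def utility(relative_impact, significance):
    return relative_impact * significance

# A scope-insensitive care-o-meter, modeled (as an assumption) as responding
# logarithmically to scope rather than linearly, so a thousandfold larger
# problem feels only modestly bigger.
def care_o_meter(scope):
    return math.log10(scope)

for scope in (1_000, 1_000_000, 1_000_000_000):
    print(f"scope={scope:>13,}  U={utility(0.001, scope):>11,.0f}  "
          f"felt={care_o_meter(scope):.0f}")
```

Across the rows, the linear utility term grows a millionfold while the felt reading merely goes from 3 to 6 to 9 - the mismatch between multiplication and gut feeling that the comment describes.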
Thank you for this write-up; I really like that its structure actually manages to present the evolution of an idea. Agreeing with more or less all of the content, I often find myself posing the question of whether I - and seven billion others - could save the world with my own, and our own, hands. (I am beginning to see utilons even in my work as an artist, but that belongs in a wholly different post.) This is a question for the ones like me, not earning much, and - without further and serious reclusion, reinvention and reorientation - not going to earn much, ever: Do ... (read more)