I see a discussion that treats beggars as human beings whose welfare we are interested in, and that assumes they respond to incentives.

(This is somewhat tangential to your actual comment so don't look on this like it's a counter-argument.)

This idea of using incentives to guide their behavior is to impose our value system on them. It's one thing to say that you don't want to help these people at all, but to actually use the power of money to influence the way these people live their lives is not morally justified. Just because we have the po…

I am uneasy with that sentiment, although I'm having a hard time putting my finger on exactly why. But this is how I see it: there are vastly more people in the world than I could possibly ever help, and some of them are so poor and downtrodden that they spend most of their money on food, since they can't afford luxuries such as drugs. Eventually, I might give money to the drug user if I had solved all the other problems first, but I would prefer my money to be spent on something more essential for survival before I turn to subsidizing people's luxuries.

[anonymous] · 10y:

This position conflicts with consequentialist ethics.

Situation 1: I give money to those beggars who are more likely to buy bread than drugs, because this way my money brings more expected benefit to the beggar: it might help this particular one to bounce back, whereas with a crack addict there's almost no chance.

Situation 2: I give money to those beggars who are more likely to buy bread than drugs, because I enjoy the power rush of controlling other people, like the controlling father that tried to control Aaron Swartz with his controlling money.

The net result is the same; only the thoughts differ. Discriminating between the two situations sounds like [deontological ethics](http://en.wikipedia.org/wiki/Deontological_ethics). It works okay on the small scale but breaks down very quickly when the stakes rise and you realize that the universe doesn't care about your personal hangups.
conchis · 10y:

This is a valid question, but I think you also need to allow the possibility that these things are not what the beggar most needs right now. Not all attempts to substitute one's own decisions for someone else's involve "impos[ing] your values on the person." Sometimes people make decisions that do not further their own values, and in such cases, I think it is morally justified to try to respect their values rather than just their choices. Not to say that this is easy, or that we shouldn't be incredibly wary of the possibility that we're actually just projecting our own values on others. But claiming that others' decisions are morally inviolable seems like an overreaction to me (albeit one that is founded in a legitimate concern).

AndrewH's observation and opportunity costs

by Scott Alexander · 23rd Jul 2009 · 1 min read · 61 comments


In his discussion of "cryocrastination", AndrewH makes a pretty good point. There may be some better things you can do with the money you'd spend on cryonics insurance. The sort of people who are into cryonics would probably accept that donating it to the Singularity Institute is probably, all in all, a higher utility use of however many dollars. Andrew's conclusion is that you should figure out what maximizes utility and do it, regardless of how small a contribution is involved. He's right, but I want to use the same example to push a point that is very slightly different, or maybe a little more general, or maybe the exact same one but phrased differently.

Consider an argument frequently made when politicians are discussing the budget. I frequently hear people say it would cost between ten and twenty billion dollars a year to feed all the hungry people in the world. I don't know if that's true or not, and considering the recent skepticism about aid it probably isn't, but let's say the politicians believe it. So when they look at (for example) NASA's budget of fifteen billion dollars, they say something like "It's criminal to be spending all this money on space probes and radio telescopes when it could eliminate world hunger, so let's cut NASA's budget."

You see the problem? When we cut NASA's budget, it doesn't immediately go into the "solve world hunger" fund. It goes into the rest of the budget, and probably gets divided among the Congressman Johnson Memorial Fisheries Museum and purchasing twelve-thousand-dollar staplers.

The same is true of cryocrastination. Unless you actually take that money you would have spent on cryonics and donate it to the Singularity Institute, it's going into the rest of your budget, and you'll probably spend it on coffee and plasma TVs and famous statistician trading cards and whatever else.

I find myself frequently making this error in the following way: a beggar asks me for money, and I want to give it to them because they have activated my urge to help people. Then I think to myself, "I can't justify giving the money to this beggar when it would help many more people if I gave it to a responsible charity." So I say no, forget all about it, and never give the money to anyone. Even though (from a charity point of view) I know of a superior alternative to giving the money to the beggar, I would still have done more good by just giving the beggar the money!

All this means that for any entity that does not use its resources with maximum efficiency, the opportunity cost of spending a certain amount of resources should not be calculated as what you'd earn from the best possible use of those resources, but as what you'll earn from the use of those resources you expect to actually occur.
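The rule above can be sketched as a toy calculation. All the utility numbers and the follow-through probability below are made-up placeholders for illustration, not claims about real charities or real people:

```python
# Toy illustration of the post's decision rule: compare an option not against
# the best *possible* use of the money, but against the use you *expect* to
# actually occur. If you'd only follow through on the "better" alternative some
# of the time (otherwise the money drifts into ordinary consumption, which we
# score as ~0 charitable utility), discount the alternative accordingly.

def better_choice(option_utility, alternative_utility, follow_through_prob):
    """Return which action does more expected good, given imperfect follow-through."""
    expected_alternative = follow_through_prob * alternative_utility
    return "take the option" if option_utility > expected_alternative else "decline it"

# Giving the beggar $5: a small but certain benefit (say, 1 util).
# Donating that $5 to an effective charity: bigger benefit (say, 10 utils),
# but suppose in practice you only actually make the donation 5% of the time.
print(better_choice(1.0, 10.0, 0.05))  # 1.0 > 0.5, so: take the option
```

The naive comparison (1 util versus 10) says to decline the beggar; the expected-value comparison (1 util versus 0.5) says the opposite, which is exactly the error the post describes.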