Classic Tyler, trying to point out to people that the logical conclusions of their beliefs may include ideas that are associated with their ideological enemies. The first half of this diavlog is basically very incisive and interesting Singer-baiting.
I don't see anything wrong with that debating style in itself; it can be informative to interview someone by highlighting possible ideological blind spots.
However, Cowen pressing Singer on tax cuts for charitable donations did make me roll my eyes. Singer acknowledged a specific point about encouraging donations with a targeted tax cut, and Cowen reshaped Singer's acknowledgement into a sound bite easily misinterpretable as Singer taking a position in the broader, mainstream political argument about cutting taxes on the rich. Then Cowen nudged Singer into approving that rephrasing! If I were Singer I'd have been less polite and explicitly ADBOCed.
What brilliant operationalizations Cowen offers with the baby example and the 18-year-old example.
I also love the way Cowen doesn't 'let it go': when discussing whether Africa might have been better off under colonialism, Singer offers that the problem is complex because there might have been militant uprisings even under colonial rule. But Cowen doesn't let it go, and forces Singer to consider that the scale of those uprisings, and the damage they would have caused, most probably would not compare to the damage being done in current civil wars.
Singer is also very quick to update and move on when he realizes the truth of something. Awesome rationality skills from both sides.
Thank you for doing the transcript.
In re making people more cooperative: If people in general were more cooperative, I doubt things would be so bad in Haiti.
What happens when a significant proportion of people are more cooperative, and a significant proportion aren't?
http://lesswrong.com/lw/7e1/rationality_quotes_september_2011/4r01 Note that one of the papers is by Cowen; it's good reading.
Cowen: But doesn't preference utilitarianism itself require some means of aggregation? The means we use for weighing different clashing preferences can require some kind of value judgments above and beyond Utilitarianism?
Singer: I don't quite see why that should be so. While acknowledging the practical difficulties of actually weighing up and calculating all the preferences, I fail to see why it involves other values apart from the preferences themselves.
This is very similar to a question I asked in response to this article by Julia Galef. You can find my comment here as well as several (unsuccessful, IMO) attempts to answer it. This worries me somewhat, because many Less Wrongers affirm utilitarianism without so much as addressing a huge gaping hole at the very core of its logic. It seems to me that utilitarianism hasn't been paying rent for quite some time, but there are no signs that it is about to be evicted.
You're saying that Utilitarianism is fatally flawed because there's no "method (or even a good reason to believe there is such a method) for interpersonal utility comparison", right?
Utilitarians try to maximize some quantity across all people, generally either the sum or average of either happiness or satisfied preferences. These can't be directly measured, so we estimate them as well as we can. For example, to figure out how unhappy having back pain is making someone, you could ask them what probability of success an operation that would cure their back pain (or kill them if it failed) would need to have before they would take it. Questions like this tell us that nearly everyone has the same basic preferences or enjoys the same basic things: really strong preferences for or happiness from getting minimal food, shelter, medical care, etc. Unless we have some reason to believe otherwise we should just add these up across people equally, assuming that having way too little food is as bad for me as it is for you.
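To make the back-pain elicitation concrete, here is the arithmetic it implies (a sketch with made-up numbers, anchoring death at 0 and a full cure at 1): if the person is just willing to accept the operation at a 90% success rate, indifference gives

$$u(\text{back pain}) = p \cdot u(\text{cured}) + (1-p) \cdot u(\text{death}) = 0.9 \cdot 1 + 0.1 \cdot 0 = 0.9,$$

so on that scale living with the pain costs them about 0.1.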
Utilitarianism is a value system. It doesn't pay rent in the same way beliefs do, in anticipated experiences. Instead it pays rent in telling us what to do. This is a much weaker standard, but Utilitarianism clearly meets it.
We can't "just add these [preferences] up across people equally" because utility functions are only defined up to an affine transformation.
You might be able to "just add up" pleasure, on the other hand, though you are then vulnerable to utility monsters, etc.
For a Total Utilitarian it's not a problem to be missing a zero point (unless you're talking about adding/removing people).
For an Average Utilitarian, or a Total Utilitarian considering birth or death, you try to identify the point at which a life is not worth living. You estimate as well as you can.
But all we want is an ordering of choices, and affine transformations (with a positive multiplicative constant) are order preserving.
Doesn't "multiplication by a constant" mean births and deaths? Which puts you in my second paragraph: you try to figure out at what point it would be better to never have lived at all. The point at which a life is a net negative is not very clear, and many Utilitarians disagree on where it is. I agree that this is a "big problem", though I think I would prefer the phrasing "open question".
Asking people to trade off various goods against risk of death allows you to elicit a utility function with a zero point, where death has zero utility. But such a utility function is only determined up to multiplication by a positive constant. With just this information, we can't even decide how to distribute goods among a population consisting of two people. Depending on how we scale their utility functions, one of them could be a utility monster. If you choose two calibration points for utility functions (say, death and some other outcome O), then you can make interpersonal comparisons of utility — although this comes at the cost of deciding a priori that one person's death is as good as another's, and one person's outcome O is as good as another's, ceteris paribus, independently of their preferences.
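For readers who want the algebra behind this, a minimal sketch (the notation here is mine, not anything from the comment above): for any person $i$, the transformed function

$$u_i'(x) = a_i\,u_i(x) + b_i, \qquad a_i > 0,$$

represents exactly the same preferences as $u_i$. Fixing $u_i(\text{death}) = 0$ forces $b_i = 0$ but leaves each $a_i$ free, so a sum such as $\sum_i a_i\,u_i(x)$ still depends on the arbitrary per-person scale factors. Fixing a second common point, say $u_i(O) = 1$, pins down the $a_i$ as well, which is exactly the a priori interpersonal judgment described above.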
Utilitarianism is a value system. It doesn't pay rent in the same way beliefs do, in anticipated experiences. Instead it pays rent in telling us what to do. This is a much weaker standard, but Utilitarianism clearly meets it.
I will grant this assumption for the sake of argument. Either utilitarianism doesn't have a truth-value, or it does have one but is only true for those people who prefer it. Why should I prefer utilitarianism? It seems to have several properties that make it look not very appealing compared to other ethical theories (or "value systems").
For example, utilitarianism requires knowing lots of social science and being able to perform very computationally expensive calculations. Alternatively, the Decalogue only requires that you memorise a small list of rules and have the ability to judge when a violation of the rules has occurred (and our minds are already much better optimised for this kind of judgement than for utility calculations, because of our evolutionary history). Also, from my perspective, the Decalogue is preferable because it is much easier to meet its standard (it actually isn't that hard not to murder people or steal from them, and to take a break once a week), which is much more psychologically appealing than beating yourself up for going to see a movie instead of donating your kidney to a starving child in Africa.
So, why should I adopt utilitarianism rather than God's Commandments, egoism, the Categorical Imperative, or any other ethical theory that I happen to fancy?
Wait, are you really claiming we should choose a moral system based on simplicity alone? And that a system of judging how to treat other people that "requires knowing lots of social science" is too complicated? I'd distrust any way of judging how to treat people that didn't require social science. As for calculations, I agree that we don't have very good ways to quantify other people's happiness and suffering (or even our own), but our best guess is better than throwing all the data out and going with arbitrary rules like commandments.
The categorical imperative is nice if you get to make the rules for everyone, but none of us do. Utilitarianism appeals to me because I believe I have worth and other people have worth, and I should do things that take that into account.
Wait, are you really claiming we should choose a moral system based on simplicity alone?
Jayson's point is that a moral system so complicated that you can't figure out whether a given action is moral isn't very useful.
Nutrition is also impossible to perfectly understand, but I take my best guess and know not to eat rocks. Choosing arbitrary rules is not a good alternative to doing your best at rules you don't fully understand.
Nutrition is also impossible to perfectly understand, but I take my best guess and know not to eat rocks. Choosing arbitrary rules is not a good alternative to doing your best at rules you don't fully understand.
How would you know whether utilitarianism is telling you to do the right thing or not? What experiment would you run? On Less Wrong these are supposed to be basic questions you may ask of any belief. Why is it okay to place utilitarianism in a non-overlapping magisteria (NOMA), but not, say, religion?
I am simply pointing out that utilitarianism doesn't meet Less Wrong's epistemic standards and that if utilitarianism is mere personal preference your arguments are no more persuasive to me than a chocolate-eater's would be to a vanilla-eater (except, in this case, chocolate (utilitarianism) is more expensive than vanilla (10 Commandments)).
Also, the Decalogue is not an arbitrary set of rules. We have quite good evidence that it is adaptive in many different environments.
Sorry, I was going in the wrong direction. You're right that utilitarianism isn't a tool, but a descriptor of what I value.
I care about both my wellbeing and my husband's wellbeing. No moral system spells out how to balance these things - the Decalogue merely forbids killing him or cheating on him, but doesn't address whether it's permissible to turn on the light while he's trying to sleep or if I should dress in the dark instead. Should I say, "balancing multiple people's needs is too computationally costly" and give up on the whole project?
When a computation gets too maddening, maybe so. Said husband (jkaufman) and I value our own wellbeing, and we also value the lives of strangers. We give some of our money to buy mosquito nets for strangers, but we don't have a perfect way to calculate how much, and at points it has been maddening to choose. So we pick an amount, somewhat arbitrarily, and go with it.
Picking a simpler system might minimize thought required on my part, but it wouldn't maximize what I want to maximize.
Sorry, I was going in the wrong direction. You're right that utilitarianism isn't a tool, but a descriptor of what I value.
So, utilitarianism isn't true, it is a matter of taste (preferences, values, etc...)? I'm fine with that. The problem I see here is this: neither I nor anyone I have ever met actually has preferences that are isomorphic to utilitarianism. (I am not including you, because I do not believe you when you say that utilitarianism describes your value system; I will explain why below.)
I care about both my wellbeing and my husband's wellbeing. No moral system spells out how to balance these things - the Decalogue merely forbids killing him or cheating on him, but doesn't address whether it's permissible to turn on the light while he's trying to sleep or if I should dress in the dark instead. Should I say, "balancing multiple people's needs is too computationally costly" and give up on the whole project?
This is not a reason to adopt utilitarianism relative to alternative moral theories. Why? Because utilitarianism is not required in order to balance some people's interest against others'. Altruism does not require weighing everyone in your preference function equally, but utilitarianism does. Even egoists (typically) have friends that they care about. The motto of utilitarianism is "the greatest good for the greatest number", not "the greatest good for me and the people I care most about". If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry (who would have gotten more utility from those particular resources), then to that extent your values are not utilitarian (as demonstrated by WARP).
When a computation gets too maddening, maybe so. Said husband (jkaufman) and I value our own wellbeing, and we also value the lives of strangers. We give some of our money to buy mosquito nets for strangers, but we don't have a perfect way to calculate how much, and at points it has been maddening to choose. So we pick an amount, somewhat arbitrarily, and go with it.
Even if you could measure utility perfectly and perform rock-solid interpersonal utility calculations, I suspect that you would still not weigh your own well-being (nor that of your husband, friends, etc...) equally with that of random strangers. If I am right about this, then your defence of utilitarianism as your own personal system of value fails on the ground that it is a false claim about a particular person's preferences (namely, yours).
In summary, I find utilitarianism as proposition and utilitarianism as value system very unpersuasive. As for the former, I have requested of sophisticated and knowledgeable utilitarians that they tell me what experiences I should anticipate in the world if utilitarianism is true (and that I should not anticipate if other, contradictory, moral theories were true) and, so far, they have been unable to do so. Propositions of this kind (meaningless or metaphysical propositions) don't ordinarily warrant wasting much time thinking about them. As for the latter, according to my revealed preferences, utilitarianism does not describe my preferences at all accurately, so is not much use for determining how to act. Simply, it is not, in fact, my value system.
So, utilitarianism isn't true, it is a matter of taste
I don't understand how "true" applies to a matter of taste any more than a taste for chocolate is "truer" than any other.
utilitarianism is not required in order to balance some people's interest against others'.
There are others, but this is the one that seems best to me.
If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry
This is the type of decision we found maddening, which is why we currently have firm charity and non-charity budgets. Before that system I did spend money on non-necessities, and I felt terrible about it. So you're correct that I have other preferences besides utilitarianism.
I don't think it's fair or accurate to say "If you ever spent any resources on anything other than what you say you prefer, it's not really your preference." I believe people can prefer multiple things at once. I value the greatest good for the greatest number, and if I could redesign myself as a perfect person, I would always act on that preference. But as a mammal, yes, I also have a drive to care for me and mine more than strangers. When I've tried to suppress that entirely, I was very unhappy.
I think a pragmatic utilitarian takes into account the fact that we are mammals, and that at some point we'll probably break down if we don't satisfy our other preferences a little. I try to balance it at a point where I can sustain what I'm doing for the rest of my life.
I came late to this whole philosophy thing, so it took me a while to find out "utilitarianism" is what people called what I was trying to do. The name isn't really important to me, so it may be that I've been using it wrong or we have different definitions of what counts as real utilitarianism.
So, utilitarianism isn't true, it is a matter of taste (preferences, values, etc...)?
Saying utilitarianism isn't true because some people aren't automatically motivated to follow it is like saying that grass isn't green because some people wish it was purple. If you don't want to follow utilitarian ethics that doesn't mean they aren't true. It just means that you're not nearly as good a person as someone who does. If you genuinely want to be a bad person then nothing can change your mind, but most human beings place at least some value on morality.
You're confusing moral truth with motivational internalism. Motivational internalism states that moral knowledge is intrinsically motivating: simply knowing that something is good and right motivates a rational entity to do it. That's obviously false.
Its opposite is motivational externalism, which states that we are motivated to act morally by our moral emotions (e.g. sympathy, compassion) and willpower. Motivational externalism seems obviously correct to me. That in turn indicates that people will often act immorally if their willpower, compassion, and other moral emotions are depleted, even if they know intellectually that their behavior is less moral than it could be.
If you have ever purchased a birthday present for, say, your husband instead of feeding the hungry (who would have gotten more utility from those particular resources), then to that extent your values are not utilitarian (as demonstrated by WARP).
There is a vast, vast amount of writing at Less Wrong on the fact that people's behavior and their values often fail to coincide. Have you never read anything on the topic of "akrasia?" Revealed preference is moderately informative in regards to people's values, but it is nowhere near 100% reliable. If someone talks about how utilitarianism is correct, but often fails to act in utilitarian ways, it is highly likely they are suffering from akrasia and lack the willpower to act on their values.
Even if you could measure utility perfectly and perform rock-solid interpersonal utility calculations, I suspect that you would still not weigh your own well-being (nor that of your husband, friends, etc...) equally with that of random strangers. If I am right about this, then your defence of utilitarianism as your own personal system of value fails on the ground that it is a false claim about a particular person's preferences (namely, yours).
You don't seem to understand the difference between categorical and incremental preferences. If juliawise spends 50% of her time doing selfish stuff and 50% of her time doing utilitarian stuff that doesn't mean she has no preference for utilitarianism. That would be like saying that I don't have a preference for pizza because I sometimes eat pizza and sometimes eat tacos.
Furthermore, I expect that if juliawise was given a magic drug that completely removed her akrasia she would behave in a much more utilitarian fashion.
As for the former, I have requested of sophisticated and knowledgeable utilitarians that they tell me what experiences I should anticipate in the world if utilitarianism is true (and that I should not anticipate if other, contradictory, moral theories were true) and, so far, they have been unable to do so.
If utilitarianism was true we could expect to see a correlation between willpower and morally positive behavior. This appears to be true, in fact such behaviors are lumped together into the trait "conscientiousness" because they are correlated.
If utilitarianism was true then deontological rule systems would be vulnerable to Dutch-booking, while utilitarianism would not be. This appears to be true.
If utilitarianism was true then it would be unfair for multiple people to have different utility levels, all else being equal. This is practically tautological.
If utilitarianism was true then goodness would consist primarily of doing things that benefit yourself and others. Again, this is practically tautological.
Now, these pieces of evidence don't necessarily point to utilitarianism, other types of consequentialist theories might also explain them. But they are informative.
As for the latter, according to my revealed preferences, utilitarianism does not describe my preferences at all accurately, so is not much use for determining how to act. Simply, it is not, in fact, my value system.
Again, ethical systems are not intrinsically motivating. If you don't want to follow utilitarianism then that doesn't mean it's not true, it just means that you're a person who sometimes treats other people unfairly and badly. Again, if that doesn't bother you then there are no universally compelling arguments. But if you're a reasonably normal human it might bother you a little and make you want to find a consistent system to guide you in your attempts to behave better. Like utilitarianism.
What alternative to utilitarianism are you proposing? Avoiding taking into account multiple people's welfare? Even a perfect egoist still needs to weigh the welfare of different possible future selves. If you zoom in enough, arbitrariness is everywhere, but "arbitrariness is everywhere, arbitrariness, arbitrariness!" is not a policy. To the extent that our "true" preferences about how to compare welfare have structure, we can try to capture that structure in principles; to the extent that they don't have structure, picking arbitrary principles isn't worse than picking arbitrary actions.
Your preferences tell you how to aggregate the preferences of everyone else.
Edit: This post was downvoted to -1 when I came to it, so I thought I'd clarify. It's since been voted back up to 0, but I just finished writing the clarification, so...
Your preferences are all that you care about (by definition). So you only care about the preferences of others to the extent that their preferences are a component of your own preferences. Now if you claim preference utilitarianism is true, you could be making one of two distinct claims: either that your own preference is that some suitable aggregation of everyone's preferences be satisfied, or that everyone's preferences are really the same, so that the aggregation just spells out what each of us already wants.
In both cases, some "suitable aggregation" has to be chosen and which agents are relevant has to be chosen. The latter is actually a sub-problem of the former: set weights of zero for non-relevant agents in the aggregation. So how does the utilitarian aggregate? Well, that depends on what the utilitarian cares about, quite literally. What does the utilitarian's preferences say? Maximize average utility? Total utility? Ultimately what the utilitarian should be maximizing comes back to her own preferences (or the collective preferences of humanity if the utilitarian is making the claim that our preferences are all the same). Going back to the utilitarian's own utility function also (potentially) deals with things like utility monsters, how to deal with the preferences of the dead and the potentially-alive and so forth.
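As a concrete illustration of the point about weights and aggregation rules, here is a minimal Python sketch; the function name, agent names, weights, and utility numbers are all invented for the example, and it is only meant to show that the aggregation rule itself encodes the evaluator's values:

```python
# Minimal sketch (my own illustration, not code from the discussion above):
# the "suitable aggregation" is itself a choice the evaluator has to make --
# which agents count (weights, possibly zero) and whether to total or
# average the weighted utilities.

def aggregate(utilities, weights, rule="total"):
    """Aggregate per-agent utilities under a chosen weighting and rule.

    utilities: dict of agent name -> utility of some outcome for that agent
    weights:   dict of agent name -> weight (a weight of 0 excludes the agent)
    rule:      "total" or "average"
    """
    total = sum(weights.get(agent, 0) * u for agent, u in utilities.items())
    if rule == "average":
        total_weight = sum(weights.get(agent, 0) for agent in utilities)
        return total / total_weight if total_weight else 0.0
    return total

# Hypothetical numbers: two outcomes evaluated for three agents, one of whom
# is a merely potential (not-yet-existing) person.
outcome_A = {"alice": 2, "bob": 1, "future_person": 6}
outcome_B = {"alice": 4, "bob": 4, "future_person": 0}

equal_weights = {"alice": 1, "bob": 1, "future_person": 1}
exclude_potential = {"alice": 1, "bob": 1, "future_person": 0}

# With equal weights A beats B (9 vs 8); excluding the potential person,
# B beats A (3 vs 8). The ranking depends on the evaluator's own choices.
print(aggregate(outcome_A, equal_weights), aggregate(outcome_B, equal_weights))
print(aggregate(outcome_A, exclude_potential), aggregate(outcome_B, exclude_potential))
```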
I've recently started supporting GiveWell, you can find them on givewell.net, because they're doing something that I'm sure you would support: they're trying to get aid organizations to demonstrate their efficacy, to be more transparent about why they support some projects rather than others, and to show how much it costs for them to achieve their goals, whether those goals are saving lives or lifting people out of poverty. And so it's kind of at a meta level, saying I want to improve aid by helping organizations that are trying to do that. I think that's a really highly leveraged way of making an impact on what's going to happen in aid over the next couple of decades.
This made me wonder whether he knows about existential risks. This page (2007) suggests that he does:
I would also include the issue of what Nick Bostrom calls "existential risks" – how should we act in regard to risks, even very small ones, to the future existence of the entire human species? Arguably, all other issues pale into insignificance when we consider the risk of extinction of our species...
In March 2009, Tyler Cowen (blog) interviewed Peter Singer about morality, giving, and how we can most improve the world. They are both thinkers I respect a lot, and I was excited to read their debate. Unfortunately the interview was available only as a video. I wanted a transcript, so I made one:
From there I pull back to saying, "What does this mean about the problem of world poverty, given that there are, according to Unicef, ten million children dying of avoidable poverty-related causes every year?" We could save some of them, and probably it wouldn't cost us much more than the cost of an expensive pair of shoes if we find an effective aid agency that is doing something to combat the causes of world poverty, or perhaps to combat the deaths of children from simple conditions like diarrhea or measles, conditions that are not that hard to prevent or to cure. We could probably save a life for the cost of a pair of shoes. So why don't we? What's the problem here? Why do we think it's ok to live a comfortable, even luxurious, life while children are dying? In the book I explore various objections to that view; I don't find any of them really convincing. I look at some of the psychological barriers to giving, and I acknowledge that they are problems. And I consider also some of the objections to aid and questions raised by economists as to whether aid really works. In the end I come to a proposal by which I want to change the culture of giving.
The aim of the book in a sense is to get us to internalize the view that not to do anything for those living in poverty, when we are living in luxury and abundance, is ethically wrong, that it's not just not a nice thing to do but that a part of living an ethically decent life is at least to do something significant for the poor. The book ends with a chapter in which I propose a realistic standard, which I think most people in the affluent world could meet without great hardship. It involves giving 1% of your income if you're in the bottom 90% of US taxpayers, scaling up through 5% and 10% and even more as you get into the top 10%, the top 5%, the top 1% of US taxpayers. But at no point is the scale I'm proposing what I believe is an excessively burdensome one. I've set up a website, thelifeyoucansave.com, that people can go to in order to publicly pledge that they will meet this scale, because I think if people will do it publicly, that in itself will encourage other people to do it and, hopefully, the idea will spread.
Immigration as an Anti-Poverty Program
I don't think we could have open borders; I don't think we could have unlimited immigration, but we're both sitting here in the United States and it hardly seems to me that we're at the breaking point. Immigrants would benefit much more: their wages would rise by a factor of twenty or more, and there would be perhaps some costs to us, but in a cost-benefit sense it seems far, far more effective than sending them money. Do you agree?
Changing Institutions: Greater Tax Break for True Charity
Millennium Villages Skepticism
Chinese Reforms
Military Intervention
Colonialism
Aid without stable government
Genetically modifying ourselves to be more moral
Problem areas in Utilitarianism
Is Utilitarianism independent?
Peter Singer: Jewish Moralist
What charities does Peter Singer give to?
Zero-Overhead Giving
Moral Intuitions
Improving the world through commerce
What makes Peter Singer happy?
Human and animal pleasures
Pescatarianism
(I also posted this on my blog.)