
Ethics has Evidence Too

by Jack · 6th Feb 2010


A tenet of traditional rationality is that you can't learn much about the world from armchair theorizing. Theory must be epiphenomenal to observation-- our theories are functions that tell us what experiences we should anticipate, but we generate the theories from *past* experiences. And of course we update our theories on the basis of new experiences. Our theories respond to our evidence, usually not the other way around. We do it this way because it works better than trying to make predictions on the basis of concepts or abstract reasoning. Philosophy from Plato through Descartes to Kant is replete with failed examples of theorizing about the natural world on the basis of something other than empirical observation: Socrates thinks he has deduced that souls are immortal; Descartes thinks he has deduced that he is an immaterial mind, that he is immortal, that God exists, and that he can have secure knowledge of the external world; Kant thinks he has proven by pure reason the necessity of Newton's laws of motion.

These mistakes aren't just found in philosophy curricula. There is a long list of people who thought they could deduce Euclid's theorems as analytic or a priori knowledge. Epicycles were a response to new evidence, but they weren't a response that truly privileged the evidence: geocentric astronomers changed their theory *just enough* to yield the right predictions instead of letting a new theory flow from the evidence. The same goes for pre-Einsteinian theories of light. The same goes for quantum mechanics. A kludge is a sign that someone is privileging the hypothesis. It's the same way many of us think the Italian police changed their hypothesis explaining the murder of Meredith Kercher once it became clear that Lumumba had an alibi and that Rudy Guede's DNA and handprints were found all over the crime scene. They just replaced Lumumba with Guede and left the rest of their theory unchanged, even though there was no longer any reason to include Knox and Sollecito in the explanation of the murder. These theories may make it over the bar of traditional rationality, but they sail right under what Bayes' theorem requires.
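The point about kludges and Bayes' theorem can be made quantitative. Here is a toy sketch (the numbers are hypothetical, not from any real case): when two theories fit the data equally well, the posterior odds between them reduce to the prior odds, and a kludged theory, with all its extra moving parts, starts from a much smaller prior.

```python
# Toy illustration: patching a theory to fit each new observation costs
# prior probability, because every added free parameter makes the
# hypothesis more complex and hence less probable a priori.

def posterior_odds(prior_a, prior_b, likelihood_a, likelihood_b):
    """Posterior odds of theory A over theory B after seeing the data:
    (prior_a / prior_b) * (likelihood_a / likelihood_b)."""
    return (prior_a / prior_b) * (likelihood_a / likelihood_b)

# Hypothetical numbers: a simple theory and a kludged one both "explain"
# the data perfectly (likelihood 1.0), but the kludge's added epicycles
# leave it with a far smaller prior.
simple_prior, kludge_prior = 0.10, 0.001
odds = posterior_odds(simple_prior, kludge_prior, 1.0, 1.0)
print(odds)  # ~100: with equal fit, the simpler theory wins on its prior
```

Traditional rationality only checks the likelihood column ("does the theory predict the data?"); Bayes' theorem also charges the theory for the complexity of its patches.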

Most people here get this already and many probably understand it better than I do. But I think it needs to be brought up in the context of our ongoing discussion of normative ethics.

Unless we have reason to think about ethics differently, our normative theories should respond to evidence in the same way we expect our theories in other domains to respond to evidence. What are the experiences we are trying to explain with our ethical theories? Why bother with ethics at all? What is the mystery we are trying to solve? The only answer I can think of is our ethical intuitions. When faced with certain situations, in real life or in fiction, we get strong impulses to react in certain ways, to praise some parties and condemn others. We feel guilt and sometimes make amends. There are some actions we find viscerally abhorrent.

These reactions are for ethics what measurements of time and distance are for physics -- the evidence.

The reason ethicists use hypotheticals like the runaway trolley and the unwilling organ donor is that different normative theories predict different intuitions in response to such scenarios. Short of actually setting these scenarios up for real, this is as close as ethics gets to a controlled experiment. Now, there are problems with this method. Our intuitions in fictional cases might differ from our real-life intuitions. The scenario could be poorly described, or it might not be as controlled an experiment as we think. Extraneous features could cloud the issue, so that our intuitions about a particular case might not actually falsify the ethical principle under test. Just as there are optical illusions, there might be ethical illusions: we can occasionally be wrong about an ethical judgment in the same way that we can sometimes be wrong about the size or velocity of a physical object.

The big point is that the way we should reason about ethics is not from first principles, a priori truths, definitions, or psychological concepts. Kant's Categorical Imperative is a paradigm example of screwing this up, but he is hardly the only one. We should be looking at our ethical intuitions and trying to come up with theories that predict future ethical intuitions. And if your theory is outputting results that are systematically or radically different from actual ethical intuitions, then you need a damn good explanation for the discrepancy or you should be ready to change your theory (and not just by adding a kludge).