One of my hobbies is collecting philosophical justifications for torture. Once you start looking out for them, there are surprisingly many. Here are six. None are endorsed.

#1: The peak-end rule says that your memory of how bad an experience was depends mostly on its peak and its end. Imagine you walk into a warehouse and see someone tied up. You learn that they have just been subjected to extreme torture. You can either turn off the machine, or you can turn down the dial to slightly less extreme torture, wait a bit, and then turn it off. The latter option will cause them to rate the overall experience as less bad than the former. It is plausibly the better option, depending on how you weight remembered utilons against experienced utilons.
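A minimal toy model of the trade-off, assuming (as in Kahneman's formulation) that remembered badness is simply the mean of peak pain and final pain; the pain values are made-up illustrative numbers, not anything from the literature:

```python
def remembered_badness(pain_trace):
    """Peak-end heuristic: remembered badness is the mean of the
    worst moment and the last moment of the experience."""
    return (max(pain_trace) + pain_trace[-1]) / 2

# Option A: extreme torture (pain 10), then switch the machine off.
option_a = [10, 10, 10]

# Option B: extreme torture, then a stretch of slightly milder
# torture (pain 7) before switching it off.
option_b = [10, 10, 10, 7, 7]

print(remembered_badness(option_a))  # 10.0
print(remembered_badness(option_b))  # 8.5
print(sum(option_a), sum(option_b))  # 30 44
```

Option B scores better on remembered utilons (8.5 vs 10.0) despite containing strictly more experienced pain (44 vs 30), which is exactly the tension the thought experiment turns on.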

#2: Gwern suggests we torture kids so we get more innovation. 

Another consideration is Nick Bostrom’s ‘status quo bias’, which suggests that the current status quo may be incorrect: if something in the air caused a 1% increase in child abuse and this gave us, say, 10 extra Nobel Prizes’ worth of work a year (TODO: is this plausible based on the motivation research? Crunch the order statistics), and we would permit this, then we ought to be willing to cause such a 1% increase as well as permit it.

#3: We should torture people as an acausal sacrifice to alien Gods in another universe so that they torture fewer people in their own universe. This makes sense if you combine longtermism with acausal trade. Caspar Oesterheld wrote a paper on it:

You face a choice between two actions:

• You can continue to reduce suffering and increase your own utility and that of other suffering reducers by 1.

• You can increase the utility of superrational agents in other parts of the multiverse who (also) care about things other than suffering reduction by 100, e.g. by generating a society of agents who live happily, produce interesting art, conduct science, explore technologies, trade, behave benevolently towards each other, etc.

By construction of the thought experiment you care about suffering reduction only, so you would usually take the first action. But consider that many agents throughout the multiverse will face very similar decision problems. For example, there might be an agent who primarily cares about agents experiencing art and the interestingness of things and who is facing similarly diminishing returns – in her world, most things that could be of interest already exist. Other value systems, on the other hand, have been ignored in the process of making her world more interesting. Her world contains many sentient beings with very low levels of well-being, such as humans experiencing various crises (wars, loneliness, life-threatening dangers) – a common theme in art –, wild animals, or blood sports. She knows that agents in other parts of the multiverse dislike this suffering and that she could alleviate it at low opportunity costs to herself. Her decision problem is thus structurally similar to our own. If her thought process is similar to our own, superrationality applies. If we are nice and follow the heuristic “fulfill the goals of other agents in the multiverse whenever the returns are much higher than the opportunity costs for your own values”, then this makes it more likely that she will be nice as well, the benefits of which are much greater than those forgone by our own friendliness.

#4: The non-identity argument says that it's better for you that WWI happened, because if it hadn't then you wouldn't have existed. By the same logic, if you were tortured at the age of five, arguably that changes your identity enough that the version of you who wasn't tortured wouldn't be "you". That means the torture was good for you.

#5: Robin Hanson says ems will be selected for being okay with being run at subsistence levels in parallel. So it's okay to torture them, since they are okay with it.

#6: It's at worst morally neutral to torture baby Hitler. Most people are roughly equivalent to baby Hitler with varying amounts of moral luck.

