In my last post I proposed this thought experiment:

Let’s assume that Omega has an evil twin: Psi.

Psi really likes to play with human minds, and today it will force some people to make difficult choices.

Alice is a very nice person: she volunteers, donates money to important causes, is vegetarian, and is always kind to the people she interacts with.

Bob, on the other hand, is a murderer and torturer: he has kidnapped many people and tortured them for days on end, in horrible ways worse than your deepest fears.

Psi offers two options:

A) Alice will be tortured for 1 day and then killed, while Bob will be killed instantly and painlessly.

B) Bob will be tortured for 3 days and then killed, while Alice will be killed instantly and painlessly.

If you refuse to answer, everyone will be tortured for a year and then killed.

No matter what you choose, you too will be killed, and no one will ever know what happened.

1. What would you answer?

2. What would you answer if the 3 days in option B) were replaced with 3 years?

3. And if they were replaced with 33 years?

Even though the most compassionate answer is clearly A, most people I know (myself included) prefer B, at least in the first case.

This is not surprising: anger can motivate humans to counter perceived injustices committed against ourselves or other members of our community, and so it helps preserve collaborative behavior. In other words, many people adopt tit-for-tat strategies.
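As a concrete illustration of the strategy, here is a minimal sketch of tit-for-tat in an iterated prisoner's dilemma. Everything in it (the payoff values, the always_defect opponent, the function names) is an illustrative assumption of mine, not part of the scenario above.

```python
# Minimal iterated prisoner's dilemma sketch (illustrative assumptions only).
# "C" = cooperate, "D" = defect; payoffs are the standard textbook values.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A hypothetical never-cooperating opponent."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other and return their total payoffs."""
    history_a, history_b = [], []  # each list holds the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```

Tit-for-tat sustains cooperation with cooperators and retaliates against defectors after a single exploitation, which mirrors the retaliatory anger described above.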

Moreover, I think we aren't really able to empathize with psychopaths: even a person with hypothetically perfect empathy, able to feel all of Bob's emotions, would ultimately condemn him, because she would feel not only his pain and fear but also his sadism and lack of empathy, which would create conflicting emotions.

At the same time, this person would perceive injustices against others as her own and would feel empathic anger toward Bob. Adopting this kind of empathy-based ethics would probably result in a sort of tit-for-tat strategy based on intentions and attitudes rather than past actions.

Another reason to choose B, which could work even under a Golden Rule-based morality, is signaling: by choosing B here, in this moment, you signal that you care about merit, or rather that you have certain emotions in response to certain behavior and that these emotions will guide your actions, which can motivate others to behave in a certain way. By choosing B, people deliver a double message:

1) If you hurt other people, you will be hurt by me.

2) If you protect others, you will be protected by me.

The first part of the message is not very useful, since psychopaths fear punishment less due to their increased boldness, but the second could be effective: it signals to people of good will that we want to protect them, specifically because of their behavior, and this could further motivate them to keep doing good deeds.

On the other hand, I think that many people dislike a lack of recognition of merit, and a society that doesn't motivate people to behave well will be more unstable and will cause more pain in the long run.

However, there is something that should be taken into account before choosing B: a society based on raw reciprocity would inflict pain on people who have been brainwashed into committing crimes, or who were traumatized until they developed antisocial behavior, or, more realistically, who have suffered certain kinds of head injuries; and however small the probability, this could happen to any of us.

So I would like to ask: if you could program a society and live in it, would you set a maximum amount of torture in option B, beyond which people should choose A? If so, where, and why?

8 comments

I choose A in all 3 cases. I thought no one would know what happened?

This invalidates your entire signalling argument.

Refine your post and figure out the question that you want to ask.

RST:

By choosing A here, in this moment, you signal that you care about merit, or rather that you have certain emotions in response to certain behavior and that these emotions guide your actions, which can motivate others to behave in a certain way.

RST:

Sorry, my fault.

The point is that the probability of really finding yourself in such a situation is very remote.

On the other hand, I think that many people (myself included) dislike a lack of recognition of merit, and a society that doesn't motivate people to behave well will be more unstable and will cause more pain in the long run.

RST:

Anyway, thanks.

I edited the post, as you advised, to make it more comprehensible.

  1. I die after I make the choice.
  2. No one finds out about my choice.

The first drastically limits how much I care about the consequences.

The second removes any signalling benefit, and only my internal moral compass is relevant.

The question was set up so that the concerns that would make me choose B cease to be concerns.

RST:

It depends on whether people can trust our promises.

You answered A, so I can infer that you care equally about all people and that you don't see rewarding merit as an intrinsic good.

In that case, I will be less motivated to collaborate with you.

I answered B (at least in the first case), so people can infer that I care more about those who do good, and that I see rewarding merit as an intrinsic good.

I didn't consider any signalling benefit of my answer. I wasn't thinking: "what answer should I give so that people reading this think I'm X sort of person"; I just answered honestly.

I don't care equally about people, and I'll normally choose B. However:

  1. Alice would die anyway. Since they're both going to die, might as well reduce suffering.
  2. No one finds out about this, so my answer has no long-term relevance.
  3. I die as well, and so receive no benefit from this.

RST:

Good points, I should have been more explicit.

It is true that no one will ever discover what happened, but “good-willed” people can make a social contract to ensure mutual protection.

Maybe it is just the opposite of Roko’s basilisk: rather than threatening people, the social contract protects them and enhances solidarity.

Of course, the contract can work as a motivator only if people trust each other: by spontaneously choosing B, people signal that their “moral compass” recognizes merit as a value in itself, and that they will punish or reward others because they want to do so.