I think a better way to frame this issue would be the following method.
For example, suppose I respond to your question about the solitary traveler with "You shouldn't do it because of biological concerns." Accept the answer, and then ask: what would need to change in this situation for you to accept the killing of the traveler as moral?
I remember this method giving me deeper insight into the Happiness Box experiment.
Here is how the process works:
I find a similar strategy useful when I am trying to argue a point with a stubborn friend. I ask them, "What would I have to prove in order for you to change your mind?" If they answer "nothing," you know they are probably not truth-seekers.
Namely, finding the point at which your moral decision reverses helps to identify what this particular moral position is really about. There are many factors in every decision, so it might help to vary each of them and find other conditions that compensate for the variation.
For example, you wouldn't enter the happiness box if you suspected that the information about it giving true happiness is flawed, that it's some kind of lie or misunderstanding (on anyone's part), of which the situation of leaving your family on the outside is a special case. And here is a new piece of information: would you like your copy to enter the happiness box if you left behind your original self? Would you like a new child to be born within the happiness box? And so on.
I'm not sure if I'm evading the spirit of the post, but it seems to me that the answer to the opening problem is this:
If you were willing to kill this man to save these ten others, then you should long ago have simply had all ten patients agree to a 1/10 game of Russian Roulette, with the proviso that the nine winners get the organs of the one loser.
While emphasizing that I don't want this post to turn into a discussion of trolley problems, I endorse that solution.
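The arithmetic behind that lottery can be sketched quickly (a hypothetical simulation; the patient count and trial count here are illustrative, not from the original problem):

```python
import random

def survival_probability(n_patients=10, trials=100_000, seed=0):
    """Estimate one patient's survival chance under the proposed
    lottery: each trial, one of n_patients is chosen uniformly at
    random as the organ donor and dies; the other n-1 survive."""
    rng = random.Random(seed)
    # Count the trials in which "our" patient (index 0) is the loser.
    deaths = sum(rng.randrange(n_patients) == 0 for _ in range(trials))
    return 1 - deaths / trials

# Without the lottery all ten patients die; with it, each patient's
# survival chance is roughly 9/10.
print(survival_probability())
```

Each patient trades certain death for a one-in-ten chance of it, which is why the comment frames agreeing to the lottery in advance as the consistent position.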
In the least convenient possible world, only the random traveler has a blood type compatible with all ten patients.
Throwing a die is a way of avoiding bias in choosing a person to kill. If you choose a person to kill personally, you run the risk of doing it in an unfair fashion, and thus being guilty of making an unfair choice. People value fairness. Using dice frees you of this responsibility, unless there is a predictably better option. You are alleviating additional technical moral issues involved in killing a person. This issue is separate from deciding whether to kill a person at all, although the reduction in the moral cost of killing a person achieved by using the fair roulette technology may figure in the original decision.
There are real life examples where reality has turned out to be the "least convenient of possible worlds". I have spent many hours arguing with people who insist that there are no significant gender differences (beyond the obvious), and are convinced that to assert otherwise is morally reprehensible.
They have spent so long arguing that such differences do not exist, and that this is the reason sexism is wrong, that their morality just can't cope with a world in which this turns out not to be true. There are many similar politically charged issues - Pinker discusses quite a few in The Blank Slate - where people aren't willing to listen to arguments about factual issues because they believe those issues have moral consequences.
The problem, of course - and I realise this is the main point of this post - is that if your morality is contingent on empirical issues where you might turn out to be wrong, you have to accept the consequences. If you believe that sexism is wrong because there are no heritable gender differences, you have to be willing to accept that if these differences do turn out to exist then you'll say sexism is ok.
This is probably a test you should apply to all of your moral beliefs: if it just so happens that I'm wrong about the factual issue on which I'm basing my belief, will I really be willing to change my mind?
One way to train this: in my number theory class, there was a type of problem called a PODASIP. This stood for Prove Or Disprove And Salvage If Possible. The instructor would give us a theorem to prove, without telling us if it was true or false. If it was true, we were to prove it. If it was false, then we had to disprove it and then come up with the "most general" theorem similar to it (e.g. prove it for Zp after coming up with a counterexample in Zm).
This trained us to be on the lookout for problems with the theorem, but then seeing the "least convenient possible world" in which it was true.
I voted up your post, Yvain, as you've presented some really good ideas here. Although it may seem like I'm totally missing your point with my responses to your 3 scenarios, I assure you that I am well aware that my responses are of the "dodging the question" type you are advocating against. I simply cannot resist exploring these 3 scenarios on their own.
Pascal's Wager
In all 3 scenarios, I would ask Omega further questions. But these being "least convenient world" scenarios, I suspect it'd be all "Sorry, can't answer that" and then fly away. And I'd call it a big jerk.
For the Pascal's Wager scenario specifically, I'd probably ask Omega "Really? Either God doesn't exist or everything the Catholics say is correct? Even the self-contradicting stuff?" And of course, he'd decline to answer and fly away.
So then I'd be stuck trying to decide whether God doesn't exist or logic is incorrect (i.e. reality can be logically self-inconsistent). I'm tempted to adopt Catholicism (for the same reason I would one-box on Newcomb: I want the rewards), but I'm not sure how my brain could handle a non-logical reality. So I really don't know what would happen ...
Let's try something different.
Yvain's post presented a new method for dealing with the stopsign problem in reasoning about questions of morality. The stopsign problem consists of seizing on an invalid excuse to avoid thinking about the issue at hand, instead of doing something constructive toward resolving the issue.
The method presented by Yvain consists of putting in place a universal countermeasure against stopsign excuses: whenever a stopsign comes up, you move the moral issue under discussion to a different, hypothetical setting where the stopsign no longer applies. The only valid excuse in this setting is an argument that you shouldn't do the thing at all, which also resolves the moral question.
However, the moral questions should be concerned with reality, not with fantasy. Whenever a hypothetical setting is brought in the discussion of morality, it should be understood as a theoretical device for reasoning about the underlying moral judgment applicable to the real world. There is a danger in fallaciously generalizing the moral conclusion from fictional evidence, both because there might be factors in the fictional setting that change your decision and which you ...
One difficulty with the least convenient possible world is where that least convenience is a significant change in the makeup of the human brain. For example, I don't trust myself to make a decision about killing a traveler with sufficient moral abstraction from the day-to-day concerns of being a human. I don't trust what I would become if I did kill a human. Or, if that's insufficient, fill in a lack of trust in my decision-making in general for the moment. (Another example would be the ability to trust Omega in his responses.)
Because once that's a significant issue in the subject, the least convenient possible world you're asking me to imagine doesn't include me -- it includes some variant of me whose reactions I can predict, but not really access. Porting them back to me is also nontrivial.
It is an interesting thought experiment, though.
So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"
Obviously, you wait for one of the sick patients to die, and use that person's organs to save the others, letting the healthy traveler go on his way. ;)
But that isn't the least convenient possible world - the least convenient one is actually the one in which the traveler is compatible with all the sick people, but the sick people are not compatible with each other.
Actually, you don't even need to add that additional complexity to make the world sufficiently inconvenient.
If the rest of the patients are sufficiently sick, their organs may not really be suitable for use as transplants, right?
There's another benefit: you remove a motivation to lie to yourself. If you think that a contingent fact will get you out of a hard choice, you might believe it. But you probably won't if it doesn't get you out of the hard choice anyway.
Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it's wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a perverted game of blind poker with your eternal soul?
I don't think I would be able to bring myself to honestly worship a God who bestowed upon us the ability to reason and then rewarded us for not using it.
The problem with the 'god shaped hole' situation (and questions of happiness in general) is that if something doesn't make you happy NOW, it becomes very difficult to believe that it will make you happy LATER.
For example, say some Soma-drug was invented that, once taken, would make you blissfully happy for the rest of your life. Would you take it? Our immediate reaction is to say 'no', probably because we don't like the idea of 'fake', chemically-induced happiness. In other words, because the idea doesn't make us happy now, we don't really believe it will ...
I like the phrase "precedent utilitarianism". It sounds to utilitarians like you're joining their camp, while actually pointing out that you're taking a long-term view of utility, which they usually refuse to do. The important ingredient is paying attention to incentives, which is really the rational response to most questions about morality. Many choices which seem "fairer", "more just", or whose alternatives provoke a disgust response don't take the long-term view into account. If we go around sacrificing every lonely s...
I would act differently in the least convenient world than I do in the world that I do live in.
Very good point, and crystalizes some of my thinking on some of the discussion on the tyrant/charity thing.
As far as the specific problems you posed...
For your souped-up Pascal's Wager, I admit that one gives me pause. Taking into account the fact that Omega singled out one out of the space of all possible religions, etc etc... Well, the answer isn't obvious to me right now. This flavor would seem not to admit any of the usual basic refutations of the wager. I think under these circumstances, assuming Omega wasn't open to answering any further question...
I am trying to imagine the least convenient possible world (LCPW) for the LCPW method.
Perhaps it is the world in which there is precisely one possible world. All 'possible' worlds turn out to be impossible on closer scrutiny. Omega reveals that talking about a counterfactual possible world is as incoherent as talking about a square triangle. There is exactly one way to have a world with anyone in it whatsoever, and we're in it.
Yes! I can't believe I don't see this repeated in one form or another more often. Fallacies are a bit like prions in that they tend to force a cascade of fallacies to derive from them, and one of my favorite debate tactics is the thought experiment, "Let's assume your entire premise is true. How might this contradict your position?"
Usually the list is longer than my own arguments.
On your Pascal's Wager example, I don't think your "least convenient possible world" is really equivalent to the world I live in on every meaningful feature other than convenience. Selecting Catholicism out of all similarly complex stories would take a whole lot of evidence. So if Omega tells me the Catholic interpretation of Yahweh is the only plausible god and I completely trust Omega, I've been given a lot of evidence in favor of the Catholic interpretation of Yahweh.
Pascal's Wager is meant to be an argument for theism in the complete absence of evidenc...
The least convenient world is one where there's no traveler and the doctor debates whether to harvest organs from another villager. I figure that if it's okay to kill the traveler for organs, then it should be okay to kill a villager. Similarly, if it's against general principle to kill a villager for organs, then it shouldn't be okay to kill the traveler. Perhaps someone can come up with a clever argument why the life of a villager is worth intrinsically more than the life of the traveler, but let's keep things simple for now.
So, let us suppose that N sic...
This might be better placed somewhere else, but I just thought I'd comment on Pascal's Wager here. To me both the convenient and inconvenient resolutions of Pascal's Wager given above are quite unsatisfactory.
To me, the resolution of this wager comes from the concept of sets of measure zero. The set of possible realities in which belief in any given God is infinitely beneficial is an infinite set, but it is nonetheless like Cantor Dust in the space of possible explanations of reality. The existence of sets of measure zero explains why it is reasonable to a...
Although I understand and appreciate your approach, the particular examples are not especially good ones:
1: Pascal's Wager:
For an atheist, the least convenient possible world is one where testable, reproducible scientific evidence strongly suggests the existence of some "supernatural" (clearly no longer supernatural) being that we might ascribe the moniker of God to. In such a world any "principled atheist" would believe what the verifiable scientific evidence supports as probably true. "Atheists" who did not do th...
"I believe that God’s existence or non-existence can not be rigorously proven."
Cannot be proven by us, with our limits on detection, or cannot be proven in principle?
Because if it's the latter, you're saying that the concept of 'God' has no meaning.
0: Should we kill the miraculously-compatible traveler and distribute his organs?
My answer is based on a principle that I'm surprised no one else seems to use (then again, I rarely listen to answers to the Fat Man/Train problem): ask the f**king traveler!
Explain to the traveler that he has the opportunity to save ten lives at the cost of his own. First they'll take a kidney and a lung, then he'll get some time to say goodbye to his loved ones while he gets to see the two people with the donated organs recover... and then when he's ready they'll take the re...
My answers:
1. No, because their belief doesn't make any sense. It even contains logical contradictions, which make it "super impossible", meaning there's no possible world where it could be true (the omnipotence paradox shows that omnipotence is logically inconsistent; a god which is nearly omnipotent, nearly omniscient and nearly omnibenevolent wouldn't allow suffering, which undoubtedly exists; "God wants to allow free will" isn't a valid defence, since there's a lot of suffering that isn't caused by other ...
either God does not exist or the Catholics are right about absolutely everything.
Then I would definitely and swiftly become an atheist, and I maintain that this is by far the most rational choice for everybody else as well. My prior belief in God not existing is relatively high (let's say 50/50), but my prior belief in all of Catholicism being the absolute truth is pretty much nil. And if you're using anything vaguely resembling consistent priors, it has to be near-nil for you too, because the beliefs of Catholicism are just so incredibly specific. They na...
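The commenter's point about near-nil priors for very specific belief systems can be illustrated with a toy calculation (the claim counts and probabilities below are invented for illustration only): a hypothesis that is a conjunction of many independent claims has a prior that shrinks multiplicatively with every claim it bundles in.

```python
def conjunction_prior(p_per_claim, n_claims):
    """Prior probability of a hypothesis asserting n_claims
    independent claims, each true with probability p_per_claim."""
    return p_per_claim ** n_claims

# A single vague claim keeps a substantial prior, but a creed that
# bundles, say, 100 specific claims at even 50% each is crushed:
print(conjunction_prior(0.5, 1))    # 0.5
print(conjunction_prior(0.5, 100))  # ~7.9e-31
```

This is why "either God does not exist or the Catholics are right about absolutely everything" functions, for this commenter, as strong evidence for the first disjunct.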
I find this method to be intellectually dangerous.
We do not live in the LCPW, and constantly considering ethical problems as if we do is a mind-killer. It trains the brain to stop looking for creative solutions to intractable real world problems and instead focus on rigid abstract solutions to conceptual problems.
I agree that there is a small modicum of value to considering the LCPW. Just like there's a small modicum of value to eating a pound of butter for dinner. It's just that there are a lot better ways to spend one's time. The proper response to "We...
Yvain,
Do you have a blog or home page with more material you've written? Failing that, is there another site (apart from OB) with contributions from you that might be interesting to LW readers?
with regards to the third question: what if I believe that any resources given simply allow the population to expand and hence cause more suffering than letting people die?
If you don't really believe that, and it's just your excuse for not giving away lots of money, you should say loud and clear "I don't believe I'm morally obligated to reduce suffering if it inconveniences me too much." And then you've learned something useful about yourself.
But if you do really believe that, and you otherwise accept John's argument, you should say explicitly, "I accept I'm morally obligated to reduce suffering as much as possible, even at the cost of great inconvenience to myself. However, I am worried because of the contingent fact that giving people more resources will lead to more population, causing more suffering."
And if you really do believe that and think it through, you'll end up spending almost all your income on condoms for third world countries.
Is this not just an alternative way of describing a red herring argument? If not, I would be interested to see what nuance I'm missing.
I find this classically in the abortion discussion. Pro-abortionists will bring up valid-at-face-value concerns regarding rape and incest. But if you grant that victims of rape/incest can retain full access to abortions, the pro-abortionist will not suddenly agree with criminalisation of abortion in the non-rape/incest group. Why? Because the rape/incest point was a red herring argument.
I apologize for banging on about the railroad question, but I think the way you phrased it does an excellent job of illustrating (and has helped me isolate) why I've always been vaguely uncomfortable with Utilitarianism. There is a sharp moral contrast, which the question doesn't innately recognize, between the patients entering into a voluntary lottery and the forced sacrifice of the wandering traveller.
Unbridled Utilitarianism, taken to the extreme, would mandate some form of forced Socialism. I think it was you who commented on OvercomingBias, that one of t...
Unbridled Utilitarianism, taken to the extreme, would mandate some form of forced Socialism.
So maybe some form of forced socialism is right. But you don't seem interested in considering that possibility. Why not?
While Utilitarianism is excellent for considering consequences, I think it's a mistake to try and raise it as a moral principle.
Why not?
It seems like you have some pre-established moral principles which you are using in your arguments against utilitarianism. Right?
I don't see how you can compromise on these principles. Either each person has full ownership of themselves (so long as they don't infringe on others), or they have zero ownership.
To me it seems that most people making difficult moral decisions make complicated compromises between competing principles.
I don't see any problem with acknowledging that in a world very different from this one my beliefs and actions would also be different. For example, I think the fact that there are and have been so many different religions with significantly different beliefs as to what God wants is evidence that none of them are correct. It follows that if there was just one religion with any significant number of adherents then that would be evidence (not proof) that that religion was in fact correct.
Maybe if Omega tells me it's Catholicism or nothing I'll become a Cath...
Yvain, you frequently seem to have extra line breaks in your post, which I've been editing to fix. I'm leaving this post as is because I'm wondering if you can't even see them, in which case are you using an unusual browser or OS?
So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"
That's a pretty damn convenient world. It's basically like saying "In a world where serious issue X isn't applicable, what would you do?" which might as well be the better question instead of beating around the bush.
Sorry if this was posted before.
The acceleratingfuture domain's registration has expired (referenced in the starting quote) (http://acceleratingfuture.com/?reqp=1&reqr=)
I have a question related to the initial question about the lone traveler. When is it okay to initiate force against any individual who has not initiated force against anyone?
Bonus: Here's a (very anal) cop out you could use against the least convenient possible world suggestion: Such a world—as seen from the perspective of someone seeking a rational answer—has no rational answer for the question posed.
Or a slightly different flavor for those who are more concerned with being rational than with rationality: In such a world, I—who value rational answers above all other answers—will inevitably answer the question irrationally. :þ
In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.
That sounds like it would decrease my probability that God exists by several dozen orders of magnitude.
Related to: Is That Your True Rejection?
"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."
-- Black Belt Bayesian, via Rationality Quotes 13
Yesterday John Maxwell's post wondered how much the average person would do to save ten people from a ruthless tyrant. I remember asking some of my friends a vaguely related question as part of an investigation of the Trolley Problems:
I don't want to discuss the answer to this problem today. I want to discuss the answer one of my friends gave, because I think it illuminates a very interesting kind of defense mechanism that rationalists need to be watching for. My friend said:
On the one hand, I have to give my friend credit: his answer is biologically accurate, and beyond a doubt the technically correct answer to the question I asked. On the other hand, I don't have to give him very much credit: he completely missed the point and lost a valuable opportunity to examine the nature of morality.
So I asked him, "In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?"
He mumbled something about counterfactuals and refused to answer. But I learned something very important from him, and that is to always ask this question of myself. Sometimes the least convenient possible world is the only place where I can figure out my true motivations, or which step to take next. I offer three examples:
1: Pascal's Wager. Upon being presented with Pascal's Wager, one of the first things most atheists think of is this:
Perhaps God values intellectual integrity so highly that He is prepared to reward honest atheists, but will punish anyone who practices a religion he does not truly believe simply for personal gain. Or perhaps, as the Discordians claim, "Hell is reserved for people who believe in it, and the hottest levels of Hell are reserved for people who believe in it on the principle that they'll go there if they don't."
This is a good argument against Pascal's Wager, but it isn't the least convenient possible world. The least convenient possible world is the one where Omega, the completely trustworthy superintelligence who is always right, informs you that God definitely doesn't value intellectual integrity that much. In fact (Omega tells you) either God does not exist or the Catholics are right about absolutely everything.
Would you become a Catholic in this world? Or are you willing to admit that maybe your rejection of Pascal's Wager has less to do with a hypothesized pro-atheism God, and more to do with a belief that it's wrong to abandon your intellectual integrity on the off chance that a crazy deity is playing a perverted game of blind poker with your eternal soul?
2: The God-Shaped Hole. Christians claim there is one in every atheist, keeping him from spiritual fulfillment.
Some commenters on Raising the Sanity Waterline don't deny the existence of such a hole, if it is interpreted as a desire for purpose or connection to something greater than one's self. But, some commenters say, science and rationality can fill this hole even better than God can.
What luck! Evolution has by a wild coincidence created us with a big rationality-shaped hole in our brains! Good thing we happen to be rationalists, so we can fill this hole in the best possible way! I don't know - despite my sarcasm this may even be true. But in the least convenient possible world, Omega comes along and tells you that sorry, the hole is exactly God-shaped, and anyone without a religion will lead a less-than-optimally-happy life. Do you head down to the nearest church for a baptism? Or do you admit that even if believing something makes you happier, you still don't want to believe it unless it's true?
3: Extreme Altruism. John Maxwell mentions the utilitarian argument for donating almost everything to charity.
Some commenters object that many forms of charity, especially the classic "give to starving African orphans," are counterproductive, either because they enable dictators or thwart the free market. This is quite true.
But in the least convenient possible world, here comes Omega again and tells you that Charity X has been proven to do exactly what it claims: help the poor without any counterproductive effects. So is your real objection the corruption, or do you just not believe that you're morally obligated to give everything you own to starving Africans?
You may argue that this citing of convenient facts is at worst a venial sin. If you still get to the correct answer, and you do it by a correct method, what does it matter if this method isn't really the one that's convinced you personally?
One easy answer is that it saves you from embarrassment later. If some scientist does a study and finds that people really do have a god-shaped hole that can't be filled by anything else, no one can come up to you and say "Hey, didn't you say the reason you didn't convert to religion was because rationality filled the god-shaped hole better than God did? Well, I have some bad news for you..."
Another easy answer is that your real answer teaches you something about yourself. My friend may have successfully avoided making a distasteful moral judgment, but he didn't learn anything about morality. My refusal to take the easy way out on the transplant question helped me develop the form of precedent-utilitarianism I use today.
But more than either of these, it matters because it seriously influences where you go next.
Say "I accept the argument that I need to donate almost all my money to poor African countries, but my only objection is that corrupt warlords might get it instead", and the obvious next step is to see if there's a poor African country without corrupt warlords (see: Ghana, Botswana, etc.) and donate almost all your money to them. Another acceptable answer would be to donate to another warlord-free charitable cause like the Singularity Institute.
If you just say "Nope, corrupt dictators might get it," you may go off and spend the money on a new TV. Which is fine, if a new TV is what you really want. But if you're the sort of person who would have been convinced by John Maxwell's argument, but you dismissed it by saying "Nope, corrupt dictators," then you've lost an opportunity to change your mind.
So I recommend: limit yourself to responses of the form "I completely reject the entire basis of your argument" or "I accept the basis of your argument, but it doesn't apply to the real world because of contingent fact X." If you just say "Yeah, well, contingent fact X!" and walk away, you've left yourself too much wiggle room.
In other words: always have a plan for what you would do in the least convenient possible world.