Bayesians vs. Barbarians

Previously in series: Collective Apathy and the Internet
Followup to: Helpless Individuals


Let's say we have two groups of soldiers.  In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy.  In group 2, everyone at all levels knows all about tactics and strategy.

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.

Now there's a certain viewpoint on "rationality" or "rationalism" which would say something like this:

"Obviously, the rationalists will lose.  The Barbarians believe in an afterlife where they'll be rewarded for courage; so they'll throw themselves into battle without hesitation or remorse.  Thanks to their affective death spirals around their Cause and Great Leader Bob, their warriors will obey orders, and their citizens at home will produce enthusiastically and at full capacity for the war; anyone caught skimming or holding back will be burned at the stake in accordance with Barbarian tradition.  They'll believe in each other's goodness and hate the enemy more strongly than any sane person would, binding themselves into a tight group.  Meanwhile, the rationalists will realize that there's no conceivable reward to be had from dying in battle; they'll wish that others would fight, but not want to fight themselves.  Even if they can find soldiers, their civilians won't be as cooperative:  So long as any one sausage almost certainly doesn't lead to the collapse of the war effort, they'll want to keep that sausage for themselves, and so not contribute as much as they could.  No matter how refined, elegant, civilized, productive, and nonviolent their culture was to start with, they won't be able to resist the Barbarian invasion; sane discussion is no match for a frothing lunatic armed with a gun.  In the end, the Barbarians will win because they want to fight, they want to hurt the rationalists, they want to conquer and their whole society is united around conquest; they care about that more than any sane person would."

War is not fun.  As many many people have found since the dawn of recorded history, as many many people have found out before the dawn of recorded history, as some community somewhere is finding out right now in some sad little country whose internal agonies don't even make the front pages any more.

War is not fun.  Losing a war is even less fun.  And it has been said since ancient times:  "If thou would have peace, prepare for war."  Your opponents don't have to believe that you'll win, that you'll conquer; but they have to believe you'll put up enough of a fight to make it not worth their while.

You perceive, then, that if it were genuinely the lot of "rationalists" to always lose in war, that I could not in good conscience advocate the widespread public adoption of "rationality".

This is probably the dirtiest topic I've discussed or plan to discuss on LW.  War is not clean.  Current high-tech militaries—by this I mean the US military—are unique in the overwhelmingly superior force they can bring to bear on opponents, which allows for a historically extraordinary degree of concern about enemy casualties and civilian casualties.

Winning in war has not always meant tossing aside all morality.  Wars have been won without using torture.  The unfunness of war does not imply, say, that questioning the President is unpatriotic.  We're used to "war" being exploited as an excuse for bad behavior, because in recent US history that pretty much is exactly what it's been used for...

But reversed stupidity is not intelligence.  And reversed evil is not intelligence either.  It remains true that real wars cannot be won by refined politeness.  If "rationalists" can't prepare themselves for that mental shock, the Barbarians really will win; and the "rationalists"... I don't want to say, "deserve to lose".  But they will have failed that test of their society's existence.

Let me start by disposing of the idea that, in principle, ideal rational agents cannot fight a war, because each of them prefers being a civilian to being a soldier.

As has already been discussed at some length, I one-box on Newcomb's Problem.

Consistently, I do not believe that, if an election is settled by 100,000 votes to 99,998, all of the winning voters were irrational in expending effort to go to the polling place because "my staying home would not have affected the outcome".  (Nor do I believe that if the election came out 100,000 to 99,999, then 100,000 people were all, individually, solely responsible for the outcome.)
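This can be made concrete with a toy calculation (a hypothetical sketch: the vote totals come from the example above, but the decision-rule labels are my own, not standard terminology):

```python
# Toy model of the voting argument.  The point: like-minded agents run
# the same decision procedure, so their choices are correlated, and
# "my staying home would not have affected the outcome" computes the
# wrong counterfactual.

def election_result(supporters_voting, opponents_voting):
    """Return True if the supporters' side wins."""
    return supporters_voting > opponents_voting

N_SUPPORTERS = 100_000
N_OPPONENTS = 99_998

# "My single vote almost certainly changes nothing, so stay home."
# If every like-minded supporter reasons this way, they all stay home.
causal_outcome = election_result(0, N_OPPONENTS)

# "Decide as if choosing for everyone relevantly like me."
# Agents running the same procedure reach the same decision: all vote.
correlated_outcome = election_result(N_SUPPORTERS, N_OPPONENTS)

print(causal_outcome, correlated_outcome)  # False True
```

The asymmetry is the whole point: no single voter is decisive, but the decision procedure shared by 100,000 voters is.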

Consistently, I also hold that two rational AIs (that use my kind of decision theory), even if they had completely different utility functions and were designed by different creators, will cooperate on the true Prisoner's Dilemma if they have common knowledge of each other's source code.  (Or even just common knowledge of each other's rationality in the appropriate sense.)
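A crude toy version of this idea (later formalized in the "program equilibrium" literature) can be sketched by representing each player as source code that receives its opponent's source.  The program below cooperates exactly when the opponent's source is byte-for-byte identical to its own; real proposals use proof search so that distinct but mutually trusting programs can also cooperate.  All names here are my own invention:

```python
# Each player is a Python source string for a function mapping the
# opponent's source code to a move: "C" (cooperate) or "D" (defect).

# Cooperates iff the opponent is running exactly this same program.
CLIQUE_SOURCE = 'lambda opp_src: "C" if opp_src == CLIQUE_SOURCE else "D"'

# Always defects.
DEFECT_SOURCE = 'lambda opp_src: "D"'

def play(src_a, src_b):
    """Compile both programs and run them on each other's source."""
    player_a, player_b = eval(src_a), eval(src_b)
    return player_a(src_b), player_b(src_a)

# Two copies cooperate; against a defector, no exploitation occurs.
print(play(CLIQUE_SOURCE, CLIQUE_SOURCE))  # ('C', 'C')
print(play(CLIQUE_SOURCE, DEFECT_SOURCE))  # ('D', 'D')
```

The clique-bot never does worse than mutual defection, yet unlocks mutual cooperation against agents verifiably running the same decision procedure, which is the structure the argument above relies on.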

Consistently, I believe that rational agents are capable of coordinating on group projects whenever the (expected probabilistic) outcome is better than it would be without such coordination.  A society of agents that use my kind of decision theory, and have common knowledge of this fact, will end up at Pareto optima instead of Nash equilibria.  If all rational agents agree that they are better off fighting than surrendering, they will fight the Barbarians rather than surrender.

Imagine a community of self-modifying AIs who collectively prefer fighting to surrender, but individually prefer being a civilian to fighting.  One solution is to run a lottery, unpredictable to any agent, to select warriors.  Before the lottery is run, all the AIs change their code, in advance, so that if selected they will fight as a warrior in the most communally efficient possible way—even if it means calmly marching into their own death.

(A reflectively consistent decision theory works the same way, only without the self-modification.)
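The lottery arrangement can be sketched numerically.  In this hypothetical toy model (payoffs and population sizes are invented purely for illustration), every agent's expected utility, computed before the lottery runs, is higher under universal precommitment than in the no-precommitment world where each selectee shirks and no army forms:

```python
# Invented payoffs: each agent prefers civilian-in-a-defended-society
# to warrior, but prefers warrior to the universal surrender outcome.
U_CIVILIAN = 10   # civilian, society successfully defended
U_WARRIOR = 2     # warrior, possibly dying, but society defended
U_SURRENDER = 0   # everyone's payoff if no army forms

N_AGENTS = 1000
N_WARRIORS_NEEDED = 100

def eu_with_precommitment():
    """All agents bind themselves, pre-lottery, to fight if drawn."""
    p_drawn = N_WARRIORS_NEEDED / N_AGENTS
    return p_drawn * U_WARRIOR + (1 - p_drawn) * U_CIVILIAN

def eu_without_precommitment():
    """Each selectee individually shirks, so no army forms and
    everyone gets the surrender payoff."""
    return U_SURRENDER

# Before the lottery runs, precommitting is better for every agent.
print(eu_with_precommitment() > eu_without_precommitment())  # True
```

Each agent still individually prefers civilian to warrior after being drawn; the self-modification step exists precisely to make the pre-lottery preference binding.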

You reply:  "But in the real, human world, agents are not perfectly rational, nor do they have common knowledge of each other's source code.  Cooperation in the Prisoner's Dilemma requires certain conditions according to your decision theory (which these margins are too small to contain) and these conditions are not met in real life."

I reply:  The pure, true Prisoner's Dilemma is incredibly rare in real life.  In real life you usually have knock-on effects—what you do affects your reputation.  In real life most people care to some degree about what happens to other people.  And in real life you have an opportunity to set up incentive mechanisms.

And in real life, I do think that a community of human rationalists could manage to produce soldiers willing to die to defend the community.  So long as children aren't told in school that ideal rationalists are supposed to defect against each other in the Prisoner's Dilemma.  Let it be widely believed—and I do believe it, for exactly the same reason I one-box on Newcomb's Problem—that if people decided as individuals not to be soldiers, or if soldiers decided to run away, then that is the same as deciding for the Barbarians to win.  By that same theory whereby, if an election is won by 100,000 votes to 99,998 votes, it does not make sense for every voter to say "my vote made no difference".  Let it be said (for it is true) that utility functions don't need to be solipsistic, and that a rational agent can fight to the death if they care enough about what they're protecting.  Let them not be told that rationalists should expect to lose reasonably.

If this is the culture and the mores of the rationalist society, then, I think, ordinary human beings in that society would volunteer to be soldiers.  That also seems to be built into human beings, after all.  You only need to ensure that the cultural training does not get in the way.

And if I'm wrong, and that doesn't get you enough volunteers?

Then so long as people still prefer, on the whole, fighting to surrender, they have an opportunity to set up incentive mechanisms, and avert the True Prisoner's Dilemma.

You can have lotteries for who gets elected as a warrior.  Sort of like the example above with AIs changing their own code.  Except that if "be reflectively consistent; do that which you would precommit to do" is not sufficient motivation for humans to obey the lottery, then...

...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away.  Even considering that we ourselves might be selected in the lottery.  Because in advance of the lottery, this is the general policy that gives us the highest expectation of survival. I said:  Real wars = not fun, losing wars = less fun.

Let's be clear, by the way, that I'm not endorsing the draft as practiced nowadays.  Those drafts are not collective attempts by a populace to move from a Nash equilibrium to a Pareto optimum.  Drafts are a tool of kings playing games in need of toy soldiers. The Vietnam draftees who fled to Canada, I hold to have been in the right.  But a society that considers itself too smart for kings, does not have to be too smart to survive.  Even if the Barbarian hordes are invading, and the Barbarians do practice the draft.

Will rational soldiers obey orders?  What if the commanding officer makes a mistake?

Soldiers march.  Everyone's feet hitting the ground in the same rhythm.  Even, perhaps, against their own inclinations, since people left to themselves would walk all at separate paces.  Lasers made out of people.  That's marching.

If it's possible to invent some method of group decisionmaking that is superior to the captain handing down orders, then a company of rational soldiers might implement that procedure.  If there is no proven method better than a captain, then a company of rational soldiers might commit to obey the captain, even against their own separate inclinations.  And if human beings aren't that rational... then in advance of the lottery, the general policy that gives you the highest personal expectation of survival is to shoot soldiers who disobey orders.  This is not to say that those who fragged their own officers in Vietnam were in the wrong; for they could have consistently held that they preferred no one to participate in the draft lottery.

But an uncoordinated mob gets slaughtered, and so the soldiers need some way of all doing the same thing at the same time in the pursuit of the same goal, even though, left to their own devices, they might march off in all directions.  The orders may not come from a captain like a superior tribal chief, but unified orders have to come from somewhere.  A society whose soldiers are too clever to obey orders, is a society which is too clever to survive.  Just like a society whose people are too clever to be soldiers.  That is why I say "clever", which I often use as a term of opprobrium, rather than "rational".

(Though I do think it's an important question as to whether you can come up with a small-group coordination method that really genuinely in practice works better than having a leader.  The more people can trust the group decision method—the more they can believe that it really is superior to people going their own way—the more coherently they can behave even in the absence of enforceable penalties for disobedience.)

I say all this, even though I certainly don't expect rationalists to take over a country any time soon, because I think that what we believe about a society of "people like us" has some reflection on what we think of ourselves.  If you believe that a society of people like you would be too reasonable to survive in the long run... that's one sort of self-image.  And it's a different sort of self-image if you think that a society of people all like you could fight the vicious Evil Barbarians and win—not just by dint of superior technology, but because your people care about each other and about their collective society—and because they can face the realities of war without losing themselves—and because they would calculate the group-rational thing to do and make sure it got done—and because there's nothing in the rules of probability theory or decision theory that says you can't sacrifice yourself for a cause—and because if you really are smarter than the Enemy and not just flattering yourself about that, then you should be able to exploit the blind spots that the Enemy does not allow itself to think about—and because no matter how heavily the Enemy hypes itself up before battle, you think that just maybe a coherent mind, undivided within itself, and perhaps practicing something akin to meditation or self-hypnosis, can fight as hard in practice as someone who theoretically believes they've got seventy-two virgins waiting for them.

Then you'll expect more of yourself and people like you operating in groups; and then you can see yourself as something more than a cultural dead end.

So look at it this way:  Jeffreyssai probably wouldn't give up against the Evil Barbarians if he were fighting alone.  A whole army of beisutsukai masters ought to be a force that no one would mess with.  That's the motivating vision.  The question is how, exactly, that works.


Part of the sequence The Craft and the Community

Next post: "Of Gender and Rationality"

Previous post: "Collective Apathy and the Internet"

270 comments

IAWYC, but I think it sidesteps an important issue.

A perfectly rational community will be able to resist the barbarians. But it's possible, perhaps likely, that as you increase community rationality, there's a valley somewhere between barbarian and Bayesian where fighting ability decreases until you climb out of it.

I think the most rational societies currently existing are still within that valley. And that a country with the values and rationality level of 21st century Harvard will with high probability be defeated by a country with the values and rationality level of 13th century Mongolia (holding everything else equal).

I don't know who you're arguing against, but I bet they are more interested in this problem than in an ideal case with a country of perfect Bayesians.

I agree such a valley is plausible (though far from obvious: more rational societies have better science and better economies; democracies can give guns to working class soldiers whereas aristocracies had to fear arming their peasants; etc.). To speculate about the underlying phenomenon, it seems plausible that across a range of goals (e.g., increasing one’s income; defending one’s society against barbarian hordes):

  • Slightly above-average amounts of rationality fairly often make things worse, since increased rationality, like any change in one’s mode of decision-making, can move people out of local optima.

  • Significantly larger amounts of rationality predictably make things better, since, after a while, the person/society actually has enough skill to notice the expected benefits of "doing things the way most people do them" (which are often considerable; cultural action-patterns don't come from nowhere), to fairly evaluate the expected benefits of potential changes, and to solve the intrapersonal or societal coordination problems necessary to actually implement the action from which the best results are expected.

Though I agree with Yvain's points elsewhere that we need detailed, concrete, empirical arguments regarding the potential benefits claimed from these larger amounts of rationality.

more rational societies have better science and better economies

More-developed societies develop technology; less-developed societies use it without paying the huge costs of development.

It's not evident which strategy is a win. Historically, it often appears that those who develop tech win. But not always. Japan has for decades been cashing in on American developments in cars, automation, steelmaking, ICs, and other areas.

If American corporations were required to foot the bill for the education needed for technological development, instead of having it paid for by taxpayers and by students, they might choose not to.

More-developed societies develop technology; less-developed societies use it without paying the huge costs of development.

If you patent something, you can charge what you like for the license.  Were you suggesting that some countries ignore patent law, or that externalities (such as failed R&D projects and education costs) don't get recompensed?  Or something else?

But not always. Japan has for decades been cashing in on American developments in cars, automation, steelmaking, ICs, and other areas.

That's probably unfair. Japan files a lot of patents -- more than the US by some measures.

The subject was discussed at Overcoming Bias recently.

I'm no economist, but don't they already pay for it to a certain extent, in the form of the higher wages educated workers demand?

I think that's more a function of the rarity of the educated individuals of the needed sort, than of the cost of their education.

"Jeffreyssai probably wouldn't give up against the Evil Barbarians if he were fighting alone."

WWJD, indeed.

But since Jeffreyssai is a fictional creation of Eliezer Yudkowsky, appealing to what we imagine he would do is nothing more than an appeal to Eliezer Yudkowsky's ideas, in the same way that trying to confirm a newspaper's claim by picking up another copy of the same edition is just appealing to the newspaper again.

How can we test the newspaper instead of appealing to it?

Fortunately, this is a case where the least convenient possible world is quite unlike the real world, because modern wars are fought less with infantry and more with money and technology. As technology advances, military robots get cheaper, and larger portions of the military move to greater distances from the battlefield. If current trends continue, wars will be fought entirely between machines, until one side runs out of robots and is forced to surrender (or else fight man-vs-machine, which, in spite of what happens in movies, is probably fruitless suicide).

The problem with this theory is that people in a poor country are a lot cheaper than cutting edge military robots. In a serious war, the U.S. would quickly run out of "smart bombs" and such. Military equipment is a pure consumption item, it produces nothing at all, so there is only going to be limited investment in it in peacetime. And modern high-tech military equipment requires long lead times for building up (unlike the situation in WWII).

Robots get cheaper and stronger over time, while people are a fixed parameter.

...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away.

I've set my line of retreat at a much higher extreme. I expect humans trained in rationality, when faced with a situation where they must abandon their rationality in order to win, to abandon their rationality. If the most effective way to produce a winning army is to irreversibly alter the brains of soldiers to become barbarians, the pre-lottery agreement, for me, would include that process (say brain washing, drugging and computer implants), as well as appropriate ways to pacify the army once the war has been completed.

I expect a rational society, when faced with the inevitability of war, would pick the most efficient way to pound the enemy into dust, and go as far as this, if required.

Caveats: I don't actually expect anything this extreme would be required for winning most wars.  I have a nagging doubt that it may not be possible to form a society of humans which is at the same time both rational, and willing to go to such an extreme.

So basically the Culture-Idiran War version of "when you need soldiers, make people born to be warriors".

A couple comments. I think I overall agree, though I admit this is one of those "it gives me the willies, dun want to think about it too much" things for me. (Which, of course, means it's the sort of thing I especially should think about to see what ways I'm being stupid that I'm not letting myself see...)

Anyways, first, as far as methods of group decision making better than a chain-of-command type thing... I would expect that "better" in this context would actually have to meet a stricter requirement than merely "produces a more correct answer": it would have to "produce a more correct answer QUICKLY", since group decision methods that we currently know of tend to, well, take more time, right?

Also, as far as precommitting to follow the captain (or other appropriate officer), that should be "up to limits where actually disobeying, even when taking into account Newcomb type arguments, is actually the Right Thing".  (for example, commitment to obey until "egregious violations of morality" or something)

Semiformalizing this in this context seems tricky, though. Maybe something like this: for morality related disobediences, the rule is obey until "actually losing this battle/war/skirmish/whatever-the-correct-granularity-is would actually be preferable to obeying."?

I'm just stumped as far as what a sane rule for "captain is ordering us to do something that's really really self destructively stupid in a way that absolutely won't achieve anything useful" is. Maybe the "still obey under those circumstances" rule is the Right Way, if the probability of actually being (not just seeming to be) in such a situation is low enough that it's far better to precommit to obey unconditionally (up to extreme morality situations, as mentioned above)

You're right, in principle, about both things.  There's a limit to our willingness to follow orders based on the raw immorality of the orders.  That's what Nuremberg, My Lai, and Abu Ghraib were about.  But we also want to constrain our right to claim that we're disobeying for morality, so we don't do it in the heat of action unless we're right.  Tough call for the individual to make, and tough to set up proper incentives for.

But that's the goal.  Follow orders unless ..., but don't abuse your right to invoke the exception.

To pick a two-year-old nit:

That was what Nuremberg and My Lai were about, but that is not what Abu Ghraib was about.  At Abu Ghraib, most of the events and acts that were made public, and most of what people are upset about, was done by people who were violating orders--with some exceptions, and from what I can tell most of the exceptions were from non-military organizations.

I'm not going to waste a lot more time going into detail, but the people who went to jail went there for violating orders, and the people who got "retired" got it because they were shitty leaders and didn't make sure their troops were well behaved.

In an "appeal to authority": I've been briefed several times over the last 20 years on the rules of land warfare, I've spent time in that area (in fact, when the original article was posted I was about 30 miles from Abu Ghraib), and a very good friend of mine was called in to help investigate/document what happened there.  When his NDA expires I intend to get him drunk and get the real skinny.

This doesn't change the thrust of your argument--which not only do I agree with, but is part and parcel of military training these days.  It is hammered into each soldier, sailor, marine and airman that you do NOT have to follow illegal orders.  Read "Lone Survivor", a book by Marcus Luttrell about his SEAL team going up against unwinnable odds in the mountains of Afghanistan--because they, as a team, decided not to commit a war crime.  Yeah, they voted on it, and it was close.  But one of those three things was not like the others, and I felt I had to say something.

I'm not completely convinced that all the people who were punished believed they were not doing what their superiors wanted. I understand that that's the way the adjudication came out, but that's what I would expect from a system that knows how to protect itself. But I'll admit I haven't paid close attention to any of the proceedings.

Is there any good, short, material laying out the evidence that none of the perpetrators heard anything to reinforce the mayhem from their superiors--non-coms etc. included? Your sentence "the people who went to jail went there for violating orders" leaves open the possibility that some of the illegal activity was done by people who thought they were following orders, or at least doing what their superiors wanted.

If you are right, then I'll agree that Abu Ghraib was orthogonal to the main point. But I'm not completely convinced, and it seems likely to me that it looks exactly like a relevant case to the Arab street. Whether or not there were explicit orders from the top of the institution, it looked to have been pervasive enough to have to count as policy at some level.

Torture and Democracy argues that torture is a craft apprenticeship technique, and develops when superiors say "I want answers and I don't care how you get them".

This makes the question of what's been ordered a little fuzzy.

(This is a reply to both Mr. Hibbert and Ms. Lebovitz)

I've got a couple problems here--one is that there wasn't a single incident at Abu Ghraib; there were a couple periods of time in which certain classes of things happened.  Another is that some military personnel (this is from memory, since it's just not worth my time right now to google it) from a reservist MP unit, many of whom were prison guards "in real life", abused some prisoners during one or two shifts after a particularly brutal day (in terms of casualties to American forces from VBIEDs/suicide bombers).  These particular abuses (getting detainees naked, piled up, etc.) were not done as part of information gathering, and IIRC many of those prisoners weren't even considered intelligence sources.  Abu Ghraib at the time held both Iraqi criminal and insurgent/terrorist suspects.

I haven't paid much attention to the debate since, and have not wasted the cycles on reading any other sources. As I indicated, I've been in the military and rejoined the armed forces around the time that story broke (or maybe later, I'm having trouble nailing down exactly when the story broke).

One thing that did come out was that during the period of time the military abuses took place (as in, the shifts that they happened on) there WERE NO OFFICERS PRESENT.  That is basically what got the Brigadier General in charge "retired".  (She later whined about how she was mistreated by the system.  I've got no sympathy.  Her people were poorly trained and CLEARLY poorly led from the top down.)

There were other photographs that surfaced of "fake torture"--a detainee dressed in something that looked like a poncho, with jumper cables on his arms--he believed the jumper cables were attached to a power source and would light him up like a Christmas tree if he stepped down (again IIRC).  This was the action of a non-military questioner, and someone who thought he was following the law--after all, he wasn't doing anything but scaring the guy; there was (absent a weak heart) no risk of injury.  It was a really awful looking photo though.

Ms. Lebovitz:

I've known people (not current military, Vietnam era) who engaged in a variety of rather brutal interrogation techniques.  The one I have in mind was raised in a primitive part of the US where violence and poverty were more common than education, and spent a long time fighting an enemy that would do things like chop off the arms of people who had vaccination scars.

His superiors didn't have to tell him anything.  (Note I have never said that "we" haven't engaged in these sorts of behaviors, only that it didn't happen under our watch at Abu Ghraib.  Some of the stuff that happened before we took over, when it was Saddam's prison?  It's hard for me to watch, and I have a bit of a tough stomach for that sort of thing.)

And this notion that "a person being tortured is likely to say whatever he thinks his captors want to hear, making it one of the poorest methods of gathering reliable information" is pure bullshit.

Yes, if I grab random people off the street and waterboard them, I will get no useful information.  If 5 people break into my house and kidnap my daughter, but only 4 get out, the one who didn't WILL give me the information I want.  He will say anything to stop the pain, and that anything happens to be what I want to hear.

This is again orthogonal to what I was discussing with Mr. Hibbert--I was not claiming that torture doesn't happen (it does), but that most of what the public knows about what happened at Abu Ghraib wasn't torture or abuse ordered by those above, and in some cases it was not even what the perpetrator thought of as abuse.

Well, that's more "what laws should there be/what sort of enforcement ought there to be?" I was more asking with regards to "what underlying rule is the Rational Way"? :)

ie, some level of being willing to do what you're told even if it's not optimal is a consequence of the need to coordinate groups and stuff, ie, the various Newcomb arguments and so on...  I'm just trying to figure out where that breaks down, how stupid the orders have to seem before that implodes, if ever.

The morality one was easier, since the "obey even though you think you know better" thing is based on one boxing and having the goal of winning that battle/whatever. If losing would actually be preferable to obeying, then the single iteration PD/newcomb problem type stuff doesn't seem to come up as strongly.

Any idea what an explicit rule a rationalist should follow with regards to this might look like?  (not necessarily "how do we enforce it" though.  Separate question)

Even an upper-limit criterion would be okay.  ie, something of the form "I don't know the exact dividing line, but I think I can argue that at least if it gets to this point, then disobeying is rational"

(which is what I did for the morality one, with the "better to lose than obey" criterion.)

No, I don't have a boiled down answer. When I try to think about it, rational/right includes not just the outcome of the current engagement, but the incentives and lessons left behind for the next incident.

Okay, here's one example I've used before: torture. It's somewhat orthogonal to the question of following orders, but it bears on the issue of setting up incentives for how often breaking the rules is acceptable. I think the law and the practice should be that torture is illegal and punished strictly. If some person is convinced that imminent harm will result if information isn't extracted from a suspect, and that it's worth going to jail for a long time in order to prevent the harm, then they are able to (which is not the same as authorized) torture. But it's always at the cost of personal sacrifice. So, if you think a million people will die from a nuke, and you're convinced you can actually get information out of someone by immoral and prohibited means (which I think is usually the weakest link in the chain) and you're willing to give up your life or your liberty in order to prevent it, then go for it.

But don't ever expect a hero's welcome for your sacrifice. It's a bad choice that's (conceivably) sometimes necessary. The idea that any moral society would authorize the use of torture in routine situations makes me sick.

I think people exist who will make the personal sacrifice of going to jail for a long time to prevent the nuke from going off. But I do not think people exist who will also sacrifice a friend. But under American law that is what a person would have to do to consult with a friend on the decision of whether to torture: American law punishes people who have foreknowledge of certain crimes but do not convey their foreknowledge to the authorities. So the person is faced with making what may well be the most important decision of their lives without help from any friend or conspiring somehow to keep the authorities from learning about the friend's foreknowledge of the crime. Although I believe that lying is sometimes justified, this particular lie must be planned out simultaneously with the deliberations over the important decision -- potentially undermining those deliberations if the person is unused to high-stakes lies -- and the person probably is unused to high-stakes lies if he is the kind of person seriously considering such a large personal sacrifice.

Any suggestions for the person?

Discuss a hypothetical situation with your friend that happens to match up in all particulars with the real-world situation, which you do not discuss.

It isn't actually important here that your friend be fooled, the goal is to give your friend plausible deniability to protect her from litigation.

Yes. I am sympathetic to that view of "how to deal with stuff like torture/etc", but that doesn't answer "when to do it".

I.e., I wasn't saying "when should it be 'officially permitted'?" but rather: at what point should a rationalist do so? How convinced does a rationalist need to be, if ever?

Or did I completely misunderstand what you were saying?

No, you understood me. I sidestepped the heart of the question.

This is an example where I believe I know what the right incentive structure of the answer is. But I can't give any guidance on the root question, since in my example case (torture), I don't believe in the efficacy of the immoral act. I don't think you can procure useful information by torturing someone when time is short. And when time isn't short, there are better choices.

I guess the big question here is why you do not believe it. Since you (and I!) would prefer to live in a world where torture is not effective, we must be aware that our bias is to believe it is not effective: it makes the world nicer. Hence, we must consciously shift up our belief in the effectiveness of torture from our "gut feeling." Given that, what evidence have you seen that torture is not effective for solving NP-like problems (meaning problems where a solution is hard to find but easy to verify, like "where is the bomb hidden")? I would say that for me personally, the amount my preferences shift in the presence of relatively mild pain ("I prefer not to medicate myself" vs. "Gimme that goddamn pill") is at least cause to suspect that someone who is an expert at causing vast amounts of pain would be able to make me do things I would normally prefer not to do (like tell them where I hid the bomb) to stop that pain.

Of course, torture used to extract unverifiable information is completely useless, for exactly the same reason: the prisoner will say anything they can get away with to make the pain stop.

Maybe my previous answer would have been cleaner if I had said "I don't think I can procure useful information by torturing someone when time is short." It's a relatively easy choice for me, since I doubt that even with proper tools I could calibrate the level of pain precisely enough to get detailed information in a few minutes or hours.

When I think about other people who might have more experience, it's hard to imagine someone who had repeatedly fallen into the situation of being the right person to perform the torture, such that they had enough experience both to make the call and to effectively extract information. Do you want to argue that they could have gotten to that point without violating our sense of morality?

Since my question is "What should the law be?", not "is it ever conceivable that torture could be effective?" I still have to say that the law should forbid torture, and people should expect to be punished if they torture. There may be cases where you or I would agree that in that circumstance it was the necessary thing to do, but I still believe that the system should never condone it.

You talked about two issues that have little to do with each other:

  1. What should the law be? (I didn't argue with your point here, so reiterating it is useless?)
  2. A statement that was misleading: apparently you meant that you're not a good torturer. That is not impossible. I think that given a short amount of time, with someone who knows something specific (where the bomb is hidden), my best chance (in an effectiveness ordering, not a moral one) is to torture them. I'm not a professional torturer, and luckily I've never had to torture anyone, but like any human I have an understanding of pain. I've watched movies about torture, and I've heard about waterboarding. If I decided that this was the ethical thing to do (which we both agree is possible in some cases), and I was the only one around, I'd probably try waterboarding. It's risky (there's a chance the prisoner might die), but if I have one hour and 50 million people will die otherwise, I don't see any better way. So let me ask you flat out: I'm assuming you've also read about waterboarding, and that when you need to, you have access to the WP article about waterboarding. What would you do in that situation? Ask nicely?

All that does not go to condone torture. I'm just saying, if a nation of Rationalists is fighting with the Barbarians, then it's not necessarily in their best interests to decide they will never torture no matter what.

My point wasn't just that I wouldn't make a good torturer. It seems to me that ordinary circumstances don't provide many opportunities for anyone to learn much about torture (other than from fictional sources). I have little reason to believe that inexperienced torturers would be effective in the time-critical circumstances that seem necessary for any convincing justification of torture. You may believe it, but it's not convincing to me. So it would be hard to ethically produce trained torturers, and there's a dearth of evidence on the effectiveness of inexperienced torturers in the circumstances necessary to justify it.

Given that, I think it's better to take the stance that torture is always unethical. There are conceivable circumstances when it would be the only way to prevent a cataclysm, but they're neither common, nor easy to prepare for.

And I don't think I've said that it would be ethical, just that individuals would sometimes think it was necessary. I think we are all better off if they have to make that choice without any expectation that we will condone their actions. Otherwise, some will argue that it's useful to have a course of training in how to perform torture, which would encourage its use even though we don't have evidence of its usefulness. It seems difficult to produce evidence one way or another on the efficacy of torture without violating the spirit of the Nuremberg Code. I don't see an ethical way to add to the evidence.

You seem to believe that sufficient evidence exists. Can you point to any?

You wanted an explicit answer to your question. My response is that I would be unhappy that I didn't have effective tools for finding out the truth. But my unhappiness doesn't change the facts of the situation. There isn't always something useful that you can do. When I generalize over all the fictional evidence I've been exposed to, it's too likely that my evidence is wrong as to the identity of the suspect, or that he doesn't have the info I want, or that the bomb can't be disabled anyway. When I try to think of actual circumstances, I don't come up with examples in which time was short and the information produced was useful. I also can't imagine myself personally punching, pistol-whipping, pulling fingernails, waterboarding, etc., nor ordering the experienced torturer (whom you want me to imagine is under my command) to do so.

Sorry to disappoint you, but I don't believe the arguments I've heard for effectiveness or morality of torture.

Yeah, "do it, but keep it illegal and be punished for it even when it was needed" is a possible solution, given "in principle it may be useful," which is a whole other question.

But anyway, I was talking about "when should a rationalist soldier be willing to disobey in the name of 'I think my CO is giving really stupid orders'?", since I believe I already have a partial solution to the "I think my CO is giving really immoral orders" case (as described above).

As for when torture would even be plausibly useful (especially plausibly optimal) for obtaining info: I can't currently think of any non-contrived situations.