IAWYC, but I think it sidesteps an important issue.
A perfectly rational community will be able to resist the barbarians. But it's possible, perhaps likely, that as you increase community rationality, there's a valley somewhere between barbarian and Bayesian where fighting ability decreases until you climb out of it.
I think the most rational societies currently existing are still within that valley. And that a country with the values and rationality level of 21st century Harvard will with high probability be defeated by a country with the values and rationality level of 13th century Mongolia (holding everything else equal).
I don't know who you're arguing against, but I bet they are more interested in this problem than in an ideal case with a country of perfect Bayesians.
I agree such a valley is plausible (though far from obvious: more rational societies have better science and better economies; democracies can give guns to working class soldiers whereas aristocracies had to fear arming their peasants; etc.). To speculate about the underlying phenomenon, it seems plausible that across a range of goals (e.g., increasing one’s income; defending one’s society against barbarian hordes):
Slightly above-average amounts of rationality fairly often make things worse, since increased rationality, like any change in one’s mode of decision-making, can move people out of local optima.
Significantly larger amounts of rationality predictably make things better, since, after a while, the person/society actually has enough skills to notice the expected benefits of “doing things the way most people do them” (which are often considerable; cultural action-patterns don’t come from nowhere), to fairly evaluate the expected benefits of potential changes, and to solve the intrapersonal or societal coordination problems necessary to actually implement the action from which the best results are expected.
Though I agree with Yvain's points elsewhere that we need detailed, concrete, empirical arguments regarding the potential benefits claimed from these larger amounts of rationality.
"Jeffreyssai probably wouldn't give up against the Evil Barbarians if he were fighting alone."
WWJD, indeed.
But since Jeffreyssai is a fictional creation of Eliezer Yudkowsky, appealing to what we imagine he would do is nothing more than an appeal to Eliezer Yudkowsky's ideas, in the same way that trying to confirm a newspaper's claim by picking up another copy of the same edition is just appealing to the newspaper again.
How can we test the newspaper instead of appealing to it?
Fortunately, this is a case where the least convenient possible world is quite unlike the real world, because modern wars are fought less with infantry and more with money and technology. As technology advances, military robots get cheaper, and larger portions of the military move to greater distances from the battlefield. If current trends continue, wars will be fought entirely between machines, until one side runs out of robots and is forced to surrender (or else fight man-vs-machine, which, in spite of what happens in movies, is probably fruitless suicide).
A couple comments. I think I overall agree, though I admit this is one of those "it gives me the willies, dun want to think about it too much" things for me. (Which, of course, means it's the sort of thing I especially should think about to see what ways I'm being stupid that I'm not letting myself see...)
Anyways, first, as far as methods of group decision making better than a chain of command type thing... I would expect that "better" in this context, would actually have to have a stricter requirement than merely "produces a more ...
...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away.
I've set my line of retreat at a much higher extreme. I expect humans trained in rationality, when faced with a situation where they must abandon their rationality in order to win, to abandon their rationality. If the most effective way to produce a winning army is to irreversibly alter the brains of soldiers to become barbarians, the pre-lottery agreement, for m...
I'm wondering whether the rationalists can effectively use mercenaries. Why doesn't the US have more mercenaries than US soldiers? In the typically poverty-stricken areas where US forces operate, we could hire and equip 100-1000 locals for the price of a single US soldier (which, when you figure in health-care costs, is so much that we basically can't afford to fight wars using American soldiers anymore). We might also have less war opposition back at home if Americans weren't dying.
Voted up because dealing with uncooperative people is a necessary part of the art and war is the extreme of "uncooperative".
Good post.
Also, historically, evil barbarians regularly fall prey to some irrational doctrine or personal paranoia that wastes their resources (sacrifice to the gods, kill all your Jews, kill everybody in the Ukraine, have a cultural revolution).
We in the US probably have a peculiar attitude on the rationality of war because we've never, with the possible exception of the War of 1812, fought in a war that was very rational (in terms of the benefits for us). The Revolutionary war? The war with Mexico? The Civil War? The Spanish-American War? WWI? WWII? Korea? Vietnam? Iraq? None of them make sense in terms of self-interest.
(Disclaimer: I'm a little drunk at the moment.)
This struck me as relevant:
"If we desire to defeat the enemy, we must proportion our efforts to his powers of resistance. This is expressed by the power of two factors which cannot be separated, namely, the sum of available means and the strength of the Will."
Carl Von Clausewitz, On War, Chapter 1, Section 5. Utmost Exertion of Powers
(I'm still planning on putting together a post on game theory, war, and morality, but I think most of you will be inclined to disagree with my conclusions, so I'm really doing my homework for this one.)
For seeing someone's source code to act as a commitment mechanism, you have to be reasonably sure that what they show you really is their source code - and also that their source code is not going to be modified by another agent between when they show it to you, and when they get a chance to defect.
While it's possible to imagine these conditions being met, it seems non-trivial to imagine a society where they are met very frequently.
If agents face one-shot prisoner's dilemmas with each other very often, there are other ways to get them to cooperate - assumi...
What about just paying them to fight? You can have an auction of sorts to set the price, but in the end they'd select themselves. You could still use the courage enhancing drugs and shoot those who try to breach the contract.
One might respond "no amount of (positive) money could convince me to fight a war", but what about at some negative amount? After all, everyone else has to pay for the soldiers.
Eliezer's point is that, given a certain decision theory (or, failing that, a certain set of incentives to precommitment), rational soldiers could in fact carry out even suicide missions if the tactical incentives were strong enough for them to precommit to a certain chance of drawing such a mission.
This has actually come up: in World War II (citation in Pinker's "How the Mind Works"), bomber pilots making runs on Japan had a 1 in 4 chance of survival. Someone realized that the missions could be carried out with half the planes if those planes carried bombs in place of their fuel for the return trip; the pilots could draw straws, and half would survive while the other half went on a suicide mission. Despite the fact that precommitting to this policy would have doubled their chances of survival, the actual pilots were unable to adopt this policy (among other things, because they were suspicious that those so chosen would renege rather than carry out the mission).
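To make the arithmetic behind "doubled their chances" explicit (a back-of-the-envelope sketch; I'm idealizing a selected pilot's survival as 0 and an exempt pilot's as 1, which the real numbers would only approximate):

```latex
% Status quo: every pilot flies the standard mission profile.
P(\text{survive} \mid \text{status quo}) = \tfrac{1}{4}

% Straw-drawing policy: half fly one-way missions with doubled bomb
% loads (survival ~ 0); the other half stay home (survival ~ 1).
P(\text{survive} \mid \text{lottery})
  = \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot 1
  = \tfrac{1}{2}
```

Ex ante, each pilot's survival probability rises from 1/4 to 1/2, even though a pilot who has already drawn the short straw would rather renege, which is exactly the commitment problem.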
I think Eliezer believes that a team of soldiers trained by Jeffreyssai would be able to precommit in this fashion and carry the mission through if selected. I think that even if humans can't meet such a high standard by training and will alone, there could exist some form of preparation or institution that would make it a workable strategy.
In group #2, where everybody at all levels understands all tactics and strategy, they would all understand the need for a coordinated, galvanized front, and so would figure out a way to designate who takes orders and who does the ordering, because that is the rational response. The optimal rational response might be a self-organized system where the lines are blurred between those who do the ordering and those who follow the orders, perhaps alternating in round-robin fashion or by some other protocol. That boils down to a technical problem in operatio...
Anyone reminded of 'The World of Null-A'? Rationalists do win the war against barbarians in this case.
Interesting. It seems to imply, however, that a rationalist would always consider, a priori, their own individual survival as the highest ultimate goal, and modulate rationally from there. This is highly debatable: you could have a rationalist father who considers, a priori, the survival of his children to be more important than his own, a rationalist patriot who considers, a priori, the survival of their political community to be more important than their own, etc.
The moral of Ends Don't Justify Means (Among Humans) was that even if philosophical thought experiments demonstrate scenarios where ethical rules should be abandoned for the greater good, real-life cases are not as clear-cut, and we should still obey these moral rules, because humans cannot be trusted when they claim that <unethical plan> really does maximize the expected utility: we cannot be trusted when we say "this is the only way", and we cannot be trusted when we say "this is better than the alternative".
I think this may be the source of the repul...
Edit: lottery won by two votes -> election.
I've heard you say a handful of times now: "as justified by some decision theory (which I won't talk about yet), I one-box/cooperate." I'm increasingly interested.
I don't understand the assumption that each rationalist prefers to be a civilian while someone else risks her life. They can be rational and use a completely altruistic utility function that values all people equally a priori. The strongest rationalist society is the rationalist society where everyone has the same terminal values (in an absolute rather than relative sense).
Perhaps slightly off topic, but I'm skeptical of the idea that two AIs having access to each other's source code is in general likely to be a particularly strong commitment mechanism. I find it much easier to imagine how this could be gamed than how it could be trustworthy.
Is it just intended as a rhetorical device to symbolize the idea of a very reliable pre-commitment signal (in which case perhaps there are better choices because it doesn't succeed at that for me, and I imagine would raise doubts for most people with much programming experience) or is it supposed to be accepted as highly likely to be a very reliable commitment signal (in which case I'd like to see the reasoning expanded upon)?
A life spent on something less valuable than itself is wasted, just as money is squandered on junk. If you want to respect the value of your life, you must spend it on something more valuable to you than you are. If you invest your life into something more valuable than you are, you are not throwing it away, you are ensuring that it is spent wisely.
People sacrifice their best years passing their genes on, knowing that the continuation of the species is more valuable than those years, and they fight in war because freeing themselves and future generations...
I found this post very disturbing, so I thought for a bit about why. It reads very much like some kind of SF dystopia, and indeed if it were necessary to agree to this lottery to be part of the hypothetical rationalist community/country, then I wouldn't wish to be a part of it. One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as it's not impeding anyone else's right to do the same). No government should have the right to compel its citizens to become...
Oh, my first downvote. Interesting. Bad Leisha, you've violated some community norm or other. But given that I'm new here and still trying to determine whether or not this community is a good fit for me, I'm curious about the specifics. I wonder what I did wrong.
A single downvote is not an expression of a community norm. It is an expression by a single person that there was something, and it could be pretty much anything, about your post that that one person did not like. I wouldn't worry until a post gets to -5 or so, and -1 isn't very predictive that it will.
Note: This post is a concerted rational effort to overcome the cached thought 'oh no, someone at LW doesn't like what I wrote :( ' and should be taken in that spirit.
The "someone at LW doesn't like what I wrote" part is accurate. You don't need the "oh no" and ":(" parts. Just because someone disagrees with you, doesn't mean that you are wrong.
Personally (and I did not vote on your post either way), I don't think you are quite engaging with the problem posed, which is that each of these hypothetical rationalists would rather win without being in the army themselves than win with being in t...
But reversed stupidity is not intelligence. And reversed evil is not intelligence either. It remains true that real wars cannot be won by refined politeness. If "rationalists" can't prepare themselves for that mental shock, the Barbarians really will win; and the "rationalists"... I don't want to say, "deserve to lose". But they will have failed that test of their society's existence.
Are you assuming that niceness (not torturing people, not killing civilians) is correlated with rationality?
Previously:
Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.
Now there's a certain viewpoint on "rationality" or "rationalism" which would say something like this:
"Obviously, the rationalists will lose. The Barbarians believe in an afterlife where they'll be rewarded for courage; so they'll throw themselves into battle without hesitation or remorse. Thanks to their affective death spirals around their Cause and Great Leader Bob, their warriors will obey orders, and their citizens at home will produce enthusiastically and at full capacity for the war; anyone caught skimming or holding back will be burned at the stake in accordance with Barbarian tradition. They'll believe in each other's goodness and hate the enemy more strongly than any sane person would, binding themselves into a tight group. Meanwhile, the rationalists will realize that there's no conceivable reward to be had from dying in battle; they'll wish that others would fight, but not want to fight themselves. Even if they can find soldiers, their civilians won't be as cooperative: So long as any one sausage almost certainly doesn't lead to the collapse of the war effort, they'll want to keep that sausage for themselves, and so not contribute as much as they could. No matter how refined, elegant, civilized, productive, and nonviolent their culture was to start with, they won't be able to resist the Barbarian invasion; sane discussion is no match for a frothing lunatic armed with a gun. In the end, the Barbarians will win because they want to fight, they want to hurt the rationalists, they want to conquer and their whole society is united around conquest; they care about that more than any sane person would."
War is not fun. As many, many people have found since the dawn of recorded history, as many, many people have found out before the dawn of recorded history, as some community somewhere is finding out right now in some sad little country whose internal agonies don't even make the front pages any more.
War is not fun. Losing a war is even less fun. And it has been said since ancient times: "If thou would have peace, prepare for war." Your opponents don't have to believe that you'll win, that you'll conquer; but they have to believe you'll put up enough of a fight to make it not worth their while.
You perceive, then, that if it were genuinely the lot of "rationalists" to always lose in war, I could not in good conscience advocate the widespread public adoption of "rationality".
This is probably the dirtiest topic I've discussed or plan to discuss on LW. War is not clean. Current high-tech militaries—by this I mean the US military—are unique in the overwhelmingly superior force they can bring to bear on opponents, which allows for a historically extraordinary degree of concern about enemy casualties and civilian casualties.
Winning in war has not always meant tossing aside all morality. Wars have been won without using torture. The unfunness of war does not imply, say, that questioning the President is unpatriotic. We're used to "war" being exploited as an excuse for bad behavior, because in recent US history that pretty much is exactly what it's been used for...
But reversed stupidity is not intelligence. And reversed evil is not intelligence either. It remains true that real wars cannot be won by refined politeness. If "rationalists" can't prepare themselves for that mental shock, the Barbarians really will win; and the "rationalists"... I don't want to say, "deserve to lose". But they will have failed that test of their society's existence.
Let me start by disposing of the idea that, in principle, ideal rational agents cannot fight a war, because each of them prefers being a civilian to being a soldier.
As has already been discussed at some length, I one-box on Newcomb's Problem.
Consistently, I do not believe that if an election is settled by 100,000 to 99,998 votes, all of the voters were irrational in expending effort to go to the polling place because "my staying home would not have affected the outcome". (Nor do I believe that if the election came out 100,000 to 99,999, then 100,000 people were all, individually, solely responsible for the outcome.)
Consistently, I also hold that two rational AIs (that use my kind of decision theory), even if they had completely different utility functions and were designed by different creators, will cooperate on the true Prisoner's Dilemma if they have common knowledge of each other's source code. (Or even just common knowledge of each other's rationality in the appropriate sense.)
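(An illustrative toy, my sketch rather than anything from the post: the simplest mechanizable version of source-code-based cooperation is a program that cooperates exactly when its opponent's code is a byte-for-byte copy of its own. The claim above is much stronger, since it covers agents with different code and different utility functions, and that takes a genuine decision theory rather than string comparison; but the toy shows the shape of the mechanism.)

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is an exact copy of this agent.

    Toy special case only: two identical agents shown each other's
    source both cooperate; any other program gets defection.
    """
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

me = inspect.getsource(clique_bot)
print(clique_bot(me))                                # "C": mutual cooperation
print(clique_bot("def defect_bot(s): return 'D'"))   # "D": no exploitable trust
```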
Consistently, I believe that rational agents are capable of coordinating on group projects whenever the (expected probabilistic) outcome is better than it would be without such coordination. A society of agents that use my kind of decision theory, and have common knowledge of this fact, will end up at Pareto optima instead of Nash equilibria. If all rational agents agree that they are better off fighting than surrendering, they will fight the Barbarians rather than surrender.
Imagine a community of self-modifying AIs who collectively prefer fighting to surrender, but individually prefer being a civilian to fighting. One solution is to run a lottery, unpredictable to any agent, to select warriors. Before the lottery is run, all the AIs change their code, in advance, so that if selected they will fight as a warrior in the most communally efficient possible way—even if it means calmly marching into their own death.
(A reflectively consistent decision theory works the same way, only without the self-modification.)
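(A minimal sketch of that lottery protocol, mine rather than the post's; the ten agents and three warrior slots are arbitrary. The essential feature is that each agent rewrites its own policy before the lottery's outcome is known, so that a selected agent fights instead of reconsidering after the fact.)

```python
import random

class Agent:
    def __init__(self, name: str):
        self.name = name
        # Default policy: every agent individually prefers civilian life.
        self.policy = lambda selected: "civilian"

    def precommit(self) -> None:
        # Self-modification step: install fight-if-selected BEFORE
        # knowing the lottery's outcome.
        self.policy = lambda selected: "warrior" if selected else "civilian"

agents = [Agent(f"ai{i}") for i in range(10)]
for a in agents:
    a.precommit()                 # everyone precommits first

warriors = set(random.sample(range(len(agents)), k=3))  # unpredictable lottery
for i, a in enumerate(agents):
    print(a.name, "->", a.policy(i in warriors))
```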
You reply: "But in the real, human world, agents are not perfectly rational, nor do they have common knowledge of each other's source code. Cooperation in the Prisoner's Dilemma requires certain conditions according to your decision theory (which these margins are too small to contain) and these conditions are not met in real life."
I reply: The pure, true Prisoner's Dilemma is incredibly rare in real life. In real life you usually have knock-on effects—what you do affects your reputation. In real life most people care to some degree about what happens to other people. And in real life you have an opportunity to set up incentive mechanisms.
And in real life, I do think that a community of human rationalists could manage to produce soldiers willing to die to defend the community. So long as children aren't told in school that ideal rationalists are supposed to defect against each other in the Prisoner's Dilemma. Let it be widely believed—and I do believe it, for exactly the same reason I one-box on Newcomb's Problem—that if people decided as individuals not to be soldiers or if soldiers decided to run away, then that is the same as deciding for the Barbarians to win. By that same theory whereby, if an election is won by 100,000 votes to 99,998 votes, it does not make sense for every voter to say "my vote made no difference". Let it be said (for it is true) that utility functions don't need to be solipsistic, and that a rational agent can fight to the death if they care enough about what they're protecting. Let them not be told that rationalists should expect to lose reasonably.
If this is the culture and the mores of the rationalist society, then, I think, ordinary human beings in that society would volunteer to be soldiers. That also seems to be built into human beings, after all. You only need to ensure that the cultural training does not get in the way.
And if I'm wrong, and that doesn't get you enough volunteers?
Then, so long as people still prefer, on the whole, fighting to surrender, they have an opportunity to set up incentive mechanisms, and avert the True Prisoner's Dilemma.
You can have lotteries for who gets selected as a warrior. Sort of like the example above with AIs changing their own code. Except that if "be reflectively consistent; do that which you would precommit to do" is not sufficient motivation for humans to obey the lottery, then...
...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away. Even considering that we ourselves might be selected in the lottery. Because in advance of the lottery, this is the general policy that gives us the highest expectation of survival.
...like I said: Real wars = not fun, losing wars = less fun.
Let's be clear, by the way, that I'm not endorsing the draft as practiced nowadays. Those drafts are not collective attempts by a populace to move from a Nash equilibrium to a Pareto optimum. Drafts are a tool of kings playing games in need of toy soldiers. The Vietnam draftees who fled to Canada, I hold to have been in the right. But a society that considers itself too smart for kings does not have to be too smart to survive. Even if the Barbarian hordes are invading, and the Barbarians do practice the draft.
Will rational soldiers obey orders? What if the commanding officer makes a mistake?
Soldiers march. Everyone's feet hitting the ground in the same rhythm. Even, perhaps, against their own inclinations, since people left to themselves would walk all at separate paces. Lasers made out of people. That's marching.
If it's possible to invent some method of group decisionmaking that is superior to the captain handing down orders, then a company of rational soldiers might implement that procedure. If there is no proven method better than a captain, then a company of rational soldiers can commit to obey the captain, even against their own separate inclinations. And if human beings aren't that rational... then, in advance of the lottery, the general policy that gives you the highest personal expectation of survival is to shoot soldiers who disobey orders. This is not to say that those who fragged their own officers in Vietnam were in the wrong; for they could have consistently held that they preferred no one to participate in the draft lottery.
But an uncoordinated mob gets slaughtered, and so the soldiers need some way of all doing the same thing at the same time in the pursuit of the same goal, even though, left to their own devices, they might march off in all directions. The orders may not come from a captain like a superior tribal chief, but unified orders have to come from somewhere. A society whose soldiers are too clever to obey orders is a society which is too clever to survive. Just like a society whose people are too clever to be soldiers. That is why I say "clever", which I often use as a term of opprobrium, rather than "rational".
(Though I do think it's an important question as to whether you can come up with a small-group coordination method that really genuinely in practice works better than having a leader. The more people can trust the group decision method—the more they can believe that it really is superior to people going their own way—the more coherently they can behave even in the absence of enforceable penalties for disobedience.)
I say all this, even though I certainly don't expect rationalists to take over a country any time soon, because I think that what we believe about a society of "people like us" has some reflection on what we think of ourselves. If you believe that a society of people like you would be too reasonable to survive in the long run... that's one sort of self-image. And it's a different sort of self-image if you think that a society of people all like you could fight the vicious Evil Barbarians and win—not just by dint of superior technology, but because your people care about each other and about their collective society—and because they can face the realities of war without losing themselves—and because they would calculate the group-rational thing to do and make sure it got done—and because there's nothing in the rules of probability theory or decision theory that says you can't sacrifice yourself for a cause—and because if you really are smarter than the Enemy and not just flattering yourself about that, then you should be able to exploit the blind spots that the Enemy does not allow itself to think about—and because no matter how heavily the Enemy hypes itself up before battle, you think that just maybe a coherent mind, undivided within itself, and perhaps practicing something akin to meditation or self-hypnosis, can fight as hard in practice as someone who theoretically believes they've got seventy-two virgins waiting for them.
Then you'll expect more of yourself and people like you operating in groups; and then you can see yourself as something more than a cultural dead end.
So look at it this way: Jeffreyssai probably wouldn't give up against the Evil Barbarians if he were fighting alone. A whole army of beisutsukai masters ought to be a force that no one would mess with. That's the motivating vision. The question is how, exactly, that works.