How does one determine the appropriate burden of proof?
Apparently I left a tag open. The first paragraph is yours, whereas the second is my question.
Minor point. It is peculiar to talk about the "death of communism" when there are about as many communists in the world as there are Christians.
"Death of the Purported Worldwide Worker's Communist Revolution" is closer to the truth (and a mouthful).
How about "Death of Worldwide Revolutionary Communism"?
The problem with Communism is timing. In the future (if we survive the next century) there will be enough technological progression to create essential Communism (no-one needs to work, everyone will have necessary resources to live incredible lives and so forth). Of course, we won't call it Communism.
Haven't had a post this good in a while. With immediate application, too.
Err... did that post end up dying in a free speech happy death spiral?
Especially odd from a person who believes in the probable possibility of humanly irresistible bad arguments as a reason against AI boxing. If there are minds that we can't let exist because they would make bad arguments that we would find persuasive, this seems terribly close, from an aggregative utilitarian standpoint, to killing them.
I'm not an expert in the Rwandan genocide, but it's my impression that to a substantial extent the people behind it basically just made arguments (bad ones, of a primarily ad-hominem form like "Tutsis are like cockroaches") for killing them, and people who listened to those arguments on the radio went along with it. At least with the benefit of hindsight, I am not reluctant to say that the people promoting that genocide should have been stopped forcibly. Similarly, it's my impression that Charles Manson didn't personally kill anyone. He merely told his followers ridiculous stories of what the likely results of their killing certain people would be.
It would be nice if, as Socrates claimed, a bad argument could not defeat a good one, but if that were true we wouldn't need to overc...
"Especially odd from a person who believes in the probable possibility of humanly irresistible bad arguments as a reason against AI boxing. If there are minds that we can't let exist because they would make bad arguments that we would find persuasive, this seems terribly close, from an aggregative utilitarian standpoint, to killing them."
Fine, let me rephrase: in the human art of rationality there's a flat law against meeting arguments with violence, anywhere in the human world. In the superintelligent domain, as you say, violence is not an ontological category and there is no firm line between persuading someone with a bad argument and reprogramming their brain with nanomachines. In our world there is a firm line, however.
Let me put it this way: If you can invent a bullet that, regardless of how it is fired, or who fires it, only hits people who emit untrue statements, then you can try to use bullets as part of a Bayesian analysis. Until then, you really ought to consider the possibility of the other guy shooting back, no matter how right you are or how wrong they are, and ask whether you want to start down that road.
If the other guy shoots first, of course, that's a whole different story that has nothing to do with free speech.
So what is your response to someone like Hitler? Assuming the thug won't listen? Die? Run? I mean before the AGI goes "phoom".
Eliezer, I first saw the distinction between "natural" and "supernatural" made the way you describe in something by Richard Carrier. It was probably a blog entry from 2007-01, which points back to a couple of his earlier writings. I had a quick look at the 2003 one, and it mentions a few antecedents.
anonymous:
"In the future (if we survive the next century) there will be enough technological progression to create essential Communism (no-one needs to work, everyone will have necessary resources to live incredible lives and so forth)."
-10 points for confusing means with ends.
From the article:
"[...]there is never an Idea so true that it's wrong to criticize any argument that supports it."
Or make jokes about it? Having a sense of humour ought to be mentioned as a primary piece of equipment in the Bias-Buster's toolkit. It's easy and fun...
Eli, you said:
In the superintelligent domain, as you say, violence is not an ontological category and there is no firm line between persuading someone with a bad argument and reprogramming their brain with nanomachines. In our world there is a firm line, however.
I don't think there is such a firm line. I think argument shades smoothly into cult brainwashing techniques.
Peter: It seems to me that we can draw a firm line, but on one side sits our very strictest most careful thought in the spirit of inquiry and on the other sits everything remotely aimed at influencing others, from legal argument to scientific defense of a position to advertising to flirtation to music (at least lyrical music) to conversation using body language and tones of voice to cult brainwashing techniques and protest rallies etc. It's very clear that we can't live entirely to one side of that line, or if we can, that we can only live on the side that contains, well, life, and also, sadly, violence.
"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."
What about knowledge that is actually dangerous, e.g., the Utterly Convincing and Irresistible Five-Minute Seminar on Why We Should Build a UFAI, with highly detailed technical instructions?
"there are about as many communists in the world as there are Christians." Really? There are a lot of Christians. From what I've read, virtually nobody in China is a communist now, just as people had stopped believing in the last days of the Soviet Union. In North Korea or among the rebels of Nepal there are still true believers, but I don't think there are as many as there are Christians.
In general I like having a norm against using force when people make bad arguments. I deplore the anti-fascist fascists who seem to be the primary enemies of free speech to...
Incidentally, I've taken to using the term "afaithist" for myself rather than "atheist", largely due to the above-mentioned issues. I'm not so much concerned about various religious beliefs as about the notion of the virtue of non-rational/anti-rational belief, including various "must not question" flavors. Questions like the existence of god, etc., are almost incidental, questions of "mere" (eheh) fact.
Tom: If there were such a convincing seminar, perhaps it contains such a convincing argument that it's genuinely correct. Modify it to "Utterly Convincing and Irresistible Five-Minute Brainwashing Seminar On Why....." :)
"And it is triple ultra forbidden to respond with violence. There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses."
I'm half-convinced that Eliezer put that one in just to see whether we'd spot him contradicting his own advice and pick up on it, so that he can catch us all out in the next post in this series. I think that
"no ifs, ands, buts, or escape clauses... Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."
constitutes infinitely ma...
Anyway, an excellent post apart from the little "are you still awake" test at the end. It ties a lot of things together for me; I have always wondered how best to describe the commonality between Stalinist Communism and fundamentalist religion, as I often find myself debating whether religion is a net cause of human suffering or not.
No, anonymous. The problem with communism is that it's coercive and tyrannical. A super-duper welfare state is not the same as communism, especially as productivity goes up. The difference being: under a welfare state you are taxed a portion of what you have, and some of that goes to the poor. Under communism you are essentially owned by the state. The state can tell you when to work, what to work on, and how many hours. The state tells you what you can or cannot buy, because the state decides what will or will not be produced.
Whatever you think about welfare states, communism is something else entirely.
"Tom: If there was such a convincing eminar, perhaps it contains such a convincing argument that it's genunitely correct. Modify it to "Utterly Convincing and Irresistable Five-Minute Brainwashing Seminar On Why....." :)"
It was just an example; it was deliberately chosen to be as extreme as possible, to avoid grey-area questions. No human, so far as I can determine, has the intelligence to actually produce such a thing. Many less-extreme examples of this abound, e.g.: what should we do with blueprints for nuclear weapons? What about genetic databases which include deadly viruses? What about genetic databases which are 0.1% deadly viruses and 99.9% life-saving medical research? And on and on it goes.
"in the human art of rationality there's a flat law against meeting arguments with violence, anywhere in the human world"
No. You're confusing rationality with your own received ethical value system. Violence is both an appropriate and frequently necessary response to all sorts of arguments.
Drawing a sharp distinction like this between violence and the implied threat of violence (e.g., firing weapons versus "harsh words" and the invoking of authority backed by force) is problematic. The efficacy of the latter depends on the former; a law-maker known to be reliably nonviolent would be far less effective, like someone firing a harmless noisemaker.
"From what I've read, virtually nobody in China is a communist now, just as people had stopped believing in the last days of the Soviet Union. In North Korea or among the rebels of Nepal there are still true believers, but I don't think there are as many as there are Christians."
I find it useful to distinguish between the Chinese and the Swedish. I call the Chinese form of government "communism", and I call the Swedish form of government "socialism". If they are all sub-tribes of "Canadians" to you, then you don't prize dis...
"And it is triple ultra forbidden to respond with violence."
I agree. However, here are my minority beliefs on the topic: unless you use Philosophical Majoritarianism, or some other framework where you consider yourself as part of an ensemble of fallible human beings, it's fairly hard to conclusively demonstrate the validity of this rule, or indeed to draw any accurate conclusions about what to do in these cases.
If I consider my memories and my current beliefs in the abstract, as not a priori less fallible than anyone else's, a "no exceptions to Freedo...
A rule of human rationality becomes flat when the probability of falsely perceiving an exception to the rule, vastly exceeds the probability of ending up in a real-world situation where it genuinely makes sense to violate the rule. Relative, that is, to modern Earth, which includes many violent people and many difficult ethical dilemmas, but does not include the superbeing Omega credibly threatening to launch a black hole at the Sun if you don't shoot the next three Girl Scouts who try to sell you cookies.
I think a lot of the commenters to this thread are also missing the counterintuitive idea that once you fire a gun, the target or their survivors may shoot back.
Isn't the probability of ending up in a real world situation where the entire world is in terrible danger and only you can save it vastly smaller than that of falsely perceiving such a situation? Despite that, I'm glad Petrov made his decision. Expected costs and benefits have to be considered, not just probabilities, but then you are back in normal decision theory or at least normal but not yet invented "decision theory for biased finite agents".
Not murdering people for criticizing your beliefs is, at the very least, a useful heuristic.
"Isn't the probability of ending up in a real world situation where the entire world is in terrible danger and only you can save it vastly smaller than that of falsely perceiving such a situation? Despite that, I'm glad Petrov made his decision."
Fair enough. s/probability of/expected utilities associated with/
But you can still end up with a "flat" rule for the human art of rationality, when the expected negative utilities associated with biased decisions that "the end justifies the means, in just this one case here" exceed the expected...
Agreed, Nominull, spectacularly useful. Definitely the sort of heuristic one would sensibly like to promote.
Rolf: It seems to me that you are trying to assert that it is normative for agents to behave in a certain manner because the agents you are addressing are presumably non-normative. The trouble is, using that strategy you guarantee no normative agents. The non-normative agents are not corrected by adopting your strategy, as it only mitigates their irrationalities, while any normative agents are adopting an inappropriate strategy. You can never cho...
"in the human art of rationality there's a flat law against meeting arguments with violence, anywhere in the human world"
"No. You're confusing rationality with your own received ethical value system. Violence is both an appropriate and frequently necessary response to all sorts of arguments."
I want to note that Buzz Aldrin, the second man to set foot on the moon, famously encountered a man who denied that humans had ever gone to the moon and claimed that the videos of Buzz on the moon were filmed in Arizona. Buzz's response when the man presented his argu...
In this case it seems that Eliezer is a bit biased toward defending his stated position, despite the fact that it is entirely obvious that his "flat rule" is in fact a leaky generalization.
For example, he keeps mentioning consequences that result from the response of the person attacked or the imitation of others. These consequences will not follow in every case. There will be no such consequences when no one (including the person attacked) will ever find out that one has responded to an argument with violence.
One can easily think of many theoret...
It seems to me Eliezer has arrived at a line of argument that mirrors Buddhism and other similar systems. People are attached to certain ideas, concepts, belief systems, etc., and when two opposing ideas clash together the result is killing, and a destructive spiral. The challenge is to transcend the situation, by being able to keep the mind cold when the rest of society runs amok. Unfortunately, while scientists are good at describing a situation, when it comes to giving normative advice they are mostly useless.
Another thing, faith is so often brought up as a...
Indeed; Unknown has a good point. It's perfectly possible for violence to be the only action which will avert terrible outcomes, and it's perfectly possible for violence to not lead to further violence. As for
"A rule of human rationality becomes flat when the probability [expected utility] of falsely perceiving an exception to the rule, vastly exceeds the probability [expected utility] of ending up in a real-world situation where it genuinely makes sense to violate the rule."
why do special rules need to be invented here? A rational agent should a...
But you can still end up with a "flat" rule for the human art of rationality, when the expected negative utilities associated with biased decisions that "the end justifies the means, in just this one case here" exceed the expected positive utilities from cases where the universe really does end up a better place from shooting someone who makes an argument you don't like, after taking all side effects into account, including encouragement of similar behavior by others.
That might deserve a post of its own...
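One way to spell the comparison out (my own notation; nothing like this appears verbatim in the thread): let p_false be the probability that a biased agent falsely perceives "a real exception, just this once," L the expected loss from acting on that false perception, p_true the probability of a genuine exception, and G the gain from correctly violating the rule. The flat rule then has higher expected utility than case-by-case judgment whenever

p_false * L > p_true * G,

and because for biased humans p_false vastly exceeds p_true, the inequality can hold even when G is large. This is also how the Petrov objection above gets absorbed: it is the expected utilities, not the raw probabilities, that decide.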
There will be no such consequenc...
"You'd need a murder-suicide to pull this off properly..."
Reminds me of Agatha Christie's "Curtain". Of course this is fictional evidence, and in any case I was thinking of more obviously justified cases.
You can think of reasons to be violent, you can think of the good that violence might create, but consider this:
The only human being who is remembered as being completely good because he shot someone was Hitler, when he shot himself.
The list of positive changes accomplished in the REFUSAL to shoot anyone is much longer.
I don't believe violence can ever have a positive effect, except when used to defend against greater violence.
In argument, short of the entirely impossible situation where an abominable idea is irresistible to everyone else, (and assuming tha...
Let me rephrase: Hitler's action (suicide) was for the good. Not he as a human being, or pretty much anything else he did. (With the exception of the paintings; those weren't bad.) I really should proofread this before I come off as saying something completely different.
There are plenty of situations where violence is the correct answer. There are even situations where being the first to initiate violence is the correct answer, for example, to establish a property-ownership system and enforce against anyone being able to wander in and eat the crops you grew, even if they don't use violence before eating.
However, in real life, initiation of violence is never the correct answer to a verbal argument you don't like. Anyone can "imagine" exceptions to the rule, involving definite knowledge that an argument persuading other people is wrong, and (more difficult) absolute knowledge of the consequences, and (most difficult) themselves being the only people in the world who will ever pick up a gun. Except that it's easy to forget these as conditions, if you imagine in a naively realistic way - postulate a "wrong argument" instead of your own belief that an argument is wrong, postulate "I shoot them and that makes the problem go away" instead of your own belief that these are the consequences, and just not think about anyone else being inspired to follow the same rule. Real ethical rules, however, have to apply in the case of states of knowledge, rather than states of reality. So don't tell me about situations in which it is appropriate to respond to an argument with violence. Tell me about realistically obtainable states of belief in which it is appropriate to respond to an argument with violence.
It seems to me that normative statements like "let us go and serve other gods" aren't really something you can have a rational debate about. The question comes down to "which do you love more, your god or me", and the answer should always be "God"... according to God.
Similarly, one could have a rational debate about whether a command economy will outperform a market economy or vice versa (although the empirical evidence seems pretty one-sided), but a statement like "all people ought to be socially and economically equal" seems like something that just has to be accepted or rejected.
If you met John Barnes and he argued that he's doing the right thing, would it be appropriate to sock him in the jaw?
No, because the statement that "the only appropriate response to some arguments is a good swift sock in the jaw" is not itself one of the arguments whose appropriate response is a sock in the jaw. There may or may not be any such arguments, but socking him in the jaw is admitting that he is fundamentally right. Of course, it might be appropriate to sock him for some other reason :-)
One can argue that Buzz Aldrin had a special righ...
GW, to what extent should we treat people as we want them to treat us, and to what extent should we treat them the way they say is right and the way they treat others?
Sometimes it's polite to treat other people by their own standards, and it isn't an admission that their way is right and ours is wrong.
J Thomas: Ideally you knock him out and he falls down and hits his head on the floor, and when he wakes up he will be a chastened antisemite, a subdued antisemite, a far more submissive antisemite. He will not annoy you with logical argument.
Gosh, I hope no one ever tries anything similar on a Jew.
TGGP writes:
"I recognize that in some situations it could hypothetically be the case that free speech leads to bad outcomes, in which case I'd be alright with restricting it. I think such cases would be fantastically rare..."
What about the "Werther Effect"? Journalism guidelines are drafted on the assumption that it is real, and browsing through PubMed suggests that the evidence is strong enough.
So, if imitative suicide is facilitated through art or media stimuli in predictable ways, isn't the empirical question as to whether there are...
What about the "Werther Effect"? I'm not really that bothered by a bunch of people I don't know killing themselves. It's your life to make or take.
Unless you don't think suicide is a bad thing, I suppose. I think my more apathetic attitude toward human life separates me from transhumanists/immortalists. I discuss that a bit here. I'm thinking more along the lines of violent totalitarian ideologies that have a reasonable chance of taking over and really screwing things up.
You can see the Buzz Aldrin punch on Youtube.
I heard he also roundhouse kicked a holocaust denier through a plate glass window and karate chopped a 9/11 truther in the balls.
dutz, as paintings, yes, they weren't any good. But still, much better than genocide.
Violence may convince your opponent it isn't worth arguing with you. But it will convince your audience that you're an emotional, impulsive, irrational person, no matter how right you were.
People can see someone as less than human. Until they see them getting beaten with fire hoses, and then pity sinks in.
I think in the original context, Eliezer was talking about violence committed by a society/sect/police force against an individual.
I happen to believe a swift punch in the ...
DaCracka: I think these are two issues related in a different way. His paintings were not better than genocide. This is like saying butter is better than a smack in the face; it's kind of illogical. Though, if his paintings had been better, there would have been a chance to avoid this genocide, because the Academy would have accepted him and he might have become a painter instead of a dictator. About the violence thing: I agree nobody should react with violence to an argument. There are people out there who do so. They do it because they are eit...
Every now and then, you see people arguing over whether atheism is a “religion.” As I touch on elsewhere, in “Purpose and Pragmatism,” arguing over the meaning of a word nearly always means that you’ve lost track of the original question.[1] How might this argument arise to begin with?
An atheist is holding forth, blaming “religion” for the Inquisition, the Crusades, and various conflicts with or within Islam. The religious one may reply, “But atheism is also a religion, because you also have beliefs about God; you believe God doesn’t exist.” Then the atheist answers, “If atheism is a religion, then not collecting stamps is a hobby,” and the argument begins.
Or the one may reply, “But horrors just as great were inflicted by Stalin, who was an atheist, and who suppressed churches in the name of atheism; therefore you are wrong to blame the violence on religion.” Now the atheist may be tempted to reply, “No true Scotsman,” saying, “Stalin’s religion was Communism.” The religious one answers “If Communism is a religion, then Star Wars fandom is a government,” and the argument begins.
Should a “religious” person be defined as someone who has a definite opinion about the existence of at least one God, e.g., assigning a probability lower than 10% or higher than 90% to the existence of Zeus? Or should a “religious” person be defined as someone who has a positive opinion (say, a probability higher than 90%) on the existence of at least one God? In the former case, Stalin was “religious”; in the latter case, Stalin was “not religious.”
But this is exactly the wrong way to look at the problem. What you really want to know—what the argument was originally about—is why, at certain points in human history, large groups of people were slaughtered and tortured, ostensibly in the name of an idea. Redefining a word won’t change the facts of history one way or the other.
Communism was a complex catastrophe, and there may be no single why, no single critical link in the chain of causality. But if I had to suggest an ur-mistake, it would be . . . well, I’ll let God say it for me:

If thy brother, the son of thy mother, or thy son, or thy daughter, or the wife of thy bosom, or thy friend, which is as thine own soul, entice thee secretly, saying, Let us go and serve other gods, which thou hast not known, thou, nor thy fathers . . . thou shalt not consent unto him, nor hearken unto him; neither shall thine eye pity him, neither shalt thou spare, neither shalt thou conceal him: but thou shalt surely kill him; thine hand shall be first upon him to put him to death, and afterwards the hand of all the people. And thou shalt stone him with stones, that he die; because he hath sought to thrust thee away from the Lord thy God.

(Deuteronomy 13:6-10, King James Version)
This was likewise the rule which Stalin set for Communism, and Hitler for Nazism: if your brother tries to tell you why Marx is wrong, if your son tries to tell you the Jews are not planning world conquest, then do not debate him or set forth your own evidence; do not perform replicable experiments or examine history; but turn him in at once to the secret police.
I suggested that one key to resisting an affective death spiral is the principle of “burdensome details”—just remembering to question the specific details of each additional nice claim about the Great Idea.[2] This wouldn’t get rid of the halo effect, but it would hopefully reduce the resonance to below criticality, so that one nice-sounding claim triggers less than 1.0 additional nice-sounding claims, on average.
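The “less than 1.0” threshold is the criticality condition of a branching process, the same mathematics that governs epidemics and chain reactions. As a purely illustrative sketch (the simulation, its function names, and its parameters are my own, not anything from the essay), here is a minimal model in which each accepted nice-sounding claim triggers a random number of further claims with a fixed mean:

```python
import random

def cascade_size(mean_triggers, cap=10_000):
    """Total nice-sounding claims accepted before the cascade dies out.

    Each accepted claim triggers Binomial(20, mean_triggers/20) new
    claims, i.e. `mean_triggers` follow-on claims on average. The
    count is capped so that supercritical runs terminate.
    """
    pending, total = 1, 0
    while pending and total < cap:
        pending -= 1
        total += 1
        # Number of further claims this claim triggers.
        pending += sum(random.random() < mean_triggers / 20
                       for _ in range(20))
    return total

random.seed(0)
for mean_triggers in (0.8, 1.2):  # subcritical vs. supercritical
    sizes = [cascade_size(mean_triggers) for _ in range(2000)]
    runaways = sum(s >= 10_000 for s in sizes)
    print(f"{mean_triggers} triggers per claim: "
          f"mean cascade {sum(sizes) / len(sizes):.1f} claims, "
          f"{runaways}/{len(sizes)} cascades ran away")
```

Below the threshold, cascades average about 1/(1 - 0.8) = 5 claims and always fizzle out; above it, a sizable fraction grow until something external (here, the cap) stops them. That qualitative jump at 1.0 is what “supercritical” means in the paragraphs that follow.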
The diametric opposite of this advice, which sends the halo effect supercritical, is when it feels wrong to argue against any positive claim about the Great Idea.
Politics is the mind-killer. Arguments are soldiers. Once you know which side you’re on, you must support all favorable claims, and argue against all unfavorable claims. Otherwise it’s like giving aid and comfort to the enemy, or stabbing your friends in the back.
If . . .
. . . then the affective death spiral has gone supercritical. It is now a Super Happy Death Spiral.
When it comes to our original question—“What makes the slaughter?”—the key category to pay attention to isn’t religion as such. The best distinction I’ve heard between “supernatural” and “naturalistic” worldviews is that a supernatural worldview asserts the existence of ontologically basic mental substances, like spirits, while a naturalistic worldview reduces mental phenomena to nonmental parts. Focusing on this as the source of the problem buys into religious exceptionalism. Supernaturalist claims are worth distinguishing, because they always turn out to be wrong for fairly fundamental reasons.[3] But it’s still just one kind of mistake.
An affective death spiral can nucleate around supernatural beliefs—particularly monotheisms whose pinnacle is a Super Happy Agent, defined primarily by agreeing with any nice statement about it—and particularly meme complexes grown sophisticated enough to assert supernatural punishments for disbelief. But the death spiral can also start around a political innovation, a charismatic leader, belief in racial destiny, or an economic hypothesis. The lesson of history is that affective death spirals are dangerous whether or not they happen to involve supernaturalism. Religion isn’t special enough, as a class of mistake, to be the key problem.
Sam Harris came closer when he put the accusing finger on faith. If you don’t place an appropriate burden of proof on each and every additional nice claim, the affective resonance gets started very easily. Look at the poor New Agers. Christianity developed defenses against criticism, arguing for the wonders of faith; New Agers culturally inherit the cached thought that faith is positive, but lack Christianity’s exclusionary scripture to keep out competing memes. New Agers end up in happy death spirals around stars, trees, magnets, diets, spells, unicorns . . .
But the affective death spiral turns much deadlier after criticism becomes a sin, or a gaffe, or a crime. There are things in this world that are worth praising greatly, and you can’t flatly say that praise beyond a certain point is forbidden. But there is never an Idea so true that it’s wrong to criticize any argument that supports it. Never. Never ever never for ever. That is flat. The vast majority of possible beliefs in a nontrivial answer space are false, and likewise, the vast majority of possible supporting arguments for a true belief are also false, and not even the happiest idea can change that.
And it is triple ultra forbidden to respond to criticism with violence. There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.
[1] Link: http://lesswrong.com/lw/lf/purpose_and_pragmatism/.
[2] It’s not trivial advice. People often don’t remember to do this when they’re listening to a futurist sketching amazingly detailed projections about the wonders of tomorrow, let alone when they’re thinking about their favorite idea ever.
[3] See, for example, “Mysterious Answers to Mysterious Questions” in Map and Territory.