Uncritical Supercriticality

Followup to: Resist the Happy Death Spiral

Every now and then, you see people arguing over whether atheism is a "religion".  As I touched on in Purpose and Pragmatism, arguing over the meaning of a word nearly always means that you've lost track of the original question.  How might this argument arise to begin with?

An atheist is holding forth, blaming "religion" for the Inquisition, the Crusades, and various conflicts with or within Islam.  The religious one may reply, "But atheism is also a religion, because you also have beliefs about God; you believe God doesn't exist."  Then the atheist answers, "If atheism is a religion, then not collecting stamps is a hobby," and the argument begins.

Or the one may reply, "But horrors just as great were inflicted by Stalin, who was an atheist, and who suppressed churches in the name of atheism; therefore you are wrong to blame the violence on religion."  Now the atheist may be tempted to reply "No true Scotsman", saying, "Stalin's religion was Communism."  The religious one answers "If Communism is a religion, then Star Wars fandom is a government," and the argument begins.

Should a "religious" person be defined as someone who has a definite opinion about the existence of at least one God, e.g., assigning a probability lower than 10% or higher than 90% to the existence of Zeus?  Or should a "religious" person be defined as someone who has a positive opinion, say a probability higher than 90%, for the existence of at least one God?  In the former case, Stalin was "religious"; in the latter case, Stalin was "not religious".

But this is exactly the wrong way to look at the problem.  What you really want to know—what the argument was originally about—is why, at certain points in human history, large groups of people were slaughtered and tortured, ostensibly in the name of an idea.  Redefining a word won't change the facts of history one way or the other.

Communism was a complex catastrophe, and there may be no single why, no single critical link in the chain of causality.  But if I had to suggest an ur-mistake, it would be... well, I'll let God say it for me:

"If your brother, the son of your father or of your mother, or your son or daughter, or the spouse whom you embrace, or your most intimate friend, tries to secretly seduce you, saying, 'Let us go and serve other gods,' unknown to you or your ancestors before you, gods of the peoples surrounding you, whether near you or far away, anywhere throughout the world, you must not consent, you must not listen to him; you must show him no pity, you must not spare him or conceal his guilt. No, you must kill him, your hand must strike the first blow in putting him to death and the hands of the rest of the people following.  You must stone him to death, since he has tried to divert you from Yahweh your God."  (Deuteronomy 13:7-11, emphasis added)

This was likewise the rule which Stalin set for Communism, and Hitler for Nazism: if your brother tries to tell you why Marx is wrong, if your son tries to tell you the Jews are not planning world conquest, then do not debate him or set forth your own evidence; do not perform replicable experiments or examine history; but turn him in at once to the secret police.

Yesterday, I suggested that one key to resisting an affective death spiral is the principle of "burdensome details"—just remembering to question the specific details of each additional nice claim about the Great Idea.  (It's not trivial advice.  People often don't remember to do this when they're listening to a futurist sketching amazingly detailed projections about the wonders of tomorrow, let alone when they're thinking about their favorite idea ever.)  This wouldn't get rid of the halo effect, but it would hopefully reduce the resonance to below criticality, so that one nice-sounding claim triggers less than 1.0 additional nice-sounding claims, on average.
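The subcritical/supercritical language here is the same branching-process threshold that appears in epidemiology and nuclear physics, and the analogy can be sketched in a few lines of code (purely my illustration of the metaphor, not anything from the original post): if each nice-sounding claim triggers an average of b further nice-sounding claims, cascades fizzle out when b < 1 and run away when b > 1.

```python
import math
import random

def poisson(lam):
    # Knuth's method for sampling a Poisson(lam) random variate
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def cascade_size(branching_factor, cap=10_000):
    """Total claims generated when each claim triggers, on average,
    `branching_factor` further claims (a Galton-Watson process)."""
    total = frontier = 1
    while frontier and total < cap:
        frontier = sum(poisson(branching_factor) for _ in range(frontier))
        total += frontier
    return min(total, cap)

random.seed(0)
trials = 200
subcritical = sum(cascade_size(0.7) for _ in range(trials)) / trials
supercritical = sum(cascade_size(1.3) for _ in range(trials)) / trials
# Subcritical cascades average about 1/(1 - 0.7) ≈ 3.3 claims and die out;
# supercritical cascades frequently run away to the cap.
print(subcritical, supercritical)
```

The particular numbers are arbitrary; the point is the sharp threshold at 1.0, which is why a small extra dose of credulity per claim can be the difference between ordinary enthusiasm and a death spiral.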

The diametric opposite of this advice, which sends the halo effect supercritical, is when it feels wrong to argue against any positive claim about the Great Idea.  Politics is the mind-killer.  Arguments are soldiers.  Once you know which side you're on, you must support all favorable claims, and argue against all unfavorable claims.  Otherwise it's like giving aid and comfort to the enemy, or stabbing your friends in the back.

If...

  • ...you feel that contradicting someone else who makes a flawed nice claim in favor of evolution, would be giving aid and comfort to the creationists;
  • ...you feel like you get spiritual credit for each nice thing you say about God, and arguing about it would interfere with your relationship with God;
  • ...you have the distinct sense that the other people in the room will dislike you for "not supporting our troops" if you argue against the latest war;
  • ...saying anything against Communism gets you shot;

...then the affective death spiral has gone supercritical.  It is now a Super Happy Death Spiral.

It's not religion, as such, that is the key categorization, relative to our original question:  "What makes the slaughter?"  The best distinction I've heard between "supernatural" and "naturalistic" worldviews is that a supernatural worldview asserts the existence of ontologically basic mental substances, like spirits, while a naturalistic worldview reduces mental phenomena to nonmental parts.  (Thanks, g!)  Focusing on this as the source of the problem buys into religious exceptionalism.  Supernaturalist claims are worth distinguishing, because they always turn out to be wrong for fairly fundamental reasons.  But it's still just one kind of mistake.

An affective death spiral can nucleate around supernatural beliefs; especially monotheisms whose pinnacle is a Super Happy Agent, defined primarily by agreeing with any nice statement about it; especially meme complexes grown sophisticated enough to assert supernatural punishments for disbelief.  But the death spiral can also start around a political innovation, a charismatic leader, belief in racial destiny, or an economic hypothesis.  The lesson of history is that affective death spirals are dangerous whether or not they happen to involve supernaturalism.  Religion isn't special enough, as a class of mistake, to be the key problem.

Sam Harris came closer when he put the accusing finger on faith. If you don't place an appropriate burden of proof on each and every additional nice claim, the affective resonance gets started very easily.  Look at the poor New Agers.  Christianity developed defenses against criticism, arguing for the wonders of faith; New Agers culturally inherit the cached thought that faith is positive, but lack Christianity's exclusionary scripture to keep out competing memes.  New Agers end up in happy death spirals around stars, trees, magnets, diets, spells, unicorns...

But the affective death spiral turns much deadlier after criticism becomes a sin, or a gaffe, or a crime.  There are things in this world that are worth praising greatly, and you can't flatly say that praise beyond a certain point is forbidden.  But there is never an Idea so true that it's wrong to criticize any argument that supports it.  Never.  Never ever never for ever.  That is flat.  The vast majority of possible beliefs in a nontrivial answer space are false, and likewise, the vast majority of possible supporting arguments for a true belief are also false, and not even the happiest idea can change that.

And it is triple ultra forbidden to respond to criticism with violence.  There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses.  This is one of them.  Bad argument gets counterargument.  Does not get bullet.  Never.  Never ever never for ever.

 

Part of the Death Spirals and the Cult Attractor subsequence of How To Actually Change Your Mind

Next post: "Evaporative Cooling of Group Beliefs"

Previous post: "Resist the Happy Death Spiral"

163 comments

Err... did that post end up dying in a free speech happy death spiral?

Especially odd from a person who believes in the probable possibility of humanly irresistible bad arguments as a reason for not AI boxing. If there are minds that we can't let exist because they would make bad arguments that we would find persuasive this seems terribly close, from an aggregative utilitarian standpoint, to killing them.

I'm not an expert in the Rwandan genocide, but it's my impression that to a substantial extent the people behind it basically just made arguments (bad ones, of a primarily ad-hominem form like "Tutsis are like cockroaches") for killing them and people who listened to those arguments on the radio went along with it. At least with the benefit of hindsight I am reluctant to say that the people promoting that genocide should have been stopped forcibly. Similarly, it's my impression that Charles Manson didn't personally kill anyone. He merely told his followers ridiculous stories of what the likely results of their killing certain people would be.

It would be nice if, as Socrates claimed, a bad argument cannot defeat a good one, but if that were true we wouldn't need to overcome bias. With respect to our own biases, hopefully careful thought and study of psychology is the only tool we will ever need to overcome them, but with respect to the biases of others it would be terribly biased to never consider the possibility that other tools are necessary. We can find good heuristics, like "don't violently suppress anyone who isn't actively promoting violence", but sadly violence isn't a basic ontological category, so we can't cleanly divide the world into violent and non-violent actions, nor into statements that promote or don't promote some conclusion (in the context of what goal system?).

There are plenty of situations where violence is the correct answer. There are even situations where being the first to initiate violence is the correct answer, for example, to establish a property-ownership system and enforce against anyone being able to wander in and eat the crops you grew, even if they don't use violence before eating.

However, in real life, initiation of violence is never the correct answer to a verbal argument you don't like. Anyone can "imagine" exceptions to the rule, involving definite knowledge that an argument persuading other people is wrong, and (more difficult) absolute knowledge of the consequences, and (most difficult) themselves being the only people in the world who will ever pick up a gun. Except that it's easy to forget these as conditions, if you imagine in a naively realistic way - postulate a "wrong argument" instead of your own belief that an argument is wrong, postulate "I shoot them and that makes the problem go away" instead of your own belief that these are the consequences, and just not think about anyone else being inspired to follow the same rule. Real ethical rules, however, have to apply in the case of states of knowledge, rather than states of reality. So don't tell me about situations in which it is appropriate to respond to an argument with violence. Tell me about realistically obtainable states of belief in which it is appropriate to respond to an argument with violence.

What was the point of quoting Deuteronomy, then? The Deuteronomy quote is very specifically about introducing the worship of foreign or novel gods. Conflating this with a general decree to punish critics is a totally implausible reading to anyone who’s actually bothered to pay attention to the Bible; ancient Israelite prophets frequently claimed that Yahweh’s instructions had been wrongly construed, and that the dominant power structure (including both kings and the priesthood) was in error. They seem to have been a sufficiently protected class that kings and priests would sometimes yell at them, but rarely physically injure them.

The correct modern analogue to advocating the worship of a foreign god, is advocating cooperation with a foreign government. The contemporary analogue to stoning the person introducing the worship of foreign gods, would be imposing legal sanctions against Facebook for colluding with Russian intelligence services to manipulate American election results.

I've found this response to be incredibly useful in other discussions of morality. I hadn't found it formulated elsewhere, and had been looking for something like it for a long time.

Especially odd from a person who believes in the probable possibility of humanly irresistible bad arguments as a reason for not AI boxing. If there are minds that we can't let exist because they would make bad arguments that we would find persuasive this seems terribly close, from an aggregative utilitarian standpoint, to killing them.

Fine, let me rephrase: in the human art of rationality there's a flat law against meeting arguments with violence, anywhere in the human world. In the superintelligent domain, as you say, violence is not an ontological category and there is no firm line between persuading someone with a bad argument and reprogramming their brain with nanomachines. In our world there is a firm line, however.

Let me put it this way: If you can invent a bullet that, regardless of how it is fired, or who fires it, only hits people who emit untrue statements, then you can try to use bullets as part of a Bayesian analysis. Until then, you really ought to consider the possibility of the other guy shooting back, no matter how right you are or how wrong they are, and ask whether you want to start down that road.

If the other guy shoots first, of course, that's a whole different story that has nothing to do with free speech.

"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."

What about knowledge which is actually dangerous, e.g., the Utterly Convincing and Irresistible Five-Minute Seminar on Why We Should Build a UFAI, with highly detailed technical instructions?

Not murdering people for criticizing your beliefs is, at the very least, a useful heuristic.

J Thomas: Ideally you knock him out and he falls down and hits his head on the floor, and when he wakes up he will be a chastened antisemite, a subdued antisemite, a far more submissive antisemite. He will not annoy you with logical argument.

Gosh, I hope no one ever tries anything similar on a Jew.

Peter: It seems to me that we can draw a firm line, but on one side sits our very strictest most careful thought in the spirit of inquiry and on the other sits everything remotely aimed at influencing others, from legal argument to scientific defense of a position to advertising to flirtation to music (at least lyrical music) to conversation using body language and tones of voice to cult brainwashing techniques and protest rallies etc. It's very clear that we can't live entirely to one side of that line, or if we can, that we can only live on the side that contains, well, life, and also, sadly, violence.

Isn't the probability of ending up in a real world situation where the entire world is in terrible danger and only you can save it vastly smaller than that of falsely perceiving such a situation? Despite that, I'm glad Petrov made his decision.

Fair enough. s/probability of/expected utilities associated with/

But you can still end up with a "flat" rule for the human art of rationality, when the expected negative utilities associated with biased decisions that "the end justifies the means, in just this one case here" exceed the expected positive utilities from cases where the universe really does end up a better place from shooting someone who makes an argument you don't like, after taking all side effects into account, including encouragement of similar behavior by others.

Remember, human targets shoot back. Since bullets are not even probabilistically more likely to hit when fired at a human target who has just made false statements as opposed to true statements, it's very difficult to see how a social decision process can be made more rational by introducing bullets into it.

Isn't the probability of ending up in a real world situation where the entire world is in terrible danger and only you can save it vastly smaller than that of falsely perceiving such a situation? Despite that, I'm glad Petrov made his decision.

Expected costs and benefits have to be considered, not just probabilities, but then you are back in normal decision theory, or at least a normal but not yet invented "decision theory for biased finite agents".

A rule of human rationality becomes flat when the probability of falsely perceiving an exception to the rule, vastly exceeds the probability of ending up in a real-world situation where it genuinely makes sense to violate the rule. Relative to modern Earth which includes many violent people and many difficult ethical dilemmas, but does not include the superbeing Omega credibly threatening to launch a black hole at the Sun if you don't shoot the next three Girl Scouts who try to sell you cookies.

I think a lot of the commenters to this thread are also missing the counterintuitive idea that once you fire a gun, the target or their survivors may shoot back.

Incidentally, I've taken to using the term "afaithist" for myself rather than "atheist", largely due to the above-mentioned issues. I'm not concerned so much about various religious beliefs as about the notion of the virtue of non-rational/anti-rational belief, including its various "must not question" flavors. Questions like the existence of god, etc., are almost incidental, questions of "mere" (eheh) fact.

Tom: If there were such a convincing seminar, perhaps it contains such a convincing argument that it's genuinely correct. Modify it to "Utterly Convincing and Irresistible Five-Minute Brainwashing Seminar On Why....." :)

there are about as many communists in the world as there are Christians.

Really? There are a lot of Christians. From what I've read, virtually nobody in China is a communist now, just as people had stopped believing in the last days of the Soviet Union. In North Korea or among the rebels of Nepal there are still true believers, but I don't think there are as many as there are Christians.

In general I like having a norm against using force when people make bad arguments. I deplore the anti-fascist fascists who seem to be the primary enemies of free speech today. At the same time I recognize that in some situations it could hypothetically be the case that free speech leads to bad outcomes, in which case I'd be alright with restricting it. I think such cases would be fantastically rare and would likely only occur during a civil war (a category I don't consider wars of secession to be members of). I recognize though that in normal situations giving a directive/command as opposed to an argument for something should be treated as solicitation of an act. Stephan Kinsella discusses that in Causality and Aggression.

I think that you can't count most of the Chinese as non-communist. Centralized propaganda is a strong weapon and shouldn't be discounted. When people first started doubting church dogmas, they mostly developed some kind of heresy, not atheism. They didn't believe in the official religion, but from an outside observer's point of view their beliefs were almost indistinguishable from official dogma. And in the example of the Soviet Union, the communist party still exists in Russia. Its influence slowly died out, but right after the disintegration of the Soviet Union it had a really good chance to win elections.

Eliezer, I first saw the distinction between "natural" and "supernatural" made the way you describe in something by Richard Carrier. It was probably a blog entry from 2007-01, which points back to a couple of his earlier writings. I had a quick look at the 2003 one, and it mentions a few antecedents.

If you met John Barnes and he argued that he's doing the right thing, would it be appropriate to sock him in the jaw?

No, because the statement that "the only appropriate response to some arguments is a good swift sock in the jaw" is not itself one of the arguments whose appropriate response is a sock in the jaw. There may or may not be any such arguments, but socking him in the jaw is admitting that he is fundamentally right. Of course, it might be appropriate to sock him for some other reason :-)

One can argue that Buzz Aldrin had a special right to sock the guy that you or I would not. To me, claiming the moon landing was faked is just an absurd statement. Saying it in front of Buzz is unjustifiably calling the man a fraud and a liar. Buzz shouldn't have to put up with that kind of crap.

Eli, you said:

In the superintelligent domain, as you say, violence is not an ontological category and there is no firm line between persuading someone with a bad argument and reprogramming their brain with nanomachines. In our world there is a firm line, however.

I don't think there is such a firm line. I think argument shades smoothly into cult brainwashing techniques.

Sam Harris came closer when he put the accusing finger on faith. If you don't place an appropriate burden of proof on each and every additional nice claim, the affective resonance gets started very easily.

How does one determine the appropriate burden of proof?

I would say that is when there is empirical evidence supporting the claim but, try as you might, you can't find any that falsifies it.

I think having a flawed (human) agent using this technique is too susceptible to the agent convincing themselves that they tried hard enough, and so we've just pushed the problem one step back: how do you know when you've tried hard enough?

In this case it seems that Eliezer is a bit biased toward defending his stated position, despite the fact that it is entirely obvious that his "flat rule" is in fact a leaky generalization.

For example, he keeps mentioning consequences that result from the response of the person attacked or the imitation of others. These consequences will not follow in every case. There will be no such consequences when no one (including the person attacked) will ever find out that one has responded to an argument with violence.

One can easily think of many theoretically possible circumstances (not involving superintelligence) in which one can prevent immense evils and bring about immense goods by responding to an argument with violence, and yet satisfying the condition above, that no one will ever find out.

It is true that such circumstances are not particularly probable, yet they might well be quite recognizable if they actually happened. Thus, there can be no such flat rule of rationality.

in the human art of rationality there's a flat law against meeting arguments with violence, anywhere in the human world

No. You're confusing rationality with your own received ethical value system. Violence is both an appropriate and frequently necessary response to all sorts of arguments.

A strong claim but without any evidence to back it up. Perhaps you could at least give some examples of arguments for which the necessary response is violence.

Perhaps you could at least give some examples of arguments for which the necessary response is violence.

Possibly, but only if you roll a natural 20 on your necromancy check. You are replying to a comment that was posted in 2007 - and on a different blog! Perhaps not the ideal place to play the "my position is the default - you are the one who needs to supply all the evidence!" game. Or, then again, perhaps it is the ideal place!

A strong claim but without any evidence to back it up.

It is not even a question for which demanding evidence makes sense - at least without specifying more clearly what kind of observations of the world you are considering. One could assume that you mean "give me evidence that the consequences of responding to arguments with violence can be positive" - but then you have already lost to Caledonian's position. When you are looking at consequences, "argument" and "violence" are just two different kinds of power. Occasionally the latter is to be preferred to the former.

The only way "there's a flat law against meeting arguments with violence, anywhere in the human world" was going to hold was if it stayed purely in the ideological realm. And a Traditional Rationality ideology realm more than a Bayesian Rationality one. "Arguments" can, at times, be a greater epistemic rationality violation than mere violence.

Possibly, but only if you roll a natural 20 on your necromancy check.

:) I did know that I was responding to an ancient response... but I had thought that Caledonian may still be lurking about this site...

In a way my response was more to point out the problem with what he said - than to actually request a specific response from him. If somebody else came along later and happened to agree with Caledonian, they might point out evidence that would support his claim... thus I figured it was worth posting anyway.

We keep being told to comment regardless of how old the posts are... and this is why.

One could assume that you mean "give me evidence that the consequences of responding to arguments with violence can be positive"

Nah - that's the wrong tack. I'm sure there are things where violence could be a positive response. But the claim made was that there are things for which the necessary response is violence... as though for certain situations, only violence will work.

Perhaps there are such situations.. but Caledonian did not even give examples, let alone evidence to support his claim... Thus my reaction.

I'd argue that there are vanishingly few situations in which the only possible solution is violence... but, as I stated, would welcome evidence/discussion to the contrary.

This might be stretching the definition of an "argument", but I think there's a class of speech that must be dealt with by violence. The key identifier of this class is that there is a time-critical danger from third parties accepting the argument.

In other words, it's not so much violence used to prevent Alice from trying to convince you that the sky is red, but violence used to prevent Alice from trying to convince Bob to participate in a lynching.

I'd disagree that violence was the only option in that case. I think the best option might be to spirit away the potential lynchee. If they've already got him strung up - then firing a shot in the air, followed by harsh words from the local law-maker would be the next option... violence is still an option, but not the only one, and not necessarily the first port of call.

I think it's quite rare for violence to be the only option available.

Drawing a sharp distinction like this between violence and the implied threat of violence (e.g., firing weapons and "harsh words" and the invoking of authority backed by force) is problematic. The efficacy of the latter depends on the former; a law-maker known to be reliably nonviolent firing a harmless noisemaker would be far less effective.

You may be right, but I wanted to point out that I think violent repression of speech is an accepted part of common law and an available remedy in the United States and other countries based on English common law. Civil actions related to libel and slander ultimately carry a threat of violence by the state in recovering judgments found against someone in a civil court. You never see it actually happen, and often judgments are not collected at all, but it is completely possible you could be found in contempt, or have a lien against property that eventually results in an arrest warrant.

I had thought that Caledonian may still be lurking about this site...

Actually, I think he got banned - if you look at his last comments, he certainly thought it likely that he was going to be.

Yeah - could be. I also read a comment from EY saying that it was specifically his vote keeping him unbanned.

I've read other comments from EY that seem to suggest that Caledonian was being kept around as a kind of troll-in-residence.... including one that seemed to indicate that EY used responses to Caledonian to determine whether or not he'd got his point across well enough :)

Interesting. I don't remember those, but I do remember several discussions where EY wanted to ban Caledonian and various other people talked him out of it.

My understanding is that it's supposed to be a "Bayesian Rationality on leaky hardware" thing. This makes finding evidence for and against very subtle, because you have to come up with some kind of reference class that's objective in a certain hard-to-define way.

But some kind of argumentation is necessary and has some chance of working.

No, anonymous. The problem with communism is that it's coercive and tyrannical. A super-duper welfare state is not the same as communism, especially as productivity goes up. The difference being: under a welfare state you are taxed a portion of what you have, and some of that goes to the poor. Under communism you are essentially owned by the state. The state can tell you when to work, what to work on, and how many hours. The state tells you what you can or cannot buy, because the state decides what will or will not be produced.

Whatever you think about welfare states, communism is something else entirely.

anonymous:

"In the future (if we survive the next century) there will be enough technological progression to create essential Communism (no-one needs to work, everyone will have necessary resources to live incredible lives and so forth)."

-10 points for confusing means with ends.

From the article:

"[...]there is never an Idea so true that it's wrong to criticize any argument that supports it."

Or make jokes about it? Having a sense of humour ought to be mentioned as a primary piece of equipment in the Bias-Buster's toolkit. It's easy and fun! After all, a defining feature of True Believers is that they lack a sense of irony.

While reading, I tried to think of a case when I fell into an affective death spiral, and an interesting thing came to my mind. Does falling in love fall under the halo effect? Butterflies in the stomach, worshiping the beloved, etc... Does that mean that someone who overcomes this bias can't fall in love that way anymore?

There is a difference between infatuation and love. (Similar to the difference between "Hollywood rationality" and rationality.) Affective death spiral is infatuation. A person who overcomes this bias will not say things like: "Oh, if this amazing person I met five minutes ago will not friend me on facebook then my life has no meaning and I have to slash my wrists."

Yes, infatuation is what I really wanted to say. (I'm not a native speaker.) So, two points:

  1. The affective death spiral has a leading role in the existence of humanity (if no one had it, fewer children would be born).
  2. It's kinda shitty to find out that butterflies are a consequence of false beliefs, which could lead to people being resistant to accepting this whole idea.

Crosslinking:

If you don't place an appropriate burden of proof on each and every additional nice claim, the affective resonance gets started very easily. Look at the poor New Agers. Christianity developed defenses against criticism, arguing for the wonders of faith; New Agers culturally inherit the cached thought that faith is positive, but lack Christianity's exclusionary scripture to keep out competing memes. New Agers end up in happy death spirals around stars, trees, magnets, diets, spells, unicorns...

In the August 2010 open thread, Risto_Saarelma linked to this relevant article from an insider’s perspective on (if I recall correctly) something like the above matter:

Bridging the Chasm between Two Cultures: A former New Age author writes about slowly coming to realize New Age is mostly bunk and that the skeptic community actually might have a good idea about keeping people from messing themselves up. Also about how hard it is to open a genuine dialogue with the New Age culture, which has set up pretty formidable defenses to perpetuate itself.

(Deuteronomy 13:7-11)

Talk about a successful meme strategy! No wonder we still have this religion today. It killed off its competitors.

There is one type of claim for which a bullet does provide relevant evidence: claims about bullets themselves, such as "Your bullets will not harm me." (For example, an armored vehicle that is supposed to protect its occupants from gunfire will certainly end up being tested against actual gunfire at some point before its design is put into widespread use.)

About violence and society. What do we define by violence? Do we also define intrusion in our personal sphere, psychological re-programming, etc. as violent activities?
You should read Randall Collins.

What about the Resistance in countries that were occupied by Nazi Germany?
Did they actually accomplish anything? I think it was the violence of the opposing armies that actually ended Nazi occupation.

"But there is never an Idea so true that it's wrong to criticize any argument that supports it. Never. Never ever never for ever."

Was it wrong for the guy who thought Buzz Aldrin helped fake the moon landing to present his arguments to Buzz?

One of the Hungarian Manhattan-project physicists had a slogan that went "It is not enough to be rude, one must also be wrong." When it comes time to decide whether to answer a verbal argument with violence, does it matter whether the argument is wrong, or is it enough to be rude?

You can see the Buzz Aldrin punch on Youtube.

I heard he also roundhouse kicked a holocaust denier through a plate glass window and karate chopped a 9/11 truther in the balls.

TGGP writes:

"I recognize that in some situations it could hypothetically be the case that free speech leads to bad outcomes, in which case I'd be alright with restricting it. I think such cases would be fantastically rare..."

What about the "Werther Effect"? Journalism guidelines are drafted on the assumption that it is real, and browsing through PubMed suggests that the evidence is strong enough.

So, if imitative suicide is facilitated through art or media stimuli in predictable ways, isn't the empirical question as to whether there are "bad consequence of free speech" answered, with the reality being more prosaic than fantastically rare?

Unless you don't think suicide is a bad thing, I suppose.