We've all had arguments that seemed like a complete waste of time in retrospect. But at the same time, arguments (between scientists, policy analysts, and others) play a critical part in moving society forward. You can imagine how lousy things would be if no one ever engaged those who disagreed with them.

This is a list of tips for having "productive" arguments. For the purposes of this list, "productive" means improving the accuracy of at least one person's views on some important topic. By this definition, arguments where no one changes their mind are unproductive. So are arguments about unimportant topics like which Pink Floyd album is the best.

Why do we want productive arguments? Same reason we want Wikipedia: so people are more knowledgeable. And just like the case of Wikipedia, there is a strong selfish imperative here: arguing can make you more knowledgeable, if you're willing to change your mind when another arguer has better points.

Arguments can also be negatively productive if everyone moves further from the truth on net. This could happen if, for example, the truth was somewhere in between two arguers, but they both left the argument even more sure of themselves.

These tips are derived from my personal experience arguing.


Keep it Friendly

Probably the biggest barrier to productive arguments is the desire of arguers to save face and avoid publicly admitting they were wrong. Obviously, it's hard for anyone's views to get more accurate if no one's views ever change.

This problem is exacerbated when arguers disparage one another. If you rebuke a fellow arguer, you're setting yourself up as their enemy. Admitting they were wrong would then mean giving in to an enemy. And no one likes to do that.
You may also find it difficult to carefully reconsider your own views after having ridiculed or berated someone who disagrees. I know I have in the past.
Both of these tendencies hurt argument productivity. To make arguments productive:
  • Keep things warm and collegial. Just because your ideas are in violent disagreement doesn't mean you have to disagree violently as people. Stay classy.
  • To the greatest extent possible, uphold the social norm that no one will lose face for publicly changing their mind.
  • If you're on a community-moderated forum like Less Wrong, don't downvote something unless you think the person who wrote it is being a bad forum citizen (ex: spam or unprovoked insults). Upvotes already provide plenty of information about how comments and submissions should be sorted. (It's probably safe to assume that a new Less Wrong user who sees their first comment modded below zero will decide we're all jerks and never come back. And if new users aren't coming back, we'll have a hard time raising the sanity waterline much.)
  • Err on the side of understating your disagreement, e.g. "I'm not persuaded that..." or "I agree that x is true; I'm not as sure that..." or "It seems to me..."
  • If you notice some hypocrisy, bias, or general deficiency on the part of another arguer, think extremely carefully before bringing it up while the argument is still in progress.
In a good argument, all parties will be curious about what's really going on. But curiosity and animosity are incompatible emotions. Don't impede the collective search for truth through rudeness or hostility.


Inquire about Implausible-Sounding Assertions Before Expressing an Opinion

It's easy to respond to a statement you think is obviously wrong with an immediate denial or attack. But this is also a good way to keep yourself from learning anything.

If someone suggests something you find implausible, start asking friendly questions to get them to clarify and justify their statement. If their reasoning seems genuinely bad, you can refute it then.

As a bonus, doing nothing but ask questions can be a good way to save face if the implausible assertion-maker turns out to be right.

Be careful about rejecting highly implausible ideas out of hand. Ideally, you want your rationality to be at a level where, even if you started out with a crazy belief like Scientology, you'd still be able to get rid of it. But for a Scientologist to rid themselves of Scientology, they have to consider ideas that initially seem extremely unlikely.

It's been argued that many mainstream skeptics aren't really that good at critically evaluating ideas, just dismissing ones that seem implausible.


Isolate Specific Points of Disagreement

Stick to one topic at a time, until someone changes their mind or the topic is declared not worth pursuing. If your discussion constantly jumps from one point of disagreement to another, reaching consensus on anything will be difficult.

You can use hypothetical-oriented thinking like conditional probabilities and the least convenient possible world to figure out exactly what it is you disagree on with regard to a given topic. Once you've creatively helped yourself or another arguer clarify beliefs, sharing intuitions on specific "irreducible" assertions or anticipated outcomes that aren't easily decomposed can improve both of your probability estimates.
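As a toy illustration (the scenario and all numbers here are mine, not from the post), conditioning on a sub-question can reveal that two arguers who seem far apart actually differ on only one "irreducible" assertion:

```python
# Two arguers disagree about P(policy helps). Decompose via the law of
# total probability, conditioning on a sub-question X ("the policy gets
# enforced"): P(helps) = P(helps|X)P(X) + P(helps|not X)P(not X).
# The policy, the arguers, and all probabilities are invented for illustration.

def p_helps(p_enforced, p_helps_if_enforced, p_helps_if_not):
    """Total probability that the policy helps, given beliefs about enforcement."""
    return (p_enforced * p_helps_if_enforced
            + (1 - p_enforced) * p_helps_if_not)

# Both agree an enforced policy would probably help and an unenforced one
# probably wouldn't; they differ only on whether it will be enforced.
alice = p_helps(p_enforced=0.9, p_helps_if_enforced=0.8, p_helps_if_not=0.1)
bob   = p_helps(p_enforced=0.2, p_helps_if_enforced=0.8, p_helps_if_not=0.1)

print(round(alice, 2), round(bob, 2))  # 0.73 vs. 0.24
```

The headline disagreement (0.73 vs. 0.24) reduces to a single disagreement about enforcement, which is a much more tractable thing to share intuitions about.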


Don't Straw Man Fellow Arguers, Steel Man Them Instead

You might think that a productive argument is one where the smartest person wins, but that's not always the case. Smart people can be wrong too. And a smart person successfully convincing less intelligent folks of their delusion counts as a negatively productive argument (see definition above).

Play for all sides, in case you're the smartest person in the argument.

Rewrite fellow arguers' arguments so they're even stronger, and think of new ones. Arguments for new positions, even—they don't have anyone playing for them. And if you end up convincing yourself of something you didn't previously believe, so much the better.


If You See an Opportunity To Improve the Accuracy of Your Knowledge, Take It!

This is often called losing an argument, but you're actually the winner: you and your arguing partner both invested time to argue, but you were the only one who received significantly improved knowledge.

I'm not a Christian, but I definitely want to know if Christianity is true so I can stop taking the Lord's name in vain and hopefully get to heaven. (Please don't contact me about Christianity though, I've already thought about it a lot and judged it too improbable to be worth spending additional time thinking about.) Point is, it's hard to see how having more accurate knowledge could hurt.

If you're worried about losing face or seeing your coalition (research group, political party, etc.) diminish in importance from you admitting that you were wrong, here are some ideas:

  • Say "I'll think about it". Most people will quiet down at this point without any gloating.
  • Just keep arguing, making a mental note that your mind has changed.
  • Redirect the conversation, pretend to lose interest, pretend you have no time to continue arguing, etc.
If necessary, you can make up a story about how something else changed your mind later.

Some of these techniques may seem dodgy, and honestly I think you'll usually do better by explaining what actually changed your mind. But they're a small price to pay for more accurate knowledge. Better to tell unimportant false statements to others than important false statements to yourself.


Have Low "Belief Inertia"

It's actually pretty rare that the evidence that you're wrong comes suddenly—usually you can see things turning against you. As an advanced move, cultivate the ability to update your degree of certainty in real time to new arguments, and tell fellow arguers if you find an argument of theirs persuasive. This can actually be a good way to make friends. It also encourages other arguers to share additional arguments with you, which could be valuable data.

One psychologist I agree with suggested that people ask

  • "Does the evidence allow me to believe?" when evaluating what they already believe, but
  • "Does the evidence compel me to believe?" when evaluating a claim incompatible with their current beliefs.

If folks don't have to drag you around like this for you to change your mind, you don't actually lose much face. It's only long-overdue capitulations that result in significant face loss. And the longer you put your capitulation off, the worse things get. Quickly updating in response to new evidence seems to preserve face in my experience.
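Updating your degree of certainty in real time is just repeated application of Bayes' rule. A minimal sketch, with made-up numbers (the scenario is mine, not from the post):

```python
# Toy Bayesian update: how much should one persuasive argument move you?
# All numbers here are invented for illustration.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(claim | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# You start out 80% confident in your position.
confidence = 0.80

# A fellow arguer raises a point you estimate is 3x more likely to come up
# if you're wrong than if you're right (0.6 vs. 0.2).
confidence = update(confidence, likelihood_if_true=0.2, likelihood_if_false=0.6)
print(round(confidence, 2))  # 0.57
```

One strong point from the other side drops you from 0.80 to roughly 0.57: a visible, incremental concession rather than a long-overdue capitulation.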



If your belief inertia is low and you steel-man everything, you'll reach the super chill state of not having a "side" in any given argument. You'll play for all sides and you won't care who wins. You'll have achieved equanimity, content with the world as it actually is, not how you wish it was.

Comments

The most important tip for online arguing, for anything which you expect ever to be discussed again, is to keep a canonical master source which does your arguing for you. (The backfire effect means you owe it to the other person to present your best case and not a sketchy paraphrase of what you have enough energy to write down at that random moment; it's irresponsible to do otherwise.)

For example, if you are arguing about the historical Jesus and your argument does not consist of citations and hyperlinks with some prepositional phrases interspersed, you are doing it wrong. If I'm arguing about brain size correlation with intelligence, I stick the references into the appropriate Wikipedia article and refer to that henceforth. If I'm arguing about modafinil, I either link to the relevant section in Wikipedia or my article, or I edit a cleaned-up version of the argument into my article. If I'm arguing that Moody 2008 drastically undermines the value of dual n-back for IQ on the DNB ML, you can be sure that it's going into my FAQ. If I don't yet have an article or essay on it but it's still a topic I am interested in like how IQ contributes to peace and economic growth, then I will jus...

Suggestions like that quickly degenerate into appeal to authority, or biased selection of sources, with no substance to it (no actual argument being made; imagine arguing mathematics like this, for an extreme example: you make a proof, you ask the person who disagrees to show which step, exactly, is wrong, and they refer to 'expert' conclusions, 99% of the time simply because they can't do math, not because they are organized). I usually don't need your cherry-picking of references from Wikipedia; I have Wikipedia for that.
So in other words, this strategy degenerates into several steps higher up the hierarchy of disagreement than just about every other online argument...
Okay, let me clarify: the problem of unproductive argument stems from the reality that people (a) are bad truth-finders, (b) usually don't care to find truth, and (c) are prone to backwards thought from proposition to justifications, which is acceptable [because of limited computing power and the difficulty of doing it the other way around].

The tip is awesome when you are right (and I totally agree that it is great to have references and so on). When you are wrong, which is more than half of the problem (much of the time BOTH sides are wrong), it is extremely obtuse. I'd rather people dump out something closer to why they actually believe the argument, rather than how they justify it. Yes, that makes for a poor show, but it is more truthful. Why you believe something is [often] not an accurate citation; it is [often] the poor paraphrasing.

Just look at the 'tips' for productive arguments. Is there a tip number 1, "drop your position ASAP if you are wrong"? Hell frigging no (not that it would work either, though; that's not how arguing ever works).

edit: to clarify more. Consider climate debates. Those are terrible. Now, you can have a naive honest folk who says he ain't trusting no climate scientist. You can have a naive honest folk who says he ain't trusting no oil company. And you can have two pseudo-climate-scientific dudes, arguing by obtusely citing studies at each other, not understanding a single thing about the climate modelling, generating a lot of noise that looks good, but neither of them would ever change his view even if he'd seen all the studies they cite in the exact same light. They are merely the sophisticated version of the former folks, who hide their actual beliefs. The cranks who make up some form of crank climate theory are not as bad as those two types of climate-arguing folks. The former folks talking about politics generate some argument; they won't agree, because one's authoritarian and the other liberal, but they at least make that clear. The cran
I've done my best to make this a habit, and it really isn't that hard to do, especially over the internet. Once you 'bite the bullet' the first time it seems to get easier to do in the future. I've even been able to concede points of contention in real life (when appropriate).

Is it automatic? No, you have to keep it in the back of your mind, just like you have to keep in mind the possibility that you're rationalizing. You also have to act on it, which, for me, does seem to get easier the more I do it.

This sort of goes with the need to constantly try to recognize when you are rationalizing. If you are looking up a storm of quotes, articles, posts etc. to back up your point and overwhelm your 'opponent', this should set off alarm bells.

The problem is that those who spend a lot of time correcting people who are obviously wrong by providing them with large amounts of correct information also seem prone to taking the same approach to a position that merely seems obviously wrong for reasons they might not be totally conscious of themselves. They then engage in some rapid fire confirmation bias, throw a bunch of links up and try to 'overpower the opponent'. This is something to be aware of. If the position you're engaging seems wrong but you don't have a clear-cut, well evidenced reason why this is, you should take some time to consider why you want it to be right.

When facing someone who is engaging in this behavior (perhaps they are dismissing something you think is sensible, be it strong AI or cryonics, or existential risk, what have you) there are some heuristics you can use. In online debates in particular, I can usually figure out pretty quickly if the other person understands the citations they make by choosing one they seem to place some emphasis on and looking at it carefully, then posing questions about the details. I've found that you can usually press the 'citing people' into revealing their underlying motivations in a variety of ways. One way is sort of po
One should either cite the prevailing scientific opinion (e.g. on global warming), or present a novel scientific argument (where you cite the data you use). Other stuff really is nonsense. You can't usefully second-guess science. Citing studies that support your opinion is cherry-picking, and is bad.

Consider a drug trial: there were 2000 cases where the drug did better than placebo, and 500 cases where it did worse. If each trial was a study, the wikipedia page would likely link to 20 links showing that it did better than placebo, including the meta-study, and 20 that it did worse. If it was edited to have 40 links that it did better, it'll have 40 links that it did worse. How silly is the debate where people just cite the cases they pick? Pointlessly silly.

On top of that, people (outside lesswrong mostly) really don't understand how to process scientific studies. If there is a calculation that CO2 causes warming, then if the calculation is not incorrect, or some very basic physics is not incorrect, CO2 does cause warming. There's no 'countering' of this study. The effect won't go anywhere, whatever you do. The only thing one could do is to argue that CO2 somehow also causes cooling; an entirely new mechanism. E.g. if snow was black, rather than white, and ground was white rather than dark, one could argue that warming removes the snow, leading to a decrease in absorption, and decreasing the impact of the warming. Alas, snow is white and ground is dark, so warming does cause further warming via this mechanism, and the only thing you can do is to come up with some other mechanism that does the opposite. And so on. (You could disprove those by e.g. finding that snow, really, is dark, and ground, really, is white, or by finding that CO2 doesn't really absorb IR, but that's it.)

People don't understand the difference between calculating predictions, and just free-form hypothesising that may well be wrong, and needs to be tested with experiment, etc etc. (I choose global w
It might very well be possible that the calculation is correct, and the basic physics is correct, and yet an increase in CO2 emissions does not lead to warming -- because there's some mechanism that simultaneously increases CO2 absorption, or causes cooling (as you said, though in a less counterfactual way), or whatever. It could also be possible that your measurements of CO2 levels were incorrect. Thus, you could -- hypothetically -- "counter" the study (in this scenario) by revealing the errors in the measurements, or by demonstrating additional mechanisms that invalidate the end effects.
If there was a mechanism that simultaneously increased CO2 absorption, the levels wouldn't have been rising. For the measurements, you mean, like a vast conspiracy that over-reports the coal that is being burnt? Yes, that is possible, of course. One shouldn't do motivated search, though.

There is a zillion other mechanisms going on, of course, that increase and decrease the effects. All the immediately obvious ones amplify the effect (e.g. warming releases CO2 and methane from all kinds of sources where it is dissolved; the snow is white and melts earlier in spring, etc.). Of course, if one is to start doing motivated search either way, one could remain ignorant of those and collect the ones that work in the opposite direction, and successfully 'counter' the warming. But that's cherry-picking. If one is to just look around and report on what one sees, there is a giant number of amplifying mechanisms, and few if any opposite mechanisms, which depend on the temperature and are thus incapable of entirely negating the warming because they need warming to work.
I was thinking of a scenario where you measured CO2 emissions, but forgot to measure absorption (I acknowledge that such a scenario is contrived, but I think you get the idea). That's a possibility as well, but I was thinking about more innocuous things like sample contamination, malfunctioning GPS cables, etc. In all of these cases, your math is correct, and your basic physics is correct, and yet the conclusion is still wrong.
You mean like the fact that clouds are white and form more when it's warmer.
Do they, really? Last time I checked they formed pretty well at -20C and at +35C. Ohh, I see a knee-jerk reaction happening: they may form a bit more at +35C in your place (here they are white, and also form more in winter). Okay, 55 degrees of difference may make a difference; now what?

There comes another common failure mode: animism. Even if you find temperature-dependent effects that are opposite, they have to be quite strong to produce any notable difference of temperature as a result of a 2-degree difference in temperature, at the many points of the temperature range, to get yourself any compensation beyond a small %. It's only biological systems, which tend to implement PID controllers, that counter any deviations from equilibrium, even little ones, in a way not dependent on their magnitude.
The way I've always heard it, mainstream estimates of climate sensitivity are somewhere around 3 degrees (with a fair amount of spread), and the direct effect of CO2 on radiation is responsible for 1 degree of that, with the rest being caused by positive feedbacks. It may be possible to argue that some important positive feedbacks are also basic physics (and that no important negative feedbacks are basic physics), but it sounds to me like that's not what you're doing; it sounds to me like, instead, you're mistakenly claiming that the direct effect by itself, without any feedback effects, is enough to cause warming similar to that claimed by mainstream estimates.
Nah, I'm speaking of the anthropogenic global warming vs no anthropogenic global warming 'debate', not of 1 degree vs 3 degrees type debate. For the most part, the AGW debate is focussed on the effect of CO2, sans the positive feedbacks, as the deniers won't even accept 1 degree of difference. Speaking of which, one very huge positive feedback is that water vapour is a greenhouse 'gas'.
Why the quotes? Water vapor's a gas. There's also liquid- and solid-phase water in the atmosphere in the form of clouds and haze, but my understanding is that that generally has a cooling effect by way of increasing albedo. Might be missing some feedbacks there, though; I'm not a climatologist.
Well, that's why the quotes: because it is changing phase there. The clouds' effect on climate, btw, is not so simple; the clouds also reflect some of the infrared.
I think the debate, and certainly the policy debate, is (in effect) about the catastrophic consequences of CO2.
Hm, I think higher up the hierarchy of abstraction is generally bad, when it comes to disagreements. People so easily get trapped into arguing because someone else is arguing back, and it's even easier when you're not being concrete.
I didn't say abstraction, I said disagreement.
Ah, okay. I still think you have to be careful of degenerating into bad stuff anyhow - if the argument becomes about cherry-picking rather than the evidence, that could be worse than arguing without those sources.
Which one on the list is appeal to authority, or quotation of a piece of text one is not himself qualified to understand? (I only briefly skimmed and didn't really see it.) (Looks like DH1 is the only one mentioning references to authorities, in the way of an accusation of lack of authority.)
DH4, argument. Pointing out what authorities say on the question is contradiction (the authorities contradict your claim) plus evidence (which authorities where).
Cherry picking, combined with typically putting words into authorities mouths. But I agree that if it is an accepted consensus rather than cherry-picked authorities, then it's pretty effective. (edit: Unfortunately of course, one probably knows of the consensus long before the argument)
I think we all seem to be forgetting that the point of this article is to help us engage in more productive debates, in which two rational people who hold different beliefs on an issue come together and satisfy Aumann's Agreement Theorem, which is to say, at least one person becomes persuaded to hold a different position from the one they started with. Presumably these people are aware of the relevant literature on the subject of their argument; the reason they're on a forum (or comment section, etc.) instead of at their local library is that they want to engage directly with an actual proponent of another position. If they're less than rational, they might be entering the argument to persuade others of their position, but nobody's there for a suggested reading list. If neither opponent has anything to add besides a list of sources, then it's not an argument; it's a book club.
Also, make sure that position is closer to the truth. Don't forget that part.
And that's another important point: Trading recommended reading lists does nothing to sift out the truth. You can find a number of books xor articles espousing virtually any position, but part of the function of a rational argument is to present arguments that respond effectively to the other person's points. Anyone can just read books and devise brilliant refutations of the arguments therein; the real test is whether those brilliant refutations can withstand an intelligent, rational "opponent" who is willing and able to thoroughly deconstruct it from a perspective outside of your own mind.

"Don't Straw Man Fellow Arguers, Steel Man Them Instead"

Be careful with this one. I've been in arguments where I attempted to steel-man someone's position, only to discover that they didn't agree with what I thought was the steel man.

Maybe you failed to make your steel man a proper superset (in probability space) of their original argument? If they still disagree, then they have a problem.
From an argument productivity perspective, generating strong arguments, regardless of what position they are for, can be helpful for improving the productivity of your argument. In other words, just because an argument wasn't what another arguer was trying to communicate doesn't mean it isn't valuable.

Author's Notes

I wrote this to be accessible to a general audience. I didn't announce this at the top of the post, like previous posts of this type have done, because I thought it would be weird for a member of the general audience to see "this was written for a general audience" or "share this with your friends" at the beginning of a post they were about to read.

However, it's not just for a general audience; I'm hoping it will be useful for Less Wrong users who realize they haven't been following one or more of these tips. (Like so much on Less Wrong, think of it as a list of bugs that you can check your thinking and behavior for.)

I realize the position I'm taking on holding back downvotes is somewhat extreme by current standards. But the negative externalities from excess downvoting are hard for us to see. My best friend, an intelligent and rational guy whose writing ability is only so-so, was highly turned off by Less Wrong when the first comment he made was voted down.

If we really feel like we need downvoting for hiding and sorting things, maybe we could mask the degree of downvoting by displaying negative integers as 0? I think this is what reddit does for s...


I agree that downvoting new people is a bad idea - and every comment in the Welcome Thread should get a load of karma.

However, I think people should aggressively downvote - at the very least a couple of comments per page.

If we don't downvote, comments on average get positive karma - which makes people post them more and more. A few 0 karma comments is a small price to pay if there's a high chance of positive karma.

However, we don't want these posts. They clutter LW, increasing noise. The reason we read forums rather than random letter sequences is because forums filter for strings that have useful semantic content; downvoting inane or uninsightful comments increases this filtering effect. I'd much rather spend a short period of time reading only high quality comments than spend longer reading worse ones.

Worse, it can often be hard to distinguish between a good comment on a topic you don't understand and a bad one. Yet I get much more value spending time reading the good one, which might educate me, than the bad one, which might confuse me - especially if I have trouble distinguishing experts.

Downvotes provide the sting of (variable) negative reinforcement. In the long run, well-kept gardens die by pacifism.

"However, I think people should aggressively downvote - at the very least a couple of comments per page. [...] If we don't downvote, comments on average get positive karma - which makes people post them more and more. A few 0 karma comments is a small price to pay if there's a high chance of positive karma."

We should expect comments on average to get positive karma, as long as the average member is making contributions which are on the whole more wanted than unwanted. Attempting to institute a minimum quota of downvoted comments strikes me as simply ridiculous. If the least worthwhile comment out of twenty is still not an actual detraction from the conversation, there's no reason to downvote it.

If we're just concerned with the average quality of discourse, it would be simpler to just cut off the whole community and go back to dialogues between Eliezer and Robin.

The most significant dialogue between Eliezer and Robin (the Foom debate) was of abysmally low quality - relative to the output of either of those individuals when not in dialogue with each other. I have been similarly unimpressed with other dialogues that I have seen them have in blog comments. Being good writers does not necessarily make people good at having high quality dialogues, especially when their egos may be more centered around being powerful presenters of their own ideas than around being patient and reliable in comprehending the communication of others. If we want high quality dialogue, have Eliezer write blog posts and Yvain engage with them.
Yep. I did write this article hoping that LWers would benefit from it, and EY was one of those LWers. (Assuming his arguing style hasn't changed since the last few times I saw him argue.)
"Downvotes provide the sting of (variable) negative reinforcement." "My [...friend...] was highly turned off by Less Wrong when the first comment he made was voted down." It seems to me that we want to cull people who repeatedly make poor comments, and who register an account just to make a single trolling remark (i.e. evading the first criteria via multiple accounts). We do not want to cull new users who have not yet adapted to the cultural standards of LessWrong, or who happen to have simply hit on one of the culture's sore spots. If nothing else, the idea that this community doesn't have blind spots and biases from being a relatively closed culture is absurd. Of course we have biases, and we want new members because they're more likely to question those biases. We don't want a mindless rehashing of the same old arguments again and again, but that initial down vote can be a large disincentive to wield so casually. Of course, solving this is trickier than identifying it! A few random ideas: * Mark anyone who registered less than a week ago, or with less than 5 comments, with a small "NEWBIE" icon (ideally something less offensive than actually saying "NEWBIE"). Also helps distinguish a fresh troll account from a regular poster who happens to have said something controversial. * Someone's first few posts are "protected" and only show positive karma, unless the user goes beneath a certain threshold (say, -10 total karma across all their posts). This allows "troll accounts" to quickly be shut down, and only shields someone's initial foray (and they'll still be met with rebuttal comments) There's probably other options, but it seems that it would be beneficial to protect a user's initial foray, while still leaving the community to defend itself from longer-term threats.
How about redirecting users to the latest Welcome thread when they register, and encouraging them to post there? Such posts are usually quickly upvoted to half a dozen karma or thereabouts.
I definitely think the "Welcome" threads could do with more prominence. That said, I'm loath to do introductions myself; I'd far rather just jump in to discussing things and let people learn about me from my ideas. I'd expect plenty of other people here have a similar urge to respond to a specific point before investing themselves in introductions and community-building / social activities.
For some reason I would feel much better imposing a standard cost on commenting (e.g. -2 karma) that can be easily balanced by being marginally useful. This would better disincentivise both spamming and comments that people didn't expect to be worth very much insight, and still allow people to upvote good-but-not-promotion-worthy comments without artificially inflating that user's karma. This however would skew commenters towards fewer, longer, more premeditated replies. I don't know if we want this.
I find short, pithy replies tend to get better responses karma-wise.
Anyone who posts in order to get karma either overvalues karma or undervalues their time. If their time really is worth so little, they probably can't produce karma-worthy comments anyway.
"If their time really is worth so little, they probably can't produce karma-worthy comments anyway." I can throw out a quick comment in 2 minutes. I enjoy writing quick comments, because I like talking about myself. I expect a lot of people like talking about themselves, given various social conventions and media presentations. I almost never see a comment of mine voted down unless it's actively disagreeable (BTW, cryonics is a scam!), attempting to appeal to humour (you lot seriously cannot take a joke), or actively insulting (I like my karma enough not to give an example :P) I'd idly estimate that I average about +1 karma per post. Basically, they're a waste of time. I have over 1,000 karma. So, the community consensus is that I'm a worthwhile contributor, despite the vast majority of my comments being more or less a waste of time. Specifically, I'm worthwhile because I'm prolific. (Of course, if I cared about milking karma, I'd put this time in to writing a couple well-researched main posts and earn 100+ karma from an hour of work, instead of ~30/hour contributing a two-line comment here and there.)
Thanks for that link. It occurred to me that Eliezer's intuitions for moderation may not be calibrated to the modern Internet, where there really is a forum for people at every level of intelligence: Yahoo Answers, Digg, Facebook, 4chan, Tagged (which is basically the smaller but profitable successor to MySpace that no one intelligent has heard of), etc. I saw the Reddit community degenerate, but Reddit was a case of the smart people having legitimately better software (and therefore better entertainment through better-chosen links). Nowadays, things are more equalized and you don't pay much of a price in user-experience terms for hanging out on a forum where the average intelligence is similar to yours. Robin Hanson recently did the first-ever permanent banning on Overcoming Bias, and that was for someone who was unpleasant and made too many comments, not someone who was stupid. (Not sure how often Robin deletes comments, though; it does seem to happen at least a little.) I don't think this effect is very significant. I find it implausible that people post more comments on Hacker News, where comments are hardly ever voted down below zero, because it gets them karma. But even if they do, Hacker News is a great, thriving community. I would love it if we adopted a Hacker News-style moderation system where only users with high karma could vote down. I like the idea of promote/agree/disagree buttons somewhat.
We already have a system where you can only downvote a number of comments up to four times your karma.
I idly wonder if any noticeable fraction of downvotes comes from people who don't have enough karma to post top-level articles. I'd guess that "high karma" would refer to the threshold needed for posting articles, which is a pretty low bar.
I like the sound of that for some reason.
I too like this idea that would grant me more power without any more responsibility.
Larks strikes again. (Comment was at -1 when I found it.) It definitely changes my feeling about getting voted down to know there are people like Larks. I guess I just assumed that everyone was like me in reserving downvotes for the absolute worst stuff. Maybe there's some way of getting new users to expect that their first few comments will be voted down and not to worry about it?
It would be interesting to see statistics on up- vs. downvote frequency per user. Even just a graph of how many users are in the 0-10% downvote bracket, 10-20%, etc. would be neat. I doubt the data is currently available; otherwise it would be trivial to put together a simple graph and a quick post detailing trends in that data.
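If the data were available, bucketing users into those brackets really would be trivial. A hypothetical sketch (the per-user vote counts here are invented, since the real data isn't exposed):

```python
# Bucket users by the fraction of their votes that are downvotes.
# The input format (list of per-user (upvotes, downvotes) pairs) is an
# assumption; the site doesn't actually publish this data.
from collections import Counter

def downvote_brackets(vote_counts):
    """Return a Counter mapping decile index (0 = 0-10%, ..., 9 = 90-100%)
    to the number of users whose downvote fraction falls in that bracket."""
    brackets = Counter()
    for up, down in vote_counts:
        total = up + down
        if total == 0:
            continue  # users who never vote aren't bracketed
        decile = min(int(10 * down / total), 9)  # exactly 100% goes in the top bracket
        brackets[decile] += 1
    return brackets

# Three invented users: a mostly-upvoter, a balanced voter, a downvote-only voter
print(downvote_brackets([(95, 5), (50, 50), (0, 10)]))
```

Plotting `brackets` as a bar chart would give exactly the graph the comment asks for.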
To adjust your calibration a bit more: I worry that I might run out of my 4*Karma downvoting limit.
I guess that was a joke, but the downvote wasn't me ;) If nothing else, downvoting replies to your own comments seems a bit dubious.
This is a thought I had after reading some anonymous feedback for this article. I've decided that the maximally productive argument style differs some by audience size and venue.
When arguing online:
* The argument productivity equation is dominated by bystanders. So offending the person you are responding to is not as much of a loss.
* For distractible Internet users, succinctness is paramount. So it often makes sense to overstate your position instead of using wordy qualifiers. (Note that a well-calibrated person is likely to be highly uncertain.)
When arguing with a few others in meatspace:
* The only way to have a productive argument is for you or one of the others to change their mind.
* And you pay less of a price for qualifiers, since they're going to listen to you anyway.
Thanks for the feedback!

It seems to me that arguments between scientists are productive mostly because they have a lot of shared context. If the goal of arguing is to learn things for yourself, then it's useless to argue with someone who doesn't have the relevant context (they can't teach you anything) and useless to argue about a topic where you don't know the relevant context yourself (it's better to study the context first). Arguments between people who are coming from different contexts also seem to generate more heat and less light, so it might be better to avoid those.

Well in an ideal world, if I'm an ignoramus arguing with a scientist, our argument would transform into the scientist teaching me the basics of his field. Remember, the "arguing" relationship doesn't have to be (and ideally shouldn't be) adversarial.

Arguing logically works on a much smaller proportion of the populace than generally believed. My experience is that people bow to authority and status and only strive to make it look like they were convinced logically.

That's basically right, but I'd like to expand a bit. Most people are fairly easily convinced "by argument" unless they have a status incentive not to agree. The problems here are that 1) people very often have status reasons to disagree, and 2) people are usually so bad at reasoning that you can find an argument to convince them of anything in the absence of the first problem. It's not quite that they don't "care" about logical inconsistencies, but rather that they are bad at finding them because they don't build concrete models, and it's easy enough to find a path where they don't find an objection. (Note that when you point inconsistencies out, they have status incentives not to listen, and it'll come across that they don't care - they just care less than the status loss they'd perceive.) The people I have the most productive conversations with are good at reasoning, but more importantly, when faced with a choice to interpret something as a status attack or a helpful correction, they perceive their status-raising move as keeping peace and learning if at all possible. They also try to frame their own arguments in ways that minimize perceived status threat enough that their conversation partner will interpret it as helpful. This way, productive conversation can be a stable equilibrium in the presence of status drives. However, unilaterally adopting this strategy doesn't always work. If you are on the blunt side of the spectrum, the other party can feel threatened enough to make discussion impossible even backing up n meta levels. If you're on the walking-on-eggshells side, the other party can interpret it as allowing them to take the status high ground, give bad arguments, and dismiss your arguments. Going to more extreme efforts not to project status threats only makes the problem worse, as (in combination with not taking offense) it is interpreted as submission. It's like unconditional cooperation. (This appears to be exactly what is happening with the Muelhauser-Goertzel dialog, by t
I fully agree in the context of longer interactions or multiple interactions.
Roughly speaking, from what I can tell, it is generally believed that it works on 10% of the populace but really it works on less than 1%.
and 99% believe they are in the 1%.
I'm in the 1% that don't think they are in the 1%. (It seems we have no choice but to be in one of the arrogant categories there!) I usually get persuaded not so much by logic (because logical stuff I can think of already, and quite frankly I'm probably better at it than the arguer) but by being given information I didn't previously have.
It is still flawed logic if the new data requires you to go substantially outside your estimated range rather than narrow down your uncertainty. (Edit: and especially so if you don't even have a range of some kind.) E.g. we had an unproductive argument about whether a random-ish AGI 'almost certainly' just eats everyone; it's not that I have some data showing it is almost certain not to eat everyone, it's that you shouldn't have this sort of certainty about such a topic. It's fine if your estimated probability distribution centres there; it's not fine if it is ultra narrow.
I gave nothing to indicate that this was the case. While the grandparent is more self-deprecating on behalf of both myself and the species than it is boastful, your additional criticism doesn't build upon it. The flaw you perceive in me (along the lines of disagreeing with you) is a different issue. I have a probability distribution, not a range. Usually not a terribly well-specified probability distribution, but that is the ideal to be approximated. No. Our disagreement was not one of me assigning too much certainty. The 'almost certainly' was introduced by you, applied to something that I state has well under an even chance of happening. (Specifically, regarding the probability of humans developing a worse-than-just-killing-us uFAI in the near vicinity to an FAI.) You should also note that there is a world of difference between near certainty about what kind of AI will be selected and an unspecified level of certainty that the overwhelming majority of AGI goal systems would result in them killing us. The difference is akin to having 80% confidence that 99.9999% of balls in the jar are red. Don't equivocate that with 99.9999% confidence. They represent entirely different indicators of assigned probability distributions. In general, meta-criticisms of my own reasoning that are founded on a specific personal disagreement with me should not be expected to be persuasive. Given that you already know I reject the premise (that you were right and I was wrong in some past dispute), why would you expect me to be persuaded by conclusions that rely on that premise?
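The jar analogy can be made numerically concrete. In this quick illustration, the 20% alternative hypothesis (that only half the balls are red) is my own assumption for concreteness; the comment doesn't specify one:

```python
# Compare "80% confident that 99.9999% of balls are red" with
# "99.9999% confident a drawn ball is red". The 50%-red alternative
# hypothesis is assumed here purely for illustration.

# Mixture: 80% credence in a 99.9999%-red jar, 20% in a 50%-red jar
p_nonred_mixture = 0.8 * (1 - 0.999999) + 0.2 * (1 - 0.5)
print(f"mixture: P(non-red) = {p_nonred_mixture:.6f}")  # ~0.1

# Flat near-certainty: 99.9999% confidence the ball is red
p_nonred_flat = 1 - 0.999999
print(f"flat:    P(non-red) = {p_nonred_flat:.6f}")  # 0.000001
```

Under the mixture, the chance of drawing a non-red ball is about 0.1, five orders of magnitude higher than under the flat reading: the two confidence statements really do encode entirely different probability distributions.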
Nah, my argument was "Well, the crux of the issue is that the random AIs may be more likely to leave us alone than near-misses at FAI." By the 'may', I meant there is a notable probability that far less than 99.9999% of the balls in the jar are red, and consequently a far greater than 0.0001% probability of drawing a non-red ball. Edit: Furthermore, suppose we have a jar with 100 balls in which we know there is at least one blue ball (near-FAI space), and a huge jar with 100,000 balls, about which we don't know quite a lot, and which has a substantial probability of having a larger fraction of non-red balls than the former jar. Edit: Also, re probability distributions, that's why I said a "range of some sort". Humans don't seem to quite do the convolutions and the like on probability distributions when thinking.
Robin just had an interesting post on this: http://www.overcomingbias.com/2012/03/disagreement-experiment.html

It seems someone should link up "Why and How to Debate Charitably." I can't find a copy of the original because the author has taken it down. Here is a discussion of it on LW. Here are my bulleted summary quotes. ADDED: Original essay. I've just learned, and am very saddened to hear, that the author, Chris, committed suicide some time ago.

Link in discussion post updated - thanks!
Wei Dai:
From that essay: Does anyone know if he did write them up? Even the Internet Archive's mirror of pdf23ds.net is gone now (intentionally purged by the author, it looks like).
I've noticed that if I notice someone online as civilized and intelligent, the odds seem rather high that I'll be seeing them writing about having an ongoing problem with depression within months. This doesn't mean that everyone I like (online or off) is depressed, but it seems like a lot. The thing is, I don't know whether the proportion is high compared to the general population, or whether depression and intelligence are correlated. (Some people have suggested this as an explanation for what I think I've noticed.) I wonder whether there's a correlation between depression and being conflict averse.
Mike Bishop:
I also think that keeping a blog or writing in odd corners of the internet may be associated with, possibly even caused by, depression.

Good article. The 6 techniques seem quite useful. I think I use 'isolate specific disagreements' the most; it feels like at least 80% of all arguments I get into consist of me trying to clarify exactly what we disagree on, and finding out, about half the time, that we don't actually disagree on anything of substance, just vocabulary/values/etc.

If your belief inertia is low and you steel-man everything, you'll reach the super chill state of not having a "side" in any given argument. You'll play for all sides and you won't care who wins.


"Not having a side" doesn't have to mean being unable to argue a side, it can instead mean being able to argue several different sides. If I can do that, then if someone insists that I argue a position I can pick one and argue it, even knowing perfectly well that I could just as readily argue a conflicting position. In the real world, knowing that there are several different plausible positions is actually pretty useful. What I generally find professionally is that there's rarely "the right answer" so much as there are lots of wrong answers. If I can agree with someone to avoid the wrong answers, I'm usually pretty happy to accept their preferred right answer even if it isn't mine.
When you can equally well argue that X is true and that X is false, it means that your arguing is entirely decoupled from truth, and as such, both of those arguments are piles of manure that shouldn't affect anyone's beliefs. It is only worth making one such argument to counter the other, for the sake of keeping the undecided audience undecided. Ideally, you should instead present a very strong meta-ish argument that the argumentation for both sides, which you would be able to make, is complete nonsense. (Unfortunately that gets both sides of the argument pissed off at you.)
That's not actually true. In most real-world cases, both true statements and false statements have evidence in favor of them, and the process of assembling and presenting that evidence can be a perfectly functional argument. And it absolutely should affect your belief: if I present you with novel evidence in favor of A, your confidence in A should increase. Ideally, I should weigh my evidence in favor of A against my evidence in favor of not-A and come to a decision as to which one I believe. In cases where one side is clearly superior to the other, I do that. In cases where it's not so clear, I generally don't do that. Depending on what's going on, I will also often present all of the available evidence. This is otherwise known as "arguing both sides of the issue" and yes, as you say, it tends to piss everyone off.
Let me clarify with an example. I did 100 tests comparing a new drug against placebo (in each test I had 2 volunteers), and I got very lucky to get an exactly neutral result: in 50 of them, the drug performed better than placebo, while in the other 50 it performed worse. I can construct an 'argument' that the drug is better than placebo by presenting data from the 50 cases where it performed better, or construct an 'argument' that the drug is worse than placebo by presenting data from the 50 cases where placebo performed better. Neither 'argument' should sway anyone's belief about the drug in any direction, given perfect knowledge of the process that led to the acquisition of that data, even if the data from the other 50 cases has been irreversibly destroyed and is not part of the knowledge (it is only known that there were 100 trials, and 50 outcomes were destroyed because they didn't support the notion that the drug is good; the actual 50 outcomes are not available). That's what I meant. Each 50-case data set tells absolutely nothing about the truth of drug > placebo by itself; given perfect knowledge of the extent of the cherry-picking, it is weak evidence that the effects of the drug are small, and it only sways the opinion on "is the drug better than placebo" if there's a false belief about the degree of cherry-picking. Furthermore, unlike the mathematics of decision theory, qualitative comparison of two verbal arguments by their vaguely determined 'strength' yields complete junk unless the strengths differ very significantly (edit: which happens when one argument is actually good and the other is junk), due to cherry-picking as per the above example.
Sure. More generally, in cases where evaluating all the evidence I have leads me unambiguously to conclude A, and then I pick only that subset of the evidence that leads me to conclude NOT A, I'm unambiguously lying. But in cases like that, the original problem doesn't arise... picking a side to argue is easy, I should argue A. That's very different (at least to my mind) from the case where I'm genuinely ambivalent between A and NOT A but am expected to compellingly argue some position rather than asserting ignorance.
Well, in the drug example there is no unambiguous conclusion, and one can be ambivalent, yet it is a lie to 'argue' A or not-A, and confusing to argue both, rather than integrating it into the conclusion that both pro and anti arguments are complete crap (it IS the case in the drug example that both the pro and anti data, even taken alone in isolation, shouldn't update the beliefs, i.e. shouldn't be effective arguments). The latter, though, really pisses people off. It is usually the case that while you can't be sure of the truth value of the proposition, you can be pretty darn sure that the arguments presented aren't linked to it in any way. But people don't understand that distinction, and really don't like it if you attack their argument rather than their position. In both sides' eyes you're just being a jerk who doesn't even care who's right. All while the positions are wrong statistically half of the time (counting both sides of the argument once), while the arguments are flawed far more than half the time. Even in math, if you just guess at the truth value of, say, Fermat's last theorem using a coin flip, you have a 50% chance of being wrong about the truth value, but if you were to make a proof, you would have something around a 99% chance of entirely botching the proof, unless you are real damn good at it. And a botched proof is zero evidence. If you know the proof is botched (or if you have a proof of the opposite that also passes your verification, implying that the verification is botched), it's not weak Bayesian evidence about any truth values; it's just data about human minds, language, fallacies, etc.
Agreed that evaluating the relevance of an argument A to the truth-value T of a proposition doesn't depend on knowing T. Agreed that pointing out to people who are invested in a particular value of T and presenting A to justify T that in fact A isn't relevant to T generally pisses them off. Agreed that if T is binary, there are more possible As unrelated to T than there are wrong possible values for T, which means my chances of randomly getting a right answer about T are higher than my chances of randomly constructing an argument that's relevant to T. (But I'll note that not all interesting T's are binary.) This statement confuses me. If I look at all my data in this example, I observe that the drug did better than placebo half the time, and worse than placebo half the time. This certainly seems to unambiguously indicate that the drug is no more effective than the placebo, on average. Is that false for some reason I'm not getting? If so, then I'm confused. If that's true, though, then it seems my original formulation applies. That is, evaluating all the evidence in this case leads me unambiguously to conclude "the drug is no more effective than the placebo, on average". I could pick subsets of that data to argue both "the drug is more effective than the placebo" and "the drug is less effective than the placebo" but doing so would be unambiguously lying. Which seems like a fine example of "in cases where evaluating all the evidence I have leads me unambiguously to conclude A, and then I pick only that subset of the evidence that leads me to conclude NOT A, I'm unambiguously lying." No? (In this case, the A to which my evidence unambiguously leads me is "the drug is no more effective than the placebo, on average".)
Non-binary T: quite so, but it can be generalized. But how would it seem if it were 10 trials, 5 win 5 lose? It just gives some evidence that the effect is small. If the drug is not some homoeopathy that's pure water, you shouldn't privilege zero effect. Exercise for the reader: calculate the 95% CI for 100 placebo-controlled trials.
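One way to do the exercise, using the standard normal approximation to the binomial for 50 wins out of 100 trials (a sketch; the framing as paired win/lose trials is taken from the example above):

```python
import math

# 95% CI for the drug's win rate after 50 wins out of 100 paired trials,
# using the normal approximation to the binomial.
wins, n = 50, 100
p_hat = wins / n                          # observed win rate: 0.5
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error: 0.05
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"95% CI for P(drug beats placebo): [{lo:.3f}, {hi:.3f}]")
# prints [0.402, 0.598]: the interval straddles 0.5 widely, so a 50/50
# split only shows that any effect is probably modest.
```

With only 10 trials the same formula gives a much wider interval (roughly 0.19 to 0.81), which is why a 5/5 split is even weaker evidence that the effect is small.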
Ah, I misunderstood your point. Sure, agreed that if there's a data set that doesn't justify any particular conclusion, quoting a subset of it that appears to justify a conclusion is also lying.
Well, the same should apply to arguing a point when you could as well have argued the opposite with the same ease. Note, as you said: and I made an example where both true and false statements got "evidence in favour of them" - 50 trials one way, 50 trials the other way. Both of those evidences are subsets of the evidence that appear to justify a conclusion, and that is a lie.
... You are absolutely correct. Point taken.
Swimmer963 (Miranda Dixon-Luinenburg):
This is the part that I can't do. It's almost like I can't argue for stuff I don't believe because I feel like I'm lying. (I'm also terrible at actually lying.)
I figured out a long time ago that I don't like lying. As a result, I constructed some personal policies to minimize the amount of lying I would need to do. In that, we most likely are the same. However, I also practiced the skill enough that when a necessity arose I would be able to do it right the first time.
So your thinking style is optimized for productive arguments but not school essays.

Rule one: Have a social goal in any given conversation. It needn't be a fixed goal but as long as there actually is one the rest is easy.

Hmm. What's your social goal here? Producing texts for social-goal purposes is called signalling (usually, but it depends on what you're trying to do).
It is something that it would be wiser to discuss with those for whom I would infer different motives and where I would predict different usages of any supplied text. When I engage my Hansonian reasoning I can describe everything humans do in terms of their signalling implications. Yet to describe things as just signalling is to discard rather a lot of information. Some more specific goals that people could have in a given argument include:
* Learning information from the other.
* Understanding why the other believes the way they do.
* Tracing the precise nature of disagreement.
* Persuading the other.
* Providing information to the other.
* Combining your thinking capabilities with another so as to better explore the relevant issues and arrive at a better solution than either could alone.
* Persuading the audience.
* Educating the audience.
* Mitigating the damage that the other has done through advocating incorrect or undesired knowledge or political opinions.
* Entertaining others or yourself.
* Making oneself look impressive to the audience.
* Altering the feelings that the other has towards you in a positive direction.
* Practicing one's skills at doing any of the above.
* Demonstrating one's ability to do any of the above and thereby gaining indirect benefit.
Some of those can be better described as 'signalling' than others.

Don't Straw Man Fellow Arguers, Steel Man Them Instead

Can you provide some specific tips on this one? I've tried to do this in discussions with non-LW people, and it comes off looking bizarre at best and rude at worst. (See also: this comment, which is basically what ends up happening.) Have you been able to implement this technique in discussions, and if so, how did you do it while adhering to social norms/not aggravating your discussion partners?

It gets better when you disagree with your opponent that they were justified in agreeing with you...
Oh, absolutely. This is why I don't like being in discussions with people who hold the same position as I do but for different reasons--saying that a particular argument in favor of our position is wrong sounds like arrogance and/or betrayal.
I was thinking more of a case where people agree out of politeness, and you have reason to believe they didn't properly understand your position.
Oh, I see. I don't think that's ever happened to me, although I have had people try to end conversations with "everyone's entitled to their own beliefs" or "there's no right answer to this question, it's just your opinion", and my subsequent haggling turned the discussion sour.
It was weird the first few times when I had a cluster of people agreeing with me, spent time there, and then started to collect counter-arguments. People are better tested by disagreements than by just holding similar end-views. A belief held for the wrong reasons can easily be changed. It gets a bit weird when you start fixing your opponents' arguments against your own position.
I tend to do this often as part of serving as a 'moderator' of discussions/arguments, even when it's just me and another. It's useful to perceive the other party's (parties') argument as merely a podium upon which their belief rests, and then endeavor to identify, with specificity, their belief or position. Colloquially, the result would be something like:
* Not you: "I think that, it just doesn't seem right, that, even without being given even a chance, the baby just dies. It's not right how they have no say at all, you know?"
* You: "So, your position is..." In verbal communications you can at this point briefly pause as if you're carefully considering your words in order to allow an opportunity for their interjection of a more lucidly expressed position. "...that the fetus (and I'm just using the scientific terminology, here), has value equal to that of a grown person in moral considerations? [If confused:] I mean, that when thinking about an abortion, the fetus' rights are equal to that of the mother's?" [As shown above, clarify one point at a time. Your tone must be that of one asking for clarification on a fact. More, "The tsunami warning was cancelled before or after the 3/14 earthquake hit?" than, "You've been wrong before; you sure?"]
* Not you: "Yea, such is mine position."
* You: "And, due to the fetus' having equal moral standing to the mother, abortions thus are an unjust practice?"
* Not you: "Aye."
Be careful with these clarification proceedings, though. If by framing their arguments you happen to occlude the actual reasoning of their argument, due to them not knowing it themselves or otherwise, the entire rest of the argument could be a waste of time predicated upon a falsely framed position. Suggestions of possible solutions include:
* Asking whether they are sure the framed argument accurately expresses the reason for their position on the matter; not framing at all, but jumping right into the hypothetical probing and allowing for the
Well now, this technique is straight-out dishonesty. You're not "just using the scientific terminology". You have a reason for rejecting the other person's use of "baby", and that reason is that you want to use words to draw a moral line in reality at the point of birth. Notice that you also increased the distance by comparing the "fetus" not to a newborn baby but to an adult. But you cannot make distinctions appear in the real world by drawing them on the map. That is, your tone must be a lie. You are not asking for clarification of a fact.
Don't know how you came to this, but nowhere do I take a stance on the issue. There's the 'Not you' and the 'You', with the former thinking it's wrong, and the latter wanting to know the former's reasoning and position. You can just as well use the word 'baby'; only, using a neutral word as decided by a third party (namely science and scientists), besides 'the baby' or 'it', helps in distancing them, as well as their perception of you, from the issue. It's difficult for someone to perceive an issue clearly when, every time it comes to mind, they're reminded, "Oh yes, this I believe." Subtly separating that associative belief of the other party (parties) allows them to evince their true reasons with greater accuracy. Harry did the same (unintentionally) by setting Draco into an honestly inquisitive state of mind in HPMOR when investigating blood (not ad verecundiam, just an example). I fully agree you cannot manifest terrain by drawing on the map; this is why I suggest comparing the fetus to a grown person. A new human has the potential to become a grown human, and I, in assuming the position of the 'Not you', guessed this was a reason they may value the fetus. In another example they may say, "No, they're just a baby! They're so cute! You can't kill anything cute!" From a consequentialist point of view (which appears to be the same as the utilitarian - correct me if I'm wrong, please), it doesn't matter whether they are cute unless this is a significant factor to those considering the fetus' mortality. A fetus' potential 'cuteness' is a transient property, and a rather specious foundation upon which to decide whether the fetus shall have a life. What if they're ugly, grow out of the cute, are too annoying to the mother? Then their reason for valuing the baby operates on a relative curve directly proportional to the baby's cuteness at any point in time; I'm not sure what it's called when a baby is wanted for the same reasons one might want a pet, or a stuffed animal,
Ok, the "You" person isn't you. Sorry for conflating the two. (Probably because "You" is portrayed as the voice of reason, while "Not you" is the one given the idiot ball.) But if I look just at what the "You" person is saying, ignoring the interior monologue, they come across to me as I described. And if they speak that interior monologue, I won't believe them. This is a topic on which there is no neutral frame, no neutral vocabulary. Every discussion of it consists primarily of attempts to frame the matter in a preferred way. Here's a table comparing the two frames:
* fetus / unborn child
* right to choose / right to life
* pro-choice / pro-abortion
* anti-choice / anti-abortion
You can tell what side someone is on by the words they use.
No problem. If someone with your objection were to raise their concerns with the 'You' at the time of discourse, I would recommend calmly requesting agreement on a word both agree is neutral; actually, this would be an excellent first step in ensuring the cooperation of both parties in seeking the truth, or the least wrong or disagreeable position on the matter. What that word would be in this instance, besides fetus, I haven't a clue - there may be no objectively neutral frame, from your perspective, however in each discourse all involved parties can create mutually agreed-upon subjectively neutral vocabulary, if connotations truly do prove such an obstacle to productive communication. I am still for fetus as a neutral word, as it's the scientific terminology. Pro-life scientists aren't paradoxical.
What you actually appear to be doing in this exchange is framing the debate (which is not a neutral action) under the guise of being a neutral observer. If your fellow arguer is experienced enough to see what you're doing, he will challenge you on it, probably in a way that results in a flame war. If he isn't experienced enough, he may see what appears to be a logical argument that somehow doesn't seem persuasive, and this may put him off the whole concept of logical arguing.
I don't see how it breaks neutrality to frame the debate from a non-fallacious perspective. Can't it end in a peaceful back-and-forth until we've agreed on a common frame?
If I'm interpreting his objection correctly, I think the framing enables potential, possibly unknown biases to corrupt the entire process. The other party (or parties) may consciously think they agree on a particular frame, but some buried bias or unknown belief may be incompatible with it, and they will end up rejecting it.
Well, then they can tell you they made a mistake and actually reject the frame, explaining why, and you will have learned about their position, allowing you to construct a new frame.
Indeed, though I wonder whether they will be able to express why often enough; if not, that would warrant omitting the framing step entirely in favor of immediate hypothetical probing. And even that assumes they'll realize the frame is inaccurate before the argument ends and each party goes their separate way.
So by framing their position in my own words, I could be tricking them into agreeing to something that technically sounds like their position, while their actual position remains suppressed and unknown, biasing their reception of everything that follows? That sounds true; however, if they interject and state their position themselves, then wouldn't the technique of probing with hypotheticals also fail to be neutral? I have edited the original comment to include and account for the former possibility, though I think the latter, probing with hypotheticals, is a valid neutral technique. If I'm wrong, please correct me.
Sometimes, the only answer you can come up with for this is "Because they're mistaken, evil, or both." (We can probably agree today that anyone making serious pro-slavery arguments prior to the American Civil War was mistaken, evil, or both.)
This is the Socratic method of arguing. It can also be used as a Dark Side technique, by choosing your questions so as to lead your counterpart into a trap: showing that their position is logically inconsistent, or that it implies they have to bite a bullet they don't want to admit to biting. I've seen this "countered" by people simply refusing to talk any more, by repeating their original statement, or by saying "No, that's not it" followed by something that seems incomprehensible.
Why would that be a problem if their position actually is inconsistent? People don't like having their inconsistencies exposed, but exposing them is still a legitimate move in a truth-seeking debate.
It also leads to the undesirable outcome of provoking even more screwed-up beliefs, by propagating them from one screwed-up belief. If you want to convince someone (as opposed to convincing yourself and/or the audience that you are right), you ought to start from the correct beliefs and try to edge your way towards the screwed-up ones. But that doesn't work either, because people usually have a surprisingly good instrumental model of the absence of the dragon in the garage, and instantly see what you are trying to do to their imaginary dragon. Speaking of which, it is easier to convince people to refine their instrumental model of the dragon's non-existence than to make them stop saying there is a dragon. People not formally trained usually don't understand the idea of proof by contradiction.
Reason as memetic immune disorder?
Reason works in any direction; you can start from nonsense and come up with more unrelated nonsense, or you can start from sense and steamroll over the nonsense. Edit: but yeah, along those lines.
Or, if you try to pull this kind of stunt on them too much, some good old ad baculum.
I have sometimes used this one with my ex-boyfriend, who was an extraordinarily bad arguer. I did it for two reasons: to have a productive argument, and to not have a boyfriend complaining that I always won arguments - so not exactly what the post had in mind. He had also admitted that I argued better than he did, so it did not come off as too rude (I still had to use many qualifiers). That being said, I never used it effectively when there were more than two people involved, and it sometimes backfired. I would not recommend using it in a group conversation, and especially not in an online discussion.

These tips seem designed for cases where everyone has read them and everyone wants to reach the truth. An important case, certainly, and what we're trying (probably pretty successfully) to achieve here.

I can't help suspecting that an argument containing someone like this and someone arguing to win will either go nowhere or conclude whatever the arguer-to-win went in thinking. Clearly no good.

Any ideas how to deal with that (rather common) case?

One option is to spend less time with people who argue to win, and more time with people who argue to reach truth.
I actually tried pretty hard, with this post, to teach an intelligent person off the street to argue productively. (I agonized over the wording a fair amount, started with simple techniques and then discussed advanced ones, made no assumptions about prior knowledge, etc.) So, share this post prior to your argument? Clearly, your argument's productivity will be bounded if one participant is totally intransigent. The best you can do in that case is collect evidence from the intransigent participant and move on.

TBH, the #1 rule should be: set a time limit for arguing with individuals, or groups of individuals, who are dogmatically sure of something for which they don't even provide an argument that could conceivably have been this convincing to them. E.g. "Why are you so sure exactly one God exists?" "Well, there's a book, which I agree doesn't make a whole lot of sense, and it says it was written by God..." Whatever - clearly you aren't updating your beliefs to '50% sure God exists' when presented with a comparable-quality argument that God doesn't ...

Actually, I have two tips which sound unfriendly but, if followed, should minimize unproductive arguments:

1: Try not to form strong opinions (with high certainty) based on shaky arguments (which should only support a low-certainty opinion). I.e., try not to be overconfident in whatever was conjectured.

2: Try hard not to be wrong.

More than half of the problem with unproductive arguments is that you are wrong. That's because in arguments, often, both sides are wrong (note: you can have a wrong proof that 2*3=6 and still be very wrong...)

This is a list of tips for having "productive" arguments. For the purposes of this list, "productive" means improving the accuracy of at least one person's views on some important topic. By this definition, arguments where no one changes their mind are unproductive.

Sometimes the onlookers will change their position. When arguing with someone sufficiently mind-killed about a topic (and yes there are people like that on lesswrong), that's the best you can hope for.