In general, I think LessWrong.com would benefit from conspicuous guidelines: a readily-clickable FAQ or User's Guide that describes posting etiquette and relevance criteria, and general mandatey stuff.
I encourage everyone to look at the example of http://MathOverflow.net/, a web community for mathematicians that started with a few graduate students just half a year ago and has grown immensely in size and productivity since then (notably enjoying regular contributions from Fields Medalist Terence Tao).
Not only do they have an FAQ, but also a clearly distinguished, ongoing Meta forum that was used extensively in the site's early development to analyze its policies:
http://mathoverflow.net/faq
http://meta.mathoverflow.net/
If we did discover a cognitive trick for making people collectively reason better, a sentence about it in an FAQ could work wonders.
Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others' arguments.
I observed some time ago that Roger Penrose seemed to be a much better explainer of physics when he was using it to argue something (even though the conclusion was completely bogus) than people who graciously write textbooks that will be required reading for students who have to buy them.
If you want good textbooks, make sure the author is trying to persuade the students of something, I'd say. I usually am.
Perhaps the process of writing should be separated from the product of writing (i.e. the textbook). The best of both worlds surely is a textbook that doesn't try to persuade at all (since persuasion is tangential to providing an explanation), but which was written with a process involving a lot of arguing (to help stimulate the best reasoning). My brother and I sometimes had heated arguments when we wrote C# 3.0 in a Nutshell, with numerous "red ink revisions" before finally settling on the NPOVish text the reader sees.
Maybe that explains why Wikipedia is usually much clearer to read (IMO) than professionally produced encyclopedias.
From the Why do humans reason paper:
The group success is due first and foremost to the filtering of a variety of solutions achieved through evaluation. When none of the answers initially proposed is correct, then all of them are likely to be rejected, wholly or partly new hypotheses are likely to be proposed, and again filtered, thus explaining how groups may even do better than any of their individual members.
These attitudes are favorable to argumentative goals but actually detrimental to epistemic goals. This is particularly evident in decision-making. Reasoning appears to help people little when deciding; it directs people to the decisions that will be easily justified, not to the best decisions!
I'm sure this must have been stated before this post, or is probably already well known around here, but this is just a brilliant and highly useful insight.
I guess this is the reason why "explain your problem to the rubber duck before asking humans" works.
Many times I have wandered into IRC with a programming problem, and the very moment I hit enter to send it out and read it on the screen as part of the conversation, the answer occurs to me; the same answer I'd give anybody else in that room asking the question.
Maybe I should rig my IRC client to delay actually sending the first message to a room...
I was thinking that the message would appear to have been sent to the chat room as soon as I hit enter, but not actually be sent to the IRC server until later. If the hack is a matter of seeing the question as though it were someone else's question, then that would work. If the hack requires that I genuinely believe that other people in the room have received the question, it wouldn't work as well... but might still work somewhat, if the delay behavior were unobtrusive enough that I partially forgot about it.
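For what it's worth, here's a minimal sketch of that hack in Python, not tied to any real IRC client's plugin API; DelayedSender, send_privmsg, and DELAY_SECONDS are all invented names, and the delay length is an arbitrary guess:

```python
import threading

DELAY_SECONDS = 30  # arbitrary; long enough for the rubber-duck effect to kick in

class DelayedSender:
    """Echo the first message locally at once, but hold off on actually
    transmitting it, so I get to read my own question "in the room"
    before anyone else does."""

    def __init__(self, sock, delay=DELAY_SECONDS):
        self.sock = sock       # an already-connected, registered IRC socket
        self.delay = delay
        self.sent_first = False

    def send_privmsg(self, channel, text):
        line = f"PRIVMSG {channel} :{text}\r\n".encode("utf-8")
        if self.sent_first:
            self.sock.sendall(line)        # later messages go out normally
            return
        print(f"[{channel}] <me> {text}")  # looks sent, from my side
        threading.Timer(self.delay, self.sock.sendall, args=(line,)).start()
        self.sent_first = True

# usage (assuming `sock` is a connected socket):
#   sender = DelayedSender(sock)
#   sender.send_privmsg("#python", "why does my decorator eat the docstring?")
```

Whether this satisfies the "genuinely believe it was sent" condition is exactly the open question above.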
Part of the problem is that we have a much better worked out theory of reasoning than of arguing. So we are tempted to apply our theory of reasoning to evaluate our arguments, where we should prefer to apply a theory of arguing. So what we need is a better theory of arguing - what counts as a good argument, a good reply, etc.
I partake in British Parliamentary Debate. A good argument:
A counterargument either:
A good counterargument is concise.
For example: this house would force everyone to publish their income on the Internet.
This motion would lessen corruption by crowdsourcing police work. Any person could go online and compare their neighbor's apparent wealth to their stated income and raise an alarm should a disparity be found. The neighbor would of course know this and thus would not dare evade taxes or whatever. So we have less corruption, fewer people in jail due to deterrence, more taxes, and less strain on our actual police!
Attack premises: most people live in big cities in relative anonymity, neighbors don't know each other, and wealth isn't conspicuous.
Attack logic 1: government websites are hardly a popular destination. People simply wouldn't care to go through tables of numbers.
Attack logic 2: people would just spend their ill-gotten gains inconspicuously. (Counter-counterargument: wealth is about signaling status, which must be visible.)
Alternative: this is a huge infringement of people's privacy, which is more important than lessening corruption. (This one should be more elaborate, but I'm out of steam.)
Note though that British Parliamentary Debate is about winning and not truth.
Wouldn't that be even worse? If we currently mostly don't use good reasoning for individual truth-seeking, but at least occasionally use good reasoning to argue, wouldn't developing a theory of arguing contribute to displacing that, resulting in even less good reasoning? Or do you think that reasoning would become better for truth-seeking if it were freed from the other optimization goal of being good for arguing?
I've been chewing on this question for a while.
This WP article could serve as a starting point - though it looks a little daunting. It makes much of Stephen Toulmin's "six elements of an argument" - I see that Toulmin hasn't been discussed on LW so far. I'll see if I can get some info, summarize it, and evaluate the usefulness of that framework.
A proposal in line with M&S would be: a good argument is one that causes your interlocutor to accept your conclusion. A good counter-argument is one that justifies your rejecting your interlocutor's conclusion. This conforms to the hypothesis that reason serves argument, and that its twin functions are to help us convince others and to resist being convinced.
I'm also wondering about a "memetic theory of argumentation", where an argument spreads by virtue of convincing others, and mutates to become more convincing. Our "rules" for correct argumentation are themselves but memetic fragments that "ally" with others to increase their force of conviction. For instance, "we should reject ad hominem arguments" is a meta-argument which, if we expect that our interlocutors are likely to use it to reject our conclusions, we will avoid using for fear of making a poor initial argument. In this manner we might expect to see an overall increase in the "fitness" of arguments as a consequence of the underlying arms race.
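None of this is in M&S, but the selection dynamic is easy to caricature in a toy simulation, with every parameter pulled out of thin air:

```python
import random

random.seed(0)

POP_SIZE = 100       # how many "arguments" circulate at once
GENERATIONS = 50     # rounds of retelling
MUTATION_SD = 0.05   # how much an argument drifts per retelling

# Caricature each argument as a single trait: how convincing it is (0..1).
population = [random.random() for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Arguments spread in proportion to how convincing they are...
    parents = random.choices(population, weights=population, k=POP_SIZE)
    # ...and mutate slightly each time they are retold.
    population = [min(1.0, max(0.0, p + random.gauss(0.0, MUTATION_SD)))
                  for p in parents]

mean = sum(population) / POP_SIZE
print(f"mean convincingness after {GENERATIONS} generations: {mean:.2f}")
```

Mean convincingness climbs steadily, which is the "arms race" in miniature; note that nothing in the model rewards being true.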
We should also be careful to distinguish conversation from argument; I see the former as serving an entirely different purpose.
when they are after the truth rather than after winning a debate.
To win at life you must be willing to lose at debate.
Please improve upon this slogan/proverb.
Learn to enjoy being proven wrong, or you'll never learn anything.
If you never lose an argument, then you need to find some better arguments.
Winning an argument is satisfying; losing an argument is productive.
One suggestion would be to avoid commenting too much by having a friend evaluate the content of your comment before posting it here; their reasoning will see your blind spots. This would also diminish the total number of comments, which in turn would lead to more people reasoning about any particular comment.
One thing that needs a solution: What is a better system than Karma to encourage status-seeking-rational primates to read the blog, but at the same time to avoid over-commenting to get more points?
at the same time to avoid over-commenting to get more points?
I see the karma system as benign in this respect, because people love to argue anyway (M&S also mention that in their conclusions). They would do it without a karma outcome, so adding karma seems unlikely to affect the overall number of comments much.
There are occasional exceptions. As the "Spring meta thread" shows (or the pictures thread in the babies and bunnies post), people seem to need opportunities to just goof off. This is fine, but it has the unfortunate effect of being a distraction to those who (like me, I'm afraid) prefer the more "serious" stuff.
I downvoted some of the comments in the "This is a comment" thread, but clearly enough other folks disagree with me. Here again M&S have something useful to say: the group isn't always right, and not all group processes track truth; only those which foster production and evaluation of arguments.
So, perhaps what we need is a dual system, with separate votes for "I like/dislike this" on one hand, and "good/stupid point" on the other hand.
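A minimal sketch of what that dual system could look like - CommentScore and its method names are inventions for illustration, not a proposal for the actual codebase:

```python
from dataclasses import dataclass

@dataclass
class CommentScore:
    """Two independent tallies, so "I enjoyed this" can no longer
    masquerade as "this is a sound point" (or vice versa)."""
    liked: int = 0  # I like / dislike this
    sound: int = 0  # good point / stupid point

    def vote_liked(self, up: bool) -> None:
        self.liked += 1 if up else -1

    def vote_sound(self, up: bool) -> None:
        self.sound += 1 if up else -1

score = CommentScore()
score.vote_liked(True)    # a funny one-liner...
score.vote_sound(False)   # ...that doesn't actually advance the argument
print(score)              # CommentScore(liked=1, sound=-1)
```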
They would do it without a karma outcome, so adding karma seems unlikely to affect the overall number of comments much.
I don't think that follows. I enjoy drinking soda, but if someone gave me $5 every time I did it, I would drink even more soda.
More directly, as a data point, I find myself commenting more on Less Wrong than on other blogs because of the karma points system. However, an even bigger motivator for me than that is the response notification; if not for that, I would almost never comment on old threads, or post comments specifically inviting response.
If people started taking your first piece of advice, comment quality would go up, and downvoting might become a little more prevalent, discouraging at least "pointless" posts. Also, posts in clear violation of guidelines would not happen much, unless there was visibly an exceptional benefit, lest they draw downvotes. Judging from MathOverflow (see my other comment), the downvoting doesn't devolve into fearmongering either - just a healthy immunity to trolling and time-wasting.
This is great Meta material ;)
What is a better system than Karma to encourage status-seeking-rational primates to read the blog, but at the same time to avoid over-commenting to get more points?
Systematically downvoting comments like mine right here.
What about your comment makes it worthy of downvoting? Is it that it's meta, or that it doesn't add content to the conversation, or just because you asked us to?
I can't for the life of me remember making this comment.
I'd guess that it was meta; it was semi-smart, but not very productive, so it deserved to be downvoted, which is what it asked for, and which it got. Basically, smart-sounding comments with tenuous relevance are a waste - the smartness should not be a shield.
I used to take people's bias as evidence of manipulation and deception; now I see it as naivety. I've reframed the situation and have gone from frustrated, angry, and contemptuous to thinking of them ...well, I guess now I just don't think about it too much.
Yes, reasoning is about winning arguments, but it is also about logic! For our logic and rules of reasoning themselves organize our discourse into war games.
Our antagonistic rules of logic (where ideas, perspectives, etc. have to compete with each other for truth, and where logical might makes right) do not serve us when the goal is clear, differentiated, complex, high-quality thinking. Traditionally competitive, antagonistic reasoning introduces an extraneous motivation (to win) that diverts cognitive capacities and resources from the actual task of thinking, and is detrimental to high-quality, complex, differentiated reasoning, as well as to courageously explorative thinking. (In an emotional-relational climate of mutual attack - built into our logic and rules of argument - we produce polarized knee-jerk opinions or retreat to "safe", easily defensible positions.) (See Kohn, A. (1992). No Contest: The Case Against Competition. Why We Lose in Our Race to Win.)
What generates the highest-quality thinking, and is most inspiring and motivating to those participating, is what Kohn has called "cooperative conflict" (in contrast both to antagonistic, oppositional conflict and to simple consent). That is, a framework of cooperation within which contrasting (or "contradicting") ideas, theories, views, opinions, arguments, etc. are cooperatively explored, as a shared project or challenge, to come to a more differentiated view together.
This isn't true; sometimes reasoning IS about logic:
""Argumentation ethics asserts the non-aggression principle is a presupposition of every argument and so cannot be logically denied during an argument. Argumentation ethics draws on ideas from Jürgen Habermas's and Karl-Otto Apel's discourse ethics, from Misesian praxeology and from the political philosophy of Murray Rothbard.
Hoppe first notes that when two parties are in conflict with one another, they can choose to resolve the conflict by engaging in violence, or engaging in argumentation. In the event that they choose to engage in argumentation, Hoppe asserts that the parties have implicitly rejected violence as a way to resolve their conflict. He therefore concludes that non-violence is an underlying norm (Grundnorm) of argumentation, that is accepted by both parties.
Hoppe states that, because both parties propound propositions in the course of argumentation, and because argumentation presupposes various norms including non-violence, the act of propounding a proposition that negates the presupposed propositions of argumentation is a logical contradiction between one’s actions and one’s words (this is called a performative contradiction). Specifically, to argue that violence should be used to resolve conflicts (instead of argumentation) is a performative contradiction.[3]""
"Why do humans reason" (PDF), a paper by Hugo Mercier and Dan Sperber, reviewing an impressive amount of research with a lot of overlap with themes previously explored on Less Wrong, suggests that our collective efforts in "refining the art of human rationality" may ultimately be more successful than most individual efforts to become stronger. The paper sort of turns the "fifth virtue" on its head; rather than argue in order to reason (as perhaps we should), in practice, we reason in order to argue, and that should change our views quite a bit.
I summarize Mercier and Sperber's "argumentative theory of reasoning" below and point out what I believe its implications are for the mission of a site such as Less Wrong.
Human reasoning is one mechanism of inference among others (for instance, the unconscious inference involved in perception). It is distinct in being a) conscious, b) cross-domain, c) used prominently in human communication. Mercier and Sperber make much of this last aspect, taking it as a huge hint to seek an adaptive explanation in the fashion of evolutionary psychology, which may provide better answers than previous attempts at explanations of the evolution of reasoning.
The paper defends reasoning as serving argumentation, in line with evolutionary theories of communication and signaling. In rich human communication there is little opportunity for "costly signaling", that is, signals that are taken as honest because too expensive to fake. In other words, it's easy to lie.
To defend ourselves against liars, we practice "epistemic vigilance": we check the communications we receive for attributes such as a trustworthy or authoritative source, and we evaluate the coherence of the content. If a message contains claims that match our existing beliefs, and packages its conclusions as inferences from those beliefs, we are more likely to accept it, and thus our interlocutors have an interest in constructing good arguments. Epistemic vigilance and argumentative reasoning are thus involved in an arms race, which we should expect to result in good argumentative skills.
What of all the research suggesting that humans are in fact very poor at logical reasoning? Well, if in fact "we reason in order to argue", this is precisely what we should expect when subjects are studied in non-argumentative situations.
Mercier and Sperber argue that, when you look at research that studies people in the appropriate settings, we turn out to be in fact quite good at reasoning when we are in the process of arguing; specifically, we demonstrate skill at producing arguments and at evaluating others' arguments. M&S also plead for the "rehabilitation" of confirmation bias as playing an adaptive, useful role in the production of arguments in favor of an intuitively preferred view.
If reasoning is a skill evolved for social use, group settings should be particularly conducive to skilled arguing. Research findings in fact show that "truth wins": once a group participant has a correct solution they will convince others. A group in a debate setting can do better than its best member.
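A back-of-the-envelope model (mine, not the paper's) makes the arithmetic concrete: if each of n members independently finds the correct answer with probability p, and the group reliably adopts a correct answer once someone produces it, the group fails only when everyone fails.

```python
def group_success(p: float, n: int) -> float:
    """P(group solves it) under "truth wins": 1 minus P(all n members fail)."""
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(f"n={n:2d}, individual p=0.20 -> group {group_success(0.2, n):.2f}")
# n= 1 -> 0.20,  n= 3 -> 0.49,  n= 5 -> 0.67,  n=10 -> 0.89
```

Even mediocre individual reasoners make a formidable group, provided evaluation (not production) decides which answer survives.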
The argumentative theory, Mercier and Sperber argue, accounts nicely for motivated reasoning, on the model that "reasoning anticipates argument". Such anticipation colors our evaluative attitudes, leading for instance to "polarization" whereby a counter-argument makes us even more strongly believe the original position, or "bolstering" whereby we defend a position more strongly after we have committed to it.
These attitudes are favorable to argumentative goals but actually detrimental to epistemic goals. This is particularly evident in decision-making. Reasoning appears to help people little when deciding; it directs people to the decisions that will be easily justified, not to the best decisions!
However, it isn't all bad news. The important asymmetry is between production of arguments, and their evaluation. In groups with an interest in finding correct answers, "truth wins".
Becoming individually stronger at sound reasoning is possible, Mercier and Sperber point out, but rare. The best achievements of reasoning, in science or morality, are collective.
If this view of reasoning is correct, a site dedicated to "refining the art of human rationality" should recognize this asymmetry between producing arguments and evaluating arguments and strive to structure the "work" being done here accordingly.
It should encourage individual participants to support their views, and perhaps take a less jaundiced view of "confirmation bias". But it should also encourage the breaking down of arguments into small, separable pieces, so that they can be evaluated and filtered individually; that lines up with the intent behind "debate tools", even if their execution currently leaves much to be desired.
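As a sketch of what "small, separable pieces" might mean in practice - Claim and stands are invented names, not the API of any existing debate tool:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One separately evaluable piece of an argument - a premise or an
    inference step - carrying its own accept/reject tally."""
    text: str
    rests_on: list["Claim"] = field(default_factory=list)
    votes: int = 0  # net accept/reject votes on this piece alone

    def stands(self) -> bool:
        # A piece stands only if it is accepted and everything it rests on stands.
        return self.votes > 0 and all(c.stands() for c in self.rests_on)

premise1 = Claim("Reasoning evolved to serve argumentation", votes=2)
premise2 = Claim("Less Wrong is an argumentative setting", votes=1)
conclusion = Claim("LW should structure its work around argument evaluation",
                   rests_on=[premise1, premise2], votes=1)
print(conclusion.stands())  # True only while every supporting piece holds up
```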
It should stress the importance of "collectively seeking the truth" and downplay attempts at "winning the debate". This, in particular, might lead us to take a more critical view of some common voting patterns, e.g. more upvotes for snarky one-liner replies than for longer, well-thought-out replies.
There are probably further conclusions to be drawn from the paper, but I'll stop here and encourage you to read or skim it, then suggest your own in the comments.