For me, contemplating Zen koans for too long can make my brain "hurt".

Does a dog have Buddha-nature or not? Show me your original face before your mother and father were born. If you meet the Buddha, kill him. Look at the flower and the flower also looks.

I find it interesting because, unlike coding a program or solving a math equation or playing chess, it doesn't seem like koans have a well-defined problem/goal structure or a clear set of rules and axioms. Some folks might even call them nonsensical. So I'm not sure to what extent the notion of (in)efficient optimization is applicable here; and yet it also appears to be an example of "thinking hard". (Of course, a Zen instructor would probably tell you not to think about it too hard.)

For what it's worth, I have three years' experience with university-level competitive debating, specifically with the format known as British Parliamentary (the style used by the World Universities Debating Championship, or WUDC). Since many people are unfamiliar with it, I'll briefly explain the rules: a BP debate comprises four teams of two members each. All four teams are ranked against one another, but two of them must argue for the affirmative ("government") side of the motion and the other two for the negative ("opposition") side. The objective is to persuade the adjudicators that your team deserves to win. In this format you do not get to research the topic beforehand, and you don't even know what you are going to debate until 15 minutes before the round starts -- which means that it requires a lot of quick brainstorming and improvisation. And since each speaker gets only 7 minutes to make their case, you have to prioritize the most important content and structure it coherently.

In our training sessions we actually do not study classical rhetoric. So I'm not familiar with terms like elocutio, dispositio or pronuntiatio -- although I can definitely recognize clear delivery, organized structure, and appeals to logic as important principles of varsity debating. I think there are skills one can learn from this kind of public speaking:

  • The target of persuasion in BP is the judge, whom we regard as a layman, or "average informed voter". This means that he or she has a high-school education and reads a newspaper once in a while, but is not an expert on any particular subject. Furthermore, the judge is not supposed to have a bias in favor of left-wing or right-wing arguments (but is moderate by the standards of a Western liberal democracy). This rule encourages speakers to use arguments that will appeal to a broad segment of people.
  • The persuasiveness of a speech is evaluated not based on how impressive your style is, but on how compelling your arguments are. A good argument is one that is (a) believable, i.e. the premises are acceptable and the conclusion follows logically; and (b) relevant to the concerns of the debate, i.e. something that counts in favor of your side and against the other side. It is up to you as a speaker to explain why a claim you make is likely to be true and why it implies that your team should win. This encourages speakers to be clear about what it is that they actually stand for, and why the rest of us should care.
  • Due to the short preparation time and the fact that judges do not fact-check the participants' speeches using Google, it doesn't make much sense to cite academic papers or statistics in one's speech. Additionally, basing an argument on a single example makes it vulnerable to refutation by counterexample. So if you can neither say "Studies show that in 73% of cases, X happens..." nor say "Last year, there was a case where X happened..." and get away with it, then what can you say? Well, a useful trick here is to remember that debating entails a comparison between two worlds: you can claim that "X is more likely (or less likely) to happen if we enact this policy than if we don't". Then you need argumentation to explain why that is the case, starting from premises that most people would accept as common knowledge or some kind of first principle about how the world works. You also need to explain why, assuming your argument is correct, this justifies/warrants the kind of action or conclusion you are proposing. This aspect of BP debating encourages speakers to think in terms of general rules rather than specific data points.
  • The interactive nature of the game requires you to respond to the arguments presented by the other teams. A rebuttal works the same way as an argument, but with the opposite intention: you explain why the claims made by the other side are either (a) unrealistic; or (b) unimportant, perhaps because they are not mutually exclusive with your claims, or because they are of such little consequence that they are outweighed by other factors. Thus, speakers have to simultaneously see both sides of the dispute in order to isolate the core tensions and advocate successfully for their side.
  • One quirk of this kind of format is that you don't get to choose beforehand which side of the topic you will be arguing for. This means that you will occasionally be required to defend positions that you personally disagree with, and poke holes in the ones you cherish. This is great for challenging confirmation bias and inviting speakers to consider different points of view. Even if you don't radically change your worldview, you will at least develop a greater understanding of the other side.

Of course, one could also criticize this type of debating. Firstly, it inculcates a competitive spirit rather than a spirit of collaborative truth-seeking. Secondly, as a game it is in some ways detached from the nuances and practicalities of persuasion in the "real world", where things like statistical figures, budgetary limits, and constitutionality do matter. Finally, one might become too adept at constructing plausible-seeming justifications for any conclusion one likes regardless of the actual evidence -- a danger Eliezer warned us about:

And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich's "dysrationalia" sense of stupidity.
You can think of people who fit this description, right?  People with high g-factor who end up being less effective because they are too sophisticated as arguers?

When you start with a given position on a topic (let's say you have to argue against legalizing recreational drugs) and construct arguments in its favor, you are essentially engaging in rationalization instead of rationality.

So do the benefits outweigh these risks? I don't know.

In that case, in what sense does he dislike his professor? From your example, his disliking the professor seems to be a free-floating XML tag.

I suppose it can be explained by the liking/wanting vs. approving distinction (you can have a feeling that you disapprove of) or Alicorn's idea of repudiating one's negative characteristics. And then the cognitive dissonance created by giving an apple to someone you dislike may be resolved by shifting your attitude toward the person in a positive direction -- so in this sense, Undoing is a strategy to reduce disapproved/repudiated properties.

This is especially notable with the way projection/reaction formation is discussed in practice: "He's opposing position X because he secretly supports it."

Interestingly, there is research showing that some people who oppose homosexuality or gay marriage do in fact show an unconscious attraction to the same sex -- see e.g. Weinstein et al. (2012). However, in this case I would agree that "he overtly opposes X because he covertly supports X" is the wrong way of looking at it; rather, he (the ego in Freudian terms) disapproves of his desire (the id). Of course, this doesn't imply that everybody who opposes X is doing so as a defense mechanism.

Edit: To clarify, I'm certainly not implying that homosexuality is a negative characteristic; just that some people are raised in a culture where it is stigmatized, and so they internalize the belief that it is. The specific claim made by the Weinstein et al. paper is as follows:

  • Some children have parents who don't support their autonomy. Some of those parents happen to also hold negative attitudes toward homosexual individuals. The combination of these two factors results in the children seeking approval from their parents by suppressing the needs/wishes/beliefs etc. that aren't supported by their parents. The researchers did a survey of participants' explicit views about homosexuality and their sexual orientations, and also measured the participants' implicit sexual orientation using a reaction time task. They found a discrepancy between explicit and implicit sexual orientations, especially when parents showed low autonomy support, and also found that this discrepancy was related to greater self-reported homophobia and endorsement of anti-gay policy positions. The researchers conclude: "...these effects can be understood, at least in part, as a defensive response to maintain the suppression of self-relevant, but threatening, information" (p.829).

I decided to edit this comment instead of replying directly to tempus' comment below, as I did not perceive that commenter to be acting charitably.

Vernor doesn't give the professor an apple because he dislikes the professor per se, but because he feels guilty about his dislike for the professor, which he tries to "fix" by giving a gift -- this works exactly because giving a gift usually indicates liking someone (putting aside other motives, such as ingratiation).

A different example of the "Undoing" defense mechanism would be an abusive alcoholic father who buys his kids lots of Christmas presents (see the sources here and here).

In psychoanalytic theory, these various phenomena are related in that they serve the function of protecting one's ego. But if you think that's a poor way of conceptualizing them, I'd be curious how you think we could do better.

Edit: For example, gworley's comment conceptualizes them as defending one's prior probability.

Thanks for pointing it out. I've fixed it and updated the link.

Thanks, I'm glad you found it useful!

The reason I didn't link to LW 2.0 is that it's still officially in beta, and I assumed that the URL (lesserwrong.com) would eventually change back to lesswrong.com (but perhaps I'm mistaken about this; I'm not entirely sure what the plan is). Besides, the old LW site links to LW 2.0 on the frontpage.

I'm wildly speculating here, but perhaps enforcing norms is a costly signal to others that you are a trustworthy person, meaning that in the long term you gain more resources than others who don't behave similarly.

I cannot say much about CFAR techniques, but I'd nominate the following as candidates for LW "hammers":

Of course, the list is not exhaustive.

Thanks a lot for doing this!
