Yudkowsky recently posted a Twitter thread about how ideal reasoners respond to arguments. My understanding of his reasoning is:

  1. The more smart/rational/whatever you are, the better you are at figuring out what is true
  2. Thus, whether you believe the conclusion of an argument should be based primarily on whether the conclusion is true, rather than on how effectively the argument was presented

This principle seems valid to me.

Valentine, in a LessWrong comment thread, used Yudkowsky's thread to draw the conclusion that a healthy rationalist community should not make arguments. I think this is silly. We are not, unfortunately, logically omniscient; we cannot just look at data and draw all the correct conclusions from it. The purpose of an argument is to help people realize what conclusions they should draw from data without having to figure it all out on their own.


An extra nuance which isn't totally relevant to my main hot take of "arguments are often not useless": I agree with the idea that framing an argument in your head as an attempt to convince someone of something isn't necessarily the best way to think about it. It's very easy for arguments to start to feel adversarial, which they ideally shouldn't be.

This post was requested by Eneasz Brodski on the Bayesian Conspiracy Discord.

The word "argument" has (at least) two kinds of uses:

  1. "Here's some reasoning showing why X might be true."
  2. Social pressure. (e.g. "No, you should donate to XYZ and not ABC because blah blah blah.")

I hear you saying that the first is good. I agree, even when people have all the pieces; we're not logically omniscient, as you say.

The second is dumb. Healthy rational discourse norms would banish it.

I'm a bit of a dick on this point. When I see it, I exaggerate it and call it out. It irritates me. I wasn't being totally fair to D0TheMath. I think they were trying to be polite, follow standard LW norms, and engage in good faith. But I still think my read was basically right.

I read their comment as saying something like:

Given my epistemic state, I find myself disagreeing with you. I don't find it worthwhile to investigate whether you're right, so here's what you would need to do to persuade me.

This is normal in LW culture, and I think it's nuts.

It's part of the same bonkers thread that has people policing their epistemic impacts on each other and calling this handwringing "good collective epistemic hygiene". It's codependence. Plain and simple.

(What to do instead? How about just actually intend good epistemics for yourself, aim to be clear and transparent, and let others take care of their epistemic states (or completely fuck themselves up). That produces vastly more epistemically robust networks with vastly less overhead.)

A network that values persuading each other is crazy. Awful incentives. It feeds ego structures ("rewards with status" if you like) when arguments are persuasive. It starves geeks and feeds sociopaths.

If you disagree with someone and aren't curious, that's fine. Just admit that to yourself (and maybe tell them).

If you are curious, just ask. ("If you feel like looking into simulacra levels theory, maybe give me an example of XYZ if you can?")

But for the love of sanity, don't feed the sociopaths.

I mostly agree, but I feel like 

Given my epistemic state, I find myself disagreeing with you. I don't find it worthwhile to investigate whether you're right, so here's what you would need to do to persuade me.

is pretty much just an example of someone admitting that they aren't curious (enough to do the investigation themself)? 

The summary of Valentine's comment here is extremely inaccurate: in the same comment thread he made an argument himself, his comment calls for making the minimum number of arguments, and it says nothing about arguments not being useful. The comment seems to be more about how communities organized around being good at something shouldn't spend resources on catering to people who are bad at that thing.

I feel like "a healthy rationalist community should not make arguments" is pretty much just a slight rephrasing of "strong rationalist communication is healthiest and most efficient when practically empty of arguments", but I'm open to suggestions for alternative phrasings (especially if Valentine wants to comment).

I wouldn't say that anyone has an obligation to spend their time/energy responding to requests for laying out an argument, but I think the idea that doing so is "unhealthy" is wrong and I want to push back against it. It does not seem to me that the core of Valentine's objection is "producing a better argument would be too resource-intensive."

I feel like "a healthy rationalist community should not make arguments" is pretty much just a slight rephrasing of "strong rationalist communication is healthiest and most efficient when practically empty of arguments", but I'm open to suggestions for alternative phrasings (especially if Valentine wants to comment).

They're quite different. The latter is a qualified description; the former is an unqualified prescription. Even if the prescription were qualified, it would not automatically follow from the description, because it is not necessarily the case that focusing on making fewer arguments is a way to get healthier, in the same way that "the healthiest people tend to exercise" doesn't imply that "get out of bed and go for a jog" is gonna help sick people. Goodhart's law has a tendency to screw these kinds of things up.

Sometimes these kinds of things can work (maybe exercising more will keep you from getting sick?), but in those cases there is still an additional piece which is not contained in "healthiest tends to look like X". Every time you add in an additional piece because it seems implied from your perspective, you risk changing the meaning to something the person saying it wouldn't endorse. When you're reading someone whose worldview is quite different from your own, this can happen very rapidly, so it's crucial to read precisely what they are saying and note which inferences are your own rather than theirs.

Outlining heuristics is helpful for every lurker who happens not to have encountered that heuristic before. This is often more useful to me than the object-level arguments.


Indeed: he wasn't literally talking about ideal reasoners, only "smarter" ones. You aren't an ideal reasoner, because ideal reasoners have to be infinite in their faculties.