Musings on Double Crux (and "Productive Disagreement")

by Raemon · 6 min read · 28th Sep 2017 · 72 comments

Epistemic Status: Thinking out loud, not necessarily endorsed, more of a brainstorm and hopefully discussion-prompt.

Double Crux has been making the rounds lately (mostly on Facebook but I hope for this to change). It seems like the technique has failed to take root as well as it should. What's up with that?

(If you aren't yet familiar with Double Crux I recommend checking out Duncan's post on it in full. There's a lot of nuance that might be missed with a simple description.)

Observations So Far

  • Double Crux hasn't percolated beyond circles directly adjacent to CFAR (it seems to be learned mostly by word of mouth). This might be evidence that it's too confusing or nuanced a concept to teach without word of mouth and lots of examples. It might be evidence that we have not yet taught it very well.
  • "Double Crux" seems to refer to two things: the specific action of "finding the crux(es) you both agree the debate hinges on" and "the overall pattern of behavior surrounding using Official Doublecrux Technique". (I'll be using the phrase "productive disagreement" to refer to the second, broader usage)

Double Crux seems hard to practice, for a few reasons.

Filtering Effects

  • In local meetups where rationality-folk attempt to practice productive disagreement on purpose, they often have trouble finding things to disagree about. Instead they:
    • are already filtered to have similar beliefs,
    • quickly realize their beliefs shouldn't be that strong (e.g. they disagree on Open Borders, but as soon as they start talking they admit that neither of them really has that strong an opinion), or
    • have wildly different intuitions about deep moral sentiments that are hard to make headway on in a reasonable amount of time - often untethered to anything empirical. (e.g. what's more important? Preventing suffering? Material freedom? Accomplishing interesting things?)

Insufficient Shared Trust

  • Meanwhile in many online spaces, people disagree all the time. And even if they're both nominally rationalists, they have an (arguably justified) distrust of people on the internet who don't seem to be arguing in good faith. So there isn't enough foundation to do a productive disagreement at all.
  • One failure mode of Double Crux is when people disagree on what frame to even be using to evaluate truth, in which case the debate recurses all the way to the level of basic epistemology. It often doesn't seem to be worth the effort to resolve that.
  • Perhaps most frustratingly: it seems to me that there are many longstanding disagreements between people who should totally be able to communicate clearly, update rationally, and make useful progress together, and those disagreements don't go away; people just eventually start ignoring each other or leaving the dispute unresolved. (An example I feel safe bringing up publicly is the argument between Hanson and Yudkowsky, although this may be a case of the 'what frame are we even using' issue above.)

That last point is one of the biggest motivators of this post. If the people I most respect can't productively disagree in a way that leads to clear progress, recognizable from both sides, then what is the rationality community even doing? (Whether you consider the primary goal to be "raising the sanity waterline" or "building a small intellectual community that can solve particular hard problems", this bodes poorly.)

Possible Pre-Requisites for Progress

There are a large number of sub-skills you need in order to productively disagree. To have public norms surrounding disagreement, you not only need individuals to have those skills - they each need to trust that the others have those skills as well.

Here's a rough list of those skills. (Note: this is long, and it's less important that you read the whole list than that you notice how long it is; the sheer number of sub-skills is why Double Cruxing is hard.)

  • Background beliefs (listed in Duncan's original post)
    • Epistemic humility ("I could be the wrong person here")
    • Good Faith ("I trust the other person to be believing things that make sense to them, which I'd have ended up believing if I were exposed to the same stimuli, and that they are generally trying to find the truth")
    • Confidence in the existence of objective truth
    • Curiosity / Desire to uncover truth
  • Building-Block and Meta Skills (necessary, or at least very helpful, for learning everything else)
    • Notice you are in a failure mode, and step out. Examples:
      • You are fighting to make sure a side/argument wins
      • You are fighting to make another side/argument lose (potentially jumping on something that seems allied to something/someone you consider bad/dangerous)
      • You are incentivized to believe something, or not to notice something, because of social or financial rewards
      • You're incentivized not to notice something or think it's important because it'd be physically inconvenient/annoying
      • You are offended/angered/defensive/agitated
      • You're afraid you'll lose something important if you lose a belief (possibly 'bucket errors')
      • You're rounding a person's statement off to the nearest stereotype instead of trying to actually understand and respond to what they're saying
      • You're arguing about definitions of words instead of ideas
      • Notice "Freudian slip"-ish things that hint that you're thinking about something in an unhelpful way. (For example, while writing this, I typed out "your opponent" to refer to the person you're Double Cruxing with, which is a holdover from treating it like an adversarial debate.)

(The "Step Out" part can be pretty hard and would be a long series of blogposts, but hopefully this at least gets across the ideas to shoot for)

  • Social Skills (i.e. not feeding into negative spirals, noticing what emotional state or patterns other people are in [*without* accidentally rounding them off to a stereotype])
    • Ability to tactfully disagree in a way that arouses curiosity rather than defensiveness
    • Leaving your colleague a line of retreat (i.e. not making them lose face if they change their mind)
    • Socially reward people who change their mind (in general, frequently, so that your colleague trusts that you'll do so for them)
    • Ability to listen (in a way that makes someone feel listened to) so they feel like they got to actually talk, which makes them inclined to listen as well
    • Ability to notice if someone else seems to be in one of the above failure modes (and then, ability to point it out gently)
    • Cultivate empathy and curiosity about other people so the other social skills come more naturally, and so that even if you don't expect them to be right, you still see value in at least understanding their reasoning (fleshing out your model of how other people might think)
    • Ability to communicate in (and to listen to) a variety of styles of conversation, "code switching", learning another person's jargon or explaining yours without getting frustrated
    • Habit of asking clarifying questions that help your partner find the crux of their beliefs.
  • Actually Thinking About Things
    • Understanding when and how to apply math, statistics, etc.
    • Practice thinking causally
    • Practice various creativity-related things that help you brainstorm ideas, notice implications of things, etc.
    • Operationalize vague beliefs into concrete predictions
  • Actually Changing Your Mind
    • Notice when you are confused or surprised and treat this as a red flag that something about your models is wrong (either you have the wrong model or no model)
    • Ability to identify what the actual cruxes of your beliefs are.
    • Ability to track small bits of evidence as they accumulate. If enough bits of evidence have accumulated that you should at least be taking an idea *seriously* (even if not changing your mind yet), go through the motions of thinking through what the implications WOULD be, to help future updates happen more easily.
    • If enough evidence has accumulated that you should change your mind about a thing... like, actually do that. See the list of failure modes above that may prevent this. (That said, if you have a vague nagging sense that something isn't right even if you can't articulate it, try to focus on that and flesh it out rather than trying to steamroll over it)
    • Explore Implications: When you change your mind on a thing, don't just acknowledge it; actually think about what other concepts in your worldview should change. Do this:
      • because it *should* have other implications, and it's useful to know what they are...
      • because it'll help you actually retain the update (instead of letting it slide away when it becomes socially/politically/emotionally/physically inconvenient to believe it, or just forgetting)
    • If you notice your emotions are not in line with what you now believe the truth to be (on a System 2 level), figure out why that is.
  • Noticing Disagreement and Confusion, and then putting in the work to resolve it
  • If you have all the above skills, and your partner does too, and you both trust that this is the case, you can still fail to make progress if you don't actually follow up, and schedule the time to talk through the issues thoroughly. For deep disagreements this can take years. It may or may not be worth it. But if there are longstanding disagreements that continuously cause strife, it may be worthwhile.

Building Towards Shared Norms

When smart, insightful people disagree, at least one of them is doing something wrong, and it seems like we should be trying harder to notice and resolve it.

A rough sketch of a norm I'd like to see:

Trigger: You've gotten into a heated dispute where at least one person feels the other is arguing in bad faith (especially in public/online settings)

Action: Before arguing further:

  • stop to figure out if the argument is even worth it
  • if so, each person runs through some basic checks (e.g. "am *I* being overly tribal/emotional?")
  • instead of continuing to argue in public, where there's a lot more pressure to not lose face or to steer social norms, they continue the discussion privately, in whatever way is most human-centric and practical.
  • they talk at least until they succeed at Step 1 of Double Crux (i.e. agree on where they disagree, and hopefully figure out a possible empirical test for it). Ideally, they also come to as much agreement as they can.
  • Regardless of how far they get, they write up a short post (maybe just a paragraph, maybe longer depending on context) on what they did end up agreeing on or figuring out. (The post should be something they both sign off on)

Comments (72)

Some comments are truncated due to high volume.

I am genuinely confused by the discourse around double crux. Several people I respect seem to think of DC as a key intellectual method. Duncan (curriculum director at CFAR) explicitly considers DC to be a cornerstone CFAR technique. However, I have tried to use the technique and gotten nowhere.

Ray deserves credit for identifying and explicitly discussing some of the failure modes I ran into. In particular, DC-style discussion frequently seems to recurse down to very fundamental issues in philosophy and epistemology. Twice I have tried to discuss a concrete practical issue via DC and wound up discussing utility aggregation; in these cases we were both utilitarians, and we still couldn't get the method to work.

I have to second Said Achmiz's request for public examples of double crux going well. I once asked Ray for an example via email and received a link to Sarah Constantin's blogpost. This post is quite good and caused me to update towards the view that DC can be productive. But this post doesn't contain the actual DC conversation, just a summary of the events and the lessons learned. I want to see an actual, for real, fully detailed example of DC being used productively. I don't understand why no such examples are publicly available.

whpearson's comment touches on why examples are rarely publicized.

I watched Constantin's Double-Crux, and noticed that, no matter how much I identified with one participant or another, they were not representing me. They explored reciprocally and got to address concerns as they came up, while the audience gained information about them unilaterally. They could have changed each other's minds without ever coming near points I considered relevant. Double-crux mostly accrues benefits to individuals in subtle shifts, rather than to the public in discrete actionable updates.

A good double-crux can get intensely personal. Double-crux has an empirical advantage over scientific debate because it focuses on integrating real, existing perspectives instead of attempting to simultaneously construct and deconstruct a solid position. On the flip side, you have to deal with real perspectives, not coherent platforms. Double-crux only integrates those two perspectives, cracked and flawed as they are. It's not debate 2.0 and won't solve the same problems that arguments do.

I also watched Constantin's Double-Crux, and feel that most of my understanding of how the process works comes from that observation rather than any posts including Duncan's. I also agree that her post of results, while excellent, does not do the job of explaining the process that was done by watching the process live. I wonder to what extent having an audience made the process unfold in a way that was easier to follow; on the surface both of them were ignoring us, and as hamnox says they were not trying to respond to our possible concerns, but I still got the instinctive sense that having people watching was making the process better or at least easier to parse.

The topic of that Crux was especially good for a demonstration, in that it involved a lot of disagreements over models, facts and probabilities. The underlying disagreements did not boil down to questions of philosophy.

I do think that finding out that the difference does boil down to philosophy or epistemology is a success mode rather than a failure mode - you've successfully identified important disagreements you can talk about now or another time, and ruled out other causes, so you don't waste further

[...]
Conor Moreton: Strong agreement that identifying important root disagreements is success rather than failure. If people on opposite sides of the abortion debate got themselves boiled all the way down to virtue ethics vs. utilitarianism or some other similar thing, this would be miles better than current demonization and misunderstanding.

For me, the world is divided into roughly two groups:

1. People who I do not trust enough to engage in this kind of honest intellectual debate with, because our interests and values are divergent and all human communication is political.

2. Close friends, who, when we disagree, I engage in something like "double crux" naturally and automatically, because it's the obvious general shape of how to figure out what we should do.

The latter set currently contains about two (2) people.

This is why I don't do explicit double crux.

I feel like, as a contrarian, it is my duty to offer to double-crux with people so they get some practice. :P When I've moved up to the East Bay, interested people should feel free to message me.

Zvi: I too volunteer to double-crux with people to let them and myself get practice, either in-person in NYC or online, and encourage others to also reply and add their names to such a list.

I find that I never double crux because it feels too much like a Big Serious Activity I have to Decide to engage in or something. The closest I've gotten is having a TAP where during disagreements I try to periodically ask myself what my cruxes are and then state them.

I think there are disincentives to do it on the internet: even if you expect good faith from your partner, you don't expect good faith from all the other viewers.

Because if you change your mind for all the world to see, people with bad faith can use it as evidence that you can be wrong, and so are likely to be wrong about other things you say as well. Examples of this in the real world are politicians accused of flip-flopping on issues.

You touch on this with

instead of continuing to argue in public where there's a lot more pressure to not lose face, or steer social norms, they continue the discussion privately, in whatever the most human-centric way is practical.

How will this norm spread?

We need public examples for people to have an idea of what good looks like.

Unless we can hide it away in a culture where it is okay to be wrong about things, or somehow anonymise it, so you can't tell who is being wrong, it doesn't seem like it would scale.

We need public examples, agreed. I think this under-sells the difficulty here.

In an argument or discourse worth having, a lot of the beliefs feeding in are going to be things that are:

A) Hard to state with precision, or that require the sum of a lot of different claims.

B) Involve beliefs or implications that risk getting a very negative reaction on the internet. There are a lot of important facts about the world you do not want to be seen endorsing in public, as much as we wish it were not so.

C) Involve claims that you do not have a social right to make.

D) Involve claims you can't provide well-articulated evidence for, or can't without running into some of A-C.

In my experience, advanced actually-changing-minds discussions are very hard to follow and very easy to misconstrue. They involve saying things that make sense in context to the particular person you're talking to, but that often on the surface make absurd, immoral or taboo claims.

I still think trying to do this is Worth It. I would start by trying to think harder about what topics we can do this on in public, that dodge these problems while still being non-trivial enough to be worthwhile.

Raemon: There'd likely be a multi-step plan, which depends on whether your goals are more "raise the sanity waterline" or "build an intellectual hub that makes rapid progress on important issues."

Step 1: Practice it in the rationality community. Generally get people on board with the notion that if there's an actually-important disagreement, people try to resolve it. This would require a few public examples of productive disagreement and double crux (I agree that lack-of-those is a major issue). Then, when people have a private dispute, they come back saying "Hey, this is what we talked about, this is what we agreed on, and these are any meta-issues we stumbled upon that we think others should know about re: productive disagreement."

Step 2: Do that in semi-public places (facebook, other communities we're part of, etc), in a way that lets nearby intellectual communities get a sense of it. (Maybe if we can come up with clear examples and better introduction articles, it'd be good to share those). The next time you get into a political argument with your uncle, rather than angrily yell at each other, try to meet privately and talk to each other and share it with your family. (Note: I have some uncles for whom I think this would work and some for whom it definitely wouldn't) (This will require effort and emotional labor that may be uncomfortable)

Step 3: After getting some practice doing productive disagreement and/or Double Crux in particular with random people, do it in a somewhat higher-stakes environment. Try it when a dispute comes up at your company. (This may only work if you have the sort of company that already at least nominally values truthseeking/transparency/etc so that it feels like a natural extension of the company culture rather than a totally weird thing you're shoving into it)

Step 4: A lot of things could go wrong in between steps 1-3, but afterwards basically make deliberate efforts to expand it into wider circles (I would not leap to "try to ge

My first thought on reading the post on double crux was that it's not clear to me how much value it adds beyond previous ideas about productive disagreement. If I'm already thinking about the inferential distance and trying to find a place where I agree with my conversational partner to start from, then building from there, I'm not sure what extra value the idea of cruxes has, and I'm not sure in what circumstances double crux would work where the naive "find a shared starting point and go from there" doesn't.

Obviously a

[...]
Raemon: One important thing is that Doublecrux is not about finding a "shared starting point" (or at least, it depends a lot on what you mean by shared-starting-point, and I expect a lot of people to get confused). You're looking for a shared concrete disagreement, and a related-but-different pattern is more like look for what things we agree on so we can remember we're on the same side, which doesn't necessarily build the skill of productively, thoroughly resolving disagreements. I do think most of the time, if things are going well, that you'll have constructed your belief systems such that you've already clearly identified cruxes, or when debating you proactively share "this is probably my crux" in a way that makes the Double Crux be a natural extension of the productive-disagreement environment. (i.e. when I'm arguing with CFAR-adjacent rationalists, we rarely say "let's have a double crux to resolve this", but we often construct the dialog in a way that has DC thoroughly embedded in its DNA, to the point where it's not necessary to do it explicitly.)
magfrump: I'm imagining a hierarchy of beliefs like:

- school uniforms are good (disagreement)
- because school uniforms reduce embarrassment (empirical disagreement, i.e. the crux)
- which is good because I care about the welfare of students (agreement)

If I find the point of agreement and try to work toward the point of disagreement, I expect to come across the crux. If my beliefs don't live in this hierarchy, I'm not sure how searching for a crux is supposed to help (aside from telling me to build the hierarchy, which you could tell me directly). If my beliefs already live in this hierarchy, I'm not sure how searching for a crux does more than exploring the hierarchy.

So I feel like "double crux" is sitting on top of another skill, like "build an inferential bridge," which is actually doing all the work. Especially if you are just using the "DNA" of the technique, it feels like everything being written about double crux is obscuring the fact that you're actually talking about building inferential bridges. Maybe my takeaway should be something like "the double crux is the way building an inferential bridge leads to resolving disagreements," and then things like the background of "genuinely care about your conversational partner's model of the world" filter through a chain like:

- double crux is useful
- because double crux is about a disagreement I care about
- its use comes from letting me connect the disagreement to explicit belief hierarchies
- and explicit belief hierarchies are good for establishing mutual understanding

So I'm starting to see double crux as a motivational tool, or a concept living within hierarchies of belief, rather than a standalone conceptual tool. But I'm not sure how this relates to the presentation of it I'm seeing here.
Raemon: Part of my point with the post is that I think Double Crux is just one step in a long list of steps (i.e. the giant list of background skills necessary for it to be useful). I think it's the next step in a chain where every step is necessary. My belief that Double Crux is getting overloaded to mean both "literally finding the double crux" and "the entire process of productive disagreement" may be a bit of a departure from its usual presentation. I think your current take on it, and mine, may be fairly similar, and that these are in fact different from how it's usually described.

Some Meta-Data:

This took me about 5 hours to write.

My primary goal was to get as many thoughts down as I could so I could see them all at once, so that I could then think more clearly about how they fit together and where to go from there.

A second goal was to do that mindfully, in a way that helped me better think about how to think. What was my brain actually doing as it wrote this post? What could I have done instead? I'll be writing another post soonish exploring that concept in more detail.

A third goal was to prompt a conversation to help flesh o[...]

whpearson: Datapoint: I'm okay with brain dumps.
gjm: Me too, especially when (1) their authors acknowledge them as such and (2) there isn't any sign of a general tendency for everyone to post brain dumps all the time when a modest expenditure of effort would let them get their thoughts better organized.
Raemon: Later on I'll be wanting to post brain dumps all the time, but I think the rate at which this will come to pass will roughly coincide with "people move their off-the-cuff posts to personal pages and then opt into the personal pages of people whose off-the-cuff posts they like"

This makes me want to try it :)

Would anyone else be interested in a (probably recurring if successful) "Productive disagreement practice thread"? Having a wider audience than one meetup's attendance should make it easier to find good disagreements, while being within LW would hopefully secure good faith.

I imagine a format where participants make top-level comments listing beliefs they think likely to generate productive disagreement, then others can pick a belief to debate one-on-one.

I see the technique of double-crux as being useful, although there will not always be a double crux. Sometimes people will have a whole host of reasons for being for something, and merely convincing them to change their view on any one of them won't be enough to shift their view, even if they are a perfectly rational agent. Similarly, I don't see any reason why two people's cruxes have to overlap. Yet in practice, this technique seems to work reasonably well. I haven't thought enough about this to understand it very well yet.

Raemon: Yeah - in the lengthy Double Crux article it's acknowledged that there can be multiple cruxes. But it's important to find whatever the most important cruxes are, instead of getting distracted by lots of things that sound-like-good-arguments but aren't actually the core issue.

My take on “why isn’t Double Crux getting more uptake”:

This ‘Double Crux’ thing seems like a complicated technique/process/something, with:

  • benefits that are nothing close to manifestly clear from the description

  • no clear, public examples of anyone using it (much less, successfully)

  • no endorsements from anyone whose opinion I respect (like Scott Alexander or Eliezer—or perhaps Eliezer did endorse it? but then I guess I wouldn’t ever know about it; such is the downside of using Facebook…)

There does not seem to be any reason why I should pay attention to it. That

[...]

[Note from the Sunshine Regiment] A lot has happened in this thread, I'm going to comment at second-to-top level so this gets as seen as possible while keeping its context.

In a nutshell

Yes, there is an obligation to be prosocial here.

There's a lot of room for debate on what prosocial means and what trade-offs are worth it. This Guide To Comments is a start but insufficient. We welcome input from people as we figure this out.

I'm really torn on the particular comment "Also, it comes from CFAR, which is an anti-endorsement". I want it to be as cheap as possible to criticize the in-group on Less Wrong, because so many other forces are making it expensive. So let's be very clear that

sharing a negative opinion is not in and of itself anti-social.

But as several people have pointed out, this opinion was shared in a way that generated a lot of unnecessary friction. A simple "I think that..." or "...for me" would have done a great deal to resolve this problem.

The mod team is in private contact with Said over this issue.

Yeah - I actually think by far the biggest reason Double Crux hasn't caught on is because no one has written a post optimized for getting it to catch on (Duncan's post is instead optimized for making sure that the people that get it actually get the whole thing, and I think it requires you to trust that it's worth the effort)

Up until last week, I actually thought Double Crux was a pretty straightforward concept (or at least, one that builds directly from ideas that are already common among educated people).

You could summarize Double Crux like this:

I. Ray Attempts to Explain Double Crux

Often times, smart people end up talking past each other, or trying to score social points, or otherwise arguing in a way that doesn't accomplish anything. This results in people wasting years arguing pointlessly, and moreover, at least half of those people spend years being wrong about stuff they could have talked through and figured out.

Double Crux is a technique to help short-circuit those pointless arguments, and instead figure out useful things together. Specifically, it is the first step of having a useful disagreement: figuring out what concrete thing you disagree abo[...]

Said Achmiz: I certainly appreciate that. Let me offer a couple of suggestions that would, at least, help you explain it to me (and perhaps to others? but that's as may be):

1. EXTENSIONS, NOT INTENSIONS. I'd really like an actual, live (by which, of course, I mean "online") example of people using this Double Crux business. Like, actually for-real (and not, say, as a demonstration example, arranged for the purpose of showing off the technique). Is it even doable online? In a forum / blog context? Perhaps at least in chat? Or is it only something that can be done in person? (If so, that makes it of limited use, at least, to the LW audience—useful though it may be to your local, meatspace, community of rationalists!)

2. APPLICABILITY. Someone recently said to me, of Double Crux (I am quoting from memory): "it seems like a decent attempt to solve a problem that almost never happens". He meant, I think, something like—most of the time, when people (even rationalists) disagree or argue or otherwise fail to see entirely eye to eye on a matter, it is not in a way that would be solved by identifying some key fact about which they differ. How would you characterize the class of situations in which Double Crux is applicable? How often do you think such situations come up (in comparison to, say, the category of "all disagreements that occur between people", or even "all disagreements that occur between rationalists")? Could you, again, point to several (at least three) real, live examples—publicly perusable by your readers here—of disagreements which Double Crux would cut through?

This second point seems to me to be of the highest importance, especially because you say: But in fact, Double Crux's applicability is very limited in scope; or else I really understand nothing about it. So—explain! :)

I'd be willing to do an asynchronous attempt to double crux about whether the problems that motivated the creation of double crux ever happen. We could then post the results as a public example. My understanding is that the person who said that to you misunderstands the problems that are trying to be solved, because they definitely happen all the time in my experience.

Said Achmiz: Well, I'm not willing to take (and have never taken) the position that such problems never happen. As for your offer, it is appreciated, but I was hoping first to look at an existing example (or three), before trying it myself; else I would surely do it wrong, and the attempt would prove nothing… But maybe, as a sort of prelude, we could start with you giving some examples of real-life situations that would be solved by the Double Crux?
Conor Moreton: Yeah. (Also thanks for being willing to spend time on this—when I imagine myself thinking a thing is Useless, then I imagine it feeling costly to give it extra chances to prove itself.)

The counting up vs counting down post that I wrote yesterday to near zero acclaim is one of them—often people are sort of talking past each other and both people seem to be fighting for good and coherent goals, and double crux motions (why do you believe what you believe, what would cause me to change my own mind) help uncover those faster than default motions. "Ohhhhh, wait, hang on—I think I would agree with what you're saying if I thought that we couldn't expect to do this perfectly, and should be happy with any results above zero, and happy proportional to how far above zero we get."

Another is the issue of burden of proof, which I think I've read cited in double crux explanations specifically somewhere, maybe on Facebook. The thing I'm remembering is something like: if both sides disagree about where the burden of proof lies, then both sides will end up "declaring victory" prematurely and saying that the other side has failed to justify itself. So if Bob thinks corporal punishment is how it's always been done, and it's on the bleeding hearts to prove that one should never spank kids, and Joe thinks nonviolence and sovereignty are the obvious priors, and it's on the backwards troglodytes to prove that spanking is net beneficial, the debate won't ever really move forwards productively. Double Crux solves this in theory because each person, if constantly scanning their own belief structure and asking what would cause them to change their own mind, will notice what burden of proof they're already expecting of their own beliefs, and can make that known to the other person.

Some other situations, off the top of my head:

* You and I are in a car in traffic, and I honk the horn at someone and wave a middle finger at them, and you're really uncomfortable and criticize my road

Hmm… I appreciate the effort that went into your reply, but I think I may’ve been unclear about what I asked: I was hoping to see actual examples—not hypothetical examples, nor categories (into which some unspecified examples are alleged to fall)!

That said, your hypothetical examples are relatively informative, so, thank you! They do much to increase the certainty of my previously-somewhat-tentative view that Double Crux is not a terribly useful technique in most circumstances (such as most of the ones you listed).

This, clearly, is the opposite reaction to the one you were (presumably) hoping for; perhaps I still have some fundamental misunderstanding. Real-life examples would, I think, really be quite helpful here.

Conor Moreton: Hmmm. Maybe there's something in here about the difference between "Double-Crux-like" and "formal Double Crux"? On reflection, after you said you're more certain Double Crux is low-utility, I was maybe imagining that this was because you saw the formal Double Crux framework as brittle or overly constraining, whereas you might agree that somebody adhering to the "spirit" of Double Crux (which could also be fairly labeled the spirit of inquiry or the spirit of cooperative disagreement or the spirit of impartial investigation and truth-seeking, because it's the thing that generated Double Crux and not something that's owned by the named technique) would be more likely to make progress than someone not adhering to said spirit.
clone of saturn: Hello, I'm the person who said Double Crux seems like an attempt to solve a problem that almost never happens. More specifically, the disagreements I see happening between reasonable people are almost always either too easy or too hard for Double Crux to be useful.

On questions like "what is the longitude of Tokyo" or "who starred in the original Star Wars," two people could agree that looking up the answer on Wikipedia would convince both of them, which would technically fulfill the formal rules of Double Crux, but that hardly seems like a special "rationality technique" or something CFAR can take credit for inventing.

On the other hand, on a question that hinges on value differences like your examples, I can see one of three things happening: either the disputants compromise their honesty by agreeing on a crux which appears relevant but isn't actually connected to the real motivations behind their disagreement ("if spanking is statistically correlated with a decrease in lifetime earnings, p<0.05, then it is bad, otherwise it is good"), or they maintain their honesty but commit themselves to solving longstanding open problems in metaethics and/or changing genetically mediated personality differences through verbal argument, or they end up using other negotiation techniques and falsely calling it Double Crux.

Double Crux does seem applicable to questions where the answer can't simply be looked up, where the disagreement is strictly confined to the empirical level and doesn't touch on value differences or epistemological questions in any way, yet also where the evidence is ambiguous enough to allow for reasonable disagreement. But those are rare in my experience.
Conor Moreton: I note there's something in here that I'm reading as a pseudofallacy—it's the same reason why Mythbusters is terrible, and it goes like "I can only think of these three outcomes, and therefore those are the most likely outcomes." This thread and the original Double Crux thread on LessWrong (plus the ~1000 or so CFAR alumni) are full of people saying that Double Crux does indeed work to solve discourse problems that crop up a lot.

That absolutely does not erase your personal experience of a) not seeing those problems and b) not seeing Double Crux solve them. Your personal experience is valid and real and definitely counts as data. But there's a particular sort of ... audacity? ... in taking one's own, personal experience, and using it to trump the experiences of others, and concluding with fairly strong confidence "this thing that a lot of smart people say is useful just isn't."

In your shoes, I'd say something like what I said in my Focusing post, which is "this thing that is useful for a lot of people isn't useful for me or the people around me." That seems more solidly justified and epistemically sound, and enriches an onlooker's understanding of the situation rather than creating crosswise narratives.

In particular, as I tried to do with Focusing, I'd make a genuine attempt to learn Double Crux (from the people who know what they're talking about and can point out your mistakes and scaffold your understanding) before writing it off. I weakly predict that you haven't done A + B + C where A is attend a CFAR workshop or one of their Double Crux instruction sessions at e.g. EA Global, B is talk directly to somebody who's skilled in Double Crux and ask them to help you overcome the standard failure modes, and C is go out and really actually try to follow the real actual steps for five very different sorts of disagreements with real actual humans. (By the way, it's completely fine to have not done A + B + C. People have higher priorities. But I personally think t
clone of saturn: You accuse me of using Stereotypes rather than Rigor, but I in turn accuse you of using Social Proof [https://en.wikipedia.org/wiki/Social_proof] rather than Rigor, which I consider far more dangerous, because it leads to self-reinforcing information cascades [https://en.wikipedia.org/wiki/Information_cascade]. By reflexively characterizing all skepticism as hostile, you further reinforce this dynamic by creating a with-us-or-against-us atmosphere.

Yes, I don't actually believe that ~1000 or so CFAR alumni self-reports represent enough evidence to overturn my initial opinion. There are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy, but I wonder if you would as forcefully reject a similar Stereotype-based dismissal of that. I'd be very happy to see some real rigor, but I'm not aware of any such from CFAR that I would actually trust to bring back a negative result if the same procedure were used on homeopathy enthusiasts. (And by the way, in 2014 Anna Salamon said [http://lesswrong.com/lw/lfg/cfar_in_2014_continuing_to_climb_out_of_the/] CFAR was "supposed to be doing better science later," meaning better than self-reports and personal impressions. How much later is later?)

I never gave any indication that my comment represented anything but my own personal impression, or that it somehow trumps the experiences of others. But I'm going to keep pointing out that I see the emperor wearing fewer clothes than he claims for as long as I continue to see it that way, and I consider this to be an explicitly prosocial act. I don't gain anything personally by this, and these contentious posts are actually fairly stressful for me to write, but I consider it worth it to try to push back against your open advocacy of credulousness and protect a rationalist community like Less Wrong from evaporative cooling.

I have not in fact attended a CFAR workshop and don't intend to, for reasons that might get me in trouble with the "S
Conor Moreton: I disagree with your claim that I "reflexively characterized all skepticism as hostile." I have reread my own comment and I do not think that's a fair or accurate synopsis. I believe you are overstating your claim that "there are also many thousands of smart people, including even ones with medical degrees, who endorse homeopathy" and disagree with the attempt to draw an equivalency there (I both do not think the situations are analogous and don't think you could actually find thousands of people in the intersection of "smart" and "endorses homeopathy").

My main point is that it looks to me like you are skeptical of everything but your own impressions, and that Less Wrong should be the sort of place where people actually take heuristics and biases literature seriously, and take the Sequences seriously, and are aware of how fallible their own thinking and impression-making mechanisms are, and how likely it is that they're being influenced by metacognitive blindspots, and take deliberate and visible steps to compensate for all of that by practicing calibration, using reference class forecasting, taking the outside view, making concrete predictions, seeking falsification rather than confirmation, etc. etc. etc. In short, I wasn't asking you to be less skeptical, I was asking you to add one more person to your list of people you're skeptical of—yourself.

I'm attempting to point out that your claim "Double Crux seems like an attempt to solve a problem that almost never happens" seems to have been outright falsified—even if your homeopathy analogy holds, homeopaths aren't necessarily hypochondriacs, and I would trust the reports of homeopaths who are saying "I am experiencing this-or-that physiological distress which requires some form of treatment" or "I am having this-or-that medical problem which is lowering my quality of life" without reference to their thoughts on what would fix it. It does not seem that you are updating away from "the problems that Double Crux

Is this ad hominem? Reasonable people could say that clone of saturn values ~1000 self-reports way too little. However, it is not reasonable to claim that he is not at all skeptical of himself, and not aware of his biases and blind spots, and is just a contrarian.

"If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?"

Personally, I would go to a post about Double Crux, and ask for examples of it actually working (as Said Achmiz did). Alternatively, I would list the specific concerns I have about Double Crux, and hope for constructive counterarguments (as clone of saturn did). Seeing that neither of these approaches generated any evidence, I would deduce that my impressions were right.

Elizabeth: What makes you think describing why you personally won't go to a workshop would get you in trouble?
clone of saturn: I suspect I'm already being more confrontational than you'd prefer, and I don't want to further wear out my welcome, or take the risk of causing unnecessary friction, by bringing up any other potentially negative points not directly related to CFAR's rationality content or Double Crux. Should I take it that I was being unnecessarily cautious?
lahwran: this seems like intentionally rude wording to me. (edited - this is all I ever meant.)
Said Achmiz: I admit that I'm puzzled by your comment. What is it that you think I might be hiding, or that I might wish to (plausibly) deny…? I thought I'd made myself reasonably clear, but if some part of my comment's meaning seems obscure to you, I'd be glad to clarify… (As a side note, and more generally, I'd like to note my very strong distaste for any community / site discourse norms that required commenters to hold to "prosocial wording" at all times. There is a difference between respectfulness and common decency, on the one hand, and on the other, this sort of stifling tone policing.)

I agree: it doesn't read at all like an attack hidden behind plausible deniability, it reads like an attack that isn't hidden at all.

But what's it for?

Unless you think there are a lot of LW-adjacent people who regard "X comes from CFAR" as evidence against X being useful (my guess is that there are not, though there are probably a fair few who think "X comes from CFAR" is no evidence to speak of that it actually is useful), it's not doing anything to resolve Raemon's curiosity about why the technique hasn't become popular. (I think the rest of what you wrote, however, does an admirable job of that, and I agree that it seems like a sufficient explanation.)

And, if in fact doublecruxing's CFAR origins are a problem for any reason, it's not like there's much anyone can actually do about them.

The immediate impression I get from your remark about CFAR is this: "Said Achmiz really doesn't like CFAR, and he wants everyone to know it, so much so that he puts anti-CFAR jabs into comments where they add nothing and probably serve only to antagonize people who might otherwise listen more willingly to what he's sayi

[...]

I am concerned about a fairly mild anti-CFAR comment getting this much criticism. I do think "part of the reason I haven't adopted double crux is that I don't trust CFAR" is a relevant comment. Even if it wasn't, I worry that motivated reasoning will cause people to be far more upset about criticism of respected rationalist organizations than they are of other institutions, and for this to lead to a dynamic where people are quiet about their feelings about CFAR for fear of being dogpiled. This seems harmful both as a community norm and to CFAR itself.

To be clear, I am not complaining about SA's comment because it's anti-CFAR. I'm pretty skeptical about CFAR myself; I wouldn't go as far as SA does, but the fact that CFAR recommends something doesn't seem to me very good evidence for it.

I'm complaining about SA's comment because it seems to me irrelevant, un-called-for, and likely to annoy or upset some readers (of whom I am not one) with no offsetting benefit to make it worth while.

But I very much hope that no one feels unable to criticize CFAR or MIRI or any other entity for fear of being dogpiled, and (as one alleged dog in the alleged pile) promise that if I see such dogpiling happening to someone for relevant criticism then I will be right there on the barricades defending them.

lahwran: I'm actually confused that you think my comment was bad - I was thinking the same thing you ended up saying.
gjm: I'm confused too. I don't think your comment was bad, though as I wrote I'm not sure I could quite endorse the exact complaint it originally made.
Said Achmiz: I do think that, in fact. (Caveat: I don't know about "a lot"; I couldn't speak to percentages of the user base or anything. Certainly not just me, though.) If you took my comment as merely a political jab, feel free to ignore it. I am certainly not interested in discussing CFAR-in-general in this thread (though would be happy to discuss it elsewhere). But that part of my comment was fully intended to be as substantive and on-point as the rest of it. I think that it might be productive for the moderation team to comment on this point in particular. It seems like this might be a genuine difference in expectations between segments of the user base, and between the moderators and some of said segments. Thank you.
Said Achmiz: Here's a more general comment re: the relevance of my aside—not about this issue in particular, but this general class of things. I have, quite a few times in the past, had the experience of bringing up something like this, and having the responses of other participants or potential-participants in the discussion be split along lines as follows:

Some people: That was unnecessary! And irrelevant. No one else feels this way, why bring your grudge into this unrelated matter?

Other people: Thank you for saying that. I, too, feel this way, and agree that this is highly relevant, but didn't want to say anything.

Those in the first category are usually oblivious to the existence and the prevalence of those in the second. So yes, I think that it is not only absolutely permissible, but indeed mandatory, to insert just such asides into just such discussions. If there's no uptake—well, then I simply drop the matter. Saying it once, or at least once in a long while, is sufficient; I have no problem changing the subject. But pervasive silence in such cases is how echo chambers form.
gjm: I can very well believe that remarks like this get exactly those sorts of comments, but I don't think the existence of the Other People is good evidence that the remarks are a good idea. All it need show is that there are people who are cross about X (in this case X=CFAR) and feel that their views are underrepresented, which is not sufficient to make anti-X jabs helpful contributions to any given discussion. If your opinion is that CFAR is a fraud or a scam or just inept and you want to reassure others who hold similar views, then make a post actually about that explaining why you think that. It'll be far more effective in showing those people that they have allies, it'll provide a venue for others who agree to explain why (and for those who disagree to explain why, which should also be important if we're trying to arrive at the truth), and it'll have some chance of persuading others (which at-most-marginally-relevant jabs will not).

If going to the effort of writing a whole post about a concern is a prerequisite to ever mentioning the concern at all, then I think that’s an entirely unreasonable barrier, and certain to create a chilling effect on discussions of that concern. I oppose such a policy unreservedly.

All it need show is that there are people who are cross about X (in this case X=CFAR) and feel that their views are underrepresented, which is not sufficient to make anti-X jabs helpful contributions to any given discussion.

I thought that “and the concern in question is relevant to the current discussion” was implied. But consider it now stated outright! Append that, mentally, to what I said in the grandparent. (Certainly, as I made clear in the parallel thread, I think that the CFAR issue is relevant to this discussion.)

Perhaps I wasn't clear: I don't think you are, or should be, forbidden to mention your opinions of / attitude to CFAR if you aren't willing to make a whole post explaining them. That would be crazy.

What I do think (which seems to me much less crazy) is this:

1. If, as you say three comments upthread from here, you feel that you have an obligation to say bad things about CFAR in public so that LW2 doesn't become a pro-CFAR echo chamber, then what you've done here is not a very effective way of doing it, and writing something more substantial would be much more effective.

2. Dropping boo-to-CFAR asides into discussions of something else is likely to do more harm than good (even conditional on CFAR being bad in whatever ways you consider it bad; in fact, probably more so if it is) because its most likely effect is to make fans of CFAR defensive, people who dislike CFAR gloaty, and people who frankly don't care much about CFAR annoyed at having what seem like political rivalries injected into otherwise-interesting discussions.

Of course, what's ended up happening is that there's been a ton of discussion and you may end up expending as much ef

[...]
lahwran: Very enthusiastic +1 to this. I also don't want to have a policy (that, empirically, I currently have, I guess?) of making people who say things like what you said end up having to defend their views for hours in replies.
Said Achmiz: Replying to your edit:

1. General request to all commenters: when editing a post to change wording or content, please retain the original wording / content, if existing replies to your comment reference it or depend on it in any way. Doing otherwise destroys the coherence of comment threads, and makes them less useful to later readers.

2. Re: the edited comment: it baffles me that you perceive that sentence as not only rude, but so rude that it could only be intentional—given that I chose my words carefully, to avoid explicit abuse or impoliteness! How could I have phrased my comment instead, in your opinion, that would've upgraded it at least to the level of "unintentionally rude" ("actually polite" is probably too much to hope for), without losing the meaning? I am dismayed by the discourse norms that such comments imply. :(

I am surprised at 2, and want to retract my comment and make this whole subthread not able to hurt me any more. I'm feeling a lot of social disapproval at my having posted the comment, and my update from it is to just not make comments like that, which I think is a good outcome for your preferences about discourse norms. I can't stand social disapproval like this, and I feel an urgent need to change in whatever way will make it go away the fastest - on most sites, that's "delete my comment, never post another one like it".

Though actually, I have 4 points now. But I still acutely feel your disapproval of my having expressed disapproval at you, and want to just take it back and let you talk how you want.

(meta: it's quite scary for me to try to be honest about this. I feel urge to reply with my actual feelings in the interest of truth seeking, but normally would just be silent.)

Said Achmiz: Upvoted. I regret that my comments had this effect on you (though do not regret making them). I hope that you will continue to comment no less earnestly than you've done so far, and encourage you to do so. My discourse norms are honesty, integrity, and truth.
lahwran: I like this. My approval drives would lead to a chilling effect on truth-seeking if everyone tried to white-box optimize them when having conversations, and I don't endorse that; I'd rather people hurt me a bit than fail at truth-seeking. I wish I had a better way to defend myself from the hurt of social disapproval, though; e.g., disowning a comment. I endorse those.
gjm: Strongly agree on #1 (with obvious exceptions if the original wording reveals trade secrets, libels people likely to bring legal action, etc; but in those cases you should still describe what used to be there even if you can't preserve it).

On #2, I can't share SA's bafflement. What isn't rude about saying that a particular organization is so useless that when, attempting to do its job, it recommends doing a thing, that's evidence against the value of doing it? I guess it's not rude if you know there's no one around who belongs to, or identifies strongly with, that institution. But that's not very likely in these parts. Otherwise: what baffles me is how anyone would expect that not to be rude. (To be clear: "Rude" is not the same thing as "bad" or "wrong". Sometimes being rude is a good thing. Sometimes it is a necessary evil. I am not claiming that no one should ever be rude.)
Said Achmiz: You seem to be using "rude" in such a way that the property of rudeness can attach to claims on the basis of their propositional content only. That, to me, is a very strange usage. It seems to me that either you must think that there's nothing necessarily wrong with being rude; or, you must think that certain claims simply cannot be made, certain propositions simply cannot be expressed—regardless of their truth value (if they are not trade secrets or so on). I disagree with the latter, and prefer a word usage that makes the former false (else the word "rude" becomes largely useless).
Raemon (8 points, 3y): It's too late to accomplish this by this point, but the response I had planned for your CFAR comment (I actually had it planned before lahwran responded), which I didn't have time to write before going to bed, was something like:

> I had an initial negative reaction and urge to downvote when I saw the CFAR comment, but I quickly noticed that most of that was coming from a place of tribal emotions (i.e. 'must defend my people!') which I didn't endorse. I briefly considered trying to respond in a more careful way that got to the heart of the issue, but it seemed like the "yay CFAR? / boo CFAR?" question was basically a distraction. There may be a time/place for it but this isn't it. I'd prefer if people didn't end up having a giant discussion about "is CFAR good/bad?" and instead stuck to discussion of Double Crux as a technique.
Raemon (8 points, 3y): Having said that, in light of your other comment about wanting to see a public Double Crux, "should CFAR be positive or negative evidence of a technique's validity" is precisely the sort of question that Double Crux is for, and I'd be interested in doing a public DC on it with you if you're up for it. (Normally I'd suggest Skype, but since part of the point is to produce something easy for others to consume, a chatlog could be fine.)

(That said, I'm fairly busy in the next 30 hours or so. I might be up for it Friday night or over the weekend, though.)

(Edit: it looks like some other people also offered something like this. I don't think it's especially important that I be involved, but think it'd probably be valuable in any case.)
Said Achmiz (3 points, 3y): I agree with you re: the grandparent, and I appreciate the offer re: the Double Crux. I am, sadly, unlikely to be able to take you up on it; my “commenting on or about an internet forum” time budget is already taken up by this flurry of activity here on LW 2.0. Instead, I’d just like to reiterate my request / suggestion that you folks find some way to point readers to pre-existing, publicly viewable examples of the technique being used. I think much hinges on that, at this point. Offering, when questioned, to demonstrate Double Crux by way of trying to debate whether Double Crux is any good is all very well, but—it simply doesn’t scale!
Raemon (7 points, 3y): Doesn't scale, but seems like it should happen at least once (tongue sort of but not entirely in cheek). Then you can just link to it the second time. The problem is that Double Crux is best conducted in ways that aren't very amenable to publicizing (i.e. a private walk where people feel freer), so there need to be some attempts to do a public one at a time when:

- it's high enough stakes that it matters, so you can see people using the technique for real
- it's low enough stakes that it's okay to publicly share it without you having to worry about "looking good" during the discussion
- it's convenient to record in some way
Said Achmiz (6 points, 3y): Well, as I say elsewhere in these comments—that does make it of rather limited utility to much of the LW readership!
Raemon (6 points, 3y): I agree, which is why I think noticing that there's an opportunity to do a public one (i.e. now) is something that should be treated as a valuable opportunity, one worth handling differently than arguing-on-the-internet-qua-arguing-on-the-internet. (I also think arguing "should 'created by CFAR' be positive or negative evidence" is at least slightly less meta-sturbatory than "let's double crux about double crux".)
Conor Moreton (7 points, 3y): Strong agree that it's both true that "the lack of an example to point to produces justified skepticism" and that "that skepticism and other 'too busys' keep feeding into no one taking the time to create said example."
gjm (3 points, 3y): Yes, I think things can be rude on the basis of their propositional content. (But not only their propositional content.) If I state that you are very unintelligent, and I say it in the presence of you or of your friends, then I am being rude. I can do it in extra-rude ways ("Said is a total fucking moron") or in less-rude ways ("I have reason to think that Said's IQ is probably below 90"), but however you slice it, it'll be rude. (For the avoidance of doubt, of course I do not in fact think any such thing.)

I do, indeed, think there is nothing necessarily wrong with being rude. As I said: sometimes being rude is a good thing, and sometimes it's a necessary evil. All else being equal, being rude is usually worse than not being rude, but many other things may outweigh the rudeness.

I don't see that this makes the word "rude" largely useless, and I'm not sure why it should. If you mean it makes it meaningless, then I strongly disagree (I take it to mean something like "predictably likely to make people upset", though for various reasons that isn't exactly right). If you mean it makes it unactionable, then again I disagree; it just means that acting on the knowledge that something is rude is more complicated than just Not Doing It. (If you want to upset someone, which there may be good reasons for though usually there aren't, then rudeness is beneficial. If you don't, but other things are higher-priority for you than not upsetting people, then you weigh up the benefits and harms, as always.) If you mean something other than those, and the above hasn't convinced you that my way of using "rude" isn't useless, then you might want to explain further.
Said Achmiz (1 point, 3y): Indeed, I meant “meaningless”, or perhaps “encompassing many disparate meanings under the umbrella of one word; attempting to refer to unrelated concepts as if they are the same or closely clustered; failing to cleave reality at the joints”. I find it quite unnatural to apply the word “rude” as you do, and, to be extra clear, will certainly never mean anything like this when I use the word.

My takeaway here is that if you tell me that something is “rude”, I have not really gained any information about what you think of the thing, nor will I take you to have made any kind of definite claim about the thing, nor even do I know whether you’re attempting to ascribe positive valence to it or negative. (This is, to my mind, an unfortunate consequence of using words in strange ways, though of course you are free to use words as you please.) I suppose I will have to remember, should you ever describe my comments as “rude” henceforth, to reply with something like: “Ok, now, what actually do you mean by this? ‘Rude’, yes, which means what…?”
gjm (3 points, 3y): I am confused. (And also, apparently, confusing, which I regret.) If I say something is rude then you learn that in my opinion it is likely to upset or offend a nontrivial fraction of people who read it. (Context will usually indicate roughly which people I think are likely to be upset or offended.) How is that no information? How have I made no definite claim?

(It is true that merely from the fact that I call something rude you cannot with certainty tell whether I am being positive about it or negative. The same is true if I call something large, ingenious, conservative, wooden, complex, etc., etc., etc. I don't see how this is a problem. For the avoidance of doubt, though, most of the time when I call something rude I am being negative about it, even if I think that the rudeness was a necessary evil.)

My use of the word "rude" doesn't seem to me particularly nonstandard or strange. It's more or less the same as definition 5a in the OED, which is "Unmannerly, uncivil, impolite; offensively or deliberately discourteous". (The OED has lots of definitions, because "rude" does in fact have lots of meanings. It can e.g. sometimes mean "unrefined" or "vigorous".)

Clearly you are dissatisfied with my usage of the word "rude". Perhaps you might tell me yours; it is still not clear to me either what it is or why it might be better than mine. From what you say above, it seems that you want it used in such a way that "X is rude" strictly implies "X is morally wrong", but if that's really so then I'm unable to think of any meaning that does this while coming anywhere near the specificity that "rude" usually has. (At least for those who have moral systems not entirely based around not giving offence, which I am pretty sure includes both of us.)
Duncan_Germain (0 points, 3y): Re: "it comes from CFAR, which is an anti-endorsement."

I find that a large majority of people who have a moderate-to-strong negative opinion of CFAR have either a) never subjected that opinion to falsification or b) not checked in since forming the opinion a long time ago. Generally speaking, when I engage with such people, they come away much less hesitant or skeptical or critical, and I believe this is because of justified updates rather than because of e.g. me having a persuasive reality distortion field. Most of the updates come in one of the following forms:

* Ah, okay, CFAR's made significant improvements along this axis that I was right to criticize it on.
* Ah, okay, CFAR is aware that this attribute that it has isn't ideal; I thought they were proceeding in ignorance, but in fact they're making a cost-benefit decision, and while I might disagree with their weighting I am less concerned that they're blind or stupid.
* Ah, okay, this criticism I had was based on assumptions that are simply false, or on information that is simply inaccurate, and while CFAR maybe deserves some blame for imperfect image management and creating-or-allowing-others-to-create those impressions, the problem I thought existed literally doesn't exist.

Said, if you would like to engage publicly with me regarding your own hesitations/criticisms/suspicions, I'm happy to make double crux motions unilaterally from my end as we do so, and then you'd have at least half of a public instance of double crux. (I won't insist that you use the frame yourself until you're at least convinced that it has potential.) I do note that my mainline prediction for "this doesn't work or doesn't happen" is something like "Said claims that it's not worth his time and attention to repair his impression of CFAR, given opportunity costs and prioritization and expected outcomes according to his models." That seems fair and plausibly correct, but if that's the case, I do request that in future criticisms you flag that your negative model of my org is resistant to falsification.

Said Achmiz (3y):

> Said, if you would like to engage publicly with me regarding your own hesitations/criticisms/suspicions

I am not opposed to this, per se, but—

> I do note that my mainline prediction for "this doesn't work or doesn't happen" is something like "Said claims that it's not worth his time and attention to repair his impression of CFAR, given opportunity costs and prioritization and expected outcomes according to his models." That seems fair and plausibly correct, but if that's the case, I do request that in future criticisms you flag that your negative model of my org is resistant to falsification.

I’m afraid I have to object to this. The following aren’t equivalent:

> It’s not worth my time and attention to engage with you, right now, in this context and fashion

vs.

> It’s not worth my time to re-examine [“repair”? why “repair”? this seems to assume the outcome!] my impression of CFAR

Nor are these equivalent:

> I am unwilling to engage with you about this [whether “here and now” or “anywhere, ever”]

vs.

> I subject my views on this topic to no falsification of any kind [or do you hold that discussing the matter with you is the only possible way to gain accurate info

…
Duncan_Germain (4 points, 3y): I appreciate pretty much everything about your reply up above. Agreement that there was a false equivalency re: right now vs. ever. Agreement that my phrasing presupposed an outcome (though that makes sense when you take the context of "the guy talking is the curriculum director at CFAR"). I predict that outcome, optimistically, but in fact the actual target should be and is "investigate," not "repair."

Unfortunately for the goal of record-keeping and evidence-creation, most of those interactions have taken place in person. I could generate stories about what they're like, but a better option seems to be "start taking notes now when they happen, and ask permission to make said notes public with reasonable anonymity."

Thanks for responding 100% positively/exactly as I would hope a LWer would respond. I'd love it if you let me know if I myself am not living up to that standard, as you gently did above.
Said Achmiz (7 points, 3y): Thank you for the kind words.

Re: the previous interactions: that no notes from them are available is not the problem, nor would having notes help in any meaningful way. (Plus—and I really hate to be so blunt about this, but—notes can say whatever the note-taker, or even the note-poster-to-a-public-website, wants them to say! I’m not seriously suggesting falsification of anecdotal evidence, and as I say below, this is not really my primary concern here, but from the appearance-of-propriety perspective, having notes is not a great situation.)

No, the reason I asked about whether the cited interactions took place in person is certainly not disbelief or lack of evidence; and it is only in lesser part the desire to examine the interactions and see what I can conclude from them. The real reason is that an interaction in person is tremendously different from an interaction via a web forum (like this one)!! These differences are so profound and far-reaching—and so especially relevant for people with “our sort” of minds—that I hesitate to even begin enumerating them (though I’ll attempt to, upon request; but they should be obvious, I think!).

The point, in any case, is that viewed in light of these differences, your track record of convincing nay-sayers, while undoubtedly real, should be much less persuasive, even to yourself, than you imply it to be. It would be very different if you could point us to an online exchange, where you and a serious and thoughtful interlocutor took the time to compose comments and replies back and forth—the paradigmatic example of such, around here, being the Yudkowsky–Hanson “AI Foom” debate. (Ah, but how did that one turn out, eh?)
SilentCal (2 points, 3y): I request this enumeration, if your offer extends to interlopers and not just Duncan. (The differences I can think of are instant vs. asynchronous communication, nonverbal+verbal vs. verbal only, and speaking only to one another vs. having an audience. But I don't see why these are *inevitably* so profound and far-reaching.)