There's a particular conversational move that I've noticed people making over the past couple years. I've also noticed myself making it. The move goes:

"You can't possibly succeed without X", where X is whatever principle the person is arguing for. 

(Where "succeed" means "have a functioning rationality community / have a functioning organization / solve friendly AI / etc")

This is not always false. But, I am pretty suspicious of this move. 

(I've seen it from people with a variety of worldviews. This is not a dig at any one particular faction in local politics. And again, I do this myself.)

When I do the move, my current introspective TAP (trigger-action plan) goes something like: "Hmm. Okay, is this actually true? Is it impossible to succeed without my pet-issue-of-the-day? Upon reflection, obviously not. I legit think it's harder. There's a reason I started caring about my pet issue in the first place. But 'impossible' is a word that was clearly generated by my political rationalization mind. How much harder is it, exactly? Why do I believe that?"

In general, there are incentives (and cognitive biases) that push you to exaggerate the importance of your plans. I think this is partly for political reasons, and partly for motivational reasons – it's hard to get excited enough about your own plans if you don't believe they'll have outsized effects. (A smaller version of this, common on my web development team, is someone saying "if we just implemented Feature X we'd get a 20% improvement on Metric Y", when the actual answer is that we got, like, a 2% improvement, and it was worth it. But, like, the 20% figure was clearly ridiculous.)

"It's impossible" is an easier yellow-flag to notice than "my numbers are bigger than what other people think are reasonable". But in both cases, I think it's a useful thing to train yourself to notice, and I think "try to build an explicit quantitative model" is a good immune response. Sometimes the thing is actually impossible, and your model checks out. But I'm willing to bet if you're bringing this up in a social context where you think an abstract principle is at stake, it's probably wrong. 

14 comments

You know, I think my favorite thing about internet rationalists is when they notice a bias and go "I wonder if I can notice this in myself to avoid being wrong" rather than "How can I use this to win arguments about current hot topics."

I automatically admire anybody whose first thought when encountering a new bias is to search for it in themselves.

I should probably be clear that my first thought was to complain about it and my second thought was to improve my own habits.

...

...possibly fourth thought

I have no trouble believing that this is a common thing to hear if you're in a position of power, but what about situations where this is correct? After all, if it were never correct, people would never find it persuasive.

Are there any heuristics you use to figure out when this is likely to be true?

(I updated the post to be a bit clearer about this)

Nod. The suggested TAP of "build an actual model if you don't have one", or "double-check your model" (if you do), isn't meant to output "the statement is never true", just that you should check that you have a clear reason to believe it's true.

It hasn't been true the times I've noticed myself saying it. 

I think it's more likely to be true in physical-system setups, where, like, your engine literally won't run if it doesn't have the right kind of fuel or whatever. 

I think some instances have been a person posing a mathematical formalism and saying "this must be true", and it was true in the mathematical example but not, AFAICT, in the real-world analogue. (In these cases there's some kind of Law/Toolbox conflation going on.)

Ah.

My first reaction was to think of a few scenarios analogous to the original framing, one example being "if it takes you years to coordinate the local removal of [obvious abuser], why do you think you will be able to coordinate safe AI development on a global scale?"

This isn't a pet issue of mine, but I suspect it is important to be able to say things like this. I guess my overall view is that crystallising this pattern might be putting duct tape over a more structural problem.

Recent motivating examples have been of the form "we can't possibly form good models and coordinate without X", to which I thought "WHAT!? X harms Y, and we can't possibly form good models and coordinate without Y". And it took me a while to realize I was engaging in the same behavior that was annoying me.

(I think the answer is that often you need a deep understanding of both the Rock and the Hard Place before you can, hopefully, eventually, just eliminate the problem entirely)

I don't disagree with that, but I do think one reason we find it difficult to form good models and coordinate is that there's an insane norm of only ever talking about issues in abstract terms like X and Y. Maybe the issue in question here is super sensitive, since I have no idea what you are talking about, but "raising awareness of general patterns" often seems to be used as a (mostly subconscious) justification for avoiding the object level because it might make someone important look bad.

Usually, when I'm avoiding addressing the object level, it's because:

a) I'm engaging with someone I consider to be in roughly the same stratum of social status and position-of-power as me, and

b) I just don't want to get into that particular object-level debate right now (either because it's exhausting or distracting).

I think a notable exception is Healthy Competition, where I am in fact avoiding directly critiquing the powers that be. I have a cluster of reasons I could point to there, with varying degrees of virtuousness, but the unvirtuous ones are definitely there.

I think it might be worth having an example-generating TAP here instead. Rather than weighing "weigh in on the sensitive / exhausting debate" against "say things like 'X affects Y in a double-causal-backflip-Goodhart manner'", one could just generate another concrete example?

I agree examples are good, but generating good ones is often fairly hard (and is the difference between a post I could rattle off in 30 minutes and one that'll take several hours).

I guess it just doesn't seem like examples should take that long? I also think that really good examples might account for a good part of the value in a few cases, but that's just a hunch.

For what it's worth, I think that post made the right tradeoff. There will probably be some people who glossed over it due to the lack of examples, but I think that was an acceptable price to pay.

What I'm referring to is when the community does this by default, not when the author has explicitly weighed up the pros and cons. Not wanting to get into an issue is okay in isolation, but when everyone does this it impedes the flow of information in ways that make it even more difficult to avoid talking past each other.