Looking for help: what's the opposite of counterfactual reasoning -- in other words: when effective altruists encourage counterfactual reasoning, what do they discourage?

I ask because I'm writing a chapter for a book about good epistemic practices and mindsets. I am trying to structure my writing as a list of opposites (scout mindset vs soldier mindset, numerical vs verbal reasoning, etc). 

Would it be correct to say that in the case of counterfactual reasoning there is no real opposite? Rather, the appropriate contrast is: "counterfactual reasoning done well vs. counterfactual reasoning done badly"?

[This was first posted as shortform on the EA forum but I got no replies; hence trying it here, too. Thanks for any help you might provide!]

5 Answers

If you start with a formal definition of counterfactual reasoning ('thinking about what the world would be like now if things had been different in the past'), then the polar opposite is 'thinking about the world defined by the current situation'.

If you then want to split this (challenging the basic dichotomy), you could try 'thinking about the world as if none of the paths are or were possible' [i.e. a study of impossible worlds], or 'thinking about the counterfactual reasoning that you do not appreciate' [i.e. recognising the observer-dependent part of counterfactual reasoning and splitting on different perspectives]. This 'perspective' perspective is actually the super-class of your 'done well'/'done badly' distinction: you cannot judge good/bad absolutely; rather, your assessment is based on your own perspective.

The other fracture dimension comes from asking what the purpose of the counterfactual thinking is. Usually it is to conceive of possible futures, and here there are plenty of opposites: futures I anticipate vs. futures I discount, futures I work towards vs. futures I take action to block, etc.

Of course, dichotomising uncertainty is perhaps the worst thing to do: you block off your options and constrain your room for manoeuvre.

There isn't an opposite, because counterfactual reasoning is a tool, not a mindset. Proof by induction isn't the opposite of proof by contradiction, and a hammer isn't the opposite of a screwdriver; they are just different tools for different tasks. Counterfactual reasoning is a way to intuit probabilities, or build empathy. Logical reasoning is a way to take a few known facts and derive additional conclusions. Intuitive reasoning is a way to make decisions quickly. All of them have a use case, and all of them have times when they are not appropriate.

Counterfactual reasoning here is thinking on the margin, or about opportunity cost (modulo some money/utility mixup in these terms). For a hypothetical donation or intervention: how much better would the outcome be if it were made, compared to if it were not made, or compared to some alternative donation or intervention?

This is in contrast to a fallacious mode of reasoning where you choose a cause based on how much good it's currently doing, or how well it's doing it, on any metric. A cause that's doing well sometimes no longer has the capacity to make much use of additional funding or talent, while a neglected cause could turn the same funding or talent into more effect.
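The contrast above can be sketched in a few lines of code. This is only a toy illustration with made-up numbers (none of the figures correspond to any real charity or intervention): counterfactual impact is the difference a donation makes, not the absolute good the recipient is already doing.

```python
# Toy sketch of counterfactual (marginal) impact.
# All figures are hypothetical illustrations, not real data.

def counterfactual_value(outcome_if_funded, outcome_if_not_funded):
    """A donation's impact is the difference it makes,
    not the absolute good the recipient already does."""
    return outcome_if_funded - outcome_if_not_funded

# A well-funded cause already doing lots of good, but near saturation:
established = counterfactual_value(outcome_if_funded=1005,
                                   outcome_if_not_funded=1000)

# A neglected cause doing little good now, but with room to grow:
neglected = counterfactual_value(outcome_if_funded=60,
                                 outcome_if_not_funded=10)

print(established)  # 5
print(neglected)    # 50
```

Judged on current output alone, the established cause (1000 units of good) looks far better than the neglected one (10 units); judged counterfactually, the same donation does ten times more good at the neglected cause.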

When I encourage counterfactual reasoning in the real world, it's usually to make sure that people are planning in a way that's responsive to reality. The opposite is planning in a way that is not responsive to reality, which usually still means changing your plans when reality changes, just in a disorganized and untimely way.

So the alternative to counterfactual reasoning (planning unresponsive to reality) is not really something anyone would aspire to; that makes sense.

I see counterfactual reasoning as the discipline of considering scenarios rather than isolated choices. 

We often need to evaluate options when making a decision, and it is an error to consider only the choice - we should also consider the implications and effects of the choice. I am not choosing between two otherwise identical worlds in which I either do X or Y; I am choosing between the world which results from doing X or the world which results from doing Y. It is not just the choice that differs, it is the entire resulting scenario. 

Similarly, when evaluating past decisions, it is an error to consider only the decision in question. We must compare the scenarios that would have resulted from the decision (to some appropriate level of computational complexity - I’m not asking you to simulate entire universes, but you should think about the knock-on effects, not just the immediate decision).

Counterfactual reasoning means considering a possible universe that didn’t happen. The opposite is failing to consider a possible universe, and instead thinking about an event that didn’t happen, superimposed on the universe that did happen. It’s the difference between evaluating coherent alternative scenarios vs (unwittingly) evaluating incoherent scenarios. 

For a chapter title, maybe “Counterfactual Reasoning vs Isolated Choices”?

This is very helpful, thank you!
