Summary: I argue that the very useful concepts of inside view and outside view become even more useful when complemented by a third, equally fundamental view: the opposing view.

Epistemic status: I am very sure this is a good way to model my own thinking. I suspect it will be useful for more of us, but I don’t know how many, because it concerns a fairly meta-level organization of cognition, where we should expect a lot of interindividual variance.

What is an Opposing View?

It is a type of cognition that does not neatly fit into the categories of inside and outside view.

  • It’s different from the outside view
    • …in content: it goes into some detail, rather than being satisfied with a reference class
    • …and in the underlying emotion: it’s motivated, focused and directional.
  • It’s different from the inside view
    • …in content: the details aren’t systematically connected to each other, so little effort or attention goes into tracking changes in them
    • …and in the underlying emotion: it’s negative, suspicious and not by default entangled with one’s self.
  • While the inside view is assertive and the outside view is detached, the opposing view is critical.

Why might you oppose this concept?

  • The duality of inside view and outside view is one of the best tools we have. On priors alone, we should expect any alternative to it to be inferior.
  • Part of their usefulness is that they’re simple. Adding complexity is always a cost; at the very least it muddies the waters.
  • In this case, maybe adding another view could create a kind of three-body problem, where the interactions become too complicated to predict deterministically?
  • Obviously we can be against things. It does not automatically follow that opposition should be as important as inside and outside view.
  • Maybe it’s sufficient to understand one’s own opposition as a flavor of inside view, a stance “in favor of the opposite of that” where the opposite just happens to be less well-defined than what is opposed? Or as an outside view that just happens to add details on the negative aspects of a thing?

So why do I use this concept?

Basically, I use it because it works for me. I seem to map my own mind better than I did before. This resolves some puzzles and makes my cognition appear better-ordered to me.

I had aggressive thoughts, and trouble classifying them as either inside or outside views. They didn’t fit with my other inside views, because:

  • They were much less detailed and systematic. I didn’t think I really understood what, or who, they were about.
  • They didn’t have the comfortable “homely” feel of obvious inside views.
  • They were about things that felt too obviously bad to be nothing but subjective judgments from my own position. I expected that to get agreement from other people, I would only need to point out what I saw, not explain my opinion, as I usually expect to have to do for other inside views.

And they also didn’t fit with my other outside views:

  • They were too detailed. Reducing them to reference classes seemed to lose something important.
  • They seemed too motivated, too unlike the cool, detached analysis of obvious outside views.

This was the conscious part of the confusion, the “known unknowns”, and the reason I created a third category. But once I had it, I found that I had previously misclassified some views as inside or outside views, in ways that seemed in hindsight to be clear mistakes.

The cases where I had mistaken opposing views for outside views were more numerous, and more obvious. I thought I was diagnosing people with mental illnesses when really I was just fighting them. I thought I understood a large number of categories of mistakes, and of people who make them, when really each of those categories contained too few examples to support a genuine reference class.

But more subtly, and less often (probably because, given the choice, I prefer to think I have “good, less wrong” outside views rather than “bad, more wrong” inside views), I had also mistaken opposing views for inside views. I thought I had a personal, detailed understanding of anti-vaxxers, and only noticed my confusion at my inability to really communicate with them. I thought I understood in detail someone I just happened to have a lot of conflict with, and had long managed not to notice my frequent switches between hypotheses about them. Some cognitions I was conflicted about, which I had suspected were weird foreign inside views of some subpersonalities or something, also turned out to be simply my own opposing views.

In hindsight, the most distinctive feature of my opposing views was how readily and unconsciously my stance towards their subjects shifted. I’d go from denial (“this can’t really be happening”) to disgust (“there’s dishonesty or insanity involved here”) to activism (“I’m fighting this”) without even noticing that these were contradictory stances I was switching between. The reference classes of outside views are not so mutually exclusive, and not so primarily defined by what to do about them. And changes in inside views aren’t so quick; they require effort, and you notice them.

Just like it is good to distinguish inside and outside views and treat them differently, it is also good to distinguish opposing views from outside views and from inside views. I for one found this enormously clarifying and liberating.

Try it yourself!

The ideas I described above that turned out to be opposing views may already have reminded you of some of your own.

I suggest you make a list of (at least three) subjects you tend to think about in such ways, and ask yourself the following questions about that list.

  • Do they all happen to be things or people you’re opposed to, ones you want to be different or to be free from?
  • If you re-sort them in this new category in your mind, do you get clarity? Does it reduce confusion?
  • Do your oppositions to each of these subjects kind of emotionally validate each other? Does putting them together give you a felt sense of permission to be against (or aggressive towards) things or people in general?
  • Basically, does this make sense to you?

I think you should, and maybe will, decide whether to use this concept primarily based on whether it works for you. Still, just in case you want it, here’s…

More reasoning in favor of this

Psychology has pretty solid evidence that humans have two motivational systems, one positive and one negative: one for good things you seek out, one for bad things you defend against with fight or flight. And the default state of primate cognition is motivated cognition. So if there are two types of motivation, it is reasonable to expect them to produce two types of motivated cognition.

What is peculiar about human cognition evolved primarily to solve social disputes, because the biggest difference between our ancestral environment and that of chimpanzees is that ours was, above all, the highly social group. In social disputes, there are very commonly (always?) three kinds of views: the views one is defending, the views one is attacking, and the views we see from afar because we have no personal stake in them. (Maybe the third only arises from the triangulation of the first two?) Evolutionary psychology tells us to expect that our ancestral environment has left traces in our brains, analogous to other traces such as Dunbar’s number, and the traces of the evolutionary pressure of handling social disputes would plausibly include something like this.

I contend that we LW rationalists, in our studies of human cognition, have relatively neglected its aggressive parts. We understand that aggressive drives are there, but they’re kind of yucky and unpleasant, so they get relatively little attention. We can and should go into more detail, and try to understand, harness, and optimize aggression, because it is part of the brain on top of which we’re trying to build a rationality layer.

I previously argued in favor of harnessing aggression in a more specific and limited way. “I learn better when I frame learning as Vengeance…” got a number of upvotes I’m proud of, indicating I’m not the only one who thinks this is a part of human cognition worth exploring in more detail.

1 comment

I think you're on to something important here. I've spent a bunch of time researching cognitive biases (it was my job focus for four years, and part of my work for more), and I settled on the idea that polarization among viewpoints was the biggest practical impact of cognitive biases. Confirmation bias captured some of it, but another part would fall under motivated reasoning. But that's only a partial description. Motivation could be for many things. In many of the most important cases, I think the motivation could be described as aggression, as you classify it. The motivation is to defeat a person, viewpoint, or idea you dislike and feel enmity toward.

I think it's important to recognize this in our own cognition. It's a bias. It creates confirmation bias and motivated reasoning to believe whatever will oppose the person/viewpoint/idea we don't like.

But it can be differently motivated. It's also about wanting to get at the truth, and having a liking for doing that by supposing the opposite of whatever you're hearing, then looking for arguments and evidence for it. I think we can harness that instinct/habit/bias for better rationality. When directed at our own thinking, this can be a really effective way to combat confirmation bias. And when directed at our collaborators, but edited to preserve good social relations, it can be really good for a community's epistemics.

If we don't do that careful editing, it brings out more of the same bias in our conversational partners, and we get arguments and conflict instead of discussion.