[Epistemic status: I now endorse this again. Michael pointed out a possibility for downside risk with losing mathematical ability, which initially made me update away from the view here. However, some experience noticing what it is like to make certain kinds of mathematical progress made me return to the view presented here. Maybe don't take this post as inspiration to engage in extreme rejection of objectivity.]
There are a number of conversational norms based on the idea of an imaginary impartial observer who needs to be convinced. It's the adversarial courtroom model of conversation. Better norms, such as double crux, can be established by recognizing that a conversation is taking place between two people.
Burden-of-proof is one of these problematic ideas. The idea that there is some kind of standard which would put the burden on one person or another would only make sense if there were a judge to convince. If anything, it would be better to say the burden of proof is on both people in any argument, in the sense that they are responsible for conveying their own views to the other person. If burden-of-proof is about establishing that they "should" give in to your position, it accomplishes nothing; you need to convince them of that, not yourself. If burden-of-proof is about establishing that you don't have to believe them until they say more... well, that was true anyway, but perhaps speaks to a lack of curiosity on your part.
More generally, this external-judge intuition promotes the bad model that there are objective standards of logic which must be adhered to in a debate. There are epistemic standards which it is good to adhere to, including logic and notions of probabilistic evidence. But, if the other person has different standards, then you have to either work with them or discuss the differences. There's a failure mode of the overly rationalistic where you just get angry that their arguments are illogical and they're not accepting your perfectly-formatted arguments, so you try to get them to bow down to your standards by force of will. (The same failure mode applies to treating definitions as objective standards which must be adhered to.) What good does it do to continue arguing with them via standards you already know differ from theirs? Try to understand and engage with their real reasons rather than replacing them with imaginary things.
Actually, it's even worse than this, because you don't know your own standards of evidence completely. So, the imaginary impartial judge is also interfering with your ability to get in touch with your real reasons, what you really think, and what might sway you one way or the other. If your mental motion is to reach for justifications which the impartial judge would accept, you are rationalizing rather than finding your true rejection. You have to realize that you're using standards of evidence that you yourself don't fully understand, and live in that world -- otherwise you rob yourself of the ability to improve your tools.
I can think of two ways this happens.
- Maybe your explicit standards are good, but not perfect. You notice beliefs that are not up to your standards, and you drop them reflexively. This might be a good idea most of the time, but there are two things wrong with the policy. First, you might have dropped a good belief. You could have done better by checking which you trusted more in this instance: the beliefs, or your standards of belief. Second, you've missed an opportunity to improve your explicit standards. You could have explored your reasons for believing what you did, and compared them to your explicit standards for belief.
- Maybe you don't notice the difference between your explicit standards and the way you actually arrive at your beliefs. You assume implicitly that if you believe something strongly, it's because there are strong reasons of the sort you endorse. This is especially likely if the beliefs pattern-match to the sort of thing your standards endorse; for example, being very sciency. As a result, you miss an opportunity to notice that you're rationalizing something. You would have done better to first look for the reasons you really believed the thing, and then check whether they meet your explicit standards and whether the belief still seems worth endorsing.
So far, I've argued that the imaginary judge creates problems in two domains: navigating disagreements with other people, and navigating your own epistemic standards. I'll note a third domain where the judge seems problematic: judging your own actions and decisions. Many people use an imaginary judge to guide their actions. This leads to pitfalls such as moral self-licensing, in which doing good things gives you a license to do more bad things (setting up a budget makes you feel good enough about your finances that you can go on a spending spree, eating a salad for lunch makes you more likely to treat yourself with ice cream after work, etc). Getting rid of the internal judge is an instance of Nate's Replacing Guilt, and carries similar risks: if you're currently using the internal judge for a bunch of important things, you have to either make sure you replace it with other working strategies, or be OK with kicking those things to the roadside (at least temporarily).
Similarly with the other two categories I mentioned. Noticing the dysfunctions of the imaginary-judge perspective should not make you immediately remove it; invoke Chesterton's Fence. However, I would encourage you to experiment with removing the imaginary third person from your conversations, and seeing what you do when you remind yourself that there's no one looking over your shoulder in your private mental life. I think this relates to a larger ontological shift which Val was also pointing toward in In Praise of Fake Frameworks. There is no third-person perspective. There is no view from nowhere. This isn't a rejection of reductionism, but a reminder that we haven't finished yet. This isn't a rejection of the principles of rationality, but a reminder that we are created already in motion, and there is no argument so persuasive it would move a rock.
And, more basically, it is a reminder that the map is not the territory, because humans confuse the two by default. The picture in your head isn't what's there to be seen. Putting pieces of your judgement inside an imaginary impartial judge doesn't automatically make it true. Perhaps it does really make it more trustworthy -- you "promote" your better heuristics by wrapping them up inside the judge, giving them authority over the rest. But, this system has its problems. It can create perverse incentives on the other parts of your mind, to please the judge in ways that let them get away with what they want. It can make you blind to other ways of being. It can make you think you've avoided map-territory confusion once and for all -- "See? It's written right there on my soul: DO NOT CONFUSE MAP AND TERRITORY. It is simply something I don't do." -- while really passing the responsibility to a special part of your map which is now almost always confused for the territory.
So, laugh at the judge a little. Look out for your real reasons for thinking and doing things. Notice whether your arguments seem tailored to convince your judge rather than the person in front of you. See where it leads you.
In this case, I think it's worth being very VERY curious as to how that judge got in there in the first place. It's also probably worth eventually doing psychological research in order to classify types of judge, in case they aren't all the same. Do mathematicians above a certain caliber all possess internal judges with a common standard for proof? How does this phenomenon relate to actual judges?
In general, I would expect a person following this advice to, in the average case, diverge from the process of creating a map in correspondence with the territory, toward replacing the map with a feedback system conditioning model-free harmony. I would expect their mind to gradually transition from asking 'is this true' to asking 'is this what power wants me to say', and eventually to come to see truth as a dreadful constraint on safety rather than as a support with which to achieve safety. I would expect them to grow in their ability to lead and to sell, but to lose the ability to manage, or otherwise constrain the actions of a group in order to direct them toward some goal other than politics.
That doesn't at all mean that the ideal mode of cognition involves such a judge. Just that collaborative cognition requires a common set of protocols and this seems to be the default such set of protocols for constructive collaboration, while other protocols seem favored by predatory collaboration and seem likely to emerge if not suppressed.
You make an interesting point.
For many people (but not for me), it seems the judge explicitly speaks in the voice of one of their parents.
Certainly I think the judge is serving a group-coordination role. It manages outward-facing justifiability. Hence, I associate the judge with crony beliefs. I interpret you as saying that if the judge didn't handle those, they could start getting everywhere -- and also that the judge may be associated with other benefits, as in the case of mathematical reasoning.
I have actually done away with the judge at times, one time lasting a whole week. I would use the same language as before for social coordination purposes, but it wouldn't carry the same meaning -- for example, "I feel bad about X" would mean "I wish X could have happened without giving anything else up", but carry no feeling of conflict in my mind; normally, it would mean "I am feeling conflicted about my policy around X".
So, from that perspective I expect that getting rid of the judge tends to make one more epistemically coherent and less prone to bend thoughts toward social consensus. The social-coordination role of the judge then has to be replaced with other strategies.
On the other hand, your hypothesis doesn't seem absurd to me.
A few thoughts.
It seems that the judge often has a big part to play in protecting the epistemology.
I'm guessing the strength of your judge and the role it plays depend on your openness, in the Big Five sense.
For me there is a two-step process. Even if the arguments for something aren't strong, I can "entertain" an idea if that idea is related to something important. That idea might hang around for a long time, accruing evidence for and against it in my experience and as I think about it more. Only when it passes the judge do I confidently go around stating it. You can see the start of this type of entertaining in this post.