This lesson took me a long time to learn. Consider the following questions:
Should I be confident? Do you regret that decision? Are you happy about what happened?
In each case, there are two things that are being asked:
- Do I have a good chance of success? AND Should I feel confident?
- Would you change the decision if you had the opportunity? AND Are you feeling negative emotions about having made that decision?
- Was the outcome in line with your preferences? AND Is the outcome creating positive emotional affect for you?
Because these are two separate questions, they can have two separate answers. It's not common, but you can believe that you are unlikely to succeed whilst also having the internal emotional experience associated with confidence. You could wish that you had made a different decision without necessarily feeling negative emotions about it. You can have an outcome that isn't in line with your preferences, but experience positive emotional affect by looking on the bright side.
Properties like true or false strictly apply only to beliefs. Whether or not you are likely to succeed is either true or false, but this doesn't apply to the emotive affect of confidence itself. Someone can affectively feel very "confident" without being factually wrong about anything.
Nonetheless, explicit knowledge isn't the only knowledge contained in our brains. We also possess implicit knowledge, and this can be calibrated or uncalibrated. Since implicit knowledge consists of heuristics, it can't literally be true or false; a heuristic that held strictly would no longer be a heuristic. But we can create different measures of how accurate these heuristics are.
Emotions are linked to our heuristics, so we don't always want to shift ourselves towards feeling positive affect, as this might bias us. However, emotions are separate from implicit knowledge. For example, someone can have implicit knowledge that something is a bad idea without experiencing negative affect like a sense of dread. Alternatively, someone can have a sense of dread, yet also have a strong intuition that it is the right decision. Since they are separate, there'll be times when we can experience positive affect instead of the "appropriate" emotion without a significantly detrimental impact on our calibration.
Unfortunately, it is very hard to predict to what extent emotions affect our implicit understanding. This means it is very hard to figure out the appropriate trade-off in terms of feeling positive affect vs. avoiding bias. Nonetheless, I would be astonished if the best trade-off involved taking no risk at all (see Barbarians vs. Bayesians for the opposite stance).
So far, I've mainly experimented with cognitive defusion as described by Kaj Sotala. In many cases, this has allowed me to cognitively judge that an outcome is bad without experiencing any significant emotive affect. Note that I limit my use of this, because sometimes I see advantages to experiencing negative affect: from an evolutionary psychology perspective, experiencing negative emotions makes a mistake stick in our brain for longer and has a greater impact on our implicit beliefs.
Unfortunately, I haven't been similarly successful in creating positive affect. All I can recommend at the moment is allowing yourself to feel positive affect, even if you are aware that the reason you are feeling it is quite "silly".
This approach is somewhat influenced by Buddhism, but is also distinct from it. I've heard people influenced by Buddhism say things like, "there is no such thing as good or bad, it's all in your mind", and I find that really frustrating. I suppose that if I were a perfectly equanimous enlightened being then there really wouldn't be a difference, but given that I am not, some things really are bad for me and some really are good. It resonates with me much more to think, "this situation is bad, but I don't have to feel bad". I won't claim that this works for everyone; it's just what I've found useful so far.
Nate Soares explores this in Conviction without Self-Deception, but I think it's a good topic to revisit from time to time.
Thanks for linking me to that. It's pretty remarkable how similar our approaches to the topic are, down to pointing out that emotions aren't beliefs and suggesting the same solution of "getting out of the way". I suppose very few ideas are original, but it still surprises me how similar our approaches are. Perhaps it's because we both probably read Barbarians vs. Bayesians, and maybe that has something to do with it?
I suppose I'm more skeptical than Nate about how easy it is to access these states when they are not coherent with your beliefs. He seems to act as though emotions and beliefs are mostly or completely independent, while I feel it is only in particular circumstances that an ordinary person would be able to intentionally create a disjunction between them.