Fair enough! I think the circuit analogy is fitting; the primary analysis is at a system-level scale rather than an individual unit scale. That said, my goal would be to turn it into a tool for better understanding and changing biases themselves (without losing the higher-level functionality). Some people have offered some ideas on how that might be done, and that’s exactly what I’m hoping to get out of sharing here.
A few days ago I made a post called A New Way to Visualize Biases, and the community response was generally skeptical of the diagram's value. I don't think the original post made clear enough the types of use cases I was primarily interested in, and I know I haven't perfected the model yet, but the primary reason I posted was to get clarity and ideas from LW on how I might do that, which some commenters offered.
But I do want to defend the potential value of a framework like this. If I had just been trying to understand biases in isolation, I probably never...
You are a step ahead of my latest post with the CBT comment. Good points on writing out thought chains and adding distortion notation later, and on symbols for common biases. Have you seen examples of belief network diagrams used in this way?
I think my comments about it being helpful in working through biases led people to think I intended these primarily as active problem-solving devices. Of course you can't just draw a diagram with a jog in it and then say "Aha! That was a bias!" If anything, I think (particularly in more complex cases) the visuals could help make biases more tangible, almost as a kind of mnemonic device to internalize, in the same way that you might create a diagram to help you study for a test. I would like to make the diagrams robust enough to serve as a visual vocabulary for the types of ideas discussed on this site, and your comments on distinguishing types of biases visually are helpful and much appreciated. I'd love to hear your thoughts on my latest post, which responds to this.
You are overestimating the ambition of the diagram. I know it does not add any new information. I am (working on) presenting the information in a visual form. That's why I called it a new way of visualizing biases, not a new way to get rid of them with this one simple trick. You can convey all the information shown in a Venn diagram without a diagram, but that doesn't mean the diagram has no possible value. And if there had been a community dedicated to understanding logical relations between finite collections of sets back in 1880, I'm sure they would have shot down John Venn's big idea at first too.
I think I see what you're saying, but let me know if I've misinterpreted it.
Let's look at the planning fallacy example. First, I would argue it is entirely possible to be aware of the existence of the planning fallacy and be aware that you are personally subject to it while not knowing exactly how to eliminate it. So you might draw up a diagram showing the bias visually before searching or brainstorming a debiasing method for it.
According to Daniel Kahneman, “Using… distributional information from other ventures similar to that being forecasted is called taking an ‘outside view’ and is the cure to the planning fallacy.”
So removing the planning fallacy is not...
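Kahneman's "outside view" can be sketched numerically. This is a toy illustration with made-up numbers, not anything from the original post: instead of forecasting from your own plan (the inside view), you anchor on the distribution of how long similar past projects actually took.

```python
# Toy reference-class forecast illustrating the "outside view".
# All numbers below are hypothetical illustration data.
inside_view_estimate = 4            # weeks you *feel* the project will take
reference_class = [6, 9, 7, 12, 8]  # how long similar past projects actually took

# Outside view: anchor on the distribution of similar ventures,
# here its median, rather than on your own plan.
outside_view_estimate = sorted(reference_class)[len(reference_class) // 2]

print(inside_view_estimate, outside_view_estimate)  # inside: 4, outside: 8
```

The gap between the two numbers is the planning fallacy made visible, which is the kind of thing the diagrams are meant to capture.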
That makes sense; they are intentionally somewhat fluid so they can adapt to capture a wider variety of biases/phenomena. I'm trying to use the same framework to visualize emotional reactions and behavioral habits.
I have started using a visual metaphor to diagram biases in my attempts to remove and mitigate them in myself. I have found this to be incredibly useful, particularly when dealing with multiple compounding biases.
I view an inference as an interaction between external inputs/premises and the resulting cognitions/conclusions. It can be read either as "if x then y," or "x therefore y." A basic inference looks like this:
A biased inference looks like this:
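Since the images don't carry over here, a minimal data-structure sketch of both cases may help. This is my own hypothetical rendering, not part of the diagram notation itself: an inference is premises in, conclusion out, optionally annotated with the biases distorting that step.

```python
from dataclasses import dataclass, field

@dataclass
class Inference:
    """One node in a belief diagram: premises in, conclusion out.

    A hypothetical sketch of the structure, not the actual notation."""
    premises: list[str]
    conclusion: str
    biases: list[str] = field(default_factory=list)  # distortions acting on this step

    def __str__(self) -> str:
        tag = f"  [biased by: {', '.join(self.biases)}]" if self.biases else ""
        return f"{' + '.join(self.premises)} -> {self.conclusion}{tag}"

# A basic inference: "x therefore y"
basic = Inference(premises=["x"], conclusion="y")

# A biased inference: the same step, distorted
biased = Inference(premises=["x"], conclusion="y", biases=["anchoring"])
```

Printing `basic` gives `x -> y`; printing `biased` appends the bias annotation.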
This is obviously a simplification of complex cognitive processes, but it's meant to be more of a functional interface than any kind of theory.
So to run through a few example biases, the fallacy of the undistributed middle: