I've never had any sort of therapy, but I have done some personal development courses in which similar sorts of dialogue take place.
The thing is, we are all running on corrupt hardware (a phrase I couldn't find in a top-level post, which surprised me -- maybe there's scope for an article on the theme). (ETA: thanks to JGWeissman and ciphergoth for locating this article.) When asking ourselves the fundamental question of what we believe and why, we have to take that into account, and it goes way beyond the usual lists of cognitive biases. I don't see "depression" in this list, nor for that matter "optimism", "mania", "self-effacement", "overconfidence", "introversion", "extraversion", or any other general pattern of mood and background belief that greatly affects how a person lives their life.

How do you uproot a belief that seems convincing to you, one you could defend with seeming evidence until you'd talked the hind leg off a donkey, but which you have just the tiniest suspicion is no more than a figment of your mental constitution? Rigour in assessing that evidence, and your evidence for believing that it's evidence, and so on, is one way. If CBT as typically practiced does not reach that level of rigour (and whose thinking does?), well, a blunt knife can be more dangerous to its user than a sharp one, but it need not be perfectly razor-sharp to do the job.
An additional complication in the present context is that beliefs cause actions, and actions cause outcomes. If the beliefs are about those outcomes, then there is a problem of circularity. Contrary to (C) in the top-level post, the student's belief "I'm inadequate" has a very clear anticipated experience: failing her course. The (rather lukewarm) replacement anticipates a chance of success. However, "I'm inadequate" is likely to cause failure to work on the course, which in turn causes failure on the course.
Here's a very simplified payoff matrix. Assume that whether you pass a course depends solely on whether you work at it: if you work, you will pass; if you don't, you will fail. The payoff for passing is 1; for failing, 0.
                      work    don't work
believe you'll pass     1         0
believe you'll fail     1         0
If this is the situation, then clearly you should work, and having made that decision, believe you will pass. But if you only work if you believe you'll pass, the table becomes:
                      work    don't work
believe you'll pass     1        n/a
believe you'll fail    n/a        0
Both beliefs are then no more right or wrong than the sentence "this sentence is true". They try to reach past the means of producing an outcome to the outcome itself, which is surely a fallacy in some decision theory or other. A correct belief in the above situation is "I will pass if and only if I work". You might then still choose not to work, because there is some better use of your time, but with the correct belief, you are in a position to make that choice.
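For concreteness, here is the same point as a small sketch in code (the function names are mine, purely illustrative):

```python
# A minimal sketch of the two payoff tables above. All names here are
# illustrative, not taken from any decision-theory library.

def payoff(work: bool) -> int:
    """Outcome depends only on working: work -> pass (1), don't -> fail (0)."""
    return 1 if work else 0

# Regime 1 (first table): belief and action vary independently, so you can
# pick the best action first and then adopt the matching belief.
best_action = max([True, False], key=payoff)          # working dominates
belief = "I will pass" if payoff(best_action) else "I will fail"

# Regime 2 (second table): you work if and only if you believe you'll pass,
# so each belief fixes the action and thereby makes itself true.
def outcome_given_belief(believe_pass: bool) -> int:
    work = believe_pass       # the belief determines the action...
    return payoff(work)       # ...and the action determines the outcome

assert outcome_given_belief(True) == 1    # "I'll pass" is self-fulfilling
assert outcome_given_belief(False) == 0   # ...and so is "I'll fail"

# The correct belief is the conditional one: "I will pass iff I work".
assert all(payoff(w) == (1 if w else 0) for w in (True, False))
```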
A few relevant quotes, in chronological order:
"Know thyself." (Ancient Greeks).
"Never despair; but if you do, work on in despair." (Marcus Aurelius).
"The truth shall set you free." (John 8:32).
"When I look around and think that everything's completely and utterly fucked up and hopeless, my first thought is "Am I wearing completely and utterly fucked up and hopeless-colored glasses?"" (Crap Mariner).
This is completely right.
I've done CBT to fight depression. There was an assumption - explicitly discussed between me and my therapist - that I should believe "I'm a good and capable guy" regardless of the evidence.
I'm not uncomfortable, but I can't give a useful report, for two reasons.
Firstly, there have been many confounding factors from elsewhere. In particular, I also participated in group seminars that had some methods in common with the CBT sessions.
Secondly, there was a period in which I deliberately avoided evaluating the efficacy of the method, reasoning that just as I should believe that "I'm a good and capable guy" regardless of evidence, so I should believe that "my way of fighting depression through CBT is a good and capable way". I did this for a predetermined length of time. Then I decided that 1) there was improvement but 2) it could not be linked to the CBT, so I stopped seeing the therapist.
I can definitely report there's a strong correlation between thinking positive, evidence-ignoring thoughts and general well-being, over both small and large time-scales. But you already know that :-) I have no data as to causation.
Re "corrupted hardware", the source article appears to be Ends Don't Justify Means (Among Humans).
I've been trying CBT for the last couple of years and my understanding of it is that B is the main reason for that approach. For people with depression, negative motivated cognition is so easy and habitual that trying to overcompensate is a good and useful strategy.
I don't think your argument for C quite works, though, because those beliefs do have anticipated experiences attached to them. Example: "I am incompetent" -> I expect to fail at everything I attempt. Which then leads to the further negative belief that there's no point attempting anything, since I already anticipate failing. The more helpful replacement I would choose in its place would be "I am capable of being competent if I try" -> I expect my final results to reflect my level of effort, which in turn would encourage me to work hard (well, in theory ;)
This is interesting and of some personal concern, as a loved one has desperately been trying to get CBT for literally years now (on the NHS in the UK).
CBT is interesting in that there's evidence that it actually somewhat works as a psychotherapy, in a field full of treatments lacking evidence of efficacy.
I suppose the question then is: is adding some biases to get the patient out of a crippling depression or other dysfunction an acceptable side-effect? Effective medicines frequently have nasty side-effects, after all.
If believing something that is false gets me utility,
I desire to believe in that falsity;
If believing something that is true gets me utility,
I desire to believe in that truth;
Let me not become attached to states of belief that do not get me utility.
That's the Litany of the Politician. It's the extreme case of politics, but it does seem to be what is at work in a lot of practical politics.
Perhaps in this case we might modify it slightly differently:
"If holding an alief which corresponds to no expectations increases my utility, I desire to alieve it."
This is perhaps more relevant to the idea of feeling "inadequate" and needing to increase one's confidence.
Formatting help:
For some reason, the posts and the comments use entirely different syntax. You should see a link button (it looks like three chain links) that is greyed out until you select some text (the name of the link); once text is selected, you can click the button and add a URL.
Truth for truth's sake is generally not worth it. Truth for curiosity's sake is worth it, and curiosity is a virtue. But your response to this situation is comparable to saying to someone who is suicidal "You know what, I think you might be right, maybe you should kill yourself." Fine for the terminally ill, not fine for the mentally ill.
From what I've seen on the subject, the emphasis is on B and C, though they don't spend words on avoiding A.
CBT is just a way to steer your thoughts, and is the same thing aspiring rationalists would do when they catch themselves counting sunk costs.
Using CBT to fight depression is almost certainly instrumentally rational, and most likely epistemically rational as well, but if you hand someone a steering wheel with no instruction on rationality, there's no guarantee they'll beat the autopilot.
Does CBT make the patient happier, or is it a way of persuading patients to self-rate their happiness as higher on surveys and pretend to be happy? I was forced into talking therapy during my early teenage years (for issues related to Asperger's syndrome) and am convinced that the answer is the latter.
I asked a professor about this in a class on CBT - "Aren't we just replacing biases with other biases?" and she answered, "Well, I wouldn't want to get rid of all biases." It drove me crazy at the time, but now I think there's some merit to it. E.g. I think it's fine for parents to believe their baby is more wonderful than all other babies. That's part of how love works.
I think the false (or unproven) beliefs of the kind CBT encourages, like "I am a capable person who can X" have hardly any bad effects, and the expected benefits of these deviations from the evidence are worth it.
While I value true knowledge, I carve out an exception for beliefs about my own capabilities, as represented by this modified version of the Litany of Tarski:
If I can X,
then I desire to believe I can X.
If believing that I cannot X would make it such that I could not X,
and it is plausible that I can X,
and there are no dire consequences for failure if I X,
then I desire to believe I can X.
It is plausible that I can X.
There are no dire consequences for failure if I X.
Let me not become attached to beliefs I may not want.
It is plausible that I can X. There are no dire consequences for failure if I X.
That doesn't seem appropriate for arbitrary X. It is the sort of thing you would have to use ordinary epistemic rationality to evaluate for a particular X.
I left out a bit of the implied procedure that goes with reciting this. You're supposed to truth-check those two lines as you say them, and stop if they aren't true, with the understanding that (as a prior probability) they usually will be.
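Sketched as code (entirely my own illustrative framing, nothing canonical), the procedure might look like this:

```python
# Illustrative sketch of the recitation procedure: truth-check the two
# premises as you go, and stop (keep your current belief) if either fails.
from typing import Optional

def adopt_belief_i_can_x(known_ability: Optional[bool],
                         plausible_i_can_x: bool,
                         dire_consequences_of_failure: bool) -> bool:
    """Return True if the litany licenses believing "I can X".

    known_ability: True/False when the question is settled, None when open.
    The two boolean premises are evaluated with ordinary epistemic
    rationality for the particular X at hand.
    """
    if known_ability is not None:
        # "If I can X, then I desire to believe I can X" -- and conversely.
        return known_ability
    if not plausible_i_can_x:
        return False   # stop: "It is plausible that I can X" failed the check
    if dire_consequences_of_failure:
        return False   # stop: failure would be too costly to risk
    # Open question, plausible, and safe to fail: adopt the enabling belief.
    return True
```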
with the understanding that (as a prior probability) they usually will be.
What? Where did that prior probability come from?
"Cognitive behavioral therapy" (CBT) is a catch-all term for a variety of therapeutic practices and theories. Among other things, it aims to teach patients to modify their own beliefs. The rationale seems to be this:
(1) Affect, behavior, and cognition are interrelated such that changes in one of the three will lead to changes in the other two.
(2) Affective problems, such as depression, can thus be addressed in a roundabout fashion: modifying the beliefs from which the undesired feelings stem.
So far, so good. And how does one modify destructive beliefs? CBT offers many techniques.
Alas, included among them seems to be motivated skepticism. For example, consider a depressed college student. She and her therapist decide that one of her bad beliefs is "I'm inadequate." They want to replace that bad one with a more positive one, namely, "I'm adequate in most ways (but I'm only human, too)." Their method is to do a worksheet comparing evidence for and against the old, negative belief. Listen to their dialog:
[Therapist]: What evidence do you have that you're inadequate?
[Patient]: Well, I didn't understand a concept my economics professor presented in class today.
T: Okay, write that down on the right side, then put a big "BUT" next to it...Now, let's see if there could be another explanation for why you might not have understood the concept other than that you're inadequate.
P: Well, it was the first time she talked about it. And it wasn't in the readings.
Thus the bad belief is treated with suspicion. What's wrong with that? Well, see what they do about evidence against her inadequacy:
T: Okay, let's try the left side now. What evidence do you have from today that you are adequate at many things? I'll warn you, this can be hard if your screen is operating.
P: Well, I worked on my literature paper.
T: Good. Write that down. What else?
(pp. 179-180; ellipsis and emphasis both in the original)
When they encounter evidence for the patient's bad belief, they investigate further, looking for ways to avoid inferring that she is inadequate. However, when they find evidence against the bad belief, they just chalk it up.
This is not how one should approach evidence...assuming one wants correct beliefs.
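To make the asymmetry concrete, here is a toy Bayesian calculation. The numbers are mine, assumed purely for illustration; they come from neither Beck nor the dialog above:

```python
# Toy illustration of the asymmetry. All numbers are assumed for the example.

def update(prior: float, p_e_if_true: float, p_e_if_false: float) -> float:
    """Posterior P(hypothesis | evidence) by Bayes' rule."""
    num = prior * p_e_if_true
    return num / (num + (1.0 - prior) * p_e_if_false)

prior = 0.5  # P("I'm inadequate") before the day's evidence

# Symmetric treatment: both pieces of evidence update at face value.
p = update(prior, 0.7, 0.5)   # missed a concept: mild evidence for
p = update(p,     0.4, 0.6)   # worked on the paper: mild evidence against
symmetric = p                  # ~0.48

# Worksheet-style treatment: explain away the evidence *for* (its
# likelihood ratio is argued down to 1), but count evidence *against* fully.
p = update(prior, 0.5, 0.5)   # missed concept, explained away: no update
p = update(p,     0.4, 0.6)   # paper counted at face value
asymmetric = p                 # ~0.40

# The asymmetric rule yields a lower posterior than the evidence warrants,
# regardless of what the evidence actually supports.
print(round(symmetric, 2), round(asymmetric, 2))
```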
So why does Beck advocate this approach? Here are some possible reasons.
A. If beliefs are keeping you depressed, maybe you should fight them, even at the cost of a little correctness (and of some increased habituation to motivated cognition).
B. Depressed patients are already predisposed to find the downside of any given event. They don't need help doubting themselves. Therefore, therapists' encouraging them to seek alternative explanations for negative events doesn't skew their beliefs. On the contrary, it helps to bring the depressed patients' beliefs back into correspondence with reality.
C. Strictly speaking, this motivated cognition does not lead to false beliefs because beliefs of the form "I'm inadequate," along with its more helpful replacement, are not truth-apt. They can't be true or false. After all, what experiences do they induce believers to anticipate? (If this were the rationale, then what would the sense of the term "evidence" be in this context?)
What do you guys think? Is this common to other CBT authors as well? I've only read two other books in this vein (Albert Ellis and Robert A. Harper's A Guide to Rational Living and Jacqueline Persons' Cognitive Therapy in Practice: A Case Formulation Approach) and I can't recall either one explicitly doing this, but I may have missed it. I do remember that Ellis and Harper seemed to conflate instrumental and epistemic rationality.
Edit: Thanks a lot to Vaniver for the help on link formatting.