Ed Yong over at Not Exactly Rocket Science has an article on a study demonstrating "restraint bias" (reference), which seems like an important thing to be aware of in fighting akrasia:

People who think they are more restrained are more likely to succumb to temptation

In a series of four experiments, Loran Nordgren from Northwestern University showed that people suffer from a "restraint bias", where they overestimate their ability to control their own impulses. Those who fall prey to this fallacy most strongly are more likely to dive into tempting situations. Smokers who are trying to quit, for example, are more likely to put themselves in tempting situations if they think they're invulnerable to temptation. As a result, they're more likely to relapse.

Thus, not only do people overestimate their ability to carry out non-immediate plans (far-mode thinking, as in the planning fallacy), but the most confident also turn out to be the least able. This might have something to do with how public commitment may be counterproductive: once you've effectively signaled your intentions, the pressure to actually implement them fades away. Once you believe you have asserted the self-image of a person with good self-control, maintaining actual self-control loses priority.

See also: Akrasia, Planning fallacy, Near/far thinking.

Related to: Image vs. Impact: Can public commitment be counterproductive for achievement?

This confirms that, as in the icy-bucket case, students overestimate their ability to fight off tiredness unless they're actually experiencing it, and that this affects how they plan their studying.

What amazes me in all these studies is how little we learn from our own experience. A lot of people always put off studying until the last few days before an exam. The problem is, they repeat this same behaviour year after year. Our decisions are affected much more by our momentary feelings than by data, including our past experience.

EDIT:

People often lack the discipline to adhere to a superior strategy that doesn't "feel" right. Reasoning in a way that sometimes "feels" wrong takes discipline.

-- Michael Bishop, Epistemology and the Psychology of Human Judgment

A lot of people always put off studying until the last few days before an exam.

I was one of them. It worked for me, and I don't see why I should have done it differently.

But the point is: what about the people it doesn't work for? Just attending class and never studying worked for me, but I recognize my results as atypical.

Given that people who have excellent self-control assess that fact about themselves accurately, it's almost tautological that overestimating one's self-control is correlated with low self-control.

Also, don't people (who aren't depressed) overestimate nearly every virtue in themselves (self-serving bias)?

I approve of this post, because the facts in it are useful.

Two concepts to distinguish: significant overestimation of self-control (relative) vs. high estimation of self-control (absolute).

I agree that it's not exactly tautological.

Another surface pattern: "those with low 'self-control' tend to overestimate their 'self-control' the most" is an instance of the Dunning-Kruger effect, with 'self-control' substituted for the skill in question. I don't think it's deeply meaningful to do so, however; I think self-control is a more basic phenomenon than the kind of performance competencies Dunning and Kruger studied.

I see Dunning-Kruger mentioned all the time, but hasn't it been discredited?

The Dunning-Kruger effect has been disputed, mostly by people saying that the pattern of results is just due to regression to the mean rather than a lack of metacognitive skill, but the debate is ongoing. Dunning, Kruger, and others have a 2008 paper (pdf) which includes a summary of the criticisms and a defense of their original interpretation.

According to my cursory research over the past five minutes, not obviously - do you have a specific idea of which results have been disconfirmed? (The "Lake Wobegon effect" - that everyone considers themselves above average - has been widely confirmed, I believe.)

I don't quite remember it myself, so I had to google, and came up with this: http://neuroskeptic.blogspot.com/2008/11/kruger-dunning-revisited.html

There is a link to a study there.

Looking at the abstract:

People are inaccurate judges of how their abilities compare to others’. J. Kruger and D. Dunning (1999, 2002) argued that unskilled performers in particular lack metacognitive insight about their relative performance and disproportionately account for better-than-average effects. The unskilled overestimate their actual percentile of performance, whereas skilled performers more accurately predict theirs. However, not all tasks show this bias. In a series of 12 tasks across 3 studies, the authors show that on moderately difficult tasks, best and worst performers differ very little in accuracy, and on more difficult tasks, best performers are less accurate than worst performers in their judgments. This pattern suggests that judges at all skill levels are subject to similar degrees of error. The authors propose that a noise-plus-bias model of judgment is sufficient to explain the relation between skill level and accuracy of judgments of relative standing.

...it appears that these authors are disputing the mechanism proposed by Dunning and Kruger, proposing a simpler one. The data remain the same, but the theory changes.
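
To make the noise-plus-bias critique concrete, here is a minimal simulation sketch (Python; the bias and noise parameters are illustrative assumptions, not values from any of the studies). Every simulated person misjudges their percentile with the same noise and the same upward bias, yet binning by true performance reproduces the asymmetric Dunning-Kruger pattern:

```python
import numpy as np

# A sketch of the "noise-plus-bias" alternative; parameter values
# (bias, noise_sd) are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
n = 100_000
true_pct = rng.uniform(0, 100, n)   # each person's true percentile
bias = 15.0                         # same "better-than-average" bias for everyone
noise_sd = 20.0                     # same imperfect self-knowledge for everyone
estimate = np.clip(true_pct + bias + rng.normal(0, noise_sd, n), 0, 100)

# Bin by *true* performance: the bottom quartile overestimates heavily,
# while the top quartile looks nearly accurate. That is the Dunning-Kruger
# pattern, with no skill-dependent metacognitive deficit in the model.
for lo, hi in [(0, 25), (25, 50), (50, 75), (75, 100)]:
    in_bin = (true_pct >= lo) & (true_pct < hi)
    gap = estimate[in_bin].mean() - true_pct[in_bin].mean()
    print(f"true percentile {lo:>2}-{hi:<3}: mean overestimate {gap:+5.1f}")
```

In other words, the observed pattern can fall out of uniform noise plus a uniform bias, which is exactly the simpler mechanism the abstract proposes.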

Assuming results such as these are upheld, we may say that the Dunning-Kruger effect, as a metacognitive explanation, is refuted; but people far below average will still, on average, consider themselves above average.

This might have something to do with how public commitment may be counterproductive: once you've effectively signaled your intentions, the pressure to actually implement them fades away.

I was thinking about this today in the context of Kurzweil's future predictions, and I wonder whether there is some overlap. Obviously Kurzweil is not designing the systems he predicts, but the people who are designing them will likely read his predictions.

I wonder whether, seeing the timelines he predicts, they might think: "oh, well, [this or that technology] will be designed by 2019, so I can put it off a little while longer, or maybe someone else will take the project instead."

It might not work out that way; in fact, they might use the predicted timeline as a target to beat. Regardless, I think it would be good for developers to keep effects like that in mind.