Love xkcd. Spherical cows and all that. But appropriate parsimony is a desirable feature. Here's a summary of whether the equation does a good job of summarizing the science:

Also, from the book "The Procrastination Equation"

The Procrastination Equation attempts to economically describe the underlying neurobiology that creates procrastination. I will tell you right now; the biology and the math won’t match exactly. A road map of a city, for example, no matter how recent or detailed, can’t represent every corner and crevasse of reality; it skips over details like architectural styles or fire hydrant placement. Judiciously focusing on streets and highways allows the map to emphasize navigation. If this big picture doesn’t satisfy you and you want all the details, don’t fret. The next chapter will give you what you are looking for.

The next chapter discusses it from a neurobiological perspective, which ultimately provides deeper understanding. I think as long as people recognize the purpose of the equation, understand that it is actually a step up in complexity from what was previously used, and avoid mistaking the map for the territory, it works.

Here's the background on its construction for those interested: the academic article "Integrating Theories of Motivation"

Spectacularly uncontroversial really, based on the core and best established parts of the key motivational theories. Because the theory is limited this way (i.e., focused on the core elements), it doesn't directly cover obvious elements like satiation, though really you would incorporate that into value.
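For readers who haven't seen it, the equation under discussion is Temporal Motivation Theory's core formula, Motivation = (Expectancy × Value) / (Impulsiveness × Delay). A minimal sketch of how it behaves; the function name and the particular parameter values are mine, chosen purely for illustration:

```python
def motivation(expectancy, value, impulsiveness, delay):
    """Temporal Motivation Theory, core form:
    Motivation = (Expectancy * Value) / (Impulsiveness * Delay).
    Illustrative helper; parameter scales are arbitrary (delay e.g. in days).
    """
    return (expectancy * value) / (impulsiveness * delay)

# A deadline 30 days out yields weak motivation...
far = motivation(expectancy=0.8, value=0.9, impulsiveness=2.0, delay=30)
# ...which rises sharply as the deadline closes to 1 day, the familiar
# last-minute surge that characterizes procrastination.
near = motivation(expectancy=0.8, value=0.9, impulsiveness=2.0, delay=1)
assert near > far
```

The hyperbolic 1/delay term is what produces the procrastinator's "cram at the end" pattern: motivation stays flat for weeks, then spikes.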

If I could redo it again, I would differentiate between goal choice and goal pursuit, as expectancy operates differently in each. However, the public conversation is necessarily limited to reiterating the basics, which is fine. Academically, though, it is a bit old hat.

We are working on a software-based training program, one we can update, built on our best understanding of goal setting. I like it because it provides a more direct conduit to implementing what we have learned. It was actually all inspired somewhat by what Less Wrong is up to.

This is neat and actually might be better in some ways than the original book. People tend to respond better to stories than statistics and science, though the most useful stories are those based on the latter. Could be the best of both worlds?

Let's go back and look at the source article one more time: "PubMed references more than 25 million articles relating primarily to biomedical research published since the 1940s. A comprehensive search of the PubMed database in May 2012 identified 2,047 retracted articles, with the earliest retracted article published in 1973 and retracted in 1977."

So over 99.99% of articles aren't retracted. Let's say the retracted ones are the tip of the iceberg and the real situation is ten times worse. That still makes it 99.9% accurate.
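A quick back-of-envelope check of those figures (2,047 retractions out of roughly 25 million PubMed articles):

```python
# Figures from the cited PubMed search (May 2012).
total = 25_000_000
retracted = 2_047

rate = retracted / total
print(f"retracted:     {rate:.4%}")        # about 0.0082%
print(f"not retracted: {1 - rate:.2%}")    # over 99.99%

# Even under the pessimistic assumption that the true problem
# is ten times worse than the retraction record shows:
pessimistic = 10 * rate
print(f"still sound:   {1 - pessimistic:.1%}")  # about 99.9%
```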

Aside from the sensationalism, these results are a stunning and unequivocal endorsement that the scientific system works.

Too good. Trying to think up new ones that would belong, but I can't verify my own predictions. Heh, maybe that is one right there. Some more (?):

"Well, essence does precede existence." "Total spaghetti monster." "You have to make your cognitive biases work for you." "What's the citation count on that?" "I'll wait for the meta-analysis, thank you very much."

You are probably right. It was an overly onerous requirement on my part. However, peer review is the best stamp of research quality we have, and a meta-analysis is even better, comprising hundreds of peer-reviewed studies. I am passionate about science, well aware of the limitations of clinical expert opinion, and was probably too strident.

In truth, it is almost impossible for a sole practitioner to discern whether the efficacy of their treatment is due to the treatment itself or to apparently non-relevant aspects, such as the placebo effect or the personality of the clinician. There are some really effective clinicians out there who are successful through their innate ability to inspire. You need to do or rely on research to determine what is really going on (i.e., evidence-based treatment). There really isn't any other way (really, really, really), and unless he gets this, there is nothing he will personally experience that will make him change his mind. This isn't new, though. Research has repeatedly shown that statistical analysis beats clinical opinion pretty much every time (here's one from Paul Meehl, whom I studied under and who was both a clinician and a statistician):

This type of issue is never going to go away, though. We have everything from homeopathy to applied kinesiology, all of which appear to work because people believe they work. The only way to determine whether the motivational treatment is inherently effective is through research. If it is the placebo effect and you are happy with that being the source of whatever change you are seeing, then add a lot more pomp and ceremony -- it ups the effect.

Given our difference of opinion, I think we managed to conduct this dialogue with a fair amount of decorum. However, I don't think we are going to reach agreement. I have to go with the science.

Give any group of people a perfectionism or fear-of-failure test along with almost any procrastination scale and you get anywhere from a negative to, at best, a very weak positive correlation. And if you control for self-efficacy or self-confidence, even that weak correlation disappears. Science does not back you up.

Similarly, characterizing impulsiveness as a fudge factor is just silly. A simple Google Scholar search shows over 45,000 citations on the term, including the groundbreaking work by George Ainslie. It really is a measure of system-1-heavy decision making, something that you yourself accept. In fact, there is enough science on it that I'm conducting a meta-analytic review. And, unlike fear of failure, you find a very strong correlation between impulsiveness and procrastination.

Now, characterizing every technique that science has produced as not up to your standards is a little harsh. The book is a review of the literature. Essentially, researchers in peer-reviewed studies have conducted a variety of treatments, like stimulus control (which activates the cue-sensitive system 1), and found them very effective at reducing procrastination. I organize and report what works. Since there are a thousand ways to implement stimulus control, you can describe the general methodology, report its effectiveness, and give a few examples of how it can be used. If you know a better way to convey this information, I'm all ears. Of note, this is indeed an environmental fix to procrastination, one of several, and not what you characterize as "don't think that way or think something else." Again, you come across as not having read the book.

On the other hand, I think you have been given pretty much a free ride up to this point. You make a lot of suggestions that are inconsistent with our present knowledge of the field (e.g., fear of failure). You make the quite bold claim that you have techniques that will cure procrastinators with one application, presumably by focusing solely on the expectancy or self-efficacy aspect of motivation. We can all make claims. Show me some peer-reviewed research (please, not clinical case studies).

On the longshot that you are right and have all the magic bullets, do some experimental research and publish it in a respectable journal. I would welcome the correction. I have a lot of research interests and would be happy to be able to focus on other things. Personally, I don't think you actually are going to do it. Right now, you hold the warm belief that the rest of us studying this field are effectively a bunch of second-raters, as "science has not actually caught up to the in-field knowledge of people like myself." If you actually do the research (with proper controls, like accounting for the placebo effect, which runs rampant through self-efficacy-type clinical interventions), you run the risk of having a very self-satisfying set of beliefs turned into flimsy illusions. Do you really think you are willing to take that risk? Given human nature, I'm sceptical but would love to be proven wrong.

This is interesting. Actually, you are quite right that TMT is an overall integrative model. It was designed to be a Rosetta Stone, allowing us to draw findings and applications from different fields into a coherent whole. It operates at one level of detail and has its uses, just as a map of a city is useful but not equivalent to a blueprint of a house (though neither is wrong). For example, it excluded nonsense solutions, which the field is rife with.

You have a naturally critical mind, which is useful, but you are taking a few cognitive shortcuts. From what you write, it doesn't seem like you actually read the book or the article. The article formally integrates prospect theory, under the section CPT. CPT is actually the subsequent update to prospect theory by Kahneman and Tversky; see pages 894-895 (e.g., "Consequently, other researchers have already proposed various integrations of prospect theory with some hyperbolic time-discounting function"). Chapter three of the book is an extended review of system one and system two, including a historical review going back to Plato. The last three chapters then, using TMT as an organizing model, review all the applied science on this, techniques that have been successfully used to increase self-regulation.

What would be useful is this: What precise techniques do you take issue with? Are there any you think are ineffective or too vague to be applied? Though everything was already scientifically vetted, maybe I could have been clearer in sections. Given other feedback, I found that many people needed a better walkthrough of how to apply these techniques, so in the paperback version I added a step-by-step guide. So is it the techniques or the explanation?

Alternatively, you might have some insight into specific techniques that the book neglected. This is quite possible, as I didn't want to include less developed techniques, ones without proven value. Developing a full package of self-regulatory techniques is exactly where science needs to go, and why what Lesswrong is doing is quite remarkable. We don't have this. Instead, the area of motivation is splintered into competing theories and practices, often redundant to one another or simply isolated. What gets marketed to us from the self-help arena is often out of date or even wrong. Aside from Lesswrong, I don't know of another concerted effort to change this.

Think of the book as version 1.0. What do you want in the next upgrade? You like the basic model, which is a start. It can help direct people towards broad areas of weakness (e.g., the diagnostic test in the book, which notably accounts for about 70% of the variance in people's procrastination scores). Then we have a series of techniques to address these weaknesses, outlined in chapters 7, 8, and 9. What's next? Can we expand on them? Can we refine or improve their implementation? Can we express them in ways that help people adopt them? Can we combine them into something more powerful? These are questions worth asking.

To some extent, I can contribute to Lesswrong on a positive venture like this. It is serious, useful and noble.

Here's the research it cites along with a few hyperlinks to other articles. Did you read it?

Mellers, B. A. (2000). Choice and the relative pleasure of consequences. Psychological Bulletin, 126, 910-924.

Ghahremani, D. G., Tabibnia, G., Monterosso, J., Hellemann, G., Poldrack, R., & London, E. D. (2011). Effect of modafinil on learning and task-related brain activity in methamphetamine-dependent and healthy individuals. Neuropsychopharmacology, 36(5), 950-959.

Repantis, D., Schlattmann, P., Laisney, O., & Heuser, I. (2010). Modafinil and methylphenidate for neuroenhancement in healthy individuals: A systematic review. Pharmacological Research, 62(3), 187-206.
