Modafinil is probably the most popular cognitive enhancer. LessWrong seems pretty interested in it. The incredible Gwern wrote an excellent and extensive article about it.

Of all the stimulants I have tried, modafinil is my favorite. There are more powerful substances, e.g. amphetamine or methylphenidate, but modafinil has far fewer negative effects on physical as well as mental health and is far less addictive. All things considered, the cost-benefit ratio of modafinil is unparalleled.

For those reasons I decided to publish my bachelor's thesis on the cognitive effects of modafinil in healthy, non-sleep-deprived individuals on LessWrong. Please forgive its shortcomings.

Here are some relevant quotes:


...the main research question of this thesis is whether and to what extent modafinil has positive effects on cognitive performance (operationalized as performance improvements in a variety of cognitive tests) in healthy, non-sleep-deprived individuals.... The abuse liability and adverse effects of modafinil are also discussed. To that end, a review of all available randomized, placebo-controlled, double-blind studies which examined those effects was conducted.

Overview of effects in healthy individuals:

...Altogether 19 randomized, double-blind, placebo-controlled studies about the effects of modafinil on cognitive functioning in healthy, non-sleep-deprived individuals were reviewed. One of them (Randall et al., 2005b) was a retrospective analysis of 2 other studies (Randall et al., 2002 and 2005a), so 18 independent studies remain.

Out of the 19 studies, 14 found that modafinil improved performance in at least one of the administered cognitive tests in healthy volunteers.
Modafinil significantly improved performance in 26 out of 102 cognitive tests, but significantly decreased performance in 3 cognitive tests.

...Several studies suggest that modafinil is only effective in subjects with lower IQ or lower baseline performance (Randall et al., 2005b; Müller et al., 2004; Finke et al., 2010). Significant differences between modafinil and placebo also often only emerge in the most difficult conditions of cognitive tests (Müller et al., 2004; Müller et al., 2012; Winder-Rhodes et al., 2010; Marchant et al., 2009).

Adverse effects:

...A study by Wong et al. (1999) of 32 healthy male volunteers showed that the most frequently observed adverse effects among modafinil subjects were headache (34%), followed by insomnia, palpitations and anxiety (each occurring in 21% of participants). Adverse events were clearly dose-dependent: 50%, 83%, 100% and 100% of the participants in the 200 mg, 400 mg, 600 mg, and 800 mg dose groups respectively experienced at least one adverse event. According to the authors of this study, the maximal safe dosage of modafinil is 600 mg.

Abuse potential:

...Using a randomized, double-blind, placebo-controlled design Rush et al. (2002) examined subjective and behavioral effects of cocaine (100, 200 or 300 mg), modafinil (200, 400 or 600 mg) and placebo in cocaine users….Of note, while subjects taking cocaine were willing to pay $3 for 100 mg, $6 for 200 mg and $10 for 300 mg cocaine, participants on modafinil were willing to pay $2, regardless of the dose. These results suggest that modafinil has a low abuse liability, but the rather small sample size (n=9) limits the validity of this study.

The study by Marchant et al. (2009), which is discussed in more detail in part 2.4.12, found that subjects receiving modafinil were significantly less content (p<0.05) than subjects receiving placebo, which indicates a low abuse potential of modafinil. In contrast, in a study by Müller et al. (2012), which is also discussed in more detail above, modafinil significantly increased (p<0.05) ratings of "task-enjoyment", which may suggest a moderate potential for abuse.

...Overall, these results indicate that although modafinil promotes wakefulness, its effects are distinct from those of more typical stimulants like amphetamine and methylphenidate and more similar to the effects of caffeine, which suggests a relatively low abuse liability.


In healthy individuals modafinil seems to improve cognitive performance, especially on the Stroop Task, stop-signal and serial reaction time tasks, and tests of visual memory, working memory, spatial planning ability and sustained attention. However, these cognition-enhancing effects emerged in only a subset of the reviewed studies. Additionally, significant performance increases may be limited to subjects with low baseline performance. Modafinil also appears to have detrimental effects on mental flexibility.

...The abuse liability of modafinil seems to be small, particularly in comparison with other stimulants such as amphetamine and methylphenidate. Headache and insomnia are the most common adverse effects of modafinil.

...Because several studies suggest that modafinil may only provide substantial beneficial effects to individuals with low baseline performance, ultimately the big question remains whether modafinil can really improve the cognitive performance of already high-functioning, healthy individuals. Only in the latter case can modafinil justifiably be called a genuine cognitive enhancer.

You can download the whole thing below. (Just skip the sections on substance-dependent individuals and patients with dementia. My professor wanted them.)

Effects of modafinil on cognitive performance in healthy individuals, substance-dependent individuals and patients with dementia

Meta-analysis on cognitive effects of modafinil (my bachelor thesis)

Well, meta-analyses certainly are an area of interest to me, and I was disappointed in 2012 by "Cognition Enhancement by Modafinil: A Meta-Analysis" (Kelley et al 2012), which used only 3 studies and so was not very informative. A new meta-analysis would be great. But... I read quickly through it, and I saw no meta-analysis. Just a literature review. What's with the post title?

Modafinil significantly improved performance in 26 out of 102 cognitive tests, but significantly decreased performance in 3 cognitive tests.

Nitpick: I really hate this use of 'significantly' and I ban it from my own writing. Is this referring to effect sizes or p-values?

Notably, modafinil appears to have detrimental effects on mental flexibility. Although 4 studies employed the Intra/Extradimensional Set Shift task (ID/ED), no performance improvements could be detected. Performance was even reduced in a study by Randall et al. (2004). Furthermore, Müller et al. (2012) found that subjects on modafinil had lower flexibility scores in the Abbreviated Torrance task for adults.

Eh. Absence of improvement != damage. Randall 2004 didn't find a statistically-significant decrease (and it's not clear whether it should have, given that it reports 25 datasets for 3 groups, so hunting for decreases incurs worries about multiplicity). And I have to point out, as far as Müller et al 2012 goes, the decrease didn't reach p<0.05 (just 0.053), and if you're willing to accept mere trends, then you should also accept the increase in the GEFT/Group Embedded Figures Task (p=0.08).
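The multiplicity worry generalizes to the headline tally quoted earlier: with 102 cognitive tests and two-sided α = 0.05, a handful of spurious significant decreases is expected even if modafinil does nothing. A quick sketch (assuming, for illustration, independent tests; each tail of α gets 0.025):

```python
from math import comb

def binom_tail_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_tests = 102   # cognitive tests tallied in the review
p_dec = 0.025   # chance of a spurious *decrease* per test (one tail of alpha = .05)

print(n_tests * p_dec)                   # expected spurious decreases: ~2.6
print(binom_tail_ge(3, n_tests, p_dec))  # chance of seeing 3 or more: ~0.47
```

So the 3 observed performance decreases are entirely consistent with chance under a global null, whereas 26 improvements far exceed the ~2.6 expected in the other tail.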

How important are these observations...? Well, as you found out, it can be hard to compare or meta-analyze psychology studies since studies may cover the same topic but use different sets of tests, frustrating the most obvious approach 'just univariate meta-analyze everything!'

Reprinted from Baranski et al. (2004) without permission.


But... I read quickly through it, and I saw no meta-analysis. Just a literature review. What's with the post title?

You're right. I don't remember why I wrote "meta-analysis". (Probably because it sounds fancy and smart). I updated the title.

Is this referring to effect sizes or p-values?


Eh. Absence of improvement != damage.


...Randall 2004 didn't find a statistically-significant decrease...

No. In Randall et al. (2004), participants in the 200 mg modafinil condition made significantly more errors (p<0.05) in the Intra/Extradimensional Set Shift task than participants in the placebo and the 100 mg modafinil conditions. (The 200 mg group made on average around 27 errors, the 100 mg group around 14, and the control group around 17.)

Actually, you linked to a different study. The results can be found in the complete study I linked to. I can upload it if you want to see it yourself.

Reprinted from Baranski et al. (2004) without permission.

Every single graphic in this whole thing is reprinted without permission, to tell the truth. (Is this a problem?)

I'm not an academic, but my understanding was that "significantly" was a synonym for "p<0.05" every time in academic writing. "Significantly" referring to effect size is solely the province of non-academic writing (well, that or things like history).

I'm not an academic, but my understanding was that "significantly" was a synonym for "p<0.05" every time in academic writing.

If only it were that simple. But one of my scripts flags use of significance language, and I have seen many times 'significant' and variants used in scientific writing as meaning important or large.

Sigh. People suck sometimes.

frustrating the most obvious approach 'just univariate meta-analyze everything!'

I'm curious if you have ideas on how to deal with that.

Maybe grouping the tests into different kinds of tests and fitting a hierarchical model inside those groups? Are there similar kinds of tests?

I'm curious if you have ideas on how to deal with that.

The standard solution seems to be 'multivariate meta-analysis'. I've done a little reading on the topic, but I've had trouble getting started with it - you need to know the correlations between the multiple outcome variables, this is typically unavailable (the data-sharing problem), and I think it only works anyway if there is at least a little bit of correlation between the multiple outcomes, while I would like to be able to collectively analyze outcomes from disjoint studies which is... less clear how to do.

Right that makes sense. People rarely report the covariance matrix of the data.

Much less provide IPD/individual-patient-data which is what one really wants. The lack of data is frustrating.

This meta-analysis on meditation has an interesting approach: they basically just analyze the effect sizes in the same "class" (averaging effect sizes within a study if there are multiple different outcomes measured in the same class).

That sounds like a completely disgusting approach... I'm going to have to read that and see if it's a legitimate strategy.

They seem to get pretty strong effect sizes and low heterogeneity, so I'm curious to hear your thoughts on it.

So, their methodology is, as far as I can tell, described by these parts:

The aim of our meta-analysis was to assess the effect of a mindfulness meditation intervention on health status measures. We considered the concept of health to include both physical and mental health. All outcome measures were either subsumed under "physical health", "mental health" or were excluded from the analysis. We only included data from standardized and validated scales with established internal consistency (e.g., the Global Severity Inventory of Symptom Check List-R, Hospital Anxiety and Depression Scale, Beck Depression Inventory, Profile of Mood States, McGill-Melzack Pain-Rating Scale, Short Form 36 Health Survey, and Medical Symptom Checklist; a full list is available upon request). Also a conservative procedure was chosen to exclude relatively ambiguous or unconventional measures, e.g., spiritual experience, empathy, neuropsychological performance, quality of social support, and egocentrism.

"Mental health" constructs comprised scales such as psychological wellbeing and symptomatology, depression, anxiety, sleep, psychological components of quality of life, or affective perception of pain. "Physical health" constructs were medical symptoms, physical pain, physical impairment, and physical component of quality of life questionnaires.

...We first integrated all effect sizes within a single study by the calculation of means into two effect sizes, one for mental and one for physical health. If the sample size varied between scales of one study, we weighted them for N. Effect sizes obtained in this manner were aggregated across studies by the computation of a weighted mean, where the inverse of the estimated standard deviation for each investigation served as a weight [8].

So, they just split the effect sizes, and do an average of the 2 sets. Nothing more.
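Mechanically, the quoted procedure boils down to an inverse-SD weighted mean of per-study effect sizes. A minimal sketch, with hypothetical numbers (the studies and values are made up for illustration):

```python
def pooled_effect(studies):
    """Aggregate per-study effect sizes with inverse-SD weights,
    mirroring the procedure quoted above.

    Each study is a tuple: (mean effect size already averaged
    within the study, estimated standard deviation)."""
    weights = [1 / sd for _, sd in studies]
    total = sum(w * es for (es, _), w in zip(studies, weights))
    return total / sum(weights)

# two hypothetical studies: (within-study averaged effect size, SD)
print(pooled_effect([(0.5, 0.1), (0.2, 0.2)]))  # -> 0.4
```

The more precise (lower-SD) study gets twice the weight here, pulling the pooled estimate toward its effect size; note that this still says nothing about whether averaging disparate outcome measures within a study was legitimate in the first place.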

I dunno. They don't give any references to papers or textbooks on meta-analysis to justify this procedure. It doesn't sound very kosher to me.

From a statistical point of view, I wouldn't expect this to work very well. I would expect a lot of heterogeneity and a very weak signal. However, they report very strong results with low heterogeneity (which I find pretty surprising). I don't see any obvious way in which this would be "cheating".

Are you worried about something else specific?

I don't see any obvious way in which this would be "cheating".

Oh, that's easy: publication bias. If the original studies report only the measures which reached a cutoff, and the null is always true, then since their measures will generally all be on the same subjects/with the same n, their effect sizes will have to be fairly similar*, and I'd expect the I^2 to be low even as the results are meaningless.

* since p is just a function of sample size & effect size, and the p threshold is fixed by convention at 0.05, and sample size n is pretty much the same across all measures - since why would you recruit a subject and then not get as much data as possible and omit lots of subjects? - only measurements with effect sizes big enough to cross the p with the fixed n will be reported.
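The footnote's logic can be made concrete with a toy simulation: when the null is true everywhere and only measures clearing the fixed significance threshold get reported, the reported effects all cluster just above that threshold, so the pooled estimate is badly biased upward. All numbers here are illustrative assumptions (200 studies, 10 measures each, n = 20 per group, normal approximation):

```python
import math
import random
import statistics

random.seed(0)

n = 20                              # assumed per-group sample size
crit = 1.96 * math.sqrt(2 / n)      # smallest d reaching two-sided p < .05
                                    # (SE of d is roughly sqrt(2/n)); ~0.62

reported = []
for study in range(200):
    for measure in range(10):       # 10 outcome measures, true effect zero for all
        d = random.gauss(0, math.sqrt(2 / n))  # pure sampling error under the null
        if d > crit:                # only "significant" positive results get reported
            reported.append(d)

pooled = statistics.mean(reported)  # naive pool of the published effects
print(pooled)                       # well above 0.62, though every true effect is 0
```

Every surviving effect size sits in a narrow band just above the threshold, which is exactly the "similar effect sizes, low heterogeneity" signature that can masquerade as a strong, consistent finding.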

While if each particular measure was done separately as a bunch of univariate or multivariate meta-analyses, they'd have to get access to the original data or they'd be able to see the publication bias on a measure by measure basis.

Or it might be that each measure has a weighted effect size of zero, it's just that each study is biased towards a different measure, and so its 'overall' estimate is positive even though if we had combined each measure with all its siblings, every single one would net to zero.

Maybe I'm wrong about these speculations. But I hope you see why I feel uncomfortable with this 'lump everything remotely similar together' approach and would like to see what meta-analytic experts say about the approach.

That's a great point, I hadn't been thinking about that. It amplifies the publication bias by a lot.

This might be of interest: I have dedicated 1/4 of my master's thesis in Ethics to drawing a comparison between caffeine and modafinil. I also plan to present this research in the Netherlands later this year. PM or message here in case you want to know more.

Thanks for filling out that wiki page with detail & references.

Probably a very rare adverse effect: I knew someone who found that modafinil would make him sleep for 18 hours. Sorry I don't know the dose.


Immediately after, or as catch-up sleep after a long modafinil-fueled waking period?

Immediately after the first time he tried it. It caught him by surprise. I think he tried modafinil a second time to check and got the same result.

He did look more rested than usual, so it may have been better as well as more sleep.

I think anyone would look more rested than usual if they had just slept for 18 hours.

Because several studies suggest that modafinil may only provide substantial beneficial effects to individuals with low baseline performance, ultimately the big question remains if modafinil can really improve the cognitive performance of already high-functioning, healthy individuals.

Seems that we have few of those here. I wonder whether so8res will tell us whether that is part of his dark arts.

Being the first poster I will try a highly biased poll on this:


High functioning compared to whom? Everyone else or just everyone else who takes modafinil?

As I just used the description from the OP, I'd say compared to the median of the population (e.g. IQ 100).


High-functioning in terms of IQ or in terms of ability to get things done?

(Has anyone come up with a motivation enhancer? Nicotine used to work for me, but not anymore.)

I've experienced very slight motivation-enhancing effects from various stimulants. But the only thing that I've found to really work so far is falling in love. Unfortunately, the side-effects are enormous and it doesn't work under arbitrary conditions, either…

A data point: compared to the median of the population, I'm high-functioning in terms of IQ, but not in terms of the ability to get things done. The few times I've tried modafinil (due to the headaches and nausea I get from it, these have not been more than a handful), it has helped a lot with the latter.

Caffeine acts as a motivation enhancer for me. It reliably raises my mood levels and gets me off the couch.

Well, I take modafinil primarily as a motivation-enhancer.

"Has anyone come up with a motivation enhancer?"

Vyvanse (prescription-only ADD medication) is... almost unbelievably awesome for me there. I suspect it only works if your issue is somewhere in the range of ADD, though, as it doesn't do anything for my motivation if I'm depressed.

I've found that in general, "sustained release" options work a LOT better for motivation. Caffeine helps a tiny bit, but 8-hour sustained-release caffeine can help a lot. My motivation seems to really hate dealing with peaks and valleys throughout the day. Oddly, if I take Vyvanse one day, then skip it the next, my motivation completely crashes, but this doesn't seem to affect the value of Vyvanse for giving me very motivated days - it's the ups and downs within a day, not my long-term variation, that seems to disrupt motivation.

Has anyone come up with a motivation enhancer?

Pain is said to be a very effective one.

Well, at least for me, not really. See also this post.

See also this post.

I am talking literally about physical pain. Not about a general category of negative motivators.

I haven't thought deeply about that, but I would expect primitive things which motivate your lizard brain directly to be considerably more effective than whatever constructs parts of your conscious mind invent to try to motivate other parts.

I would expect that physical pain will only motivate immediately avoidant behaviors and will be as useless as any other kind of pain for helping sustained motivation needed to pursue long-term goals, which is usually where the problem lies. Because the lizard brain doesn't do long-term projects.

Physical pain is also kind of difficult to harness for any practical application to oneself, I suppose...

useless ... to pursue long-term goals

I agree unless you are having difficulty with that first step which starts a journey of a thousand miles.

I agree unless you are having difficulty with that first step which starts a journey of a thousand miles.

When going a journey of a thousand miles it's useful to focus in the direction of your goal and go exactly in the right direction.

Not really, humans rarely have to follow a ballistic trajectory :-) Given the ability to correct mid-course, starting to move in exactly the right direction is unnecessary.

Hm, I believe the creativity required to set up reality in such a way that I feel physical pain only as long as I don't start working on a certain project is beyond me… ;-)


I'm incredibly productive when I'm working towards a pass/fail goal on a deadline and I'm very scared of "fail". Ambiguities in goal and time create problems.


Request to taboo "high-functioning".

Generally I agree. But given that "high-functioning" was used in the OP, we have to use it here to stay comparable.