Originally posted at sandymaguire.me.

I have to get you to drop modesty and say to yourself, "Yes, I would like to do first-class work." Our society frowns on people who set out to do really good work. You're not supposed to; luck is supposed to descend on you and you do great things by chance. Well, that's a kind of dumb thing to say. I say, why shouldn't you set out to do something significant. You don't have to tell other people, but shouldn't you say to yourself, "Yes, I would like to do something significant."

Richard Hamming

I want to talk about impostor syndrome today. If you're unfamiliar with the term, it's this phenomenon where people discount their achievements and feel like frauds---like at any moment, others will realize that they don't belong. People with impostor syndrome are convinced that they've somehow pulled the wool over others' eyes, and have conned their way into their jobs or social circles. And it's very, very common.

On the other end of the spectrum, there's another phenomenon known as Dunning--Kruger. This one is widely reported in the media as "the average person believes they are better than 70% of drivers." They can't all be right---by definition, 50% of people must be worse than the median driver. This too is very common; it seems to hold true in most positive-feeling, everyday domains.

While this is indeed supported by Dunning--Kruger's data, the paper itself focuses on the fact that "the worst performers in a task are often so bad that they have no idea." For example, the least funny people are so tragically unfunny that they wouldn't know humor if it bit them. As a result, unfunny people are entirely blind to their lack of humor:

Participants scoring in the bottom quartile on our humor test not only overestimated their percentile ranking, but they overestimated it by 46 percentile points.

A less well-known finding of Dunning--Kruger is that the best performers will systematically underestimate how good they are, by about 15 percentile points. The proposed reason is that they found the task to be easy, assume others must have also found it easy, and go from there. In other words, top performers are so good that they don't notice many challenges.

It's unfortunate that Dunning--Kruger has been popularized as "most people think they are better than they are." Not only do high performers already underestimate themselves, but those who know about Dunning--Kruger in its popularized form are likely to double-dip, and adjust further downwards to compensate. For example, if you are in fact at the 90th percentile for some skill, due to Dunning--Kruger you will likely estimate yourself to be at the 75th. But if you know that people routinely overestimate their abilities by 20 percentile points, you might drop your estimate down to the 55th in compensation---significantly lower than your true skill level.
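As a worked version of that double-dip (using only the hypothetical numbers above, nothing from the paper):

$$90 \;(\text{true percentile}) \;\xrightarrow{\;-15\text{, DK underestimate}\;}\; 75 \;\xrightarrow{\;-20\text{, popularized-DK correction}\;}\; 55.$$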

If this is true, it would suggest that some of the world's best people will estimate themselves to be worse than the self-evaluations of the world's worst. The takeaway is this: if you're the kind of person who worries about statistics and Dunning--Kruger in the first place, you're already way above average and clearly have the necessary meta-cognition to not fall victim to such things. From now on, unless you have evidence that you're particularly bad at something, I want you to assume that you're 15 percentile points higher than you would otherwise estimate.

The mathematician Richard Hamming is said to have often ruffled feathers by asking his colleagues "what's the most important problem in your field, and why aren't you working on it?" Most people, I'd suspect, would say that the most important problems are too hard to be tackled by the likes of them. That it would take greater minds than theirs to make progress on such things. They give up before having even tried.

Instead, the smartest people I know join Google or go work at B2B startups and are simultaneously bored in their day jobs, feel like they're frauds while they're there, and don't have enough energy to work on personal projects after hours. But at least they're making wicked-big paychecks for themselves. And for their less-qualified leadership.

The best minds of my generation are thinking about how to make people click ads.

Jeff Hammerbacher

Here's an interesting thought experiment. If you randomly swapped places with someone for a week---you doing their job and they doing yours---how would it turn out? If you're a competent programmer with passable social skills, I suspect you would be a lot more successful in that week than your replacement. Most things just aren't that hard, and a good percentage of the people doing those jobs are phoning it in anyways.

The bar on competency is tragically low. And yet the world revolves.

If you agree with this line of reasoning, it means the world is just oozing with potential, ready for the taking. Most of the world's most competent people are unaware of just how good they are. Most things really aren't as hard as getting through that graph-theory class you took, and don't take nearly as much effort. The world is being run by people who are too incompetent to know it; people who are only in power because they're the ones who showed up, and because showing up is most of the battle.

Which leads us to the inescapable conclusion that this world we live in is particularly amenable to change. That if you're willing to trust in your own instinct and tackle hard-seeming problems, you're going to experience literally unbelievable amounts of success. Everyone else is deferring to better minds. So be the change you want to see in the world, because we've been waiting for you.

31 comments
Participants scoring in the bottom quartile on our humor test (...) overestimated their percentile ranking
A less well-known finding of Dunning--Kruger is that the best performers will systematically underestimate how good they are, by about 15 percentile points.

Isn't this exactly what you'd expect if people were good Bayesians receiving scarce evidence? Everyone starts out assuming that they're in the middle, and as they find something easy or hard, they gradually update away from their prior. If they don't have good information about how good other people are, they won't update very much.

If you then look at the extremes, the very best and the very worst people, of course you're going to see that they should extremify their beliefs. But if everyone followed that advice, you'd ruin the accuracy of the people more towards the middle, since they haven't received enough evidence to distinguish themselves from the extremes.

(Similarly, I've heard that people often overestimate their ability on easy tasks and underestimate their ability on difficult tasks, which is exactly what you'd expect if they had good epistemics but limited evidence. If task performance is a function of task difficulty and talent for the task, and the only thing you can observe is your performance, then believing that you're good at tasks you do well at and bad at tasks you fail at is the correct thing to do. As a consequence, saying that people overestimate their driving ability doesn't tell you that much about the quality of their epistemics, in isolation, because they might be following a strategy that optimises performance across all tasks.)

The finding that people at the bottom overestimate their position by 46 percentile points is somewhat more extreme than this naïve model would suggest. As you say, however, it's easily explained when you take into account that your ability to judge your performance on a task is correlated with your performance on that task. Thus, the people at the bottom are just receiving noise, so on average they stick with their prior and judge that they're about average.
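A toy simulation makes the shrinkage story concrete (this is my own sketch, not the paper's model; the noise level and the shrinkage weight below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_pct = rng.uniform(0, 100, n)          # each agent's actual percentile
signal = true_pct + rng.normal(0, 30, n)   # noisy observation of own ability (noise level assumed)

# Shrink the noisy signal halfway toward the population prior (the 50th percentile).
# The 0.5 weight is also assumed; noisier evidence would justify more shrinkage.
estimate = 0.5 * 50 + 0.5 * signal

bottom = true_pct < 25
top = true_pct >= 75
print("bottom quartile over-estimates by", round((estimate[bottom] - true_pct[bottom]).mean(), 1))
print("top quartile under-estimates by  ", round((true_pct[top] - estimate[top]).mean(), 1))
```

With these made-up parameters both errors come out around 19 percentile points; the point is only that the bottom overestimates and the top underestimates without anyone reasoning badly.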

Of course, just because some of the evidence is consistent with people having good epistemics doesn't mean that they actually do have good epistemics. I haven't read the original paper, but it seems like people at the bottom actually think that they're a bit above average, which seems like a genuine failure, and I wouldn't be surprised if there are more examples of such failures which we can learn to correct. Impostor syndrome also seems like a case where people predictably fail in fixable ways (since they'd do better by estimating that they're of average ability in their group, rather than even trying to update on evidence).

But I do think that people often are too quick to draw conclusions from looking at a specific subset of people estimating their performance on a specific task, without taking into account how well their strategy would do if they were better or worse, or were doing a different task. This post fixes some of those problems, by reminding us that everyone lowering the estimate of their performance would hurt the people at the top, but I'm not sure if it correctly takes into account how the people in the middle of the distribution would be affected.

(The counter-argument might be that people who know about Dunning-Kruger are likely to be at the top of any distribution they find themselves in, but this seems false to me. I'd expect a lot of people to know about Dunning-Kruger (though I may be in a bubble), and there are lots of tasks where ability doesn't correlate much with knowing about Dunning-Kruger. Perhaps humor is an example of this.)

In other words, regression to the mean. The predictions form a line, with a positive slope. Less than 1, but only perfect predictions would have slope 1. The intercept is high, which is overconfidence. But the intercept is a statement about the whole population, not about the lowest bin.
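A back-of-the-envelope version of that line, fitted only to the two numbers quoted in the post and assuming the bottom and top quartiles sit near the 12.5th and 87.5th percentiles (so not the paper's actual regression):

$$\hat{p} \approx a + b\,p, \qquad \hat{p} - p = a + (b - 1)\,p$$

$$a + (b-1)(12.5) = 46, \quad a + (b-1)(87.5) = -15 \;\;\Rightarrow\;\; b \approx 0.19,\; a \approx 56.$$

A single line with slope well below 1 and a high intercept reproduces both the 46-point overestimate at the bottom and the 15-point underestimate at the top.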

Here are the graphs. A lot of information has been destroyed by binning them, but it doesn't sound like DK thought that information was relevant or made use of it:

The second is different. That better matches the cartoons one finds in an image search for Dunning-Kruger. But I'm not sure it matches this post. (The third could be described as yet another shape, but I'd classify it as a line with a very low positive slope.)

The fourth graph is of a more complicated intervention. It seems like it has the opposite message of this post, namely it finds that the 4th quartile is better calibrated than the 3rd.

I like and endorse the general theme of this post, but have some issues with the details.

The takeaway is this: if you're the kind of person who worries about statistics and Dunning--Kruger in the first place, you're already way above average and clearly have the necessary meta-cognition to not fall victim to such things.

I feel like this is good motivation but bad world-modelling. Two important ways in which it fails:

  • Social interactions. You gave the example of people not really knowing how funny they are. I don't think worrying about statistics in general helps with this, because this might just not be the type of thing you've considered as a failure mode, and also because it's very difficult to substitute deliberate analysis for buggy social intuitions.
  • People being bad at philosophy. There are very many smart people who confidently make ridiculous arguments - people smart enough to understand Dunning-Kruger, but who either think they're an exception, or else pay lip service to it and then don't actually process any change in beliefs.
The world is being run by people who are too incompetent to know it; people who are only in power because they're the ones who showed up, and because showing up is most of the battle.

I dislike lines of argument which point at people on top of a pile of utility and call them incompetent. I think it is plausibly very difficult to get to the top of the society, but that the skills required are things which are really difficult to measure or even understand properly, like "hustle" or "ambition" or "social skills" or "pays less attention to local incentive gradients" or "has no wasted mental motion in between deciding that x is a good idea and deciding to do x".

From now on, unless you have evidence that you're particularly bad at something, I want you to assume that you're 15 percentile points higher than you would otherwise estimate.

Nit: I prefer using standard deviations instead of percentile points when talking about high-level performance, because it better allows us to separate people with excellent skill from people with amazing skill. Also because "assume that you're 15 percentile points higher" leaves a lot of people above 100%.
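To illustrate the nit (the 95th-percentile starting point is just an example):

```python
from scipy.stats import norm

p = 0.95                   # someone at the 95th percentile
print(p + 0.15)            # 1.10 -> the "110th percentile", which doesn't exist
z = norm.ppf(p)            # ~1.64 standard deviations above the mean
print(norm.cdf(z + 0.5))   # a +0.5 SD bump instead -> ~0.984, still a valid percentile
```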

This is good feedback---thanks!

I observe that the idea of incorrectly believing I'm bad at something doesn't disturb me much, while the idea of incorrectly believing I'm good at something is mortifying.

I smell some kind of social signaling here.

I believe Yudkowsky discussed this some in his writings against Modesty, in Inadequate Equilibria. [recommendation due to relevance]

Have tried to tackle hard-seeming problems. They turned out even harder than they seemed.

I've had the opposite experience (though restricted to problems that someone at all has solved/understood). There are multiple fields (such as AI/ML, cryptocurrency, and zero-knowledge proofs) that I at some point thought were mysteriously difficult. In each case, there is a finite set of important concepts (less than 30) to learn, and it is possible to do cutting-edge research from there.

As far as I can tell, nothing is magic. Nothing that anyone can do is mysteriously difficult. Complicated things that anyone understands decompose into a finite number of interacting parts, each of which (as well as their interactions) can be modeled. Skills people have (even ones they don't understand) can be learned through experience and guides; getting great can take years, but getting adequate usually only takes months. Actually doing any of these requires willingness to understand and think for yourself. If you don't know if you can understand a field, you can set aside an amount of time, such as one month, to intensively study it, and see what happens.

(Of course, some things can be so hard that no one knows how to do them, and these actually can be mysteriously difficult, though won't be with the benefit of hindsight if someone actually does them. And all of this is based on my own personal experience.)

Glad that this approach worked out for you! It's an amazing feeling when you finally solve or get something that looked so hard initially. I won't deny that this has happened to me too, but mostly in cases I knew for sure I could handle with enough effort. I did my PhD in General Relativity, and had to go through a few proofs that required significantly more background than I had at the time. I was able to master the necessary basics of Algebraic Topology well enough to add my own small theorem on top of what was already in my research area, yet it was excruciatingly slow and painful to get to that point, and not a lot of fun. I had to abandon any larger ambitions in the area. Similarly, I was used to getting As and A+s in almost all my undergrad and grad classes, yet when I hit advanced grad classes, like string theory and topics in QFT, I was lucky to get through them, even though other grad students seemed to have little difficulty there.

In general, I have found that in many areas, especially in math, everyone has their threshold of ability. Below the threshold, the effort required scales basically linearly with the amount of material. Past that threshold, any extra learning becomes exponentially more difficult. I mean "exponentially" in the mathematical sense, not in the colloquial one. Gotta know your limits.
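One purely schematic way to write that claim down (my formalization, not the commenter's; T is the personal threshold, m the amount of material, and c, k person-specific constants):

$$\text{effort}(m) \;\approx\; \begin{cases} c\,m, & m \le T \\ c\,T\,e^{\,k\,(m - T)}, & m > T. \end{cases}$$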

Can you make your advice more specific? I work for Google and sometimes write LW posts about decision theory. What's the world waiting for me to do instead?

Try solving important philosophical (ETA: this was in response to a previous version of your comment that said "MIRI math" instead of "decision theory"), social, or strategic problems. Quit working for Google and become an entrepreneur instead.

1 - already trying. 2 - I have five kids, feel a bit insecure about quitting work.

1 - already trying

If you haven't already, try one of the other problem categories. There's a reasonable chance that you can be even better at them relative to other humans, even if that currently seems unlikely to you.

2 - I have five kids, feel a bit insecure about quitting work.

You could try to line up some angel investors before you quit so you can keep drawing a salary. If your business fails, it should be pretty trivial for someone like you to find another well paying job.

I, too, have some objections, including issues with the underlying model on which this post is written.

First off, impostor syndrome and self-esteem. It's mostly an endocrine "problem"; it's the body's way of adapting status negotiation to the social environment. It's almost entirely determined by testosterone levels in males (and probably in females as well). Of course, winning social competitions results in a boost in T. But the conscious-level evaluation of where you rank among others, of what percentile you are at something, is completely different from the underlying, system-1 self-evaluation. Of course, gathering evidence that you are good at something will update your system-1 evaluation automatically (e.g. having produced something you are proud of, receiving praise and admiration that you perceive as genuine, etc.).

The mechanism by which this happens is fairly complex. Testosterone also has a strong anxiolytic effect, by regulating GABA, the main inhibitory neurotransmitter in the mammalian central nervous system. This also has a very strong impact on stress regulation, the organism's ability to cope with stress. The result is that people with higher T levels are never concerned with "how good they are", or where they rank.

> if you're the kind of person who worries about statistics and Dunning--Kruger in the first place, you're already way above average and clearly have the necessary meta-cognition to not fall victim to such things

Non sequitur. Just because you're interested in statistics doesn't automatically make you good at every other field. This reminds me of all those "Why aren't rationalists winning" posts (e.g. https://www.lesswrong.com/posts/LgavAYtzFQZKg95WC/extreme-rationality-it-s-not-that-great). Being more rational doesn't give you more domain knowledge in specific domains.

But I would also not pay too much attention to social-science studies such as that one, which probe surface-level effects in human populations. These things could vary significantly across cultures, and a lower-level understanding of self-esteem sort of dissolves the question. Besides, you don't know whether you should update up or down.

> the smartest people I know join Google or go work at B2B startups and are simultaneously bored in their day jobs

If you consider the risk-reward gradient, joining Google or a similar high-paying job is a great choice in terms of maximizing status and money, even though it is a local maximum. And these are the basic human drives. Even on HN, the HQ of people with entrepreneurial spirit, people generally recommend getting a job at Google or a similar bigcorp early in your career if you have the opportunity, for several reasons.

Richard Hamming's advice of tackling the biggest problem is, I think, applicable only to tenured professors. To anyone else it is a recipe for disaster: by taking on a huge-risk/huge-reward bet without significant wealth or social capital, you are just getting yourself out of the gene pool. The thing is, gut feeling has been shaped by generations of evolution and ended up being a good heuristic. If your gut is telling you not to attempt something too risky, to try something smaller first, it is because that's the natural sequence in which things are done, including by world-leading scientists. People like Einstein didn't start with General Relativity. By starting with small things and succeeding, you become more confident and attempt slightly bigger problems, and so on. It's an iterative cycle.

> the least funny people are so tragically unfunny that they wouldn't know humor if it bit them

This isn't really relevant in the grand scheme of things, but humor itself is highly dependent on a person's social status. How funny you perceive someone to be is directly influenced by their status, and obviously by how confident they are, which again is directly related to T levels (citation needed).

> Most things just aren't that hard, and a good percentage of the people doing those jobs are phoning it in anyways.

> The bar on competency is tragically low. And yet the world revolves.

This is somewhere you are perfectly right. Having seen how dysfunctional most organisations are, it's very surprising that most of them work at all. Even at places like Google I've seen way too many incompetent people getting hired, despite the place having a certain reputation (more so in the past).

> The world is being run by people who are too incompetent to know it; people who are only in power because they're the ones who showed up, and because showing up is most of the battle

This is not at all how I model the world. Sure, if your self-esteem is too low to even try, then of course nobody (other than your family) will hand power to you on a platter. But other than that, people who end up in power don't end up there accidentally, for reasons similar to why you won't find a $20 bill lying around in Grand Central (although I actually think I have seen that happen, several times).

Overall, this whole post clearly sends that motivational vibe, while also seemingly containing too many applause lights. Which is a good or a bad thing, depending on how you look at it. It could probably help some people.


==============================
My personal experience with this phenomenon was related to burnout. After some structural changes to the organization (software $bigcorp), which led to the creation of a highly politicized environment, I was exposed to a lot more stress. Gradually, after about a year, I just couldn't take it anymore.

I was having all sorts of symptoms by then, such as constant fatigue and concentration issues; basically I was close to non-functioning. I knew I had to get out, and decided to take up one of the job offers I had at the time (higher pay, in a field I was very interested in). After the interview, they told me I was the most qualified engineer they had ever interviewed and that they would be really excited to have me there. And yet, I barely lasted a couple of days.

I was barely done setting up my hardware when I started experiencing strong panic attacks, something that had never happened to me before. I had to get out for a walk; I simply couldn't sit in a chair, or the world felt like it was crashing down on me. Even though I was objectively better qualified than most of my peers, I had extremely low self-esteem. I could point to hundreds of objective things that would justify my competence (projects I'd done in the past, winning several significant competitive programming competitions, graduating top of my class, etc.). Of course, I took some time off to figure out what was wrong with me and get myself into a functional state again. I eliminated all sorts of hypotheses, including doing sleep studies (my sleep was terrible at the time, and I was fairly sure I had something like sleep apnea, although I was lean), and still nothing; I was tired all day. Then I did a hormone panel, and my T levels came out below the chart, worse than you'd expect from a healthy 90-year-old. I was 23 at the time, which is around when T levels should peak. No other significant issue was found. Anxiety was also a big issue at the time.

This is probably good general advice, but it's a different matter when there is evidence that points to being an actual impostor. For example, when I write novels that do not sell, or blog posts that get downvoted to oblivion, it is difficult to get honest feedback on how I might improve my writing. The feedback I get is almost always positive, but reviews are self-selected, because people are rarely motivated to review something unless they especially like it. Plus, politeness prohibits people from being harsh when you ask for feedback. For these reasons, I am more apt to trust the hard metrics and view myself as a poor writer who has managed to fool a few people. Improving is a far more difficult task in light of this. I guess my point is that knowing there may be a blind spot means you can adjust for it, but it is also an opportunity to actually check it.

See, the problem is now that I've also internalized this (seemingly true) lesson, the +15% might double-boost my ass numbers.

But maybe if we accumulate enough lessons we can get increasingly close to the truth by adding these "higher order terms"?

I don't think so - the error bars do not necessarily diminish. For example:

  • Ass number for drawing ability percentile: ~70%
  • Dunning Kruger correction: ~50%
  • Double-dip correction: ~65%

Did I do it right? I have no idea. Every step might have already been taken into account in the first asstimate. Every system-2 patch that we discover might have immediately patched system-1.

One (admittedly lazy) way out is to chuck all context-sensitive formal rules like 'add/subtract X%' and leave it entirely to system-1: play calibration games for skill-percentiles.

Please link more of your posts here. I looked through the history on your blog and there are quite a few that I think would be relevant and useful for people here. In particular, I think people would get a lot out of the posts about how to make friends. Some other posts have titles that look interesting too, but I haven't had time to read them yet.

The bar on competency is tragically low. And yet the world revolves.

You buried the lede. This entire post (and most of our effort, I believe) should be in pursuit of figuring out how to raise the sanity waterline (and/or "total competency sum") of conscious entities, not about estimating one's own competency higher.

Some posts can focus on raising the sanity waterline. Other posts can be motivational and targeted at people's incorrect self-assessments. Note that successfully doing the latter is often quite a good way of making people achieve better outcomes.

Raising the sanity waterline is an instrumental goal. A deeper goal is "have more [effective mindpower] applied where it is needed". This can be achieved by increasing the [effective mindpower] of each person, or by routing people with already-high [effective mindpower] to more important and difficult work. To do the latter without an authoritarian system (which would probably fail anyway), we need people with high [effective mindpower] to be aware of this so that they can look for important and difficult work. The OP makes progress in the area of "increasing the accuracy of beliefs about one's capabilities".