Author's note: This essay was written as part of an effort to say more of the simple and straightforward things loudly and clearly, and to actually lay out arguments even for concepts which feel quite intuitive to a lot of people, for the sake of those who don't "get it" at first glance. If your response to the title of this piece is "Sure, yeah, makes sense," then be warned that the below may contain no further insight for you.
Premise 1: Deltas between one’s beliefs and the actual truth are costly in expectation
(because the universe is complicated and all truths interconnect; because people make plans based on their understanding of how the world works and if your predictions are off you will distribute your time/money/effort/attention less effectively than you otherwise would have, according to your values; because even if we posit that there are some wrong beliefs that somehow turn out to be totally innocuous and have literally zero side effects, we are unlikely to correctly guess in advance which ones are which)
Premise 2: Humans are meaningfully influenced by confidence/emphasis alone, separate from truth
(probably not literally all humans all of the time, but at least in expectation, in the aggregate, for a given individual across repeated exposures or for groups of individuals; humans are social creatures who are susceptible to e.g. halo effects when not actively taking steps to defend against them, and who delegate and defer and adopt others’ beliefs as their tentative answer, pending investigation (especially if those others seem competent and confident and intelligent, and there is in practice frequently a disconnect between the perception of competence and its reality); if you expose 1000 randomly-selected humans to a debate between a quiet, reserved person outlining an objectively correct position and a confident, emphatic person insisting on an unfounded position, many in that audience will be net persuaded by the latter and others will feel substantially more uncertainty and internal conflict than the plain facts of the matter would have left them feeling)
Therefore: Overconfidence will, in general and in expectation, tend to impose costs on other people, above and beyond the costs to one’s own efficacy, via its predictable negative impact on the accuracy of those other people’s beliefs, including further downstream effects of those people’s beliefs infecting still others’ beliefs.
I often like to think about the future, and how human behavior in the future will be different from human behavior in the past.
In Might Disagreement Fade Like Violence? Robin Hanson posits an analogy between the “benefits” of duels and fights, as described by past cultures, and the benefits of disagreement as presently described by members of modern Western culture. He points out that foreseeable disagreement, in its present form, doesn’t seem particularly aligned with the goal of arriving at truth, and envisions a future where the other good things it gets us (status, social interaction, a medium in which to transmit signals of loyalty and affiliation and intelligence and passion) are acquired in less costly ways, and disagreement itself has been replaced by something better.
Imagine that we saw disagreement as socially destructive, to be discouraged. And imagine that the few people who still disagreed thereby revealed undesirable features such as impulsiveness and ignorance. If it is possible to imagine all these things, then it is possible to imagine a world which has far less foreseeable disagreement than our world, comparable to how we now have much less violence than did the ancient farming world.
When confronted with such an imagined future scenario, many people today claim to see it as stifling and repressive. They very much enjoy their freedom today to freely disagree with anyone at any time. But many ancients probably also greatly enjoyed the freedom to hit anyone they liked at any time. Back then, it was probably the stronger, better fighters, with the most fighting allies, who enjoyed this freedom most. Just as today it is probably the people who are best at arguing to make their opponents look stupid who most enjoy our freedom to disagree. Doesn’t mean this alternate world wouldn’t be better.
Reading Hanson’s argument, I was reminded of a similar point made by a colleague, that the internet in general and Wikipedia in particular had fundamentally changed the nature of disagreement in (at least) Western culture.
There is a swath of territory in which the least-bad social technology we have available is “agree to disagree,” i.e. each person thinks that the other is wrong, but the issue is charged enough and/or intractable enough that they are socially rewarded for choosing to disengage, rather than risking the integrity of the social fabric trying to fight it out.
And while the events of the past few years have shown that widespread disagreement over checkable truth is still very much a thing, there’s nevertheless a certain sense in which people are much less free than they used to be to agree-to-disagree about very basic questions like "is Brazil’s population closer to 80 million or 230 million?" There are some individuals that choose to plug their ears and deny established fact, but even when these individuals cluster together and form echo chambers, they largely aren’t given social license by the population at large—they are docked points for it, in a way that most people generally agree not to dock points for disagreement over murkier questions like “how should people go about finding meaning in life?”
Currently, there is social license for overconfidence. It’s not something people often explicitly praise or endorse, but it’s rarely substantively punished (in part because the moment when a person reaps the social benefits of emphatic language is often quite distant from the moment of potential reckoning). More often than not, overconfidence is a successful strategy for extracting agreement and social support in excess of the amount that an omniscient neutral observer would assign.
([Gestures vaguely at everything]. I confidently assert that clear and substantial support for this claim exists and is not hard to find (one extremely easy example is presidential campaign promises; we currently have an open Guantánamo Bay facility and no southern border wall), but I'm leaving it out to keep the essay relatively concise. I recommend consciously noting that the assertion has been made without being rigorously supported, and flagging it accordingly.)
Note that the claim is not “overconfidence always pays off” or “overconfidence never gets punished” or “more overconfidence is always a good thing”! Rather, it is that the pragmatically correct amount of confidence to project, given the current state of social norms and information flow, is greater than your true justified confidence. There are limits to the benefits of excessively strong speech, but the limits are (apparently) shy of e.g. literally saying, on the record, “I want you to use my words against me, [in situation X I will take action Y],” and then doing the exact opposite a few years later.
Caveat 1: readers may rightly point out that the above quote and subsequent behavior of Lindsey Graham took place within a combative partisan context, and is a somewhat extreme example when we’re considering society-as-a-whole. Average people working average jobs are less likely to get away with behavior that blatant. But I’m attempting to highlight the upper bound on socially-sanctioned overconfidence, and combative partisan contexts are a large part of our current society that it would feel silly to exclude as if they were somehow rare outliers.
Caveat 2: I've been equivocating between epistemic overconfidence and bold/unequivocal/hyperbolic speech. These are in fact two different things, but they are isomorphic in that you can convert any strong claim such as Graham’s 2016 statement into a prediction about the relative likelihood of Outcome A vs. Outcome B. One of the aggregated effects of unjustifiably emphatic and unequivocal speech across large numbers of listeners is a distortion of those listeners’ probability spread—more of them believing in one branch of possibility than they ought, and than they would have if the speech had been more reserved. There are indeed other factors in the mix (such as tribal cohesion and belief-as-attire, where people affirm things they know to be false for pragmatic reasons, often without actually losing sight of the truth), but the distortion effect is real. Many #stopthesteal supporters are genuine believers; many egalitarians are startled to discover that the claims of the IQ literature are not fully explained away by racism, etc.
In short, displays of confidence sway people, independent of their truth (and often, distressingly, even independent of a body of evidence against the person projecting confidence). If one were somehow able to run parallel experiments in which 100 separate pitches/speeches/arguments/presentations/conversations were each run twice, the first time with justified confidence and emphasis and the second with 15% "too much" confidence and emphasis, I would expect the latter set of conversations to be substantially more rewarding for the speaker overall. Someone seeking to be maximally effective in today’s world would be well advised to put nonzero skill points into projecting unearned confidence—at least a little, at least some of the time.
This is sad. One could imagine a society that is not like this, even if it’s hard to picture from our current vantage point (just as it would have been hard for a politician in Virginia in the early 1700s to imagine a society in which dueling is approximately Not At All A Thing).
I do not know how to get there from here. I am not recommending unilateral disarmament on the question of strategic overconfidence. But I am recommending the following, as preliminary steps to make future improvement in this domain slightly more likely:
0. Install a mental subroutine that passively tracks overconfidence...
...particularly the effects it has on the people and social dynamics around you (since most of my audience is already informally tracking the effects of their own overconfidence on their own personal efficacy). Gather your own anecdata. Start building a sense of this as a dynamic that might someday be different, à la dueling, so that you can begin forming opinions about possible directions and methods of change (rather than treating it as something that shall-always-be-as-it-always-has-been).
1. Recognize in your own mind that overconfidence is a subset of deceit...
...as opposed to being in some special category (just as dueling is a subset of violence). In particular, recognize that overconfidence is a behavioral pattern that people are vulnerable to, and can choose to indulge in more or less frequently, as opposed to an inescapable reflex or inexorable force of nature (just as violence is a behavioral pattern over which we have substantial individual capacity for control). Judge overconfidence (both in yourself and others, both knowing and careless) using similar criteria to those you use to judge deceit. Perhaps continue to engage in it, in ways that are beneficial in excess of their costs, but do not confuse "net positive" with "contains no drawbacks," and do not confuse "what our culture thinks of it" with "what it actually is." Recognize the ways in which your social context rewards you for performative overconfidence, and do what you can to at least cut back on the indulgence, if you can't eschew it entirely ("if you would go vegan but you don't want to give up cheese, why not just go vegan except for cheese?"). Don't indulge in the analogue of lies-by-omission; if you can tell that someone seems more convinced by you than they should be, at least consider correcting their impression, even if their convinced-ness is convenient for you.
2. Where possible, build the habit of being explicit about your own confidence level...
...the standard pitch here is "because this will make you yourself better at prediction, and give you more power over the universe!" (which, sure, but also the degree matters: does ten hours of practice make you .01% more effective or 10% more effective?). I want to add to that motivation "and also because you will contribute less to the general epistemic shrapnel being blasted in every direction more or less constantly!" Reducing this shrapnel is a process with increasing marginal returns—if 1000 people in a tight-knit community are all being careless with their confidence, the first to hold themselves to a higher standard scarcely improves the society at all, but the hundredth is contributing to a growing snowball, and by the time only a handful are left, each new convert is a massive reduction in the overall problem.
Practice using numbers and percentages, and put at least a one-time cursory effort into calibrating that usage, so that when your actual confidence is "a one-in-four chance of X" you can convey that confidence precisely, rather than saying largely contentless phrases like "a very real chance." Practice publicly changing your mind and updating your current best guesses. Practice explicitly distinguishing between what seems to you to be likely, what seems to you to be true, and what you are justified in saying you know to be true. Practice explicitly distinguishing between doxa, episteme, and gnosis, or in more common terms, what you believe because you heard it, what you believe because you can prove it, and what you believe because you experienced it.
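The "one-time cursory effort" at calibration mentioned above can be sketched in a few lines of code. The Brier score used here is a standard scoring rule for probabilistic forecasts; the particular track-record numbers and the `brier_score` helper are illustrative assumptions, not anything from the essay:

```python
# A one-time calibration check: record predictions as (stated probability,
# actual outcome) pairs, then score them. Lower Brier score = better
# probability estimates; comparing against an "always say 50%" baseline
# shows whether your stated numbers carry real information.

def brier_score(predictions):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical track record: each entry is (confidence stated, what happened).
track_record = [
    (0.9, 1), (0.8, 1), (0.75, 0), (0.6, 1),
    (0.5, 0), (0.3, 0), (0.25, 1), (0.1, 0),
]

my_score = brier_score(track_record)
baseline = brier_score([(0.5, o) for _, o in track_record])

print(f"your Brier score:   {my_score:.3f}")
print(f"coin-flip baseline: {baseline:.3f}")
# If your score beats the baseline, phrases like "a one-in-four chance"
# are carrying real information; if not, they're closer to noise.
```

Even a toy exercise like this makes the difference between "a very real chance" and "a one-in-four chance" concrete: only the latter can ever be scored.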
3. Adopt in your own heart a principle of adhering to true confidence...
...or at least engaging in overconfidence only with your eyes open, such that pushback of the form "you're overconfident here" lands with you as a cooperative act, someone trying to help you enact your own values instead of someone trying to impose an external standard. This doesn't mean making yourself infinitely vulnerable to attacks-in-the-guise-of-feedback (people can be wrong when they hypothesize that you're overconfident, and there are forms of pushback that are costly or destructive that you are not obligated to tolerate, and you can learn over time that specific sources of pushback are more or less likely to be useful), but it does mean rehearsing the thought "if they're right, I really want to know it" as an inoculation against knee-jerk dismissiveness or defensiveness.
4. Don't go around popping bubbles...
...in which the local standards are better than the standards of the culture at large. I have frequently seen people enter a promising subculture and drag it back into the gutter under the guise of curing its members of their naïveté, and forearming them against a cruel outside world that they were in fact successfully hiding from. I've also witnessed people who, their self-esteem apparently threatened by a local high standard, insisted that it was all lies and pretense, and that "everybody does X," and who then proceeded to deliberately double down on X themselves, successfully derailing the nascent better culture and thereby "proving their point." I myself once made a statement that was misinterpreted as being motivated primarily by status considerations, apologized and hastened to clarify and provide an alternate coherent explanation, and was shot down by a third party who explicitly asserted that I could not opt out of the misinterpretation while simultaneously agreeing that the whole status framework was toxic and ought to go.
When society improves, it's usually because a better way of doing things incubated in some bubble somewhere until it was mature enough to germinate; if you are fortunate enough to stumble across a fledgling community that's actually managed to relegate overconfidence (or any other bad-thing-we-hope-to-someday-outgrow) to the same tier as anti-vax fearmongering, maybe don't go out of your way to wreck it.
To reiterate: the claim is not that any amount of overconfidence always leads to meaningful damage. It's that a policy of indulging in and tolerating overconfidence at the societal level inevitably leads to damage over time.
Think about doping, or climate change—people often correctly note that it's difficult or impossible to justify an assertion that a given specific athletic event was won because of doping, or that a given specific extreme weather event would not have happened without the recent history of global warming. Yet that does not weaken our overall confidence that drugs give athletes an unfair edge, or that climate change is driving extreme weather in general. Overconfidence deals its damage via a thousand tiny cuts to the social fabric, each one seeming too small in the moment to make a strong objection to (but we probably ought to anyway).
It's solidly analogous to lying, and causes similar harms: like lying, it allows the speaker to reap the benefits of living in a convenient World A (that doesn't actually exist), while only paying the costs of living in World B. It creates costs, in the form of misapprehensions and false beliefs (and subsequent miscalibrated and ineffective actions) and shunts those costs onto the shoulders of the listeners (and other people downstream of those listeners). It tends to most severely damage those who are already at the greatest disadvantage—individuals who lack the intelligence or training or even just the spare time and attention to actively vet new claims as they're coming in. It's a weapon that grows more effective the more desperate, credulous, hopeful, and charitable the victims are.
This is bad.
Not every instance of overconfidence is equally bad, and not every frequently-overconfident person is equally culpable. Some are engaging in willful deception, others are merely reckless, and still others are trying their best but missing the mark. The point is not to lump "we won the election and everyone knows it" into the same bucket as "you haven't seen Firefly? Oh, you would love Firefly," but merely to acknowledge that they're both on the same spectrum. That while one might have a negative impact of magnitude 100,000 and the other of magnitude 0.01, those are both negative numbers.
That is an important truth to recognize, in the process of calibrating our response. We cannot effectively respond to what we don't let ourselves see, and it's tempting to act as if our small and convenient overconfidences are qualitatively different from those of Ponzi schemers and populist presidents.
But they aren't. Overconfidence can certainly be permissible and forgivable. In some strategic contexts, it may be justified and defensible. But every instance of it is like the cough of greenhouse gases from starting a combustion engine. Focus on the massive corporate polluters rather than trying to shame poor people who just need to get to work, yes, but don't pretend that the car isn't contributing, too.
It's unlikely that this aspect of our culture will change any time soon. We may never manage to outgrow it at all. But if you're looking for ways to be more moral than the culture that raised you, developing a prosocial distaste for overconfidence (above and beyond the self-serving one that's already in fashion) is one small thing you might do.
Author's note: Due to some personal considerations, I may not actively engage in discussion below. This feels a little rude/defecty, but on balance I figured LessWrong would prefer to see this and be able to wrestle with it without me, than to not get it until I was ready to participate in discussion (which might mean never).
This comment is not only about this post, but is also a response to Scott's model of Duncan's beliefs about how epistemic communities work, and a couple of Duncan's recent Facebook posts. It is also a mostly unedited rant. Sorry.
I grant that overconfidence is in a similar reference class as saying false things. (I think there is still a distinction worth making, similar to the difference between lying directly and trying to mislead by saying true things, but I am not really talking about that distinction here.)
I think society needs to be robust to people saying false things, and thus have mechanisms that prevent those false things from becoming widely believed. I think that as little as possible of that responsibility should be placed on the person saying the false things, in order to make it more strategy-proof. (I think that it is also useful for the speaker to help by trying not to say false things, but I am mostly putting the responsibility on the listener.)
I think there should be pockets of society, (e.g. collections of people, specific contexts or events) that can collect true beliefs and reliably significantly decrease the extent to which they put trust in the claims of people who say false things. Call such contexts "rigorous."
I think that it is important that people look to the output of these rigorous contexts when e.g. deciding on COVID policy.
I think it is extremely important that the rigorous pockets of society are not "everyone in all contexts."
I think that society is very much lacking reliable rigorous pockets.
I have this model where in a healthy society, there can be contexts where people generate all sorts of false beliefs, but also sometimes generate gold (e.g. new ontologies that can vastly improve the collective map). If this context is generating a sufficient supply of gold, you DO NOT go in and punish their false beliefs. Instead, you quarantine them. You put up a bunch of signs that point to them and say e.g. "80% boring true beliefs, 19% crap, 1% gold," then you have your rigorous pockets watch them, and try to learn how to efficiently distinguish between the gold and the crap, and maybe see if they can generate the gold without the crap. However sometimes they will fail and will just have to keep digging through the crap to find the gold.
One might look at lesswrong, and say "We are trying to be rigorous here. Let's push stronger on the gradient of throwing out all the crap." I can see that. I want to be able to say that. I look at the world, and I see all the crap, and I want there to be a good pocket that can be about "true=good", "false=bad", and there isn't one. Science can't do it, and maybe lesswrong can.
Unfortunately, I also look at the world and see a bunch of boring processes that are never going to find gold. Science can't do it, and maybe lesswrong can.
And, maybe there is no tradeoff here. Maybe it can do both. Maybe at our current level of skills, we find more gold in the long run by being better at throwing out the crap.
I don't know what I believe about how much tradeoff there is. I am writing this, and I am not trying to evaluate the claims. I am imagining inhabiting the world where there is a huge trade off. Imagining the world where lesswrong is the closest thing we have to being able to have a rigorous pocket of society, but we have to compromise, because we need a generative pocket of society even more. I am overconfidently imagining lesswrong as better than it is at both tasks, so that the tradeoff feels more real, and I am imagining the world failing to pick up the slack of whichever one it lets slide. I am crying a little bit.
And I am afraid. I am afraid of being the person who overconfidently says "We need less rigor," and sends everyone down the wrong path. I am also afraid of being the person who overconfidently says "We need less rigor," and gets flagged as a person who says false things. I am not afraid of saying "We need more rigor." The fact that I am not afraid of saying "We need more rigor" scares me. I think it makes me feel that if I look too closely, I will conclude that "We need more rigor" is true. Specifically, I am afraid of concluding that and being wrong.
In my own head, I have a part of me that is inhabiting the world where there is a large tradeoff, and we need less rigor. I have another part that is trying to believe true things. The second part is making space for the first part, and letting it be as overconfident as it wants. But it is also quarantining the first part. It is not making the claim that we need more space and less rigor. This quarantine action has two positive effects. It helps the second part have good beliefs, but it also protects the first part from having to engage with the hammer of truth until it has grown.
I conjecture that to the extent that I am good at generating ideas, it is partially because I quarantine, but do not squash, my crazy ideas. (Where ignoring the crazy ideas counts as squashing them.) I conjecture further that an ideal society needs to do similar motions at the group level, not just the individual level. I said at the beginning that you need to put the responsibility for distinguishing on the listener for strategyproofness. This was not the complete story. I conjecture that you need to put the responsibility in the hands of the listener, because you need to have generators that are not worried about accidentally having false/overconfident beliefs. You are not supposed to put policy decisions in the hands of the people/contexts that are not worried about having false beliefs, but you are supposed to keep giving them attention, as long as they keep occasionally generating gold.
Personal Note: If you have the attention for it, I ask that anyone who sometimes listens to me keeps (at least) two separate buckets: one for "Does Scott sometimes say false things?" and one for "Does Scott sometimes generate good ideas?", and decide whether to give me attention based on these two separate scores. If you don't have the attention for that, I'd rather you just keep the second bucket. I concede the first bucket (for now), and think my comparative advantage is to be judged according to the second one, and never be trusted as epistemically sound. (I don't think I am horrible at being epistemically sound, at least in some domains, but if I only get a one-dimensional score, I'd rather relinquish the right to be epistemically trusted, in order to absolve myself of the responsibility to not share false beliefs, so my generative parts can share more freely.)
I'm feeling demoralized by Ben and Scott's comments (and Christian's), which I interpret as being primarily framed as "in opposition to the OP and the worldview that generated it," and which seem to me to be not at all in opposition to the OP, but rather to something like preexisting schemas that had the misfortune to be triggered by it.
Both Scott's and Ben's thoughts ring to me as almost entirely true, and also separately valuable, and I have far, far more agreement with them than disagreement, and they are the sort of thoughts I would usually love to sit down and wrestle with and try to collaborate on. I am strong upvoting them both.
But I feel caught in this unpleasant bind where I am telling myself that I first have to go back and separate out the three conversations—where I have to prove that they're three separate conversations, rather than it being clear that I said "X" and Ben said "By the way, I have a lot of thoughts about W and Y, which are (obviously) quite close to X" and Scott said "And I have a lot of thoughts about X' and X''."
Like, from my perspective it seems that there are a bunch of valid concerns being raised that are not downstream of my assertions and my proposals, and I don't want to have to defend against them, but feel like if I don't, they will in fact go down as points against those assertions and proposals. People will take them as unanswered rebuttals, without noticing that approximately everything they're specifically arguing against, I also agree is bad. Those bad things might very well be downstream of e.g. what would happen, pragmatically speaking, if you tried to adopt the policies suggested, but there's a difference between "what I assert Policy X will degenerate to, given [a, b, c] about the human condition" and "Policy X."
(Jim made this distinction, and I appreciated it, and strong upvoted that, too.)
And for some reason, I have a very hard time mustering any enthusiasm at all for both Ben and Scott's proposed conversations while they seem to me to be masquerading as my conversation. Like, as long as they are registering as direct responses, when they seem to me to be riffs.
I think I would deeply enjoy engaging with them, if it were common knowledge that they are riffs. I reiterate that they seem, to me, to contain large amounts of useful insight.
I think that I would even deeply enjoy engaging with them right here. They're certainly on topic in a not-even-particularly-broad-sense.
But I am extremely tired of what-feels-to-me like riffs being put on [my idea's tab], and of the effort involved in separating out the threads. And I do not think it is a result of e.g. a personal failure to be clear in my own claims, such that if I wrote better or differently this would stop happening to me. I keep looking for a context where, if I say A and it makes people think of B and C, we can talk about A and B and C, and not immediately lose track of the distinctions between them.
EDIT: I should be more fair to Scott, who did indeed start his post out with a frame pretty close to the one I'm requesting. I think I would take that more meaningfully if I were less tired to start with. But also it being "a response to Scott's model of Duncan's beliefs about how epistemic communities work, and a couple of Duncan's recent Facebook posts" just kind of bumps the question back one level; I feel fairly confident that the same sort of slippery rounding-off is going on there, too (since, again, I almost entirely agree with his commentary, and yet still wrote this very essay). Our disagreement is not where (I think) Ben and Scott think that it lies.
I don't know what to do about any of that, so I wrote this comment here. Epistemic status: exhausted.
I believe that I could not pass your ITT. I believe I am projecting some views onto you, in order to engage with them in my head (and publicly so you can engage if you want). I guess I have a Duncan-model that I am responding to here, but I am not treating that Duncan-model as particularly truth tracking. It is close enough that it makes sense (to me) to call it a Duncan-model, but its primary purpose in me is not for predicting Duncan, but rather for being there to engage with on various topics.
I suspect that being a better model would help it serve this purpose, and would like to make it better, but I am not requesting that.
I notice that I used different words in my header, "Scott's model of Duncan's beliefs." I think that this reveals something, though it certainly isn't clear: "belief" is for true things; "models" are toys for generating things.
I think that in my culture, having a not-that-truth-tracking Duncan-model that I want to engage my ideas with is a sign of respect. I think I don't do that with that many people (more than 10, but less than 50, I think). I also do it with a bunch of concepts, like "Simic," or "Logical Induction." The best models according to me are not the ones that are the most accurate, as much as the ones that are most generally applicable. Rounding off the model makes it fit in more places.
However, I can imagine that maybe in your culture it is something like objectification, which causes you to not be taken seriously. Is this true?
If you are curious about what kind of things my Duncan-model says, I might be able to help you build a (Scott's-Duncan-Model)-Model. In one short phrase, I think I often round you off as an avatar of "respect," but even my bad model has more nuance than just the word "respect".
I imagine that you are imagining my comment as a minor libel about you, by contributing to a shared narrative in which you are something that you are not. I am sad to the extent that it has that effect. I am not sure what to do about that. (I could send things like this in private messages, that might help).
However, I want to point out that I am often not asking people to update from my claims. That is often an unfortunate side effect. I want to play with my Duncan-model. I want you to see what I build with it, and point out where it is not correctly tracking what Duncan would actually say. (If that is something you want.) I also want to do this in a social context. I want my model to be correct, so that I can learn more from it, but I want to relinquish any responsibility for it being correct. (I am up for being convinced that I should take on that responsibility, either as a general principle, or as a cooperative action towards you.)
Feel free to engage or not.
PS: The above is very much responding to my Duncan-model, rather than what you are actually saying. I reread your above comment, and my comment, and it seems like I am not responding to you at all. I still wanted to share the above text with you.
Anyway, my reaction to the actual post is:
"Yep, Overconfidence is Deceit. Deceit is bad."
However, reading your post made me think about how maybe your right to not be deceived is trumped by my right to be incorrect.
And I mean the word "maybe" in the above sentence. I am saying the sentence not to express any disagreement, but to play with a conjecture that I am curious about.
For the record, I was planning a reply to Scott saying something like "This seems true, and seems compatible with my interpretation of the OP, which I think went out of its way to be pretty well caveated."
I didn't end up writing that comment yet in part because I did feel something like "something going on in Scott's post feels relevant to The Other FB Discussion", and wanted to acknowledge that, but that seemed to be going down a conversational path that I expected to be exhausted by, and then I wasn't sure what to do and bounced off.
Yep, I totally agree that it is a riff. I think that I would have put it in response to the poll about how important it is for karma to track truth, if not for the fact that I don't like to post on Facebook.
In one sense, this is straightforwardly true: there is an incentive in some (but not all) circumstances to project more confidence than you'd have if you were reasoning correctly based on the evidence, and following that incentive means emitting untrue information. But there are two pushbacks I want to give.
First, a minor pedantic point: There are environments where everyone is both signaling overconfidence all the time, and compensating for it in their interpretation of everyone else's communication. You could interpret this as dialects having certain confidence-related words calibrated differently. Unilaterally breaking from that equilibrium would also be deceptive.
But second, much more importantly: I think a lot of people, when they read this post, will update in the direction of socially attacking confidence rather than attacking overconfidence. This is already a thing that happens; the usual shape of the conversation is that someone says something confidently because they have illegible sources of relevant expertise, and then gets attacked for overconfidence. Several times early in the COVID-19 pandemic, I got substantial social attacks explicitly citing overconfidence, on matters about which I was in fact correct. The outcome was that I ultimately burned out for a while and did less than I could have. This failure mode does a lot of damage, and I don't think this post is adequately caveated against it.
Does anyone have a clear example to give of a time/space where overconfidence seems to them to be doing a lot of harm?
I feel that when I query myself for situations to apply this advice, or situations where I feel I've seen others apply the norms recommended here, it mostly points in directions I don't want: be less confident about things, make fewer bold claims, make sure not to make confident statements that turn out to be false.
I feel like the virtues I would like many people to live out are about trusting themselves and taking on more risk: take stronger bets on your ideas, make more bold claims, spend more time defending unlikely/niche ideas in the public sphere, make surprising predictions (to allow yourself to be falsified). Ask yourself not "was everything I said always precisely communicated with the exact right level of probability" but "did we get closer to reality or further away". This helps move the discourse forward, be it between close collaborators or in the public sphere.
I think it’s a cost if you can’t always take every sentence and assume it represented the person’s reflectively endorsed confidence, but on the margin I think it’s much worse if people have nothing interesting to say. Proper scoring rules don’t incentivize getting the important questions right.
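To unpack the proper-scoring-rule point with a toy sketch (numbers and function names are my own invention): a proper scoring rule rewards reporting your true probability on whatever question is asked, but nothing in it selects which questions are worth asking.

```python
# Brier score for a binary event: lower is better.
# If the event is truly Bernoulli(q), the expected score of reporting
# probability p is minimized at p = q -- honesty is optimal -- but the
# rule is indifferent to whether the question itself matters.
def expected_brier(p, q):
    return q * (p - 1) ** 2 + (1 - q) * p ** 2

q = 0.7  # hypothetical true probability of the event
scores = {p / 10: expected_brier(p / 10, q) for p in range(11)}
best_report = min(scores, key=scores.get)
print(best_report)  # 0.7: reporting your true belief minimizes expected loss
```

So a calibration incentive and an importance incentive really are separate levers, which is the gap the comment above is pointing at.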
I don’t want to dodge the ethical claims in the OP, which frames the topic of overconfidence as an ethical question around deception. As deontology, I agree that deceptive sentences are unethical. From a virtue-ethics standpoint, I think that if you follow the virtue of moving the conversation closer to reality, and train yourself to avoid the attraction toward deception, then you’ll be on the right track morally, and in most cases won’t need to police your individual statements to the degree the OP recommends. Virtue ethics cannot always be as precise as deontology (which itself is not as precise as utilitarianism), so I acknowledge that my recommendations cannot always save someone living a life of epistemological sin; but overall I’ll follow the virtues rather than following the deontology like someone scared he is constantly committing (or attempting to commit) crimes.
When I ruminate on trying to apply the norm ‘Overconfidence is Deceit’, I think of two example cases. The first is people feeling epistemically helpless, like they don’t know how to think or what’s true, and looking for some hard guardrails to avoid making a single step wrong. Sometimes this is right, but more often I think people’s fear and anxiety is not reflective of reality, and they should take the risk that they might be totally overconfident. And if they do suspect they are acting unethically, they should stop, drop and catch fire, and decide whether to reorganize themselves in a fundamental way, rather than putting a non-trivial tax on all further thoughts.
The second case I have in mind is people feeling helpless about the level of the sanity water-line being so low. “Why can’t everyone stop saying such dumb things all the time!” I think trying to stop other people saying wrong things feels like a thing you do when you’re spending too much time around people you think are dumb, and I recommend fixing this more directly by changing your social circles. For me and the people close to me, I would often rather they try to take on epistemic practices motivated by getting important questions right. Questions more like “How does a physicist figure out a new fundamental law?” rather than “How would a random person stop themselves from becoming a crackpot who believed they’d invented perpetual motion?”. That tends to come up with things like “get good at fermi estimates” and “make lots of predictions” and “go away from everyone and think for yourself for a couple of years” more so than things like “make sure all your sentences never miscommunicate their confidence, to the point of immorality and disgust”.
I guess this is the age-old debate that Eliezer discusses in Inadequate Equilibria, and I tend to take his side of it. I am concerned that people who talk about overconfidence all the time aren’t primarily motivated by trying to figure out new and important truths, but are mostly trying to add guardrails out of fear of themselves/everyone else falling off. I guess I mostly don’t share the spirit of the OP and won’t be installing the recommended mental subroutine.
Does anyone have a clear example to give of a time/space where overconfidence seems to them to be doing a lot of harm? I would say making investments in general (I am a professional investment analyst.) This is an area where lots of people are making decisions under uncertainty, and overconfidence can cost everyone a lot of money.
One example would be bank risk modelling pre-2008: 'our VAR model says that 99.9% of the time we won't lose more than X', therefore this bank is well-capitalised. Everyone was overconfident that the models were correct, they weren't, chaos ensued. (I remember the risk manager of one bank - Goldman Sachs? - bewailing that they had just experienced a 26-standard-deviation event, which is basically impossible. No mate, your models were wrong, and you should have known better because financial systems have crises every decade or two.)
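For a sense of scale (the arithmetic here is mine, not the commenter's): the probability of a single 26-standard-deviation downside move, under a distribution that really is normal, can be computed directly, and it is so small that observing one is overwhelming evidence that the model, not the world, was broken.

```python
import math

# P(Z <= -26) for a standard normal variable. Using erfc avoids the
# catastrophic cancellation you'd get from computing 1 - cdf(26) directly.
p = 0.5 * math.erfc(26 / math.sqrt(2))
print(p)  # on the order of 1e-149 -- effectively "never" on any human timescale
```

For comparison, the universe is only on the order of 1e17 seconds old, so even one such event per nanosecond of cosmic history would leave this outcome wildly unlikely.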
Speaking from personal experience, I'd say a frequent failure-mode is excessive belief in modelling. Sometimes it comes from the model-builder: 'this model is the best model it can be, I've spent lots of time and effort tinkering with it, therefore the model must be right'. Sometimes it's because the model-builder understands that the model is flawed, but is willing to overstate their confidence in the results, and/or the person receiving the communication doesn't want to listen to that uncertainty.
While my personal experience is mostly around people (including myself) building financial models, I suggest that people building any model of some dynamic system that is not fully understood are likely to suffer the same failure-mode: at some point down the line someone gets very over-confident and starts thinking that the model is right, or at least everyone forgets to explore the possibility that the model is wrong. When those models are used to make decisions with real-life consequences (think epidemiology models in 2020), there is a risk of getting things very wrong, when people start acting on the basis that the model is the reality.
Which brings me on to my second example, which will be more controversial than the first one, so sorry about that. In March 2020, Imperial College released a model predicting an extraordinary death toll if countries didn't lock down to control Covid. I can't speak to Imperial's internal calibration, but the communication to politicians and the public definitely seems to have suffered from over-confidence. The forecasts of a very high death toll pushed governments around the world, including the UK (where I live) into strict lockdowns. Remember that lockdowns themselves are very damaging: mass deprivation of liberty, mass unemployment, stoking a mental health pandemic, depriving children of education - the harms caused by lockdowns will still be with us for decades to come. You need a really strong reason to impose one.
And yet, the one counterfactual we have, Sweden, suggests that Imperial College's model was wrong by an order of magnitude. When the model was applied to Sweden (link below), it suggested a death toll of 96,000 by 1 July 2020 with no mitigation, or half that level with more aggressive social distancing. Actual reported Covid deaths in Sweden by 1 July were 5,500 (second link below).
So it's my contention - and I'm aware it's a controversial view - that overconfidence in the output of an epidemiological model has resulted in strict lockdowns which are a disaster for human welfare and which in themselves do far more harm than they prevent. (This is not an argument for doing nothing: it's an argument for carefully calibrating a response to try and save the most lives for the least collateral damage.)
Imperial model applied to Sweden: https://www.medrxiv.org/content/10.1101/2020.04.11.20062133v1.full.pdf
Covid deaths in Sweden by date: https://www.statista.com/statistics/1105753/cumulative-coronavirus-deaths-in-sweden/
Hey! Thanks. I notice you’re a brand new commenter and I wanted to say this was a great first (actually second) comment. Both your examples were on-point and detailed. Your second one FYI seems quite likely to me too. (A friend of mine interacted with epidemiological modeling at many places early in the pandemic – and I have heard many horror stories from them about the modeling that was being used to advise governments.)
I’ll leave an on-topic reply tomorrow, just wanted to say thanks for the solid comment.
I was thinking about this a little more, and I think that the difference in our perspectives is that you approached the topic from the point of view of individual psychology, while I (perhaps wrongly) interpreted Duncan's original post as being about group decision-making. From an individual point of view, I get where you're coming from, and I would agree that many people need to be more confident rather than less.
But applied to group decision-making, I think the situation is very different. I'll admit I don't have hard data on this, but from life experience and anecdotes of others, I would support the claim that most groups are too swayed by the apparent confidence of the person presenting a recommendation/pitch/whatever, and therefore that most groups make sub-optimal decisions because of it. (I think this is also why Duncan somewhat elides the difference between individuals who are genuinely over-confident about their beliefs, and individuals who are deliberately projecting overconfidence: from the point of view of the group listening to them, it looks the same.)
Since groups make a very large number of decisions (in business contexts, in NGOs, in academic research, in regulatory contexts...) I think this is a widespread problem and it's useful to ask ourselves how to reduce the bias toward over-confidence in group decision-making.
Almost everyone's response to COVID, including institutions, to the tune of many preventable deaths.
Almost everything produced by the red tribe in 2020, to the tune of significant damage to the social fabric.
Thanks for the examples! Those two sound like the second case I had in mind.
An easy way to capture most of the benefits of overconfidence without actually contorting beliefs too much is to separate beliefs about the world from beliefs about the best decision given those beliefs about the world.
I can have very high confidence that something is the right course of action because it has the highest expected benefit even if my confidence about the state of the world is low.
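A toy illustration of that separation (all numbers invented): confidence about the best action can be near-total even when confidence about the state of the world is modest.

```python
# Two possible world-states; I'm genuinely uncertain which one obtains.
p_state = {"A": 0.6, "B": 0.4}

# Hypothetical payoffs: action -> payoff received in each state.
payoffs = {
    "act":  {"A": 100, "B": 20},
    "wait": {"A": 10,  "B": 30},
}

# Expected value of each action under the uncertain world-model.
ev = {a: sum(p_state[s] * v for s, v in pay.items()) for a, pay in payoffs.items()}
best = max(ev, key=ev.get)
print(best, ev)  # "act" wins decisively (EV 68 vs 18) despite only 60% confidence in state A
```

So "I am highly confident we should act" and "I am highly confident about how the world is" are different claims, and only conflating them forces overconfidence about the latter.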
Overall I agree with this post, and was about to write a kinda boring comment saying "whelp, that all seemed correctly caveated such that I think I basically agree with it completely." Then I reread this last line, which I don't disagree with, but I feel somewhat confused about:
There's a post I have brewing called "When Should You Cultivate Disgust?". I haven't written it yet in part because I don't know the answer. I think cultivating distaste, disgust, aversion, are probably important tools to have in one's toolkit, but they locally seem fairly costly if you don't actually have the ability to do anything about an ugly thing that you suddenly gain the ability to see everywhere. It seems prosocial to learn to see bad things affecting society, but I'm not sure how strongly I'd recommend gaining that distaste faster than you gain useful things to do with it.
This seems highly variable person-to-person; Nate Soares and Anna Salamon each seem to pay fairly low costs/no costs for many kinds of disgust, and are also notably each doing very different things than each other. I also find that a majority of my experiences of disgust are not costly for me, and instead convert themselves by default into various fuels or resolutions or reinforcement-rewards. There may be discoverable and exportable mental tech re: relating productively to disgust-that-isn't-particularly-actionable.
Thanks for posting this article here. Sometimes it feels like I got into this rationality stuff too late or only after a lot of people scattered away.
(I hope no one minds that this comment doesn't talk about the article's contents.)
FWIW I appreciate this, and do indeed prefer to have the post on the site even if you don't engage personally with comments.
Thank you for an interesting article. It helped clarify some things I've been thinking about. The question I'm left with is: how practically can someone encourage a culture to be less rewarding of overconfidence?
I guess I'm feeling this particularly strongly because in the last year I started a new job in a company much more tolerant of overconfidence than my previous employer. I've recalibrated my communications with colleagues to the level that is normal for my new employer, but it makes me uncomfortable (my job is to make investment recommendations, and I feel like I'm not adequately communicating risks to my colleagues, because if I do, no-one will take up my recommendations; they'll buy riskier things which are pitched with greater confidence by other analysts). Other than making sure I'm the least-bad offender consistent with actually being listened to, is there something I can do to shift the culture?
And please, no recommendations on the lines of 'find another job', that's not practical right now.
Both examples are about presidents who set certain goals and used the office of the presidency to pursue those goals while being blocked by Congress.
They are examples of presidents not successfully extracting support for their goals from congressional allies of their own party, even though they put those goals on their platforms.
Those are very different promises than the ones by either president not to support revolving-door dynamics, where it would actually have been within the presidents' power to fulfill their promises had they wanted to.
When a presidential candidate promises to do something that would require congressional approval but ends up unable to get it, how is that not an instance of apparent overconfidence? And neither president seems to have suffered electorally due to that particular failure. So they both seem like evidence for the claim that apparent overconfidence (within reason) isn't punished.
I think if you asked any presidential candidate whether they think they will be able to implement all of their platform, they won't tell you that they are confident they will implement all of it.
I think you're underweighting a crucial part of the thesis, which is that it doesn't matter what the candidate secretly knows or would admit if asked. A substantial portion of the listeners just ... get swayed by the strong claim. The existence of savvy listeners who "get it" and "know better" and know where to put the hedges and know which parts are hyperbole doesn't change that fact. And there is approximately never a reckoning.
When it comes to Obama's promises on Guantanamo, the video I found has him saying things he did do. He did intend to close Gitmo. He did follow through on this intention by doing things to close Gitmo.
I don't think having a plan for the future where you aren't fully in control of the outcome is necessarily overconfidence. Ambitious plans are valuable. If you think that everybody making an ambitious plan is inherently deceitful, that would mean declaring that all startups engage in deceit.
I don't think the problem of our time is that too many people have ambitious plans.
You're still missing the thesis. Apologies for not having the spoons to try restating it in different words, but I figured I could at least politely let you know.
Edit: a good first place to look might be "what do I think is different for me, Christian, than for people with substantially less discernment and savviness?"