I have a strong appreciation for the general point that "specificity is sometimes really great", but I'm wondering if this point might miss the forest for the trees with some large portion of its actual audience?
If you buy that in some sense all debates are bravery debates then audience can matter a lot, and perhaps this point addresses central tendencies in "global english internet discourse" while failing to address central tendencies on LW?
There is a sense in which nearly all highly general statements are technically false, because they admit of at least some counterexamples.
However, any such statement might still be useful in a structured argument of very high quality, perhaps as an illustration of a troubling central tendency, or a "lemma" in a multi-part probabilistic argument.
It might even be the case that the MEDIAN EXAMPLE of a real tendency is highly imperfect without that "demolishing" the point.
Suppose for example that someone has focused a lot on higher-level structural truths whose evidential basis was, say, a thorough exploration of many meta-analyses about a given subject.
"Mel the meta-meta-analyst" mig...
A culturally local example might be Hanson's general claim that medical insurance coverage does not appear to "cause health" on average. No single vivid patient generates this result. Vivid stories do exist here, but they don't adequately justify the broader claim. Rather, the substantiation arises from tallying many outcomes in a variety of circumstances and empirically noticing relations between circumstances and tallies.
I don't see how this is an example of a time when my specificity test shouldn't be used, because Robin Hanson would simply pass my specificity test. It's safe to say that Robin has thought through at least one specific example of what the claim that "medical insurance doesn't cause health" means. The sample dialogue with me and Robin would look like this:
Robin: Medical insurance coverage doesn't cause health on average!
Liron: What's a specific example (real or hypothetical) of someone who seeks medical insurance coverage because they think they're improving their health outcome, but who actually would have had the same health outcome without insurance?
Robin: A 50-year-old man opts for a job that has insurance benefits over one that doesn't, because he believes
...I appreciate your high-quality comment.
I likewise appreciate your prompt and generous response :-)
I think I see how you imagine a hypothetical example of "no net health from insurance" might work as a filter that "passes" Hanson's claim.
In this case, I don't think your example works super well and might almost cause more problems than not?
Differences of detail in different people's examples might SUBTRACT from attention to key facts relevant to a larger claim because people might propose different examples that hint at different larger causal models.
Like, if I were going to give the strongest possible hypothetical example to illustrate the basic idea of "no net health from insurance" I'd offer something like:
EXAMPLE: Alice has some minor symptoms of something that would clear up by itself and because she has health insurance she visits a doctor. ("Doctor visits" is one of the few things that health insurance strongly and reliably causes in many people.) While there she gets a nosocomial infection that is antibiotic resistant, lowering her life expectancy. This is more common than many people think. Done.
This example is...
I <3 Specificity
For years, I've been aware of myself "activating my specificity powers" multiple times per day, but it's kind of a lonely power to have. "I'm going to swivel my brain around and ride it in the general→specific direction. Care to join me?" is not something you can say in most group settings. It's hard to explain to people that I'm not just asking them to be specific right now, in this one context. I wish I could make them see that specificity is just this massively under-appreciated cross-domain power. That's why I wanted this sequence to exist.
I gratuitously violated a bunch of important LW norms
Many commenters took me to task on the two issues above, as well as raising other valid issues, like whether the post implies that specificity is always the right power to activate in every situation.
The voting for this post was probably a rar...
The notion of specificity may be useful, but to me its presentation in terms of tone (beginning with the title "The Power to Demolish Bad Arguments") and examples seemed rather antithetical to the Less Wrong philosophy of truth-seeking.
For instance, I read the "Uber exploits its drivers" example discussion as follows: the author already disagrees with the claim as their bottom line, then tries to win the discussion by picking their counterpart's arguments apart, all the while insulting this fictitious person with asides like "By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers.".
In contrast to e.g. Double Crux, that seems like an unproductive and misguided pursuit - reversed stupidity is not intelligence, and hence even if we "demolish" our counterpart's supposedly bad arguments, at best we discover that they could not shift our priors.
And more generally, the essay gave me a yucky sense of "rationalists try to prove their superiority by creating strawmen and then beating them in arguments", sneer culture, etc. It doesn't help that some of its central examples invo...
Echoing previous reviews (it's weird to me the site still suggested this to review anyway, seems like it was covered already?) I would strongly advise against including this. While it has a useful central point - that specificity is important and you should look for and request it - I agree with other reviewers that the style here is very much the set of things LW shouldn't be about, and LWers shouldn't be about, but that others think LW-style people are about, and it's structuring all these discussions as if arguments are soldiers and the goal is to win while being snarky about it.
Frankly, I found the whole thing annoying and boring, and wouldn't have finished it if I wasn't reviewing.
I don't think changing the central example would overcome any of my core objections to the post, although it would likely improve it.
There's a version of this post that is much shorter, makes the central point quickly, and is something I would link to occasionally, but this very much isn't it.
I really dislike the central example used in this post, for reasons explained in this article. I hope it isn't included in the next LW book series without changing to a better example.
By telling Steve to be specific, you are trying to trick him into adopting an incoherent position. You should be trying to argue against your opponent at his strongest, in this case in the general case that he has thought the most about. If you lay out your strategy before going specific, he can choose an example that is more resilient to it. In your example, if Uber didn't exist, that job may have been a taxi driver job instead, which pays more, because there's less rent seeking stacked against you.
I believe that the thing which is making many of your commenters misinterpret the post is that you chose a political example for the dialogue. That gives people the (reasonable, as this is a common move) suspicion that you have a motive of attacking your political enemy while disguising it as rationality advice.
Even if they don't think that, if they have any sympathies towards the side that you seem to be attacking, they will still feel it necessary to defend that side. To do otherwise would risk granting the implicit notion that the "Uber exploits its drivers" side would have no coherent arguments in general, regardless of whether or not you meant to send that message.
You mentioned that you have personal examples where Anna pointed out to you that your position was incoherent. Something like that would probably have been a better example to use: saying "here's an example of how I made this mistake" won't be suspected of having ulterior motives the way that "here's an example of a mistake made by someone who I might reasonably be suspected to consider a political opponent" will.
I really liked this sequence. I agree that specificity is important, and think this sequence does a great job of illustrating many scenarios in which it might be useful.
However, I believe that there are a couple implicit frames that permeate the entire sequence, alongside the call for specificity. I believe that these frames together can create a "valley of bad rationality" in which calls for specificity can actually make you worse at reasoning than the default.
------------------------------------
The first of these frames is not just that being speci...
I liked this post, though I am afraid that it will suggest the wrong spirit.
Can you help me paint a specific mental picture of a driver being exploited by Uber?
I've had similar "exploitation" arguments with people:
"Commodification" and "dehumanization" don't mean anything unless you can point to their concrete effects.
I think your way of handling it is much, much better than how I've handled it. It comes across as less adversarial while still making the other person do the work of explaining themselves better. I've found that small tricks like this can completely flip a conversation from dysfunctional to effective. I'll have to remember to use your suggestion.
Nominating this whole sequence. It’s a blast, even if reading it felt very jumpy and stop-and-start. And I love how it’s clearly a self-example. But overall it’s just some really key lessons, taught better than any other place on the internet that I know.
I really liked this whole sequence. I think I have some disagreements with its presentation (a bit too loud for my taste), but I have actually repeatedly mentally referred back to the idea of specificity that is proposed here, and the sequence caused me to substantially update that trying to be more specific is a surprisingly powerful lever in lots of different situations.
I also just really like the example and concreteness driven content of the sequence.
I stumbled on this post today, and when I read the Acme example, Uber did not come to mind as an example. It was only when I read the comments that I learned Acme was originally Uber.
Just wanted to give some evidence that changing the example from Acme to Uber was helpful.
I find that I struggle with the rhetoric of the argument. Shouldn't the goal be to illuminate facts and truths rather than merely proving the other side wrong? Specifics certainly allow the illumination of truths (and so getting less wrong in our decisions and actions). However, it almost reads like the goal is to use specificity as some rhetorical tool in much the same way statistics can be misused to color the lens and mislead.
I'm sure that is not your goal, so I assume one of the hidden assumptions here could be put in the title. One additional word — The Power to Demolish BAD Arguments — might set a better tone at the start.
Here's how Steve could have demolished Liron's argument:
"If a company makes $28/h out of their workers and pays them $14/h, they are exploiting them".
Love the concept of "the ladder".
What advice do you have for applying it in everyday discourse: meetings, interviews, presentations, etc.? In an attempt to be specific with my question, I am not asking about the concept of "being more specific"; rather, I am asking how you can "train" yourself to apply it more frequently and effectively, so that when you are asked a question you can respond with a more concise and specific answer that improves your communication effectiveness.
"Love your neighbour" is also not specific. Very many good things aren't. It's ok. You don't have to play chess at all until you discuss interventions.
As I am but a vain monkey, I have often turned my mind to this post hoping to win arguments, but have not succeeded. The missing step seems to be the one here:
Steve: Ok… A single dad whose kid gets sick. He works for Uber and he doesn’t even get health insurance, and he’s maxed out all his credit cards to pay for doctor’s visits. The next time his car breaks down, he won’t even be able to fix it. Meanwhile, Uber skims 25% of every dollar so he barely makes minimum wage. You should try living on minimum wage so you can see...
1) There is a risk in looking at concrete examples before understanding the relevant abstractions. Your Uber example relies on the fact that you can both look at his concrete example and know you're seeing the same thing. This condition does not always hold, as often the wrong details jump out as salient.
To give a toy example, if I were to use the examples "King cobra, Black mamba" to contrast with "Boa constrictor, Anaconda" you'd probably see "Ah, I get it! Venomous snakes vs non-venomous snakes", but that's ...
This is Part I of the Specificity Sequence
Imagine you've played ordinary chess your whole life, until one day the game becomes 3D. That's what unlocking the power of specificity feels like: a new dimension you suddenly perceive all concepts to have. By learning to navigate the specificity dimension, you'll be training a unique mental superpower. With it, you'll be able to jump outside the ordinary course of arguments and fly through the conceptual landscape. Fly, I say!
"Acme exploits its workers!"
Want to see what a 3D argument looks like? Consider a conversation I had the other day when my friend “Steve” put forward a claim that seemed counter to my own worldview:
We were only one sentence into the conversation and my understanding of Steve’s point was high-level, devoid of specific detail. But I assumed that whatever his exact point was, I could refute it using my understanding of basic economics. So I shot back with a counterpoint:
I injected principles of Econ 101 into the discussion because I figured they could help me expose that Steve misunderstood Acme's impact on its workers.
My smart-sounding response might let me pass for an intelligent conversationalist in non-rationalist circles. But my rationalist friends wouldn't have been impressed at my 2D tactics, parrying Steve's point with my own counterpoint. They'd have sensed that I'm not progressing toward clarity and mutual understanding, that I'm ignorant of the way of Double Crux.
If I were playing 3D chess, my opening move wouldn't be to slide a defensive piece (the Econ 101 Rook) across the board. It would be to... shove my face at the board and stare at the pieces from an inch away.
Here's what an attempt to do this might look like:
No, this is still not the enlightening conversation we were hoping for.
But where did I go wrong? Wasn't I making a respectable attempt to lead the conversation toward clear and precise definitions? Wasn't I navigating the first waypoint on the road to Double Crux?
Can you figure out where I went wrong?
…
…
…
It was a mistake for me to ask Steve for a mere definition of the term “exploit”. I should have asked for a specific example of what he imagines “exploit” to mean in this context. What specifically does it mean—actually, forget "mean"—what specifically does it look like for Acme to "exploit its workers by paying them too little"?
When Steve explained that "exploit" means "to use selfishly", he fulfilled my request for a definition, but the whole back-and-forth didn't yield any insight for either of us. In retrospect, it was a wasted motion for me to ask, "What do you mean by 'exploits its workers'?"
Then, instead of making another attempt to shove my face at the board and stare at the pieces up close, I couldn't help myself: I went back to sliding my pieces around. I set out to rebut the claim that "Acme uses its workers selfishly" by tossing the big abstract concept of “capitalism” into the discussion.
At this point, imagine that Steve were a malicious actor whose only goal was to score rhetorical points on me. He'd be thrilled to hear me say the word “capitalism”. "Capitalism" is a nice high-level concept for him to build an infinite variety of smart-sounding defenses out of, together with other high-level concepts like “exploitation” and “selfishness”.
A malicious Steve can be rhetorically effective against me even without possessing a structured understanding of the subject he’s making a claim about. His mental model of the subject could just be a ball pit of loosely-associated concepts. He can hold up his end of the conversation surprisingly well by just snatching a nearby ball and flinging it at me. And what have I done by mentioning “capitalism”? I’ve gone and tossed in another ball.
I'd like to think that Steve isn't malicious, that he isn't trying to score rhetorical points on me, and that the point he's trying to make has some structure and depth to it. But there's only one way to be sure: By using the power of specificity to get a closer look! Here's how it's done:
Steve doesn’t realize this yet, but by coaxing out a specific example of his claim, I've suddenly made it impossible for him to use a ball pit of loosely-associated concepts to score rhetorical points on me. From this point on, the only way he can win the argument with me is by clarifying and supporting his claim in a way that helps me update my mental model of the subject. This isn’t your average 2D argument anymore. We’re now flying like Superman.
I have to stop and point out how crazy this is.
You’d think the way smart people argue is by supporting their claims with evidence, right? But here I’m giving Steve a handicap where he gets to make up fake evidence (telling me any hypothetical specific story) just to establish that his argument is coherent by checking whether empirical support for it ever could meaningfully exist. I'm asking Steve to step over a really low bar here.
Surprisingly, in real-world arguments, this lowly bar often stops people in their tracks. The conversation often goes like this:
When someone makes a claim you (think you) disagree with, don't immediately start gaming out which 2D moves you'll counterargue with. Instead, start by drilling down in the specificity dimension: think through one or more specific scenarios to which their claim applies.
If you can't think of any specific scenarios to which their claim applies, maybe it's because there are none. Maybe the thinking behind their original claim is incoherent.
In complex topics such as politics and economics, the sad reality is that people who think they’re arguing for a claim are often not even making a claim. In the above conversation, I never got to a point where I was trying to refute Steve’s argument, I was just trying to get specific clarity on what Steve’s claim is, and I never could. We weren't able to discuss an example of what specific world-state constitutes, in his judgment, a referent of the statement “Acme exploits its workers by paying them too little”.
Zooming Into the Claim
Imagine Steve shows you this map and says, “Oregon’s coastline is too straight. I wish all coastlines were less straight so that they could all have a bay!”
Resist the temptation to argue back, “You’re wrong, bays are stupid!” Hopefully, you’ve built up the habit of nailing down a claim’s specific meaning before trying to argue against it.
Steve is making a claim about “Oregon’s coastline”, which is a pretty abstract concept. In order to unpack the claim’s specific meaning, we have to zoom into the concept of a “coastline” and see it in more detail as this specific configuration of land and water:
From this perspective, a good first reply would be, “Well, Steve, what about Coos Bay over here? Are you happy with Oregon’s coastline as long as Coos Bay is part of it, or do you still think it’s too straight even though it has this bay?”
Notice that we can’t predict how Steve will answer our specific clarifying question. So we never knew what Steve’s words meant in the first place, did we? Now you can see why it wasn’t yet productive for us to start arguing against him.
When you hear a claim that sounds meaningful, but isn’t 100% concrete and specific, the first thing you want to do is zoom into its specifics. In many cases, you’ll then find yourself disambiguating between multiple valid specific interpretations, like for Steve’s claim that “Oregon’s coastline is too straight”.
In other cases, you’ll discover that there was no specific meaning in the mind of the speaker, like in the case of Steve’s claim that “Acme exploits its workers by paying them too little” — a staggering thing to discover.
TFW a statement unexpectedly turns out to have no specific meaning
“Startups should have more impact!”
Consider this excerpt from a recent series of tweets by Michael Seibel, CEO of the Y Combinator startup accelerator program:
When I first read these tweets, my impression was that Michael was providing useful suggestions that any founder could act on to make their startup more of a force for good. But then I activated my specificity powers…
Before elaborating on what I think is the failure of specificity on Michael’s part, I want to say that I really appreciate Michael and Y Combinator engaging with this topic in the first place. It would be easy for them to keep their heads down and stick to their original wheelhouse of funding successful startups and making huge financial returns, but instead, YC repeatedly pushes the envelope into new areas such as founding OpenAI and creating their Request for Carbon Removal Technologies. The Y Combinator community is an amazing group of smart and morally good people, and I’m proud to call myself a YC founder (my company Relationship Hero was in the YC Summer 2017 batch). Michael’s heart is in the right place to suggest that startup founders may have certain underused mechanisms by which to make the world a better place.
That said… is there any coherent takeaway from this series of tweets, or not?
The key phrases seem to be that startup founders should “compete for happiness and impact” and “use the company’s reach and power to make the world a better place”.
It sounds meaningful, doesn’t it? But notice that it’s generically-worded and lacks any specific examples. This is a red flag.
Remember when you first heard Steve’s claim that “Acme exploits its workers by paying them too little”? At first, it sounded like a meaningful claim. But as we tried to nail down what it meant, it collapsed into nothing. Will the same thing happen here?
Specificity powers, activate! Form of: Tweet reply
Let’s consider a specific example of a startup founder who is highly successful: Elon Musk and his company SpaceX, currently valued at $33B. The company’s mission statement is proudly displayed at the top of their about page:
What I love about SpaceX is that everything they do follows from Elon Musk’s original goal of making human life multiplanetary. Check out this incredible post by Tim Urban to understand Elon’s plan in detail. Elon’s 20-year playbook is breathtaking:
A single catastrophic event on Earth can permanently wipe out the human species
Colonize other planets, starting with Mars
Invent reusable rockets to drop the price per launch, then dominate the $27B/yr market for space launches
I would enthusiastically advise any founder to follow Elon’s playbook, as long as they have the stomach to commit to it for 20+ years.
So how does this relate to Michael’s tweets? I believe my advice to “follow Elon’s playbook” constitutes a specific example of Michael’s suggestion to “use the company’s reach and power to make the world a better place”.
But here’s the thing: Elon’s playbook is something you have to do before you found the company. First you have to identify a major problem in the world, then you come up with a plan to start a certain type of company. How do you apply Michael’s advice once you’ve already got a company?
To see what I mean, let’s pick another specific example of a successful founder: Drew Houston and Dropbox ($11B market cap). We know that Michael wants Drew to “compete for happiness and impact” and to “use the company’s reach and power to make the world a better place”. But what does that mean here? What specific advice would Michael have for Drew?
Let’s brainstorm some possible ideas for specific actions that Michael might want Drew to take:
I know, these are just stabs in the dark, because we need to talk about specifics somehow. Did Michael really mean any of these? The ones about charity and employee benefits seem too obvious. Let’s explore the possibility that Michael might be recommending that Dropbox change its mission.
Here’s Dropbox’s current mission from their about page:
Seems like a nice mission that helps the world, right? I use Dropbox myself and can confirm that the product makes my life a little better. So would Michael say that Dropbox is an example of “competing for happiness and impact”?
If so, then it would have been really helpful if Michael had written in one of his tweets, “I mean like how Dropbox is unleashing the world’s creative energy”. Mentioning Dropbox, or any other specific example, would have really clarified what Michael is talking about.
And if Dropbox’s current mission isn’t what Michael is calling for, then how would Dropbox need to change it in order to better “compete for happiness and impact”? For instance, would it help if they tack on “and we guarantee that anyone can have access to cloud storage regardless of their ability to pay for it”, or not?
Notice how this parallels my conversation with Steve about Acme. We begin with what sounds like a meaningful exhortation: Companies should compete for happiness and impact instead of wealth and users/revenue! Acme shouldn’t exploit its workers! But when we reach for specifics, we suddenly find ourselves grasping at straws. I showed three specific guesses of what Michael’s advice could mean for Drew, but we have no idea what it does mean, if anything.
Imagine that the CEO of Acme wanted to take Steve’s advice about how not to exploit workers. He’d be in the same situation as Drew from Dropbox: confused about the specifics of what his company was supposedly doing wrong in the first place.
Once you’ve mastered the power of specificity, you’ll see this kind of thing everywhere: a statement that at first sounds full of substance, but then turns out to actually be empty. And the clearest warning sign is the absence of specific examples.
Next post: How Specificity Works