This is Part I of the Specificity Sequence

Imagine you've played ordinary chess your whole life, until one day the game becomes 3D. That's what unlocking the power of specificity feels like: a new dimension you suddenly perceive all concepts to have. By learning to navigate the specificity dimension, you'll be training a unique mental superpower. With it, you'll be able to jump outside the ordinary course of arguments and fly through the conceptual landscape. Fly, I say!

"Acme exploits its workers!"

Want to see what a 3D argument looks like? Consider a conversation I had the other day when my friend “Steve” put forward a claim that seemed counter to my own worldview:

Steve: Acme exploits its workers by paying them too little!

We were only one sentence into the conversation and my understanding of Steve’s point was high-level, devoid of specific detail. But I assumed that whatever his exact point was, I could refute it using my understanding of basic economics. So I shot back with a counterpoint:

Liron: No, job creation is a force for good at any wage. Acme increases the demand for labor, which drives wages up in the economy as a whole.

I injected principles of Econ 101 into the discussion because I figured they could help me expose that Steve misunderstood Acme's impact on its workers.

My smart-sounding response might let me pass for an intelligent conversationalist in non-rationalist circles. But my rationalist friends wouldn't have been impressed at my 2D tactics, parrying Steve's point with my own counterpoint. They'd have sensed that I'm not progressing toward clarity and mutual understanding, that I'm ignorant of the way of Double Crux.

If I were playing 3D chess, my opening move wouldn't be to slide a defensive piece (the Econ 101 Rook) across the board. It would be to... shove my face at the board and stare at the pieces from an inch away.

Here's what an attempt to do this might look like:

Steve: Acme exploits its workers by paying them too little!

Liron: What do you mean by “exploits its workers”?

Steve: Come on, you know what “exploit” means… the dictionary says it means “to use selfishly for one’s own ends”

Liron: You’re saying you have a beef with any company that acts “selfish”? Doesn’t every company under capitalism aim to maximize returns for its shareholders?

Steve: Capitalism can be good sometimes, but Acme has gone beyond the pale with their exploitation of workers. They’re basically ruining capitalism.

No, this is still not the enlightening conversation we were hoping for.

But where did I go wrong? Wasn't I making a respectable attempt to lead the conversation toward clear and precise definitions? Wasn't I navigating the first waypoint on the road to Double Crux?

Can you figure out where I went wrong?

It was a mistake for me to ask Steve for a mere definition of the term “exploit”. I should have asked for a specific example of what he imagines “exploit” to mean in this context. What specifically does it mean—actually, forget "mean"—what specifically does it look like for Acme to "exploit its workers by paying them too little"?

When Steve explained that "exploit" means "to use selfishly", he fulfilled my request for a definition, but the whole back-and-forth didn't yield any insight for either of us. In retrospect, it was a wasted motion for me to ask, "What do you mean by 'exploits its workers'?"

Then, instead of making another attempt to shove my face up close and stare at the board, I couldn't help myself: I went back to sliding my pieces around. I set out to rebut the claim that "Acme uses its workers selfishly" by tossing the big abstract concept of “capitalism” into the discussion.

At this point, imagine that Steve were a malicious actor whose only goal was to score rhetorical points on me. He'd be thrilled to hear me say the word “capitalism”. "Capitalism" is a nice high-level concept for him to build an infinite variety of smart-sounding defenses out of, together with other high-level concepts like “exploitation” and “selfishness”.

A malicious Steve can be rhetorically effective against me even without possessing a structured understanding of the subject he’s making a claim about. His mental model of the subject could just be a ball pit of loosely-associated concepts. He can hold up his end of the conversation surprisingly well by just snatching a nearby ball and flinging it at me. And what have I done by mentioning “capitalism”? I’ve gone and tossed in another ball.

I'd like to think that Steve isn't malicious, that he isn't trying to score rhetorical points on me, and that the point he's trying to make has some structure and depth to it. But there's only one way to be sure: By using the power of specificity to get a closer look! Here's how it's done:

Steve: Acme exploits its workers by paying them too little!

Liron: Can you help me paint a specific mental picture of a worker being exploited by Acme?

Steve: Ok… A single dad who works at Acme and never gets to spend time with his kids because he works so much. He's living paycheck to paycheck and he doesn't get any paid vacation days. The next time his car breaks down, he won’t even be able to fix it because he barely makes minimum wage. You should try living on minimum wage so you can see how hard it is!

Liron: You’re saying Acme should be blamed for this specific person’s unpleasant life circumstances, right?

Steve: Yes, because they have thousands of workers in these kinds of circumstances, and meanwhile their stock is worth $80 billion.

Steve doesn’t realize this yet, but by coaxing out a specific example of his claim, I've suddenly made it impossible for him to use a ball pit of loosely-associated concepts to score rhetorical points on me. From this point on, the only way he can win the argument with me is by clarifying and supporting his claim in a way that helps me update my mental model of the subject. This isn’t your average 2D argument anymore. We’re now flying like Superman.

Liron: Ok, sticking with this one specific worker's hypothetical story — what would they be doing if Acme didn’t exist?

Steve: Getting a different job

Liron: Ok, what specific job?

Steve: I don’t know, depends what their skills are

Liron: This is your specific story, Steve. You get to pick any specific plausible details you want in order to support any point you want!

I have to stop and point out how crazy this is.

You’d think the way smart people argue is by supporting their claims with evidence, right? But here I’m giving Steve a handicap: he gets to make up fake evidence (any hypothetical specific story he likes), because all I'm trying to establish is that his argument is coherent, i.e. that empirical support for it could ever meaningfully exist. I'm asking Steve to step over a really low bar here.

Surprisingly, in real-world arguments, this lowly bar often stops people in their tracks. The conversation often goes like this:

Steve: I guess he could instead be a cashier at McDonald’s. Because then he’d at least get three weeks per year of paid vacation time.

Liron: In a world where Acme exists, couldn’t this specific guy still go get a job as a cashier at McDonald’s? Plus, wouldn’t he have less competition for that McDonald's cashier job because some of the other would-be applicants got recruited to be Acme workers instead? Can we conclude that the specific person who you chose to illustrate your point is actually being *helped* by the existence of Acme?

Steve: No because he’s an Acme worker, not a McDonald’s cashier

Liron: So doesn’t that mean Acme offered him a better deal than McDonald’s, thereby improving his life?

Steve: No, Acme just tricked him into thinking that it’s a better deal because they run ads saying how flexible their job is, and how you can set your own hours. But it’s actually a worse deal for the worker.

Liron: So like, McDonald’s offered him $13/hr plus three weeks per year of paid vacation, while Acme offered $13/hr with no paid vacation time, but more flexibility to set his own hours?

Steve: Um, ya, something like that.

Liron: So if Acme did a better job of educating prospective workers that they don't offer the same paid vacation time that some other companies do, then would you stop saying that Acme is “exploiting its workers by paying them too little”?

Steve: No, because Acme preys on uneducated workers who need quick access to cash, and they also intend to automate away the workers’ jobs as soon as they can.

Liron: It looks like you’re now making new claims that weren’t represented in the specific story you chose, right?

Steve: Yes, but I can tell other stories

Liron: But for the specific story you chose to tell that was supposed to best illustrate your claim, the “exploitation” you’re referring to only deprived the worker of the value of a McDonald’s cashier’s paid vacation time, which might be like a 5% difference in total compensation value? And since his work schedule is much more flexible as an Acme worker, couldn’t that easily be worth the 5% to him, so that he wasn’t “tricked” into joining Acme but rather made a decision in rational self-interest?

Steve: Yeah maybe, but anyway that’s just one story.

Liron: No worries, we can start over and talk about a specific story that you think would illustrate your main claim. Who knows, I might even end up agreeing with your claim once I understand it. It's just hard to understand what you're saying about Acme without having at least one example in mind, even if it's a hypothetical one.

Steve thinks for a little while...

Steve: I don't know all the exploitative shit Acme does ok? I just think Acme is a greedy company.

When someone makes a claim you (think you) disagree with, don't immediately start gaming out which 2D moves you'll counterargue with. Instead, start by drilling down in the specificity dimension: think through one or more specific scenarios to which their claim applies.

If you can't think of any specific scenarios to which their claim applies, maybe it's because there are none. Maybe the thinking behind their original claim is incoherent.

In complex topics such as politics and economics, the sad reality is that people who think they’re arguing for a claim are often not even making a claim. In the above conversation, I never got to the point of trying to refute Steve’s argument; I was just trying to get specific clarity on what Steve’s claim was, and I never could. We weren't able to discuss an example of what specific world-state constitutes, in his judgment, a referent of the statement “Acme exploits its workers by paying them too little”.

Zooming Into the Claim

Imagine Steve shows you this map and says, “Oregon’s coastline is too straight. I wish all coastlines were less straight so that they could all have a bay!”

Resist the temptation to argue back, “You’re wrong, bays are stupid!” Hopefully, you’ve built up the habit of nailing down a claim’s specific meaning before trying to argue against it.

Steve is making a claim about “Oregon’s coastline”, which is a pretty abstract concept. In order to unpack the claim’s specific meaning, we have to zoom into the concept of a “coastline” and see it in more detail as this specific configuration of land and water:

From this perspective, a good first reply would be, “Well, Steve, what about Coos Bay over here? Are you happy with Oregon’s coastline as long as Coos Bay is part of it, or do you still think it’s too straight even though it has this bay?”

Notice that we can’t predict how Steve will answer our specific clarifying question. So we never knew what Steve’s words meant in the first place, did we? Now you can see why it wasn’t yet productive for us to start arguing against him.

When you hear a claim that sounds meaningful, but isn’t 100% concrete and specific, the first thing you want to do is zoom into its specifics. In many cases, you’ll then find yourself disambiguating between multiple valid specific interpretations, like for Steve’s claim that “Oregon’s coastline is too straight”.

In other cases, you’ll discover that there was no specific meaning in the mind of the speaker, like in the case of Steve’s claim that “Acme exploits its workers by paying them too little” — a staggering thing to discover.

TFW a statement unexpectedly turns out to have no specific meaning

“Startups should have more impact!”

Consider this excerpt from a recent series of tweets by Michael Seibel, CEO of the Y Combinator startup accelerator program:

Successful tech founders would have far better lives and legacies if they competed for happiness and impact instead of wealth and users/revenue.

We need to change [the] model from build a big company, get rich, and then starting a foundation...

To build a big company, get rich, and use the company's reach and power to make the world a better place.

When I first read these tweets, my impression was that Michael was providing useful suggestions that any founder could act on to make their startup more of a force for good. But then I activated my specificity powers…

Before elaborating on what I think is the failure of specificity on Michael’s part, I want to say that I really appreciate Michael and Y Combinator engaging with this topic in the first place. It would be easy for them to keep their head down and stick to their original wheelhouse of funding successful startups and making huge financial returns, but instead, YC repeatedly pushes the envelope into new areas such as founding OpenAI and creating their Request for Carbon Removal Technologies. The Y Combinator community is an amazing group of smart and morally good people, and I’m proud to call myself a YC founder (my company Relationship Hero was in the YC Summer 2017 batch). Michael’s heart is in the right place to suggest that startup founders may have certain underused mechanisms by which to make the world a better place.

That said… is there any coherent takeaway from this series of tweets, or not?

The key phrases seem to be that startup founders should “compete for happiness and impact” and “use the company’s reach and power to make the world a better place”.

It sounds meaningful, doesn’t it? But notice that it’s generically-worded and lacks any specific examples. This is a red flag.

Remember when you first heard Steve’s claim that “Acme exploits its workers by paying them too little”? At first, it sounded like a meaningful claim. But as we tried to nail down what it meant, it collapsed into nothing. Will the same thing happen here?

Specificity powers, activate! Form of: Tweet reply

What's a specific example, real or hypothetical, of a $1B+ founder trading off less revenue for more impact?

Cuz at the $1B+ level, competing for impact may look indistinguishable from competing for revenue.

E.g. Elon Musk companies have huge impact and huge valuations.

Let’s consider a specific example of a startup founder who is highly successful: Elon Musk and his company SpaceX, currently valued at $33B. The company’s mission statement is proudly displayed at the top of their about page:

SpaceX designs, manufactures and launches advanced rockets and spacecraft. The company was founded in 2002 to revolutionize space technology, with the ultimate goal of enabling people to live on other planets.

What I love about SpaceX is that everything they do follows from Elon Musk’s original goal of making human life multiplanetary. Check out this incredible post by Tim Urban to understand Elon’s plan in detail. Elon’s 20-year playbook is breathtaking:

  1. Identify a major problem in the world
    A single catastrophic event on Earth can permanently wipe out the human species
  2. Propose a method of fixing it
    Colonize other planets, starting with Mars
  3. Design a self-sustaining company or organization to get it done
    Invent reusable rockets to drop the price per launch, then dominate the $27B/yr market for space launches

I would enthusiastically advise any founder to follow Elon’s playbook, as long as they have the stomach to commit to it for 20+ years.

So how does this relate to Michael’s tweets? I believe my advice to “follow Elon’s playbook” constitutes a specific example of Michael’s suggestion to “use the company’s reach and power to make the world a better place”.

But here’s the thing: Elon’s playbook is something you have to do before you found the company. First you have to identify a major problem in the world, then you come up with a plan to start a certain type of company. How do you apply Michael’s advice once you’ve already got a company?

To see what I mean, let’s pick another specific example of a successful founder: Drew Houston and Dropbox ($11B market cap). We know that Michael wants Drew to “compete for happiness and impact” and to “use the company’s reach and power to make the world a better place”. But what does that mean here? What specific advice would Michael have for Drew?

Let’s brainstorm some possible ideas for specific actions that Michael might want Drew to take:

  • Change Dropbox’s mission to something that has more impact on happiness
  • Donate 10% of Dropbox’s profits to efforts to reduce world hunger
  • Give all Dropbox employees two months of paid vacation each year

I know, these are just stabs in the dark, because we need to talk about specifics somehow. Did Michael really mean any of these? The ones about charity and employee benefits seem too obvious. Let’s explore the possibility that Michael might be recommending that Dropbox change its mission.

Here’s Dropbox’s current mission from their about page:

We’re here to unleash the world’s creative energy by designing a more enlightened way of working.

Seems like a nice mission that helps the world, right? I use Dropbox myself and can confirm that the product makes my life a little better. So would Michael say that Dropbox is an example of “competing for happiness and impact”?

If so, then it would have been really helpful if Michael had written in one of his tweets, “I mean like how Dropbox is unleashing the world’s creative energy”. Mentioning Dropbox, or any other specific example, would have really clarified what Michael is talking about.

And if Dropbox’s current mission isn’t what Michael is calling for, then how would Dropbox need to change it in order to better “compete for happiness and impact”? For instance, would it help if they tacked on “and we guarantee that anyone can have access to cloud storage regardless of their ability to pay for it”, or not?

Notice how this parallels my conversation with Steve about Acme. We begin with what sounds like a meaningful exhortation: Companies should compete for happiness and impact instead of wealth and users/revenue! Acme shouldn’t exploit its workers! But when we reach for specifics, we suddenly find ourselves grasping at straws. I showed three specific guesses of what Michael’s advice could mean for Drew, but we have no idea what it does mean, if anything.

Imagine that the CEO of Acme wanted to take Steve’s advice about how not to exploit workers. He’d be in the same situation as Drew from Dropbox: confused about the specifics of what his company was supposedly doing wrong to begin with.

Once you’ve mastered the power of specificity, you’ll see this kind of thing everywhere: a statement that at first sounds full of substance, but then turns out to actually be empty. And the clearest warning sign is the absence of specific examples.

Next post: How Specificity Works

83 comments

I have a strong appreciation for the general point that "specificity is sometimes really great", but I'm wondering if this point might miss the forest for the trees with some large portion of its actual audience?

If you buy that in some sense all debates are bravery debates, then the audience can matter a lot, and perhaps this point addresses central tendencies in "global english internet discourse" while failing to address central tendencies on LW?

There is a sense in which nearly all highly general statements are technically false, because they admit of at least some counter examples.

However, any such statement might still be useful in a structured argument of very high quality, perhaps as an illustration of a troubling central tendency, or as a "lemma" in a multi-part probabilistic argument.

It might even be the case that the MEDIAN EXAMPLE of a real tendency is highly imperfect without that "demolishing" the point.

Suppose for example that someone has focused a lot on higher-level structural truths whose evidential basis was, say, a thorough exploration of many meta-analyses about a given subject.

"Mel the meta-meta-analyst" mig...

A culturally local example might be Hanson's general claim that medical insurance coverage does not appear to "cause health" on average. No single vivid patient generates this result. Vivid stories do exist here, but they don't adequately justify the broader claim. Rather, the substantiation arises from tallying many outcomes in a variety of circumstances and empirically noticing relations between circumstances and tallies.

I don't see how this is an example of a time when my specificity test shouldn't be used, because Robin Hanson would simply pass my specificity test. It's safe to say that Robin has thought through at least one specific example of what the claim that "medical insurance doesn't cause health" means. The sample dialogue with me and Robin would look like this:

Robin: Medical insurance coverage doesn't cause health on average!

Liron: What's a specific example (real or hypothetical) of someone who seeks medical insurance coverage because they think they're improving their health outcome, but who actually would have had the same health outcome without insurance?

Robin: A 50-year-old man opts for a job that has insurance benefits over one that doesn't, because he believes

...
I appreciate your high-quality comment.

I likewise appreciate your prompt and generous response :-)

I think I see how you imagine a hypothetical example of "no net health from insurance" might work as a filter that "passes" Hanson's claim.

In this case, I don't think your example works super well, and might almost cause more problems than not?

Differences of detail in different people's examples might SUBTRACT from attention to key facts relevant to a larger claim because people might propose different examples that hint at different larger causal models.

Like, if I was going to give the strongest possible hypothetical example to illustrate the basic idea of "no net health from insurance", I'd offer something like:

EXAMPLE: Alice has some minor symptoms of something that would clear up by itself and because she has health insurance she visits a doctor. ("Doctor visits" is one of the few things that health insurance strongly and reliably causes in many people.) While there she gets a nosocomial infection that is antibiotic resistant, lowering her life expectancy. This is more common than many people think. Done.

This example is...

I agree that this section of your comment is the most cruxy: Yes. Then I would say, "Ok, I've never encountered a coherent generalization for which I couldn't easily generate an example, so go ahead and tell me your causal model and I'll probably cook up an obvious example to satisfy myself in the first minute of your explanation." Anyone talking about having a "causal model" is probably beyond the level that my specificity trick is going to demolish. The specificity trick I focus on in this post is for demolishing the coherence of the claims of the average untrained arguer, or occasionally catching oneself at thinking overly vaguely. That's it.
I think maybe we agree... verbosely... with different emphasis? :-) At least I think we could communicate reasonably well. I feel like the danger, if any, would arise from playing example ping pong and having the serious disagreements arise from how we "cook (instantiate?)" examples into models, and "uncook (generalize?)" models into examples. When people just say what their model "actually is", I really like it. When people only point to instances I feel like the instances often under-determine the hypothetical underlying idea and leave me still confused as to how to generate novel instances for myself that they would assent to as predictions consistent with the idea that they "meant to mean" with the instances. Maybe: intensive theories > extensive theories?
I feel like the same scrutiny standard is not being applied. The guy with health insurance doesn't check his health more often, catching diseases earlier? Uncertainty doesn't cause stress and load on the circulatory system? Why are these not holes that prevent it from being coherent?

Why can't Steve claim he has a friend who can be called up to exemplify exploitation? If the bar is in fact low, Steve passed it upon positing McDonald's as the relevant alternative, and the argument went on to actually argue the argument. Or alternatively, a claim is required to have that Robin-level specification to be coherent, and a reasonable arguer could try to hold it to be incoherent.

I feel like this is a case where epistemic status breaks symmetry. A white-coat doctor and a witch doctor making the same claims requires the witch doctor to show more evidence to reach the same credibility levels. If argument truly screens off authority, the problem needs to be in the argument. Steve is required to have the specification ready at hand during the debate.
The difference is that we all have a much clearer picture of what Robin Hanson's claim means than what Steve means, so Robin's claim is sufficiently coherent and Steve's isn't. I agree there's a gray area on what is "sufficiently coherent", but I think we can agree there's a significant difference on this coherence spectrum between Steve's claim and Robin's claim. For example, any listener can reasonably infer from Robin's claim that someone on medical insurance who gets cancer shouldn't be expected to survive with a higher probability than someone without medical insurance. But reasonable listeners can have differing guesses about whether or not Steve's claim would also describe a world where Uber institutes a $15/hr flat hourly wage for all drivers.

Sure, then I'd just try to understand what specific property of the friend counts as exploitation. In Robin's case, I already have good reasonable guesses about the operational definitions of "health". Yes, I can try to play "gotcha" and argue that Robin hasn't nailed down his claim, and in some cases that might actually be the right thing to do - it's up to me to determine what's sufficiently coherent for my own understanding, and to nail down the claim to that standard, before moving on to arguing about the truth-value of the claim.

Ah, if Steve really meant "Yes, Uber screws the employee out of $1/hr compared to McDonald's", then he would have passed the specificity bar. But the specific Steve I portray in the dialogue didn't pass the bar, because that's not really the point he felt he wanted to make. The Steve that I portray found himself confused because his claim really was insufficiently specific, and therefore really was demolished, unexpectedly to him.
Well, I am more familiar with settings where I have a duty to understand the world, rather than the world having a duty to explain itself to me. I also hold that holding unfamiliar things to higher standards creates epistemic xenophobia.

I would hold it important that one doesn't assign falsehood to a claim one doesn't understand. Although it is also true that assigning truth to a claim one doesn't understand is dangerous to roughly the same degree. My go-to assumption would be that Steve understands something different by the word and might be running some sort of moon logic in his head. Rather than declare the "moon proof" to be invalid, it's more important that the translation between moon logic and my planet logic interfaces without confusion. Instead of misusing a word/concept I do know, he is using a word or concept I do not know.

"Coherent" usually points to a concept where a sentence is judged on its home logic's terms. But as used here, it's clearly in the eye of the beholder. So it's less "makes objective sense" and more "makes sense to whom?". The shared reality you create in a discussion or debate would be the arbiter, but if the argument relies too much on those mechanics, it doesn't generalize to contexts outside of that.
Sure, makes sense. I also just think there are a lot of Steves in the world who are holding on to belief-claims that lack specific referents, who could benefit from reading this post even if no one is arguing with them.
Related: it's tempting to interpret "Ignorance, a skilled practice" as pushing for the epistemological stance that specificity should overwhelm Mel the meta-meta-analyst
I think this is a common misunderstanding that people are having about what I'm saying. I'm not saying to hunt for a counterexample that demolishes a claim. I'm saying to ask the person making the claim for a single specific example that's consistent with the general claim. Imagine that a general claim has 900 examples and 100 counterexamples. Then I'm just asking to see one of the 900 examples :)

I <3 Specificity

For years, I've been aware of myself "activating my specificity powers" multiple times per day, but it's kind of a lonely power to have. "I'm going to swivel my brain around and ride it in the general→specific direction. Care to join me?" is not something you can say in most group settings. It's hard to explain to people that I'm not just asking them to be specific right now, in this one context. I wish I could make them see that specificity is just this massively under-appreciated cross-domain power. That's why I wanted this sequence to exist.

I gratuitously violated a bunch of important LW norms

  1. As Kaj insightfully observed last year, choosing Uber as the original post's object-level subject made it a political mind-killer.
  2. On top of that, the original post's only role model of a specificity-empowered rationalist was this repulsive "Liron" character who visibly got off on raising his own status by demolishing people's claims.

Many commenters took me to task on the two issues above, as well as raising other valid issues, like whether the post implies that specificity is always the right power to activate in every situation.

The voting for this post was probably a rar...

The notion of specificity may be useful, but to me its presentation in terms of tone (beginning with the title "The Power to Demolish Bad Arguments") and examples seemed rather antithetical to the Less Wrong philosophy of truth-seeking.

For instance, I read the "Uber exploits its drivers" example discussion as follows: the author already disagrees with the claim as their bottom line, then tries to win the discussion by picking their counterpart's arguments apart, all the while insulting this fictitious person with asides like "By sloshing around his mental ball pit and flinging smart-sounding assertions about “capitalism” and “exploitation”, he just might win over a neutral audience of our peers.".

In contrast to e.g. Double Crux, that seems like an unproductive and misguided pursuit - reversed stupidity is not intelligence, and hence even if we "demolish" our counterpart's supposedly bad arguments, at best we discover that they could not shift our priors.

And more generally, the essay gave me a yucky sense of "rationalists try to prove their superiority by creating strawmen and then beating them in arguments", sneer culture, etc. It doesn't help that some of its central examples invo...

Meta-level reply

Yeah, I take your point that the post's tone and political-ish topic choice undermine the ability of readers to absorb its lessons about the power of specificity. This is a clear message I've gotten from many commenters, whether explicitly or implicitly. I shall edit the post.

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Object-level reply

In the meantime, I still think it's worth pointing out where I think you are, in fact, analyzing the content wrong and not absorbing its lessons :)

My dialogue character has various positive-affect a-priori beliefs about Uber, but having an a-priori belief state isn't the same thing as having an immutable bottom line. If Steve had put forth a coherent claim, and a shred of support for that claim, then the argument would have left me with a modified a-posteriori belief state. My character is making a good-faith attempt at Double Crux. It's just impossible for me to ascertain Steve's claim-underlying crux until I first ascertain Steve's claim.

You seem to be objecting that selling "the power to demolish bad arguments" means that I'm selling a Fully General Counterargument, but I'm not. The way this dialogue goes isn't representative of every possible dialogue where the power of specificity is applied. If Steve's claim were coherent, then asking him to be specific would end up helping me change my own mind faster and demolish my own a-priori beliefs.

It doesn't seem relevant to mention this. In the dialogue, there's no instance of me creating or modifying my beliefs about Uber by reversing anything. I'm making an example out of Steve because I want to teach the reader about an important and widely-applicable observation about so-called "intellectual discussions": that participants often win over a crowd by making smart-sounding general assertions whose corresponding set of possible specific interpretations is the empty set.
Fritz Iversen
I think you are on the right track. The problem is, "specificity" has to be handled in a really specific way, and the intention has to be the desire to get from the realm of unclear arguments to clear insight. If you see discussions as a chess game, you're already sending your brain in the wrong direction, toward the goal of "winning" the conversation, which is something fundamentally different from the goal of clarity.

Just as specificity remains abstract here and is therefore misunderstood, one would have to ask: what exactly is specificity supposed to be? Linguistics would help here, for the problem being negotiated grows out of the deficiencies of language, namely that language is contaminated with ambiguities. Linguistically, something is specific when numbers and entities (names) come in. With "Acme" there is already an entity; otherwise everything, even the so-called specific argument, remains highly abstract. Therefore, the specificity trick in the dialogs remains just that: a manipulative trick. And tricks don't lead to clarity.

Specificity would be possible here only by injecting numbers: "How many dollars does Acme extract in surplus value per hour worked by their workers?" After that, the exploitation would have been specifically quantified, and one could talk about whether Acme is brutally or somewhat unjustifiably exploiting the workers' bad situation, or whether the wages are fair.

The specific economics of Acme would, of course, be even more complicated, insofar as one would have to ask whether much of the added value is already being absorbed by overpaid senior executives. At the end of any specific discussion, however, the panelists must ask themselves what they want to be: fair or unfair? Those who want to gain clarity about this have to answer it for themselves.

Then briefly on Uber: Uber is a bad business idea. It's bad because it can only become profitable if Uber dominates its markets to the point that it no longer has any competition. The

Echoing previous reviews (it's weird to me that the site still suggested this for review; it seems like that was covered already?), I would strongly advise against including this. While it has a useful central point (that specificity is important and you should look for and request it), I agree with other reviewers that the style here is very much the set of things LW shouldn't be about, and LWers shouldn't be about, but that others think LW-style people are about: it structures all these discussions as if arguments are soldiers and the goal is to win while being snarky about it.

Frankly, I found the whole thing annoying and boring, and wouldn't have finished it if I wasn't reviewing. 

I don't think changing the central example would overcome any of my core objections to the post, although it would likely improve it.

There's a version of this post that is much shorter, makes the central point quickly, and is something I would link to occasionally, but this very much isn't it. 

Zvi, I respect your opinion a lot and I've come to accept that the tone disqualifies the original version from being a good representation of LW. I'm working on a revision now. Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

I really dislike the central example used in this post, for reasons explained in this article. I hope it isn't included in the next LW book series without changing to a better example.

This comment leads me to believe that you misunderstand the point of the example. Demonstrating that an arguer doesn't have a coherent understanding of their claim doesn't mean that the claim itself is incoherent. It just means that if you argue against that particular person on that particular claim, nobody is likely to gain anything out of it[1]. The validity of the example does not depend on whether "Uber exploits its drivers!" is true. You agree with Steve in the example, and because the example shows Steve being unable to defend his point, you don't like it. You should strive to understand, however, that Steve's incoherent defense of his claim has nothing to do with your very coherent reasons for believing the same claim. I think that the example is strengthened if Steve's central claim is correct despite the fact that he can't defend it coherently.

----------------------------------------

1. At least, that's my take. I haven't read the rest of this sequence yet, so I don't know if Liron explains what you gain out of discovering that somebody's argument is incoherent. ↩︎
This looks like a case where hanging a lampshade would be useful. A footnote on the original claim by Steve saying: "The Steve character here does not have the eloquence to express this argument, but <source> and <source> present the case that Uber does, in fact, exploit its workers"
Making an unnecessary and possibly false object-level claim would only hurt the post. It's irrelevant to Liron's discussion whether Steve's claim is right or wrong, and getting sidetracked by its potential truthfulness would muddy the point.
Yeah, that's why I didn't advocate making an argument, just acknowledging that such arguments exist, and linking to a couple of reasonable ones. If an author chooses to use a real-world example, and uses a straw man as one side, that is automatically going to generate bad feelings in people who know the better arguments. By hanging a lampshade on the straw man, the author acknowledges those feelings, explicitly sets them aside, and the discussion of debate technique (or other meta-level review) can proceed from there without getting sidetracked. And yes, the reader could make that inference, but:

1. Don't make the reader do more work than necessary.
2. It may be epistemically damaging to expose people to bad arguments.
3. Not all readers implicitly trust the storyteller, and by avoiding plot holes, the author can avoid knocking the narrative train onto a sidetrack.
I don't understand your usage of the term "hanging a lampshade" in this context. I don't think either Steve's or Liron's behavior in the hypothetical is unrealistic or unreasonable; I have seen similar conversations before. Liron even stated that Steve was basically him from some time ago. I thought hanging a lampshade is when the fictional scenario is unrealistic or overly coincidental and the author wants to alleviate reader ire by letting them know that he thinks the situation is unlikely as well. Since the situation here isn't unrealistic, I don't see the relevance of hanging a lampshade.

If the article should be amended to include pro-"Uber exploits drivers" arguments, it should also include contra arguments to maintain parity. Otherwise we have the exact same scenario but in reverse, as including only pro-"Uber exploits drivers" arguments will "automatically [...] generate bad feelings in people who know better the better arguments". This is why getting into the object-level accuracy of Steve's claim has negative value. Trying to do so will bloat the article and muddy the waters.
Steve's argument is explicitly bad, but the original post doesn't say that better arguments exist. And "does Uber exploit workers?" isn't a settled question like, say, "do vaccines cause autism?" or "do new species develop from existing species?", so the author shouldn't presume that the audience is totally familiar with the relative merits of both (or either) side(s), and can recognize that Steve is making a relatively poor argument for his point. And the overall structure pattern-matches to the straw man fallacy. Perhaps that's the only thing that needs to be lampshaded? If instead of "Steve" the character is called "Scarecrow" (and then call the other one Dorothy, for narrative consistency/humor)...
Ben Pace
Alas, paywall. Summary?
It worked straightforwardly in an incognito window for me. The only section I could find that seemed relevant to the example is this:  Overall, I do think the article makes some decent points, but I am overall not particularly compelled by it. The document seems to try to argue that Uber cannot possibly become profitable. I would be happy to take a bet that Uber will become profitable within the next 5 years.  It also makes some weak arguments that drivers are worse off working for Uber, but doesn't really back them up, just saying that "Uber pushed them down to minimum wage", and seems to completely ignore the value of the flexibility of working for Uber, which (talking to many drivers over the years, as well as friends who temporarily got into driving for Uber) is one of the biggest value adds for drivers.

By telling Steve to be specific, you are trying to trick him into adopting an incoherent position. You should be trying to argue against your opponent at his strongest, which in this case is the general case that he has thought the most about. If you lay out your strategy before going specific, he can choose an example that is more resilient to it. In your example, if Uber didn't exist, that job may have been a taxi-driver job instead, which pays more, because there's less rent-seeking stacked against you.

It's not a trick, he's allowed to backtrack and amend his choice of specific example and have a re-do of my response. In this dialogue, I chose the underlying reality to be that Steve's "point" really is incoherent, because a Monte Carlo simulation of real-world arguers has a high probability of landing on this scenario.
You saw coming that his position would be temporarily incoherent, that's why you went there. I expect Steve to be aware of this at some level, and update on how hostile the debate is. Minimize the amount of times you have to prove him wrong.
I agree with the principle of "minimize the amount of times you have to prove him wrong". But in the dialogue, I only had to "prove him wrong" that one time, because his position was fundamentally incoherent, not temporarily incoherent.
This relies very heavily on the assumption that, in an adversarial context, a free pick should be an optimal pick. The other arguer demonstrated that he didn't even realize he could pick, so it is reasonable to assume he doesn't know the pick should be optimal. Doing re-dos without preannouncing them is giving free mulligans to yourself. I think you would have been safe in saying that he did not make new claims, denying the mulligan. There could have been 10 facets of the exploitation in the scenario, and fixing one of them would still leave 9 open. You can't say that a forest doesn't exist if it is not any of the individual trees. The new claim is also not contradictory with the old story. It could also be taken as a further specification of it.
Ok but what if he truly doesn't have a point? Because as the author of the fictional situation, I'm saying that's the premise. Under these circumstances, isn't it right and proper to play the kind of "adversarial game" with him that he's likely to "lose"?
It is pretty easy to 'win' any argument where the other side has to take a strong stance. Go ahead, ask something where you take a strong stance. I was going to say "Uber has definitely done 'blank'", but I'm not very Steve, and saying that Uber isn't as 'fair' as Amazon or Walmart is a position that is easy to agree with or against. Whoever makes out the best is going to agree with this adversarial game.
Relying on your opponent making a mistake is not a super reliable strategy. If someone reads your story and uses it as inspiration to start an argument, they might end up in a situation where the actual person doesn't make that mistake. That could feel a lot more like "shooting yourself in the face in an argument" than "demolishing an argument". Argument methods that work because of misdirection arguably don't serve truth very well, or work very indirectly (being deceptive makes it rewarding for the other to be keen). Most people have reasons for their stances. Their point might be lousy or unimportant, but usually one exists. If he truly doesn't have a point, then there is no specific story to tell. As the author, you have the option of him having a story or of him not meaning anything with his words, but not both.
Well, I'm not offering a general-purpose debate-winning tactic. I'm offering a basic sanity check that happens to demolish most arguments because humans are bad at thinking and have trouble even forming coherent claims.
I guess the point of humanity is to achieve as much prosperity as possible. Adversarial techniques help when competition improves our chances -- helpful in physical activities, when groups compete, in markets generally. But in a conversation with someone your best bet to help humanity is to help them come around to your superior understanding, and adversarial conversation won't achieve that. The ideal strategy looks something like the best path along which you can lead them, where you can demonstrate to them they are wrong and they will believe you, which usually involves you demonstrating a very clear and comprehensive understanding, citing information, but doing it all in a way that seems collaborative.
Asking for specific examples is not a rhetorical device; it's a tool for clear thinking. What I'm illustrating with Steve is a productive approach that raises the standard of discourse, IMO. I've personally been in the Steve role many times: I used to hang out a lot with Anna Salamon when I was still new to LessWrong-style rationality, and I distinctly remember how I would make statements that Anna would then basically just ask me to clarify, and in attempting to do so I would realize I probably didn't have a coherent point, and that this is what talking to a smarter person than me feels like. She was being empathetic and considerate to me like she always is, not adversarial at all, but it's still accurate to say she demolished my arguments.

I believe that the thing which is making many of your commenters misinterpret the post is that you chose a political example for the dialogue. That gives people the (reasonable, as this is a common move) suspicion that you have a motive of attacking your political enemy while disguising it as rationality advice.

Even if they don't think that, if they have any sympathies towards the side that you seem to be attacking, they will still feel it necessary to defend that side. To do otherwise would risk granting the implicit notion that the "Uber exploits its drivers" side would have no coherent arguments in general, regardless of whether or not you meant to send that message.

You mentioned that you have personal examples where Anna pointed out to you that your position was incoherent. Something like that would probably have been a better example to use: saying "here's an example of how I made this mistake" won't be suspected of having ulterior motives the way that "here's an example of a mistake made by someone who I might reasonably be suspected to consider a political opponent" will.

Ahhh right, you got me, I was unnecessarily political! It didn't pattern match to the kind of political arguing that I see in my bubble, but I get why anyone who feels skeptical or unhappy about Uber's practices won't be maximally receptive to learning about specificity using this example, and why even people who don't have an opinion about Uber have expressed feeling "uncomfortable" with the example. Thanks! At some point I may go back and replace with another example. I'm open to ideas.
Ok I finally made this edit. Wish I did it sooner!

I really liked this sequence. I agree that specificity is important, and think this sequence does a great job of illustrating many scenarios in which it might be useful.

However, I believe that there are a couple implicit frames that permeate the entire sequence, alongside the call for specificity.  I believe that these frames together can create a "valley of bad rationality" in which calls for specificity can actually make you worse at reasoning than the default.


The first of these frames is not just that being speci...

Thanks for the feedback. I agree that the tone of the post has been undermining its content. I'm currently working on editing this post to blast away the gratuitously bad-tone parts :) Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.
Matt Goldenberg
I think the post reads much better now, and I think specifically the second point I made about Combat Culture is addressed. In regards to the first point, I made some specific points in here about when specificity might and might not be useful that I feel still aren't fully addressed. I also get that this review was made really late and you didn't really have an opportunity to digest and incorporate the feedback, so I'm not blaming you in regards to this, but pointing out where I think the post still may need some work, and my potential reasons for not voting for it or others in the sequence.
Glad to hear you feel I've addressed the Combat Culture issues. I think those were the lowest-hanging fruits that everyone agreed on, including me :) As for the first point, I guess this is the same thing we had a long comment thread about last year, and I'm not sure how much our views diverge at this point... Let's take this paragraph you quoted: "It sounds meaningful, doesn’t it? But notice that it’s generically-worded and lacks any specific examples. This is a red flag." Do you not agree with my point that Seibel should have endeavored to be more clear in his public statement?

I liked this post, though I am afraid that it will suggest the wrong spirit.

Update: I've edited the post to remove a lot of parts that I recognized as gratuitous yuckiness.

Can you help me paint a specific mental picture of a driver being exploited by Uber?

I've had similar "exploitation" arguments with people:

"Commodification" and "dehumanization" don't mean anything unless you can point to their concrete effects.

I think your way of handling it is much, much better than how I've handled it. It comes across as less adversarial while still making the other person do the work of explaining themselves better. I've found that small tricks like this can completely flip a conversation from dysfunctional to effective. I'll have to remember to use your suggestion.

Nominating this whole sequence. It’s a blast, even if reading it felt very jumpy and stop-and-start. And I love how it’s clearly a self-example. But overall it’s just some really key lessons, taught better than any other place on the internet that I know.

I really liked this whole sequence. I think I have some disagreements with its presentation (a bit too loud for my taste), but I have actually repeatedly mentally referred back to the idea of specificity that is proposed here, and the sequence caused me to substantially update that trying to be more specific is a surprisingly powerful lever in lots of different situations.

I also just really like the example and concreteness driven content of the sequence. 

I stumbled on this post today, and when I read the Acme example, Uber did not come to mind as an example. It was only when I read the comments that I learned Acme was originally Uber.

Just wanted to give some evidence that changing the example from Uber to Acme was helpful.

Sweet thanks

I find that I struggle with the rhetoric of the argument. Shouldn't the goal be to illuminate facts and truths rather than merely proving the other side wrong? Specifics certainly allow the illumination of truths (and so getting less wrong in our decisions and actions). However, it almost reads like the goal is to use specificity as some rhetorical tool in much the same way statistics can be misused to color the lens and mislead.

I'm sure that is not your goal, so I assume one of the hidden assumptions here could be put in the title. One additional word, "The Power to Demolish BAD Arguments", might set a better tone at the start.

Yep! Since in practice, the other side of a discussion is often incoherent about the meaning of their original claim, I believe it's efficient to employ this specificity tool to illuminate the incoherence early in the conversation. Ah yeah, I agree. Title changed. Thanks!

Here's how Steve could have demolished Liron's argument:

"If a company makes $28/h out of their workers and pays them $14/h, they are exploiting them".

Yeah if Steve had said that, he would have been making progress toward potentially passing my having-a-coherent-claim test. Do I agree or disagree with this version of Steve's claim? Neither. I'm still not done nailing down what he's talking about. My followup question would be, "If it's impossible for Uber to take a lower share of the ride revenue without hastening their bankruptcy (because they're currently cashflow negative), would this scenario still make you claim that Uber is exploiting its drivers?"
Svyatoslav Usachev
That's actually a very reasonable question to ask your interlocutor (and yourself), as are the previous "specificity" questions. The answer to that question would be: "If Uber was making $1/h out of their workers and paying them $14/h, that clearly would not have been exploitation, but if it makes $28/h, then it is, regardless of them being profitable. The question is how much every particular worker brings to the company, it doesn't matter whether it's enough for the viability of their business model." I don't see what is that you would argue about if not about these particular questions one of which might become a double-crux at some point.
If Steve has a coherent claim for us to argue about, then you're right. But a surprisingly large number of people, such as the specific Steve I chose for this dialogue, don't have enough substance to their belief-state to qualify as a "claim", despite their ability to say an abstract sentence like "Uber exploits its workers" that sounds like it could be a meaningful claim. In this case, specificity demolishes their argument, making them go back to the drawing board of what claim they desire to put forth, if any.

My meta-level response: This is more than what the Steve in the dialogue had in mind. The specific Steve in the dialogue just thinks "Hm, I guess I don't know in what sense my specific-example guy is being exploited. I'd have to think about this more."

My object-level response: What specifically do you mean by "Uber making $X/hr"? Contribution margin before (huge) fixed costs? Or net profit margin? Because right now Uber has a negative net profit margin, so its shareholders are subsidizing the drivers' wages and the riders' low prices.
Svyatoslav Usachev
On the meta level I agree with you, and I am happy to see the updated title, which makes the post feel less like attacking this sort of questions. With regards to the object-level response, I surely mean "contribution margin before (huge) fixed costs". Net profits are not very relevant here, see Amazon, a booming business with close-to-zero net profits. It is also clear that while Uber doesn't have profits, it surely has other gains, such as market share, for example. I.e., if the company decides that it wants to essentially exchange money for other non-monetary gains, it should not affect our opinion on their relationship with their workers. That said, I acknowledge that it is slightly more nuanced, and that simply looking at the contribution margin is not enough.
Well, I'm guessing Uber's current contribution margins are hovering around slightly negative or slightly positive, and that's before accounting for fixed costs like engineering that we can probably agree should be amortized into the "fairness equation". In my personal analysis, I don't see how Uber is being unfair in any way to their drivers. It seems like Uber is a nice shareholder-subsidized program to put drivers to work and give riders convenient point-to-point transportation.
clone of saturn
Could you explain why shareholders are subsidizing Uber drivers, in your opinion?
Since I'm pretty sure Uber's net margins are negative, there must be a net transfer of wealth from people who buy Uber shares to Uber's drivers, employees, vendors and/or customers (I believe it's to all of them).
clone of saturn
Right, but what's their motivation for transferring their wealth in this way?
Ah because they think that in the next 5-20 years, Uber will figure out how to widen their contribution margin, get leverage on their fixed costs, and eventually make so much total profit that it compensates for all the money burned along the way (currently losing $5B/quarter btw). One way this could happen is if Uber is able to keep their prices the same but replace their human-driven cars with self-driving cars. This scenario is possible but I'd give it less than 50% chance. But we don't need to try to predict the exact win scenario, we can just assign some probability that there will be some kind of win scenario. Facebook IPO'd in 2012 at a $104B market cap before anyone knew by what mechanism they'd generate profits. They only hit the jackpot figuring out a highly profitable advertising engine around 2014. I personally don't think Uber is an appealing investment, but I'm not confident about that assessment, because they're in a unique position where they're embedded deeply in the lives of many millions of customers, and they might also hit some kind of unforeseen jackpot to become the next Facebook.
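That "assign some probability that there will be some kind of win scenario" reasoning is just an expected-value calculation. Here's a toy sketch with entirely made-up numbers (not a real valuation of Uber or Facebook):

```python
# Toy expected-value model of a venture-style bet (all numbers hypothetical).
p_win = 0.3            # chance some big win scenario materializes
value_if_win = 500e9   # company value in that scenario, in dollars
value_if_lose = 20e9   # salvage value otherwise

# Buying shares above today's losses makes sense if the expected value
# exceeds the price paid, even with no specific win mechanism in mind.
expected_value = p_win * value_if_win + (1 - p_win) * value_if_lose
print(f"${expected_value / 1e9:.0f}B")  # 0.3*500B + 0.7*20B = 164B
```

The point is only that the bet can be rational under uncertainty about the mechanism; the actual probabilities and payoffs are anyone's guess.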
Not invested either, but I thought its view of the future, where, in general, people move from owning transportation capital (cars) to on-demand use of a transportation service, seems to have some sense to it. Coupled with the move toward rental of personal assets while not in use, like the AirBnB model, it looks a bit better too (perhaps as a transition state...?). That does seem to depend on more than merely the technical and financial aspects. I suspect there is also the whole cultural and social (and probably the legal liability and insurance, for the autonomous car) part that will need to shift to support that type of market shift.

Not sure if this is a move in a similar direction, but one of the big car rental companies just launched (or will) a new service for longer-term rental. Basically you can pay a monthly fee and drive most of the cars they offer. The market here seemed to be those that might want a different car every few weeks (BMW this month, Audi next, and maybe a Lexus a bit later...). In the back of my mind I cannot help but see some type of signaling motivation here, and wonder just how long that lasts if everyone can do it; all the different cars you are seen driving no longer signal any real kind of status. Still, there are clearly some functional aspects that make it appealing over having to own multiple vehicles.
Svyatoslav Usachev
I don't quite understand: if even the contribution margin of an individual driver is negative (before fixed costs), then I don't see how this model can become viable in the future. My understanding is that contribution margins are obviously positive (Uber gets at least half of the trip fare on average), but there is also a cost of investment in engineering and in low fares (which buy market share) that has not yet been covered. The viability of the business model, thus, comes from the fact that future (quite positive) income from the provided services will continue to cover investments in non-monetary gains, such as brand, market share, assets and IP.
The hope is that they'll hit on something big like self-driving car technology that fundamentally improves Uber's marginal profit. I run a service business with a financial model kind of similar to Uber's, and I can tell you there's not much qualitative difference between reporting -20% vs +20% "contribution margin", because it depends on how much you decide to amortize all kinds of gray-area costs like marketing, new-driver incentives, non-driver employees, etc., into the calculation of what goes into "one ride". I use the ambiguous term "marginal profit" to mean "contribution margin with more overhead amortized in", and I'm pretty sure Uber's is quite negative right now, maybe in the ballpark of -20%.
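To make the amortization ambiguity concrete, here's a minimal sketch with entirely hypothetical per-ride numbers (none of these figures are Uber's actual financials). The same fare reports a positive or negative "contribution margin" depending on which gray-area costs you amortize into one ride:

```python
# Hypothetical per-ride economics (illustrative only; not real Uber figures).
fare = 20.00                  # rider pays
driver_payout = 15.00         # driver keeps
payment_and_insurance = 2.00  # clearly variable, per-ride costs

# Gray-area overhead, expressed per ride after choosing some amortization:
marketing_per_ride = 2.50
driver_incentives_per_ride = 1.50
engineering_per_ride = 1.00

def margin(fare, *costs):
    """Contribution margin as a fraction of the fare."""
    return (fare - sum(costs)) / fare

# Narrow definition: only strictly variable costs count.
narrow = margin(fare, driver_payout, payment_and_insurance)

# Broad definition: amortize the gray-area overhead in too.
broad = margin(fare, driver_payout, payment_and_insurance,
               marketing_per_ride, driver_incentives_per_ride,
               engineering_per_ride)

print(f"narrow: {narrow:+.0%}, broad: {broad:+.0%}")  # narrow: +15%, broad: -10%
```

The sign of the reported margin here is an accounting choice, not a fact about the ride, which is the qualitative point above.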
Or the old fashioned thing where you kill off competition and then raise prices.
Upvoted for having a claim with a testable component.

Love the concept of "the ladder".

What advice do you have for applying it in everyday discourse: meetings, interviews, presentations, etc.? In an attempt to be specific with my question, I am not asking about the concept of "being more specific"; rather, I am asking how you can "train" yourself to apply it more frequently and effectively. The goal being that, when you are asked a question, you are able to respond with a more concise and specific answer, which improves your communication effectiveness.

Personally I just have the habit of reaching for specifics to begin my communication to help make things clear. This post may help.

"Love your neighbour" is also not specific. Very many good things aren't. It's ok. You don't have to play chess at all until you discuss interventions.

Meaningful claims don't have to be specific; they just have to be able to be substantiated by a nonzero number of specific examples. Here's how I imagine this conversation: Chris: Love your neighbor! Liron: Can you give me an example of a time in your life where that exhortation was relevant? Chris: Sure. People in my apartment complex like to smoke cigarettes in the courtyard and the smoke wafts up to my window. It's actually a nonsmoking complex, so I could complain to management and get them to stop, but I understand the relaxing feeling of a good smoke, so I let them be. Liron: Ah I see, that was pretty accommodating of you. Chris: Yeah, and I felt love in my heart for my fellow man when I did that. Liron: Cool beans. Thanks for helping me understand what kind of scenarios you mean for your exhortation to "love your neighbor" to map to.
2Mary Chernyshenko
It's mapping a river system to a drop. Just because something is technically possible and topologically feasible doesn't make it a sensible thing to do.
I’m not saying “mapping a big category to a single example is what it’s all about”. I’m saying that it’s a sanity check. Like why wouldn’t you be able to do that? Yet sometimes you can’t, and it’s cause for alarm.

As I am but a vain monkey, I have often turned my mind to this post hoping to win arguments, but have not succeeded. The missing step seems to be the one here:

Steve: Ok… A single dad whose kid gets sick. He works for Uber and he doesn’t even get health insurance, and he’s maxed out all his credit cards to pay for doctor’s visits. The next time his car breaks down, he won’t even be able to fix it. Meanwhile, Uber skims 25% of every dollar so he barely makes minimum wage. You should try living on minimum wage so you can see
...
Hmm, can you give an example of the kind of back & forth you find yourself having? The technique in this post demolishes a common type of bad argument, in which the arguer's claim sounds meaningful when phrased in abstract terms, but turns out to be meaningless when viewed at a higher specificity level. In your experience as I understand it, your discussion partner furnishes an example per your request, and the example seems like a valid illustration of their general claim, not something that dissolves into nothing when you try to repeat back what they're trying to say? In that case, it sounds like their claim might not be a meaningless one like Steve's. But at least you can use the example to help your thought process about whether the general claim is right.
I think the Uber conversation with Steve is a good example. Say Steve describes this single dad who's having a hard time. I'm like, "Yeah, that does sound bad", rather than linking back to the context and trying to establish whether Uber is blameworthy. Similarly, the specific contrast with Uber's nonexistence is not the obvious move to me; I would likely get into what Uber should do instead, which feels more doomy.
If a single dad is having a hard time with Uber, it seems relevant to ask counterfactually about the same dad if Uber didn’t exist. To some degree you have to keep this in mind and not let the conversation be steered away. Part of the “zooming in” operation I describe involves holding your mental camera steady :)

1) There is a risk in looking at concrete examples before understanding the relevant abstractions. Your Uber example relies on the fact that you can both look at his concrete example and know you're seeing the same thing. This condition does not always hold, as often the wrong details jump out as salient.

To give a toy example, if I were to use the examples "King cobra, Black mamba" to contrast with "Boa constrictor, Anaconda" you'd probably see "Ah, I get it! Venomous snakes vs non-venomous snakes", but that's …

Re (3): Well, the whole example is a fictional pastiche. I didn't force myself to make it super real because I didn't think people would doubt that it was sufficiently realistic. If you want to know a real example of a Steve, it's me a bunch of times when I first talked to Anna Salamon about various subjects.
Okay, I thought that might be the case, but I wasn't sure, because the way it was worded made it sound like the first interaction was real. ("You can see I was showing off my mastery of basic economics." doesn't have any "[in this hypothetical]" clarification, and "This seemed like a good move to me at the time" also seems like something that could happen in real life but an unusual choice for a hypothetical.) To clarify, though, it's not quite "doubt that it's sufficiently realistic". Where your simulated conversation differs from my experience is easily explained by differing subcommunication and preexisting relationships, so it's not "it doesn't work this way" but "it doesn't *have to* work this way". The other part of it is that even if the transcript were exactly something that happened, I don't see any satisfying resolution. If it ended in "Huh, I guess I didn't actually have any coherent point after all", it would be much stronger evidence that they didn't actually have a coherent point -- even if the conversation were entirely fictional but plausible.
Ok I think I see your point! I've edited the dialogue to add: