I've noticed that when people are asked to "steelman" a position, they sometimes instead do what I would call "straw-steelmanning". Someone can also straw-steelman without having been asked to steelman, or without having said that they would do so.

What is straw-steelmanning? Assume someone makes an argument X for a claim C, and you are arguing against X.

  • Straw-manning (bad): You replace X with a weaker argument Y and argue against that, pretending that you have thereby refuted X.
  • Steel-manning (good): You replace X with a stronger argument Y which still contains the core of X and argue against that, thereby actually refuting X. (The term can also be used in contexts where you are not actually arguing against the claim C.)
  • Straw-steelmanning (bad): You replace C with an entirely different claim D and make an argument Y for it which you consider stronger than X, pretending that you no longer need to argue against C.

An example which I have noticed is something like the following:

  • "Can you steelman the position that future AI systems will pose an existential risk?"
  • "Well, while we should not take these Hollywood movie plots seriously, there are real social problems with AI that we have to deal with. AI will potentially cause massive inequality, because entire industries will be automated and a small number of corporations will own the AI tools that facilitate that. AI engineers will earn large wages, while demand for other professions stagnates. Moreover, we need to worry about biases in AI systems, because [etc, proceeds to argue more]"

This is a straw-steelman, because they have

  • Simply bypassed the original claim, replacing it with a different claim that they already agreed with.
  • Proceeded to argue for that claim, ignoring the original.


2 comments:

That's a good point: before steelmanning someone's position, one has to understand it properly, including where the person is coming from; a sort of topical Turing test.

That's charity, which is potentially useful for participating in the conversation, but not so much for steelmanning considered on its own. Taking inspiration from a mysterious utterance to create something more natural from your own perspective doesn't require engaging with the intent of the utterance. The rhetorical use of steelmanned ideas doesn't have to be baked into the concept.
