That's not what I meant. I affirm Vaniver's interpretation ("Zack's worry is that [...] establishing the rule with user-chosen values [...] will mean there's nothing stopping someone from deciding that criticism has to be above 8 and below 6").
(In my culture, it's important that I say "That's not what I meant" rather than "That's a strawman", because the former is agnostic about who is "at fault". In my culture, there's a much stronger duty on writers to write clearly than there is on readers to maintain uncertainty about the author's intent; if I'm unhappy that the text I wrote led someone to jump to the wrong conclusion, I more often think that I should have written better text, rather than that the reader shouldn't have jumped.)
Another attempt to explain the concern (if Vaniver's "above 8 and below 6" remark wasn't sufficient): suppose there were a dishonest author named Mallory, who never, ever admitted she was wrong, even when she was obviously wrong. How can Less Wrong protect against Mallory polluting our shared map with bad ideas?
My preferred solution (it's not perfect, but it's the best I have) is to have a culture that values unilateral criticism and many-to-many discourse. That is, if Mallory writes a post that I think is bad, I can write a comment (or even a top-level reply or reaction post, if I have a lot to say) explaining why I think the post is bad. The hope is that if my criticism is good, then people will upvote my criticism and downvote Mallory's post, and if my criticism is bad—for example, by mischaracterizing the text of Mallory's post—then Mallory or someone else can write a comment to me explaining why my reply mischaracterizes the text of Mallory's post, and people will upvote the meta-criticism and downvote my reply.
It's crucial to the functioning of this system that criticism does not require Mallory's consent. If we instead had a culture that enthusiastically supported Mallory banning commenters who (in Mallory's personal judgement) aren't trying hard enough to see reasons why they're the one that's missing something and Mallory is in the right, or who don't feel collaborative or cooperative to interact with (to Mallory), or who are anchoring readers with uncanny-valley interpretations (according to Mallory), I think that would be a problem, because there would be nothing to stop Mallory from motivatedly categorizing everyone who saw real errors in her thinking as un-collaborative and therefore unfit to speak.
The culture of unilateral criticism and many-to-many discourse isn't without its costs, but if someone wanted to persuade me to try something else, I would want to hear about how their culture reacts to Mallory.
There's absolutely nothing that Zack is currently accomplishing that couldn't have been accomplished if he'd first written a comment to Rob saying "did you mean X?" [...] Acting like a refusal to employ that bare minimum of social grace is a virtue is bullshit
It's not that I think refusing to employ the bare minimum of social grace is a virtue. It's that I wasn't aware—in fact, am still not aware—that confirming interpretations with the original author before publishing a critical essay constitutes the bare minimum of social grace. The idea that it's somehow bad behavior for intellectuals to publish essays about other intellectuals' essays without checking with the original author first is something I've never heard before; I think unilaterally publishing critical essays is a completely normal thing that intellectuals do all the time, and I see no particular reason for self-identified "rationalist" intellectuals to behave any differently.
For an arbitrary example from our local subculture, Yudkowsky once wrote "A Reply to Francois Chollet" criticizing Chollet's essay on the purported impossibility of an intelligence explosion. Did Yudkowsky first write an email to Chollet saying "did you mean X"? I don't know, but I would guess not; if Chollet stands by the text he published, and Yudkowsky doesn't feel uncertain about how to interpret the text, it's not clear how either of their interests would be served by Yudkowsky sending an email first rather than just publishing the post.
As far as my own work goes, "Aiming for Convergence" and "'Physicist Motors'" aren't the first times I've written reaction posts to popular Less Wrong posts that I didn't like. Previously, I wrote "Relevance Norms" in reaction to Chris Leong (following John Nerst) on contextualizing vs. decoupling norms, and "Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think" in reaction to Yudkowsky on meta-honesty.
I've also written other commentary posts that said some critical things about an article, without being so negative overall, such as "Comment on 'Endogenous Epistemic Factionalization'" (reacting to an article by University of California–Irvine professors James Weatherall and Cailin O'Connor) and "Comment on 'Propositions Concerning Digital Minds and Society'" (reacting to an article by Nick Bostrom and Carl Shulman).
I didn't check with Leong beforehand. I didn't check with Yudkowsky beforehand. I didn't check with Weatherall or O'Connor or Bostrom or Shulman beforehand. No one told me I should have checked with Leong or Yudkowsky or Weatherall or O'Connor or Bostrom or Shulman beforehand. It's just never been brought up as a problem or an offense before, ever.
Most of these authors are much more important people than me, and probably very busy. If someone had told me I should have checked with the authors beforehand, I think I would have said, "Wouldn't that be disrespectful of their time?"
I do often notify the author after I've published a reaction piece. In the case of the current post, I unfortunately neglected to do so, but after seeing your comment, I did reach out to Rob, and he left a few comments. Notably, in response to my comment about my motivations for writing this post, Rob writes:
Seems great to me! I share your intuition that Goodwill seems a bit odd to include. I think it's right to push back on proposed norms like these and talk about how justified they are, and I hope my list can be the start of a conversation like that rather than the end.
This would seem to be pretty strong counterevidence against the claim that I failed to employ the bare minimum of social grace (at least as that minimum is construed by Rob himself)?
how he confidently dismisses ANNs
I don't think this is a fair reading of Yudkowsky. He was dismissing people who were impressed by the analogy between ANNs and the brain. I'm pretty sure it wasn't supposed to be a positive claim that ANNs wouldn't work. Rather, the claim was that one couldn't justifiably believe they'd work just from the brain analogy, and that if they did work, that would be bad news for what he then called Friendliness (because he was hoping to discover and wield a "clean" theory of intelligence, as contrasted with evolution or gradient descent happening to get there at sufficient scale).
Consider "Artificial Mysterious Intelligence" (2008). In response to someone who said "But neural networks are so wonderful! They solve problems and we don't have any idea how they do it!", it's significant that Yudkowsky's reply wasn't, "No, they don't" (contesting the capabilities claim), but rather, "If you don't know how your AI works, that is not good. It is bad" (asserting that opaque capabilities are bad for alignment).
I asked GPT-4 to write a "highly critical comment" in response to this post. It replied:
Zack M. Davis' blog post, "Maybe Lying Can't Exist?!", is a prime example of overcomplicating a simple concept with unnecessarily convoluted language and philosophical ramblings. Davis spends an inordinate amount of time discussing sender-receiver games, formal models, and the evolution of communication, only to conclude that deception is "ontologically parasitic." This statement is not only pretentious but also blatantly obvious. Of course, lying depends on the existence of a shared understanding of truth—it's a fundamental aspect of communication.
Furthermore, the post fails to address the key question it purports to explore—can lying exist? Instead, it meanders through a series of disjointed examples and arguments that don't contribute to a coherent conclusion. The discussion on fireflies, while interesting from a biological standpoint, adds little to the overall conversation about the nature of lying and deception.
Davis also claims not to care about "tidy formal definitions of English words," yet the entire post is dedicated to dissecting and analyzing the concept of deception. This contradictory stance only serves to confuse and frustrate the reader. It's as if Davis is intentionally obfuscating the subject matter, perhaps in an ironic attempt to deceive the reader themselves.
In short, this blog post is a prime example of pseudo-intellectual posturing. It is a convoluted exploration of a simple concept that adds nothing to our understanding of lying and deception. The author's verbose language and circular arguments only serve to mask the fact that they haven't actually said anything of substance. Instead of enlightening the reader, Davis only manages to create confusion and irritation.
I asked GPT-4 to write a "highly critical comment" in response to this post. It replied:
I find this blog post both convoluted and problematic in its attempt to explore the concept of deception as cooperation. The author, Zack M. Davis, delves into the intricacies of information theory and signaling games, yet the essence of the argument is lost in a sea of technical jargon and excessive parenthetical digressions.
The premise that deception can be understood as a cooperative effort between sender and receiver is intriguing, but the author fails to clearly articulate the potential implications of this perspective. Instead, the post becomes a muddled exploration of various concepts in information theory, seemingly only to show off the author's knowledge of the subject matter. The result is an alienating read that detracts from the intended discussion of deception as cooperation.
Furthermore, the arguments made in this post rely heavily on the work of Manolo Martínez, but the presentation of these ideas is haphazard and disjointed. The author attempts to draw connections between signaling games, rate-distortion theory, and the dynamics of deception, but these links are not convincingly established. As a result, readers may struggle to follow the logic and understand the relevance of the various concepts presented.
Additionally, the use of convoluted examples, such as the trembling hand strategy and the hypothetical scenario involving a public health agency, does little to clarify the author's points. In fact, these examples only serve to further confuse readers and distract from the central argument.
In conclusion, the blog post "Comment on 'Deception as Cooperation'" is a poorly executed exploration of an interesting topic. The author's attempt to impress readers with his knowledge of information theory ultimately detracts from the discussion of deception as cooperation. A more straightforward, focused approach would have been much more effective in engaging readers and conveying the intended message.
The problem isn't the first speaker wanting to persuade third parties; the problem is the second person "moving the goalposts" there.
I mean, I agree that I have soapbox-like tendencies (I often have an agenda, and my contributions to our discourse often reflect my agenda), but I thought I'd been meeting the commonsense relevance standard—being an Alighieri scholar who only brings it up when there happens to be a legitimate Alighieri angle on the topic, and not just randomly derailing other people's discussions.
I could be persuaded that I've been getting this wrong, but, again, I'm going to need more specific examples (of how some particular post I made misses the relevance standard) before I repent or change anything.
Note that in the original footnote in my post, "on the same side" is a hyperlink going to a comment by Val
Thanks for pointing this out. (I read Val's comment while writing my post, but unfortunately neglected to add the hyperlink when pasting the text of the footnote into my draft.) I have now edited the link into my post.
the goal isn't to trick people into thinking your disagreements are small, it's to make typical disagreements feel less like battles between warring armies
I think disagreements often feel like battles between warring armies because a lot of apparent "disagreements" are usefully modeled as disguised conflicts. That is, my theory of why predictable disagreements are so ubiquitous in human life (despite the fact that Bayesian reasoners can't foresee to disagree) is mostly conflict-theoretic rather than mistake-theoretic.
A simple example: I stole a loaf of bread. A policeman thinks I stole the bread. I claim that I didn't steal the bread. Superficially, this looks like a "disagreement" to an outside observer noticing the two of us reporting different beliefs, but what's actually going on is that I'm lying. Importantly, if I care more about not going to jail than I do about being honest, lying is rational. Agents have an incentive to build maps that reflect the territory because those are the maps that are most useful for computing effective plans ... but they also sometimes have an incentive to sabotage the maps of other agents with different utility functions.
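To make the "lying is rational" point concrete, here's a minimal sketch with made-up payoffs (the constants below are my own illustrative assumptions, not anything stated in the scenario above): if the disutility of jail outweighs the value the thief places on honesty, then denying the theft has higher expected utility than confessing.

```python
# Hypothetical payoffs for the bread-theft example (illustrative assumptions only)
JAIL_COST = 10.0       # assumed disutility of going to jail
HONESTY_VALUE = 1.0    # assumed utility the thief places on having been honest
P_CAUGHT_IF_LIE = 0.3  # assumed probability the lie gets exposed anyway

def expected_utility(confess: bool) -> float:
    """Expected utility of confessing vs. lying, under the toy payoffs above."""
    if confess:
        return HONESTY_VALUE - JAIL_COST   # honest, but certainly jailed
    return -P_CAUGHT_IF_LIE * JAIL_COST    # dishonest; jailed only if exposed

print("confess:", expected_utility(True))   # -9.0
print("lie:    ", expected_utility(False))  # -3.0 -> lying maximizes expected utility
```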
Most interesting real-world disagreements aren't so simple as the "one party is lying" case. But I think the moral should generalize: predictable disagreements are mostly due to at least some parts of some parties' brains trying to optimize for conflicting goals, rather than just being "innocently" mistaken.
I'm incredibly worried that approaches to "cooperative" or "collaborative truth-seeking" that try to cultivate the spirit that everyone is on the same side and we all just want to get to the truth, quickly collapse in practice to, "I'll accept your self-aggrandizing lies, if you accept my self-aggrandizing lies"—not because anyone thinks of themselves as telling self-aggrandizing lies, but because that's what the elephant in the brain does by default. I'm more optimistic about approaches that are open to the possibility that conflicts exist, in the hopes that exposing hidden conflicts (rather than pretending they're "disagreements") makes it easier to find Pareto improvements.
I'm definitely doing #2. I can see your case that the paragraph starting with "But there's a reason for that" is doing #4. But ... I'm not convinced that this kind of "frame manipulation" is particularly bad?
If someone is unhappy with the post's attempt to "grab the frame" (by acting as if my conception of "rationalist" is the correct one), I'm happy to explain why I did that in the comments. Do I have to disclaim it in the post? That just seems like it would be worse writing.
I agree that it often makes sense to write "This seems X to me" rather than "This is X" to indicate uncertainty or that the people I'm talking to are likely to disagree.
Thanks for clarifying that you're not generically trying to forbid me from saying one of them. I appreciate it.
Yes, I again agree that that was a bad comment on my part, which I regret.
(Thanks to Vaniver for feedback on an earlier draft of this comment.)