Of course, if that's the only defense they offer and they don't bother refuting any of the actual accusations in any substantial way, that's certainly very suspicious. But then the suspicious thing is more the lack of an object-level response rather than the presence of a defensive response.
Yeah, I'm starting with this part of your response because I agree and think it is good to have clear messaging on the most unambiguously one-directional ("guilty or not") pieces of evidence. Nothing comes close to having persuasive responses to the most load-bearing accusations.
"It's fine to be outraged/go on the counterattack, but it becomes suspicious if you use this to deflect from engaging with the evidence against you" seems like a good takeaway.
What shouldn't happen is that onlookers give someone a pass because of reasoning that goes as follows: "They seem to struggle with insecurity, and getting accused is hard, so it's okay that they're deeming it all so outrageous that it's beneath them to engage more on the object-level." Or, with less explicit reasoning, but still equally suboptimal, would be an onlooker reaction of, "This is just how this person responds to accusations; I will treat this as a fact of the world," combined with the onlookers leaving it at that and not flagging it as unfortunate (and suspiciously convenient) that the accused will now not do their best to gather information they can voluntarily disclose to immediately shed more light on their innocence.
Basically, the asymmetry is that innocent people can often (though not always) disclose information voluntarily that makes their innocence more clear/likely. That's the best strategy if it is available to you. It is never available to guilty people, but sometimes available to innocent people.
(In fact, this trope is overused in the show "Elementary" and once I realized it, it became hard to enjoy watching the show because it's usually the same formula for the short self-contained episodes: The initial one, two, or three suspects will almost always be red herrings, and this will become clear quickly enough because they will admit to minor crimes that make clear that they would have lacked the motive for the more serious crime, or they would admit something surprising or embarrassing that is verifiable and gives them an alibi, etc.)
So, anything that deflects from this is a bit suspicious! Justifiably accused "problem people" will almost always attempt counterattacks in one form or another (if not calling into question the accuser's character, then at least their mental health and sanity) because this has a chance of successful deflection.
The following paragraph is less important to get to the bottom of because I'm sure we both agree that the evidence is weak at best no matter what direction it goes in, but I still want to flag that I have opposite intuitions from you about the direction of evidence.
My sense is still that the strategy "act as though you've been attacked viciously by a person who is biased against you because they're bad" does weakly (or maybe even moderately, but with important exceptions) correlate with people being actually guilty. That said, that's importantly different from your example of "being able to dig up accusation-relevant dirt". I mean, it depends what we're picturing... I agree that "this police officer accusing me has been known to take bribes and accuse innocent people before" is quite relevant and concerning. By contrast, something that would seem a lot less relevant (and therefore go in the other direction, evidence-wise) would be something like, "the person who accused me of bad behavior had too much to drink on the night in question." Even if true, that's quite irrelevant, because problem people may sometimes pick out victims precisely because they are drunk (or otherwise vulnerable), and also because "having too much to drink" doesn't usually turn reliable narrators into liars, so the fact that someone being drunk is the worst that can be said about them is not all that incriminating.
When you say defensiveness, does that include something like "act as though you've been attacked viciously by a person who is biased against you because they're bad"? Because that, to me, is the defensiveness behavior I'd find the most suspicious (other facets of defensiveness less so).
The problem with the "immediately focus on maximally discrediting the accusers" move is that it is awfully close to the tactic that actually guilty people might want to use to discredit or intimidate their accusers (or, in movies, discredit law enforcement that has good reasons for asking questions/being suspicious).
Of course, in complex interpersonal contexts, it's often the case that accusers are in fact the troublemakers (and maybe every once in a blue moon, law enforcement asking what they say are "standard" questions might be part of a conspiracy to frame you), so the behavior is only suspicious when there's a perfectly valid explanation as to why people are pointing at you, and you not only do not see it from that perspective (or acknowledge that you're seeing it), but you then put on behavior designed to make onlookers believe that something incredibly outrageous has just happened to you.
One admittedly confounding factor is "honor culture" -- not a big thing in LW circles, but if we're thinking of movies where people get arrested or asked accusatory questions in regions or cultures where one's reputation is really important, and being accused of something is seen as a massive insult, then I can understand that this is a strong confounding factor (to actually being guilty).
The rule itself sounds reasonable but I find it odd that it would come up often enough. Here's an alternative I have found useful: Disengage when people are stubborn and overconfident. It seems like a possible red flag to me if an environment needs rules for how to "resolve" factual disagreements. When I'm around reasonable people I feel like we usually agree quite easily what qualifies as convincing evidence.
I think the people who talk as though the contested issue here is Said's disagreeableness combined with him having high standards are missing the point.
Said Achmiz, in contrast, expresses some amount of contempt for people who do fairly specific and circumscribed things like write posts that are vague or self-contradictory or that promote religion or woo.
If it were just that (and if by "posts that are vague" you mean "posts that are so vague that they are bad, or posts that are vague in ways that defeat the point of the post"), I'd be sympathetic to your take. However, my impression is that a lot more posts would trigger Said's "questioning mode." (Personally I'm hesitant to use the word "contempt," but it's fair to say his comments made engaging more difficult for authors and did sometimes involve what I think of as "sneer tone.")
The way I see it, there are posts that might be a bit vague in some ways but are still good and valuable. This could even be because the post was gesturing at a phenomenon with nuances where it would require a lot of writing (and disentanglement work) to make it completely concise and comprehensive, or it could be because an author wanted to share an idea that wasn't 100% fleshed out but might have already been pointing at something valuable. I feel like Said not only has a personal distaste for that sort of "post that contains bits that aren't pinned down," but it also seemed like he wouldn't get any closer to seeing the point of those posts or comments when it was explained in additional detail. (Or, in case he did eventually see the points, he'd rarely say thanks or acknowledge that he got it now.) That's pretty frustrating to deal with for authors and other commenters.
(Having said all that, I have not had any problems with Said's commenting in the last two years -- though I did find it strongly negative and off-putting before that point. And to end with something positive, I liked that Said was one of the few LessWrongers who pushed back a bit against Zvi's very one-sided takes on homeschooling -- context here.)
I really like this post.
In this post, I mostly conflated "being a moderate" with "working with people at AI companies". You could in principle be a moderate and work to impose extremely moderate regulations, or push for minor changes to the behavior of governments.
There's also a "moderates vs radicals" distinction when it comes to attitudes, certainty in one's assumptions, and epistemics, rather than (currently-)favored policies. While some of the benefits you list are hard to get for people who are putting their weight behind interventions to bring about radical change, a lot of the listed benefits fit the theme of "keeping good incentives for your epistemics," and so they might apply more broadly. E.g., we can imagine someone who is "moderate" in their attitudes, certainty in their assumptions, etc., but might still (if pressed) think that radical change is probably warranted.
For illustration, imagine I donate to Pause AI (or join one of their protests with one of the more uncontroversial protest signs), but I still care a lot about what the informed people who are convinced of Anthropic's strategy have to say. Imagine I don't think they're obviously unreasonable, I try to pass their Ideological Turing test, I care about whether they consider me well-informed, etc. If those conditions are met, then I might still retain some of the benefits you list.
I did this conflation mostly because I think that for small and inexpensive actions, you're usually better off trying to make them happen by talking to companies or other actors directly (e.g. starting a non-profit to do the project) rather than trying to persuade uninformed people to make them happen. And cases where you push for minor changes to the behavior of governments have many of the advantages I described here: you're doing work that substantially involves understanding a topic (e.g. the inner workings of the USG) that your interlocutors also understand well, and you spend a lot of your time responding to well-informed objections about the costs and benefits of some intervention.
What about the converse, the strategy for bringing about large and expensive changes? Your not discussing that part makes it seem like you might agree with a picture where the way to attempt large and expensive changes is always to appeal to a mass audience (who will be comparatively uninformed). However, I think it's at least worth considering that promising ways toward pausing the AI race (or some other types of large-scale change) could go through convincing Anthropic's leadership of problems in their strategy (or, more generally, through convincing some other powerful group of subject matter experts). To summarize: whether radical change goes through mass advocacy and virality vs. convincing specific highly-informed groups and experts seems like somewhat of an open question and might depend on the specifics.
[Caveat: Apart from reading roughly five posts and discussion threads about Maple over the years, I have no further context and so am one of the least informed people commenting here. But I think that's okay because my comment will be about how I think your post comes across to onlookers.]
Even though you're overall critical of Maple, I still get the impression that your closeness to them has negatively affected your judgment about some things. But maybe I misread the point of the post. To me, it sounds like you're saying the following: while your impression of Maple is that they traumatize a bit too many people, and that they don't seem to produce sufficiently many "saints" (or, as early indicators of success, "micro saints") for these methods to be worth it, you think it's important (so I infer?) that critics engage with Maple on the object level of the strategy/path to impact that they're pursuing, because that strategy (of producing saints to help save the world) was possibly worth trying ex ante. In other words, critics shouldn't sneer at the strategy itself but rather have to at least consider (problems with) its execution?
Assuming the above is an accurate paraphrase, my reply would be that, no, that sort of strategy never made much sense in the first place, and obviously so. You don't save the world by doing inward-looking stuff at a monastery, and "path to impact through producing saints" seems doubtful because:
(1), people who greatly inspire others almost never started out as followers in a school for how to become inspiring (this is similar to the issues with CFAR, although I'd say it was less outlandish to assume that rationality is teachable rather than sainthood).
(2), even if you could create a bunch of particularly virtuous and x-risk-concerned individuals, the path to impact would remain non-obvious from there, since they'd neither be famous nor powerful nor particularly smart or rational or skilled, so how are they going to have an outsized impact later?
Overall, this strategy does not warrant the "under steep tradeoffs, we should be more willing to accept the costs of harming people" outlook.
I feel like if the recruitment and selling point were less about "come here to have tons of impact," and more about "have you always thought about joining a monastery, but you're also into rationality and x-risk reduction?," then this would be more okay and safer? A group oriented that way would maybe also be more generally relaxed and "have Slack," and would be more alarmed if they were causing harm rather than trying to justify it via the potential of having impact. So, I feel like the points you raise sort-of-in-defense of Maple make things worse, because all these attempts at explaining how the mission is important for world-saving are what adds pressure to the Maple environment.
I saw that others have commented about how the bio is an edited meme rather than real, but just on the perception of various personality disorders: I feel like the statements you highlighted would show too much self-endorsement of that interpersonally bleak and exploitative outlook to be typical of (just) BPD. If we had to pick something that the dating profile statements seem typical of, it sounds more like ASPD (maybe together with BPD) to me. If someone only has BPD, it would probably be more typical for them to feel super attached and positive towards their loved ones for at least large parts of the time. And while they might split and end up betraying their loved ones, the person with BPD doesn't typically have the insight to understand that this is a likely thing they might do, so liking drama and being ready to betray others wouldn't be a part of how they see themselves.
Disliking/unendorsing the negative features of one's personality, instead of endorsing them, is an important ingredient for success chances with therapy, which is why BPD by itself is easier to treat than NPD or ASPD, or than combinations where either of those is comorbid with BPD.
Oh, thanks! Yeah, I should reverse my vote, then. I got confused by the sentence structure (and commenting before my morning coffee).
I disagree-voted this comment [edit: reversed now because I misread the comment I'm replying to] because the sort of pushback Said typically gives doesn't remind me of "the good old days" (I think that's a separate thing). But I want to flag that, as someone who's had negative reactions to Said's commenting style in the past, over the past two years or so I noticed several times where I thought he left valuable comments or criticism that felt on point, and I have noticed a lot fewer (possibly zero) instances of "omg this feels uncharitable and nitpicky/deliberately playing dense." So, for my part at least, I no longer consider myself to have strong opinions on this topic.
(Note that I haven't read the recent threads with Gordon Seidoh Worley, so this shouldn't be interpreted as me taking a side on that.)
In movies and series it happens a bunch that people find themselves accused of something due to silly coincidences, as this ramps up the drama. In real life, such coincidences or huge misunderstandings presumably happen very infrequently, so when someone in real life gets accused of serious wrongdoing, it is usually the case that either they are guilty, or their accusers have a biased agenda.
This logic would suggest that you're right about counterattacks being ~equally frequent.
Perhaps once we go from being accused of serious wrongdoing to something more like "being accused of being a kind of bad manager," misunderstandings, such as that the "accuser" just happened to see you on a bad day, become more plausible. In that case, operating from a perspective of "the accuser is reasonable and this can be cleared up with a conversation rather than by counterattacking them" is something we should expect to see more often from actually "innocent" managers. (Of course, unlike with serious transgressions/wrongdoing, being a "kind of bad" manager is more of a spectrum, and part of being a good manager is being open to feedback and willing to work on improving oneself, etc., so these situations are also disanalogous for additional reasons.)