This is a horrible situation, where excessive knowledge of some bad action X could be evidence of being:
Taking all of them together, we get a group that has the best knowledge of X, and which is dangerous to approach, because too many of them are bad actors. (Also, there is a possible overlap between the groups.)
Even worse, if you decide to avoid approaching the group and just study X yourself... you become one of them.
However, having zero knowledge about X makes you an easy victim. So what are we supposed to do?
I guess the standard solution is something like "try to learn about X without coming into too much contact with the teachers (learn from books, radio, TV, internet, or visit a public lecture), and keep your knowledge about X mostly private (do things that reduce the chance of X happening to you, maybe warn your friends about the most obvious mistakes, but do not give lectures yourself)" or sometimes "find a legible excuse to learn about X (join the police force)". Which, again, is something that the bad actor would be happy to do, too.
They say that former criminals make the most efficient cops. I believe it also works the other way round.
I guess Jordan Peterson would say that you cannot become stronger without simultaneously becoming a potential monster. The difference between good and bad actors is how much the "potential" remains potential. Weak (or ignorant) people are less of a threat, but also less of a help.
It could help if you have known the person for a long time, so you could see how the excessive knowledge of X manifests in their actual life. Difficult to do for an online community.
Like I said, I don't have a solution. At least, not one I'm confident and certain of. I have other essays in the pipeline with (optimistically) pieces of it.
I don't think it's doomed. Most security experts a bank would reasonably hire are not bank robbers, you know? I assume that's true anyway, I'm not in that field but somehow my bank account goes un-robbed.
Checking where wildly different spheres agree seems promising. The source of advice here that I trust the most is a social worker I knew for years who hadn't heard of the rationalist community; I went to them and asked, rather than them starting, unprompted (or as part of an argument), to tell me how it should work. Put another way, getting outside perspectives is helpful: if a romantic partner seems like they might be pressuring you, describe it to a friend and see what they say.
It's part of why I spent a while studying other communities, looking to see if there was anything that say, Toastmasters and the U.S. Marines and Burning Man and Worldcon all agreed about.
Most security experts a bank would reasonably hire are not bank robbers, you know?
Yes, it would be useful to know how exactly that happens.
I suspect that a part of the answer is how formal employment and a long-term career change the cost:benefit balance. Like, if you are not employed as a security expert, and rob a bank, you have an X% chance of getting Y money, and a Z% chance of ending up in prison. If you get hired as a security expert, that increases the X, but probably increases the Z even more (you would be the obvious first suspect), and you probably get a nice salary, so that somewhat reduces the temptation of an X% chance at Y. So even if you hire people who are tempted to rob a bank, you kinda offer them a better deal on average?
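A toy version of that calculation, just to make the comparison concrete (every number below is invented purely for illustration, not a real statistic):

```python
# Toy expected-value comparison; every number is a made-up illustration.

def ev_rob(p_success, payoff, p_prison, prison_cost):
    """Crude expected value of attempting the robbery."""
    return p_success * payoff - p_prison * prison_cost

# Outsider: X% chance at Y, Z% chance of prison, nothing else on the line.
outsider_rob = ev_rob(p_success=0.10, payoff=1_000_000,
                      p_prison=0.50, prison_cost=2_000_000)
outsider_stay_honest = 0

# Insider security expert: X goes up, but Z goes up even more (obvious first
# suspect), and staying honest pays a salary they would forfeit.
insider_rob = ev_rob(p_success=0.30, payoff=1_000_000,
                     p_prison=0.85, prison_cost=2_000_000)
insider_stay_honest = 150_000

print(outsider_rob - outsider_stay_honest)  # -900000
print(insider_rob - insider_stay_honest)    # -1550000: an even worse deal
```

So under these (made-up) numbers, hiring the would-be robber makes robbing the bank look worse to them, not better.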
Another part of the answer is distributing the responsibility, and letting the potential bad actors keep each other in check. You don't have one person overseeing all security systems in the bank without any review. One guy places the cameras, another guy checks whether all locations are recorded. One guy knows a password to a sensitive system (preferably different people for different sensitive systems), another guy writes the code that logs all activities in the system. You pay auditors, external penetration testers, etc.
There is also reputation. If someone worked in several banks, and those banks didn't get robbed, maybe it is safe to hire that person. (Or they are playing a long con. Then again, many criminals probably don't have the patience for such long plans.) What about your first job? You probably get a role with less responsibility. And they probably check your background?
...also, sometimes the banks do get robbed; they probably do not always make it public news. So I guess there is no philosophically elegant solution to the problem, just a bunch of heuristics that together reduce the risk to an acceptable level (or rather, we get used to whatever the final level turns out to be).
So... yeah, it makes sense to learn the heuristics... and there will be obvious objections... and some of the heuristics will be expensive (in money and/or time).
I think the amount of cash a bank loses in a typical armed robbery really isn't that large compared to the amounts of money the bank actually handles - bank robbers are a nuisance but not an existential threat to the bank.
The actual big danger to banks comes from insiders; as the saying goes, the best way to rob a bank is to own one.
Most security experts a bank would reasonably hire are not bank robbers, you know? I assume that's true anyway,
If you're good at it, you can purchase the knowledge without giving them a position of power. Intelligence agencies purchase zero-days from hackers on the black market. Foreign spies can be turned with money into double agents.
Any opinion on whether this is a somewhat good solution?
https://www.lesswrong.com/posts/Q3huo2PYxcDGJWR6q/how-to-corner-liars-a-miasma-clearing-protocol
Trying to summarize the method:
I guess this mostly avoids the failure mode where someone uses an argument A to support their theory X, then later, under the weight of evidence B, switches to a theory Y (because B was incompatible with X, but is compatible with Y), and you fail to notice that A is now incompatible with Y... because you vaguely remember that "we talked about A, and there was a good explanation for that".
The admitted disadvantage is that it takes a lot of time.
Makes me empathize with the defender :), but let me tell you, being interrogated in an airport for six hours trying to convince a US immigration agent that I'm an oddball, not a danger, is not fun.
All of this sounds reasonable, on the surface…
And yet I notice that the view that “people who have opinions about how [whatever] should be done are unusually likely to be bad actors who want me to do [whatever] in such a way as to benefit them, therefore I should be suspicious of their motives and suggestions” is memetically adaptive. Whenever you come across this idea, it is to your benefit to immediately adopt it—after all, it means that you will thenceforth need to spend less effort evaluating people’s suggestions and opinions, and have a new and powerful reason to reject criticism. And the idea protects itself: if someone suggests to you that this whole perspective is misguided and harmful, well, aren’t they just maliciously trying to undermine your vigilance?
Anyhow, I am not a meetup czar, so I don’t have to make the decisions that you make. And I don’t go to many meetups, so I am more or less unaffected by those decisions. I do have a bit of experience running communities, though; and, of course, the usual plethora of experience interacting with people who run communities. My own view, on the basis of all of that experience, is this:
Community members should default to the assumption that you are basically the KGB.
Your own approach and policies should work unproblematically even if everyone assumes that you are basically the KGB. (This is especially true if you are not the KGB at all.)
And if your approach to running a community is predicated on the members of that community not treating you as if you are the KGB, then you are definitely the KGB.
And yet I notice that the view that “people who have opinions about how [whatever] should be done are unusually likely to be bad actors who want me to do [whatever] in such a way as to benefit them, therefore I should be suspicious of their motives and suggestions” is memetically adaptive. Whenever you come across this idea, it is to your benefit to immediately adopt it—after all, it means that you will thenceforth need to spend less effort evaluating people’s suggestions and opinions, and have a new and powerful reason to reject criticism. And the idea protects itself: if someone suggests to you that this whole perspective is misguided and harmful, well, aren’t they just maliciously trying to undermine your vigilance?
I have a couple of thoughts here. One is that I don't think this is true for most values of [whatever]. If someone has suggestions about the venue, or the meetup activities, or announcement platforms, I don't think this dynamic is in play. If I get advice on job searching or algebra homework or the best way to bake a loaf of sourdough, I'm not getting nearly as much adverse selection as for conflict resolution from within the community I'm involved in. Who has a motive to subtly sabotage my sourdough?
If someone read this essay and came away with a fully general counterargument against listening to advice on any subject, my guess is there's a big reading comprehension failure happening.
It isn't as clearly a failure of reading comprehension if someone comes away with the idea that they shouldn't listen to any advice on handling conflict specifically, though I think that would also be incorrect. Finding people who are trustworthy, good at handling it well, and willing to teach you is wonderful. I've been trying to learn the most from sources well outside the rationalist community, but I think there is good advice to be had. Just, not uncritically trusted?
Also, some people seem to think this class of problem should be easy. For those people I want to make the point that it is (at least sometimes) an adversarial situation.
Who has a motive to subtly sabotage my sourdough?
Probably nobody, but then again, your sourdough is probably not impinging on anyone’s interests, either. Baking a loaf of sourdough doesn’t really come with opportunities to exploit other people for your own gain, etc. So of course there’s not going to be much controversy.
But whenever there is controversy, usually due to the existence of genuinely competing interests, then motives for sabotage become plausible, whereupon it immediately becomes tempting to declare that those who think that you ought to be doing things differently are just trying to sabotage you.
Finding people who are trustworthy, good at handling it well, and willing to teach you is wonderful. I’ve been trying to learn the most from sources well outside the rationalist community, but I think there is good advice to be had. Just, not uncritically trusted?
Also, some people seem to think this class of problem should be easy. For those people I want to make the point that it is (at least sometimes) an adversarial situation.
I agree, it certainly is an adversarial situation—and not only sometimes, but most of the time. And I agree that you should not uncritically trust advice that you hear from any sources. In fact, you shouldn’t even trust advice that you hear from yourself.
Consider your bank example again. You might think: “hmm, that guy has an odd amount of knowledge of, and/or interest in, internal bank practices and security and so on; suspicious!”. Then you learn that he works at a bank himself, so it turns out that his knowledge and interest aren’t suspicious after all—great, cancel that red flag.
No! Wrong! Don’t cancel it! Put it back! Raise two red flags! (“An analysis by the American Bankers Association concluded that 65% to 70% of fraud dollar losses in banks are associated with insider fraud.”) Suspect everyone, especially the people you’ve already decided to trust!
But of course “suspect” is exactly the wrong word here. If you’re having to suspect people, you’ve already lost.
Consider computer security. I ask about the security software that your company is using to protect your customers’ data—could I see the code? Which cryptographic algorithms do you use? You’re suspicious; what do I need this information for? Who should be allowed to have this sort of knowledge?
And of course the right answer is “absolutely everyone”. It should be fully public. If your setup is such that it even makes sense to ask this question of “who should be allowed to know what cryptographic algorithm we use”, then your security system is a complete failure and nobody should trust you with so much as their mother’s award-winning recipe for potato salad, much less any truly sensitive data.
The way to ensure that you don’t accidentally give the wrong person insider access to your system is to construct a system such that nobody can exploit it by having insider access.
(Another way of putting this is to say that selective methods absolutely do not suffice for ensuring the trustworthiness and integrity of social systems.)
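To make the computer-security point concrete: this is essentially Kerckhoffs's principle, that everything about the system can be public except the key. A minimal sketch, using only the Python standard library (the details are illustrative, not a security recommendation):

```python
import hashlib
import hmac
import secrets

# Kerckhoffs's principle in miniature: the algorithm (HMAC-SHA256) and this
# entire file are public; security rests solely on the secrecy of `key`.
key = secrets.token_bytes(32)  # the only secret in the whole system

def sign(message: bytes) -> bytes:
    """Anyone can read this code; without `key` they still cannot forge tags."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # Constant-time comparison, so even timing leaks nothing useful.
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"transfer $100 to account 42")
assert verify(b"transfer $100 to account 42", tag)
assert not verify(b"transfer $9999 to account 666", tag)
```

A design whose safety depends on keeping the design itself secret from the wrong people has already failed; a good design assumes everyone can see it.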
The same is true for the problem of “from whom to take advice on conflict resolution”. You should not have to figure out the motives of the advice-giver or to decide whether to trust their advice. Your procedure for evaluating advice should work perfectly even if the advice comes from your bitter enemy who wishes nothing more than to see you fail. And then you should apply that same procedure to what you already believe and the practices you are already employing—take the advice that you would give to someone, and ask what you would think of it if it had come to you from someone of whom you suspected that they might be your worst and most cunning enemy. Is your evaluation procedure robust enough to handle that?
If it is not, then any time spent thinking about whether the source of the advice is trustworthy is pointless, because you can’t very well trust someone else more than you trust yourself, and your evaluation procedure is too weak to guard against your own biases. And if it is robust enough, then once again it is pointless to wonder whom you should trust, because you don’t have to trust anyone—only to verify.
you can’t very well trust someone else more than you trust yourself
In certain domains, I absolutely can and will do this, because "someone else" has knowledge and experience that I don't and could not conveniently acquire. For example, if I hire lawyers for my business's legal department, I'm probably not going to second-guess them about whether a given contract is unfair or contains hidden gotchas, and I'm usually going to trust a doctor's diagnosis more than I trust my own. (The shortfalls of "Doctor Google" are well-known, so although I often do "do my own research" I only trust it so much.)
In certain domains, I absolutely can and will do this, because “someone else” has knowledge and experience that I don’t and could not conveniently acquire.
And how do you choose who the “someone else” is?
Honestly? By going to the list of doctors that my health insurance will pay for, or some other method of semi-randomly choosing among licensed professionals that I hope doesn't anti-correlate with the quality of their advice. There are probably better ways, but I don't know what they are offhand. ::shrug::
If you were accused of a crime and intended to plead not guilty, how would you choose a defense attorney, assuming you weren't going to use a public defender?
So you trust yourself to decide how to select a doctor; you trust your decision procedure, which you have chosen.
If you were accused of a crime and intended to plead not guilty, how would you choose a defense attorney, assuming you weren’t going to use a public defender?
I’d ask trusted friends for recommendations, because I trust myself to know whom to ask, and how to evaluate their advice.
And of course the right answer is “absolutely everyone”. It should be fully public. If your setup is such that it even makes sense to ask this question of “who should be allowed to know what cryptographic algorithm we use”, then your security system is a complete failure and nobody should trust you with so much as their mother’s award-winning recipe for potato salad, much less any truly sensitive data.
This makes sense for computer security, but for biosecurity it doesn't work, because it's a lot harder to ship a patch to people's bodies than to people's computers. The biggest reason there has never been a terrorist attack with a pandemic-capable virus is that, with few exceptions (such as smallpox), we don't know what they are.
A: My understanding is that the U.S. Government is currently funding research programs to identify new potential pandemic-level viruses.
K: Unfortunately, yes. The U.S. government thinks we need to learn about these viruses so we can build defenses — in this case vaccines and antivirals. Of course, vaccines are what have gotten us out of COVID, more or less. Certainly they’ve saved a ton of lives. And antivirals like Paxlovid are helping. So people naturally think, that’s the answer, right?
But it’s not. In the first place, learning whether a virus is pandemic capable does not help you develop a vaccine against it in any way, nor does it help create antivirals. Second, knowing about a pandemic-capable virus in advance doesn’t speed up research in vaccines or antivirals. You can’t run a clinical trial in humans on a new virus of unknown lethality, especially one which has never infected a human — and might never. And given that we can design vaccines in one day, you don’t save much time in knowing what the threat is in advance.
The problem is there are around three to four pandemics per century that cause a million or more deaths, just judging from the last ones — 1889, 1918, 1957, 1968 and 2019. There’s probably at least 100 times as many pandemic-capable viruses in nature — it’s just that most of them never get exposed to humans, and if they do, they don’t infect another human soon enough to spread. They just get extinguished.
What that means is if you identify one pandemic-capable virus, even if you can perfectly prevent it from spilling over and there’s zero risk of accidents, you’ve prevented 1/100 of a pandemic. But if there’s a 1% chance per year that someone will assemble that virus and release it, then you’ve caused one full pandemic in expectation. In other words, you’ve just killed more than 100 times as many people as you saved.
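Spelling out the arithmetic in that last step (a sketch using only the figures quoted above; the 100-year horizon is my reading of "one full pandemic in expectation"):

```python
# Back-of-the-envelope version of the quoted argument. The first two numbers
# come from the quote above; the horizon is an assumption made explicit.

candidate_viruses_per_pandemic = 100  # "at least 100 times as many" candidates
p_malicious_release_per_year = 0.01   # "a 1% chance per year that someone will assemble ... and release it"
horizon_years = 100                   # assumed horizon for "in expectation"

# Benefit of identifying one virus and perfectly preventing its spillover:
pandemics_prevented = 1 / candidate_viruses_per_pandemic          # 0.01

# Cost: expected deliberate releases of that now-identified virus:
pandemics_caused = p_malicious_release_per_year * horizon_years   # 1.0

print(pandemics_caused / pandemics_prevented)  # 100.0: ~100x more harm than benefit
```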
I would be delighted to have the social equivalent of a zero trust conflict resolution system that everyone who interacted with it could understand and where the system could also maintain confidentiality as needed. I'm in favour of the incremental steps towards that I can make. In the abstract, I agree the procedure for evaluating advice should work even if it comes from bitter enemies. I do not think my personal evaluation procedure is currently robust enough to handle that, though tsuyoku naritai, someday maybe it will be.
The main context I encounter these problems is in helping local ACX meetup organizers. Some of them first found the blog a few months ago, ran a decent ACX Everywhere that blossomed into a regular meetup group, and then a conflict happened. I want good advice or structures to hand to them, and expecting them to be able to evaluate my advice to that standard seems unreasonable. It's likely that at least one and possibly all of the local belligerents will have suggestions, and those suggestions will conveniently favour the advice-giver.
One way to read this essay, which I would endorse as useful, is as one answer to the question "why do all the people in this conflict I find myself in have such different ideas of the procedure we should use to resolve it?"
I’m in favour of the incremental steps towards that I can make. In the abstract, I agree the procedure for evaluating advice should work even if it comes from bitter enemies. I do not think my personal evaluation procedure is currently robust enough to handle that, though tsuyoku naritai, someday maybe it will be.
Yes, but in the absence of this, every other approach is doomed to failure. And the severity of the failure will be inversely proportional to how seriously the people with the power and authority take this problem, and to how much effort they put into addressing it.
I want good advice or structures to hand to them, and expecting them to be able to evaluate my advice to that standard seems unreasonable.
Respectfully, I disagree. I think that this is the only standard that yields workable results. If it cannot be satisfied even approximately, even in large part (if not in whole), then better not to begin.
I'm trying to come up with people that I think actually reach the standard you're describing. I think I know maybe ten, of which two have any time or interest in handling meetup conflicts.
I do agree there's some big failures that can happen when the people with authority to solve the problem take it very seriously, put a lot of effort into addressing it, and screw up. I don't agree that relationship is inversely proportional; if I imagine, say, a 0 effort organizer who does nothing vs a 0.1 effort organizer who only moderates to say "shut up or leave" to attendees who keep yelling that their political opponents should be killed, this seems like an improvement. There's a lot of low hanging fruit here.
It's possible "even approximately, even in large part" covers a much greater range than I'm interpreting it as and your standard is lower than it sounds. If not, I think we're at an impasse of a disagreement. I think that if nobody does any conflict resolution at all unless they are that good of an evaluator, all but a vanishingly small number of spaces will become much worse. We're talking on LessWrong, I do not think the moderators here are at that level, and yet the space is much improved relative to other places. Seems like 4chan decided better not to begin, and I like LessWrong more.
I do agree there’s some big failures that can happen when the people with authority to solve the problem take it very seriously, put a lot of effort into addressing it, and screw up. I don’t agree that relationship is inversely proportional; if I imagine, say, a 0 effort organizer who does nothing vs a 0.1 effort organizer who only moderates to say “shut up or leave” to attendees who keep yelling that their political opponents should be killed, this seems like an improvement. There’s a lot of low hanging fruit here.
Er, sorry, I think you might’ve misread my comment? What I was saying was that the more seriously the people with the power and authority take the problem, the better it is. (I think that perhaps you got the direction backwards from how I wrote it? Your response would make sense if I had said “directly proportional”, it seems to me.)
I think that if nobody does any conflict resolution at all unless they are that good of an evaluator, all but a vanishingly small number of spaces will become much worse. We’re talking on LessWrong, I do not think the moderators here are at that level, and yet the space is much improved relative to other places. Seems like 4chan decided better not to begin, and I like LessWrong more.
“Better not to begin” wouldn’t be “4chan”, it would be “nothing”.
I agree that the moderators on Less Wrong aren’t quite at the level we’re talking about, but they’re certainly closer than most people in most places. (And many of what I perceive to be mistakes in moderation policy are traceable to the gap between their approach, and the sort of approach I am describing here.) At the very least, it’s clear that the LW mods have considerable experience with having to evaluate advice that does, in fact, come from their (our) enemies.
Er, sorry, I think you might’ve misread my comment? What I was saying was that the more seriously the people with the power and authority take the problem, the better it is. (I think that perhaps you got the direction backwards from how I wrote it? Your response would make sense if I had said “directly proportional”, it seems to me.)
"And the severity of the failure will be inversely proportional to how seriously the people with the power and authority take this problem, and to how much effort they put into addressing it."
Hrm. Yes, I seem to have read it differently, apologies. I think I flipped the sign on "the severity of the failure" where I interpreted it as the failure being bigger the more seriously people with power and authority took the problem.
“Better not to begin” wouldn’t be “4chan”, it would be “nothing”.
I agree that the moderators on Less Wrong aren’t quite at the level we’re talking about, but they’re certainly closer than most people in most places.
Yeah. I prefer having LessWrong over having nothing in its place. I even prefer having LessWrong over having nothing in the place of everything shaped like an internet forum.
Do the LW mods pass your threshold for good enough it's worth beginning? I think a lot of my incredulity here comes from trying to figure out how big that gap is, though in terms of the specific problem I'm trying to solve I think I need to take as a premise that I start with whatever crop of ACX organizers I'm offered by selection effects.
Do the LW mods pass your threshold for good enough it’s worth beginning?
Well… hard to say. The LW mods now pass that threshold[1], but then again they’re not beginning now; they began eight years ago.
I think a lot of my incredulity here comes from trying to figure out how big that gap is, though in terms of the specific problem I’m trying to solve I think I need to take as a premise that I start with whatever crop of ACX organizers I’m offered by selection effects.
Yes… essentially, this boils down to a pattern which I have seen many, many times. It goes like this:
A: You are trying to do X, which requires Y. But you don’t have Y.
B: Well, sure, I mean… not exactly, no. I mean, mostly, sort of… (a bunch more waffling, eventually ending with…) Yeah, we don’t have Y.
A: So you can’t do X.
B: Well, we have to, I’m afraid…
A: That’s too bad, because you can’t. As we’ve established.
B: Well, what are we going to do, just not do X?
A: Right.
B: Unacceptable! We have to!
A: You are not going to successfully do X. That will either be because you stop trying, or because you try but fail.
B: Not doing X is not an option!
B tries to do X
B fails to do X, due to the lack of Y
A: Yep.
B: Well, we have to do our best!
A: Your best would be “stop trying to do X”.
B ignores A, continues trying to do X and predictably failing, wasting resources and causing harm indefinitely (or until external circumstances terminate the endeavor, possibly causing even more harm in the process)
In this case: a bunch of people who are completely unqualified to run meetups are trying to run meetups. Can they run meetups well? No, they cannot. What should they do? They should not run meetups. Then who will run the meetups? Nobody.
Now, while reading the above, you might have thought: “obviously B should be trying to acquire Y, in order to successfully do X!”. I agree. But that does not look like “do X anyway, and maybe we’ll acquire Y in the process”. (Y, in this case, is “the skills that we’ve been discussing in this comment thread”.) It has to be a goal-directed effort, with the explicit purpose of acquiring those skills. It can be done while also starting to actually run meetups, but only with an explicit awareness and serious appreciation of the problem, and with serious effort being continuously put in to mitigate the problem. And the advice for prospective meetup organizers should tackle this head-on, not seek to circumvent it. And there ought to be “centralized” efforts to develop effective solutions which can then be taught and deployed.
You might say: “this is a high bar to clear, and high standards to meet”. Yes. But the standards are not set by me, they are set by reality; and the evidence of their necessity has been haunting us for basically the entirety of the “rationalist community”’s existence, and continues to do so.
Approximately, anyway. There’s a bunch of mods, they’re not all the same, etc. ↩︎
Well… hard to say. The LW mods now pass that threshold[1], but then again they’re not beginning now; they began eight years ago.
My sense is that if the mods had waited to start trying to moderate things until they met this threshold, they wouldn't wind up ever meeting it. There's a bit of, if you can't bench press 100lbs now, try benching 20lbs now and you'll be able to do 100lbs in a couple years, but if you just wait a couple years before starting you won't be able to then either.
Ideally there's a way to speed that up and among the ideas I have for that is writing down some lessons I've learned in big highlighter. I'm pretty annoyed at how hard it is to get a good feedback loop and get some real reps in here.
Yes… essentially, this boils down to a pattern which I have seen many, many times. It goes like this:
...
In this case: a bunch of people who are completely unqualified to run meetups are trying to run meetups. Can they run meetups well? No, they cannot. What should they do? They should not run meetups. Then who will run the meetups? Nobody.
There are circumstances where trying and failing is very bad. If someone is trying to figure out heart surgery, I think they should put the scalpel down and go read some anatomy textbooks first, maybe practice on some cadavers, medical school seems a good idea. I do not think meetups are like this and I do not think the majority of the organizers are completely unqualified; even if they're terrible at the interpersonal conflict part they're often fine at picking a location and time and bringing snacks. That makes them partially qualified.
The -2std failure case is something like, they announced a time and place that's inconvenient, then show up half an hour late and talk over everyone, so not many people come and attendees don't have a good time. This is not great and I try to avoid that outcome where I can, but it's not so horrible that I'd give up ten average meetups to prevent it. Worse outcomes do happen where I do get more concerned.
It's possible you have a higher bar or a different definition of what a rationalist meetup ought to be? I'm on board with a claim something like "a rationalist meetup ought to have some rationality practiced" and in practice something like (very roughly) a third of the meetups are pure socials and another third are reading groups. Which, given my domain is ACX groups, isn't that surprising. Conflict can come for them anyway.
Hrm. Maybe a helpful model here is I'm trying to reduce the failure rate? The perfect spam filter bins all spam and never bins non-spam. If someone woke up, went to work, and improved the spam filter such that it let half as much spam through, that would be progress. If because of my work half the [organizers that would have burned out/ attendees who would have been sadly driven away/ maleficers who would have caused problems] have a better outcome, I'll call it an incremental victory.
And there ought to be “centralized” efforts to develop effective solutions which can then be taught and deployed.
*waves* Hi, one somewhat central fellow, trying to develop some effective solution I can teach. I don't think I'm the only one (as usual I think CEA is ahead of me) but I'm trying. I didn't write much about this for the first year or two because I wasn't sure which approaches worked and which advisors were worth listening to. Having gone around the block a few times, I feel like I've got toeholds, at least enough to hopefully warn away some fool's mates.
fwiw, these are what I'd say 2std failure cases of a rationalist meetup look like
https://www.wired.com/story/delirious-violent-impossible-true-story-zizians/
https://variety.com/2025/tv/news/julia-garner-caroline-ellison-ftx-series-netflix-1236385385/
https://www.wired.com/story/book-excerpt-the-optimist-open-ai-sam-altman/
(Ways my claim could be false: there could have been way more than 150 rationalist meetups, so that these are lower than 2 std, or these could not have, at any point in their development, counted as rationalist meetups, or ziz, sam, and eliezer could have intended these outcomes, so these don't count as failures)
I think of Ziz and co as less likely than 2std out, for about the reasons you give. I tend to give 200 as the rough number of organizers and groups, since I get a bit under that for ACX Everywhere meetups in a given season. If we're asking per-event, Dirk's ~5,000 number sounds low (off the top of my head, San Diego does frequent meetups but only the ACX Everywheres wind up on LessWrong, and there are others like that) but I'd believe 5,000~10,000.
You're way off on the number of meetups. The LW events page has 4684 entries (kudos to Said for designing GreaterWrong such that one can simply adjust the URL to find this info). The number will be inflated by any duplicates or non-meetup events, of course, but it only goes back to 2018 and is thus missing the prior decade+ of events; accordingly, I think it's reasonable to treat it as a lower bound.
There are circumstances where trying and failing is very bad. If someone is trying to figure out heart surgery, I think they should put the scalpel down and go read some anatomy textbooks first, maybe practice on some cadavers, medical school seems a good idea. I do not think meetups are like this and I do not think the majority of the organizers are completely unqualified; even if they’re terrible at the interpersonal conflict part they’re often fine at picking a location and time and bringing snacks. That makes them partially qualified.
FWIW, my experience is that rationalist meetup organizers are in fact mostly terrible at picking a location and at bringing snacks. (That’s mostly not the kind of failure mode that is relevant to our discussion here—just an observation.)
Anyhow…
The −2std failure case is something like, they announced a time and place that’s inconvenient, then show up half an hour late and talk over everyone, so not many people come and attendees don’t have a good time. This is not great and I try to avoid that outcome where I can, but it’s not so horrible that I’d give up ten average meetups to prevent it. Worse outcomes do happen where I do get more concerned.
All of this (including the sentiment in the preceding paragraph) would be true in the absence of adversarial optimization… but that is not the environment we’re dealing with.
(Also, just to make sure we’re properly calibrating our intuitions: −2std is 1 in 50.)
It’s possible you have a higher bar or a different definition of what a rationalist meetup ought to be? I’m on board with a claim something like “a rationalist meetup ought to have some rationality practiced” and in practice something like (very roughly) a third of the meetups are pure socials and another third are reading groups.
No, I don’t think that’s it. (And I gave up on the “a rationalist meetup ought to have some rationality practiced” notion a long, long time ago.)
Not sure whether you mean a specific "you" or a general "you" when you talk about assuming some "you" is the KGB. I do think it's useful to build a system which does not assume the watchman is perfectly trustworthy and good. In my own case, one of the first things I did once I started to realize how tricky this part of my role might be was write down a method for limited auditing of myself. That said:
Your own approach and policies should work unproblematically even if everyone assumes that you are basically the KGB. (This is especially true if you are not the KGB at all.)
I'm not sure how literally to take the "unproblematically" adverb here. If you're being literal, then I disagree; part of my thesis here is that sometimes there will be as many problems as enemy action can cause, and they will be able to cause some problems.
(If you're on the lookout for a fully general counterargument, here's one I haven't found a way around! This theory treats occasional strident complaints about the way a resolution system is operating as very little evidence that the system is operating badly, because one would expect occasional bad actors to try shaking everyone's trust in the system even if it was a good system. And yes, that is such a suspicious theory for me in particular to put forward. Dunno what to tell you here.)
And yes, that is such a suspicious theory for me in particular to put forward.
Indeed. But really, I wouldn’t say “suspicious”, exactly; I’d say “yes, it makes perfect sense that you would say this”. This isn’t even an accusation, or anything like that. It’s just the logical outcome of the setup.
The question is, can a bad actor shake everyone’s trust in the system? If they can, then is it really a good system?
The best answer to “should I trust you[r system]?” isn’t “yes, you should, and here is why”. It’s “you don’t have to”.
It’s “you don’t have to”.
My current best guess about what you are trying to say is something like this: "People should give up on the idea of making systems that are resilient against bad actors on both sides. You should just give unlimited power to one side (the moderator, the meetup czar, the police...) and that's it. Now at least the system is resilient against bad actors on one side."
EDIT: Never mind, after reading your other comments, I guess you believe that community moderation can be solved by an algorithm. Ok, I might believe it if you show me the code.
My current best guess about what you are trying to say is something like this: “People should give up on the idea of making systems that are resilient against bad actors on both sides. You should just give unlimited power to one side (the moderator, the meetup czar, the police...) and that’s it. Now at least the system is resilient against bad actors on one side.”
Uh… no. Definitely not.
I guess you believe that community moderation can be solved by an algorithm.
… what in the world?
No, I don’t believe anything like this.
Honestly, it would be hard to get further from my views on this subject than what you’ve described…
Here is some commentary on my views on this subject, framed with a bit of literary analysis. (This was inspired by a semi-recent rereading of a pair of Alexander Wales stories, namely Shadows of the Limelight and The Metropolitan Man. NOTE: This comment contains spoilers… well, a spoiler… but only a very mild and general one.)
The key point is: Superman is evil.
For anyone who hasn’t read The Metropolitan Man: it’s a Superman fanfic, wherein Superman is revealed to be evil. That is: it’s not an AU fic where Superman, instead of being good like he normally is, is instead evil; rather, it’s just the regular Superman, but the story shows (or so I claim! it’s not made explicit, of course) that regular Superman is evil.
Now, this is hardly a novel insight, but the trick is articulating just what the nature of that evil is; but I’ve finally managed to do so, to my own satisfaction. The problem with Superman (as depicted in The Metropolitan Man—and also, of course, many, many canon Superman stories) is the chain of reasoning that goes:
“With great power comes great responsibility” -> “With great responsibility comes great moral leeway”
In other words, you are responsible for the lives of many people, the fate of a nation/planet/whatever, or some other great and important thing; therefore you cannot be expected to scrupulously follow rules that normal people are held to, and are allowed to break the law, to lie, to imprison innocents, or whatever else.
But why do you have that great responsibility? Well, because you have great power, so you were obligated to take on that responsibility.
But then the result of this is: “With great power comes great moral leeway”.
Which is just “might makes right”.
So, through this concept of moral obligation, you have reasoned yourself into being exempt from the rules because you’re very powerful; and of course, being powerful, you also can’t have those rules enforced upon you—very convenient!
That’s Superman in a nutshell. (Also many other superheroes, but Superman is the most famous example.)
The correct solution (morally speaking) is, of course, to deny the first step: no, with great power does not come great responsibility—except the responsibility of being careful not to use that power to do harm. Anything other than that, over and above any responsibilities that any person has, is strictly supererogatory.
And of course one does not have to be Superman to encounter this problem. Whenever you gain power, there is a very great temptation to use this power; and it is a dangerous delusion, to think that the only problematic temptation is the temptation to use your power to aggrandize or enrich yourself directly. The temptation to use your power to benefit others is even more perilous.
So what is to be done?
If you find yourself with power, do not begin by thinking “how can I use this to do good”. Think first:
(Note that these are not numbered—I am not saying that you need to think of these things in that order. Think of them all, simultaneously.)
(There are also auxiliary questions, like “If someone whom I didn’t trust at all were to gain this power, what would I wish they’d do with it?”, and “Suppose that I become crazy and/or evil tomorrow; what should I do to prevent bad-future-me from abusing this power?”. Note that if the above-listed primary questions are addressed effectively, these auxiliary questions also disappear almost entirely.)
(The astute reader may notice parallels between the above perspective and certain dichotomies among different types of economic systems, approaches to the construction of complex abstract systems, etc.)
people who have opinions about how [whatever] should be done are unusually likely to be bad actors who want me to do [whatever] in such a way as to benefit them
But the inference is correct, since you are discarding the probability mass on "innocent normie", no?
I am not sure I follow. Could you say more? What do you mean by saying that I am discarding that probability mass?
Thanks, yes.
I'm not saying anything the post isn't saying, I'm just pointing out that the forecasting/simple Bayesian tradition of knowledge really agrees with this post. You then have further arguments around the virtue of orienting the world around happy paths and normies, but still.
Uh… sure, that’s true enough, but this logic requires that we first accept the OP’s categorization scheme—which is part of precisely the meme that I am referring to!
OP notes that “These categories absolutely overlap and intersect some of the time”. This is true, but the trouble is that taking this caveat seriously means discarding the logic of the argument.
Consider an alternate set of categories:
Hmm. Doing a Bayesian calculation on this might be tricky. Perhaps we can separate out some of those, like so:
We learn that someone has opinions about [whatever]. We now discard categories #5 and #6. But unbeknownst to us, the ratio of bad:good in the overall population was actually lower than the ratio of bad:good among normal people. So learning this fact should reduce our subjective probability of the person being bad.
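In toy numbers (invented purely to show the direction of the update):

```python
# Invented numbers, purely to illustrate the direction of the update.
# Split the population into "normal people" (no opinions about X) and
# "opinion-havers" (everyone left after discarding categories #5 and #6).
normals, opinionated = 900, 100
bad_normals = 90        # 10% of normal people are bad actors
bad_opinionated = 5     # only 5% of opinion-havers are bad actors

p_bad_prior = (bad_normals + bad_opinionated) / (normals + opinionated)
p_bad_given_opinions = bad_opinionated / opinionated

print(p_bad_prior)           # 0.095 -- before learning anything
print(p_bad_given_opinions)  # 0.05  -- after learning "they have opinions about X"
# The update went *down*, because the overall bad:good ratio was lower than
# the bad:good ratio among normal people.
```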
Is this possible? Well, for example, suppose that it’s 1975, and you learn that a certain person has opinions about how psikhushkas should work. In particular, this individual thinks that said institutions should work differently than they do in fact work. What should you conclude about this person?
In the 1960s Soviet psychiatry, particularly Serbsky Institute Director Dr. Andrei Snezhnevsky, introduced the concept of "sluggish schizophrenia", a special form of the illness that supposedly affects only social behavior, with no effect on other traits: "most frequently, ideas about a struggle for truth and justice are formed by personalities with a paranoid structure", according to the Serbsky Institute professors.[9][10]
The logic in the OP is easily recognizable as the logic of every police force, every security service, and every authoritarian enforcement organization. It’s the logic that says “if you’re not one of us, then you’re either a clueless normie who will unthinkingly submit to our authority, or else you’re probably a criminal—unless you can, with great effort and in the face of considerable skepticism, prove to us (and yes, the burden of proof is entirely on you) that you’re one of the rare harmless weirdoes (emphasis on the ‘harmless’; if you give any hint that you’re challenging our authority, or make any move toward trying to change the system, then the ‘harmless’ qualifier is immediately stripped from you, and you move right back into the ‘criminal’ category)”.
(That “professionals” are much more likely than anyone else to be bad actors is another fact that drastically undermines the OP’s thesis—and this blind spot is not an accident. It’s just that “professionals” simply means “the ingroup”—“one of us”. As in, “You know the score, pal! If you’re not cop, you're little people.”)
That is not the argument I'm trying to make.
The argument I'm trying to make is that conflict resolution is hard in a particular way that approximately nothing else in running events or communities is hard; it's potentially adversarial, therefore taking the advice of people with strongly held and seemingly sensible advice can be a trap.
The bullet-pointed personas are not a load-bearing part of this thesis. If it would help, try dropping everything from "let's be reductive" to "what they might do about it". I think the only other place I directly reference them is the parenthesis in section III about how the position of overseeing the resolution process is a position of interest for a bad actor, and therefore you might even try selection by random lot.
There is a mistake I'm trying to warn about where someone thinks conflict handling should be simple ("why don't we all sit down and come up with a simple setup?"), and I do not think they are actually envisioning how any part of this will work in the face of someone with a mind to get away with something. There is a second mistake I'm trying to warn about where someone makes confident assertions about how conflict handling should work, and people do not notice the ulterior motive. I recognize the weirdness around me trying to point out the second mistake - as I said in the post, "Okay, but why should you trust me? Professional interest since complaint handling is part of my role, but good question and don't be satisfied by that." I am trying to design a setup that would work even if I were a problem, even as I was designing the setup.
I do not think the logic you're talking about is the logic I'm using. I will very cheerfully bet you my hundred bucks against your dollar that every police force does not criminalize everyone who makes any move towards trying to change the system, unless you're using some non-standard definitions of police force and criminalize. I'm going to assume that "every" is colloquial, not literal, and that paragraph is rhetoric, but even if the section in quotes was how police forces and authoritarian enforcement organizations were, I don't think their logic ends by suggesting constant vigilance against themselves.
That “professionals” are much more likely than anyone else to be bad actors is another fact that drastically undermines the OP’s thesis—and this blind spot is not an accident.
I do think it would help to have a better name for this bucket, since in my contexts a lot of people aren't getting paid a regular paycheck to do this and don't have a lot of training or background. There are obvious and central examples (divorce lawyers; CEA's community health team comes to mind as well), but a lot of the time it's a local meetup organizer in some random town that's had an issue.
The idea that situations/problems that involve conflicts, and require resolving conflicts, are more challenging than situations/problems that don’t involve conflicts, is trivial. If someone thinks that conflict handling should be simple then of course that person is an idiot. If this were all that you were saying, then it would hardly be worthy of a post.
However. In the OP you write:
Now let’s say that the bad actor is not an idiot. They have considered what and who might stop them, and what they might do about it.
It is an obvious, straightforward move to accuse whatever system is responsible for catching them of being corrupt and the people running that system to be horrible or incompetent.
But a more realistic scenario would be:
“Now let’s say that the system is already full of bad actors (as it probably is). They have considered what and who might stop them, and what they might do about it. The system will, of course, be corrupt, and the people running that system will be horrible or incompetent. It is an obvious, straightforward move to promote memes that prevent this from being rectified.”
(You might protest that sure, this happens, but you know that you are not a bad actor, right? Even if nobody else can be sure of this, you at least can! And you’re talking to people who also know that they are not bad actors. And I say that even this is false. You don’t know this. The people in your target audience—other organizers, etc.—also don’t know this about themselves. And certainly nobody else should take your (or their) word for it.)
You write:
I recognize the weirdness around me trying to point out the second mistake—as I said in the post, “Okay, but why should you trust me? Professional interest since complaint handling is part of my role, but good question and don’t be satisfied by that.”
But in fact this should read more like this:
“Okay, but why should you trust me? Good question; the answer is that you definitely shouldn’t trust me—especially since complaint handling is part of my role, so I have a professional interest, which makes me exceptionally likely to be a bad actor. Do not trust! Verify! If you can’t verify, assume treachery until proven otherwise!”
I am trying to design a setup that would work even if I were a problem, even as I was designing the setup.
This is an admirable goal and I applaud it. My comments are aimed precisely at this goal also. You can read them as saying “your setup does not and cannot succeed at this, so long as you take the approach described in the OP”.
I will very cheerfully bet you my hundred bucks against your dollar that every police force does not criminalize everyone who makes any move towards trying to change the system, unless you’re using some non-standard definitions of police force and criminalize.
I… didn’t claim that any police force criminalizes everyone who makes any move towards trying to change the system, so… no bet! (This is especially a nonsensical formulation since police forces do not criminalize anything; governments—in systems like that of the United States, specifically legislatures—criminalize things. But police forces have many, many tools available to them to apply differential treatment on the basis of evaluated threat level. Heck, even actual criminals can be harassed and pressured in many ways other than arresting, indicting, and convicting them of a crime.)
I’m going to assume that “every” is colloquial, not literal, and that paragraph is rhetoric, but even if the section in quotes was how police forces and authoritarian enforcement organizations were, I don’t think their logic ends by suggesting constant vigilance against themselves.
It’s not literal only to the extent that I anticipate reasonable disagreements about what qualifies as a “police force”, “security service”, or “authoritarian enforcement organization”. Otherwise, I can’t think of any exceptions.
And their logic certainly doesn’t end by suggesting constant vigilance against themselves. Of course it doesn’t! They’re the ingroup; why should they guard against themselves?
(Although I will point out that the pattern is fractal. The cops treat “civilians” in this way, but then cops-within-the-cops organizations like “Internal Affairs” divisions and the like treat regular cops in this way. Similar patterns involving the relationship of ordinary people, spies, and counter-intelligence agencies are also well known. I don’t know offhand of any examples of this pattern going up another level, but I expect that they exist.)
I do think it would help to have a better name for this bucket, since in my contexts a lot of people aren’t getting paid a regular paycheck to do this and don’t have a lot of training or background. There are obvious and central examples (divorce lawyers; CEA’s community health team comes to mind as well), but a lot of the time it’s a local meetup organizer in some random town that’s had an issue.
To be clear, this is how I assumed you were using the term as well.
I am not sure whether “this blind spot is not an accident” is suggesting that I’m making a good faith effort but a predictable mistake, or that I am deliberately leaving information which I know to be true and relevant out in an effort to make my argument stronger, or some third thing. Would you please clarify this? As you might imagine, if it’s the second I’m going to say that’s not what I’m trying to do.
First one.
EDIT: I really should clarify this one, sorry. It’s not the second one or any third thing—that’s what my response meant. My comment about “this blind spot is not an accident” was not meant as an accusation of deliberate bad faith argument in your post, in other words—nothing like that!
But your first interpretation isn’t quite right either, because it takes an entirely too mistake-theoretic approach. It would be more correct to say something like: the blind spot in question is a predictable consequence of the logic and incentives of the situation, such that it might be that your intentions are pure, or it might be that your intentions are not pure, but either way the result is ultimately the same, and the distinction usually moot in any case (due to self-deception).
(I am happy to assume perfectly good intentions on the part of actual-you, the actual person I am talking to, for the purposes of this and similar discussions. It’s just that we have to keep in mind the possibility of a hypothetical-you who is in the same situation and is writing the same things but who does not have good intentions.)
I disagree your scenario is more realistic.
“Now let’s say that the system is already full of bad actors (as it probably is). They have considered what and who might stop them, and what they might do about it. The system will, of course, be corrupt, and the people running that system will be horrible or incompetent. It is an obvious, straightforward move to promote memes that prevent this from being rectified.”
I think that happens sometimes, and higher pressure scenarios are more likely to be targets for this. Most of my disagreement is that I think most people are trying to do the right thing; dealing with occasional bad actors who are outnumbered is easier than dealing with lots of bad actors who outnumber everyone else. A system that works for the latter I'd expect to work for the former though.
But in fact this should read more like this:
“Okay, but why should you trust me? Good question; the answer is that you definitely shouldn’t trust me—especially since complaint handling is part of my role, so I have a professional interest, which makes me exceptionally likely to be a bad actor. Do not trust! Verify! If you can’t verify, assume treachery until proven otherwise!”
Hrm. I don't agree with that much emphasis and I'm not sure how much of that is interpretation. I do feel a little bit of refreshment at encountering someone with even more CONSTANT VIGILANCE than I have; it's a nice change of pace. Do you happen to work in computer security by any chance?
More seriously, I don't think the systems around me work well if they need to verify every step, so there's some spot checking and trust extended instead. Paying more attention to this over the last couple years has made me more aware of all the gaps where someone could cause problems if they had a mind to.
My comments are aimed precisely at this goal also. You can read them as saying “your setup does not and cannot succeed at this, so long as you take the approach described in the OP”.
Hrm. I have a small objection here, which is that I don't view the main post as laying out an approach for dealing with this. I said I don't have a solution. To use a chess analogy, I'm not saying "use the Italian Game opening, then look to control the centre." I'm saying "don't move f3 as a starter, and if you feel compelled to do it anyway really don't move g4, it's an embarrassing way to lose." If someone showed up in the comments and said hey, I do think there's a solution, here it is- well, I'd read their solution carefully and be happy if it turned out to be correct.
I… didn’t claim that any police force criminalizes everyone who makes any move towards trying to change the system, so… no bet!
Ah, I may have misinterpreted you. I read
The logic in the OP is easily recognizable as the logic of every police force, every security service, and every authoritarian enforcement organization. It’s the logic that says “if you’re not one of us, then you’re either a clueless normie who will unthinkingly submit to our authority, or else you’re probably a criminal—unless you can, with great effort and in the face of considerable skepticism, prove to us (and yes, the burden of proof is entirely on you) that you’re one of the rare harmless weirdoes (emphasis on the ‘harmless’; if you give any hint that you’re challenging our authority, or make any move toward trying to change the system, then the ‘harmless’ qualifier is immediately stripped from you, and you move right back into the ‘criminal’ category)”.
and the bolded parts (bolding mine) seemed to say every police force does criminalize everyone who makes any move towards trying to change the system. I... do see a distinction between "moved into the criminal category" and "criminalized according to the written legal code", but that does seem a thin distinction. Still, my misinterpretation.
> Would you please clarify this? As you might imagine, if it’s the second I’m going to say that’s not what I’m trying to do.
First one.
I appreciate the clarification, including the edit!
(I am happy to assume perfectly good intentions on the part of actual-you, the actual person I am talking to, for the purposes of this and similar discussions. It’s just that we have to keep in mind the possibility of a hypothetical-you who is in the same situation and is writing the same things but who does not have good intentions.)
Yep, noted and agreed. And likewise the possibility of a hypothetical-you who is trying to make sure whatever process gets used isn't going to catch them. Neither of you might go to meetups much but LessWrong moderation decisions are probably relevant. (To be clear I don't make those, I'm not a mod here and I don't even make moderation decisions on ACX comments, I can just see the same line.)
Most of my disagreement is that I think most people are trying to do the right thing; dealing with occasional bad actors who are outnumbered is easier than dealing with lots of bad actors who outnumber everyone else.
Yes, perhaps most people are trying to do the right thing, but (a) they are mostly not trying very hard, and (b) trying to do the right thing is just not anywhere close to sufficient for actually doing the right thing.
It is extremely easy to just find yourself doing the wrong thing, if you are not systematically and effectively avoiding all the things that nudge you toward doing the wrong thing. This is why I have emphasized, throughout this discussion, that I am not accusing anyone in particular of bad faith or bad character, and that not only should you trust no one, you should not even trust yourself, because “trying to do the right thing” is not sufficient even from your own perspective.
Do you happen to work in computer security by any chance?
I do not, but I will take the question as a compliment.
More seriously, I don’t think the systems around me work well if they need to verify every step, so there’s some spot checking and trust extended instead.
Yes. But there is a difference between making a considered judgment not to verify every step of some process, or not to check every instance, etc., and simply not having thought about it. (At the very least, the former decision can be revisited, re-evaluated, updated—the latter decision cannot even be acknowledged or accounted for, because it was never made in the first place!)
And, of course—as per my other comment—it may well be that the answer to “if we had to do this the ‘proper’ way, then we couldn’t do it at all” is “then you shouldn’t do it at all”.
Re: the “every police force” commentary—the whole “make any move toward trying to change the system” was mostly intended to qualify the “harmless weirdoes” scenario, not to necessarily apply to all people of any sort. (But also, the “moved into the criminal category” distinction is pretty important. But this is a tangent at this point, so let’s table it for now…)
Yep, noted and agreed. And likewise the possibility of a hypothetical-you who is trying to make sure whatever process gets used isn’t going to catch them. Neither of you might go to meetups much but LessWrong moderation decisions are probably relevant. (To be clear I don’t make those, I’m not a mod here and I don’t even make moderation decisions on ACX comments, I can just see the same line.)
FWIW, I think that approaches to conflict resolution in in-person meetups and on online forums should differ considerably, for many reasons, but certainly in large part due to the different ways in which evaluation of bad actors / problems / etc. can/must happen in those two types of contexts. I would not give the same advice to operators of a web forum as to organizers of a meetup, and I would be suspicious of anyone who insisted on applying the same approach to both contexts.
It seems to me that the most robust solution is to do it the hard way: know the people involved really well, both directly and via reputation among people you also know really well--ideally by having lived with them in a small community for a few decades.
I think the thesis should be "everyone has an opinion on how conflicts should be handled that miraculously works out so they're right in any given conflict."
I think analyzing different types of actors with different goals isn't illuminating. Bad actors are explicitly self-serving; good actors are probably still a little biased and petty. Being right shouldn't be the main thing, but it probably is. It's also easier to remember that everyone has self-serving biases than that "this is one of 5 different types of people whose interest in conflict resolution benefits 5 different goal categories they might have."
Why is conflict resolution hard?
I talk to a lot of event organizers and community managers. Handling conflict is consistently one of the things they find the most stressful, difficult, or time consuming. Why is that?
Or, to ask a related question: Why is the ACX Meetups Czar, tasked with collecting and dispersing best practices for meetups, spending so much time writing about social conflict? This essay is not the whole answer, but it is one part of why this is a hard problem.
Short answer: Because interest in conflict resolution is instrumentally convergent. Both helpful and unhelpful people have reason to express strong opinions on how conflict is handled.
Please take as a given that there exists (as a minimum) one bad actor with an interest in showing up to (as a minimum) one in-person group.
I.
Imagine you are a bank security guard.
You are standing in your bank, sipping your coffee and watching people come in and out. It's a good gig, bank security, they have dental insurance and the coffee machine makes a surprisingly good cup. As you're standing there, someone in sunglasses and a baseball cap waiting in line strikes up a conversation with you.
"Nice day, isn't it?" they say.
"Sure is," you reply.
"Nice bank too," they say, "I love the architecture on these old buildings. Good floors. Do you know if it has a basement level, maybe with a vault?"
"Yeah, concrete flooring."
"Nice, nice," this stranger (who you start thinking of as Sunglasses) says, "You know, I'm also into videography. What kind of cameras do you have around these parts? Like, about how many, and covering what angles?" You notice Sunglasses has a notepad out, pen held expectantly.
". . . you know, I'm not sure I should tell you that," you say slowly. "It's not like it's a secret exactly, the cameras or at least their bubbles are pretty visible, but I don't think idle curiosity is a good reason to tell strangers how the bank security system works."
"Okay, I admit it's not just curiosity," Sunglasses says with a charming smile. "What if I'm just concerned whether my money is going to be safe in this bank? Isn't it reasonable to want to understand how it's kept safe from bank robbers, and how the teller will figure out if I'm actually me when I come to withdraw money again?"
"That would be reasonable," you answer, "and lots of people might be interested in knowing their money is safe and they'll be able to get it back. Some of that information is public, we have a newsletter about it."
"But not all of it's public," Sunglasses points out. "Every customer should care that their bank is secure. It seems like you're new at this whole bank security thing. Look, I'm willing to help you out, make some suggestions about vault locks and camera angles, maybe recommend a good security firm. Looks like you're using TS-53 cameras? Those were fine for ten years ago, but these days networking TLAs are faster and someone could technobabble their tachyons to break in."
You stare at Sunglasses. "I admit I'm new at security. It would be nice to make the bank more secure, and you're right that customers have a legitimate interest in the bank's money being well defended. What you said about the TS-53 sounds right at first pass, and you seem very confident. But I am also getting increasingly suspicious about your interests here."
Sunglasses shakes their head disarmingly. "I solemnly swear I have a lot of experience with bank security systems, and I think there may be a weakness in the anti-bank robber measures you have here. I've seen a lot of good banks get robbed, and that's why I have such strong opinions on how bank security should work. Just let me tell you how to arrange the cameras, what vault lock to install, and how to set up the night guard patrols."
"No," you say. "While some people do have a professional skillset around bank security, not everyone with that skillset is automatically on my team. I would not do better at keeping the customer's money safe if I accepted help from the people most insistent on giving me help. I'm going to ask you to leave now."
"Fine, be that way," Sunglasses says. Then they cup their hands and yell to the other customers, "Hey everyone, this bank guard is throwing me out even though I haven't done anything! They're probably racist! Ya'll should get another bank guard!"
II.
Let's be reductive and say there are five kinds of people in the world.
(These categories absolutely overlap and intersect some of the time. Some autistics set off a bunch of yellow or even red flags, then develop a special interest in human social norms and find them genuinely interesting. Some therapists are abusive bad actors, leveraging the privacy and power of their position to do harm. This list of overlaps is not exhaustive.)
Now let's say that the bad actor is not an idiot. They have considered what and who might stop them, and what they might do about it.
It is an obvious, straightforward move to accuse whatever system is responsible for catching them of being corrupt, and the people running that system of being horrible or incompetent.
Yes, there are other reasons that someone might say the system is flawed. Yes, sometimes the people in charge of it make mistakes. Sometimes, yeah, it is actually the case that there's a glaring problem with the way hypothetical bad actors are identified and treated. The KGB in Soviet Russia is a famous historical example, but there are many more, and many smaller, examples of misrun HR departments and convention safety chairs in over their heads. No, I do not think I or humanity at large have found the One True Way to correctly handle complaints and conflict. Yes, I want to improve the setups around me and to get more skilled at handling things like this. Yes, sometimes I think it is correct to try and dismantle the thing and put something better in its place.
But.
Even if you had a system that was perfectly accurate, universally applicable and flexible, whose agents were unfailingly correct in how they carried out its orders, you would have some portion of people who have an obvious motive to say the system is broken and the agents are horrible. If the bad actors had a little forethought, they wouldn't say "the system is horrible, they won't let me punch people in the face." They'd say things like "the system is horrible, it thinks that innocent person punched someone in the face even though they didn't."
And when you don't have a perfect system, but only a decent system with reasonable people as its agents, one that doesn't quite match the local social norms but is making an honest effort, then you will wind up with a lot of things an antagonist can point at to argue that nobody should trust it.
Yes, due process and rights for the imprisoned. Yes, the system also has an incentive to smear and put away anyone who threatens to rebel. And yes, we can always try to do better. But maybe be a little suspicious when the person in sunglasses, already being dragged into the cop car, complains that the cop is just being racist and decries the legitimacy of the justice system?
(Though also remember I mostly deal with the kind of complaints you get about ACX meetups. It's considerably less dramatic than that sentence might sound.)
III.
When you are trying to set up your disciplinary process, justice system, network security permissions, or other system by which you will identify and handle bad actors, you should be aware that some of the people who appear to be trying to help you might have ulterior motives.
If you do not have some reason to expect that you are already good at this — if you're one of the normal people in the bullet points above, who just wants to have a nice society or meetup group and is wondering why we can't just do something simple and reasonable — then the bad actors probably have more experience with this than you do. Consider that you may have never interacted with a complaint department at all, while they may have been through the ban committee process from multiple different groups.
(For that matter, being the person in charge of banning others is a position with obvious appeal to someone who suspects they may come to the attention of The System sooner or later. If there are ninety-eight normal people, one honest professional, and one bad actor, and you have no way to distinguish them, then you may prefer choosing your overseer via random lot rather than taking a 50/50 chance between your bad actor and your honest professional, even though the honest professional is really, really useful if you have one.)
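To spell out the arithmetic behind that preference (a rough sketch, assuming the only two people who put themselves forward for the role are the honest professional and the bad actor, while a lot draws uniformly from all one hundred):

P(overseer is the bad actor | pick among volunteers) = 1/2 = 50%
P(overseer is the bad actor | random lot) = 1/100 = 1%

The cost is that the chance of landing the honest professional also falls from 1/2 to 1/100, which is the trade-off that last clause is pointing at.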
In Pareto Best and the Curse of Doom, I talked about how finding people with the overlaps of multiple skills is hard. To use the example of a community organizer, there's selection pressure to have one who is a good marketer, good with handling logistics like the venue and food, and charismatic in person.
Over a long enough run and a large enough community, there's eventually some pressure for them to be good at conflict resolution, but a group can get surprisingly big and last surprisingly long before this becomes important — and if they're bad at it, or just normal amounts of competent, there are many ways for a group to keep growing despite constant arguments until the organizer steps away and even after.
No other part of organizing has this problem. If you don't know what activities to run, you can ask, and people will tell you what they like. If you don't know how to advertise the event, you can ask, and people might have helpful suggestions. If you don't know how to book a venue, you can basically just ask, and it's pretty unlikely anyone has a motive to sabotage your venue selection. Maybe they own the venue and they're trying to sell it to you, but that's a bit more straightforward. Not so with conflict resolution.
IV.
I don't have a solution to this.
I keep encountering people with very strong opinions on the correct way to handle complaints and conflict. I don't have an omniscient view of who is good at it, who is right and who is wrong. But, uh. I notice that for something like half of the people who have expressed very strong opinions on this to me, it turns out there are a bunch of complaints about them, and if I used the system or rules they're advocating for, they'd be in the clear[2].
(Which makes sense! If I heard lots of the people dragged away in the night by the KGB had strong opinions on how great jury trials were and that they'd have been cleared by a jury trial, that doesn't surprise me. And yet I also wouldn't be surprised to hear lots of the losing defendants of a healthy jury trial system have strong opinions on how the judge and the cops and the whole system are out to get them.)
If you are a good and virtuous person, you are maybe interested in how conflict resolution is done and in having a part in it. If you're a harmless nonconformist, you're a bit more likely to be interested in how conflict resolution is done and in having a part in it. If you are a nefarious person who wants to rob banks or punch faces, you have an obvious interest in how conflict resolution is done and in having a position of trust or authority in it.
If you just want the thing to work and not be a big deal, you should be at least somewhat suspicious of the people offering to help. Not a lot suspicious! Most people are basically well meaning, I'm not advocating pervasive paranoia here. Maybe less suspicious, if you have a firmer explanation for why they know this information and why they're interested, but remember that bad actors can lie or mislead about why they're interested.
And this generalizes all the way upstream of the conflict. If some part of the system doesn't make or carry out the decisions, but is just the part that's supposed to investigate and report the truth of what happened, obviously that's a super useful part of the system to get control of. If there's a verification setup or a vote-counting role in deciding who is supposed to investigate and report the truth, then the vote-counting role is a super useful part of the system to get control of, or, if it can't be controlled, then discredited.
Thus the answer. Why is the ACX Meetups Czar, tasked with collecting and dispersing best practices for meetups, spending so much time writing about social conflict? Why is this the topic that creates so much stress for so many otherwise skilled organizers?
Because this is the topic that is adversarial, not just during an incident, but in every step leading up to it. If you take everyone's advice on how to build your bank security system, you may well be doomed before the alarm sounds — if it ever does.
(Okay, but why should you trust me? I have a professional interest, since complaint handling is part of my role. Good question, and don't be satisfied by that answer. CONSTANT VIGILANCE.)
There's a tangent I plan to talk about in a future post here, but I tend to use examples which are obviously bad to do and which I expect everyone to agree are bad to do. These examples tend to be unusually bad, because I'm trying to meet that standard. I could have put "will imply everyone who disagrees with them is stupid" or "will awkwardly hit on every woman attendee" instead of the face punching thing. I could have put "will get drunk and stand so close to others that people can smell the alcohol on their breath" or "will loudly bring up how Chairman Mao was a great leader every single meetup, even if the event is about ice skating."
There is an issue of distinguishing what side of a line an edge case falls on, or how hard to come down on something that's kinda bad but not seriously bad, or how to carve out spaces where a thing that's bad in most places is accepted here. It's an important issue. I'm ignoring it in this essay.
Or at least more in the clear. Once in a while someone will advocate for rules they're pretty plainly breaking, but they tend to assert some interpretation where what they're doing is fine, actually.