Screwtape

I'm Screwtape, also known as Skyler. I'm an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I'm fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you're ever in the Boston area, feel free to say hi.

Starting early in 2023, I'm the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games. 

I recognize that last description might fit more than one person.

Sequences

Running A Basic ____ Meetup
Cohabitive Game Design
The LessWrong Community Census
Meetup Tips
Meetup in a box

Comments

Er, sorry, I think you might’ve misread my comment? What I was saying was that the more seriously the people with the power and authority take the problem, the better it is. (I think that perhaps you got the direction backwards from how I wrote it? Your response would make sense if I had said “directly proportional”, it seems to me.)

> And the severity of the failure will be inversely proportional to how seriously the people with the power and authority take this problem, and to how much effort they put into addressing it.

Hrm. Yes, I seem to have read it differently, apologies. I think I flipped the sign on "the severity of the failure", interpreting it as the failure being bigger the more seriously people with power and authority took the problem.

“Better not to begin” wouldn’t be “4chan”, it would be “nothing”.

I agree that the moderators on Less Wrong aren’t quite at the level we’re talking about, but they’re certainly closer than most people in most places.

Yeah. I prefer having LessWrong over having nothing in its place. I even prefer having LessWrong over having nothing in the place of everything shaped like an internet forum. 

Do the LW mods pass your threshold for being good enough that it's worth beginning? I think a lot of my incredulity here comes from trying to figure out how big that gap is, though for the specific problem I'm trying to solve I think I need to take as a premise that I start with whatever crop of ACX organizers I'm offered by selection effects.

I disagree that your scenario is more realistic.

“Now let’s say that the system is already full of bad actors (as it probably is). They have considered what and who might stop them, and what they might do about it. The system will, of course, be corrupt, and the people running that system will be horrible or incompetent. It is an obvious, straightforward move to promote memes that prevent this from being rectified.”

I think that happens sometimes, and higher pressure scenarios are more likely to be targets for this. Most of my disagreement is that I think most people are trying to do the right thing; dealing with occasional bad actors who are outnumbered is easier than dealing with lots of bad actors who outnumber everyone else. A system that works for the latter I'd expect to work for the former though.

But in fact this should read more like this:

“Okay, but why should you trust me? Good question; the answer is that you definitely shouldn’t trust me—especially since complaint handling is part of my role, so I have a professional interest, which makes me exceptionally likely to be a bad actor. Do not trust! Verify! If you can’t verify, assume treachery until proven otherwise!”

Hrm. I don't agree with that much emphasis and I'm not sure how much of that is interpretation. I do feel a little bit of refreshment at encountering someone with even more CONSTANT VIGILANCE than I have, it's a nice change of pace. Do you happen to work in computer security by any chance?

More seriously, I don't think the systems around me work well if they need to verify every step, so there's some spot checking and trust extended instead. Paying more attention to this over the last couple years has made me more aware of all the gaps where someone could cause problems if they had a mind to.

My comments are aimed precisely at this goal also. You can read them as saying “your setup does not and cannot succeed at this, so long as you take the approach described in the OP”.

Hrm. I have a small objection here, which is that I don't view the main post as laying out an approach for dealing with this. I said I don't have a solution. To use a chess analogy, I'm not saying "use the Italian Game opening, then look to control the centre." I'm saying "don't move f3 as a starter, and if you feel compelled to do it anyway really don't move g4, it's an embarrassing way to lose." If someone showed up in the comments and said hey, I do think there's a solution, here it is- well, I'd read their solution carefully and be happy if it turned out to be correct.

I… didn’t claim that any police force criminalizes everyone who makes any move towards trying to change the system, so… no bet!

Ah, I may have misinterpreted you. I read 

> The logic in the OP is easily recognizable as the logic of **every police force**, every security service, and every authoritarian enforcement organization. It’s the logic that says “if you’re not one of us, then you’re either a clueless normie who will unthinkingly submit to our authority, or else you’re probably a criminal—unless you can, with great effort and in the face of considerable skepticism, prove to us (and yes, the burden of proof is entirely on you) that you’re one of the rare harmless weirdoes (emphasis on the ‘harmless’; if you give any hint that you’re challenging our authority, or **make any move toward trying to change the system**, then the ‘harmless’ qualifier is immediately stripped from you, and you move right back into the ‘criminal’ category)”.

and the bolded parts (bolding mine) seemed to say every police force does criminalize everyone who makes any move towards trying to change the system. I... do see a distinction between "moved into the criminal category" and "criminalized according to the written legal code", but that does seem a thin distinction. Still, my misinterpretation.

> Would you please clarify this? As you might imagine, if it’s the second I’m going to say that’s not what I’m trying to do.

First one.

I appreciate the clarification, including the edit!

(I am happy to assume perfectly good intentions on the part of actual-you, the actual person I am talking to, for the purposes of this and similar discussions. It’s just that we have to keep in mind the possibility of a hypothetical-you who is in the same situation and is writing the same things but who does not have good intentions.)

Yep, noted and agreed. And likewise the possibility of a hypothetical-you who is trying to make sure whatever process gets used isn't going to catch them. Neither of you might go to meetups much but LessWrong moderation decisions are probably relevant. (To be clear I don't make those, I'm not a mod here and I don't even make moderation decisions on ACX comments, I can just see the same line.)

I'm trying to come up with people that I think actually reach the standard you're describing. I think I know maybe ten, of which two have any time or interest in handling meetup conflicts. 

I do agree there are some big failures that can happen when the people with authority to solve the problem take it very seriously, put a lot of effort into addressing it, and screw up. I don't agree that the relationship is inversely proportional; if I imagine, say, a 0-effort organizer who does nothing vs a 0.1-effort organizer who only moderates to say "shut up or leave" to attendees who keep yelling that their political opponents should be killed, that seems like an improvement. There's a lot of low-hanging fruit here.

It's possible "even approximately, even in large part" covers a much greater range than I'm interpreting it as and your standard is lower than it sounds. If not, I think we're at an impasse of a disagreement. I think that if nobody does any conflict resolution at all unless they are that good of an evaluator, all but a vanishingly small number of spaces will become much worse. We're talking on LessWrong, I do not think the moderators here are at that level, and yet the space is much improved relative to other places. Seems like 4chan decided better not to begin, and I like LessWrong more.

I would be delighted to have the social equivalent of a zero trust conflict resolution system that everyone who interacted with it could understand and where the system could also maintain confidentiality as needed. I'm in favour of the incremental steps towards that I can make. In the abstract, I agree the procedure for evaluating advice should work even if it comes from bitter enemies. I do not think my personal evaluation procedure is currently robust enough to handle that, though tsuyoku naritai, someday maybe it will be.

The main context I encounter these problems is in helping local ACX meetup organizers. Some of them first found the blog a few months ago, ran a decent ACX Everywhere that blossomed into a regular meetup group, and then a conflict happened. I want good advice or structures to hand to them, and expecting them to be able to evaluate my advice to that standard seems unreasonable. It's likely that at least one and possibly all of the local belligerents will have suggestions, and those suggestions will conveniently favour the advice-giver.

One way to read this essay, which I would endorse, is as one useful answer to the question "why do the people in this conflict I find myself in have such different ideas of the procedure we should use to resolve it?"

That is not the argument I'm trying to make.

The argument I'm trying to make is that conflict resolution is hard in a particular way that approximately nothing else in running events or communities is: it's potentially adversarial, so taking the advice of people with strongly held and seemingly sensible opinions can be a trap.

The bullet-pointed personas are not a load-bearing part of this thesis. If it would help, try dropping everything from "let's be reductive" to "what they might do about it". I think the only other place I directly reference them is the parenthetical in section III about how overseeing the resolution process is a position of interest for a bad actor, and therefore you might even try selection by random lot.

There is a mistake I'm trying to warn about where someone thinks conflict handling should be simple- why don't we all sit down and come up with a simple setup?- and I do not think they are actually envisioning how any part of this will work in the face of someone with a mind to get away with something. There is a second mistake I'm trying to warn about where someone makes confident assertions about how conflict handling should work, and people do not notice the ulterior motive. I recognize the weirdness of me trying to point out the second mistake - as I said in the post, "Okay, but why should you trust me? Professional interest since complaint handling is part of my role, but good question and don't be satisfied by that." I am trying to design a setup that would work even if I were a problem, even as I was designing it.

I do not think the logic you're talking about is the logic I'm using. I will very cheerfully bet my hundred bucks against your dollar that not every police force criminalizes everyone who makes any move towards trying to change the system, unless you're using some non-standard definitions of "police force" and "criminalize". I'm going to assume that "every" is colloquial, not literal, and that the paragraph is rhetoric, but even if the section in quotes were an accurate description of police forces and authoritarian enforcement organizations, I don't think their logic ends by suggesting constant vigilance against themselves.

That “professionals” are much more likely than anyone else to be bad actors is another fact that drastically undermines the OP’s thesis—and this blind spot is not an accident.

  1. That would undermine my thesis if my thesis was that everyone should just trust the professionals, which seems to be the thing you claim is my thesis. That's not my thesis, and bad actors turning out to be more likely "professionals"[1] than any other category would not undermine my actual thesis.
  2. I am not sure whether "this blind spot is not an accident" is suggesting that I'm making a good faith effort but a predictable mistake, or that I am deliberately leaving information which I know to be true and relevant out in an effort to make my argument stronger, or some third thing. Would you please clarify this? As you might imagine, if it's the second I'm going to say that's not what I'm trying to do.
  1. ^

    I do think it would help to have a better name for this bucket, since in my contexts a lot of people aren't getting paid a regular paycheck to do this and don't have a lot of training or background. There are obvious and central examples- divorce lawyers, and CEA's community health team comes to mind as well- but a lot of the time it's a local meetup organizer in some random town that's had an issue.

Not sure how referential "you" vs general "you" you're using when you're talking about assuming some "you" is the KGB. I do think it's useful to build a system which does not assume the watchman is perfectly trustworthy and good. In my own case, one of the first things I did once I started to realize how tricky this part of my role might be was write down a method for limited auditing of myself. That said:

> Your own approach and policies should work unproblematically even if everyone assumes that you are basically the KGB. (This is especially true if you are not the KGB at all.)

I'm not sure how literally to take the "unproblematically" adverb here. If you're being literal, then I disagree; part of my thesis here is that sometimes there will be as many problems as enemy action can cause, and they will be able to cause some problems.

(If you're on the lookout for a fully general counterargument, here's one I haven't found a way around! This theory treats occasional strident complaints about the way a resolution system is operating as very little evidence that the system is operating badly, because one would expect occasional bad actors to try shaking everyone's trust in the system even if it was a good system. And yes, that is such a suspicious theory for me in particular to put forward. Dunno what to tell you here.)
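The "very little evidence" claim in that parenthetical can be made concrete with a quick Bayes calculation. The numbers below are purely illustrative (mine, not from the thread): if bad actors complain about good systems nearly as often as people complain about bad ones, one observed complaint barely moves the needle.

```python
# Illustrative numbers (mine, not from the thread): how much should one
# observed strident complaint shift our belief that the system is bad?
prior_bad = 0.2                 # prior probability the system is bad
p_complaint_given_bad = 0.9     # bad systems almost always draw complaints
p_complaint_given_good = 0.6    # bad actors complain about good systems too

# Bayes' rule: P(bad | complaint)
numerator = p_complaint_given_bad * prior_bad
evidence = numerator + p_complaint_given_good * (1 - prior_bad)
posterior_bad = numerator / evidence

print(round(posterior_bad, 3))  # 0.273: only a small update from the 0.2 prior
```

Because the likelihood ratio 0.9/0.6 is close to 1, a single complaint is weak evidence in either direction; it's the base rates and the volume of complaints that would have to do the work.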

Like I said, I don't have a solution. At least, not one I'm confident and certain of. I have other essays in the pipeline with (optimistically) pieces of it.

I don't think it's doomed. Most security experts a bank would reasonably hire are not bank robbers, you know? I assume that's true anyway, I'm not in that field but somehow my bank account goes un-robbed.

Checking where wildly different spheres agree seems promising. The source of advice here that I trust the most is a social worker I knew for years who hadn't heard of the rationalist community; I asked them, rather than them starting, unprompted (or as part of an argument), to tell me how it should work. Put another way, getting outside perspectives is helpful- if a romantic partner seems like they might be pressuring you, describe it to a friend and see what they say.

It's part of why I spent a while studying other communities, looking to see if there was anything that say, Toastmasters and the U.S. Marines and Burning Man and Worldcon all agreed about.

And yet I notice that the view that “people who have opinions about how [whatever] should be done are unusually likely to be bad actors who want me to do [whatever] in such a way as to benefit them, therefore I should be suspicious of their motives and suggestions” is memetically adaptive. Whenever you come across this idea, it is to your benefit to immediately adopt it—after all, it means that you will thenceforth need to spend less effort evaluating people’s suggestions and opinions, and have a new and powerful reason to reject criticism. And the idea protects itself: if someone suggests to you that this whole perspective is misguided and harmful, well, aren’t they just maliciously trying to undermine your vigilance?

I have a couple of thoughts here. One is that I don't think this is true for most values of [whatever]. If someone has suggestions about the venue, or the meetup activities, or announcement platforms, I don't think this dynamic is in play. If I get advice on job searching or algebra homework or the best way to bake a loaf of sourdough, I'm not getting nearly as much adverse selection as for conflict resolution from within the community I'm involved in. Who has a motive to subtly sabotage my sourdough?

If someone read this essay and came away with a fully general counterargument against listening to advice on any subject, my guess is there's a big reading comprehension failure happening.

It isn't as clearly a failure of reading comprehension if someone comes away with the idea that they shouldn't listen to any advice on handling conflict specifically, though I think that would also be incorrect. Finding people who are trustworthy, good at handling it well, and willing to teach you is wonderful. I've been trying to learn the most from sources well outside the rationalist community, but I think there is good advice to be had. Just, not uncritically trusted?

Also, some people seem to think this class of problem should be easy. For those people I want to make the point that it is (at least sometimes) an adversarial situation.

In a sense, Scott and I have the power to make decisions based on whatever kind and level of knowledge we want. I could go to Scott and say something like hey, I've got a bad vibe here, zero Verifiable Documented Evidence but it's an intuition, let's permanently and publicly ban this person. Or skip going to Scott- I think I only loop him in on ~10% of conflicts I'm aware of.

How much do you trust my intuition? How much do I trust it?

I trust local ACX organizers a lot. If someone comes to me and says the local ACX organizer banned them from the local meetup, I generally nod and say yep I endorse the local organizer being able to do that.[1] Locals don't get to ban people from other cities but they usually don't need to.

I often do talk to people. I travel where I can[2], but I also talk by videocall to lots of people in the ACX community to expand the range. Written complaints in a feedback form that I have permission to share do make for a lovely essay example, and they do come up, but videocall talks and in-person conversations feed into this. I take notes on many of both largely as my own memory aid, usually on paper in a stack of index cards. But every piece of information is evidence, so I'll start with (uncertain, with wide error bars) intuitions on one or two sentence entries in an application form and build from there.

Bloodlessness is somewhere between a target and a necessary side effect of the way I'm discussing and dissecting the components of handling conflict here and in a lot of posts I've made on the subject lately. I could write the bloody versions, but I feel those drafts show too much fury to be generally useful to others.

  1. ^

    I have, to my memory, overruled the local organizer exactly once, and in hindsight I think that was a mistake; not because I later came to agree with the local organizer (though I eventually more or less did) but because I think that's an important power to keep in the hands of the person actually doing the ground-level work.

  2. ^

    Mostly not to investigate complaints, though I've had that as one of multiple goals once or twice.

I think context clues do usually make the difference between signal (as in cell signal) and kodo clear. I'm less confident that context will usually make the difference between signal (as in the signal and the noise) and kodo clear. In most conversations where I'd want to use it, I expect the other person won't have this concept, and it's not worth pausing whatever conversation we were having to explain kodo.

(Like, prior to me writing this up I think there were maybe a hundred people in the world who'd heard these terms used this way, because there were maybe a hundred people who'd heard the lecture.)

A concept can still be useful in my own head even if the people I talk to don't have it. Affordance, update, modularity (especially in code)- these are all ideas I don't talk about directly except with specialists, but I have them in my thoughts when relevant. And one way to get other people to have a concept is to give a talk on it, or to write an essay about it on LessWrong.

Take modularity in particular: at some point in a good Intro To Programming class someone should explain the idea of modular code and why you should try to make your functions neat and compartmentalized, once in a while when talking to another programmer one of you might say 'oh, I want to refactor this to be more modular', but when you're talking to your non-technical boss or client you probably don't want to use that word. Is modularity a basic idea? Maaaybe? Depends on your frame of reference I guess. Does it serve a purpose other than in-group signaling? Yes! A programmer who doesn't have the concept will write "worse"[1] code.

That doesn't make a convincing argument that this idea in particular is worth a jargon slot, but taking ideas and assigning specific words to them is useful.
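For concreteness, here's a minimal sketch (my own hypothetical example, not from the post) of what having the modularity concept buys a programmer. Both versions compute the same thing; the second can be tested and reused piece by piece.

```python
# Without the concept: one function that parses, validates, and totals an
# order all at once, so no piece can be tested or reused on its own.
def process_order_monolithic(raw: str) -> float:
    total = 0.0
    for line in raw.strip().splitlines():
        name, price_text = line.split(",")
        price = float(price_text)
        if price < 0:
            raise ValueError(f"negative price for {name}")
        total += price
    return total

# With the concept: the same behavior split into small, compartmentalized
# functions, each testable in isolation.
def parse_line(line: str) -> tuple[str, float]:
    name, price_text = line.split(",")
    return name, float(price_text)

def validate_item(name: str, price: float) -> None:
    if price < 0:
        raise ValueError(f"negative price for {name}")

def total_order(raw: str) -> float:
    total = 0.0
    for line in raw.strip().splitlines():
        name, price = parse_line(line)
        validate_item(name, price)
        total += price
    return total

order = "apple,1.50\nbread,3.25"
assert process_order_monolithic(order) == total_order(order) == 4.75
```

A programmer without the concept tends to write the first version everywhere; one with it reaches for the second, even if they never say the word "modularity" to their boss.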

  1. ^

    Yes I'm asserting a broad and fuzzy quality of better or worse to code, I'm confident a jury of a dozen software engineers would back me up here.
