I'm Screwtape, also known as Skyler. I'm an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I'm fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you're ever in the Boston area, feel free to say hi.
Since early 2023, I've been the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games.
I recognize that last description might fit more than one person.
This is a great meetup format and y'all can fight me.
I want more entries in Group Rationality, and this is a fine way for a group to be smarter than an individual. A group can read faster than any one member, and the summarization and presentation process might even help retention.
I also want more meetup descriptions. Jenn runs excellent meetups, many of which require a braver organizer than I. This is one of the ones I feel I can grab without risking sparking a fight, and it's well laid out with plenty of examples. I've run a Partitioned Book Club myself, and my main quibble is it requires a certain critical mass of people; your mileage may vary if you're the kind of meetup that has three or four people, though you might be able to make it work.
Please write up more!
Rationalists love our prediction markets. They have good features. They aren't perfect. I like Zvi's Prediction Markets: When Do They Work more since it gives a better overview, but for some strange reason the UI won't let me vote for that this year. As prediction markets gain in prominence (yay!) we should keep our eyes on where they fall short and whether there's anything that can be done to fix them.
I keep this in my back pocket in case anyone tries to argue that a thing's high odds on Manifold are definitive. It's a bit niche. It's probably not super important.
Now, that being said:
At the time of this writing it's 90% to get in the Best Of LessWrong collection, so obviously it's an amazing post that's very likely to be included and we should talk about it with the seriousness it deserves.
I just unironically love this?
First off, the Effective Samaritan idea is fun. It's a little bit of playful ribbing, but it's also not entirely wrong. The broader point is a good mental exercise, trying to talk to imaginary people who believe different things than you for the same reasons you believe what you believe.
The entire Starcraft section makes me smile. This is perfect Write A Thousand Roads To Rome territory. Some reader is going to be a Starcraft fan, run across this essay, and suddenly be enlightened at how the outside view actually works. Each person who figures this out on a gut level and writes something about how they made that intellectual jump is a small win for the rising sanity waterline.
Get it in the Best Of list; it's not earthshaking or a completely new insight, but I'm glad to have MathiasKB standing alongside me in the shield wall of putting rationality content up on LessWrong and I'd be sad if nothing shaped like this made it into the Best Of list.
I am very much of two opposed minds here.
The case against: This is inside baseball. This is the insidest of inside baseball: it's about a LessWrong commenter talking about LessWrong on the talk pages of Wikipedia, written by someone who cut his teeth writing in the LessWrong diaspora online.
Also, it's kind of a bad look for LessWrong to prominently feature an argument that a major detractor of LessWrong is up to shady epistemic nonsense. Like, obviously people on this forum are upvoting this; we're mostly here because we like LessWrong, and this says good things about the forum.
The case for: If this were about any other online community - if Gerard had been skewing the Wikipedia sources to make the My Little Pony fandom look bad, or to Streisand effect some weird theory and associated ban decision made on the War Thunder forums - I would highly upvote it without any qualm. That's because this is an excellent highlighting of two important topics: how knowledge production works on the modern internet, and how deception can be hidden by spreading edge-case decisions around in a lot of different places.
What Wikipedia calls a reliable source and how Wikipedia writes about things informs a lot of the internet. (Less now with the rise of LLMs, but still a lot.) My answer to "What do you think you know and how do you think you know it" often included "I think I know it because I just read it on Wikipedia." Seriously, I live in Boston, and I think Boston has a population of about six hundred thousand people because that's what Wikipedia says. A detailed examination of how that particular sausage gets made seems pretty good!
As for deception. . . look, I know my interests these days. But I still like having this as a thing to point to. How would you spot this, on the ground and as it's happening? How would you point it out to someone else once you'd spotted it? At so many points of interaction, other Wikipedians might look and go "eh, seems like a judgement call, doesn't look too unreasonable even if maybe it's not how I'd have called it?" When you try and follow up on all the leads, and present them in a collection showing the persistent pattern, you look like the crazy guy with a corkboard full of string.
I also go back and forth on the voice. This is written in a narrative, storytelling mode. It's well written in that mode. Sometimes I'd prefer the drier, just-the-facts-ma'am version. But that wouldn't get people to read it as much, and I think it wouldn't even get across the feeling of what's going on as much. Like, the line "No, of course not. That would be crass. They got another friend to review the book when it came out, and he cited that." is emotive and trying to get a response from you, but I don't know how I'd write those facts in a way that wasn't emotive and also made the connection for people.
Altogether, I'm voting for this to be included on the Best Of list. . . but maybe we can put a caveat at the top that it's not because it says bad things about one of our detractors, but because of the talk of epistemic sausage?
High level
This is less of an explainer of how individual ideas work, and more of an index outlining how various named ideas would fit together. It's about how groups of people can function better, and how a certain kind of common knowledge grows.
This could be a big entry in the Group Rationality subject, and even where I disagree with it, it's productive disagreement that helps clarify to myself what I think. That's useful. And it does reference sub-ideas that the author's written before, which I think is the way to do this kind of high-level thing.
Bits and pieces
First let's talk about the frame story here. The idea is that a weird textbook showed up from some alternate world with a very different civics class, and the author is relating notable parts of the textbook.
I find this fun. I can see how it might be off-putting, and it sort of sidesteps some of the 'is this testable?' questions one might ask. A textbook on American civic government sort of presumes Democracy is the right answer, and (with apologies to Agor) it would be fair to assume this fictional textbook is doing a little of that too.
But I also want to reemphasize that I find it fun! Fun is good. I want to hear more textbooks of different people's inner worlds. That I find it fun alone wouldn't be enough to make me vote for it in the Best Of list, but it helps.
Then there's the definition of civilization as self-restraint. I don't agree with the definition of civilization here. To me, civilization can also be imposed. Maybe that's part of the definition too: that it still counts if you refrain from taking all the actions you could take mostly because an overseer or enforcer will catch you.
But when we get to Breathing Room, I love it. This is Hobbes' Leviathan described in colourful cartoons. I like descriptions of philosophical ideas with colourful cartoons, so A+. And I like that it takes seriously the big Green's objection: that Green does win in most individual physical standoffs. I feel like a lot of treatments of this issue dismiss Green as being unfair, and they might be right from a certain Rawlsian veil-of-ignorance perspective, but that's still ignoring the situation of real people who are often right in front of us and know they're stronger. (Or whatever capability is important here.)
The treatment of Magenta is new to me and interesting. Magenta's problem is that they need resources other people have, and that they'll probably lose a fight if they start one. The deal that the Leviathan needs to offer Magenta is that quietly following the rules is better than making a low-odds-of-success attempt to smash the system and better their situation. It reminds me of leaving people a line of retreat, but that's not it exactly.
In Lopsided possibility trees, I am of two minds. I like the story of the desert becoming a place of growing things. I am nervous about it.
I want to stare really hard at trust built up over lower-stakes interactions generalizing to higher stakes. I think it does, actually, work that way: that you should test a little bit, extend a small bit of trust, and see what happens, and also that human brains naturally trust that way. I have in the back of my drafts an essay on how, if you can behave like a good person for three interactions, the fourth is when you've got a pattern. (Later on there's a mention of parts of the book on deception, and people who pretend to agree with the rules of civilization but break them when they think they can get away with it. I too want that chapter.)
We talk about evolution as a metaphor. I agree with the cultural evolution framing. Maybe a thousand years ago the "don't kill people and take their stuff" rule was not as widespread, and the "don't force the serfs to work on the same plot of land their whole life" rule was basically nonexistent. We have evolved, and I think it's good to stand by and keep to the results of that informal cultural evolution.
I believe it is correct to say new bits of cultural ground should be laid down slowly, but socially graceful degradation seems worth pointing out: some pieces of civilization (not all! but some!) do give you a bonus even when done unilaterally, and that can be a good toehold. Maybe you leave behind your sword, trusting in your ability to outrun people carrying the heavy metal of their swords. If this whole mutual build-up of trust thing looks like a chicken-and-egg problem, impossible to start, I suggest looking at what pieces work even if you're the only one doing them but get better as more people join.
And I think it is great to talk about when it is correct to rebel against the cultural evolution. As the author says, that's often left out. There's an important hacker-mindset skill in seeing the options that are only blocked by culture, not by physics. I'd love a more principled or detailed description of when to do that, but kind of by its nature people aren't all going to agree on when those principles apply and it's time to shred the social fabric.
What's A Best Of Collection To Do Then?
I want this textbook, or at least these sequences.
It's very Duncan: opinionated, and wearing its opinions on its sleeve. I think it'd be good for LessWrong to have more opinions, more coherent but different angles on how the human mind can make better decisions.
I'm a bit torn on whether that should go in the Best Of collection. I think ideally that would be for things with more consensus behind them, and I doubt Civilization & Coordination has that. My general frame is that the Best Of collection is the fifty posts people should read as the common knowledge of LessWrong, such that if you show up and read them you have 'caught up' on what the site is generally about. But this post is kind of talking about what cool things happen when everyone is caught up, and it's worth having at least one post a year reminding people of that, even if it's hardly an efficient entry there.
Ultimately I'm voting yes, put this in the Best Of collection: it's a roundup post that gives shape to a lot of writing in a particular area of rationality, it's well written, and it stands as an emissary from a branch of the ideas that doesn't get enough representation. If I imagine the AI people getting to put in half a dozen posts that they want me to read even if I'm not into AI, and Duncan and me getting to put in half a dozen posts that they read even if they're not into making better decisions, that seems like a fair culture trade.
That isn't, to be clear, a piece of civilization and cooperation we've established. It's an offer of such cooperation though, at least from me :)
On reflection, yep, "within 50 years" or something like that would have been better.
This post gave me hands down the most useful new mental handle I've picked up in the last three years.
Now, I should qualify that. My role involves a lot of community management, where Thresholding is applicable. It's not a general rationality technique. I also think Thresholding is kind of a 201 or 301 level idea so to speak; it's not the first thing I'd tell someone about. (Although, if I imagine actually teaching a semester long 101 Community Management or Conflict Management class, it might make the cut?) It's pretty plausible to me that there were other ways my last three years could have gone where Thresholding wasn't the problem that kept coming up, again and again, and so I'd look at this handle and go "huh, seems fine I guess but not important" or even "do people really do that?"
Given the way my three years actually went though, I think it makes accurate claims, and having the word is really useful in how I think and act. If I had the option to send a copy of Thresholding back in time to myself on January 1st, 2023, along with assurance from my future self that it was important no seriously. . . well, obviously not the best use of a time machine. But it would easily have been advice worth at least ~$500 USD to me.
I'm not arguing it's worth that much to everyone; again, I have some vocational applications. But even if you don't handle community complaints, you live among other people, and some of those people are going to butt up against the thresholds of the rules, and I claim this will help you react more sensibly to that. I'll further claim that, given the kinds of people who hang out around LessWrong, the Thresholding concept is unusually useful for the blind spots we have. We like to have explicit rules, and we pride ourselves on being principled and holding to exactly what we said. But man, that doesn't stop incessant 2.9ing from being a problem. It's also a concept that gains from more people having the word in their vocabulary.
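(For anyone who hasn't read the post, here's a toy sketch of what I mean by 2.9ing. The rule, the names, and the numbers are ones I'm making up for illustration, not anything taken from the original post.)

```python
# Toy illustration of "2.9ing" (names and numbers invented for this sketch):
# a rule that only fires at severity 3.0 never triggers for someone who
# repeatedly causes 2.9-level trouble, even as the total harm piles up.

ACTION_THRESHOLD = 3.0

def rule_fires(severity: float) -> bool:
    """The explicit, principled rule: act only on incidents at or above the threshold."""
    return severity >= ACTION_THRESHOLD

# One person, five incidents, each carefully kept just under the line.
incidents = [2.9, 2.9, 2.9, 2.9, 2.9]

total_harm = sum(incidents)
times_rule_fired = sum(rule_fires(s) for s in incidents)

print(f"total harm: {total_harm:.1f}")              # 14.5, far more than a single 3.0 incident
print(f"times the rule fired: {times_rule_fired}")  # 0, the explicit rule never triggers
```

The explicit rule is being followed to the letter the whole time, which is exactly why the pattern is so hard to react to sensibly.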
I want this thing in the Best Of LessWrong collection, because I want more people to read it and recognize it when it happens. Mostly, I really want past!me to have read it, and the next best thing I have is telling folks like me about it.
Bella: "I made a bet on this coin flip. If it comes up heads, I'm going to use the money to go out for dinner! Hrm, where would I want to eat if I win. . ."
Carl: "How do you know for certain it will come up heads? If you don't, it seems like an incoherent thing to even discuss."
Bella: "I'm not certain it will happen. It would be a bad idea to put too much weight or too many assumptions on something with only a 50% probability. But things can be both uncertain to happen and also coherent enough to talk about."
For the moment, the tag system exists. It'd be straightforward to make a Children or Parenting or somesuch tag. Users can filter what posts they want to show up by tag, though not everyone knows how to do it.
(I'm not saying this to imply the subdomain idea is bad, just that the tag version would be easy to implement.)
I greatly appreciate people saying the circumstances under which they are and are not truth seeking or truthful. I think Dragon Agnosticism is actually pretty widespread, and instrumentally rational in many societies.
This essay lays out in a concise way, without talking about a specific incendiary topic, and from a position of trust (I and likely many others do trust Jeff a lot) why someone would sometimes not go for maximum epistemic rationality. I haven't yet referenced this post in a conversation, but mostly because I haven't happened to wind up in the right circumstance.
I strongly want this to be in the Best Of LessWrong collection, because in the circumstances where someone is practicing Dragon Agnosticism, they probably can't (or won't) say that out loud even if it is trivial for others to infer. "I'm not going to research [taboo topic] in case I come to believe [taboo conclusion]" doesn't get you into less trouble (or not much less) than "I believe [taboo conclusion]", and thus people probably won't claim Dragon Agnosticism explicitly.
I want everyone to read Dragon Agnosticism, and then be able to guess what's going on when other people oddly aren't talking about or investigating a taboo topic.