Screwtape

I'm Screwtape, also known as Skyler. I'm an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I'm fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you're ever in the Boston area, feel free to say hi.

Starting early in 2023, I'm the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games. 

I recognize that last description might fit more than one person.

Sequences

The LessWrong Community Census
Meetup Tips
Meetup in a box

Wiki Contributions

Comments


I continue to be a fan of people trying to accomplish something in the world and reporting back on what happened. This is a good example of the genre, and on a subject near and dear to (part of) LessWrong's collective heart.

I confidently expect somebody will read a bunch of things on LessWrong, get excited about AI, and try to get the American government to Do Something. By default this attempt will not be particularly well aimed or effective, and every piece of information we can give on the obstacles will be useful. There have been updates since 2023 on government awareness of and response to AI, though I suspect the core information in this post about how to get in contact with people remains unchanged. It might even be cause-area agnostic; if I wanted to talk to members of Congress about education or biosecurity, my guess is having draft proposals ready would be useful.

As novel as the Dialogue feature was, I'd be interested in a tightened-up version of this that cut to the key points and takeaways. I'd also be interested in hearing from people who've done policy work whether this seems accurate and whether it leaves anything important out; better yet, from people who tried using this as a guide! Overall, yeah, I weakly think this is worth including in a Best Of LessWrong collection.

(Self review.) The bystander effect is fairly well known in the rationalist community. Quietly fading is not as widely recognized. Since writing this post, two people have told me (and others) about projects they were dropping, specifically citing this post as the reason they said so aloud instead of just showing up less.

Mission (partially) accomplished.

Since crystallizing this concept, I've started paying more attention to 1. who owns a project and 2. when I last saw motion on that project. I stand by this post: it spotlights a real problem and makes a couple of useful suggestions.

I wish more people 1. tried practicing the skills and techniques they think are important as rationalists and 2. reported back on how it went. Thank you, Olli, for doing so and writing up what happened!

Being well calibrated is something I aspire to, so the advice on particular places where one might stumble (the >90% region is difficult, one's gut may get anchored on a particular percentage for no good reason, switching domains threw things off for a little while) is helpful. I'm a little nervous about how changing question category apparently led to poorer calibration for a while. It makes sense why that would be the case, but my ideal art of rationality would work well across domains; otherwise, why not just study that particular domain more? I do like the application to day-to-day problems; "do I have peanut butter at home or did I run out?" is the kind of question I run into at least daily.

I'd love to have a dozen such reports from a dozen people's attempts, both to see if a pattern of common mistakes stood out ("Be cautious, Laplace's Rule works a bit differently when there can be multiple outcomes"; see the sketch below) and to get more datapoints that practice works. That's not a knock against what Olli's written here; that's a wish for more people to follow up and do this! Without feedback on what techniques work and what it looks like to improve, building a martial art of rationality gets much harder. With feedback like this, other people can better understand what's worth practicing and what's realistic to expect.
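To make that multiple-outcomes caveat concrete, here's a minimal sketch of Laplace's Rule of Succession and its multi-outcome generalization. The numbers and function names are my own illustration, not anything from Olli's post:

```python
# Laplace's Rule of Succession, binary and multi-outcome versions.
# All counts here are invented for illustration.

def laplace_binary(successes: int, trials: int) -> float:
    """P(next trial succeeds) = (s + 1) / (n + 2) for a binary outcome."""
    return (successes + 1) / (trials + 2)

def laplace_multi(counts: dict) -> dict:
    """With k possible outcomes, each outcome gets (count + 1) / (n + k).

    The denominator grows with the number of outcomes; this is where
    the rule "works a bit differently": the binary formula overestimates
    if you forget there are more than two categories.
    """
    n, k = sum(counts.values()), len(counts)
    return {outcome: (c + 1) / (n + k) for outcome, c in counts.items()}

# Something has happened 10 times out of 10 observations:
print(laplace_binary(10, 10))                   # 11/12, ~0.917
# A process with three possible outcomes, observed 10 times:
print(laplace_multi({"a": 6, "b": 3, "c": 1}))  # a: 7/13, b: 4/13, c: 2/13
```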

The most important takeaway I had from this post is that the repeated practice worked: Olli got more calibrated as they practiced.

I'm inclined to think the Best Of LessWrong posts should include not just the big insights or the shiny new techniques, but also the dutiful reports years later about the impact those techniques have on normal life. I'd like to lightly recommend Takeaways From Calibration Training for inclusion in the Best Of LessWrong Posts.

The structure did change. I've gone ahead and added an SFLW file to reflect the new structure, using the description Andrew had for the First Saturday SFLW group. @Andrew Gaul, if you want to tweak that description, look for /_posts/2025-01-05-SFLW.md and change it as you need.

Well, thank you for filling the survey out. If you used to be around and aren't any more, I'm happy to have you in the dataset. 

I hope you get unsubscribed successfully, and best of luck in whatever you're up to now!

Thank you for taking it! It's designed to let people skip lots of questions if they want.

The thing I want most from LessWrong and the Rationality Community writ large is the martial art of rationality. That was the Sequences post that hooked me, and that is the thing I personally want to find if it exists. Therefore, posts that are actually trying to build a real art of rationality (or warn of failed approaches) are the kind of thing I'm going to pay attention to, and if they look like they might actually work I'm going to strongly vote for including them in the Best Of LessWrong collection.

Feedbackloop-first Rationality sure looks like an actual attempt at solving the problem. It lays out a strategy, the plan seems like it plausibly might work, and there are follow-up workshops, which suggests some people are actually willing to spend money on this; that's not a clear indicator that it works (people spend money on all kinds of things) but it is significantly more than armchair theorizing.

If Raemon keeps working on this and is successful, I expect we'll see some testable results. If, say, the graduates or regular practitioners turn out to be able to confidently one-shot Thinking Physics-style problems while demographically matched people stumble around, that'll be a Hot Dang Look At That Chart result, at least on the toy problems. If they go on to solve novel, real-world problems, then that's a clear suggestion this works.

There are two branches of followup I'd like to see. One, Raemon's already been doing: running more workshops teaching this, teasing out useful subskills to teach, and writing up how to run exercises and what the subskills are. The second is evaluations. If Raemon's keeping track of students and people who considered going but didn't, I'd love to see a report on how both sets are doing in a year or two. I'm also tempted to ask on future community censuses whether people have done Feedbackloop-first Rationality workshops (options like "Yes under Raemon", "Yes by other people based on this", and "No"), then throw a timed Thinking Physics-style problem at them and see if there's any signal to pick up; a toy sketch of that check is below.
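For concreteness, here's a toy sketch of what that census check could look like. The counts are entirely invented, the option labels mirror the ones proposed above, and the two-proportion z-test is just one plausible way to look for signal, not anything the census actually runs:

```python
# Toy check for signal between workshop attendance and solving a timed
# Thinking Physics-style problem. All counts are hypothetical.
from math import erf, sqrt

groups = {
    "Yes under Raemon":            {"solved": 9,  "total": 20},
    "Yes by other people":         {"solved": 6,  "total": 15},
    "No":                          {"solved": 30, "total": 120},
}

def two_proportion_z(s1: int, n1: int, s2: int, n2: int):
    """Two-sided two-proportion z-test; returns (z, p-value)."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via erf; p-value is twice the upper tail.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

attended, control = groups["Yes under Raemon"], groups["No"]
z, p = two_proportion_z(attended["solved"], attended["total"],
                        control["solved"], control["total"])
print(f"solve rate {attended['solved'] / attended['total']:.0%} vs "
      f"{control['solved'] / control['total']:.0%}, z={z:.2f}, p={p:.3f}")
```

With real census data you'd also want to control for the demographic matching mentioned above, since workshop attendees are a self-selected group.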

Mostly, I really want people to keep trying things in this genre of finding techniques and trainings to make better decisions. I want them to keep writing up what they're trying, what works, and what doesn't. If LessWrong stops having space for that in our Best Of collection, or has nobody in the community trying things like that, then I think something somewhere went badly wrong. 

Thank you for your work, Raemon!

(Self review)

Basically I stand by this post and I think it makes a useful addition to the conversation.

"Motte and bailey" is one of the pieces of rationalist lexicon that has wound up fairly widespread. It's also easy to misuse, because "America" or "Catholics" or "The military industrial complex" are made up of lots of different people who might legitimately different views. The countercharm is recognizing that, and talking to specific people. "Here's a way to be wrong, here's a way to be less wrong" seems a worthwhile addition to LessWrong.

Does it make accurate claims, and is there a subclaim I can test? Not easily. There aren't going to be molecules of bailey or atoms of motte I can get under a microscope, and while I think that if I took half an hour on Twitter/X I'd be able to find a bunch of examples of people making the Mob and Bailey mistake, they'd be fuzzy or arguable examples. Consider the original Motte and Bailey: lots of people seem to find it useful, but I'm not sure I'd get more than 70% agreement on any particular example in the wild.

For followup work, I'd like ideas for how to convince large organizations to change directions. My current best idea is to identify who makes the decisions and to change their minds, and this is pretty well represented by business or sales guides for identifying decisionmakers. I'd also like more answers on how, at an individual level, to stay on target and not get distracted into arguing with a crowd while simultaneously not making the crowd extra mad at you for ignoring them.

If I could vote on it, I'd give this a small vote for inclusion in the Best Of LessWrong collection.

. . . Okay, I'll bite.

 

[Embedded prediction]

Edit: And-

[Embedded prediction]
Now, I don't suppose that LessWrong prediction API is documented anywhere?