I'm Screwtape, also known as Skyler. I'm an aspiring rationalist originally introduced to the community through HPMoR, and I stayed around because the writers here kept improving how I thought. I'm fond of the Rationality As A Martial Art metaphor, new mental tools to make my life better, and meeting people who are strange in ways I find familiar and comfortable. If you're ever in the Boston area, feel free to say hi.
Starting early in 2023, I'm the ACX Meetups Czar. You might also know me from the New York City Rationalist Megameetup, editing the Animorphs: The Reckoning podfic, or being that guy at meetups with a bright bandanna who gets really excited when people bring up indie tabletop roleplaying games.
I recognize that last description might fit more than one person.
My two cents: I'm happy with the number of reacts I usually see and would probably enjoy about 20% more.
Thank you for chipping in your two cents!
Three arguments in favour here:
First off, as prediction markets become more and more of a reality and start to permeate the rest of the world, we should expect some bugs that people will need to shake out or work around. Well, here's a list of problems that might come up with prediction markets.
Second, it's entertaining fiction that centres a topic we care about. That's pretty rare! I'm in favour of applauding and curating such efforts. Sure, it's a bit goofy and the emotional drama isn't that deep, but it still made me smile and chuckle a bunch. Also, this is a pretty approachable bit of writing! I want a prequel to it that goes a step lower and easier, walking through how a prediction market functions, but I do actually think tossing someone who can enjoy a random sci-fi novel at this would be a fine way to introduce them to the downsides of prediction markets; maybe the whole topic, actually. Anyone have better fic discussions of the topic?
Third, if more people read this, more people might make fanwork of it. I for one want fanart of economic tsundere demon kings.
I currently think the rationalist norms of assuming good faith, doing a lot of interpretive labor, and passively selecting for honest and high-functioning people are all good. We are as a culture pretty darn good at sifting useful meaning out of subtle mishandling of statistics, or noticing when a conflict is driven by people mistakenly talking past each other. I also think these norms create a bit of a monoculture weakness against people willing to just say false things.
I do actually think it's valuable to have a Best Of entry that reminds readers that yeah, some people will just say false things. And uh, from observation a lot of rationalists are kinda bad at the basics of lying, and so someone willing to say the false thing with confidence can sneak by a surprising number of vibe checks.
I don't know how well Ymeskhout is handling confidentiality here, but he offers a clearer story than I usually can. From my observation, yes, there exist people in or adjacent to the rationalist community who will tell whoppers like this, but who spend more mental cycles modeling you, making it harder to nail down that they definitely fibbed.
I vote this should be in the Best Of collection. We do need the antibodies this provides.
I have meetup-tinted glasses, I'll admit. That being said, I think reviewing your year is good for achieving more of your goals, this is a solid structure for encouraging people to do a thing that's good for their goals, and the writeup is therefore pretty good to have in the world. When I imagine a small community of people trying to make better decisions, I think they run this or something kind of like this once a year or so. This is an easy-to-run writeup of how to do something that groups around the world do.
I'll vote this should be in the Best Of LessWrong Review. It's not groundbreaking, sure, but c'mon: it works even if your "meetup" is one person in size, just yourself. That's a rationality technique right there.
I'd love to see followup work on which variations people feel help them, and better yet a bit of data on whether (as fits my intuitions) people who do this achieve more of their goals than people who don't. That's outside the scope of the one post though. Wish I'd thought to ask about this on the unofficial LW census; that's totally the kind of input I'd like to check for.
How small is small? I think subcultures mostly aren't gift economies. A non-exhaustive list:
They do have some gifts in them, but gifts aren't what they principally run on. Of those, MtG has the most (I got my start playing Magic using a gift of a friend's extra cards), but the local game store I frequented years ago was making a pretty deliberate profit off me, and I expect the whole thing would look different if Wizards of the Coast suddenly public-domained Magic and vanished.
I greatly appreciate people saying the circumstances under which they are and are not truth seeking or truthful. I think Dragon Agnosticism is actually pretty widespread, and instrumentally rational in many societies.
This essay lays out concisely, without invoking a specific incendiary topic, and from a position of trust (I and likely many others do trust Jeff a lot), why someone would sometimes not go for maximum epistemic rationality. I haven't yet referenced this post in a conversation, but mostly because I haven't happened to wind up in the right circumstance.
I strongly want this to be in the Best Of LessWrong collection, because in the circumstances where someone is practicing Dragon Agnosticism, they probably can't (or won't) say so out loud even if it is trivial for others to infer. "I'm not going to research [taboo topic] in case I come to believe [taboo conclusion]" doesn't get you into less trouble (or not much less) than "I believe [taboo conclusion]", and thus people probably won't claim Dragon Agnosticism explicitly.
I want everyone to read Dragon Agnosticism, and then be able to guess what's going on when other people oddly aren't talking about or investigating a taboo topic.
This is a great meetup format and y'all can fight me.
I want more entries in Group Rationality, and this is a fine way for a group to be smarter than an individual: a group can read through more material faster than one person, and the summation and presentation process might even help retention.
I also want more meetup descriptions. Jenn runs excellent meetups, many of which require a braver organizer than I am. This is one of the ones I feel I can grab without risking sparking a fight, and it's well laid out with plenty of examples. I've run a Partitioned Book Club myself, and my main quibble is that it requires a certain critical mass of people; your mileage may vary if you're the kind of meetup that has three or four people, though you might be able to make it work.
Please write up more!
Rationalists love our prediction markets. They have good features. They aren't perfect. I like Zvi's Prediction Markets: When Do They Work more, since it gives a better overview, but for some strange reason the UI won't let me vote for that this year. As prediction markets gain in prominence (yay!) we should keep our eyes on where they fall short and whether there's anything that can be done to fix them.
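To make "where they fall short" a little concrete, here's a minimal sketch of a Hanson-style LMSR market maker, one common prediction market design (not necessarily the mechanism any particular site actually runs; the liquidity parameter and trade size below are made-up illustrations). The point: in a thin market, a modest spend can push the price to a very confident-looking number.

```python
import math

# Minimal Hanson-style LMSR market maker for a binary market.
# b is the liquidity parameter: bigger b means a deeper market.

def cost(q_yes, q_no, b):
    """LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b):
    """Implied probability of YES (the derivative of C w.r.t. q_yes)."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def buy_yes(q_yes, q_no, b, shares):
    """Amount a trader pays the market maker to buy `shares` YES shares."""
    return cost(q_yes + shares, q_no, b) - cost(q_yes, q_no, b)

# The same 50-share YES buy against a thin market and a deep one,
# both starting at 50/50 (q_yes = q_no = 0). Parameter values are
# illustrative, not taken from any real market.
for b in (10, 1000):
    paid = buy_yes(0, 0, b, 50)
    new_price = price_yes(50, 0, b)
    print(f"b={b:4}: pay {paid:6.2f} to move YES from 50% to {new_price:.0%}")
```

The same 50-share buy that barely nudges the deep market (b=1000 lands around 51%) pins the thin one (b=10) near 99% for about 43 units of currency, which is one reason a lone confident-looking percentage shouldn't end an argument.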
I keep this in my back pocket in case anyone tries to argue that a thing's high odds on Manifold is definitive. It's a bit niche. It's probably not super important.
Now, that being said:
At the time of this writing it's at 90% to get into the Best Of LessWrong collection, so obviously it's an amazing post that's very likely to be included, and we should talk about it with the seriousness it deserves.
I just unironically love this?
First off, the Effective Samaritan idea is fun. It's a little bit of playful ribbing, but it's also not entirely wrong. The broader point is a good mental exercise, trying to talk to imaginary people who believe different things than you for the same reasons you believe what you believe.
The entire Starcraft section makes me smile. This is perfect Write A Thousand Roads To Rome territory. Some reader is going to be a Starcraft fan, run across this essay, and suddenly be enlightened about how the outside view actually works. Each person who figures this out on a gut level and writes something about how they made that intellectual jump is a small win for the rising sanity waterline.
This should get into the Best Of list; it's not earthshaking or a completely new insight, but I'm glad to have MathiasKB standing alongside me in the shield wall of putting rationality content up on LessWrong, and I'd be sad if nothing shaped like this made it into the Best Of list.
In case folks missed it, the Unofficial LessWrong Community Census is underway. I'd appreciate it if you'd click through, perhaps take a survey, and help my quest for truth: specifically, truth about what the demographics of the website userbase look like, what rationality skills people have, whether Zvi or Gwern would win in a fight, and many other questions! Possibly too many questions, but don't worry, there's a question about whether there are too many questions. Sadly there's not a question about whether there are too many questions about whether there are too many questions (yet; growth mindset), so those of you looking to maximize your recursion points will have to find other surveys.
If you're wondering what happens to the data, I use it for results posts like this one.