Raemon

I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Ray's Coordination Sequence
Privacy Practices
The LessWrong Review
Keep your beliefs cruxy and your frames explicit
Kickstarter for Coordinated Action
Open Threads
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff

Comments

The Apprentice Thread

Free mentorship offers tend to attract flakes so if you show up one minute late to a meeting then we're done. If I message you and you do not reply in a reasonable timeframe then we're done. I'm not going to be your therapist. Expect something more along the lines of a drill sergeant.

I like this.

Raemon's Shortform

The two-term limit was actually not intended by Washington to become a tradition, he retired after his second term because he was declining in health.

Citation? (I've only really read American Propaganda about this so not very surprised if this is the case, but hadn't heard it before)

How to make errands fun

I was quite tickled by the combo of "make it physical training" and "make it a rest from physical training!".

Raemon's Shortform

A thing I might have maybe changed my mind about:

I used to think a primary job of a meetup/community organizer was to train their successor, and develop longterm sustainability of leadership.

I still hold out for that dream. But it seems like the pattern is:

1) community organizer with passion and vision founds a community

2) they eventually move on, and pass it on to one successor who's pretty closely aligned and competent

3) then the First Successor has to move on too, and then... there isn't anyone obvious to take the reins, but if no one does the community dies, so some people reluctantly step up, and...

...then forever after it's a pale shadow of its original self.

For semi-branded communities (such as EA, or Rationality), this also means that if someone new with energy/vision shows up in the area, they'll see a meetup, they'll show up, they'll feel like the meetup isn't all that good, and then move on. Whereas they (maybe??) might otherwise have founded a new one whose direction they got to shape more.

I think this also applies to non-community organizations (e.g. a founder hands the reins to a new CEO, who hands the reins to a new CEO, who doesn't quite know what to do).

So... I'm kinda wondering if second-generation successors should just... err on the side of shutting the thing down when they leave, rather than trying desperately to find a replacement.

The answer isn't obvious. There is value that continues to be created by the third+ generation. I think I've mostly gone from "having a firm opinion that you should be proactively training your successor" to "man, I dunno, finding a suitable successor is actually pretty hard, mrrr?"

Everything Okay

Curated.

This is a somewhat old post, but I thought it was underappreciated (other LW mods weren't quite sure why I wanted to curate it). 

I think the core concept here was a crisp articulation of something I hadn't had a clear handle on, but which I think is a key rationality concept. And as Zvi dug into the different examples, it was very illuminating how confused and jumbled my default intuitions about being "Okay" and "Not okay" were.

Distinguishing the psychological state of "things are okay/not-okay" from the reality of "are things 'okay' within some frame of judgment?" seems really important. This feels similar to Nate Soares's post about how to detach the attitude of 'conviction' from your epistemic assessment of "will Project X work out?"

It happened to be particularly relevant to me this week, as I was feeling a sense of 'things are not okay', and looking around, things in fact seemed 'not okay' in many objective senses. Nonetheless, being in "not okay mode" was making it harder to think sensibly.

I liked Zvi's exploration of how the concept of "Are things okay?" can get distorted at the group rationality level, where people might pressure each other into adopting an "I can be okay" stance by ignoring problems, or shuffling responsibility for them around, without noticing that that's what they're doing.

Longterm, I'd be interested in both:

  • Better teaching techniques for helping individuals learn skills relating to "be in psychological 'okay-mode', in the places where it's useful to be so, without losing sight of object-level reality."
  • Having good group-practices for how to relate to okay-mode. 

A criticism I have of this piece, similar to some other Zvi pieces that do an exhaustive taxonomy, is that it's overly long and the individual examples are just given simple letters that get really hard to follow in the second half (I have a similar complaint about Simple Rules of Law, despite also liking that post a lot). I'm not actually sure how to solve the problem, though.

A Breakdown of AI Chip Companies

I think this post could use a summary of what your takeaways were here, or why this is relevant to LW. (It does indeed seem relevant, but it's generally good practice to include that in linkposts so people can get a rough sense of an article via the hover-over preview.)

Cryonics signup guide #1: Overview

Curated. Some posts convey a brilliant insight. Some entire sequences of posts are... just a lot of helpful information for people who need it. I'm hoping this sequence helps people who are thinking about cryonics get started on it, with a bunch of practical info.

Rules for Epistemic Warfare?

I'm surprised by how controversial the OP is, and... all the comments so far?

Rules for Epistemic Warfare?

I haven't yet thought in detail about whether this particular set of suggestions is good, but I think dealing with the reality that conflict incentivizes deception, and figuring out what sorts of rules regarding deception can become stable Schelling points, seems really important.
