Raemon

I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

The Coordination Frontier
LessWrong Political Prerequisites
Ray's Coordination Sequence
Privacy Practices
The LessWrong Review
Keep your beliefs cruxy and your frames explicit
Kickstarter for Coordinated Action
Open Threads
LW Open Source Guide

Comments

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

FYI, I just automatically interpreted it to mean "former staff member". (This is biased by my belief that CFAR has very few current staff members, so it was highly unlikely to be one, but I don't think it was an unreasonably weird reading.)

Zoe Curzi's Experience with Leverage Research

My own experience is somewhat like Linch's here, where mostly I'm vaguely aware of some things that aren't my story to tell.

For most of the past nine-ish years I'd found Leverage "weird/sometimes off-putting, but not obviously more so than other rationality orgs." I have gotten personal value out of the Leverage suite of memes and techniques (Belief Reporting was a particularly valuable thing to have in my toolkit).

I've received one bit of secondhand info: "An ex-Leverage employee (not Zoe) had an experience that seemed reasonable to describe as 'the bad kind of cult that was actually harmful'." I was told this in the past couple of years, as part of a decisionmaking process where it seemed relevant, and was asked not to share it further. I think it makes sense to share this much metadata in this context.

Book Review: A Pattern Language by Christopher Alexander

Periodically this book gets mentioned to me, and I try reading it and bounce off. I'd be interested in someone going into more detail on the parts that seem sensible/interesting/tractable for mid-level organization projects.

(concretely: the Lightcone team is working long-term on building a campus, and I expect us to be more likely to make use of a good summary of this, or a strong recommendation that the whole thing is worth reading.)

How to think about and deal with OpenAI

Mostly time and attention. This has been on the list of things the LessWrong team has considered working on, and there are just a lot of competing priorities.

Cup-Stacking Skills (or, Reflexive Involuntary Mental Motions)

This post (unfortunately fairly badly formatted on his new website) probably won't be new to you, but it's another one I refer back to from time to time.

(Rereading it, I'm not actually sure I agree with the post's primary thesis exactly, but the framing resonates.)

https://www.theferrett.com/2014/09/04/the-answer-that-destroys-all-our-futures/

Secure homes for digital people

Man, this fills me with some creeping dread at how many complex problems need to be solved in order for the future to not be dystopic.

Secure homes for digital people

Also, wouldn't being forced to retreat entirely to your "home" qualify as horrible conditions? That's solitary confinement, no?

Depending on the setup, you can probably invite other people into your home.

Shoulder Advisors 101

Two anecdotes:

...

I recall when I was 18 years old or so; I'd been arguing with a very religious friend throughout high school. I would rehearse arguments with him in my head, preparing for the next time we'd meet, when I'd tell him all the reasons I thought his beliefs didn't make sense.

And for the first couple years of this, in the arguments in my head, I'd always say things like "have you considered point X?" and my imaginary friend would say "oh man, you're right. I am wrong." But then, eventually, I hit points where I'd say "what about X?" and my imaginary friend would say "So? X doesn't matter, because [counterargument]."

This was a neat thing to discover about my ability to model people. (It was also relevant to the entire "does God exist?" debate – a point that eventually became cruxy for me is that you totally can build up simulations of people in your head, and I'd expect that to be hard to distinguish from God speaking to you.)

...

More recently, I benefited from asking my own future self for advice. (In fact, I asked multiple future selves who might evolve in different directions.) One future self responded with some concrete, compassionate advice about how one of my coping mechanisms wasn't actually helping with my core goals.

Cheap food causes cooperative ethics

This seems probably right.

I’m interested in the obvious followup questions: “How well does this check out across more than two examples? How strong is the effect size of surplus food / how much food do you need? Do you need enlightenment memes separately from the food? Is food abundance sufficient, or do you also need other kinds of abundance?”
