Raemon

I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Ray's Coordination Sequence
Privacy Practices
The LessWrong Review
Keep your beliefs cruxy and your frames explicit
Kickstarter for Coordinated Action
Open Threads
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff

Comments

Specializing in Problems We Don't Understand

Curated.

I think the problem this post grapples with is essentially one of the core rationality problems. Or, one of the core reasons I think it might be useful to have "rationality" as a field.

The particular set of suggestions and exercises here seemed a) plausibly quite useful (although I haven't really tried them), and b) to point towards a useful generator for how to think more about developing as "the sort of person who can solve general confusing problems."

"Taking your environment as object" vs "Being subject to your environment"

I don't actually know what the grammatical rules say, but "take environment as object" is the phrase I've heard used in local culture over the past few years.

What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs)

Curated. I appreciated this post for a combination of:

  • laying out several concrete stories about how AI could lead to human extinction
  • laying out a frame for how to think about those stories (while acknowledging other frames one could apply to the story)
  • linking to a variety of research, with more thoughts on what sort of further research might be helpful.

I also wanted to highlight this section:

Finally, should also mention that I agree with Tom Dietterich’s view (dietterich2019robust) that we should make AI safer to society by learning from high-reliability organizations (HROs), such as those studied by social scientists Karlene Roberts, Gene Rochlin, and Todd LaPorte (roberts1989research, roberts1989new, roberts1994decision, roberts2001systems, rochlin1987self, laporte1991working, laporte1996high).  HROs have a lot of beneficial agent-agnostic human-implemented processes and control loops that keep them operating.  Again, Dietterich himself is not as yet a proponent of existential safety concerns, however, to me this does not detract from the correctness of his perspective on learning from the HRO framework to make AI safer.

Which is a thing I think I once heard Critch talk about, but which I don't think had been discussed much on LessWrong, and which I'd be interested in seeing more thoughts and distillation of.

Covid 4/9: Another Vaccine Passport Objection
Raemon · 5d · Moderator Comment · 5

(Frontpaged despite not normally frontpaging covid posts)

Monastery and Throne

Something my wife said last month: "Is this how you think about politics all the time? No wonder you're depressed."

I'm not quite sure what the "this" is in that sentence. You think about politics all the time how?

Open and Welcome Thread - April 2021

Oh, huh. I'll merge the comments from the other one into this one.

Another (outer) alignment failure story

There's a lot of intellectual meat in this story that's interesting. But, my first comment was: "I'm finding myself surprisingly impressed about some aesthetic/stylistic choices here, which I'm surprised I haven't seen before in AI Takeoff Fiction."

In normal English phrasing across multiple paragraphs, there's a sort of rise-and-fall of tension. You establish a minor conflict, confusion, or an open loop of curiosity, and then something happens that resolves it a bit. This isn't just about the content of 'what happens', but also what sort of phrasing one uses. In verbal audio storytelling, this is often accompanied by the pitch of your voice rising and falling.

And this story... even moreso than Accelerando or other similar works, somehow gave me this consistent metaphorical vibe of "rising pitch". Like, some club music where it keeps sounding like the bass is about to drop, but instead it just keeps rising and rising. Something about most of the paragraph structures feel like they're supposed to be the first half of a two-paragraph-long-clause, and then instead... another first half of a clause happens, and another.

And this was incredibly appropriate for what the story was trying to do. I dunno how intentional any of that was, but I quite appreciated it, and am kinda in awe of and boggled by what precisely created the effect – I don't think I'd be able to do it on purpose myself without a lot of study and thought.

Open and Welcome Thread - April 2021

I do definitely agree proper footnotes would be good for the default editor. I'm not sure whether we'll get to it any time soon, because we continue to have a lot of competing priorities. But meanwhile, my recommendation is to do footnotes the way they were done in this post (i.e. as comments that you can create hover-links to).

Don't Sell Your Soul

I think part of the lesson here is ‘don’t casually sell vaguely defined things that are generally understood to be some kind of big deal’
