LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
One of the most interesting parts of iterating on my workshops was realizing who they are not for. This post was helpful for crystallizing that you really need to a) have control over your projects, b) already be good at executive function, and c) be capable of noticing if you're about to burn out.
That rules out a lot of people.
(In practice, it turns out everyone is kinda weaksauce at executive function, and we really do also need an Executive Function Hacks workshop)
...
I think this post is surprisingly useful for succinctly explaining the current state of my paradigm. It gives enough of a bird's-eye view to sense how everything fits together, while making each individual piece reasonably self-explanatory. It's not quite the ideal Timeless Explanatory Post version of itself, but it's pretty good.
I have a feeling there is a more engaging way to write this post. I think it does a good job compressing "Noticing" into a post that explains why you should care, instead of having to read 20 meandering Logan essays. But, my post here is kinda dry, tbh.
Since writing this post, I still haven't integrated Noticing directly into my workshops (because it takes a while to pay off). But, at a workshop earlier this year, I offered an optional Noticing session where people read this post and then did corresponding exercises; everyone opted in, and it went well. One participant said "Insight Unpacking went from seeming kinda fake to seeming extremely real."
I haven't followed up with those workshop participants yet to see if any of these bits stuck.
Of the Richard Ngo stories, this one gives me the most visceral dread of "oh christ this is just actually gonna happen by default, isn't it?"
I endorse this message.
I don't have a very good model of the details of how easy it is to get inroads with the local Berkeley community these days. I want to warn that Berkeley is uniquely weird because of how professionalized it is, and how many competing opportunities there are for socialization/community (which have the weird effect of sometimes making it harder to find community).
I still have obviously chosen to live in Berkeley, and I'm not saying "do as I say, not as I do"; I'm saying "think about it and consider alternatives."
The reason I thought they were non-epsilon was "it sure seems like people are not willing to go on record as saying AI x-risk is important" (going on record in favor of "AI safety" is probably easy).
Generally, I think going on record saying something outside the Overton window counts for nontrivial integrity. (But, to be fair, this isn't what I actually said in the post.)
(edit: agreed that it's important to quantify how strong a signal is, and I'm not at all sure the strengths I implied here are correct, although I think they're correct in at least a relative fashion)
I got positive feedback about it working for people who previously hadn't been into group singing, and the "One Shot Singing" segment is actually in the top 10 setlist elements according to the ratings, which is pretty high for a meta-instructional segment.
https://secularsolstice.vercel.app/programs/cd5573d9-b3fe-4f16-ae01-09a0cbc8f931/results
My impression is it worked pretty well, although I think of this as a multi-year project that will require follow-up to solidify.
Well, I said "harder to fake", not "ironclad" or "sufficiently hard to fake." It's better than "in private, he said he cared about My Pet Cause."
I do think people sometimes get mad at and vote out politicians who betrayed a principle they care about, esp. if they're single-issue voters.
Yeah, to be clear I have not thought that hard about how to handle the lawsuits. Even with a functioning lawsuit defense org-thingy, I think Evaluator People will probably need to have courage / conflict-readiness, and part of the post is a call for that.
I think the best model here is a constellation of individuals and micro-orgs, and some donors who are serious about supporting the entire endeavor (which does unfortunately involve some modeling of "what counts as the endeavor").
I find the PDF kinda annoying to read. Could we copy it over here?
I'm not quite sure which skills you were referring to here, but, some thoughts:
I don't expect most (good) senators to really be skilled at crafting policy that helps with x-risk; it's not really their job. What they need to be good at is knowing who to defer to.
One thing I think they need is to know about Legible vs. Illegible AI Safety Problems, and to track that there are going to be illegible problems that are not easy to articulate and that they themselves might not understand. (But, somehow, also not be vulnerable to any random impressive-sounding guy with an illegible problem he assures them is important.)
Realistically, the way I expect to deal with illegible problems is to convert them into legible problems, so maybe this doesn't matter that much.