LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
Maybe I should spell out some background/inside-baseball context:
In the past year, I've been aware of some waves of effort to coordinate ~thousands of people to donate to political candidates, that successfully raised a lot of money, generally through a mix of circulating google docs making arguments, and manually DMing thousands of people through existing social networks.
It was all pretty lowkey, for the reasons stated in this post.
This accomplished some pretty impressive stuff. When I wrote this post, I wasn't like "it'd be cool if some magical rational/EA political arm came out of nowhere." I was like "it'd be cool if the existing political networks, which are actually pretty competent, also developed some processes for group epistemics and integrity (while handling the reality that they're operating in an adversarial environment). Here are some ideas on how to do that."
A lot of the comments here seem to be responding to this as pie-in-the-sky ideation. I'm approaching it with a pretty brass-tacks, practical mindset, but it makes sense if that feels weird to most readers.
Yeah my statement as-worded is too strong. (Actually, upon reflection I am surprised I didn't trigger Oliver coming in and saying "WHAT!? Ray that is wrong!")
Yeah, posts saying "hey, here are some reasons I think this should be a norm" are extremely fine. Posts doing that while also using emotionally laden language are, like, kinda fine depending on context. Posts that are directly wielding shame and implying anyone who disagrees with the post is a bad person in ways that feel socially hard to push back against are generally not fine.
taking the existence of a community of sane, reasonable, and mostly value-aligned participants as a given
Yeah. I'd phrase it as "reasonably sane, reasonably reasonable, and reasonably value-aligned." I don't think the LW commentariat is perfect, but I think they are within a basin where "aiming for a sane political coalition" is a reasonable aspirational goal. (And, while I'd like to succeed at the most ambitious version of the thing, all it needs to succeed at is "be a better use of people's time/attention than other things," given that there are pretty compelling alternatives.)
I know a lot of people around here with similar-ish political goals, and similar-ish ideals of what you might hope a rationalist political bloc to look like, such that "okay, translate that into implementation details" feels like the right step.
And not just because some of them are lying, but because useful legislation is such a narrow target, and lots of people mean well without having the skill to actually be helpful
I'm not quite sure which skills you were referring to here, but some thoughts:
I don't expect most (good) senators to really be skilled at crafting policy that helps with x-risk; it's not really their job. What they need to be good at is knowing who to defer to.
One thing I think they need is to know about Legible vs. Illegible AI Safety Problems, and to track that there are going to be illegible problems that are not easy to articulate and that they themselves might not understand. (But, somehow, also not be vulnerable to any random impressive-sounding guy with an illegible problem they assure you is important.)
Realistically, the way I expect to deal with illegible problems is to convert them into legible problems, so maybe this doesn't matter that much.
One of the most interesting parts of iterating on my workshops was realizing who they are not for. This post was helpful for crystallizing that you really need to a) have control over your projects, b) already be good at executive function, and c) be capable of noticing if you're about to burnout.
That rules out a lot of people.
(In practice, it turns out everyone is kinda weaksauce at executive function, and we really do also need an Executive Function Hacks workshop)
...
I think this post is surprisingly useful for succinctly explaining the current state of my paradigm, giving enough of a bird's-eye view to sense how everything fits together while making each individual piece reasonably self-explanatory. It's not quite the ideal Timeless Explanatory Post version of itself, but it's pretty good.
I have a feeling there is a more engaging way to write this post. I think it does a good job compressing "Noticing" into a post that explains why you should care, instead of having to read 20 meandering Logan essays. But, my post here is kinda dry, tbh.
Since writing this post, I still haven't integrated Noticing directly into my workshops (because it takes a while to pay off). But, at a workshop earlier this year, I offered an optional Noticing session where people read this post and then did corresponding exercises; everyone opted in and it went well. One participant said "Insight Unpacking went from seeming kinda fake to seeming extremely real."
I haven't followed up with those workshop participants yet to see if any of these bits stuck.
Of the Richard Ngo stories, this one gives me the most visceral dread of "oh christ this is just actually gonna happen by default, isn't it?"
I endorse this message.
I don't have a very good model of the details of how easy it is to get inroads with the local Berkeley community these days. I want to warn that Berkeley is uniquely weird because of how professionalized it is, and how many competing opportunities there are for socialization/community (which have the weird effect of sometimes making it harder to find community).
I still obviously have chosen to live in Berkeley, and I'm not saying "do as I say, not as I do", I'm saying "think about it and consider alternatives."
The reason I thought they were non-epsilon was "it sure seems like people are not willing to go on record as saying AI x-risk is important" (going on record in favor of "AI safety" is probably easy).
Generally, I think going on record saying something outside the Overton window counts for nontrivial integrity. (But, this isn't what I really said in the post, to be fair.)
(Edit: agreed that quantifying how strong a signal is is important, and I'm not at all sure the strengths I implied here are correct, although I think they are at least correct in a relative sense.)
I'm actually a bit surprised these frames were new for you-in-particular, curious which bits were helpful?