LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
Yeah, it feels more like a paramilitary army, which is not the point.
(By contrast I do think "Stop Sign" is pretty good)
I think these look kinda scary, in particular the black/red. White/red does feel more reasonable, although the original March page was basically constrained by the aesthetic of the book cover, and by the time there's serious effort invested in this I'd expect the aesthetic to get an overhaul that isn't as tied to the book.
I think the effect there would be pretty minimal; there aren't more than a few thousand people in any given city who are likely to show up. It'd be weird to ask 90k people to travel to San Francisco, since that doesn't send a particular message about international treaties. (You might run a different "please, AI companies, unilaterally stop" protest, but I don't actually think that protest makes much sense.)
I gave this a +4 because it feels pretty important for the "how to develop a good intellectual" question. I'd give it a +9 if it were better argued.
I think it's generally the case that patterns need to survive to be good, but also it's fairly normal for patterns to die, and for that to be kinda fine. (i.e. it's fine if feudalism lasts a few hundred years and then gets outcompeted by other stuff.)
The application to superintelligence does seem, like, true and special, but probably most patterns that evolved naturally don't navigate superintelligence well, and I'm not sure it's the right standard for them.
Yeah, I think (not speaking for MIRI) that the FAQ should be rephrased so the vibe is more "here's what we believe, but there's a bunch of reasons you might want to support this."
> It's not useful for only one country to ban advancement of AI capabilities within its own borders.
This seems to imply that the US government could not on its own significantly decrease p(doom).
I think my personal beliefs would say "it's not very useful" or something. I think the "ban AGI locally" plan depends on a pretty specific path to be useful, and I don't read the current phrasing as ruling out "one country bans it and also does some other stuff in conjunction." (actually, upon reflection I'm not that confident I know what sort of scenario you have in mind here)
(again not MIRI, just sharing my own models and understanding)
The whole point is to send a message to DC people, and by the time we're talking about hitting 100,000 I don't think being in the Bay Area helps that much.
I already added it. [/ozymandias]
Yeah. On my end I'm like "well, this will be among the more important things I do that month/year; I expect to just actually be able to prioritize it over other existing plans."
I do think it'd be hypothetically nice to have an "80% likely to attend" or "I'mma make a good faith effort to attend" button, but it amps up the complexity of the page a bunch. (Realistically, I expect most people who sign up who are not rationalists* to mean something more like this in practice.)
For now I'd just say "click the notify me" option, which I think is still a useful signal.
* or other flavors of "take their word abnormally seriously"
Yeah, when I was doing the graphics I considered a version where everyone was waving stop signs. It looked a bit weird as an illustration, but I suspect it would work in real life.