Learning in public about formal methods, AI, and policy at provablysafe.ai
Thanks. Yeah, makes sense for official involvement to be pretty formal and restricted.
More in a "just in case someone reads this and has something to share" spirit, I'd like to extend the question to unofficial efforts others might be thinking about or coordinating around.
It would also be good if those who do get involved formally feel there's enough outside interest to make it worth their time to post informal requests for help, like "if you have 10h/week I'm looking for a volunteer research assistant to help me keep up with relevant papers/news leading up to the summit."
Beyond the people with the right qualifications to get directly involved right away, e.g. via the form, are there "supporting role" tasks/efforts that interested individuals with different skillsets and locations could help out with? Baseline examples could be volunteering to do advocacy, translation, ops, making introductions, doing ad-hoc website/software development, summarizing/transcribing/editing audio/video, etc. Is there a recommended Discord/Slack/other where this kind of support is being coordinated?
Conditional on all humans being dead, do you still think it's going to be the boring paperclip kind of AGI that eats all reachable resources? Any chance that inscrutable large float vectors and lightspeed coordination difficulties will spawn godshatter AGI shards that we might find amusing or cool in some way? ("Value is fragile" notwithstanding.)
On "lab leaders would choose to stop if given the coordination-guaranteed button" vs "big ol' global mood shift", I think the mood shift is way more likely (relatively) for two reasons.
One of these was argued about directionally, and I want to capture more crisply the way I see it; the other I didn't see mentioned, and it might be a helpful model to consider.
The "inverse scaling law" for human intelligence vs. rationality. "AI arguments are pretty hard" for "folks like Scott and Sam and Elon and Dario" because it's very easy for intelligent people to wade into the thing overconfidently and tie themselves into knots of rationalization (amplified by incentives, "commitment and consistency" as Nate mentioned re: SBF, etc.). Whereas for most people (and this, afaict, was a big part of Eliezer's update on communicating AGI Ruin to a general audience), it's a straightforward "looks very dangerous, let's not do this."
The "agency bias" (?): lab leaders et al. think they can and should fix things. Not just point out problems, but save the day with positive action. ("I'm not going to oppose Big Oil, I'm going to build Tesla.") "I'm the smart, careful one, I have a plan (to make the current thing be ok-actually to be doing; to salvage it; to do the different probably-wrong thing, etc.)" Most people don't give themselves that "hero license" and even oppose others having it, which is one of those "almost always wrong, but in this case right actually" things with AI.
So getting a vast number of humans to "big ol' global mood shift" into "let's stop those hubristically-agentic people from getting everyone killed, which is obviously bad" seems more likely to me than getting the small number of the latter to conclude "our plans suck actually, including mine and any I could still come up with, so we should stop."