TL;DR: you can still just sign this statement if you agree with it. It still matters, and you can clarify your position in a statement of support (600 characters) next to your name, and you can clarify your actual full position on LW and/or elsewhere.
X= whether you agree with me that safety researchers have a duty to take a public stance. If you agree with the statement and you are working at a lab where some of your coworkers signed, please consider that signing makes it less personally costly for them to have signed.
X= whether you would have preferred signing a longer, more detailed statement that included many more things lots of folks agree on. Indeed, it seems like https://superintelligence-statement.org/ already addresses many of the specific reasons I tried to address in the past post. Also remember that you can tell us why you signed if you are worried people will get the wrong idea (you have 600 characters).
X= which "camp" you fall into. You can sign and go about trying to make superintelligence safely or you can sign and go about trying to get the world to not build unsafe superintelligence.
X= whether you think there indeed are such camps on LW. You can be worried about polarization and sign. You can worry about conflationary alliances and sign.
X= whether you think I was silly for circulating this statement in secret for months, or whether you admire the galaxy-brained covert nature of the operation.
X= whether you are glad that there already are 33K+ signatures or whether you are sad you missed the chance to sign pre-release. One important thing to realize is that people are still signing, and if/when the statement reaches future milestones (e.g. 100K signatures and beyond), it will matter how many researchers signed[1]. And indeed, how we choose to act upon our agreement with this statement will decide how many thresholds are hit and how soon.
Specifically, after signing, you can add a statement of support, and then you can post it somewhere people will see it (e.g. LW, Twitter, ...).
Here are some statements of support, starting with mine:
Barring an international agreement, humanity will quite likely not have the ability to build safe superintelligence by the time the first superintelligence is built. Therefore, pursuing superintelligence at this stage is quite likely to cause the permanent disempowerment or extinction of humanity. I support an international agreement to ensure that superintelligence is not built before it can be done safely.
Here is Nate's:
The race to superintelligence is suicidal. Progress shouldn't be subjected to a public veto, but technologists also shouldn't flirt with annihilation. Scientific consensus alone is not enough (any more than alchemist consensus in the year 1100 would be enough to guarantee a potion of immortality). The science of making a superintelligence beneficial is nowhere near mature. There's time pressure and reality may demand bold action from cognizant leaders (without public buy-in), but the public is right to object to the race and right to be wary of technologist consensus in this case.
Here is Tsvi's:
Humanity lacks the technical understanding required to build superhuman AI without the AI killing everyone. Efforts to build superhuman AI should be stopped through laws, international treaties, social norms, professional rules, and by providing alternative ways to gain the supposed benefits. I hesitated to sign the statement because it sounds like it would be ok to build superhuman AI with some scientific consensus and public buy-in. I believe superhuman AI is too dangerous to be worth risking even in many of those scenarios. But I technically agree with the statement as worded.
Here is Kaarel's[2]:
If we allow the pursuit of AGI to continue, the human era will end, humans and human institutions will probably be rendered insignificant and powerless, and plausibly simply extinct.
Even if you think that not building superintelligence would be the greatest tragedy, you can still just sign[3] and add a statement of support that goes a bit like this:
I think not building superintelligence would be an immense tragedy, but I think the current race poses significant risks of a very bad future, and I'd prefer if we could just pause until we could build safe superintelligence.
Even if you are an uncompromising libertarian, you can still just sign and add a statement of support that goes a bit like this:
As a libertarian, I usually disagree with waiting for the public to approve, but letting anyone build tech that can take unilateral control of the future without at least some large fraction of libertarians on board would go against my libertarian values.
Even if you think the risk is small but worth acting upon, you can just sign and add a statement of support:
Even a 10-25% chance of extinction would be too high; let's find a way to get better odds.
You can still just sign https://superintelligence-statement.org/ if you agree with it. It still matters, and you can clarify how/why you agree in a statement of support (600 characters) next to your name, and you can clarify your actual full position on LW and/or elsewhere.
If you disagree with the statement, you can say why in the comments.