I signed the statement. My concern, which you don't address, is that I think the statement should call for a prohibition on AGI, not just ASI. I don't think there is any meaningful sense in which we can claim that particular developments are likely to lead to AGI, but definitely won't lead to ASI. History has shown that anytime narrow AI reaches human levels, it is already superhuman. Indeed, if one imagines that tomorrow one had a true AGI (I won't define AGI here, but imagine an uploaded human that never needs to sleep or rest), then all one would need to do to make ASI is to add more hardware to accelerate thinking or add parallel copies.
I want that statement too, but that doesn't seem to be this one's job. This one is for establishing common knowledge that "it'd be bad to build ASI under current conditions." There probably wouldn't be enough consensus yet for "...and that means stop building AGI," so it wouldn't be very useful to try.
Building superintelligence poses large existential risks. Also known as: If Anyone Builds It, Everyone Dies. Where ‘it’ is superintelligence, and ‘dies’ is that probably everyone on the planet literally dies.
We should not build superintelligence until such time as that changes, and both the risk of everyone dying as a result and the risk of losing control over the future are very low. Not zero, but far lower than they are now or will be soon.
Thus, the Statement on Superintelligence from FLI, which I have signed.
Their polling says there is 64% agreement on this, versus 5% supporting the status quo.
A Brief History Of Prior Statements
In March of 2023, FLI issued an actual pause letter, calling for an immediate pause of at least six months in the training of systems more powerful than GPT-4, signed by, among others, Elon Musk.
This letter was absolutely, 100% a call for a widespread regime of prior restraint on the development of further frontier models, and importantly a call to ‘slow down’ and to ‘pause’ development in the name of safety.
At the time, I said it was a deeply flawed letter and I declined to sign it, but my quick reaction was to be happy that the letter existed. This was a mistake. I was wrong.
The pause letter not only weakened the impact of the superior CAIS letter, it has now for years been used as a club with which to browbeat or mock anyone who would suggest that future sufficiently advanced AI systems might endanger us, or that we might want to do something about that. It is used to claim that any such person must have wanted such a pause at that time, or would want to pause now, which is usually not the case.
The second statement was the CAIS letter in May 2023, which was in its entirety: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’
This was a very good sentence. I was happy to sign, as were some heavy hitters, including Sam Altman, Dario Amodei, Demis Hassabis and many others.
This was very obviously not a pause, or a call for any particular law or regulation or action. It was a statement of principles and the creation of common knowledge.
Given how much worse many people have gotten on AI risk since then, it would be an interesting exercise to ask those same people to reaffirm the statement.
This Third Statement
The new statement sits in between the previous two letters in how much it asks for.

It is more prescriptive than the CAIS letter, which simply stated a priority.
It is however not a call to ‘pause’ at this time, or to stop building ordinary AIs, or to stop trying to use AI for a wide variety of purposes.
It is narrowly requesting that, if you are building something that might plausibly be a superintelligence, under anything like present conditions, you should instead not do that. We should not allow you to do that. Not until you make a strong case for why this is a wise or not insane thing to do.
Those speaking out most vocally against the statement strongly believe that no one will be in a position to build superintelligence within the next few years, so for the next few years any reasonable implementation would not pause or substantially impact AI development.
I interpret the statement as saying, roughly: if a given action has a substantial chance of being the proximate cause of superintelligence coming into being, then that’s not okay, we shouldn’t let you do that, not under anything like present conditions.
I think it is important that we create common knowledge of this, which we very clearly do not yet have. This does not have to involve asking for a concrete short-term particular policy or other intervention.
Who Signed It
As of this writing there are 32,214 signatories.
The front page lists before the first break: Yoshua Bengio, Geoffrey Hinton, Stuart Russell, Steve Wozniak, Sir Richard Branson, Steve Bannon, Glenn Beck, Susan Rice, Mike Mullen and Joe Crowley.
Here are some comments by signers:
Nate Soares explains his support of the statement: he would have written a different statement himself, but wants to avoid the narcissism of small differences, as do I.
Pushback Against the Statement
Dean Ball pushed back on the statement, calling it counterproductive and silly. He points out that any operationalization of such a policy ‘would not feel nice to sign.’ And he points out that, without some sort of global coordination to prevent building unsafe superintelligence, we would build superintelligence soon after it becomes technically possible to do so, while also pointing to how bad it would be if there were global coordination stopping anyone from doing that.
Sriram Krishnan echoes and endorses Dean’s take, calling this a ‘Stop AI’ letter, equating stopping all AI with not building superintelligence, despite Sriram having also said that he does not believe AGI, let alone ASI, is going to happen any time soon.
Okay, so we should then, when faced with this choice, build a superintelligence shortly after it becomes possible to build one? That does not feel like a nice policy to sign.
As I understand the position taken by Sriram and Dean, they don’t offer a meaningful third option. If you intend to stop superintelligence from being developed as soon as it is possible to develop it, you must end up with a ‘global organization with essentially unchecked power,’ and that’s worse. Those are, they tell us, our only choices, and the only thing you could be asking for if you express the desire for superintelligence not to be built at the first opportunity.
I don’t think those are the only choices, and I certainly don’t think the way to find a third option is to tell us we can’t create common knowledge of opposition to door number one without endorsing door number two. But I also don’t understand why, if the other option is not cake, such people then choose death.
Responses To The Pushback
Scott Alexander then responded, defending the idea of vague value statements of intent without operationalized methods of implementation, to create common knowledge that people care, after which you can come up with specific plans. He then challenges Dean’s assumptions about what form that implementation would take, but also asks why Dean’s implementation would be worse than the null action.
I agree with Daniel that I would expect the qualification to be seen by most people as a conciliatory/compromise/nuance clause. I also suspect that Dean’s model of himself here is incorrect, although his statement would have been different.
Exactly. This is creation of common knowledge around common sense thinking, not a request for a particular detailed policy.
Simeon pushes back that we already ban technologies deemed unsafe without resorting to centralized power, that yes, you can prove safety before building, and that Dean’s presumed implementation is very far from the centralization-safety Pareto frontier. I don’t actually think you can ever ‘prove’ the safety of a superintelligence; what you do (as with most other things) is mitigate risk to acceptable levels, since there are big costs to not building it.
Max Tegmark respectfully pushes back that we need to be able to call for systematic rules or changes without being able to fully define their implementation, using the example of child labor, where people rightfully said ‘we should ban child labor’ without first defining ‘child’ or ‘labor’ (or, I would add in this context, defining ‘ban’).
Dean respectfully notes two things. First, that implementation of child labor restrictions is far easier, which is true, although I’m not convinced it is relevant. The principles remain the same, I think? And second, that they importantly disagree about the nature of intelligence and superintelligence, which is also very true.
Dean then gets to his central point, which is that he prefers to focus on practical and incremental work that moves us towards good outcomes on the margin. I am all for such work, but I don’t expect it alone to be sufficient, and I don’t see why it should crowd out the creation of common knowledge or the need to consider bolder action.
Dean offers to discuss the issues live with Max, and I hope they do that.
Avoid Negative Polarization But Speak The Truth As You See It
Dean Ball is the kind of Worthy Opponent you want, who has a different world model than you do but ultimately wants good things over bad things.
He provided an important public service yesterday, as part of a discussion of various AI bills, when he emphasized warnings against negative polarization.
There certainly are those who actively seek to cause negative polarization on AI safety issues generally, who go full on ‘look what you made me do,’ and claim that if you point out that superintelligence probably kills us and ask us to act like it, the only reasonable response is to politicize the issue and to systematically work against any effort to mitigate risks, on principle, because that’s how it works and they don’t make the rules.
They are trying to make those the rules, and use everything as ammunition.
I don’t think it is reasonable (or good decision theory) to say ‘therefore, because these people have power, STFU and only work on the margin if you know what’s good for humanity, or for you.’