I think the conversation needs to advance somehow, and I think this is roughly carving at some real joints; it's important that people are tracking the distinction.
But
a) I'm generally worried about reifying the groups further into existence (as opposed to trying to steer towards a world where people can have more nuanced views). This is tricky; there are tradeoffs and I'm not sure how to handle them. But...
b) this post's title and framing in particular are super leaning into the polarization, and I wish it did something different.
I don't like polarization as such, but I also don't like all of my loved ones being killed. I see this post and the open statement as dissolving a conflationary alliance that groups people who want to (at least temporarily) prevent the creation of superintelligence with people who don't want to do that. Those two groups of people are trying to do very different things that I expect will have very different outcomes.
I don't think the people in Camp A are immoral people just for holding that position[1], but I do think it is necessary to communicate: "If we do thing A, we will die. You must stop trying to do thing A, because that will kill everyone. Thing B will not kill everyone. These are not the same thing."
In general, to actually get the things that you want in the world, sometimes you have to fight very hard for them, even against other people. Sometimes you have to optimize for convincing people. Sometimes you have to shame people. The norms of discourse that are comfortable for me and elevate truth-seeking and that make LessWrong a wonderful place are not always the same patterns as those that are most likely to cause us and our families to still be alive in the near future.
many have more nuanced views
Fine, and also I'm not saying what to do about it (shame or polarize or whatever), but PRIOR to that, we have to STOP PRETENDING IT'S JUST A VIEW. It's a conflictual stance that they are taking. It's like saying that the statisticians who argued against "smoking causes cancer" merely "had a nuanced view".
I think it is sometimes correct to specifically encourage factionalization, but I consider it bad form to do it on LessWrong, especially without being explicitly self-aware about it (i.e. it should come with an acknowledgment that you are spending down the epistemic commons and an explanation of why you think it is worth it).
(Where, to be clear, it's fine/good to say "hey guys I think there is a major, common disagreement here that is important to think about, and take actions based on." The thing I'm objecting to is the title being "which side are you on?" and encouraging you to think in terms of sides, rather than a specific belief that you try to keep your identity small about.)
I'm annoyed that Tegmark and others don't seem to understand my position: you should try for great global coordination but also invest in safety in more rushed worlds, and a relatively responsible developer shouldn't unilaterally stop.
(I'm also annoyed by this post's framing for reasons similar to Ray.)
I personally would not sign this statement because I disagree with it, but I encourage any OpenAI employee who wants to sign to do so. I do not believe they will suffer any harmful professional consequences. If you are at OpenAI and want to talk about this, feel free to Slack me. You can also ask colleagues who signed the petition supporting SB1047 if they felt any pushback. As far as I know, no one did.
I agree that there is a need for thoughtful regulation of AI. The reason I personally would not sign this statement is that it is vague, hard to operationalize, and attempts to make it a basis for laws will (in my opinion) lead to bad results.
There is no agreed upon definition of “superintelligence” let alone a definition of what it means to work on developing it as separate from developing AI in general. A “prohibition” is likely to lead to a number of bad outcomes. I believe that for AI to go well, transparency will be key. Companies or nations developing AI in secret is terrible for safety, and I believe this will be the likely outcome of any such prohibition.
My own opinions notwithstanding, other people are entitled to their own, and no one at OpenAI should feel intimidated from signing this statement.
I like the first clause in the 2025 statement. If that were the whole thing, I would happily sign it. However, having lived in California for decades, I'm pretty skeptical that direct democracy is a good way of making decisions, and I would not endorse making a critical decision based on polls or a vote. (See also: Brexit.)
I did sign the 2023 statement.
I agree that the statement doesn't require direct democracy but that seems like the most likely way to answer the question "do people want this".
Here's a brief list of things that were unpopular and broadly opposed that I nonetheless think were clearly good:
Generally I feel like people sometimes oppose things that seem disruptive and can be swayed by demagogues. There's a reason that representative democracy works better than direct democracy. (Though it has obvious issues as well.)
As another whole class of examples, I think people instinctively dislike free speech, immigration, and free markets. We have those things because elites took a strong stance based on better understanding of the world.
I support democratic input, and especially understanding people's fears and being responsive to them. But I don't support only doing things that people want to happen. If we had followed that rule for the past few centuries, I think the world would be massively worse off.
I'd split things this way:
Could you name a couple (2 or 3, say) of the biggest representatives of that camp? Biggest in the camp sense, so e.g. high-reputation researchers or high-net-worth funders.
adding only one bit of sidedness seems insufficient to describe the space of opinions, so further description is still needed. however, adding even a single bit of incorrect sidedness, or sidedness where one of the sides is one we'd hope people don't take, seems like it could have predictable bad consequences. hopefully the split is worth it. I do see this direction in the opinion space; I'm just not sure this split is enough a part of the original generating process of opinions to be worth it.
I was "let's build it before someone evil", I've left that particular viewpoint behind since realizing how hard aligning it is. my thoughts on how to align it still tend to be aimed at trying to make camp A not kill us all, because I suspect camp B will fail, and our last line of defense will be maybe we get to figure out safety in time for camp A to not be delusional; I do see some maybe not hopeless paths but I'm pretty pessimistic that we get enough of our ducks in a row in time to hit the home run below par before the death star fires. but to the degree camp B has any shot at success, I participate in it too.
I think both of these camps are seeing real things. I think:
We should not race to superintelligence, because we're not prepared enough to have a reasonable chance of surviving it
AND
It's extremely hard to stop, due to underlying dynamics of civilization (competitive dynamics, Moloch, etc.)
We should try to stop the race, but be clearsighted about the forces we're up against and devise plans that have a chance of working despite them.
I lean more towards the Camp A side, but I do understand the Camp B side and think there's a lot of benefit to it. Hopefully I can, as a more Camp A person, help explain to Camp B dwellers why we don't reflexively sign onto these kinds of statements.
I think that Camp B has a bad habit of failing to model the Camp A rationale, based on the conversations I see in Twitter discussions between pause AI advocates and more "Camp A" people. Yudkowsky is a paradigmatic example of the Camp B mindset, and I think it's worth noting that a lot of people in the public r...
I agree this distinction is very important, thank you for highlighting it. I'm in camp B and just signed the statement.
Camp A for all the reasons you listed. I think the only safe path forward is one of earnest intent for mutual cooperation rather than control. Not holding my breath though.
I am not in any company or influential group, I'm just a forum commentator. But I focus on what would solve alignment, because of short timelines.
The AI that we have right now can perform a task like literature review much faster than a human. It can brainstorm on any technical topic, just without rigor. Meanwhile, there are large numbers of top human researchers experimenting with AI, trying to maximize its contribution to research. To me, that's a recipe for reaching the fabled "von Neumann" level of intelligence - the ability to brainstorm with rigo...
I strongly support the idea that we need consensus-building before looking at specific paths forward - especially since the goal is clearly far more widely shared than agreement about what strategy should be pursued.
For example, contra Dean Ball's unfair strawman, this isn't a backdoor to insist on centralized AI development, or even necessarily a position that requires binding international law! We didn't need laws to get the 1975 Asilomar moratorium on recombinant DNA research, or the email anti-abuse (SPF/DKIM/DMARC) voluntary technical standards, ...
The race only ends with the winner (or the AI they develop) gaining total power over all of humanity. No thanks. B is the only option, difficult as it may be to achieve.
For me, the linked site with the statement doesn't load, and this was also the case when I first tried to access it yesterday. Seems less than ideal.
Some of the implementation choices might be alienating: the announcement email I saw the day after I signed said "Let's take our future back from Big Tech," and maybe a lot of people on the fence who work at large tech companies don't like that brand of populism.
I am in the part of Camp A that wants us to keep pushing for superintelligence, but with a larger overall percentage of funds/resources invested in safety.
I’d say I’m closer to Camp B. I get, at least conceptually, how we might arrive at ASI from Eliezer’s earlier writings—but I don’t really know how it would actually be developed in practice. Especially when it comes to the idea that scalability could somehow lead to emergence or self-reference, I just don’t see any solid technical or scientific basis for that kind of qualitative leap yet.
As Douglas Hofstadter suggested in Gödel, Escher, Bach, the essence of human cognition lies in its self-referential nature, the ability to move between levels of thought, ...
In contrast, Camp B tends to support such binding standards, akin to those of the FDA
Don't compare to the FDA; compare to the IAEA.
In recent years, I’ve found that people who self-identify as members of the AI safety community have increasingly split into two camps:
Camp A) "Race to superintelligence safely": People in this group typically argue that "superintelligence is inevitable because of X", and it's therefore better that their in-group (their company or country) build it first. X is typically some combination of "Capitalism", "Moloch", "lack of regulation" and "China".
Camp B) “Don’t race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”.
Whereas the 2023 extinction statement was widely signed by both Camp B and Camp A (including Dario Amodei, Demis Hassabis and Sam Altman), the 2025 superintelligence statement conveniently separates the two groups – for example, I personally offered all US frontier AI CEOs the opportunity to sign, and none chose to do so. However, it would be an oversimplification to claim that frontier AI corporate funding predicts camp membership – for example, someone from one of the top companies recently told me that he'd sign the 2025 statement were it not for fear of how it would impact him professionally.
The distinction between Camps A and B is also interesting because it correlates with policy recommendations: Camp A tends to support corporate self-regulation and voluntary commitments without strong and legally binding safety standards akin to those in force for pharmaceuticals, aircraft, restaurants and most other industries. In contrast, Camp B tends to support such binding standards, akin to those of the FDA (which can be viewed as a strict ban on releasing medicines that haven't yet undergone clinical trials and been safety-approved by independent experts). Combined with market forces, this would naturally lead to new powerful yet controllable AI tools, to do science, cure diseases, increase productivity and even aspire to dominance (economic and military) if that's desired – but not full superintelligence until it can be devised to meet the agreed-upon safety standards – and it remains controversial whether this is even possible.
In my experience, most people (including top decision-makers) are currently unaware of the distinction between A and B and have an oversimplified view: You’re either for AI or against it. I’m often asked: “Do you want to accelerate or decelerate? Are you a boomer or a doomer?” To facilitate a meaningful and constructive societal conversation about AI policy, I believe that it will be hugely helpful to increase public awareness of the differing visions of Camps A and B. Creating such awareness was a key goal of the 2025 superintelligence statement. So if you’ve read this far, I’d strongly encourage you to read it and, if you agree with it, sign it and share it. If you work for a company and worry about blowback from signing, please email me at mtegmark@gmail.com and say "I'll sign this if N others from my company do", where N=5, 10 or whatever number you're comfortable with.
Finally, please let me provide an important clarification about the 2025 statement. Many have asked me why it doesn't define its terms as carefully as a law would require. Our idea is that detailed questions about how to word laws and safety standards should be tackled later, once the political will has formed to ban unsafe/unwanted superintelligence. This is analogous to how detailed wording of laws against child pornography (who counts as a child, what counts as pornography, etc.) got worked out by experts and legislators only after there was broad agreement that we needed some sort of ban on child pornography.