Google cofounder Larry Page thinks superintelligent AI is “just the next step in evolution.” In fact, Page, who’s worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are “speciesist” and “sentimental nonsense.”

In July, former Google DeepMind senior scientist Richard Sutton — one of the pioneers of reinforcement learning, a major subfield of AI — said that the technology “could displace us from existence,” and that “we should not resist succession.” In a 2015 talk, Sutton said, suppose “everything fails” and AI “kill[s] us all”; he asked, “Is it so bad that humans are not the final form of intelligent life in the universe?”

This is how I begin the cover story for Jacobin’s winter issue on AI. Some very influential people openly welcome an AI-driven future, even if humans aren’t part of it.

Whether you're new to the topic or work in the field, I think you'll get something out of it.

I spent five months digging into the AI existential risk debates and the economic forces driving AI development. This was the most ambitious story of my career — it was informed by interviews and written conversations with three dozen people — and I’m thrilled to see it out in the world. Interviewees include:

  • Deep learning pioneer and Turing Award winner Yoshua Bengio
  • Pathbreaking AI ethics researchers Joy Buolamwini and Inioluwa Deborah Raji
  • Reinforcement learning pioneer Richard Sutton
  • Cofounder of the AI safety field Eliezer Yudkowsky
  • Renowned philosopher of mind David Chalmers
  • Santa Fe Institute complexity professor Melanie Mitchell
  • Researchers from leading AI labs

Some of the most powerful industrialists and companies are plowing enormous amounts of money and effort into increasing the capabilities and autonomy of AI systems, all while acknowledging that superhuman AI could literally wipe out humanity: 

Bizarrely, many of the people actively advancing AI capabilities think there’s a significant chance that doing so will ultimately cause the apocalypse. A 2022 survey of machine learning researchers found that nearly half of them thought there was at least a 10 percent chance advanced AI could lead to “human extinction or [a] similarly permanent and severe disempowerment” of humanity. Just months before he cofounded OpenAI, Sam Altman said, “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

This is a pretty crazy situation!

But not everyone agrees that AI could cause human extinction. Some think that the idea itself causes more harm than good:

Some fear not the “sci-fi” scenario where AI models get so capable they wrest control from our feeble grasp, but instead that we will entrust biased, brittle, and confabulating systems with too much responsibility, opening a more pedestrian Pandora’s box full of awful but familiar problems that scale with the algorithms causing them. This community of researchers and advocates — often labeled “AI ethics” — tends to focus on the immediate harms being wrought by AI, exploring solutions involving model accountability, algorithmic transparency, and machine learning fairness.


Others buy the idea of transformative AI, but think it’s going to be great:

A third camp worries that when it comes to AI, we’re not actually moving fast enough. Prominent capitalists like billionaire Marc Andreessen agree with safety folks that AGI is possible but argue that, rather than killing us all, it will usher in an indefinite golden age of radical abundance and borderline magical technologies. This group, largely coming from Silicon Valley and commonly referred to as AI boosters, tends to worry far more that regulatory overreaction to AI will smother a transformative, world-saving technology in its crib, dooming humanity to economic stagnation.

Billionaire venture capitalist Marc Andreessen (who blocked me long ago) writes that slowing down AI is akin to murder! He may be the most famous proponent of effective accelerationism (e/acc):

In June, Andreessen published an essay called “Why AI Will Save the World,” where he explains how AI will make “everything we care about better,” as long as we don’t regulate it to death. He followed it up in October with his “Techno-Optimist Manifesto,” which, in addition to praising a founder of Italian fascism, named as enemies of progress ideas like “existential risk,” “sustainability,” “trust and safety,” and “tech ethics.” Andreessen does not mince words, writing, “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing [are] a form of murder.”

While this debate plays out, the vast majority of the money spent on AI is going into making it more capable, autonomous, and profitable. A compliant artificial general intelligence (AGI) would be the worker capitalists dream of — no need for bathroom breaks, no risk of unionizing, and no wages — just the cost of the computation. 

But many AI researchers expect that building a true AGI (the goal of leading AI labs) will lead to an explosion in capabilities, ultimately resulting in systems far more powerful than humans:

The October “Managing AI Risks” paper states:

There is no fundamental reason why AI progress would slow or halt when it reaches human-level abilities. . . . Compared to humans, AI systems can act faster, absorb more knowledge, and communicate at a far higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions.

Even systems that remain at human-level would likely be wildly profitable to run.

Here’s a stylized version of the idea of “population” growth spurring an intelligence explosion: if AI systems rival human scientists at research and development, the systems will quickly proliferate, leading to the equivalent of an enormous number of new, highly productive workers entering the economy. Put another way, if GPT-7 can perform most of the tasks of a human worker and it only costs a few bucks to put the trained model to work on a day’s worth of tasks, each instance of the model would be wildly profitable, kicking off a positive feedback loop. This could lead to a virtual “population” of billions or more digital workers, each worth much more than the cost of the energy it takes to run them. OpenAI chief scientist Ilya Sutskever thinks it’s likely that “the entire surface of the earth will be covered with solar panels and data centers.”

(Where would we live? Unclear.)

As AI systems become more valuable, it will be harder to rein in their developers. Many have theorized about how a superintelligence could resist efforts to turn it off, but corporations are already plenty good at continuing to do risky things that we’d really rather they didn’t:

“Just unplug it,” goes the common objection. But once an AI model is powerful enough to threaten humanity, it will probably be the most valuable thing in existence. You might have an easier time “unplugging” the New York Stock Exchange or Amazon Web Services.

So why do some people think superintelligent AI would pose a threat to humanity?

The fear that keeps many x-risk people up at night is not that an advanced AI would “wake up,” “turn evil,” and decide to kill everyone out of malice, but rather that it would come to see us as an obstacle to whatever goals it does have. In his final book, Brief Answers to the Big Questions, Stephen Hawking articulated this, saying, “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants.”

By and large, the left isn’t seriously engaging with AI, and in doing so, we’re giving up a chance to shape a technology that could unmake (or remake) society:

After years of inaction, the world’s governments are finally turning their attention to AI. But by not seriously engaging with what future systems could do, socialists are ceding their seat at the table.

In no small part because of the types of people who became attracted to AI, many of the earliest serious adopters of the x-risk idea decided either to engage in extremely theoretical research on how to control advanced AI or to start AI companies. But for a different type of person, the response to believing that AI could end the world is to try to get people to stop building it.

We may be entering a critical period akin to the drafting of the constitution for a new country with the potential to be more powerful than any that came before. Right now, that constitution is being drafted by unelected techno-capitalists:

Governments are complex systems that wield enormous power. The foundation upon which they’re established can influence the lives of millions now and in the future. Americans live under the yoke of dead men who were so afraid of the public that they built antidemocratic measures that continue to plague our political system more than two centuries later.

It’s ironic given how similar the problem is to another thing that leftists tend to think A LOT about! When my lefty friends point out that capitalism is the real misaligned superintelligence, it’s not exactly reassuring:

We may not need to wait to find superintelligent systems that don’t prioritize humanity. Superhuman agents ruthlessly optimize for a reward at the expense of anything else we might care about. The more capable the agent and the more ruthless the optimizer, the more extreme the results.

I found that the vitriolic debate between the people worried about extinction and those worried about AI’s existing harms hides the more meaningful divide — between those trying to make AI more profitable and those trying to make it more human.

There’s so much more in the final piece, so please do check it out and consider subscribing to Jacobin to support this kind of writing. 

If you'd like to stay up to date with my work, subscribe to my Substack.


[Marc Andreessen] followed it up in October with his “Techno-Optimist Manifesto,” which, in addition to praising a founder of Italian fascism

For those who were as curious as me, the person in question is Filippo Tommaso Marinetti, who Wikipedia says founded the Italian Futurist movement and also was a co-author of the Fascist Manifesto in 1919.

He seems to have had a strange relationship to Fascism as it became more prominent - my shallow read is that he supported it but was more focussed on the "national revival" part than racial hatred. Quotes from the relevant section of his Wikipedia page:

Marinetti was one of the first affiliates of the Italian Fascist Party. In 1919 he co-wrote with Alceste De Ambris the Fascist Manifesto, the original manifesto of Italian Fascism. He opposed Fascism's later exaltation of existing institutions, terming them "reactionary," and, after walking out of the 1920 Fascist party congress in disgust, withdrew from politics for three years. However, he remained a notable force in developing the party philosophy throughout the regime's existence...

As part of his campaign to overturn tradition, Marinetti also attacked traditional Italian food. His Manifesto of Futurist Cooking was published in the Turin Gazzetta del Popolo on 28 December 1930. Arguing that "People think, dress[,] and act in accordance with what they drink and eat", Marinetti proposed wide-ranging changes to diet. He condemned pasta, blaming it for lassitude, pessimism, and lack of virility, and promoted the eating of Italian-grown rice. In this, as in other ways, his proposed Futurist cooking was nationalistic, rejecting foreign foods and food names. It was also militaristic, seeking to stimulate men to be fighters...

On 17 November 1938, Italy passed The Racial Laws, discriminating against Italian Jews, much like the discrimination pronounced in the Nuremberg Laws. The antisemitic trend in Italy resulted in attacks against modern art, judged too foreign, too radical and anti-nationalist. In the 11 January 1939 issue of the Futurist journal, Artecrazia, Marinetti expressed his condemnation of such attacks on modern art, noting Futurism is both Italian and nationalist, not foreign, and stating that there were no Jews in Futurism. Furthermore, he claimed Jews were not active in the development of modern art. Regardless, the Italian state shut down Artecrazia.


An addendum to this: Marinetti embraced an early form of Italian fascism ('sansepolcrino') that by 1925 had in fact been disavowed by fascist leaders, and that was pretty much antithetical to fascism as most people understand it.
Here is the 1919 Fascist Manifesto as summarized on Wikipedia; I am familiar with the original document, and this summary is accurate. Not to be confused with the 1925 Manifesto of the Fascist Intellectuals.

Politically, the Manifesto calls for:

  • Universal suffrage with a lowered voting age to 18 years, and voting and electoral office eligibility for all ages 25 and up;
  • Proportional representation on a regional basis;
  • Voting for women;
  • Representation at government level of newly created national councils by economic sector;
  • The abolition of the Italian Senate (at the time, the Senate, as the upper house of parliament, was nominally elected by the wealthier citizens, but its members were in reality direct appointments by the king. It has been described as a sort of extended council of the crown);
  • The formation of a national council of experts for labor, for industry, for transportation, for the public health, for communications, etc. Selections to be made of professionals or of tradesmen with legislative powers, and elected directly to a general commission with ministerial powers.

In labor and social policy, the Manifesto calls for:

  • The quick enactment of a law of the state that sanctions an eight-hour workday for all workers;
  • A minimum wage;
  • The participation of workers' representatives in the functions of industry commissions;
  • To show the same confidence in the labor unions (that prove to be technically and morally worthy) as is given to industry executives or public servants;
  • Reorganization of the railways and the public transport sector;
  • Revision of the draft law on invalidity insurance;
  • Reduction of the retirement age from 65 to 55.

In military affairs, the Manifesto advocates:

  • Creation of a short-service national militia with specifically defensive responsibilities;
  • Armaments factories are to be nationalized;
  • A peaceful but competitive foreign policy.

In finance, the Manifesto advocates:

  • A strong extraordinary tax on capital of a progressive nature, which takes the form of true partial expropriation of all wealth;
  • The seizure of all the possessions of the religious congregations and the abolition of all the bishoprics, which constitute an enormous liability on the Nation and on the privileges of the poor;
  • Revision of all contracts for military provisions;
  • The revision of all military contracts and the seizure of 85 percent of the profits therein.

Thank you for writing this. This is by far the strongest argument for taking this problem seriously tailored to leftists I've seen, and I'll be sharing it. Hopefully the frequent (probably unavoidable) references to EA don't turn them off too much.

Thank you so much! I haven't gotten any serious negative feedback from lefties about the EA stuff so far, though an e/acc on Twitter mentioned it haha

Great article Garrison!

I found that the vitriolic debate between the people worried about extinction and those worried about AI’s existing harms hides the more meaningful divide — between those trying to make AI more profitable and those trying to make it more human.