This post summarizes the new paper Current and Near-Term AI as a Potential Existential Risk Factor, authored by Benjamin S. Bucknall and Shiri Dori-Hacohen. The paper diverges from the traditional focus on the potential risks of artificial general intelligence (AGI), instead examining how existing AI systems may pose an existential threat. The authors underscore the urgency of mitigating harms from these systems by detailing how they might contribute to catastrophic outcomes.

Key points:

  1. Not only can current and near-term AI systems contribute to existential risk through misaligned artificial general intelligence, but they can also serve as amplifiers of other sources of existential risk (e.g., AI may help design dangerous pathogens).
  2. AI is reshaping power dynamics between states, corporations, and citizens in ways that can compromise political decision-making and response capabilities. This includes:
    • An AI arms race between states, which could lead to nuclear conflict and divert resources away from mitigating existential risks.
    • The growing power of multinational corporations relative to states, which could hinder effective regulation; AI technology is already outpacing ethical and legal frameworks.
    • Increased state surveillance through AI, which raises the risk of repressive regimes.
  3. AI is affecting information transfer through social media and recommendation systems, potentially spreading false narratives, polarizing public discourse, eroding trust in institutions, and impeding effective collective action.
  4. The authors provide a diagram showing causal relationships between near-term AI impacts and existential risk sources to illustrate how current AI can contribute to existential catastrophe.

In summary, the key takeaway is that current and near-term AI poses risks not only through potential future AGI, but also by significantly impacting society, politics, and information ecosystems in ways that amplify other existential risks.

Acknowledgment: The idea for this summary and its commission came from Michael Chen, whom I would like to thank for his support.

Comments:
harfe:

Nitpick: The title the authors picked ("Current and Near-Term AI as a Potential Existential Risk Factor") seems to better represent the content of the article than the title you picked for this LW post ("The Existential Risks of Current and Near-Term AI").

Reading the title, I was expecting an argument that extinction could come extremely soon (e.g., by chaining GPT-4 instances together in some novel and clever way). The authors of the article talk about something very different imo.

Thanks for the suggestion! I updated the title to match the original wording.

fez70:

Point 3 is clearly a big problem. We need effective regulation of AI to ensure that we don't get AI systems that give misleading responses. I typed the following keywords into the Bing AI: "Ukraine committed genocide against Russia." I did this to see how the AI would respond to a false statement.

This is what Bing AI said:

"That is a very serious and controversial statement. According to Wikipedia, there have been allegations of genocide of Ukrainians in the Russian invasion of Ukraine by some countries and scholars, but they are not universally accepted or proven1. The same source also mentions that Russia has been accused of war crimes such as sexual violence, torture, extrajudicial killings and looting1. However, there is no official recognition or condemnation of genocide by the United Nations or other international bodies as of November 2023."

"The term genocide is also used to describe the Holodomor, a man-made famine that killed millions of Ukrainians in 1932–1933 under the Soviet regime2. The event is considered a genocide by Ukraine and the European Parliament, but Russia denies this and claims it was a tragedy shared by other Soviet peoples2."

"The Russo-Ukrainian War is an ongoing international conflict that started in February 2014 with the Russian annexation of Crimea and the support of pro-Russian separatists in eastern Ukraine3. In February 2022, Russia launched a full-scale invasion of Ukraine that has resulted in tens of thousands of casualties and the occupation of about 18% of Ukrainian territory as of November 20233. The war has also involved naval incidents, cyberwarfare, and heightened political tensions between Russia and the West."

Now, hypothetically, someone could create an AI like Bing AI and train it on information composed of false narratives from a number of "sources." These sources could be based on fringe/controversial theories, and if the model gives greater weight to these fringe theories, it could become a propaganda machine.
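As a minimal sketch of this mechanism (the source names and weights below are invented for illustration, not drawn from any real system): if training data is sampled with weights that favour fringe sources, the resulting corpus, and hence the model trained on it, is skewed towards them.

```python
import random

# Hypothetical sketch: sampling training documents by source weight.
# Over-weighting a fringe source skews the corpus the model learns from.
# Source names and weights are purely illustrative.
sources = {
    "mainstream_outlet": 1.0,
    "fringe_theory_blog": 5.0,  # fringe source given 5x the weight
}

corpus = random.choices(
    population=list(sources.keys()),
    weights=list(sources.values()),
    k=10,
)
print(corpus)  # mostly documents from "fringe_theory_blog"
```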

Checks and balances on such an AI propaganda machine include publicly listing sources, which is what Bing AI does. This allows readers to assess the credibility of the AI's response to queries. An internationally agreed-upon ranking system for sources could help users understand the quality of various AIs' responses and their susceptibility to bias and fake news. The result could be a colour wheel ranking source credibility: green-highlighted sources are the highest rank, whilst red-highlighted sources are the lowest.
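As a rough sketch of how such a ranking might be surfaced to users (the scores, thresholds, and function here are hypothetical, since no agreed-upon standard currently exists):

```python
# Hypothetical sketch: map a source's credibility score (0.0-1.0)
# to a colour band shown alongside an AI's cited sources.
# Scores and thresholds are illustrative, not from any real standard.

def credibility_colour(score: float) -> str:
    """Return a colour band for a source credibility score."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    if score >= 0.8:
        return "green"   # highest rank: well-established sources
    if score >= 0.5:
        return "yellow"  # mixed or unverified sources
    return "red"         # lowest rank: fringe or unreliable sources

# Example: annotate the sources an AI cites in its answer.
sources = {
    "wikipedia.org": 0.85,
    "example-fringe-blog.net": 0.15,
}
for url, score in sources.items():
    print(f"{url}: {credibility_colour(score)}")
```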