AISN #64: New AGI Definition and Senate Bill Would Establish Liability for AI Harms

by Corin Katzke, Dan H
16th Oct 2025
Linkpost from aisafety.substack.com
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: A new bill in the Senate would hold AI companies liable for harms their products create; China tightens its export controls on rare earth metals; a definition of AGI.

As a reminder, we’re hiring a writer for the newsletter.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.


Senate Bill Would Establish Liability for AI Harms

Sens. Dick Durbin (D-Ill.) and Josh Hawley (R-Mo.) introduced the AI LEAD Act, which would establish a federal cause of action allowing people harmed by AI systems to sue AI companies.

Corporations are usually liable for harms their products create. When a company sells a product in the United States that harms someone, that person can generally sue that company for damages under the doctrine of product liability. Those suits force companies to internalize the harms their products create—and incentivize them to make their products safer.

Courts haven’t settled on whether AI systems are products. Early cases indicate that US courts are open to treating AI systems as products for the purposes of product liability. In a case against CharacterAI, a federal judge ruled that the company’s system did count as a product. OpenAI is facing a similar suit brought in California state court. Nonetheless, the lack of legal certainty might deter potential plaintiffs from bringing suits.

The AI LEAD Act would apply product liability to AI systems. The bill would clarify that AI systems are subject to product liability and establish a path for claims to be brought in federal court. In general, it would hold AI companies liable for harms caused by their AI systems if the company:

  • Failed to exercise reasonable care in designing the AI system,
  • Failed to exercise reasonable care in providing instructions or warnings for the AI system,
  • Breached a warranty it provided for the AI system, or
  • Sold or distributed the AI system in a defective condition that permitted unreasonably dangerous misuse.

The deployers of an AI system are also liable for harm if they substantially modify or dangerously misuse the system.

The act also prohibits AI companies from limiting their liability through contracts with consumers, requires that foreign AI developers register agents for service of process with the US before placing their products on the US market, and permits states to establish stronger safety legislation if they so choose.

China Tightens Export Controls on Rare Earth Metals

China’s Ministry of Commerce announced new export controls on rare earth metals, set to take effect December 1. If aggressively enforced, the rules would give China control over a key part of the global AI and defense supply chains. The ministry also unveiled curbs on the export of equipment used to manufacture electric vehicle batteries, effective November 8.

China dominates global production of rare earths. China has a virtual monopoly on the production of rare earth metals, which are vital to semiconductors, smartphones, AI systems, wind turbines, electric motors, and military hardware. According to the new rules, companies exporting products containing Chinese rare earths are required to obtain export licenses from China’s Ministry of Commerce. Exporting Chinese rare earths for military use is prohibited, and use in developing sub-14 nanometer chips will be reviewed on a case-by-case basis.

A Chinese rare earth mine.

If aggressively enforced, the new rules would likely disrupt AI supply chains. Rare earth metals are critical to companies producing AI hardware, and their restriction would cause downstream impacts to AI developers. Some analysts predicted they could even trigger a wider economic downturn. “If enforced aggressively,” wrote Dean Ball on X, “this policy could mean ‘lights out’ for the US AI boom, and likely lead to a recession/economic crisis in the US in the short term.”

China may be using its monopoly as leverage to extract US concessions. China claims that the purpose of the controls is only to prevent its rare earth metals from being used in military applications—samarium, for example, is used by the U.S. to manufacture F-35 fighter jets and missile systems.

However, the rules would give China effective control over the supply chains of several critical industries, including AI. The US is unlikely to accept that strategic vulnerability. US President Donald Trump responded to the new controls by announcing a 100 percent additional tariff on Chinese goods—on top of the existing 30 percent tariffs—as well as export controls on critical software, both going into effect November 1.

China may walk back its controls to deescalate an economic confrontation with the US, or in exchange for reduced tariffs or greater access to frontier AI chips. In the long run, the US would be well-advised to build independent rare earth metal production capacity.

A Definition of AGI

A large group of people in AI—including Dan Hendrycks, Yoshua Bengio, Dawn Song, Max Tegmark, Eric Schmidt, Jaan Tallinn, Gary Marcus, and others—released a paper introducing a quantifiable framework for defining Artificial General Intelligence (AGI), aiming to standardize the term and measure the gap between current AI and human-level cognition.

AGI definitions are often nebulous. The paper argues that the term AGI currently acts as a “constantly moving goalpost.” As specialized AI systems master tasks previously thought to require human intellect, the criteria for AGI shift. This ambiguity hinders productive discussions about progress and obscures the actual distance to human-level intelligence.

The framework is grounded in theory. The authors define AGI as “an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult.” To operationalize this, they ground their methodology in the Cattell-Horn-Carroll (CHC) theory, the most empirically validated model of human intelligence. The framework adapts established human psychometric tests to evaluate AI systems across ten core cognitive domains, resulting in a standardized “AGI Score” (0-100%).

Current models exhibit a “jagged” cognitive profile. Application of the framework reveals highly uneven capabilities. While models are proficient in knowledge-intensive domains (such as Math or Reading/Writing), they possess critical deficits in foundational cognitive machinery.

Long-term memory storage is the critical bottleneck. The most significant deficit identified is Long-Term Memory Storage, where current models score near 0%. This results in a form of “amnesia,” forcing the AI to re-learn context in every interaction. The paper notes that the reliance on massive context windows (Working Memory) is a “capability contortion” used to compensate for this lack of persistent memory.

The framework quantifies the gap to AGI. The resulting scores are intended to concretely quantify both rapid progress and the substantial gap remaining before AGI. The paper estimates GPT-4 at a 27% AGI score and GPT-5 (2025) at 58%.
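To make the scoring idea concrete, here is a minimal sketch of how a composite AGI score could be computed from per-domain proficiency scores. The domain labels, the equal weighting across ten domains, and the example numbers are illustrative assumptions for this sketch, not the paper's actual test battery, weights, or results.

```python
# Hypothetical sketch: composite "AGI score" as an average of per-domain scores.
# The domain labels, equal weighting, and example numbers below are assumptions
# for illustration only; they are not taken from the paper.

DOMAINS = [
    "Knowledge", "Reading/Writing", "Math", "Reasoning", "Working Memory",
    "Long-Term Memory Storage", "Long-Term Memory Retrieval",
    "Visual Processing", "Auditory Processing", "Speed",
]

def agi_score(domain_scores: dict[str, float]) -> float:
    """Average per-domain scores (each 0-100) into a single 0-100 composite."""
    missing = set(DOMAINS) - set(domain_scores)
    if missing:
        raise ValueError(f"Missing domain scores: {sorted(missing)}")
    return sum(domain_scores[d] for d in DOMAINS) / len(DOMAINS)

# An illustrative "jagged" profile: strong on knowledge-heavy domains,
# near zero on long-term memory storage (all numbers are made up).
example_profile = {
    "Knowledge": 60, "Reading/Writing": 70, "Math": 65, "Reasoning": 50,
    "Working Memory": 40, "Long-Term Memory Storage": 0,
    "Long-Term Memory Retrieval": 15, "Visual Processing": 20,
    "Auditory Processing": 25, "Speed": 30,
}

print(f"Composite score: {agi_score(example_profile):.1f}%")  # 37.5% under these assumptions
```

Under this equal-weighting assumption, a model that excels on knowledge-heavy domains but scores near zero on long-term memory storage still lands well short of 100%, mirroring the "jagged" profile described above.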

The paper can be accessed at agidefinition.ai.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions.

In Other News

Government

  • Governor Newsom signed SB-53 into law (Politico).
  • CAISI published an evaluation of DeepSeek’s AI models.
  • The Select Committee on the CCP found that companies in the US and allied countries are selling semiconductor manufacturing equipment to China.

Industry

  • OpenAI released Sora 2, its latest video-generation model, along with a TikTok-style app.
  • Microsoft and Anthropic hired former UK Prime Minister Rishi Sunak into advisory roles.
  • Anthropic open-sourced Petri, a tool for automating AI behavior audits through multi-turn simulations.

Civil Society

  • Karson Elmgren, Scott Singer, and Oliver Guest discuss how China’s new AI safety body brings together leading experts—but faces obstacles to turning ambition into influence.
  • OpenAI subpoenaed the general counsel of Encode, a nonprofit that worked on SB 53.
  • Researchers discovered an exploit of Unitree’s humanoid robots that lets attackers take control, embed themselves, and spread to nearby devices.
  • The Budget Lab at Yale published a report evaluating AI’s effects on the labor market.
  • FLI announced the Keep The Future Human Creative Contest, which offers $100,000+ in cash prizes for digital media that raises awareness of AI existential risks.

See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.