Am I right that this is about the version of the RAISE Act that passed the Senate, not the one that will be put into law?
Strictly speaking, the version that passed the Senate has now been signed into law.
But to answer your question: no, this post describes what the law will look like after the agreed-upon "chapter amendments" have been implemented.
Footnote 4 mentions one of the changes that is expected to be made.
We wrote this briefing for UK politicians, to help them quickly get their heads around the AI safety laws that already exist in the US and EU. We found it clarifying for ourselves too, and we hope it will be useful for others. These laws are too long and complex to distil without losing some nuance, but this is our best attempt.
The laws
Three jurisdictions have enacted laws addressing extreme dangers from AI:
For advanced general-purpose AI models (language models trained with large amounts of ‘compute’, like ChatGPT), all three laws have two basic requirements:
(details TBC)
- Increased risk of the above from the model deceiving the developer.
- Injury caused by the theft, escape, or loss of control of a model.
- Increased risk of the above caused by the theft, escape, or loss of control of a model.
SB 53 also includes whistleblower protections for employees of AI companies. The other two acts don't, although the EU already has strong general protections for whistleblowers.
The US laws stop there. The EU AI Act is broader, also governing non-frontier models. It sets:
To more easily demonstrate compliance with the EU AI Act, companies can follow the Code of Practice. The Code isn't part of the law itself, but the law explicitly recognises following it as a way to show compliance.
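To give a concrete sense of the ‘compute’ thresholds these laws rely on, here is a rough illustrative sketch (our own example, not part of any of the laws). It uses the widely cited 6ND rule of thumb to estimate training compute; the threshold figures (10^25 FLOP for the EU's systemic-risk presumption, 10^26 FLOP for SB 53 and the RAISE Act) are as we understand them, and the model in the example is hypothetical.

```python
# Back-of-the-envelope illustration (our own sketch, not from any of the laws).
# Training compute is often approximated as ~6 FLOP per parameter per training
# token ("6ND"). Threshold figures below are as we understand the three laws.

EU_SYSTEMIC_RISK_FLOP = 1e25   # EU AI Act presumption of systemic risk
SB53_FRONTIER_FLOP = 1e26      # SB 53 'frontier model' definition
RAISE_FRONTIER_FLOP = 1e26     # RAISE Act (which also applies a training-cost test)

def estimated_training_flop(n_parameters: float, n_tokens: float) -> float:
    """Estimate training compute with the common 6*N*D approximation."""
    return 6 * n_parameters * n_tokens

# Hypothetical model: 100 billion parameters trained on 10 trillion tokens.
flop = estimated_training_flop(1e11, 1e13)  # = 6e24 FLOP

print(f"Estimated training compute: {flop:.0e} FLOP")
print("Presumed EU 'systemic risk'?", flop > EU_SYSTEMIC_RISK_FLOP)  # False
print("SB 53 / RAISE 'frontier'?  ", flop > SB53_FRONTIER_FLOP)      # False
```

On these (assumed) figures, such a model would fall just under the EU's systemic-risk presumption and well under the US thresholds, which is why the choice of threshold matters so much in practice.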
Strengths and limitations
Effect on liability
The laws mostly don't affect liability for harms: beyond administrative penalties, judicial remedies remain largely unchanged, and none of the laws creates a private right of action. That said, some features of these laws may indirectly support civil and criminal recourse:
Separately from the AI Act, civil recourse prospects for AI-induced harms in the EU are strengthened by the recently updated Product Liability Directive, which recognises software as a ‘product’. However, the burden of proving causation puts remedies out of most claimants’ reach.
UK regulatory horizon
Recommendations
A basic approach to UK legislation would be to align with existing EU/US law:
Easy improvements on EU/US law:
Strictly speaking, the EU AI Act does not require a safety and testing framework. However, the Act does require developers to conduct model evaluation and risk assessment, and the Code of Practice suggests a Safety and Security Framework as the operationalisation of this.
EU AI Act Arts. 43, 61; SB 53 §11547.6(a); RAISE Act §1101(c).
As mentioned earlier, the EU AI Act actually regulates a wide range of models, but compute thresholds are used to designate models as posing a 'systemic risk'. It is these 'systemic risk' models that are subject to the safety protocol and reporting requirements described in the first section.
The RAISE Act, as passed by the legislature, enables the attorney general to block a model's release via an injunction if it poses an unreasonable risk of critical harm, but this provision is expected to be removed or narrowed by the agreed chapter amendments.
UK AI Safety Institute, 'Our First Year', 1 November 2024. https://www.aisi.gov.uk/blog/our-first-year.
Following a letter issued by PauseAI UK, TIME confirmed that DeepMind did not give pre-release access to Gemini 2.5 Pro, August 2025. https://time.com/7313320/google-deepmind-gemini-ai-safety-pledge/.
DSIT, 'A pro-innovation approach to AI regulation', CP 815, March 2023. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper.
Shaffer Shane, T., 'AI incident reporting: Addressing a gap in the UK's regulation of AI', CLTR, June 2024.
Ritchie, O., Anderljung, M. and Rachman, T., 'From Turing to Tomorrow: The UK's Approach to AI Regulation', Centre for the Governance of AI, July 2025. https://arxiv.org/abs/2507.03050.