Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Gabriel Weil (Assistant Professor of Law, Touro University Law Center) wrote this post series on the role of liability law in reducing existential risk from AI. I think it may well be of interest to some people here, so I wanted a linkpost to exist.

The first post argues that Tort Law Can Play an Important Role in Mitigating AI Risk.

The second post addresses directly How Technical AI Safety Researchers Can Help Implement Punitive Damages to Mitigate Catastrophic AI Risk.

Here is the full paper.

TL;DR (from the first post)

Legal liability could substantially mitigate AI risk, but current law falls short in two key ways: (1) it requires provable negligence, and (2) it greatly limits the availability of punitive damages. Applying strict liability (a form of liability that does not require provable negligence) and expanding the availability and flexibility of punitive damages is feasible, but will require action by courts or legislatures. Legislatures should also consider acting in advance to create a clear ex ante expectation of liability and imposing liability insurance requirements for the training and deployment of advanced AI systems. The following post is a summary of a law review article. Here is the full draft paper. Dylan Matthews also did an excellent write-up of the core proposal for Vox’s Future Perfect vertical. 


One of those ideas that's so obviously good it's rarely discussed?