AI Safety Law-a-thon: We need more technical AI Safety researchers to join!

by Katalina Hernandez, Kabir Kumar

Saturday 25th October at 8:30 am to Sunday 26th October at 7:30 pm GMT
https://luma.com/8hv5n7t0
katalina.hrdez@gmail.com

Posted on: 10th Sep 2025

Many talented lawyers do not contribute to AI Safety, simply because they've never had a chance to work with AIS researchers or don’t know what the field entails.

I am hopeful that this can improve if we create more structured opportunities for cooperation. And this is the main motivation behind the upcoming AI Safety Law-a-thon, organised by AI-Plans:

A hackathon where every team pairs one lawyer with one technical AI safety researcher. Each pair will tackle challenges drawn from real legal bottlenecks and overlooked AI safety risks.

From my time in the tech industry, my suspicion is that if more senior counsel actually understood alignment risks, frontier AI deals would face far more scrutiny. Right now, most law firms would focus on the more "obvious" contractual considerations, IP rights or privacy clauses when advising their clients, not on whether model alignment drift could blow up the contract six months after signing.

Who's coming?

We launched the event two days ago, and we already have an impressive lineup of senior counsel from top firms and regulators.

So far, over 45 lawyers have signed up. I thought we would attract mostly law students... and I was completely wrong. Here is a bullet-point list of the type of profiles you'll come across if you join us:

  • Partner at a key global multinational law firm that provides IP and asset management strategy to leading investment banks and tech corporations.
  • Founder and editor of Legal Journals at Ivy law schools.
  • Chief AI Governance Officer at one of the largest professional service firms in the world.
  • Lead Counsel and Group Privacy Officer at a well-known airline.
  • Senior Consultant at a Big 4 firm.
  • Lead contributor at a prominent European standards body.
  • Caseworker at an EU/UK regulatory body.
  • Compliance officers and Trainee Solicitors at top UK and US law firms.

This presents a rare opportunity for the AI Safety community to influence high-level decision-makers in the legal sphere.

The technical AI Safety challenge: What to expect if you join

We are still missing at least 40 technical AI Safety researchers and engineers to take part in the hackathon.

If you join, you'll help stress-test the legal scenarios and point out the alignment risks that are not salient to your counterpart (they’ll be obvious to you, but not to them).

At the Law-a-thon, your challenge is to help lawyers build a risk assessment for a counter-suit against one of the big labs. 

You’ll show how harms like bias, goal misgeneralisation, rare-event failures, test-awareness, or RAG drift originate upstream in the foundation model rather than in downstream integration. The task is to translate alignment insights into plain-language evidence that lawyers can use in court: pinpointing risks that SaaS providers couldn’t reasonably detect, and identifying the disclosures (red-team logs, bias audits, system cards) that lawyers should learn to interrogate and require from labs.

Of course, you’ll also get the chance to put your own questions to experienced attorneys, and plenty of time to network with others!

Logistics

📅 25–26 October 2025
🌍 Hybrid: online + in person (onsite venue in London, details TBC).
💰 Free for technical AI Safety participants. If you choose to come in person, you'll have the option to contribute between 5 and 40 GBP, but this is not mandatory.

If you’re up for it, sign up here: https://luma.com/8hv5n7t0 

Feel free to DM me if you want to raise any queries!
