(Genuine enquiry) For anyone who's upvoted or read this post and thought "this seems useful" but didn't sign up: what stopped you from registering?
I'm trying to get a better idea of where I may not be communicating the purpose and value of the event well enough 🙏🏼
Our participants will receive feedback on their work from four exceptional experts bridging AI safety research, legal practice, and governance:
Charbel-Raphaël Segerie - Executive Director of the French Center for AI Safety (Centre pour la Sécurité de l'IA - CeSIA), OECD AI expert, and a driving force behind the AI Red Lines initiative. His technical research spans RLHF theory, interpretability, and safe-by-design approaches. He has supervised multiple research groups across ML4Good bootcamps, ARENA, and AI safety hackathons, bridging cutting-edge technical AI safety research with practical risk evaluation and governance frameworks.
Chiara Gallese, Ph.D. - Researcher at the Tilburg Institute for Law, Technology, and Society (TILT) and an active member of four EU AI Office working groups. Dr. Gallese has co-authored papers with computer scientists on ML fairness and trustworthy AI, conducted testbed experiments addressing bias with NXP Semiconductors, and has managed a portfolio of approximately 200 high-profile cases, many valued in the millions of euros.
Yelena Ambartsumian - Founder of AMBART LAW PLLC, a New York City law firm focused on AI governance, data privacy, and intellectual property. Her firm specializes in evaluating AI vendor agreements and helping companies navigate downstream liability risks. Yelena has published in the Harvard International Law Journal on AI and copyright issues, and is a co-chair of IAPP's New York KnowledgeNet chapter. She is a graduate of Fordham University School of Law with executive education from Harvard and MIT.
James Kavanagh - Founder and CEO of AI Career Pro, where he trains professionals in AI governance and safety engineering. Previously, he led AWS's Responsible AI Assurance function and was the Head of Microsoft Azure Government Cloud Engineering for the defense and national security sectors. At AWS, his team was the first of any global cloud provider to achieve ISO 42001 certification.
These advisors will review the legal strategies and technical risk assessments our teams produce, providing feedback on practical applicability to AI policy, litigation, and engineering decisions.
As you can see, these experts represent exactly the key areas of change that we are tackling with the AI Safety Law-a-thon.
Can't wait to see the results of this legal hackathon. See you there!
Closing our Advisory panel with one last amazing addition!
Co-lead of the AI Standards Lab and Research Affiliate with the Oxford Martin AI Governance Initiative. He has contributed to the EU GPAI Code of Practice and analysed various regulatory and governance frameworks. His research currently focuses on AI risk management. Previously, he spent over a decade in the oil and gas industry.
Please don't take it personally, but I have two concerns:
(Background: law. Not a LessWrong user).
Hi! Thank you for your comment.
I am an experienced industry professional, and most of the legal participants come directly from my network or found out about the event via the CAIDP channel / Women in AI Governance / Kevin Fumai's update.
This is the first time I have cooperated with AI Plans, but Kabir has successfully run hackathons in the past, more focused on AI evaluations. In fact, AI Plans' December 2023 hackathon had quite reputable judges, such as Nate Soares, Ramana Kumar, and Charbel-Raphaël Segerie.
We provide preparatory materials to confirmed participants.
Best,
Katalina.
[LessWrong Community event announcement: https://www.lesswrong.com/events/rRLPycsLdjFpZ4cKe/ai-safety-law-a-thon-we-need-more-technical-ai-safety]
Many talented lawyers do not contribute to AI Safety simply because they've never had a chance to work with AIS researchers or don’t know what the field entails.
I am hopeful that this can improve if we create more structured opportunities for cooperation. And this is the main motivation behind the upcoming AI Safety Law-a-thon, organised by AI-Plans:
A hackathon where every team pairs one lawyer with one technical AI safety researcher. Each pair will tackle challenges drawn from real legal bottlenecks and overlooked AI safety risks.
From my time in the tech industry, my suspicion is that if more senior counsel actually understood alignment risks, frontier AI deals would face far more scrutiny. Right now, most law firms would focus on the more "obvious" contractual considerations, such as IP rights or privacy clauses, when advising their clients, not on whether model alignment drift could blow up the contract six months after signing.
We launched the event two days ago and we already have an impressive lineup of senior counsel from top firms and regulators.
So far, over 45 lawyers have signed up. I thought we would attract mostly law students... and I was completely wrong. Here is a bullet-point list of the types of profiles you'll come across if you join us:
This presents a rare opportunity for the AI Safety community to influence high-level decision-makers in the legal sphere.
We are still missing at least 40 technical AI Safety researchers and engineers to take part in the hackathon.
If you join, you'll help stress-test the legal scenarios and point out the alignment risks that are not salient to your counterpart (they’ll be obvious to you, but not to them).
At the Law-a-thon, your challenge is to help lawyers build a risk assessment for a counter-suit against one of the big labs.
You’ll show how harms like bias, goal misgeneralisation, rare-event failures, test-awareness, or RAG drift originate upstream in the foundation model rather than in downstream integration. The task is to translate alignment insights into plain-language evidence lawyers can use in court: pinpointing risks that SaaS providers couldn’t reasonably detect, and identifying the disclosures (red-team logs, bias audits, system cards) that lawyers should learn how to interrogate and require from labs.
Of course, you’ll also get the chance to put your own questions to experienced attorneys, and plenty of time to network with others!
📅 25–26 October 2025, 10am to 6pm
🌍 Hybrid: You can join either online or in person.
📍Venue: London Initiative for Safe AI (LISA): 25 Holywell Row, London EC2A 4XE, United Kingdom
💰 Free for technical AI Safety participants: use the option "Apply for a subsidized ticket" and link your Google Scholar or research page. If you choose to come in person, you'll have the option to contribute an amount (from 5 to 40 GBP), but this is not mandatory.
If you’re up for it, sign up here: https://luma.com/8hv5n7t0
Feel free to DM me if you want to raise any queries!