Katalina Hernandez

Lawyer by education, researcher by vocation.

Stress-Testing Reality Limited | Katalina Hernández | Substack

→Ask me about: Advice on how to make technical research legible to lawyers and regulators, frameworks for AI liability (EU or UK Law), general compliance questions (GDPR, EU AI Act, DSA/DMA, Product Liability Directive).

→Book a free slot: https://www.aisafety.com/advisors 

I produce independent legal research for AI Safety and AI Governance projects. I work to inform enforceable legal mechanisms with alignment, interpretability, and control research, and to avoid technical safety being brought into the conversation too late.

How I work: I read frontier safety papers and reproduce core claims; map them to concrete obligations (EU AI Act, PLD, NIST/ISO); and propose implementation plans.

Current projects

  • Law-Following AI (LFAI): released a preprint (in prep for submission to the Cambridge Journal for Computational Legal Studies) on whether legal standards can serve as alignment anchors and how law-alignment relates to value alignment, building on the original framework proposed by Cullen O'Keefe and the Institute of Law and AI.
  • Regulating downstream modifiers: writing “Regulating Downstream Modifiers in the EU: Federated Compliance and the Causality–Liability Gap” for IASEAI, stress-testing Hacker & Holweg’s proposal against causation/liability and frontier-risk realities.
  • Open problems in regulatory AI governance: co-developing with ENAIS members a tractable list where AI Safety work can close governance gaps (deceptive alignment, oversight loss, evaluations).
  • AI-safety literacy for tech lawyers: building a syllabus used by serious institutions; focuses on translating alignment/interpretability/control into audits, documentation, and enforcement-ready duties.

Wikitag Contributions

  • Law and Legal systems (a month ago, +716)

Comments (sorted by newest)

Musings from a Lawyer turned AI Safety researcher (ShortForm)
Katalina Hernandez · 1mo

Many talented lawyers do not contribute to AI Safety, simply because they've never had a chance to work with AIS researchers or don’t know what the field entails.

I am hopeful that this can improve if we create more structured opportunities for cooperation. And this is the main motivation behind the upcoming AI Safety Law-a-thon, organised by AI-Plans:[1] 

A hackathon where every team pairs one lawyer with one technical AI safety researcher. Each pair will tackle challenges drawn from real legal bottlenecks and overlooked AI safety risks.

From my time in the tech industry, my suspicion is that if more senior counsel actually understood alignment risks, frontier AI deals would face far more scrutiny. Right now, most law firms would focus on IP rights or privacy clauses when advising their clients, not on whether model alignment drift could blow up the contract six months after signing.

We launched the event one day ago, and we already have an impressive lineup of senior counsel from top firms and regulators. What we still need are technical AI safety people to pair with them!

If you join, you'll help stress-test the legal scenarios and point out the alignment risks that are not salient to your counterpart (they’ll be obvious to you, but not to them).

 You’ll also get the chance to put your own questions to experienced attorneys.

📅 25–26 October 
🌍 Hybrid: online + in-person (London)

If you’re up for it, sign up here: https://luma.com/8hv5n7t0 

Feel free to DM me if you want to raise any queries!

 

  1. ^

    NOTE: I really want to improve how I communicate updates like these. If this sounds too salesy or overly persuasive, it would really help me if you comment and suggest how to improve the wording. 

    I find this more effective than just downvoting, but of course, do so if you want. Thank you in advance!

Musings from a Lawyer turned AI Safety researcher (ShortForm)
Katalina Hernandez · 13h

Thank you! You've managed to explain exactly what I thought when I saw this link. And re the LinkedIn comment: I'm actually surprised that people are surprised. I know people who post very high-quality articles there, but mostly it's become slop land. The pattern I'm noticing: LinkedIn writers who value quality are slowly transitioning to Substack, and the readers in their audiences who want to think are moving with them.

Musings from a Lawyer turned AI Safety researcher (ShortForm)
Katalina Hernandez · 13h

It's not a conference; it's an online course, and one of the most popular among privacy professionals (the most popular being those offered by the IAPP, the International Association of Privacy Professionals). She's legitimately that well regarded. I'd love to know where the disconnect is for you.

 

Ironically, though, I was talking to my boyfriend about how people in law or compliance would have the same reaction ("What? This guy is important?") if I said so about Zvi and just linked his Substack XD. I guess different impressions in different communities.

Musings from a Lawyer turned AI Safety researcher (ShortForm)
Katalina Hernandez · 1d

This is what she says:

Why read:

Although, in general, I disagree with catastrophic framings of AI risk (which have been exploited by AI CEOs to increase interest in their products, as I recently wrote in my newsletter), the AI safety debate is an important one, and it concerns all of us.

There are differing opinions on the current path of AI development and its possible futures. There are also various gray zones and unanswered questions on possible ways to mitigate risk and avoid harm.

Yudkowsky has been researching AI alignment for over 20 years, and together with Soares, he has built a strong argument for why AI safety concerns are urgent and why action is needed now. Whether you agree with their tone or not, their book is worth reading.

Musings from a Lawyer turned AI Safety researcher (ShortForm)
Katalina Hernandez · 1d

Perhaps this is a better reference point: https://academy.aitechprivacy.com/ai-governance-training

Her academy is very, very popular among tech DPOs (data protection officers) and lawyers. I am not saying she isn't a typical LinkedIn influencer.

But her posting about something, in the data protection and corporate AI Governance world[1] (in terms of influence), is akin to us seeing Zvi Mowshowitz post about something. Does this make sense?

  1. ^

    DPOs are mostly also doing AI Governance in the tech and tech adjacent industry now. I should do a post about this because it may be part of the problem soon.

Musings from a Lawyer turned AI Safety researcher (ShortForm)
Katalina Hernandez · 1d

Luiza Jarovsky just endorsed IABIED. This is actually significant.

Luiza Jarovsky is one of the most influential people in the corporate AI Governance space right now: her newsletter has 80,000+ subscribers (mostly lawyers in the data and tech space), and she has trained 1,300+ professionals from Google, Amazon, Microsoft, Meta, Apple, and the like.

Her audience is basically compliance lawyers who think AI safety means "don't be racist," not "don't kill everyone." For her to recommend IABIED to that network is a non-trivial update on Overton window movement. These people literally sit in deployment decision meetings at Fortune 500s.

The corporate governance crowd is normally immune to longtermist arguments, but IABIED is cracking that.

AI Safety Law-a-thon: Turning Alignment Risks into Legal Strategy
Katalina Hernandez · 3d

Thrilled to announce the advisory panel for the AI Safety Law-a-thon (Oct 25-26)!

Our participants will receive feedback on their work from four exceptional experts bridging AI safety research, legal practice, and governance:

Charbel-Raphaël Segerie - Executive Director of the French Center for AI Safety (Centre pour la Sécurité de l'IA - CeSIA), OECD AI expert, and a driving force behind the AI Red Lines initiative. His technical research spans RLHF theory, interpretability, and safe-by-design approaches. He has supervised multiple research groups across ML4Good bootcamps, ARENA, and AI safety hackathons, bridging cutting-edge technical AI safety research with practical risk evaluation and governance frameworks.

Chiara Gallese, Ph.D. - Researcher at Tilburg Institute for Law, Technology, and Society (TILT) and an active member of four EU AI Office working groups. Dr. Gallese has co-authored papers with computer scientists on ML fairness and trustworthy AI, conducted testbed experiments addressing bias with NXP Semiconductors, and managed a portfolio of approximately 200 high-profile cases, many valued in the millions of euros.

Yelena Ambartsumian - Founder of AMBART LAW PLLC, a New York City law firm focused on AI governance, data privacy, and intellectual property. Her firm specializes in evaluating AI vendor agreements and helping companies navigate downstream liability risks. Yelena has published in the Harvard International Law Journal on AI and copyright issues, and is a co-chair of IAPP's New York KnowledgeNet chapter. She is a graduate of Fordham University School of Law with executive education from Harvard and MIT.

James Kavanagh - Founder and CEO of AI Career Pro, where he trains professionals in AI governance and safety engineering. Previously, he led AWS's Responsible AI Assurance function and was the Head of Microsoft Azure Government Cloud Engineering for defense and national security sectors. At AWS, James's team was the first of any global cloud provider to achieve ISO 42001 certification.

These advisors will review the legal strategies and technical risk assessments our teams produce, providing feedback on practical applicability to AI policy, litigation, and engineering decisions.

As you can see, these are people representing the exact key areas of change that we are tackling with the AI Safety Law-a-thon:

  • Industry Governance Engineering practices
  • BigLaw Litigation
  • Policy and legal research that informs Regulators
  • International cooperation on AI Governance (Charbel initiated the Global Call for AI Red Lines, signed by Nobel laureates, former heads of state, and 200+ prominent figures).

 Can't wait to see the results of this legal hackathon. See you there!

AI Safety Law-a-thon: Turning Alignment Risks into Legal Strategy
Katalina Hernandez · 5d

Hi!  Thank you for your comment. 

I am an experienced industry professional, and most of the legal participants are coming directly from my network or found out about the event via the CAIDP channel, Women in AI Governance, or Kevin Fumai's update.

It is the first time I have cooperated with AI Plans, but Kabir has successfully run hackathons in the past, more focused on AI evaluations. In fact, AI Plans' December 2023 hackathon had quite reputable judges, such as Nate Soares, Ramana Kumar, and Charbel-Raphaël.

We provide preparatory materials to confirmed participants. 

Best, 
Katalina. 

Gradual Disempowerment Monthly Roundup
Katalina Hernandez · 6d

Another shocking development for me was seeing Accenture, one of the largest tech consultancies in the world, fire 11,000 employees because they "cannot be upskilled to embrace AI".

 

Gradual Disempowerment Monthly Roundup
Katalina Hernandez · 7d

Substack is very valuable for reach and visibility to a broader audience.

Coincidentally, my boss is Albanian (Lead Counsel for a big entity). I saw that news, sent her your paper, and messaged: "The authors would say this is the start of gradual disempowerment".

She read the paper.

This is how I was able to introduce AI Safety in our regular conversations 🙏🏼. 

Posts

  • The Problem with Defining an "AGI Ban" by Outcome (a lawyer's take) (249 karma, 23d, 63 comments)
  • AI Safety Law-a-thon: Turning Alignment Risks into Legal Strategy (58 karma, 1mo, 4 comments)
  • The EU Is Asking for Feedback on Frontier AI Regulation (Open to Global Experts)—This Post Breaks Down What’s at Stake for AI Safety (62 karma, 6mo, 13 comments)
  • For Policy’s Sake: Why We Must Distinguish AI Safety from AI Security in Regulatory Governance (6 karma, 6mo, 11 comments)
  • Scaling AI Regulation: Realistically, what Can (and Can’t) Be Regulated? (3 karma, 7mo, 1 comment)
  • Musings from a Lawyer turned AI Safety researcher (ShortForm) (1 karma, 7mo, 58 comments)