Lawyer by education, researcher by vocation.
Stress-Testing Reality Limited | Katalina Hernández
→Ask me about: Advice on how to make technical research legible to lawyers and regulators, frameworks for AI liability (EU or UK law), general compliance questions (GDPR, EU AI Act, DSA/DMA, Product Liability Directive).
→Book a free slot: https://www.aisafety.com/advisors
I produce independent legal research for AI Safety and AI Governance projects. I work to inform enforceable legal mechanisms with alignment, interpretability, and control research, and to avoid technical safety being brought into the conversation too late.
How I work: I read frontier safety papers and reproduce core claims; map them to concrete obligations (EU AI Act, PLD, NIST/ISO); and propose implementation plans.
Current projects
Agreed! Thank you, Linda. For readers: this also goes for students who do not have support from family members, or whose families are in quite a precarious situation, since quitting your degree may leave you with thousands in debt that you'll still have to repay.
Thank you for adding that last part, "Have a sufficient financial safety net".
Not everyone can afford to leave paid work and just do AI Safety work for free / in favour of unstable short fellowships.
It's harsh but it's the reality. If you have dependants (children, or other family members) or have massive student loans to pay every month, and have zero financial support from anyone else...
I'd say: still dedicate as much time as possible to your research. Like OP says, spend more time actually doing the work than on applications.
But I've seen people get pressured into quitting their work or studies, and made to feel like they're not committed enough if they don't, when in reality they'd be putting their entire family under financial duress if they did.
And to those who can afford not to work for a salary: please remember to be kind and prudent. Some of us also have short timelines, but we'd like our families to have food on their plates too.
P.S.: in case it isn't obvious, I do support the message on this post.
I just wanted to say thank you for this. I have a low tolerance for annoyance, which is counterproductive for me, so this helped me reflect on that. Plus, it's a great explainer of what LW interactions are supposed to result in.
By the way, has this been posted anywhere else apart from LW and arXiv? I'd circulate on LinkedIn too (yes, I'm serious, unfortunately) and tag key people in policy and AI Governance like Kevin Fumai or Luiza Jarowski (large following) or Kai Zenner / folks at the AI Office (I know some). Let me know if I can repost this elsewhere :).
For context on how your proposed 10^24 FLOPs threshold compares to existing regulation:
EU AI Act: Defines GPAI models with systemic risk at 10^25 FLOPs
California SB 53: Defines "frontier model" at 10^26 FLOPs (including compute from fine-tuning/RLHF)
Your proposal is way lower than what current regulations consider "frontier" or "systemic risk."
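For scale, a quick back-of-the-envelope comparison of the gaps (a minimal sketch in Python; the variable names are mine, the threshold values are just the ones listed above):

```python
# Rough comparison of the training-compute thresholds above (in FLOPs).
proposed_halt = 1e24   # threshold proposed in the post
eu_ai_act = 1e25       # EU AI Act systemic-risk presumption for GPAI
ca_sb53 = 1e26         # California SB 53 "frontier model" definition

print(f"EU AI Act threshold = {eu_ai_act / proposed_halt:.0f}x the proposal")  # 10x
print(f"SB 53 threshold     = {ca_sb53 / proposed_halt:.0f}x the proposal")    # 100x
```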
So, you may receive pushback in the form of: "Current regulations don't even consider this frontier; this is outside the policy Overton window."
Nevertheless, we're also seeing some regulatory backtracking (the EU Omnibus simplification, Trump's EO undermining SB 53), and I genuinely think this may have the opposite effect: it may lead people to think, "Regulators don't know what they're doing. We need a stricter halt while they figure it out, not maintaining thresholds they're already second-guessing."
I think these thresholds were set by political feasibility rather than safety analysis, and I'd love to see this proposal succeed. Of course, this requires making the case that the risk is imminent enough to justify going well below where any regulator has dared to set a line.
[Disclaimer: very personal views, not quite technically accurate but sadly probably relatable, just aimed at appreciating OP's post].
God, this is awesome. I know it's humour, but I think you've captured a very real feeling: the one you get when you work in a corporation, with technical product owners and legal teams, and you're trying to explain AI risk.
"Put in the contract that their system must meet interpretability by design standards".
Deep sigh
"That's not possible, and this model, like most frontier, is the opposite from Interpretable by default. That's why it's called The Black box problem".
"But can't they just open the black box? They programmed the models, they have the source code".
More sighs
"Let me tell you about the fascinating world of mechanistic Interpretability"...
Half an hour later
"Okay so... it's not only that we're deploying a powerful technology that we can't audit, but nobody really knows how it works internally, even the people who "developed " it (who now try to reverse engineer their own creations), and our hope that, at some point, we can actually control internal behaviours is that they got Claude obsessed with the Golden Gate at some point?..."
"Basically yes".
Our participants will receive feedback on their work from five exceptional experts bridging AI safety research, legal practice, and governance:
Charbel-Raphaël Segerie - Executive Director of the French Center for AI Safety (Centre pour la Sécurité de l'IA - CeSIA), OECD AI expert, and a driving force behind the AI Red Lines initiative. His technical research spans RLHF theory, interpretability, and safe-by-design approaches. He has supervised multiple research groups across ML4Good bootcamps, ARENA, and AI safety hackathons, bridging cutting-edge technical AI safety research with practical risk evaluation and governance frameworks.
Chiara Gallese, Ph.D. - Researcher at Tilburg Institute for Law, Technology, and Society (TILT) and an active member of four EU AI Office working groups. Dr. Gallese has co-authored papers with computer scientists on ML fairness and trustworthy AI, conducted testbed experiments addressing bias with NXP Semiconductors, and has managed a portfolio of approximately 200 high-profile cases, many valued in the millions of euros.
Yelena Ambartsumian - Founder of AMBART LAW PLLC, a New York City law firm focused on AI governance, data privacy, and intellectual property. Her firm specializes in evaluating AI vendor agreements and helping companies navigate downstream liability risks. Yelena has published in the Harvard International Law Journal on AI and copyright issues, and is a co-chair of IAPP's New York KnowledgeNet chapter. She is a graduate of Fordham University School of Law with executive education from Harvard and MIT.
James Kavanagh - Founder and CEO of AI Career Pro, where he trains professionals in AI governance and safety engineering. Previously, he led AWS's Responsible AI Assurance function and was the Head of Microsoft Azure Government Cloud Engineering for defense and national security sectors. At AWS, James's team was the first of any global cloud provider to achieve ISO 42001 certification.
Ze Shen Chin - Co-lead of the AI Standards Lab and Research Affiliate with the Oxford Martin AI Governance Initiative. He has contributed to the EU GPAI Code of Practice and analysed various regulatory and governance frameworks. His research currently focuses on AI risk management. Previously, he spent over a decade in the oil and gas industry.
These advisors will review the legal strategies and technical risk assessments our teams produce, providing feedback on practical applicability to AI policy, litigation, and engineering decisions.
As you can see, these are people representing exactly the key areas of change that we are tackling with the AI Safety Law-a-thon.
Can't wait to see the results of this legal hackathon. See you there!
Closing our Advisory panel with one last amazing addition: Ze Shen Chin!
Thank you! You've managed to explain exactly what I thought when I saw this link. And re the LinkedIn comment: I'm actually surprised that people are surprised. I know people who post very high-quality articles there, but mostly it's become slop land. The pattern I'm noticing is LinkedIn writers who value quality slowly transitioning to Substack, and the parts of their audience that want to think moving with them.
This was a great read! I swear I've watched The Notebook at least twice over the years and, for the life of me, I can't remember that scene.
I would be very curious to hear your take on 500 Days of Summer. Do you think it portrays a better or more realistic picture of how these sorts of unreciprocated pursuits tend to go? I really enjoyed that, in the end, the person who kept saying she just doesn't believe in marriage married somebody else. I think that was one of my favorite details of the movie.