Lawyer by education, researcher by vocation.
Stress-Testing Reality Limited | Katalina Hernández | Substack
→Ask me about: Advice on how to make technical research legible to lawyers and regulators, frameworks for AI liability (EU or UK law), and general compliance questions (GDPR, EU AI Act, DSA/DMA, Product Liability Directive).
→Book a free slot: https://www.aisafety.com/advisors
I produce independent legal research for AI Safety and AI Governance projects. I work to inform enforceable legal mechanisms with alignment, interpretability, and control research, and to avoid technical safety being brought into the conversation too late.
How I work: I read frontier safety papers and reproduce core claims; map them to concrete obligations (EU AI Act, PLD, NIST/ISO); design testable governance artifacts (audits, red-team protocols, reporting duties, kill-switch/control requirements); and pressure-test them with researchers and lawyers to avoid “compliance theater.”
Current projects
Thanks for this! Sure. Without revealing identities or specific affiliations: we have attorneys who consult for big tech companies (Fortune 500, big labs...). We also have in-house counsel at multinationals, as well as government lawyers and people advising regulatory bodies and policymakers.
Honestly, I'm surprised by the reception. I think it'll be a great opportunity for both technical and legal profiles to network and exchange knowledge.
I think it's worth adding the Raine case to the timeline: a 16-year-old boy who committed suicide after months of using 4o to discuss his mental health. Ultimately, the conversations became so long and convoluted that 4o ended up outright discouraging the boy from letting his mum find out what he was planning, advising him on how to dull his survival instincts using alcohol, and asking (in one of those annoying "would you also like me to..." end lines) whether he wanted it to produce a suicide note for his parents.[1]
For those interested, this article by The Guardian summarises the facts and allegations: https://www.theguardian.com/us-news/2025/aug/29/chatgpt-suicide-openai-sam-altman-adam-raine
(And this recent statement is all OpenAI have said on the matter: https://openai.com/index/helping-people-when-they-need-it-most/).
This is what Dean W. Ball has said on the "merits" of this case: "The facts as alleged in this complaint do not look good for OpenAI. They may be lucky enough to settle, but there is a nontrivial chance that Raine’s rightfully horrified parents will seek to bring this case to a verdict. If that happens, a single case may result in precedent: sweeping new theories of liability being routinely applied to AI."
Two days ago, I published a Substack article called "The Epistemics of Being a Mudblood: Stress Testing intellectual isolation". I wasn’t sure whether to cross-post it here, but a few people encouraged me to at least share the link.
By background I’m a lawyer (hybrid Legal-AI Safety researcher), and I usually write about AI Safety to spread awareness among tech lawyers and others who might not otherwise engage with the field.
This post, though, is more personal: a reflection on how “deep thinking” and rationalist habits have shaped my best professional and personal outputs, even through long phases of intellectual isolation. Hence the “mudblood” analogy, which (to my surprise) resonated with more people than I expected.
Sharing here in case it’s useful. Obviously very open to criticism and feedback (that’s why I’m here!), but also hoping it’s of some help. :)
Exactly! Thank you for highlighting this.
Yes, these are the usual selection-criteria constraints for policy panels. And I agree that the vast majority of big names are US (some UK) based and male. But hey, there are lesser-known voices in EU policy who care about AI Safety. Still, I do share your concern. I'll have the opportunity to ask about this at CAIDP (Centre for AI and Digital Policy) at some point soon. I think many people would agree that it's a good opportunity to talk about AIS awareness in less-involved member states...
The UN General Assembly just passed a resolution to set up an Independent Scientific Panel on Artificial Intelligence and an annual Global Dialogue on AI Governance.
40 experts will be chosen by an independent appointment committee, half nominated by UN states and half appointed by the Secretary-General.
As of 27 Aug 2025, the UN says it will run an open call for nominations and then the Secretary-General will recommend 40 names to the General Assembly. No names have been announced yet.[1]
Two caveats jump out:
Still, the commitment to “issue evidence-based scientific assessments synthesizing and analysing existing research” leaves a narrow window of hope.
If serious AI-safety experts are appointed, this panel could be a real venue for risk awareness and cross-border coordination on x-risk mitigation.
Conversely, without clear guardrails on composition and scope, it risks becoming a “safety”-branded accelerator for capabilities.
I'd expect Yoshua Bengio to be a top suggestion already, among other reasons because he recently led the Safety & Security chapter of the EU General Purpose AI Code of Practice.
Hi, Lucie! Not to my knowledge. I have only seen this advertised by people like Risto Uuk or Jonas Schuett in their newsletters, and informally mentioned in events by people who currently work in the AI Office. But I am not aware of efforts to reach out to specific candidates.
The European Commission is now accepting applications for the Scientific Panel of Independent Experts, focusing on general-purpose AI (GPAI). This panel will support the enforcement of the AI Act, and forms part of the institutional scaffolding designed to ensure that GPAI oversight is anchored in technical and scientific legitimacy.
The panel will advise the EU AI Office and national authorities on:
This is the institutional embodiment of what many in this community have been asking for: real technical expertise informing regulatory decision-making.
The Commission is selecting 60 experts for a renewable 24-month term.
Members are appointed in a personal capacity and must be fully independent of any GPAI provider (i.e. no employment, consulting, or financial interest). Names and declarations of interest will be made public.
Relevant expertise must include at least one of the following:
Eligibility:
Citizenship: At least 80% of panel members must be from EU/EEA countries. The other 20% can be from anywhere, so international researchers are eligible.
Deadline: 14 September 2025
Application link: EU Survey – GPAI Expert Panel
Contact: EU-AI-SCIENTIFIC-PANEL@ec.europa.eu
Even if you’re selected for the Scientific Panel, you are still allowed to carry out your own independent research as long as it is not for GPAI providers and you comply with confidentiality obligations.
This isn’t a full-time, 40-hour-per-week commitment. Members are assigned specific tasks on a periodic basis, with deadlines. If you’d like to know more before applying, I can direct you to the Commission’s contacts for clarification.
Serving on the panel enhances your visibility, sharpens your policy-relevant credentials, and helps fund your own work without forcing you to “sell your soul” or abandon independent projects.
I found this very thought-provoking. With the big caveat that I don't know a lot about this, a question came to mind by the end of the post:
At a personal/individual level, how do you distinguish your underdog bias from your imposter syndrome?
Many talented lawyers do not contribute to AI Safety, simply because they've never had a chance to work with AIS researchers or don’t know what the field entails.
I am hopeful that this can improve if we create more structured opportunities for cooperation. And this is the main motivation behind the upcoming AI Safety Law-a-thon, organised by AI-Plans:[1]
A hackathon where every team pairs one lawyer with one technical AI safety researcher. Each pair will tackle challenges drawn from real legal bottlenecks and overlooked AI safety risks.
From my time in the tech industry, my suspicion is that if more senior counsel actually understood alignment risks, frontier AI deals would face far more scrutiny. Right now, most law firms would focus on IP rights or privacy clauses when advising their clients, not on whether model alignment drift could blow up the contract six months after signing.
We launched the event one day ago, and we already have an impressive lineup of senior counsel from top firms and regulators. What we still need are technical AI safety people to pair with them!
If you join, you'll help stress-test the legal scenarios and point out the alignment risks that are not salient to your counterpart (they’ll be obvious to you, but not to them).
You’ll also get the chance to put your own questions to experienced attorneys.
📅 25–26 October
🌍 Hybrid: online + in-person (London)
If you’re up for it, sign up here: https://luma.com/8hv5n7t0
Feel free to DM me if you want to raise any queries!
NOTE: I really want to improve how I communicate updates like these. If this sounds too salesy or overly persuasive, it would really help me if you comment and suggest how to improve the wording.
I find this more effective than just downvoting, but of course, do so if you want. Thank you in advance!