It seems to me that you are doing the most important work you could be doing right now.
I am not an expert on AI, and I'm terrible at policy work. What's the most important thing I can be doing to help?
Background: my MP doesn't seem to care about the messages I send her. Sometimes her team doesn't even send more than an automated response; sometimes they send a message that is sympathetic but carefully makes no commitments to any actual changes, and might itself be LLM-generated or templated.
I occasionally attend a protest but I'm pessimistic about the effectiveness of protests. It feels like people have become desensitised to them. I still go.
Many weeks late, but I thought I should respond, since no one else did:
Your MP might not care if one person sends messages, but in aggregate, many people regularly calling and sending messages (e.g. once a week) will register with them over time, especially as salience increases. You can also write letters to newspaper editors, and donate to organizations that are fighting AI x-risk via local organizing and lobbying.
You can also join a local chapter of ControlAI or PauseAI, or join the Torchbearer Community, or join Microcommit.
There is a lot that everyone can do, be those actions big or small!
I personally use Microcommit, and I am a longtime volunteer in the PauseAI movement. I've learned that willingness to try and perseverance in trying are way more valuable than specialized expertise. (Unless you can find the people who have both!)
What is disarming is the absence of performance: clear, careful speech grounded in knowledge, and an evident commitment to honesty to oneself and others.
The interesting thing is that this seems like it should be somewhat fakeable, but in practice it is quite hard for people to fake. I have found this (being earnest, honest, and expert, not being fake!) to be my weird charisma superpower that I didn't expect. I have also found that I don't need to hide my passion and concern once a bit of human-to-human relating has happened over food or drink.
Plus 1 to all your points Leticia, keep up the good work!
As I recently said, I sense there's a shift now where the demand for understanding (and battle for attention) remains, but serious people are now also demanding proposals and solutions, as you've noted. I suspect a failure mode here could be coming from the ivory tower with talking points without being responsive to the constraints, skills, and options of the lawmakers (etc.) on the receiving end. That too will need to be a dialogue, perhaps a 'tennis' of sorts.
I suspect a failure mode here could be coming from the ivory tower with talking points without being responsive to the constraints, skills, and options of the lawmakers (etc.) on the receiving end. That too will need to be a dialogue, perhaps a 'tennis' of sorts.
I used to work in policy and agree this is a likely failure mode. Especially in AI where there is so much uncertainty, having someone put their neck out to get it on a policy agenda at all can already be a success - even if they don't go as far as you might like on highlighting the scale or urgency of the issue.
Awesome work. For anyone reading this: Please try to talk to your local policymakers. As a constituent it is much easier to get a meeting.
Torchbearers (which is a community adjacent to ControlAI) and/or PauseAI are happy to give you advice or coaching!
Great post & work.
I’m hesitant to criticise your campaign statement as no doubt it has been carefully crafted over months - but it strikes me as a bit wordy and lacking punch. Though maybe this is the point - to appear measured and slightly ponderous/academic rather than risking looking like activism & sloganizing?
It could also be a bit clearer in other ways - e.g. on a cursory read it's not clear that specialised and superintelligent AIs are being contrasted, or what the difference is (aren't these specialised AIs also 'superintelligent', i.e. better than humans at what they do, just not at everything?)
Great work! The slight pushback is on the chickpeas though! Be genuine, even if it can come off as quirky. I'm sure the staffer remembers the chickpea guy much more than 20 other meetings that week, and with some charisma that is now your hook (if you have chickpeas because you're vegan and most places don't offer it, that's a great conversation that shows you have strong values and are compassionate, positive traits which give you an aura you will benefit from when presenting what you came for).
I think suit and no chickpeas can easily end up with looking like everyone else! Add your touch to that - my suits often have colorful pocket squares, or are paired with ties or colorful socks that tell a story from my past. That gets more doors open than just the off-the-shelf kind of look! A stain on it from my toddler will make it better still if I play my cards right and am not sitting across from a robot!
Important note though is that the signals must be real and come from a place of confidence! If you're not vegan, the can of chickpeas will come off flimsy. A deliberate, discreet stain placed there so you can talk about your kid will be disingenuous. A colorful suit that you can't explain is worse than an off-the-shelf one. It took quite some time for me to get to know what stories I want to tell with my look and quirks and come to terms with it myself.
Back in May 2025, I published a post titled “What We Learned from Briefing 70+ Lawmakers on the Threat from AI”. I was taken aback by the positive reception that this post had, and have appreciated the kind feedback through online forums and in-person conversations.
I’ve doubled the number of meetings since writing that post and I’ve been wanting to expand on it for a while. I wouldn’t say I’ve learned twice as much! But I have learned some other things, so here’s an update I hope you’ll find helpful.
If you haven’t read my previous post from May 2025, I would recommend starting there: it contains what I consider the core insights, whereas this one builds on those ideas and addresses some questions I’ve received since.
If you have not come across ControlAI before or wish to read an update on our UK parliamentary campaign, you can find more information further down.
Sample size, characteristics, and time frame
Part I: Attention is All You Need
Betting on common knowledge
In September 2024, we began briefing parliamentarians and asking them to support a campaign statement. The objective was to build common knowledge of the extinction risk posed by superintelligence, and to encourage them to take a public stance. A public stance is a tangible, verifiable signal that they understand the issue, care about it, and want others to know. Our UK campaign statement reads as follows:
As of February 2026, over 100 parliamentarians have supported this campaign. Its purpose was to raise awareness of the problem and build a coalition of lawmakers that want to tackle it. As parliamentarians came to understand the issue more fully, we were able to deepen our conversations and focus more directly on policy solutions: specifically, the case for a prohibition in the foreseeable future, given that superintelligence cannot be developed safely or controllably.
As a result of this sustained engagement, an increasing number of parliamentarians are now speaking openly about the threat from superintelligence and the need for such a prohibition. I will mention some examples in the next section.
Making change happen
At ControlAI, we placed a deliberate bet: before the problem can be addressed, it first needs to become common knowledge. We embarked on sustained engagement with lawmakers, the media, and civil society, across jurisdictions. Early on, this work is slow and difficult. But we believed there would be a point where enough people would know about the issue for it to spread more easily. At that stage, awareness can be built at scale, because the effects begin to compound rather than reset with each new conversation. Support spreads through existing networks, people learn from one another, and progress becomes non-linear rather than incremental.
In the UK Parliament, this is what that process has looked like so far:
Watching this unfold has been deeply rewarding. Recently, I made a point of having several of us at ControlAI attend one of the House of Lords debates we had been invited to. It is hard to overstate how encouraging it is to see lawmakers engage, take a stance, and carry the issue forward themselves, on a topic many were unfamiliar with just a year ago. And to see superintelligence and securing a great future for humanity being discussed in the parliament of one of the most powerful countries in the world! It is both encouraging and clarifying. It shows that change is possible through direct, consistent, and honest engagement.
It goes without saying that, despite our success, there is still much to be done! An international agreement prohibiting superintelligence will require raising awareness at scale in the UK and other jurisdictions, as well as establishing credible pathways to a robust and effective agreement.
I would also note that there are other external factors contributing to this change, whose influence I expect will increase over time. I would highlight two:
Advocating for advocacy
As in many other policy areas, AI governance is a field in which some people devote more of their time to research, while others focus more on advocating for specific policy proposals and bringing them to policymakers. Advocacy has enormous potential to make change happen in the real world, particularly in an area like AI safety. As Mass_Driver brilliantly puts it in this post from May 2025, ‘we’re not advertising enough’. Back then, the author estimated that there are 3 researchers for every advocate working on US AI governance, and argued that this ratio is backwards: advocacy, not research, should be the central activity of AI governance, “because the core problem to be solved is fixing the bad private incentives faced by AI developers.” While I would not place particular emphasis on optimising the ratio as the primary means of addressing the issue, I agree that strengthening and resourcing advocacy is an urgent priority.
In the UK, policymakers are very stretched. As discussed in my previous post, they are expected to be knowledgeable across a wide range of topics (both when it comes to their constituency and to the legislation that goes through Parliament) and they have very limited resources to address them. Their teams of staffers are often small (2–5 people). They certainly don’t have much time to search the web for meaty papers filled with technical terms and then try to figure out what they mean!
Research is a necessary first step to understand whether there is a problem, what it looks like, and how it can be tackled. There is a lot of research I benefit from when building common knowledge among policymakers! But research, on its own, seldom gets the message out. Echoing Mass_Driver’s post, “Just because a paper has ‘extinction risk’ in the title doesn’t mean that publishing the paper will reduce extinction risks.” There comes a point where spending months figuring out a nitty-gritty detail has much lower impact than just getting out there and talking to the people who have the power to do something about it.
I felt the same when we started in the UK! Parliamentarians were very surprised to learn that when AI systems deceive their users or developers or resist shutdown, no engineer actually programmed this behaviour. It is a consequence of the fact that even foremost experts do not know how to prevent such outcomes, and the picture looks quite worrying when extrapolated to more powerful AI capabilities.
Moreover, lobbyists representing tech companies are already using every resource at hand to influence lawmakers, which makes engaging directly all the more important. To begin with, Silicon Valley corporations and investors are mobilising up to $200 million across two new super PACs ahead of the 2026 midterm elections, aimed at unseating politicians they view as insufficiently supportive of expanded AI development. As reported by The New York Times, this strategy was previously used by the crypto industry, where, as they note, “the upside is potentially high.”
Tech companies are also ramping up their lobbying efforts. Here’s an example from the US:
When discussing advocacy with technical researchers, I’ve sometimes heard the following argument: “I have technical training, so I’m ill-suited to speak to lawmakers.” I suspected this wasn’t true, and I’ve seen it disproven firsthand: some of my colleagues at ControlAI with STEM backgrounds and technical research experience are doing excellent work informing lawmakers and the public!
Moreover, I have occasionally sensed a concern that advocacy merely draws on existing research without contributing new learning, and that advocates therefore engage less deeply with the substance. I don’t think this reflects how advocacy works in practice. Over the 140+ briefings I’ve delivered with ControlAI, we have repeatedly encountered difficult policy questions that required sustained reflection over months. Advocacy routinely places you in situations that demand serious intellectual work: you sit across from someone whose authority can be daunting, and you try to explain an issue they may never have encountered, and may initially find outlandish.
You have to answer questions on the spot, respond to unexpected objections that expose hard problems, and defend your reasoning under pressure. At the same time, you must rely on judgment and intuition to choose which explanations and examples, among many you know, will resonate with this particular person. You also need to stay on top of relevant developments across the field. You may not master every technical detail of, say, the US export-control framework, but you engage with the subject deeply, and learn to communicate it effectively to the audience that most needs to understand it.
So, yes indeed, we’re not advertising enough!
Part II: Reflections on Advocacy in Practice
On partisan politics: How do you talk to different parties?
I have received questions about whether I have noticed major differences between parties, whether I change my approach depending on whether I’m talking to Conservatives or Labour, and whether they have different questions.
Had I been asked this before my first meeting, I would have expected substantial differences between parties. At least, I would have expected the meetings to feel quite different. But I don’t generally attribute the character of a meeting to the party of the lawmaker, but rather to other factors: whether their background includes computer science, whether they have been interested in other challenges involving coordination problems (e.g. environmental issues), and other aspects of their personal background (e.g. they have worked on a related piece of legislation, or have a child who works in tech). Even seniority is sometimes felt more strongly than party affiliation. I am glad to see lawmakers from across the political spectrum support our campaign and engage with this topic, as it shows they rightly understand that this problem does not discriminate between political parties.
Most importantly, and at the risk of sounding obvious: don't lie! If you have to change your message to please one party or avoid upsetting a person, that's someone you won't be able to work with (you have given up your chance to convince them of the problem!) and someone whose trust you have forfeited, as it will become obvious that your message is not consistent across audiences. In other words: Don't make arguments others can't repeat. You can only lose. Honesty is not just an asset, but an obligation to yourself and others.
On actionable next steps: Don’t leave them with just a huge problem!
Halfway through an explanation, a parliamentarian once stopped me and said: “Alright, but what can I do about it? I can go home very aware and still not know what to do.”
Compared to very specific constituency problems (e.g. bus services in this part of town are insufficient and constituents cannot travel to work via public transport), the threat posed by superintelligence can feel overwhelming and somewhat distant. A lawmaker on their own does not have the controls to steer the situation in a different direction.
So they rightly ask what they can do next with the toolkit available to them. Raising awareness, as this parliamentarian pointed out, is not enough to fix the issue. Ever since, I have tried to be much clearer about what actionable next steps are available, and to bring them up (or at least signpost them) earlier in the conversation so it does not feel discouraging or irrelevant.
On trade-offs: Don’t lose the permit over a windowsill!
When designing a policy and when communicating it, you need to be clear about what you care about most. Policy design becomes complex very quickly: proposals can range from narrow, targeted measures to entirely new regulatory regimes for a sector.
That is why it is essential to pick your battles wisely and to be explicit about what you are willing to concede, both in shaping the policy and in signalling which elements are essential for actually implementing the policy.
Take carbon pricing. You may have strong views on whether it should be implemented through a tax or a cap-and-trade system. If you believe one of these mechanisms is fundamentally flawed, it may be non-negotiable. But if you think both could work (even if you strongly prefer one) you gain room to compromise in order to build broader support. More trade-offs will arise down the line (e.g., around sectoral exemptions, revenue recycling, and timelines). Each additional design choice opens a new axis of disagreement. Some are worth fighting over; some are not.
A useful way to think about this is as construction rather than decoration. Some elements keep the building standing; others make it look nicer. Protect the load-bearing structures, and don’t lose the permit because you insisted on a particular windowsill that the decision-maker refused to approve!
On iteration and intuition: Why conversation resembles tennis more than political science
I was recently speaking with an acquaintance who is about to launch his own campaign on a different issue. As we talked through the difficulties I faced early on, he admitted how daunting he finds this initial phase. “Studying political science didn’t prepare me for this at all,” he said. I could only agree. You can read endlessly about politics, but that only takes you so far. Real understanding comes from doing; and from reflecting, again and again, on what happens when you do.
I’ve often found myself thinking of these meetings in terms of tennis. I’ve recently taken an interest in the sport: I read Andre Agassi’s Open, started watching matches, and even queued for Wimbledon in the rain. All of that has, in theory, improved my understanding of tennis. But it hasn’t improved my footwork or my hand-eye coordination. When I pick up a racket, I still miss half my serves!
Tennis, like briefing lawmakers, is a craft honed through repetition. The more you do it, the better you get. What works in one match may fail in another; styles differ, and you have to adapt. You begin to sense when you’re losing someone’s attention and when you’ve drawn them in, which examples land and which fall flat. Much of it is decided in the moment, guided less by explicit rules than by intuition built over time.
On iteration through feedback: How much evidence is enough?
Consider the first sentence of ControlAI’s UK campaign statement: “Nobel Prize winners, AI scientists, and CEOs of leading AI companies have stated that mitigating the risk of extinction from AI should be a global priority.”
There are happier, more palatable messages, I can see that!
When we first showed our statement to a number of staffers and MPs, they all sang the same song: “Nobody will add their name to a statement with the word extinction in it.” Ouch! But this is exactly how foremost AI experts view the scale of the risk, and I certainly don’t know more than they do, nor do I wish to change their message.
It was discouraging and, in all honesty, I came to believe at times it wouldn’t work. Yet over 100 parliamentarians from across the political spectrum have now supported the statement! I’ve learned a lot from that.
Feedback from reality matters, but it’s easy to overindex on it (especially when we don’t like what we hear!). When I receive feedback, I try to ask: how large is the sample? Two people? Five? Twenty?
My threshold for acting on feedback depends on how much I care about the underlying idea. If the issue is peripheral and the downside of sticking with it is high, I’m happy to change course on limited evidence. But when it comes to core principles or messages I deeply care about, the bar is much higher: it takes a much larger body of evidence before I’m willing to reconsider.
This matters most at the beginning, when feedback is scarce and often noisy. Be patient. Persist. Adapt, but don’t overcorrect. Otherwise, what you’re building can get diluted by early signals until its essence disappears entirely.
On building relationships: Grab that coffee!
I remember a busy day at Portcullis House (where MPs have their offices and take meetings), when the queue for coffee was even worse than usual and our meeting (a short one) was already starting late. We were just sitting down with an MP and a staffer when the MP offered to grab us coffee. ‘I’m alright, but thanks for offering!’ I said nervously, eyeing the queue. ‘I’ll have a black americano’, said my colleague. My eyebrows rose as I watched the MP join that long queue. Over the five minutes that followed, speaking with the staffer, I could only think: ‘Damn! We shouldn’t have ordered that coffee!’
I learned a lot from what my colleague said when we came out of Parliament. It was something along these lines:
“Look, I know you were stressed about time! But think about it: if you want to work with this person, and hence build a relationship with them, you need to act accordingly. If we come rushing in and show that we can’t take time for anything other than our talking points (not even time to get to know each other) that makes it hard to build a relationship. Actually, I’d have the feeling that this person wants to sell me on their thing and then run away once they have what they want. So, yes, I ordered that coffee. And you should too!”
I’ve had many coffees (and orange juices; please mind your caffeine and hot chocolate intake!) since. At the end of the day, that is what I would do with any other person! If it has to be quick, have a quick coffee! But that is still better than a rushed conversation where you haven’t offered a chance to build a relationship.
On trust: Competence over confidence
Confidence, understood as sounding sure, is not always a virtue. Many people speak confidently while being wrong or imprecise, and that only worsens the problem. If I were an MP being asked to engage or take a stance, I wouldn’t want to work with a good performer or salesperson. I would want someone competent, and competence looks very different from confidence. It shows up in three ways:
In environments like Parliament, where people are constantly trying to influence lawmakers, confidence is cheap and often suspect. What is disarming is the absence of performance: clear, careful speech grounded in knowledge, and an evident commitment to honesty to oneself and others.
Miscellanea: Leave the chickpeas at home, bring the suit instead
I was surprised when someone told me: “I really liked your post on how to engage with lawmakers. But, you know what? You should have recommended wearing a suit!” Alright!
Please, do wear a suit! It is nicer to engage with people who are well presented and have good hygiene. Since we’re here: keep a toothbrush handy; you don’t want to be remembered as the person with coriander in their teeth.
And if you carry a bag to Parliament, think about what’s inside. Believe it or not, I once spotted someone who got stopped at security and whose meeting got delayed because he was carrying something strange. I couldn’t believe it when I saw the security guard pull out a can of chickpeas. I’m sure, for the puzzled staffer watching the situation unfold, he became “the chickpea guy”.
Many thanks to my colleagues at ControlAI for helpful feedback!
If there’s anything I haven’t addressed that you think would be valuable, please leave a comment and I will consider addressing it in future posts.
About me
I lead ControlAI’s engagement with UK parliamentarians, having briefed over 100 parliamentarians and the UK Prime Minister’s office on emerging risks from advanced AI and the threat posed by superintelligent AI. I have experience in policy consultancy, communications, and research. I’m an economist and international affairs specialist by training, and I hold a Master’s in Philosophy and Public Policy from the London School of Economics.