Thank you for your kind words! Of course you can use this essay. 

China does come up in our conversations. I didn’t mention it here because the aim of this post is to reflect on what we’ve learned across more than 70 meetings, rather than to present a scripted pitch - no two conversations have been the same! So it doesn’t cover every single question that may arise. 

You’re right to point out that this is an important one. It’s too big to capture fully in a format like this, but here’s my view in a nutshell: broadly speaking, I believe that racing ahead to develop a technology we fundamentally do not understand, one that poses risks not only through misuse but by its very nature, is neither a desirable nor an inevitable path. There’s a lot at stake, and we're working to find a different approach: one in which we develop the technology with safeguards, while ensuring we deepen our understanding and maintain control over it.

Thank you for your kind words and thoughtful questions; I really appreciate it.

  • US advocacy: We’ve already had some meetings with US Congressional offices. We are currently growing our team in the US and expect to ramp up our efforts in the coming months.
  • Policy proposals, in a nutshell: We advocate for the establishment of an independent AI regulator to oversee, regulate, and enforce safety standards for frontier AI models. We support the introduction of a licensing regime for frontier AI development, comprising:
      ◦ a training license for models exceeding 10^25 FLOP;
      ◦ a compute license, which would introduce hardware tracking and know-your-customer (KYC) requirements for cloud service providers exceeding 10^17 FLOP/s; and
      ◦ an application license for developers building applications that enhance the capabilities of a licensed model.
    The regulator would have the authority to prohibit specific high-risk AI behaviours, including:
      ◦ the development of superintelligent AI systems;
      ◦ unbounded AI systems (i.e., those for which a robust safety case cannot be made);
      ◦ AI systems accessing external systems or networks; and
      ◦ recursive self-improvement.
  • Questions on loss of control: I completely agree on the importance of emphasising loss of control to explain why AI differs from other dual-use technologies, and why regulation must address not only the use of the technology but also its development. I wouldn’t say there’s a single, recurring question that arises in the same form whenever we discuss loss of control. However, I have sometimes observed confusion stemming from the underlying belief that “Well, if the AI system behaves a certain way, it must be because an engineer programmed it to do that.” This shows the importance of explaining that AI systems are no longer traditional software coded line by line by engineers. The argument that this technology is “grown, not built”* helps lay the groundwork for understanding loss of control when it is introduced.
  • Differences between parties: Had I been asked to bet before the first meeting, I would certainly have expected significant differences between parties in their approaches (or at least that meetings would feel noticeably different depending on the party involved). In practice, that hasn’t generally been the case. Put simply, the main variations can arise from two factors: the individual's party affiliation and their personal background (e.g. education, familiarity with technology, committee involvement, etc.). In my view, the latter has been the more important factor. Whether a parliamentarian has previously worked on regulation, contributed to legislation like the GDPR in the European Parliament, or has a technical background often makes a bigger difference. I believe this is very much in line with your view that we tend to overestimate the extent of partisan splits on new issues.
  • Labour v. Conservatives: Our view of the problem, the potential solutions, and our ask of parliamentarians remain consistent across parties. In meetings with both Labour and the Conservatives, we’ve noted their recognition of the risks posed by this technology. The Conservatives established the AI Safety Institute (renamed the AI Security Institute by the current government). Labour’s DSIT Secretary of State, Peter Kyle, acknowledged that a framework of voluntary commitments is insufficient and pledged to place the AISI on a statutory footing. The key difference in our conversations with them is the natural one: the government has committed to putting these voluntary commitments on a statutory footing, and we’d like to see it deliver on that commitment.
  • Did they have different perspectives or questions? The answer is the same as above: the main differences were driven by individual background rather than party affiliation.
  • Was there any sort of backlash against Rishi Sunak's focus on existential risks? Or the UK AI Security Institute? You mention that “in the US, it's somewhat common for Republicans to assume that things Biden did were bad (and for Democrats to assume that things Trump does is bad).” That dynamic hasn’t applied in the UK in this specific context, which surprised me. It's rare for the opposition to acknowledge a government initiative as positive and seek to build on it. Yet that’s exactly what happened with AISI: Labour’s DSIT Secretary of State, Peter Kyle, did not scrap the institute but instead pledged to empower it by placing it on a statutory footing during the campaign for the July 2024 elections.
    When it comes to extinction risk from AI, the Labour government is currently more focused on how AI can drive economic growth and improve public services. Loss of control is not at the core of their narrative at the moment. However, I believe this is different from a backlash (at least if we mean a strong negative reaction against the issue or an effort to bury it). Notably, Labour’s DSIT Secretary of State, Peter Kyle, referred to the risk of losing control of AI (particularly AGI) as “catastrophic” earlier this year. So, while there is currently more emphasis on how AI can drive growth than on mitigating risks from advanced AI, those risks are still acknowledged, and there is at least some common ground in recognising the problem.

    *Typo corrected, thanks for spotting!