This post was written by @Thomas Vassil Brcic and is cross-posted from our Substack. Kindly read the description of this sequence to understand the context in which this was written.
Policies, Laws, Reports, Guidelines; Organizations, States, Municipalities, Companies; Individuals, Workers, Businesspeople, Researchers.
That’s a lot of nouns. And it does little to paint the picture of the emerging governance landscape of Artificial Intelligence (AI) in October 2025. Let not the tranquil implication of ‘landscape’ fool you into imagining a painting of green, expansive fields, or some vast, beautifully imposing mountains towering far over a small pine forest. If AI governance were on a canvas, Da Vinci’s Last Supper would perhaps come closer to doing it justice.
The multitude of actors and media governing this ‘thing’ (I won’t say tool, as some see that as understating its sociological enmeshment) needs not only clarification but also reorganisation. This post is one quick attempt at both: a look into the present that concisely draws the existing picture, and a glance at potential realities which may offer promise.
The existing picture
Governance exists on a wide spectrum of consequence. Whilst the most famous institution, the ‘government’, is the one most often associated with the act of governing, it is very far from being the only one of consequence. An institution is any structure of norms and rules that influences behaviour. This includes large, important organisations such as banks, the UN, and the EU. But more broadly, it includes customs and cultures: less tangible structures with less concrete enforcement mechanisms that nevertheless greatly affect intra- and intersocial relationships.
This post won’t focus on the law, for a good reason. Although the law is one of the most effective governance structures, backed by democratically empowered enforcement mechanisms (such as the military or the police), it is slow and often inadequate. These weaknesses stem partly from law’s promulgation as a compromise between actors in society, though they are more a reflection of the disproportionate influence of some of those actors. In the face of something like AI, these weaknesses may prove too overwhelming for meaningful change.
Anyway, a matter for political scientists.
In the matter of AI, where societal adoption is increasing, the law may prove too laggy a tool. Instead, I will focus on another historically powerful institution.
Ethics.
Let’s take the hypothetical case of a worker at a fictional corporate consulting firm, ProsewesternheightCrests (PwC), in the city of Groningen, the Netherlands. Upon their onboarding, they are given a host of documents to read. Half of these relate to their direct role in the Technical Auditing Department, and include an AI Code of Ethics, Computer Security Basics, Confidentiality Agreements, Work Product Assignment Agreements, and Guidelines for Using AI Tools. They are now aware that they can’t submit confidential client details or copyrighted material into chatbots. About 45 minutes of reading later (brought down to 5, thanks to a handy ChatGPT summary of the key points), they are up to speed on their company’s internal policies.

One week into their job, their manager forwards them an email from the Data and Technology Ethics Committee of the Gemeente Groningen, the local municipal government. It announces a new Ethical Assessment Framework, not binding on non-municipal employees but “of crucial importance” to all workers implementing AI systems into their workflows in the Province of Groningen. It is part of a broader strategy to ‘Keep Groningen on the Frontier’, and includes a mixture of advice and ethical considerations that must be taken into account.
This worker begins to believe they have grasped how they are allowed to use AI at work, until they read a new document by the Digital Task Force / National Authority of the Netherlands. It contains a whole host of new requirements, with unclear sanctions owing to how new it all is. Notwithstanding all of this, the EU’s AI Act (Regulation (EU) 2024/1689) is quickly rolling out. Although the worker is aware that most of its contents don’t directly apply to them, they know that its obligations will trigger a wave of systems auditing that their team will have to conduct on their clients, to ensure that AI systems classified as high-risk to people’s rights and freedoms are properly handled.
And yet still, in spite of the white-collar worker’s struggle to stay afloat in this barrage of overlapping, intertwining, and convoluted messaging, there is more. The Organisation for Economic Co-operation and Development (OECD) recently updated its AI Principles (2024), the first intergovernmental standard on AI. The European Commission’s High-Level Expert Group on Artificial Intelligence has its own Ethics Guidelines for Trustworthy AI (2019). Many companies have also benevolently adopted their own; Google’s Responsible AI Progress Report is updated yearly and based on its own formulation of what is important, namely ‘Bold innovation’, ‘Collaborative progress, together’, and … (I will not bother continuing, out of respect for everyone).
For an average worker in Groningen, this is the state of AI governance. Its complexity isn’t a mere product of the technology’s inherent nature; it is instead a product of the multilateral and manifold actors, institutions, and stakeholders whose missions and motives are in opposition. And it is a complexity that characterises a loud, chaotic void - namely the absence of a mediating structure - wherein ethical principles do not translate into practice (Mittelstadt 2019).
Better, future realities
To attempt to conjure up a solution to the disparate state of ethical realities across the globe is to ignore every conflict that has ever occurred. Instead, I’ll touch upon one attempt that approaches this in an altogether novel way. In their 2023 publication titled A multilevel framework for AI governance, Choung et al. dissect the notion of ‘trust’ and operationalise it as a way of bringing together three of the most important stakeholders: governments, corporations, and citizens.
Trust is “a confident relationship with the unknown” (Botsman 2017), and the authors extend this definition, calling trust “the cornerstone of all relationships”. Yet interpersonal trust is built on different pillars than trust between people and technologies, or between people and automation. Reconciling key studies from multiple decades of psychology research, the authors devised a table of what underpins each of these differing trust relationships.
To create an ethical framework for AI that is widely accepted, and that could thus foster stronger modes of governance, trust is needed across both multiple levels (citizens, corporations, governments) and multiple dimensions.
Why?
Because the law is slow, and because people are the primary source of pressure in a democracy, the corporations from whom the bulk of governance will have to emanate will need this trust embedded from within.
The authors offer the European Commission’s Assessment List for Trustworthy Artificial Intelligence (ALTAI) as a sound reconciliation of the three dimensions of interpersonal trust that will be a prerequisite for this.
Converting principles into practice can run afoul of many errors, especially given the wide-ranging interpretations to which principles can be subjected and by which they are naturally confined. Nevertheless, leaving them without any mechanism may be the graver mistake. As an actionable, less abstract follow-up, two bureaucratic processes are suggested. The first is internal review boards offering differing levels of scrutiny. These can be accompanied by broader-scale review boards, “such as those like the Food and Drug Administration (FDA)”, for external assurance. The second is certification, by way of accreditation of individual corporate users. The efficacy of each of these has only been studied in contexts that presently exist, and is a matter for policymakers to deliberate over.
An FDA-like audit could be a potential approach to corporate AI ethics
Conclusion
Governance is not purely law administered by the government. It encompasses a huge variety of institutions, from ourselves as moral individuals to multinational corporations with significant social, cultural, and political influence. Law is highly centralised and, for better or worse, slow and inherently reactive. This post has tried to appreciate ethics, a less centralised means of governance, as a way of confronting the overbearing assemblage of principles that presently cloud approaches instead of aiding them.