This post was written by Thomas Brcic. It is cross-posted from our Substack. Kindly read the description of this sequence to understand the context in which this was written.
governance
noun
the act or process of governing or overseeing the control and direction of something
“I like to break things, and regulators create red tape which, well, limits my ability to do this.” It is upon hearing this line from the other side of the table, whilst having lunch at a workshop, that my ears - at that point not particularly tuned to any one thread of dialogue - prick up, and I cannot help but intervene.
Often, when defining something so general and broad, explanatory value may be found in describing what that something is not, rather than what it is. It is via this approach of negative definition that I’ll deliberately choose to define ‘governance’, in response to a startlingly widespread misconception I’ve noticed both in my personal conversations and in wider attitudes to norm-shaping and regulation.
“If I understand right, you mean to say that regulation inhibits innovation and so naturally interferes in the process of creation, right?” In affirmation, the man sitting opposite me nods and expands on his statement. He is interning at a fintech startup and explains that, due to a particular set of provisions in an EU regulation, their work is continually inhibited by approval and permitting processes.
This notion of governance is precisely the starting point from which we can chart what it is not intended to be: stifling. For the purposes of this blog, I’ll refer primarily to regulation - just one of several tools in the governance toolbox, though the most severe and, as a result, the most contentious. Regulation and innovation have long been presented as mighty adversaries, and this understanding has also made its way into the world of AI development, inviting with it an echoing chorus of tirades in the wake of regulatory proposals and evaluations on both sides of the Atlantic, where most such development takes place.

Tellingly, a recent executive brief by DigitalEurope - a consortium of 117 corporate members (comprising the usual names of Google, Apple, Microsoft, Meta, etc.) - advanced this idea in an EU competition context: “there is a broad consensus amongst CEOs that regulatory simplification would be the single most powerful lever to boost investment and innovation in Europe”. This critique does not stand alone; it is supported, ideologically, by free-marketeers and Chicago School economics to an extent so pervasive that it has gained the consensus of the general public. And it manifests itself in a dichotomy that imbues the thinking of the man opposite me who, like many others, touts the narrative of regulation as the suffocation of innovation. This is, however, a false dichotomy.
“I think this idea wrongly frames the rationale of regulation,” I begin. Over the next few minutes I attempt to convey the idea of the Porter Hypothesis, which holds that environmental regulation can actually encourage innovation and enhance a firm’s competitiveness. By analogy, I cast regulation as a tool for stimulating innovation - a direct contrast to the former outlook. “Think of regulation not as a lid, but as a steering wheel for incentives. In the context of AI, this means channelling the direction of AI development toward ends that are societally desirable, and that don’t result in disaster.”
The point I was attempting to elucidate is one (very roughly) visualised in the images below. I’ll first state a base assumption:
Regulation can affect the manner in which innovations interact with society at large.
The left image demonstrates the conventional narrative applied to this interaction: regulation impeding the fruits of creativity by acting as a barrier. The right image applies the understanding of creativity - and thus innovation - as a force that is not net-positive by default; it can lead to both positive and negative outcomes. Regulators are thus responsible for assessing which of these outcomes an innovation is likely to lead to, and then either hindering or bolstering it - represented by the red and blue arrows, respectively. So how exactly do these arrows of regulation ‘steer’ innovation?
The year is 2006. Pollution from automobiles is a growing public concern, and carmakers are acting in response. At the VW engine development centre in Ingolstadt, new software is being developed that enables the car’s onboard computer to register whether the wheels are in motion, whether the steering wheel has been engaged, and whether the car is moving uphill or downhill. Developers are racing to roll out VW’s new EA189 engine in time for the 2010 Detroit Auto Show, where the new Volkswagen Golf TDI is due to meet huge demand for clean, efficient diesel cars in the US and European markets. The purpose of this software? To detect whether the car is being tested in a laboratory or is instead being driven on the road. This enables the car to switch on full emissions controls when being assessed by emissions-standards bodies, and to switch them off otherwise - a difference of up to 40 times the legal limit of nitrogen oxide pollution. By the time this is uncovered in 2015, 11 million cars worldwide have been sold and driven in violation of legal emissions limits. VW Group has, to date, paid over $35 billion USD in penalties, fines, recalls, and legal fees.
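The detection logic described above can be caricatured in a few lines of Python. To be clear, this is a purely illustrative sketch - the real defeat device was vastly more sophisticated, and every name and threshold below is invented; the only grounded detail is the set of signals the story mentions (wheel motion, steering engagement, incline):

```python
# Illustrative sketch only - not VW's actual code. All names and
# thresholds are invented; the signals mirror those in the story above.

def looks_like_lab_test(wheel_speed_kmh: float,
                        steering_engaged: bool,
                        incline_deg: float) -> bool:
    """Crude heuristic: on a dynamometer the drive wheels spin,
    the steering wheel is never turned, and the car sits level."""
    return wheel_speed_kmh > 0 and not steering_engaged and incline_deg == 0.0

def emissions_controls_active(wheel_speed_kmh: float,
                              steering_engaged: bool,
                              incline_deg: float) -> bool:
    # Full NOx controls only when the car believes it is being tested.
    return looks_like_lab_test(wheel_speed_kmh, steering_engaged, incline_deg)

# On a dyno: wheels spinning, steering untouched, level ground.
print(emissions_controls_active(80.0, False, 0.0))  # True
# On the road: steering engaged, so the controls switch off.
print(emissions_controls_active(80.0, True, 1.5))   # False
```

The point of the sketch is how mundane such a mechanism can be: a handful of sensor checks is enough to tell a test bench from a road, which is precisely why regulators later moved toward real-world driving emissions tests.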
Regulation, in this case, has the following functions:
In the context of the EU AI Act, the regulator (the European Commission) has steered innovation by prohibiting AI systems that engage in practices such as ‘emotion recognition in workplaces and education institutions’ and ‘social scoring’ (amongst others, grouped in the highest risk tier of ‘unacceptable risk’). Innovators thus have clear instructions on the direction in which their creativity must be channelled - i.e., toward applications that will not be categorised as ‘unacceptable risk’ (red arrows) - and are incentivised to seek business via other means. Examples include the InvestAI initiative, which aims to invest €200 billion in building gigafactories through which “every company, not only the biggest players, can access large-scale computing power to build the future” (blue arrows).
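The “red arrow” function can be pictured as a simple compliance pre-check. The sketch below is a toy illustration, not a real compliance tool: the prohibited-practice list contains only the two examples named above (the Act’s actual Article 5 list is longer and more nuanced), and the function name is invented:

```python
# Toy illustration of the 'red arrow': screening an intended use against
# the EU AI Act's 'unacceptable risk' tier. The list here holds only the
# two practices named in this post and is far from the Act's full list.

UNACCEPTABLE_PRACTICES = {
    "social scoring",
    "emotion recognition in workplaces and education institutions",
}

def is_permitted(intended_use: str) -> bool:
    """Return False if the intended use falls in the prohibited tier."""
    return intended_use.lower() not in UNACCEPTABLE_PRACTICES

print(is_permitted("social scoring"))             # False - prohibited tier
print(is_permitted("code completion assistant"))  # True  - steer here instead
```

The blue arrows are then the complement: subsidies like InvestAI lower the cost of the permitted directions, so the incentive gradient points away from the prohibited set rather than merely blocking it.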
To briefly conclude, the range of possibilities for a world with AI is - to say the least - diverse. Some benefits have already emerged, most acutely in certain STEM fields of research. It would be futile to repeat the risks here; their presence is a growing feature of headlines, and their breadth was the subject of our blog last week. Governing this technology holds the incredible potential of avoiding catastrophe whilst taking into consideration some of the biggest aims of our modern times. These include ensuring its implementation doesn’t come at the cost of individual isolation. Or accelerate humanitarian crises. Or replace the value of human input. Good governance is thus a careful science, with levers that extend into society at large. It steers, rather than deters, innovation. And it should be understood as such in the AI context.
To my friend across the table, I say:
It is important to distinguish freedom from anarchy.