Full text of the blog post below:

I recently signed an open letter asking to slow down the development of giant AI systems more powerful than GPT-4 – those that currently pass the Turing test and can thus trick a human being into believing they are conversing with a peer rather than a machine.

I found it appropriate to sign this letter to alert the public to the need to slow the current acceleration in the development of AI systems, which is taking place at the expense of the precautionary principle and ethics. We must take the time to better understand these systems and develop the necessary frameworks at the national and international levels to increase public protection.

It is because of this unexpected acceleration – I probably would not have signed such a letter a year ago – that we need to take a step back, and that my opinion on these topics has changed.

There is no guarantee that someone in the foreseeable future won't develop dangerous autonomous AI systems with behaviors that deviate from human goals and values. The short- and medium-term risks – manipulation of public opinion for political purposes, especially through disinformation – are easy to predict, unlike the longer-term risks – AI systems that are harmful despite their programmers' objectives – and I think it is important to study both.

With the arrival of ChatGPT, we have witnessed a shift in the attitudes of companies, for whom the challenge of commercial competition has increased tenfold. There is a real risk that they will rush into developing these giant AI systems, leaving behind the good habits of transparency and open science they developed over the past decade of AI research.

There is an urgent need to regulate these systems, aiming for more transparency and oversight of AI systems in order to protect society. I believe, as many do, that the risks and uncertainty have reached such a level that we must also accelerate the development of our governance mechanisms.

Societies take time to adapt to changes, laws take time to be passed, and regulatory frameworks take time to be put in place. It therefore seems important to raise awareness quickly, to put this issue on more radar screens, and to encourage a greater public debate.

I have been saying this for years, and I reiterate it in this letter, which focuses on governance of the giant AI systems more powerful than GPT-4, not AI at large: instead, it is essential to invest public funds in the development of AI systems dedicated to societal priorities often neglected by the private sector, such as research in health or the fight against climate change. Such applications of AI are far less likely to be misused for socially harmful purposes than generalist systems like GPT-4.

We need to stimulate the efforts of researchers, especially in the social sciences and humanities, because the solutions involve not only a technical, computational aspect, but also – and especially – social and human considerations.

We must continue to fight for the well-being of humanity and for the development of beneficial AI applications, such as those that address the climate crisis. This will require changes at the level of each country and the adoption of international treaties. We succeeded in regulating nuclear weapons on a global scale after World War II; we can reach a similar agreement for AI.

Despite what this letter may suggest, I am fundamentally optimistic that technology will be able to help us overcome the great challenges facing humanity. However, in order to face them, we need to think now about how to adapt our societies, or even reinvent them completely.

Here are some frequently asked questions about the open letter. The opinions expressed in the answers are not necessarily shared by all the signatories of this letter.

Why did you sign this open letter?

We have passed a critical threshold: machines can now converse with us and pretend to be human beings. This power can be misused for political purposes at the expense of democracy. The development of increasingly powerful tools risks increasing the concentration of power. Whether in the hands of a few individuals, a few companies, or a few countries, this is a danger to democracy (which means power to the people, and therefore the opposite of concentration of power), to the – already fragile – global security balance, and even to the functioning of markets (which need competition, not monopolies).

Isn’t the tone of the letter excessively alarmist?

No one, not even the leading AI experts, including those who developed these giant AI models, can be absolutely certain that such powerful tools, now or in the future, cannot be used in ways that would be catastrophic to society. The letter does not claim that GPT-4 will become autonomous – which would be technically wrong – and threaten humanity. Instead, what is very dangerous – and likely – is what humans with bad intentions, or simply unaware of the consequences of their actions, could do with these tools and their descendants in the coming years.

Will this letter be enough to convince tech giants to slow down?

Even if the hope is slim, it is worth starting a discussion that includes society as a whole, because we are going to have to make collective choices in the coming years, including about what we want to do with the powerful tools we are developing. I even suspect that many in these companies are hoping for regulation that levels the playing field: since ChatGPT, the less cautious players have had a competitive advantage, and can therefore more easily get ahead by reducing the level of prudence and ethical oversight.

Is a six month break enough? Isn’t this effort futile?

The same could be said of our efforts to combat climate change. Much damage has already been done to the environment, the inertia of our industrial fabric is stubborn – not to mention the lobbies – and the efforts of scientists and climate activists to deflect our collective trajectory don't seem to be working. Should we give up? Certainly not. There are always ways to do better, to reduce future damage, and every step in the right direction can be helpful.

Aren’t fears about AI systems science fiction?

There is already a body of literature documenting current harms that regulation would help minimize, from violations of human dignity, such as discrimination and bias, to military uses of AI, such as autonomous drones that can kill.

Should we stop development of AI systems altogether?

This is not what the letter says; it only talks about systems that are not yet developed and that will be more powerful than GPT-4. Nevertheless, it seems obvious to me that we must accept a certain slowdown in technological progress to ensure greater safety and protect the collective well-being. Society has put in place powerful regulations for chemicals, aviation, drugs, cars, and so on. Computers, which now have an enormous influence on many areas of our societies, seem to me to deserve similar consideration.

Why did you co-sign a letter with people who have political positions opposite to yours? 

It is an open letter on a specific subject. By signing it, one does not become bound by all the other statements of its signatories. On the issue of existential and long-term risks for humanity, I am not aligned with much of what has been written on this subject ("long-termism"). Of course, long-term risks are important – and that is why I advocate for better action on climate issues. But these risks have to be weighed against the probability of their materialization, and the longer the time horizon, the more uncertain these risks are. In comparison, the damage caused to human beings who are alive today should not be set aside, because these shorter-term risks are much more tangible. So I think we need to consider all risks, weighted by the magnitude of their impact and their estimated probability, across different time horizons. And we must use reason and science to evaluate all this, without leaving aside our empathy for the flesh-and-blood beings who are suffering today.

Is society ready to face the arrival of these new and extremely powerful technologies?

Humanity is facing several global crises that are likely to worsen: in public health, in the environment, in military and political security and stability, in inequalities and injustices and, of course, the risks posed by the uncontrolled development of AI. I doubt that the current organization of our society, at the national and global levels, is adequate to face these challenges. In the short term, regulation seems essential to me. I hope that it will be sufficient in the long term, but we must now consider the possibility that the social organization that worked in the last century is no longer adapted to these global crises.

Comments (2)

I know it's so common, but I really don't get the "science fiction" objection. Jules Verne wrote science fiction, and that didn't stop us from building submarines and going to the moon.

It'd be great if he could put his hands to it by refocusing the efforts of the institute he founded. Words are nice and all, but we need material actions yesterday.