A letter signed by renowned names such as Stephen Hawking and Elon Musk warned about the risks posed by artificial intelligence, including ethical dilemmas, as in the case of self-driving cars, as well as mass unemployment. However, the risks may have a cause that is not only different from, but also far graver than, what that text considers.

With its estimated market value growing at 38% per year, artificial intelligence advances at a dizzying pace, making China's GDP growth look like a turtle's steps. Sustained over 10 years, such a rate compounds into roughly a 25-fold increase in the size of the market, and, as the arithmetic dictates, nothing less than about 625-fold over 20 years, if the growth persists. This refers, of course, only to the direct economic force exerted by artificial intelligence. It leaves aside the certainly immense impacts such expansion will have in numerous areas: the labor market, already mentioned; economic risks (whether from a sudden mass transfer of investment into this sector, withdrawal from others, or a sudden drop in the profits obtained from it); environmental risks, contributing to global warming and the overexploitation of natural resources (given the demands of producing the supercomputers that run these systems); geopolitical risks (since growth is exponential, the first country or company to pull ahead will end up absurdly far ahead of the others), including conflicts between countries; and the social, psychological, and cultural risks caused by the vertiginous speed with which such technologies have emerged.
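The multiples above can be checked directly by compounding the 38% rate the article cites (the rate is the article's figure; the function name below is illustrative, not from the article). Note that 20 years of compounding gives roughly 627, which the article rounds to 25² = 625.

```python
# Compound growth at the article's cited 38% annual rate.
# `market_multiple` is an illustrative name, not from the article.
def market_multiple(annual_rate: float, years: int) -> float:
    """Total growth multiple after `years` of compounding at `annual_rate`."""
    return (1 + annual_rate) ** years

print(market_multiple(0.38, 10))  # roughly 25
print(market_multiple(0.38, 20))  # roughly 627, rounded in the article to 25**2 = 625
```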

With such a dizzying growth rate, we can expect these changes to occur within a very short interval, plausibly within 5 to 10 years, and almost certainly within the next few decades. Despite all these threats, however, the subject continues to be discussed only superficially in the media. In general, it is tested and planned by large companies and governments, away from the spotlight and without the participation, scrutiny, or oversight of the public, who do not seem aware of the seriousness of the issue. To mention the particular case of Brazil: neither the former nor the current president has meaningfully addressed the issue in campaign or inaugural speeches. We have, consequently, a paradoxical situation: the more the subject is researched and pursued within academia and technology companies, the less it is debated, and the fewer democratic means or mechanisms for popular participation are considered.

However, we consider that the greatest risk posed by artificial intelligence is not directly related to its potential, but to its use within the context of a modern economy. It is well recognized that, in an economy oriented towards the accumulation of capital, numerous sectors come to act as if they had a "life of their own," sustained even with full knowledge of the many harms they cause, simply because doing so is "profitable": "business is business." A well-known example is the relationship between the fossil fuel industry (oil and coal) and global warming. The seriousness of the issue, and the urgent need for measures to reduce greenhouse gas emissions, is recognized not only by climatologists, among whom it is essentially a consensus, but also enjoys strong support from the population, from global leaders, and even from various millionaires. The same can be said of the seriousness of the recently faced COVID pandemic, in which, even though the immense risks of loosening social distancing measures were known, it was done in the name of "economic forces."

Even so, despite all the seriousness just described, the subject remains largely ignored, with very little press coverage and few concrete measures to address it. There is no need, of course, to list the whole host of environmental and social "tragedies" indirectly caused by these same "market forces."

Thus, we realize that the risk of artificial intelligence does NOT arise from the accidental, spontaneous creation of a monstrous superintelligence that escapes human control and destroys or subjugates us, as we might imagine in a comic book. On the contrary, we argue that artificial intelligence, with its immense potential for harm, will continue to be produced with the full knowledge of those who hold it, purely and simply because its holders so desire: for those involved, the process will prove immensely profitable, and given the economic power at stake, the external damages caused, although known, will be neglected, since the beneficiaries will hold power and the victims will not.

We point, therefore, to the immense contradiction in which, while the need for democracy and popular participation is preached and proclaimed worldwide, the technologies that will effectively determine our destiny remain concentrated, in the most authoritarian way possible, in the hands of a few companies and billionaires.

We conclude, therefore, that there is an immediate need for action to stimulate debate, raise awareness, and, most fundamentally of all, democratize control over the creation and development of artificial intelligence technologies. This is a step that will affect the destiny of all humanity.

I found this text here, in Portuguese: https://questionetudo.medium.com/o-grande-elefante-na-sala-o-grande-risco-da-intelig%C3%AAncia-artificial-pode-n%C3%A3o-ser-aquilo-que-3a810f55a69d


