Dark Artificial Intelligence 

Artificial intelligence

We can speak of AI both in systems or machines that imitate human intelligence to perform tasks, improving iteratively from the information they collect and store, and as the combination of algorithms designed to build machines with capabilities the same as or similar to those of an adult human being.

Among the most important challenges, the possible lack of control over the creation of AI algorithms is beginning to stand out.

Some of these challenges and risks are described by the technology consultancy McKinsey, which prepares an annual report on progress in the use of AI. The report reveals that companies around the world continue to expand their application of this technology, with figures growing by double digits every year. According to their research, organizations worldwide have been adding at least one new AI capability annually. The consultancy regrets that few companies are able to detect the dangers of a deficient implementation, and even fewer are taking measures to protect themselves.

Dark data

The information available in an organization can be classified into four categories (a brief illustrative sketch follows the list):

  • Known and used data.  Identified information used for analytical purposes or any other purpose that adds value to the organization. 
  • Known but unused data.  This information has been identified and stored for analytical processes, but it is not used, whether for lack of time, budget or awareness.
  • Known, but disorganized data.  In this segment we could find the vast majority of organizations around the world. 
  • Unknown data. As they have not been identified, this data cannot be used by companies. This unusable unknown data is part of a complete and comprehensive ecosystem known as dark data.
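As an illustration only, the hypothetical sketch below maps these four categories onto a simple classification rule. The flags `identified`, `organized` and `used` are assumptions introduced for the example, not an established framework.

```python
from enum import Enum

class DataCategory(Enum):
    KNOWN_AND_USED = "known and used"
    KNOWN_NOT_USED = "known but not used"
    KNOWN_DISORGANIZED = "known but disorganized"
    UNKNOWN = "unknown (dark data)"

def classify_asset(identified: bool, organized: bool, used: bool) -> DataCategory:
    """Map an information asset onto the four categories above."""
    if not identified:
        return DataCategory.UNKNOWN            # never catalogued: dark data
    if not organized:
        return DataCategory.KNOWN_DISORGANIZED
    if not used:
        return DataCategory.KNOWN_NOT_USED
    return DataCategory.KNOWN_AND_USED

# Example: a log archive that was catalogued and organized but never analysed
print(classify_asset(identified=True, organized=True, used=False))
```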

In this context of often uncontrolled data growth, dark data appears: a term coined by Gartner and defined as "the information assets that companies process and store during their business activities, but that they cannot use for other purposes, such as analytical vision or monetization".

Dark algorithms

By dark algorithm we can understand programming that may at some point exhibit a bias different from, and more negative than, the one originally intended, and that can be manipulated for purposes radically different from the initial ones, generating confusion and, in most cases, serving commercial and economic ends.

These dark algorithms rely on dark patterns: programming tricks and algorithm design choices that leave "open doors" and a high potential to manipulate people, a potential that has grown in recent years thanks to optimization with AI systems.

AI ethicists have long demanded that Big Tech companies significantly increase the transparency of their algorithms. AI systems that are not regularly audited can harbor vulnerabilities due to undetected (or worse, manipulated or manipulable) biases.

 

TikTok algorithms

News from July 2022: "The families of two girls aged 8 and 9 in the United States who allegedly died as a result of a viral TikTok challenge have sued the platform, alleging that the social network's 'dangerous' algorithms are to blame for the deaths of their daughters."

This news went around the world, raising at least two genuinely worrying issues:

1. The social network could have manipulated the algorithm to generate a higher level of attraction and pull among its users, and thereby make certain challenges go viral to gain greater prominence.

2. The social network could have ignored (by failing to assess or audit its algorithms) the true scope and negative potential of the biases and attributes of the algorithms it uses most in communicating with its users, above all in groups that are easily manipulated or most exposed to the challenges and initiatives circulating in its communities, which are most of the time suggested by other members of the social network itself.

In any case, and given their high profile, TikTok and other well-known social networks face multimillion-dollar lawsuits over the consequences of challenges that go completely viral in communities with greater exposure and less awareness of the risks inherent in those challenges, risks that are at best unclear and not adequately controlled.

In fact, these algorithms have come to be known as "dark algorithms" because of the consequences they have produced through the intensive use of this social network's activities and challenges.

 

How do TikTok's algorithms work?

This type of algorithm, with a certain dark bias, relies on clearly planned actions (a rough, purely illustrative scoring sketch follows the list):

1. Use certain effects that are proven to be part of a social trend.

2. Feature popular and/or trendy hashtags.

3. Use songs or sounds the user can easily identify.

4. Generate expectations of belonging to a community or tribe.

5. Promote "creativity", stimulating the user's urge to "be the best" at an established challenge.
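TikTok's real ranking code is not public. Purely as an illustration, the hypothetical sketch below shows how a recommender could over-weight exactly these five signals when scoring a video for a user's feed; every feature name and weight is an assumption invented for the example.

```python
# Hypothetical illustration only: TikTok's actual ranking system is not public.
# Feature names and weights below are invented for the example.
TREND_WEIGHTS = {
    "uses_trending_effect": 2.0,     # 1. effects tied to a social trend
    "has_popular_hashtag": 1.5,      # 2. popular and/or trendy hashtags
    "uses_recognizable_sound": 1.5,  # 3. songs or sounds the user recognizes
    "matches_user_community": 3.0,   # 4. signals of belonging to a community or "tribe"
    "is_open_challenge": 2.5,        # 5. challenges that reward "being the best"
}

def engagement_score(video_features: dict) -> float:
    """Score a video by how strongly it triggers the five signals above."""
    return sum(weight
               for feature, weight in TREND_WEIGHTS.items()
               if video_features.get(feature, False))

# A video combining a viral challenge, a trending sound and a popular hashtag
# scores highest, which is precisely the amplification dynamic the lawsuits describe.
video = {"is_open_challenge": True, "uses_recognizable_sound": True,
         "has_popular_hashtag": True}
print(engagement_score(video))  # 5.5
```

The point of the sketch is that none of the individual signals is harmful on its own; the risk appears when a scoring rule systematically rewards whatever combination of them maximizes pull, regardless of the content being amplified.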

What is undoubtedly alarming is the hypothetical future that awaits if the use of these dark algorithms is not properly controlled. If today AI is still controllable by humans, what could happen in a more advanced future state of AI, in which it had "some freedom" to create its own algorithms? Would it behave like a human being, with instincts for control and gain, or would it be able to discern right from wrong?

Dark artificial intelligence

By dark AI we can understand the use of data-driven AI algorithms, techniques or technologies that may manifest anomalous or unwanted behaviors, whether through ignorance, bad faith or the manipulation of biases and data patterns, in their commercialization, use or release to the general public.

It is logical to think of an inappropriate use of behavioral patterns against the user or consumer, the result of a deliberate strategy of manipulating the variables and attributes of the algorithms toward non-transparent or unethical objectives, very often with an economic purpose.

In recent years, and almost for the first time in history, critical decisions affecting people's lives are being made by artificial intelligence systems and the algorithms that serve them. This is known as the "dark side" of AI, and at times it can slip beyond the control of the organizations that deploy it.

Conclusions and recommendations

Achieving greater and better control and supervision of AI algorithms, and of the ethics of their programming, is essential to reduce or mitigate this "darkness" and its risks and consequences:

  1. Periodically audit the algorithms with the greatest pull on users of the major technology platforms (a bias-audit sketch follows this list).
  2. Define ethical, transparent policies and strategies that any user or consumer can easily inspect.
  3. Create a supranational body, endowed with sanctioning powers, to monitor and/or audit the most relevant algorithms exploited by the technology industry.
  4. Establish within organizations a dedicated user- or consumer-advocacy role or unit, fully independent in its decision-making, to safeguard integrity, ethics, adequacy and transparency.
  5. Create an international seal or standard for the use of algorithms, based on commonly accepted principles, that identifies possible negative uses or ethical lapses in the programming of AI algorithms.
  6. Establish control and supervision mechanisms for how algorithms are programmed and deployed.
  7. Understand the reach of advertising and marketing that targets highly vulnerable population segments, including those with a limited ability to distinguish right from wrong.
  8. Provide such users and consumers with warnings and/or recommendations about services and products whose algorithms could manifest undesirable behaviors and generate direct or indirect risks.
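As one concrete way to approach recommendation 1, the hypothetical sketch below compares how often an algorithm pushes "challenge" content to minors versus adults and flags a disproportionate exposure for human review. The data fields, groups and threshold are assumptions for illustration, not an established auditing standard.

```python
def exposure_rate(impressions: list[dict], group: str, content_type: str) -> float:
    """Share of a group's impressions that belong to a given content type."""
    group_items = [i for i in impressions if i["user_group"] == group]
    if not group_items:
        return 0.0
    flagged = [i for i in group_items if i["content_type"] == content_type]
    return len(flagged) / len(group_items)

def audit_challenge_exposure(impressions: list[dict], threshold: float = 1.5) -> bool:
    """Flag the algorithm if minors see 'challenge' content disproportionately often.

    `threshold` is an illustrative value, not a regulatory standard.
    """
    minors = exposure_rate(impressions, "minor", "challenge")
    adults = exposure_rate(impressions, "adult", "challenge")
    ratio = minors / adults if adults else float("inf")
    return ratio > threshold  # True -> escalate for human review

# Toy impression log: each record says which group saw which kind of content.
log = [
    {"user_group": "minor", "content_type": "challenge"},
    {"user_group": "minor", "content_type": "challenge"},
    {"user_group": "minor", "content_type": "music"},
    {"user_group": "adult", "content_type": "challenge"},
    {"user_group": "adult", "content_type": "music"},
    {"user_group": "adult", "content_type": "music"},
]
print(audit_challenge_exposure(log))  # True: minors are over-exposed in this toy log
```

A periodic check of this kind does not by itself make an algorithm ethical, but it gives auditors and regulators a measurable signal to act on rather than relying on the platform's own assurances.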