
Sexism and chauvinism in artificial intelligence: why are they so difficult to overcome?

What is "artificial intelligence bias" (AI bias)? What is the reason for the emergence of this phenomenon and how to deal with it? In the material prepared specifically for TAdviser, journalist Leonid Chernyak answers these questions.

At the heart of everything that constitutes AI practice (machine translation, speech recognition, natural language processing, computer vision, autonomous driving and more) lies deep learning. It is a subset of machine learning distinguished by the use of neural network models, which, one might say, mimic the work of the brain, so only with some stretch can they be attributed to AI. Any neural network model is trained on large data sets and thereby acquires certain "skills," but how it applies them remains unclear to its creators, which ultimately becomes one of the most important problems for many deep learning applications. The reason is that such a model operates on patterns formally, without any understanding of what it is doing. Is such a system really AI, and can systems built on machine learning be trusted? The significance of the answer to the last question goes beyond scientific laboratories.
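This opacity is easy to make concrete. Below is a minimal sketch in plain Python with NumPy; the architecture, learning rate and iteration count are arbitrary choices for the illustration, not anything prescribed here. The toy network learns XOR, yet the only record of its acquired "skill" is a set of weight matrices that explains nothing about how the skill is exercised.

    import numpy as np

    # Toy two-layer network trained on XOR by gradient descent.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
    W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)           # hidden activations
        p = sigmoid(h @ W2 + b2)           # predicted probabilities
        g_p = p - y                        # gradient of cross-entropy loss
        g_h = (g_p @ W2.T) * (1 - h**2)    # backpropagate through tanh
        W2 -= 0.1 * (h.T @ g_p); b2 -= 0.1 * g_p.sum(0)
        W1 -= 0.1 * (X.T @ g_h); b1 -= 0.1 * g_h.sum(0)

    print(p.round(2).ravel())  # typically close to [0, 1, 1, 0]: the "skill" is acquired
    print(W1)                  # ...but its only "explanation" is this matrix of numbers

Inspecting W1 yields perfectly valid weights, but no human-readable account of what the network "understands": that is precisely the trust problem described above.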


That is why media attention to the phenomenon dubbed AI bias has noticeably escalated over the past couple of years. The term denotes the bias, or partiality, exhibited by AI systems. Racism and sexism attributed to AI are written about not only by professional publications such as Engadget[1] and TechTires[2], but also by mainstream magazines and newspapers[3][4].

Such high interest in AI bias is explained by the fact that the results of deploying AI technologies in some cases touch the basic values of modern society. They manifest themselves in violations of such important principles as racial and gender equality.

Outwardly, AI bias manifests itself in the fact that many analytical systems built on deep learning unexpectedly demonstrate a tendency to reach, let us say, partial conclusions, which can subsequently lead to erroneous decisions. Decisions suffering from AI bias have caused public outrage over the unfairness of some actions of the US prison system towards African Americans, caused by errors in recognizing members of ethnic minorities. The scandal around Microsoft's chatbot Tay, which was soon replaced by Zo[5], is also well known.

The display of supposedly "human qualities" by relatively uncomplicated systems proved a tidbit for those inclined to anthropomorphize AI. It is quite natural that the first to draw attention to the possible harmful consequences of AI bias were the philosophically minded defenders of the Asilomar AI Principles[6]. Of these 23 provisions, some (1 through 18) are perfectly sound, while others (19 through 23), adopted under the influence of Elon Musk, Ray Kurzweil and the late Stephen Hawking, belong, let us say, to general discourse: they stray into the territory of superintelligence and the singularity, which regularly and irresponsibly frighten a naive public.

Natural questions arise: where does AI bias come from, and what can be done about it? It is fair to assume that AI bias is caused not by any intrinsic properties of the models but is a direct consequence of two other kinds of bias, the well-known cognitive bias and the lesser-known algorithmic bias. In the course of training a network they join into a chain, and as a result a third link arises: AI bias.

Three-link bias chain:

  • Developers who create deep learning systems carry cognitive biases.
  • They inevitably transfer these biases to the systems they develop, creating algorithmic biases.
  • In operation, the systems demonstrate AI bias.

Let us start with cognitive biases. Developers of systems based on the principles of deep learning, like all other members of the human race, are carriers of particular cognitive biases. Each person has their own life path and accumulated experience, and so no one can be a carrier of absolute objectivity: individual partiality is an inevitable trait of any individual. Psychologists began to study cognitive bias as an independent phenomenon in the 1970s; in the Russian psychological literature it is usually called cognitive distortion.

"Cognitive distortion is an example of evolutionarily established mental behavior. Some of them have an adaptive function, as they contribute to more efficient actions or faster solutions. Others appear to originate from a lack of appropriate thinking skills or from inappropriate application of skills that have been adaptive in other settings[7]. There are also established directions as cognitive psychology and cognitive behavioural therapy (CBT). For February 2019, about 200 types of various cognitive distortions have been identified.

Partiality and bias are part of human culture. Any artifact created by a person carries certain cognitive biases of its creators. There are many examples of the same action acquiring its own character in different ethnic groups; an illustrative one is the handsaw, which in Europe is pushed away from oneself and in Japan is pulled towards oneself.

Systems built on the principles of deep learning are no exception in this sense: their developers cannot be free of their inherent biases, so they inevitably transfer part of their personality to the algorithms, ultimately generating AI bias. That is, AI bias is not an intrinsic property of AI but a consequence of transferring to the systems qualities inherent in their authors.

The existence of algorithmic bias can hardly be called a discovery. Many years ago Joseph Weizenbaum, better known as the author of ELIZA, the first dialogue program, written back in 1966, was among the first to ponder the threat of "infecting the machine with human predilections." The program's name refers us to Eliza Doolittle, the heroine of Bernard Shaw's "Pygmalion." With it, Weizenbaum made one of the first attempts at the Turing test, though he originally conceived ELIZA as a means of demonstrating that dialogue can be imitated at the most superficial level. It was an academic joke of the highest order. Quite unexpectedly, he discovered that his "conversation with a computer," built as a primitive parody of Carl Rogers's client-centered psychotherapy, was taken seriously by many, including experts, who drew far-reaching conclusions from it.
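The trick is easy to reproduce. Below is a minimal ELIZA-style sketch in Python; the three rules are invented for this illustration and are not Weizenbaum's original script, but they show how a handful of string patterns sustain the illusion of dialogue:

    import re

    # A few invented rules in the spirit of ELIZA's Rogerian script:
    # match a template and echo the captured fragment back as a question.
    RULES = [
        (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
        (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
        (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
    ]

    def reply(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."  # default when nothing matches

    print(reply("I am unhappy at work"))  # How long have you been unhappy at work?
    print(reply("It is raining"))         # Please go on.

The program manipulates character strings with no model of what they mean; taking its output seriously says more about the interlocutor than about the machine.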

Today we call this kind of technology chatbots. Those who believe in their intelligence should be reminded that these programs are no smarter than ELIZA. Weizenbaum, along with Hubert Dreyfus and John Searle, went down in AI history as one of the main critics of claims that an artificial brain, let alone an artificial consciousness comparable to the human one in its capabilities, can be created. In his book "Computer Power and Human Reason" (published in Russian translation in 1982), Weizenbaum warned against identifying natural and artificial reason, basing his argument on a comparative analysis of the fundamental ideas of psychology and on the fundamental differences between human thinking and information processes in a computer. Returning to AI bias, we note that more than thirty years ago Weizenbaum wrote that a program's bias can result both from erroneously used data and from the code of the program itself. If the code is non-trivial, say more than a formula written in Fortran, then it in some way reflects the programmer's ideas about the outside world, so machine results should not be trusted blindly.

In deep learning applications, which are far from trivial in their complexity, algorithmic partiality is all the more possible. It arises when the system reflects the internal values of its authors at the stages of coding and of collecting and selecting the data used to train the algorithms.

Algorithmic partiality arises not only from prevailing cultural, social and institutional notions but also from possible technical limitations. The existence of algorithmic bias conflicts with the intuitive notion, and in some cases with a mystical conviction, that results obtained by processing data on a computer are objective.

A good introduction to the subject of algorithmic bias can be found in The Foundations of Algorithmic Bias[8].

In the article "This is why AI attachments arise and why they are difficult to fight"[9], published in February 2019 in the MIT Review, three points stand out that contribute to the emergence of AI bias. However, oddly enough, they are not associated with cognitive biases, although it is not difficult to notice that they are at the root of all three.

  • Framing the problem. Machine learning methods are usually expected to predict something that has no strict definition. Say a bank wants to assess the creditworthiness of a borrower; this is a very vague concept, and the model's result will depend on how the developers, guided by their personal notions, manage to formalize it.
  • Collecting the data. At this stage there are two possible sources of bias: the data may be unrepresentative, or it may contain prejudices. A well-known precedent, in which a system distinguished light-skinned faces better than dark-skinned ones, arose because light-skinned faces predominated in the training data (the sketch after this list reproduces this effect numerically). An equally well-known error in automated recruiting services, which favored male candidates, arose because they were trained on data suffering from male chauvinism.
  • Preparing the data. Cognitive bias can leak in when selecting the attributes the algorithm will use to evaluate a borrower or a job candidate. No one can guarantee the objectivity of the selected set of attributes.
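The unrepresentative-data problem is easy to demonstrate numerically. In the sketch below (all numbers are invented for the illustration and have nothing to do with any real recognition system), the crudest possible classifier, a single threshold, is tuned on a sample that is 90% group A, and its accuracy is then measured on each group separately:

    import numpy as np

    # Invented illustration of unrepresentative training data.
    rng = np.random.default_rng(42)

    def sample(group, n):
        # One scalar "feature"; the two groups express the target trait
        # around different centers, so one threshold cannot fit both equally.
        center = 1.0 if group == "A" else 2.0
        pos = rng.normal(center, 1.0, n // 2)        # examples with label 1
        neg = rng.normal(center - 1.5, 1.0, n // 2)  # examples with label 0
        x = np.concatenate([pos, neg])
        labels = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])
        return x, labels

    # Skewed training set: 9000 examples from group A, only 1000 from group B.
    xa, ya = sample("A", 9000)
    xb, yb = sample("B", 1000)
    x, y = np.concatenate([xa, xb]), np.concatenate([ya, yb])

    # "Training": pick the single threshold with the best overall accuracy.
    thresholds = np.linspace(x.min(), x.max(), 500)
    best = max(thresholds, key=lambda t: np.mean((x > t) == y))

    for group in ("A", "B"):
        xg, yg = sample(group, 4000)  # fresh test data for each group
        print(group, round(float(np.mean((xg > best) == yg)), 3))
    # Typical output: around 0.77 for group A and noticeably lower for B.

The threshold is tuned almost entirely for the majority group even though the classifier never sees group labels, which mirrors the mechanism behind the face recognition precedent described above.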

It is almost impossible to fight AI bias head-on; the same MIT Technology Review article names the main reasons for this:

  • There are no clear methods for correcting a model. If, for example, a model suffers from gender bias, it is not enough simply to remove the word "woman," since a huge number of other gender-marked words remain. How can they all be detected? (A sketch after this list illustrates the problem.)
  • Standard training practices and models do not take AI bias into account.
  • The creators of the models are representatives of particular social groups and carriers of particular social views; they cannot objectify themselves.
  • And most importantly, it is impossible to define what objectivity is, since computer science has not yet encountered this phenomenon.
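The futility of removing a single word can be shown on a toy example. In the invented mini-corpus below (labels 1 and 0 stand for the applicant's gender; the texts and the banned list are made up for this sketch), the explicit gender words are deleted, yet correlated proxy tokens still separate the classes perfectly:

    import re

    # Invented illustration: deleting the word "woman" does not remove
    # the gender signal, because proxy words survive the deletion.
    resumes = [
        ("captain of the women's softball team", 1),
        ("mentor at a girls' coding camp", 1),
        ("captain of the men's football team", 0),
        ("mentor at a boys' coding camp", 0),
    ]

    BANNED = {"woman", "women", "man", "men"}  # the naive "fix"

    def tokens(text):
        # crude tokenizer: lowercase alphabetic runs, minus banned words
        return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in BANNED}

    female = set().union(*(tokens(t) for t, label in resumes if label == 1))
    male = set().union(*(tokens(t) for t, label in resumes if label == 0))

    # Tokens seen in only one class still encode gender for any classifier:
    print("female-only proxies:", female - male)  # {'softball', 'girls'}
    print("male-only proxies:", male - female)    # {'football', 'boys'}

Any model fit on these tokens can reconstruct gender despite the ban, which is exactly why the article's authors consider word removal hopeless.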

What conclusions can be drawn from the existence of the AI bias phenomenon?

  • The first and simplest conclusion: do not believe those whom the classic of Soviet science fiction Kir Bulychev called talking birds, but read the classics, in this case the works of Joseph Weizenbaum, as well as Hubert Dreyfus and John Searle. This greatly contributes to sobriety of mind and to understanding the role of humans in complex systems.
  • The second conclusion, following from the first: systems built on the principles of deep learning do not possess AI; they are nothing more than a new way, more complex than programming, of using computers as a tool for data analysis. It is possible that the power of present and future computers will allow the conditions and methods of solving problems to be conveyed in forms other than programming. Today it is supervised learning; tomorrow there may be other approaches to machine learning, or something new and more advanced.
  • The third conclusion, perhaps the most important: the computer was and will remain a tool for expanding the intellectual potential of humans, and the main task is not to create an artificial AI mind but to develop systems known as Intelligence amplification, Cognitive augmentation or Machine augmented intelligence. This path has long been well known. Back in 1945, Vannevar Bush wrote an essentially programmatic article that has not aged, "As We May Think." The great cyberneticist William Ross Ashby wrote about the amplification of intelligence. Joseph Licklider, the author of the idea of the Internet, devoted his work to man-computer symbiosis. Practical approaches to enhancing human intellect (from the mouse to the foundations of the human-machine interface) were developed by Douglas Engelbart. These pioneers laid out the high road, and it is the one to follow. What distinguishes them from the much-hyped creators of AI is that everything they conceived works successfully and forms an important part of our lives.
  • The fourth and last conclusion: the detection and analysis of the AI bias phenomenon allow us to assert that artificial intelligence in the form of deep learning possesses no bias of its own, and any incorrectness, as usual, is explained by the human factor.

Notes

  1. In 2017, society started taking AI bias seriously
  2. What is algorithmic bias?
  3. For example: Forbes, AI Bias And The 'People Factor' In AI Development
  4. The Guardian, Even algorithms are biased against black men; The New York Times, The Real Bias Built In at Facebook
  5. The Tay episode proves we're still not ready for true AI
  6. Asilomar AI Principles
  7. List of Cognitive Distortions
  8. The Foundations of Algorithmic Bias
  9. This is how AI bias really happens - and why it's so hard to fix