Sexism and chauvinism in artificial intelligence: why is it so difficult to overcome?
What is "AI bias"? Where does this phenomenon come from, and how can it be fought? In this material, prepared especially for TAdviser, journalist Leonid Chernyak answers these questions.
At the heart of nearly all practical AI (machine translation, speech recognition, natural language processing, computer vision, driving automation and much more) lies deep learning. It is a subset of machine learning distinguished by its use of neural network models; since these models can be said to imitate the workings of the brain, they are, with considerable reservation, classified as AI. A neural network model is trained on large data sets and thereby acquires certain "skills", but how it uses them remains unclear to its creators, which has become one of the major problems for many deep learning applications. The reason is that such a model operates on representations purely formally, without any understanding of what it is doing. Is such a system really AI, and can systems built on machine learning be trusted? The significance of the answer to the latter question extends far beyond research laboratories.
That is why, over the past few years, mass media attention to the phenomenon known as AI bias has grown markedly sharper. The term can be rendered as the "bias" or "prejudice" of AI. The racism and sexism inherent in AI are written about not only by professional publications but also by popular magazines and newspapers such as Forbes, The Guardian and The New York Times.
The high level of interest in AI bias is explained by the fact that the results of deploying AI technologies in some cases touch the core values of modern society, manifesting as violations of such important principles as racial and gender equality.
Outwardly, AI bias shows itself in the fact that many analytical systems built on deep learning unexpectedly display a tendency to produce, so to speak, prejudiced outputs, which can subsequently lead to wrong decisions made on their basis. Decisions suffering from AI bias have provoked public outrage over the unfairness of certain actions of the US penal system toward African Americans, caused by errors in the face recognition of ethnic minorities. The scandal surrounding Microsoft's launch of the chatbot Tay, soon replaced by Zo, is well known.
The display of allegedly "human qualities" by rather simple systems was a tidbit for those inclined to anthropomorphize AI. Quite naturally, AI bias, as the first of the possible harmful effects, attracted the attention of the philosophizing defenders of the "Asilomar AI Principles". Of those 23 provisions, some (1 through 18) are entirely sensible, but the others (19 through 23), adopted under the influence of Elon Musk, Ray Kurzweil and the late Stephen Hawking, have, so to speak, a popular-colloquial character: they reach into the realm of superintelligence and the singularity, with which the naive public is regularly and irresponsibly frightened.
Natural questions arise: where does AI bias come from, and what can be done about it? It is fair to assume that AI bias is not caused by any intrinsic property of the models but is a direct consequence of two other kinds of bias: the well-known cognitive bias and the less-known algorithmic bias. During network training they link into a chain, and as a result a third link appears: AI bias.
The three-link chain of biases:
- The developers creating deep learning systems are carriers of cognitive biases.
- They inevitably transfer these biases into the systems they develop, creating algorithmic biases.
- In operation, the systems exhibit AI bias.
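The three links above can be sketched in a few lines of code. This is a purely illustrative toy (the data set, groups and numbers are invented, not drawn from any real system): a "model" that merely learns hire rates from prejudiced historical records faithfully reproduces the prejudice at prediction time.

```python
# Toy illustration of the three-link chain of biases. All numbers invented.
from collections import Counter

# Link 1: historical hiring records reflect their authors' cognitive bias
# (here: men were hired far more often than equally qualified women).
history = [("male", "hired")] * 80 + [("male", "rejected")] * 20 \
        + [("female", "hired")] * 20 + [("female", "rejected")] * 80

# Link 2: a naive model trained on these records simply learns the hire
# rate per group -- the algorithmic bias is now baked into the model.
counts = Counter(history)

def hire_rate(group):
    hired = counts[(group, "hired")]
    total = hired + counts[(group, "rejected")]
    return hired / total

# Link 3: in use, the model reproduces the prejudice as "AI bias".
print(hire_rate("male"))    # 0.8
print(hire_rate("female"))  # 0.2
```

Nothing in the model itself is "sexist"; it merely summarizes the records it was given, which is exactly the point of the chain.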
Let us begin with cognitive bias. Developers of deep learning systems, like all other representatives of the human race, are carriers of one or another cognitive bias. Each person has their own course of life and accumulated experience, and therefore cannot be a carrier of absolute objectivity. Individual bias is an inevitable trait of any personality. Psychologists began to study cognitive bias as an independent phenomenon in the 1970s; in the Russian psychological literature it is customarily called "cognitive distortion".
"Cognitive distortions are an example of evolutionarily developed mental behavior. Some of them perform an adaptive function, since they promote more effective action or faster decisions. Others apparently stem from a lack of appropriate thinking skills, or from the inappropriate application of skills that were adaptive under other conditions." Entire disciplines have grown up around the subject, such as cognitive psychology and cognitive behavioral therapy (CBT). As of February 2019, about 200 types of cognitive distortions had been identified.
Biases and prejudices are part of human culture. Any artifact created by a person carries some of the cognitive biases of its creators. Many examples can be given of the same action acquiring its own distinct character in different cultures; an illustrative example is the use of a hand plane: in Europe it is pushed away from oneself, while in Japan it is pulled toward oneself.
Systems built on the principles of deep learning are no exception in this sense: their developers cannot be free of the biases inherent in them and therefore inevitably transfer part of their personality into the algorithms, ultimately generating AI bias. Thus AI bias is not an intrinsic property of AI but a consequence of the transfer into these systems of the qualities inherent in their authors.
The existence of algorithmic bias can hardly be called a discovery. Joseph Weizenbaum, best known as the author of ELIZA, the first program capable of carrying on a dialogue, written by him in 1966, first pondered the threat of possibly "infecting the machine with human prejudices" many years ago. The program's name refers us to Eliza Doolittle, the heroine of Bernard Shaw's Pygmalion. With it, Weizenbaum made one of the first attempts to pass the Turing test, though he originally conceived ELIZA as a means of demonstrating the possibility of simulating dialogue at the most superficial level. It was an academic prank of the highest order. Completely unexpectedly for him, he discovered that many people, including specialists, took his "conversation with the computer", which rested on a primitive parody of the principles of Carl Rogers's client-centered psychotherapy, quite seriously and drew far-reaching conclusions from it.
Today we call such technologies chatbots. Those who believe in their intelligence should be reminded that these programs are no smarter than ELIZA. Weizenbaum, along with Hubert Dreyfus and John Searle, entered the history of AI as one of the chief critics of claims about the possibility of creating an artificial brain, let alone an artificial consciousness comparable in its capabilities to the human one. In his book Computer Power and Human Reason, translated into Russian in 1982, Weizenbaum warned against identifying natural intelligence with artificial intelligence, basing his argument on a comparative analysis of the fundamental notions of psychology and on the existence of fundamental differences between human thought and information processes in a computer. Returning to AI bias, we note that more than thirty years ago Weizenbaum wrote that a program's bias can be a consequence of erroneously used data and of the peculiarities of the program's code. If the code is non-trivial (say, not a formula written in Fortran), then such code in one way or another reflects the programmer's ideas about the outside world, and therefore machine results should not be trusted blindly.
In deep learning applications, which are anything but trivial in their complexity, algorithmic bias is especially likely. It arises when a system reflects the values of its authors at the stages of coding and of collecting and selecting the data used to train the algorithms.
Algorithmic bias arises not only from prevailing cultural, social and institutional notions, but also from possible technical limitations. The existence of algorithmic bias contradicts the intuitive notion, and in some cases the almost mystical conviction, that results obtained by processing data on a computer are objective.
A good introduction to the subject of algorithmic bias can be found in the article The Foundations of Algorithmic Bias.
The article "This is how AI bias really happens — and why it's so hard to fix", published in February 2019 in MIT Technology Review, identifies three factors contributing to the emergence of AI bias. Strangely enough, they are not explicitly linked to cognitive biases, although it is easy to see that cognitive biases lie at the root of all three.
- Framing the problem. The difficulty is that machine learning is usually asked to estimate something that has no strict definition. Say a bank wants to determine the creditworthiness of a borrower; but this is a very fuzzy concept, and the model's output will depend on how well the developers, guided by their personal notions, manage to formalize this quality.
- Collecting the data. At this stage there can be two sources of bias: the data may be unrepresentative, or it may contain prejudices. A well-known precedent, in which a system recognized light-skinned faces better than dark-skinned ones, was connected with the fact that light-skinned faces were more numerous in the source data. An equally well-known error in automated recruiting services, which gave preference to men, was connected with the fact that they were trained on data suffering from male chauvinism.
- Preparing the data. Cognitive bias can seep in when choosing the attributes the algorithm will use in assessing a borrower or a job candidate. Nobody can guarantee the objectivity of the chosen set of attributes.
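The "collecting the data" failure mode can be shown with a deliberately artificial sketch (groups, feature values and the acceptance rule are all invented for illustration): a model fit on a sample dominated by one group works well for that group and poorly for the under-represented one.

```python
# Toy illustration of bias from an unrepresentative training sample.
# All values are invented; the "model" is deliberately trivial.

# One numeric feature per example; group B is barely present in the data.
train = [("A", 1.0)] * 90 + [("B", 5.0)] * 10

# The "model" is just the global mean of the training feature -- it is
# dominated by group A because of the skewed sample.
mean = sum(x for _, x in train) / len(train)   # 1.4

# At prediction time, accept anyone whose feature is close to the mean.
def accepted(x, tolerance=1.0):
    return abs(x - mean) <= tolerance

print(accepted(1.0))  # True:  a typical group-A input fits the model
print(accepted(5.0))  # False: a typical group-B input is rejected
```

Nothing here requires malice; a skewed sample alone is enough to make the system systematically serve one group worse than the other.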
Fighting AI bias "head-on" is impracticable; the same MIT Technology Review article states the basic reasons why:
- There are no clear methods for correcting a model. If, for example, a model suffers from gender bias, it is not enough simply to delete the word "woman", because a huge number of gender-associated words remain. How can they all be detected?
- Standard training practices and models do not take AI bias into consideration.
- The creators of models are representatives of particular social groups and carriers of particular social views; they cannot be made objective.
- Most importantly, it is not yet clear what objectivity even is, since computer science has not had to deal with this phenomenon before.
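The first point, that deleting the word "woman" is not enough, can be illustrated with a toy linear text scorer (the vocabulary and its weights are entirely hypothetical, invented for this sketch): removing the explicit gender term leaves correlated "proxy" words carrying the same signal.

```python
# Toy linear scorer for resume text. Weights are hypothetical integers
# learned, in this fiction, from gender-biased historical data.
weights = {
    "woman": -9,        # explicit gender term
    "softball": -4,     # "proxy" terms that co-occur with it in the data
    "cheerleader": -5,
    "engineer": +7,
}

def score(words, banned=()):
    """Sum the learned weights of all words not on the banned list."""
    return sum(weights.get(w, 0) for w in words if w not in banned)

resume = ["woman", "softball", "engineer"]
print(score(resume))                    # -6: openly biased score
print(score(resume, banned={"woman"}))  # 3: the proxy penalty remains
```

Even with "woman" censored, the candidate is still penalized through "softball"; finding and neutralizing every such proxy is exactly the hard part.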
What conclusions can be drawn from the existence of the AI bias phenomenon?
- The first and simplest conclusion: do not believe those whom the classic of Soviet science fiction Kir Bulychev called talking birds, and read the classics instead, in this case the works of Joseph Weizenbaum, and also Hubert Dreyfus and John Searle. This greatly contributes to sobriety of mind and to an understanding of the human role in complex systems.
- The second conclusion, following from the first: systems built on deep learning do not possess AI; they are nothing other than a new method, more complex than programming, of using computers as a tool for data analysis. It is possible that the power of present and future computers will make it possible to convey the conditions and methods of solving problems in forms other than programming. Today this is supervised learning; tomorrow there may be other approaches to machine learning, or something newer and more refined.
- The third and perhaps most important conclusion: the computer was and will remain a tool for expanding the intellectual potential of the person, and the main task is not to create artificial intelligence but to develop systems that go by the names intelligence amplification, cognitive augmentation, or machine-augmented intelligence. This path is well known and long established. In 1945 Vannevar Bush wrote his programmatic article "As We May Think", which in essence has not become outdated. The great cyberneticist William Ross Ashby wrote about the amplification of intelligence. Joseph Licklider, the originator of the idea of the Internet, devoted his works to human-computer symbiosis. Practical approaches to augmenting human intelligence (from the mouse to the foundations of human-computer interfaces) were developed by Douglas Engelbart. These pioneers laid out the highway, and that is the road to travel. What distinguishes them from the popular creators of AI is that everything they conceived works successfully and forms an important part of our life.
- The fourth and last conclusion. The detection and analysis of the AI bias phenomenon allows us to assert that artificial intelligence in the form of deep learning has no bias of its own; the incorrectness, as usual, is explained by the human factor.
- ↑ Engadget, In 2017, society started taking AI bias seriously; or TechTalks, What is algorithmic bias?
- ↑ Forbes, AI Bias And The 'People Factor' In AI Development
- ↑ The Guardian, Even algorithms are biased against black men
- ↑ The New York Times, The Real Bias Built In at Facebook
- ↑ TechTalks, The Tay episode proves we're still not ready for true AI (https://bdtechtalks.com/2016/04/01/the-tay-episode-proves-were-still-not-ready-for-true-ai/)
- ↑ The Asilomar AI Principles (http://robopravo.ru/azilomarskiie_printsipy_ii)
- ↑ List of cognitive distortions
- ↑ The Foundations of Algorithmic Bias
- ↑ MIT Technology Review, This is how AI bias really happens — and why it's so hard to fix