
Risks of using artificial intelligence

The development of artificial intelligence technologies may bring not only benefits but also harm. This article covers the potential risks of using AI.

Main article: Artificial Intelligence

Advantages and disadvantages of AI versus human

Humans vs. AI: how deep can the integration go, and to what extent can AI replace humans?

The question is extremely important, Spydell Finance wrote in 2023, because the answer determines how far AI can integrate into human spheres of activity and thus fundamentally influence the structure of the labor market, with all the ensuing consequences.

What are the fundamental advantages of AI over humans?

  • Practically unlimited memory and speed of information storage. Human learning is extremely slow, and even after learning, a person loses skills and information every day, so constant focus on the unit of information (the object of study) and upkeep of skills is required. AI only needs to learn something once to keep it in direct access.

  • Information processing speed. Parallel processing of unlimited arrays of information allows almost unlimited scaling of computing power, where mathematical problems can be solved billions of times faster than by an average person. An average person needs about 3,500 minutes to read and comprehend 5 million characters, while AI manages it in a split second (a back-of-the-envelope check of this figure is sketched after this list).

A person's life is not long enough to get acquainted with all the works of world literature (even the main ones), while for AI this takes moments. Even after reading the literature, a person will have forgotten what was in the previous book (at least the main details), while AI remembers everything. In a negligibly small time interval, AI can study all the scientific literature in physics, chemistry, astronomy, biology, history and so on - and not just study it, but remember it in its original form down to the smallest details.

  • Accuracy and objectivity. AI does not make mistakes, at least as long as the algorithm embedded in it is correct. A person errs constantly because of limited abilities to retain, process and interpret information. A person is prone to bias; AI reproduces information on an "as is" basis.

  • Information transfer. A correct research direction found by one AI instance is instantly broadcast to the entire AI network, which at once extends the knowledge of one instance to all of them. The discovery of one person or group of scientists cannot be instantly disseminated to everyone concerned. AI can be scaled, copied and cloned, but one person's knowledge cannot be transplanted into another.

  • No fatigue. A person's productivity and efficiency decline as their resources are depleted, both within a day and with age. AI can work 24/7 at constant efficiency, stably and without failures (as long as the servers are running). A person ages and cognitive functions weaken, while AI's capabilities only grow.

  • Continuous learning. A person has to change the type of activity to maintain the necessary emotional balance, while AI expands its capabilities continuously.

  • No emotionality. AI is not subject to mood swings; it does not demand a pay raise or respect, does not insist on justice, does not reflect on its level of freedom, does not feel pity, pain or fatigue, does not weave intrigues or conspiracies, and does not drop out of the workflow because "something urgent suddenly came up."
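
The 3,500-minute figure above is easy to sanity-check. A minimal sketch in Python, assuming a reading speed of about 230 words per minute and about 6.2 characters per word - both numbers are our assumptions, not the article's:

```python
# Back-of-the-envelope check of the "about 3,500 minutes" claim.
# Assumed inputs: ~230 words/min reading speed, ~6.2 characters per word.
chars_total = 5_000_000
words_per_min = 230
chars_per_word = 6.2

chars_per_min = words_per_min * chars_per_word   # ~1,426 characters per minute
minutes = chars_total / chars_per_min
print(f"{minutes:.0f} minutes (~{minutes / 60:.0f} hours)")
# -> 3506 minutes (~58 hours), consistent with the claim above
```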

The disadvantages are few, but they exist:

  • Difficulty understanding the context of information (fixable over time);

  • Lack of empathy, which creates ethical problems if AI is granted too many rights;

  • Limited space for creativity and innovation due to fundamental built-in limitations on understanding "what is good and what is bad."

AI is able to replicate successful creative experiments based on analysis of patterns and preferences, but is AI able to create fundamentally new products? Not yet.

Is AI capable of unstructured reasoning and decision-making, where intuition can be an important element? Not for now.

There are many limitations, but as of 2024 the balance is still firmly in AI's favor.

How robots replace humans

Main article: How robots replace humans

2024

An OpenAI model used in hospitals turned out to be prone to hallucinations

Generative artificial intelligence models are prone to producing incorrect information. Surprisingly, this problem has also affected automatic transcription, where the model must accurately reproduce an audio recording. Software engineers, developers and scientists are seriously concerned about the transcriptions produced by OpenAI's Whisper, Hightech+ reported on October 28, 2024, citing the Associated Press. Read more here.
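
Whisper itself exposes per-segment confidence signals that can be used to flag passages for human review before they enter, say, a medical record. A minimal sketch using the open-source openai-whisper package; the input file name is hypothetical, and the thresholds are the library's default heuristics used here purely for illustration:

```python
# Sketch: flag Whisper segments that look statistically suspicious
# so a human can review them. pip install openai-whisper
import whisper

model = whisper.load_model("base")
result = model.transcribe("consultation.mp3")  # hypothetical input file

for seg in result["segments"]:
    # Low average log-probability or a highly repetitive segment
    # (high compression ratio) often accompanies hallucinated text.
    suspicious = seg["avg_logprob"] < -1.0 or seg["compression_ratio"] > 2.4
    if suspicious:
        print(f"[{seg['start']:7.1f}s-{seg['end']:7.1f}s] REVIEW: {seg['text']}")
```

A screening step like this does not fix the underlying model; it only narrows down where a human transcriptionist should listen again.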

The Ministry of Internal Affairs warned of fraud with fake orders of the FSB

The Ministry of Internal Affairs of Russia reported a fraudulent scheme in which attackers use fake FSB orders. Acting on behalf of a company's head, they contact its employees and report that the FSB of Russia has launched an audit of them over a possible violation of the law. This was announced on October 8, 2024 by the press service of Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications. Read more here.

The Ministry of Digital Development of the Russian Federation created a consortium on the safety of artificial intelligence

The Ministry of Digital Development, Communications and Mass Media of Russia has created a consortium tasked with ensuring information security in the field of artificial intelligence (AI). As became known in August 2024, the new association will include about 10 leading companies and 5 higher educational institutions engaged in the development and research of AI technologies. Read more here.

Nine main risks of generative AI named

Generative artificial intelligence (GenAI) marks a significant leap in the ability of neural networks to understand, interact with, and create new content from complex data structures. The technology opens up opportunities across a wide variety of industries. At the same time, it creates new risks, as stated in IDC materials published on July 10, 2024.

IDC notes that GenAI has many uses in various fields: marketing, customer interaction, productivity gains, production planning, quality control, AI-assisted maintenance, program code generation, supply chain management, retail, medical data analysis and much more. Companies in all market segments are integrating GenAI into business operations and products, often driven by the need to meet business expectations and stay competitive. However, as noted, a hasty introduction of GenAI can turn into serious problems. In particular, there is a possibility of leaks of personal or confidential data. Incorrect GenAI output can lead to legal problems and damage to the brand's reputation. The authors of the review name nine main risks of introducing and using GenAI:

  • Data poisoning (a neural network can invent numbers and facts and create fake objects or attributes);
  • Bias and limited explainability;
  • Threat to brand reputation;
  • Copyright infringement;
  • Cost overruns;
  • Environmental impact;
  • Management and security issues;
  • Integration and interaction issues;
  • Litigation and compliance.

The main risks of GenAI named

Some of the listed risks can be reduced by labeling (including hidden labeling of) content produced with GenAI. In addition, specialized services can be created to check material for content generated by a neural network. A responsible approach to the use of GenAI is also required.
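
As an illustration of what "hidden labeling" can mean in practice, here is a deliberately simple sketch that embeds an invisible zero-width Unicode marker into generated text. It is a toy: the marker is trivially stripped, whereas production watermarking schemes embed statistical signals in the token choices themselves.

```python
# Toy illustration of hidden labeling of AI-generated text.
# A zero-width (invisible) Unicode marker is prepended to the text;
# a checking service then looks for that marker.
ZW_MARK = "\u200b\u200c\u200b"  # zero-width space / non-joiner / space

def label_generated(text: str) -> str:
    """Attach the invisible marker to AI-generated text."""
    return ZW_MARK + text

def looks_generated(text: str) -> bool:
    """Check a document for the hidden marker."""
    return text.startswith(ZW_MARK)

labeled = label_generated("Quarterly report draft...")
print(looks_generated(labeled))               # True
print(looks_generated("Human-written text"))  # False
```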

As noted in the IDC study, a significant share of CEOs (45%) and chief information officers (66%) believe that technology providers are not fully aware of the risks associated with GenAI. Analysts therefore believe that issues of privacy, information protection and security should be studied carefully. Attention also needs to be paid to which datasets an AI model was trained on. In general, according to IDC, GenAI risk management requires a comprehensive understanding of AI maturity in the organization, a balanced approach and a thorough assessment of technology providers. By addressing these issues and using the necessary infrastructure, organizations will be able to maximize the benefits of GenAI while minimizing the risks.

At the same time, IDC believes that GenAI will eventually lead to fundamental changes in companies. According to IDC president Crawford Del Prete, by 2027 GenAI will account for up to 29% of organizations' overall AI spending. Most companies are expected to take a hybrid approach to building their AI infrastructure, that is, use third-party solutions optimized for their own needs while also developing their own AI tools for specific purposes. It is estimated that by 2027 organizations around the world will spend about $150 billion on GenAI technologies, with a total economic effect of $11 trillion.[1]

US authorities insist on demonopolization of the AI technology market

The top antitrust enforcer of the United States will "urgently" examine the AI sector, fearing that power over the transformative technology is being concentrated in a few players with large capital.

In June 2024, Jonathan Kanter insisted on "meaningful intervention" in the concentration of power in the artificial intelligence sector.

OpenAI staff want protection to speak out about 'serious risks' of AI

Current and former employees of OpenAI and Google DeepMind said in June 2024 that "broad confidentiality agreements prevent us from raising our concerns." As long as there is no effective government oversight of these corporations, current and former employees are among the few who can hold them accountable to the public.

Fraudsters using deepfakes forge documents of Russians

Fraudsters have learned to forge citizens' documents using artificial intelligence (AI) technologies. As before, when creating fake digital copies they either change the numbers or try to pass off an invalid document as valid, but now deepfakes are also used for authentication and data synthesis. This information was shared with TAdviser on May 8, 2024 by the press service of State Duma deputy Anton Nemkin, with reference to Izvestia. Read more here.

Ministry of Economic Development creates a commission to investigate AI incidents

In mid-April 2024, information appeared that the Ministry of Economic Development of the Russian Federation was creating a special commission to investigate incidents related to the use of artificial intelligence. The new structure will also regulate property rights to the results of intellectual activity obtained using AI.

According to the Vedomosti newspaper, referring to the information provided by representatives of the Ministry of Economic Development, changes are being made to the bill "On experimental legal regimes (EPR) in the field of digital innovation" (258-FZ). In total, more than 20 amendments have been prepared. In particular, we are talking about reducing the list of documents provided when submitting an initiative proposal for EPR and reducing the timing of approval due to the optimization of procedures.

Ministry of Economic Development of the Russian Federation creates a special commission to investigate incidents related to the use of artificial intelligence

The idea is that it will become possible to set up an EPR faster. Boris Zingerman, general director of the National Medical Knowledge Base association of AI developers and users in medicine, notes that if insurance for EPR-related incidents is introduced, a special commission will assess insurance claims.

"There are few EPRs, and the process of considering them is slow, because departments are afraid of these experiments on the grounds that problems may arise. To make EPRs move faster, they are trying to devise a mechanism that can protect against such incidents, but how this will work in practice is not entirely clear," says Zingerman.

At the same time, Senator Artem Sheikin emphasizes that an EPR participant will have to maintain a register of persons associated with the technology who bear responsibility when solutions created with AI are used. In the event of an AI incident, the EPR subject is obliged to provide the commission, within two working days, with the documents needed to investigate the causes and identify those responsible. The commission will then prepare an opinion on the causes of the incident, the circumstances indicating the guilt of specific persons, and the measures needed to compensate for the harm.[2]

Artificial intelligence began to be used to forge documents

The OnlyFake website has appeared on the Internet, allowing any user to create a photo of fake documents, Izvestia reported. At the same time, there is no information anywhere about the service's creators. This was announced on February 13, 2024 by the press service of State Duma deputy Anton Nemkin. Read more here.

2023

The Central Bank of Russia listed the main risks of the introduction of artificial intelligence

At the end of September 2023, the Bank of Russia named the main risks in the introduction of artificial intelligence. The main among them, as follows from the presentation of the State Secretary - Deputy Chairman of the Central Bank Alexei Guznov, are:

  • The likelihood of monopolization among major technology players. Supporting AI requires large investments in computing power, data-processing infrastructure, professional personnel and so on. Guznov noted that only companies able to make such investments will get results from AI, which will cause "distortions" in the market;
  • Risk of leakage of information that is used for AI training;
  • The risk of biased or discriminatory decisions, given that an AI model issues decisions based on certain factors and embedded algorithms. "For the most part, this is not our problem. It is now being comprehended as a philosophical, if you like, problem of combining human intelligence and artificial intelligence," Guznov said. He also noted that problems may arise when artificial intelligence communicates with consumers.

The Central Bank named the main risks of the introduction of AI

The Bank of Russia plans to issue an advisory report on artificial intelligence by the end of 2023, covering the application and regulation of AI in finance, said Olga Skorobogatova, First Deputy Chairman of the Central Bank, in early September 2023. The Bank of Russia also intends to create an AI competence center. The regulator is primarily interested in the security of data and customer operations. Based on public discussion, the Central Bank will then decide whether AI regulation is needed.

According to Alexei Guznov, as of the end of September 2023 the Central Bank is not planning any radical steps in regulating the use of artificial intelligence, but "the question is on the table."[3]

The Ministry of Economic Development has developed a mechanism for protecting against harm caused by AI technologies

In December 2023, the Ministry of Economic Development of the Russian Federation announced the development of a mechanism for protecting against harm caused by artificial intelligence. Amendments were made to the law "On experimental legal regimes in the field of digital innovation."

Participants in experimental legal regimes (EPR) in the field of innovative development will be required to take out insurance covering liability for the negative effects of AI technologies.

The Ministry of Economic Development has developed a mechanism for protecting against harm caused by artificial intelligence
"It is important that the testing of even such a complex tool as AI be safe and consistent, so that it can be established in which industries and business processes it can be used effectively, and which legal mechanisms can ensure this," the press service of the Ministry of Economic Development told RBC.

The publication lists the main innovations initiated by the Ministry of Economic Development:

  • subjects of the experimental legal regime (mainly legal entities, but possibly also government agencies, including regional ones) will be obliged to maintain a register of persons who have entered into legal relations with them, and this register will have to contain information about those responsible for the use of AI-based decisions. In addition, the bill proposes to oblige companies to maintain a register of the results of intellectual activity created with AI, indicating their copyright holder;
  • the register will list the persons directly working with AI technologies who "in case of emergency will be responsible for the improper operation of such technologies," according to the accompanying materials to the amendments;
  • participants in "digital sandboxes" will be required to insure their civil liability for harm caused to the life, health or property of others as a result of the use of AI. The program of the experimental regime (an act of special regulation setting out the regime's conditions) will have to contain requirements for such insurance, including the minimum insured amount and a list of risks and insured events.[4]

Famous cryptographer warned of the risk of using AI for mass spying

The famous American cryptographer Bruce Schneier[5] published a post on his blog entitled "AI and Mass Spying." In it, he explains the difference between surveillance (collecting data about a person) and spying, which aims to establish the context behind the specific actions of an individual person.

"If I were to hire a private detective to spy on you, that detective could bug your home and car, tap your phone and listen to what you're saying," Bruce Schneier explained[6]. "At the end, I would get a report of all the conversations you had and their contents. If I were to hire the same private detective to put you under surveillance, I would get a different report: where you went, who you talked to, what you bought, what you did."

According to him, the Internet has made surveillance of a person simple, and it is almost impossible to escape it, since most human activity in the modern world leaves traces on the Internet or in various databases. Moreover, big data technologies have made it possible to analyze the accumulated information and draw conclusions from it.

"Mass surveillance has fundamentally changed the nature of surveillance," says Bruce Schneier. "Since all the data is saved, mass surveillance allows surveillance to be conducted into the past, without even knowing in advance whom you want to target. Tell me where this person was last year. List all the red sedans that drove down this road in the past month. List all the people who bought all the ingredients for a pressure-cooker bomb in the past year. Find me all the pairs of phones that moved toward each other, switched off, and then an hour later turned on again while moving away from each other (a sign of a secret meeting)."
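
Schneier's "pairs of phones" example is technically mundane: it reduces to a self-join over a location log. A minimal pandas sketch of the simpler co-location part of the query; the column names and data are invented for illustration:

```python
# Sketch: "find phones that met" as a self-join over a location log.
# The schema (device_id, tower_id, hour) and the rows are made up.
import pandas as pd

log = pd.DataFrame({
    "device_id": ["A", "B", "C", "A", "B"],
    "tower_id":  [17,  17,  42,  42,  42],
    "hour":      ["2024-05-01 14:00"] * 2 + ["2024-05-01 15:00"] * 3,
})

# Pairs of distinct devices registered on the same tower in the same
# hour - candidate "meetings" in Schneier's sense.
pairs = log.merge(log, on=["tower_id", "hour"])
pairs = pairs[pairs["device_id_x"] < pairs["device_id_y"]]
print(pairs[["device_id_x", "device_id_y", "tower_id", "hour"]])
```

The "switched off and reappeared moving apart" refinement is just a few more conditions on the same log, which is exactly Schneier's point: at scale, such queries are cheap.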

American cryptographer Bruce Schneier with his book

However, until recently, using technology to spy on everyone was difficult, because understanding the context of someone's actions required a human who could follow the sequence of events and draw conclusions about their purpose. Artificial intelligence removes this limitation: AI can independently build a coherent picture from a sequence of human actions and infer their purpose. Using AI to analyze the information accumulated in various databases will therefore make it possible to organize mass spying.

"Mass spying will change the nature of espionage," the well-known cryptographer warns. "All data will be saved. All of it will be searchable and understandable in bulk. Tell me who has talked about a particular topic in the past month, and how the discussions of that topic have evolved. Person A did something - check whether anyone told them to do it. Find everyone who is plotting a crime, spreading a rumor or planning to attend a political protest.

And that's not all. To uncover an organizational structure, find someone who gives similar instructions to a group of people, and then all the people they passed those instructions on to. To find people's confidants, see whom they tell secrets to. You can track friendships and alliances as they form and break, in minute detail. In short, you can know everything about what everybody is talking about."

Of course, Bruce Schneier, as an American, primarily fears the use of mass spying by the state to identify protest sentiment and the leaders of dissenting opinion, citing the spyware developer NSO Group and the Chinese government as examples. He hints that large corporations and technology monopolies will not resist the temptation to use mass-spying technologies for targeted marketing of their products and for shaping offers that are impossible to refuse. What he does not mention at all is that criminals can do the same to optimize their fraud and phishing activity. Today fraudsters waste a great deal of effort and money calling everyone indiscriminately; with mass-spying technologies they will be able to select the highest-priority targets promising the most "income." Such technologies are already being developed, and data is being accumulated on which to train criminal artificial intelligence.

"We could limit this capability," Bruce Schneier concludes. "We could ban mass spying. We could adopt strict data-privacy rules. But we have done nothing to limit mass surveillance. Why would spying be any different?"

2022

US presidential administration issues 5 provisions to protect people from AI

On October 7, 2022, the White House Office of Science and Technology Policy (OSTP) issued five provisions to guide the design, use and deployment of automated systems. The document comes as ever more voices call for measures to protect people from the technology as artificial intelligence develops. The danger, according to experts, is that neural networks easily become biased, unethical and dangerous.

The White House Office of Science and Technology Policy issued five provisions that should guide the design, use and deployment of automated systems
  • Safe and effective systems

Users must be protected from unsafe or ineffective systems. Automated systems should be developed in consultation with diverse communities, stakeholders and domain experts to identify the problems, risks and potential impacts of the system. Systems must be tested before deployment to identify and mitigate risks, and continuously monitored to demonstrate their safety and effectiveness.

  • "'Protection against algorithmic discrimination

Users should not face discrimination from algorithms, and systems should be used and developed on the principle of equality. Depending on the specific circumstances, algorithmic discrimination may violate legal protections. Designers, developers and implementers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems on an equitable basis (a toy fairness check is sketched after this list).

  • Data privacy

Users must be protected from data misuse through built-in safeguards and must have control over how their data is used. Designers, developers and implementers of automated systems must seek the user's permission and respect their decisions regarding the collection, use, access, transfer and deletion of their data in appropriate ways and to the maximum extent possible; where this is not possible, alternative privacy-by-design safeguards should be used.

  • "'Notice and Clarification

Users need to know that an automated system is being used and to understand how and why it contributes to outcomes that affect them. Designers, developers and implementers of automated systems should provide plain-language public documentation, including a clear description of the overall functioning of the system and the role automation plays, notice that such systems are in use, the person or organization responsible for the system, and explanations of outcomes that are clear, timely and accessible.

  • "'Human alternatives, decision-making and back-up

Users should be able to opt out where appropriate and have access to a person who can quickly review and resolve any problems that arise. Users should be able to abandon automated systems in favor of a human alternative where appropriate.[7]
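
The "algorithmic discrimination" principle presumes the disparity can be measured. One common first-pass metric is demographic parity - comparing a system's positive-decision rates across groups. A minimal sketch with invented numbers:

```python
# Sketch: demographic-parity check for an automated decision system.
# The decisions and group labels below are invented for illustration.
from collections import defaultdict

decisions = [  # (group, approved?)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved, total = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += ok

rates = {g: approved[g] / total[g] for g in total}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap in selection rates is a red flag warranting the proactive
# review the principle calls for - not by itself proof of discrimination.
print(f"selection-rate gap: {max(rates.values()) - min(rates.values()):.2f}")
```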

Former head of Google Eric Schmidt creates a fund to solve the "key" problems of AI and its bias

On February 16, 2022, it was reported that former Google CEO Eric Schmidt had announced the creation of a charitable fund with total capital of $125 million to support research in the field of artificial intelligence. First of all, this means research aimed at solving the fundamental problems that arise when artificial intelligence technologies are used, including bias (the AI bias phenomenon - TAdviser note), possible harm and abuse. The list also includes geopolitical conflicts and the scientific limitations of the technology itself. Read more here.

2019: Sexism and the chauvinism of artificial intelligence. Why is it so difficult to overcome it?

At the heart of everything that constitutes AI practice (machine translation, speech recognition, natural-language processing, computer vision, driving automation and much more) is deep learning. It is a subset of machine learning distinguished by the use of neural network models, which can be said to mimic the workings of the brain, so only with some strain can they be attributed to AI. Any neural network model is trained on large data sets and thus acquires certain "skills," but how it uses them remains unclear to its creators, which ultimately becomes one of the most important problems for many deep learning applications. The reason is that such a model works with patterns formally, without any understanding of what it is doing. Can such an AI system be trusted, and can systems built on machine learning be trusted? The significance of the answer to that question goes beyond scientific laboratories, and media attention to the phenomenon known as AI bias has noticeably intensified. Read more here.
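
Part of why such bias is hard to overcome is that simply deleting the sensitive attribute from the training data does not help: the model reconstructs it from correlated "proxy" features. A minimal scikit-learn sketch on synthetic data (every number here is invented):

```python
# Sketch: a model trained WITHOUT the sensitive attribute still
# discriminates through a correlated proxy feature. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # sensitive attribute, hidden from the model
proxy = group + rng.normal(0, 0.3, n)     # e.g., a ZIP code strongly tied to group
skill = rng.normal(0, 1, n)               # legitimate feature
# Historically biased labels: group 1 was approved less often at equal skill.
label = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, proxy])       # note: 'group' itself is NOT a feature
pred = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
# The gap persists because 'proxy' lets the model reconstruct 'group'.
```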

2017: Risk of Destruction of Humanity

British scientist Stephen Hawking repeatedly spoke of the development of artificial intelligence (AI) as a possible cause of the destruction of the human race.

In April 2017, Stephen Hawking, during a video conference in Beijing held as part of the Global Mobile Internet Conference, said:

"The development of artificial intelligence can be both the most positive and the most terrible factor for humanity. We must be aware of the danger it poses, "he stressed[8] of[9]

As the scientist said in an interview with Wired at the end of November 2017, he fears that AI could replace humans altogether.

According to Hawking, people could create artificial intelligence so powerful that it becomes extremely good at achieving its goals. And if those goals do not coincide with human goals, people will be in trouble, the scientist believes. Read more here.

Notes