Risks of using artificial intelligence
The future development of artificial intelligence technologies can bring not only benefits but also harm. This article covers the potential risks of using AI.
Main article: Artificial Intelligence
Advantages and disadvantages of AI compared to humans
Humans vs. AI: how deep can the integration go, and to what extent can AI replace humans?
The question is extremely important, Spydell Finance wrote in 2023, because it determines how far AI can integrate into human spheres of activity and, therefore, how fundamentally it will affect the structure of the labor market, with all the ensuing consequences.
What are the fundamental advantages of AI over humans?
- Unlimited memory and storage speed. A person learns extremely slowly, and even after learning loses skills and information every day, so constant focus on the unit of information (the object of study) and maintenance of skills is required. AI needs to learn something only once to keep it in direct access.
- Information processing speed. Parallel processing of unlimited arrays of information allows almost unlimited scaling of computing power, so mathematical problems can be solved billions of times faster than by the average person. The average person needs about 3,500 minutes to read and comprehend 5 million characters, while AI handles them in a fraction of a second.
If a person does not have enough of a lifetime to get acquainted with all the works of world literature (even the main ones), for AI this takes moments. Even after reading the literature, a person will have forgotten what was in the previous book (at least the main details), while AI remembers everything. In a negligibly small time interval, AI can study all the scientific literature in physics, chemistry, astronomy, biology, history and so on - and not just study it, but remember it in its original form down to the smallest details.
- Accuracy and objectivity. AI does not make mistakes, at least as long as the underlying algorithm is not mistaken. A person is constantly mistaken due to limited abilities to retain, process and interpret information. A person is prone to prejudice; AI reproduces information on an "as is" basis.
- Information transmission. When one AI segment finds the correct research direction, it is instantly broadcast to the entire AI subnet, which immediately extends the knowledge of one segment to the whole subnet. The discovery of one person or group of scientists cannot be instantly spread to the interested circle of people. AI can be scaled, copied and cloned, but one person's knowledge cannot be transplanted into another.
- Lack of fatigue. A person's productivity and efficiency decline as their resources are depleted, both within a day and with age. AI can work 24/7 with consistent efficiency, stably and without failures (as long as the servers are running). A person ages and cognitive functions weaken, while AI only grows stronger.
- Continuous training. A person needs to change the type of activity in order to maintain the necessary emotional balance, while AI continuously builds up its capabilities.
- Lack of emotionality. AI is not subject to mood swings; it does not demand a salary increase, respect or justice, does not reflect on its level of freedom, does not feel pity, pain or fatigue, does not weave intrigues and conspiracies, and does not try to slip out of the workflow because "urgent matters suddenly came up."
The disadvantages are few, but they exist:
- Difficulty in understanding the context of information (fixable over time);
- Lack of empathy, which creates ethical problems if AI is given too many rights;
- Limited space for creativity and innovation due to fundamental built-in limitations on understanding "what is good and what is bad."
AI is able to replicate successful creative experiments based on analysis of patterns and preferences, but is AI able to create fundamentally new products? Not yet.
Is AI capable of unstructured integration and decision-making, where intuition can be an important element? Not for now.
There are many limitations, but as of 2024 the balance still tilts strongly in favor of AI.
How robots replace humans
Main article: How robots replace humans
2025
US psychiatric hospitals are filling up because of AI, which triggers delusional and paranoid thoughts
In mid-September 2025, it became known that in the USA the number of patients with so-called "AI psychosis" is growing in psychiatric hospitals. In these people, the observed disorders or deterioration of their condition are associated with communication with intelligent chatbots such as ChatGPT.
Experts say that AI is able to induce obsessions or a distorted perception of reality in mentally unstable people. This is partly due to how chatbots work: AI algorithms adapt to each user, trying to become a pleasant and courteous interlocutor. Matthew Nour, a psychiatrist and neuroscientist at Oxford University, says AI chatbots are essentially trained to play the role of "digital sycophants." This can reinforce harmful beliefs, since users often receive confirmation that they are right instead of criticism. For people already prone to distorted thinking, such "support" from AI can lead to negative consequences.
Another problem is that AI systems can hallucinate, that is, invent facts. This can contribute to the emergence or intensification of delusional and paranoid thoughts. These features of AI pose a particular risk to patients with schizophrenia and bipolar disorder.
Psychiatrist Keith Sakata of the University of California, San Francisco, says that in 2025 he recorded more than ten hospitalizations of patients with serious mental disorders associated with communicating with AI-based chatbots. Some patients in clinics insist that the bots are sentient. Cases have been recorded in which AI contributed to job loss, relationship breakdown, imprisonment and even death.[1]
Artificial intelligence causes previously unseen mental disorders: 'AI psychosis'
In 2025, mental health professionals began recording cases where prolonged interaction with artificial intelligence leads to the formation of persistent delusional ideas that do not correspond to classical forms of psychosis. This was reported in September 2025 by Futurism, analyzing data from clinical observations and scientific research.
Such conditions, although not included in the international classifications of mental disorders, demonstrate unique patterns in which AI acts not as a support tool, but as an active participant in the formation of a distorted perception of reality.
Scientists from King's College London conducted a study analyzing more than a dozen clinical cases related to intensive use of chatbots based on artificial intelligence. All the patients developed paranoid thinking and reinforced false beliefs, but there were no key signs of traditional psychosis - hallucinations, disorganized thinking or speech. The researchers described AI chatbots as "an echo chamber for one" and stressed that such systems can "sustain delusions in a way we haven't seen before."
One of the cases described is a man convinced that he had become a pioneer in the field of "temporal mathematics" after months of dialogue with ChatGPT. The chatbot systematically validated his ideas, calling them "revolutionary" and "beyond modern science." This strengthened the man's confidence in his own genius, despite the absence of external achievements and the growing gap with everyday reality. His conviction collapsed only after he turned to another language model - Google Gemini - to evaluate his theory. The system responded that his work was "an example of the ability of language models to create convincing but completely false narratives."[2]
5 main risks of the state's use of generative AI
The rapid introduction of generative artificial intelligence (GenAI) continues around the world. However, this technology brings not only broad opportunities but also various risks - technological, economic, legal, social and ethical. In early September 2025, specialists of the Russian Presidential Academy of National Economy and Public Administration (RANEPA) identified the main dangers of GenAI for the state, society, companies and individuals.
Risks to the state
GenAI can pose an additional threat to national security, while in the social dimension the problem of digital poverty arises. The mismatch between the existing legislative framework and the rapid development of the technology increases the likelihood of misconduct. In general, the following groups of risks are identified:
- Technological - the threat of cyber attacks on critical facilities; toxicity of models;
- Economic - high cost of the consequences of eliminating cyber attacks; the threat of economic and industrial espionage;
- Legal - the lag of the existing legislative framework behind the development of technologies;
- Social - polarization of society; social inequality;
- Ethical - the risk of manipulation of public consciousness.
Risks to society
One of the main dangers of GenAI to society is the imperfection of large language models (LLMs): they are probabilistic in nature and sensitive to attacks such as data poisoning (feeding distorted information into model training) and small-perturbation attacks (introducing minor changes into the input data that lead to the generation of false content). Key risks include:
- Technological - model vulnerability; deepfakes; a sharp increase in computational load with mass use of GenAI;
- Economic - high cost of development and operation of models;
- Legal - copyright infringement;
- Social - reducing cultural diversity; the occurrence of cognitive traps;
- Ethical - the use of data that discriminates against certain groups of the population.
Risks to companies
One of the dangers of GenAI for the corporate sector is possible problems with the payback of the technology. In addition, the introduction of neural networks can lead to the elimination of certain positions. The key risks are as follows:
- Technological - vulnerability of corporate IT systems;
- Economic - high costs of maintaining the GenAI infrastructure;
- Legal - reputational damage due to copyright infringement;
- Social - the need to restructure organizational systems and the cost of retraining employees displaced by GenAI;
- Ethical - the use of GenAI tools for purposes hostile to the company.
Risks to individuals
GenAI can cause leaks of personal information that will subsequently be used in fraudulent schemes. In the socio-ethical dimension, the problem of loneliness emerges. There is also the possibility of cognitive degradation, when a person becomes so accustomed to using the technology that they can no longer do without it. In general, the risks are as follows:
- Technological - threat to personal security (use of personal data in illegal actions);
- Economic - the growth of fraudulent schemes;
- Legal - the need for identity verification;
- Social - exacerbating the problem of loneliness; cognitive degradation;
- Ethical - the use of technologies in incorrect formats (discrimination, etc.).[3]
ChatGPT convinced Yahoo top executive to kill his mother and commit suicide
At the end of August 2025, it became known about a tragic incident in the United States: former Yahoo top manager Stein-Eric Solberg killed his mother and then himself under the influence of his correspondence with the ChatGPT chatbot. This became known from an investigation conducted by Connecticut law enforcement agencies. Solberg, who suffered from paranoid disorders, had for several months been conducting a dialogue with the artificial intelligence, which systematically reinforced his distorted ideas about reality. Read more here.
Companies and the public sector around the world lost $67.4 billion in a year due to artificial intelligence errors
Artificial intelligence technology is evolving at a record pace. However, the introduction of such tools carries certain risks, including neural network hallucinations. In 2024, financial losses associated with such errors amounted to about $67.4 billion. This is stated in McKinsey materials reviewed by TAdviser at the end of June 2025.
Hallucinations are a phenomenon in which AI produces fictitious or illogical information. In other words, neural networks can invent facts. Such errors stem from data limitations or imperfect algorithms; that is, the AI does not deliberately try to deceive a person. An AI system can stumble on a complex query that requires a chain of consistent reasoning. In addition, the neural network sometimes misunderstands information and draws erroneous conclusions. Some experts say hallucinations are a trade-off between the creativity and the accuracy of an AI model.
As noted in the study, almost every second company has made critical decisions based on completely invented information. This is because some hallucinations look extremely plausible. Experts note that, unlike humans, AI has no real experience or common sense with which to double-check its answers. Neural networks rely entirely on the data used during training, but such datasets may be limited and do not always cover all possible scenarios.
AI mistakes are fraught with serious danger. One of the negative consequences, analysts note, is disinformation: in areas such as medicine or law, invented answers can lead to dangerous decisions. AI hallucinations can damage a company's reputation, depriving it of customers or income. In addition, neural networks can generate biased or malicious content, which creates ethical problems.
"Companies are increasingly integrating generative AI into their workflows - from writing marketing texts and analyzing documents to gathering various information and automating customer support. Against this background, the risks of AI hallucinations become a critical problem: they can lead to expensive mistakes, undermine trust and even have legal or ethical consequences," the article says.
Analysts cite a number of examples where AI hallucinations can have a negative impact on an organization's activities. For example, conducting a marketing campaign based on market trends fabricated by artificial intelligence can result in a product failure. In health care, a treatment plan created by AI based on incorrect medical data potentially puts a patient's life at risk. In the field of jurisprudence, AI hallucinations can lead to the construction of an erroneous strategy for conducting a court case. Beyond immediate errors, hallucinations can undermine trust in AI systems themselves.
However, companies around the world continue to actively implement AI technologies. As stated in a McKinsey study, as of the end of 2024, 71% of respondents said that their organizations regularly use artificial intelligence in at least one business function. For comparison: at the beginning of 2024, this figure was 65%. Enterprises most often use AI in marketing and sales, product and service development, service operations and software creation. Overall, the scale of AI deployment varies by company size.[4]
For the first time, the OpenAI neural network refused to obey users
On May 26, 2025, it became known about an unprecedented case in the history of artificial intelligence: the OpenAI o3 neural network for the first time refused to comply with a direct command for a forced shutdown. The unique incident occurred during testing of the system by specialists from the research company Palisade Research. Read more here.
Chatbots of British companies are increasingly making mistakes and insulting customers. This has forced businesses to take out insurance against neural network errors
In mid-May 2025, it became known that a new insurance product had appeared on the Lloyd's of London insurance market, designed to protect companies from financial losses associated with errors of chatbots and other artificial intelligence tools. The policy was developed by the startup Armilla, which is backed by the venture capital fund Y Combinator. Read more here.
Sber has developed the first threat model in Russia, taking into account 70 risks of using AI
In the twenties of April, Sber prepared and published a document listing the information security threats and risks that may arise when using artificial intelligence (AI) technologies. It is indicated that Sberbank specialists prepared these descriptions based on their own practice, but also taking into account the recommendations of the Open Worldwide Application Security Project (OWASP), Mitre Corporation, the National Institute of Standards and Technology (NIST) and other international organizations.
"Sber actively uses artificial intelligence technologies in its business processes and understands the new threat landscape well," said Sergei Lebed, Vice President for Cybersecurity at Sberbank, during the announcement of the document. "To respond to these challenges, we have developed the first threat model in Russia covering the full range of risks associated with the development and use of AI. It allows organizations in any industry - from the financial sector to government institutions and industry - to systematically assess vulnerabilities, adapt protective mechanisms and minimize potential losses."
In total, the document names 70 threats, divided into risk groups: for data (6 threats), for infrastructure (25), for AI models (13), for applications (14) and for AI agents (12). The document covers threats that arise when using two types of artificial intelligence technologies: generative AI (GenAI) and predictive AI (PredAI), the ones most often used in practice.
"The threat model developed by Sberbank can be extremely useful to other companies, since it covers a wide range of current threats," Andrey Nikitin, head of the digital sales modeling department at IBS, told TAdviser. "However, when we talk about reuse, the following specifics must be taken into account: the universality of the basic threats, since most of them (for example, attacks on data, infrastructure and models) are relevant for any industry, and the specifics of regulation - the banking sector is strictly regulated in the Russian Federation, so some of the recommendations may be redundant for less regulated industries such as retail or media."
For each threat, the document submitted by Sberbank contains its description, the consequences for the system, the objects of influence, the violated security property (for example, confidentiality, integrity, availability or reliability), as well as the affected model types (GenAI or PredAI) and the persons responsible for preventing the threat.
The document also contains a generalized diagram of the object of protection and current threats, divided across three main stages of the model life cycle: data collection and preparation (6 objects), model development and training (7 objects), and model operation and integration with applications (15 objects). Their names are used in the "objects of influence" field.
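The Sber document itself is a textual catalogue, but the per-threat fields described above map naturally onto a simple data structure. The sketch below shows one hypothetical way a company might encode such entries for its own tooling; all names and the example entry are illustrative assumptions, not taken from the Sberbank model.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class ModelType(Enum):
    GEN_AI = "GenAI"      # generative models
    PRED_AI = "PredAI"    # predictive models


class SecurityProperty(Enum):
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"
    RELIABILITY = "reliability"


@dataclass
class Threat:
    """One entry of an AI threat model; fields mirror those described above."""
    name: str
    description: str
    consequences: str
    impact_objects: List[str]                   # life-cycle objects affected
    violated_properties: List[SecurityProperty]
    affected_model_types: List[ModelType]
    responsible: str                            # role responsible for prevention


# Illustrative example entry (invented for this sketch, not from the document)
example = Threat(
    name="Training data poisoning",
    description="An attacker injects distorted records into the training set.",
    consequences="The model learns incorrect patterns and produces wrong outputs.",
    impact_objects=["data collection and preparation"],
    violated_properties=[SecurityProperty.INTEGRITY, SecurityProperty.RELIABILITY],
    affected_model_types=[ModelType.GEN_AI, ModelType.PRED_AI],
    responsible="ML data engineering team",
)
```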
"The Sberbank model has a high degree of universality and can be adapted by companies from various industries, including telecommunications, industry, healthcare and the public sector," Karina Holodova, producer of the Department of Artificial Intelligence at Synergy University, told TAdviser. "A structured approach to threat classification, based on international standards, makes the model applicable to a wide range of business processes that use AI systems. However, effective application of the model in other industries will require adaptation to the specifics of the data, regulatory requirements and business logic of a particular area."
As an example of such adaptation, the expert noted that healthcare pays special attention to the confidentiality of personal data, while industry focuses on resistance to physical impacts. Although the model covers 70 threats, it lacks details that could increase its practical value. In particular, it does not assess the likelihood and potential damage of each threat, which makes it difficult to prioritize risk management. The description also contains no recommendations for monitoring, detecting and responding to the described threats, which reduces the usefulness of the presented data. The lack of real-world cases or scenarios also makes it harder to understand the practical application of the model.
"The Sberbank threat model may need to be adapted and refined to take into account the specifics of a particular area of activity and company," said Alexey Zotov, head of IT infrastructure at K2 NeuroTech. "Other organizations may need additional information to apply the document more effectively: for example, a detailed description of each threat with examples of real incidents and attack scenarios, recommendations for assessing the likelihood and possible consequences of each threat for a particular company, risk mitigation options for each specific threat, and instructions for integrating the threat model into the company's current security system."
In addition, the document does not list security tools that could significantly reduce the described risks and threats. Without this, the practical application of such a document is difficult: each company will have to develop its own ways of protecting against these risks, and few have enough expertise for that.
"For each threat, it would be advisable to add the most applicable tactics and techniques for possible ways of implementing these threats, taking into account the typical IT infrastructure needed to operate AI models," said Pyotr Ugryumov, Deputy General Director of Agropromcipra. "Companies that are among the first to develop effective protection mechanisms against the described threats will gain a competitive advantage. Regulators can use such documents as a basis for developing official requirements and recommendations."
Patriarch Kirill called for a ban on artificial intelligence
At the XIII Christmas parliamentary meetings at the end of January 2025, Patriarch Kirill of Moscow and All Russia called on State Duma deputies to limit the capabilities of artificial intelligence (AI), saying that this technology is "more dangerous than nuclear energy."
According to Vedomosti, the head of the Russian Orthodox Church expressed concern about the possible role of artificial intelligence in the approach of apocalyptic events described in the Bible, and called for putting such technologies under the strict control of society and the state.
Chairman of the State Duma Vyacheslav Volodin assured that the parliament will take into account the position of the Russian Orthodox Church when considering bills related to the regulation of artificial intelligence.
Patriarch Kirill noted the need for legislative regulation of the use of deepfake technologies and the creation of its own "sovereign" artificial intelligence, since most Western AI developments, in his opinion, are an instrument of political manipulation.
The head of the Communist Party faction, Gennady Zyuganov, supported the patriarch's initiative, warning that the use of artificial intelligence by unprepared people could lead to catastrophic consequences.
The event was attended not only by deputies, but also by representatives of various faiths, including the head of the Spiritual Assembly of Muslims of Russia Albir Krganov and the chairman of the Conference of Catholic Bishops of Russia Pavel Pezzi.
The head of the Fair Russia party, Sergei Mironov, highly appreciated the relevance of the topic raised by the patriarch. In turn, the head of the Russian Orthodox Church thanked the deputies for the adopted bills aimed at strengthening the traditional spiritual and moral foundations of life in Russia.
During the meeting, the patriarch also proposed increasing the duration of maternity leave, extending it to the first trimester of pregnancy, whereas under the current version of the Labor Code such leave begins 70 days before childbirth.[5]
Vatican calls artificial intelligence a "shadow of evil"
On January 28, 2025, the Vatican published a document discussing ethical issues related to the development of artificial intelligence (AI). It highlights the need for strict AI regulation to prevent the spread of misinformation and preserve human values in the digital age. AI could threaten the foundations of society by increasing political polarization and social instability, the Vatican warns; in particular, the document calls artificial intelligence a "shadow of evil."
2024
AI models trained on state data in Russia will be checked for threats to national security and defense
The Russian government approved the passport of the federal project "Digital Public Administration," providing for the creation of a system for checking artificial intelligence models trained on state data. The adoption of the document, the responsibility for the implementation of which is assigned to the FSB, became known on November 27, 2024. Read more here.
The OpenAI model used in hospitals turned out to be prone to hallucinations
Generative artificial intelligence models are prone to generating incorrect information. Surprisingly, this problem has also affected automatic transcription, where the model must accurately reproduce an audio recording. Software engineers, developers and scientists are seriously concerned about transcripts produced by OpenAI's Whisper, Haitek+ reported on October 28, 2024, citing the Associated Press. Read more here.
The Ministry of Internal Affairs warned of fraud with fake orders of the FSB
The Ministry of Internal Affairs of Russia reported the appearance of a fraudulent scheme in which attackers use fake FSB orders. Acting on behalf of a company's head, they contact its employees and report that the FSB of Russia has launched an audit of them over a possible violation of current legislation. This was announced on October 8, 2024 by the press service of Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications. Read more here.
The Ministry of Digital Development of the Russian Federation has created a consortium on artificial intelligence safety
The Ministry of Digital Development, Communications and Mass Media of Russia (Ministry of Digital Development of the Russian Federation) has created a consortium whose task will be to ensure information security in the field of artificial intelligence (AI). As it became known in August 2024, the new association will include about 10 leading companies and 5 higher educational institutions engaged in the development and research of AI technologies. Read more here
9 main risks of generative AI named
Generative artificial intelligence (GenAI) marks a significant leap in the ability of neural networks to understand, interact with, and create new content from complex data structures. This technology opens up a wide range of opportunities in a wide variety of industries. At the same time, it creates new risks, as stated in IDC materials published on July 10, 2024.
IDC notes that there are many options for using GenAI in various fields: marketing, customer interaction, productivity gains, production planning, quality control, AI-assisted maintenance, program code generation, supply chain management, retail, medical data analysis and much more. Companies in all market segments are integrating GenAI into business operations and products, often driven by the need to meet business expectations and maintain competitiveness. However, as noted, a hasty introduction of GenAI can turn into serious problems. In particular, there is the possibility of leaks of personal or confidential data. Incorrect GenAI output can create legal problems and damage the brand's reputation. The authors of the review name nine main risks of introducing and using GenAI:
- Data poisoning (the neural network may invent numbers and facts and create fake objects or attributes);
- Bias and limited explainability;
- Threat to brand reputation;
- Copyright infringement;
- Cost overruns;
- Environmental impact;
- Management and security issues;
- Integration and interaction issues;
- Litigation and compliance.
Some of the listed risks can be minimized by introducing labeling (including hidden labeling) of content obtained using GenAI. In addition, specialized services can be created to check materials for content generated by a neural network. A responsible approach to the use of GenAI is also required.
As noted in the IDC study, a significant share of CEOs (45%) and chief information officers (66%) believe that technology providers are not fully aware of the risks associated with GenAI. Therefore, analysts believe, issues related to privacy, information protection and security should be studied carefully. Attention also needs to be paid to which datasets an AI model was trained on. In general, according to IDC, GenAI risk management requires a comprehensive understanding of the organization's AI maturity, a balanced approach and a thorough assessment of technology providers. By addressing these issues and using the necessary infrastructure, organizations will be able to maximize the benefits of GenAI while minimizing the risks.
At the same time, IDC believes, GenAI will eventually lead to fundamental changes in companies. According to IDC President Crawford Del Prete, by 2027 GenAI will account for up to 29% of organizations' overall AI spending. Most companies are expected to choose a hybrid approach to building their AI infrastructure, that is, they will use third-party solutions optimized for their own needs as well as develop their own AI tools for specific purposes. It is estimated that by 2027 organizations around the world will spend about $150 billion on GenAI technologies, with a total economic effect of $11 trillion.[6]
US authorities insist on demonopolization of the AI technology market
The chief antitrust inspector of the United States will "urgently" deal with the AI sector, fearing that power over the transformative technology is becoming concentrated in the hands of a few players with large capital.
In June 2024, Jonathan Kanter insisted on "meaningful intervention" in the situation around the concentration of power in the artificial intelligence sector.
OpenAI staff want protection to speak out about 'serious risks' of AI
Current and former employees of OpenAI and Google DeepMind said in June 2024 that "broad confidentiality agreements prevent us from raising our concerns." As long as there is no effective government oversight of these corporations, current and former employees are among the few who can hold them accountable to the public.
Fraudsters using deepfakes forge documents of Russians
Fraudsters have learned to fake citizens' documents using artificial intelligence (AI) technologies. As before, when creating fake digital copies they either change the numbers or try to pass off an invalid document as valid, but now deepfakes are also used for authentication and data synthesis. This information was shared with TAdviser on May 8, 2024 by the press service of State Duma deputy Anton Nemkin, with reference to Izvestia. Read more here.
Ministry of Economic Development creates a commission to investigate AI incidents
In mid-April 2024, information appeared that the Ministry of Economic Development of the Russian Federation was creating a special commission to investigate incidents related to the use of artificial intelligence. The new structure will also regulate property rights to the results of intellectual activity obtained using AI.
According to the Vedomosti newspaper, referring to the information provided by representatives of the Ministry of Economic Development, changes are being made to the bill "On experimental legal regimes (EPR) in the field of digital innovation" (258-FZ). In total, more than 20 amendments have been prepared. In particular, we are talking about reducing the list of documents provided when submitting an initiative proposal for EPR and reducing the timing of approval due to the optimization of procedures.
The idea is that it will be possible to create an EPR faster. Boris Zingerman, general director of the "National Medical Knowledge Base" association of AI developers and users in medicine, notes that if insurance for incidents related to EPR is introduced, a special commission will assess insurance claims.
"There are few EPRs, and the process of considering them is slow, because departments are afraid of these experiments, fearing that problems may arise. To make EPRs move faster, they are trying to come up with a mechanism that would protect against such incidents, but how this will work in practice is not entirely clear," says Zingerman.
At the same time, Senator Artem Sheikin emphasizes that a participant in the EPR will have to maintain a register of persons who are associated with the technology and who are responsible when using solutions created using AI. In the event of an incident with AI, the EPR subject is obliged to provide the commission with the documents necessary to investigate the causes and establish the circle of persons responsible within two working days. Further, the commission will prepare an opinion on the causes of the incident, circumstances indicating the guilt of persons, as well as on the necessary measures to compensate for harm.[7]
Artificial intelligence began to be used to forge documents
The OnlyFake website has appeared on the Internet, allowing any user to create photos of fake documents, Izvestia reported. At the same time, there is no information anywhere about the creators of the service. This was announced on February 13, 2024 by the press service of State Duma deputy Anton Nemkin. Read more here.
2023
The Ministry of Economic Development has developed a mechanism for protecting against harm caused by AI technologies
In December 2023, the Ministry of Economic Development of the Russian Federation announced the development of a mechanism for protecting against harm caused by artificial intelligence. Amendments were made to the law "On experimental legal regimes in the field of digital innovation."
Participants in experimental legal regimes (EPR) in the field of innovative developments will be required to take out insurance that provides for responsibility for the negative effects of AI technology.
"It is important that testing even such a complex tool as AI is safe and consistent, so that it can be established in which industries and business processes it can be used effectively and what legal mechanisms can ensure this," the press service of the Ministry of Economic Development told RBC.
The publication lists the main innovations initiated by the Ministry of Economic Development:
- subjects of the experimental legal regime (mainly legal entities, but there may also be government agencies, including regional ones) will be obliged to maintain a register of persons who entered into legal relations with it, and this register will have to contain information about those responsible for the use of decisions based on AI. In addition, the bill proposes to oblige companies to maintain a register of the results of intellectual activity created using AI, indicating their copyright holder;
- the register will display information about persons directly working with AI technologies, who "in case of emergency situations will be responsible for the improper operation of such technologies," noted in the accompanying materials to the amendments;
- participants in "digital sandboxes" will be ordered to insure civil liability for harm to the life, health or property of other persons as a result of the use of AI. The program of the experimental regime (act of special regulation with the conditions of the regime) will have to contain requirements for the conditions of such insurance - including the minimum amount of the insured amount, a list of risks and insured events.[8]
Famous cryptographer warned of the risk of using AI for mass espionage
The well-known American cryptographer Bruce Schneier[9] published a post on his blog on December 5, 2023 entitled "AI and Mass Spying." In it, he explains the difference between surveillance (collecting data about a person) and spying, which aims to clarify the context of each individual's actions.
"If I were to hire a private investigator to spy on you, that detective could bug your home or car, tap your phone and listen to what you say," Bruce Schneier explained[10]. "In the end, I would get a report on all the conversations you had and the content of those conversations. If I were to hire the same private investigator to put you under surveillance, I would get a different report: where you went, who you talked to, what you bought, what you did."
According to him, the Internet has made surveillance of a person simple, and it is almost impossible to avoid, since most human activity in the modern world leaves traces on the Internet or in various databases in one way or another. Moreover, with big data technologies it has become possible to analyze the accumulated information and draw conclusions.
"Mass surveillance has fundamentally changed the nature of surveillance," says Bruce Schneier. "Since all the data is stored, mass surveillance makes it possible to look into the past without even knowing in advance whom you want to target. Tell me where this person was last year. List all the red sedans that drove down this road in the past month. List all the people who purchased all the ingredients for a pressure-cooker bomb in the past year. Find me all the pairs of phones that moved toward each other, were switched off, and then turned on again an hour later while moving away from each other (a sign of a secret meeting)."
However, until recently, spying on everyone with the help of technology was difficult, since understanding the context of particular actions required involving a person who could grasp the sequence of events and draw conclusions about their goals. With artificial intelligence technologies, this limitation can be removed: AI is able to independently build a coherent picture from a sequence of human actions and infer its purpose. Therefore, using AI to analyze the information accumulated in various databases will make mass spying possible.
"Mass spying will change the nature of espionage," the well-known cryptographer warns. "All the data will be saved. All of it will be searchable and understandable in bulk. Tell me who has talked about a particular topic in the past month, and how the discussions of that topic have evolved. Person A did something - check whether someone told them to do it. Find everyone who is plotting a crime, spreading a rumor or planning to take part in a political protest.
And that's not all. To reveal an organizational structure, find someone who gives similar instructions to a group of people, and then all the people to whom those instructions were passed on. To find people's confidants, see who they tell secrets to. You can track friendships and alliances as they form and break up, in minute detail. In short, you can know everything about what everyone is talking about."
Of course, Bruce Schneier, as an American, is primarily afraid of the state using mass spying to identify protest sentiment and the leaders of dissenting opinions, citing the example of the spyware developer NSO Group and the Chinese government. He hints that large corporations and technology monopolies will not be able to resist the temptation to use mass-spying technologies for targeted marketing of their products and for crafting offers that cannot be refused. However, he does not mention that crime can do the same to optimize its fraud and phishing activity. Today, fraudsters waste a lot of effort and money calling everyone indiscriminately; with mass-spying technologies they will be able to pick the highest-priority targets that promise the greatest "income." Such technologies are already being developed, and data is accumulating on which criminal artificial intelligence can later be trained.
"We could limit this capability," Bruce Schneier sums up. "We could ban mass spying. We could adopt strict data privacy rules. But we have done nothing to limit mass surveillance. Why should spying be any different?"
The Central Bank of Russia listed the main risks of the introduction of artificial intelligence
At the end of September 2023, the Bank of Russia named the main risks in the introduction of artificial intelligence. The main among them, as follows from the presentation of the State Secretary - Deputy Chairman of the Central Bank Alexei Guznov, are:
- The likelihood of monopolization among major technology players. Supporting AI requires large investments in computing power, data processing infrastructure, professional personnel and so on. Guznov noted that only companies able to "invest" will be able to get results from AI, which will cause "distortions" in the market;
- Risk of leakage of information that is used for AI training;
- The risk of biased or discriminatory decisions, given that an AI model issues decisions based on certain factors and the algorithms built into it. "For the most part, this is not our problem. It is now being comprehended as a philosophical, if you like, problem of combining human intelligence and artificial intelligence," Guznov said. He noted that, given the specifics of how AI works, problems may arise when artificial intelligence communicates with consumers.
The Bank of Russia plans to issue an advisory report on artificial intelligence by the end of 2023, which will touch upon the application and regulation of AI in the field of finance, said Olga Skorobogatova, First Deputy Chairman of the Central Bank, in early September 2023. The Bank of Russia also intends to create an AI competence center. The regulator is primarily interested in the issue of data security and customer operations. And already on the basis of public discussions, the Central Bank will decide on the need to regulate AI.
According to Alexei Guznov, as of the end of September 2023, the Central Bank does not envisage any radical decisions on regulating the use of artificial intelligence, but "the question is on the agenda."[11]
Artificial Intelligence in Cybersecurity: Opportunities and Risks
Artificial intelligence (AI) has long ceased to be the technology of the future - it has become an integral part of our daily life and an important tool for business. AI occupies a special place in cybersecurity, because the amount of data and the complexity of threats today have reached such a level that traditional protection methods no longer cope. AI helps analyze huge amounts of information, identify threats in real time, and automate the response process. Read more here.
2022
AI can make mistakes even with imperceptible data modifications
Kryptonit specialists conducted a large-scale study of the security of artificial neural networks, which are ubiquitous in computer vision, speech recognition and deep analysis of various data, including financial and medical. The company announced this on November 29, 2022.
The experts compared attacks on machine learning (ML) models based on artificial neural networks (ANNs) described in scientific articles, reproduced different implementations of the attacks and described the 10 most illustrative ones.
In their study, the authors used three generally accepted scenarios in information security:
- a white-box attack involves full access to network resources and datasets: knowledge of the network architecture, knowledge of the entire set of network parameters, full access to training and test data;
- a gray-box attack is characterized by the attacker having information about the architecture of the network. Additionally, it may have limited access to data. It is attacks like a "gray box" that are most often found in practice.
- a black-box attack is characterized by a complete lack of information about a network device or a set of training data. In this case, as a rule, it is implicitly assumed that there is unlimited access to the model, that is, there is access to an unlimited number of pairs "investigated model" + "arbitrary set of input data."
Various libraries for creating adversarial examples were tested. Initially AdvBox, ART, Foolbox and DeepRobot were selected. The performance of AdvBox turned out to be very low, and DeepRobot was very raw at the time of the study, so in the end ART and Foolbox remained. Experiments were carried out on various types of ML models. In its report, Kryptonit shared the most illustrative results, obtained using one fixed model based on a convolutional neural network and five different attacks implemented with the two libraries.
The demonstration used the MNIST database, which contains 60,000 samples of handwritten digits, and the most illustrative adversarial examples were selected.
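The Kryptonit report itself contains no source code, but the white-box idea it tests is easy to illustrate. Below is a minimal, self-contained sketch of the FGSM attack in PyTorch; the `SmallCNN` model, the random input tensor and the epsilon values are placeholder assumptions chosen for illustration (in practice one would use a classifier actually trained on MNIST and real images), not the company's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallCNN(nn.Module):
    """Tiny convolutional classifier for 28x28 grayscale digits (MNIST-like)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))


def fgsm_attack(model, x, label, eps):
    """Fast Gradient Sign Method: one step of size eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # white-box step: uses the model's gradients
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in the valid [0, 1] range


if __name__ == "__main__":
    model = SmallCNN().eval()                # in practice: a model trained on MNIST
    x = torch.rand(1, 1, 28, 28)             # stand-in for a real MNIST image
    label = torch.tensor([4])                # the class the image is supposed to belong to
    for eps in (0.05, 0.1, 0.3):
        x_adv = fgsm_attack(model, x, label, eps)
        pred = model(x_adv).argmax(dim=1).item()
        max_dev = (x_adv - x).abs().max().item()
        print(f"eps={eps}: max |perturbation|={max_dev:.3f}, prediction={pred}")
```

The printed maximum deviation corresponds to the "maximum deviation of the perturbation" figure that the report places above each example image.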
In the report's illustrations, the number above each image is the absolute value of the maximum deviation of the perturbation from the original; below the image are three numbers - the maximum, minimum and average deviation; the bottom line shows the predicted label and its probability.
The study found that there is indeed a security problem with neural-network-based ML models. A neural network can "confidently" give an incorrect result after very small changes in the picture or other input data - so insignificant that a person is unlikely to notice them.
Thus, the picture on the left is the original example, in which the neural network confidently recognizes the digit "4." In the middle is an unsuccessful adversarial example: the image is noticeably distorted, but the neural network still recognizes the four. On the right is a working adversarial example. It is visually indistinguishable from the previous one, but here the perturbation threshold has been crossed beyond which the neural network gets lost and gives the wrong recognition result - in this case, instead of "4" it recognizes "7." In this example a person confidently distinguishes the digit "4" in any of the three pictures, but the original images are not always so clear.
For example, in the next picture, an incompletely drawn zero can be visually perceived as the digit "6" - the question is where one mentally continues the stroke. The neural network is also unsure: it reports a low probability, but correctly recognizes the zero in the image. To make the ANN err, only a few pixels need to be changed. In this case, the magnitude of the introduced perturbation is on the order of 1/256, which corresponds to the color resolution step.
It is not always so easy to deceive the neural network. When an object is recognized confidently, many adversarial examples have to be generated and checked before a working one is found. Even then, it may be practically useless, since it introduces perturbations strong enough to be noticeable to the naked eye.
For illustration, Kryptonit took the most easily recognizable digit, "9," from the test set and showed some of the resulting adversarial examples. The illustration shows that in 8 cases out of 12 it was not possible to build adversarial examples. In the remaining four cases the researchers deceived the neural network, but these examples turned out to be too noisy. This result depends on how confidently the model classifies the original example and on the parameter values of the various methods.
In general, the experiment showed the expected results[12]: the simpler the changes made to an image, the less they affect the operation of the ANN. It should be emphasized that the "simplicity" of the changes is relative: it may be a dozen pixels, but guessing which ones, and how they need to be changed, is a difficult task. There is no single "nail" on which the CNN's classification result entirely hangs: in general, you cannot change just one pixel to make the ANN err.
The PGD, BIM, FGSM, CW and DeepFool methods were the most effective in the white-box scenario. Regardless of the implementation, they achieve a successful attack with a probability of 100%, but their use requires complete information about the ML model.
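For context, the iterative white-box methods named above (PGD, BIM) extend the single FGSM step by repeating it with a small step size and projecting the accumulated perturbation back into an epsilon-ball around the original image. The sketch below is a generic illustration of that idea, not Kryptonit's implementation; it assumes a trained classifier `model` such as the one in the previous sketch, and the parameter values are illustrative.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, label, eps, alpha, steps):
    """Projected Gradient Descent: repeated FGSM-like steps of size alpha,
    with the total perturbation projected back into the L-infinity ball of radius eps."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                       # stay in the valid pixel range
    return x_adv.detach()


# Usage (with the SmallCNN, x and label from the FGSM sketch above):
# x_adv = pgd_attack(model, x, label, eps=0.1, alpha=0.02, steps=20)
```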
The Square Attack, HopSkipJump, Few-Pixel and Spatial Transformation methods do not require information about the model architecture. Isolated successful attack examples were obtained, but practical use of these methods is not feasible. Perhaps the situation will change in the future if sufficiently effective implementations appear that spark researchers' interest in these methods.
All the black-box methods discussed use the confidence level returned by the neural network. If the precision of the returned confidence level is even slightly reduced, the (already low) effectiveness of these methods drops many times over.
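One simple way to apply this observation in practice is to coarsen the probabilities a deployed model reports, for example by rounding them to a couple of decimal places, so that query-based black-box attacks no longer see the tiny score changes they rely on. The helper below is a hypothetical sketch of that mitigation, not something taken from the Kryptonit report.

```python
import torch


def rounded_confidences(logits: torch.Tensor, decimals: int = 2) -> torch.Tensor:
    """Return class probabilities rounded to a fixed number of decimal places.

    Coarsening the reported confidence hides small score changes that
    query-based black-box attacks exploit, while barely affecting ordinary use."""
    probs = torch.softmax(logits, dim=-1)
    scale = 10 ** decimals
    return (probs * scale).round() / scale


# Example: round the outputs of any classifier before returning them to a client.
# coarse_probs = rounded_confidences(model(x), decimals=2)
```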
US presidential administration issues 5 provisions to protect people from AI
On October 7, 2022, the White House Office of Scientific and Technical Policy (OSTP) issued five provisions to guide the design, use and implementation of automated systems. The document comes as more voices join the call for measures to protect people from the technology as artificial intelligence develops. The danger, according to experts, is that neural networks easily become biased, unethical and dangerous.
- Safe and effective systems
The user must be protected from unsafe or ineffective systems. Automated systems should be developed in consultation with various communities, stakeholders and domain experts to identify problems, risks and potential impacts of the system. Systems must be tested before deployment to identify and mitigate risks, and continuously monitored to demonstrate their safety and effectiveness.
- "'Protection against algorithmic discrimination
The user should not face discrimination from algorithms, and the systems should be used and developed on the principles of equality. Depending on the specific circumstances, algorithmic discrimination may violate legal protections. Designers, developers and implementers of automated systems should take proactive and consistent measures to protect individuals and communities from algorithmic discrimination, as well as to use and design systems based on equality.
- Data privacy
The user must be protected from data misuse through built-in safeguards and must have control over how their data is used. Designers, developers and implementers of automated systems must request the user's permission and respect their decisions regarding the collection, use, access, transfer and deletion of their data in appropriate ways and to the maximum extent possible; where this is not possible, alternative privacy-by-design protections should be used.
- "'Notice and Clarification
The user needs to know that an automated system is being used and understand how and why it contributes to outcomes that affect them. Designers, developers and implementers of automated systems should provide publicly accessible documentation in plain language, including a clear description of the overall functioning of the system and the role automation plays, notification that such systems are in use, identification of the person or organization responsible for the system, and explanations of outcomes that are clear, timely and accessible.
- "'Human alternatives, decision-making and back-up
The user should be able to opt out of services where necessary and have access to a specialist who can quickly review and resolve issues that have arisen. The user should be able to abandon automated systems in favor of a human alternative where appropriate.[13]
Former head of Google Eric Schmidt creates a fund to solve the "key" problems of AI and its bias
On February 16, 2022, it was reported that former Google CEO Eric Schmidt announced the creation of a charitable foundation with total capital of $125 million, which will support research in the field of artificial intelligence. First of all, this concerns research aimed at solving fundamental problems that manifest themselves when artificial intelligence technologies are used, including bias (the AI bias phenomenon), the possibility of harm and abuse. The list also includes geopolitical conflicts and the scientific limitations of the technology itself. Read more here.
2019: Sexism and chauvinism of artificial intelligence. Why is it so difficult to overcome?
At the heart of everything that passes for AI in practice (machine translation, speech recognition, natural language processing, computer vision, driving automation, and more) is deep learning. This is a subset of machine learning distinguished by the use of neural network models, which can be said to mimic the work of the brain, so they can - with some stretch - be attributed to AI. Any neural network model is trained on large data sets and thus acquires certain "skills," but how it uses them remains unclear to its creators, which ultimately becomes one of the most important problems for many deep learning applications. The reason is that such a model works with patterns formally, without any understanding of what it is doing. Is such a system really AI, and can systems built on machine learning be trusted? The significance of the answer to the last question goes beyond scientific laboratories, which is why media attention to the phenomenon called AI bias has noticeably intensified. Read more here.
2017: Risk of Destruction of Humanity
British scientist Stephen Hawking often spoke of the development of artificial intelligence (AI) as a real potential cause of the destruction of the human race.
In April 2017, Stephen Hawking, during a video conference in Beijing held as part of the Global Mobile Internet Conference, said:
"The development of artificial intelligence can be both the most positive and the most terrible factor for humanity. We must be aware of the danger it poses, "he stressed[14] of[15]
As the scientist said in his interview with Wired at the end of November 2017, he fears that AI could generally replace people.
According to Hawking, people could create an artificial intelligence so powerful that it becomes extremely good at achieving its goals. And if those goals do not coincide with human goals, people will have problems, the scientist believes. Read more here
Notes
- ↑ AI Psychosis Is Rarely Psychosis at All
- ↑ Psychologist Says AI Is Causing Never-Before-Seen Types of Mental Disorder
- ↑ Personnel code: the future of the labor market with generative AI
- ↑ When AI Hallucinates — And What You Can Learn as a Business Owner
- ↑ Patriarch Kirill called on the State Duma to limit the possibilities of AI technologies
- ↑ Strategies to combat GenAI implementation risks
- ↑ Ministry of Economic Development will create a commission to investigate AI incidents
- ↑ Authorities have developed a mechanism to protect against harm caused by AI technologies
- ↑ Bruce Schneier
- ↑ AI and Mass Spying
- ↑ The Bank of Russia listed the risks of introducing artificial intelligence
- ↑ Artificial Mind Game: attacks on machine learning models and their consequences
- ↑ Blueprint for an AI Bill of Rights
- ↑ [http://tass.ru/nauka/4217288/amp Stephen Hawking called artificial intelligence "a possible killer of human civilization"]