
Regulation of artificial intelligence


Main article: Artificial Intelligence

Chronicle

2025

Ministry of Digital Development has prepared a bill: cyber fraud using AI in Russia will carry up to six years in prison

The Ministry of Digital Development of the Russian Federation has proposed new fines and tougher liability for cyber fraud. The corresponding bills amending the Code of Administrative Offenses (CAO) and the Criminal and Criminal Procedure Codes were published on the federal portal of draft regulatory legal acts on August 26, 2025.

According to Interfax, which cites these documents, one of the bills proposes introducing liability for the use of "special technical means and special knowledge in committing crimes," including the use of artificial intelligence.

Cyber fraud using AI in Russia will carry up to six years in prison; the Ministry of Digital Development has already prepared a bill

The bill defines AI as "a set of technological solutions that makes it possible to imitate human cognitive functions (including self-learning and finding solutions without a predetermined algorithm) and to obtain results comparable, at a minimum, to the results of human intellectual activity in performing specific tasks."

One of its clauses recognizes the use of neural networks in committing cybercrimes as an aggravating circumstance, punishable by up to six years in prison or a fine of up to 500 thousand rubles.

"Punishment is provided for the use of artificial intelligence by fraudsters in committing theft or extortion of citizens' funds and property. The minimum punishment fraudsters face for using AI in thefts is a fine of 100 thousand rubles," Dmitry Grigorenko, Deputy Prime Minister of the Russian Federation, told reporters.

In addition, it is proposed to toughen the sanctions of Articles 272 (Illegal access to computer information), 273 (Creation, use and distribution of malicious computer programs) and 274 (Violation of the rules for operating means of storing, processing or transmitting computer information and information and telecommunication networks) of the Criminal Code of the Russian Federation, under which the maximum punishment will be eight years in prison in the event of grave consequences or the threat thereof.[1]

Ministry of Digital Development has developed the concept of AI regulation in Russia

The Ministry of Digital Development of Russia has prepared a draft concept for the development of regulation of relations in the field of artificial intelligence technologies until 2030. The document defines the principles of future legislative regulation of the industry and describes the factors affecting the development of AI technologies in various sectors of the economy. The concept was completed in August 2025.

According to Vedomosti, the draft was created in cooperation with the Alliance in the field of artificial intelligence, which unites large technology companies interested in developing these technologies. A representative of the Ministry of Digital Development confirmed the authenticity of the document and clarified that it is undergoing expert discussion with representatives of the industry and of sectors actively implementing AI solutions.

Ministry of Digital Development has created the concept of regulating artificial intelligence in Russia

Upon completion of the consultation process, the concept will be submitted for interdepartmental coordination. Representatives of the Ministry of Economic Development and the Ministry of Industry and Trade said that the draft concept has not yet been submitted to their departments in the prescribed manner.

The representative of the Ministry of Industry and Trade noted that the creation of a unified regulatory system in the field of AI requires the development of an integrated approach that takes into account the ethical, legal and social aspects of the use of technologies. The agency expressed its readiness to actively participate in the development of relevant legislative initiatives.

A representative of the Alliance in the field of AI stressed that favorable conditions for the safe development of technologies can only be created in close cooperation with the industry, taking into account practical experience and a long-term strategic perspective. The draft was based on consolidated proposals from industry organizations and Alliance companies, including Sber.

Russia is taking a hybrid approach to regulating artificial intelligence: most regulations are stimulating, but there are targeted restrictions and self-regulatory mechanisms. In 2020, a law was adopted on experimental legal regimes in the field of digital innovation, which allows legal exceptions to be created for testing new technologies.[2]

German Gref: Blockchain is an example of negative technology regulation in Russia, and it is important not to repeat this with AI

State regulation of artificial intelligence (AI) cannot be avoided, since this technology affects citizens, business and the state, and alongside its many advantages carries many risks. The question is how to regulate, said German Gref, President and Chairman of the Board of Sberbank, speaking on June 19 at a forum in St. Petersburg.

Based on the experience of other countries, approaches to regulation fall into three types. The first is a stimulating type of regulatory support for AI, followed by countries such as Britain, South Korea and Japan. They try to regulate AI as little as possible, creating favorable conditions for the development of national systems: "Everyone understands how important it is to have national systems."

The second type is restrictive regulation, characteristic of the EU, for example. It was the first in the world to adopt a fairly comprehensive regulatory document, the Artificial Intelligence Act: "Everything is fine with regulation there, but there is no artificial intelligence."

"This is very expensive, and the responsibility is such that no company today can vouch that a technology still only in experimental development will not make mistakes. And the fines are prohibitive. Therefore, all companies are trying to move somewhere outside the European Union so as not to expose themselves to such risks," said German Gref.

German Gref, head of Sberbank, believes that blockchain is not an example for state regulation to follow

The third type, to which Russia can be attributed, is hybrid regulation; the United States is in the same camp. Here, on the one hand, there is regulation, and on the other, support and an effort not to run ahead and create barriers to the development of a new technology.

"We ran ahead in terms of regulation, and blockchain, I think, is a good negative example for us. Blockchain technology is not developing well in our country, and we cannot afford to make such a mistake a second time. There have been many cases in our country's history when we lost an entire sector of the economy of the future by halting research in some area," said the chairman of the board of Sberbank.

Now, in his opinion, a consensus has emerged: there is moral regulation - the "Code of the developer of artificial intelligence," which all the largest AI developers in Russia have already joined. There is also technical regulation and stimulating regulation, including, for example, the experimental legal regime in effect in Moscow.

Deputy Chairman of the State Duma Vladislav Davankov, who participated in the same discussion, believes that regulation in the field of AI is necessary. There is not yet even a conceptual apparatus; it has yet to be developed. But regulation should be "smart," not crude: not like in the United States, where Mark Zuckerberg, founder of the famous social network, "goes to the Senate as if to a job and justifies his every action, and is constantly scolded there." Large parliamentary hearings are planned for September 2025, where these issues will be "calmly discussed." Vladislav Davankov agrees with German Gref that there is no need to rush.

In general, the role of the state in the development of AI is critical, notes German Gref. All over the world, AI is recognized as the number one technology, and all the largest developed countries are trying to help these technologies develop at home. Of all the examples of state assistance the head of Sberbank has seen, the largest is provided in China. It is thanks to state aid that China has the largest number of foundation models: by Sberbank's count, the country has about 240 models with more than a billion parameters and over 20 models with more than 200 billion parameters.

No other state in the world has such competition in the field of AI, and the path the government has taken to create it is justified, said German Gref. At the same time, AI is a very expensive technology, primarily because of the computing power required. In China, state data centers are being built and transferred for use by medium-sized companies.

The United States is also doing a lot to develop the AI sphere; for example, many amendments to tax legislation have been adopted. Looking at these examples, state help is clearly needed, and the main form of help is building the necessary education system. Here the head of Sberbank again referred to China, where AI textbooks were introduced as a compulsory course from grades 1 through 11: "The Chinese are starting to study artificial intelligence in kindergarten."

In addition, it is necessary to subsidize science and scientific institutions so that they interact with companies engaged in AI development. At the same time, large companies such as Sberbank or Yandex do not need help, German Gref believes.

Use of AI for criminal purposes in Russia to carry up to 15 years in prison; the Ministry of Digital Development has already prepared a bill

The Ministry of Digital Development of Russia has prepared a bill introducing criminal liability for crimes committed using artificial intelligence. The document, which became known in June 2025, provides for fines of up to ₽2 million and prison terms of up to 15 years. The bill was developed as part of the implementation of the minutes of a meeting chaired by Deputy Prime Minister Dmitry Grigorenko and is aimed at regulating law enforcement practice amid the growing use of AI technologies.

According to Vedomosti, the bill amends several articles of the Criminal Code of Russia. The new norms will cover theft, fraud, extortion, malicious influence on information systems, and violation of the rules for operating computer information storage.

In Russia, the use of artificial intelligence in criminal acts will be punishable by imprisonment for up to 15 years

Article 158 of the Criminal Code of Russia is given a definition of artificial intelligence: "a set of technological solutions that makes it possible to imitate human cognitive functions and obtain results comparable to the results of human intellectual activity in performing specific tasks." The complex includes ICT infrastructure, software, processes and data processing services.

For theft using AI, the bill provides fines of ₽100 thousand or the amount of the convict's salary for one to three years. Alternatives are forced labor for up to 5 years with restriction of freedom for up to 2 years, or imprisonment for up to 6 years with a possible fine of up to ₽80 thousand.

For extortion using ICT and AI, fines range from ₽100 thousand to ₽500 thousand or the amount of income for one to three years. Forced labor for up to 5 years with restriction of freedom for up to 1.5 years, or imprisonment for up to 5 years with a fine of up to ₽80 thousand, is also provided.

When extortion or theft is committed by a group of persons, with the use of violence or on a large scale, offenders face up to 7 years in prison, a fine of up to ₽500 thousand or the amount of income for 3 years, plus restriction of freedom for up to 2 years.[3]

Ministry of Digital Development of the Russian Federation announced the development of the Concept for the development of AI regulation until 2030

The Ministry of Digital Development of Russia is completing preparation of the "Concept for the development of artificial intelligence regulation until 2030." Revision of the document is almost complete, and in the near future the department will coordinate it with other ministries, the Ministry of Digital Development reported on May 26, 2025.

According to Kommersant, the concept pays special attention to ethics and human control over artificial intelligence systems, depending on their tasks. The Ministry of Digital Development noted that modern artificial intelligence systems can perform unprogrammed tasks, which increases the risks of their use.

The Ministry of Digital Development of the Russian Federation announced the development of a concept regarding the regulation of artificial intelligence until 2030

In St. Petersburg at the end of May 2025, closed and public discussions of the future regulation of artificial intelligence were held. Representatives of regulators - the Ministry of Digital Development, the Ministry of Justice and the presidential administration - discussed the final approaches to the concept being developed.

In parallel with the preparation of the concept, the government has already begun to introduce artificial intelligence technologies into management, economics and the social sphere. Standard solutions based on trusted artificial intelligence are being created for federal and regional departments. Solutions will be universal, that is, created for more than one local task.

Deputy Prime Minister Dmitry Grigorenko said on May 12, 2025, that artificial intelligence has ceased to be a distant innovation, becoming an understandable technology. The state's overall focus on artificial intelligence is shifted to implementing and scaling best practices.

To understand the goals and objectives of the solutions, the regions were surveyed, and on this basis work is underway to create universal systems. In early May 2025, it became known that the government plans to create an artificial intelligence development center on the basis of its analytical center. The division is to coordinate the interaction of federal and regional authorities and business in implementing specialized tasks.[4]

A working group has been formed in the State Duma to develop laws in the field of artificial intelligence

In April 2025, an inter-factional working group was formed in the State Duma of Russia to prepare legislative initiatives in the field of artificial intelligence regulation. The group is headed by Deputy Chairman of the lower house Alexander Babakov and includes representatives of all parliamentary factions. It will operate until the end of the eighth convocation's term and will focus on developing legal mechanisms for the implementation and use of "nationally oriented" AI systems in various industries.

According to Vedomosti, the materials for the organizational meeting of deputies were confirmed by a source in the lower house of parliament. Alexander Babakov said that the issue of regulating artificial intelligence "should not scare anyone," stressing that this area cannot develop exclusively according to market laws and focus only on profit rates.

Deputies organized a special group to prepare laws on AI

The head of the working group noted the need for a comprehensive assessment of factors, including potential threats - among them ethical ones - related to the development of new cross-border technologies. Babakov also pointed to the importance of studying market saturation and its development in combination with the energy sector.

The working group is being created against the background of intensifying state policy in the field of artificial intelligence. In February 2024, Russian President Vladimir Putin signed a decree updating the national AI development strategy until 2030. According to the document, by that deadline the proportion of workers with AI skills should reach 80% (compared to just 5% in 2022).

The strategy also provides for increasing citizens' confidence in artificial intelligence technologies to 80% (from 55% in 2022). By 2030, organizations will have to invest at least ₽850 billion in the introduction and use of AI technologies.[5]

Russian President Vladimir Putin instructed to develop a national regulatory framework for AI

On February 24, 2025, Russian President Vladimir Putin took the initiative to develop a regulatory framework for working with large amounts of data in the field of artificial intelligence (AI). The main goal is to reduce the time for developing and introducing new materials to 2-3 years.

"Due to the introduction of AI and computer modeling in our country, it is necessary - and this is quite realistic - to reduce the time for developing and introducing new materials to 5-10 years, and in the future to two or three years," said Vladimir Putin.

Russian President Vladimir Putin

The president outlined key applications for new materials, including composites and alloys for mechanical engineering and aerospace, plant protection products, energy transmission and storage systems for vehicles and drones, and innovative prototypes of human organs and tissues for medicine.

Russian Deputy Prime Minister Dmitry Chernyshenko noted that the country is one of the world leaders in the development of AI technologies. Russian scientists have unique neural processors and mathematical models that ensure technological sovereignty.

The government is already actively using artificial intelligence technologies, including working with closed data in an isolated segment. Digital agents and twin systems are used to improve the efficiency of internal management and document management.

The Deputy Prime Minister also drew attention to the importance of learning to work with neural networks, stressing that only 20% of teachers and students use AI effectively. According to him, artificial intelligence should act as a universal reference book and online assistant that contributes to human development, not replaces it.[6]

2024

ROC has developed and approved the principles of using artificial intelligence

The General Church Postgraduate and Doctoral School named after Saints Cyril and Methodius, Equal to the Apostles, approved the first document in Russia regulating the use of artificial intelligence technologies in spiritual education. The decision was made by the institution's Academic Council on September 30, 2024. Read more here.

Ministry of Economy of the Russian Federation will prepare regulatory sandboxes for the use of AI

In early September 2024, the Ministry of Economy of the Russian Federation published three draft government resolutions aimed at implementing amendments to the federal law on experimental legal regimes (EPR - "regulatory sandboxes") in the field of digital innovation (169-FZ). This, in particular, is about the use of artificial intelligence tools.

It is noted that as of September 2024, 16 EPRs are being implemented in Russia, 13 of them related to drones. "Regulatory sandboxes" make it possible to waive certain regulatory requirements that impede innovation. Thanks to this approach, companies developing new products and services, together with representatives of authorities, can test them without the risk of violating current legislation and, if testing is successful, bring them to market.

Ministry of Economy of the Russian Federation will create regulatory sandboxes for the use of AI

One of the Ministry of Economy's draft resolutions changes the rules for amending EPRs: as Kommersant notes, amendments can be prepared by an authorized body or regulator on its own initiative, on behalf of the president or government, or at the proposal of companies. The second draft is designed to streamline the procedures by which EPR subjects report on their activities. The third gives the Ministry of Economy an additional ground for suspending EPR subject status - based on the conclusion of a departmental commission created to establish the circumstances in which harm was caused by decisions made using artificial intelligence.

In general, the amendments are focused on accelerating the launch of new programs, as well as on clarifying the procedure for testing AI technologies. It is assumed that the changes will greatly simplify business access to EPR.[7]

Ministry of Digital Development of the Russian Federation is preparing rules for the use of artificial intelligence

On July 26, 2024, the Human Rights Council under the President of the Russian Federation (HRC) and the Ministry of Digital Development of Russia announced a joint initiative to develop rules and restrictions on the use of artificial intelligence in certain industries: healthcare, education, legal proceedings, transport security and psychological assistance.

Valery Fadeev, adviser to the President of the Russian Federation and head of the HRC, met with Minister of Digital Development, Communications and Mass Media Maksut Shadayev, proposing to discuss an article by Chairman of the Constitutional Court Valery Zorkin on the need for a constitutional and legal analysis of the introduction of AI. According to Fadeev, there is "a certain euphoria" around digitalization, which is often unjustified and sometimes leads to erroneous, premature decisions.

Ministry of Digital Development of the Russian Federation develops rules for the use of artificial intelligence

It is noted that AI is increasingly used in the analysis of medical data and in diagnosis. However, there is a danger of professional degradation among doctors who prefer to delegate decisions to AI rather than make them on their own. New digital technologies make it possible to monitor schoolchildren's academic performance more closely, but this is said to carry risks in shaping children's development trajectories. In addition, against the background of the widespread introduction of AI, excessive collection and illegal trafficking of personal data may arise. It is therefore necessary to develop rules for the use of AI that will help minimize possible risks and avoid information leaks.

"We will work through specific industries, develop specific rules and determine the amount of data collected at the industry level. I propose to start with education as the most 'sensitive' topic. We will prepare our proposals within the next month (by the end of August 2024)," said Shadayev.[8]

In Russia, liability has been introduced for causing harm when using solutions with AI

On July 9, 2024, Russia introduced liability for harm caused when using AI-based solutions. This step resulted from amendments to the Federal Law "On Experimental Legal Regimes in the Field of Digital Innovation in the Russian Federation." Under the new provisions of the law, insurance of risks arising from the use of AI technology is provided, which will give additional protection to citizens and legal entities.

Legislative changes include liability for harm caused to the life, health or property of individuals and legal entities in the implementation of experimental legal regimes using AI. The amendments provide for the creation of a commission to investigate all circumstances related to the infliction of such harm. The Commission will assess the scope and nature of harm, including technical failures and errors made in the development and implementation of AI technologies, as well as the actions or omissions of persons that may have caused harm.

Liability for harm from using AI solutions introduced in Russia

Based on the conclusions of the commission, decisions will be made to minimize and eliminate the consequences of harm, prevent similar cases in the future, change the conditions of the experimental legal regime or suspend the status of the subject of the experimental legal regime. These measures are aimed at ensuring the safety and reliability of AI technologies as part of legal experiments.

The law also establishes compulsory civil liability insurance for participants in experimental legal regimes for harm caused to the life, health or property of other persons, including cases related to the use of AI-based decisions. Requirements have been introduced for the conditions of such insurance, including the minimum amount of insurance, a list of insurance risks and insured events. The subject of the experimental legal regime is obliged to maintain a register of persons who entered into legal relations with him, which will ensure transparency and control over the insurance process.

The law will enter into force 180 days after its official publication.[9]

AI developers in Russia were obliged to insure the risks of harm to their systems

On June 25, 2024, the State Duma of the Russian Federation adopted a law on compulsory liability insurance for harm from artificial intelligence. The document, as noted, is aimed at improving the mechanisms for the application of experimental legal regimes (EPR) in the field of digital innovation.

The law is aimed, among other things, at preventing and minimizing the risks of the emergence of negative consequences of the use of AI technologies. Participants of the EPR are ordered to insure civil liability for causing harm to the life, health or property of other persons as a result of the use of AI. In accordance with the new rules, the subject of the EPR is obliged to maintain a register of persons entering into legal relations with him. This database should contain information about those responsible for using AI-based solutions. In addition, companies will have to maintain a register of created results of intellectual activity with the indication of their copyright holder.

AI developers in Russia obliged to insure risks

Another innovation is the formation of a commission to identify the circumstances as a result of which harm was caused when using AI. It is proposed that the commission will include representatives of the authorized and regulatory bodies, as well as organizations of the business community. In addition, other persons may be included if necessary. The changes made are consistent with the basic principles of the development and use of AI technologies, the observance of which is mandatory when implementing the National Strategy for the Development of Artificial Intelligence for the period up to 2030.

"If, during the implementation of an EPR, harm to the life, health or property of a person or the property of a legal entity is caused as a result of the use of solutions developed with AI technologies, the regulatory body creates, within 30 days from the date the harm is discovered, a commission to establish the circumstances under which it was caused," the document says.

In accordance with the law, the requirement that the initiator have no criminal record is excluded, since the practice of establishing EPRs has shown this requirement to be neither necessary nor effective overall. It is emphasized that a certificate of no criminal record does not affect the decision to establish an EPR: the document is not subject to consideration by the interested bodies (in particular, the Ministry of Internal Affairs of Russia, the FSB of Russia and the Government of the Russian Federation). At the same time, a no-criminal-record requirement may be imposed under an EPR program, primarily on persons directly testing innovations.

"This approach is already reflected in current EPR programs, where a person with a criminal record is prohibited from performing the functions of a test driver or a dispatcher of unmanned aircraft systems - a more targeted and effective measure to counter the commission of illegal actions," the document says.

Evgeny Ufimtsev, President of the All-Russian Union of Insurers (ARIA), cites as an example the risk of civil liability for harm caused by an unmanned vehicle. Liability insurance for harm from the use of AI itself poses a number of new questions for legal and insurance practice, he said. At the same time, Dmitry Shishkin, head of the liability insurance department at Ingosstrakh, says the AI insurance market will develop, but "the responsibility of AI developers should become the driver of this development."[10]

How state regulation of the AI sphere is carried out in the USA, EU and China

The rapid development of artificial intelligence, including generative services, has led to the need to regulate the relevant sphere. Various legislative initiatives in the field of AI have already been adopted or are being discussed at the level of governments in the European Union, China and the United States. The Institute for Statistical Research and Knowledge Economics of the Higher School of Economics spoke about the new requirements in mid-January 2024.

In particular, the European Parliament and the European Council agreed on the provisions of the Law on Artificial Intelligence (AI Act). The document is designed to protect civil rights and democracy from high-risk AI, ensure the rule of law and environmental sustainability, and stimulate innovation. The bill is based on a risk-oriented approach: the concepts of prohibited malicious AI practices, high-risk AI systems, systems of limited risk and systems with low or minimal risk are introduced (no restrictions are imposed on them). High-risk AI systems must meet requirements for risk management, testing, technical reliability, training and data management, transparency, cybersecurity and human manageability. AI systems used for biometric identification will require evaluation by a specialized body.

The United States, in turn, adopted the Decree on Safe, Reliable and Trustworthy AI. Like the European bill, the American document requires the creators of AI systems to be transparent about processes. To improve the safety of using AI-based technologies, the National Institute of Standards and Technology will develop requirements that these systems must meet.

China has adopted so-called Temporary Measures for the Management of Generative AI Systems. According to the document, the developers of such platforms are responsible for all generated content. Service creators are obliged to improve the accuracy and reliability of generated materials and to increase the transparency of services. In addition, developers must prevent the creation of content that undermines socialist values or incites the overthrow of the political system. It is also necessary to protect users' personal data and respect intellectual property and privacy rights.[11]
