
Regulation of artificial intelligence


Main article: Artificial Intelligence

Chronicle

2024

The Ministry of Economy of the Russian Federation will prepare regulatory sandboxes for the use of AI

In early September 2024, the Ministry of Economy of the Russian Federation published three draft government resolutions implementing amendments to the federal law on experimental legal regimes (EPR, or "regulatory sandboxes") in the field of digital innovation (169-FZ). The amendments concern, in particular, the use of artificial intelligence tools.

It is noted that as of September 2024, 16 EPRs are in operation in Russia, 13 of them related to drones. Regulatory sandboxes allow participants to waive certain regulatory requirements that hinder innovation: companies developing new products and services, as well as government bodies, can test them without the risk of violating current legislation and, if the testing is successful, subsequently bring them to market.

Ministry of Economy of the Russian Federation will create regulatory sandboxes for the use of AI

One of the Ministry of Economy's draft resolutions changes the rules for amending an EPR: as the Kommersant newspaper notes, amendments can be prepared by an authorized body or regulator on its own initiative, on instructions from the president or the government, or at the proposal of companies. The second draft streamlines the procedures by which EPR participants report on their activities. The third document gives the Ministry of Economy an additional ground for suspending the status of an EPR participant: a finding by a departmental commission created to establish the circumstances in which harm was caused by solutions using artificial intelligence.

Overall, the amendments are focused on accelerating the launch of new programs and clarifying the procedure for testing AI technologies. The changes are expected to significantly simplify business access to EPRs.[1]

Ministry of Digital Development of the Russian Federation is preparing rules for the use of artificial intelligence

On July 26, 2024, the Human Rights Council under the President of the Russian Federation (HRC) and the Ministry of Digital Development of Russia announced a joint initiative to develop rules and restrictions on the use of artificial intelligence in certain industries: healthcare, education, legal proceedings, transport security, and psychological assistance.

The adviser to the President of the Russian Federation and head of the HRC, Valery Fadeev, met with the Minister of Digital Development, Communications and Mass Media, Maksut Shadayev, and proposed discussing an article by the Chairman of the Constitutional Court, Valery Zorkin, on the need for a constitutional and legal analysis of the introduction of AI. According to Fadeev, there is "a certain euphoria" around digitalization that is often unjustified and sometimes leads to erroneous, premature decisions.

Ministry of Digital Development of the Russian Federation develops rules for the use of artificial intelligence

It is noted that AI is increasingly used in the analysis of medical data and in diagnosis, but there is a danger of professional degradation among doctors who prefer to delegate decisions to AI rather than make them themselves. New digital technologies make it possible to monitor schoolchildren's academic performance more closely, but this is said to carry the risk of locking children into a predetermined development trajectory. In addition, the widespread introduction of AI may lead to excessive collection and illegal trafficking of personal data. It is therefore necessary to develop rules for the use of AI that will minimize these risks and prevent data leaks.

"We will look at specific industries, develop specific rules, and determine the amount of data collected at the industry level. I propose starting with education as the most resonant topic. We will prepare our proposals within the next month (by the end of August 2024)," said Shadayev.[2]

In Russia, liability has been introduced for causing harm when using solutions with AI

On July 9, 2024, liability for harm caused by the use of artificial intelligence solutions was introduced in Russia. This step resulted from amendments to the Federal Law "On Experimental Legal Regimes in the Field of Digital Innovation in the Russian Federation." Under the new provisions of the law, insurance is provided for risks arising from the use of AI technology, offering additional protection for citizens and legal entities.

The legislative changes establish liability for harm caused to the life, health or property of individuals and legal entities during the implementation of experimental legal regimes using AI. The amendments provide for the creation of a commission to investigate all circumstances related to the infliction of such harm. The commission will assess the scope and nature of the harm, including technical failures, errors made in the development and implementation of AI technologies, and the actions or omissions of persons that may have caused it.

Liability for harm from using AI solutions introduced in Russia

Based on the conclusions of the commission, decisions will be made to minimize and eliminate the consequences of harm, prevent similar cases in the future, change the conditions of the experimental legal regime or suspend the status of the subject of the experimental legal regime. These measures are aimed at ensuring the safety and reliability of AI technologies as part of legal experiments.

The law also establishes compulsory civil liability insurance for participants in experimental legal regimes for harm caused to the life, health or property of other persons, including cases related to the use of AI-based decisions. Requirements have been introduced for the conditions of such insurance, including the minimum amount of insurance, a list of insurance risks and insured events. The subject of the experimental legal regime is obliged to maintain a register of persons who entered into legal relations with him, which will ensure transparency and control over the insurance process.

The law will enter into force 180 days after its official publication.[3]

AI developers in Russia were obliged to insure the risks of harm to their systems

On June 25, 2024, the State Duma of the Russian Federation adopted a law on compulsory liability insurance for harm from artificial intelligence. The document, as noted, is aimed at improving the mechanisms for the application of experimental legal regimes (EPR) in the field of digital innovation.

The law is aimed, among other things, at preventing and minimizing the risk of negative consequences from the use of AI technologies. EPR participants are required to insure civil liability for harm caused to the life, health or property of other persons as a result of the use of AI. Under the new rules, an EPR participant must maintain a register of persons entering into legal relations with it; this database should identify those responsible for using AI-based solutions. In addition, companies will have to maintain a register of created results of intellectual activity, indicating their copyright holders.

AI developers in Russia obliged to insure risks

Another innovation is the formation of a commission to establish the circumstances in which harm was caused by the use of AI. The commission is expected to include representatives of the authorized and regulatory bodies as well as business community organizations, with other persons included if necessary. The changes are consistent with the basic principles for the development and use of AI technologies, compliance with which is mandatory under the National Strategy for the Development of Artificial Intelligence for the period up to 2030.

"If, during the implementation of an EPR, harm is caused to the life, health or property of an individual or to the property of a legal entity as a result of the use of solutions developed using AI technologies, the regulatory body creates, within 30 days from the date the harm is detected, a commission to establish the circumstances under which it was caused," the document says.

The law also removes the requirement that the initiator have no criminal record, since practice in establishing EPRs has shown this requirement to be neither necessary nor effective overall. A certificate of no criminal record does not affect the decision to establish an EPR and is not considered by the interested bodies (in particular, the Ministry of Internal Affairs of Russia, the FSB of Russia and the Government of the Russian Federation). At the same time, a no-criminal-record requirement may still be imposed under an EPR program, primarily on persons directly testing the innovations.

"This approach is already reflected in current EPR programs, where a person with a criminal record is prohibited from performing the functions of a test driver or a dispatcher of unmanned aircraft systems, which is a more targeted and effective measure against illegal actions," the document says.

The President of the All-Russian Union of Insurers (ARIA), Evgeny Ufimtsev, cites civil liability for harm caused by an unmanned vehicle as an example of such a risk. Liability insurance for harm from the use of AI itself raises a number of new questions for legal and insurance practice, he said. Meanwhile, Dmitry Shishkin, head of the liability insurance department at Ingosstrakh, says the AI insurance market will develop, but "the liability of AI developers should become the driver of this development."[4]

How state regulation of the AI sphere is carried out in the USA, EU and China

The rapid development of artificial intelligence, including generative services, has created a need to regulate the field. Various legislative initiatives on AI have already been adopted or are being discussed at the government level in the European Union, China and the United States. The Institute for Statistical Studies and Economics of Knowledge at the Higher School of Economics reviewed the new requirements in mid-January 2024.

In particular, the European Parliament and the European Council agreed on the provisions of the Artificial Intelligence Act (AI Act). The document is designed to protect civil rights and democracy from high-risk AI, ensure the rule of law and environmental sustainability, and stimulate innovation. The act takes a risk-based approach, introducing the categories of prohibited malicious AI practices, high-risk AI systems, limited-risk systems, and low- or minimal-risk systems (the last category faces no restrictions). High-risk AI systems must meet requirements for risk management, testing, technical reliability, training and data governance, transparency, cybersecurity and human oversight. AI systems used for biometric identification will require evaluation by a specialized body.

The United States, in turn, adopted the Executive Order on Safe, Secure, and Trustworthy AI. Like the European act, the American document requires the creators of AI systems to be transparent about their processes. To improve the safety of AI-based technologies, the National Institute of Standards and Technology will develop requirements that these systems must meet.

China has adopted the Interim Measures for the Management of Generative Artificial Intelligence Services. Under the document, the developers of such platforms are responsible for all generated content. Service creators are obliged to improve the accuracy and reliability of generated materials and to increase the transparency of their services. In addition, developers must prevent the creation of content that undermines socialist values or incites the overthrow of the political system. They must also protect users' personal data and respect intellectual property and privacy rights.[5]
