
Regulation of artificial intelligence

Main article: Artificial Intelligence

Chronicle

2024

AI developers in Russia were obliged to insure the risks of harm to their systems

On June 25, 2024, the State Duma of the Russian Federation adopted a law on compulsory insurance of liability for harm caused by artificial intelligence. The document is aimed at improving the mechanisms for applying experimental legal regimes (EPRs) in the field of digital innovation.

Among other things, the law seeks to prevent and minimize the risk of negative consequences from the use of AI technologies. EPR participants are required to insure their civil liability for harm to the life, health or property of other persons resulting from the use of AI. Under the new rules, an EPR subject must maintain a register of persons entering into legal relations with it; this database should identify those responsible for the use of AI-based solutions. In addition, companies will have to maintain a register of the results of intellectual activity they create, indicating the copyright holder of each.

Another innovation is the formation of a commission to establish the circumstances under which harm was caused through the use of AI. The commission is expected to include representatives of the authorized and regulatory bodies as well as business-community organizations; other persons may be included if necessary. The changes are consistent with the basic principles for the development and use of AI technologies, compliance with which is mandatory in implementing the National Strategy for the Development of Artificial Intelligence for the period up to 2030.

"If, in the course of implementing an EPR, harm is caused to the life, health or property of an individual or to the property of a legal entity as a result of the use of solutions developed with AI technologies, the regulatory body shall, within 30 days of the discovery of such harm, create a commission to establish the circumstances under which it was caused," the document says.

The law also removes the requirement that the initiator of an EPR have no criminal record: the practice of establishing EPRs has shown this requirement to be neither necessary nor effective. A certificate of no criminal record does not affect the decision to establish an EPR and is not reviewed by the interested bodies (in particular, the Ministry of Internal Affairs of Russia, the FSB of Russia and the Government of the Russian Federation). At the same time, a no-criminal-record requirement may still be imposed under an EPR program, primarily on persons directly testing the innovations.

"This approach is already reflected in current EPR programs, where a person with a criminal record is prohibited from acting as a test driver or as a dispatcher of unmanned aircraft systems, which is a more targeted and effective measure against illegal actions," the document says.

Evgeny Ufimtsev, President of the All-Russian Union of Insurers (ARIA), cites civil liability for harm caused by an unmanned vehicle as an example of such a risk. Liability insurance for harm arising from the use of AI raises a number of new questions for legal and insurance practice, he said. Dmitry Shishkin, head of the liability insurance department at Ingosstrakh, expects the AI insurance market to develop, but notes that "the responsibility of AI developers should become the driver of this development."[1]

How AI is regulated by the state in the USA, EU and China

The rapid development of artificial intelligence, including generative services, has created a need to regulate the field. Legislative initiatives on AI have already been adopted or are under discussion at the government level in the European Union, China and the United States. The Institute for Statistical Studies and Economics of Knowledge at the Higher School of Economics reviewed the new requirements in mid-January 2024.

In particular, the European Parliament and the Council of the European Union agreed on the provisions of the Artificial Intelligence Act (AI Act). The document is designed to protect civil rights and democracy from high-risk AI, ensure the rule of law and environmental sustainability, and stimulate innovation. The act takes a risk-based approach, distinguishing prohibited AI practices, high-risk AI systems, limited-risk systems, and systems with low or minimal risk (on which no restrictions are imposed). High-risk AI systems must meet requirements for risk management, testing, technical robustness, training and data governance, transparency, cybersecurity and human oversight. AI systems used for biometric identification will require assessment by a specialized body.

The United States, in turn, adopted the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Like the European act, the American document requires the creators of AI systems to be transparent about their processes. To improve the safety of AI-based technologies, the National Institute of Standards and Technology (NIST) will develop requirements that these systems must meet.

China has adopted the Interim Measures for the Management of Generative AI Services. Under the document, the developers of such platforms are responsible for all generated content. Service providers are obliged to improve the accuracy and reliability of generated materials and to increase the transparency of their services. In addition, developers must prevent the creation of content that undermines socialist values or incites the overthrow of the political system. They must also protect users' personal data and respect intellectual property and privacy rights.[2]
