
Consortium for Artificial Intelligence Technology Security Research


History

2024

Six areas of AI safety development in Russia identified

On December 10, at an ISP RAS conference, Alexander Shoitov, Deputy Minister of Digital Development of the Russian Federation, spoke about the work planned within the consortium created in May for research on the security of artificial intelligence technologies. As of December, the consortium comprised 12 organizations, and in January 2025, 16 more major organizations working on artificial intelligence are expected to join.

According to Alexander Shoitov, the consortium was created to bring together AI developers, the most competent information security vendors, the scientific community, federal executive bodies, and regulators on a single platform. This is expected to enable a mature approach to the development of AI technologies: within a single association, it should be possible to prepare unified, inexpensive technical solutions that are convenient and fairly simple to implement, and to offer the regulator the documents needed for regulatory oversight of their use.

Alexander Shoitov talks about the work of the AI security consortium

A new strategy for the development of artificial intelligence until 2030 has been formed within the national program "Data Economy"; it calls for the accelerated, large-scale, and mass adoption of AI technologies.

"The new strategy pays significantly more attention to issues of trust in artificial intelligence systems, security in general, and information security in particular," said Alexander Shoitov. "The program states that in critical information infrastructure, in state information systems, and in critical areas in general, where security requirements are significant, adequate compensating information security measures must be applied."

Work on the information security of AI began back in 2019 at the Academy of Cryptography and ANO NTC TsK, headed by Alexander Shoitov; the latter is one of the consortium's founders. The academy also brought leading information security vendors into theoretical research. In addition, a reference center for trusted AI was created on the basis of ISP RAS, which develops trusted development environments (frameworks).

In the course of this work, the main risks and the types of attacks that can be carried out using artificial intelligence were identified. The AI security consortium's task is to develop technological solutions that are safe without requiring strict regulation of AI. The main areas of effort for the consortium's member companies are: trusted development environments for AI solutions, a testing ground, a data depersonalization center, and theoretical research.

The consortium is forming six working groups: regulatory regulation in the field of AI (draft legal acts), testing AI technologies against attacks (the testing ground), trusted AI technologies (a solutions register), secure AI development (MLSecOps), depersonalization of big data, and artificial intelligence in critical information infrastructure (CII).

The work is expected to be organized as follows: all participants submit ideas to the regulatory group, which shapes proposals and submits them to the regulators to improve the relevant documents. NTC TsK leads the testing working group; it is developing a suite of checks covering safety, functional stability, and so on. The register is to be filled with solutions that have been tested at the testing ground and have proven their safety. The result of this group's work is expected to be a state information system where trusted AI models will be stored, most likely operated by the Ministry of Digital Development. NTC TsK also leads the working group on data depersonalization.

The secure development group is led by ISP RAS. It plans to develop methods for properly organizing the secure development of artificial intelligence technologies, and it will be responsible for the practical implementation of the necessary checks, both for safety and for related aspects, possibly including some questions of ethics. The group's work has only just begun.

"We lead this group because we already have a long history with the center for the security of artificial intelligence," Vartan Padaryan, a leading researcher at ISP RAS, explained to TAdviser. "Our contribution to the working group will be what we have previously developed for organizing development environments: the necessary checks, particular implementation methods, secure software development practices, and specific checks that account for the features of artificial intelligence. We want to synchronize proposals from the other participants so that the output is an MLSecOps pipeline that satisfies everyone. It should be complete and compliant with global and regulatory requirements, although clear regulatory requirements have not yet been set."
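The article does not describe the consortium's actual MLSecOps pipeline. As a rough illustration of the idea Padaryan outlines, running a model through an agreed sequence of safety and functional-stability checks, here is a minimal hypothetical sketch in Python; all names and checks are invented for illustration:

```python
# Hypothetical sketch of an MLSecOps-style check pipeline.
# The consortium's real checks and interfaces are not public;
# every name here is an assumption made for illustration.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str

def run_pipeline(model, checks: List[Callable]) -> List[CheckResult]:
    """Run each security/robustness check against the model and collect results."""
    return [check(model) for check in checks]

def stability_check(model) -> CheckResult:
    # Stand-in for the "functional stability" tests mentioned in the article:
    # a tiny input perturbation should not change the output materially.
    baseline = model(1.0)
    perturbed = model(1.0 + 1e-6)
    delta = abs(baseline - perturbed)
    return CheckResult("stability", delta < 1e-3, f"delta={delta:.2e}")

def bounds_check(model) -> CheckResult:
    # Stand-in for a safety check: outputs stay in the expected range.
    ok = all(0.0 <= model(x) <= 1.0 for x in (0.0, 0.5, 1.0))
    return CheckResult("output-bounds", ok, "expected range [0, 1]")

if __name__ == "__main__":
    toy_model = lambda x: max(0.0, min(1.0, 0.5 * x))  # placeholder "model"
    for result in run_pipeline(toy_model, [stability_check, bounds_check]):
        print(result.name, "PASS" if result.passed else "FAIL", result.detail)
```

The point of the sketch is the shape, not the checks: a register-bound pipeline would run many such checks (attack resistance, data handling, etc.) and gate a model's entry into the trusted-solutions register on all of them passing.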

Going forward, the consortium plans to develop various methods for implementing and using AI, as well as draft standards; specialized standards will be created. A concept is now being developed for extending the map of national secure software development standards to cover work with artificial intelligence technologies.

Movement toward standardization is inevitable, so standards for AI safety and trust must be synchronized with the relevant Rosstandart technical committees for the various industries. One of the consortium's tasks is to bring a consolidated, scientifically sound position to all industry technical committees.

According to Alexander Shoitov, artificial intelligence is at an early stage of development compared with cryptography, and it is not yet clear how to interpret its results. A significant peculiarity is that this software is combined with data. Russia does not yet fully master all artificial intelligence technologies, although there is significant movement in this direction. Under such conditions, adequate measures are needed to ensure trust in AI. If such measures are not taken, critical areas will require strict requirements on technology development, and those would slow development. This, according to Alexander Shoitov, must not be allowed.

Dmitry Shevtsov voiced support for the activities of the AI security consortium

"It is important that this work proceeds with the support of government agencies," said Dmitry Shevtsov, head of a directorate at FSTEC. "Specialists from the federal executive bodies that regulate the fields where these technologies are used or planned to be used should be involved in the work on ensuring trust in artificial intelligence. We support and participate in the work of this consortium."

The consortium is open to competent participants. There are, for example, tasks related to deepfakes and their regulation that also need to be solved. This is an urgent task, but no working group in this area has been formed yet. Strong companies able to offer solutions for controlling deepfakes are expected to join the consortium. Other areas of development are also possible.

Creation of the consortium

The Ministry of Digital Development, Communications and Mass Media of Russia (Ministry of Digital Development of the Russian Federation) has created a consortium tasked with ensuring information security in the field of artificial intelligence (AI). As became known in August 2024, the new association will include about 10 leading companies and 5 higher education institutions engaged in the development and research of AI technologies.

The consortium was created with the participation of the Academy of Cryptography, the Institute for System Programming of the Russian Academy of Sciences (ISP RAS), and the National Coordination Center for Computer Incidents. Its main goal is to develop and implement secure technologies for working with various types of data, including depersonalized personal data. These technologies will be implemented using machine learning and artificial intelligence methods.

The Ministry of Digital Development of the Russian Federation has created a consortium for the safety of artificial intelligence

Deputy Minister of Digital Development, Communications and Mass Media Alexander Shoitov noted that about 10 more companies and 5 universities would join the consortium in September 2024. Among the key tasks facing the new participants is raising the security level of domestic software products and hardware-software systems using trusted and secure AI technologies.

"Today, within the framework of Russia's work on artificial intelligence, issues of information security and the reliability of technologies are especially relevant," Shoitov emphasized.

The consortium will also develop new methods and solutions aimed at ensuring trust in AI technologies and increasing their security using advanced cryptographic tools. Within the consortium, it is planned to create shared-use centers for scientific and technological equipment, which will significantly strengthen the participants' research and production capabilities.[1]

Notes