
St. Petersburg Federal Research Center of the Russian Academy of Sciences: Neural network for identifying deepfakes

Product
The name of the base system (platform): Artificial intelligence (AI)
Developers: St. Petersburg Federal Research Center of the Russian Academy of Sciences

2024: Scientists have created a neural network to identify deepfakes

Scientists at the St. Petersburg Federal Research Center of the Russian Academy of Sciences have developed a method for automatically detecting deepfakes: it identifies traces of upscaling, the manipulation used to improve the quality of generated video and make it more convincing. A neural network trained on this method analyzes video and photos and helps to identify deepfakes. This was announced on August 2, 2024 by the press service of Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications.

"Almost all modern smartphones use neural networks to improve photos. However, when a deepfake is created, the image changes far more, and that is the difference. Our algorithm has learned to detect upscaling, that is, the artificial improvement of image quality by increasing its resolution," Dmitry Levshun, a leading expert at the International Center for Digital Forensics at the St. Petersburg Federal Research Center of the Russian Academy of Sciences, told TASS.

High-quality deepfakes created for fraudulent or political purposes almost always involve upscaling, so the neural network the scientists have trained is highly effective at identifying artificially created content. The researchers also plan to build a database and to train neural networks to recognize deepfakes by other features.
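The article does not describe the detector's internals, but the idea of spotting upscaling has a simple signal-processing intuition: interpolation cannot restore fine detail, so an artificially enlarged image carries less high-frequency energy than a natively captured one. The sketch below is a hypothetical illustration of that cue, not the SPC RAS method; the function name, cutoff value, and synthetic test images are all assumptions for demonstration.

```python
import numpy as np

def high_freq_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Share of spectral energy above `cutoff` (0 = DC, 1 = Nyquist corner).

    Upscaled images tend to have suspiciously little high-frequency
    energy, because interpolation cannot restore fine detail.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    # Normalized distance of each frequency bin from the spectrum center
    radius = np.sqrt(((yy - cy) / cy) ** 2 + ((xx - cx) / cx) ** 2) / np.sqrt(2)
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())

rng = np.random.default_rng(0)
native = rng.random((128, 128))              # sharp, full-resolution "image"
small = native[::4, ::4]                     # low-resolution source
upscaled = np.kron(small, np.ones((4, 4)))   # crude 4x upscale of it

print(high_freq_ratio(native) > high_freq_ratio(upscaled))  # prints True
```

A real detector would, of course, learn such cues from data across many interpolation methods rather than rely on a single hand-picked threshold, but the asymmetry this heuristic exposes is the kind of trace a trained network can pick up.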

"At the same time, the volume of such content grew 17-fold in 2023 compared with 2022. Citizens are already concerned about this: according to a recent study by the Faculty of Law of the Higher School of Economics (HSE), 30% of Russians share worries about deepfakes, which let fraudsters create fake video and audio imitating people's voices and appearance. We cannot allow a scenario in which fakes and deepfakes created with AI literally flood our digital space. Citizens must clearly understand what content they are dealing with, whether it was generated artificially or created by a person. In this regard it is important to label the products of AI, as I have said before. And the method proposed by the St. Petersburg scientists will certainly help to identify deepfakes distributed without the appropriate labeling, most likely for illegal purposes," the deputy said.

"The relevant provisions will certainly be included in the Digital Code now being developed in Russia. In the meantime, I advise citizens to follow simple rules that help spot a deepfake on their own. For example, a video message created with deepfake technology can often be recognized by the movement of the person's eyes, the color of the skin and hair, and the contour of the face, which may look blurred or strange. With voice fakes, always assess the intonation and clarity of speech carefully. And, of course, be critical of any online request that concerns your personal data or money," the parliamentarian concluded.