
Using a deepfake to rob a bank: in what situations it is possible, and how to protect yourself

In February 2024, the Russian media spread reports of deepfakes allegedly being used to bypass authentication at Tinkoff Bank. The original post, "How scammers learned to confirm their identity in the bank. Everything turned out to be simpler...", suggested that fraudsters had used a deepfake to withdraw 200 thousand rubles from a user's accounts. TAdviser discussed with experts how great the risk really is of bypassing bank authentication and stealing funds with deepfakes.

Deepfakes vs. Biometrics

Biometric identification technology is increasingly used for remote identification and authentication of users, and deepfake technologies are being developed to bypass it. Services are already emerging that generate fake photos of documents, which scammers use to prove their identity in online communication with bank employees.

Given the development of technologies, the use of deepfakes can bypass systems based on face recognition, voice characteristics and other biometric data, - Anastasia Fedorova, director of the cybersecurity monitoring center at K2 Cybersecurity, explained to TAdviser. - So far, attackers use them mainly to deceive people rather than to hack systems. For example, a financier from Hong Kong recently transferred more than $25 million to fraudsters after a call with top managers of his company.[1] After the call, it turned out that he had been speaking with deepfakes of his colleagues. However, deepfakes will probably soon begin to be used massively by scammers around the world.

The situation with the Unified Biometric System (EBS), which is now backed by legislation and actively promoted by the state, may be especially dangerous in connection with deepfakes. There are even riskier projects, such as WorldCoin, organized by Sam Altman, CEO of OpenAI. The project collects retina scans using a special device called the Orb; more than 5 million people have already taken part, and their retina prints have been loaded into the system. It is not yet clear whether such a database of scans, combined with the artificial intelligence technologies developed by Altman's company, could be used to generate fake retina images that deceive the corresponding biometric protection systems.

Components of the Orb device, which has already captured the retina images of more than 5 million people
From my point of view, deepfakes are an atomic bomb, - Alexander Yegorkin, First Vice President of Gazprombank, told TAdviser. - The idea of collecting biometric data should be forgotten forever now that deepfakes have appeared. A few seconds of idle conversation between the rogue and the victim are enough for artificial intelligence to later reproduce any spoken text in the victim's voice. Not even many seconds are needed: it is enough to steer the initial conversation competently so that it contains all the right sounds and emotions. After that, artificial intelligence can voice any pre-written text. And what ultimately reaches the bank for authentication is some digitized code - just numbers - that has to be converted into sound, image or video with certain processing. The digits that artificial intelligence produces and the digits that a video camera produces are the same.

Nevertheless, generative artificial intelligence still produces output with certain defects, and experts assure that any deepfake image retains artifacts that make it possible to identify processing by artificial intelligence. Indeed, public examples of deepfake images and videos are still quite different from real ones - even a human can recognize them. Most likely, a specialized neural-network analyzer will always be able to detect signs of AI processing in images and video. Therefore, those who are building biometric identification systems now should include a corresponding element of protection in them.
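
Such an analyzer is essentially a binary classifier trained to separate genuine images from AI-processed ones. Below is a minimal sketch of the idea in Python with PyTorch; the tiny architecture, the 64x64 input size and the untrained weights are illustrative assumptions, not a production detector.

    import torch
    import torch.nn as nn

    class ArtifactDetector(nn.Module):
        """Tiny CNN that scores an image crop as real vs. AI-processed."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            # LazyLinear infers its input size on the first forward pass
            self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

        def forward(self, x):
            return self.head(self.features(x))  # raw logit: > 0 means "likely fake"

    detector = ArtifactDetector()    # untrained here; a real detector would be
    crop = torch.rand(1, 3, 64, 64)  # trained on labeled real/deepfake crops
    fake_probability = torch.sigmoid(detector(crop)).item()
    print(f"estimated probability of AI processing: {fake_probability:.2f}")

In practice the hard part is the training data: such a detector is only as good as the collection of real and generated samples it has seen, which is why it has to be retrained as generation models improve.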

Fake communications

Returning to the situation with Tinkoff Bank: the investigation established that no deepfake was involved in that case. After looking into the situation, the author of the post himself wrote the following on his blog:

The bank investigated and contacted me. According to its representative, the fraudster misled employees: a new account was created, then he got it merged with mine and gained access... There have been no official cases of deepfake fraud. But this does not mean it is impossible in the future. I wish the Tinkoff security service to stay one step ahead.

A screenshot from the phone that the author of the post cited as proof of the deepfake

The correspondence also contains a pinned message from the bank with the following content:

We checked your situation in detail - it has nothing to do with deepfakes; none were used here. Tinkoff has liveness technology, which rules out passing checks with deepfakes. In your case, an employee made a technical error, and the fraudster was able to gain access to your personal account. The fraudster did not undergo video identification and did not use deepfakes. The error has now been corrected; additional work has been carried out with the employee, disciplinary measures have been applied to him, and he has been sent for retraining. All funds will be reimbursed in full, and we offer our deepest apologies.

In practice, banks do not authenticate their clients via video conference - at best by photo, and even that is not always the case. A photo cannot simply be taken somewhere and submitted for authentication: it must be captured by the bank's own application, and the client must first be authenticated in the operating system and then in the application. Moreover, for a freshly installed application this authentication method will most likely not be available at all.
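
One hypothetical way to enforce the "photo must come from the bank app" rule is for the app to sign each in-app capture with a key issued only after the user has passed OS-level and in-app authentication, so that a file substituted from the gallery fails server-side verification. The sketch below illustrates this with Python's standard hmac module; the signing scheme and key handling are assumptions for illustration, not a description of any real bank's protocol.

    import hashlib
    import hmac

    def photo_accepted(photo_bytes: bytes, signature: str, session_key: bytes) -> bool:
        """Server side: accept a photo only if it carries a valid in-app signature."""
        expected = hmac.new(session_key, photo_bytes, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    # Hypothetical: the key is issued to the app only after OS and in-app login.
    session_key = b"per-session key issued after OS + app authentication"

    # The bank app signs the bytes captured by its own camera module.
    captured = b"...jpeg bytes captured inside the bank app..."
    signature = hmac.new(session_key, captured, hashlib.sha256).hexdigest()

    print(photo_accepted(captured, signature, session_key))                   # True
    print(photo_accepted(b"photo from the gallery", signature, session_key))  # False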

Protection Recommendations

For now, deepfakes can only be used to deceive people, as in the case above with the misled Hong Kong financier: without any full-fledged biometric authentication procedure, he transferred money to fraudsters. Such attacks have spread under the name "fake boss". They involve sending voice messages in instant messengers, allegedly on behalf of company executives, asking the victim to cooperate with the "security services" or "investigators" of the Ministry of Internal Affairs. Such fakes can be generated from the public speeches of the relevant officials. However, it is people performing responsible operations who are deceived this way, not authentication systems.

If the attackers have collected certain information about the victim and know her preferences, then voice fakes can work, - commented Alexey Lukatsky, business consultant for information security at Positive Technologies. - Say there is an executive who constantly uses voice messages; an attack using his deepfake can then work. There are also schemes where a voice message arrives first, or a fake video conference is created in deliberately poor quality so that it cannot be determined to be fake, and then, citing the low quality, the attackers suggest switching to text correspondence in the same messenger.

Thus, measures to protect against deepfakes should be aimed at the people in financial organizations who verify the multimedia component of communications. Experts recommend teaching them to distinguish deepfakes - pictures, video and sound - and organizing special trainings so that they can independently spot the signs of deepfake use.

When choosing biometric identification tools, it is also worth using technologies that are difficult to deceive with a deepfake. For example, identification by the vein pattern of the palms relies on the thermal pattern created by the veins of the palm, which is very difficult to reproduce artificially.

Oddly enough, voice-activated biometric devices are harder to deceive, - added Anastasia Fedorova. - They often combine voice activation with unique questions that only the user should know the answer to, and that is harder for attackers to get around. More advanced security systems may also analyze the audio vibrations that only vocal cords produce. Fingerprints are also difficult for a deepfake to forge: scanners hold not only the enrolled print data, which in principle can be hacked - the devices also sense heat to make sure a real finger is on their surface.
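
A minimal sketch of the voice-plus-challenge combination Fedorova describes: the voice must match the enrolled voiceprint, and the speaker must answer a randomly chosen personal question. The question store and match_score() below are illustrative stand-ins; a real system would call a speaker-verification model.

    import secrets

    CHALLENGES = {  # illustrative question -> expected answer store
        "In what city did you open the account?": "kazan",
        "What was the name of your first pet?": "barsik",
    }

    def match_score(utterance: bytes, voiceprint: bytes) -> float:
        """Toy stand-in for a speaker-verification model (similarity in 0..1)."""
        return 1.0 if utterance == voiceprint else 0.0

    def voice_challenge_auth(utterance: bytes, spoken_answer: str,
                             voiceprint: bytes, question: str) -> bool:
        voice_ok = match_score(utterance, voiceprint) >= 0.9
        answer_ok = spoken_answer.strip().lower() == CHALLENGES[question]
        return voice_ok and answer_ok  # a cloned voice still fails the unknown answer

    question = secrets.choice(list(CHALLENGES))  # a different question each session
    print(question)
    print(voice_challenge_auth(b"voice sample", "Barsik", b"voice sample",
                               "What was the name of your first pet?"))  # True

The point of the random question is that a pre-recorded or pre-generated deepfake cannot anticipate it, forcing the attacker to synthesize a plausible answer live.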

In general, we can conclude that the advent of deepfakes has significantly worsened the reliability of biometric identification, but has not reduced it to zero. In some cases, biometrics can be used quite conveniently and efficiently, but it is important to build the security system so that it does not depend entirely on biometric authentication: other factors, such as secret information or one-time passwords, must also be checked.
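
As a sketch of that "never rely on biometrics alone" principle, the snippet below combines a biometric match score with a one-time password check using the pyotp library; the 0.95 threshold and the way the score is produced are assumptions for illustration.

    import pyotp

    def authenticate(biometric_score: float, otp_code: str, otp_secret: str,
                     threshold: float = 0.95) -> bool:
        """Both factors must pass: a biometric match AND a valid one-time password."""
        biometric_ok = biometric_score >= threshold
        otp_ok = pyotp.TOTP(otp_secret).verify(otp_code)
        return biometric_ok and otp_ok

    secret = pyotp.random_base32()           # stored server-side at enrollment
    current_code = pyotp.TOTP(secret).now()  # what the user's authenticator shows

    print(authenticate(0.98, current_code, secret))  # True: both factors pass
    print(authenticate(0.98, "000000", secret))      # False: biometrics alone fails

With this arrangement, even a perfect deepfake that fools the biometric check still fails authentication without the second factor.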

Notes