2024/10/10 15:04:32

Deepfake fraud



Deepfakes (DeepFake)

Main article: Deepfakes (DeepFake)

Chronicle

2024: Deepfake attacks on bank customers are growing in Russia

Russia has recorded a rise in the number of attacks that use deepfake technology to target customers of banks and financial platforms. This became known in October 2024.

According to the system integrator Informzaschita, the number of such incidents has grown by 13% since the beginning of 2024, reaching 5,700 cases. Experts attribute this to the spread and accessibility of tools that let attackers produce high-quality face and voice fakes, which inspire greater trust in potential victims.


According to Kommersant, the main targets of such attacks are bank customers and employees of financial organizations. Pavel Kovalenko, director of the Informzaschita Fraud Prevention Center, says attackers pose as fake financial advisers who contact customers via video calls, impersonating well-known experts or company executives. In this way they persuade victims to invest in fictitious projects or to hand over access to their banking data. Experts warn that the number of such attacks may double in 2025.

The core deception mechanism is the substitution of voice and facial expressions using artificial intelligence. According to Artem Brudanin, head of cybersecurity at RTM Group, deepfake attacks succeed because people are inclined to trust familiar faces and voices. Informzaschita estimates the success rate of such attacks at roughly 15-20%.

Among the most common schemes are forging the voice and appearance of company executives to gain access to financial information, or to convince employees to transfer funds to fraudulent accounts. Andrei Fedorets, head of the Information Security Committee of the Association of Russian Banks, explains that the standard scenario begins with hacking an employee's account, after which the attackers build a deepfake from the voice messages and photos found in the correspondence.[1]

Notes