
Deepfake fraud


Deepfakes (DeepFake)

Main article: Deepfakes (DeepFake)

Chronicle

2025: Phone scammers used a deepfake of Italy's defense minister to swindle money out of billionaires

On February 9, 2025, it became known that telephone scammers had used artificial intelligence tools to create deepfakes of Italian Defense Minister Guido Crosetto and other officials. Using synthesized voices, the attackers swindled money out of a number of Italian entrepreneurs and billionaires. Read more here

2024

Telephone scammers in Russia began using deepfakes of deceased people to extract money from their relatives

In early November 2024, it became known that a new cybercriminal scheme based on artificial intelligence technologies was gaining momentum in Russia. Telephone scammers use deepfakes of deceased people to extract money from their relatives under various pretexts.

One of these cases, according to Komsomolskaya Pravda, occurred in Tatarstan. A young woman received a call from an unknown number in the middle of the night and heard the voice of her mother, who had died about six months earlier. The "mother" said that in Tibet, in a mountain temple, "scientists of the ninth universal level" had invented a system that made communication between the worlds of the dead and the living possible. However, this was a "very expensive service," so the woman was asked to transfer 20 thousand rubles to a specified bank card to continue communicating with her deceased mother.

Telephone scammers in Russia have begun using deepfakes of the dead to extract money from their relatives

However, the woman's husband doubted this afterlife invention and explained to his wife that with modern AI technologies cybercriminals can fake not only the voice but also the image of almost anyone. And since the deceased woman had recorded cooking videos, the scammers had enough audio samples to generate deepfakes.

Another similar case occurred in the city of Kurgan, where a 90-year-old woman received a call from her "late son," who asked her to transfer 100 thousand rubles "to the other world" through an ATM in a shopping center. The elderly woman got confused by the ATM interface and asked a security guard for help; after learning to whom and why the money was being sent, he prevented the fraud.

To protect against voice deepfakes, experts recommend paying attention, when an unusual call arrives, to the sound quality, unnatural monotony of the voice, slurred speech and background noise.[1]

How scammers earned $46 million using deepfakes of attractive women on dating sites

In mid-October 2024, Hong Kong police arrested 27 people on suspicion of fraud involving face-swapping technology (video deepfakes) on online dating sites. As a result of this scheme, the victims lost about $46 million. Read more here

The Ministry of Internal Affairs warned of fraud with fake orders of the FSB

The Ministry of Internal Affairs of Russia reported the emergence of a fraudulent scheme in which attackers use fake FSB orders. Posing as a company's head, they contact its employees and claim that the FSB of Russia has opened an inspection against them over a possible violation of the law. This was announced on October 8, 2024 by the press service of Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications. Read more here.

Fraudsters in Russia target parents: they fake videos of their children's faces and voices

In early September 2024, it became known that fraudsters in Russia are actively using new deception schemes built on artificial intelligence. Attackers target parents with fake videos imitating their children's faces and voices.

In this scheme, cybercriminals contact the victim by voice or video call, posing as close relatives; AI-generated deepfake recordings can be used for this. The attackers then deliver some piece of distressing news. The goal is to provoke an emotional reaction and push the person into a rash decision: transferring money.

Fraudsters in Russia attack parents by faking videos with children

One such case occurred in Voronezh in early September 2024. Local resident Polina Markovna S. received a call from her daughter Vera, a student in Moscow. Via video link, allegedly from a police station, the daughter reported that she was in trouble and asked her mother to urgently transfer 100 thousand rubles without asking any questions. The recipient's account number was sent by SMS.

File:Aquote1.png
Mom, I'm in trouble! Transfer me 100 thousand rubles urgently! Urgent! And don't ask about anything. I'll explain everything later, once you transfer it. I'm at the police station now. The phone will be switched off. As soon as you transfer the money, they will let me go, and I will call you right away. I sent you the account number in a message, the girl said.
File:Aquote2.png

By coincidence, Polina Markovna soon received a real call from her daughter, who said she was fine. This saved the money the woman had been about to transfer to the fraudsters.

Attackers impersonate not only victims' children but also bank employees, law enforcement officers and others. In case of suspicious calls, information security experts recommend asking clarifying questions and then calling the real relatives directly. This helps avoid financial losses.[2]

Central Bank: Fraudsters hack Russians' social networks, make deepfakes and extract money from their friends and relatives

In mid-August 2024, the Bank of Russia warned of a new threat from fraudsters actively using modern technologies to steal money. According to a statement from the press service of the Central Bank of the Russian Federation, attackers are increasingly using deepfakes - fake videos created with neural networks - to deceive citizens and extort funds from their friends and relatives.

As the Central Bank notes, the fraudsters' scheme usually begins with hacking the victim's account on social networks or in instant messengers. Having gained access to personal data such as photos, videos and audio recordings, the attackers create realistic videos using deepfake technology. In such videos, the person describes an alleged problem, for example a serious illness or a traffic accident, and asks for money to be urgently transferred to a specified account.

The Central Bank warned that fraudsters hack into the social networks of Russians, create deepfakes and lure money from their friends and relatives

According to RBC, the Bank of Russia stressed that such appeals may be a trap set by fraudsters. Attackers especially often make deepfakes depicting employers, colleagues or even government officials, which lends extra credibility to such messages.

The Central Bank strongly recommends that citizens be vigilant and not succumb to requests for financial assistance received through social networks or instant messengers. Bank of Russia specialists offer several methods for verifying the authenticity of such requests:

  • Call the sender - before transferring money, you need to contact the person directly and clarify whether they really need help.
  • Ask a personal question - if there is no way to call, you can ask a question that only this person knows the answer to. This will help identify the fraudster.
  • Evaluate a video message - pay attention to possible defects, such as monotony of speech, unnatural facial expressions or anomalies in the sound, which may indicate that you have a deepfake.

Amid the growing number of deepfake cases, a bill was introduced in the State Duma in May 2024 providing for criminal liability for the creation and distribution of such fake materials.[3]

Fraudsters using deepfakes forge documents of Russians

Fraudsters have learned to fake citizens' documents using artificial intelligence (AI) technologies. As before, when creating fake digital copies they either alter the numbers or try to pass off an invalid document as valid, but now deepfakes are also used to pass authentication and to synthesize data. This information was shared with TAdviser on May 8, 2024 by the press service of State Duma deputy Anton Nemkin, citing Izvestia. Read more here.

In South Korea, a fraudster stole $50,000 from a woman using Elon Musk's deepfake

In South Korea, a fraudster stole 70 million won (about $50,000) from a woman using the deepfake of Tesla founder Elon Musk. The incident became known at the end of April 2024.

According to the Independent, citing South Korean publications, the fake Musk wrote to a woman who was a fan of the businessman. The Korean woman did not believe it at first, but the attacker convinced her by sending a photo of his identity card and several pictures from work.

Fraudster stole 70 million won from a woman using the deepfake of the founder of Tesla Motors

File:Aquote1.png
Musk talked about his children and how he flies by helicopter to work at Tesla or SpaceX. He also explained that he contacts fans very rarely, the deceived South Korean woman said, sharing details of the correspondence.
File:Aquote2.png

The two continued to communicate on social media and at one point decided to talk by video. During the video call, the scammer used a deepfake to pose as Musk and told the woman that he loved her. The deepfake turned out to be so convincing that she no longer had any doubts that Musk himself was really in contact with her.

After that, the fraudster gave her a Korean bank account number, saying: "I'm happy when my fans get rich because of me." He said the account belonged to one of his Korean employees. In total, she deposited 70 million won into the account, which the fake Elon Musk promised to invest in business development and return with large interest to make her rich. The scheme turned out to be fraudulent, and the victim went to the police.

This is not the first time that Elon Musk's deepfake has been used in South Korea. Previously, unknown persons hacked into a YouTube channel owned by the South Korean government, renamed it SpaceX Invest and broadcast fabricated videos with Elon Musk discussing cryptocurrencies.[4]

4 thousand celebrities worldwide have become victims of pornographic deepfakes

In 2023, approximately 4 thousand famous people around the world became victims of pornographic deepfakes. The figures were disclosed by Channel 4 News in mid-March 2024.

An analysis of the five most visited deepfake websites by Channel 4 News found that attackers fabricate material depicting female actors, TV stars, musicians and bloggers. Of the approximately 4 thousand victims of pornographic deepfakes, 255 are British.

Approximately 4 thousand famous people around the world became victims of pornographic deepfakes

In 2016, researchers found a single fake pornographic video on the Internet. In the first three quarters of 2023, by contrast, almost 144 thousand new deepfake materials were uploaded to the 40 most visited porn sites - more than in all previous years combined. In Britain, the Online Safety Act came into force on January 31, 2024: it prohibits the unauthorized sharing of deepfake materials in the country, although the creation of pornographic deepfakes is not itself prosecuted. Representatives of Ofcom (Office of Communications), the British regulator of television, radio and postal services, have spoken about the problem.

File:Aquote1.png
Illegal deepfake materials cause significant damage. Under the Online Safety Act, companies will have to assess the risk of such content being distributed through their services, take measures to prevent its appearance, and promptly remove such materials, Ofcom said in a statement.
File:Aquote2.png

Deepfakes can be used not only to harm specific individuals. Such materials give attackers the opportunity to spread fake news, carry out various fraudulent schemes, etc.[5]

Russians lured with ads for paid film voice-over work to steal samples of their voices, which are then used to steal money from their relatives

Russians are being lured with ads offering paid film voice-over work so that scammers can collect samples of their voices, which are then used to steal money from their relatives and friends. Angara Security, an information security company, described the new fraud scheme in early March 2024.

As Vedomosti writes with reference to Angara Security materials, respondents to ads posted on the Internet are asked to provide an audio recording in the format of a phone call or a recorded conversation, which must be sent via private message or a bot. For taking part in the "project" they are offered a fee of 300 to 5,000 rubles, which may actually be paid to the victim.

Russians are lured by ads about paid voice acting of films to steal samples of their voice

According to experts, these ads do not pose a direct threat in themselves, but fraudsters use the collected voice data to train neural networks that generate audio messages. These are later used to extort money from victims' contacts, with the scammers posing as a relative, colleague, friend and so on. In addition, swindlers can contact the banks where the victim holds accounts on his or her behalf.

Angara Security notes that such ads most often appear in Telegram channels. However, attackers can use other platforms, as well as spam calls offering to earn money on a "big project." The number of such messages, excluding spam calls, was 1,200 in 2021, quadrupled to 4,800 in 2022, and reached 7,000 in 2023, information security experts calculated.

Experts interviewed by Vedomosti also point to another potential source of Russians' voice samples: fraudsters can obtain them from videos published on social networks. There is not even a need to hack user accounts, since most video content is publicly accessible, experts say.[6]

The company transferred $25 million to fraudsters after a video conference with employee deepfakes

In early February 2024, it became known about a major scam using deepfakes in Hong Kong. A large local company with international business transferred $25 million to fraudsters after a fabricated video conference with employees.

According to the South China Morning Post, an employee of the finance department received a phishing message purporting to come from the company's CFO in Britain. The message instructed him to carry out a secret transaction, but the employee doubted the letter's authenticity. That is where AI came into the fraudulent scheme.

It became known about a major scam using deepfakes in Hong Kong

Using deepfake technology, the attackers organised a conference call with deepfake footage of the CFO and other employees to persuade the victim to transfer the money. Seeing his colleagues on the video was enough for him to initiate the transaction.

According to the employee who fell victim to the scam, during the call he did not suspect a trick: all the participants looked natural and spoke and behaved like his colleagues. He realized what had happened only after he decided to contact the company's head office to clarify the details of the bank transfer. On the call, the scammers asked the employee to introduce himself but otherwise barely interacted with him, mostly issuing instructions, and then the meeting ended abruptly.

As of early February 2024, an investigation was underway and no arrests had been made. The police are not disclosing the names of the company or the victim in the interests of the investigation. The scammers are known to have taken the audio and video recordings used to create the deepfakes of the call participants from publicly available sources.

Hong Kong police noted that this was the first scheme of its kind involving such a large sum. According to Baron Chan Shun-ching, Acting Senior Superintendent of the Crime Bureau, in previous cases scammers had deceived victims in one-on-one video calls.[7]

Fraudsters began to fake the voices of Russians with the help of AI and deceive their relatives in instant messengers

Fraudsters have begun faking the voices of Russians with the help of AI and deceiving their relatives and friends in instant messengers. The scheme was described on January 11, 2024 by the department of the Ministry of Internal Affairs of Russia for combating the illegal use of information and communication technologies.

According to the department, the swindlers first hack accounts in Telegram or WhatsApp, for example through fake online polls. The scammers then download voice messages and generate new ones with the context they need.

Fraudsters began to fake the voices of Russians with the help of AI and deceive their relatives

The attackers then, according to the Ministry of Internal Affairs, send the generated messages to personal and group chats with a request to lend a certain amount of money, attaching to the voice message a photo of a bank card with a fake recipient name.

One victim of this scam told RBC that the fraudsters send a fake voice message both in personal correspondence and in all chats of which the account owner is a member. A photo of a bank card with a first and last name is sent to the same addresses. The name this person used on social networks differed from the one in their passport, yet the fraudsters used the passport data. In that case the scammers asked for 200,000 rubles. One VKontakte user lost 3,000 rubles in a similar way.

F.A.C.C.T. called the scheme new and "quite advanced" for Russia. According to its experts, at the first stage a Telegram or WhatsApp account is hacked, for example through fake online polls, a wave of which was observed at the end of 2023. The scammers then download saved voice messages and use AI services to create new ones with the necessary context, F.A.C.C.T. said.

Irina Zinovkina, head of the Positive Technologies research group, questioned the effectiveness of such fraud, since not all users send voice messages, and it is not always possible to splice the necessary phrase together from existing material.[8][9]

21% of Russian companies were attacked using deepfakes

On January 14, 2025, the B1 group of companies and MTS AI presented a study on attacks against companies using audio and video spoofing (cyberattacks in which the criminal impersonates a trusted person in order to gain a benefit). According to the survey, 92% of respondents believe that deepfake-based spoofing poses a real threat to business, and 21% admitted that their companies have already suffered from AI-enabled fraud. Read more here

Scammers in Russia began to use the method of "cybermystification" to deceive people

In November 2024, it became known that criminals had developed a new technique for deceiving citizens that combines psychological manipulation with digital technologies. Attackers use so-called "cybermystification," applying social engineering methods and modern digital tools, including instant messengers and voice and image substitution technologies, at the same time.

According to RIA Novosti, fraudsters are actively introducing deepfakes - fake images and voices - into traditional deception schemes to increase the trust of potential victims. The technology allows you to create convincing copies of the voices and appearance of real people.

Fraudsters in Russia began to use the method of "cybermystification" to deceive people

File:Aquote1.png
The fraudsters' strength is that they are the first to take full hold of the victim's mind; they gain complete trust using social engineering methods, said Sergei Veligodsky, director of the Sberbank Fraud Prevention Department.
File:Aquote2.png

To counter new threats, Sberbank has developed a system for identifying fake images and voices. The bank's anti-fraud system unites more than 30 partners in the online exchange of data on risky customer transactions and automatically blocks suspicious transactions.

Experts of the financial organization emphasize the need to improve methods for developing digital literacy of the population. Traditional ways of informing about cyber threats are losing their effectiveness, which requires the development of new approaches to protecting customers.

The bank notes an increase in the number of cases of using deepfake technologies in fraudulent schemes. Attackers use them to imitate the voices and appearance of bank employees, law enforcement agencies and other officials.

The credit institution's anti-fraud system analyzes transactions for signs of fraud, regardless of the use of substitution technologies. If suspicious activity is detected, the system automatically suspends operations for additional verification.[10]
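The article describes the anti-fraud system only in general terms: transactions are analyzed for signs of fraud, and suspicious ones are suspended for additional verification. Below is a minimal, purely illustrative sketch of such rule-based scoring in Python. It is not Sberbank's actual system; every rule, weight, threshold and field name here is a hypothetical example.

```python
# Generic illustration of a rule-based anti-fraud check (not any bank's real system).
# A transaction is suspended for additional verification when simple risk rules fire.
from dataclasses import dataclass


@dataclass
class Transaction:
    amount_rub: float
    recipient_is_new: bool       # first transfer to this recipient
    device_is_new: bool          # session from an unfamiliar device
    initiated_during_call: bool  # transfer started while a phone call is in progress


def risk_score(tx: Transaction) -> int:
    """Sum the weights of the risk rules the transaction triggers (all weights are made up)."""
    score = 0
    if tx.amount_rub >= 100_000:
        score += 2
    if tx.recipient_is_new:
        score += 1
    if tx.device_is_new:
        score += 2
    if tx.initiated_during_call:
        score += 3  # a common sign of social-engineering pressure
    return score


def decide(tx: Transaction, threshold: int = 4) -> str:
    """Return 'allow' or 'suspend for additional verification'."""
    return "suspend for additional verification" if risk_score(tx) >= threshold else "allow"


if __name__ == "__main__":
    tx = Transaction(amount_rub=200_000, recipient_is_new=True,
                     device_is_new=False, initiated_during_call=True)
    print(decide(tx))  # -> suspend for additional verification
```

Real systems combine many more signals (behavioral biometrics, device fingerprints, partner data exchange) and typically use machine learning models rather than fixed rules, but the decision flow - score, compare to a threshold, suspend for verification - is the same in outline.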

Use of Donald Trump deepfakes in crypto fraud recorded

F.A.C.C.T. has recorded the use of deepfakes of Donald Trump to advertise fake crypto resources following his victory in the US presidential election, the company reported on November 18, 2024. In addition to Trump, the list of images popular among scammers includes billionaire Elon Musk, American journalist Tucker Carlson, Ethereum co-founder Vitalik Buterin, footballer Cristiano Ronaldo and model Kim Kardashian. Given the rising bitcoin price and the emergence of new fraudulent resources, the risk of investors putting money "in the wrong place" is quite high: just one large crypto scam team stole more than $16 million from victims in 13 months.

One of the main trends in crypto scams in 2023-2024 was the active use of deepfakes in advertising fraudulent crypto projects, according to a study by analysts at F.A.C.C.T.'s Digital Risk Protection department. To create the videos with neural networks, fraudsters use both paid and free tools.

These deepfakes target an English-speaking audience and are used to generate advertisements for fake crypto exchanges and cryptocurrency exchange platforms on TikTok, YouTube and social networks banned in Russia. Deepfake generation technology is not yet perfect: if you look closely, many such videos show flaws in facial expressions.

F.A.C.C.T. analysts note three main fraud schemes in the crypto industry: fake crypto exchanges and cryptocurrency exchangers, drainers, and scam tokens.

Since 2022, F.A.C.C.T. analysts have found at least 600 domains of fake crypto exchanges. Outwardly they are hardly distinguishable from real ones and offer a standard set of operations. The attacker's task is to get the user to make a deposit. Even a person without a crypto wallet can fall victim to this scheme: most fake exchanges and cryptocurrency exchangers allow deposits by bank card. Links to fake crypto resources are distributed through YouTube and social networks banned in Russia.

Along with the address of the fake crypto exchange, scammers usually give the victim a promo code for a bonus. To withdraw the bonus, the victim is asked to top up the account with their own funds. As in all such schemes, this money cannot be recovered.

With fake cryptocurrency exchange services the situation is even simpler: the victim transfers money to the wallet indicated on the website and receives nothing back. Since 2023, analysts at F.A.C.C.T.'s Digital Risk Protection department have discovered about 70 domains created for this scheme.

For two years, attackers have been actively using drainers to steal cryptocurrency - malware that lets them inspect the contents of victims' crypto wallets and withdraw their assets. The attacker's task is to lure the user to a malicious site that infects the victim's device.

Links to such sites are usually distributed through ads or posts on social networks and video hosting services, email campaigns, promotion of malicious sites in search results for popular keywords, and messages on cryptocurrency forums. Attackers also often contact potential victims directly through instant messengers, using contact details that users leave on cryptocurrency-related sites.

The scam token scheme is quite simple: a fraudster creates a token and promotes it, promising that it will soon rise in price and bring profit. In fact, the victim can only buy a token, but can never sell it.

Telegram channels, X (formerly Twitter) accounts and Discord channels are usually used to advertise the token. At first such channels work to attract subscribers, "warming up" the audience for several weeks or months. Once the subscriber count reaches tens or hundreds of thousands, the scammers start publishing posts about a new "miracle token" that, they claim, is about to rise in price several times over and therefore must be bought quickly. The token itself is listed on a legitimate exchange, which in turn explicitly warns that the token may be fraudulent and that there is a high risk of irrevocably losing funds.

An analysis by F.A.C.C.T. researchers of five relatively large criminal groups with active participation of Russian-speaking workers running the fake crypto exchange scheme showed that their average theft is $233, while the largest amount stolen from a single victim was $26,958. In one of the largest teams, which deals mainly in drainers, the average theft is many times higher - $5,528 - and the maximum transaction amount was $832,787. Over 13 months, from April 2023 to April 2024, this team stole $16,384,483 from investors around the world. More details are available in the F.A.C.C.T. blog.

File:Aquote1.png
Each crypto scam scheme has its own characteristics that affect its profitability, said Maria Sinitsyna, senior analyst at Digital Risk Protection at F.A.C.C.T. - In the case of fake crypto exchanges, the victim does not even have to own cryptocurrency: it can be "purchased" on the same exchange and paid for with a bank card. This means the user loses only the money they decided to invest through the exchange. In the case of drainers, almost all funds are withdrawn from the wallet connected to the victim's account. This can explain the differences in the average theft amounts across the different crypto scam schemes.
File:Aquote2.png

To protect brands from digital risks and the direct damage associated with their misuse on fake resources, companies working in the field of blockchain and cryptocurrencies are advised to use automated solutions that combine analysis of cyber intelligence data and machine learning capabilities.

The number of deepfake attacks on bank customers is growing in Russia

In Russia, an increase in the number of attacks using deepfake technology aimed at customers of banks and financial platforms was recorded. This became known in October 2024.

According to the system integrator "Informzaschita," since the beginning of 2024 the number of such incidents has increased by 13%, reaching 5.7 thousand cases. Experts attribute this to the widespread adoption and availability of technology that allows attackers to create high-quality face and voice fakes, creating more trust among potential victims.

The number of deepfake attacks on bank customers is growing in Russia

According to Kommersant, the main targets of such attacks are bank customers and employees of financial organizations. According to Pavel Kovalenko, director of the Informzaschita Fraud Prevention Center, attackers create fake financial advisers who contact customers through video calls, posing as well-known experts or company leaders. Thus, they convince their victims to invest in fictitious projects or transfer access to bank data. Experts warn that in 2025 the number of such attacks may double.

The main mechanism of deception is the substitution of voice and facial expressions using artificial intelligence. According to Artem Brudanin, head of cybersecurity at RTM Group, deepfake technology is highly successful, since a person is inclined to trust familiar faces and voices. According to the company "Informzaschita," the effectiveness of such attacks is about 15-20%.

Among the most common schemes are the following: forging the voice and appearance of company leaders in order to gain access to financial information or convincing employees to transfer funds to fraudulent accounts. Andrei Fedorets, head of the Information Security Committee of the Association of Russian Banks, explains that the standard scenario involves hacking an employee's account, after which attackers create a deepfake based on the voice messages and photos available in the correspondence.[11]

2023

Cyber fraudsters use AI and deepfakes to lure data or money from users

Cyber fraudsters are actively using artificial intelligence and deepfakes to impersonate other people in instant messengers or social networks. Therefore, any messages with strange requests, even those received from relatives or friends, should be treated critically.

File:Aquote1.png
"Deepfakes can be used to create fake video or audio recordings in which attackers impersonate another person: a relative, friend or colleague. Having recorded in a voice a request to transfer money into debt, fraudsters are more likely to receive financial benefits or convince the user to give access to personal data, "Konstantin Shulenin, an expert on network threats at Security Code, warned in an interview with Лентой.ру.
File:Aquote2.png

Neural networks are also actively used: they help scammers automate phishing campaigns and put them on stream, reaching a larger number of "victims," and they make the phishing emails themselves more realistic, the press service of State Duma deputy Anton Nemkin told TAdviser on December 29, 2023.

In addition, employees of various companies and organizations who receive emails from "management" and "colleagues" are frequent targets of cybercriminals. Toward the end of the year and on the eve of the holidays, people's vigilance decreases, so fraudsters' chances of getting into a system grow. In addition, many go on long vacations and cannot detect suspicious account activity in time.

One of the main reasons fraudsters can craft their attacks so skillfully is the abundance of information about citizens available online, said Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications and a deputy of the United Russia faction.

File:Aquote1.png
"Any information is easy to use in order to personalize the attack, make it more convincing, and therefore more malicious. I recommend always keeping your head "cold," do not neglect to check any information that comes to you on the network. Even if you already believe the message you received, it will not be superfluous to try to contact the person on whose behalf you were contacted. During the festive time, there are more traps online than usual by about a quarter. It's not worth the risk, "he said.
File:Aquote2.png

The quality of deepfakes has grown significantly over the past few years, but there are still a number of signs by which you can identify a "fake," Anton Nemkin is sure.

File:Aquote1.png
"We all live in an age of active development phase of generative machine learning models, and deep fake technology in particular. With the help of the human eye, it is no longer always possible to distinguish between real and unreal, therefore, special detection technologies exist to analyze potential visual fakes. For users, the following can become a universal instruction: pay attention to the movement of the eyes, the color of the skin and hair, the contour of the oval of the face - often they can be blurred, strange, - explained the parliamentarian. - In the case of voice fakes, you should always carefully evaluate the intonations and clarity of speech. And, of course, always generally critical of any requests that come to you online, if it concerns your personal data or financial resources. "
File:Aquote2.png

Child psychiatrist in the United States received 40 years in prison for creating pornographic deepfakes

On November 8, 2023, the US Department of Justice announced that Charlotte (North Carolina) child psychiatrist David Tatum was sentenced to 40 years in prison for the production, possession and transfer of materials related to child sexual abuse. The criminal, in particular, is charged with creating pornographic deepfakes - images generated using artificial intelligence. Read more here.

Paedophiles use AI to create child images of celebrities and make up stories about them

Paedophiles are actively using generative artificial intelligence systems to "rejuvenate" celebrities in photos and create sexual images of them as children. In addition, thousands of AI-generated images depicting child abuse have been found on the Internet. This is stated in a report by the British non-governmental organization Internet Watch Foundation (IWF), published on October 25, 2023. Read more here.

FBI: Deepfakes used in sex extortion scam

Network attackers have begun using deepfakes to generate sexually explicit content for the purpose of blackmail. The warning was issued on June 5, 2023 by the Internet Crime Complaint Center (IC3), part of the US Federal Bureau of Investigation.

Cybercriminals are said to use AI technology and services to alter photos or videos involving the victim. Original materials can be taken, for example, from profiles on social networks or, under some pretext, requested from the user himself. The resulting sex deepfakes can then be used to extort or damage the victim's reputation. Often fake images and videos in which a person is represented in an unsightly light are published on forums or pornographic sites.

Cybercriminals use AI technology and services to change photos or videos involving a victim

By April 2023, a sharp increase in cases of sexual extortion using deepfake photos or fake videos had been recorded. Attackers usually demand a ransom, threatening to spread the materials on the Internet or send them to the victim's relatives and colleagues. Sometimes the criminals pursue other goals, in particular demanding certain information.

The FBI urges caution when posting personal photos, videos and identifying information on social media, dating apps and other online platforms. Although such materials seem harmless, they can give attackers ample opportunities to organize fraudulent schemes. New generative AI systems make it easier to carry out personalized attacks using fake images or videos based on real content. Moreover, victims of sex extortion, in addition to financial losses, can find themselves in an extremely difficult situation.[12]

In China, a fraudster used a fake video call to swindle a businessman out of $610,000

In May 2023, it became known that in China, a fraudster used artificial intelligence to impersonate a friend of businessman Guo and convince him to transfer $610,000.

Guo received a video call from a man who looked and sounded like a close friend. In reality, the caller was a fraudster "using technology to change his face" and voice. Guo was persuaded to transfer 4.3 million yuan after the fraudster claimed that another friend needed the money transferred from a company bank account to pay a deposit for a tender.

In the US, scammers stole $11 million with deepfakes imitating someone else's voice

In 2022, fraudsters, using artificial intelligence models to accurately imitate (deepfake) human voices, stole about $11 million from their victims in the United States alone. Such data are contained in the report of the Federal Trade Commission (FTC), published on February 23, 2023. Read more here.

A deepfake can be used to impersonate another person at a bank

On May 19, 2022, it became known that a deepfake can be used to impersonate another person at a bank.

Deepfake technology allows you to bypass the facial recognition system.

Sensity, which specializes in detecting attacks that use deepfake technology, investigated the vulnerability of 10 identity verification services. Sensity used deepfakes to superimpose the user's face onto an ID card submitted for scanning and then copied the same face into the attacker's video stream during identification.

A liveness test usually asks the user to look into the device's camera, sometimes turning their head or smiling, and compares the user's appearance with their identity card using facial recognition technology. In the financial sector, such a check is known as "Know Your Customer" (KYC) and is part of document and account verification.
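To make the mechanism concrete, here is a minimal sketch of the face-matching step of such a check, assuming the open-source face_recognition Python library; the file names are hypothetical. It illustrates only the comparison itself - real KYC liveness checks add motion challenges, depth sensing and stream-injection detection, and, as the Sensity test shows, a pure appearance comparison can be defeated by a deepfake injected into the video stream.

```python
# Minimal sketch of the face-matching step of a KYC-style check.
# Assumes the open-source `face_recognition` library (pip install face_recognition);
# the file names below are hypothetical examples.
import face_recognition


def id_matches_selfie(id_card_path: str, selfie_path: str, tolerance: float = 0.6) -> bool:
    """Return True if the face on the ID card matches the face in the selfie frame."""
    id_image = face_recognition.load_image_file(id_card_path)
    selfie_image = face_recognition.load_image_file(selfie_path)

    id_encodings = face_recognition.face_encodings(id_image)
    selfie_encodings = face_recognition.face_encodings(selfie_image)
    if not id_encodings or not selfie_encodings:
        return False  # no face found in one of the images

    # Lower distance means a closer match; `tolerance` is the decision threshold.
    distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
    return distance <= tolerance


if __name__ == "__main__":
    print(id_matches_selfie("id_card.jpg", "selfie_frame.jpg"))
```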

File:Aquote1.png
We tested 10 services and found that 9 of them are vulnerable to deepfakes, said Sensity Chief Operating Officer Francesco Cavalli. - There is a new generation of AI that could pose a serious threat to companies. Imagine what you can do with a fake account created with a deepfake. And no one will be able to detect the fake.
File:Aquote2.png

Cavalli is disappointed with the reaction of services that considered the vulnerability insignificant.

File:Aquote1.png
We have informed vendors that services are vulnerable to deepfake attacks. The developers ignored the danger. We decided to publish the report, as the public should be aware of these threats, the researcher added.
File:Aquote2.png

Vendors sell liveness tests to banks, dating apps and cryptocurrency projects. One service was even used to verify voters' identities in elections in Africa (although there is no indication in the Sensity report that that vote was compromised by deepfakes).

Deepfake technology poses a great danger to the banking system, in which a deepfake can be used for fraud.

File:Aquote1.png
I can create an account, transfer the stolen money to a bank account or take out a mortgage, because online lending companies compete with each other in the speed of issuing loans, the expert added.
File:Aquote2.png

An attacker can easily intercept a video stream from a phone camera and use deepfake technology for malicious purposes. However, it is impossible to bypass the Face ID facial recognition system in this way. Apple's identification system uses depth sensors and checks your identity not only based on your appearance but also the physical shape of your face.[13][14]

2021

Fraudsters stole $35 million from a UAE bank using a deepfake of a company director's voice

In mid-October 2021, it became known that criminals had stolen a huge sum - $35 million - from a bank in the UAE by imitating the voice of a company director with advanced artificial intelligence. The deepfake was reportedly used to mimic a legitimate commercial transaction linked to the bank.

Forbes reported that deepfake voices were used to trick a bank employee into believing he was transferring money as part of a legitimate operation. The story came to light after the publication of court documents concerning events in January 2021, when the manager of an unnamed bank received a seemingly ordinary phone call.

In the UAE, scammers used a deepfake voice to deceive a bank employee and steal $35 million


The person on the other end of the line claimed to be the director of a large company the manager had previously spoken to, and their voices were identical, according to court filings. This was backed up by authentic-looking emails from the company and its lawyer, which convinced the branch manager that he was talking to the director and that the company was indeed in the middle of a large commercial transaction worth $35 million.

Subsequently, he followed the instructions of the caller and made several large money transfers from the company to a new account. Unfortunately, it was all a sophisticated scam.

Investigators from Dubai found out that the scammers used "deep voice" technology, which allowed them to mimic the voice of the head of a large company. Police concluded that up to 17 people were involved in the scheme and that the stolen money was being transferred to several different bank accounts scattered around the world.

For example, two accounts were registered in the United States, at Centennial Bank; they received $400 thousand. UAE investigators have already reached out to US officials for help with the investigation.

This is not the first time fraudsters have pulled off a major scam by imitating a voice. In 2019, an energy company in the UK lost $243 thousand after someone pretending to be its CEO contacted one of its employees.

Attackers are increasingly using the latest technology to manipulate people who are unaware that such technology exists, according to Jake Moore, a cybersecurity expert at ESET.

According to experts watching the artificial intelligence market, this will not be the last time.[15]

Fraudsters in China used deepfakes to trick the state facial recognition system out of $76.2 million

Fraudsters in China used deepfakes to trick the state facial recognition system out of $76.2 million. To fool it, the scammers bought high-quality photos and leaked personal data on the black market, starting at $5. The two fraudsters, surnamed Wu and Zhou, processed the purchased photos in deepfake applications that "animate" an uploaded picture and turn it into a video, making the face appear to nod, blink, move and open its mouth. Such applications can be downloaded for free.

For the next stage, the scammers bought specially reflashed smartphones: during face recognition the front camera of such a device does not turn on; instead, the system receives a pre-prepared video and treats it as the camera feed. Such phones cost approximately $250.

Using this scheme, the fraudsters registered a shell company that could issue fake tax returns for its clients. Over two years they earned $76.2 million from it.

Biometrics are widespread in China: they are used to confirm payments and purchases, verify identity when accessing public services, and so on. But as the technology has developed, data protection has become one of the main problems.

2020: The emergence of Telegram bots that create fake "porn photos" based on DeepNude for blackmail

At the end of October 2020, a network of deepfake bots was discovered in Telegram that generates fake "porn photos" on request. Users insert the faces of women they know, taken from social network images, into such pictures and then circulate them in public channels and chats or use them for blackmail. Read more here.

2019

Scammers lured $243,000 out of an energy company using an AI-faked voice

In early September 2019, criminals lured $243 thousand out of a British energy company by posing as its chief executive and using artificial intelligence to fake his voice.

The general manager of an unnamed energy company thought he was on the phone with his boss, an executive at the German parent company. "Boss" asked him to send funds to a Hungarian supplier. According to Euler Hermes Group SA, the offender said that the request was very urgent and asked the manager to transfer the money within an hour. Euler Hermes declined to name the victim company.

Criminals lured $243 thousand out of the British energy company

Rüdiger Kirsch, a fraud expert at the insurance company Euler Hermes, said the defrauded manager recognized his boss's slight German accent and general tone of voice on the phone. Immediately after the call, the manager transferred the money to the Hungarian supplier and contacted his boss again to report that the task was complete. Soon the head of the company and his subordinate realized they had fallen victim to fraud and went to the police.

ESET cybersecurity specialist Jake Moore said at the end of August 2019 that in the near future we will face a monstrous increase in cybercrime. DeepFake technology can insert the faces of celebrities and public figures into any video, but creating a convincing image requires at least 17 hours of footage of the person. Faking a voice takes much less material. As computing power grows, such fakes become easier to create.

To reduce the risks, Moore notes, it is necessary not only to inform people that such imitations are possible, but also to add special verification steps before transferring money, for example two-factor verification.[16]

The United States begins to hand out prison terms for distributing face-swapped porn

In early July 2019, Virginia passed a law prohibiting the distribution of face-swapped pornography. It is the first US state to adopt such an initiative.

In the United States and other countries, so-called revenge porn is widespread: sexually explicit material posted on the Internet without the consent of the person depicted. As a rule, such material is posted by former partners out of revenge or by hackers who have gained unauthorized access to it. With the development of artificial intelligence technologies and simple tools for high-quality photo and video editing, revenge porn is increasingly produced by superimposing the victim's face onto a porn actor.

In early July 2019, Virginia passed a law prohibiting the distribution of face-swapped pornography

Starting July 1, 2019, anyone who distributes or sells fake photos and videos of a sexual nature for the purpose of "blackmail, harassment or intimidation" in Virginia can be fined up to $2,500. A prison term of up to 12 months is also provided.

Virginia became the first American state to outlaw so-called deepfakes. As of July 2019, similar initiatives are being prepared in other states. For example, New York is considering a bill prohibiting the creation of "digital copies" of people without their consent, and in Texas a law on liability for distributing face-swapped sexual content comes into force on September 1, 2019.

File:Aquote1.png
We must rebuild our outdated and piecemeal laws, including criminal law, to address the paralyzing and life-threatening consequences of such threats, and recognize the significant harm of fake porn, says Professor Claire McGlynn of Durham University.
File:Aquote2.png

According to experts, it is becoming increasingly difficult to spot a fake, even in video.[17]

Fake AI porn has terrorized women

By the beginning of 2019, artificial intelligence had reached a level that makes it possible, easily and without special technical skills, to "attach" the heads of celebrities and ordinary women to the bodies of porn actresses and create realistic videos. These explicit films, created using the DeepFake method, are edited so well that they are indistinguishable from real ones. Their emergence is dangerous because the technology may also be used to spread fake news. Even more dangerous, however, is their use as a tool for blackmailing and humiliating women.

Given the proliferation of AI and the easy access, without consent, to photos of former partners, colleagues and other people on social media sites such as VKontakte and Facebook, demand for tools to create fake videos is growing. Even though the law may be on the victims' side, they often face the significant obstacles typical of dealing with online harassment.

File:Aquote1.png
Fake porn videos cause the same distress as intimate photos posted online, says writer and former politician Charlotte Laws. - Fake porn videos are realistic, and their impact is exacerbated by the growing amount of fake news we live among.
File:Aquote2.png

AI-faked porn has terrorized women

Laws adds that fake videos have become a common way to humiliate or blackmail women. In a survey of 500 women who had been victims of revenge porn, Laws found that 12% had been targeted with fake porn videos.

One way to address the problem may be to revise and expand the laws prohibiting revenge porn. These laws, in force in 41 US states as of the beginning of 2019, appeared only recently and reflect a change in government attitudes toward non-consensual pornography.

Fabricated porn featuring actress Gal Gadot

Another approach is to bring civil action against offenders. As noted on the website of the independent non-profit Electronic Frontier Foundation, victims of fake porn videos in the United States can sue for defamation or for portraying them in a "false light." They can also bring a right-of-publicity claim, arguing that the video's creators profited from the victim's image without her permission.

However, all these possible remedies could run into a serious obstacle: free speech protections. Anyone sued for making a fake clip can claim the video is a form of cultural or political expression protected by the First Amendment. Laws believes, though, that in the case of fake porn videos most judges will be skeptical of the First Amendment argument, especially when the victims are not famous and the video involves only sexual exploitation, with no political satire or artistic value.

A fragment from a fake porn video in which a porn actress's face was replaced with that of Hollywood star Gal Gadot

At the same time, the victim herself has almost no way to get the offending video taken down. The reason is Section 230 of US law, which shields providers from liability for what users publish on their pages. In the case of sites hosting fake porn videos, providers can claim immunity because it is their users, not they, who upload the videos. The exception is intellectual property violations, where the operator must remove material upon receiving a notification from the copyright owner.

According to Jennifer Rothman, a professor at Loyola Law School and author of a book on privacy and publicity rights, courts do not have a clear view of whether this exception applies to state laws (such as the right of publicity) or only to federal ones (such as copyright and trademark).

This raises the question of whether Congress can draft legislation narrow enough to help victims of fake porn videos without causing unintended consequences. As a cautionary example, University of Idaho law professor Annemarie Bridy cites abuse of copyright law, where companies and individuals have acted in bad faith to remove legitimate criticism and other lawful content.

Still, according to Bridy, given what is at stake with fake pornographic videos, a new law is needed now, Fortune wrote in a January 15, 2019 publication.[18]

See also

Main article: Forgery of documents

Notes