
Deepfakes (DeepFake)

DeepFake (from "deep learning" and "fake") is a technique for synthesizing human images based on artificial intelligence. It is used to combine and superimpose existing images and video onto source video. Facial recognition systems automatically locate a human face in an image or video and, if necessary, identify the person against available databases. Interest in these systems is very high due to the wide range of tasks they solve.


Deepfake fraud

Main article: Deepfake fraud

2024

How companies protect themselves from deepfakes. 5 strategies

Deepfake technologies, which make it possible to fake a person's voice and appearance, are increasingly used by cybercriminals. Using such AI-based tools, fraudsters steal companies' money, compromise their executives, and even apply for jobs in order to later gain unauthorized access to the organization's IT infrastructure. On October 29, 2024, research and consulting company Gartner described five strategies for protecting against deepfakes.

"The introduction of generative AI (GenAI) is associated with numerous risks, from the spread of misinformation to the abuse of employees and the creation of digital doubles of other people. It is important that executives implement effective protective mechanisms in their organizations to balance unprecedented opportunities against the significant external and internal reputational threats that GenAI opens up," says Amber Boyes, director analyst at Gartner.

Gartner analysts gave advice to companies on how to protect themselves from deepfakes

Enhanced Social Media Monitoring

As a Gartner survey conducted between April and May 2024 showed, 80% of consumers say that GenAI makes it harder to tell real content from synthesized content on the Internet. By creating a human-in-the-loop protocol, managers can arm themselves with the tools needed to monitor social media and manage reputational risks. The relevant processes should be adapted for GenAI-specific scenarios, including new means of flagging false and misleading content on social platforms.

Building trust in corporate communications

Misinformation and the erosion of trust remain the main problems of the current media landscape. It is therefore imperative that managers turn their own organization into a source of accurate and reliable information. To reduce damage from an information attack, experts recommend building effective corporate communications: carefully monitoring information published on the Internet and taking the necessary measures when misinformation appears. At the same time, the number of the company's communication channels should be minimized and the distribution of information coordinated.

Action scenarios for the most likely attacks

Gartner experts point out the need to develop a plan for responding to deepfake attacks. After identifying the key risks to the company, managers are encouraged to organize practical exercises to effectively counter attackers and minimize possible consequences. At the same time, special attention should be paid to areas with the highest reputational risks.

Transparency regarding the use of GenAI

According to surveys, approximately 75% of consumers consider it necessary for companies to disclose when GenAI technologies were used to create their content. Against this background, Gartner analysts say, heads of organizations need to ensure thorough checking of generated materials and facts. At the same time, explanations and recommendations on the use of GenAI should be developed for employees, with appropriate usage scenarios and real examples.

Empowerment through GenAI experiments

By developing a culture of safe experimentation with GenAI, managers can increase their employees' confidence in this technology. Experimentation should focus on the most useful and least risky use cases to minimize potential failures.[1]

Briton Hugh Nelson gets 18 years in prison for creating deepfake images of child sexual abuse

On October 28, 2024, it became known that a British court sentenced 27-year-old Hugh Nelson to 18 years in prison for creating deepfake images of child sexual abuse. The convict sold the generated materials on the Internet. Read more here

How hackers use AI to hack the IT infrastructure of Russian enterprises

Hackers are increasingly using artificial intelligence (AI) to break into the information systems of Russian enterprises. According to data presented by Kaspersky Lab, attackers use AI to create fake content that allows them to bypass companies' security systems and gain access to confidential information. The growth in the number of such attacks was confirmed on October 25, 2024 by Igor Malyshev, the company's regional representative in the Southern and North Caucasus federal districts.

According to Malyshev, hackers are actively using neural networks to create fake images and voices, applying them in social engineering and targeted attacks. These technologies can be used not only against enterprises, but also to extort from individuals, in order to obtain information that can be used in future attacks. He also noted that there are no accurate statistics on such cases, since many companies do not disclose the details of hacks, but the available data are enough to conclude that the threat is growing.

Hackers use AI to hack IT infrastructure

According to TASS, as of October 2024, software tools for automatically detecting deepfakes and other counterfeit materials have not been deployed at a mass level. Malyshev stressed that creating such solutions involves a number of technical difficulties, which makes increasing users' cyber literacy and observing digital hygiene the main means of protection. Among the main recommendations, he highlighted caution when using open Wi-Fi networks, attentiveness to messages and calls from strangers, and avoiding suspicious links.

"To date, compliance with basic cybersecurity rules remains the best defense. The user should be wary if asked to urgently perform some action, especially if it does not correspond to the usual procedures or the request comes from a person who does not have the authority to make it," said Malyshev.[2]

How scammers earned $46 million using deepfakes on online dating sites

In mid-October 2024, Hong Kong police arrested 27 people on suspicion of committing fraud using face-swapping technology (video deepfakes) on online dating sites. As a result of this scheme, the victims lost about $46 million. Read more here

The Ministry of Internal Affairs warned of fraud with fake orders of the FSB

The Ministry of Internal Affairs of Russia reported the appearance of a fraudulent scheme in which attackers use fake FSB orders. Acting on behalf of a company's head, they contact its employees and report that the FSB of Russia has begun an audit of them due to a possible violation of the law. This was announced on October 8, 2024 by the press service of Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications. Read more here.

Bills have been submitted to the State Duma to protect against deepfakes. Will they help?

On a single day, September 16, two bills aimed at protecting against deepfakes were submitted to the State Duma. One was developed by State Duma deputy Yaroslav Nilov together with Senator Alexei Pushkov; it amends the Criminal Code of the Russian Federation. The second, proposed by senators Andrei Klishas, Artyom Sheikin, Natalya Kuvshinova, Ruslan Smashnev and State Duma deputy Daniil Bessarabov, amends the Civil Code of the Russian Federation.

Bill No. 718538-8[3] "On Amendments to the Criminal Code of the Russian Federation"[4] proposes to treat the use of deepfakes as an aggravating circumstance for crimes such as libel (Article 128.1), fraud (Article 159), theft (Article 158), extortion (Article 163) and causing property damage (Article 165). The explanatory note to the bill says the following:

"Considering that it is the voice and image of a citizen that are most often used for deception, it is proposed to single them out as a separate category. In this connection, the bill proposes to introduce into a number of articles of the Criminal Code of the Russian Federation (libel, theft, fraud, fraud in the field of computer information, extortion, causing property damage by deception or breach of trust) an additional qualifying feature: committing a crime using an image or voice (including falsified or artificially created) and (or) biometric data of a citizen."

If the law is passed, the use of deepfakes under the above articles will be qualified by courts as aggravated crimes, for which punishment can be increased. For example, for libel the fine can be raised to 1.5 million rubles (without aggravating circumstances it is limited to 500 thousand rubles), and for fraud the term of imprisonment can be increased to 6 years (without aggravating circumstances it is limited to 2 years).

"Existing technologies for creating deepfakes have indeed reached a level at which they can be used to cause harm," Yuri Mitin, managing partner of the law firm Intellectual Defense, told TAdviser, confirming the importance of protection against deepfakes. "Deepfakes can be used to spread misinformation, for cyberbullying and blackmail, and to sow distrust in society. With the help of such technologies, it is possible to create fake videos or audio recordings in which a person is credited with words or actions that he did not commit. This puts people's reputations at risk and could have serious consequences."

This opinion is shared by Nikita Leokumovich, head of the cyber intelligence and digital forensics department at Angara Security. He explained to TAdviser the danger of deepfakes as follows:

"Since the beginning of 2024, fraudulent schemes combining social engineering techniques and deepfakes have begun to be recorded in Russia. For example, fraudsters can create a fake message from the head of a company in a messenger, using an audio or video deepfake to trick employees into making a money transfer. Attackers can generate realistic images or videos that elicit emotional responses from victims, making their attacks even more effective."

In effect, this means that the set of criminal articles where the use of deepfakes can be qualified as an aggravating circumstance could be expanded to disinformation (Article 207), violation of privacy (Article 137), hooliganism (Article 213) and even terrorism (Article 205).

Two bills submitted to the State Duma to protect against deepfakes

The second bill[5], "On Amendments to Part One of the Civil Code of the Russian Federation," introduces a new Article 152.3, "Protection of a Citizen's Voice," into the Civil Code. It grants a citizen the right to control recordings of his voice, including those synthesized using artificial intelligence. However, three exceptions are provided, where:

  • the use of a citizen's voice is in state, social or other public interests;
  • the citizen's voice is recorded in a video or audio recording made in a place open to the public or at a public event;
  • the citizen's voice was recorded for a fee.

The explanatory note to the bill says the following:

"With the unfair use of speech synthesis technologies, adverse legal consequences arise for citizens. In particular, artificial intelligence, having trained on audio recordings of a real voice, can then imitate it, and the "derived" recordings can be used in ways unacceptable to the owner of the voice, since obtaining his consent is not directly required by law. Thus, the citizen's right to decide for himself how recordings of his own voice are used is violated, including their use to create products based on speech synthesis technologies."

In accordance with the bill, a citizen has the right to demand the destruction of media carrying his voice, including generated recordings, as well as the cessation of their distribution on the Internet. Interestingly, the ML model itself that was trained on the voice does not yet fall under the destruction requirement, even though it can go on generating similar messages in unlimited quantities.

"Technologies for creating deepfakes are developing very quickly and in many directions, both legal and illegal," Valery Sidorenko, CEO of the digital agency Interium, told TAdviser. "For example, legal deepfakes are used in advertising or AI assistants, illegal ones by fraudsters and political provocateurs. I would also note that voice deepfakes are much easier to make than video ones, and much more difficult to track and regulate. Unfortunately, technologies for 100% detection of deepfakes do not exist: yes, there are scanner programs, patented by large IT corporations and large banks (in Russia, for example, Sber), but there are no absolutely reliable algorithms, given the speed at which generation technologies develop."

It is clear that if a citizen cannot prove that a recorded or generated voice is really his - and there is no technology for this - then he is unlikely to be able to have such a deepfake removed. Dmitry Ovchinnikov, head of the laboratory for strategic development of cybersecurity products at the Gazinformservice Cybersecurity Analytical Center, believes otherwise.

"The same neural network technologies make it possible to analyze photo and video materials and detect a fake using special algorithms," he explained to TAdviser readers. "This relies on analyzing a variety of digital traces in a fake that are inaccessible to human hearing or vision. Over time, such algorithms and services will not only be available on individual sites or specialized servers, but will also be integrated into social networks, video hosting, file storage and mail services, which will automatically, using trained AI, cut off such content and prevent its distribution."

That is, for developers of services and information security products, one promising area is integrating deepfake detection and labeling technologies into their products. At the same time, the adoption of such an amendment to the Civil Code opens up opportunities for an entire industry of legal deepfake use. Valery Sidorenko gave an example:

"I will give an example of a legal deepfake from Russian practice: last year MegaFon bought the rights to Bruce Willis's face and shot AI-generated ads with him (the video made a splash on the Runet). The actor himself is ill, but sells the rights to his appearance for such videos. Due to the increasing illegal use of actors' appearances, the governor of California, where Hollywood is located, recently even introduced bills to toughen punishment for deepfakes made without consent. In the United States as a whole, regulation is currently driven largely by the lobbying of the show business unions; it is they who suffer the main reputational and financial losses from deepfakes there."

A still from a MegaFon advertisement with a deepfake of Bruce Willis

The laws restricting the use of generated images and voices of actors in filmmaking that Valery Sidorenko mentions have already been signed by California Governor Gavin Newsom. This was reported by the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA)[6]. The Russian bill, as follows from its explanatory note, was developed by the AI Alliance and the National Federation of the Music Industry, that is, it is aimed at creating a market for the legal use of voice deepfakes.

"On the one hand, generative technologies, including deepfakes, have great potential for the creative industries, education and technology," Yury Mitin said. "They can enable new forms of art, dubbing of needed media, and entertainment innovations. On the other hand, concerns about the abuse of such technologies lead many specialists to talk about the need to limit or control their use in order to minimize the risks associated with disinformation and human rights violations. It is important to find a balance between developing technologies and using them responsibly in society."

According to Maxim Gryazev, an expert at ETHIC, the analytics and digital threat assessment service of Infosecurity, deepfake fraud is unlikely to be completely eradicated. Artificial intelligence will continue to develop, and criminal methods with it. However, the number of such crimes can be significantly reduced by developing deepfake detection technologies and building expertise in this area. The key areas should be the development of tools for automatically detecting fakes and regularly updating protection methods.

Gryazev is confident that the state should actively invest in developing technologies and training specialists to combat deepfake fraud. Awareness campaigns and public education about such threats are needed. Citizens, too, should be vigilant, distrust dubious messages and always check sources of information, the TAdviser source added.

Experts interviewed by TAdviser agreed that the main pillar in the fight against deepfakes is to raise awareness of citizens.

"We must learn to spot suspicious signs in the frame or on the audio track ourselves. If a citizen at least knows that deepfakes exist, the percentage of people falling into attackers' traps will drop significantly. Forewarned is forearmed," says Alisher Juraev, a social engineering researcher at StopPhish (which specializes in information security training).

Anton Prokopchin, Product Director at IT company Proscom, sees two main ways to combat deepfakes:

  • Legislative regulation, including tougher liability for misuse of the technology, as well as various labeling methods for recognizing generated content.
  • Technical methods. Here the same neural networks help to detect inauthentic material. They are trained on real videos and learn to recognize the characteristic features of particular people. For example, a program for identifying fake video content was developed at Don State Technical University. A minimal illustration of this approach is sketched below.
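As an illustration of the technical approach described above, here is a minimal, hypothetical sketch of a binary "real vs. fake" classifier for face crops, written in PyTorch. The architecture, input size and dummy batch are assumptions made for the example; this does not describe any of the specific tools mentioned in this article.

# Minimal sketch of a binary "real vs. fake" face-crop classifier (PyTorch).
# All names, layer sizes and the dummy data are illustrative assumptions.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)  # one logit: probability that the crop is fake

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DeepfakeDetector()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch of 128x128 RGB face crops;
# in practice the crops would come from labeled real and generated videos.
images = torch.randn(8, 3, 128, 128)          # stand-in for real data
labels = torch.randint(0, 2, (8, 1)).float()  # 0 = real, 1 = fake
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"step loss: {loss.item():.4f}")

Production systems of this kind differ mainly in scale: much deeper backbones, large curated datasets of real and synthesized faces, and per-video aggregation of frame-level scores.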

According to Yury Mitin, managing partner of the Intellectual Defense law firm, introducing criminal liability for the use of deepfakes is an important step in the fight against fraud and the dissemination of false information. Russia already has mechanisms for countering deepfakes, for example, checking video material for authenticity and using special analysis programs. The new legislative initiative effectively tightens liability, making the use of deepfakes riskier, the lawyer argues.

In his opinion, the legal regulation of deepfakes is a difficult task requiring a balanced approach. On the one hand, it is necessary to protect people from abuses of the technology; on the other, it should be borne in mind that deepfakes can have legitimate applications in art, education and entertainment.

Yury Mitin called raising users' awareness of existing threats and of methods for identifying them the key to effectively countering deepfakes. It is also necessary to develop deepfake detection technologies and improve the mechanisms for monitoring and controlling their distribution on the Internet.

Lawyer Vladimir Kutaev notes that, at first glance, the spread of deepfakes should be tightly regulated by full-fledged norms enshrined in the main source of criminal law; on the other hand, it is not clear whether it makes sense to enlarge the Criminal Code if the use of deepfake technology can be brought under standard offenses through rulings of the Plenum of the Supreme Court of the Russian Federation, on which law enforcement practice would then be built.

"In any case, this is a positive initiative that should expand citizens' ability to protect their rights and establish appropriate measures of liability," Kutaev said.

Fraudsters in Russia attack parents: they began to fake videos with images and voices of their children

In early September 2024, it became known that fraudsters in Russia are actively using new deception schemes built on artificial intelligence. Attackers target parents by faking videos with the images and voices of their children.

The fraudulent scheme boils down to cybercriminals contacting the victim via voice or video call, posing as close relatives. AI-generated deepfake recordings can be used for this. The attackers then report some piece of bad news. The goal is to provoke an emotional reaction and push the person into a rash decision: transferring money.

Fraudsters in Russia attack parents by faking videos with children

One such case occurred in Voronezh in early September 2024. A local resident, Polina Markovna S., received a call from her daughter Vera, a student in Moscow. Via video link, allegedly from a police station, the daughter reported problems and asked her mother to urgently transfer 100 thousand rubles without asking any questions. The recipient's account number was sent via SMS.

"Mom, I'm in trouble! Urgently transfer me 100 thousand rubles! Urgent! And don't ask about anything. I will explain everything later, once you transfer it. I'm at the police station now. The phone will be switched off. As soon as you transfer the money, they will let me go, and I will call you right away. I have sent you the account number," the girl said.

By coincidence, Polina Markovna soon received a real call from her daughter, who said that she was fine. This made it possible to avoid losing the money the woman was about to transfer to the fraudsters.

Attackers can impersonate not only the victims' children, but also bank employees, law enforcement officers, etc. In case of suspicious calls, information security experts recommend asking clarifying questions and then calling the real relatives directly. This helps avoid financial losses.[7]

Central Bank: Fraudsters hack Russians' social network accounts, make deepfakes and lure money from their friends and relatives

In mid-August 2024, the Bank of Russia warned of a new threat from fraudsters actively using modern technologies to steal money. The message published by the press service of the Central Bank of the Russian Federation says that attackers are increasingly using deepfakes - fake videos created using neural networks - to deceive citizens and extort funds from their friends and relatives.

As noted in the message of the Central Bank, the scheme of fraudsters usually begins with hacking the victim's account on social networks or instant messengers. By gaining access to personal data such as photos, videos and audio recordings, attackers create realistic videos using deepfake technology. In such videos, a person talks about an alleged problem with him, for example, a serious illness or a traffic accident, and asks to urgently transfer the money to the specified account.

The Central Bank warned that fraudsters hack into the social networks of Russians, create deepfakes and lure money from their friends and relatives

According to RBC, the Bank of Russia stressed that such appeals may be a trap set by fraudsters. Attackers especially often make deepfakes depicting employers, colleagues or even government officials, which lends additional credibility to such messages.

The Central Bank strongly recommends that citizens be vigilant and not succumb to requests for financial assistance received through social networks or instant messengers. Bank of Russia specialists offer several methods for verifying the authenticity of such requests:

  • Call the sender - before transferring money, contact the person directly and check whether they really need help.
  • Ask a personal question - if there is no way to call, ask a question that only this person knows the answer to. This will help expose a fraudster.
  • Evaluate the video message - pay attention to possible defects, such as monotonous speech, unnatural facial expressions or sound anomalies, which may indicate that you are dealing with a deepfake.

Against the backdrop of the growing number of deepfake cases, a bill providing for criminal liability for creating and distributing such fake materials was proposed in the State Duma in May 2024.[8]

Scientists have created a neural network to identify deepfakes

Scientists at the St. Petersburg Federal Research Center of the Russian Academy of Sciences have developed a method for automatically detecting deepfakes by identifying the manipulations used to improve the quality and persuasiveness of generated video (upscaling). Based on this method, a neural network was trained that analyzes video and photos and helps identify deepfakes. This was announced on August 2, 2024 by the press service of Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications. Read more here.

In Russia, deepfakes can be recognized as an aggravating circumstance

The Ministry of Internal Affairs of Russia is developing a bill that would make the use of deepfakes an aggravating circumstance. State Duma deputy and deputy chairman of the committee on information policy Anton Gorelkin said this on July 1, 2024, RIA Novosti reports.

"The Ministry of Internal Affairs is developing a bill to make deepfakes an aggravating circumstance, and this is the right direction. I am sure that in the near future some form of regulation will appear," said Anton Gorelkin.

"People need an affordable tool for quickly checking the origin of content: you point the smartphone camera at it and receive a verdict. I would like there to be more such projects," said Gorelkin.

The Ministry of Internal Affairs has long been working to tighten legislation on the use of IT for illegal actions, said State Duma deputy Anton Nemkin.

"Let me remind you that Interior Minister Vladimir Kolokoltsev instructed to work out the issue of classifying the use of IT in crimes as an aggravating circumstance under the Criminal Code of Russia. The corresponding order may extend to deepfakes," said the deputy.

"As a rule, the technology is used by cybercriminals for fraudulent purposes, to gain a citizen's trust. According to the Office of the President of the Russian Federation for the Development of Information and Communication Technologies, in January 2024 alone about 2 thousand incidents involving deepfakes for fraudulent purposes were identified. In 70% of those cases, the attackers used voice deepfakes, posing as relatives, colleagues or acquaintances of the victim," he said.

"This is because voice deepfakes are much easier to create. In addition, it is messengers and social networks that are gradually becoming the key venue for attackers, since people there often communicate through voice messages," the deputy explained.

The parliamentarian noted that video deepfakes may figure in scenarios where an attacker extorts funds from citizens in return for not publishing private content. Such incidents already occur.

"The problem is that deepfakes can potentially be used for various illegal purposes, for example, to generate fake documents such as a passport or driver's license. Let me remind you that as part of the preparation of the new national project "Data Economy" in Russia, the idea of creating a single platform is also being discussed that would identify inaccurate generated information, including content made with artificial intelligence technologies. There is already movement toward forming a regulatory framework, as well as tools designed to protect citizens," the deputy emphasized.

The Ministry of Digital Development of the Russian Federation is preparing a single anti-fraud platform. It will fight deepfakes

On June 5, 2024, it became known that the Ministry of Digital Development of the Russian Federation is working with federal executive authorities (FOIVs) on creating a single platform to counter various types of fraud using information technologies, including deepfakes and other schemes based on artificial intelligence. Read more here.

Fraudsters using deepfakes forge documents of Russians

Fraudsters have learned to forge citizens' documents using artificial intelligence (AI) technologies. As before, when creating fake digital copies they either change the numbers or try to pass off an invalid document as valid, but now deepfakes are also used to get through authentication and data synthesis. This information was shared with TAdviser on May 8, 2024 by the press service of State Duma deputy Anton Nemkin, with reference to Izvestia. Read more here.

In South Korea, a fraudster stole $50,000 from a woman using Elon Musk's deepfake

In South Korea, a fraudster stole 70 million won (about $50,000) from a woman using a deepfake of Tesla founder Elon Musk. The incident became known at the end of April 2024.

According to the Independent, citing South Korean publications, the fake Musk wrote to a woman who was a fan of the businessman. The Korean woman did not believe it at first, but the attacker convinced her by sending a photo of an identity card and several pictures from work.

Fraudster stole 70 million won from a woman using the deepfake of the founder of Tesla Motors

"Musk talked about his children and how he flies by helicopter to work at Tesla or SpaceX. He also explained that he contacts fans very rarely," the deceived South Korean citizen shared details of the correspondence.

The couple continued to communicate on social media and at one point decided to connect via video. During the video call, the scammer used a deepfake to pose as Musk and told her that he loved her. The deepfake turned out to be so convincing that the woman no longer had any doubts that Musk himself had really contacted her.

After that, the fraudster gave her a Korean bank account number, saying: "I'm happy when my fans get rich because of me." He said the account belonged to one of his Korean employees. In the end, she deposited a total of 70 million won into this account, which the fake Elon Musk promised to invest in business development and return with large interest to make her rich. The scheme turned out to be fraudulent, and the victim went to the police.

This is not the first time that Elon Musk's deepfake has been used in South Korea. Previously, unknown persons hacked into a YouTube channel owned by the South Korean government, renamed it SpaceX Invest and broadcast fabricated videos with Elon Musk discussing cryptocurrencies.[9]

A deepfake of Yuri Nikulin will appear in a new film

On March 26, 2024, it became known that one of the characters in the family comedy "Manyunya: Adventures in Moscow" will be the image of the Soviet actor and circus performer Yuri Nikulin, created using artificial intelligence technologies. This is the first successful experience in Russia of recreating the appearance and voice of a late actor with a neural network. Read more here.

Central Bank of the Russian Federation introduced measures to combat deepfakes

At the end of March 2024, it became known that the Central Bank of Russia intends to update the procedure for informing about fraudulent transfers in online services for financial transactions and the exchange of digital financial assets. We are talking, in particular, about the fight against deepfakes.

It is noted that attackers actively use modern technologies and artificial intelligence tools to imitate a victim's voice and run other scams. According to the Bank of Russia, the volume of funds stolen by fraudsters in 2023 reached 15.8 billion rubles, 11.5% more than in 2022. The regulator reports that the surge was partly due to an increase in the volume of cash transactions using payment cards.

Central Bank intends to update the procedure for informing about fraudulent transfers in online services

The Bank of Russia, together with representatives of law enforcement and supervisory authorities, is working on expanding the list of information subject to registration and storage about the actions of clients of credit institutions and money transfer operators. According to the new rules, from June 2024, operators of payment systems and electronic platforms, including banks, will have to transfer data on stolen customer funds to the financial regulator. It is assumed that such measures will help in preventing crimes and reducing losses.

The Bank of Russia notes that, to improve the security of operations, a list of threats and guidelines have been developed and approved. In addition, the use of the Unified Biometric System is monitored, so that "when identification is performed for payments, the risk of deepfakes being used is minimized." The new rules, according to CNews, also establish the procedure for the financial regulator to request and receive information from banks about transactions in respect of which the Ministry of Internal Affairs has reported illegal actions.[10]

4 thousand world celebrities became victims of pornographic deepfakes

In 2023, approximately 4 thousand famous people around the world became victims of pornographic deepfakes. Such data was disclosed in mid-March 2024 by Channel 4 News.

An analysis by Channel 4 News of the five most visited deepfake websites found that attackers fabricate material depicting actresses, TV stars, musicians and bloggers. Of the roughly 4 thousand victims of pornographic deepfakes, 255 are British.

Approximately 4 thousand famous people around the world became victims of pornographic deepfakes

In 2016, researchers discovered one fake pornographic video on the Internet. In the first three quarters of 2023 alone, almost 144 thousand new deepfake materials were uploaded to the 40 most visited porn sites - more than in all previous years combined. In Britain, the Online Safety Act came into force on January 31, 2024: it prohibits the unauthorized sharing of deepfake materials in the country, although the creation of pornographic deepfakes is not itself prosecuted. Representatives of Ofcom (Office of Communications), the British agency that regulates television and radio companies as well as the postal service, have spoken about the problem.

"Illegal deepfake materials cause significant damage. In accordance with the Online Safety Act, companies will have to assess the risk of such content circulating in their services, take measures to prevent its appearance, and promptly remove it," Ofcom said in a statement.

Deepfakes can be used not only to harm specific individuals. Such materials give attackers the opportunity to spread fake news, carry out various fraudulent schemes, etc.[11]

Russian Prime Minister Mikhail Mishustin instructed the Ministry of Digital Development to create a system for identifying deepfakes

Prime Minister Mikhail Mishustin instructed the Ministry of Digital Development to create a system for identifying deepfakes. This became known in March 2024.

As Vedomosti writes, citing a representative of the Ministry of Digital Development, this refers to the development of a single platform capable of identifying inaccurate information generated, including with artificial intelligence technologies. Read more about this here.

Russians are lured by advertisements for paid voice acting of films to steal samples of their voice. Then they steal money from their relatives

Russians are lured by advertisements for paid voice acting of films to steal samples of their voice. Then they steal money from their relatives and friends. Angara Security, a company specializing in information security, spoke about the new fraud scheme in early March 2024.

As Vedomosti writes with reference to Angara Security materials, the authors of ads posted on the Internet ask for an audio recording in the format of a phone call or a recorded conversation, to be sent as a personal message or to a bot. For participation in the "project" they offer a fee of 300 to 5,000 rubles, which may actually be paid to the victim.

Russians are lured by ads about paid voice acting of films to steal samples of their voice

According to experts, these ads pose no direct threat by themselves, but fraudsters use the collected voice data to train neural networks that generate audio messages. These are then used to extort money from victims, with the fraudsters posing as a relative, colleague, friend, etc. In addition, swindlers can contact the banks where the victim holds an account, acting on his behalf.

Angara Security notes that such ads most often appear on Telegram channels, although attackers can use other sites, as well as spam calls offering to make money on a "big project." The number of such messages, excluding spam calls, was 1,200 in 2021, quadrupled to 4,800 in 2022, and reached 7,000 in 2023, information security experts calculated.

Experts interviewed by Vedomosti also note another potential source of samples of Russians' voices: fraudsters can extract them from videos published on social networks. They do not even need to hack user accounts, because most video content is publicly available, experts say.[12]

Using a deepfake to rob a bank. In what situations it is possible, and how to protect yourself from this

In February 2024, the Russian media disseminated information about the case of allegedly using deepfakes to bypass authentication at Tinkoff Bank. The original source suggested that fraudsters allegedly with the help of a deepfake were able to withdraw 200 thousand rubles from the user's accounts. TAdviser discussed with experts how great the risk of bypassing bank authentication and stealing funds using deepfakes is. More

The Ministry of Digital Development of the Russian Federation has taken up the regulation of deepfakes

On February 16, 2024, it became known that the Ministry of Digital Development of the Russian Federation began working on the legal regulation of deepfakes - AI-based technologies for convincingly impersonating another person. The Ministry of Internal Affairs and Roskomnadzor are taking part in the initiative.

Deepfakes can be used by attackers for various purposes. Fraudsters can mimic the voice or image of a particular person, such as a company executive, to withdraw funds or steal sensitive data. The technology is used to spread disinformation, create unrest in the political arena, etc.

The Ministry of Digital Development is working on the issue of legal regulation of deepfakes

The Vedomosti newspaper reports that the need to regulate the sphere of deepfakes was discussed at a meeting of the government commission for the prevention of offenses chaired by Interior Minister Vladimir Kolokoltsev in February 2024. The Ministry of Digital Development will have to work out the issue of identifying fakes created using AI. The report on the work done must be submitted to the Commission by November 1, 2024.

As of mid-February 2024, the use of deepfakes in Russia is not regulated by law. But, as noted by Dmitry Kletochkin, partner at the law firm Rustam Kurmaev and Partners, a criminal case can be opened over acts committed using impersonation technology: such actions can be qualified as fraud in the field of computer information (Article 159.6 of the Criminal Code) or as fraud (Article 159 of the Criminal Code). Deepfake identification currently rests primarily on the manual work of forensic experts, who detect voice jitter, speech lag, splices and other features of an audio or video recording. In the future this process is expected to be automated with the help of AI algorithms.[13]
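As a rough illustration of one acoustic cue mentioned above, here is a minimal Python sketch that estimates pitch jitter from a recording using the open-source librosa library. The file name and the 1% threshold are assumptions for the example; real forensic voice analysis is far more involved.

# Minimal sketch: estimate pitch jitter of a voice recording with librosa.
# The file name and the 1% threshold are illustrative assumptions only.
import numpy as np
import librosa

y, sr = librosa.load("voice_sample.wav", sr=None, mono=True)

# Track the fundamental frequency (f0) over time with the pYIN algorithm.
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

periods = 1.0 / f0[voiced_flag]  # pitch periods of voiced frames, in seconds
if periods.size < 2:
    raise SystemExit("no voiced speech detected")

# Local jitter: mean absolute difference of consecutive pitch periods,
# relative to the mean period (a classic voice-quality measure).
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)
print(f"relative jitter: {jitter:.2%}")

# Unusually low jitter can hint at synthetic speech, which is often
# "too stable"; unusually high jitter can hint at splices. Both are only hints.
if jitter < 0.01:
    print("suspiciously stable pitch - possible synthesis (heuristic)")

A single measure like this is never conclusive on its own; forensic experts combine many such cues with spectral analysis and metadata checks.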

The company transferred $25 million to fraudsters after a video conference with employee deepfakes

In early February 2024, it became known about a major scam using deepfakes in Hong Kong. A large local company with international business transferred $25 million to fraudsters after a fabricated video conference with employees.

According to the South China Morning Post, an employee in the finance department received a phishing message purporting to come from the company's CFO in Britain. The message instructed him to carry out a secret transaction, but the employee doubted the authenticity of the letter. That is where AI came to the fraudsters' aid.

It became known about a major scam using deepfakes in Hong Kong

Using deepfake technology, the attackers organized a conference call with deepfake footage of the CFO and other employees to persuade the victim to transfer the money. Seeing his colleagues on the video was enough for him to initiate the transaction.

According to the employee who fell victim to the scam, during the call he did not even suspect a trick: all the participants looked natural and talked and behaved like his colleagues. The realization came only after he decided to contact the company's head office to clarify the details of the bank transfer. On the call, the scammers had asked the employee to introduce himself, but did not interact with him and mostly handed out orders. Then the meeting suddenly ended.

As of early February 2024, an investigation is under way and there are no detainees yet. The police are not disclosing the name of the company or the victim in the interests of the investigation. It is known that the scammers used publicly available audio and video recordings to create deepfakes of the video call participants.

Hong Kong police noted that this was the first case of its kind involving such a large amount. According to Baron Chan Shun-ching, Acting Senior Superintendent of the Crime Bureau, in previous cases scammers had deceived victims with one-on-one video calls.[14]

Russia may introduce responsibility for the unauthorized use of deepfakes

Russia may introduce liability for the unauthorized use of people's voices and images, said Alexander Khinshtein, chairman of the State Duma Committee on Information Policy, Information Technology and Communications, during a plenary session. This was announced on January 25, 2024 by the press service of State Duma deputy Anton Nemkin.

During the discussion of amendments aimed at toughening liability for personal data leaks, the first deputy chairman of the Duma Committee on Science and Higher Education, Oleg Smolin, raised the issue of the spread of deepfakes - forgeries of both voices and images of people made with artificial intelligence technologies.

The deputy proposed, as part of the second reading of bills on personal data, to consider amendments that imply responsibility for the use of deepfakes for the purpose of fraud and discrediting.

According to Alexander Khinshtein, the deputies are dealing with this issue. "The problem is quite acute, and together with the relevant departments we are working on the preparation of such an initiative," he noted. "It will be reflected not within the framework of these bills; we will have to amend the basic law on personal data."

"So far, deepfakes bring society more harm than good, creating new challenges. Chief among them is the problem of deepfakes being used for fraud," the deputy said.

"For example, the biometrics of a famous actor can be used by some brand for advertising purposes. Deepfakes are also widely used for entertainment: the Internet is flooded with videos in which famous people say phrases that never actually belonged to them. In effect, we face a situation in which a person's face and voice can be used by anyone for their own purposes. This can lead to the humiliation of people's honor and dignity and, probably, to increased social tension if deepfakes are used, for example, for political purposes," said the deputy.

The problem of deepfakes being used for fraud is becoming increasingly relevant, the deputy emphasized.

"Let me remind you that not long ago a fraudulent scheme spread that combines actions already familiar to us: hacking a messenger account and sending messages to the victim's inner circle asking to lend money. The scammers' innovation is sending such messages as audio deepfakes, which is, of course, an almost win-win way to deceive the victim. Imagine: a person close to you is actually addressing you directly - would you refuse his request? The number of such schemes will only grow, so bringing the deepfake problem into the legal field is a necessity," Nemkin said.

"The State Duma is just developing the appropriate regulatory framework. I do not think that including the relevant provisions in related bills will lead to effective results. Here we need to work with the main legislation in the field of personal data," the parliamentarian concluded.

Fraudsters began to fake the voices of Russians with the help of AI and deceive their relatives in instant messengers

Fraudsters have begun to fake the voices of Russians with the help of AI and deceive their relatives and friends in instant messengers. On January 11, 2024, the scheme was described by the department for combating the illegal use of information and communication technologies of the Ministry of Internal Affairs of Russia.

According to the department, the swindlers first hack accounts in Telegram or WhatsApp, for example through fake voting schemes. After that, they download voice messages and generate new ones with the context they need.

Fraudsters began to fake the voices of Russians with the help of AI and deceive their relatives

The attackers then send the generated messages to personal and group chats with a request to lend a certain amount of money, attaching to the voice message a photo of a bank card with a fake recipient name.

One of the victims of this scam told RBC that fraudsters send the fake voice message both in personal correspondence and in all chats in which the account owner participates. A photo of a bank card with a name and surname is sent to the same addresses. The victim's name on social networks differed from the name in her passport, yet the fraudsters used the passport data. The amount the scammers sought in this case was 200,000 rubles. One VKontakte user lost 3,000 rubles in a similar way.

F.A.C.C.T. called the scheme new and "quite advanced" for Russia. According to its experts, at the first stage a Telegram or WhatsApp account is hacked, for example through fake voting, a wave of which was observed at the end of 2023. The scammers then download saved voice messages and use AI services to create new ones with the necessary context, F.A.C.C.T. explained.

Irina Zinovkina, head of the Positive Technologies research group, questioned the effectiveness of such fraud, since not all users send voice messages, and it is not always possible to splice the necessary phrase from existing material.[15][16]

2023

How cybercriminals build up a base for creating audio and video deepfakes

According to a study by Angara Security, in 2023 the number of requests in instant messengers, social networks and community sites for voicing "advertising" and "films" increased by 45% compared to 2022 (about 7,000 messages were recorded). Analysts conclude that the trend toward collecting audio data took shape in 2022, when the number of such requests quadrupled relative to 2021 (about 4,800 messages versus 1,200). The company announced this on March 1, 2024.

Most of the ads are posted on Telegram, but other resources are also used, such as Habr, as well as spam calls offering to make money on a "big project." The authors of such messages ask for names or set the condition that the recorded audio file resemble a phone call. For participation in such projects they offer a fee of 300 to 5,000 rubles. Angara Security analysts conclude that by collecting voice data, cybercriminals are able to refine the tactics of phishing attacks on individuals and businesses using audio and video clips.

"If accounts are closed, cybercriminals can resort to account theft or a technically simpler route - social engineering to gain trust. So obtaining source data for video and audio fakes is much easier than it seems," said Alina Andrukh, an Angara Security incident response specialist.

Since the beginning of 2024, fraudulent schemes combining social engineering and deepfake techniques have been recorded in Russia. The purpose of such an attack is to extract money from company employees who receive messages from a fake manager's Telegram account.

For example, in January a similar technique was used against one company. First, several Telegram user accounts were stolen, then audio files (voice messages) were obtained from them. This data was used to generate fake recordings in which fraudsters, on behalf of the account owner, extorted money from users who shared chats and working groups with him.

"We expect this kind of attack to gain momentum as AI technologies develop. It is therefore extremely important to develop methods for recognizing fake materials and to resolve the issue at the legislative level in order to reduce cybersecurity risks for ordinary users of digital services and for business," Alina Andrukh continued.

Russia has taken an important step toward regulating deepfake materials: the government has ordered ways of regulating the use of the technology to be developed by March 19, 2024. In 2023, one way of distinguishing real content from AI-generated content was already proposed: placing a special mark on the object. It is worth noting that this method is quite difficult to implement and control.

New tools are being developed to identify traces of AI, including in audio and video clips. One example is the Russian project "Zephyr," presented last summer, which can detect artificially created audio and video clips with high probability. New tools and developments will make it easier to identify such materials and their distribution in the near future.

Angara Security recommends verifying a person's identity by asking additional questions: if you receive an audio or video call or a message with suspicious content, check the interlocutor's identity by asking clarifying questions with details that cyber fraudsters are unlikely to know, or simply contact the person directly by email or by a number from your phone's contact list.

You need to pay attention to speech and external features:

Pay attention to the interlocutor's hands in the video, since they most often "suffer" during content generation: fingers are added, removed or fused together. Note that attackers take this into account and, to avoid the video being recognized as fake, prefer a portrait framing during communication.

Watch facial expressions and how often they change. A generated model most often keeps a constant rate of head movement and blink frequency, or repeats the same movements at regular intervals.

Check facial features. For example, hair may be borrowed from the source video and not match reality, or be smeared where one face is superimposed on another. If you know the interlocutor in real life, compare moles, scars and tattoos, if they are characteristic of the contact.

Also pay attention to the voice (how realistic it sounds) and compare lip movements with the audio track. Despite advances in the technology, this remains one of the key cues for recognizing fake materials. A toy illustration of one such cue, blink frequency, is sketched below.
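As a toy illustration of the blink-frequency cue, here is a minimal Python sketch using OpenCV's stock Haar cascades. It counts frames in which a detected face shows no detectable eyes (a crude proxy for closed eyes) and flags abnormally low blink rates, a weakness of older deepfake models. The file name and thresholds are assumptions; this is a heuristic, not a production detector.

# Toy blink-rate heuristic with OpenCV Haar cascades (illustrative only).
# "video.mp4" and all thresholds are assumptions for the example.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("video.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
face_frames, blinks, prev_open = 0, 0, True

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        continue
    x, y, w, h = faces[0]
    face_frames += 1
    # Look for eyes only in the upper half of the face region.
    eyes = eye_cascade.detectMultiScale(gray[y:y + h // 2, x:x + w], 1.3, 5)
    eyes_open = len(eyes) > 0
    if not eyes_open and prev_open:
        blinks += 1  # transition open -> closed counts as one blink
    prev_open = eyes_open

cap.release()
if face_frames:
    minutes = face_frames / fps / 60.0
    rate = blinks / minutes if minutes else 0.0
    print(f"estimated blink rate: {rate:.1f} blinks/min")
    # Humans typically blink roughly 15-20 times a minute; a rate near
    # zero is one (weak) hint that the footage may be generated.
    if rate < 5:
        print("suspiciously low blink rate (heuristic)")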

To prevent manipulation of video footage of public figures, such as company executives, companies can use both commercial deepfake recognition products and open-source ones. There are also technologies that apply a filter invisible to the human eye to videos in the public domain (for example, recordings of top managers' speeches that companies share openly); this filter distorts the result when someone tries to generate fake content from the footage. A purely conceptual sketch of this idea follows.
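The sketch below only illustrates the "invisible to the eye, present in the pixels" concept: a low-amplitude pattern keyed to a secret seed is added to a frame, and a PSNR check confirms the change is imperceptible. Real protective filters are adversarially optimized against specific face-synthesis models; the file names, seed and amplitude here are assumptions for the example.

# Conceptual sketch: add an imperceptible, seed-keyed perturbation to a frame.
# Real "protective filters" are adversarially optimized against face-synthesis
# models; this toy only shows that a pixel-level change can be invisible.
import numpy as np
from PIL import Image

def protect(img: np.ndarray, seed: int = 42, amplitude: float = 2.0) -> np.ndarray:
    """Add low-amplitude pseudo-random noise keyed to a secret seed."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-amplitude, amplitude, img.shape)
    return np.clip(img.astype(np.float64) + noise, 0, 255).astype(np.uint8)

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; above ~40 dB is visually identical."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

frame = np.asarray(Image.open("speech_frame.png").convert("RGB"))  # assumed input
protected = protect(frame)
print(f"PSNR: {psnr(frame, protected):.1f} dB")  # expect well above 40 dB
Image.fromarray(protected).save("protected_frame.png")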

Regular information campaigns and training in identifying fakes are also needed, since cybercriminals spread them through instant messengers, corporate mail and other communication channels.

Cyber fraudsters use AI and deepfakes to lure data or money from users

Cyber fraudsters are actively using artificial intelligence and deepfakes to impersonate other people in instant messengers and social networks. Any message with a strange request, even one received from relatives or friends, should therefore be treated critically.

"Deepfakes can be used to create fake video or audio recordings in which attackers impersonate another person: a relative, friend or colleague. Having recorded a voice request to lend money, fraudsters are more likely to extract financial gain or convince the user to hand over personal data," Konstantin Shulenin, an expert on network threats at Security Code, warned in an interview with Lenta.ru.

Neural networks are also actively used: they help scammers automate phishing campaigns and put them on stream, reaching a larger number of victims, and they make the phishing emails themselves more realistic, the press service of State Duma deputy Anton Nemkin told TAdviser on December 29, 2023.

In addition, employees of various companies and organizations who receive emails from "management" and "colleagues" are frequent targets of cybercriminals. Toward the end of the year and on the eve of the holidays, people's vigilance decreases, and with it the fraudsters' chances of getting into a system grow. Many also go on long vacations and so cannot detect suspicious account activity in time.

One of the main reasons fraudsters can build their attacks so skillfully is the excess of information about citizens on the network, said Anton Nemkin, a member of the State Duma Committee on Information Policy, Information Technology and Communications and a deputy of the United Russia faction.

File:Aquote1.png
"Any information is easy to use in order to personalize the attack, make it more convincing, and therefore more malicious. I recommend always keeping your head "cold," do not neglect to check any information that comes to you on the network. Even if you already believe the message you received, it will not be superfluous to try to contact the person on whose behalf you were contacted. During the festive time, there are more traps online than usual by about a quarter. It's not worth the risk, "he said.
File:Aquote2.png

The quality of deepfakes has grown significantly over the past few years, but there are still a number of signs by which a fake can be identified, Anton Nemkin believes.

File:Aquote1.png
"We all live in an age of active development phase of generative machine learning models, and deep fake technology in particular. With the help of the human eye, it is no longer always possible to distinguish between real and unreal, therefore, special detection technologies exist to analyze potential visual fakes. For users, the following can become a universal instruction: pay attention to the movement of the eyes, the color of the skin and hair, the contour of the oval of the face - often they can be blurred, strange, - explained the parliamentarian. - In the case of voice fakes, you should always carefully evaluate the intonations and clarity of speech. And, of course, always generally critical of any requests that come to you online, if it concerns your personal data or financial resources. "
File:Aquote2.png

A program is presented that creates realistic videos from one photo and audio recording

On November 16, 2023, researchers from the School of Computer Science and Engineering at Nanyang Technological University in Singapore announced an artificial-intelligence program that generates video from a single photo and an audio recording. The system, called DIRFA, can reproduce the facial expressions and head movements of a talking person. Read more here.

Child psychiatrist in the United States received 40 years in prison for creating pornographic deepfakes

On November 8, 2023, the US Department of Justice announced that David Tatum, a child psychiatrist from Charlotte, North Carolina, had been sentenced to 40 years in prison for the production, possession, and transfer of child sexual abuse material. Among other things, he was charged with creating pornographic deepfakes: images generated using artificial intelligence. Read more here.

Indian authorities ordered social networks to remove deepfakes

On November 7, 2023, the Ministry of Electronics and Information Technology of India (MeitY) released a document requiring operators of large social networks to remove deepfakes from their platforms within 36 hours of receiving a notification or complaint about the publication of such content.

The department notes that deepfakes, that is, falsified videos, photographs, or audio recordings created using artificial intelligence technologies, can cause serious damage to citizens, primarily women. As an example, it cites a widely publicized case in which a video allegedly featuring the Indian actress Rashmika Mandanna appeared on social networks. The video, created using AI algorithms, quickly gained a huge number of views, and Mandanna was forced to state publicly that the woman in it was not her.

India's Electronics and Information Technology Ministry requires major social media operators to remove deepfakes

Given the serious problems associated with disinformation and deepfakes, MeitY issued its second advisory in six months (as of November 2023) calling on online platforms to take decisive measures against the distribution of such materials. The ministry emphasizes that under rules in force in the country since 2021, Internet resources are obliged to prevent the dissemination of falsified information by any user. Failure to comply entitles affected persons to go to court under the provisions of the Indian Penal Code.

File:Aquote1.png
Our government takes very seriously its duty to guarantee the safety and trust of all citizens, particularly the children and women against whom such content is used. It is imperative that online platforms take active measures to combat this threat, MeitY said in a statement.[17]
File:Aquote2.png

Paedophiles use AI to create childhood photos of stars and invent stories about them

Paedophiles are actively using generative artificial intelligence systems to "rejuvenate" celebrities in photos and create sexual images of them as children. In addition, thousands of AI-generated images depicting child abuse have been found on the Internet. This is stated in a report by the British non-governmental organization Internet Watch Foundation (IWF), published on October 25, 2023. Read more here.

FBI: Deepfakes used in sex extortion scam

Network attackers have begun using deepfakes to generate sexually explicit content for the purpose of blackmail. The Internet Crime Complaint Center (IC3) of the US Federal Bureau of Investigation warned of this on June 5, 2023.

Cybercriminals are said to use AI technologies and services to alter photos or videos of the victim. Source material can be taken, for example, from social network profiles or requested from the user under some pretext. The resulting sexual deepfakes can then be used for extortion or to damage the victim's reputation. Fake images and videos presenting a person in an unflattering light are often published on forums or pornographic sites.

Cybercriminals use AI technology and services to change photos or videos involving a victim

By April 2023, a sharp increase had been recorded in cases of sexual extortion using deepfake photos or fake videos. Attackers usually demand a ransom, threatening to distribute the materials on the Internet or send them to the victim's relatives or colleagues. Sometimes the criminals pursue other goals, in particular demanding certain information.

The FBI urges caution when posting personal photos, videos, and identifying information on social media, dating apps, and other online platforms. Although such materials seem harmless, they can give attackers ample opportunity to organize fraudulent schemes. New generative AI systems make it easier to carry out personalized attacks using fake images or videos based on real content. Moreover, victims of sextortion, beyond the financial losses, can suffer serious personal harm.[18]

The number of deepfakes in the world since the beginning of the year has grown several times

The total number of deepfakes around the world during the first months of 2023 increased several times compared to the same period in 2022. This is evidenced by a study by DeepMedia, the results of which were disclosed on May 30, 2023.

It is noted that the explosive growth in the number of fakes worldwide is explained by the sharply reduced cost of creating such audio and video materials. Where accurately simulating a voice once required about $10,000, counting server equipment and artificial intelligence algorithms, by early May 2023 the cost had fallen to just a few dollars. This is due to the emergence of a new generation of generative AI models and more powerful hardware platforms designed specifically for neural networks and machine learning.

The explosive increase in the number of fakes on a global scale is due to the sharply reduced cost of creating such audio and video materials

According to DeepMedia estimates, from January to May 2023, three times more fake videos of all types and eight times more voice deepfakes were posted on the Internet than in the same period of 2022. AI technologies can be used to spread false statements on behalf of politicians and well-known public figures, which can provoke serious public unrest and conflict. Although large social platforms like YouTube and Facebook (recognized as an extremist organization; its activities are prohibited on the territory of the Russian Federation) have introduced algorithms for combating deepfakes, the effectiveness of such tools is not high enough.

Leading AI developers such as OpenAI are said to embed special functions into their services that prevent generating content featuring public figures, but small startups often neglect such measures. An industry-wide solution for identifying AI-created material is already under discussion: it could take the form, for example, of special digital tags.[19]

In China, a fraudster used a fake video call to deceive a businessman for $610,000

In May 2023, it became known that in China, a fraudster used artificial intelligence to impersonate a friend of businessman Guo and convince him to transfer $610,000.

Guo received a video call from a man who looked and spoke like a close friend. The caller, however, was actually a fraudster "using technology to change his face" and voice. Guo was persuaded to transfer 4.3 million yuan after the fraudster claimed that another friend needed the money, transferred from a company bank account, to pay a tender deposit.

A deepfake system is presented that makes the user look directly at the camera

On January 12, 2023, Nvidia announced the Maxine Eye Contact system, a deepfake technology that maintains constant eye contact for users during video conferencing sessions. Read more here.

2022

The volume of the global market for tools for identifying deepfakes is estimated at $3.86 billion

The development of the digital creative industry and the production and consumption of content worldwide increasingly depend on artificial intelligence technologies, and increasingly feel the negative impact of deepfakes. The global market for tools to identify such fakes reached $3.86 billion in 2022. These figures come from a study by the Institute for Internet Development (ANO IRI) and Rostelecom, the results of which were published on June 19, 2024.

According to the company DeepMedia, which develops technologies for detecting synthetic media content, in 2023 three times more deepfake videos appeared on the Internet and eight times more voice fakes than a year earlier. Moreover, as of 2022, about 43% of the surveyed users could not distinguish falsified videos from real ones. This opens up ample opportunities for fraudsters and cybercriminals. For example, deepfakes can be used for the purpose of political manipulation or to create fake video materials with the "participation" of famous people.

The volume of the global market for tools to identify such fakes in 2022 reached $3.86 billion

One of the main ways of regulating deepfakes is to establish requirements for labeling artificially generated content, whether with an explicit (visible) watermark or an implicit one (extractable by technical means). In Russia, regulatory initiatives to govern the use of deepfakes have been reported periodically, but as of December 2023, no such bills had been submitted to the State Duma.

At the same time, the market offers various solutions for checking publications for generated content, developed both by specialized companies such as Sentinel AI and Sensity and by major IT corporations, for example, FakeCatcher from Intel and Video Authenticator from Microsoft. In addition, leading social networks are building their own tools for recognizing deepfakes.[20]

Investments in deepfake startups have skyrocketed in the world

In 2022, venture capital funds in the world invested approximately $187.7 million in startups specializing in deepfake technologies. For comparison: in 2017, investments in the relevant area were estimated only at $1 million. Such figures are given in the PitchBook study, the results of which were released on May 17, 2023.

It is said that during the first months of 2023, financial injections into deepfake startups reached $50 million. The largest recipient of venture money over the past five years (by the beginning of 2023) was the New York company Runway, which, among other things, is developing an artificial intelligence-based tool capable of generating short videos by text description. This firm raised at least $100 million and was valued at $1.5 billion.

Venture capital funds worldwide have invested approximately $187.7 million in startups specializing in deepfake technologies

At the same time, the London company Synthesia, which is developing a platform for creating realistic virtual characters based on video and audio recordings, received $50 million for development from a number of investors, including Kleiner Perkins. The Israeli startup Deepdub, which developed AI-based audiovisual dubbing and language localization technology, raised $20 million. The deepfake visual effects studio Deep Voodoo received the same amount.

Alongside the advent of new and more realistic deepfake tools, the market for specialized fake detection tools is developing rapidly. According to calculations by the research firm HSRC, this segment was worth approximately $3.86 billion in 2022, and a compound annual growth rate (CAGR) of 42% is expected through 2026. Deepfake detection tools are needed to prevent disinformation in the media, counter various fraudulent schemes on the Internet, and so on.[21]

In the US, scammers stole $11 million with deepfakes imitating someone else's voice

In 2022, fraudsters, using artificial intelligence models to accurately imitate (deepfake) human voices, stole about $11 million from their victims in the United States alone. Such data are contained in the report of the Federal Trade Commission (FTC), published on February 23, 2023. Read more here.

Cloud and Pyaterochka created a commercial using deepfake technology

Cloud (Cloud Technologies LLC), with the support of the AIRI Institute of Artificial Intelligence, together with the Pyaterochka retail chain, created a commercial using DeepFake technology. Cloud announced this on December 22, 2022. Trained by AIRI specialists on the Cloud ML Space cloud platform, the model became the basis of the digital image of actress Olga Medynich, who was not even present on the set. Read more here.

Chinese regulator publishes rules to protect citizens from deepfakes

On December 12, 2022, it became known that the Cyberspace Administration of China (CAC) is introducing new rules for content providers that change the face and voice data of users.

On January 10, 2023, norms governing so-called deepfake technologies and services come into force. Deepfakes are an image synthesis technique based on artificial intelligence in which synthesized frames are superimposed on source material. In the vast majority of cases, such videos are created with generative adversarial networks: one part of the algorithm learns from real photographs of a specific subject and creates images, literally "competing" with the second part of the program until it begins to confuse the copy with the original. The resulting images are almost indistinguishable from real ones and are used for manipulation or disinformation.
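The "competition" described above can be shown in a few lines. The toy PyTorch sketch below trains a generator against a discriminator on synthetic 2-D data; real deepfake pipelines use far larger convolutional networks, but the adversarial loop is structurally the same.

```python
# Minimal GAN loop: G learns to produce samples D cannot tell from "real" data.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(128, 2) * 0.3 + 2.0   # stand-in for "real photographs"
    fake = G(torch.randn(128, 16))
    # Discriminator step: learn to separate real samples from generated ones.
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator step: learn to make D "confuse the copy with the original."
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```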

Chinese regulator publishes rules to protect citizens from deepfakes

The CAC ruling provides for protecting people from fraudulent activities carried out with deepfakes, such as passing users off as other persons. The document also covers online publishers of deepfakes, who must take into account China's myriad other rules on acceptable online content.

At the same time, China expects synthetic images of people to be widely used in applications such as chatbots. In such scenarios, deepfakes should be labeled "digital creations." The rules also spell out how the creators of deepfakes, called "deep synthesis service providers," should ensure that their artificial intelligence and machine learning models and algorithms are as accurate as possible, and that the personal data they collect is kept secure.[22]

Intel introduced deepfake recognition technology

Intel has introduced deepfake recognition technology. This became known on November 17, 2022. Read more here.

Roskomnadzor creates a system for checking videos for lies and searching for deepfakes

In early November 2022, it became known about the creation of the Expert service, which will make it possible to check video recordings of speeches for lies and manipulation. The technology is being developed by specialists of the ITMO National Center for Cognitive Development for the Main Radio Frequency Center (GRFC), subordinate to Roskomnadzor. Read more here.

The world's first web series using Deepfake technology was shot in Russia

As it became known on October 10, 2022, the Russian company Agenda Media Group created the world's first web series using deepfake technology; the main character of the parody comedy is the image of British actor Jason Statham. The project was created with the support of the Institute for Internet Development.

Alexey Parfun, CEO of Agenda Media Group, said in an interview with TASS that using actors' images via this technology is not legally prohibited, but a deepfake must not harm a person's honor, dignity, or business reputation, or disclose personal information.

A snippet of a series filmed using Deepfake technology
File:Aquote1.png
This is an ironic look at Russian life by a foreigner in the person of Statham, said Parfun.
File:Aquote2.png

According to him, the project reveals the character's attempts to understand and integrate into Russian life. According to the plot, the main character came to shoot in the Russian Federation and stayed to live in it.

The events of the series take place in 2027. After five years of filming in Russia, Statham has stayed to live in a Russian village. For his 60th birthday, friends come to visit him: Reeves, Robbie, and Pattinson. Their images were created using deepfake technology: artificial intelligence produces synthetic content in which one person's face is replaced by another's in photo, video, or audio material. The series stars Yulia Bashorina, Andrei Deryugin, and Andrei Korotkov.

According to Agenda Media Group production director Maria Artemova, the entire project, including scriptwriting and post-production, took three months. The team had to monitor certain peculiarities of the shoot minute by minute, she noted.

File:Aquote1.png
For example, deepfake shooting does not allow close-ups, and it imposes specific requirements on lighting, optics, cameras, and the actors' positions in the frame. It consumes a lot of time and requires many takes. In addition, there were significant limits on the amplitude of the actors' movements, which also caused some difficulties, she added.[23]
File:Aquote2.png

The EU intends to fine social networks for non-removal of deepfakes

The EU intends to fine social networks for non-removal of deepfakes. This became known on June 14, 2022.

Google, Twitter and other tech companies will have to take action to crack down on deepfakes and fake accounts or face hefty fines of up to 6% of their global turnover, according to the European Union's updated code of practice.

The creation of deepfakes is possible thanks to neural networks that can simulate people's faces and voices from photos and audio recordings.

The European Commission intends to publish a regulatory document on disinformation by the end of June 2022. It contains examples of manipulative behavior; signatories will be obliged to fight fake accounts, disinformation advertising, and deepfakes, and will also have to ensure greater transparency in political advertising. Companies that sign the document are expected, within six months, to adopt and implement "a policy regarding unacceptable manipulative behavior and practices in their services, based on the latest data on the behavior, tactics, methods and procedures used by cybercriminals."[24]

With the help of a deepfake, you can impersonate another person in the bank

On May 19, 2022, it became known that a deepfake can be used to impersonate another person at a bank.

Deepfake technology allows you to bypass the facial recognition system.

Sensity, which specializes in identifying attacks that use deepfake technology, investigated the vulnerability of ten identity verification services. It used deepfakes to superimpose a user's face onto an ID card for scanning, and then copied the same face into the attacker's video stream for the identification step.

A liveness test usually asks the user to look into the device's camera, sometimes turning their head or smiling, and compares the user's appearance with their identity card using facial recognition technology. In the financial sector, such a check is called "Know Your Customer" (KYC) and forms part of document and account verification.

File:Aquote1.png
We tested 10 services and found that 9 of them are vulnerable to deepfakes, said Sensity Chief Operating Officer Francesco Cavalli. - There is a new generation of AI that could pose a serious threat to companies. Imagine what you can do with a fake account created with a deepfake. And no one will be able to detect it.
File:Aquote2.png

Cavalli was disappointed by the reaction of the services, which considered the vulnerability insignificant.

File:Aquote1.png
We have informed vendors that services are vulnerable to deepfake attacks. The developers ignored the danger. We decided to publish the report, as the public should be aware of these threats, the researcher added.
File:Aquote2.png

Vendors sell liveness tests to banks, dating apps, and cryptocurrency projects. One service was even used to verify the identity of voters in elections in Africa (although there is no indication in the Sensity report that that vote was compromised by deepfakes).

Deepfake technology poses a particular danger to the banking system, where it can be used for fraud.

File:Aquote1.png
I can create an account, transfer the stolen money to a bank account or take out a mortgage, because online lending companies compete with each other in the speed of issuing loans, the expert added.
File:Aquote2.png

An attacker can easily intercept the video stream from a phone's camera and use deepfake technology for malicious purposes. However, the Face ID facial recognition system cannot be bypassed this way: Apple's identification system uses depth sensors and verifies identity not only by appearance but also by the physical shape of the face.[25]

Deepfakes can easily fool many Facial Liveness Verification authentication systems

On March 3, 2022, it became known that some deepfake detection modules are tuned to outdated techniques. A team of researchers from the University of Pennsylvania (USA) and Zhejiang and Shandong Universities (China) studied the susceptibility of face-based authentication systems to deepfakes. As the results showed, most systems are vulnerable to evolving forms of deepfakes.

Deepfakes deceive recognition systems

The study carried out deepfake-based attacks, using a dedicated platform, against Facial Liveness Verification (FLV) systems, which are supplied by major vendors and sold as a service to downstream customers such as airlines and insurance companies.

Facial liveness verification is designed to repel techniques such as image attacks, the use of masks and pre-recorded video, so-called "master faces," and other forms of visual identity cloning.

The study concludes that the limited set of deepfake detection modules in such systems may be tuned to outdated techniques or may be too architecture-specific. The experts note that even processed videos that look unrealistic to people can still bypass current deepfake detection mechanisms with a very high probability of success.

Another finding was that the current configuration of widely used facial verification systems is skewed toward white male faces: images of women and people of color proved more effective at circumventing the checks, putting clients in these categories at greater risk of being hacked through deepfake-based methods.

The authors propose a number of recommendations for improving the current state of FLV: abandoning single-image authentication ("image-based FLV"), in which the decision rests on a single frame from the client's camera; updating deepfake detection systems more flexibly and comprehensively across the image and voice domains; requiring voice authentication in user video to be synchronized with lip movements (which, as a rule, it is not); and requiring users to perform gestures and movements that deepfake systems find hard to reproduce in real time (for example, turning to profile or partially obscuring the face).[27]
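The last recommendation amounts to a randomized challenge-response protocol. The sketch below illustrates one possible shape for it, under stated assumptions: the challenge list, the three-action session, and the verify_action callback (a placeholder for a real vision model watching the camera stream) are all hypothetical names invented for this example.

```python
# Sketch of randomized liveness challenges: the server issues an unpredictable
# sequence of gestures, so a pre-generated deepfake clip cannot anticipate
# which actions will be requested, in what order, or within what time window.
import secrets
import time
from typing import Callable

CHALLENGES = ["turn_head_left", "turn_head_right", "smile", "blink_twice", "look_up"]

def issue_challenge(n: int = 3) -> list[str]:
    # Cryptographically strong randomness: the sequence must be unguessable.
    return secrets.SystemRandom().sample(CHALLENGES, n)

def run_liveness_session(verify_action: Callable[[str], bool],
                         timeout_s: float = 10.0) -> bool:
    """verify_action(expected) watches the live camera stream and returns
    True once the requested gesture is observed (placeholder for a model)."""
    for expected in issue_challenge():
        start = time.monotonic()
        ok = verify_action(expected)
        if not ok or time.monotonic() - start > timeout_s:
            return False   # wrong gesture, or too slow (offline generation?)
    return True
```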

Deepfakes banned in China

At the end of January 2022, a bill banning the misuse of deepfakes was put forward in China. The technology in question synthesizes human images with artificial intelligence: the algorithm combines several photos showing a person with different facial expressions and produces a video from them, analyzing a large number of images to learn how a particular person looks and moves.

The initiative, developed by the Cyberspace Administration of China, explains the need for regulation by the government's desire to ensure that the Internet remains a tool for good. The explanatory note says that digitally created voices, videos, chatbots, and the manipulation of faces or gestures will attract criminals and fraudsters.

Deepfakes banned in China

The bill prohibits the use of such fakes in any application that could disrupt social order, infringe on citizens' rights, spread fake news, or portray sexual activity. The draft also proposes requiring permission before what China calls deep synthesis can be used even for legitimate purposes. Exactly which purposes count as legitimate is not specified, but the draft contains extensive provisions on how digital assets must be protected so as not to violate user privacy.

In the case of deep synthesis, the project proposes the requirement to label it as a digital creation in order to eliminate any doubts about authenticity and origin. The draft also sets out requirements for service providers to implement security practices and always act in China's national interest.[28]

2021

Fraudsters stole $35 million from a UAE bank with the help of a deepfake of the voice of its head

In mid-October 2021, it became known that criminals had made off with $35 million from a bank in the UAE by imitating a company director's voice with advanced artificial intelligence. They reportedly used the voice deepfake to mimic a legitimate commercial transaction linked to the bank.

Forbes reported that deepfake voices were used to trick a bank employee into believing he was transferring money as part of a legitimate operation. The story came to light after the publication of court materials describing a seemingly ordinary phone call received by the manager of an unnamed bank in January 2021.

In the UAE, scammers with the help of a deepfake voice deceived a bank employee and stole $35 million

According to the court filings, the person on the other end of the line claimed to be the director of a large company the manager had previously spoken to, and their voices were identical. The call was backed by authentic-looking emails from the company and its lawyer, which convinced the branch manager that he really was talking to the director and that the company was indeed in the middle of a large commercial transaction worth $35 million.

Subsequently, he followed the instructions of the caller and made several large money transfers from the company to a new account. Unfortunately, it was all a sophisticated scam.

Investigators from Dubai found out that the scammers used "deep voice" technology, which allowed them to mimic the voice of the head of a large company. Police concluded that up to 17 people were involved in the scheme and that the stolen money was being transferred to several different bank accounts scattered around the world.

Two of the accounts, for example, were registered in the United States at Centennial Bank and received $400,000. UAE investigators have already reached out to US officials for help with the investigation.

This is not the first major scam carried out with the help of voice imitation. In 2019, an energy company in the UK lost $243,000 after someone posing as its CEO contacted one of its employees.

Attackers are increasingly using the latest technologies to manipulate people who are unaware that such tools exist, according to Jake Moore, a cybersecurity expert at ESET.

According to experts watching the artificial intelligence market, this will not be the last time.[29]

Deepfake voices can fool IoT devices and people after five seconds of training

Deepfake voices can fool IoT devices and people after five seconds of training. This became known on October 14, 2021.

The deepfake tricked Microsoft Azure about 30% of the time and successfully fooled WeChat and Amazon Alexa 63% of the time.

Researchers from the Security, Algorithms, Networking and Data (SAND) laboratory at the University of Chicago tested open-source deepfake voice synthesis programs available on GitHub to find out whether they could bypass the voice recognition systems of Amazon Alexa, WeChat, and Microsoft Azure.

According to the developers of SV2TTS, the program needs only five seconds of audio to create an acceptable imitation.

The program could fool human ears too: of the 200 volunteers asked to pick out real voices among the deepfakes, about half of the answers were wrong.

Deepfake audio was more successful at imitating women's voices and the voices of non-native English speakers.

File:Aquote1.png
We found that both humans and computers can be easily deceived by synthetic speech, and existing protections against synthesized speech do not work, the researchers told NewScientist.
File:Aquote2.png

The experts tested another voice synthesis program, AutoVC, which needs five minutes of speech to recreate a person's voice. AutoVC managed to trick Microsoft Azure only 15% of the time, so the researchers declined to test it against WeChat and Alexa.[30]

Ernst & Young started using employee video deepfakes to communicate with customers instead of face-to-face meetings

In mid-August 2021, it became known that Ernst & Young (EY) had begun using video deepfakes of employees to communicate with clients instead of face-to-face meetings. For this, the firm uses technology provided by the British startup Synthesia. Read more here.

The Ministry of Internal Affairs has taken up the fake video recognition system

In early May 2021, it became known that the Ministry of Internal Affairs of Russia had concluded a contract with the Moscow scientific and industrial company High Technologies and Strategic Systems JSC. The contract covers research under the code name "Mirror" ("Camel"), aimed at identifying fabricated videos (deepfakes). The research is needed by forensic units that conduct video technical examinations. Read more here.

Fraudsters in China with the help of deepfakes deceived the state facial recognition system for $76.2 million

To deceive the system, the scammers bought high-quality photos and fake personal data on the black market, priced from $5. The fraudsters, Wu and Zhou, processed the purchased photos in deepfake applications, which "revive" an uploaded picture and turn it into a video, making the face appear to nod, blink, move, and open its mouth. Such applications can be downloaded for free.

For the next stage, the scammers bought specially reflashed smartphones costing about $250: during face recognition, the front camera of such a device does not turn on; instead, the system receives a pre-prepared video and treats it as the camera feed.

Using this scheme, the fraudsters registered a shell company that issued fake tax invoices to its clients, earning $76.2 million over two years.

Biometrics are widespread in China: they are used to confirm payments and purchases, verify identity when applying for public services, and so on. But as the technology has spread, data protection has become one of the main problems.

2020

Facebook researchers call NtechLab's deepfake recognition algorithm the most attack-resistant

NtechLab, a developer of neural-network-based video analytics solutions and a technology partner of the Rostec state corporation, announced on December 11, 2020 that its deepfake recognition algorithm had been named the most resistant to deception via so-called adversarial attacks. Read more here.

The appearance of Telegram bots that create fake "porn photos" based on DeepNude for blackmail

At the end of October 2020, a network of deepfake bots was discovered in Telegram that generate fake "porn photos" on request. Users insert into such pictures the faces of women they know, taken from social network images, and then circulate them in public channels and chats or use them for blackmail. Read more here.

2019

Samsung introduced digital people that are not distinguishable from real ones

In early January 2020, Samsung presented Neon, a project of "artificial humans" developed by Samsung Technology and Advanced Research Labs (STAR Labs). Read more here.

Zao App Release

In early September 2019, it became known about the release of the Zao application, which, using artificial intelligence, allows you to insert the user's face into a scene from the film. Read more here.

Energy company defrauded of $243,000 using AI-faked voice

In early September 2019, criminals swindled $243,000 from a British energy company by posing as its chief executive with the help of an artificial-intelligence voice fake.

The general manager of an unnamed energy company thought he was on the phone with his boss, an executive at the German parent company. "Boss" asked him to send funds to a Hungarian supplier. According to Euler Hermes Group SA, the offender said that the request was very urgent and asked the manager to transfer the money within an hour. Euler Hermes declined to name the victim company.

Criminals lured $243 thousand from the British energy company.

Rüdiger Kirsch, a fraud expert at the insurance company Euler Hermes, said the defrauded manager recognized his boss's slight German accent and general tone of voice over the phone. Immediately after the call, the manager sent the money to the Hungarian supplier and contacted his boss again to report the completed task. Soon the head of the company and his subordinate realized they had fallen victim to fraud and went to the police.

Jake Moore, a cybersecurity specialist at ESET, said at the end of August 2019 that in the near future we will face a monstrous increase in cybercrime. DeepFake can insert the faces of celebrities and public figures into any video, but creating a convincing image requires at least 17 hours of footage of the person; faking a voice takes far less material. And as computing power grows, such fakes become ever easier to create.

To reduce the risks, Moore notes, it is necessary not only to inform people that such imitations are possible, but also to introduce special verification steps before transferring money, for example two-factor verification.[31]

In the United States, prison terms introduced for distributing face-swapped porn

In early July 2019, Virginia passed a law prohibiting the distribution of face-swapped pornography, becoming the first US state to adopt such an initiative.

In the United States and other countries, so-called revenge porn is widespread: posting sexually explicit material on the Internet without the consent of the person depicted. As a rule, such material is posted by former partners in revenge, or by hackers who have gained unauthorized access to it. With the development of artificial intelligence technologies and simple tools for high-quality editing of photo and video content, revenge porn is increasingly produced by superimposing the victim's face onto a porn actor's body.

In early July 2019, Virginia passed a law prohibiting the distribution of face-swapped pornography

Starting July 1, 2019, anyone who distributes or sells fake photos and videos of a sexual nature for the purpose of "blackmail, harassment or intimidation" in Virginia can be fined up to $2,500. A prison term of up to 12 months is also provided.

Virginia became the first American state to outlaw so-called deepfakes. As of July 2019, similar initiatives are being prepared in other states. New York, for example, is considering a bill prohibiting the creation of "digital copies" of people without their consent, and in Texas a law establishing liability for distributing face-swapped sexual content comes into force on September 1, 2019.

File:Aquote1.png
We must rework our outdated and disparate laws, including criminal ones, to address the paralyzing and life-threatening consequences of such threats, and recognize the significant harm of fake porn, says Professor Clare McGlynn of Durham University.
File:Aquote2.png

According to experts, it is becoming harder and harder to identify a fake, even in video.[32]

Service launched to strip women in a photo in 30 seconds

At the end of June 2019, it became known about the launch of the DeepNude service, which "undresses" a woman in a photo. The developer's name is unknown, but the app's Twitter account says the product is being developed by a "small team" from Estonia. Read more here.

Samsung's Moscow AI Center created a neural network that brought a portrait of Dostoevsky to life

In May 2019, researchers from the Samsung Artificial Intelligence Center in Moscow presented a neural network capable of "animating" static images of faces. The operation of the system is described in the materials published on the portal arXiv.org.

Artificial intelligence revived the portrait of Dostoevsky - animation provided by the press service of Samsung

The neural network records the movements and facial expressions of a human face on video and then transfers the captured data to a static portrait. The scientists "showed" the artificial intelligence a large number of frames containing people's faces.

A special mask denoting the contours and basic facial expressions was applied to each face in these frames. The relationship between the mask and the original frame is stored as a vector, whose data is used to superimpose a new mask onto the target person's image, after which the finished animation is compared against the template.
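The "mask" here is essentially a set of facial keypoints flattened into a vector, one per driving frame. As a rough illustration only, the sketch below extracts such vectors with MediaPipe FaceMesh, a readily available landmark detector; Samsung's actual model and landmark format are not public at this level of detail.

```python
# Extract a per-frame landmark vector that could condition an animation model.
import cv2
import mediapipe as mp
import numpy as np

mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)

def landmark_vector(frame_bgr: np.ndarray):
    """Return a flat (468 * 2,) array of normalized face keypoints, or None."""
    result = mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None                      # no face found in this frame
    pts = result.multi_face_landmarks[0].landmark
    return np.array([(p.x, p.y) for p in pts], dtype=np.float32).ravel()

# Each frame of the driving video yields one vector; the sequence of vectors
# is what gets mapped onto the static portrait to produce the animation.
```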

Samsung notes that such a development can be used in telepresence systems, video conferencing, multiplayer games and when creating special effects in films.

According to ZDNet, deepfake technologies themselves are not new, but the Samsung system is interesting in that it does not use 3D modeling and allows you to create a "live" face model with only one photo. If you upload 32 pictures to it, you can "achieve a perfectly realistic picture and personalization," the company noted.

The developers demonstrated the system's capabilities on photographs of Marilyn Monroe, Salvador Dali, and Albert Einstein. It also works on paintings and portraits: in a video, the project's authors showed an "animated" portrait of the Russian writer Fyodor Dostoevsky.

At the time of the demonstration, the artificiality of the movements was still noticeable; the developers plan to address these defects in the future.[33]

McAfee: Face spoofing in video can no longer be detected with the naked eye

In early March 2019, the cybersecurity company McAfee announced that face substitution in video can no longer be detected with the naked eye. In a keynote at the RSA cybersecurity conference in San Francisco, McAfee chief technology officer Steve Grobman and chief data scientist Celeste Fralick warned it was only a matter of time before hackers exploited the technology.

Now that attackers can create individualized, targeted content, they can use AI for a variety of purposes, for example, to hijack accounts through social engineering techniques or phishing attacks. Personalized phishing, that is, fraud aimed at obtaining confidential banking data in order to steal money, is more successful, and the new capabilities of artificial intelligence allow it to be carried out at the scale of automated attacks.

It will be more and more difficult to distinguish deepfake video from genuine materials, and this could become a problem for cybersecurity, experts say

There is a whole field of cybersecurity called adversarial machine learning, which studies possible cyberattacks on machine learning classifiers. McAfee believes that the image spoofing technique is a serious threat and can be used to distort the output of image classifiers.

One way to fool both people and AI is to take a real photo and quietly change a small part of it. With a minimal change, a photo of penguins can be made to be interpreted by the AI as a frying pan. Misclassifications at a more serious scale could be disastrous.

Grobman stressed that DeepFake technologies in themselves are a tool that can be used for a wide variety of purposes. It is impossible to prohibit attackers from using new methods, but it is possible to establish a line of defense in a timely manner, he said.[34]

Fake AI porn has terrorized women

By the beginning of 2019, artificial intelligence had reached a level of development that makes it possible, easily and without special technical skills, to "attach" the heads of stars and ordinary women to the bodies of porn actresses and create realistic videos. These explicit films, created using the DeepFake method, are edited so well that they are indistinguishable from the real thing. Their emergence is dangerous because the technology may also begin to be used to spread fake news. But even more dangerous is their use as a tool for blackmailing and humiliating women.

In light of the proliferation of AI and easy, non-consensual access to photos of former partners, colleagues, and other people on social media sites such as VKontakte and Facebook, demand for tools for creating fake videos is growing. Although the law may be on the victims' side, they often face the significant obstacles typical of online harassment cases.

File:Aquote1.png
Fake porn videos cause the same stress as intimate photos posted online, says writer and former politician Charlotte Laws. - Fake porn videos are realistic, and their impact is exacerbated by the growing volume of fake news amid which we live.
File:Aquote2.png

AI-faked porn has terrorized women

Laws adds that fake videos have become a common way to humiliate or blackmail women. In a survey of 500 women who had been victims of revenge porn, Laws found that 12% had been victims of fake porn videos.

One way to address the problem may be to revise and extend the laws prohibiting revenge porn. These laws, on the books in 41 US states as of the beginning of 2019, are recent and indicate a shift in the government's attitude toward non-consensual pornography.

Fabricated porn featuring actress Gal Gadot

Another approach is to bring civil actions against offenders. As the independent non-profit Electronic Frontier Foundation notes on its website, victims of fake porn videos in the United States can sue for defamation or for presenting them in a "false light." They can also file a "right of publicity" claim, arguing that the creators of the video profited from the victim's image without her permission.

However, all these potential remedies could run into a serious obstacle: free speech protections. Anyone sued for making a fake clip can claim the video is a form of cultural or political expression protected by the First Amendment. Laws believes that in fake porn cases most judges would view the First Amendment defense critically, especially where the victims are not famous and the video involves only sexual exploitation, with no political satire or material of artistic value.

A fragment from a fake porn video in which the face of a porn actress was replaced with the face of Hollywood star Gal Gadot

At the same time, the victim herself has almost no way to get the offending video taken down. The reason lies in Section 230 of the US Communications Decency Act, which shields providers from liability for what users publish on their pages. In the case of sites hosting fake porn videos, providers can claim immunity because the videos are uploaded by their users, not by them. The exception is intellectual property infringement, where the operator must remove material upon notification from the copyright owner.

According to Jennifer Rothman, a Loyola Law School professor and author of a book on privacy and publicity rights, courts lack a clear view of whether this exception applies to state laws (for example, the right of publicity) or only to federal ones (such as copyright and trademark).

This raises the question of whether Congress can draft legislation narrow enough to help victims of fake porn videos without producing undesirable side effects. As a cautionary example, University of Idaho law professor Annemarie Bridy cites the misuse of copyright law, when companies and individuals have acted in bad faith to remove legitimate criticism and other lawful content.

Still, according to Bridy, given what is at stake with fake pornographic videos, a new law is needed now, Fortune wrote in a January 15, 2019 publication.[35]

2018: Artificial intelligence taught to fake people's movements in video

Main article: Artificial intelligence in video

As it became known in June 2018, a new development has appeared in the field of artificial intelligence (AI) that makes it possible to create realistic fake videos.

Tools that simulate a person's lip movements and facial expressions already exist. However, according to the Futurism portal, the new AI-based system represents a significant improvement on them: it can create photorealistic videos in which all the movements and words of the actor in the source video are transferred to the target video.

A new development in the field of artificial intelligence (AI) makes it possible to create realistic fake videos

A public demonstration of the development will take place in August 2018 at the SIGGRAPH computer graphics conference. The creators plan to show its capabilities through experiments comparing the new algorithm with existing tools for creating believable videos and images, many of which were developed in part by Facebook and Google. The AI-based solution reportedly outperforms existing systems: after just a few minutes of work on the source footage, it helps produce a flawless fake video. Participants in the experiments could hardly distinguish the real videos from the fakes.

The developers, who have received financial support from Google, hope their work will be used to improve virtual reality technology and make it more accessible.

2017: Replacing the face of a porn actress with the face of a Hollywood movie star

In December 2017, a porn video allegedly featuring the famous actress Gal Gadot appeared on the Internet. In reality, the video showed the body of a porn actress whose face had been replaced with the Hollywood star's face using artificial intelligence. Read more here.

See also

Notes

  1. Gartner Identifies Five Strategies for Corporate Communications Leaders to Combat Generative AI Reputational Threats
  2. Kaspersky Lab told about the use of AI by hackers
  3. No. 718538-8 "On Amending the Criminal Code of the Russian Federation"
  4. the Criminal Code of the Russian Federation
  5. No. 718834-8 "On Amendments to Part One of the Civil Code of the Russian Federation"
  6. Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA)
  7. Sobbing daughter called her mother via video link from prison: Scammers have learned to fake a voice and video with your child
  8. Central Bank warned that fraudsters have learned to use deepfakes
  9. South Korean woman loses £40k in Elon Musk romance scam involving deepfake video
  10. Central Bank will start a fight against "deepfakes"
  11. Nearly 4,000 celebrities found to be victims of deepfake pornography
  12. Fraudsters began to actively lure out samples of citizens' votes
  13. The Ministry of Digital Development, the Ministry of Internal Affairs, and Roskomnadzor will determine the punishment for deepfakes
  14. "Everyone looked real": multinational firm's Hong Kong office loses HK $200 million after scammers stage deepfake video meeting
  15. The Ministry of Internal Affairs warned of a new scheme of scammers generating the voices of friends in social networks
  16. Extortionists began to use AI to fake voice in Telegram
  17. Union Government issues advisory to social media intermediaries to identify misinformation and deepfakes
  18. Malicious Actors Manipulating Photos and Videos to Create Explicit Content and Sextortion Schemes
  19. Deepfaking it: America's 2024 election collides with AI boom
  20. IRI and Rostelecom presented a study on trends in the development of consumer communication technologies
  21. Deepfake Startups Become a Focus for Venture Capital
  22. China's rules for "deepfakes" to take effect from Jan. 10
  23. The world's first web series using deepfake technology was shot in Russia
  24. The EU intends to fine social networks for failing to remove deepfakes
  25. [https://www.securitylab.ru/news/531760.php With the help of a deepfake, you can impersonate another person in the bank]
  27. Deepfakes can easily fool many Facial Liveness Verification authentication systems
  28. China reveals draft laws that heavily restrict deepfakes
  29. Bank Robbers Used Deepfake Voice for $35 Million Heist
  30. Deepfake voices can trick IoT devices and people after five seconds of training
  31. Manager at energy firm loses £200,000 after fraudsters use AI to impersonate his boss's voice
  32. Virginia bans 'deepfakes' and 'deepnudes' pornography
  33. Samsung uses AI to transform photos into talking head videos
  34. McAfee shows how deepfakes can circumvent cybersecurity
  35. Fake Porn Videos Are Terrorizing Women. Do We Need a Law to Stop Them?