
Deepfakes (DeepFake)

DeepFake (from "deep learning" and "fake") is a method of synthesizing human images based on artificial intelligence. It is used to combine and overlay existing images onto video. Facial recognition systems automatically localize a human face in an image or video and, if necessary, identify the person against available databases. Interest in these systems is very high due to the wide range of tasks they solve.
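The "localization" step mentioned above is the easiest part to see in code. Below is a minimal sketch of face localization, assuming the opencv-python package and its bundled Haar cascade model; identification against a database would be a separate step built on face embeddings.

```python
# Minimal sketch of the face-localization step of a recognition system,
# using the Haar cascade model bundled with opencv-python.
import cv2

def detect_faces(image_path: str):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    # Returns one (x, y, w, h) rectangle per detected face
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(detect_faces("photo.jpg"))  # e.g. [[124  80 215 215]]
```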


2024

A deepfake of Yuri Nikulin will appear in a new film

On March 26, 2024, it became known that one of the characters in the family comedy "Manyunya: Adventures in Moscow" will be the image of Soviet actor and circus performer Yuri Nikulin, created using artificial intelligence technologies. This is the first successful attempt in Russia to recreate the appearance and voice of a late actor using a neural network. Read more here.

The Central Bank of the Russian Federation introduced measures to combat deepfakes

At the end of March 2024, it became known that the Central Bank of Russia intends to update the procedure for reporting fraudulent transfers in online services for financial transactions and the exchange of digital financial assets. This includes, in particular, measures against deepfakes.

It is noted that attackers actively use modern technologies and artificial intelligence tools, for example to imitate a victim's voice in various scams. According to the Bank of Russia, the volume of funds stolen by fraudsters in 2023 reached 15.8 billion rubles, 11.5% more than in 2022. The regulator reports that the surge was partly due to an increase in the volume of payment card transactions.

Central Bank intends to update the procedure for informing about fraudulent transfers in online services

The Bank of Russia, together with representatives of law enforcement and supervisory authorities, is working on expanding the list of information about the actions of clients of credit institutions and money transfer operators that must be recorded and stored. Under the new rules, from June 2024 payment system operators and electronic platforms, including banks, will have to transfer data on stolen customer funds to the financial regulator. Such measures are expected to help prevent crimes and reduce losses.

The Bank of Russia notes that, to improve the security of operations, a list of threats and guidelines have been developed and approved, and the use of the Unified Biometric System is monitored; therefore, "when identifying for payments, the risk of using deepfakes is minimized." The new rules, according to CNews, also establish the procedure for the financial regulator to request and receive information from banks about transactions for which reports of illegal actions have come from the Ministry of Internal Affairs.[1]

4 thousand celebrities worldwide became victims of pornographic deepfakes

In 2023, approximately 4 thousand famous people around the world became victims of pornographic deepfakes. This data was disclosed in mid-March 2024 by Channel 4 News.

An analysis by Channel 4 News of the five most visited deepfake websites found that attackers fabricate material depicting actresses, TV stars, musicians, and bloggers. Of the approximately 4 thousand victims of pornographic deepfakes, 255 are British.

Approximately 4 thousand famous people around the world became victims of pornographic deepfakes

In 2016, researchers discovered a single fake pornographic video on the Internet. In the first three quarters of 2023 alone, almost 144 thousand new deepfake materials were uploaded to the 40 most visited porn sites, more than in all previous years combined. In Britain, the Online Safety Act came into force on January 31, 2024: it prohibits the unauthorized sharing of deepfake materials in the country, although the creation of pornographic deepfakes itself is not prosecuted. Representatives of Ofcom (Office of Communications), the British agency that regulates television and radio companies as well as the postal service, have spoken about the problem.

"Illegal deepfake materials cause significant damage. In accordance with the Online Safety Act, companies will have to assess the risk of distributing such content in their services, take measures to prevent its appearance, and promptly remove such materials," Ofcom said in a statement.

Deepfakes can be used not only to harm specific individuals: such materials also give attackers the opportunity to spread fake news, carry out various fraudulent schemes, and so on.[2]

Russian Prime Minister Mikhail Mishustin instructed the Ministry of Digital Development to create a system for identifying deepfakes

Russian Prime Minister Mikhail Mishustin has instructed the Ministry of Digital Development to create a system for identifying deepfakes. This became known in March 2024.

As Vedomosti writes with reference to a representative of the Ministry of Digital Development, the plan is to develop a single platform capable of identifying inaccurate information generated, among other things, using artificial intelligence technologies. Within the framework of this order, "research work on the indicated topic will be carried out." The Ministry of Digital Development did not specify when the platform is planned to launch or who will develop it.

Mikhail Mishustin instructed the Ministry of Digital Development to create a system for identifying deepfakes

The publication notes that, as of March 2024, deepfakes are identified manually by criminologists and information security specialists who analyze specific videos. Signs of forgery include a wavy pattern that is absent from the subject and arises when one image is superimposed on another, shaking, speech delay, splices, disrupted audio streams, and histogram anomalies.
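Some of these checks lend themselves to automation. Below is a toy version of the "histogram anomalies" check, assuming the opencv-python package; it compares color histograms of consecutive frames and flags abrupt discontinuities that can betray splices or superimposed footage. The 0.5 threshold is an illustrative guess, not a calibrated value.

```python
# Toy splice detector: flag frames whose color histogram differs sharply
# from the previous frame's. A real detector would combine many signals.
import cv2

def histogram_jumps(video_path: str, threshold: float = 0.5):
    cap = cv2.VideoCapture(video_path)
    prev_hist, frame_idx, suspects = None, 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Correlation near 1.0 means similar frames; a sharp drop is suspect
            corr = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if corr < threshold:
                suspects.append(frame_idx)
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return suspects
```

Note that a single histogram jump may simply be a scene cut; real forensic tools weigh many such weak signals together.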

The government believes that the creation of a single platform will contribute to solving crimes. According to experts interviewed by Vedomosti, banks and other financial institutions, media outlets, owners of large Telegram channels, and social networks are interested in such a mechanism.

Watermarks, which the developer of the generation technology itself would have to embed, may be the only option for protection against the illegal use of deepfakes, believes Andrei Komissarov, AI architect at the Samolet Group of Companies and expert at the Artificial Intelligence Alliance. But such marking is unprofitable for the services themselves unless it becomes mandatory, he noted.

According to Alexei Borshchov, product manager at Just AI, the deepfake detection system ordered by the Ministry of Digital Development could be a browser plugin that automatically scans sites for AI-generated content.[3]

Russians lured by ads for paid film voice-over work so that their voice samples can be stolen and used to steal money from their relatives

Russians are being lured by ads for paid film voice-over work so that samples of their voices can be stolen and later used to steal money from their relatives and friends. Angara Security, a company specializing in information security, described the new fraud scheme in early March 2024.

As Vedomosti writes with reference to Angara Security materials, the authors of ads posted on the Internet ask respondents to provide an audio recording in the format of a phone call or a recorded conversation, to be sent via personal message or a bot. For participation in the project they offer a fee of 300 to 5,000 rubles, which may actually be paid to the victim.

Russians are lured by ads for paid film voice-overs so that samples of their voices can be stolen

According to experts, these ads pose no direct threat in themselves, but fraudsters use the collected voice data to train neural networks that generate audio messages. These are later used to extort money from victims' contacts, with the fraudster posing as a relative, colleague, or friend. In addition, swindlers can apply on behalf of the victim to banks where the victim holds an account.

Angara Security notes that such ads most often appear in Telegram channels, although attackers can use other sites as well as spam calls offering to make money on a "big project." The number of such messages, excluding spam calls, was 1,200 in 2021, quadrupled to 4,800 in 2022, and reached 7,000 in 2023, information security experts calculated.

Experts interviewed by Vedomosti also note another potential source of Russian voice samples: fraudsters can extract them from videos published on social networks. There is no need even to hack user accounts, because most video content is publicly available, experts say.[4]

Using a deepfake to rob a bank: when it is possible and how to protect yourself

In February 2024, the Russian media spread information about an alleged case of deepfakes being used to bypass authentication at Tinkoff Bank. The original source suggested that fraudsters, allegedly with the help of a deepfake, were able to withdraw 200 thousand rubles from a user's accounts. TAdviser discussed with experts how great the risk of bypassing bank authentication and stealing funds using deepfakes really is. Read more here.

The Ministry of Digital Development of the Russian Federation takes up the regulation of deepfakes

On February 16, 2024, it became known that the Ministry of Digital Development of the Russian Federation had begun working out the legal regulation of deepfakes - technologies for convincingly impersonating a person using artificial intelligence. The Ministry of Internal Affairs and Roskomnadzor are taking part in the initiative.

Deepfakes can be used by attackers for various purposes. Fraudsters can mimic the voice or image of a particular person, such as a company executive, to withdraw funds or steal sensitive data. The technology is also used to spread disinformation, create unrest in the political arena, and so on.

The Ministry of Digital Development is working on the issue of legal regulation of deepfakes

The Vedomosti newspaper reports that the need to regulate the sphere of deepfakes was discussed at a meeting of the government commission for the prevention of offenses chaired by Interior Minister Vladimir Kolokoltsev in February 2024. The Ministry of Digital Development will have to work out the issue of identifying fakes created using AI. The report on the work done must be submitted to the Commission by November 1, 2024.

In Russia, as of mid-February 2024, the use of deepfakes is not regulated by law. But, as noted by Dmitry Kletochkin, partner at the law firm Rustam Kurmaev and Partners, a criminal case can be opened over acts committed using identity substitution technology: such actions can be qualified as theft by modification of computer information (Article 159.6 of the Criminal Code) or as fraud (Article 159 of the Criminal Code). The identification of deepfakes relies primarily on the manual work of forensic experts who detect voice jitter, speech lag, splices, and other features of audio or video recordings. In the future, this process is expected to be automated with the help of AI algorithms.[5]

The company transferred $25 million to fraudsters after a video conference with employee deepfakes

In early February 2024, details emerged of a major scam using deepfakes in Hong Kong. A large local company with international business transferred $25 million to fraudsters after a fabricated video conference with "employees."

According to the South China Morning Post, an employee of the finance department received a phishing message purporting to come from the company's CFO in Britain. The message instructed him to carry out a secret transaction, and the employee was not initially convinced that the letter was genuine. That is where AI came in to make the fraudulent scheme work.

A major scam using deepfakes took place in Hong Kong

Using deepfake technology, the attackers organized a conference call with deepfake footage of the CFO and other employees to persuade the victim to transfer the money. Seeing his colleagues on video was enough for him to initiate the transaction.

According to the employee who fell victim to the scam, during the call he did not even suspect a trick: all the participants looked natural and talked and behaved like his colleagues. The realization came only after he decided to contact the company's head office to clarify the details of the bank transfer. On the call, the scammers had asked the employee to introduce himself but otherwise did not interact with him and mostly handed out orders; then the meeting suddenly ended.

As of early February 2024, an investigation is underway, with no detainees yet. The police are not disclosing the name of the company or of the victim in the interests of the investigation. It is known that the scammers took the audio and video recordings used to create the deepfakes of the call participants from publicly available sources.

As the Hong Kong police noted, this was the first case of its kind involving such a large amount. According to Baron Chan Shun-ching, Acting Senior Superintendent of the Crime Bureau, in previous cases scammers had deceived victims with one-on-one video calls.[6]

Russia may introduce responsibility for the unauthorized use of deepfakes

Liability may be introduced in Russia for the unauthorized use of people's voices and images, said Alexander Khinshtein, chairman of the State Duma Committee on Information Policy, Information Technology and Communications, during a plenary session. This was announced on January 25, 2024 by the press service of State Duma deputy Anton Nemkin.

During the discussion of amendments aimed at toughening liability for personal data leaks, Oleg Smolin, first deputy chairman of the Duma Committee on Science and Higher Education, raised the issue of the spread of deepfakes - fakes of both voices and images of people created using artificial intelligence technologies.

The deputy proposed, as part of the second reading of the personal data bills, considering amendments that introduce liability for the use of deepfakes for fraud and discrediting.

According to Alexander Khinshtein, the deputies are dealing with this issue. "The problem is quite acute, and together with the relevant departments we are working on the preparation of such an initiative," he noted. "It will not be reflected within the framework of these bills; we have to amend the basic law on personal data."

"So far, deepfakes bring society more harm than good, creating new challenges. Among them, first of all, is the problem of using deepfakes for fraudulent purposes," the deputy said.

"For example, the biometrics of a famous actor can be used by some brand for advertising purposes. Deepfakes are also widely used for entertainment: the Internet is filled with videos in which famous people say phrases that never actually belonged to them. In effect, we face a situation in which a person's face and voice can be used by anyone for their own purposes. This can lead to the humiliation of the honor and dignity of particular people and, probably, to an increase in social tension if deepfakes are used, for example, for political purposes," said the deputy.

The problem of using deepfakes for fraudulent purposes is gradually becoming more pressing, the deputy emphasized.

"Let me remind you that not long ago a fraudulent scheme spread that involved seemingly familiar steps: hacking an account in instant messengers and sending messages to the victim's close circle with a request to lend money. The fraudsters' innovation is sending such messages as audio deepfakes, which is, of course, an almost win-win way to deceive the victim. Imagine: a person close to you directly addresses you - will you refuse the request? The number of such schemes will only grow, so bringing the deepfake problem into the legal field is a necessity," Nemkin said.

"The State Duma is just developing an appropriate regulatory framework. I do not think that including the relevant provisions in related bills will lead to effective results. Here we need to work with the main legislation in the field of personal data," the parliamentarian concluded.

Fraudsters began to fake the voices of Russians with the help of AI and deceive their relatives in instant messengers

Fraudsters have begun to fake the voices of Russians with the help of AI and deceive their relatives and friends in instant messengers. On January 11, 2024, the scheme was described by the department for combating the illegal use of information and communication technologies of the Ministry of Internal Affairs of Russia.

According to the department, the swindlers first hack accounts in Telegram or WhatsApp, for example through fake polls. After that, they download voice messages and generate new ones with the context they need.

Fraudsters began to fake the voices of Russians with the help of AI and deceive their relatives

Next, according to the Ministry of Internal Affairs, the attackers send the generated messages to personal and group chats with a request to lend a certain amount of money, attaching to the voice message photos of a bank card with fake recipient names.

One victim of this scam told RBC that fraudsters send the fake voice message both in personal correspondence and in all chats the account owner belongs to. A photo of a bank card with a name and surname is sent to the same addresses. The publication's interlocutor used a name on social networks that differed from the one in the passport, but the fraudsters used the passport data. In this case the scammers asked for 200,000 rubles. One VKontakte user lost 3,000 rubles in a similar way.

F.A.C.C.T. called the scheme new and "quite advanced" for Russia. According to the company's experts, at the first stage a Telegram or WhatsApp account is hacked, for example through fake polls, a wave of which was observed at the end of 2023. The scammers then download saved voice messages and use AI services to create new ones with the necessary context.

Irina Zinovkina, head of the research group at Positive Technologies, questioned the effectiveness of such fraud, since not all users send voice messages, and it is not always possible to splice the necessary phrase together from existing material.[7][8]

2023

How cybercriminals build up a base for creating audio and video deepfakes

According to a study by Angara Security, in 2023 the number of requests in instant messengers, social networks, and community sites offering paid voice-overs for "advertising" and "films" grew by 45% compared to 2022 (about 7,000 messages were recorded). Analysts conclude that the trend toward collecting audio data took shape in 2022, when the number of such requests quadrupled relative to 2021 (about 4,800 messages versus 1,200). The company announced this on March 1, 2024.

Most of the ads are posted on Telegram, but other resources are also used, such as Habr, or spam calls offering to make money on a "big project." The authors of such messages ask for names or set the condition that the recorded audio file resemble a phone call. A fee of 300 to 5,000 rubles is offered for participation. Angara Security analysts conclude that the collected voice data lets cybercriminals refine the tactics of phishing attacks on individuals and businesses with audio and video clips.

"If the accounts are closed, cybercriminals can resort to account theft or a technically simpler method, social engineering, to gain trust. Obtaining source data for video and audio fakes is therefore much more accessible than it seems," said Alina Andrukh, incident response specialist at Angara Security.

Since the beginning of 2024, fraudulent schemes combining social engineering and deepfake techniques have been recorded in Russia. The goal of such an attack is to extract money from company employees who receive messages from a fake manager's Telegram account.

For example, in January a similar technique was used against one company. First, several Telegram user accounts were stolen, then audio files (voice messages) were obtained. This data was used to generate fake recordings in which the fraudsters, on behalf of the account owner, extorted money from users who shared various chats and working groups with him.

"We expect this kind of attack to only gain momentum as AI technologies develop. It is therefore extremely important to develop methods for recognizing fake materials and to resolve the issue at the legislative level in order to reduce cybersecurity risks for ordinary users of digital services and for business," Alina Andrukh continued.

An important step toward regulating deepfake materials has been taken in Russia: the government ordered that ways to regulate the use of the technology be developed by March 19, 2024. In 2023, one approach was already proposed for distinguishing real content from AI-generated content by placing a special stamp on the object, although this method is quite difficult to implement and control.

New tools are being developed to identify traces of AI work, including in audio and video clips - for example, the Russian project "Zephyr," presented last summer, which can detect artificially created audio and video clips with high probability. New tools and developments will make it easier to identify such materials in the near future.

Angara Security recommends verifying a person's identity by asking additional questions: if you receive an audio or video call or a message with suspicious content, check the identity of the interlocutor by asking clarifying questions with details that cyber fraudsters are unlikely to know, or simply contact the person directly by email or by a number from the contact list on your phone.

You need to pay attention to speech and external features:

Pay attention to the interlocutor's hands in the video, since they most often "suffer" during content generation: fingers are added, removed, or fused together. Note that attackers take this into account and, to keep the fake from being recognized, prefer portrait framing during communication.

Watch facial expressions and how often they change. Most often, a generated model maintains a single rate of head movements, blink frequency, or repetition of the same motions over a period of time (see the blink-rate sketch after this list).

Check facial features as well. For example, hair may be borrowed from the "donor" video and not correspond to reality, or may be smeared where one face is superimposed on another. If you know the interlocutor in real life, you can compare moles, scars, and tattoos, if they are characteristic of the contact.

Also pay attention to the voice (how realistic it sounds) and compare lip movements with the audio track. Despite the development of the technology, this remains one of the key points in recognizing fake materials.
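The blink-frequency cue above can be quantified. Here is a hypothetical sketch that counts blinks via the eye aspect ratio (EAR), assuming the opencv-python and mediapipe packages; the landmark indices are the set commonly used for the left eye in MediaPipe Face Mesh examples, and the 0.21 threshold is an illustrative default.

```python
# Hypothetical blink counter using the eye aspect ratio (EAR):
# EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the eye closes.
import cv2
import mediapipe as mp
import numpy as np

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # p1..p6 around the left eye
EAR_THRESHOLD = 0.21                       # below this the eye counts as closed

def eye_aspect_ratio(pts: np.ndarray) -> float:
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    h = np.linalg.norm(pts[0] - pts[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(video_path: str) -> int:
    blinks, closed = 0, False
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.face_mesh.FaceMesh(refine_landmarks=True) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not res.multi_face_landmarks:
                continue
            lm = res.multi_face_landmarks[0].landmark
            h, w = frame.shape[:2]
            pts = np.array([(lm[i].x * w, lm[i].y * h) for i in LEFT_EYE])
            if eye_aspect_ratio(pts) < EAR_THRESHOLD:
                closed = True
            elif closed:        # eye reopened: one full blink
                blinks += 1
                closed = False
    cap.release()
    return blinks
```

Humans blink roughly 15-20 times per minute; a rate far outside that range in a supposedly live call is one more reason for suspicion.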

To prevent manipulation of video footage of public individuals, such as company executives, companies can use both commercial deepfake recognition products and open-source ones. Another option is technologies that apply a filter invisible to the human eye to videos in the public domain (for example, recordings of speeches by top managers that companies share in open sources); this filter distorts the output when someone tries to generate fake content from the video.
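Such an "invisible filter" is usually an adversarial perturbation. Below is a minimal FGSM-style sketch of the idea, assuming PyTorch and torchvision; the pretrained ResNet is a stand-in placeholder, since real protection tools target the encoders of face-swap models.

```python
# FGSM-style sketch: add a small, visually imperceptible perturbation
# that pushes the image off the manifold a downstream model expects.
# The pretrained ResNet is only a stand-in for a face-processing network.
import torch
import torch.nn.functional as F
import torchvision.models as models

def invisible_filter(image: torch.Tensor, epsilon: float = 2 / 255) -> torch.Tensor:
    """image: float tensor of shape (1, 3, H, W), values in [0, 1]."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    image = image.detach().clone().requires_grad_(True)
    out = model(image)
    # Untargeted FGSM: ascend the loss w.r.t. the model's own prediction
    loss = F.cross_entropy(out, out.argmax(dim=1))
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

With epsilon around 2/255 the change is invisible to a viewer, but a model consuming the frame sees a noticeably different input.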

Regular information campaigns and training in how to identify fakes, which cybercriminals spread via instant messengers, corporate mail, and other communication channels, are also needed.

Cyber fraudsters use AI and deepfakes to lure data or money from users

Cyber fraudsters are actively using artificial intelligence and deepfakes to impersonate other people in instant messengers and social networks. Any messages with strange requests, even those received from relatives or friends, should therefore be treated critically.

"Deepfakes can be used to create fake video or audio recordings in which attackers impersonate another person: a relative, friend, or colleague. Having recorded a voice request to lend money, fraudsters are more likely to obtain financial gain or convince the user to give access to personal data," warned Konstantin Shulenin, network threat expert at Security Code, in an interview with Lenta.ru.

Neural networks are also actively used: they help scammers automate phishing campaigns and put them on stream, reaching more "victims," and they make the phishing emails themselves more realistic, the press service of State Duma deputy Anton Nemkin told TAdviser on December 29, 2023.

In addition, employees of various companies and organizations who receive emails from "management" and "colleagues" are frequent targets of cybercriminals. Toward the end of the year and on the eve of the holidays, people's vigilance decreases, and the likelihood of fraudsters getting into a system grows accordingly. Many also go on long vacations and so cannot detect suspicious account activity in time.

One of the main reasons fraudsters can build their attacks so masterfully is the excess of information about citizens on the network, said Anton Nemkin, member of the State Duma Committee on Information Policy, Information Technology and Communications and deputy of the United Russia faction.

"Any information is easy to use to personalize an attack and make it more convincing, and therefore more harmful. I recommend always keeping a cool head and never neglecting to verify any information that reaches you online. Even if you already believe the message you received, it will not hurt to try to contact the person on whose behalf you were contacted. During the holiday season there are about a quarter more traps online than usual. It's not worth the risk," he said.

The quality of deepfakes has grown significantly over the past few years, but there are still a number of signs by which you can identify a "fake," Anton Nemkin is sure.

"We live in an age of active development of generative machine learning models, and deepfake technology in particular. The human eye can no longer always distinguish the real from the unreal, so special detection technologies exist to analyze potential visual fakes. For users, a universal instruction might be: pay attention to eye movement, skin and hair color, and the contour of the oval of the face - they are often blurred or strange," the parliamentarian explained. "In the case of voice fakes, always carefully evaluate intonation and clarity of speech. And, of course, be critical in general of any online requests that concern your personal data or finances."

A program is presented that creates realistic videos from one photo and audio recording

On November 16, 2023, Singaporean researchers from the School of Computer Science and Engineering at Nanyang Technological University announced an artificial intelligence-based program that can generate video from a single photo and an audio recording. The system, called DIRFA, is capable of reproducing the facial expressions and head movements of a talking person. Read more here.

Child psychiatrist in the United States received 40 years in prison for creating pornographic deepfakes

On November 8, 2023, the US Department of Justice announced that David Tatum, a child psychiatrist from Charlotte (North Carolina), was sentenced to 40 years in prison for the production, possession, and transfer of child sexual abuse material. He is charged, in particular, with creating pornographic deepfakes - images generated using artificial intelligence. Read more here.

Indian authorities ordered social networks to remove deepfakes

On November 7, 2023, India's Ministry of Electronics and Information Technology (MeitY) released a document requiring operators of large social networks to remove deepfakes from their platforms within 36 hours of receiving a notification or complaint about the publication of such content.

The ministry notes that deepfakes - falsified video materials, photographs, or audio recordings created using artificial intelligence technologies - can cause serious damage to citizens, primarily women. As an example, it cites a widely discussed case in which a video allegedly showing Indian actress Rashmika Mandanna appeared on social networks. The video, created using AI algorithms, quickly gained a huge number of views, and Mandanna was forced to state publicly that the woman in it was not her.

India's Electronics and Information Technology Ministry requires major social media operators to remove deepfakes

Given the serious problems associated with disinformation and deepfakes, MeitY issued its second recommendation in six months (as of November 2023) calling on online platforms to take decisive measures against the distribution of such materials. The ministry emphasizes that, under rules in force in the country since 2021, Internet resources are obliged to prevent the dissemination of falsified information by any users. Failure to comply entitles affected persons to go to court under the provisions of the Indian Penal Code.

"Our government takes very seriously its duty to guarantee the safety and trust of all citizens, particularly the children and women against whom such content is used. It is imperative that online platforms take active measures to combat this threat," MeitY said in a statement.[9]

Paedophiles use AI to create childhood photos of stars and make up stories with them

Paedophiles are actively using generative artificial intelligence systems to "rejuvenate" celebrities in photos and create sexual images of them as children. In addition, thousands of AI images showing child abuse have been found on the Internet. This is stated in a report by the British non-governmental organization Internet Watch Foundation (IWF), published on October 25, 2023. Read more here.

FBI: Deepfakes used in sex extortion scam

Network attackers have begun using deepfakes to generate sexually explicit content for blackmail. The warning came on June 5, 2023 from the Internet Crime Complaint Center (IC3) of the US Federal Bureau of Investigation.

Cybercriminals are said to use AI technologies and services to alter photos or videos of the victim. Source materials can be taken, for example, from social network profiles or requested from the user under some pretext. The resulting sexual deepfakes can then be used for extortion or to damage the victim's reputation. Fake images and videos presenting a person in an unsightly light are often published on forums or pornographic sites.

Cybercriminals use AI technology and services to change photos or videos involving a victim

By April 2023, a sharp increase in cases of sexual extortion using deepfake photos or fake videos had been recorded. Attackers usually demand a ransom, threatening to distribute the materials on the Internet or send them to the victim's relatives or colleagues. Sometimes criminals pursue other goals, in particular demanding information.

The FBI urges caution when posting personal photos, videos, and identifying information on social media, dating apps, and other online platforms. Although such materials seem harmless, they can give attackers plenty of opportunities to organize fraudulent schemes. New generative AI systems make it easier to carry out personalized attacks using fake images or videos based on real content. Moreover, victims of sexual extortion may, beyond financial losses, find themselves in a very difficult position.[10]

The number of deepfakes worldwide has grown several times since the beginning of the year

The total number of deepfakes around the world in the first months of 2023 increased several times compared to the same period of 2022, according to a DeepMedia study whose results were disclosed on May 30, 2023.

The explosive growth in the number of fakes on a global scale is explained by the sharply reduced cost of creating such audio and video materials. If an accurate voice simulation previously required about $10 thousand, taking into account server equipment and artificial intelligence algorithms, by early May 2023 the cost had fallen to just a few dollars. This is due to the emergence of a new generation of generative AI models and more powerful hardware platforms designed specifically with neural networks and machine learning in mind.

The explosive increase in the number of fakes on a global scale is due to the sharply reduced cost of creating such audio and video materials

According to DeepMedia estimates, from January to May 2023, three times more fake videos of all types and eight times more voice deepfakes were posted on the Internet than in the same period of 2022. AI technologies can be used to spread false statements on behalf of politicians and well-known public figures, which can provoke serious public unrest and conflict. Although large social platforms like YouTube and Facebook (recognized as an extremist organization; its activities are prohibited on the territory of the Russian Federation) are introducing algorithms to combat deepfakes, the effectiveness of such tools is not yet high enough.

Leading AI developers, such as OpenAI, are said to embed special functions in their services that prevent generating content featuring public figures. But small startups often neglect such measures. An industry-wide solution for identifying AI-created materials is already being discussed: it could take the form of, for example, special digital tags.[11]

In China, a fraudster used a fake video call to deceive a businessman for $610,000

In May 2023, it became known that a fraudster in China used artificial intelligence to impersonate a friend of a businessman named Guo and convince him to transfer $610,000.

Guo received a video call from a man who looked and sounded like a close friend. The caller was actually a fraudster "using technology to change his face" and voice. Guo was persuaded to transfer 4.3 million yuan after the fraudster claimed that another friend needed money from a company bank account to pay a guarantee deposit in a tender.

A deepfake system is presented that makes the user look directly at the camera

On January 12, 2023, Nvidia announced the Maxine Eye Contact system - a deepfake technology that maintains constant eye contact for users during video conferencing sessions. Read more here.

2022

Investments in deepfake startups have skyrocketed in the world

In 2022, venture capital funds worldwide invested approximately $187.7 million in startups specializing in deepfake technologies. For comparison, in 2017 investments in the area were estimated at only $1 million. These figures come from a PitchBook study whose results were released on May 17, 2023.

In the first months of 2023, financial injections into deepfake startups reached $50 million. The largest recipient of venture money over the preceding five years (as of early 2023) was the New York company Runway, which, among other things, is developing an artificial intelligence-based tool capable of generating short videos from text descriptions. The firm raised at least $100 million and was valued at $1.5 billion.

Venture capital funds worldwide invested approximately $187.7 million in startups specializing in deepfake technologies

Meanwhile, the London company Synthesia, which is developing a platform for creating realistic virtual characters based on video and audio recordings, received $50 million for development from a number of investors, including Kleiner Perkins. The Israeli startup Deepdub, which develops AI-based audiovisual dubbing and language localization technology, raised $20 million. The deepfake visual effects studio Deep Voodoo received the same amount.

Along with the advent of new and more realistic deepfake tools, the market for specialized fake detection tools is developing rapidly. According to calculations by the research firm HSRC, in 2022 this segment was worth approximately $3.86 billion, and a compound annual growth rate (CAGR) of 42% is expected through 2026. Deepfake detection tools are needed to prevent disinformation in the media, counter various fraudulent schemes on the Internet, and so on.[12]

In the US, scammers stole $11 million with deepfakes imitating someone else's voice

In 2022, fraudsters using artificial intelligence models to accurately imitate human voices (deepfakes) stole about $11 million from victims in the United States alone. These data are contained in a report by the Federal Trade Commission (FTC) published on February 23, 2023. Read more here.

Cloud and Pyaterochka created a commercial using deepfake technology

Cloud (Cloud Technologies LLC), with the support of the AIRI Artificial Intelligence Institute and together with the Pyaterochka retail chain, created a commercial using DeepFake technology, Cloud announced on December 22, 2022. A model trained by AIRI specialists on the Cloud ML Space cloud platform became the basis of the digital image of actress Olga Medynich, who was not even present on set. Read more here.

Chinese regulator publishes rules to protect citizens from deepfakes

On December 12, 2022, it became known that the Cyberspace Administration of China (CAC) is introducing new rules for content providers that alter users' face and voice data.

On January 10, 2023, norms governing so-called deepfake technologies and services will come into force. Deepfake is an image synthesis technique based on artificial intelligence in which synthesized footage is superimposed on source material. In the vast majority of cases, such videos are created with generative adversarial networks: one part of the algorithm learns from real photographs of a specific subject and generates images, literally "competing" with the second part of the program until that part begins to confuse the copy with the original. As a result, the resulting images are almost indistinguishable from the originals and can be used for manipulation or disinformation.
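A minimal sketch of the adversarial training loop described above, assuming PyTorch; the tiny fully connected networks and random "real" vectors stand in for the convolutional generators and face datasets used in practice.

```python
# Minimal GAN training sketch: a generator learns to produce samples that
# a discriminator cannot tell apart from real data. Tiny MLPs stand in
# for the convolutional networks used in real face-synthesis systems.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for a batch of real images
    fake = G(torch.randn(32, latent_dim))

    # Discriminator: label real samples 1, generated samples 0
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into labeling fakes as real
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At equilibrium the discriminator can do no better than guessing, which is exactly the point where generated samples become hard to tell from real ones.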

Chinese regulator publishes rules to protect citizens from deepfakes

The CAC ruling provides for protecting people from possible fraudulent activities involving deepfakes, such as passing oneself off as another person. The document also addresses the use of deepfakes by online publishers, who must take into account China's myriad other rules on acceptable online content.

At the same time, China expects synthetic images of people to be widely used in applications such as chatbots. In such scenarios, deepfakes must be labeled as "digital creations." The rules also spell out how the creators of deepfakes, called "deep synthesis service providers," should ensure that their artificial intelligence and machine learning models and algorithms are as accurate as possible. In addition, the security of the collected personal data must be ensured.[13]

Intel introduced deepfake recognition technology

Intel has introduced deepfake recognition technology. This became known on November 17, 2022. Read more here.

Roskomnadzor creates a system for checking videos for lies and searching for deepfakes

In early November 2022, it became known about the creation of the Expert service, which will allow checking video recordings of speeches for lies and manipulation. The technology is being developed by specialists of the ITMO National Center for Cognitive Development for the Main Radio Frequency Center (GRCC), subordinate to Roskomnadzor. Read more here.

The world's first web series using Deepfake technology was shot in Russia

As it became known on October 10, 2022, the Russian company Agenda Media Group created the world's first web series using Deepfake technology; the main character of the parody comedy is the image of British actor Jason Statham. The project was created with the support of the Institute for Internet Development.

Alexey Parfun, CEO of Agenda Media Group, told TASS that using actors' images with this technology is not legally prohibited, but a deepfake must not harm a person's honor, dignity, or business reputation, or disclose personal information.

A snippet of a series filmed using Deepfake technology
"This is an ironic view of Russian life by a foreigner in the person of Statham," said Parfun.

According to him, the project shows the character's attempts to understand Russian life and integrate into it. According to the plot, the main character came to Russia for a film shoot and stayed to live there.

The events of the series take place in 2027. After five years of filming in Russia, Statham has stayed to live in a Russian village. For his 60th birthday, friends come to visit him: Reeves, Robbie, and Pattinson. Their images were created using deepfake technology, in which artificial intelligence produces synthetic content where one person's face is replaced by another in photo, video, or audio. The series stars Yulia Bashorina, Andrei Deryugin, and Andrei Korotkov.

According to Agenda Media Group production director Maria Artemova, the entire project was implemented in three months, including scriptwriting and post-production. The team had to control certain features of the shooting minute by minute, she noted.

"For example, deepfake shooting does not allow close-ups, and there are certain requirements for lighting, optics, cameras, and the actors' positions in the frame. This consumes a lot of time and takes. In addition, there were significant limits on the amplitude of the actors' movements, which also caused some difficulties," she added.[14]

The EU intends to fine social networks for non-removal of deepfakes

The EU intends to fine social networks for non-removal of deepfakes. This became known on June 14, 2022.

Google, Twitter, and other tech companies will have to take action against deepfakes and fake accounts or face hefty fines of up to 6% of their global turnover, according to the updated European Union code of practice.

The creation of deepfakes is possible thanks to neural networks that can simulate people's faces and voices from photos and audio recordings.

The European Commission intends to publish a regulatory document on disinformation by the end of June 2022. It contains examples of manipulative behavior; the signatories will be obliged to fight fake accounts, disinformation advertising, and deepfakes, and will also have to ensure greater transparency in political advertising. Companies that sign the document are expected, within six months, to adopt and implement "a policy regarding unacceptable manipulative behavior and practice in their services, based on the latest data on behavior and tactics, methods and procedures used by cybercriminals."[15]

With the help of a deepfake, you can impersonate another person in the bank

On May 19, 2022, it became known that a deepfake can be used to impersonate another person at a bank.

Deepfake technology allows you to bypass the facial recognition system.

Sensity, a company specializing in detecting attacks that use deepfake technology, investigated the vulnerability of 10 identification services. It used deepfakes to superimpose the user's face onto an ID card for scanning and then copied the same face into the attacker's video stream for identification.

A liveness test usually asks the user to look into the device's camera, sometimes turning the head or smiling, and compares the user's appearance with their identity card using facial recognition technology. In the financial sector, such verification is known as "Know Your Customer" (KYC) and is part of document and account checks.

"We tested 10 services and found that 9 of them are vulnerable to deepfakes," said Sensity Chief Operating Officer Francesco Cavalli. "There is a new generation of AI that could pose a serious threat to companies. Imagine what you can do with a fake account created with a deepfake. And no one will be able to detect the fake."

Cavalli was disappointed with the reaction of the services, which considered the vulnerability insignificant.

"We informed the vendors that their services are vulnerable to deepfake attacks. The developers ignored the danger. We decided to publish the report, as the public should be aware of these threats," the researcher added.

Suppliers sell liveness tests to banks, dating apps, and cryptocurrency projects. One service was even used to verify voters' identities in elections in Africa (although there is no indication in the Sensity report that that vote was compromised by deepfakes).

Deepfake technology poses a great danger to the banking system, where a deepfake can be used for fraud.

"I can create an account, transfer stolen money to a bank account, or take out a mortgage, because online lending companies compete with each other on the speed of issuing loans," the expert added.

An attacker can easily intercept a video stream from a phone camera and use deepfake technology for malicious purposes. However, the Face ID facial recognition system cannot be bypassed this way: Apple's identification system uses depth sensors and verifies identity based not only on appearance but also on the physical shape of the face.[16][17]

Deepfakes can easily fool many Facial Liveness Verification authentication systems

On March 3, 2022, it became known that some deepfake detection modules are tuned to outdated techniques. A team of researchers from the University of Pennsylvania (USA) and Zhejiang and Shandong Universities (China) studied the susceptibility of face-based authentication systems to deepfakes. The results showed that most systems are vulnerable to evolving forms of deepfakes.

Deepfakes deceive recognition systems

The study carried out deepfake-based attacks using a dedicated platform against Facial Liveness Verification (FLV) systems, which are supplied by large vendors and sold as a service to downstream customers such as airlines and insurance companies.

Facial liveness verification is designed to repel methods such as image attacks, the use of masks and pre-recorded video, so-called "master faces," and other forms of cloned visual identification.

The study concludes that the limited set of deepfake detection modules in such systems may be tuned to outdated techniques or may be too architecture-specific. The experts note that even when processed videos look unrealistic to people, they can still bypass current deepfake detection mechanisms with a very high probability of success.

Another finding was that the current configuration of common facial verification systems is biased toward white men: the faces of women and minorities of color proved more effective at circumventing the verification systems, putting clients in these categories at greater risk of being attacked through deepfake-based methods.

The authors propose a number of recommendations for improving the current state of FLV: abandoning single-image authentication ("image-based FLV"), in which authentication is based on a single frame from the client's camera; updating deepfake detection systems more flexibly and comprehensively across image and voice domains; requiring that voice authentication in user video be synchronized with lip movements (which, as a rule, it is not); and requiring users to perform gestures and movements that are difficult for deepfake systems to reproduce (for example, profile views and partial face darkening), as in the sketch below.[18]
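Here is a hypothetical sketch of that last recommendation, a randomized challenge-response liveness flow; the gesture names and the stubbed analyzer are illustrative, not any vendor's API.

```python
# Hypothetical challenge-response liveness flow: the server issues random
# gestures that are hard for a live deepfake pipeline to render on demand,
# and a (stubbed) analyzer checks each one in the returned video segment.
import random
import secrets

CHALLENGES = [
    "turn_head_left_profile",   # full profile view
    "turn_head_right_profile",
    "cover_half_of_face",       # partial face occlusion
    "look_up_then_blink_twice",
]

def issue_challenge_sequence(n: int = 3) -> tuple[str, list[str]]:
    """Issue a session token plus n random gestures in a fixed order."""
    token = secrets.token_hex(8)
    return token, random.sample(CHALLENGES, k=n)

def verify_session(video_segments: list[bytes], gestures: list[str]) -> bool:
    # In a real system each check would be a vision model; here it's a stub.
    def gesture_detected(segment: bytes, gesture: str) -> bool:
        raise NotImplementedError("plug in a pose/landmark model")
    return all(gesture_detected(seg, g) for seg, g in zip(video_segments, gestures))

token, gestures = issue_challenge_sequence()
print(token, gestures)  # e.g. 'a1b2c3...' ['cover_half_of_face', ...]
```

The point of randomizing the sequence is that an attacker cannot pre-record the required footage, and rendering occlusions and profile views live is exactly what current face-swap pipelines handle worst.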

Deepfakes banned in China

At the end of January 2022, it became known that a bill banning the use of deepfakes was to be adopted in China. The technology consists in the AI synthesis of human images: the algorithm combines several photos of a person with different facial expressions and makes a video from them, analyzing a large number of images to learn how a particular person looks and moves.

The initiative, developed by the Cyberspace Administration of China, explains the need for regulation by the government's desire to ensure that the Internet remains a tool for good. The explanatory note to the law says that digitally created voices, videos, chatbots, and manipulated faces or gestures will attract criminals and fraudsters.

Deepfakes banned in China

The bill prohibits the use of such fakes in any application that could disrupt social order, infringe citizens' rights, spread fake news, or portray sexual activity. The draft also proposes requiring permission before what China calls "deep synthesis" is used even for legitimate purposes. Exactly which legitimate purposes qualify is not specified, but the draft contains extensive provisions on how digital assets should be protected so as not to violate user privacy.

For deep synthesis content, the draft proposes a requirement to label it as a digital creation in order to remove any doubts about authenticity and origin. The draft also sets out requirements for service providers to implement security practices and always act in China's national interest.[19]

2021

Fraudsters stole $35 million from a UAE bank with the help of a deepfake voice

In mid-October 2021, it became known that criminals stole a huge sum, $35 million, from a bank in the UAE by imitating the voice of a company director using advanced artificial intelligence. They reportedly used the deepfake to mimic a legitimate commercial transaction linked to the bank.

Forbes reported that a deepfake voice was used to trick a bank employee into thinking he was transferring money as part of a legitimate operation. The story began to be discussed after the publication of court materials relating to January 2021, when the manager of an unnamed bank received a seemingly ordinary phone call.

In the UAE, scammers used a deepfake voice to deceive a bank employee and steal $35 million


According to court filings, the person on the other end of the line claimed to be a director of a large company with whom the manager had previously spoken, and their voices sounded identical. This was supported by genuine-looking emails from the company and its lawyer, which convinced the branch manager that he was talking to the director and that the company was indeed in the middle of a large commercial transaction worth $35 million.

Following the caller's instructions, he made several large money transfers from the company to a new account. Unfortunately, it was all a sophisticated scam.

Investigators from Dubai found that the scammers used "deep voice" technology to mimic the voice of the head of a large company. Police concluded that up to 17 people were involved in the scheme and that the stolen money was transferred to several different bank accounts scattered around the world.

Two of the accounts were registered in the United States, at Centennial Bank; they received $400 thousand. UAE investigators have already reached out to US officials for help with the investigation.

This is not the first time fraudsters have pulled off a major scam using voice imitation. In 2019, an energy company in the UK lost $243 thousand after a person pretending to be the company's CEO contacted an employee.

Attackers are increasingly using the latest technologies to manipulate people who are unaware of their existence, according to Jake Moore, a cybersecurity expert at ESET.

According to experts watching the artificial intelligence market, this will not be the last such case.[20]

Deepfake voices can fool IoT devices and people after five seconds of learning

Deepfake voices can fool IoT devices and people after five seconds of learning. This became known on October 14, 2021.

The deepfake could trick Microsoft Azure about 30% of the time and successfully tricked WeChat and Amazon Alexa 63% of the time.

Researchers from the Security, Algorithms, Networking and Data (SAND) laboratory at the University of Chicago tested open-source deepfake voice synthesis programs available on GitHub to find out whether they could bypass the voice recognition systems in Amazon Alexa, WeChat, and Microsoft Azure.

According to the developers of SV2TTS, the program needs only five seconds of audio to create an acceptable imitation.

The program could also fool human ears: of the 200 volunteers asked to identify real voices among the deepfakes, about half of the answers were wrong.

Deepfake audio was more successful at imitating female voices and the voices of non-native English speakers.

"We found that both humans and computers can be easily deceived by synthetic speech, and that existing protections against synthesized speech do not work," the researchers told NewScientist.

The experts also tested another voice synthesis program, AutoVC, which requires five minutes of speech to recreate a voice. AutoVC managed to trick Microsoft Azure only 15% of the time, so the researchers declined to test it against WeChat and Alexa.[21]
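For context, SV2TTS-style voice cloning is a three-stage pipeline: a speaker encoder distills a short reference recording into an embedding, a synthesizer turns text plus that embedding into a mel spectrogram, and a vocoder renders audio. The stubs below are a hypothetical schematic of that flow, not the API of any particular repository.

```python
# Schematic of the three-stage SV2TTS pipeline (speaker encoder ->
# synthesizer -> vocoder). All function bodies are hypothetical stubs.
import numpy as np

def speaker_encoder(reference_wav: np.ndarray) -> np.ndarray:
    """~5 s of reference speech -> fixed-size speaker embedding."""
    ...

def synthesizer(text: str, speaker_embedding: np.ndarray) -> np.ndarray:
    """Text + embedding -> mel spectrogram in the target voice."""
    ...

def vocoder(mel_spectrogram: np.ndarray) -> np.ndarray:
    """Mel spectrogram -> raw audio waveform."""
    ...

def clone_voice(reference_wav: np.ndarray, text: str) -> np.ndarray:
    embedding = speaker_encoder(reference_wav)   # identity, from ~5 s of audio
    mel = synthesizer(text, embedding)           # content in that identity
    return vocoder(mel)                          # audible waveform
```

The five-second figure cited above comes from the first stage: the speaker embedding needs only a short reference clip, while the text being spoken can be arbitrary.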

Ernst & Young started using video deepfakes of employees to communicate with customers instead of face-to-face meetings

In mid-August 2021, it became known that Ernst & Young (EY) began using video deepfakes of employees to communicate with clients instead of face-to-face meetings. To do this, the firm uses technology provided by the British startup Synthesia. Read more here.

The Ministry of Internal Affairs has taken up the fake video recognition system

In early May 2021, it became known about a contract concluded by the Ministry of Internal Affairs of Russia with the Moscow scientific and industrial company High Technologies and Strategic Systems JSC. It concerns research under the code "Mirror" ("Camel"), designed to identify fabricated videos (deepfakes). The research is needed by forensic units for video technical examinations. Read more here.

Fraudsters in China deceived the state facial recognition system for $76.2 million with the help of deepfakes

Fraudsters in China used deepfakes to deceive the state facial recognition system, making $76.2 million. To deceive it, the scammers bought high-quality photos and fake personal data on the black market, starting from $5. The fraudsters, Wu and Zhou, processed the purchased photos in deepfake applications that can "animate" an uploaded picture and turn it into a video, giving the impression that the face nods, blinks, moves, and opens its mouth. Such applications can be downloaded for free.

For the next stage, the scammers bought specially reflashed smartphones: during face recognition, the front camera of such a device does not turn on; instead, the system receives a pre-prepared video and treats it as a live camera feed. Such phones cost approximately $250.

Using this scheme, the fraudsters registered a shell company that issued fake tax invoices to its clients. Over two years, they earned $76.2 million.

Biometrics are widespread in China: they are used to confirm payments and purchases, verify identity when applying for public services, and so on. But alongside the development of the technology, data protection has become one of its main problems.
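The weakness the scammers exploited is architectural: the verification backend trusts whatever frames the client device sends and only checks that the face in them matches the reference photo, with nothing proving the frames came from a live camera. A minimal sketch of such a vulnerable check (the embed_face function and the threshold are illustrative assumptions, not any real vendor's system):

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify_identity(frames, reference_embedding, embed_face, threshold=0.6) -> bool:
        """Naive server-side check: accept any frames containing a matching face.
        Nothing here proves the frames came from a live camera, so a
        pre-rendered deepfake video injected by a reflashed phone passes."""
        scores = [cosine_similarity(embed_face(f), reference_embedding) for f in frames]
        return float(np.mean(scores)) >= threshold

A more robust pipeline would add an unpredictable challenge (for example, asking the user to turn their head at a random moment) and hardware attestation of the camera feed, so that a video prepared in advance cannot anticipate the check.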

2020

Facebook researchers call NtechLab's deepfake recognition algorithm the most attack-resistant

On December 11, 2020, NtechLab, a developer of neural network-based video analytics solutions and a technology partner of the Rostec state corporation, announced that Facebook researchers had named the company's deepfake recognition algorithm the most resistant to deception by so-called adversarial attacks. Read more here.

Telegram bots based on DeepNude appear, creating fake "porn photos" for blackmail

At the end of October 2020, a network of deepfake bots was discovered in Telegram that generate fake "porn photos" on request. Users submit photos of women they know, taken from social networks; the bots paste the faces into explicit images, which are then distributed in public channels and chats or used for blackmail. Read more here.

Samsung introduced digital people that are not distinguishable from real ones

In early January 2020, Samsung presented a project of "artificial people" called Neon. It was developed by Samsung Technology and Advanced Research Labs (STAR Labs). Read more here.

2019

Zao App Release

In early September 2019, it became known about the release of the Zao application, which, using artificial intelligence, allows you to insert the user's face into a scene from the film. Read more here.

Criminals steal $243,000 from energy company using AI to fake a voice

In early September 2019, criminals swindled $243,000 from a British energy company by using artificial intelligence to fake the voice of an executive.

The general manager of the unnamed energy company thought he was on the phone with his boss, an executive at the German parent company. The "boss" asked him to send funds to a Hungarian supplier. According to Euler Hermes Group SA, the caller said the request was very urgent and asked the manager to transfer the money within an hour. Euler Hermes declined to name the victim company.

Criminals swindled $243,000 from a British energy company.

Rüdiger Kirsch, a fraud expert at the insurance company Euler Hermes, said that the defrauded manager recognized his boss's slight German accent and general tone of voice over the phone. Immediately after the call, the manager transferred the money to the Hungarian supplier and contacted his boss again to report the completed task. Soon the head of the company and his subordinate realized that they had fallen victim to fraud and turned to the police.

Jake Moore, a cybersecurity specialist at ESET, said at the end of August 2019 that we will face a monstrous increase in such cybercrime in the near future. DeepFake technology can insert the face of a celebrity or other public figure into any video, but creating a convincing image requires at least 17 hours of footage of that person. Faking a voice takes much less material, and as computing power grows, such fakes become ever easier to create.

To reduce the risks, Moore notes, it is necessary not only to inform people that such imitations are possible, but also to introduce special verification steps before transferring money, such as two-factor confirmation of payment requests.[22]

In the United States, prison terms introduced for distributing porn with face substitution

In early July 2019, Virginia passed a law prohibiting the distribution of face-swapped pornography, becoming the first US state to adopt such a measure.

In the United States and other countries, so-called revenge porn is widespread: posting sexually explicit materials on the Internet without the consent of the person depicted. As a rule, such materials are posted by former partners in revenge, or by hackers who have gained unauthorized access to them. With the development of artificial intelligence technologies and simple tools for high-quality editing of photo and video content, revenge porn is increasingly produced by superimposing the victim's face onto a porn actor's body.

In early July 2019, Virginia passed a law prohibiting the distribution of face-swapped pornography

Starting July 1, 2019, anyone in Virginia who distributes or sells fake photos and videos of a sexual nature for the purpose of "blackmail, harassment or intimidation" can be fined up to $2,500. A prison term of up to 12 months is also provided.

Virginia became the first American state to outlaw such deepfakes. As of July 2019, similar initiatives are being prepared in other states: New York is considering a bill prohibiting the creation of "digital copies" of people without their consent, and in Texas a law on liability for distributing sexual content with face substitution comes into force on September 1, 2019.

"We must rework our outdated and disparate laws, including criminal ones, to account for the paralyzing and life-threatening consequences of such threats, and recognize the significant harm of fake porn," says Professor Clare McGlynn of Durham University.

According to experts, it is becoming more and more difficult to identify fakes, even in video.[23]

Service launched that "undresses" women in a photo in 30 seconds

At the end of June 2019, it became known about the launch of the DeepNude service, which generates a fake nude image of a woman from a photo. The developer's name is unknown, but his Twitter account says the product is being built by a "small team" from Estonia. Read more here.

Samsung's Moscow AI Center created a neural network that revived a portrait of Dostoevsky

In May 2019, researchers from the Samsung Artificial Intelligence Center in Moscow presented a neural network capable of "animating" static images of faces. The operation of the system is described in the materials published on the portal arXiv.org.

Artificial intelligence revived the portrait of Dostoevsky - animation provided by the press service of Samsung

The neural network records the movements and facial expressions of a human face on video and then transfers the received data to a static portrait. The scientists "showed" the artificial intelligence a large number of frames containing people's faces.

Each face in such a frame is annotated with a special landmark mask that marks its contours and basic facial expressions. The relationship between the mask and the original frame is stored as a vector, whose data is used to superimpose a new mask on the target person's image, after which the finished animation is compared against the template.
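The landmark "mask" described above can be reproduced with an off-the-shelf face landmark detector. Below is a minimal sketch using the open-source MediaPipe Face Mesh library; this is an analogous way to extract and rasterize facial landmarks, not the internal tooling Samsung used.

    import cv2
    import mediapipe as mp
    import numpy as np

    def landmark_mask(image_bgr: np.ndarray) -> np.ndarray:
        """Detect facial landmarks and draw them on a blank canvas, producing
        the kind of sparse landmark image a reenactment model can consume."""
        h, w = image_bgr.shape[:2]
        mask = np.zeros((h, w), dtype=np.uint8)
        with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
            result = mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
        if result.multi_face_landmarks:
            for point in result.multi_face_landmarks[0].landmark:
                # Landmark coordinates are normalized to [0, 1]; scale to pixels.
                cv2.circle(mask, (int(point.x * w), int(point.y * h)), 1, 255, -1)
        return mask

    # Usage: mask = landmark_mask(cv2.imread("face_frame.jpg"))  # hypothetical input file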

Samsung notes that such a development can be used in telepresence systems, video conferencing, multiplayer games and when creating special effects in films.

According to ZDNet, deepfake technologies themselves are not new, but the Samsung system is notable in that it does not use 3D modeling and can create a "live" face model from a single photo. If 32 pictures are uploaded, it can "achieve a perfectly realistic picture and personalization," the company noted.

The developers demonstrated the system's capabilities on photographs of Marilyn Monroe, Salvador Dali and Albert Einstein. It also works on paintings and portraits: in a video, the project's authors showed an "animated" portrait of the Russian writer Fyodor Dostoevsky.

At the time of the demonstration, the artificiality of the movements was still noticeable; the developers plan to address these defects in the future.[24]

McAfee: face spoofing in video can no longer be detected by eye

In early March 2019, the cybersecurity company McAfee announced that the substitution of faces in video could no longer be detected with the naked eye. In a keynote speech at the RSA cybersecurity conference in San Francisco, Steve Grobman, McAfee's chief technology officer, and Celeste Fralick, its chief data scientist, warned it was only a matter of time before hackers used the technology.

Now that attackers are able to create individualized targeted content, they can use AI for various purposes, for example, to hijack accounts using social engineering techniques or phishing attacks. Personalized phishing, that is, fraud aimed at obtaining confidential banking data in order to steal money, is more successful than mass phishing, and new artificial intelligence capabilities allow it to be carried out at the scale of automated attacks.

It will be more and more difficult to distinguish deepfake video from genuine materials, and this could become a problem for cybersecurity, experts say

There is a whole field of cybersecurity called adversarial machine learning, which studies possible attacks on machine learning classifiers. McAfee believes that the image spoofing technique is a serious threat and can be used to mislead image classifiers.

One way to fool both people and AI is to take a real photo and subtly change a small part of it. With a minimal change, a photo of penguins can be made to be classified by AI as a frying pan. Misclassifications on a more serious scale could be disastrous.
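The classic form of such an attack is the fast gradient sign method (FGSM), which shifts pixels slightly in the direction that most increases the classifier's loss. A minimal PyTorch sketch follows; the epsilon value and the [0, 1] pixel range are illustrative assumptions, not details from McAfee's demonstration.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model: torch.nn.Module, x: torch.Tensor,
                    label: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
        """Fast gradient sign method: nudge each pixel of x by +/-eps in the
        direction that increases the classifier's loss. The change is barely
        visible to a human but can flip the model's predicted class."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # assumes pixel values in [0, 1]

With eps around 0.03 the perturbation is imperceptible at a glance even though it touches every pixel; localized variants (adversarial patches) instead modify only a small region, as in the penguins example above.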

Grobman stressed that DeepFake technologies in themselves are a tool that can be used for a wide variety of purposes. It is impossible to prohibit attackers from using new methods, but it is possible to establish a line of defense in a timely manner, he said.[25]

Fake AI porn has terrorized women

By the beginning of 2019, artificial intelligence reached a level of development that allows anyone, easily and without special technical skills, to "attach" the heads of celebrities and ordinary women to the bodies of porn actresses to create realistic videos. These explicit films, created using the DeepFake method, are edited so well that they are indistinguishable from real ones. Their emergence is dangerous because the technology may also be used to spread fake news. But even more dangerous is their use as a tool to blackmail and humiliate women.

Given the proliferation of AI and the ease of obtaining photos of former partners, colleagues and others without their consent from social networks such as VKontakte and Facebook, demand for tools to create fake videos is growing. Although the law may be on the victims' side, they often face significant obstacles in pursuing online harassment cases.

"Fake porn videos cause the same stress as intimate photos posted online," says writer and former politician Charlotte Laws. "Fake porn videos are realistic, and their impact is exacerbated by the growing volume of fake news we live among."

AI-faked porn has terrorized women

Laws adds that fake videos have become a common way to humiliate or blackmail women. In a survey of 500 victims of revenge porn, she found that 12% had also been targeted with fake porn videos.

One way to address the problem may be to revise and expand the laws prohibiting revenge porn. These laws, which exist in 41 US states as of the beginning of 2019, appeared recently and mark a change in official attitudes towards non-consensual pornography.

Fabricated porn featuring actress Gal Gadot

Another approach is to bring civil action against offenders. As the independent non-profit Electronic Frontier Foundation notes on its website, victims of fake porn videos in the United States can sue for defamation or for presenting them in a "false light." They can also file a right-of-publicity claim, arguing that the creators of the video profited from the victim's image without permission.

However, all these possible remedies could run into a serious obstacle: free speech law. Anyone sued for making a fake clip can claim the video is a form of cultural or political expression protected by the First Amendment. Laws believes that in the case of fake porn videos, most judges would be skeptical of the First Amendment defense, especially when the victims are not famous and the video involves only sexual exploitation, with no political satire or artistic value.

A fragment from a fake porn video in which the face of a porn actress was replaced with the face of Hollywood star Gal Gadot

At the same time, the victim herself has almost no way to get the offensive video taken down. The reason lies in Section 230 of US law, which protects providers from liability for what users publish on their platforms. In the case of sites that host fake porn videos, providers can claim immunity because it is their users, not they, who upload the videos. An exception is intellectual property infringement, where the operator is obliged to remove materials upon notification from the copyright owner.

According to Jennifer Rothman, a Loyola Law School professor and author of a book on privacy and publicity rights, courts have no clear position on whether this exception applies to state laws (such as the right of publicity) or only to federal ones (such as copyright and trademark).

This raises the question of whether Congress can draft legislation narrow enough to help victims of fake porn videos without undesirable side effects. As a cautionary example, University of Idaho law professor Annemarie Bridy cites the misuse of copyright law, where companies and individuals have acted in bad faith to remove legitimate criticism and other lawful content.

Still, according to Bridy, given what is at stake with fake pornographic videos, a new law is needed now, Fortune wrote in a January 15, 2019 publication.[26]

2018: Artificial intelligence taught to fake people's movements in video

Main article: Artificial intelligence in video

In June 2018, it became known about a new development in the field of artificial intelligence (AI) that allows the creation of realistic fake videos.

Tools that can simulate a person's lip movements and facial expressions already exist. However, according to the Futurism portal, the new AI-based system represents a significant improvement over existing developments: it can create photorealistic videos in which all the movements and words uttered by the actor in the source video are transferred to the target video.

A new development in the field of artificial intelligence (AI) allows the creation of realistic fake videos

A public demonstration of the development will take place in August 2018 at the SIGGRAPH computer graphics conference. The creators of the system plan to show its capabilities through experiments comparing the new algorithm with existing tools for creating believable videos and images, many of which were partially developed by Facebook and Google. The AI-based solution reportedly outperforms existing systems: after just a few minutes of working with the source footage it can produce a flawless fake video, and participants in the experiments could hardly distinguish real videos from fakes.

The developers, who have received financial support from Google, hope their work will be used to improve virtual reality technology and make it more accessible.

2017: Replacing the face of a porn actress with the face of a Hollywood movie star

In December 2017, a porn video allegedly featuring the famous actress Gal Gadot appeared on the Internet. In reality, the video showed the body of a porn actress whose face had been replaced with the face of the Hollywood star using artificial intelligence. Read more here.

See also

Notes

  1. Central Bank will start a fight against "deepfakes"
  2. Nearly 4,000 celebrities found to be victims of deepfake pornography
  3. Ministry of Digital Development plans to create a system for identifying deepfakes
  4. Fraudsters began to actively lure out samples of citizens' votes
  5. Ministry of Digital Development of Internal Affairs and Roskomnadzor will determine the punishment for deepfakes
  6. "Everyone looked real": multinational firm's Hong Kong office loses HK $200 million after scammers stage deepfake video meeting
  7. The Ministry of Internal Affairs warned of a new scheme of scammers generating the voices of friends in social networks
  8. Extortionists began to use AI to fake voice in Telegram
  9. Union Government issues advisory to social media intermediaries to identify misinformation and deepfakes
  10. Malicious Actors Manipulating Photos and Videos to Create Explicit Content and Sextortion Schemes
  11. Deepfaking it: America's 2024 election collides with AI boom
  12. Deepfake Startups Become a Focus for Venture Capital
  13. China's rules for "deepfakes" to take effect from Jan. 10
  14. filmed the world's first web series using deepfake technology
  15. The EU intends to fine social networks for failing to remove deepfakes
  16. With the help of a deepfake, you can impersonate another person in the bank (https://www.securitylab.ru/news/531760.php)
  18. Deepfakes can easily fool many Facial Liveness Verification authentication systems
  19. China reveals draft laws that heavily restrict deepfakes
  20. Bank Robbers Used Deepfake Voice for $35 Million Heist
  21. Deepfake voices can trick IoT devices and people after five seconds of training
  22. Manager at energy firm loses £200,000 after fraudsters use AI to impersonate his boss's voice
  23. Virginia bans 'deepfakes' and 'deepnudes' pornography
  24. Samsung uses AI to transform photos into talking head videos
  25. McAfee shows how deepfakes can circumvent cybersecurity
  26. Fake Porn Videos Are Terrorizing Women. Do We Need a Law to Stop Them?