
AI fraud

Main article: Artificial Intelligence

Deepfakes (DeepFake)

Main article: Deepfakes (DeepFake)

Chronicle

2026: How to spot AI-doctored documents. The Russian Ministry of Internal Affairs issued guidance

In January 2026, representatives of the Russian Ministry of Internal Affairs described the characteristic signs of documents created with artificial intelligence technologies. Generative neural networks capable of producing texts, images and media files are actively used by attackers to create fake notifications, letters and web pages that imitate official sources. Read more here.

2025

How Hackers Use Generative Neural Networks to Scout, Infiltrate Company Networks and Draft Ransom Demands

At the end of August 2025, Anthropic, the American developer of artificial intelligence technologies, said that its tools are being used by hackers "as a weapon" to carry out sophisticated cyber attacks, including large-scale theft of personal data and extortion.

Anthropic, the creator of the Claude chatbot, claims that in one case its generative AI was used to write malicious code for cyber attacks. That code was then used to breach the IT systems of at least 17 organizations, including government agencies. Attackers allegedly used Claude to "make both tactical and strategic decisions" - for example, to decide what data to extract from compromised systems and how to craft psychologically effective extortion demands. AI was even used to set ransom amounts.

In another scheme, North Korean fraudsters used Claude to fraudulently obtain remote jobs at leading US companies. According to Anthropic, they used AI to create fake profiles and submit applications to American Fortune 500 tech corporations. Once hired, the fraudsters continued to use generative AI to translate correspondence and write code for their employers' tasks.

Information security experts note that AI can significantly reduce the time needed to find and exploit vulnerabilities, and that neural networks help cybercriminals devise entirely new fraud schemes. On the other hand, AI also improves intrusion detection and expands the capabilities of security tools.[1]

Google, McKinsey and Cisco return to in-person interviews due to candidates cheating with AI

Tech giants Google and Cisco Systems, as well as the consultancy McKinsey, are returning to in-person job interviews because of widespread cheating by job seekers who use artificial intelligence to deceive employers. This became known in August 2025. Candidates use AI services as hidden assistants to answer questions and complete test tasks, forcing companies to rethink their recruiting approaches.

According to The Wall Street Journal, in recent years, companies have massively switched to virtual interviews, trying to speed up the hiring process and adapting to the growing popularity of remote work. However, this format created favorable conditions for unscrupulous applicants, especially when hiring for technical specialties.

Job seekers use artificial intelligence in a variety of ways to gain an unfair advantage. Candidates covertly feed interviewers' questions into AI chatbots and receive ready-made answers in real time. This approach is especially popular in technical programming tasks, where AI can generate code and solutions to algorithmic problems.

Some applicants use AI to prepare for interviews, analyzing typical company questions and getting optimized answers. Artificial intelligence helps candidates formulate answers in such a way that they sound as convincing as possible for a specific position and company.

The most problematic area has been interviews for software engineering positions. Most of these interviews are conducted remotely, especially at smaller companies, which creates ideal conditions for using AI assistants: candidates can quietly get help solving algorithmic problems and writing code.[2]

2024: How artificial intelligence helps fraudsters deceive travellers. Travel scams worldwide up by as much as 900%

At the end of June 2024, Booking.com warned its users about an explosive increase in AI-related travel fraud. According to rough estimates, the number of such scams has grown by 500-900% over the past 18 months.

The rise in phishing attacks, in which people are tricked into handing over their financial data, is especially noticeable. Security researchers attribute this to the launch of ChatGPT, which convincingly imitates correspondence with real specialists. In a phishing attack, scammers persuade people to hand over credit card details by sending fake but very convincing booking links that imitate sites such as Booking.com and Airbnb.

Similar types of fraud have been known for many years, and such fake letters were often given away by spelling and grammatical errors. AI, however, makes phishing emails harder to detect because it can generate realistic images and grammatically correct text in multiple languages. Experts urge hotels and travellers to use two-factor authentication, which adds extra security checks, and to be vigilant when clicking on links from unfamiliar sources.
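The vigilance advice above can be sketched in code: a simple hostname check catches the lookalike domains typical of such phishing links, where a trusted brand name appears somewhere in the URL but the actual registered domain is different. This is a minimal illustration, not a complete anti-phishing solution; the allow-list of trusted domains is an assumption for the example.

```python
from urllib.parse import urlsplit

# Illustrative allow-list of legitimate domains (assumption for this sketch).
TRUSTED_DOMAINS = {"booking.com", "airbnb.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the URL's hostname is a trusted domain
    or a direct subdomain of one (e.g. secure.booking.com)."""
    host = (urlsplit(url).hostname or "").lower()
    return any(
        host == domain or host.endswith("." + domain)
        for domain in TRUSTED_DOMAINS
    )

# A real subdomain passes; lookalike phishing domains fail:
print(is_trusted_link("https://secure.booking.com/reservation"))   # True
print(is_trusted_link("https://booking.com.payment-check.net/x"))  # False
print(is_trusted_link("https://b00king.com/login"))                # False
```

Note that the check compares the full hostname against the domain suffix rather than searching for "booking.com" as a substring: a substring test would wrongly accept `booking.com.payment-check.net`, which is exactly the trick such phishing links rely on.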

It is not only scammers who use AI, however - the technology also allows Booking.com to quickly remove fake hotel listings created to trick people. AI models can detect such schemes and either block the scammers' access or take the listings down before they can profit from deceiving users.

Experts advise travel providers to warn travellers about how to minimise fraud risk, and travellers themselves to research holiday options "with due diligence", checking for contact details and a phone number.[3]

2023: From bank cards to artificial intelligence: How technology is changing the face of fraud

Social engineering remains attackers' main tool, accounting for 90% of financial crimes; of these, 94% involve telephone fraud.[4] Against this background, expert Dmitry Emelyanov explains how technologies - from simple phone calls to artificial intelligence - are changing attackers' strategies and what measures can help reduce financial risks for citizens. Read more here.

Notes