
Bots

A bot is a program that automatically performs actions on a computer in place of a person. Such programs can manipulate social networks: for example, draw attention to certain public problems, support humanitarian campaigns or, conversely, suppress social activity.

Main article: Botnet


Why bots are used

Robots, or "bots" for short, are specialized programs that perform tasks ranging from indexing websites and searching for information to spreading spam and hacking user accounts. Digital security experts divide bots into several groups by type and task. The largest group consists of malicious bots that harvest user data or organize computers into botnets to attack sites and services. Bots are also used to search for vulnerabilities and to monitor traffic; in addition there are spider bots (search crawlers), ticket bots and downloader bots.

The most easily recognizable are spammer bots, which litter forums with advertising posts or send similar messages on social networks from specially created accounts.

Bots for inflating ratings

Slightly more complex are the robots whose task is to create the illusion of user attention. "As soon as such a concept as a discussion rating appeared, for a post on LiveJournal, say, or a video on YouTube, methods for inflating it appeared immediately," says Alexei Goncharenko, head of Social Media LLC. "At the primitive level it looks like this: a special program automatically registers hundreds of accounts whose 'users' periodically visit the desired discussion thread and leave phrases gleaned from a special library. As a result, the topic becomes popular and begins to attract real, live users."

More sophisticated virtual accounts friend each other as well as real users, and can also comment on other people's replies and answer questions with random phrases. A striking example is the blog of Alexei Navalny. To check rumors that it was being promoted with bots, Money correspondents visited the pages of users who leave comments in his diary. At first glance, many of these supposedly live LiveJournal accounts were cluttered with reposts of Internet junk, while others were simply pristine, without a single original post.

We turned to the expert Alexei Goncharenko for comment. "Navalny's blog is thoroughly cleaned up: you can see that the bots launched by his enemies were banned," Alexei shares his observations. "Nevertheless, I see that the diaries of many friendly 'commentators' on Navalny's blog consist exclusively of reposts from it."

Bots in Reputation Management

Another fashionable area in which bots are actively involved today is reputation management. The term is understood very broadly and has many florid, mysterious synonyms, such as "negativity cancellation" and "review management." The reality behind them, however, is rather prosaic. As the Internet spread, people discovered that they could vent their indignation at poor service or defective products on a thematic forum or on the blog of a popular network personality, on the principle: at least I will get moral satisfaction, and let people know. It often turned out that company representatives learned about a problem together with potential buyers, when messages that such-and-such a phone was... had already flooded all the popular forums. The Internet began to play the role of a public complaints book and a technical support service in one. Naturally, black PR was not long in coming. Hence the need to track negative reviews and respond to them correctly.

At first, as often happens, the problem was solved by enthusiasts or by administrative fiat: "You have a blog? Then you will answer the users." For small companies or individuals this approach works. When it comes to famous people and international brands, the situation is fundamentally different, since gigantic volumes of information have to be processed.

"If it is an international figure or a transnational corporation and you need to monitor many language markets, it costs hundreds of thousands, or even millions, of dollars a year. The figures, of course, are indicative. But this is only part of the process. To actually manage your reputation, you need to build a service that responds to these messages and integrate it into the business processes within your company. You need, if you like, to build a social CRM (customer relationship management system. - "Money"), parallel to the traditional one," says Alexander Smirnov.

So far there are virtually no such social CRMs in Russia, although large companies are actively interested in the possibilities of this kind of response. "A properly organized social CRM will allow you to react quickly and in a targeted way. In practice, this will make it possible to provide near-VIP service standards to a mass clientele. I therefore think that the pioneers in building such systems will be either cellular operators or banks," said Alexander Smirnov.

2019: Companies waste $1.3 billion a year on Instagram, Facebook and YouTube ads "watched" by bots

At the end of July 2019, researchers reported that companies are wasting more than $1.3 billion a year on targeted marketing on social networks, since many unscrupulous social network users buy fake followers or use services to inflate likes, views and comments. As a result, about 15% of advertising spending goes down the drain because the ads are "watched" by bots.

Brands pay huge sums to promote their products through influencers with large followings on major social networks. According to Mediakix, advertising on Instagram, Facebook and YouTube will cost marketers a total of $8.5 billion in 2019, and in 2020 the figure will reach $10 billion. However, a study by the cybersecurity company Cheq and the University of Baltimore shows that $1.3 billion and $1.5 billion, respectively, will end up in the hands of users with fake followers.
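The "about 15%" waste figure quoted above follows directly from these estimates; a quick arithmetic check, using only the dollar figures from the text:

```python
# Influencer-marketing spend vs. estimated losses to fake engagement,
# per the Mediakix and Cheq/University of Baltimore figures quoted above.
spend_2019 = 8.5e9   # total Instagram/Facebook/YouTube influencer ad spend, 2019
spend_2020 = 10.0e9  # forecast for 2020
lost_2019 = 1.3e9    # spend estimated to reach accounts with fake followers, 2019
lost_2020 = 1.5e9    # the same estimate for 2020

share_2019 = lost_2019 / spend_2019 * 100  # about 15.3%
share_2020 = lost_2020 / spend_2020 * 100  # 15.0%
print(f"{share_2019:.1f}% / {share_2020:.1f}%")
```

Both years round to the roughly 15% of ad spending the researchers describe as wasted.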

Brands waste $1.3 billion per year due to fake social media accounts

Cheq calculated the size of the expected fraud based on an analysis of its own data, a review of the relevant services, as well as research and surveys on this issue. In addition to direct cash costs, companies can incur indirect costs, in particular, "undermining trust and potential impact on the brand."

According to the study, up to 25% of a typical influencer's 10,000 followers turned out to be fake, and of 800 advertisers surveyed, more than 60% had encountered inflated likes and comments or fabricated advertising content intended to convince brands that the influencer had other clients.

The researchers believe brands using this kind of marketing should invest in their own vetting programs to guarantee a real audience.[1]

Bots in Runet

Bots in Runet have been noticeable since at least the early 2000s.

2024: Roskomnadzor explains how dangerous foreign search bots are

Specialists of the Center for Monitoring and Management of the Public Communications Network (CMU SSOP), subordinate to Roskomnadzor, have developed and sent hosting providers recommendations that should help them analyze the current risks associated with the work of foreign search bots. This was announced on May 7, 2024 by the press service of Anton Nemkin, a deputy of the State Duma of the Russian Federation.

In particular, according to the experts, such bots can collect information about resource vulnerabilities, including those related to critical information infrastructure (CII). They can also index the personal data of Russian users and feed that information into foreign machine learning models.

{{quote "The decision on the need to block them is made by the owner of the resource himself, taking into account the analysis, the department emphasized. }}

Search bots are software modules of search engines responsible for finding websites, scanning them and adding materials to the database, Anton Nemkin recalled. They can visit millions of sites and process a huge amount of information, and not all of them are malicious or require blocking.

{{quote "I note that such bots cannot penetrate password-protected sections of a site like 'spies.' It all depends on what functions the developers built into them. It is precisely in order to determine which bot you have encountered, and whether it is worth restricting its access to your resources, that Roskomnadzor's recommendations were developed, said the deputy. }}
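One standard way for a site owner to determine which bot he has encountered is a reverse-then-forward DNS check: major search engines document that their crawlers' IP addresses reverse-resolve to hosts under their own domains, while impostors merely forge the User-Agent header. A minimal sketch in Python; the domain suffixes are the publicly documented ones, and `verify_crawler` performs live DNS lookups, so treat it as illustrative rather than as part of the Roskomnadzor recommendations:

```python
import socket

# Reverse-DNS suffixes publicly documented for major search crawlers.
KNOWN_CRAWLER_DOMAINS = (
    ".googlebot.com", ".google.com",             # Googlebot
    ".yandex.ru", ".yandex.net", ".yandex.com",  # Yandex bots
)

def is_known_crawler_host(hostname: str) -> bool:
    """Check whether a reverse-DNS hostname belongs to a documented crawler."""
    return hostname.lower().rstrip(".").endswith(KNOWN_CRAWLER_DOMAINS)

def verify_crawler(ip: str) -> bool:
    """Reverse-resolve the IP, then forward-resolve the hostname and make
    sure it maps back to the same IP (guards against spoofed PTR records)."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
    except OSError:
        return False
    if not is_known_crawler_host(hostname):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward (A) lookup
    except OSError:
        return False
    return ip in forward_ips
```

A request claiming to be a search bot whose IP fails this check is a candidate for blocking; bots that mimic ordinary browsers require behavioral analysis instead.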

However, as of May 2024 a large number of genuinely malicious search bots operate on the network. For example, some of them aggregate user data for later illegal use; in particular, the constant collection of data on Russians helps unfriendly countries draw up their next sanctions packages. Malicious search bots may also try to hack a site, and in some cases attackers try to overload the server, with hundreds of bots attacking a resource at once.

{{quote "At the same time, 'bad' robots can mimic the well-known robots of search engines, and the most malicious of them, as a rule, disguise themselves as a person: they reproduce mouse movements, keystrokes and varying pauses between actions. From the site's side such a robot looks like an ordinary user, and it is extremely difficult to detect. That is why it is important to tell electronic visitors apart and know how to block access for uninvited guests. I would advise not to neglect the recommendations of Roskomnadzor, which will help protect resources from yet another type of network threat, the parliamentarian concluded. }}

2022

Bots generated 40.5% of traffic in Runet

About 40.5% of all traffic in Runet in 2022 was accounted for by bots (automated programs capable of performing various tasks on the network), 5 percentage points more than a year earlier. This was announced in mid-June 2023 by Qrator Labs.

According to the experts, Internet traffic generated by people fell in 2022 from 64.5% to 59.5%. Georgy Tarasov, product manager of Qrator.AntiBot at Qrator Labs, noted that in 2022 "good" bots (for example, search robots that scan and index web resources, and bots that check resources for vulnerabilities) accounted for about 17.5% of total traffic. Their average share changed little over the year, within 1 percentage point, he said.

About 40.5% of all traffic in Runet in 2022 fell on bots

"Bad" bots increased their share of traffic by 5-6 percentage points over the year. Such systems collect confidential information about users, leave spam comments and fake reviews, and "click" on advertisements, burning through advertising budgets by inflating the number of impressions.
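The Qrator Labs shares quoted above fit together, on the assumption that total traffic is the sum of human, "good" bot and "bad" bot traffic:

```python
# Runet traffic shares for 2022 as reported by Qrator Labs (percent).
total_bots = 40.5  # all bot traffic
human = 59.5       # traffic generated by people
good_bots = 17.5   # search crawlers, vulnerability scanners, etc.

bad_bots = total_bots - good_bots  # the remaining "bad" bot traffic: 23.0%
assert abs(human + total_bots - 100.0) < 1e-9  # shares sum to 100%
print(f"'bad' bots: {bad_bots:.1f}%")
```

The implied 23% "bad" bot share is consistent with the 5-6 percentage point growth reported for the year.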

Malicious bot traffic, according to the experts, poses a threat to all online resources. But whereas earlier it mainly threatened online stores and marketplaces, recently such activity has been observed in the financial sector, housing and communal services, medicine, education and entertainment. Bot traffic will continue to grow and in the coming years will exceed the "human" share, market participants predict.

Among the reasons for the growth of "good" bot traffic is the development of artificial intelligence, said Alexander Sivolobov, deputy head of the NTI Competence Center for wireless communication technologies and the Internet of Things at Skoltech. A simple request to ChatGPT, he said, triggers a series of interactions between various digital services.[2]

The number of bots in the Russian segment of the Internet increased from 2-3% to 23%

Specialists of the Russian advertising network SlickJump recorded a record increase in the number of "bots" (short for "robots"): small programs that automatically perform certain tasks. According to the experts, starting from November 2021 the share of bots in the Russian segment of the Internet increased from 2-3% to 23%, a record level of bot activity for RuNet. The company announced this on November 7, 2022.


The increase in bot activity in Runet, according to the experts, results from several recent trends. First of all, the frequency of hacker attacks and the activity of bots aimed at collecting personal data and searching for keys to access various resources have increased. SlickJump experts note that search giants such as Google and Yandex, by changing their algorithms for indexing sites with search bot systems, could also have contributed to the rise of bot activity on the Runet. The experts are sure that the change of algorithms affects the number of search bots: there are far more of them.

{{quote "Bot activity on the Russian Internet has been tracked since the early 2000s. As a rule, research and analytics on the topic focus on the advertising activity of robots on the network, in particular the inflation of advertising accounts on social networks, the boosting of advertising messages and the inflation of site traffic. The SlickJump network closely monitors the technologies for creating robotic programs and their activity on the Internet, since this affects traffic assessment and the overall strategy for promoting goods and services on the Runet, said Anton Ermakov, executive director of SlickJump. }}

According to him, the growth in the number of bots in Runet does not yet make it possible to draw a clear picture of how content consumption has changed in connection with the events taking place in the world in 2022.

2012: Up to 76% of bots recorded in groups of large companies on the VKontakte social network

Up to 76% of the members of large companies' groups on the VKontakte social network turned out to be bots. This was stated by Ilya Perekopsky, deputy general director of VKontakte, in a report opening the "SMM or Fraud?" section at the RIF conference (April 2012).

Of the brands examined, the largest share of bots - 76% of users - was recorded on the page of Rollton noodles. Almost as many - 73% - were recorded in the Svyaznoy Bank group. The Adidas group had 59% bots, Coca-Cola 37%. As examples of pages with few bots, Perekopsky named the groups of Sberbank (3% bots), Nescafe (2%), UniCredit Bank (2%) and Pampers (5%)[3].

According to the top manager, VKontakte is the main platform for SMM marketing, since in audience reach in Russia this social network significantly surpasses Facebook and Odnoklassniki. According to Perekopsky, saving on real promotion by using bots allows advertising agencies and company top managers to "saw off" (embezzle) unspent funds.

Bots in the activities of the Russian Foreign Intelligence Service

In January-February 2012, the Russian Foreign Intelligence Service held three closed tenders for the development of methods for monitoring the blogosphere and planting information messages to shape public opinion abroad, the Kommersant newspaper reported. The tenders, totaling over 30 million rubles, were won by Iteranet, whose general director is Igor Matskevich.

By 2013, the contractor will have to develop three systems:

  1. "Dispute" (order amount - 4.41 million rubles), which will monitor the blogosphere and analyze factors affecting the popularity and dissemination of information.
  2. "Monitor-3" (4.99 million rubles) to study methods of covert control on the Internet.
  3. Storm-12 (22.8 million rubles). The system is responsible directly for stuffing the necessary information and its further dissemination in order to manipulate public opinion.
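The three order amounts are consistent with the "over 30 million rubles" total mentioned above:

```python
# Tender amounts from the Kommersant report, in millions of rubles.
disput = 4.41    # "Dispute": blogosphere monitoring and analysis
monitor3 = 4.99  # "Monitor-3": research into covert control methods
storm12 = 22.8   # "Storm-12": information stuffing and dissemination

total = disput + monitor3 + storm12
print(f"total: {total:.2f} mln rub")  # 32.20, i.e. "over 30 million"
```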

The Disput and Monitor-3 systems are to begin operation as early as 2012.

According to Kommersant's sources, it is quite possible that the systems will be used to manipulate public opinion not only outside Russia but also on the Runet. Pilot versions of the monitoring systems will be deployed in the Eastern European countries that were part of the Soviet Union.

"The text of the tender for the Storm-12 system states plainly that the special services want to spend millions of rubles on creating a stuffing system that works through pre-registered social network accounts. The main obstacle to the system's operation is the spam protection of social networks; part of the money will be spent on neutralizing these protections," Anton Nosik commented to Kommersant.

Bots in the activities of the US military

Employees of the US Central Command, which is responsible for hostilities in the Middle East and Central Asia, requested in June 2010 (?) the development of a system designated an "online persona management service." Judging by the accompanying documentation, the Pentagon intends to create 50 fictitious online citizens of Afghanistan and Iraq. True, it is unclear who will manage these "bloggers": a computer or, after all, a person[4]

We are talking about technologies for manipulating discussions in order to steer conversations away from dangerous topics and promote one's own ideas and scenarios. According to the official concept, the American bots will for now operate only in the Asian region, duping users who speak four languages: Arabic, Farsi, Urdu and Pashto[5].

"I am sure the American special services have been using such technologies for a long time," comments Igor Ashmanov, general director of Ashmanov & Partners. "How else did the Twitter revolutions get going? It seems to me there were more posts from virtuals than from real users."

The Pentagon's "revelations" also amused Konstantin Prokhortsev, head of the Rosfinkom news agency: "The announcement by the American military department can be regarded as an admission either that this has long existed, or that all the other special services have it and only the Pentagon itself did not until now. I personally encountered American Russian-speaking bots back in 2003: they were launched at our IraqWar.ru project, which covered the invasion of Iraq. They still looked very primitive then and were spotted by most users. I think the technology was still being tested at the start of that military campaign, and at first the bots were thrown into battle unfinished." Now, judging by the same Pentagon statement, the plan is to create not just replicant bots but specially crafted trolls, partially controlled by a living person. It is assumed that it will no longer be possible to distinguish such terminators from real users.

"It is impossible to create a completely autonomous virtual personality that will operate on the network without constant correction and at the same time fully mimic a real user," says Igor Ashmanov. "The infs (virtual characters that can converse with site visitors in natural language. - "Money") developed by our company, for example, can sustain quite long dialogues, but such a bot is easy to identify; people quickly sense the catch. If we are talking about a technology where the actions of bots are partially directed by a person, and all their remarks are at least reviewed before publication, then you can get relatively high-quality trolls. However, there are quite natural restrictions: one person, even with the most optimal organization of an automated system, can manage dozens, but not hundreds, of virtuals." On the other hand, Ashmanov said, effective manipulation in Runet would require only about 5 thousand virtuals. "That would take about a hundred operators, each directly controlling some fifty bots," says Igor Ashmanov. "But since one person cannot play fifty completely different lives, there is a risk that one operator's bots will behave very similarly. To prevent this, a more complex hierarchical management system is needed: ideologists who develop strategy, masters who script the bots, and so on. But that is another twenty people or so."
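Ashmanov's staffing estimate reduces to simple arithmetic (figures taken from the quote above; one operator runs about fifty virtuals, plus roughly twenty strategists and scriptwriters on top):

```python
import math

# Ashmanov's estimate for effective opinion manipulation in Runet.
virtuals_needed = 5000  # virtual personas required
per_operator = 50       # bots one operator can plausibly direct

operators = math.ceil(virtuals_needed / per_operator)  # 100 operators
management = 20  # ideologists, scriptwriters, etc., per the quote
print(f"{operators} operators + ~{management} managers")
```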

Bots are beneficial to sites

"The system administrator can identify a bot by understandable indirect signs," says web project developer Alexei Nemiro. "For example, at Kbyte.Ru we take special note of users with the same IP, as well as users trying to friend others en masse. Each user also has a counter showing his activity; exceeding the average several times over is a suspicious sign. But I, as administrator, never take any action before a user breaks the law. I just take note, nothing more."
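The indirect signs Nemiro lists (shared IPs, mass friending, activity far above the average) are easy to express as heuristics. A sketch under assumed data structures; the thresholds and field names are illustrative, not taken from Kbyte.Ru:

```python
from collections import Counter

def suspicious_users(users, ip_threshold=5, friend_threshold=200, activity_factor=10):
    """Flag accounts matching simple bot heuristics.

    users: list of dicts with 'name', 'ip', 'friend_requests' and 'activity'
    keys (an illustrative schema). Returns the names worth taking note of;
    as Nemiro says, flagging is noting, not banning.
    """
    by_ip = Counter(u["ip"] for u in users)  # how many accounts share each IP
    avg_activity = sum(u["activity"] for u in users) / len(users)
    flagged = []
    for u in users:
        same_ip = by_ip[u["ip"]] >= ip_threshold           # many accounts, one IP
        mass_friending = u["friend_requests"] >= friend_threshold
        hyperactive = u["activity"] > activity_factor * avg_activity
        if same_ip or mass_friending or hyperactive:
            flagged.append(u["name"])
    return flagged
```

Such rules only catch the crude cases; bots that mimic human timing and behavior, as described earlier, defeat simple counters.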

The latter remark reflects the situation very clearly: social networks and other resources are interested in user numbers, even bot numbers, for the sake of ratings. As a rule, only malicious spammers get banned. Until recently, no serious fight was waged against bots at all. By the admission of the owners of most forums and social networks, at the promotion stage they launched virtuals themselves to create the illusion of a mass audience.

The fight against bots is of little interest to anyone. "It is unlikely that the appearance of bots will drive live users away from social networks," says Anton Paltsev, technical director of Teralabs. "It may even become interesting if the bots are fairly intelligent or well managed. There are many lonely people who lack communication, and they do not care whom they talk to on a social network."

However, bot identification systems will evolve. "Some time ago, the creators of social networks were interested in only one thing, the number of users; whether they were bots or not was a secondary question," says Sergei Grebennikov, deputy director of the Russian Association of Electronic Communications. "But now they have started paying attention to 'strange' accounts, which is due to the increased interest of law enforcement agencies in the Internet."

Last month, Roskomnadzor announced a competition to create an automated system for monitoring online media, which is to identify around the clock materials containing, among other things, calls to extremism. Here, besides the task of finding and punishing the author of a post, monitoring pursues another goal: to estimate the share of bot posts in the overall information flow in order to get an idea of true public opinion. After all, as Alexander Venedyukhin, author of the scientific and technical blog dxdt.ru, noted, the global task of the creators of bots pursuing, say, political goals is not to deceive a hundred ordinary netizens, but to lead those who use analytical services to certain decisions: "What matters is a second-order deception tool, one that creates such an information background on social networks that the analysts engaged in opinion research draw the necessary conclusions."

According to experts, battles between bots belonging to opposing groups are already taking place on social networks; that is, bots often "communicate" exclusively with each other. "This is a short-term effect associated with the youth of the industry, with the fact that hustlers frolic in it," comments Alexander Smirnov, commercial director of Ashmanov & Partners. "The industry is stepping on the beginner's rake, but over time all the obvious possible abuses will be found, studied, and countered."

According to Anton Paltsev, we can very soon expect bots that can distinguish a living user from a robot, so as not to waste time on the latter. As for the likely future appearance of biocomputers, as well as highly intelligent self-learning robots based on artificial neural networks that can no longer be distinguished from a living user, in reality they are not needed to seize power on the Internet. You have probably noticed that today, to manage public opinion in the network of a single state, little more than a company of living fighters is needed. For the whole world, apparently, no more than a division.

Or maybe this division already exists? In fact, creating it is not a problem: people on the network are relatively cheap. It is easier to hire a dozen competent operators with well-developed virtuals and give them high-quality guidance on the target than to produce millions of bots. If, of course, we are talking about a serious operation to manage public opinion, and not about an attempt to create the illusion of puppy delight over another movie village. But a whole system of that kind is an expensive thing. According to Alexander Smirnov, technological development in the near future will go not toward response, but toward improving the means of automatic monitoring and raising its quality.

A third of Internet traffic falls on bots

Artificially created traffic is becoming a serious problem for advertisers: 36% of web traffic is generated by hacked computers, writes The Wall Street Journal, citing an estimate by the Interactive Advertising Bureau (IAB). Fake traffic relies on botnets and fake sites, as well as on intermediaries whose identities are almost impossible to establish.

Fake traffic deceives advertisers, since they pay for the number of views of advertising material regardless of whether the visitors to the web page are real people.

WSJ sources report that L'Oréal, General Motors and Verizon Communications have encountered such deception during advertising campaigns. It is noted that fake traffic undermines advertisers' confidence in the Internet compared with traditional media, including television. Roxanne Baretto, assistant vice president of digital marketing at L'Oréal, says that "the delay represents a missed opportunity to build connections with our core audience."

It is difficult to estimate the scale of online fraud, but White Ops calculated that in 2013 American advertisers suffered losses of $6 billion. In addition, many factors affect the size of digital advertising budgets, including not only fraud but also difficulties in measuring audience size.

Recall that in the United States, according to forecasts, the online advertising market, including social networks and the mobile Internet, will grow by 17% this year and reach $50 billion. Overall, digital's share will be 28% of total advertising spending.

Notes