
Social Media Security and Challenges


Main article: Social media

Data leaks in social networks

Main article: Data leaks in social networks

2023

The United States passes its first law prohibiting parents from taking away child bloggers' income; the money will be kept in a special account

In August 2023, Illinois became the first US state to pass a law protecting the money of child influencers. Their social media income will be transferred to a special trust account, where it will be held until the child comes of age. The measure is aimed at parents who make money on content featuring their children. The children themselves will be entitled to a set percentage of the income, and if parents refuse to transfer that share to them, child influencers will be able to sue. Read more here.

Texas governor bans children from using social networks without parental consent

On June 13, 2023, Texas Gov. Greg Abbott signed a bill banning children under 18 from registering on most social media without parental consent. Read more here.

In the USA, children now need parental consent to join social networks

In March 2023, for the first time in the United States, children were banned from social networks without parental consent. The first state to pass such a law was Utah.

We are talking about two laws signed at once by Utah Governor Spencer Cox, SB152 and HB311. The first requires social networks to verify that users creating accounts in Utah are over 18; adolescents under that age must provide parental consent. The second prohibits social media from using designs or features that make minors addicted and makes it easier to file lawsuits against companies for the harm they cause.

In Utah, children were banned from using social networks without parental consent

As explained on the Utah state website, the laws allow parents of minor children to gain full access to their child's account, set "curfew" hours during which social media are blocked (by default from 10:30 p.m. to 6:30 a.m.), prohibit correspondence with anyone who is not the child's friend, and hide minors' accounts from search results.

In addition, social networks are prohibited from collecting minors' personal data and from running advertising campaigns aimed at children and adolescents. Accounts of users under 18 must be excluded from all search results, and the ability to send personal messages to a child from accounts not on his or her friends list will be limited.

As of March 24, 2023, it had not been specified how the authorities would monitor compliance with the law. Civil liberties groups have expressed concern about the bill: in their view, enforcing the law will require the authorities to build services that collect information about adolescents and their family ties, which creates an additional risk of personal data being hacked.[1]

2022: AI model identifies mental disorders based on online posts

Researchers at Dartmouth have created an artificial intelligence model to identify mental disorders based on discussions on Reddit.

A distinctive feature of the model is its emphasis on emotions rather than on the specific content of the analyzed social media texts. The researchers say this approach works regardless of the topics being discussed.

There are numerous reasons why people do not seek help for mental health problems: stigma, high cost and lack of access to GP services. There is also a tendency to minimize the signs of mental disorders or to confuse them with stress, says Xiaobo Guo, co-author of the paper. According to him, such digital screening technologies can provide additional motivation to see a doctor.

In the study, the scientists trained the artificial intelligence to detect three types of mental disorders from messages written by social media users: major depression, bipolar affective disorder and anxiety disorder - common emotional disorders characterized by distinct emotional patterns. They looked at data from users who reported having one of these disorders and from users without any known mental disorders.

The scientists trained the AI on posts from the social network Reddit for several reasons. First, it is a network where people share information, with a large number of active users (more than 430 million, according to the study) discussing a wide range of topics. Posts and comments are publicly available, and researchers can collect data going back to 2011.

Various emotional disorders have their own characteristic patterns of emotional transitions. By creating an emotional "fingerprint" of the user and comparing it with established signs of emotional disorders, the model can detect deviations. To confirm their results, they tested it on messages that were not used during training and showed that the model accurately predicts which users may or may not have one of these disorders.

"training" took place in several stages. Researchers are not the first to become interested in analyzing emotions on social media. So they started by using existing datasets before "feeding" their AI with posts from Reddit. For each disorder category, they found 1,997 users who claimed to have a mental disability. They also found 1997 users for the test group who rejected the absence of mental problems. 70% of these users' publications were used for AI training, 15% for validation procedure, and 15% for real model testing. The researchers trained their model to label emotions expressed in user messages and display emotional transitions between different messages so that the message could be labeled "joy," "anger," "sadness," "fear," "lack of emotion," or a combination thereof. The map is a matrix showing how likely the user is to move from one state to another, such as from anger to a neutral state of no emotion.

The model they developed therefore focuses on transitions, creating an "emotional fingerprint" of the user that can be compared with "typical" signatures corresponding to emotional disorders. Testing the model on publications that had not been used for training, the researchers found that the AI could accurately detect the presence or absence of an emotional disorder.

The researchers hope that the presented work can be used to prevent mental disorders. In their article, they make a strong case for a more thoughtful study of models based on the analysis of social media data[2].

2021: Russian government to analyze Russians' personalities on social networks using an AI system

At the end of November 2021, it became known that the Institute for System Programming named after V.P. Ivannikov had received a grant from the Analytical Center under the Government of the Russian Federation (AC) to study the possibilities of using artificial intelligence for psychological diagnostics of personality on social networks. Read more here.

2020

Turkey fines a number of social networks for refusing to open representative offices in the country

On November 4, 2020, it became known that the Turkish authorities had fined the social networks Facebook, Instagram, Twitter, Periscope, YouTube and TikTok 10 million Turkish lira ($1.175 million) each for violating the law requiring the listed companies to open representative offices in the republic. Read more here.

Russia may introduce fines for social networks for illegal content

The working group of the Federation Council Committee on Science, Education and Culture, at the initiative of the Safe Internet League (LBI), is developing a bill on fines for social networks and Internet platforms for publishing illegal content. This became known on March 10, 2020. Read more here.

2019

Survive on the Internet: cyber threats that lie in wait for a child online

On July 29, 2019, DIT Moscow reported on how to survive on the Internet and what cyber threats lie in wait for a child online.

Several generations have grown up who navigate the Internet better than their parents. For them, the Internet, instant messengers and closed groups are the same extreme places for communication and games that construction sites, garages and forest parks were for the generation of the 1980s.

According to Google data, 98% of schoolchildren and students aged 13 to 24 use the Internet every day. And children aged 5 to 16 spend an average of six and a half hours a day in front of a screen, up from about three hours in 1995, according to market research by Childwise.

Along with the sharp increase in the number of users, the level of aggression on the network has also grown. Words have tremendous power: they can inspire and heal, but they can also wound and become weapons of manipulation, incitement of conflict and bullying. This is especially true when it comes to children.

Almost half of surveyed Russian children and adolescents aged 8 to 17 admitted that they had been targets of bullying on the Internet, according to a Microsoft study, and over the years the level of aggression on the Internet has not decreased. By this indicator, Russia is among the five countries where cyberbullying - bullying, mass attacks and persecution on the Internet - is most widespread.

The main difference between cyberbullying and offline bullying is the mask of anonymity behind which the offender hides, making him difficult to identify and neutralize. Children very rarely tell their parents, or even friends, that they are being bullied. Staying silent and going through this alone can cause a huge number of mental problems and difficulties in communicating with classmates. People who have been victims of cyberbullying remember it all their lives.

Parents, teachers or school psychologists should explain to the child that if he encounters cyberbullying on the Internet, he should not immediately respond to the offender in the heat of emotion, insult him in return or, conversely, beg for mercy or pay a ransom. First he needs to calm down, and only then make a decision, preferably after consulting with adults. They will help him assess the situation correctly and not take the offender's actions too much to heart.


For cybercriminals, online trolls or haters, it doesn't matter who is on the other side of the screen. In our experience at Group-IB, we have repeatedly come across cases where children's lives hung in the balance (for example, the story of suicide groups like 'Blue Whale'). Cyberbullying, trolling, covert filming via webcams and subsequent blackmail can ruin a child's life or severely undermine their mental health.


Network hooligans have a fairly large arsenal of techniques for ruining the life of their victim:

Hate (hating) - negative comments and messages, unfounded criticism of a particular person, often without any justification of the position.

Cyberstalking (from "to stalk" - to harass, to hunt) - using gadgets to harass the victim, with constant threats against the victim and their family members.

Cybertrolling - the publication of aggressive information on websites, on social media pages, and even on memorial pages dedicated to the dead.

Griefing - harassing other players in multiplayer online games. Griefers' goal is not to win the game but to deprive others of the pleasure of playing: they actively use swearing, block certain areas of the game and openly cheat.

Sexting - sending or publishing photos and videos of naked and semi-naked people. It is used for blackmail, extortion or revenge on an ex-partner after a painful break-up.

The sooner a child masters the basic rules of staying online, the less likely the aforementioned incidents are.

How not to become a victim on the Internet:

  • Make it a rule to publish as little personal information (phone numbers, addresses) and as few photos as possible. Do not boast of big, expensive purchases. Never announce on social networks that the whole family is leaving, for example, on vacation. Remember: everything that gets onto the Internet stays there forever and is available to everyone!
  • Don't trust virtual friends. The face on the avatar, the name and the age of your virtual friend may all be fictional, and a criminal may be hiding behind a friendly interlocutor. Don't meet online acquaintances in person without telling your parents.
  • Watch your behavior online - perhaps your own words could badly hurt or offend someone. Before you write any comment, message or post on Instagram, think about what will follow.
  • When you come across a 'troll' on the Internet, stop the dialogue, do not get into a quarrel or sort things out.
  • Block the senders of messages that are unpleasant to you. Do not hesitate to report (complain about) offensive or provocative posts, comments, etc. - this function exists in almost all well-known social networks.
  • Do not forget about digital hygiene: set up privacy in your social network profile settings. Use complex passwords that are different for each account. Cover the camera on your computer and do not send your photos and videos to 'virtual friends'.

Kasperskaya: 7 million teenagers are subjected to destructive effects on social networks

President of InfoWatch Group Natalya Kasperskaya spoke about groups that have a destructive effect on children and adolescents on social networks, InfoWatch reported on April 1, 2019. In particular, she cited statistics from the Kribrum social media monitoring and analysis system: seven million teenagers are exposed to destructive influence on the Internet, and the number of those drawn into topics such as drugs, murder, bullying and suicide grows by two million users a year. The expert stressed that the system counts as involved those who liked, reposted or commented on dangerous content.

Natalya Kasperskaya noted that the so-called "funnel of involvement" is arranged so that its upper level contains no prohibited content - as a rule, only attractive photos, videos or appeals of a general nature. The creators of the groups then, based on what interested the teenager (drugs, violence, etc.), draw him into further actions and closed chats, after which the activity moves offline. The head of InfoWatch noted that blocking such groups is extremely difficult, and the social networks in which they are registered bear no responsibility for their activities.

Deputy Head of the Presidential Administration of the Russian Federation Sergey Kiriyenko drew attention to the relevance of the problem of destructive influence on young people online and to the need to coordinate the efforts of the state, business and society to solve it.

2018

Putin instructed to intensify the fight against destructive movements in social networks

In March 2018, Russian President Vladimir Putin instructed to intensify the actions of state bodies, business and the public to combat the negative impact of destructive movements on social networks.

Identifying 20,000 potential suicides in China

Artificial intelligence helped Chinese scientists identify 20,000 potential suicides and provide them with psychological assistance, TASS reported in January 2018, citing Xinhua news agency[3]

AI in China helped provide psychological assistance to 20 thousand potential suicides

The notorious "artificial intelligence" refers to a system of automated monitoring of social networks, which itself is able to determine suicidal tendencies and "preventive" order to send them letters with words of support and coordinates of psychologists who are ready to help.

As one of the project's curators, Zhu Tingshao of the Chinese Academy of Sciences, noted, only 20% of those in need of help are ready to seek it on their own, although they are quite willing to share their thoughts online.

The information base for identifying potential suicides was compiled from the results of a 2016 study. After analyzing more than 4,000 social media publications directly or indirectly related to suicide, the scientists identified the most common linguistic constructions, which now serve as the markers that trigger the "anti-suicide" AI.
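As a rough, purely hypothetical illustration of how linguistic markers can trigger such a system, the sketch below checks a post against a list of marker phrases; the phrases, the threshold and all names are assumptions and are not taken from the Chinese project.

```python
import re

# Hypothetical marker phrases; the real system derives its markers from the
# 2016 analysis of suicide-related posts mentioned above.
MARKERS = ["don't want to live", "no way out", "say goodbye to everyone", "end it all"]

def should_alert(post: str, threshold: int = 1) -> bool:
    """Return True if the post contains at least `threshold` marker phrases."""
    text = post.lower()
    hits = sum(1 for phrase in MARKERS if re.search(re.escape(phrase), text))
    return hits >= threshold

if should_alert("I see no way out anymore"):
    print("Send a supportive message with hotline contacts")
```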

According to Zhu Tingshao, the service independently sends messages with words of support and advice, while maintaining the anonymity of the recipients. In particularly difficult cases, the program also informs Internet users about hotlines and care centers.

At the end of November 2017, the social network Facebook launched a similar system: its "artificial intelligence" checks users' publications for "suicidal" signs and, if such signs are found, sends recommendations for getting psychological help either to the user, to his or her friends, or to those who first commented on the "suicidal" post, provided they live in the same area.[4] The service has already been deployed around the world, except in the European Union, where privacy protection legislation makes the use of this technology seriously difficult.

In addition to "artificial intelligence," however, Facebook has a number of partner organizations around the world that are specially engaged in suicide prevention and are ready to help users. When it became known about the launch of the Facebook system, the reaction from the network public was mixed: in the expected way, there were fears that invasive "artificial intelligence" could be used not for good purposes. Facebook executives were quick to defend the technology.

Facebook security chief Alex Stamos, in particular, wrote:

A frightening/unseemly/malicious way to use AI is a risk that will always exist. And that is why now it is necessary to establish adequate standards, ensuring a balance between the use of data and practical benefits, and constantly keep in mind possible distortions.

For his part, Mark Zuckerberg said that this technology is the first step toward an in-depth understanding of linguistic nuances by artificial intelligence, which over time will be able to identify not only potential suicides, but also manifestations of cyberbullying and hate.

Any means to prevent tragedies is good, but, on the other hand, concerns about the misuse of AI look justified, - said Roman Ginyatullin, an information security expert at SEQ (formerly SEC Consult Services). - At the moment, AI, fortunately, is only a tool that people can use at their discretion. In fact, it depends only on AI operators what it will be used for - to save people or, for example, to invade their privacy. It may turn out that further development of AI will require a review of existing privacy laws.

2016: Out of 60 years of life, 20 go to gadgets and digital technologies

A modern person is awake for about 17 hours a day, and roughly 50% of this time is spent interacting with various digital devices. Thus, out of 60 years of life, 20 go to "gadgets," digital technologies and "digesting" terabytes of information.
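The "20 out of 60 years" estimate follows from simple arithmetic; the sketch below merely reproduces the figures stated above (17 waking hours, a 50% digital share, a 60-year span) and is not a calculation taken from the source.

```python
waking_hours = 17        # hours a person is awake per day
digital_share = 0.5      # share of waking time spent with digital devices
lifespan_years = 60

digital_hours_per_day = waking_hours * digital_share           # 8.5 hours
years_on_gadgets = lifespan_years * digital_hours_per_day / 24
print(years_on_gadgets)  # 21.25, i.e. roughly 20 out of 60 years
```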

The penetration of information technology into the real world has gone so deep that today the number of mobile devices is almost equal to the world's population. In 2005, data volumes amounted to about 130 exabytes (10^18 bytes), in 2010 - 1.2 zettabytes (10^21 bytes), and by the end of 2015 they had reached 0.8 yottabytes (10^24 bytes). Almost every second we receive information by e-mail, from microblogs, social networks, push notifications and other channels (NeuroNation data).

This constant increase in data volumes leads to information overload, which in turn affects both the business processes of entire corporations and the performance of each individual specialist. An excess of incoming information, combined with its fairly low reliability and quality, leads to scattered attention and slower mental reactions, which seriously affects a person's performance and even well-being.

2015: Dangers of Social Media: ROCIT Citizen Survey

Social networks "deprive" us of loneliness and boredom - there is always someone to talk to, you can discuss the just released film, boast of a gift, buy almost anything..., but sometimes get into trouble.

According to the All-Russian study "Digital Literacy Index of Citizens of the Russian Federation," conducted by ROCIT in 2015, about 20% of users say they have encountered data security problems on social networks. Most often these are password theft (26%) and account hacking (69.2%). At the same time, slightly more than 23% of respondents noted that after a hack the fraudster posed as them and tried to swindle money out of their friends. There are other problems as well, for example, theft of the password to the email account to which the social network account is linked (15.6%) and theft of funds from a bank card also linked to the account (3.5%).

Despite the fact that the sale of goods and services on social networks is not officially authorized, more than half of the respondents said they shop there regularly or periodically, and only 18% of users consider it unsafe.

Under the legislation of the Russian Federation, any purchase, whether online or offline, must be accompanied by a cash receipt or a public offer. However, more than 40% of respondents noted that they were not given a receipt when buying goods on social networks.

According to the survey, about 4% of users have personally encountered fraud when making purchases on social networks, and just under 37% said that their acquaintances have had such problems.

2013: Social media deprives Russia's economy of $10 billion a year

Due to citizens' excessive activity on social networks, the Russian economy loses between 281.7 billion and 311.5 billion rubles a year. This figure was given by the FBK Institute for Strategic Analysis, writes Rossiyskaya Gazeta (June 2013). The calculation was built as follows: by the end of 2012, the number of Russian social network users had reached 51.8 million people. According to a ComScore study, Russians spent an average of 25.6 minutes a day (12.8 hours a month) on social media in 2012, including at work.

Based on this, the analysts calculated that in 2012 every employee active on social networks lost 3,187.2 minutes, or 53.1 hours, of working time. The analysts set lower and upper limits of possible losses, based on the structure of social network use by employees of different industries and on average values for the economy as a whole. At the lower bound the economic damage amounted to 281.7 billion rubles, at the upper bound - 311.5 billion rubles.
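The per-employee figure can be reproduced under one assumption that the source does not spell out: that half of the 25.6 minutes of daily social media use falls within working hours, spread over the 249 working days of Russia's 2012 production calendar. The sketch below only checks that arithmetic.

```python
daily_social_minutes = 25.6   # average daily social media use in 2012 (ComScore)
working_days_2012 = 249       # working days in Russia's 2012 production calendar
share_during_work = 0.5       # assumption: half of daily use falls on work time

lost_minutes = daily_social_minutes * working_days_2012 * share_during_work
print(round(lost_minutes, 1))       # 3187.2 minutes
print(round(lost_minutes / 60, 1))  # 53.1 hours of working time per employee
```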

If we proceed from the structure of social network use by employees of different industries, the greatest economic damage, according to the calculations of the FBK Institute for Strategic Analysis, was caused by employees of the financial sector and of real estate operations, rental and services (67.9 billion rubles), followed by education workers (39.8 billion rubles) and by public administration, military security and social insurance (36.6 billion rubles). The smallest economic losses came from social network-active Russians employed in agriculture and forestry (0.9 billion rubles).

2012: Banning social media in the workplace. Infographics

Source: Kelly, July 2012

Notes