Machine Learning
A highly specialized field of knowledge and one of the main sources of technologies and methods applied in big data and the Internet of Things. It studies and develops algorithms for the automated extraction of knowledge from raw data, for training software systems on that data, for generating predictive and/or prescriptive recommendations, for pattern recognition, and more.
Main article: Training artificial intelligence
Machine learning (ML) is a class of artificial intelligence methods whose characteristic feature is not solving a problem directly but learning from the solutions of many similar problems. Such methods are built using mathematical statistics, numerical methods, optimization, probability theory, graph theory, and various techniques for working with data in digital form.
Components of machine learning
Data. For example, to predict the weather we need weather records for the past several years (the more, the better). The better the data, the better the program will perform: no matter how good the algorithm, poor-quality data will produce correspondingly poor results.
Features. A set of properties, characteristics, or attributes that describe the model being created. For the weather, these are temperature, wind speed, and season. Correctly chosen features are the key to successful training.
Algorithm. Any problem can be solved in different ways, and different algorithms suit different purposes (see the sketch below).
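To make the three components concrete, here is a minimal sketch in Python assuming a toy weather-style dataset; the feature values and the choice of linear regression are illustrative, not prescriptive.

```python
# A minimal sketch of the three components on a toy weather task.
# The data values and feature names here are hypothetical examples.
import numpy as np
from sklearn.linear_model import LinearRegression

# Data: past observations (the more, and the cleaner, the better).
# Features: temperature, wind speed, month.
X = np.array([
    [18.0, 3.2, 5],
    [22.5, 1.1, 6],
    [25.0, 2.4, 7],
    [15.5, 4.8, 9],
])
y = np.array([19.0, 23.0, 26.5, 14.0])  # next-day temperature

# Algorithm: many could be chosen; linear regression is one simple option.
model = LinearRegression().fit(X, y)
print(model.predict([[20.0, 2.0, 6]]))  # prediction for a new day
```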
3 Basic Machine Learning Methods
As of 2023, three main approaches to machine learning are in use:
Method 1: Classical learning. Classical learning is the most common approach to AI: relatively simple algorithms that look for patterns in the data. It comes in two kinds:
- Supervised ("with a teacher"). We train the machine on labelled examples. Say we want to teach it to distinguish apples from pears: we load the data into the program and tell it which pictures show apples and which show pears. The machine must find the common features and build the connections.
- Unsupervised ("without a teacher"). This method is used when labelled data cannot be provided. The program must find the common characteristics itself and classify the data on its own. This approach is often used in targeted advertising, where the actions or preferences of a user cannot be classified in advance. Both kinds are sketched below.
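A hedged illustration of both kinds using scikit-learn; the fruit measurements below are invented for the example.

```python
# Supervised vs. unsupervised learning on the same (made-up) fruit data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Supervised ("with a teacher"): each example comes with a label.
X = np.array([[150, 7.1], [170, 7.6], [140, 6.9],   # apples: weight, diameter
              [210, 6.0], [230, 6.3], [200, 5.8]])  # pears
y = ["apple", "apple", "apple", "pear", "pear", "pear"]
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[160, 7.0]]))        # -> likely "apple"

# Unsupervised ("without a teacher"): no labels, the algorithm groups
# the same points by similarity on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)                          # e.g. [0 0 0 1 1 1]
```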
Method 2: Reinforcement learning. This is a more complex kind of learning: the AI must not just analyze data but act independently in an environment. Training resembles a game: the machine earns points for correct decisions and loses points for mistakes.
Consider the game "Snake." There is an object on the field that the snake must reach. It does not know which path is the most efficient; it only knows the distance to the object. By trial and error, the snake finds the optimal way to move and analyzes the situations that lead to losing. This method is used to train robotic vacuum cleaners and self-driving cars.
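As an illustration of the trial-and-error idea (not an implementation of the actual Snake game), here is a minimal tabular Q-learning sketch on a made-up one-dimensional corridor where the agent is rewarded for reaching a target.

```python
# Tabular Q-learning: the agent only sees its position, gets +1 for
# reaching the target and a small penalty otherwise, and learns a policy
# by trial and error. The corridor environment is a hypothetical example.
import numpy as np

n_states, goal = 6, 5          # positions 0..5, target at the right end
q = np.zeros((n_states, 2))    # Q-values for actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != goal:
        # explore occasionally, otherwise act greedily
        a = np.random.randint(2) if np.random.rand() < epsilon else int(q[s].argmax())
        s_next = max(0, min(goal, s + (1 if a == 1 else -1)))
        r = 1.0 if s_next == goal else -0.01           # reward / penalty
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
        s = s_next

print(q.argmax(axis=1))  # learned policy: expect "move right" everywhere
```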
Method 3: Neural networks and deep learning. This learning is called "deep" because artificial neural networks are structured as several layers that interact with each other, creating a complex data-analysis process. There are three types of layers (sketched below):
- input;
- output;
- hidden.
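A minimal PyTorch sketch of the three layer types; the layer sizes are arbitrary examples.

```python
# Input layer, one hidden layer, and an output layer in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 16),   # input layer: 4 features in, 16 hidden units out
    nn.ReLU(),          # non-linearity between layers
    nn.Linear(16, 3),   # output layer: scores for 3 classes
)

x = torch.randn(8, 4)   # a batch of 8 examples with 4 features each
print(model(x).shape)   # torch.Size([8, 3])
```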
2024: Yandex will allow partners to train neural networks together and store data separately
Yandex, together with the V.P. Ivannikov Institute for System Programming of the Russian Academy of Sciences and Sechenov University, applied federated machine learning to medical problems. It is also called collaborative learning because it is intended for projects with multiple participants, each holding its own dataset. The federated approach lets participants train models jointly without handing their data over to anyone, which opens up new opportunities for partnerships in artificial intelligence. Read more here.
2023: Machine Learning helps real estate agencies
Unlike in the West, residents of Russia do not have access to data on property transactions, but real estate agencies are not standing still and are building high-quality databases using machine learning. This was announced on December 18, 2023 by Homeapp. Read more here.
2022
What industrial machine learning is and why billions are invested in it
Investments in companies specializing in industrial machine learning peaked in 2021 at $4.7 billion. In 2022, this figure fell to $3.4 billion, which is explained by macroeconomic challenges and the desire of companies to reduce costs in conditions of high inflation. Such figures are provided in the McKinsey report, which was published on July 20, 2023.
It is noted that industrial machine learning was initially used by a small number of leading companies, but then the technology began to develop actively. We are talking about the concept of MLOps - a set of practices aimed at reliable and effective deployment and maintenance of machine learning models in production. With MLOps, enterprises can optimize workflows and implement effective automation, diagnostics, monitoring, and management.
In general, MLOps tools help companies move from pilot projects to viable business products, accelerate the scaling of analytical solutions, identify and solve problems in production, and improve team performance. At the same time, a rapidly developing ecosystem of software and hardware allows you to reduce risks in the development, deployment and maintenance of machine learning solutions.
Against the background of growing interest in industrial machine learning, demand for specialists in the field is increasing. The number of job advertisements in this area almost quadrupled from 2018 to 2022, and over 2021-2022 alone it grew by almost a quarter (23.4%). Companies expanding their machine learning initiatives need professionals with specific technical skills, including in software development.[1]
Scientists of the Faculty of VMK Moscow State University have identified the weak points of applications when using machine learning
On April 1, 2022, the VMK faculty of MSU reported that, within research conducted at the faculty on the topic "Artificial Intelligence in Cybersecurity," representatives of the MSU scientific and educational school "Brain, Cognitive Systems, Artificial Intelligence" found that the main obstacle to using machine learning models in critical applications is the limited robustness of this class of algorithms to external influences. The results of the study were published in the International Journal of Open Information Technologies.
Machine learning is almost synonymous with the term "artificial intelligence," for whose development many countries already have national programs. Adding machine learning capabilities to applications is becoming easier: many machine learning libraries and online services no longer require deep expertise in the area.
The published work discusses attacks on machine learning systems that aim either to force a desired behavior from the system or to prevent its correct operation. The first step toward countering such threats, according to the scientists, is classifying them and understanding their types and the points at which they are applied, because the nature of attacks on machine learning and deep learning systems differs from other cyber threats.
However, even easy-to-use machine learning systems have problems of their own. Among them is the threat of adversarial attacks, which has become one of the important problems of applied machine learning: specially crafted actions on elements of the system pipeline that trigger the behavior the attacker wants. Such behavior may be, for example, incorrect operation of a classifier, but there are also attacks aimed at extracting model parameters.
"This information will help an attacker create examples that deceive the system. There are attacks that make it possible to check, for example, whether certain data belongs to the training set, and thereby possibly disclose confidential information," said Evgeny Ilyushin, an employee of the Department of Information Security.
Adversarial attacks differ from other types of security threats. They rely on the complexity of deep neural networks and their probabilistic nature to find ways to exploit them and change their behavior.
"Adversarial attack is often used broadly to refer to different types of malicious actions against machine learning models. There is no way to detect them using the classic tools used to protect software from cyber threats, "- |
Adversarial attacks manipulate the behavior of machine learning models. According to one view, they exist because of the nonlinear nature of the systems, which leaves some regions of the data space uncovered by the algorithm. According to another, the cause is, on the contrary, overfitting, when even small deviations from the training data are processed incorrectly.
For critical applications of machine learning, the issue of certifying systems, models, and datasets is acute, similar to what is done for traditional software systems. Adversarial attacks also undermine confidence in machine learning algorithms, especially deep neural networks. These tools offer great opportunities in many areas and have already become part of our lives, which is why it is so important to study their vulnerabilities and protect them from intruders.
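As one concrete illustration of such attacks (the paper itself surveys a broader taxonomy), here is a sketch of the well-known Fast Gradient Sign Method (FGSM); the stand-in model and data are placeholders, not the systems studied by the MSU team.

```python
# FGSM: perturb an input in the direction of the loss gradient's sign so
# that a small, almost invisible change can flip the model's prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 2))          # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 20, requires_grad=True)       # a legitimate input
y = torch.tensor([1])                            # its true label

loss = loss_fn(model(x), y)
loss.backward()                                  # gradient w.r.t. the input

epsilon = 0.1                                    # attack strength
x_adv = x + epsilon * x.grad.sign()              # small, targeted perturbation
print(model(x).argmax(1), model(x_adv).argmax(1))  # prediction may flip
```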
Machine Learning Model Identifies Compromised Power System Components
Machine learning can help energy providers better identify faulty or compromised components in power grids. This became known on February 28, 2022. A research project led by the Massachusetts Institute of Technology describes a technique that allows you to model complex interconnected systems consisting of many variables whose values change over time. By mapping connections in these so-called multiple time series, a "Bayesian network" can learn to detect anomalies in data.
The state of the grid is described by many data points, including voltage magnitude, frequency, and voltage angle across the grid, as well as current. Anomaly detection comes down to spotting abnormal data points, which can be caused by events such as a cable break or insulation damage.
"In the case of the power grid, people have tried to collect data using statistics and then define detection rules with domain knowledge - for example, if the voltage rises by a certain percentage, the grid operator must be warned. Such systems, even enhanced by statistical data analysis, require a lot of work and expertise. We can automate this process and also extract patterns from the data using advanced machine learning methods," the experts explained.
The method uses unsupervised learning to identify abnormal results instead of hand-written rules. When the researchers tested their model on two private datasets recording measurements from two grid interconnections in the USA, it outperformed other machine learning methods based on neural networks.
The general method of detecting an abnormal change in data can even be used to raise an alarm in the event of a power system breach.
"It can be used regardless of whether an electrical failure is caused by equipment faults or by cyberattacks: since our method essentially models the power grid in its normal state, it detects anomalies regardless of their cause," the experts noted.
The model cannot pinpoint the exact cause of the anomalies, but it can determine how much of the power system is affected, the researchers said. It can be used to monitor the state of the power grid and can report a failure in the network within one minute.[2]
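The MIT work relies on a Bayesian network over multiple time series; as a much simpler, hedged illustration of unsupervised anomaly detection on grid-style measurements, here is an IsolationForest sketch on synthetic data (all numbers are invented).

```python
# Unsupervised anomaly detection on synthetic grid-style measurements.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# synthetic "normal" measurements: voltage magnitude, frequency, angle
normal = rng.normal([230.0, 50.0, 0.0], [2.0, 0.02, 0.5], size=(1000, 3))
# a few abnormal points, e.g. a voltage sag with a frequency swing
anomalies = rng.normal([200.0, 49.5, 2.0], [2.0, 0.02, 0.5], size=(5, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(anomalies))   # -1 marks points flagged as anomalous
```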
2021
IBM has become the leader in the number of patents in the field of machine learning
In early March 2022, BanklessTimes analysts published data showing that IBM leads in the number of machine learning (ML) patents. IBM itself says its inventors are developing technologies that spur businesses to expand their use of artificial intelligence.
IBM registered 5.4 thousand different machine learning patents from 2017 to 2021, beating Microsoft and Google in the fight for first place. Moreover, Microsoft took the second position with 2.1 thousand patents, and Google the third with 1.3 thousand. Samsung registered 937 patents (this is the fourth indicator in the world), and Capital One ranked fifth with 921 patents.
Between early 2017 and 2021, the popularity of machine learning tools grew sharply, driven both by increased confidence in their accuracy and by lower costs. As of March 2022, many companies use ML to make accurate forecasts and quickly analyze large amounts of data. Against this background, IBM is increasing its investment in artificial intelligence. The company focuses on driving change through natural language processing (NLP), automation, and building trust in artificial intelligence (AI). In addition, IBM continues to bring new capabilities from research and development into its products.
IBM says the next step in the development of AI will be what the company calls fluid intelligence, since machine learning technology as of March 2022 remains narrow: adapting trained models to new needs requires considerable time and data preparation. The company wants AI that combines a wide range of information, investigates causal relationships, and learns from new experience on its own.
"This step extends IBM's hybrid cloud and AI strategy, helping enterprises modernize and transform complex mission-critical applications across multiple clouds and platforms. The company combines the capabilities of AI and hybrid cloud to provide the business with complete analytics," said Kareem Yusuf, general manager of IBM AI Applications.
IBM's R&D division uses a variety of approaches to build AI systems aimed at the 2025-2035 horizon. The company is also developing architectures and devices with enormous computing power, so that the hardware is reliable and fast enough to process the huge amounts of data it produces every day.[3]
Launch of a real-time enterprise machine learning platform
In mid-August 2021, Abacus launched a platform that, according to the company itself, is the world's first enterprise-scale real-time machine learning and deep learning solution. Read more here.
2019
Increase in the number of AI/ML projects in Russia almost 3 times in 2 years and 9 months
On October 14, 2019, Jet Infosystems reported that it had analyzed more than 360 AI/ML projects implemented in Russia from early 2017 to September 2019. The study showed more than threefold growth.
Analysts at Jet Infosystems note that 2018 saw explosive growth in the popularity of machine learning (ML) projects: for every project in 2017 there were 2.7 in 2018.
In 2019, the number of projects continued to grow relative to 2018 (by about 10%), but their structure changed dramatically. Whereas in 2017 these were isolated projects by IT companies, by 2019 artificial intelligence had become a fully working technology used in many industries. The share of test (pilot) projects also fell significantly compared with 2018.
As for the industry application, the leadership still belongs to the banking industry (20%) and retail (20%), where there is enough data, high competition and there is a budget for implementation. In 2019, AI technologies also came to industry - every 14th project belongs to this area.
The five leaders include aggregator companies (for example, Yandex) that offer mail, translation, transport services, etc., as well as advertising and travel companies.
According to the study, it is not only big business that implements AI: the number of projects in small enterprises has been growing for the third year in a row. Internet services, online stores, small-scale industrial production, small regional transport companies, regional divisions of federal state institutions, etc. are actively introducing digital technologies.
Software 2.0: How a new approach to software development will make computers smarter
The Software 2.0 paradigm is an approach to software development that could produce a qualitative breakthrough in computing. The goal of Software 2.0 is to create a model that can generate code itself: it learns which code, according to the specified rules, should be produced to obtain particular results. Read more here.
IBM launches portal with free data sets for machine learning in companies
On July 16, 2019, IBM launched a portal with free datasets for machine learning in companies. IBM calls the Data Asset eXchange (DAX) a unique project for corporate clients, despite the large number of open datasets already available on the Internet (for example, on GitHub). Read more here.
10 best programming languages for machine learning - GitHub
In January 2019, GitHub, the service for hosting and collaborative development of IT projects, published a rating of the most popular programming languages used for machine learning (ML). The list is based on the number of repositories whose authors indicate that their applications use ML algorithms. Read more here.
2018
Salaries of specialists in Russia - 130-300 thousand rubles
According to HeadHunter (2018 data), machine learning specialists receive 130-300 thousand rubles, and large companies are in a fierce struggle for them.
Machine Learning Challenges - IBM
On February 27, 2018, IBM Watson CTO Rob High said that the main challenge in machine learning at present is reducing the amount of data required to train neural networks. High believes there is every reason to consider this problem entirely solvable. His opinion is shared by colleagues: John Giannandrea, head of AI development at Google, noted that his company is also working on the problem.
As a rule, machine learning models work with huge amounts of data to ensure the accuracy of the neural network, but in many industries large databases simply do not exist.
High, however, believes the problem is solvable, because the human brain has learned to cope with it: when a person faces a new task, they draw on accumulated experience of similar situations. It is this kind of contextual reasoning that High suggests using. Transfer learning can also help - the ability to take an already trained AI model and use what it has learned to train another neural network for which far less data is available.
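A common transfer-learning recipe in the sense High describes, sketched with torchvision under the assumption of an image task; the five target classes are an arbitrary example, and a recent torchvision version is assumed.

```python
# Transfer learning: reuse a network trained on a large dataset and
# retrain only a small part of it on the new, much smaller dataset.
import torch.nn as nn
from torchvision import models

base = models.resnet18(weights="IMAGENET1K_V1")  # already-trained model
for p in base.parameters():
    p.requires_grad = False                      # freeze learned features

base.fc = nn.Linear(base.fc.in_features, 5)      # new head for the new task
# ...then train only base.fc on the small domain-specific dataset
```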
However, problems with machine learning are not limited to this, especially when it comes to natural speech.
"We are trying to figure out how to teach AI to interact with people without causing distrust, and how to influence their thinking," High explained. "When communicating, people perceive not only the information itself but also gestures, facial expressions, intonation, and voice modulation."
High notes that AI does not have to reflect these aspects in an anthropomorphic form; however, some response signals, such as visual ones, must be present. Above all, AI must first understand the essence of a question and learn to navigate context, especially how the question relates to previous ones.
This points to another problem: many machine learning models currently in use are inherently biased because the data they were trained on is limited. High identifies two aspects of such bias.
"Firstly, the data may indeed be collected incorrectly, and those who select it for machine learning systems should take care that the interests of all cultural and demographic groups are represented," High commented. "On the other hand, the data is sometimes deliberately selected to reflect only a certain aspect of the problem or a certain sample, because that is how the task is set."
As an example, High cited a joint project between IBM and Sloan Kettering Cancer Center. They prepared an AI algorithm based on the work of the best cancer surgeons.
"However, doctors at the Sloan Kettering Cancer Center take a particular approach to treating cancer. It is their school, their brand, and this philosophy must be reflected in the AI created for them and preserved in all subsequent generations that spread beyond this cancer center. Most of the effort in creating such systems goes into ensuring the right selectivity of the data: the sample of people and their data should reflect the larger cultural group they belong to."
High also noted that IBM representatives have finally begun to discuss these problems with customers regularly. In High's view, this is a step in the right direction, especially considering that many of his colleagues prefer to ignore the issue.
Concerns about AI bias are shared by Giannandrea. Last fall, he said he was afraid not of an uprising of intelligent robots, but of artificial intelligence bias. This problem becomes more significant, the more technology penetrates into fields such as medicine or law, and the more people without technical education begin to use it.[4]
2017
3% of companies use machine learning - ServiceNow
In October 2017, ServiceNow, a maker of cloud-based business process automation solutions, published the results of a study on the adoption of machine learning in companies. Together with the research center Oxford Economics, it surveyed 500 chief information officers in 11 countries.
It turned out that by October 2017, 89% of companies whose employees answered analysts' questions use machine learning mechanisms to varying degrees.
Thus, 40% of organizations and enterprises explore the possibilities and plan the stages of introducing such technologies. 26% of companies conduct pilot projects, 20% use machine learning for certain areas of business, and 3% use it for all their activities.
According to 53% of chief information officers, machine learning is a key priority area, and companies are looking for the relevant specialists to develop it.
As of October 2017, the penetration of machine learning is highest in North America, where 72% of companies are at some stage of studying, piloting, or using the technology. In Asia the figure is 61%, and in Europe 58%.
About 90% of Chief information officers say automation improves accuracy and speed of decision-making. According to more than half (52%) of survey participants, machine learning helps to automate not only routine tasks (for example, displaying warnings about cyber threats), but also more complex workloads, such as ways to respond to hacker attacks.
The study also charts the degree of automation of various business areas in 2017 with a forecast for 2020: for example, about 24% of information security operations were fully or largely automated in 2017, and by 2020 the figure may grow to 70%.
The most promising technology. What is the reason for the general insanity in machine learning?
Machine learning, according to analysts, is the most promising technological trend of our time. How did this technology arise and why has it become so in demand? What are the principles of machine learning? What prospects does it open up for business? Answers to these questions are provided by material that journalist Leonid Chernyak prepared for TAdviser.
A sign of the coming era of cognitive computing (see more in a separate article) is the increased interest in machine learning (ML) and numerous attempts to introduce ML in various, sometimes unexpected areas of human activity.
This is evidenced by Gartner's Hype Cycle dated August 2016, on which ML sits at the peak of inflated expectations. The analyst firm's report emphasizes that the current surge of interest in artificial intelligence (AI) in general, and ML in particular, should be distinguished from the unjustified expectations of past decades, which led to the temporary oblivion of AI.
Everything happening in 2016-2017 is more prosaic and pragmatic, devoid of romantic promises about anthropomorphic technologies that mimic the human brain. There is no talk of thinking machines, much less of threats from robots. The Gartner report cited a statement by IBM Vice President of Research John Kelly that is "cynical" and clearly unacceptable to supporters of strong AI:
"The success of cognitive computing will not be measured by the Turing test or any other ability of a computer to mimic the human brain. It will be measured by practical indicators such as return on investment, new market opportunities, the number of people cured, and lives saved."
No matter how great the interest in ML, it is incorrect to identify all of Cognitive Computing (CC) exclusively with ML. CC proper is a constituent of AI, a holistic ecosystem of which ML is one part. In addition, CC includes automated decision-making, audio and video recognition, machine vision, natural-language text processing, and much more.
However, a strict separation between the individual directions of CC is difficult to draw. Some of them overlap, but ML certainly includes the mathematical algorithms that support the process of cognitive learning.
ML is the training of systems with elements of weak AI. Strong AI refers to artificial general intelligence, which could theoretically be embodied by a hypothetical machine exhibiting mental abilities comparable to a human's.
Strong AI is endowed with features such as:
- ability to feel (sentience),
- ability to make judgments (sapience),
- self-awareness and even
- consciousness.
Weak AI, by contrast, refers to non-sentient computer intelligence: AI focused on solving applied problems.
Being part of weak AI, ML, however, has common features with human learning, discovered by psychologists at the beginning of the 20th century. Several theoretically possible approaches to learning as a process of knowledge transfer were then identified. Moreover, one of the approaches, called cognitive learning, directly corresponds to ML.
The learner, in our case the AI, is presented with certain images in a form accessible to it; to take in the transmitted knowledge, the learner only needs the appropriate abilities and stimuli. The foundations of the theory of cognitive learning were laid by the Swiss psychologist Jean Piaget (1896-1980), who in turn drew on work in Gestalt psychology by the German, and later American, psychologist Wolfgang Köhler (1887-1967).
The theory of cognitive learning is built on the assumption that a person has the ability to learn, has the necessary stimuli and can structure and store accumulated information. The same applies to ML. It can be considered a version of cognitive learning, but adapted for a computer.
The history of ML, like much else in artificial intelligence, began with seemingly promising work in the 1950s and 1960s. This was followed by a long lull known as the "AI winter." In recent years, there has been explosive interest mainly in one of the directions - deep learning.
ML pioneers were Arthur Samuel, Joseph Weizenbaum, and Frank Rosenblatt. The first became widely known for creating, in 1952, a self-learning checkers-playing program which, as the name suggests, knew how to play checkers. Perhaps more significant for posterity was his participation, together with Donald Knuth, in the TeX project, which produced a computer typesetting system that has remained unparalleled for almost 40 years for preparing mathematical texts. The second wrote, in 1966, the virtual interlocutor ELIZA, capable of imitating (or rather parodying) a dialogue with a psychotherapist; the program obviously owes its name to the heroine of Bernard Shaw's play. Then came Rosenblatt: in the late 1950s, at Cornell University, he built the Mark I Perceptron, which can arguably be recognized as the first neurocomputer.
In the 1960s and 1970s, the basic scientific principles of ML were developed. In its modern form, ML combines previously independent directions:
- neural networks,
- case-based learning,
- genetic algorithms,
- rule induction and
- analytical learning.
It has been shown that the practical transfer of knowledge to a trained machine (a neural network) can rest on the theory of computational learning from precedents (examples), which has been developing since the 1960s.
Informally, ML can be described as follows. Descriptions of individual precedents are taken; together they are called the training sample. From this collection of individual data fragments one can identify general properties (dependencies, regularities, relationships) inherent not only in the particular sample used for training but in all precedents, including those not yet observed. A learning algorithm that fits the model to the sample finds the optimal set of model parameters, after which the trained model can be used to solve particular applied problems.
In general, ML can be expressed by the formula:
Learning = Representation + Evaluation + Optimization
where:
- Representation - the way a classified element is expressed in a formal language that the machine can interpret;
- Evaluation - a function that separates good classifiers from bad ones;
- Optimization - the search for the best classifier.
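A tiny numeric illustration of the formula, assuming a linear model as the representation, mean squared error as the evaluation, and plain gradient descent as the optimization; the data points are made up.

```python
# Representation: y_hat = w*x + b; Evaluation: mean squared error;
# Optimization: gradient descent. Data are points on y = 2x + 1.
import numpy as np

X = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * X + 1

w, b, lr = 0.0, 0.0, 0.05          # parameters of the representation

for _ in range(500):               # optimization loop
    y_hat = w * X + b
    error = y_hat - y              # used by the evaluation (MSE)
    w -= lr * 2 * np.mean(error * X)
    b -= lr * 2 * np.mean(error)

print(round(w, 2), round(b, 2))    # approaches 2.0 and 1.0
```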
The main goal of ML is to instill, for example in a neural network, the ability to recognize things that were not part of the training set but have the same properties.
Training includes pattern recognition, regression analysis, and prediction. Most often, the approach taken is to model the dependence being recovered as a parametric family of algorithms; its essence is the numerical optimization of the model parameters so as to minimize the number of errors on the given training sample of precedents.
Training thus consists of fitting the model to the sample. This approach has an inherent weakness: as model complexity grows, the optimization starts to capture not only the features of the dependence being recovered but also the measurement errors of the training sample and the error of the model itself. As a result, the quality of the algorithm deteriorates.
A way out was proposed by V.N. Vapnik and A.Ya. Chervonenkis in their theory of dependency recovery, which gained worldwide recognition in the eighties and became one of the most important branches of computational learning theory.
The transition from ML theory to practice, which happened in the 21st century, was aided by work in the field of deep neural networks (Deep Neural Network, DNN). The term deep learning itself is believed to have been proposed in 1986 by Rina Dechter, although the true history of its appearance is probably more complicated.
By the mid-2000s, a critical mass of knowledge about DNNs had accumulated and, as always happens in such cases, someone broke away from the peloton and took the leader's jersey. So it has been, and apparently always will be, in science. In this case the role of leader fell to Geoffrey Hinton, a British scientist who continued his career in Canada. From 2006, alone and with colleagues, he began publishing numerous articles on DNNs, including in the popular science journal Nature, earning himself the lifetime renown of a classic. A strong, close-knit community formed around him, operating for several years in what would now be called "stealth mode"; its members call themselves the "Deep Learning Conspiracy" or even the "Canadian Mafia."
A leading trio emerged: Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, also referred to as LBH (LeCun & Bengio & Hinton). LBH's emergence from the underground was well prepared and supported by Google, Facebook, and Microsoft. Andrew Ng, who worked at MIT and Berkeley and now heads artificial intelligence research at Baidu's laboratory, has actively collaborated with LBH; it was he who tied deep learning to GPUs.
The current success of ML and universal acceptance was made possible by three circumstances:
1. Exponentially increasing amount of data. It creates a need for data analysis and is a prerequisite for the implementation of ML systems. At the same time, this amount of data opens up an opportunity for training, since it generates a large number of samples (precedents), and this is a sufficient condition.
2. The necessary processor base has been formed. Solving ML problems breaks down into two phases. In the first, an artificial neural network is trained; during this stage a large number of samples must be processed in parallel, and for this purpose there is as yet no alternative to GPUs - in the vast majority of cases Nvidia GPUs are used. Conventional high-performance CPUs can be used to run a trained neural network. This division of functions between processor types may soon change significantly. First, Intel promises to launch its specialized Nervana processor as early as 2017, expected to be roughly as productive as a GPU. Second, there are new types of programmable FPGA arrays and large specialized ASIC circuits, as well as Google's specialized Tensor Processing Unit (TPU).
3. Software libraries for ML have been created. As of 2017 there are more than 50 of them; here are just some of the best known: TensorFlow, Theano, Keras, Lasagne, Caffe, DSSTNE, Wolfram Mathematica - and the list could go on. Almost all of them support the OpenMP application interface, the Python, Java, and C++ languages, and the CUDA platform.
The limits of ML's future scope are, without any exaggeration, not yet in sight. In the context of the Fourth Industrial Revolution, ML's most significant role is to expand the capabilities of Business Intelligence (BI).
In addition to BI's more traditional quantitative question - "What is happening in the business?" - ML will make it possible to answer questions such as "What are we doing and why?", "How can we do it better?", "What should we do next?" and similar qualitative, substantive questions.
About machine learning with simple examples
What is machine learning?
This is a programming method in which the machine itself forms an algorithm based on the model specified by the person and the data loaded into it.
This approach differs from classical programming: when a program is "taught," it is shown many examples and learns to find patterns in them. People learn in a similar way - instead of a verbal description of a dog, a child is simply shown a dog and told what it is. If such a program is shown, say, a million photographs of skin tumors, it will learn to diagnose cancer[5] from a picture better than a living specialist.[6]
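A hedged contrast sketch of the two approaches: a hand-written rule versus a model that infers its own rule from labelled examples (the spam-style features and thresholds are invented for illustration).

```python
# Classical programming vs. learning from examples.
from sklearn.tree import DecisionTreeClassifier

def is_spam_classic(num_links, has_attachment):
    # classical programming: a person writes the rule explicitly
    return num_links > 3 and has_attachment

# machine learning: the person supplies examples, the rule is inferred
X = [[0, 0], [1, 0], [5, 1], [7, 1], [2, 0], [6, 1]]
y = [0, 0, 1, 1, 0, 1]                   # 1 = spam, 0 = not spam
learned = DecisionTreeClassifier().fit(X, y)

print(is_spam_classic(6, True), learned.predict([[6, 1]]))
```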
Why is model training so complex?
Imagine teaching a machine with the help of a group of people... The golden rule here is that they should be equally interested and equally familiar with the process - say, I can't take five programmers and four recent graduates... You need to select people either completely at random or with similar interests. There are two ways to do that. You show them a lot, a lot of pictures: images of mountains interspersed with photos of camels, as well as objects that look almost exactly like mountains, such as ice cream in a waffle cone. And you ask them to say which of these items can be called a mountain. Meanwhile, the machine watches the people and, based on their behavior while selecting the images with mountains, it too begins to pick out mountains. This approach is called heuristic, writes PCWeek author Michael Kriegsman.[7]
We look at people, model their behavior by observation and then try to replicate what they're doing. This is a type of training. Such heuristic modeling is one way of machine learning, but it is not the only way.
But there are many simple techniques with which such a system can be deceived. A good example is the recognition of human faces. Look at the faces of different people: probably everyone knows there are technologies for modelling a face from certain points on it, say the corners of the eyes. I don't want to go into trade secrets, but there are certain areas between which you can construct angles, and these angles usually do not change much over time. But then you are shown photographs of people with their eyes wide open or their mouths contorted. Such people try to confuse these algorithms by distorting their features; that's why you can't smile in a passport photo. But machine learning has moved far ahead: we have tools like Eigenface and other technologies for modelling a turned or distorted face to determine that it is the same person.
Over time, these tools get better. And sometimes, when people try to confuse the learning process, we also learn from their behavior. So the process is self-developing, and there is constant progress. Sooner or later the goal will be achieved: the machine will find only mountains, will not miss a single mountain, and will never again be confused by a cone of ice cream.
How is this different from classic programming?
Initially, this process took place in a playful form or consisted of identifying images. Then researchers asked participants to play games or help with training using simple statements like "This is a mountain," "This is not a mountain," "This is Mount Fuji," "This is Mount Kilimanjaro." In this way they accumulated a vocabulary: a group of people used words to describe images (as in the Amazon Mechanical Turk project, for example).
Using these techniques, they effectively selected a set of words and said: "The word 'mountain' is often associated with this and that, and there is a high statistical correlation between the word 'mountain' and this image. So if people are looking for information about mountains, show them this image; if they're looking for Mount Fuji, show them this image, not that one." This was a technique that combined the human brain with descriptive words. As of 2017 it is no longer the only one: there are now many more sophisticated techniques.
Will I be able to apply machine learning to my business?
Machine learning is of great practical importance for many industries, from the public sector, transport, and medicine to marketing, sales, finance, and insurance. There are a huge number of ways to use it - for example, predictive maintenance, supply chain optimization, fraud detection, personalization, health care, traffic reduction, rational flight scheduling, and many others.
Government agencies use machine learning for data mining to improve their efficiency and save money. Banks use machine learning to identify investment opportunities, high-risk customers or signs of cyber threats. In the field of health, machine learning helps to use wearable device and sensor data to assess a patient's health in real time.
Machine learning algorithms
- Linear and logistic regression
- SVM
- Decision trees
- Random forest
- AdaBoost
- Gradient boosting
- Neural Networks
- K-means
- EM algorithm
- Autoregression
- Self-organizing maps
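A hedged sketch comparing several algorithms from the list above on one synthetic classification task with scikit-learn; the dataset and default hyperparameters are illustrative only.

```python
# Several of the listed algorithms side by side on the same synthetic task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier)
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in [LogisticRegression(max_iter=1000), SVC(),
              RandomForestClassifier(), AdaBoostClassifier(),
              GradientBoostingClassifier()]:
    print(type(model).__name__, model.fit(X_tr, y_tr).score(X_te, y_te))

# K-means, by contrast, ignores the labels and simply groups the points.
print(KMeans(n_clusters=2, n_init=10).fit_predict(X)[:10])
```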
Malicious machine learning
- Main article: Malicious Machine Learning (AML)
Prospects for the development of the mathematical apparatus of AI or whether there is life outside ML/DL
Main article: Prospects for the development of the mathematical apparatus of AI or whether there is life outside ML/DL
Graph neural networks: a fleeting trend or the future behind them
Graph neural networks are actively used in machine learning on graphs to solve local problems (vertex classification, link prediction) and global problems (graph similarity, graph classification). Local methods have many applications in text processing, computer vision, and recommendation systems. Global methods, in turn, are used to approximate problems that cannot be solved efficiently on modern computers (short of the quantum computers of the future) and are applied at the junction of computer and natural sciences to predict new properties and substances (relevant, for example, in drug discovery).
Graph neural networks reached peak popularity in 2018, when they came into use and showed high efficiency in a variety of applications. The best-known example is the PinSage model in Pinterest's recommendation system. Since then, more and more new applications of the technology have appeared in areas where previously existing methods could not effectively model the connections between objects. Read more here.
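To show the underlying idea rather than PinSage itself, here is a hand-rolled sketch of a single GCN-style message-passing step in NumPy; the graph, features, and weights are random placeholders.

```python
# One message-passing step: each vertex updates its representation by
# aggregating its neighbours' features through a shared weight matrix.
import numpy as np

A = np.array([[0, 1, 1, 0],        # adjacency matrix of a small graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 8)           # initial feature vector per vertex
W = np.random.rand(8, 4)           # learnable weights (random here)

A_hat = A + np.eye(4)              # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))

H = np.maximum(0, D_inv @ A_hat @ X @ W)   # one GCN-style layer with ReLU
print(H.shape)                              # (4, 4): new vertex embeddings
```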
- ↑ McKinsey Technology Trends Outlook 2023
- ↑ The machine learning model detects compromised components of the power system
- ↑ IBM leads in machine learning research with 5400 patents
- ↑ IBM Watson CTO Rob High on bias and other challenges in machine learning
- ↑ Deep learning algorithm does as well as dermatologists in identifying skin cancer
- ↑ What can a computer teach?
- ↑ Let's try to explain to business what artificial intelligence is