Neural Networks

Neural networks are one of the branches of artificial intelligence whose purpose is to simulate the analytical mechanisms of the human brain. The typical problems a neural network solves are classification, prediction, and recognition.

Neural networks are able to learn and develop independently, building their experience on the mistakes they make. By analyzing and processing information from a specific source, or from the Internet as a whole, such a self-organizing system can create new products, not only reproducing and structuring the input data but also producing a qualitatively different result that was previously inaccessible to artificial intelligence.
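To make the idea of "learning from mistakes" concrete, here is a minimal sketch (an illustration added for this overview, not tied to any specific system mentioned in the article) of a tiny network that learns a classification task by repeatedly correcting its own errors via gradient descent. The toy data, layer sizes and learning rate are assumptions chosen purely for readability.

```python
# Minimal sketch: a tiny neural network learning a classification task from
# its own mistakes (error-driven gradient descent). Illustrative toy example.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem a linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 8 units.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

lr = 0.5
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # The error ("mistake") the network learns from: gradient of the
    # binary cross-entropy loss with respect to the pre-sigmoid output.
    err = p - y
    # Backward pass (backpropagation).
    dW2 = h.T @ err
    db2 = err.sum(axis=0, keepdims=True)
    dh = (err @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0, keepdims=True)
    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # should approach [0, 1, 1, 0]
```

Modern networks rely on the same error-driven loop, only at a vastly larger scale.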

Neural networks and Bayesian machine learning models

These are two popular paradigms in machine learning. The former brought about a real revolution in the processing of large amounts of data, giving rise to a new direction called deep learning. The latter have traditionally been used for small data. The mathematical apparatus developed in 2010 makes it possible to design scalable Bayesian models, which allows Bayesian inference mechanisms to be applied in modern neural networks. Even early attempts to build hybrid neuro-Bayesian models have led to unexpected and interesting results. For example, by using Bayesian inference in neural networks, it is possible to compress a network roughly 100-fold without losing accuracy.
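As an illustration of where such compression can come from, below is a minimal sketch of the pruning step in the spirit of sparse variational dropout, one of the neuro-Bayesian compression techniques. This is an illustrative choice: the article does not specify which method the 100-fold figure refers to, and the posterior means and variances here are random stand-ins for values that would come out of training.

```python
# Minimal sketch of the pruning step used in neuro-Bayesian compression.
# Assumes each weight already has a learned posterior mean and variance
# (e.g., from sparse-variational-dropout-style training); values are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)

mu = rng.normal(size=(256, 128))                                    # posterior means of a layer's weights
sigma2 = mu**2 * np.exp(rng.normal(loc=6.0, scale=3.0, size=mu.shape))  # stand-in posterior variances

# Per-weight "dropout rate" alpha = sigma^2 / mu^2. A large alpha means the
# posterior is dominated by noise, so the weight carries almost no information.
log_alpha = np.log(sigma2) - np.log(mu**2)

threshold = 3.0                        # a commonly used cut-off on log(alpha)
keep = log_alpha < threshold
weights = np.where(keep, mu, 0.0)      # replace noisy weights with exact zeros

ratio = weights.size / max(int(keep.sum()), 1)
print(f"kept {int(keep.sum())} of {weights.size} weights (~{ratio:.1f}x compression)")
```

The surviving weights can then be stored in a sparse format, which is what yields the large compression factors reported for neuro-Bayesian models.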

The neuro-Bayesian approach can potentially solve a number of open problems in deep learning: catastrophic overfitting to noise in the data, overconfidence of a neural network even in erroneous predictions, non-interpretability of the decision-making process, and vulnerability to adversarial attacks. All these problems are recognized by the scientific community, and many teams around the world are working on solving them, but there are no ready-made answers yet.

How to use neural networks

Large Language Models (LLMs)

Main article: LLM (Large Language Models)

A large language model is essentially a very large neural network built on the transformer architecture and trained on a huge amount of text data (books, articles, code, web pages, etc.).
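For illustration, here is a minimal sketch of the scaled dot-product self-attention operation that transformer blocks are built from. The sequence length, embedding size and random projection matrices are illustrative assumptions, not parameters of any particular LLM.

```python
# Minimal sketch of single-head, causal scaled dot-product self-attention,
# the core building block of the transformer architecture behind LLMs.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 6, 16                  # 6 tokens, 16-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))   # token embeddings (stand-ins)

Wq = rng.normal(size=(d_model, d_model))  # learned projections (random here)
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))

q, k, v = x @ Wq, x @ Wk, x @ Wv

scores = q @ k.T / np.sqrt(d_model)       # how strongly each token attends to each other token
mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), 1)
scores[mask] = -np.inf                    # causal mask: a token cannot look at future tokens

weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax

attended = weights @ v                    # each token becomes a weighted mix of value vectors
print(attended.shape)                     # (6, 16)
```

A real LLM stacks dozens of such attention layers (with multiple heads, feed-forward blocks and normalization) and trains them to predict the next token on its text corpus.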

Neural networks for creating pictures

Main article: Neural networks for creating pictures

Neural networks in radiology

Main article: Artificial Intelligence in Medicine

Neural networks in the media

2023: Which neural networks can already be used in the media today

Today, more and more people understand that the future lies with neural networks and that they can do things that were previously impossible. Like any innovative product, neural networks seem to a wide audience to be something curious but of little practical use. They can write music, process and generate images, extract the main points, voice text, and maintain a simple dialogue. But once the initial excitement wears off, the novelty will become an everyday working tool in all areas. With the media specifically in mind, several ways in which neural networks could potentially be used to solve real problems were selected.

The article "Media of the Future: What Neural Networks Can Be Used in the Media Today " presents the results of a study by experts who, based on their many years of experience in online media, analyzed: what could simplify journalists' work, improve the quality of materials and increase business efficiency. Read more here.

Neural networks for the work of a PR specialist

  • Perplexity AI is a smart information search engine that helps find the data you need in a sea of information noise.
  • Stable Diffusion provides work with text, transcription, translation, as well as YouTube.
  • Writesonic generates texts based on up-to-date information from the web.
  • 300.ya.ru offers thesis-style summaries of various video content.
  • Gerwin AI helps generate posts for Russian social networks.
  • DeepL is an excellent tool for working with texts in foreign languages.
  • Gamma is a tool for creating interactive and creative presentations.
  • @smartspeech_sber_bot transcribes audio messages in Telegram.
  • ChatGPT for YouTube helps you quickly obtain transcripts of YouTube videos.

Neural networks in the field of sales

2024: 5 stages of the sales funnel in which the neural network can (and should) be introduced

Reports are pouring in from all sides about how many companies are already using AI (by which artificial intelligence, neural networks, and ML models are usually meant) and which industries are the most advanced.

This article considers the use of neural networks built into CRM systems by commercial departments. The five classic stages of the sales funnel, which exist in absolutely any business (small and large, B2C and B2B, niche and mass-market, economy and luxury), are taken as "checkpoints." Learn more here.

Neural networks in sports

Main article: Artificial intelligence in sports

Neuronet is one of the most likely stages in the development of the Internet

Neuronet (NeuroNet) is one of the supposed and most likely stages of Internet development. At this new stage of the World Wide Web's evolution, participants will interact on the principles of neurocommunication, i.e., based on the transfer of information about brain activity.

In 2017, scientists predicted the formation of the Neuronet market by 2030-2040. It was also expected that by that time at least 10 Russian companies with a total capitalization of about 700 billion rubles would already be operating in this market.

Neural networks in Russia

In Russia, developments in the field of neural network programming are carried out by the largest Internet holdings, in particular VK (formerly Mail.ru Group) and Yandex, which use neural networks for image analysis and text processing in their search engines. The best-known examples were technologies from Microsoft, Google, IBM and Facebook, as well as the startups MSQRD and Prisma.[1]

2024

Russia has found a way to increase the efficiency of neural networks by 40%

Smart Engines scientists have found a way to increase the efficiency of neural networks. The method is based on a fundamentally new quantization scheme, thanks to which the speed of work increases by 40%. The results of the study were published in the journal Mathematics (Q1). Smart Engines announced this on April 25, 2024.

The development is already used in solving applied computer vision problems, such as object detection and text recognition. It could also become an integral part of the next generation of unmanned autonomous systems, expanding the class of tasks that on-board computers can perform.

This is a breakthrough by domestic scientists in optimizing the execution of neural networks. As of April 2024, neural networks are mainly executed on specialized graphics cards, but not every computer is equipped with them. At the same time, any user device has a central processor, for which the de facto world standard is 8-bit neural networks. However, deep neural networks are becoming more complex, containing hundreds of millions of coefficients or more, which requires more computing power. This limits the use of central processors in artificial intelligence systems.

Smart Engines researchers solved this problem by proposing a qualitative improvement over the 8-bit model: 4.6-bit networks. These run 40% faster than the 8-bit model while remaining practically equal to it in quality, thanks to more efficient use of the features of mobile-device central processors.

To do this, the input data and the model's coefficients are quantized so that their products fit into 8-bit registers. The results are summed using a two-level system of 16- and 32-bit accumulators to achieve maximum efficiency. As a result, there are on average 4.6 bits of information per value.
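Here is a minimal sketch of this kind of integer pipeline: products of quantized values fit into 8 bits, partial sums are kept in a 16-bit accumulator and periodically flushed into a 32-bit one. The specific bit widths below (4-bit signed weights, 4-bit unsigned activations) and the flush interval are illustrative assumptions; the published 4.6-bit scheme allocates bit widths more flexibly per task.

```python
# Minimal sketch of a two-level (16-/32-bit) accumulation pipeline for a
# quantized dot product. Illustrative bit widths: weights in [-8, 7] and
# activations in [0, 15], so every product fits into an 8-bit register.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
w = rng.integers(-8, 8, size=n).astype(np.int8)    # quantized weights
x = rng.integers(0, 16, size=n).astype(np.int8)    # quantized activations

FLUSH_EVERY = 256    # 256 products of magnitude <= 120 always stay within int16 range

acc32 = np.int32(0)
acc16 = np.int16(0)
for i in range(n):
    prod = np.int8(w[i]) * np.int8(x[i])   # product of two quantized values fits in int8
    acc16 = np.int16(acc16 + prod)         # first-level 16-bit accumulator
    if (i + 1) % FLUSH_EVERY == 0:
        acc32 = np.int32(acc32 + acc16)    # flush into the second-level 32-bit accumulator
        acc16 = np.int16(0)
acc32 = np.int32(acc32 + acc16)

# Sanity check against a plain 32-bit dot product: the two results match.
print(int(acc32), int(w.astype(np.int32) @ x.astype(np.int32)))
```

Keeping the inner loop in narrow integer registers is what lets an ordinary CPU process far more values per instruction than with wider arithmetic.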

Such a quantization scheme compares favorably with existing ones, since it allows the bit width of the input data to be set flexibly depending on the problem and is not tied to powers of two. As a result, this development provides noticeably higher recognition quality than, for example, 4-bit models.

"Computer vision tasks should be solved on end devices - mobile phones, surveillance cameras, on-board computers of drones. All these tasks are characterized by the devices' low computing capabilities and significant power-consumption constraints. Our development increases the ability to solve these problems by almost one and a half times. Classical networks in our recognition systems have already been replaced with 4.6-bit analogues, and we continue to work on more optimal schemes for quantizing and training neural networks," said Vladimir Arlazarov, CEO of Smart Engines, Doctor of Technical Sciences.

In all Smart Engines software products, "heavy" neural networks have been replaced by their 4.6-bit counterparts.

Russian scientists have improved the diffusion neural network model

Scientists from the Center for Artificial Intelligence and the Faculty of Computer Science of the Higher School of Economics, together with the AIRI Institute of Artificial Intelligence and Sber AI, have developed a diffusion model structure for which eight types of noise distribution can be specified. Instead of the classical Markov chain structure with a normal distribution, the scientists proposed a star-shaped model in which the type of distribution can be chosen. This will help solve problems in different geometric spaces using diffusion models. This was announced on February 15, 2024 by representatives of the Higher School of Economics. Read more here.

Russian scientists taught neural networks to recognize humor the way humans do

A group of scientists from the Faculty of Computer Science at the Higher School of Economics conducted a study of the ability of neural networks to recognize humor. It turned out that for more reliable recognition, the approach to creating data sets on which neural networks are trained should be changed. Such information was shared with TAdviser by representatives of the Higher School of Economics on January 11, 2024.

As is known, voice assistants can only tell a ready-made joke; they are unable to come up with their own or to recognize a joking tone. At the same time, users of voice assistants built on artificial intelligence technology want more humanity from them: the ability to recognize a joke and to joke back.

Since the mid-2000s, scientists have treated humor recognition as a "funny / not funny" classification task, and datasets are collected and labeled within that framework. A group of HSE scientists proposed changing the approach to forming such datasets: making them more diverse, while the datasets do not have to be very large.

As representatives of the Higher School of Economics explained, the task of recognizing humor is also difficult because there are no formal criteria for determining what is funny and what is not. Most existing datasets for training and evaluating humor recognition models contain puns. Sarcasm and irony are even more complex, as is situational humor that requires knowledge of context or cultural code.

"We wanted to assess the transferability and robustness of models trained on different datasets. Transferability is how well a model trained on a dataset with one type of humor detects a different type of humor. It was not at all obvious how the training would work, because humor is so varied," said Pavel Braslavsky, associate professor at the Faculty of Computer Science at the Higher School of Economics.

The scientists tested robustness with "adversarial attacks": attempts to force the neural network to see humor where there is none. The neural network was given text that was not funny but formally resembled a humorous one; instead of a pun, the dialogue used a "wrong" similar-sounding word. The less often the network falls into such traps, the more robust it is.

The researchers trained the models on standard humor recognition datasets and on mixtures of them. In addition, the models were tested on dialogue from Lewis Carroll's Alice in Wonderland, Charles Dickens' The Old Curiosity Shop, Jerome K. Jerome's Three Men in a Boat (To Say Nothing of the Dog), The Walking Dead, Friends, and a collection of ironic tweets.

It turned out that some models overfit and consider everything funny.

"We showed various models Dickens' The Old Curiosity Shop, which is a very sad story, and asked them to assess what was happening. It turned out that some models believe that all dialogues from 19th-century literature are funny. Moreover, everything that is too similar to 21st-century news is taken for humor," said Alexander Baranov, a graduate student at the Faculty of Computer Science at the Higher School of Economics.

Models trained on puns are more often mistaken when one word in an unfunny text is replaced with a similar-sounding one. It also turned out that neural networks trained on small portions of different datasets recognize humor better than those trained on a large amount of data of the same type. The authors conclude that existing datasets are too narrow, the humor in each is very limited, and this reduces the quality of joke recognition.

The researchers proposed changing the approach to training and evaluating humor recognition models. New datasets are needed that are more diverse and closer to ordinary conversations and natural communication. Large language models such as ChatGPT, trained on huge amounts of data of different types, on average do a good job of recognizing humor, and the scientists suggest that this is precisely due to the diversity of the data they were trained on.

"So far we are only talking about binary recognition of humor: funny or not funny. That is very far from distinguishing shades of humor, telling sarcasm from irony, or recognizing situational, contextual humor. In our voice assistants, jokes are still "nailed down" and covered with filters that determine which joke to give depending on the user's words. Such programmed responses feel unnatural. The demand for greater humanity in artificial intelligence is completely understandable, but it will not be easy to satisfy," said Vladimir Knyazevsky, one of the authors of the study and a student at the Faculty of Computer Science at the Higher School of Economics.

The study was carried out as part of the project of the Scientific and Educational Laboratory of Models and Methods of Computational Pragmatics.[2]

2017: A boom in neural network startups

Neural networks have been created that can paint in any existing artistic style, confidently beat the world champion in the most difficult logic game on the planet, record music albums, and imitate human behavior in electronic correspondence. All of this is still only a demonstration of part of the technology's capabilities; its real application, both in business and in everyday life, we will see in the near future.

In other words, neural networks will not so much replace human labor in more complex activities as become a useful tool for specialists and managers in many areas.

Vlad Shershulsky, Director of Advanced Technologies, Microsoft Rus, comments: "This area finally became hot in 2016: since about 2009, there has been rapid progress in creating ever more complex, and at the same time ever more efficient, deep neural networks, and most recently we have seen impressive applications and witnessed the creation of a number of successful startups. The threshold for entering the neural network services market has dropped significantly, and projects built around the idea of one interesting application are implemented in a matter of months. All this gave rise to a boom in neural network startups, aroused the interest of large corporations and contributed to growing demand for specialists in this field, including in Russia. It is nice to note that the most important contribution to the creation of a new generation of technologies for working with natural languages was made by Microsoft specialists. In the famous television series Star Trek, real-time translation of spoken language was predicted for the 22nd century, and we already have it today. Of course, other applications - from predicting car breakdowns and counterparty bankruptcies to new cybersecurity tools - are also developing very successfully."

Neural networks in the world

2024

On the basis of the works of Soviet academicians Andrei Kolmogorov and Vladimir Arnold, a fundamentally new architecture of neural networks was created

At the end of April 2024, American researchers from a number of scientific organizations announced the development of a fundamentally new neural network architecture - Kolmogorov-Arnold Networks (KAN). The platform is based on the works of Soviet academicians Andrei Kolmogorov and Vladimir Arnold. Read more here.

A neural network has been created that imitates the handwriting of a person

At the end of December 2023, specialists from the Mohamed bin Zayed University of Artificial Intelligence in the UAE (MBZUAI) announced the creation of a neural network capable of imitating human handwriting. The developers registered their technology with the United States Patent and Trademark Office (USPTO). Read more here.

2023: The global deep neural network market grew by a third over the year to $24.4 billion

At the end of 2023, spending in the global deep neural network (DNN) market reached $24.4 billion. For comparison, a year earlier the volume of the industry was estimated at $18.46 billion. Thus, expenses rose by almost a third. This is stated in the Market Research Future study, the results of which were published on November 1, 2024.

One of the main drivers of the market is the rapid development and introduction of artificial intelligence technologies, including generative. Organizations are increasingly using deep learning for a wide variety of applications, such as image and speech recognition, natural language processing, and autonomous systems. This surge in demand is largely due to innovations and advances in computing resources that empower and enhance DNN efficiency. With powerful graphics accelerators, tensor processors, and other specialized solutions, organizations can handle huge amounts of data at high speeds.

Advanced AI-based solutions are being implemented in many areas, including healthcare, finance, automotive and retail. Such systems offer advanced analytical capabilities, allowing organizations to optimize operations, reduce costs and increase revenue. The industry is further fueled by the need to improve the customer experience with personalized services made possible by AI tools. Organizations that leverage large amounts of structured and unstructured data with DNN technologies gain a competitive advantage. Another stimulating factor is growing investment in research and development related to deep learning technologies. Significant funds are being invested in AI development by both private companies and government agencies.

The authors of the study highlight five main applications of DNN: image recognition, natural language processing, speech recognition, video analysis and anomaly detection. In 2023, the first of these segments accounted for $6.8 billion, the second - $5.5 billion. In the field of speech recognition, costs are estimated at $4.6 billion, while analysis of video materials brought in $4 billion. The detection of anomalies provided another $3.5 billion. The bulk of revenue comes from software, including the frameworks and platforms needed to develop and deploy neural networks. Hardware solutions, including GPUs and specialized accelerators, make a significant contribution. At the same time, services that cover a variety of areas - from consulting to deployment and support - are becoming more and more significant.

Geographically, North America leads with an estimate of $10.5 billion. This is followed by the Asia-Pacific region with $6 billion and Europe with $5.7 billion. South America, the Middle East and Africa together provided $2.2 billion. The list of leading industry players includes:

Market Research Future analysts believe that in the future, the CAGR in the market under consideration will be 32.15%. Thus, by 2032, costs in the field of deep neural networks on a global scale, according to the presented estimates, could reach $300 billion.[3]

2018: Graph Neural Networks: A Passing Trend, or Does the Future Belong to Them?

Main article: Graph neural networks

Graph neural networks are actively used in machine learning on graphs to solve both local (vertex classification, link prediction) and global (graph similarity, graph classification) problems. Local methods have many applications in text processing, computer vision, and recommendation systems. Global methods, in turn, are used to approximate problems that cannot be solved efficiently on modern computers (leaving aside the quantum computers of the future) and are applied at the intersection of computer science and the natural sciences to predict new properties and substances (relevant, for example, when creating new drugs).
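For illustration, here is a minimal sketch of a single graph-convolution (message-passing) layer of the kind these local methods build on, using the common normalized-adjacency propagation rule. The tiny graph, feature sizes and random weight matrix are illustrative assumptions, not any particular published model.

```python
# Minimal sketch of one graph-convolution (message-passing) layer: each node
# updates its features by averaging over its neighbors (and itself) and then
# applying a learned linear map followed by ReLU.
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix of a tiny undirected graph with 4 nodes.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

H = rng.normal(size=(4, 8))        # node features, 8 dimensions per node
W = rng.normal(size=(8, 4))        # learned weight matrix (random stand-in)

A_hat = A + np.eye(4)              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # symmetric normalization

H_next = np.maximum(A_norm @ H @ W, 0.0)    # propagate neighbor messages, then ReLU
print(H_next.shape)                          # (4, 4): new node embeddings
```

Stacking several such layers and reading out the node embeddings gives vertex classification or link prediction (local tasks); pooling the embeddings over the whole graph gives graph-level prediction (global tasks).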

Graph neural networks reached the peak of their popularity in 2018, when they came into wide use and showed high efficiency in various applications. The most famous example is the PinSage model in the Pinterest recommendation service. Since then, more and more new applications of the technology have appeared in areas where previously existing methods could not effectively account for connections between objects in their models. Read more here.

Notes