
Where is the world of practical AI implementations heading?

The article is included in the TAdviser review "Artificial Intelligence Technologies"

We see that the world of practical AI implementations, driven by the powerful development of technologies, has moved far beyond specialized solutions that automate individual business operations toward comprehensive systems that support decision-making in complex business processes. Accordingly, the AI technologies used to solve business problems with IT are changing as well: they are evolving toward hybrid models.

For example, the borrower assessment system implemented at Home Credit Bank uses predictive modeling, causal modeling (which assumes some impact on the client), reinforcement learning, and linear discrete programming algorithms. In addition, heuristics are used in pricing and limit policy. According to Sergey Gerasimov, head of the research and innovation department at Home Credit Bank, this approach makes it possible to automate the assessment process in a way that achieves maximum profitability.

Source: Home Credit Bank, September 2021

In general, the resulting model is built on the outputs of other models; moreover, a complex optimization is performed on aggregated (global) indicators across all customers. In effect, a linear discrete programming problem is being solved, notes Sergey Gerasimov.
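
For illustration only, the kind of aggregate optimization described above can be sketched as a small 0/1 (discrete) program. The customer profits, risk figures and the open source PuLP solver used here are assumptions for the example, not details of the bank's actual system.

```python
# Toy discrete (0/1) programming sketch in the spirit of aggregate limit optimization.
# All numbers are invented for illustration; this is NOT Home Credit Bank's model.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus, value

# Hypothetical per-customer candidate limits with estimated profit and risk cost
customers = {
    "c1": {"profit": 120, "risk": 40},
    "c2": {"profit": 90,  "risk": 25},
    "c3": {"profit": 150, "risk": 70},
    "c4": {"profit": 60,  "risk": 10},
}
RISK_BUDGET = 100  # global (aggregated) constraint across all customers

prob = LpProblem("aggregate_limit_policy", LpMaximize)
# Binary decision: approve the proposed limit for customer i or not
x = {i: LpVariable(f"approve_{i}", cat="Binary") for i in customers}

# Maximize total expected profit over the whole portfolio
prob += lpSum(customers[i]["profit"] * x[i] for i in customers)
# Subject to a portfolio-wide risk budget
prob += lpSum(customers[i]["risk"] * x[i] for i in customers) <= RISK_BUDGET

prob.solve()
print(LpStatus[prob.status], {i: int(value(x[i])) for i in customers})
```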

Artem Grishkovsky, commercial director of Trusted Environment, the developer of the Triaflai BI platform, also says that in practice companies use different types of models in different areas of the business: simulation, econometric ("what is needed for," "what will happen if"), forecasting, and optimization models. Accordingly, there is a need for environments in which these models can be configured and used. Thus, the Triaflai platform contains its own environment for configuring econometric models, mainly focused on creating calculated indicators. And to give a company the ability to use a library of predictive and optimization multifactor models, Triaflai is integrated with Jupyter Notebook, and for creating simulation models, with AnyLogic software.

What can be expected in the near future in the field of practical use of AI mechanisms?

Computer vision mechanisms show quite convincing results; unmanned taxis are already running in test mode in Moscow. But to be sure that this technology has really entered our daily life, it needs to carry at least its first 10 thousand passengers and, unfortunately, the first accidents will have to happen. This technology surely still needs further technical refinement.

But health care still has to travel the path of turning many disparate AI solutions into "intellectual coverage" of the medical services provided to the population. "The volume of necessary medical diagnostics in the modern world is gigantic and only continues to grow, while doctors are sorely lacking, so we see huge prospects for automation here," says Dmitry Nikolaev. Smart Engines, in particular, supplies software for tomography. According to the company's specialists, AI algorithms for 3D tomographic reconstruction have a very promising future.

The solution from Nanosemantics helps doctors make diagnoses from MRI images. The company's ML-based technologies segment all parts of the body and define their contours (for the back, for example, these can be vertebrae, intervertebral discs, the spinal canal, etc.). Precise mathematical calculations are then performed (areas inside the contours, the shortest distances between points), on the basis of which an accurate diagnosis is made.
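
As an illustration of the geometric step described above (not Nanosemantics' actual pipeline), the sketch below computes the area inside a segmented contour and the shortest distance between two contours with OpenCV and SciPy; the input masks are assumed to come from an ML segmentation model.

```python
# Illustrative geometric post-processing: contour area and shortest distance
# between two segmented structures. Assumes `mask_a` and `mask_b` are binary
# uint8 masks, each containing one object, produced by a segmentation model.
import cv2
import numpy as np
from scipy.spatial.distance import cdist

def contour_metrics(mask_a: np.ndarray, mask_b: np.ndarray):
    ca, _ = cv2.findContours(mask_a, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cb, _ = cv2.findContours(mask_b, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    area_a = cv2.contourArea(ca[0])           # area enclosed by the first contour
    pts_a = ca[0].reshape(-1, 2).astype(float)
    pts_b = cb[0].reshape(-1, 2).astype(float)
    min_dist = cdist(pts_a, pts_b).min()      # shortest distance between the contours
    return area_a, min_dist
```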

In early June, Anastasia Rakova, Deputy Mayor of Moscow for Social Development, announced that a unique digital library of depersonalized data sets had been created in the capital for evaluating and training neural networks. AI service developers will have access to nine datasets of depersonalized radiological images, which are needed for testing and further training of artificial intelligence. The largest open dataset consists of more than a thousand unique studies of patients with signs of COVID-19. Quality data for training medical services is thus becoming more accessible to developers.

A promising direction of AI development, according to Alexander Khledenev, Director of Digital Solutions at VS Lab, is the emergence of technologies and methodologies that democratize the use of AI for various tasks and thus drive its further spread. Among them is, for example, AutoML, which simplifies and automates the process of developing, deploying and managing machine learning models.

"The products and components of this area make it possible to lower the skill requirements for specialists leading model development, perform automated feature engineering, and select the most suitable training algorithms. All this should reduce the cost and speed up data analysis with AI," said Alexander Khledenev.
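
A minimal sketch of what such AutoML tooling looks like in practice, assuming the open source TPOT library as one representative example (the article does not name specific products); the dataset and search settings are purely illustrative.

```python
# AutoML sketch with TPOT: automated search over preprocessing steps and algorithms.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = TPOTClassifier(generations=5, population_size=20, random_state=0, verbosity=2)
automl.fit(X_train, y_train)            # automated pipeline and model search
print(automl.score(X_test, y_test))     # hold-out accuracy of the best pipeline found
automl.export("best_pipeline.py")       # export the discovered pipeline as plain code
```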

"It consists of a methodology and tools for standardizing and simplifying the development, deployment, monitoring and management of machine learning models. This will give business greater security and continuity of IT processes, and developers a framework for continuously optimizing results in line with business tasks," the expert is sure.
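
A hedged sketch of the standardized develop-deploy-monitor workflow the expert describes, using MLflow experiment tracking as one possible tool (not named in the article); the model, parameters and metric are placeholders.

```python
# Experiment tracking sketch: reproducible parameters, monitored metrics,
# and a versioned model artifact, logged with MLflow (illustrative only).
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline_rf"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    mlflow.log_params(params)                                        # reproducible configuration
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))  # monitored quality metric
    mlflow.sklearn.log_model(model, "model")                         # versioned model artifact
```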

"Today, for example, services for image recognition, speech synthesis, automated data preparation and augmentation, and model creation are already available. We should expect the emergence of players with more specialized solutions for particular businesses and industries," says Alexander Khledenev.

Analysts at Statista predict the active promotion of open source platforms, which make collaborative model development possible. The trend was started in 2015 by Google, which released the open source TensorFlow framework for training AI models using dataflow graphs (see the sketch after the list below). It was followed by a number of other tools:

  • DeepMind Lab, a platform created for experimenting with general-purpose AI systems that can solve complex problems without first being taught how to solve them.
  • Theano, a library for building high-precision AI models that perform computational operations on large amounts of data.
  • Caffe, a framework that can process more than 60 million images per day on a single NVIDIA K40 graphics card.
  • Torch, an open source computing environment for machine learning algorithms that offers GPU support for numerical optimization and linear algebra.
  • Deeplearning4j, an open source deep learning library for Java (JVM).
  • MLlib, a machine learning library that includes algorithms for classification, regression, decision trees, recommendations, clustering, topic modeling, and more.
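
A minimal sketch of working with one of the frameworks listed above, TensorFlow, through its Keras API; the data and model here are toy examples, not a benchmark.

```python
# Define and train a tiny neural network with TensorFlow/Keras on synthetic data.
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")        # synthetic binary target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))             # [loss, accuracy] on the toy data
```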

Igor Pivovarov, chief analyst at the MIPT Center for Artificial Intelligence, says that production of a Russian processor for inference (running a neural network on an end device), developed by IVA Technologies, may begin this year. This would be a real breakthrough for Russian artificial intelligence hardware.

Cognitive breakthrough

Among the breakthrough technologies of recent times, it is worth mentioning the joint development of Russian scientists from the Institute of Artificial Intelligence (AIRI) and MIPT. In May, they announced a biologically plausible memory model for internally motivated AI systems. The corresponding article was published in the scientific journal Brain Informatics.

The cognitive agent developed by the team is a program that, while performing a specific task, learns on its own to interact with the world and to learn from its mistakes. The agent is based on a structure of algorithms, including neural networks, that helps it follow the developer's instructions. From the whole variety of artificial neural networks, the researchers chose spiking neural networks (SNN), built on a model of a spiking (pyramidal) neuron, which learns faster than a traditional artificial neuron.

Spiking neural networks, as the developers explain, use only the neurons that are active at a given moment, which provides significant resource savings both in training and in practical use.
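
A simplified sketch of this idea: a leaky integrate-and-fire neuron stays silent until its membrane potential crosses a threshold, so only currently active neurons produce output. This generic textbook model is used here for illustration and is not the AIRI/MIPT implementation.

```python
# Leaky integrate-and-fire (LIF) neuron: integrates input, fires sparsely on threshold.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times produced by a sequence of input currents."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-v + i_t)       # leaky integration of the input
        if v >= v_thresh:                # threshold crossing -> emit a spike
            spikes.append(t * dt)
            v = v_reset                  # reset membrane potential after the spike
    return spikes

print(lif_neuron(np.full(100, 1.2)))     # only occasional spikes under constant drive
```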

The cognitive capabilities of the developed agent include operating on abstractions of states and actions. This means that the agent can perform complex actions on the basis of simple operations it already knows, the scientists say. Moreover, in addition to external motivation (a reward for a successfully performed action), it has internal motivation that ensures meaningful behavior in the absence of an external reinforcing signal. The scientists say that such an agent will be able not only to look for a solution to a problem, like most standard programs, but also to study the world around it.
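
Schematically, the combination of external and internal motivation can be sketched as an extrinsic reward plus a curiosity-style bonus based on prediction error; the formula below is a generic illustration, not the authors' published formulation.

```python
# Generic curiosity-style reward: extrinsic reward plus a prediction-error bonus.
import numpy as np

def total_reward(extrinsic, predicted_next_state, actual_next_state, beta=0.1):
    # Intrinsic motivation: the worse the agent predicts the world,
    # the more "interesting" (rewarding) the transition is to explore.
    intrinsic = np.linalg.norm(
        np.asarray(actual_next_state) - np.asarray(predicted_next_state)
    )
    return extrinsic + beta * intrinsic

# With no external reinforcement (extrinsic = 0) the agent is still driven to explore.
print(total_reward(0.0, predicted_next_state=[0.2, 0.1], actual_next_state=[0.9, 0.4]))
```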

Fundamental ambitions of neural networks

In recent years, artificial neural networks have been striving for gigantism: the GPT-3 model (the third generation of OpenAI's natural language processing algorithm), and then DALL-E, a visual transformer model that generates images from text descriptions, use millions and billions of parameters. Some experts believe a new era of machine learning is coming. The "manifesto" of this direction was the article "On the Opportunities and Risks of Foundation Models," prepared by a large team of scientists at the Center for Research on Foundation Models (CRFM) of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

AI is undergoing a paradigm shift with growing interest in models (e.g. BERT, DALL-E, GPT-3) that are trained on broad data and are capable of scaling and adapting to a wide range of different tasks. The researchers called this limited set of huge models, based on standard deep learning and transfer learning, foundation models. They suggested that such models will become elements of an AI "super architecture," on top of which a variety of application products using ML to solve their own problems will run.
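
The adaptation pattern the researchers describe can be sketched with the Hugging Face transformers library: one pretrained backbone (here bert-base-uncased, used as a stand-in) receives a new task-specific head for a downstream classification problem. This is an illustration of the paradigm, not code from the cited paper.

```python
# Adapting a pretrained "foundation" backbone (BERT) to a downstream task.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2   # same backbone, new task-specific head
)

inputs = tokenizer("The loan application looks risky.", return_tensors="pt")
outputs = model(**inputs)               # fine-tuning this head adapts BERT to the task
print(outputs.logits.shape)             # torch.Size([1, 2])
```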

Evolution of machine learning

Source: "On the Opportunities and Risks of Foundation Models," Center for Research on Foundation Models (CRFM), Stanford Institute for Human-Centered Artificial Intelligence (HAI), Stanford University, August 2021

This view rests on the following observation: with deep learning, high-level features used for prediction emerge, while with foundation models, advanced functionality such as in-context learning emerges. Thus, machine learning homogenizes (that is, reduces the heterogeneity of) learning algorithms. An example is logistic regression, a supervised classification algorithm used to predict the probability of a target variable. Deep learning homogenizes model architectures (e.g. convolutional neural networks). And foundation models homogenize the model itself (e.g. GPT-3).
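
As a tiny worked example of the logistic regression case mentioned above, the sketch below fits a classical supervised classifier that outputs the probability of a target variable on synthetic data.

```python
# Logistic regression: a supervised classifier that predicts class probabilities.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3]))   # probability of the target for the first three samples
```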

Illustration of the multimodality of a foundation model

Source: "On the Opportunities and Risks of Foundation Models," Center for Research on Foundation Models (CRFM), Stanford Institute for Human-Centered Artificial Intelligence (HAI), Stanford University, August 2021

In Russia, there are elements of foundation models; they could be created by large IT companies such as Yandex or Sber. According to Igor Pivovarov, something similar to a foundation model for the Russian language is DeepPavlov, a library for building virtual assistants and analyzing text, built on TensorFlow and Keras and created at MIPT under the leadership of Mikhail Burtsev.

Sketches of the future metaverse

In January, the Chinese laboratory Purple Mountain, based in Nanjing, officially announced a world record real-time 6G transmission rate for terahertz-band wireless communications. The scientists managed to demonstrate data transfer rates of up to 200 Gbps. China plans to launch its first 6G network by 2030.

Experts say that 6G means not just even higher communication speeds, but also ultra-low signal delay, ultra-high energy efficiency, ultra-high reliability and security, and ultra-high sensitivity and localization. This means that fundamentally new scenarios for using communications will appear.

Source: ieeexplore.ieee.org/document/9335927

Among such scenarios, for example:

  • Multidimensional augmented reality as the foundation of all business and consumer applications.
  • "Tactile Internet of Feelings": transmission of a complex spectrum of sensations (aroma, taste, etc.).
  • Holographic communications.
  • Ultra-smart telemedicine based on tactile communication.
  • An ultra-smart city, implementing full integration of transport, urban infrastructure and the person "inscribed" into this infrastructure.

So, there is an assumption that in the near future the world will become a union of physical, biological and digital reality, in which people and ultra-smart devices connected to 6G networks will live and communicate. All objects and entities under the 6G "umbrella" will be included in intellectual processes at various levels of their existence.

Source: ieeexplore.ieee.org/document/9335927

In a certain sense, the idea of a pervasive intellectual component in 6G aligns well with the idea of foundation models of deep learning, with smart chips in the edge infrastructure, and with the trend toward hybridization of AI mechanisms.
