2020/08/07 09:53:02

Analyzing continuous natural-language speech: "…how shall another understand you?"

One of the central problems for IT solutions in artificial intelligence is the problem of "understanding" text, or more precisely, extracting meaning from natural-language text. Practical smart speech technologies ultimately come down to it. Yet a general solution remains far off: neither psychologists nor neurophysiologists can yet explain or model the phenomenon of "understanding". For decades, various theories and hypotheses have grown on this most fascinating field of scientific research; occasionally they find practical embodiment, but the "creation of an autonomous robot able to think like a human" has not arrived. So what are computer programs able to "understand" in a text today? This article is part of the TAdviser overview "Artificial intelligence technologies and solutions: the turning point".


Main article: Speech technologies

F. I. Tyutchev has many poems with a deep philosophical undertone, and the famous quatrain from Silentium! can fairly be called a prophetic problem statement for the computer analysis of texts in natural language (NL):

How can the heart express itself?
How can another understand you?
Will he grasp what you live by?
A thought once uttered is a lie.

Evolution of technologies

Computer natural language processing (NLP) systems deal with the "understanding" of NL texts. Natalya Lukashevich, leading researcher at the Research Computing Center of MSU, professor of the MSU department of theoretical and applied linguistics, and member of the international program committee of the Dialog conference (for many years the center consolidating the intellectual resources behind NLP development in Russia), describes what concerns the scientific community today: "This science has become experimental. In most cases, when something new is created (a model, an algorithm, a linguistic resource), an evaluation procedure for its quality must also be provided, on existing or specially created data sets. The same happens in the neighboring fields of machine learning and artificial intelligence."

"Such methods require data not only for testing but also for training. One can say that now at the Dialog conference, as indeed worldwide, there is a shift toward those text and speech processing problems for which data sets either already exist or can be created for training and testing,"
emphasizes Professor Lukashevich.

As a result, according to Natalya Lukashevich, the most pressing and interesting lines of work for researchers are approaches that reduce the burden of preparing (labeling) data for a specific task. Among them:

  • Methods for transferring trained models from one task to another, from one domain to another, from one language to another.
  • Methods for automatically generating new data sets or augmenting existing ones (augmentation).

In addition, a question of serious practical interest to scientists: to find a problem statement for which no data yet exist, and to formulate it so that it is clear how to label data, or where to find already labeled data.

A technological breakthrough in NLP

In NLP, people speak of a breakthrough connected with neural networks: 2018 became a turning point for the development of machine learning models focused on text processing tasks. This turning point is also called NLP's ImageNet moment, by analogy with the events that a few years earlier had dramatically accelerated machine learning in computer vision.

"The most important thing to happen in NLP recently is the universal spread of models based on deep learning, above all BERT. The release of BERT is an event that laid the foundation for a new era in NLP,"
says Valentin Malykh, research associate at the Huawei Russian Research Institute (RRI), Huawei's R&D division in Russia.

Soon after publishing the paper describing the model, the development team released the code publicly and made available different versions of BERT already pretrained on large data sets. Today the open BERT mechanism builds on a number of developments contributed by the worldwide NLP community. In general, one can say that the community's combined efforts addressed several big problems that had previously limited the applicability of deep learning algorithms to NL text analysis. In particular: accounting for context to pin down the exact sense of a word; the need for a labeled training set; and the small size of the texts available for training in large volumes, mainly news notes.

Among the most important achievements of this model is that BERT is trained on a large unlabeled data set, on which a language modeling problem is solved, i.e. predicting withheld words from their surroundings. This task has no practical value in itself, but it lets the model learn the features of the language and then use them to solve applied tasks.
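At toy scale, the core idea of language modeling, learning to predict a word from raw unlabeled text, can be illustrated with a simple bigram counter. This is a deliberately crude sketch under stated assumptions: BERT itself uses a deep transformer and masked-word prediction, and the corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram_lm(corpus):
    """Count word-pair frequencies to estimate P(next word | current word)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for cur, nxt in zip(words, words[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = [
    "the model predicts the next word",
    "the model learns language structure",
]
lm = train_bigram_lm(corpus)
print(predict_next(lm, "the"))  # "model": it follows "the" most often
```

Even this trivial predictor shows why the task needs no labels: the text itself supplies both the input and the "correct answer".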

This cannot, in principle, be done at the scale of tweets or news notes. As one BERT enthusiast put it, "it is quite simple to load seven thousand books and train a model on them". Perhaps in reality not everything is quite that simple, but the BERT model has already broken several records in solving a number of NLP tasks.

Developing a model using BERT


From a practical point of view, it is important that such deep learning models can use a big array of unlabeled data for pretraining, and then a rather small amount of data at the training (fine-tuning) stage. As a result, significant quality improvements were achieved for a large number of tasks, for example for question answering systems.

Achievements of today

Quality of text recognition

Quality as applied to NLP systems is a complex notion. How should it be described? As some semantic construction, or as the result of semantic analysis?

Natalya Lukashevich explains that the term "semantic analysis" has two interpretations. First, there are tasks of semantic analysis, those that require deep understanding of the text from the people taking part in communication: machine translation, question answering, chatbots. Second, semantic analysis is an automatic procedure that, in the course of processing, builds an internal formalized semantic representation of the meaning of the text, which can then be interpreted by a human. Both directions are of interest to researchers.

Does the computer "understand" the text?

Asked whether one can say that a computer system "understands" the meaning of a text roughly the way a person does, Natalya Lukashevich answers:

"One can say that machine translation quality has grown significantly of late. Translations have become 'smoother', i.e. grammatically well-formed, though specific words may still be translated incorrectly. However, inside a machine translator works a neural network that transforms the word sequence in the input language into a word sequence in the other language through an uninterpretable internal representation generated by the network."

"But in these applications we can usually find some combination of three technologies: applying templates, searching for the answer among a set of available answers or replies, and generating the answer with a neural network trained on a set of replies. In any case, it is not at all similar to human understanding of text, nor to semantic analysis in the second interpretation."

"It must be stipulated that 'understanding' of the text by a computer system does not happen. A system can carry out some specific task, for example, translate a text from one language to another. But it does so statistically, and modern systems have many limitations connected with this lack of understanding of the text."

A formalized representation of the meaning of a text

"It turned out to be extremely difficult to propose a uniform semantic formalism for the whole variety of texts. For example, researchers once thought it optimal to do machine translation through a formalized, language-independent representation of the text's content (an interlingua), but it did not work out. Today, neural machine translation is performed by building uninterpretable vector representations,"
notes Natalya Lukashevich.

"In NLP there has now been, in many respects, a move away from describing linguistic structures in explicit form. Instead, so-called distributional semantics methods are used, of which BERT models are a development. For machine learning models (BERT included) there are many aspects that require refinement, for example in neural network training: searching for a global optimum of the loss function, or finding optimal representations of the distributions in the data, e.g. with variational autoencoders, etc.,"
confirms Valentin Malykh.

ABBYY, using NLP technologies based on various machine learning mechanisms, including neural networks, implements the extraction of a broad spectrum of entities, events and relations from NL texts, and builds complex semantic structures. According to the company, the technology can determine the relations between interconnected words even in long compound sentences with complex clauses.

"The ABBYY system, as far as I know, uses a big, human-built knowledge graph to perform applied tasks. Huawei is developing the TinyBERT model, which on the one hand uses a big corpus for pretraining, and on the other employs special techniques for reducing the model's size. The final model turns out very compact while showing high quality on applied tasks. Notably, its training requires practically no manual work."

Experience of real implementations

Tatyana Danielyan, deputy director for research and development at ABBYY, says that NLP technologies are used in a number of the company's solutions for the corporate segment:

  • Document analysis: the technology first determines the meanings of words and the subject of the whole unstructured text.
  • Classification: the program determines the document type and its data, and assigns them to different categories.
  • Clustering: documents are grouped according to some given principle, for example collecting all acts or agreements that are similar in meaning.
  • Intelligent search and information extraction from unstructured texts (entities, facts, events).

For example, at Sberbank an ABBYY solution is used in an online news monitoring system for credit risk assessment. Today news about more than 200 thousand partner companies of the bank passes through this ontology model online, and only significant messages, with the relevant facts highlighted, reach the risk managers. At Tochka bank, an ABBYY solution lets the support service process client requests 15% faster: the program automatically analyzes the text of a conversation, determines its subject, and sends the data to the answer base.

An interesting use of ABBYY's NLP technologies is the corporate search system at NPO Energomash. The solution works in the style of popular search engines, but over internal information sources and millions of enterprise documents. Employees can type a natural language query into a search bar and within fractions of a second find the documents and files they need for work: design drawings, research reports, financial statements and so on.

Solutions of tomorrow

"Classification problems, sorting documents into certain categories, are solved at a high level. The task of personal assistants configured for a certain area of services is also more or less well solved: reserve a table at a restaurant, buy a plane ticket, look up information about some book, etc. These are very simple requests with specific answers; they do not require collecting information from different sources and aggregating it,"
explains Tatyana Danielyan.

"People speak in different ways, make mistakes, talk in literary and non-literary language. For certain subject domains or certain language pairs there may be no training data at all, or critically too little of it,"
says Danielyan.

New machine learning methods, designed to work well even with a limited amount of relevant data, are meant to change the situation. Thus transfer learning, a method in which the neural network is first trained on data from available sources, for example news texts, is used more and more often.
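The pretrain-then-adapt idea behind transfer learning can be sketched with a deliberately crude word-count classifier. All texts, labels and class names below are invented for illustration; real systems fine-tune neural networks, not frequency counters, but the shape of the workflow is the same: learn from plentiful source data, then continue training on a small in-domain set.

```python
from collections import Counter

def train(examples):
    """Count word frequencies per class (a crude stand-in for pretraining)."""
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def fine_tune(model, examples):
    """Continue training the same counts on a small target-domain set."""
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def classify(model, text):
    """Pick the class whose learned counts best overlap the input words."""
    words = text.lower().split()
    return max(model, key=lambda lbl: sum(model[lbl][w] for w in words))

# "Pretrain" on plentiful news-style data, then adapt with two in-domain lines.
source = [("the bank raised interest rates", "finance"),
          ("the team won the match", "sport")]
model = train(source)
model = fine_tune(model, [("loan payment overdue", "finance"),
                          ("goal scored in overtime", "sport")])
print(classify(model, "overdue loan at the bank"))  # finance
```

The point of the sketch is that the fine-tuning set can be tiny, because most of the signal was absorbed from the source data.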

"Once the architecture is selected, such a network can be further trained on a limited set of relevant data. Transfer learning is already actively used by banks, car manufacturers, medical organizations and other companies in various tasks."

"Now, to simplify solving new tasks, knowledge transfer techniques are used, including the above-mentioned pretraining (in the case of the BERT model)."

"Transferring universal solutions to a specialized domain can be very difficult and can be accompanied by a considerable decline in processing quality. For example, universal named entity recognition systems work with quality above 90%. However, in a specific subject domain additional types of named entities must be extracted: in computer security, for example, the names of viruses, hackers, computer equipment and programs. And that means everything must be done over again: labeling a training collection, or writing rules for extracting the new entity types."
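The rule-writing fallback for new entity types can be sketched as a minimal gazetteer lookup. The entity names and labels below are hypothetical, chosen only to illustrate the security-domain example; production NER combines such dictionaries with trained models.

```python
import re

# Hypothetical domain gazetteers for a computer-security corpus
# (entries are illustrative, not a real threat database).
GAZETTEERS = {
    "MALWARE": {"wannacry", "petya"},
    "SOFTWARE": {"windows", "apache"},
}

def tag_entities(text):
    """Return (token, label) pairs via simple dictionary lookup; 'O' = no entity."""
    tags = []
    for token in re.findall(r"\w+", text.lower()):
        label = next(
            (lbl for lbl, words in GAZETTEERS.items() if token in words), "O"
        )
        tags.append((token, label))
    return tags

print(tag_entities("WannaCry hit unpatched Windows hosts"))
```

The sketch also shows why the quote calls this "doing everything over again": each new domain needs its own dictionaries or labeled data.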

A serious problem, in Lukashevich's opinion, is also transferring solutions tuned on high-quality texts to texts from social networks, which are full of typos and abbreviations, and which do not follow the rules of sentence syntax or of capitalization (upper- vs lowercase writing).

Specialists at the Artificial Intelligence Research Center of the Program Systems Institute of RAS (IPS RAS) point out that identifying the meaning of a text matters not only in itself, but also so that the information extracted from the text can be used to build descriptions of knowledge about the subject domain or the situation under analysis. The question of how exactly to integrate the information (the textual facts) extracted from texts into a system's knowledge is still open, the scientists say, and today the main means of expressing subject-domain knowledge in information extraction tasks are contextual rules and ontologies. One of the promising research directions, the AI center of IPS RAS believes, is closer integration of rules and ontologies, including the development of a query language over ontologies.

"The active use of chatbots and self-governed devices stimulates the development of logical artificial intelligence, which combines not only accumulated data but also logical rules. The value of such algorithms is that they can find an optimal solution in a rather short time, and building them does not necessarily require data, as machine learning does: a set of rules and mechanisms for forming them is enough. At Foresight we have a special logical AI division that develops such technologies, and quite a large number of companies show interest in them,"
notes Alexey Vyskrebentsev, head of the solution expertise center at Foresight.

The challenges facing the industry

Special problems of applying deep neural networks

"With a specific problem statement it can turn out that there are no training data for the task, and labeling them is too expensive. For example, everyone knows that text classification is solved by machine learning methods. But there is an important precondition: there has to be a training set. And it turns out that in many situations where automatic text classification would be desirable, it is impossible to create and maintain the corresponding training collection within reasonable money and/or time."

Where is the difficulty?

"And after that, opportunities open up for creating a training set for a machine learning system, if one is needed,"
comments Natalya Lukashevich.

"In text processing this means, in particular, that it is very difficult to correct systematic errors. So it is useful to remember that there are also other directions of artificial intelligence, for example knowledge representation and logical inference, through integration with which the quality and reliability of neural network results can grow significantly,"
notes Natalya Lukashevich.

"The key barrier to NLP development is the lack of training data. The scope of NLP in business is constantly expanding, and the technologies are applied to ever more difficult tasks. But as a rule, most documents for specific business scenarios are trade secrets or contain personal data, and the available documents quickly become outdated. This limits business in creating public resources and libraries. A company cannot take data from a client and then publish the training results; for that to become possible, the data must be depersonalized. Besides, companies are reluctant to share their research results, since competitors can use them,"
believes Tatyana Danielyan.

The context problem

An ordinary person's level of "understanding" of events, including literary texts, depends significantly on knowledge of context. In fact, people take much of the knowledge needed to understand a given text from memory. How is this issue resolved in today's NLP systems?

"Accounting for context is necessary when expanding a search query (the task being for the system to automatically add to the query some more words by which relevant documents can be found), in translation, in choosing the sense of an ambiguous word, and in named entity recognition (is Moscow in a specific context the city or the river?)."

The breakthrough of 2018, among other things, led to progress in representing a word in context: words (or other language units) are represented as vectors (sets of numbers, from 100 to 1000 elements), and this vector depends on the context. Such an approach is implemented, for example, in the BERT model and those like it.
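How a context-dependent vector can arise is sketched below as a single attention step over toy 2-d embeddings. The vectors and vocabulary are invented purely for illustration; BERT stacks many such layers with learned parameters, but even this bare version gives the same word different representations in different contexts.

```python
import math

# Toy static embeddings (hypothetical 2-d vectors, illustration only).
EMB = {
    "river": [1.0, 0.0],
    "money": [0.0, 1.0],
    "bank":  [0.5, 0.5],
}

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def contextualize(sentence, target):
    """Re-represent `target` as an attention-weighted mix of all words' vectors."""
    vecs = [EMB[w] for w in sentence]
    query = EMB[target]
    weights = softmax([dot(query, v) for v in vecs])
    return [sum(w * v[i] for w, v in zip(weights, vecs)) for i in range(2)]

v_river = contextualize(["river", "bank"], "bank")
v_money = contextualize(["money", "bank"], "bank")
print(v_river != v_money)  # the same word, two different contextual vectors
```

The static vector for "bank" is identical in both sentences; it is the weighted mixing with neighbors that makes the output context-dependent.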

"This has already improved the quality of solving a number of text processing problems, for example named entity recognition and some others. Better accounting of context is also achieved with the help of the so-called attention mechanism, which significantly improved the quality of automatic translation. All these new solutions were created on the basis of deep learning neural networks,"
continues Natalya Lukashevich.

However, the expert specifies, this concerns a rather local context; the problems of accounting for the global context in a large text, or in a dialog, are still quite substantial.

"BERT can complete sentences with high accuracy based on context. Similar technologies can be used in search systems, online stores, analysis of unstructured corporate documents, etc. The context can be analyzed within a phrase, a sentence, a paragraph, or the whole text,"
says the ABBYY specialist.

According to Valentin Malykh, "understanding" of context by a computer system is now one of the central tasks of NLP.

"Big knowledge graphs allow solving some of the related problems, above all in situations that involve well-known facts, for example that water is wet, or that the Earth revolves around the Sun. But this does not remove all problems, for example where some logical inference is required,"
says Malykh.

He gives an example: Vasya fell into the water and was then hospitalized. It is obvious to a person that Vasya most likely caught a cold after getting into the water, but for the machine to "guess" this is immeasurably harder.

"Large language models such as BERT can catch some connection between water and a cold, but implementing logical inference, as a rule, requires more specialized models. So far no breakthrough has occurred in this area, and all the 'successful' answers of the likes of Yandex's Alice are either the result of a pre-written scenario or a good choice from a pre-made list of answers,"
summarizes Valentin Malykh.

The main challenges for researchers and developers in NLP

"The main challenge for researchers is the polysemy and complexity of language, as well as the lack of training data. The significant research directions are information extraction for decision making; predictive analytics, where one can predict the further behavior of a person or a company relying on cause-and-effect relations; and intelligent search."

"The challenge for researchers is, in a categorical sense, the problem of the computer understanding a person. In that formulation the problem is unsolved now, but singling out separate subtasks lets us approach this cherished goal by solving them. Currently most of the efforts of the community and of our laboratory are directed at applied NL processing problems, for example recognizing named entities in text. In the longer term we will work on question answering systems able to use knowledge from unstructured sources. That, it would seem, is much closer to the computer understanding a person, but it is still a matter of the future."

Why can't a smart program "understand text the way a person does"?

The layman's question of whether a program can understand text as well as a person usually draws a smile from specialists: they think in other categories.

"For IT companies serving business, the task of creating a computer system that functions exactly 'like a person' usually does not arise. And this is not specific to information technology,"
explains Anna Vlasova, head of the linguistics department at Nanosemantika Laboratory.

"But these are different R&D directions. In the same way, in the IT industry there is a place both for research into human understanding of NL, and for research whose purpose is creating computer technologies that imitate this understanding to a certain degree."

"For example, intelligent dialogue systems solve the problem of mass service for end consumers of goods or services: consultations, answers to frequent questions, recommendations in selecting goods or services, help with transactions, etc."

Besides, the expert points out, a complex computing system has a mass of capabilities that a person lacks in principle: for example, instantly comparing the purchase history of a specific person with that of selected representative user groups, or literally "remembering" all previous dialogs with each specific person and drawing information from them.

"Naturally, all developers derive the maximum benefit from these capabilities and do not discard them for the sake of a 'purely human' model of dialogue behavior. There is no final answer here, and no universal solution either,"
says Anna Vlasova.

But at the moment, the expert notes, there is a visible trend toward combining different types of language models and switching between them at different points in the communication, depending on the task the system wants to solve at that point.

For example, if the task is to promote a specific product or service, then scenario models with preset system behavior work best in the dialog; and if the task is to answer specific questions, a pretrained neural network can be connected, etc.
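Such switching between a scripted scenario engine and a trained model can be sketched as a simple dispatcher. The trigger words and component names below are hypothetical, chosen only to illustrate the routing idea; real dialogue systems use trained intent classifiers rather than keyword sets.

```python
def route(message):
    """Toy dispatcher: scripted flow for sales-like intents, model fallback otherwise."""
    SCRIPTED_INTENTS = {"buy", "order", "price"}  # hypothetical trigger words
    words = set(message.lower().split())
    if words & SCRIPTED_INTENTS:
        return "scenario_engine"   # preset dialog scenario handles the turn
    return "neural_model"          # pretrained network generates the answer

print(route("what is the price of delivery"))   # routed to the scenario engine
print(route("tell me about your history"))      # falls back to the model
```

The design point is that the two components stay independent: the router only decides which one answers at a given point in the conversation.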

"A computer system cannot 'become a person' now for the banal reason that the complexity of the human brain exceeds by two orders of magnitude the complexity of the models that modern computers can process. So strong artificial intelligence is far from us now. At the same time, weak artificial intelligence, the solution of narrow tasks, is developing quite confidently."

What shapes the prospects of the NLP segment

Understanding the limitations of today's tools of "computer understanding", professional participants in the NLP market ponder which aspects of the modern world will help practical implementations of such solutions. Some see a stimulating role in written communications, which grow ever more active as private and business life is informatized.

"People have begun to communicate more in writing. The natural reaction of business to this is to replace the person with a machine, for example for answering trivial questions,"
says Valentin Malykh.

The deployment of such systems will be helped by the development of knowledge transfer techniques, so that systems can use the large volumes of knowledge implicitly contained, for example, in the whole body of Runet web pages, the expert believes. However, a barrier to the widespread introduction of such systems is the complexity of NL, which often prevents the machine from treating the user's text correctly.

"If a virtual consultant answers instantly rather than keeping the user waiting several minutes to hear from an operator, answers correctly, and solves the problem the person came with, then such a service attracts new users and retains old ones. More users or buyers = more money for the company."

From the economic point of view, conversational intelligence allows costs to be reduced or effectively redistributed.

Experts also point out that the promising directions of NLP development should be sought not where the computer program merely tries to catch up with the human level, but in other tasks, those the computer copes with much better than a person. There are many such examples. For instance, today's IT solutions for working with huge volumes of information offer many possibilities, such as connecting data from unstructured documents with structured data.

"The number of documents, especially unstructured ones such as agreements and contracts, is huge; they already make up 80% of the total volume of data, and their number keeps growing. On the basis of these data one can draw conclusions and set priorities for technology development, and find in the general data stream the facts significant for the company's development, including for business process optimization,"
notes Tatyana Danielyan.

Besides, the volume of intelligent solutions already developed for processing various data in the interests of business is so big that this research backlog will suffice for significant progress in practical implementations.

"So in the coming years the development of conversational intelligence technologies (and the hype provoked by it) will not end, but will only grow."

The NLP system of the future: what will it be?

As for NLP systems, the world today is going through a period of interest in practical implementations of the available technologies.

"Undoubtedly, the existence of available resources (annotated corpora, electronic dictionaries) and open source code has simplified entry into the NLP market, and into NLP research, many times over. Besides, powerful pretrained models have appeared which require large volumes of text data, computing resources and professional specialists to train, yet a student can already apply them in a term paper,"
says Natalya Lukashevich.

"An NLP system simply converts text from its initial format into the structures needed to solve a given task, or extracts the specified entities from the text. All this happens through deterministic algorithms, or through models trained on data specially created for a specific task,"
explains Lukashevich.

By and large, in the 21st century the world has not advanced much on "understanding" as a property of a smart program. One should note, however, the efforts of scientists and mathematicians toward a philosophical comprehension of the phenomenon of knowledge. In particular, the term "cognitive system" appeared, describing a computer program capable of reasoning and drawing conclusions. Using this notion, AI specialists try to find a more pragmatic definition of the concept of "artificial intelligence" than the fuzzy concepts of "strong AI" and "weak AI" used today.

IBM, by the way, positions its IBM Watson project as a cognitive system, emphasizing a single platform for intelligent knowledge processing and a uniform knowledge context within which specific solutions to applied tasks are sought. However, our experts do not find this idea fruitful.

"What defines some IT system as cognitive is its capability to make decisions without human participation. But the range of validity here is small, and in the near future it is hardly worth expanding it,"
considers Georgy Lagoda, deputy CEO of Programmny Produkt Group.

He suggests not multiplying entities without need, and using the customary term "artificial intelligence technologies", bearing in mind that the specific content of these technologies changes with scientific and technical progress.

"From a technical point of view, both today and tomorrow it is much more pragmatic to stay within a conversation about functional characteristics, by which the work of AI can be measured and objectively compared with the work of a person, than to try to estimate the complexity of the internal thought processes and representation models of intelligent systems in comparison with human consciousness,"
believes Yury Vizilter, head of the division of intelligent data analysis and technical vision at the State Research Institute of Aviation Systems, professor of RAS.

In other words, today and in the near future NLP solutions will have a clearly pragmatic character, contain specific functionality, and be used in real business processes with measurable results. Perhaps it is for the best that no smart program one could "have a heart-to-heart talk with" will settle on the computer desktop? But smart programs will become the tireless "workhorses" helping people in their work.

Read Also

See also (voice assistants)