2020/12/23 09:08:19

Machine translation in the 21st century: science fiction within reach. An interview with Yulia Epiphantseva, PROMT

In an interview with TAdviser, PROMT's director of business development Yulia Epiphantseva spoke about recent trends in the development of machine translation technology and about what companies should pay attention to when choosing a machine translation solution.

Yulia Epiphantseva: "Neural network technologies are developing so rapidly that translation quality is improving literally before our eyes"

What are the key directions in the development of machine translation technology today?

Yulia Epiphantseva: The principal direction is the development of NMT (Neural Machine Translation), machine translation based on neural networks. Generally speaking, the market for machine translation technologies and solutions is more than 70 years old. If you recall the history, and the well-known phrase that machine translation is a child of the Cold War, it really did all begin with intelligence agencies: those structures were the first to need technical means for understanding the texts of the "enemy". But very quickly, in the 1960s and 1970s, new demands for automatic translation technologies appeared, thanks to the scientific and technological revolution, when automation and computerization spread widely and data moved into digital form. Scientific knowledge in digital form became physically accessible to scientists all over the world, but translation software was needed. And with the advent of personal computers, MT came to business as a means of communication and of supporting document flow between clients and partners.

We are now experiencing a new peak in the development of machine translation technology. It is connected with several factors at once: above all, with the use of neural networks and the development of deep learning. Large-scale research into neural network technologies began in the last century, but the massive growth and wide adoption happened in this one. In turn, deep learning became possible thanks to the availability of data and the maturity of hardware for accelerating the training of neural networks: graphics processors, or GPUs. GPUs offer high performance and cope efficiently with large numbers of identical tasks. Where training used to take months, a few hours are now enough. In addition, GPUs exploit parallelism, which is essential for translation tasks. What is special about the neural approach to translation? The network first encodes the sentence in the source language into an abstract set of numbers, and then decodes words from those numbers, but in the other language. When predicting the next word of a sentence, the network takes its earlier predictions into account. As a result, the context of the entire source sentence is used to choose the words of the translation more precisely, which substantially affects quality. The technologies that preceded neural networks could not take the context of the whole sentence into account, only a word's immediate surroundings.
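To make the encoder-decoder idea concrete, here is a minimal sketch in Python with PyTorch. It is an illustration only, not PROMT's production architecture: the class name, vocabulary sizes and dimensions are invented, and real systems add attention, subword tokenization and beam search.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Toy encoder-decoder translator: encode the source sentence into
    numbers, then decode target words conditioned on that encoding."""
    def __init__(self, src_vocab=8000, tgt_vocab=8000, dim=256):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the whole source sentence into an abstract vector of numbers.
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode: each step sees the source encoding and the words already produced.
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # scores over the target vocabulary at each step

model = TinySeq2Seq()
src = torch.randint(0, 8000, (1, 7))   # a 7-token "sentence" of word ids
tgt = torch.randint(0, 8000, (1, 5))
print(model(src, tgt).shape)           # torch.Size([1, 5, 8000])
```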

What difficulties does the development of neural translation technology face, and how do developers overcome them?

Yulia Epiphantseva: The first difficulty concerns the training data needed to train a translation system. The training set has to meet a number of criteria. First, there must be a lot of data: millions of sentences in the source language together with their translations. Second, the data must be structured as sentence pairs (one sentence in the source language, the other in the target language). Third, the data must be varied: hundreds of thousands of examples are needed, with contexts containing the words and phrases the network should remember.
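As an illustration of these criteria, here is a hedged sketch of a parallel corpus as sentence pairs, with one naive cleaning pass; the example pairs and thresholds are invented, and production pipelines filter far more aggressively.

```python
# A parallel corpus is structured pairs: (source sentence, target sentence).
pairs = [
    ("The agreement was signed yesterday.", "Соглашение было подписано вчера."),
    ("The agreement was signed yesterday.", "Соглашение было подписано вчера."),  # duplicate
    ("Hello.", "Это очень длинное и явно не параллельное предложение."),          # misaligned
]

def clean(pairs, max_ratio=2.0):
    """Drop exact duplicates and pairs whose lengths diverge suspiciously."""
    seen, kept = set(), []
    for src, tgt in pairs:
        ns, nt = len(src.split()), len(tgt.split())
        if (src, tgt) in seen:
            continue                                   # exact duplicate
        if max(ns, nt) / max(min(ns, nt), 1) > max_ratio:
            continue                                   # likely not a real translation pair
        seen.add((src, tgt))
        kept.append((src, tgt))
    return kept

print(len(clean(pairs)))  # 1
```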

It has to be said that machine translation was, to some extent, lucky with data. In the 1980s, Translation Memory technologies began to develop, allowing source texts and their translations to be accumulated in a structured form (sentence by sentence). Today this data is used to train neural translators. The second difficulty is that, unlike earlier technologies such as rule-based ones, NMT does not allow the translation of individual terms to be corrected in a targeted way.
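Translation Memory data is commonly exchanged in the TMX format, an XML file of translation units with one segment per language. Here is a small sketch of extracting sentence pairs from such a file with the Python standard library (the file name and language codes are assumptions):

```python
import xml.etree.ElementTree as ET

XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"  # the xml:lang attribute

def read_tmx(path, src_lang="en", tgt_lang="ru"):
    """Yield (source, target) sentence pairs from a TMX translation memory."""
    root = ET.parse(path).getroot()
    for tu in root.iter("tu"):            # one translation unit per aligned segment
        segs = {tuv.get(XML_LANG): tuv.findtext("seg") for tuv in tu.iter("tuv")}
        if src_lang in segs and tgt_lang in segs:
            yield segs[src_lang], segs[tgt_lang]

# pairs = list(read_tmx("memory.tmx"))   # ready to feed into NMT training
```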

As often happens with new inventions, NMT has its own peculiarities and limitations. One of the main problems of NMT is its handling of terminology. Users may notice that the translated text reads smoothly, but the terminology is not always consistent throughout. For example, suppose that in a certain document we need the word "agreement" to always be translated into Russian as "соглашение". But an NMT system translates it sometimes as "соглашение" and sometimes as "договор", because these translations are near-synonyms and occur in similar contexts in the data the network was trained on. For systems based on RBMT (rule-based machine translation) this problem does not exist: with RBMT dictionaries you can control the translation of practically any word. But for NMT it is a serious problem, and it stems from the very nature of the technology, which is built not on dictionaries but on parallel corpora.

At present, NMT offers no way to directly influence the translation of terminology during the training of a language model. That is why developing tools that correct terminology in the NMT output is so important.
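As a deliberately crude illustration of what such a tool does, the sketch below post-edits NMT output with a glossary. It replaces only exact word forms; as discussed later in the interview, a real tool also has to get the grammatical case and number right.

```python
import re

GLOSSARY = {"договор": "соглашение"}  # enforce this term over its near-synonym

def enforce_terminology(translation, glossary=GLOSSARY):
    """Post-edit NMT output: replace unwanted variants with the required term."""
    for variant, required in glossary.items():
        translation = re.sub(rf"\b{re.escape(variant)}\b", required, translation)
    return translation

print(enforce_terminology("Стороны подписали договор вчера."))
# -> "Стороны подписали соглашение вчера."
```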

How do professionals evaluate machine translation quality?

Yulia Epiphantseva: That is the most important question: how to evaluate translation quality! One could say this question is as old as machine translation technology itself. Experts can be involved in quality evaluation, but, first, an expert must know both the source and the target language well; second, the expert needs to understand the subject area; third, expert evaluation takes time; and finally, fourth, no expert is free from subjectivity.

That is why the question of developing automatic metrics has always been acute. I will not dwell on all the metrics in detail. I will only note that experience has shown that an automatic metric can be applied effectively when the systems being compared work on the same technology; then the comparison is relevant. One of the most popular metrics is BLEU (Bilingual Evaluation Understudy). The BLEU algorithm was developed at IBM and scores translation quality on a scale from 0 to 100 by comparing the machine translation against a human (reference) translation and counting the words and phrases they have in common. Grammar is not taken into account at all. Of course, it is immediately clear that with such a comparison two translations of equal quality will receive different scores if one of them is lexically close to the reference and the other differs from it. Another important point is that the metric is indicative only for sufficiently large texts.
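For the idea behind the metric, here is a compact from-scratch sketch of sentence-level BLEU; real evaluations rely on corpus-level tooling such as sacrebleu, which adds smoothing and standardized tokenization.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU on a 0-100 scale: geometric mean of modified
    n-gram precisions times a brevity penalty. No smoothing, so a missing
    n-gram order sends the score straight to zero."""
    cand, ref = candidate.split(), reference.split()
    log_prec = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        if overlap == 0:
            return 0.0
        log_prec.append(math.log(overlap / sum(cand_counts.values())))
    brevity = min(1.0, math.exp(1 - len(ref) / len(cand)))  # penalize short output
    return 100 * brevity * math.exp(sum(log_prec) / max_n)

print(round(bleu("the agreement was signed in Moscow yesterday",
                 "the agreement was signed in Moscow on Tuesday"), 1))  # 70.1
```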

Nevertheless, developers have long used this metric, for example to evaluate translation quality at the various "competitions" between machine translation systems, or commercially, to compare the translation results of different vendors.

You mentioned "competitions" between systems. What are they about?

Yulia Epiphantseva: Every year the Association for Computational Linguistics (ACL) holds an international conference (WMT) that brings together machine translation developers from all over the world. As part of the preparation for this event, the organizers put together practical tasks that give an idea of how the technology is developing and how translation quality is changing. Four months before the conference, the organizers provide the participants with data on which a machine translation system has to be trained. The developers then use the trained systems to translate a test corpus, also supplied by the organizers, and publish the results in a special section of the WMT website. After that, experts compare and evaluate all the translations and rank them from best to worst. In addition, translation quality is measured with the BLEU metric. The consolidated results of the expert and automatic evaluations are announced before the conference.

What are the leading solutions on the market?

Yulia Epiphantseva: If we talk about the machine translation market, there are a lot of players. Among them are well-known names, the largest IT companies in the world, such as Google, Microsoft, Amazon and others, which run many other services and have huge budgets for research and development. Besides them, there are companies such as PROMT, Systran and Tilde that focus exclusively on developing machine translation solutions. And there are also research centers and development teams at universities and even inside large translation agencies. So, as it turns out, we have many competitors.

How can one find a niche in such a tough competitive situation?

Yulia Epiphantseva: We believe it is possible, and here is why. First, it is important to understand the difference between a technology and a software product. Indeed, thanks to open-source technologies and the availability of data, anyone can build a neural language model that will, in principle, translate. But that is not yet a product. Besides, if you look at the solutions offered to companies by players such as Google, it is always about services. That is convenient and optimal for a huge number of usage scenarios, but not for all of them.
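Indeed, with open-source tools a model that translates is a few lines away. This sketch uses the Hugging Face transformers library with a publicly available OPUS-MT model (an independent open-source example, not PROMT's technology):

```python
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-ru"          # a freely available English-Russian model
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer(["The agreement was signed yesterday."],
                  return_tensors="pt", padding=True)
ids = model.generate(**batch)                 # decode step by step, as described above
print(tokenizer.batch_decode(ids, skip_special_tokens=True))
```

A model that translates, however, is still far from a product: document formats, terminology control, integration, support and security all remain to be dealt with.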

On the Russian market, PROMT is the only provider of machine translation solutions that work within the local network and do not connect to the Internet. We supply not only server solutions but also desktop NMT solutions that work offline. For some tasks, working offline is not that important. But there are situations where it is essential: when the work involves data that must not leave the corporate network, or even a specific computer, or when employees are barred from accessing cloud services for security reasons. Our solutions guarantee that no data is transferred anywhere during translation, which means it cannot reach third parties.

Another aspect is customizing a solution for a task. I said above how important it is to develop tools for correcting terminology in NMT. We happen to have such a technology, Smart Neural Dictionary, which makes it possible to influence the translation of terms. It is a smart technology: it is enough for the user to specify that "agreement" should be translated as "соглашение", and the correct translation will appear in the text in the required grammatical case and number.

How did the COVID-19 pandemic affect demand for machine translation solutions?

Yulia Epiphantseva: Against the background of the spreading pandemic, the role of machine translation has grown significantly. First, machine translation provides fast and relatively inexpensive translation of large volumes of data; second, it became the only way to quickly convey information in a regional language. And that can be very important under pandemic conditions.

Demand grew among scientists and the media for the translation of medical and pharmacological information from foreign sources. Among our clients there is a specialized service for doctors, where specialists can communicate, request articles from foreign sources and translate them automatically. According to the statistics of this service, in March and April 2020 the total number of translation requests grew 160-fold, and translation volumes 55-fold. As you can see, the figures speak for themselves.

How should a company go about choosing a machine translation solution? What needs to be considered?

Yulia Epiphantseva: Attention should be paid to several aspects: who the machine translation system is intended for, what it will be used for, and what translation quality will be required. For example, if the customer is a large company with employees in different regions, and machine translation is used for business communication and the exchange of internal documentation, then the optimal choice is a client-server solution that is integrated into the customer's network, supports the translation of documents with formatting preserved, and ensures information security. If the translation has to be as accurate as possible in terms of terminology, you should check right away whether the solution can be additionally trained. If training is possible, related questions arise: what data is needed for the training, whether the customer has this data, and on whose side the training will take place (the vendor's or the customer's). All of this affects the additional costs, in money as well as in time and human resources, of training the system.

What content do corporate users translate most often?

Yulia Epiphantseva: Today there are already a great many scenarios for using MT in business. MT is used by support services to communicate with clients from different regions and with different levels of language competence, and by large companies for communication with clients and partners, for internal communication, and for preparing presentations and reports, since the staff of global corporations often consists of specialists from different countries. Another popular scenario is the localization of services, news websites and online stores. More and more often, machine translation technologies are added to third-party applications as a supporting function. For example, in 2018 our translator was added to the devices used to check e-tickets on the railways. This was done to make it easier for conductors to communicate with the guests of the 2018 World Cup in Russia.

What factors influence the cost of enterprise solutions, and what should one be guided by when purchasing the software?

Yulia Epiphantseva: Machine translation solutions can be desktop, server, cloud or in-house. The cost of any solution is made up of launch costs and operating expenses. For example, when choosing an in-house server solution, you have to take into account the spending on hardware, namely the server equipment, which is irrelevant for cloud solutions. The price of the software includes the cost of user licenses, in some scenarios the cost of training staff to work with the system, the cost of support, and also the cost of tuning the solution if the out-of-the-box translation quality is insufficient.

Obviously, cloud solutions require less spending: they are cheaper than in-house ones. However, software running in the local network ensures data security better. So when choosing a machine translation system, one should be guided by the company's priorities and capabilities.

What tasks are machine translation developers setting themselves for the coming years?

Yulia Epiphantseva: At the latest WMT conferences, in 2019 and 2020, there was much talk about revising the system for assessing translation quality. The point is that the BLEU metric in use today, a quality measure for MT systems that shows how machine translation compares with translation done by a person, assumes sentence-by-sentence comparison: a sentence translated by the machine translation system is compared with a sentence translated by a person. But neural network technologies are developing so rapidly that translation quality is improving literally before our eyes. Experts are therefore inclined to think that it is no longer sentences that should be compared, but whole documents. So one of the current tasks is the search for new methods of evaluating machine translation at the level of the document. At the same time, it has to be admitted that the translation of documents with formatting preserved is also a task that has not been fully solved. So, on the one hand, there has been an important and considerable breakthrough in language translation technology, but many unsolved and very important tasks still remain.