About technology trends in the voice and chatbot market

The article is included in the Review of the Russian market for voice and chat bots.

Technological trends

The experts interviewed noted the exceptional influence that generative neural networks, and the flagships of this field, ChatGPT above all, had on the industry in 2023.

According to Svetlana Zakharova, Director of Business Development at Just AI, the impact of generative AI on the market is difficult to compare with that of other technologies. The emergence of LLMs has, in her words, overturned the approach to product and solution development and to customer service, and this influence will only grow.

File:Aquote1.png
At the same time, we must not forget that LLMs are not only about working with text, but also with images, audio and video, the expert emphasized. So GenAI technologies are starting to develop like an unwinding spiral.
File:Aquote2.png

Vladislav Belyaev, Executive Director and co-founder of the AutoFAQ AI platform, is confident that the key event for the industry was the surge in popularity of generative neural networks associated with the emergence of ChatGPT, the fastest-growing IT product in history.

{{quote 'Initially the technology was aimed at Internet users (B2C); today we see interest from the corporate segment (B2B), the expert said. The new reality has indeed arrived, but its essence is not that neural networks will replace humans: rather, people have received a powerful tool that can change the approach to solving business problems.' }}

For example, ChatGPT writes text very well, but without human involvement it is impossible to assess its quality, reliability and originality. The same is true in the support market: neural networks handle simple user issues well, but without human participation it is impossible to process requests that require a specific scenario, compliance with regulations, or work with personal data.

Vladislav Viryasov, Director of Avantelecom, noted a particular surge in neural network and natural language processing technologies. This year, many new AI products appeared on the market both in Russia and worldwide, while existing ones shipped new releases and expanded functionality. According to the expert, this class of AI products continues to have a significant impact on many areas of business, substantially transforming some of them. AI has become an integral part of the entire ecosystem of corporate communications, and its influence will grow in the coming years, he concluded.

OpenAI, as the market leader, promptly releases new models and sets trends, and the other players catch up with it just as quickly, said Maxim Ivanov, Business Director at Sber Business Soft. In 2023, OpenAI released GPT-4 and GPT-4V, the GPT Store and GPTs, fundamentally new products for creating personalized assistants, he said. The company also introduced the GPT-4o model with support for multimodality: the model "understands" images, voice and more. In o1, the model learned to reason, check its own actions and correct errors using reinforcement learning.

In addition, LLaMA 2 was released, an Open Source model that quickly won the community's trust. Google released Gemini and RT-2 (a model for robotics), while Microsoft, Apple and Google began building models into their operating systems. AI agents, and constructors for creating them, are appearing. Sber, in turn, introduced its competitive LLM GigaChat and a set of tools for working with it. All these events have strongly influenced the market, the expert emphasized.

Alexander Krushinsky, Director of the Voice Digital Technology Department at BSS, also believes that GPT-like bots (LLMs, or generative conversational AI) are the main trend and the main technological opportunity that is still "looking" for its place in enterprise solutions. For now, according to the expert, this search is only beginning. On the one hand, LLMs open up many opportunities: you can build virtual assistants that advise on the entire body of internal company documentation without additional training, analyze customer requests to detect insults and automatically issue recommendations for improving service quality, or even replace programmers with AI agents that write technical specifications, develop systems from them and test them.

File:Aquote1.png
And it seems there are even real examples of LLMs working on these tasks. But so far all of this exists rather as early prototypes. On closer inspection, it turns out that the use of LLMs is severely limited by the size of the context window, performance, intelligence, the neural network's "hallucinations" and so on, he said.
File:Aquote2.png

Circumventing these limitations requires significant investment in the surrounding process plumbing (RAG, Human-in-the-Loop, etc.). However, the expert noted, this does not at all mean that LLMs have already hit some theoretical ceiling of applicability that business will be unable to break through. Two processes are now under way simultaneously. First, the base models themselves are improving (almost every month). Second, business is learning to work around or remove the existing limitations. GPU clusters are being built, and the boundaries of the real-world application of LLMs to routine tasks are being mapped out.
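To illustrate the kind of "plumbing" the expert describes, here is a minimal sketch of a RAG loop with a Human-in-the-Loop escalation flag. It is not tied to any specific vendor: `vector_store` and `llm_generate` are assumed, hypothetical interfaces.

```python
# Minimal RAG + Human-in-the-Loop sketch (vector_store and llm_generate are placeholders).
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    sources: list          # document fragments the reply is grounded on
    needs_review: bool     # should a human operator take over?

def answer_with_rag(question: str, vector_store, llm_generate, top_k: int = 4) -> Answer:
    # 1. Retrieve only the fragments relevant to the question, so the limited
    #    context window is not spent on the whole knowledge base.
    fragments = vector_store.search(question, top_k=top_k)
    context = "\n\n".join(f.text for f in fragments)

    # 2. Ask the model to answer strictly from the retrieved context.
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is not sufficient, say 'I don't know'.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    draft = llm_generate(prompt)

    # 3. Human-in-the-Loop: anything the model is unsure about goes to an operator.
    needs_review = (not fragments) or ("don't know" in draft.lower())
    return Answer(text=draft, sources=fragments, needs_review=needs_review)
```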

In the future, voice and chat bots will become inextricably linked with various LLM models, says Tatyana Gaponenko, Marketing Director of Nanosemantics Group of Companies.

File:Aquote1.png
In 2024 we saw the number of LLMs grow both worldwide and in Russia; over the next three years they will multiply, but clear leaders will emerge, with billions of dollars invested in them, the specialist said. As for the other models, they will be tailored to specific functionality and domains.
File:Aquote2.png

She also predicts active development of LLMs that can be deployed inside the customer's perimeter and additionally trained for specific requests and subject areas. Companies that use conversational AI will gain an advantage over competitors.

Roman Milovanov, head of chatbot and voice robot development at Satel, named the ChatGPT and Gemini releases as the main events of 2023: the updated ChatGPT showed progress in natural language processing, while Google's Gemini stood out for multitasking and for integrating AI into applications. These products have significantly affected the voice robot and chatbot market, accelerating automation, enhancing service personalization, improving the customer experience, reducing operational costs and increasing efficiency. Their emergence set new standards for the market and set the direction of AI development for the coming years.

Anton Korniliev, an expert on contact center and unified communications solutions at K2 Tech, believes that the emergence and mass adoption of GenAI solutions has greatly changed how many people approach their work tasks: what previously required an exclusively creative, "human" mind can now be entrusted to a machine, with an acceptable result that needs only minimal polishing by a professional. For example, in unified communications, GenAI now makes it possible to summarize operators' dialogues with clients in the contact center or colleagues' discussions during video conference meetings.

File:Aquote1.png
By combining GPT and RAG, we can cut time-to-market tenfold for developing and updating bot-client interaction scenarios, in terms of personalized answers to frequent informational questions, the expert stated.
File:Aquote2.png

Anton Korniliev is confident that the rapid development of GenAI has a chance to usher in a new era both in IT and in people's daily lives.

File:Aquote1.png
One of the ongoing trends is the application of AI across industries, including the B2C market. According to a study by Delovaya Sreda JSC and Rambler & Co, about one in five Russians (19%) actively uses AI in their work. AI technologies are in a sense changing the culture of business communication, helping companies and users hand part of their routine actions over to artificial intelligence and focus on important strategic tasks. For example, our VK WorkSpace platform has an AI Assistant. It can summarize video conferences, letters and chat correspondence, highlighting the key points. Such a service can be in demand in any business: it helps employees resolve work issues faster and spares them rereading long dialogues in work chats and mail, commented Dmitry Pleshakov, Product Director at VK Teams.
File:Aquote2.png

New trends

According to Svetlana Zakharova, a large number of voice cases are expected by the end of 2024, since voice is the most familiar form of communication. The development of generative AI will affect voice technologies such as TTS and other speech technologies most of all, and will also influence scenario development for dialogue solutions: companies are already bringing AI models into script development to increase lead generation.

File:Aquote1.png
In a sense, generative AI "gives a second life" to solutions that have existed on the market for 3-5 years, the expert noted.
File:Aquote2.png

Vladislav Viryasov believes that the development of NLP will be one of the leading trends of the year. Chatbots are becoming "smarter," he said, thanks to advances in natural language processing: they understand context, irony and complex queries better, making interaction more natural.

The expert also expects deep and flexible integration of voice and text robots with corporate CRM and BI systems and databases. Basic integration options are not enough for companies, especially when it comes to achieving truly significant results in automation, so customized integrations will be what works here, developed jointly by the client and the vendor with specific business processes and customer characteristics in mind.

Vladislav Viryasov also noted a trend towards using user data to create a personalized experience, which can include product recommendations, individual offers and adapting the communication style for a specific client.

Nail Akhmedzhanov, Technical Director of ELMA Bot, considers the main trend in AI development to be multimodal models, which can understand not only text but also images, video and other types of media, and respond in the same formats. In his professional opinion, this direction is especially promising in the entertainment field.

Alexander Pavlov, Managing Director of Robovoice (SL Soft), noted several trends: deep integration of GenAI with companies' knowledge bases and business systems such as CRM or ERP, as well as the development of omnichannel. According to the expert, the focus is shifting to a comprehensive strategy of customer interaction across the entire CJM: from the first touch to repeat purchases. This changes the contact points: for example, instead of surveying customers about the reasons they unsubscribe, companies predict potential churn and provide proactive service to prevent it. In addition, market players strive to create unified solutions for all interaction channels: messengers, social networks and phone calls. This is how seamless, high-quality service is achieved at all levels.

As Anna Vlasova, head of the computational linguistics department at the Nanosemantics Group of Companies, said, the following trends dominate conversational AI globally in 2024: the use of GenAI and LLMs for communicating with humans, the expansion of intelligent chatbot functionality, and Low/No-Code training of chatbots by company employees who are not programmers or technical specialists. In addition, the specialist noted a tendency to use generative conversational AI where specialized technologies have traditionally been used: document search and the classification of documents or requests.

The types of communication with AI are constantly expanding, from technical support to sales. Bots no longer simply talk to a person in voice or text channels: they are expected to work with documents and images and to integrate with all of a company's other systems (CRM, task trackers, calendars, etc.). These directions were visible in previous years, but in 2024 the technology made a leap possible and brought them to the forefront. These conclusions are based on research by consulting and analytical companies such as Gartner, as well as on industry review publications, the specialist said.

File:Aquote1.png
In 2024, judging by the requests on our market, we see that the development of the intelligent chatbot industry in Russia as a whole follows global trends, said Anna Vlasova.
File:Aquote2.png

Thus, compared with 2023, the number of requests about the possibility of developing a chatbot based on GenAI and LLMs has grown 6-7 times. Bot functionality has also expanded, although the growth in requests there was not as sharp as with GenAI.

As for Low/No-Code, according to the specialist, this approach is in demand in Russia, but the specifics of local business are such that many companies would also like the option of using program code in the chatbot's responses when necessary, or at least "tweaking" a variety of training settings of the neural network modules.

The trend towards using AI for classifying requests has been observed for several years, but 2024 brought a noticeable growth of interest in searching documents and corporate knowledge bases with GenAI.

The specialist also noted a trend towards creating platforms that let you develop chatbots, configure calls to an LLM (which may be a partner's model or a free one), integrate with various systems to obtain information, and further train the chatbots.

The trend towards stronger data security creates an interesting situation on the Russian market, Anna Vlasova emphasized. For security reasons, companies strive to deploy any IT solution inside their own perimeter. In 2024, many customers did not consider cloud deployment even at the pilot stage.

File:Aquote1.png
It would seem that this directly contradicts the popularization and growing use of the well-known LLMs, the most popular of which at the moment is OpenAI's generative model behind ChatGPT, the specialist said. ChatGPT, like other LLMs from the large vendors, cannot be deployed inside someone else's perimeter.
File:Aquote2.png

But in the end, 2024 showed that this situation only stimulates research and development in training local generative language models tailored to the customer and to its data. There are also customers ready to use cloud LLM solutions, but from domestic vendors: Sber's GigaChat or YandexGPT.

In 2024, generative AI models are letting voice robots handle call processing and sales automation, said Roman Milovanov. According to the specialist, multimodal neural networks working with text and voice open up new horizons by integrating different interaction formats and improving the user experience. Open LLMs make it easier to develop and adapt solutions, allowing companies to build quality systems faster and cheaper. Voice technology is becoming more deeply integrated into business processes, improving analytics collection, order processing and operations management.

Voice and chat bots are becoming a significant part of business, said Alexander Sidorov, lead engineer in the AMT Group contact center department. They provide new opportunities to improve services, sales and customer engagement. According to the specialist, the company considers the main technological trends to be improvements in speech recognition and synthesis and integration with various business processes.

Modern bots can generate natural speech that is difficult to distinguish from a live person. They make it possible not only to automate many typical tasks but also to improve interaction with the client by providing individual offers; they can work in voice and text channels at once, which allows the broadest possible interaction with the client.

Personalization capabilities with AI

Svetlana Zakharova noted that with the development of GenAI, significant changes are expected in biometrics and in the storage and use of datasets. This will also affect products that address the security of working with various LLMs. After all, the deeper people immerse themselves in digital technologies and applications, the more personal data the systems collect. The level of personalization is growing, but the security of the collected data is becoming an acute question, so the technologies will be followed by laws tightening the collection and use of this data. Experts observed a similar situation with automated calls. As soon as generative AI takes its place in the market and the first security-related precedents appear, along with an understanding of what the new technologies are capable of, new provisions will appear in legislation.

According to general estimates, 88% of users in Russia today have had at least one contact with an AI-based assistant, and 82% would prefer to talk to an AI assistant here and now rather than wait for an operator's response, Vladislav Viryasov said. Integration of No-Code designers with other products is therefore in demand and helps develop "empathy" and "humanity," which matters for marketing and customer service.

According to the expert, this can be seen in chatbots that communicate with the client in real time and provide personalized recommendations and suggestions. Using celebrities' biometric parameters in marketing is a rather controversial issue, since it touches not only on information security but also on ethics. There are, for example, legal precedents that prohibit using a celebrity's image or voice without permission. In addition, the practice can lead to a loss of customer trust.

Anna Vlasova confirmed that a demand has already taken shape on the market that should become a trend: services that recognize deepfakes are needed.

File:Aquote1.png
Over the past year we have seen more frequent requests for "digital doubles" of media personalities, which must not only copy their appearance but also communicate in the appropriate manner, said Tatyana Gaponenko. The requests come both from the personalities themselves and from the corporations that work with them and commission brand-ambassador avatars.
File:Aquote2.png

For example, this year Nanosemantics built a voice synthesis model for the well-known blogger Ruslan Usachev, which he uses to create content and speed up its production, and continues to work on a voice model of the Soviet announcer Yuri Levitan, commissioned by his great-grandson.

The main risks of No-Code designers, according to Roman Milovanov, are possible copyright problems and potential violations of personal security during experiments with celebrities (deepfakes). All this can increase the risks of cyber threats, litigation and unfair use of content. At the same time, new opportunities are opening up in personalization with voice bots and chatbots. For example, the technology can create customized promotions tailored to the user's emotional state and behavior, which can significantly improve the customer experience and the effectiveness of marketing campaigns.

Alexander Krushinsky believes the main obstacle to personalization now is not a lack of "chatbot empathy" but, as before, the absence of coordination between customer service processes in different channels and on different sections of the customer journey, with subsequent centralization of knowledge. Different business units of an organization may use systems that are not integrated with each other, work by different logic and hold their own sets of information.

File:Aquote1.png
The term "omnicanality" appeared 14 years ago and managed to become a trend and get tired during this time, but still very often, when contacting, the client has to repeat the same thing first to the chat bot, then to the operator who connected to the chat, and again - when you get tired of waiting for an answer in the chat and decide to call the contact center, - the expert noted.
File:Aquote2.png

Introducing AI does not by itself solve this problem; rather, it exposes it even more clearly, because the main "food" for AI is the organization's Big Data: data that AI can analyze, learn from, and that affects the result of its work. However, the AI world is young and dynamic, which leads to patchwork adoption, where within one organization a dozen AI platforms and solutions coexist independently, each covering a different area of work. Each such area accumulates its own data sets, consisting of both the source facts and the results of processing them, and these are not used outside that area.

When taking a call, the operator should see not only the client's name and date of birth but also their current parallel dialogue with the support bot, their latest actions on the site and a summary of their previous calls. Since this is a fairly large amount of information, AI can help the operator precisely here, issuing service recommendations based on the entire array of information about the client and thereby making the service truly personalized.
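A hypothetical sketch of such an operator "copilot": it gathers the client context from several sources and asks an LLM for a recommendation. All source objects and `llm_generate` are placeholders, not a specific product's API.

```python
# Hypothetical sketch: assemble a 360-degree client context and ask an LLM
# for a next-best-action recommendation for the operator. All sources are placeholders.
def recommend_for_operator(client_id: str, crm, bot_history, web_events, call_summaries, llm_generate) -> str:
    context = {
        "profile": crm.get_profile(client_id),                 # name, segment, products
        "bot_dialogue": bot_history.last_messages(client_id),  # parallel chat with the support bot
        "site_actions": web_events.recent(client_id),          # latest actions on the website
        "previous_calls": call_summaries.recent(client_id),    # summaries of earlier calls
    }
    prompt = (
        "You assist a contact-center operator. Based on the client context below, "
        "suggest the next best action and note what the client has already reported, "
        "so the operator does not ask it again:\n"
        f"{context}"
    )
    return llm_generate(prompt)
```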

File:Aquote1.png
As for empathy, it will come when the operator stops asking me what I have just told the bot in the chat and offers a solution even before I have voiced it, Alexander Krushinsky said with a smile.
File:Aquote2.png

A qualitative leap in dialogue technologies

Vladislav Belyaev does not expect a qualitative leap in AI development in the next 2-3 years. According to him, despite the powerful progress of the past year and a half, the basic principles of modern AI technologies have not changed; the main focus has been on scaling (more GPUs and data) and on engineering solutions.

File:Aquote1.png
Taking OpenAI's technologies as the example, two big updates have come out over the past 1.5 years: from GPT-4 to GPT-4o, where answers became more accurate and covered the topic better, he said. And from GPT-4o to o1-preview, the system learned to solve multi-step problems. Each of these updates showed a jump in AI quality.
File:Aquote2.png

The longer-term prospects of the field are tied to AI technologies that consume fewer computing resources and can work with causal structures rather than just statistics. For now, alternatives to LLMs as a path for AI development remain at the level of basic research and cannot show comparable quality. The expert predicts that in the next 2-3 years more and more applied AI solutions for specific "narrow" tasks will appear.

File:Aquote1.png
Generative AI is itself the alternative path that AI was meant to take, so in the near future we will watch GenAI penetrate every sphere of life, Svetlana Zakharova believes.
File:Aquote2.png

For now, according to her, most companies are still trying to understand how to live in the new "GPT reality": in which cases to use it and how to measure its efficiency. This is one of the differences from "classic AI," which most have long been working with and for which the market already has specialists, cases and a wide selection of implemented solutions.

Svetlana Zakharova recalled that Russia has very strict restrictions on the use of foreign LLMs, while the choice of domestic solutions is small. Their cost exceeds that of foreign counterparts, and only a small number of companies can afford domestic models. In this context, the expert sees the following options for players: find the funds to use domestic LLMs, find ways around the restrictions and use foreign models, or assemble a team and fine-tune an Open Source model for the needs of the business.

File:Aquote1.png
Small and medium-sized businesses are in a low-risk zone when it comes to using foreign LLMs, said Svetlana Zakharova. So the faster companies in this segment figure out how to use GenAI, the greater their competitive advantage will be.
File:Aquote2.png

File:Aquote1.png
Experts predict a transition to AGI by 2027, though it is hard to believe, Maxim Ivanov shared. I think that by then we will be able to simplify and automate quite a lot of the routine tasks we now solve without LLMs, and the cost of the technology will drop significantly as it becomes widespread.
File:Aquote2.png

Specialists have high expectations for multimodal models, which can take communications to a new level, said Nail Akhmedzhanov. In addition, according to him, foreign companies are working on LLMs for medicine, and pilot projects have already been launched in medical institutions.

File:Aquote1.png
Unfortunately, there are no such projects in Russia yet, but in the future they will make medical care more accessible and of higher quality globally, the expert hopes.
File:Aquote2.png

Robovoice experts expect further development of GenAI and wider practical application. According to Alexander Pavlov, solutions that predict user needs based on big data analysis are very promising.

File:Aquote1.png
As for alternative development paths, we are already seeing interest in interpretable AI, the expert said. This direction makes it possible not only to increase the efficiency of the technology but also to address important issues of responsibility and transparency in decision-making that arise as AI's influence on business and society grows.
File:Aquote2.png

File:Aquote1.png
It seems to me that we are living right inside a qualitative leap, said Alexander Krushinsky. We now literally have to study new AI models, new approaches to using them and new successful application cases on a daily basis to stay at the forefront of progress.
File:Aquote2.png

Developing LLMs requires a lot of money, but companies are interested in them and ready to invest, said Tatyana Gaponenko.

File:Aquote1.png
So soon we will see new LLMs emerge, tailored to a specific customer and subject area, she is sure.
File:Aquote2.png

According to the specialist, this will increase the number of communication channels (for example, VR will make it possible to talk to a customer's digital representative, an "avatar," in digital space), and new communication devices will appear: virtual rooms, even mobile phones with projectors, and the possibility of using holograms. And everywhere there will be adapted, self-developing LLMs, Tatyana Gaponenko suggests.

According to Roman Milovanov, thanks to advances in natural language processing and machine learning systems, a significant improvement in the accuracy and naturalness of chatbot communication is expected, along with a wider range of applications for the technology, from customer services to personal assistants.

Exhibition of achievements

As Dmitry Pleshakov said, a new service, AI Assistant, became available in the cloud version of the VK WorkSpace communication platform in 2024. It can summarize video conferences, letters and chat correspondence, highlighting the key points. AI Assistant can produce summaries of correspondence in the corporate mail and messenger of the VK WorkSpace platform: to do this, you forward letters or messages from a work chat to the bot in the VK Teams user application. The service gives the employee a short retelling of the conversation, indicating the participants, figures and other details mentioned in the discussion. The virtual assistant can also create an auto-summary of an online meeting in the VK WorkSpace video conferencing service: to use the function, you turn on recording during the call and then send the file to the AI Assistant bot.
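The workflow described (forward a thread to the bot, get back a recap with participants and figures) can be sketched roughly as follows. This is an illustration only, not the actual VK WorkSpace API; `llm_generate` is an assumed placeholder.

```python
# Illustrative sketch of a correspondence summarization bot
# (not the VK WorkSpace API; llm_generate is a placeholder LLM call).
def summarize_thread(messages: list[dict], llm_generate) -> str:
    """messages: [{'author': ..., 'text': ...}, ...] forwarded to the assistant bot."""
    thread = "\n".join(f"{m['author']}: {m['text']}" for m in messages)
    prompt = (
        "Summarize the conversation below. List the participants, "
        "all figures and dates mentioned, and the agreed next steps:\n\n" + thread
    )
    return llm_generate(prompt)
```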

File:Aquote1.png
The new service simplifies communication within the team and helps employees work more productively, the specialist believes. Less time is spent on routine activities such as going through mail and chat correspondence, and more time can be devoted to intellectual tasks.
File:Aquote2.png

AutoFAQ has launched AutoFAQ Xplain, a digital assistant based on controlled generative neural networks, with which you can create chatbots that instantly answer questions based on the company's existing documents. It was the first product of its class in Russia, said Vladislav Belyaev.

The product lets managers of customer service, IT support and any departments dealing with a large amount of regularly updated information quickly implement robotization tools to optimize the time spent searching corporate documents. At the same time, the system does not simply find information and copy part of the source text: it rewrites it in dialogue form and gives clarifications on request, providing links to the sources for a deeper dive if needed.

Unlike ChatGPT, whose chatbot has already been tried by the support services of many companies, AutoFAQ Xplain not only adapts the text to the user's request but also fully controls the content of the responses, since the artificial intelligence refers only to the sources of information the company provides, so the business need not worry that the chatbot will give false information or mislead the user.

AutoFAQ Xplain is useful for companies that store hundreds or thousands of pages of documents, including in corporate knowledge bases such as Wiki, Confluence, SimpleOne, Minerva, or on a website. The digital assistant saves the effort of matching questions to answers: just upload the documents to the system and you can start working. No preparation period is needed for connection; it is enough to provide documents in docx, pdf or excel format, or give access to existing knowledge bases and sites. Creating a chatbot takes one day. The solution can be deployed both in the cloud and on the customer's servers.
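The "upload documents, get answers with links to sources" pattern described above can be pictured roughly as follows. This is an illustrative sketch, not the actual AutoFAQ Xplain API; `embed`, `vector_store` and `llm_generate` are assumed placeholders.

```python
# Sketch of document ingestion and source-grounded answering
# (illustrative only; embed, vector_store and llm_generate are placeholders).
def ingest(documents: dict[str, str], embed, vector_store, chunk_size: int = 800) -> None:
    """documents: {source_name: full_text}. Split each document into chunks and index them."""
    for source, text in documents.items():
        for i in range(0, len(text), chunk_size):
            chunk = text[i:i + chunk_size]
            vector_store.add(vector=embed(chunk), payload={"source": source, "text": chunk})

def grounded_answer(question: str, embed, vector_store, llm_generate, top_k: int = 3):
    hits = vector_store.search(embed(question), top_k=top_k)
    context = "\n".join(h.payload["text"] for h in hits)
    answer = llm_generate(
        "Rewrite the relevant passages below as a dialogue-style answer to the question.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    sources = sorted({h.payload["source"] for h in hits})  # links back to the source documents
    return answer, sources
```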

By providing an instant answer to a question, the AutoFAQ Xplain digital assistant spares employees lengthy searches, saving 3 to 5 hours a week and increasing their productivity by 25%.

AutoFAQ Xplain can also serve as the basis for narrowly focused AI assistants.

Xplain AI Copilot is a personal AI assistant for contact center operators that lets the operator accurately answer any client question outside the script. It saves the team time searching for information and lets them focus on building stronger customer relationships.

Xplain Sales is a digital sales consultant for chat on the website, social networks and messengers that increases conversion to sales by up to 23%.

In 2024, on the basis of this technology, the company launched projects in production and at the pilot stage at companies such as Novosibirsk EnergySport, Trust Technologies and a number of other companies in the construction and energy sectors.

Svetlana Zakharova said that in 2023 Just AI set up a separate department with its own product stack based on GenAI: from solutions for automating routine tasks (Jay Copilot) and question-answering systems built on RAG (Knowledge Hub) to solutions for data protection when working with LLMs (Jay Guard). Just AI, she said, is one of the few companies on the market willing to supply its solutions on-premise as well as offer hybrid delivery.

File:Aquote1.png
Our solutions are being piloted in many companies, from retail to banking, the expert summed up. The main focus today is on improving customer service, accelerating back-office work and increasing the efficiency of technical specialists through tools that speed up programming.
File:Aquote2.png

Alexander Pavlov said that today bots are becoming a full-fledged MedTech tool.

File:Aquote1.png
For example, one of our clients, a telemedicine service, uses a chatbot to collect key patient health indicators in a timely manner in order to monitor the treatment of chronic diseases, which significantly increases treatment effectiveness, the expert noted.
File:Aquote2.png

According to Vladislav Viryasov, in 2023-2024 Avantelecom specialists concentrated on developing Kaspium, their own platform for configuring voice assistants. The platform makes it possible to assemble customized voice assistants based on natural language understanding technology.

File:Aquote1.png
We have improved the algorithms for processing dialogue and training dialogue models, the expert said. This made it possible to significantly increase the accuracy of the assistant's responses. We also equipped the platform with our own Datalens system, which lets you build any reports on entities and visualize them as separate graphs or dashboards.
File:Aquote2.png

In 2024, Avantelecom launched a specialized software product for medical call centers. The solution includes eight voice modules that automatically cover all typical call-processing tasks, such as booking an appointment with a doctor, calling a doctor to the home, adding a patient to a waiting list, and auto-informing patients. The specialists have configured integration of the voice assistant with medical information systems for seamless transfer of information to the institution's registry. At the moment, the solution is being piloted in several regions of the Russian Federation as part of the automation of the unified 122 service, as well as in private medical clinics.

Also at the beginning of 2024, Avantelecom put a new product into commercial operation: the SferaGPT voice analytics system, which helps track and improve contact center efficiency. The technology is based on GenAI.

Maxim Ivanov spoke about the AI assistant from Sber Business Soft, a Low-Code designer for creating virtual assistants using the GigaChat LLM.

File:Aquote1.png
You can upload your documents, and the AI assistant will quickly learn to answer any questions on its own, drawing on the LLM's general knowledge and the information in the documents, he explained.
File:Aquote2.png

Today, the company's clients use such assistants to replace the first line of technical support, as a second opinion for lawyers, for employee and student training, customer advice, sales and so on. Specialists have also begun actively using GigaChat in speech analytics products for offline analysis of contact center communications.

File:Aquote1.png
We are now working very actively on building LLM capabilities into our products, and over the past year we have released a number of new products at once, Alexander Krushinsky shared.
File:Aquote2.png

Thus, BSS has released RAG, an "adapter" to an LLM that lets it answer questions relying not on the general knowledge it was trained on but on the corporation's closed knowledge base. This functionality is now built into the company's bot platform and will be added to the knowledge base in October. Other BSS products have also seen the light: its own LLM for customers not ready to use cloud LLMs such as ChatGPT or YaGPT, and an AI supervisor that can analyze calls in speech analytics using an LLM, for example to analyze the tone of a call or identify informal errors in service. Auto-clustering is another development, which makes it possible to identify unexpected trends in the reasons for calls: for example, you can see that the share of requests about issues no one expected at all, say difficulties with the mobile application, has risen sharply.
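Auto-clustering of request texts can be sketched with a generic TF-IDF plus k-means approach. This is an assumed, simplified illustration of the idea, not the BSS implementation.

```python
# Sketch of auto-clustering call/chat transcripts to surface unexpected topics
# (generic TF-IDF + KMeans illustration, not a specific vendor's implementation).
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_requests(texts: list[str], n_clusters: int = 10):
    vectorizer = TfidfVectorizer(max_features=5000)
    X = vectorizer.fit_transform(texts)          # one TF-IDF vector per request
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    sizes = Counter(labels)
    # A cluster that is large but matches no known request category is a candidate
    # "unexpected trend" (e.g. a surge of complaints about the mobile application).
    return labels, sizes
```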

Tatyana Gaponenko spoke about the development of digital avatars: unique animated characters with full synchronization of speech, emotions, facial expressions and gestures, and with the ability to interact by voice. According to her, this direction has become one of the leading ones for Nanosemantics over the past year.

File:Aquote1.png
We developed the first projects as internal demos back in 2021; then there was an interesting project with MIPT, Snezhinka, a 3D avatar for the multimedia stand of the International Arctic Station, she recalled.
File:Aquote2.png

According to her, digital avatars truly gained popularity after the digital double of Vladimir Zhirinovsky, which Nanosemantics specialists made for the Liberal Democratic Party (LDPR), was shown at SPIEF 2023. The digital copy imitated the style of the prototype's statements, and its presentation at the forum drew a wide public response.

File:Aquote1.png
After the launch of the Zhirinovsky avatar, we began to receive more and more requests to create such avatars from various companies and famous people in Russia, said Tatyana Gaponenko.
File:Aquote2.png

In addition, Nanosemantics continues to improve its flagship product, the DialogOS platform, which allows you to create and train dialogue robots that process user requests in connected dialogue mode. The platform operates in 40 languages and comes with a knowledge base containing 3,611 dialogue scripts, 5,230 specialized dictionaries and more than 3 million adaptive questions. The correctness and literacy of speech, as well as the logic of the dialogue, are overseen by a department of 30 computational linguists.

Nanosemantics is also working on implementing new neural network modules for sentiment assessment, typo detection and topic clustering: they make it possible to assess how negative or positive a client's request is, identify misspelled words, and collect and classify information from conversations by topic.

The company's immediate plans include developing analytics that lets customers analyze user interaction in depth and track performance more accurately, and updating the NER (Named Entity Recognition) system to add more named entities, such as dates, emails, addresses and monetary amounts, in order to improve the quality of data processing and the accuracy of the assistants. Work is also under way on a catalog of common elements for building a library of standard scripts and components, which will make it possible to develop and launch new virtual assistants quickly, and on integration with telephony to create first-line voice bots that can make and handle calls without operators. This will open up new automation opportunities in areas such as customer support and marketing, Tatyana Gaponenko is sure.
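As a rough illustration of what extracting such entities involves, here is a minimal rule-based sketch for dates, emails and monetary amounts; a production NER model would replace these hand-written patterns.

```python
# Minimal rule-based sketch for a few of the entity types mentioned (dates, emails, amounts).
# A trained NER model would replace these regular expressions in a real system.
import re

PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "date": r"\b\d{1,2}[./]\d{1,2}[./]\d{2,4}\b",
    "amount": r"\b\d[\d\s]*(?:руб\.?|rub|₽|\$|€)",
}

def extract_entities(text: str) -> dict[str, list[str]]:
    return {name: re.findall(pattern, text) for name, pattern in PATTERNS.items()}

# extract_entities("Оплата 1 500 руб. до 05.11.2024, вопросы: help@example.com")
# -> {'email': ['help@example.com'], 'date': ['05.11.2024'], 'amount': ['1 500 руб.']}
```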

Roman Milovanov shared details of the functional development of Ziax, a dialogue platform for creating voice robots and chatbots. In particular, the Ziax TTS voice module was integrated into the platform; it converts text into voice using neural networks, with the option of a branded voice. The solution supports two operating modes: streaming (in real time) and offline generation of audio files. The Ziax TTS module significantly streamlines various business tasks: mass calling of an unlimited number of subscribers, customer service on the first support line, recording commercials, voicing pre-prepared text and much more. According to the specialist, Ziax solutions have already been successfully integrated at major insurance companies, banks and industrial enterprises.
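The two operating modes can be sketched as follows; `tts_engine` and its methods are assumed placeholders rather than the actual Ziax TTS interface.

```python
# Illustrative sketch of the two TTS operating modes described above
# (tts_engine is a placeholder object, not the Ziax TTS API).
def stream_speech(text: str, tts_engine, on_chunk) -> None:
    # Streaming mode: audio chunks are handed to telephony as soon as they are synthesized.
    for chunk in tts_engine.synthesize_stream(text):
        on_chunk(chunk)

def render_to_file(text: str, tts_engine, path: str) -> str:
    # Offline mode: the whole utterance is rendered into an audio file,
    # e.g. for a pre-recorded prompt or commercial.
    audio = tts_engine.synthesize(text)
    with open(path, "wb") as f:
        f.write(audio)
    return path
```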

File:Aquote1.png
Our clients have managed to cut the cost of a targeted call by a factor of 2.5, automate up to 80% of calls and increase department processing speed fivefold, he proudly noted. At the same time, spending on contact center staff was reduced by up to 50%.
File:Aquote2.png