Tinkoff VoiceKit

Product
Developers: Tinkoff Bank
Release date: 2019/07/23
Technology: Information security - biometric identification; speech technologies


2019: Tinkoff VoiceKit sales start

On July 23, 2019, Tinkoff announced that it had begun selling its own Tinkoff VoiceKit speech technologies, which convert speech to text and synthesize speech from text, to corporate customers.

Tinkoff VoiceKit

According to the company, the Tinkoff VoiceKit speech technologies are deep neural network models for speech synthesis and recognition that Tinkoff has developed over recent years as part of its AI First strategy; the same models were used to create Oleg, the company's own financial voice assistant.

The Tinkoff VoiceKit technology can be used, for example, for:

  • Building your own voice assistants
  • Creating robots to automate call centers
  • Quickly recording audiobooks, voicing over and editing videos
  • Building voice analytics systems on transcribed texts, for example to monitor operator performance in call centers
  • Creating applications for people with disabilities
  • Transcribing recordings of public speeches
  • Full-text search across audio and video

Tinkoff will provide the technology to educational institutions and students free of charge; in this way the group plans to make an additional contribution to the Russian education system as part of developing its own educational projects, supporting all-Russian Olympiads and cooperating with leading Russian universities and educational centers.

As the company notes, Tinkoff began developing its own speech recognition technology in 2016. As of July 2019, the technology correctly recognizes up to 95% of spoken words and is trained on terabytes of data and tens of thousands of hours of human speech. It handles noisy speech over the telephone channel just as well as clean speech from high-quality sources.
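The article does not specify how the 95% figure is measured; for speech recognition, word accuracy, i.e. one minus the word error rate (WER), is the standard way to state such a claim. The sketch below is a minimal, illustrative WER computation and is not tied to Tinkoff's evaluation pipeline:

```python
# Minimal sketch: word error rate (WER) via edit distance over words.
# The article does not state which metric Tinkoff uses; word accuracy
# (1 - WER) is assumed here purely for illustration.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    ref = "please transfer one thousand rubles to my savings account"
    hyp = "please transfer one thousand rubles to my saving account"
    wer = word_error_rate(ref, hyp)
    print(f"WER: {wer:.2%}, word accuracy: {1 - wer:.2%}")
```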

Tinkoff began developing its own speech synthesis technology in 2018, building on neural network models such as WaveNet, Tacotron 2 and Deep Voice. Thanks to the knowledge and expertise in audio that Tinkoff's specialists had accumulated over the previous two years, the work on speech synthesis took only about nine months. The neural network architectures developed at Tinkoff bring the quality of the synthesized voice close to that of a human.
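WaveNet, Tacotron 2 and Deep Voice all follow a two-stage scheme: an acoustic model turns text into a mel spectrogram, and a neural vocoder turns the spectrogram into a waveform. The toy sketch below only illustrates that split; the layers and sizes are placeholders and do not describe Tinkoff's actual models:

```python
# Toy sketch of the two-stage neural TTS pipeline used by models such as
# Tacotron 2 (text -> mel spectrogram) and WaveNet (mel spectrogram -> audio).
# All layers and sizes are illustrative placeholders, not Tinkoff's models.
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    """Maps a sequence of character IDs to mel-spectrogram frames (Tacotron-like role)."""
    def __init__(self, vocab_size=256, hidden=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)

    def forward(self, char_ids):                 # (batch, chars)
        x = self.embed(char_ids)                 # (batch, chars, hidden)
        x, _ = self.rnn(x)
        return self.to_mel(x)                    # (batch, chars, n_mels)

class Vocoder(nn.Module):
    """Maps mel frames to waveform samples (WaveNet-like role, greatly simplified)."""
    def __init__(self, n_mels=80, samples_per_frame=256):
        super().__init__()
        self.upsample = nn.Linear(n_mels, samples_per_frame)

    def forward(self, mel):                      # (batch, frames, n_mels)
        audio = torch.tanh(self.upsample(mel))   # (batch, frames, samples_per_frame)
        return audio.flatten(1)                  # (batch, frames * samples_per_frame)

if __name__ == "__main__":
    text = torch.randint(0, 256, (1, 20))        # 20 dummy "characters" of input text
    mel = AcousticModel()(text)
    audio = Vocoder()(mel)
    print(mel.shape, audio.shape)                # (1, 20, 80) and (1, 5120)
```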

The Kolmogorov computing cluster was also used in the development of Tinkoff VoiceKit and the training of its neural network models.

As of July 2019, voice technologies are used in the Tinkoff group not only in the voice assistant: they also help automate service processes. About a million service calls pass through speech recognition every day, the quality of customer call handling is analyzed, and the company's own biometric system, trained on customer voices, helps screen out fraudulent activity in the call center.
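The article does not disclose how the biometric system works; voice biometrics of this kind typically compares a fixed-length speaker embedding of the incoming call with embeddings enrolled for the customer. Below is a minimal sketch of that comparison step, with the embedding model stubbed out and the threshold chosen arbitrarily:

```python
# Minimal sketch of the comparison step in voice biometrics: a speaker
# embedding of the incoming call is matched against the customer's enrolled
# embeddings by cosine similarity. The extractor and threshold are
# placeholders; the article does not describe Tinkoff's actual system.
import numpy as np

def embed_voice(audio: np.ndarray) -> np.ndarray:
    """Stand-in for a neural speaker-embedding model (here: a deterministic 192-dim vector)."""
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    return rng.standard_normal(192)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(call_audio: np.ndarray, enrolled: list, threshold: float = 0.7) -> bool:
    """Accept the caller if the call is close enough to any enrolled voiceprint."""
    call_emb = embed_voice(call_audio)
    return max(cosine(call_emb, e) for e in enrolled) >= threshold

if __name__ == "__main__":
    enrolled = [embed_voice(np.random.randn(16000)) for _ in range(3)]  # 3 enrollment clips
    incoming = np.random.randn(16000)                                   # 1 second at 16 kHz
    print("caller accepted:", verify(incoming, enrolled))
```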

"Our solutions, whether they are used for streaming recognition or batch offline processing, will be available only as APIs. Where customers need to adapt their systems or require an on-premises deployment, we plan to work with large integrators ready to take on that work. We are also preparing to release mobile SDKs for iOS and Android,"

said Vyacheslav Tsyganov, Vice President and Chief Information Officer of Tinkoff.
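As an illustration of the two delivery formats mentioned in the quote, the sketch below contrasts a batch call (send a full recording, get a transcript) with streaming recognition (send chunks, consume partial results). The recognize and streaming_recognize functions are local stubs standing in for an API client and are not Tinkoff VoiceKit's actual interface:

```python
# Hedged sketch of the two API usage patterns mentioned in the quote:
# batch (send a whole file, get a transcript) vs. streaming (send audio
# chunks as they arrive, receive partial results). The client functions
# below are local stubs, NOT the real Tinkoff VoiceKit API.
from typing import Iterable, Iterator

def recognize(audio: bytes) -> str:
    """Batch stub: a real client would send the full recording in one request."""
    return f"<transcript of {len(audio)} bytes>"

def streaming_recognize(chunks: Iterable[bytes]) -> Iterator[str]:
    """Streaming stub: a real client would keep a persistent stream open."""
    for i, chunk in enumerate(chunks, start=1):
        yield f"<partial transcript after chunk {i} ({len(chunk)} bytes)>"

if __name__ == "__main__":
    # Batch offline processing: one request per recording.
    print(recognize(b"\x00" * 32000))

    # Streaming recognition: feed audio as it is captured, consume partial results.
    live_chunks = (b"\x00" * 3200 for _ in range(5))
    for partial in streaming_recognize(live_chunks):
        print(partial)
```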