
"Kama" has implemented Selectel solutions in the process of developing IT systems for drivers

Customers: Kama (JSC)

Location: Naberezhnye Chelny; Industry: Transport

Contractors: Selectel
Product: Selectel ML platform

Project date: 2024/03 - 2024/09

2024: Building a Computing Infrastructure

KAMA announced its partnership with Selectel on October 10, 2024. As part of the project, Selectel deployed a high-performance computing infrastructure on which Atom specialists are developing IT services for the future electric car, in particular the Advanced Driver Assistance Systems (ADAS) and the data platform.

"The Selectel team selected non-standard equipment for us that offers the optimal balance of performance and price," said Vladislav Ladenkov, machine learning and software operations support engineer at Atom. "They also provided a mature ML platform on which we started running experiments from day one. Our R&D teams use this platform daily. Selectel actively helps us implement new functionality, and we also extend the platform ourselves by adding our own components that are not part of the standard assembly."

The Selectel ML platform includes tools for managing ML experiments and deploying machine learning models. The solution runs on a high-performance computing infrastructure that includes graphics accelerators (GPUs). The Selectel ML team provides expert and technical support for the project. With these capabilities, the Atom team can efficiently train its ML models, which is expected to significantly improve driving safety and convenience.
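
The article does not describe the platform's programming interface. As an illustrative sketch only, the snippet below shows what GPU-backed experiment tracking and model registration typically look like against an MLflow-compatible tracking server; the endpoint, experiment name, model, and metric values are hypothetical placeholders, and it is an assumption that Selectel's ML platform exposes this exact interface.

# Illustrative sketch only: logs a GPU training run to an MLflow-compatible
# tracking server and registers the resulting model for later deployment.
# The tracking URI and experiment name are hypothetical placeholders.
import mlflow
import torch

mlflow.set_tracking_uri("https://mlflow.example.internal")  # hypothetical endpoint
mlflow.set_experiment("adas-lane-detection")                # hypothetical experiment name

device = "cuda" if torch.cuda.is_available() else "cpu"

with mlflow.start_run(run_name="baseline"):
    # Record hyperparameters and environment details for reproducibility.
    mlflow.log_params({"lr": 1e-3, "batch_size": 64, "device": device})

    model = torch.nn.Linear(128, 2).to(device)  # stand-in for a real ADAS model
    # ... training loop would run here ...
    mlflow.log_metric("val_accuracy", 0.0)      # placeholder metric value

    # Register the trained model so a deployment service can pull it from the registry.
    mlflow.pytorch.log_model(model, artifact_path="model",
                             registered_model_name="adas-lane-detection")

In a workflow of this kind, the registered model can later be retrieved from the model registry by a serving component, which corresponds to the combination of experiment management and model deployment the platform is described as providing.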

"We are pleased to support Atom's ambitious goals in developing IT systems for the future electric vehicle," said Anton Chunaev, ML product manager at Selectel. "At the start of the project, we conducted joint testing and selected a high-performance infrastructure that meets the customer's requirements. Our accumulated ML expertise, together with an in-stock reserve of the graphics accelerators (GPUs) required for such resource-intensive computing, allows us to provide reliable support for the infrastructure and scale it flexibly."