
Nvidia GPU Cloud (NGC)

Product
Developers: Nvidia
System premiere date: 2017/10/26
Last Release Date: 2017/12/04
Technology: PaaS - Platform as a Service, Application development tools

Nvidia GPU Cloud (NGC) is a container repository for developers of artificial intelligence solutions.

2020: $40 million investment in AI applications

On March 27, 2020, Yandex announced cooperation between its Yandex.Cloud platform and Nvidia in developing artificial intelligence (AI) technologies. The partners are opening Nvidia's GPU Cloud (NGC) library of applications for machine learning, neural networks and AI to Russian companies.

Russian companies can use NGC applications to solve applied business problems: building recommendation systems, optimizing inventory management, optical inspection in manufacturing, traffic management in smart cities, and developing various applications and systems based on computer vision.

"Yandex. A cloud" and Nvidia will help to implement artificial intelligence

Ready-made applications for AI and machine learning will also help create fundamentally new products and services of "tomorrow": in driverless transport, genetic analysis and medical research, and augmented and virtual reality.

The platform " Yandex.cloud " became the first public cloud in Russia which received the status of the official partner with NGC certification from Nvidia.

The Yandex division plans to invest at least $40 million in developing the cloud platform's infrastructure and its AI tools, a company representative told Vedomosti. If demand for AI solutions grows, the investments will be increased. More than $5 million has been invested since the end of 2019, he specified.

The investments will go toward purchasing GPUs to increase the platform's computing capacity and toward expanding the team developing the platform's own AI-based services.

The platform will help businesses launch AI quickly, but Russian companies tend to build everything in-house rather than use third-party services, notes Igor Pivovarov, chief analyst of the NTI Center for Artificial Intelligence at MIPT.
[1]

2017

Functionality expanded with support for ONNX and MXNet 1.0

On December 4, 2017, NVIDIA announced Nvidia GPU Cloud (NGC) support for Nvidia Titan products, aimed at artificial intelligence (AI) researchers working on Nvidia graphics processors.

Nvidia expanded NGC's capabilities by adding software updates to the NGC container registry. Researchers now have access to a broad toolkit capable of accelerating work related to artificial intelligence and high-performance computing.

Users of Pascal-architecture TITAN GPUs can register in NGC free of charge to gain access to the full catalog of GPU-optimized deep learning and HPC tools. The list of supported platforms includes NVIDIA DGX-1, DGX Station and NVIDIA Volta instances on Amazon EC2.

The NGC container registry contains NVIDIA's optimized deep learning frameworks TensorFlow and PyTorch, HPC applications from third-party companies, NVIDIA visualization tools for HPC, and the NVIDIA TensorRT 3.0 programmable inference accelerator.

In addition to the availability of NVIDIA TensorRT in the NGC registry, NVIDIA announced the following NGC updates:

  • support for Open Neural Network Exchange (ONNX) in TensorRT;
  • support and availability of the first release of MXNet 1.0;
  • availability of Baidu's PaddlePaddle AI framework.

ONNX is an open format created by Facebook and Microsoft through which developers can exchange models between different frameworks. In the TensorRT development container, NVIDIA provides a converter that allows ONNX models to be used in the TensorRT inference engine. This simplifies the deployment of low-latency models in TensorRT.
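
As an illustration of the model exchange described above, the following minimal sketch (not part of the original announcement) exports a small PyTorch model to ONNX; the toy model and file name are hypothetical, and the TensorRT import step is only referenced in a comment because its exact API depends on the TensorRT release.

```python
import torch
import torch.nn as nn

# Hypothetical toy model, used only to illustrate the ONNX export step.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)
model.eval()

# A dummy input fixes the input shape recorded in the ONNX graph.
dummy_input = torch.randn(1, 16)

# torch.onnx.export writes a framework-neutral .onnx file; such a file is
# what the converter in the TensorRT development container consumes when
# building a TensorRT inference engine.
torch.onnx.export(model, dummy_input, "toy_model.onnx")
```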

Developers thus have access to software for every stage of AI computing: from research to development, training and deployment of applications.

Opening access to the container registry

On October 25, 2017, NVIDIA announced the availability of the NVIDIA GPU Cloud (NGC) container registry to AI solution developers around the world.

According to the company, free access to NGC's complete, easy-to-use and optimized software stack for deep learning tasks will help developers get started with deep learning programs.

The cloud service is available to users of the newly announced Amazon Elastic Compute Cloud (Amazon EC2) P3 instances based on NVIDIA Tesla V100 graphics processors.

After registering in NGC, developers can download a containerized software stack that integrates and optimizes a wide range of deep learning frameworks, NVIDIA libraries and current versions of CUDA, and that runs smoothly in the cloud or on NVIDIA DGX systems.

Deep learning developers need to complete three steps to use NGC (a sketch of these steps follows the list):

  • Create a free NGC account at www.nvidia.com/ngcsignup.
  • Launch the NVIDIA-optimized image on the cloud service provider's platform.
  • Pull containers from NGC.
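
A minimal sketch of steps 2 and 3, assuming Docker and the NVIDIA container runtime are already installed on the cloud instance; the image tag and the API-key placeholder are illustrative, and real repository paths are listed in the NGC catalog after signing in:

```python
import subprocess

# Illustrative image name; actual repositories and tags come from the NGC catalog.
IMAGE = "nvcr.io/nvidia/tensorflow:17.12"

# Log in to the NGC registry ("$oauthtoken" is the literal user name;
# the API key is generated in the NGC account settings and shown here
# only as a placeholder).
subprocess.run(
    ["docker", "login", "nvcr.io", "-u", "$oauthtoken", "-p", "<NGC_API_KEY>"],
    check=True,
)

# Pull the GPU-optimized framework container.
subprocess.run(["docker", "pull", IMAGE], check=True)

# Run it with the NVIDIA runtime so the container can see the GPUs
# (the exact flag varies between Docker and nvidia-docker versions).
subprocess.run(
    ["docker", "run", "--rm", "-it", "--runtime=nvidia", IMAGE],
    check=True,
)
```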

Main features of the NGC container registry:

  • quick access to GPU-accelerated frameworks packaged in containers:
    • NVCaffe,
    • Caffe2,
    • Microsoft Cognitive Toolkit (CNTK),
    • DIGITS,
    • MXNet,
    • PyTorch,
    • TensorFlow,
    • Theano,
    • Torch,
    • CUDA, for application development.

  • performance: the NGC container registry, configured, tested and certified by NVIDIA, gives developers optimal performance on NVIDIA GPUs running in the cloud (a quick GPU-visibility check is sketched after this list).
  • pre-integration: containers let users start developing deep learning solutions while skipping the complex and time-consuming software integration phase.
  • relevance: containers are maintained by the NVIDIA team, which guarantees that each deep learning framework is optimized for fast training on NVIDIA GPUs. NVIDIA engineers regularly optimize libraries, drivers and containers through monthly updates.
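
A quick way to confirm that a framework container actually sees the GPUs is to query the framework itself; this sketch uses PyTorch, but any of the frameworks listed above exposes an equivalent check:

```python
import torch

# Inside an NGC framework container, CUDA should already be configured;
# this prints the index and name of each GPU the container can use.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(i, torch.cuda.get_device_name(i))
else:
    print("No CUDA-capable GPU is visible to this container")
```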

Overview of the Nvidia GPU Cloud service (2017)