Developers: IBM, Nvidia
System premiere date: November 14, 2016
Industry: Information technology
Technologies: Big Data, Supercomputers
2017: IBM developed a distributed deep learning system
As became known on August 13, IBM developed a distributed deep learning system, Distributed Deep Learning (DDL). The system runs on servers of the OpenPower family on top of the IBM PowerAI software platform and supports a number of deep learning frameworks, including Google TensorFlow, Torch, Caffe, Chainer, and Theano.[1]
The Distributed Deep Learning system can automatically distribute the computations required to train deep learning models across several physical servers equipped with graphics accelerators.
According to IBM, performance grows almost linearly as the number of compute nodes in the system increases. For example, training a test ResNet-101 model on the ImageNet-22K data set took 16 days on a single IBM S822LC server with two Nvidia Tesla P100 accelerators. Run on a network of 64 servers, the same task took only seven hours, that is, roughly 58 times less time.
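The article does not describe DDL's programming interface, so the following is only an illustrative sketch of the underlying data-parallel pattern, not IBM's DDL API. It uses the stock MultiWorkerMirroredStrategy from TensorFlow (one of the frameworks PowerAI bundles): each server computes gradients on its local GPUs and the results are averaged after every step. The ResNet101 model and the random dataset() helper are placeholders, not IBM's benchmark setup.

```python
# Minimal sketch of multi-node data-parallel training in TensorFlow.
# NOTE: this is NOT the IBM DDL API (which the article does not show);
# it only illustrates the general idea of spreading one training job
# across several servers, each with its own GPUs.
import tensorflow as tf

# Each server ("worker") is described by the TF_CONFIG environment variable;
# the strategy then averages gradients across all workers after every step.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Model and optimizer are created inside the strategy scope,
    # so their variables are mirrored on every worker.
    model = tf.keras.applications.ResNet101(weights=None, classes=1000)
    model.compile(optimizer="sgd",
                  loss="sparse_categorical_crossentropy")

# dataset() is a placeholder for a real input pipeline (e.g. ImageNet).
def dataset():
    images = tf.random.uniform([8, 224, 224, 3])
    labels = tf.random.uniform([8], maxval=1000, dtype=tf.int32)
    return tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)

model.fit(dataset(), epochs=1)
```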
As noted, DDL can be used either on servers running the PowerAI platform or in a cloud service provided by Nimbix for approximately $0.43 per hour.
2016: IBM and Nvidia created the fastest artificial intelligence for business
In November 2016, the American companies IBM and Nvidia announced what they claim is the world's fastest enterprise solution for deep learning.
The joint IBM and Nvidia product is a specialized IBM Power System S822LC server built on IBM Power8 processors and running the IBM PowerAI artificial intelligence platform.
The server, intended for high-performance computing (HPC), received a new bus based on Nvidia NVLink technology, which provides a fivefold speedup of data transfer between the CPU and the GPU. According to IBM and Nvidia, this direct hardware link between CPU and GPU more than doubled performance compared with comparable four-GPU servers on a test AlexNet neural network built on the Caffe framework.
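The figures above are IBM's and Nvidia's own measurements. As a rough, hypothetical way to see how the CPU-GPU link behaves on a given machine, the sketch below (assuming PyTorch and a CUDA-capable GPU; this is not the AlexNet/Caffe comparison from the announcement) times a pinned host-to-device copy and reports the resulting throughput, which on most servers is limited by PCIe and on the S822LC by NVLink.

```python
# Rough sketch: measuring host-to-device copy bandwidth with PyTorch.
# Not IBM's benchmark; it only shows how fast data moves over the
# CPU-GPU link (PCIe or NVLink, depending on the machine).
import time
import torch

assert torch.cuda.is_available(), "a CUDA-capable GPU is required"

size_bytes = 1 << 30                      # 1 GiB payload
host = torch.empty(size_bytes, dtype=torch.uint8).pin_memory()

# Warm-up copy so CUDA context creation does not skew the timing.
host.to("cuda", non_blocking=True)
torch.cuda.synchronize()

start = time.perf_counter()
device_copy = host.to("cuda", non_blocking=True)
torch.cuda.synchronize()                  # wait for the async copy to finish
elapsed = time.perf_counter() - start

print(f"host -> device: {size_bytes / elapsed / 1e9:.1f} GB/s")
```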
In addition to NVLink, Nvidia's contribution to the joint project with IBM consists in providing its GPU deep learning libraries: cuDNN, cuBLAS, and NCCL.
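cuDNN and cuBLAS accelerate the convolution and matrix operations inside a single GPU, while NCCL provides collective communication between GPUs. The generic PyTorch sketch below (not tied to PowerAI; the script name and torchrun launch line are illustrative assumptions) shows NCCL's typical job during training: all-reducing locally computed gradients so that every GPU ends up with their average.

```python
# Sketch of the role NCCL plays: averaging gradients across GPUs.
# Launch on a single multi-GPU node with, for example:
#   torchrun --nproc_per_node=2 nccl_allreduce.py
import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK / WORLD_SIZE / MASTER_ADDR for us;
    # the "nccl" backend routes the collective over the GPUs.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    # Pretend this tensor is a gradient computed locally on this GPU.
    grad = torch.full((4,), float(rank), device="cuda")

    # NCCL all-reduce sums the tensors from every process in place ...
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)
    # ... and dividing by the world size gives the average gradient.
    grad /= dist.get_world_size()

    print(f"rank {rank}: averaged gradient = {grad.tolist()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```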
IBM and Nvidia hope that their solution will allow computers to "think" and learn faster, becoming more human-like in this respect. Deep learning and artificial intelligence technologies in general are increasingly used in banking (for example, for face recognition), the automotive industry (for driverless cars), and retail (for building fully automated call centers capable of understanding human speech and answering questions).
"Our innovations related to IBM's use of Nvidia NVLink have created new opportunities for Power processors in the deep learning and analytics market," noted Ian Buck, vice president and general manager of Nvidia's Accelerated Computing Group.
As of November 16, 2016, the IBM PowerAI package is offered free of charge to users of the IBM Power S822LC server.[2]