IBM Sequoia

Product
Developers: IBM
Last Release Date: 2014/12/12
Technology: Supercomputer

In the summer of 2012, the American IBM Sequoia supercomputer installed at Lawrence Livermore National Laboratory became the most powerful in the world, with a performance of 16.32 petaFLOPS (quadrillions of operations per second), returning leadership in this area to the USA.

Sequoia outperformed the Japanese K Computer, which, unlike in the previous rating (November 2011), had not been upgraded and retained the same maximal performance of 10.51 petaFLOPS, pushing K Computer down to second place.

The winning supercomputer uses the IBM BlueGene/Q architecture and contains about 1.5 million computing cores. The system occupies 96 racks. Sequoia is also recognized as one of the most energy-efficient supercomputers.

The fastest supercomputer in Russia, Lomonosov, took 22nd place with a maximal performance of 0.9 petaFLOPS. In total, the June Top500 list included five Russian supercomputers.


Top500 is not the only supercomputer rating: in 2010 Graph 500 appeared, which ranks systems by their ability to process Big Data. And while first place in Top500 went to a Cray system, the leader of the current Graph 500 is the IBM Sequoia supercomputer at Lawrence Livermore National Laboratory.

The Graph 500 tests measure how quickly a system can traverse random addresses in memory, the rating's developers explain. In data-intensive processing, memory throughput is often more important than raw computing speed.

During testing, the supercomputer is given a large, highly branched graph to process. The task is to discover, starting from a given vertex of the graph, all the other vertices by traversing edges. Sequoia traverses 15,263 billion edges per second, whereas in the first edition of the list in 2010 the record was only 7 billion. Nine of the ten top lines of the current Graph 500 list are occupied by IBM BlueGene/Q systems. IBM's previous supercomputers, BlueGene/L, were designed more for floating-point operations and therefore rank lower in Graph 500.
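The heart of the Graph 500 benchmark is exactly this kind of breadth-first traversal: starting from a chosen vertex, the machine must discover everything reachable from it and report how many edges it walked per second (TEPS). The sketch below is only a toy, single-threaded illustration of that idea, not the official reference code; the function name bfs_teps and the tiny example graph are invented for illustration, while real Graph 500 runs use enormous synthetic graphs distributed across the whole machine.

```python
# Toy sketch of the Graph 500 kernel idea (not the official reference code):
# one breadth-first search from a source vertex, counting traversed edges
# so that a TEPS-style rate (traversed edges per second) can be reported.
from collections import deque
import time

def bfs_teps(adjacency, source):
    """Return (parent map, traversed edge count, edges per second) for one BFS."""
    parents = {source: source}
    queue = deque([source])
    edges_traversed = 0
    start = time.perf_counter()
    while queue:
        v = queue.popleft()
        for w in adjacency.get(v, ()):
            edges_traversed += 1
            if w not in parents:          # w reached for the first time
                parents[w] = v
                queue.append(w)
    elapsed = time.perf_counter() - start
    return parents, edges_traversed, edges_traversed / elapsed

# Tiny hypothetical graph; real Graph 500 runs use synthetic graphs with
# billions of vertices spread across the entire supercomputer.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
parents, edges, teps = bfs_teps(graph, 0)
print(edges, f"{teps:.0f} edges/s")
```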

The Graph 500 list, like Top500, is published twice a year. Inclusion is voluntary, and the list has not yet gathered 500 participants. But while the first edition had only nine entries, there are now already 124. Russia is represented in the rating by four supercomputers, the highest being Lomonosov from MSU in 39th position.

2015: IBM Sequoia reproduces the Earth's tectonic processes

On December 9, 2015, IBM announced the work of scientists from the University of Texas at Austin, IBM Research, New York University and the California Institute of Technology: a realistic simulation, run on the Sequoia IBM BlueGene/Q supercomputer, of the processes inside the Earth that drive plate tectonics.

Sequoia IBM BlueGene/Q during installation (2012)

The IBM researchers and their academic partners received the Gordon Bell Prize for the most realistic simulation of processes in the Earth's interior, which may become a key to understanding the origins of earthquakes and volcanoes.

The result was achieved by means of algorithms executed on the Sequoia IBM BlueGene/Q computing system located at Lawrence Livermore National Laboratory.

The research group developed algorithms for a mathematical method known as an implicit solver, which made it possible to build a realistic model of the Earth's interior at unprecedented resolution and accuracy. The scientists were able to predict the motion of tectonic plates and the forces acting on them while simultaneously simulating processes in the Earth's interior. The model comprises more than 600 billion nonlinear equations, a major achievement in computational science and engineering.
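The team's actual solver (a massively parallel, adaptive finite-element mantle-flow code) is far beyond a short snippet. The sketch below only illustrates, on a deliberately tiny toy problem, what an "implicit solution" of a nonlinear system means: all unknowns are coupled and a Newton-Krylov iteration drives the entire residual to zero at once. The equation, grid size and names here are invented for illustration and are not the researchers' method.

```python
# Toy illustration of an implicit nonlinear solve via Newton-Krylov
# (scipy's newton_krylov), NOT the researchers' mantle-convection solver.
import numpy as np
from scipy.optimize import newton_krylov

N = 100                  # hypothetical grid size
h = 1.0 / (N + 1)

def residual(u):
    """Residual F(u) = 0 for the toy problem u'' = exp(u), u(0) = u(1) = 0."""
    u_pad = np.concatenate(([0.0], u, [0.0]))   # Dirichlet boundary values
    lap = (u_pad[:-2] - 2.0 * u_pad[1:-1] + u_pad[2:]) / h**2
    return lap - np.exp(u)

# Implicit solve: every unknown is updated simultaneously, with a Krylov
# method approximating the Jacobian action inside each Newton step.
u = newton_krylov(residual, np.zeros(N), f_tol=1e-8)
print("max |u| =", float(np.abs(u).max()))
```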

The Sequoia computing system consists of 96 IBM BlueGene/Q racks and has a theoretical peak performance of 20.1 petaFLOPS. Each rack holds 1,024 compute nodes, each carrying a 16-core, 1.6 GHz processor built on the POWER platform and designed for Big Data processing.
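These figures are mutually consistent: 96 racks of 1,024 nodes with 16 cores each give roughly the core count quoted elsewhere in the article, as the quick check below shows.

```python
# Sanity check of the configuration listed above.
racks, nodes_per_rack, cores_per_node = 96, 1024, 16
total_cores = racks * nodes_per_rack * cores_per_node
print(total_cores)   # 1,572,864 -- consistent with the ~1.5-1.6 million cores cited
```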

The group developed code that achieves 97% parallel scaling efficiency of the solver on up to 1.6 million cores, a world record. The result came from rethinking the computational approach end to end: from the mathematical model and numerical methods to the massively parallel implementation. The group created a numerical method capable of covering the wide range of scales involved in describing the Earth's mantle while at the same time efficiently exploiting the massively parallel architecture of the BlueGene/Q supercomputer.
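The article does not say which baseline the 97% figure is measured against, or whether it refers to strong or weak scaling; the snippet below merely illustrates the standard strong-scaling definition of parallel efficiency with hypothetical core counts and run times.

```python
# Strong-scaling parallel efficiency: how much of the ideal speedup is kept
# when moving from a baseline core count to a larger one (numbers are made up).
def strong_scaling_efficiency(t_base, p_base, t_p, p):
    return (t_base * p_base) / (t_p * p)

t_base, p_base, p = 100.0, 16_384, 1_572_864   # hypothetical baseline run
t_ideal = t_base * p_base / p                  # perfect (100%) scaling
t_actual = t_ideal / 0.97                      # time consistent with 97% efficiency
print(round(strong_scaling_efficiency(t_base, p_base, t_actual, p), 2))  # 0.97
```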

"This success will help answer some fundamental questions, for example, what the main drivers of plate motion are and what processes lead to strong earthquakes."
Michael Gurnis, professor and director of the Seismological Laboratory at the California Institute of Technology

"While the conventional view holds that efficiently solving systems of strongly nonlinear equations on a machine with millions of cores is nearly unattainable, we showed that step-by-step rethinking of the discretization, algorithms, solvers and implementation tools makes it possible."
Georg Stadler, professor at the Courant Institute of Mathematical Sciences, New York University

"This machinery is applicable to a much wider class of models in science and engineering that involve complex multiscale behavior."
Omar Ghattas, director of the Center for Computational Geosciences at the Institute for Computational Engineering and Sciences, professor of geological sciences and mechanical engineering at the University of Texas at Austin

"We are only beginning to show how a combination of advanced algorithms, supercomputing and analysis of the Big Data collected from sensors and Internet of Things devices can help realistically reproduce the most critical nonlinear and diverse forces of nature. We are investigating new ways to use the vast amount of available sensor data and to process it cognitively for a given problem. This will allow experts to cut the time needed to develop a solution from years to weeks or even days in any field, from the invention of new materials to the discovery of new, previously untapped energy sources."
Costas Bekas, head of the Foundations of Cognitive Computing department at IBM Research, Zurich

The authors of the scientific paper describing the research:

  • Johann Rudi – University of Texas at Austin
  • Cristiano I. Malossi – IBM Corporation
  • Tobin Isaac – University of Texas at Austin
  • Georg Stadler – New York University
  • Michael Gurnis – California Institute of Technology
  • Peter W. J. Staar – IBM Corporation
  • Yves Ineichen – IBM Corporation
  • Costas Bekas – IBM Corporation
  • Alessandro Curioni – IBM Corporation
  • Omar Ghattas – University of Texas at Austin