Developers: | Hewlett Packard Enterprise (HPE) |
System premiere date: | 2017/05 |
Technology: | DWH |
Memory-Driven Computing is a computing platform architecture in which memory, not the processor, is the central element, enabling previously unattainable gains in performance and efficiency.
HPE's Memory-Driven Computing architecture is a large-scale set of technologies developed by the Hewlett Packard Labs division within The Machine research project.
2017
Announcement of Memory-Driven Computing in Russia
At the end of November 2017, Croc and Hewlett Packard Enterprise presented the Memory-Driven Computing concept in Russia and explained how enterprises can prepare their local IT infrastructure for digital transformation.
Data is becoming a company's main strategic asset. Big Data, the Internet of Things and machine learning all generate an information stream whose volume will grow rapidly in the coming years. According to representatives of Croc and HPE, a next-generation infrastructure focused on high-performance computing will allow enterprises to adapt to revolutionary changes in data processing and to enter the era of digital transformation and artificial intelligence with benefit for the business. In addition, an integrated approach to data management is needed, one that accelerates analysis and reduces data storage costs.
Andrew Wheeler, Vice President and Deputy Director of Hewlett Packard Labs, described the work under way at the current (as of November 2017) stage of the project. According to him, the next generation of computing architecture will in the long term make it possible to scale computing resources to almost unlimited volumes, and to hold in memory and analyze all digital processes on the planet simultaneously. This will shorten computation times and provide almost unlimited performance for processing huge amounts of data.
The prototype of the computing architecture developed by Hewlett Packard Enterprise within The Machine research project, presented in mid-2017, has 160 TB of RAM and can work simultaneously with a volume of data three times larger than the contents of all the books held in the Russian State Library (about 160 million books). Never before could a system with shared memory process data arrays of this size, and this is only a part of the enormous potential of the Memory-Driven Computing architecture, emphasized Martin Mozer, Hybrid IT Sales and PreSales Director for the CEEMA region (Central and Eastern Europe, Middle East and Africa) and an ambassador of The Machine project. |
In turn, Croc's "smart data storage" concept helps customers increase data processing speed and the quality of analytics, shorten the time from development to market entry for new products and services, and strike a balance between performance and the volume of investment in computing infrastructure. The main idea of the concept is to separate the data used in business-critical services into "hot" and "cold" categories depending on access frequency. "Hot" data is stored on high-performance equipment, while rarely used "cold" data is kept on slower media.
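The hot/cold separation described above can be sketched as a simple tiering policy based on access recency. The 30-day threshold and the item names below are illustrative assumptions, not part of Croc's actual product:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative cutoff: data untouched for 30 days is "cold".
# In a real deployment the threshold is a business decision.
COLD_AFTER = timedelta(days=30)

@dataclass
class DataItem:
    name: str
    last_access: datetime

def classify(item: DataItem, now: datetime) -> str:
    """Return the storage tier for an item based on how recently it was used."""
    return "cold" if now - item.last_access > COLD_AFTER else "hot"

now = datetime(2017, 11, 30)
items = [
    DataItem("daily_sales_report", now - timedelta(days=1)),
    DataItem("2015_audit_archive", now - timedelta(days=700)),
]
tiers = {i.name: classify(i, now) for i in items}
print(tiers)  # {'daily_sales_report': 'hot', '2015_audit_archive': 'cold'}
```

A production tiering system would also migrate items between media when their classification changes; the sketch only covers the classification step.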
IT departments should focus on digitalization and services for end consumers. Customers need to adapt their infrastructure to digital transformation so as not to lose the fight for new markets. Together with HPE we demonstrated a technology capable of radically changing traditional views on computation speed. With this in mind, Croc is studying these innovations and aims to provide access to them, offering customers solutions and services based on advanced developments from global technology leaders such as HPE, said Valentin Gubarev, Director of the Computing Systems Department at Croc. |
Computer prototype with 160 TB of shared memory
In May 2017, Hewlett Packard Enterprise demonstrated an important milestone of The Machine research program: a computer prototype with 160 TB of shared memory. The program aims to develop a computing architecture centered on memory rather than on the processor (memory-driven computing).
Earlier, HPE had received a research grant from the U.S. Department of Energy to create a reference design for an exaFLOPS supercomputer, which would make it possible to build previously unattainable mathematical models and simulations for use in science, medicine, engineering and other fields.
To achieve exaFLOPS performance by 2022-2023, the speed, energy efficiency and density of high-performance computing systems must be increased tenfold compared with today's fastest supercomputers. To implement exaFLOPS computing with low latency, the reference design created by HPE will have to solve these problems and remove the limits on memory capacity, memory fabric scalability and bandwidth inherent in modern high-performance computing architectures.
The Memory-Driven Computing concept (computing centered on memory) forms the basis of HPE's reference design. In this architecture memory, not the processor, is the central element, which yields previously unattainable gains in performance and efficiency. The underlying technologies are being developed by the Hewlett Packard Labs division within The Machine research project.
The memory-centered architecture removes a problem inherent in traditional architectures: the inefficient interaction between the RAM subsystem, the storage system and the processors. As a result, the runtime of complex tasks drops dramatically, from days to hours, from hours to minutes, and so on, making it possible to obtain meaningful results in real time.
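The bottleneck described above, repeatedly moving data between storage and RAM before each computation, can be illustrated with a toy comparison. The dataset size and the measured times are purely illustrative and are not HPE benchmarks:

```python
import array
import os
import tempfile
import time

# A modest dataset: 2 million 8-byte integers (~16 MB).
data = array.array("q", range(2_000_000))

# Traditional path (illustrative): the data sits on storage and must be
# read back into RAM before it can be processed.
with tempfile.NamedTemporaryFile(delete=False) as f:
    data.tofile(f)
    path = f.name

t0 = time.perf_counter()
loaded = array.array("q")
with open(path, "rb") as f:
    loaded.fromfile(f, len(data))
total_disk = sum(loaded)
t_disk = time.perf_counter() - t0

# Memory-resident path: the data never leaves RAM.
t0 = time.perf_counter()
total_mem = sum(data)
t_mem = time.perf_counter() - t0

os.unlink(path)
print(f"with storage round-trip: {t_disk:.3f}s, in memory: {t_mem:.3f}s")
```

Both paths compute the same result; the difference is the extra read-and-deserialize step, which is exactly the cost a memory-centric architecture seeks to eliminate at vastly larger scales.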
The fundamental technologies underlying the reference architecture for the exaFLOPS supercomputer include an advanced memory fabric and low-power data transmission using photonics. The memory fabric is an optimal technological foundation for a broad spectrum of high-performance computing and data-intensive workloads, including Big Data and analytics. HPE is also continuing to investigate various non-volatile memory options that can be attached to the memory fabric, increasing the reliability and efficiency of exaFLOPS systems.[1]