
A new processor war: IBM moves on Intel and ARM


The British publication The Register is in a class of its own: it offers highly qualified commentary on new technologies, but delivers it, unlike everyone else, with a distinctly English causticity. Underscoring that special position, The Register runs under the motto "Biting the hand that feeds IT", and it is not shy about skewering its own advertisers. In 2016 it published a piece titled "IBM lifts lid, unleashes Linux-based x86 killer on unsuspecting world". The headline hints at an unfolding "core war" among processor makers.

Some background

Regardless of whether such a war is actually being waged, and if so what its possible outcomes might be, one thing must be acknowledged: there is no reason to doubt the vitality of x86, either now or in the foreseeable future. It should also be understood that the Power processors and the OpenPower initiative that IBM brings to this fight are not weapons of lethal force. And yet the emergence of one more processor ecosystem alongside the two existing ones, x86 and ARM, is likely to affect the balance of power in the computer market, and may well set a new direction for computing for years to come.

The paradigm proposed by IBM "straightens" the falling price/performance curve and restores the operation of Moore's Law (image source: PowerWire.eu)

The confrontation began in 2013. The opposing sides in this processor war are in very different positions. While OpenPower is being built almost from scratch, with the latest trends in mind, the x86 and ARM ecosystems are the product of evolutionary development and have accumulated a burdensome legacy over the years of their existence.

The first steps in the history of the x86 platform date to 1981 and the appearance of the IBM PC. No one in their right mind could have imagined then that these chips would push RISC processors aside and attain their current position. Today they serve as the platform for more than 95% of servers, yet traces of their PC past can still be seen in them.

The ARM platform is nearly ten years younger, but it too carries its own legacy: it was conceived as a low-power foundation for mobile devices, and thanks to its licensing policy it occupies, under various names, almost the entire market for gadgets and embedded systems.

Certainly both ecosystems have changed considerably along their long evolutionary paths, but it is impossible to escape one's family tree entirely, and the price is paid, most often, in excess system complexity.

As for the ecosystem IBM is creating, to be fair it must be said that it, too, was not built entirely "from scratch": its immediate ancestor is PowerPC. That processor family was a joint undertaking of IBM, Apple and Motorola, conceived as an alternative to Intel's products. PowerPC in turn is related to the IBM 801 processor created under John Cocke starting in 1975. Alas, despite the promises and enormous investment, the triple alliance's attempt to follow the path of Wintel ended in complete fiasco: Motorola left the computer market altogether, Apple eventually switched to Intel processors, and IBM continued to develop the POWER ISA, introduced in 1990 for the RS/6000 server family.

The economics of Moore's Law and the three components of the IBM ecosystem

And so we have IBM's next attempt to create its own ecosystem. Its specifics are defined by the moment: it is being undertaken in the second decade of the 21st century, when, on the one hand (the bad news), the usual long-term pattern of decline in the price/performance ratio of processors has broken down, but on the other hand (the good news), the means to preserve that pattern have appeared.

The price/performance ratio is sometimes itself called Moore's Law, which, strictly speaking, is incorrect. In its canonical form the Law postulates a regular doubling of the density of transistors on a die every 1.5 to 2 years, and nothing more. Very often it is interpreted, even, as a doubling of performance. Much is said about Moore's Law, and both those who are convinced of its inevitable death and those who believe in its immortality are right in their own way; the question is how the law is to be understood.
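The difference between the two doubling periods mentioned above is larger than it may seem. A minimal sketch (the numbers are illustrative, not from the article) of the canonical exponential-doubling model:

```python
def density(d0: float, years: float, doubling_period: float = 2.0) -> float:
    """Transistor density after `years`, assuming one doubling
    every `doubling_period` years (the canonical form of Moore's Law)."""
    return d0 * 2 ** (years / doubling_period)

# Starting from a normalized density of 1.0:
print(density(1.0, 10))                  # 2-year doubling over a decade -> 32.0
print(round(density(1.0, 10, 1.5), 1))   # 1.5-year doubling -> roughly 101.6
```

Over a single decade, the choice between a 1.5-year and a 2-year period changes the predicted density by more than a factor of three, which is one reason arguments about whether the Law "still holds" so often talk past each other.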

If we set the rhetoric aside, it is not the Law itself that is of primary interest. What matters is the price question, i.e. the price/performance ratio mentioned above, which follows from the Law and which drives the accelerated obsolescence of computer hardware. For decades this indicator fell linearly, in step with the Law, but after 2008 the slope changed, and that posed an enormous threat to the entire computer business. What happens if the need for constant upgrades disappears? It is hard to imagine the consequences if computers, like machine tools in mechanical engineering, could be operated for decades.

The manufacturers' first reaction to this change in the long-standing paradigm was the appearance of multi-core processors. However, it is not enough to put a large number of cores on one die; they still need to be kept busy. And here there is an obstacle in the form of Amdahl's Law, which limits the performance gains from parallelizing a computation. It states: if a task is divided into several parts, the total time of its execution on a parallel system cannot be less than the run time of the longest fragment.
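In its usual quantitative form, Amdahl's Law bounds the speedup by the serial fraction of the work. A short sketch (the 95% figure is my illustrative assumption, not a claim from the article):

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Upper bound on speedup when only `parallel_fraction` of the
    work can be parallelized across `n_workers` cores (Amdahl's Law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even with 95% of the work parallelizable, 128 cores yield only
# about a 17x speedup, and no number of cores can beat 1/0.05 = 20x.
print(round(amdahl_speedup(0.95, 128), 1))
```

This is exactly why "just add more cores" stopped working as a substitute for per-core gains: the serial 5% quickly dominates, however many cores the die carries.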

Perhaps the most perceptive remarks about the future came from Robert Colwell, director of DARPA's microsystems directorate: "Most likely we have reached the end of Moore's Law as a limit on the capacity of a single chip, but it is too early to speak of a limit on the growth of system performance. While CPUs and GPUs really are approaching their limit, there are many different ways to make computers faster. It is unlikely that the growth described by Moore's Law will ever stop: developers will find alternative methods of making systems quicker and more efficient."

The conclusion is obvious: new chips must be more compact, faster, cheaper and, above all, better adapted to their tasks; then Moore's Law will retain its validity from both the systems and the commercial points of view. It was apparently such reasoning about preserving the effectiveness of Moore's Law at the system level that guided IBM in setting the strategic directions for its new ecosystem.

There are three of them:

  • New solutions for improving performance
  • Readiness for Big Data and high-performance computing
  • Openness to participation

Implementing this comprehensive program will make it possible to preserve the familiar "cost efficiency" of Moore's Law by freeing it from a rigid attachment to semiconductor technology. A single, unified hardware-software stack is taking shape, encompassing semiconductor technologies, processors, firmware, operating systems, hypervisors, accelerators, tools for systems and cloud management, applications and services.

The fact that the paradigm proposed by IBM "straightens" the falling price/performance curve and restores the operation of Moore's Law suggests that we are dealing not simply with an alternative ecosystem, but with a qualitatively new approach to the foundations of computing, one that preserves the economic basis of the computer business.

A few words about Power architecture, today and tomorrow

The economic advantages of the IBM ecosystem listed above are delivered by the architecture of the Power8 processors and, in the future, by its successors: Power9, whose release is expected in 2017, and Power10, planned for 2020. Obviously, for production to be profitable, processors must be manufactured in large volumes, and therefore they must be universal. At the same time, universality limits performance.

High application performance requires specialization. The processor architecture created within OpenPower must therefore satisfy varied and often contradictory requirements. On the one hand, it must support hyper-scalable systems with effectively unlimited scaling. On the other, it must allow specialized and hybrid high-performance systems to be built on its basis, systems in particular demand as machine learning spreads. And thirdly, given its openness, it must provide freedom of choice and action to the members of the OpenPOWER Foundation, which numbered more than 250 members as of 2017.

Logic suggests repeating, at a new level, what was done when the Industry Standard Architecture (ISA) bus was created, which opened the way for IBM PC-compatible computers and the entire huge market of devices that surrounds us. This time, however, compatibility is provided not at the level of the motherboard bus, but inside the processor.

The compromise between universality and specialization can be achieved through accelerators. The idea of speeding up a universal base processor with supporting processors is not new. In Power8, the Coherent Accelerator Processor Interface (CAPI) serves to integrate chips from different vendors with different architectures. It allows the processor and accelerators, such as GPUs or FPGA-based devices, to interact directly through memory. In 2016 the OpenCAPI Consortium was organized to carry this work forward.

Given the particular value of Nvidia GPUs as accelerators, Power8 processors are equipped with ports for the new high-performance NVLink bus.

If the forecasts are to be believed, CAPI and NVLink in Power8 are only the first steps toward the new Power processor architecture. In Power9, CAPI and NVLink will be developed further and a datacenter optimization technology (Datacenter TCO Optimization) will appear, while Power10 will add technologies supporting analytics (Extreme Analytic Optimization).