A revolution in data center construction is approaching

A new resource-centric approach to building data centers promises to improve scalability and adaptability to load while lowering the provider's capital and operating costs.

In 2014-2015, unexpectedly for most observers, ideas about what data center hardware infrastructure should look like in the long term began to change. Until quite recently there could be no talk of anything other than servers installed in 19-inch racks (rack-mounted servers). They are easy to combine into physical clusters, which minimizes the amount of network equipment and floor space, simplifies cooling, and makes economic sense.

But starting in 2015 an alternative to this seemingly solid paradigm took shape, and a still-gentle transition began from the established server-centric cluster architecture of data centers to a resource-centric architecture. The latter name is not entirely accurate, since the server is itself one of the possible resources; it is more precise to speak of a transition from coarse-grained to fine-grained architecture. The "atom" in that case is not the server but smaller components: processors, memory, and storage systems. They receive individual addresses, and from them one can freely assemble a unified infrastructure that best meets the requirements at hand.

The disaggregated infrastructure market

The role of communications in IT infrastructure

The oldest component of data center infrastructure is the 19-inch telecom rack. The design was proposed in 1890 by George Westinghouse, author of more than 400 inventions. Over its more-than-century-long history the rack, repurposed for telecommunications equipment, has changed little, though its contents have changed greatly. In 1994 Compaq was the first to place its ProLiant server in this form factor, opening the rack-mountable category, and in 2001 the now-defunct company RLX released the modern server blade. The subsequent history of blades is described in the article "Ten Years of the Blade Revolution"[1].

The cluster architecture, in the form in which it exists as of 2017, arose for historical reasons. Like any technical solution, it developed under the influence of the technological limits in force at the time of its creation. Clearly, the designs of airplanes or cars, for example, are shaped by physical and economic constraints. In infrastructure, the defining features are most often tied to the limited capabilities of communications. The infrastructure of a state, for instance, was once largely determined by the daily range of a horse-drawn coach; with the advent of other means of transport this restriction disappeared. Globalization itself is a consequence of progress in communications of every kind.

The same thing happens in computing. The choice of the server as the basic module stems from a variety of reasons, including the impossibility of providing sufficient data transfer rates across the entire space of a rack. This restriction is temporary. Research shows that in the future, silicon photonics will make it possible to abandon the "feudal fragmentation" of cluster architecture and completely rethink the treatment of system resources: in other words, to decouple the processors, memory, and storage installed in a rack and assemble from them arbitrary software-defined infrastructures on demand. All that is needed is "merely" to provide channels of 500-800 Gbit/s between processors and memory over distances of up to 1 meter, and 100-200 Gbit/s between processors and peripherals over distances of 5-100 meters.
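To put these figures in perspective, here is a minimal back-of-the-envelope sketch. The link rates are the article's; the reference value of roughly 25.6 GB/s for one DDR4-3200 memory channel is our assumption, not from the source:

```python
# Rough sanity check of the link rates quoted above.
# Assumption (not from the article): one DDR4-3200 memory channel
# moves about 25.6 GB/s, i.e. roughly 205 Gbit/s.
DDR4_CHANNEL_GBITPS = 25.6 * 8  # ~204.8 Gbit/s per channel

links = {
    "CPU <-> memory, up to 1 m": (500, 800),   # Gbit/s, from the article
    "CPU <-> periphery, 5-100 m": (100, 200),  # Gbit/s, from the article
}

for name, (lo, hi) in links.items():
    print(f"{name}: {lo}-{hi} Gbit/s "
          f"(~{lo / DDR4_CHANNEL_GBITPS:.1f}-{hi / DDR4_CHANNEL_GBITPS:.1f} "
          f"DDR4-3200 channels)")
```

Under this assumption, the proposed processor-to-memory links would carry the traffic of only a few local memory channels, which illustrates why such rates are the threshold for untying memory from the processor.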

Preconditions for disaggregation

A special term, "disaggregation," has been proposed for the forthcoming transition from the server to the rack. In this context, however, disaggregation should be understood in a particular way: as dismantling, but followed by reassembly into a new infrastructure. It would be more apt to call such a process "reaggregation," but no such word exists.

Theoretically, disaggregation yields three independent pools of resources: a pool of processors, a pool of memory, and a pool of storage systems. Each pool evolves independently of the others: processors, as is well known, are updated with the regularity of Moore's law, and new types of memory and storage keep appearing (flash; PCM, or phase-change memory). The pools can then be aggregated into software-defined infrastructures (SDI).
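To make the idea concrete, here is a minimal sketch of composing logical nodes on demand from independent pools. The types, names, and capacities below are purely illustrative assumptions, not any vendor's API:

```python
# Illustrative model of resource-centric composition: independent pools
# of processors, memory and storage from which logical "servers" are
# assembled on demand. Hypothetical structures, not a real product API.
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    capacity: int      # cores, or GB of RAM / storage
    allocated: int = 0

    def take(self, amount: int) -> int:
        if self.allocated + amount > self.capacity:
            raise RuntimeError(f"pool '{self.name}' exhausted")
        self.allocated += amount
        return amount

@dataclass
class LogicalNode:
    cores: int
    ram_gb: int
    storage_gb: int

def compose_node(cpus: Pool, mem: Pool, storage: Pool,
                 cores: int, ram_gb: int, storage_gb: int) -> LogicalNode:
    """Carve a software-defined node out of the three pools."""
    return LogicalNode(cpus.take(cores), mem.take(ram_gb),
                       storage.take(storage_gb))

# One rack's worth of disaggregated resources (illustrative numbers).
cpus = Pool("processors", capacity=512)
mem = Pool("memory", capacity=8_192)
storage = Pool("storage", capacity=200_000)

node = compose_node(cpus, mem, storage, cores=16, ram_gb=256, storage_gb=4_000)
print(node)   # LogicalNode(cores=16, ram_gb=256, storage_gb=4000)
```

The point of the sketch is that each pool can be upgraded or scaled on its own lifecycle, while logical nodes are composed and recomposed without touching the physical layout.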

The benefits of disaggregation are obvious: the dependence on the mismatched lifecycles of components and of the data center itself decreases; a data center built on these principles is better suited to upgrades; and its ability to scale and adapt to load improves. As a result, both capital (CAPEX) and operating (OPEX) costs fall.

However, complete disaggregation into three pools is possible only if sufficiently high exchange rates are provided along both paths, between processors and memory and between processors and peripherals, which remains unattainable for now; it is a matter of the future. At present, with the advent of 100-gigabit Ethernet, partial disaggregation is feasible.

The problems of memory disaggregation are covered in a separate TAdviser article.

The movement from the present day toward that future is shown in the figure below.

Disaggregation stages

A natural question arises: what is driving the abandonment of clusters in favor of disaggregated infrastructure? Two preconditions have set the stage for the transition.

The first precondition for disaggregation, and for the emergence of a qualitatively new approach to designing server racks on its basis (Disaggregated Rack-Scale Server Design), is a change in the structure of the market. With the growth of cloud services, the dominant consumers of server technologies have become the companies providing those services, known as hyperscalers. They build hyperscale data centers numbering hundreds of thousands, perhaps even millions, of servers.

In total there are 24 companies in the world belonging to the hyperscaler category, and they account for two thirds of the volume of all network services. They own about 300 hyperscale data centers, that is, data centers built on a unified scalable architecture. The figure below shows their geographical distribution.

Geographical distribution of hyperscale data centers

The analysis[2] shows that in 2017-2021 revenue from cloud services in their various forms will grow at 23-29% per year, which will drive 11% annual growth in sales of infrastructure solutions for hyperscalers. Over the same period, sales of traditional technologies for corporate systems will fall by 2%. The compound annual growth rate (CAGR) figures are shown in the figure below.

Compound annual growth rate (CAGR) dynamics

The process can be traced through the example of Google. At the dawn of the new millennium, as the dot-com bubble was inflating, the backbone of the Internet, Google included, was formed by powerful Unix midrange servers, above all Sun's 10,000 and 15,000 models. But in 2002, in utmost secrecy and quite unexpectedly for everyone, Google reoriented itself toward the then-obscure rack servers that came to be known as "yellow boxes." In time, however, every secret comes to light: as it turned out, these bright boxes, or more precisely the specialized search servers (Google Search Appliance, GSA), were manufactured by Dell.

Fourteen years passed, and in 2016 Google announced the phased discontinuation of these servers starting in 2018 and the end of their support in 2019. Going forward, the company will rely on technologies focused specifically on the needs of the network service providers known as hyperscalers, among whom it holds a leading position alongside Amazon, IBM, and Microsoft. This is one of the first pieces of evidence of the retreat from clusters and the movement toward disaggregation.

The second precondition is silicon photonics, which makes it possible to build electro-optical chips on a single silicon die and thereby carry communications within one or several racks over optical rather than electrical signals. It took IBM about 12 years to create the first working hybrid chip. A silicon photonic chip can transmit data as light pulses at speeds of up to 100 Gbps over distances of up to two kilometers. Light carries data faster than the copper cables that connect storage systems, network equipment, and servers in data centers.

The difference between disaggregation and hyperconvergence

The difference between disaggregation and hyperconvergence (HCI) lies in their relationship to virtualization. To remain at the physical layer, one needs the tools of disaggregation; if physical resources can be transformed into virtual ones, the tools of hyperconvergence are preferable.

Beyond that, one can cautiously suggest that the two approaches will have different consumers. Hypervisor software, and virtualization technology in general, have inherent limits, so HCI will most likely not spread beyond private clouds. Disaggregation has no such limits, so its natural domain is the data centers of the hyperscalers, the providers of global clouds.

Notes