What is hyperconverged infrastructure and why has it become so popular?


It is no secret that in any system, and information systems are no exception, resources are used best when they are gathered into shared pools from which they can be redistributed sensibly. In computing, optimization of resource usage began long ago with time-sharing systems; then came the turn of virtualization. In the second decade of the 21st century attention naturally shifted to hyperconvergence and disaggregation, because at the current level of technology resource pools can be created by these two alternative methods.

The difference between them lies in their relation to virtualization. If you remain at the physical layer, you have to work with disaggregation. If physical resources can be turned into virtual ones, then hyperconvergence is preferable.

Hyperconvergence makes it possible to create software-centric infrastructures in which various commodity hardware is tightly integrated, while such infrastructures are brought to market under a single vendor's brand.

The prefix "hyper" implies that plain convergent systems exist as well. Together the two types of systems form an approach that was originally called the integrated stack. What they have in common is that three principal components, networks, servers and storage, are assembled into a scalable stack and tied together by virtualization and management tools.

The short history of hyperconvergence, which began in the mid-2000s, is interesting for its evolution. The foundation was laid by the formation of alliances between several companies, which is rather atypical for this market. Together they offered so-called integrated stacks: first the VCE group (VMware, Cisco and EMC) with its Vblocks, then a group made up of VMware, Cisco and NetApp with FCoE storage networking and the FlexPod stack. They were followed by a sextet consisting of Dell, Fujitsu, HP, Microsoft, IBM and NEC with a solution built around the Microsoft Hyper-V hypervisor, as well as two independent companies: Oracle with Exalogic and HP with Matrix.

Such a unanimous, unidirectional move by the largest vendors inevitably makes one reflect on how fundamental the expected shift is.

Here is what analysts said at the time. Stefan Ried of Forrester Research:

"The tectonic plates of the whole industry have begun to move, and that means the basic rules of interaction between vendors and users are changing."

Mark Bowker of Enterprise Strategy Group:

"The gravity of the changes under way raises no questions. The whole industry is being rebuilt under the influence of what is happening at Oracle, IBM, Cisco, Microsoft and at all those who do business with them."

And indeed, the emergence of integrated stacks changed the rules of the game.[1] Previously an enterprise had no choice but to assemble systems on its own or turn to system integrators; now it became possible to buy an integrated stack ready-made, almost like a home theater. Of course, the fine connoisseurs and critics, of whom computing has plenty, will always find a mass of faults in any integrated solution. That was the case, for example, with integrated software suites for the PC: combining editors, DBMSs and spreadsheets in one product was met with hostility until the all-conquering Microsoft Office appeared, and that ended all discussion.

No doubt a ready-made complex may fall short of specialized solutions on individual metrics, but it has one main advantage: the ability to shift spending from operations (OpEx) toward investment in new solutions (CapEx), that is, toward supporting new functionality. Continuing the audio parallel, one may say that ready-made integrated stacks let you spend more money on the music rather than on the equipment that reproduces it.

Then a new generalized concept appeared: converged infrastructure (Converged Infrastructure, CI).[2] CI is normally understood as a complete, finished solution assembled from complementary ready-made components: servers, storage systems, network equipment and infrastructure management software, including tools for automation and orchestration. In some cases specialized hardware management modules can also be part of a CI. The management software and specialized modules provide centralized control of resources, which are usually virtualized and combined into shared pools distributed between different applications. This improves cost efficiency and reduces operating costs. The emergence of CI was a natural result of the evolution of corporate data centers toward private clouds.
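As a rough illustration of the pooling idea described above, here is a minimal Python sketch; all class names, node counts and capacities are made up for the example and are not tied to any vendor's product:

from dataclasses import dataclass, field

@dataclass
class Node:
    """A physical server contributing capacity to the shared pools."""
    name: str
    cpu_cores: int
    ram_gb: int

@dataclass
class ResourcePool:
    """Aggregates the capacity of many nodes into shared pools."""
    nodes: list = field(default_factory=list)
    allocations: dict = field(default_factory=dict)   # app -> (cpu, ram)

    def add_node(self, node: Node) -> None:
        self.nodes.append(node)

    @property
    def free(self) -> tuple:
        total_cpu = sum(n.cpu_cores for n in self.nodes)
        total_ram = sum(n.ram_gb for n in self.nodes)
        used_cpu = sum(c for c, _ in self.allocations.values())
        used_ram = sum(r for _, r in self.allocations.values())
        return total_cpu - used_cpu, total_ram - used_ram

    def allocate(self, app: str, cpu: int, ram_gb: int) -> bool:
        """Carve a slice of the pool out for an application, if capacity allows."""
        free_cpu, free_ram = self.free
        if cpu <= free_cpu and ram_gb <= free_ram:
            self.allocations[app] = (cpu, ram_gb)
            return True
        return False

pool = ResourcePool()
pool.add_node(Node("node-1", cpu_cores=32, ram_gb=256))
pool.add_node(Node("node-2", cpu_cores=32, ram_gb=256))
print(pool.allocate("erp", cpu=24, ram_gb=192))   # True: fits in the pool
print(pool.allocate("vdi", cpu=48, ram_gb=128))   # False: exceeds remaining CPU

The point of the sketch is only that applications draw on one shared pool rather than on the hardware of a specific box, which is what the management layer of a CI automates.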

By degree of integration, CI can be divided into three groups: template, or reference, architecture (Reference Architecture, RA); single-stack infrastructure (Single Stack Infrastructure, SSI); and true converged infrastructure (True Converged Infrastructure, TCI).

In the first case integration comes down to individual recommended configurations composed of components from different vendors. In the second, the entire hardware stack is produced and assembled into a whole, and in the third, components from different vendors are assembled into a whole, i.e. TCI combines the advantages of RA and SSI.[3]

And finally, one more logical step. It was taken by the startup Nutanix, which offered Complete Cluster, a scalable infrastructure that combines the components of a corporate system, compute resources and storage resources, in the form of uniform basic modules. Later such infrastructures came to be called hyperconverged (Hyperconverged Infrastructure, HCI).[4]

The distinctive feature of Complete Cluster and all its followers is the ability to assemble a private corporate cloud from modules without additional investment in networked storage such as SAN or NAS. The idea behind Complete Cluster apparently lay on the surface. It is well known that the success of giants such as Google, Yahoo, Amazon and Facebook rests in large part on their own proprietary building-block technologies used to construct their data centers. Why not apply similar approaches to much smaller corporate data centers with classical servers and storage systems adapted to traditional workloads? Such borrowing of experience makes sense, because similar virtualized workloads dominate in modern corporate data centers regardless of their scale.

The founders of Nutanix were among the first to pay attention to two problems of classical data centers built from servers, SAN and NAS. The first stems from legacy: data centers were designed for static workloads, while the new notion of dynamic workloads arose with server virtualization and the move to clouds. Dynamics means creating virtual machines on the fly and moving them between servers, yet under these conditions managing existing networked storage systems becomes difficult and inconvenient, and the problem can no longer be brushed aside. The number of virtual machines and the amount of data they operate on keep growing, and these quantitative changes are driven in no small part by new virtualization technologies, in particular desktop virtualization.

Another problem of modern data centers arises because they were designed around the mismatch between ever-growing processor performance and the sluggishness of hard drives, which follows from their mechanical nature. This gap was taken for granted and had to be compensated by one method or another of distributing data across disks. Then solid-state drives (SSD) appeared, with an I/O rate two to three orders of magnitude higher than that of traditional hard drives (HDD), which could have closed the gap, but the infrastructure was not ready for them: simply replacing HDDs with SSDs creates a new problem, since existing networks cannot cope with the increased demand for data transfer rates, and that makes it hard to exploit the advantages of SSDs.
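To put the "two to three orders of magnitude" claim into perspective, a back-of-the-envelope comparison helps; the figures below are typical ballpark assumptions, not measurements cited by the article:

# Rough, assumed figures for random I/O (order of magnitude only).
hdd_iops = 150          # typical 7,200 RPM hard drive: ~100-200 random IOPS
sata_ssd_iops = 75_000  # mainstream SATA SSD: tens of thousands of random IOPS

ratio = sata_ssd_iops / hdd_iops
print(round(ratio))     # ~500, i.e. between two and three orders of magnitude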

Nutanix frankly acknowledged that it borrowed the approach to horizontal scaling; its own contribution lies in extending Google's technique to corporate systems and in offering ready-to-use blocks for building them. To apply Complete Cluster to the conditions of an individual enterprise, the general idea had to be seriously reworked. Where Google File System is a custom in-house solution tailored to specific internal applications (search and e-mail), Nutanix proposes a more general solution adapted to a corporate virtualized environment, with emphasis on its special requirements: efficient data management, high availability, redundancy and recovery after failures.

The Nutanix Complete Cluster architecture was a horizontally scalable cluster assembled from high-performance nodes. Each node contains processors, memory and a local storage subsystem made up of HDDs and SSDs. A copy of the hypervisor runs on every node, and the node serves as a host for the virtual machines running on it. The resources of the individual storage subsystems are combined into a single virtualized pool by SOCS (Scale-out Converged Storage), an analog of Google File System. Through SOCS, virtual machines work with data just as if they were working with a SAN; in other words, SOCS acts as a storage hypervisor, moving data as close as possible to the virtual machine that uses it, which increases performance and reduces cost. In addition, Complete Cluster serves as a tool for horizontal scaling up to several hundred nodes.
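A minimal sketch of the data-locality idea described above, under the assumption of simple block-level replication; all names are hypothetical and this is in no way Nutanix code:

import random

class Node:
    """A hyperconverged node: compute host plus local disks."""
    def __init__(self, name):
        self.name = name
        self.local_blocks = {}          # block_id -> data stored on this node

class ScaleOutStoragePool:
    """Toy analog of a scale-out storage layer with replication and data locality."""
    def __init__(self, nodes, replicas=2):
        self.nodes = nodes
        self.replicas = replicas

    def write(self, block_id, data, local_node):
        # Keep one replica on the writing VM's node, place the others elsewhere.
        targets = [local_node] + random.sample(
            [n for n in self.nodes if n is not local_node], self.replicas - 1)
        for node in targets:
            node.local_blocks[block_id] = data

    def read(self, block_id, local_node):
        # Prefer the local replica; fall back to a remote node if needed.
        if block_id in local_node.local_blocks:
            return local_node.local_blocks[block_id], "local"
        for node in self.nodes:
            if block_id in node.local_blocks:
                return node.local_blocks[block_id], "remote:" + node.name
        raise KeyError(block_id)

nodes = [Node("node-" + str(i)) for i in range(4)]
pool = ScaleOutStoragePool(nodes)
pool.write("vm1-disk-block-42", b"...", local_node=nodes[0])
print(pool.read("vm1-disk-block-42", local_node=nodes[0])[1])  # -> local
print(pool.read("vm1-disk-block-42", local_node=nodes[3])[1])  # -> local or remote

The design choice being illustrated is that reads are served from the same node as the requesting VM whenever a local replica exists, which is what keeps the network out of the hot path.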

By 2017 the formation of HCI systems had reached a certain maturity: alongside startups, large vendors had joined in. In Russia a hyperconverged solution is offered by IBS. Its Scala-R computing platform is a fully configured module from which a data center of practically any capacity can be assembled.

The HCI market as seen by Forrester

In comparing CI and HCI a construction analogy can be used. In the first case the building is assembled from a predetermined set of large blocks supplied by allied vendors, while in the second it is built from suitable bricks available on the market. In building a CI the leading role usually belongs to the alliance member that supplies the management layer, but the others, who supply the servers, storage, networking and virtualization, are not forgotten either. HCI-class systems, by contrast, are delivered by a single vendor, which packs into the box everything needed, including such essentials as tools for backup, snapshots, deduplication, inline compression and so on. The specific make-up of the set varies between vendors; some, for example SimpliVity, use their own dedicated ASICs for acceleration.
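As an aside on one of the bundled data services just mentioned, hash-based inline deduplication can be sketched in a few lines of Python; this is a toy illustration, not how any particular vendor implements it:

import hashlib

class DedupStore:
    """Toy inline block-level deduplication: identical blocks are stored once."""
    def __init__(self):
        self.blocks = {}      # fingerprint -> block payload
        self.refcount = {}    # fingerprint -> number of logical references

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:          # new, unique block: store payload
            self.blocks[fp] = data
        self.refcount[fp] = self.refcount.get(fp, 0) + 1
        return fp                          # caller keeps only the fingerprint

store = DedupStore()
fps = [store.write(b"A" * 4096), store.write(b"B" * 4096), store.write(b"A" * 4096)]
print(len(fps), "logical blocks,", len(store.blocks), "physical blocks")  # 3 logical, 2 physical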

Whoever makes the HCI, what is created is a virtualized infrastructure, or more precisely an infrastructure that is virtualized by definition. It has the following advantages:

  • Flexibility – all feasible types of scaling are implemented.
  • Economic efficiency – both CAPEX and OPEX decrease.
  • High availability – natural redundancy of blocks and fast reconfiguration.
  • Data security – data is not tied to a single, and therefore vulnerable, location.
  • Efficient work with data – costs for storage and networking are reduced.

Virtual and physical in HCI

HCI implements three types of scaling: horizontal scaling (scale out), well known from clusters; vertical scaling (scale up), known from mainframes and Unix servers; and the new scale-through.

  • Scale out is an increase in the number of nodes.
  • Scale up is the use of ready-made blocks of greater power.
  • Scale through is building up the power of existing blocks (see the sketch after this list).
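As a rough illustration of how the three scaling paths differ, here is a toy Python model; block counts, capacities and the growth factor are arbitrary assumptions:

from dataclasses import dataclass

@dataclass
class Block:
    """A basic HCI building block with some nominal capacity."""
    capacity: int  # arbitrary performance units

class Cluster:
    def __init__(self, blocks):
        self.blocks = blocks

    @property
    def total_capacity(self):
        return sum(b.capacity for b in self.blocks)

    def scale_out(self, n, capacity=100):
        """Horizontal scaling: add more blocks of the same kind."""
        self.blocks += [Block(capacity) for _ in range(n)]

    def scale_up(self, capacity=400):
        """Vertical scaling: swap in ready-made blocks of greater power."""
        self.blocks = [Block(capacity) for _ in self.blocks]

    def scale_through(self, factor=1.5):
        """Scale through: build up the power of the blocks already in place."""
        for b in self.blocks:
            b.capacity = int(b.capacity * factor)

c = Cluster([Block(100), Block(100)])
c.scale_out(2)
print(c.total_capacity)   # 400: two more 100-unit blocks added
c.scale_through(1.5)
print(c.total_capacity)   # 600: the same four blocks, upgraded in place
c.scale_up(400)
print(c.total_capacity)   # 1600: every block replaced with a more powerful one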

Comparing the two approaches, hyperconvergence and disaggregation, one can cautiously suggest that they may have different consumers. Hypervisor software, and virtualization technology in general, has its limits, so HCI will most likely not go beyond private clouds. Disaggregation has no such limits, so its domain is the data centers of hyperscalers and the service providers of global clouds.

Classical server stack: "the Western edition" (from the presentation "Rosplatforma: Russian technologies for building server IT infrastructures and clouds")
General architecture: from the "classics" to "hyperconvergence"
Hyperconverged stack: "the sovereign edition"

Software and network solutions for hyperconverged infrastructures

Hyperconverged infrastructures (HCI) are becoming one of the most promising technologies for building private and hybrid clouds. The idea of HCI was first put forward by startups, which recognized the possibility of radical change and took the leading positions in this market segment in time. Recently, however, the industry heavyweights, who as usual were slow off the start but have ample scientific and technical potential, have been doing everything possible to claim the position befitting them.

The general situation is as follows: no matter under what brand a particular competing HCI is produced, it consists of four interconnected components – servers, storage, networking and storage virtualization for creating shared pools (Storage Pool). Perhaps the only exception in this respect is the quite recently announced three-component approach to building HCI based on Intel Xeon Scalable with the Intel C620 chipset and Optane SSD storage.

With rare exceptions, the supplier of an HCI is the company that provides the functionality, namely the first two of the four components listed above. It may manufacture the servers and/or storage itself, or it may buy them in, having practically no production of its own, i.e. being, as they now say, fabless.

As for the other two components (virtualization software and network equipment), which can be called backbone components, the picture here is entirely different. In these segments there are several undisputed leaders, and they set the rules of the game (in more detail here).

Read Also

Notes