
Computing infrastructure of the data center

The digital world of companies, production facilities, industries and territories that is being built before our eyes relies on a backbone network of data centers. These new data centers are a kind of digital "intersection" that routes processed data to the required points of a digital network. As a result, the data center is effectively losing its material form and turning into a software mechanism for manipulating digital resources. The "iron" part remains, however, and is in turn undergoing revolutionary changes. This article is part of the Technologies for Data Centers overview.


Data management is the alpha and omega of 21st-century IT.

File:Aquote1.png
Data is the key to the future, as data-driven knowledge changes the way business works and sets new tasks in every area: from the cloud to the data center and edge infrastructure. Perhaps the biggest challenge for the data center of the future is the 50 billion intelligent devices and the data streams they generate at the edge of IT, is how Mikhail Orlenko, Director of the Server and Network Solutions Department at Dell Technologies, describes the situation.
File:Aquote2.png

It should be stressed that ubiquitous clouds and edge infrastructure do not diminish the role of the traditional corporate data center at all. On the contrary, its importance only grows for workloads that are extremely sensitive to processing and transfer delays, including on-premises deployments. The corporate data center is taking on a hybrid shape, integrating transparently with different clouds, both public and private. Data flows between processing points according to the strict requirements of business systems. Moreover, the borders between these "two worlds" of the data center are being erased.

File:Aquote1.png
It is worth mentioning the emergence of new classes of multiprocessor server systems, for example systems built on the advanced 64-core AMD EPYC processors. Given the new technological features of this equipment, it also affects the engineering systems, in particular cooling and power supply (UPS). It is therefore hard to overestimate the influence of modern computing infrastructure on the other subsystems that keep a data center running, says Dmitry Chindyaskin, head of the technical directorate at AiTeco.
File:Aquote2.png

But the truly revolutionary changes are happening in the computing core of the data center: in effect, the border between hardware and software is being erased.

Software-defined everything

The software-defined principle (Software Defined) means a transition from a rigidly implemented hardware architecture to a programmable one, which immediately yields a whole range of positive consequences: a fundamentally new level of dynamism, flexibility and cost efficiency for the corresponding IT projects.

Groups of large numbers of servers based on general-purpose Intel architecture have turned into a single source of unified computing resources, so that system administrators only need to make sure there are enough of them. Software-defined storage can work effectively and reliably with disks from different vendors, and those disks can be inexpensive entry-level devices. This idea has not yet been implemented in full, but the direction of travel is obvious. In software-defined networks (Software Defined Network, SDN) control is moved into a programmable part of the network, while the devices managed by that software are responsible for forwarding traffic. As a result, the management of the network is separated from the routine work of pumping data.
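To make the control-plane/data-plane split concrete, here is a minimal, purely illustrative Python sketch (the class names and rule format are invented for this example and do not reproduce any real SDN controller's API): the controller installs forwarding rules, and the switch only matches packets against its flow table.

```python
# Minimal illustrative sketch of the SDN idea: the controller decides,
# the switch only matches and forwards. Names are invented for this example.

class Switch:
    def __init__(self):
        self.flow_table = {}          # (src, dst) -> output port

    def install_rule(self, src, dst, out_port):
        """Called by the controller: program the data plane."""
        self.flow_table[(src, dst)] = out_port

    def forward(self, packet):
        """Pure data-plane work: match a rule and forward, no decisions."""
        key = (packet["src"], packet["dst"])
        return self.flow_table.get(key, "send_to_controller")


class Controller:
    """Centralized control plane: all routing policy lives here."""
    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, src, dst, out_port):
        for sw in self.switches:
            sw.install_rule(src, dst, out_port)


sw = Switch()
ctl = Controller([sw])
ctl.apply_policy("10.0.0.1", "10.0.0.2", out_port=3)
print(sw.forward({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # -> 3
```

The point of the design is that all routing policy lives in one programmable place, while the forwarding devices stay simple and fast.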

According to Forrester, implementing SDN significantly cuts infrastructure upgrade costs: the cost of updating the network when implementing a zero-trust security strategy drops by 68%, the cost of purchasing new servers by 37%, and the labor cost of routine network administration tasks by 87%.

File:Aquote1.png
SDN also helps to unify the management of network access and security policies when moving to a hybrid or multi-cloud IT model, emphasizes Andrey Kosenko, senior business strategy consultant at VMware.
File:Aquote2.png

The idea of unified management of IT resources of different types is reflected in the umbrella concept of Software-Defined Everything (SDE). The main point of SDE is the ability to move most of the controlling intelligence of information systems into a software layer running on mass-market, inexpensive hardware. In addition to the software-defined entities mentioned above, there are, for example, the software-defined workplace (Software-Defined Workplace, SDW), software-defined security (SDsec), which is based on software-based virtual security appliances, and, finally, the software-defined data center (Software Defined Data Center).

The term SDDC was introduced by VMware in 2012 by its then chief technology officer. At the time it meant virtualizing data center infrastructure and using software technologies to rise above the specific hardware and gain the ability to automate its management. In the years since, the SDDC concept has developed considerably.

File:Aquote1.png
The most significant trend in the evolution of data centers, as I see it, is the implementation of the software-defined data center concept, in which all components (compute, storage, network and security) are implemented in software, says Andrey Kosenko. This concept brings tangible advantages: lower resource costs, better scalability, efficient use of infrastructure, faster time to market for new services, transparency of what happens in the network and an improved security model. And most importantly, automation and simple operation. In addition, the software-defined data center frees the business from dependence on a particular type of equipment and from the need to develop and maintain staff skills in each separate area.
File:Aquote2.png

The software-defined data center

File:Aquote1.png
There is a clear trend of moving from specialized systems for compute, storage and data transfer to software-defined solutions, where the infrastructure is built on ordinary x86 servers with commodity processors, memory, hard drives/SSDs and Ethernet connections, and all the logic and management are implemented at the software layer, says Oleg Lyubimov, CEO of Selectel.
File:Aquote2.png

The largest Internet companies, such as Google, Facebook and Amazon, were the first to abandon specialized systems in favor of software-defined ones, but now, the expert stresses, this is becoming the de facto standard for all service providers and for companies with in-house IT expertise.

HPE, for example, uses the notion of software-defined facilities in its software-defined data center concept. The "highlight" of this approach is that software-defined facilities are viewed from the standpoint of converging the platforms that manage facilities and IT resources. A vivid example of this convergence is that the DCIM solution for managing the data center's engineering infrastructure is integrated with IT management solutions such as HPE OneView, providing a single point of management for diverse resources and systems.

Software-defined entities in HPE's concept

A software-defined data center, as HPE understands it, is a data center in which the infrastructure is virtualized through the abstraction of software-defined resources, a single pool of these resources is maintained, and automation tooling delivers the infrastructure as a service (IaaS). Software-defined infrastructure (SDI) lets IT administrators easily deploy and manage physical infrastructure using software-defined templates and APIs for defining and automating infrastructure configuration and its lifecycle operations.
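The template-and-API approach can be illustrated with a short, purely hypothetical Python sketch; the endpoint paths, JSON fields and host name below are invented for this example and do not reproduce the actual HPE OneView API.

```python
# Hypothetical illustration of template-driven provisioning via a REST API.
# Endpoint paths and JSON fields are invented for this sketch and are NOT
# the real HPE OneView API.
import requests

API = "https://composer.example.local/api"   # placeholder address
TOKEN = {"Authorization": "Bearer <token>"}  # placeholder credentials

server_template = {
    "name": "web-tier-profile",
    "cpu_cores": 16,
    "memory_gb": 128,
    "boot": "SAN",
    "networks": ["prod-vlan-110", "backup-vlan-120"],
}

def provision_from_template(template: dict, count: int) -> list:
    """Ask the management plane to stamp out `count` servers from one template."""
    created = []
    for i in range(count):
        body = dict(template, name=f"{template['name']}-{i:02d}")
        resp = requests.post(f"{API}/server-profiles", json=body,
                             headers=TOKEN, timeout=30)
        resp.raise_for_status()
        created.append(resp.json()["id"])
    return created

if __name__ == "__main__":
    ids = provision_from_template(server_template, count=4)
    print("provisioned:", ids)
```

The same template can be applied repeatedly, which is what turns physical provisioning into a repeatable, automatable lifecycle operation.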

File:Aquote1.png
In my opinion, software-defined infrastructure, infrastructure monitoring and virtualization will deliver the maximum efficiency and economic feasibility of data center development in the first half of the 2020s, notes Stanislav Mirin, head of data center research at iKS-Consulting.
File:Aquote2.png

Andrey Kosenko is sure that the future of the data center lies with software-defined approaches: every year the digital infrastructure of a business becomes more complex, the number of applications and the resources they consume grow, requirements for availability and speed of service delivery rise, and at the same time the problems of IT cost optimization and transparency of everything that happens in the infrastructure do not go away.

According to Pavel Karnaukh, head of the technical department of Dell Technologies in Russia, two key factors are now shaping the corporate data center market: the continuing rapid growth of public clouds (more precisely, of cloud infrastructure offered as a service) and the already quite noticeable use of new technologies for developing and deploying applications, above all containerization. In general, many experts regard software-defined data centers as a new stage in the development of virtualization, containers and cloud services.

As Gartner analysts note in the Gartner Magic Quadrant for Data Center and Cloud Networking 2020 published in June, corporate data centers at this level are heavily virtualized (typically around 80%), increasingly containerized (starting at about 10% and growing steadily), and the most important business outcome is the availability of cloud environments from the corporate data center, achieved in an automated way through centralized operation.

The influence of the cloud

Gartner analysts note in the Magic Quadrant for Data Center and Cloud Networking 2020 report that companies' requests for building data center networks grew by roughly 10% over the year, and about 10% of Gartner's clients expressed a desire to close the corporate data center on their own premises entirely in the near future. An even more significant factor influencing sentiment, Gartner believes, is that both on-premises and commercial data centers today provide functionality comparable to that of public clouds such as AWS and Microsoft Azure.

Such solutions are available to Russian companies as well; one example is the launch of a public hybrid cloud for corporate clients within the joint project of MTS and Microsoft. The idea of the services is that corporate users can build hybrid applications using the capacities of both the operator's local data centers and the global Microsoft Azure cloud. At the same time, Russian customers can choose where to spin up new virtual machines and store data: on their own platform, in an MTS data center on the territory of the Russian Federation, or in the global Microsoft cloud.

The border separating the cloud world of XaaS (renting any resource as a service) from the world of a company's own virtualized, hyperconverged environments is beginning to blur. By 2025, experts predict, companies will flexibly distribute IT infrastructure resources among the local platforms of geographically distributed offices, centralized data centers, and hybrid and public clouds. The problem of the optimal choice between XaaS and virtualization of local infrastructure will be one of the hottest in the medium term.

Vladimir Leonov, technical director of AMT Group, describes today's situation this way:

File:Aquote1.png
Hyperconverged solutions are developing steadily, but classical infrastructure with a dedicated SAN and proprietary storage systems remains the preferred option for most corporate customers. At the same time, as software-defined solutions mature, the same requirements for flexibility and manageability are being applied to classical infrastructure. Modern storage systems have programming interfaces for integration into cloud infrastructure, and such integration will develop further.
File:Aquote2.png

Just like a cloud

Today's ideas about the 21st-century data center have been shaped most of all by the experience of delivering cloud services, which rest on virtualization technologies.

File:Aquote1.png
Users expect from corporate IT services the same lack of restrictions and the same flexibility that they can get (or assume they can get) from cloud providers, but at the same time they are not ready to sacrifice security, availability and the usual level of control, notes Pavel Karnaukh.
File:Aquote2.png

  • Versatile flexibility. According to Alexander Sysoyev, head of computing infrastructure at the IT company CROC, business needs easily transformable and vendor-independent systems.

File:Aquote1.png
More and more clients are refusing to be tied to a particular vendor and make implementation decisions guided first of all by reliability and fault-tolerance figures, simplicity of operation and the potential for process automation, notes Alexander Sysoyev.
File:Aquote2.png

The expert also points to another aspect of the flexibility of data center IT infrastructure: a gradual move away from the logic of the universal data center.

File:Aquote1.png
Building "just somehow" and then installing "just some" hardware is unprofitable and impractical. That is why customers increasingly plan in advance which systems, and on what equipment, will be housed in the data center, and design all of its life-support systems based on that, says Alexander Sysoyev.
File:Aquote2.png

The use of artificial intelligence systems also helps to achieve greater capacity and flexibility of IT infrastructure today, believes Oleg Kotelyukh, managing partner of Inpro Tekhnolodzhis:

File:Aquote1.png
AI algorithms can perform a huge number of calculations in the shortest possible time, as well as balance workloads and monitor the status of IT equipment in real time.
File:Aquote2.png

Another area for intelligent systems is improving the quality of a cloud service without bringing in additional staff. For example, CROC is developing special systems to assist its operations team, in particular monitoring of virtual machine load on the basis of a neural network.
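As a much-simplified stand-in for such a monitoring system (the actual CROC solution is not public), the sketch below flags a virtual machine whose CPU load deviates sharply from its recent history; the class name, window size and threshold are illustrative assumptions, and a real system would use a trained model rather than a simple statistical rule.

```python
# Illustrative only: a trivial statistical stand-in for the kind of
# VM-load monitoring the article describes; names and thresholds are invented.
from collections import deque
from statistics import mean, pstdev

class VmLoadMonitor:
    def __init__(self, window: int = 60, sigmas: float = 3.0):
        self.history = deque(maxlen=window)   # last N CPU-load samples (0..100)
        self.sigmas = sigmas

    def observe(self, cpu_load: float) -> bool:
        """Return True if the new sample looks anomalous against recent history."""
        anomalous = False
        if len(self.history) >= 10:
            mu, sd = mean(self.history), pstdev(self.history)
            if sd > 0 and abs(cpu_load - mu) > self.sigmas * sd:
                anomalous = True
        self.history.append(cpu_load)
        return anomalous

monitor = VmLoadMonitor()
for sample in [20, 22, 21, 19, 23, 20, 21, 22, 20, 21, 95]:
    if monitor.observe(sample):
        print("alert: unusual CPU load", sample)
```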

  • Cost efficiency. Technologies for the advanced data center are meant, one way or another, to reduce the cost of IT services and make them more accessible to users, and this applies to both hardware and software infrastructure. Pavel Goryunov, technical director of CROC's data center network, gives specific examples:

File:Aquote1.png
In new processors for the data center, the computing power per core keeps growing. Hyperconverged infrastructures make it possible to combine several types of systems in one box and to manage the whole environment centrally. Software-defined solutions also help cut the cost of service.
File:Aquote2.png

Deduplication, supported by some hardware and in software, for example in backup systems, makes it possible to reduce the volume of stored copies many times over, i.e. in effect to reduce the payments made to the provider for its services.
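A minimal sketch of the idea behind block-level deduplication follows; it is purely illustrative (fixed-size blocks and SHA-256 hashes are this example's assumptions, not a description of any particular backup product).

```python
# Illustrative block-level deduplication: identical blocks are stored once
# and later copies only keep references to the stored hash.
import hashlib

BLOCK_SIZE = 4096  # bytes; fixed-size blocks chosen only for simplicity

class DedupStore:
    def __init__(self):
        self.blocks = {}      # sha256 hex digest -> block bytes

    def put(self, data: bytes) -> list:
        """Store data, returning the list of block hashes that reconstruct it."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)   # keep each unique block once
            recipe.append(digest)
        return recipe

    def get(self, recipe: list) -> bytes:
        return b"".join(self.blocks[d] for d in recipe)

store = DedupStore()
backup1 = store.put(b"A" * 8192 + b"B" * 4096)
backup2 = store.put(b"A" * 8192 + b"C" * 4096)   # shares two blocks with backup1
print(len(store.blocks), "unique blocks stored")  # -> 3, not 6
```

Two nearly identical backups here occupy three unique blocks instead of six, which is exactly the kind of saving the paragraph describes.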

  • Scaling. Alexander Sysoyev says that the foundations of the modern data center assume the reuse of standard computing nodes, hyperconverged infrastructure and the whole range of software-defined solutions.

File:Aquote1.png
All this makes it possible to scale the business as quickly as possible, reacting to client requests and the market situation, the expert notes.
File:Aquote2.png

According to Alexander Sysoyev, the use of free software, for example to build private cloud infrastructures and container solutions, improves the scalability of IT systems. At the same time, demand remains for specialized proprietary software products, say, best-of-breed solutions in areas such as virtual desktops or backup systems.

Dell Technologies draws on the experience of cloud solutions in its concept of the data center of the future, which is based on edge computing and multi-clouds.

File:Aquote1.png
The point is that the modern data center is no longer confined to the walls of one or several buildings with rows of racks. On the one hand, the need to process information arising at the edge of the IT infrastructure pulls computing resources out of the data center core, turning that core from a homogeneous into a distributed environment. On the other hand, many data centers of modern organizations make some use of external resources provided by cloud providers, explains Mikhail Orlenko.
File:Aquote2.png

This is exactly how Dell Technologies sees the data center of tomorrow: from the edge of the IT infrastructure, through the distributed data center core, to multi-clouds. The company has developed corresponding solutions for each part of this data center: from IoT devices at the edge to powerful servers and storage systems in the data center, as well as tools for integration with a multi-cloud.

Dell's concept of the data center of tomorrow

The necessary level of efficiency, flexibility and reliability can be achieved only when all components are implemented in software and tightly integrated with one another, Andrey Kosenko is sure, while management is carried out through a "single pane of glass" and is based on the application of policies.

File:Aquote1.png
In hybrid or multi-cloud scenarios these policies can be "stretched" across all segments of the data center infrastructure, both your own and third-party. This also makes it possible to implement a zero-trust security model and to preserve a single operational model, the expert emphasizes.
File:Aquote2.png

The shift to a space of IT services

The main result of the cloud's influence is a change in how the data center is viewed: a shift from hardware characteristics to indicators that characterize IT services.

File:Aquote1.png
What comes to the fore is no longer the ability to deliver a specified number of I/O operations, storage volumes or reliability indicators; those requirements are clear and customary. It is the readiness to quickly roll out the platform needed to deliver IT services and to manage it effectively for the benefit of business users, says Pavel Karnaukh.
File:Aquote2.png

  • "Infrastructure as code". This concept is a "soft" way to make a software-defined data center work. In essence, it is about providing the flexibility of a cloud within local infrastructure through the broad automation of operations. Business applications, infrastructure management, and tools for automation and service orchestration then make it possible to deploy infrastructure and allocate resources in real time, as well as to support dynamic workloads and react quickly to changing business requirements. Clearly, the "infrastructure as code" approach fits perfectly with the ideas of DevOps, IT self-service and modern development methods based on Agile, including low-code (a minimal sketch is given after this list).
  • Everything as a Service. The consulting company Deloitte calls Everything as a Service (EaaS) one of the most powerful technology trends shaping the need for flexible platform architecture. It means access to a flexible pool of resources that can be easily configured and distributed in real time. Everything as a Service is scalable capacity on demand, with automated functionality and simplicity of management, plus interfaces that make it easy to integrate IT service providers and deployment models with one another.
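The following short Python sketch illustrates the "infrastructure as code" idea in the simplest possible form: the desired state of the infrastructure is described declaratively as data, and a reconciliation routine brings the (simulated) environment to that state. Everything here, resource names included, is an illustrative assumption rather than any vendor's actual tooling.

```python
# Minimal illustration of "infrastructure as code": desired state is data,
# and reconciliation makes the environment match it. Names are invented.
desired_state = {
    "web-vm-01": {"cpu": 4, "ram_gb": 16},
    "web-vm-02": {"cpu": 4, "ram_gb": 16},
    "db-vm-01":  {"cpu": 8, "ram_gb": 64},
}

current_state = {
    "web-vm-01": {"cpu": 2, "ram_gb": 8},   # undersized -> must be resized
    "old-vm-99": {"cpu": 1, "ram_gb": 2},   # not declared -> must be removed
}

def reconcile(desired: dict, current: dict) -> None:
    """Create, resize or delete 'virtual machines' until current == desired."""
    for name, spec in desired.items():
        if name not in current:
            print(f"create {name} with {spec}")
            current[name] = dict(spec)
        elif current[name] != spec:
            print(f"resize {name}: {current[name]} -> {spec}")
            current[name] = dict(spec)
    for name in list(current):
        if name not in desired:
            print(f"delete {name}")
            del current[name]

reconcile(desired_state, current_state)
assert current_state == desired_state
```

Because the description is just data kept under version control, the same automation can be replayed against a local data center or a cloud, which is what gives on-premises infrastructure its cloud-like flexibility.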

File:Aquote1.png
At the moment it is possible to virtualize all the physical resources of a data center, be it a communication channel (QoS, bandwidth control and so on), a server, or a storage system (a full-fledged virtual machine or a pool of resources). And given the availability of graphical interfaces for managing virtualization clusters, the administrator's routine work of managing resources, delegating them to other divisions, controlling their utilization and planning becomes considerably simpler.
File:Aquote2.png

File:Aquote1.png
Implementing the EaaS approach is not always a panacea or a universal rule for building the "right" data center. What is right for each specific customer should follow from a full audit of the hardware, network and software infrastructure, the requirements of corporate regulations on software use and information security, and the company's business processes.
File:Aquote2.png

Moving toward the fully software-defined data center

As Frost & Sullivan analysts rightly point out in their study "Upgrading the data center without interrupting operations: five tips for IT leaders", most IT leaders are not interested in implementing technologies for the sake of technology. They intend to part with proven solutions only if there is a prospect of updating the existing infrastructure with more flexible and functionally advanced software. At the same time, the scale of the upgrade can be modest.

File:Aquote1.png
So it is safe to say that the virtual, converged and software-defined data center is already today's reality, Mikhail Orlenko is sure.
File:Aquote2.png

At the same time, the evolutionary transition to a software-defined data center is always an upgrade driven, first of all, by business needs, Alexander Sysoyev emphasizes:

File:Aquote1.png
IT departments long ago ceased to be merely "supporting" units and have become full-fledged partners whose task is to participate in achieving the goals of the business. They are forced to improve technologies and look for points of growth and IT optimization. That is why we see a gradual move away from hi-end systems in favor of software-defined solutions, which leads to simpler operation and a more unified IT infrastructure.
File:Aquote2.png

The evolution of IT on the way to a completely software-defined data center
File:Aquote1.png
Upgrading a data center requires careful coordination across compute, storage, network, security and cloud management. Such a holistic approach reduces the cost, complexity and risks of launching business-critical applications and workloads, both in your own data center and in public clouds when implementing hybrid or multi-cloud scenarios, notes Andrey Kosenko.
File:Aquote2.png

For these purposes the company offers the dedicated VMware Cloud Foundation platform, a validated integrated solution. On its basis the software-defined data center concept can be implemented, including virtualization of servers (vSphere), storage (vSAN) and networks (NSX), as well as data center security and fault tolerance.

HPE also supports the move to the SDDC, offering its composable architecture. The main idea is to build the data center from components: the platform provides software for pooling IT resources and for configuring and recombining them according to the requirements of individual workloads or applications. The logic of these changes is also software-defined. Because the "infrastructure as code" principle is implemented, resources can be optimized quickly and scaled smoothly, as needed, in an automated mode.

HPE: the development of software-defined data center architecture
File:Aquote1.png
Projects begin by implementing virtualization of the technology stack. In a natural way, the understanding comes that within this model a hyperconverged infrastructure has a whole set of advantages. Then it becomes clear that adding a few software components makes it possible to implement a "cloud" model of management and service delivery. And the cherry on the cake is seamless integration with tools for deploying cloud applications, Pavel Karnaukh summarizes, referring first of all to the solutions of VMware and Dell EMC (part of Dell Technologies).
File:Aquote2.png

File:Aquote1.png
Software-defined networks have already become a reality in most data centers worldwide. For example, VMware's software-defined networking solutions are deployed at more than 13,000 customers today, and the number of branches and remote offices connected to the network using VMware SD-WAN by VeloCloud has exceeded 150 thousand worldwide.
File:Aquote2.png

File:Aquote1.png
The peak of software-defined technologies for data centers has not yet been reached. But already today we see the development of software-defined data centers, a new milestone in the evolution of modern data centers, when individual software-defined elements of infrastructure come together in the unified SDDC concept. This is driven by customers' rapidly growing need for digital services, summarizes Alexey Malyshev, founder and CEO of SONET.
File:Aquote2.png

Big Data processing in the data center

The range of needs for high-performance data processing inside the data center is constantly expanding: to the traditional computational tasks of the oil and gas industry and the CAE engineering-simulation systems that support digital manufacturing, the "hungry" algorithms of AI and Deep Learning are being added. Ahead already loom the tasks of the Internet of Things, servicing "smart" homes and keeping autonomous cars on the road.

File:Aquote1.png
The number of cores in CPUs, the amount of RAM and the speed of I/O interfaces are growing rapidly. Entirely new types of drives are appearing with record capacity and data access speed, along with new computing accelerators and specialized high-speed coprocessors for complex computing tasks from the data analytics class (AI, ML, DL).
File:Aquote2.png

As Uptime Institute experts have noted, last year became a landmark: skepticism about the business value of Big Data and its analytical processing with intelligent algorithms is giving way to an awareness of its obvious usefulness for modeling processes in data centers, for example for optimizing resource utilization, improving service efficiency, predicting failures and shortening response times.

File:Aquote1.png
Large volumes of data that must not only be stored but also processed quickly are our reality, and data center users will only become more demanding about processing speed, notes Alexander Sysoyev. In these conditions business is simply forced to adopt new solutions. Depending on the tasks and the types of data being processed, a broad spectrum of solutions is already in use everywhere: from in-memory databases for real-time data processing to high-density, high-performance computing systems ("supercomputers"). We see steady demand for them from a variety of companies, from the financial to the medical sector.
File:Aquote2.png

File:Aquote1.png
These are, in fact, private clouds deployed on the customer's premises, Yury Novikov explains.
File:Aquote2.png

File:Aquote1.png
Especially since, in the current economic situation, very few will decide to invest in building high-performance clusters on their own premises, Yury Novikov adds.
File:Aquote2.png

However, supporting high-performance computing imposes additional requirements and restrictions on the structure of the data center.

File:Aquote1.png
The high density of computing resources typical of HPC requires interconnects capable of transferring large amounts of data with minimal delay, as well as special heat-removal solutions, says Vladimir Leonov, listing the new requirements for the data center.
File:Aquote2.png

File:Aquote1.png
If we look further into the future, we can see that future modeling and simulation applications will combine artificial intelligence technologies and Big Data analytics: these workloads are converging. And all of this should run at the same time on a single computing system! reflects Vyacheslav Yelagin, specialist in sales of high-performance computing and artificial intelligence systems at Hewlett Packard Enterprise in Russia.
File:Aquote2.png

For makers of hardware platforms this is a real challenge, since these types of applications behave differently, and the hardware "stuffing" required for each of these workloads also differs significantly.

Machine learning algorithms and neural network training are not as complex and knowledge-intensive as the methods applied to mathematical modeling today. However, the volume of machine computation needed to train, for example, a machine vision model is huge. For complex models this volume still cannot be digested on site, so today most models are trained in the data center.

Until recently, software for Deep Learning training and for obtaining results from models (inference) used the same standard chips: central processing units (CPUs), graphics processors (GPUs), field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs), applied in various combinations depending on the specific situation. All of these chips are large, expensive and power-hungry, and generate a lot of heat. Accordingly, the hardware for AI systems built on such processors is housed in the data center.
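To make the training-versus-inference split concrete, here is a small, purely illustrative PyTorch sketch (a hypothetical toy model, not any vendor's workload): training runs on a GPU when one is available, while the trained model is dynamically quantized for cheaper CPU inference.

```python
# Illustrative only: a toy model trained on GPU (if present) and then
# quantized for lightweight CPU inference. Not a production AI workload.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# --- training phase: heavy, done in the data center, ideally on accelerators ---
x = torch.randn(256, 64, device=device)          # random stand-in for real data
y = torch.randint(0, 10, (256,), device=device)
for _ in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# --- inference phase: model moved to CPU and dynamically quantized ---
cpu_model = model.to("cpu").eval()
quantized = torch.quantization.quantize_dynamic(
    cpu_model, {nn.Linear}, dtype=torch.qint8
)
with torch.no_grad():
    print(quantized(torch.randn(1, 64)).argmax(dim=1))
```

The asymmetry is the point: the training loop is the part that demands data center accelerators, while the quantized inference step can run on far more modest hardware.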

File:Aquote1.png
As converged workloads become more and more common, our customers ask us to provide a universal hardware platform that would deliver the performance of a supercomputer and at the same time operate like a cloud, says Vyacheslav Yelagin.
File:Aquote2.png

File:Aquote1.png
As part of digital transformation, we intend to bring these powerful technologies for processing converged workloads to as many corporate data centers worldwide as possible.
File:Aquote2.png

Special smart "iron"

File:Aquote1.png
But the next wave of the revolution is already gathering strength: AI chips will begin to move from the core of the network, the data center, to its border, the periphery. Compared with traditional CPU architecture, AI chips will perform parallel computations several orders of magnitude faster and process artificial intelligence tasks more quickly, predicts Mikhail Orlenko.
File:Aquote2.png

Unlike today's GPUs and FPGAs, these chips will be focused on specific applications, for example computer vision, speech recognition, robotics, driverless transport and natural language processing, and the execution of ML and DL tasks will be optimized for inference.

The revolution is already under way: Deloitte expects the market for AI processors (for local devices and data centers) to grow from 6 billion dollars in 2018 to more than 90 billion dollars in 2025, with an average annual growth rate of about 45%.

Source: Deloitte, 2020.

Today the segment of AI processors for consumer devices (premium smartphones, tablets, smart speakers, etc.) is much larger than the segment for corporate devices, but it will grow more slowly: an average increase of 18% a year is expected between 2020 and 2024. The market for AI processors for corporate devices (robots, video cameras, sensors, etc.) is much younger (the first such processor went on sale in 2017), but it is growing much faster: average annual growth of around 50% is expected during 2020-2024.
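As a quick sanity check of the forecast above (taking the quoted 6 and 90 billion dollar endpoints as given), the implied compound annual growth rate over the seven years from 2018 to 2025 is:

```latex
\text{CAGR} = \left(\frac{90}{6}\right)^{1/7} - 1 = 15^{1/7} - 1 \approx 0.47
```

i.e. roughly 47% a year, which is broadly consistent with the ~45% average growth rate quoted by Deloitte.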

Thus, the "soft" data center, completely software-defined by every measure, will rest on ever more complex, productive and smart "iron".
