Data Center Engineering Infrastructure

The data center engineering infrastructure market has always been rather conservative. However, the winds of change are literally blowing through data center premises, demanding a revision of the usual technical solutions so that growing energy consumption can be reconciled with the cost of cooling and the benefits of digitalizing business processes. The principles of the "green data center" are no longer the dream of committed environmental activists, but an economically sound business model for the corporate data center. This article is part of the Data Center Technologies review.

The amount of data stored in data centers around the world, according to the Statista portal, tripled between 2016 and 2019. The computing power required to process it is growing proportionally, which directly affects the profile of the data center: both the load per rack and the density of racks are increasing, and the cost of data center resources and the options for connecting to data sources are changing.

"If 10 years ago the typical power per server rack was 4-5 kW, now the average is 10 kW, and peak figures can reach 20-40 kW," said Oleg Lyubimov, CEO of Selectel. "This is due to the slowdown of progress in semiconductors, because of which further growth in the computing power of equipment now comes from increasing the area and power consumption of processors rather than from improving their process technology and efficiency, as well as to the growing use of GPU computing, especially for ML tasks."

According to data published in April 2019 by the industry organization Uptime Institute, the power consumed by servers alone in data centers around the world will exceed 140 GW by 2023, and together with engineering systems, more than 200 GW. For comparison, the total installed capacity of power plants in Russia's Unified Energy System (UES) was 246.3 GW at the end of 2019, according to the official report on the functioning of the UES of Russia in 2019. It is also noted that the growth in energy consumption of cloud providers is about six times higher than the growth in private cloud consumption.

According to Eaton experts, by 2025 the data center industry may need up to 20% of all electricity generated in the world. That is why the construction of large data centers near electricity generation sources is gaining momentum around the planet. And the head of Rostelecom, Mikhail Oseevsky, asked the Russian government to grant the status of a wholesale electricity consumer to data centers that support the IT systems of government agencies (for example, the Kalininsky data center near the Kalinin NPP), which would significantly reduce the corresponding costs.

"We see some companies focusing on consolidating IT resources and others on decentralization. Still others are looking towards peripheral computing and Edge solutions. However, despite their different business needs, these companies are increasingly interested in energy-efficient solutions as a means of reducing costs. And if earlier this trend was characteristic only of commercial data centers, now more and more corporate sites are thinking about saving energy resources," said Sergey Makhlin, head of power supply and climatic systems at the IT company CROC.

In the struggle to improve energy efficiency, the battle is fought over tenths of the PUE (Power Usage Effectiveness) coefficient, which is the ratio of the total power consumed by the data center to the power consumed by the IT equipment.
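As an illustration, here is a minimal sketch of how PUE is computed from metered values (the numbers are hypothetical, used only to show the arithmetic):

    # Minimal sketch: PUE = total facility power / IT equipment power
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        return total_facility_kw / it_load_kw

    # Hypothetical example: the site draws 1300 kW in total, of which 1000 kW is IT load
    print(pue(1300.0, 1000.0))  # -> 1.3

The closer the result is to 1.0, the smaller the share of power spent on cooling, power conversion losses and other overhead.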

Data center classification by energy efficiency parameters
"The energy efficiency of a data center is determined by many factors. These are the efficiency of its engineering infrastructure (cooling and power supply) and the efficiency of the server equipment used. The cost of changing the data center infrastructure is the highest, so the importance of choosing the right solutions at the design stage cannot be overestimated," emphasizes Vladimir Leonov, Technical Director of AMT GROUP.

Heat - to the wind

All thermal energy released is often dispersed over a large area and cannot be collected in single points without any significant losses during its transportation.

To a certain extent, the severity of these problems was reduced by the emergence of modern server models, which, as Artem Kuznetsov notes, function perfectly well even at +40 °C maintained in the cold corridor. Yet even in such modes it is impossible to completely eliminate problems with excess heat. The second most important problem is electricity, whose cost is the most significant item in the data center business after the capital costs of equipment.

The most common type of heat removal from the data center is compressor cooling, which itself requires a large amount of electricity. In addition, the trend towards server consolidation and the growing popularity of high-density blade server systems mean that more and more computing power is concentrated in smaller areas. As a result, hot spots can appear which, experts say, heating, ventilation and air conditioning (HVAC) systems can hardly cope with: local cooling of such hot zones of the data center only redistributes the heat, but does not reduce electricity consumption.

"Balancing resource savings against the operating costs of a data center is a difficult compromise between increasing the cost of creation and reducing the cost of ownership," emphasizes Roman Shumeiko, head of the sales support department at the system integrator Hi-Tech.

During the search for an effective solution, the concept of trigeneration (CCHP - combined cooling, heat and power) appeared, which implies the process of joint production of electricity, heat and cold.

One approach is to install absorption chillers in the data center's cold center, which use in their cycle heat coming from a source located in the immediate vicinity of the data center, for example a thermal power plant or a factory. Cold is generated in a special cycle of water evaporation in a vacuum and absorption in a lithium bromide solution, which is then regenerated using the external heat source. The PUE value for this scheme can be as low as 1.15.

Trigeneration is more advantageous than cogeneration (the combined production of thermal and electric energy), since it makes it possible to use waste heat effectively not only for heating in winter, but also for producing cold in summer.

"Trigeneration technology is extremely difficult to operate and carries high capital expenditure (CAPEX) at the initial stage of deploying IT solutions," notes Artem Kuznetsov, "and the payback period of such solutions, even with competent maintenance, is quite long: 7-8 years."

Cold center of the Kalininsky data center: the supply of cold coolant will ensure the operation of the system at full load in case of failure of all refrigerating machines. Source: Rosenergoatom, 2019

Where is the point that will break the cycle of increasing CAPEX and OPEX in climate solutions for data centers? According to Alexandra Ehrlich, General Director of ProfAiTiKul, this is a general decrease in electricity consumption.

"The mathematics is very simple: either we save on the cooling system and invest colossal money in power, or we spend a little more on cooling, which allows us to achieve significant savings in adjacent subsystems," explains Alexandra Ehrlich.

Oleg Lyubimov also sees a tendency to reduce the energy consumption of data center service systems: an increase in the operating temperature of equipment, an increasing use of outdoor air for cooling instead of freon air conditioners, and minimizing losses in power supply systems.

Air Flow Control

From a practical point of view, cooling systems should maintain the temperature and humidity corridors despite climatic swings, have as few potential points of failure as possible, consume as little electricity as possible, and be as simple and convenient to operate as possible, notes Alexandra Ehrlich. In addition, they should remain scalable, that is, it should be easy to expand the climate system if the power or the number of racks in the data center needs to grow.

These requirements are largely met by controlling the air flows in the data center premises. Ilya Tsarev, architect at the Schneider Electric Data Center Solutions Development Center, notes that a number of simple rules drawn from the best data center projects make it possible to decisively reduce data center power consumption. These include, for example, separating the cold air flow that cools the IT equipment from the hot exhaust, and avoiding recirculation and bypassing of air.

Alexandra Ehrlich relies on two approaches: individual direct-flow ventilation systems and moving away from precision air conditioners towards heat exchangers. Large modules of liquid-cooled heat exchangers are used simultaneously as a partition between the support infrastructure and the computing center. Hot air from the IT racks is pumped by a fan into the space behind the cooling liquid heat exchanger. Passing through the heat exchanger, it is cooled and fed back to cool the racks.

"Heat exchangers can be used in data centers of any configuration, in any climatic zone, both when building a new data center and during modernization or reconstruction," notes Alexandra Ehrlich. "They easily fit into any architecture, scale well and are easy to operate. And most importantly, they cost and consume half as much as precision air conditioners."

This is especially important when modernizing a data center. "It is often difficult to fit a modern ventilation system into existing shafts, but the heat exchanger systems we use for data centers can work without a raised floor or a false ceiling, and at the same time occupy a minimum of space in the machine hall, or can be located outside it altogether," she notes.

Finding Free Cold: Free Cooling

Free cooling, a refrigeration approach that uses the natural cooling mode, was, as Alexandra Ehrlich notes, the first and very rational attempt to reduce data center power consumption, since it made it possible to build a system that works at least part of the year without compressors and other energy-intensive equipment. It involves supplying cool outside air directly to the room (if the outside air temperature is lower than inside) or cooling by means of a heat-transfer fluid (if it is hot outside).

The most popular option is chillers with a free-cooling mode: refrigeration machines equipped with an additional heat exchanger. If the outside temperature is higher than the specified coolant temperature, the coolant is cooled in the evaporator of the refrigeration circuit built into the chiller. In the cold season, the liquid is cooled not in the evaporator but in a special heat exchanger, a drycooler, where low-temperature outdoor air is used as the cooling source.
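The switching logic described above can be sketched very roughly as follows (an illustrative simplification; the threshold and function names are assumptions, not taken from any particular product):

    # Rough sketch of mode selection in a chiller with a free-cooling circuit
    def select_cooling_mode(outdoor_temp_c: float, coolant_setpoint_c: float) -> str:
        """Choose between the compressor circuit and the drycooler."""
        if outdoor_temp_c < coolant_setpoint_c:
            # Cold season: outdoor air is cold enough to cool the coolant directly
            return "free cooling (drycooler)"
        # Warm season: fall back to the built-in refrigeration (compressor) circuit
        return "compressor cooling (evaporator)"

    print(select_cooling_mode(outdoor_temp_c=-5.0, coolant_setpoint_c=10.0))
    print(select_cooling_mode(outdoor_temp_c=25.0, coolant_setpoint_c=10.0))

Real chillers usually also support a mixed mode, in which free cooling pre-cools the coolant and the compressor covers the remaining load.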

Chillers on the roof of the Kalininsky data center building in Udomla. Source: Rosenergoatom, 2019

According to experts, such a system requires significant capital investment but pays for itself quickly enough (possibly even in the first year of operation), since the main consumer of electricity in such a system, the chiller's compressor, does not run for several cold months.

Very often the chiller-fan coil system is optimal in terms of price and energy efficiency. Fan coils are air conditioners with a chilled water (CW) cooling system that are paired with a refrigeration machine. The main cooling elements in CW air conditioners are a heat exchanger and a two- or three-way valve, which changes the coolant flow rate depending on the heat load in the room.

Ilya Tsarev emphasizes that data center energy efficiency is improved by using chillers with smoothly adjustable capacity and compressor-free systems, as well as by abandoning active humidity control through the constant operation of steam humidifiers in favor of adjusting the temperature and humidity in the hall so as to prevent condensation on the heat exchangers.

Ventilation systems using free cooling with outside air and adiabatic cooling indoors (through the evaporation of water, for example with a high-pressure spray system) make it possible to achieve very good PUE figures, down to 1.043, since the auxiliary equipment, including the cooling system, consumes only about 4% of the data center's power even in summer, and even less in winter.
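The cited figures are consistent with each other: if auxiliary systems account for roughly 4% of the IT load, the resulting PUE is about 1.04 (a back-of-the-envelope check, assuming the 4% is measured relative to the IT power):

    # Back-of-the-envelope check (assumes the ~4% is relative to the IT power)
    it_load_kw = 1000.0
    auxiliary_kw = 0.04 * it_load_kw          # cooling, ventilation, losses, etc.
    pue = (it_load_kw + auxiliary_kw) / it_load_kw
    print(round(pue, 3))                      # -> 1.04, close to the cited 1.043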

The next stage in the development of free-cooling systems is associated with the emergence of hybrid and chiller-less systems. Hybrid (external) systems use evaporative coolers instead of air-cooled drycoolers: water evaporates from the surface of the heat exchanger, and a large amount of energy is spent on breaking intermolecular bonds, which cools the air. Power consumption is reduced by 20-30% compared to the basic free-cooling option, and the natural cooling mode can be extended to almost the entire year.

The other side of the coin is water consumption, which can be very high. The Green Grid even introduced another parameter characterizing the useful water consumption of a data center: the WUE (Water Usage Effectiveness) ratio, which, by analogy with PUE, is calculated as the ratio of annual water consumption to the annual energy consumption of the IT equipment and is measured in liters per kilowatt-hour (L/kWh).
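By analogy with the PUE sketch above, WUE can be computed from annual metered values (the numbers below are hypothetical):

    # WUE = annual site water usage (liters) / annual IT energy consumption (kWh)
    def wue(annual_water_l: float, annual_it_energy_kwh: float) -> float:
        return annual_water_l / annual_it_energy_kwh

    # Hypothetical example: 15 million liters per year, 1 MW of IT load running all year
    annual_it_energy_kwh = 1000.0 * 24 * 365  # 1000 kW * 8760 h
    print(round(wue(15_000_000, annual_it_energy_kwh), 2))  # liters per kWh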

In chiller-less systems the chillers are kept in cold standby, and the entire cooling load falls on the hybrid coolers. This is how, for example, Google's data center in Germany is cooled. Chiller-less systems are considered one of the options for the mainstream development of data centers, but they are very sensitive to the parameters of the internal and external environment and therefore require detailed calculations during design.

The trend towards using water to cool the data center can be considered one of the key trends for the near future: it has become clear that the limits of improving air cooling are already close, and to achieve more tangible results in energy efficiency, air needs to be replaced with a more efficient coolant. Today, water is the best candidate for that role.

In the development of the data center structure, two global trends can be distinguished: the transition to energy-saving technologies and liquid cooling technologies.

2021: Air will save costs in the data center

Free cooling has been used more and more in recent years in data centers around the world, especially in large ones. Rising electricity prices are forcing site owners to look for new methods and approaches to cooling equipment and for significant cost savings. Read more here.

Immersion Cooling: From Supercomputer to Data Center

One of the promising directions is immersing active equipment in a dielectric liquid medium (liquid coolant).

This gives significant advantages in the operation of the equipment. For example, maintaining the dielectric coolant temperature at 35 °C requires significantly less energy than supplying air at 13 °C.

Global Market Insights estimates that the market for such data center solutions will exceed $2.5 billion by 2025. Analysts explain this by the fact that as the volume of processed data grows and, accordingly, the load on data center servers increases, especially those engaged in high-performance computing, the capabilities of air cooling systems become insufficient.

Immersion cooling is the simplest option: computing modules are immersed in a dielectric liquid (mineral or synthetic oil) that removes heat as it circulates.

Immersion cooling of electronic equipment. Source: "Ecoflops"

For a number of years, the high-performance computing industry has been experimenting with this type of cooling for supercomputers. The best-known project of this kind is the 3 Pflops SuperMUC supercomputer operating at the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences in Germany. The idea is as follows: each processor is equipped with a special water heat exchanger that is supplied with water at +40 °C. The waste water, at +70 °C, is either used for heating or cooled in a climate system built on the principle of year-round free cooling.

In Russia, the supercomputer of the Research Computing Center of Moscow State University is cooled by a water cooling system with year-round free cooling.
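The appeal of water as a coolant is easy to see from the SuperMUC temperatures cited above: with a 30 K rise from +40 °C to +70 °C, every liter per second of flow carries away on the order of 125 kW of heat (a rough estimate based on the specific heat of water; the flow rate is assumed for illustration):

    # Rough estimate of heat removed by a water cooling loop (illustrative flow rate)
    C_P_WATER_KJ_PER_KG_K = 4.186   # specific heat of water
    flow_kg_per_s = 1.0             # about 1 liter per second, assumed for illustration
    delta_t_k = 70.0 - 40.0         # inlet/outlet temperatures cited for SuperMUC

    heat_removed_kw = flow_kg_per_s * C_P_WATER_KJ_PER_KG_K * delta_t_k
    print(round(heat_removed_kw, 1))  # ~125.6 kW per liter per second of flow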

Other options for water cooling equipment from the inside:

  • Water-cooled doors. In this case, the air conditioner is installed on the back wall of the server rack and immediately removes heat from the equipment, and hot air does not enter the machine room.

In this case, it is possible to obtain a multiple reduction in the length of the hot corridor, which significantly increases the energy efficiency of the cooling system. In addition, the solution does not require additional space, allowing optimal use of space in the Data Center. Water-cooled doors can be considered as a good alternative to in-row air conditioners, especially in situations where cooling of high-load racks is required.

  • Cooling with loop heat pipes that deliver refrigerant directly to the active server components. Heat from them is removed to the heat exchanger, which can be located both inside the server rack and outside.

Google equips server hardware designed for machine learning computations with liquid cooling systems. The corresponding modules, Tensor Processing Unit ASIC chips, are arranged in a group on the motherboard together with a cooling plate. Liquid refrigerant is supplied through a heat tube to the cooling plate in contact with each ASIC TPU chip.

Capital costs for water-cooled server systems are still too high today. Oddly enough, one of the essential components of the cost of such a solution can be the water itself, because a large data center needs a lot of it. According to Bloomberg, in 2019 Google needed more than 8.7 million cubic meters of water for its data centers in three states. The need for water is so great that it has to be requested from the authorities of the cities where the data centers are located. For example, a new data center in Red Oak, Texas, requires up to 1.46 billion gallons of water, while the entire county, home to this and two dozen other cities, consumes 15 billion gallons of water for all its municipal needs.

Submariner server. Source: "Ecoflops"

Recently, immersion cooling, in which server equipment is cooled by completely immersing it in a liquid dielectric coolant, has also come into use in conventional data centers. Such a system is compact and undemanding in terms of power: energy is needed only to run a few low-power pumps. At the same time, it provides a stable temperature environment, which rules out the appearance of hot zones around high-load racks.

The DTL immersion-cooled data center was commissioned in Moscow in 2019.

The downsides are an increase in the floor space required for the active equipment and the need to use special lifting mechanisms to quickly replace failed computing equipment.

Inpro Technologies is a Russian developer and manufacturer of computing and communication systems that has developed its own cooling solution based on the Liquid Cube direct liquid cooling technology. The company says that computing systems and networks built on the basis of the Liquid Cube consume 30% less electricity and reduce operating costs by 50% compared to traditional data center solutions.

Energy efficiency of direct liquid cooling systems. Source: Inpro Technologies

Oleg Kotelyukh, managing partner of Inpro Technologies, says that the Liquid Cube solution can be used for a wide range of tasks: from data storage to highly specialized high-density calculations.

Liquid Cube Liquid Cooled Container Data Center

The Liquid Cube container data center is a versatile direct liquid cooling computing and communication platform focused on use within hyperconverged and Edge architectures. The Liquid Cube pre-fabricated data center can operate in a wide range of ambient temperatures and even in an aggressive environment, providing an OPEX reduction of 50% or more compared to a traditional data center.

Comparison of traditional data center and Liquid Cube with the same power consumption

Floating data center

For several years, Nautilus Data Technologies has been building floating data centers. The first such data center, Eli M, with a capacity of 8 MW and 800 server racks, was launched at the end of 2015. At the end of this year, another floating data center with a capacity of 6 MW is due to be launched on a barge moored in the port of Stockton, California.

Floating data center from Nautilus Data Technologies. Source: Selectel

The data center on the barge will be equipped with the company's proprietary cooling system with heat exchangers, which uses the water surrounding the facility. The average outboard water flow needed to support the server cooling system is about 17 thousand liters per minute and can reach 45 thousand liters per minute at peak.

Nautilus Data Technologies claims that this cooling method makes it possible to increase the specific power per rack fivefold, and that the data center will be less demanding on resources than competitors' data centers.

Two-deck Data Center from Nautilus Data Technologies with a modular structure. Source: Selectel

Google also has a floating data center located on a specially designed barge. The first such data center, cooled by sea water, was launched in 2011. The company does not disclose the design of the internal cooling system.

Underwater Data Center

In mid-September, Microsoft summed up the results of two years of testing of its underwater data center, carried out in Scotland as part of Project Natick. The first-generation prototype, Leona Philpot, a small container measuring 3 x 2 m, was submerged back in 2015 to a depth of 10 meters off the Pacific coast of the United States. It ran for 105 days, showing a PUE of 1.07.

The second-generation data center is more impressive in size, at 12.2 x 2.8 m, and accommodates 12 racks with 864 servers. The data center, with a power consumption of 240 kW, was located next to a tidal power plant, which supplied it with power during its operation. The developers noted that this prototype is designed for five years of operation and does not need intermediate maintenance.

Natick Project Microsoft Underwater Data Center

The results of the Natick 2 project showed that the failure rate (the ratio of the number of failed objects per unit of time to the average number of objects operating properly for a given period of time) of the underwater data center was eight times lower compared to ground data centers.

Microsoft also noted that the main problems of conventional data centers are temperature changes and corrosion caused by oxygen and moisture in the air. Sealed underwater data centers provide protection against corrosion, and the temperature in them practically does not change due to the use of sea water for cooling.

Currently, Microsoft engineers are busy creating a third-generation data center - it will include 12 cylindrical containers with Natick 2 specifications. They, together with the entire auxiliary infrastructure of the Data Center, will be attached to a steel frame at a depth of 200 meters under water. The total capacity of the Natick 3 data center will be 5 MW.

Microsoft's Natick 3 Underwater Data Center

Immersion in groundwater

In situations where there is no sea nearby, it has been proposed to use groundwater to cool the data center. Indeed, at depths from 10 to 100 meters the temperature does not change during the year and ranges from 8 °C to 12 °C depending on the location. Special water treatment is, as a rule, not required. As with air free cooling, either direct cooling or the use of heat exchangers is possible.

The PUE of such a system is 1.06 to 1.08. An important aspect is the cost of the project, which increases considerably with depth.

Groundwater is used, in particular, to cool the servers at the IGN data center in Germany. Water from a depth of 300 m is raised to the data center by a pump, cools the internal closed water-cooling circuit of the servers while warming by only 5 K, and is then returned into another well. The system has saved 30-40% of electricity compared to conventional air cooling.

"If we talk about the economic efficiency of the data center, you need to start with the climatic systems. The more effective they are, the cheaper everything else is. And here the absolute leader today is groundwater," Alexandra Ehrlich is sure. "The consumption of the system equals the consumption of a few pumps. And that is all. Year-round free cooling, virtually free cold. We are now designing such a system in Germany, where they are widespread, but, unfortunately, we have not yet seen groundwater systems in Russia."

Data centers turn green

Another option to optimize the air cooling of the data center was the choice of an appropriate location with suitable natural conditions. It is not surprising that the first such experiments were carried out in Finland and Iceland - countries with a cold climate. It should be noted that in addition to natural conditions, the successful implementation of such a project requires the presence of a developed infrastructure: roads, communications, electricity, etc.

Not so long ago, the Kyoto cooling system appeared in data centers. This is a "green" technology that uses the cold of the environment all year round, with backup vapor-compression machines used to guarantee performance. The average annual PUE reaches 1.15.

It may depend on the time of year and the climatic zone of the data center. The latter also affects the choice of solutions aimed at improving energy efficiency. In the southern regions these can be solar panels to power equipment, in steppe areas with predominantly strong winds, wind turbines, and in the north, cooling with groundwater or seawater.

Today, green technologies are becoming more in demand than ever. It is important that they not only reduce the impact of infrastructure on the environment, but also significantly save resources by reducing energy costs.

Since reducing PUE below the current level of 1.1 is difficult to achieve in practice, the focus is shifting to technologies with low electricity consumption and to renewable energy sources. For example, one of Apple's data centers, located in North Carolina, is already 100% powered by renewable energy: 42 million kWh come from solar panels, and the rest of its needs are covered by burning biogas.

A number of major Western and Asian corporations said their data centers operate with wind and solar power. And one Finnish developer recently shared plans to use heat from the data center to heat houses and agricultural greenhouses.

But projects of this kind are far from trivial: what comes out of the server is low-grade heat, which is quite difficult to utilize, not least because to do so the data center must become a heat supply organization and obtain an appropriate license. This is one of the open questions that will have to be solved if Russian data centers follow the path of energy saving through "green" technologies: to whom, and how, should the resulting heat be transferred, particularly in summer?

There are isolated successful projects around the world. For example, the Yandex data center in the town of Mäntsälä in Finland is cooled by direct free cooling, and the heated air passes through a heat exchanger into the city's heat supply network. Yandex also receives money from the municipal utilities for the thermal energy it supplies.

Of course, today all customers are thinking about energy efficiency. In particular, they choose engineering systems with high efficiency and additional power-saving capabilities (for example, free cooling), and use IT equipment with improved resource utilization algorithms and less stringent requirements for environmental conditions.

For Russia, this is still rather an experimental story. The feasibility of building "green" and energy-efficient data centers is directly related to the cost of energy resources. At present they are quite cheap compared to the cost of the required equipment, which is purchased with foreign currency.

"This is unacceptable for commercial sites, but it is applicable for large corporate customers and government agencies. It should be borne in mind that energy-efficient solutions are often more reliable, and this also affects the choice of solution," notes Konstantin Zinoviev.

Technology Integration Course

A wide selection of promising solutions for cooling data centers is complicated by the fact that the practical implementation of each of them depends on many parameters: somewhere there is not enough water for evaporative cooling, somewhere there is not enough electrical power, and somewhere - space for a ventilation chamber.

Energy-efficient systems often have a much higher CAPEX than traditional ones, such as precision freon air conditioners, but excellent OPEX figures. When free cooling is used, the temperature regime in the machine room matters: the greater the difference between the temperature inside the data center and outside, the lower the CAPEX and the less work is needed for heat transfer. The smaller this difference, the more work is required to cool the machine room, which means higher power consumption, that is, higher OPEX. In general, experts say, free cooling always means high CAPEX, and such a project pays off through low OPEX.
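The CAPEX/OPEX trade-off described above comes down to a simple payback calculation (the figures below are purely hypothetical and only illustrate the logic):

    # Simple payback comparison of two cooling options (hypothetical figures)
    def payback_years(extra_capex: float, annual_opex_savings: float) -> float:
        """Years needed for the OPEX savings to repay the additional CAPEX."""
        return extra_capex / annual_opex_savings

    # Free cooling vs. a traditional compressor-based system (illustrative numbers)
    extra_capex = 30_000_000       # additional capital cost of the free-cooling option
    annual_savings = 6_000_000     # electricity cost saved per year
    print(payback_years(extra_capex, annual_savings))  # -> 5.0 years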

It all depends on the climate, the availability of resources and the purpose of the data center itself.

However, in her opinion, there are general trends in the development of this segment of solutions:

  • direct-flow ventilation;
  • replacement of obsolete precision air conditioners with heat exchangers of various configurations;
  • technologies based on water evaporation.

The solution to this issue may be building data centers at more northern latitudes, using adiabatic cooling technology, and reducing the IT load itself by abandoning fans for cooling active equipment in favor of heat-pipe radiators on processors.

Sanjay Kumar Sainani, Senior Vice President and CTO of Huawei's Global Data Center Division, in his 2020-2025 Data Center Development Forecast, names among the industry's significant trends the convergence of liquid and air cooling systems and the wider use of indirect evaporative cooling technologies in place of chilled-water systems. Thus, in areas with a suitable climate, water cooling systems will gradually be replaced by indirect evaporative cooling.

Energy efficiency of computing systems

The higher the peak consumption, the more powerful the data center's power distribution network must be, the more power must be requested from the city, and, in the future (taking into account the position of the Ministry of Energy of the Russian Federation on unused network power reserves), the higher the fines for unused power reserves.

"The key consumer of electricity in the data center is computing equipment. Accordingly, costs can be reduced by more efficient utilization of systems, the use of virtualization and the use of solutions that are already positioned as energy efficient," notes Pavel Goryunov.

"A consequence of virtualization is the consolidation of data centers into larger ones, which makes it possible to use technical solutions for reducing PUE that would not be economically feasible in smaller data centers," adds Alexey Malyshev.

"Virtualization technologies at one time broke the trend of power growth per rack for several years and slowed the growth in data center scale," says Ilya Tsarev. "The values of 8-10 kW per rack in the average corporate data center that were promised a decade earlier, by 2013-2015, are only being reached now, and in colocation in many cases they have not been reached yet."

According to the expert, it is possible to increase the energy efficiency of a data center by following even a simple strategy for the efficient use of IT equipment. First, get rid of equipment that no longer performs useful work in a timely manner. Second, ensure that the architecture of the engineering systems can scale, allowing both painless expansion and contraction following the volume of IT equipment. Third, plan the placement of IT equipment in racks using DCIM systems to improve the efficiency of the cooling and power distribution systems.

"We have implemented such a solution in our own data center. With the help of hundreds of sensors, the system records the temperature in the premises twice a minute, builds heat maps and makes it possible to identify equipment that is placed suboptimally," says the CROC representative.

According to him, this makes it possible to save about 5% of electricity costs per year, as well as reduce the number of breakdowns due to overheating of equipment or high humidity.
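A minimal sketch of how such sensor readings could be turned into a list of potential hot spots (the data layout and the threshold are assumptions for illustration; the CROC system is not described in that level of detail):

    # Illustrative hot-spot detection from rack inlet temperature sensors (assumed data)
    from statistics import mean

    # rack id -> recent inlet temperature readings, degrees Celsius
    readings = {
        "rack-A01": [24.1, 24.3, 24.0],
        "rack-B07": [31.8, 32.4, 33.0],   # suspiciously warm
        "rack-C12": [25.5, 25.9, 26.1],
    }

    THRESHOLD_C = 30.0  # assumed alert threshold for inlet temperature

    hot_spots = [rack for rack, temps in readings.items() if mean(temps) > THRESHOLD_C]
    print(hot_spots)  # -> ['rack-B07']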

Uninterruptible power supply

Today's de facto standard for data center uninterruptible power supply (UPS) systems is the installation of static UPSs operating in on-line (double conversion) mode, which provides the required quality of electricity. In addition, modern equipment is characterized by switched-mode power supplies with a non-linear consumption pattern. Powerful three-phase double-conversion UPSs are well suited to powering this type of equipment, as they avoid overloading the neutral cables of the input power grid and the equipment of transformer substations.

According to Ilya Tsarev, the search for more effective approaches is moving towards topologically more complex power distribution schemes for large data centers than the classic dual-feed power supply with 2N or N+1 redundancy.

One promising method of synchronization and load sharing is implemented, for example, in Eaton's Hot Sync technology. Unlike parallel systems from other manufacturers, the devices do not exchange the information needed for synchronization and load balancing. The system's algorithm is based on monitoring any deviations in UPS output power, and each device operates independently while remaining fully synchronized with the rest. Other areas in which uninterruptible power supply systems are developing are noted below.

  • Lithium-ion batteries as a replacement for traditional lead-acid batteries. They are much lighter and more compact than traditional batteries and have a higher energy density.

"They are more durable in terms of service life and the number of charge-discharge cycles, and contain fewer heavy metals and aggressive substances," notes Ilya Tsarev, adding that for now lithium-ion batteries are too expensive for mass use in data centers, but as they become cheaper they will gradually replace traditional lead-acid batteries.

According to Schneider Electric, depending on the scope of lithium-ion batteries, total cost of ownership savings of 10-40% can be achieved.

  • Modular UPS. Almost all leading vendors have added modular installations with a capacity of 1 MW and higher to their product portfolio: easily scalable and serviceable. The power of the system can be increased in a relatively small step by adding additional cabinets with batteries.

In particular, Vertiv has a super-powerful Liebert Trinergy Cube UPS (from 150 kW to 3.4 MW), formed according to the modular principle. Moreover, it itself can play the role of a single module and form, as a result of scaling, an uninterruptible power supply structure with a capacity of up to 27 MW in a parallel configuration.

Due to the decentralized architecture, modular solutions have great structural flexibility and make it possible to connect one or more additional modules to an already functioning device with an increase in the need for power supply in a short period of time. Moreover, the average system recovery time after a failure (MTTR) is radically reduced due to the possibility of hot replacement of a faulty module.

  • Dynamic (diesel rotary) UPS (DIBP). These do not use batteries. They include three main elements: a flywheel, the key element of the DIBP, which acts as an energy store and rotates on a precisely aligned axis; a synchronous electric machine; and a diesel engine.

Uninterrupted operation is supported by the kinetic energy of the flywheel, which eliminates the need for batteries. The service life of a DIBP is at least 25 years, while a static UPS lasts 10-15 years. The system's efficiency is also higher: 98% versus 95%. A DIBP solution takes up much less space, is easier to maintain, and helps reduce capital and operating costs.

  • Cluster systems. Power is increased by installing high-power units in parallel, for example in increments of 250 kVA. This creates an independent system of a higher technological level with a common service bypass, a common battery supply and a single control scheme.

  • Containerized power solutions. They provide greater flexibility and ease of scaling when building megawatt data centers. Modules can be transported, quickly installed and reused. This is very convenient, for example, for telecom operators building 5G networks.

  • Distributed UPS. They are mounted either in the server rack or directly next to it so that there is practically no free space between each server and the UPS connected to it. Thanks to this approach, the risk of connection defects in the power supply circuit is significantly reduced, and a small mass simplifies the installation and transfer of distributed UPSs.

  • Centralized UPS management. The ability to instantly obtain information about the UPS status, including their capacity, location, load status, and the need to replace the batteries of each UPS.

  • Intelligent UPS control. A UPS-based data center integration system is implemented, for example, by the Russian company IMPULSE. This is a centralized monitoring system with a local display that tracks the state of the power supply system, the temperature and humidity level, and the state of each individual subsystem. Round-the-clock, year-round remote monitoring of system health and equipment load minimizes maintenance and repair costs and losses from unscheduled downtime.

Schneider Electric's special ECOnversion power saving mode reduces operating costs. Building fault-tolerant systems with N + 1 redundancy is possible on the basis of built-in redundancy. Power expansion is possible both within 1500 kW (in increments of 250 kW) and above - by parallel connection of several systems.

  • Ability to share unused electricity. Vertiv is partnering with Upside Energy to develop Virtual Energy Store technology. It allows you to share unused electricity with the central network. In fact, this is a rapid demand response system that does not require the launch of additional generating capacities.

According to the developers, Upside Energy's Virtual Energy Store platform can organize the collaboration of more than 100,000 devices in real time, controlling customer UPS systems without compromising their functionality in the field of emergency backup power supply to the data center.

  • Smart power management. Modern management systems collect power consumption data from servers, racks and distribution equipment, down to the level of individual outlets. You can find periods of reduced load and schedule maintenance for that time. Analysis of consumption peaks makes it possible to keep the power reserve within 10-15%, instead of the 30-40% typical of manual management.

  • High availability data center. This approach is offered, for example, by Schneider Electric. It provides a single integrated approach to creating an engineering infrastructure based on the proprietary InfraStruXure architecture and mutual compatibility of components at the physical level.

  • Modular data center engineering infrastructure. Implemented, for example, in the integrated Delta InfraSuite solution: you can increase the power of the data center by gradually adding the necessary equipment: racks, cooling systems, cabinets and power distribution units, control and distribution units, etc.

  • Software-defined power supply (SD-Power). We are talking about the formation of an abstraction level that allows you to effectively manage the available power resources in the interests of end "users" - devices.

All leading vendors offer their own "software-defined" power solutions. For example, Power System Manager, one of the software modules of Vertiv's Trellis Enterprise DCIM suite, can analyze the power consumption of both IT and engineering equipment and generate reports and recommendations for planning the data center's power supply. The solution can predict possible bottlenecks, overloaded and underloaded racks, and the state of the power supply system.

Eaton has developed Intelligent Power Manager, a tool for monitoring and managing various power devices in physical and virtual environments. The application helps maintain the continuity of business processes and the uninterrupted operation of IT equipment. Schneider Electric, in turn, is developing Smart Grid solutions designed to combine facilities that differ in type of consumption, power consumption and capacity for dynamic power redistribution. A data center, as an energy-intensive facility, will be one of the key links in this chain.

BIM

The use of BIM technologies in the construction of data centers is gaining more and more popularity. Indeed, due to the fact that the mutual arrangement of engineering systems can be calculated to the smallest detail, the number of collisions during installation decreases significantly. The use of BIM also helps to control the progress of construction work and the consumption of materials, which allows you to stay within the allocated budget. Control over the progress of work in a single information space becomes much easier.

The possibilities of information modeling when creating an engineering infrastructure range from visualization of various objects, from details of engineering systems to site development with real terrain, to 3D models of the facility and work with them in virtual reality mode. The use of such advanced tools is understandable: full-scale mock-ups of data centers are hardly possible, so mathematical modeling is the only option for a comprehensive analysis of such facilities.

An important part of this toolkit is CFD (Computational Fluid Dynamics) software, which implements the tasks of computational dynamics of fluids (liquids and gases). It is used to model and evaluate the efficiency of mass and heat exchange in data centers.

CFD model for assessing the efficiency of mass and heat exchange in a data center. Source: ICS Group

The work also creates a complete 3D catalog of all data center equipment with a visual representation of the temperature distribution and air flows at the level of the room and cabinets. Such a model allows the data center owner to rationally place and timely replace equipment, assess the impact on the temperature and environment of various scenarios of arrangement, power supply, cooling and restrictions, as well as simulate the impact of any physical actions in advance.

Three-dimensional mathematical model of the physical data center. Source: ICS Group

Energy modeling

Building Energy Modeling (BEM) is an assessment of the integrated energy efficiency of all engineering systems and design solutions using specialized software. Thanks to engineering calculations, it is possible to estimate the energy consumption of the facility during the year and predict the payback of design solutions.

BEM is a complex simulation that creates a thermal map of the data center (temperature distribution by volume), determines the energy efficiency indicators of the data center, and also provides the ability to choose the optimal solutions for cooling the server room, optimizing air flows in the data center machine rooms.

Modeling of a typical solution for cooling a diesel uninterruptible power supply (DIBP) installed in the data center. Source: MM-Technologies Company

Using the potential of outdoor air for cooling data centers is one of the promising energy-efficient solutions. However, the actual performance depends heavily on the correct configuration of the mixing chamber. If the chamber is configured poorly, the mixed-air temperature obtained by the designer from the algebraic formula will not be achieved in reality. For example, in winter, warm recirculated air, instead of mixing with outdoor air, may escape outside, while an excessive amount of outside air enters the mixing chamber.
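The algebraic formula mentioned here is a simple mass-weighted average of the two air streams; a sketch is given below (it neglects humidity and density differences, which is precisely the kind of simplification that numerical modeling then checks):

    # Mixed-air temperature as a mass-weighted average of outdoor and return air
    def mixing_temperature(t_outdoor_c: float, m_outdoor: float,
                           t_return_c: float, m_return: float) -> float:
        """Simplified design formula; ignores humidity and imperfect mixing."""
        return (t_outdoor_c * m_outdoor + t_return_c * m_return) / (m_outdoor + m_return)

    # Hypothetical winter case: -10 C outdoor air mixed 1:3 with +30 C return air
    print(mixing_temperature(-10.0, 1.0, 30.0, 3.0))  # -> 20.0 C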

Analysis using mathematical modeling methods makes it possible to see the actual pattern of air flow in the mixing chamber and, if necessary, to develop a modification of the chamber.

Numerical simulation of the mixing chamber of the Data Center microclimate maintenance system. Source: MM-Technologies Company

Data center energy modeling helps to choose the most energy-efficient way to cool the data center, select the optimal set of air conditioners, and also obtain a more accurate estimate of the data center's OPEX, since it is largely determined by energy costs.

Balance modeling of machine room cooling in the data center. Source: MM-Technologies Company

... In ancient times, human settlements were located on the banks of large rivers and seas. Rivers were the main transport arteries along which goods and people moved. In the digital age, data centers appear on the same shores, moving information "cargo" and carrying virtual human communications. Nature is eternal, and technological progress is fleeting. Perhaps a few decades will pass, and fairy tales for children will begin like this: on the shore of the deep blue sea there stood a green, green data center, and small friendly virtual containers lived in it...
