
Data Center Engineering Infrastructure

The market for data center engineering infrastructure has always been among the more conservative ones. However, the winds of change are literally blowing through data center premises, forcing a revision of familiar technical solutions so as to reconcile growing energy consumption with cooling costs and the benefits of digitalizing business processes. The principles of the "green data center" are no longer the dreams of committed environmental activists, but an economically sound business model for an enterprise data center. The article is part of the overview "Technologies for data centers".


According to the Statista portal, the volume of data stored in data centers around the world tripled between 2016 and 2019. The computing power required to process it is growing proportionally, which directly affects the shape of the data center: the load per rack, the density of racks, the cost of data center resources and the options for connecting to data sources are all changing.

"If 10 years ago the typical capacity per server rack was 4-5 kW, now the average is 10 kW, and peak figures can reach 20-40 kW," said Oleg Lyubimov, CEO of Selectel. "This is due to the slowdown of progress in semiconductors, because of which further growth in equipment processing power now comes from increasing the area and power consumption of processors rather than from improving their process technology and efficiency, and also to the more frequent use of GPU computing, especially for ML tasks."

According to the Uptime Institute's report published in April 2019, electricity consumption by servers alone in data centers around the world will exceed 140 GW by 2023, and together with engineering systems, more than 200 GW. For comparison, the total installed capacity of power plants in the Unified Energy System (UES) of Russia was 246.3 GW at the end of 2019, according to the official report on the functioning of the UES of Russia in 2019. It is also noted that energy consumption by cloud providers is growing about six times faster than private cloud consumption.

According to Eaton experts, by 2025 the data center industry may require up to 20% of all electricity generated in the world. That is why the construction of large data centers near electricity generation sources is gaining momentum worldwide. The head of Rostelecom, Mikhail Oseevsky, has asked the Russian government to grant data centers that support the IT systems of government agencies, such as the Kalininsky data center near the Kalinin NPP, the status of a wholesale electricity consumer, which would significantly reduce the corresponding costs.

"We see some companies focusing on consolidating IT resources and others on decentralization. Still others are looking toward peripheral computing and Edge solutions. Despite their different business needs, all of them are increasingly interested in energy-efficient solutions as a way of reducing costs. And if earlier this trend was characteristic only of commercial data centers, now more and more corporate sites are thinking about saving energy resources," said Sergey Makhlin, head of electricity supply and climate systems at the CRIC IT company.

In the fight to improve energy efficiency, real battles are fought over tenths of the PUE (Power Usage Effectiveness) coefficient, which is the ratio of the total power consumed by the data center to the power consumed by the IT equipment.
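As a worked illustration of this ratio (with purely hypothetical load figures, not data from any specific facility), a short calculation in Python:

 # Illustrative PUE calculation with assumed (hypothetical) loads.
 it_load_kw = 1000.0          # power drawn by IT equipment (servers, storage, network)
 cooling_kw = 250.0           # assumed cooling plant consumption
 power_losses_kw = 60.0       # assumed UPS and distribution losses
 other_kw = 40.0              # assumed lighting, security, office loads
 total_facility_kw = it_load_kw + cooling_kw + power_losses_kw + other_kw
 pue = total_facility_kw / it_load_kw
 print(f"PUE = {total_facility_kw:.0f} / {it_load_kw:.0f} = {pue:.2f}")  # PUE = 1350 / 1000 = 1.35

With these assumed figures the facility overhead is 350 kW on top of 1,000 kW of IT load, giving a PUE of 1.35; the closer the value is to 1.0, the less energy is spent on everything other than computing.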

Data center classification by energy efficiency
"Data center energy efficiency is determined by many factors: the efficiency of its engineering infrastructure (cooling and power supply) and the efficiency of the server equipment used. Changes to the data center infrastructure are the most expensive, so the importance of choosing the right solutions at the design stage is difficult to overestimate," emphasizes Vladimir Leonov, technical director of AMT GROUP.

Heat - to the wind

"All the released thermal energy is often dispersed over a large area and cannot be collected into single points without significant losses during transportation."

To a certain extent, the severity of these problems was reduced by the appearance of modern server models, which, as Artem Kuznetsov notes, function perfectly well even at +40°C in the cold aisle. Yet even in such modes it is not possible to get rid of excess-heat problems completely. The second most important problem is electricity, whose cost is the largest item in the economics of a commercial data center after capital expenditure on equipment.

The most common type of heat removal from the data center is compressor cooling, which itself requires a large amount of electricity. In addition, the trend toward server consolidation and the growing popularity of high-density blade server systems concentrate more and more computing power in ever smaller areas. This leads to hot spots that, experts say, heating, ventilation and air conditioning (HVAC) systems can hardly cope with: local cooling of such hot areas of the data center only redistributes the heat, but does not reduce electricity consumption.

"Balancing resource savings and data center operating costs is a difficult compromise between a higher cost of construction and a lower cost of ownership," emphasizes Roman Shumeiko, head of sales support at the system integrator HayTek.

In the search for an effective solution, the concept of trigeneration (CCHP, combined cooling, heat and power) appeared, meaning the combined generation of electricity, heat and cold.

One approach is to install absorption refrigeration machines in the data center's cooling plant; their cycle uses heat from a source located in the immediate vicinity of the data center, for example a CHP plant or a factory. Cold is generated in a special cycle of water evaporation in vacuum and absorption into a lithium bromide solution, which is then regenerated using the external heat source. The PUE value for this scheme can reach 1.15.

Trigeneration is more profitable than cogeneration (the combined production of heat and electricity), since it makes it possible to use recovered heat effectively not only for heating in winter, but also for producing cold in summer.

"Trigeneration technology is extremely difficult to operate and carries high capital expenditure (CAPEX) at the initial stage of deploying IT solutions," says Artem Kuznetsov, "and the payback period of such solutions, even with competent maintenance, is quite long: 7-8 years."

The cold center of the Kalininsky data center: the reserve of chilled coolant will keep the system running at full load if all refrigeration machines fail. Source: Rosenergoatom, 2019

Where is the point that will break the vicious cycle of growing CAPEX and OPEX in climate solutions for data centers? According to Alexandra Erlich, CEO of ProfAyTikul, it is an overall reduction in electricity consumption.

"The mathematics is very simple: either we save on the cooling system and invest enormous money in power, or we spend a little more on cooling, which allows significant savings in the adjacent subsystems," explains Alexandra Erlich.

Oleg Lyubimov also sees a trend toward reducing the energy consumption of data center support systems: raising the operating temperature of equipment, making greater use of outdoor air for cooling instead of freon air conditioners, and minimizing losses in power supply systems.

Air Flow Management

From a practical point of view, cooling systems should maintain the temperature and humidity corridors despite climatic swings, have as few potential points of failure as possible, consume as little electricity as possible, and be simple and convenient to operate, Alexandra Erlich notes. In addition, they should remain scalable, that is, the climate system should be able to grow easily if there is a need to increase the power or the number of racks in the data center.

Air flow management systems in data center premises largely meet these requirements. Ilya Tsarev, data center solutions architect at Schneider Electric, notes that a number of simple rules drawn from the best data center projects make it possible to reduce the power consumption of a data center decisively. These include, for example, separating the cold air used to cool IT equipment from the hot exhaust air, and avoiding recirculation and bypass of air.

Alexandra Erlich relies on two approaches: individual direct-flow ventilation systems and a move away from precision air conditioners toward heat exchangers. Large liquid-cooled heat exchanger modules simultaneously serve as a partition between the support infrastructure and the computing hall. Hot air from the IT racks is blown by fans into the space behind the liquid-cooled heat exchanger; passing through the heat exchanger, it is cooled and returned to cool the racks.

"Heat exchangers can be used in a data center of any configuration and in any climatic zone, both when building a new data center and during modernization or reconstruction," says Alexandra Erlich. "They fit easily into any architecture, scale well and are easy to operate. And most importantly, they cost and consume half as much as precision air conditioners."

This is especially important when modernizing a data center. "A modern ventilation system is often difficult to fit into existing shafts, but the heat exchanger systems we use for data centers can work without a raised floor and without a false ceiling, take up minimal space in the machine hall, or be located outside it altogether," she notes.

In search of free cold: free cooling

Free cooling, a refrigeration approach that uses natural cooling, was, as Alexandra Erlich notes, the first and very rational attempt to reduce data center power consumption, since it made it possible to build a system that works at least part of the year without compressors and other energy-intensive equipment. It means supplying cool outdoor air directly to the room (when the outside air temperature is lower than the indoor temperature) or transferring the cold indirectly via a coolant loop.

The most popular option is chillers with a free-cooling function: refrigeration machines equipped with an additional heat exchanger. If the outside temperature is higher than the specified coolant temperature, the coolant is cooled in the evaporator of the refrigeration circuit built into the chiller. In the cold season, the liquid is cooled not in the evaporator but in a separate heat exchanger, a dry cooler, where low-temperature outdoor air serves as the cooling source.
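The switchover described here can be sketched roughly as follows; the setpoint, the approach margin and the function name are assumptions for illustration, and real chiller controllers typically also support a mixed mode with partial free cooling:

 def select_cooling_mode(outdoor_temp_c: float,
                         coolant_setpoint_c: float = 10.0,
                         approach_c: float = 3.0) -> str:
     """Sketch of the mode selection described above (assumed thresholds).
     The dry cooler can only bring the coolant down to roughly outdoor
     temperature plus an approach margin, so free cooling is possible when
     outdoor air is sufficiently below the coolant setpoint; otherwise the
     chiller's refrigeration (evaporator) circuit takes over."""
     if outdoor_temp_c + approach_c <= coolant_setpoint_c:
         return "free cooling (dry cooler)"
     return "mechanical cooling (refrigeration circuit)"
 print(select_cooling_mode(-5.0))   # free cooling (dry cooler)
 print(select_cooling_mode(18.0))   # mechanical cooling (refrigeration circuit)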

Chillers on the roof of the Kalininsky data center building in Udomlya. Source: Rosenergoatom, 2019

According to experts, such a system requires significant capital investment but pays for itself quite quickly (possibly even in the first year of operation), since the main consumer of electricity in such a system, the chiller, does not run for several cold months.

Very often the chiller-fan coil system is optimal in terms of price and energy efficiency. Fan coils are air conditioners with a CW (chilled water) cooling system that work in tandem with a chiller. The main cooling elements of CW units are a heat exchanger and a two- or three-way valve that changes the coolant flow rate depending on the heat load in the room.
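As a rough illustration of how such a valve modulates coolant flow with the heat load, here is a toy proportional controller; the setpoint and proportional band are assumed values, and real fan coil units use PI or PID control:

 def valve_opening(return_air_temp_c: float,
                   setpoint_c: float = 24.0,
                   proportional_band_c: float = 4.0) -> float:
     """Toy proportional control of a CW fan coil valve (assumed parameters).
     Returns the valve opening as a fraction 0..1: closed at or below the
     setpoint, fully open once return air exceeds the setpoint by the band."""
     error = return_air_temp_c - setpoint_c
     return max(0.0, min(1.0, error / proportional_band_c))
 for t in (23.0, 25.0, 27.0, 30.0):
     print(t, f"{valve_opening(t):.2f}")   # 0.00, 0.25, 0.75, 1.00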

Ilya Tsarev emphasizes that data center energy efficiency is improved by using chillers with smoothly adjustable capacity and compressor-free systems, as well as by abandoning active humidity control with constantly running steam humidifiers in favor of adjusting the temperature and humidity in the hall so that condensate does not form on the heat exchangers.

Ventilation systems using outdoor-air free cooling and adiabatic cooling (evaporating water sprayed, for example, by a high-pressure system) indoors make it possible to achieve very good PUE figures, down to 1.043, since auxiliary equipment, including the cooling system, consumes only about 4% of the data center's power even in summer, and even less in winter.
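A quick arithmetic check shows how a roughly 4% auxiliary share translates into a PUE in this range; whether the 4% is counted against total facility power or against the IT load is an assumption about how the figure is measured:

 # If auxiliary systems draw about 4% of the total facility power:
 aux_share_of_total = 0.04
 pue_from_total_share = 1.0 / (1.0 - aux_share_of_total)
 print(f"{pue_from_total_share:.3f}")   # ~1.042
 # If instead the 4% is measured relative to the IT load:
 aux_share_of_it = 0.04
 pue_from_it_share = 1.0 + aux_share_of_it
 print(f"{pue_from_it_share:.3f}")      # 1.040

Either reading lands close to the 1.043 figure cited above.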

The next round in the development of free-cooling systems is associated with the advent of hybrid and chiller-less systems. Hybrid (external) systems use evaporative coolers instead of air dry coolers: water evaporates from the surface of the heat exchanger, and a large amount of energy is spent on breaking intermolecular bonds, which cools the air. Power consumption is reduced by 20-30% compared to basic free cooling, and the natural cooling mode can be extended to almost the entire year.

The other side of the coin is water consumption, which can be very large. The Green Grid even introduced another parameter characterizing water use in the data center, Water Usage Effectiveness (WUE), which, by analogy with PUE, is calculated as the ratio of annual water consumption to the energy consumed by IT equipment and is measured in liters per kilowatt-hour (L/kWh).
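A minimal sketch of the WUE calculation, with hypothetical annual figures rather than data from any real site:

 # Illustrative WUE calculation with assumed (hypothetical) figures.
 annual_water_liters = 30_000_000.0   # assumed annual water draw for evaporative cooling
 it_load_kw = 2_000.0                 # assumed average IT load
 hours_per_year = 8_760
 it_energy_kwh = it_load_kw * hours_per_year
 wue = annual_water_liters / it_energy_kwh      # liters per kWh of IT energy
 print(f"WUE = {wue:.2f} L/kWh")                # ~1.71 L/kWh with these assumptions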

In chiller-less systems, the chillers are kept in cold standby, and the entire cooling load falls on hybrid coolers. Google's data center in Germany, for example, is cooled this way. Chiller-less systems are considered one of the main options for data center development, but they are very sensitive to the parameters of the internal and external environment and therefore require detailed calculations during design.

The trend toward using water for data center cooling can be considered one of the key trends for the near future: it has become clear that the limits of improving air cooling are close, and achieving more tangible gains in energy efficiency requires replacing air with a more efficient coolant. Today water is the best candidate for this role.

"Two global trends can be distinguished in the development of data center design: the transition to energy-saving technologies and to liquid cooling technologies."

2021: Air will save costs in the data center

Free cooling has been used increasingly in data centers around the world in recent years, especially in large ones. Rising energy prices are forcing the owners of computing sites to look for new methods and approaches to cooling equipment and achieving significant cost savings.

Immersion cooling: from supercomputer to data center

"One of the promising directions is immersing active equipment in a liquid dielectric medium (liquid coolant)."

This provides significant advantages in equipment operation. For example, maintaining a dielectric coolant temperature of 35°C requires significantly less energy than supplying air at 13°C.

According to Global Market Insights, the market for such data center solutions will exceed $2.5 billion by 2025. Analysts explain this by the fact that as the volume of processed data grows and the load on servers increases, especially for high-performance computing, the capabilities of air cooling systems become insufficient.

Immersion cooling is the simplest option: computing modules are immersed in a dielectric liquid (mineral or synthetic oil), which removes heat as it circulates.

Immersion cooling of electronic equipment. Source: Ecoflops

The high-performance computing industry has been experimenting with this type of cooling for several years. The best-known project of this kind is the SuperMUC supercomputer with a capacity of 3 Pflops, operating at the Leibniz Supercomputing Centre of the Bavarian Academy of Sciences in Germany. The idea implemented there: each processor is fitted with a special water heat exchanger supplied with water at +40°C. The waste water, at +70°C, either goes to heating or is cooled in a climate system built on the principle of year-round free cooling.

In Russia, the supercomputer at the Research Computing Center of Moscow State University is cooled by a water cooling system with year-round free cooling.

Other options for cooling equipment with water from the inside:

  • Water-cooled doors. In this case, the heat exchanger is installed on the rear door of the server rack and removes heat directly from the equipment, so that hot air does not enter the machine hall.

This makes it possible to shorten the hot aisle many times over, which significantly increases the energy efficiency of the cooling system. In addition, the solution does not require additional space, allowing optimal use of space in the data center. Water-cooled doors can be considered a good alternative to in-row air conditioners, especially where cooling must be provided for highly loaded racks.

  • Cooling with loop heat pipes that deliver refrigerant directly to active server components. Heat is removed from them to a heat exchanger, which can be located either inside the server rack or outside it.

Google equips its server hardware designed for machine learning computations with liquid cooling systems. The corresponding modules, Tensor Processing Unit (TPU) ASIC chips, are grouped on the motherboard together with a cooling plate. Liquid coolant is delivered through tubing to the cold plate in contact with each TPU ASIC.

The capital cost of water-cooled server systems is still too high today. Oddly enough, water itself can become a significant component of the solution's cost, because a large data center needs a lot of it. According to Bloomberg, in 2019 Google needed more than 8.7 million cubic meters of water for its data centers in three states. The need for water is so great that it has to be requested from the authorities of the cities where the data centers are located. For example, a new data center in Red Oak, Texas, requires up to 1.46 billion gallons of water, while the entire county, which includes this town and two dozen more, consumes 15 billion gallons for all its municipal needs.

Immersion-cooled server. Source: Ecoflops

Recently, immersion cooling, that is, cooling server equipment by completely immersing it in a liquid dielectric coolant, has begun to be used in conventional data centers as well. Such a system is compact and undemanding in terms of power supply: energy is needed only to run a few low-power pumps. At the same time, it provides a stable temperature environment, eliminating the hot zones that appear around highly loaded racks.

The immersion-cooled DTL data center was commissioned in 2019 in Moscow.

"The drawbacks include an increase in the area occupied by active equipment and the need to use special lifting mechanisms to quickly replace failed computer hardware."

Inpro Technologies, a Russian developer and manufacturer of computing and communication systems, has developed its own cooling solution based on its Liquid Cube direct liquid cooling technology. The company says that computing systems and networks built on Liquid Cube consume 30% less electricity and reduce operating costs by 50% compared to traditional data center solutions.

Energy efficiency of direct liquid cooling systems. Source: Inpro Technologies

Oleg Kotelyukh, managing partner of Inpro Technologies, says that the Liquid Cube solution can be used for a wide range of tasks: from data storage to highly specialized high-density calculations.

Liquid Cube Container Data Center with Liquid Cooling

The Liquid Cube containerized data center is a universal computing and communication platform with direct liquid cooling, aimed at use within hyperconverged and Edge architectures. The rapidly deployable Liquid Cube data center can operate across a wide range of ambient temperatures and even in aggressive environments, providing a 50% or greater reduction in OPEX compared to a traditional data center.

Comparison of traditional data center and Liquid Cube with the same power consumption

Floating Data Center

Nautilus Data Technologies has been building floating data centers for several years. The first such data center, Eli M, with a capacity of 8 MW and 800 server racks, was launched at the end of 2015. And at the end of this year, another floating data center with a capacity of 6 MW is due to come online on a barge moored in the port of Stockton, California.

Nautilus Data Technologies Water Data Center. Source: Selectel

The data center on the barge will use the company's proprietary cooling system with heat exchangers that draw on the water surrounding the facility. The average seawater consumption to support the server cooling system is about 17,000 liters per minute and can reach 45,000 liters per minute at peak.
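A back-of-envelope check suggests that such a flow is of the right order for a facility of this size; the water temperature rise across the heat exchangers is an assumed value, not a figure published by Nautilus:

 # How much heat can ~17,000 L/min of seawater carry away?
 flow_l_per_min = 17_000.0
 density_kg_per_l = 1.025          # seawater, approximate
 cp_kj_per_kg_k = 3.99             # specific heat of seawater, approximate
 delta_t_k = 5.0                   # assumed temperature rise of the water
 mass_flow_kg_s = flow_l_per_min * density_kg_per_l / 60.0
 heat_kw = mass_flow_kg_s * cp_kj_per_kg_k * delta_t_k
 print(f"{heat_kw/1000:.1f} MW")   # roughly 5.8 MW, on the order of the 6 MW facility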

Nautilus Data Technologies claims that this cooling method allows a fivefold increase in specific power per rack, while the data center will be less demanding on resources than competitors' data centers.

Two-deck Data Center from Nautilus Data Technologies with modular structure. Source: Selectel

A specially designed barge also houses a floating Google data center. The first such data center, cooled by seawater, was launched in 2011. The company does not disclose the design of its internal cooling system.

Underwater Data Center

In mid-September, Microsoft summed up two years of testing its underwater data center, conducted in Scotland as part of Project Natick. The first-generation prototype, Leona Philpot, a small container measuring 3 x 2 m, was submerged to a depth of 10 meters off the Pacific coast of the United States back in 2015. It operated for 105 days, demonstrating a PUE of 1.07.

The second-generation data center is more impressive in size, 12.2 x 2.8 m, and accommodated 12 racks with 864 servers. The data center, with a power consumption of 240 kW, was located next to a tidal power station, which powered it during operation. The developers noted that this prototype is designed for five years of operation and does not require intermediate maintenance.

Underwater Data Center Microsoft Project Natick

The results of the Natick 2 project showed that the failure rate (the ratio of the number of failed units per unit of time to the average number of units operating properly in that period) of the underwater data center was eight times lower than that of ground-based data centers.

Microsoft also noted that the main problems of conventional data centers are temperature fluctuations and corrosion, caused by oxygen and moisture in the air. Sealed underwater data centers provide corrosion protection, and the temperature in them remains practically constant thanks to the use of seawater for cooling.

Microsoft engineers are currently working on a third-generation data center: it will include 12 cylindrical containers with the technical characteristics of Natick 2. Together with all the auxiliary infrastructure of the data center, they will be attached to a steel frame at a depth of 200 meters. The total capacity of the Natick 3 data center will be 5 MW.

Microsoft's Natick 3 Underwater Data Center

Immersion in groundwater

Where there is no sea nearby, it has been proposed to use groundwater to cool the data center. At depths of 10 to 100 meters the temperature does not change during the year and is 8-12°C depending on the area, and special water treatment is usually not required. As with air free cooling, either direct cooling or heat exchangers can be used.

The PUE of such a system is 1.06 to 1.08. An important aspect of the solution is the cost of the project, which grows considerably with depth.

Groundwater is used, in particular, to cool the servers in the IGN data center in Germany. Water is pumped up from a depth of 300 m to the data center, cools the internal closed water-cooling circuit of the servers while heating up by only 5 K, and is returned down another well. The system saves 30-40% of electricity compared to conventional air cooling.
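For a sense of scale, here is a short estimate of the groundwater flow needed to carry away a given heat load at the 5 K temperature rise mentioned above; the 1 MW load is an assumption for illustration:

 # Groundwater flow needed to remove an assumed 1 MW of heat at a 5 K rise.
 heat_load_kw = 1_000.0        # assumed IT heat to reject
 cp_kj_per_kg_k = 4.19         # specific heat of water
 delta_t_k = 5.0               # temperature rise from the text
 mass_flow_kg_s = heat_load_kw / (cp_kj_per_kg_k * delta_t_k)
 flow_l_per_min = mass_flow_kg_s * 60.0        # ~1 kg of water per liter
 print(f"{flow_l_per_min:.0f} L/min per MW")   # roughly 2,900 L/min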

"If we talk about the economic efficiency of a data center, then we need to start with the climate systems. The more effective they are, the cheaper everything else becomes. And here the absolute leader today is groundwater," Alexandra Erlich is sure. "The system's consumption equals the consumption of a few pumps, and that is all: year-round free cooling, almost free cold. We are now designing such a system in Germany, where they are widespread, but in Russia, unfortunately, we have not yet come across groundwater-based systems."

Data centers are going green

Another option for optimizing data center air cooling is choosing a location with suitable natural conditions. Not surprisingly, the first such experiments were conducted in Finland and Iceland, countries with cold climates. It should be noted that besides the natural conditions, the successful implementation of such a project requires developed infrastructure: roads, communications, electricity, etc.

Not so long ago, the Kyoto cooling system appeared in data centers. This is a green technology that uses the cold of the environment year-round, with backup vapor-compression machines for guaranteed operability. The average annual PUE reaches 1.15.

"It can depend on the time of year and the climate zone of the data center. The latter also affects the choice of solutions aimed at improving energy efficiency. In southern regions these can be solar panels for powering equipment, in steppe regions with prevailing strong winds, wind turbines, and in the north, cooling with groundwater or seawater."

"Today, green technologies are more popular than ever. It is important that they not only reduce the impact of infrastructure on the environment, but also save significant resources by reducing energy costs."

Since it is difficult in practice to reduce PUE below the level of 1.1 already achieved, the focus is shifting to low-power technologies and renewables. For example, one of Apple's data centers, located in North Carolina, is already 100% powered by renewable energy sources: 42 million kWh come from solar panels, and the rest of its needs are covered by burning biogas.

"A number of large Western and Asian corporations have said their data centers run on wind and solar energy. And a Finnish developer recently shared plans to use heat from a data center to heat homes and agricultural greenhouses."

But projects of this kind are far from trivial: the server output is low-grade heat, which is quite difficult to put to use, not least because the data center then becomes a heat supply organization and must obtain the appropriate license. This is one of the problems that will have to be solved if Russian data centers follow the path of energy saving through green technologies: to whom, and how, should the generated heat be transferred, particularly in summer?

There are isolated successful projects in the world. For example, the Yandex data center in the town of Mäntsälä in Finland is cooled by direct free cooling, and the heated air passes through a heat exchanger into the city's heat supply network. Yandex also receives money from the municipal utilities for the thermal energy it supplies.

"Of course, today all customers are thinking about energy efficiency. In particular, they choose engineering systems with high efficiency and additional energy-saving capabilities (for example, free cooling), and use IT equipment with improved resource utilization algorithms and reduced requirements for external conditions."

"For Russia, this is still rather an experimental story. The feasibility of building green and energy-efficient data centers is directly related to the cost of energy resources. For now they are quite cheap compared to the cost of the required equipment, which is purchased for foreign currency."

"This is unacceptable for commercial sites, but applicable for large corporate customers and government agencies. It should also be borne in mind that energy-efficient solutions are often more reliable, and this too affects the choice of solution," notes Konstantin Zinoviev.

Technology Integration Course

The wide choice of promising data center cooling solutions is complicated by the fact that the practical implementation of each depends on many parameters: in one place there is not enough water for evaporative cooling, in another not enough electrical power, in a third no space for a ventilation chamber.

For energy-efficient systems, CAPEX is often much higher than for traditional ones such as precision freon air conditioners, but their OPEX performance is excellent. When using free cooling, the temperature regime in the machine hall matters: the larger the difference between the temperature inside the data center and outside, the lower the CAPEX and the less work heat transfer requires. The smaller the difference, the more work is needed to cool the machine hall, which means higher power consumption, that is, higher OPEX. In general, experts say, free cooling always means high CAPEX, and such a project pays off through low OPEX.
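A simple payback estimate of the kind implied here might look as follows; all the figures are assumptions chosen only to show the arithmetic, not data from a real project:

 # Simple payback sketch for a free-cooling project (all figures assumed).
 capex_delta_rub = 30_000_000.0        # extra capital cost vs a compressor-only baseline
 baseline_energy_kwh = 6_000_000.0     # assumed annual cooling energy of the baseline
 energy_savings_share = 0.45           # assumed share of cooling energy saved by free cooling
 tariff_rub_per_kwh = 5.0              # assumed electricity tariff
 annual_savings_rub = baseline_energy_kwh * energy_savings_share * tariff_rub_per_kwh
 payback_years = capex_delta_rub / annual_savings_rub
 print(f"payback = {payback_years:.1f} years")   # about 2.2 years with these assumptions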

"It all depends on the climate, the availability of resources and the purpose of the data center itself."

However, in her opinion, there are general trends in the development of this segment of solutions:

  • direct-flow ventilation;
  • replacement of obsolete precision air conditioners with heat exchangers of various configurations;
  • technologies based on water evaporation.

"The solution to this issue may be building data centers at more northerly latitudes, using adiabatic cooling technology, and reducing the IT load itself by abandoning fans for cooling active equipment and installing heat-pipe radiators on processors."

Sanjay Kumar Sainani, Senior Vice President and CTO of Huawei's global data center business, in his data center forecast for 2020-2025 notes among the significant industry trends the convergence of liquid and air cooling systems and the wider use of indirect evaporative cooling technologies instead of water-based ones. For example, in regions with a suitable climate, water cooling systems will gradually be replaced by indirect evaporative cooling.

Compute Energy Efficiency

"The higher the peak consumption, the more powerful the data center's power distribution network must be, the more power is required from the city and, in the future (given the position of the Ministry of Energy of the Russian Federation on unused grid capacity reserves), the higher the fines for capacity reserved but unused for most of the year."

"The key consumer of electricity in the data center is the computing equipment. Accordingly, you can reduce costs by utilizing systems more efficiently, applying virtualization and using solutions that are already positioned as energy-efficient," said Pavel Goryunov.

"The result of virtualization is the consolidation of data centers into larger facilities, which makes it possible to apply technical solutions for reducing PUE whose use in smaller data centers is economically impractical," adds Alexey Malyshev.

"Virtualization technologies at one point broke the trend of power growth per rack for several years and slowed the growth of data center scale," says Ilya Tsarev. "The 8-10 kW per rack in the average corporate data center that was promised a decade earlier for 2013-2015 is only being reached now, and in colocation many facilities still have not reached it."

According to the expert, the energy efficiency of a data center can be improved even by following a simple strategy of using IT equipment efficiently. First, retire equipment that no longer performs useful work in a timely manner. Second, make the engineering architecture scalable, allowing painless growth as well as contraction following the IT hardware. Third, plan the placement of IT equipment in racks using DCIM systems to improve the efficiency of cooling and power distribution.

"We have implemented such a solution in our data center. Using hundreds of sensors, the system records the temperature in the premises twice a minute, builds heat maps and makes it possible to identify suboptimally placed equipment," says a representative of CRIC.

According to him, this saves about 5% of electricity costs per year and also reduces the number of failures caused by equipment overheating or increased humidity.
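A minimal sketch of how such hot-spot detection could work on rack inlet-temperature readings; the sensor values, rack names and deviation threshold are hypothetical and do not describe CRIC's actual algorithm:

 # Flag racks whose inlet temperature deviates strongly from the room average.
 from statistics import mean
 readings_c = {                      # latest inlet temperature per rack (hypothetical)
     "A01": 23.5, "A02": 24.1, "A03": 31.2,   # A03 looks like a hot spot
     "B01": 22.8, "B02": 25.0, "B03": 24.4,
 }
 room_avg = mean(readings_c.values())
 threshold_c = room_avg + 4.0        # assumed deviation that flags a hot spot
 hot_spots = {rack: t for rack, t in readings_c.items() if t >= threshold_c}
 print(f"room average {room_avg:.1f} C, hot spots: {hot_spots}")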

Uninterruptible power supply

The current de facto standard for data center uninterruptible power supply (UPS) systems is static UPS operating in on-line (double conversion) mode, which provide the required quality of electricity. In addition, modern equipment typically uses switched-mode power supplies with a non-linear consumption pattern. Powerful three-phase double-conversion UPS are well suited to powering such equipment, as they avoid overloading the neutral conductors of the input power grid and transformer substation equipment.

According to Ilya Tsarev, the search for more efficient solutions is moving toward power distribution schemes for large data centers that are topologically more complex than the classic two-feed power supply with 2N or N+1 redundancy.

One promising method of synchronization and load balancing is implemented, for example, in Eaton's Hot Sync technology. Unlike the parallel systems of other manufacturers, the devices do not exchange the information needed for synchronization and load balancing: the operating algorithm is based on tracking any deviations in UPS output power, and each device operates independently while remaining fully synchronized with the others. Let us note other areas in which uninterruptible power systems are developing.

  • Lithium-ion batteries as a replacement for traditional lead-acid batteries. They are much lighter and more compact than traditional batteries and have a higher energy density.

"They last longer in terms of both service life and the number of charge-discharge cycles, and contain fewer heavy metals and aggressive substances," notes Ilya Tsarev, adding that for now lithium-ion batteries are too expensive for mass use in data centers, but as they become cheaper they will gradually replace traditional lead-acid batteries.

According to Schneider Electric, depending on the application, lithium-ion batteries can reduce total cost of ownership by 10-40%.

  • Modular UPS. Almost all leading vendors have added easily scalable and serviceable modular units of 1 MW and above to their product portfolios. System power can be increased in relatively small increments by adding extra cabinets with batteries.

In particular, Vertiv offers the super-powerful Liebert Trinergy Cube UPS (from 150 kW to 3.4 MW), built on the modular principle. Moreover, it can itself serve as a single module and, through scaling, form an uninterruptible power supply system with a capacity of up to 27 MW in a parallel configuration.

With a decentralized architecture, modular solutions offer great design flexibility and make it possible to connect one or more additional modules to an already operating device in a short time when power demand grows. Moreover, the mean time to repair (MTTR) after a failure is radically reduced thanks to hot-swapping of the faulty module.

  • Dynamic (diesel rotary) UPS. These do not use batteries. They include three main elements: a flywheel, the key element of the DRUPS, which acts as an energy storage device and rotates on a precisely aligned axis; a synchronous electrical machine; and a diesel engine.

Uninterrupted operation is maintained by the kinetic energy of the flywheel, which eliminates the need for batteries. The service life of a DRUPS is at least 25 years, while a static UPS lasts 10-15 years. System efficiency is also higher: 98% versus 95%. A DRUPS solution takes up much less space, is easier to maintain, and helps reduce capital and operating costs.
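To make the ride-through idea concrete, here is a rough estimate of how long a flywheel can carry the load before the diesel engine must take over; the inertia, speeds and load are assumed figures, not the specifications of any particular DRUPS:

 import math
 # Ride-through time provided by a DRUPS flywheel (all figures assumed).
 inertia_kg_m2 = 500.0                 # assumed flywheel moment of inertia
 rpm_nominal = 3_000.0                 # assumed nominal speed
 rpm_minimum = 2_100.0                 # assumed minimum speed at which voltage can still be held
 load_kw = 1_000.0                     # assumed critical load
 w_nom = rpm_nominal * 2 * math.pi / 60.0
 w_min = rpm_minimum * 2 * math.pi / 60.0
 usable_energy_kj = 0.5 * inertia_kg_m2 * (w_nom**2 - w_min**2) / 1_000.0
 ride_through_s = usable_energy_kj / load_kw
 print(f"usable energy = {usable_energy_kj/1000:.1f} MJ, ride-through = {ride_through_s:.1f} s")

With these assumptions the flywheel covers on the order of ten seconds, which is enough time for the diesel engine to start and pick up the load.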

  • Cluster systems. Power is increased by installing high-power units in parallel, for example in 250 kVA steps. This creates an independent design of a higher process stage with a common service bypass, a common battery bank and a single control scheme.

  • Containerized power solutions. These provide greater flexibility and ease of scaling when building megawatt-class data centers. The modules can be transported, quickly installed and reused. This is very convenient, for example, for telecom operators building 5G networks.

  • Distributed UPS. These are mounted either in the server rack or directly next to it, so that there is practically no free space between each server and its UPS. This approach significantly reduces the risk of connection defects in the power supply circuit, and the small mass simplifies installation and relocation of distributed UPS.

  • Centralized UPS management. Provides instant information on UPS status, including capacity, location, load status, and the need to replace each UPS's batteries.

  • Intelligent UPS management. An integration system for UPS-based data centers is implemented, for example, by the Russian company MOMENTUM. It is a centralized monitoring system with a local display that tracks the state of the power supply system, temperature and humidity, and the state of each individual subsystem. Round-the-clock, year-round remote monitoring of system condition and equipment loading minimizes the costs of maintenance and equipment repair and the losses from unscheduled outages.

Schneider Electric's special ECOnversion energy-saving mode reduces operating costs. Fault-tolerant systems with N+1 redundancy can be built on the basis of built-in redundancy. Power can be expanded both within 1500 kW (in 250 kW increments) and beyond, by connecting several systems in parallel.

  • The ability to share unused power. Vertiv is partnering with Upside Energy to develop Virtual Energy Store technology, which allows unused electricity to be shared with the central grid. In effect, this is a demand response system that does not require launching additional generating capacity.

According to the developers, the Virtual Energy Store platform from Upside Energy can coordinate more than 100,000 devices in real time, controlling customers' UPS systems without compromising their function of emergency backup power for the data center.

  • Smart power management. Modern management systems collect power consumption data from servers, racks and distribution equipment, down to monitoring each individual outlet. You can find periods of reduced load and schedule maintenance for that time. Analysis of consumption peaks makes it possible to keep the capacity margin within 10-15% instead of the 30-40% typical of manual control.

  • High-availability data centers. This approach is proposed, for example, by Schneider Electric. It provides a single, integrated approach to building engineering infrastructure based on the proprietary InfraStruXure architecture and interoperability of components at the physical level.

  • Modular data center engineering infrastructure. Implemented, for example, in the Delta InfraSuite complete solution: data center capacity can be increased by gradually adding the necessary equipment: racks, cooling systems, cabinets and power distribution units, control and distribution units, etc.

  • Software-defined power (SD-Power). This is about creating a level of abstraction that makes it possible to manage existing power resources effectively for the benefit of the end consumers, the devices.

All leading vendors offer their own software-defined power solutions. For example, Power System Manager, one of the software modules of Vertiv's Trellis Enterprise DCIM suite, can analyze the power consumption of both IT and engineering equipment, generating reports and recommendations for planning the data center's power supply. The solution can predict possible bottlenecks, overloaded and underloaded racks, and forecast the state of the power system.

Eaton's Intelligent Power Manager software provides tools for monitoring and managing various power supply devices in physical and virtual environments. The application supports business process continuity and uninterrupted operation of IT equipment. Schneider Electric, in turn, is developing Smart Grid solutions designed to combine facilities that differ in type and level of power consumption and to redistribute capacity dynamically. The data center, as an energy-intensive facility, will be one of the key links in this chain.

BIM

The use of BIM technologies in data center construction is gaining popularity. Because the mutual positions of engineering systems can be calculated down to the smallest detail, the number of collisions during installation is reduced several times over. BIM also helps control construction progress and material consumption, which makes it easier to stay within the allocated budget. Monitoring the progress of work in a single information space becomes much simpler.

The possibilities of information modeling in creating engineering infrastructure range from visualization of individual objects, from details of engineering systems to site layouts with real terrain, to 3D models of the facility and work with them in virtual reality mode. The use of such advanced tools is understandable: full-scale mock-ups of data centers are hardly possible, so mathematical modeling is the only option for comprehensive analysis of such facilities.

An important software component of this toolkit is CFD (Computational Fluid Dynamics), which solves problems of the computational dynamics of fluids (liquids and gases). It is used to simulate and evaluate the efficiency of mass and heat transfer in data centers.

CFD model for evaluation of mass and heat exchange efficiency in data centers. Source: ICS Group

During operation, a complete 3D catalog of all data center equipment is also created, with a visual display of temperature distribution and air flows at the room and cabinet level. Such a model allows the data center owner to place and replace equipment rationally and in good time, to evaluate the impact on temperature and the environment of various scenarios of equipment arrangement, power supply, cooling and constraints, and to simulate the effect of any physical actions in advance.

3D mathematical model of the physical data center. Source: ICS Group

Energy modeling

Building Energy Modeling (BEM) is the assessment of the integrated energy efficiency of all engineering systems and design solutions using specialized software. Engineering calculations make it possible to estimate the facility's energy consumption over the year and predict the payback of design decisions.

BEM is a comprehensive simulation that creates a thermal map of the data center (the temperature distribution by volume), determines the efficiency of data center power consumption, and makes it possible to choose optimal solutions for cooling the server rooms and optimizing air flows in the data center machine halls.
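A toy estimate of the kind of annual figure a BEM study refines with far more detail; the IT load and the month-by-month PUE profile are assumptions for illustration only:

 # Annual facility energy from an assumed IT load and a seasonal PUE profile.
 it_load_kw = 1_500.0
 hours_per_year = 8_760
 monthly_pue = [1.12, 1.12, 1.15, 1.20, 1.28, 1.35,   # assumed PUE by month:
                1.38, 1.36, 1.27, 1.18, 1.13, 1.12]   # higher in summer, lower with free cooling
 it_energy_kwh = it_load_kw * hours_per_year
 facility_energy_kwh = sum(it_load_kw * (hours_per_year / 12) * pue for pue in monthly_pue)
 annual_pue = facility_energy_kwh / it_energy_kwh
 print(f"annual facility energy = {facility_energy_kwh/1e6:.2f} GWh, seasonal-average PUE = {annual_pue:.2f}")

Multiplying the facility energy by the local tariff then gives the energy component of OPEX mentioned below.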

Modeling of a standard cooling solution for a diesel rotary UPS (DRUPS) installed in the data center. Source: MM-Technologiya

Using the potential of outdoor air to cool data centers is one of the promising energy-efficient solutions. However, the actual performance depends substantially on the correct configuration of the mixing chamber. With an unsuccessful chamber configuration, the mixed-air temperature the designer obtains from the algebraic formula will not be achieved in reality. For example, in winter, warm recirculation air, instead of mixing with the outside air, can escape outdoors, and excess street air can enter the mixing chamber.

Analysis by mathematical modeling methods makes it possible to study the actual pattern of air flow in the mixing chamber and, if necessary, to develop modifications to it.

Numerical simulation of the mixing chamber of the Data Center microclimate maintenance system. Source: MM-Technologiya

Data center energy modeling helps choose the most energy-efficient way to cool the data center and the optimal configuration of air conditioners, and also gives a more accurate estimate of the data center's OPEX, since that is largely determined by energy costs.

Balance modeling of engine room cooling in the data center. Source: MM-Technologiya

... In ancient times, human settlements were located on the banks of large rivers and seas. Rivers were the main transport arteries along which goods and people moved. In the digital age, data centers appear on the same shores, with information "loads" moving through them and virtual human communications taking place. Nature is eternal, and technological progress is fleeting. Perhaps a few decades will pass, and fairy tales for children will begin like this: on the shore of the blue sea stood a green data center, and small friendly virtual containers lived in it...