
Cooling technologies in data centers


Data centers have become the central nervous system of the modern corporation, and they do an enormous amount of work. Google's data centers, for example, process about a billion search queries every day, which requires gigantic amounts of energy: the world's largest search engine consumes about 260 million watts, roughly as much as 200 thousand individual residential buildings.

The cost of electricity (the energy itself, not the physical devices and network connections) is the largest expense of any data center. Up to half of this amount goes to powering huge air-conditioning systems that keep the temperature inside the data center at 16-23 degrees Celsius and the humidity at 40-55 percent, the range considered optimal for the normal operation of computing equipment.

To reduce energy costs, cutting-edge companies are experimenting with new cooling methods. The savings can be very significant, and such projects can also earn the company a reputation for environmental responsibility. Most modern data centers cool with atmospheric air, which is additionally chilled on the premises. Companies try to keep the maximum supply-air temperature below 25 degrees Celsius, because cooling the air by just one extra degree raises energy costs by about 4 percent.
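
A minimal back-of-the-envelope sketch of how the cited "about 4 percent per degree" figure compounds for a hypothetical facility (the base cooling load and setpoints below are illustrative assumptions, not data from the article):

```python
# Back-of-the-envelope sketch: how "~4% more energy per extra degree of cooling"
# compounds for a hypothetical facility. All numbers are illustrative assumptions.

BASE_COOLING_KW = 500.0       # assumed cooling load at a 25 C supply-air target
EXTRA_COST_PER_DEGREE = 0.04  # ~4% more energy per additional degree of cooling

def cooling_power(target_c: float, base_target_c: float = 25.0) -> float:
    """Estimated cooling power (kW) when the supply-air target is lowered."""
    degrees_colder = max(0.0, base_target_c - target_c)
    return BASE_COOLING_KW * (1.0 + EXTRA_COST_PER_DEGREE) ** degrees_colder

for target in (25, 23, 20, 18, 16):
    print(f"target {target:>2} C -> ~{cooling_power(target):.0f} kW")
```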

2025

Still exotic: why immersion cooling of servers in Russia never took off

Hype periodically arises around different technologies. Some of them then take off quite quickly and find widespread practical use, while others get their "minute of fame" and end up in a niche segment or take a much longer road to broad adoption. In the data center segment, one technology that rode the hype wave but remains in little demand in Russia is immersion cooling of servers.

One can recall how, some time ago, not only global but also domestic suppliers of server and infrastructure solutions actively demonstrated hardware "floating" in dielectric liquid at industry exhibitions in Russia, presenting it as an exotic but promising and cost-effective technology whose use was gaining momentum.

Now, more than 10 years later, the owners of some foreign data centers use and continue to develop immersion cooling technologies and keep receiving new patents for them. At the same time, according to analysts at Global Market Insights, demand for immersion cooling is generated mainly by a few segments: hyperscalers, cryptocurrency mining and high-performance computing.[1]

One of the most active researchers, users and evangelists of immersion cooling is Microsoft. It has spent years on R&D in this area and continues to promote submerged cooling, justifying its feasibility, in line with the company's strong commitment to the ESG agenda, primarily by its environmental advantages over air cooling. According to Microsoft's latest data, a cold-plate system coupled with two-phase immersion cooling reduces energy consumption in data centers by 15-20% and water consumption by 31-52% compared to air cooling.[2]

Immersion cooling of blade servers in one of the Microsoft data centers

And what about Russia? After talking with leading market players in the data center segment, TAdviser found that in 2025 immersion cooling in domestic data centers remains an extremely rare beast.

"Immersion cooling in Russian data centers remains an exotic technology used by no more than 5% of operators," Andrey Zotov, commercial director of Cloud.ru, shared his assessment in a conversation with TAdviser.

IXcellerate says that, according to its information, immersion cooling of servers is not widespread in Russian commercial data centers: there are only rumors of isolated cases, mainly in specialized facilities. Selectel notes that such solutions are most common not in large data centers but in small server rooms, where powerful computing equipment is used for highly specialized tasks. Where compact placement is critical and "heavy" hardware must be cooled effectively, this approach can indeed be justified.

"There are a fair number of small immersion-cooled installations in the world, but these are either small private data centers with a mining past or research projects. The use of immersion cooling for server equipment in Russia, as on a global scale, remains rather exotic in the market for high-performance systems. Most of the players in this segment who appeared in 2010-2012 have already left the market," said Yegor Druzhinin, technical director of RSK, a developer and integrator of supercomputer solutions.

The company added that RSK itself developed and patented 100% direct liquid cooling (Direct Liquid Cooling, DLC) technology back in 2009 and has been using it in its solutions for 15 years. RSK specialists deliberately did not develop immersion cooling - submerging servers in baths of oil or another dielectric - having carefully studied the shortcomings of this technology at the very start of the company's business.

Yegor Druzhinin says that immersion cooling began to gain popularity quickly during the heyday of the mining boom, when large mining capacities had to be put into operation fast, as a rule in unprepared rooms. With such a speed-first approach, equipment longevity and safety fade into the background. It was during this period of high demand that a new layer of manufacturers of equipment and materials for immersion cooling emerged. Some players tried to transfer an approach designed for mining equipment - which is quite tolerant of mass failures and has a very short life cycle - to data centers. But in the end there is not a single large commercial data center where immersion cooling of IT equipment is used at scale.

Most of the companies surveyed by TAdviser name the same key obstacles to wider use of immersion cooling: the high cost of the technology, including dielectric liquids; the difficulty of integrating it into the existing landscape; additional maintenance challenges, including a shortage of qualified personnel; and a mental barrier.

"The main deterrent is the high cost of implementation, including the cost of equipment and special cooling fluids, as well as the difficulty of integrating the technology with existing data center infrastructure. Traditional cooling methods remain more effective for most scenarios," says Andrey Zotov from Cloud.ru.

Customer requirements for the engineering infrastructure of a data center have been shaped over years and are highly conservative, notes Vasily Mikheenko, Deputy Technical Director of 3data HyperScale. Any deviation from classical solutions attracts heightened attention and is often perceived as a risk factor, which can be a disadvantage when choosing a site. In addition, it is psychologically difficult for many customers to imagine their expensive hardware literally immersed in a liquid, even a dielectric one. All existing commercial data centers are designed and built for traditional cooling systems, and introducing immersion technology effectively means a large-scale retrofit of the facility.

So far there is also a general "mental barrier" - doubts about the reliability of immersing expensive equipment in liquid, notes Alexey Amosov, head of the corporate solutions department at Inferit Tekhnika (part of the SF TECH cluster of Softline Group of Companies).

Importantly, server equipment for such systems requires a special design and adaptation; standard servers are not suitable. The result is a set of significant barriers for a large data center, where traditional air or liquid cooling systems fully cope with their tasks, says Ilya Mikhailov, director of Selectel data centers.

Such solutions also carry safety risks. Synthetic oils or dielectric liquids are most often used as the cooling medium, and despite their electrical insulating properties such substances - mineral oils, for example - can be a fire hazard, a Selectel representative points out. This imposes additional infrastructure requirements and increases risk.

The situation is further complicated by issues of disposal, cleaning and potential leaks, says Vasily Mikheenko from 3data HyperScale. Finally, the effect of implementation is far from always obvious: the expected electricity savings can be offset, and when a secondary cooling circuit (chillers, cooling towers) is used they may be absent altogether.

"At IXcellerate, we deliberately do not implement full immersion cooling. Our position is based on an analysis of technical risks and operational difficulties. We consider this technology not yet mature enough for commercial use at the scale of a modern data center," Sergey Vyshemirsky, technical director of IXcellerate, explained to TAdviser. "Instead, we are developing hybrid direct liquid cooling solutions, where cooling circuits are routed directly to the most heat-loaded components."

IXcellerate gave TAdviser the following list of the main problems it sees in immersion cooling:

  • Engineering limitations. Ensuring uniform circulation of dielectric fluid in tanks remains a major technical task. Stagnation zones occur that result in local overheating of the components. Mixing and circulation systems add complexity and points of failure.
  • Connectivity. There is a serious problem with optical communication channels that "die" when fluid enters the connector.
  • System hybridity. Even with immersion cooling, some components - power supplies, drives, network cards - still require air cooling. This creates the need to maintain two engineering systems at the same time, which increases operating costs.
  • Service. Removing servers from the dielectric environment for diagnostics or repair is a labor-intensive process: draining the liquid and cleaning the components takes time, which significantly increases MTTR (mean time to repair).
  • Personnel qualification. Maintenance of immersion systems requires retraining of technical specialists and creation of new safety regulations.

"Our analysis of the experience of HPC pioneers shows the importance of a comprehensive approach. We studied the case of a Russian supercomputer where only liquid cooling of the processors was initially implemented, but the system subsequently had to be supplemented with traditional freon cooling for the other components," Sergey Vyshemirsky also noted.

Among the many operational troubles of this technology there is one problem that no one has yet figured out how to deal with, says Yegor Druzhinin from RSK. It is this: several servers sit in a common tank of coolant, and the liquid actively washes over all server components, carrying thermal energy away. Everything is fine until an electrical failure occurs, the most unpleasant being the failure of an electrolytic capacitor in one of the servers. Any server contains up to a couple of hundred such components, so in practice the probability of such a failure is high.

"The failure causes a small internal short circuit, which produces, in the circulating liquid, tiny electrically conductive particles of the capacitor-plate foil and reactive electrolyte. This happens in close proximity to other servers and upstream of the filtering system. As a result, as experience shows, conductive debris from the accident tends to get into neighboring equipment, causing secondary failures," explains the technical director of RSK.

Before talking about the prospects for immersion cooling, it is worth recalling the technology's main advantages, believes Alexey Amosov from Inferit Tekhnika. First, high cooling efficiency (PUE close to 1.02-1.08): direct contact with the heat source is far more efficient than air cooling. Second, high density, which allows many more powerful servers to be placed per unit of area. Other important advantages are energy efficiency, which is especially important for data centers, temperature stability without local overheating, silence, and extended equipment life.
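
For context, PUE (Power Usage Effectiveness) is the ratio of total facility power to IT power, so a PUE of 1.05 means roughly 5% overhead for cooling and other infrastructure. A minimal sketch, with sample loads that are assumptions rather than figures from the article:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The sample loads below are illustrative assumptions, not measurements.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

it_load = 1000.0  # kW of IT equipment, assumed

# A typical air-cooled overhead vs. the low overhead implied by PUE ~1.05
for label, overhead_kw in (("air-cooled (assumed)", 500.0),
                           ("immersion (assumed)", 50.0)):
    total = it_load + overhead_kw
    print(f"{label}: PUE = {pue(total, it_load):.2f}")
```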

"As for the prospects, immersion cooling is now at a crossroads. On the one hand, there are constraints that prevent it from becoming mainstream in Russian data centers. On the other hand, the advantages listed above give it a chance to escape the stage of the 'eternal pilot' and really become widespread in new data centers built specifically for it," says Alexey Amosov. "It will be a solution for high-performance computing, AI and machine learning, and dense cloud infrastructure, where space and energy savings are critical."

Growing demand for AI solutions and high-performance computing, together with the construction of new data centers, could create new opportunities for the development of immersion cooling technology, believes Andrey Zotov from Cloud.ru.

At the same time, Vasily Mikheenko from 3data HyperScale believes one should not expect a sharp, massive adoption of immersion cooling. Data centers still have other ways to remove heat from high-density racks that are easier to integrate into existing infrastructure.

IXcellerate predicts the gradual development of the direction in three scenarios:

  • Short term (2-3 years): Growth in the application of hybrid direct liquid cooling in AI/ML clusters and GPU farms. Immersion cooling will remain in the narrow niches of mining and research projects.
  • Medium term (3-7 years): Standardization of solutions, the emergence of plug-and-play systems from large OEMs can intensify implementation in commercial data centers for specialized high-density zones.
  • Technology catalysts: The development of quantum computing, neuromorphic processors and other ultra-high heat generation technologies will create demand for extreme cooling solutions.

The current volume of the global market for high-performance systems, according to the analytical company Hyperion Research (formerly part of IDC), is estimated at $60 billion, RSK notes. According to the Global Market Insights report, the immersion cooling systems market amounted to $1.3 billion in 2024 - roughly a 2% share. The same report estimates that, with an average annual growth rate of 18.3%, sales of such systems will reach $7.2 billion by 2034. Meanwhile, the entire global HPC market, according to Hyperion Research, will exceed $100 billion as early as 2028.
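
A quick arithmetic cross-check of the figures quoted above (only the article's numbers are used; the rounding is mine):

```python
# Cross-check of the Global Market Insights figures: $1.3B in 2024 growing at
# an 18.3% CAGR for 10 years, and its share of a $60B HPC market.

immersion_2024_bn = 1.3
hpc_market_bn = 60.0
cagr = 0.183
years = 2034 - 2024

share_2024 = immersion_2024_bn / hpc_market_bn
projected_2034_bn = immersion_2024_bn * (1 + cagr) ** years

print(f"2024 share of HPC market: {share_2024:.1%}")          # ~2.2%
print(f"Projected 2034 market:    ${projected_2034_bn:.1f}B")  # ~$7.0B, close to the cited $7.2B
```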

If you look at the rating of the top 50 most powerful supercomputers in Russia and the CIS, there are de facto no immersion-cooled computing systems among them.

"The prospects for immersion cooling of servers are very limited, which is typical of technologies with a very narrow application niche," notes Yegor Druzhinin from RSK. "The immersion approach is quite applicable in special areas where powerful electronics must simultaneously be sealed against external influences and cooled. In such cases electronic units are immersed in individual containers, and the remaining task is to reduce the weight of the product."

It must be said that although technology giants such as Microsoft, Baidu, Dell, Hewlett-Packard, Lenovo, Intel and Fujikura continue to invest in immersion cooling research, at the global level one can also find expert opinions that in the near future immersion cooling will make sense only for certain tasks: in the high-performance computing mentioned more than once above, for ultra-dense placement of capacity where space is limited, or for laboratory research.

Nanotechnology unveiled that cools data centers twice as efficiently as conventional compressors

On May 21, 2025, American specialists from the Johns Hopkins University Applied Physics Laboratory in Maryland announced the development of a new technology for solid-state thermoelectric cooling based on nanomaterials. The method is called CHESS (Controlled Hierarchically Engineered Superlattice Structures) - controlled hierarchically designed superlattice structures. Read more here.

Data centers have begun to be cooled with lasers

In mid-April 2025, Sandia National Laboratories of the US Department of Energy reported joining forces with Minneapolis-based startup Maxwell Labs to introduce an innovative laser cooling technology for removing heat from data center equipment. Specialists from the University of New Mexico also take part in the project.

Scientists note that about 30-40% of the energy consumed by data centers goes to cooling servers and other components. Against the background of the rapid introduction of artificial intelligence, the load on data centers is growing, forcing operators of such facilities and hyperscalers to purchase high-performance GPU-based accelerators. This creates an additional need for highly efficient cooling systems. Startup Maxwell Labs has developed a laser technology that could complement, or in some cases replace, traditional liquid systems.

Data Center Innovation: Server Laser Cooling Comes to Market

Lasers tuned to a particular frequency and aimed at a small area on the surface of a component can remove heat from it. The technology involves installing a photonic cooling plate less than a millimeter thick that directs the lasers precisely at hot spots on the surface of computer chips. The plate is made of gallium arsenide, a semiconductor material used in laser diodes, Gunn diodes, tunnel diodes, photodetectors and other devices.

One advantage of the new solution is the ability to target laser cooling precisely to control localized heating. According to the developers, the approach will not only reduce the power consumption of data centers but also increase equipment performance.[3]

2024

The volume of the global market for data center cooling systems reached $16.84 billion

At the end of 2024, spending on the global market for data center cooling systems amounted to $16.84 billion, with the North American region accounting for almost 40% of global costs. Industry trends are addressed in a Fortune Business Insights survey that TAdviser reviewed in early July 2025.

One of the key drivers of the market is the rapid development of artificial intelligence. Training large language models (LLMs), inference, and running generative services require enormous computing power. Against this background, the load on data centers is growing, forcing operators of such facilities and hyperscalers to purchase high-performance GPU-based accelerators. This creates an additional need for efficient cooling systems, including liquid solutions.

At the same time, AI algorithms help optimize the operation of data centers to increase performance and reduce power consumption. Neural networks analyze data from sensors installed throughout the data center, which makes it possible to adjust the parameters of the cooling system in real time, taking into account indicators such as temperature, humidity and airflow characteristics. Industry experts estimate that by 2025, 75% of large enterprises will use AI-based infrastructure management tools to improve data center efficiency, save energy, and reduce operating costs. A study by the Lawrence Berkeley National Laboratory suggests that optimizing cooling with AI can reduce energy consumption in data centers by up to 40%. In addition, neural networks can identify further opportunities to reduce energy consumption.
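
As a purely hypothetical illustration of the closed loop described above - sensor readings driving real-time adjustment of a cooling setpoint - the sketch below uses invented sensor fields, a made-up target temperature and a simple proportional rule; it does not describe any real data center management product:

```python
# Minimal, hypothetical sketch of sensor-driven cooling adjustment.
# All names, thresholds and the proportional rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Reading:
    temperature_c: float   # rack inlet temperature
    humidity_pct: float
    airflow_cfm: float

TARGET_INLET_C = 24.0  # assumed target inlet temperature

def adjust_setpoint(current_setpoint_c: float, reading: Reading) -> float:
    """Nudge the cooling setpoint toward the target inlet temperature."""
    error = reading.temperature_c - TARGET_INLET_C
    # Proportional step: cool harder when racks run hot, relax when they run cold.
    return current_setpoint_c - 0.5 * error

setpoint = 18.0
for r in [Reading(26.1, 45.0, 1200), Reading(24.4, 44.0, 1180), Reading(23.8, 43.5, 1150)]:
    setpoint = adjust_setpoint(setpoint, r)
    print(f"inlet {r.temperature_c:.1f} C -> new setpoint {setpoint:.2f} C")
```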

Fortune Business Insights analysts note that the COVID-19 pandemic had a positive impact on the industry. The sharp growth of remote work and the digital transformation of enterprises accelerated the development of cloud services, which led to the expansion of data center infrastructure, and sales of cooling equipment rose as a result. However, disruptions in supply chains and delays in data center construction projects due to quarantine created difficulties that affected market dynamics in the short term. Demand for cloud services continues to grow, as such platforms provide flexibility, scalability and lower spending on local hardware.

By application, the authors of the study segment the market into IT and telecommunications, BFSI (banking, financial services and insurance), manufacturing, retail, healthcare, power and utilities, and others. In 2024, the largest share of revenue came from the first of these areas, while BFSI accounted for 19.2%. Geographically, North America leads with 38.9%, or $6.55 billion: the region hosts the sites of leading cloud providers, including Amazon Web Services (AWS), Microsoft Azure and Google Cloud. The report also names the significant players on a global scale.

Fortune Business Insights analysts believe that going forward the CAGR (compound annual growth rate) of the data center cooling market will be 12.4%. By 2032, costs could increase to $42.48 billion.[4]

New data center cooling technology developed that reduces power consumption by 13%

At the end of October 2024, a team of scientists and engineers at the University of Texas introduced a new technology for cooling data centers. The new "thermal interface material," made of a mixture of liquid metal and aluminum nitride, conducts heat much better than existing commercial materials and can therefore efficiently remove heat from powerful electronic devices.

The researchers created the new cooling material through a process called mechanochemistry, which allows liquid metal and aluminum nitride to mix under controlled conditions, creating gradient interfaces that facilitate heat transport. The researchers tested the new material on small laboratory devices with impressive results: the thermal interface is capable of removing 2,760 watts of heat from an area of just 16 square centimeters. The engineers now intend to scale the technology for testing and application in data centers.

New data center cooling technology created to reduce power consumption by 13%

Cooling accounts for about 40% of data center power consumption, or 8 terawatt-hours per year. The researchers estimated that the new technology could reduce cooling requirements by 13%, which corresponds to about 5% of total data center power consumption. Across the industry this would provide significant savings, and better heat dissipation can also significantly increase processing power.
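
A quick check of the arithmetic behind these two paragraphs, using only the figures quoted above:

```python
# Heat flux of the demonstrated thermal interface, and the share of total
# data center energy that a 13% cut in cooling needs represents.

heat_removed_w = 2760.0
area_cm2 = 16.0
print(f"heat flux: {heat_removed_w / area_cm2:.1f} W/cm^2")   # ~172.5 W/cm^2

cooling_share = 0.40       # cooling ~40% of data center power
cooling_reduction = 0.13   # new material cuts cooling needs by 13%
total_saving = cooling_share * cooling_reduction
print(f"saving on total consumption: {total_saving:.1%}")     # ~5.2%, i.e. about 5%
```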

The explosive growth of AI, along with the proliferation of the technology, is expected to lead to a significant increase in demand for data centers. Goldman Sachs analysts estimate that electricity demand from data centers will grow by 160% by 2030, and that AI technologies alone will add about 200 terawatt-hours per year to data center electricity consumption between 2023 and 2030. New thermal interface materials could partially solve the problem of electricity costs.[5]

Metal foam released that reduces data centers' cooling energy consumption by 90%

At the end of August 2024, Apheros introduced a new metal foam that improves data center cooling, which accounts for almost 40% of total data center energy consumption. The technology can improve the heat exchange of cooling systems by 90%, thereby significantly reducing energy consumption. Read more here.

2023: Global Data Center Liquid Cooling Systems Market Grows to $3.87 Billion for the Year

At the end of 2023, the global market for liquid cooling systems (LCS) for data centers reached $3.87 billion. For comparison, a year earlier spending in this segment was estimated at $3.32 billion, so growth of about 17% was recorded. The main drivers of the industry are the adoption of high-performance computing (HPC), artificial intelligence and cloud services. This is stated in a Market Research Future review published in early September 2024.

Against the background of the rapid development of AI and the growing volume of generated information, the load on data centers is constantly increasing. This requires the installation of more powerful equipment, which in turn requires efficient cooling systems to remove the heat. Liquid solutions make it possible to maintain optimal temperatures while reducing power consumption compared to traditional air circuits.

New standards and government initiatives are another factor driving LCS sales. Data center operations are governed by numerous regulations that set requirements for energy efficiency and emissions of harmful gases into the atmosphere. Liquid systems make it possible to comply with these requirements while helping hyperscalers and cloud operators expand their computing infrastructure. In addition, LCS generate much less noise than air cooling, which is especially relevant for facilities located in industrial areas or other places with high population density. From this point of view, liquid cooling can help data center owners reduce associated risks.

Market Research Future analysts divide the LCS market into solutions using water, specialty refrigerants, and dielectric fluid. In 2023, the water-based segment was the largest, accounting for more than 60% of total revenue: such a cooling medium is economical, readily available and provides sufficiently high efficiency. The refrigerant-based segment shows moderate growth, as strict environmental regulations govern the use of such substances.

By application, the LCS industry is segmented into hyperscale data centers, corporate data centers and combination sites. Hyperscaler platforms are estimated to dominate the market, generating more than 40% of revenue in 2024. Growing demand for cloud and big data services is driving the development of hyperscale facilities, which require highly efficient cooling solutions. At the same time, the authors of the study believe that corporate data centers will demonstrate stable growth rates.

Among the leading players named are Liebert (Emerson Electric), Green Revolution Cooling, CoolIT Systems, Schneider Electric, Huawei, Fujitsu, Asetek, Allied Control, Systemair, IBM, Vertiv, GEA, Rittal, Danfoss and Dell Technologies. Geographically, the leader in LCS adoption in 2023 is North America, owing to high demand for data processing services driven by the growing adoption of cloud computing and the spread of AI applications. The Asia-Pacific region shows steady growth, linked to increased investment in data center infrastructure.

Analysts believe that the CAGR of the market under consideration will be 16.5% going forward. As a result, by 2032, spending on LCS for data centers will rise to $15.3 billion.[6]

2020: For now, only IT giants can afford green data centers

In the summer of 2020, the British journal Nature published data showing that the annual energy demand of data centers is approximately 200 TWh - a value comparable to the national energy consumption of some countries. The energy that data centers receive is not always drawn from clean sources. China, the second-largest data center market, gets more than 70% of the electricity for its data centers from coal-fired thermal power plants. Every year, the number of technology companies whose business relies on data centers keeps growing.[7]

The problem has become so serious that the stable phrase "green data centers" has appeared. Whereas ten years ago cooling difficulties were solved along the way, data centers are now designed from the outset to combine technology with environmental protection requirements. The symbiosis of digital technologies and environmental protection is developing in two directions. One involves building data centers from recycled and environmentally friendly structural materials; the other implies the creation of energy-efficient engineering systems.

The choice of a data center construction site testifies to a company's intentions. The main emphasis in the design of Verne Global's facility was the condition that its energy consumption be covered by geothermal energy, so the data center was built near a geyser. In Frankfurt, the Citigroup data center project includes the ability to use rainwater as a coolant.

The main power consumption of data centers comes down to two components: the IT load and the cooling systems. Freon air conditioners and chiller-fan coil systems are being displaced by systems with direct or indirect free cooling. The data centers of Amazon, Google and Facebook use direct free cooling of server rooms, meaning that outside air is pre-cleaned before being supplied for cooling. Indirect free cooling implies a closed cooling circuit.

Microsoft pays attention to research in various fields and is experimenting, among other things, with cooling systems, searching for alternatives to free cooling. One such attempt is the placement of a sealed data center in the ocean, although the inaccessibility of the servers in the event of component failures offsets the benefits of natural cooling.

Microsoft and other IT giants promise to reduce their carbon footprint. In particular, Microsoft announced a phased elimination of the effects of its carbon dioxide emissions and intends to achieve carbon-neutral status by 2030. Ideally, European data centers should also become carbon neutral by 2030. One Danish company designed and launched a system for transferring heat from a data center to the city heating network: 70% of the heat generated by the data center is reused. The Danish authorities plan to reduce carbon dioxide emissions by 2030, and heat transfer technology partially contributes to meeting this quota. Finland, England and other countries do the same: excess heat from some data centers is used to heat houses, and IBM's data center in Switzerland gives away some of its excess heat to warm a neighboring swimming pool. There are already enough such examples, and their number will grow.

For data centers, the use of solar and wind energy is relevant. An example is the data center of an Internet company in Illinois that provides power for its servers using a wind generator.

Practice shows that the development of green data centers is a matter of time. So far, a complete transition to clean energy is possible only for the data centers of giant companies, which today are the more environmentally responsible ones. Changes in energy efficiency policies are inevitable. The risks associated with change scare many, but competition, in which energy costs and environmental compliance will play a significant role, should be treated with even greater concern.

2017

A decommissioned United States Air Force bunker near Des Moines (Iowa) capable of withstanding a nuclear strike; an abandoned limestone quarry in a remote area of Pennsylvania; three buildings of about 28 thousand square meters each being built near the Arctic Circle - such different objects, at first glance, have already become or will soon become state-of-the-art data centers.[8]

Fundamentally new cooling projects are of great interest. For example, in Frankfurt, plants were planted on a data center's roof for additional cooling (though even this data center did not abandon air conditioning). The new Google data center in the Finnish city of Hamina uses old granite tunnels for cooling, through which water was once supplied to a now-defunct pulp and paper mill; today that water cools the data center and then returns to the Gulf of Finland. And Facebook, in a bid to build a reputation as an environmentally responsible company, has released plans for a server farm in the Arctic Circle region where equipment will be cooled naturally.

Computing and networking technologies do not attract as much attention as the state-of-the-art design of some data centers, but they too can make a significant contribution to energy savings. For example, a Cisco data center in Allen, Texas, uses a converged infrastructure that carries data and storage traffic over a single network. As a result, the number of switches, adapters and cables is reduced and power consumption drops significantly: the fewer the cables, the more freely natural air flows around the equipment, reducing the need for fans. Compact analogues of rack-based servers are also easier to cool. Server virtualization further reduces energy requirements by allowing fewer physical servers to process the same amount of information. As a result, air conditioning in this data center is turned on only when the air temperature in the premises exceeds 25 degrees Celsius. Solar panels are used to power the office space. All this allows Cisco to cut the cost of cooling the data center by 600 thousand US dollars per year.

According to some forecasts, greenhouse gas emissions from data centers will amount to 4 percent of the global level by 2020. For now, national and international organizations use a "carrot" policy to combat emissions, but if companies do not take decisive steps to reduce them, the "carrot" may quickly be replaced by the "stick." Without waiting for this, advanced companies are tackling the problem in various ways, including environmentally competent building design and the use of smarter computing and networking technologies.

See also

Notes