Why is exaflop computing so important?
Systems capable of performing 1 quintillion (10^18) floating point operations per second are a thousand times more powerful than petaflop (10^15) systems and may prove capable of solving the world's greatest scientific problems.
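To make the difference in scale concrete, here is a minimal back-of-the-envelope sketch in Python (the 10^21-operation workload is an arbitrary illustrative assumption, not a figure from the text):

```python
# Rough comparison of petaflop- vs. exaflop-class machines.
# The 1e21-operation workload is a hypothetical, illustrative number.

PETAFLOP = 1e15  # operations per second
EXAFLOP = 1e18   # operations per second

workload_ops = 1e21  # assumed size of some large simulation

for name, rate in [("petaflop system", PETAFLOP), ("exaflop system", EXAFLOP)]:
    seconds = workload_ops / rate
    print(f"{name}: {seconds:,.0f} s (~{seconds / 3600:.1f} h)")

# petaflop system: 1,000,000 s (~277.8 h)
# exaflop system: 1,000 s (~0.3 h)
```

Under that assumption, a job that would occupy a petaflop machine for nearly two weeks finishes on an exaflop machine in under twenty minutes.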
Market estimates
2023: Global HPC Market Size Reaches $50 Billion
At the end of 2023, the global market for high-performance computing (HPC) reached approximately $50 billion. For comparison, a year earlier the figure was about $46 billion, so growth came to roughly 8%. Steady growth is predicted going forward, driven largely by the rapid adoption of artificial intelligence technologies, including generative ones. This is stated in a study by Fortune Business Insights, the results of which were released in early July 2024.
It is noted that the COVID-19 pandemic had a positive impact on the HPC industry due to increased demand for computing resources for epidemiological modeling, genomics, biomedical research and drug development. During this period the load on data centers grew significantly, and their operators began expanding capacity. Another important market driver is AI technology: training large language models (LLMs) requires enormous processing power, so cloud providers and hyperscalers are actively purchasing expensive GPU-based accelerators and other specialized equipment. Moreover, AI algorithms are becoming more complex, which in turn increases the need for HPC resources. Against this background, the industry is showing steady positive dynamics.
In addition, the global HPC market is gaining momentum as a result of the transition to hybrid and multi-cloud IT environments. To create a flexible and dynamic HPC infrastructure, organizations use a combination of on-premises systems and public and private clouds. This approach contributes to optimal use of resources and cost efficiency.
Geographically, North America is the largest region in terms of HPC development and adoption: by the end of 2023, spending there reached $20.4 billion, up from an estimated $18.91 billion a year earlier, a 7.9% year-on-year increase. Overall, North America accounts for approximately 40.8% of the global HPC market. The region's dominance is attributed to a combination of technological innovation, strategic investment, and a mature cloud computing ecosystem. In addition, North America leads in the development of processors, GPU accelerators, specialized AI-acceleration cards and other equipment that forms the foundation of HPC systems. The region's technological expertise and robust infrastructure allow it to tackle complex computational challenges and drive innovation in areas such as climate modeling, financial analytics, genomics research, and generative AI applications.
Among the key players in the global HPC market are Dell, Lenovo, Fujitsu, Atos, Cisco, Nvidia, Amazon Web Services (AWS), Hewlett Packard Enterprise (HPE), AMD, Sugon, Inspur, NEC and others.
Fortune Business Insights analysts believe that between 2024 and 2032 the global HPC industry will show a compound annual growth rate (CAGR) of 9.2%. As a result, by the end of that period spending could reach $110 billion, while for 2024 the market is expected to come in at around $54.39 billion. One restraining factor is the lack of maturity of digital infrastructure in developing countries. This digital divide could further exacerbate existing inequalities in access to advanced technologies and hinder the overall growth of the HPC market.[1]
2018
High-performance server rental services market reaches $6.28 billion
The global market for high-performance server rental services reached $6.28 billion in 2018, according to data from the analytics firm ResearchAndMarkets. These offerings are known as "HPC as a service."
The analysts did not specify how the market changed compared to 2017, but stated that it is on the rise. They emphasize that the growth of the high-performance computing market is driven by the rapid evolution of technologies such as 3D imaging, artificial intelligence and the Internet of Things, as well as a rapid increase in the volume of data being analyzed. In addition, HPC as a service is growing in popularity because it enables real-time data processing for analyzing market trends, testing new products, and broadcasting live sporting events.
Governments are expected to introduce more initiatives for universal digitalization, a trend that is also positively affecting the growth of HPC delivered as a service worldwide. Analysts suggest that the high prevalence of cloud technologies in emerging economies will also be an important enabler for the development of high-performance server rental services.
High-performance computing (HPC) is carried out primarily on powerful servers or supercomputers capable of performing quadrillions of calculations per second; such systems require more than a thousand parallel computing nodes. The server rental service allows customers to use the corresponding applications in the cloud, providing on-demand access to the necessary components.
According to a study presented on the ResearchAndMarkets.com website, the HPC-as-a-service market could rise to $17 billion by 2026. The public cloud segment dominated the global HPC market in 2018 and is expected to generate the most revenue going forward. The highest growth rates in HPC spending in 2018 were seen in healthcare.[2]
HPC Server Market Grows 15.6% to $13.7 Billion
The global market for high-performance computing (HPC) servers reached $13.7 billion in 2018, an increase of 15.6% over the previous year. The data was released on April 9, 2019 by analysts at Hyperion Research.
They note that global sales of powerful servers set a record and exceeded the analysts' own forecast of $13 billion. According to Steve Conway, Senior Vice President of Research at Hyperion Research, the surge in the market was due to HPC's "critical role in artificial intelligence research and the growing adoption of HPC servers in corporate data centers to accelerate business operations." Hyperion Research CEO Earl Joseph also linked the upswing to robust growth in the American and global economies.
Most of the demand for HPC solutions falls on the supercomputer segment, which covers equipment priced above $500,000. In 2018, global sales of such systems amounted to $5.4 billion.
Analysts distinguish three further categories of HPC systems: Divisional (servers serving company divisions; priced from $250,000 to $500,000), Departmental (for departments; $100,000-250,000) and Workgroup (for working groups; up to $100,000). In 2018, these segments came to $2.5 billion, $3.9 billion and $2 billion, respectively.
Analysts named HP the leader of this market: in 2018 it earned $4.77 billion from sales of high-performance servers, corresponding to almost 35% of the total market. The top three manufacturers also included Dell and IBM.
The researchers expect the HPC server market to grow 7.8% annually and reach $20 billion by 2023.[3]
Europe's plans
In February 2012, the European Commission announced its intention to nearly double investment in the development of exaflop computing, from €630 million to €1.2 billion ($1.58 billion). According to the statement, the funding will proceed despite the austerity measures introduced by European governments to stave off defaults.
The White House also recently published its budget for 2013, which for the third year in a row allocated very modest funds to research and development of these technologies. In 2011, the US Department of Energy requested almost $91 million to finance exaflop development in 2012 and received $73.4 million, which was nevertheless more than the $28.2 million of the previous year.
In the 2013 budget submitted to Congress, the White House requested $89.5 million for exaflop computing. Some of the funds the United States spends on exaflop research also come from other budget items of the Department of Energy, as well as the Department of Defense.
According to IDC analyst Earl Joseph, this level of US investment is insignificant for a program that could require billions of dollars, while China is moving forward with its plans and has the financial resources and scientific and engineering talent to make progress toward exaflop-scale computing.
According to the analyst, Europeans should already be concerned about China's success in this market. "China will simply bury them," Joseph said, speaking of Europe. "But with that level of investment, they have a chance to hold on and maybe raise their heads in this game."
The bulk of the American money goes to basic research into new types of processors, memory, operating systems and compilers. According to Joseph, the results of this research can largely be applied commercially.
Research firm IDC, which advises European authorities on HPC technologies, has recommended that the European Union's leadership focus on developing applications for exaflop systems and concentrate less on hardware development.
Europeans, like the Chinese, see advantages in developing exaflop technologies, which involves building systems a thousand times more powerful than those operating today. Exaflop computing systems can reach 1 quintillion (10^18) floating point operations per second.
But such systems "pose many difficult questions," the European Commission said in a report accompanying the funding decision. Among these problems are the need for a hundredfold reduction in energy consumption and the development of new programming models. If Europe and China can solve them, they will have a chance to compete with the current market leader, the United States, in the field of high-performance computing.
Speaking about the European investment, European Commission Vice President Neelie Kroes, who is responsible for the initiative, noted in a statement that "for European industry and job growth in Europe, high-performance computing is one of the most important driving factors." "We must invest in this area, because we cannot afford to give way to competitors," she said.
Meanwhile, the U.S. HPC scientific community has expressed concern about the government's lack of a forward-looking plan to fund these research and development efforts. Jack Dongarra, a University of Tennessee scientist involved in the creation of the world's TOP-500 supercomputer list, welcomes Europe's efforts. "Friendly competition helps move forward," he stressed.
The US is falling behind
There is a race around the world to create next-generation supercomputers, but US efforts appeared to have stalled as of early 2012. China and Europe, in particular, are moving forward with their programs, and Japan is gaining more and more momentum.
The successes of Russian developers in high-performance computing should not be forgotten either. In March 2011, a 1 Pflops supercomputer was successfully tested at the Sarov nuclear center, claiming 12th place in the TOP-500 world ranking, and the general director of T-Platforms, Vsevolod Opanasenko, was named among the ten most influential people in the supercomputing industry by the leading industry publication HPCwire.
The US government, meanwhile, has not yet set in motion a plan for the development of exaflop computing, notes Patrick Thibodeau, a journalist at Computerworld.
According to him, such programs involve more than just building supercomputers. Developing exaflop computing platforms means launching new generations of processors, storage systems and networking technologies. Breakthroughs in these areas by other countries could create new problems for US technological dominance.
Patrick Thibodeau cites five reasons why the US risks falling behind in HPC.
The United States has no plan for the development of exaflop computing
An HPC development project of this kind could cost the US billions of dollars. Europe has estimated its own exaflop effort at €3.5 billion ($4.724 billion) over more than ten years. China will be able to invest an enormous amount of money in its own developments. In 2008, China had 15 systems on the TOP-500 list of the world's most productive systems. The latest version of the list, released in November 2011, already contains 74 Chinese-made systems, or 14.8% of the world total.
The United States continues to finance large projects, such as the planned 20-petaflop IBM computer for Lawrence Livermore National Laboratory, due within the next year. That system could return the United States to the top of the list. However, regardless of what is now happening in Europe or China, the United States has not yet set a budget for developing this program.
It would be a mistake to assume the United States will win the exaflop race
While China's supercomputer efforts receive a great deal of attention, Europe's focus on developing technological infrastructure could also make it a competitor to the US.
The Large Hadron Collider (LHC), with its 16.8-mile circular tunnel on the French-Swiss border, has turned Europe into a global center for high-energy physics research. This may mean that physicists who once would have sought to work in the United States may find better conditions in Europe and could help create a new manufacturing industry in the region.
The United States once had plans to build a 54-mile supercollider tunnel in Texas, but Congress cut off funding and abandoned the partially built project after planned spending rose from $5 billion at the end of the 1980s to $11 billion. European countries are also working together to create their own global satellite positioning system, Galileo, a project costing approximately $20 billion.
The LHC and Galileo show that European countries are ready to pool their resources and act together in the technological sphere. They see a similar opportunity for joint action in exaflop computing, especially in software development.
In its October report, the European Exascale Software Initiative working group said that the United States, Europe, China and Japan all have the potential to create the first exaflop system.
The path to the exaflop is unknown, opening the door to contenders
Although the United States does not yet have a plan for its exaflop computing program, some requirements for such a system have already been set: it should be ready by 2019-2020 and must not consume more than 20 MW of electricity, which is not much for a system likely to be equipped with millions of processors.
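That 20 MW ceiling implies a concrete efficiency target; a quick calculation using only the figures quoted above shows what it demands (a sketch, not an official specification):

```python
# Energy-efficiency target implied by the quoted constraints:
# 10^18 floating point operations per second within a 20 MW power budget.

exaflop_ops_per_s = 1e18
power_budget_w = 20e6  # 20 MW expressed in watts

flops_per_watt = exaflop_ops_per_s / power_budget_w
print(f"Required efficiency: {flops_per_watt / 1e9:.0f} GFLOPS per watt")
# Required efficiency: 50 GFLOPS per watt
```

In other words, the machine would need to deliver roughly 50 billion floating point operations per second for every watt it draws.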
The need for systems with reduced energy consumption has given rise to new development approaches. Alex Ramirez, who heads computer architecture research at the Barcelona Supercomputing Center, said that a project using ARM processors and Nvidia GPUs demonstrates the feasibility of building a high-performance computing cluster on the ARM architecture.
The project is also creating a complete software stack for this cluster. According to Ramirez, many problems still lie ahead, mainly related to the need to create software for an environment different from both server and mobile computing. "The human effort and financial investment in developing this software will be significant," he added.
If the US is not leading on the exaflop, what can be said about planning for the zettaflop?
A computer science freshman today should already be learning, over his four-year degree, about the path to the exaflop system; by the time he completes a doctoral thesis, the discussion will have moved on to zettaflop (10^21) systems, a thousand times more powerful still. If high-performance computing keeps to its historical pattern of development, zettaflop systems can be expected around 2030. However, no one knows what such a system might look like, or whether it is possible at all. A system of this type may require entirely new approaches, such as quantum computing.
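As a rough illustration of that historical pattern (the 2008 petaflop milestone is an external reference point, and the thousandfold-per-decade growth rate is an assumption drawn from the text, not a measured trend):

```python
# Illustrative extrapolation: if peak supercomputer performance grows roughly
# 1000x per decade, each new prefix (peta -> exa -> zetta) arrives about ten
# years after the last. The 2008 petaflop date (IBM Roadrunner) is an external
# reference point, not a figure from this article.

petaflop_year = 2008
years_per_1000x = 10

exaflop_year = petaflop_year + years_per_1000x    # ~2018-2020, matching the target above
zettaflop_year = exaflop_year + years_per_1000x   # ~2028-2030

print(f"Projected exaflop era:   ~{exaflop_year}")
print(f"Projected zettaflop era: ~{zettaflop_year}")
```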
The White House has declared its unwillingness to get involved in an "arms race" over building faster computers, and a year ago warned in a report that a focus on speed "could divert resources from basic research aimed at developing fundamentally new approaches to high-performance computing that would ultimately overtake other countries." But the US is already in the computing race, whether it wants to be or not. Developing technology that will overtake other countries will require sustained funding for basic research, as will the creation of exaflop systems.
The US has not explained what is at stake in this game
President Barack Obama became the first US president to mention exaflop computing, but he did not explain the potential of these systems. Supercomputers can help scientists create atomic-level models of human cells and of how viruses attack them. They can be used to model earthquakes and find ways to predict them, as well as to design structures that can withstand them. In industry, they are increasingly used to create products and test them in virtual environments.
Supercomputers can be applied to almost any task imaginable, and the greater a system's power, the greater its computational capabilities and the more accurate the science. Today, the US dominates the supercomputing market: IBM alone accounts for about 45% of the systems on the TOP-500 list, followed by HP with a 28% share, and about 53% of the list's most powerful systems are located in the United States.