Edge computing
Edge computing is the principle of building a hierarchical IT infrastructure in which computing resources are partially moved from the core (the central data center) to the periphery, placing them in close proximity to where raw data is created so that it can be processed before being passed to a higher computing node. Data collection and analysis is thus carried out not in a centralized computing environment (a data center) but where the data streams are generated.
Edge Analytics Software (Global Market)
Main Article: Edge Analytics Software (Global Market)
Edge WAN Platforms (Global Market)
Main Article: Edge WAN Platforms (Global Market)
How Edge Computing Works
To some extent, this technology can be compared to a measuring device in an oil field or remote telecommunications facility: it brings computing resources closer to where data is collected.
The industrial automation market is leading the way in adopting a number of new technologies, including augmented reality (AR), 3D printing, robotics, artificial intelligence (AI), cloud-based supervisory control and data acquisition (SCADA) systems, and programmable automation controllers (PACs). As for automation technologies, the Internet of Things already connects various points with intelligent sensors, from the production floor and the supply chain to the very heart of the enterprise. The Industrial Internet of Things (Industrial IoT) provides information for product maintenance, inventory, and transportation.
However, simply building network connectivity and data flow management is not enough to truly harness the potential of digital transformation. To gain competitive advantages, manufacturing enterprises need to fully integrate industrial automation. Only then will they be able to convert data collected in the IoT environment into valuable insights to enable faster, more accurate and cost-effective decision-making. And to do this, they must transfer computing power to the edge of the network - to its peripherals[1].
Benefits of Moving Computing from the Data Center to the Edge
The purpose of edge computing is to move computing resources from a hyperscale cloud data center, which can be at a considerable distance (in the "core" of the network), closer to the user or device, to the "edge" of the network. This approach focuses on reducing network latency and concentrating computing power to process data near its source. With an edge network, mobile applications could make far greater use of artificial intelligence and machine learning algorithms, whereas today they depend entirely on the computing capabilities of mobile processors. In addition, compute-intensive tasks drain[2] phone batteries much faster[3].
Edge computing is a kind of "rocket fuel" for IoT. It offers a number of advantages and opportunities:
- edge computing makes it possible to analyze and filter data closer to the sensors, so that only relevant data is sent to the cloud;
- a delay in the production process can be critical, for example, if a failure occurs on the production line. A fast response time, measured in milliseconds, is essential to the safety of critical and precise operations. In such cases, waiting for a result from an IoT cloud platform takes too long;
- edge computing means that, if necessary, sensitive data can be processed in a location where it is protected from direct network connections, providing a higher level of control over the security and confidentiality of information;
- finally, the requirements for cloud storage capacity and network bandwidth are reduced, along with the corresponding costs, since instead of being sent to the cloud, much of the sensor data can be processed directly at the edge.
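The first advantage above (filter at the edge, send only relevant data to the cloud) can be sketched in a few lines of Python. This is an illustrative example, not part of the article: the function name `deadband_filter` and the 0.5-degree threshold are assumptions.

```python
# Illustrative sketch: an edge node filters raw sensor readings locally and
# forwards only "relevant" ones (significant changes) to the cloud.

def deadband_filter(readings, threshold=0.5):
    """Keep only readings that differ from the last forwarded value
    by more than `threshold` -- everything else stays at the edge."""
    forwarded = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            forwarded.append(value)
            last_sent = value
    return forwarded

# A temperature trace with mostly redundant samples:
raw = [20.0, 20.1, 20.2, 21.0, 21.1, 25.0, 25.05]
to_cloud = deadband_filter(raw)
print(to_cloud)  # only the significant changes leave the edge: [20.0, 21.0, 25.0]
```

With this kind of deadband filtering, seven raw samples shrink to three uploads; the cloud still sees every significant change while the redundant traffic never leaves the edge.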
Edge computing architecture has become a center around which many computing tasks are concentrated. Its advantages include minimal network latency in processing data and the ability to work with large volumes of data, but it also has weaknesses: insufficient interoperability of the protocol stack and a lack of standardization. As a result, the devices and applications working at the edge of the network today form a set of autonomous Edge ecosystems.
Edge architecture brings computing resources closer to data and devices. Many market experts see it as a key paradigm beyond cloud computing. Some digital scenarios require extremely low latencies, and this is exactly where it performs better than cloud services. However, the sheer variety of interfaces and the lack of industry standards greatly slow progress, because they prevent devices and applications from interacting with each other.
2023: Global Edge Computing Market Size Reaches $15.96 Billion
In 2023, spending in the global edge computing market reached $15.96 billion. The industry is growing exponentially, according to a Fortune Business Insights study published in mid-October 2024.
Edge computing is the concept of placing computing power and storage at the site where data is produced. Edge computing systems can significantly improve application performance, reduce bandwidth requirements, and extract real-time analytics quickly. This model enables organizations to improve security and productivity, automate processes, and optimize user and customer interaction.
Fortune Business Insights analysts highlight several key factors contributing to the rapid growth of the edge computing industry. One of them is the growing use of all kinds of data-generating devices: Internet of Things (IoT) equipment, smart cameras, industrial PCs, medical sensors, production systems, etc. According to industry experts, 75% of information will be generated outside of centralized data centers by 2025. New technologies such as Industry 4.0, artificial intelligence, and IoT will drive demand for edge computing. In addition, the need for edge computing platforms is increasing amid the expansion of 5G infrastructure and the emergence of qualitatively new applications related to virtual, augmented, and mixed reality, as well as the metaverse.
The authors of the study cite high initial investment as the main deterrent: the deployment and maintenance of peripheral infrastructure can significantly increase the capital costs of companies. In addition, there are certain difficulties associated with maintaining the required level of protection. Ensuring the security of the entire computing network leads to huge costs for suppliers, thereby restraining the expansion of the market.
The list of key industry players includes:
- IBM;
- Intel;
- Amazon;
- Google;
- Microsoft;
- Adlink;
- Hewlett Packard Enterprise;
- Cisco;
- Huawei;
- EdgeConneX.
By application, the edge computing platform market is divided into IoT applications, robotics and automation, predictive maintenance, remote monitoring, smart cities, and others. In 2023, the first of these segments accounted for 28.1% of all spending. Another 21.3% of revenue came from predictive maintenance services. Geographically, North America generated the most revenue in 2023, approximately $5.16 billion. This is due to the high concentration of large players, including IBM, Intel, Microsoft, etc. These corporations strategically expand their geographic footprint and customer base by acquiring small local companies. At the same time, the Asia-Pacific region is showing high growth rates, driven by the growing adoption of edge solutions in countries such as India and China.
At the end of 2024, revenue in the global edge computing market is estimated at $21.41 billion. Fortune Business Insights analysts expect a compound annual growth rate (CAGR) of 33.6% going forward. As a result, global spending will reach $216.76 billion by 2032.[4]
2022: $84 billion invested in cloud and edge computing projects worldwide
In 2022, investments in cloud and edge computing platforms worldwide reached $84 billion. However, the pace of migration of companies and government organizations to the cloud slowed compared to the previous year. This was announced on July 20, 2023 by the international consulting firm McKinsey. Read more here.
2021: Integration of the Internet of Things (IoT) and the Edge
Initially, organizations simply developed strategies for deploying and managing the Internet of Things at the edge. But the edge is now everywhere. As more companies adopt edge computing, they also face new challenges in IoT[5][6].
Here are five areas that could create some confusion, as well as recommendations for IT teams on what they can do now to be better prepared.
1. Integration of the Internet of Things and the Edge
There are several levels of integration where the Internet of Things and edge technologies pose challenges. The first level is the integration of IoT and Edge with basic IT systems deployed in production, finance, engineering, and other areas. Many of these base systems are obsolete. If they lack APIs for Internet of Things (IoT) integration, batch ETL software may be required to load data into them.
The second challenge is the IoT itself. Many IoT devices are built by independent vendors using proprietary operating systems of their own. This makes it difficult to "mix and match" different IoT devices in a single Edge architecture. Governments are now moving to introduce uniform security and compliance standards for IoT providers that want to do business with the state, which should encourage IoT providers to standardize. The next step is likely to be greater standardization of IoT operating systems and communication protocols, making integration easier.
The area of IoT security and compliance is still evolving, but it will improve in the next few years. Meanwhile, IT professionals in organizations can now ask potential IoT providers about what is already available for heterogeneous integration and whether they plan to ensure compatibility in future products.
With regard to integrating legacy systems, ETL is one integration path that can help when APIs for those systems are not available. Another alternative is to write an API, but that takes a long time. The good news is that most legacy system vendors are aware of the coming wave of the Internet of Things and are developing their own APIs if they have not done so already. IT departments should contact their major system providers to find out their plans for IoT integration.
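The batch-ETL fallback described above can be sketched as follows. This is a hypothetical minimal example (the `telemetry` table and field names are invented); SQLite stands in for a legacy system that accepts plain SQL but has no IoT API.

```python
# Minimal batch-ETL sketch: extract rows dumped by an IoT gateway, transform
# them to the legacy schema, and load them with plain SQL -- no API on the
# legacy side required.
import csv, io, sqlite3

# Extract: a CSV dump from a (hypothetical) IoT gateway
dump = io.StringIO("sensor_id,reading,ts\npump-1,7.2,2021-03-01\npump-2,6.8,2021-03-01\n")
rows = list(csv.DictReader(dump))

# Transform: rename and cast fields to match the legacy schema
records = [(r["sensor_id"], float(r["reading"]), r["ts"]) for r in rows]

# Load into the legacy store (SQLite stands in for the real system)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE telemetry (asset TEXT, value REAL, day TEXT)")
db.executemany("INSERT INTO telemetry VALUES (?, ?, ?)", records)
print(db.execute("SELECT COUNT(*) FROM telemetry").fetchone()[0])  # 2 rows loaded
```

In practice such a job would run on a schedule (nightly or hourly), which is exactly the batch pattern the text describes for legacy systems without IoT APIs.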
2. Security
New cybersecurity laws should make IoT security compliance easier. However, this will not solve the problem that monitoring the edge Internet of Things and its security will most likely fall to end users, who are inexperienced in security administration.
It is important to educate end users' non-specialist IT staff on the basics of protecting the edge Internet of Things. This training should cover IoT cybersecurity as well as physically securing IoT equipment in locked, enclosed areas where possible.
Access is another important security issue. It is recommended that only authorized, security-trained employees be admitted to IoT security zones.
3. Support
Supporting IoT hardware and networks means dealing with device failures, security, software updates, and new hardware additions on a daily basis.
End users can track the parts of the edge IoT that are directly related to their operations, while it makes sense for IT to take over overall service and support, since both are core areas of IT expertise.
First of all, IT needs to stay aware of what is being deployed. With the growth of shadow IT infrastructure, end users turn directly to vendors to purchase and install IoT for their operations. To discover these new additions, IT professionals can use enterprise network and asset discovery software that identifies any new additions or changes to the Internet of Things. IT professionals, senior managers, and end users can then agree on the types of local support and on when IT professionals should take over these functions.
4. Survivability
When the Internet of Things is deployed in dangerous or hard-to-reach areas, it is important to use solutions that can be self-sufficient and require minimal maintenance for long periods of time.
These requirements apply to IoT deployments that operate in extreme heat or cold or in otherwise harsh conditions. Many off-the-shelf IoT devices may not meet them.
It may also be important to find IoT solutions that can sustain themselves for long periods without replacement or ongoing maintenance. It is not uncommon for IoT devices to have a life cycle of 10 to 20 years. Maintenance visits for such devices and sensors can be made less frequent if they can be powered by solar energy (and are therefore less dependent on batteries), or if they are woken from "sleep" mode only when motion or other monitored events are detected.
To minimize maintenance and testing in the field, you must include survivability as one of the requirements for peripheral IoT solutions.
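The survivability point above can be illustrated with a back-of-envelope battery-life estimate for an always-on device versus an event-driven one. All figures (2,000 mAh capacity, 10 mA active draw, 0.01 mA sleep draw) are assumed for the example, not taken from the article.

```python
# Rough battery-life model: an event-driven device spends most of the day in
# deep sleep, which stretches its service interval dramatically.

def battery_life_days(capacity_mah, active_ma, sleep_ma, active_hours_per_day):
    """Days until a battery of `capacity_mah` is drained at the given duty cycle."""
    sleep_hours = 24 - active_hours_per_day
    daily_mah = active_ma * active_hours_per_day + sleep_ma * sleep_hours
    return capacity_mah / daily_mah

# Always-on sensor vs. one woken only ~30 minutes a day by motion events:
always_on = battery_life_days(2000, active_ma=10, sleep_ma=0.01, active_hours_per_day=24)
event_driven = battery_life_days(2000, active_ma=10, sleep_ma=0.01, active_hours_per_day=0.5)
print(round(always_on, 1), round(event_driven, 1))  # ~8.3 days vs ~382 days
```

Under these assumed numbers, event-driven wake-up turns a battery swap every week into roughly one a year, which is the point of designing survivability in from the start.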
5. Throughput
Most IoT devices are bandwidth-efficient, but as more devices and sensors are deployed and more data is collected and transferred, bandwidth availability can become a serious (and costly) issue that can compromise network performance and the ability to handle real-time data.
Many organizations prefer to deploy distributed IoT systems at the edge, where they can use local communication channels. Payloads can be routed at the end of the day, or periodically throughout the day, to more centralized data collection points across the enterprise, whether those points are on-premises or in a cloud environment. This distributed computing approach minimizes the use of long-distance bandwidth and also makes it possible to schedule data transmission at a less busy (and less expensive) time of day.
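The off-peak scheduling idea above can be sketched like this; the 01:00-05:00 low-tariff window and the 10 MB buffer cap are assumptions for illustration.

```python
# Sketch: buffer edge payloads locally during the day and flush them to the
# central collection point only inside a cheap, off-peak transmission window
# (or early, if the local buffer is about to overflow).
from datetime import time

OFF_PEAK = (time(1, 0), time(5, 0))  # hypothetical low-tariff window

def should_flush(now, buffer_bytes, max_buffer=10_000_000):
    """Flush during the off-peak window, or early if the local buffer is full."""
    in_window = OFF_PEAK[0] <= now <= OFF_PEAK[1]
    return in_window or buffer_bytes >= max_buffer

print(should_flush(time(3, 0), 1_000))        # True  (off-peak window)
print(should_flush(time(14, 0), 1_000))       # False (peak hours, small buffer)
print(should_flush(time(14, 0), 20_000_000))  # True  (buffer overflow guard)
```

A real deployment would also handle retries and partial uploads, but the core decision, "hold locally, ship when bandwidth is cheap", is just this predicate.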
Another element to consider is which IoT data you really need to collect. Data architects and the business side should be involved in answering this question. By identifying data that the business does not need and eliminating it, you can reduce data workloads and save on bandwidth, processing, and storage.
2020: Investment in multi-access edge computing (MEC) will triple by 2025
Juniper Research experts predict that by the end of 2020, telecom operators will spend $2.7 billion on multi-access edge computing (MEC) technologies, and by 2025 the amount will grow to $8.3 billion.
The number of MEC nodes used in the world (i.e. access points, base stations, routers) should reach 2 million by 2025; this year their number is about 230 thousand. The equipment will make it possible to effectively manage the data arrays generated by smart city systems, transport, and other services.
Experts suggest that the growing number of MEC nodes will raise the quality of services such as music streaming, digital TV, and cloud gaming. More than 920 million users will feel the benefits of MEC deployment in 2025.
2019
Forrester Research: 2020 will be the breakout year for edge computing
In early November 2019, the analyst firm Forrester Research published a study stating that 2020 will be the breakout year for edge computing.
Although this phenomenon is primarily associated with the development of the Internet of Things, experts argue that the need for fast, on-demand computing and real-time applications will also actively stimulate the growth of edge computing.
Ultimately, such intensive development of the edge will mean that in 2020 traditional servers will cease to play such a large role. A self-driving car, for example, will no longer be able to rely on them and will need an alternative. As a result, telecommunications companies will play a more important role in the cloud and distributed computing markets.
Analysts at Forrester believe that large telecommunications companies, especially those that, for one reason or another, were late in entering the cloud market, will soon begin to actively acquire content delivery network (CDN) operators in order to catch up in edge computing. In addition, carriers will invest in open-source projects such as Akraino, an edge computing software stack.
However, telecommunications companies will most influence the development of peripheral computing in 2020 thanks to the spread of 5G networks, analysts at Forrester say. Despite the fact that at first such networks will be available only in large cities, this is enough for companies to reconsider their attitude to peripheral computing.
If companies take an interest in this area, they will undoubtedly be attracted by capabilities such as intelligent real-time video processing, 3D mapping to improve productivity, and special scenarios for autonomous control of robots or drones. CDN vendors such as Ericsson, Fastly, Limelight, and Akamai have launched edge computing solutions or are preparing to do so in the near future, the November 2019 report says.
While most businesses still view CDN as a solution for caching content in their web and mobile applications, network capabilities can be applied for much broader purposes.
In addition to telecommunications companies, many computer technology players are interested in edge computing. Businesses now urgently need to interact with customers in real time, regardless of where they are located, driven by vendors' desire to maintain consumer loyalty.
Therefore, software manufacturers in all areas, from medicine to utilities to heavy industry, will need customizable peripherals to provide communication and monitoring, remote patient care, or remote maintenance. In addition, large cloud service providers will seek to consolidate their market positions, and AI startups will try to add new functionality to their applications.
According to experts, the most popular solutions on the market will be created by several manufacturers, since few vendors have their own products that are designed for all areas of IoT and peripheral computing. Therefore, in 2020, integrators capable of combining the delivery of products and services of many different suppliers into a common system will be in particular demand.[7]
Linux Foundation: Peripheral computing will become more important than cloud computing
Speaking at the Open Networking Summit in Belgium in September 2019, Linux Foundation networking lead Arpit Joshipura said that edge computing will become more important than cloud computing by 2025.
By edge computing he meant compute and storage resources located close enough to each other that information can be transferred within 5-20 milliseconds.
According to Arpit Joshipura, edge computing can become an open environment that interacts seamlessly with others. It must be independent of hardware, silicon, cloud, or operating system.
Open edge computing should also work with any related projects in which it is used: the Internet of Things, telecommunications, cloud, or enterprise solutions.
"Our goal is to combine all this," Joshipura said, noting that this work is already underway as part of the LF Edge project.
The partners developing LF Edge are creating a set of software tools to unite the fragmented edge computing market around a common open concept that will form the basis of the future market.
Tom Arthur, co-founder and CEO of Dianomic Systems (which is involved in the development of LF Edge), believes that an open, interoperable platform is needed for edge computing, especially in industrial enterprises, factories, and extractive industries, where "almost every field system, piece of equipment or sensor uses its own proprietary protocols and data definitions."
According to the Linux Foundation, the main catalysts for the growth of demand for edge computing are video content delivery systems, gaming, 5G networks, self-driving cars, and virtual and augmented reality technologies.[8]
Transworld Data: Edge Computing Requires Redesign of Disaster Recovery Plans
Since information systems and applications are scattered across enterprises and clouds, IT managers have to revise their disaster recovery plans, Mary Shacklett, president of consulting firm Transworld Data, writes on InformationWeek[9].
For many years, developing disaster recovery (DR) plans has been IT's responsibility. But these plans must now be redesigned to account for edge and cloud failures. What's new, and how are organizations reconsidering their plans?
1. IT does not control the edge
Given the proliferation of edge and other distributed computing, IT can no longer use standard DR plans developed for data centers. For example, if robots and automation are used in production, they are managed by workers and line managers, who must also make sure these assets are in a safe place when not in use. Often they install, monitor, and maintain the assets themselves or contact the manufacturers.
Such employees have no experience in securing or protecting assets and maintaining/monitoring them. At the same time, the emergence of new peripheral networks and solutions without the participation of IT departments multiplies the number of assets that can fail. To cover these assets, DR and failover plans must be documented and personnel trained to act on these plans. The most logical way to do this is within the IT department's DR and business continuity plan.
When reviewing the plan, IT professionals should work with those who use various types of peripheral computing. It is important to involve each of them in documenting the relevant DR and failover plan and to test that plan regularly.
2. Cloud applications are an extra burden
In 2018, RightScale surveyed almost 1,000 IT professionals and found that, on average, each company uses 4.8 clouds.
It would be interesting to know how many people at these companies have documented DR procedures in case of a cloud failure. A survey of cloud providers showed that almost all of them include a clause in their contracts exempting them from liability in the event of a disaster.
Hence the conclusion: every cloud provider whose services you use should be covered by your DR plan. What are the terms of the SLAs for backup and recovery? Do you (or your provider) have a cloud failure plan? Have you entered into an agreement with the provider to test, annually, the failover of the applications you use in the cloud in a DR scenario?
3. The Importance of Physical Security
The more IT gravitates to the edge, making its way into production sites and branch offices, the more closely physical security is intertwined with DR. What happens if a server fails due to overheating in a remote office? Or if an unauthorized employee enters a fenced-off zone on the shop floor and damages a robot? The DR plan should provide for regular inspection and testing of equipment and facilities at remote locations, not just in the main data center.
4. In the event of a disaster, it is necessary to maintain a stable exchange of information
Several years ago, a bank's data center suffered little damage in an earthquake, but networks were destroyed throughout the disaster zone. Cashiers in branches had to record customer transactions manually in order to enter them into the system after the consequences of the earthquake had been dealt with.
One of the customers asked a cashier what had happened. The cashier replied: "All our computers are out of order." This information spread like wildfire among customers and was picked up by the media. As a result, many customers rushed to close their accounts.
Such situations are greatly exacerbated when IT assets are managed by many people, as is the case with edge computing. That is why it is so important to create a "tree" of information exchange showing who communicates what, and to whom, in the event of a disaster. This procedure must be strictly followed by all employees.
In a normal case, the company's "voice" is its PR division, which coordinates with the company's management and makes statements about the disaster for the community and the media. If such an information channel is unreliable and if employees do not know about its existence, more time can be spent on eliminating the consequences of incorrect information than the disaster itself.
5. The DR plan must cover multiple geographic locations
Given the growing proliferation of edge computing and remote offices, it goes without saying that the DR plan can no longer only address a single location or data center. Especially if you are using clouds for disaster recovery, you should choose cloud providers that have multiple data centers in different regions. This will allow failover to a healthy geographic location in the event of a failure of the primary data center or cloud storage. This scenario should be included in the DR plan and tested.
6. DR Plan Testing Methodology Should Be Revised
If you plan to move more features to the cloud and deploy edge computing more widely, consider additional DR plan testing scenarios and ensure that documentation and testing are available for clouds and peripherals. You need confidence that the DR plan will work in any scenario if you have to put it into effect.
7. Management should support DR plan in more than just words
Clouds and edge computing make disaster recovery more difficult. Therefore, most organizations need to analyze and revise their DR plans. This takes time, for a problem that in most organizations is not a priority and sits far from first place in the long list of priority projects.
Because of the changes in IT caused by the advent of clouds and edge computing, CIOs must explain to management and the board how these changes affect the DR plan and persuade them to spend their energy and time revising it.
8. Edge IT providers and cloud providers should be involved in the implementation of the DR plan
As mentioned, most cloud providers do not provide DR and failover guarantees in contracts. Therefore, before the contract is signed, the provider's obligations regarding DR should be included in the request for proposals (RFP) and become an important item for discussion.
9. Network redundancy is of great importance
In the event of a disaster, many organizations focus on restoring systems and data, paying less attention to networks. However, given the current role of the Internet and global networks, disaster failover and network redundancy should also be envisioned by the DR plan.
Juniper Research: Edge computing is becoming a reality
From cars to industrial equipment, data today arrives at an astonishing rate, affecting more and more areas. In fact, 90% of the world's data has been generated in the last two years. According to Techjury estimates, by the end of this year the accumulated data volume will be about 40 trillion gigabytes, with a generation rate of 1.7 MB of data per person per second. As Juniper Research calculated, by 2022 the number of connected devices will grow by 140%, reaching 50 billion. They will generate an unprecedented amount of information. In 2017, the Economist said that data has become the most valuable resource, replacing oil, but so far its value has not been fully unlocked: only a few percent of the total amount of data is actually exploited[10].
However, by 2025 the situation will change. Strategy Analytics predicts that 59% of the data produced by Internet of Things (IoT) devices will be processed using edge computing technology. The rationale for its use is as follows: it reduces network latency and lowers the cost of transporting data. Both factors point to the need to process data in real time and from anywhere. As the modern world becomes digital and connected, and applications become far more demanding of computing resources, the need for edge computing becomes vital.
All this raises the question: how can edge computing be made a reality? The answer lies in the large-scale deployment of a new type of infrastructure, one of whose main elements is the placement of physical computing components at the edge of the network. Their geographic location and density will depend on the application, but it is unlikely that most enterprises will dare to shoulder these costs. Most likely, they will count on large service providers to build out the edge infrastructure as part of their cloud and edge strategies.
One such example is the AT&T CORD (Central Office Redesigned as Datacenter) initiative, which formed the basis for the evolution of the company's architecture. In addition to AT&T, other large cloud players have taken up infrastructure restructuring, which have supplemented their huge centralized data centers with peripheral data centers. It is possible that the concept of micro-data centers that unfold at base stations of cellular communications will soon become a reality.
Micro-data centers make it possible to deploy multi-layered architectures in which the most bandwidth- and latency-sensitive applications run on small computing units. The latter can sit in close proximity to the device, transferring data from it to caches or data stores located in edge data centers.
Deploying edge infrastructure allows for two scenarios. In the first, the data that devices generate is processed locally, without being sent to the corporate cloud or a remote data center. This method can be especially useful in regions where personal data protection legislation prohibits the cross-border movement of confidential user information. Alternatively, bandwidth-sensitive data is processed at the edge of the network, with asynchronous transmission of the results to remote systems.
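The second scenario (process at the edge, asynchronously forward only the results) might look like the sketch below; the summary format and names are assumptions, and a plain list stands in for the asynchronous uplink to the remote system.

```python
# Sketch: reduce a raw data stream at the edge to a compact summary, and queue
# only that summary for asynchronous transmission upstream. The raw samples
# never leave the edge (useful where privacy rules forbid moving them).
from statistics import mean

def process_at_edge(raw_stream):
    """Reduce the raw stream to a compact summary; only this leaves the edge."""
    return {"count": len(raw_stream), "mean": round(mean(raw_stream), 2)}

outbox = []  # stands in for an async uplink queue to the remote system

raw = [71, 70, 73, 90, 72]  # e.g. heart-rate samples kept local for privacy
outbox.append(process_at_edge(raw))
print(outbox)  # [{'count': 5, 'mean': 75.2}]
```

Five samples collapse to one small record; the remote system receives the insight while the sensitive raw data stays at its source.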
Data at scale
Low latency and large-scale data processing are the two key ingredients for implementing edge computing as a technology. Despite the current challenges, industry leaders need to rally their efforts: standardize how data is used and processed by multiple service providers, eliminate the delays of long-distance traffic, lay the basis for a new generation of strategic applications, and adapt to an evolving IT infrastructure. Before putting edge computing at the service of a business, a technique must be developed for scaling streaming data that comes from many types of devices and processing it in tiered caches or data stores.
It should be noted that existing technologies - both hardware and software - allow this. On top of that, companies are starting to get creative for prototyping edge applications, trying to capitalize on the impending wave of edge computing. Pioneering companies that demonstrate the viability of the technology will launch a chain reaction of mass implementations, demonstrating that the cloud has a competitor.
While the cloud still dominates, the IT market is at a tipping point - the emergence of a new generation of infrastructure that will bring profound change to all industries.
Nvidia unveils first AI platform for edge computing
On May 27, 2019, Nvidia unveiled its first AI platform for edge computing, the Nvidia EGX. It can recognize, understand, and process data in real time without first sending it to the cloud or a data center. Read more here.
Notes
- ↑ Advantages of edge computing
- ↑ [https://www.itweek.ru/its/article/detail.php?ID=212193 Edge vs. Cloud: what's the difference?]
- ↑ [https://www.itweek.ru/its/article/detail.php?ID=212193 Edge vs. Cloud: what's the difference?]
- ↑ Edge Computing Market Size, Share & Industry Analysis
- ↑ [https://www.itweek.ru/iot/article/detail.php?ID=216480 Internet of Things and edge computing: integration challenges]
- ↑ [https://www.itweek.ru/iot/article/detail.php?ID=216480 Internet of Things and edge computing: integration challenges]
- ↑ Forrester: Edge computing is about to bloom
- ↑ Linux Foundation exec believes edge computing will be more important than cloud computing
- ↑ Edge computing requires reworking disaster recovery plans
- ↑ Edge computing is preparing to squeeze out cloud services