
Edge computing

Edge computing is a principle for building a hierarchical IT infrastructure in which computing resources are partially moved from the core (the central data center) out to the periphery, in close proximity to where primary "raw" data are created, so they can be preprocessed before being passed to a higher-level computing node. Data collection and analysis thus take place not in a centralized computing environment (a data center) but where the data streams are actually generated.


How edge computing works

Diagram of an edge computing implementation, 2019

The industrial automation market leads in adopting a number of new technologies, including augmented reality (AR), 3D printing, robotics, artificial intelligence (AI), cloud-based supervisory control and data acquisition (SCADA) systems, and programmable automation controllers (PAC). When it comes to automation technologies, the Internet of Things already connects different points to intelligent sensors, from the production floor to the logistics chain and the heart of the enterprise. The Industrial Internet of Things (Industrial IoT) provides information for maintenance, inventory, and product transportation.

However, simply building out network connectivity and organizing data streams is not enough to truly exploit the potential of digital transformation. To gain competitive advantages, manufacturing enterprises need to integrate industrial automation fully. Only then will they be able to turn the data collected in the IoT environment into valuable analytical information that enables faster, more accurate, and more economical decision-making. And for that they need to move computing power to the network edge, its periphery[1].

Advantages, from the edge to the data center

Edge computing is a kind of "rocket fuel" for IoT. It offers a number of advantages and opportunities:

  • edge computing makes it possible to analyze and filter data closer to the sensors, so only relevant data are sent to the cloud;
  • latency in a production process can be critical, for example when a production line fails. A short response time, measured in milliseconds, is vital for the safety of responsible and precise operations; in such cases, waiting for a result from a cloud IoT platform takes too long;
  • edge computing means that, when necessary, confidential data can be processed on site, where they are protected from direct network connections. This provides a higher level of control over the security and confidentiality of information;
  • finally, requirements for cloud storage capacity and network bandwidth decrease, and the corresponding costs are reduced, because large volumes of sensor data can be processed directly at the edge instead of being sent to the cloud.
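The first and last points can be sketched in a few lines: filter at the edge, forward only what matters. The threshold values, the reading format, and the send_to_cloud() stub below are illustrative assumptions, not a real IoT API.

```python
# Minimal sketch of edge-side filtering: only readings outside the normal
# band are forwarded upstream. Thresholds and the uplink are assumptions.

def filter_readings(readings, low=10.0, high=90.0):
    """Keep only out-of-band readings: the 'relevant' data worth sending."""
    return [r for r in readings if r < low or r > high]

def send_to_cloud(batch):
    # Stand-in for a real uplink (MQTT, HTTPS, etc.).
    print(f"uploading {len(batch)} readings of interest")

raw = [42.0, 95.5, 50.1, 3.2, 60.0]   # e.g. temperature samples at the edge
relevant = filter_readings(raw)
send_to_cloud(relevant)               # only 2 of the 5 samples leave the edge
```

The same pattern scales up to aggregation or anomaly detection at the edge; the point is that the cloud receives a summary, not the raw stream.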

The edge computing architecture has become a center around which many computing tasks are concentrated. Its advantages include minimal cross-network latency in data processing and the ability to work with large volumes of data, but it also has weaknesses: insufficient interoperability of the existing stack and a lack of standardization. As a result, the devices and applications running at the network edge today form a collection of autonomous edge ecosystems.

The edge architecture brings computing resources closer to data and devices. Many market experts regard it as a key paradigm beyond cloud computing. Some digital scenarios require extremely low latency, and that is exactly where edge outperforms cloud services. However, the existing variety of interfaces and the lack of industry standards strongly slow progress, because they deprive devices and applications of the ability to interact with each other.


Forrester Research: 2020 will be the breakthrough year for edge computing

In early November 2019, the analyst firm Forrester Research published a study stating that 2020 would be the breakthrough year for edge computing.

Although this phenomenon is associated primarily with the development of the Internet of Things, the analysts argue that the need for fast on-demand computing and real-time applications will also actively stimulate the growth of edge computing.

Forrester Research published a study stating that 2020 would be the breakthrough year for edge computing

Ultimately, such intensive development of the edge will mean that in 2020 traditional servers cease to play so large a role. An unmanned vehicle, for example, will no longer be able to rely on them and will need an alternative. As a result, telecommunications companies will begin to play a more important role in the cloud and distributed computing markets.

Forrester analysts believe that large telecommunications companies, especially those that for one reason or another were late to enter the cloud market, will soon begin actively acquiring content delivery network operators to make up for lost time through edge computing. In addition, telecom operators will invest in open-source projects such as Akraino, a software stack for edge computing.

Most of all, however, telecommunications companies will influence the development of edge computing in 2020 through the rollout of 5G networks, Forrester analysts argue. Even though such networks will at first be available only in large cities, that will be enough to make companies reconsider their attitude toward edge computing.

Development of edge computing in 2020 driven by the rollout of 5G networks

If companies take an interest in this area, they will undoubtedly be attracted by opportunities such as intelligent real-time video processing, 3D mapping for improving work efficiency, and special scenarios for offline control of robots or drones. According to the November 2019 report, CDN vendors such as Ericsson, Fastly, Limelight, and Akamai have launched, or plan to launch in the near future, solutions based on edge computing.

Although most enterprises still regard CDNs as a solution for caching content in web and mobile applications, the network's capabilities can be applied to much broader purposes.

Beyond telecommunications companies, a great number of players in the computing field are interested in edge computing. Commercial organizations have recently developed a pressing need to interact with customers in real time, wherever those customers are, driven by vendors' desire to maintain consumer loyalty.

Software makers in all areas, from medicine to municipal services and heavy industry, will therefore need configured edge devices for communication and control, remote treatment of patients, or remote maintenance. In addition, large cloud service providers will seek to consolidate their market positions, and AI startups will try to add new functionality to their applications.

According to specialists' forecasts, solutions assembled from several producers will be the most popular on the market, since few vendors have products of their own that cover all areas of IoT and edge computing. In 2020, therefore, integrators capable of combining the products and services of many different suppliers into a common system will be in particular demand.[2]

Linux Foundation: edge computing will become more important than the cloud

Speaking at the Open Networking Summit conference in Belgium in September 2019, Arpit Joshipura, head of networking projects at the Linux Foundation, said that edge computing would become more important than the cloud by 2025.

By edge computing he meant computing and data storage resources located close enough to each other that data can be transferred between them within 5-20 milliseconds.

The Linux Foundation said that edge computing would become more important than the cloud by 2025

According to Joshipura, edge computing can become an open environment capable of interacting seamlessly with other systems. It should be independent of hardware, silicon, cloud, or operating system.

Open edge computing should also work with any adjacent projects in which it is used: the Internet of Things, telecommunications, cloud, or enterprise solutions.

"Our goal is to unify all of this," Joshipura said, noting that this work is already under way within the LF Edge project.

The partners developing LF Edge are creating a set of software tools to unite the fragmented edge computing market around a common open concept that will form the basis of the market of the future.

Tom Arthur, co-founder and CEO of Dianomic Systems (a participant in LF Edge development), believes that edge computing needs an open, interoperable platform, especially in industrial enterprises, plants, and the mining industry, where "almost every field system, piece of equipment, or sensor uses its own proprietary protocols and data definitions".

The Linux Foundation sees the main catalysts of growing demand for edge computing in video content delivery systems, gaming, 5G networks, unmanned vehicles, and virtual and augmented reality technologies.[3]

Transworld Data: edge computing requires reworking disaster recovery plans

As information systems and applications spread across enterprises and clouds, IT leaders should review their disaster recovery plans, writes Mary Shacklett, president of the consulting firm Transworld Data, on the InformationWeek portal[4].

For many years, drawing up disaster recovery (DR) plans was the responsibility of IT departments. Now, however, those plans must be reworked to account for possible failures of edge and cloud environments. What has changed, and how are organizations revising their plans?

1. IT departments do not control the edge

Given the spread of edge and other distributed computing, IT departments can no longer rely on the standard DR plans developed for data centers. For example, if robots and automation are used in production, they are managed by workers and line managers. Those people must make sure these assets are kept in a safe place when not in use. Often they install, monitor, and service the equipment themselves, or deal directly with the manufacturers.

Such employees have no experience in security, asset protection, or asset servicing and monitoring. At the same time, the emergence of new edge networks and solutions without the participation of IT departments multiplies the number of assets that can fail. To cover these assets, DR and failover plans must be documented, and personnel must be trained to work according to them. The most logical place to do this is within the DR and business continuity plan the IT department already maintains.

When revising the plan, IT specialists should cooperate with everyone who uses different types of edge computing. It is important to involve each of them in documenting the corresponding DR and failover plan and to test that plan regularly.

2. Cloud applications are an additional burden

In 2018, RightScale polled nearly 1,000 IT specialists and found that, on average, each company uses 4.8 clouds.

It would be interesting to know how many people in these companies have documented DR procedures for a cloud failure. A review of cloud providers showed that almost all of them include in their contracts a clause exempting them from liability in the event of a disaster.

The conclusion: every cloud provider whose services you use must be covered in the DR plan. What are the terms of the service level agreements concerning backup and data recovery? Do you (or your provider) have a plan for a cloud outage? Have you and your provider signed an agreement on annual failover testing of the applications you run in the cloud?

3. The importance of physical security

The more IT gravitates toward the edge, making its way onto the factory floor or into branch offices, the more closely physical security intertwines with DR. What happens if a server at a remote office fails from overheating? Or if an employee without the proper authorization enters a fenced-off zone in the workshop and damages a robot? The DR plan should provide for regular inspection and testing of equipment and facilities at remote sites, not just in the main data center.

4. In a disaster, stable communication must be maintained

A few years ago, an earthquake did little damage to one bank's data center, but networks throughout the disaster zone were destroyed. Tellers in the branches had to record customer transactions manually in registers so they could be entered into the system after the earthquake's effects were dealt with.

One customer asked a teller what had happened, and got the answer: "All our computers failed." The story spread among customers like wildfire and was picked up by the media. As a result, many customers rushed to close their accounts.

Situations like this become considerably worse when many people manage IT assets, as is the case with edge computing. That is why it is so important to create a communication "tree" showing who tells what to whom in the event of a disaster. All employees must follow this order strictly.
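In practice, such a communication "tree" is just a tree walk: each person notifies their direct contacts, level by level. A minimal sketch, with purely illustrative names and structure (nothing here comes from the source):

```python
# Hypothetical call tree: who notifies whom in a disaster.
from collections import deque

CALL_TREE = {
    "CIO": ["PR lead", "Ops manager"],
    "PR lead": ["Media contact"],
    "Ops manager": ["Edge site A", "Edge site B"],
}

def notification_order(root):
    """Breadth-first walk of the tree: the order in which people are told."""
    order, queue = [], deque([root])
    while queue:
        person = queue.popleft()
        order.append(person)
        queue.extend(CALL_TREE.get(person, []))
    return order

print(notification_order("CIO"))
# ['CIO', 'PR lead', 'Ops manager', 'Media contact', 'Edge site A', 'Edge site B']
```

Documenting the tree as data, rather than as prose in a binder, also makes it trivial to test that no employee is missing from the chain.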

Normally, the company's "voice" is its PR division, which coordinates its actions with management and makes statements about the disaster to the community and the media. If that channel is unreliable, or employees do not know it exists, more time may be spent undoing the effects of misinformation than recovering from the disaster itself.

5. The DR plan should cover different geographic locations

Given the growing spread of edge computing and remote offices, it goes without saying that a DR plan can no longer concern only one location or data center. Especially if you use clouds for disaster recovery, you should select cloud providers with multiple data centers in different regions. This allows failover to a working geographic location if the main data center or cloud storage fails. Such a scenario should be included in the DR plan and tested.

6. The DR plan's testing methodology should be reviewed

If you intend to move more functions to clouds and deploy edge computing more widely, you need to add test scenarios to the DR plan and make sure that documentation exists and testing is performed for both the clouds and the edge. You need confidence that the DR plan will work under any scenario if it has to be put into action.

7. Management should support the DR plan in more than words

Clouds and edge computing complicate disaster recovery, so most organizations should analyze and revise their DR plans. That takes time, for a task that in most organizations is no longer a top priority and does not sit first on the long list of priority projects.

Because of the changes in IT brought about by clouds and edge computing, CIOs should explain to management and the board how these changes affect the DR plan, and convince them to invest the effort and time in revising it.

8. Edge IT suppliers and cloud providers must be involved in implementing the DR plan

As already mentioned, most cloud providers do not provide DR and failover guarantees in their contracts. Therefore, before a contract is signed, the provider's DR obligations should be included in requests for proposals (RFPs) and become an important point of discussion.

9. Network redundancy is of huge importance

In a disaster, many organizations focus on recovering systems and data while paying less attention to networks. Given today's role of the Internet and global networks, however, failover and network redundancy should also be provided for in the DR plan.

Juniper Research: edge computing becomes a reality

From cars to industrial equipment, data today arise at a surprising speed, touching more and more areas. In fact, 90% of the world's data were created in the last two years. Techjury estimates that by the end of this year the accumulated volume of data will reach about 40 trillion gigabytes, with a generation rate of 1.7 MB per person per second. According to Juniper Research, by 2022 the number of connected devices will grow by 140%, reaching 50 billion. They will generate a volume of information unprecedented in human history. In 2017, the Economist wrote that data had become the most valuable resource, replacing oil; so far, however, that value has not really been unlocked: only a few percent of the total volume of data is used[5].

By 2025, however, the situation will change. Strategy Analytics forecasts that 59% of the data produced by Internet of Things (IoT) devices will be processed using edge computing. The justification for its use is as follows: it improves network capacity (reduces latency) and reduces the cost of transporting data. Both indicators point to the need to process data in real time and from any location. As the modern world becomes digital and connected, and applications become far more demanding of computing resources, the need for edge computing becomes vital.
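The transport-cost argument is easy to quantify with a back-of-the-envelope calculation. All figures below are illustrative assumptions, not data from the research:

```python
# Rough sketch of transport savings from edge-side aggregation.
# Sensor counts and rates are invented for illustration only.

SENSORS = 1_000            # devices on one hypothetical site
RAW_RATE_KB_S = 1.0        # raw stream per sensor, KB/s
SUMMARY_RATE_KB_S = 0.02   # aggregated summary per sensor after edge processing

def daily_upload_gb(rate_kb_s, sensors=SENSORS):
    """Convert a per-sensor KB/s rate into a site-wide GB/day upload volume."""
    return rate_kb_s * sensors * 86_400 / 1_000_000  # seconds/day, KB -> GB

raw = daily_upload_gb(RAW_RATE_KB_S)        # 86.4 GB/day of raw data
edge = daily_upload_gb(SUMMARY_RATE_KB_S)   # ~1.7 GB/day after aggregation
print(f"raw to cloud: {raw:.1f} GB/day; after edge: {edge:.2f} GB/day "
      f"({raw / edge:.0f}x less)")
```

Even with modest per-sensor rates, shipping summaries instead of raw streams cuts the volume crossing the WAN by an order of magnitude or more, which is exactly the cost lever the forecast describes.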

All this leads to a question: how do we turn edge computing into reality? The answer lies in large-scale deployment of a new type of infrastructure, whose basic elements include physical computing components placed at the network edge. Their geographic location and density will depend on the application, but it is unlikely that most enterprises will venture to take on these costs themselves. Most likely, they will count on large service providers to build out edge infrastructure as part of their cloud-and-edge strategies.

One example is AT&T's CORD (Central Office Re-architected as a Datacenter) initiative, which formed the basis of the evolution of the company's architecture. Besides AT&T, other large cloud players have also undertaken infrastructure reorganization, complementing their huge centralized data centers with edge data centers. The concept of micro data centers deployed at cellular base stations may soon become a reality.

Micro data centers open opportunities for deploying multi-tier architectures, in which the most latency-sensitive applications are served by small computing blocks. These can sit in close proximity to a device, passing its data into caches or data warehouses located in edge data centers.

Edge infrastructure deployment allows for two scenarios. In the first, the data generated by devices are processed locally, without being sent to a corporate cloud or a remote data center. This approach can be especially useful in regions where personal data protection legislation prohibits cross-border movement of confidential user information. In the second, latency-sensitive data are processed at the network edge, with the results transmitted asynchronously to remote systems.
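The second scenario can be sketched as a fast local processing step plus a background uploader that moves results upstream off the time-critical path. The function names and the in-memory "remote" sink below are assumptions for illustration, not a real edge framework API:

```python
# Sketch: process at the edge synchronously, ship results asynchronously.
import queue
import threading

results = queue.Queue()

def process_at_edge(sample):
    """Fast local step on the time-critical path (here: a trivial average)."""
    return {"device": sample["device"],
            "mean": sum(sample["values"]) / len(sample["values"])}

def uploader(sink):
    """Background thread draining results toward the remote system."""
    while True:
        item = results.get()
        if item is None:       # sentinel: shut down
            break
        sink.append(item)      # stand-in for an async send to a remote DC

remote = []                    # stand-in for the remote system
t = threading.Thread(target=uploader, args=(remote,))
t.start()

for sample in [{"device": "s1", "values": [1, 2, 3]},
               {"device": "s2", "values": [4, 6]}]:
    results.put(process_at_edge(sample))   # local, low-latency step

results.put(None)
t.join()
print(remote)   # [{'device': 's1', 'mean': 2.0}, {'device': 's2', 'mean': 5.0}]
```

The device never waits on the WAN: the queue decouples local processing latency from upstream transmission, which is the essence of the asynchronous-results scenario.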

Data at scale

Low latency and large-volume data processing are the two key components for implementing edge computing as a technology. Despite the current problems, industry leaders should join efforts to standardize how applications and data are handled by the numerous service providers. That would eliminate the delays of carrying traffic over long distances, lay the foundation for a new generation of strategic applications, and adapt to the evolving IT infrastructure. Before edge computing can be put at the service of business, equipment must be developed for scaling streaming data arriving from many types of devices, and for processing it in multi-tier caches or data warehouses.

It should be noted that existing technologies, both hardware and software, already allow this. On top of that, companies are starting to show creativity in prototyping edge applications, trying to profit from the approaching wave of edge computing. The pioneering companies that demonstrate the technology's viability will trigger a chain reaction of mass deployments, showing that the cloud now has a competitor.

Although the cloud still dominates, the IT market is approaching a turning point: the appearance of a new generation of infrastructure that will lead to profound changes across all industries.

Nvidia introduced the first AI platform for edge computing

On May 27, 2019, Nvidia introduced the first AI platform for edge computing, Nvidia EGX. It can recognize, understand, and process data in real time without first sending it to a cloud or data center. Read more here.