Understanding Infrastructure Edge Computing. Alex Marcham
This regionalisation of internet infrastructure, whereby key pieces of the network and the data centre move outwards from centralised locations to be deployed at a distributed, regional level, is no accident. As the number of users and the intensity of their individual usage of the network increased, it became urgent to minimise the length of the network path between the source and destination of traffic.
The Advanced Research Projects Agency Network (ARPANET), first established in 1969 [4], was the precursor to the modern internet. Although other projects existed across the world to develop technologies and standards around such transformative technologies as decentralised networks, packet switching, and resilient routing of data in transit to provide a network with the ability to withstand an attack on its infrastructure, the ARPANET was by far the most influential example.
Although considered a leading example of a decentralised network at its inception and through the 1970s and 1980s, by the 1990s the centralisation remaining in the architecture inherited from the ARPANET was being strained by the emergence of a large number of new internet users and applications. Further regionalisation of internet infrastructure was required to address these challenges, and perhaps the most influential method of achieving it was to position static content in caches placed strategically throughout the network, creating a shorter path between the source and destination of traffic.
2.4.3 CDNs and Early Examples
One of the best examples of network regionalisation solving a specific use case while also addressing the needs of network operators is the content delivery network (CDN) work done by Akamai Technologies in the late 1990s [5]. Although, compared to today, the internet and the world wide web it supports were still in their infancy, with both having gained mainstream acceptance only a few years previously, the need for regionalisation of key infrastructure was already beginning to show as the internet became known for distributing new multimedia content, such as images and early examples of hosted video, which began to strain its underlying networks. Left unaddressed, this strain would have limited the uptake of online services by both businesses and home users and ultimately prevented the adoption of the internet as the go‐to location for business, essential services, shopping, and entertainment.
The importance of CDNs, and of the practical proof point of the benefits of network regionalisation which they represent, cannot be overstated. By deploying a large number of distributed content caching nodes throughout the internet, CDNs have drastically reduced the level of centralised load placed on internet infrastructure at a regional, national, and global scale. Today, they are a fact of life for network operators; these static caches are widely deployed in many thousands of instances from a variety of providers such as CacheFly, Cloudflare, and Akamai, which reach agreements with network operators for their deployment and operation within both wired and wireless networks that provide last mile network connectivity. This regionalisation of static content, by moving the CDN nodes to locations closer to their end users, improves the user experience and saves network operators significant sums in the backhaul network capacity which would otherwise be needed to serve the demand for the content were it located farther away in an RNDC.
Where infrastructure edge computing diverges from the historical CDN deployment model is in its ability to support a range of use cases which rely on dense compute resources to operate, such as clusters of central processing units (CPUs), graphics processing units (GPUs), or other resources which enable infrastructure edge computing to provide services beyond the distribution of static content. Many CDN deployments do not require significant compute density. Nor are many of the existing telecommunications sites where they are deployed (such as shelters at the bases of cellular towers, cable headend locations, or central office locations), which were originally designed to support low‐density network switching equipment, capable of meeting the difficult cooling and power delivery requirements which these dense resources impose. Additionally, in many cases infrastructure edge computing deployments bring additional network infrastructure to provide optimal paths for data transit between last mile networks and edge data centre locations and between edge data centres and RNDCs; typical CDN nodes, in contrast, will usually be deployed atop existing network operator infrastructure at aggregation points such as cable network headends.
It is worth mentioning here, however, that infrastructure edge computing and the CDN are not at all mutually exclusive concepts. Just as a CDN can operate today from various locations across the network through the deployment of server infrastructure in locations such as cable network headends, it is also able to operate from an IEDC. One or multiple CDNs are then able to use infrastructure edge computing facilities as deployment locations for CDN nodes to replace or augment their existing deployments which use the current infrastructure of the network operator.
Although CDNs in many ways pioneered the deployment methodology of placing numerous content caches throughout the internet to shorten the path between the source and destination of traffic, it is important to understand the distinction between a deployment methodology and a use case. The CDN is a use case which needed a deployment methodology that achieved network regionalisation in order to function. As infrastructure edge computing is deployed, CDNs can also be operated from these locations. This is an important point that will be revisited later on the subject of the cloud.
2.5 Why Edge Computing?
Now that we have established the terminology and some of the history behind the concept of edge computing, we can delve deeper into the specific factors which make this technology appealing for a wide range of use cases and users. We will return to many of these factors throughout this book, but this section will establish these factors and the basic reasoning behind their importance at the edge.
2.5.1 Latency
The time required for a single bit, packet, or frame of data to be successfully transmitted between its source and destination can be measured in extreme detail by a variety of mechanisms. Between the ports on a single Ethernet switch, nanosecond scale latencies can be achieved, though they are more frequently measured in microseconds. Between devices, microsecond or millisecond scale latencies are observed, and across a large‐scale WAN, including an access or last mile network, hundreds of milliseconds of latency are commonly experienced, especially when the traffic destination is in a remote location relative to the source of the data, as is the case when a user located at the device edge seeks to use an application hosted in a remote centralised data centre facility.
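One simple way to sample end‐to‐end latency at the millisecond scale described above is to time a TCP handshake, which approximates one round trip across the network path. The sketch below is illustrative only; the host and port passed in are placeholders chosen by the caller, not endpoints named in the text.

```python
import socket
import time


def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate one network round trip by timing a TCP handshake.

    The three-way handshake completes in roughly one round-trip time,
    so the elapsed wall-clock time is a rough RTT estimate in ms.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; only the elapsed time matters
    return (time.perf_counter() - start) * 1000.0
```

Measured this way, a server in the same metropolitan area will typically return single‐digit millisecond values, while a distant centralised data centre can return values one or two orders of magnitude higher.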
Latency is typically considered to be the primary performance benefit which edge computing and particularly infrastructure edge computing can provide to its end users, although other performance advantages exist such as the ability to avoid current hotspots of network congestion by reducing the length of the network path between a user and the data centre running their application of choice.
Beyond a certain point of acceptability, where the required data rate is provided by the network to the application for it to function as intended, increasing the bandwidth and therefore the maximum data rate that is provided to a user or application on the network for a real‐time use case does not measurably increase their quality of experience (QoE). The primary drivers of increased user QoE are then latency, measured at its maximum, minimum, and average over a period of time, and the ability of the system to provide as close to deterministic performance as possible by avoiding congestion.
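The maximum, minimum, and average latency mentioned above, together with the variation between samples, can be summarised from a series of measurements. The following sketch (the function name and the use of population standard deviation as a jitter proxy are this example's own choices, not drawn from the text) shows one way to do so:

```python
import statistics


def latency_profile(samples_ms: list[float]) -> dict[str, float]:
    """Summarise a series of latency samples in milliseconds.

    Reports min/avg/max, plus the population standard deviation as a
    rough proxy for jitter: the lower the jitter, the closer the path
    comes to the deterministic performance real-time use cases need.
    """
    return {
        "min": min(samples_ms),
        "avg": statistics.fmean(samples_ms),
        "max": max(samples_ms),
        "jitter": statistics.pstdev(samples_ms),
    }
```

A single congestion spike in the samples leaves the minimum untouched but inflates the maximum and the jitter figure, which is why all of these values, not just the average, matter when assessing user QoE.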
The physical distance between a user and the data centre providing their application or service is not the only factor which influences latency from the network perspective. The network topology that exists between the end user and the data centre is also of significant concern; to achieve the lowest latency, as direct a connection as possible is preferable to relying on many circuitous routes, which introduce additional delay in data transport. In extreme cases, data may be sent away from its intended destination before taking a hairpin turn back on a return