Fog Computing. Group of authors

Depending on how wide the monitored area is, four or more layers can be used. The first layer represents the cloud and offers a global perspective of the entire system. In this layer, heavy computational analysis is performed to prevent and respond to citywide disasters. Since this layer operates on historical data, it does not require real-time or near real-time responses. Next, the second layer consists of fog devices responsible for preventing failures, at a smaller scale than the cloud, in every neighborhood. The key purpose of this layer is to identify potentially hazardous events based on the measurements collected from multiple devices; here, near real-time responses are required for better prediction. The third layer is composed of local fog nodes that identify potential threats from the data received from the sensors and act by emitting control signals when necessary, enabling fast recovery from a detected threat. These devices also preprocess the raw data and prepare it for the layer above. Finally, the last layer is formed by the sensors that generate measurements for the layers above them.
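As an illustration only (the layer names, scopes, and latency labels below are ours, not from the text), the four-layer hierarchy can be modeled as a simple data structure, with a helper showing how a sensor reading escalates upward through the layers:

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    scope: str      # geographic scope covered by the layer
    latency: str    # responsiveness expected from the layer
    role: str       # analysis performed at this level

# Hypothetical encoding of the four-layer architecture described above,
# ordered top (cloud) to bottom (sensors).
ARCHITECTURE = [
    Layer("cloud", "citywide", "batch (historical)",
          "disaster prevention and response"),
    Layer("intermediate fog", "neighborhood", "near real-time",
          "hazard prediction from aggregated measurements"),
    Layer("local fog", "pipeline segment", "real-time",
          "threat detection and control signals"),
    Layer("sensing", "single sensor", "continuous",
          "raw measurement generation"),
]

def escalation_path(start: str) -> list[str]:
    """Return the layer names a reading traverses from `start` upward."""
    names = [layer.name for layer in ARCHITECTURE]
    return list(reversed(names[: names.index(start) + 1]))

print(escalation_path("sensing"))
```

Running the sketch shows a measurement flowing from the sensing layer through both fog tiers up to the cloud.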

Figure: A smart pipeline four-layer fog computing architecture: (1) Data Centers; (2) Intermediate computing nodes; (3) Edge computing nodes; (4) Sensing networks on critical infrastructures.

      The fog and edge computing vision introduces multiple advantages by migrating some computational resources to the edge of the network. The underlying idea of these paradigms is to create an IoT network environment composed of a vast number of interconnected, distributed, heterogeneous devices, with the purpose of deploying and managing demanding applications closer to the user. Yet, designing platforms in which all these required characteristics are met is a nontrivial task.

      In this section, we identify and discuss the challenges that these paradigms must overcome in order to reach their full potential. We group these challenges into three main areas: resource management, security and privacy, and network management.

      2.5.1 Resource Management

      A taxonomy of resource management at the edge, based on the current state-of-the-art research in this area, is presented in [28]. According to this classification, five categories are identified, based on the objective of each technique.

      The first category, resource estimation, represents one of the fundamental requirements in resource management: the capability to estimate how many resources a given task requires. This is important for handling the uncertainties found in an IoT network while providing a satisfactory QoS for deployed IoT applications. The second category, resource discovery, aims to help the user discover resources already deployed and available at the edge. Resource discovery complements resource estimation by keeping the pool of available computational resources up to date.

      Once the system can estimate and discover resources, a third category arises, whose purpose is to place IoT applications in close proximity to the users. This technique, called resource allocation, uses the knowledge of available resources to map parts of an application onto different edge devices such that its requirements are met. Allocation can be viewed from two perspectives: (1) it represents the initial deployment to the edge of the network, deciding where to map the application; and (2) it serves as a migration technique that self-adapts when a node fails. Moreover, a challenge arises when resources are shared between distributed edge devices: close collaboration between nodes, enforced by security and privacy mechanisms, is required. Addressing this challenge gives rise to the fourth category, resource sharing.
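Both perspectives of allocation can be sketched with a simple greedy first-fit strategy; this is only one of many possible policies and the function names are illustrative, not taken from the surveyed work:

```python
def allocate(parts: list[tuple[str, float]],
             nodes: dict[str, float]) -> dict[str, str]:
    """Map each application part (name, demand) onto the first node
    with enough free capacity. `nodes` maps node -> free capacity,
    assumed ordered by proximity to the user."""
    placement: dict[str, str] = {}
    free = dict(nodes)
    for part, demand in parts:
        for node, cap in free.items():
            if cap >= demand:
                placement[part] = node
                free[node] = cap - demand
                break
        else:
            raise RuntimeError(f"no node can host {part}")
    return placement

def migrate(parts: list[tuple[str, float]],
            nodes: dict[str, float],
            failed: str) -> dict[str, str]:
    """Second perspective: re-run allocation without the failed node,
    self-adapting the placement."""
    survivors = {n: c for n, c in nodes.items() if n != failed}
    return allocate(parts, survivors)
```

The initial deployment corresponds to the first call to `allocate`, while `migrate` recomputes the mapping after a node failure.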

      Finally, the last technique, resource optimization, is obtained by combining the aforementioned resource management approaches. Its main objective is to optimize the usage of the available resources at the edge according to the constraints of the IoT application. Usually, the developer specifies the QoS requirements of the application before deploying it to the edge.

      2.5.2 Security and Privacy

      To evaluate the security and privacy enforced in systems based on fog and edge devices, the designer can use the confidentiality, integrity, and availability (CIA) triad model, which captures the most critical characteristics of a system [29]. While any breach of the confidentiality or integrity components yields a data privacy issue, the availability component refers to the ability of the nodes to share their resources when required. Since fog and edge represent an extension of the cloud, such systems inherit not only its computational resources but also its security and privacy challenges. In addition, the deployment of devices at the edge of the network introduces further security challenges. Yi et al. identify the most important security issues of fog computing as authentication, access control, intrusion attack, and privacy [9].

      Considering the dynamic structure of an IoT network, authentication is a key feature of fog and edge computing and was identified as the main security issue in fog computing [20]. Authentication serves as the connectivity mechanism that allows new nodes to be securely accepted into the IoT network. By providing means to identify each device and establish its credentials, trust is created between the newly added node and the network. The current security solutions proposed for cloud computing may have to be updated for fog/edge computing to account for threats that do not exist in the cloud's controlled environment [21]. One solution for securely authenticating edge devices is presented in [30].
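One common way to establish such trust (a generic sketch, not the specific scheme of [30]) is challenge–response authentication over a pre-shared key, for example one provisioned into the device at manufacture:

```python
import hmac
import hashlib
import secrets

def node_respond(pre_shared_key: bytes, nonce: bytes) -> bytes:
    """The joining node proves possession of the pre-shared key by
    computing an HMAC over the gateway's fresh nonce."""
    return hmac.new(pre_shared_key, nonce, hashlib.sha256).digest()

def gateway_admit(pre_shared_key: bytes, nonce: bytes,
                  response: bytes) -> bool:
    """The fog gateway recomputes the HMAC and compares it in constant
    time before trusting the new node."""
    expected = hmac.new(pre_shared_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)    # provisioned pre-shared key
nonce = secrets.token_bytes(16)  # fresh challenge, prevents replay
assert gateway_admit(key, nonce, node_respond(key, nonce))
assert not gateway_admit(key, nonce, b"\x00" * 32)
```

The fresh nonce per attempt prevents replay attacks, while the constant-time comparison avoids leaking the expected digest through timing.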
