Cyber-Physical Distributed Systems. Min Xie

actual correlations in a CPS. It is reasonable to build a reliability model based on the control block diagram of a CPS. In such a model, the controller has many input signals, including commands and system state feedback; in general, the commands are the system's expected outputs. The control block diagram specifies the control signal flows, and sensors play a key role in closing the feedback loop. The diagram therefore clearly indicates the internal dynamic relations of the system and covers most of the aspects that need to be studied.
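
      As a concrete illustration of the kind of loop such a block diagram describes, the minimal sketch below closes a path from command to controller to plant to sensor and back. The first-order plant, the PI gains, and the sampling period are assumptions made for illustration; they do not come from the text.

```python
# Minimal discrete-time feedback loop mirroring the control block diagram:
# command (reference) -> controller -> plant -> sensor -> feedback.
# The first-order plant, PI gains, and sampling period are illustrative
# assumptions, not values taken from the text.

dt = 0.01                 # sampling period [s] (assumed)
a, b = 0.95, 0.05         # first-order plant: x[k+1] = a*x[k] + b*u[k]
kp, ki = 2.0, 0.5         # proportional and integral gains (assumed)

r = 1.0                   # command: the system's expected output
x = 0.0                   # plant state
integ = 0.0               # integral of the tracking error

for k in range(5000):
    y = x                     # sensor measurement (ideal sensor assumed)
    e = r - y                 # tracking error fed back to the controller
    integ += e * dt
    u = kp * e + ki * integ   # control signal
    x = a * x + b * u         # plant update

print(f"output after 5000 steps: {x:.3f} (command = {r})")
```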

      For applications in CPSs, we are interested in real‐time performance. Therefore, from a control perspective, the ability to shape the transient and steady‐state response of a feedback CPS is a key benefit of feedback design. One of the first steps in the design process is to specify the performance measures. In this chapter, we introduce common time‐domain specifications, such as percent overshoot, settling time, time to peak, time to rise, and steady‐state tracking error. We will use selected input signals, such as the step and ramp, to test the response of the CPS. The correlations between the system performance and the stability, reliability, and resilience strategies of CPSs are investigated. We will develop valuable relationships between the performance specifications and the component states for CPSs.
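
      The specifications listed above can be read directly off a sampled step response. The sketch below does this for an assumed second-order test system (damping ratio 0.5, natural frequency 2 rad/s); the system, the 10-90% rise-time definition, and the 2% settling band are illustrative choices, not values from the text.

```python
import numpy as np

# Extract time-domain specifications from a sampled unit step response of a
# standard second-order system. System parameters are assumed for illustration.

zeta, wn = 0.5, 2.0
t = np.linspace(0.0, 10.0, 5001)
wd = wn * np.sqrt(1.0 - zeta**2)
phi = np.arccos(zeta)
# Unit step response of a standard underdamped second-order system.
y = 1.0 - np.exp(-zeta * wn * t) * np.sin(wd * t + phi) / np.sqrt(1.0 - zeta**2)

y_final = y[-1]                                    # steady-state value
overshoot = (y.max() - y_final) / y_final * 100.0  # percent overshoot
t_peak = t[np.argmax(y)]                           # time to peak

# Time to rise: 10% to 90% of the final value.
t_rise = t[np.argmax(y >= 0.9 * y_final)] - t[np.argmax(y >= 0.1 * y_final)]

# Settling time: last instant the response leaves the +/-2% band.
outside = np.abs(y - y_final) > 0.02 * y_final
t_settle = t[np.nonzero(outside)[0][-1] + 1] if outside.any() else 0.0

ess = abs(1.0 - y_final)                           # steady-state tracking error

print(f"PO = {overshoot:.1f}%, tp = {t_peak:.2f}s, "
      f"tr = {t_rise:.2f}s, ts = {t_settle:.2f}s, ess = {ess:.4f}")
```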

      The ability of a feedback CPS to compensate for the consequences of inherent faults redefines the concept of failure, i.e., the reliability of the CPS depends not only on the type of failure that may occur, but also on the evolving states of the system output and control signals in each period [9,10]. Classical reliability evaluation methods, such as fault tree analysis, event tree analysis, and failure mode and effect analysis, are not appropriate for these evolving states because of the complexity and dynamics of CPSs. In [11,12], structured analyses and design techniques based on Monte Carlo simulation (MCS) for reliability evaluation are presented. This approach explicitly formalizes the functional interactions between subsystems, identifies the characteristic values affecting the reliability of complex CPSs, and quantifies the reliability, availability, maintainability, and safety (RAMS) parameters related to the operational architecture. Because the remaining ability of the system to maintain the expected control goal after faults occur is crucial, methods based on ordered sequences of multiple failures have been applied to assess the reliability of all possible CPS architectures [10]. A new methodology called the multi‐fault tree is proposed there, in which time‐ordered sequences of failures are addressed.
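
      The toy sketch below only illustrates why the order of failures matters: it enumerates time-ordered failure sequences for three hypothetical components and checks a made-up rule for whether the control goal is still maintained. It is not the multi-fault tree methodology of [10].

```python
from itertools import permutations

# Toy illustration of time-ordered failure sequences. The components and the
# survival rule are hypothetical and exist only to show that the order in
# which failures occur affects how long the control goal is maintained.

components = ["sensor", "controller", "actuator"]

def control_goal_maintained(failed_so_far):
    # Hypothetical rule: the system tolerates a single sensor failure
    # (e.g., through analytical redundancy), but losing the controller or
    # the actuator, or losing two components, makes the goal unreachable.
    return failed_so_far == [] or failed_so_far == ["sensor"]

for sequence in permutations(components):
    failed, survived_steps = [], 0
    for comp in sequence:
        failed.append(comp)
        if control_goal_maintained(failed):
            survived_steps += 1
        else:
            break
    print(f"order {sequence}: goal maintained after {survived_steps} failure(s)")
```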

      In contrast to the aforementioned studies, the reliability of a CPS as a function of the required performance from a control viewpoint is evaluated in [13]. The CPS is regarded as failed if the dynamic performance does not satisfy all the requirements. Difference equations are introduced to describe the stochastic model of the CPS, explicitly illustrating how transmission delays and packet dropouts change the model parameters. A linear discrete‐time dynamic approach for modeling the signal flow into, out of, and among all subsystems enables straightforward calculation of fundamental dynamic aspects, such as timing and fault characteristics [14].
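
      A minimal difference-equation sketch of this idea is given below: a scalar plant is driven by a control signal that reaches the actuator through a delayed, lossy channel, so the input that is actually applied depends on the realized delays and dropouts. The scalar plant, proportional gain, dropout probability, and one-step delay are assumptions made for illustration, not the models of [13] or [14].

```python
import numpy as np

# Scalar discrete-time plant whose applied input passes through a network
# channel with a fixed delay and Bernoulli packet dropouts; on a dropout the
# actuator holds its previous value (zero-order hold). All numbers are
# illustrative assumptions.

rng = np.random.default_rng(0)

a, b = 0.9, 0.1          # plant: x[k+1] = a*x[k] + b*u_applied[k]
kp = 4.0                 # proportional feedback gain (assumed)
p_drop = 0.2             # probability that a control packet is lost
delay = 1                # channel delay in sampling steps (assumed)

r = 1.0                  # setpoint
x = 0.0
u_applied = 0.0               # last input held by the actuator
u_buffer = [0.0] * delay      # models the transmission delay

for k in range(200):
    u_buffer.append(kp * (r - x))   # controller output enters the channel
    u_delayed = u_buffer.pop(0)     # packet arriving after `delay` steps
    if rng.random() > p_drop:       # packet delivered
        u_applied = u_delayed
    # else: dropout, the actuator keeps holding the previous input
    x = a * x + b * u_applied

# Proportional-only control leaves a steady-state offset; the point here is
# how the channel model enters the difference equation, not the offset.
print(f"state after 200 steps: {x:.3f} (setpoint = {r})")
```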

      In [13], this method is extended to estimate the reliability of CPSs, replacing the constraint on the number of replications used in [16] with two other constraints, namely, a precision interval and the percentage of simulations falling within this interval. The networked degradations for each channel are generated and then used to determine the success or failure of the CPS for a given combination of operational requirements. The reliability of the CPS is therefore estimated as a tabulated function of the operational requirements. Compared with the results in [16], the results obtained in [13] guarantee that the estimated reliability satisfies a given precision.
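
      One common way to realize such a precision-based stopping rule is sketched below: replications are added in batches until the 95% confidence half-width of the reliability estimate drops below a target precision. The degradation model, the success criterion, and the use of a confidence half-width as the precision interval are assumptions for illustration; the exact constraints in [13] may differ.

```python
import numpy as np

# Monte Carlo reliability estimation with a precision-based stopping rule
# instead of a fixed number of replications. Each replication samples
# networked degradations (dropout rate and delay), simulates a simple
# delayed, lossy feedback loop, and records success or failure against an
# assumed operational requirement.

rng = np.random.default_rng(1)

def one_replication():
    """Sample channel degradations, simulate, return True on success."""
    p_drop = rng.uniform(0.0, 0.5)       # sampled dropout probability
    delay = int(rng.integers(0, 4))      # sampled delay in steps
    a, b, kp, r = 0.9, 0.1, 5.0, 1.0     # plant and gain (assumed)
    x, u_applied = 0.0, 0.0
    buf = [0.0] * delay
    for _ in range(200):
        buf.append(kp * (r - x))
        if rng.random() > p_drop:        # packet delivered after `delay` steps
            u_applied = buf[0]
        buf.pop(0)
        x = a * x + b * u_applied
    # Assumed requirement: end within 5% of the delay-free steady-state value.
    x_nom = b * kp * r / (1.0 - a + b * kp)
    return abs(x - x_nom) <= 0.05 * x_nom

z = 1.96                     # 95% confidence level
target_halfwidth = 0.02      # required precision of the reliability estimate
successes, n = 0, 0
while True:
    successes += sum(one_replication() for _ in range(500))
    n += 500
    p_hat = successes / n
    halfwidth = z * np.sqrt(max(p_hat * (1.0 - p_hat), 1e-12) / n)
    if halfwidth <= target_halfwidth:
        break

print(f"estimated reliability {p_hat:.3f} +/- {halfwidth:.3f} after {n} runs")
```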

      1.2.1 Stability of CPSs

      In power systems, communication delays always occur in the transmission of frequency measurements from sensors to the control center (S‐C channel) and of control signals from the control center to the plant side (C‐A channel) [17,18]. In local networks, time delays are usually ignored because control is mainly applied locally, and communication delays are negligible compared with the subsystem time constants [19]. In recent years, with the rapid development of wide‐area measurement systems (WAMSs), a large number of phasor measurement units (PMUs) have been deployed to facilitate the real‐time control of wide‐area power systems (WAPSs) and to improve the load frequency control (LFC) performance [20–22]. However, when data packets are transmitted across a WAMS, communication delays may become significant and cannot be ignored [23–25].

      The conventional control of WAPSs is centralized and employs dedicated communication channels over a closed communication network. However, new regulatory guidelines require coordination across multiple hierarchical levels of power systems for more effective market operations, and, as a result, open communication infrastructures have been deployed to support the control of these increasingly complex systems [26,27]. While open networks have economic, maintenance, and reliability advantages, they are subject to time delays that are inherently stochastic (e.g., multiple delays [20] and probabilistic interval delays [24]) and thus cannot be calculated with the procedures used for dedicated networks. Numerical investigations show that time delays in the open communication network have the potential to destabilize a WAPS [23,28,29]. For instance, the deregulation of the power industry has pushed many tie lines between control areas to operate close to their maximum capacity. This is especially true for tie lines serving heavy load centers, for example, in southern California [28]. Under these circumstances, operational stresses, such as large time delays, increase the possibility of inter‐area oscillation, reduce the effectiveness of control system damping, and can potentially lead to loss of system synchronism [30].

      Two classes of methods are available for computing the delay margin in a WAPS for constant and time‐varying delays. Frequency‐domain direct methods quantify the delay margin based on computing critical eigenvalues for constant delays with a known upper bound, for example, the Schur‐Cohn‐based method for commensurate delays [37], Rekasius substitution [38], and the elimination of exponential terms in the characteristic equation [36]. Indirect methods can deal with time‐varying and constant delays; they are derived from Lyapunov stability theory [17,23,24], linear matrix inequality techniques [27,28,39], H‐infinity robust synthesis [20], and the dual‐locus diagram method [40].
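
      To make the notion of a delay margin concrete, the sketch below uses the elementary frequency-domain fact that, for a loop with a single gain crossover, the largest tolerable constant delay equals the phase margin divided by the gain-crossover frequency. The first-order plant with proportional feedback is an assumed toy example; it does not represent a WAPS model or the Schur-Cohn, Rekasius, Lyapunov, or LMI machinery cited above.

```python
import numpy as np

# Delay margin of a simple loop L(s) = b*K / (s + a): find the frequency where
# |L(jw)| = 1, compute the phase margin there, and divide by that frequency.
# The plant and gain values are assumed for illustration only.

a, b, K = 1.0, 1.0, 5.0                     # plant pole, plant gain, feedback gain

w = np.linspace(1e-3, 100.0, 200000)        # frequency grid [rad/s]
mag = (b * K) / np.sqrt(w**2 + a**2)        # |L(jw)|
phase = -np.arctan2(w, a)                   # arg L(jw) [rad]

i = np.argmin(np.abs(mag - 1.0))            # gain-crossover frequency index
w_gc = w[i]
phase_margin = np.pi + phase[i]             # distance to -180 degrees [rad]
delay_margin = phase_margin / w_gc          # largest tolerable constant delay

# Closed-form check for this particular first-order loop.
w_exact = np.sqrt((b * K)**2 - a**2)
tau_exact = (np.pi - np.arctan2(w_exact, a)) / w_exact

print(f"numerical delay margin  ~ {delay_margin:.4f} s at w_gc ~ {w_gc:.3f} rad/s")
print(f"closed-form delay margin = {tau_exact:.4f} s")
```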

      These methods assume well‐defined time delay models, that is, constant [20], uniformly distributed [25], multiple [17], and probabilistic interval time delays [24], and require prior knowledge of the lower bound, upper bound, and parameters of the delay distribution. However, delays in an open communication network vary with the number of active end‐users [41] and media access control
