Artificial Intelligence and Quantum Computing for Advanced Wireless Networks. Savo G. Glisic



      The function FORWARD computes the states, whereas BACKWARD calculates the gradient. The procedure MAIN minimizes the error by calling FORWARD and BACKWARD iteratively.
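To make the structure of these procedures concrete, the following minimal Python sketch runs the same loop on a toy single-graph model. The tanh transition, the linear output map u, the quadratic error, and all shapes and learning parameters are illustrative assumptions, not the book's implementation.

```python
import numpy as np

# Toy setup: state x in R^s, transition F_w(x) = tanh(W x),
# output o = u . x, quadratic error (o - target)^2.
s, lr = 4, 0.1
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((s, s))  # small weights keep F_w a contraction
u = rng.standard_normal(s)             # output weights of G_w
target = 1.0

def F(x, W):
    return np.tanh(W @ x)              # transition function F_w

def FORWARD(W, tol=1e-8, max_iter=1000):
    # Iterate x(t + 1) = F_w(x(t)) to the fixed point (Banach iteration).
    x = np.zeros(s)
    for _ in range(max_iter):
        x_new = F(x, W)
        if np.linalg.norm(x_new - x, 1) < tol:
            break
        x = x_new
    return x_new

def BACKWARD(x, W, tol=1e-8, max_iter=1000):
    # Backward fixed point z(t + 1) = z(t) @ A + b, then the gradient wrt W.
    o = u @ x
    de_do = 2.0 * (o - target)         # derivative of the quadratic error
    d = 1.0 - np.tanh(W @ x) ** 2      # tanh' at the fixed point
    A = np.diag(d) @ W                 # A = dF_w/dx
    b = de_do * u                      # b = (de/do) (dG_w/dx)
    z = np.zeros(s)
    for _ in range(max_iter):
        z_new = z @ A + b
        if np.linalg.norm(z_new - z, 1) < tol:
            break
        z = z_new
    return np.outer(z_new * d, x)      # z . dF_w/dW; entry (i, j) = z_i d_i x_j

def MAIN(W, steps=200):
    # Minimize the error by calling FORWARD and BACKWARD iteratively.
    for _ in range(steps):
        x = FORWARD(W)
        W = W - lr * BACKWARD(x, W)
    return W

W = MAIN(W)
```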

This implies that Fw is a contraction map (with respect to ‖⋅‖1) for any set of parameters w.

1 Nonlinear (nonpositional) GNN. In this case, hw is realized by a multilayered feedforward NN. Since three‐layered neural networks are universal approximators [67], hw can approximate any desired function. However, not every choice of the parameters w is admissible, because it must be ensured that the corresponding transition function Fw is a contraction map. This can be achieved by adding a penalty term to Eq. (5.79), where the penalty L(y) equals (y − μ)² if y > μ and 0 otherwise, and the parameter μ ∈ (0, 1) defines the desired contraction constant of Fw. More generally, the penalty term can be any expression that is differentiable with respect to w and monotonically increasing with respect to the norm of the Jacobian. For example, in our experiments, we use the penalty term ∑i L(‖Ai‖1), where Ai is the i‐th column of ∂Fw/∂x. Such an expression is an approximation of L(‖∂Fw/∂x‖1) = L(maxi ‖Ai‖1), as sketched below.
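As a rough illustration, the following NumPy sketch computes this penalty; the function name and the choice μ = 0.9 are assumptions for the example, and jac stands for a dense copy of the Jacobian ∂Fw/∂x.

```python
import numpy as np

def contraction_penalty(jac, mu=0.9):
    # sum_i L(||A_i||_1), where A_i is the i-th column of jac = dF_w/dx and
    # L(y) = (y - mu)^2 for y > mu, 0 otherwise; a differentiable surrogate
    # for L(max_i ||A_i||_1). Name and mu = 0.9 are illustrative assumptions.
    col_norms = np.abs(jac).sum(axis=0)       # ||A_i||_1 for every column i
    excess = np.maximum(col_norms - mu, 0.0)  # L vanishes below mu
    return np.sum(excess ** 2)

jac = 0.5 * np.random.default_rng(0).standard_normal((4, 4))
print(contraction_penalty(jac))  # 0 only if every column norm is <= mu
```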

      5.3.2 Computational Complexity

Here, we analyze the computational cost of running a GNN. The analysis focuses on three different GNN models: positional GNNs, where the functions fw and gw of Eq. (5.74) are implemented by FNNs; linear (nonpositional) GNNs; and nonlinear (nonpositional) GNNs.

We will assume that there exist two procedures FP and BP, which implement the forward phase and the backward phase of the backpropagation procedure, respectively. Formally, given a function lw : ℝᵃ → ℝᵇ implemented by an FNN, we have

y = FP(lw, x) = lw(x),   [∂e/∂x, ∂e/∂w] = BP(lw, x, ∂e/∂y)
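As an illustration of FP and BP, the sketch below realizes both procedures with PyTorch autograd; the helper names and the example network are assumptions, not the book's code.

```python
import torch
from torch import nn

def FP(l_w, x):
    # Forward phase: y = l_w(x) for an FNN l_w : R^a -> R^b.
    return l_w(x)

def BP(l_w, x, de_dy):
    # Backward phase: seed backprop with de/dy; return de/dx and de/dw.
    l_w.zero_grad()
    x = x.clone().requires_grad_(True)
    y = l_w(x)
    y.backward(de_dy)
    return x.grad, [p.grad for p in l_w.parameters()]

# Example FNN with a = 3 inputs and b = 2 outputs.
l_w = nn.Sequential(nn.Linear(3, 8), nn.Tanh(), nn.Linear(8, 2))
x = torch.randn(3)
y = FP(l_w, x)
de_dx, de_dw = BP(l_w, x, torch.ones(2))  # pretend de/dy is all ones
```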

Table: Time complexity of the most expensive instructions of the learning algorithm, for each of the three GNN models, together with the number of executions per epoch; a dash marks an entry that is not recoverable from the source text. Source: Scarselli et al. [1].

Instruction | Positional | Nonlinear | Linear | Execs.
z(t + 1) = z(t) ⋅ A + b | s²∣E∣ | s²∣E∣ | s²∣E∣ | it_b
o = Gw(x(t), lN) | — | — | — | 1
x(t + 1) = Fw(x(t), l) | — | — | s²∣E∣ | it_f
— | — | — | — | 1
— | — | — | — | 1
— | ∣N∣ | ∣N∣ | ∣N∣ | 1
— | — | — | — | —
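The s²∣E∣ entries reflect the block structure of A = ∂Fw/∂x: only the s × s blocks associated with the ∣E∣ edges of the graph are nonzero, so one iteration of z(t + 1) = z(t) ⋅ A + b touches on the order of s²∣E∣ values rather than (s∣N∣)². A small NumPy sketch of one such block-sparse step, with all graph data invented for the example:

```python
import numpy as np

s = 2
nodes = ["a", "b", "c"]
edges = {("a", "b"), ("b", "c"), ("c", "a")}
rng = np.random.default_rng(1)
blocks = {e: 0.1 * rng.standard_normal((s, s)) for e in edges}  # s x s block per edge
b = {v: rng.standard_normal(s) for v in nodes}
z = {v: np.zeros(s) for v in nodes}

def backward_step(z, blocks, b):
    # One iteration of z(t+1) = z(t) . A + b, exploiting that A = dF_w/dx
    # has a nonzero s x s block only where the graph has an edge:
    # O(s^2) work per edge, hence the s^2|E| entry in the table.
    z_new = {v: b[v].copy() for v in b}
    for (u, v), block in blocks.items():
        z_new[v] += z[u] @ block
    return z_new

z = backward_step(z, blocks, b)
```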
