
connection between neurons is associated with a weight determining the importance of the input value (input variable xi). The initial weights are set randomly. Each neuron, in turn, is characterized by an activation function, a mathematical function applied to its weighted inputs. Figure 1.17a shows a basic neural network and Figure 1.17b a DL neural network. ANNs are trained through back-propagation of the error: the weights are tuned according to the error rate obtained in the previous iterations, and each training iteration is named an epoch. Proper refinement of the weight tuning lowers the error rate, optimizing the model for the specific case study. Figure 1.18a sketches the principle of the back-propagation feedback system enabling self-adjusting weights, and Figure 1.18b shows a basic neural network implementing the mathematical function defining the node output (unit step functions, named activation functions).

Figure 1.17 Schematic illustration of (a) a simple ANN and (b) a DL neural network.

Figure 1.18 Schematic illustration of (a) the feedback system minimizing the calculation error in the training model and (b) a neural network model implementing the unit step function.

      The pseudocode of the ANN training process is as follows:

       1. Train_ANN(fi, wi, oj)
       2. Randomly initialize wi = {w1, w2, …, wn}
       3. For epoch = 1 to N Do
       4.   While (j ≤ m) Do
       5.     Input oj = {o1, o2, …, om} into the input layer; forward propagate (fi · wi) through the layers until the predicted result y is obtained
       6.     Compute the error e = y − ŷ, where ŷ is the expected output
       7.     Back propagate e from the output layer to the input layer
       8.     Update wi
       9.   End While
       10. End For
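      A minimal runnable sketch of this training loop in Python, assuming a single hidden layer, sigmoid activations, a squared-error loss, and an XOR toy dataset; the layer sizes, learning rate, and epoch count are illustrative choices, not values taken from the book:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Sigmoid activation: squashes the weighted sum into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Toy dataset: inputs oj and expected outputs (XOR, purely illustrative).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialize the weights wi (step 2 of the pseudocode).
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros((1, 1))

lr = 0.5  # learning rate (illustrative value)

for epoch in range(5000):  # "For epoch = 1 to N" (step 3)
    # Forward propagation through the layers (step 5).
    h = sigmoid(X @ W1 + b1)        # hidden layer output
    y_pred = sigmoid(h @ W2 + b2)   # predicted result y

    # Error between predicted and expected output (step 6).
    e = y_pred - Y

    # Back propagate the error from the output layer to the input
    # layer (step 7), using the sigmoid derivative s' = s * (1 - s).
    d_out = e * y_pred * (1 - y_pred)
    d_hid = (d_out @ W2.T) * h * (1 - h)

    # Update the weights wi by gradient descent (step 8).
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(y_pred, 3))  # approaches [0, 1, 1, 0] as the error decreases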

      The pseudocode highlights the two mechanisms at work in the ANN: the forward propagation estimating the predicted output y, and the back propagation of the error, as sketched in Figure 1.18a. The output is estimated from the summation of the input contributions and is defined as:

      (1.10) $y = f\left(\sum_{i=1}^{n} w_i x_i\right)$, where $x_i$ are the input variables, $w_i$ the connection weights, and $f$ the activation function

      Equations (1.11)–(1.15), rendered as images in the source and not reproduced here, define the basic activation functions sketched in the figure below.
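      As a worked numeric sketch of the node output in (1.10), assuming a unit step activation; the weights and inputs below are made up for the example:

import numpy as np

def unit_step(z):
    # Unit step activation: outputs 1 when the weighted sum is non-negative.
    return np.where(z >= 0, 1, 0)

x = np.array([0.5, -1.0, 2.0])  # input variables x_i (illustrative values)
w = np.array([0.8, 0.4, 0.3])   # connection weights w_i (illustrative values)

z = np.dot(w, x)   # summation of the input contributions, sum_i w_i * x_i
y = unit_step(z)   # node output y = f(sum_i w_i x_i)
print(z, y)        # weighted sum ≈ 0.6, so the step function outputs 1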

Figure: Schematic illustration of the basic mathematical functions defining activation functions.

      Other mathematical activation functions are the following [68]:

      Equations (1.16)–(1.31), rendered as images in the source, are not reproduced here.
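      The sixteen definitions in (1.16)–(1.31) are not recoverable from this extract; as a hedged illustration, the sketch below implements a few activation functions that commonly appear in such surveys [68] (sigmoid, tanh, ReLU, leaky ReLU, ELU, and softmax); the exact set and numbering used in the book may differ:

import numpy as np

def sigmoid(x):
    # Logistic sigmoid: output range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: output range (-1, 1).
    return np.tanh(x)

def relu(x):
    # Rectified linear unit: zero for negative inputs.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: small slope alpha for negative inputs.
    return np.where(x >= 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Exponential linear unit: smooth saturation for negative inputs.
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

def softmax(x):
    # Softmax: maps a vector to a probability distribution.
    e = np.exp(x - np.max(x))
    return e / e.sum()

z = np.array([-2.0, 0.0, 2.0])
for f in (sigmoid, tanh, relu, leaky_relu, elu, softmax):
    print(f.__name__, np.round(f(z), 3))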

      The activation function is a basic research element of considerable importance: its correct choice determines how well the network implements the logic defining the outputs. The analytical model must therefore be appropriately weighted across the input variables and "calibrated" for the specific case study. Another important aspect is the ability of the activation function to self-adapt [69] to the specific case study, providing a degree of flexibility [70]. Of particular interest is the possibility of combining activation functions (activation ensemble [71]), as sketched below. A promising approach is therefore to define a flexible and modular activation function, as with the adaptive spline activation function [72].
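      A minimal sketch of the activation ensemble idea [71]: the node output mixes several fixed activations through trainable mixing weights, giving the flexibility discussed above. The basis functions and the softmax mixing used here are illustrative assumptions, not the formulation of [71] or of the adaptive spline activation in [72]:

import numpy as np

class ActivationEnsemble:
    """Convex combination of basis activations with trainable mixing weights."""

    def __init__(self):
        # Basis activations (an illustrative choice: tanh, ReLU, identity).
        self.basis = [np.tanh, lambda x: np.maximum(0.0, x), lambda x: x]
        # Trainable mixing logits, one per basis function; a training loop
        # would update these alongside the network weights.
        self.logits = np.zeros(len(self.basis))

    def weights(self):
        # Softmax keeps the mixture weights positive and summing to 1.
        e = np.exp(self.logits - self.logits.max())
        return e / e.sum()

    def __call__(self, x):
        w = self.weights()
        return sum(wi * f(x) for wi, f in zip(w, self.basis))

act = ActivationEnsemble()
print(np.round(act(np.array([-1.0, 0.0, 2.0])), 3))  # equal mixing at init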
