Mathematical Programming for Power Systems Operation. Alejandro Garcés Ruiz


Equation (2.26) leads to the following condition:

g(t) ≥ g(0)    (2.27)

This condition implies that t = 0 is a local minimum of g; moreover,

g′(0) = lim_{t→0⁺} [f(x̃ + tΔx) − f(x̃)] / t = (∇f(x̃))ᵀ Δx    (2.28)

Notice that g is a function of one variable, so the optimality condition g′(0) = 0 must hold regardless of the direction Δx. Therefore, the optimum of a multivariate function is attained when the gradient is zero, that is, ∇f(x̃) = 0. This condition allows us to find local optimal points, as presented in the next section. Two questions remain open: under what conditions is the optimum global? And when is the solution unique? We will answer these relevant questions in the next chapter. For now, let us see how to find the optimum using the gradient.
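Condition (2.28) can be checked numerically. The following script is a minimal sketch, not taken from the book: it compares the difference quotient [f(x̃ + tΔx) − f(x̃)]/t against ∇f(x̃)ᵀΔx for decreasing values of t. The function f(x, y) = x² + 2y², the point x̃, and the direction Δx are chosen only for illustration:

import numpy as np

def f(v):
    # illustrative function f(x, y) = x^2 + 2y^2 (not from the book)
    return v[0]**2 + 2*v[1]**2

def grad_f(v):
    # its analytical gradient
    return np.array([2*v[0], 4*v[1]])

x0 = np.array([1.0, -2.0])   # the point x~
dx = np.array([0.5, 1.0])    # the direction Delta x
for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    quotient = (f(x0 + t*dx) - f(x0)) / t
    print(t, quotient, grad_f(x0) @ dx)

As t → 0⁺, the difference quotient approaches the directional derivative ∇f(x̃)ᵀΔx = −7, as expected.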

      2.5 The gradient method

Consider the unconstrained optimization problem

min f(x)    (2.29)

where the objective function f : ℝⁿ → ℝ is differentiable. The gradient ∇f(x) represents the direction of greatest increase of f. Thus, minimizing f implies moving in the direction opposite to the gradient. Therefore, we use the following iteration:

x ← x − t∇f(x)    (2.30)

where t > 0 is the step size.

The gradient method consists of applying this iteration until the gradient is small enough, i.e., until ‖∇f(x)‖ ≤ ϵ. It is easier to understand the algorithm by considering concrete problems and their implementation in Python, as given in the next examples.
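Before turning to the examples, the iteration and the stopping rule can be written generically. The script below is a minimal sketch, not taken from the book; the step size t, the tolerance eps, the iteration limit, and the test function f(x) = ½‖x‖² (whose gradient is simply x) are placeholders chosen for illustration:

import numpy as np

def gradient_method(grad, x, t=0.03, eps=1e-6, max_iter=1000):
    # repeat x <- x - t*grad(x) until ||grad(x)|| <= eps
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            break
        x = x - t*g
    return x

# usage with f(x) = 0.5*||x||^2, whose gradient is simply x
x_min = gradient_method(lambda v: v, np.array([4.0, -3.0]))
print('argmin:', x_min)

For a fixed step size, the choice of t matters: too large a value makes the iteration diverge, while too small a value slows convergence.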

      Example 2.4

      Consider the following optimization problem:

min f(x, y) = 10x² + 15y² + exp(x + y)    (2.31)

      The gradient of this function is presented below:

∇f(x, y) = (20x + exp(x + y), 30y + exp(x + y))ᵀ    (2.32)

We need to find a point (x, y) at which this gradient is zero; for that, we use the gradient method. The algorithm starts from an initial point (for example, x = 10, y = 10) and calculates new points as follows:

x ← x − t ∂f/∂x    (2.33)

y ← y − t ∂f/∂y    (2.34)

These steps can be implemented in a Python script, as presented below:

import numpy as np
x = 10
y = 10
t = 0.03    # step size
for k in range(50):
    # gradient of f(x, y) = 10x^2 + 15y^2 + exp(x + y), Eq. (2.32)
    dx = 20*x + np.exp(x + y)
    dy = 30*y + np.exp(x + y)
    # gradient step, Eqs. (2.33) and (2.34)
    x += -t*dx
    y += -t*dy
    print('grad:', np.abs([dx, dy]))
print('argmin:', x, y)

      Example 2.5

      Python allows calculating the gradient automatically using the module AutoGrad. It is quite intuitive to use. Consider the following script, which solves the same problem presented in the previous example:

import autograd.numpy as np   # NumPy wrapped for automatic differentiation
from autograd import grad     # gradient calculation

def f(x):
    z = 10.0*x[0]**2 + 15*x[1]**2 + np.exp(x[0] + x[1])
    return z

g = grad(f)   # create a function g that returns the gradient of f
x = np.array([10.0, 10.0])
t = 0.03
for k in range(50):
    dx = g(x)
    x = x - t*dx
print('argmin:', x)

In this case, we defined a function f and its gradient g, where the variables (x, y) were replaced by a vector with entries x[0] and x[1]. The module NumPy was loaded through autograd.numpy so that the gradient function is obtained automatically. The code executes the same 50 iterations and obtains the same result. The reader should execute both scripts and compare them in terms of computation time and results.
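A minimal sketch for such a comparison, not taken from the book, is given below; it wraps each loop in a function (the names run_manual and run_autograd are hypothetical) and times them with the standard module timeit:

import timeit
import autograd.numpy as np
from autograd import grad

def f(x):
    return 10.0*x[0]**2 + 15*x[1]**2 + np.exp(x[0] + x[1])

g = grad(f)

def run_manual():
    # 50 iterations with the hand-coded gradient of Example 2.4
    x, y, t = 10.0, 10.0, 0.03
    for k in range(50):
        dx = 20*x + np.exp(x + y)
        dy = 30*y + np.exp(x + y)
        x, y = x - t*dx, y - t*dy
    return x, y

def run_autograd():
    # 50 iterations with the automatic gradient of Example 2.5
    x = np.array([10.0, 10.0])
    t = 0.03
    for k in range(50):
        x = x - t*g(x)
    return x

print('manual  :', timeit.timeit(run_manual, number=100))
print('autograd:', timeit.timeit(run_autograd, number=100))

The hand-coded gradient is typically faster per iteration, while AutoGrad avoids deriving and typing the gradient by hand.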

      Example 2.6
