Mathematical Programming for Power Systems Operation. Alejandro Garcés Ruiz


In general, ‖x‖∞ ≤ ‖x‖2 ≤ ‖x‖1. All of these norms are suitable ways to measure vectors in the space.
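This ordering of the three norms can be checked numerically. A quick sketch with NumPy (the sample vector is arbitrary, chosen only for illustration):

```python
import numpy as np

# An arbitrary sample vector in R^3
x = np.array([3.0, -4.0, 1.0])

norm_1 = np.linalg.norm(x, 1)         # sum of absolute values: 8
norm_2 = np.linalg.norm(x, 2)         # Euclidean norm: sqrt(26) ~ 5.10
norm_inf = np.linalg.norm(x, np.inf)  # largest absolute component: 4

# The ordering ||x||_inf <= ||x||_2 <= ||x||_1 holds for any x
assert norm_inf <= norm_2 <= norm_1
print(norm_inf, norm_2, norm_1)
```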

      Figure 2.2 Three ways to measure a vector: a) 2-norm or Euclidean norm, b) 1-norm or Manhattan distance, c) infinity-norm or uniform norm.

      We can use a norm to define the set of all points at a distance less than or equal to a given value r, as given in (Equation 2.21).

B = {x∈Rn : ‖x‖ ≤ r} (2.21)

      This set is known as a ball of radius r. Figure 2.3 shows the shape of unit balls (i.e., balls of radius 1), generated by each of the three previously mentioned norms.

      Figure 2.3 Comparison among unit balls defined by norm-2, norm-1, and norm-∞

      Notice that a ball is not necessarily round, at least with this definition. All balls share a common geometric property known as convexity that is studied in Chapter 3.
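Membership in a unit ball, and the convexity property just mentioned, can both be tested numerically. A minimal sketch (the test point and segment endpoints are illustrative choices):

```python
import numpy as np

def in_unit_ball(x, p):
    """Check whether x lies in the unit ball of the p-norm."""
    return np.linalg.norm(x, p) <= 1.0

# The same point can belong to one ball and not another:
x = np.array([0.7, 0.7])
print(in_unit_ball(x, np.inf))  # max(0.7, 0.7) = 0.7  -> inside
print(in_unit_ball(x, 2))       # sqrt(0.98) ~ 0.99    -> inside
print(in_unit_ball(x, 1))       # 0.7 + 0.7 = 1.4      -> outside

# Convexity: every point on the segment joining two members of the
# 2-norm unit ball remains inside the ball
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for lam in np.linspace(0.0, 1.0, 11):
    assert in_unit_ball(lam * a + (1 - lam) * b, 2)
```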

      2.3 Global and local optimum

      Let us consider a mathematical optimization problem represented as (Equation 2.22).

min f(x, β) subject to x ∈ Ω (2.22)

      where f : Rn → R is the objective function, x are the decision variables, Ω is the feasible set, and β are constant parameters of the problem.

      Figure 2.4 Example of local and global optima: a) function with two local minima and their respective neighborhoods, b) function with a unique global minimum (the neighborhood is the entire domain of the function).

      There are two local minima in the first case, whereas there is a unique global minimum in the second case. This concept is more than a fancy theoretical notion; what good is a local optimum if there are even better solutions in another region of the feasible set? In practice, we require global or close-to-global optimum solutions.
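The practical consequence is that a descent algorithm started in the wrong region gets trapped in a local minimum. The following sketch illustrates this with a plain gradient descent on a function with two minima (the function, starting points, and step size are illustrative assumptions, not taken from the book):

```python
def f(x):
    """Illustrative objective with two local minima."""
    return x**4 - 4 * x**2 + x

def df(x):
    """Derivative of f."""
    return 4 * x**3 - 8 * x + 1

def gradient_descent(x, lr=0.01, steps=2000):
    """Plain gradient descent from the starting point x."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

# Starting on either side of the central hill leads to different minima:
left = gradient_descent(-2.0)   # near x ~ -1.47 (the global minimum)
right = gradient_descent(2.0)   # near x ~  1.35 (only a local minimum)
print(left, f(left))
print(right, f(right))
```

Both end points satisfy the first-order condition f′(x) = 0, yet f(left) < f(right): only a comparison across regions reveals which one is global.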

      Figure 2.5 Example of a function with several optimal points.

      It is well known, from basic calculus, that the optimum of a continuous differentiable function is attained when its derivative is zero. This fact can be formalized in view of the concepts presented in the previous sections. Consider a function f : R → R with a local minimum at x~. A neighborhood is defined as N={x∈R:x=x~±t,|t|<t0} with the following condition:

f(x~) ≤ f(x~ + t), ∀ (x~ + t) ∈ N (2.23)

      For t > 0, subtracting f(x~) and dividing by t preserves the inequality, so taking the limit gives:

lim t→0⁺ (f(x~ + t) − f(x~)) / t ≥ 0 (2.24)

      The same calculation can be made for t < 0; in that case, the direction of the inequality changes as follows:

lim t→0⁻ (f(x~ + t) − f(x~)) / t ≤ 0 (2.25)

      Notice that this limit is the definition of the derivative; hence, f′(x~) ≥ 0 and f′(x~) ≤ 0. These two conditions hold simultaneously only when f′(x~) = 0. Consequently, the optimum of a differentiable function is the point where the derivative vanishes. This condition is local, valid in the neighborhood N.
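The two one-sided conditions (2.24) and (2.25) can be observed numerically. A minimal sketch (the quadratic f is an illustrative assumption with its minimum at x = 2):

```python
def f(x):
    """Illustrative function with its minimum at x = 2."""
    return (x - 2.0)**2 + 1.0

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x_opt = 2.0
print(derivative(f, x_opt))  # ~ 0: the derivative vanishes at the minimum

# One-sided difference quotients, mirroring (2.24) and (2.25):
tp, tn = 1e-3, -1e-3
print((f(x_opt + tp) - f(x_opt)) / tp)  # >= 0 for t > 0
print((f(x_opt + tn) - f(x_opt)) / tn)  # <= 0 for t < 0
```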

      This idea can be easily extended to multivariable functions as follows: consider a continuous, differentiable function f:Rn→R and a neighborhood given by N={x∈Rn:x=x~+tΔx,|t|<t0}. Now, define a function g(t)=f(x~+tΔx). If x~ is a local minimum of f, then

f(x~) ≤ f(x~ + tΔx), ∀ (x~ + tΔx) ∈ N (2.26)

      In terms of the new function g, (2.26) states that g has a local minimum at t = 0 for every direction Δx.
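Applying the one-dimensional argument to g along each direction Δx gives g′(0) = 0 for every Δx. This can be checked numerically; a minimal sketch (the quadratic f and the point x~ = (1, −2) are illustrative assumptions):

```python
import numpy as np

def f(x):
    """Illustrative convex quadratic with minimum at x~ = (1, -2)."""
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

x_min = np.array([1.0, -2.0])

def g(t, delta):
    """Restriction of f to the line through x_min in direction delta."""
    return f(x_min + t * delta)

# g'(0) by central differences, for several directions delta
h = 1e-6
for delta in [np.array([1.0, 0.0]),
              np.array([0.0, 1.0]),
              np.array([1.0, 1.0])]:
    dg0 = (g(h, delta) - g(-h, delta)) / (2 * h)
    print(dg0)  # ~ 0 in every direction: the gradient vanishes at x~
```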