Mathematical Programming for Power Systems Operation. Alejandro Garcés Ruiz
Figure 2.2 Three ways to measure a vector: a) 2-norm or Euclidean norm, b) 1-norm or Manhattan distance, c) infinity-norm or uniform norm.
We can use a norm to define the set of all points at a distance less than or equal to a given value r, as given in (Equation 2.21).
$$\mathcal{B} = \left\{ x \in \mathbb{R}^{n} : \|x\| \le r \right\} \qquad (2.21)$$
This set is known as a ball of radius r. Figure 2.3 shows the shape of unit balls (i.e., balls of radius 1), generated by each of the three previously mentioned norms.
Figure 2.3 Comparison among unit balls defined by norm-2, norm-1, and norm-∞
Notice that a ball is not necessarily round, at least with this definition. All balls share a common geometric property known as convexity that is studied in Chapter 3.
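These quantities are straightforward to evaluate numerically. The following Python sketch (not part of the book's listings; the sample vector, the radius, and the use of numpy are illustrative assumptions) computes the three norms of a vector and checks membership in the corresponding ball of radius r, as in (Equation 2.21).

```python
import numpy as np

x = np.array([0.6, -0.3])

# The three norms illustrated in Figure 2.2
norm_2 = np.linalg.norm(x, 2)         # Euclidean norm
norm_1 = np.linalg.norm(x, 1)         # Manhattan distance
norm_inf = np.linalg.norm(x, np.inf)  # uniform norm

# Membership in the ball of radius r for each norm, as in (2.21)
r = 1.0
for name, value in [("2", norm_2), ("1", norm_1), ("inf", norm_inf)]:
    print(f"||x||_{name} = {value:.3f}, inside ball of radius {r}: {value <= r}")
```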
2.3 Global and local optimum
Let us consider a mathematical optimization problem represented as (Equation 2.22).
$$\min_{x} \; f(x, \beta) \quad \text{subject to} \quad x \in \Omega \qquad (2.22)$$
where f : ℝⁿ → ℝ is the objective function, x are the decision variables, Ω is the feasible set, and β are constant parameters of the problem. A point x̃ is a local optimum of the problem if there exists an open set N(x̃), named a neighborhood, that contains x̃ such that f(x) ≥ f(x̃), ∀x ∈ N(x̃). If N = Ω, then the optimum is global. Figure 2.4 shows the concept for two functions in ℝ with their respective neighborhoods N.
Figure 2.4 Example of local and global optima: a) function with two local minima and their respective neighborhoods, b) function with a unique global minimum (the neighborhood is the entire domain of the function).
There are two local minima in the first case, whereas there is a unique global minimum in the second case. This concept is more than a fancy theoretical notion; what good is a local optimum if there are even better solutions in another region of the feasible set? In practice, we require global or close-to-global optimum solutions.
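The practical consequence is easy to reproduce numerically. In the following sketch (an assumed example, not taken from the book; the test function, the starting points, and the use of scipy are illustrative choices), a gradient-based solver started in different regions of a non-convex function returns different local minima, only one of which is the global minimum.

```python
from scipy.optimize import minimize

# Non-convex test function with two local minima (arbitrary example)
def f(x):
    return (x[0]**2 - 1.0)**2 + 0.3 * x[0]

# The local minimum that is found depends on the starting point
for x0 in (-2.0, 2.0):
    res = minimize(f, [x0])
    print(f"start = {x0:+.1f} -> x* = {res.x[0]:+.4f}, f(x*) = {res.fun:.4f}")
```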
On the other hand, several points may be optimal, as shown in Figure 2.5. In that case, all the points in the interval x1 ≤ x ≤ x2 are global optima. Thus, the question is not only whether the optimal point is global but also whether it is unique. Both globality and uniqueness are geometric questions with practical implications, especially in competitive markets. Convex optimization allows these questions to be answered naturally, as explained in Chapter 3.
Figure 2.5 Example of a function with several optimal points.
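Non-uniqueness can be illustrated in the same way. In the sketch below (again an assumed example; the flat-bottomed function is chosen only for illustration), every point of the interval −1 ≤ x ≤ 1 attains the same minimum value, so the returned minimizer depends on the starting point even though all the answers are globally optimal.

```python
import numpy as np
from scipy.optimize import minimize

# Flat-bottomed function: every point of [-1, 1] is a global minimizer
def f(x):
    return np.maximum(np.abs(x[0]) - 1.0, 0.0)**2

for x0 in (-3.0, 0.5, 4.0):
    res = minimize(f, [x0])
    print(f"start = {x0:+.1f} -> x* = {res.x[0]:+.4f}, f(x*) = {res.fun:.2e}")
```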
2.4 Maximum and minimum values of continuous functions
It is well known from basic calculus that the optimum of a continuous, differentiable function is attained where its derivative is zero. This fact can be formalized in view of the concepts presented in the previous sections. Consider a function f : ℝ → ℝ with a local minimum at x̃. A neighborhood is defined as N = {x ∈ ℝ : x = x̃ ± t, |t| < t₀} with the following condition:
$$f(\tilde{x} + t) \ge f(\tilde{x}) \qquad (2.23)$$
where t can be positive or negative. If t > 0, then (Equation 2.23) can be divided by t without changing the direction of the inequality; taking the limit as t → 0⁺ gives:
$$\lim_{t \to 0^{+}} \frac{f(\tilde{x} + t) - f(\tilde{x})}{t} \ge 0 \qquad (2.24)$$
The same calculation can be made for t < 0; in that case, the direction of the inequality is reversed:
$$\lim_{t \to 0^{-}} \frac{f(\tilde{x} + t) - f(\tilde{x})}{t} \le 0 \qquad (2.25)$$
Notice that this limit is the definition of the derivative; hence, f′(x̃) ≥ 0 and f′(x̃) ≤ 0. These two conditions hold simultaneously only when f′(x̃) = 0. Consequently, the optimum of a differentiable function is a point where the derivative vanishes. This condition is local, in the neighborhood N.
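The one-dimensional condition can be verified numerically. The following sketch (an illustrative example, not from the book; the quadratic test function is arbitrary) evaluates the one-sided difference quotients of (Equations 2.24 and 2.25) at a numerically computed minimizer: the right quotient is non-negative, the left quotient is non-positive, and both tend to zero.

```python
from scipy.optimize import minimize_scalar

# Arbitrary smooth test function with a minimum at x = 2
f = lambda x: (x - 2.0)**2 + 1.0

x_opt = minimize_scalar(f).x   # numerical minimizer, close to 2.0

# One-sided difference quotients, as in (2.24) and (2.25)
t = 1e-6
right = (f(x_opt + t) - f(x_opt)) / t     # non-negative, tends to 0
left = (f(x_opt - t) - f(x_opt)) / (-t)   # non-positive, tends to 0

print(f"x~ = {x_opt:.6f}, right quotient = {right:.2e}, left quotient = {left:.2e}")
```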
This idea can be easily extended to multivariable functions as follows: consider a continuous, differentiable function f : ℝⁿ → ℝ and a neighborhood given by N = {x ∈ ℝⁿ : x = x̃ + Δx}. Now, define a function g(t) = f(x̃ + tΔx). If x̃ is a local minimum of f, then
$$\left.\frac{dg}{dt}\right|_{t=0} = \nabla f(\tilde{x})^{\top} \Delta x = 0 \qquad (2.26)$$
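As a numerical counterpart of this argument (a sketch under assumed data; the quadratic function and the random directions are not from the book), the derivative g′(0) = ∇f(x̃)ᵀΔx can be evaluated for several directions Δx. At the minimizer it vanishes for every direction, which is only possible if the gradient itself is zero.

```python
import numpy as np

# Quadratic example: f(x) = 0.5 x'Qx - b'x, whose minimizer solves Qx = b
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad_f = lambda x: Q @ x - b

x_opt = np.linalg.solve(Q, b)   # point where the gradient vanishes

# g(t) = f(x_opt + t*dx)  =>  g'(0) = grad_f(x_opt) @ dx for any direction dx
rng = np.random.default_rng(0)
for _ in range(3):
    dx = rng.standard_normal(2)
    print(f"direction {np.round(dx, 2)}: g'(0) = {grad_f(x_opt) @ dx:.2e}")
```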