This is a linear system of equations whose solution, x = 1/2, y = 1/2, and λ = 1, constitutes the optimum of the problem.
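The problem statement itself appears on the preceding page and is not reproduced here. As an illustration only, a representative problem with exactly this solution is min x² + y² subject to x + y = 1, with Lagrangian L = x² + y² − λ(x + y − 1); its optimality conditions form a linear system that can be solved directly, for instance with numpy (a minimal sketch, not necessarily the book's exact problem):

import numpy as np

# Hypothetical reconstruction for illustration:
# minimize x**2 + y**2 subject to x + y = 1, with
# L = x**2 + y**2 - lam*(x + y - 1). The optimality conditions are linear:
#   2x - lam = 0
#   2y - lam = 0
#   x + y    = 1
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])

x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)   # 0.5 0.5 1.0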
2.7 Newton’s method
The optimality conditions for both unconstrained and constrained problems constitute a set of algebraic equations, namely S(x) = 0 in the first case and S(x, λ) = 0 in the second. This set can be solved using Newton’s method4.
Consider a set of algebraic equations S(x) = 0 where S : ℝⁿ → ℝⁿ is continuous and differentiable. Thus, the following approximation is made:

\[
S(x + \Delta x) \approx S(x) + J(x)\,\Delta x
\]

where

\[
J(x) = \frac{\partial S}{\partial x}
\]

is the Jacobian matrix of S. Setting this approximation to zero and solving for the step yields the iteration

\[
x_{k+1} = x_k - J(x_k)^{-1}\, S(x_k)
\]
This iteration is the basic Newton’s method. Compared to the gradient method, it converges faster since it includes information from the second derivative5. In addition, Newton’s method does not require defining a step size t as the gradient method does. However, each iteration of Newton’s method is computationally expensive, since it requires forming the Jacobian and solving a linear system.
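As a minimal illustration of the iteration above (a sketch for demonstration only; the equation and starting point are not taken from the book), Newton’s method applied to the scalar equation S(x) = x² − 2 = 0 converges to √2 in a handful of steps:

import numpy as np

# Scalar Newton iteration for S(x) = x**2 - 2 = 0 (illustrative only).
# In one dimension the Jacobian reduces to the derivative S'(x) = 2x.
S = lambda x: x**2 - 2
dS = lambda x: 2*x

x = 1.0                      # initial condition
while abs(S(x)) >= 1E-8:
    x = x - S(x)/dS(x)       # x_{k+1} = x_k - J(x_k)^{-1} S(x_k)
print(x, np.sqrt(2))         # both print 1.4142135...

Note that no step size appears: the Jacobian determines the length of each step, which is what makes the method parameter-free but more expensive per iteration.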
Example 2.9
Consider the following optimization problem:

\[
\begin{aligned}
\min\ & 10x^2 + 15y^2 + e^{x+y} \\
\text{subject to}\ & x + y = 5
\end{aligned}
\]

Its corresponding Lagrangian is presented below:

\[
\mathcal{L}(x, y, \lambda) = 10x^2 + 15y^2 + e^{x+y} + \lambda(x + y - 5)
\]

and the optimality conditions form the following set of algebraic equations:

\[
\begin{aligned}
\partial \mathcal{L}/\partial x &= 20x + e^{x+y} + \lambda = 0 \\
\partial \mathcal{L}/\partial y &= 30y + e^{x+y} + \lambda = 0 \\
\partial \mathcal{L}/\partial \lambda &= x + y - 5 = 0
\end{aligned}
\]

The corresponding Jacobian is the following matrix:

\[
J = \begin{bmatrix}
20 + e^{x+y} & e^{x+y} & 1 \\
e^{x+y} & 30 + e^{x+y} & 1 \\
1 & 1 & 0
\end{bmatrix}
\]
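The gradient and Jacobian above can also be obtained symbolically as a side check (a sketch that assumes the sympy package is available; it is not part of the book’s listing):

import sympy as sp

# Symbolic check of the optimality conditions and Jacobian of Example 2.9
x, y, lam = sp.symbols('x y lambda')
L = 10*x**2 + 15*y**2 + sp.exp(x + y) + lam*(x + y - 5)
grad = sp.Matrix([sp.diff(L, v) for v in (x, y, lam)])  # optimality conditions
J = grad.jacobian(sp.Matrix([x, y, lam]))               # Jacobian of the gradient
print(grad)
print(J)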
It is possible to formulate Newton’s method using the information described above. The algorithm implemented in Python is presented below:
import numpy as np

def Fobj(x, y):
    "Objective function"
    return 10*x**2 + 15*y**2 + np.exp(x + y)

def Grad(x, y, z):
    "Gradient of Lagrangian"
    dx = 20*x + np.exp(x + y) + z
    dy = 30*y + np.exp(x + y) + z
    dz = x + y - 5
    return np.array([dx, dy, dz])

def Jac(x, y, z):
    "Jacobian of Grad"
    p = np.exp(x + y)
    return np.array([[20 + p, p, 1], [p, 30 + p, 1], [1, 1, 0]])

(x, y, z) = (10, 10, 1)  # initial condition
G = Grad(x, y, z)
while np.linalg.norm(G) >= 1E-8:
    J = Jac(x, y, z)
    step = -np.linalg.solve(J, G)   # Newton step: solve J*step = -G
    (x, y, z) = (x, y, z) + step
    G = Grad(x, y, z)

print('Gradient: ', np.linalg.norm(G))
print('Optimum point: ', np.round([x, y, z], 2))
print('Objective function: ', Fobj(x, y))
In this case, we used a tolerance of 10−8 on the norm of the gradient. The algorithm achieves convergence in a few iterations, as the reader can check by running the code.
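The result can be verified by hand: on the constraint x + y = 5 the exponential term is the constant e⁵, so minimizing 10x² + 15y² with 20x = 30y and x + y = 5 gives x = 3, y = 2. As a further sanity check (a sketch that assumes the scipy package, which the book does not use here), a general-purpose constrained solver should agree:

import numpy as np
from scipy.optimize import minimize

# Cross-check of Example 2.9 with scipy's constrained solver (illustrative;
# scipy is an assumption, not part of the book's listing)
fobj = lambda v: 10*v[0]**2 + 15*v[1]**2 + np.exp(v[0] + v[1])
constraint = {'type': 'eq', 'fun': lambda v: v[0] + v[1] - 5}

res = minimize(fobj, x0=[0.0, 0.0], constraints=[constraint])
print(res.x)    # approximately [3. 2.]
print(res.fun)  # approximately 298.41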
2.8 Further readings
This chapter presented basic optimization methods for constrained and unconstrained problems. Conditions for the convergence of these algorithms were not presented here; however, they are incorporated into the modules and solvers called from a modeling/programming language such as Python. Our approach is to use these solvers and concentrate on studying the characteristics of the models. Readers interested in the details of the algorithms are invited to review [12] and [11] for a formal analysis of convergence; other variants of the algorithms can be found in [13]. Moreover, a complete review of Lagrange multipliers can be found in [14].
This chapter is also an excuse for presenting Python’s features as a programming and modeling language. The reader can review Appendix C for more details about each of the commands used in this chapter.
2.9 Exercises
1 Find