GLOBALLY CONVERGENT OPTIMAL POWER FLOW USING COMPLEMENTARITY FUNCTIONS AND TRUST REGION METHODS

Geraldo L. Torres
Universidade Federal de Pernambuco
Recife, Brazil

Abstract - As power systems become heavily loaded there is an increasing need for globally convergent optimal power flow (OPF) algorithms. An algorithm is said to be globally convergent if it is able to obtain a solution, if one exists, for any choice of initial point. Such an algorithm is developed here by combining three approaches: (i) reformulation of the OPF problem as a nonlinear equations system by means of complementarity functions, (ii) reformulation of the equations system as an unconstrained minimization problem, and (iii) solution of the unconstrained minimization problem by a globally convergent trust region algorithm. The proposed algorithm is implemented in MATLAB, and its performance is tested on the IEEE 30-, 57-, 118- and 300-bus systems.

Keywords - Optimal Power Flow, Trust Region Method, Nonlinear Complementarity Method, Global Convergence.

1 INTRODUCTION

THE restructuring of the power industry has led to new and complex optimal power flow (OPF) problems [1], requiring robust and reliable solution techniques. Concerning robustness, global convergence is a desirable property for any nonlinear OPF solution algorithm, so that it is able to obtain a solution, if one exists, for any choice of initial point. There are two classical approaches for globalizing a locally convergent optimization algorithm: the use of line searches and of trust regions [2]. This paper concentrates on a trust region method for unconstrained optimization and for nonlinear systems of equations.

Trust region methods are a relatively new class of nonlinear optimization algorithms that minimize a quadratic approximation of a nonlinear objective function within a closed region called the trust region. The region is so named because within it the quadratic model can be trusted to be a good approximation to the original objective function. The reduction achieved in the quadratic approximation should correspond to a reduction in the nonlinear objective function; if that is not the case, then the size of the trust region is reduced and the approximation model is solved again. There is a broad family of trust region methods, which differ from each other mainly in the way they model the objective function and handle the constraints, the classical approach being the use of a quadratic approximation for unconstrained minimization [2]. Some applications of trust region methods to power systems optimization can be found in the literature [1, 3, 4, 5].

A new application of trust region methods to OPF solution is proposed here. Unlike in [5], the trust region algorithm used in this paper is for unconstrained optimization. To allow for that, the OPF optimality conditions are first reformulated as a nonlinear equations system using the complementarity function approach described in [6]. Because Newton's method may fail to converge when solving this nonlinear equations system, the equations are further reformulated as an unconstrained minimization problem whose objective is the sum of the squared residuals of the equations. Finally, to globalize the solution of the unconstrained minimization problem, a trust region method is employed.
The trust region OPF algorithm proposed in this paper differs from its counterpart in [5] in several aspects, the three main ones being: the use here of complementarity functions to reformulate the OPF problem as a nonlinear equations system, which in turn is reformulated as an unconstrained minimization problem; the trust region algorithm used here is for unconstrained optimization, while the one in [5] is for constrained optimization; and the trust region subproblems are solved here by the dogleg method, while in [5] they are solved by interior-point methods for quadratic programming. The main computational implementation issues of the proposed trust region OPF algorithm are discussed in this paper, and the performance of the algorithm is studied using the IEEE 30-, 57-, 118- and 300-bus test systems. Comparisons are made with the primal-dual interior-point algorithm for direct nonlinear OPF solution [7], bearing in mind that the main focus of the globally convergent OPF algorithm is on convergence robustness rather than on processing time.

The paper is organized as follows: In Section 2, the first-order necessary optimality conditions for a general form nonlinear OPF problem are presented and then, by using complementarity functions, are reformulated as a nonlinear equations system. In Section 3, the reformulation of the equations system as an unconstrained minimization problem and the globalization of the solution by a trust region method are both described. Some implementation issues are discussed in Section 4, and numerical results for various test systems are presented in Section 5. Section 6 summarizes the main contributions of the paper.

2 NONLINEAR EQUATIONS REFORMULATION OF OPTIMAL POWER FLOW PROBLEMS

In this paper one deals with the solution of nonlinear OPF problems of the general form:

    $\min \; f(x)$   (1a)
    s.t.: $g(x) = 0$   (1b)
          $\underline{x} \le x \le \overline{x}$   (1c)

where $x \in R^n$ is a vector of decision variables, including the control and state variables; $g: R^n \to R^m$ is a nonlinear vector function including the conventional power flow equations and other equality constraints, such as power balance across boundaries in a pool operation; and $\underline{x}$ and $\overline{x}$ are lower and upper bounds on the variables x, corresponding to physical and operating limits on the system. If $x^*$ is a local minimizer of (1) then there exist vectors of Lagrange multipliers, say $(\lambda^*, \pi^*, \upsilon^*)$, that satisfy the Karush-Kuhn-Tucker (KKT) optimality conditions [6]:

    $S\pi = 0, \quad s \ge 0, \quad \pi \ge 0,$   (2a)
    $Z\upsilon = 0, \quad z \ge 0, \quad \upsilon \ge 0,$   (2b)
    $-x + s + \underline{x} = 0,$   (2c)
    $x + z - \overline{x} = 0,$   (2d)
    $g(x) = 0,$   (2e)
    $\nabla f(x) + \nabla g(x)\lambda - \pi + \upsilon = 0,$   (2f)

where s and z are slack vectors that transform the inequalities in (1) into the equalities (2c) and (2d); S and Z are diagonal matrices with $S_{ii} = s_i$ and $Z_{ii} = z_i$; $\nabla f(x)$ is the gradient vector of f(x), and $\nabla g(x)$ is the gradient matrix of g(x).

The major difficulty in solving (2) is mainly related to the complementarity conditions (2a) and (2b). A root-finding Newton's method applied to (2) cannot automatically assure $(s, z, \pi, \upsilon) \ge 0$, and the numerical solution of $S\pi = 0$ and $Z\upsilon = 0$ is intricate. For instance, the Newton equation for the i-th complementarity equation $s_i \pi_i = 0$ is

    $s_i^k \Delta\pi_i + \pi_i^k \Delta s_i = -s_i^k \pi_i^k.$   (3)

If a variable becomes zero, say $\pi_i^k = 0$, then (3) becomes $s_i^k \Delta\pi_i = 0$, leading to a zero update, $\Delta\pi_i = 0$. Thus, $\pi_i^k$ will remain zero once it becomes zero during the iterations, which is fatal because the algorithm will never be able to recover from such a situation.

In [6] a nonlinear complementarity (NC) approach is proposed to handle (2a) and (2b). Unlike interior-point methods, the NC approach does not require that the strict positivity conditions $(s, z, \pi, \upsilon) > 0$ be verified at each iteration. Derived from techniques for solving complementarity problems [8], the most attractive feature of the NC approach is that it reformulates the KKT conditions (2) as a nonlinear equations system, allowing a Newton-type method to be used. The sign conditions $(s, z, \pi, \upsilon) \ge 0$ are automatically satisfied at the limit point, without imposing additional conditions during the iterations.

The reformulation of the OPF as nonlinear equations is obtained by handling each complementarity condition in (2) by a function $\psi: R^2 \to R$ that holds the property

    $\psi(a, b) = 0 \iff ab = 0, \; a \ge 0, \; b \ge 0.$   (4)

Thus, the i-th complementarity conditions in (2a) and (2b), respectively, are equivalent to the equations

    $\psi(s_i, \pi_i) = 0,$   (5)
    $\psi(z_i, \upsilon_i) = 0.$   (6)

Any function that holds the property (4), such as

    $\psi(a, b) = a + b - \sqrt{a^2 + b^2},$   (7)
    $\psi(a, b) = \tfrac{1}{2} \min\{a, b\}^2,$   (8)
    $\psi(a, b) = \tfrac{1}{2} \left( (ab)^2 + \min\{0, a\}^2 + \min\{0, b\}^2 \right),$   (9)

is said to be an NC-function. The function (7) was proposed by Fischer [8] in 1992, and has attracted a lot of attention. By exploiting the property (4), the KKT conditions (2) can be reformulated as the nonlinear equations system

    $h(y) = \begin{bmatrix} \psi(s, \pi) \\ \psi(z, \upsilon) \\ -x + s + \underline{x} \\ x + z - \overline{x} \\ g(x) \\ \nabla f(x) + \nabla g(x)\lambda - \pi + \upsilon \end{bmatrix} = 0,$   (10)

where $y = (s, z, \pi, \upsilon, \lambda, x)$. The advantage of the equation reformulation (10) is that, unlike the KKT conditions (2), it can be solved iteratively by well-known methods [9]. The conditions $(s, z, \pi, \upsilon) \ge 0$ are automatically assured by the function $\psi$ at the limit point of the iterations.
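As a concrete illustration, the following is a minimal MATLAB sketch of the Fischer-Burmeister function (7), applied componentwise so that it can form the blocks $\psi(s, \pi)$ and $\psi(z, \upsilon)$ of (10); the function name is chosen here for illustration only:

    function psi = fischer_burmeister(a, b)
    % Componentwise NC-function (7): psi = a + b - sqrt(a^2 + b^2).
    % psi(i) = 0 if and only if a(i) >= 0, b(i) >= 0 and a(i)*b(i) = 0.
        psi = a + b - sqrt(a.^2 + b.^2);
    end

For example, fischer_burmeister(0, 3) and fischer_burmeister(2, 0) both return 0, while fischer_burmeister(-1, 2) returns a negative value, consistent with property (4).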
Newton's method is the most popular method for solving systems of nonlinear equations. Because it may diverge from a poor initial estimate, step size control can be used to improve the convergence, i.e., given the point $y^k$, solve the Newton system for the correction term $\Delta y$,

    $\nabla h(y^k)^T \Delta y = -h(y^k),$   (11)

and then compute a new solution estimate from

    $y^{k+1} = y^k + \alpha_k \Delta y.$   (12)

The matrix $\nabla h(y^k)$ is the gradient of $h(y)$ evaluated at $y^k$, and the scalar $\alpha_k \in (0, 1]$ is the step size parameter used to enhance the convergence.

3 GLOBALIZATION BY A TRUST REGION METHOD

Newton's method can be made more robust by using line searches or trust region techniques. A trust region method is considered in this paper due to its significant success in globalizing algorithms for unconstrained optimization and systems of nonlinear equations [2]. Note that solving the equations system (10) is equivalent to solving the unconstrained minimization problem

    $\min_y \; \phi(y) = \tfrac{1}{2} h(y)^T h(y).$   (13)
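As a sketch of how (10)-(13) fit together, assume a user-supplied routine h_fun that returns the residual $h(y)$ and the gradient matrix $\nabla h(y)$ (both the routine and its name are hypothetical); the merit function and one damped Newton step (11)-(12) could then be written in MATLAB as:

    [h, gradh] = h_fun(y);       % residual h(y) and gradient matrix (transposed Jacobian)
    phi = 0.5 * (h' * h);        % merit function (13)
    dy  = gradh' \ (-h);         % Newton system (11): gradh' * dy = -h
    y   = y + alpha * dy;        % damped update (12), with step size alpha in (0, 1]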

Any root $y^*$ of $h(y)$ has $\phi(y^*) = 0$, and since $\phi(y) \ge 0$ for all y, each root of $h(y)$ is a minimizer of $\phi(y)$. The converse is not always true; local minimizers of $\phi(y)$ with $\phi(y^*) > 0$ are not roots of $h(y)$. However, local minima like these, which are not roots of $h(y)$, satisfy an interesting property [2]. Since

    $\nabla\phi(y^*) = \nabla h(y^*) h(y^*) = 0,$   (14)

one can have $h(y^*) \ne 0$ only if $\nabla h(y^*)$ is singular. Note that $\nabla h(y^*)$ is the coefficient matrix in the Newton system (11) to solve the KKT conditions, and under a constraint qualification (regularity condition) it is nonsingular.

The trust region method to solve the unconstrained minimization problem (13) uses information gathered about $\phi(y)$ to construct a model function $m_k$ whose behavior near the approximation point $y^k$ should be similar to that of $\phi(y)$. The model $m_k$ is usually a quadratic function derived from the truncated Taylor series expansion of $\phi(y)$ around $y^k$,

    $m_k(y^k + d) = \phi(y^k) + \nabla\phi(y^k)^T d + \tfrac{1}{2} d^T \nabla^2\phi(y^k) d,$   (15)

where $\nabla\phi(y^k)$ is the gradient and $\nabla^2\phi(y^k)$ is the Hessian of $\phi(y)$, both evaluated at $y^k$. If the point $y = y^k + d$ is far from $y^k$, the model function $m_k(y^k + d)$ may not be a good approximation to $\phi(y)$; thus, the solution $d = \arg\min m_k(y^k + d)$ does not always make sense as a minimization step for $\phi(y)$. For instance, the Hessian $\nabla^2\phi(y^k)$ may be indefinite, in which case there are directions along which $m_k(y^k + d)$ is unbounded from below and d is infinite. To globalize the algorithm, the search for a minimizer of $m_k(y^k + d)$ is restricted to some region around $y^k$, called the trust region. This trust region is usually a ball defined by the Euclidean norm, $\|d\| \le \Delta_k$, where the scalar $\Delta_k$ is called the trust region radius. The trust region subproblem around the point $y^k$ for the unconstrained minimization problem (13) is

    $\min_d \; \phi(y^k) + \nabla\phi(y^k)^T d + \tfrac{1}{2} d^T \nabla^2\phi(y^k) d$
    s.t.: $\|d\| \le \Delta_k.$   (16)

If the candidate solution $y^{k+1} = y^k + d_k$ does not produce a sufficient decrease in $\phi(y)$, then the trust region as given by the radius $\Delta_k$ is too large. To proceed, the radius $\Delta_k$ is reduced and the trust region subproblem is solved again.

3.1 Solving the Trust Region Subproblems

As shown in [10], a vector $d^*$ is a global solution of problem (16) if and only if it is feasible (i.e., $\|d^*\| \le \Delta_k$) and there is a Lagrange multiplier $\sigma^* \ge 0$ such that

    $(\nabla^2\phi(y^k) + \sigma^* I) d^* = -\nabla\phi(y^k),$   (17a)
    $\sigma^* (\Delta_k - \|d^*\|) = 0,$   (17b)
    $(\nabla^2\phi(y^k) + \sigma^* I)$ is positive semidefinite.   (17c)

The condition (17b) is a complementarity condition that states that at least one of the nonnegative quantities $\sigma^* \ge 0$ and $(\Delta_k - \|d^*\|) \ge 0$ must be zero. Thus, when $\|d^*\| < \Delta_k$ one must have $\sigma^* = 0$ and, from (17a), $\nabla^2\phi(y^k) d^* = -\nabla\phi(y^k)$. When $\sigma^* > 0$ the solution $d^*$ is collinear with the negative gradient of $m_k$ and normal to its contours.

Although in principle one seeks the optimal solution of (16), it is enough for purposes of global convergence to find an approximate solution $d_k$ that lies within the trust region and gives a sufficient reduction in the model $m_k$ [2]. This approximate solution can be obtained, e.g., using Powell's dogleg method [11]. Another approach is two-dimensional subspace minimization. While the former can be used only when $\nabla^2\phi(y^k)$ is positive definite, the latter is more general and can handle directions of negative curvature. The dogleg method is used here due to its lower computational cost. Before presenting it, one needs to discuss the Cauchy step and the Newton step.

The Cauchy Step

The Cauchy point for problem (16) is the constrained minimizer in the direction of steepest descent at d = 0.
Thus, it is given by

    $d_c = -\alpha_k \nabla\phi(y^k),$   (18)

where $\alpha_k$ is the optimal step length found by solving

    $\min_{\alpha > 0} \; \nabla\phi_k^T (-\alpha \nabla\phi_k) + \tfrac{1}{2} (\alpha \nabla\phi_k)^T \nabla^2\phi_k (\alpha \nabla\phi_k)$   (19a)
    s.t.: $\|\alpha \nabla\phi_k\| \le \Delta_k,$   (19b)

whose solution is given by

    $\alpha_k = \begin{cases} \dfrac{\nabla\phi_k^T \nabla\phi_k}{\delta_k} & \text{if } \delta_k > 0 \text{ and } \dfrac{(\nabla\phi_k^T \nabla\phi_k)^{3/2}}{\delta_k} \le \Delta_k \\[2ex] \dfrac{\Delta_k}{\|\nabla\phi_k\|} & \text{otherwise} \end{cases}$   (20)

where, for compactness, $\delta_k = \nabla\phi_k^T \nabla^2\phi_k \nabla\phi_k$.

The Newton Step

The Newton step is the unconstrained minimizer of problem (16), which is given by

    $d_n = -\nabla^2\phi(y^k)^{-1} \nabla\phi(y^k).$   (21)

Because the Newton step is an unconstrained minimizer, it may not satisfy the trust region constraint.

The Dogleg Method

If the Newton step $d_n$ lies inside the trust region (i.e., if $\|d_n\| \le \Delta_k$), then it is an approximate solution to (16). If it lies outside, then the dogleg method finds the constrained minimizer along the dogleg path subject to $\|d\| \le \Delta_k$. The dogleg path consists of the two line segments from $d = 0$ to $d = d_c$ and from $d = d_c$ to $d = d_n$, denoted by

    $d(\tau) = \begin{cases} \tau d_c & \text{for } \tau \in [0, 1] \\ d_c + (\tau - 1)(d_n - d_c) & \text{for } \tau \in (1, 2]. \end{cases}$   (22)
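A compact MATLAB sketch of this dogleg computation, assuming the gradient g = $\nabla\phi(y^k)$ and a positive definite Hessian H = $\nabla^2\phi(y^k)$ are available (the function name and the explicit handling of the boundary intersection are illustrative only):

    function d = dogleg_step(g, H, Delta)
    % Approximate solution of the trust region subproblem (16) by the dogleg method.
        dn = -(H \ g);                       % Newton step (21)
        if norm(dn) <= Delta
            d = dn; return;                  % Newton step lies inside the trust region
        end
        alpha = (g' * g) / (g' * H * g);     % unconstrained steepest-descent step length
        dc = -alpha * g;                     % Cauchy step (18)
        if norm(dc) >= Delta
            d = -(Delta / norm(g)) * g;      % Cauchy step truncated to the boundary
            return;
        end
        dnc = dn - dc;                       % second dogleg segment
        a = dnc' * dnc;  b = dc' * dnc;  c = dc' * dc - Delta^2;
        tau = 1 + (sqrt(b^2 - a*c) - b) / a; % boundary intersection, cf. (23)-(24) below
        d = dc + (tau - 1) * dnc;
    end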

Two properties make the dogleg method well-defined: the Newton point is always farther away from the origin than the Cauchy point (i.e., $\|d_n\| \ge \|d_c\|$), and the quadratic objective decreases monotonically along the dogleg path. Thus, the constrained minimizer is the intersection of the dogleg path $d(\tau)$ with the trust region boundary (shown in Fig. 1), characterized by

    $\|d_c + (\tau - 1)(d_n - d_c)\| = \Delta_k.$   (23)

Equation (23) is quadratic in $\tau$ and thus has two solutions; since $\|d_c\| < \Delta_k$ the intersection point is given by

    $\tau = \dfrac{\sqrt{(d_c^T d_{nc})^2 - \|d_{nc}\|^2 (\|d_c\|^2 - \Delta_k^2)} - d_c^T d_{nc}}{\|d_{nc}\|^2} + 1,$   (24)

where $d_{nc} = d_n - d_c$. The main steps of the dogleg method are as follows:

1. Given $\nabla\phi(y^k)$ and $\nabla^2\phi(y^k)$, obtain $d_n$ from (21). If $\|d_n\| \le \Delta_k$, then return with $d_k = d_n$.
2. Compute $d_c$ from (18). If $\|d_c\| \ge \Delta_k$, then return with $d_k = d_c$.
3. Given $d_n$ and $d_c$ from steps 1 and 2, compute $d(\tau)$ from (22) and (24), and return with $d_k = d(\tau)$.

Clearly, the most expensive task in the dogleg method is setting up and solving the linear system (21) to obtain $d_n$. All other steps are inexpensive.

Figure 1: The Cauchy point and the Newton point in the dogleg path.

Further Developments

Alternatively, one can solve problem (16) using LSTRS [12], a MATLAB software package for large-scale trust region subproblems. LSTRS is based on a reformulation of the trust region subproblem as a parameterized eigenvalue problem. An iterative procedure finds the optimal value for the parameter, which is used to compute a solution for problem (16). The eigenvalue formulation is based on the fact that there exists a value of a scalar parameter $\alpha$ such that problem (16) (without the constant term $\phi(y^k)$ in the objective) is equivalent to

    $\min \; \tfrac{1}{2} w^T B_\alpha w$
    s.t.: $w^T w \le 1 + \Delta_k^2, \quad e_1^T w = 1,$   (25)

where $B_\alpha$ is the bordered matrix

    $B_\alpha = \begin{bmatrix} \alpha & \nabla\phi(y^k)^T \\ \nabla\phi(y^k) & \nabla^2\phi(y^k) \end{bmatrix}$   (26)

and $e_1$ is the first canonical vector in $R^{n_d + 1}$. The optimal value for $\alpha$ is given by $\alpha^* = -\sigma^* - \nabla\phi(y^k)^T d^*$, with $(\sigma^*, d^*)$ the optimal pair in (17). If we knew $\alpha^*$, we could compute a solution to the trust region subproblem from the algebraically smallest eigenvalue of $B_{\alpha^*}$ and a corresponding eigenvector with a special structure. The solution would consist of the last $n_d$ components of the eigenvector, and the Lagrange multiplier would be given by the eigenvalue. LSTRS starts with an initial guess for $\alpha$ and iteratively adjusts this parameter toward the optimal value. This is accomplished by solving a sequence of eigenvalue problems for $B_\alpha$ for different values of $\alpha$, as described in [12].

3.2 Updating the Trust Region Radius

Critical to the performance of the trust region algorithm is the strategy for choosing the trust region radius $\Delta_k$ at each iteration [2]. If $\Delta_k$ is too small, the algorithm misses an opportunity to make substantial progress toward the solution. If it is too large, the minimizer of the model may be far from the minimizer of the objective function in the region, so one may have to reduce the size of the region and try again. The updating of $\Delta_k$ is usually based on the agreement between the model function $m_k$ and the objective function $\phi(y)$ at previous iterations. Given a step $d_k$, a reduction ratio $\rho_k$ is defined as

    $\rho_k = \dfrac{\phi(y^k) - \phi(y^k + d_k)}{m_k(0) - m_k(d_k)}.$   (27)

The numerator is the actual reduction in the objective function, and the denominator is the predicted reduction (the reduction in $\phi(y)$ predicted by $m_k$). The predicted reduction is always nonnegative, since $d_k$ is obtained by minimizing the model $m_k$ over a region that includes d = 0. Thus, if $\rho_k$ is negative then $\phi(y^k + d_k) > \phi(y^k)$ and the step $d_k$ must be rejected.
If $\rho_k$ is close to 1, then there is good agreement between $m_k$ and $\phi$ over this step, and it is safe to expand the trust region for the next iteration. If $\rho_k$ is positive but significantly smaller than 1, then the trust region is not altered. Based on Algorithm 4.1 in [2], the main steps of the trust region algorithm for solving (13) are as follows:

1. Set k = 0, set the maximum trust region radius $\overline{\Delta} > 0$, and choose $\Delta_0 \in (0, \overline{\Delta})$ and $\eta \in [0, \tfrac{1}{4})$.
2. Form and approximately solve the trust region subproblem (16) for the step $d_k$. Compute the reduction ratio $\rho_k$ from (27).
3. If $\rho_k < \tfrac{1}{4}$, then set $\Delta_{k+1} = \tfrac{1}{4} \|d_k\|$ and go to step 5. Otherwise, go to step 4.
4. If $\rho_k > \tfrac{3}{4}$ and $\|d_k\| = \Delta_k$, then set $\Delta_{k+1} = \min(2\Delta_k, \overline{\Delta})$. Otherwise, set $\Delta_{k+1} = \Delta_k$.
5. If $\rho_k > \eta$, then $y^{k+1} = y^k + d_k$. Otherwise, $y^{k+1} = y^k$. Set k = k + 1 and return to step 2.
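A minimal MATLAB sketch of this trust region loop, assuming an initial point y, an iteration limit max_iter, a routine phi_fun returning $\phi$, $\nabla\phi$ and $\nabla^2\phi$, and the dogleg_step routine sketched earlier (all of these names are hypothetical), and omitting the convergence test for brevity:

    Delta_bar = 5;  Delta = 1;  eta = 0.1;           % example parameter choices
    for k = 1:max_iter
        [phi, g, H] = phi_fun(y);                    % merit function, gradient and Hessian
        d = dogleg_step(g, H, Delta);                % approximate solution of (16)
        pred = -(g' * d + 0.5 * (d' * H * d));       % predicted reduction m_k(0) - m_k(d_k)
        rho = (phi - phi_fun(y + d)) / pred;         % reduction ratio (27)
        if rho < 0.25
            Delta = 0.25 * norm(d);                  % shrink the trust region (step 3)
        elseif rho > 0.75 && norm(d) >= Delta - 1e-10
            Delta = min(2 * Delta, Delta_bar);       % expand up to the maximum radius (step 4)
        end
        if rho > eta
            y = y + d;                               % accept the trial step (step 5)
        end
    end

The test norm(d) >= Delta - 1e-10 replaces the exact equality $\|d_k\| = \Delta_k$ of step 4, which is not reliable in floating-point arithmetic.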

Note that the trust region radius is increased only if $\|d_k\|$ actually reaches the boundary of the trust region (i.e., $\|d_k\| = \Delta_k$). If the step stays strictly inside the region, then one can infer that the value of $\Delta_k$ is not interfering with the progress of the algorithm, so its value is left unchanged for the next iteration.

4 SOME IMPLEMENTATION ISSUES

4.1 Non-Smoothness of the Complementarity Function

In this paper, the nonlinear system of equations (10) is defined using the Fischer-Burmeister function (7). Since

    $\dfrac{\partial\psi(a, b)}{\partial a} = 1 - \dfrac{a}{\sqrt{a^2 + b^2}},$   (28)
    $\dfrac{\partial\psi(a, b)}{\partial b} = 1 - \dfrac{b}{\sqrt{a^2 + b^2}},$   (29)

$\psi(a, b)$ is differentiable everywhere except at (0, 0). Some approaches to deal with the non-smoothness of $\psi(a, b)$ are discussed in the literature. One approach is based on Clarke's generalized Jacobian [13]:

    $\dfrac{\partial\psi(a, b)}{\partial a} = \begin{cases} 1 - \dfrac{a}{\sqrt{a^2 + b^2}} & \text{if } (a, b) \ne (0, 0) \\ 1 - \alpha & \text{otherwise} \end{cases}$   (30)

    $\dfrac{\partial\psi(a, b)}{\partial b} = \begin{cases} 1 - \dfrac{b}{\sqrt{a^2 + b^2}} & \text{if } (a, b) \ne (0, 0) \\ 1 - \beta & \text{otherwise} \end{cases}$   (31)

where $\alpha$ and $\beta$ are arbitrary nonnegative constants with $\alpha + \beta = 1$. Another approach is to use smoothing approximations, such as

    $\psi(a, b) = a + b - \sqrt{a^2 + b^2 + 2\mu},$   (32)
    $\psi(a, b) = a + b - \sqrt{(a - b)^2 + 4\mu},$   (33)

where $\mu > 0$ is a smoothing parameter, so that property (4) becomes

    $\psi(a, b) = 0 \iff ab = \mu, \; a > 0, \; b > 0.$   (34)

4.2 Computing Gradients and Hessians

To set up the trust region subproblem (16) one has to compute the gradient and the Hessian of the composite function $\phi(y) = \tfrac{1}{2} h(y)^T h(y)$. The gradient is computed from

    $\nabla\phi(y^k) = \nabla h(y^k) h(y^k),$   (35)

where

    $\nabla h(y) = \begin{bmatrix} D^s & 0 & I & 0 & 0 & 0 \\ 0 & D^z & 0 & I & 0 & 0 \\ D^\pi & 0 & 0 & 0 & 0 & -I \\ 0 & D^\upsilon & 0 & 0 & 0 & I \\ 0 & 0 & 0 & 0 & 0 & \nabla g(x)^T \\ 0 & 0 & -I & I & \nabla g(x) & H(y) \end{bmatrix},$   (36)

$D^s$, $D^\pi$, $D^z$ and $D^\upsilon$ are diagonal matrices given by

    $D^s_{ii} = 1 - \dfrac{s_i}{\sqrt{s_i^2 + \pi_i^2}},$   (37)
    $D^\pi_{ii} = 1 - \dfrac{\pi_i}{\sqrt{s_i^2 + \pi_i^2}},$   (38)
    $D^z_{ii} = 1 - \dfrac{z_i}{\sqrt{z_i^2 + \upsilon_i^2}},$   (39)
    $D^\upsilon_{ii} = 1 - \dfrac{\upsilon_i}{\sqrt{z_i^2 + \upsilon_i^2}},$   (40)

and

    $H(y^k) = \nabla^2 f(x^k) + \sum_{i=1}^{m} \lambda_i^k \nabla^2 g_i(x^k).$   (41)

The Hessian $\nabla^2\phi(y^k)$ is computed from

    $\nabla^2\phi(y^k) = \nabla h(y^k) \nabla h(y^k)^T + \sum_{j=1}^{n_y} h_j(y^k) \nabla^2 h_j(y^k).$   (42)

Computing $\nabla^2\phi(y^k)$ can be computationally expensive and requires efficient sparse matrix structures and linear algebra kernels, as it involves the evaluation and summation of a large number of equation Hessians $\nabla^2 h_j(y^k)$, plus the product of the matrix $\nabla h(y^k)$ with its transpose. The gradient $\nabla h(y^k)$ is relatively simple and inexpensive to obtain [7], and using $\nabla h(y^k)$ one can compute the first term $\nabla h(y^k) \nabla h(y^k)^T$ in $\nabla^2\phi(y^k)$ without computing any Hessian $\nabla^2 h_j(y^k)$. As remarked in [2], the first term $\nabla h(y^k) \nabla h(y^k)^T$ is more important than the second summation term in (42). The first term is dominant when the norm of each second-order term (i.e., $\|h_j(y^k) \nabla^2 h_j(y^k)\|$) is significantly smaller than the eigenvalues of $\nabla h(y^k) \nabla h(y^k)^T$. Such behavior can be seen either when the residuals $h_j(y)$ are small or when they are nearly affine (so that the $\nabla^2 h_j(y)$ are small). Note that in the application in this paper (a root-finding problem) the residuals $h_j(y)$ must be zero at the solution.

5 NUMERICAL RESULTS

The proposed trust region nonlinear complementarity OPF algorithm (TRNC) is applied here to the IEEE 30-, 57-, 118- and 300-bus test systems. Its computational performance is compared with that of a widely used primal-dual interior-point (PDIP) algorithm [7]. Both algorithms are implemented in the MATLAB language. In all test runs the PDIP algorithm uses the parameters $\mu_0 = 0.01$, $\sigma = 0.2$, and fixed values of $\gamma$ and $\epsilon_1$, while the TRNC algorithm uses the parameters $\Delta_0 = 1$, $\overline{\Delta} = 5$, and $\eta = 0.5$. The OPF problem solved is the classical active power losses minimization.
The constraints include the active and reactive power balance equations for all buses, and lower and upper bounds on voltage magnitudes, transformer tap ratios, shunt susceptances, generator reactive powers, and selected branch flows. The corresponding

mathematical formulation of this OPF is as follows:

    $\min \; \sum_{(i,j) \in B} g_{ij} (V_i^2 + V_j^2 - 2 V_i V_j \cos\theta_{ij})$
    s.t.: $P_i(\theta, V, t) - P_{Gi} + P_{Di} = 0, \quad i \in N$
          $Q_i(\theta, V, t) - Q_{Gi} + Q_{Di} = 0, \quad i \in G$
          $Q_i(\theta, V, t) - Q_{Gi} + Q_{Di} = 0, \quad i \in F$
          $Q_i(\theta, V, t) + Q_{Di} - b^{sh}_i V_i^2 = 0, \quad i \in E$
          $F_{ij}(\theta, V, t) - f_{ij} = 0, \quad (i, j) \in B$
          $V_i^{min} \le V_i \le V_i^{max}, \quad i \in N$
          $t_{ij}^{min} \le t_{ij} \le t_{ij}^{max}, \quad (i, j) \in T$
          $b_i^{min} \le b^{sh}_i \le b_i^{max}, \quad i \in E$
          $Q_i^{min} \le Q_{Gi} \le Q_i^{max}, \quad i \in G$
          $f_{ij}^{min} \le f_{ij} \le f_{ij}^{max}, \quad (i, j) \in B$   (43)

where

    $P_i(\theta, V, t) = V_i \sum_{j \in N_i} V_j (G_{ij} \cos\theta_{ij} + B_{ij} \sin\theta_{ij}),$   (44)
    $Q_i(\theta, V, t) = V_i \sum_{j \in N_i} V_j (G_{ij} \sin\theta_{ij} - B_{ij} \cos\theta_{ij}),$   (45)

$V_i$ and $\theta_i$ are the voltage magnitude and phase angle, with $\theta_{ij} = \theta_i - \theta_j$; $P_{Gi}$ and $P_{Di}$ are the active power generation and demand; $Q_{Gi}$ and $Q_{Di}$ are the reactive power generation and demand, and $b^{sh}_i$ is the shunt susceptance, all at bus i; $g_{ij}$ is the conductance of a circuit connecting bus i to bus j; $Y_{ij} = G_{ij} + jB_{ij}$ is the ij-th element of the bus admittance matrix; N is the index set of all system buses, G of generator buses, F of load buses with fixed shunt var control, E of load buses eligible for shunt var control, B of branches (lines and transformers), and T of transformers with LTCs.

Table 1: Sizes of the power systems and of the nonlinear OPF problem (1) for the IEEE 30-, 57-, 118- and 300-bus systems (columns: System, N, G, E, B, T, n, m, p).

Table 2: Initial (base case) active and reactive loading and active power losses (columns: System, MW Load, MVAr Load, MW Loss).

Table 1 shows the sizes of the index sets N, G, E, B, and T for the various test systems, as well as the number of primal variables n, the number of equality constraints m, and the number of simple bound constraints p of problem (1). Table 2 displays the total system active and reactive power loads, and the transmission active power losses for the base condition, i.e., prior to the application of any optimization procedure.

The performance of the TRNC algorithm using different complementarity functions to reformulate the OPF as a nonlinear equations system is also studied. Three commonly used complementarity functions are tested: (7), (32) and (33). Complementarity function (7) is differentiable everywhere except at (0, 0), while complementarity functions (32) and (33) are differentiable everywhere but only provide µ-approximations to (2a) and (2b). When using (32) or (33) the parameter µ is fixed at a small positive value.

Four initialization rules are considered for the variables x: (i) as given by an initial power flow solution, (ii) the middle point of the bounded variables (i.e., $x_i^0 = (\underline{x}_i + \overline{x}_i)/2$), (iii) a flat start (i.e., $V_i^0 = 1$ and $\theta_i^0 = 0$), and (iv) random points within the limits. The slack variables s and z are initialized at the middle of the range, i.e., $s^0 = z^0 = (\overline{x} - \underline{x})/2$. The Lagrange multipliers $\pi_i$ and $\upsilon_i$ are all set to 0.1. The Lagrange multipliers $\lambda_i$ are set to 1 if associated with an active power balance constraint, or to 0 if associated with a reactive power constraint.

Table 3: Number of PDIP and TRNC iterations for convergence and minimum active losses (columns: System; PDIP iterations; TRNC iterations with successful/failed counts S/F; minimum losses in MW and reduction in %).

Table 3 displays the number of PDIP and TRNC iterations using initialization rule (i) (a power flow solution), the number of successful and failed TRNC iterations (column S/F), and the minimum active losses. In these simulations the TRNC algorithm uses complementarity function (7). Both algorithms converged for all test systems. The numbers of TRNC iterations are slightly higher than the numbers of PDIP iterations.
The computational cost per iteration of the TRNC algorithm is only slightly higher than the cost of a PDIP iteration. In each iteration the two algorithms solve a large sparse linear system of the same size, but the system matrix in the TRNC algorithm is less sparse than the matrix in the PDIP algorithm. The column S/F shows the number of successful and failed TRNC iterations. For instance, for the IEEE 300-bus system 15 out of 19 iterations were successful. This means that in 4 iterations the computed step $d_k$ was discarded and the trust region radius reduced. In the failed iterations the nonlinear objective function increased while the objective in the approximation model was slightly reduced. Thus, the reduction ratios $\rho_k$ for those iterations were negative, indicating unsuccessful trial steps.

Table 4: Number of TRNC and PDIP iterations using three different initial points: initial power flow solution, middle point and flat start (columns: System; TRNC and PDIP iterations for each of the three initializations, with the number of failed TRNC steps in parentheses).

Table 4 displays the number of TRNC and PDIP iterations using three different initialization rules: (i) an initial power flow solution, (ii) the middle point within the limits, and (iii) a flat start. The two algorithms were able to optimize all test systems. Except for the IEEE 300-bus system, when using the middle point and the flat start initializations the performance of the TRNC algorithm is better than that of the PDIP algorithm. When the initialization is an initial power flow solution the TRNC algorithm presents an average of 4 unsuccessful steps per test system, thus requiring a few more iterations to converge than the PDIP algorithm.

Table 5: Performance of TRNC and PDIP using 50 random starting points (columns: System; total, average, minimum and maximum number of iterations for each algorithm).

Table 5 summarizes the performance of the algorithms when the initial points are random points within the limits. The random points are generated using the MATLAB command

    x = xmin + rand(n,1) .* (xmax - xmin)

A set of 50 random points is generated for each system. The same random points are used by the TRNC and PDIP algorithms. The column Total shows the total number of TRNC or PDIP iterations to solve the whole set of 50 cases. The column Avrg is the average number of iterations, the column Min is the lowest number of iterations observed for that test set, and the column Max is the largest number of iterations taken by a single run. The two algorithms were able to converge for the 50 random points for the IEEE 30-, 57- and 118-bus systems; the TRNC algorithm also converged for the 50 random points for the IEEE 300-bus system, while the PDIP algorithm failed to converge for 6 out of the 50 random points for the IEEE 300-bus system. Overall, the total number of iterations taken by the TRNC algorithm is lower than the total number of iterations taken by the PDIP algorithm. The total of 90 iterations of the PDIP algorithm applied to the IEEE 300-bus system does not include the iterations in the 6 unsuccessful cases. While the average number of iterations is quite close for the two algorithms, the minimum number of iterations is observed in all instances with the TRNC algorithm: for instance, 10 versus 17 iterations for the IEEE 118-bus system, and 11 versus 18 iterations for the IEEE 300-bus system. Thus, if one is able to reduce the number of failed steps in the TRNC algorithm to a minimum, the computational cost can be significantly reduced. In the tests performed the unsuccessful TRNC iterations are due to increases in the original objective, and an interesting approach to handle this issue is presented in [14], combining trust region and line search iterations.

Table 6: Performance of TRNC using three different complementarity functions (columns: System; number of iterations with functions (7), (32) and (33), each with the middle point and flat start initializations).

The performance of the TRNC algorithm using different complementarity functions to reformulate the OPF as a nonlinear equations system is also studied. The results are presented in Table 6, where each function is tested with two initial points, the middle point and the flat start. The performances of the complementarity functions (7) and (32) are similar, and better than the performance of complementarity function (33). Although complementarity function (7) is not differentiable at the origin, it never required using Clarke's generalized Jacobian as described in Section 4.1.
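For reference, the smoothed functions can be coded analogously to (7); a minimal MATLAB sketch, assuming the smoothing terms of (32) and (33) as written in Section 4.1:

    % Smoothed NC-functions (32) and (33); mu > 0 is a small smoothing parameter.
    % With either one, psi = 0 corresponds to a*b = mu with a > 0 and b > 0, cf. (34).
    psi32 = @(a, b, mu) a + b - sqrt(a.^2 + b.^2 + 2*mu);
    psi33 = @(a, b, mu) a + b - sqrt((a - b).^2 + 4*mu);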
Overall, the three complementarity functions studied performed successfully, but the choice of the most appropriate complementarity function to reformulate the OPF problem requires further investigation.

6 CONCLUSIONS

The paper has presented the mathematical development of a globally convergent OPF algorithm combining three major ideas: (i) using complementarity functions to transform the OPF problem into a system of nonlinear equations, (ii) transforming the nonlinear equations system into an unconstrained minimization problem, and (iii) using trust region methods for unconstrained minimization to globalize the solution convergence. The motivation is that one of the main difficulties in applying a trust region method to OPF problems is the handling of inequality constraints; thus, this paper uses complementarity functions to handle the inequality constraints. Three complementarity functions have been implemented and tested.

The performance of the proposed trust-region nonlinear complementarity algorithm is very promising. Globally convergent algorithms are inherently time consuming, but the proposed algorithm shows potential to be close to interior-point algorithms in terms of speed. The numerical results presented here are preliminary, and the performance of the algorithm can be improved on several fronts, e.g., a better choice of parameters and more efficient solutions of the trust region subproblems.

ACKNOWLEDGMENTS

The author gratefully acknowledges the financial support from CNPq, a Brazilian research funding agency. This work was partially developed at the EMSOL Lab, University of Waterloo, Canada. The author gratefully thanks Dr. Claudio Cañizares for providing the research conditions at EMSOL.

REFERENCES

[1] H. Wang, C. E. Murillo-Sánchez, R. D. Zimmerman, and R. J. Thomas, "On computational issues of market-based optimal power flow," IEEE Trans. on Power Systems, vol. 22, no. 3, Aug. 2007.

[2] J. Nocedal and S. J. Wright, Numerical Optimization. Springer.

[3] S. Pajic and K. A. Clements, "Globally convergent state estimation via the trust region method," in Proc. 2003 IEEE Bologna PowerTech Conference, Bologna, Italy, June 2003.

[4] A. S. Costa, R. Salgado, and P. Haas, "Globally convergent state estimation based on Givens rotations," in Proc. of the 2007 IREP Symposium, Charleston, SC, USA, Aug. 2007.

[5] A. A. Sousa, G. L. Torres, and C. A. Cañizares, "Robust optimal power flow solution using trust region and interior-point methods," IEEE Trans. on Power Systems.

[6] G. L. Torres and V. H. Quintana, "Optimal power flow by a nonlinear complementarity method," IEEE Trans. on Power Systems, vol. 15, Aug. 2000.

[7] G. L. Torres and V. H. Quintana, "An interior-point method for nonlinear optimal power flow using voltage rectangular coordinates," IEEE Trans. on Power Systems, vol. 13, Nov. 1998.

[8] A. Fischer, "A special Newton-type optimization method," Optimization, vol. 24, 1992.

[9] J. E. Dennis Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Philadelphia: SIAM Classics in Applied Mathematics, 1996.

[10] J. J. Moré and D. C. Sorensen, "Computing a trust region step," SIAM Journal on Scientific and Statistical Computing, vol. 4, pp. 553-572, 1983.

[11] M. J. D. Powell, "A hybrid method for nonlinear equations," in Numerical Methods for Nonlinear Algebraic Equations, Gordon and Breach Science, 1970.

[12] M. Rojas, S. A. Santos, and D. C. Sorensen, "Algorithm 873: LSTRS: MATLAB software for large-scale trust-region subproblems and regularization," ACM Trans. on Mathematical Software, vol. 34, article 11, Mar. 2008.

[13] F. H. Clarke, Optimization and Nonsmooth Analysis. Philadelphia: SIAM, 1990.

[14] J. Nocedal and Y. Yuan, "Combining trust region and line search techniques," Report OTC 98/04, Optimization Technology Center, Dept. of Electrical and Computer Engineering, Northwestern University, Evanston, IL, USA, 1998.


More information

FINANCIAL OPTIMIZATION

FINANCIAL OPTIMIZATION FINANCIAL OPTIMIZATION Lecture 1: General Principles and Analytic Optimization Philip H. Dybvig Washington University Saint Louis, Missouri Copyright c Philip H. Dybvig 2008 Choose x R N to minimize f(x)

More information

Technische Universität Dresden Herausgeber: Der Rektor

Technische Universität Dresden Herausgeber: Der Rektor Als Manuskript gedruckt Technische Universität Dresden Herausgeber: Der Rektor The Gradient of the Squared Residual as Error Bound an Application to Karush-Kuhn-Tucker Systems Andreas Fischer MATH-NM-13-2002

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information

Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2

Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2 Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2 Coralia Cartis, University of Oxford INFOMM CDT: Modelling, Analysis and Computation of Continuous Real-World Problems Methods

More information

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems

UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction

More information

ALGORITHM XXX: SC-SR1: MATLAB SOFTWARE FOR SOLVING SHAPE-CHANGING L-SR1 TRUST-REGION SUBPROBLEMS

ALGORITHM XXX: SC-SR1: MATLAB SOFTWARE FOR SOLVING SHAPE-CHANGING L-SR1 TRUST-REGION SUBPROBLEMS ALGORITHM XXX: SC-SR1: MATLAB SOFTWARE FOR SOLVING SHAPE-CHANGING L-SR1 TRUST-REGION SUBPROBLEMS JOHANNES BRUST, OLEG BURDAKOV, JENNIFER B. ERWAY, ROUMMEL F. MARCIA, AND YA-XIANG YUAN Abstract. We present

More information

A Trust-region-based Sequential Quadratic Programming Algorithm

A Trust-region-based Sequential Quadratic Programming Algorithm Downloaded from orbit.dtu.dk on: Oct 19, 2018 A Trust-region-based Sequential Quadratic Programming Algorithm Henriksen, Lars Christian; Poulsen, Niels Kjølstad Publication date: 2010 Document Version

More information

Feasible Interior Methods Using Slacks for Nonlinear Optimization

Feasible Interior Methods Using Slacks for Nonlinear Optimization Feasible Interior Methods Using Slacks for Nonlinear Optimization Richard H. Byrd Jorge Nocedal Richard A. Waltz February 28, 2005 Abstract A slack-based feasible interior point method is described which

More information

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS?

WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? WHEN ARE THE (UN)CONSTRAINED STATIONARY POINTS OF THE IMPLICIT LAGRANGIAN GLOBAL SOLUTIONS? Francisco Facchinei a,1 and Christian Kanzow b a Università di Roma La Sapienza Dipartimento di Informatica e

More information

An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84

An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84 An Introduction to Algebraic Multigrid (AMG) Algorithms Derrick Cerwinsky and Craig C. Douglas 1/84 Introduction Almost all numerical methods for solving PDEs will at some point be reduced to solving A

More information

Review of Optimization Methods

Review of Optimization Methods Review of Optimization Methods Prof. Manuela Pedio 20550 Quantitative Methods for Finance August 2018 Outline of the Course Lectures 1 and 2 (3 hours, in class): Linear and non-linear functions on Limits,

More information

Lecture Notes: Geometric Considerations in Unconstrained Optimization

Lecture Notes: Geometric Considerations in Unconstrained Optimization Lecture Notes: Geometric Considerations in Unconstrained Optimization James T. Allison February 15, 2006 The primary objectives of this lecture on unconstrained optimization are to: Establish connections

More information

Linear classifiers selecting hyperplane maximizing separation margin between classes (large margin classifiers)

Linear classifiers selecting hyperplane maximizing separation margin between classes (large margin classifiers) Support vector machines In a nutshell Linear classifiers selecting hyperplane maximizing separation margin between classes (large margin classifiers) Solution only depends on a small subset of training

More information

Generalization to inequality constrained problem. Maximize

Generalization to inequality constrained problem. Maximize Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum

More information

Numerical optimization

Numerical optimization Numerical optimization Lecture 4 Alexander & Michael Bronstein tosca.cs.technion.ac.il/book Numerical geometry of non-rigid shapes Stanford University, Winter 2009 2 Longest Slowest Shortest Minimal Maximal

More information

On the Convergence of the Concave-Convex Procedure

On the Convergence of the Concave-Convex Procedure On the Convergence of the Concave-Convex Procedure Bharath K. Sriperumbudur and Gert R. G. Lanckriet Department of ECE UC San Diego, La Jolla bharathsv@ucsd.edu, gert@ece.ucsd.edu Abstract The concave-convex

More information

Outline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations

Outline. Scientific Computing: An Introductory Survey. Nonlinear Equations. Nonlinear Equations. Examples: Nonlinear Equations Methods for Systems of Methods for Systems of Outline Scientific Computing: An Introductory Survey Chapter 5 1 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

On nonlinear optimization since M.J.D. Powell

On nonlinear optimization since M.J.D. Powell On nonlinear optimization since 1959 1 M.J.D. Powell Abstract: This view of the development of algorithms for nonlinear optimization is based on the research that has been of particular interest to the

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information

LINEAR AND NONLINEAR PROGRAMMING

LINEAR AND NONLINEAR PROGRAMMING LINEAR AND NONLINEAR PROGRAMMING Stephen G. Nash and Ariela Sofer George Mason University The McGraw-Hill Companies, Inc. New York St. Louis San Francisco Auckland Bogota Caracas Lisbon London Madrid Mexico

More information

Optimization Tutorial 1. Basic Gradient Descent

Optimization Tutorial 1. Basic Gradient Descent E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.

More information

Local Analysis of the Feasible Primal-Dual Interior-Point Method

Local Analysis of the Feasible Primal-Dual Interior-Point Method Local Analysis of the Feasible Primal-Dual Interior-Point Method R. Silva J. Soares L. N. Vicente Abstract In this paper we analyze the rate of local convergence of the Newton primal-dual interiorpoint

More information

Constrained optimization: direct methods (cont.)

Constrained optimization: direct methods (cont.) Constrained optimization: direct methods (cont.) Jussi Hakanen Post-doctoral researcher jussi.hakanen@jyu.fi Direct methods Also known as methods of feasible directions Idea in a point x h, generate a

More information