INTERIOR-POINT METHODS FOR NONLINEAR, SECOND-ORDER CONE, AND SEMIDEFINITE PROGRAMMING. Hande Yurttan Benson
INTERIOR-POINT METHODS FOR NONLINEAR, SECOND-ORDER CONE, AND SEMIDEFINITE PROGRAMMING

Hande Yurttan Benson

A DISSERTATION PRESENTED TO THE FACULTY OF PRINCETON UNIVERSITY IN CANDIDACY FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

RECOMMENDED FOR ACCEPTANCE BY THE DEPARTMENT OF OPERATIONS RESEARCH AND FINANCIAL ENGINEERING

June 2001
© Copyright by Hande Yurttan Benson, 2001. All rights reserved.
Abstract

Interior-point methods have been a re-emerging field in optimization since the mid-1980s. We will present here ways of improving the performance of these algorithms for nonlinear optimization and of extending them to different classes of problems and application areas. At each iteration, an interior-point algorithm computes a direction in which to proceed, and then must decide how long a step to take. The traditional approach to choosing a steplength is to use a merit function, which balances the goals of improving the objective function and satisfying the constraints. Recently, Fletcher and Leyffer reported success with a filter method, where improvement in either the objective function or the constraint infeasibility is sufficient. We have combined these two approaches and applied them to interior-point methods for the first time, with good results. Another issue in nonlinear optimization is the emergence of several popular problem classes and their specialized solution algorithms. Two such problem classes are Second-Order Cone Programming (SOCP) and Semidefinite Programming (SDP). In the second part of this dissertation, we show that problems from both of these classes can be reformulated as smooth convex optimization problems and solved using a general-purpose interior-point algorithm for nonlinear optimization.
Acknowledgements

I cannot thank Bob Vanderbei and Dave Shanno enough for their brilliant mentorship and for teaching me everything I know about optimization. Much of the research presented here has been a direct result of their ideas and experience. I have very much appreciated their advice, support and encouragement. I would like to thank my other committee members, C.A. Floudas and E. Cinlar, for their insightful questions and comments on my work. Financial support during the completion of this research has mainly come from Princeton University in the form of teaching assistantships. I would like to express my gratitude to DIMACS for providing a graduate fellowship, and to NSF and ONR grants for supporting my research activities at various times. I would also like to thank all of the professors and fellow students of the Operations Research and Financial Engineering department for all that I have learned over the last four years. This dissertation would not have been possible without the help of my family. I would like to thank my husband, Dan, who provides endless love and patience and much needed advice and support, and the whole Benson family for giving me yet another wonderful family to be a part of. I would also like to thank my brother, Ersin, and his wife, Vanessa, for all of their love and support. Most importantly, this work is dedicated to my parents, Iffet and Necdet Yurttan, who have made more sacrifices and given me more love and support than any child could have asked of her parents.
Contents

Abstract
Acknowledgements
List of Figures
List of Tables

Chapter 1. Introduction
  1. Thesis Outline

Part 1. The interior-point algorithm: description and improvements

Chapter 2. Interior-Point Methods for Nonlinear Programming
  1. The Interior-Point Algorithm
  2. The barrier parameter

Chapter 3. Steplength control: Background
  1. Merit functions
  2. Filter Methods

Chapter 4. Steplength control in interior-point methods
  1. Three Hybrid Methods
  2. Sufficient reduction and other implementation details

Chapter 5. Numerical Results: Comparison of Steplength Control Methods

Part 2. Extensions to Other Problem Classes

Chapter 6. Extension to Second-Order Cone Programming
  1. Key issues in nonsmoothness
  2. Alternate Formulations of Second Order Cone Constraints

Chapter 7. Numerical Results: Second-Order Cone Programming
  1. Antenna Array Weight Design
  2. Grasping Force Optimization
  3. FIR Filter Design
  4. Portfolio Optimization
  5. Truss Topology Design
  6. Equilibrium of a system of piecewise linear springs
  7. Euclidean Single Facility Location
  8. Euclidean Multiple Facility Location
  9. Steiner Points
  10. Minimal Surfaces
  11. Plastic Collapse
  12. Results of numerical testing

Chapter 8. Extension to Semidefinite Programming
  1. Characterizations of Semidefiniteness
  2. The Concavity of the d_j's

Chapter 9. Numerical Results for Semidefinite Programming
  1. The AMPL interface
  2. Algorithm modification for step shortening
  3. Applications
  4. Results of Numerical Testing

Chapter 10. Future Research Directions

Chapter 11. Conclusions

Bibliography

Appendix A. Numerical Results for Steplength Control

Appendix B. Solving SDPs using AMPL
  1. The SDP model
  2. The AMPL function definition
  3. Step-shortening in LOQO
List of Figures

1. Fletcher and Leyffer's filter method adapted to the barrier objective
2. A barrier objective filter that is updated with the barrier parameter µ at each iteration
3. Performance profiles of LOQO and the hybrid algorithms with respect to runtime
4. Performance profiles of LOQO and the hybrid algorithms with respect to iteration counts
List of Tables

1. Comparison of LOQO to FB on commonly solved problems
2. Comparison of LOQO to FO on commonly solved problems
3. Comparison of LOQO to FP on commonly solved problems
4. Comparison of FB to FO on commonly solved problems
5. Comparison of FB to FP on commonly solved problems
6. Comparison of FO to FP on commonly solved problems
7. Runtimes for models which can be formulated as SOCPs
8. Iteration counts and runtimes for semidefinite programming models from various application areas
9. Iteration counts and runtimes for small truss topology problems from the SDPLib test suite
10. Comparative results for different steplength control methods on the CUTE test suite
CHAPTER 1

Introduction

Much of the theory used in nonlinear programming today dates back to Newton and Lagrange. Newton's Method for finding the roots of an equation has been used to find the roots of the first-order derivatives in an unconstrained nonlinear optimization problem. The theory of Lagrange multipliers has expanded the range of problems to which Newton's Method can be applied to include constrained problems. In the middle part of the 20th century, Frisch [28] proposed logarithmic barrier methods to transform optimization problems with inequality constraints into unconstrained optimization problems. Fiacco and McCormick's important work [23] on this approach made it the focus of much of the research in nonlinear optimization in the 1960s. However, complications arising from ill-conditioning in the numerical algebra made logarithmic barrier methods fall into disfavor. For much of the 1970s and early 1980s, the field of nonlinear optimization was dominated by augmented Lagrangian methods and sequential quadratic programming. In the meantime, the field of linear programming had placed much of its focus on Dantzig's simplex method [19]. Introduced in 1947, this method was the standard for linear optimization problems, and it performed quite well in practice. With the advances made in complexity theory, however, it was no longer sufficient to have good empirical performance: any acceptably fast algorithm also had to have a theoretical worst-case runtime that was
polynomial in the size of the problem. Klee and Minty provided an example in [42] for which the simplex method had exponential runtime. In 1980, Khachian proposed the ellipsoid method for linear programming [41]. It received much acclaim, due mainly to the fact that it had a theoretical worst-case complexity which was polynomial in the problem size. However, further work to implement the algorithm showed that it performed close to its worst case in practice. The big advance in linear programming came with Karmarkar's seminal 1984 paper [40], in which he proposed a projective interior-point method that had a polynomial worst-case complexity, and the polynomial was of a lower order than Khachian's. The linear programming community was naturally excited about the result, but there was much curiosity as to how it would behave in practice. In a follow-up to his paper, Karmarkar et al. [1] presented claims that an implementation of the algorithm performed up to 50 times faster than the simplex method. There was a rush by researchers to verify this claim. Although it was not verified by the teams of Vanderbei et al. [65] and Gill et al. [30], the empirical studies showed, nonetheless, that it was faster than the simplex method on some problems and comparable to it on the rest. Thus, the linear programming community had found itself an algorithm that performed well in practice and had satisfactory theoretical complexity. An important result that was a part of Gill et al.'s [30] work was to show that Karmarkar's projective interior-point method was equivalent to Fiacco and McCormick's logarithmic barrier method. With this result, earlier work on the logarithmic barrier method was revived, and interior-point methods for nonlinear programming became an active field of research. In fact, the premiere archive of papers in this field, Interior-Point Methods Online [12], which starts in 1994, and Kranich's
bibliographic collection [43], which covers previous works, together now boast over 2500 papers. Most of these papers have been written after 1994. While reviving the research in barrier methods, however, the suspected effects of ill-conditioning had to be examined. Surprisingly, in [69], Wright showed that such effects were quite benign, and the theory that the undesirable behavior of these methods was due to their primal nature became more commonplace. This explained the success of primal-dual interior-point methods, which are the focus of much of nonlinear programming research today. With the vast improvements in computer power, such as faster CPUs and larger memory, and the emergence of elegant and powerful modelling environments, such as ampl [27], as well as the development of large test sets, such as CUTE [17] and SDPLib [10], the development and implementation of algorithms have become top priorities. Many of the algorithms differ from each other in a handful of aspects:

(1) Treatment of equality constraints/free variables
(2) Numerical algebra routines
(3) Treatment of indefiniteness
(4) Method of choosing a stepsize

Items (1) and (2) deal with the internal formulation of the problem for the algorithm and the efficient and reliable design of its numerical aspects, respectively. The treatment of indefiniteness is required for general-purpose algorithms that handle nonconvexity. The last item, the method of choosing a stepsize for the solution of the Newton system, has received much interest with the discussion of merit functions and the recent emergence of filter methods by Fletcher and Leyffer [26], and it will be the focus of Part I of this dissertation.
With the advances in nonlinear programming research, one class of problems in particular, convex programming problems, has been extensively studied. The pivotal complexity results by Nesterov and Nemirovskii [52] showed that by using self-concordant barrier functions, it is possible to construct interior-point algorithms that have worst-case polynomial runtimes. In their work, they proposed such barriers for several subclasses of problems, including second-order cone programming and semidefinite programming. Both of these subclasses have been the focus of much research in the last decade. In fact, DIMACS held a special year to study these subclasses and even hosted a computational challenge [18] where algorithms to solve them were presented. Second-order cone programming problems arise in many important engineering applications, ranging from financial optimization to structural design. Many of these examples are surveyed in two recent papers by Lobo et al. [46] and Vanderbei and Benson [67]. Similarly, semidefinite programming problems have also been the subject of much research, partly due to the fact that many NP-hard combinatorial problems have relaxations which are semidefinite programming problems. An example of such a relaxation arises in the case of the Max-Cut problem, and it is described in detail by Goemans and Williamson [31]. Both second-order cone programming and semidefinite programming problems are large-scale, convex optimization problems from real-world applications; it is therefore important to solve them efficiently. Using Nesterov and Nemirovskii's complexity results, numerous specialized algorithms have been developed, such as Andersen et al.'s mosek [2] and its add-on as described in [21] for second-order cone programming, Benson and Ye's dsdp [56] and Helmberg's SBmethod [34] for semidefinite programming, and Sturm's SeDuMi [62] for both second-order cone and semidefinite programming. There are also
algorithms available for specific problem instances, such as Burer, Monteiro and Zhang's bmz [13] for the Max-Cut SDP relaxation. The issue with these subclasses is that Nesterov and Nemirovskii's results for the self-concordant barrier only allow for specialized algorithms. However, after the emergence of Karmarkar's work, interior-point methods have unified the fields of linear and nonlinear programming in terms of solution methods, and one would expect that an algorithm can be made general enough to handle all these different classes of problems and still be efficient and reliable. In fact, both second-order cone programming and semidefinite programming (which can be seen as a generalization of linear programming) can be handled as general nonlinear programming problems, and it is the goal of Part II of this dissertation to outline this process and provide empirical results.

1. Thesis Outline.

In this dissertation, we will present a state-of-the-art interior-point method and present ways to improve its performance and extend its usage beyond the nonlinear programming paradigm. Part I will be focused on the algorithm itself. In Chapter 2, we will present a primal-dual interior-point algorithm, which is a simplified version of the one currently implemented in Vanderbei's loqo [64]. As discussed in the introduction, we will give ways to improve this algorithm's method for choosing step sizes. In Chapter 3, we will introduce both the traditional merit function approach and the filter methods recently introduced [26] by Fletcher and Leyffer. In Chapter 4, we will present and discuss in some detail new hybrid methods for choosing the step size. There, we will show that the use of such a hybrid method can
allow for more aggressive steps and improve the performance of the algorithm. Indeed, the numerical results given in Chapter 5 will support this conclusion. In Part II, we will focus on extending this algorithm to the two subclasses discussed above, second-order cone programming and semidefinite programming, in Chapters 6 (with numerical results in Chapter 7) and 8 (with numerical results in Chapter 9), respectively. We will present ways to smooth the second-order cone constraints to allow for the use of an interior-point algorithm to solve them, and we will also reformulate the semidefinite programming problem as a standard-form, convex, smooth nonlinear programming problem. The extensive numerical results presented throughout this dissertation provide much support for the theoretical conclusions given in Parts I and II. The comparative testing for the hybrid methods presented in Part I is performed on problems from the CUTE [17], Hock and Schittkowski [35], and Schittkowski [58] test suites. Numerical results for the reformulated second-order cone problems and semidefinite programming problems include problems from the DIMACS Challenge set [18] and SDPLib [10]. Finally, we will discuss future research directions in Chapter 10.
Part 1

The interior-point algorithm: description and improvements
CHAPTER 2

Interior-Point Methods for Nonlinear Programming.

In its standard form, a nonlinear programming problem (NLP) is

(1)  minimize f(x)
     subject to h(x) ≥ 0,

where x ∈ R^n, f : R^n → R, and h : R^n → R^m. When f is a convex function and the h_i are concave functions of x, the problem is said to be convex. Also, for reasons which will become clear during the discussion of the interior-point algorithm, f and h are assumed to be twice continuously differentiable.

1. The Interior-Point Algorithm.

In this section, we will outline a primal-dual interior-point algorithm to solve the optimization problem given by (1). A more detailed version of this algorithm, which also handles equality constraints and free variables, is implemented in loqo. More information on those features can be found in [64]. First, slack variables w_i are added to each of the constraints to convert them to equalities:

     minimize f(x)
     subject to h(x) − w = 0,
                w ≥ 0,

where w ∈ R^m. Then, the nonnegativity constraints on the slack variables are eliminated by placing them in a barrier objective function, giving the Fiacco and McCormick [23]
logarithmic barrier problem:

     minimize f(x) − µ Σ_{i=1}^m log w_i
     subject to h(x) − w = 0.

The scalar µ is called the barrier parameter. Now that we have an optimization problem with no inequalities, we form the Lagrangian

     L_µ(x, w, y) = f(x) − µ Σ_{i=1}^m log w_i − yᵀ(h(x) − w),

where y ∈ R^m are called the Lagrange multipliers or the dual variables. In order to achieve a stationary point of the Lagrangian function, we need the first-order optimality conditions:

     ∇_x L = ∇f(x) − A(x)ᵀy = 0
     ∇_w L = −µW⁻¹e + y = 0
     ∇_y L = h(x) − w = 0,

where A(x) = ∇h(x) is the Jacobian of the constraint functions h(x), W is the diagonal matrix with elements w_i, and e is the vector of all ones of appropriate dimension. The first and the third equations are more commonly referred to as the dual and primal feasibility conditions. When µ = 0, the second equation gives the complementarity condition that w_i y_i = 0 for i = 1, ..., m. Before we begin to solve this system of equations, we multiply the second set of equations by W to give

     −µe + WYe = 0,
where Y is the diagonal matrix with elements y_i. Note that this equation implies that y is nonnegative, and this is consistent with the fact that it is the vector of Lagrange multipliers associated with a set of constraints that were initially inequalities. We now have the standard primal-dual system

     ∇f(x) − A(x)ᵀy = 0
(2)  −µe + WYe = 0
     h(x) − w = 0.

In order to solve this system, we use Newton's Method. Doing so gives the following system to solve:

     [ H(x, y)   0   −A(x)ᵀ ] [ Δx ]   [ −∇f(x) + A(x)ᵀy ]
     [    0      Y      W   ] [ Δw ] = [     µe − WYe     ]
     [  A(x)    −I      0   ] [ Δy ]   [    −h(x) + w     ],

where the Hessian, H, is given by

     H(x, y) = ∇²f(x) − Σ_{i=1}^m y_i ∇²h_i(x).

We symmetrize this system by multiplying the first equation by −1 and the second equation by −W⁻¹:

     [ −H(x, y)    0     A(x)ᵀ ] [ Δx ]   [ ∇f(x) − A(x)ᵀy ]   [  σ ]
     [     0    −W⁻¹Y     −I   ] [ Δw ] = [   −µW⁻¹e + y   ] = [ −γ ]
     [   A(x)     −I       0   ] [ Δy ]   [    −h(x) + w   ]   [  ρ ]

where σ := ∇f(x) − A(x)ᵀy, γ := µW⁻¹e − y, and ρ := w − h(x). Here, σ, γ, and ρ depend on x, y, and w, even though we do not show this dependence explicitly in our notation. Note that ρ measures primal infeasibility, and using an analogy with linear programming, we refer to σ as the dual infeasibility.
It is easy to eliminate Δw from this system without producing any additional fill-in in the off-diagonal entries. Thus, Δw is given by

(3)  Δw = WY⁻¹(γ − Δy).

After the elimination, the resulting set of equations is the reduced KKT system:

     [ −H(x, y)   A(x)ᵀ ] [ Δx ]   [     σ      ]
     [   A(x)     WY⁻¹  ] [ Δy ] = [ ρ + WY⁻¹γ ].

This system is solved by using an LDLᵀ factorization, which is a modified version of Cholesky factorization, and then performing a backsolve to obtain the step directions. The algorithm starts at an initial solution (x^(0), w^(0), y^(0)) and proceeds iteratively toward the solution through a sequence of points which are determined by the search directions obtained from the reduced KKT system as follows:

     x^(k+1) = x^(k) + α^(k) Δx^(k),
     w^(k+1) = w^(k) + α^(k) Δw^(k),
     y^(k+1) = y^(k) + α^(k) Δy^(k),

where 0 < α^(k) ≤ 1 is the steplength and the superscripts denote the iteration number. Currently, in loqo, the steplength is chosen using a merit function, which ensures that a balanced improvement toward optimality and feasibility is achieved at each iteration. It is obvious that the steplength α can have a large effect on the number of iterations required to reach the optimum. In Chapter 3, we will present the traditional merit function approach and the recent filter approach of Fletcher and Leyffer [26]. We will discuss several variants of both approaches in order to find an aggressive yet reliable way to pick the steplength α.
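The iteration just described can be sketched end-to-end on a tiny example. The following is a minimal illustration (not the loqo implementation) for the one-variable problem: minimize x² subject to x − 1 ≥ 0. With a single constraint, the reduced KKT system is 2×2 and can be solved by Cramer's rule; the 0.95 fraction-to-boundary factor and the crude µ schedule are illustrative choices only, not the heuristics used by loqo.

```python
def newton_step(x, w, y, mu):
    """One reduced-KKT Newton step for: minimize x^2 s.t. x - 1 >= 0."""
    H = 2.0                      # Hessian of the Lagrangian (h is linear)
    A = 1.0                      # Jacobian of h(x) = x - 1
    sigma = 2.0 * x - A * y      # dual infeasibility
    gamma = mu / w - y
    rho = w - (x - 1.0)          # primal infeasibility
    d = w / y                    # the scalar W Y^{-1}
    # Reduced KKT system: [[-H, A], [A, d]] [dx, dy]^T = [sigma, rho + d*gamma]^T
    det = -H * d - A * A
    rhs2 = rho + d * gamma
    dx = (sigma * d - A * rhs2) / det
    dy = (-H * rhs2 - A * sigma) / det
    dw = d * (gamma - dy)        # back out dw via equation (3)
    return dx, dw, dy

def solve(mu=10.0, tol=1e-8):
    x, w, y = 2.0, 1.0, 1.0
    for _ in range(200):
        dx, dw, dy = newton_step(x, w, y, mu)
        # shorten the step so the nonnegative variables w, y stay positive
        alpha = 1.0
        for v, dv in ((w, dw), (y, dy)):
            if dv < 0:
                alpha = min(alpha, -0.95 * v / dv)
        x, w, y = x + alpha * dx, w + alpha * dw, y + alpha * dy
        mu = max(0.1 * mu, 1e-12)  # crude monotone reduction of mu
        if abs(w - (x - 1.0)) < tol and abs(2 * x - y) < tol and w * y < tol:
            break
    return x, y
```

The analytic optimum is x = 1 with multiplier y = 2, which the iterates approach as µ is driven toward zero.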
2. The barrier parameter.

Before we finish this chapter on our interior-point algorithm, it is important that we examine the barrier parameter, µ, in some detail. Traditionally, µ is chosen to be

     µ = λ (wᵀy)/m,

where 0 ≤ λ < 1. As reported by Vanderbei and Shanno [66], the interior-point algorithm presented above performs best when the complementarity products w_i y_i go to zero at a uniform rate, and, when at a point that is far from uniformity, a large µ promotes uniformity for the next iteration. We measure the distance from uniformity by

     ξ = min_i (w_i y_i) / (wᵀy/m).

This means that 0 < ξ ≤ 1, and ξ = 1 only when all of the w_i y_i are constant over all values of i. Therefore, Vanderbei and Shanno [66] use the following heuristic to compute the barrier parameter at each iteration:

     µ = λ min( (1 − r)(1 − ξ)/ξ , 2 )³ (wᵀy)/m,

where r is a steplength parameter set to 0.95 and λ is set to 0.1. This computation is performed at the beginning of each iteration, using the values of w and y computed with the step taken at the end of the previous iteration.
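As a sketch, the heuristic can be written directly from the formulas above. The defaults r = 0.95 and λ = 0.1 are the values quoted in the text; the function name is ours.

```python
def barrier_parameter(w, y, lam=0.1, r=0.95):
    """Vanderbei-Shanno style heuristic for the barrier parameter mu,
    given positive slacks w and dual variables y (a sketch)."""
    m = len(w)
    avg = sum(wi * yi for wi, yi in zip(w, y)) / m   # w^T y / m
    # xi measures the distance of the complementarity products from uniformity
    xi = min(wi * yi for wi, yi in zip(w, y)) / avg
    return lam * min((1 - r) * (1 - xi) / xi, 2.0) ** 3 * avg
```

Note that perfectly uniform products (ξ = 1) give µ = 0, while nonuniform products yield a positive µ that promotes uniformity at the next iterate.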
CHAPTER 3

Steplength control: Background.

With interior-point methods for linear and quadratic programming, the steplength is controlled using a ratio test which ensures that the nonnegative variables stay nonnegative. However, with general nonlinear programming the situation is more complicated. While computing a steplength for an interior-point iterate, one sometimes faces a contradiction between reducing the objective function and satisfying the constraints. In fact, it may be the case that a small reduction in the objective function leads to a large increase in the infeasibility. It is important to have a method to balance these contradicting goals. In this chapter, we will outline two such existing methods: merit functions and filter methods.

1. Merit functions.

Traditionally, merit functions have been the method of choice to provide the balance between optimality and feasibility. A merit function consists of some combination of a measure of optimality and a measure of feasibility, and a step is taken if and only if it leads to a sufficient reduction in the merit function. In order to achieve sufficient reduction, backtracking, that is, systematically reducing the steplength, may be necessary. One example of a merit function is Han's l1 exact merit function [33]:

     ψ₁(x, β) = f(x) + β ‖ρ(x, w)‖₁,

where ρ(x, w) = w − h(x). The term exact refers to the fact that for any β within a certain range, a minimizer of the original optimization problem is guaranteed to be a local
minimum of ψ₁(x, β). This is a very good property to have; however, the l1 exact merit function is nondifferentiable due to the norm, and this results in numerical problems in practice. Another merit function is the l2 merit function used by El-Bakry, Tapia, Tsuchiya and Zhang [22], the squared norm of the residual of the primal-dual system:

     ψ₂(x, w, y) = ‖∇f(x) − A(x)ᵀy‖₂² + ‖WYe‖₂² + ‖ρ(x, w)‖₂².

El-Bakry et al. presented a globally convergent algorithm using this merit function under the usual conditions, provided that H(x, y) + A(x)ᵀW⁻¹Y A(x) remained nonsingular throughout. However, Shanno and Simantiraki [59] showed that on the Hock and Schittkowski test suite [35] a variant of this algorithm fails on some problems due to singularity. Also, while the algorithm is usually efficient, it sometimes converges to local maxima or saddle points. Because of the drawbacks of these two merit functions, Vanderbei and Shanno [66] refer back to Fiacco and McCormick's [23] penalty function for equality constrained problems and the following merit function:

     ψ₃(x, w, β) = f(x) + (β/2) ‖ρ(x, w)‖₂².

Vanderbei and Shanno's algorithm was presented in Chapter 2, and it uses a logarithmic barrier objective function. The merit function in the context of their algorithm is

     ψ_{β,µ}(x, w) = f(x) − µ Σ_{i=1}^m log w_i + (β/2) ‖ρ(x, w)‖₂².

ψ_{β,µ} has the disadvantage that β is required to go to infinity in order to guarantee convergence to a feasible point, which is hoped to be a local minimum of the original problem.
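To make the mechanics concrete, the following sketch evaluates a ψ_{β,µ}-style merit function and uses it inside a backtracking line search. The function names and the constants ɛ = 1e-4 and the halving factor are our illustrative choices; the production loqo code differs in detail.

```python
import math

def merit_psi(f_val, w, rho, mu, beta):
    """psi_{beta,mu} = f(x) - mu * sum_i log(w_i) + (beta/2) * ||rho||_2^2,
    evaluated from the objective value, slacks, and infeasibility vector."""
    barrier = f_val - mu * sum(math.log(wi) for wi in w)
    return barrier + 0.5 * beta * sum(ri * ri for ri in rho)

def armijo_backtrack(psi, z, dz, slope, eps=1e-4, shrink=0.5):
    """Shorten the steplength until a sufficient-decrease (Armijo) condition
    holds; slope is the directional derivative of psi at z along dz, which
    is negative for a descent direction."""
    alpha, psi0 = 1.0, psi(z)
    while alpha > 1e-12:
        trial = [zi + alpha * di for zi, di in zip(z, dz)]
        if psi(trial) < psi0 + eps * alpha * slope:
            break                 # sufficient reduction achieved
        alpha *= shrink           # backtrack
    return alpha
```

For a descent direction the loop terminates with a positive α, mirroring how a merit-function-based method accepts or shortens the Newton step.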
In practice, Vanderbei and Shanno report that this disadvantage seems to be quite unimportant, which is why they have chosen this merit function for the loqo algorithm. In [66], Vanderbei and Shanno also present theoretical results about the behavior of the merit function ψ_{β,µ}. We will repeat them here. The matrix H(x, y) + A(x)ᵀW⁻¹Y A(x), more commonly referred to as the dual normal matrix, will play a special role in the following theorem:

Theorem 1. (Vanderbei and Shanno). Suppose that the dual normal matrix is positive definite. Let

     ρ = ρ(x, w) = w − h(x),  and  b = b_µ(x, w) = f(x) − µ Σ_{i=1}^m log w_i.

Then the search directions have the following properties:

(1) If ρ = 0, then ∇_x bᵀ Δx + ∇_w bᵀ Δw ≤ 0.
(2) There exists β_min ≥ 0 such that, for every β > β_min, ∇_x ψ_{β,µ}ᵀ Δx + ∇_w ψ_{β,µ}ᵀ Δw ≤ 0.

In both cases, equality holds if and only if (x, w) satisfies (2) for some y.

This theorem suggests that when the problem is strictly convex, the search directions given by the reduced KKT system are descent directions for ψ_{β,µ} for a large enough β. The positive definiteness condition on the dual normal matrix, however, may not always hold. The authors then propose using

     Ĥ(x, y) = H(x, y) + λI,  λ ≥ 0,
instead of H(x, y) in the definition of the search directions. The diagonal perturbation λ is chosen large enough so that Ĥ(x, y) is positive definite, and Theorem 1 follows. Note that such a λ can always be found, since a sufficiently large diagonal perturbation makes the matrix diagonally dominant with a positive diagonal and hence positive definite. It is Vanderbei and Shanno's merit function, ψ_{β,µ}, that will be considered as the state of the art for this dissertation. We will use it to construct our hybrid methods in the next chapter and compare against it when presenting numerical results in Chapter 5. One thing to note about the merit function is that at each iteration, the steplength is chosen such that the new iterate will provide a sufficient reduction in ψ_{β,µ}. The amount of the reduction required is determined by an Armijo rule [6]:

(4)  ψ_{β,µ}(x^(k+1), w^(k+1)) < ψ_{β,µ}(x^(k), w^(k)) + ɛ [ ∇_x ψ_{β,µ}(x^(k), w^(k))ᵀ Δx^(k) + ∇_w ψ_{β,µ}(x^(k), w^(k))ᵀ Δw^(k) ],

where ɛ is a small positive constant fixed in our implementation. Note that by Theorem 1, the last term is negative, so the condition does indeed require a reduction.

2. Filter Methods.

Recently, Fletcher and Leyffer [26] studied solving the nonlinear programming problem (1) using a sequential quadratic programming (SQP) algorithm that employed a different type of steplength control. An SQP algorithm is an active-set method that tries to locate the optimal solution by finding the inequality constraints that are equalities at the optimum. Since there are possibly an exponential number of sets of these active constraints, a smart way to pick a set is to work with a quadratic approximation to the problem which is easier to solve:

     minimize (1/2) Δxᵀ Q Δx + ∇f(x)ᵀ Δx
(5)  subject to A(x) Δx + h(x) ≥ 0,
               ‖Δx‖₂ ≤ r,

where Q is some positive definite approximation to the Hessian matrix of f(x), and r is the trust region radius. SQP algorithms have also traditionally used a merit function to balance the goals of reducing the objective function and reducing infeasibility. Such a merit function is

(6)  ψ_β(x) = f(x) + (β/2) ‖ρ⁻(x)‖²,

where ρ⁻(x) is the vector with elements ρ⁻_i(x) = min(h_i(x), 0). Here, reducing ψ_β(x) clearly ensures that either the objective function or the infeasibility is reduced. The goal of Fletcher and Leyffer's work was to replace the use of a merit function in their SQP algorithm with a requirement that improvement be made over all previous iterations in either of two components: (a) a measure of objective progress and (b) a measure of progress toward feasibility. They define a filter to be a set of pairs (f(x^(k)), ‖ρ⁻(x^(k))‖₂), and a new point x^(k+1) is admitted to the filter if it is not dominated by any point already in the filter. A point x^(j) is said to dominate x^(k+1) if

(7)  ‖ρ⁻(x^(j))‖₂ ≤ ‖ρ⁻(x^(k+1))‖₂  and  f(x^(j)) ≤ f(x^(k+1)).

If there is a point x^(j) in the filter such that (7) is satisfied, an acceptable point is determined either by reducing the trust region radius r or by a feasibility restoration step. In order to ensure that there is sufficient progress towards the optimal solution at each iteration, Fletcher and Leyffer have modified their filter to include an Armijo rule. Doing
this allows one to define an envelope around the filter, so that new points that are arbitrarily close to the filter are not admitted. Also, when a sufficient level of feasibility is reached, it can be the case that a minuscule improvement in feasibility increases the objective function significantly. At such a point, however, we should not even be concerned with improving the feasibility any further. To avoid such a deviation from the optimal solution, Fletcher and Leyffer have included a condition in the filter that when the norm of the infeasibility is sufficiently small, a reduction in the objective function is required, subject to an Armijo rule. In [26], Fletcher and Leyffer report good numerical results with this filter approach on problems from the CUTE test suite [17]. Their new code filterSQP consistently outperforms their previous code l1SQP, which employs the merit function given by (6). Encouraged by these results, we have decided to try the filter method approach in the context of interior-point methods and the implementation of our algorithm, loqo. As we will describe in the next chapter, it is not possible to apply filter methods to loqo without modification. Therefore, we will propose several hybrid methods to control the steplength.
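In code, the dominance test (7) and the resulting filter update can be sketched as follows, storing pairs (f, ‖ρ⁻‖₂). The function names are ours, and a real filter implementation adds the envelope and the small-infeasibility switching condition described above.

```python
def dominates(p, q):
    # p dominates q if p is no worse in both objective value and infeasibility
    return p[0] <= q[0] and p[1] <= q[1]

def acceptable(filter_pts, candidate):
    # a trial point is acceptable if no stored pair dominates it
    return not any(dominates(p, candidate) for p in filter_pts)

def update_filter(filter_pts, candidate):
    """Admit an acceptable candidate, discarding entries it dominates."""
    if not acceptable(filter_pts, candidate):
        return filter_pts
    return [p for p in filter_pts if not dominates(candidate, p)] + [candidate]
```

A rejected trial point corresponds to the situation where the trust region radius is reduced or feasibility restoration is invoked.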
CHAPTER 4

Steplength control in interior-point methods.

As Theorem 1 in the previous chapter states, there exists a β at each iteration such that the search direction obtained from the reduced KKT system is a descent direction for the merit function ψ_{β,µ}(x, w). This implies that a steplength α can be found at each iteration to reduce ψ_{β,µ}(x, w). The reduction can come from two sources: the barrier objective function

     b_µ(x, w) = f(x) − µ Σ_{i=1}^m log(w_i)

or the norm of the infeasibility, ‖ρ(x, w)‖. Therefore, Theorem 1 guarantees that at least one of these quantities will be reduced at each iteration, and this immediately suggests using a filter consisting of pairs of points (b_µ^(k), ‖ρ^(k)‖), where b_µ^(k) = b_µ(x^(k), w^(k)) and ρ^(k) = ρ(x^(k), w^(k)). An example of such a filter consisting of four points is shown in Figure 1. In interior-point methods, however, the barrier parameter changes from one iteration to the next. We will denote by µ^(k−1) the barrier parameter used in iteration k, since it is computed from (x^(k−1), w^(k−1)). As discussed above, a steplength α exists at iteration k+1 that reduces either b^(k)_{µ(k)} or ‖ρ^(k)‖. But, since b^(k)_{µ(k)} is different from the b^(k)_{µ(k−1)} that was accepted into the filter, we might not find a steplength that will give a point acceptable to the filter at iteration k+1. In fact, Figure 1 depicts two possible locations for (b^(k)_{µ(k)}, ‖ρ^(k)‖),
Figure 1. Fletcher and Leyffer's filter method adapted to the barrier objective.

where k = 4. In Case 1, we have b^(4)_{µ^(4)} < b^(4)_{µ^(3)}, and we are guaranteed to find a point acceptable to the filter in iteration 5. However, in Case 2, it is impossible to find a steplength that will give us such a point.

In general, in order to guarantee that we can find a point (b^(k+1)_{µ^(k)}, ‖ρ^(k+1)‖_2) that is acceptable to the filter, it is sufficient to have

(8)    b^(k)_{µ^(k)} < b^(k)_{µ^(k−1)}.

This inequality holds if µ^(k) < µ^(k−1) and Σ_{i=1}^m log(w_i^(k)) < 0. In fact, it is usually the case that the barrier parameter, µ, is monotonically decreasing, and always so as the optimum is approached. Also, in loqo, the treatment of free variables
and equality constraints, as described in [66] and [39], ensures that slack variables will approach zero. Thus, (8) will hold as the algorithm approaches the optimum, and the suggested filter is plausible. However, (8) may not always hold: loqo does not reduce the barrier parameter µ monotonically, and at early iterations µ can increase from one iteration to the next. We cannot, therefore, implement a filter method in our algorithm without either modifying Fletcher and Leyffer's [26] approach or modifying the µ calculation; we did not try the latter. In the rest of this chapter, we will present three filter-based algorithms and discuss their properties.

1. Three Hybrid Methods

Hybrid #1: Filter method using the barrier objective. As the first variation on Fletcher and Leyffer's filter method, we have created a filter saving three values at each point:

(f(x^(k)), Σ_{i=1}^m log(w_i^(k)), ‖ρ(x^(k), w^(k))‖).

Each time a new µ^(k) is calculated, each barrier objective function is recalculated using this new value, and a new filter is constructed. A new point (x^(k+1), w^(k+1)) is admitted to the filter if there is no point in the filter satisfying

(9)    b_{µ^(k)}(x^(j), w^(j)) ≤ b_{µ^(k)}(x^(k+1), w^(k+1)) and ‖ρ(x^(j), w^(j))‖_2 ≤ ‖ρ(x^(k+1), w^(k+1))‖_2.

This filter is shown in Figure 2. Note that requiring condition (9) to be satisfied imposes the stronger condition that if the new point reduces the current barrier objective function, it must reduce it over all previous points for this same value of the barrier parameter. However, there is still no guarantee
Figure 2. A barrier objective filter that is updated with the barrier parameter µ at each iteration.

that a new point acceptable to the filter can be found at each iteration. In Figure 2, we depict one possible scenario, where b^(1) is reduced by such an amount that b^(4) is no longer in the filter. In that situation, we cannot find a steplength that will give a point acceptable to the filter.

The question then arises as to what to do when a new trial point (x^(k+1), w^(k+1)) is not acceptable to the filter. Since we know that there exists a β such that the search vector (Δx, Δw, Δy) is a descent vector for ψ_{β,µ}(x, w), one strategy is to compute β as in standard loqo [66] and perform a line search to reduce ψ_{β,µ}(x, w). While the new point must improve either the infeasibility or the barrier objective over the previous point, it need not be acceptable to the filter. Nonetheless, we accept the new point as the current point and continue.

To summarize, our first hybrid approach uses the objective function, the barrier term, and the norm of the infeasibility to create a filter of triples. The filter is updated with the
current value of the barrier parameter, µ, at each iteration. Since it may still be the case that we cannot find a steplength to give a point acceptable to the filter at some iteration, we also employ a merit function to test and accept a new point.

Hybrid #2: Filter using the objective function. A second possibility for a filter algorithm is simply to keep the pairs (f(x^(k)), ρ(x^(k), w^(k))), and admit a new point to the filter if there is no point (x^(j), w^(j)) with

f(x^(j)) ≤ f(x^(k+1)) and ‖ρ(x^(j), w^(j))‖_2 ≤ ‖ρ(x^(k+1), w^(k+1))‖_2.

The justification for this approach follows from the fact that if µ^(k) ≥ µ^(k+1) and the pair (x^(k+1), w^(k+1)) is feasible and minimizes b_{µ^(k)}(x, w), then f(x^(k+1)) ≤ f(x^(k)) (see Fiacco and McCormick [23]). If the new point is not feasible, then it may be admitted to the filter for reducing infeasibility. If infeasibility is not reduced, then the barrier objective must be, and a sufficient reduction should also reduce the objective function. However, it may still be the case that we cannot find a steplength to give a point acceptable to the filter. Again, we employ a merit function as in Hybrid #1 to resolve this issue.

Hybrid #3: Filter based only on the previous iteration. In the case that we cannot find a steplength to give a point acceptable to the filter in Hybrid #1, another possibility is simply to backtrack by reducing the step size until either the infeasibility or the barrier objective function is sufficiently reduced from the previous iteration. Clearly, if ψ_{β,µ}(x, w) can be reduced, then for some steplength α we can achieve such a reduction, and no penalty parameter β need be computed.
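A backtracking loop in the spirit of Hybrid #3 can be sketched as follows. This is a simplified illustration under assumed interfaces, not loqo's actual code: barrier_obj and infeas_norm are caller-supplied callables, and the halving factor is a hypothetical choice.

```python
def backtrack(x, w, dx, dw, barrier_obj, infeas_norm,
              alpha0=1.0, cut=0.5, max_cuts=10):
    """Cut the steplength alpha until the trial point reduces either the
    barrier objective or the infeasibility norm relative to (x, w).
    At most max_cuts cuts are tried (loqo's default cap is 10)."""
    b0 = barrier_obj(x, w)
    r0 = infeas_norm(x, w)
    alpha = alpha0
    xt, wt = list(x), list(w)
    for _ in range(max_cuts):
        xt = [xi + alpha * d for xi, d in zip(x, dx)]
        wt = [wi + alpha * d for wi, d in zip(w, dw)]
        if barrier_obj(xt, wt) < b0 or infeas_norm(xt, wt) < r0:
            return xt, wt, alpha  # accept: one of the two measures improved
        alpha *= cut              # step cut
    return xt, wt, alpha          # cap reached; return the last trial point
```

Because acceptance requires improvement in only one of the two measures, no penalty parameter is needed to weigh them against each other.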
Early success with the strategy of backtracking using ψ_{β,µ}(x, w), even if the new point was not necessarily acceptable to the filter, led us to try one more strategy, one that uses no filter whatsoever. Instead, at each iteration we simply backtrack until a reduction in either the infeasibility or the barrier objective over the previous point is achieved, that is, until

(10)    b_{µ^(k)}(x^(k+1), w^(k+1)) ≤ b_{µ^(k)}(x^(k), w^(k)) or ‖ρ(x^(k+1), w^(k+1))‖_2 ≤ ‖ρ(x^(k), w^(k))‖_2.

This approach clearly avoids the need for the penalty parameter in the merit function, and is in the spirit of the filter, but is less complex.

2. Sufficient Reduction and Other Implementation Details

In all of the above, we require a sufficient decrease in either the infeasibility or the barrier objective. In practice, we impose an Armijo-type condition on the decrease. Specifically, in Hybrid #1 we require that either

(11)    b^(k+1)_{µ^(k)} ≤ b^(j)_{µ^(k)} + ɛα [∇_x b^(k)_{µ^(k)}; ∇_w b^(k)_{µ^(k)}]^T [Δx^(k); Δw^(k)]

or

(12)    ‖ρ^(k+1)‖_2^2 ≤ ‖ρ^(j)‖_2^2 + 2ɛα ρ^(k)T [∇_x ρ^(k)  ∇_w ρ^(k)] [Δx^(k); Δw^(k)]

for all (x^(j), w^(j)) in the filter. Note that the Armijo conditions imposed on the barrier objective and the infeasibility differ from the standard Armijo condition. In its standard form, a measure of sufficient
decrease from the jth iteration would be based on b^(j)_{µ^(k)}, ρ^(j), Δx^(j), and Δw^(j). However, we are not guaranteed that (Δx^(j), Δw^(j)) are descent directions for b^(j)_{µ^(k)}. To see this, note that (Δx^(j), Δw^(j)) are indeed descent directions for b^(j)_{µ^(j)}, so the following inequality holds:

∇f(x^(j))^T Δx^(j) − µ^(j) e^T (W^(j))^{−1} Δw^(j) ≤ 0.

The inequality that we want to hold is

∇f(x^(j))^T Δx^(j) − µ^(k) e^T (W^(j))^{−1} Δw^(j) ≤ 0.

If e^T (W^(j))^{−1} Δw^(j) ≤ 0, this inequality is guaranteed to hold only if µ^(k) ≤ µ^(j). Otherwise, it is guaranteed to hold only if µ^(k) ≥ µ^(j). Since neither of these conditions can be assumed to hold, we cannot use (Δx^(j), Δw^(j)) as descent directions for b^(j)_{µ^(k)}.

However, since the aim of the Armijo condition is to create an envelope around the filter, it is sufficient to note that Δx^(k) and Δw^(k) are descent directions for either b^(k)_{µ^(k)} or ‖ρ^(k)‖_2^2. Therefore, the condition given by (11) achieves our goal and is easy to implement. In the case where Δx^(k) and Δw^(k) are not descent directions for b^(k)_{µ^(k)}, it is still easy to approximate a sufficient reduction. First, we note that

ρ^(k)T [∇_x ρ^(k)  ∇_w ρ^(k)] [Δx^(k); Δw^(k)] = −ρ^(k)T ρ^(k) ≤ 0,

where the equality follows from Newton's method, which gives [∇_x ρ^(k)  ∇_w ρ^(k)] [Δx^(k); Δw^(k)] = −ρ^(k). Therefore, Δx^(k) and Δw^(k) are descent directions for ‖ρ^(k)‖_2^2. We can always define a valid envelope for the infeasibility,
so we can use this information to approximate what the corresponding envelope for the barrier objective would be. The dual variables measure the proportion of marginal change in the barrier objective to marginal change in the infeasibility, so using the l_∞ norm of the vector of dual variables to obtain a proportional envelope for the barrier objective suffices.

We have thus shown that we can guarantee the existence of an envelope around the filter simply by using information from the previous iteration. Furthermore, we are guaranteed that we can always find an α that will give us a sufficient decrease over the previous iteration.

For the filter algorithm that uses the objective function, (11) is replaced with

(13)    f^(k+1) ≤ f^(j) + ɛα (∇f^(j))^T Δx^(j),

which is the standard Armijo condition. For the third filter-based algorithm, we only compare against the last iterate, so (11) is replaced with

(14)    b^(k+1)_{µ^(k)} ≤ b^(k)_{µ^(k)} + ɛα [∇_x b^(k)_{µ^(k)}; ∇_w b^(k)_{µ^(k)}]^T [Δx^(k); Δw^(k)],

and (12) is replaced with

(15)    ‖ρ^(k+1)‖_2^2 ≤ ‖ρ^(k)‖_2^2 + 2ɛα ρ^(k)T [∇_x ρ^(k)  ∇_w ρ^(k)] [Δx^(k); Δw^(k)].

Note that the last two expressions correspond to the standard Armijo condition.

We have also incorporated into our code measures to avoid a large increase in either the barrier objective (or objective) or the infeasibility in exchange for a small decrease in
the other. If we have a solution that satisfies ‖ρ(x^(k), w^(k))‖ ≤ 10^{−10}, then we require for the next iteration either a sufficient decrease in the barrier objective (or objective) function or a decrease in the infeasibility of at least one order of magnitude. Also, if the primal infeasibility is exactly 0, we insist on a sufficient decrease in the barrier objective (or objective) function for the next iteration.

Finally, we should note that in order to save time, a maximum of 10 stepcuts can be performed at each iteration. This is the default value in loqo, and it has been implemented in the hybrid algorithms as well.

Feasibility Restoration. Hybrid #3, as presented above, may run into further numerical difficulties when the iterates are feasible and close to optimality. It may, in fact, be the case that the current point is superoptimal and the infeasibility is a very small value less than 10^{−10}. Then, the maximum of 10 step cuts may not reduce the barrier objective (or objective) function below the superoptimal value, and we may not be able to reduce the infeasibility by an order of magnitude, either. The algorithm simply gets stuck at this point, performing 10 step cuts at each iteration, and it will either fail without achieving the default levels of accuracy or it will slow down considerably.

However, this is an easy situation to remedy. When the infeasibility level is so low and 10 step cuts are being performed at each iteration, the required feasibility improvement is changed from one order of magnitude back to the Armijo condition, and this allows the
algorithm to step back to a more feasible solution and attain an optimal solution with the default levels of accuracy.

In this chapter, we have presented three filter-based algorithms for use with interior-point methods. All three of these algorithms have been implemented within the interior-point algorithm of loqo. Extensive numerical testing has been performed comparing these variants to each other and to the original algorithm, which uses a merit function. The results are presented in the next chapter.
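The sufficient-decrease tests of this chapter, conditions (14) and (15), can be sketched as follows. This is a hedged illustration: the Armijo constant EPS is a hypothetical value, and the gradient and step vectors are supplied by the caller rather than computed as in loqo.

```python
EPS = 1e-4  # Armijo constant (hypothetical value, not loqo's setting)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def barrier_armijo_ok(b_new, b_old, grad_b, step, alpha):
    """Condition (14): b^(k+1) <= b^(k) + eps*alpha * grad_b . [dx; dw],
    where grad_b and step are the stacked gradient and step vectors."""
    return b_new <= b_old + EPS * alpha * dot(grad_b, step)

def infeas_armijo_ok(rho_new, rho_old, alpha):
    """Condition (15) with the Newton simplification: the Newton system gives
    J_rho [dx; dw] = -rho, so the slope term is -2*eps*alpha*||rho||^2."""
    return dot(rho_new, rho_new) <= (1 - 2 * EPS * alpha) * dot(rho_old, rho_old)
```

A step is accepted when either test passes, which mirrors the either/or structure of the filter acceptance rule.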
CHAPTER 5

Numerical Results: Comparison of Steplength Control Methods

Fletcher and Leyffer report encouraging numerical results for the performance of their filter-method sequential quadratic programming code, filterSQP, as compared to their original code, l1SQP, and to the Conn-Gould-Toint trust-region code, Lancelot. In this study, our goal is to ascertain the effects of using filter-based methods versus a merit function in an interior-point setting. To the best of our knowledge, no previous study exists that compares the two approaches as implemented within the framework of the same interior-point algorithm.

We have tried many variants of the filter-based approach, and, in this section, we will discuss numerical results for the three best versions discussed in the previous sections. We will provide pairwise comparisons between these methods and the current version of loqo, and also among each other. Thus, the four implementations are:

- No filter, with merit function (LOQO)
- Filter using the barrier objective function, with merit function (FB)
- Filter using the objective function, with merit function (FO)
- Filter on previous iteration only, no merit function (FP)

As any code using Newton's method requires second partial derivatives, we have chosen to formulate the models in AMPL [27], a modelling language that provides analytic first and second partial derivatives. In order to construct a meaningful test suite, we have been engaged in reformulating from standard input format (SIF) to AMPL all models in the
CUTE [17] (constrained and unconstrained testing environment) test suite. To date, we have converted and validated 699 models. For those problems with variable size, we have used the largest suggested number of variables and constraints, except in the case of the ncvxqp family of problems and fminsurf, where the largest suggested sizes were beyond the capabilities of all solvers. In addition, we have expressed the entire Schittkowski [58] test suite in AMPL. Together, this comprises a test suite of 889 AMPL models, which form the test set for this study. These models vary greatly in size and difficulty and have proved useful in drawing meaningful conclusions. All of the AMPL models used in our testing are available at [63].

The CUTE suite contains some problems that are excluded from our set. We have not yet converted to AMPL any models requiring special functions, as well as some of the more complex models. We will continue to convert the remainder of the suite to AMPL as time allows, but believe that the results of this section show that the current test suite is sufficiently rich to provide meaningful information.

We have built the algorithm variants from loqo Version 5.06, which was called from AMPL Version. All testing was conducted on a SUN SPARC Station running SunOS 5.8 with 4GB of main memory and a 400MHz clock speed.

Since detailed results are too voluminous to present here, we provide summary statistics and pairwise comparisons of the algorithms in Tables 1-6. Tables with more detailed comparisons can be found in Appendix 1. Each comparison is broken down by the size of the problems, where we define the size as the number of variables plus the number of constraints in the model. Small problems have size less than 100, Medium problems have size 100 to less than 1000, Large problems have size 1000 to less than 10000, and
Very Large problems have size 10000 or more. Note that the size reported may differ from the model itself, since AMPL preprocesses a problem before passing it to the solvers. The total number of problems in each category is as follows: Small, 584; Medium, 100; Large, 108; Very Large, 97.

In Tables 1-6, we provide total iteration counts and runtimes for those problems where one of the solvers took fewer iterations to reach the optimum than the other. Since these are pairwise comparisons, each table contains information on a different group of problems; that is, the 18 problems where (FB) outperforms (LOQO) as reported in Table 1 and the 18 where (FO) outperforms (LOQO) as reported in Table 2 are not the same set of problems. That is why the iteration and runtime totals differ.

We have included problems that were not solved with the original settings of the loqo parameters but were able to be solved by tuning. The parameters that we most often tune are bndpush, the initial value of the slack variables; inftol, the primal and dual infeasibility tolerance; and sigfig, the number of digits of agreement between the primal and dual solutions. For a summary of which problems need tuning and their respective tuning parameters, see [39]. In our pairwise comparisons, we only include those problems that either were not tuned for either solver or had the same tuning parameters for both.
Tables 1 and 2 break down the pairwise comparisons by problem size. For each size class (Small, Medium, Large, Very Large), they report total iteration counts (Iter) and runtimes (Time) for both solvers, separately over the problems where each solver performed better.

Table 1. Comparison of LOQO to FB on commonly solved problems.

Table 2. Comparison of LOQO to FO on commonly solved problems.
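The bookkeeping behind such pairwise comparisons can be sketched as follows. This is an illustrative reconstruction only; the data structures and names are hypothetical, not those used to produce the tables.

```python
def size_class(n):
    """Size = number of variables plus number of constraints."""
    if n < 100:
        return "Small"
    if n < 1000:
        return "Medium"
    if n < 10000:
        return "Large"
    return "Very Large"

def pairwise_summary(results_a, results_b, sizes):
    """results_*: problem -> (iterations, runtime); sizes: problem -> size.
    Over commonly solved problems where the iteration counts differ, return
    totals keyed by (size class, winner): [iter_a, time_a, iter_b, time_b]."""
    totals = {}
    for prob in results_a.keys() & results_b.keys():
        ia, ta = results_a[prob]
        ib, tb = results_b[prob]
        if ia == ib:
            continue  # only count problems where one solver took fewer iterations
        winner = "A" if ia < ib else "B"
        acc = totals.setdefault((size_class(sizes[prob]), winner), [0, 0.0, 0, 0.0])
        acc[0] += ia
        acc[1] += ta
        acc[2] += ib
        acc[3] += tb
    return totals
```

Each pairwise table then has one row per (size class, winner) key, which is why different tables cover different sets of problems.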
Frank E. Curtis October 1, 2009 Outline PDE-Constrained Optimization Introduction Newton s method Inexactness Results Summary and future work Nonsmooth Optimization Sequential quadratic programming (SQP)
More informationUsing Interior-Point Methods within Mixed-Integer Nonlinear Programming
Using Interior-Point Methods within Mixed-Integer Nonlinear Programming. Hande Y. Benson Drexel University IMA - MINLP - p. 1/34 Motivation: Discrete Variables Motivation: Discrete Variables Interior-Point
More informationSF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 10: Interior methods. Anders Forsgren. 1. Try to solve theory question 7.
SF2822 Applied Nonlinear Optimization Lecture 10: Interior methods Anders Forsgren SF2822 Applied Nonlinear Optimization, KTH 1 / 24 Lecture 10, 2017/2018 Preparatory question 1. Try to solve theory question
More informationApolynomialtimeinteriorpointmethodforproblemswith nonconvex constraints
Apolynomialtimeinteriorpointmethodforproblemswith nonconvex constraints Oliver Hinder, Yinyu Ye Department of Management Science and Engineering Stanford University June 28, 2018 The problem I Consider
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization
E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained
More informationScientific Computing: Optimization
Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture
More informationInfeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization
Infeasibility Detection and an Inexact Active-Set Method for Large-Scale Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with James V. Burke, University of Washington Daniel
More informationNonlinear Optimization for Optimal Control
Nonlinear Optimization for Optimal Control Pieter Abbeel UC Berkeley EECS Many slides and figures adapted from Stephen Boyd [optional] Boyd and Vandenberghe, Convex Optimization, Chapters 9 11 [optional]
More informationNewton s Method. Ryan Tibshirani Convex Optimization /36-725
Newton s Method Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, Properties and examples: f (y) = max x
More informationAn Inexact Newton Method for Nonlinear Constrained Optimization
An Inexact Newton Method for Nonlinear Constrained Optimization Frank E. Curtis Numerical Analysis Seminar, January 23, 2009 Outline Motivation and background Algorithm development and theoretical results
More informationPrimal-Dual Interior-Point Methods. Ryan Tibshirani Convex Optimization
Primal-Dual Interior-Point Methods Ryan Tibshirani Convex Optimization 10-725 Given the problem Last time: barrier method min x subject to f(x) h i (x) 0, i = 1,... m Ax = b where f, h i, i = 1,... m are
More informationSelf-Concordant Barrier Functions for Convex Optimization
Appendix F Self-Concordant Barrier Functions for Convex Optimization F.1 Introduction In this Appendix we present a framework for developing polynomial-time algorithms for the solution of convex optimization
More informationOptimization and Root Finding. Kurt Hornik
Optimization and Root Finding Kurt Hornik Basics Root finding and unconstrained smooth optimization are closely related: Solving ƒ () = 0 can be accomplished via minimizing ƒ () 2 Slide 2 Basics Root finding
More informationSemidefinite Programming
Chapter 2 Semidefinite Programming 2.0.1 Semi-definite programming (SDP) Given C M n, A i M n, i = 1, 2,..., m, and b R m, the semi-definite programming problem is to find a matrix X M n for the optimization
More informationISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints
ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained
More informationNonlinear Optimization: What s important?
Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global
More informationSome new facts about sequential quadratic programming methods employing second derivatives
To appear in Optimization Methods and Software Vol. 00, No. 00, Month 20XX, 1 24 Some new facts about sequential quadratic programming methods employing second derivatives A.F. Izmailov a and M.V. Solodov
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More informationLecture 9 Sequential unconstrained minimization
S. Boyd EE364 Lecture 9 Sequential unconstrained minimization brief history of SUMT & IP methods logarithmic barrier function central path UMT & SUMT complexity analysis feasibility phase generalized inequalities
More informationBarrier Method. Javier Peña Convex Optimization /36-725
Barrier Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: Newton s method For root-finding F (x) = 0 x + = x F (x) 1 F (x) For optimization x f(x) x + = x 2 f(x) 1 f(x) Assume f strongly
More informationConstrained Nonlinear Optimization Algorithms
Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu Institute for Mathematics and its Applications University of Minnesota August 4, 2016
More informationApplications of Linear Programming
Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal
More informationPOWER SYSTEMS in general are currently operating
TO APPEAR IN IEEE TRANSACTIONS ON POWER SYSTEMS 1 Robust Optimal Power Flow Solution Using Trust Region and Interior-Point Methods Andréa A. Sousa, Geraldo L. Torres, Member IEEE, Claudio A. Cañizares,
More informationProximal Newton Method. Ryan Tibshirani Convex Optimization /36-725
Proximal Newton Method Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: primal-dual interior-point method Given the problem min x subject to f(x) h i (x) 0, i = 1,... m Ax = b where f, h
More informationSPARSE SECOND ORDER CONE PROGRAMMING FORMULATIONS FOR CONVEX OPTIMIZATION PROBLEMS
Journal of the Operations Research Society of Japan 2008, Vol. 51, No. 3, 241-264 SPARSE SECOND ORDER CONE PROGRAMMING FORMULATIONS FOR CONVEX OPTIMIZATION PROBLEMS Kazuhiro Kobayashi Sunyoung Kim Masakazu
More informationAN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS
AN INTERIOR-POINT METHOD FOR NONLINEAR OPTIMIZATION PROBLEMS WITH LOCATABLE AND SEPARABLE NONSMOOTHNESS MARTIN SCHMIDT Abstract. Many real-world optimization models comse nonconvex and nonlinear as well
More informationRanking from Crowdsourced Pairwise Comparisons via Matrix Manifold Optimization
Ranking from Crowdsourced Pairwise Comparisons via Matrix Manifold Optimization Jialin Dong ShanghaiTech University 1 Outline Introduction FourVignettes: System Model and Problem Formulation Problem Analysis
More informationOn nonlinear optimization since M.J.D. Powell
On nonlinear optimization since 1959 1 M.J.D. Powell Abstract: This view of the development of algorithms for nonlinear optimization is based on the research that has been of particular interest to the
More informationNumerical Comparisons of. Path-Following Strategies for a. Basic Interior-Point Method for. Revised August Rice University
Numerical Comparisons of Path-Following Strategies for a Basic Interior-Point Method for Nonlinear Programming M. A rg a e z, R.A. T a p ia, a n d L. V e l a z q u e z CRPC-TR97777-S Revised August 1998
More informationREGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS
REGULARIZED SEQUENTIAL QUADRATIC PROGRAMMING METHODS Philip E. Gill Daniel P. Robinson UCSD Department of Mathematics Technical Report NA-11-02 October 2011 Abstract We present the formulation and analysis
More informationA Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm
Journal name manuscript No. (will be inserted by the editor) A Primal-Dual Augmented Lagrangian Penalty-Interior-Point Filter Line Search Algorithm Rene Kuhlmann Christof Büsens Received: date / Accepted:
More informationModule 04 Optimization Problems KKT Conditions & Solvers
Module 04 Optimization Problems KKT Conditions & Solvers Ahmad F. Taha EE 5243: Introduction to Cyber-Physical Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ taha/index.html September
More informationNumerisches Rechnen. (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang. Institut für Geometrie und Praktische Mathematik RWTH Aachen
Numerisches Rechnen (für Informatiker) M. Grepl P. Esser & G. Welper & L. Zhang Institut für Geometrie und Praktische Mathematik RWTH Aachen Wintersemester 2011/12 IGPM, RWTH Aachen Numerisches Rechnen
More informationSparse Optimization Lecture: Basic Sparse Optimization Models
Sparse Optimization Lecture: Basic Sparse Optimization Models Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know basic l 1, l 2,1, and nuclear-norm
More informationNewton s Method. Javier Peña Convex Optimization /36-725
Newton s Method Javier Peña Convex Optimization 10-725/36-725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, f ( (y) = max y T x f(x) ) x Properties and
More informationPrimal/Dual Decomposition Methods
Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients
More informationThe Solution of Euclidean Norm Trust Region SQP Subproblems via Second Order Cone Programs, an Overview and Elementary Introduction
The Solution of Euclidean Norm Trust Region SQP Subproblems via Second Order Cone Programs, an Overview and Elementary Introduction Florian Jarre, Felix Lieder, Mathematisches Institut, Heinrich-Heine
More informationIntroduction to Nonlinear Stochastic Programming
School of Mathematics T H E U N I V E R S I T Y O H F R G E D I N B U Introduction to Nonlinear Stochastic Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio SPS
More informationA GLOBALLY CONVERGENT STABILIZED SQP METHOD
A GLOBALLY CONVERGENT STABILIZED SQP METHOD Philip E. Gill Daniel P. Robinson July 6, 2013 Abstract Sequential quadratic programming SQP methods are a popular class of methods for nonlinearly constrained
More informationOptimization for Machine Learning
Optimization for Machine Learning (Problems; Algorithms - A) SUVRIT SRA Massachusetts Institute of Technology PKU Summer School on Data Science (July 2017) Course materials http://suvrit.de/teaching.html
More information