Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method


Following the Central Trajectory Using the Monomial Method Rather Than Newton's Method

Yi-Chih Hsieh and Dennis L. Bricker
Department of Industrial Engineering
The University of Iowa, Iowa City, IA
February 1994

Abstract

A new infeasible path-following algorithm based on the monomial method, rather than Newton's method, is proposed to solve the convex quadratic programming problem. This algorithm generates a sequence of solutions which lie exactly on the central trajectory. The different performances of the algorithms based on Newton's method and on the monomial method are illustrated by computational results.

KEYWORDS: Convex Quadratic Programming, Path-Following Algorithm, Interior-Point Method, Monomial Method
1. Introduction

We consider the linearly constrained convex quadratic programming problem (QP)

    Min (1/2) x^T Q x + c^T x
    s.t. A x >= b
         x >= 0

where Q ∈ R^{n×n} is a symmetric positive semidefinite matrix, A ∈ R^{m×n}, x, c ∈ R^n, and b ∈ R^m. QP problems have been widely studied, and many algorithms have been proposed to solve them. In 1979, Kozlov et al. [9] first proposed a polynomial-time algorithm for QP problems based on the ellipsoid method. With the advent of the interior point algorithm of Karmarkar [7] for solving linear programming (LP) problems, several algorithms based on the interior point method for solving LP and QP problems have been studied. Most of them are based on Newton's method for solving the system of nonlinear Karush-Kuhn-Tucker (KKT) equations. (See, for example, Goldfarb and Liu [6], Monteiro and Adler [12], and Mehrotra and Sun [10] for QP, and Renegar [14], Kojima et al. [8], Monteiro and Adler [11], and McShane et al. [13], etc., for LP.)

Recently, a method called the "monomial method" has been used to solve systems of algebraic nonlinear equations (see, for example, Burns [2], [3], and [4]). It is well known that Newton's method uses the linear (first-order) part of the Taylor series expansion to approximate each nonlinear equation. In contrast, the monomial method is based on a system of approximating equations that are monomial in form; it may thus be regarded as a different type of linearization. Although the monomial method is based on an alternative type of linearization, its performance appears very different from that of Newton's method. From Burns' examples (Burns [2], [3], and [4]), it appears that in some cases the monomial method outperforms Newton's method in several respects. For example, the
monomial method converges much faster than Newton's method when given extreme starting points, and the monomial method avoids certain computational errors, e.g., floating-point overflow. The main purpose of this paper is to demonstrate the different performances of the algorithms based on Newton's method and the monomial method for QP problems.

The sections which follow are organized thus: After a brief description of the system of nonlinear KKT equations in Section 2, we outline the algorithm based on Newton's method in Section 3. In Section 4, the basic concept of the monomial method is presented and the algorithm based on this method is proposed. Computational results and brief conclusions are provided in the last section.

2. Convex Quadratic Problem

Consider the standard convex quadratic program:

(QP)    Min (1/2) x^T Q x + c^T x
        s.t. A x - y = b
             x, y >= 0

Its dual is:

(QPD)   Max -(1/2) x^T Q x + b^T w
        s.t. -Q x + A^T w + s = c
             s, w >= 0

where x, s, c ∈ R^{n×1}, y, w, b ∈ R^{m×1}, Q ∈ R^{n×n}, and A ∈ R^{m×n}. We impose the following assumptions:

(A1) The matrix Q is positive semidefinite.
(A2) The constraint matrix A has full row rank.
(A3) The feasible region is nonempty and bounded.
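To make the standard form concrete, the small example of Section 5 can be cast as (QP) data. This is our own illustration: the slack form A x - y = b with y >= 0 corresponds to A x >= b, so the example's <= constraints are negated.

```python
import numpy as np

# Standard-form data for (QP): min (1/2) x^T Q x + c^T x  s.t.  A x - y = b, x, y >= 0.
# Cast from the example of Section 5:
#   min 2 x1^2 + 2 x2^2 - 2 x1 x2 - 4 x1 - 6 x2
#   s.t. x1 + x2 <= 2,  x1 + 5 x2 <= 5,  x >= 0.
Q = np.array([[4.0, -2.0],
              [-2.0, 4.0]])      # Hessian of the quadratic objective
c = np.array([-4.0, -6.0])
A = np.array([[-1.0, -1.0],      # -x1 -   x2 >= -2
              [-1.0, -5.0]])     # -x1 - 5 x2 >= -5
b = np.array([-2.0, -5.0])

# Check assumptions (A1) and (A2):
print("eigenvalues of Q:", np.linalg.eigvalsh(Q))       # nonnegative, so Q is PSD
print("row rank of A:", np.linalg.matrix_rank(A))       # 2 = m, so full row rank
```

Assumption (A3) holds as well, since the feasible region is contained in the box defined by the two linear constraints and x >= 0.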
For x, y > 0 in (QP) and s, w > 0 in (QPD), we can apply the logarithmic barrier function technique and obtain the nonlinear programming problems (QP_µ) and (QPD_µ):

(QP_µ)   Min (1/2) x^T Q x + c^T x - µ Σ_{j=1}^n log x_j - µ Σ_{j=1}^m log y_j
         s.t. A x - y = b
              x, y > 0

and

(QPD_µ)  Max -(1/2) x^T Q x + b^T w + µ Σ_{j=1}^m log w_j + µ Σ_{j=1}^n log s_j
         s.t. -Q x + A^T w + s = c
              w, s > 0

where µ > 0 is a barrier parameter. It is expected that the optimal solution of problem (QP_µ) converges to the optimal solution of the original problem (QP) as µ → 0. Convex programming theory further implies that the global solution, if one exists, is completely characterized by the KKT conditions:

    A x - y = b,          x, y > 0    (primal feasibility)        (2.1a)
    -Q x + A^T w + s = c, s, w > 0    (dual feasibility)          (2.1b)
    X S e_n = µ e_n                   (complementary slackness)   (2.1c)
    W Y e_m = µ e_m                   (complementary slackness)   (2.1d)

where X, S, W, and Y are diagonal matrices whose diagonal elements are the components of the vectors x, s, w, and y, respectively, and e_i is the column vector with i elements, each with value one.
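The conditions (2.1a)-(2.1d) are easy to evaluate numerically. The sketch below is our own helper (the function name is ours, not the paper's); as a check, one can verify that for the example of Section 5 the point x* = (35/31, 24/31), with slacks and multipliers y* = (3/31, 0), s* = (0, 0), w* = (0, 32/31), makes every KKT residual vanish at µ = 0.

```python
import numpy as np

# Evaluate the KKT residuals (2.1a)-(2.1d) at a given point and barrier value.
def kkt_residuals(Q, c, A, b, x, y, s, w, mu):
    r_primal = A @ x - y - b               # (2.1a)
    r_dual   = -Q @ x + A.T @ w + s - c    # (2.1b)
    r_xs     = x * s - mu                  # (2.1c): X S e_n = mu e_n
    r_yw     = y * w - mu                  # (2.1d): W Y e_m = mu e_m
    return r_primal, r_dual, r_xs, r_yw

# Data of the example from Section 5 (<= constraints negated into A x >= b form):
Q = np.array([[4.0, -2.0], [-2.0, 4.0]]); c = np.array([-4.0, -6.0])
A = np.array([[-1.0, -1.0], [-1.0, -5.0]]); b = np.array([-2.0, -5.0])

# Candidate optimal primal-dual point, checked against (2.1) with mu = 0:
x = np.array([35/31, 24/31]); y = np.array([3/31, 0.0])
s = np.array([0.0, 0.0]);     w = np.array([0.0, 32/31])
for r in kkt_residuals(Q, c, A, b, x, y, s, w, mu=0.0):
    print(np.round(r, 12))    # every residual is zero
```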
3. Algorithm Based on Newton's Method

Assume that (x^k, y^k, s^k, w^k) > 0 is a current solution of equation (2.1) for given µ^k > 0. Applying Newton's method, we obtain a system of linear equations for the directions of translation:

    [  A    -I     0     0   ] [ d_x ]   [ b + y^k - A x^k           ]
    [ -Q     0     I    A^T  ] [ d_y ] = [ Q x^k + c - A^T w^k - s^k ]   (3.1)
    [ S^k    0    X^k    0   ] [ d_s ]   [ µ^k e_n - X^k S^k e_n     ]
    [  0    W^k    0    Y^k  ] [ d_w ]   [ µ^k e_m - W^k Y^k e_m     ]

Note that (3.1) can be expressed as

    A d_x - d_y = t_1              where t_1 = b + y^k - A x^k             (3.2a)
    -Q d_x + A^T d_w + d_s = t_2   where t_2 = Q x^k + c - A^T w^k - s^k   (3.2b)
    S^k d_x + X^k d_s = t_3        where t_3 = µ^k e_n - X^k S^k e_n       (3.2c)
    W^k d_y + Y^k d_w = t_4        where t_4 = µ^k e_m - W^k Y^k e_m       (3.2d)

Solving (3.2), one can derive the directions for iteration k as

    d_w = [W^k A (S^k + X^k Q)^{-1} X^k A^T + Y^k]^{-1} [(W^k t_1 + t_4) + W^k A (S^k + X^k Q)^{-1} (X^k t_2 - t_3)]   (3.3a)
    d_x = (S^k + X^k Q)^{-1} [X^k A^T d_w - (X^k t_2 - t_3)]   (3.3b)
    d_y = A d_x - t_1                                          (3.3c)
    d_s = (X^k)^{-1} (t_3 - S^k d_x)                           (3.3d)

Thus, a new solution can be obtained by choosing appropriate step sizes α_p for the primal and α_d for the dual, such that

    x^{k+1} = x^k + α_p d_x    (3.4a)
    y^{k+1} = y^k + α_p d_y    (3.4b)
    s^{k+1} = s^k + α_d d_s    (3.4c)
    w^{k+1} = w^k + α_d d_w    (3.4d)

where
    α_p = min { [max_i {1, -d_{x_i}/(α x_i^k)}]^{-1}, [max_i {1, -d_{y_i}/(α y_i^k)}]^{-1} }   (3.5a)

and

    α_d = min { [max_i {1, -d_{w_i}/(α w_i^k)}]^{-1}, [max_i {1, -d_{s_i}/(α s_i^k)}]^{-1} }   (0 < α < 1)   (3.5b)

For each iteration, the barrier parameter µ is adjusted as follows:

    µ^k = σ [(x^k)^T s^k + (y^k)^T w^k] / (n + m),   where 0 < σ < 1   (3.6)

Therefore, we can state the algorithm based on Newton's method as follows.

Algorithm based on Newton's method (Algorithm NM):

Step 1: (Initialization) Set k = 0. Start with any initial solution (x^0, y^0, s^0, w^0) > 0. Choose three small values for ε_1, ε_2, and ε_3, and α, σ ∈ (0,1).
Step 2: (Intermediate computation) Compute µ^k by (3.6) and t_1, t_2, t_3, and t_4 by (3.2), respectively.
Step 3: (Checking optimality) If

    µ^k < ε_1,   ||t_1|| / (||b|| + 1) < ε_2,   and   ||t_2|| / (||Q x^k + c|| + 1) < ε_3   (3.7)

then stop; the current solution is accepted as the optimal solution. Else proceed to the next step.
Step 4: (Finding the directions) Compute d_w, d_x, d_y, and d_s by (3.3).
Step 5: (Computing step sizes) Compute α_p and α_d by (3.5).
Step 6: (Finding the new solution) Compute x^{k+1}, y^{k+1}, s^{k+1}, and w^{k+1} by (3.4). Set k = k + 1 and go to Step 2.
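A compact runnable sketch of Algorithm NM on the first test problem of Section 5 is shown below, using the elimination formulas (3.3) for the directions. The starting point, tolerances, and the parameter choices α = 0.99 and σ = 0.1 are our own, for illustration only.

```python
import numpy as np

# Algorithm NM on: min 2x1^2 + 2x2^2 - 2x1x2 - 4x1 - 6x2
#                  s.t. x1 + x2 <= 2, x1 + 5x2 <= 5, x >= 0  (in A x >= b form).
Q = np.array([[4.0, -2.0], [-2.0, 4.0]]); c = np.array([-4.0, -6.0])
A = np.array([[-1.0, -1.0], [-1.0, -5.0]]); b = np.array([-2.0, -5.0])
n = m = 2
x = np.array([10.0, 10.0]); y = np.ones(m); s = np.ones(n); w = np.ones(m)
alpha, sigma = 0.99, 0.1

def step_size(v, dv):
    # (3.5): largest step in (0, 1] keeping v + step*dv strictly positive
    return 1.0 / max(1.0, np.max(-dv / (alpha * v)))

for k in range(200):
    mu = sigma * (x @ s + y @ w) / (n + m)          # (3.6)
    t1 = b + y - A @ x                              # (3.2a)
    t2 = Q @ x + c - A.T @ w - s                    # (3.2b)
    t3 = mu - x * s                                 # (3.2c)
    t4 = mu - w * y                                 # (3.2d)
    if mu < 1e-10 and np.linalg.norm(t1) < 1e-8 and np.linalg.norm(t2) < 1e-8:
        break                                       # stopping rule (3.7)
    SXQinv = np.linalg.inv(np.diag(s) + x[:, None] * Q)   # (S + X Q)^{-1}
    WA = w[:, None] * A                                   # W A
    dw = np.linalg.solve(WA @ SXQinv @ (x[:, None] * A.T) + np.diag(y),
                         w * t1 + t4 + WA @ SXQinv @ (x * t2 - t3))   # (3.3a)
    dx = SXQinv @ ((x[:, None] * A.T) @ dw - (x * t2 - t3))           # (3.3b)
    dy = A @ dx - t1                                                  # (3.3c)
    ds = (t3 - s * dx) / x                                            # (3.3d)
    ap = step_size(np.concatenate([x, y]), np.concatenate([dx, dy]))
    ad = step_size(np.concatenate([s, w]), np.concatenate([ds, dw]))
    x, y = x + ap * dx, y + ap * dy                                   # (3.4a-b)
    s, w = s + ad * ds, w + ad * dw                                   # (3.4c-d)

print(k, np.round(x, 6))   # final x should be close to (35/31, 24/31)
```

The interior-point iterates stay strictly positive throughout, and the final primal point agrees with the optimum x* = (35/31, 24/31) of this example.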
4. Algorithm Based on the Monomial Method

4.1 Basic Concepts of the Monomial Method

Consider the following general class of N nonlinear equations in N unknowns:

    Σ_{i=1}^{T_q} ŝ_iq ĉ_iq Π_{j=1}^N x̂_j^{a_ijq} = 0,   q = 1, 2, ..., N,   (4.1)

where ŝ_iq ∈ {-1, +1} are the signs of the terms, ĉ_iq > 0 are the coefficients, the exponents a_ijq are real numbers unrestricted in sign, x̂_j > 0 are the variables, and T_q is the total number of terms in equation q. We define

    u_iq = ĉ_iq Π_{j=1}^N x̂_j^{a_ijq},

so that (4.1) can be rewritten as

    Σ_{i=1}^{T_q} ŝ_iq u_iq = 0,   q = 1, 2, ..., N.   (4.2)

Let T_q^+ = { i : ŝ_iq = +1 } and T_q^- = { i : ŝ_iq = -1 } for q = 1, 2, ..., N. Hence, (4.2) can be further expressed as

    Σ_{i ∈ T_q^+} u_iq - Σ_{i ∈ T_q^-} u_iq = 0,   q = 1, 2, ..., N,

or equivalently, since each u_iq > 0,

    [Σ_{i ∈ T_q^+} u_iq] / [Σ_{i ∈ T_q^-} u_iq] = 1,   q = 1, 2, ..., N.   (4.3)

We further define the weights

    δ_iq^+ = u_iq / P_q  for i ∈ T_q^+   and   δ_iq^- = u_iq / Q_q  for i ∈ T_q^-   (4.4)
where P_q = Σ_{i ∈ T_q^+} u_iq and Q_q = Σ_{i ∈ T_q^-} u_iq, with each u_iq, P_q, and Q_q evaluated at the current point x̂ = x̂^k.

Property 4.1 (weighted arithmetic-geometric mean inequality):

    Σ_{i ∈ T_q^+} u_iq ≥ Π_{i ∈ T_q^+} (u_iq / δ_iq^+)^{δ_iq^+}   and   Σ_{i ∈ T_q^-} u_iq ≥ Π_{i ∈ T_q^-} (u_iq / δ_iq^-)^{δ_iq^-},   q = 1, 2, ..., N,

with equalities if and only if u_iq / δ_iq is constant over i, for q = 1, 2, ..., N. Note that at x̂ = x̂^k the weights (4.4) make u_iq / δ_iq constant, so the approximation below agrees with the original equation at the current iterate.

Using this property, we can approximate (4.3) by

    [Π_{i ∈ T_q^+} (u_iq / δ_iq^+)^{δ_iq^+}] / [Π_{i ∈ T_q^-} (u_iq / δ_iq^-)^{δ_iq^-}] = 1   (4.5)

or, equivalently,

    Π_{j=1}^N x̂_j^{D_jq} = 1 / H_q   (4.6)

where

    H_q = [Π_{i ∈ T_q^+} (ĉ_iq / δ_iq^+)^{δ_iq^+}] / [Π_{i ∈ T_q^-} (ĉ_iq / δ_iq^-)^{δ_iq^-}]   and   D_jq = Σ_{i ∈ T_q^+} δ_iq^+ a_ijq - Σ_{i ∈ T_q^-} δ_iq^- a_ijq   (4.7)

Transforming the variables according to x̂_j = e^{z_j}, we have the linear system

    Σ_{j=1}^N D_jq z_j = -log H_q,   q = 1, 2, ..., N.   (4.8)

Thus, solving the linear equations (4.8) for z_j, we find the new iterate as x̂_j = e^{z_j}.

4.2 Algorithm Based on the Monomial Method

Applying the monomial method to the system of KKT equations (2.1), we have the following system of equations:
    [ A_1x  A_1y   0     0   ] [ z_x^{k+1} ]   [ ξ_x ]
    [ A_2x   0    A_2s  A_2w ] [ z_y^{k+1} ]   [ ξ_y ]
    [  I     0     I     0   ] [ z_s^{k+1} ] = [ ξ_s ]   (4.9)
    [  0     I     0     I   ] [ z_w^{k+1} ]   [ ξ_w ]

where A_1x ∈ R^{m×n}, A_1y ∈ R^{m×m}, A_2x, A_2s ∈ R^{n×n}, A_2w ∈ R^{n×m}, ξ_x, ξ_w ∈ R^{m×1}, and ξ_y, ξ_s ∈ R^{n×1}. Note that the dimension of the matrix on the left-hand side of (4.9) is (2n + 2m) × (2n + 2m); that is, there are (2n + 2m) variables in the system of linear equations.

Property 4.2: The elements of the vectors ξ_s and ξ_w are all equal to log µ^k, where

    µ^k = σ [(x^k)^T s^k + (y^k)^T w^k] / (n + m),   0 < σ < 1.

This property implies that the sequence of solutions is exactly on the central trajectory.

Equation (4.9) may be solved as follows. From (4.9), we get

    A_1x z_x^{k+1} + A_1y z_y^{k+1} = ξ_x                       (4.10a)
    A_2x z_x^{k+1} + A_2s z_s^{k+1} + A_2w z_w^{k+1} = ξ_y      (4.10b)
    z_x^{k+1} + z_s^{k+1} = ξ_s                                 (4.10c)
    z_y^{k+1} + z_w^{k+1} = ξ_w                                 (4.10d)

By (4.10c),   z_s^{k+1} = ξ_s - z_x^{k+1}   (4.11)
By (4.10d),   z_w^{k+1} = ξ_w - z_y^{k+1}   (4.12)

Substituting (4.11) and (4.12) into (4.10b), we have

    A_2x z_x^{k+1} + A_2s (ξ_s - z_x^{k+1}) + A_2w (ξ_w - z_y^{k+1}) = ξ_y,

which implies

    (A_2x - A_2s) z_x^{k+1} - A_2w z_y^{k+1} = ξ_y - A_2s ξ_s - A_2w ξ_w.   (4.13)
Hence, if (A_2x - A_2s) has full rank, we further have

    z_x^{k+1} - (A_2x - A_2s)^{-1} A_2w z_y^{k+1} = (A_2x - A_2s)^{-1} (ξ_y - A_2s ξ_s - A_2w ξ_w)   (4.14)

Multiplying (4.14) by A_1x produces

    A_1x z_x^{k+1} - A_1x (A_2x - A_2s)^{-1} A_2w z_y^{k+1} = A_1x (A_2x - A_2s)^{-1} (ξ_y - A_2s ξ_s - A_2w ξ_w)   (4.15)

Subtracting (4.15) from (4.10a), we obtain

    [A_1x (A_2x - A_2s)^{-1} A_2w + A_1y] z_y^{k+1} = ξ_x - A_1x (A_2x - A_2s)^{-1} (ξ_y - A_2s ξ_s - A_2w ξ_w)   (4.16)

That is,

    z_y^{k+1} = [A_1x (A_2x - A_2s)^{-1} A_2w + A_1y]^{-1} [ξ_x - A_1x (A_2x - A_2s)^{-1} (ξ_y - A_2s ξ_s - A_2w ξ_w)]   (4.17)

By (4.14), we have

    z_x^{k+1} = (A_2x - A_2s)^{-1} (ξ_y - A_2s ξ_s - A_2w ξ_w + A_2w z_y^{k+1})   (4.18)

Thus, after computing (4.17), (4.18), (4.11), and (4.12), respectively, we may find the new iterate (componentwise) as

    x^{k+1} = e^{z_x^{k+1}},   y^{k+1} = e^{z_y^{k+1}},   s^{k+1} = e^{z_s^{k+1}},   w^{k+1} = e^{z_w^{k+1}}   (4.19)

Algorithm based on the monomial method (Algorithm MM):

Step 1: (Initialization) Set k = 0. Start with any initial solution (x^0, y^0, s^0, w^0) > 0, and choose three small values for ε_1, ε_2, and ε_3.
Step 2: (Checking optimality) Compute µ^k by (3.6) and t_1, t_2 by (3.2), respectively. If (3.7) is satisfied, then stop; the current solution is accepted as the optimal solution. Else proceed to the next step.
Step 3: (Evaluating weights) Compute the weights of each term and equation for iteration k by (4.4).
Step 4: (Intermediate computation)
Compute A_1x, A_1y, A_2x, A_2s, A_2w, ξ_x, ξ_y, ξ_s, and ξ_w by (4.7).
Step 5: (Solving the equations) Compute z_y^{k+1}, z_x^{k+1}, z_s^{k+1}, and z_w^{k+1} by (4.17), (4.18), (4.11), and (4.12), respectively.
Step 6: (Finding the new solution) Compute x^{k+1}, y^{k+1}, s^{k+1}, and w^{k+1} by (4.19). Set k = k + 1, and go to Step 2.

5. Computational Results and Conclusions

5.1 Computational Results

The first test problem is an example due to Bazaraa and Shetty [1], shown below:

    Min 2 x_1^2 + 2 x_2^2 - 2 x_1 x_2 - 4 x_1 - 6 x_2
    s.t. x_1 + x_2 <= 2
         x_1 + 5 x_2 <= 5
         x_1, x_2 >= 0

This is a very simple example, but it will demonstrate the different performances of the two algorithms, Algorithms NM and MM. For ease of comparison of these algorithms, we employ the following procedure:

1. We try 15,000 starting points, in which x^0 = (x_1^0, x_2^0) is an integer point with x_1^0 ∈ [1,150] and x_2^0 ∈ [1,100].
2. y^0 = s^0 = w^0 = (1,1)^T for each starting point.
3. The tolerances ε_1, ε_2, and ε_3 are set to a common small value, and σ^{k+1} is chosen according to whether (µ^k - µ^{k+1}) / µ^k exceeds 0.5.

Figures 1 and 2 illustrate the number of iterations required for these two algorithms to satisfy the convergence criterion. That is, each of the 15,000 starting points is shaded according to the number of iterations required to converge. One can see that Figure 2,
based on the monomial method, is more regular than Figure 1, based on Newton's method. It should also be noted that the average number of iterations to converge in Figure 1 (ranging from 17 to 27 iterations) is larger than that of Figure 2 (ranging from 16 to 21 iterations).

In addition, we have tested several different sizes of convex quadratic problems. These are separable problems, based upon Calamai's procedure for generating test problems (Calamai et al. [5]), of the form

(SQP)   Min F(x) = Σ_{l=1}^M f_l(x_1l, x_2l)
        s.t. a_11l x_1l + a_12l x_2l >= α_l
             a_21l x_1l + a_22l x_2l >= α_l
             x_1l + x_2l <= 30
             x_1l, x_2l >= 0,   l ∈ {1, 2, ..., M}   (5.1)

where a_11l, a_12l, a_21l, a_22l, and α_l are randomly generated such that assumptions (A2) and (A3) are satisfied, f_l(x_1l, x_2l) = (1/2) [ρ_1l (x_1l - t_1l)^2 + ρ_2l (x_2l - t_2l)^2] with ρ_1l, ρ_2l ∈ {0,1}, and t_1l, t_2l ∈ R. Note that, for this type of separable convex quadratic problem, the optimal solutions can be specified, and may be either extreme points, interior points, or boundary points.

Based on (5.1), we test problems of sizes M = 1, 2, 4, 8, 16, 18, 20, and 22. The numbers of variables and constraints are 2M and 3M, respectively. For each test problem, the optimal solutions of the subproblems may be either extreme points, interior points, or boundary points. We also impose the following conditions:

1. For each problem size, we generate 10 random test problems, and for each of these 10 test problems we try 5 random starting points (x^0, y^0, s^0, w^0) selected from the interval [1,100].
2. For each combination of test problem and starting point, Algorithms NM and MM were applied.
3. The convergence tolerances ε_1, ε_2, and ε_3 are set to a common small value.

The results shown in Table 1 were obtained using an HP 715/50 workstation. From Table 1, one can see that the average number of iterations for the monomial method is less than that of Newton's method. The cpu time for the monomial method is less than that of Newton's method when the problem size M is larger (for instance, when M = 16, 18, 20, and 22). It should be pointed out that Algorithm MM needs more arithmetic operations per iteration, owing to the computation of weights in Step 3. However, because for the larger problems the number of iterations required by Algorithm MM is reduced by approximately one iteration, its total cpu time is less than that of Algorithm NM.

5.2 Conclusions

We have proposed a path-following algorithm based on the monomial method, rather than Newton's method, for convex quadratic problems. From the limited computational results which we have presented, one may see that Algorithm MM appears better than Algorithm NM in various respects. For example, from Figures 1 and 2, we find that the former is more regular than the latter. From Table 1, one can see that the total number of iterations and the cpu time to converge are better for Algorithm MM when the problem size is larger. Further study is required in order to draw more definite conclusions; the authors are now investigating the global convergence and complexity of this new algorithm, as well as performing more complete computational testing.
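As a concluding numerical illustration of Property 4.2, the sketch below performs a single monomial iteration (4.4)-(4.8) on the KKT system (2.1) of the first test problem, building the condensed linear system (4.8) directly rather than through the block elimination (4.10)-(4.18). The helper names, starting point, and σ = 0.1 are our own choices. Because the complementarity equations are already monomial, the condensation is exact for them, and the new iterate satisfies x_i s_i = µ and y_i w_i = µ up to roundoff, i.e., it lies exactly on the central trajectory.

```python
import numpy as np

N = 8                      # variables ordered (x1, x2, y1, y2, s1, s2, w1, w2)

def e(*idx):
    # exponent vector of a monomial term in the 8 variables
    a = np.zeros(N)
    for i in idx:
        a[i] += 1.0
    return a

def monomial_step(eqs, v):
    # Condense each equation via Property 4.1 and solve the linear system (4.8).
    # Each equation is a list of (signed coefficient, exponent vector) pairs.
    D = np.zeros((N, N)); rhs = np.zeros(N)
    for q, terms in enumerate(eqs):
        u = np.array([abs(cf) * np.prod(v ** a) for cf, a in terms])
        P  = sum(ui for (cf, _), ui in zip(terms, u) if cf > 0)   # P_q
        Qm = sum(ui for (cf, _), ui in zip(terms, u) if cf < 0)   # Q_q
        logH = 0.0
        for (cf, a), ui in zip(terms, u):
            d = ui / (P if cf > 0 else Qm)            # weights (4.4)
            sgn = 1.0 if cf > 0 else -1.0
            logH += sgn * d * np.log(abs(cf) / d)     # log H_q, (4.7)
            D[q] += sgn * d * a                       # D_jq, (4.7)
        rhs[q] = -logH                                # (4.8)
    return np.exp(np.linalg.solve(D, rhs))            # new iterate, (4.19)

v = np.array([10.0, 10.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])   # any positive start
mu = 0.1 * (v[:2] @ v[4:6] + v[2:4] @ v[6:]) / 4.0         # (3.6) with sigma = 0.1
eqs = [
    [(-1.0, e(0)), (-1.0, e(1)), (-1.0, e(2)), (+2.0, e())],   # -x1 -  x2 - y1 + 2 = 0
    [(-1.0, e(0)), (-5.0, e(1)), (-1.0, e(3)), (+5.0, e())],   # -x1 - 5x2 - y2 + 5 = 0
    [(-4.0, e(0)), (+2.0, e(1)), (+1.0, e(4)),
     (-1.0, e(6)), (-1.0, e(7)), (+4.0, e())],                 # dual feasibility, row 1
    [(+2.0, e(0)), (-4.0, e(1)), (+1.0, e(5)),
     (-1.0, e(6)), (-5.0, e(7)), (+6.0, e())],                 # dual feasibility, row 2
    [(+1.0, e(0, 4)), (-mu, e())],                             # x1 s1 = mu
    [(+1.0, e(1, 5)), (-mu, e())],                             # x2 s2 = mu
    [(+1.0, e(2, 6)), (-mu, e())],                             # y1 w1 = mu
    [(+1.0, e(3, 7)), (-mu, e())],                             # y2 w2 = mu
]
vn = monomial_step(eqs, v)
print(vn[:2] * vn[4:6], vn[2:4] * vn[6:])   # both equal mu e, up to roundoff
```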
REFERENCES

[1] M. S. Bazaraa and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley and Sons (1979).
[2] S. A. Burns and A. Locascio, "A Monomial-Based Method for Solving Systems of Non-Linear Algebraic Equations", International Journal for Numerical Methods in Engineering, Vol. 31 (1991).
[3] S. A. Burns, "The Monomial Method and Asymptotic Properties of Algebraic Systems", to appear in International Journal for Numerical Methods in Engineering (1993).
[4] S. A. Burns, "The Monomial Method: Extensions, Variations, and Performance Issues", to appear in International Journal for Numerical Methods in Engineering (1993).
[5] P. H. Calamai, L. N. Vicente, and J. J. Judice, "New Techniques for Generating Quadratic Programming Test Problems", Mathematical Programming, Vol. 61 (1993).
[6] D. Goldfarb and S. Liu, "An O(n^3 L) Primal Interior Point Algorithm for Convex Quadratic Programming", Mathematical Programming, Vol. 49 (1991).
[7] N. Karmarkar, "A New Polynomial Time Algorithm for Linear Programming", Combinatorica, Vol. 4 (1984).
[8] M. Kojima, N. Megiddo, and S. Mizuno, "Theoretical Convergence of Large-Step Primal-Dual Interior Point Algorithms for Linear Programming", Mathematical Programming, Vol. 59, pp. 1-21 (1993).
[9] M. K. Kozlov, S. P. Tarasov, and L. G. Khachiyan, "Polynomial Solvability of Convex Quadratic Programming", Doklady Akademii Nauk SSSR (1979).
[10] S. Mehrotra and J. Sun, "An Interior Point Algorithm for Solving Smooth Convex Programs Based on Newton's Method", Contemporary Mathematics, Vol. 114 (1990).
[11] R. D. C. Monteiro and I. Adler, "Interior Path Following Primal-Dual Algorithms. Part I: Linear Programming", Mathematical Programming, Vol. 44 (1989).
[12] R. D. C. Monteiro and I. Adler, "Interior Path Following Primal-Dual Algorithms. Part II: Convex Quadratic Programming", Mathematical Programming, Vol. 44 (1989).
[13] K. McShane, C. Monma, and D. Shanno, "An Implementation of a Primal-Dual Interior Point Method for Linear Programming", ORSA Journal on Computing, Vol. 1, No. 2 (1989).
[14] J. Renegar, "A Polynomial-Time Algorithm Based on Newton's Method for Linear Programming", Mathematical Programming, Vol. 40 (1988).
Figure 1. Iteration counts for various starting points using Newton's method for Bazaraa's example.
Figure 2. Iteration counts for various starting points using the monomial method for Bazaraa's example.
    Number of subproblems          M=1       M=2       M=4       M=8       M=16   M=18   M=20   M=22
    Newton's iterations (cpu)      (0.3512)  (0.6912)  (1.4610)  (6.4020)  ( )    ( )    ( )    ( )
    Monomial iterations (cpu)      (0.5300)  (1.2440)  (2.5482)  (8.7306)  ( )    ( )    ( )    ( )

Table 1. Average numbers of iterations and cpu time for algorithms based on Newton's and monomial methods.
More informationConstraint Reduction for Linear Programs with Many Constraints
Constraint Reduction for Linear Programs with Many Constraints André L. Tits Institute for Systems Research and Department of Electrical and Computer Engineering University of Maryland, College Park PierreAntoine
More informationLecture 10. PrimalDual Interior Point Method for LP
IE 8534 1 Lecture 10. PrimalDual Interior Point Method for LP IE 8534 2 Consider a linear program (P ) minimize c T x subject to Ax = b x 0 and its dual (D) maximize b T y subject to A T y + s = c s 0.
More informationConvex Optimization. Newton s method. ENSAE: Optimisation 1/44
Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)
More informationCONSTRAINED NONLINEAR PROGRAMMING
149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach
More informationOptimality, Duality, Complementarity for Constrained Optimization
Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of WisconsinMadison May 2014 Wright (UWMadison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear
More informationA Regularized InteriorPoint Method for Constrained Nonlinear Least Squares
A Regularized InteriorPoint Method for Constrained Nonlinear Least Squares XII Brazilian Workshop on Continuous Optimization Abel Soares Siqueira Federal University of Paraná  Curitiba/PR  Brazil Dominique
More information12. Interiorpoint methods
12. Interiorpoint methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity
More informationPrimalDual InteriorPoint Methods. Ryan Tibshirani Convex Optimization
PrimalDual InteriorPoint Methods Ryan Tibshirani Convex Optimization 10725 Given the problem Last time: barrier method min x subject to f(x) h i (x) 0, i = 1,... m Ax = b where f, h i, i = 1,... m are
More informationLinear programming II
Linear programming II Review: LP problem 1/33 The standard form of LP problem is (primal problem): max z = cx s.t. Ax b, x 0 The corresponding dual problem is: min b T y s.t. A T y c T, y 0 Strong Duality
More informationConvergence Analysis of Inexact Infeasible Interior Point Method. for Linear Optimization
Convergence Analysis of Inexact Infeasible Interior Point Method for Linear Optimization Ghussoun AlJeiroudi Jacek Gondzio School of Mathematics The University of Edinburgh Mayfield Road, Edinburgh EH9
More informationSupport Vector Machines: Maximum Margin Classifiers
Support Vector Machines: Maximum Margin Classifiers Machine Learning and Pattern Recognition: September 16, 2008 Piotr Mirowski Based on slides by Sumit Chopra and FuJie Huang 1 Outline What is behind
More informationA ConstraintReduced MPC Algorithm for Convex Quadratic Programming, with a Modified ActiveSet Identification Scheme
A ConstraintReduced MPC Algorithm for Convex Quadratic Programming, with a Modified ActiveSet Identification Scheme M. Paul Laiu 1 and (presenter) André L. Tits 2 1 Oak Ridge National Laboratory laiump@ornl.gov
More informationAn interiorpoint trustregion polynomial algorithm for convex programming
An interiorpoint trustregion polynomial algorithm for convex programming Ye LU and Yaxiang YUAN Abstract. An interiorpoint trustregion algorithm is proposed for minimization of a convex quadratic
More informationLecture: Algorithms for LP, SOCP and SDP
1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationConic Linear Programming. Yinyu Ye
Conic Linear Programming Yinyu Ye December 2004, revised January 2015 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture
More informationMarch 8, 2010 MATH 408 FINAL EXAM SAMPLE
March 8, 200 MATH 408 FINAL EXAM SAMPLE EXAM OUTLINE The final exam for this course takes place in the regular course classroom (MEB 238) on Monday, March 2, 8:300:20 am. You may bring twosided 8 page
More informationIBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel
and Nimrod Megiddo IBM Almaden Research Center,650 Harry Road Sun Jose, Calijornia 951206099 and School of Mathematical Sciences Tel Aviv University Tel Aviv, Israel Submitted by Richard Tapia ABSTRACT
More informationA nullspace primaldual interiorpoint algorithm for nonlinear optimization with nice convergence properties
A nullspace primaldual interiorpoint algorithm for nonlinear optimization with nice convergence properties Xinwei Liu and Yaxiang Yuan Abstract. We present a nullspace primaldual interiorpoint algorithm
More informationA SecondOrder PathFollowing Algorithm for Unconstrained Convex Optimization
A SecondOrder PathFollowing Algorithm for Unconstrained Convex Optimization Yinyu Ye Department is Management Science & Engineering and Institute of Computational & Mathematical Engineering Stanford
More informationfrom the primaldual interiorpoint algorithm (Megiddo [16], Kojima, Mizuno, and Yoshise
1. Introduction The primaldual infeasibleinteriorpoint algorithm which we will discuss has stemmed from the primaldual interiorpoint algorithm (Megiddo [16], Kojima, Mizuno, and Yoshise [7], Monteiro
More informationCHAPTER 2: QUADRATIC PROGRAMMING
CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,
More information1 Outline Part I: Linear Programming (LP) InteriorPoint Approach 1. Simplex Approach Comparison Part II: Semidenite Programming (SDP) Concludin
Sensitivity Analysis in LP and SDP Using InteriorPoint Methods E. Alper Yldrm School of Operations Research and Industrial Engineering Cornell University Ithaca, NY joint with Michael J. Todd INFORMS
More informationUnconstrained Optimization
1 / 36 Unconstrained Optimization ME598/494 Lecture Max Yi Ren Department of Mechanical Engineering, Arizona State University February 2, 2015 2 / 36 3 / 36 4 / 36 5 / 36 1. preliminaries 1.1 local approximation
More informationAdvanced Mathematical Programming IE417. Lecture 24. Dr. Ted Ralphs
Advanced Mathematical Programming IE417 Lecture 24 Dr. Ted Ralphs IE417 Lecture 24 1 Reading for This Lecture Sections 11.211.2 IE417 Lecture 24 2 The Linear Complementarity Problem Given M R p p and
More information4TE3/6TE3. Algorithms for. Continuous Optimization
4TE3/6TE3 Algorithms for Continuous Optimization (Algorithms for Constrained Nonlinear Optimization Problems) Tamás TERLAKY Computing and Software McMaster University Hamilton, November 2005 terlaky@mcmaster.ca
More informationOn Generalized PrimalDual InteriorPoint Methods with Nonuniform Complementarity Perturbations for Quadratic Programming
On Generalized PrimalDual InteriorPoint Methods with Nonuniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More informationFinding a point in the relative interior of a polyhedron
Report no. NA07/01 Finding a point in the relative interior of a polyhedron Coralia Cartis Rutherford Appleton Laboratory, Numerical Analysis Group Nicholas I. M. Gould Oxford University, Numerical Analysis
More informationminimize x x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x 2 u 2, 5x 1 +76x 2 1,
4 Duality 4.1 Numerical perturbation analysis example. Consider the quadratic program with variables x 1, x 2, and parameters u 1, u 2. minimize x 2 1 +2x2 2 x 1x 2 x 1 subject to x 1 +2x 2 u 1 x 1 4x
More informationA PRIMALDUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS. 1. Introduction Consider the quadratic program (PQ) in standard format:
STUDIA UNIV. BABEŞ BOLYAI, INFORMATICA, Volume LVII, Number 1, 01 A PRIMALDUAL INTERIOR POINT ALGORITHM FOR CONVEX QUADRATIC PROGRAMS MOHAMED ACHACHE AND MOUFIDA GOUTALI Abstract. In this paper, we propose
More informationSupport Vector Machine (SVM) and Kernel Methods
Support Vector Machine (SVM) and Kernel Methods CE717: Machine Learning Sharif University of Technology Fall 2014 Soleymani Outline Margin concept HardMargin SVM SoftMargin SVM Dual Problems of HardMargin
More informationAn E cient A nescaling Algorithm for Hyperbolic Programming
An E cient A nescaling Algorithm for Hyperbolic Programming Jim Renegar joint work with Mutiara Sondjaja 1 Euclidean space A homogeneous polynomial p : E!R is hyperbolic if there is a vector e 2E such
More informationOptimization Problems with Constraints  introduction to theory, numerical Methods and applications
Optimization Problems with Constraints  introduction to theory, numerical Methods and applications Dr. Abebe Geletu Ilmenau University of Technology Department of Simulation and Optimal Processes (SOP)
More informationSupport Vector Machine (SVM) and Kernel Methods
Support Vector Machine (SVM) and Kernel Methods CE717: Machine Learning Sharif University of Technology Fall 2015 Soleymani Outline Margin concept HardMargin SVM SoftMargin SVM Dual Problems of HardMargin
More informationWhat s New in ActiveSet Methods for Nonlinear Optimization?
What s New in ActiveSet Methods for Nonlinear Optimization? Philip E. Gill Advances in Numerical Computation, Manchester University, July 5, 2011 A Workshop in Honor of Sven Hammarling UCSD Center for
More informationA FULLNEWTON STEP INFEASIBLEINTERIORPOINT ALGORITHM COMPLEMENTARITY PROBLEMS
Yugoslav Journal of Operations Research 25 (205), Number, 57 72 DOI: 0.2298/YJOR3055034A A FULLNEWTON STEP INFEASIBLEINTERIORPOINT ALGORITHM FOR P (κ)horizontal LINEAR COMPLEMENTARITY PROBLEMS Soodabeh
More informationOptimization Tutorial 1. Basic Gradient Descent
E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.
More information15. Conic optimization
L. Vandenberghe EE236C (Spring 216) 15. Conic optimization conic linear program examples modeling duality 151 Generalized (conic) inequalities Conic inequality: a constraint x K where K is a convex cone
More informationMultidisciplinary System Design Optimization (MSDO)
Multidisciplinary System Design Optimization (MSDO) Numerical Optimization II Lecture 8 Karen Willcox 1 Massachusetts Institute of Technology  Prof. de Weck and Prof. Willcox Today s Topics Sequential
More informationA class of Smoothing Method for Linear SecondOrder Cone Programming
Columbia International Publishing Journal of Advanced Computing (13) 1: 94 doi:1776/jac1313 Research Article A class of Smoothing Method for Linear SecondOrder Cone Programming Zhuqing Gui *, Zhibin
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More informationA fullnewton step infeasible interiorpoint algorithm for linear programming based on a kernel function
A fullnewton step infeasible interiorpoint algorithm for linear programming based on a kernel function Zhongyi Liu, Wenyu Sun Abstract This paper proposes an infeasible interiorpoint algorithm with
More informationConstrained Optimization and Lagrangian Duality
CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may
More informationDuality revisited. Javier Peña Convex Optimization /36725
Duality revisited Javier Peña Conve Optimization 10725/36725 1 Last time: barrier method Main idea: approimate the problem f() + I C () with the barrier problem f() + 1 t φ() tf() + φ() where t > 0 and
More informationI.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec  Spring 2010
I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec  Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0
More information