Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method


Yi-Chih Hsieh and Dennis L. Bricker
Department of Industrial Engineering, The University of Iowa, Iowa City, IA 52242
February 1994

Abstract

A new infeasible path-following algorithm, based on the monomial method rather than Newton's method, is proposed to solve the convex quadratic programming problem. The algorithm generates a sequence of solutions that lies exactly on the central trajectory. Computational results illustrate the differing performance of the algorithms based on Newton's method and on the monomial method.

KEYWORDS: Convex Quadratic Programming, Path-Following Algorithm, Interior-Point Method, Monomial Method

1. Introduction

We consider the linearly constrained convex quadratic programming problem (QP)

\[ \min \; \tfrac{1}{2}x^T Q x + c^T x \quad \text{s.t.}\quad Ax \ge b, \; x \ge 0, \]

where Q ∈ R^{n×n} is a symmetric positive semi-definite matrix, A ∈ R^{m×n}, x, c ∈ R^n, and b ∈ R^m. QP problems have been widely studied, and many algorithms have been proposed to solve them. In 1979, Kozlov et al. [9] first proposed a polynomial-time algorithm for QP problems, based on the ellipsoid method. With the advent of Karmarkar's interior point algorithm [7] for solving linear programming (LP) problems, several algorithms based on the interior point method for solving LP and QP problems have been studied. Most of them are based on Newton's method for solving the system of nonlinear Karush-Kuhn-Tucker (KKT) equations. (See, for example, Goldfarb and Liu [6], Monteiro and Adler [12], and Mehrotra and Sun [10] for QP, and Renegar [14], Kojima et al. [8], Monteiro and Adler [11], and McShane et al. [13] for LP.)

Recently, a method called the "monomial method" has been used to solve systems of nonlinear algebraic equations (see, for example, Burns [2], [3], [4]). It is well known that Newton's method uses the linear (first-order) part of the Taylor series expansion to approximate each nonlinear equation. In contrast, the monomial method approximates the system by equations that are monomial in form; it is thus based on a different type of linearization, and its performance can differ markedly from that of Newton's method. From Burns' examples [2], [3], [4], it appears that in many respects the monomial method can outperform Newton's method.

For example, the monomial method converges much faster than Newton's method from extreme starting points, and it avoids certain computational errors, e.g., floating-point overflow. The main purpose of this paper is to demonstrate the differing performance of algorithms based on Newton's method and on the monomial method for QP problems.

The sections which follow are organized thus: after a brief description of the system of nonlinear KKT equations in Section 2, we outline the algorithm based on Newton's method in Section 3. In Section 4, the basic concept of the monomial method is presented and an algorithm based on this method is proposed. Computational results and brief conclusions are provided in the last section.

2. Convex Quadratic Problem

Consider the standard convex quadratic program:

\[ (QP)\qquad \min \; \tfrac{1}{2}x^T Q x + c^T x \quad \text{s.t.}\quad Ax - y = b, \; x, y \ge 0. \]

Its dual is:

\[ (QPD)\qquad \max \; -\tfrac{1}{2}x^T Q x + b^T w \quad \text{s.t.}\quad -Qx + A^T w + s = c, \; s, w \ge 0, \]

where x, s, c ∈ R^{n×1}, y, w, b ∈ R^{m×1}, Q ∈ R^{n×n}, and A ∈ R^{m×n}. We impose the following assumptions:

(A1) The matrix Q is positive semi-definite.
(A2) The constraint matrix A has full row rank.
(A3) The feasible region is nonempty and bounded.

For x, y > 0 in (QP) and s, w > 0 in (QPD), we can apply the logarithmic barrier function technique and obtain the nonlinear programming problems (QP_µ) and (QPD_µ):

\[ (QP_\mu)\qquad \min \; \tfrac{1}{2}x^T Q x + c^T x - \mu\sum_{j=1}^{n}\log x_j - \mu\sum_{j=1}^{m}\log y_j \quad \text{s.t.}\quad Ax - y = b, \; x, y > 0 \]

and

\[ (QPD_\mu)\qquad \max \; -\tfrac{1}{2}x^T Q x + b^T w + \mu\sum_{j=1}^{m}\log w_j + \mu\sum_{j=1}^{n}\log s_j \quad \text{s.t.}\quad -Qx + A^T w + s = c, \; w, s > 0, \]

where µ > 0 is a barrier parameter. The optimal solution of problem (QP_µ) is expected to converge to the optimal solution of the original problem (QP) as µ → 0. Convex programming theory further implies that the global solution, if one exists, is completely characterized by the KKT conditions:

Ax − y = b, x, y > 0 (primal feasibility) (2.1a)
−Qx + Aᵀw + s = c, s, w > 0 (dual feasibility) (2.1b)
X S e_n = µ e_n (complementary slackness) (2.1c)
W Y e_m = µ e_m (complementary slackness) (2.1d)

where X, S, W, and Y are diagonal matrices whose diagonal entries are the elements of the vectors x, s, w, and y, respectively, and e_i is the i-dimensional column vector whose elements all equal one.
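Both algorithms below are driven by the residuals of (2.1). As a concrete reference point, here is a minimal numpy sketch (function and variable names are ours, not the paper's) that evaluates how far a strictly positive point is from the central trajectory for a given µ:

```python
import numpy as np

def kkt_residuals(x, y, s, w, Q, A, b, c, mu):
    """Residuals of the perturbed KKT system (2.1a)-(2.1d).

    All four blocks vanish exactly when (x, y, s, w) lies on the
    central trajectory for the barrier parameter mu.
    """
    r_primal = A @ x - y - b             # (2.1a): Ax - y = b
    r_dual = -Q @ x + A.T @ w + s - c    # (2.1b): -Qx + A^T w + s = c
    r_xs = x * s - mu                    # (2.1c): XSe_n = mu e_n
    r_wy = w * y - mu                    # (2.1d): WYe_m = mu e_m
    return r_primal, r_dual, r_xs, r_wy
```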

3. Algorithm Based on Newton's Method

Assume that (x^k, y^k, s^k, w^k) > 0 is the current solution of equation (2.1) for given µ > 0. Applying Newton's method, we obtain a system of linear equations for the directions of translation:

\[
\begin{bmatrix} A & -I & 0 & 0 \\ -Q & 0 & I & A^T \\ S^k & 0 & X^k & 0 \\ 0 & W^k & 0 & Y^k \end{bmatrix}
\begin{bmatrix} d_x \\ d_y \\ d_s \\ d_w \end{bmatrix}
=
\begin{bmatrix} t_1 \\ t_2 \\ t_3 \\ t_4 \end{bmatrix}
\tag{3.1}
\]

Note that (3.1) can be expressed as

\[
\begin{aligned}
A d_x - d_y &= t_1, &\quad t_1 &= b + y^k - A x^k & (3.2a)\\
-Q d_x + A^T d_w + d_s &= t_2, &\quad t_2 &= Q x^k + c - A^T w^k - s^k & (3.2b)\\
S^k d_x + X^k d_s &= t_3, &\quad t_3 &= \mu e_n - X^k S^k e_n & (3.2c)\\
W^k d_y + Y^k d_w &= t_4, &\quad t_4 &= \mu e_m - W^k Y^k e_m & (3.2d)
\end{aligned}
\]

Solving (3.2), one can derive the directions for iteration k as

\[
\begin{aligned}
d_w &= \left[W^k A (S^k + X^k Q)^{-1} X^k A^T + Y^k\right]^{-1}\left[(W^k t_1 + t_4) + W^k A (S^k + X^k Q)^{-1}(X^k t_2 - t_3)\right] & (3.3a)\\
d_x &= (S^k + X^k Q)^{-1}\left[X^k A^T d_w - (X^k t_2 - t_3)\right] & (3.3b)\\
d_y &= A d_x - t_1 & (3.3c)\\
d_s &= (X^k)^{-1}\left(t_3 - S^k d_x\right) & (3.3d)
\end{aligned}
\]

Thus, a new solution can be obtained by choosing appropriate step sizes, α_p for the primal and α_d for the dual, such that

\[
x^{k+1} = x^k + \alpha_p d_x, \qquad
y^{k+1} = y^k + \alpha_p d_y, \qquad
s^{k+1} = s^k + \alpha_d d_s, \qquad
w^{k+1} = w^k + \alpha_d d_w
\tag{3.4a-d}
\]

where

\[ \alpha_p = \min\left\{ \min_i \frac{1}{\max\left\{1,\; -d_{x_i}/(\alpha x_i^k)\right\}},\;\; \min_i \frac{1}{\max\left\{1,\; -d_{y_i}/(\alpha y_i^k)\right\}} \right\} \tag{3.5a} \]

and

\[ \alpha_d = \min\left\{ \min_i \frac{1}{\max\left\{1,\; -d_{w_i}/(\alpha w_i^k)\right\}},\;\; \min_i \frac{1}{\max\left\{1,\; -d_{s_i}/(\alpha s_i^k)\right\}} \right\}, \qquad 0 < \alpha < 1. \tag{3.5b} \]

For each iteration, the barrier parameter µ is adjusted as follows:

\[ \mu^k = \sigma\, \frac{(x^k)^T s^k + (y^k)^T w^k}{n+m}, \qquad 0 < \sigma < 1. \tag{3.6} \]

Therefore, we can state the algorithm based on Newton's method:

Algorithm based on Newton's method (Algorithm NM):

Step 1: (Initialization) Set k = 0. Start with any initial solution (x^0, y^0, s^0, w^0) > 0. Choose three small values ε_1, ε_2, ε_3, and α, σ ∈ (0, 1).
Step 2: (Intermediate computation) Compute µ^k by (3.6) and t_1, t_2, t_3, t_4 by (3.2), respectively.
Step 3: (Checking optimality) If

\[ \mu^k < \varepsilon_1, \qquad \frac{\lVert t_1 \rVert}{\lVert b \rVert + 1} < \varepsilon_2, \qquad \frac{\lVert t_2 \rVert}{\lVert Q x^k + c \rVert + 1} < \varepsilon_3, \tag{3.7} \]

then stop; the current solution is accepted as the optimal solution. Else proceed to the next step.
Step 4: (Finding the directions) Compute d_w, d_x, d_y, and d_s by (3.3).
Step 5: (Computing step sizes) Compute α_p and α_d by (3.5).
Step 6: (Finding the new solution) Compute x^{k+1}, y^{k+1}, s^{k+1}, and w^{k+1} by (3.4). Set k = k + 1 and go to Step 2.
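To make Steps 2-6 concrete, the following is a minimal numpy sketch of one pass of Algorithm NM following (3.2)-(3.6); it assumes dense data and forms an explicit inverse of (S^k + X^k Q) purely for readability, and all names are our own:

```python
import numpy as np

def nm_iteration(x, y, s, w, Q, A, b, c, sigma=0.1, alpha=0.9995):
    """One iteration of Algorithm NM: directions (3.3), steps (3.5), update (3.4)."""
    m, n = A.shape
    mu = sigma * (x @ s + y @ w) / (n + m)          # (3.6)
    t1 = b + y - A @ x                              # (3.2a)
    t2 = Q @ x + c - A.T @ w - s                    # (3.2b)
    t3 = mu - x * s                                 # (3.2c)
    t4 = mu - w * y                                 # (3.2d)
    X, S, W, Y = map(np.diag, (x, s, w, y))
    Kinv = np.linalg.inv(S + X @ Q)                 # (S^k + X^k Q)^{-1}, sketch only
    lhs = W @ A @ Kinv @ X @ A.T + Y
    dw = np.linalg.solve(lhs, W @ t1 + t4 + W @ A @ Kinv @ (X @ t2 - t3))  # (3.3a)
    dx = Kinv @ (X @ A.T @ dw - (X @ t2 - t3))      # (3.3b)
    dy = A @ dx - t1                                # (3.3c)
    ds = (t3 - S @ dx) / x                          # (3.3d)

    def step(v, dv):                                # ratio test (3.5): keep v + step*dv > 0
        return 1.0 / max(1.0, (-dv / (alpha * v)).max())

    a_p = min(step(x, dx), step(y, dy))             # (3.5a)
    a_d = min(step(w, dw), step(s, ds))             # (3.5b)
    return x + a_p * dx, y + a_p * dy, s + a_d * ds, w + a_d * dw  # (3.4)
```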

4. Algorithm Based on the Monomial Method

4.1 Basic Concepts of the Monomial Method

Consider the following general class of N nonlinear equations in N unknowns:

\[ \sum_{i=1}^{T_q} \hat{s}_{iq}\, \hat{c}_{iq} \prod_{j=1}^{N} \hat{x}_j^{\,a_{ijq}} = 0, \qquad q = 1, 2, \ldots, N, \tag{4.1} \]

where ŝ_{iq} ∈ {−1, +1} are the signs of the terms, ĉ_{iq} > 0 are the coefficients, the exponents a_{ijq} are real numbers with no restriction in sign, x̂_j > 0 are the variables, and T_q is the total number of terms in equation q. We define

\[ u_{iq} = \hat{c}_{iq} \prod_{j=1}^{N} \hat{x}_j^{\,a_{ijq}}, \]

so that (4.1) can be rewritten as

\[ \sum_{i=1}^{T_q} \hat{s}_{iq}\, u_{iq} = 0, \qquad q = 1, 2, \ldots, N. \tag{4.2} \]

Let T_q^+ = { i : ŝ_{iq} = +1 } and T_q^- = { i : ŝ_{iq} = -1 } for q = 1, 2, ..., N. Hence, (4.2) can be further expressed as

\[ \sum_{i\in T_q^+} u_{iq} - \sum_{i\in T_q^-} u_{iq} = 0, \]

or equivalently, since each u_{iq} > 0,

\[ \frac{\sum_{i\in T_q^+} u_{iq}}{\sum_{i\in T_q^-} u_{iq}} = 1, \qquad q = 1, 2, \ldots, N. \tag{4.3} \]

We further define the weights

\[ \delta_{iq}^{+} = \frac{u_{iq}^{*}}{P_q^{*}} \;\; (i \in T_q^{+}) \qquad\text{and}\qquad \delta_{iq}^{-} = \frac{u_{iq}^{*}}{Q_q^{*}} \;\; (i \in T_q^{-}) \tag{4.4} \]

where P_q = Σ_{i∈T_q^+} u_{iq}, Q_q = Σ_{i∈T_q^-} u_{iq}, and the starred quantities are evaluated at the current iterate: u*_{iq} = u_{iq}|_{x̂ = x̂^k}, P*_q = P_q|_{x̂ = x̂^k}, and Q*_q = Q_q|_{x̂ = x̂^k}.

Property 4.1 (arithmetic-geometric mean inequality):

\[ \sum_{i\in T_q^+} u_{iq} \ge \prod_{i\in T_q^+} \left(\frac{u_{iq}}{\delta_{iq}^+}\right)^{\delta_{iq}^+} \quad\text{and}\quad \sum_{i\in T_q^-} u_{iq} \ge \prod_{i\in T_q^-} \left(\frac{u_{iq}}{\delta_{iq}^-}\right)^{\delta_{iq}^-}, \qquad q = 1, 2, \ldots, N, \]

with equality if and only if u_{iq}/δ_{iq} is constant over i.

Using this property, we can approximate (4.3) by

\[ \frac{\prod_{i\in T_q^+} (u_{iq}/\delta_{iq}^+)^{\delta_{iq}^+}}{\prod_{i\in T_q^-} (u_{iq}/\delta_{iq}^-)^{\delta_{iq}^-}} = 1 \tag{4.5} \]

or, equivalently,

\[ H_q \prod_{j=1}^{N} \hat{x}_j^{\,D_{jq}} = 1 \tag{4.6} \]

where

\[ H_q = \frac{\prod_{i\in T_q^+} (\hat{c}_{iq}/\delta_{iq}^+)^{\delta_{iq}^+}}{\prod_{i\in T_q^-} (\hat{c}_{iq}/\delta_{iq}^-)^{\delta_{iq}^-}} \qquad\text{and}\qquad D_{jq} = \sum_{i\in T_q^+} \delta_{iq}^+\, a_{ijq} - \sum_{i\in T_q^-} \delta_{iq}^-\, a_{ijq}. \tag{4.7} \]

Transforming the variables according to x̂_j = e^{z_j}, we have

\[ \sum_{j=1}^{N} D_{jq}\, z_j = -\log H_q, \qquad q = 1, 2, \ldots, N. \tag{4.8} \]

Thus, solving the linear system (4.8) for z, we find the new iterate as x̂^{k+1} = e^{z} (componentwise).
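As an illustration of (4.1)-(4.8), the sketch below performs one monomial-method step for a system supplied term by term. The (sign, coefficient, exponent-vector) representation is an assumption of ours for exposition, not a data structure from the paper:

```python
import numpy as np

def monomial_step(terms, x):
    """One monomial-method step for the system (4.1)-(4.8).

    terms[q] is a list of (sign, c, a) triples with sign in {+1, -1},
    coefficient c > 0, and exponent vector a, so a term's value is
    c * prod(x ** a).  Each equation must keep at least one term of
    each sign.  Returns the new iterate x_hat = exp(z) of (4.8).
    """
    N = len(x)
    D = np.zeros((N, N))               # D[q, j] = D_jq of (4.7)
    rhs = np.zeros(N)                  # rhs[q] = -log H_q of (4.8)
    for q, eq in enumerate(terms):
        groups = {+1: [], -1: []}
        for sign, c, a in eq:
            a = np.asarray(a, dtype=float)
            groups[sign].append((c, a, c * np.prod(x ** a)))   # u_iq at x
        for sign in (+1, -1):
            total = sum(v for _, _, v in groups[sign])         # P_q or Q_q at x
            for c, a, v in groups[sign]:
                delta = v / total                              # weights (4.4)
                D[q] += sign * delta * a                       # D_jq of (4.7)
                rhs[q] -= sign * delta * np.log(c / delta)     # accumulates -log H_q
    z = np.linalg.solve(D, rhs)                                # (4.8)
    return np.exp(z)
```

For example, for the single equation x² − x − 2 = 0 (one positive term x², negative terms x and 2) one would pass terms = [[(+1, 1.0, [2]), (-1, 1.0, [1]), (-1, 2.0, [0])]]; starting from x = np.array([3.0]), repeated calls converge quickly to the positive root x = 2.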

4.2 Algorithm Based on the Monomial Method

Applying the monomial method to the system of KKT equations (2.1), we obtain the following system of linear equations:

\[
\begin{bmatrix} A_{1x} & A_{1y} & 0 & 0 \\ A_{2x} & 0 & A_{2s} & A_{2w} \\ I & 0 & I & 0 \\ 0 & I & 0 & I \end{bmatrix}
\begin{bmatrix} z_x^{k+1} \\ z_y^{k+1} \\ z_s^{k+1} \\ z_w^{k+1} \end{bmatrix}
=
\begin{bmatrix} \xi_x \\ \xi_y \\ \xi_s \\ \xi_w \end{bmatrix}
\tag{4.9}
\]

where A_{1x} ∈ R^{m×n}, A_{1y} ∈ R^{m×m}, A_{2x}, A_{2s} ∈ R^{n×n}, A_{2w} ∈ R^{n×m}, ξ_x, ξ_w ∈ R^{m×1}, and ξ_y, ξ_s ∈ R^{n×1}. Note that the coefficient matrix on the left-hand side of (4.9) is (2n + 2m) × (2n + 2m); that is, there are (2n + 2m) variables in this system of linear equations.

Property 4.2: The elements of the vectors ξ_s and ξ_w all equal log µ^k, where

\[ \mu^k = \sigma\, \frac{(x^k)^T s^k + (y^k)^T w^k}{n+m}, \qquad 0 < \sigma < 1. \]

This property implies that the sequence of solutions is exactly on the central trajectory: the third and fourth block rows of (4.9) give z_x^{k+1} + z_s^{k+1} = (log µ^k) e_n and z_y^{k+1} + z_w^{k+1} = (log µ^k) e_m, i.e., x_j^{k+1} s_j^{k+1} = µ^k for every j and w_i^{k+1} y_i^{k+1} = µ^k for every i.
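Property 4.2 holds because each complementarity equation has exactly one term of each sign, so its condensation is exact. To show where the blocks of (4.9) come from, the following sketch (continuing the monomial_step sketch above; the names and term representation are ours) writes the KKT system (2.1) in the term form of (4.1) over the stacked variable vector (x, y, s, w):

```python
import numpy as np

def kkt_terms(Q, A, b, c, mu):
    """KKT system (2.1) in the (sign, coeff, exponent-vector) form of (4.1).

    Assumes every equation retains at least one term of each sign
    (true, e.g., whenever the diagonal of Q is positive).
    """
    m, n = A.shape
    N = 2 * n + 2 * m
    unit = lambda k: np.eye(N)[k]          # exponent vector of a bare variable
    X, Y, S, W = 0, n, n + m, 2 * n + m    # offsets of the x, y, s, w blocks
    eqs = []
    for q in range(m):                     # (2.1a): sum_j A[q,j] x_j - y_q - b_q = 0
        eq = [(int(np.sign(A[q, j])), abs(A[q, j]), unit(X + j))
              for j in range(n) if A[q, j] != 0]
        eq.append((-1, 1.0, unit(Y + q)))
        if b[q] != 0:
            eq.append((-int(np.sign(b[q])), abs(b[q]), np.zeros(N)))
        eqs.append(eq)
    for p in range(n):                     # (2.1b): -(Qx)_p + (A^T w)_p + s_p - c_p = 0
        eq = [(-int(np.sign(Q[p, j])), abs(Q[p, j]), unit(X + j))
              for j in range(n) if Q[p, j] != 0]
        eq += [(int(np.sign(A[i, p])), abs(A[i, p]), unit(W + i))
               for i in range(m) if A[i, p] != 0]
        eq.append((+1, 1.0, unit(S + p)))
        if c[p] != 0:
            eq.append((-int(np.sign(c[p])), abs(c[p]), np.zeros(N)))
        eqs.append(eq)
    for p in range(n):                     # (2.1c): x_p s_p - mu = 0
        eqs.append([(+1, 1.0, unit(X + p) + unit(S + p)), (-1, mu, np.zeros(N))])
    for i in range(m):                     # (2.1d): w_i y_i - mu = 0
        eqs.append([(+1, 1.0, unit(W + i) + unit(Y + i)), (-1, mu, np.zeros(N))])
    return eqs
```

One full step of Algorithm MM is then monomial_step(kkt_terms(Q, A, b, c, mu), np.concatenate([x, y, s, w])), whose condensed linear system is, up to row ordering, exactly (4.9).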

Equation (4.9) may be solved as follows. Writing it out block by block,

A_{1x} z_x^{k+1} + A_{1y} z_y^{k+1} = ξ_x (4.10a)
A_{2x} z_x^{k+1} + A_{2s} z_s^{k+1} + A_{2w} z_w^{k+1} = ξ_y (4.10b)
z_x^{k+1} + z_s^{k+1} = ξ_s (4.10c)
z_y^{k+1} + z_w^{k+1} = ξ_w (4.10d)

By (4.10c), z_s^{k+1} = ξ_s − z_x^{k+1}. (4.11)
By (4.10d), z_w^{k+1} = ξ_w − z_y^{k+1}. (4.12)

Substituting (4.11) and (4.12) into (4.10b), we have

A_{2x} z_x^{k+1} + A_{2s}(ξ_s − z_x^{k+1}) + A_{2w}(ξ_w − z_y^{k+1}) = ξ_y,

which implies

(A_{2x} − A_{2s}) z_x^{k+1} − A_{2w} z_y^{k+1} = ξ_y − A_{2s} ξ_s − A_{2w} ξ_w. (4.13)

Hence, if (A_{2x} − A_{2s}) has full rank, we further have

z_x^{k+1} − (A_{2x} − A_{2s})^{-1} A_{2w} z_y^{k+1} = (A_{2x} − A_{2s})^{-1}(ξ_y − A_{2s} ξ_s − A_{2w} ξ_w). (4.14)

Multiplying (4.14) by A_{1x} produces

A_{1x} z_x^{k+1} − A_{1x}(A_{2x} − A_{2s})^{-1} A_{2w} z_y^{k+1} = A_{1x}(A_{2x} − A_{2s})^{-1}(ξ_y − A_{2s} ξ_s − A_{2w} ξ_w). (4.15)

Subtracting (4.15) from (4.10a), we obtain

[A_{1x}(A_{2x} − A_{2s})^{-1} A_{2w} + A_{1y}] z_y^{k+1} = ξ_x − A_{1x}(A_{2x} − A_{2s})^{-1}(ξ_y − A_{2s} ξ_s − A_{2w} ξ_w). (4.16)

That is,

z_y^{k+1} = [A_{1x}(A_{2x} − A_{2s})^{-1} A_{2w} + A_{1y}]^{-1} [ξ_x − A_{1x}(A_{2x} − A_{2s})^{-1}(ξ_y − A_{2s} ξ_s − A_{2w} ξ_w)]. (4.17)

By (4.14), we have

z_x^{k+1} = (A_{2x} − A_{2s})^{-1}(ξ_y − A_{2s} ξ_s − A_{2w} ξ_w + A_{2w} z_y^{k+1}). (4.18)

Thus, after computing (4.17), (4.18), (4.11), and (4.12) in turn, we find the new iterate as

x^{k+1} = e^{z_x^{k+1}}, y^{k+1} = e^{z_y^{k+1}}, s^{k+1} = e^{z_s^{k+1}}, w^{k+1} = e^{z_w^{k+1}} (componentwise). (4.19)

Algorithm based on the monomial method (Algorithm MM):

Step 1: (Initialization) Set k = 0. Start with any initial solution (x^0, y^0, s^0, w^0) > 0, and choose three small values ε_1, ε_2, ε_3.
Step 2: (Checking optimality) Compute µ^k by (3.6) and t_1, t_2 by (3.2), respectively. If (3.7) is satisfied, then stop; the current solution is accepted as the optimal solution. Else proceed to the next step.
Step 3: (Evaluating weights) Compute the weights of each term of each equation for iteration k by (4.4).
Step 4: (Intermediate computation) Compute A_{1x}, A_{1y}, A_{2x}, A_{2s}, A_{2w}, ξ_x, ξ_y, ξ_s, and ξ_w by (4.7).
Step 5: (Solving the condensed linear system) Compute z_y^{k+1}, z_x^{k+1}, z_s^{k+1}, and z_w^{k+1} by (4.17), (4.18), (4.11), and (4.12), respectively.
Step 6: (Finding the new solution) Compute x^{k+1}, y^{k+1}, s^{k+1}, and w^{k+1} by (4.19). Set k = k + 1 and go to Step 2.
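Steps 5 and 6 are plain linear algebra once the blocks of (4.9) have been formed in Step 4. A sketch under that assumption (dense data; the explicit inverse of A_{2x} − A_{2s} is for readability only, and the names are ours):

```python
import numpy as np

def mm_solve(A1x, A1y, A2x, A2s, A2w, xi_x, xi_y, xi_s, xi_w):
    """Solve (4.9) by the eliminations (4.11)-(4.18) and recover the new
    strictly positive iterate via (4.19)."""
    M = np.linalg.inv(A2x - A2s)             # assumes A2x - A2s has full rank
    r = xi_y - A2s @ xi_s - A2w @ xi_w       # right-hand side of (4.13)
    zy = np.linalg.solve(A1x @ M @ A2w + A1y, xi_x - A1x @ M @ r)  # (4.17)
    zx = M @ (r + A2w @ zy)                  # (4.18)
    zs = xi_s - zx                           # (4.11)
    zw = xi_w - zy                           # (4.12)
    return np.exp(zx), np.exp(zy), np.exp(zs), np.exp(zw)          # (4.19)
```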

5. Computational Results and Conclusions

5.1 Computational Results

The first test problem is an example due to Bazaraa and Shetty [1]:

\[
\begin{aligned}
\min \;& 2x_1^2 + 2x_2^2 - 2x_1 x_2 - 4x_1 - 6x_2 \\
\text{s.t. } & x_1 + x_2 \le 2 \\
& x_1 + 5x_2 \le 5 \\
& x_1, x_2 \ge 0
\end{aligned}
\]

This is a very simple example, but it suffices to demonstrate the different performance of the two algorithms, NM and MM (a sketch of the example's data in standard form follows this list). For ease of comparison of these algorithms, we employ the following procedure:

1. We try 15,000 integer starting points x^0 = (x_1^0, x_2^0), where x_1^0 ∈ [1, 150] and x_2^0 ∈ [1, 100].
2. y^0 = s^0 = w^0 = (1, 1)^T for each starting point.
3. The tolerances are ε_1 = ε_2 = ε_3.
4. The centering parameter is updated as σ^{k+1} = σ_1 if |µ^{k+1} − µ^k| / µ^k ≤ 0.5 and σ^{k+1} = σ_2 if |µ^{k+1} − µ^k| / µ^k > 0.5, with σ_1, σ_2 ∈ (0, 1).
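In the standard form of Section 2 (min ½xᵀQx + cᵀx subject to Ax ≥ b, x ≥ 0), the example's data would read as follows; rewriting the two ≤-constraints by negation is our reading of the problem statement:

```python
import numpy as np

# Hessian of 2x1^2 + 2x2^2 - 2x1x2 (the quadratic form is (1/2) x^T Q x).
Q = np.array([[ 4.0, -2.0],
              [-2.0,  4.0]])
c = np.array([-4.0, -6.0])
# x1 + x2 <= 2 and x1 + 5x2 <= 5, negated into the Ax >= b convention.
A = np.array([[-1.0, -1.0],
              [-1.0, -5.0]])
b = np.array([-2.0, -5.0])
```

Any starting point (x_1^0, x_2^0) from the grid above, together with y^0 = s^0 = w^0 = (1, 1)ᵀ, can then be iterated with the Algorithm NM sketch of Section 3 or the MM sketches of Section 4.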

Figures 1 and 2 illustrate the number of iterations required for these two algorithms to satisfy the convergence criterion; that is, each of the 15,000 starting points is shaded according to the number of iterations required to converge. One can see that Figure 2, based on the monomial method, is more regular than Figure 1, based on Newton's method. It should also be noted that the average number of iterations to converge in Figure 1 (ranging from 17 to 27 iterations) is larger than that of Figure 2 (ranging from 16 to 21 iterations).

In addition, we have tested convex quadratic problems of several sizes. These are separable problems, based upon Calamai's procedure for generating test problems (Calamai et al. [5]), of the form

\[
(SQP)\qquad
\begin{aligned}
\min \;& F(x) = \sum_{l=1}^{M} f_l(x_{1l}, x_{2l}) \\
\text{s.t. } & a_{11l}\, x_{1l} + a_{12l}\, x_{2l} \le \alpha_l \\
& a_{21l}\, x_{1l} + a_{22l}\, x_{2l} \ge \alpha_l \\
& x_{1l} + x_{2l} \le 30 \\
& x_{1l},\, x_{2l} \ge 0, \qquad l \in \{1, 2, \ldots, M\}
\end{aligned}
\tag{5.1}
\]

where a_{11l}, a_{12l}, a_{21l}, a_{22l}, and α_l are randomly generated such that assumptions (A2) and (A3) are satisfied, and f_l(x_{1l}, x_{2l}) = ½ [ρ_{1l}(x_{1l} − t_{1l})² + ρ_{2l}(x_{2l} − t_{2l})²] with ρ_{1l}, ρ_{2l} ∈ {0, 1} and t_{1l}, t_{2l} ∈ R. Note that, for this type of separable convex quadratic problem, the optimal solutions can be specified in advance, and they may be extreme points, interior points, or boundary points.

Based on (5.1), we test problem sizes M = 1, 2, 4, 8, 16, 18, 20, and 22; the numbers of variables and constraints are 2M and 3M, respectively. We also impose the following conditions (a generator sketch appears after this list):

1. For each problem size, we generate 10 random test problems, and for each of these 10 test problems we try 5 random starting points (x^0, y^0, s^0, w^0) selected from the interval [1, 100].
2. For each combination of test problem and starting point, Algorithms NM and MM were applied.
3. The convergence tolerances are ε_1 = ε_2 = ε_3.
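A generator in the spirit of (5.1) might look as follows. The sampling ranges and the split of inequality directions are illustrative assumptions on our part (the actual procedure is that of Calamai et al. [5]), and the sketch does not enforce (A2)-(A3), which the paper's generator guarantees:

```python
import numpy as np

def make_sqp(M, rng=np.random.default_rng(0)):
    """Assemble Q, c, A, b for an M-subproblem instance of (5.1) in the
    form min (1/2) x^T Q x + c^T x, Ax >= b, x >= 0
    (2M variables, 3M constraints)."""
    n, m = 2 * M, 3 * M
    Q, c = np.zeros((n, n)), np.zeros(n)
    A, b = np.zeros((m, n)), np.zeros(m)
    for l in range(M):
        i, j = 2 * l, 3 * l
        rho = rng.integers(0, 2, size=2)        # rho_1l, rho_2l in {0, 1}
        t = rng.uniform(0.0, 30.0, size=2)      # targets t_1l, t_2l
        # f_l = (1/2)[rho_1l (x_1l - t_1l)^2 + rho_2l (x_2l - t_2l)^2]
        Q[i, i], Q[i + 1, i + 1] = rho
        c[i:i + 2] = -rho * t
        a = rng.uniform(0.1, 1.0, size=(2, 2))  # a_11l ... a_22l
        alpha = rng.uniform(1.0, 20.0)
        A[j, i:i + 2], b[j] = -a[0], -alpha         # a_11l x_1l + a_12l x_2l <= alpha_l
        A[j + 1, i:i + 2], b[j + 1] = a[1], alpha   # a_21l x_1l + a_22l x_2l >= alpha_l
        A[j + 2, i:i + 2], b[j + 2] = -1.0, -30.0   # x_1l + x_2l <= 30
    return Q, c, A, b
```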

The results shown in Table 1 were obtained using an HP 715/50 workstation. From Table 1, one can see that the average number of iterations for the monomial method is less than that of Newton's method. The cpu time for the monomial method is less than that of Newton's method when the problem size M is larger (for instance, when M = 16, 18, 20, and 22). It should be pointed out that Algorithm MM needs more arithmetic operations per iteration, owing to the computation of the weights in Step 3. However, because for the larger problems Algorithm MM requires approximately one iteration fewer, its total cpu time is less than that of Algorithm NM.

5.2 Conclusions

We have proposed a path-following algorithm for convex quadratic problems based on the monomial method rather than Newton's method. From the limited computational results which we have presented, Algorithm MM appears better than Algorithm NM in several respects. For example, from Figures 1 and 2, we find that the former is more regular than the latter. From Table 1, one can see that the total number of iterations and the cpu time to converge are better for Algorithm MM when the problem size is larger. Further study is required in order to draw more definite conclusions; the authors are now investigating the global convergence and complexity of this new algorithm, as well as performing more complete computational testing.

REFERENCES

[1] M. S. Bazaraa and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley and Sons (1979).
[2] S. A. Burns and A. Locascio, "A Monomial-Based Method for Solving Systems of Non-Linear Algebraic Equations", International Journal for Numerical Methods in Engineering, Vol. 31 (1991).
[3] S. A. Burns, "The Monomial Method and Asymptotic Properties of Algebraic Systems", to appear in International Journal for Numerical Methods in Engineering (1993).
[4] S. A. Burns, "The Monomial Method: Extensions, Variations, and Performance Issues", to appear in International Journal for Numerical Methods in Engineering (1993).
[5] P. H. Calamai, L. N. Vicente, and J. J. Judice, "New Techniques for Generating Quadratic Programming Test Problems", Mathematical Programming, Vol. 61 (1993).
[6] D. Goldfarb and S. Liu, "An O(n^3 L) Primal Interior Point Algorithm for Convex Quadratic Programming", Mathematical Programming, Vol. 49 (1991).
[7] N. Karmarkar, "A New Polynomial Time Algorithm for Linear Programming", Combinatorica, Vol. 4, pp. 373-395 (1984).
[8] M. Kojima, N. Megiddo, and S. Mizuno, "Theoretical Convergence of Large-Step Primal-Dual Interior Point Algorithms for Linear Programming", Mathematical Programming, Vol. 59, pp. 1-21 (1993).
[9] M. K. Kozlov, S. P. Tarasov, and L. G. Khachian, "Polynomial Solvability of Convex Quadratic Programming", Doklady Akademiia Nauk USSR (1979).

[10] S. Mehrotra and J. Sun, "An Interior Point Algorithm for Solving Smooth Convex Programs Based on Newton's Method", Contemporary Mathematics, Vol. 114 (1990).
[11] R. D. C. Monteiro and I. Adler, "Interior Path Following Primal-Dual Algorithms. Part I: Linear Programming", Mathematical Programming, Vol. 44, pp. 27-41 (1989).
[12] R. D. C. Monteiro and I. Adler, "Interior Path Following Primal-Dual Algorithms. Part II: Convex Quadratic Programming", Mathematical Programming, Vol. 44, pp. 43-66 (1989).
[13] K. McShane, C. Monma, and D. Shanno, "An Implementation of a Primal-Dual Interior Point Method for Linear Programming", ORSA Journal on Computing, Vol. 1, No. 2, pp. 70-83 (1989).
[14] J. Renegar, "A Polynomial-Time Algorithm Based on Newton's Method for Linear Programming", Mathematical Programming, Vol. 40, pp. 59-93 (1988).

Figure 1. Iteration counts for various starting points using Newton's method, for Bazaraa's example.

Figure 2. Iteration counts for various starting points using the monomial method, for Bazaraa's example.

Number of subproblems            M=1        M=2        M=4        M=8        M=16   M=18   M=20   M=22
Newton iterations (cpu time)     (0.3512)   (0.6912)   (1.4610)   (6.4020)   ( )    ( )    ( )    ( )
Monomial iterations (cpu time)   (0.5300)   (1.2440)   (2.5482)   (8.7306)   ( )    ( )    ( )    ( )

Table 1. Average numbers of iterations and cpu times for the algorithms based on Newton's and the monomial methods.
