Constraint Reduction for Linear Programs with Many Constraints
1  Constraint Reduction for Linear Programs with Many Constraints
André L. Tits, Institute for Systems Research and Department of Electrical and Computer Engineering, University of Maryland, College Park
Pierre-Antoine Absil, School of Computer Science and Information Technology, Florida State University, Tallahassee
William Woessner, Department of Computer Science, University of Maryland, College Park
UMBC, 11 March 2005
2  Consider the following linear program in dual standard form:
max b^T y subject to A^T y ≤ c,   (1)
where A is m × n. Suppose n ≫ m.
Observation: Normally, only a small subset of the constraints (no more than m under nondegeneracy assumptions) are active at the solution. The others are redundant.
Objective: Compute the search direction based on a reduced Newton-KKT system, by adaptively selecting a small subset of critical columns of A.
Hope: Significantly reduced cost per iteration. No drastic increase in the number of iterations. Preserved theoretical convergence properties.
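The observation above is easy to check numerically. The sketch below (not from the talk; sizes, seed, and tolerance are arbitrary choices, and SciPy's `linprog` is an assumed dependency) builds a random dual-form LP with n ≫ m and counts the constraints active at the optimum:

```python
# Sketch: count active constraints at the solution of max b^T y s.t. A^T y <= c.
# With nondegenerate random data, a vertex solution activates at most m of the
# n inequality constraints.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 5, 200
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)          # columns a_i on the unit sphere
b = rng.standard_normal(m)
c = A.T @ rng.standard_normal(m) + rng.uniform(0.1, 1.0, n)  # strictly feasible

# max b^T y  s.t.  A^T y <= c   ==   linprog: min (-b)^T y, y free
res = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * m)
n_active = int(np.sum(c - A.T @ res.x < 1e-7))
```

Here `n_active` comes out as at most m = 5, while the other roughly 195 constraints are redundant at the solution.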
3  Outline
1. Background: some related work; notation; primal-dual framework; operation count; reduced Newton-KKT system
2. Reduced, Dual-Feasible PD Affine Scaling (µ = 0): algorithm statement; observation; numerical experiments; convergence properties
3. Reduced Mehrotra Predictor-Corrector: algorithm statement; numerical experiments
4. Concluding Remarks
4  Background: some related work
Indicators (to identify early the zero components of x*): El-Bakry et al. [1994], Facchinei et al. [2000].
Column generation, build-up, build-down: Ye [1992], den Hertog et al. [1992, 1994, 1995], Goffin et al. [1994], Ye [1997], Luo et al. [1999]. Focus is on complexity analysis; good numerical results on discretized semi-infinite programming problems; but typically many more than m columns of A are retained.
Notation
n := {1, ..., n}, A = [a_1, ..., a_n], e = [1, ..., 1]^T.
Given Q ⊆ n:
A_Q := col[a_i : i ∈ Q],  x_Q := [x_i : i ∈ Q]^T,  s_Q := [s_i : i ∈ Q]^T,
X_Q := diag(x_i : i ∈ Q),  S_Q := diag(s_i : i ∈ Q).
5  Background (cont'd): primal-dual framework
Primal-dual LP pair in standard form:
min c^T x subject to Ax = b, x ≥ 0;
max b^T y subject to A^T y + s = c, s ≥ 0.   (2)
Perturbed (µ ≥ 0) KKT conditions of optimality:
A^T y + s − c = 0   (3)
Ax − b = 0   (4)
XSe = µe   (5)
x ≥ 0, s ≥ 0.   (6)
Given µ ≥ 0, the µ-perturbed Newton-KKT system:
[ 0   A^T  I ] [Δx]   [ −r_c      ]
[ A   0    0 ] [Δy] = [ −r_b      ]
[ S   0    X ] [Δs]   [ −XSe + µe ]
with r_b := Ax − b (primal residue) and r_c := A^T y + s − c (dual residue).
6  Background (cont'd): primal-dual framework (cont'd)
Equivalently, (Δx, Δy, Δs) satisfy the normal equations:
A S⁻¹ X A^T Δy = −r_b + A(−S⁻¹ X r_c + x − µ S⁻¹ e)
Δs = −A^T Δy − r_c
Δx = −x + µ S⁻¹ e − S⁻¹ X Δs
Simple interior-point iteration: Given x > 0, s > 0, y:
Select a value for µ (≥ 0);
Solve the Newton-KKT system for Δx, Δy, Δs;
Set x⁺ := x + α_P Δx > 0, s⁺ := s + α_D Δs > 0, y⁺ := y + α_D Δy, with appropriate α_P, α_D (possibly forced to be equal).
Note: If (y, s) is dual feasible, then (y⁺, s⁺) also is.
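As a concreteness check, the normal-equations solve above can be sketched in a few numpy lines; the data below are random placeholders, and `np.linalg.solve` stands in for the Cholesky factorization used in practice:

```python
# Solve the mu-perturbed Newton-KKT system via the normal equations.
# All problem data are made-up placeholders.
import numpy as np

rng = np.random.default_rng(1)
m, n, mu = 4, 30, 0.1
A = rng.standard_normal((m, n))
b, c = rng.standard_normal(m), rng.standard_normal(n)
y = rng.standard_normal(m)
x, s = rng.uniform(0.5, 1.5, n), rng.uniform(0.5, 1.5, n)   # x > 0, s > 0
r_b, r_c = A @ x - b, A.T @ y + s - c                        # residues

G = A @ ((x / s)[:, None] * A.T)        # G = A S^{-1} X A^T, SPD
v = -r_b + A @ (-(x / s) * r_c + x - mu / s)
dy = np.linalg.solve(G, v)              # Cholesky would be used in practice
ds = -A.T @ dy - r_c
dx = -x + mu / s - (x / s) * ds
```

Substituting (dx, dy, ds) back into the three block rows of the Newton-KKT system recovers −r_c, −r_b, and −XSe + µe, confirming the elimination.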
7  Background (cont'd): operation count; reduced Newton-KKT system
Operation count (for a dense problem):
- Forming G := A S⁻¹ X A^T: m²n;
- Forming v := −r_b + A(x − S⁻¹(X r_c + µe)): 2mn;
- Solving G Δy = v: m³/3 (Cholesky);
- Computing Δs = −A^T Δy − r_c: 2mn;
- Computing Δx = −x + S⁻¹(−X Δs + µe): 2n.
Benefit of replacing A with A_Q: n is replaced with |Q|.
Assume n ≫ m and m ≫ 1. Then the main gain can be achieved in line 1, i.e., by merely redefining G := A_Q S_Q⁻¹ X_Q A_Q^T and leaving the rest unchanged. This is done in the sequel.
Key question: How to select Q so as to
- significantly reduce the work per iteration (|Q| small);
- avoid a dramatic increase in the number of iterations;
- preserve theoretical convergence properties.
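To make the gain concrete, the sketch below (made-up sizes and data) forms the full m × m normal matrix and its constraint-reduced counterpart; only the G-forming step changes, dropping its cost from about m²n to about m²|Q| operations:

```python
# Full vs constraint-reduced normal matrix: only "line 1" of the operation
# count above changes; the reduced matrix stays m x m, symmetric, and SPD
# as long as A_Q has full row rank.
import numpy as np

rng = np.random.default_rng(2)
m, n, M = 4, 500, 12                      # |Q| = M, with m <= M << n
A = rng.standard_normal((m, n))
x, s = rng.uniform(0.5, 1.5, n), rng.uniform(0.5, 1.5, n)

G_full = A @ ((x / s)[:, None] * A.T)     # ~ m^2 * n operations
Q = np.argpartition(s, M - 1)[:M]         # e.g. indices of the M smallest s_i
G_red = A[:, Q] @ ((x[Q] / s[Q])[:, None] * A[:, Q].T)   # ~ m^2 * M operations
```

The Cholesky solve, the backsubstitutions for Δs and Δx, and the rest of the iteration are untouched.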
8  Reduced, Dual-Feasible PD Affine Scaling (µ = 0): algorithm statement
Iteration rPDAS.
Parameters: β ∈ (0, 1), x_max > x_min > 0, M ≥ m.
Data: y with A^T y < c; s := c − A^T y (> 0) (i.e., r_c = 0); x > 0; Q ⊆ n, including the indices of the M smallest entries of s.
Step 1. Compute search direction. Solve
A_Q S_Q⁻¹ X_Q A_Q^T Δy = b
and compute
Δs = −A^T Δy,  Δx = −x − S⁻¹ X Δs.
Step 2. Updates.
(i) Primal update. Set
x_i⁺ := min{max{min{‖Δy‖² + ‖Δx⁻‖², x_min}, x_i + Δx_i}, x_max}, i ∈ n,   (8)
where (Δx⁻)_i := min{Δx_i, 0}.
(ii) Dual update. Set
t_D := ∞ if Δs_i ≥ 0 for all i ∈ n; t_D := min{(−s_i / Δs_i) : Δs_i < 0, i ∈ n} otherwise.   (9)
Set t̂_D := min{max{β t_D, t_D − ‖Δy‖}, 1}.
Set y⁺ := y + t̂_D Δy, s⁺ := s + t̂_D Δs. (So r_c remains at 0.)
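One rPDAS iteration can be sketched directly from the statement above. The clipping rule (8) and damping rule (9) are reconstructions from the slide, so treat this as an illustrative sketch under those assumptions, not a reference implementation:

```python
# One iteration of rPDAS as read from the slide: reduced normal-equations
# solve (mu = 0, r_c = 0), clipped primal update (8), damped dual update (9).
import numpy as np

def rpdas_iter(A, b, y, s, x, M, beta=0.99, x_min=1e-6, x_max=1e6):
    Q = np.argpartition(s, M - 1)[:M]            # M smallest slacks
    AQ = A[:, Q]
    G = AQ @ ((x[Q] / s[Q])[:, None] * AQ.T)     # reduced normal matrix
    dy = np.linalg.solve(G, b)                   # RHS reduces to b here
    ds = -A.T @ dy
    dx = -x - (x / s) * ds
    # (8): clip x + dx into [lb, x_max], lb tied to the step norms
    dxm = np.minimum(dx, 0.0)
    lb = min(dy @ dy + dxm @ dxm, x_min)
    x_new = np.clip(x + dx, lb, x_max)
    # (9): damped step toward the boundary of s > 0
    neg = ds < 0
    tD = np.min(-s[neg] / ds[neg]) if neg.any() else np.inf
    t = min(max(beta * tD, tD - np.linalg.norm(dy)), 1.0)
    return x_new, y + t * dy, s + t * ds

# demo on a small made-up dual-feasible instance (c is implicitly A^T y + s)
rng = np.random.default_rng(4)
m, n = 5, 300
A = rng.standard_normal((m, n))
b, y = rng.standard_normal(m), rng.standard_normal(m)
s = rng.uniform(0.1, 1.0, n)                     # slacks > 0: y dual feasible
x = np.ones(n)
x1, y1, s1 = rpdas_iter(A, b, y, s, x, M=3 * m)
```

After one step, s stays strictly positive and the dual objective b^T y increases, which is the monotonicity the convergence analysis later relies on.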
9  Reduced, Dual-Feasible PD Affine Scaling (µ = 0): observation
(Δx_Q, Δy, Δs_Q) constructed by iteration rPDAS also satisfy
Δs_Q = −A_Q^T Δy,   (10a)
Δx_Q = −x_Q − S_Q⁻¹ X_Q Δs_Q,   (10b)
i.e., they satisfy the full set of normal equations associated with the constraint-reduced system. Equivalently, they satisfy the Newton system (with µ = 0 and r_c = 0)
[ 0    A_Q^T  I   ] [Δx_Q]   [ 0           ]
[ A_Q  0      0   ] [Δy  ] = [ b − A_Q x_Q ]
[ S_Q  0      X_Q ] [Δs_Q]   [ −X_Q S_Q e  ]
(This is a key ingredient of the local convergence analysis.)
10  Reduced, Dual-Feasible PD Affine Scaling (µ = 0): numerical experiments
Heuristic used for Q: for given M ≥ m, Q = indices of the M smallest components of s.
Parameter value: β = 0.99.
Selection of x⁰: based on Mehrotra's [SIOPT, 1992] scheme.
Test problems (with dual-feasible initial point):
- Polytopic approximation of the unit sphere: entries of b ~ N(0, 1); columns of A uniformly distributed on the unit sphere; components of y and s uniformly distributed on (0, 1); c := A^T y + s to ensure dual feasibility.
- Fully random problem: entries of A and b ~ N(0, 1); y, s, and c generated as above.
- SCSD1, SCSD6, SHIP4L, and WOODW from Netlib; SIPOW1, SIPOW2 (semi-infinite) from CUTE.
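The two random generators described above can be sketched as follows (sizes and seed are arbitrary; the key point is that c := A^T y + s makes the initial y dual feasible by construction):

```python
# Sketch of the two random test-problem generators from the slide.
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 1024

# polytopic approximation of the unit sphere
b = rng.standard_normal(m)
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)     # normalized Gaussians: uniform on the sphere
y = rng.uniform(0.0, 1.0, m)
s = rng.uniform(0.0, 1.0, n)
c = A.T @ y + s                    # dual feasibility of y by construction

# fully random problem: same recipe, but A is left unnormalized
A2 = rng.standard_normal((m, n))
b2 = rng.standard_normal(m)
c2 = A2.T @ y + s
```

Normalizing i.i.d. Gaussian columns is the standard way to draw directions uniformly on the unit sphere, which is what "polytopic approximation of the unit sphere" refers to: the feasible set {y : A^T y ≤ c} is a polytope cut by n randomly oriented halfspaces.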
11  Reduced, Dual-Feasible PD Affine Scaling (µ = 0): numerical experiments (cont'd)
The points on the plots correspond to different runs of Algorithm rPDAS on the same problem. The runs differ only in the number M of constraints retained in Q; this information is indicated on the horizontal axis in relative value. The rightmost point thus corresponds to the experiment without constraint reduction, while the points on the extreme left correspond to the most drastic constraint reduction.
Observations:
- In most cases, surprisingly, the number of iterations does NOT increase as M is reduced. Thus any gain in cost per iteration translates directly into the same relative gain in overall cost.
- Displayed values are purely indicative. Indeed, they depend strongly on the implementation (in particular, on how the product A_Q S_Q⁻¹ X_Q A_Q^T is computed) and on the possible sparsity of the data.
- The algorithm sometimes fails for small |Q|. This is due to A_Q losing rank, and accordingly A_Q S_Q⁻¹ X_Q A_Q^T becoming singular. (Note that this will almost surely not happen when A is generated randomly.) Schemes to bypass this difficulty are being investigated.
12  [Plots] rPDAS on the polytopic approximation of the unit sphere (m = 32, n = 8192) and on the fully random problem (m = 32, n = 8192).
13  [Plots] rPDAS on SCSD1 (m = 77, n = 760) and on SCSD6 (m = 147, n = 1350).
14  [Plots] rPDAS on SHIP4L (m = 360, n = 2162) and on WOODW (m = 1098, n = 8418).
15  [Plots] rPDAS on SIPOW1 (m = 2, n = 10000) and on SIPOW2 (m = 2, n = 10000).
16  Reduced, Dual-Feasible PD Affine Scaling (µ = 0): convergence properties
Let F := {y : A^T y ≤ c}. For y ∈ F, let I(y) := {i : a_i^T y = c_i}.
Assumption 1. All m × M submatrices of A have full (row) rank.
Assumption 2. The dual (y) solution set is nonempty and bounded.
Assumption 3. For all y ∈ F, the set {a_i : i ∈ I(y)} is linearly independent.
Theorem. {y_k} converges to the dual solution set.
Assumption 4. The dual solution set is a singleton, say {y*}, and the associated KKT multiplier x* satisfies x*_i < x_max for all i.
Theorem. {(x_k, y_k)} converges to (x*, y*) Q-quadratically.
The global convergence analysis focuses on the monotone decrease of the dual objective function b^T y. The lower bound ‖Δy‖² + ‖Δx⁻‖² in the primal update formula (8) is essential, as it keeps the Newton-KKT matrix away from singularity as long as KKT points are not approached. (A step along the primal direction Δx would not allow for this.)
17  Reduced Mehrotra Predictor-Corrector: algorithm statement
Iteration rMPC.
Parameters: β ∈ (0, 1), M ≥ m.
Data: y with A^T y < c; s := c − A^T y; x > 0; µ := x^T s / n; Q ⊆ n, including the indices of the M smallest components of s.
Step 1. Compute affine scaling step. Solve
A_Q S_Q⁻¹ X_Q A_Q^T Δy = −r_b + A(−S⁻¹ X r_c + x)
and compute
Δs = −A^T Δy − r_c,
Δx = −x − S⁻¹ X Δs,
t_P^aff := arg max{t ∈ [0, 1] : x + t Δx ≥ 0},
t_D^aff := arg max{t ∈ [0, 1] : s + t Δs ≥ 0}.
Step 2. Compute centering parameter:
µ_aff := (x + t_P^aff Δx)^T (s + t_D^aff Δs) / n,
σ := (µ_aff / µ)³.
Step 3. Compute centering/corrector direction. Solve
A_Q S_Q⁻¹ X_Q A_Q^T Δy^cc = A S⁻¹(σµe − ΔX Δs)
and compute
Δs^cc = −A^T Δy^cc,
Δx^cc = S⁻¹(σµe − ΔX Δs) − S⁻¹ X Δs^cc,
where ΔX := diag(Δx_i).
18  Step 4. Compute MPC step:
Δx^mpc := Δx + Δx^cc,  Δy^mpc := Δy + Δy^cc,  Δs^mpc := Δs + Δs^cc,
t_P^max := arg max{t ∈ [0, 1] : x + t Δx^mpc ≥ 0},
t_D^max := arg max{t ∈ [0, 1] : s + t Δs^mpc ≥ 0},
t_P := min{β t_P^max, 1},  t_D := min{β t_D^max, 1}.
Step 5. Updates:
x⁺ := x + t_P Δx^mpc,  y⁺ := y + t_D Δy^mpc,  s⁺ := s + t_D Δs^mpc.
Numerical experiments with dual-feasible initial point: Algorithm rMPC was run on the same problems as rPDAS, with the same (dual-feasible) initial points. The results are reported in the next few slides.
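Steps 1 through 5 above can be sketched as one numpy routine. The step-length helper and the corrector right-hand side follow the slide's reconstruction plus standard Mehrotra practice, so this is an illustrative sketch rather than the authors' implementation:

```python
# One iteration of rMPC: reduced predictor, centering parameter, reduced
# corrector, then a damped combined step.
import numpy as np

def step_max(v, dv):
    # largest t in [0, 1] with v + t*dv >= 0
    neg = dv < 0
    return min(1.0, np.min(-v[neg] / dv[neg])) if neg.any() else 1.0

def rmpc_iter(A, b, c, y, s, x, M, beta=0.99):
    n = A.shape[1]
    r_b, r_c = A @ x - b, A.T @ y + s - c
    mu = x @ s / n
    Q = np.argpartition(s, M - 1)[:M]            # M smallest slacks
    AQ = A[:, Q]
    G = AQ @ ((x[Q] / s[Q])[:, None] * AQ.T)     # reduced normal matrix
    # Step 1: affine-scaling (predictor) direction
    dy = np.linalg.solve(G, -r_b + A @ (-(x / s) * r_c + x))
    ds = -A.T @ dy - r_c
    dx = -x - (x / s) * ds
    # Step 2: centering parameter
    mu_aff = (x + step_max(x, dx) * dx) @ (s + step_max(s, ds) * ds) / n
    sigma = (mu_aff / mu) ** 3
    # Step 3: centering/corrector direction (same factored matrix G)
    w = (sigma * mu - dx * ds) / s
    dy_cc = np.linalg.solve(G, A @ w)
    ds_cc = -A.T @ dy_cc
    dx_cc = w - (x / s) * ds_cc
    # Steps 4-5: combined, damped step
    dx_m, dy_m, ds_m = dx + dx_cc, dy + dy_cc, ds + ds_cc
    tP = min(beta * step_max(x, dx_m), 1.0)
    tD = min(beta * step_max(s, ds_m), 1.0)
    return x + tP * dx_m, y + tD * dy_m, s + tD * ds_m

# demo on a made-up dual-feasible instance
rng = np.random.default_rng(5)
m, n = 5, 300
A = rng.standard_normal((m, n))
b, y = rng.standard_normal(m), rng.standard_normal(m)
s = rng.uniform(0.1, 1.0, n)
c = A.T @ y + s
x = np.ones(n)
x1, y1, s1 = rmpc_iter(A, b, c, y, s, x, M=3 * m)
```

Note that both solves share the same reduced matrix G, so the corrector adds only back-substitution cost; with a dual-feasible start, r_c = 0 is preserved exactly.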
19  [Plots] Dual-feasible rMPC on the polytopic approximation of the unit sphere (m = 32, n = 8192) and on the fully random problem (m = 32, n = 8192).
20  [Plots] Dual-feasible rMPC on SCSD1 (m = 77, n = 760) and on SCSD6 (m = 147, n = 1350).
21  [Plots] Dual-feasible rMPC on SHIP4L (m = 360, n = 2162) and on WOODW (m = 1098, n = 8418).
22  [Plots] Dual-feasible rMPC on SIPOW1 (m = 2, n = 10000) and on SIPOW2 (m = 2, n = 10000).
23  Reduced Mehrotra Predictor-Corrector (cont'd): numerical experiments with infeasible initial point
The next few slides report results obtained on the same problems, but with the (usually infeasible) initial point recommended by Mehrotra [SIOPT, 1992].
24  [Plots] (Infeasible) rMPC on the polytopic approximation of the unit sphere (m = 32, n = 8192) and on the fully random problem (m = 32, n = 8192).
25  [Plots] (Infeasible) rMPC on SCSD1 (m = 77, n = 760) and on SCSD6 (m = 147, n = 1350).
26  [Plots] (Infeasible) rMPC on SHIP4L (m = 360, n = 2162) and on WOODW (m = 1098, n = 8418).
27  [Plots] (Infeasible) rMPC on SIPOW1 (m = 2, n = 10000) and on SIPOW2 (m = 2, n = 10000).
28  Concluding Remarks
Reduced versions of a primal-dual affine scaling algorithm (rPDAS) and of Mehrotra's predictor-corrector algorithm (rMPC) were proposed.
When n ≫ m and m ≫ 1, for both rPDAS and rMPC, a major reduction in cost per iteration can be achieved.
Under nondegeneracy assumptions, rPDAS is proved to converge quadratically in the primal-dual space; a convergence proof for rMPC is lacking at this time.
Numerical experiments show that:
- The number of iterations to convergence remains essentially constant as |Q| decreases, down to a small multiple of m.
- On some problems (e.g., SCSD6), when |Q| is reduced below a certain value, the algorithm fails due to A_Q losing rank. Schemes to bypass this difficulty are being investigated.
This presentation can be downloaded from
The full paper should be completed by April 2005.
More informationSolving the normal equations system arising from interior po. linear programming by iterative methods
Solving the normal equations system arising from interior point methods for linear programming by iterative methods Aurelio Oliveira - aurelio@ime.unicamp.br IMECC - UNICAMP April - 2015 Partners Interior
More informationInterior Point Methods for Linear Programming: Motivation & Theory
School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Linear Programming: Motivation & Theory Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio
More informationCS711008Z Algorithm Design and Analysis
CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief
More informationLecture 18: Optimization Programming
Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming
More informationTRANSPORTATION PROBLEMS
Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations
More informationInterior Point Methods for Convex Quadratic and Convex Nonlinear Programming
School of Mathematics T H E U N I V E R S I T Y O H F E D I N B U R G Interior Point Methods for Convex Quadratic and Convex Nonlinear Programming Jacek Gondzio Email: J.Gondzio@ed.ac.uk URL: http://www.maths.ed.ac.uk/~gondzio
More informationNumerical Optimization
Linear Programming - Interior Point Methods Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Example 1 Computational Complexity of Simplex Algorithm
More informationAn Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization
An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with Travis Johnson, Northwestern University Daniel P. Robinson, Johns
More informationMAT016: Optimization
MAT016: Optimization M.El Ghami e-mail: melghami@ii.uib.no URL: http://www.ii.uib.no/ melghami/ March 29, 2011 Outline for today The Simplex method in matrix notation Managing a production facility The
More informationPrimal-Dual Interior-Point Methods
Primal-Dual Interior-Point Methods Lecturer: Aarti Singh Co-instructor: Pradeep Ravikumar Convex Optimization 10-725/36-725 Outline Today: Primal-dual interior-point method Special case: linear programming
More informationSMO vs PDCO for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines
vs for SVM: Sequential Minimal Optimization vs Primal-Dual interior method for Convex Objectives for Support Vector Machines Ding Ma Michael Saunders Working paper, January 5 Introduction In machine learning,
More informationReview Solutions, Exam 2, Operations Research
Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To
More informationOn Mehrotra-Type Predictor-Corrector Algorithms
On Mehrotra-Type Predictor-Corrector Algorithms M. Salahi, J. Peng, T. Terlaky April 7, 005 Abstract In this paper we discuss the polynomiality of Mehrotra-type predictor-corrector algorithms. We consider
More informationLinear programming II
Linear programming II Review: LP problem 1/33 The standard form of LP problem is (primal problem): max z = cx s.t. Ax b, x 0 The corresponding dual problem is: min b T y s.t. A T y c T, y 0 Strong Duality
More informationLecture 24: August 28
10-725: Optimization Fall 2012 Lecture 24: August 28 Lecturer: Geoff Gordon/Ryan Tibshirani Scribes: Jiaji Zhou,Tinghui Zhou,Kawa Cheung Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer:
More informationLinear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence
Linear Algebra Review: Linear Independence IE418: Integer Programming Department of Industrial and Systems Engineering Lehigh University 21st March 2005 A finite collection of vectors x 1,..., x k R n
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints
ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained
More informationChapter 8 Cholesky-based Methods for Sparse Least Squares: The Benefits of Regularization
In L. Adams and J. L. Nazareth eds., Linear and Nonlinear Conjugate Gradient-Related Methods, SIAM, Philadelphia, 92 100 1996. Chapter 8 Cholesky-based Methods for Sparse Least Squares: The Benefits of
More informationGoing from graphic solutions to algebraic
Going from graphic solutions to algebraic 2 variables: Graph constraints Identify corner points of feasible area Find which corner point has best objective value More variables: Think about constraints
More informationConvex Optimization. Newton s method. ENSAE: Optimisation 1/44
Convex Optimization Newton s method ENSAE: Optimisation 1/44 Unconstrained minimization minimize f(x) f convex, twice continuously differentiable (hence dom f open) we assume optimal value p = inf x f(x)
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationYinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method
The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear
More informationA Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization
A Full Newton Step Infeasible Interior Point Algorithm for Linear Optimization Kees Roos e-mail: C.Roos@tudelft.nl URL: http://www.isa.ewi.tudelft.nl/ roos 37th Annual Iranian Mathematics Conference Tabriz,
More informationCSCI 1951-G Optimization Methods in Finance Part 09: Interior Point Methods
CSCI 1951-G Optimization Methods in Finance Part 09: Interior Point Methods March 23, 2018 1 / 35 This material is covered in S. Boyd, L. Vandenberge s book Convex Optimization https://web.stanford.edu/~boyd/cvxbook/.
More informationDuality revisited. Javier Peña Convex Optimization /36-725
Duality revisited Javier Peña Conve Optimization 10-725/36-725 1 Last time: barrier method Main idea: approimate the problem f() + I C () with the barrier problem f() + 1 t φ() tf() + φ() where t > 0 and
More informationThe Q Method for Second-Order Cone Programming
The Q Method for Second-Order Cone Programming Yu Xia Farid Alizadeh July 5, 005 Key words. Second-order cone programming, infeasible interior point method, the Q method Abstract We develop the Q method
More informationConstrained optimization
Constrained optimization DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Compressed sensing Convex constrained
More informationUses of duality. Geoff Gordon & Ryan Tibshirani Optimization /
Uses of duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Remember conjugate functions Given f : R n R, the function is called its conjugate f (y) = max x R n yt x f(x) Conjugates appear
More informationSparse Linear Programming via Primal and Dual Augmented Coordinate Descent
Sparse Linear Programg via Primal and Dual Augmented Coordinate Descent Presenter: Joint work with Kai Zhong, Cho-Jui Hsieh, Pradeep Ravikumar and Inderjit Dhillon. Sparse Linear Program Given vectors
More informationNumerical Methods for Model Predictive Control. Jing Yang
Numerical Methods for Model Predictive Control Jing Yang Kongens Lyngby February 26, 2008 Technical University of Denmark Informatics and Mathematical Modelling Building 321, DK-2800 Kongens Lyngby, Denmark
More informationLecture 1. 1 Conic programming. MA 796S: Convex Optimization and Interior Point Methods October 8, Consider the conic program. min.
MA 796S: Convex Optimization and Interior Point Methods October 8, 2007 Lecture 1 Lecturer: Kartik Sivaramakrishnan Scribe: Kartik Sivaramakrishnan 1 Conic programming Consider the conic program min s.t.
More information