Using Interior-Point Methods within Mixed-Integer Nonlinear Programming


1 Using Interior-Point Methods within Mixed-Integer Nonlinear Programming. Hande Y. Benson, Drexel University. IMA - MINLP - p. 1/34

2 Motivation: Discrete Variables
Handling discrete variables generally requires a bilevel approach:
- Upper level: branch-and-bound, branch-and-cut, outer approximation
- Lower level: active-set methods, interior-point methods
Active-set methods are considered superior at the lower level because:
- They can be warmstarted
- They can identify infeasible problems
- They can handle fixed variables naturally
A textbook interior-point method cannot do any of these. But current interior-point implementations outperform active-set implementations on many large problems. As the sizes of mixed-integer nonlinear programming problems grow, there will be a definite need for interior-point methods at the lower level.

3 Interior-Point Methods
Each NLP relaxation has the form:

$$\min_{x,y} \; f(x,y) \quad \text{s.t.} \quad h(x,y) \ge 0, \quad l \le y \le u.$$

Add slack variables $w$, $g$, $t$:

$$\min_{x,y,w,g,t} \; f(x,y) \quad \text{s.t.} \quad h(x,y) - w = 0, \quad y - g = l, \quad y + t = u, \quad w, g, t \ge 0.$$
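As a small illustrative sketch (my own, not from the talk; all numeric values below are invented), the slack-variable transformation can be carried out directly:

```python
def to_slack_form(y, l, u, h_val):
    """Convert h(x,y) >= 0, l <= y <= u into equality form with
    slacks w, g, t, which must stay nonnegative (strictly positive
    at an interior point)."""
    w = h_val          # h(x,y) - w = 0
    g = y - l          # y - g = l
    t = u - y          # y + t = u
    return w, g, t

# A strictly interior point: all three slacks come out positive.
w, g, t = to_slack_form(y=0.5, l=-1.0, u=2.0, h_val=0.3)
assert w > 0 and g > 0 and t > 0
assert (0.5 - g) == -1.0 and (0.5 + t) == 2.0   # equalities hold
```
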

4 Interior-Point Methods - 2
First-order conditions for the log-barrier problem are:

$$h(x,y) - w = 0, \qquad y - g = l, \qquad y + t = u,$$
$$\nabla_x f(x,y) - A_x^T\lambda = 0, \qquad \nabla_y f(x,y) - A_y^T\lambda - z + s = 0,$$
$$W\Lambda e = \mu e, \qquad GZe = \mu e, \qquad TSe = \mu e.$$

Use Newton's method to solve this system. At each iteration, solve the reduced KKT system:

$$\begin{bmatrix} H_{xx} & H_{xy} & A_x^T \\ H_{xy}^T & H_{yy}+D & A_y^T \\ A_x & A_y & -E \end{bmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta\lambda \end{pmatrix} =
\begin{pmatrix} \nabla_x f(x,y) - A_x^T\lambda \\ \nabla_y f(x,y) - A_y^T\lambda - z + s - D_g(l-y) + D_t(u-y) - \mu G^{-1}e + \mu T^{-1}e \\ \mu\Lambda^{-1}e - h(x,y) \end{pmatrix}$$

where $E = W\Lambda^{-1}$, $D = D_g + D_t$, $D_g = G^{-1}Z$, $D_t = T^{-1}S$.
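To make the scheme concrete, here is a minimal sketch on a one-variable instance of my own, min ½(x−2)² s.t. x ≥ 1, with the barrier parameter held fixed for simplicity; after eliminating Δw, each iteration solves a 2×2 reduced system with E = w/λ:

```python
def ipm_newton_steps(x=1.5, w=0.5, lam=1.0, mu=0.01, iters=6):
    """Damped Newton iterations on the barrier KKT conditions of
    min 0.5*(x-2)^2  s.t.  x - 1 >= 0  (slack form: x - 1 - w = 0).
    Illustrative sketch: mu is not decreased."""
    for _ in range(iters):
        r_dual = (x - 2.0) - lam             # grad f - A^T lam, A = 1
        h = x - 1.0
        E = w / lam                          # E = W Lambda^{-1}
        b1 = -r_dual
        b2 = -(h - w) + (mu - w * lam) / lam
        det = 1.0 * E + 1.0                  # det of [[H, -1], [1, E]], H = 1
        dx = (E * b1 + b2) / det
        dlam = (-b1 + 1.0 * b2) / det
        dw = (mu - w * lam - w * dlam) / lam
        # fraction-to-boundary rule: keep w and lam strictly positive
        alpha = 1.0
        for v, dv in ((w, dw), (lam, dlam)):
            if dv < 0:
                alpha = min(alpha, 0.995 * (-v / dv))
        x, w, lam = x + alpha * dx, w + alpha * dw, lam + alpha * dlam
    return x, w, lam

x, w, lam = ipm_newton_steps()
assert w > 0 and lam > 0
assert abs(x - 2.0) < 0.05   # near the true solution x* = 2 (mu held fixed)
```
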

5 Interior-Point Methods - 3
At each iteration: choose steplengths to ensure that the slacks remain strictly positive and that sufficient progress toward optimality and feasibility is attained. The value of the barrier parameter may also be updated as a function of $(W^{(k+1)}\Lambda^{(k+1)}e, \; G^{(k+1)}Z^{(k+1)}e, \; T^{(k+1)}S^{(k+1)}e)$.
Stopping criteria:
- primal infeasibility < ε
- dual infeasibility < ε
- average complementarity < ε
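The steplength safeguard described above is commonly implemented as a fraction-to-boundary rule; a sketch of my own (the 0.995 safety factor is a typical choice, not a value quoted on this slide):

```python
def fraction_to_boundary(v, dv, gamma=0.995):
    """Largest step fraction alpha in (0, 1] that keeps each component
    of v strictly positive along direction dv, with safety factor gamma."""
    alpha = 1.0
    for vi, dvi in zip(v, dv):
        if dvi < 0:
            alpha = min(alpha, gamma * (-vi / dvi))
    return alpha

# The first slack would hit zero at alpha = 0.5, so the step is cut
# to just below that; the second (increasing) slack imposes no limit.
alpha = fraction_to_boundary([1.0, 2.0], [-2.0, 1.0])
assert abs(alpha - 0.4975) < 1e-12
```
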

6

7 Warmstarting: Branch-and-Bound
Optimal solution at the parent: $(x^*, y^*, w^*, g^*, t^*, \lambda^*, z^*, s^*)$.
Current node: branch on some variable $y_j$. The following must hold:

$$l_j < y_j^* < u_j, \qquad g_j^* > 0, \quad z_j^* = 0, \qquad t_j^* > 0, \quad s_j^* = 0.$$

WLOG, assume that branching imposes a new upper bound $u_j < y_j^*$. The only term affected is $D_{t_j}(u_j - y_j) = s_j(u_j - y_j)/t_j$. However, at the first iteration $s_j^*(u_j - y_j^*)/t_j^* = 0$. The algorithm will get stuck at this nonoptimal and, in fact, infeasible solution.
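The zero-residual effect can be seen numerically in a toy check of my own (values invented): because the parent optimum has $s_j = 0$, the right-hand-side term vanishes even though the tightened bound is violated:

```python
# Parent optimum for variable y_j: upper bound inactive, so s_j = 0, t_j > 0.
y_j, t_j, s_j = 0.6, 0.5, 0.0

# Branching down imposes a new upper bound strictly below y_j.
u_new = 0.3

violation = y_j - u_new                   # 0.3: the warmstart point is infeasible
rhs_term = s_j * (u_new - y_j) / t_j      # the term D_{t_j}(u_j - y_j)

assert violation > 0      # the new bound is violated...
assert rhs_term == 0.0    # ...yet Newton's right-hand side sees no infeasibility
```
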

8 Warmstarting: Outer Approximation

$$\min_{x,y} \; (x - 0.25)^2 + y \quad \text{s.t.} \quad 60x^3 \le y, \quad y \in \{0, 1\}.$$

At each iteration of OA, an NLP subproblem is solved for a fixed value of $y$. The reduced KKT system:

$$\begin{bmatrix} 2 + 360x\lambda & -180x^2 \\ -180x^2 & -w/\lambda \end{bmatrix}
\begin{pmatrix} \Delta x \\ \Delta\lambda \end{pmatrix} =
\begin{pmatrix} 2(x - 0.25) + 180x^2\lambda \\ \mu/\lambda - (y - 60x^3) \end{pmatrix}$$

Let $y = 1$ for the first subproblem. Then $x^* = 0.25$, $w^* = 0.0625$, and $\lambda^* = 0$. Let $y = 0$ for the next subproblem. In the first iteration, $\Delta x = 0$ and $\Delta\lambda > 0$ but very close to 0, and the Newton step drives the slack $w$ negative. The steplength must be shortened to nearly zero to keep the slacks positive, and the algorithm becomes stuck at the old solution.

9 Infeasibility Identification
An infeasible interior-point method does not have to start and/or stay feasible. It cannot deliver a certificate of infeasibility - only heuristics are available.

10 Fixed Variables
Consider the following problem, in which $y$ is fixed through its bounds:

$$\min_y \; y^2 \quad \text{s.t.} \quad 1 \le y \le 1.$$

The optimality conditions of this problem are:

$$y - g = 1, \qquad y + t = 1, \qquad 2y - z + s = 0, \qquad gz = 0, \qquad ts = 0.$$

When $y = 1$, we have both $g$ and $t$ equal to 0, and at the optimal solution the dual variables $z$ and $s$ are free to take on any nonnegative values as long as they satisfy the equality $z - s = 2$.
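A quick check of this dual degeneracy (my own sketch): every nonnegative pair with $z - s = 2$ satisfies all five optimality conditions at $y = 1$:

```python
def kkt_residuals(y, g, t, z, s):
    """Residuals of the five optimality conditions of
    min y^2 s.t. 1 <= y <= 1 (y fixed at 1 via its bounds)."""
    return (y - g - 1.0,      # y - g = 1
            y + t - 1.0,      # y + t = 1
            2.0 * y - z + s,  # stationarity
            g * z,            # complementarity for the lower bound
            t * s)            # complementarity for the upper bound

# Infinitely many dual solutions: any z = s + 2 with s >= 0 works.
for s in (0.0, 1.0, 8.0):
    z = s + 2.0
    assert all(r == 0.0 for r in kkt_residuals(1.0, 0.0, 0.0, z, s))
```
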

11

12 Previous Work on Warmstarting IPMs
Approach:
- Find a suitable starting point
- Identify an iterate close to the central path of the original problem
- Modify the iterate so it is well-centered for the new problem
- Solve the new problem from this point
Works well in theory and practice: Gondzio (1998), Gondzio and Grothey (2003, 2006), Gondzio and Vial (1999), Yildirim and Wright (2002), John and Yildirim (2006). Mostly for LPs and QPs, and only for certain types of data perturbations.

13 Previous Work on Warmstarting IPMs
We propose a different approach: change the problem, not the starting point. Also investigated by Waltz & Ordonez, and Engau, Anjos, & Vanelli (2008).
Our approach (Benson & Shanno (2005) and Benson (2007)):
- Corrects the numerical issues in the KKT system at the optimum of the original problem
- Allows the nonnegative variables to become negative to encourage longer steps
- Solves the new problem from the optimum of the original problem without modification

14 Primal-Dual Penalty Model
The primal problem:

$$\min_{x,y} \; f(x,y) \quad \text{s.t.} \quad h(x,y) \ge 0, \quad l \le y \le u.$$

The primal penalty problem:

$$\begin{aligned}
\min_{x,y,w,g,t,\xi_w,\xi_g,\xi_t} \quad & f(x,y) + c_w^T\xi_w + c_g^T\xi_g + c_t^T\xi_t \\
\text{s.t.} \quad & h(x,y) - w = 0, \quad y - g = l, \quad y + t = u, \\
& -\xi_w \le w \le b_\lambda, \quad -\xi_g \le g \le b_z, \quad -\xi_t \le t \le b_s, \\
& \xi_w, \xi_g, \xi_t \ge 0.
\end{aligned}$$

15 Primal-Dual Penalty Model
The dual problem:

$$\max_{\lambda,z,s} \; \text{dual\_obj}(\lambda, z, s; x, y) \quad \text{s.t.} \quad \nabla_x f(x,y) - A_x^T\lambda = 0, \quad \nabla_y f(x,y) - A_y^T\lambda - z + s = 0, \quad \lambda, z, s \ge 0.$$

The dual penalty problem:

$$\begin{aligned}
\max_{\lambda,z,s,\psi_\lambda,\psi_z,\psi_s} \quad & \text{dual\_obj}(\lambda, z, s; x, y) - b_\lambda^T\psi_\lambda - b_z^T\psi_z - b_s^T\psi_s \\
\text{s.t.} \quad & \nabla_x f(x,y) - A_x^T\lambda = 0, \quad \nabla_y f(x,y) - A_y^T\lambda - z + s = 0, \\
& -\psi_\lambda \le \lambda \le c_w - \psi_\lambda, \quad -\psi_z \le z \le c_g - \psi_z, \quad -\psi_s \le s \le c_t - \psi_s, \\
& \psi_\lambda, \psi_z, \psi_s \ge 0.
\end{aligned}$$

16 Solving the Penalty Problem
First-order conditions:

$$h(x,y) - w = 0, \qquad y - g = l, \qquad y + t = u,$$
$$\nabla_x f(x,y) - A_x^T\lambda = 0, \qquad \nabla_y f(x,y) - A_y^T\lambda - z + s = 0,$$
$$(W + \Xi_w)(\Lambda + \Psi_\lambda)e = \mu e, \qquad (G + \Xi_g)(Z + \Psi_z)e = \mu e, \qquad (T + \Xi_t)(S + \Psi_s)e = \mu e,$$
$$\Psi_\lambda(B_\lambda - W)e = \mu e, \qquad \Psi_z(B_z - G)e = \mu e, \qquad \Psi_s(B_s - T)e = \mu e,$$
$$\Xi_w(C_w - \Lambda - \Psi_\lambda)e = \mu e, \qquad \Xi_g(C_g - Z - \Psi_z)e = \mu e, \qquad \Xi_t(C_t - S - \Psi_s)e = \mu e.$$

The reduced KKT system has

$$E = \left( \left( (\Lambda + \Psi_\lambda)^{-1}(W + \Xi_w) + \Xi_w(C_w - \Lambda - \Psi_\lambda)^{-1} \right)^{-1} + \Psi_\lambda(B_\lambda - W)^{-1} \right)^{-1},$$
$$D_g = \left( (Z + \Psi_z)^{-1}(G + \Xi_g) + \Xi_g(C_g - Z - \Psi_z)^{-1} \right)^{-1} + \Psi_z(B_z - G)^{-1},$$
$$D_t = \left( (S + \Psi_s)^{-1}(T + \Xi_t) + \Xi_t(C_t - S - \Psi_s)^{-1} \right)^{-1} + \Psi_s(B_s - T)^{-1},$$

and an appropriately modified right-hand side. Each expression reduces to its counterpart for the original problem ($E = W\Lambda^{-1}$, $D_g = G^{-1}Z$, $D_t = T^{-1}S$) when the relaxation variables vanish.

17 Exactness of the Penalty Model
Set the penalty parameters so that

$$b_\lambda > \bar{w}, \quad c_w > \bar{\lambda}, \quad b_z > \bar{g}, \quad c_g > \bar{z}, \quad b_s > \bar{t}, \quad c_t > \bar{s},$$

where the barred quantities denote optimal values for the original problem. Then the optimality conditions of the penalty problem reduce to the optimality conditions of the original problem.

18 Computational Issues: Initialization
How do we initialize the relaxation variables and the penalty parameters in order to reach the new optimum quickly after a warmstart?
Relaxation variables:

$$\xi_w = \max(h(x,y) - w, 0) + \beta, \quad \xi_g = \max(y - g - l, 0) + \beta, \quad \xi_t = \max(y + t - u, 0) + \beta, \quad \psi_\lambda = \psi_z = \psi_s = \beta,$$

where $\beta$ is a small parameter, currently set to $10^{-5}M$, with $M$ the greater of 1 and the largest primal or dual slack value. For discrete variables, $\beta = 1$.
Penalty parameters:

$$b_\lambda = 2(w + \kappa), \quad c_w = 2(\lambda + \psi_\lambda + \kappa), \quad b_z = 2(g + \kappa), \quad c_g = 2(z + \psi_z + \kappa), \quad b_s = 2(t + \kappa), \quad c_t = 2(s + \psi_s + \kappa),$$

where $\kappa$ is a constant with a default value of 1.
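These initialization rules are straightforward to express in code; a scalar sketch of my own (one constraint and one bounded variable, test values invented):

```python
def init_warmstart(h_val, w, g, t, lam, z, s, l, u, y, beta=1e-5, kappa=1.0):
    """Initialize relaxation variables and penalty parameters for a
    warmstart of the primal-dual penalty method (scalar sketch)."""
    xi_w = max(h_val - w, 0.0) + beta       # primal relaxation variables
    xi_g = max(y - g - l, 0.0) + beta
    xi_t = max(y + t - u, 0.0) + beta
    psi_lam = psi_z = psi_s = beta          # dual relaxation variables
    b_lam = 2.0 * (w + kappa)               # primal penalty parameters
    b_z = 2.0 * (g + kappa)
    b_s = 2.0 * (t + kappa)
    c_w = 2.0 * (lam + psi_lam + kappa)     # dual penalty parameters
    c_g = 2.0 * (z + psi_z + kappa)
    c_t = 2.0 * (s + psi_s + kappa)
    return xi_w, xi_g, xi_t, psi_lam, b_lam, c_w, b_z, c_g, b_s, c_t

vals = init_warmstart(h_val=0.3, w=0.3, g=1.5, t=1.5,
                      lam=0.2, z=0.0, s=0.0, l=-1.0, u=2.0, y=0.5)
assert min(vals) > 0                 # everything starts strictly positive
assert vals[4] > 0.3 and vals[5] > 0.2   # b_lam > w and c_w > lam
```
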

19 Computational Issues: Updates
If necessary, how do we update the penalty parameters? Static updates or dynamic updates. Dynamic updates:

If $w_i^{(k+1)} > 0.9\,b_{\lambda,i}^{(k)}$, then $b_{\lambda,i}^{(k+1)} = 10\,b_{\lambda,i}^{(k)}$, $i = 1, \ldots, m$.
If $g_j^{(k+1)} > 0.9\,b_{z,j}^{(k)}$, then $b_{z,j}^{(k+1)} = 10\,b_{z,j}^{(k)}$, $j = 1, \ldots, p$.
If $t_j^{(k+1)} > 0.9\,b_{s,j}^{(k)}$, then $b_{s,j}^{(k+1)} = 10\,b_{s,j}^{(k)}$, $j = 1, \ldots, p$.
If $\lambda_i^{(k+1)} + \psi_{\lambda,i}^{(k)} > 0.9\,c_{w,i}^{(k)}$, then $c_{w,i}^{(k+1)} = 10\,c_{w,i}^{(k)}$, $i = 1, \ldots, m$.
If $z_j^{(k+1)} + \psi_{z,j}^{(k)} > 0.9\,c_{g,j}^{(k)}$, then $c_{g,j}^{(k+1)} = 10\,c_{g,j}^{(k)}$, $j = 1, \ldots, p$.
If $s_j^{(k+1)} + \psi_{s,j}^{(k)} > 0.9\,c_{t,j}^{(k)}$, then $c_{t,j}^{(k+1)} = 10\,c_{t,j}^{(k)}$, $j = 1, \ldots, p$.
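The dynamic rule is essentially one line per parameter family; a sketch of my own (array values invented):

```python
def update_penalties(slacks, bounds, threshold=0.9, factor=10.0):
    """Dynamic update: any penalty bound that its slack has approached
    to within the threshold fraction is increased by the given factor."""
    return [factor * b if v > threshold * b else b
            for v, b in zip(slacks, bounds)]

# Only the second slack (4.6) exceeds 0.9 * 5.0, so only its bound grows.
assert update_penalties([1.0, 4.6, 0.2], [5.0, 5.0, 5.0]) == [5.0, 50.0, 5.0]
```

The same helper applies unchanged to the dual side with `slacks` replaced by $\lambda + \psi_\lambda$ (and its analogues) and `bounds` by the $c$ parameters.
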

20 Benefits of the Primal-Dual Penalty Model
- Warmstarting
- Primal and dual infeasibility/unboundedness detection
- Handling of fixed variables
- Bounded sets of optimal primal and dual solutions
- Handling of complementarity conditions
- Detection of non-KKT optima
- Relief of the jamming phenomenon

21 One last improvement...
A binary variable $y$ can also be expressed via the complementarity condition

$$0 \le y \perp 1 - y \ge 0.$$

This is not guaranteed to give the optimal solution, but it can provide integer-feasible solutions. The computational effort is not significantly more than solving one subproblem:

$$D = D_g + D_t + 2\Lambda + (2Y - I)\Lambda W^{-1}(2Y - I)$$

with an appropriately modified right-hand side.
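The complementarity reformulation is easy to verify pointwise; a small check of my own:

```python
def binary_as_complementarity(y, tol=1e-12):
    """y is in {0, 1} exactly when y >= 0, 1 - y >= 0, and y*(1-y) = 0,
    i.e. the complementarity condition 0 <= y perp 1 - y >= 0 holds."""
    return y >= -tol and (1.0 - y) >= -tol and abs(y * (1.0 - y)) <= tol

assert binary_as_complementarity(0.0)
assert binary_as_complementarity(1.0)
assert not binary_as_complementarity(0.5)   # fractional values are excluded
```
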

22

23 Binary variables
Mixed-integer nonlinear programming problems from MINLPLib, solved via the complementarity (MPEC) reformulation. The table reported, for each problem, the MPEC iteration count, node count, objective value $f_{MPEC}$, and whether the MPEC solution was optimal; the numeric columns were lost in transcription. Optimality flags: alan (Y), batch (N), batchdes (N), ex (Y), ex1223a (Y), ex1223b (Y), ex (N), ex (N), ex (N), fac (N), fuel (Y), gbd (Y), gkocis (N), johnall (Y), meanvarx (N), nous (Y). (Several ex, fac, and similar instance names were truncated in transcription.)

24 Binary variables - 2
Mixed-integer nonlinear programming problems from MINLPLib (continued; numeric columns lost in transcription). Optimality flags: oaer (Y), procsel (Y), ravem (N), st_e (Y), st_miqp (N), st_miqp (Y), st_test (N), st_test (Y), st_test (N), synthes (N), synthes (Y), synthes (N). (Instance-name suffixes were truncated in transcription.)

25 Warmstarting
Mixed-integer nonlinear programming problems from MINLPLib. The table reported warmstart iterations, coldstart iterations, percent improvement, node counts, infeasible-node counts, and optimal objective values; the numeric columns were lost in transcription. Problems covered: alan, batch*, batchdes, du-opt (two instances), eg_all_s, eg_disc_s, eg_disc2_s, eg_int_s, several ex instances (names truncated), ex1223a, ex1223b, fac.

26 Warmstarting - 2
Mixed-integer nonlinear programming problems from MINLPLib (continued; numeric columns lost in transcription). Problems covered: fuel*, gbd, gear, gkocis, johnall, meanvarx, nous, and ten nvs instances (suffixes truncated).

27 Warmstarting - 3
Mixed-integer nonlinear programming problems from MINLPLib (continued; numeric columns lost in transcription). Problems covered: seven further nvs instances, oaer, two prob instances, procsel, ravem, st_e, and three st_miqp instances (suffixes truncated).

28 Warmstarting - 4
Mixed-integer nonlinear programming problems from MINLPLib (continued; numeric columns lost in transcription). Problems covered: six st_test instances, two st_testgr instances, st_testph, three synthes instances (suffixes truncated), and tloss, plus an OVERALL summary row.

29 MILPs
Mixed-integer linear programming problems solved using branch-and-bound, comparing warmstart and coldstart iteration counts (numeric columns lost in transcription). Problems covered: ten Diet instances; six HL instances, with HL415-4 infeasible; and fourteen Synthes instances.

30 Cutting Stock
Master problems of the cutting stock model, comparing warmstart and coldstart iteration counts on three Master instances (values lost in transcription).

31

32 Mixed-Integer SOCP
- MISOCP has interesting application areas, e.g. portfolio optimization with cardinality constraints and facility location problems with fixed costs.
- Interior-point methods have good convergence properties and computational performance.
- The homogeneous self-dual approach allows for a similar type of warmstart capability.
- A naive implementation: SeDuMi + outer approximation. Looking to implement a re-centering scheme.
- The primal-dual penalty approach also extends naturally to SOCPs.

33

34 Interior-point methods can be warmstarted when regularization is used. Primal-dual regularization allows for warmstarts after any change to the problem. Moral of the story: We're working on it!

More information

5 Handling Constraints

5 Handling Constraints 5 Handling Constraints Engineering design optimization problems are very rarely unconstrained. Moreover, the constraints that appear in these problems are typically nonlinear. This motivates our interest

More information

Lecture 9 Sequential unconstrained minimization

Lecture 9 Sequential unconstrained minimization S. Boyd EE364 Lecture 9 Sequential unconstrained minimization brief history of SUMT & IP methods logarithmic barrier function central path UMT & SUMT complexity analysis feasibility phase generalized inequalities

More information

Rounding-based heuristics for nonconvex MINLPs

Rounding-based heuristics for nonconvex MINLPs Mathematical Programming Computation manuscript No. (will be inserted by the editor) Rounding-based heuristics for nonconvex MINLPs Giacomo Nannicini Pietro Belotti March 30, 2011 Abstract We propose two

More information

2.3 Linear Programming

2.3 Linear Programming 2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

CS-E4830 Kernel Methods in Machine Learning

CS-E4830 Kernel Methods in Machine Learning CS-E4830 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 27. September, 2017 Juho Rousu 27. September, 2017 1 / 45 Convex optimization Convex optimisation This

More information

Optimality, Duality, Complementarity for Constrained Optimization

Optimality, Duality, Complementarity for Constrained Optimization Optimality, Duality, Complementarity for Constrained Optimization Stephen Wright University of Wisconsin-Madison May 2014 Wright (UW-Madison) Optimality, Duality, Complementarity May 2014 1 / 41 Linear

More information

Development of the new MINLP Solver Decogo using SCIP - Status Report

Development of the new MINLP Solver Decogo using SCIP - Status Report Development of the new MINLP Solver Decogo using SCIP - Status Report Pavlo Muts with Norman Breitfeld, Vitali Gintner, Ivo Nowak SCIP Workshop 2018, Aachen Table of contents 1. Introduction 2. Automatic

More information

An Empirical Evaluation of a Walk-Relax-Round Heuristic for Mixed Integer Convex Programs

An Empirical Evaluation of a Walk-Relax-Round Heuristic for Mixed Integer Convex Programs Noname manuscript No. (will be inserted by the editor) An Empirical Evaluation of a Walk-Relax-Round Heuristic for Mixed Integer Convex Programs Kuo-Ling Huang Sanjay Mehrotra Received: date / Accepted:

More information

Algorithms for constrained local optimization

Algorithms for constrained local optimization Algorithms for constrained local optimization Fabio Schoen 2008 http://gol.dsi.unifi.it/users/schoen Algorithms for constrained local optimization p. Feasible direction methods Algorithms for constrained

More information

ICS-E4030 Kernel Methods in Machine Learning

ICS-E4030 Kernel Methods in Machine Learning ICS-E4030 Kernel Methods in Machine Learning Lecture 3: Convex optimization and duality Juho Rousu 28. September, 2016 Juho Rousu 28. September, 2016 1 / 38 Convex optimization Convex optimisation This

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

18. Primal-dual interior-point methods

18. Primal-dual interior-point methods L. Vandenberghe EE236C (Spring 213-14) 18. Primal-dual interior-point methods primal-dual central path equations infeasible primal-dual method primal-dual method for self-dual embedding 18-1 Symmetric

More information

The moment-lp and moment-sos approaches

The moment-lp and moment-sos approaches The moment-lp and moment-sos approaches LAAS-CNRS and Institute of Mathematics, Toulouse, France CIRM, November 2013 Semidefinite Programming Why polynomial optimization? LP- and SDP- CERTIFICATES of POSITIVITY

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Interior-Point Methods

Interior-Point Methods Interior-Point Methods Stephen Wright University of Wisconsin-Madison Simons, Berkeley, August, 2017 Wright (UW-Madison) Interior-Point Methods August 2017 1 / 48 Outline Introduction: Problems and Fundamentals

More information

Applications of Linear Programming

Applications of Linear Programming Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Non-linear programming In case of LP, the goal

More information

SF2822 Applied nonlinear optimization, final exam Wednesday June

SF2822 Applied nonlinear optimization, final exam Wednesday June SF2822 Applied nonlinear optimization, final exam Wednesday June 3 205 4.00 9.00 Examiner: Anders Forsgren, tel. 08-790 7 27. Allowed tools: Pen/pencil, ruler and eraser. Note! Calculator is not allowed.

More information

CS711008Z Algorithm Design and Analysis

CS711008Z Algorithm Design and Analysis CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief

More information

Convex Optimization M2

Convex Optimization M2 Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization

More information

Some Recent Advances in Mixed-Integer Nonlinear Programming

Some Recent Advances in Mixed-Integer Nonlinear Programming Some Recent Advances in Mixed-Integer Nonlinear Programming Andreas Wächter IBM T.J. Watson Research Center Yorktown Heights, New York andreasw@us.ibm.com SIAM Conference on Optimization 2008 Boston, MA

More information

SF2822 Applied nonlinear optimization, final exam Saturday December

SF2822 Applied nonlinear optimization, final exam Saturday December SF2822 Applied nonlinear optimization, final exam Saturday December 5 27 8. 3. Examiner: Anders Forsgren, tel. 79 7 27. Allowed tools: Pen/pencil, ruler and rubber; plus a calculator provided by the department.

More information

Lagrangian Duality Theory

Lagrangian Duality Theory Lagrangian Duality Theory Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapter 14.1-4 1 Recall Primal and Dual

More information

A SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION

A SHIFTED PRIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OPTIMIZATION A SHIFTED RIMAL-DUAL INTERIOR METHOD FOR NONLINEAR OTIMIZATION hilip E. Gill Vyacheslav Kungurtsev Daniel. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-18-1 February 1, 2018

More information

A Computational Comparison of Branch and Bound and. Outer Approximation Algorithms for 0-1 Mixed Integer. Brian Borchers. John E.

A Computational Comparison of Branch and Bound and. Outer Approximation Algorithms for 0-1 Mixed Integer. Brian Borchers. John E. A Computational Comparison of Branch and Bound and Outer Approximation Algorithms for 0-1 Mixed Integer Nonlinear Programs Brian Borchers Mathematics Department, New Mexico Tech, Socorro, NM 87801, U.S.A.

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

Interior-Point versus Simplex methods for Integer Programming Branch-and-Bound

Interior-Point versus Simplex methods for Integer Programming Branch-and-Bound Interior-Point versus Simplex methods for Integer Programming Branch-and-Bound Samir Elhedhli elhedhli@uwaterloo.ca Department of Management Sciences, University of Waterloo, Canada Page of 4 McMaster

More information

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties

A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties A null-space primal-dual interior-point algorithm for nonlinear optimization with nice convergence properties Xinwei Liu and Yaxiang Yuan Abstract. We present a null-space primal-dual interior-point algorithm

More information

Improved quadratic cuts for convex mixed-integer nonlinear programs

Improved quadratic cuts for convex mixed-integer nonlinear programs Improved quadratic cuts for convex mixed-integer nonlinear programs Lijie Su a,b, Lixin Tang a*, David E. Bernal c, Ignacio E. Grossmann c a Institute of Industrial and Systems Engineering, Northeastern

More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 10: Interior methods. Anders Forsgren. 1. Try to solve theory question 7.

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 10: Interior methods. Anders Forsgren. 1. Try to solve theory question 7. SF2822 Applied Nonlinear Optimization Lecture 10: Interior methods Anders Forsgren SF2822 Applied Nonlinear Optimization, KTH 1 / 24 Lecture 10, 2017/2018 Preparatory question 1. Try to solve theory question

More information

An Inexact Newton Method for Optimization

An Inexact Newton Method for Optimization New York University Brown Applied Mathematics Seminar, February 10, 2009 Brief biography New York State College of William and Mary (B.S.) Northwestern University (M.S. & Ph.D.) Courant Institute (Postdoc)

More information

MINLP: Theory, Algorithms, Applications: Lecture 3, Basics of Algorothms

MINLP: Theory, Algorithms, Applications: Lecture 3, Basics of Algorothms MINLP: Theory, Algorithms, Applications: Lecture 3, Basics of Algorothms Jeff Linderoth Industrial and Systems Engineering University of Wisconsin-Madison Jonas Schweiger Friedrich-Alexander-Universität

More information

A Brief Review on Convex Optimization

A Brief Review on Convex Optimization A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review

More information

Feasibility Pump for Mixed Integer Nonlinear Programs 1

Feasibility Pump for Mixed Integer Nonlinear Programs 1 Feasibility Pump for Mixed Integer Nonlinear Programs 1 Presenter: 1 by Pierre Bonami, Gerard Cornuejols, Andrea Lodi and Francois Margot Mixed Integer Linear or Nonlinear Programs (MILP/MINLP) Optimize

More information

Second-order cone programming

Second-order cone programming Outline Second-order cone programming, PhD Lehigh University Department of Industrial and Systems Engineering February 10, 2009 Outline 1 Basic properties Spectral decomposition The cone of squares The

More information

SOLVING PROBLEMS WITH SEMIDEFINITE AND RELATED CONSTRAINTS USING INTERIOR-POINT METHODS FOR NONLINEAR PROGRAMMING

SOLVING PROBLEMS WITH SEMIDEFINITE AND RELATED CONSTRAINTS USING INTERIOR-POINT METHODS FOR NONLINEAR PROGRAMMING SOLVING PROBLEMS WITH SEMIDEFINITE AND RELATED CONSTRAINTS SING INTERIOR-POINT METHODS FOR NONLINEAR PROGRAMMING ROBERT J. VANDERBEI AND HANDE YRTTAN BENSON Operations Research and Financial Engineering

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

Convex Optimization and SVM

Convex Optimization and SVM Convex Optimization and SVM Problem 0. Cf lecture notes pages 12 to 18. Problem 1. (i) A slab is an intersection of two half spaces, hence convex. (ii) A wedge is an intersection of two half spaces, hence

More information

Numerical Optimization

Numerical Optimization Linear Programming - Interior Point Methods Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Example 1 Computational Complexity of Simplex Algorithm

More information

Duality Theory of Constrained Optimization

Duality Theory of Constrained Optimization Duality Theory of Constrained Optimization Robert M. Freund April, 2014 c 2014 Massachusetts Institute of Technology. All rights reserved. 1 2 1 The Practical Importance of Duality Duality is pervasive

More information

Duality revisited. Javier Peña Convex Optimization /36-725

Duality revisited. Javier Peña Convex Optimization /36-725 Duality revisited Javier Peña Conve Optimization 10-725/36-725 1 Last time: barrier method Main idea: approimate the problem f() + I C () with the barrier problem f() + 1 t φ() tf() + φ() where t > 0 and

More information

where X is the feasible region, i.e., the set of the feasible solutions.

where X is the feasible region, i.e., the set of the feasible solutions. 3.5 Branch and Bound Consider a generic Discrete Optimization problem (P) z = max{c(x) : x X }, where X is the feasible region, i.e., the set of the feasible solutions. Branch and Bound is a general semi-enumerative

More information

Lecture 18: Optimization Programming

Lecture 18: Optimization Programming Fall, 2016 Outline Unconstrained Optimization 1 Unconstrained Optimization 2 Equality-constrained Optimization Inequality-constrained Optimization Mixture-constrained Optimization 3 Quadratic Programming

More information