The Value function of a Mixed-Integer Linear Program with a Single Constraint


The Value Function of a Mixed Integer Linear Program with a Single Constraint

Menal Guzelsoy, Ted Ralphs
ISE Department, COR@L Lab, Lehigh University
tkralphs@lehigh.edu

University of Wisconsin, February 8, 2008

Thanks: Work supported in part by the National Science Foundation

Outline

- The Value Function
- MILP Duality
- Linear Approximations
- Structure of the Value Function
- Extending the Value Function
- Evaluating the Value Function

Motivation

The goal of this work is to study the structure of the value function of a general MILP. Eventually, we hope this will lead to methods of approximation useful for
- sensitivity analysis,
- warm starting,
- other methods that require dual information.
Computing the value function (or even an approximation of it) is difficult, even in a small neighborhood. Our approach is to begin by considering the value functions of various single-row relaxations.

The Value Function

Consider a general mixed-integer linear program (MILP)

    z_P = min_{x ∈ S} cx,                                  (P)

where c ∈ R^n, S = {x ∈ Z^r_+ × R^{n−r}_+ : Ax = b}, with A ∈ Q^{m×n}, b ∈ R^m.

The value function of the primal problem (P) is

    z(d) = min_{x ∈ S(d)} cx,

where for a given d ∈ R^m, S(d) = {x ∈ Z^r_+ × R^{n−r}_+ : Ax = d}. We let z(d) = ∞ if S(d) = ∅.
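As a quick illustration of the definition, the value function of a small single-row MILP can be evaluated by brute force. The instance below is a toy example of our own (not the one used later in the talk): integer parts are enumerated up to a bound, and the remainder is covered by the continuous column with the best cost-to-coefficient ratio of the correct sign, which is optimal for a single equality constraint.

```python
import math
from itertools import product

def value_function(cI, aI, cC, aC, d, ub=10):
    """z(d) = min{cx : ax = d, integer parts in Z_+, continuous parts in R_+},
    evaluated by enumerating integer parts up to a bound (toy instance only)."""
    pos = [ci / ai for ci, ai in zip(cC, aC) if ai > 0]   # usable when remainder > 0
    neg = [ci / ai for ci, ai in zip(cC, aC) if ai < 0]   # usable when remainder < 0
    best = math.inf
    for y in product(range(ub + 1), repeat=len(aI)):
        rem = d - sum(ai * yi for ai, yi in zip(aI, y))
        if abs(rem) < 1e-9:
            cost = 0.0
        elif rem > 0 and pos:
            cost = min(pos) * rem      # best positive continuous column
        elif rem < 0 and neg:
            cost = max(neg) * rem      # best negative continuous column
        else:
            continue                   # this integer part cannot reach d
        best = min(best, sum(ci * yi for ci, yi in zip(cI, y)) + cost)
    return best                        # math.inf encodes S(d) = empty set

# one integer column (c=1, a=2) and one continuous column (c=3, a=1)
assert value_function([1], [2], [3], [1], 4) == 2      # integer part covers d
assert value_function([1], [2], [3], [1], 1) == 3      # continuous part only
assert value_function([1], [2], [3], [1], 0.5) == 1.5  # fractional right-hand side
```

Evaluating such a function pointwise is easy; the talk's point is that describing it globally is hard.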

Previous Work

Johnson [1973, 1974, 1979], Jeroslow [1978]: the theory of subadditive duality for integer linear programs.

Pure integer programs:
- Jeroslow [1982]: Gomory functions, the maximum of finitely many subadditive functions.
- Lasserre [2004]: generating functions, two-sided Z-transformation.
- De Loera et al. [2004]: generating functions, global test set.

Mixed integer programs:
- Jeroslow [1982]: the minimum of finitely many Gomory functions.
- Blair [1995]: the Jeroslow formula, consisting of a Gomory function and a correction term.

Subadditive Duality

A function F is subadditive over a domain Θ if

    F(λ_1) + F(λ_2) ≥ F(λ_1 + λ_2)   ∀ λ_1, λ_2, λ_1 + λ_2 ∈ Θ.

For b ∈ R^m, the subadditive dual of the primal problem is

    z_D = max F(b)
    s.t.  F(a_j) ≤ c_j,  j ∈ I,
          F̄(a_j) ≤ c_j,  j ∈ C,
          F subadditive, F(0) = 0,

where a_j is the j-th column of A, I = {1, ..., r}, C = {r+1, ..., n}, and F̄ is defined by

    F̄(d) = lim sup_{δ → 0+} F(δd)/δ,   d ∈ R^m.

F̄ (the upper d-directional derivative of F at zero) is positively homogeneous, subadditive, and bounds F from above.

Primal-Dual Relations

Theorem (Weak Duality). Let x be a feasible solution to the primal problem and let F be a feasible solution to the subadditive dual problem. Then F(b) ≤ cx.

Theorem (Strong Duality). If the primal problem (resp., the dual) has a finite optimum, then so does the dual problem (resp., the primal) and the two are equal. In other words, there exists a dual feasible F* such that

    F*(b) = z_D = z_P = z(b)   and   F*(d) ≤ z(d)  ∀ d ∈ R^m.

Extensions: complementarity, optimality conditions, etc.
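Weak duality is easy to check numerically. The tiny pure-integer instance and the candidate dual function below are our own illustrative choices: any subadditive F with F(0) = 0 and F(a_j) ≤ c_j bounds the cost of every feasible solution from below.

```python
import math

# toy instance: min 3x1 + 4x2  s.t.  2x1 + 3x2 = 7,  x in Z_+^2
A, c, b = [2, 3], [3, 4], 7

# a candidate dual function: a nonnegative multiple of a round-up, hence subadditive
F = lambda d: 1.5 * math.ceil(d / 2)

assert F(0) == 0
assert all(F(a_j) <= c_j for a_j, c_j in zip(A, c))   # dual feasibility: F(2)<=3, F(3)<=4

# every primal-feasible x satisfies F(b) <= cx
feasible = [(x1, x2) for x1 in range(8) for x2 in range(8) if 2 * x1 + 3 * x2 == b]
assert feasible and all(F(b) <= 3 * x1 + 4 * x2 for x1, x2 in feasible)
```

Here F(7) = 6 while the primal optimum is 10, so this particular F is dual feasible but not the optimizer that strong duality guarantees.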

Chvátal and Gomory Functions

Let L^m = {f : R^m → R : f is linear}. Chvátal functions are the smallest set of functions C^m such that
- If f ∈ L^m, then f ∈ C^m.
- If f_1, f_2 ∈ C^m and α, β ∈ Q_+, then αf_1 + βf_2 ∈ C^m.
- If f ∈ C^m, then ⌈f⌉ ∈ C^m.

Gomory functions are the smallest set of functions G^m ⊆ C^m with the additional property that
- If f_1, f_2 ∈ G^m, then max{f_1, f_2} ∈ G^m.

Theorem. For PILPs (r = n), if z(0) = 0, then there is a g ∈ G^m such that g(d) = z(d) for all d ∈ R^m with S(d) ≠ ∅.
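The closure operations are easy to mimic in code. The functions below are arbitrary choices of ours, built only from the stated rules (linear seed, round-up, nonnegative rational combination, then max), and a grid spot-check confirms that the resulting Gomory function is subadditive.

```python
import math

f1 = lambda d: d / 3                      # linear, so in C^1
f2 = lambda d: math.ceil(f1(d))           # round-up of a Chvatal function
f3 = lambda d: 2 * f2(d) + 0.5 * d        # nonnegative rational combination
g  = lambda d: max(f3(d), d)              # max of Chvatal functions: in G^1

assert g(0) == 0
# spot-check subadditivity g(x) + g(y) >= g(x + y) on a grid
pts = [i / 4 for i in range(-20, 21)]
assert all(g(x) + g(y) >= g(x + y) - 1e-9 for x in pts for y in pts)
```

Each closure rule preserves subadditivity (round-up, nonnegative combinations, and pointwise max of subadditive functions are subadditive), which is why the check must pass.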

Jeroslow Formula - MILP

Let the set E consist of the index sets of dual feasible bases of the linear program

    min { (1/M) c_C x_C : (1/M) A_C x_C = b, x ≥ 0 },

where M ∈ Z_+ is such that, for any E ∈ E, M A_E^{-1} a_j ∈ Z^m for all j ∈ I.

Theorem (Jeroslow Formula). There is a g ∈ G^m such that

    z(d) = min_{E ∈ E} g(⌊d⌋_E) + v_E (d − ⌊d⌋_E)   ∀ d ∈ R^m with S(d) ≠ ∅,

where for E ∈ E, ⌊d⌋_E = A_E ⌊A_E^{-1} d⌋ and v_E is the corresponding basic feasible solution.

Linear Approximations

We will consider

    min_{x ∈ S} cx,                                        (P)

where c ∈ R^n, S = {x ∈ Z^r_+ × R^{n−r}_+ : ax = b}, with a ∈ Q^n, b ∈ R. The value function of the primal problem (P) is now

    z(d) = min_{x ∈ S(d)} cx,

where for a given d ∈ R, S(d) = {x ∈ Z^r_+ × R^{n−r}_+ : ax = d}.

Assumptions:
- z(0) = 0, so that z : R → R ∪ {+∞},
- r < n, that is, C ≠ ∅, so that z : R → R.

Let N = I ∪ C, N_+ = {i ∈ N : a_i > 0}, and N_− = {i ∈ N : a_i < 0}.

Example

    min  x_1 + x_2 + x_4 + x_5 + 4x_6
    s.t. x_1 − x_2 + x_3 + x_4 − x_5 + x_6 = b,
         x_1, x_2, x_3 ∈ Z_+,  x_4, x_5, x_6 ∈ R_+.

[Figure: the value function z(d)]

Lower Bound (LP Relaxation)

The value function of the LP relaxation yields a lower bound. In this case, it has a convenient closed form:

    F_L(d) = max{ud : ζ ≤ u ≤ η, u ∈ R}
           = ηd if d > 0,
             0  if d = 0,
             ζd if d < 0,

where η = min{c_i/a_i : i ∈ N_+} and ζ = max{c_i/a_i : i ∈ N_−}. Note that F_L ≤ z.

Upper Bound (Continuous Relaxation)

To get an upper bound, we consider only the continuous variables:

    F_U(d) = min { Σ_{i ∈ C} c_i x_i : Σ_{i ∈ C} a_i x_i = d, x_i ≥ 0, i ∈ C }
           = η_C d if d > 0,
             0     if d = 0,
             ζ_C d if d < 0,

where η_C = min{c_i/a_i : i ∈ C_+}, C_+ = {i ∈ C : a_i > 0}, and ζ_C = max{c_i/a_i : i ∈ C_−}, C_− = {i ∈ C : a_i < 0}.

By convention, C_+ = ∅ ⇒ η_C = ∞ and C_− = ∅ ⇒ ζ_C = −∞. Note that z ≤ F_U.
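On a small instance of our own choosing (not the talk's example), both bounds and a brute-force z can be computed directly, confirming the sandwich F_L ≤ z ≤ F_U:

```python
import math
from itertools import product

c, a = [3, 2, 2, 1], [2, -2, 1, -1]       # x1, x2 integer; x3, x4 continuous
C = [2, 3]                                 # indices of the continuous variables

eta    = min(c[i] / a[i] for i in range(4) if a[i] > 0)   # = 1.5
zeta   = max(c[i] / a[i] for i in range(4) if a[i] < 0)   # = -1.0
eta_C  = min(c[i] / a[i] for i in C if a[i] > 0)          # = 2.0
zeta_C = max(c[i] / a[i] for i in C if a[i] < 0)          # = -1.0

F_L = lambda d: eta * d if d > 0 else (zeta * d if d < 0 else 0.0)
F_U = lambda d: eta_C * d if d > 0 else (zeta_C * d if d < 0 else 0.0)

def z(d):   # brute force: integer parts + best-ratio continuous completion
    best = math.inf
    for y1, y2 in product(range(8), repeat=2):
        rem = d - 2 * y1 + 2 * y2
        cost = 0.0 if abs(rem) < 1e-9 else (eta_C * rem if rem > 0 else zeta_C * rem)
        best = min(best, 3 * y1 + 2 * y2 + cost)
    return best

for d in [i / 2 for i in range(-10, 11)]:
    assert F_L(d) - 1e-9 <= z(d) <= F_U(d) + 1e-9
```

Near the origin z coincides with F_U (no integer variable is worth using for small |d|), while F_L is only attained where the LP bound happens to be tight.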

Example (cont'd)

We have ζ = 0. Consequently,

    F_L(d) = ηd if d ≥ 0,  0 if d < 0,
and
    F_U(d) = η_C d if d ≥ 0,  ζ_C d if d < 0.

[Figure: z(d) with the bounds F_U and F_L and the slopes η, ζ, η_C, ζ_C]

Observations

    η = η_C  ⟺  z(d) = F_U(d) = F_L(d) ∀ d ∈ R_+
    ζ = ζ_C  ⟺  z(d) = F_U(d) = F_L(d) ∀ d ∈ R_−

[Figure: z(d) with F_U and F_L]

Observations

Let

    d_U^+ = sup{d ≥ 0 : z(d) = F_U(d)},    d_U^− = inf{d ≤ 0 : z(d) = F_U(d)},
    d_L^+ = inf{d > 0 : z(d) = F_L(d)},    d_L^− = sup{d < 0 : z(d) = F_L(d)}.

[Figure: z(d) with F_U, F_L and the points d_U^+, d_U^−, d_L^+, d_L^−]

Observations

    η_C = ∞  ⟹  d_U^+ = 0,  and  ζ_C = −∞  ⟹  d_U^− = 0.
    η < η_C  ⟹  {d ∈ R_+ : z(d) = F_U(d) = F_L(d)} = {0}  ⟹  d_U^+ < ∞.
    ζ > ζ_C  ⟹  {d ∈ R_− : z(d) = F_U(d) = F_L(d)} = {0}  ⟹  d_U^− > −∞.

[Figure: z(d) with F_U, F_L and the points d_U^+, d_U^−, d_L^+, d_L^−]

Observations

    z(d) = F_U(d)  ∀ d ∈ (d_U^−, d_U^+).
    d_L^+ ≥ d_U^+ if d_L^+ > 0, and d_L^− ≤ d_U^− if d_L^− < 0.
    If b ∈ {d ∈ R : z(d) = F_L(d)}, then z(kb) = kF_L(b)  ∀ k ∈ Z_+.

[Figure: z(d) with F_U, F_L and the points d_U^+, d_U^−, d_L^+, d_L^−]

Observations

Notice the relation between F_U and the linear segments of z: each segment has slope η_C or ζ_C.

[Figure: z(d) with F_U and F_L; every linear segment of z is parallel to one piece of F_U]

Redundant Variables

Let T ⊆ C be such that

    t_+ ∈ T if and only if η_C < ∞ and η_C = c_{t_+}/a_{t_+},
    t_− ∈ T if and only if ζ_C > −∞ and ζ_C = c_{t_−}/a_{t_−},

and define

    ν(d) = min  c_I x_I + c_T x_T
           s.t. a_I x_I + a_T x_T = d,
                x_I ∈ Z^I_+, x_T ∈ R^T_+.

Then ν(d) = z(d) for all d ∈ R. The variables in C∖T are redundant: z can be represented with at most 2 continuous variables.

Back to the Jeroslow Formula

Let M ∈ Z_+ be such that, for any t ∈ T, Ma_j/a_t ∈ Z for all j ∈ I. Then there is a Gomory function g such that

    z(d) = min_{t ∈ T} { g(⌊d⌋_t) + (c_t/a_t)(d − ⌊d⌋_t) }   ∀ d ∈ R,

where ⌊d⌋_t = (a_t/M) ⌊Md/a_t⌋.

It is possible to show that such a Gomory function can be obtained from the value function of a related PILP:

    g(q) = min  c_I x_I + (1/M) c_T x_T + z(ϕ)v
           s.t. a_I x_I + (1/M) a_T x_T + ϕv = q,
                x_I ∈ Z^I_+, x_T ∈ Z^T_+, v ∈ Z_+,

for all q ∈ R, where ϕ = Σ_{t ∈ T} a_t.

Piecewise Linearity of the Value Function

For t ∈ T, setting

    ω_t(d) = g(⌊d⌋_t) + (c_t/a_t)(d − ⌊d⌋_t),   d ∈ R,

we can write

    z(d) = min_{t ∈ T} ω_t(d),   d ∈ R.

For t ∈ T, ω_t is piecewise linear with finitely many linear segments on any closed interval, and each of those segments has slope η_C if t = t_+ or ζ_C if t = t_−. Thus z is also piecewise linear with finitely many linear segments on any closed interval, and each of its segments has slope η_C or ζ_C.
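The slope property can be spot-checked numerically. The instance below is again our own toy example, with η_C = 2 and ζ_C = −1; the breakpoints of its z fall at multiples of 1/3, so finite differences on a grid of step 1/6 land inside single linear segments and recover only those two slopes.

```python
import math
from itertools import product

cI, aI = (3, 2), (2, -2)           # integer variables
eta_C, zeta_C = 2.0, -1.0          # best ratios of the continuous variables

def z(d):
    # enumerate integer parts; cover the remainder at the best continuous ratio
    best = math.inf
    for y1, y2 in product(range(9), repeat=2):
        rem = d - aI[0] * y1 - aI[1] * y2
        cost = 0.0 if abs(rem) < 1e-12 else (eta_C * rem if rem > 0 else zeta_C * rem)
        best = min(best, cI[0] * y1 + cI[1] * y2 + cost)
    return best

step = 1 / 6                        # grid aligned with this instance's breakpoints
slopes = {round((z((i + 1) * step) - z(i * step)) / step, 6) for i in range(12)}
assert slopes == {2.0, -1.0}        # only the slopes eta_C and zeta_C appear on [0, 2]
```

The grid alignment matters: a grid that straddles a breakpoint would report a blended slope, not a counterexample to the theorem.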

Structure of Linear Pieces

Theorem. If the value function z is linear over an interval U ⊆ R, then there exists ȳ ∈ Z^I_+ such that ȳ is the integer part of an optimal solution for every d ∈ U. Consequently, for some t ∈ T, z can be written as

    z(d) = c_I ȳ + (c_t/a_t)(d − a_I ȳ)   ∀ d ∈ U.

Furthermore, for any d ∈ U, we have d − a_I ȳ ≥ 0 if t = t_+ and d − a_I ȳ ≤ 0 if t = t_−.

Example (cont'd)

T = {4, 5}, and hence x_6 is redundant.

[Figure: z(d), linear on the intervals U_1 = [0, 1/2], U_2 = [1/2, 1], U_3 = [1, 7/6], U_4 = [7/6, 3/2], ...]

On each U_i the integer part of an optimal solution is fixed (y_1 = (0, 0, 0) on U_1, and so on), and on each U_i the function z is linear with slope η_C or ζ_C.

Continuity

ω_{t_+} is continuous from the right and ω_{t_−} is continuous from the left; both are lower semicontinuous.

Theorem. If z is discontinuous at a right-hand side b ∈ R, then there exists ȳ ∈ Z^I_+ such that b − a_I ȳ = 0.

z is lower semicontinuous. Moreover:
- η_C < ∞ if and only if z is continuous from the right.
- ζ_C > −∞ if and only if z is continuous from the left.
- Both η_C and ζ_C are finite if and only if z is continuous everywhere.

Example

Here η_C = 3/2 and ζ_C = −∞:

    min  x_1 − 3/4 x_2 + 3/4 x_3
    s.t. 5/4 x_1 − x_2 + 1/2 x_3 = b,
         x_1, x_2 ∈ Z_+, x_3 ∈ R_+.

[Figure: z(d), discontinuous at a sequence of points d_i]

For each discontinuity point d_i, we have d_i − (5/4 y_1^i − y_2^i) = 0, and each linear segment has slope η_C = 3/2.

Extending the Value Function

Let f : [0, h] → R, h > 0, be subadditive with f(0) = 0. The maximal subadditive extension of f from [0, h] to R_+ is

    f_S(d) = f(d) if d ∈ [0, h],
             inf_{C ∈ C(d)} Σ_{ρ ∈ C} f(ρ) if d > h,

where C(d) is the set of all finite collections {ρ_1, ..., ρ_R} such that ρ_i ∈ [0, h], i = 1, ..., R, and Σ_{i=1}^R ρ_i = d. Each such collection is called an h-partition of d. A subadditive function f : [h, 0] → R, h < 0, can be extended to R_− similarly.

f_S is subadditive, and if g is any other subadditive extension of f from [0, h] to R_+, then g ≤ f_S (maximality).
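A discretized sketch of the extension (the seed, horizon, and grid resolution are all illustrative choices of ours): on a grid, the infimum over h-partitions is realized by recursively splitting off one piece ρ ∈ (0, h].

```python
from functools import lru_cache

STEP = 0.25
h = 1.0
f = lambda d: d * (2 - d)           # assumed seed: concave on [0, h], f(0)=0, subadditive

steps_h = round(h / STEP)

@lru_cache(maxsize=None)
def f_S(n):                          # n grid steps, i.e. d = n * STEP
    d = n * STEP
    if d <= h:
        return f(d)                  # on [0, h] the extension equals the seed
    # split off one piece rho = r * STEP in (0, h] and extend the rest
    return min(f(r * STEP) + f_S(n - r) for r in range(1, steps_h + 1))

# a concave seed extends by repeating the [0, h] segment: f_S(2) = 2 * f(1)
assert abs(f_S(round(2.0 / STEP)) - 2 * f(1.0)) < 1e-9
```

The memoized recursion is exactly the minimum over grid h-partitions: every finite collection of pieces can be peeled off one element at a time.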

What if we use z as the seed function?

Lemma. We can change the inf to a min: let f : [0, h] → R be defined by f(d) = z(d) for d ∈ [0, h]. Then

    f_S(d) = z(d) if d ∈ [0, h],
             min_{C ∈ C(d)} Σ_{ρ ∈ C} z(ρ) if d > h.

For any h > 0, z(d) ≤ f_S(d) for all d ∈ R_+. Observe that for d ∈ R_+, f_S(d) → z(d) as h → ∞. Is there an h < ∞ such that f_S(d) = z(d) for all d ∈ R_+?

We can recover the value function by extending it from a specific neighborhood of the origin.

Theorem. Let d_r = max{a_i : i ∈ N} and d_l = min{a_i : i ∈ N}, and let f_r and f_l be the maximal subadditive extensions of z from [0, d_r] to R_+ and from [d_l, 0] to R_−, respectively. Let

    F(d) = f_r(d), d ∈ R_+,
           f_l(d), d ∈ R_−.

Then z = F.

Outline of the proof. z ≤ F by construction; z ≥ F because, using MILP duality, F can be shown to be dual feasible.

In other words, the value function is completely encoded by its breakpoints in [d_l, d_r] and the slopes η_C, ζ_C.

The next question is whether we can obtain z(d) for d ∉ [d_l, d_r] from this encoding. Consider extending z from [0, d_r] to R_+:

    z(d) = min_{C ∈ C(d)} Σ_{ρ ∈ C} z(ρ),   d > d_r.

Can we limit |C| for C ∈ C(d)? Yes, using subadditivity.
Can we limit C(d)? Yes, using the breakpoints of z.

Special Case

Theorem. Let d_l, d_r be defined as above. If z is concave on [0, d_r], then for any d ∈ R_+,

    z(d) = kz(d_r) + z(d − kd_r),   kd_r ≤ d < (k+1)d_r, k ∈ Z_+.

Similarly, if z is concave on [d_l, 0], then for any d ∈ R_−,

    z(d) = kz(d_l) + z(d − kd_l),   (k+1)d_l < d ≤ kd_l, k ∈ Z_+.

See the figure.
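The special case is one line of code. The seed below is an assumed concave function on [0, d_r] of our own choosing (not the talk's example); concavity plus z(0) = 0 makes it subadditive on the interval, and the extension simply repeats the segment.

```python
import math

d_r = 1.0
z0 = lambda d: math.sqrt(d)                  # concave on [0, d_r], z0(0) = 0

def z_ext(d):
    """Extension of z0 to R_+ under the concavity special case."""
    k = math.floor(d / d_r)                  # k d_r <= d < (k+1) d_r
    return k * z0(d_r) + z0(d - k * d_r)

assert math.isclose(z_ext(2.25), 2.5)        # 2 * z0(1) + z0(0.25)

# spot-check subadditivity of the extension on a grid
pts = [i / 8 for i in range(0, 33)]
assert all(z_ext(x) + z_ext(y) >= z_ext(x + y) - 1e-9 for x in pts for y in pts)
```

No search over partitions is needed here: concavity guarantees that the greedy partition (as many full copies of d_r as fit, plus one remainder) is optimal.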

Theorem. Let d > d_r and let k_d be the integer such that d ∈ (k_d d_r, (k_d + 1) d_r]. Then

    z(d) = min { Σ_{i=1}^{k_d+1} z(ρ_i) : Σ_{i=1}^{k_d+1} ρ_i = d, ρ_i ∈ [0, d_r], i = 1, ..., k_d + 1 }.

Therefore |C| ≤ k_d + 1 for any C ∈ C(d) that we need to consider. What about C(d)?

Lower Breakpoints

Theorem. Let Ψ be the set of lower breakpoints of z in [0, d_r]. For any d ∈ R_+ ∖ [0, d_r], there is an optimal d_r-partition C ∈ C(d) such that |C ∖ Ψ| ≤ 1. In particular, we only need to consider the collections

    Λ(d) ≡ { H ∪ {µ} : H ∈ C(d − µ), |H| = k_d, H ⊆ Ψ, Σ_{ρ ∈ H} ρ + µ = d, µ ∈ [0, d_r] }.

In other words,

    z(d) = min_{C ∈ Λ(d)} Σ_{ρ ∈ C} z(ρ),   d ∈ R_+ ∖ [0, d_r].

Observe that the set Λ(d) is finite, since Ψ is finite (z has finitely many linear segments on [0, d_r]) and, for each H ∈ Λ(d), µ is uniquely determined.

Example (cont'd)

For the interval [0, d_r], the set Ψ consists of the origin and two further lower breakpoints. For the right-hand side b shown in the figure, there is an optimal d_r-partition C with |C ∖ Ψ| = 1.

[Figure: z(d) with its upper and lower breakpoints marked]

Evaluating the breakpoints in [d_l, d_r]

Note that z overlaps with F_U(d) = η_C d in a right-neighborhood of the origin and with F_U(d) = ζ_C d in a left-neighborhood of the origin.

The slope of each linear segment is either η_C or ζ_C. Furthermore, if both η_C and ζ_C are finite, then the slopes of consecutive linear segments alternate between η_C and ζ_C (by continuity).

For d_1, d_2 ∈ [0, d_r] (or [d_l, 0]), if z(d_1) and z(d_2) lie on the same line of slope η_C (or ζ_C), then z is linear over [d_1, d_2] with that slope (by subadditivity).

With these observations, we can formulate a finite algorithm to evaluate z on [d_l, d_r].

Example (cont'd)

[Figures: four snapshots of the finite algorithm evaluating z in [0, d_r], using the slopes η_C and ζ_C]

Combinatorial Approach

We can formulate the problem of evaluating z(d) for d ∈ R_+ ∖ [0, d_r] as a constrained shortest path problem: among the paths (feasible partitions) of size k_d + 1 with exactly k_d edges (members of each partition) chosen from Ψ, we need to find the minimum-cost path with a total length of d.

Recursive Construction

Let Ψ(p) be the set of lower breakpoints of z in the interval (0, p], p ∈ R_+. Let p := d_r. For any d ∈ (p, 2p], let

    z(d) = min { z(ρ_1) + z(ρ_2) : ρ_1 + ρ_2 = d, ρ_1 ∈ Ψ(p), ρ_2 ∈ (0, p] }.

Then let p := 2p and repeat this step.
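A grid sketch of the doubling step (seed and resolution are our own illustrative choices): values on (p, 2p] are built from pairs ρ_1 + ρ_2 = d with both pieces already known. For simplicity, ρ_1 ranges over all grid points of (0, p] rather than only the breakpoints Ψ(p); that is a superset of the candidates and yields the same minimum.

```python
STEP = 0.25
p0 = 1.0
seed = lambda d: d * (2 - d)        # assumed subadditive seed on [0, p0] (ours)

vals = {n: seed(n * STEP) for n in range(round(p0 / STEP) + 1)}   # z on [0, p0]

p = p0
for _ in range(2):                   # two doubling passes: [0,1] -> [0,2] -> [0,4]
    np_ = round(p / STEP)
    for n in range(np_ + 1, 2 * np_ + 1):
        # d = n * STEP in (p, 2p]: combine two known pieces rho1 + rho2 = d
        vals[n] = min(vals[r] + vals[n - r] for r in range(n - np_, np_ + 1))
    p *= 2

# concave seed => the extension repeats the [0, p0] segment
assert abs(vals[round(2.0 / STEP)] - 2 * seed(1.0)) < 1e-9
```

Each pass only reads values already computed, so the table grows from [0, d_r] outward exactly as the recursion on the slide prescribes.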

We can also do the following:

    z(d) = min_j g_j(d),   d ∈ (p, 2p],

where, for each d_j ∈ Ψ(p), the function g_j : [0, 2p] → R ∪ {∞} is defined by

    g_j(d) = z(d) if d ≤ d_j,
             z(d_j) + z(d − d_j) if d_j < d ≤ p + d_j,
             ∞ otherwise.

Because of subadditivity, we can then write

    z(d) = min_j g_j(d),   d ∈ (0, 2p].

Example (cont'd)

Extending the value function of (6) from [0, 3/2] to [0, 9/4].

[Figures: z(d) with F_U and F_L, and the functions g_1, g_2 used in the extension]

Example (cont'd)

Extending the value function of (6) from [0, 9/4] to [0, 27/8].

[Figures: z(d) with F_U and F_L, and the functions g_1, ..., g_4 used in the extension]

Computational Experiments

- Approximating the value function of a MILP with a single constraint.
- Extending our results to the bounded case.
- Approximating the value function of a general MILP using the value functions of single-constraint relaxations.