Introduction to integer programming II


Introduction to integer programming II
Martin Branda
Charles University in Prague, Faculty of Mathematics and Physics, Department of Probability and Mathematical Statistics
Computational Aspects of Optimization
Martin Branda (KPMS MFF UK), 2017-03-20

Contents
1 Introduction to computational complexity
2 Branch and bound
3 Duality
4 Dynamic programming

Introduction to complexity theory
Wolsey (1998): Consider decision problems having YES/NO answers. The optimization problem max_{x in M} c^T x can be replaced by the decision problem (for some integral k): Is there an x in M with value c^T x >= k?
For a problem instance X, the length of the input L(X) is the length of the binary representation of a standard representation of the instance. Optimization instance X = {c, M}, decision instance X' = {c, M, k}.

Example: knapsack decision problem
For an instance
X = { x in {0,1}^n : sum_{i=1}^n c_i x_i >= k, sum_{i=1}^n a_i x_i <= b },
the length of the input is
L(X) = sum_{i=1}^n log c_i + sum_{i=1}^n log a_i + log b + log k.
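As a small illustration (the exact bit-counting convention is an assumption here: each coefficient v contributes its bit length, i.e. floor(log2 v) + 1, while the slide writes plain logs), L(X) can be computed as:

```python
def input_length(c, a, b, k):
    """Binary input length L(X) of a knapsack decision instance:
    each coefficient v contributes its bit length floor(log2 v) + 1
    (one convention; the slide writes plain logs)."""
    return (sum(ci.bit_length() for ci in c)
            + sum(ai.bit_length() for ai in a)
            + b.bit_length() + k.bit_length())

# e.g. c = (4, 5, 11), a = (4, 6, 7), b = 10, k = 11
print(input_length([4, 5, 11], [4, 6, 7], 10, 11))  # 27
```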

Running time
Definition: f_A(X) is the number of elementary calculations required to run the algorithm A on the instance X of a problem P.
The running time of the algorithm A is
f_A(l) = sup_X {f_A(X) : L(X) = l}.
An algorithm A is polynomial for a problem P if f_A(l) = O(l^p) for some p in N.

Classes NP and P
Definition: NP (Nondeterministic Polynomial) is the class of decision problems with the property that, for any instance for which the answer is YES, there is a proof of the YES answer that can be checked in polynomial time. P is the class of decision problems in NP for which there exists a polynomial algorithm.
NP may be equivalently defined as the set of decision problems that can be solved in polynomial time on a non-deterministic Turing machine. (An NTM writes symbols one at a time on an endless tape, strictly following a set of rules, and determines its next action from its internal state and the symbol it currently sees. Its rules may prescribe more than one action for a given situation; the machine then branches into many copies, each following one of the possible transitions, which leads to a computation tree.)

Alan Turing
The Imitation Game (2014)

Polynomial reduction and the class NPC
Definition: If problems P, Q in NP and an instance of P can be converted in polynomial time to an instance of Q, then P is polynomially reducible to Q. NPC, the class of NP-complete problems, is the subset of problems P in NP such that every Q in NP is polynomially reducible to P.
Proposition: Suppose that problems P, Q in NP. If Q in P and P is polynomially reducible to Q, then P in P. If P in NPC and P is polynomially reducible to Q, then Q in NPC.
Proposition: If some problem in NPC also lies in P, then P = NP.

Open question & Euler diagram
Is P = NP?

NP-hard optimization problems
Definition: An optimization problem whose decision version lies in NPC is called NP-hard.

Simplex algorithm
Klee-Minty (1972) example:
max sum_{j=1}^n 10^{n-j} x_j
s.t. 2 sum_{j=1}^{i-1} 10^{i-j} x_j + x_i <= 100^{i-1}, i = 1,...,n,
     x_j >= 0, j = 1,...,n. (1)
It can easily be reformulated in the standard form. The simplex algorithm takes 2^n - 1 pivot steps on it, i.e. it is not polynomial in the worst case.
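As a sanity check on (1), a minimal Python sketch (the function name is made up for illustration) builds the data in the form max c^T x, A x <= b, x >= 0, and verifies that the vertex x = (0, ..., 0, 100^(n-1)), which is the optimum of this maximization, is feasible and attains the value 100^(n-1):

```python
def klee_minty(n):
    """Constraint data of the Klee-Minty cube (1):
    maximize sum_j 10^(n-j) x_j subject to
    2*sum_{j<i} 10^(i-j) x_j + x_i <= 100^(i-1) and x >= 0."""
    c = [10 ** (n - j) for j in range(1, n + 1)]
    A = [[2 * 10 ** (i - j) if j < i else (1 if j == i else 0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    b = [100 ** (i - 1) for i in range(1, n + 1)]
    return c, A, b

n = 3
c, A, b = klee_minty(n)
x = [0] * (n - 1) + [100 ** (n - 1)]       # the optimal vertex
assert all(sum(Ai[j] * x[j] for j in range(n)) <= bi for Ai, bi in zip(A, b))
assert sum(cj * xj for cj, xj in zip(c, x)) == 100 ** (n - 1)
```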

Contents
1 Introduction to computational complexity
2 Branch and bound
3 Duality
4 Dynamic programming

Basic idea: DIVIDE AND CONQUER. Let M = M_1 ∪ ... ∪ M_r be a partitioning of the feasibility set and let f_j = min_{x in M_j} f(x). Then
min_{x in M} f(x) = min_{j=1,...,r} f_j.

General principles:
- Solve LP problems with relaxed integrality only.
- Branch using additional constraints on the integrality of x_i: x_i <= floor(x_i), x_i >= floor(x_i) + 1.
- Prune unpromising branches before solving them (using bounds on the optimal value).

General principles:
- Solve only LP problems with relaxed integrality.
- Branching: if an optimal solution is not integral in some component x̂_i, create and save two new problems with the constraints x_i <= floor(x̂_i) and x_i >= ceil(x̂_i).
- Bounding (not the same as cutting): save the objective value of the best integral solution found so far and prune all problems in the queue created from problems with higher optimal values, since branching cannot improve them.
- This is an exact algorithm.

P. Pedregal (2004). Introduction to Optimization. Springer-Verlag, New York.

0. Set f_min = ∞, x_min = none, and the list of problems P = ∅. Solve the LP-relaxed problem and obtain f, x. If the solution is integral, STOP. If the problem is infeasible or unbounded, STOP.
1. BRANCHING: there is a basic non-integral variable x_i, i.e. k < x_i < k + 1 for some k in N. Add the constraint x_i <= k to the previous problem and put it into the list P; add the constraint x_i >= k + 1 to the previous problem and put it into the list P.
2. Take a problem from P and solve it, obtaining f, x.
3. If f < f_min and x is non-integral, GO TO 1.
BOUNDING: If f < f_min and x is integral, set f_min = f and x_min = x, GO TO 4.
BOUNDING: If f >= f_min, GO TO 4.
If the problem is infeasible, GO TO 4.
4. If P ≠ ∅, GO TO 2. If P = ∅ and f_min = ∞, no integral solution exists. If P = ∅ and f_min < ∞, the optimal value and solution are f_min, x_min.

Better...
2./3. Take a problem from the list P and solve it, obtaining f, x. If the optimal value of the current problem satisfies f >= f_min, then branching is not necessary: solving the problems with added branching constraints can only increase the optimal value, so we would obtain the same f_min.

Algorithmic issues:
- Problem selection from the list P: FIFO, LIFO (depth-first search), or the problem with the smallest f.
- Selection of the branching variable x_i: the highest/smallest violation of integrality, or the highest/smallest coefficient in the objective function.
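The scheme above can be sketched in plain Python for a two-variable problem. This is a toy illustration, not how real solvers work: the helper solve_lp (a name made up here) solves the tiny LP relaxation by brute-force vertex enumeration, which is only valid because the objective coefficients are positive and x >= 0 is among the constraints, so the minimum is attained at a vertex.

```python
import math
from itertools import combinations

def solve_lp(cons, c):
    """Minimize c.x over {x in R^2 : a.x >= b for (a, b) in cons}
    by enumerating intersection points of constraint-boundary pairs.
    Returns (value, x) or None if the region is empty."""
    best = None
    for (a1, b1), (a2, b2) in combinations(cons, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-9:
            continue                      # parallel boundaries
        x = ((b1 * a2[1] - a1[1] * b2) / det,
             (a1[0] * b2 - b1 * a2[0]) / det)
        if all(a[0] * x[0] + a[1] * x[1] >= b - 1e-7 for a, b in cons):
            val = c[0] * x[0] + c[1] * x[1]
            if best is None or val < best[0]:
                best = (val, x)
    return best

def branch_and_bound(cons, c):
    """LIFO (depth-first) branch and bound over LP relaxations."""
    best_val, best_x = math.inf, None
    queue = [cons]
    while queue:
        cur = queue.pop()                 # LIFO = depth-first search
        sol = solve_lp(cur, c)
        if sol is None or sol[0] >= best_val:
            continue                      # infeasible or bounded out
        val, x = sol
        frac = [i for i in range(2) if abs(x[i] - round(x[i])) > 1e-6]
        if not frac:                      # integral: update the incumbent
            best_val, best_x = val, tuple(round(v) for v in x)
            continue
        i, f = frac[0], math.floor(x[frac[0]])
        lo, hi = [0.0, 0.0], [0.0, 0.0]
        lo[i], hi[i] = -1.0, 1.0
        queue.append(cur + [(tuple(lo), -f)])      # branch x_i <= f
        queue.append(cur + [(tuple(hi), f + 1)])   # branch x_i >= f + 1
    return best_val, best_x

# Example I from the next slide: min 4x1 + 5x2, x1 + 4x2 >= 5, 3x1 + 2x2 >= 7
cons = [((1, 4), 5), ((3, 2), 7), ((1, 0), 0), ((0, 1), 0)]
print(branch_and_bound(cons, (4, 5)))    # (13.0, (2, 1))
```

On Example I this reproduces the tree shown on the following slides: the root relaxation gives (1.8, 0.8) with value 11.2, and branching on x_1 leads to the integral optimum f = 13 at (2, 1).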

B&B Example I
min 4x_1 + 5x_2
s.t. x_1 + 4x_2 >= 5,
     3x_1 + 2x_2 >= 7,
     x_1, x_2 in Z_+.
After two iterations of the dual simplex algorithm:

                  4    5      0       0
                 x_1  x_2    x_3     x_4
5  x_2    8/10    0    1   -3/10    1/10
4  x_1   18/10    1    0    2/10   -4/10
        112/10    0    0   -7/10  -11/10

B&B Example I
Branching means adding a cut of the form x_1 <= 1, i.e. (α = (1, 0, 0, 0, 1), α_B = (1, 0))
x_1 + x_5 = 1, x_5 >= 0.

                  4    5      0       0      0
                 x_1  x_2    x_3     x_4    x_5
5  x_2    8/10    0    1   -3/10    1/10     0
4  x_1   18/10    1    0    2/10   -4/10     0
0  x_5   -8/10    0    0   -2/10    4/10     1
        112/10    0    0   -7/10  -11/10     0

The tableau is dual feasible but primal infeasible, so run the dual simplex...

node 0: x_1 = 1.8, x_2 = 0.8, f̂ = 11.2
  x_1 <= 1: node 1: x_1 = 1, x_2 = 2, f̂ = 14
  x_1 >= 2: node 2: x_1 = 2, x_2 = 0.75, f̂ = 11.75
    x_2 <= 0: node 3: x_1 = 5, x_2 = 0, f̂ = 20
    x_2 >= 1: node 4: x_1 = 2, x_2 = 1, f̂ = 13 (the optimal integral solution)

B&B Example II
max 23x_1 + 19x_2 + 28x_3 + 14x_4 + 44x_5
s.t. 8x_1 + 7x_2 + 11x_3 + 6x_4 + 19x_5 <= 25,
     x_1, x_2, x_3, x_4, x_5 in {0,1}.

B&B Example II
node 0: f̂ = 67.45
  x_3 = 0: node 1: f̂ = 65.26, branching continues...
  x_3 = 1: node 2: f̂ = 67.28
    x_2 = 0: node 3: f̂ = 65, integral
    x_2 = 1: node 4: f̂ = 67.127
      x_1 = 0: node 5: f̂ = 63.32, pruned
      x_1 = 1: node 6: infeasible

Remarks
- If you are able to get a feasible solution quickly, deliver it to the software (solver).
- Branch-and-cut: add cuts at the beginning of / during B&B.
- Algorithm termination: the (relative) difference between a lower and an upper bound; for minimization, the upper bound is constructed from a feasible solution and the lower bound from the relaxations.

Contents
1 Introduction to computational complexity
2 Branch and bound
3 Duality
4 Dynamic programming

Duality
Set S(b) = {x in Z^n_+ : Ax = b} and define the value function
z(b) = min_{x in S(b)} c^T x. (2)
A dual function F : R^m -> R satisfies
F(b) <= z(b) for all b in R^m. (3)
A general form of the dual problem is
max_F {F(b) : F(b') <= z(b') for all b' in R^m, F : R^m -> R}. (4)
We call F a weak dual function if it is feasible, and a strong dual function if moreover F(b) = z(b).

Duality
A function F is subadditive over a domain Θ if
F(θ_1 + θ_2) <= F(θ_1) + F(θ_2)
for all θ_1, θ_2 in Θ with θ_1 + θ_2 in Θ.
The value function z is subadditive over {b : S(b) ≠ ∅}, since the sum of optimal solutions is feasible for the problem with right-hand side b_1 + b_2, i.e. x̂_1 + x̂_2 in S(b_1 + b_2).

Duality
If F is subadditive, then the condition F(Ax) <= c^T x for all x in Z^n_+ is equivalent to
F(a_j) <= c_j, j = 1,...,n.
Indeed, F(Ae_j) <= c^T e_j is the same as F(a_j) <= c_j. Conversely, if F is subadditive and F(a_j) <= c_j, j = 1,...,n, then
F(Ax) <= sum_{j=1}^n F(a_j) x_j <= sum_{j=1}^n c_j x_j = c^T x.

Duality
If we set Γ^m = {F : R^m -> R, F(0) = 0, F subadditive}, then we can write a subadditive dual independent of x:
max_F {F(b) : F(a_j) <= c_j, j = 1,...,n, F in Γ^m}. (5)
Weak and strong duality hold.
An easy feasible solution based on LP duality (a weak dual function):
F_LP(b) = max_y {b^T y : A^T y <= c}. (6)
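A small sketch of how (6) acts as a weak dual function, on a hypothetical one-row instance (an assumption for simplicity: m = 1 with all a_j > 0, so the dual LP max{b y : a_j y <= c_j} has the closed-form solution y = min_j c_j / a_j; the value function z(b) is brute-forced over S(b)):

```python
def z(a, c, b):
    """Value function z(b) = min{c.x : a.x = b, x integer >= 0} (brute force)."""
    best = float('inf')
    def rec(j, rem, cost):
        nonlocal best
        if j == len(a):
            if rem == 0:
                best = min(best, cost)
            return
        for xj in range(rem // a[j] + 1):
            rec(j + 1, rem - a[j] * xj, cost + c[j] * xj)
    rec(0, b, 0)
    return best

def F_LP(a, c, b):
    """LP-duality weak dual function F_LP(b) = max{b*y : a_j*y <= c_j};
    with one row and a_j > 0 the maximum is attained at y = min_j c_j/a_j."""
    return b * min(cj / aj for aj, cj in zip(a, c))

a, c, b = [2, 3], [3, 5], 7
print(F_LP(a, c, b), z(a, c, b))   # 10.5 11  -> F_LP(b) <= z(b) (weak duality)
```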

Duality
Complementary slackness condition: if x̂ is an optimal solution of the IP and F̂ is an optimal subadditive dual solution, then
(F̂(a_j) - c_j) x̂_j = 0, j = 1,...,n.

Contents
1 Introduction to computational complexity
2 Branch and bound
3 Duality
4 Dynamic programming

Knapsack problem
max sum_{i=1}^n c_i x_i
s.t. sum_{i=1}^n a_i x_i <= b,
     x_i in {0,1}.

Dynamic programming
Let a_i, b be positive integers and define
f_r(λ) = max sum_{i=1}^r c_i x_i
         s.t. sum_{i=1}^r a_i x_i <= λ,
              x_i in {0,1}.
If x̂_r = 0 in an optimal solution, then f_r(λ) = f_{r-1}(λ); if x̂_r = 1, then f_r(λ) = c_r + f_{r-1}(λ - a_r). Thus we arrive at the recursion
f_r(λ) = max{f_{r-1}(λ), c_r + f_{r-1}(λ - a_r)}.

Dynamic programming
0. Start with f_1(λ) = 0 for 0 <= λ < a_1 and f_1(λ) = max{0, c_1} for λ >= a_1.
1. Use the forward recursion f_r(λ) = max{f_{r-1}(λ), c_r + f_{r-1}(λ - a_r)} to successively calculate f_2, ..., f_n for all λ in {0, 1, ..., b}; f_n(b) is the optimal value.
2. Keep the indicator p_r(λ) = 0 if f_r(λ) = f_{r-1}(λ), and p_r(λ) = 1 otherwise.
3. Obtain an optimal solution by a backward recursion: if p_n(b) = 0, set x̂_n = 0 and continue with f_{n-1}(b); else (if p_n(b) = 1) set x̂_n = 1 and continue with f_{n-1}(b - a_n), and so on.

Knapsack problem
Weights a_1 = 4, a_2 = 6, a_3 = 7, costs c_1 = 4, c_2 = 5, c_3 = 11, budget b = 10:
max sum_{i=1}^3 c_i x_i
s.t. sum_{i=1}^3 a_i x_i <= 10,
     x_i in {0,1}.

Knapsack problem: dynamic programming
a_1 = 4, a_2 = 6, a_3 = 7, c_1 = 4, c_2 = 5, c_3 = 11

 r \ λ    0  1  2  3  4  5  6   7   8   9  10
 f_1      0  0  0  0  4  4  4   4   4   4   4
 f_2      0  0  0  0  4  4  5   5   5   5   9
 f_3      0  0  0  0  4  4  5  11  11  11  11
 p_1      0  0  0  0  1  1  1   1   1   1   1
 p_2      0  0  0  0  0  0  1   1   1   1   1
 p_3      0  0  0  0  0  0  0   1   1   1   1
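The forward recursion, the indicators p_r, and the backward pass can be sketched as follows (a minimal illustration; on the instance above it reproduces the table, with optimal value f_3(10) = 11 at x = (0, 0, 1)):

```python
def knapsack_dp(a, c, b):
    """0-1 knapsack via f_r(lam) = max(f_{r-1}(lam), c_r + f_{r-1}(lam - a_r)),
    with indicators p_r(lam) and a backward pass to recover a solution."""
    n = len(a)
    f = [[0] * (b + 1) for _ in range(n)]
    p = [[0] * (b + 1) for _ in range(n)]
    for lam in range(b + 1):                 # stage r = 1
        if lam >= a[0] and c[0] > 0:
            f[0][lam] = c[0]
            p[0][lam] = 1
    for r in range(1, n):                    # forward recursion, stages 2..n
        for lam in range(b + 1):
            keep = f[r - 1][lam]
            take = c[r] + f[r - 1][lam - a[r]] if lam >= a[r] else None
            if take is not None and take > keep:
                f[r][lam], p[r][lam] = take, 1
            else:
                f[r][lam] = keep
    x, lam = [0] * n, b                      # backward recovery of x
    for r in range(n - 1, -1, -1):
        if p[r][lam]:
            x[r] = 1
            lam -= a[r]
    return f[n - 1][b], x

print(knapsack_dp([4, 6, 7], [4, 5, 11], 10))   # (11, [0, 0, 1])
```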

Dynamic programming
Other successful applications:
- uncapacitated lot-sizing problem
- shortest path problem
- ...

Literature
V. Klee, G.J. Minty (1972). How good is the simplex algorithm? In: O. Shisha (ed.), Inequalities III (Proceedings of the Third Symposium on Inequalities, University of California, Los Angeles, September 1-9, 1969). Academic Press, New York-London, 159-175.
G.L. Nemhauser, L.A. Wolsey (1989). Integer Programming. Chapter VI in: G.L. Nemhauser et al. (eds.), Handbooks in OR & MS, Vol. 1.
L.A. Wolsey (1998). Integer Programming. Wiley, New York.
L.A. Wolsey, G.L. Nemhauser (1999). Integer and Combinatorial Optimization. Wiley, New York.
Northwestern University Open Text Book on Process Optimization, available online: https://optimization.mccormick.northwestern.edu [2017-03-19]