Lecture 8: Column Generation (3 units)

Outline:
- Cutting stock problem
- Classical IP formulation
- Set covering formulation
- Column generation
- A dual perspective

Cutting stock problem

Problem description

A paper mill has a number of large rolls of paper of fixed width. Different customers want different numbers of rolls of various smaller widths. How should the large rolls be cut so that the waste (the amount of leftover paper) is minimized?

An example

The width of the large rolls: 5600 mm. The widths and demands of the customers:

Width:  1380 1520 1560 1710 1820 1880 1930 2000 2050 2100 2140 2150 2200
Demand:   22   25   12   14   18   18   20   10   12   14   16   18   20

An optimal solution: (figure omitted in this transcription)

Classical IP formulation

Suppose the fixed width of the large rolls is $W$. The $m$ customers want $n_i$ rolls of width $w_i$ ($i = 1, \dots, m$), with $w_i \le W$.

Notation:
- $K$: index set of available rolls
- $y_k = 1$ if roll $k$ is cut, $0$ otherwise
- $x_i^k$: number of times item $i$ is cut from roll $k$

The IP formulation of Kantorovich:

$$
(P_1)\quad \begin{aligned}
\min\ & \sum_{k \in K} y_k \\
\text{s.t.}\ & \sum_{k \in K} x_i^k \ge n_i, && i = 1, \dots, m, \quad \text{(demand)} \\
& \sum_{i=1}^m w_i x_i^k \le W y_k, && k \in K, \quad \text{(width limitation)} \\
& x_i^k \in \mathbb{Z}_+,\ y_k \in \{0,1\}.
\end{aligned}
$$

However, the IP formulation $(P_1)$ is inefficient from both a computational and a theoretical point of view. For example, when the number of rolls is 100 and the number of items is 20, the problem could not be solved to optimality within days (CPLEX). The main reason is that the linear programming (LP) relaxation of $(P_1)$ is poor. Actually, the LP bound of $(P_1)$ is

$$
Z_{LP} = \sum_{k \in K} y_k = \sum_{k \in K} \sum_{i=1}^m \frac{w_i x_i^k}{W} = \sum_{i=1}^m \frac{w_i}{W} \sum_{k \in K} x_i^k = \frac{\sum_{i=1}^m w_i n_i}{W},
$$

where the first and last equalities hold because, at an LP optimum, the width and demand constraints are tight.

Question: Is there an alternative IP formulation?
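As an aside, here is the bound computed concretely (my own arithmetic, using the steel-rod data that appears in the example later in this lecture: $W = 218$ and demands $44, 3, 48$ for widths $81, 70, 68$):

$$
Z_{LP} = \frac{44 \cdot 81 + 3 \cdot 70 + 48 \cdot 68}{218} = \frac{7038}{218} \approx 32.3,
$$

so the LP relaxation of $(P_1)$ certifies only that at least 33 rolls are needed.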

Set covering formulation of Gilmore and Gomory

Let
- $x_j$ = number of times pattern $j$ is used,
- $a_{ij}$ = number of times item $i$ is cut in pattern $j$.

For example, suppose the fixed width of the large rolls is $W = 100$ and the demands are $n_i = 100, 200, 300$ for widths $w_i = 25, 35, 45$ ($i = 1, 2, 3$). A large roll can, for instance, be cut into
- Pattern 1: 4 rolls of width $w_1 = 25$, so $a_{11} = 4$;
- Pattern 2: 1 roll of width $w_1 = 25$ and 2 rolls of width $w_2 = 35$, so $a_{12} = 1$, $a_{22} = 2$;
- Pattern 3: 2 rolls of width $w_3 = 45$, so $a_{33} = 2$.
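As a side illustration (not part of the slides), the following Python sketch enumerates the feasible cutting patterns of this toy instance by brute force; the function name enumerate_patterns and the restriction to maximal patterns (patterns to which no further item fits) are my own choices.

```python
from itertools import product

def enumerate_patterns(widths, W, maximal_only=True):
    """Enumerate integer vectors a with sum(widths[i] * a[i]) <= W.

    If maximal_only is True, keep only patterns to which no further item
    can be added without exceeding the roll width W.
    """
    ranges = [range(W // w + 1) for w in widths]
    patterns = []
    for a in product(*ranges):
        used = sum(w * count for w, count in zip(widths, a))
        if used > W:
            continue
        if maximal_only and any(used + w <= W for w in widths):
            continue  # some item still fits, so this pattern is dominated
        patterns.append(a)
    return patterns

# Toy instance from the slide: W = 100, widths 25, 35, 45.
print(enumerate_patterns([25, 35, 45], 100))
# -> [(0, 0, 2), (0, 1, 1), (1, 2, 0), (2, 0, 1), (2, 1, 0), (4, 0, 0)]
```

The three patterns listed on the slide, (4, 0, 0), (1, 2, 0) and (0, 0, 2), all appear among the six maximal patterns. Brute-force enumeration like this is only viable for tiny instances, which is exactly why the pricing subproblem introduced below is needed.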

Set covering formulation:

$$
(P_2)\quad \begin{aligned}
\min\ & \sum_{j=1}^n x_j \\
\text{s.t.}\ & \sum_{j=1}^n a_{ij} x_j \ge n_i, && i = 1, \dots, m, \quad \text{(demand)} \\
& x_j \in \mathbb{Z}_+, && j = 1, \dots, n,
\end{aligned}
$$

where $n$ is the total number of cutting patterns, i.e. of columns $(a_{1j}, \dots, a_{mj})^T$ satisfying $\sum_{i=1}^m w_i a_{ij} \le W$. Each column of $(P_2)$ represents a cutting pattern. How many columns (cutting patterns) are there? It could be as many as $\binom{m}{k} = \frac{m!}{k!\,(m-k)!}$, where $k$ is the average number of items in a cutting pattern. Exponentially large!

Let us first consider the LP relaxation of $(P_2)$:

$$
(\text{LPM})\quad \begin{aligned}
\min\ & \sum_{j=1}^n x_j \\
\text{s.t.}\ & \sum_{j=1}^n a_{ij} x_j \ge n_i, && i = 1, \dots, m, \quad \text{(demand)} \\
& x_j \in \mathbb{R}_+, && j = 1, \dots, n,
\end{aligned}
$$

which is called the Linear Programming Master Problem. The solution of (LPM) may be fractional; it is possible to round the fractional solution up to obtain a feasible solution of $(P_2)$.

The simplex algorithm applied to the master LP:
- Select an initial basic feasible solution.
- Find the variable with the most negative reduced cost (the entering basic variable).
- Find the variable it replaces (which becomes non-basic).
- Repeat until no variable with negative reduced cost exists.

Consider the standard-form LP and its dual:

$$
\begin{aligned}
\min\ & c^T x &\qquad\qquad \max\ & b^T \pi \\
\text{s.t.}\ & Ax = b &\qquad\qquad \text{s.t.}\ & A^T \pi \le c \\
& x \ge 0
\end{aligned}
$$

Simplex tableau, before and after pricing out the basis $B$:

      x_B       x_N                        rhs
      ---------------------------------------------------
      B         N                          b
      c_B^T     c_N^T                      0

      x_B       x_N                        rhs
      ---------------------------------------------------
      I         B^{-1}N                    B^{-1}b
      0         c_N^T - c_B^T B^{-1}N      -c_B^T B^{-1}b

The reduced cost of a non-basic variable is $\bar c_j = c_j - c_B^T B^{-1} a_j$, where $a_j$ is the corresponding column of $A$. If $c_j - c_B^T B^{-1} a_j < 0$ for some $j$, the current basis can be improved; otherwise, if $c_j - c_B^T B^{-1} a_j \ge 0$ for all $j \in N$, the current solution is optimal and $\pi^T = c_B^T B^{-1}$ is dual feasible (optimal).
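As a small numeric illustration of the pricing formula (my own example; the two-constraint data below are made up and not from the lecture), one can compute $\pi^T = c_B^T B^{-1}$ and the reduced costs with NumPy:

```python
import numpy as np

# Made-up standard-form LP (not from the lecture): min c^T x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-2.0, -3.0, 0.0, 0.0])

basis, nonbasic = [2, 3], [0, 1]                # start from the all-slack basis
B = A[:, basis]
pi = np.linalg.solve(B.T, c[basis])             # simplex multipliers: pi^T = c_B^T B^{-1}
reduced = c[nonbasic] - A[:, nonbasic].T @ pi   # reduced costs c_j - pi^T a_j

print(pi)        # [0. 0.]
print(reduced)   # [-2. -3.]: x_2 (cost -3) has the most negative reduced cost and enters
```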

What if we have extremely many variables?

Even if we had a way of generating all the columns (cutting patterns), the standard simplex algorithm would need to calculate the reduced cost of every non-basic variable, which is impossible when $n$ is huge (out of memory).

A crucial insight: the number of non-zero variables (the basic variables) equals the number of constraints, so even though the number of possible variables (columns) may be huge, we only need a small subset of these columns in the optimal solution. In the cutting stock problem, the number of items is usually much smaller than the number of cutting patterns.

The main idea of column generation is to start with a small subset $P \subseteq \{1, \dots, n\}$ of the columns such that the following restricted master problem is feasible:

$$
(\text{RLPM})\quad \begin{aligned}
\min\ & \sum_{j \in P} x_j \\
\text{s.t.}\ & \sum_{j \in P} a_{ij} x_j \ge n_i, && i = 1, \dots, m, \\
& x_j \ge 0, && j \in P.
\end{aligned}
$$

Recall that the dual problem of (LPM) is

$$
\begin{aligned}
\max\ & \sum_{i=1}^m n_i \pi_i \\
\text{s.t.}\ & \sum_{i=1}^m a_{ij} \pi_i \le 1, && j = 1, \dots, n, \\
& \pi_i \ge 0, && i = 1, \dots, m.
\end{aligned}
$$

Generating columns

Our next task is to find a column (cutting pattern) in $\{1, \dots, n\} \setminus P$ that could improve the current optimal solution of the linear relaxation (RLPM). Given the optimal dual solution $\pi^*$ of (RLPM), the reduced cost of column $j \in \{1, \dots, n\} \setminus P$ is

$$ 1 - \sum_{i=1}^m a_{ij} \pi_i^*. $$

A naive way of finding a new column is to compute

$$ \min \Big\{\, 1 - \sum_{i=1}^m a_{ij} \pi_i^* \ :\ j \in \{1, \dots, n\} \setminus P \,\Big\}, $$

which is impractical because we cannot list all cutting patterns in real applications.

Knapsack subproblem

Instead, we look for a column (cutting pattern) with minimum reduced cost:

$$ \min_y\ 1 - \sum_{i=1}^m \pi_i^* y_i \quad\Longleftrightarrow\quad \max\ \sum_{i=1}^m \pi_i^* y_i \quad \text{s.t.}\ \sum_{i=1}^m w_i y_i \le W,\ y_i \in \mathbb{Z}_+,\ i = 1, \dots, m, $$

where $y = (y_1, \dots, y_m)^T$ represents a column $(a_{1j}, \dots, a_{mj})^T$ (a cutting pattern), and the constraints $\sum_{i=1}^m w_i y_i \le W$, $y \in \mathbb{Z}_+^m$, enforce that $y$ satisfies the conditions for a feasible cutting pattern. This is a Knapsack Problem (an "easy" NP-hard problem): it can be solved in $O(mW)$ pseudo-polynomial time by dynamic programming.
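Here is a minimal sketch of this dynamic program in Python (my own illustration; the function name solve_pricing_knapsack and the backtracking bookkeeping are not from the lecture). It assumes the widths and $W$ are positive integers, and it is run on the initial pricing problem of the steel-rod example from the slides that follow.

```python
def solve_pricing_knapsack(profits, widths, W):
    """Unbounded integer knapsack solved by an O(m*W) dynamic program.

    max sum(profits[i] * y[i])  s.t.  sum(widths[i] * y[i]) <= W,  y[i] integer >= 0.
    Returns (optimal value, one optimal pattern y); widths and W must be positive integers.
    """
    m = len(widths)
    best = [0.0] * (W + 1)    # best[c]: max profit achievable with capacity c
    choice = [-1] * (W + 1)   # last item added at capacity c (-1: none)
    for c in range(1, W + 1):
        best[c], choice[c] = best[c - 1], -1      # option: waste one unit of width
        for i in range(m):
            if widths[i] <= c and best[c - widths[i]] + profits[i] > best[c]:
                best[c], choice[c] = best[c - widths[i]] + profits[i], i
    # Backtrack to recover one optimal pattern.
    y, c = [0] * m, W
    while c > 0:
        if choice[c] == -1:
            c -= 1
        else:
            y[choice[c]] += 1
            c -= widths[choice[c]]
    return best[W], y

# Initial pricing problem of the steel-rod example: duals pi = (1, 1, 1).
print(solve_pricing_knapsack([1.0, 1.0, 1.0], [81, 70, 68], 218))
# -> (3.0, [0, 0, 3]); reduced cost kappa = 1 - 3 = -2 < 0, so this column is added.
```

The table best has $W + 1$ entries and each entry is updated by scanning the $m$ items once, which gives the $O(mW)$ running time stated above.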

Column generation algorithm

The column generation algorithm:
1. Start with an initial set of columns $A$ of (LPM). For instance, use the simple patterns that cut a roll into $\lfloor W / w_i \rfloor$ rolls of width $w_i$, so that $A$ is a diagonal matrix.
2. Repeat:
   (a) Solve the restricted LP master problem (RLPM); let $\pi^*$ be the optimal multipliers ($\pi^{*T} = c_B^T B^{-1}$).
   (b) Identify a new column by solving the knapsack subproblem; let $\kappa$ be the resulting minimum reduced cost.
   (c) Add the new column to the master problem (RLPM),
   until $\kappa \ge 0$.
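To make the loop concrete, here is a compact end-to-end sketch in Python (my own illustration, not from the lecture). It assumes every item width is a positive integer no larger than $W$, repeats the dynamic-programming pricing routine from the previous sketch so that the block runs on its own, and uses scipy.optimize.linprog (HiGHS) for the LPs. As a design choice, it obtains the multipliers $\pi^*$ by solving the dual of the restricted master explicitly rather than reading dual values from the solver; the names pricing_knapsack and column_generation are mine.

```python
import numpy as np
from scipy.optimize import linprog

def pricing_knapsack(pi, widths, W):
    """O(m*W) unbounded-knapsack pricing: max pi.y  s.t.  widths.y <= W, y integer >= 0."""
    m = len(widths)
    best, choice = [0.0] * (W + 1), [-1] * (W + 1)
    for c in range(1, W + 1):
        best[c], choice[c] = best[c - 1], -1
        for i in range(m):
            if widths[i] <= c and best[c - widths[i]] + pi[i] > best[c]:
                best[c], choice[c] = best[c - widths[i]] + pi[i], i
    y, c = [0] * m, W
    while c > 0:
        if choice[c] == -1:
            c -= 1
        else:
            y[choice[c]] += 1
            c -= widths[choice[c]]
    return best[W], y

def column_generation(widths, demands, W, tol=1e-9):
    """Delayed column generation for the LP master problem (LPM) of the cutting stock problem."""
    m = len(widths)
    # Initial columns: cut each roll into floor(W / w_i) pieces of a single width (diagonal A).
    columns = [[W // widths[i] if r == i else 0 for r in range(m)] for i in range(m)]
    while True:
        # Step (a): solve the restricted master. Here we solve its dual,
        #   max n.pi  s.t.  a_j.pi <= 1 (j in P),  pi >= 0,
        # so the sketch does not depend on how a particular solver reports dual values.
        res = linprog(c=-np.array(demands, dtype=float),
                      A_ub=np.array(columns, dtype=float), b_ub=np.ones(len(columns)),
                      bounds=[(0, None)] * m, method="highs")
        pi = res.x
        # Step (b): pricing -- best new pattern and its reduced cost kappa.
        value, pattern = pricing_knapsack(pi, widths, W)
        kappa = 1.0 - value
        if kappa >= -tol:
            break                    # no improving column: (LPM) is solved
        columns.append(pattern)      # Step (c): add the new column and repeat
    # Recover the primal pattern usages x_j from the final restricted master.
    A = np.array(columns, dtype=float).T
    res = linprog(c=np.ones(len(columns)), A_ub=-A, b_ub=-np.array(demands, dtype=float),
                  bounds=[(0, None)] * len(columns), method="highs")
    return columns, res.x, res.fun

# Steel-rod instance from the example below: W = 218, widths 81/70/68, demands 44/3/48.
patterns, x, lp_value = column_generation([81, 70, 68], [44, 3, 48], 218)
print(lp_value)   # optimal value of (LPM), a lower bound on the number of rods needed
```

In a real implementation one would read $\pi^*$ directly from the LP solver's dual values and warm-start the restricted master between iterations. The loop terminates because a pattern with $\kappa < 0$ cannot already be in the restricted master (its dual constraint would otherwise be satisfied), so each iteration adds a genuinely new column from a finite set.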

An example

A steel company wants to cut steel rods of width 218 cm. The customers want 44 pieces of width 81 cm, 3 pieces of width 70 cm, and 48 pieces of width 68 cm.

Initial master problem (one item in each large rod):

$$ \min\ x_1 + x_2 + x_3 \quad \text{s.t.}\quad x_1 \ge 44,\ x_2 \ge 3,\ x_3 \ge 48,\ x \ge 0. $$

Optimal multipliers: $\pi^* = (1, 1, 1)^T$.

Initial knapsack subproblem:

$$ \max\ y_1 + y_2 + y_3 \quad \text{s.t.}\quad 81 y_1 + 70 y_2 + 68 y_3 \le 218,\ y \in \mathbb{Z}_+^3. $$

Optimal solution of the initial knapsack subproblem: $y = (0, 0, 3)^T$ with $\kappa = 1 - 3 = -2 < 0$.

Second master problem:

$$ \min\ x_1 + x_2 + x_3 + x_4 \quad \text{s.t.}\quad x_1 \ge 44,\ x_2 \ge 3,\ x_3 + 3 x_4 \ge 48,\ x \ge 0. $$

Optimal multipliers: $\pi^* = (1.0,\ 1.0,\ 0.33)^T$ (the last component is exactly $1/3$).

Second knapsack subproblem:

$$ \max\ y_1 + y_2 + \tfrac{1}{3} y_3 \quad \text{s.t.}\quad 81 y_1 + 70 y_2 + 68 y_3 \le 218,\ y \in \mathbb{Z}_+^3. $$

Optimal solution: $y = (0, 3, 0)^T$ with $\kappa = 1 - 3 = -2 < 0$. Continue...

Getting integer solutions

After solving the LP master problem, we still have to find an integer solution to the original problem $(P_2)$.

Let $\bar{x}_j = \lceil x_j^* \rceil$. Then $\bar{x}_j$ is integral and $\sum_{j=1}^n a_{ij} \bar{x}_j \ge \sum_{j=1}^n a_{ij} x_j^* \ge n_i$, so the rounded-up solution is feasible for $(P_2)$.

We can also use a branch-and-bound framework to find an optimal integral solution: the Branch-and-Price algorithm.

A dual perspective

Consider the classical formulation of the cutting stock problem:

$$
(P_1)\quad \begin{aligned}
\min\ & \sum_{k \in K} y_k \\
\text{s.t.}\ & \sum_{k \in K} x_i^k \ge n_i, && i = 1, \dots, m, \quad \text{(demand)} \\
& \sum_{i=1}^m w_i x_i^k \le W y_k, && k \in K, \quad \text{(width limitation)} \\
& x_i^k \in \mathbb{Z}_+,\ y_k \in \{0,1\}.
\end{aligned}
$$

How do we get a Lagrangian relaxation? Observe that if the demand constraints are removed, the problem decomposes into $|K|$ knapsack problems!

For $u_i \ge 0$, $i = 1, \dots, m$, the Lagrangian function is

$$
L(u) = \min \Big\{ \sum_{k \in K} y_k + \sum_{i=1}^m u_i \Big( n_i - \sum_{k \in K} x_i^k \Big) \ :\ \sum_{i=1}^m w_i x_i^k \le W y_k,\ x_i^k \in \mathbb{Z}_+,\ y_k \in \{0,1\},\ k \in K \Big\}.
$$

So

$$ L(u) = \sum_{k \in K} L^k(u) + \sum_{i=1}^m n_i u_i, $$

where

$$ L^k(u) = \min \Big\{ y_k - \sum_{i=1}^m u_i x_i^k \ :\ \sum_{i=1}^m w_i x_i^k \le W y_k,\ x_i^k \in \mathbb{Z}_+,\ y_k \in \{0,1\} \Big\}. $$

For a single roll we have

$$ L^s(u) = \min(0,\ 1 - z^*), \quad \text{where}\quad z^* = \max \Big\{ \sum_{i=1}^m u_i x_i \ :\ \sum_{i=1}^m w_i x_i \le W,\ x_i \in \mathbb{Z}_+ \Big\} $$

(either the roll is not cut, $y = 0$ and $x = 0$, giving value $0$, or it is cut, $y = 1$, giving value $1 - z^*$). This is a knapsack problem. Notice that $L^s(u)$ is independent of $k$, so

$$ L(u) = K\, L^s(u) + \sum_{i=1}^m n_i u_i, $$

where, with a slight abuse of notation, $K = |K|$ denotes the number of available rolls.

Theorem. The Lagrangian dual satisfies $\max_{u \ge 0} L(u) = v(\text{LPM})$.

Proof. Let $z_j = (a_{1j}, \dots, a_{mj})^T$, $j = 1, \dots, n$, be the extreme points of the convex hull of the integer set $X = \{x \in \mathbb{Z}_+^m : \sum_{i=1}^m w_i x_i \le W\}$. Then

$$ L^s(u) = \min\Big(0,\ 1 - \max_{j=1,\dots,n} \sum_{i=1}^m a_{ij} u_i\Big) = \min_{j=1,\dots,n} \min\Big(0,\ 1 - \sum_{i=1}^m a_{ij} u_i\Big), $$

so that

$$ \max_{u \ge 0} L(u) = \max_{u \ge 0} \Big[ \min_{j=1,\dots,n} K \min\Big(0,\ 1 - \sum_{i=1}^m a_{ij} u_i\Big) + \sum_{i=1}^m n_i u_i \Big]. $$

This can be rewritten as the linear program

$$
\begin{aligned}
\max\ & z \\
\text{s.t.}\ & z \le \sum_{i=1}^m n_i u_i, \\
& z \le K\Big(1 - \sum_{i=1}^m a_{ij} u_i\Big) + \sum_{i=1}^m n_i u_i, && j = 1, \dots, n, \\
& u_i \ge 0, && i = 1, \dots, m.
\end{aligned}
$$

The dual of the above problem is

$$
\begin{aligned}
\min\ & \sum_{j=1}^n K \lambda_j \\
\text{s.t.}\ & \lambda_0 + \sum_{j=1}^n \lambda_j = 1, \\
& -n_i \Big(\lambda_0 + \sum_{j=1}^n \lambda_j\Big) + \sum_{j=1}^n a_{ij} K \lambda_j \ge 0, && i = 1, \dots, m, \\
& \lambda_j \ge 0, && j = 0, 1, \dots, n.
\end{aligned}
$$

Notice that, using the first constraint, $\lambda_0$ can be eliminated from the second set of constraints, which become $\sum_{j=1}^n a_{ij} K \lambda_j \ge n_i$, $i = 1, \dots, m$. The normalization constraint $\lambda_0 + \sum_{j=1}^n \lambda_j = 1$ is then redundant (take $\lambda_0 = 1 - \sum_j \lambda_j \ge 0$) provided there is $(\lambda_1, \dots, \lambda_n) \ge 0$ satisfying

$$ \sum_{j=1}^n a_{ij} K \lambda_j \ge n_i, \quad i = 1, \dots, m, \qquad \text{and} \qquad \sum_{j=1}^n \lambda_j \le 1. $$

This is always possible when $K$ is large (enough rolls). Now, letting $x_j = K \lambda_j$, we obtain

$$
\begin{aligned}
\min\ & \sum_{j=1}^n x_j \\
\text{s.t.}\ & \sum_{j=1}^n a_{ij} x_j \ge n_i, && i = 1, \dots, m, \\
& x_j \ge 0, && j = 1, \dots, n.
\end{aligned}
$$

This is exactly the LP relaxation (LPM).
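To see the theorem in action, here is a short verification sketch (my own, not from the lecture). It takes the toy instance from the set covering slide ($W = 100$, widths 25/35/45, demands 100/200/300), enumerates all feasible patterns, solves (LPM) with scipy.optimize.linprog, and solves the reformulated Lagrangian dual LP above with an assumed roll supply of K = 1000; the two optimal values agree up to solver tolerance.

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Toy instance from the set covering slide: W = 100, widths 25/35/45, demands 100/200/300.
W, widths, demands = 100, [25, 35, 45], [100, 200, 300]
K = 1000   # assumed number of available rolls; any K large enough to meet demand works

# Enumerate all feasible cutting patterns a with sum(w_i * a_i) <= W (the columns of (P2)).
patterns = [a for a in product(*(range(W // w + 1) for w in widths))
            if sum(w * count for w, count in zip(widths, a)) <= W]
A = np.array(patterns).T                    # rows: items i, columns: patterns j
m, n = A.shape

# v(LPM): LP relaxation of the set covering master problem.
lpm = linprog(c=np.ones(n), A_ub=-A, b_ub=-np.array(demands),
              bounds=[(0, None)] * n, method="highs")

# Lagrangian dual written as the LP above:
#   max z  s.t.  z <= n.u,  z <= K(1 - a_j.u) + n.u for all j,  u >= 0.
# Decision variables (z, u_1, ..., u_m); linprog minimizes, so the objective is -z.
A_ub = np.zeros((n + 1, m + 1))
b_ub = np.zeros(n + 1)
A_ub[0, 0], A_ub[0, 1:] = 1.0, -np.array(demands)    # z - n.u <= 0
A_ub[1:, 0] = 1.0
A_ub[1:, 1:] = K * A.T - np.array(demands)           # z + (K a_j - n).u <= K
b_ub[1:] = K
lagr = linprog(c=np.concatenate(([-1.0], np.zeros(m))), A_ub=A_ub, b_ub=b_ub,
               bounds=[(None, None)] + [(0, None)] * m, method="highs")

print(lpm.fun, -lagr.fun)   # the two optimal values coincide (up to solver tolerance)
```

Note that K only needs to be large enough for the demand to be satisfiable at all, which is exactly the condition used when eliminating the normalization constraint in the proof.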

Notice that λ 0 can be eliminated from the second constraint. So the constraint λ 0 + n j=1 λ j = 1 is redundant provided that there is (λ 1,..., λ n ) satisfying n a ij Kλ j n i, i = 1,..., m, and j=1 n λ j 1. j=1 This is always possible when K is large (enough rolls). Now, let x j = Kλ j, we obtain min s.t. n j=1 x j n a ij x j n i, i = 1,..., m, j=1 x j 0, j = 0, 1,..., n. This is exactly the LP relaxation (LPM). 24 / 24