DMP204 SCHEDULING, TIMETABLING AND ROUTING. Lecture 12: Single Machine Models, Column Generation

Outline

1. Lagrangian Relaxation
2. Dantzig-Wolfe Decomposition
   - Dantzig-Wolfe Decomposition
   - Delayed Column Generation
3. Single Machine Models

Marco Chiarandini. Slides from David Pisinger's lectures at DIKU.

Relaxation

In branch and bound we find upper bounds by relaxing the problem: a relaxation of max_{s ∈ S} f(s) is a problem max_{s ∈ P} g(s) such that

    max_{s ∈ P} g(s)  ≥  max_{s ∈ S} f(s)

where P is the set of candidate solutions, S ⊆ P the feasible solutions, and g(x) ≥ f(x) on S.

Which constraints should be relaxed?
- Quality of the bound (tightness of the relaxation)
- The remaining problem can be solved efficiently
- Proper multipliers can be found efficiently
- Constraints that are difficult to formulate mathematically
- Constraints that are too expensive to write up
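As a concrete instance of this definition (a made-up two-variable example, not from the slides): take S = {0,1}^2 with its LP relaxation P = [0,1]^2 and g = f, so the bound comes simply from optimizing the same objective over a larger set.

```latex
% Example: the LP relaxation as a relaxation in the sense above.
% S = \{0,1\}^2, P = [0,1]^2 \supseteq S, g = f.
\[
  z_{IP} = \max\{ 4x_1 + 3x_2 : 2x_1 + x_2 \le 2,\ x \in \{0,1\}^2 \} = 4,
\]
\[
  z_{LP} = \max\{ 4x_1 + 3x_2 : 2x_1 + x_2 \le 2,\ x \in [0,1]^2 \} = 5 \ \ge\ z_{IP}.
\]
```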

Tightness of relaxation

Consider the integer program

    max cx   s.t.  Ax ≤ b,  Dx ≤ d,  x ∈ Z^n_+

Different relaxations: deleting constraints, LP relaxation, Lagrangian relaxation, surrogate relaxation, semidefinite relaxation. Relaxations are often used in combination. In terms of tightness (for a maximization problem, a smaller bound is tighter):

    best surrogate relaxation  ≤  best Lagrangian relaxation  ≤  LP relaxation

Relaxing the constraints Dx ≤ d:

    LP-relaxation:           max { cx : Ax ≤ b, Dx ≤ d, x ≥ 0 }
    Lagrangian relaxation:   max z_LR(λ) = cx − λ(Dx − d)   s.t. Ax ≤ b, x ∈ Z^n_+
    Best Lagrangian bound:   max { cx : Dx ≤ d, x ∈ conv(Ax ≤ b, x ∈ Z^n_+) }

while the integer optimum itself equals max { cx : x ∈ conv(Ax ≤ b, Dx ≤ d, x ∈ Z^n_+) }.

Relaxation strategies

Which constraints should be relaxed? "The complicating ones", so that the remaining problem
- is polynomially solvable (e.g. minimum spanning tree, assignment problem, linear programming),
- is totally unimodular (e.g. network problems), or
- is NP-hard but good techniques exist (e.g. knapsack);
or constraints which
- cannot be expressed in MIP terms (e.g. cutting), or
- are too extensive to express (e.g. subtour elimination in TSP).

Subgradient optimization: Lagrange multipliers

Original problem:

    max z = cx   s.t.  Ax ≤ b,  Dx ≤ d,  x ∈ Z^n_+

Lagrangian relaxation, multipliers λ ≥ 0:

    max z_LR(λ) = cx − λ(Dx − d)   s.t.  Ax ≤ b,  x ∈ Z^n_+

Lagrangian dual problem:

    z_LD = min_{λ ≥ 0} z_LR(λ)

We do not need the best multipliers in a B&B algorithm. Subgradient optimization is a fast method; it works well due to convexity and has its roots in nonlinear programming (Held and Karp, 1971).
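The following is a minimal sketch, with made-up data, of what z_LR(λ) means computationally: the relaxed constraint Dx ≤ d is moved into the objective with multipliers λ ≥ 0, the subproblem max{cx − λ(Dx − d) : Ax ≤ b, x ∈ {0,1}^n} is solved here by brute force, and the resulting value is an upper bound on z_IP for every λ ≥ 0.

```python
# Minimal sketch (not from the slides): evaluate the Lagrangian bound z_LR(lambda)
# for a tiny made-up 0-1 program  max cx  s.t.  Ax <= b ("kept"), Dx <= d ("relaxed"),
# by brute force over x in {0,1}^n, and check z_LR(lambda) >= z_IP for lambda >= 0.
from itertools import product
import numpy as np

c = np.array([4.0, 3.0, 5.0])                            # objective (made-up data)
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([2.0])     # kept constraint Ax <= b
D = np.array([[2.0, 1.0, 3.0]]); d = np.array([3.0])     # relaxed constraint Dx <= d

X_kept = [np.array(x) for x in product([0, 1], repeat=3)
          if np.all(A @ np.array(x) <= b)]               # feasible for the kept constraints

def z_LR(lam):
    """z_LR(lambda) = max_{x in X_kept} cx - lambda (Dx - d)."""
    return max(c @ x - lam @ (D @ x - d) for x in X_kept)

z_IP = max(c @ x for x in X_kept if np.all(D @ x <= d))  # true integer optimum

for lam in (np.array([0.0]), np.array([0.5]), np.array([2.0])):
    assert z_LR(lam) >= z_IP - 1e-9                      # any lambda >= 0 gives an upper bound
    print(f"lambda={lam[0]:.1f}  z_LR={z_LR(lam):.2f}  (z_IP={z_IP:.2f})")
```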

Subgradient optimization, motivation

- Newton-like method for minimizing a function in one variable
- The Lagrangian function z_LR(λ) is piecewise linear and convex (hence not differentiable everywhere)

Subgradient

Generalization of gradients to non-differentiable functions.

Definition. An m-vector γ is a subgradient of f(λ) at λ = λ̄ if

    f(λ)  ≥  f(λ̄) + γ(λ − λ̄)   for all λ.

The inequality says that the hyperplane y = f(λ̄) + γ(λ − λ̄) is tangent to y = f(λ) at λ = λ̄ and supports f(λ) from below.

Proposition. Given nonnegative multipliers λ̄, if x̄ is an optimal solution of z_LR(λ̄), then

    γ = d − Dx̄

is a subgradient of z_LR(λ) at λ = λ̄.

Proof. By the subgradient definition we must show

    max_{Ax ≤ b} ( cx − λ(Dx − d) )  ≥  γ(λ − λ̄) + max_{Ax ≤ b} ( cx − λ̄(Dx − d) ),

where x̄ is an optimal solution of the right-most subproblem. Inserting γ = d − Dx̄, the right-hand side becomes

    (d − Dx̄)(λ − λ̄) + ( cx̄ − λ̄(Dx̄ − d) )  =  cx̄ − λ(Dx̄ − d),

which is the value of the feasible solution x̄ in the left-hand maximization and therefore at most its maximum.

Intuition

In the Lagrangian relaxation max z_LR(λ) = cx − λ(Dx − d), s.t. Ax ≤ b, x ∈ Z^n_+, the gradient with respect to λ at a fixed solution x̄ is γ = d − Dx̄.

Subgradient iteration

Recursion

    λ^{k+1} = max{ λ^k − θγ^k, 0 }

where θ > 0 is the step size. If γ > 0 and θ is sufficiently small, z_LR(λ) will decrease.
- Small θ: slow convergence
- Large θ: unstable
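A minimal sketch of this iteration on toy data (the data and the step-size rule θ = 1/k are assumptions, not from the slides); the subproblem is again solved by brute force, and the best z_LR(λ) value found is a valid upper bound approximating z_LD.

```python
# Minimal sketch of subgradient optimization for the Lagrangian dual
# z_LD = min_{lambda >= 0} z_LR(lambda); toy data and step-size rule are assumptions.
from itertools import product
import numpy as np

c = np.array([4.0, 3.0, 5.0])
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([2.0])     # kept constraints
D = np.array([[2.0, 1.0, 3.0]]); d = np.array([3.0])     # relaxed constraints

X_kept = [np.array(x) for x in product([0, 1], repeat=3)
          if np.all(A @ np.array(x) <= b)]

def solve_subproblem(lam):
    """Return (z_LR(lam), argmax x) by brute force over the kept-feasible 0-1 points."""
    return max(((c @ x - lam @ (D @ x - d), x) for x in X_kept), key=lambda pair: pair[0])

lam = np.zeros(1)                        # start with lambda = 0
best = np.inf
for k in range(1, 51):
    z, x_bar = solve_subproblem(lam)
    best = min(best, z)                  # best (lowest) upper bound found so far
    gamma = d - D @ x_bar                # subgradient of z_LR at the current lambda
    theta = 1.0 / k                      # diminishing step size (a common choice)
    lam = np.maximum(lam - theta * gamma, 0.0)

print(f"best Lagrangian bound z_LD ~ {best:.2f}")
```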

Lagrangian relaxation and LP

For an LP problem in which we Lagrange relax all constraints:
- the dual variables are the best choice of Lagrange multipliers,
- Lagrangian relaxation and LP "relaxation" give the same bound.

This gives a clue to solving LP problems without the Simplex method: iterative algorithms, polynomial algorithms.

Dantzig-Wolfe Decomposition

Motivation: split a large or difficult problem up into smaller pieces.

Applications:
- Cutting Stock problems
- Multicommodity Flow problems
- Facility Location problems
- Capacitated Multi-item Lot-sizing problems
- Air-crew and Manpower Scheduling
- Vehicle Routing Problems
- Scheduling (current research)

The two currently most promising directions for MIP: branch-and-price and branch-and-cut.

Dantzig-Wolfe Decomposition

The problem is split into a master problem and a subproblem.
+ Tighter bounds
+ Better control of the subproblem
- The model may become (very) large

Delayed column generation: write up the decomposed model gradually, as needed (a sketch follows below):
- Generate a few solutions to the subproblems
- Solve the master problem to LP-optimality
- Use the dual information to find the most promising solutions to the subproblem
- Extend the master problem with the new subproblem solutions (and repeat from the second step)
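To make these steps concrete, here is a minimal sketch of delayed column generation for a one-dimensional cutting stock instance (one of the applications listed earlier). The instance data, starting patterns and tolerance are made up; the restricted master LP is solved with SciPy's linprog (HiGHS backend), and the duals are read from res.ineqlin.marginals, which I assume is available in SciPy 1.7+; pricing is an unbounded knapsack solved by dynamic programming.

```python
# Minimal sketch of delayed column generation for 1D cutting stock (made-up data).
# Master:  min sum(lambda_p)  s.t.  sum_p a_ip lambda_p >= d_i,  lambda_p >= 0.
# Pricing: unbounded knapsack  max sum_i pi_i y_i  s.t.  sum_i w_i y_i <= W.
# Assumes SciPy >= 1.7 (HiGHS exposes duals via res.ineqlin.marginals).
import numpy as np
from scipy.optimize import linprog

W = 10                                    # roll width (made-up)
w = [3, 5, 7]                             # item widths
demand = np.array([25.0, 20.0, 18.0])
patterns = [np.eye(len(w))[i] * (W // w[i]) for i in range(len(w))]  # trivial start columns

def solve_master(cols):
    """LP-relaxation of the restricted master; returns (objective, duals pi >= 0)."""
    A = np.column_stack(cols)                       # rows = items, columns = patterns
    res = linprog(c=np.ones(A.shape[1]),
                  A_ub=-A, b_ub=-demand,            # A lambda >= demand  <=>  -A lambda <= -demand
                  bounds=(0, None), method="highs")
    return res.fun, -res.ineqlin.marginals          # flip sign: duals of the >= constraints

def price(pi):
    """Unbounded knapsack by DP over capacities 0..W; returns (best value, pattern)."""
    best = [0.0] * (W + 1)
    choice = [None] * (W + 1)
    for cap in range(1, W + 1):
        best[cap], choice[cap] = best[cap - 1], choice[cap - 1]   # allow wasted width
        for i, wi in enumerate(w):
            if wi <= cap and best[cap - wi] + pi[i] > best[cap]:
                best[cap], choice[cap] = best[cap - wi] + pi[i], i
    pattern, cap = np.zeros(len(w)), W
    while choice[cap] is not None:                  # rebuild the pattern from the DP choices
        pattern[choice[cap]] += 1
        cap -= w[choice[cap]]
    return best[W], pattern

while True:
    obj, pi = solve_master(patterns)
    value, pattern = price(pi)
    if value <= 1.0 + 1e-9:              # reduced cost 1 - value >= 0: no improving column
        break
    patterns.append(pattern)             # extend the master with the new column

print(f"LP bound: {obj:.2f} rolls using {len(patterns)} generated patterns")
```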

Delayed column generation, linear master

- The master problem can (and will) contain many columns
- To find a bound, solve the LP-relaxation of the master
- Delayed column generation gradually writes up the master

Reduced Costs: Simplex in matrix form

    min { cx : Ax = b, x ≥ 0 }

Let B = {1, 2, ..., p} be the basic variables and L = {1, 2, ..., q} the non-basic variables (which will be set to their lower bound 0). (B, L) is the basis structure, with x_B, x_L, c_B, c_L and the column submatrices B = [A_1, A_2, ..., A_p], L = [A_{p+1}, A_{p+2}, ..., A_{p+q}]. In matrix form:

    [ 0    B     L   ] [ z   ]   [ b ]
    [ 1  −c_B  −c_L  ] [ x_B ] = [ 0 ]
                       [ x_L ]

From B x_B + L x_L = b we get x_B + B^{-1} L x_L = B^{-1} b. The Simplex algorithm sets x_L = 0 and x_B = B^{-1} b. B is invertible, hence its rows are linearly independent.

The objective function is obtained by multiplying the constraints by multipliers π (the dual variables) and subtracting them from it:

    z = Σ_{j ∈ B} ( c_j − Σ_{i=1}^{p} π_i a_{ij} ) x_j + Σ_{j ∈ L} ( c_j − Σ_{i=1}^{p} π_i a_{ij} ) x_j + Σ_{i=1}^{p} π_i b_i

Each basic variable has cost zero in this objective function:

    c_j − Σ_{i=1}^{p} π_i a_{ij} = 0  for j ∈ B   ⟹   π = c_B B^{-1}

Reduced costs of the non-basic variables:

    c̄_j = c_j − Σ_{i=1}^{p} π_i a_{ij},   j ∈ L.
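A small numerical check of these formulas on made-up data (matrix, costs and choice of basis are assumptions): compute x_B = B^{-1} b, the duals π = c_B B^{-1} and the reduced costs, and verify that the basic ones vanish.

```python
# Minimal sketch (made-up data): basis, duals and reduced costs as in the formulas above.
import numpy as np

# min c x  s.t.  A x = b, x >= 0   (2 constraints, 4 variables, made-up numbers)
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
b = np.array([8.0, 9.0])
c = np.array([2.0, 3.0, 0.0, 0.0])

basic = [0, 1]                      # pick columns A_1, A_2 as the basis (0-based indices)
B = A[:, basic]
cB = c[basic]

xB = np.linalg.solve(B, b)          # x_B = B^{-1} b   (non-basic variables set to 0)
pi = cB @ np.linalg.inv(B)          # pi = c_B B^{-1}  (row-vector convention)

reduced = c - pi @ A                # c_j - pi A_j for every column j
assert np.allclose(reduced[basic], 0.0)   # basic variables have zero reduced cost

print("x_B =", xB, " pi =", pi, " reduced costs =", reduced)
```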

Questions

- Will the process terminate? Yes: the objective value is always improving, and there is only a finite number of basis solutions.
- Can we repeat the same pattern? No, since the objective function is improved: we know the best solution among the existing columns, and generating an already existing column would not improve the objective.

Single Machine Models

Scheduling: sequencing (linear ordering) variables, 1 | prec | Σ w_j C_j

    min  Σ_{j=1}^{n} Σ_{k=1}^{n} w_j p_k x_{kj} + Σ_{j=1}^{n} w_j p_j
    s.t. x_{kj} + x_{lk} + x_{jl} ≥ 1      j, k, l = 1, ..., n,  j ≠ k, k ≠ l
         x_{kj} + x_{jk} = 1               j, k = 1, ..., n,  j ≠ k
         x_{jk} ∈ {0, 1}                   j, k = 1, ..., n
         x_{jj} = 0                        j = 1, ..., n

Here x_{kj} = 1 if job k is processed before job j, so that C_j = Σ_k p_k x_{kj} + p_j.
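A quick sanity check of this formulation on made-up job data: build x from a given processing sequence and confirm that the objective above equals Σ_j w_j C_j computed directly from the schedule.

```python
# Minimal sketch (made-up data): check that the linear-ordering objective
#   sum_j sum_k w_j p_k x_kj + sum_j w_j p_j
# equals sum_j w_j C_j when x_kj = 1 iff job k is sequenced before job j.
import numpy as np

p = np.array([3.0, 1.0, 4.0, 2.0])      # processing times (made-up)
w = np.array([2.0, 5.0, 1.0, 3.0])      # weights (made-up)
sequence = [1, 3, 0, 2]                 # an arbitrary processing order of jobs 0..3

n = len(p)
x = np.zeros((n, n))
for pos, k in enumerate(sequence):
    for j in sequence[pos + 1:]:
        x[k, j] = 1.0                   # job k precedes job j

objective = sum(w[j] * p[k] * x[k, j] for j in range(n) for k in range(n)) + float(w @ p)

completion = {}                         # completion times read off the schedule itself
t = 0.0
for j in sequence:
    t += p[j]
    completion[j] = t
weighted_completion = sum(w[j] * completion[j] for j in range(n))

assert abs(objective - weighted_completion) < 1e-9
print(f"sum w_j C_j = {weighted_completion:.1f} (matches the x-based objective)")
```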

Scheduling: completion time variables, 1 | prec | Σ w_j C_j

    min  Σ_{j=1}^{n} w_j z_j
    s.t. z_k − z_j ≥ p_k                          for (j, k) ∈ A (precedences)
         z_j ≥ p_j                                for j = 1, ..., n
         z_k − z_j ≥ p_k  or  z_j − z_k ≥ p_j     for (j, k) ∈ I (disjunctive pairs)
         z_j ∈ R                                  for j = 1, ..., n

Scheduling: time indexed variables, 1 | | Σ h_j(C_j)

    min  Σ_{j=1}^{n} Σ_{t} h_j(t + p_j) x_{jt}
    s.t. Σ_{t} x_{jt} = 1                             for all j = 1, ..., n
         Σ_{j=1}^{n} Σ_{s=t−p_j+1}^{t} x_{js} ≤ 1     for each t = 1, ..., T
         x_{jt} ∈ {0, 1}                              for each j = 1, ..., n; t = 1, ..., T − p_j + 1

Here x_{jt} = 1 if job j starts at time t. This formulation gives better bounds than the two preceding ones, but it has a pseudo-polynomial number of variables.

Dantzig-Wolfe decomposition: reformulation

    min  Σ_{j=1}^{n} Σ_{t} h_j(t + p_j) x_{jt}
    s.t. Σ_{t} x_{jt} = 1     for all j = 1, ..., n        [dual variables π_j]
         x ∈ X

where X = { x_{jt} ∈ {0, 1} : Σ_{j=1}^{n} Σ_{s=t−p_j+1}^{t} x_{js} ≤ 1 for each t = 1, ..., T }.

Let x^l, l = 1, ..., L, be the extreme points of X, so that

    X = { x ∈ {0, 1} : x = Σ_{l=1}^{L} λ_l x^l,  Σ_{l=1}^{L} λ_l = 1,  λ_l ∈ {0, 1} }.

The constraint matrix of X is an interval matrix, hence the extreme points are integral: they are pseudo-schedules.

Substituting X in the original model gives the master problem

    min  Σ_{l=1}^{L} λ_l ( Σ_{j=1}^{n} Σ_{t} h_j(t + p_j) x^l_{jt} )
    s.t. Σ_{l=1}^{L} λ_l n^l_j = 1     for all j = 1, ..., n     [dual π_j]
         Σ_{l=1}^{L} λ_l = 1                                     [dual α]
         λ_l ∈ {0, 1}   (λ_l ≥ 0 in the LP-relaxation)

where n^l_j = Σ_t x^l_{jt} is the number of times job j is started in pseudo-schedule x^l.

Solve the LP-relaxation by column generation on the pseudo-schedules x^l; with c_{jt} = h_j(t + p_j), the reduced cost of λ_l is

    c̄_l = Σ_{j=1}^{n} Σ_{t} (c_{jt} − π_j) x^l_{jt} − α.
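Given duals (π, α) from the restricted master, the data of one pseudo-schedule column and its reduced cost follow directly from these formulas; a minimal sketch with made-up numbers (h_j taken as the weighted completion time w_j (t + p_j)) is below.

```python
# Minimal sketch (made-up data): column data and reduced cost of a pseudo-schedule.
# A pseudo-schedule is a list of (job, start time) pairs; jobs may repeat or be missing.
import numpy as np

p = [2, 3, 2]                         # processing times (made-up)
w = [4.0, 1.0, 2.0]                   # weights; cost per start c_jt = h_j(t+p_j) = w_j*(t+p_j)
pi = np.array([9.0, 4.0, 5.0])        # duals of the job constraints (made-up values)
alpha = 1.0                           # dual of the convexity constraint

pseudo_schedule = [(0, 1), (2, 3), (0, 5)]   # job 0 started twice, job 1 not at all

n_l = np.zeros(3)                     # n^l_j: number of times job j is started
cost = 0.0                            # sum_j sum_t c_jt x^l_jt
reduced_cost = -alpha                 # c_l_bar = sum (c_jt - pi_j) x^l_jt - alpha
for j, t in pseudo_schedule:
    c_jt = w[j] * (t + p[j])
    n_l[j] += 1
    cost += c_jt
    reduced_cost += c_jt - pi[j]

print("n^l_j =", n_l, " column cost =", cost, " reduced cost =", reduced_cost)
```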

The subproblem can be solved by finding a shortest path in a network N with nodes 1, 2, ..., T + 1 corresponding to time periods, and
- process arcs (t, t + p_j) for all j and t, with cost c_{jt} − π_j
- idle time arcs (t, t + 1) for all t, with cost 0

A path in this network corresponds to a pseudo-schedule, in which a job may be started more than once or not processed at all.

The lower bound on the master problem produced by the LP-relaxation of the restricted master problem can be tightened by inequalities.

[Pessoa, Uchoa, Poggi de Aragão, Rodrigues, 2008] propose another time-indexed formulation that dominates this one; they can consistently solve instances with up to 100 jobs.
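A minimal sketch of this pricing step (all numbers made up, with h_j again taken as w_j (t + p_j)): a dynamic program over the nodes 1, ..., T + 1 that relaxes idle and process arcs in order of time, returns the cheapest pseudo-schedule, and subtracts the convexity dual α to obtain the smallest reduced cost.

```python
# Minimal sketch (made-up data): pricing by shortest path over the time-expanded network.
# Nodes 1..T+1; process arcs (t, t+p_j) with cost c_jt - pi_j; idle arcs (t, t+1) with cost 0.
import numpy as np

T = 8
p = [2, 3, 2]                          # processing times (made-up)
w = [4.0, 1.0, 2.0]                    # c_jt = h_j(t+p_j) = w_j*(t+p_j) as an example cost
pi = np.array([9.0, 4.0, 5.0])         # duals of the job constraints (made-up)
alpha = 1.0                            # dual of the convexity constraint

INF = float("inf")
dist = [INF] * (T + 2)                 # dist[t] = cheapest cost of reaching node t
pred = [None] * (T + 2)                # predecessor arc: (t_from, job index or None for idle)
dist[1] = 0.0

for t in range(1, T + 1):              # all arcs go forward in time, so this order is topological
    if dist[t] == INF:
        continue
    if dist[t] < dist[t + 1]:          # idle arc (t, t+1), cost 0
        dist[t + 1], pred[t + 1] = dist[t], (t, None)
    for j in range(len(p)):            # process arcs (t, t+p_j), cost c_jt - pi_j
        u = t + p[j]
        if u <= T + 1:
            cost = w[j] * (t + p[j]) - pi[j]
            if dist[t] + cost < dist[u]:
                dist[u], pred[u] = dist[t] + cost, (t, j)

# Recover the pseudo-schedule (job, start time) along the shortest 1 -> T+1 path.
schedule, node = [], T + 1
while node != 1:
    t, j = pred[node]
    if j is not None:
        schedule.append((j, t))
    node = t
schedule.reverse()

print("min reduced cost =", dist[T + 1] - alpha, " pseudo-schedule =", schedule)
```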