Discrete Optimization in a Nutshell


Integer Optimization

Christoph Helmberg

Contents

1 Integer Optimization
  1.1 Bipartite Matching
  1.2 Integral Polyhedra (and Directed Graphs)
  1.3 Application: Network Flows
  1.4 Multi-Commodity Flow Problems
  1.5 Integer and Combinatorial Optimization
  1.6 Branch-and-Bound
  1.7 Convex Sets, Convex Hull, Convex Functions
  1.8 Relaxation
  1.9 Application: Traveling Salesman Problem (TSP)
  1.10 Finding Good Solutions, Heuristics
2 Mixed-Integer Optimization

1.1 Application: The Marriage Problem (Bipartite Matching)

Find a maximum number of pairs! Men and women, workers and machines, students and positions, ... (also weighted versions).

Beware: a matching that is maximal (cannot be increased by adding a pair) need not be of maximum cardinality; the goal is a maximum cardinality matching, ideally even a perfect one.

Bipartite Matching

An (undirected) graph G = (V, E) is a pair consisting of a node/vertex set V and an edge set E ⊆ {{u, v} : u, v ∈ V, u ≠ v}. Two nodes u, v ∈ V are adjacent/neighbors if {u, v} ∈ E. A node v ∈ V and an edge e ∈ E are incident if v ∈ e. Two edges e, f ∈ E are incident if e ∩ f ≠ ∅.

An edge set M ⊆ E is a matching (1-factor) if for all e, f ∈ M with e ≠ f we have e ∩ f = ∅. The matching is perfect if |V| = 2|M|.

G = (V, E) is bipartite if V = V₁ ∪ V₂ with V₁ ∩ V₂ = ∅ and E ⊆ {{u, v} : u ∈ V₁, v ∈ V₂}.

Model: Maximum Cardinality Bipartite Matching

given: G = (V₁ ∪ V₂, E) bipartite
find: matching M ⊆ E with |M| maximum

variables: x ∈ {0,1}^E with x_e = 1 if e ∈ M and x_e = 0 otherwise (e ∈ E)
(x is the incidence/characteristic vector of M w.r.t. E)

constraints: Ax ≤ 1, where A ∈ {0,1}^{V×E} is the node-edge incidence matrix of G:
A_{v,e} = 1 if v ∈ e and A_{v,e} = 0 otherwise (v ∈ V, e ∈ E).
(Figure omitted: a small graph on nodes A, ..., E with edges a, ..., d and its incidence matrix; each column contains exactly two ones, one per endpoint.)

optimization problem: max 1ᵀx s.t. Ax ≤ 1, x ∈ {0,1}^E

This is no LP! Enlarge x ∈ {0,1}^E to x ∈ [0,1]^E → LP.
For G bipartite, Simplex always delivers an optimal solution x ∈ {0,1}^E!
(This does not hold in general for arbitrary graphs G!)
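The LP/Simplex route above is one way to compute a maximum cardinality bipartite matching; the same optimum can also be found purely combinatorially with augmenting paths. A minimal sketch (not from the lecture; function names and the example graph are made up):

```python
def max_bipartite_matching(adj):
    """Maximum cardinality matching via augmenting paths.
    adj: dict mapping each node of V1 to its list of neighbors in V2."""
    match_of = {}                      # V2-node -> matched V1-node

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is still free, or its current partner can be rematched elsewhere
            if v not in match_of or try_augment(match_of[v], seen):
                match_of[v] = u
                return True
        return False

    for u in adj:
        try_augment(u, set())
    return {u: v for v, u in match_of.items()}

matching = max_bipartite_matching({"a": ["A", "B"], "b": ["A"], "c": ["B", "C"]})
print(len(matching))  # 3
```

Each call to try_augment either leaves the matching unchanged or increases its cardinality by one, mirroring the fact that the LP relaxation has an integral optimum.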


1.2 Integral Polyhedra

Simplex automatically yields an integral solution if all vertices of the feasible set are integral:

min cᵀx s.t. x ∈ X := {x ≥ 0 : Ax = b}

Are all vertices of X integral? Almost never! But there is an important class of matrices A ∈ Z^{m×n} for which X has only integral vertices for any (!) b ∈ Z^m:

A vertex is an integral basic solution x_B = A_B⁻¹ b ∈ Z^m. If |det(A_B)| = 1, Cramer's rule implies x_B ∈ Z^m.

A matrix A ∈ Z^{m×n} of full row rank is unimodular if |det(A_B)| = 1 holds for each basis B.

Theorem. A ∈ Z^{m×n} of full row rank is unimodular if and only if for each b ∈ Z^m all vertices of the polyhedron X := {x ≥ 0 : Ax = b} are integral.

Does this also hold for X := {x ≥ 0 : Ax ≤ b}?
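The definition of unimodularity can be checked directly on small matrices by enumerating all bases and computing exact determinants over the rationals. A brute-force sketch (my own illustration, not lecture material; the example matrix is made up):

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        pivot = next((r for r in range(i, n) if M[r][i] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def is_unimodular(A):
    """Full-row-rank integer A: |det(A_B)| = 1 for every basis B.
    Singular m-column subsets contribute det 0 and are allowed."""
    m = len(A)
    dets = [abs(det([[row[j] for j in cols] for row in A]))
            for cols in combinations(range(len(A[0])), m)]
    return any(d == 1 for d in dets) and all(d in (0, 1) for d in dets)

A = [[1, 0, 1, -1],
     [0, 1, -1, 1]]
print(is_unimodular(A))  # True
```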

Totally Unimodular Matrices

{x ≥ 0 : Ax ≤ b} ↔ {[x; s] ≥ 0 : [A, I] [x; s] = b}

Certainly integral if Ā = [A, I] is unimodular (Laplace expansion of the determinant for each basis B).

A matrix A is totally unimodular if every square submatrix of A has determinant 0, +1 or −1. (This requires A ∈ {0, +1, −1}^{m×n}.)

Theorem (Hoffman and Kruskal). A ∈ Z^{m×n} is totally unimodular if and only if for each b ∈ Z^m all vertices of the polyhedron X := {x ≥ 0 : Ax ≤ b} are integral.

Note: A totally unimodular ⇔ Aᵀ resp. [A, −A, I, −I] totally unimodular. Consequence: the dual LP, variants with equality constraints, etc. are integral as well.

Recognizing Totally Unimodular Matrices

Theorem (Heller and Tompkins). Let A ∈ {0, +1, −1}^{m×n} have at most two nonzero entries per column. A is totally unimodular ⇔ the rows of A can be partitioned into two classes so that
(i) rows with one +1 and one −1 entry in the same column belong to the same class,
(ii) rows with two nonzero entries of equal sign in the same column belong to distinct classes.

Example 1: the node-edge incidence matrix of a bipartite graph (the rows of V₁ form one class, those of V₂ the other; each column has two +1 entries, one in each class).
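The Heller-Tompkins criterion is itself easy to test algorithmically: the sign conditions define same-class/different-class constraints between rows, which can be propagated like a graph 2-coloring. A sketch (hypothetical helper, not from the slides):

```python
from collections import defaultdict, deque

def heller_tompkins(A):
    """Heller-Tompkins test for a {0,+1,-1} matrix with at most two
    nonzeros per column: try to 2-partition the rows so that
    opposite-sign column pairs share a class and equal-sign pairs do not.
    Returns a 0/1 class per row, or None if no partition exists."""
    m = len(A)
    graph = defaultdict(list)            # row -> [(other_row, parity)]
    for col in zip(*A):
        nz = [i for i, x in enumerate(col) if x != 0]
        if len(nz) == 2:
            r, s = nz
            parity = 0 if col[r] != col[s] else 1   # 0: same class
            graph[r].append((s, parity))
            graph[s].append((r, parity))
    cls = [None] * m
    for start in range(m):
        if cls[start] is not None:
            continue
        cls[start] = 0
        queue = deque([start])
        while queue:
            r = queue.popleft()
            for s, parity in graph[r]:
                want = cls[r] ^ parity
                if cls[s] is None:
                    cls[s] = want
                    queue.append(s)
                elif cls[s] != want:
                    return None          # contradiction: not 2-partitionable
    return cls

# bipartite node-edge incidence matrix: the classes are the two sides
print(heller_tompkins([[1, 0], [0, 1], [1, 1]]))  # [0, 0, 1]
```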

Example 1: Bipartite Graphs

A ... node-edge incidence matrix of G = (V₁ ∪ V₂, E) bipartite.

Bipartite matching of maximum cardinality: max 1ᵀx s.t. Ax ≤ 1, x ≥ 0.
Because A is totally unimodular, the vertices of the feasible set are all integral ⇒ Simplex delivers an optimal solution x ∈ {0,1}^E.

The dual is also integral, because Aᵀ is totally unimodular: min 1ᵀy s.t. Aᵀy ≥ 1, y ≥ 0.
Interpretation: y ∈ {0,1}^V is the incidence vector of a smallest node set V′ ⊆ V with V′ ∩ e ≠ ∅ for all e ∈ E (Minimum Vertex Cover).

The Assignment Problem: |V₁| = |V₂| = n, complete bipartite: E = {{u, v} : u ∈ V₁, v ∈ V₂}; edge weights c ∈ R^E. Find a perfect matching of minimum total weight: min cᵀx s.t. Ax = 1, x ≥ 0 is integral, too, because [A; −A] is totally unimodular.

Example 2: Node-Arc Incidence Matrix of Digraphs

A digraph/directed graph D = (V, E) is a pair consisting of a node set V and a (multi-)set of directed edges/arcs E ⊆ {(u, v) : u, v ∈ V, u ≠ v}. [Multiple arcs are allowed!] For e = (u, v) ∈ E, u is the tail and v the head of e.

The node-arc incidence matrix A ∈ {0, +1, −1}^{V×E} of D has entries

A_{v,e} = +1 if v is the tail of e, −1 if v is the head of e, 0 otherwise (v ∈ V, e ∈ E).

The node-arc incidence matrix of a digraph is totally unimodular (Heller-Tompkins with all rows in one class: every column has one +1 and one −1).
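Total unimodularity of such a node-arc incidence matrix can also be verified on small instances by brute force over all square submatrices. A sketch (illustrative only; the digraph is made up, and the sign convention follows the tail/head definition above):

```python
from itertools import combinations

def minor_det(A, rows, cols):
    """Determinant of the square submatrix A[rows][cols] by Laplace
    expansion along the first row (fine for tiny matrices)."""
    if len(rows) == 1:
        return A[rows[0]][cols[0]]
    r, total = rows[0], 0
    for j, c in enumerate(cols):
        if A[r][c]:
            total += (-1) ** j * A[r][c] * minor_det(A, rows[1:], cols[:j] + cols[j + 1:])
    return total

def is_totally_unimodular(A):
    """Check every square submatrix; exponential, for small examples only."""
    m, n = len(A), len(A[0])
    return all(minor_det(A, rows, cols) in (-1, 0, 1)
               for k in range(1, min(m, n) + 1)
               for rows in combinations(range(m), k)
               for cols in combinations(range(n), k))

def node_arc_incidence(n, arcs):
    """+1 at the tail, -1 at the head of each arc."""
    A = [[0] * len(arcs) for _ in range(n)]
    for j, (u, v) in enumerate(arcs):
        A[u][j], A[v][j] = 1, -1
    return A

A = node_arc_incidence(4, [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)])
print(is_totally_unimodular(A))  # True
```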


1.3 Application: Network Flows

Modelling tool: transport problems, evacuation planning, scheduling, (internet) traffic planning, ...

A network (D, w) consists of a digraph D = (V, E) and (arc-)capacities w ∈ R^E, w ≥ 0. A vector x ∈ R^E is a flow on (D, w) if it satisfies the flow conservation constraints

∑_{e=(u,v)∈E} x_e = ∑_{e=(v,u)∈E} x_e   (v ∈ V)   [⇔ Ax = 0 for the node-arc incidence matrix A]

A flow x ∈ R^E on (D, w) is feasible if 0 ≤ x ≤ w. [Lower bounds would also be possible.]
(Figure omitted: a flow and a feasible flow on a small example network.)

Maximal s-t-Flows, Minimal s-t-Cuts

Given a source s ∈ V and a sink t ∈ V with (t, s) ∈ E, find a feasible flow x ∈ R^E on (D, w) with maximum flow value x_{(t,s)}.

LP: max x_{(t,s)} s.t. Ax = 0, 0 ≤ x ≤ w

If w ∈ Z^E, the optimal solution of Simplex is x ∈ Z^E, because [A; −A; I] is totally unimodular.

Each S ⊆ V with s ∈ S and t ∉ S defines an s-t-cut δ⁺(S) := {(u, v) ∈ E : u ∈ S, v ∉ S}; the out-flow is at most w(δ⁺(S)) := ∑_{e ∈ δ⁺(S)} w_e, the value of the cut.

Example (nodes s, B, C, t; capacities w_{(s,B)} = 4, w_{(s,C)} = 5, w_{(C,B)} = 2, w_{(B,t)} = 7, w_{(C,t)} = 1):
S = {s}: δ⁺(S) = {(s, B), (s, C)}, w(δ⁺(S)) = 4 + 5 = 9.
S = {s, B, C}: δ⁺(S) = {(B, t), (C, t)}, w(δ⁺(S)) = 7 + 1 = 8.
S = {s, C}: δ⁺(S) = {(s, B), (C, B), (C, t)}, w(δ⁺(S)) = 4 + 2 + 1 = 7 = x_{(t,s)}.

Theorem (Max-Flow Min-Cut Theorem of Ford and Fulkerson). The maximum s-t-flow value equals the minimum s-t-cut value. [Both computable by Simplex.]
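The LP view can be cross-checked with the classical combinatorial algorithm. A compact Edmonds-Karp sketch (my own code; it assumes the example capacities above with node numbering s=0, B=1, C=2, t=3):

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp: augment along shortest residual paths.
    cap: dict {(u, v): capacity} on nodes 0..n-1.
    Returns (maximum flow value, node set S of a minimum s-t-cut)."""
    residual = {}
    for (u, v), w in cap.items():
        residual[(u, v)] = residual.get((u, v), 0) + w
        residual.setdefault((v, u), 0)
    adj = {u: [] for u in range(n)}
    for (u, v) in residual:
        adj[u].append(v)
    value = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:   # BFS in the residual graph
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value, set(parent)      # reachable set = min-cut side S
        path, v = [], t
        while parent[v] is not None:       # trace the augmenting path
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= push
            residual[(v, u)] += push
        value += push

# the example above: s=0, B=1, C=2, t=3
cap = {(0, 1): 4, (0, 2): 5, (2, 1): 2, (1, 3): 7, (2, 3): 1}
print(max_flow(4, cap, 0, 3))  # (7, {0, 2})
```

The returned cut S = {s, C} has value 4 + 2 + 1 = 7, matching the maximum flow value as the theorem promises.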

Minimum Cost Flow (Min-Cost-Flow)

The flow value is now prescribed by balances b ∈ R^V (1ᵀb = 0) on the nodes; each unit of flow induces arc costs c ∈ R^E. Find the cheapest flow.

LP: min cᵀx s.t. Ax = b, 0 ≤ x ≤ w
(Numerical example omitted: a small network with given balances, costs and capacities.)

For b, c and w integral, Simplex gives an optimal solution x ∈ Z^E, as [A; −A; I] is totally unimodular. [This also works with lower bounds on the arcs: u ≤ x ≤ w!]

For LPs min cᵀx s.t. Ax = b, u ≤ x ≤ w with A a node-arc incidence matrix there is a particularly efficient Simplex variant, the network simplex; it only needs additions, subtractions and comparisons! Extremely broad scope of applications, popular modelling tool.

Example: Transportation Problem

A company with several production sites has to serve several customers. How is this best done in view of transportation costs? Model it as a min-cost flow on a bipartite network: production sites with supplies, customers with demands, arcs weighted by transportation costs. (Figure omitted.) Note: only one product!
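To make the model min cᵀx s.t. Ax = b, 0 ≤ x ≤ w concrete, a tiny hypothetical transportation instance can be solved by sheer enumeration of all integral arc flows; real instances would use the network simplex or an LP solver. All names and numbers below are made up:

```python
from itertools import product

def min_cost_flow_bruteforce(arcs, cap, cost, b):
    """Solve min c^T x s.t. Ax = b, 0 <= x <= w by enumerating all
    integral flows (A: +1 at the tail, -1 at the head; supplies b > 0,
    demands b < 0). Only sensible for tiny instances."""
    best, best_x = None, None
    for x in product(*(range(cap[e] + 1) for e in arcs)):
        balance = dict.fromkeys(b, 0)
        for (u, v), xe in zip(arcs, x):
            balance[u] += xe               # flow leaves u ...
            balance[v] -= xe               # ... and enters v
        if all(balance[v] == b[v] for v in b):
            total = sum(cost[e] * xe for e, xe in zip(arcs, x))
            if best is None or total < best:
                best, best_x = total, x
    return best, best_x

# two sites (supplies 2 and 1), two customers (demands 2 and 1)
arcs = [("p1", "c1"), ("p1", "c2"), ("p2", "c1"), ("p2", "c2")]
cap = {e: 3 for e in arcs}
cost = {("p1", "c1"): 4, ("p1", "c2"): 1, ("p2", "c1"): 2, ("p2", "c2"): 5}
b = {"p1": 2, "p2": 1, "c1": -2, "c2": -1}
print(min_cost_flow_bruteforce(arcs, cap, cost, b))  # (7, (1, 1, 1, 0))
```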

Example: Evacuation Planning

Determine for each room an escape route so that the building is cleared as fast as possible. Per aisle, capacity and crossing times are known.

Modelling steps (figures omitted):
- represent rooms and exits as nodes and aisles as arcs with capacities and crossing times; the geometry is not important, so simplify,
- discretize time, one level (copy of the node set) per time step,
- crossing times connect the levels, capacities stay,
- rising costs on the exit arcs encourage fast exits.

This approach is only ok if persons do not need to be discerned!
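The time discretization described above is mechanical enough to automate. A sketch of building the time-expanded network (hypothetical data layout; waiting arcs are added with unlimited capacity here, though room capacities could be used instead):

```python
def time_expand(arcs, T, exits):
    """Build a time-expanded evacuation network.

    arcs: dict {(u, v): (capacity, crossing_time)}; T: number of time
    steps; exits: nodes connected to a super-sink. Each room v becomes
    nodes (v, 0), ..., (v, T); an aisle (u, v) with crossing time tau
    yields arcs (u, t) -> (v, t + tau); exit arcs get cost t so that
    leaving early is cheaper. Returns (tail, head, capacity, cost) tuples."""
    expanded = []
    for (u, v), (w, tau) in arcs.items():
        for t in range(T - tau + 1):
            expanded.append(((u, t), (v, t + tau), w, 0))
    for v in exits:
        for t in range(T + 1):
            expanded.append(((v, t), "sink", float("inf"), t))
    # waiting arcs: stay in the same room for one time step
    rooms = {u for (u, _) in arcs} | {v for (_, v) in arcs}
    for v in rooms:
        for t in range(T):
            expanded.append(((v, t), (v, t + 1), float("inf"), 0))
    return expanded

net = time_expand({("room", "hall"): (2, 1), ("hall", "exit"): (4, 1)},
                  T=2, exits=["exit"])
print(len(net))  # 13
```

Feeding the result into a min-cost flow solver then yields the evacuation plan, as long as persons are interchangeable.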


1.4 Multi-Commodity Flow Problems

Given a network (D, w) and several different commodities K = {1, ..., k} with sources/sinks (s_i, t_i) and flow values f_i, i ∈ K, find feasible flows x^(i) ∈ R^E so that in sum the capacities are observed.

Mixing is forbidden! In the small example network (figure omitted), an integral solution is impossible, but a fractional one with flow values 0.5 works.

Implementation: one copy of the network per commodity, coupled through the joint capacity constraints (here for k = 2):

min c^(1)ᵀx^(1) + c^(2)ᵀx^(2)
s.t. Ax^(1) = b^(1)
     Ax^(2) = b^(2)
     Ix^(1) + Ix^(2) ≤ w
     x^(1) ≥ 0, x^(2) ≥ 0

The coefficient matrix [A 0; 0 A; I I] is in general not totally unimodular! Multi-commodity flow is fractionally solvable, but the integral version is VERY difficult!

Example: Logistics

Cover demand by shifting pallets by trucks between warehouses A, B, C: one pallet graph per article, time-expanded over t = 0, 1, 2, 3 with truck movements at t = 0.5, 1.5, 2.5, 3.5. (Figure omitted.)

Further Application Areas

Capacity planning (fractional) and time-discretized routing and scheduling (integral) for:
- street traffic
- trains
- internet
- logistics (bottleneck analysis/steering)
- production (machine loads/assignment)

Network design: installed capacities should satisfy as many demands as possible, also in case of failures. [Robust variants are extremely difficult!]

Multi-commodity flow is often used as a basic model that is combined with further constraints.


1.5 Integer Optimization (Integer Programming)

Mainly: linear programs with exclusively integer variables (otherwise: mixed-integer programming)

max cᵀx s.t. Ax ≤ b, x ∈ Z^n

Typically contains many binary variables ({0,1}) for yes/no decisions.

Difficulty: in general not solvable efficiently (complexity class NP); exact solutions rely heavily on enumeration (systematic exploration).

Exact solutions by combining the following techniques:
- (upper) bounds by linear/convex relaxation, improved by cutting plane approaches
- feasible solutions (lower bounds) by rounding and local search heuristics
- enumeration by branch & bound, branch & cut

Combinatorial Optimization

Mathematically: given a finite ground set Ω, a set of feasible subsets F ⊆ 2^Ω [power set, set of all subsets] and a goal function c : F → Q, determine

max{c(F) : F ∈ F} or F* ∈ Argmax{c(F) : F ∈ F}

Mostly only linear c: c(F) := ∑_{e ∈ F} c_e with c ∈ Q^Ω.

Ex. 1: maximum cardinality matching for G = (V, E): Ω = E, F = {M ⊆ E : M matching in G}, c = 1.
Ex. 2: minimum vertex cover for G = (V, E): Ω = V, F = {V′ ⊆ V : V′ ∩ e ≠ ∅ for e ∈ E}, c = 1.

Formulation as a binary program. Notation: incidence/characteristic vector χ_Ω(F) ∈ {0,1}^Ω for F ⊆ Ω [in short χ(F); it satisfies [χ(F)]_e = 1 ⇔ e ∈ F].

A linear program max{cᵀx : Ax ≤ b, x ∈ [0,1]^Ω} is a formulation of the combinatorial optimization problem if {x ∈ {0,1}^Ω : Ax ≤ b} = {χ(F) : F ∈ F}.
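Because Ω is finite, any combinatorial optimization problem can in principle be solved by enumerating all subsets and evaluating c; this is exponential in |Ω| but makes the definitions concrete. A sketch using Ex. 1 (maximum cardinality matching; the tiny graph is made up):

```python
from itertools import chain, combinations

def solve_combinatorial(ground, feasible, c):
    """max { c(F) : F feasible } by enumerating all subsets of the finite
    ground set Omega (exponential; for illustration only).
    feasible: predicate on frozensets; c: weight per ground element."""
    best, best_F = None, None
    all_subsets = chain.from_iterable(
        combinations(ground, k) for k in range(len(ground) + 1))
    for F in map(frozenset, all_subsets):
        if feasible(F):
            value = sum(c[e] for e in F)
            if best is None or value > best:
                best, best_F = value, F
    return best, best_F

def is_matching(edges):
    """Feasibility test of Ex. 1: pairwise disjoint edges."""
    seen = set()
    for e in edges:
        if e & seen:
            return False
        seen |= e
    return True

E = [frozenset(p) for p in [("u1", "v1"), ("u1", "v2"), ("u2", "v2")]]
value, M = solve_combinatorial(E, is_matching, {e: 1 for e in E})
print(value)  # 2
```

The optimal F is exactly the set whose characteristic vector χ(F) solves the binary program of the formulation above.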

(Algorithmic) Complexity of Problems

An instance I of a problem is a concrete assignment of values to the problem data; its size |I| is the encoding length, i.e. the number of symbols in a description string according to a reasonable encoding scheme.

The runtime of an algorithm for one instance is the number of elementary operations executed (read/write a symbol, add/multiply/compare bytes, etc.).

An algorithm solves a problem in polynomial time or efficiently if it computes the correct answer within a runtime that is bounded by a polynomial in the encoding length.

A problem is polynomially/efficiently solvable if there is an algorithm that solves it efficiently. The class P comprises all problems that are efficiently solvable.

Ex. 1: linear optimization is in P, BUT Simplex is not polynomial.
Ex. 2: maximum cardinality matching in general graphs is in P.
Ex. 3: minimum vertex cover in bipartite graphs is in P.

Decision Problems and the Class NP

In a decision problem each instance is a question that must be answered by either "yes" or "no". A decision problem is solvable in nondeterministic polynomial time if for each "yes"-instance I a solution string (the certificate) exists that allows checking the correctness of the "yes"-answer in time polynomially bounded in |I|. [Only "yes" is relevant!]

The class NP comprises all decision problems that are solvable in nondeterministic polynomial time. There holds: P ⊆ NP.

Ex.: Hamiltonian Cycle: in a graph G = (V, E) an edge set C = {{v₁, v₂}, {v₂, v₃}, ..., {v_{k−1}, v_k}, {v_k, v₁}} ⊆ E is a cycle (of length k) if v_i ≠ v_j for i ≠ j. A cycle C is Hamiltonian if |C| = |V|.

Decision problem: Does G contain a Hamiltonian cycle?

A "yes"-answer can be checked efficiently (the certificate is the cycle itself), so the problem is in NP.

Decision Problems and the Class NP In a decision problem each instance is a question, that must be answered by either yes or no. A decision problem is solvable in nondeterministic polynomial time, if for each yes -instance I a solution string (the certificate) exists, that allows to check correctnes of the yes -answer in time polynomially bounded in I. [only yes is relevant!] The cass NP comprises all decision problems that are solvable in nondeterministic polynomial time. There holds: P NP Ex: Hamiltonian Cycle: in a graphen G = (V, E) an edge set C = {{v, v 2 }, {v 2, v 3 },..., {v k, v k }, {v k, v }} E is a cycle (of length k) if v i v j for i j. A cycle C is hamiltonian if C = V. decision problem: Does G contain a Hamiltonian cycle? Yes-answer can be checked efficiently, the problem is in NP. 27: 67 [63,68]

Decision Problems and the Class NP In a decision problem each instance is a question, that must be answered by either yes or no. A decision problem is solvable in nondeterministic polynomial time, if for each yes -instance I a solution string (the certificate) exists, that allows to check correctnes of the yes -answer in time polynomially bounded in I. [only yes is relevant!] The cass NP comprises all decision problems that are solvable in nondeterministic polynomial time. There holds: P NP Ex: Hamiltonian Cycle: in a graphen G = (V, E) an edge set C = {{v, v 2 }, {v 2, v 3 },..., {v k, v k }, {v k, v }} E is a cycle (of length k) if v i v j for i j. A cycle C is hamiltonian if C = V. decision problem: Does G contain a Hamiltonian cycle? Yes-answer can be checked efficiently, the problem is in NP. 27: 68 [63,68]
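The certificate check for Hamiltonian Cycle can be sketched as follows — given a graph and a claimed cycle (here encoded as a vertex sequence, one of several possible certificate formats), the yes-answer is verified in time polynomial in the input size:

```python
# Polynomial-time certificate check for Hamiltonian Cycle, illustrating
# membership in NP. The function name and certificate encoding are
# illustrative choices, not fixed by the slides.

def is_hamiltonian_cycle(vertices, edges, cycle):
    """Check that `cycle` (a vertex sequence) is a Hamiltonian cycle of G."""
    edge_set = {frozenset(e) for e in edges}
    # The certificate must visit every vertex exactly once ...
    if len(cycle) != len(vertices) or set(cycle) != set(vertices):
        return False
    # ... and consecutive vertices (cyclically) must be joined by edges of G.
    return all(
        frozenset((cycle[i], cycle[(i + 1) % len(cycle)])) in edge_set
        for i in range(len(cycle))
    )

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(is_hamiltonian_cycle(V, E, [1, 2, 3, 4]))  # True
print(is_hamiltonian_cycle(V, E, [1, 3, 2, 4]))  # False: {2,4} is not an edge
```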

NP-complete Problems
A decision problem P_1 can be polynomially transformed to a decision problem P_2 if there is an algorithm that transforms each instance I_1 of P_1, in running time polynomial in |I_1|, into an instance I_2 of P_2 such that I_2 is a yes-instance of P_2 if and only if I_1 is a yes-instance of P_1.
If P_1 can be polynomially transformed to P_2, then an efficient algorithm for P_2 also solves P_1 efficiently; P_2 is at least as difficult as P_1.
If P ∈ NP and each P̂ ∈ NP is polynomially transformable to P, then P is NP-complete.
If an NP-complete problem P can be polynomially transformed to a problem P̂ ∈ NP, then P̂ is NP-complete, too; so they are equally difficult.
There are voluminous collections of NP-complete problems; examples: integer optimization (in its decision version), integer multi-commodity flow, Hamiltonian Cycle, Minimum Vertex Cover on general graphs, Knapsack (for big numbers).
If there is an efficient algorithm for one NP-complete problem, all of them are solvable efficiently. For years the assumption has been: P ≠ NP. If one wants to solve all instances of such a problem, partial enumeration seems to be unavoidable. A problem is NP-hard if solving it would allow solving an NP-complete problem.
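A concrete polynomial transformation (the standard textbook reduction, not spelled out on the slide) takes Hamiltonian Cycle to the decision version of the TSP: give every edge of G distance 1 and every non-edge distance 2; then G has a Hamiltonian cycle if and only if the TSP instance has a round trip of length exactly n. The check below validates the transformation with a naive exponential tour search on two tiny graphs:

```python
# Sketch of a polynomial transformation: Hamiltonian Cycle -> TSP (decision).
# G contains a Hamiltonian cycle iff the constructed TSP instance admits a
# round trip of total length at most n.

from itertools import permutations

def transform(vertices, edges):
    """Build TSP distances: 1 for edges of G, 2 for non-edges."""
    edge_set = {frozenset(e) for e in edges}
    return {
        frozenset((u, v)): (1 if frozenset((u, v)) in edge_set else 2)
        for u in vertices for v in vertices if u != v
    }

def has_tour_of_length(vertices, dist, bound):
    """Naive exponential check, only used to validate the transformation."""
    first, rest = vertices[0], vertices[1:]
    for perm in permutations(rest):
        tour = (first,) + perm
        length = sum(dist[frozenset((tour[i], tour[(i + 1) % len(tour)]))]
                     for i in range(len(tour)))
        if length <= bound:
            return True
    return False

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]           # a 4-cycle: Hamiltonian
print(has_tour_of_length(V, transform(V, E), len(V)))   # True
E2 = [(1, 2), (2, 3), (3, 4), (1, 3)]          # vertex 4 has degree 1: no
print(has_tour_of_length(V, transform(V, E2), len(V)))  # False
```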

Contents
1 Integer Optimization
1.1 Bipartite Matching
1.2 Integral Polyhedra (and directed Graphs)
1.3 Application: Network Flows
1.4 Multi-Commodity Flow Problems
1.5 Integer and Combinatorial Optimization
1.6 Branch-and-Bound
1.7 Convex Sets, Convex Hull, Convex Functions
1.8 Relaxation
1.9 Application: Traveling Salesman Problem (TSP)
1.10 Finding Good Solutions, Heuristics
2 Mixed-Integer Optimization

1.6 Branch-and-Bound
In enumerating all solutions, as many as possible should be eliminated early on by upper and lower bounds.
Ex: {0,1}-knapsack: weights a ∈ N^n, capacity b ∈ N, profits c ∈ N^n,
max c^T x s.t. a^T x ≤ b, x ∈ {0,1}^n
upper bound: max c^T x s.t. a^T x ≤ b, x ∈ [0,1]^n [LP-relaxation]
lower bound: sort by profit/weight and fill in this sequence
Algorithmic scheme (for maximization problems):
M ... set of open problems, initially M = {original problem}
f* ... value of the best known solution, initially f* = −∞
1. If M = ∅ STOP, else choose P ∈ M, M ← M \ {P}.
2. Compute an upper bound f̄(P).
3. If f̄(P) < f* (P contains no OS), go to 1.
4. Compute a feasible solution of value f̂(P) for P (lower bound).
5. If f̂(P) > f* (new best solution), put f* ← f̂(P).
6. If f̄(P) = f̂(P) (no better solution in P), go to 1.
7. Split P into smaller subproblems P_1, ..., P_k, M ← M ∪ {P_1, ..., P_k}.
8. Go to 1.
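The scheme above, specialized to the {0,1}-knapsack with the LP-relaxation (greedy fractional fill) as upper bound and the greedy integer fill as lower bound, can be sketched as follows; variable names and the depth-first choice of P are illustrative:

```python
# Branch-and-bound sketch for max c'x s.t. a'x <= b, x in {0,1}^n,
# following steps 1-8 of the scheme above.

def knapsack_branch_and_bound(a, c, b):
    n = len(a)
    order = sorted(range(n), key=lambda i: c[i] / a[i], reverse=True)

    def bounds(fixed):
        base = sum(c[i] for i, v in fixed.items() if v == 1)
        cap0 = b - sum(a[i] for i, v in fixed.items() if v == 1)
        if cap0 < 0:                         # subproblem infeasible
            return float("-inf"), float("-inf"), None
        cap, lb = cap0, base                 # lower bound: greedy integer fill
        for i in order:
            if i not in fixed and a[i] <= cap:
                cap -= a[i]
                lb += c[i]
        cap, ub, frac = cap0, base, None     # upper bound: LP-relaxation
        for i in order:
            if i in fixed:
                continue
            if a[i] <= cap:
                cap -= a[i]
                ub += c[i]
            else:                            # break item enters fractionally
                ub += c[i] * cap / a[i]
                frac = i
                break
        return ub, lb, frac

    best = float("-inf")                     # f*, best known solution value
    open_problems = [dict()]                 # M, each element fixes some x_i
    while open_problems:                     # step 1
        fixed = open_problems.pop()
        ub, lb, frac = bounds(fixed)         # steps 2 and 4
        if ub < best:                        # step 3: P contains no OS
            continue
        best = max(best, lb)                 # step 5
        if frac is None or ub == lb:         # step 6: bound attained in P
            continue
        open_problems.append({**fixed, frac: 0})   # step 7: branch on the
        open_problems.append({**fixed, frac: 1})   # fractional item
    return best

print(knapsack_branch_and_bound([9, 7, 6, 4, 4, 3], [18, 6, 18, 7, 6, 5], 14))  # 31
```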

Example: {0,1}-Knapsack problem
Item        A   B   C   D   E   F   capacity
weight (a)  9   7   6   4   4   3   14
profit (c)  18  6   18  7   6   5
Sorted by profit/weight: C > A > D > F > E > B.
Upper bound: max c^T x s.t. a^T x ≤ 14, x ∈ [0,1]^6 [LP-relaxation]
Lower bound: sort by profit/weight and fill in this sequence.
Branch-and-bound tree:
P1: original problem; UB: C + (8/9)A = 34, LB: C + D + F = 30. Branch on x_A.
P2: x_A = 1 (and hence x_B = x_C = 0 by the capacity); UB: A + D + (1/3)F = 26 2/3 < 30 ⇒ contains no OS.
P3: x_A = 0; UB: C + D + F + (1/4)E = 31 1/2, LB: C + D + F = 30. Branch on x_E.
P4: x_A = 0, x_E = 1; UB = LB = E + C + D = 31 ⇒ new best solution, bound attained.
P5: x_A = x_E = 0; UB: C + D + F + (1/7)B = 30 6/7 < 31 ⇒ contains no OS.
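A brute-force check confirms the tree's result. (The instance data below is reconstructed from the bound computations on the slide, where the extracted digits are partly garbled.)

```python
# Brute-force verification of the example: the optimum is 31, attained by
# the item set {C, D, E}, matching subproblem P4 of the tree.

from itertools import combinations

items = "ABCDEF"
weight = dict(zip(items, [9, 7, 6, 4, 4, 3]))
profit = dict(zip(items, [18, 6, 18, 7, 6, 5]))
capacity = 14

best_value, best_items = max(
    (sum(profit[i] for i in sel), sel)
    for r in range(len(items) + 1)
    for sel in combinations(items, r)
    if sum(weight[i] for i in sel) <= capacity
)
print(best_value, best_items)  # 31 ('C', 'D', 'E')
```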

The branch-and-bound tree will get huge whenever many solutions are almost optimal. For successful branch-and-bound we need to answer: How can we obtain good upper and lower bounds?


1.7 Convex Sets and Convex Hull
A set C ⊆ R^n is convex if for all x, y ∈ C the straight line segment [x, y] := {αx + (1−α)y : α ∈ [0, 1]} lies in C.
Examples: ∅, R^n, halfspaces, polyhedra, the k-dim. unit simplex Δ_k := {α ∈ R^k_+ : Σ_{i=1}^k α_i = 1}; the intersection of convex sets is convex.
For S ⊆ R^n the convex hull is the intersection of all convex sets that contain S, conv S := ⋂{C convex : S ⊆ C}.
For given points x^(i) ∈ R^n, i ∈ {1, ..., k}, and α ∈ Δ_k, the point x = Σ_{i=1}^k α_i x^(i) is a convex combination of the x^(i).
conv S is the set of all convex combinations of finitely many points in S, conv S = {Σ_{i=1}^k α_i x^(i) : x^(i) ∈ S, i = 1, ..., k ∈ N, α ∈ Δ_k}.
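A small numerical illustration of convex combinations: a point p in a triangle conv{x1, x2, x3} can be written as p = α_1 x1 + α_2 x2 + α_3 x3 with α ∈ Δ_3 by solving a 2×2 linear system for the barycentric coordinates (plain Python, no libraries; the function name is illustrative):

```python
# Express p as a convex combination of the triangle vertices x1, x2, x3.

def barycentric(x1, x2, x3, p):
    """Coefficients (a1, a2, a3) with a1+a2+a3 = 1 and p = a1*x1+a2*x2+a3*x3."""
    (x1a, x1b), (x2a, x2b), (x3a, x3b) = x1, x2, x3
    det = (x2a - x1a) * (x3b - x1b) - (x3a - x1a) * (x2b - x1b)
    a2 = ((p[0] - x1a) * (x3b - x1b) - (x3a - x1a) * (p[1] - x1b)) / det
    a3 = ((x2a - x1a) * (p[1] - x1b) - (p[0] - x1a) * (x2b - x1b)) / det
    return (1 - a2 - a3, a2, a3)

x1, x2, x3 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
print(barycentric(x1, x2, x3, (0.25, 0.25)))  # (0.5, 0.25, 0.25)
```

All three coefficients are nonnegative and sum to 1, so the point lies in the convex hull of the vertices.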

Convex Hull and Integer Programming
Theorem. The convex hull of finitely many points is a (bounded) polyhedron.
The integer hull of a polyhedron P = {x ∈ R^n : Ax ≤ b} is the convex hull of the integer points in P, P_I := conv(P ∩ Z^n).
Theorem. If A ∈ Q^{m×n} and b ∈ Q^m, the integer hull P_I of the polyhedron P = {x ∈ R^n : Ax ≤ b} is itself a polyhedron.
Difficulty: the description of P_I is mostly unknown or excessive! In general P ⊇ P_I. [Exception e.g. for A totally unimodular and b ∈ Z^m; then P = P_I.]
If the integer hull can be given explicitly, the integer optimization problem can be solved by the simplex method:
Theorem. Suppose the integer hull of P = {x ∈ R^n : Ax ≤ b} is given by P_I = {x ∈ R^n : A_I x ≤ b_I}; then
sup{c^T x : A_I x ≤ b_I, x ∈ R^n} = sup{c^T x : Ax ≤ b, x ∈ Z^n},
Argmax{c^T x : A_I x ≤ b_I, x ∈ R^n} = conv Argmax{c^T x : Ax ≤ b, x ∈ Z^n}.

Convex Functions
A function f : R^n → R̄ := R ∪ {∞} is convex if f(αx + (1−α)y) ≤ αf(x) + (1−α)f(y) for all x, y ∈ R^n and α ∈ [0, 1]. f is strictly convex if f(αx + (1−α)y) < αf(x) + (1−α)f(y) for all x, y ∈ R^n, x ≠ y, and α ∈ (0, 1).
The epigraph of a function f : R^n → R̄ is the set epi f := {(x, r) : x ∈ R^n, r ≥ f(x)} [the points on and above the graph of f].
Theorem. A function is convex if and only if its epigraph is a convex set.
Ex.: for convex f_i : R^n → R̄, also f(x) := sup_i f_i(x), x ∈ R^n, is convex.
Theorem. Each local minimum of a convex function is also a global minimum, and for strictly convex functions it is unique (if it exists).
For convex functions there exist rather good optimization methods.
A function f is concave if −f is convex. (Each local maximum of a concave function is a global one.)
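The sup-of-convex-functions example can be spot-checked numerically: for f(x) = max_i (c_i x + d_i), a pointwise maximum of affine (hence convex) functions, the defining inequality f(αx + (1−α)y) ≤ αf(x) + (1−α)f(y) holds on every sampled triple (the slopes and intercepts below are illustrative):

```python
# Spot-check convexity of a pointwise maximum of affine functions on a grid.

pieces = [(-2.0, 0.0), (0.5, -1.0), (3.0, -4.0)]  # (slope, intercept) pairs

def f(x):
    return max(c * x + d for c, d in pieces)

xs = [i / 10 for i in range(-30, 31)]
alphas = [i / 10 for i in range(11)]
assert all(
    f(a * x + (1 - a) * y) <= a * f(x) + (1 - a) * f(y) + 1e-12
    for x in xs for y in xs for a in alphas
)
print("convexity inequality holds on all samples")
```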


1.8 Relaxation
Concept applicable to arbitrary optimization problems (here: maximization):
Definition. Given two optimization problems with X, W ⊆ R^n and f, f̄ : R^n → R̄,
(OP) max f(x) s.t. x ∈ X and (RP) max f̄(x) s.t. x ∈ W,
(RP) is a relaxation of (OP) if
1. X ⊆ W,
2. f̄(x) ≥ f(x) for all x ∈ X.
Let (RP) be a relaxation of (OP).
Observation.
1. v(RP) ≥ v(OP). [(RP) yields an upper bound]
2. If (RP) is infeasible, then so is (OP).
3. If x* is an OS of (RP) and x* ∈ X with f(x*) = f̄(x*), then x* is an OS of (OP).
Search for a suitably small W ⊇ X and f̄ ≥ f so that (RP) is efficiently solvable.
A relaxation (RP) of (OP) is called exact if v(OP) = v(RP).

Convex Relaxation
If convexity is required for W and (for max) concavity for f̄, this yields a convex relaxation. Mostly (but not always!) convexity ensures reasonable algorithmic solvability of the relaxation.
Ex.: for a combinatorial max problem with finite ground set Ω, feasible solutions F ⊆ 2^Ω and linear cost function c ∈ R^Ω,
max c^T x s.t. x ∈ conv{χ_Ω(F) : F ∈ F}
is an exact convex (even linear) relaxation, but it is useful only if the polyhedron conv{χ(F) : F ∈ F} can be described by a reasonable inequality system Ax ≤ b.
In global optimization, nonlinear functions are approximated from below by convex functions on domain subdivisions.
Ex.: Consider (OP) min f(x) := (1/2) x^T Q x + q^T x s.t. x ∈ [0,1]^n with f not convex, i.e., λ_min(Q) < 0. [λ_min ... minimal eigenvalue]
Q − λ_min(Q)I is positive semidefinite, and by x_i² ≤ x_i on [0,1]^n there holds
f̄(x) := (1/2) x^T (Q − λ_min(Q)I)x + (q + (1/2)λ_min(Q)𝟙)^T x ≤ f(x) for all x ∈ [0,1]^n. [𝟙 ... all-ones vector]
Thus, (RP) min f̄(x) s.t. x ∈ [0,1]^n is a convex relaxation of (OP).
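The underestimator from the last example can be verified numerically on a small indefinite instance (NumPy assumed available; the matrix Q and vector q are made up for illustration):

```python
# Check f_bar(x) <= f(x) on a grid over [0,1]^2 for the convex underestimator
#   f_bar(x) = 0.5 x'(Q - lam*I)x + (q + 0.5*lam*1)'x,  lam = min eigenvalue.

import itertools
import numpy as np

Q = np.array([[1.0, 3.0], [3.0, 1.0]])     # indefinite: eigenvalues 4 and -2
q = np.array([0.5, -1.0])
lam = np.linalg.eigvalsh(Q).min()          # approx. -2

def f(x):
    return 0.5 * x @ Q @ x + q @ x

def fbar(x):
    # q + 0.5*lam broadcasts the scalar, i.e. adds 0.5*lam times the ones vector
    return 0.5 * x @ (Q - lam * np.eye(2)) @ x + (q + 0.5 * lam) @ x

grid = [np.array(p) for p in itertools.product(np.linspace(0, 1, 21), repeat=2)]
assert all(fbar(x) <= f(x) + 1e-12 for x in grid)
print(round(lam, 6))  # -2.0
```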

LP-Relaxation for Integer Programs
For an integer program max c^T x s.t. Ax ≤ b, x ∈ Z^n, dropping the integrality constraints yields the LP-relaxation max c^T x s.t. Ax ≤ b, x ∈ R^n. It is a relaxation because X := {x ∈ Z^n : Ax ≤ b} ⊆ {x ∈ R^n : Ax ≤ b} =: W. [Basis of all standard solvers for mixed integer programs.]
Ex.: knapsack problem: n = 2, weights a = (6, 8)^T, capacity b = 10,
max c^T x s.t. a^T x ≤ b, x ∈ Z²_+ vs. max c^T x s.t. a^T x ≤ b, x ≥ 0.
[Figure: the feasible integer points; the LP-relaxation in green; the best relaxation is the convex hull of the integer points.]
Ex.: integer vs. fractional multi-commodity flow. [Figure: an instance with unit capacities where the integer problem is infeasible (X = ∅), while routing 0.5 units along each of two paths is feasible for the fractional relaxation W; the LP-relaxation is too large, one would need the convex hull.]
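For the two-variable knapsack example the gap between the integer optimum and the LP bound can be computed directly. The profit vector is not given on the slide, so c = (5, 7) below is a made-up illustration:

```python
# Compare the integer optimum with the LP-relaxation bound for
# max c'x s.t. a'x <= b, x >= 0, with a = (6, 8), b = 10.

from itertools import product

a, b, c = (6, 8), 10, (5, 7)

# integer problem: enumerate the (few) feasible points of X
v_ip = max(c[0] * x1 + c[1] * x2
           for x1, x2 in product(range(b // a[0] + 1), range(b // a[1] + 1))
           if a[0] * x1 + a[1] * x2 <= b)

# LP-relaxation: with nonnegative variables and a single constraint, the
# optimum puts all capacity on the best profit/weight ratio
v_lp = b * max(c[0] / a[0], c[1] / a[1])

print(v_ip, v_lp)  # 7 8.75
```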

Lagrangian Relaxation
[Applicable to constrained optimization in general; here only for inequality constraints.]
Inconvenient constraints are lifted into the cost function via a Lagrange multiplier λ ≥ 0 that penalizes violations (g : R^n → R^k):
(OP) max f(x) s.t. g(x) ≤ 0, x ∈ Ω   —(λ ≥ 0)→   (RP_λ) max f(x) − λ^T g(x) s.t. x ∈ Ω.
It is a relaxation: X := {x ∈ Ω : g(x) ≤ 0} ⊆ {x ∈ Ω} =: W, and for x ∈ X, λ ≥ 0 there holds f(x) ≤ f(x) − λ^T g(x) =: f̄(x) by g(x) ≤ 0.
Define the dual function φ(λ) := sup_{x∈Ω} [f(x) − g(x)^T λ] = v(RP_λ) [for each fixed x linear in λ].
- For each λ ≥ 0 there holds φ(λ) ≥ v(OP). [upper bound]
- φ(λ) is easy to compute if (RP_λ) is easy to solve.
- φ is convex, because it is a sup of linear functions in λ.
- The best bound is inf{φ(λ) : λ ≥ 0} [a convex problem!], well suited for convex optimization methods!
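The dual function can be illustrated on a tiny knapsack with hypothetical data: relax the constraint a^T x ≤ b of max c^T x over Ω = {0,1}³ and evaluate φ by enumerating Ω, confirming φ(λ) ≥ v(OP) for every sampled λ ≥ 0:

```python
# phi(lam) = max_{x in Omega} [ c'x - lam * (a'x - b) ]  >=  v(OP)  for lam >= 0.

from itertools import product

a, b, c = (3, 2, 2), 5, (8, 5, 3)
Omega = list(product([0, 1], repeat=3))

def phi(lam):
    return max(sum(ci * xi for ci, xi in zip(c, x))
               - lam * (sum(ai * xi for ai, xi in zip(a, x)) - b)
               for x in Omega)

v_op = max(sum(ci * xi for ci, xi in zip(c, x)) for x in Omega
           if sum(ai * xi for ai, xi in zip(a, x)) <= b)

print(v_op)                                           # 13
print(all(phi(l / 4) >= v_op for l in range(0, 21)))  # True
```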

Example: Integer Multi-Commodity Flow
Let A be the node-arc incidence matrix of D = (V, E), with 2 goods; relax the coupling capacity constraints by λ ≥ 0:
min c^(1)T x^(1) + c^(2)T x^(2)
s.t. Ax^(1) = b^(1), Ax^(2) = b^(2), x^(1) + x^(2) ≤ w, x^(1) ≤ w, x^(2) ≤ w, x^(1) ∈ Z^E_+, x^(2) ∈ Z^E_+
—(λ ≥ 0)→
min (c^(1)+λ)^T x^(1) + (c^(2)+λ)^T x^(2) − λ^T w
s.t. Ax^(1) = b^(1), Ax^(2) = b^(2), x^(1) ≤ w, x^(2) ≤ w, x^(1) ∈ Z^E_+, x^(2) ∈ Z^E_+.
The relaxation consists of two independent min-cost-flow problems
(RP_λ^(i)) min (c^(i)+λ)^T x^(i) s.t. Ax^(i) = b^(i), 0 ≤ x^(i) ≤ w, x^(i) ∈ Z^E_+, for i ∈ {1, 2}.
These can be solved integrally and efficiently!
If Lagrangian relaxation splits the problem into independent subproblems, this is sometimes called Lagrangian decomposition. Frequently this allows to solve much bigger problems efficiently.
Does this also yield better bounds?

Comparison of Lagrange- and LP-Relaxation
Given finite Ω ⊆ Z^n and D ∈ Q^{k×n}, d ∈ Q^k,
(OP) max c^T x s.t. Dx ≤ d, x ∈ Ω   —(λ ≥ 0)→   (RP_λ) max c^T x + λ^T(d − Dx) s.t. x ∈ Ω.
In the ex.: Ω = Ω^(1) × Ω^(2) with Ω^(i) = {x ∈ Z^E_+ : Ax = b^(i), x ≤ w}, i ∈ {1, 2}.
Theorem. inf_{λ≥0} v(RP_λ) = sup{c^T x : Dx ≤ d, x ∈ conv Ω}.
If conv Ω is identical to the feasible set of the LP-relaxation of Ω, the values of the best Lagrange relaxation and the LP-relaxation match!
In the ex. A is totally unimodular; thus for i ∈ {1, 2} and w ∈ Z^E,
conv{x ∈ Z^E_+ : Ax = b^(i), x ≤ w} = {x ≥ 0 : Ax = b^(i), x ≤ w}.
The best λ yields the value of the fractional multi-commodity-flow problem!
In general: let {x ∈ Z^n : Ax ≤ b} = Ω be a formulation of Ω. Only if {x ∈ R^n : Ax ≤ b} ⊋ conv Ω may Lagrange relaxation yield a better value than the LP-relaxation.


1.9 Application: Traveling Salesman Problem (TSP)
TSP: Given n cities with all pairwise distances, find a shortest round trip that visits each city exactly once.
Comb. opt.: D = (V, E = {(u, v) : u, v ∈ V, u ≠ v}) complete digraph, costs c ∈ R^E, feasible set F = {R ⊆ E : R (directed) cycle in D, |R| = n}. Find R ∈ Argmin{c(R) = Σ_{e∈R} c_e : R ∈ F}. An NP-complete problem (in its decision version).
Number of tours? (n−1)! — combinatorial explosion:
n = 3: 2
n = 4: 3·2 = 6
n = 5: 4·3·2 = 24
n = 10: 362880
n = 11: 3628800
n = 12: 39916800
n = 13: 479001600
n = 14: 6227020800
n = 100: ≈ 10^158
Typical problem for: delivery services (parcels, pharmacy transports), taxi services, breakdown services, scheduling with setup costs [e.g. painting cars], steering robots (mostly nonlinear) [frequently with several vehicles and time windows].
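Brute-force enumeration makes the (n−1)! count tangible: fixing the start city, every tour is a permutation of the remaining cities. The distance matrix below is a small made-up symmetric instance:

```python
# Brute-force TSP: enumerate all (n-1)! tours with a fixed start city.

from itertools import permutations
from math import factorial

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
n = len(dist)

best_len, best_tour, count = float("inf"), None, 0
for perm in permutations(range(1, n)):          # fix city 0 as the start
    tour = (0,) + perm
    count += 1
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    if length < best_len:
        best_len, best_tour = length, tour

print(count == factorial(n - 1))  # True: (4-1)! = 6 tours enumerated
print(best_len, best_tour)        # 18 (0, 1, 3, 2)
```

Already for n = 14 this loop would visit more than six billion tours, which is why the bounding techniques of the previous sections matter.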

Drilling holes into circuit boards [http://www.math.princeton.edu/tsp]

With online aspects: breakdown services. Find an assignment of cars and a sequence for each repairman so that promised waiting periods are not exceeded.

Scheduling for long setup times
Two rotational printing machines for printing wrapping paper. [Figure: initial and final states of machines M1 and M2 (Anfangszustand/Endzustand), register (Passer): 25, color adaptation (Farbanpassung), new colors: 20.]