Discrete Optimization in a Nutshell
- John Everett Wood
- 5 years ago
1 Discrete Optimization in a Nutshell: Integer Optimization. Christoph Helmberg
2 Contents: Integer Optimization
3 Contents: Integer Optimization
1.1 Bipartite Matching
1.2 Integral Polyhedra (and directed Graphs)
1.3 Application: Network Flows
1.4 Multi-Commodity Flow Problems
1.5 Integer and Combinatorial Optimization
1.6 Branch-and-Bound
1.7 Convex Sets, Convex Hull, Convex Functions
1.8 Relaxation
1.9 Application: Traveling Salesman Problem (TSP)
1.10 Finding Good Solutions, Heuristics
2 Mixed-Integer Optimization
4 1.1 Application: The Marriage Problem (Bipartite Matching). Find a maximum number of pairs! Examples: men and women, workers and machines, students and positions, ... (also weighted versions). [Figures on slides 4-6 show first a maximal matching, which cannot be increased but is not of maximum cardinality, and then a maximum cardinality matching, here even perfect.]
7 Bipartite Matching. An (undirected) graph G = (V, E) is a pair consisting of a node/vertex set V and an edge set E ⊆ {{u, v} : u, v ∈ V, u ≠ v}. Two nodes u, v ∈ V are adjacent/neighbors if {u, v} ∈ E. A node v ∈ V and an edge e ∈ E are incident if v ∈ e. Two edges e, f ∈ E are incident if e ∩ f ≠ ∅. An edge set M ⊆ E is a matching/1-factor if for e, f ∈ M with e ≠ f there holds e ∩ f = ∅. The matching is perfect if |V| = 2|M|. G = (V, E) is bipartite if V = V₁ ∪ V₂ with V₁ ∩ V₂ = ∅ and E ⊆ {{u, v} : u ∈ V₁, v ∈ V₂}.
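The bipartiteness condition above can be tested by 2-coloring the nodes with a breadth-first search; a minimal sketch in Python (the graph data and helper names are my own, not from the slides):

```python
from collections import deque

def bipartition(V, E):
    """Try to split V into V1, V2 so that every edge of E joins V1 and V2.
    Returns (V1, V2) if G = (V, E) is bipartite, else None."""
    adj = {v: [] for v in V}
    for u, v in E:
        adj[u].append(v)
        adj[v].append(u)
    color = {}
    for start in V:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return None  # odd cycle found: not bipartite
    V1 = {v for v in V if color[v] == 0}
    V2 = {v for v in V if color[v] == 1}
    return V1, V2

# A 4-cycle is bipartite, a triangle is not.
print(bipartition({1, 2, 3, 4}, [(1, 2), (2, 3), (3, 4), (4, 1)]))
print(bipartition({1, 2, 3}, [(1, 2), (2, 3), (1, 3)]))
```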
10 Model: Maximum Cardinality Bipartite Matching
given: G = (V₁ ∪ V₂, E) bipartite
find: matching M ⊆ E with |M| maximal
variables: x ∈ {0, 1}^E with x_e = 1 if e ∈ M and x_e = 0 otherwise (e ∈ E); x is the incidence/characteristic vector of M w.r.t. E
constraints: Ax ≤ 1, where A ∈ {0, 1}^{V×E} is the node-edge incidence matrix of G: A_{v,e} = 1 if v ∈ e and 0 otherwise (v ∈ V, e ∈ E)
optimization problem: max 1ᵀx s.t. Ax ≤ 1, x ∈ {0, 1}^E
This is no LP! Enlarging x ∈ {0, 1}^E to x ∈ [0, 1]^E gives an LP. For G bipartite, Simplex always delivers an optimal solution x ∈ {0, 1}^E! (This does not hold in general for arbitrary graphs G!)
[figure: example graph with nodes A-E, edges a-d, and its node-edge incidence matrix]
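As a sanity check, the LP relaxation can be solved with an off-the-shelf solver, and a vertex solution then comes out integral, as stated above. A minimal sketch assuming SciPy is available (the tiny example graph is my own):

```python
import numpy as np
from scipy.optimize import linprog

# Bipartite graph: V1 = {0, 1}, V2 = {2, 3}, edges indexed 0..2.
edges = [(0, 2), (0, 3), (1, 2)]
n_nodes, n_edges = 4, len(edges)

# Node-edge incidence matrix A: A[v, e] = 1 if v is an endpoint of edge e.
A = np.zeros((n_nodes, n_edges))
for j, (u, v) in enumerate(edges):
    A[u, j] = A[v, j] = 1

# LP relaxation: max 1^T x  s.t.  Ax <= 1, 0 <= x <= 1
# (linprog minimizes, so negate the objective).
res = linprog(c=-np.ones(n_edges), A_ub=A, b_ub=np.ones(n_nodes),
              bounds=(0, 1), method="highs")

print("matching value:", -res.fun)
print("integral:", all(abs(v - round(v)) < 1e-9 for v in res.x))
```

Here the optimal value 2 is attained by the integral vertex selecting edges (0, 3) and (1, 2).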
14 1.2 Integral Polyhedra. Simplex automatically yields an integral solution if all vertices of the feasible set are integral. min cᵀx s.t. x ∈ X := {x ≥ 0 : Ax = b}. Are all vertices of X integral? Almost never! But there is an important class of matrices A ∈ Z^{m×n} for which X has only integral vertices for any (!) b ∈ Z^m: a vertex is an integral basic solution x_B = A_B⁻¹b ∈ Z^m. If |det(A_B)| = 1, Cramer's rule implies x_B ∈ Z^m. A matrix A ∈ Z^{m×n} of full row rank is unimodular if |det(A_B)| = 1 holds for each basis B. Theorem: A ∈ Z^{m×n} is unimodular if and only if for each b ∈ Z^m all vertices of the polyhedron X := {x ≥ 0 : Ax = b} are integral. Does this also hold for X := {x ≥ 0 : Ax ≤ b}?
17 Totally Unimodular Matrices. Adding slack variables, {x ≥ 0 : Ax ≤ b} corresponds to {(x, s) ≥ 0 : [A, I](x; s) = b}. This is certainly integral if Ā = [A, I] is unimodular. Laplace expansion of the determinant for each basis B motivates: a matrix A is totally unimodular if for each square submatrix of A the determinant has value 0, +1 or −1. (This requires A ∈ {0, +1, −1}^{m×n}.) Theorem (Hoffman and Kruskal): A ∈ Z^{m×n} is totally unimodular if and only if for each b ∈ Z^m all vertices of the polyhedron X := {x ≥ 0 : Ax ≤ b} are integral. Note: A tot. unimod. ⇒ Aᵀ resp. [A, −A, I, −I] tot. unimod. Consequence: the dual LP, variants with equality constraints, etc. are integral.
18 Recognizing Totally Unimodular Matrices. Theorem (Heller and Tompkins): Let A ∈ {0, +1, −1}^{m×n} have at most two nonzero entries per column. A is totally unimodular ⇔ the rows of A can be partitioned into two classes so that (i) rows with one +1 and one −1 entry in the same column belong to the same class, and (ii) rows with two nonzero entries of equal sign in the same column belong to distinct classes. Example 1: the node-edge incidence matrix of a bipartite graph. [figure: bipartite graph with nodes A-E, edges a-d, and its node-edge incidence matrix]
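For small matrices, total unimodularity can also be checked directly from the definition by enumerating all square submatrices; a brute-force sketch (exponential running time, for illustration only):

```python
from itertools import combinations

def det(M):
    """Integer determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def totally_unimodular(A):
    """True iff every square submatrix of A has determinant 0, +1 or -1."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[A[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# Node-edge incidence matrix of a bipartite 4-cycle: totally unimodular.
C4 = [[1, 0, 0, 1],
      [1, 1, 0, 0],
      [0, 1, 1, 0],
      [0, 0, 1, 1]]
# Node-edge incidence matrix of a triangle: has a submatrix of determinant 2.
K3 = [[1, 0, 1],
      [1, 1, 0],
      [0, 1, 1]]
print(totally_unimodular(C4), totally_unimodular(K3))  # True False
```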
19 Example 1: Bipartite Graphs. A ... node-edge incidence matrix of G = (V₁ ∪ V₂, E) bipartite. Bipartite Matching of Maximum Cardinality: max 1ᵀx s.t. Ax ≤ 1, x ≥ 0. Because A is tot. unimod., the vertices of the feasible set are all integral ⇒ Simplex delivers an optimal solution x ∈ {0, 1}^E. The dual is also integral, because Aᵀ is tot. unimod.: min 1ᵀy s.t. Aᵀy ≥ 1, y ≥ 0. Interpretation: y ∈ {0, 1}^V is the incidence vector of a smallest node set V′ ⊆ V so that e ∩ V′ ≠ ∅ for all e ∈ E (Minimum Vertex Cover). The Assignment Problem: |V₁| = |V₂| = n, complete bipartite: E = {{u, v} : u ∈ V₁, v ∈ V₂}; edge weights c ∈ R^E. Find a perfect matching of minimum total weight: min cᵀx s.t. Ax = 1, x ≥ 0. This is integral, too, because [A; −A] is tot. unimod.
22 Example 2: Node-Arc Incidence Matrix of Digraphs. A digraph/directed graph D = (V, E) is a pair consisting of a node set V and a (multi-)set of directed edges/arcs E ⊆ {(u, v) : u, v ∈ V, u ≠ v}. [Multiple arcs are allowed!] For e = (u, v) ∈ E, u is the tail and v the head of e. The node-arc incidence matrix A ∈ {0, +1, −1}^{V×E} of D has entries A_{v,e} = +1 if v is the tail of e, −1 if v is the head of e, and 0 otherwise (v ∈ V, e ∈ E). The node-arc incidence matrix of a digraph is totally unimodular. [figure: digraph with nodes A-D, arcs a-f, and its node-arc incidence matrix]
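Building the node-arc incidence matrix from an arc list is mechanical; a short sketch using the tail = +1, head = −1 convention from above (the example digraph is my own):

```python
def node_arc_incidence(nodes, arcs):
    """A[v][e] = +1 if v is the tail of arc e, -1 if v is its head, else 0."""
    index = {v: i for i, v in enumerate(nodes)}
    A = [[0] * len(arcs) for _ in nodes]
    for j, (u, v) in enumerate(arcs):
        A[index[u]][j] = +1   # tail
        A[index[v]][j] = -1   # head
    return A

nodes = ["A", "B", "C", "D"]
arcs = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D")]
A = node_arc_incidence(nodes, arcs)
# Every column has exactly one +1 and one -1, so each column sums to 0.
assert all(sum(col) == 0 for col in zip(*A))
```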
25 1.3 Application: Network Flows. Modelling tool: transport problems, evacuation planning, scheduling, (internet) traffic planning, ... A network (D, w) consists of a digraph D = (V, E) and (arc-)capacities w ∈ R^E₊. A vector x ∈ R^E is a flow on (D, w) if it satisfies the flow conservation constraints Σ_{e=(u,v)∈E} x_e = Σ_{e=(v,u)∈E} x_e (v ∈ V) [i.e. Ax = 0 for the node-arc incidence matrix A]. A flow x ∈ R^E on (D, w) is feasible if 0 ≤ x ≤ w. [Lower bounds would also be possible.] [figures: a flow and a feasible flow]
26 Maximal s-t-Flows, Minimal s-t-Cuts. Given a source s ∈ V and a sink t ∈ V with (t, s) ∈ E, find a feasible flow x ∈ R^E on (D, w) with maximum flow value x_{(t,s)}. LP: max x_{(t,s)} s.t. Ax = 0, 0 ≤ x ≤ w. If w ∈ Z^E, the optimal solution of Simplex is x ∈ Z^E, because [A; −A; I] is tot. unimod. Each S ⊆ V with s ∈ S and t ∉ S defines an s-t-cut δ⁺(S) := {(u, v) ∈ E : u ∈ S, v ∉ S}; the out-flow is at most w(δ⁺(S)) := Σ_{e∈δ⁺(S)} w_e, the value of the cut. [Figures on slides 26-29 show an example network and three cuts: S = {s} with δ⁺(S) = {(s, B), (s, C)} of value 9; S = {s, B, C} with δ⁺(S) = {(B, t), (C, t)} of value 7 + 1 = 8; and S = {s, C} with δ⁺(S) = {(s, B), (C, B), (C, t)} of value 7, which equals the maximum flow value x_{(t,s)}.] Theorem (Max-Flow Min-Cut Theorem of Ford and Fulkerson): The maximum s-t-flow value equals the minimum s-t-cut value. [Both computable by Simplex.]
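Besides Simplex, max-flow has fast combinatorial algorithms; below a minimal Edmonds-Karp sketch (BFS augmenting paths, an addition of mine, not from the slides) on a small network of my own, where the maximum flow value 5 equals the minimum cut value:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along a shortest residual s-t path.
    cap: dict {(u, v): capacity}; returns the maximum flow value."""
    res = dict(cap)                      # residual capacities
    for (u, v) in cap:
        res.setdefault((v, u), 0)        # reverse arcs start at 0
    adj = {}
    for (u, v) in res:
        adj.setdefault(u, []).append(v)
    value = 0
    while True:
        # BFS for an augmenting path in the residual network.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value
        # Bottleneck along the path, then update residual capacities.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= delta
            res[(v, u)] += delta
        value += delta

cap = {("s", "a"): 3, ("s", "b"): 2, ("a", "b"): 1,
       ("a", "t"): 2, ("b", "t"): 3}
print(max_flow(cap, "s", "t"))  # 5
```

The minimum cut here is {(a, t), (b, t)} with value 2 + 3 = 5, confirming the Ford-Fulkerson theorem on this instance.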
30 Minimum Cost Flow (Min-Cost-Flow). The flow value is now prescribed by balances b ∈ R^V (1ᵀb = 0) on the nodes; each unit of flow induces arc costs c ∈ R^E. Find the cheapest flow. LP: min cᵀx s.t. Ax = b, 0 ≤ x ≤ w. For b, c and w integral, Simplex gives an optimal solution x ∈ Z^E, as [A; −A; I] is tot. unimod. [Also works with lower bounds on arcs: u ≤ x ≤ w!] For LPs min cᵀx s.t. Ax = b, u ≤ x ≤ w with A a node-arc incidence matrix there is a particularly efficient simplex variant, the network simplex; it only needs additions, subtractions and comparisons! Extremely broad scope of applications, popular modelling tool. [figure: small min-cost-flow instance with balances, costs and capacities]
31 Example: Transportation Problem. A company with several production sites has to serve several customers. How is this best done in view of transportation costs? Note: only one product!
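A transportation problem is a min-cost flow on a bipartite network of sites and customers. A sketch with SciPy's linprog, assuming SciPy is available (supplies, demands and costs are made-up illustration data, not from the slides):

```python
import numpy as np
from scipy.optimize import linprog

supply = [3, 2]          # production sites
demand = [2, 3]          # customers (sum(supply) == sum(demand))
cost = [[4, 6],          # cost[i][j]: site i -> customer j
        [5, 3]]

m, n = len(supply), len(demand)
c = np.array(cost, dtype=float).ravel()   # variables x[i, j], row-major

# Equality constraints: each site ships its supply, each customer
# receives its demand (node-arc structure, hence totally unimodular).
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1
for j in range(n):
    A_eq[m + j, j::n] = 1
b_eq = np.array(supply + demand, dtype=float)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.fun)           # optimal cost; res.x comes out integral
```

With integral data the vertex solution is integral, as the total unimodularity argument above predicts.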
32 Example: Evacuation Planning. Determine an escape route for each room, so that the building is cleared as fast as possible. Per aisle, capacity and crossing times are known. [Figures on slides 32-39 build the model step by step: arcs with capacities and crossing times; geometry is not important, simplify; discretize time, one level per time step; crossing times connect the levels, the capacities stay; rising costs on the exit arcs encourage fast exits.] The approach is only ok if persons do not need to be discerned!
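The time-expansion step described above (one copy of every node per time step, travel arcs connecting levels according to crossing times) can be sketched as follows; the node names, the waiting arcs and the small arc list are my own illustration:

```python
def time_expand(arcs, T):
    """arcs: dict {(u, v): (capacity, crossing_time)}.
    Returns capacities of the time-expanded digraph on nodes (v, t),
    t = 0..T: a travel arc ((u, t), (v, t + tau)) per original arc,
    plus waiting arcs ((v, t), (v, t + 1)) that let people stay put."""
    expanded = {}
    nodes = {u for (u, v) in arcs} | {v for (u, v) in arcs}
    for (u, v), (cap, tau) in arcs.items():
        for t in range(0, T - tau + 1):
            expanded[((u, t), (v, t + tau))] = cap
    for v in nodes:
        for t in range(T):
            expanded[((v, t), (v, t + 1))] = float("inf")
    return expanded

arcs = {("room", "aisle"): (2, 1), ("aisle", "exit"): (1, 2)}
ex = time_expand(arcs, T=3)
# e.g. ("room", 0) -> ("aisle", 1) keeps the aisle capacity 2
```

A min-cost flow on this expanded network (with rising exit costs per level) then yields the evacuation plan.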
41 1.4 Multi-Commodity Flow Problems. Given a network (D, w) and several different commodities K = {1, ..., k} with sources/sinks (s_i, t_i) and flow values f_i, i ∈ K, find feasible flows x⁽ⁱ⁾ ∈ R^E so that in sum the capacities are observed. Mixing is forbidden! [Figures on slides 41-45 show a two-commodity instance where integral routing is impossible but fractional routing works; the implementation uses one copy of the network per commodity.] LP (for k = 2): min c⁽¹⁾ᵀx⁽¹⁾ + c⁽²⁾ᵀx⁽²⁾ s.t. Ax⁽¹⁾ = b⁽¹⁾, Ax⁽²⁾ = b⁽²⁾, Ix⁽¹⁾ + Ix⁽²⁾ ≤ w, x⁽¹⁾ ≥ 0, x⁽²⁾ ≥ 0. The constraint matrix [A, 0; 0, A; I, I] is in general not tot. unimod.! The problem is fractionally solvable; the integral version is VERY difficult!
49 Example: Logistics. Cover demand by shifting pallets by trucks between warehouses A, B, C; one pallet graph per article. [figure: time-expanded pallet graphs over levels t = 0, 1, 2, 3 and t = 0.5, 1.5, 2.5, 3.5]
50 Further Application Areas. Capacity planning (fractional) and time-discretized routing and scheduling (integral) for: street traffic, trains, internet, logistics (bottleneck analysis/steering), production (machine loads/assignment). Network design: installed capacities should satisfy as many demands as possible, also in case of failures [robust variants are extremely difficult!]. Multi-commodity flow is often used as a basic model that is combined with further constraints.
53 1.5 Integer Optimization (Integer Programming). Mainly: linear programs with exclusively integer variables (otherwise mixed-integer programming): max cᵀx s.t. Ax ≤ b, x ∈ Z^n. Typically contains many binary variables ({0, 1}) for yes/no decisions. Difficulty: in general not solvable efficiently, complexity class NP; exact solutions rely heavily on enumeration (systematic exploration). Exact solutions by combining the following techniques: (upper) bounds by linear/convex relaxation, improved by cutting plane approaches; feasible solutions (lower bounds) by rounding and local search heuristics; enumeration by branch-and-bound, branch-and-cut.
54 Combinatorial Optimization. Mathematically: given a finite ground set Ω, a set of feasible subsets F ⊆ 2^Ω [power set, set of all subsets] and a goal function c : F → Q, determine max{c(F) : F ∈ F} or F* ∈ Argmax{c(F) : F ∈ F}. Mostly only linear c: c(F) := Σ_{e∈F} c_e with c ∈ Q^Ω. Ex1: maximum cardinality matching for G = (V, E): Ω = E, F = {M ⊆ E : M matching in G}, c = 1. Ex2: minimum vertex cover for G = (V, E): Ω = V, F = {V′ ⊆ V : e ∩ V′ ≠ ∅ for e ∈ E}, c = 1. Formulation as a binary program. Notation: incidence/characteristic vector χ_Ω(F) ∈ {0, 1}^Ω for F ⊆ Ω [in short χ(F); it satisfies [χ(F)]_e = 1 ⇔ e ∈ F]. A linear program max{cᵀx : Ax ≤ b, x ∈ [0, 1]^Ω} is a formulation of the combinatorial optimization problem if {x ∈ {0, 1}^Ω : Ax ≤ b} = {χ(F) : F ∈ F}.
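On a tiny instance, the formulation condition {x ∈ {0, 1}^Ω : Ax ≤ b} = {χ(F) : F ∈ F} can be verified literally by enumerating all binary vectors; a brute-force sketch for the matching formulation (the triangle example graph is my own):

```python
from itertools import combinations, product

# Triangle graph: Omega = edges, F = matchings.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]

def is_matching(F):
    """F is a matching iff its edges are pairwise node-disjoint."""
    nodes = [v for e in F for v in e]
    return len(nodes) == len(set(nodes))

# Right-hand side: incidence vectors chi(F) of all matchings F.
chis = {tuple(int(e in F) for e in E)
        for r in range(len(E) + 1)
        for F in map(set, combinations(E, r))
        if is_matching(F)}

# Left-hand side: binary x with Ax <= 1 (at most one chosen edge per node).
feasible = {x for x in product((0, 1), repeat=len(E))
            if all(sum(x[j] for j, e in enumerate(E) if v in e) <= 1
                   for v in V)}

assert feasible == chis   # the LP is a formulation of the matching problem
print(sorted(feasible))
```

On the triangle, no two edges are disjoint, so both sides consist of the zero vector and the three unit vectors.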
58 (Algorithmic) Complexity of Problems. An instance I of a problem is a concrete assignment of values to the problem data; its size |I| is the encoding length, i.e. the number of symbols in a description string according to a reasonable encoding scheme. The runtime of an algorithm for one instance is the number of elementary operations executed (read/write a symbol, add/multiply/compare bytes, etc.). An algorithm solves a problem in polynomial time or efficiently if it computes the correct answer within a runtime that is bounded by a polynomial in the encoding length. A problem is polynomially/efficiently solvable if there is an algorithm that solves it efficiently. The class P comprises all problems that are solvable efficiently. Ex1: linear optimization is in P, BUT Simplex is not polynomial. Ex2: maximum cardinality matching in general graphs is in P. Ex3: minimum vertex cover in bipartite graphs is in P.
63 Decision Problems and the Class NP. In a decision problem each instance is a question that must be answered by either yes or no. A decision problem is solvable in nondeterministic polynomial time if for each yes-instance I a solution string (the certificate) exists that allows to check correctness of the yes-answer in time polynomially bounded in |I|. [Only yes is relevant!] The class NP comprises all decision problems that are solvable in nondeterministic polynomial time. There holds: P ⊆ NP. Ex: Hamiltonian Cycle: in a graph G = (V, E) an edge set C = {{v₁, v₂}, {v₂, v₃}, ..., {v_{k−1}, v_k}, {v_k, v₁}} ⊆ E is a cycle (of length k) if v_i ≠ v_j for i ≠ j. A cycle C is Hamiltonian if |C| = |V|. Decision problem: Does G contain a Hamiltonian cycle? A yes-answer can be checked efficiently, so the problem is in NP.
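The certificate for a yes-instance of Hamiltonian Cycle is simply an ordering of the vertices, and checking it takes polynomial time. A small verifier sketch (graph data and function name are my own):

```python
def check_hamiltonian_certificate(V, E, order):
    """Check in polynomial time that `order` encodes a Hamiltonian cycle:
    every vertex appears exactly once and consecutive vertices
    (cyclically) are joined by edges of E."""
    if sorted(order) != sorted(V):
        return False
    edges = {frozenset(e) for e in E}
    k = len(order)
    return all(frozenset((order[i], order[(i + 1) % k])) in edges
               for i in range(k))

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(check_hamiltonian_certificate(V, E, [1, 2, 3, 4]))  # True
print(check_hamiltonian_certificate(V, E, [1, 3, 2, 4]))  # False: {2,4} missing
```

Note that only the yes-answer is verified this way; no comparably short certificate is known for no-instances.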
69 NP-complete Problems. A decision problem P₁ can be polynomially transformed to a decision problem P₂ if there is an algorithm that transforms each instance I₁ of P₁ in running time polynomial in |I₁| into an instance I₂ of P₂ so that I₂ is a yes-instance of P₂ if and only if I₁ is a yes-instance of P₁. If P₁ can be polynomially transformed to P₂, then an efficient algorithm for P₂ also solves P₁ efficiently; P₂ is at least as difficult as P₁. If P ∈ NP and each P̂ ∈ NP is polynomially transformable to P, then P is NP-complete. If an NP-complete problem P can be polynomially transformed to a problem P̂ ∈ NP, then P̂ is NP-complete, too; so they are equally difficult. There are voluminous collections of NP-complete problems; examples: integer optimization (in its decision version), integer multi-commodity flow, Hamiltonian Cycle, Minimum Vertex Cover on general graphs, Knapsack (for big numbers). If there is an efficient algorithm for one NP-complete problem, all are solvable efficiently. For years the assumption has been: P ≠ NP. If one wants to solve all instances of a problem, partial enumeration seems to be unavoidable. A problem is NP-hard if solving it would allow to solve an NP-complete problem.
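A classic polynomial transformation in the sense above maps Hamiltonian Cycle to the TSP decision problem: give every edge of G weight 1 and every non-edge weight 2; then G has a Hamiltonian cycle iff the complete graph has a tour of total weight at most |V|. A sketch of the instance transformation (the transformation is standard; names are my own):

```python
from itertools import combinations

def hc_to_tsp(V, E):
    """Transform a Hamiltonian Cycle instance (V, E) into a TSP decision
    instance (weights on the complete graph plus a budget) so that the
    answer is 'yes' for both instances or for neither."""
    edges = {frozenset(e) for e in E}
    weights = {frozenset((u, v)): (1 if frozenset((u, v)) in edges else 2)
               for u, v in combinations(V, 2)}
    budget = len(V)   # a weight-|V| tour must use original edges only
    return weights, budget

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]
weights, budget = hc_to_tsp(V, E)
assert budget == 4 and weights[frozenset((1, 2))] == 1
assert weights[frozenset((1, 3))] == 2   # non-edge gets weight 2
```

The construction runs in time polynomial in the input size, so an efficient TSP algorithm would solve Hamiltonian Cycle efficiently.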
Contents

1 Integer Optimization
1.1 Bipartite Matching
1.2 Integral Polyhedra (and directed Graphs)
1.3 Application: Network Flows
1.4 Multi-Commodity Flow Problems
1.5 Integer and Combinatorial Optimization
1.6 Branch-and-Bound
1.7 Convex Sets, Convex Hull, Convex Functions
1.8 Relaxation
1.9 Application: Traveling Salesman Problem (TSP)
1.10 Finding Good Solutions, Heuristics
1.11 Mixed-Integer Optimization
1.6 Branch-and-Bound

In enumerating all solutions, as many as possible should be eliminated early on by upper and lower bounds.

Ex.: {0,1}-knapsack: weights a ∈ N^n, capacity b ∈ N, profits c ∈ N^n,
max c^T x s.t. a^T x ≤ b, x ∈ {0,1}^n
upper bound: max c^T x s.t. a^T x ≤ b, x ∈ [0,1]^n [LP-relaxation]
lower bound: sort by profit/weight and fill in this sequence

Algorithmic scheme (for maximization problems):
M ... set of open problems, initially M = {original problem}
f* ... value of best known solution, initially f* = −∞
1. If M = ∅, STOP; else choose P ∈ M, M ← M \ {P}.
2. Compute an upper bound f̄(P).
3. If f̄(P) < f* (P contains no optimal solution), go to 1.
4. Compute a feasible solution of value f̂(P) for P (lower bound).
5. If f̂(P) > f* (new best solution), put f* ← f̂(P).
6. If f̄(P) = f̂(P) (no better solution in P), go to 1.
7. Split P into smaller subproblems P_1, ..., P_k; M ← M ∪ {P_1, ..., P_k}.
8. Go to 1.
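The scheme above can be sketched in code for the {0,1}-knapsack, with the LP-relaxation as upper bound and the value of the current partial assignment as feasible lower bound. This is a minimal illustrative sketch, not the slides' implementation; the function name is made up.

```python
def knapsack_bb(weights, profits, capacity):
    """Branch-and-bound for the {0,1}-knapsack (maximization): LP-relaxation
    upper bound, value of the current partial assignment as lower bound."""
    # sort items by profit/weight so the greedy bounds are as tight as possible
    idx = sorted(range(len(weights)), key=lambda i: profits[i] / weights[i], reverse=True)
    w = [weights[i] for i in idx]
    p = [profits[i] for i in idx]
    n = len(w)
    best = 0  # f*: value of best known solution (x = 0 is always feasible)

    def upper(k, cap):
        # value of the LP relaxation over items k..n-1: greedy fill,
        # the first item that does not fit is taken fractionally
        total, c = 0.0, cap
        for i in range(k, n):
            if w[i] <= c:
                c -= w[i]
                total += p[i]
            else:
                total += p[i] * c / w[i]
                break
        return total

    def visit(k, cap, profit):
        nonlocal best
        best = max(best, profit)  # the partial choice itself is feasible
        if k == n or profit + upper(k, cap) <= best:
            return  # bound: this subproblem contains no better solution
        if w[k] <= cap:
            visit(k + 1, cap - w[k], profit + p[k])  # branch x_k = 1
        visit(k + 1, cap, profit)  # branch x_k = 0

    visit(0, capacity, 0)
    return best
```

The pruning in `visit` is exactly step 3 of the scheme; updating `best` with the partial assignment's value plays the role of steps 4 and 5.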
Example: {0,1}-Knapsack Problem

Items A, B, C, D, E, F with weights a and profits c; sorted by profit/weight: C > A > D > F > E > B.
upper bound: max c^T x s.t. a^T x ≤ 14, x ∈ [0,1]^6 [LP-relaxation]
lower bound: sort by profit/weight and fill in this sequence

Branch-and-bound tree:
P_1: original problem. UB: C plus a fraction of A = 34, LB: C + D + F = 30. Branch on x_A:
- P_2 (x_A = 1, forcing x_B = x_C = 0): UB: A + D + (1/3)F < 30, so P_2 contains no optimal solution.
- P_3 (x_A = 0): UB: C + D + F + (1/4)E = 31 1/2, LB: C + D + F = 30. Branch on x_E:
  - P_4 (x_A = 0, x_E = 1): UB: E + C + D = 31 = LB, so the best value within P_4 is 31.
  - P_5 (x_A = x_E = 0): UB: C + D + F + (1/7)B < 31, so P_5 contains no optimal solution.

All subproblems are settled: the optimal value is 31, attained by the items C, D, E.
The branch-and-bound tree will get huge whenever many solutions are almost optimal. For successful branch-and-bound we need to answer: how can we obtain good upper and lower bounds?
Contents

1 Integer Optimization
1.1 Bipartite Matching
1.2 Integral Polyhedra (and directed Graphs)
1.3 Application: Network Flows
1.4 Multi-Commodity Flow Problems
1.5 Integer and Combinatorial Optimization
1.6 Branch-and-Bound
1.7 Convex Sets, Convex Hull, Convex Functions
1.8 Relaxation
1.9 Application: Traveling Salesman Problem (TSP)
1.10 Finding Good Solutions, Heuristics
1.11 Mixed-Integer Optimization
1.7 Convex Sets and Convex Hull

A set C ⊆ R^n is convex if for all x, y ∈ C the straight line segment [x, y] := {αx + (1−α)y : α ∈ [0,1]} lies in C.

Examples: ∅, R^n, halfspaces, polyhedra; the intersection of convex sets is convex; the k-dimensional unit simplex Δ_k := {α ∈ R^k_+ : Σ_{i=1}^k α_i = 1}.

For S ⊆ R^n the convex hull is the intersection of all convex sets that contain S, conv S := ∩{C convex : S ⊆ C}.

For given points x^(i) ∈ R^n, i ∈ {1, ..., k}, and α ∈ Δ_k, the point x = Σ_{i=1}^k α_i x^(i) is a convex combination of the x^(i).

conv S is the set of all convex combinations of finitely many points in S:
conv S = {Σ_{i=1}^k α_i x^(i) : x^(i) ∈ S, i = 1, ..., k, k ∈ N, α ∈ Δ_k}.
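The definitions above translate directly into code. The following sketch (helper names are made up) forms convex combinations with weights drawn from the unit simplex and checks that combinations of a triangle's vertices stay inside the triangle.

```python
import random

def convex_combination(points, weights):
    """Return sum_i alpha_i * x_i for alpha in the unit simplex Delta_k."""
    assert all(a >= 0 for a in weights) and abs(sum(weights) - 1.0) < 1e-9
    dim = len(points[0])
    return tuple(sum(a * x[j] for a, x in zip(weights, points)) for j in range(dim))

def random_simplex_point(k):
    """Draw a point of the unit simplex Delta_k (normalized exponentials)."""
    u = [random.expovariate(1.0) for _ in range(k)]
    s = sum(u)
    return [v / s for v in u]

# every convex combination of the triangle's vertices lies in the triangle
triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
random.seed(0)
for _ in range(100):
    x, y = convex_combination(triangle, random_simplex_point(3))
    assert x >= 0 and y >= 0 and x + y <= 1 + 1e-9
```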
Convex Hull and Integer Programming

Theorem. The convex hull of finitely many points is a (bounded) polyhedron.

The integer hull of a polyhedron P = {x ∈ R^n : Ax ≤ b} is the convex hull of the integer points in P, P_I := conv(P ∩ Z^n).

Theorem. If A ∈ Q^(m×n) and b ∈ Q^m, the integer hull P_I of the polyhedron P = {x ∈ R^n : Ax ≤ b} is itself a polyhedron.

Difficulty: the description of P_I is mostly unknown or excessive! P ⊇ P_I. [Exception, e.g., for A totally unimodular and b ∈ Z^m; then P = P_I.]

If the integer hull can be given explicitly, the integer optimization problem can be solved by the simplex method:

Theorem. Suppose the integer hull of P = {x ∈ R^n : Ax ≤ b} is given by P_I = {x ∈ R^n : A_I x ≤ b_I}; then
sup{c^T x : A_I x ≤ b_I, x ∈ R^n} = sup{c^T x : Ax ≤ b, x ∈ Z^n},
Argmax{c^T x : A_I x ≤ b_I, x ∈ R^n} = conv Argmax{c^T x : Ax ≤ b, x ∈ Z^n}.
Convex Functions

A function f : R^n → R̄ := R ∪ {∞} is convex if f(αx + (1−α)y) ≤ αf(x) + (1−α)f(y) for x, y ∈ R^n and α ∈ [0,1]. f is strictly convex if f(αx + (1−α)y) < αf(x) + (1−α)f(y) for x, y ∈ R^n, x ≠ y, and α ∈ (0,1).

The epigraph of a function f : R^n → R̄ is the set epi f := {(x, r) : x ∈ R^n, r ≥ f(x)} [the points on and above the graph of f].

Theorem. A function is convex if and only if its epigraph is a convex set.

Ex.: for convex f_i : R^n → R̄, the pointwise supremum f(x) := sup_i f_i(x), x ∈ R^n, is also convex.

Theorem. Each local minimum of a convex function is also a global minimum, and for strictly convex functions it is unique (if it exists). For convex functions there exist rather good optimization methods.

A function f is concave if −f is convex. (Each local maximum of a concave function is a global one.)
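The pointwise-supremum example can be checked numerically. Below is a small sketch with made-up affine pieces f_i(x) = a_i x + b_i, spot-checking the convexity inequality for their maximum on random samples.

```python
import random

# f is the pointwise maximum of a few affine (hence convex) functions
pieces = [(-2.0, 1.0), (0.5, 0.0), (3.0, -4.0)]  # (a_i, b_i), made-up data

def f(x):
    return max(a * x + b for a, b in pieces)

# spot-check f(a*x + (1-a)*y) <= a*f(x) + (1-a)*f(y) on random samples
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    al = random.random()
    assert f(al * x + (1 - al) * y) <= al * f(x) + (1 - al) * f(y) + 1e-9
```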
Contents

1 Integer Optimization
1.1 Bipartite Matching
1.2 Integral Polyhedra (and directed Graphs)
1.3 Application: Network Flows
1.4 Multi-Commodity Flow Problems
1.5 Integer and Combinatorial Optimization
1.6 Branch-and-Bound
1.7 Convex Sets, Convex Hull, Convex Functions
1.8 Relaxation
1.9 Application: Traveling Salesman Problem (TSP)
1.10 Finding Good Solutions, Heuristics
1.11 Mixed-Integer Optimization
1.8 Relaxation

Concept applicable to arbitrary optimization problems (here: maximize):

Definition. Given two optimization problems with X, W ⊆ R^n and f, f̄ : R^n → R̄,
(OP) max f(x) s.t. x ∈ X and (RP) max f̄(x) s.t. x ∈ W,
(RP) is a relaxation of (OP) if
1. X ⊆ W,
2. f̄(x) ≥ f(x) for all x ∈ X.

Let (RP) be a relaxation of (OP).

Observation.
1. v(RP) ≥ v(OP). [(RP) yields an upper bound]
2. If (RP) is infeasible, then so is (OP).
3. If x* is an optimal solution (OS) of (RP) and x* ∈ X with f(x*) = f̄(x*), then x* is an OS of (OP).

Search for a suitably small W ⊇ X and f̄ ≥ f so that (RP) is efficiently solvable. A relaxation (RP) of (OP) is called exact if v(OP) = v(RP).
Convex Relaxation

If convexity is required for W and (for max) concavity for f̄, this yields a convex relaxation. Mostly (but not always!) convexity ensures reasonable algorithmic solvability of the relaxation.

Ex.: for a combinatorial max problem with finite ground set Ω, feasible solutions F ⊆ 2^Ω and linear cost function c ∈ R^Ω,
max c^T x s.t. x ∈ conv{χ_Ω(F) : F ∈ F}
is an exact convex (even linear) relaxation, but it is useful only if the polyhedron conv{χ(F) : F ∈ F} can be described by a reasonable inequality system Ax ≤ b.

In global optimization, nonlinear functions are approximated from below by convex functions on domain subdivisions.

Ex.: Consider (OP) min f(x) := (1/2) x^T Q x + q^T x s.t. x ∈ [0,1]^n with f not convex, i.e., λ_min(Q) < 0. [λ_min ... minimal eigenvalue]
Q − λ_min(Q) I is positive semidefinite, and by x_i^2 ≤ x_i on [0,1]^n there holds
f̄(x) := (1/2) x^T (Q − λ_min(Q) I) x + (q + (λ_min(Q)/2) 𝟙)^T x ≤ f(x) for all x ∈ [0,1]^n,
where 𝟙 is the all-ones vector. Thus, (RP) min f̄(x) s.t. x ∈ [0,1]^n is a convex relaxation of (OP).
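A quick numerical sanity check of this underestimator, with a hypothetical 2×2 instance (the matrix and linear term are made up; for a symmetric 2×2 matrix the minimal eigenvalue has a closed form):

```python
import math
import random

# hypothetical instance: a nonconvex quadratic on [0,1]^2 (all numbers made up)
Q = [[1.0, 3.0], [3.0, 1.0]]   # symmetric, eigenvalues -2 and 4, so not convex
q = [-1.0, 0.5]

# minimal eigenvalue of a symmetric 2x2 matrix in closed form
a, b, c = Q[0][0], Q[0][1], Q[1][1]
lam = (a + c) / 2.0 - math.sqrt(((a - c) / 2.0) ** 2 + b * b)   # = -2.0 here

def f(x):
    quad = 0.5 * sum(x[i] * Q[i][j] * x[j] for i in range(2) for j in range(2))
    return quad + sum(q[i] * x[i] for i in range(2))

def f_bar(x):
    """Convex underestimator: shift Q by lam*I and use x_i^2 <= x_i on [0,1]."""
    Qs = [[Q[i][j] - (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
    quad = 0.5 * sum(x[i] * Qs[i][j] * x[j] for i in range(2) for j in range(2))
    return quad + sum((q[i] + 0.5 * lam) * x[i] for i in range(2))

random.seed(1)
for _ in range(1000):
    x = [random.random(), random.random()]
    assert f_bar(x) <= f(x) + 1e-9   # f_bar is a lower bound on [0,1]^2
```

Note that f̄ agrees with f at every 0/1 point (there x_i^2 = x_i), so the relaxation loses nothing at the integer candidates.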
LP-Relaxation for Integer Programs

For an integer program max c^T x s.t. Ax ≤ b, x ∈ Z^n, dropping the integrality constraints yields the LP-relaxation max c^T x s.t. Ax ≤ b, x ∈ R^n. It is a relaxation, because X := {x ∈ Z^n : Ax ≤ b} ⊆ {x ∈ R^n : Ax ≤ b} =: W. [basis of all standard solvers for mixed-integer programs]

Ex.: knapsack problem: n = 2, weights a = (6, 8)^T, capacity b = 10,
max c^T x s.t. a^T x ≤ b, x ∈ Z^n_+ vs. max c^T x s.t. a^T x ≤ b, x ≥ 0.
[Figure: the feasible integer points, the LP-relaxation (green), and the best relaxation: the convex hull of the integer points.]

Ex.: integer multi-commodity flow vs. fractional multi-commodity flow.
[Figure: an instance where the integer problem is infeasible, X = ∅, while the fractional problem is feasible; the LP-relaxation W is too large, one would need the convex hull.]
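For the small knapsack above one can compare v(IP) and v(LP) by enumeration. The capacity digit appears lost in the transcription, so b = 10 below is an assumption, as is the illustrative profit vector c = (1, 1).

```python
from fractions import Fraction

# assumed data: weights a = (6, 8), capacity b = 10, and an illustrative
# profit vector c = (1, 1) (the slide does not specify c)
a, b, c = (6, 8), 10, (1, 1)

# integer program: enumerate all feasible x in Z_+^2
v_ip = max(c[0] * x1 + c[1] * x2
           for x1 in range(b // a[0] + 1)
           for x2 in range(b // a[1] + 1)
           if a[0] * x1 + a[1] * x2 <= b)

# LP relaxation of max c^T x s.t. a^T x <= b, x >= 0: with positive data the
# optimum is attained at one of the vertices (b/a_1, 0) or (0, b/a_2)
v_lp = max(Fraction(c[0] * b, a[0]), Fraction(c[1] * b, a[1]))

assert v_lp >= v_ip   # the relaxation yields an upper bound
```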
Lagrangian Relaxation

[Applicable to constrained optimization in general; here only for inequality constraints.] Inconvenient constraints are lifted into the cost function via a Lagrange multiplier λ that penalizes violations (g : R^n → R^k):

(OP) max f(x) s.t. g(x) ≤ 0, x ∈ Ω → for λ ≥ 0: (RP_λ) max f(x) − λ^T g(x) s.t. x ∈ Ω.

(RP_λ) is a relaxation: X := {x ∈ Ω : g(x) ≤ 0} ⊆ {x ∈ Ω} =: W, and for x ∈ X, λ ≥ 0 there holds f(x) ≤ f(x) − λ^T g(x) =: f̄(x) by g(x) ≤ 0.

Define the dual function ϕ(λ) := sup_{x ∈ Ω} [f(x) − g(x)^T λ] = v(RP_λ) [for each fixed x, linear in λ].
- For each λ ≥ 0 there holds ϕ(λ) ≥ v(OP). [upper bound]
- ϕ(λ) is easy to compute if (RP_λ) is easy to solve.
- ϕ is convex, because it is a sup of linear functions in λ.
- The best bound is inf{ϕ(λ) : λ ≥ 0} [a convex problem!], well suited for convex optimization methods!
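The dual function can be evaluated by enumeration on a toy instance (all data made up): each ϕ(λ) upper-bounds v(OP), and ϕ is piecewise linear and convex in λ.

```python
from itertools import product

# toy instance (numbers made up): maximize c^T x over Omega = {0,1}^3,
# relaxing the coupling constraint a^T x <= b with multiplier lam >= 0
c, a, b = [4, 3, 5], [2, 3, 4], 5
omega = list(product((0, 1), repeat=3))

def value(x):
    return sum(ci * xi for ci, xi in zip(c, x))

def weight(x):
    return sum(ai * xi for ai, xi in zip(a, x))

def phi(lam):
    """Dual function: phi(lam) = max_{x in Omega} c^T x + lam * (b - a^T x)."""
    return max(value(x) + lam * (b - weight(x)) for x in omega)

# optimal value of the constrained problem, by enumeration
v_op = max(value(x) for x in omega if weight(x) <= b)

lams = [i / 10.0 for i in range(51)]
assert all(phi(l) >= v_op for l in lams)       # weak duality: upper bounds
for l1, l2 in zip(lams, lams[2:]):             # phi is convex in lam
    assert phi((l1 + l2) / 2) <= 0.5 * (phi(l1) + phi(l2)) + 1e-9
```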
Example: Integer Multi-Commodity Flow

Let A be the node-arc incidence matrix of D = (V, E), with 2 goods; relax the coupling capacity constraints by λ ≥ 0:

min c^(1)T x^(1) + c^(2)T x^(2)
s.t. Ax^(1) = b^(1), Ax^(2) = b^(2),
x^(1) + x^(2) ≤ w, [relaxed with λ]
x^(1) ≤ w, x^(2) ≤ w,
x^(1) ∈ Z^E_+, x^(2) ∈ Z^E_+

→ min (c^(1) + λ)^T x^(1) + (c^(2) + λ)^T x^(2) − λ^T w
s.t. Ax^(1) = b^(1), Ax^(2) = b^(2), x^(1) ≤ w, x^(2) ≤ w, x^(1) ∈ Z^E_+, x^(2) ∈ Z^E_+.

The relaxation consists of two independent min-cost-flow problems
(RP^(i)_λ) min (c^(i) + λ)^T x^(i) s.t. Ax^(i) = b^(i), w ≥ x^(i) ∈ Z^E_+, for i ∈ {1, 2}.
These can be solved integrally and efficiently!

If Lagrangian relaxation splits the problem into independent subproblems, this is sometimes called Lagrangian decomposition. Frequently this allows solving much bigger problems efficiently. Does this also yield better bounds?
Comparison of Lagrangian and LP-Relaxation

Given finite Ω ⊆ Z^n and D ∈ Q^(k×n), d ∈ Q^k:
(OP) max c^T x s.t. Dx ≤ d, x ∈ Ω → for λ ≥ 0: (RP_λ) max c^T x + λ^T (d − Dx) s.t. x ∈ Ω.
In the example: Ω = Ω^(1) × Ω^(2) with Ω^(i) = {x ∈ Z^E_+ : Ax = b^(i), x ≤ w}, i ∈ {1, 2}.

Theorem. inf_{λ ≥ 0} v(RP_λ) = sup{c^T x : Dx ≤ d, x ∈ conv Ω}.

If conv Ω is identical to the feasible set of the LP-relaxation of Ω, the values of the best Lagrangian relaxation and the LP-relaxation match!

In the example A is totally unimodular, thus for i ∈ {1, 2} and w ∈ Z^E:
conv{x ∈ Z^E_+ : Ax = b^(i), x ≤ w} = {x ≥ 0 : Ax = b^(i), x ≤ w}.
The best λ yields the value of the fractional multi-commodity-flow problem!

In general: let {x ∈ Z^n : Ax ≤ b} = Ω be a formulation of Ω. Only if {x ∈ R^n : Ax ≤ b} ⊋ conv Ω may the Lagrangian relaxation yield a better value than the LP-relaxation.
Contents

1 Integer Optimization
1.1 Bipartite Matching
1.2 Integral Polyhedra (and directed Graphs)
1.3 Application: Network Flows
1.4 Multi-Commodity Flow Problems
1.5 Integer and Combinatorial Optimization
1.6 Branch-and-Bound
1.7 Convex Sets, Convex Hull, Convex Functions
1.8 Relaxation
1.9 Application: Traveling Salesman Problem (TSP)
1.10 Finding Good Solutions, Heuristics
1.11 Mixed-Integer Optimization
1.9 Application: Traveling Salesman Problem (TSP)

TSP: Given n cities with all pairwise distances, find a shortest round trip that visits each city exactly once.

Combinatorial optimization: D = (V, E = {(u, v) : u, v ∈ V, u ≠ v}) the complete digraph, costs c ∈ R^E, feasible set F = {R ⊆ E : R a (directed) cycle in D, |R| = n}. Find R ∈ Argmin{c(R) = Σ_{e ∈ R} c_e : R ∈ F}. An NP-complete problem.

Number of tours? (n − 1)!, a combinatorial explosion:
n = 3: 2, n = 4: 3·2 = 6, n = 5: 4·3·2 = 24, n = 10: 362880, ..., n = 100: 99! ≈ 9.3·10^155.

Typical problem for:
- delivery services (parcels, pharmacy transports)
- taxi services, breakdown services
- scheduling with setup costs [e.g., coloring cars]
- steering robots (mostly nonlinear)
[frequently with several vehicles and time windows]
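A brute-force enumeration makes the (n − 1)! growth tangible; the sketch below counts exactly that many directed tours. The distance matrix is made up for illustration.

```python
from itertools import permutations
from math import factorial

def tsp_bruteforce(dist):
    """Enumerate all (n-1)! directed tours (city 0 fixed as start) and
    return the length of a shortest one."""
    n = len(dist)
    best, count = float("inf"), 0
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        best = min(best, length)
        count += 1
    assert count == factorial(n - 1)   # the combinatorial explosion in action
    return best

# asymmetric 4-city toy instance (distances made up)
d = [[0, 1, 2, 9],
     [9, 0, 4, 1],
     [1, 9, 0, 3],
     [3, 2, 9, 0]]
```

Already for n = 20 this loop would have to visit 19! ≈ 1.2·10^17 tours, which is why the relaxations and bounds of the previous sections are needed.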
Drilling holes into mainboards
With online aspects: breakdown services

Find an assignment of cars and a sequence for each repairman so that promised waiting periods are not exceeded.
Scheduling for long setup times

Two rotational printing machines for printing wrapping paper.
[Figure: initial state of machine M1, initial state of machine M2, register, final state of M1, final state of M2, color adjustment; new colors: 20]
Maximum flow problem 7000 Network flows Network Directed graph G = (V, E) Source node s V, sink node t V Edge capacities: cap : E R 0 Flow: f : E R 0 satisfying 1. Flow conservation constraints e:target(e)=v
More informationIntroduction to Mathematical Programming IE406. Lecture 21. Dr. Ted Ralphs
Introduction to Mathematical Programming IE406 Lecture 21 Dr. Ted Ralphs IE406 Lecture 21 1 Reading for This Lecture Bertsimas Sections 10.2, 10.3, 11.1, 11.2 IE406 Lecture 21 2 Branch and Bound Branch
More informationComputational Integer Programming. Lecture 2: Modeling and Formulation. Dr. Ted Ralphs
Computational Integer Programming Lecture 2: Modeling and Formulation Dr. Ted Ralphs Computational MILP Lecture 2 1 Reading for This Lecture N&W Sections I.1.1-I.1.6 Wolsey Chapter 1 CCZ Chapter 2 Computational
More informationComputational Complexity. IE 496 Lecture 6. Dr. Ted Ralphs
Computational Complexity IE 496 Lecture 6 Dr. Ted Ralphs IE496 Lecture 6 1 Reading for This Lecture N&W Sections I.5.1 and I.5.2 Wolsey Chapter 6 Kozen Lectures 21-25 IE496 Lecture 6 2 Introduction to
More information3.7 Cutting plane methods
3.7 Cutting plane methods Generic ILP problem min{ c t x : x X = {x Z n + : Ax b} } with m n matrix A and n 1 vector b of rationals. According to Meyer s theorem: There exists an ideal formulation: conv(x
More informationExercises NP-completeness