Optimization models for communication networks
1 Optimization models for communication networks (PhD course). prof. Michał Pióro, Department of Electrical and Information Technology, Lund University, Sweden; Institute of Telecommunications, Warsaw University of Technology, Poland. Dipartimento di Elettronica e Informazione, Politecnico di Milano, November 10-14, 2014.
2 purpose: to present basic approaches to network optimization. Multi-commodity flow network (MCFN) models: resource dimensioning (link capacity), routing of demands (flows). Optimization methods: linear programming, mixed-integer programming, heuristic methods.
3 course literature. L. Lasdon: Optimization Theory for Large Systems, Macmillan, 1972. M. Minoux: Mathematical Programming: Theory and Algorithms, J. Wiley, 1986. L.A. Wolsey: Integer Programming, J. Wiley, 1998. M. Pióro and D. Medhi: Routing, Flow, and Capacity Design in Communication and Computer Networks, Morgan Kaufmann, 2004. M. Pióro: Network Optimization Techniques, Chapter 18 in Mathematical Foundations for Signal Processing, Communications, and Networking, E. Serpedin, T. Chen, D. Rajan (eds.), CRC Press, 2012.
4 CONTENTS I. 1. Basics of optimization theory. Classification of optimization problems. Relaxation and duality. The role of convexity. 2. Multicommodity flow network (MCFN) problems: linear and mixed-integer problem formulations. Link-path vs. node-link formulations. Allocation vs. dimensioning problems. Various cases of routing, modular links. 3. Linear programming (LP). Basic notions and properties of LP problems. Simplex method: basic algorithm and its features. 4. Mixed-integer programming (MIP) and its relation to LP. Branch-and-bound (B&B) method and algorithms for problems involving binary variables. Extensions to the general MIP formulation. 5. Modeling non-linearities. Convex and concave objective functions and the crucial differences between the two. Step-wise link capacity/cost functions.
5 CONTENTS II. 6. Duality in LP. Path generation (PG). 7. Strengthening MIP formulations. Cutting plane method. Valid inequalities and branch-and-cut (B&C). 8. Case study: wireless mesh network design. 9. Heuristics for combinatorial optimization. Local search. Stochastic heuristics: simulated annealing and evolutionary algorithms. GRASP. 10. Notion of NP-completeness/hardness. Separation theorem.
6 basic notions (lecture 1). A set X ⊆ R^n is: bounded, if contained in some ball B(0,r) = { x ∈ R^n : ||x|| ≤ r }; closed, if for any sequence { x_n ∈ X, n=1,2,... }, lim x_n ∈ X (if the limit exists), i.e., X = closure(X); compact, if bounded and closed (every sequence contains a convergent subsequence); open, if for each x ∈ X there is r > 0 with B(x,r) ⊆ X (equivalently, R^n \ X is closed). A function f: X → R (X closed) is continuous iff f(lim x_n) = lim f(x_n). Extreme value theorem (Weierstrass): if f: X → R is continuous and X is compact, then f achieves its global maximum and global minimum on X. Linear function: f(x) = a_1 x_1 + a_2 x_2 + ... + a_n x_n = ax; (n-1)-dimensional hyperplane: ax = c; half-space: ax ≤ c.
7 basic notions: examples. Intervals (a,b), [a,b], (a,b] in R^1. Closed disc in R^2: D = { (x_1,x_2): x_1^2 + x_2^2 ≤ r^2 }, with circumference ∂D = { (x_1,x_2): x_1^2 + x_2^2 = r^2 }. Open disc in R^2: D = { (x_1,x_2): x_1^2 + x_2^2 < r^2 }. Simplex in R^3: S = { (x_1,x_2,x_3): x_1 + x_2 + x_3 ≤ 1, x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0 }. Polyhedron: { x ∈ R^n : Ax ≤ b } (m inequality constraints: A is an m by n matrix). Functions: quadratic, square root, linear, ...
8 convexity (concavity). A set X ⊆ R^n is convex iff for each pair of points x, y ∈ X the segment [x,y] ⊆ X, i.e., { (1-α)x + αy : 0 ≤ α ≤ 1 } ⊆ X. conv(X), the convex hull of a (non-convex) X: the smallest convex set including X; conv(X) = the set of all convex combinations of the finite subsets of X. A function f: X → R is convex (for convex X) iff for each x, y ∈ X and each scalar α (0 ≤ α ≤ 1): f((1-α)x + αy) ≤ (1-α)f(x) + (α)f(y); strictly convex if < holds for 0 < α < 1 (and x ≠ y). Examples: f(x) = x^2 on R, f(x) = max_k { a_k x + b_k } on R. A continuous function f: X → R is concave (for convex X) iff -f is convex.
9 general form of an optimization problem. Optimization problem (OP): minimize F(x), x ∈ X, where F: X → R is the objective function, X ⊆ R^n the optimization space (feasible set), and x = (x_1,x_2,...,x_n) ∈ R^n the variables. Convex problem (CXP): X a convex set, F a convex function; effectively tractable. Linear programming (LP): a very special convex problem (X a polyhedron, F a linear function); efficient methods exist (simplex method). Non-convex problems: (mixed) integer programming (MIP) problems (LP with discrete variables); linear constraints with a concave objective function (CVP).
10 common form. Optimization problem: minimize F(x) subject to h_i(x) = 0, i=1,2,...,m; g_j(x) ≤ 0, j=1,2,...,k; x ∈ X. Constraints: h, explicit equality constraints; g, explicit inequality constraints; X, the set representing other constraints (e.g., x ≥ 0).
11 relaxation: intuition. Optimization problem (P): minimize F(x), x ∈ Y. A relaxation of (P), problem (R): minimize G(x), x ∈ X, such that Y ⊆ X and G(x) ≤ F(x) for all x ∈ Y. Property (obvious): G_opt = G(x_opt(R)) ≤ F(x_opt(P)) = F_opt, i.e., the optimal objective of (R) is a lower bound for (P). Example: the linear relaxation of an integer programming problem, max cx over Ax ≤ b, x integer.
12 duality theory. Consider a programming problem (P): minimize F(x) subject to h_i(x) = 0, i=1,2,...,k; g_j(x) ≤ 0, j=1,2,...,m; x ∈ X (the constraints define the set Y). Form the Lagrangean function: L(x; π,λ) = F(x) + Σ_i λ_i h_i(x) + Σ_j π_j g_j(x), x ∈ X, λ unconstrained in sign, π ≥ 0. Define the optimization problem for fixed π and λ (R(π,λ)): min_{x ∈ X} L(x; π,λ). R(π,λ) is a relaxation of (P).
13 why is the dual a relaxation? Problem (R(π,λ)): minimize G(x) = F(x) + Σ_i λ_i h_i(x) + Σ_j π_j g_j(x) (π, λ are given!) subject to x ∈ X; W(π,λ) = G_opt. Then Y ⊆ X (trivially) and G(x) ≤ F(x) for all x ∈ Y (because π_j ≥ 0 and, for all x ∈ Y, g_j(x) ≤ 0 and h_i(x) = 0). Hence W(π,λ) ≤ F(x_opt(P)) for all (π,λ) ∈ Dom(W).
14 dual problem. Lagrangean function (one vector of primal variables and two vectors of dual variables): L(x; π,λ) = F(x) + Σ_i λ_i h_i(x) + Σ_j π_j g_j(x), x ∈ X, λ unconstrained in sign, π ≥ 0. Dual function: W(π,λ) = min_{x ∈ X} L(x; π,λ), λ unconstrained in sign, π ≥ 0. Dom(W) = { (π,λ): λ unconstrained in sign, π ≥ 0, min_{x ∈ X} L(x; π,λ) > -∞ }; note that when X is compact, min_{x ∈ X} L(x; π,λ) > -∞. Dual problem (D), finding the best relaxation of (P): maximize W(π,λ) subject to (π,λ) ∈ Dom(W).
15 duality: basic properties for general problems. Property 1: W is concave and Dom(W) is convex, i.e., (D) is a convex problem. Property 2 (weak duality): for all x ∈ Y and all (π,λ) ∈ Dom(W), W(π,λ) ≤ F(x).
16 convex problem: X a convex set, F and g_j convex on X, h_i linear (!). Convex problems have nice properties (which in general hold only for convex problems): every local minimum is a global minimum (there are no non-global local minima); the set of all optimal solutions is convex; if F is strictly convex, then the optimal point is unique. Strong duality theorem: there exist x*, π*, λ* with F(x*) = W(π*,λ*) (global extrema). Additional properties: if both (P) and (D) are feasible, then F(x*) = W(π*,λ*); if (P) is unbounded, then (D) is infeasible; if (D) is unbounded, then (P) is infeasible.
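Strong duality can be checked numerically on a one-dimensional convex toy problem (a hypothetical illustration, not from the slides): minimize F(x) = x^2 subject to g(x) = 1 - x ≤ 0, with X = R. Here L(x; π) = x^2 + π(1 - x); the inner minimization gives x = π/2, so W(π) = π - π^2/4, maximized at π* = 2 with W(π*) = 1 = F(x*) at x* = 1.

```python
# Strong duality on a toy convex problem (illustrative, not from the course):
#   minimize F(x) = x^2  subject to  1 - x <= 0  (i.e., x >= 1), X = R.
# Lagrangean: L(x; pi) = x^2 + pi*(1 - x); min over x attained at x = pi/2.

def W(pi):
    # dual function value for pi >= 0
    x = pi / 2.0                  # unconstrained minimizer of L(.; pi)
    return x * x + pi * (1.0 - x)

# maximize W over a grid of multipliers pi >= 0
best_pi, best_W = max(((p / 1000.0, W(p / 1000.0)) for p in range(4001)),
                      key=lambda t: t[1])

primal_opt = 1.0                  # F(x*) = 1 at x* = 1
assert abs(best_W - primal_opt) < 1e-6   # no duality gap: W* = F*
assert abs(best_pi - 2.0) < 1e-3
```

The zero gap here relies on convexity; for the MIP problems later in the course only weak duality (W ≤ F) is guaranteed.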
17 undirected graphs and directed graphs (lecture 2). [Figure: an example undirected graph and its directed counterpart on four nodes.] Nodes (vertices): V = {v_1,v_2,v_3,v_4}; links (edges/arcs): E = {e_1,e_2,...}, undirected/directed, with capacity c_e and cost ξ_e. Demands: D = {d_1,d_2}, undirected/directed end nodes, volume h_d. Undirected paths for d_1 = {v_1,v_4}: P_11 = {e_1,e_4}, P_12 = {e_2,e_5}, P_13 = {e_1,e_3,e_5}, P_14 = {e_2,e_3,e_4}. Directed paths for d_1 = (v_1,v_4): P_11 = {e_1,e_5}, P_12 = {e_2,e_6}, P_13 = {e_1,e_3,e_6}, P_14 = {e_2,e_4,e_5}. Link-path incidence: δ_edp = 1 when link e belongs to path p of demand d. Node-link incidence: a_ve = 1 when link e originates at node v; b_ve = 1 when link e terminates at node v.
18 flow allocation problem (FAP): link-path formulation. Indices: d=1,2,...,D demands; p=1,2,...,P_d paths for flows realizing demand d; e=1,2,...,E links. Constants: h_d volume of demand d; c_e capacity of link e; ξ_e unit flow cost on link e; δ_edp = 1 if e belongs to path p realizing demand d, 0 otherwise.
19 FAP: link-path formulation (for undirected as well as directed graphs). Variables: x_dp, flow realizing demand d on path p. (Objective: minimize Σ_e ξ_e (Σ_d Σ_p δ_edp x_dp).) Constraints: Σ_p x_dp = h_d, d=1,2,...,D; Σ_d Σ_p δ_edp x_dp ≤ c_e, e=1,2,...,E. Flow variables are continuous and non-negative. Link flows: x_ed = Σ_p δ_edp x_dp. Version with no objective: only a feasible solution is required.
20 example: complete FAP formulation (a linear programming problem, no objective). [Figure: triangle network with links e_1, e_2, e_3 and demands d_1, d_2, d_3.] Input data: h_1 = 10, h_2 = 5, h_3 = 12; P_11 = {e_1}, P_12 = {e_2,e_3}; P_21 = {e_2}, P_22 = {e_1,e_3}; P_31 = {e_3}, P_32 = {e_1,e_2}; ξ_1 = 1, ξ_2 = 3, ξ_3 = 2. Demand constraints: x_11 + x_12 = 10; x_21 + x_22 = 5; x_31 + x_32 = 12. Capacity constraints: x_11 + x_22 + x_32 ≤ c_1; x_12 + x_21 + x_32 ≤ c_2; x_12 + x_22 + x_31 ≤ c_3. Non-negativity constraints: x_11, x_12, x_21, x_22, x_31, x_32 ≥ 0, continuous.
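For an instance this small, the constraints can be checked mechanically. The sketch below verifies a candidate flow vector against the demand, capacity, and non-negativity constraints; the capacities c = (15, 10, 15) are assumed values for illustration, since the slide leaves c_e unspecified.

```python
# Feasibility check for the FAP example (h1 = 10, h2 = 5, h3 = 12).
# Link capacities are ASSUMED here (the slide leaves c_e unspecified).
paths = {  # which links (indexed 0..2) each path uses, i.e., the delta coefficients
    (1, 1): [0], (1, 2): [1, 2],
    (2, 1): [1], (2, 2): [0, 2],
    (3, 1): [2], (3, 2): [0, 1],
}
h = {1: 10, 2: 5, 3: 12}
c = [15, 10, 15]                       # assumed capacities

def feasible(x, eps=1e-9):
    # x maps (d, p) -> flow value
    if any(v < -eps for v in x.values()):
        return False                   # non-negativity
    for d, hd in h.items():            # demand constraints
        if abs(x[(d, 1)] + x[(d, 2)] - hd) > eps:
            return False
    load = [0.0, 0.0, 0.0]             # link loads for capacity constraints
    for (d, p), v in x.items():
        for e in paths[(d, p)]:
            load[e] += v
    return all(load[e] <= c[e] + eps for e in range(3))

# route everything on the one-link paths: loads are e1 = 10, e2 = 5, e3 = 12
x = {(1, 1): 10, (1, 2): 0, (2, 1): 5, (2, 2): 0, (3, 1): 12, (3, 2): 0}
assert feasible(x)
```

With smaller assumed capacities the same check reports infeasibility, which is exactly the situation the always-feasible FAP version later addresses.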
21 FAP: example. h_d = 10 for each of the three top-down demands. [Figure: a small network; question: is the problem feasible with link capacity C = 10? with C = 15? why?]
22 FAP: an always feasible version. Now we make use of the objective to force feasibility. Variables: x_dp, flow realizing demand d on path p; z, the common capacity surplus. Objective: minimize z. Constraints: Σ_p x_dp = h_d, d=1,2,...,D; Σ_d Σ_p δ_edp x_dp ≤ c_e + z, e=1,2,...,E. Flow variables are continuous and non-negative. How do we specify an appropriate path list?
23 example of the difficulty. The number of paths in a graph grows exponentially, so we simply cannot put them all on the path lists! [Figure: 5 by 5 Manhattan network with 840 shortest-hop paths between two opposite corners; all 10 demands but one have h = 1, the remaining one has h = 1 + ε; all links but four have capacity 1, the four exceptions have capacity 1 + ε.] How should we know that the thick path in the figure must be used to get the optimal solution?
24 number of paths in the Manhattan network. [Figure: n by n grid of nodes with opposite corners s and t, n = 3.] Each shortest path from s to t has 2(n-1) links, and the number of shortest paths equals the binomial coefficient (2n-2 choose n-1) (the Newton symbol). In the example above it is (4 choose 2), i.e., 6. In general, for n x m nodes (n in the horizontal direction, m in the vertical), the formula reads (n+m-2 choose m-1), which equals (n+m-2 choose n-1).
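The count is easy to confirm with the binomial coefficient from the standard library:

```python
from math import comb

def num_shortest_paths(n, m):
    # number of shortest corner-to-corner paths in an n-by-m Manhattan grid of nodes
    return comb(n + m - 2, m - 1)

assert num_shortest_paths(3, 3) == 6                 # the n = 3 example on the slide
assert num_shortest_paths(4, 6) == num_shortest_paths(6, 4)   # (n+m-2 over m-1) symmetry
```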
25 FAP: node-link formulation (for directed graphs). Indices: d=1,2,...,D demands; v=1,2,...,V nodes; e=1,2,...,E links (directed arcs). Constants: h_d volume of demand d; s_d, t_d source and sink node of demand d; a_ve = 1 if arc e originates at node v, 0 otherwise; b_ve = 1 if arc e terminates at node v, 0 otherwise; c_e capacity of arc e.
26 node-link formulation. Variables: x_ed ≥ 0, flow of demand d on arc e. Objective: minimize Σ_e ξ_e (Σ_d x_ed). Constraints (flow conservation): Σ_e a_ve x_ed - Σ_e b_ve x_ed = h_d if v = s_d; 0 if v ≠ s_d, t_d; -h_d if v = t_d (this last equation is dependent on the rest), for v=1,2,...,V, d=1,2,...,D. Capacity: Σ_d x_ed ≤ c_e, e=1,2,...,E. We still need to find the path flows; loops are possible.
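The node-link constraints are just a matrix statement of flow conservation. A minimal sketch (on a hypothetical 4-node directed graph, not the slide's figure) builds the incidence coefficients a_ve, b_ve and checks the conservation equations for a single path flow:

```python
# Flow conservation check for the node-link formulation.
# Hypothetical directed graph: arcs listed as (origin, destination) pairs.
arcs = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
V, E = 4, len(arcs)
a = [[1 if arcs[e][0] == v else 0 for e in range(E)] for v in range(V)]
b = [[1 if arcs[e][1] == v else 0 for e in range(E)] for v in range(V)]

# demand d: s_d = 0, t_d = 3, h_d = 7, routed on the path 0 -> 1 -> 3 (arcs 0 and 4)
s, t, h = 0, 3, 7.0
x = [7.0, 0.0, 0.0, 0.0, 7.0]          # x_ed for the single demand d

for v in range(V):
    net = sum(a[v][e] * x[e] for e in range(E)) - \
          sum(b[v][e] * x[e] for e in range(E))
    expected = h if v == s else (-h if v == t else 0.0)
    assert abs(net - expected) < 1e-9   # h_d at the source, -h_d at the sink, 0 elsewhere
```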
27 max flow from s to t: node-link formulation. Variables: x_e ≥ 0, flow realizing the demand on arc e. Objective: maximize Σ_e a_se x_e - Σ_e b_se x_e. Constraints: Σ_e a_ve x_e - Σ_e b_ve x_e = 0, v ≠ s, t, v=1,2,...,V; x_e ≤ c_e, e=1,2,...,E.
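The LP above can be handed to any LP solver, but max flow is also solvable combinatorially. Below is a compact sketch of the augmenting-path (Edmonds-Karp) method on a small hypothetical capacitated digraph; the slides do not present this algorithm, it is added here only to make the max-flow problem concrete.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow: cap[u][v] = arc capacity (0 if no arc)."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                 # no augmenting path left: optimal
        # bottleneck residual capacity along the path found
        delta, v = float('inf'), t
        while v != s:
            u = parent[v]
            delta = min(delta, cap[u][v] - flow[u][v])
            v = u
        # augment along the path
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += delta
            flow[v][u] -= delta
            v = u
        total += delta

# hypothetical 4-node instance with unit capacities, s = 0, t = 3
cap = [[0, 1, 1, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
assert max_flow(cap, 0, 3) == 2          # two arc-disjoint unit paths
```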
28 FAP: example. [Figure: directed graph with nodes s, v, w, t; h_st = 2, c_e = 1 for all arcs.]
29 FAP: node-link formulation (for undirected graphs). Indices: d=1,2,...,D demands; v=1,2,...,V nodes; e=1,2,...,E links (undirected); a=1,2,...,A arcs (for bi-directed links); e′, e″ the two oppositely directed arcs of link e. Constants: h_d volume of demand d; s_d, t_d source and sink node of demand d; a_va = 1 if arc a originates at node v, 0 otherwise; b_va = 1 if arc a terminates at node v, 0 otherwise; c_e capacity of link e.
30 node-link formulation for undirected graphs. Variables: x_ed ≥ 0, flow of demand d on link e; x′_ad ≥ 0, flow of demand d on arc a. Objective: minimize Σ_e ξ_e (Σ_d x_ed). Constraints: Σ_a a_va x′_ad - Σ_a b_va x′_ad = h_d if v = s_d; 0 if v ≠ s_d, t_d; -h_d if v = t_d (dependent on the rest), v=1,2,...,V, d=1,2,...,D; x_ed = x′_e′d + x′_e″d, e=1,2,...,E, d=1,2,...,D; Σ_d x_ed ≤ c_e, e=1,2,...,E.
31 aggregated node-link formulation. Indices: v, t nodes; e arcs; v→t demands (w.l.o.g. all demand pairs are assumed). Constants: h_vt volume of demand from node v to node t; H_t = Σ_{v ∈ V\{t}} h_vt total demand volume destined to node t; a_ve, b_ve incidence coefficients for arcs originating/terminating at node v; c_e capacity of arc e.
32 aggregated node-link formulation (cntd.). Variables: x_et ≥ 0, flow on arc e realizing all demands destined to node t. Constraints: Σ_e b_te x_et = H_t, t=1,2,...,V; Σ_e a_ve x_et = Σ_e b_ve x_et + h_vt, t,v=1,2,...,V, t ≠ v; Σ_t x_et ≤ c_e, e=1,2,...,E. [Figure: arcs entering (b_ve) and leaving (a_ve) an intermediate node v on the way to destination t.]
33 comparison of N-L and L-P formulations.
#variables: L-P: P·V(V-1) = O(V^2); N-L: (k·V·V(V-1))/2 = O(V^3); A/N-L: (k·V·V)/2 = O(V^2).
#constraints: L-P: V(V-1) + (k·V)/2 = O(V^2); N-L: V·V(V-1) + (k·V)/2 = O(V^3); A/N-L: V·V + (k·V)/2 = O(V^2).
L-P advantages: more general than N-L (hop limit, flow restoration); path flows directly calculated; more effective for known paths. N-L advantages: no need to bother about paths; compact.
L-P disadvantages: initial path sets and the need for path generation; non-compact. N-L disadvantages: less general; need for finding the optimal path flows.
34 node-link formulation: alternative notation. Variables: x_ed ≥ 0, flow of demand d on arc e. Constraints: Σ_{e ∈ δ+(v)} x_ed - Σ_{e ∈ δ-(v)} x_ed = h_d if v = s_d; 0 if v ≠ s_d, t_d; -h_d if v = t_d, for v ∈ V, d ∈ D; Σ_d x_ed ≤ c_e, e ∈ E; where δ+(v), δ-(v) denote the outgoing and incoming star of arcs at node v, respectively.
35 link-path formulation: alternative notation. Variables: x_dp, flow realizing demand d on path p. Constraints: Σ_{p ∈ P_d} x_dp = h_d, d ∈ D; Σ_{d ∈ D} Σ_{p ∈ Q_ed} x_dp ≤ c_e, e ∈ E. Flow variables are continuous and non-negative. Here P_d is the set of admissible paths for demand d, and Q_ed is the set of admissible paths for d containing link e.
36 routing restrictions. Hop limit (an upper limit on the number of links in a path): easy in the link-path formulation, problematic in the node-link formulation. Integer flows. Path diversity: x_dp ≤ h_d / n_d, d ∈ D, p ∈ P_d (problematic for the node-link formulation). Link diversity: x_ed ≤ h_d / n_d, d ∈ D, e ∈ E.
37 single path allocation (IP) (non-bifurcated flows, unsplittable flows): NP-hard. Variables: u_dp, binary flow variable corresponding to demand d and path p. Constraints: Σ_p u_dp = 1, d=1,2,...,D; Σ_d Σ_p δ_edp (h_d u_dp) ≤ c_e, e=1,2,...,E; u binary.
38 lower bounds on non-zero flows (MIP): NP-hard. Variables: u_dp, binary flow variable corresponding to demand d and path p; x_dp, absolute flow variable of demand d on path p. Constraints: Σ_p x_dp = h_d, d=1,2,...,D; l_d u_dp ≤ x_dp ≤ h_d u_dp, d=1,2,...,D, p=1,2,...,P_d; Σ_d Σ_p δ_edp x_dp ≤ c_e, e=1,2,...,E; u binary, x non-negative continuous. Other problems: (1) equal split among k paths, (2) equal split among 2 or 3 paths.
39 single-path allocation: N-L formulation. Variables: u_ed, binary variable associated with the flow of demand d on link e. Constraints: Σ_e a_ve u_ed - Σ_e b_ve u_ed = 1 if v = s_d; 0 if v ≠ s_d, t_d; -1 if v = t_d, for v=1,2,...,V, d=1,2,...,D; Σ_d h_d u_ed ≤ c_e, e=1,2,...,E. Now a hop limit can be introduced (how?); this is easy in the L-P formulation for the splittable case as well. The aggregated formulation cannot be (easily) adapted to this case.
40 simple dimensioning problem (SDP): link-path formulation. Indices: e=1,2,...,E links; d=1,2,...,D demands; p=1,2,...,P_d paths (for realizing flows) of demand d (P_dp ⊆ {e_1,e_2,...,e_E}, a subset of the set of links). Constants: δ_edp = 1 if e belongs to path p of demand d, 0 otherwise (δ_edp = 1 iff e ∈ P_dp); h_d volume of demand d; ξ_e unit (marginal) cost of link e.
41 SDP formulation. Variables: x_dp, continuous flow realizing demand d on path p; y_e, continuous capacity of link e. Objective: minimize F(y) = Σ_e ξ_e y_e. Constraints: Σ_p x_dp = h_d, d=1,2,...,D; Σ_d Σ_p δ_edp x_dp = y_e, e=1,2,...,E. Variables x, y are continuous and non-negative. How to solve this problem?
42 SDP: complete example formulation (a linear programming problem). [Figure: triangle network with links e_1, e_2, e_3 and demands d_1, d_2, d_3.] Objective: minimize y_1 + 3y_2 + 2y_3. Input data: h_1 = 10, h_2 = 5, h_3 = 12; P_11 = {e_1}, P_12 = {e_2,e_3}; P_21 = {e_2}, P_22 = {e_1,e_3}; P_31 = {e_3}, P_32 = {e_1,e_2}; ξ_1 = 1, ξ_2 = 3, ξ_3 = 2. Demand constraints: x_11 + x_12 = 10; x_21 + x_22 = 5; x_31 + x_32 = 12. Capacity constraints: x_11 + x_22 + x_32 = y_1; x_12 + x_21 + x_32 = y_2; x_12 + x_22 + x_31 = y_3. Non-negativity constraints: x_11, x_12, x_21, x_22, x_31, x_32 ≥ 0, continuous.
43 SDP: example. [Figure: network with links e_1,...,e_11 and demands d_1,...,d_5; ξ_e = 3 for black links, ξ_e = 1 for red links.] Allowable paths for d=4, h_4 = 10: P_41 = {e_3,e_6,e_9}, P_42 = {e_1,e_4,e_9}, P_43 = {e_3,e_6,e_8,e_11}, ... Demand constraint for d=4: x_41 + x_42 + x_43 + ... = 10. Capacity constraints: on the blackboard.
44 solving SDP. minimize F(y) = Σ_e ξ_e y_e subject to Σ_p x_dp = h_d, d=1,2,...,D; Σ_d Σ_p δ_edp x_dp = y_e, e=1,2,...,E. Substituting the y_e: minimize F = Σ_e ξ_e (Σ_d Σ_p δ_edp x_dp) = Σ_d Σ_p (Σ_e ξ_e δ_edp) x_dp subject to Σ_p x_dp = h_d, d=1,2,...,D. This decomposes into independent subproblems: for each d=1,2,...,D separately, minimize Σ_p (Σ_e ξ_e δ_edp) x_dp subject to Σ_p x_dp = h_d.
45 solution: shortest path allocation rule (SPAR). For a fixed d: minimize F_d = Σ_p (Σ_e ξ_e δ_edp) x_dp = Σ_p κ_dp x_dp subject to Σ_p x_dp = h_d, where κ_dp is the cost of path p of demand d. The optimum is F_d = α_d h_d, where α_d is the cost of the cheapest (shortest with respect to ξ_e) path of demand d; overall, F = Σ_d α_d h_d. Let p(d) be such a path (Σ_{e ∈ P_dp(d)} ξ_e = α_d). Solution: put the whole demand on the shortest path: x*_dp(d) := h_d; x*_dp := 0, p ≠ p(d). We may also split h_d arbitrarily over the shortest paths for d.
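SPAR is straightforward to code. The sketch below applies it to the triangle SDP example (h = (10,5,12), ξ = (1,3,2)): path costs κ are computed from the link costs, and each demand is charged at its cheapest path.

```python
# Shortest path allocation rule (SPAR) on the triangle SDP example.
xi = {1: 1, 2: 3, 3: 2}                      # link unit costs xi_e
paths = {                                    # candidate paths as link lists
    1: [[1], [2, 3]],
    2: [[2], [1, 3]],
    3: [[3], [1, 2]],
}
h = {1: 10, 2: 5, 3: 12}

total = 0
for d in paths:
    kappa = [sum(xi[e] for e in p) for p in paths[d]]   # path costs kappa_dp
    alpha = min(kappa)                                  # cheapest path cost alpha_d
    total += alpha * h[d]        # whole volume h_d on the cheapest path

assert total == 49               # 1*10 + 3*5 + 2*12
```

Note that for d = 2 both paths cost 3 (κ_21 = 3 = ξ_1 + ξ_3), so the volume 5 may be split arbitrarily between them without changing the optimal cost, as the slide observes.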
46 DP formulation: modular links (MIP), NP-hard. Variables: x_dp, continuous flow realizing demand d on path p; y_ek, number of modules of type k assigned to link e (non-negative integer). Objective: minimize F(y) = Σ_e Σ_k ξ_ek y_ek. Constraints: Σ_p x_dp = h_d, d=1,2,...,D; Σ_d Σ_p δ_edp x_dp ≤ Σ_k M_k y_ek, e=1,2,...,E.
47 DP MIP: examples. [Figure: two small networks.] (a) h_1 = 1, h_2 = 1, h_3 = 1; M = 2, ξ = 1: C = ? (b) h_1 = 1, h_2 = 2, h_3 = 1; M = 2, ξ = 1: C = ?
48 DP MIP: examples (cntd.). [Figure: network with ξ_e = 1, module size M, the demands, and the optimal capacities and flows (flows not depicted are direct).] SPAR does not apply; the optimal flows are bifurcated.
49 Linear Programming: a problem and its solution (lecture 3). maximize z = x_1 + 3x_2 subject to -x_1 + x_2 ≤ 1, x_1 + x_2 ≤ 2, x_1 ≥ 0, x_2 ≥ 0. [Figure: the feasible polygon bounded by -x_1 + x_2 = 1 and x_1 + x_2 = 2, level lines x_1 + 3x_2 = c for c = 0, 2, 3, 5, and the optimal extreme point (vertex) (1/2, 3/2).]
50 linear program in general form. Indices: j=1,2,...,n variables; i=1,2,...,m equality constraints; k=1,2,...,p inequality constraints. Constants: c = (c_1,c_2,...,c_n) revenue (in minimization: cost) coefficients; b = (b_1,b_2,...,b_m) right-hand sides of the equality constraints; A = (a_ij), m×n matrix of equality constraint coefficients; e = (e_1,e_2,...,e_p) right-hand sides of the inequality constraints; D = (d_ij), p×n matrix of inequality constraint coefficients. Variables: x = (x_1,x_2,...,x_n). Objective: maximize cx (or minimize). Constraints: Ax = b, Dx ≤ e (any one of these two forms suffices: why?). The optimization space (feasible set) is convex.
51 notion of polyhedron. A (convex) set of the form X = { x ∈ R^n : Ax ≤ b } is called a (convex) polyhedron (in general, equalities are allowed as well). A bounded polyhedron is called a polytope. A vertex (extreme point) x ∈ X: x cannot be expressed as a convex combination of any finite set of other points y ∈ X (y ≠ x), i.e., x ≠ λ_1 y^1 + λ_2 y^2 + ... + λ_k y^k for any k, y^1, y^2,..., y^k ∈ X with λ_1 + λ_2 + ... + λ_k = 1, λ_1, λ_2,..., λ_k ≥ 0. Every polytope X is the convex hull of its vertices x^1, x^2,..., x^m, i.e., X is the set of all convex combinations of the vertices: X = conv({x^1,x^2,...,x^m}). A set is convex if it contains all convex combinations of its finite subsets.
52 characterization of a polyhedron I. P = { x ∈ R^n : Ax ≤ b }: an intersection of half-spaces. Equalities represent hyperplanes; an equality ax = b is expressed through two inequalities, ax ≤ b and ax ≥ b. Example: -x_1 + x_2 ≤ 1, x_1 + x_2 ≤ 2, x_1 ≥ 0, x_2 ≥ 0. [Figure: the polygon with vertices (0,0), (2,0), (½,1½), (0,1).]
53 characterization of a polyhedron II. P = conv({ y^1,y^2,...,y^k }) + cone({ z^1,z^2,...,z^p }) = { y+z: y ∈ conv(Y), z ∈ cone(Z) }: the convex hull of finitely many points plus a cone of finitely many points. Convex hull of a finite set Y: the set of all convex combinations of the elements of Y (convex combination: Σ_{y ∈ Y} α_y y, where Σ_{y ∈ Y} α_y = 1, all α_y ≥ 0). Cone of a finite set of points Z: the set of all linear combinations of the elements of Z with non-negative coefficients (Σ_{z ∈ Z} λ_z z, all λ_z ≥ 0). Example: conv({ (0,0),(2,0),(½,1½),(0,1) }). [Figure: the same polygon.]
54 Hermann Weyl's theorem (1933): the two characterizations are equivalent. [Figure: a polyhedron generated as conv({ y^1,y^2,y^3 }) + cone({ z^1,z^2 }) = P = { x ∈ R^2 : Ax ≤ b }, with unbounded directions such as y^3 + λz^1 and y^2 + λz^2.]
55 remarks 1. Polytope P = bounded polyhedron; polytope = conv(Y). Vertices of polyhedron P: the extreme points of conv(Y); (extreme) rays of polyhedron P: the extreme rays of cone(Z). Consider a polyhedron P and the problem max { cx : x ∈ P }. Then max cx subject to Ax ≤ b is equivalent to: max cx subject to x = Σ_{y ∈ Y} α_y y + Σ_{z ∈ Z} λ_z z, Σ_{y ∈ Y} α_y = 1, α_y ≥ 0 (y ∈ Y), λ_z ≥ 0 (z ∈ Z). The number of vertices can be exponential in the number of constraints. y is an extreme point of conv(Y) if there do not exist points y^1, y^2 ∈ conv(Y)\{y} such that y = ½y^1 + ½y^2; z is an extreme ray of cone(Z) if there do not exist rays z^1, z^2 ∈ cone(Z)\{z} such that z = ½z^1 + ½z^2.
56 remarks 2. LP: max cx, x ∈ P. Infeasible: P = ∅. Unbounded: there exists a sequence { x^n : n=1,2,... } ⊆ P such that cx^n → ∞ as n → ∞. Finite solution: there is x* ∈ P with cx* ≥ cx for all x ∈ P (x* can be taken to be a vertex, if a vertex exists).
57 linear program in standard form (SIMPLEX). Indices: j=1,2,...,n variables; i=1,2,...,m equality constraints. Constants: c = (c_1,c_2,...,c_n) revenue (in minimization: cost) coefficients; b = (b_1,b_2,...,b_m) right-hand sides of the constraints; A = (a_ij), m×n matrix of constraint coefficients. Variables: x = (x_1,x_2,...,x_n). Linear program: maximize z = Σ_{j=1,...,n} c_j x_j subject to Σ_{j=1,...,n} a_ij x_j = b_i, i=1,2,...,m; x_j ≥ 0, j=1,2,...,n; with n > m and rank(A) = m. Matrix form: maximize cx subject to Ax = b, x ≥ 0. rank(A): the maximum number of linearly independent rows (columns); { x ∈ R^n : Ax = b } ≠ ∅ iff rank(A) = rank(A,b).
58 rank of an m×n matrix A. The maximum number of linearly independent rows of A (viewed as vectors a_i ∈ R^n) equals the maximum number of linearly independent columns of A (viewed as vectors a^j ∈ R^m); rank(A) denotes this common number. The following statements are equivalent: { x ∈ R^n : Ax = b } ≠ ∅; rank(A) = rank(A,b). A square n×n matrix A has rank(A) = n if, and only if, its rows (columns) are linearly independent.
59 transformation of LPs to the standard form. Slack variables: Σ_j a_ij x_j ≤ b_i becomes Σ_j a_ij x_j + x_{n+i} = b_i, x_{n+i} ≥ 0; surplus variables: Σ_j a_ij x_j ≥ b_i becomes Σ_j a_ij x_j - x_{n+i} = b_i, x_{n+i} ≥ 0 (remark: in exercises we will use s_i instead of x_{n+i}). A variable x_k unconstrained in sign is replaced by non-negative ones: x_k = x_k′ - x_k″, x_k′ ≥ 0, x_k″ ≥ 0. Exercise: transform the following LP to the standard form: maximize z = x_1 + x_2 subject to 2x_1 + 3x_2 ≤ 6, x_1 + 7x_2 ≥ 4, x_1 + x_2 = 3, x_1 ≥ 0, x_2 unconstrained in sign.
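The transformation is pure bookkeeping, as a small sketch shows (the data are hypothetical, not tied to the exercise, which remains a pen-and-paper task): a point satisfies an inequality iff its slack is non-negative, and any value of an unconstrained-sign variable splits into a difference of two non-negative ones.

```python
# Transforming one inequality to standard form with a slack variable.
# Hypothetical constraint: 2*x1 + 3*x2 <= 6.
a, bnd = [2.0, 3.0], 6.0

def to_standard(x):
    # returns (x1, x2, s) satisfying a.x + s = bnd exactly
    s = bnd - sum(ai * xi for ai, xi in zip(a, x))
    return x[0], x[1], s

x1, x2, s = to_standard([1.0, 1.0])      # 2 + 3 = 5 <= 6, so the point is feasible
assert s == 1.0 and s >= 0               # slack is non-negative iff the inequality held
assert abs(2*x1 + 3*x2 + s - bnd) < 1e-12

# variable unconstrained in sign: x = x' - x'' with x', x'' >= 0
xk = -2.5
xk_pos, xk_neg = max(xk, 0.0), max(-xk, 0.0)
assert xk_pos - xk_neg == xk and xk_pos >= 0 and xk_neg >= 0
```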
60 basic facts of Linear Programming (standard form). Feasible solution: one satisfying the constraints. Basis matrix: a non-singular m×m submatrix of A. Basic solution of an LP: the unique vector determined by a basis matrix, where the n-m variables associated with columns of A not in the basis matrix are set to 0 and the remaining m variables result from the square system of equations. Basic feasible solution: a basic solution with all variables non-negative (at most m variables can be positive). Theorem 1: a vector x = (x_1,x_2,...,x_n) is an extreme point of the constraint set if and only if x is a basic feasible solution. Theorem 2: the objective function z assumes its maximum at an extreme point of the constraint set. To find the optimum (efficient?): generate all basis matrices and find the best basic feasible solution.
61 basic solutions. B = [a(j_1),a(j_2),...,a(j_m)], a basis matrix (basis) built of columns a(j_1),...,a(j_m) of A; x_B = (x_j1,x_j2,...,x_jm), the basic variables; the remaining, non-basic variables are equal to 0 by definition. With y = (y_1,y_2,...,y_m): By = b, y = B^(-1)b, x_B = y (unique!). Then x = (0,...,0,x_j1,0,...,0,x_j2,0,...,0,x_jm,0,...,0) is the basic solution; x is a basic feasible solution when y ≥ 0.
62 the first problem revisited. maximize z = x_1 + 3x_2 subject to -x_1 + x_2 ≤ 1, x_1 + x_2 ≤ 2, x_1 ≥ 0, x_2 ≥ 0; in standard form: maximize z = x_1 + 3x_2 subject to -x_1 + x_2 + x_3 = 1, x_1 + x_2 + x_4 = 2, x_j ≥ 0, j=1,2,3,4. [Figure: the feasible polygon with the optimal extreme point (1/2,3/2) and level lines x_1 + 3x_2 = c.]
63 the first problem revisited (cntd.): simplex method. maximize z = x_1 + 3x_2 subject to -x_1 + x_2 + x_3 = 1, x_1 + x_2 + x_4 = 2, x_j ≥ 0, j=1,2,3,4. The basis matrix corresponding to columns 3 and 4 (the identity matrix) gives the basic feasible solution x_1 = x_2 = 0, x_3 = 1, x_4 = 2. The basis matrix corresponding to columns 1 and 4 gives the basic solution x_2 = x_3 = 0, x_1 = -1, x_4 = 3, which is not feasible.
64 simplex method. The simplex method in general works in two phases. Phase 1: finding an initial basic feasible solution (extreme point); sometimes it can be guessed, but in general Phase 1 is needed. Phase 2: going through extreme points of the constraint set, improving the objective function in each step by exchanging one variable in the basis matrix (pivoting).
65 phase 2 of the simplex method: example. Feasible canonical form 1 (maximize z = x_1 + 3x_2; all variables ≥ 0): -x_1 + x_2 + s_1 = 1; x_1 + x_2 + s_2 = 2; -z + x_1 + 3x_2 = 0 (the simplex table, tableau; basic variables s_1, s_2; reduced costs 1 and 3). Basic feasible solution: x_1 = 0, x_2 = 0, s_1 = 1, s_2 = 2, z = 0. Since s_1 = 1 - x_2 and s_2 = 2 - x_2 (at x_1 = 0), x_2 enters the basis and s_1 leaves it; pivoting on x_2 (eliminating it from the second and third equations) gives canonical form 2: -x_1 + x_2 + s_1 = 1; 2x_1 - s_1 + s_2 = 1; -z + 4x_1 - 3s_1 = -3. Basic feasible solution: x_1 = 0, x_2 = 1, s_1 = 0, s_2 = 1, z = 3. Since x_2 = 1 + x_1 and s_2 = 1 - 2x_1 (at s_1 = 0), x_1 enters the basis and s_2 leaves it; the new basic feasible solution is x_1 = 1/2, x_2 = 3/2, s_1 = 0, s_2 = 0, z = 5.
66 phase 2 of the simplex method: example (cntd.). Canonical form 3: x_2 + (1/2)s_1 + (1/2)s_2 = 3/2; x_1 - (1/2)s_1 + (1/2)s_2 = 1/2; -z - 1s_1 - 2s_2 = -5. Basic feasible solution: x_1 = 1/2, x_2 = 3/2, s_1 = 0, s_2 = 0, z = 5. All reduced costs are negative: the solution is optimal (and unique). The optimal solution x_1 = 1/2, x_2 = 3/2, s_1 = 0, s_2 = 0, z = 5 is reached in two iterations (two pivoting operations). Theorem: if all reduced costs are non-positive (non-negative), then the solution is maximal (minimal); if all are strictly negative (positive), the maximum (minimum) is unique.
67 simplex path through the vertices. maximize z = x_1 + 3x_2 subject to -x_1 + x_2 ≤ 1, x_1 + x_2 ≤ 2, x_1 ≥ 0, x_2 ≥ 0. [Figure: the simplex path along the vertices of the feasible polygon from (0,0) through (0,1) to the optimum (1/2,3/2), with level lines x_1 + 3x_2 = c for c = 0, 2, 3, 5.]
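The two pivots of the example can be reproduced with a bare-bones tableau simplex. This is a teaching sketch for the form max cx, Ax ≤ b, x ≥ 0 with b ≥ 0 (so the slacks give the initial basis and no Phase 1 is needed), without anti-cycling safeguards:

```python
def simplex(c, A, b, iters=100):
    """Tiny tableau simplex for: max c.x  s.t.  A x <= b, x >= 0, b >= 0."""
    m, n = len(A), len(c)
    # constraint rows of the tableau: [A | I | b]; objective row: [c | 0 | 0]
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    z = c[:] + [0.0] * (m + 1)
    basis = [n + i for i in range(m)]        # slack variables start in the basis
    for _ in range(iters):
        # entering variable: most positive reduced cost
        col = max(range(n + m), key=lambda j: z[j])
        if z[col] <= 1e-12:
            break                            # no positive reduced cost: optimal
        # ratio test for the leaving row (over positive pivot candidates)
        rows = [i for i in range(m) if T[i][col] > 1e-12]
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        piv = T[row][col]
        T[row] = [t / piv for t in T[row]]   # normalize the pivot row
        for i in range(m):                   # eliminate the column elsewhere
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [u - f * p for u, p in zip(T[i], T[row])]
        f = z[col]
        z = [u - f * p for u, p in zip(z, T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i][-1]
    return x, -z[-1]

# the running example: max x1 + 3*x2  s.t.  -x1 + x2 <= 1, x1 + x2 <= 2
x, opt = simplex([1.0, 3.0], [[-1.0, 1.0], [1.0, 1.0]], [1.0, 2.0])
assert abs(opt - 5.0) < 1e-9
assert abs(x[0] - 0.5) < 1e-9 and abs(x[1] - 1.5) < 1e-9
```

Tracing the iterations reproduces exactly the canonical forms of the previous slides: x_2 enters first (reduced cost 3), then x_1, and the method stops at (1/2, 3/2) with z = 5.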
68 simplex algorithm: the main step, repeated sequentially. Suppose x_1,x_2,...,x_m are the current basic variables (all e_j ≥ 0), with the canonical form: x_1 + d_1,m+1 x_{m+1} + d_1,m+2 x_{m+2} + ... + d_1,n x_n = e_1; x_2 + d_2,m+1 x_{m+1} + ... + d_2,n x_n = e_2; ...; x_m + d_m,m+1 x_{m+1} + ... + d_m,n x_n = e_m; -z + r_{m+1} x_{m+1} + r_{m+2} x_{m+2} + ... + r_n x_n = w. Let r_{m+k} = max { r_{m+j} : j=1,2,...,n-m } > 0, and let j be the index of the basic variable with minimum ratio e_j / d_j,m+k (over d_j,m+k > 0). Then x_{m+k} enters the basis and x_j leaves it: we divide the j-th row by d_j,m+k (to normalize the coefficient of x_{m+k}) and use this row to eliminate x_{m+k} from the rest of the rows. Check why this works.
69 simplex method: phase 1, finding an initial basic feasible solution. minimize x_{n+1} + x_{n+2} + ... + x_{n+m} subject to a_11 x_1 + a_12 x_2 + ... + a_1n x_n + x_{n+1} = b_1; a_21 x_1 + a_22 x_2 + ... + a_2n x_n + x_{n+2} = b_2; ...; a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n + x_{n+m} = b_m; x_j ≥ 0, j=1,2,...,n+m (the x_{n+i} are artificial variables; all b_i have been made positive). Remark 1: the slack (but not surplus) variables introduced for achieving the standard form can be used instead of artificial variables. Remark 2: we can keep the original objective as an additional row of the simplex table; then we can immediately start Phase 2 once all artificial variables become equal to 0.
70 simplex: remarks. Cycling can occur (the objective is only non-decreasing under degenerate pivots). Methods, with polynomiality vs. practical effectiveness: simplex, exponential in the worst case but practical (1947, Dantzig); ellipsoidal, polynomial (n^6) but impractical (1979, Khachiyan); IPM (interior point methods), polynomial and practical (1984, Karmarkar).
71 flow allocation problem: LP formulation. Variables: x_dp, flow realizing demand d on path p; slack variables s_e. Constraints: Σ_p x_dp = h_d, d=1,2,...,D; Σ_d Σ_p δ_edp x_dp + s_e = c_e, e=1,2,...,E. Flow variables are continuous and non-negative. Property: at each vertex solution there are at most D+E non-zero variables; the number of non-zero flows depends on the number of saturated links, and if all links are unsaturated there are only D non-zero flows!
72 integer programming: relation to LP (lecture 4). Integer Program (IP): maximize z = cx subject to Ax ≤ b, x ≥ 0 (linear constraints), x integer (integrality constraint). X_IP: the set of all feasible solutions of IP; z_IP: its optimal objective. P_LP: the polyhedron of the linear relaxation; z_LP: the optimal objective of the relaxation. Fact 1: z_IP ≤ z_LP (z_IP ≥ z_LP for minimization). P_IP = conv(X_IP): the convex hull of X_IP (the smallest polyhedron containing X_IP). Fact 2: IP is equivalent to the linear program max { cx : x ∈ P_IP }.
73 IP relation to LP: example. Integer Program (IP): maximize z = x_1 + 2x_2 subject to x_1 + x_2 ≤ 5/4, x_1, x_2 ≥ 0, x_1, x_2 integer. X_IP = {(0,0), (1,0), (0,1)}, z_IP = 2. P_LP = { (x_1,x_2): x_1 + x_2 ≤ 5/4, x_1 ≥ 0, x_2 ≥ 0 }, z_LP = 10/4. P_IP = conv(X_IP). Remark: with x_1 + x_2 ≤ 1 instead of x_1 + x_2 ≤ 5/4 we would have P_IP = P_LP. [Figure: red, X_IP; green, P_IP; blue, P_LP; level lines z = 1 and z = 10/4.]
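The gap between z_IP and z_LP in this example is easy to confirm by brute force over the (finitely many) integer points, together with the relaxation optimum at the vertex (0, 5/4):

```python
from fractions import Fraction as F

# IP: max x1 + 2*x2  s.t.  x1 + x2 <= 5/4, x1, x2 >= 0 integer
cap = F(5, 4)
X_ip = [(x1, x2) for x1 in range(2) for x2 in range(2) if x1 + x2 <= cap]
z_ip = max(x1 + 2 * x2 for x1, x2 in X_ip)
assert sorted(X_ip) == [(0, 0), (0, 1), (1, 0)]
assert z_ip == 2

# LP relaxation: the optimum sits at the vertex (0, 5/4) of P_LP
z_lp = 0 + 2 * cap
assert z_lp == F(10, 4) and z_ip <= z_lp     # Fact 1: z_IP <= z_LP
```

Using exact fractions avoids any floating-point ambiguity in the comparison with 5/4.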
74 MIP relation to LP. Mixed Integer Program (MIP): maximize z = cx + ey subject to Ax + Dy ≤ b, x, y ≥ 0 (linear constraints), x integer (integrality constraint). X_MIP: the set of all feasible solutions of MIP; z_MIP: its optimal objective. P_LP: the polyhedron of the linear relaxation; z_LP: the optimal objective of the relaxation. Fact 1: z_MIP ≤ z_LP (z_MIP ≥ z_LP for minimization). P_MIP = conv(X_MIP): the convex hull of X_MIP (the smallest polyhedron containing X_MIP). Fact 2: MIP is equivalent to the linear program max { cx + ey : (x,y) ∈ P_MIP }.
75 MIP relation to LP: example. Mixed Integer Program (MIP): maximize z = x + 2y subject to x + y ≤ 5/4, x, y ≥ 0, x integer. X_MIP = { (x,y): x = 0, 0 ≤ y ≤ 5/4 } ∪ { (x,y): x = 1, 0 ≤ y ≤ 1/4 }, z_MIP = 10/4. P_LP = { (x,y): x + y ≤ 5/4, x ≥ 0, y ≥ 0 }, z_LP = 10/4. P_MIP = conv(X_MIP). [Figure: red, X_MIP; green, P_MIP; blue, P_LP; level lines z = 1 and z = 10/4.]
76 IP and MIP: another example. IP: maximize z = 5x_1 + 5x_2 subject to 2x_1 + x_2 ≤ 10, x_1 + 2x_2 ≤ 10, x_1, x_2 ≥ 0 and integer. Its linear relaxation (LR): x*_1 = 3 1/3, x*_2 = 3 1/3, z* = 33 1/3; IP optima: (x*_1,x*_2) = (3,3), (4,2), (2,4), all with z* = 30. MIP: maximize z = 2x_1 - 2y_1 subject to -2x_1 + y_1 ≤ -3, 5x_1 - 2y_1 ≤ 17, 2y_1 ≥ 1, y_1 ≤ 5, x_1 ≥ 0 and integer, y_1 ≥ 0. Its LR: x*_1 = 3.6, y*_1 = 0.5, z* = 6.2; MIP optima: x*_1 = 3, y*_1 = 0.5 and x*_1 = 4, y*_1 = 1.5, both with z* = 5. [Figure: the feasible sets and optima of the two problems.]
77 conv(X_IP/MIP) for the two problems. IP: maximize z = 5x_1 + 5x_2 subject to 2x_1 + x_2 ≤ 10, x_1 + 2x_2 ≤ 10, x_1, x_2 ≥ 0 and integer. MIP: maximize z = 2x_1 - 2y_1 subject to -2x_1 + y_1 ≤ -3, 5x_1 - 2y_1 ≤ 17, 2y_1 ≥ 1, y_1 ≤ 5, x_1 ≥ 0 and integer, y_1 ≥ 0. [Figure: conv(X_IP) is obtained by adding a valid inequality such as x_1 + x_2 ≤ 6 to the relaxation constraints; conv(X_MIP) is shown with its additional facets, e.g., -x_1 + y_1 ≤ 1.]
78 binary problem: full search - algorithm for min
Problem P: minimize f(x) subject to x_i ∈ {0,1}, i=1,2,...,n
N_U, N_0, N_1 ⊆ {1,2,...,n} - partition of N = {1,2,...,n}
procedure NB(N_U,N_0,N_1)   { z_best = +∞; initially N_U = N }
begin
  if N_U = ∅ then
    if f(N_0,N_1) < z_best then
      begin z_best := f(N_0,N_1); N_0_best := N_0; N_1_best := N_1 end
  else
    begin { branching }
      choose i ∈ N_U;
      NB(N_U \ { i }, N_0 ∪ { i }, N_1);
      NB(N_U \ { i }, N_0, N_1 ∪ { i })
    end
end { procedure }
depth-first search of the B&B tree
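The recursive NB procedure above can be sketched directly in Python; a minimal version (the index i plays the role of the next undecided variable in N_U, and the partially built list x encodes N_0/N_1; the toy objective is only illustrative):

```python
def full_search(f, n):
    """Exhaustive depth-first enumeration of all 2**n binary vectors,
    mirroring procedure NB: returns (best objective, best x)."""
    best = [float("inf"), None]

    def nb(i, x):
        if i == n:                        # N_U empty: evaluate the leaf
            z = f(x)
            if z < best[0]:
                best[0], best[1] = z, tuple(x)
            return
        for v in (0, 1):                  # branch: x_i := 0, then x_i := 1
            x.append(v)
            nb(i + 1, x)
            x.pop()

    nb(0, [])
    return best[0], best[1]

# toy objective f(x) = sum_i (i+1)*x_i, minimized at x = (0,...,0)
print(full_search(lambda x: sum((i + 1) * v for i, v in enumerate(x)), 3))
# -> (0, (0, 0, 0))
```

Unlike B&B, this visits all 2^n leaves; the bounding step in the following slides is what prunes the tree.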
79 B&B algorithm for the pure binary case
Problem P: minimize z = cx subject to Ax ≥ b; x_i ∈ {0,1}, i=1,2,...,n
N_U, N_0, N_1 ⊆ {1,2,...,n} - partition of N = {1,2,...,n}
B&B subproblem P(N_U,N_0,N_1) - relaxed problem in continuous variables x_i, i ∈ N_U:
minimize z = cx subject to Ax ≥ b; 0 ≤ x_i ≤ 1, i ∈ N_U; x_i = 0, i ∈ N_0; x_i = 1, i ∈ N_1
important: the sub-problem is a relaxation, that is, z* ≤ cx for any x obtained by an arbitrary assignment of binary values to the variables in N_U
z_best = +∞ - upper bound (or the best known feasible solution of problem P)
convention: if P(N_U,N_0,N_1) is infeasible then z = +∞ (z = -∞ for maximization)
80 B&B for the pure binary case - algorithm for min
procedure BBB(N_U,N_0,N_1)   { z_best = +∞ }
begin
  solution(N_U,N_0,N_1,x,z);   { solve P(N_U,N_0,N_1) }
  if N_U = ∅ or x_i is binary for all i ∈ N_U then
    if z < z_best then begin z_best := z; x_best := x end
  else if z ≥ z_best then return   { bounding }
  else
    begin { branching }
      choose i ∈ N_U such that x_i is fractional;
      BBB(N_U \ { i }, N_0 ∪ { i }, N_1);
      BBB(N_U \ { i }, N_0, N_1 ∪ { i })
    end
end { procedure }
depth-first search of the B&B tree
81 single-path allocation problem - MIP formulation
variables: u_dp - binary flow realizing demand d on path p
objective: minimize z
constraints:
Σ_p u_dp = 1, d=1,2,…,D
Σ_d Σ_p δ_edp h_d u_dp ≤ c_e + z, e=1,2,…,E
in the optimum: z = max_e { Σ_d Σ_p δ_edp h_d u_dp - c_e }, i.e., integer when h and c are integer
82 B&B example (depth-first)
network: demands h1 = 10, h2 = 4; link capacities c = 0, 9, 4
root (all variables relaxed): u11 = 1/30, u12 = 28/30, u13 = 1/30; u21 = 0, u22 = 0, u23 = 1; z = 1/3
branch u11 = 0: z = 1/3; branch u11 = 1: z = 10 - no need to consider when integrality of z is exploited!
node u11 = 0, u12 = 0: z = 6 (integer), with u13 = 1 and u21 = 0, u22 = 1, u23 = 0
node u11 = 0, u12 = 1: z = 1 (integer optimum), with u13 = 0 and u21 = 0, u22 = 0, u23 = 1
83 B&B for the pure binary case - algorithm for max
procedure BBB(N_U,N_0,N_1)   { z_best = -∞ }
begin
  solution(N_U,N_0,N_1,x,z);   { solve P(N_U,N_0,N_1) }
  if N_U = ∅ or x_i is binary for all i ∈ N_U then
    if z > z_best then begin z_best := z; x_best := x end
  else if z ≤ z_best then return   { bounding }
  else
    begin { branching }
      choose i ∈ N_U such that x_i is fractional;
      BBB(N_U \ { i }, N_0, N_1 ∪ { i });
      BBB(N_U \ { i }, N_0 ∪ { i }, N_1)
    end
end { procedure }
84 binary case - general algorithm for max
procedure BBB
begin
  z_best := -∞;
  solution(N,∅,∅,x,z); put_list(N,∅,∅,x,z);   { solve P(N,∅,∅) and put the active node on the list }
  while list not empty do
  begin
    take_list(N_U,N_0,N_1,x,z);   { take an active node from the list }
    if N_U = ∅ or x_i is binary for all i ∈ N_U then
      begin if z > z_best then begin z_best := z; x_best := x end end
    else if z > z_best then   { bounding if z ≤ z_best }
      begin { branching }
        choose(i);   { choose i ∈ N_U such that x_i is fractional }
        solution(N_U \ { i }, N_0, N_1 ∪ { i }, x, z); put_list(N_U \ { i }, N_0, N_1 ∪ { i }, x, z);
        solution(N_U \ { i }, N_0 ∪ { i }, N_1, x, z); put_list(N_U \ { i }, N_0 ∪ { i }, N_1, x, z)
      end
  end { while }
end { procedure }
85 B&B - example
original problem (IP): maximize cx subject to Ax ≤ b, x ≥ 0 and integer
linear relaxation (LR): maximize cx subject to Ax ≤ b, x ≥ 0
The optimal objective value of (LR) is greater than or equal to the optimal objective of (IP).
If (LR) is infeasible then so is (IP).
If (LR) is optimized by integer variables, then that solution is feasible and optimal for (IP).
If the cost coefficients c are integer, then the optimal objective of (IP) is less than or equal to the round-down of the optimal objective of (LR).
86 B&B - knapsack problem (best-first)
maximize 8x1 + 11x2 + 6x3 + 4x4
subject to 5x1 + 7x2 + 4x3 + 3x4 ≤ 14; x_j ∈ {0,1}, j=1,2,3,4
(LR) solution: x1 = 1, x2 = 1, x3 = 0.5, x4 = 0, z = 22
no integer solution will have value greater than 22
branch on x3: add the constraint x3 = 0 or x3 = 1 to (LR)
node x3 = 0: fractional, z ≈ 21.67 (x1 = 1, x2 = 1, x3 = 0, x4 = 0.667)
node x3 = 1: fractional, z ≈ 21.86 (x1 = 1, x2 = 0.714, x3 = 1, x4 = 0)
87 B&B example cntd.
we know that the optimal integer solution is not greater than 21.86 (in fact not greater than 21, by rounding down since the cost coefficients are integer)
we take a subproblem and branch on one of its variables:
- we choose an active subproblem (here: one not chosen before)
- we choose the subproblem with the highest solution value (here: x3 = 1) and branch on x2
node x3 = 1, x2 = 0: integer, z = 18 (x1 = 1, x2 = 0, x3 = 1, x4 = 1) - INTEGER, no further branching, not active
node x3 = 1, x2 = 1: fractional, z = 21.8 (x1 = 0.6, x2 = 1, x3 = 1, x4 = 0)
88 B&B example cntd.
node x3 = 1, x2 = 1, x1 = 0: integer, z = 21 (x1 = 0, x2 = 1, x3 = 1, x4 = 1) - INTEGER
node x3 = 1, x2 = 1, x1 = 1: INFEASIBLE (weight 5 + 7 + 4 > 14)
there is no better solution than 21 in the remaining node x3 = 0 (bound ≈ 21.67, round-down 21): fathom
optimal: x1 = 0, x2 = 1, x3 = 1, x4 = 1, z = 21
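The knapsack run above can be reproduced with a compact B&B sketch: the node bound is the fractional-knapsack LP relaxation (fill items greedily by value/weight ratio, take the last one fractionally), and nodes whose bound does not beat the incumbent are fathomed. This is a minimal depth-first variant rather than the best-first search of the slides, but it finds the same optimum:

```python
def branch_and_bound(values, weights, cap):
    """Binary knapsack by B&B with the LP-relaxation (fractional) bound."""
    n = len(values)
    order = sorted(range(n), key=lambda j: values[j] / weights[j], reverse=True)
    best = [0]

    def bound(k, z, room):
        # LP bound: fill remaining items greedily, last one fractionally
        for j in order[k:]:
            if weights[j] <= room:
                room -= weights[j]; z += values[j]
            else:
                return z + values[j] * room / weights[j]
        return z

    def rec(k, z, room):
        if z > best[0]:
            best[0] = z                    # new incumbent
        if k == n or bound(k, z, room) <= best[0]:
            return                         # leaf, or fathomed by bounding
        j = order[k]
        if weights[j] <= room:
            rec(k + 1, z + values[j], room - weights[j])   # branch x_j = 1
        rec(k + 1, z, room)                                # branch x_j = 0

    rec(0, 0, cap)
    return best[0]

# the slide's instance: max 8x1 + 11x2 + 6x3 + 4x4, 5x1 + 7x2 + 4x3 + 3x4 <= 14
print(branch_and_bound([8, 11, 6, 4], [5, 7, 4, 3], 14))   # -> 21
```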
89 B&B example - summary
Solve the linear relaxation of the problem. If the solution is integer, then we are done. Otherwise create two new subproblems by branching on a fractional variable.
A subproblem is not active when any of the following occurs:
- you have already used the subproblem to branch on
- all variables in the solution are integer
- the subproblem is infeasible
- you can fathom the subproblem by a bounding argument.
Choose an active subproblem and branch on a fractional variable. Repeat until there are no active subproblems.
Remarks:
- If x is restricted to integer (but not necessarily to 0 or 1), then for x = 4.27 you would branch with the constraints x ≤ 4 and x ≥ 5.
- If some variables are not restricted to integer, you do not branch on them.
90 tree searching and branching strategies
the order of visiting the nodes of the B&B tree:
- procedure take: take the first element from the list of active nodes
- procedure put: defines the order
  - best first: sort by the optimal values z of the LR subproblems
  - depth-first: put on top of the list (list = stack)
choose(i):
- choose the first fractional variable
- choose the one closest to ½ (in the binary case)
there are no general rules for put and choose
91 B&B algorithm for the mixed binary case
Problem P: minimize z = cx
subject to Ax ≥ b; x_i ∈ {0,1}, i=1,2,...,k; x_i ≥ 0, i=k+1,k+2,...,n
N_U, N_0, N_1 ⊆ {1,2,...,k} - partition of {1,2,...,k}
P(N_U,N_0,N_1) - relaxed problem in continuous variables x_i, i ∈ N_U ∪ {k+1,k+2,...,n}:
0 ≤ x_i ≤ 1, i ∈ N_U; x_i ≥ 0, i=k+1,k+2,...,n; x_i = 0, i ∈ N_0; x_i = 1, i ∈ N_1
z_best = +∞ - upper bound (or the best known feasible solution of problem P)
92 B&B for the mixed binary case - algorithm for min
procedure BBB(N_U,N_0,N_1)   { z_best = +∞ }
begin
  solution(N_U,N_0,N_1,x,z);   { solve P(N_U,N_0,N_1) }
  if N_U = ∅ or x_i is binary for all i ∈ N_U then
    if z < z_best then begin z_best := z; x_best := x end
  else if z ≥ z_best then return   { bounding }
  else
    begin { branching }
      choose i ∈ N_U such that x_i is fractional;
      BBB(N_U \ { i }, N_0 ∪ { i }, N_1);
      BBB(N_U \ { i }, N_0, N_1 ∪ { i })
    end
end { procedure }
93 B&B algorithm for the integer case
Problem P: minimize z = cx
subject to Ax ≥ b; 0 ≤ x_j < +∞ and integer, j=1,2,...,k; x_j ≥ 0, continuous, j=k+1,k+2,...,n
Remark: MIP can always be converted into BIP by the binary expansion x_j = 2^0 u_j0 + 2^1 u_j1 + … + 2^q u_jq (valid when x_j ≤ 2^(q+1) - 1).
Direct B&B procedure:
node of the B&B tree - set of inequalities Ω: d_j(Ω) ≤ x_j ≤ g_j(Ω), j=1,2,...,k
initial Ω = { d_j(Ω) = 0, g_j(Ω) = +∞ : j=1,2,...,k }
Problem P(Ω): minimize z = cx   (z(Ω))
subject to Ax ≥ b; d_j(Ω) ≤ x_j ≤ g_j(Ω), j=1,2,...,k   (x'(Ω)); x_j ≥ 0, continuous, j=k+1,k+2,...,n   (x''(Ω))
94 B&B for the integer case - algorithm for min
procedure BBI(Ω)   { z_best = +∞ }
begin
  solution(Ω,z(Ω),x'(Ω),x''(Ω));   { solve P(Ω) }
  if integer(x'(Ω)) then
    if z(Ω) < z_best then begin z_best := z(Ω); x_best := (x'(Ω),x''(Ω)) end
  else   { x' contains non-integer components }
  if z(Ω) ≥ z_best then return   { bounding }
  else
    begin { branching }
      choose index j of one of the non-integer components of x'(Ω);
      BBI(Ω \ { d_j(Ω) ≤ x_j ≤ g_j(Ω) } ∪ { d_j(Ω) ≤ x_j ≤ ⌊x_j(Ω)⌋ });
      BBI(Ω \ { d_j(Ω) ≤ x_j ≤ g_j(Ω) } ∪ { ⌈x_j(Ω)⌉ ≤ x_j ≤ g_j(Ω) })
    end
end { procedure }
95 lecture 5 - dimensioning: cases
modular links (MIP): minimize F(y) = Σ_e ξ_e y_e
subject to Σ_d Σ_p δ_edp x_dp ≤ M y_e, e=1,2,…,E; y_e integer
several module types (IP): Σ_d Σ_p δ_edp x_dp ≤ Σ_k M_k y_ek, e=1,2,…,E; y_ek integer
non-linear link cost: minimize F(y) = Σ_e ξ_e f(y_e), where y_e = Σ_d Σ_p δ_edp x_dp
f convex (penalty, delay) - CXP; f concave (dimensioning function) - CVP
96 convexity (concavity)
A set X ⊆ E^n is convex iff for each pair of points x, y ∈ X the segment [x,y] ⊆ X, i.e., { (1-α)x + αy : 0 ≤ α ≤ 1 } ⊆ X.
A function f: X → E (X convex) is convex iff for each x, y ∈ X and each scalar α with 0 ≤ α ≤ 1:
f((1-α)x + αy) ≤ (1-α)f(x) + αf(y)
Strictly convex: if < holds for 0 < α < 1 (and x ≠ y).
A function f: X → E is concave iff -f is convex.
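The defining inequality can be probed numerically; a small sketch that checks the midpoint special case (α = ½) of the definition on a grid of pairs - a necessary condition for convexity, and sufficient for continuous f (the grid size and tolerance are arbitrary choices):

```python
def is_midpoint_convex(f, a, b, steps=50):
    """Test f((x+y)/2) <= (f(x)+f(y))/2 for grid pairs x, y in [a, b]."""
    pts = [a + (b - a) * i / steps for i in range(steps + 1)]
    return all(f((x + y) / 2) <= (f(x) + f(y)) / 2 + 1e-12
               for x in pts for y in pts)

print(is_midpoint_convex(lambda t: t * t, -5, 5))    # t^2 is convex -> True
print(is_midpoint_convex(lambda t: -t * t, -5, 5))   # -t^2 is concave -> False
```

Such a check can only refute convexity on the sampled points, never prove it in general, but it makes the definition concrete.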
97 convex link penalty functions
objective: minimize F(x) = Σ_e ξ_e f(y_e)
constraints (all flows non-negative):
Σ_p x_dp = h_d, d=1,2,…,D
y_e = Σ_d Σ_p δ_edp x_dp, e=1,2,…,E
Σ_d Σ_p δ_edp x_dp ≤ M_e, e=1,2,…,E
[figure: convex penalty z = f(y), e.g., a delay function growing steeply as y approaches the capacity M_e]
One global minimum. Solution is bifurcated.
y1 < y2 implies f(y1)/y1 < f(y2)/y2
98 piece-wise linear approximation of a convex function
f(y) = max{ c_k y + b_k : k=1,2,...,n }
minimize z   ( z = f(y) )
constraints: z ≥ c_k y + b_k, k=1,2,…,n
[figure: a convex f(y) approximated from below by the lines z = c_1 y + b_1, z = c_2 y + b_2, z = c_3 y + b_3]
the approximation converts CXP to LP
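Evaluating the approximation is just a maximum over the supporting lines; a minimal sketch (the three (c_k, b_k) pieces are illustrative numbers, not taken from any slide):

```python
def f_pwl(y, pieces):
    """Convex piece-wise linear function f(y) = max_k (c_k * y + b_k)."""
    return max(c * y + b for c, b in pieces)

# illustrative pieces (c_k, b_k) of an increasing convex penalty
pieces = [(1, 0), (2, -1), (4, -5)]
print([f_pwl(y, pieces) for y in (0, 1, 2, 3)])   # -> [0, 1, 3, 7]
```

In the LP itself f_pwl is never evaluated explicitly: one keeps the variable z together with the constraints z ≥ c_k y + b_k, and minimization pushes z down onto the upper envelope, so z = f(y) at the optimum.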
99 LP approximation of a convex problem
variables: x_dp - flow realizing demand d on path p; y_e - capacity of link e
objective: minimize Σ_e ξ_e f(y_e)
constraints:
Σ_p x_dp = h_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp = y_e, e=1,2,…,E
all variables are continuous and non-negative
with f(y) = max{ c_k y + b_k : k=1,2,...,K } the problem becomes the LP:
minimize Σ_e ξ_e z_e
constraints:
z_e ≥ c_k y_e + b_k, e=1,2,…,E, k=1,2,…,K
Σ_p x_dp = h_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp = y_e, e=1,2,…,E
100 concave link dimensioning functions
objective: minimize F(x) = Σ_e ξ_e f(y_e)
constraints (all flows non-negative):
Σ_p x_dp = h_d, d=1,2,…,D
y_e = Σ_d Σ_p δ_edp x_dp, e=1,2,…,E
[figure: concave dimensioning function z = f(y), e.g., the inverse Erlang loss formula]
Numerous local minima. Solution is non-bifurcated.
y1 < y2 implies f(y1)/y1 > f(y2)/y2
101 piece-wise linear approximation of a concave function
f(y) = min{ c_k y + b_k : k=1,2,...,n }
minimize z = Σ_k (c_k y_k + b_k u_k)   ( z = f(y) )
constraints:
Σ_k y_k = y
Σ_k u_k = 1
0 ≤ y_k ≤ Δ u_k, u_k ∈ {0,1}, k=1,2,…,n
[figure: a concave f(y) approximated by the lines c_1 y + b_1, c_2 y + b_2, c_3 y + b_3 with intercepts b_1 < b_2 < b_3]
Using this approximation we can convert CVP to MIP.
102 piece-wise approximation of a concave problem (MIP)
variables: x_dp - flow realizing demand d on path p; y_e - capacity of link e
objective: minimize Σ_e ξ_e f(y_e)
constraints:
Σ_p x_dp = h_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp = y_e, e=1,2,…,E
all variables are continuous and non-negative
with f(y) = min{ c_k y + b_k : k=1,2,...,K } the problem becomes the MIP:
minimize Σ_e ξ_e ( Σ_k (c_k y_ek + b_k u_ek) )
constraints:
Σ_k y_ek = Y_e, e=1,2,…,E
Σ_k u_ek = 1, e=1,2,…,E
0 ≤ y_ek ≤ Δ u_ek, u_ek ∈ {0,1}, e=1,2,…,E, k=1,2,…,K
Σ_p x_dp = h_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp = Y_e, e=1,2,…,E
103 lecture 6 - duality in LP
primal: minimize z = cx subject to Ax = b, x ≥ 0
dual: maximize w = bu subject to A^T u ≤ c
for LP in standard form with optimal basic matrix B: u* = c_B B^(-1)
in fact: u* = -λ* (the optimal Lagrange multipliers)
D(D(P)) = P
104 dual separation in LP and column generation
In many problems most of the potential variables are not used in the primal problem formulation. Dual constraints correspond to the primal variables that are used.
It can happen that we are able to produce one (or more) new dual constraints (corresponding to primal variables not considered in the problem) violated by the current optimal dual solution u*. Then, by adding these new constraints, we can potentially decrease the optimal dual objective (since we are adding constraints to a maximization problem). If we decrease the dual maximum, then we decrease the primal minimum, because W* = F*.
Moreover, if we are not able to cut off the current u*, then the current primal solution is optimal in the general sense (i.e., for the problem with all potential primal variables included).
105 flow allocation problem
variables: x_dp - flow realizing demand d on path p; z - auxiliary variable
recall: lists of admissible paths are given
objective: minimize z
constraints:
Σ_p x_dp = h_d, d=1,2,…,D   (λ_d - unconstrained)
Σ_d Σ_p δ_edp x_dp ≤ c_e + z, e=1,2,…,E   (π_e ≥ 0)
flow variables are continuous and non-negative, z is continuous
106 dual for LP - there is a recipe for formulating duals
L(x,z; π,λ) = z + Σ_d λ_d (h_d - Σ_p x_dp) + Σ_e π_e (Σ_d Σ_p δ_edp x_dp - c_e - z), with x_dp ≥ 0 for all (d,p)
W(π,λ) = min over x ≥ 0 and z of L(x,z; π,λ)
Dual: maximize W(π,λ) = Σ_d λ_d h_d - Σ_e π_e c_e
subject to
Σ_e π_e = 1
λ_d ≤ Σ_e δ_edp π_e, d=1,2,…,D, p=1,2,...,P_d
π_e ≥ 0, e=1,2,...,E
107 path generation - the reason
Dual: maximize Σ_d λ_d h_d - Σ_e π_e c_e
subject to
Σ_e π_e = 1
λ_d ≤ Σ_e δ_edp π_e, d=1,2,…,D, p=1,2,...,P_d
π_e ≥ 0, e=1,2,...,E
if we can find a path shorter than λ_d*, then we get a more constrained dual problem and hence have a chance to improve (decrease) the optimal dual objective, i.e., to decrease the optimal primal objective
a shortest-path algorithm can be used for finding shortest paths with respect to the link weights π*
108 path generation - how it works
We can start with only a single path on the list for each demand (P_d = 1 for all d). We solve the dual problem for the given path-lists. Then, for each demand d, we find a shortest path with respect to the weights π*, and if its length is shorter than λ_d* we add it to the current path-list of demand d. If no path is added then we stop; otherwise we return to the previous step.
This process typically (although not always) terminates after a reasonable number of steps. Cycling may occur, so it is better not to remove paths that are not used.
109 PG example
triangle network: link 1 (c1 = 1) connects the end nodes of demand 1 (h1 = 2) directly; links 2 (c2 = 2) and 3 (c3 = 2) carry demands 2 and 3 (h2 = h3 = 1) and together form the two-hop path {2,3} for demand 1
primal: min z with single-path flows x11 = 2, x21 = 1, x31 = 1 and x11 ≤ 1 + z, x21 ≤ 2 + z, x31 ≤ 2 + z, x ≥ 0
dual 1: max W = 2λ1 + λ2 + λ3 - π1 - 2π2 - 2π3
subject to π1 + π2 + π3 = 1, π ≥ 0; λ1 ≤ π1, λ2 ≤ π2, λ3 ≤ π3
solution: W* = 1 (= z*); π1* = 1, π2* = π3* = 0; λ1* = 1, λ2* = λ3* = 0
add path {2,3} for demand 1, with dual length equal to 0
dual 2: max W = 2λ1 + λ2 + λ3 - π1 - 2π2 - 2π3
subject to π1 + π2 + π3 = 1, π ≥ 0; λ1 ≤ π1, λ1 ≤ π2 + π3, λ2 ≤ π2, λ3 ≤ π3
solution: W* = 0 (= z*); π1* = 1/2, π2* + π3* = 1/2; λ1* = 1/2, λ2* + λ3* = 1/2
no paths to add!
110 path generation
- note that in the link-path formulation the lists of candidate paths are predefined
- using full lists is not realistic (exponential number of paths)
- the optimal dual multipliers π_e* associated with the capacity constraints are used to generate new shortest paths
- the paths can be generated using Dijkstra's algorithm (or some other shortest-path algorithm), e.g., with a limited number of hops
- path generation is related to column generation - a general method of LP related to the revised Simplex method
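The pricing step above is an ordinary shortest-path computation with the duals π_e* as link weights; a minimal Dijkstra sketch (the three-node instance mirrors the PG example, where the direct link has dual weight 1 and the two-hop path has weight 0; node names are made up for illustration, and the target is assumed reachable):

```python
import heapq

def dijkstra(adj, s, t):
    """Shortest s-t path; adj[u] = list of (v, weight >= 0) pairs.
    In path generation the weights are the optimal duals pi_e*."""
    dist, prev = {s: 0}, {}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [t], t                       # reconstruct the path
    while u != s:
        u = prev[u]
        path.append(u)
    return dist[t], path[::-1]

# direct link a-b has dual weight 1; the path a-c-b has total weight 0,
# so pricing finds the two-hop path, as in the PG example
adj = {"a": [("b", 1.0), ("c", 0.0)], "c": [("b", 0.0)], "b": []}
print(dijkstra(adj, "a", "b"))   # -> (0.0, ['a', 'c', 'b'])
```

If the returned length is below λ_d*, the corresponding path is appended to the path-list of demand d.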
111 lecture 7 - optimization methods for MIP and IP
no hope for efficient (polynomial-time) exact general methods
- branch-and-bound
- strengthening MIP formulations: stronger formulations, additional cuts
- valid inequalities: branch-and-cut, cutting plane method
- path generation for MIPs: branch-and-price
- the above are based on LP (and can be enhanced with Lagrangean relaxation)
- stochastic heuristics: evolutionary algorithms, simulated annealing, etc.
112 strengthening MIP formulations: alternative formulations
example: topological design (fixed charge link problem) - NP-hard
variables: x_dp - flow of demand d on path p; y_e - capacity of link e; u_e = 1 if link e is installed, 0 otherwise
objective: minimize F = Σ_e ξ_e y_e + Σ_e κ_e u_e
constraints:
Σ_p x_dp = h_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp = y_e, e=1,2,…,E
y_e ≤ M_e u_e, e=1,2,…,E
y and x non-negative, u binary
113 linear relaxation
variables: x_dp - flow of demand d on path p; y_e - capacity of link e
objective: minimize F = Σ_e (ξ_e + κ_e/M_e) y_e
constraints:
Σ_p x_dp = h_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp = y_e, e=1,2,…,E
x, y non-negative continuous (u_e = y_e/M_e, e=1,2,…,E)
single path allocation rule!
114 B&B subproblem: linear relaxation LR(N_U,N_0,N_1)
variables: x_dp - flow of demand d on path p; y_e - capacity of link e; u_e - link status variable
objective: minimize F = Σ_e (ξ_e y_e + κ_e u_e)
constraints:
Σ_p x_dp = h_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp = y_e, e=1,2,…,E
y_e ≤ M_e u_e, e=1,2,…,E
0 ≤ u_e ≤ 1, e ∈ N_U; u_e = 0 for e ∈ N_0; u_e = 1 for e ∈ N_1
x, y non-negative continuous
115 lower bounds I
minimize F = Σ_e ζ_e y_e + Σ over e ∈ N_1 of κ_e
subject to
Σ_p x_dp = h_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp = y_e, e=1,2,…,E
y and x non-negative continuous (y_e ≤ M_e u_e, e=1,2,…,E)
where ζ_e = ξ_e + κ_e/M_e for e ∈ N_U; ζ_e = ξ_e for e ∈ N_1; ζ_e = +∞ for e ∈ N_0
solution: immediate (SPAR)
observe the importance of M_e (the smaller the better)
116 lower bounds II
idea: more constraints, but tighter; SPAR not applicable
minimize F = Σ_e ξ_e y_e + Σ_e κ_e u_e
subject to
Σ_p x_dp = 1, d=1,2,…,D
y_e = Σ_d Σ_p δ_edp x_dp h_d, e=1,2,...,E
0 ≤ u_e ≤ 1, e=1,2,...,E
x_dp ≤ u_e, d=1,2,...,D, p=1,2,...,P_d, e=1,2,…,E with δ_edp = 1
x non-negative continuous (u_e binary/continuous)
117 lower bounds II - B&B subproblems
minimize F = Σ_e ξ_e y_e + Σ_e κ_e u_e
subject to
Σ_p x_dp = 1, d=1,2,…,D
y_e = Σ_d Σ_p δ_edp x_dp h_d, e=1,2,...,E
0 ≤ u_e ≤ 1, e ∈ N_U; u_e = 0, e ∈ N_0; u_e = 1, e ∈ N_1
x_dp ≤ u_e, d=1,2,...,D, p=1,2,...,P_d, e=1,2,…,E with δ_edp = 1
x non-negative continuous, u_e (e ∈ N_U) continuous
118 example
all links: ξ = 1; thick links: κ = 3; thin links: κ = 2; M = 3
3 demands with h = 1
optimum: F* = 8 (use two thin links - why?)
LB1(all) = 5, LB1(thick) = 6
LB2(all) = 7.5, LB2(thick) = 9
consider the B&B node N_0 = {thin}, N_1 = ∅, N_U = {thick}; current UB = 9 (use three thin links)
LB2 will bound (fathom) this node, LB1 will not!
119 strengthening MIP formulations: adding cuts
example: modular dimensioning - NP-hard
idea: improve the LR bound to speed up B&B
variables: x_dp - flow of demand d on path p; y_e - capacity of link e
objective: minimize F = Σ_e ξ_e y_e
constraints:
Σ_p x_dp = h_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp ≤ M y_e, e=1,2,…,E
x, y non-negative, y integer
120 example: modular dimensioning - equivalent formulation
h'_d = h_d/M, d=1,2,…,D (changing the units)
variables: x_dp - flow of demand d on path p; y_e - capacity of link e
objective: minimize F = Σ_e ξ_e y_e
constraints:
Σ_p x_dp = h'_d, d=1,2,…,D
Σ_d Σ_p δ_edp x_dp ≤ y_e, e=1,2,…,E
x, y non-negative, y integer
121 example: modular dimensioning - valid inequality
for a cut (V1,V2) of the network graph (V1 ∪ V2 = V, V1 ∩ V2 = ∅, V1, V2 ≠ ∅):
equivalent formulation: Σ over e ∈ E(V1,V2) of y_e ≥ ⌈ Σ over d ∈ D(V1,V2) of h'_d ⌉
original formulation: Σ over e ∈ E(V1,V2) of y_e ≥ ⌈ Σ over d ∈ D(V1,V2) of h_d / M ⌉
for example: generate such inequalities for all nodes and for all pairs of nodes
122 IP: example revisited
IP: maximize z = 5x1 + 5x2
subject to 2x1 + x2 ≤ 10; x1 + 2x2 ≤ 10; x1, x2 ≥ 0 and integer
[figure: feasible region with objective max 5x1 + 5x2]
123 conv(x IP ) IP maximize z = 5x 1 + 5x 2 subject to 2x 1 + x 2 10 x 1 + 2x 2 10 x 1 + x 2 6 (cut) x 1 0, x 2 0 and integer x x 1 Michał Pióro 123
124 cutting plane method for IP (Gomory)
Definitions:
P_LP = { x ∈ R^n : Ax ≤ b, x ≥ 0 } - LP domain
X_IP = { x ∈ Z^n : Ax ≤ b, x ≥ 0 } - IP domain
P_IP = conv(X_IP)
An inequality fx ≤ f0 is called a valid inequality for IP if fx ≤ f0 for all x ∈ X_IP.
A valid inequality for IP is a cut (or cutting plane) if P_LP ∩ { x ∈ R^n : fx ≤ f0 } ⊂ P_LP, where the containment is proper.
[figure: examples of a non-valid inequality, a valid inequality (VI), a cut, and a facet of P_IP relative to P_LP]
Recall that IP is equivalent to the LP: max { cx : x ∈ P_IP }
(see the tutorial by Manfred Padberg: Cutting Plane Methods for Mixed-Integer Programming)
125 cutting plane method - continued
Let P^0 = P_LP = { x ∈ R^n : Ax ≤ b, x ≥ 0 } and z^0 = cx^0 = max { cx : x ∈ P^0 }.
If x^0 ∈ Z^n or z^0 ∈ {-∞,+∞} then STOP.
Otherwise, let F be a family of cuts for MIP such that fx^0 > f0 for all (f,f0) ∈ F.
Let P^1 = P^0 ∩ { x ∈ R^n : fx ≤ f0 for all (f,f0) ∈ F } ⊂ P^0 and z^1 = cx^1 = max { cx : x ∈ P^1 }.
126 cutting plane method - continued
Note that X_IP ⊆ P^1 ⊆ P^0 and we can iterate, generating a sequence of polyhedra
P^0 ⊃ P^1 ⊃ ... ⊃ P^k ⊃ P^(k+1) ⊃ ... ⊇ conv(X_IP) ⊇ X_IP
such that z^(k+1) = cx^(k+1) ≤ z^k = cx^k, where z^k = cx^k = max { cx : x ∈ P^k }.
We stop when x^k ∈ Z^n or z^k = -∞ (i.e., when P^k = ∅).
127 Gomory cutting plane method - details for IP
Integer Program (IP): maximize z = cx   (z_IP)
subject to Ax = b, x ≥ 0 (linear constraints), x integer (integrality constraint)
Assumption: A, b integer
Idea:
- solve the associated LP relaxation and find an optimal basis
- choose a basic variable that is not integer
- generate a Chvátal-Gomory inequality from the constraint associated with this basic variable, cutting off the current relaxed solution
(Wolsey, Integer Programming)
128 Chvátal-Gomory valid inequalities
For the set X = { x ∈ R^n_+ : Ax ≤ b } ∩ Z^n (A is m x n, a_j is its j-th column) and any u ∈ R^m_+ the following are valid inequalities:
Σ_j (u a_j) x_j ≤ ub   (because u ≥ 0)
Σ_j ⌊u a_j⌋ x_j ≤ ub   (because x ≥ 0)
Σ_j ⌊u a_j⌋ x_j ≤ ⌊ub⌋   (because the left-hand side is integer)
(the first inequality is obtained by multiplying the i-th row of Ax ≤ b by u_i and summing up all the rows)
Theorem: Every valid inequality for X can be obtained by applying the above Chvátal-Gomory procedure a finite number of times.
(Wolsey, Integer Programming)
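One Chvátal-Gomory round is simple arithmetic; a minimal sketch applying it to the earlier IP example (2x1 + x2 ≤ 10, x1 + 2x2 ≤ 10) with multipliers u = (1/3, 1/3), which recovers the cut x1 + x2 ≤ 6 shown for conv(X_IP):

```python
from fractions import Fraction as F
import math

def chvatal_gomory(A, b, u):
    """One C-G round: for u >= 0, derive the valid inequality
    sum_j floor(u * a_j) x_j <= floor(u * b) for integer x >= 0 with Ax <= b."""
    n = len(A[0])
    coeffs = [math.floor(sum(ui * A[i][j] for i, ui in enumerate(u)))
              for j in range(n)]
    rhs = math.floor(sum(ui * b[i] for i, ui in enumerate(u)))
    return coeffs, rhs

A = [[2, 1], [1, 2]]
b = [10, 10]
print(chvatal_gomory(A, b, [F(1, 3), F(1, 3)]))   # -> ([1, 1], 6)
```

With these multipliers u a_j = 1 for both columns and ub = 20/3, so rounding down yields x1 + x2 ≤ 6, which cuts off the fractional LR optimum (10/3, 10/3).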
129 Gomory integer cut
from the final simplex tableau:
max c0 + Σ over j ∈ NB of c_j x_j
x_u + Σ over j ∈ NB of a_uj x_j = b_u, u=1,2,...,m (x_u - basic variables)
x ≥ 0 (where c_j ≤ 0 for j ∈ NB, b_u ≥ 0 for u=1,2,...,m)
take u with fractional b_u and add the inequality:
x_u + Σ over j ∈ NB of ⌊a_uj⌋ x_j ≤ ⌊b_u⌋
this inequality cuts off the current fractional solution x*
130 Gomory integer cut - fractional form
starting from the same tableau row x_u + Σ over j ∈ NB of a_uj x_j = b_u and the cut x_u + Σ over j ∈ NB of ⌊a_uj⌋ x_j ≤ ⌊b_u⌋, eliminate x_u to obtain the equivalent inequality:
Σ over j ∈ NB of (a_uj - ⌊a_uj⌋) x_j ≥ b_u - ⌊b_u⌋, i.e., Σ over j ∈ NB of f_j x_j ≥ f_0
where f_j = a_uj - ⌊a_uj⌋ for j ∈ NB and f_0 = b_u - ⌊b_u⌋.
Note that the new slack variable s = -f_0 + Σ over j ∈ NB of f_j x_j is a non-negative integer.
131 cutting plane method - discussion
Is it always possible to find a cut that cuts off the current LP optimum? YES - by the preceding Gomory construction.
Does there exist a cut-generation mechanism that guarantees termination in a finite number of steps? Only for pure IP (and even then the number of iterations is in general exponential). For general mixed-integer programs this is an open question; for MIPs, finite convergence is guaranteed if the objective function is integer-valued.
132 perfect matching problem (PMP)
Perfect matching: a subset M of |V|/2 links in an undirected graph G = (V,E) that covers all nodes (the number of nodes must be even).
Perfect matching problem (PMP): find a perfect matching x = (x_1,x_2,…,x_E) minimizing z = cx.
PMP as BP:
minimize cx
subject to Σ over e ∈ δ(v) of x_e = 1, v ∈ V; x_e ∈ {0,1}, e ∈ E
[figure: a graph with a perfect matching indicated by x_e = 1 on the matching edges and x_e = 0 elsewhere]
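For very small instances the BP above can be solved by brute force over edge subsets, which makes the degree constraints concrete; a minimal sketch on a made-up 4-cycle (on a 4-cycle the only perfect matchings are the two pairs of opposite edges):

```python
from itertools import combinations

def min_perfect_matching(n, edges, cost):
    """Brute-force PMP on nodes 0..n-1: try every set of n/2 edges and keep
    the cheapest one in which each node is covered exactly once."""
    best = None
    for M in combinations(range(len(edges)), n // 2):
        covered = [v for i in M for v in edges[i]]
        if len(set(covered)) == n:          # n/2 edges, n distinct endpoints
            z = sum(cost[i] for i in M)
            if best is None or z < best[0]:
                best = (z, M)
    return best

# 4-cycle 0-1-2-3-0: matchings are {0-1, 2-3} and {1-2, 3-0}
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(min_perfect_matching(4, edges, [1, 5, 2, 5]))   # -> (3, (0, 2))
```

This enumerates C(|E|, |V|/2) subsets, so it is only a teaching aid; the B&C approach of the next slide is what scales.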
133 B&C - perfect matching problem
PMP as a non-compact LP:
minimize cx
subject to Σ over e ∈ δ(v) of x_e = 1, v ∈ V; 0 ≤ x_e ≤ 1, e ∈ E
x(δ(U)) ≥ 1, U ⊂ V, |U| ≥ 3 and odd   (*)
[figure: a fractional solution with x_e = ½ on odd cycles, cut off by constraints of type (*)]
B&C: start B&B with no constraints of type (*); generate new (global) constraints at the B&B nodes.
C&B: generate many constraints of type (*) up front (e.g., for all triangles and 5-element subsets) and then start B&B.
134 B&B algorithm - comments
branch-and-bound (B&B):
- MIP can always be converted into binary MIP via the expansion x_j = 2^0 u_j0 + 2^1 u_j1 + … + 2^q u_jq (for x_j ≤ 2^(q+1) - 1)
- Lagrangean relaxation can also be used for finding lower bounds (instead of linear relaxation)
branch-and-price (B&P): solving the LP subproblems at the B&B nodes by path generation
branch-and-cut (B&C): a combination of B&B with the cutting plane method
- the most effective exact approach to NP-complete MIPs
- idea: add cuts (ideally, cuts defining facets of the integer polyhedron)
- cut generation is problem dependent, not based on general formulas such as Gomory fractional cuts
- cuts are generated at the B&B nodes
135 lecture 8 - wireless mesh network (WMN) design
WMNs:
- provide inexpensive broadband access to the Internet
- are deployed by communities/authorities/companies in metropolitan and residential areas
- are based on wireless networking standards: the IEEE 802.11 family (Wi-Fi), the IEEE 802.16 family (WiMAX), IEEE 802.15 (Bluetooth, home applications)
- use off-the-shelf wireless communication components and technologies
specific features of radio communications and the data link layer protocols make optimization difficult
136 broadband Internet access via WMN
Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16:38 2001 Linear programming Optimization Problems General optimization problem max{z(x) f j (x) 0,x D} or min{z(x) f j (x) 0,x D}
More informationLinear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004
Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define
More informationCombinatorial Optimization
Combinatorial Optimization Lecture notes, WS 2010/11, TU Munich Prof. Dr. Raymond Hemmecke Version of February 9, 2011 Contents 1 The knapsack problem 1 1.1 Complete enumeration..................................
More informationDuality of LPs and Applications
Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will
More informationLecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem
Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R
More informationCS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi
CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source
More informationCSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming
CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150
More information4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n
2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6
More informationNotes on Dantzig-Wolfe decomposition and column generation
Notes on Dantzig-Wolfe decomposition and column generation Mette Gamst November 11, 2010 1 Introduction This note introduces an exact solution method for mathematical programming problems. The method is
More informationSection Notes 9. Midterm 2 Review. Applied Math / Engineering Sciences 121. Week of December 3, 2018
Section Notes 9 Midterm 2 Review Applied Math / Engineering Sciences 121 Week of December 3, 2018 The following list of topics is an overview of the material that was covered in the lectures and sections
More information4.5 Simplex method. LP in standard form: min z = c T x s.t. Ax = b
4.5 Simplex method LP in standard form: min z = c T x s.t. Ax = b x 0 George Dantzig (1914-2005) Examine a sequence of basic feasible solutions with non increasing objective function values until an optimal
More informationLINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm
Linear programming Linear programming. Optimize a linear function subject to linear inequalities. (P) max c j x j n j= n s. t. a ij x j = b i i m j= x j 0 j n (P) max c T x s. t. Ax = b Lecture slides
More information3.3 Easy ILP problems and totally unimodular matrices
3.3 Easy ILP problems and totally unimodular matrices Consider a generic ILP problem expressed in standard form where A Z m n with n m, and b Z m. min{c t x : Ax = b, x Z n +} (1) P(b) = {x R n : Ax =
More information3.4 Relaxations and bounds
3.4 Relaxations and bounds Consider a generic Discrete Optimization problem z = min{c(x) : x X} with an optimal solution x X. In general, the algorithms generate not only a decreasing sequence of upper
More informationTechnische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502)
Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik Combinatorial Optimization (MA 4502) Dr. Michael Ritter Problem Sheet 1 Homework Problems Exercise
More informationLecture slides by Kevin Wayne
LINEAR PROGRAMMING I a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM Linear programming
More informationLecture 8: Column Generation
Lecture 8: Column Generation (3 units) Outline Cutting stock problem Classical IP formulation Set covering formulation Column generation A dual perspective Vehicle routing problem 1 / 33 Cutting stock
More informationMotivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory
Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization
More informationCOMP3121/9101/3821/9801 Lecture Notes. Linear Programming
COMP3121/9101/3821/9801 Lecture Notes Linear Programming LiC: Aleks Ignjatovic THE UNIVERSITY OF NEW SOUTH WALES School of Computer Science and Engineering The University of New South Wales Sydney 2052,
More informationLectures 6, 7 and part of 8
Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,
More informationLINEAR PROGRAMMING II
LINEAR PROGRAMMING II LP duality strong duality theorem bonus proof of LP duality applications Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM LINEAR PROGRAMMING II LP duality Strong duality
More informationIntroduction to Linear and Combinatorial Optimization (ADM I)
Introduction to Linear and Combinatorial Optimization (ADM I) Rolf Möhring based on the 20011/12 course by Martin Skutella TU Berlin WS 2013/14 1 General Remarks new flavor of ADM I introduce linear and
More informationChapter 3, Operations Research (OR)
Chapter 3, Operations Research (OR) Kent Andersen February 7, 2007 1 Linear Programs (continued) In the last chapter, we introduced the general form of a linear program, which we denote (P) Minimize Z
More information7. Lecture notes on the ellipsoid algorithm
Massachusetts Institute of Technology Michel X. Goemans 18.433: Combinatorial Optimization 7. Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm proposed for linear
More informationTRANSPORTATION PROBLEMS
Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations
More informationBranch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems
Branch-and-cut Approaches for Chance-constrained Formulations of Reliable Network Design Problems Yongjia Song James R. Luedtke August 9, 2012 Abstract We study solution approaches for the design of reliably
More informationOptimization Exercise Set n. 4 :
Optimization Exercise Set n. 4 : Prepared by S. Coniglio and E. Amaldi translated by O. Jabali 2018/2019 1 4.1 Airport location In air transportation, usually there is not a direct connection between every
More informationAn introductory example
CS1 Lecture 9 An introductory example Suppose that a company that produces three products wishes to decide the level of production of each so as to maximize profits. Let x 1 be the amount of Product 1
More informationA Review of Linear Programming
A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex
More informationSection Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints.
Section Notes 8 Integer Programming II Applied Math 121 Week of April 5, 2010 Goals for the week understand IP relaxations be able to determine the relative strength of formulations understand the branch
More informationLinear Programming. Linear Programming I. Lecture 1. Linear Programming. Linear Programming
Linear Programming Linear Programming Lecture Linear programming. Optimize a linear function subject to linear inequalities. (P) max " c j x j n j= n s. t. " a ij x j = b i # i # m j= x j 0 # j # n (P)
More informationResource Constrained Project Scheduling Linear and Integer Programming (1)
DM204, 2010 SCHEDULING, TIMETABLING AND ROUTING Lecture 3 Resource Constrained Project Linear and Integer Programming (1) Marco Chiarandini Department of Mathematics & Computer Science University of Southern
More informationMath 341: Convex Geometry. Xi Chen
Math 341: Convex Geometry Xi Chen 479 Central Academic Building, University of Alberta, Edmonton, Alberta T6G 2G1, CANADA E-mail address: xichen@math.ualberta.ca CHAPTER 1 Basics 1. Euclidean Geometry
More informationTHE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I
LN/MATH2901/CKC/MS/2008-09 THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS Operations Research I Definition (Linear Programming) A linear programming (LP) problem is characterized by linear functions
More informationLecture 3: Semidefinite Programming
Lecture 3: Semidefinite Programming Lecture Outline Part I: Semidefinite programming, examples, canonical form, and duality Part II: Strong Duality Failure Examples Part III: Conditions for strong duality
More informationIntroduction to Integer Programming
Lecture 3/3/2006 p. /27 Introduction to Integer Programming Leo Liberti LIX, École Polytechnique liberti@lix.polytechnique.fr Lecture 3/3/2006 p. 2/27 Contents IP formulations and examples Total unimodularity
More informationCutting Plane Methods II
6.859/5.083 Integer Programming and Combinatorial Optimization Fall 2009 Cutting Plane Methods II Gomory-Chvátal cuts Reminder P = {x R n : Ax b} with A Z m n, b Z m. For λ [0, ) m such that λ A Z n, (λ
More informationLecture: Algorithms for LP, SOCP and SDP
1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationLecture notes on the ellipsoid algorithm
Massachusetts Institute of Technology Handout 1 18.433: Combinatorial Optimization May 14th, 007 Michel X. Goemans Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm
More informationInteger Linear Programs
Lecture 2: Review, Linear Programming Relaxations Today we will talk about expressing combinatorial problems as mathematical programs, specifically Integer Linear Programs (ILPs). We then see what happens
More information1 Overview. 2 Extreme Points. AM 221: Advanced Optimization Spring 2016
AM 22: Advanced Optimization Spring 206 Prof. Yaron Singer Lecture 7 February 7th Overview In the previous lectures we saw applications of duality to game theory and later to learning theory. In this lecture
More informationWeek Cuts, Branch & Bound, and Lagrangean Relaxation
Week 11 1 Integer Linear Programming This week we will discuss solution methods for solving integer linear programming problems. I will skip the part on complexity theory, Section 11.8, although this is
More informationComputational Integer Programming. Lecture 2: Modeling and Formulation. Dr. Ted Ralphs
Computational Integer Programming Lecture 2: Modeling and Formulation Dr. Ted Ralphs Computational MILP Lecture 2 1 Reading for This Lecture N&W Sections I.1.1-I.1.6 Wolsey Chapter 1 CCZ Chapter 2 Computational
More informationCombinatorial Optimization
Combinatorial Optimization 2017-2018 1 Maximum matching on bipartite graphs Given a graph G = (V, E), find a maximum cardinal matching. 1.1 Direct algorithms Theorem 1.1 (Petersen, 1891) A matching M is
More information3.8 Strong valid inequalities
3.8 Strong valid inequalities By studying the problem structure, we can derive strong valid inequalities which lead to better approximations of the ideal formulation conv(x ) and hence to tighter bounds.
More informationMVE165/MMG631 Linear and integer optimization with applications Lecture 8 Discrete optimization: theory and algorithms
MVE165/MMG631 Linear and integer optimization with applications Lecture 8 Discrete optimization: theory and algorithms Ann-Brith Strömberg 2017 04 07 Lecture 8 Linear and integer optimization with applications
More informationReconnect 04 Introduction to Integer Programming
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, Reconnect 04 Introduction to Integer Programming Cynthia Phillips, Sandia National Laboratories Integer programming
More informationLift-and-Project Inequalities
Lift-and-Project Inequalities Q. Louveaux Abstract The lift-and-project technique is a systematic way to generate valid inequalities for a mixed binary program. The technique is interesting both on the
More informationOptimization Exercise Set n.5 :
Optimization Exercise Set n.5 : Prepared by S. Coniglio translated by O. Jabali 2016/2017 1 5.1 Airport location In air transportation, usually there is not a direct connection between every pair of airports.
More informationInteger Programming, Part 1
Integer Programming, Part 1 Rudi Pendavingh Technische Universiteit Eindhoven May 18, 2016 Rudi Pendavingh (TU/e) Integer Programming, Part 1 May 18, 2016 1 / 37 Linear Inequalities and Polyhedra Farkas
More informationAppendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS
Appendix PRELIMINARIES 1. THEOREMS OF ALTERNATIVES FOR SYSTEMS OF LINEAR CONSTRAINTS Here we consider systems of linear constraints, consisting of equations or inequalities or both. A feasible solution
More informationCS 6820 Fall 2014 Lectures, October 3-20, 2014
Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given
More informationLinear Programming and the Simplex method
Linear Programming and the Simplex method Harald Enzinger, Michael Rath Signal Processing and Speech Communication Laboratory Jan 9, 2012 Harald Enzinger, Michael Rath Jan 9, 2012 page 1/37 Outline Introduction
More informationOutline. Outline. Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING. 1. Scheduling CPM/PERT Resource Constrained Project Scheduling Model
Outline DMP204 SCHEDULING, TIMETABLING AND ROUTING Lecture 3 and Mixed Integer Programg Marco Chiarandini 1. Resource Constrained Project Model 2. Mathematical Programg 2 Outline Outline 1. Resource Constrained
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5
Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize
More informationLinear Programming. Larry Blume Cornell University, IHS Vienna and SFI. Summer 2016
Linear Programming Larry Blume Cornell University, IHS Vienna and SFI Summer 2016 These notes derive basic results in finite-dimensional linear programming using tools of convex analysis. Most sources
More information15-780: LinearProgramming
15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear
More informationThe Simplex Algorithm
8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.
More informationLinear Programming Inverse Projection Theory Chapter 3
1 Linear Programming Inverse Projection Theory Chapter 3 University of Chicago Booth School of Business Kipp Martin September 26, 2017 2 Where We Are Headed We want to solve problems with special structure!
More informationIP Duality. Menal Guzelsoy. Seminar Series, /21-07/28-08/04-08/11. Department of Industrial and Systems Engineering Lehigh University
IP Duality Department of Industrial and Systems Engineering Lehigh University COR@L Seminar Series, 2005 07/21-07/28-08/04-08/11 Outline Duality Theorem 1 Duality Theorem Introduction Optimality Conditions
More informationFundamental Theorems of Optimization
Fundamental Theorems of Optimization 1 Fundamental Theorems of Math Prog. Maximizing a concave function over a convex set. Maximizing a convex function over a closed bounded convex set. 2 Maximizing Concave
More information3.7 Strong valid inequalities for structured ILP problems
3.7 Strong valid inequalities for structured ILP problems By studying the problem structure, we can derive strong valid inequalities yielding better approximations of conv(x ) and hence tighter bounds.
More informationΩ R n is called the constraint set or feasible set. x 1
1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We
More informationDuality in LPP Every LPP called the primal is associated with another LPP called dual. Either of the problems is primal with the other one as dual. The optimal solution of either problem reveals the information
More informationDiscrete Optimization
Prof. Friedrich Eisenbrand Martin Niemeier Due Date: April 15, 2010 Discussions: March 25, April 01 Discrete Optimization Spring 2010 s 3 You can hand in written solutions for up to two of the exercises
More informationDiscrete (and Continuous) Optimization WI4 131
Discrete (and Continuous) Optimization WI4 131 Kees Roos Technische Universiteit Delft Faculteit Electrotechniek, Wiskunde en Informatica Afdeling Informatie, Systemen en Algoritmiek e-mail: C.Roos@ewi.tudelft.nl
More informationChapter 11. Approximation Algorithms. Slides by Kevin Wayne Pearson-Addison Wesley. All rights reserved.
Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved. 1 Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should
More informationPolynomiality of Linear Programming
Chapter 10 Polynomiality of Linear Programming In the previous section, we presented the Simplex Method. This method turns out to be very efficient for solving linear programmes in practice. While it is
More informationMath 5593 Linear Programming Week 1
University of Colorado Denver, Fall 2013, Prof. Engau 1 Problem-Solving in Operations Research 2 Brief History of Linear Programming 3 Review of Basic Linear Algebra Linear Programming - The Story About
More informationAdvanced Linear Programming: The Exercises
Advanced Linear Programming: The Exercises The answers are sometimes not written out completely. 1.5 a) min c T x + d T y Ax + By b y = x (1) First reformulation, using z smallest number satisfying x z
More information