3.10 Lagrangian relaxation
Consider a generic ILP problem $\min\{c^t x : Ax \ge b,\ Dx \ge d,\ x \in \mathbb{Z}^n\}$ with integer coefficients, and suppose that $Dx \ge d$ are the complicating constraints. Often both the linear relaxation and the relaxation obtained by deleting $Dx \ge d$ yield weak bounds (e.g., deleting the cut-set constraints of the TSP or the demand constraints of the UFL).

More general setting:

$\min\{c^t x : Dx \ge d,\ x \in X \subseteq \mathbb{R}^n\}$   (1)

Idea: delete the complicating constraints $Dx \ge d$ and, for each one of them, add to the objective function a term with a multiplier $u_i$ which penalizes its violation and which is $\le 0$ for all feasible solutions of problem (1).

Edoardo Amaldi (PoliMI), Optimization
Definition: given a problem

$z = \min\{c^t x : Dx \ge d,\ x \in X \subseteq \mathbb{R}^n\}$   (2)

for each vector of Lagrange multipliers $u \ge 0$, the Lagrangian subproblem is

$w(u) = \min\{c^t x + u^t(d - Dx) : x \in X \subseteq \mathbb{R}^n\}$   (3)

where $L(x, u) = c^t x + u^t(d - Dx)$ is the Lagrangian function of the primal problem (2), and $w(u) = \min\{L(x, u) : x \in X\}$ is the dual function.

Proposition: for any $u \ge 0$, the Lagrangian subproblem (3) is a relaxation of problem (2).

Proof: clearly $\{x \in X : Dx \ge d\} \subseteq X$. Moreover, for each $u \ge 0$ and each $x$ feasible for (2), we have $w(u) \le c^t x$. Indeed $w(u) \le c^t x + u^t(d - Dx) \le c^t x$, since $u^t(d - Dx) \le 0$ and $w(u)$ is the minimum value of $c^t x + u^t(d - Dx)$ over $x \in X$.

Corollary: if $z = \min\{c^t x : Dx \ge d,\ x \in X\}$ is finite, then $w(u) \le z$ for all $u \ge 0$.
To determine the tightest lower bound:

Definition: the Lagrangian dual of the primal problem (2) is

$w^* = \max_{u \ge 0} w(u)$   (4)

with only nonnegativity constraints on $u$.

Note: by relaxing (dualizing) linear constraints, the objective function remains linear. The other constraints can be of any type, provided subproblem (3) is sufficiently easy. For LPs the Lagrangian dual coincides with the LP dual (exercise 5.2).

Corollary (weak duality): for each pair of feasible solutions $x \in \{x \in X : Dx \ge d\}$ of the primal (2) and $u \ge 0$ of the Lagrangian dual (4), we have

$w(u) \le c^t x$.   (5)
Consequences:

i) If $\bar{x}$ is feasible for the primal (2), $\tilde{u}$ is feasible for the Lagrangian dual (4) and $c^t \bar{x} = w(\tilde{u})$, then $\bar{x}$ and $\tilde{u}$ are optimal for (2) and (4), respectively.

ii) In particular $w^* = \max_{u \ge 0} w(u) \le z = \min\{c^t x : Dx \ge d,\ x \in X\}$. If one problem is unbounded, the other one is infeasible.

Recall: for any primal-dual pair of bounded LPs we have strong duality ($w^* = z$).

Observation: unlike for LPs, discrete optimization problems can have a duality gap, that is, $w^* < z$.

Lagrangian relaxation of equality constraints. The only difference: the Lagrange multipliers of equalities are unrestricted in sign ($u_i \in \mathbb{R}$). If all the $m$ relaxed/dualized constraints are equality constraints, the Lagrangian dual is $\max_{u \in \mathbb{R}^m} w(u)$.
Example 1: binary knapsack. Consider

max $z = 10x_1 + 4x_2 + 14x_3$
s.t. $3x_1 + x_2 + 4x_3 \le 4$
$x_1, x_2, x_3 \in \{0,1\}$

Relaxing the capacity constraint, we obtain the Lagrangian function

$L(x, u) = 10x_1 + 4x_2 + 14x_3 + u(4 - 3x_1 - x_2 - 4x_3)$.

Dual function: $w(u) = \max_{x \in \{0,1\}^3} 10x_1 + 4x_2 + 14x_3 + u(4 - 3x_1 - x_2 - 4x_3)$, for $u \ge 0$.

The Lagrangian subproblem $w(u) = \max_{x \in \{0,1\}^3} (10-3u)x_1 + (4-u)x_2 + (14-4u)x_3 + 4u$ has an obvious optimal solution: set $x_i = 1$ if its coefficient is positive, $x_i = 0$ if it is negative, and to an arbitrary value if it is zero.

Lagrangian dual (the primal is a maximization, so the dual is a minimization):

$\min_{u \ge 0} w(u) = \min_{u \ge 0} \big( \max_{x \in \{0,1\}^3} (10-3u)x_1 + (4-u)x_2 + (14-4u)x_3 + 4u \big)$
Dual function: $w(u) = \max_{x \in \{0,1\}^3} (10-3u)x_1 + (4-u)x_2 + (14-4u)x_3 + 4u$.

Values of $u$ for which the coefficients of $x_1, x_2, x_3$ become nonpositive: $u \ge 10/3$ for $x_1$, $u \ge 14/4$ for $x_3$, and $u \ge 4$ for $x_2$.

Optimal solution of the Lagrangian subproblem as a function of $u$: $x = (1,1,1)$ for $u \in [0, 10/3]$, $x = (0,1,1)$ for $u \in [10/3, 14/4]$, $x = (0,1,0)$ for $u \in [14/4, 4]$, $x = (0,0,0)$ for $u \in [4, \infty)$.

Thus

$w(u) = 28 - 4u$ for $u \in [0, 10/3]$
$w(u) = 18 - u$ for $u \in [10/3, 14/4]$
$w(u) = 4 + 3u$ for $u \in [14/4, 4]$
$w(u) = 4u$ for $u \in [4, \infty)$
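As a quick check, the dual function of this example can be evaluated by brute-force enumeration of $x \in \{0,1\}^3$ and compared with the piecewise expression above (a minimal sketch using the instance data of Example 1):

```python
from itertools import product

# Data of Example 1 (max 10x1 + 4x2 + 14x3 s.t. 3x1 + x2 + 4x3 <= 4).
p = [10, 4, 14]
a = [3, 1, 4]
b = 4

def w(u):
    """Dual function: max over x in {0,1}^3 of p'x + u*(b - a'x)."""
    return max(sum(pi * xi for pi, xi in zip(p, x))
               + u * (b - sum(ai * xi for ai, xi in zip(a, x)))
               for x in product((0, 1), repeat=3))

print(w(1))  # piece 28 - 4u: prints 24
print(w(5))  # piece 4u:      prints 20
```

Evaluating $w$ on a grid of $u$ values and taking the minimum recovers the optimal dual value at $u = 14/4$.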
(Plot of the dual function $w(u)$.)

Lagrangian dual: $\min_{u \ge 0} w(u) = \min_{u \ge 0} \big( \max_{x \in \{0,1\}^3} (10-3u)x_1 + (4-u)x_2 + (14-4u)x_3 + 4u \big)$, with optimal solution $u^* = 14/4$ and $w^* = w(u^*) = 14.5$.
Example 2: Uncapacitated Facility Location (UFL). Variant with profits $p_{ij}$, fixed costs $f_j$ for opening the depots in the candidate sites, and total profit to be maximized. MILP formulation:

$z = \max \sum_{i \in M} \sum_{j \in N} p_{ij} x_{ij} - \sum_{j \in N} f_j y_j$
s.t. $\sum_{j \in N} x_{ij} = 1$ for all $i \in M$   (6)
$x_{ij} \le y_j$ for all $i \in M, j \in N$
$y_j \in \{0,1\}$ for all $j \in N$
$0 \le x_{ij} \le 1$ for all $i \in M, j \in N$

Relaxing the demand constraints (6), we obtain the Lagrangian subproblem

$w(u) = \max \sum_{i \in M} \sum_{j \in N} (p_{ij} - u_i) x_{ij} - \sum_{j \in N} f_j y_j + \sum_{i \in M} u_i$
s.t. $x_{ij} \le y_j$ for all $i \in M, j \in N$   (7)
$y_j \in \{0,1\}$ for all $j \in N$   (8)
$0 \le x_{ij} \le 1$ for all $i \in M, j \in N$   (9)

which decomposes into $|N|$ independent subproblems, one for each candidate site $j$.
Indeed $w(u) = \sum_{j \in N} w_j(u) + \sum_{i \in M} u_i$, where

$w_j(u) = \max \sum_{i \in M} (p_{ij} - u_i) x_{ij} - f_j y_j$   (10)
s.t. $x_{ij} \le y_j$ for all $i \in M$
$y_j \in \{0,1\}$
$0 \le x_{ij} \le 1$ for all $i \in M$

For each $j \in N$, subproblem (10) can be solved by inspection. If $y_j = 0$, then $x_{ij} = 0$ for each $i$ and the objective function value is 0. If $y_j = 1$, it is convenient to serve all clients with positive reduced profit, namely $x_{ij} = 1$ for all $i$ such that $p_{ij} - u_i > 0$, with an objective function value of $\sum_{i \in M} \max\{p_{ij} - u_i, 0\} - f_j$. Thus

$w_j(u) = \max\{0,\ \sum_{i \in M} \max\{p_{ij} - u_i, 0\} - f_j\}$.

Example: see Chapter 10 of L. Wolsey, Integer Programming.
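The inspection rule can be checked against brute force on a tiny instance (a sketch; the profits $p_{ij}$, fixed cost $f_j$ and multipliers $u_i$ below are made-up data, not those of the Wolsey example):

```python
from itertools import product

# One candidate site j, three clients i = 0, 1, 2 (hypothetical data).
p_j = [6, 3, 8]      # profits p_ij
f_j = 5              # fixed opening cost of site j
u = [2, 4, 3]        # current Lagrange multipliers u_i

# Inspection formula: w_j(u) = max{0, sum_i max{p_ij - u_i, 0} - f_j}
w_inspect = max(0, sum(max(pij - ui, 0) for pij, ui in zip(p_j, u)) - f_j)

# Brute force over y_j in {0,1} and x_ij in {0,1} with x_ij <= y_j.
w_brute = max(sum((pij - ui) * xij for pij, ui, xij in zip(p_j, u, x)) - f_j * y
              for y in (0, 1)
              for x in product(range(y + 1), repeat=3))

print(w_inspect, w_brute)  # prints 4 4
```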
Sometimes solving the Lagrangian subproblem (3) yields an optimal solution of the primal (2).

Proposition: if $u \ge 0$ and

i) $x(u)$ is an optimal solution of the Lagrangian subproblem (3),
ii) $Dx(u) \ge d$,
iii) $(Dx(u))_i = d_i$ for each $u_i > 0$ (complementary slackness conditions),

then $x(u)$ is an optimal solution of the primal problem (2).

Proof: due to (i) we have $w^* \ge w(u) = c^t x(u) + u^t(d - Dx(u))$, and due to (iii) we have $c^t x(u) + u^t(d - Dx(u)) = c^t x(u)$. According to (ii), $x(u)$ is a feasible solution of the primal (2) and hence $c^t x(u) \ge z$. Thus $w^* \ge c^t x(u) + u^t(d - Dx(u)) = c^t x(u) \ge z$ and, since $w^* \le z$, everything holds with equality and $x(u)$ is an optimal solution of the primal (2).

Observation: if only equalities are relaxed, conditions (iii) are automatically satisfied, and an optimal solution of the Lagrangian subproblem is optimal for the primal (2) whenever it is feasible.
Proposition: the dual function $w(u)$ is concave.

Proof: consider any pair $u_1 \ge 0$ and $u_2 \ge 0$. For any $\alpha$ with $0 \le \alpha \le 1$, let $\bar{x}$ be an optimal solution of the Lagrangian subproblem (3) for $\tilde{u} = \alpha u_1 + (1-\alpha) u_2$, namely $w(\tilde{u}) = c^t \bar{x} + \tilde{u}^t(d - D\bar{x})$. By definition of $w(u)$, we have $w(u_1) \le c^t \bar{x} + u_1^t(d - D\bar{x})$ and $w(u_2) \le c^t \bar{x} + u_2^t(d - D\bar{x})$. Multiplying the first inequality by $\alpha$ and the second one by $1-\alpha$, we obtain

$\alpha w(u_1) + (1-\alpha) w(u_2) \le c^t \bar{x} + (\alpha u_1 + (1-\alpha) u_2)^t (d - D\bar{x}) = w(\alpha u_1 + (1-\alpha) u_2)$.
LP characterization of the Lagrangian dual

How strong is the lower bound on $z$ obtained by solving the Lagrangian dual?

Theorem: consider a generic ILP problem $\min\{c^t x : Ax \ge b,\ Dx \ge d,\ x \in \mathbb{Z}^n\}$ with integer coefficients. Let $w(u) = \min\{c^t x + u^t(d - Dx) : Ax \ge b,\ x \in \mathbb{Z}^n\}$ be the dual function, $w^* = \max_{u \ge 0} w(u)$ the optimal value of the Lagrangian dual, and $X = \{x \in \mathbb{Z}^n : Ax \ge b\}$. Then

$w^* = \min\{c^t x : Dx \ge d,\ x \in \mathrm{conv}(X)\}$.   (11)

The problem is convexified: the Lagrangian dual is characterized in terms of a Linear Program.

Corollary 1: since $\mathrm{conv}(X) \subseteq \{x \in \mathbb{R}^n : Ax \ge b\}$,

$z_{LP} = \min\{c^t x : Ax \ge b,\ Dx \ge d,\ x \in \mathbb{R}^n\} \le w^* \le z$.

Depending on the objective function, these inequalities can be strict, i.e., $z_{LP} < w^* < z$.
Illustration: from D. Bertsimas, R. Weismantel, Optimization over Integers, Dynamic Ideas, 2005.

Consider the ILP problem

min $3x_1 - x_2$
s.t. $x_1 - x_2 \ge -1$
$-x_1 + 2x_2 \le 5$
$3x_1 + 2x_2 \ge 3$
$6x_1 + x_2 \le 15$
$x_1, x_2 \ge 0$ integer

- Represent graphically the feasible region and the optimal solutions of the ILP and of its linear relaxation: $x_{ILP} = (1, 2)$ with $z_{ILP} = 1$, and $x_{LP} = (1/5, 6/5)$ with $z_{LP} = -3/5$.

- Apply Lagrangian relaxation to the first constraint. For every $u \ge 0$, the Lagrangian subproblem is

$w(u) = \min_{(x_1, x_2) \in X} 3x_1 - x_2 + u(-1 - x_1 + x_2)$

where $X$ is the set of all integer solutions that satisfy all the other constraints.
- Use the theorem to find the optimal value of the Lagrangian dual $w^* = \max_{u \ge 0} w(u)$ and the corresponding optimal solution $x_D = x(u^*)$, where $u^*$ is an optimal solution of the Lagrangian dual. Represent $\mathrm{conv}(X)$ and the polyhedron $\mathrm{conv}(X) \cap \{(x_1, x_2) \in \mathbb{R}^2 : x_1 - x_2 \ge -1\}$.

We obtain $x_D = (1/3, 4/3)$ with $w^* = -1/3$. Thus we have

$z_{LP} = -3/5 < w^* = -1/3 < z_{ILP} = 1$.

Drawing the dual function $w(u)$, it is possible to verify that the optimal solution of the Lagrangian dual is $u^* = 5/3$ with $w^* = -1/3$.
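These figures can be verified by enumeration (a sketch under the sign reconstruction used here: $X$ collects the integer points satisfying $-x_1 + 2x_2 \le 5$, $3x_1 + 2x_2 \ge 3$, $6x_1 + x_2 \le 15$, which make $X$ finite; the dual is maximized over a grid of rational multipliers):

```python
from fractions import Fraction
from itertools import product

# Integer points of X (all constraints except the dualized one x1 - x2 >= -1);
# 6x1 + x2 <= 15 and -x1 + 2x2 <= 5 bound the enumeration ranges.
X = [(x1, x2) for x1, x2 in product(range(3), range(4))
     if -x1 + 2 * x2 <= 5 and 3 * x1 + 2 * x2 >= 3 and 6 * x1 + x2 <= 15]

def w(u):
    """Dual function: min over X of 3x1 - x2 + u*(-1 - x1 + x2)."""
    return min(3 * x1 - x2 + u * (-1 - x1 + x2) for x1, x2 in X)

# Maximize w over a grid of rational multipliers u = k/30, 0 <= u <= 5.
best_u, best_w = max(((Fraction(k, 30), w(Fraction(k, 30))) for k in range(151)),
                     key=lambda t: t[1])
print(best_u, best_w)  # prints 5/3 -1/3
```

Exact rational arithmetic is used so that the grid point $u = 5/3$ is hit exactly.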
Proof: consider the case where $X = \{x^1, \ldots, x^k\}$ with $k$ finite, even though possibly huge. The Lagrangian dual amounts to maximizing a nondifferentiable piecewise linear concave function:

$w^* = \max_{u \ge 0} w(u) = \max_{u \ge 0} \min_{x \in X} [c^t x + u^t(d - Dx)] = \max_{u \ge 0} \min_{1 \le l \le k} [c^t x^l + u^t(d - Dx^l)]$

and it can be expressed as the following Linear Program:

$w^* = \max y$
s.t. $y \le c^t x^l + u^t(d - Dx^l)$ for $l = 1, \ldots, k$
$u \ge 0,\ y \in \mathbb{R}$

which contains a huge number of constraints. Taking its dual and applying strong duality, we obtain:

$w^* = \min \sum_{l=1}^{k} (c^t x^l) \mu_l$
s.t. $\sum_{l=1}^{k} (Dx^l - d) \mu_l \ge 0$
$\sum_{l=1}^{k} \mu_l = 1$
$\mu_l \ge 0$ for all $l$.
$w^* = \min \sum_{l=1}^{k} (c^t x^l) \mu_l$
s.t. $\sum_{l=1}^{k} (Dx^l - d) \mu_l \ge 0$
$\sum_{l=1}^{k} \mu_l = 1$
$\mu_l \ge 0$ for all $l$.

Setting $x = \sum_{l=1}^{k} \mu_l x^l$ with $\sum_{l=1}^{k} \mu_l = 1$ and $\mu_l \ge 0$ for each $l$, we obtain:

$w^* = \min c^t x$ s.t. $Dx \ge d,\ x \in \mathrm{conv}(X)$.

The result can be extended to the case where $X$ is the feasible region of any ILP.

(Figure: example of a nondifferentiable piecewise linear concave dual function $w(u)$.)
When is the Lagrangian relaxation bound not stronger than the linear relaxation one?

Corollary 2: if $X = \{x \in \mathbb{Z}^n : Ax \ge b\}$ and $\mathrm{conv}(X) = \{x \in \mathbb{R}^n : Ax \ge b\}$, then

$w^* = \max_{u \ge 0} w(u) = z_{LP} = \min\{c^t x : Ax \ge b,\ Dx \ge d,\ x \in \mathbb{R}^n\}$.

Example: given a generic binary knapsack problem and its linear relaxation

max $z = \sum_{i=1}^{n} p_i x_i$ s.t. $\sum_{i=1}^{n} a_i x_i \le b$, $x_i \in \{0,1\}$,

$z_{LP}^{KP} = \max_{x \in [0,1]^n} \{\sum_{i=1}^{n} p_i x_i : \sum_{i=1}^{n} a_i x_i \le b\}$,

the Lagrangian relaxation (of the capacity constraint) is as weak as the linear relaxation. Indeed, $X = \{0,1\}^n$ and obviously $\mathrm{conv}(X) = [0,1]^n$, whose constraints $0 \le x_i \le 1$ are already contained in the linear relaxation. According to Corollary 2, we have $w^* = z_{LP}^{KP}$.
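On the knapsack instance of Example 1 this equality can be observed numerically: the Lagrangian dual bound (computed by grid search) coincides with the LP relaxation bound, which for a knapsack is given by Dantzig's greedy rule (a sketch):

```python
from itertools import product

# Knapsack instance of Example 1.
p, a, b = [10, 4, 14], [3, 1, 4], 4

def w(u):
    """Dual function of the Lagrangian relaxation of the capacity constraint."""
    return max(sum(pi * xi for pi, xi in zip(p, x))
               + u * (b - sum(ai * xi for ai, xi in zip(a, x)))
               for x in product((0, 1), repeat=3))

# Lagrangian dual bound: minimum of w over a grid of multipliers.
w_star = min(w(k / 4) for k in range(41))

# LP relaxation solved by Dantzig's greedy rule (sort by profit/weight ratio,
# fill the knapsack, take a fractional part of the first item that overflows).
cap, z_lp = b, 0.0
for pi, ai in sorted(zip(p, a), key=lambda t: t[0] / t[1], reverse=True):
    take = min(1.0, cap / ai)
    z_lp += take * pi
    cap -= take * ai

print(w_star, z_lp)  # prints 14.5 14.5
```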
Solution of the Lagrangian duals

Generalization of the gradient method from functions of class $C^1$ to piecewise-$C^1$ (not everywhere differentiable) functions.

Definition: let $C \subseteq \mathbb{R}^n$ be a convex set and $f : C \to \mathbb{R}$ a convex function on $C$. A vector $\gamma \in \mathbb{R}^n$ is a subgradient of $f$ at $\bar{x} \in C$ if

$f(x) \ge f(\bar{x}) + \gamma^t (x - \bar{x})$ for all $x \in C$.

The subdifferential, denoted by $\partial f(\bar{x})$, is the set of all subgradients of $f$ at $\bar{x}$.

Examples:
- For $f(x) = x^2$, at $\bar{x} = 3$ the only subgradient is $\gamma = 6$. Indeed $0 \le (x-3)^2 = x^2 - 6x + 9$ implies that for each $x$: $f(x) = x^2 \ge 6x - 9 = 9 + 6(x - 3) = f(\bar{x}) + 6(x - \bar{x})$.
- For $f(x) = |x|$ it is clear that $\gamma = 1$ if $x > 0$, $\gamma = -1$ if $x < 0$, and $\partial f(0) = [-1, 1]$.
Properties:

1) A convex function $f : C \to \mathbb{R}$ has at least one subgradient at each interior point $\bar{x}$ of $C$. N.B.: the existence of at least one subgradient at every point of $\mathrm{int}(C)$, with $C$ convex, is a necessary and sufficient condition for $f$ to be convex on $\mathrm{int}(C)$.

2) If $f$ is convex and $\bar{x} \in \mathrm{int}(C)$, then $\partial f(\bar{x})$ is a nonempty, convex, closed and bounded set.

3) $x^*$ is a global minimum of $f$ if and only if $0 \in \partial f(x^*)$.
Subgradient method

Consider $\min_{x \in \mathbb{R}^n} f(x)$ with $f$ convex. Start from an arbitrary $x_0$. At the $k$-th iteration: pick any $\gamma_k \in \partial f(x_k)$ and set

$x_{k+1} := x_k - \alpha_k \gamma_k$ with $\alpha_k > 0$.

Observation: we do not perform a one-dimensional search because, for nondifferentiable functions, the opposite of a subgradient $\gamma \in \partial f(x)$ is not necessarily a descent direction (example: next slide).

For sufficiently small stepsizes, one gets closer to an optimal solution.

Lemma: if $x_k$ is a non-optimal solution and $x^*$ any optimal solution, then

$0 < \alpha_k < \dfrac{2 (f(x_k) - f(x^*))}{\|\gamma_k\|^2}$

implies that $\|x_{k+1} - x^*\|_2 < \|x_k - x^*\|_2$.
Example: $\min_{-1 \le x_1, x_2 \le 1} f(x_1, x_2)$ with $f(x_1, x_2) = \max\{-x_1,\ x_1 + x_2,\ x_1 - 2x_2\}$, not everywhere differentiable.

Level curves in brown, points of nondifferentiability in green (of type $(t, 0)$, $(-t, 2t)$ and $(-t, -t)$ for $t \ge 0$), global minimum $x^* = (0, 0)$.

If $x_k = (1, 0)$ and we consider $\gamma_k = (1, 1) \in \partial f(x_k)$, $f$ increases along $\{x \in \mathbb{R}^2 : x = x_k - \alpha_k \gamma_k,\ \alpha_k \ge 0\}$, but if $\alpha_k$ is sufficiently small then $x_{k+1} = x_k - \alpha_k \gamma_k$ is closer to $x^*$.

From Chapter 8, Bazaraa et al., Nonlinear Programming, Wiley, 2006.
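A small sketch reproduces both claims: at $x_k = (1, 0)$ the step along $-\gamma_k = -(1,1)$ increases $f$ yet reduces the distance to $x^* = (0,0)$, and the subgradient method with the Polyak-type stepsize (here using the known optimal value $f^* = 0$) still converges:

```python
# Subgradient method on f(x) = max{-x1, x1 + x2, x1 - 2x2} (minimum 0 at origin).
def f(x):
    return max(-x[0], x[0] + x[1], x[0] - 2 * x[1])

def subgrad(x):
    """Gradient of one active piece: a valid subgradient of the max."""
    pieces = [(-x[0], (-1.0, 0.0)), (x[0] + x[1], (1.0, 1.0)),
              (x[0] - 2 * x[1], (1.0, -2.0))]
    return max(pieces, key=lambda t: t[0])[1]

# At x = (1, 0) the subgradient (1, 1) is NOT a descent direction ...
x0, g = (1.0, 0.0), (1.0, 1.0)
x1 = (x0[0] - 0.1 * g[0], x0[1] - 0.1 * g[1])
print(f(x1) > f(x0))                              # prints True (f increased)
print(x1[0]**2 + x1[1]**2 < x0[0]**2 + x0[1]**2)  # prints True (closer to x*)

# ... yet with the stepsize rule alpha_k = (f(x_k) - f*)/||gamma_k||^2
# (eps_k = 1, f* = 0 known) the method converges toward the origin.
x, best = (1.0, 0.0), f((1.0, 0.0))
for _ in range(10000):
    g = subgrad(x)
    alpha = (f(x) - 0.0) / (g[0]**2 + g[1]**2)
    x = (x[0] - alpha * g[0], x[1] - alpha * g[1])
    best = min(best, f(x))   # non-monotone method: store the best value seen
print(best < 1e-3)  # prints True
```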
Theorem: if $f$ is convex, $\lim_{\|x\| \to \infty} f(x) = +\infty$, $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_{k=0}^{\infty} \alpha_k = \infty$, the subgradient method either terminates after a finite number of iterations with an optimal solution $x^*$, or generates an infinite sequence $\{x_k\}$ admitting a subsequence converging to $x^*$.

Choice of the stepsize: in practice, sequences $\{\alpha_k\}$ such that $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_{k=0}^{\infty} \alpha_k = \infty$ (e.g., $\alpha_k = 1/k$) are too slow. An alternative: $\alpha_k = \alpha_0 \rho^k$, for a given $\rho < 1$. A more sophisticated and popular rule:

$\alpha_k = \varepsilon_k \dfrac{f(x_k) - \hat{f}}{\|\gamma_k\|^2}$,

where $0 < \varepsilon_k < 2$ and $\hat{f}$ is either the optimal value (minimum) $f(x^*)$ or an estimate of it.

Stopping criterion: a prescribed maximum number of iterations, because even if $0 \in \partial f(x_k)$, that subgradient may not be the one considered at $x_k$.

N.B.: since it is not a monotone method, one needs to store the best solution $x_k$ found so far.

The method can be easily extended to the case with bound constraints by including a projection onto the feasible box at each iteration.
Subgradient method for solving the Lagrangian dual

Lagrangian dual: $\max_{u \ge 0} w(u)$, where $w(u) = \min\{c^t x + u^t(d - Dx) : x \in X \subseteq \mathbb{R}^n\}$ is concave and piecewise linear.

Simple characterization of the subgradients of $w(u)$:

Proposition: consider $\tilde{u} \ge 0$ and let $X(\tilde{u}) = \{x \in X : w(\tilde{u}) = c^t x + \tilde{u}^t(d - Dx)\}$ be the set of optimal solutions of the Lagrangian subproblem (3). Then:

- for each $x(\tilde{u}) \in X(\tilde{u})$, the vector $(d - Dx(\tilde{u}))$ belongs to $\partial w(\tilde{u})$;
- each subgradient of $w(u)$ at $\tilde{u}$ can be expressed as a convex combination of subgradients $(d - Dx(\tilde{u}))$ with $x(\tilde{u}) \in X(\tilde{u})$.
Proof (first point): by definition of $w(u)$,

$w(u) \le c^t x + u^t(d - Dx)$ for all $x \in X$, $u \ge 0$.

In particular, for any $x(\tilde{u}) \in X(\tilde{u})$ we have

$w(u) \le c^t x(\tilde{u}) + u^t(d - Dx(\tilde{u}))$ for all $u \ge 0$   (12)

and clearly

$w(\tilde{u}) = c^t x(\tilde{u}) + \tilde{u}^t(d - Dx(\tilde{u}))$.   (13)

Subtracting (13) from (12) we obtain $w(u) - w(\tilde{u}) \le (u^t - \tilde{u}^t)(d - Dx(\tilde{u}))$ for all $u \ge 0$, which is equivalent to $w(u) \le w(\tilde{u}) + (u^t - \tilde{u}^t)(d - Dx(\tilde{u}))$ for all $u \ge 0$. Thus $(d - Dx(\tilde{u})) \in \partial w(\tilde{u})$.
Procedure:

1) Select an initial $u_0$ and set $k := 0$.

2) Solve the Lagrangian subproblem $w(u_k) = \min\{c^t x + u_k^t(d - Dx) : x \in X\}$. Let $x(u_k)$ be the optimal solution found; then $(d - Dx(u_k))$ is a subgradient of $w(u)$ at $u_k$.

3) Update the Lagrange multipliers: $u_{k+1} = \max\{0,\ u_k + \alpha_k (d - Dx(u_k))\}$ with, for instance,

$\alpha_k = \varepsilon_k \dfrac{\hat{w} - w(u_k)}{\|d - Dx(u_k)\|^2}$,

where $\hat{w}$ is an estimate of the optimal value $w^*$.

4) Set $k := k + 1$ and go back to step 2.
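The procedure can be sketched on the small ILP of the earlier illustration (min $3x_1 - x_2$ with $x_1 - x_2 \ge -1$ dualized; $X$ is enumerated explicitly under the constraint signs used there, and the known optimal dual value $-1/3$ is plugged in as $\hat{w}$, which in practice would be an estimate):

```python
from itertools import product

# X: integer points satisfying all constraints except the dualized one.
X = [(x1, x2) for x1, x2 in product(range(3), range(4))
     if -x1 + 2 * x2 <= 5 and 3 * x1 + 2 * x2 >= 3 and 6 * x1 + x2 <= 15]

def lagrangian_subproblem(u):
    """Step 2: return (w(u), minimizer) for min 3x1 - x2 + u(-1 - x1 + x2)."""
    return min(((3 * x1 - x2 + u * (-1 - x1 + x2), (x1, x2)) for x1, x2 in X),
               key=lambda t: t[0])

u, w_hat, eps = 0.0, -1.0 / 3.0, 1.0
for k in range(50):
    wu, (x1, x2) = lagrangian_subproblem(u)   # step 2: solve the subproblem
    gamma = -1 - x1 + x2                      # subgradient d - Dx(u_k)
    alpha = eps * (w_hat - wu) / gamma**2     # step 3: stepsize
    u = max(0.0, u + alpha * gamma)           # step 3: projected update

print(round(u, 6), round(lagrangian_subproblem(u)[0], 6))  # prints 1.666667 -0.333333
```

With the exact optimal value as $\hat{w}$, the method reaches $u^* = 5/3$ after a single update and then stays there.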
Example: Lagrangian relaxation for the binary knapsack problem

Consider again

max $z = 10x_1 + 4x_2 + 14x_3$
s.t. $3x_1 + x_2 + 4x_3 \le 4$
$x_1, x_2, x_3 \in \{0,1\}$

Lagrangian dual: $\min_{u \ge 0} w(u) = \min_{u \ge 0} \big( \max_{x \in \{0,1\}^3} (10-3u)x_1 + (4-u)x_2 + (14-4u)x_3 + 4u \big)$, with the dual function

$w(u) = 28 - 4u$ for $u \in [0, 10/3]$
$w(u) = 18 - u$ for $u \in [10/3, 14/4]$
$w(u) = 4 + 3u$ for $u \in [14/4, 4]$
$w(u) = 4u$ for $u \in [4, \infty)$
(Plot of the dual function $w(u)$.)

Optimal solution $u^* = 14/4$ with $w^* = w(u^*) = 14.5$.
Lagrangian dual: $\min_{u \ge 0} w(u) = \min_{u \ge 0} \big( \max_{x \in \{0,1\}^3} (10-3u)x_1 + (4-u)x_2 + (14-4u)x_3 + 4u \big)$.

Subgradient $\gamma_k = (4 - 3x_1^k - x_2^k - 4x_3^k)$, where $x^k = x(u_k)$ is any optimal solution of the Lagrangian subproblem at the $k$-th iteration. Lagrange multiplier update: $u_{k+1} = \max\{0,\ u_k - \alpha_k \gamma_k\}$ with $\alpha_k = 1/2^k$.

Subgradient method:

k | u_k  | alpha_k | w(u_k)                                | x^k        | gamma_k
0 | 0    | 1       | max (10, 4, 14)^t x + 4u = 28         | (1, 1, 1)  | -4
1 | 4    | 1/2     | max (-2, 0, -2)^t x + 4u = 16         | (0, 0, 0)* | 4
2 | 2    | 1/4     | max (4, 2, 6)^t x + 4u = 20           | (1, 1, 1)  | -4
3 | 3    | 1/8     | max (1, 1, 2)^t x + 4u = 16           | (1, 1, 1)  | -4
4 | 14/4 | 1/16    | max (-0.5, 0.5, 0)^t x + 4u = 14.5    | (0, 1, 0)* | 3

Symbol *: there are several optimal solutions $x^k$ and we choose the lexicographically smallest one (set to 0 each variable $x_i^k$ with zero coefficient).

N.B.: the optimal value of the multiplier, $u^* = 14/4$, is reached at iteration $k = 4$, but the subgradient considered there is nonzero.
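The table can be reproduced exactly (a sketch; since the primal is a maximization, the dual is minimized and the update is $u_{k+1} = \max\{0, u_k - \alpha_k \gamma_k\}$ with $\alpha_k = 1/2^k$):

```python
# Subgradient iterations for the knapsack Lagrangian dual of this example.
p, a, b = [10, 4, 14], [3, 1, 4], 4

def solve_subproblem(u):
    """Lexicographically smallest maximizer: x_i = 1 iff its coefficient is > 0."""
    x = [1 if pi - u * ai > 0 else 0 for pi, ai in zip(p, a)]
    wu = sum((pi - u * ai) * xi for pi, ai, xi in zip(p, a, x)) + b * u
    return wu, x

u, rows = 0.0, []
for k in range(5):
    wu, x = solve_subproblem(u)
    gamma = b - sum(ai * xi for ai, xi in zip(a, x))   # subgradient
    rows.append((u, wu, tuple(x), gamma))
    u = max(0.0, u - (1 / 2**k) * gamma)               # alpha_k = 1/2^k

for row in rows:
    print(row)
# prints:
# (0.0, 28.0, (1, 1, 1), -4)
# (4.0, 16.0, (0, 0, 0), 4)
# (2.0, 20.0, (1, 1, 1), -4)
# (3.0, 16.0, (1, 1, 1), -4)
# (3.5, 14.5, (0, 1, 0), 3)
```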
Lagrangian relaxation for the STSP (Held and Karp)

Symmetric TSP: given an undirected graph $G = (V, E)$ with a cost $c_e \in \mathbb{Z}_+$ for each edge $e \in E$, determine a Hamiltonian cycle of minimum total cost. ILP formulation:

min $\sum_{e \in E} c_e x_e$
s.t. $\sum_{e \in \delta(i)} x_e = 2$ for all $i \in V$   (14)
$\sum_{e \in E(S)} x_e \le |S| - 1$ for all $S \subset V$, $2 \le |S| \le n-1$   (15)
$x_e \in \{0,1\}$ for all $e \in E$

where $E(S) = \{\{i,j\} \in E : i \in S,\ j \in S\}$.

Observation:

i) Due to the presence of constraints (14), half of the subtour-elimination constraints (15) are redundant: $\sum_{e \in E(S)} x_e \le |S| - 1$ iff $\sum_{e \in E(\bar{S})} x_e \le |\bar{S}| - 1$, where $\bar{S} = V \setminus S$. Thus all the constraints (15) with $1 \in S$ can be deleted.

ii) Summing over all the degree constraints (14) and dividing by 2, we obtain $\sum_{e \in E} x_e = n$, which can be added to the formulation.
Recall that a Hamiltonian cycle is a 1-tree (i.e., a spanning tree on nodes $\{2, \ldots, n\}$ plus two edges incident to node 1) in which all nodes have exactly two incident edges.

Since $\sum_{e \in E} c_e x_e + \sum_{i \in V} u_i (2 - \sum_{e \in \delta(i)} x_e) = \sum_{e=\{i,j\} \in E} (c_e - u_i - u_j) x_e + 2 \sum_{i \in V} u_i$, relaxing/dualizing the degree constraints (14) for all nodes except node 1, we obtain the Lagrangian subproblem:

$w(u) = \min \sum_{e=\{i,j\} \in E} (c_e - u_i - u_j) x_e + 2 \sum_{i \in V} u_i$
s.t. $\sum_{e \in \delta(1)} x_e = 2$
$\sum_{e \in E(S)} x_e \le |S| - 1$ for all $S \subset V$, $2 \le |S| \le n-1$, $1 \notin S$
$\sum_{e \in E} x_e = n$
$x_e \in \{0,1\}$ for all $e \in E$

where $u_1 = 0$ and $E(S) = \{\{i,j\} \in E : i \in S,\ j \in S\}$.

N.B.: the set of feasible solutions of this problem coincides with the set of all 1-trees. To find a minimum-cost 1-tree: determine a minimum-cost spanning tree on nodes $\{2, \ldots, n\}$ (Kruskal or Prim) and select the two smallest-cost edges incident to node 1.
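The minimum-cost 1-tree computation can be sketched as follows (the 5-node cost matrix is made-up data, not the matrix of the Wolsey example on the next slides):

```python
# Minimum-cost 1-tree: Kruskal's MST on nodes {2..n} plus the two cheapest
# edges incident to node 1. Hypothetical symmetric costs, keys (i, j) with i < j.
n = 5
cost = {(1, 2): 28, (1, 3): 26, (1, 4): 50, (1, 5): 40,
        (2, 3): 24, (2, 4): 40, (2, 5): 50,
        (3, 4): 24, (3, 5): 26, (4, 5): 30}

def min_one_tree(c):
    parent = {v: v for v in range(2, n + 1)}   # union-find over nodes {2..n}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    tree = []
    for (i, j) in sorted((e for e in c if 1 not in e), key=c.get):  # Kruskal
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    two_cheapest = sorted((e for e in c if 1 in e), key=c.get)[:2]
    edges = tree + two_cheapest
    return sum(c[e] for e in edges), edges

lb, edges = min_one_tree(cost)
print(lb)  # prints 128, a lower bound on the optimal tour cost
```

By construction the 1-tree value is a lower bound on the cost of any Hamiltonian cycle, which is exactly what the Held and Karp relaxation exploits.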
Observation: since the constraints

$\sum_{e \in \delta(1)} x_e = 2$
$\sum_{e \in E(S)} x_e \le |S| - 1$ for all $S \subset V$, $2 \le |S| \le n-1$, $1 \notin S$
$\sum_{e \in E} x_e = n$

with $x \ge 0$ describe the convex hull of the (binary) incidence vectors of 1-trees, Corollary 2 implies that $w^* = z_{LP}$.

The linear relaxation

min $\sum_{e \in E} c_e x_e$
s.t. $\sum_{e \in \delta(i)} x_e = 2$ for all $i \in V$
$\sum_{e \in E(S)} x_e \le |S| - 1$ for all $S \subset V$, $2 \le |S| \le n-1$
$x_e \ge 0$ for all $e \in E$

with an exponential number of constraints can thus be solved without considering them explicitly.

Since the dualized constraints are equations, the Lagrangian dual is $\max_{u \in \mathbb{R}^{|V|} : u_1 = 0} w(u)$.
Example: taken from L. Wolsey, Integer Programming. Consider the undirected graph $G = (V, E)$ with 5 nodes and a given cost matrix $C$.

Dual function: $w(u^k) = \min\{\sum_{e=\{i,j\} \in E} (c_e - u_i^k - u_j^k) x_e + 2 \sum_{i \in V} u_i^k : x$ incidence vector of a 1-tree$\}$.

Notation: $c_{ij}^k = c_e - u_i^k - u_j^k$ for $e = \{i,j\} \in E$.

Subgradient $\gamma^k$ with $\gamma_i^k = (2 - \sum_{e \in \delta(i)} x_e^k)$, where $x^k = x(u^k)$ is an optimal solution of the Lagrangian subproblem at the $k$-th iteration. Since $\sum_{e \in \delta(1)} x_e = 2$ is not relaxed, $\gamma_1^k = 0$ for all $k$. Starting from $u_1^0 = 0$, this implies that $u_1^k = 0$ for each $k \ge 1$.
A feasible solution of cost 148 is found with a primal heuristic: $x_{12} = x_{23} = x_{34} = x_{45} = x_{51} = 1$ and $x_{ij} = 0$ for all other $\{i,j\} \in E$.

Solution of the Lagrangian dual starting from $u^0 = 0$ with $\varepsilon = 1$: solving the Lagrangian subproblem with costs $C^0 = C$ ($c_e^0 = c_e$ for each $e \in E$ since $u^0 = 0$), we find $x(u^0)$, which corresponds to the 1-tree of cost 130: $x_{12} = x_{13} = x_{23} = x_{34} = x_{35} = 1$ and $x_{ij} = 0$ for all other $\{i,j\} \in E$.
Knowing $x(u^0)$, we can compute the value of the dual function: $w(u^0) = 130$, equal to the 1-tree cost plus $2 \sum_{i \in V} u_i^0 = 0$.

Subgradient: $\gamma_i^0 = 2 - \sum_{e \in \delta(i)} x_e(u^0)$, i.e., $\gamma^0 = (0, 0, -2, 1, 1)$, since node 3 has degree 4 and nodes 4 and 5 have degree 1 in the 1-tree.

Update the Lagrange multipliers (with $\hat{w} = 148$, the heuristic value):

$u^1 = u^0 + \dfrac{\hat{w} - w(u^0)}{\|\gamma^0\|^2}\, \gamma^0 = 0 + \dfrac{148 - 130}{6}\, \gamma^0 = (0, 0, -6, 3, 3)$.

Thus $C^1$ has entries $c_{ij}^1 = c_{ij} - u_i^1 - u_j^1$.
As optimal solution $x(u^1)$ of the Lagrangian subproblem with cost matrix $C^1$ we find the 1-tree of cost 143: $x_{12} = x_{13} = x_{23} = x_{34} = x_{45} = 1$ and $x_{ij} = 0$ for all other $\{i,j\} \in E$, and $w(u^1) = 143 + 2 \sum_{i \in V} u_i^1 = 143$.

Since $\gamma^1 = (0, 0, -1, 0, 1)$ and $\|\gamma^1\|^2 = 2$, we have

$u^2 = u^1 + \dfrac{\hat{w} - w(u^1)}{\|\gamma^1\|^2}\, \gamma^1 = (0, 0, -6, 3, 3) + \dfrac{148 - 143}{2}\, (0, 0, -1, 0, 1) = (0, 0, -8.5, 3, 5.5)$.
Therefore $C^2$ has entries $c_{ij}^2 = c_{ij} - u_i^2 - u_j^2$, and we obtain $x(u^2)$ corresponding to the 1-tree of cost 147.5: $x_{12} = x_{15} = x_{23} = x_{35} = x_{45} = 1$ and $x_{ij} = 0$ for all other $\{i,j\} \in E$, and $w(u^2) = 147.5$.

Since all costs $c_e$ are integer, every Hamiltonian cycle has integer cost, so $z \ge \lceil 147.5 \rceil = 148$: the feasible solution of cost 148 found by the heuristic is optimal.
Choice of the Lagrangian dual

For problems with different groups of constraints, we need to decide which ones to relax. Choice criteria:

i) strength of the bound $w^*$ obtained by solving the Lagrangian dual;
ii) difficulty of solving the Lagrangian subproblems $w(u) = \min\{c^t x + u^t(d - Dx) : x \in X \subseteq \mathbb{R}^n\}$;
iii) difficulty of solving the Lagrangian dual $w^* = \max_{u \ge 0} w(u)$.

For (i) we have the characterization of the Lagrangian dual bound $w^*$ in terms of an LP. The difficulty of the Lagrangian subproblems depends on the specific problem. The difficulty of the Lagrangian dual depends, among other things, on the number of dual variables. We look for a reasonable trade-off.
Example: Generalized Assignment Problem. Given a set $I$ of processes and a set $J$ of machines, with $c_{ij}$ the cost of executing process $i \in I$ on machine $j \in J$, $w_{ij}$ the amount of resource required to execute process $i \in I$ on machine $j \in J$, and $b_j$ the total amount of resource available on machine $j \in J$, assign the processes to the machines so as to minimize the total cost while respecting the resource constraints. Assumption: once started, processes cannot be interrupted.

ILP formulation:

min $z = \sum_{i \in I} \sum_{j \in J} c_{ij} x_{ij}$
s.t. $\sum_{j \in J} x_{ij} = 1$ for all $i \in I$
$\sum_{i \in I} w_{ij} x_{ij} \le b_j$ for all $j \in J$
$x_{ij} \in \{0,1\}$ for all $i \in I, j \in J$

where $x_{ij} = 1$ if process $i \in I$ is assigned to machine $j \in J$, and $x_{ij} = 0$ otherwise.
Three possible Lagrangian relaxations:

1) Relaxing the capacity constraints:

$w_1(u) = \min \sum_{i \in I} \sum_{j \in J} c_{ij} x_{ij} - \sum_{j \in J} u_j (b_j - \sum_{i \in I} w_{ij} x_{ij})$
s.t. $\sum_{j \in J} x_{ij} = 1$ for all $i \in I$
$x_{ij} \in \{0,1\}$ for all $i \in I, j \in J$

Trivial solution: assign each process $i \in I$ to a machine $j \in J$ with minimum $c_{ij} + u_j w_{ij}$.

N.B.: if the integrality conditions were relaxed, the solution would not change, because for each $i$: $\mathrm{conv}(\{x_i \in \{0,1\}^{|J|} : \sum_{j \in J} x_{ij} = 1\}) = \{x_i \in \mathbb{R}^{|J|} : \sum_{j \in J} x_{ij} = 1,\ 0 \le x_i \le 1\}$.

2) Relaxing the assignment constraints:

$w_2(v) = \min \sum_{i \in I} \sum_{j \in J} c_{ij} x_{ij} - \sum_{i \in I} v_i (\sum_{j \in J} x_{ij} - 1)$
s.t. $\sum_{i \in I} w_{ij} x_{ij} \le b_j$ for all $j \in J$
$x_{ij} \in \{0,1\}$ for all $i \in I, j \in J$

Since the variables $x_{ij}$ corresponding to different machines $j$ are not linked, the subproblem decomposes into $|J|$ independent binary knapsack subproblems, with profit $v_i - c_{ij}$ (max version) and weight $w_{ij}$ for item $i \in I$.
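Relaxation 1's closed-form solution can be checked against brute force on a tiny instance (a sketch; the costs, resource requirements and multipliers below are made-up data):

```python
from itertools import product

# Hypothetical GAP data: 3 processes, 2 machines.
c = [[4, 7], [6, 3], [5, 5]]   # c[i][j]
w = [[2, 3], [4, 1], [3, 2]]   # w[i][j]
b = [5, 4]                     # capacities (enter only via the constant term)
u = [0.5, 1.0]                 # multipliers u_j >= 0

const = -sum(uj * bj for uj, bj in zip(u, b))  # -sum_j u_j * b_j

# Closed-form solution: pick argmin_j (c_ij + u_j * w_ij) for each process i.
w1_rule = const + sum(min(c[i][j] + u[j] * w[i][j] for j in range(2))
                      for i in range(3))

# Brute force over all 2^3 assignments of processes to machines.
w1_brute = const + min(sum(c[i][a[i]] + u[a[i]] * w[i][a[i]] for i in range(3))
                       for a in product(range(2), repeat=3))

print(w1_rule == w1_brute)  # prints True
```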
3) Relaxing all the constraints, we obtain the Lagrangian subproblem:

$w_3(u, v) = \min \sum_{i \in I} \sum_{j \in J} c_{ij} x_{ij} - \sum_{i \in I} v_i (\sum_{j \in J} x_{ij} - 1) - \sum_{j \in J} u_j (b_j - \sum_{i \in I} w_{ij} x_{ij})$
s.t. $x_{ij} \in \{0,1\}$ for all $i \in I, j \in J$

Trivial solution: set $x_{ij} = 1$ if $c_{ij} - v_i + u_j w_{ij} < 0$, $x_{ij} = 0$ if $c_{ij} - v_i + u_j w_{ij} > 0$, and arbitrarily $x_{ij} = 0$ or $x_{ij} = 1$ if $c_{ij} - v_i + u_j w_{ij} = 0$.

Observations:

i) According to Corollary 2, the first and third Lagrangian relaxation bounds are as weak as (not stronger than) the linear relaxation one.

ii) Since the ideal formulation of a binary knapsack problem contains many other inequalities (e.g., cover inequalities), namely for each $j \in J$

$\mathrm{conv}(\{x^j \in \{0,1\}^{|I|} : \sum_{i \in I} w_{ij} x_{ij} \le b_j\}) \subsetneq \{x^j \in \mathbb{R}^{|I|} : \sum_{i \in I} w_{ij} x_{ij} \le b_j,\ 0 \le x^j \le 1\}$,

the second Lagrangian relaxation provides a potentially stronger bound.
More informationRelaxations and Bounds. 6.1 Optimality and Relaxations. Suppose that we are given an IP. z = max c T x : x X,
6 Relaxations and Bounds 6.1 Optimality and Relaxations Suppose that we are given an IP z = max c T x : x X, where X = x : Ax b,x Z n and a vector x X which is a candidate for optimality. Is there a way
More informationSolving Dual Problems
Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem
More informationOptimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers
Optimization for Communications and Networks Poompat Saengudomlert Session 4 Duality and Lagrange Multipliers P Saengudomlert (2015) Optimization Session 4 1 / 14 24 Dual Problems Consider a primal convex
More informationBBM402-Lecture 20: LP Duality
BBM402-Lecture 20: LP Duality Lecturer: Lale Özkahya Resources for the presentation: https://courses.engr.illinois.edu/cs473/fa2016/lectures.html An easy LP? which is compact form for max cx subject to
More informationConvex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014
Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,
More informationCS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs. Instructor: Shaddin Dughmi
CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source Shortest
More information(P ) Minimize 4x 1 + 6x 2 + 5x 3 s.t. 2x 1 3x 3 3 3x 2 2x 3 6
The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. Problem 1 Consider
More informationWeek 8. 1 LP is easy: the Ellipsoid Method
Week 8 1 LP is easy: the Ellipsoid Method In 1979 Khachyan proved that LP is solvable in polynomial time by a method of shrinking ellipsoids. The running time is polynomial in the number of variables n,
More informationUses of duality. Geoff Gordon & Ryan Tibshirani Optimization /
Uses of duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Remember conjugate functions Given f : R n R, the function is called its conjugate f (y) = max x R n yt x f(x) Conjugates appear
More informationLecture 9: Dantzig-Wolfe Decomposition
Lecture 9: Dantzig-Wolfe Decomposition (3 units) Outline Dantzig-Wolfe decomposition Column generation algorithm Relation to Lagrangian dual Branch-and-price method Generated assignment problem and multi-commodity
More informationIE 5531: Engineering Optimization I
IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1
More informationto work with) can be solved by solving their LP relaxations with the Simplex method I Cutting plane algorithms, e.g., Gomory s fractional cutting
Summary so far z =max{c T x : Ax apple b, x 2 Z n +} I Modeling with IP (and MIP, and BIP) problems I Formulation for a discrete set that is a feasible region of an IP I Alternative formulations for the
More informationSummary of the simplex method
MVE165/MMG630, The simplex method; degeneracy; unbounded solutions; infeasibility; starting solutions; duality; interpretation Ann-Brith Strömberg 2012 03 16 Summary of the simplex method Optimality condition:
More informationBounds on the Traveling Salesman Problem
Bounds on the Traveling Salesman Problem Sean Zachary Roberson Texas A&M University MATH 613, Graph Theory A common routing problem is as follows: given a collection of stops (for example, towns, stations,
More informationIn the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight.
In the original knapsack problem, the value of the contents of the knapsack is maximized subject to a single capacity constraint, for example weight. In the multi-dimensional knapsack problem, additional
More information7. Lecture notes on the ellipsoid algorithm
Massachusetts Institute of Technology Michel X. Goemans 18.433: Combinatorial Optimization 7. Lecture notes on the ellipsoid algorithm The simplex algorithm was the first algorithm proposed for linear
More information4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n
2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6
More informationLP Duality: outline. Duality theory for Linear Programming. alternatives. optimization I Idea: polyhedra
LP Duality: outline I Motivation and definition of a dual LP I Weak duality I Separating hyperplane theorem and theorems of the alternatives I Strong duality and complementary slackness I Using duality
More informationCO 250 Final Exam Guide
Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,
More informationDetermine the size of an instance of the minimum spanning tree problem.
3.1 Algorithm complexity Consider two alternative algorithms A and B for solving a given problem. Suppose A is O(n 2 ) and B is O(2 n ), where n is the size of the instance. Let n A 0 be the size of the
More informationOptimization Exercise Set n. 4 :
Optimization Exercise Set n. 4 : Prepared by S. Coniglio and E. Amaldi translated by O. Jabali 2018/2019 1 4.1 Airport location In air transportation, usually there is not a direct connection between every
More informationA Review of Linear Programming
A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex
More informationCS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi
CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source
More informationLagrangian Duality Theory
Lagrangian Duality Theory Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapter 14.1-4 1 Recall Primal and Dual
More informationDuality of LPs and Applications
Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will
More informationOptimality Conditions for Constrained Optimization
72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)
More informationOptimization Exercise Set n.5 :
Optimization Exercise Set n.5 : Prepared by S. Coniglio translated by O. Jabali 2016/2017 1 5.1 Airport location In air transportation, usually there is not a direct connection between every pair of airports.
More informationRelation of Pure Minimum Cost Flow Model to Linear Programming
Appendix A Page 1 Relation of Pure Minimum Cost Flow Model to Linear Programming The Network Model The network pure minimum cost flow model has m nodes. The external flows given by the vector b with m
More informationModeling with Integer Programming
Modeling with Integer Programg Laura Galli December 18, 2014 We can use 0-1 (binary) variables for a variety of purposes, such as: Modeling yes/no decisions Enforcing disjunctions Enforcing logical conditions
More informationOperations Research Lecture 6: Integer Programming
Operations Research Lecture 6: Integer Programming Notes taken by Kaiquan Xu@Business School, Nanjing University May 12th 2016 1 Integer programming (IP) formulations The integer programming (IP) is the
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationThe Simplex Algorithm
8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationLinear and Integer Optimization (V3C1/F4C1)
Linear and Integer Optimization (V3C1/F4C1) Lecture notes Ulrich Brenner Research Institute for Discrete Mathematics, University of Bonn Winter term 2016/2017 March 8, 2017 12:02 1 Preface Continuous updates
More informationLecture 8: Column Generation
Lecture 8: Column Generation (3 units) Outline Cutting stock problem Classical IP formulation Set covering formulation Column generation A dual perspective Vehicle routing problem 1 / 33 Cutting stock
More information14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.
CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity
More informationLecture 15 (Oct 6): LP Duality
CMPUT 675: Approximation Algorithms Fall 2014 Lecturer: Zachary Friggstad Lecture 15 (Oct 6): LP Duality Scribe: Zachary Friggstad 15.1 Introduction by Example Given a linear program and a feasible solution
More informationSection Notes 9. Midterm 2 Review. Applied Math / Engineering Sciences 121. Week of December 3, 2018
Section Notes 9 Midterm 2 Review Applied Math / Engineering Sciences 121 Week of December 3, 2018 The following list of topics is an overview of the material that was covered in the lectures and sections
More informationLectures 6, 7 and part of 8
Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,
More informationIntroduction to Bin Packing Problems
Introduction to Bin Packing Problems Fabio Furini March 13, 2015 Outline Origins and applications Applications: Definition: Bin Packing Problem (BPP) Solution techniques for the BPP Heuristic Algorithms
More information3. Linear Programming and Polyhedral Combinatorics
Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory
More informationInteger Programming ISE 418. Lecture 16. Dr. Ted Ralphs
Integer Programming ISE 418 Lecture 16 Dr. Ted Ralphs ISE 418 Lecture 16 1 Reading for This Lecture Wolsey, Chapters 10 and 11 Nemhauser and Wolsey Sections II.3.1, II.3.6, II.3.7, II.5.4 CCZ Chapter 8
More informationPrimal-dual Subgradient Method for Convex Problems with Functional Constraints
Primal-dual Subgradient Method for Convex Problems with Functional Constraints Yurii Nesterov, CORE/INMA (UCL) Workshop on embedded optimization EMBOPT2014 September 9, 2014 (Lucca) Yu. Nesterov Primal-dual
More informationThe traveling salesman problem
Chapter 58 The traveling salesman problem The traveling salesman problem (TSP) asks for a shortest Hamiltonian circuit in a graph. It belongs to the most seductive problems in combinatorial optimization,
More informationShiqian Ma, MAT-258A: Numerical Optimization 1. Chapter 4. Subgradient
Shiqian Ma, MAT-258A: Numerical Optimization 1 Chapter 4 Subgradient Shiqian Ma, MAT-258A: Numerical Optimization 2 4.1. Subgradients definition subgradient calculus duality and optimality conditions Shiqian
More informationInteger Programming ISE 418. Lecture 12. Dr. Ted Ralphs
Integer Programming ISE 418 Lecture 12 Dr. Ted Ralphs ISE 418 Lecture 12 1 Reading for This Lecture Nemhauser and Wolsey Sections II.2.1 Wolsey Chapter 9 ISE 418 Lecture 12 2 Generating Stronger Valid
More informationChapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1)
Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3
More informationMotivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:
CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through
More informationIntroduction to optimization and operations research
Introduction to optimization and operations research David Pisinger, Fall 2002 1 Smoked ham (Chvatal 1.6, adapted from Greene et al. (1957)) A meat packing plant produces 480 hams, 400 pork bellies, and
More informationCSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017
CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017 Linear Function f: R n R is linear if it can be written as f x = a T x for some a R n Example: f x 1, x 2 =
More informationCO759: Algorithmic Game Theory Spring 2015
CO759: Algorithmic Game Theory Spring 2015 Instructor: Chaitanya Swamy Assignment 1 Due: By Jun 25, 2015 You may use anything proved in class directly. I will maintain a FAQ about the assignment on the
More informationIntroduction to Integer Linear Programming
Lecture 7/12/2006 p. 1/30 Introduction to Integer Linear Programming Leo Liberti, Ruslan Sadykov LIX, École Polytechnique liberti@lix.polytechnique.fr sadykov@lix.polytechnique.fr Lecture 7/12/2006 p.
More informationConvex Optimization and Modeling
Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual
More informationMVE165/MMG630, Applied Optimization Lecture 6 Integer linear programming: models and applications; complexity. Ann-Brith Strömberg
MVE165/MMG630, Integer linear programming: models and applications; complexity Ann-Brith Strömberg 2011 04 01 Modelling with integer variables (Ch. 13.1) Variables Linear programming (LP) uses continuous
More informationLinear Programming. Scheduling problems
Linear Programming Scheduling problems Linear programming (LP) ( )., 1, for 0 min 1 1 1 1 1 11 1 1 n i x b x a x a b x a x a x c x c x z i m n mn m n n n n! = + + + + + + = Extreme points x ={x 1,,x n
More informationsubject to (x 2)(x 4) u,
Exercises Basic definitions 5.1 A simple example. Consider the optimization problem with variable x R. minimize x 2 + 1 subject to (x 2)(x 4) 0, (a) Analysis of primal problem. Give the feasible set, the
More informationCONSTRAINED NONLINEAR PROGRAMMING
149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach
More information5. Duality. Lagrangian
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationMulticommodity Flows and Column Generation
Lecture Notes Multicommodity Flows and Column Generation Marc Pfetsch Zuse Institute Berlin pfetsch@zib.de last change: 2/8/2006 Technische Universität Berlin Fakultät II, Institut für Mathematik WS 2006/07
More informationLecture 8: Column Generation
Lecture 8: Column Generation (3 units) Outline Cutting stock problem Classical IP formulation Set covering formulation Column generation A dual perspective 1 / 24 Cutting stock problem 2 / 24 Problem description
More information- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs
LP-Duality ( Approximation Algorithms by V. Vazirani, Chapter 12) - Well-characterized problems, min-max relations, approximate certificates - LP problems in the standard form, primal and dual linear programs
More informationTechnische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502)
Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik Combinatorial Optimization (MA 4502) Dr. Michael Ritter Problem Sheet 1 Homework Problems Exercise
More information15.081J/6.251J Introduction to Mathematical Programming. Lecture 24: Discrete Optimization
15.081J/6.251J Introduction to Mathematical Programming Lecture 24: Discrete Optimization 1 Outline Modeling with integer variables Slide 1 What is a good formulation? Theme: The Power of Formulations
More informationEE364a Review Session 5
EE364a Review Session 5 EE364a Review announcements: homeworks 1 and 2 graded homework 4 solutions (check solution to additional problem 1) scpd phone-in office hours: tuesdays 6-7pm (650-723-1156) 1 Complementary
More informationComputational Complexity. IE 496 Lecture 6. Dr. Ted Ralphs
Computational Complexity IE 496 Lecture 6 Dr. Ted Ralphs IE496 Lecture 6 1 Reading for This Lecture N&W Sections I.5.1 and I.5.2 Wolsey Chapter 6 Kozen Lectures 21-25 IE496 Lecture 6 2 Introduction to
More information