CS711008Z Algorithm Design and Analysis


1. CS711008Z Algorithm Design and Analysis. Lecture 9. Algorithm design technique: Linear programming and duality. Dongbo Bu, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China. 1 / 183

2. Outline. The first example: the dual of Diet problem; Understanding duality: Lagrangian function, Lagrangian dual function, and Lagrangian dual problem; Conditions of optimal solution; Four properties of duality for linear program; Solving LP using duality: Dual simplex algorithm, Primal-dual algorithm, and interior point method; Applications of duality: Farkas lemma, von Neumann's MiniMax theorem, Yao's MiniMax theorem, Dual problem in SVM, and ShortestPath problem; Appendix: Proof of Slater theorem, and finding an initial solution to the dual problem. 2 / 183

3. Importance of duality. When minimizing a function f(x), it is invaluable to know a lower bound of f(x) in advance. The calculation of lower bounds is extremely important to the design of approximation algorithms and the branch-and-bound method. Duality and relaxation (say, integer relaxation or convex relaxation) are powerful techniques to obtain a reasonable lower bound. Linear programs come in primal/dual pairs: every feasible solution of one of these two problems provides a bound on the objective value of the other. Moreover, dual problems are always convex even if the primal problems are not. 3 / 183

4. The first example: the dual of Diet problem. 4 / 183

5. Revisiting Diet problem. A housewife wonders how much money she must spend on foods in order to get all the energy (2000 kcal), protein (55 g), and calcium (800 mg) that she needs every day.

Food            | Energy | Protein | Calcium | Price | Quantity
Oatmeal         |  110   |   4     |    2    |   3   |   x1
Whole milk      |  160   |   8     |  285    |   9   |   x2
Cherry pie      |  420   |   4     |   22    |  20   |   x3
Pork with beans |  260   |  14     |   80    |  19   |   x4

Linear program:
min 3x1 + 9x2 + 20x3 + 19x4                      (money)
s.t. 110x1 + 160x2 + 420x3 + 260x4 >= 2000       (energy)
     4x1 + 8x2 + 4x3 + 14x4 >= 55                (protein)
     2x1 + 285x2 + 22x3 + 80x4 >= 800            (calcium)
     x1, x2, x3, x4 >= 0
5 / 183
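As a sanity check, the Diet LP above can be solved with an off-the-shelf solver. The sketch below assumes scipy is available and negates the rows to turn the ">=" constraints into linprog's "<=" form.

```python
# Solve the Diet LP: min c^T x  s.t.  A x >= b, x >= 0.
from scipy.optimize import linprog

c = [3, 9, 20, 19]                       # price (cents) per unit of each food
A = [[110, 160, 420, 260],               # energy  per unit
     [4,   8,   4,   14],                # protein per unit
     [2,   285, 22,  80]]                # calcium per unit
b = [2000, 55, 800]                      # daily requirements

# linprog handles A_ub x <= b_ub, so negate both sides of the >= rows.
res = linprog(c, A_ub=[[-v for v in row] for row in A],
              b_ub=[-v for v in b], bounds=(0, None))
print(res.x)    # roughly (14.24, 2.71, 0, 0): oatmeal and milk only
print(res.fun)  # minimal cost, about 67.1 cents
```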

6. Dual of Diet problem: Pricing problem. Consider a company producing protein powder, energy bars, and calcium tablets as substitutes for foods. The company wants to design a reasonable pricing strategy to earn as much money as possible. However, the prices cannot be arbitrarily high, due to the following considerations:
1. If the prices are competitive with foods, one might consider buying a combination of the ingredients rather than foods;
2. Otherwise, one will choose to buy foods directly.
6 / 183

7. LP model of Pricing problem.

Food            | Energy | Protein | Calcium | Price (cents)
Oatmeal         |  110   |   4     |    2    |   3
Whole milk      |  160   |   8     |  285    |   9
Cherry pie      |  420   |   4     |   22    |  20
Pork with beans |  260   |  14     |   80    |  19
Price           |   y1   |   y2    |   y3    |

Linear program:
max 2000y1 + 55y2 + 800y3                (money)
s.t. 110y1 + 4y2 + 2y3 <= 3              (oatmeal)
     160y1 + 8y2 + 285y3 <= 9            (milk)
     420y1 + 4y2 + 22y3 <= 20            (pie)
     260y1 + 14y2 + 80y3 <= 19           (pork&beans)
     y1, y2, y3 >= 0
7 / 183
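The pricing LP is exactly the dual of the Diet LP, so its optimal revenue should coincide with the minimal food cost (strong duality, shown later). A sketch assuming scipy is available; linprog minimizes, so the objective is negated:

```python
# Solve the Pricing (dual) LP: max b^T y  s.t.  A^T y <= c, y >= 0.
from scipy.optimize import linprog

b = [2000, 55, 800]                      # requirements -> dual objective
AT = [[110, 4, 2],                       # oatmeal row of A^T
      [160, 8, 285],                     # whole milk
      [420, 4, 22],                      # cherry pie
      [260, 14, 80]]                     # pork with beans
prices = [3, 9, 20, 19]

res = linprog([-v for v in b], A_ub=AT, b_ub=prices, bounds=(0, None))
revenue = -res.fun                       # maximal revenue of the company
print(res.x, revenue)                    # y* and the revenue (~67.1 cents)
```

The revenue matches the minimal diet cost computed from the primal side, and y2 = 0 because the protein constraint is slack at the optimal diet.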

8. Primal problem and Dual problem.

    c1   c2  ...  cn
    a11  a12 ...  a1n | b1
    a21  a22 ...  a2n | b2
    ...
    am1  am2 ...  amn | bm

Primal problem and Dual problem are two points of view of the coefficient matrix A:
Primal problem: row point of view.
Dual problem: column point of view.
8 / 183

9. Primal problem.
min c1 x1 + c2 x2 + ... + cn xn
s.t. a11 x1 + a12 x2 + ... + a1n xn >= b1
     a21 x1 + a22 x2 + ... + a2n xn >= b2
     ...
     am1 x1 + am2 x2 + ... + amn xn >= bm
     xj >= 0 for each j
Primal problem: row point of view (in red):
min c^T x  s.t.  Ax >= b, x >= 0
9 / 183

10. Dual problem.
max y1 b1 + y2 b2 + ... + ym bm
s.t. y1 a11 + y2 a21 + ... + ym am1 <= c1
     y1 a12 + y2 a22 + ... + ym am2 <= c2
     ...
     y1 a1n + y2 a2n + ... + ym amn <= cn
     yi >= 0 for each i
Dual problem: column point of view (in blue):
max b^T y  s.t.  A^T y <= c, y >= 0
10 / 183

11. How to write Dual problem? Case 1. For each constraint in the Primal problem, a variable is set in the Dual problem. If the Primal problem has ">=" inequality constraints, the Dual problem is written as follows.
Primal problem: min c^T x  s.t.  Ax >= b, x >= 0
Dual problem: max b^T y  s.t.  A^T y <= c, y >= 0
11 / 183

12. How to write Dual problem? Case 2. For each constraint in the Primal problem, a variable is set in the Dual problem. If the Primal problem has "<=" inequality constraints, the Dual problem is written as follows.
Primal problem: min c^T x  s.t.  Ax <= b, x >= 0
Dual problem: max b^T y  s.t.  A^T y <= c, y <= 0
12 / 183

13. How to write Dual problem? Case 3. For each constraint in the Primal problem, a variable is set in the Dual problem. If the Primal problem has equality constraints, the Dual problem is as follows.
Primal problem: min c^T x  s.t.  Ax = b, x >= 0
Dual problem: max b^T y  s.t.  A^T y <= c
Note: there is neither a y >= 0 nor a y <= 0 constraint in the dual problem; y is free.
13 / 183
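Case 3 can be checked numerically on the Diet data: rewriting Ax >= b as [A, -I][x; s] = b with surplus variables s >= 0 gives an equality-form primal whose dual variable y is free; the columns of -I then contribute the constraints -y <= 0, so y >= 0 reappears exactly as Case 1 predicts. A sketch assuming scipy and numpy are available:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[110, 160, 420, 260],
              [4,   8,   4,   14],
              [2,   285, 22,  80]], dtype=float)
b = np.array([2000, 55, 800], dtype=float)
c = np.array([3, 9, 20, 19], dtype=float)

# Equality form: [A, -I] [x; s] = b, (x, s) >= 0.
Aeq = np.hstack([A, -np.eye(3)])
ceq = np.concatenate([c, np.zeros(3)])

primal = linprog(ceq, A_eq=Aeq, b_eq=b, bounds=(0, None))
# Dual of the equality form: max b^T y  s.t.  Aeq^T y <= ceq, y free.
dual = linprog(-b, A_ub=Aeq.T, b_ub=ceq, bounds=(None, None))

gap = primal.fun - (-dual.fun)   # should vanish (strong duality)
y = dual.x                       # ends up nonnegative automatically
```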

14. Why can the Dual problem be written as above? Understanding duality from the Lagrangian dual point of view. 14 / 183

15. Standard form of constrained optimization problems. Consider the following constrained optimization problem (might be non-convex):
min f0(x)
s.t. fi(x) <= 0, i = 1, ..., m
     hi(x) = 0, i = 1, ..., p
Here the variables x ∈ R^n, and we use D = (∩_{i=0}^m dom fi) ∩ (∩_{i=1}^p dom hi) to represent the domain of definition. We use p* to represent the optimal value of the problem.
15 / 183

16. An equivalent unconstrained optimization problem. We can transform this constrained optimization problem into an equivalent unconstrained optimization problem:
min f0(x) + Σ_{i=1}^m I_(fi(x)) + Σ_{i=1}^p I0(hi(x))
where x ∈ D, and I_(u) and I0(u) are indicator functions for the non-positive reals and for the set {0}, respectively:
I_(u) = 0 if u <= 0, ∞ if u > 0;   I0(u) = 0 if u = 0, ∞ otherwise.
16 / 183

17. Difficulty in solving the optimization problem. Intuitively, I_(u) and I0(u) represent our infinite dissatisfaction with the violation of constraints. However, both I0(u) and I_(u) are non-differentiable, making the optimization problem
min f0(x) + Σ_{i=1}^m I_(fi(x)) + Σ_{i=1}^p I0(hi(x)),
although unconstrained, not easy to solve. Question: how to efficiently solve this optimization problem?
17 / 183

18. Approximating I_(u) using a differentiable function (1). An approximation to I_(u) is the logarithmic barrier function:
Î_(u) = -(1/t) log(-u)   (t > 0)
The difference between Î_(u) and I_(u) decreases as t increases. This approximation is used in the interior point method.
18 / 183

19. Approximating I_(u) using a differentiable function (2). Another approximation to I_(u) is a penalty function:
Î_(u) = φ(u) = u^t (t > 1) if u >= 0; 0 otherwise
The penalty function penalizes any u that is greater than zero. It is a hands-off method for converting constrained problems into unconstrained problems, for which an initial feasible solution is easy to obtain. However, in some cases it cannot be applied, because the objective function is undefined outside the feasible set, or because the unconstrained problem becomes ill-conditioned as t increases.
19 / 183

20. Approximating I_(u) using a differentiable function (3). Another approximation to I_(u) is a simple linear function:
Î_(u) = λu   (λ >= 0)
Despite the considerable difference between Î_(u) and I_(u), Î_(u) still provides lower bound information for I_(u).
20 / 183

21. Approximating I0(u) using a differentiable function. I0(u) can also be approximated using a linear function:
Î0(u) = νu
Although Î0(u) deviates considerably from I0(u), Î0(u) still provides lower bound information for I0(u). It is worth pointing out that, unlike Î_(u), Î0(u) has no restriction on ν.
21 / 183

22. Lagrangian function. Consider the unconstrained optimization problem
min f0(x) + Σ_{i=1}^m I_(fi(x)) + Σ_{i=1}^p I0(hi(x)).
Now let's replace I_(u) with λu (λ >= 0) and replace I0(u) with νu. Then the objective function becomes:
L(x, λ, ν) = f0(x) + Σ_{i=1}^m λi fi(x) + Σ_{i=1}^p νi hi(x)
This function is called the Lagrangian function; it is a lower bound of f0(x) at any feasible solution x when λ >= 0. Here we call λi the Lagrangian multiplier for the i-th inequality constraint fi(x) <= 0, and νi the Lagrangian multiplier for the i-th equality constraint hi(x) = 0.
22 / 183

23. Lagrangian connecting primal and dual: An example.
Primal problem: min x^2 - 2x  s.t.  x <= 0
Lagrangian: L(x, λ) = x^2 - 2x + λx
Note that L(x, λ) is a lower bound of the primal objective function x^2 - 2x when λ >= 0 and x <= 0.
Dual problem: max -(1/4)(2 - λ)^2  s.t.  λ >= 0
23 / 183
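A quick numerical check of this example (reading the primal constraint as x <= 0, the reading consistent with the dual objective g(λ) = -(1/4)(2-λ)^2 above): both the primal and the dual optimum equal 0, attained at x = 0 and λ = 2 respectively.

```python
# Primal: min x^2 - 2x s.t. x <= 0;  Lagrangian L(x, lam) = x^2 - 2x + lam*x.
xs = [i / 1000.0 for i in range(-4000, 1)]      # feasible grid: x in [-4, 0]
p_star = min(x * x - 2 * x for x in xs)         # primal optimum (0, at x = 0)

def g(lam):                                     # dual function, in closed form:
    return -(2.0 - lam) ** 2 / 4.0              # min_x L at x = (2 - lam) / 2

lams = [i / 1000.0 for i in range(0, 4001)]     # lam >= 0
d_star = max(g(lam) for lam in lams)            # dual optimum (0, at lam = 2)
```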

24. Lagrangian connecting primal and dual. [Figure: plots of the primal objective f0(x) = x^2 - 2x, the Lagrangian L(x, y) = x^2 - 2x + y*x for several y, and the dual objective g(y) = -(1/4)(2 - y)^2.] Observation: Primal objective function >= Lagrangian >= Dual objective function in the feasible region. 24 / 183

25. Lagrangian dual function and Lagrangian dual problem. 25 / 183

26. Lagrangian dual function. Consider the following constrained optimization problem:
min f0(x)  s.t.  fi(x) <= 0 (i = 1, ..., m),  hi(x) = 0 (i = 1, ..., p)
Lagrangian function:
L(x, λ, ν) = f0(x) + Σ_{i=1}^m λi fi(x) + Σ_{i=1}^p νi hi(x),
which is a lower bound of f0(x) at any feasible solution x when λ >= 0. Now let's consider the infimum of the Lagrangian function (called the Lagrangian dual function):
g(λ, ν) = inf_{x ∈ D} L(x, λ, ν)
Note: infimum rather than minimum is used here, as some sets have no minimum.
26 / 183

27. Lagrangian dual problem. The Lagrangian dual function provides a lower bound of the primal objective function, i.e.
f0(x) >= L(x, λ, ν) >= g(λ, ν)
for any feasible solution x when λ >= 0. Now let's try to find the tightest lower bound of the primal objective function, which can be obtained by solving the following Lagrangian dual problem:
max g(λ, ν)  s.t.  λ >= 0
27 / 183

28. Dual problem is always convex. Note that the Lagrangian dual function g(λ, ν) is a pointwise infimum of a family of functions that are affine in (λ, ν):
g(λ, ν) = inf_{x ∈ D} L(x, λ, ν) = inf_{x ∈ D} ( f0(x) + Σ_{i=1}^m λi fi(x) + Σ_{i=1}^p νi hi(x) )
Thus the Lagrangian dual function g(λ, ν) is always concave, and the dual problem is always a convex programming problem, even if the primal problem is non-convex.
28 / 183
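The concavity claim can be illustrated numerically: for each fixed x, L(x, λ) is affine in λ, and g is the pointwise minimum of these affine functions, hence concave. A small sketch using the Lagrangian of the earlier x^2 - 2x example (the sampling grid is an assumption):

```python
# g(lam) = min over finitely many x of a function affine in lam; check
# midpoint concavity of the result on a grid of lam values.
xs = [-3.0 + 0.01 * i for i in range(601)]      # sample x in [-3, 3]

def g(lam):
    # each term (x^2 - 2x) + x*lam is affine in lam for fixed x
    return min((x * x - 2 * x) + x * lam for x in xs)

pairs = [(l1, l2) for l1 in range(-3, 4) for l2 in range(-3, 4)]
concave = all(g((l1 + l2) / 2.0) >= (g(l1) + g(l2)) / 2.0 - 1e-9
              for l1, l2 in pairs)
```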

29. Note: The dual is not intrinsic. It is worth pointing out that the dual problem and its optimal objective value are not properties of the primal feasible set and primal objective function alone; they also depend on the specific representation of the constraints in the primal problem. Thus we can construct equivalent primal optimization problems with different duals in the following ways: replacing the primal objective function f0(x) with h(f0(x)), where h(u) is monotonically increasing; introducing new variables; adding redundant constraints. 29 / 183

30. Lagrangian function and dual function: An example I. Consider the following primal problem:
min e^{x^2} + 4e^{-x^2}
s.t. 2e^{-x^2} - 1 <= 0
Lagrangian function:
L(x, λ) = e^{x^2} + 4e^{-x^2} + λ(2e^{-x^2} - 1),  where λ >= 0.
[Figure: plots of f0(x) = e^{x^2} + 4e^{-x^2}, the constraint function, and L(x, λ) for several values of λ.]
30 / 183

31. Lagrangian function and dual function: An example II. Lagrangian dual function:
g(λ) = inf_{x ∈ R} L(x, λ) = 2·sqrt(4 + 2λ) - λ if λ >= -3/2;  5 + λ if λ < -3/2.
(Only λ >= 0 matters for the dual problem; g is concave.)
[Figure: plot of g(λ).]
31 / 183
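The closed form of g can be checked by brute force. This sketch uses the reconstruction L(x, λ) = e^{x²} + 4e^{-x²} + λ(2e^{-x²} - 1); the sampling grid and tolerance are assumptions:

```python
import math

def L(x, lam):
    return math.exp(x * x) + 4 * math.exp(-x * x) + lam * (2 * math.exp(-x * x) - 1)

def g_closed(lam):
    # minimizer in u = e^{x^2} >= 1 is u = sqrt(4 + 2*lam) when that is >= 1
    if lam >= -1.5:
        return 2 * math.sqrt(4 + 2 * lam) - lam
    return 5 + lam          # otherwise the infimum sits at the boundary x = 0

xs = [i / 1000.0 for i in range(-3000, 3001)]
max_err = max(abs(min(L(x, lam) for x in xs) - g_closed(lam))
              for lam in (-2.0, -1.5, -0.5, 0.0, 1.0))
```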

32. Deriving dual problem of linear program in standard form. 32 / 183

33. Dual problem of LP problem in standard form. Consider an LP problem in standard form:
min c^T x  s.t.  Ax >= b, x >= 0
Lagrangian function:
L(x, λ, ν) = c^T x - Σ_{i=1}^m λi (ai1 x1 + ... + ain xn - bi) - Σ_{i=1}^n νi xi
where λ >= 0, ν >= 0. Notice that for any feasible solution x and λ >= 0, ν >= 0, the Lagrangian function is a lower bound of the primal objective function, i.e. c^T x >= L(x, λ, ν), and further
c^T x >= L(x, λ, ν) >= inf_{x ∈ D} L(x, λ, ν).
Let's define the Lagrangian dual function g(λ, ν) = inf_{x ∈ D} L(x, λ, ν) and rewrite the above inequality as
c^T x >= L(x, λ, ν) >= g(λ, ν)
33 / 183

34. Lagrangian dual function. What is the Lagrangian dual function g(λ, ν)?
g(λ, ν) = inf_{x ∈ D} L(x, λ, ν)
        = inf_{x ∈ D} ( c^T x - Σ_{i=1}^m λi (ai1 x1 + ... + ain xn - bi) - Σ_{i=1}^n νi xi )
        = inf_{x ∈ D} ( λ^T b + (c^T - λ^T A - ν^T) x )
        = λ^T b if c^T = λ^T A + ν^T;  -∞ otherwise
Note that D = R^n. Thus g(λ, ν) = λ^T b if c^T = λ^T A + ν^T; otherwise g(λ, ν) = -∞, which is a trivial lower bound for the primal objective function c^T x. We usually denote the domain of g(λ, ν) as dom g = {(λ, ν) | g(λ, ν) > -∞}.
34 / 183

35. Lagrangian dual problem. Now let's try to find the tightest lower bound of the primal objective function c^T x, which can be calculated by solving the following Lagrangian dual problem:
max g(λ, ν) = λ^T b if c^T = λ^T A + ν^T, -∞ otherwise
s.t. λ >= 0, ν >= 0
or, explicitly representing the constraints in dom g:
max λ^T b  s.t.  λ^T A <= c^T, λ >= 0
Note that this is exactly the Dual form of LP if we replace λ by y; thus we have another explanation of the dual variables y: the Lagrangian multipliers.
35 / 183

36. An example.
Primal problem: min x  s.t.  x >= 2, x >= 0
Lagrangian function: L(x, λ, ν) = x - λ(x - 2) - νx = 2λ + (1 - λ - ν)x
Note that when λ >= 0, ν >= 0 and x >= 2, L(x, λ, ν) is a lower bound of the primal objective function x.
Lagrangian dual function: g(λ, ν) = inf_{x ∈ R} L(x, λ, ν) = 2λ if 1 - λ - ν = 0;  -∞ otherwise
Dual problem: max 2λ  s.t.  λ <= 1, λ >= 0
36 / 183
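This tiny pair can be verified directly (a sketch assuming scipy is available): the primal optimum and the dual optimum both equal 2.

```python
from scipy.optimize import linprog

# Primal: min x  s.t.  x >= 2 (and x >= 0)  ->  p* = 2 at x = 2.
primal = linprog([1], A_ub=[[-1]], b_ub=[-2], bounds=(0, None))

# Dual: max 2*lam  s.t.  lam + nu = 1, lam >= 0, nu >= 0
# (equivalently: max 2*lam s.t. 0 <= lam <= 1)  ->  d* = 2 at lam = 1.
dual = linprog([-2, 0], A_eq=[[1, 1]], b_eq=[1], bounds=(0, None))
```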

37. Deriving dual problem of linear program in slack form. 37 / 183

38. Dual problem of LP problem in slack form. Consider an LP problem in slack form:
min c^T x  s.t.  Ax = b, x >= 0
Lagrangian function:
L(x, λ, ν) = c^T x - Σ_{i=1}^m λi (ai1 x1 + ... + ain xn - bi) - Σ_{i=1}^n νi xi
Notice that for any feasible solution x and ν >= 0 (λ is unrestricted since the first group of constraints are equalities), the Lagrangian function is a lower bound of the primal objective function, i.e. c^T x >= L(x, λ, ν), and further
c^T x >= L(x, λ, ν) >= inf_{x ∈ D} L(x, λ, ν).
Let's define the Lagrangian dual function g(λ, ν) = inf_{x ∈ D} L(x, λ, ν) and rewrite the above inequality as
c^T x >= L(x, λ, ν) >= g(λ, ν)
38 / 183

39. Lagrangian dual function. What is the Lagrangian dual function g(λ, ν)?
g(λ, ν) = inf_{x ∈ D} L(x, λ, ν)
        = inf_{x ∈ D} ( c^T x - Σ_{i=1}^m λi (ai1 x1 + ... + ain xn - bi) - Σ_{i=1}^n νi xi )
        = inf_{x ∈ D} ( λ^T b + (c^T - λ^T A - ν^T) x )
        = λ^T b if c^T = λ^T A + ν^T;  -∞ otherwise
Note that D = R^n. Thus g(λ, ν) = λ^T b if c^T = λ^T A + ν^T; otherwise g(λ, ν) = -∞, which is a trivial lower bound for the primal objective function c^T x.
39 / 183

40. Lagrangian dual problem. Now let's try to find the tightest lower bound of the primal objective function c^T x, which can be calculated by solving the following Lagrangian dual problem:
max g(λ, ν) = λ^T b if c^T = λ^T A + ν^T, -∞ otherwise
s.t. ν >= 0
or, explicitly representing the constraints in dom g:
max λ^T b  s.t.  λ^T A <= c^T
Note that the dual problem does not have the λ >= 0 constraint, as for any λi, λi (ai1 x1 + ... + ain xn - bi) provides a lower bound of I0(ai1 x1 + ... + ain xn - bi).
40 / 183

41. Explanations of dual variables. 41 / 183

42. Two explanations of dual variables y: L. Kantorovich vs. T. Koopmans.
1. Price interpretation: Constrained optimization plays an important role in economics. Dual variables are also called shadow prices (by T. Koopmans), i.e. the instantaneous change in the objective function when a constraint is relaxed, or the marginal cost when strengthening a constraint.
2. Lagrangian multiplier interpretation: Dual variables are essentially Lagrangian multipliers, which describe the effect of constraints on the objective function (by L. Kantorovich). For example, they describe how much the optimal objective value changes when bi increases to bi + Δbi. In fact, we have ∂L(x, λ)/∂bi = λi.
42 / 183

43. Explanation of dual variables y: using Diet as an example. Optimal solution to the primal problem with b = (2000, 55, 800): x* = (14.244, 2.707, 0, 0), c^T x* ≈ 67.096. Optimal solution to the dual problem: y* = (0.0270, 0, 0.0164), y*^T b ≈ 67.096. Let's make a slight change on b and watch the effect on min c^T x:
1. b = (2001, 55, 800): min c^T x ≈ 67.123. (Note that y1 = 0.0270 ≈ 67.123 - 67.096.)
2. b = (2000, 56, 800): min c^T x ≈ 67.096, unchanged. (Note that y2 = 0.)
3. b = (2000, 55, 801): min c^T x ≈ 67.113. (Note that y3 = 0.0164 ≈ 67.113 - 67.096.)
43 / 183
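The shadow-price reading can be reproduced by re-solving the Diet LP with perturbed right-hand sides (a sketch assuming scipy; the y* values quoted above are rounded):

```python
from scipy.optimize import linprog

c = [3, 9, 20, 19]
A = [[110, 160, 420, 260], [4, 8, 4, 14], [2, 285, 22, 80]]

def diet_cost(b):
    res = linprog(c, A_ub=[[-v for v in row] for row in A],
                  b_ub=[-v for v in b], bounds=(0, None))
    return res.fun

base = diet_cost([2000, 55, 800])
d1 = diet_cost([2001, 55, 800]) - base   # ~ y1 = 0.0270
d2 = diet_cost([2000, 56, 800]) - base   # ~ y2 = 0 (protein constraint is slack)
d3 = diet_cost([2000, 55, 801]) - base   # ~ y3 = 0.0164
```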

44. Property of Lagrangian dual problem: Weak duality. 44 / 183

45. Weak duality. Let p* and d* denote the optimal objective values of a primal problem and its dual problem, respectively. We always have d* <= p*, regardless of the non-convexity of the primal problem. The difference p* - d* is called the duality gap. Weak duality holds even if p* = -∞, which implies the infeasibility of the dual problem. Similarly, if d* = +∞, the primal problem is infeasible. As dual problems are always convex, it is relatively easy to calculate d* efficiently and thus obtain a lower bound for p*. 45 / 183

46. An example of non-zero duality gap. Consider the following non-convex optimization problem:
min x1 x2  s.t.  x1 >= 0, x2 >= 0, x1^2 + x2^2 <= 1
Lagrange dual function:
g(λ) = inf_{x ∈ D} ( x1 x2 - λ1 x1 - λ2 x2 + λ3 (x1^2 + x2^2 - 1) )
Dual problem: max g(λ)  s.t.  λ1 >= 0, λ2 >= 0, λ3 >= 0
Duality gap: p* = 0 > d* = -1/2.
46 / 183
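The gap can be verified numerically: every feasible product x1 x2 is nonnegative, so p* = 0, while evaluating g at λ = (0, 0, 1/2) already gives the lower bound -1/2 (a grid-based sketch; the grid is an assumption):

```python
grid = [i / 100.0 for i in range(-200, 201)]     # x in [-2, 2]

# primal optimum over the feasible set x1, x2 >= 0, x1^2 + x2^2 <= 1
p_star = min(x1 * x2 for x1 in grid for x2 in grid
             if x1 >= 0 and x2 >= 0 and x1 * x1 + x2 * x2 <= 1.0)

# g(0, 0, 1/2) = inf_x ( x1 x2 + (x1^2 + x2^2 - 1) / 2 )
#              = inf_x ( (x1 + x2)^2 / 2 - 1/2 ) = -1/2   (at x1 = -x2)
lower = min(x1 * x2 + 0.5 * (x1 * x1 + x2 * x2 - 1.0)
            for x1 in grid for x2 in grid)
```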

47. Property of Lagrangian dual problem: Strong duality. 47 / 183

48. Strong duality. Strong duality holds if p* = d*, i.e., the duality gap is 0. Strong duality doesn't necessarily hold for a general optimization problem, but it almost always holds for convex ones, i.e.
min f0(x)  s.t.  fi(x) <= 0 (i = 1, ..., m),  Ax = b
where fi(x) (i = 0, 1, ..., m) are convex functions. The conditions that guarantee strong duality are called regularity conditions, one of which is Slater's condition. 48 / 183

49. Slater's condition. Slater's condition: Consider a convex optimization problem. Strong duality holds if there exists a vector x ∈ relint D such that
fi(x) < 0, i = 1, ..., m;  Ax = b
Suppose the first k constraints are affine; then Slater's condition relaxes to: there exists a vector x ∈ relint D such that
fi(x) <= 0, i = 1, ..., k;  fi(x) < 0, i = k+1, ..., m;  Ax = b
Specifically, if all constraints are affine, then the original constraints themselves form Slater's condition. Please refer to the Appendix for the proof of the Slater theorem. 49 / 183

50. Conditions of optimal solution: KKT conditions. 50 / 183

51 . Three types of optimization problems It is relatively easy to optimize an objective function without any constraint, say: min f 0 (x) But how to optimize an objective function with equality constraints? min f 0 (x) s.t. h i (x) = 0 i = 1, 2,..., p And how to optimize an objective function with inequality constraints? min f 0 (x) s.t. f i (x) 0 i = 1, 2,..., m h i (x) = 0 i = 1, 2,..., p 51 / 183

52. Optimization problem over equality constraints. Consider the following optimization problem:
min f0(x, y)  s.t.  h(x, y) = 0
Intuition: suppose (x*, y*) is the optimum point. Then at (x*, y*), f0(x, y) does not change when walking along the curve h(x, y) = 0; otherwise, we could follow the curve to make f0(x, y) smaller, meaning that the starting point (x*, y*) is not optimal. [Figure: the curve h(x, y) = 0, the contour f0(x, y) = 1, and the gradients ∇h(x, y) and ∇f0(x, y) at (x*, y*).]
52 / 183

53. [Figure: contours f0(x, y) = 1, 2 and the curve h(x, y) = 0; at (x*, y*) the two gradients are parallel.] So at (x*, y*), the red curve tangentially touches a blue contour, i.e. there exists a real λ such that:
∇f0(x*, y*) = λ ∇h(x*, y*)
Lagrange cleverly noticed that the equation above looks like the partial derivatives of some larger scalar function:
L(x, y, λ) = f0(x, y) - λ h(x, y)
Necessary condition of an optimum point: if (x*, y*) is a local optimum, then there exists a λ such that ∇L(x*, y*, λ) = 0.
53 / 183

54. Understanding Lagrangian function. Lagrangian function: a combination of the original optimization objective function and the constraints:
L(x, y, λ) = f0(x, y) - λ h(x, y)
The critical points of the Lagrangian L(x, y, λ) occur at saddle points rather than at local minima (or maxima). Thus, to utilize numerical optimization techniques, we must first transform the problem so that the critical points lie at local minima; this can be done by minimizing the squared magnitude of the gradient of the Lagrangian. 54 / 183

55. Optimization problems over inequality constraints. Consider the following optimization problem:
min f0(x, y)  s.t.  f1(x, y) <= 0
[Figure: two cases. Case 1: the optimum point (x*, y*) lies on the curve f1(x, y) = 0; thus the Lagrangian condition ∇L(x*, y*, λ) = 0 applies. Case 2: (x*, y*) lies within the interior region f1(x, y) < 0; thus we have ∇f0(x*, y*) = 0 at (x*, y*).]
55 / 183

56. Complementary slackness. These two cases can be summarized as the following two conditions:
(Stationary point) ∇L(x*, y*, λ) = 0
(Complementary slackness) λ f1(x*, y*) = 0
Reason: in case 2, f1(x*, y*) < 0 implies λ = 0 by complementary slackness; we then have ∇f0(x*, y*) = 0 since ∇L(x*, y*, λ) = 0. Complementary slackness, also called orthogonality by Gomory, is essentially equivalent to strong duality for convex optimization problems. A relaxation of this condition, i.e., λi fi(x*) = µ, where µ is a small positive number, is used in the interior point method. 56 / 183

57. Proof of complementary slackness. Assume that strong duality holds, i.e., p* = d*. Let x* and (λ*, ν*) with λ* >= 0 denote the optimal solutions to the primal and dual problems, respectively. We have:
f0(x*) = g(λ*, ν*)
       = inf_{x ∈ D} ( f0(x) + Σ_{i=1}^m λi* fi(x) + Σ_{i=1}^p νi* hi(x) )
       <= f0(x*) + Σ_{i=1}^m λi* fi(x*) + Σ_{i=1}^p νi* hi(x*)
       <= f0(x*)
Thus the last two inequalities turn into equalities, which implies that the Lagrangian function L(x, λ*, ν*) reaches its minimum at x*, and Σ_{i=1}^m λi* fi(x*) = 0. Note that each λi* fi(x*) <= 0. Hence λi* fi(x*) = 0 for i = 1, ..., m.
57 / 183

58. KKT conditions for general optimization problems I. Consider the following constrained optimization problem (might be non-convex):
min f0(x)  s.t.  fi(x) <= 0 (i = 1, ..., m),  hi(x) = 0 (i = 1, ..., p)
Lagrangian function:
L(x, λ, ν) = f0(x) + Σ_{i=1}^m λi fi(x) + Σ_{i=1}^p νi hi(x)
Let x* and (λ, ν) denote the optimal solutions to the primal and dual problems, respectively. Suppose strong duality holds.
58 / 183

59. KKT conditions for general optimization problems II. Since the Lagrangian function reaches its minimum at x*, its gradient is 0 at x*, i.e.,
∇f0(x*) + Σ_{i=1}^m λi ∇fi(x*) + Σ_{i=1}^p νi ∇hi(x*) = 0
Then x* and (λ, ν) satisfy the following KKT conditions:
1. (Stationary point) ∇L(x*, λ, ν) = 0
2. (Primal feasibility) fi(x*) <= 0, i = 1, ..., m;  hi(x*) = 0, i = 1, ..., p
3. (Dual feasibility) λi >= 0, i = 1, ..., m
4. (Complementary slackness) λi fi(x*) = 0, i = 1, ..., m
59 / 183

60. KKT conditions for convex problems I. Consider the following convex optimization problem:
min f0(x)  s.t.  fi(x) <= 0 (i = 1, ..., m),  Ax = b
Lagrangian function:
L(x, λ, ν) = f0(x) + Σ_{i=1}^m λi fi(x) + ν^T (Ax - b)
60 / 183

61. KKT conditions for convex problems II. x* and (λ, ν) are optimal solutions to the primal and dual problems, respectively, if they satisfy the following KKT conditions:
1. (Stationary point) ∇L(x*, λ, ν) = 0
2. (Primal feasibility) fi(x*) <= 0, i = 1, ..., m;  Ax* = b
3. (Dual feasibility) λi >= 0, i = 1, ..., m
4. (Complementary slackness) λi fi(x*) = 0, i = 1, ..., m
KKT conditions, named after William Karush, Harold W. Kuhn, and Albert W. Tucker, are usually not solved directly in optimization; instead, iterative successive approximation is most often used to find a final result that satisfies the KKT conditions.
61 / 183

62. Four properties of duality for linear program. 62 / 183

63. Property 1: Primal is the dual of dual.
Theorem. For linear programs, the primal problem is the dual of its dual.
For a general optimization problem, the dual of the dual is not always the primal problem, but a convex relaxation of the primal problem.
63 / 183

64. Property 2: Weak duality.
Theorem (Weak duality). The objective value of any feasible solution to the dual problem is always a lower bound of the objective value of any feasible solution to the primal problem.
This property is easy to prove, as the Lagrangian function connects the primal objective function and the dual objective function.
64 / 183

65. An example: Diet problem and its dual problem.
Primal problem P: min 3x1 + 9x2 + 20x3 + 19x4 (money) s.t. 110x1 + 160x2 + 420x3 + 260x4 >= 2000 (energy); 4x1 + 8x2 + 4x3 + 14x4 >= 55 (protein); 2x1 + 285x2 + 22x3 + 80x4 >= 800 (calcium); x1, x2, x3, x4 >= 0.
A feasible solution: x = [0, 8, 2, 0]^T, with c^T x = 112.
Dual problem D: max 2000y1 + 55y2 + 800y3 (money) s.t. 110y1 + 4y2 + 2y3 <= 3 (oatmeal); 160y1 + 8y2 + 285y3 <= 9 (milk); 420y1 + 4y2 + 22y3 <= 20 (pie); 260y1 + 14y2 + 80y3 <= 19 (pork&beans); y1, y2, y3 >= 0.
A feasible solution: y = [0.0269, 0, 0.0164]^T, with y^T b ≈ 66.9.
The theorem states that c^T x >= y^T b for any feasible solutions x and y.
65 / 183

66. Proof. Consider the following Primal problem: min c^T x s.t. Ax >= b, x >= 0, and Dual problem: max b^T y s.t. A^T y <= c, y >= 0. Let x and y denote feasible solutions to the primal and dual problems, respectively. We have
c^T x >= y^T A x   (by the feasibility of the dual problem, i.e., y^T A <= c^T, and x >= 0).
Therefore
c^T x >= y^T A x >= y^T b   (by the feasibility of the primal problem, i.e., Ax >= b, and y >= 0).
66 / 183

67. Property 3: Strong duality.
Theorem (Strong duality). Consider a linear program. If the primal problem has an optimal solution, then the dual problem also has an optimal solution with the same objective value.
Proof. Suppose x* = [B^{-1} b; 0] is an optimal basic solution to the primal problem. We have c̄^T = c^T - c_B^T B^{-1} A >= 0. Let's set y^T = c_B^T B^{-1}. We will show that y is an optimal solution to the dual problem: y is dual feasible since y^T A = c_B^T B^{-1} A <= c^T, and we have y^T b = c_B^T B^{-1} b = c^T x*. That is, y^T b reaches its upper bound, so y is an optimal solution to the dual problem.
67 / 183

68. Property 4: Complementary slackness.
Theorem. Let x and y denote feasible solutions to the primal and dual problems, respectively. Then x and y are optimal solutions iff
ui = yi (ai1 x1 + ai2 x2 + ... + ain xn - bi) = 0 for any 1 <= i <= m, and
vj = (cj - a1j y1 - a2j y2 - ... - amj ym) xj = 0 for any 1 <= j <= n.
Intuition: if a constraint of the primal problem is loosely restricted (slack), the corresponding dual variable is tight (zero). An example: the optimal solutions to Diet and its dual are x* = (14.244, 2.707, 0, 0) and y* = (0.0269, 0, 0.0164):
110x1 + 160x2 + 420x3 + 260x4 = 2000   (tight)
4x1 + 8x2 + 4x3 + 14x4 > 55            (slack, so y2 = 0)
2x1 + 285x2 + 22x3 + 80x4 = 800        (tight)
x1, x2, x3, x4 >= 0
68 / 183
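The two families of products ui, vj can be checked on the Diet pair directly (a sketch assuming scipy and numpy are available):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3, 9, 20, 19], dtype=float)
A = np.array([[110, 160, 420, 260],
              [4,   8,   4,   14],
              [2,   285, 22,  80]], dtype=float)
b = np.array([2000, 55, 800], dtype=float)

x = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None)).x    # primal optimum
y = linprog(-b, A_ub=A.T, b_ub=c, bounds=(0, None)).x   # dual optimum

u = y * (A @ x - b)       # y_i * (slack of primal constraint i) -> all zero
v = (c - A.T @ y) * x     # (slack of dual constraint j) * x_j   -> all zero
```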

69. Proof.
ui = 0 and vj = 0 for any i and j
<=> Σi ui = 0 and Σj vj = 0   (since ui >= 0, vj >= 0)
<=> Σi ui + Σj vj = 0
<=> (y^T A x - y^T b) + (c^T x - y^T A x) = 0
<=> y^T b = c^T x
<=> y and x are optimal solutions (by the duality properties: weak duality makes equal objective values a certificate of optimality, and strong duality guarantees that optimal solutions attain equal values).
69 / 183

70 . Summary: 9 cases of primal and dual problems 70 / 183

71. Example 1: Primal has unbounded objective value and Dual is infeasible.
Primal: min -2x1 - 2x2  s.t.  x1 - x2 <= 1;  -x1 + x2 <= 1;  x1 >= 0;  x2 >= 0
Dual:   max y1 + y2  s.t.  y1 <= 0;  y2 <= 0;  y1 - y2 <= -2;  -y1 + y2 <= -2
(The primal is unbounded along x1 = x2 -> ∞; adding the last two dual constraints gives 0 <= -4, so the dual is infeasible.)
71 / 183

72. Example 2: both Primal and Dual are infeasible.
Primal: min x1 - 2x2  s.t.  x1 - x2 <= -2;  -x1 + x2 <= -1;  x1 >= 0;  x2 >= 0
Dual:   max -2y1 - y2  s.t.  y1 <= 0;  y2 <= 0;  y1 - y2 <= 1;  -y1 + y2 <= -2
(Adding the two primal constraints gives 0 <= -3; adding the last two dual constraints gives 0 <= -1.)
72 / 183

73. Solving linear program using duality. 73 / 183

74. KKT conditions for linear program. Consider a linear program in slack form:
min c^T x  s.t.  Ax = b, x >= 0
The KKT conditions turn into: x and y are optimal solutions to the primal and dual problems, respectively, if they satisfy the following three conditions:
1. (Primal feasibility) Ax = b, x >= 0
2. (Dual feasibility) y^T A <= c^T
3. (Complementary slackness) c^T x = y^T b
Question: how to obtain x and y that satisfy these three conditions simultaneously?
74 / 183

75 . Strategy 1 1: x = x 0 ; //Initialize x with a primal feasible solution 2: y = y 0 ; //Calculate initial y according to complementary slackness 3: while TRUE do 4: x =Improve(x); 5: //Improve x towards dual feasibility of corresponding y. Throughout this process, primal feasibility is maintained and y is recalculated according to complementary slackness 6: if stopping(x) then 7: break; 8: //If y is dual feasible then stop 9: end if 10: end while 11: return x; Example: Primal simplex 75 / 183

76 . Strategy 2 1: y = y 0 ; //Initialize y with a dual feasible solution 2: x = x 0 ; //Calculate initial x according to complementary slackness 3: while TRUE do 4: y =Improve(y); 5: //Improve y towards primal feasibility of corresponding x. Throughout this process, dual feasibility is maintained and x is recalculated according to complementary slackness 6: if stopping(y) then 7: break; 8: //If x is primal feasible then stop 9: end if 10: end while 11: return y; Example: Dual simplex, primal and dual 76 / 183

77 . Strategy 3 1: x = x 0 ; //Initialize x with a primal feasible solution 2: y = y 0 ; //Initialize y with a dual feasible solution 3: while TRUE do 4: (x, y) =Improve(x, y); //Improve x and y towards complementary slackness 5: if stopping(x, y) then 6: break; 7: //If x and y satisfy complementary slackness then stop 8: end if 9: end while 10: return y; Example: Interior point method 77 / 183

78. Dual simplex method. 78 / 183

79. Revisiting primal simplex: An example.
min -x3 + x4
s.t. x1 - x3 + 2x4 = 2
     x2 + 3x3 - 2x4 = 6
     x1, x2, x3, x4 >= 0
Initial simplex tableau (basis {x1, x2}):
          x1       x2       x3        x4
-z = 0    c̄1 = 0   c̄2 = 0   c̄3 = -1   c̄4 = 1
xB1 = b̄1 = 2:   1   0   -1    2
xB2 = b̄2 = 6:   0   1    3   -2
[Figure: the feasible region in the (x3, x4)-plane, bounded by -x3 + 2x4 <= 2 and 3x3 - 2x4 <= 6.]
79 / 183

80. Step 2. After pivoting (x3 enters, x2 leaves), the basis becomes {x1, x3}:
          x1       x2         x3       x4
-z = 2    c̄1 = 0   c̄2 = 1/3   c̄3 = 0   c̄4 = 1/3
xB1 = b̄1 = 4:   1   1/3   0    4/3
xB2 = b̄2 = 2:   0   1/3   1   -2/3
Solution: x = (4, 0, 2, 0), objective z = -2.
[Figure: the same feasible region, with the current vertex marked.]
80 / 183

81. The viewpoint of dual problem.
Primal problem P: min -x3 + x4  s.t.  x1 - x3 + 2x4 = 2;  x2 + 3x3 - 2x4 = 6;  x1, x2, x3, x4 >= 0
Dual problem D: max 2y1 + 6y2  s.t.  y1 <= 0;  y2 <= 0;  -y1 + 3y2 <= -1;  2y1 - 2y2 <= 1
[Figure: the primal feasible region in the (x3, x4)-plane, and the dual feasible region in the (y1, y2)-plane bounded by -y1 + 3y2 = -1 and 2y1 - 2y2 = 1.]
81 / 183

82. Set primal and dual solutions according to a basis. Let's consider a linear program in slack form and its dual problem, i.e.
Primal problem: min c^T x  s.t.  Ax = b, x >= 0
Dual problem: max b^T y  s.t.  A^T y <= c
From each basis B of A, we can set a primal solution x and a dual solution y.
Primal solution: x = [B^{-1} b; 0]. x is feasible if B^{-1} b >= 0.
Dual solution: y^T = c_B^T B^{-1}. y is feasible if y^T A <= c^T, i.e., c̄^T = c^T - c_B^T B^{-1} A = c^T - y^T A >= 0.
Note that with this setting, complementary slackness follows, as c^T x = c_B^T B^{-1} b = y^T b.
82 / 183

83. Primal feasible basis. Consider the Primal problem: min -x3 + x4 s.t. x1 - x3 + 2x4 = 2; x2 + 3x3 - 2x4 = 6; x1, x2, x3, x4 >= 0, with the initial tableau above (basis {x1, x2}, b̄ = (2, 6), c̄ = (0, 0, -1, 1)).
Primal variables: x; feasible iff B^{-1} b >= 0. A basis B is called primal feasible if all elements of B^{-1} b (the first column of the tableau, except for -z) are non-negative.
83 / 183

84. Dual feasible basis. Now let's consider the Dual problem: max 2y1 + 6y2 s.t. y1 <= 0; y2 <= 0; -y1 + 3y2 <= -1; 2y1 - 2y2 <= 1. Consider the primal simplex tableau again.
Dual variables: y^T = c_B^T B^{-1}; feasible iff y^T A <= c^T. A basis B is called dual feasible if all elements of c̄^T = c^T - c_B^T B^{-1} A = c^T - y^T A (the first row of the tableau, except for -z) are non-negative.
84 / 183

85. Another view point of the Primal Simplex algorithm. Thus another view point of the Primal Simplex algorithm can be described as:
1. Starting point: The Primal Simplex algorithm starts with a primal basic feasible solution (the first column of the simplex tableau: xB = B^{-1} b >= 0). Set the dual variables y^T = c_B^T B^{-1}.
2. Improvement: By pivoting, we move towards dual feasibility, i.e. the first row of the tableau c̄^T = c^T - c_B^T B^{-1} A = c^T - y^T A >= 0. Here the improvement of dual feasibility is achieved by selecting a negative element of the first row in the pivoting process. Throughout the process we maintain primal feasibility and complementary slackness, i.e., c^T x = c_B^T B^{-1} b = y^T b.
3. Stopping criterion: c̄^T = c^T - c_B^T B^{-1} A >= 0, i.e., y^T A <= c^T. In other words, the iteration ends when the basis is both primal feasible and dual feasible.
85 / 183

86. Another viewpoint of primal simplex: Step 1. Same LP and initial tableau as above (basis {x1, x2}).
Dual solution: y^T = c_B^T B^{-1} = [0, 0], which is dual infeasible (it violates -y1 + 3y2 <= -1).
[Figure: the primal and dual feasible regions, with the current points marked.]
86 / 183

87. Another viewpoint of primal simplex: Step 2. Tableau after pivoting (basis {x1, x3}, -z = 2, c̄ = (0, 1/3, 0, 1/3)).
Dual solution: y^T = c_B^T B^{-1} = [0, -1/3], which is dual feasible.
[Figure: the primal and dual feasible regions, with the current points marked.]
87 / 183

88. Dual simplex works in just the opposite fashion. Dual simplex:
1. Starting point: The Dual Simplex algorithm starts with a dual basic feasible solution y^T = c_B^T B^{-1} such that y^T A <= c^T, i.e., the first row of the tableau c̄^T = c^T - c_B^T B^{-1} A = c^T - y^T A >= 0. Set the primal variables xB = B^{-1} b and xN = 0.
2. Improvement: By pivoting, we move towards primal feasibility, i.e. the first column of the tableau B^{-1} b >= 0. Here the improvement of primal feasibility is achieved by selecting a negative element of the first column in the pivoting process. Throughout the process we maintain dual feasibility and complementary slackness, i.e., c^T x = c_B^T B^{-1} b = y^T b.
3. Stopping criterion: xB = B^{-1} b >= 0. In other words, the iteration ends when the basis is both primal feasible and dual feasible.
88 / 183

89 . Primal simplex vs. Dual simplex
Both primal simplex and dual simplex terminate under the same condition, i.e. the basis is primal feasible and dual feasible simultaneously. However, the final objective is reached in totally opposite fashions: the primal simplex method keeps primal feasibility while the dual simplex method keeps dual feasibility during the pivoting process.
The primal simplex algorithm first selects an entering variable and then determines the leaving variable. In contrast, the dual simplex method does the opposite: it first selects a leaving variable and then determines an entering variable. 89 / 183

90 Dual simplex(b I, z, A, b, c) 1: //Dual simplex starts with a dual feasible basis. Here, B I contains the indices of the basic variables. 2: while TRUE do 3: if there is no index l (1 l m) has b l < 0 then 4: x =CalculateX(B I, A, b, c); 5: return (x, z); 6: end if; 7: choose an index l having b l < 0 according to a certain rule; 8: for each index j (1 i n) do 9: if a lj < 0 then 10: j = c j a lj ; 11: else 12: j = ; 13: end if 14: end for 15: choose an index e that minimizes j; 16: if e = then 17: return no feasible solution ; 18: end if 19: (B I, A, b, c, z) = Pivot(B I, A, b, c, z, e, l); 20: end while 90 / 183

91 . An example
Standard form:
min 5x1 + 35x2 + 20x3
s.t. x1 − x2 − x3 ≤ −2
     −x1 − 3x2 ≤ −3
     x1, x2, x3 ≥ 0
Slack form:
min 5x1 + 35x2 + 20x3
s.t. x1 − x2 − x3 + x4 = −2
     −x1 − 3x2 + x5 = −3
     x1, x2, x3, x4, x5 ≥ 0
91 / 183

92 . Step 1
Tableau: −z = 0; c̄1 = 5, c̄2 = 35, c̄3 = 20, c̄4 = 0, c̄5 = 0; x_B1 = x4 = b̄1 = −2, x_B2 = x5 = b̄2 = −3.
Basis (in blue): B = {a4, a5}
Solution: x = [B^{-1}b; 0] = (0, 0, 0, −2, −3).
Pivoting: choose a5 to leave the basis since b̄2 = −3 < 0; choose a1 to enter the basis since min_{j: a_2j < 0} c̄_j / |a_2j| = c̄1 / |a_21| = 5.
92 / 183

93 . Step 2
Tableau: −z = −15; c̄1 = 0, c̄2 = 20, c̄3 = 20, c̄4 = 0, c̄5 = 5; x_B1 = x4 = b̄1 = −5, x_B2 = x1 = b̄2 = 3.
Basis (in blue): B = {a1, a4}
Solution: x = [B^{-1}b; 0] = (3, 0, 0, −5, 0).
Pivoting: choose a4 to leave the basis since b̄1 = −5 < 0; choose a2 to enter the basis since min_{j: a_1j < 0} c̄_j / |a_1j| = c̄2 / |a_12| = 5.
93 / 183

94 . Step 3
Tableau: −z = −40; c̄1 = 0, c̄2 = 0, c̄3 = 15, c̄4 = 5, c̄5 = 10; x_B1 = x2 = b̄1 = 5/4, x_B2 = x1 = b̄2 = −3/4.
Basis (in blue): B = {a1, a2}
Solution: x = [B^{-1}b; 0] = (−3/4, 5/4, 0, 0, 0).
Pivoting: choose a1 to leave the basis since b̄2 = −3/4 < 0; choose a3 to enter the basis since min_{j: a_2j < 0} c̄_j / |a_2j| = c̄3 / |a_23| = 20.
94 / 183

95 . Step 4
Tableau: −z = −55; c̄1 = 20, c̄2 = 0, c̄3 = 0, c̄4 = 20, c̄5 = 5; x_B1 = x2 = b̄1 = 1, x_B2 = x3 = b̄2 = 1.
Basis (in blue): B = {a2, a3}
Solution: x = [B^{-1}b; 0] = (0, 1, 1, 0, 0). Done!
95 / 183
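At termination the basis {a2, a3} should be both primal and dual feasible, with y^T b equal to the primal objective 55. A quick check in exact arithmetic (the data are as reconstructed for this example):

```python
from fractions import Fraction as F

# Slack-form data of this example (signs reconstructed):
A = [[F(1), F(-1), F(-1), F(1), F(0)],
     [F(-1), F(-3), F(0), F(0), F(1)]]
b = [F(-2), F(-3)]
c = [F(5), F(35), F(20), F(0), F(0)]

# Final basis columns a2, a3; invert the 2x2 basis matrix directly.
B = [[A[0][1], A[0][2]], [A[1][1], A[1][2]]]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
Binv = [[B[1][1] / det, -B[0][1] / det],
        [-B[1][0] / det, B[0][0] / det]]
cB = [c[1], c[2]]
y = [cB[0] * Binv[0][0] + cB[1] * Binv[1][0],
     cB[0] * Binv[0][1] + cB[1] * Binv[1][1]]          # y^T = c_B^T B^{-1}

# primal feasibility: x_B = B^{-1} b >= 0
primal_ok = all(Binv[i][0] * b[0] + Binv[i][1] * b[1] >= 0 for i in range(2))
# dual feasibility: y^T a_j <= c_j for every column j
dual_ok = all(y[0] * A[0][j] + y[1] * A[1][j] <= c[j] for j in range(5))
obj = y[0] * b[0] + y[1] * b[1]                        # y^T b
print(y, primal_ok, dual_ok, obj)   # y = (-20, -5), both feasible, obj = 55
```

Both feasibility conditions hold and y^T b matches c^T x = 55, which is the stopping criterion of the algorithm.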

96 . When is the dual simplex method useful?
.1 The dual simplex algorithm is best suited for problems for which an initial dual feasible solution is easily available.
.2 It is particularly useful for reoptimizing a problem after a constraint has been added or some parameters have been changed, so that the previously optimal basis is no longer feasible.
.3 Trying dual simplex is also worthwhile if your LP appears to be highly degenerate, i.e. there are many vertices of the feasible region for which the associated basis is degenerate. With primal simplex we may then find that a large number of iterations (moves between adjacent vertices) occur with little or no improvement. 1
1 References: Operations Research Models and Methods, Paul A. Jensen and Jonathan F. Bard; OR-Notes, J. E. Beasley 96 / 183

97 . Primal and dual method: another strategy to find a pair of primal and dual variables satisfying the KKT conditions 97 / 183

98 . Primal and dual method: a brief history
In 1955, H. Kuhn proposed the Hungarian method for the MaxWeightedMatching problem. This method effectively exploits the duality property of linear programming.
In 1956, G. Dantzig, R. Ford, and D. Fulkerson extended this idea to solve general linear programming problems.
In 1957, R. Ford and D. Fulkerson applied this idea to the network-flow problem and the Hitchcock problem.
In 1957, J. Munkres applied this idea to the transportation problem.
98 / 183

99 . Primal and dual method
Basic idea: it is not easy to find primal variables x and dual variables y that satisfy the KKT conditions simultaneously. A reasonable strategy is to start from a dual feasible y and pose restrictions on x according to complementary slackness; then a step-by-step improvement procedure is performed towards primal feasibility of x.
(Diagram: Primal P (x) ↔ Dual D (y); Step 1: y → Restricted Primal RP (x); Step 2: x → Dual of RP, DRP (ȳ); Step 3: y = y + θȳ.)
99 / 183

100 . Basic idea of primal and dual method
Primal P:
min c1x1 + c2x2 + ... + cnxn
s.t. a11x1 + a12x2 + ... + a1nxn = b1 (y1)
     a21x1 + a22x2 + ... + a2nxn = b2 (y2)
     ...
     am1x1 + am2x2 + ... + amnxn = bm (ym)
     x1, x2, ..., xn ≥ 0
Dual D:
max b1y1 + b2y2 + ... + bmym
s.t. a11y1 + a21y2 + ... + am1ym ≤ c1
     a12y1 + a22y2 + ... + am2ym ≤ c2
     ...
     a1ny1 + a2ny2 + ... + amnym ≤ cn
Basic idea: suppose we are given a dual feasible solution y. Let's verify whether y is an optimal solution or not:
.1 If y is an optimal solution to the dual problem D, then the corresponding primal variables x should satisfy a restricted primal problem called RP;
.2 Furthermore, even if y is not optimal, the solution to the dual of RP (called DRP) still provides invaluable information: it can be used to improve y.
100 / 183

101 . Step 1: y → x I
Dual problem D:
max b1y1 + b2y2 + ... + bmym
s.t. a11y1 + a21y2 + ... + am1ym = c1 (tight ⇒ x1 ≥ 0 is allowed)
     ...
     a1ny1 + a2ny2 + ... + amnym < cn (slack ⇒ xn = 0)
y provides information about the corresponding primal variables x:
.1 Given a dual feasible solution y, let's check whether y is an optimal solution or not.
.2 If y is optimal, we have the following restrictions on x:
     a1iy1 + a2iy2 + ... + amiym < ci ⇒ xi = 0
(Reason: complementary slackness. An optimal pair satisfies (a1iy1 + a2iy2 + ... + amiym − ci) xi = 0.)
.3 Let's use J to record the indices of the tight constraints, where "=" holds.
101 / 183

102 . Step 1: y → x II (figure only) 102 / 183

103 . Step 1: y → x III
.4 Thus the corresponding primal solution x should satisfy the following restricted primal (RP):
.5 RP:
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm
xi = 0, i ∉ J
xi ≥ 0, i ∈ J
.6 In other words, the optimality of y is determined via solving RP.
103 / 183

104 . But how to solve RP? I
RP:
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm
xi = 0, i ∉ J
xi ≥ 0, i ∈ J
How to solve RP? Recall that a feasibility system Ax = b, x ≥ 0 can be solved via an extended LP, exactly as when finding an initial basic feasible solution for the simplex method.
104 / 183

105 . But how to solve RP? II
RP (extended through introducing slack variables):
min ϵ = s1 + s2 + ... + sm
s.t. s1 + a11x1 + ... + a1nxn = b1
     s2 + a21x1 + ... + a2nxn = b2
     ...
     sm + am1x1 + ... + amnxn = bm
     xi = 0, i ∉ J
     xi ≥ 0, i ∈ J
     si ≥ 0, for all i
.1 If ϵ_OPT = 0, then we have found a feasible solution to RP, implying that y is an optimal solution;
.2 If ϵ_OPT > 0, y is not an optimal solution.
105 / 183

106 . Step 2: x → ȳ I
Alternatively, we can solve the dual of RP, called DRP:
max w = b1ȳ1 + b2ȳ2 + ... + bmȳm
s.t. a1jȳ1 + a2jȳ2 + ... + amjȳm ≤ 0, for each j ∈ J
     ȳ1, ȳ2, ..., ȳm ≤ 1
.1 If w_OPT = 0, y is an optimal solution;
.2 If w_OPT > 0, y is not an optimal solution. However, the optimal solution ȳ still provides useful information: it can be used to improve y.
106 / 183

107 . The difference between DRP and D
Dual problem D:
max b1y1 + b2y2 + ... + bmym
s.t. a1jy1 + a2jy2 + ... + amjym ≤ cj, j = 1, ..., n
DRP:
max w = b1ȳ1 + b2ȳ2 + ... + bmȳm
s.t. a1jȳ1 + a2jȳ2 + ... + amjȳm ≤ 0, j ∈ J
     ȳ1, ȳ2, ..., ȳm ≤ 1
How to write DRP from D?
Replace each cj with 0;
Keep only the |J| restrictions with j ∈ J;
Add the restriction ȳ1, ȳ2, ..., ȳm ≤ 1.
107 / 183

108 . Step 3: y → y′ I
Why can ȳ be used to improve y? Consider an improved dual solution y′ = y + θȳ, θ > 0. We have:
Objective function: since ȳ^T b = w_OPT > 0, y′^T b = y^T b + θ w_OPT > y^T b. In other words, y + θȳ is better than y.
Constraints: dual feasibility requires that:
For any j ∈ J: a1jȳ1 + a2jȳ2 + ... + amjȳm ≤ 0, thus y′^T a_j = y^T a_j + θ ȳ^T a_j ≤ c_j for any θ > 0.
For any j ∉ J, there are two cases:
108 / 183

109 . Step 3: y → y′ II
.1 If ȳ^T a_j = a1jȳ1 + a2jȳ2 + ... + amjȳm ≤ 0 for all j ∉ J: then y′ is feasible for any θ > 0, since for 1 ≤ j ≤ n,
a1jy′1 + a2jy′2 + ... + amjy′m = (a1jy1 + a2jy2 + ... + amjym) + θ(a1jȳ1 + a2jȳ2 + ... + amjȳm) ≤ cj.
Hence the dual problem D is unbounded and the primal problem P is infeasible.
.2 If ȳ^T a_j = a1jȳ1 + a2jȳ2 + ... + amjȳm > 0 for some j ∉ J: we can safely set
θ ≤ min_{j ∉ J: ȳ^T a_j > 0} (cj − y^T a_j) / (ȳ^T a_j)
to guarantee that y′^T a_j = y^T a_j + θ ȳ^T a_j ≤ cj.
109 / 183

110 . Primal Dual algorithm
1: Infeasible = "No"; Optimal = "No"; y = y0; //y0 is a feasible solution to the dual problem D
2: while TRUE do
3:   Find the tight constraint index set J, and set the corresponding xj = 0 for j ∉ J.
4:   Thus we have a smaller RP.
5:   Solve DRP. Denote the solution as ȳ.
6:   if DRP objective value w_OPT = 0 then
7:     Optimal = "Yes";
8:     return y;
9:   end if
10:  if ȳ^T a_j ≤ 0 for all j ∉ J then
11:    Infeasible = "Yes";
12:    return;
13:  end if
14:  Set θ = min_{j ∉ J: ȳ^T a_j > 0} (cj − y^T a_j) / (ȳ^T a_j).
15:  Update y as y = y + θȳ;
16: end while
110 / 183

111 . Advantages of the Primal and dual algorithm
The primal and dual algorithm terminates if an anti-cycling rule is used. (Reason: the objective value y^T b strictly increases if there is no degeneracy.)
Both RP and DRP do not explicitly rely on c. In fact, the information of c is represented in J. This leads to another advantage of the primal and dual technique: RP is usually a purely combinatorial problem. Take ShortestPath as an example: RP corresponds to a connection problem.
An optimal solution to DRP usually has a combinatorial explanation, especially for graph-theoretic problems.
More and more constraints become tight during the primal and dual process.
Unlike dual simplex, which starts from a dual basic feasible solution, the primal and dual method requires only a dual feasible solution. (See Lecture 10 for a primal and dual algorithm for the MaximumFlow problem.)
111 / 183
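For ShortestPath the whole loop becomes purely combinatorial: DRP is solved by a reachability search over tight edges, and θ is exactly Dijkstra's "closest vertex outside S" step. A sketch under that interpretation (the function name and edge-list encoding are choices made here; the instance is the example worked through on the following slides):

```python
def primal_dual_sp(edges, s, t):
    """Primal-dual scheme for ShortestPath with y[t] fixed to 0:
    repeatedly raise every vertex reachable from s through tight
    edges by theta, until t itself becomes reachable (w_OPT = 0).
    Assumes t is reachable from s in the graph."""
    nodes = {a for a, _, _ in edges} | {b for _, b, _ in edges}
    y = {v: 0 for v in nodes}
    trace = []
    while True:
        tight = [(a, b) for a, b, d in edges if y[a] - y[b] == d]
        S = {s}                       # DRP solved combinatorially:
        grew = True                   # ybar[v] = 1 iff v is reachable
        while grew:                   # from s via tight edges
            grew = False
            for a, b in tight:
                if a in S and b not in S:
                    S.add(b)
                    grew = True
        if t in S:                    # y is optimal; y[s] = shortest distance
            return y[s], trace
        # step length theta: the constraint that becomes tight first
        theta = min(d - (y[a] - y[b]) for a, b, d in edges
                    if a in S and b not in S)
        for v in S:
            y[v] += theta
        trace.append(dict(y))

# the example instance of the following slides
edges = [('s', 'u', 5), ('s', 'v', 8), ('u', 'v', 1),
         ('u', 't', 6), ('v', 't', 2)]
dist, trace = primal_dual_sp(edges, 's', 't')
print(dist)   # 8: the shortest path s -> u -> v -> t
```

The recorded trace of y reproduces the dual updates (5, 0, 0) → (6, 1, 0) → (8, 3, 2) shown in the iterations below.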

112 . ShortestPath: Dijkstra's algorithm is essentially a Primal Dual algorithm 112 / 183

113 . ShortestPath problem
INPUT: n cities, and a collection of roads. A road from city i to j has a distance d(i, j). Two specific cities: s and t.
OUTPUT: the shortest path from city s to t.
(Example graph: edges s→u of length 5, s→v of length 8, u→v of length 1, u→t of length 6, v→t of length 2.)
113 / 183
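For later comparison with the primal-dual iterations, here is classic Dijkstra on this instance; a standard heap-based sketch (the adjacency-list encoding is a choice made here):

```python
import heapq

def dijkstra(adj, s):
    """Classic Dijkstra: returns the shortest distances from s."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                    # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# the example instance: s->u (5), s->v (8), u->v (1), u->t (6), v->t (2)
adj = {'s': [('u', 5), ('v', 8)], 'u': [('v', 1), ('t', 6)], 'v': [('t', 2)]}
dist = dijkstra(adj, 's')
print(dist['t'])   # 8, via s -> u -> v -> t
```

The distances (u at 5, v at 6, t at 8) are exactly the heights that the primal-dual iterations will assign, up to the shift y_t = 0.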

114 . ShortestPath problem: Primal problem
(Edge variables: x1 = (s, u), x2 = (s, v), x3 = (u, v), x4 = (u, t), x5 = (v, t).)
Primal problem: set variables for roads (intuition: xi = 0/1 indicates whether edge i appears in the shortest path), and each constraint expresses flow conservation: the path enters a node through one edge and leaves it through another.
min 5x1 + 8x2 + 1x3 + 6x4 + 2x5
s.t. x1 + x2 = 1 (vertex s)
     −x4 − x5 = −1 (vertex t)
     −x1 + x3 + x4 = 0 (vertex u)
     −x2 − x3 + x5 = 0 (vertex v)
     x1, x2, x3, x4, x5 = 0/1
114 / 183

115 . ShortestPath problem
Primal problem: relax the 0/1 integer linear program into a linear program; this is justified by the totally unimodular property of the constraint matrix.
min 5x1 + 8x2 + 1x3 + 6x4 + 2x5
s.t. x1 + x2 = 1 (vertex s)
     −x4 − x5 = −1 (vertex t)
     −x1 + x3 + x4 = 0 (vertex u)
     −x2 − x3 + x5 = 0 (vertex v)
     x1, x2, x3, x4, x5 ≥ 0
     x1, x2, x3, x4, x5 ≤ 1
115 / 183

116 . ShortestPath problem: Dual problem
Dual problem: set variables yi for cities and variables zi for the constraints xi ≤ 1.
max ys − yt + z1 + z2 + z3 + z4 + z5
s.t. ys − yu + z1 ≤ 5
     ys − yv + z2 ≤ 8
     yu − yv + z3 ≤ 1
     −yt + yu + z4 ≤ 6
     −yt + yv + z5 ≤ 2
     zi ≤ 0, i = 1, ..., 5
116 / 183

117 . Dual of ShortestPath problem
(Figure: each city i drawn at height yi.)
Dual problem: set variables for cities. (Intuition: yi means the height of city i; thus ys − yt denotes the height difference between s and t, providing a lower bound on the shortest path length.)
max ys − yt
s.t. ys − yu ≤ 5 (x1: edge (s, u))
     ys − yv ≤ 8 (x2: edge (s, v))
     yu − yv ≤ 1 (x3: edge (u, v))
     −yt + yu ≤ 6 (x4: edge (u, t))
     −yt + yv ≤ 2 (x5: edge (v, t))
117 / 183

118 . A simplified version
Dual problem: simplify by setting yt = 0 (and remove the second constraint of the primal problem P accordingly).
max ys
s.t. ys − yu ≤ 5 (x1: edge (s, u))
     ys − yv ≤ 8 (x2: edge (s, v))
     yu − yv ≤ 1 (x3: edge (u, v))
     yu ≤ 6 (x4: edge (u, t))
     yv ≤ 2 (x5: edge (v, t))
118 / 183
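On this small instance, weak duality can be checked by brute force: every s→t path length upper-bounds any feasible ys, and ys = 8 is attainable. A sketch (path enumeration by DFS; `feasible` encodes the five constraints above with yt = 0):

```python
edges = [('s', 'u', 5), ('s', 'v', 8), ('u', 'v', 1),
         ('u', 't', 6), ('v', 't', 2)]

def path_lengths(u, t, seen=()):
    """Lengths of all simple u -> t paths, enumerated by DFS."""
    if u == t:
        yield 0
        return
    for a, b, d in edges:
        if a == u and b not in seen:
            for rest in path_lengths(b, t, seen + (b,)):
                yield d + rest

def feasible(ys, yu, yv):
    # the five constraints of the simplified dual (y_t = 0)
    return (ys - yu <= 5 and ys - yv <= 8 and yu - yv <= 1
            and yu <= 6 and yv <= 2)

lengths = sorted(path_lengths('s', 't'))
print(lengths)                 # [8, 10, 11]
# weak duality: the feasible heights reached in the last iteration below
# give ys = 8, which lower-bounds every path length
assert feasible(8, 3, 2)
assert all(8 <= L for L in lengths)
```

The three s→t paths have lengths 8, 10, 11, so the dual optimum ys = 8 matches the shortest path length, as strong duality predicts.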

119 . Iteration 1 I
Dual feasible solution: y^T = (ys, yu, yv) = (0, 0, 0). Let's check the constraints in D:
ys − yu < 5 ⇒ x1 = 0
ys − yv < 8 ⇒ x2 = 0
yu − yv < 1 ⇒ x3 = 0
yu < 6 ⇒ x4 = 0
yv < 2 ⇒ x5 = 0
Identifying the tight constraints in D: J = ∅, implying that x1 = x2 = x3 = x4 = x5 = 0.
RP:
min s1 + s2 + s3
s.t. s1 + x1 + x2 = 1 (node s)
     s2 − x1 + x3 + x4 = 0 (node u)
     s3 − x2 − x3 + x5 = 0 (node v)
     s1, s2, s3 ≥ 0
     x1, x2, x3, x4, x5 = 0
119 / 183

120 . Iteration 1 II
DRP:
max ȳs
s.t. ȳs ≤ 1, ȳu ≤ 1, ȳv ≤ 1
Solve DRP using a combinatorial technique: optimal solution ȳ^T = (1, 0, 0). Note: the optimal solution is not unique.
Step length θ: θ = min{ (c1 − y^T a1)/(ȳ^T a1), (c2 − y^T a2)/(ȳ^T a2) } = min{5, 8} = 5.
Update y: y^T = y^T + θȳ^T = (5, 0, 0).
120 / 183

121 . Iteration 1 III
(Figure: heights ys = 5, yu = yv = yt = 0.)
From the point of view of Dijkstra's algorithm:
The optimal solution to DRP is ȳ^T = (1, 0, 0): the explored vertex set S = {s} in Dijkstra's algorithm. In fact, DRP is solved by identifying the nodes reachable from s via tight edges.
Step length θ = min{ (c1 − y^T a1)/(ȳ^T a1), (c2 − y^T a2)/(ȳ^T a2) } = min{5, 8} = 5: finding the vertex closest to the nodes in S by comparing all edges going out of S.
121 / 183

122 . Iteration 2 I
Dual feasible solution: y^T = (5, 0, 0). Let's check the constraints in D:
ys − yu = 5 (tight)
ys − yv < 8 ⇒ x2 = 0
yu − yv < 1 ⇒ x3 = 0
yu < 6 ⇒ x4 = 0
yv < 2 ⇒ x5 = 0
Identifying the tight constraints in D: J = {1}, implying that x2 = x3 = x4 = x5 = 0.
RP:
min s1 + s2 + s3
s.t. s1 + x1 + x2 = 1 (node s)
     s2 − x1 + x3 + x4 = 0 (node u)
     s3 − x2 − x3 + x5 = 0 (node v)
     s1, s2, s3 ≥ 0
     x2, x3, x4, x5 = 0
122 / 183

123 . Iteration 2 II
DRP:
max ȳs
s.t. ȳs − ȳu ≤ 0
     ȳs, ȳu, ȳv ≤ 1
Solve DRP using a combinatorial technique: optimal solution ȳ^T = (1, 1, 0). Note: the optimal solution is not unique.
Step length θ: θ = min{ (c2 − y^T a2)/(ȳ^T a2), (c3 − y^T a3)/(ȳ^T a3), (c4 − y^T a4)/(ȳ^T a4) } = min{3, 1, 6} = 1.
Update y: y^T = y^T + θȳ^T = (6, 1, 0).
123 / 183

124 . Iteration 2 III
(Figure: heights ys = 6, yu = 1, yv = yt = 0.)
From the point of view of Dijkstra's algorithm:
The optimal solution to DRP is ȳ^T = (1, 1, 0): the explored vertex set S = {s, u} in Dijkstra's algorithm. In fact, DRP is solved by identifying the nodes reachable from s via tight edges.
Step length θ = min{ (c2 − y^T a2)/(ȳ^T a2), (c3 − y^T a3)/(ȳ^T a3), (c4 − y^T a4)/(ȳ^T a4) } = min{3, 1, 6} = 1: finding the vertex closest to the nodes in S by comparing all edges going out of S.
124 / 183

125 . Iteration 3 I
Dual feasible solution: y^T = (6, 1, 0). Let's check the constraints in D:
ys − yu = 5 (tight)
ys − yv < 8 ⇒ x2 = 0
yu − yv = 1 (tight)
yu < 6 ⇒ x4 = 0
yv < 2 ⇒ x5 = 0
Identifying the tight constraints in D: J = {1, 3}, implying that x2 = x4 = x5 = 0.
RP:
min s1 + s2 + s3
s.t. s1 + x1 + x2 = 1 (node s)
     s2 − x1 + x3 + x4 = 0 (node u)
     s3 − x2 − x3 + x5 = 0 (node v)
     s1, s2, s3 ≥ 0
     x2, x4, x5 = 0
125 / 183

126 . Iteration 3 II
DRP:
max ȳs
s.t. ȳs − ȳu ≤ 0
     ȳu − ȳv ≤ 0
     ȳs, ȳu, ȳv ≤ 1
Solve DRP using a combinatorial technique: optimal solution ȳ^T = (1, 1, 1). Note: the optimal solution is not unique.
Step length θ: θ = min{ (c4 − y^T a4)/(ȳ^T a4), (c5 − y^T a5)/(ȳ^T a5) } = min{5, 2} = 2.
Update y: y^T = y^T + θȳ^T = (8, 3, 2).
126 / 183

127 . Iteration 3 III
(Figure: heights ys = 8, yu = 3, yv = 2, yt = 0.)
From the point of view of Dijkstra's algorithm:
The optimal solution to DRP is ȳ^T = (1, 1, 1): the explored vertex set S = {s, u, v} in Dijkstra's algorithm. In fact, DRP is solved by identifying the nodes reachable from s via tight edges.
127 / 183

128 . Iteration 3 IV
Step length θ = min{ (c4 − y^T a4)/(ȳ^T a4), (c5 − y^T a5)/(ȳ^T a5) } = min{5, 2} = 2: finding the vertex closest to the nodes in S by comparing all edges going out of S.
128 / 183

129 . Iteration 4 I
Dual feasible solution: y^T = (8, 3, 2). Let's check the constraints in D:
ys − yu = 5 (tight)
ys − yv < 8 ⇒ x2 = 0
yu − yv = 1 (tight)
yu < 6 ⇒ x4 = 0
yv = 2 (tight)
Identifying the tight constraints in D: J = {1, 3, 5}, implying that x2 = x4 = 0.
RP:
min s1 + s2 + s3
s.t. s1 + x1 + x2 = 1 (node s)
     s2 − x1 + x3 + x4 = 0 (node u)
     s3 − x2 − x3 + x5 = 0 (node v)
     s1, s2, s3 ≥ 0
     x2, x4 = 0
Now the tight edges x1 = (s, u), x3 = (u, v), x5 = (v, t) connect s to t, so RP has a feasible solution x1 = x3 = x5 = 1 with ϵ_OPT = 0: y is optimal, and the shortest path s → u → v → t has length ys = 8.
129 / 183


More information

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information

Interior Point Algorithms for Constrained Convex Optimization

Interior Point Algorithms for Constrained Convex Optimization Interior Point Algorithms for Constrained Convex Optimization Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Inequality constrained minimization problems

More information

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

Homework Set #6 - Solutions

Homework Set #6 - Solutions EE 15 - Applications of Convex Optimization in Signal Processing and Communications Dr Andre Tkacenko JPL Third Term 11-1 Homework Set #6 - Solutions 1 a The feasible set is the interval [ 4] The unique

More information

12. Interior-point methods

12. Interior-point methods 12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity

More information

Math 5593 Linear Programming Week 1

Math 5593 Linear Programming Week 1 University of Colorado Denver, Fall 2013, Prof. Engau 1 Problem-Solving in Operations Research 2 Brief History of Linear Programming 3 Review of Basic Linear Algebra Linear Programming - The Story About

More information

Lagrangian Duality and Convex Optimization

Lagrangian Duality and Convex Optimization Lagrangian Duality and Convex Optimization David Rosenberg New York University February 11, 2015 David Rosenberg (New York University) DS-GA 1003 February 11, 2015 1 / 24 Introduction Why Convex Optimization?

More information

Linear Programming. Leo Liberti. LIX, École Polytechnique. Operations research courses / LP theory p.

Linear Programming. Leo Liberti. LIX, École Polytechnique. Operations research courses / LP theory p. Operations research courses / LP theory p. 1/47 Linear Programming Leo Liberti LIX, École Polytechnique liberti@lix.polytechnique.fr Operations research courses / LP theory p. 2/47 Contents LP formulations

More information

subject to (x 2)(x 4) u,

subject to (x 2)(x 4) u, Exercises Basic definitions 5.1 A simple example. Consider the optimization problem with variable x R. minimize x 2 + 1 subject to (x 2)(x 4) 0, (a) Analysis of primal problem. Give the feasible set, the

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

Algorithmic Game Theory and Applications. Lecture 7: The LP Duality Theorem

Algorithmic Game Theory and Applications. Lecture 7: The LP Duality Theorem Algorithmic Game Theory and Applications Lecture 7: The LP Duality Theorem Kousha Etessami recall LP s in Primal Form 1 Maximize c 1 x 1 + c 2 x 2 +... + c n x n a 1,1 x 1 + a 1,2 x 2 +... + a 1,n x n

More information

Lagrangian Duality Theory

Lagrangian Duality Theory Lagrangian Duality Theory Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye Chapter 14.1-4 1 Recall Primal and Dual

More information

Lagrange Relaxation and Duality

Lagrange Relaxation and Duality Lagrange Relaxation and Duality As we have already known, constrained optimization problems are harder to solve than unconstrained problems. By relaxation we can solve a more difficult problem by a simpler

More information

Generality of Languages

Generality of Languages Increasing generality Decreasing efficiency Generality of Languages Linear Programs Reduction Algorithm Simplex Algorithm, Interior Point Algorithms Min Cost Flow Algorithm Klein s algorithm Reduction

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization

Spring 2017 CO 250 Course Notes TABLE OF CONTENTS. richardwu.ca. CO 250 Course Notes. Introduction to Optimization Spring 2017 CO 250 Course Notes TABLE OF CONTENTS richardwu.ca CO 250 Course Notes Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4, 2018 Table

More information

Support Vector Machines: Maximum Margin Classifiers

Support Vector Machines: Maximum Margin Classifiers Support Vector Machines: Maximum Margin Classifiers Machine Learning and Pattern Recognition: September 16, 2008 Piotr Mirowski Based on slides by Sumit Chopra and Fu-Jie Huang 1 Outline What is behind

More information

Nonlinear Programming and the Kuhn-Tucker Conditions

Nonlinear Programming and the Kuhn-Tucker Conditions Nonlinear Programming and the Kuhn-Tucker Conditions The Kuhn-Tucker (KT) conditions are first-order conditions for constrained optimization problems, a generalization of the first-order conditions we

More information

Interior-Point Methods for Linear Optimization

Interior-Point Methods for Linear Optimization Interior-Point Methods for Linear Optimization Robert M. Freund and Jorge Vera March, 204 c 204 Robert M. Freund and Jorge Vera. All rights reserved. Linear Optimization with a Logarithmic Barrier Function

More information

CSCI 1951-G Optimization Methods in Finance Part 09: Interior Point Methods

CSCI 1951-G Optimization Methods in Finance Part 09: Interior Point Methods CSCI 1951-G Optimization Methods in Finance Part 09: Interior Point Methods March 23, 2018 1 / 35 This material is covered in S. Boyd, L. Vandenberge s book Convex Optimization https://web.stanford.edu/~boyd/cvxbook/.

More information

Tutorial on Convex Optimization: Part II

Tutorial on Convex Optimization: Part II Tutorial on Convex Optimization: Part II Dr. Khaled Ardah Communications Research Laboratory TU Ilmenau Dec. 18, 2018 Outline Convex Optimization Review Lagrangian Duality Applications Optimal Power Allocation

More information

Chap 2. Optimality conditions

Chap 2. Optimality conditions Chap 2. Optimality conditions Version: 29-09-2012 2.1 Optimality conditions in unconstrained optimization Recall the definitions of global, local minimizer. Geometry of minimization Consider for f C 1

More information

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs LP-Duality ( Approximation Algorithms by V. Vazirani, Chapter 12) - Well-characterized problems, min-max relations, approximate certificates - LP problems in the standard form, primal and dual linear programs

More information

The CPLEX Library: Mixed Integer Programming

The CPLEX Library: Mixed Integer Programming The CPLEX Library: Mixed Programming Ed Rothberg, ILOG, Inc. 1 The Diet Problem Revisited Nutritional values Bob considered the following foods: Food Serving Size Energy (kcal) Protein (g) Calcium (mg)

More information

Karush-Kuhn-Tucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36-725

Karush-Kuhn-Tucker Conditions. Lecturer: Ryan Tibshirani Convex Optimization /36-725 Karush-Kuhn-Tucker Conditions Lecturer: Ryan Tibshirani Convex Optimization 10-725/36-725 1 Given a minimization problem Last time: duality min x subject to f(x) h i (x) 0, i = 1,... m l j (x) = 0, j =

More information

Lagrangian Duality. Richard Lusby. Department of Management Engineering Technical University of Denmark

Lagrangian Duality. Richard Lusby. Department of Management Engineering Technical University of Denmark Lagrangian Duality Richard Lusby Department of Management Engineering Technical University of Denmark Today s Topics (jg Lagrange Multipliers Lagrangian Relaxation Lagrangian Duality R Lusby (42111) Lagrangian

More information

Summary of the simplex method

Summary of the simplex method MVE165/MMG631,Linear and integer optimization with applications The simplex method: degeneracy; unbounded solutions; starting solutions; infeasibility; alternative optimal solutions Ann-Brith Strömberg

More information

3 The Simplex Method. 3.1 Basic Solutions

3 The Simplex Method. 3.1 Basic Solutions 3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,

More information

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014 Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,

More information

Linear & nonlinear classifiers

Linear & nonlinear classifiers Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table

More information

The Lagrangian L : R d R m R r R is an (easier to optimize) lower bound on the original problem:

The Lagrangian L : R d R m R r R is an (easier to optimize) lower bound on the original problem: HT05: SC4 Statistical Data Mining and Machine Learning Dino Sejdinovic Department of Statistics Oxford Convex Optimization and slides based on Arthur Gretton s Advanced Topics in Machine Learning course

More information

Linear Programming. (Com S 477/577 Notes) Yan-Bin Jia. Nov 28, 2017

Linear Programming. (Com S 477/577 Notes) Yan-Bin Jia. Nov 28, 2017 Linear Programming (Com S 4/ Notes) Yan-Bin Jia Nov 8, Introduction Many problems can be formulated as maximizing or minimizing an objective in the form of a linear function given a set of linear constraints

More information

2.3 Linear Programming

2.3 Linear Programming 2.3 Linear Programming Linear Programming (LP) is the term used to define a wide range of optimization problems in which the objective function is linear in the unknown variables and the constraints are

More information

1. f(β) 0 (that is, β is a feasible point for the constraints)

1. f(β) 0 (that is, β is a feasible point for the constraints) xvi 2. The lasso for linear models 2.10 Bibliographic notes Appendix Convex optimization with constraints In this Appendix we present an overview of convex optimization concepts that are particularly useful

More information

Lecture 3: Semidefinite Programming

Lecture 3: Semidefinite Programming Lecture 3: Semidefinite Programming Lecture Outline Part I: Semidefinite programming, examples, canonical form, and duality Part II: Strong Duality Failure Examples Part III: Conditions for strong duality

More information

CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017

CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017 CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017 Linear Function f: R n R is linear if it can be written as f x = a T x for some a R n Example: f x 1, x 2 =

More information

Section Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints.

Section Notes 8. Integer Programming II. Applied Math 121. Week of April 5, expand your knowledge of big M s and logical constraints. Section Notes 8 Integer Programming II Applied Math 121 Week of April 5, 2010 Goals for the week understand IP relaxations be able to determine the relative strength of formulations understand the branch

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

Farkas Lemma, Dual Simplex and Sensitivity Analysis

Farkas Lemma, Dual Simplex and Sensitivity Analysis Summer 2011 Optimization I Lecture 10 Farkas Lemma, Dual Simplex and Sensitivity Analysis 1 Farkas Lemma Theorem 1. Let A R m n, b R m. Then exactly one of the following two alternatives is true: (i) x

More information

Applied Lagrange Duality for Constrained Optimization

Applied Lagrange Duality for Constrained Optimization Applied Lagrange Duality for Constrained Optimization February 12, 2002 Overview The Practical Importance of Duality ffl Review of Convexity ffl A Separating Hyperplane Theorem ffl Definition of the Dual

More information

Primal-dual Subgradient Method for Convex Problems with Functional Constraints

Primal-dual Subgradient Method for Convex Problems with Functional Constraints Primal-dual Subgradient Method for Convex Problems with Functional Constraints Yurii Nesterov, CORE/INMA (UCL) Workshop on embedded optimization EMBOPT2014 September 9, 2014 (Lucca) Yu. Nesterov Primal-dual

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

4. The Dual Simplex Method

4. The Dual Simplex Method 4. The Dual Simplex Method Javier Larrosa Albert Oliveras Enric Rodríguez-Carbonell Problem Solving and Constraint Programming (RPAR) Session 4 p.1/34 Basic Idea (1) Algorithm as explained so far known

More information

Appendix A Taylor Approximations and Definite Matrices

Appendix A Taylor Approximations and Definite Matrices Appendix A Taylor Approximations and Definite Matrices Taylor approximations provide an easy way to approximate a function as a polynomial, using the derivatives of the function. We know, from elementary

More information