Duality in Linear Programming


Gary D. Knott, Civilized Software Inc., Heritage Park Circle, Silver Spring, MD

Consider the following standard primal linear programming problem. (Here we impose n ≥ 1 and m ≥ 1. Also note that in this section the symbol c is not reserved to be an (n+m)-covector; it may have differing numbers of components in differing contexts.)

Determine x ∈ R^n to maximize c^T x subject to A^o x ≤ b and x ≥ 0, where A^o is an m × n matrix, c ∈ R^n, and b ∈ R^m.

Now note that if y ≥ 0 with y ∈ R^m, then A^o x ≤ b implies y^T A^o x ≤ y^T b. This is because y^T A^o x ≤ y^T b is just the sum of the non-negatively-scaled inequalities y_i (A^o row i, x) ≤ y_i b_i, i.e., Σ_{1≤i≤m} y_i (A^o row i, x) ≤ Σ_{1≤i≤m} y_i b_i.

Now let P = {x ∈ R^n | A^o x ≤ b and x ≥ 0}. Suppose x is a feasible point for our primal linear programming problem, so x ∈ P, i.e., A^o x ≤ b and x ≥ 0. Also suppose we can choose y_1, ..., y_m defining an m-covector y such that (y^T A^o)_i ≥ c_i for i = 1, ..., n and y_i ≥ 0 for i = 1, ..., m. Then c^T x ≤ y^T A^o x, and we have y^T A^o x ≤ y^T b, so altogether we have c^T x ≤ y^T A^o x ≤ y^T b.

We see that y^T A^o x and y^T b are both upper bounds of our primal objective function ζ(x) = c^T x for admissible choices of x and y, and y^T b is independent of x, so for any admissible choice of y, y^T b is an upper bound of c^T x for all feasible points x ∈ P.

Exercise 1: Why must we assume both A^o x ≤ b and x ≥ 0 to conclude that c^T x ≤ y^T A^o x ≤ y^T b?

If we want to make y^T b = b^T y a close upper bound for c^T x, we will want to choose y, subject to y^T A^o ≥ c^T and y ≥ 0, to make b^T y as small as possible. This is just a linear programming problem: determine y ∈ R^m to minimize b^T y subject to (A^o)^T y ≥ c and y ≥ 0, where A^o is an m × n matrix, c ∈ R^n, and b ∈ R^m.
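The bounding chain above is easy to check numerically. The following minimal Python sketch uses a small made-up instance (the particular A^o, b, c, x, and y below are illustrative assumptions, not data from the text) to exhibit c^T x ≤ y^T A^o x ≤ y^T b:

    import numpy as np

    # A small hypothetical instance: maximize c^T x subject to A0 x <= b, x >= 0.
    A0 = np.array([[2.0, 1.0],
                   [1.0, 3.0]])
    b = np.array([8.0, 9.0])
    c = np.array([3.0, 2.0])

    x = np.array([1.0, 2.0])   # a primal-feasible point: A0 x <= b and x >= 0
    y = np.array([2.0, 1.0])   # an admissible multiplier: A0^T y >= c and y >= 0

    assert np.all(A0 @ x <= b) and np.all(x >= 0)      # x lies in P
    assert np.all(A0.T @ y >= c) and np.all(y >= 0)    # y is admissible

    # The chain c^T x <= y^T A0 x <= y^T b:
    print(c @ x, y @ (A0 @ x), y @ b)                  # prints 7.0 15.0 25.0

Note that y^T b = 25 bounds c^T x for every feasible x, not merely for the particular x chosen here.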

This linear programming problem is called the standard dual problem associated with our standard primal linear programming problem. If this dual problem has a feasible point then it has an optimal point. Moreover, for every feasible point y of our dual problem, we have c^T x ≤ b^T y for all feasible points x of the corresponding primal problem. (Note if this dual problem is non-empty, i.e., has a feasible point, and the only optimal points for this dual problem are points at infinity, then the value of the objective function b^T y will be −∞ at such an optimal point and the primal problem can have no solution. This is because b^T y is minimized when y is an optimal point.)

Exercise 2: Suppose our dual problem has a feasible point. Why must b^T ŷ = −∞ when all the optimal points of our dual problem are points at infinity? Hint: consider how we might go from a finite feasible point to a point-at-infinity optimal point.

Exercise 3: If b = 0, is the standard dual problem guaranteed to have a solution?

Exercise 4: Show that the dual problem of the linear programming problem: determine y ∈ R^m to minimize b^T y subject to (A^o)^T y ≥ c and y ≥ 0, where A^o is an m × n matrix, c ∈ R^n, and b ∈ R^m, is just our original primal programming problem: determine x ∈ R^n to maximize c^T x subject to A^o x ≤ b and x ≥ 0.

Solution 4: Constructing the dual problem of a given primal linear programming problem is a syntactic process. When we state a given linear programming problem in the standard form: determine x ∈ R^n to maximize c^T x subject to A^o x ≤ b and x ≥ 0, where A^o is an m × n matrix, c ∈ R^n, and b ∈ R^m, then the dual problem is: determine y ∈ R^m to minimize b^T y subject to (A^o)^T y ≥ c and y ≥ 0. This dual problem is equivalent to the standard-form problem: determine y ∈ R^m to maximize −b^T y subject to (−A^o)^T y ≤ −c and y ≥ 0.

Syntactically, we replace c^T with −b^T, b with −c, and A^o with (−A^o)^T in our standard primal linear programming problem in order to construct its dual problem in standard form. In array form, the array

    [ A^o  b ]
    [ c^T    ]

describing our primal problem becomes the array

    [ (−A^o)^T  −c ]
    [ −b^T         ]

describing our dual problem. This is sometimes summarized by saying the dual problem is the negative transpose of the primal problem.

Now note that the negative transpose of

    [ (−A^o)^T  −c ]
    [ −b^T         ]

is just

    [ A^o  b ]
    [ c^T    ]

again. Thus the dual of the dual of our primal problem is our primal problem itself.
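Since the construction is purely syntactic, it is expressible in a few lines of code. Here is a minimal sketch (the numeric entries are illustrative assumptions) verifying that taking the negative transpose twice recovers the primal array:

    import numpy as np

    def dual_array(lp_array):
        # Negative transpose of a standard-form LP array [[A0, b], [c^T, 0]].
        # The input encodes: maximize c^T x subject to A0 x <= b, x >= 0.
        # The output encodes the dual, again in standard form:
        # maximize (-b)^T y subject to (-A0)^T y <= -c, y >= 0.
        return -lp_array.T

    A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([8.0, 9.0])
    c = np.array([3.0, 2.0])

    primal = np.block([[A0, b[:, None]],
                       [c[None, :], np.zeros((1, 1))]])
    dual = dual_array(primal)

    # The dual of the dual of our primal problem is the primal problem itself.
    assert np.array_equal(dual_array(dual), primal)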

Exercise 5: Show that the dual problem of the linear programming problem: determine x ∈ R^k to maximize c^T x subject to Ax = b and x ≥ 0, where A is an m × k matrix, c ∈ R^k, and b ∈ R^m, is the problem: determine w ∈ R^m to minimize b^T w subject to A^T w ≥ c. Note c and x are here k-covectors. This fluidity in the meanings of the symbols c and x is a potential source of confusion herein; beware. (We shall sometimes write c^o to indicate c^o ∈ R^n, but this is not uniformly done because it is often counterproductive.)

Solution 5: Put this problem in standard form by writing Ax = b as

    [ A  ]       [ b  ]
    [ −A ] x ≤  [ −b ].

Then the primal problem array is

    [ A    b  ]
    [ −A   −b ]
    [ c^T     ]

and the corresponding negative-transpose array for the dual problem is

    [ −A^T  A^T  −c ]
    [ −b^T  b^T     ]

and this specifies the dual problem as: determine [y_1; y_2] to maximize −b^T y_1 + b^T y_2 subject to −A^T y_1 + A^T y_2 ≤ −c and [y_1; y_2] ≥ 0, where y_1 and y_2 are m-covectors. And this problem can be restated as: determine w to minimize b^T w subject to A^T w ≥ c. This is because max_{y_1, y_2} (−b^T y_1 + b^T y_2) = max_{y_1, y_2} −b^T (y_1 − y_2) = max_w −b^T w = −min_w b^T w, where w = y_1 − y_2. And −A^T y_1 + A^T y_2 ≤ −c with y_1 ≥ 0 and y_2 ≥ 0 is equivalent to A^T (y_1 − y_2) ≥ c with y_1 ≥ 0 and y_2 ≥ 0, which is equivalent to A^T w ≥ c with w ∈ R^m otherwise unrestricted. (Given w, we may determine y_1 and y_2 as follows. Take (y_1)_i = w_i when w_i ≥ 0 and take (y_1)_i = 0 otherwise. And take (y_2)_i = −w_i when w_i < 0 and take (y_2)_i = 0 otherwise.)

Note if k = n + m and A = [A^o I] where A^o is an m × n matrix and c = [c^o; 0] with c^o ∈ R^n, then this dual problem becomes: determine w ∈ R^m to minimize b^T w subject to (A^o)^T w ≥ c^o and w ≥ 0.

Exercise 6: Show that the linear programming problem: determine w to minimize b^T w subject to A^T w ≥ c is equivalent to the linear programming problem: determine z to maximize −b^T z subject to A^T z ≥ c.

Exercise 7: Show that the dual problem of the slack-variable form of our primal linear programming problem is equivalent to the standard problem dual to our standard primal problem.

Solution 7: Put the slack-variable form of our primal problem in standard form as: determine [x^o; t] to maximize

    [ (c^o)^T  0^T ] [ x^o ]
                     [ t   ]

subject to

    [ A^o   I  ] [ x^o ]     [ b  ]
    [ −A^o  −I ] [ t   ] ≤  [ −b ]

and [x^o; t] ≥ 0. Here t represents the slack variables. Then the corresponding array representation is

    [ A^o      I    b  ]
    [ −A^o     −I   −b ]
    [ (c^o)^T  0^T     ]

and the corresponding negative-transpose array is

    [ (−A^o)^T  (A^o)^T  −c^o ]
    [ −I        I        0    ]
    [ −b^T      b^T           ]

and this specifies the dual problem: determine [y_1; y_2] to maximize −b^T y_1 + b^T y_2 subject to

    [ (−A^o)^T  (A^o)^T ] [ y_1 ]     [ −c^o ]
    [ −I        I       ] [ y_2 ] ≤  [ 0    ]

and [y_1; y_2] ≥ 0. And this problem is equivalent to: determine [y_1; y_2] to maximize −b^T (y_1 − y_2) subject to (−A^o)^T (y_1 − y_2) ≤ −c^o and −y_1 + y_2 ≤ 0 and y_1 ≥ 0 and y_2 ≥ 0, which is in turn equivalent to: determine w to maximize −b^T w subject to (−A^o)^T w ≤ −c^o and w ≥ 0, where w = y_1 − y_2, and finally, this is equivalent to: determine w to minimize b^T w subject to (A^o)^T w ≥ c^o and w ≥ 0.

Note the array form we used to introduce the negative-transpose dual is not the form we use in the tableau representation of the simplex algorithm. This is because we require

    [ A^o  b ]
    [ c^T    ]

to transform to

    [ (−A^o)^T  −c ]
    [ −b^T         ]

corresponding to (−A^o)^T y ≤ −c, while we need c^T to transform to −b^T; if we were to change the sign of c, we would need to change the sign of b also.

The Strong Duality Theorem

Let P be the polyhedron of our primal linear programming problem: determine x ∈ R^n to maximize c^T x subject to A^o x ≤ b and x ≥ 0, where A^o is an m × n matrix, c ∈ R^n, and b ∈ R^m, and let Q be the polyhedron of the associated dual linear programming problem: determine y ∈ R^m to minimize b^T y subject to (A^o)^T y ≥ c and y ≥ 0. Thus P = {x ∈ R^n | A^o x ≤ b and x ≥ 0} and Q = {y ∈ R^m | (A^o)^T y ≥ c and y ≥ 0}.

We saw above that if x ∈ P and y ∈ Q, then c^T x ≤ b^T y. This is called the weak-duality theorem. Thus if c^T x = b^T y with x ∈ P and y ∈ Q, then x must be an optimal point of P that maximizes c^T x and y must be an optimal point of Q that minimizes b^T y. This is because b^T y ≥ max{c^T v | v ∈ P}. And when b^T y = c^T x, b^T y is the least possible upper bound of {c^T v | v ∈ P}, and thus b^T y = c^T x = max{c^T v | v ∈ P}, so x must be a solution of our primal linear programming problem. Similarly, c^T x is the greatest possible lower bound of {b^T w | w ∈ Q}, and thus y must be a solution of our dual linear programming problem when c^T x = b^T y.

Also we have the following corollary of the weak duality theorem. If the polyhedron P of our primal linear programming problem contains a point at infinity x̂ for which the objective function value c^T x̂ = ∞, then, since b^T y ≥ c^T x̂ whenever y ∈ Q, b^T y must also be ∞ for all y ∈ Q. But this is impossible if Q is non-empty, since Q then contains a finite feasible point y, i.e., Q cannot be composed entirely of points at infinity, and then the dual objective function value b^T y is finite and the minimal value of b^T y must be no greater than b^T y. Hence Q must be empty. Thus if our primal linear programming problem is unbounded with an unbounded objective function value, then the dual linear programming problem must be empty, i.e., Q = ∅. Conversely, if Q is non-empty, our primal problem must have a finite optimal point whenever it has a feasible point at all.

Similarly, if the polyhedron Q of our dual linear programming problem contains a point at infinity ŷ as an optimal point for which the objective function value b^T ŷ = −∞, then the primal linear

programming problem must be empty, i.e., P = ∅. And conversely, if P is non-empty, our dual problem must have a finite optimal point whenever it has a feasible point at all. Remember a linear programming problem may have an associated unbounded polyhedron without its objective function being ±∞, but if the objective function is ±∞ then the associated polyhedron is necessarily unbounded.

If either our primal or dual linear programming problem has a finite optimal point, then so does the other problem. And in fact, the values of their respective objective functions must be identical! This fact is called the strong-duality theorem.

Exercise 8: Give an example where P is unbounded but the dual problem is non-empty and has a finite optimal point.

Exercise 9: Give an example where P = ∅ and Q = ∅.

Solution 9: primal: determine x to maximize x_1 subject to

    [ 1   −1 ]       [ −1 ]
    [ −1  1  ] x ≤  [ −1 ]

and x ≥ 0. dual: determine y to minimize −y_1 − y_2 subject to

    [ 1   −1 ]       [ 1 ]
    [ −1  1  ] y ≥  [ 0 ]

and y ≥ 0. (Summing the two rows of each constraint system yields 0 ≤ −2 and 0 ≥ 1 respectively, so both P and Q are empty.)

Altogether we have the following possibilities for our standard-form primal and dual problems.

1. Our primal problem has a finite optimal point x̂ and our dual problem has a finite optimal point ŷ and c^T x̂ = b^T ŷ. Also, if one problem has a face of finite solutions of dimension greater than 0, then the finite solution of the other problem is an overdetermined vertex.

2. Our primal problem has a point-at-infinity optimal point x̂ and c^T x̂ = ∞, and our dual problem has no solution (Q is empty).

3. Our dual problem has a point-at-infinity optimal point ŷ and b^T ŷ = −∞, and our primal problem has no solution (P is empty).

4. Neither our primal problem nor our dual problem has a solution (P is empty and Q is empty).

Recall we have a criterion that specifies exactly when P = {x ∈ R^n | A^o x ≤ b and x ≥ 0} is empty, namely: P is empty exactly when b ∉ K = {p ∈ R^m | p = [A^o I] x and x ≥ 0}, which is equivalent to the criterion: there exists y ∈ R^m such that [A^o I]^T y ≥ 0 and b^T y < 0.

The possibilities above are somewhat subtle when P or Q is unbounded. For example, consider the primal problem: find x ∈ R^2 to maximize c^T x subject to x ∈ P where P = {x ∈ R^2 | A^o x ≤ b and x ≥ 0} with

    A^o = [ −1  0  ]   and   b = [ 0 ],
          [ 0   −1 ]             [ 0 ]

i.e., P = {x ∈ R^2 | −x_1 ≤ 0 and −x_2 ≤ 0 and x ≥ 0}, the non-negative orthant. Note b = 0 means P is a cone with apex 0. The dual problem is: find y ∈ R^2 to minimize 0^T y subject to y ∈ Q where

    Q = { y ∈ R^2 | [ −1  0  ] y ≥ [ c_1 ]  and  y ≥ 0 }.
                    [ 0   −1 ]     [ c_2 ]

In the case where c = 0, all the points in P, including points at infinity, are optimal points, but c^T x = 0 for x ∈ P; and the polyhedron Q is the singleton set {(0, 0)^T}, so the solution of the dual problem is the finite point ŷ = (0, 0)^T = 0. Also note (0, 0)^T is an overdetermined vertex of Q. (And (0, 0)^T is an overdetermined vertex of P as well.)

In the case where c = (1, 0)^T, we have the set of points {(∞, x_2)^T | x_2 ≥ 0} ⊆ P as optimal points of our primal problem with c^T x = ∞ for x ∈ {(∞, x_2)^T | x_2 ≥ 0}; all these optimal points are points at infinity. And the dual problem has Q = ∅.

In the case where c = (−1, 0)^T, we have the set of points {(0, x_2)^T | x_2 ≥ 0} ⊆ P as optimal points of our primal problem with c^T x = 0 for x ∈ {(0, x_2)^T | x_2 ≥ 0}; all these optimal points are finite R^2-points. And the dual problem has every point of Q as an optimal point, where Q = {y ∈ R^2 | 0 ≤ y_1 ≤ 1 and y_2 = 0} and b^T y = 0^T y = 0 for y ∈ Q. Note Q is bounded.

Exercise 10: With respect to the example discussed above, what is the set Q, and the set of solutions of our primal problem, and the set of solutions of our dual problem, when c^T = (−1, 1)? What are the solution sets when c^T = (−1, −1)?

The fact that, if x̂ ∈ P is a finite optimal point of our standard-form primal problem then there exists ŷ ∈ Q such that ŷ is a finite optimal point of our standard-form dual problem, and conversely, and moreover c^T x̂ = b^T ŷ, can be proven as follows.

Suppose x̂ is a finite optimal point of our primal linear programming problem with x̂ a vertex of P. Then the gradient covector c belongs to the normal cone of x̂ with respect to P. Recall that P = H_1 ∩ ⋯ ∩ H_n ∩ H_{n+1} ∩ ⋯ ∩ H_{n+m} where H_i = {x ∈ R^n | (−e_i^T, x) ≤ 0} for i = 1, ..., n, and H_{n+j} = {x ∈ R^n | (a_j^T, x) ≤ b_j} for j = 1, ..., m, where a_j = A^o row j. Here −e_1^T, ..., −e_n^T, a_1^T, ..., a_m^T are all outwardly-directed normal vectors of the half-spaces whose intersection defines the polyhedron P. Thus

    P = { x ∈ R^n | [ A^o ] x ≤ [ b ] }.
                    [ −I  ]      [ 0 ]

Let H_{i_1}, ..., H_{i_q} be the half-spaces whose boundaries contain the optimal vertex x̂; this means the bounding hyperplanes ∂H_{i_1}, ..., ∂H_{i_q} intersect at the vertex x̂. Let v_1, ..., v_q denote the outwardly-directed normal vectors of the half-spaces H_{i_1}, ..., H_{i_q}. Because ∂H_{i_1} ∩ ⋯ ∩ ∂H_{i_q} = {x̂}, we have q ≥ n and dim({v_1, ..., v_q}) = n.

The normal cone at x̂ with respect to P is the polyhedral cone N_x̂ = cone(v_1, ..., v_q), and we have seen that c ∈ N_x̂ because x̂ is an optimal vertex of P that maximizes (c, x). Thus c is a conic-combination of the covectors v_1, ..., v_q. (Remember a conic-combination is a linear combination of vectors (or covectors) with non-negative scalar coefficients.) The covectors v_1, ..., v_q all appear as columns of the n × (n+m) matrix [(A^o)^T  −I]. (Recall N_x̂ is just the polar dual cone of the 0-apex cone (H_{i_1} ∩ H_{i_2} ∩ ⋯ ∩ H_{i_q}) − x̂.)

Carathéodory's theorem for polyhedral cones asserts that we can write c as a conic-combination of

a subset of n of the covectors v_1, ..., v_q. Thus we can write

    c = [ (A^o)^T  −I ] [ y ]
                        [ z ]

where y ∈ R^m and z ∈ R^n with y ≥ 0 and z ≥ 0, and at most n of the components of the [y; z] covector are positive and the rest are zero.

Exercise 11: Show that the columns of the matrix [(A^o)^T  −I] are the outwardly-directed normal covectors a_1^T, ..., a_m^T, −e_1^T, ..., −e_n^T.

Thus c = (A^o)^T y − z, and since y is a non-negative m-covector and z is a non-negative n-covector, we have c ≤ (A^o)^T y and y ≥ 0. Note this is valid in the special case where c = 0.

Exercise 12: Explain why the relations c ≤ (A^o)^T y and y ≥ 0 are valid when P = {0}. What if an entire facet of P is comprised of optimal points of P?

Exercise 13: Show that if x̂ ≠ 0, then at least one facet of P containing x̂ lies in a non-orthant-boundary hyperplane among ∂H_{n+1}, ..., ∂H_{n+m}, i.e., at least one of the inequalities A^o x̂ ≤ b is tight and at least one component of y is positive.

The relations c ≤ (A^o)^T y and y ≥ 0 mean y ∈ Q (!) So our dual problem has y as a feasible point and therefore our dual problem is not empty and hence has an optimal point ŷ with b^T ŷ ≤ b^T y. And c^T x̂ ≤ b^T ŷ by the weak duality theorem, so b^T ŷ > −∞ and hence ŷ is finite when x̂ is finite.

Suppose ŷ is a finite optimal point of our dual linear programming problem with ŷ a vertex of Q. Then the dual problem gradient covector −b belongs to the normal cone of ŷ with respect to Q. Since

    Q = { y ∈ R^m | [ −(A^o)^T ] y ≤ [ −c ] }
                    [ −I       ]      [ 0  ]

and the gradient vector of our dual problem, recast as a maximization, is −b, we may follow the same line of argument as used above to characterize −b ∈ N_ŷ by writing

    −b = [ −A^o  −I ] [ x ]
                      [ v ]

where x ∈ R^n and v ∈ R^m with x ≥ 0 and v ≥ 0, and at most m of the components of the [x; v] covector are positive and the rest are zero. Here the columns of [−A^o  −I] are outward-directed normals of the half-space boundaries whose intersection defines ŷ. Thus −b = −A^o x − v where x ≥ 0 and v ≥ 0, so b = A^o x + v with x ≥ 0 and v ≥ 0, and thus A^o x ≤ b with x ≥ 0.

The relations A^o x ≤ b and x ≥ 0 mean x ∈ P, so our primal problem has x as a feasible point and therefore our primal problem is not empty and hence also has an optimal point x̂. And we have c^T x̂ ≤ b^T ŷ by the weak duality theorem, so c^T x̂ < ∞ and hence x̂ is finite when ŷ is finite.

Now let us return to the representation of the gradient covector c as a conic combination of n of the outwardly-directed covectors −e_1^T, ..., −e_n^T, a_1^T, ..., a_m^T where a_j = A^o row j. Let j_1, ..., j_n be the indices in {1, ..., n+m} of the hyperplanes ∂H_{j_1}, ..., ∂H_{j_n} from among the hyperplanes

∂H_1, ..., ∂H_{n+m} such that ∂H_{j_1}, ..., ∂H_{j_n} are linearly-independent hyperplanes that intersect at {x̂}. The corresponding constraints are tight at x̂ and hence we may choose the covector [y; z] so that zero or more of the non-negative coefficients in the covector [y; z] that multiply the outwardly-directed normals of the hyperplanes ∂H_{j_1}, ..., ∂H_{j_n} in the columns of [(A^o)^T  −I] will be positive, and all the other components of [y; z] will be zero.

Exercise 14: Show that {j_1, ..., j_n} ⊆ {i_1, ..., i_q}.

Exercise 15: Explain why we said "zero or more" above.

Let p = [y; z], so that p row 1:m = y and p row (m+1):(m+n) = z. Let t = b − A^o x̂. Define the sequence J = j_1, ..., j_n and define the sequence K = ⟨1, ..., n+m⟩ − J. (The ordering of the indices in J and K is not important.) We have p row J ≥ 0 and p row K = 0.

The hyperplane normals indexed by K include all the slack constraints. Thus [(A^o)^T  −I] col K contains all the columns of [(A^o)^T  −I] where (A^o row i)^T x̂ < b_i for 1 ≤ i ≤ m or −e_i^T x̂ < 0 for 1 ≤ i ≤ n. The components of p corresponding to these slack columns lie in p row K and are thus all zero. The non-zero components of p all lie in p row J and correspond to tight columns of [(A^o)^T  −I] where (A^o row i, x̂) = b_i or x̂_i = 0. This means

    [ (b − A^o x̂)^T  x̂^T ] p = 0 !

That is,

    [ (b − A^o x̂)^T  x̂^T ] [ y ] = 0.
                            [ z ]

Thus (b − A^o x̂)^T y + x̂^T z = 0.

Exercise 16: Show that t^T y + x̂^T z = 0 where t = b − A^o x̂, and hence t^T y = 0 and x̂^T z = 0.

Recall we have c = (A^o)^T y − z. Thus, using t^T y = 0 and x̂^T z = 0, b^T y = x̂^T (A^o)^T y − x̂^T z = x̂^T ((A^o)^T y − z) = x̂^T c = c^T x̂. Therefore, as we have seen above, y must be an optimal point of our dual problem and we have b^T ŷ = c^T x̂ where ŷ = y. The conclusion that y is an optimal point of Q is a consequence of the corollary of the weak duality theorem discussed above.

Similarly, we may follow the line of reasoning parallel to that used above to establish the identity

    [ ((A^o)^T ŷ − c)^T  ŷ^T ] [ x ] = 0.
                               [ v ]

Thus ŷ^T A^o x − c^T x + ŷ^T v = 0. But then c^T x = ŷ^T (A^o x + v) = ŷ^T b = b^T ŷ since we showed that v + A^o x = b above. Therefore x must be an optimal point of our primal problem and again we have c^T x̂ = b^T ŷ where x̂ = x. (We also have t = v because x = x̂.) QED
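The equality c^T x̂ = b^T ŷ, together with the identities t^T y = 0 and x̂^T z = 0 obtained in the proof, can be observed with any LP solver. A minimal sketch using scipy.optimize.linprog on a small made-up instance (linprog minimizes, so the primal objective is negated):

    import numpy as np
    from scipy.optimize import linprog

    A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([8.0, 9.0])
    c = np.array([3.0, 2.0])

    # Primal: maximize c^T x subject to A0 x <= b, x >= 0.
    primal = linprog(-c, A_ub=A0, b_ub=b, bounds=[(0, None)] * 2)
    # Dual: minimize b^T y subject to A0^T y >= c, y >= 0, recast as -A0^T y <= -c.
    dual = linprog(b, A_ub=-A0.T, b_ub=-c, bounds=[(0, None)] * 2)

    x_hat, y_hat = primal.x, dual.x
    assert np.isclose(-primal.fun, dual.fun)   # strong duality: c^T x_hat = b^T y_hat

    t = b - A0 @ x_hat                         # primal slack variables
    z = A0.T @ y_hat - c                       # dual slack variables
    assert np.isclose(y_hat @ t, 0)            # t^T y_hat = 0
    assert np.isclose(x_hat @ z, 0)            # x_hat^T z = 0

For this instance both optimal values equal 13, attained at x̂ = (3, 2)^T and ŷ = (1.4, 0.2)^T.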

We have c^T x ≤ y^T A^o x ≤ b^T y when x ∈ P and y ∈ Q; and when x̂ ∈ P maximizes c^T x subject to x ∈ P and ŷ ∈ Q minimizes b^T y subject to y ∈ Q, we have c^T x̂ = ŷ^T A^o x̂ = b^T ŷ. Now we can deduce ŷ^T (b − A^o x̂) = 0 from ŷ^T A^o x̂ = b^T ŷ. And we can deduce x̂^T ((A^o)^T ŷ − c) = 0 from c^T x̂ = ŷ^T A^o x̂. The identities ŷ^T (b − A^o x̂) = 0 and x̂^T ((A^o)^T ŷ − c) = 0 are called the complementary slackness conditions.

Exercise 17: We have already obtained the complementary slackness conditions above. Identify where they are derived in the proof of the strong duality theorem.

Let t = b − A^o x̂; we have t ≥ 0 because A^o x̂ ≤ b. The elements of the m-covector t are the slack variables of our primal linear programming problem. Similarly, let z = (A^o)^T ŷ − c; we have z ≥ 0 because (A^o)^T ŷ ≥ c. The elements of the n-covector z are the slack variables of our dual linear programming problem. The complementary slackness conditions are then ŷ^T t = 0 and x̂^T z = 0. Since ŷ ≥ 0, t ≥ 0, x̂ ≥ 0, and z ≥ 0, these conditions imply that if ŷ_i ≠ 0 then t_i = 0, and if t_i ≠ 0 then ŷ_i = 0. And similarly, if x̂_i ≠ 0 then z_i = 0, and if z_i ≠ 0 then x̂_i = 0. In fact, unless x̂ is an overdetermined vertex of P, exactly one of x̂_i and z_i is zero, and unless ŷ is an overdetermined vertex of Q, exactly one of ŷ_i and t_i is zero.

Computing a Solution of the Dual Problem

The simplex algorithm can be applied to our dual problem recast as: determine y ∈ R^m to maximize −b^T y subject to (−A^o)^T y ≤ −c^o and y ≥ 0, where A^o is an m × n matrix, c^o ∈ R^n, and b ∈ R^m. But in fact the simplex algorithm applied to the corresponding primal problem (i.e., to the dual of this problem) also produces an optimal point of this dual problem when our primal problem has a finite optimal point (!) We can see this as follows.

Recall our primal problem is: determine x^o ∈ R^n to maximize (c^o)^T x^o subject to A^o x^o ≤ b and x^o ≥ 0, where A^o is an m × n matrix, c^o ∈ R^n, x^o ∈ R^n, and b ∈ R^m. This corresponds to the slack-variable form problem: determine x ∈ R^{n+m} to maximize c^T x subject to Ax = b and x ≥ 0, where A = [A^o I] is an m × (n+m) matrix, c^T = [(c^o)^T 0^T] ∈ R^{n+m}, x^T = [(x^o)^T t^T] ∈ R^{n+m} (the elements of the covector x row (n+1):(n+m) = t constitute our slack variables), and b ∈ R^m.

When the simplex algorithm applied to our slack-variable primal problem halts with a finite optimal point x̂ ∈ R^{n+m}, we have a length-m basis sequence N and a corresponding orthant-boundary constraint sequence Z = ⟨1, ..., n+m⟩ − N, and x̂ row N = w ≥ 0 and x̂ row Z = 0 and A x̂ = b. We also have the reduced gradient covector d where d^T = (c^T col N) B^{−1} A − c^T and where B is the non-singular m × m matrix A col N. And z = c^T x̂. Recall w = B^{−1} b.

Now d^T x̂ = (c^T col N) B^{−1} A x̂ − c^T x̂. We know d ≥ 0 since this is the termination condition in the simplex algorithm for the discovery of a finite optimal point. Also d row N = 0. Moreover, c^T x̂ = (c^T col N) B^{−1} b since d^T x̂ = 0 and A x̂ = b. Alternatively, (c^T col N) B^{−1} b = (c^T col N)(x̂ row N)

= (c^T col N)(x̂ row N) + (c^T col Z)(x̂ row Z) = c^T x̂.

Exercise 18: Show that d^T x̂ = 0. Hint: look at d row N, d row Z, x̂ row N, and x̂ row Z.

Thus we have c^T x̂ = (c^T col N) B^{−1} b. Let ŷ^T = (c^T col N) B^{−1}. Then c^T x̂ = ŷ^T b = b^T ŷ. So ŷ might potentially be an optimal point of our dual problem: determine y ∈ R^m to minimize b^T y subject to (A^o)^T y ≥ c and y ≥ 0, where A^o is an m × n matrix, c ∈ R^n, and b ∈ R^m. Indeed ŷ will be an optimal point if ŷ is a feasible point of our dual problem; this is a consequence of the strong duality theorem.

But we have the termination condition d^T = ŷ^T A − c^T ≥ 0, and A = [A^o I] and c = [c^o; 0], so d^T = [ŷ^T A^o − (c^o)^T  ŷ^T] ≥ 0. Thus ŷ^T A^o ≥ (c^o)^T and ŷ^T ≥ 0, i.e., (A^o)^T ŷ ≥ c^o and ŷ ≥ 0. Thus ŷ is a feasible point of our dual problem, so ŷ is in fact an optimal point of our dual problem.

Note ŷ^T = d^T col (n+1):(n+m). Thus our terminal tableau exhibits not only x̂ (as x̂ row N = w and x̂ row Z = 0), but also ŷ (as ŷ^T = d^T col (n+1):(n+m)), and the common value of the primal and dual objective functions is given by z = c^T x̂ = b^T ŷ.

Exercise 19: What are the values of the n slack variables associated with the dual problem?

Also, we can obtain a constructive proof of the strong duality theorem from the above specification of ŷ. Given that x̂ is a finite optimal point of our primal problem and ŷ is a feasible point of our dual problem and b^T ŷ = c^T x̂, we may conclude that ŷ is an optimal point of our dual problem. Since the weak duality theorem requires min_{y∈Q} b^T y ≥ max_{x∈P} c^T x = c^T x̂, and min_{y∈Q} b^T y ≤ b^T ŷ = c^T x̂, we see that min_{y∈Q} b^T y ≥ c^T x̂ = b^T ŷ ≥ min_{y∈Q} b^T y, so min_{y∈Q} b^T y = b^T ŷ. And when x̂ is a finite optimal point of our primal problem, we see that ŷ = (B^{−1})^T (c row N) is a feasible point of our dual problem with the associated objective function value b^T ŷ = c^T x̂, so ŷ is an optimal point of our dual problem.
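A minimal sketch of this recovery: for the slack-variable form A = [A^o I] with a terminal basis sequence N (assumed known here, e.g., read off a terminal tableau; the instance is the same made-up one used earlier), ŷ = (B^{−1})^T (c row N) is dual-feasible and attains the primal optimal value.

    import numpy as np

    A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([8.0, 9.0])
    c0 = np.array([3.0, 2.0])

    A = np.hstack([A0, np.eye(2)])          # A = [A0 I], the slack-variable form
    c = np.concatenate([c0, np.zeros(2)])   # c extended with zeros for the slacks

    N = [0, 1]                              # terminal basis columns (assumed known)
    B = A[:, N]                             # the non-singular m x m matrix A col N
    y_hat = np.linalg.solve(B.T, c[N])      # y_hat = (B^{-1})^T (c row N)

    assert np.all(A0.T @ y_hat >= c0 - 1e-9) and np.all(y_hat >= 0)  # y_hat in Q
    print(y_hat, b @ y_hat)                 # (1.4, 0.2) with b^T y_hat = 13.0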

The Geometry of Duality

Recall our standard primal linear programming problem: determine x ∈ R^n to maximize c^T x subject to A^o x ≤ b and x ≥ 0, where A^o is an m × n matrix, c ∈ R^n, and b ∈ R^m. We may introduce m slack variables t_1, ..., t_m defined in terms of x by t = b − A^o x, necessarily with t ≥ 0. Now consider the following slack-variable formulation of our primal linear programming problem. (Recall we assume n ≥ 1 and m ≥ 1.) Determine x ∈ R^{n+m} to maximize c^T x subject to Ax = b and x ≥ 0, where A is an m × (n+m) rank-m matrix, c ∈ R^{n+m}, and b ∈ R^m.

Here we have redefined x to be an (n+m)-covector with the m added slack-variable components x_{n+1} = t_1, ..., x_{n+m} = t_m, and we have redefined c to be an (n+m)-covector by extending it with c row (n+1):(n+m) = 0, and we define A to be the m × (n+m) matrix [A^o I]. The corresponding polyhedron is P = {x ∈ R^{n+m} | Ax = b and x ≥ 0}. (In fact the following results do not depend on the matrix A being of the form [A^o I]; the following is valid when A is an arbitrary m × (n+m) rank-m matrix.)

The corresponding dual linear programming problem is: determine y ∈ R^m to minimize b^T y subject to A^T y ≥ c, where A is an m × (n+m) rank-m matrix, c ∈ R^{n+m}, and b ∈ R^m.

Let k = n+m so that A is an m × k matrix. The slack-variable primal problem above can be written as: determine x ∈ R^k to maximize c^T x subject to A(x − d) = 0 and x ≥ 0, where d ∈ R^k satisfies Ad = b. Note when A = [A^o I], we can choose d = [0; b]. When there is no covector d such that Ad = b, our slack-variable primal problem has no solution; otherwise the m equations Ax = Ad specify a (k−m)-dimensional flat F in R^k. Even if d exists such that Ad = b, our problem will only have a feasible point if d can be chosen such that d ≥ 0. Let us assume our slack-variable primal problem has a feasible point, i.e., the corresponding polyhedron P = F ∩ O_k^+ is non-empty.

Prabhakaran [Pra02] gives a way to lift the dual problem from R^m to R^k. The basic idea is that the m-dimensional polyhedron Q = {y ∈ R^m | A^T y ≥ c} of the dual problem can be mapped one-to-one into an m-dimensional flat G in R^k orthogonal to the (k−m)-dimensional flat F ⊆ R^k, where F = {x ∈ R^k | Ax = b} = {x ∈ R^k | Ax = Ad} is the flat associated with the slack-variable formulation of our primal linear programming problem.

First introduce k slack variables s_1, ..., s_k to write the dual problem as: determine s ∈ R^k and y ∈ R^m to minimize b^T y subject to s = A^T y − c and s ≥ 0. Note that the constraints s = A^T y − c define s in terms of y ∈ R^m and thus confine s to an m-dimensional flat in R^k. The map y ↦ A^T y − c is the map that carries Q into the flat G orthogonal to F. When there are no covectors s and y such that A^T y = c + s with s ≥ 0, our dual problem has no solution. Let us assume our dual problem has a feasible point, i.e., the corresponding polyhedron Q is non-empty.

Exercise 20: Show that if our primal slack-variable form problem has a feasible point and our dual slack-variable form problem also has a feasible point, then both our primal and our dual problems have finite optimal points.

Now recall k − m = n and define A′ as an n × k rank-n matrix whose rows form a basis of the orthogonal complement of rowspace(A). Note A′ A^T = O_{n×m}. (When A = [A^o I_{m×m}], A′ can be taken as [I_{n×n}  −(A^o)^T].) The dual problem constraint s + c = A^T y can now be transformed to yield A′(s + c) = 0, because A′(s + c) = A′ A^T y and A′ A^T = O_{n×m}, and A′ has rank n.
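A small sketch of this lifting for the special case A = [A^o I], with A′ = [I  −(A^o)^T] as noted above (same illustrative instance as before): a dual-feasible y is carried to s = A^T y − c, which lies in the flat G and satisfies s ≥ 0.

    import numpy as np

    A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
    m, n = A0.shape
    A = np.hstack([A0, np.eye(m)])          # m x k with k = n + m
    Ap = np.hstack([np.eye(n), -A0.T])      # A': n x k with rank n

    assert np.allclose(Ap @ A.T, 0)         # A' A^T = O: rows of A' normal to rows of A

    c = np.concatenate([np.array([3.0, 2.0]), np.zeros(m)])
    y = np.array([1.4, 0.2])                # a dual-feasible point (from earlier)
    s = A.T @ y - c                         # lift y into R^k

    assert np.all(s >= -1e-9)               # s >= 0 since y is dual-feasible
    assert np.allclose(Ap @ (s + c), 0)     # s lies in G, i.e., A's = -A'c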

Exercise 21: Show that when Q ≠ ∅, s + c ∈ rowspace(A). And then, since rank(A) = m, there exists a unique m-covector y such that A^T y = s + c.

Exercise 22: Show that A^T Q = {A^T y ∈ R^k | A^T y ≥ c} = {s + c ∈ R^k | s = A^T y − c and s ≥ 0} = {s + c ∈ R^k | A′(s + c) = 0 and s ≥ 0}.

Also the dual problem objective function b^T y can be written as d^T (c + s), since we have assumed there is a covector d ∈ R^k such that b^T = d^T A^T, so b^T y = d^T A^T y, and we have c + s = A^T y. And thus b^T y = d^T (c + s), and we see that choosing y to minimize b^T y is equivalent to choosing s to minimize d^T s. Thus our dual problem is transformable to: determine s ∈ R^k to minimize d^T s subject to A′(s + c) = 0 and s ≥ 0.

The n equations A′ s = −A′ c specify an m-dimensional flat G in R^k where G = {s ∈ R^k | A′ s = −A′ c}. Recall F = {x ∈ R^k | Ax = Ad}. Note −c ∈ G and d ∈ F. When our slack-variable primal and dual problems both have feasible points, we have ldim(F) = k − m = n and ldim(G) = m, and in fact F ⊥ G.

Exercise 23: Show that G = flat(A^T Q) − c.

Since the rows of A′ are normal to the rows of A, the R^k-flats F and G are orthogonal, and since the subspaces F − d and G + c are likewise orthogonal and their dimensions sum to k, they are complementary to one another in R^k, and (F − d) ∩ (G + c) = {0}. Hence there is a unique R^k-point a such that F ∩ G = {a}. The point a is computable as the unique solution of the k equations

    [ A  ] a = [ Ad   ]
    [ A′ ]     [ −A′c ].

(Note [A; A′] is a k × k non-singular matrix.)

Exercise 24: Is the point a necessarily a feasible point of either the slack-variable form of our primal linear programming problem or of the slack-variable form of our dual linear programming problem? That is, does a ≥ 0 necessarily hold?

Now, A(x − d) = A((x − a) − (d − a)) = A(x − a) since A(d − a) = 0 (because a ∈ F). And A′(s + c) = A′((s − a) − (−c − a)) = A′(s − a) since A′(−c − a) = 0 (because a ∈ G).

Moreover, the k-covector x̂ that maximizes c^T x subject to A(x − a) = 0 and x ≥ 0 also minimizes a^T x subject to the same constraints. This is because c^T x = ((c + a) − a)^T ((x − d) + d) = −a^T x + (c + a)^T (x − d) + (c + a)^T d, and A(x − d) = 0 and A′(c + a) = 0 imply x − d and c + a are perpendicular, so (c + a)^T (x − d) = 0. Thus c^T x = −a^T x + (c + a)^T d, so the covector x̂ that yields a constrained minimum of a^T x is the same as the covector that yields a constrained maximum of c^T x.

In the same way, the k-covector ŝ that minimizes d^T s subject to the constraints A′(s + c) = A′(s − a) = 0 and s ≥ 0 also minimizes a^T s subject to the same constraints.

Exercise 25: Show that d^T s = a^T s + (d − a)^T a.
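The point a with F ∩ G = {a} is computable by a single k × k linear solve. A minimal sketch continuing the same illustrative instance (with d = [0; b], which satisfies Ad = b for A = [A^o I]):

    import numpy as np

    A0 = np.array([[2.0, 1.0], [1.0, 3.0]])
    m, n = A0.shape
    A = np.hstack([A0, np.eye(m)])
    Ap = np.hstack([np.eye(n), -A0.T])
    b = np.array([8.0, 9.0])
    c = np.concatenate([np.array([3.0, 2.0]), np.zeros(m)])

    d = np.concatenate([np.zeros(n), b])    # A d = b holds for A = [A0 I]
    M = np.vstack([A, Ap])                  # the k x k non-singular matrix [A; A']
    rhs = np.concatenate([A @ d, -Ap @ c])
    a = np.linalg.solve(M, rhs)             # the unique point with F and G meeting in {a}

    assert np.allclose(A @ (a - d), 0)      # a lies in F
    assert np.allclose(Ap @ (a + c), 0)     # a lies in G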

Thus the above transformations produce the modified primal problem: determine x ∈ R^k to minimize a^T x subject to A(x − a) = 0 and x ≥ 0. And the modified dual problem: determine s ∈ R^k to minimize a^T s subject to A′(s − a) = 0 and s ≥ 0.

Suppose that one of our modified primal and modified dual problems has a finite optimal point. Then the strong-duality theorem implies that both have finite solutions. Note if a ≥ 0, both our modified primal and modified dual problems have a as a feasible point.

Let x and s denote feasible points of our modified primal and modified dual problems respectively. We have x ∈ F ∩ O_k^+ and s ∈ G ∩ O_k^+. This means x − d ∈ F − d and s + c ∈ G + c, so (x − d, s + c) = 0 = x^T (s + c) − d^T (s + c). And then b^T y = d^T (c + s) = x^T (c + s) ≥ c^T x since x ≥ 0 and s ≥ 0. This is the weak-duality theorem.

Exercise 26: Assume there is a covector y ∈ R^m such that A^T y ≥ c. Given s ∈ R^k such that A′(s − a) = 0 and s ≥ 0, explain how to compute the corresponding feasible point y such that A^T y ≥ c. Hint: the affine transformation that maps Q into G is given by v ↦ A^T v − c.

Solution 26: Recall A′ a = −A′ c. Thus we have A′(s + c) = 0, so s + c ∈ rowspace(A), i.e., there exists a unique m-covector y such that A^T y = s + c. (Note rtnullspace(A′) = rowspace(A).) Thus y = (A^T)^+ (s + c). The k equations A^T y = s + c are overdetermined, but s is defined exactly so that these equations have a (unique) solution. And since s ≥ 0, A^T y = s + c ensures that A^T y ≥ c.

The modified primal and modified dual problems obtained above have a geometrical interpretation where the solution to our modified primal problem, if it exists, is an R^k-covector x̂ such that x̂ is an extreme point of the (k−m)-dimensional degenerate polyhedron F ∩ O_k^+ lying in the intersection of some distinguished collection of m (and possibly more) of the hyperplanes defining the boundary of O_k^+, while the solution to our modified dual problem, if it exists, is an R^k-covector ŝ such that ŝ is an extreme point of the m-dimensional degenerate polyhedron G ∩ O_k^+ lying in the intersection of the other k − m hyperplanes defining the boundary of O_k^+, and possibly more such O_k^+-hyperplanes as well. And the connection between duality and orthogonality is clearly seen since the flats F and G are orthogonal.

Exercise 27: Show that (x̂ − a, ŝ − a) = 0.

If we start at the point a common to F and G, and move in F to the extreme point x̂ and also move in G to the extreme point ŝ, then as we move, the distance between the two moving points gets steadily larger until we reach the semi-corners of the boundary of O_k^+ where the optimal points x̂ and ŝ are found. Note that the motion of these two points need not be consistent with the two gradient directions obtained from the common gradient direction a projected in F and in G; this is because a need not lie in O_k^+, and if so, both points will move from outside O_k^+ to the boundary of O_k^+.

Exercise 28: Under what conditions does the angle between these two moving points also grow steadily larger, or at least not diminish, as we move until the maximum possible separating distance is attained at the two optimal points?

Exercise 29: Let α, β ∈ R^+ and let p, q, a ∈ R^2 with p ≠ 0 and q ≠ 0 and p ⊥ q. Let C denote the cone cone(p, q). Show that when a ∈ C ∪ (−C), the full-angle in [0, 2π) between

the two vectors αp + a and βq + a increases as α and β increase, unless a = 0. (We need to admit full-angles greater than π for this to be generally true for a ∈ C.)

References

[Pra02] Manoj Prabhakaran. Visualizing LP duality. Princeton CS Dept., 2002.


More information

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming Distributed Real-Time Control Systems Lecture 13-14 Distributed Control Linear Programming 1 Linear Programs Optimize a linear function subject to a set of linear (affine) constraints. Many problems can

More information

Optimization methods NOPT048

Optimization methods NOPT048 Optimization methods NOPT048 Jirka Fink https://ktiml.mff.cuni.cz/ fink/ Department of Theoretical Computer Science and Mathematical Logic Faculty of Mathematics and Physics Charles University in Prague

More information

The Primal-Dual Algorithm P&S Chapter 5 Last Revised October 30, 2006

The Primal-Dual Algorithm P&S Chapter 5 Last Revised October 30, 2006 The Primal-Dual Algorithm P&S Chapter 5 Last Revised October 30, 2006 1 Simplex solves LP by starting at a Basic Feasible Solution (BFS) and moving from BFS to BFS, always improving the objective function,

More information

1 Lagrange Multiplier Method

1 Lagrange Multiplier Method 1 Lagrange Multiplier Method Near a maximum the decrements on both sides are in the beginning only imperceptible. J. Kepler When a quantity is greatest or least, at that moment its flow neither increases

More information

Lagrange duality. The Lagrangian. We consider an optimization program of the form

Lagrange duality. The Lagrangian. We consider an optimization program of the form Lagrange duality Another way to arrive at the KKT conditions, and one which gives us some insight on solving constrained optimization problems, is through the Lagrange dual. The dual is a maximization

More information

We showed that adding a vector to a basis produces a linearly dependent set of vectors; more is true.

We showed that adding a vector to a basis produces a linearly dependent set of vectors; more is true. Dimension We showed that adding a vector to a basis produces a linearly dependent set of vectors; more is true. Lemma If a vector space V has a basis B containing n vectors, then any set containing more

More information

3 The Simplex Method. 3.1 Basic Solutions

3 The Simplex Method. 3.1 Basic Solutions 3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,

More information

5. Duality. Lagrangian

5. Duality. Lagrangian 5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness.

14. Duality. ˆ Upper and lower bounds. ˆ General duality. ˆ Constraint qualifications. ˆ Counterexample. ˆ Complementary slackness. CS/ECE/ISyE 524 Introduction to Optimization Spring 2016 17 14. Duality ˆ Upper and lower bounds ˆ General duality ˆ Constraint qualifications ˆ Counterexample ˆ Complementary slackness ˆ Examples ˆ Sensitivity

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and homework.

More information

Week 3: Faces of convex sets

Week 3: Faces of convex sets Week 3: Faces of convex sets Conic Optimisation MATH515 Semester 018 Vera Roshchina School of Mathematics and Statistics, UNSW August 9, 018 Contents 1. Faces of convex sets 1. Minkowski theorem 3 3. Minimal

More information

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.

CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008. 1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function

More information

Interior Point Methods for LP

Interior Point Methods for LP 11.1 Interior Point Methods for LP Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor, Winter 1997. Simplex Method - A Boundary Method: Starting at an extreme point of the feasible set, the simplex

More information

Key Things We Learned Last Time. IE418 Integer Programming. Proving Facets Way #2 Indirect. A More Abstract Example

Key Things We Learned Last Time. IE418 Integer Programming. Proving Facets Way #2 Indirect. A More Abstract Example Key Things We Learned Last Time IE48: Integer Programming Department of Industrial and Systems Engineering Lehigh University 8th March 5 A face F is said to be a facet of P if dim(f ) = dim(p ). All facets

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1

More information

Lectures 6, 7 and part of 8

Lectures 6, 7 and part of 8 Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,

More information

A Lower Bound on the Split Rank of Intersection Cuts

A Lower Bound on the Split Rank of Intersection Cuts A Lower Bound on the Split Rank of Intersection Cuts Santanu S. Dey H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology. 200 Outline Introduction: split rank,

More information

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation

Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Course Notes for EE227C (Spring 2018): Convex Optimization and Approximation Instructor: Moritz Hardt Email: hardt+ee227c@berkeley.edu Graduate Instructor: Max Simchowitz Email: msimchow+ee227c@berkeley.edu

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Closing the duality gap in linear vector optimization

Closing the duality gap in linear vector optimization Closing the duality gap in linear vector optimization Andreas H. Hamel Frank Heyde Andreas Löhne Christiane Tammer Kristin Winkler Abstract Using a set-valued dual cost function we give a new approach

More information