4. Duality


Contents of Chapter 4:
4.1 Duality of LPs and the duality theorem
4.2 Complementary slackness
4.3 The shortest path problem and its dual
4.4 Farkas' Lemma
4.5 Dual information in the tableau
4.6 The dual Simplex algorithm

4.1 Duality of LPs and the duality theorem

The dual of an LP in general form

Derivation of the dual

Consider an LP in general form:

(4.1)   min  c^T x                      x ∈ R^n, c ∈ R^n
        s.t. a_i^T x  =  b_i    i ∈ M,  a_i ∈ R^n
             a_i^T x  ≥  b_i    i ∈ M̄
             x_j ≥ 0            j ∈ N
             x_j unconstrained  j ∈ N̄

We transform it to standard form according to Lemma 3.2 with
- surplus variables x_i^s for the inequalities
- split variables x_j = x_j^+ - x_j^- with x_j^+, x_j^- ≥ 0
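The transformation to standard form can be carried out mechanically. The following Python sketch (my own illustration, not part of the lecture; the function name and the data layout are made up) assembles the matrix, the right-hand side and the cost vector of the resulting standard-form LP from the data of (4.1):

import numpy as np

def to_standard_form(A, b, c, geq_rows, nonneg_vars, free_vars):
    # Returns (A_hat, b, c_hat) for  min c_hat^T x_hat  s.t.  A_hat x_hat = b, x_hat >= 0.
    # Equality rows need no change; each ">=" row i gets a surplus column -e_i.
    cols, costs = [], []
    for j in nonneg_vars:                    # x_j >= 0: column kept as is
        cols.append(A[:, j]); costs.append(c[j])
    for j in free_vars:                      # x_j free: split into x_j^+ - x_j^-
        cols.append(A[:, j]); costs.append(c[j])
        cols.append(-A[:, j]); costs.append(-c[j])
    for i in geq_rows:                       # surplus variable x_i^s for a_i^T x >= b_i
        e = np.zeros(A.shape[0]); e[i] = -1.0
        cols.append(e); costs.append(0.0)
    return np.column_stack(cols), np.asarray(b, float), np.array(costs)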

This gives the LP in standard form

(4.2)   min  ĉ^T x̂    s.t.  Â x̂ = b,  x̂ ≥ 0

with

        Â  =  ( A_j (j ∈ N),   (A_j, -A_j) (j ∈ N̄),    -e_i (i ∈ M̄) )
        x̂  =  ( x_j (j ∈ N),   (x_j^+, x_j^-) (j ∈ N̄),  x_i^s (i ∈ M̄) )^T
        ĉ  =  ( c_j (j ∈ N),   (c_j, -c_j) (j ∈ N̄),     0 (i ∈ M̄) )^T

where, w.l.o.g., matrix Â has full row rank, and where A_j denotes the column of x_j in (4.1).

The previous results on the simplex algorithm give:

If (4.2) has an optimal solution, then there is a basis B̂ of Â with

        ĉ^T - ĉ_B̂^T B̂^(-1) Â  ≥  0,

i.e., all reduced costs are ≥ 0.

Let m be the number of constraints in (4.1). Then

        π^T := ĉ_B̂^T B̂^(-1)  ∈  R^m

is a feasible solution of the inequalities

(4.3)   π^T Â  ≤  ĉ^T.

Inequalities (4.3) have 3 groups w.r.t. their columns:

Group 1 (columns of x_j, j ∈ N):             π^T A_j ≤ c_j                                        (4.4)
Group 2 (columns of x_j^+, x_j^-, j ∈ N̄):    π^T A_j ≤ c_j and -π^T A_j ≤ -c_j, i.e. π^T A_j = c_j  (4.5)
Group 3 (columns of x_i^s, i ∈ M̄):           -π_i ≤ 0, i.e. π_i ≥ 0                                (4.6)

Definition of the dual of LP

(4.4) - (4.6) define constraints for a new LP with variables π_1, ..., π_m. These constraints, together with the

objective function

        max  π^T b

constitute the dual LP of (4.1). The initial problem (4.1) is called the primal LP.

Transformation rules primal -> dual (follow from (4.4) - (4.6))

        primal:  min c^T x                    dual:  max π^T b
        a_i^T x = b_i       i ∈ M             π_i unconstrained
        a_i^T x ≥ b_i       i ∈ M̄             π_i ≥ 0
        x_j ≥ 0             j ∈ N             π^T A_j ≤ c_j
        x_j unconstrained   j ∈ N̄             π^T A_j = c_j

Observe: The dual LP is obtained from the optimality criterion of the primal. The variables π_1, ..., π_m correspond to multipliers of the rows of Â that fulfill the primal optimality criterion.

4.1 Theorem (dual dual = primal)

The dual of the dual is the primal. We therefore speak of primal-dual pairs of LPs.

Proof

Write the dual in primal form:

        min  π^T (-b)
        s.t. (-A_j^T) π  ≥  -c_j    j ∈ N
             (-A_j^T) π  =  -c_j    j ∈ N̄
             π_i ≥ 0                i ∈ M̄
             π_i unconstrained      i ∈ M

The transformation rules yield the following dual LP

        max  x^T (-c)
        s.t. x_j ≥ 0                j ∈ N
             x_j unconstrained      j ∈ N̄
             a_i^T x  ≥  b_i        i ∈ M̄
             a_i^T x  =  b_i        i ∈ M

which is the primal LP. □
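The transformation table can be read as a recipe. A small Python sketch (my own illustration, not from the lecture; the function name and the returned data structure are invented) that applies the rules to a primal given by (A, b, c) together with the index sets of equality rows and sign-constrained variables:

import numpy as np

def dual_of_general_lp(A, b, c, eq_rows, nonneg_vars):
    # Primal: min c^T x, a_i^T x = b_i for i in eq_rows, a_i^T x >= b_i otherwise,
    #         x_j >= 0 for j in nonneg_vars, x_j unconstrained otherwise.
    # Dual:   max pi^T b with the sign/constraint pattern of the table above.
    m, n = A.shape
    pi_sign = ["unconstrained" if i in eq_rows else ">= 0" for i in range(m)]
    constraints = [
        (A[:, j], "<=" if j in nonneg_vars else "=", c[j]) for j in range(n)
    ]
    return {"objective": ("max", np.asarray(b, float)),
            "pi_sign": pi_sign,
            "constraints": constraints}      # each entry: (A_j, relation, c_j)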

The Duality Theorem

4.2 Theorem (Weak and Strong Duality Theorem)

Let x be a primal feasible solution and π be a dual feasible solution. Then

(4.7)   π^T b  ≤  c^T x          (Weak Duality Theorem)

If an LP has an optimal solution, so has its dual, and the optimal objective values are the same (Strong Duality Theorem).

Proof

Let x be a primal feasible solution and π be a dual feasible solution. Then

        c^T x  ≥  (π^T A) x          (π dual feasible)
               =   π^T (Ax)
               ≥   π^T b             (x primal feasible)

Assume w.l.o.g. that the LP is in primal form (4.2) and has an optimal solution
=> it has an optimal basic feasible solution x̂ with associated basis B̂, and π^T = ĉ_B̂^T B̂^(-1) is feasible for the dual by construction.

For this π we obtain

        π^T b  =  ĉ_B̂^T B̂^(-1) b  =  ĉ_B̂^T x̂_B̂  =  ĉ^T x̂

So π and x̂ have the same objective function value. Weak Duality (4.7) then implies that π is a dual optimal solution. □
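As a quick numerical illustration of the theorem (my own sketch, not part of the lecture; it assumes SciPy is available, and the data are a made-up toy example), one can solve a small primal-dual pair with scipy.optimize.linprog and compare the optimal values:

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 0.0]])
b = np.array([4.0, 3.0])
c = np.array([2.0, 3.0, 4.0])

# primal: min c^T x  s.t.  Ax = b, x >= 0
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")

# dual: max pi^T b  s.t.  pi^T A <= c^T, pi free   (solved as min -b^T pi)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2, method="highs")

print("primal optimum:", primal.fun)       # c^T x*
print("dual optimum  :", -dual.fun)        # pi*^T b; equal by strong duality
assert abs(primal.fun - (-dual.fun)) < 1e-8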

4.3 Theorem (Possible primal-dual pairs)

Primal-dual pairs exist exactly in one of the following cases:
(1) both LPs have a finite optimal solution and their objective values are equal
(2) both LPs have no feasible solution
(3) one LP has an unbounded objective function and the other has no feasible solution

        primal \ dual            finite optimal     feasible, unbounded     no feasible
                                 solution           objective               solution
        finite optimal solution      (1)                  -                     -
        feasible, unbounded obj.      -                   -                    (3)
        no feasible solution          -                  (3)                   (2)

Proof

Strong Duality Theorem => case (1) occurs in row 1 and column 1 of the table, and (1,1) is the only entry of row 1 and of column 1 in which it can occur.

Consider now row 2 of the table, i.e., there is a primal feasible solution x but c^T x is unbounded from below. If there were a dual feasible solution π, we would obtain π^T b ≤ c^T x for every primal feasible x by the Weak Duality Theorem
=> c^T x is bounded from below, a contradiction.
Therefore case (3) can only occur at positions (2,3) and (3,2).

An example for (3)

(P)   min  x_1
      s.t. x_1 + x_2  ≥  1
           -x_1 - x_2 ≥  1
           x_1, x_2   ≥  0

=> (P) has no feasible solution

(D)   max  π_1 + π_2
      s.t. π_1 - π_2  ≤  1
           π_1 - π_2  ≤  0
           π_1, π_2   ≥  0

=> e.g. π = (λ, λ) is feasible for every λ ≥ 0 with π^T b = 2λ => π^T b is unbounded

So only entry (3,3) remains. This case can occur:

An example for (2)

(P)   min  x_1
      s.t. x_1 + x_2  ≥  1
           -x_1 - x_2 ≥  1
           x_1, x_2 unconstrained

=> (P) has no feasible solution

(D)   max  π_1 + π_2
      s.t. π_1 - π_2  =  1
           π_1 - π_2  =  0
           π_1, π_2   ≥  0

=> (D) has no feasible solution

The transportation problem and its dual

The Hitchcock problem or transportation problem (Hitchcock 1941) is a special minimum cost flow problem, see ADM I.

(Figure: bipartite graph G with supply vertices A on the left, demand vertices B on the right, and edge capacities u = ∞.)

We want to transport a good (oil, grain, coal) at minimum cost from the supply locations to the demand locations.

Vertex i ∈ A (i = 1, ..., m) supplies a_i units.
Vertex j ∈ B (j = 1, ..., n) demands b_j units, total supply = total demand.
Edges (i,j) ∈ A x B have cost c_ij per transported unit and infinite capacity u_ij.

An LP formulation for the transportation problem

x_ij = number of units transported from i to j

        min  Σ_{i,j} c_ij x_ij
        s.t. Σ_j x_ij = a_i   for all i   (pick up supply a_i from vertex i)

             Σ_i x_ij = b_j   for all j   (deliver demand b_j to vertex j)
             x_ij ≥ 0         for all i, j

The associated matrix A of coefficients has one row per supply vertex i = 1, ..., m and one row per demand vertex j = 1, ..., n, and one column per variable x_ij: the row of supply vertex i has 1's in the columns of x_i1, ..., x_in, the row of demand vertex j has 1's in the columns of x_1j, ..., x_mj, and all other entries are 0.

The dual of the transportation problem

Introduce dual variables u_i, v_j for the constraints as follows:

        u_i :   -Σ_j x_ij  =  -a_i   for all i
        v_j :    Σ_i x_ij  =   b_j   for all j

The dual LP reads

        max  -Σ_i a_i u_i + Σ_j b_j v_j
        s.t. -u_i + v_j  ≤  c_ij   for all i, j
             u_i, v_j unconstrained

Interpretation of the dual LP

A "dual" entrepreneur offers to do the transportation for pairs (i,j). He can buy the supply a_i at location i from the primal entrepreneur, transport it to j, and sell it there.

        u_i        =  price to buy a unit of the good at vertex i
        v_j        =  returns per unit at vertex j
        v_j - u_i  =  profit per unit bought in i and sold in j

v_j - u_i ≤ c_ij: the dual entrepreneur must stay below the primal transportation cost in order to get the transport (i,j) from the primal entrepreneur (otherwise the primal entrepreneur will do it himself).

The dual entrepreneur wants to maximize his total profit Σ_j b_j v_j - Σ_i a_i u_i under these conditions.
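A small computational sketch of the transportation problem and its dual (my own illustration, not from the lecture; the data are made up and SciPy's linprog is assumed to be available). It solves both LPs and confirms that the minimum transportation cost equals the maximum dual profit:

import numpy as np
from scipy.optimize import linprog

# toy data: 2 suppliers, 3 customers, total supply = total demand = 60
a = np.array([25.0, 35.0])                 # supplies a_i
b = np.array([20.0, 10.0, 30.0])           # demands b_j
C = np.array([[8.0, 6.0, 10.0],            # costs c_ij
              [9.0, 12.0, 13.0]])
m, n = C.shape

# primal: min sum c_ij x_ij  s.t.  sum_j x_ij = a_i,  sum_i x_ij = b_j,  x >= 0
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0       # supply row i
for j in range(n):
    A_eq[m + j, j::n] = 1.0                # demand row j
primal = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]), method="highs")

# dual: max -sum a_i u_i + sum b_j v_j  s.t.  -u_i + v_j <= c_ij  (u, v free)
A_ub = np.zeros((m * n, m + n))
for i in range(m):
    for j in range(n):
        A_ub[i * n + j, i] = -1.0
        A_ub[i * n + j, m + j] = 1.0
dual = linprog(np.concatenate([a, -b]),    # minimize  sum a_i u_i - sum b_j v_j
               A_ub=A_ub, b_ub=C.ravel(),
               bounds=[(None, None)] * (m + n), method="highs")

print("primal cost:", primal.fun, " dual profit:", -dual.fun)   # equal by strong duality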

The dual of the diet problem

The primal problem (see Example 3.1)

        min  c^T x
        s.t. Ax ≥ r
             x  ≥ 0

The associated dual problem

        max  π^T r
        s.t. π^T A ≤ c^T
             π ≥ 0

Interpretation

The dual entrepreneur makes nutrient pills for each of the m nutrients (magnesium, vitamin C, ...). He asks the price π_i per unit of nutrient i.

π^T A_j ≤ c_j  <=>  the total price of all pills substituting one unit of food j must not exceed the price c_j of one unit of food j (pills will not be bought otherwise)

max π^T r  <=>  maximizing the total profit of the dual entrepreneur

Dual LPs often have a natural interpretation in practice.

4.2 Complementary slackness

Complementary slackness provides simple necessary and sufficient conditions for optimality of a pair of primal feasible and dual feasible solutions. They have far reaching consequences for the design of algorithms (primal-dual algorithms, primal-dual approximation algorithms).

4.4 Theorem (Complementary slackness)

Let x be a primal feasible solution and π be a dual feasible solution. The following statements are equivalent:

(i)  x, π are optimal (in the primal and the dual, respectively)
(ii) u_i := π_i (a_i^T x - b_i) = 0  for all i = 1, ..., m      (4.8)
     v_j := (c_j - π^T A_j) x_j = 0  for all j = 1, ..., n      (4.9)

i.e.: (slack of primal or dual constraint) * (value of associated dual or primal variable) = 0

Proof

u_i ≥ 0, since
    i ∈ M:  a_i^T x - b_i = 0               =>  u_i = 0
    i ∈ M̄:  a_i^T x - b_i ≥ 0 and π_i ≥ 0   =>  u_i ≥ 0

v_j ≥ 0,

since
    j ∈ N̄:  x_j unconstrained               =>  π^T A_j = c_j  =>  v_j = 0
    j ∈ N:  x_j ≥ 0 and π^T A_j ≤ c_j       =>  v_j ≥ 0

Set u := Σ_i u_i, v := Σ_j v_j  =>  u, v ≥ 0. Then

    u = 0  <=>  (4.8) holds
    v = 0  <=>  (4.9) holds

Then

    u + v  =  Σ_i π_i (a_i^T x - b_i) + Σ_j (c_j - π^T A_j) x_j
           =  - Σ_i π_i b_i + Σ_j c_j x_j + Σ_i π_i a_i^T x - Σ_j π^T A_j x_j
           =  - π^T b + c^T x + (π^T A) x - π^T (Ax)
           =  - π^T b + c^T x

Hence:  u + v  =  c^T x - π^T b.

Suppose (4.8) and (4.9) hold  =>  u + v = 0  =>  c^T x = π^T b.
Weak Duality Theorem  =>  x, π are optimal.

Suppose that x and π are optimal.
Strong Duality Theorem  =>  c^T x = π^T b  =>  u + v = 0  =>  u = v = 0 (since u, v ≥ 0)  =>  (4.8) and (4.9). □
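The conditions (4.8) and (4.9) are easy to check numerically. The following sketch (my own toy example, not from the lecture; SciPy assumed available) solves a small primal-dual pair of diet type and verifies that all products u_i and v_j vanish at optimality:

import numpy as np
from scipy.optimize import linprog

# toy pair:  primal  min c^T x,  Ax >= b,  x >= 0
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0]])
b = np.array([8.0, 10.0])
c = np.array([3.0, 4.0, 5.0])

primal = linprog(c, A_ub=-A, b_ub=-b, method="highs")             # x >= 0 is the default
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2, method="highs")
x, pi = primal.x, dual.x

u = pi * (A @ x - b)            # pi_i * (a_i^T x - b_i)
v = (c - A.T @ pi) * x          # (c_j - pi^T A_j) * x_j
print("u =", np.round(u, 9), " v =", np.round(v, 9))   # all (numerically) zero at optimality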

4.3 The shortest path problem and its dual

The shortest path problem as primal LP

Shortest Path Problem (SP)

Instance:  Digraph G
           Rational edge weights c(e), e ∈ E(G)
           Vertices s, t ∈ V(G)
Task:      Determine an elementary s,t-path P of minimum weight c(P) = Σ_{e ∈ E(P)} c(e) (shortest s,t-path)

(SP) is an instance of (LP).

The vertex-edge-incidence matrix A = (a_ij) of G is defined as

        a_ij  =  +1   if edge e_j leaves vertex i
                 -1   if edge e_j enters vertex i
                  0   otherwise

where V(G) = { 1, ..., n } and E(G) = { e_1, ..., e_m }.

Example

Graph G with vertices s, a, b, t and edges e_1 = (s,a), e_2 = (s,b), e_3 = (a,b), e_4 = (a,t), e_5 = (b,t); its vertex-edge-incidence matrix is

              e_1  e_2  e_3  e_4  e_5
        s      1    1    0    0    0
  A  =  a     -1    0    1    1    0
        b      0   -1   -1    0    1
        t      0    0    0   -1   -1

The vertex-edge-incidence matrix of a digraph has per column exactly one +1, exactly one -1, and 0 otherwise
=> the sum of the rows is 0
=> rank(A) < n

Later: rank(A) = n-1 if G is connected (in the undirected sense).

Let f_j be a variable representing the amount of flow on edge e_j, and let f := (f_1, ..., f_m)^T.

Flow conservation in node i is then expressed as a_i^T f = 0 (inflow into i = outflow from i).

An s,t-path is a flow of flow value 1 from s to t (all f_j = 1 on the path and 0 otherwise)
=> every s,t-path is a solution of the linear system

        A f  =  b    with   b = ( 1, 0, ..., 0, -1 )^T    (+1 in row s, -1 in row t)

i.e. flow conservation with flow value v = 1.

Of course, this linear system also has solutions that do not correspond to s,t-paths. But we have

Lemma

(1) If  min c^T f,  A f = b,  f ≥ 0  has an optimal solution, then it also has one with f_j ∈ {0, 1}. Every such solution corresponds to an s,t-path.
(2) The simplex algorithm finds such a solution.

Proof:
(1) follows from the algorithm for minimum cost s,t-flows in ADM I.
(2) can easily be shown directly, but follows also from the fact that matrix A is totally unimodular and b is integer. Then all basic feasible solutions of the LP are integer. We will show this more general result in Chapter 7.2. □
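A short sketch (my own, not part of the lecture; it assumes SciPy's linprog and uses the example graph as reconstructed above) that solves the shortest path LP and illustrates the integrality statement of the lemma:

import numpy as np
from scipy.optimize import linprog

# vertices s, a, b, t; edges e1=(s,a), e2=(s,b), e3=(a,b), e4=(a,t), e5=(b,t)
A = np.array([[ 1,  1,  0,  0,  0],    # row s
              [-1,  0,  1,  1,  0],    # row a
              [ 0, -1, -1,  0,  1],    # row b
              [ 0,  0,  0, -1, -1]],   # row t
             dtype=float)
c = np.array([1.0, 2.0, 2.0, 3.0, 1.0])
b = np.array([1.0, 0.0, 0.0, -1.0])    # +1 at s, -1 at t, flow value 1

# delete the (redundant) row of t, as in the text
res = linprog(c, A_eq=A[:-1], b_eq=b[:-1], method="highs")
print("optimal value:", res.fun)       # 3 = length of the shortest path s -> b -> t
print("flow f:", res.x)                # 0/1-valued: edges e2 and e5 carry the flow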

Solving (SP) with the simplex algorithm

We formulate (SP) as the (LP)

        min  c^T f
        s.t. A f = b     (A = vertex-edge-incidence matrix)
             f ≥ 0

and solve it with the simplex algorithm. Since rank(A) < n, we may delete a row
=> delete the row for vertex t; this yields b ≥ 0.

In the example we obtain the following tableau for the cost vector c = (1, 2, 2, 3, 1).

(Initial tableau, not yet transformed w.r.t. a basis, and graph G with edge costs 1, 2, 2, 3, 1 on e_1, ..., e_5; not reproduced here.)

Choose { 1, 4, 5 } as basis and transform the tableau w.r.t. that basis. Interpret the associated basic feasible solution in the graph.

(Tableau w.r.t. basis { 1, 4, 5 }; the basic feasible solution is the path s -> a -> t via e_1, e_4, with the degenerate basic variable f_5 = 0.)

The basic solution has n-1 = |V|-1 basic variables, but not every s,t-path has so many edges
=> many basic feasible solutions are degenerate (a common phenomenon in combinatorial optimization problems).

Next tableau and basic feasible solution in the graph:

(Tableau after one pivot; the basic feasible solution is the path s -> b -> t via e_2, e_5.)

=> optimal solution found, the shortest path has length 3.

The dual of the shortest path problem

We formulate it w.r.t. the full tableau containing also the row for vertex t
=> the dual variables π_i correspond to a node potential in graph G.

(Tableau in the example: the full incidence matrix including the row of t; not reproduced here.)

Dual LP:

        max  π_s - π_t
        s.t. π_i - π_j  ≤  c_ij   for all edges (i,j) ∈ E(G)
             π_i unconstrained

Interpretation of the dual LP

Along any path  i -> k -> p -> ... -> q -> t  from i to t we have

        (π_i - π_k) + (π_k - π_p) + ... + (π_q - π_t)  =  π_i - π_t
        with  π_i - π_k ≤ c_ik,  π_k - π_p ≤ c_kp,  ...,  π_q - π_t ≤ c_qt

=> π_i - π_t  ≤  c_ik + c_kp + ... + c_qt  =  length of the path from i to t.

Since this holds for every such path,

        π_i - π_t  ≤  length of a shortest path from i to t

=> max π_s - π_t is equivalent to finding the greatest lower bound for the length of a shortest path from s to t.

Complementary slackness conditions

Path f and node potential π are primal-dual optimal  <=>

(1) f_ij > 0  =>  π_i - π_j = c_ij
    i.e., edge (i,j) lies on a shortest path => potential difference = cost

(2) π_i - π_j < c_ij  =>  f_ij = 0

    i.e., potential difference < cost => edge (i,j) does not lie on a shortest path

Interpretation: the lower bounds π_i - π_t are tight along any shortest path.

The cord model (for c_ij ≥ 0)

    edge (i,j)            <->  cord with length c_ij
    π_i - π_j ≤ c_ij      <->  pulling vertices i and j apart is bounded from above by the length c_ij
    max π_s - π_t         <->  pull s and t apart as far as possible
    complementary slackness: the cords on shortest paths are the tight ones

Remarks

Deleting the row for vertex t => we have no variable π_t => the dual objective function is max π_s.
But: edges (i,t) yield the dual constraint π_i ≤ c_it, so that π_s cannot get arbitrarily large.
We obtain the same dual constraint π_i ≤ c_it if we set π_t = 0 (which we may do w.l.o.g. since we only have potential differences in the dual).

Dijkstra's algorithm (ADM I) applied to the dual graph (in which the direction of all edges of G is reversed) iteratively computes the π_i, where π_t is set to 0.
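A small sketch of this remark (my own illustration, not from the lecture; the edge list follows the example graph as reconstructed above): Dijkstra on the reversed graph computes the potentials π_i = dist(i, t):

import heapq

edges = {("s", "a"): 1, ("s", "b"): 2, ("a", "b"): 2, ("a", "t"): 3, ("b", "t"): 1}

def potentials(edges, t):
    rev = {}                                      # reversed adjacency lists
    for (i, j), cij in edges.items():
        rev.setdefault(j, []).append((i, cij))
    pi = {t: 0}
    heap = [(0, t)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > pi.get(v, float("inf")):
            continue
        for u, cuv in rev.get(v, []):             # edge (u, v) of G, traversed backwards
            if d + cuv < pi.get(u, float("inf")):
                pi[u] = d + cuv
                heapq.heappush(heap, (d + cuv, u))
    return pi

print(potentials(edges, "t"))   # pi_s = 3, pi_a = 3, pi_b = 1, pi_t = 0; pi_i - pi_j <= c_ij on every edge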

4.4 Farkas' Lemma

This is a central and very useful lemma in duality theory. It has several variants, also known as Theorems of the Alternative.

Cones and projections

The cone C(a_1, ..., a_m) generated by a_1, ..., a_m

Let a_1, ..., a_m ∈ R^n (e.g. the rows of matrix A). The cone C(a_1, ..., a_m) generated by a_1, ..., a_m is defined as

        C(a_1, ..., a_m)  :=  { y ∈ R^n :  y = Σ_i π_i a_i  with  π_i ≥ 0 }

=  the set of non-negative linear combinations of a_1, ..., a_m.

(Figure: the cone C(a_1, a_2); vectors in the green angle have a non-negative projection onto a_1 and a_2.)

The projection of y onto a

        cos α  =  y^T a / (|y| |a|),        projection of y onto a  =  |y| cos α

=> the projection of y onto a is non-negative  <=>  y^T a is non-negative.

4.5 Theorem (Farkas' Lemma)

Let a_1, ..., a_m ∈ R^n and c ∈ R^n. The following are equivalent:

(1) for all y ∈ R^n:   y^T a_i ≥ 0 for all i = 1, ..., m   =>   y^T c ≥ 0
    i.e., for all y: y has a non-negative projection onto each a_i => y has a non-negative projection onto c

(2) c ∈ C(a_1, ..., a_m)
    i.e., c lies in the cone generated by a_1, ..., a_m

Proof

(1) => (2): Consider the LP

        min  c^T y
        s.t. a_i^T y ≥ 0    i = 1, ..., m
             y unconstrained

=> y = 0 is a feasible solution of the LP.
The objective function is bounded from below, since the constraints of the LP imply c^T y ≥ 0 because of (1).
=> the LP has a finite optimal solution
=> the dual LP

        max  0
        s.t. π^T A_j = c_j    for all j      (A = matrix with rows a_1^T, ..., a_m^T)
             π ≥ 0

has a feasible solution
=> there are numbers π_1, ..., π_m ≥ 0 with c^T = π^T A, i.e. c = Σ_i π_i a_i
=> c ∈ C(a_1, ..., a_m)

(2) => (1): c ∈ C(a_1, ..., a_m) => there are numbers π_i ≥ 0 with c = Σ_i π_i a_i.
Consider y with y^T a_i ≥ 0 for all i = 1, ..., m
=> y^T c = Σ_i π_i y^T a_i ≥ Σ_i π_i · 0 = 0. □

There are many equivalent formulations of Farkas' Lemma. Examples are

(A)  ∀ y (y^T a_i ≥ 0 ∀i  =>  y^T b ≥ 0)      <=>  ∃ x ≥ 0 with A^T x = b     (original version by Farkas 1894)
(B)  ∀ y ≥ 0 (y^T a_i ≥ 0 ∀i  =>  y^T b ≥ 0)  <=>  ∃ x ≥ 0 with A^T x ≤ b

More in Chapter 7.5.
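The equivalence can also be used computationally: whether c lies in the cone C(a_1, ..., a_m) is itself an LP feasibility question. A sketch (my own, not from the lecture; SciPy assumed available, data made up):

import numpy as np
from scipy.optimize import linprog

def in_cone(c, generators):
    # Is c in C(a_1, ..., a_m), i.e. c = sum_i pi_i a_i with pi_i >= 0 ?
    # The feasibility question is itself an LP with zero objective.
    A = np.column_stack(generators)          # columns are the a_i
    res = linprog(np.zeros(A.shape[1]), A_eq=A, b_eq=np.asarray(c, float), method="highs")
    return res.success

a1, a2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(in_cone([2.0, 1.0], [a1, a2]))    # True:  [2,1] = 1*a1 + 1*a2
print(in_cone([-1.0, 1.0], [a1, a2]))   # False: by Farkas there is a separating y,
                                        # e.g. y = (1,0): y.a1 >= 0, y.a2 >= 0, but y.c < 0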

An application of Farkas' Lemma: necessary conditions for the disjoint path problem

Disjoint Path Problem

Instance:  Undirected graph G
           Pairs of vertices { s_1, t_1 }, ..., { s_k, t_k }
Task:      Determine pairwise edge-disjoint paths from s_i to t_i (i = 1, ..., k)

An example: minimum cost embeddings of VPNs into the base net of Telekom (figure not reproduced here).

The decision version of the disjoint path problem is NP-complete. We therefore look for strong necessary and hopefully also sufficient criteria for the existence of a solution.

Cut criterion

Let H be the graph with V(H) := V(G) and E(H) := { { s_1, t_1 }, ..., { s_k, t_k } }.
A necessary condition for the existence of a solution is the cut criterion

        |δ_G(X)|  ≥  |δ_H(X)|    for all  ∅ ≠ X ⊊ V(G),

i.e., there are at least as many edges leaving X in G as there are pairs in H to be connected across the cut.

(Figure: G and H with a vertex set X and its complement V - X.)

The cut criterion is not sufficient:

4.6 Example

(Figure: graphs G and H of Example 4.6.) The cut criterion holds, but there is no solution.

Distance criterion

Let dist_{G,z}(s,t) be the length of a shortest path from s to t in G w.r.t. edge weights z(e) ≥ 0, e ∈ E(G).

An instance of the disjoint path problem fulfills the distance criterion :<=>
for any choice of edge weights z(e) ≥ 0, e ∈ E(G),

        Σ_{e ∈ E(G)} z(e)   ≥   Σ_{f = {s,t} ∈ E(H)} dist_{G,z}(s,t)

The cut criterion reduces to the distance criterion for the edge weights

        z(e) := 1 if e ∈ δ(X), and z(e) := 0 otherwise.

4.7 Theorem (The distance criterion is necessary)

The distance criterion is necessary and sufficient for the existence of a fractional solution of the disjoint path problem. In particular, it is necessary for the existence of a solution of the disjoint path problem.

Proof

Consider the disjoint path problem as a cycle packing problem:

cycles  =  all elementary cycles in G + H that contain exactly one edge of H
k      :=  number of these cycles

integer cycle packing    =  union of pairwise edge-disjoint cycles that contain every edge of H in exactly one cycle
                            (existence <=> feasibility of the disjoint path problem)

fractional cycle packing =  non-negative linear combination (of incidence vectors) of all these cycles such that the
                            resulting vector has the value 1 at the entries corresponding to the edges of H, and is
                            at most 1 at every entry corresponding to an edge of G
                            (they contain integer cycle packings as a special case)

Example 4.6 has the following cycles in the cycle packing problem:

(Figure: the cycles of Example 4.6; not reproduced here.)

A formulation of the fractional cycle packing

Let M be the E(G)-cycle-incidence matrix, i.e.,
    rows of M     <->  edges of G
    columns of M  <->  incidence vectors of all cycles of G + H
    M_{e,C} = 1   <=>  e lies on cycle C

Let N be the E(H)-cycle-incidence matrix, i.e.,
    rows of N     <->  edges of H
    columns of N  <->  incidence vectors of all cycles of G + H
    N_{e,C} = 1   <=>  e lies on cycle C

Observe: every column of N contains exactly one 1.

=> fractional cycle packing  =  π' ∈ R^k with π' ≥ 0, Mπ' ≤ 1, Nπ' = 1

Add slack variables to obtain a linear system and denote the enlarged vector again by π

=> fractional cycle packing  =  π ∈ R^{k+m} (m = |E(G)|) with π ≥ 0 and

        ( M  I )         ( 1 )
        (      )  π  =   (   )
        ( N  0 )         ( 1 )

Write it as  Aπ = 1,  π ≥ 0  with  A = ( M I ; N 0 ),

i.e., the all-ones vector 1 lies in the cone C(A_1, ..., A_{k+m}) generated by the columns A_j of A.

Applying Farkas' Lemma gives condition (3)

Farkas' Lemma yields: there is such a vector π  <=>

        for all y ∈ R^{|E(G)| + |E(H)|}:   y^T A_j ≥ 0 for all j = 1, ..., k+m   =>   y^T 1 ≥ 0

Partition y into (z, v)^T, such that z corresponds to the rows of M (edges of G) and v to the rows of N (edges of H). We then get:

    y^T A_j ≥ 0  =>  z_i ≥ 0                     for columns A_j of slack variables
    y^T A_j ≥ 0  =>  z^T M_j + v^T N_j ≥ 0       for the other columns A_j

Let C_j be the cycle of column A_j => C_j decomposes into a path P_j in G and an edge f from H. Then

    z^T M_j  =  length z(P_j) of the path P_j w.r.t. edge weights z(e)
    v^T N_j  =  edge weight v(f), where f is the edge of H lying on cycle C_j

Hence  y^T A_j ≥ 0  =>  z(P_j) + v(f) ≥ 0  for all cycles C_j containing edge f.

So z(P_j) + v(f) ≥ 0 (for all these cycles) is equivalent to

        dist_{G,z}(s,t) + v(f) ≥ 0    with f = { s, t }                                  (1)

The constraint y^T 1 ≥ 0 becomes

        Σ_{e ∈ E(G)} z(e) + Σ_{e ∈ E(H)} v(e)  ≥  0                                      (2)

Farkas' Lemma then yields for arbitrary (z, v):

(3)     z(e) ≥ 0,  dist_{G,z}(s,t) + v(f) ≥ 0 for all edges f = { s, t } in H
        =>   Σ_{e ∈ E(G)} z(e) + Σ_{f ∈ E(H)} v(f)  ≥  0

Condition (3) is equivalent to the distance criterion (shown by proving that their negations are equivalent).

(3) violated => distance criterion violated:

(3) violated => there are z, v with z(e) ≥ 0, dist_{G,z}(s,t) + v(f) ≥ 0 for all edges f = { s, t } in H, and
Σ_{e ∈ E(G)} z(e) + Σ_{f ∈ E(H)} v(f) < 0

=>  0  ≤  Σ_{f = {s,t} ∈ E(H)} ( dist_{G,z}(s,t) + v(f) )  <  Σ_{f ∈ E(H)} dist_{G,z}(s,t) - Σ_{e ∈ E(G)} z(e)

=>  Σ_{e ∈ E(G)} z(e)  <  Σ_{f ∈ E(H)} dist_{G,z}(s,t)

=>  distance criterion violated

distance criterion violated => (3) violated:

distance criterion violated => there is z ≥ 0 with Σ_{e ∈ E(G)} z(e) < Σ_{f ∈ E(H)} dist_{G,z}(s,t).

Choose v(f) := - dist_{G,z}(s,t) for each edge f = {s, t} in H

=>  dist_{G,z}(s,t) + v(f) ≥ 0 for all edges f = { s, t } in H, and

    Σ_{e ∈ E(G)} z(e) + Σ_{f ∈ E(H)} v(f)  =  Σ_{e ∈ E(G)} z(e) - Σ_{f ∈ E(H)} dist_{G,z}(s,t)  <  0

=>  (3) is violated. □

The distance criterion is stronger than the cut criterion

Example 4.6 does not fulfill the distance criterion (figure: graphs G and H of Example 4.6):
set z(e) = 1 for all e in G => Σ_{f = {s,t} ∈ E(H)} dist_{G,z}(s,t) = 8, while Σ_{e ∈ E(G)} z(e) = |E(G)| < 8.

The distance criterion is not sufficient for the existence of a solution of the disjoint path problem

(Figure: an instance of the disjoint path problem with graphs G and H, and a fractional cycle packing that takes each of the cycles shown with coefficient 1/2.)

So the distance criterion holds because of Theorem 4.7, but there is no solution for the disjoint path problem.
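For a concrete weight function z, the distance criterion is easy to test; exhibiting one z with Σ_e z(e) < Σ_f dist_{G,z}(s,t) certifies that no (fractional) solution exists. A sketch (my own illustration, not from the lecture; function names are made up, and only a single given z is tested, not all z ≥ 0):

import heapq

def dist(adj, s, t):
    # Dijkstra for non-negative weights; adj[u] = list of (v, weight) of undirected G
    d = {s: 0.0}
    heap = [(0.0, s)]
    while heap:
        du, u = heapq.heappop(heap)
        if u == t:
            return du
        if du > d.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if du + w < d.get(v, float("inf")):
                d[v] = du + w
                heapq.heappush(heap, (du + w, v))
    return float("inf")

def violates_distance_criterion(G_edges, H_pairs, z):
    # G_edges: list of undirected edges (u, v); z: dict edge -> weight >= 0;
    # H_pairs: list of pairs (s_i, t_i).  Returns True if this particular z certifies
    # a violation, i.e.  sum_e z(e) < sum_i dist_{G,z}(s_i, t_i).
    adj = {}
    for (u, v) in G_edges:
        w = z[(u, v)]
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    lhs = sum(z[e] for e in G_edges)
    rhs = sum(dist(adj, s, t) for (s, t) in H_pairs)
    return lhs < rhs

Calling it with z(e) = 1 for all e of G reproduces the check made for Example 4.6 above.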

4.5 Dual information in the tableau

How do we get dual information from the optimal primal tableau?

Suppose w.l.o.g. that the initial tableau (possibly with artificial variables from Phase I) has columns 1, ..., m as basic columns and that the tableau is transformed w.r.t. this basis. Then the following properties hold in the optimal tableau with basis B:

- rows 1, ..., m are obtained from the initial tableau by multiplying it with B^(-1) from the left

- the reduced costs are obtained as

        c̄_j  =  c_j - π^T A_j,

  where π is an optimal solution of the dual problem (proof of the Strong Duality Theorem)

- in columns 1, ..., m (which are unit vectors in the initial tableau) we get

        c̄_j  =  c_j - π^T e_j  =  c_j - π_j

Hence an optimal dual solution is obtained from the optimal tableau of the primal as

(4.12)   π_j  =  c_j - c̄_j,    j = 1, ..., m

Observe: this holds for the dual problem of the initial tableau (and not for dual versions of other, equivalent primal formulations).

Moreover, the first m columns of the optimal tableau contain B^(-1) = B^(-1) I.    (4.13)
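A one-line sketch of formula (4.12) (my own illustration, not from the lecture; the function name is made up, and the sample numbers follow the example below):

import numpy as np

def dual_from_optimal_tableau(c_first_m, reduced_costs_first_m):
    # pi_j = c_j - cbar_j for the columns j = 1, ..., m that were basic
    # in the *initial* tableau (formula (4.12) above).
    return np.asarray(c_first_m, float) - np.asarray(reduced_costs_first_m, float)

# e.g. with artificial variables (original cost 0) whose reduced costs
# in the optimal tableau are 5/2, -1, -1:
print(dual_from_optimal_tableau([0, 0, 0], [5/2, -1, -1]))   # [-2.5  1.  1.]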

Example (Example for the Two-Phase-Method continued)

(Initial tableau and optimal tableau of the example; not reproduced here.)

(4.12) gives

        π_1 = 0 - 5/2 = -5/2,    π_2 = 0 - (-1) = 1,    π_3 = 0 - (-1) = 1

for the values of the dual variables w.r.t. the dual problem obtained from the primal formulation with artificial variables x_i^a.

4.9 Example (Example for the shortest path problem continued)

Solving the primal problem

The initial tableau has no identity matrix, but 2 unit vectors => add one artificial variable in Phase I.

(Phase-I tableau with the artificial variable; not reproduced here.)

Transform the cost coefficients of ξ and z to reduced form (they must become 0 for the basic variables).

(Tableau with the reduced cost rows for ξ and z; not reproduced here.)

Pivot step

(Tableau after the pivot step; not reproduced here.)

=> ξ = 0 and x^a is a non-basic variable => optimal w.r.t. z.

Primal information (visualized in the graph)

(Figure: the optimal basic solution in the graph; the edges e_2 and e_5 of the shortest path s -> b -> t carry flow 1.)

The primal optimal solution displays the edges on the shortest path.

Dual information (obtained from the primal optimal tableau and displayed in the graph)

(The values π_i are obtained via (4.12) from the basic columns of the initial tableau.)

π_t = 0, since the row of t is not in the primal LP.

(Figure: the dual solution in the graph: π_s = 3, π_a = 3, π_b = 1, π_t = 0.)

The dual solution displays the shortest distance from a vertex to t.

4.6 The dual Simplex algorithm

Goal: use the primal tableau to solve the dual LP.

Characteristics of the dual LP

The primal optimality condition c̄ ≥ 0 becomes a dual constraint
=> the primal simplex algorithm maintains a primal feasible solution and fulfills the dual constraint c̄ ≥ 0 only at termination, when the optimum is reached.

This suggests the following characteristics for the dual simplex algorithm:
- generate a sequence of dual feasible solutions
- establish primal feasibility only at termination, when the optimum is reached

Deriving the operations in the tableau

Tableau X = (x_ij) with basic solution:

(Example tableau: dual feasible, i.e. c̄ ≥ 0, but primal infeasible, i.e. x_B ≱ 0.)

Choose a pivot row r (instead of a pivot column) with x_{r0} < 0 (i.e., an infeasible entry x_{r0} < 0 in the primal basic solution).

Choose a pivot column in row r by considering entries x_{rj} < 0 (so as to obtain x_{r0} ≥ 0 after the pivot).

Pivoting with pivot element x_{rs} < 0 changes the cost row to

        x_{0j}  :=  x_{0j} - (x_{0s} / x_{rs}) x_{rj},    j = 0, 1, ..., n

Schematically:

                 column j      column s
        row 0:    x_{0j}        x_{0s}   (must become 0)
        row r:    x_{rj}        x_{rs}   (must become 1)

To stay feasible in the dual, x_{0j} ≥ 0 must hold for all j after the pivot, i.e.

        x_{0j} - (x_{0s} / x_{rs}) x_{rj}  ≥  0    for all j with x_{rj} < 0

=> choose the pivot column s in such a way that

        x_{0s} / |x_{rs}|  =  min { x_{0j} / |x_{rj}|  :  x_{rj} < 0,  j = 1, ..., n }

Observe the symmetry with the primal simplex algorithm; in particular:

        all x_{rj} ≥ 0 in row r  =>  the dual LP has an unbounded objective function (the primal LP is infeasible)
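The two rules can be put together into one pivot step. A sketch (my own illustration, not from the lecture; the tableau layout (cost row 0, right-hand side in column 0) and the function name are assumptions):

import numpy as np

def dual_simplex_pivot(T):
    # One pivot of the dual simplex method on a full tableau T, following the rules above.
    # Row 0 = cost row (dual feasible: T[0,1:] >= 0), column 0 = right-hand side.
    # Returns the updated tableau, or None if the tableau is already primal feasible.
    rhs = T[1:, 0]
    if np.all(rhs >= 0):
        return None                                   # primal feasible -> optimal
    r = 1 + int(np.argmin(rhs))                       # pivot row: a row with x_r0 < 0
    if np.all(T[r, 1:] >= 0):
        raise ValueError("dual unbounded / primal infeasible")
    cand = [j for j in range(1, T.shape[1]) if T[r, j] < 0]
    s = min(cand, key=lambda j: T[0, j] / abs(T[r, j]))   # dual ratio test
    T = T.astype(float)
    T[r] /= T[r, s]                                   # pivot element becomes 1
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, s] * T[r]                    # eliminate column s in the other rows
    return T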

4.10 Theorem (Interpretation of the dual simplex algorithm)

The dual simplex algorithm is the primal simplex algorithm applied to the primal formulation of the dual LP.

Proof: Check! □

4.11 Example (Example for the shortest path problem continued)

(Initial tableau, not yet transformed w.r.t. a basis, and the graph with edge costs 1, 2, 2, 3, 1; not reproduced here.)

Choose B = { 2, 4, 3 } as basis, transform the tableau w.r.t. B, and display the basic solution in the graph.

(Tableau w.r.t. basis B = { 2, 4, 3 }: dual feasible, but primal infeasible.)

The basic solution corresponds to the s,t-cut X = { s, a }.
ADM I: s,t-cuts are "dual structures" of s,t-flows. This is confirmed here by LP duality.

Choosing the pivot element

r = 3 is the pivot row. Choosing the pivot column:

        x_{0s} / |x_{rs}|  =  min { x_{0j} / |x_{rj}|  :  x_{rj} < 0 },

where the candidate columns are j = 1 and j = 5 and the minimum is attained for j = 5
=> s = 5 is the pivot column.

(Tableau before and after the pivot operation; the resulting tableau is primal and dual feasible => optimal. The associated basic solution is the path s -> b -> t via e_2, e_5.)

The dual optimal solution can be obtained from the inverse of the optimal basis as π^T = c_B^T B^(-1) (Duality Theorem). The optimal basis is B = { 2, 4, 5 } with (rows s, a, b; basic columns e_2, e_4, e_5)

        B^(-1)  =   1  0  0
                    0  1  0
                    1  0  1

So

        π^T  =  c_B^T B^(-1)  =  (2, 3, 1) B^(-1)  =  (3, 3, 1),

i.e. π_s = 3, π_a = 3, π_b = 1 (and π_t = 0), the shortest distances to t displayed in the graph.

Observe: in this case we could not obtain π and B^(-1) directly from the optimal tableau, since the dual LP is not the one constructed from the initial tableau with basis { 2, 4, 3 }, but the dual LP of Example 4.9.

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004

Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 Linear Programming Duality P&S Chapter 3 Last Revised Nov 1, 2004 1 In this section we lean about duality, which is another way to approach linear programming. In particular, we will see: How to define

More information

F 1 F 2 Daily Requirement Cost N N N

F 1 F 2 Daily Requirement Cost N N N Chapter 5 DUALITY 5. The Dual Problems Every linear programming problem has associated with it another linear programming problem and that the two problems have such a close relationship that whenever

More information

Lectures 6, 7 and part of 8

Lectures 6, 7 and part of 8 Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,

More information

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information

COT 6936: Topics in Algorithms! Giri Narasimhan. ECS 254A / EC 2443; Phone: x3748

COT 6936: Topics in Algorithms! Giri Narasimhan. ECS 254A / EC 2443; Phone: x3748 COT 6936: Topics in Algorithms! Giri Narasimhan ECS 254A / EC 2443; Phone: x3748 giri@cs.fiu.edu https://moodle.cis.fiu.edu/v2.1/course/view.php?id=612 Gaussian Elimination! Solving a system of simultaneous

More information

Linear Programming Duality

Linear Programming Duality Summer 2011 Optimization I Lecture 8 1 Duality recap Linear Programming Duality We motivated the dual of a linear program by thinking about the best possible lower bound on the optimal value we can achieve

More information

Chapter 1 Linear Programming. Paragraph 5 Duality

Chapter 1 Linear Programming. Paragraph 5 Duality Chapter 1 Linear Programming Paragraph 5 Duality What we did so far We developed the 2-Phase Simplex Algorithm: Hop (reasonably) from basic solution (bs) to bs until you find a basic feasible solution

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Formulation of the Dual Problem Primal-Dual Relationship Economic Interpretation

More information

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs

- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs LP-Duality ( Approximation Algorithms by V. Vazirani, Chapter 12) - Well-characterized problems, min-max relations, approximate certificates - LP problems in the standard form, primal and dual linear programs

More information

BBM402-Lecture 20: LP Duality

BBM402-Lecture 20: LP Duality BBM402-Lecture 20: LP Duality Lecturer: Lale Özkahya Resources for the presentation: https://courses.engr.illinois.edu/cs473/fa2016/lectures.html An easy LP? which is compact form for max cx subject to

More information

Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502)

Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502) Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik Combinatorial Optimization (MA 4502) Dr. Michael Ritter Problem Sheet 1 Homework Problems Exercise

More information

Farkas Lemma, Dual Simplex and Sensitivity Analysis

Farkas Lemma, Dual Simplex and Sensitivity Analysis Summer 2011 Optimization I Lecture 10 Farkas Lemma, Dual Simplex and Sensitivity Analysis 1 Farkas Lemma Theorem 1. Let A R m n, b R m. Then exactly one of the following two alternatives is true: (i) x

More information

Review Solutions, Exam 2, Operations Research

Review Solutions, Exam 2, Operations Research Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To

More information

Duality Theory, Optimality Conditions

Duality Theory, Optimality Conditions 5.1 Duality Theory, Optimality Conditions Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor We only consider single objective LPs here. Concept of duality not defined for multiobjective LPs. Every

More information

4.6 Linear Programming duality

4.6 Linear Programming duality 4.6 Linear Programming duality To any minimization (maximization) LP we can associate a closely related maximization (minimization) LP Different spaces and objective functions but in general same optimal

More information

Duality of LPs and Applications

Duality of LPs and Applications Lecture 6 Duality of LPs and Applications Last lecture we introduced duality of linear programs. We saw how to form duals, and proved both the weak and strong duality theorems. In this lecture we will

More information

The Dual Simplex Algorithm

The Dual Simplex Algorithm p. 1 The Dual Simplex Algorithm Primal optimal (dual feasible) and primal feasible (dual optimal) bases The dual simplex tableau, dual optimality and the dual pivot rules Classical applications of linear

More information

IE 5531: Engineering Optimization I

IE 5531: Engineering Optimization I IE 5531: Engineering Optimization I Lecture 7: Duality and applications Prof. John Gunnar Carlsson September 29, 2010 Prof. John Gunnar Carlsson IE 5531: Engineering Optimization I September 29, 2010 1

More information

UNIT-4 Chapter6 Linear Programming

UNIT-4 Chapter6 Linear Programming UNIT-4 Chapter6 Linear Programming Linear Programming 6.1 Introduction Operations Research is a scientific approach to problem solving for executive management. It came into existence in England during

More information

The Primal-Dual Algorithm P&S Chapter 5 Last Revised October 30, 2006

The Primal-Dual Algorithm P&S Chapter 5 Last Revised October 30, 2006 The Primal-Dual Algorithm P&S Chapter 5 Last Revised October 30, 2006 1 Simplex solves LP by starting at a Basic Feasible Solution (BFS) and moving from BFS to BFS, always improving the objective function,

More information

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1)

Chapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1) Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3

More information

Chapter 1: Linear Programming

Chapter 1: Linear Programming Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of

More information

II. Analysis of Linear Programming Solutions

II. Analysis of Linear Programming Solutions Optimization Methods Draft of August 26, 2005 II. Analysis of Linear Programming Solutions Robert Fourer Department of Industrial Engineering and Management Sciences Northwestern University Evanston, Illinois

More information

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear

More information

Slack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0

Slack Variable. Max Z= 3x 1 + 4x 2 + 5X 3. Subject to: X 1 + X 2 + X x 1 + 4x 2 + X X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Simplex Method Slack Variable Max Z= 3x 1 + 4x 2 + 5X 3 Subject to: X 1 + X 2 + X 3 20 3x 1 + 4x 2 + X 3 15 2X 1 + X 2 + 4X 3 10 X 1 0, X 2 0, X 3 0 Standard Form Max Z= 3x 1 +4x 2 +5X 3 + 0S 1 + 0S 2

More information

4. Duality and Sensitivity

4. Duality and Sensitivity 4. Duality and Sensitivity For every instance of an LP, there is an associated LP known as the dual problem. The original problem is known as the primal problem. There are two de nitions of the dual pair

More information

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta Chapter 4 Linear Programming: The Simplex Method An Overview of the Simplex Method Standard Form Tableau Form Setting Up the Initial Simplex Tableau Improving the Solution Calculating the Next Tableau

More information

Discrete Optimization

Discrete Optimization Prof. Friedrich Eisenbrand Martin Niemeier Due Date: April 15, 2010 Discussions: March 25, April 01 Discrete Optimization Spring 2010 s 3 You can hand in written solutions for up to two of the exercises

More information

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P)

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P) Lecture 10: Linear programming duality Michael Patriksson 19 February 2004 0-0 The dual of the LP in standard form minimize z = c T x (P) subject to Ax = b, x 0 n, and maximize w = b T y (D) subject to

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

1 Review Session. 1.1 Lecture 2

1 Review Session. 1.1 Lecture 2 1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions

More information

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane

More information

Chap6 Duality Theory and Sensitivity Analysis

Chap6 Duality Theory and Sensitivity Analysis Chap6 Duality Theory and Sensitivity Analysis The rationale of duality theory Max 4x 1 + x 2 + 5x 3 + 3x 4 S.T. x 1 x 2 x 3 + 3x 4 1 5x 1 + x 2 + 3x 3 + 8x 4 55 x 1 + 2x 2 + 3x 3 5x 4 3 x 1 ~x 4 0 If we

More information

THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I

THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS. Operations Research I LN/MATH2901/CKC/MS/2008-09 THE UNIVERSITY OF HONG KONG DEPARTMENT OF MATHEMATICS Operations Research I Definition (Linear Programming) A linear programming (LP) problem is characterized by linear functions

More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information

March 13, Duality 3

March 13, Duality 3 15.53 March 13, 27 Duality 3 There are concepts much more difficult to grasp than duality in linear programming. -- Jim Orlin The concept [of nonduality], often described in English as "nondualism," is

More information

Lecture Note 18: Duality

Lecture Note 18: Duality MATH 5330: Computational Methods of Linear Algebra 1 The Dual Problems Lecture Note 18: Duality Xianyi Zeng Department of Mathematical Sciences, UTEP The concept duality, just like accuracy and stability,

More information

Special cases of linear programming

Special cases of linear programming Special cases of linear programming Infeasible solution Multiple solution (infinitely many solution) Unbounded solution Degenerated solution Notes on the Simplex tableau 1. The intersection of any basic

More information

Introduction to linear programming using LEGO.

Introduction to linear programming using LEGO. Introduction to linear programming using LEGO. 1 The manufacturing problem. A manufacturer produces two pieces of furniture, tables and chairs. The production of the furniture requires the use of two different

More information

Simplex Algorithm Using Canonical Tableaus

Simplex Algorithm Using Canonical Tableaus 41 Simplex Algorithm Using Canonical Tableaus Consider LP in standard form: Min z = cx + α subject to Ax = b where A m n has rank m and α is a constant In tableau form we record it as below Original Tableau

More information

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination

CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination CO350 Linear Programming Chapter 8: Degeneracy and Finite Termination 27th June 2005 Chapter 8: Finite Termination 1 The perturbation method Recap max c T x (P ) s.t. Ax = b x 0 Assumption: B is a feasible

More information

Advanced Linear Programming: The Exercises

Advanced Linear Programming: The Exercises Advanced Linear Programming: The Exercises The answers are sometimes not written out completely. 1.5 a) min c T x + d T y Ax + By b y = x (1) First reformulation, using z smallest number satisfying x z

More information

Lecture 10: Linear programming duality and sensitivity 0-0

Lecture 10: Linear programming duality and sensitivity 0-0 Lecture 10: Linear programming duality and sensitivity 0-0 The canonical primal dual pair 1 A R m n, b R m, and c R n maximize z = c T x (1) subject to Ax b, x 0 n and minimize w = b T y (2) subject to

More information

Duality in LPP Every LPP called the primal is associated with another LPP called dual. Either of the problems is primal with the other one as dual. The optimal solution of either problem reveals the information

More information

"SYMMETRIC" PRIMAL-DUAL PAIR

SYMMETRIC PRIMAL-DUAL PAIR "SYMMETRIC" PRIMAL-DUAL PAIR PRIMAL Minimize cx DUAL Maximize y T b st Ax b st A T y c T x y Here c 1 n, x n 1, b m 1, A m n, y m 1, WITH THE PRIMAL IN STANDARD FORM... Minimize cx Maximize y T b st Ax

More information

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5,

Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method. Reading: Sections 2.6.4, 3.5, Lecture 4: Algebra, Geometry, and Complexity of the Simplex Method Reading: Sections 2.6.4, 3.5, 10.2 10.5 1 Summary of the Phase I/Phase II Simplex Method We write a typical simplex tableau as z x 1 x

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur END3033 Operations Research I Sensitivity Analysis & Duality to accompany Operations Research: Applications and Algorithms Fatih Cavdur Introduction Consider the following problem where x 1 and x 2 corresponds

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

OPERATIONS RESEARCH. Linear Programming Problem

OPERATIONS RESEARCH. Linear Programming Problem OPERATIONS RESEARCH Chapter 1 Linear Programming Problem Prof. Bibhas C. Giri Department of Mathematics Jadavpur University Kolkata, India Email: bcgiri.jumath@gmail.com MODULE - 2: Simplex Method for

More information

Dr. S. Bourazza Math-473 Jazan University Department of Mathematics

Dr. S. Bourazza Math-473 Jazan University Department of Mathematics Dr. Said Bourazza Department of Mathematics Jazan University 1 P a g e Contents: Chapter 0: Modelization 3 Chapter1: Graphical Methods 7 Chapter2: Simplex method 13 Chapter3: Duality 36 Chapter4: Transportation

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis

MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis MVE165/MMG631 Linear and integer optimization with applications Lecture 5 Linear programming duality and sensitivity analysis Ann-Brith Strömberg 2017 03 29 Lecture 4 Linear and integer optimization with

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

OPERATIONS RESEARCH. Michał Kulej. Business Information Systems

OPERATIONS RESEARCH. Michał Kulej. Business Information Systems OPERATIONS RESEARCH Michał Kulej Business Information Systems The development of the potential and academic programmes of Wrocław University of Technology Project co-financed by European Union within European

More information

9.1 Linear Programs in canonical form

9.1 Linear Programs in canonical form 9.1 Linear Programs in canonical form LP in standard form: max (LP) s.t. where b i R, i = 1,..., m z = j c jx j j a ijx j b i i = 1,..., m x j 0 j = 1,..., n But the Simplex method works only on systems

More information

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2016 Combinatorial Problems as Linear and Convex Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source

More information

Linear Programming, Lecture 4

Linear Programming, Lecture 4 Linear Programming, Lecture 4 Corbett Redden October 3, 2016 Simplex Form Conventions Examples Simplex Method To run the simplex method, we start from a Linear Program (LP) in the following standard simplex

More information

Linear Programming: Chapter 5 Duality

Linear Programming: Chapter 5 Duality Linear Programming: Chapter 5 Duality Robert J. Vanderbei September 30, 2010 Slides last edited on October 5, 2010 Operations Research and Financial Engineering Princeton University Princeton, NJ 08544

More information

Linear and Combinatorial Optimization

Linear and Combinatorial Optimization Linear and Combinatorial Optimization The dual of an LP-problem. Connections between primal and dual. Duality theorems and complementary slack. Philipp Birken (Ctr. for the Math. Sc.) Lecture 3: Duality

More information

The Simplex Algorithm

The Simplex Algorithm 8.433 Combinatorial Optimization The Simplex Algorithm October 6, 8 Lecturer: Santosh Vempala We proved the following: Lemma (Farkas). Let A R m n, b R m. Exactly one of the following conditions is true:.

More information

Chapter 3, Operations Research (OR)

Chapter 3, Operations Research (OR) Chapter 3, Operations Research (OR) Kent Andersen February 7, 2007 1 Linear Programs (continued) In the last chapter, we introduced the general form of a linear program, which we denote (P) Minimize Z

More information

Minimum cost transportation problem

Minimum cost transportation problem Minimum cost transportation problem Complements of Operations Research Giovanni Righini Università degli Studi di Milano Definitions The minimum cost transportation problem is a special case of the minimum

More information

CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source Shortest

More information

Summary of the simplex method

Summary of the simplex method MVE165/MMG630, The simplex method; degeneracy; unbounded solutions; infeasibility; starting solutions; duality; interpretation Ann-Brith Strömberg 2012 03 16 Summary of the simplex method Optimality condition:

More information

Discrete Optimization 23

Discrete Optimization 23 Discrete Optimization 23 2 Total Unimodularity (TU) and Its Applications In this section we will discuss the total unimodularity theory and its applications to flows in networks. 2.1 Total Unimodularity:

More information

Lecture #21. c T x Ax b. maximize subject to

Lecture #21. c T x Ax b. maximize subject to COMPSCI 330: Design and Analysis of Algorithms 11/11/2014 Lecture #21 Lecturer: Debmalya Panigrahi Scribe: Samuel Haney 1 Overview In this lecture, we discuss linear programming. We first show that the

More information

Linear and Integer Programming - ideas

Linear and Integer Programming - ideas Linear and Integer Programming - ideas Paweł Zieliński Institute of Mathematics and Computer Science, Wrocław University of Technology, Poland http://www.im.pwr.wroc.pl/ pziel/ Toulouse, France 2012 Literature

More information

3. Duality: What is duality? Why does it matter? Sensitivity through duality.

3. Duality: What is duality? Why does it matter? Sensitivity through duality. 1 Overview of lecture (10/5/10) 1. Review Simplex Method 2. Sensitivity Analysis: How does solution change as parameters change? How much is the optimal solution effected by changing A, b, or c? How much

More information

Optimisation and Operations Research

Optimisation and Operations Research Optimisation and Operations Research Lecture 9: Duality and Complementary Slackness Matthew Roughan http://www.maths.adelaide.edu.au/matthew.roughan/ Lecture_notes/OORII/

More information

Chapter 5 Linear Programming (LP)

Chapter 5 Linear Programming (LP) Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider

More information

The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis:

The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Sensitivity analysis The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Changing the coefficient of a nonbasic variable

More information

Systems Analysis in Construction

Systems Analysis in Construction Systems Analysis in Construction CB312 Construction & Building Engineering Department- AASTMT by A h m e d E l h a k e e m & M o h a m e d S a i e d 3. Linear Programming Optimization Simplex Method 135

More information

(P ) Minimize 4x 1 + 6x 2 + 5x 3 s.t. 2x 1 3x 3 3 3x 2 2x 3 6

(P ) Minimize 4x 1 + 6x 2 + 5x 3 s.t. 2x 1 3x 3 3 3x 2 2x 3 6 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. Problem 1 Consider

More information

The dual simplex method with bounds

The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

More information

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear

More information

Linear programming. Starch Proteins Vitamins Cost ($/kg) G G Nutrient content and cost per kg of food.

Linear programming. Starch Proteins Vitamins Cost ($/kg) G G Nutrient content and cost per kg of food. 18.310 lecture notes September 2, 2013 Linear programming Lecturer: Michel Goemans 1 Basics Linear Programming deals with the problem of optimizing a linear objective function subject to linear equality

More information

Cutting Plane Methods II

Cutting Plane Methods II 6.859/5.083 Integer Programming and Combinatorial Optimization Fall 2009 Cutting Plane Methods II Gomory-Chvátal cuts Reminder P = {x R n : Ax b} with A Z m n, b Z m. For λ [0, ) m such that λ A Z n, (λ

More information

Multicommodity Flows and Column Generation

Multicommodity Flows and Column Generation Lecture Notes Multicommodity Flows and Column Generation Marc Pfetsch Zuse Institute Berlin pfetsch@zib.de last change: 2/8/2006 Technische Universität Berlin Fakultät II, Institut für Mathematik WS 2006/07

More information

In Chapters 3 and 4 we introduced linear programming

In Chapters 3 and 4 we introduced linear programming SUPPLEMENT The Simplex Method CD3 In Chapters 3 and 4 we introduced linear programming and showed how models with two variables can be solved graphically. We relied on computer programs (WINQSB, Excel,

More information

CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017

CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017 CSC373: Algorithm Design, Analysis and Complexity Fall 2017 DENIS PANKRATOV NOVEMBER 1, 2017 Linear Function f: R n R is linear if it can be written as f x = a T x for some a R n Example: f x 1, x 2 =

More information

2. Linear Programming Problem

2. Linear Programming Problem . Linear Programming Problem. Introduction to Linear Programming Problem (LPP). When to apply LPP or Requirement for a LPP.3 General form of LPP. Assumptions in LPP. Applications of Linear Programming.6

More information

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14

min 4x 1 5x 2 + 3x 3 s.t. x 1 + 2x 2 + x 3 = 10 x 1 x 2 6 x 1 + 3x 2 + x 3 14 The exam is three hours long and consists of 4 exercises. The exam is graded on a scale 0-25 points, and the points assigned to each question are indicated in parenthesis within the text. If necessary,

More information

TRANSPORTATION PROBLEMS

TRANSPORTATION PROBLEMS Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations

More information

Lecture 5 January 16, 2013

Lecture 5 January 16, 2013 UBC CPSC 536N: Sparse Approximations Winter 2013 Prof. Nick Harvey Lecture 5 January 16, 2013 Scribe: Samira Samadi 1 Combinatorial IPs 1.1 Mathematical programs { min c Linear Program (LP): T x s.t. a

More information

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM

OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM OPTIMISATION 3: NOTES ON THE SIMPLEX ALGORITHM Abstract These notes give a summary of the essential ideas and results It is not a complete account; see Winston Chapters 4, 5 and 6 The conventions and notation

More information

Introduction to Linear and Combinatorial Optimization (ADM I)

Introduction to Linear and Combinatorial Optimization (ADM I) Introduction to Linear and Combinatorial Optimization (ADM I) Rolf Möhring based on the 20011/12 course by Martin Skutella TU Berlin WS 2013/14 1 General Remarks new flavor of ADM I introduce linear and

More information

Dual Basic Solutions. Observation 5.7. Consider LP in standard form with A 2 R m n,rank(a) =m, and dual LP:

Dual Basic Solutions. Observation 5.7. Consider LP in standard form with A 2 R m n,rank(a) =m, and dual LP: Dual Basic Solutions Consider LP in standard form with A 2 R m n,rank(a) =m, and dual LP: Observation 5.7. AbasisB yields min c T x max p T b s.t. A x = b s.t. p T A apple c T x 0 aprimalbasicsolutiongivenbyx

More information

Relation of Pure Minimum Cost Flow Model to Linear Programming

Relation of Pure Minimum Cost Flow Model to Linear Programming Appendix A Page 1 Relation of Pure Minimum Cost Flow Model to Linear Programming The Network Model The network pure minimum cost flow model has m nodes. The external flows given by the vector b with m

More information

Sensitivity Analysis and Duality in LP

Sensitivity Analysis and Duality in LP Sensitivity Analysis and Duality in LP Xiaoxi Li EMS & IAS, Wuhan University Oct. 13th, 2016 (week vi) Operations Research (Li, X.) Sensitivity Analysis and Duality in LP Oct. 13th, 2016 (week vi) 1 /

More information

Integer Programming, Part 1

Integer Programming, Part 1 Integer Programming, Part 1 Rudi Pendavingh Technische Universiteit Eindhoven May 18, 2016 Rudi Pendavingh (TU/e) Integer Programming, Part 1 May 18, 2016 1 / 37 Linear Inequalities and Polyhedra Farkas

More information

Algorithmic Game Theory and Applications. Lecture 7: The LP Duality Theorem

Algorithmic Game Theory and Applications. Lecture 7: The LP Duality Theorem Algorithmic Game Theory and Applications Lecture 7: The LP Duality Theorem Kousha Etessami recall LP s in Primal Form 1 Maximize c 1 x 1 + c 2 x 2 +... + c n x n a 1,1 x 1 + a 1,2 x 2 +... + a 1,n x n

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Mathematical Programs Linear Program (LP)

Mathematical Programs Linear Program (LP) Mathematical Programs Linear Program (LP) Integer Program (IP) Can be efficiently solved e.g., by Ellipsoid Method Cannot be efficiently solved Cannot be efficiently solved assuming P NP Combinatorial

More information

Integer Linear Programming (ILP)

Integer Linear Programming (ILP) Integer Linear Programming (ILP) Zdeněk Hanzálek, Přemysl Šůcha hanzalek@fel.cvut.cz CTU in Prague March 8, 2017 Z. Hanzálek (CTU) Integer Linear Programming (ILP) March 8, 2017 1 / 43 Table of contents

More information

AM 121: Intro to Optimization

AM 121: Intro to Optimization AM 121: Intro to Optimization Models and Methods Lecture 6: Phase I, degeneracy, smallest subscript rule. Yiling Chen SEAS Lesson Plan Phase 1 (initialization) Degeneracy and cycling Smallest subscript

More information

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20.

21. Solve the LP given in Exercise 19 using the big-m method discussed in Exercise 20. Extra Problems for Chapter 3. Linear Programming Methods 20. (Big-M Method) An alternative to the two-phase method of finding an initial basic feasible solution by minimizing the sum of the artificial

More information

Maximum flow problem

Maximum flow problem Maximum flow problem 7000 Network flows Network Directed graph G = (V, E) Source node s V, sink node t V Edge capacities: cap : E R 0 Flow: f : E R 0 satisfying 1. Flow conservation constraints e:target(e)=v

More information

4. The Dual Simplex Method

4. The Dual Simplex Method 4. The Dual Simplex Method Javier Larrosa Albert Oliveras Enric Rodríguez-Carbonell Problem Solving and Constraint Programming (RPAR) Session 4 p.1/34 Basic Idea (1) Algorithm as explained so far known

More information

Planning and Optimization

Planning and Optimization Planning and Optimization C23. Linear & Integer Programming Malte Helmert and Gabriele Röger Universität Basel December 1, 2016 Examples Linear Program: Example Maximization Problem Example maximize 2x

More information