Linear Programming

Murti V. Salapaka
Electrical Engineering Department, University of Minnesota, Twin Cities
murtis@umn.edu

September 4, 2012

The standard Linear Programming (SLP) problem:

$$\begin{aligned}
\underset{x \in \mathbb{R}^n}{\text{minimize}} \quad & c^T x = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n \\
\text{subject to} \quad & a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1 \\
& a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2 \\
& \qquad \vdots \\
& a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m \\
& x_i \geq 0 \text{ for all } i = 1, \ldots, n,
\end{aligned}$$

that is, minimize $c^T x$ subject to $Ax = b$ and $x \geq 0$.
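The notes themselves contain no code; as a quick numerical companion, the following sketch solves a small standard-form LP with scipy.optimize.linprog (HiGHS backend). The problem data c, A, b below is a made-up example, not one taken from the notes.

```python
# Minimal numerical check of the standard form: min c^T x s.t. Ax = b, x >= 0.
# The data here is illustrative only.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0, 0.0, 0.0])         # cost vector c
A = np.array([[1.0, 1.0, 1.0, 0.0],          # equality constraint matrix A (m x n)
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])                     # right-hand side b

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * len(c), method="highs")
print(res.x, res.fun)                        # optimal x and optimal value c^T x
```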

Define the feasible set of the SLP as

$$\Lambda := \{x \in \mathbb{R}^n \mid Ax = b,\ x \geq 0\}.$$

The SLP is given by $\min\{c^T x \mid x \in \Lambda\}$.

Theorem 1. Consider the following problems

$$\mu = \min\{\bar{c}^T z \mid A_1 z \leq b_1,\ A_2 z = b_2 \text{ and } z \geq 0\} \qquad (1)$$

and

$$\nu = \min\{c^T x \mid Ax = b,\ x \geq 0\} \qquad (2)$$

where

$$c = \begin{bmatrix} \bar{c} \\ 0 \end{bmatrix}, \quad A = \begin{bmatrix} A_1 & I \\ A_2 & 0 \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.$$

Then $\mu = \nu$. If the optimal solution of (1) is $z^o$ then an optimal solution $x^o$ of (2) is given by

$$x^o = \begin{bmatrix} z^o \\ y^o \end{bmatrix}$$

where $y^o \geq 0$, and vice versa.

Proof: Note that as $x^o$ is an optimal solution of (2) it follows that $\nu = c^T x^o$, $Ax^o = b$ and $x^o \geq 0$. Partition $x^o$ appropriately as

$$x^o = \begin{bmatrix} z_1 \\ y^o \end{bmatrix}$$

where $z_1$ has the same dimension as $\bar{c}$. Then it follows that

$$z_1 \geq 0, \quad A_1 z_1 + y^o = b_1, \quad y^o \geq 0 \quad \text{and} \quad A_2 z_1 = b_2.$$

This implies that $z_1 \geq 0$, $A_1 z_1 \leq b_1$, $A_2 z_1 = b_2$, and thus $z_1$ is a feasible element for the optimization problem of (1). Thus it follows that

$$\nu = c^T x^o = \bar{c}^T z_1 \geq \min\{\bar{c}^T z \mid A_1 z \leq b_1,\ A_2 z = b_2 \text{ and } z \geq 0\} = \mu.$$

Note that as $z^o$ is an optimal solution of (1) it follows that $\mu = \bar{c}^T z^o$, $A_1 z^o \leq b_1$, $A_2 z^o = b_2$ and $z^o \geq 0$. Define $y_1 := b_1 - A_1 z^o \geq 0$.

Define

$$x_1 = \begin{bmatrix} z^o \\ y_1 \end{bmatrix}.$$

Then it follows that $x_1 \geq 0$ and $Ax_1 = b$, and thus $x_1$ is a feasible element for the optimization problem of (2). Thus it follows that

$$\mu = \bar{c}^T z^o = c^T x_1 \geq \min\{c^T x \mid Ax = b \text{ and } x \geq 0\} = \nu = \bar{c}^T z_1 \geq \mu.$$

This proves $\mu = \nu$. Also we have shown that if the optimal solution of (1) is $z^o$ then an optimal solution $x^o$ of (2) is given by

$$x^o = \begin{bmatrix} z^o \\ y^o \end{bmatrix}$$

where $y^o \geq 0$, and vice versa.
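A small sketch of the Theorem 1 construction in code (not part of the original notes): slack variables are appended so that the inequality block $A_1 z \leq b_1$ becomes an equality block. The function name to_standard_form is my own.

```python
# Build (c, A, b) of problem (2) from the data (c_bar, A1, b1, A2, b2) of problem (1),
# following Theorem 1: c = [c_bar; 0], A = [[A1, I], [A2, 0]], b = [b1; b2].
import numpy as np

def to_standard_form(c_bar, A1, b1, A2, b2):
    m1 = A1.shape[0]
    m2 = A2.shape[0]
    c = np.concatenate([c_bar, np.zeros(m1)])
    A = np.block([[A1, np.eye(m1)],
                  [A2, np.zeros((m2, m1))]])
    b = np.concatenate([b1, b2])
    return c, A, b
```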

Theorem 2. Consider the following problems

$$\mu = \min\left\{\begin{pmatrix} c_1^T & c_2^T \end{pmatrix} \begin{pmatrix} z \\ y \end{pmatrix} \,\middle|\, A_1 z + A_2 y = b,\ z \geq 0\right\} \qquad (3)$$

and

$$\nu = \min\{c^T x \mid Ax = b,\ x \geq 0\} \qquad (4)$$

where

$$c = \begin{bmatrix} c_1 \\ c_2 \\ -c_2 \end{bmatrix}, \quad A = \begin{bmatrix} A_1 & A_2 & -A_2 \end{bmatrix}.$$

Then $\mu = \nu$. If the optimal solution of (3) is $\begin{pmatrix} z^o \\ y^o \end{pmatrix}$ then an optimal solution $x^o$ of (4) is

given by

$$x^o = \begin{bmatrix} z^o \\ u^o \\ v^o \end{bmatrix}$$

where $y^o = u^o - v^o$ with $u^o, v^o \geq 0$, and vice versa.

Proof: Note that as $x^o$ is an optimal solution of (4) it follows that $\nu = c^T x^o$, $Ax^o = b$ and $x^o \geq 0$. Partition $x^o$ appropriately as

$$x^o = \begin{bmatrix} z_1 \\ u^o \\ v^o \end{bmatrix}$$

where $z_1$, $u^o$ and $v^o$ have the same dimensions as $c_1$, $c_2$ and $c_2$ respectively. Then it follows that

$$z_1 \geq 0, \quad A_1 z_1 + A_2 (u^o - v^o) = b, \quad u^o, v^o \geq 0.$$

Let $y_1 := u^o - v^o$. This implies that

$$z_1 \geq 0, \quad A_1 z_1 + A_2 y_1 = b,$$

and thus $\begin{pmatrix} z_1 \\ y_1 \end{pmatrix}$ is a feasible element for the optimization problem of (3). Thus it follows that

$$\begin{aligned}
\nu = c^T x^o &= \begin{pmatrix} c_1^T & c_2^T & -c_2^T \end{pmatrix} \begin{pmatrix} z_1 \\ u^o \\ v^o \end{pmatrix} = c_1^T z_1 + c_2^T (u^o - v^o) \\
&= \begin{pmatrix} c_1^T & c_2^T \end{pmatrix} \begin{pmatrix} z_1 \\ y_1 \end{pmatrix} \\
&\geq \min\left\{\begin{pmatrix} c_1^T & c_2^T \end{pmatrix} \begin{pmatrix} z \\ y \end{pmatrix} \,\middle|\, A_1 z + A_2 y = b,\ z \geq 0\right\} = \mu.
\end{aligned}$$

Note that as $\begin{pmatrix} z^o \\ y^o \end{pmatrix}$ is an optimal solution of (3) it follows that

$$\mu = c_1^T z^o + c_2^T y^o, \quad A_1 z^o + A_2 y^o = b, \quad z^o \geq 0.$$

Define $u_1$ and $v_1$ to satisfy

$$u_1(i) := \begin{cases} y^o(i) & \text{if } y^o(i) \geq 0 \\ 0 & \text{if } y^o(i) < 0 \end{cases}
\qquad
v_1(i) := \begin{cases} 0 & \text{if } y^o(i) \geq 0 \\ -y^o(i) & \text{if } y^o(i) < 0 \end{cases}$$

for all $i = 1, \ldots, n_y$, $n_y$ being the dimension of $y$. Note that $u_1 \geq 0$, $v_1 \geq 0$ and $y^o = u_1 - v_1$.

Therefore it follows that

$$A_1 z^o + A_2 u_1 - A_2 v_1 = A_1 z^o + A_2 y^o = b.$$

Thus

$$x_1 := \begin{bmatrix} z^o \\ u_1 \\ v_1 \end{bmatrix} \geq 0$$

is a feasible solution for (4). Thus it follows that

$$\mu = c_1^T z^o + c_2^T y^o = c_1^T z^o + c_2^T u_1 - c_2^T v_1 = c^T x_1 \geq \min\{c^T x \mid Ax = b \text{ and } x \geq 0\} = \nu = c_1^T z_1 + c_2^T (u^o - v^o) \geq \mu.$$

This proves $\mu = \nu$. Also we have shown that if the optimal solution of (3) is

$\begin{pmatrix} z^o \\ y^o \end{pmatrix}$ then an optimal solution $x^o$ of (4) is given by

$$x^o = \begin{bmatrix} z^o \\ u^o \\ v^o \end{bmatrix}$$

where $y^o = u^o - v^o$ with $u^o, v^o \geq 0$, and vice versa.
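A companion sketch of the Theorem 2 construction (my own code, not from the notes): a free variable $y$ is split as $y = u - v$ with $u, v \geq 0$.

```python
# Build (c, A, b) of problem (4) from the data (c1, c2, A1, A2, b) of problem (3):
# c = [c1; c2; -c2] and A = [A1, A2, -A2], as in Theorem 2.
import numpy as np

def split_free_variables(c1, c2, A1, A2, b):
    c = np.concatenate([c1, c2, -c2])
    A = np.hstack([A1, A2, -A2])
    return c, A, b
```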

Feasible solution and optimal solution

Definition 1. Consider the Standard Linear Programming (SLP) problem

$$\mu = \min\{c^T x \mid Ax = b,\ x \geq 0,\ x \in \mathbb{R}^n\}$$

where $A$ is an $m \times n$ matrix. Any $x \in \mathbb{R}^n$ that satisfies $Ax = b$, $x \geq 0$ is a feasible solution. If $x^o$ is such that

$$\mu = c^T x^o, \quad Ax^o = b \quad \text{and} \quad x^o \geq 0,$$

then $x^o$ is an optimal solution.

Basic solutions, basic variables and nonbasic variables

Definition 2. Consider the Standard Linear Programming (SLP) problem

$$\min\{c^T x \mid Ax = b,\ x \geq 0,\ x \in \mathbb{R}^n\}$$

where $A$ is an $m \times n$ matrix. Suppose $\operatorname{Rank}(A) = m$. Suppose

$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

is such that only the $m$ elements $\{x_{k_1}, x_{k_2}, \ldots, x_{k_m}\}$ can be nonzero, with

$$\begin{bmatrix} x_{k_1} \\ x_{k_2} \\ \vdots \\ x_{k_m} \end{bmatrix} = B^{-1} b, \qquad B = \begin{bmatrix} a_{k_1} & a_{k_2} & \cdots & a_{k_m} \end{bmatrix}.$$

Then $x$ is a basic solution of the SLP. The variables $\{x_{k_1}, x_{k_2}, \ldots, x_{k_m}\}$ are called the basic variables associated with the matrix $B$. The variables $x_i$ with $i \notin \{k_1, k_2, \ldots, k_m\}$ are called the non-basic variables. Note that $B$ is a matrix formed by $m$ linearly independent columns of $A$. A basic solution depends only on $A$ and $b$ and not on $c$.

A non-basic variable is set to zero in a basic solution. A basic variable can be zero in a basic solution. There are only finitely many basic solutions associated with $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.

Definition 3. A basic solution is said to be degenerate if any of the basic variables is zero.

Definition 4. $x$ is said to be a basic feasible solution if $x$ is basic and is feasible.

Definition 5. $x$ is said to be a basic optimal solution if $x$ is basic and is optimal.
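As a concrete illustration (the code is mine, not the notes'), the basic solution associated with a choice of $m$ column indices can be computed directly from Definition 2.

```python
# Given column indices k = (k_1, ..., k_m), form B from those columns of A and set
# the corresponding entries of x to B^{-1} b; all other entries are zero.
import numpy as np

def basic_solution(A, b, basic_idx):
    m, n = A.shape
    B = A[:, basic_idx]
    if np.linalg.matrix_rank(B) < m:
        raise ValueError("chosen columns are not linearly independent")
    x = np.zeros(n)
    x[basic_idx] = np.linalg.solve(B, b)     # basic variables x_k = B^{-1} b
    return x                                 # x is a basic feasible solution iff x >= 0
```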

The Fundamental Theorem of Linear Programming

Theorem 3. Consider the optimization problem

$$\min\{c^T x \mid Ax = b,\ x \geq 0,\ x \in \mathbb{R}^n\},$$

where $A \in \mathbb{R}^{m \times n}$ has rank $m$. Then

1. If there exists a feasible solution then there exists a basic feasible solution.
2. If there is an optimal solution then there is a basic optimal solution.

The SLP has only finitely many basic solutions; combined with the fundamental theorem of Linear Programming, this asserts that an LP can be solved in a finite number of steps.

Proof of (1): Let $x \in \mathbb{R}^n$ be a feasible solution. Suppose only $p$ elements of the vector $x$ are nonzero. Without loss of generality assume these variables to be $x_1, \ldots, x_p$. Thus $x_{p+1} = x_{p+2} = \cdots = x_n = 0$. Note also that $Ax = b$ and $x \geq 0$. Let

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix},$$

where $a_i$ denotes the $i$-th column of $A$. Then as $Ax = b$ we have

$$\sum_{i=1}^{n} a_i x_i = b \quad \Longrightarrow \quad \sum_{i=1}^{p} a_i x_i = b.$$

Case 1: Suppose $a_1, \ldots, a_p$ form an independent set of vectors. Then $p \leq m$ as $\operatorname{Rank}(A) = m$. One can add columns $a_{i_{p+1}}, \ldots, a_{i_m}$ such that

$$B = \begin{bmatrix} a_1 & \cdots & a_p & a_{i_{p+1}} & a_{i_{p+2}} & \cdots & a_{i_m} \end{bmatrix}$$

has independent columns and thus is invertible. It is evident that $x \geq 0$, $Ax = b$ and the nonzero variables are basic variables associated with the matrix $B$ above. Thus it follows that $x$ is a basic feasible solution.

Case 2: Suppose the columns $a_1, \ldots, a_p$ form a dependent set. Then there exist real numbers $y_1, \ldots, y_p$, with at least one element strictly positive, such that

$$\sum_{i=1}^{p} y_i a_i = 0.$$

Let

$$y = \begin{bmatrix} y_1 & \cdots & y_p & 0 & \cdots & 0 \end{bmatrix}^T \in \mathbb{R}^{n \times 1}.$$

It is evident that $Ay = 0$. Let

$$\varepsilon := \min\left\{\frac{x_1}{y_1}, \ldots, \frac{x_p}{y_p} \,\middle|\, y_i > 0\right\} > 0$$

and let

$$z = x - \varepsilon y.$$

Then

$$z \geq 0, \quad Az = A(x - \varepsilon y) = Ax - \varepsilon Ay = Ax = b,$$

and $z$ has at most $p - 1$ nonzero elements. Thus $z$ is a feasible solution with at most $p - 1$ nonzero elements. This process can be continued to a stage when the nonzero elements of a feasible element are associated with independent columns, and then we revert to Case 1. This proves the first part of the theorem.
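The following sketch (my own, working with floating-point tolerances rather than exact arithmetic) mirrors this reduction: as long as the columns carrying the nonzero entries of a feasible $x$ are dependent, step along a null-space direction until one more entry reaches zero.

```python
# Reduce a feasible solution of Ax = b, x >= 0 to a basic feasible solution,
# following the Case 1 / Case 2 argument in the proof of Theorem 3(1).
import numpy as np

def feasible_to_basic(A, x, tol=1e-10):
    x = x.astype(float).copy()
    while True:
        support = np.where(x > tol)[0]
        Asup = A[:, support]
        if np.linalg.matrix_rank(Asup) == len(support):
            return x                          # Case 1: columns independent, x is basic feasible
        _, _, Vt = np.linalg.svd(Asup)
        y = Vt[-1]                            # Case 2: a nonzero y with A_sup y = 0
        if not np.any(y > tol):
            y = -y                            # ensure at least one strictly positive entry
        ratios = x[support][y > tol] / y[y > tol]
        eps = ratios.min()                    # eps = min{ x_i / y_i : y_i > 0 }
        x[support] -= eps * y                 # z = x - eps*y gains one more zero entry
```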

Proof of (2): Suppose $\bar{x}$ is such that

$$\mu = c^T \bar{x}, \quad A\bar{x} = b \quad \text{and} \quad \bar{x} \geq 0,$$

that is, $\bar{x}$ is an optimal solution. Suppose only $p$ elements of the vector $\bar{x}$ are nonzero. Without loss of generality assume these variables to be $\bar{x}_1, \ldots, \bar{x}_p$. Thus $\bar{x}_{p+1} = \bar{x}_{p+2} = \cdots = \bar{x}_n = 0$. Note also that $A\bar{x} = b$ and $\bar{x} \geq 0$. Let

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix},$$

where $a_i$ denotes the $i$-th column of $A$. Then as $A\bar{x} = b$ we have

$$\sum_{i=1}^{n} a_i \bar{x}_i = b \quad \Longrightarrow \quad \sum_{i=1}^{p} a_i \bar{x}_i = b.$$

Case 1: Suppose $a_1, \ldots, a_p$ form an independent set of vectors. Then $p \leq m$ as $\operatorname{Rank}(A) = m$. Using results from linear algebra, one can add columns $a_{i_{p+1}}, \ldots, a_{i_m}$ such that

$$B = \begin{bmatrix} a_1 & \cdots & a_p & a_{i_{p+1}} & a_{i_{p+2}} & \cdots & a_{i_m} \end{bmatrix}$$

has independent columns and thus is invertible. It is evident that $\bar{x} \geq 0$, $A\bar{x} = b$ and the nonzero variables are basic variables associated with the matrix $B$ above. Thus it follows that $\bar{x}$ is a basic feasible solution. $\bar{x}$ is an optimal solution too and thus $\bar{x}$ is a basic optimal solution.

Case 2: Suppose the columns $a_1, \ldots, a_p$ form a dependent set. Then there exist real numbers $y_1, \ldots, y_p$, with at least one element strictly positive, such that

$$\sum_{i=1}^{p} y_i a_i = 0.$$

Let

$$y = \begin{bmatrix} y_1 & \cdots & y_p & 0 & \cdots & 0 \end{bmatrix}^T \in \mathbb{R}^{n \times 1}.$$

It is evident that $Ay = 0$. Let

$$\delta := \min\left\{\frac{\bar{x}_1}{|y_1|}, \ldots, \frac{\bar{x}_p}{|y_p|} \,\middle|\, y_i \neq 0\right\} > 0.$$

Let $\varepsilon$ be any real number such that $|\varepsilon| < \delta$. Then

$$\bar{x} - \varepsilon y \geq 0 \quad \text{and} \quad A(\bar{x} - \varepsilon y) = b.$$

Thus $\bar{x} - \varepsilon y$ is a feasible solution. Suppose $c^T y \neq 0$; then we can choose $\varepsilon$ such that $0 < |\varepsilon| < \delta$ and $\operatorname{sgn}(\varepsilon) = \operatorname{sgn}(c^T y)$. Then

$$c^T (\bar{x} - \varepsilon y) = c^T \bar{x} - \varepsilon c^T y < c^T \bar{x}.$$

As $(\bar{x} - \varepsilon y)$ is a feasible solution, $\bar{x}$ cannot be an optimal solution. This is a contradiction and thus $c^T y = 0$. Now let

$$\varepsilon := \min\left\{\frac{\bar{x}_1}{y_1}, \ldots, \frac{\bar{x}_p}{y_p} \,\middle|\, y_i > 0\right\} > 0$$

and let

$$z = \bar{x} - \varepsilon y.$$

Then $z$ is a feasible solution with

$$c^T z = c^T (\bar{x} - \varepsilon y) = c^T \bar{x} = \mu$$

and thus $z$ is an optimal solution. Also $z$ has at most $p - 1$ nonzero elements. This process can be continued to a stage when the only nonzero terms in the optimal solution are associated with independent columns of $A$. This proves (2).

Definition 6 [Convex sets]. A subset $\Omega$ of a vector space $X$ is said to be convex if for any two elements $c_1$ and $c_2$ in $\Omega$ and for any real number $\lambda$ with $0 < \lambda < 1$, the element $\lambda c_1 + (1 - \lambda) c_2 \in \Omega$. The empty set $\{\}$ is assumed to be convex.

Theorem 4. Let $\Lambda_\alpha$, $\alpha \in S$, be an arbitrary collection of convex sets. Then

$$\bigcap_{\alpha \in S} \Lambda_\alpha$$

is a convex set.

Theorem 5. Suppose $K$ and $G$ are convex subsets of a vector space $X$. Then

$$K + G := \{x \in X \mid x = x_K + x_G,\ x_K \in K \text{ and } x_G \in G\}$$

is convex.

Definition 7. Let $S$ be an arbitrary subset of a vector space $X$. Then the convex hull of $S$ is the smallest convex set containing $S$ and is denoted by $\operatorname{co}(S)$. Note that

$$\operatorname{co}(S) = \bigcap \Lambda_\alpha,$$

where the intersection is taken over all convex sets $\Lambda_\alpha$ that contain $S$.

Definition 8 [Convex combination]. A vector of the form $\sum_{k=1}^{n} \lambda_k x_k$, where $\sum_{k=1}^{n} \lambda_k = 1$ and $\lambda_k \geq 0$ for all $k = 1, \ldots, n$, is a convex combination of the vectors $x_1, \ldots, x_n$.

Definition 9 [Cones]. A subset $C$ of a vector space $X$ is a cone if for every nonnegative $\alpha$ in $\mathbb{R}$ and $c$ in $C$, $\alpha c \in C$. A subset $C$ of a vector space is a convex cone if $C$ is convex and is also a cone.

Definition 10 [Positive cones]. A convex cone $P$ in a vector space $X$ is a positive convex cone if a relation $\geq$ is defined on $X$ based on $P$ such that for elements $x$ and $y$ in $X$, $x \geq y$ if $x - y \in P$. We write $x > 0$ if $x \in \operatorname{int}(P)$. Similarly $x \leq y$ if $x - y \in -P =: N$, and $x < 0$ if $x \in \operatorname{int}(N)$.

Example 1. Consider the real number system $\mathbb{R}$. The set $P := \{x : x \text{ is nonnegative}\}$ defines a cone in $\mathbb{R}$. It also induces a relation on $\mathbb{R}$ where for any two elements $x$ and $y$ in $\mathbb{R}$, $x \geq y$ if and only if $x - y \in P$. The convex cone $P$ with the relation $\geq$ defines a positive cone on $\mathbb{R}$.

Definition 11 [Convex maps]. Let $X$ be a vector space and $Z$ be a vector space with positive cone $P$. A mapping $G : X \to Z$ is convex if

$$G(tx + (1 - t)y) \leq tG(x) + (1 - t)G(y)$$

for all $x, y$ in $X$ and $t$ with $0 \leq t \leq 1$, and is strictly convex if

$$G(tx + (1 - t)y) < tG(x) + (1 - t)G(y)$$

for all $x \neq y$ in $X$ and $t$ with $0 < t < 1$.

Definition 12 [Extreme points]. Let $C$ be a convex set. Then $a \in C$ is said to be an extreme point of the set $C$ if for any $x, y \in C$ and $0 < \lambda < 1$,

$$\lambda x + (1 - \lambda) y = a \quad \text{implies that} \quad x = y = a.$$

Note that the feasible set of an SLP is given by

$$\Lambda = \{x \in \mathbb{R}^n \mid Ax = b,\ x \geq 0\}.$$

Clearly if $x$ and $y \in \Lambda$ and $0 < \lambda < 1$, then it follows that

$$A(\lambda x + (1 - \lambda) y) = \lambda Ax + (1 - \lambda) Ay = \lambda b + (1 - \lambda) b = b \quad \text{and} \quad (\lambda x + (1 - \lambda) y) \geq 0.$$

Thus $(\lambda x + (1 - \lambda) y) \in \Lambda$ if $x$ and $y \in \Lambda$. Thus $\Lambda$ is convex.

Equivalence of extreme points and basic solutions

Theorem 6. Let $A$ be an $m \times n$ matrix with $\operatorname{Rank}(A) = m$ and let

$$\Lambda = \{x \in \mathbb{R}^n \mid Ax = b,\ x \geq 0\}.$$

Then a vector $x$ is an extreme point of $\Lambda$ if and only if $x$ is a basic feasible solution.

Proof: Suppose $x$ is a basic feasible solution. Assume without loss of generality that the basic variables are the first $m$ elements of $x$, given by $x_i$, $i = 1, \ldots, m$. Also let $B = \begin{bmatrix} a_1 & a_2 & \cdots & a_m \end{bmatrix}$. Then it follows that

$$x_1 a_1 + x_2 a_2 + \cdots + x_m a_m = b,$$

or in other words

$$\begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix} = B^{-1} b.$$

Now suppose $0 < \lambda < 1$ and $y$ and $z \in \Lambda$ are such that

$$\lambda y + (1 - \lambda) z = x.$$

Thus it follows that $\lambda y_i + (1 - \lambda) z_i = x_i$ for all $i = 1, \ldots, n$. Note that as $x_i = 0$ for all $i = m + 1, \ldots, n$, $\lambda > 0$, $(1 - \lambda) > 0$ and $y_i, z_i \geq 0$, it follows that

$$y_i = z_i = 0 \quad \text{for all } i = m + 1, \ldots, n.$$

Note that as $y$ and $z \in \Lambda$, $Ay = Az = b$ and thus

$$y_1 a_1 + \cdots + y_m a_m = z_1 a_1 + \cdots + z_m a_m = b.$$

Thus

$$\begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix} = \begin{bmatrix} z_1 \\ \vdots \\ z_m \end{bmatrix} = B^{-1} b = \begin{bmatrix} x_1 \\ \vdots \\ x_m \end{bmatrix} \quad \Longrightarrow \quad y = z = x.$$

Thus we have shown that every basic feasible solution is an extreme point.

Suppose $x$ is an extreme point of the set $\Lambda$. Without loss of generality assume that $x_1, \ldots, x_p$ are the only nonzero elements of $x$. Suppose $a_1, \ldots, a_p$ form a dependent set. Then there exist scalars $y_1, \ldots, y_p$, not all zero, such that

$$y_1 a_1 + \cdots + y_p a_p = 0.$$

Let

$$y = \begin{bmatrix} y_1 & \cdots & y_p & 0 & \cdots & 0 \end{bmatrix}^T \in \mathbb{R}^{n \times 1}$$

and

$$\varepsilon = \min\{x_i / |y_i| \mid i \in \{1, \ldots, p\} \text{ and } y_i \neq 0\} > 0.$$

Note that

$$x - \varepsilon y \geq 0, \quad x + \varepsilon y \geq 0, \quad A(x - \varepsilon y) = b \quad \text{and} \quad A(x + \varepsilon y) = b.$$

Also, clearly

$$x = \tfrac{1}{2}(x - \varepsilon y) + \tfrac{1}{2}(x + \varepsilon y), \qquad x \neq (x - \varepsilon y) \quad \text{and} \quad x \neq (x + \varepsilon y)$$

as $y \neq 0$ and $\varepsilon \neq 0$. Thus we have written $x$ as a non-trivial convex combination of $(x - \varepsilon y)$ and $(x + \varepsilon y)$. Thus $x$ is not an extreme point. This is a contradiction and therefore $a_1, \ldots, a_p$ are independent. Thus $x$ is a basic feasible solution.

The following corollaries follow easily from Theorem 3 and Theorem 6.

Corollary 1. Suppose $\operatorname{Rank}(A) = m$ where $A$ is an $m \times n$ matrix. The feasible set of the SLP, $\Lambda = \{x \in \mathbb{R}^n \mid Ax = b \text{ and } x \geq 0\}$, is nonempty if and only if there exists an extreme point of $\Lambda$.

Corollary 2. Suppose $\operatorname{Rank}(A) = m$ where $A$ is an $m \times n$ matrix. Let the feasible set of the SLP be $\Lambda = \{x \in \mathbb{R}^n \mid Ax = b \text{ and } x \geq 0\}$. Then an optimal solution to the SLP exists if and only if an optimal solution to the SLP that is also an extreme point of $\Lambda$ exists.
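Since there are only finitely many basic solutions, the extreme points of $\Lambda$ can be listed by brute force for small problems. The sketch below (mine, not the notes') checks every $m$-column subset of $A$; it is exponential in $n$ and only meant to illustrate Theorem 6.

```python
# Enumerate every basic feasible solution of Ax = b, x >= 0 (equivalently, by
# Theorem 6, every extreme point of Lambda) by trying all m-column subsets of A.
import itertools
import numpy as np

def all_basic_feasible_solutions(A, b, tol=1e-9):
    m, n = A.shape
    bfs_list = []
    for cols in itertools.combinations(range(n), m):
        cols = list(cols)
        B = A[:, cols]
        if abs(np.linalg.det(B)) < tol:
            continue                          # dependent columns: no basic solution here
        xB = np.linalg.solve(B, b)
        if np.all(xB >= -tol):                # the basic solution is also feasible
            x = np.zeros(n)
            x[cols] = xB
            bfs_list.append(x)
    return bfs_list
```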

Which basic variable becomes non-basic

Consider the SLP with

$$\Lambda = \{x \in \mathbb{R}^n \mid Ax = b,\ x \geq 0\}$$

as the feasible set. Let's assume that the first $m$ columns of $A$ are independent and that the basic solution $x$ associated with the first $m$ columns is feasible, i.e. $x \geq 0$. Thus

$$x_1 a_1 + x_2 a_2 + \cdots + x_m a_m = b, \quad x_{m+1} = x_{m+2} = \cdots = x_n = 0 \quad \text{and} \quad x_i > 0 \text{ for all } i = 1, \ldots, m.$$

The columns $a_1, a_2, \ldots, a_m$ are the basic columns. Suppose it is determined that a nonbasic column $a_q$ should become a basic column. Then we have to determine which column has to leave the basic set. Note that $a_1, a_2, \ldots, a_m$ form a basis for $\mathbb{R}^m$. Therefore we can write

$$a_q = y_{1q} a_1 + \cdots + y_{mq} a_m.$$

Suppose the variable associated with $a_q$ is increased from $0$ to $\varepsilon > 0$ while keeping all other non-basic variables $0$. Then to maintain feasibility the coefficients of the initial basic set $a_1, \ldots, a_m$ will be altered to satisfy

$$\bar{x}_1 a_1 + \bar{x}_2 a_2 + \cdots + \bar{x}_m a_m + \varepsilon a_q = b,$$

where $\bar{x}_i$, $i = 1, \ldots, m$, have to be determined. Clearly

$$\bar{x}_1 a_1 + \bar{x}_2 a_2 + \cdots + \bar{x}_m a_m = b - \varepsilon a_q = \sum_{i=1}^{m} x_i a_i - \varepsilon \sum_{i=1}^{m} y_{iq} a_i = \sum_{i=1}^{m} (x_i - \varepsilon y_{iq}) a_i.$$

This implies that

$$\sum_{i=1}^{m} \left[\bar{x}_i - (x_i - \varepsilon y_{iq})\right] a_i = 0.$$

Thus $\bar{x}_i = x_i - \varepsilon y_{iq}$ for all $i = 1, \ldots, m$. Thus we have

$$\sum_{i=1}^{m} (x_i - \varepsilon y_{iq}) a_i + \varepsilon a_q = b.$$

Thus we have a solution to the equation $Az = b$ given by

$$\bar{x} = \begin{bmatrix} x_1 - \varepsilon y_{1q} & \cdots & x_m - \varepsilon y_{mq} & 0 & \cdots & 0 & \varepsilon & 0 & \cdots & 0 \end{bmatrix}^T,$$

where the $\varepsilon$ is the $q$-th element.

Feasibility of $\bar{x}$

Note that $A\bar{x} = b$ where

$$\bar{x} = \begin{bmatrix} x_1 - \varepsilon y_{1q} & \cdots & x_m - \varepsilon y_{mq} & 0 & \cdots & 0 & \varepsilon & 0 & \cdots & 0 \end{bmatrix}^T.$$

To be feasible, $\bar{x} \geq 0$. This may be satisfied by an appropriate choice of $\varepsilon$. Indeed there are two possible cases for the feasibility of $\bar{x}$.

Case 1: $y_{iq} \leq 0$ for all $i = 1, \ldots, m$. In this case $\bar{x}$ is feasible (that is, $\bar{x} \in \Lambda$) for any $\varepsilon > 0$. Also we can conclude that $\Lambda$ is an unbounded set.

Case 2: There exists at least one $i_0 \in \{1, 2, \ldots, m\}$ such that $y_{i_0 q} > 0$. In this case, for any $0 \leq \varepsilon \leq \varepsilon_M$, $\bar{x}$ is feasible, with

$$\varepsilon_M = \min_{i=1,\ldots,m} \left\{\frac{x_i}{y_{iq}} \,\middle|\, y_{iq} > 0\right\} \geq 0.$$

Making $\bar{x}$ a basic feasible solution with $a_q$ basic

Note that

$$\bar{x} = \begin{bmatrix} x_1 - \varepsilon y_{1q} & \cdots & x_m - \varepsilon y_{mq} & 0 & \cdots & 0 & \varepsilon & 0 & \cdots & 0 \end{bmatrix}^T.$$

Case 1: If $y_{iq} \leq 0$ for all $i = 1, \ldots, m$ then it is not possible to make $x_i - \varepsilon y_{iq} = 0$ for any $\varepsilon$ if $x_i > 0$. If the initial basic feasible solution was degenerate with $x_k = 0$ for some $k \in \{1, \ldots, m\}$, then by choosing $\varepsilon = 0$ one can swap the roles of $a_k$ and $a_q$. However, in this case the solution $\bar{x} = x$.

Case 2: There exists at least one $i_0 \in \{1, 2, \ldots, m\}$ such that $y_{i_0 q} > 0$. In this case let $\varepsilon = \varepsilon_M$, with

$$\varepsilon_M = \min_{i=1,\ldots,m} \left\{\frac{x_i}{y_{iq}} \,\middle|\, y_{iq} > 0\right\} \geq 0.$$

Let $p$ be a minimizing index above. Then $a_p$ can be replaced with $a_q$ in the basic column set. Note again that if $x_p = 0$ then the new solution $\bar{x} = x$.
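The ratio test above is easy to state in code; the sketch below (mine) returns the leaving index $p$ and the step $\varepsilon_M$, or signals Case 1 when no component of $y_q$ is positive.

```python
# Ratio test: given the current basic values x_1..x_m and the coordinates y_q of the
# entering column a_q in the current basis, return (p, eps_M) as in Case 2, or
# (None, inf) in Case 1 (no y_iq > 0, so Lambda is unbounded in this direction).
import numpy as np

def ratio_test(x_basic, y_q):
    candidates = [(x_basic[i] / y_q[i], i) for i in range(len(y_q)) if y_q[i] > 0]
    if not candidates:
        return None, np.inf                   # Case 1
    eps_M, p = min(candidates)                # Case 2: eps_M = min x_i / y_iq over y_iq > 0
    return p, eps_M
```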

Boundedness and non-degeneracy assumption

$$\bar{x} = \begin{bmatrix} x_1 - \varepsilon y_{1q} & \cdots & x_m - \varepsilon y_{mq} & 0 & \cdots & 0 & \varepsilon & 0 & \cdots & 0 \end{bmatrix}^T$$

If we make the assumption that $\Lambda$ is bounded and that $x$ is a non-degenerate basic feasible solution, the following steps can be performed to determine which basic column leaves the basic column set to allow $a_q$ to enter the basic column set.

1. Compute

$$\varepsilon_M = \min_{i=1,\ldots,m} \left\{\frac{x_i}{y_{iq}} \,\middle|\, y_{iq} > 0\right\} > 0$$

with

$$p = \arg\min_{i=1,\ldots,m} \left\{\frac{x_i}{y_{iq}} \,\middle|\, y_{iq} > 0\right\}.$$

2. Let

$$\bar{x} = \begin{bmatrix} x_1 - \varepsilon_M y_{1q} & \cdots & x_m - \varepsilon_M y_{mq} & 0 & \cdots & 0 & \varepsilon_M & 0 & \cdots & 0 \end{bmatrix}^T.$$

Note that $\bar{x} \geq 0$ and it has the $p$-th element zero. The only possible nonzero elements belong to the set $\{1, \ldots, (p-1), (p+1), \ldots, m\} \cup \{q\}$.

Effect on the cost of changing a basic feasible solution

Suppose $x_{\mathrm{bfs}}$ is a basic feasible solution. We will also assume that

$$A = \begin{bmatrix} e_1 & \cdots & e_m & y_{m+1} & \cdots & y_n \end{bmatrix} \quad \text{and} \quad b = y_{n+1}.$$

In this case we will have

$$x_{\mathrm{bfs}} = \begin{bmatrix} y_{n+1} \\ 0_{n-m} \end{bmatrix}, \quad \text{with the first } m \text{ entries equal to } b = y_{n+1}.$$

The cost associated with this basic feasible solution is

$$z_{\mathrm{bfs}} = c^T x_{\mathrm{bfs}} = c_B^T y_{n+1}.$$

Suppose $x$ is another feasible solution. How does the cost $c^T x$ compare with

$c^T x_{\mathrm{bfs}}$? As $x$ is feasible, $Ax = b = y_{n+1}$. Thus

$$x_1 e_1 + \cdots + x_m e_m + x_{m+1} y_{m+1} + \cdots + x_n y_n = y_{n+1}.$$

Thus

$$\begin{aligned}
\sum_{i=1}^{m} x_i e_i &= y_{n+1} - \sum_{i=m+1}^{n} x_i y_i \\
c_B^T \left(\sum_{i=1}^{m} x_i e_i\right) &= c_B^T y_{n+1} - c_B^T \left(\sum_{i=m+1}^{n} x_i y_i\right) \\
\sum_{i=1}^{m} x_i c_i &= z_{\mathrm{bfs}} - \sum_{i=m+1}^{n} x_i \underbrace{c_B^T y_i}_{=:\, z_i} \\
\sum_{i=1}^{n} x_i c_i &= z_{\mathrm{bfs}} - \sum_{i=m+1}^{n} x_i z_i + \sum_{i=m+1}^{n} x_i c_i \\
\underbrace{\sum_{i=1}^{n} x_i c_i}_{c^T x} &= \underbrace{z_{\mathrm{bfs}}}_{c^T x_{\mathrm{bfs}}} + \underbrace{\sum_{i=m+1}^{n} x_i (c_i - z_i)}_{\text{difference in cost}} \\
\underbrace{\sum_{i=1}^{n} x_i c_i}_{c^T x} - \underbrace{z_{\mathrm{bfs}}}_{c^T x_{\mathrm{bfs}}} &= \underbrace{\sum_{i=m+1}^{n} x_i (c_i - z_i)}_{\text{difference in cost}}
\end{aligned}$$

Thus if $c_i - z_i \geq 0$ for all $i = m + 1, \ldots, n$ then $x_{\mathrm{bfs}}$ is optimal.
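A small sketch (mine) of this optimality check: compute $z_i = c_B^T B^{-1} a_i$ for every column and test whether all relative costs $c_i - z_i$ are nonnegative. For a general basis $B$ this uses $B^{-1}$ explicitly; the tableau form in the notes corresponds to $B = I$.

```python
# Relative (reduced) costs r_i = c_i - c_B^T B^{-1} a_i, and the optimality test
# "c_i - z_i >= 0 for all i" from the slide above.
import numpy as np

def reduced_costs(A, c, basic_idx):
    B = A[:, basic_idx]
    cB = c[basic_idx]
    Y = np.linalg.solve(B, A)                 # columns y_i = B^{-1} a_i
    return c - cB @ Y                         # r_i = c_i - c_B^T y_i (zero for basic columns)

def is_optimal(r, tol=1e-9):
    return bool(np.all(r >= -tol))
```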

The Simplex Method

Consider the following SLP:

$$\begin{aligned}
\text{Minimize} \quad & c_1 x_1 + c_2 x_2 + \cdots + c_n x_n \\
\text{subject to:} \quad & a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n = b_1 \\
& a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n = b_2 \\
& \qquad \vdots \\
& a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n = b_m \\
& x_1, x_2, \ldots, x_n \geq 0.
\end{aligned}$$

Let's assume that the matrix $A$ has rank $m$. We will assume that the matrix $A$ is such that $a_i = e_i$, $i = 1, \ldots, m$, and $a_i = y_i$, $i = m + 1, \ldots, n$. We will denote the vector $b$ by $y_{n+1}$. Thus we have

$$\min \ \begin{bmatrix} c_1 & c_2 & \cdots & c_m & c_{m+1} & \cdots & c_n \end{bmatrix} x$$

subject to

$$\begin{bmatrix}
1 & & & y_{1\,m+1} & \cdots & y_{1n} \\
& \ddots & & \vdots & & \vdots \\
& & 1 & y_{m\,m+1} & \cdots & y_{mn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} =
\begin{bmatrix} y_{1\,n+1} \\ y_{2\,n+1} \\ \vdots \\ y_{m\,n+1} \end{bmatrix},
\qquad
\begin{bmatrix} x_1 & x_2 & \cdots & x_m & x_{m+1} & \cdots & x_n \end{bmatrix} \geq 0.$$

Let's assume that $y_{1\,n+1}, \ldots, y_{m\,n+1} \geq 0$. Then for the above table a basic feasible solution is given by $x_1 = y_{1\,n+1}, \ldots, x_m = y_{m\,n+1}, x_{m+1} = 0, \ldots, x_n = 0$. Suppose it is decided that $x_q$ will enter the basic set.

Consider the table:

$$\left[\begin{array}{ccc|ccc|c}
1 & & & y_{1\,m+1} & \cdots & y_{1n} & y_{1\,n+1} \\
& \ddots & & \vdots & & \vdots & \vdots \\
& & 1 & y_{m\,m+1} & \cdots & y_{mn} & y_{m\,n+1} \\
\hline
c_1 & \cdots & c_m & c_{m+1} & \cdots & c_n & 0
\end{array}\right]$$

Suppose we do the following operation:

$$[\text{row } (m+1)] \leftarrow [\text{row } (m+1)] - c_1 [\text{row } 1] - c_2 [\text{row } 2] - \cdots - c_m [\text{row } m].$$

Then we have the table:

$$\left[\begin{array}{ccc|ccc|c}
1 & & & y_{1\,m+1} & \cdots & y_{1n} & y_{1\,n+1} \\
& \ddots & & \vdots & & \vdots & \vdots \\
& & 1 & y_{m\,m+1} & \cdots & y_{mn} & y_{m\,n+1} \\
\hline
0 & \cdots & 0 & c_{m+1} - \sum_{i=1}^{m} c_i y_{i\,m+1} & \cdots & c_n - \sum_{i=1}^{m} c_i y_{in} & -\sum_{i=1}^{m} c_i y_{i\,n+1}
\end{array}\right]$$

Note that the last row holds the relative costs $r_i = c_i - c_B^T y_i$ for all $i = 1, \ldots, n$, and its last entry is

$$-\sum_{i=1}^{m} c_i y_{i\,n+1} = -z_{\mathrm{bfs}}.$$

Note that if any other feasible solution $x$ has cost $z = c^T x$ then

$$z - z_{\mathrm{bfs}} = \sum_{i=m+1}^{n} x_i r_i. \qquad (5)$$

Determining the variable that will enter: First determine

$$r_q = \min\{r_i,\ i = 1, \ldots, n\}.$$

If $r_q \geq 0$, then the current bfs is the optimal solution, as the cost of any other feasible solution is greater than or equal to $z_{\mathrm{bfs}}$ (see Equation (5)). If $r_q < 0$ then $q$ is chosen as the variable to become basic.

Determining the basic variable that will leave the basic set:

Note that the bfs is given by

$$x_{\mathrm{bfs}} = \begin{bmatrix} y_{n+1} \\ 0 \end{bmatrix}.$$

Also note that

$$y_j = y_{1j} e_1 + y_{2j} e_2 + \cdots + y_{mj} e_m.$$

Suppose $q$ enters the basic set and denote the new solution by $\bar{x}$. Then as $A\bar{x} = b$ it follows that

$$\bar{x}_1 e_1 + \bar{x}_2 e_2 + \cdots + \bar{x}_m e_m + \varepsilon y_q = y_{n+1}.$$

Thus

$$\sum_{i=1}^{m} (\bar{x}_i + \varepsilon y_{iq} - x_i) e_i = 0.$$

Thus $\bar{x}_i = x_i - \varepsilon y_{iq}$ for all $i = 1, \ldots, m$ and $\bar{x}_q = \varepsilon$. That is,

$$\bar{x} = \begin{bmatrix} y_{1\,n+1} - \varepsilon y_{1q} & \cdots & y_{m\,n+1} - \varepsilon y_{mq} & 0 & \cdots & 0 & \varepsilon & 0 & \cdots & 0 \end{bmatrix}^T.$$

To be feasible, $\bar{x} \geq 0$. If $y_{iq} \leq 0$ for all $i = 1, \ldots, m$ then any $\varepsilon > 0$ does not violate feasibility and

the feasible set is unbounded. Note that in this case, as $r_q < 0$, the cost of making the non-basic variable $q$ take a nonzero value $\varepsilon$ is, from Equation (5),

$$z - z_{\mathrm{bfs}} = r_q \varepsilon.$$

As $\varepsilon \geq 0$ can be arbitrarily large without violating feasibility, we conclude that the SLP has no solution and the minimum value is $-\infty$. Thus if

$$y_{iq} \leq 0 \quad \text{for all } i = 1, \ldots, m,$$

then one can stop the Simplex algorithm and conclude that the optimal value is $-\infty$.

If $y_{iq} > 0$ for some $i_0 \in \{1, \ldots, m\}$ then let

$$p = \arg\min_{i=1,\ldots,m} \left\{\frac{y_{i\,n+1}}{y_{iq}} \,\middle|\, y_{iq} > 0\right\} \quad \text{and} \quad \varepsilon = \min_{i=1,\ldots,m} \left\{\frac{y_{i\,n+1}}{y_{iq}} \,\middle|\, y_{iq} > 0\right\}.$$

$p$ is the basic variable that will leave the basic set, to be replaced by $q$ as the basic variable.

If the initial bfs is not degenerate then $y_{i\,n+1} > 0$ for all $i = 1, \ldots, m$. Thus $\varepsilon > 0$, and from Equation (5), as $r_q < 0$ and $\varepsilon > 0$, the new cost $z < z_{\mathrm{bfs}}$. Thus if all bfs are non-degenerate then every time the simplex table is updated the cost strictly decreases, and thus the same solution cannot be visited twice. Thus there is no cycling in the iterations.

Update the table by the following operations:

$$y_{ij} \leftarrow y_{ij} - \frac{y_{iq}}{y_{pq}} y_{pj} \quad \text{if } i \neq p, \qquad y_{pj} \leftarrow \frac{y_{pj}}{y_{pq}}.$$

In other words,

$$[\text{row } i] \leftarrow [\text{row } i] - \frac{y_{iq}}{y_{pq}} [\text{row } p] \quad \text{if } i \neq p, \qquad [\text{row } p] \leftarrow \frac{1}{y_{pq}} [\text{row } p].$$

Note that with this operation

$$y_i = e_i \quad \text{for all } i \in \{1, 2, \ldots, p-1, p+1, \ldots, m\} \cup \{q\}.$$
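These row operations are the pivot step; a compact sketch (mine) applied to the full tableau, including the relative-cost row, looks like this.

```python
# Pivot the tableau T (m+1 rows: m constraint rows plus the relative-cost row,
# n+1 columns) on the element y_pq, using the row operations from the slide above.
import numpy as np

def pivot(T, p, q):
    T = T.astype(float).copy()
    T[p, :] /= T[p, q]                        # [row p] <- (1/y_pq) [row p]
    for i in range(T.shape[0]):
        if i != p:
            T[i, :] -= T[i, q] * T[p, :]      # [row i] <- [row i] - (y_iq/y_pq) [row p]
    return T                                  # column q is now the unit vector e_p
```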

The Simplex Algorithm

(Step 1) Find the index $q$ such that $r_q = \min\{r_j \mid j = 1, \ldots, n\}$. If $r_q \geq 0$, STOP: the current basic feasible solution is the optimal solution.

(Step 2) Let $r_q$ be the solution in Step 1 with $r_q < 0$.

1. If $y_{iq} \leq 0$ for all $i = 1, \ldots, m$ then STOP: there is no optimal solution and the optimal value is $-\infty$.
2. If there exists $i_0$ such that $y_{i_0 q} > 0$ then let

$$\varepsilon = \min\left\{\frac{y_{i\,n+1}}{y_{iq}} \,\middle|\, y_{iq} > 0\right\}$$

and let

$$p = \arg\min\left\{\frac{y_{i\,n+1}}{y_{iq}} \,\middle|\, y_{iq} > 0\right\}.$$

(Step 3) Update the table by the following operations:

$$y_{ij} \leftarrow y_{ij} - \frac{y_{iq}}{y_{pq}} y_{pj} \quad \text{if } i \neq p, \qquad y_{pj} \leftarrow \frac{y_{pj}}{y_{pq}}.$$

A new basic feasible solution is obtained.

(Step 4) Update the relative cost vector to ensure that all the relative costs with respect to the basic variables are zero. Return to Step 1.

Theorem 7. The simplex algorithm will yield an optimal basic feasible solution in a finite number of steps if the SLP has an optimal solution and all basic feasible solutions are non-degenerate.

Proof: Follows from the fact that there are finitely many basic feasible solutions and that the simplex algorithm at each iteration yields a new basic feasible solution with a cost strictly smaller than the cost at the previous iteration (if all basic feasible solutions are non-degenerate there is no cycling).
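Putting Steps 1-4 together gives a short tableau implementation. The sketch below (mine, with floating-point tolerances) assumes the tableau already carries the identity in the first $m$ columns, a nonnegative last column, and the relative costs in the last row.

```python
# Tableau simplex loop for min c^T x, Ax = b, x >= 0, following Steps 1-4.
# T has m+1 rows (constraints plus relative-cost row) and n+1 columns.
import numpy as np

def simplex(T, tol=1e-9):
    T = T.astype(float).copy()
    m = T.shape[0] - 1
    while True:
        r = T[-1, :-1]
        q = int(np.argmin(r))                 # Step 1: most negative relative cost
        if r[q] >= -tol:
            return T                          # optimal tableau reached
        col = T[:m, q]
        if np.all(col <= tol):                # Step 2.1: unbounded below
            raise ValueError("no optimal solution; the optimal value is -infinity")
        ratios = np.where(col > tol, T[:m, -1] / np.where(col > tol, col, 1.0), np.inf)
        p = int(np.argmin(ratios))            # Step 2.2: ratio test picks the leaving row
        T[p, :] /= T[p, q]                    # Step 3: pivot on (p, q)
        for i in range(T.shape[0]):
            if i != p:
                T[i, :] -= T[i, q] * T[p, :]  # also restores zero relative costs (Step 4)
```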

Revised Simplex: Matrix Method

Let the SLP be given by

$$\mu = \min\{c^T x \mid Ax = b,\ x \geq 0\}$$

where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Suppose the first $m$ columns of $A$ are independent. Let

$$B := \begin{bmatrix} a_1 & a_2 & \cdots & a_m \end{bmatrix} \quad \text{and} \quad D := \begin{bmatrix} a_{m+1} & a_{m+2} & \cdots & a_n \end{bmatrix}.$$

With this definition we have $A = \begin{bmatrix} B & D \end{bmatrix}$.

Partition any feasible $x$ according to

$$x = \begin{bmatrix} x_B \\ x_D \end{bmatrix}.$$

Let the basic feasible solution associated with $B$ be given by

$$\bar{x} = \begin{bmatrix} \bar{x}_B \\ \bar{x}_D \end{bmatrix}, \quad \text{where } \bar{x}_B = B^{-1} b \text{ and } \bar{x}_D = 0.$$

If $x = \begin{bmatrix} x_B \\ x_D \end{bmatrix}$ is feasible then from $Ax = b$ it follows that

$$\begin{bmatrix} B & D \end{bmatrix} \begin{bmatrix} x_B \\ x_D \end{bmatrix} = b.$$

Thus

$$B x_B + D x_D = b$$

and therefore

$$x_B = B^{-1} b - B^{-1} D x_D = \bar{x}_B - B^{-1} D x_D.$$

The cost associated with this feasible solution is

$$\begin{aligned}
z = c^T x &= \begin{bmatrix} c_B^T & c_D^T \end{bmatrix} \begin{bmatrix} x_B \\ x_D \end{bmatrix} = c_B^T x_B + c_D^T x_D \\
&= c_B^T (B^{-1} b - B^{-1} D x_D) + c_D^T x_D \\
&= c_B^T \bar{x}_B + (c_D^T - c_B^T B^{-1} D) x_D \\
z &= z_{\mathrm{bfs}} + (c_D^T - c_B^T B^{-1} D) x_D \\
z - z_{\mathrm{bfs}} &= (c_D^T - c_B^T B^{-1} D) x_D = r_D^T x_D
\end{aligned}$$

where $r_D^T = (c_D^T - c_B^T B^{-1} D)$. Thus the following algorithm can be followed.

(Step 1) Compute $r_D^T = (c_D^T - c_B^T B^{-1} D)$. If $r_D \geq 0$, STOP: the current solution is optimal.

(Step 2) Let $q$ be the index of the most negative element of $r_D$; $a_q$ will enter the basic set.

(Step 3) Let $y_q = B^{-1} a_q$, the coordinate vector of $a_q$ in the basis given by $B$.

(Step 4) If $y_{iq} \leq 0$ for all $i = 1, \ldots, m$, STOP: the SLP has no solution and the optimal value is $-\infty$. Else calculate

$$p = \arg\min\left\{\frac{y_{i\,n+1}}{y_{iq}} \,\middle|\, y_{iq} > 0\right\}$$

where

$$y_{n+1} = B^{-1} b.$$

Replace the vector $a_p$ in $B$ by $a_q$ and go to Step 1.
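A compact sketch (mine) of this revised simplex loop, keeping only the list of basic column indices and re-factoring $B$ at every iteration for clarity rather than efficiency.

```python
# Revised simplex (matrix method), Steps 1-4: maintain the basic index set,
# compute r_D = c_D - c_B^T B^{-1} D, pick the entering and leaving columns,
# and repeat until r_D >= 0 or unboundedness is detected.
import numpy as np

def revised_simplex(A, b, c, basic_idx, tol=1e-9):
    basic_idx = list(basic_idx)
    m, n = A.shape
    while True:
        B = A[:, basic_idx]
        xB = np.linalg.solve(B, b)                    # y_{n+1} = B^{-1} b
        nonbasic = [j for j in range(n) if j not in basic_idx]
        lam = np.linalg.solve(B.T, c[basic_idx])      # lambda = B^{-T} c_B
        rD = c[nonbasic] - A[:, nonbasic].T @ lam     # Step 1: reduced costs r_D
        if np.all(rD >= -tol):
            x = np.zeros(n)
            x[basic_idx] = xB
            return x                                  # optimal solution found
        q = nonbasic[int(np.argmin(rD))]              # Step 2: entering column a_q
        yq = np.linalg.solve(B, A[:, q])              # Step 3: y_q = B^{-1} a_q
        if np.all(yq <= tol):
            raise ValueError("the optimal value is -infinity")   # Step 4: unbounded
        ratios = np.where(yq > tol, xB / np.where(yq > tol, yq, 1.0), np.inf)
        p = int(np.argmin(ratios))
        basic_idx[p] = q                              # replace a_p by a_q and repeat
```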
