Introduction to Mathematical Programming
Ming Zhong (JHU), AMS Fall 2018
Lecture 22, October 22, 2018
Table of Contents
1. The Simplex Method, Part II
The Setting

Consider the linear programming problem:

    Minimize c^T x, subject to A x = b, x >= 0.

The feasible region (a polyhedral set) S = { x in R^n : A x = b, x >= 0 } is in standard form.

- If A x <= b is given, add slack variables y in R^m so that A x + y = b with y >= 0.
- If A x >= b is given, add surplus variables z in R^m so that A x - z = b with z >= 0.
- If some component x_j is unrestricted in sign, write x_j = x_j^+ - x_j^- with x_j^+, x_j^- >= 0.

We also assume that S is non-empty and that rank(A) = m.
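The conversions above can be sketched in code. This is a minimal illustration; the function name `to_standard_form` and the `sense` argument are my own, not from the lecture:

```python
import numpy as np

def to_standard_form(A, b, sense):
    """Convert constraints A x (<=, >=, =) b into A_std x_std = b with x_std >= 0.

    sense[i] is '<=', '>=', or '=' for row i.  A slack column is appended
    for each '<=' row and a surplus column for each '>=' row.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    extra = []
    for i, s in enumerate(sense):
        col = np.zeros(m)
        if s == '<=':
            col[i] = 1.0   # slack: A x + y = b
            extra.append(col)
        elif s == '>=':
            col[i] = -1.0  # surplus: A x - z = b
            extra.append(col)
    if extra:
        A = np.hstack([A, np.column_stack(extra)])
    return A, b

# Two constraints: x1 + 2 x2 <= 4 and 3 x1 + x2 >= 6.
A_std, b_std = to_standard_form([[1, 2], [3, 1]], [4, 6], ['<=', '>='])
# A_std is now 2 x 4: the original columns plus one slack and one surplus column.
```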
The Algorithm

The algorithm is given (and broken down) as follows.

Step 0: Find a starting extreme point x_1 with basis B_1 and set k = 1.
- The basis is B = { a_j : j in J } for some index set J subset of {1, 2, ..., n}, where the a_j are column vectors of A; the nonbasic columns form N = { a_j : j not in J }. Throughout, j refers to a column index of A.
- To save memory, one can store only the index sets defining B and N.
- How to find a starting extreme point with a given basis B will be discussed later.

Step 1: Let x_k be the extreme point associated with the basis B_k.
- Compute the reduced-cost vector c_B^T B_k^{-1} N - c_N^T. If this vector is nonpositive, stop: x_k is an optimal extreme point.
- Otherwise, pick the most positive component of c_B^T B_k^{-1} N - c_N^T and let j be the corresponding column index.
The Algorithm, cont.

Continuing Step 1:
- Let y_j = B_k^{-1} a_j. If y_j <= 0, stop: the objective value is unbounded along the ray

    { x_k + lambda d : lambda >= 0 },  where d = ( -y_j ; e_j ).

  Here j is the entering nonbasic index and e_j is a vector of zeros except for a 1 in the position corresponding to j.
- If some component of y_j is positive, go to Step 2.

Step 2: With b_k = B_k^{-1} b, compute the index l attaining the minimum ratio

    min over 1 <= i <= m of { (b_k)_i / (y_j)_i : (y_j)_i > 0 }.
The Algorithm, cont.

A few more steps. Form the new extreme point as follows:

    (x_{k+1})_i = (b_k)_i - [ (b_k)_l / (y_j)_l ] (y_j)_i   for i = 1, ..., m, i != l,
    (x_{k+1})_j = (b_k)_l / (y_j)_l,

and all other components of x_{k+1} are equal to zero.

- Form the new basis by deleting the column a_l from B_k and introducing a_j in its place.
- Increase k by 1 and repeat Step 1.
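Steps 0 through 2 can be collected into a compact revised-simplex sketch. This is a numpy-based illustration under stated assumptions: a starting feasible basis is supplied by the caller, and anti-cycling rules are omitted:

```python
import numpy as np

def revised_simplex(c, A, b, basis):
    """Revised simplex for  min c^T x  s.t.  A x = b, x >= 0.

    `basis` is a list of m column indices giving a starting basic feasible
    solution (finding one is the job of the two-phase or big-M method).
    Returns (x, 'optimal') or (None, 'unbounded').  No anti-cycling rule.
    """
    c = np.asarray(c, float); A = np.asarray(A, float); b = np.asarray(b, float)
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)              # current basic variables B^{-1} b
        y = np.linalg.solve(B.T, c[basis])       # simplex multipliers c_B^T B^{-1}
        reduced = y @ A - c                      # c_B^T B^{-1} a_j - c_j for all j
        reduced[basis] = 0.0
        j = int(np.argmax(reduced))              # most positive component
        if reduced[j] <= 1e-9:                   # Step 1: optimality test
            x = np.zeros(n); x[basis] = x_B
            return x, 'optimal'
        y_j = np.linalg.solve(B, A[:, j])        # Step 1: y_j = B^{-1} a_j
        if np.all(y_j <= 1e-9):                  # unbounded ray
            return None, 'unbounded'
        pos = y_j > 1e-9                         # Step 2: minimum-ratio test
        ratios = np.where(pos, x_B / np.where(pos, y_j, 1.0), np.inf)
        l = int(np.argmin(ratios))
        basis[l] = j                             # a_l leaves, a_j enters

# Example: min -x1 - x2  s.t.  x1 + x2 + s = 1, starting from the slack basis.
x, status = revised_simplex([-1.0, -1.0, 0.0], [[1.0, 1.0, 1.0]], [1.0], [2])
```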
Discussion: the Initial Extreme Point

Recall that the simplex method starts with an initial extreme point.
- Finding an initial extreme point of the set S = { x in R^n : A x = b, x >= 0 } involves decomposing A into [B, N] with B^{-1} b >= 0.
- An initial extreme point may not be conveniently available; we can overcome this by introducing artificial variables.
- There are two methods; both start from the standard form A x = b, x >= 0, with b >= 0 (if some b_i < 0, multiply the i-th original constraint by -1).

Two-Phase Method
- With x in R^n and x_a in R^m, add an artificial vector x_a: A x + x_a = b, with x, x_a >= 0.
Two-Phase Method, cont.

Continuing:
- Obviously x = 0, x_a = b represents an extreme point of the enlarged system.
- A feasible solution of the original system is obtained only if x_a = 0.
- We can use the simplex method to solve the Phase I problem:

    Minimize u^T x_a, subject to A x + x_a = b, x, x_a >= 0,

  where u is a vector of all 1's.
- At the end of Phase I, either x_a != 0 or x_a = 0.
  - If x_a != 0, we conclude that the original system is inconsistent (the feasible region is empty).
  - If x_a = 0, we obtain an extreme point of the original system. Starting from this extreme point, run Phase II on the original problem.
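A minimal Phase I sketch, using scipy's `linprog` as a stand-in LP solver; the function name `phase_one` and the tolerance are illustrative choices:

```python
import numpy as np
from scipy.optimize import linprog

def phase_one(A, b):
    """Phase I: minimize 1^T x_a subject to A x + x_a = b, x, x_a >= 0.

    Returns a feasible x for A x = b, x >= 0 if one exists, else None.
    """
    A = np.asarray(A, float); b = np.asarray(b, float)
    m, n = A.shape
    neg = b < 0                  # flip rows so that b >= 0, as the slide requires
    A[neg] *= -1; b[neg] *= -1
    c = np.concatenate([np.zeros(n), np.ones(m)])   # cost only on artificials
    A_eq = np.hstack([A, np.eye(m)])                # [A | I] [x; x_a] = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method='highs')
    if res.fun > 1e-8:           # some artificial stuck above zero: inconsistent
        return None
    return res.x[:n]

# x1 + x2 = 1 is feasible; x1 = -1 with x1 >= 0 is not.
x_feas = phase_one([[1.0, 1.0]], [1.0])
x_infeas = phase_one([[1.0]], [-1.0])
```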
Big-M Method

- We use an artificial vector x_a together with a large positive cost coefficient M > 0 (a scalar), so that each artificial variable is driven to zero:

    Minimize c^T x + M u^T x_a, subject to A x + x_a = b, x, x_a >= 0.

- M should be picked very large. Alternatively, one can proceed without specifying M numerically, in the spirit of the Two-Phase Method: the computation itself reveals a sufficiently large M once x_a = 0 is reached.
- A sufficiently large M forces every artificial variable to zero; if some artificial variable remains positive at optimality, the original problem is infeasible.
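A Big-M sketch on a small assumed example; the value M = 1e4 is an arbitrary "large enough" choice for this instance, and scipy's `linprog` again stands in for the simplex method:

```python
import numpy as np
from scipy.optimize import linprog

# Big-M problem: min c^T x + M 1^T x_a  s.t.  A x + x_a = b, x, x_a >= 0.
A = np.array([[1.0, 1.0]]); b = np.array([2.0]); c = np.array([1.0, 3.0])
M = 1e4                                    # "large enough" is problem-dependent
m, n = A.shape
c_big = np.concatenate([c, M * np.ones(m)])
A_eq = np.hstack([A, np.eye(m)])           # [A | I] [x; x_a] = b
res = linprog(c_big, A_eq=A_eq, b_eq=b, bounds=(0, None), method='highs')
x, x_a = res.x[:n], res.x[n:]
# The problem is feasible, so the penalty drives the artificial to zero:
# x = (2, 0), x_a = (0,), objective value 2.
```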
Duality in Linear Programming

Consider the linear program in its standard form:

    P:  Minimize c^T x, subject to A x = b, x >= 0.

Let us refer to this as the primal problem P. The following is called the dual of the foregoing problem:

    D:  Maximize b^T y, subject to A^T y <= c, y unrestricted.

We will discuss the relationship between P and D.
Primal and Dual Problems

Theorem. Let the pair of linear programs P and D be as defined before. Then:
- Weak duality: c^T x >= b^T y for any feasible solution x of P and any feasible solution y of D.
- Unbounded-infeasible relationship: if P is unbounded, then D is infeasible, and vice versa.
- Strong duality: if both P and D are feasible, they both have optimal solutions with the same objective value.

Proof. For any pair (x, y) of feasible solutions to P and D, we have

    c^T x >= y^T A x = y^T b,

since A^T y <= c and x >= 0.
The Proof, cont.

Proof, continued:
- If P is unbounded, then D must be infeasible; otherwise, any feasible solution to D would provide a lower bound on the objective value of P (by the previous part). The same argument applies with D unbounded and P infeasible.
- Now suppose P and D are both feasible. Neither can be unbounded (by the previous part), so both have optimal solutions.
- Let x = ( x_B ; x_N ) be an optimal basic feasible solution to P, where x_B = B^{-1} b and x_N = 0.
The Proof, cont.

Proof, continued:
- Consider y^T = c_B^T B^{-1}, where c = ( c_B ; c_N ). We have

    y^T A = c_B^T B^{-1} [B, N] = [ c_B^T, c_B^T B^{-1} N ] <= [ c_B^T, c_N^T ],

  since c_B^T B^{-1} N <= c_N^T by the optimality condition for the given basic feasible solution.
- Hence y is feasible for D; moreover y^T b = c_B^T B^{-1} b = c^T x, so by the previous part, y solves D.
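Strong duality can be checked numerically on a small assumed example by solving P and D separately, here with scipy's `linprog` (which minimizes, so D is solved as min -b^T y):

```python
import numpy as np
from scipy.optimize import linprog

# Primal P: min c^T x  s.t.  A x = b, x >= 0.
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method='highs')

# Dual D: max b^T y  s.t.  A^T y <= c, y unrestricted.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None), method='highs')

# Strong duality: the optimal objective values coincide.
# Here x* = (1, 0) and y* = 1, and both objectives equal 1.
```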
Consequences of the Previous Theorem

Corollary. If D is infeasible, then P is unbounded or infeasible, and vice versa.

Corollary. Let x and y be feasible solutions to the primal and dual problems P and D, respectively. Then x and y are optimal for P and D if and only if v_j x_j = 0 for j = 1, ..., n, where

    v = ( v_1, v_2, ..., v_n )^T = c - A^T y.
More on the Second Corollary

- v is the vector of slack variables in the dual constraints for the dual solution y.
- The condition v_j x_j = 0 is called the complementary slackness condition, and the primal and dual solutions are called complementary slack solutions.
- A given feasible solution of P is optimal if and only if there exists a complementary slack dual feasible solution, and vice versa.
The Proof

Proof. Let x and y be primal and dual feasible solutions. We have A x = b, x >= 0, and A^T y + v = c with v >= 0, where v is the vector of slack variables for y. Hence

    c^T x - b^T y = ( A^T y + v )^T x - b^T y = y^T A x + v^T x - y^T b = v^T x.

When x and y are both optimal, c^T x = b^T y by the previous theorem, and thus v^T x = 0. Conversely, if v^T x = 0, then c^T x = b^T y, and weak duality implies that x and y are both optimal.
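The identity c^T x - b^T y = v^T x and the complementary slackness condition can be verified on a small assumed example (the problem data and the optimal pair are my own illustration):

```python
import numpy as np

# P: min x1 + 2 x2  s.t.  x1 + x2 = 1, x >= 0;  D: max y  s.t.  y <= 1, y <= 2.
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x = np.array([1.0, 0.0])   # optimal primal solution (x1 basic)
y = np.array([1.0])        # optimal dual solution
v = c - A.T @ y            # dual slack vector: here v = (0, 1)

# Complementary slackness: v_j x_j = 0 for every j at an optimal pair.
assert np.allclose(v * x, 0.0)
# The duality gap c^T x - b^T y equals v^T x (zero at optimality).
assert np.isclose(c @ x - b @ y, v @ x)
```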