CO 250 Course Notes: Introduction to Optimization (Spring 2017)
richardwu.ca


CO 250 Course Notes
Introduction to Optimization
Kanstantsin Pashkovich
Spring 2017, University of Waterloo
Last Revision: March 4, 2018

Table of Contents

1 May 8, 2017
  1.1 IP: Discrete Values
2 May 10, 2017
  2.1 Incident Edges Notation
  2.2 Matching Edges
3 May 12, 2017
  3.1 IP is LP?
4 May 15, 2017
  4.1 Non-linear Programming (NLP)
5 May 17, 2017
  5.1 Feasibility, Optimal, and Unbounded
  5.2 Certificate of Infeasibility
6 May 19, 2017
  6.1 Other Ways to Show Infeasibility
  6.2 Unbounded Solutions
  6.3 Certificate of Unboundedness
7 May 23, 2017
  7.1 More Unbounded Examples
  7.2 Optimal Solutions
  7.3 Certificate of Optimality
  7.4 Harder Example for Optimality
8 May 24, 2017
  8.1 Standard Equality Form
  8.2 Equivalent LP Problems
9 May 26, 2017
  9.1 Simplex Iteration
  9.2 Basis
10 May 31, 2017
  10.1 Basic and Non-Basic
  10.2 Canonical Form
11 June 2, 2017
  11.1 Converting to Canonical Form
  11.2 Simplex: Better Feasible Solution
12 June 5, 2017
  12.1 Simplex from Canonical Form
13 June 7, 2017
  13.1 Bland's Rule
  13.2 Unboundedness from Simplex
14 June 9, 2017
  14.1 From SEF to Simplex (Given a Feasible Basis)
  14.2 Two-Phase Simplex
15 June 12, 2017
  15.1 Infeasible LP from Auxiliary LP
16 June 14, 2017
  16.1 Certificate of Optimality of Auxiliary LP is Certificate of Infeasibility of Original LP
  16.2 Feasible Two-Phase Simplex
  16.3 Summary of Two-Phase Simplex
17 June 16, 2017
  17.1 Fundamental Theorem of LP (SEF)
  17.2 Fundamental Theorem of LP
  17.3 Geometry
18 June 19, 2017
  18.1 Hyperplane and Halfspace
  18.2 Converting to Halfspaces (a Polyhedron)
  18.3 Convexity
19 June 21, 2017
  19.1 Extreme Points
  19.2 Linear Constraints to Polyhedron
20 June 23, 2017
  20.1 Non-Polyhedron Example
  20.2 Duality
  20.3 Weak Duality in SEF
  20.4 Primal vs Duality
21 June 26, 2017
  21.1 Optimality of Primal and Dual
  21.2 Infeasibility of Primal and Dual
  21.3 Strong Duality in SEF
22 June 28, 2017
  22.1 Primal and Dual Solution Types Matrix
  22.2 Deriving Dual from Primal (non-SEF)
23 June 30, 2017
  23.1 Another Example of Dual from Primal
  23.2 Dual of Dual is Primal
  23.3 Complementary Slackness (Special Case)
24 July 5, 2017
  24.1 Cone
  24.2 Geometric Characterization of Optimality
25 July 7, 2017
  25.1 Minimum Cost Perfect Matching
  25.2 Showing Perfect Matching is Minimum
  25.3 Reduced Cost and Potential
  25.4 Feasibility for Minimum Cost Perfect Matching
  25.5 Optimality for Perfect Matchings
  25.6 General Primal-Dual for Minimum Cost Perfect Matching
  25.7 Bipartite Graphs
  25.8 Perfect Matching and Bipartite Graphs
  25.9 Hall's Theorem
26 July 10, 2017
  26.1 Hungarian Algorithm for Minimum Cost Perfect Matching for Bipartite Graphs
27 July 12, 2017
  27.1 Integer Programming as LP Relaxation (with Geometry)
  27.2 Convex Hull
  27.3 IP to LP with Convex Hull
28 July 14, 2017
  28.1 Cutting Planes
  28.2 Obtaining the Cutting Plane
29 July 17, 2017
  29.1 Solving LP with Added Cutting Plane
  29.2 Formalized Cutting Plane Algorithm
  29.3 Non-linear Programming: Convex Functions
30 July 19, 2017
  30.1 Proving a Function is Convex
  30.2 Level Set
31 July 21, 2017
  31.1 Epigraph
  31.2 Non-Linear Programming Revisited
  31.3 Convex NLPs
  31.4 Convex NLP Optimality Example
32 July 24, 2017
  32.1 Another Convex NLP Optimality Example
  32.2 Optimality for Convex NLP
  32.3 Converse of Optimality Proposition
  32.4 Slater Point
  32.5 KKT Theorem

Abstract

These notes are intended as a resource for myself; past, present, or future students of this course; and anyone interested in the material. The goal is to provide an end-to-end resource that covers all material discussed in the course, displayed in an organized manner. These notes are my interpretation and transcription of the content covered in lectures. The instructor has not verified or confirmed the accuracy of these notes, and any discrepancies, misunderstandings, typos, etc. as these notes relate to the course's content are not the responsibility of the instructor. If you spot any errors or would like to contribute, please contact me directly.

1 May 8, 2017

1.1 IP: Discrete Values

How would you represent a variable x_C ∈ {0, 2} (this is not a valid LP constraint)? Introduce a binary variable z_C to indicate whether item C is used or not:

    0 ≤ z_C ≤ 1, z_C integer
    x_C = 2 z_C

Note in IP we are allowed to write z_C ∈ {0, 1} to represent this constraint.

Similarly, for more than 2 discrete values, e.g. x ∈ {A, B, C}, define binary variables y_A, y_B, y_C ∈ {0, 1} for each value in {A, B, C}:

    x = A y_A + B y_B + C y_C
    y_A + y_B + y_C = 1

2 May 10, 2017

2.1 Incident Edges Notation

To indicate all incident edges of a vertex v we write

    S(v) = {e_1, e_2, ..., e_n}

where the e_i are all edges that have v as one of their endpoints.

2.2 Matching Edges

A set of edges M is a matching in a graph G with vertices V if and only if

    |S(v) ∩ M| ≤ 1 for all v ∈ V

3 May 12, 2017

3.1 IP is LP?

Note that Integer Programming (IP) is not a subset of Linear Programming (LP). It does, however, use linear constraints with an integrality restriction.
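The {A, B, C} encoding above can be sanity-checked by brute force; a minimal sketch (the three numeric values A, B, C are hypothetical, chosen only for illustration):

```python
from itertools import product

# Hypothetical discrete values to encode: x ∈ {A, B, C}.
A, B, C = 3.0, 5.0, 9.0

# Enumerate all 0/1 choices of (y_A, y_B, y_C) satisfying
# y_A + y_B + y_C = 1, and collect x = A*y_A + B*y_B + C*y_C.
attainable = set()
for yA, yB, yC in product((0, 1), repeat=3):
    if yA + yB + yC == 1:  # exactly one indicator is on
        attainable.add(A * yA + B * yB + C * yC)

print(attainable)  # {3.0, 5.0, 9.0} — exactly the allowed discrete values
```

The constraint y_A + y_B + y_C = 1 is what forces exactly one value to be selected; without it the encoding could also produce sums of values.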

4 May 15, 2017

4.1 Non-linear Programming (NLP)

Let f : R^n → R and g_i : R^n → R for all i ∈ {1, ..., m}. A non-linear programming (NLP) problem is an optimization problem of the form

    min f(x)
    s.t. g_1(x) ≤ 0
         ...
         g_m(x) ≤ 0

Example 4.1. We have three different figures (regions in the 2D Cartesian plane) F_1, F_2, F_3 ⊆ R^2 and a point x = (x_1, x_2). Determine a point from F_1 ∪ F_2 ∪ F_3 which is closest to the point x.

- F_1 is a circle of radius 3 centered at (−4, 0).
- F_2 is a rectangle with bottom-left corner (1, 2) and top-right corner (6, 5).
- F_3 is a triangle with corners (2, 1), (4, 5), (2, 5).

For two points x, y ∈ R^2 the distance between them is sqrt((x_1 − y_1)^2 + (x_2 − y_2)^2).

Define the variables

    t = (t_1, t_2) ∈ F_1
    y = (y_1, y_2) ∈ F_2
    p = (p_1, p_2) ∈ F_3
    z_1, z_2, z_3 indicating the figure in which the closest point to x lies

with the constraints

    (t_1 + 4)^2 + t_2^2 ≤ 9                  (t_1, t_2) ∈ F_1
    1 ≤ y_1 ≤ 6,  2 ≤ y_2 ≤ 5                (y_1, y_2) ∈ F_2
    p_1 ≥ 2,  p_2 ≤ 5,  2p_1 − p_2 ≤ 3       (p_1, p_2) ∈ F_3
    z_1, z_2, z_3 ∈ {0, 1}
    z_1 + z_2 + z_3 = 1

Finally, the objective function is

    min  z_1 sqrt((x_1 − t_1)^2 + (x_2 − t_2)^2)
       + z_2 sqrt((x_1 − y_1)^2 + (x_2 − y_2)^2)
       + z_3 sqrt((x_1 − p_1)^2 + (x_2 − p_2)^2)

Note that in order for our constraints to fit our definition of NLP (i.e. g_i(x) ≤ 0), we must reformulate them:

    (t_1 + 4)^2 + t_2^2 ≤ 9   becomes   (t_1 + 4)^2 + t_2^2 − 9 ≤ 0
    y_1 ≥ 1                   becomes   1 − y_1 ≤ 0
    ...
    z_1 + z_2 + z_3 = 1       becomes   z_1 + z_2 + z_3 − 1 ≤ 0  and  −(z_1 + z_2 + z_3) + 1 ≤ 0

5 May 17, 2017

5.1 Feasibility, Optimal, and Unbounded

Example 5.1. We are given an LP problem denoted by (P), where (P) is a maximization LP problem. Does (P) have a feasible solution?

    max 7x_1 + 3x_2
    s.t. 2x_1 − x_2 = 4
         4x_1 − 2x_2 = 9
         x_1, x_2 ≥ 0

This is clearly infeasible (divide the 2nd equation by 2: same LHS as equation 1 but a different RHS), thus (P) is infeasible.

We can fix this by changing equation 2 to

    4x_1 − 2x_2 = 8

Note that x = (2, 0) is now a feasible solution.

A feasible solution x̄ is an optimal solution for (P) if for every feasible solution x, the value of x̄ is not smaller than the value of x. Here the value of x means the value of the objective function of (P) at x. The reverse inequality defines optimality for minimization problems.

Note for (P), we can describe a family of feasible solutions as

    x = (2, 0) + t(1, 2)

for every t ≥ 0. That is,

    x_1 = 2 + t
    x_2 = 2t

So the value of x is 14 + 13t: there is no maximum along this line. If for every α there is a feasible solution whose value is larger than α, then (P) is unbounded.

5.2 Certificate of Infeasibility

Example 5.2. Let our LP problem be

    max x_1 + 2x_2 + x_3

    s.t. (1) x_1 − x_2 − x_3 = 1
         (2) −2x_1 + x_2 + x_3 = −5
         (3) x_1 + x_2 + 2x_3 = 2
         (4) x_1, x_2, x_3 ≥ 0

Let's combine the three equations. Take (1) + 2(2) + 3(3) (an arbitrary linear combination), which results in

    0x_1 + 4x_2 + 7x_3 = −3

Note this is obviously infeasible, since one cannot produce a negative RHS given x_2, x_3 ≥ 0. Therefore this LP problem is infeasible.

Proposition 5.1. (Prop 2.1, page 46) Let A be a matrix and b be a vector. Then the system

    Ax = b, x ≥ 0

has no solution if there is a vector y such that

1. y^T A ≥ 0^T, AND
2. y^T b < 0

where in the above example y = (1, 2, 3). Such a y that satisfies (1) and (2) is called a certificate of infeasibility for the given system (or constraints).

Proof. Assume there is a solution for this system, denoted by x̄. We have

    A x̄ = b, x̄ ≥ 0

where the constraint x̄ ≥ 0 holds because our quantities must be non-negative. Then

    y^T A x̄ = y^T b

Let z^T = y^T A, which we know satisfies z ≥ 0 by (1). So z = (z_1, ..., z_n) where z_i ≥ 0 for all 1 ≤ i ≤ n. Note

    y^T A x̄ = z^T x̄ = z_1 x̄_1 + z_2 x̄_2 + ... + z_n x̄_n ≥ 0

since x̄_i ≥ 0 for all 1 ≤ i ≤ n. Therefore the LHS is ≥ 0 while the RHS is < 0 by (2), which is a contradiction.

6 May 19, 2017

6.1 Other Ways to Show Infeasibility

Example 6.1. Let there be an optimization problem whose objective function is

    max x_1 + x_2 + x_3

    s.t. x_1 + x_2 + 3x_3 = 5
         x_1 + x_2 + 2x_3 = 4
         x_1 ≤ 0, x_2 ≤ 0

Let us sum the equalities multiplied by the corresponding coefficients in y = (2, −3). We get

    (−1)x_1 + (−1)x_2 + 0x_3 = −2

Let us assume that there is a feasible solution, denoted by x̄. Then

    (−1)x̄_1 + (−1)x̄_2 + 0x̄_3 = −2

Note (−1)x̄_1 ≥ 0, (−1)x̄_2 ≥ 0 and 0x̄_3 = 0 (from our constraints x_1 ≤ 0, x_2 ≤ 0). The LHS is therefore ≥ 0 but the RHS is < 0. We obtain a contradiction to the assumption that there is a feasible solution; therefore this problem is infeasible.

6.2 Unbounded Solutions

Example 6.2. Recall from a previous example

    max 7x_1 + 3x_2
    s.t. 2x_1 − x_2 = 4
         4x_1 − 2x_2 = 8
         x_1 ≥ 0, x_2 ≥ 0

Note that we found x̄ = (2, 0) is a feasible solution. More generally, we note

    x = (2, 0) + t(1, 2)

is feasible for every t ≥ 0, with value 14 + 13t.

6.3 Certificate of Unboundedness

Proposition 6.1. Given the optimization problem

    max c^T x
    s.t. Ax = b
         x ≥ 0

let x̄ be a feasible solution for the above system, and let d be a vector such that

    (1) Ad = 0
    (2) d ≥ 0
    (3) c^T d > 0

Then the LP is unbounded. Such a pair x̄, d is called a certificate of unboundedness.

Example 6.3. In our previous example

    A = [ 2  −1 ]
        [ 4  −2 ]

and c = (7, 3). Then in this example d = (1, 2), the direction of our general equation for all feasible solutions.

Proof. Let us assume the contrary, that the LP is not unbounded. In other words, there exists α such that every feasible solution has value at most α. Define

    x' = x̄ + td for some t ≥ 0

Then x' is a feasible solution. Indeed, we can check

    A x' = A(x̄ + td) = A x̄ + A(td) = A x̄ + tAd = b

Remember that x̄ is feasible, so A x̄ = b, and Ad = 0 from (1). Moreover x' = x̄ + td ≥ 0, since x̄ ≥ 0 (x̄ is feasible), t ≥ 0 (by definition), and d ≥ 0 from (2). Inserting x' into our objective function:

    c^T x' = c^T (x̄ + td) = c^T x̄ + c^T (td) = c^T x̄ + t c^T d

where c^T d > 0 from (3). If we choose t ≥ 0 so that

    c^T x' = c^T x̄ + t c^T d > α

for our chosen fixed α, then we obtain a contradiction. Solving for this t:

    t > (α − c^T x̄) / (c^T d)

7 May 23, 2017

7.1 More Unbounded Examples

Example 7.1. Let there be a minimization optimization problem

    min x_1 + x_2 + x_3

    s.t. 6x_1 − x_2 + 4x_3 = 29
         2x_1 − x_2 + 2x_3 = 15
         x_1 ≥ 0, x_2 ≤ 0

A feasible solution is x̄ = (3, −7, 1); take d = (1, −2, −2). Other feasible solutions can then be written as

    x' = x̄ + td for some t ≥ 0

Note that x' is a feasible solution. We can check this componentwise:

    x'_1 = 3 + t · 1
    x'_2 = −7 + t · (−2)
    x'_3 = 1 + t · (−2)

Plugging into our constraints:

    6x'_1 − x'_2 + 4x'_3 = 6(3 + t) − (−7 − 2t) + 4(1 − 2t) = 29 + 0t = 29
    2x'_1 − x'_2 + 2x'_3 = 2(3 + t) − (−7 − 2t) + 2(1 − 2t) = 15 + 0t = 15

Moreover, for our constraints x_1 ≥ 0, x_2 ≤ 0:

    x'_1 = 3 + t ≥ 0 for every t ≥ 0
    x'_2 = −7 − 2t ≤ 0 for every t ≥ 0

Therefore x' is a feasible solution for every t ≥ 0. The value of x' is

    x'_1 + x'_2 + x'_3 = −3 − 3t

which becomes arbitrarily small as t increases, so this minimization problem is unbounded.

7.2 Optimal Solutions

Example 7.2. Suppose we have

    max 5x_2 − 9x_3 + 7x_4
    s.t. (1) x_1 + 2x_2 − 3x_3 + x_4 = 6
         (2) 2x_1 − x_2 + x_3 − 5x_4 = −5
         x_1, x_2, x_3, x_4 ≥ 0

Note that x_1 is missing from the objective function. We want to combine our constraints so the combination looks similar to the objective function. Eliminate x_1 by a linear combination of the two constraints, namely 2(1) − 1(2):

    0x_1 + 5x_2 − 7x_3 + 7x_4 = 17

Let us suppose x̄ = (1, 2, 0, 1) is an optimal solution. From the objective function,

    5x̄_2 − 9x̄_3 + 7x̄_4 = 17

To show that x̄ is an optimal solution we need to prove that for every feasible solution x

    5x_2 − 9x_3 + 7x_4 ≤ 17

since this is a maximization problem and we claimed x̄ is optimal. From our combined constraint we have, for every feasible x,

    5x_2 − 7x_3 + 7x_4 = 17

Since x is feasible, x_3 ≥ 0, thus −2x_3 ≤ 0 and

    5x_2 − 9x_3 + 7x_4 = (5x_2 − 7x_3 + 7x_4) − 2x_3 = 17 − 2x_3 ≤ 17

7.3 Certificate of Optimality

Proposition 7.1. Suppose we are given an LP problem

    max c^T x
    s.t. Ax = b
         x ≥ 0

and a feasible solution x̄. If there is a vector y such that

    (1) A^T y ≥ c
    (2) c^T x̄ = y^T b

then x̄ is an optimal solution for the LP problem.

Proof. We need to prove that for every feasible solution x, we have

    c^T x̄ ≥ c^T x

Let us compute:

    c^T x̄ = y^T b                         (from (2))
          = y^T A x                        (since Ax = b for feasible x)
          = c^T x + (y^T A − c^T) x

Note that y^T A − c^T ≥ 0^T from (1) and x ≥ 0 from feasibility. Thus the right summand is always ≥ 0, so

    c^T x̄ = c^T x + (y^T A − c^T) x ≥ c^T x

Such a vector y is called a certificate of optimality for x̄.
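Proposition 7.1 is easy to verify mechanically. Below is a minimal NumPy sketch; the tiny LP used here (max x_1 s.t. x_1 + x_2 = 2, x ≥ 0, with x̄ = (2, 0) and y = (1)) is a made-up illustration, not an example from the notes:

```python
import numpy as np

def is_certificate_of_optimality(A, b, c, x_bar, y, tol=1e-9):
    """Check Proposition 7.1: y certifies optimality of the feasible x_bar
    for max c^T x s.t. Ax = b, x >= 0."""
    feasible = np.allclose(A @ x_bar, b) and np.all(x_bar >= -tol)
    cond1 = np.all(A.T @ y >= c - tol)       # (1) A^T y >= c
    cond2 = np.isclose(c @ x_bar, y @ b)     # (2) c^T x_bar = y^T b
    return bool(feasible and cond1 and cond2)

# Hypothetical example: max x_1 s.t. x_1 + x_2 = 2, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
c = np.array([1.0, 0.0])
x_bar = np.array([2.0, 0.0])  # feasible, value 2
y = np.array([1.0])           # A^T y = (1, 1) >= (1, 0); y^T b = 2 = c^T x_bar

print(is_certificate_of_optimality(A, b, c, x_bar, y))  # True
```

The same helper can be pointed at any of the later worked examples once A, b, c, x̄, y are known.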

7.4 Harder Example for Optimality

Example 7.3. Let there be

    max 14x_1 + x_2 + 10x_3 + 3x_4
    s.t. (1) 2x_1 + x_2 + 3x_3 − x_4 = 3
         (2) 4x_1 − x_2 + x_3 + 2x_4 = 9
         x_1, x_3 ≥ 0
         x_2, x_4 ≤ 0

Taking y = (3, 2), i.e. 3(1) + 2(2), we get

    14x_1 + x_2 + 11x_3 + x_4 = 27

We claim x̄ = (2, −1, 0, 0) is a feasible solution. Let us show that x̄ is also an optimal solution. Going back to our combined constraint, for every feasible solution x we have

    14x_1 + x_2 + 10x_3 + 3x_4 = (14x_1 + x_2 + 11x_3 + x_4) − x_3 + 2x_4

Note that −x_3 ≤ 0 (since x_3 ≥ 0) and 2x_4 ≤ 0 (since x_4 ≤ 0). So from the objective function

    14x_1 + x_2 + 10x_3 + 3x_4 ≤ 14x_1 + x_2 + 11x_3 + x_4 = 27

Thus every feasible solution has value ≤ 27, and x̄ is optimal since c^T x̄ = 27.

8 May 24, 2017

8.1 Standard Equality Form

An LP problem is in Standard Equality Form (SEF) if it is of the form

    max c^T x + z̄
    s.t. Ax = b
         x ≥ 0

where c, z̄, A, b are a fixed vector, number, matrix, and vector, respectively. That is, it must satisfy:

    (1) It is a maximization problem
    (2) Every variable has a non-negativity constraint
    (3) Except for non-negativity constraints, all constraints are equations
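The bound in Example 7.3 can be verified numerically. The sketch below recomputes the combined constraint y^T A, checks that x̄ attains value 27, and exhibits the gap vector c − y^T A whose signs (paired with the sign constraints on x_3, x_4) give the bound:

```python
import numpy as np

# Data from Example 7.3 (as reconstructed above).
A = np.array([[2.0,  1.0, 3.0, -1.0],
              [4.0, -1.0, 1.0,  2.0]])
b = np.array([3.0, 9.0])
c = np.array([14.0, 1.0, 10.0, 3.0])
y = np.array([3.0, 2.0])
x_bar = np.array([2.0, -1.0, 0.0, 0.0])

# The combined constraint 3(1) + 2(2): coefficients y^T A = (14, 1, 11, 1),
# RHS y^T b = 27.
combined = y @ A
rhs = y @ b

# x_bar is feasible and attains value 27.
assert np.allclose(A @ x_bar, b)
value = c @ x_bar

# Objective minus combined constraint: c - y^T A = (0, 0, -1, 2).
# Since x_3 >= 0 and x_4 <= 0 for feasible x, both -x_3 and 2x_4 are <= 0,
# so c^T x = 27 - x_3 + 2x_4 <= 27 for every feasible x.
gap = c - combined
print(combined, rhs, value, gap)
```

Here the gap vector is zero on x_1, x_2, so the inequality is tight exactly when x_3 = x_4 = 0, which is what x̄ achieves.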

8.2 Equivalent LP Problems

What does it mean for two LP problems (P) and (Q) to be equivalent?

    (1) (P) is infeasible ⇔ (Q) is infeasible
    (2) (P) is unbounded ⇔ (Q) is unbounded
    (3) From an optimal solution for (P), we can construct an optimal solution for (Q) (and vice versa)

Example 8.1. Given

    min 7x_1 + 3x_2 − 2x_3
    s.t. x_1 + x_2 + x_3 ≥ 7
         2x_1 − x_2 + 7x_3 ≤ 1
         x_1 + x_2 − x_3 = 0
         x_1 ≤ 0
         x_2 ≥ 0
         x_3 free

convert this to Standard Equality Form.

Step 1: Change minimization to maximization (if necessary) by negating every term in the objective:

    min 7x_1 + 3x_2 − 2x_3   →   max −7x_1 − 3x_2 + 2x_3

Step 2: Convert all inequalities to equations. In the example, every feasible solution has

    x_1 + x_2 + x_3 ≥ 7

so create a new surplus variable x_4 ≥ 0 such that

    x_1 + x_2 + x_3 − x_4 = 7

(for a ≤ constraint, add a slack variable instead, as with x_5 below). At the end of step 2, we have for our example

    max −7x_1 − 3x_2 + 2x_3
    s.t. x_1 + x_2 + x_3 − x_4 = 7
         2x_1 − x_2 + 7x_3 + x_5 = 1
         x_1 + x_2 − x_3 = 0
         x_1 ≤ 0, x_2 ≥ 0, x_3 free, x_4 ≥ 0, x_5 ≥ 0

Step 3: Replace all non-positive variables by non-negative variables. In the example, substitute x_1 ↦ −x_1 (so the new x_1 is ≥ 0):

    max 7x_1 − 3x_2 + 2x_3

    s.t. −x_1 + x_2 + x_3 − x_4 = 7
         −2x_1 − x_2 + 7x_3 + x_5 = 1
         −x_1 + x_2 − x_3 = 0
         x_1 ≥ 0, x_2 ≥ 0, x_3 free, x_4 ≥ 0, x_5 ≥ 0

Step 4: Replace all free variables by a difference of two non-negative variables. In our example, x_3 = x_3^+ − x_3^− where x_3^+, x_3^− ≥ 0. Thus at the end of step 4 we have for our example

    max 7x_1 − 3x_2 + 2x_3^+ − 2x_3^−
    s.t. −x_1 + x_2 + x_3^+ − x_3^− − x_4 = 7
         −2x_1 − x_2 + 7x_3^+ − 7x_3^− + x_5 = 1
         −x_1 + x_2 − x_3^+ + x_3^− = 0
         x_1, x_2, x_3^+, x_3^−, x_4, x_5 ≥ 0

9 May 26, 2017

9.1 Simplex Iteration

Example 9.1. Given

    min 5x_1 − x_2 − 4x_3 − 4x_4
    s.t. −4x_1 + x_2 + 3x_3 + 3x_4 ≤ 17
         −x_1 + x_3 + x_4 ≤ 4
         −2x_1 + x_2 + 2x_3 + 3x_4 ≤ 10
         x_1 ≤ 0, x_2, x_3, x_4 ≥ 0

First we must transform this LP problem to Standard Equality Form (SEF):

    max −5x_1 + x_2 + 4x_3 + 4x_4
    s.t. −4x_1 + x_2 + 3x_3 + 3x_4 + x_5 = 17
         −x_1 + x_3 + x_4 + x_6 = 4
         −2x_1 + x_2 + 2x_3 + 3x_4 + x_7 = 10
         x_1 ≤ 0, x_2, ..., x_7 ≥ 0

Then we substitute x_1 ↦ −x_1 and get

    max 5x_1 + x_2 + 4x_3 + 4x_4
    s.t. (1) 4x_1 + x_2 + 3x_3 + 3x_4 + x_5 = 17
         (2) x_1 + x_3 + x_4 + x_6 = 4
         (3) 2x_1 + x_2 + 2x_3 + 3x_4 + x_7 = 10
         x_1, ..., x_7 ≥ 0

which is our 1st iteration. An obvious solution is x̄ = (0, 0, 0, 0, 17, 4, 10) (just read off the slack variables in each equation). Is this optimal? Probably not (in fact, no). From our objective function we see that x_1, ..., x_4 contribute to the objective value, but we have set them to 0 in favor of x_5, ..., x_7.

What if we try to find a solution with some x_j > 0, j ∈ {1, ..., 4}, that fits within our constraints? In practice, we choose the first variable with a coefficient > 0 in the objective to increase (Bland's Rule). Let's try to increase (and maximize) t = x_1, t ≥ 0, and modify x_5, x_6, x_7 accordingly while leaving x_2, x_3, x_4 = 0. Solving for the slack variable in each of the constraints:

    4x_1 + x_5 = 17  ⇒  x_5 = 17 − 4t
    x_1 + x_6 = 4    ⇒  x_6 = 4 − t
    2x_1 + x_7 = 10  ⇒  x_7 = 10 − 2t

Note we cannot simply set t = 100, for example, since x_5, x_6, x_7 ≥ 0. Requiring each of the new expressions to be ≥ 0, we get t ≤ 17/4, t ≤ 4, t ≤ 5, respectively. Since ALL of them have to be non-negative, we take the minimum of these bounds, i.e. t ≤ min{17/4, 4, 5} = 4. Letting t = 4 we get the new solution x̄ = (4, 0, 0, 0, 1, 0, 2) with value 20.

Note that our t was bounded by equation (2). Let us eliminate x_1 from the other equations (1) and (3) by subtracting suitable multiples of (2). For the objective function, we can write (2) as an expression equal to 0 and then subtract a multiple of that expression from the objective. We get

    max x_2 − x_3 − x_4 − 5x_6 + 20
    s.t. (1) x_2 − x_3 − x_4 + x_5 − 4x_6 = 1
         (2) x_1 + x_3 + x_4 + x_6 = 4
         (3) x_2 + x_4 − 2x_6 + x_7 = 2

which is our 2nd iteration. This allows us to keep x_1 = 4 but now optimize for x_2. Setting x_3, x_4 = 0, writing x_2 = t and solving for the basic variables:

    x_2 + x_5 = 1  ⇒  x_5 = 1 − t ≥ 0  ⇒  t ≤ 1
    x_1 = 4 ≥ 0       (unaffected: x_2 does not appear in (2))
    x_2 + x_7 = 2  ⇒  x_7 = 2 − t ≥ 0  ⇒  t ≤ 2

Thus we have t ≤ min{1, 2} = 1. Increasing x_2 in the current solution to the value t = 1 (leaving x_3, x_4, x_6 = 0) we obtain the better solution x̄ = (4, 1, 0, 0, 0, 0, 1) with value equal to 21.

By subtracting (1) from the objective function (and from (3)) to eliminate x_2, we get

    max −x_5 − x_6 + 21
    s.t. (1) x_2 − x_3 − x_4 + x_5 − 4x_6 = 1
         (2) x_1 + x_3 + x_4 + x_6 = 4
         (3) x_3 + 2x_4 − x_5 + 2x_6 + x_7 = 1

which is our 3rd iteration. Observe that our last solution x̄ = (4, 1, 0, 0, 0, 0, 1) with value 21 is optimal, since x_5, x_6 ≥ 0 implies Z_3(x) ≤ 21 (Z_3 is the objective function of the 3rd iteration). To prove this, we can find a certificate of optimality for x̄ from the final LP: that is, y ∈ R^3 s.t.

    1. y^T A ≥ c^T
    2. y^T b = c^T x̄

We get y by taking the negatives of the coefficients of (x_5, x_6, x_7) in the final objective function, so y = −(−1, −1, 0) = (1, 1, 0). Multiplying out y with our initial LP:

    y^T b = (1, 1, 0) · (17, 4, 10) = 17 + 4 = 21
    c^T x̄ = (5, 1, 4, 4, 0, 0, 0) · (4, 1, 0, 0, 0, 0, 1) = 20 + 1 = 21

so y^T b = 21 = Z_3(x̄) = c^T x̄.

Note that in the 3 iterations in our example, the LPs were in a special form: the columns of the constraint matrix corresponding to the non-zero entries of the current solution (for a given iteration) are LI (linearly independent).

9.2 Basis

Definition 9.1. Suppose A is an m × n constraint matrix of an LP in SEF, such that rank(A) = m. Then any subset of m column indices whose columns are LI is called a basis.

Example 9.2. {1, 5, 7} is clearly a basis for the LP in the 2nd iteration. So is {1, 5, 6}.

10 May 31, 2017

10.1 Basic and Non-Basic

For a basis B, we say that x_j with j ∈ B is a basic variable and x_j with j ∉ B is non-basic. Let us denote N = {1, 2, ..., n} \ B. We'll also consider the restrictions of A, x, c to B and N, denoted A_B, x_B, c_B and A_N, x_N, c_N respectively, for the basic and non-basic parts.

Example 10.0.1. In the previous example, one basis is {5, 6, 7}.
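The certificate above can be checked mechanically against the iteration-1 data. A minimal NumPy sketch:

```python
import numpy as np

# Iteration-1 (SEF) data for Example 9.1.
A = np.array([[4.0, 1.0, 3.0, 3.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0],
              [2.0, 1.0, 2.0, 3.0, 0.0, 0.0, 1.0]])
b = np.array([17.0, 4.0, 10.0])
c = np.array([5.0, 1.0, 4.0, 4.0, 0.0, 0.0, 0.0])

x_bar = np.array([4.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0])  # final simplex solution
y = np.array([1.0, 1.0, 0.0])                           # claimed certificate

assert np.allclose(A @ x_bar, b) and np.all(x_bar >= 0)  # x_bar is feasible
assert np.all(y @ A >= c)                                # (1) y^T A >= c^T
assert np.isclose(y @ b, c @ x_bar)                      # (2) y^T b = c^T x_bar
print(c @ x_bar)  # 21.0
```

Note that y^T A is tight (equal to c) exactly on the structural columns 1–4, and slack on the columns of x_5, x_6, x_7 where c is 0.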

Note that with B = {5, 6, 7} we have N = {1, 2, 3, 4}. Thus we have (from the 1st iteration)

    A_B = I (the 3 × 3 identity),   c_B = (0, 0, 0)

    A_N = [ 4  1  3  3 ]
          [ 1  0  1  1 ],   c_N = (5, 1, 4, 4)
          [ 2  1  2  3 ]

Note that in our 1st iteration we had the naive solution x̄ = (0, 0, 0, 0, 17, 4, 10). Thus x̄_B = (17, 4, 10) and x̄_N = (0, 0, 0, 0). This is a basic solution.

Definition 10.0.2. We say a solution x̄ ∈ R^n is basic for basis B if x̄_N = 0 and A x̄ = b.

Note that A = [A_B A_N] after permuting columns if necessary. A_B has m columns and A_N has n − m columns. We can show that x̄_B always exists given a basis: since x̄_N = 0 we have A_N x̄_N = 0, so

    A x̄ = b
    [A_B A_N] [x̄_B; x̄_N] = b
    A_B x̄_B + A_N x̄_N = b
    A_B x̄_B = b
    x̄_B = A_B^{-1} b

where A_B is invertible since its columns are LI. So given a basis, we can always find the basic solution corresponding to it.

Suppose we take B = {1, 2, 5}; then by permuting columns we can write them in the order (1, 2, 5, 3, 4, 6, 7), i.e.

    A = [ 4  1  1  3  3  0  0 ]
        [ 1  0  0  1  1  1  0 ]
        [ 2  1  0  2  3  0  1 ]

Example 10.0.3. Let x ∈ R^6, and consider a system Ax = b with A ∈ R^{2×6} and b = (2, −4), whose second and third columns are A_2 = (2, 1)^T and A_3 = (1, 1)^T. Are the following bases?

1. {2, 3} — yes: A_2 and A_3 are LI
2. {2, 6} — no: the two columns are not LI
3. {1} — no: a basis must contain exactly m = 2 indices; similarly {3, 4, 5} is not a basis

The basic solution for B = {2, 3} is given by A_B x_B = b:

    [ 2  1 ] [ x_2 ]   [  2 ]
    [ 1  1 ] [ x_3 ] = [ −4 ]

which gives us

    2x_2 + x_3 = 2
    x_2 + x_3 = −4

Solving gives us x_2 = 6 and x_3 = −10. Thus the basic solution would be x̄ = (0, 6, −10, 0, 0, 0). Note however that x_3 < 0, thus this solution is infeasible.

Definition 10.0.4. We say that a basic solution x̄ (for basis B) is feasible (a basic feasible solution) if x̄ ≥ 0.

Note that if we instead chose B = {3, 4}, we would get x̄ = (0, 0, 2, 6, 0, 0), which is a basic feasible solution for B.

10.2 Canonical Form

Note that in our example from the day before, in the 1st iteration the LP satisfies

    1. A_B = I
    2. c_B = 0

Definition 10.0.5. Whenever an LP

    max c^T x + z̄
    s.t. Ax = b
         x ≥ 0

satisfies A_B = I and c_B = 0 for a basis B, we say that the LP is in canonical form for B. In the 2nd iteration, we had the solution (4, 0, 0, 0, 1, 0, 2), a basic feasible solution for the basis {1, 5, 7}, where A_B = I up to a permutation of columns.

11 June 2, 2017

11.1 Converting to Canonical Form

To bring an LP in SEF to canonical form for a basis B:

1. Multiply the constraints Ax = b by A_B^{-1} on both sides to get A_B^{-1} A x = A_B^{-1} b (these systems are equivalent to each other). On the LHS we have

    A_B^{-1} A = A_B^{-1} [A_B  A_N] = [A_B^{-1} A_B   A_B^{-1} A_N]

so in the new constraint matrix Ã = A_B^{-1} A we have Ã_B = I.

2. We multiply Ãx = b̃ (where b̃ = A_B^{-1} b) by c_B^T (the objective coefficients restricted to B) to get

    c_B^T Ã x = c_B^T b̃   ⇔   c_B^T b̃ − c_B^T Ã x = 0

If we add the LHS (which is 0 for every feasible x) to the objective function z(x) = c^T x, we get

    z̃(x) = c^T x + c_B^T b̃ − c_B^T Ã x

and for every feasible solution x, z̃(x) = z(x). Observe that

    z̃(x) = [c_B^T  c_N^T] [x_B; x_N] − c_B^T [I  Ã_N] [x_B; x_N] + c_B^T b̃
          = c_B^T x_B + c_N^T x_N − c_B^T x_B − c_B^T Ã_N x_N + c_B^T b̃
          = (c_N^T − c_B^T Ã_N) x_N + c_B^T b̃

So in the new objective vector c̃^T = c^T − c_B^T Ã = c^T − c_B^T A_B^{-1} A, the coefficients of the basic variables are all 0.

In summary, starting with

    max z(x) = c^T x
    s.t. Ax = b
         x ≥ 0

we get

    max (c^T − c_B^T A_B^{-1} A) x + c_B^T A_B^{-1} b
    s.t. A_B^{-1} A x = A_B^{-1} b
         x ≥ 0

This is in canonical form for B.

Example 11.0.1. Given the first iteration of our previous example

    max 5x_1 + x_2 + 4x_3 + 4x_4
    s.t. 4x_1 + x_2 + 3x_3 + 3x_4 + x_5 = 17
         x_1 + x_3 + x_4 + x_6 = 4
         2x_1 + x_2 + 2x_3 + 3x_4 + x_7 = 10
         x ≥ 0

our solution is (4, 0, 0, 0, 1, 0, 2) with basis B = {1, 5, 7}.

1. Let (ordering the basis as (5, 1, 7))

    A_B = A_{(5,1,7)} = [ 1  4  0 ]
                        [ 0  1  0 ]
                        [ 0  2  1 ]

thus A_B^{-1} would be

    A_B^{-1} = [ 1  −4  0 ]
               [ 0   1  0 ]
               [ 0  −2  1 ]

2. Multiplying out A_B^{-1} A:

    A_B^{-1} A = [ 0  1  −1  −1  1  −4  0 ]
                 [ 1  0   1   1  0   1  0 ]
                 [ 0  1   0   1  0  −2  1 ]

where we have (A_B^{-1} A)_{(5,1,7)} = I as desired. Note that c = (5, 1, 4, 4, 0, 0, 0), thus c_{(5,1,7)} = (0, 5, 0) (mind the order, it must match A_B). That gives us

    c̃^T = c^T − (0, 5, 0) Ã = c^T − (5, 0, 5, 5, 0, 5, 0) = (0, 1, −1, −1, 0, −5, 0)

3. With c̃ and Ã, we can rewrite our LP problem in canonical form:

    max x_2 − x_3 − x_4 − 5x_6 + 20
    s.t. x_2 − x_3 − x_4 + x_5 − 4x_6 = 1
         x_1 + x_3 + x_4 + x_6 = 4
         x_2 + x_4 − 2x_6 + x_7 = 2
         x ≥ 0

where the RHS of the constraints is b̃ = A_B^{-1} b = (1, 4, 2) and the added constant is c_B^T b̃ = 20.

11.2 Simplex: Better Feasible Solution

1. Start with an LP in canonical form for a basis B and the basic feasible solution x̄ for B.

2. If c̃_j > 0 for some non-basic variable x_j, then we try to find a new feasible solution with x_j > 0 (say x_j = t) and all the remaining non-basic variables equal to 0. For the constraints Ãx = b̃, that is

    [I  Ã_N] [x_B; x_N] = b̃

where the I consists of the columns of Ã_B, and the jth column Ã_j is in Ã_N.

Setting x_j = t and all other non-basic variables to 0, we get

    x_B + t Ã_j = b̃

and we take the maximum t ≥ 0 such that x_B = b̃ − t Ã_j ≥ 0.

12 June 5, 2017

12.1 Simplex from Canonical Form

Example 12.0.1.

    max 5x_1 + 3x_2
    s.t. x_1 + x_2 + x_3 = 4
         5x_1 + 2x_2 + x_4 = 10
         x ≥ 0

Note that this is in canonical form for B = {3, 4}, with the corresponding basic solution (0, 0, 4, 10). Since the corresponding basic solution is feasible, B is a feasible basis. The value of the basic solution is 0.

Increase x_1 to the value t ≥ 0 while keeping x_2 equal to 0. Then

    x_3 = 4 − t
    x_4 = 10 − 5t

So for x_3 and x_4 to be non-negative, we get t ≤ 4 and t ≤ 2, thus we need t ≤ 2. Setting t = 2 we get a new solution (2, 0, 2, 0) with value 10.

Let us update the basis: x_1 enters the basis and x_4 leaves the basis. Thus the new B = {1, 3}. We need to update our LP such that it is in canonical form again. For B = {1, 3} we have

    max x_2 − x_4 + 10
    s.t. (3/5)x_2 + x_3 − (1/5)x_4 = 2
         x_1 + (2/5)x_2 + (1/5)x_4 = 2

This was derived by noting that 5x_1 + 2x_2 + x_4 − 10 = 0 (from the second constraint), subtracting this expression (which equals 0) from z(x) = 5x_1 + 3x_2 to get rid of x_1; similarly subtracting one fifth of that equation from the first constraint to get rid of x_1 there; and finally dividing the second constraint by 5 so that A_B = I.

Now we can optimize for x_2 = t ≥ 0 similarly to what we did before, adjusting the basic variables x_1 and x_3 and setting everything else (x_4) to 0:

    (3/5)t + x_3 = 2  ⇒  x_3 = 2 − (3/5)t
    x_1 + (2/5)t = 2  ⇒  x_1 = 2 − (2/5)t

where t ≤ min{10/3, 5} = 10/3, so we obtain the solution (2/3, 10/3, 0, 0) with value 40/3 (note we overrode the previous value of x_1: it is lower than before but gives us a better objective value).

Let us update the basis: x_2 enters B and x_3 leaves B. The new basis is B = {1, 2}. Computing the canonical form for B: multiply (1) (the first constraint) by 5/3 so that x_2 gets coefficient 1, subtract the resulting equation (as an expression equal to zero) from z(x), and subtract 2/5 of the resulting equation from (2), to get

    max −(5/3)x_3 − (2/3)x_4 + 40/3
    s.t. x_2 + (5/3)x_3 − (1/3)x_4 = 10/3
         x_1 − (2/3)x_3 + (1/3)x_4 = 2/3

Note that in our objective function, −(5/3)x_3 and −(2/3)x_4 are both ≤ 0, thus z(x) ≤ 40/3. Thus (2/3, 10/3, 0, 0) is an optimal solution with value 40/3. (We only stop once all objective coefficients are non-positive.)

Note that the certificate of optimality is y = (5/3, 2/3), derived from the negated coefficients of the final objective function matched against the initial LP (where x_3, x_4 appeared as identity columns with coefficient 1). If we plug y into our initial LP, it satisfies y^T A ≥ c^T and c^T x̄ = y^T b.

13 June 7, 2017

13.1 Bland's Rule

Increase the non-basic variable with the smallest index that has a positive coefficient in the objective function.

13.2 Unboundedness from Simplex

Example 13.0.1.

    max x_1 + x_2 = z(x)
    s.t. −2x_1 + 3x_2 + x_3 = 9
         x_1 − 2x_2 + x_4 = 2
         x ≥ 0

where B = {3, 4}, with corresponding basic solution (0, 0, 9, 2) of value 0.

Increase x_1 (Bland's Rule) to the value t ≥ 0, keeping x_2 equal to 0. Then

    −2t + x_3 = 9  ⇒  x_3 = 9 + 2t
    t + x_4 = 2    ⇒  x_4 = 2 − t

Note x_3 imposes no bound on t (it is non-negative for all t ≥ 0, since the coefficient of x_1 in the first constraint is negative). Thus t = 2 from x_4 = 2 − t ≥ 0. x_1 enters B and x_4 leaves B, thus the basis is now B = {1, 3}.

The canonical form for this basis is as follows:

    max 3x_2 − x_4 + 2
    s.t. −x_2 + x_3 + 2x_4 = 13
         x_1 − 2x_2 + x_4 = 2

Increase x_2 to the value t ≥ 0, keeping x_4 = 0:

    x_3 = 13 + t ≥ 0
    x_1 = 2 + 2t ≥ 0

for all t ≥ 0. Since t is unbounded, this tells us the problem is unbounded. We must thus find a certificate of unboundedness. Note that a feasible solution is x̄ = (2, 0, 13, 0) (the constant terms above). Note that for every increase of x_2 by 1, x_3 increases by 1 and x_1 increases by 2 (from the coefficients of t). Thus d = (2, 1, 1, 0). We must check:

    1. c^T d > 0: c^T d = 3 > 0, which holds
    2. d ≥ 0, which holds
    3. Ad = 0, which also holds

14 June 9, 2017

14.1 From SEF to Simplex (Given a Feasible Basis)

Given

    max (3, 2, 1, 1, 0) x
    s.t. [ ] x = [ ]
         x ≥ 0

we need to convert this to canonical form for a feasible basis B. We are given the hint that B = {1, 5} is feasible (it is important that the initial basis be feasible). The canonical form for B is

    max (0, 1/2, 1, 1/2, 0) x + 6
    s.t. [ ] x = [ ]
         x ≥ 0

The current (basic feasible) solution is x̄ = (2, 0, 0, 0, 0). Applying Bland's rule and solving for x_2 = t, we get x_2 = 4. x_2 enters and x_1 leaves the basis, leaving us with B = {2, 5} and the corresponding canonical form

    max (−1, 0, 1, 1, 0) x + 8
    s.t. [ ] x = [ ]
         x ≥ 0

Bland's rule then picks x_3 = t, and solving we get x_3 = 1. x_3 enters and x_5 exits the basis, giving B = {2, 3}:

    max (−3/2, 0, 0, −1/2, −1/4) x + 9
    s.t. [ ] x = [ ]
         x ≥ 0

Since all objective coefficients are non-positive, x̄ = (0, 4, 1, 0, 0) is an optimal solution (with value 9). What is the certificate of optimality? It is y = (A_B^T)^{-1} c_B (from Proposition 2.4), where B is the final basis B = {2, 3} and A_B, c_B are taken from the original LP. We can check that y^T A ≥ c^T and c^T x̄ = y^T b, where x̄ = (0, 4, 1, 0, 0).

14.2 Two-Phase Simplex

How do we use the simplex iteration if we are not given a feasible basis B to iterate on?

Phase I: we create an auxiliary LP by adding two dummy variables x_5, x_6:

    max −x_5 − x_6
    s.t. [A  I] x = b = (9, 5)
         x ≥ 0

Note that the negative signs bound the auxiliary LP to have a maximum value ≤ 0, so its optimal value is either 0 or < 0. Note that if an entry of b is negative, we need to first make it positive by multiplying that row by −1, since x_5, x_6 ≥ 0 and the associated basic feasible solution (x_5, x_6) = b must be non-negative.

If the optimal value of the auxiliary LP is < 0, then the original LP is infeasible.
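Phase I can be sketched with an off-the-shelf LP solver: build the auxiliary LP and check whether its optimum is 0. This is a minimal illustration, not the notes' example — the two small systems below are made up, and since `scipy.optimize.linprog` minimizes, we minimize the sum of the dummy variables instead of maximizing its negation:

```python
import numpy as np
from scipy.optimize import linprog

def phase_one_feasible(A, b):
    """Return True iff {x >= 0 : Ax = b} is non-empty, via the auxiliary LP
    min sum(dummies) s.t. [A | I]x = b, x >= 0 (rows pre-scaled so b >= 0)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    # Flip rows with negative RHS so the dummy basis is feasible.
    flip = b < 0
    A[flip] *= -1
    b[flip] *= -1
    m, n = A.shape
    A_aux = np.hstack([A, np.eye(m)])
    c_aux = np.concatenate([np.zeros(n), np.ones(m)])  # min sum of dummies
    res = linprog(c_aux, A_eq=A_aux, b_eq=b, bounds=[(0, None)] * (n + m))
    return bool(res.status == 0 and np.isclose(res.fun, 0.0))

# Feasible system: x_1 + x_2 = 2 has non-negative solutions.
print(phase_one_feasible([[1, 1]], [2]))             # True
# Infeasible system: x_1 + x_2 = 1 and x_1 + x_2 = 2 cannot both hold.
print(phase_one_feasible([[1, 1], [1, 1]], [1, 2]))  # False
```

When the auxiliary optimum is 0, the solver's final basis (with the dummies non-basic) is a feasible basis for the original LP, which is exactly what Phase II needs.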

Proof. Assume that the original LP has a feasible solution x̄ = (x̄_1, x̄_2, x̄_3, x̄_4). Then (x̄_1, x̄_2, x̄_3, x̄_4, 0, 0) is feasible for the auxiliary LP, which then has optimal value 0. Conversely, an optimal value of 0 in the auxiliary LP (where the dummy variables are 0) yields a feasible solution of the original LP.

For the auxiliary LP, the basis B = {5, 6} is feasible. Thus we convert to the canonical form

    max (5, 10, 1, 3, 0, 0) x − 14
    s.t. [ ] x = [ ]
         x ≥ 0

15 June 12, 2017

15.1 Infeasible LP from Auxiliary LP

Example 15.0.1. From last time, we have the auxiliary LP in canonical form for B = {5, 6}:

    max (5, 10, 1, 3, 0, 0) x − 14
    s.t. [ ] x = [ ]
         x ≥ 0

Note x_2 enters and x_5 exits, yielding the new basis B = {2, 6}. Converting to its canonical form we get

    max (−5/4, 0, −5/2, 1/2, −5/4, 0) x − 11/4
    s.t. [ ] x = [ ]
         x ≥ 0

Now x_4 enters and x_2 leaves. So we get B = {4, 6} with the canonical form

    max (−1/2, −2, −1, 0, −3/2, 0) x − 1/2
    s.t. [ ] x = [ ]
         x ≥ 0

Note the optimal value for this auxiliary LP is −1/2 < 0, which by our previous theorem means that the original LP is infeasible. How do we find a certificate of infeasibility for the original LP? The certificate of optimality of the auxiliary LP is a certificate of infeasibility for the original LP!

16 June 14, 2017

Example 16.0.1. From last time, we can find the certificate of infeasibility as y = (A_B^T)^{-1} c_B, the certificate of optimality of the auxiliary LP, where B = {4, 6} is its final basis and A_B, c_B come from the auxiliary LP. This y is the certificate of infeasibility for the original LP.

16.1 Certificate of Optimality of Auxiliary LP is Certificate of Infeasibility of Original LP

The original LP is

    max c^T x
    s.t. Ax = b
         x ≥ 0

and the auxiliary LP is

    max −x_{n+1} − x_{n+2} − ... − x_{n+m}
    s.t. [A  I_m] x = b
         x ≥ 0

where A ∈ R^{m×n}, b ∈ R^m, b ≥ 0, c ∈ R^n and I_m ∈ R^{m×m}.

Claim: let x̄ be an optimal solution for the auxiliary LP with negative objective value, and let y be a certificate of optimality for x̄. Then y is also a certificate of infeasibility for the original LP.

Proof. Note the auxiliary objective vector is c̃ = (0, 0, ..., 0, −1, −1, ..., −1), where there are n 0s and m −1s. A certificate of optimality satisfies y^T [A  I_m] ≥ c̃^T. Since y^T [A  I_m] = [y^T A   y^T I_m], looking at the first n components gives y^T A ≥ 0^T (the part of c̃ over the original variables is 0). Moreover, y^T b = c̃^T x̄ < 0. Thus y satisfies conditions (1) and (2) of Proposition 5.1, i.e. it is a certificate of infeasibility for the original LP.
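A tiny worked instance of this claim (the 2-equation system is made up for illustration): the original system x_1 + x_2 = 1, x_1 + x_2 = 2, x ≥ 0 is infeasible; the auxiliary optimum is −1 < 0, and its certificate y = (1, −1) is exactly a Proposition 5.1 infeasibility certificate:

```python
import numpy as np

# Hypothetical infeasible system: x1 + x2 = 1 and x1 + x2 = 2, x >= 0.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

# Auxiliary LP: max -x3 - x4 s.t. [A | I]x = b, x >= 0.
A_aux = np.hstack([A, np.eye(2)])
c_aux = np.array([0.0, 0.0, -1.0, -1.0])

# Optimal auxiliary solution: x = (1, 0, 0, 1) with value -1 < 0.
x_bar = np.array([1.0, 0.0, 0.0, 1.0])
assert np.allclose(A_aux @ x_bar, b)
assert c_aux @ x_bar == -1.0

# y = (1, -1) certifies optimality of x_bar for the auxiliary LP...
y = np.array([1.0, -1.0])
assert np.all(y @ A_aux >= c_aux)        # y^T [A|I] >= c_aux
assert np.isclose(y @ b, c_aux @ x_bar)  # y^T b = value of x_bar

# ...and hence infeasibility of the original system (Proposition 5.1):
assert np.all(y @ A >= 0)  # y^T A >= 0
assert y @ b < 0           # y^T b = -1 < 0
print("y certifies infeasibility")
```

Note how the two halves of y^T [A | I] ≥ c̃ split exactly as in the proof: the A-part gives y^T A ≥ 0, and the identity part only constrains y ≥ −1 componentwise.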

16.2 Feasible Two-Phase Simplex

Example 16.0.2. Consider

    max (2, −1, 2) x
    s.t. [ ] x = [ ]
         x ≥ 0

with RHS b = (1, 3).

Phase I: Construct the auxiliary LP

    max −x_4 − x_5
    s.t. [A  I] x = (1, 3)
         x ≥ 0

where B = {4, 5} is feasible for the auxiliary LP. Its canonical form for B = {4, 5} is

    max (2, 1, 0, 0, 0) x − 4
    s.t. [ ] x = [ ]
         x ≥ 0

When we solve this using simplex, we get B = {1, 5} and then B = {1, 3}. Note the same (auxiliary) objective function can be reused at every step; at the end the auxiliary objective −x_4 − x_5 attains 0, and the constraints in canonical form for B = {1, 3} have RHS b̃ = (2, 1), so the original LP is feasible.

Phase II: Use the feasible basis B = {1, 3} for the original LP. The canonical form corresponding to B = {1, 3} is

    max x_2 + 6

    s.t. [ ]x = (2, 1)ᵀ, x ≥ 0

Then x₂ enters and x₁ leaves, and we get B = {2, 3} with the canonical form

    max 10 − 2x₁
    s.t. [ ]x = (4, 7)ᵀ, x ≥ 0

Thus B = {2, 3} is an optimal basis for the original LP, and x̄ = (0, 4, 7) is an optimal solution.

16.3 Summary of Two-Phase Simplex

1. Phase I: construct the auxiliary LP and solve it for a basic feasible solution.
   (a) If its optimal value is 0, continue to Phase II.
   (b) Otherwise, the original LP is infeasible, and the certificate of infeasibility is ȳ = (A_Bᵀ)⁻¹c_B.
2. Phase II: use the feasible basis from Phase I to solve the original LP.

17 June 16, 2017

17.1 Fundamental Theorem of LP (SEF)

If an LP (P) in SEF does not have an optimal solution, then (P) is either infeasible or unbounded. If (P) is feasible, then it has a basic feasible solution. If (P) has an optimal solution, then (P) has a basic feasible solution that is optimal.

17.2 Fundamental Theorem of LP

Exactly one of the following holds for an LP (P):

1. (P) is infeasible
2. (P) is unbounded
3. (P) has an optimal solution

This follows from the fundamental theorem of LP (SEF).

17.3 Geometry

Linear programs (and systems of linear constraints) can be interpreted geometrically, as in the following example.
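The two-phase procedure summarized in 16.3 can be sketched as a small program. This is an illustrative implementation (exact rational arithmetic, a dense tableau, Bland's smallest-index entering rule), not the course's pseudocode, and it assumes no artificial variable remains in the basis after Phase I:

```python
from fractions import Fraction

def simplex(A, b, c, basis):
    """max cᵀx s.t. Ax = b, x >= 0, from a feasible basis.
    Returns (x, basis) at optimality, or None if unbounded."""
    m, n = len(A), len(A[0])
    T = [[Fraction(v) for v in row] + [Fraction(rhs)] for row, rhs in zip(A, b)]
    for i, j in enumerate(basis):            # put the tableau in canonical form
        T[i] = [v / T[i][j] for v in T[i]]
        for r in range(m):
            if r != i and T[r][j] != 0:
                T[r] = [u - T[r][j] * v for u, v in zip(T[r], T[i])]
    while True:
        # reduced costs c_j - sum_i c_{basis[i]} * T[i][j]
        z = [c[j] - sum(c[basis[i]] * T[i][j] for i in range(m)) for j in range(n)]
        enter = next((j for j in range(n) if z[j] > 0), None)   # Bland's rule
        if enter is None:
            x = [Fraction(0)] * n
            for i, j in enumerate(basis):
                x[j] = T[i][-1]
            return x, basis
        rows = [(T[i][-1] / T[i][enter], basis[i], i)
                for i in range(m) if T[i][enter] > 0]
        if not rows:
            return None                      # no ratio bound: unbounded
        _, _, r = min(rows)                  # min ratio, ties by smallest basis index
        T[r] = [v / T[r][enter] for v in T[r]]
        for i in range(m):
            if i != r and T[i][enter] != 0:
                T[i] = [u - T[i][enter] * v for u, v in zip(T[i], T[r])]
        basis[r] = enter

def two_phase(A, b, c):
    """Returns 'infeasible', 'unbounded', or (optimal x, basis)."""
    m, n = len(A), len(A[0])
    A = [[-v for v in row] if rhs < 0 else list(row) for row, rhs in zip(A, b)]
    b = [abs(rhs) for rhs in b]              # ensure b >= 0
    A1 = [row + [1 if j == i else 0 for j in range(m)] for i, row in enumerate(A)]
    c1 = [0] * n + [-1] * m                  # Phase I: max minus the artificials
    x1, basis = simplex(A1, b, c1, list(range(n, n + m)))
    if sum(x1[n:]) > 0:                      # Phase I optimum < 0
        return "infeasible"
    res = simplex(A, b, c, basis)            # Phase II from the Phase I basis
    return res if res is not None else "unbounded"
```

Note that the three possible return values mirror the fundamental theorem of LP: every run ends in infeasible, unbounded, or an optimal basic feasible solution.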

Example. Consider

    max (c₁, c₂)x
    s.t. [ ]x ≤ [ ]

The set of points (x₁, x₂) satisfying the second row with equality is a line; the set of points satisfying the constraint itself is all points to the left of that line. A similar argument holds for the rest of the constraints, so together the constraints form a shaded region of feasible solutions.

Recall that the dot product satisfies aᵀb = ‖a‖‖b‖cos(θ), where θ is the angle between a and b. Note that aᵀb = 0 when a and b are orthogonal, aᵀb > 0 when the angle is less than 90 degrees, and aᵀb < 0 when the angle is larger than 90 degrees.

18 June 19, 2017

18.1 Hyperplane and Halfspace

Note that linear constraints can be written with = (equality) or ≤ (inequality). Let β ∈ R and let a be a non-zero vector with n components. Then:

    Hyperplane: H = {x ∈ Rⁿ : aᵀx = β}
    Halfspace:  F = {x ∈ Rⁿ : aᵀx ≤ β}

In the above example, H is the set of points satisfying a constraint with equality, and F is the set of points satisfying the constraint in general. If x̄ ∈ H, then aᵀx̄ = β; thus for any x ∈ H we have aᵀ(x − x̄) = 0. We get the following remarks. Let x̄ ∈ H:

- H is the set of points x for which a and x − x̄ are orthogonal.
- F is the set of points x for which a and x − x̄ form an angle of at least 90 degrees (since aᵀx̄ = β, subtracting β from a value aᵀx ≤ β gives aᵀ(x − x̄) ≤ 0, which translates to an angle of at least 90 degrees).

Hyperplanes are all of dimension n − 1. Note that every halfspace is a polyhedron.

18.2 Converting to Halfspaces (a polyhedron)

We can convert a constraint aᵀx ≥ β to −aᵀx ≤ −β, and an equality aᵀx = β to the pair aᵀx ≤ β and −aᵀx ≤ −β. Hence any set of linear constraints can be written as Ax ≤ b for some matrix A and vector b. Thus the solution set of any system of linear constraints is a polyhedron.

18.3 Convexity

We define the line through x⁽¹⁾ and x⁽²⁾ to be the set of points

    {λx⁽¹⁾ + (1 − λ)x⁽²⁾ : λ ∈ R}

If we restrict the line to the line segment with ends x⁽¹⁾ and x⁽²⁾, we get

    {λx⁽¹⁾ + (1 − λ)x⁽²⁾ : 0 ≤ λ ≤ 1}
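The segment parameterization above can be checked numerically against a halfspace: any convex combination of two points of F stays in F. A small illustrative script with made-up data:

```python
def in_halfspace(a, beta, x):
    # membership test for F = {x : aᵀx <= beta}
    return sum(ai * xi for ai, xi in zip(a, x)) <= beta

def segment_point(x1, x2, lam):
    # λx⁽¹⁾ + (1 − λ)x⁽²⁾, the parameterization of the segment above
    return [lam * u + (1 - lam) * v for u, v in zip(x1, x2)]

a, beta = [1, 2], 4                     # halfspace x1 + 2*x2 <= 4
x1, x2 = [0, 0], [4, 0]                 # two points of the halfspace
print(all(in_halfspace(a, beta, segment_point(x1, x2, k / 10))
          for k in range(11)))          # True: the whole segment stays inside
```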

We define a subset C of Rⁿ to be convex if for every pair of points x⁽¹⁾ and x⁽²⁾ in C, the line segment with ends x⁽¹⁾, x⁽²⁾ is included in C.

Theorem. All halfspaces are convex.

Proof. Let x₁ and x₂ be two arbitrary points in the halfspace F = {x : aᵀx ≤ β}. Let x be an arbitrary point on the line segment between x₁ and x₂, so that (from our definition) x = λx₁ + (1 − λ)x₂ where 0 ≤ λ ≤ 1. Then

    aᵀx = aᵀ(λx₁ + (1 − λ)x₂) = λaᵀx₁ + (1 − λ)aᵀx₂ ≤ λβ + (1 − λ)β = β

so x ∈ F.

Theorem. For every j ∈ J let C_j denote a convex set. Then the intersection

    C = ∩{C_j : j ∈ J}

is convex. Note that J can be infinite.

Proof. Let x₁ and x₂ be two arbitrary points in C. For every j ∈ J we have x₁, x₂ ∈ C_j, and since C_j is convex the line segment between x₁ and x₂ is in C_j. It follows that the line segment between x₁ and x₂ is in C. Hence C is convex.

In particular, polyhedra are convex, being intersections of halfspaces.

19 June 21, 2017

19.1 Extreme Points

A point is properly contained in a line segment if it is in the line segment but distinct from its ends (not one of the end points). A point x̄ is an extreme point of C if there is no line segment in C that properly contains x̄. That is:

Definition. x̄ ∈ C is not an extreme point of C if and only if x̄ = λx₁ + (1 − λ)x₂ for distinct points x₁, x₂ ∈ C and λ with 0 < λ < 1 (note x₁ ≠ x₂).

How do we isolate the constraints bounding the polyhedron at a given point x̄ on the boundary? For some rows of A we have aᵀx̄ = β (equality); we call these constraints tight (or active) for x̄, and we collect these rows of A into a matrix A⁼ (with corresponding right-hand sides b⁼). That is:

Theorem. Let P = {x ∈ Rⁿ : Ax ≤ b} be a polyhedron, let x̄ ∈ P, and let A⁼x ≤ b⁼ be the set of constraints tight for x̄. Then x̄ is an extreme point of P if and only if rank(A⁼) = n. (Intuition: in Rⁿ there must be n linearly independent rows of A bounding the point with equality, sort of like a corner.)

Proof. Backwards direction: suppose rank(A⁼) = n, and suppose for a contradiction that x̄ is not an extreme point.
Then there exist distinct x₁, x₂ ∈ P and λ with 0 < λ < 1 such that x̄ = λx₁ + (1 − λ)x₂. Thus

    b⁼ = A⁼x̄ = A⁼(λx₁ + (1 − λ)x₂) = λA⁼x₁ + (1 − λ)A⁼x₂ ≤ λb⁼ + (1 − λ)b⁼ = b⁼

where the inequality holds because A⁼x₁ ≤ b⁼ and A⁼x₂ ≤ b⁼, as x₁, x₂ ∈ P.

Hence equality holds throughout, which means A⁼x₁ = A⁼x₂ = b⁼. But rank(A⁼) = n, so the system A⁼x = b⁼ has a unique solution, contradicting x₁ ≠ x₂. Thus x̄ is an extreme point.

Forwards direction (contrapositive): suppose rank(A⁼) < n; we show x̄ is not an extreme point. Since the columns of A⁼ are linearly dependent, there is a non-zero vector d such that A⁼d = 0. We pick ε > 0 small and define

    x₁ = x̄ + εd
    x₂ = x̄ − εd

Note that x̄ = ½x₁ + ½x₂ and that x₁, x₂ are distinct, so x̄ is properly contained in the line segment between x₁ and x₂. It remains to show that x₁, x₂ ∈ P for ε > 0 small enough. For the tight constraints,

    A⁼x₁ = A⁼(x̄ + εd) = A⁼x̄ + εA⁼d = b⁼

and similarly A⁼x₂ = b⁼. Let aᵀx ≤ β be any inequality of Ax ≤ b that is not in A⁼x ≤ b⁼; then aᵀx̄ < β holds strictly, so for ε > 0 small enough

    aᵀx₁ = aᵀ(x̄ + εd) = aᵀx̄ + εaᵀd ≤ β

Thus x₁ ∈ P, and similarly x₂ ∈ P.

19.2 Linear Constraints to Polyhedron

We can convert linear constraints (a polyhedron) into an intersection of halfspaces, for which we can then identify the extreme points.

Example. Consider the polyhedron

    P = {x ∈ R⁴ : [ ]x = (2, 1)ᵀ, x ≥ 0}

We claim that the basic feasible solution x̄ = (0, 0, 2, 1) is an extreme point. From before, each equality can be converted to a pair of ≤-constraints, and x ≥ 0 can be written as −xᵢ ≤ 0. The set of tight constraints A⁼x ≤ b⁼ for x̄ is then given by

    A⁼ = [ ]
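The rank test in the theorem above is easy to run numerically. A minimal sketch on a made-up 2-D polyhedron (not the R⁴ example of these notes), with rank computed by Gaussian elimination over the rationals:

```python
from fractions import Fraction

def rank(rows):
    # row rank via Gaussian elimination with exact arithmetic
    M = [[Fraction(v) for v in r] for r in rows]
    r = 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][col] / M[r][col]
            M[i] = [a - f * p for a, p in zip(M[i], M[r])]
        r += 1
    return r

def is_extreme(A, b, x):
    # collect the rows of Ax <= b that are tight at x and test their rank
    dot = lambda row: sum(c * v for c, v in zip(row, x))
    tight = [A[i] for i in range(len(A)) if dot(A[i]) == b[i]]
    return rank(tight) == len(x)

# P = {x in R^2 : x1 + x2 <= 2, -x1 <= 0, -x2 <= 0}, a triangle
A = [[1, 1], [-1, 0], [0, -1]]
b = [2, 0, 0]
print(is_extreme(A, b, [0, 0]), is_extreme(A, b, [1, 0]))  # True False
```

The corner (0, 0) has two linearly independent tight rows (rank 2 = n), while the boundary point (1, 0) has only one.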

(The matrix A⁼ above is derived by taking the two equality constraints together with the tight bounds x₁ ≥ 0 and x₂ ≥ 0.) Clearly rank(A⁼) = 4 = n. By the theorem that rank(A⁼) = n implies x̄ is an extreme point, the basic feasible solution is an extreme point.

Theorem. Let A be a matrix whose rows are linearly independent and let b be a vector. Let P = {x : Ax = b, x ≥ 0} and let x̄ ∈ P. Then x̄ is an extreme point of P if and only if x̄ is a basic feasible solution of Ax = b.

Proof. TODO (was in lectures, ask Boshen?)

20 June 23, 2017

20.1 Non-Polyhedron Example

Example. Let Q = {x ∈ R² : sin(α)x₁ + cos(α)x₂ ≤ 1 for all α ∈ R}. Then x̄ = (1, 0) is an extreme point of Q. Note however that Q is not a polyhedron, so our earlier theorems do not apply to it.

20.2 Duality

Example. Consider

    max (1, 2, 0, 0)x
    s.t. [ ]x = (8, 5)ᵀ, x ≥ 0

Solving this LP with the simplex method, we get an optimal solution x̄ = (0, 5, 3, 0) and a certificate of optimality ȳ = (0, 2). We can check that:

1. ȳᵀA ≥ cᵀ: (2, 2, 0, 2) ≥ (1, 2, 0, 0)
2. ȳᵀb = cᵀx̄: 10 = 10

Another vector that satisfies (1) is ȳ = (1/2, 2). In fact, if ȳ satisfies (1), then for every x feasible for the original LP we have

    cᵀx = (cᵀ − ȳᵀA)x + ȳᵀAx

where cᵀ − ȳᵀA ≤ 0 (from (1)) and x ≥ 0. Thus cᵀx ≤ ȳᵀAx = ȳᵀb. So any ȳ with ȳᵀA ≥ cᵀ gives an upper bound ȳᵀb on the value of the original LP (the primal). Let us construct a dual LP that minimizes this upper bound. For a given primal LP (P) with respect to x

    max cᵀx
    s.t. Ax = b, x ≥ 0

the dual LP (D) with respect to y is

    min bᵀy

    s.t. Aᵀy ≥ c

20.3 Weak Duality in SEF

Theorem. Consider the primal-dual pair (P), (D). Let x̄ be a feasible solution for (P) and let ȳ be a feasible solution for (D). Then cᵀx̄ ≤ bᵀȳ.

Proof. From (P) we have

    cᵀx̄ = (cᵀ − ȳᵀA)x̄ + ȳᵀAx̄

where cᵀ − ȳᵀA ≤ 0 from the constraints of (D) and x̄ ≥ 0, so

    cᵀx̄ ≤ ȳᵀAx̄ = ȳᵀb = bᵀȳ

20.4 Primal vs Dual

If (D) is unbounded (the upper bound bᵀȳ can be pushed down indefinitely), then (P) must be infeasible: a feasible primal solution would give a lower bound on every bᵀȳ. Similarly, if (P) is unbounded then (D) is infeasible. (The converses fail: both (P) and (D) can be infeasible, as we will see.)

21 June 26, 2017

21.1 Optimality of Primal and Dual

From last time, weak duality for SEF gives cᵀx̄ ≤ bᵀȳ for any feasible solutions x̄ of (P) and ȳ of (D).

Theorem. If cᵀx̄ = bᵀȳ, then x̄ is an optimal solution for (P), and similarly ȳ is optimal for (D).

Proof. Indeed, for every x̃ feasible for (P) we have

    cᵀx̃ ≤ bᵀȳ = cᵀx̄

by weak duality, thus x̄ is optimal. Similarly, for every ỹ feasible for (D) we have

    bᵀỹ ≥ cᵀx̄ = bᵀȳ

again by weak duality, so ȳ is optimal.

21.2 Infeasibility of Primal and Dual

It can happen that both (P) and (D) are infeasible. For example, let (P) be

    max (0, 1)x

    s.t. (1, 0)x = −1, x ≥ 0

which is infeasible since it forces x₁ = −1, and let (D) be

    min −y
    s.t. (1, 0)ᵀy ≥ (0, 1)ᵀ

which is infeasible since its second constraint reads 0 ≥ 1.

21.3 Strong Duality in SEF

Let the primal LP (P) be

    max cᵀx
    s.t. Ax = b, x ≥ 0

Then the dual LP (D) is

    min bᵀy
    s.t. Aᵀy ≥ c

Theorem. Let (P) and (D) be the LPs above. If (P) has an optimal solution, then (D) has an optimal solution. Moreover, the optimal value of (P) equals the optimal value of (D).

Proof. (We need the correctness of the simplex method: simplex with Bland's rule terminates.) So we assume that the simplex method applied to (P) terminates with an optimal solution x̄; that is, x̄ is a basic feasible solution for an optimal basis B. This means the objective function corresponding to the canonical form for B,

    ȳᵀb + (cᵀ − ȳᵀA)x,  where ȳ = (A_Bᵀ)⁻¹c_B,

has only non-positive coefficients. Since B is an optimal basis, cᵀ − ȳᵀA ≤ 0ᵀ, i.e. Aᵀȳ ≥ c, so ȳ is feasible for (D). Now it is enough to show that cᵀx̄ = bᵀȳ, because of weak duality (cᵀx ≤ bᵀȳ for every feasible x). Indeed

    cᵀx̄ = (cᵀ − ȳᵀA + ȳᵀA)x̄ = (cᵀ − ȳᵀA)x̄ + ȳᵀb

where ȳᵀAx̄ = ȳᵀb because Ax̄ = b. Since x̄ is a basic feasible solution, x̄_N = 0; furthermore, cᵀ − ȳᵀA is the objective row of the canonical form for B, so (cᵀ − ȳᵀA)_B = 0ᵀ. Hence (cᵀ − ȳᵀA)x̄ = 0 and cᵀx̄ = ȳᵀb = bᵀȳ.
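Strong duality can be illustrated on a tiny SEF pair solved by hand (made-up data, not an example from the lectures): the primal max x₁ s.t. x₁ + x₂ = 1, x ≥ 0 has optimum x̄ = (1, 0), the dual min y s.t. y ≥ 1, y ≥ 0 has optimum ȳ = 1, and both values equal 1:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A, b, c = [[1, 1]], [1], [1, 0]
x_opt, y_opt = [1, 0], [1]

assert [dot(r, x_opt) for r in A] == b and min(x_opt) >= 0        # primal feasible
assert all(dot(y_opt, col) >= cj for col, cj in zip(zip(*A), c))  # dual feasible
print(dot(c, x_opt) == dot(b, y_opt))                             # True
```

By the optimality theorem of 21.1, equality of the two objective values certifies that both solutions are optimal.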

22 June 28, 2017

22.1 Primal and Dual Solution Types Matrix

Our final chart of the possible combinations of solution types between the primal and dual LPs is:

    Primal \ Dual   Infeasible   Unbounded   Optimal
    Infeasible      Yes          Yes         No
    Unbounded       Yes          No          No
    Optimal         No           No          Yes

22.2 Deriving Dual from Primal (non-SEF)

Example. Given the primal

    min (1, 3, 5)x
    s.t. (4, 0, 1)x ≥ 1
         (7, 2, 3)x ≤ 3
         (0, 1, 8)x = 4
         (2, 6, 1)x ≤ 10
         x₁ free, x₂ ≥ 0, x₃ ≤ 0

we want a y such that yᵀAx ≥ yᵀb; for this we need y₁ ≥ 0, y₂ ≤ 0, y₄ ≤ 0, and y₃ free (based on the direction of each constraint in the primal). So, when can we say that yᵀAx ≥ yᵀb implies cᵀx ≥ yᵀb for every feasible solution x? Let us look at an example: suppose (1, 1, 6)x ≥ 7 is our yᵀAx ≥ yᵀb, for x such that x₁ is free, x₂ ≥ 0, and x₃ ≤ 0. Does this imply (1, 3, 5)x ≥ 7 (our cᵀx ≥ yᵀb)? Subtract the two left-hand sides:

    (1, 1, 6)x − (1, 3, 5)x = (0, −2, 1)x = 0x₁ + (−2)x₂ + x₃

The domains of the xᵢ imply that this expression is at most 0, so (1, 3, 5)x ≥ (1, 1, 6)x ≥ 7. In general, for each variable xᵢ we need:

    xᵢ free:  (yᵀA)ᵢ = cᵢ   (in the above, 1 = 1)
    xᵢ ≥ 0:   (yᵀA)ᵢ ≤ cᵢ   (in the above, 1 ≤ 3)
    xᵢ ≤ 0:   (yᵀA)ᵢ ≥ cᵢ   (in the above, 6 ≥ 5)

So our dual LP, whose objective function is bᵀy, is

    max (1, 3, 4, 10)y
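The two sign rules just derived are purely mechanical bookkeeping, so they can be written as two lookup tables. A sketch of the structure only (no numbers), following the min-primal convention of these notes; the function name is made up:

```python
# For a primal in min-form:
#   each constraint direction fixes the sign of the dual variable, and
#   each variable domain fixes the direction of the dual constraint.
ROW_TO_DUAL_VAR = {">=": ">= 0", "<=": "<= 0", "=": "free"}
VAR_TO_DUAL_ROW = {">= 0": "<=", "<= 0": ">=", "free": "="}

def dual_structure(row_types, var_domains):
    """Return (dual variable domains, dual constraint directions)."""
    return ([ROW_TO_DUAL_VAR[t] for t in row_types],
            [VAR_TO_DUAL_ROW[d] for d in var_domains])

y_doms, dual_rows = dual_structure([">=", "<=", "=", "<="],
                                   ["free", ">= 0", "<= 0"])
print(y_doms)      # ['>= 0', '<= 0', 'free', '<= 0']
print(dual_rows)   # ['=', '<=', '>=']
```

Filling in the numbers then just means transposing A and swapping the roles of b and c.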

    s.t. (4, 7, 0, 2)y = 1
         (0, 2, 1, 6)y ≤ 3
         (1, 3, 8, 1)y ≥ 5
         y₁ ≥ 0, y₂ ≤ 0, y₄ ≤ 0, y₃ free

Note the direction of each dual constraint comes from the domain of the corresponding variable x₁, x₂, x₃ (if xᵢ is free, use =; if xᵢ ≥ 0, use ≤; if xᵢ ≤ 0, use ≥).

23 June 30, 2017

23.1 Another Example of Dual from Primal

Given the primal (P)

    min (7, 13, 3, 5)x
    s.t. (2, 1, 2, 0)x ≥ 4
         (3, 5, 3, 2)x ≥ 5
         (1, 3, 2, 1)x ≥ 4
         (−1, 2, 4, 7)x ≥ 18
         x₁ ≥ 0, x₂ ≥ 0, x₃ ≤ 0, x₄ ≥ 0

where x̄ = (2, 1, 0, 3) is a feasible solution, the dual (D) is

    max (4, 5, 4, 18)y
    s.t. (2, 3, 1, −1)y ≤ 7
         (1, 5, 3, 2)y ≤ 13
         (2, 3, 2, 4)y ≥ 3
         (0, 2, 1, 7)y ≤ 5
         y₁ ≥ 0, y₂ ≥ 0, y₃ ≥ 0, y₄ ≥ 0

where ȳ = (0, 2, 1, 0) is a feasible solution.
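The stated solutions of this primal-dual pair can be checked numerically, along with the weak-duality gap (for a min primal, cᵀx̄ ≥ bᵀȳ). The constraint directions used here, all primal rows ≥, x₃ ≤ 0, all yᵢ ≥ 0, are my reading of the extracted text:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A = [[2, 1, 2, 0], [3, 5, 3, 2], [1, 3, 2, 1], [-1, 2, 4, 7]]
b = [4, 5, 4, 18]
c = [7, 13, 3, 5]
x = [2, 1, 0, 3]                       # claimed primal-feasible point
y = [0, 2, 1, 0]                       # claimed dual-feasible point

assert all(dot(row, x) >= bi for row, bi in zip(A, b))   # primal rows hold
assert x[2] <= 0 and min(x[0], x[1], x[3]) >= 0 and min(y) >= 0
cols = list(zip(*A))                   # dual constraints use the columns of A
assert dot(y, cols[0]) <= 7 and dot(y, cols[1]) <= 13
assert dot(y, cols[2]) >= 3 and dot(y, cols[3]) <= 5
print(dot(c, x), dot(b, y))            # 42 14  (min primal: 42 >= 14, weak duality)
```

Note three of the four dual constraints hold with equality at ȳ, a hint of the complementary slackness conditions covered later in the course.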


Linear Algebra Review: Linear Independence. IE418 Integer Programming. Linear Algebra Review: Subspaces. Linear Algebra Review: Affine Independence Linear Algebra Review: Linear Independence IE418: Integer Programming Department of Industrial and Systems Engineering Lehigh University 21st March 2005 A finite collection of vectors x 1,..., x k R n

More information

Lecture 9 Tuesday, 4/20/10. Linear Programming

Lecture 9 Tuesday, 4/20/10. Linear Programming UMass Lowell Computer Science 91.503 Analysis of Algorithms Prof. Karen Daniels Spring, 2010 Lecture 9 Tuesday, 4/20/10 Linear Programming 1 Overview Motivation & Basics Standard & Slack Forms Formulating

More information

CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Fall 2014 Combinatorial Problems as Linear Programs Instructor: Shaddin Dughmi Outline 1 Introduction 2 Shortest Path 3 Algorithms for Single-Source Shortest

More information

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES

OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2007/8 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and homework.

More information

MATH 4211/6211 Optimization Linear Programming

MATH 4211/6211 Optimization Linear Programming MATH 4211/6211 Optimization Linear Programming Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 The standard form of a Linear

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

IE 521 Convex Optimization Homework #1 Solution

IE 521 Convex Optimization Homework #1 Solution IE 521 Convex Optimization Homework #1 Solution your NAME here your NetID here February 13, 2019 Instructions. Homework is due Wednesday, February 6, at 1:00pm; no late homework accepted. Please use the

More information

Lecture 2: The Simplex method

Lecture 2: The Simplex method Lecture 2 1 Linear and Combinatorial Optimization Lecture 2: The Simplex method Basic solution. The Simplex method (standardform, b>0). 1. Repetition of basic solution. 2. One step in the Simplex algorithm.

More information

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming

Distributed Real-Time Control Systems. Lecture Distributed Control Linear Programming Distributed Real-Time Control Systems Lecture 13-14 Distributed Control Linear Programming 1 Linear Programs Optimize a linear function subject to a set of linear (affine) constraints. Many problems can

More information

OPRE 6201 : 3. Special Cases

OPRE 6201 : 3. Special Cases OPRE 6201 : 3. Special Cases 1 Initialization: The Big-M Formulation Consider the linear program: Minimize 4x 1 +x 2 3x 1 +x 2 = 3 (1) 4x 1 +3x 2 6 (2) x 1 +2x 2 3 (3) x 1, x 2 0. Notice that there are

More information

Lecture 1: Convex Sets January 23

Lecture 1: Convex Sets January 23 IE 521: Convex Optimization Instructor: Niao He Lecture 1: Convex Sets January 23 Spring 2017, UIUC Scribe: Niao He Courtesy warning: These notes do not necessarily cover everything discussed in the class.

More information

Lecture 5. Theorems of Alternatives and Self-Dual Embedding

Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 1 Lecture 5. Theorems of Alternatives and Self-Dual Embedding IE 8534 2 A system of linear equations may not have a solution. It is well known that either Ax = c has a solution, or A T y = 0, c

More information

LINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm

LINEAR PROGRAMMING I. a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Linear programming Linear programming. Optimize a linear function subject to linear inequalities. (P) max c j x j n j= n s. t. a ij x j = b i i m j= x j 0 j n (P) max c T x s. t. Ax = b Lecture slides

More information

TRANSPORTATION PROBLEMS

TRANSPORTATION PROBLEMS Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

OPTIMISATION /09 EXAM PREPARATION GUIDELINES

OPTIMISATION /09 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2 2008/09 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and

More information

MAT-INF4110/MAT-INF9110 Mathematical optimization

MAT-INF4110/MAT-INF9110 Mathematical optimization MAT-INF4110/MAT-INF9110 Mathematical optimization Geir Dahl August 20, 2013 Convexity Part IV Chapter 4 Representation of convex sets different representations of convex sets, boundary polyhedra and polytopes:

More information

Critical Reading of Optimization Methods for Logical Inference [1]

Critical Reading of Optimization Methods for Logical Inference [1] Critical Reading of Optimization Methods for Logical Inference [1] Undergraduate Research Internship Department of Management Sciences Fall 2007 Supervisor: Dr. Miguel Anjos UNIVERSITY OF WATERLOO Rajesh

More information

TIM 206 Lecture 3: The Simplex Method

TIM 206 Lecture 3: The Simplex Method TIM 206 Lecture 3: The Simplex Method Kevin Ross. Scribe: Shane Brennan (2006) September 29, 2011 1 Basic Feasible Solutions Have equation Ax = b contain more columns (variables) than rows (constraints),

More information

Lecture slides by Kevin Wayne

Lecture slides by Kevin Wayne LINEAR PROGRAMMING I a refreshing example standard form fundamental questions geometry linear algebra simplex algorithm Lecture slides by Kevin Wayne Last updated on 7/25/17 11:09 AM Linear programming

More information

Linear Programming. Linear Programming I. Lecture 1. Linear Programming. Linear Programming

Linear Programming. Linear Programming I. Lecture 1. Linear Programming. Linear Programming Linear Programming Linear Programming Lecture Linear programming. Optimize a linear function subject to linear inequalities. (P) max " c j x j n j= n s. t. " a ij x j = b i # i # m j= x j 0 # j # n (P)

More information

Simplex Method for LP (II)

Simplex Method for LP (II) Simplex Method for LP (II) Xiaoxi Li Wuhan University Sept. 27, 2017 (week 4) Operations Research (Li, X.) Simplex Method for LP (II) Sept. 27, 2017 (week 4) 1 / 31 Organization of this lecture Contents:

More information

Theory and Internet Protocols

Theory and Internet Protocols Game Lecture 2: Linear Programming and Zero Sum Nash Equilibrium Xiaotie Deng AIMS Lab Department of Computer Science Shanghai Jiaotong University September 26, 2016 1 2 3 4 Standard Form (P) Outline

More information

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 10. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 10 Dr. Ted Ralphs IE406 Lecture 10 1 Reading for This Lecture Bertsimas 4.1-4.3 IE406 Lecture 10 2 Duality Theory: Motivation Consider the following

More information

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities

Linear Programming. Murti V. Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities Linear Programming Murti V Salapaka Electrical Engineering Department University Of Minnesota, Twin Cities murtis@umnedu September 4, 2012 Linear Programming 1 The standard Linear Programming (SLP) problem:

More information

The Simplex Method for Solving a Linear Program Prof. Stephen Graves

The Simplex Method for Solving a Linear Program Prof. Stephen Graves The Simplex Method for Solving a Linear Program Prof. Stephen Graves Observations from Geometry feasible region is a convex polyhedron an optimum occurs at a corner point possible algorithm - search over

More information

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta

Standard Form An LP is in standard form when: All variables are non-negativenegative All constraints are equalities Putting an LP formulation into sta Chapter 4 Linear Programming: The Simplex Method An Overview of the Simplex Method Standard Form Tableau Form Setting Up the Initial Simplex Tableau Improving the Solution Calculating the Next Tableau

More information

Part III: A Simplex pivot

Part III: A Simplex pivot MA 3280 Lecture 31 - More on The Simplex Method Friday, April 25, 2014. Objectives: Analyze Simplex examples. We were working on the Simplex tableau The matrix form of this system of equations is called

More information

Topic: Primal-Dual Algorithms Date: We finished our discussion of randomized rounding and began talking about LP Duality.

Topic: Primal-Dual Algorithms Date: We finished our discussion of randomized rounding and began talking about LP Duality. CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Primal-Dual Algorithms Date: 10-17-07 14.1 Last Time We finished our discussion of randomized rounding and

More information

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Formulation of the Dual Problem Primal-Dual Relationship Economic Interpretation

More information

LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP

LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP LP. Lecture 3. Chapter 3: degeneracy. degeneracy example cycling the lexicographic method other pivot rules the fundamental theorem of LP 1 / 23 Repetition the simplex algorithm: sequence of pivots starting

More information

Notes taken by Graham Taylor. January 22, 2005

Notes taken by Graham Taylor. January 22, 2005 CSC4 - Linear Programming and Combinatorial Optimization Lecture : Different forms of LP. The algebraic objects behind LP. Basic Feasible Solutions Notes taken by Graham Taylor January, 5 Summary: We first

More information

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem

Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R

More information

Ann-Brith Strömberg. Lecture 4 Linear and Integer Optimization with Applications 1/10

Ann-Brith Strömberg. Lecture 4 Linear and Integer Optimization with Applications 1/10 MVE165/MMG631 Linear and Integer Optimization with Applications Lecture 4 Linear programming: degeneracy; unbounded solution; infeasibility; starting solutions Ann-Brith Strömberg 2017 03 28 Lecture 4

More information