Math 273a: Optimization
The Simplex method
Instructor: Wotao Yin, Department of Mathematics, UCLA, Fall 2015
Material taken from the textbook Chong-Zak, 4th Ed.

Overview: idea and approach
- If a standard-form LP has a solution, then there exists an extreme-point solution. Therefore: just search the extreme points.
- In a standard-form LP, what is an extreme point? Answer: a basic feasible solution (BFS), defined in linear-algebraic form.
- Remaining: move through a series of BFSs until reaching an optimal BFS. We shall
  - recognize an optimal BFS
  - move from a BFS to a better BFS
  (realize these in linear algebra for the LP standard form)

Overview: edge direction and reduced cost
- Edges connect two neighboring BFSs.
- The reduced cost is how the objective will change when moving along an edge direction.
- How to recognize an optimal BFS? None of the feasible edge directions is improving; equivalently, all reduced costs are nonnegative.

Overview: move from a BFS to a better BFS
- If the BFS is not optimal, then some reduced cost is negative.
- How to move to a better BFS? Pick a feasible edge direction with a negative reduced cost, and move along it until reaching another BFS.
- It is possible that the edge direction is unbounded.

The Simplex method (abstract)
- input: a BFS x
- check: reduced costs ≥ 0?
  - if yes: optimal; return x and stop.
  - if not: choose an edge direction corresponding to a negative reduced cost, and then move along the edge direction
    - if unbounded: the problem is unbounded
    - otherwise: replace x by the new BFS; restart

Basic solution (not necessarily feasible)

minimize c^T x subject to Ax = b, x ≥ 0.

- Common assumption: rank(A) = m, i.e., A has full row rank, or equivalently A is surjective (otherwise, either Ax = b has no solution or some rows of A can be safely eliminated).
- Write A as A = [B, D], where B is a square matrix with full rank (its rows/columns are linearly independent). This might require reordering the columns of A.
- We call x = [x_B^T, 0^T]^T a basic solution if B x_B = b. x_B and B are called the basic variables and the basis, respectively.

Basic feasible solution (BFS)
- "basic" because x_B is uniquely determined by B and b.
- More definitions:
  - if x ≥ 0 (equivalently, x_B ≥ 0), then x is a basic feasible solution (BFS);
  - if some entry of x_B is 0, then x is degenerate; otherwise, it is nondegenerate. (Why care? It may be difficult to move from a degenerate BFS to another BFS.)
- Given the basic columns, a basic solution is determined, and then we check whether the solution is feasible and/or degenerate.

Example (Example 15.12)

A = [a_1, a_2, a_3, a_4] = [ 1  1  −1  4 ; 1  −2  −1  1 ],  b = [ 8 ; 2 ].

1. Pick B = [a_1, a_2]; obtain x_B = B^{-1} b = [6, 2]^T. Then x = [6, 2, 0, 0]^T is a basic feasible solution and is nondegenerate.
2. Pick B = [a_3, a_4]; obtain x_B = B^{-1} b = [0, 2]^T. Then x = [0, 0, 0, 2]^T is a degenerate basic feasible solution. In this example, B = [a_1, a_4] and [a_2, a_4] also give the same x!
3. Pick B = [a_2, a_3]; obtain x_B = B^{-1} b = [2, −6]^T. Then x = [0, 2, −6, 0]^T is a basic solution but infeasible, violating x ≥ 0.
4. x = [3, 1, 0, 1]^T is feasible but not basic.
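A quick numerical check of this example, as a minimal NumPy sketch (A and b as written above; the helper name basic_solution is ours):

```python
import numpy as np

# Data of Example 15.12 as written above.
A = np.array([[1., 1., -1., 4.],
              [1., -2., -1., 1.]])
b = np.array([8., 2.])

def basic_solution(cols):
    """Solve B x_B = b for the basis columns `cols` and embed into R^4."""
    x = np.zeros(A.shape[1])
    x[list(cols)] = np.linalg.solve(A[:, cols], b)
    return x

print(basic_solution([0, 1]))  # [ 6.  2.  0.  0.]  nondegenerate BFS
print(basic_solution([2, 3]))  # [ 0.  0.  0.  2.]  degenerate BFS
print(basic_solution([1, 2]))  # [ 0.  2. -6.  0.]  basic but infeasible
```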

The total number of possible basic solutions is at most

(n choose m) = n! / (m! (n − m)!).

For small m and n, e.g., m = 2 and n = 4, this number is 6, so we can check each basic solution for feasibility and optimality. Any vector x, basic or not, that yields the minimum value of c^T x over the feasible set {x : Ax = b, x ≥ 0} is called an optimal (feasible) solution.
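This bound suggests a brute-force method; continuing the NumPy sketch above, we can enumerate all C(4, 2) = 6 column pairs and classify the resulting basic solutions:

```python
from itertools import combinations

m, n = A.shape
for cols in combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                      # columns dependent: not a basis
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)
    tag = "BFS" if (x >= -1e-12).all() else "basic, infeasible"
    print(cols, np.round(x, 6), tag)
```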

The idea behind basic solutions
- In R^n, a set of n linearly independent equations defines a unique point (special case: in R^2, two crossing lines determine a point).
- Thus, an extreme point of P = {x : Ax = b, x ≥ 0} is given by n linearly independent and active (i.e., holding with "=") constraints obtained from
  - Ax = b (m linear constraints)
  - x ≥ 0 (n linear constraints)
- In a standard-form LP, if we assume rank(A) = m < n, then Ax = b gives just m linearly independent constraints. Therefore, we need to select (n − m) additional linear constraints from x_1 ≥ 0, ..., x_n ≥ 0 and make them active, that is,
  - set the corresponding components to 0
  - ensure that all n linear constraints are linearly independent

Without loss of generality, we set the last (n − m) components of x to 0: x_{m+1} = 0, ..., x_n = 0. Stacking these equations below Ax = b, where A = [B, D], we get

[ B  D ; 0  I ] [ x_B ; x_D ] = [ b ; 0 ],

and we call the n × n matrix on the left M. If the rows of M are linearly independent (which holds if and only if B ∈ R^{m×m} has full rank), then x is uniquely determined as

x = [ x_B ; x_D ] = [ B^{-1} b ; 0 ].

Therefore, setting some (n − m) components of x to 0 may uniquely determine x.

Now, to select an extreme point x of P = {x : Ax = b, x ≥ 0}, we
- set some (n − m) components of x to 0;
- let x_B denote the remaining components and B denote the corresponding submatrix of A;
- check whether B has full rank. If not, we do not get a unique point. If yes, compute

  x = [ x_B ; x_D ] = [ B^{-1} b ; 0 ].

- Furthermore, check whether x_B ≥ 0. If not, then x ∉ P; if yes, then x is an extreme point of P.
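The same recipe in the running NumPy sketch (M is formed explicitly only to mirror the derivation; in practice one factors B alone):

```python
basic = [0, 1]                          # candidate basic index set
nonbasic = [j for j in range(n) if j not in basic]

M = np.zeros((n, n))
M[:m, :] = A[:, basic + nonbasic]       # top rows: [B, D] (columns reordered)
M[m:, m:] = np.eye(n - m)               # bottom rows enforce x_D = 0
rhs = np.concatenate([b, np.zeros(n - m)])

if np.linalg.matrix_rank(M) == n:       # holds iff B = A[:, basic] is invertible
    x = np.linalg.solve(M, rhs)         # unique point, ordered as [x_B; x_D]
    print(x)                            # [ 6.  2.  0.  0.]
```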

Fundamental theorem of LP

Theorem. Consider an LP in the standard form.
1. If it has a feasible solution, then there exists a basic feasible solution (BFS).
2. If it has an optimal feasible solution, then there exists an optimal BFS.

Proof of Part 1

Suppose x = [x_1, ..., x_n]^T is a feasible solution; thus x ≥ 0. WLOG, suppose that only the first p entries of x are positive, so

x_1 a_1 + ... + x_p a_p = b.  (1)

Case 1: if a_1, ..., a_p are linearly independent, then p ≤ m = rank(A).
  a. If p = m, then B = [a_1, ..., a_m] forms a basis and x is the corresponding BFS.
  b. If p < m, then we can find m − p columns from a_{p+1}, ..., a_n to form a basis¹, and x is the corresponding BFS.

Case 2: if a_1, ..., a_p are linearly dependent, then there exist y_1, ..., y_p, not all zero and with some y_i > 0 (flip the sign of y if necessary), such that y_1 a_1 + ... + y_p a_p = 0. From this and (1), we get

(x_1 − εy_1) a_1 + ... + (x_p − εy_p) a_p = b.

¹ This is a consequence of the so-called matroid structure of linear independence.

- For sufficiently small ε > 0, we have (x_1 − εy_1) > 0, ..., (x_p − εy_p) > 0.
- Since some y_i > 0, set ε = min { x_i / y_i : i = 1, ..., p, y_i > 0 }. Then the first p components of (x − εy) are ≥ 0 and at least one of them is 0. Therefore, we have reduced p by at least 1.
- By repeating this process, we either reach Case 1 or reach p = 0; the latter situation is handled by Case 1 as well.
- Therefore, Part 1 is proved. Part 2 can be proved similarly, except we also argue that c^T y = 0.
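The proof is constructive, and it can be run. Below is a sketch of the reduction in the same NumPy setting; the helper reduce_to_bfs, its SVD null-space step, and the tolerances are our additions, not from the slides:

```python
def reduce_to_bfs(A, b, x, tol=1e-10):
    """Drive a feasible x toward a BFS by zeroing positive entries (Case 2)."""
    x = x.astype(float).copy()
    while True:
        support = np.flatnonzero(x > tol)
        Ap = A[:, support]
        if np.linalg.matrix_rank(Ap) == len(support):
            return x                      # Case 1: supporting columns independent
        y = np.linalg.svd(Ap)[2][-1]      # a nonzero y with Ap @ y = 0
        if y.max() <= tol:
            y = -y                        # make sure some y_i > 0
        pos = y > tol
        eps = np.min(x[support][pos] / y[pos])   # the epsilon of the proof
        x[support] -= eps * y             # still feasible; kills >= 1 entry
        x[np.abs(x) < tol] = 0.0

x_feas = np.array([3., 1., 0., 1.])       # the feasible, non-basic point above
print(reduce_to_bfs(A, b, x_feas))        # [ 6.  2.  0.  0.], a BFS
```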

Extreme points ⇔ BFSs

Theorem. Let P = {x ∈ R^n : Ax = b, x ≥ 0}, where A ∈ R^{m×n} has full row rank. Then x is an extreme point of P if and only if x is a BFS of P.

Proof: (⇒) Let x be an extreme point of P. Suppose, WLOG, its first p components are positive. Let y = [y_1, ..., y_p]^T be such that

y_1 a_1 + ... + y_p a_p = 0,
x_1 a_1 + ... + x_p a_p = b.

Since x_1, ..., x_p > 0 by assumption, for sufficiently small ε > 0 we have x + εy ∈ P and x − εy ∈ P (extending y by zeros), which have x as their midpoint. By the definition of an extreme point, we must have x + εy = x − εy and thus y = 0. Therefore, a_1, ..., a_p are linearly independent.

(⇐) Let x ∈ P be a BFS corresponding to the basis B = [a_1, ..., a_m]. Let y, z ∈ P be such that x = αy + (1 − α)z for some α ∈ (0, 1). We show y = z and conclude that x is an extreme point. Since y ≥ 0 and z ≥ 0, yet the last (n − m) entries of x are 0, the last (n − m) entries of y and z are 0, too. Hence,

y_1 a_1 + ... + y_m a_m = b,
z_1 a_1 + ... + z_m a_m = b,

and thus (y_1 − z_1) a_1 + ... + (y_m − z_m) a_m = 0. Since a_1, ..., a_m are linearly independent, y_i − z_i = 0 for all i, and thus y = z.

Edge directions
- Edges have two functions:
  - reduced costs are defined for edge directions;
  - we move from one BFS to another along an edge direction.
- From now on, we assume nondegeneracy, i.e., any BFS x has its basic subvector x_B > 0. The purpose: to avoid edges of length 0.
- An edge has 1 degree of freedom and connects to at least one BFS.

An edge in R^n is obtained from a BFS by removing one equation from the n linearly independent equations that define the BFS.
- Consider the BFS x = [x_B; 0]^T. Let A = [B, D], B = {1, ..., m}, and D = {m + 1, ..., n}.
- {x} = {y : Ay = b, y_i = 0 ∀ i ∈ D} ⊆ P.
- Pick any j ∈ D; then {y ≥ 0 : Ay = b, y_i = 0 ∀ i ∈ D \ {j}} ⊆ P is the edge connected to x corresponding to x_j ≥ 0.
- Bottom line: an edge is obtained by releasing a non-basic variable x_j from 0.

Pick a non-basic coordinate j. The edge direction for the BFS x = [x_B; 0]^T corresponding to x_j is δ^(j), where

δ^(j)_B = −B^{-1} a_j,  δ^(j)_j = 1,  δ^(j)_i = 0 for i ∈ D \ {j},

which satisfies Aδ^(j) = 0. We have

{y ≥ 0 : Ay = b, y_i = 0 ∀ i ∈ D \ {j}} = {y ≥ 0 : y = x + εδ^(j), ε ≥ 0}.

A nondegenerate BFS has (n − m) different edges; a degenerate BFS may have fewer edges.
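A small check of this construction, continuing the NumPy sketch (0-based indices; the basis is {a_1, a_2} of Example 15.12):

```python
Bidx, Didx = [0, 1], [2, 3]             # basic and non-basic index sets
Binv = np.linalg.inv(A[:, Bidx])

def edge_direction(j):
    """delta^(j): -B^{-1} a_j in the basic slots, 1 in slot j, 0 elsewhere."""
    d = np.zeros(n)
    d[Bidx] = -Binv @ A[:, j]
    d[j] = 1.0
    return d

for j in Didx:
    d = edge_direction(j)
    print(j, np.round(d, 6), "A @ d =", A @ d)   # A @ d is always 0
```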

Reduced cost
- Given the BFS x = [x_B; 0]^T, the non-basic coordinate j, and the edge direction δ^(j) (equal to −B^{-1} a_j on the basic coordinates, 1 in coordinate j, and 0 elsewhere), the unit change in c^T x along δ^(j) is

  c̄_j = c^T δ^(j) = c_j − c_B^T B^{-1} a_j.

- A negative reduced cost: moving along δ^(j) will decrease the objective.
- Define the reduced-cost (row) vector c̄^T = [c̄_1, ..., c̄_n] as

  c̄^T := c^T − c_B^T B^{-1} A,

  which includes the reduced costs for all edge directions. Note that its basic part is c̄_B^T = c_B^T − c_B^T B^{-1} B = 0.
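In the sketch, with a made-up cost vector c (an assumption, chosen only to exercise the formula), the basic part of the reduced-cost vector indeed vanishes:

```python
c = np.array([1., 2., 3., 4.])          # hypothetical costs, not from the slides
cbar = c - c[Bidx] @ Binv @ A           # cbar^T = c^T - c_B^T B^{-1} A
print(np.round(cbar, 6))                # entries 0 and 1 (the basic part) are 0
```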

Optimal BFS

Theorem. Let A = [B, D], where B is a basis. Suppose that x = [x_B; 0]^T is a BFS. Then x is optimal if and only if c̄^T := c^T − c_B^T B^{-1} A ≥ 0^T.

Proof: Let {δ^(j)}_{j∈J} be the set of edge directions of x. Then P = {y : Ay = b, y ≥ 0} is a subset of x + cone({δ^(j)}_{j∈J}), that is, any y ∈ P can be written as

y = x + Σ_{j∈J} α_j δ^(j), where α_j ≥ 0.

Then

c^T y = c^T x + Σ_{j∈J} α_j c^T δ^(j) = c^T x + Σ_{j∈J} α_j c̄_j ≥ c^T x.

The Simplex method (what we have so far)
- input: a basis B and the corresponding BFS x
- check: c̄^T := c^T − c_B^T B^{-1} A ≥ 0^T?
  - if yes: optimal; return x and stop.
  - if not: choose j such that c̄_j < 0, and then move along δ^(j)
    - if unbounded: the problem is unbounded
    - otherwise: replace x by the first BFS reached; restart

Next: unbounded and bounded edge directions.

Unbounded edge direction and cost
- Suppose at a BFS x we select c̄_j < 0 and now move along δ^(j), where δ^(j)_B = −B^{-1} a_j, δ^(j)_j = 1, and δ^(j)_i = 0 otherwise.
- Since Aδ^(j) = 0 and Ax = b, we have, for any α ≥ 0, A(x + αδ^(j)) = b.
- Since x ≥ 0, if δ^(j)_B = −B^{-1} a_j ≥ 0, then for any α ≥ 0, x + αδ^(j) ≥ 0.
- Therefore, x + αδ^(j) is feasible for any α ≥ 0; however, the cost is unbounded below: c^T (x + αδ^(j)) = c^T x + α c̄_j → −∞ as α → ∞.

Bounded edge direction
- If δ^(j)_B = −B^{-1} a_j ≱ 0, then α must be sufficiently small; otherwise x + αδ^(j) ≱ 0.
- Ratio test:

  α_min = min { −x_i / δ^(j)_i : i ∈ B, δ^(j)_i < 0 }.

- For some i* ∈ B, we have x_{i*} + α_min δ^(j)_{i*} = 0.
- The nondegeneracy assumption x_B > 0 gives α_min > 0.
- Let x⁺ = x + α_min δ^(j); then
  - x⁺_B ≥ 0 but x⁺_{i*} = 0
  - x⁺_j > 0
  - x⁺_i = 0 for i ∉ B ∪ {j}
- Updated basis: B⁺ = B ∪ {j} \ {i*}; updated BFS: x⁺ = x + α_min δ^(j). (See the helper below.)
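The ratio test as a small helper for the running sketch; it returns None when the edge direction is unbounded:

```python
def ratio_test(x, d, Bidx):
    """alpha_min = min{-x_i/d_i : i in B, d_i < 0}; (None, None) if d_B >= 0."""
    neg = [i for i in Bidx if d[i] < 0]
    if not neg:
        return None, None               # unbounded edge direction
    alphas = [-x[i] / d[i] for i in neg]
    k = int(np.argmin(alphas))
    return alphas[k], neg[k]            # (step length, leaving index i*)
```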

The Simplex method (what we have so far)
- input: a basis B and the corresponding BFS x
- check: c̄^T := c^T − c_B^T B^{-1} A ≥ 0^T?
  - if yes: optimal; return x and stop.
  - if not: choose j such that c̄_j < 0, and then move along δ^(j)
    - if δ^(j)_B ≥ 0: the problem is unbounded
    - otherwise:
      α_min = min { −x_i / δ^(j)_i : i ∈ B, δ^(j)_i < 0 }, achieved at index i*
      updated basis: B ← B ∪ {j} \ {i*}
      updated BFS: x ← x + α_min δ^(j)

A complete one-loop sketch of the method follows below.
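Putting the pieces together, a bare-bones teaching sketch (it assumes nondegeneracy, does no anti-cycling, and uses the most-negative-reduced-cost pivot rule, which is our choice; the slides leave the rule open). It reuses ratio_test from above:

```python
def simplex(A, b, c, basis, tol=1e-10):
    """min c^T x s.t. Ax = b, x >= 0, from a starting basis (index list)."""
    n = A.shape[1]
    basis = list(basis)
    while True:
        Binv = np.linalg.inv(A[:, basis])     # fine for teaching, not for speed
        x = np.zeros(n)
        x[basis] = Binv @ b                   # current BFS
        cbar = c - c[basis] @ Binv @ A        # reduced costs
        if (cbar >= -tol).all():
            return x, basis, c @ x            # optimal BFS
        j = int(np.argmin(cbar))              # entering index, cbar_j < 0
        d = np.zeros(n)
        d[basis] = -Binv @ A[:, j]            # edge direction delta^(j)
        d[j] = 1.0
        alpha, i_star = ratio_test(x, d, basis)
        if alpha is None:
            raise ValueError("LP is unbounded below")
        basis[basis.index(i_star)] = j        # pivot: j enters, i* leaves
```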

Example 16.2

Introduce slack variables x_3, x_4, x_5 and reformulate to the standard form:

minimize −2x_1 − 5x_2
subject to x_1 + x_3 = 4, x_2 + x_4 = 6, x_1 + x_2 + x_5 = 8, x_1, ..., x_5 ≥ 0.

- Starting basis: B = {3, 4, 5}, B = [a_3, a_4, a_5], and BFS x = [0, 0, 4, 6, 8]^T with x_B = B^{-1} b.
- Reduced costs: c̄^T := c^T − c_B^T B^{-1} A = [−2, −5, 0, 0, 0]. Since c̄_B^T = 0^T, we only need to compute c̄_j for j ∉ B.
- Since c̄_2 < 0, the current solution is not optimal; bring j = 2 into the basis.

- Compute the edge direction: δ^(j) has δ_2 = 1 and δ_B = −B^{-1} a_2, i.e., δ^(j) = [0, 1, 0, −1, −1]^T.
- δ^(j)_B = [0, −1, −1]^T ≱ 0, so this edge direction is not unbounded.
- Ratio test: α_min = min { −x_i / δ^(j)_i : i ∈ B, δ^(j)_i < 0 } = min { 6/1, 8/1 } = 6.
- The min is achieved at i* = 4, so remove 4 from the basis.
- Updated basis: B = {2, 3, 5}, B = [a_2, a_3, a_5], and BFS x = [0, 6, 4, 0, 2]^T.

- Current basis: B = {2, 3, 5}, B = [a_2, a_3, a_5], and BFS x = [0, 6, 4, 0, 2]^T.
- Reduced costs: c̄^T := c^T − c_B^T B^{-1} A = [−2, 0, 0, 5, 0].
- Since c̄_1 < 0, the current solution is not optimal; bring j = 1 into the basis.
- Compute the edge direction: δ^(j) = [1, 0, −1, 0, −1]^T.
- Ratio test: α_min = min { −x_i / δ^(j)_i : i ∈ B, δ^(j)_i < 0 } = min { 4/1, 2/1 } = 2.
- The min is achieved at i* = 5, so remove 5 from the basis.
- Updated basis: B = {1, 2, 3}, B = [a_1, a_2, a_3], and BFS x = [2, 6, 2, 0, 0]^T.
- Updated reduced costs: c̄^T = [0, 0, 0, 3, 2] ≥ 0, so the solution is optimal.
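Running the sketch from earlier on this example reproduces the iterates above (0-based column indices; the data match the standard form written on the previous slides):

```python
A2 = np.array([[1., 0., 1., 0., 0.],
               [0., 1., 0., 1., 0.],
               [1., 1., 0., 0., 1.]])
b2 = np.array([4., 6., 8.])
c2 = np.array([-2., -5., 0., 0., 0.])

x_opt, basis2, val = simplex(A2, b2, c2, basis=[2, 3, 4])
print(x_opt, basis2, val)   # [2. 6. 2. 0. 0.] [2, 1, 0] -34.0
```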

Row operations on the canonical augmented matrix
- Starting tableau.
- Itr 1: j = 2 in and i* = 4 out. Make the new column a_j equal the old column a_{i*} through row operations.
- Itr 2: j = 1 in and i* = 5 out. Make the new column a_j equal the old column a_{i*} through row operations.

Finite convergence

Theorem. Consider an LP in the standard form: minimize c^T x subject to Ax = b, x ≥ 0. Suppose that the LP is feasible and all BFSs are nondegenerate. Then the Simplex method terminates after a finite number of iterations. At termination, we either have an optimal basis B, or a direction δ such that Aδ = 0, δ ≥ 0, and c^T δ < 0. In the former case, the optimal cost is finite; in the latter case, the optimal cost is unbounded (−∞).

Degeneracy
- x is degenerate if some component of x_B equals 0.
- Geometrically: more than n active linear constraints at x.
- Consequence: the ratio test α_min = min { −x_i / δ^(j)_i : i ∈ B, δ^(j)_i < 0 } may return α_min = 0, causing x + α_min δ^(j) = x:
  - the BFS remains unchanged, so there is no improvement in cost
  - the basis does change
- Then finite termination is no longer guaranteed: revisiting a previous basis (cycling) is possible. There are a number of remedies to avoid cycling.
- Furthermore, a tie in the ratio test will cause the updated BFS to be degenerate.

Degeneracy illustration
- n − m = 2: we are looking at the 2D affine set {x : Ax = b}.
- B = {1, 2, 3, 6}; the BFS x is determined by Ax = b and x_4 = x_5 = 0; x is degenerate.
- x_4 and x_5 are non-basic and have edge directions g and f, respectively; both make the ratio test return 0 and do not improve the cost.
- After changing to the basis {1, 2, 3, 5}, the non-basic variables x_4 and x_6 have edge directions h and f; h can improve the cost, f cannot.

The Simplex method (what we have so far)
- input: a basis B and the corresponding BFS x
- check: c̄^T := c^T − c_B^T B^{-1} A ≥ 0^T?
  - if yes: optimal; return x and stop.
  - if not: choose j such that c̄_j < 0, and then move along δ^(j)
    - if δ^(j)_B ≥ 0: the problem is unbounded
    - otherwise:
      α_min = min { −x_i / δ^(j)_i : i ∈ B, δ^(j)_i < 0 }, achieved at index i*
      updated basis: B ← B ∪ {j} \ {i*}
      updated BFS: x ← x + α_min δ^(j)
      (if α_min = 0, anti-cycling schemes are applied)

Remaining question: how to find the initial BFS?

An easy case
- Suppose the original LP has the constraints Ax ≤ b, x ≥ 0, where b ≥ 0.
- Add slack variables s:

  [A, I] [x; s] = b,  [x; s] ≥ 0.

- An obvious basis is I, with corresponding BFS [x; s] = [0; b].
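Example 16.2 arises exactly this way; in the sketch (inequality data as recovered above):

```python
A_ineq = np.array([[1., 0.], [0., 1.], [1., 1.]])
b_ineq = np.array([4., 6., 8.])
A_std = np.hstack([A_ineq, np.eye(3)])              # [A, I]
c_std = np.concatenate([[-2., -5.], np.zeros(3)])
# slack columns 2, 3, 4 form the identity basis; the BFS is [0; b]
print(simplex(A_std, b_ineq, c_std, basis=[2, 3, 4])[0])
```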

Two-phase Simplex method

Generally, given a standard-form LP, neglect c^T x for the moment and consider Ax = b, x ≥ 0.
- Step 1. Ensure b ≥ 0 by multiplying the rows with b_i < 0 by (−1). (This would fail to work for inequality constraints.)
- Step 2. Add artificial variables y_1, ..., y_m and minimize their sum:

  minimize over (x, y):  y_1 + ... + y_m
  subject to  [A, I] [x; y] = b,  [x; y] ≥ 0.

- Step 3. Phase I: solve this LP, starting with the obvious basis I and its BFS.
- Step 4. Phase II: once y = 0 and the y-variables are non-basic, we get a basis from A and a BFS in x. Then drop y and run the Simplex method with min c^T x.
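A phase-I sketch built on the simplex function above; it glosses over the corner case where an artificial variable is still basic at value 0 at the phase-I optimum (driving it out takes extra pivots):

```python
def phase_one(A, b, tol=1e-8):
    """Find a BFS of {Ax = b, x >= 0} or report infeasibility."""
    m, n = A.shape
    A, b = A.copy(), b.copy()
    flip = b < 0
    A[flip] *= -1.0                        # step 1: make b >= 0
    b[flip] *= -1.0
    A1 = np.hstack([A, np.eye(m)])         # step 2: artificials y_1..y_m
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    x1, basis, val = simplex(A1, b, c1, basis=list(range(n, n + m)))
    if val > tol:                          # Proposition 16.1 below
        raise ValueError("original LP is infeasible")
    return x1[:n], basis                   # a BFS in x (see caveat above)

x0, basis0 = phase_one(A2, b2)             # on the Example 16.2 data
print(x0, basis0)                          # prints a BFS of that system
```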

Proposition (16.1). The original LP has a BFS if and only if the phase-I LP has an optimal BFS with objective 0.

Proof. If the original LP has a BFS x, then [x^T, 0^T]^T is a BFS of the phase-I LP. Clearly, this solution gives a 0 objective. Since the phase-I LP is feasible and lower bounded by 0, this solution is optimal for the phase-I LP.

Conversely, suppose the phase-I LP has an optimal BFS with 0 objective. Then this solution must have the form [x^T, 0^T]^T, where x ≥ 0. Hence Ax = b and x ≥ 0, so the original LP has a feasible solution. By the fundamental theorem of LP, there exists a BFS.

Two-phase Simplex method: illustration
- n − m = 3; each 2D face corresponds to some x_i = 0.

Summary

The Simplex method:
- traverses a set of BFSs (vertices of the feasible set)
- each basic solution corresponds to a basis B and has the form [x_B^T, 0^T]^T; x_B ≥ 0 ⇔ BFS, and moreover x_B > 0 ⇔ nondegenerate BFS
- leaves a BFS along an edge direction with a negative reduced cost
- an edge direction may be unbounded, or reach x_{i*} = 0 for some i* ∈ B
- in each iteration, some j enters the basis and some i* leaves the basis
- with anti-cycling schemes, the Simplex method stops in a finite number of iterations
- the phase-I LP checks feasibility and, if feasible, obtains a BFS in x

Uncovered topics
- big-M method: another way to obtain a BFS or check feasibility
- anti-cycling schemes: special pivot rules that introduce an order on the bases
- Simplex method on the tableau
- revised Simplex method: maintaining [B^{-1}, δ_B] at a low cost
- dual Simplex method: maintaining nonnegative reduced costs c̄ ≥ 0 and working toward feasibility x ≥ 0
- column generation: in large-scale problems, add c_i and a_i on demand
- sensitivity analysis: answering "what if" questions when c, b, A change slightly
- network flow problems: combinatorial problems with exact LP relaxations and fast algorithms

Quiz

Consider a general polyhedron Q = {x ∈ R^n : Ax = b, Bx ≥ d}.

Definition 1: x is a basic feasible solution of Q if x ∈ Q and there are n linearly independent active constraints at x.

Consider the standard-form polyhedron P = {x ∈ R^n : Ax = b, x ≥ 0}, where A has linearly independent rows.

Definition 2: x is a basic feasible solution of P if A = [B, D], where B is a full-rank square matrix, x_B = B^{-1} b ≥ 0, and x_D = 0.

Show that the two definitions are equivalent for P.

Quiz

Consider the standard-form polyhedron P = {x ∈ R^n : Ax = b, x ≥ 0}, where A has linearly independent rows.

1. Prove: if two bases B_1 and B_2 lead to the same basic solution x, then x must be degenerate.
2. Suppose x is a degenerate basic solution. Does it correspond to two or more distinct bases? Prove or give a counterexample.
3. Suppose x is a degenerate basic solution. Does there exist an adjacent basic solution that is degenerate? Prove or give a counterexample.

Quiz

Consider the standard-form polyhedron P = {x ∈ R^n : Ax = b, x ≥ 0}, where A ∈ R^{m×n} has linearly independent rows, and the problem of minimizing c^T x over P. Prove the following statements or provide counterexamples:

1. If m + 1 = n, then P has at most two basic feasible solutions.
2. The set of all optimal solutions, if any exist, is bounded.
3. At every optimal solution, no more than m variables can be positive.
4. If there is more than one optimal solution, then there are uncountably many optimal solutions.
5. Consider minimizing max{c^T x, d^T x} over P. If this problem has a solution, it must have a solution that is an extreme point of P.

Quiz

Q1: Let x ∈ P = {x ∈ R^n : Ax = b, x ≥ 0}. Show that d ∈ R^n is a feasible direction at x if and only if Ad = 0 and d_i ≥ 0 for every i such that x_i = 0.

Q2: Let x ∈ P = {x ∈ R^n : Ax = b, Dx ≤ f, Ex ≤ g} be such that Dx = f and Ex < g. Show that d ∈ R^n is a feasible direction at x if and only if Ad = 0 and Dd ≤ 0.