Properties of a Simple Variant of Klee-Minty s LP and Their Proof


Properties of a Simple Variant of Klee-Minty's LP and Their Proof

Tomonari Kitahara and Shinji Mizuno

December 28, 2010

Abstract. Kitahara and Mizuno (2010) present a simple variant of Klee-Minty's LP, which has some nice properties. We explain and prove those properties in this paper.

Keywords: Simplex method, Linear programming, Basic feasible solutions, Klee-Minty's LP

1 Introduction

Kitahara and Mizuno [2] extend Ye's analysis [5] for the Markov decision problem to general LPs (linear programming problems), and they obtain two upper bounds for the number of different BFSs (basic feasible solutions) generated by Dantzig's simplex method [1] for the standard form of LP

min c^T x, subject to Ax = b, x >= 0, (1)

where the number of equality constraints is m, the number of variables is n, A in R^{m x n}, b in R^m, c in R^n, and x in R^n. The upper bounds they get are

⌈ m (γ/δ) log( (c^T x^0 - z*) / (c^T x̂ - z*) ) ⌉ (2)

(Affiliations: Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-62, Ookayama, Meguro-ku, Tokyo, 152-8552, Japan. Tel.: +81-3-5734-2896, Fax: +81-3-5734-2947, E-mail: kitahara.t.ab@m.titech.ac.jp; Graduate School of Decision Science and Technology, Tokyo Institute of Technology, 2-12-1-W9-58, Ookayama, Meguro-ku, Tokyo, 152-8552, Japan. Tel.: +81-3-5734-286, Fax: +81-3-5734-2947, E-mail: mizuno.s.ab@m.titech.ac.jp)

and

n ⌈ m (γ/δ) log( m γ/δ ) ⌉, (3)

where x^0 is an initial BFS, x̂ is a second optimal BFS, z* is the optimal value, δ and γ are the minimum and the maximum values of all the positive elements of primal BFSs, respectively, and ⌈a⌉ denotes the smallest integer bigger than a in R. The size of the bounds highly depends on the ratio γ/δ. A question one may have is how tight the bounds are.

Kitahara and Mizuno [3] partially answer this question. They present a simple variant of Klee-Minty's LP [4], and they show that the number of iterations of Dantzig's simplex method for solving the variant is exactly γ/δ. So it is impossible to obtain an upper bound smaller than γ/δ. In this paper, we explain and prove the properties of the variant given in [3] more precisely.

2 A simple variant of Klee-Minty's LP

Klee-Minty's LP [4] is

max Σ_{i=1}^{m} 10^{m-i} x_i,
subject to x_1 <= 1,
2 Σ_{i=1}^{k-1} 10^{k-i} x_i + x_k <= 100^{k-1} for k = 2, 3, ..., m,
x >= 0,

where x = (x_1, x_2, ..., x_m)^T is a variable vector. Kitahara and Mizuno [3] present a simple variant of Klee-Minty's LP, which is

max Σ_{i=1}^{m} x_i,
subject to x_1 <= 1,
2 Σ_{i=1}^{k-1} x_i + x_k <= 2^k - 1 for k = 2, 3, ..., m,
x >= 0.

The standard form using a vector y = (y_1, y_2, ..., y_m)^T of slack variables is written as

max Σ_{i=1}^{m} x_i,
subject to x_1 + y_1 = 1,
2 Σ_{i=1}^{k-1} x_i + x_k + y_k = 2^k - 1 for k = 2, 3, ..., m, (4)
x >= 0, y >= 0.

The number of equality constraints of this problem is m and the number of variables is n = 2m.
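The standard form (4) is easy to write down explicitly. The following sketch (function and variable names are mine, not from the paper) builds A, b, c with the variables ordered (x_1, ..., x_m, y_1, ..., y_m), so that constraint k reads 2(x_1 + ... + x_{k-1}) + x_k + y_k = 2^k - 1:

```python
def build_variant(m):
    """Data (A, b, c) of problem (4) for max c^T (x, y), A (x, y) = b."""
    c = [1] * m + [0] * m                      # maximize x_1 + ... + x_m
    b = [2 ** k - 1 for k in range(1, m + 1)]  # right-hand sides 1, 3, 7, ...
    A = []
    for k in range(m):
        x_part = [2] * k + [1] + [0] * (m - k - 1)   # coefficients of x
        y_part = [0] * k + [1] + [0] * (m - k - 1)   # slack y_{k+1}
        A.append(x_part + y_part)
    return A, b, c

A, b, c = build_variant(3)
print(b)  # [1, 3, 7]
```

For m = 3 this reproduces the constraints x_1 + y_1 = 1, 2x_1 + x_2 + y_2 = 3, and 2x_1 + 2x_2 + x_3 + y_3 = 7.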

Lemma 2.1 The variant (4) has the following properties.

1. For each k in {1, 2, ..., m}, at any BFS (basic feasible solution), exactly one of x_k and y_k is a basic variable.
2. The problem has 2^m BFSs.
3. Each component of any BFS is an integer.
4. The problem is nondegenerate.
5. The optimal BFS is

x* = (0, 0, ..., 0, 2^m - 1)^T, y* = (1, 2^2 - 1, ..., 2^{m-1} - 1, 0)^T,

and the optimal value is 2^m - 1.

6. We have that δ = 1 and γ = 2^m - 1, where δ and γ are the minimum and the maximum values of all the positive elements of primal BFSs, respectively.

Proof. At first, we easily get from the equality constraints of (4) that

0 <= x_k <= 2^k - 1 and 0 <= y_k <= 2^k - 1 (5)

for any feasible solution (x, y) and any k in {1, 2, ..., m}. Next we will show that

x_k + y_k >= 1. (6)

From the first equality constraint in (4), we have (6) for k = 1. When k >= 2, we have from the (k-1)-th and k-th constraints that

2 Σ_{i=1}^{k-2} x_i + x_{k-1} + y_{k-1} = 2^{k-1} - 1,
2 Σ_{i=1}^{k-1} x_i + x_k + y_k = 2^k - 1.

Subtracting the first equality from the second equality, we get

x_k + y_k + x_{k-1} - y_{k-1} = 2^{k-1}. (7)

Hence we obtain (6) from (5), (7), and y_{k-1} >= 0, since x_k + y_k = 2^{k-1} - x_{k-1} + y_{k-1} >= 2^{k-1} - (2^{k-1} - 1) = 1.

Let (x, y) be any BFS of (4). From (6), either x_k or y_k must be a basic variable for each k in {1, 2, ..., m}. Since the number of basic variables of any BFS is m, exactly one of x_k and y_k is a basic variable.

If we choose one of x_k and y_k as a basic variable for each k in {1, 2, ..., m}, a basic solution (x', y') is determined. The value of x'_1 or y'_1 is 1 from the equality constraint x_1 + y_1 = 1, and the values of x'_k or y'_k for k = 2, 3, ..., m are easily computed from (7) in this order. Then the value of each basic variable is a positive integer from (6) and (7). So the basic solution (x', y') is feasible and nondegenerate. The number of such basic feasible solutions is obviously 2^m.

We get from the last equality constraint of (4) that

Σ_{i=1}^{m} x_i <= 2 Σ_{i=1}^{m-1} x_i + x_m + y_m = 2^m - 1.

Hence the objective function value is at most 2^m - 1. When y_i (i in {1, 2, ..., m-1}) and x_m are basic variables, the BFS is computed as in property 5 of the lemma. Since its objective function value is 2^m - 1, it is an optimal solution. The last property 6 in the lemma is obvious from the discussion above.

3 Dantzig's simplex method for the variant

Suppose that the initial BFS is x^0 = 0, y^0 = (1, 2^2 - 1, ..., 2^m - 1)^T, and we generate a sequence of BFSs by Dantzig's simplex method [1], in which a nonbasic variable whose reduced cost is maximum (in the case of a maximization problem) is chosen as the entering variable; if two or more nonbasic variables have the maximal reduced cost, then the variable (either x_i or y_i) of minimum index among them is chosen as the entering variable.

At first, we explain the case of m = 1. Since x^0 = 0, y^0 = 1 at the initial BFS, y_1 is the basic variable and the dictionary for the initial BFS is

z = x_1,
y_1 = 1 - x_1, (8)

where z represents the objective function. Then Dantzig's simplex method updates the dictionaries as follows: x_1 is chosen as the entering variable from (8), and the second dictionary is calculated as

z = 1 - y_1,
x_1 = 1 - y_1.

This dictionary is optimal. We get an optimal solution x_1 = 1, y_1 = 0 and an optimal value 1.

Then we explain the case of m = 2. Since x^0 = 0, y^0 = (1, 3)^T at the initial BFS, y_1 and y_2 are the basic variables and the dictionary for the initial BFS is

z = x_1 + x_2,
y_1 = 1 - x_1, (9)
y_2 = 3 - 2x_1 - x_2,

where z represents the objective function. Then Dantzig's simplex method updates the dictionary as follows: x_1 is chosen as the entering variable from (9), and the second dictionary is calculated as

z = 1 - y_1 + x_2,
x_1 = 1 - y_1,
y_2 = 1 + 2y_1 - x_2.

The third dictionary is

z = 2 + y_1 - y_2,
x_1 = 1 - y_1,
x_2 = 1 + 2y_1 - y_2.

The 4-th dictionary is

z = 3 - x_1 - y_2,
y_1 = 1 - x_1, (10)
x_2 = 3 - 2x_1 - y_2.

This dictionary is optimal. We get an optimal solution (x_1, x_2) = (0, 3), (y_1, y_2) = (1, 0) and an optimal value 2^2 - 1 = 3. The x-parts of the BFSs generated are

v^0 = (0, 0)^T, v^1 = (1, 0)^T, v^2 = (1, 1)^T, v^3 = (0, 3)^T,

which are shown in Figure 1. We observe that

the number of iterations is 3 (= 2^2 - 1),

any reduced cost of every dictionary is 1, 0, or -1,

the objective function value increases by 1 at each iteration.

Figure 1: Vertices v^0, v^1, v^2, v^3 generated by Dantzig's simplex method when m = 2

Now we consider the following variant of Klee-Minty's LP:

max (2^m - 1) - Σ_{i=1}^{m} x_i,
subject to x_1 <= 1,
2 Σ_{i=1}^{k-1} x_i + x_k <= 2^k - 1 for k = 2, 3, ..., m,
x >= 0.

The standard form using a vector y = (y_1, y_2, ..., y_m)^T of slack variables is written as

max (2^m - 1) - Σ_{i=1}^{m} x_i,
subject to x_1 + y_1 = 1,
2 Σ_{i=1}^{k-1} x_i + x_k + y_k = 2^k - 1 for k = 2, 3, ..., m, (11)
x >= 0, y >= 0.

It is easy to see that this problem (11) also has the properties in Lemma 2.1 except for the optimal solution. The optimal solution of (11) is x* = 0, y* = (1, 2^2 - 1, ..., 2^m - 1)^T, and the optimal value is 2^m - 1.

Suppose that the initial BFS is x^0 = (0, 0, ..., 0, 2^m - 1)^T, y^0 = (1, 2^2 - 1, ..., 2^{m-1} - 1, 0)^T, and we generate a sequence of BFSs by Dantzig's simplex method for the problem (11). When m = 1, Dantzig's simplex method finds the optimal solution in one iteration just as in the case of the problem (4). Next

we explain the case of m = 2. Since x^0 = (0, 3)^T, y^0 = (1, 0)^T at the initial BFS, y_1 and x_2 are the basic variables and the dictionary for the initial BFS is

w = 0 + x_1 + y_2,
y_1 = 1 - x_1, (12)
x_2 = 3 - 2x_1 - y_2,

where w represents the objective function. Then Dantzig's simplex method updates the dictionary as follows: x_1 is chosen as the entering variable from (12), and the second dictionary is calculated as

w = 1 - y_1 + y_2,
x_1 = 1 - y_1,
x_2 = 1 + 2y_1 - y_2.

The third dictionary is

w = 2 + y_1 - x_2,
x_1 = 1 - y_1,
y_2 = 1 + 2y_1 - x_2.

The 4-th dictionary is

w = 3 - x_1 - x_2,
y_1 = 1 - x_1, (13)
y_2 = 3 - 2x_1 - x_2.

This dictionary is optimal. We get an optimal solution (x_1, x_2) = (0, 0), (y_1, y_2) = (1, 3) and an optimal value 3. The x-parts of the BFSs generated are v^3, v^2, v^1, v^0, which appear in just the opposite order to the case of the problem (4). We observe that

the number of iterations is 3 (= 2^2 - 1),

any reduced cost of every dictionary is 1, 0, or -1,

the objective function value increases by 1 at each iteration.

Theorem 3.1 When we generate a sequence of BFSs by Dantzig's simplex method for the problem (4) from the initial BFS x^0 = 0, y^0 = (1, 2^2 - 1, ..., 2^m - 1)^T, then

the number of iterations is 2^m - 1,

any reduced cost of every dictionary is 1, 0, or -1,

the objective function value increases by 1 at each iteration.

When we generate a sequence of BFSs by Dantzig's simplex method for the problem (11) from the initial BFS x^0 = (0, 0, ..., 0, 2^m - 1)^T, y^0 = (1, 2^2 - 1, ..., 2^{m-1} - 1, 0)^T, then

the number of iterations is 2^m - 1,

any reduced cost of every dictionary is 1, 0, or -1,

the objective function value increases by 1 at each iteration.

We can prove these results by induction. We get the results for m = 1, 2 from the discussion above. Then we will show that the results are also true for m = 3. For ease of understanding the discussion, we show all the iterations of Dantzig's simplex method for solving the problem (4), when m = 3 and m = 4, in Section 5. When m = 3, the first dictionary is

z = x_1 + x_2 + x_3,
y_1 = 1 - x_1,
y_2 = 3 - 2x_1 - x_2, (14)
y_3 = 7 - 2x_1 - 2x_2 - x_3.

Note that the nonbasic variable x_3 does not appear except in the objective function and the last constraint in (14), and that the reduced cost of x_3 is 1. So the variable x_3 is not chosen as the entering variable until the reduced costs of the other nonbasic variables become less than 1. While the variable x_3 is nonbasic, the dictionaries except for the last constraint are updated just as in the case of m = 2. Since any reduced cost for m = 2 is 1, 0, or -1, all the reduced costs become less than 1 only when an optimal BFS for m = 2 is obtained. Thus the first 2^2 - 1 = 3 iterations are just like the case of m = 2, and the (x_1, x_2)^T-part of the BFSs moves in the order v^0, v^1, v^2, v^3. After 3 iterations, we get the 4-th dictionary

z = 3 - x_1 - y_2 + x_3,
y_1 = 1 - x_1,
x_2 = 3 - 2x_1 - y_2, (15)
y_3 = 1 + 2x_1 + 2y_2 - x_3.

We can see that this dictionary, except for the last row and the last column, is just the same as the dictionary (10). At this dictionary, x_3 is chosen as the

entering variable, and the 5-th dictionary is

z = 4 + x_1 + y_2 - y_3,
y_1 = 1 - x_1,
x_2 = 3 - 2x_1 - y_2, (16)
x_3 = 1 + 2x_1 + 2y_2 - y_3.

We see that the objective function value increases by 1 and each reduced cost is 1, 0, or -1 in this case too. The nonbasic variable y_3 does not appear except in the objective function and the last constraint in (16), and the reduced cost of y_3 is -1. So the variable y_3 is not chosen as the entering variable until the algorithm stops. Hence the iterations are just like the case of m = 2. Note that the dictionary (16), except for the last column, the last row, and the constant in z, is the same as the dictionary (12). So the (x_1, x_2)^T-part of the BFSs generated by Dantzig's simplex method starting from (16) moves in the order v^3, v^2, v^1, v^0 (see Figure 2, where the vertices generated are shown as u^0, u^1, ..., u^7). After 3 iterations, we get the 8-th dictionary

z = 7 - x_1 - x_2 - y_3,
y_1 = 1 - x_1,
y_2 = 3 - 2x_1 - x_2,
x_3 = 7 - 2x_1 - 2x_2 - y_3.

This dictionary is optimal and the optimal value is 2^3 - 1 = 7. It is easy to check that the x-parts of the iterates are

u^0 = (0, 0, 0)^T, u^1 = (1, 0, 0)^T, u^2 = (1, 1, 0)^T, u^3 = (0, 3, 0)^T,
u^4 = (0, 3, 1)^T, u^5 = (1, 1, 3)^T, u^6 = (1, 0, 5)^T, u^7 = (0, 0, 7)^T,

whose objective function values are 0, 1, 2, 3, 4, 5, 6, 7 in this order. Hence all the results of the problem (4) in the theorem are true for m = 3. In a similar way, we can also prove that all the results of the problem (11) in the theorem are true for m = 3. We will prove the theorem in the next section. Here we state the next two corollaries, which are obtained from the second and third results for each problem in the theorem.

Figure 2: Vertices u^0, u^1, ..., u^7 generated by Dantzig's simplex method when m = 3

Corollary 3.1 The sequence of BFSs generated by the simplex method with the minimum index rule for the problem (4) is the same as in the case of Dantzig's rule.

Corollary 3.2 Let {(v^i, u^i) in R^{2m} : i = 0, 1, 2, ..., 2^m - 1} be the sequence of BFSs generated by Dantzig's simplex method for the problem (4) from the initial BFS x^0 = 0, y^0 = (1, 2^2 - 1, ..., 2^m - 1)^T. Then the sequence of BFSs generated by Dantzig's simplex method for the problem (11) from the initial BFS x^0 = (0, 0, ..., 0, 2^m - 1)^T, y^0 = (1, 2^2 - 1, ..., 2^{m-1} - 1, 0)^T is {(v^{2^m - 1 - i}, u^{2^m - 1 - i}) in R^{2m} : i = 0, 1, 2, ..., 2^m - 1}.

4 Proof of Theorem 3.1

In this section, we prove Theorem 3.1. For ease of understanding the proof, we show in Section 5 all the iterations of Dantzig's simplex method for solving the problem (4) when m = 3 and m = 4.

Proof. Assume that all the results in Theorem 3.1 are true for m = k. When m = k + 1, the first dictionary of the problem (4) is

z = x_1 + ... + x_k + x_{k+1},
y_1 = 1 - x_1,
...
y_k = (2^k - 1) - 2x_1 - ... - x_k, (17)
y_{k+1} = (2^{k+1} - 1) - 2x_1 - ... - 2x_k - x_{k+1}.

Note that the nonbasic variable x_{k+1} does not appear except in the objective function and the last constraint in (17), and that the reduced cost of x_{k+1} is 1. So the variable x_{k+1} is not chosen as the entering variable until the reduced costs of the other nonbasic variables become less than 1. While the variable x_{k+1} is nonbasic, the dictionaries except for the last constraint are updated just as in the case of m = k. Since any reduced cost for m = k is 1, 0, or -1, all the reduced costs become less than 1 only when an optimal BFS for m = k is obtained. Thus the first 2^k - 1 iterations are just like the case of m = k. After 2^k - 1 iterations, the basic variables are y_1, ..., y_{k-1}, x_k, y_{k+1}, and we get the 2^k-th dictionary

z = (2^k - 1) - x_1 - ... - y_k + x_{k+1},
y_1 = 1 - x_1,
...
x_k = (2^k - 1) - 2x_1 - ... - y_k, (18)
y_{k+1} = 1 + 2x_1 + ... + 2y_k - x_{k+1}.

This dictionary, except for the last row and the last column, is just the same as the dictionary of the optimal BFS for m = k. At this dictionary, x_{k+1} is chosen as the entering variable, and the (2^k + 1)-th dictionary is

z = 2^k + x_1 + ... + y_k - y_{k+1},
y_1 = 1 - x_1,
...
x_k = (2^k - 1) - 2x_1 - ... - y_k, (19)
x_{k+1} = 1 + 2x_1 + ... + 2y_k - y_{k+1}.

We see that the objective function value increases by 1, and each reduced cost is 1, 0, or -1 in this case too. The nonbasic variable y_{k+1} does not appear except in the objective function and the last constraint in (19), and the reduced cost of y_{k+1} is -1. So the variable y_{k+1} is not chosen as the entering variable until the algorithm stops. Hence the iterations are just like the case of m = k.

The dictionary (19), except for the last column, the last row, and the constant in z, is the same as the initial dictionary of the problem (11) when m = k. Thus the next 2^k - 1 iterations are just like the case of the problem (11) when m = k. After these 2^k - 1 iterations, we get the 2^{k+1}-th dictionary

z = (2^{k+1} - 1) - x_1 - ... - x_k - y_{k+1},
y_1 = 1 - x_1,
...
y_k = (2^k - 1) - 2x_1 - ... - x_k,
x_{k+1} = (2^{k+1} - 1) - 2x_1 - ... - 2x_k - y_{k+1}.

This dictionary is optimal and the optimal value is 2^{k+1} - 1. It is easy to check that all the results of the problem (4) for m = k + 1 in the theorem are true. In a similar way, we can also prove that all the results of the problem (11) in the theorem are true when m = k + 1.

5 Iterations when m = 3 and m = 4

In this section, we show all the iterations of Dantzig's simplex method for solving the problem (4) when m = 3 and m = 4. When m = 3, the initial dictionary is

z = x_1 + x_2 + x_3
y_1 = 1 - x_1
y_2 = 3 - 2x_1 - x_2
y_3 = 7 - 2x_1 - 2x_2 - x_3.

The second dictionary is

z = 1 - y_1 + x_2 + x_3
x_1 = 1 - y_1
y_2 = 1 + 2y_1 - x_2
y_3 = 5 + 2y_1 - 2x_2 - x_3.

The third dictionary is

z = 2 + y_1 - y_2 + x_3
x_1 = 1 - y_1
x_2 = 1 + 2y_1 - y_2
y_3 = 3 - 2y_1 + 2y_2 - x_3.

The 4-th dictionary is

z = 3 - x_1 - y_2 + x_3
y_1 = 1 - x_1
x_2 = 3 - 2x_1 - y_2
y_3 = 1 + 2x_1 + 2y_2 - x_3.

The 5-th dictionary is

z = 4 + x_1 + y_2 - y_3
y_1 = 1 - x_1
x_2 = 3 - 2x_1 - y_2
x_3 = 1 + 2x_1 + 2y_2 - y_3.

The 6-th dictionary is

z = 5 - y_1 + y_2 - y_3
x_1 = 1 - y_1
x_2 = 1 + 2y_1 - y_2
x_3 = 3 - 2y_1 + 2y_2 - y_3.

The 7-th dictionary is

z = 6 + y_1 - x_2 - y_3
x_1 = 1 - y_1
y_2 = 1 + 2y_1 - x_2
x_3 = 5 + 2y_1 - 2x_2 - y_3.

The 8-th dictionary is

z = 7 - x_1 - x_2 - y_3
y_1 = 1 - x_1
y_2 = 3 - 2x_1 - x_2
x_3 = 7 - 2x_1 - 2x_2 - y_3.

This dictionary is optimal.

Next we show all the iterations of Dantzig's simplex method for solving the problem (4) when m = 4. The initial dictionary is

z = x_1 + x_2 + x_3 + x_4
y_1 = 1 - x_1
y_2 = 3 - 2x_1 - x_2
y_3 = 7 - 2x_1 - 2x_2 - x_3
y_4 = 15 - 2x_1 - 2x_2 - 2x_3 - x_4.

The second dictionary is

z = 1 - y_1 + x_2 + x_3 + x_4
x_1 = 1 - y_1
y_2 = 1 + 2y_1 - x_2
y_3 = 5 + 2y_1 - 2x_2 - x_3
y_4 = 13 + 2y_1 - 2x_2 - 2x_3 - x_4.

The third dictionary is

z = 2 + y_1 - y_2 + x_3 + x_4
x_1 = 1 - y_1
x_2 = 1 + 2y_1 - y_2
y_3 = 3 - 2y_1 + 2y_2 - x_3
y_4 = 11 - 2y_1 + 2y_2 - 2x_3 - x_4.

The 4-th dictionary is

z = 3 - x_1 - y_2 + x_3 + x_4
y_1 = 1 - x_1
x_2 = 3 - 2x_1 - y_2
y_3 = 1 + 2x_1 + 2y_2 - x_3
y_4 = 9 + 2x_1 + 2y_2 - 2x_3 - x_4.

The 5-th dictionary is

z = 4 + x_1 + y_2 - y_3 + x_4
y_1 = 1 - x_1
x_2 = 3 - 2x_1 - y_2
x_3 = 1 + 2x_1 + 2y_2 - y_3
y_4 = 7 - 2x_1 - 2y_2 + 2y_3 - x_4.

The 6-th dictionary is

z = 5 - y_1 + y_2 - y_3 + x_4
x_1 = 1 - y_1
x_2 = 1 + 2y_1 - y_2
x_3 = 3 - 2y_1 + 2y_2 - y_3
y_4 = 5 + 2y_1 - 2y_2 + 2y_3 - x_4.

The 7-th dictionary is

z = 6 + y_1 - x_2 - y_3 + x_4
x_1 = 1 - y_1
y_2 = 1 + 2y_1 - x_2
x_3 = 5 + 2y_1 - 2x_2 - y_3
y_4 = 3 - 2y_1 + 2x_2 + 2y_3 - x_4.

The 8-th dictionary is

z = 7 - x_1 - x_2 - y_3 + x_4
y_1 = 1 - x_1
y_2 = 3 - 2x_1 - x_2
x_3 = 7 - 2x_1 - 2x_2 - y_3
y_4 = 1 + 2x_1 + 2x_2 + 2y_3 - x_4.

The 9-th dictionary is

z = 8 + x_1 + x_2 + y_3 - y_4
y_1 = 1 - x_1
y_2 = 3 - 2x_1 - x_2
x_3 = 7 - 2x_1 - 2x_2 - y_3
x_4 = 1 + 2x_1 + 2x_2 + 2y_3 - y_4.

The 10-th dictionary is

z = 9 - y_1 + x_2 + y_3 - y_4
x_1 = 1 - y_1
y_2 = 1 + 2y_1 - x_2
x_3 = 5 + 2y_1 - 2x_2 - y_3
x_4 = 3 - 2y_1 + 2x_2 + 2y_3 - y_4.

The 11-th dictionary is

z = 10 + y_1 - y_2 + y_3 - y_4
x_1 = 1 - y_1
x_2 = 1 + 2y_1 - y_2
x_3 = 3 - 2y_1 + 2y_2 - y_3
x_4 = 5 + 2y_1 - 2y_2 + 2y_3 - y_4.

The 12-th dictionary is

z = 11 - x_1 - y_2 + y_3 - y_4
y_1 = 1 - x_1
x_2 = 3 - 2x_1 - y_2
x_3 = 1 + 2x_1 + 2y_2 - y_3
x_4 = 7 - 2x_1 - 2y_2 + 2y_3 - y_4.

The 13-th dictionary is

z = 12 + x_1 + y_2 - x_3 - y_4
y_1 = 1 - x_1
x_2 = 3 - 2x_1 - y_2
y_3 = 1 + 2x_1 + 2y_2 - x_3
x_4 = 9 + 2x_1 + 2y_2 - 2x_3 - y_4.

The 14-th dictionary is

z = 13 - y_1 + y_2 - x_3 - y_4
x_1 = 1 - y_1
x_2 = 1 + 2y_1 - y_2
y_3 = 3 - 2y_1 + 2y_2 - x_3
x_4 = 11 - 2y_1 + 2y_2 - 2x_3 - y_4.

The 15-th dictionary is

z = 14 + y_1 - x_2 - x_3 - y_4
x_1 = 1 - y_1
y_2 = 1 + 2y_1 - x_2
y_3 = 5 + 2y_1 - 2x_2 - x_3
x_4 = 13 + 2y_1 - 2x_2 - 2x_3 - y_4.

The 16-th dictionary is

z = 15 - x_1 - x_2 - x_3 - y_4
y_1 = 1 - x_1
y_2 = 3 - 2x_1 - x_2
y_3 = 7 - 2x_1 - 2x_2 - x_3
x_4 = 15 - 2x_1 - 2x_2 - 2x_3 - y_4.

This dictionary is optimal.

Acknowledgment

This research is supported in part by Grant-in-Aid for Science Research (A) 22438 of the Japan Society for the Promotion of Science.

References

[1] G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, New Jersey, 1963.

[2] T. Kitahara and S. Mizuno, A Bound for the Number of Different Basic Solutions Generated by the Simplex Method, technical paper, available at http://www.me.titech.ac.jp/~mizu_lab/simplexmethod/, 2010.

[3] T. Kitahara and S. Mizuno, Klee-Minty's LP and Upper Bounds for Dantzig's Simplex Method, technical paper, available at http://www.me.titech.ac.jp/~mizu_lab/simplexmethod/, 2010.

[4] V. Klee and G. J. Minty, How good is the simplex method?, in O. Shisha, editor, Inequalities III, pp. 159-175, Academic Press, New York, NY, 1972.

[5] Y. Ye, The Simplex and Policy Iteration Methods are Strongly Polynomial for the Markov Decision Problem with a Fixed Discount Rate, technical paper, available at http://www.stanford.edu/~yyye/, 2010.