
Finite Pivot Algorithms and Feasibility

Bohdan Lubomyr Kaluzny
School of Computer Science, McGill University
Montreal, Quebec, Canada

May 2001

A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements of the degree of Master of Science.

Supervised by Professor David Avis, School of Computer Science, McGill University.

Copyright © Bohdan Lubomyr Kaluzny, 2001

Abstract

This thesis studies the classical finite pivot methods for solving linear programs and their efficiency in attaining primal feasibility. We review Dantzig's largest-coefficient simplex method, Bland's smallest-index rule, and the least-index criss-cross method. We present the b-rule: a simple algorithm, based on Bland's smallest-index rule, for solving systems of linear inequalities (feasibility of linear programs). We prove that the b-rule is finite, from which we then prove Farkas' Lemma, the Duality Theorem for Linear Programming, and the Fundamental Theorem of Linear Inequalities. We present experimental results that compare the speed of the b-rule to the classical methods.

Résumé

This thesis studies the efficiency of the classical finite pivot methods for solving linear programming problems in attaining a feasible solution. We review Dantzig's largest-coefficient simplex method, Bland's smallest-index rule, and the least-index criss-cross method. We present the b-rule: a simple algorithm, based on Bland's simplex rule, that solves a system of linear constraints (feasibility of a linear program). We prove that the b-rule is finite. This leads to a proof of Farkas' Lemma, the Duality Theorem of Linear Programming, and the Fundamental Theorem of Linear Inequalities. Finally, we report the results of empirical experiments that compare the speed of the b-rule against the classical finite methods.

Statement of Originality

Assistance for this thesis, research and writing, has been received only where mentioned in the acknowledgements. Chapters 3, 4, and 5 present a review of literature. In addition to this survey, the observations in section 4.5 (lexicographically increasing vector for Bland's rule), section 5.5 (primal feasibility using criss-cross methods) and section 5.7 (practical criss-cross methods) represent an original contribution to knowledge. The main contributions of the thesis, chapters 6 and 7, unless noted otherwise, are also original contributions to knowledge.

Acknowledgements

I would first like to thank David Avis for being a great supervisor and mentor. My master's degree and this thesis could not have been completed without his helpful discussions, constant encouragement, patience, and wise direction. I thank him for introducing me to the field of operations research and for keeping me interested! I would like to thank NSERC, FCAR, the School of Computer Science, and McGill University for providing me with the financial and educational resources I needed to carry out my research. I thank my family and friends for their support and for giving me the opportunity to escape the abstract world in my mind once in a while. I dedicate my thesis to my sisters Darianna and Zoriana, my brother Oleh, and to my parents for providing me with love, support, and everything else I needed so that I could focus on advancing my education. Thank you!

Table of Contents

ABSTRACT
RÉSUMÉ
ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF EQUATIONS
LIST OF FIGURES

1. INTRODUCTION
2. FUNDAMENTAL CONCEPTS AND NOTATION
   2.1 INTRODUCTION
   2.2 VECTORS AND MATRICES
   2.3 LINEAR SYSTEMS
   2.4 ELIMINATION METHODS
   2.5 LINEAR COMBINATIONS
   2.6 LINEAR PROGRAMS
   2.7 DICTIONARIES
   2.8 PIVOTING
3. DANTZIG'S SIMPLEX METHOD
   3.1 INTRODUCTION
   3.2 DANTZIG'S SIMPLEX METHOD
   3.3 INITIALIZATION: PHASE ONE
   3.4 DEGENERACY AND CYCLING
   3.5 LEXICOGRAPHIC MINIMUM RATIO TEST
   3.6 FUNDAMENTAL THEOREM OF LINEAR PROGRAMMING
   3.7 DUAL SIMPLEX METHOD
   3.8 COMPLEXITY RESULTS
   3.9 COMMENTS
4. BLAND'S PIVOT RULE
   4.1 INTRODUCTION
   4.2 BLAND'S RULE
   4.3 INITIALIZATION
   4.4 PROOF OF FINITENESS
   4.5 LEXICOGRAPHIC INCREASE
   4.6 COMPLEXITY RESULTS
   4.7 COMMENTS

5. CRISS-CROSS METHODS
   5.1 INTRODUCTION
   5.2 LEAST-INDEX CRISS-CROSS
   5.3 PROOF OF FINITENESS
   5.4 LEXICOGRAPHIC INCREASE
   5.5 CRISS-CROSS: PRIMAL FEASIBILITY
   5.6 COMPLEXITY
   5.7 PRACTICAL CRISS-CROSS VARIANTS
   5.8 COMMENTS
6. THE b-RULE
   6.1 INTRODUCTION
   6.2 NOTATION
   6.3 NON-NEGATIVE SOLUTION TO A SYSTEM OF LINEAR EQUATIONS
   6.4 PROOF OF FINITENESS
   6.5 NON-NEGATIVE SOLUTION TO A SYSTEM OF LINEAR INEQUALITIES
   6.6 FEASIBILITY OF A LINEAR PROGRAM
   6.7 SOLUTION TO GENERAL LINEAR SYSTEMS
   6.8 FUNDAMENTAL THEOREM OF LINEAR INEQUALITIES
   6.9 FARKAS' LEMMA
   6.10 DUALITY THEOREM FOR LINEAR PROGRAMMING
   6.11 SOLVING A LINEAR PROGRAM
   6.12 CONCLUSIONS
7. EXPERIMENTAL RESULTS
   7.1 INTRODUCTION
   7.2 PREVIOUS WORK
   7.3 RANDOM LPS AND FEASIBILITY
   7.4 LOW DIMENSIONAL TESTS
   7.5 HIGH DIMENSIONAL TESTS
   7.6 SPARSE LPS
   7.7 CONCLUSIONS
8. CONCLUSION
APPENDIX A
APPENDIX B
BIBLIOGRAPHY

List of Tables

Table 1: Primal-dual possibilities
Table 2: (Avis and Chvátal) Dantzig's largest-coefficient simplex
Table 3: (Avis and Chvátal) Bland's smallest-index rule
Table 4: (Namiki) Simplex vs. criss-cross
Table 5: Comparison of finite methods on low dimensional feasible/infeasible LPs
Table 6: Comparison of finite methods on high dimensional feasible/infeasible LPs
Table 7: Comparison of random feasible linear programs
Table 8: Comparison of random infeasible linear programs
Table 9: Tests on random sparse linear programs

List of Equations

Equation 1: Primal linear program in standard form
Equation 2: Dual linear program in standard form
Equation 3: Dictionary of a linear program
Equation 4: LP dictionary in matrix form
Equation 5: Dual dictionary
Equation 6: Primal and dual dictionary relationship
Equation 7: Klee-Minty example
Equation 8: b-rule LP formulation
Equation 9: b-rule dictionary for solving LPs
Equation 10: Kuhn and Quandt random linear program model
Equation 11: Namiki's random LP model for testing criss-cross
Equation 12: Model for feasible and infeasible random LPs

List of Figures

Figure 1: Terminal dictionary sign structures
Figure 2: Admissible pivots
Figure 3: Largest-index variable leaves basis in Bland's rule
Figure 4: Largest-index variable enters basis in Bland's rule
Figure 5: Optimal dictionary sign structure
Figure 6: Primal unbounded dictionary sign structure
Figure 7: Situation LEAVE after substitution
Figure 8: Situation ENTER after substitution
Figure 9: Dictionary structure when largest-index variable leaves basis in criss-cross
Figure 10: Dictionary structure when largest-index variable enters basis in criss-cross
Figure 11: Terminal dictionary sign structures
Figure 12: Entering situations after substitution
Figure 13: Leaving situations after substitution
Figure 14: Terminal dictionary sign structures for systems requiring non-negativity
Figure 15: Admissible pivot for the b-rule
Figure 16: Largest-index variable leaves basis in b-rule
Figure 17: Largest-index variable enters basis in b-rule
Figure 18: Optimal dictionary sign structure for the b-rule
Figure 19: Infeasible dictionary sign structure for the b-rule
Figure 20: Leaving dictionary after substitution
Figure 21: Entering dictionary after substitution

1. Introduction

Solving a system of linear equations has been of interest to humans since the second millennium B.C. Today, the Gaussian elimination method [Gauss] is taught to students as part of their basic high school math curriculum. On the other hand, most university students would not know how to solve a system of linear equations with non-negativity constraints on the variables, let alone a system of linear inequalities. Algorithms for finding a solution to a system of linear inequalities are relatively new; they were first studied by Fourier in the 19th century [Fou9] and later re-discovered by several mathematicians ([Mot6], [Din8]). Since the discovery of the simplex method for linear programming by Dantzig in 1947 [Dan8], more attention has been given to the problem of solving linear systems. The Gaussian elimination method for solving a system of linear equations is a polynomial-time algorithm. Until the recent discoveries of polynomial-time methods by Khachian [Kha8] (ellipsoid method) and Karmarkar [Kar8] (interior point method), the complexity of linear programming (and of solving systems of linear inequalities) was an open problem. While these solutions give polynomial-time algorithms with respect to the number of bits of input, pivot methods may yield a polynomial-time algorithm with respect to the number of variables and constraints only. However, whether a polynomial-time pivot algorithm exists remains an intriguing open problem. Dantzig's simplex method, although known to be very efficient in practice, is a worst-case exponential-time pivot algorithm.

In this thesis we review the classical finite pivot methods for solving linear programs and their efficiency in attaining primal feasibility (solving a system of linear inequalities) or proving infeasibility. The aim of this thesis is twofold: first, we exhibit a simple algorithm for solving a system of linear inequalities that could be taught to students at the high school level as complementary material. Secondly, we examine the applications of our algorithm in the theory of linear programming and linear inequalities, and we compare its efficiency to the classical finite pivot methods for attaining feasibility in linear programs. The thesis is organized as follows. In chapter 2 we define the fundamental concepts and introduce notation. We review Dantzig's two-phase largest-coefficient simplex method with lexicographic ratio test in chapter 3, and Bland's smallest-index rule in chapter 4. In chapter 5 we study finite criss-cross methods and extend the least-index criss-cross method to solve linear systems to primal feasibility. In chapters 6 and 7 we present our main results. Chapter 6 is self-contained: we present the b-rule, a simple method for solving systems of linear inequalities based on the dual of Bland's smallest-index rule [Bla77]. The finiteness of the b-rule results in simple, easy-to-follow proofs of Farkas' Lemma, the Duality Theorem for linear programming, and the Fundamental Theorem of linear inequalities. Hence we suggest it be used as a pedagogical tool in the instruction of students being introduced to linear programming. The b-rule is also an alternative to the phase one methods for attaining a basic feasible solution of a linear program. In chapter 7 we define a random linear programming problem model that generates both feasible and infeasible problems and use it to compare the efficiency of the b-rule to the classical finite methods presented in chapters 3, 4, and 5.

2. Fundamental Concepts and Notation

2.1 Introduction

We assume the reader is familiar with the basic elements of linear algebra such as vectors, matrices, and their properties. For an in-depth introduction to linear algebra, please see Lay [La9]. Chvátal [Chv8] provides an excellent introduction to linear programming. In this chapter we define general concepts and introduce notation used throughout this thesis.

2.2 Vectors and Matrices

An $n$-dimensional vector $v$ is a list of $n$ real numbers $v_1, v_2, \dots, v_n$, usually expressed by one of the following notations: $(v_1, v_2, \dots, v_n)$, $[v_1 \; v_2 \; \cdots \; v_n]$, or $[v_1 \; v_2 \; \cdots \; v_n]^T$. The last two representations are referred to as row and column vectors respectively. The set of all vectors with $n$ entries is denoted by $\mathbb{R}^n$. A real number is a 1-dimensional vector. An $m \times n$ matrix $A$ is a collection of $m$ $n$-dimensional row vectors, or equivalently $n$ $m$-dimensional column vectors:

$$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}.$$

The element (number) in the $i$th row and the $j$th column of a matrix $A$ is denoted by $a_{ij}$. The set of all $m \times n$ matrices is denoted by

$\mathbb{R}^{m \times n}$. An $m$-dimensional row vector and an $n$-dimensional column vector are also $1 \times m$ and $n \times 1$ matrices respectively. See Appendix A for a short overview of vector and matrix arithmetic and other properties.

2.3 Linear Systems

A linear equation in the variables $x_1, \dots, x_n$ is an equation that can be written in the form $a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b$, where $b$ and the coefficients $a_1, \dots, a_n$ are real numbers known in advance. Similarly, a linear inequality can be written in the form $a_1 x_1 + a_2 x_2 + \cdots + a_n x_n \le b$. A (linear) system of linear equations (or inequalities) is a collection of one or more linear equations (inequalities) involving the same set of variables, say $x_1, \dots, x_n$. A solution of a linear system is an assignment of values (real numbers) to the variables of the system such that every equation/inequality is satisfied.

2.3.1 Matrix Notation

The information of a linear system of $m$ equations/inequalities in $n$ variables can be recorded using a coefficient matrix $A \in \mathbb{R}^{m \times n}$, a column vector $x \in \mathbb{R}^n$, and a column vector $b \in \mathbb{R}^m$:

$$\begin{array}{c} a_{11} x_1 + \cdots + a_{1n} x_n = b_1 \\ \vdots \\ a_{m1} x_1 + \cdots + a_{mn} x_n = b_m \end{array} \quad\Longleftrightarrow\quad Ax = b.$$

2.3.2 Equivalent Forms

A system of linear inequalities can have alternative forms, for example:

$$Ax \ge b \iff (-A)x \le (-b), \qquad \left\{\begin{array}{l} Ax \le b \\ Cx \ge d \end{array}\right. \iff \left\{\begin{array}{l} Ax \le b \\ (-C)x \le (-d). \end{array}\right.$$

A system of linear equations can be interpreted as a system of linear inequalities:

$$Ax = b \iff \left\{\begin{array}{l} Ax \le b \\ Ax \ge b. \end{array}\right.$$

2.3.3 Linear Subsystems

Given a system of linear inequalities $Ax \le b$, removing $p$ inequalities results in a new system $A'x \le b'$, where $A' \in \mathbb{R}^{(m-p) \times n}$ and $b' \in \mathbb{R}^{m-p}$. $A'x \le b'$ is a subsystem of $Ax \le b$.

2.4 Elimination Methods

2.4.1 Gaussian Elimination

In the 19th century Gauss [Gauss] discovered an algorithm for solving an arbitrary system of linear equations. The method consists of successive elimination of variables and equations (back substitution). Similar elimination methods date back to antiquity. Chvátal [Chv8] provides a concise presentation of the method and discusses its accuracy and speed. For more information concerning the history of elimination methods see [Str67].

2.4.2 Fourier-Motzkin Elimination

Fourier, and later Motzkin, discovered a method similar to Gaussian elimination that applies to a system of linear inequalities. For an in-depth look, see [Ku56].
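As an illustration of the elimination idea (our sketch, not from the original text), a single Fourier-Motzkin step can be written in a few lines. The representation of inequalities and the function name are our own choices, and exact rationals are used to avoid rounding:

```python
from fractions import Fraction

def fm_eliminate(rows, j):
    """One Fourier-Motzkin step: eliminate variable j from the system
    rows = [(a, b), ...], where each pair encodes a[0]*x0 + ... <= b.
    Returns an equivalent system whose inequalities do not involve xj."""
    pos = [(a, b) for a, b in rows if a[j] > 0]   # upper bounds on xj
    neg = [(a, b) for a, b in rows if a[j] < 0]   # lower bounds on xj
    out = [(a, b) for a, b in rows if a[j] == 0]  # unaffected inequalities
    # Combining each upper bound with each lower bound cancels xj.
    for ap, bp in pos:
        for an, bn in neg:
            a = [ap[k] / ap[j] - an[k] / an[j] for k in range(len(ap))]
            out.append((a, bp / ap[j] - bn / an[j]))
    return out

F = Fraction
# x0 + x1 <= 4,  -x0 <= 0,  -x1 <= 0,  x0 - x1 <= 1
system = [([F(1), F(1)], F(4)), ([F(-1), F(0)], F(0)),
          ([F(0), F(-1)], F(0)), ([F(1), F(-1)], F(1))]
print(fm_eliminate(system, 0))  # inequalities in x1 only
```

Repeated application over all variables decides feasibility of a system of linear inequalities, at the cost of a potentially quadratic blow-up in the number of inequalities per step.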

2.5 Linear Combinations

2.5.1 Linear Combinations

Given vectors $v_1, v_2, \dots, v_p \in \mathbb{R}^n$ and given scalars $c_1, c_2, \dots, c_p$, the vector $w$ defined by $w = c_1 v_1 + c_2 v_2 + \cdots + c_p v_p$ is called a linear combination of $v_1, v_2, \dots, v_p$ using weights $c_1, c_2, \dots, c_p$.

2.5.2 Linear Dependence and Independence

A set of vectors $\{v_1, \dots, v_p\} \subseteq \mathbb{R}^d$ is said to be linearly dependent if there exist weights $c_1, c_2, \dots, c_p$, not all zero, such that $c_1 v_1 + c_2 v_2 + \cdots + c_p v_p = 0$. Otherwise, the set of vectors $\{v_1, \dots, v_p\}$ is linearly independent.
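A quick numerical way to test the definition (an illustrative sketch of ours, assuming numpy is available) is the standard rank criterion: a set of vectors is linearly independent exactly when the matrix having them as columns has full column rank:

```python
import numpy as np

def linearly_independent(vectors, tol=1e-10):
    """Rank test: vectors are independent iff the matrix with the
    vectors as columns has rank equal to the number of vectors."""
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M, tol=tol) == M.shape[1]

# (1,0,0) and (0,1,0) are independent; adding their sum makes the set dependent.
v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(linearly_independent([v1, v2]))           # True
print(linearly_independent([v1, v2, v1 + v2]))  # False
```

Note that floating-point rank (computed via SVD) is a numerical, not exact, test; exact arithmetic would be needed for a certificate.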

2.6 Linear Programs

If $c_1, c_2, \dots, c_n$ are real numbers, then the function $f$ of the real variables $x_1, x_2, \dots, x_n$ defined by $f(x_1, x_2, \dots, x_n) = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$ is called a linear function. Linear equations and linear inequalities are also known as linear constraints. The problem of maximizing (or minimizing) a linear function subject to a finite number of linear constraints is called linear programming.

2.6.1 Standard Form

Within this thesis, we consider linear programs in the following standard form:

$$(P)\quad \text{Maximize } cx \quad \text{subject to: } Ax \le b, \; x \ge 0,$$

where $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$ are given.

Equation 1: Primal linear program in standard form

We will also refer to this formulation as the primal form of the linear program ("primal LP" for short).

2.6.2 Terminology

The linear function $cx$, which we attempt to optimize, is called the objective function. A feasible solution is an assignment of values to the decision variables $x_1, x_2, \dots, x_n$ such that all the constraints are satisfied. A feasible solution that optimizes the objective function is an optimal solution. A linear program that does not admit any feasible solution is called infeasible, and an unbounded linear program has feasible solutions but no optimal solution.

2.6.3 Fundamental Theorem of Linear Programming

Theorem 2.1 (Fundamental Theorem of Linear Programming): Every LP problem satisfies exactly one of the following three conditions:
1. It is infeasible.
2. It is unbounded.
3. It has an optimal solution.

In order to solve a linear program, it is necessary to obtain a certificate of one of these three terminal conditions. We provide a proof of the theorem in chapter 3.

2.6.4 Duality: Certificate of Optimality

Every primal LP problem admits a dual problem of the form:

$$(D)\quad \text{Minimize } b^T y \quad \text{subject to: } A^T y \ge c^T, \; y \ge 0. \quad \text{(dual LP)}$$

Equation 2: Dual linear program in standard form

Table 1 shows the different combinations possible for a primal-dual pair. The dual LP represents a linear combination of the primal LP constraints. The importance of duality, expressed by the Duality Theorem, is that every feasible solution of the dual LP yields a bound on the optimal value of the primal LP. The Duality Theorem provides us with the ability to certify whether a solution to an LP is optimal or not.

                   | Dual optimal | Dual infeasible | Dual unbounded
Primal optimal     | Yes          | No              | No
Primal infeasible  | No           | Yes             | Yes
Primal unbounded   | No           | Yes             | No

Table 1: Primal-dual possibilities

Theorem 2.2 (Weak Duality Theorem): If $\tilde{x}$ is a primal feasible solution, and $\tilde{y}$ is a dual feasible solution, then $c\tilde{x} \le b^T \tilde{y}$.

Theorem 2.3 (Strong Duality Theorem): If an LP has an optimal solution $x^*$ then the dual problem has an optimal solution $y^*$ and their optimal values are equal: $cx^* = b^T y^*$.

Theorem 2.4 (Complementary Slackness): If an LP has an optimal solution $x^*$ and the dual has an optimal solution $y^*$, then $y^{*T}(b - Ax^*) = 0$ and $(A^T y^* - c^T)^T x^* = 0$.

2.6.5 Certificate of Unboundedness

Theorem 2.5 (Unboundedness Certificate): An LP in standard form is unbounded if and only if it has a feasible solution $\tilde{x}$ and there exists (a direction) $d$ such that $d \ge 0$, $Ad \le 0$ and $c^T d > 0$.

2.6.6 Certificate of Infeasibility

Theorem 2.6 (Farkas' Lemma (variant)): The system $Ax \le b$, $x \ge 0$ of linear inequalities is infeasible if and only if the system $w \ge 0$, $wA \ge 0$, $wb < 0$ has a solution.

We prove the above theorems in chapter 6.

2.7 Dictionaries

To clarify pivot algorithms for solving linear programming problems, it is convenient to use a dictionary representation of the LP system.

2.7.1 Slack Variables

Given an LP,

$$\text{Maximize } \sum_{j=1}^{n} c_j x_j \quad \text{subject to: } \sum_{j=1}^{n} a_{ij} x_j \le b_i \;\;(i = 1, 2, \dots, m), \quad x_j \ge 0 \;\;(j = 1, 2, \dots, n), \qquad (2.7.1a)$$

we denote the objective function by $z$ and introduce the slack variables $x_{n+1}, x_{n+2}, \dots, x_{n+m}$, defined as:

$$x_{n+i} = b_i - \sum_{j=1}^{n} a_{ij} x_j \;\;(i = 1, 2, \dots, m), \qquad z = \sum_{j=1}^{n} c_j x_j. \qquad (2.7.1b)$$

Equation 3: Dictionary of a linear program

Every dictionary associated with (2.7.1a) is a system of linear equations in the decision variables $x_1, x_2, \dots, x_n$, the slack variables $x_{n+1}, x_{n+2}, \dots, x_{n+m}$ as defined in (2.7.1b), and $z$. For example, (2.7.1b) is a dictionary representation of (2.7.1a). Every solution of the set of equations comprising a dictionary is also a solution of (2.7.1a), and vice versa, if and only if the solution values of the variables (including slacks) are non-negative.

2.7.2 Basic Feasible Solutions

The equations of every dictionary express $m$ of the variables $x_1, x_2, \dots, x_{n+m}$, and the objective function $z$, in terms of the remaining $n$ variables. The $m$ expressed variables are known as basic variables. Basic variables constitute a basis. Similarly, the $n$ remaining variables are referred to as co-basic (or non-basic) and constitute a co-basis (non-basis). (In the event that no slack variables are given, an initial LP basis, or a proof that none exists, can easily be found by Gaussian elimination. For the remainder of this thesis, unless otherwise noted, we assume an initial basic solution, feasible or not, can be found.)

There are only a finite number of bases, clearly not more than $\binom{m+n}{m}$. If setting the co-basic variables of a given dictionary to zero results in the basic variables evaluating to non-negative values, then the dictionary is a primal feasible dictionary. Solutions of this type are basic feasible solutions. The fundamental theorem of linear programming implies that if an LP has a feasible solution, then it also has a basic feasible solution; similarly, if an LP has an optimal solution, then it has a basic optimal solution.

2.7.3 Matrix Notation

Given an LP in standard form (2.7.1a), we introduce the slack variables $x_{n+1}, x_{n+2}, \dots, x_{n+m}$ and record the new problem in matrix notation:

$$\text{Max } cx \quad \text{s.t.: } \bar{A}x = b, \; x \ge 0,$$

where $\bar{A}$ is a matrix with $m$ rows and $n+m$ columns. The first $n$ columns form the original coefficient matrix (the $a_{ij}$'s), and the last $m$ columns form the identity matrix. The row vector $c$ has length $n+m$, with the first $n$ entries containing the cost coefficients $c_i$ from (2.7.1a); the remaining $m$ entries of $c$ are set to zero. $x$ is a column vector reflecting the addition of $m$ slack variables, and the entries of the column vector $b$ are the $b_i$'s, for $1 \le i \le m$. That is, $x = [x_1 \cdots x_n \; x_{n+1} \cdots x_{n+m}]^T \in \mathbb{R}^{n+m}$, $c = [c_1 \cdots c_n \; 0 \cdots 0] \in \mathbb{R}^{n+m}$, $b \in \mathbb{R}^m$, and

$$\bar{A} = [A \;\; I] = \begin{pmatrix} a_{11} & \cdots & a_{1n} & & \\ \vdots & \ddots & \vdots & & I \\ a_{m1} & \cdots & a_{mn} & & \end{pmatrix}, \quad \text{with } I \in \mathbb{R}^{m \times m}.$$

Let $B$ be the set of indices of the variables in the basis, and $N$ the set of indices of the variables in the co-basis. We write $\bar{A}$ as $[A_B \;\; A_N]$, and $c$ as $[c_B \;\; c_N]$. Thus, a dictionary in matrix notation is recorded as:

$$x_B = A_B^{-1}b - A_B^{-1}A_N\, x_N, \qquad z = c_B A_B^{-1} b + (c_N - c_B A_B^{-1} A_N)\, x_N. \qquad (2.7.3a)$$

Equation 4: LP dictionary in matrix form

$A_B^{-1}b$ is the vector specifying the current values of the basic variables. Let $\bar{b} = A_B^{-1}b$ (the b-column), and let $\alpha_{ij}$ denote the coefficient of the co-basic variable $x_j$ in the dictionary row of the basic variable $x_i$, so that row $i$ reads $x_i = \bar{b}_i + \sum_{j \in N} \alpha_{ij} x_j$ with $\alpha_{ij} = -(A_B^{-1}A_N)_{ij}$ ($\alpha_{ij}$ should not be confused with row $i$, column $j$ of $A$, which is denoted $a_{ij}$). Let $\bar{z}_j$ represent the coefficient of the non-basic variable $x_j$ in the objective row (z-row) of the current dictionary. Given a (primal) basis, the dual dictionary of the primal dictionary (2.7.3a) is represented as:

$$y_N = -(c_N - c_B A_B^{-1} A_N)^T + (A_B^{-1} A_N)^T y_B, \qquad w = c_B A_B^{-1} b + (A_B^{-1} b)^T y_B. \qquad (2.7.3b)$$

Equation 5: Dual dictionary

The dual basis $N$ is the primal co-basis. Similarly, the dual co-basis $B$ is the primal basis. Note that (2.7.3a) and (2.7.3b) are mirror images: the rows of a primal dictionary correspond to the negative of the columns of its dual dictionary.

2.7.4 Basis

Definition 2.1: A basis $B$ is a maximal subset of the indices $\{1, 2, \dots, n+m\}$ such that the corresponding column vectors of the matrix $\bar{A}$ are independent.

Definition 2.2: Given a basis $B$, setting the co-basic variables to zero and evaluating the basic variables results in a basic solution. A basic solution of a basis $B$ is primal feasible if $\bar{b}_i \ge 0$ for all $i \in B$. It is dual feasible if $\bar{z}_j \le 0$ for all $j \in N$. An LP is primal inconsistent if it does not have a primal feasible basis. A dual inconsistent LP does not have a dual feasible basis.

Theorem 2.7: A linear program max $cx$, $Ax \le b$, $x \ge 0$ is infeasible (primal inconsistent) if there exists a basis such that some row element $i$ of $\bar{b} = A_B^{-1}b$ is negative while row $i$ of $A_B^{-1}A_N$ is non-negative.

Proof. The statement implies that the variable $x_i$ is expressed as a negative linear combination of the non-negative variables $x_j$, $j \in N$, offset by the negative constant $\bar{b}_i$. This linear combination makes the constraint $x_i \ge 0$ unsatisfiable.

Theorem 2.8: A linear program is unbounded (dual inconsistent) if it is feasible and there exists a basis such that column $j$ of $A_B^{-1}A_N$ is non-positive and $(c_N - c_B A_B^{-1} A_N)_j > 0$.

Proof. Starting from the basic feasible solution, we can increase the value of $x_j$ indefinitely: since column $j$ of $A_B^{-1}A_N$ is non-positive, the basic variables will remain feasible, and

( c N cbab AN) > implies the optimal value z* will increase in direct proportion to j..7.6 Dictionary Structures We represent a dictionary (.7.a), by a table structure of coefficients for a given basis: A B b A B A N cba B b ( c N cba B AN) The sign structures of optimal, primal and dual inconsistent dictionaries are illustrated in Figure. We indicate the negative, non-positive, zero, non-negative, and positive components by,,,, respectively. Figure : Terminal dictionary sign structures.8 Pivoting Given a dictionary D with basis B, a pivot operation is the process of swapping a variable i B with a variable j N and re-solving the system in terms of the new basis: (B B{i}{j}, and N N{i}{j}). Given a dictionary:

$$\begin{aligned} x_{\hat{1}} &= \bar{b}_1 + \alpha_{11} x_{j_1} + \cdots + \alpha_{1n} x_{j_n} \\ &\;\;\vdots \\ x_{\hat{m}} &= \bar{b}_m + \alpha_{m1} x_{j_1} + \cdots + \alpha_{mn} x_{j_n} \\ z &= \bar{z}_0 + \bar{z}_1 x_{j_1} + \cdots + \bar{z}_n x_{j_n}, \end{aligned}$$

where $x_{\hat{i}}$ represents the $i$th variable of the current basis ordered by indices, and $x_{j_l}$ the $l$th variable of the non-basis (refer to section 2.7 for the definitions of $\bar{b}_i$ and $\alpha_{ij}$), pivoting on $(i, j)$ solves row $i$ for the entering variable $x_j$ and substitutes the result into the remaining rows and the z-row, performing the following operation:

$$x_j = -\frac{\bar{b}_i}{\alpha_{ij}} + \frac{1}{\alpha_{ij}} x_{\hat{i}} - \sum_{l \ne j} \frac{\alpha_{il}}{\alpha_{ij}} x_{j_l},$$

and, for every other basic row $k \ne i$ and for the z-row,

$$\bar{b}'_k = \bar{b}_k - \alpha_{kj}\frac{\bar{b}_i}{\alpha_{ij}}, \quad \alpha'_{ki} = \frac{\alpha_{kj}}{\alpha_{ij}}, \quad \alpha'_{kl} = \alpha_{kl} - \frac{\alpha_{kj}\,\alpha_{il}}{\alpha_{ij}} \;(l \ne j), \qquad \bar{z}'_0 = \bar{z}_0 - \bar{z}_j\frac{\bar{b}_i}{\alpha_{ij}}, \quad \bar{z}'_i = \frac{\bar{z}_j}{\alpha_{ij}}, \quad \bar{z}'_l = \bar{z}_l - \frac{\bar{z}_j\,\alpha_{il}}{\alpha_{ij}} \;(l \ne j).$$

Many rules for selecting the entering and leaving variables have been proposed, with the goal of moving from a given basis to the optimal basis and thus solving the linear program (see [Dan5], [Bla77], [Zo69]). For a survey of recent pivot rules for linear programming, see Terlaky and Zhang [Ter9]. A pivot rule is finite if it reaches the optimal basic solution after a finite number of steps; otherwise the pivot rule cycles. A pivot method is called a simplex method if it preserves the (primal/dual) feasibility of the basic solution. Pivot rules that maintain primal feasibility and attempt to reach dual feasibility (and thus optimality) require that the initial basis be primal feasible, and are called two-phase rules because of the need to obtain an initial primal feasible basis (phase one) before proceeding to solve to optimality. Pivot methods that do not preserve feasibility, and hence require only one phase, are called criss-cross methods. Combinatorial pivot

rules are pivot rules that are concerned only with the signs of the coefficients of a dictionary. Fukuda and Terlaky [Fu97] define an admissible pivot to be a pivot on $(i, j)$ such that either $\bar{b}_i < 0$ and $\alpha_{ij} < 0$ (type 1), or $\bar{z}_j > 0$ and $\alpha_{ij} > 0$ (type 2). See Figure 2 for the sign structures of dictionaries with admissible pivots.

Figure 2: Admissible pivots
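The pivot update above is mechanical enough to express in a few lines. The following sketch is ours (hypothetical names, exact rational arithmetic); it stores the dictionary as the displayed coefficients $\bar{b}$, $\alpha$ and the z-row, and carries out the pivot on $(i, j)$ exactly as derived above:

```python
from fractions import Fraction

def pivot(bbar, alpha, z0, zbar, basis, cobasis, r, c):
    """Pivot on row r (basis[r] leaves) and column c (cobasis[c] enters).

    Convention: x_{basis[r]} = bbar[r] + sum_k alpha[r][k] * x_{cobasis[k]},
                z            = z0      + sum_k zbar[k]    * x_{cobasis[k]}.
    Row r is solved for the entering variable and substituted everywhere
    else; lists are updated in place and the new z0 is returned."""
    a = alpha[r][c]
    assert a != 0, "pivot element must be nonzero"
    bbar[r] = -bbar[r] / a
    alpha[r] = [-v / a for v in alpha[r]]
    alpha[r][c] = 1 / a                      # coefficient of the leaver
    for i in range(len(alpha)):
        if i == r:
            continue
        f = alpha[i][c]
        bbar[i] += f * bbar[r]
        alpha[i] = [alpha[i][k] + f * alpha[r][k] for k in range(len(alpha[i]))]
        alpha[i][c] = f * alpha[r][c]
    f = zbar[c]
    z0 += f * bbar[r]
    new_z = [zbar[k] + f * alpha[r][k] for k in range(len(zbar))]
    new_z[c] = f * alpha[r][c]
    zbar[:] = new_z
    basis[r], cobasis[c] = cobasis[c], basis[r]
    return z0

# max z = 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 <= 2  (slacks x3, x4)
B, N = [3, 4], [1, 2]
bbar = [Fraction(4), Fraction(2)]
alpha = [[Fraction(-1), Fraction(-1)], [Fraction(-1), Fraction(0)]]
z0, zbar = Fraction(0), [Fraction(3), Fraction(2)]
z0 = pivot(bbar, alpha, z0, zbar, B, N, 1, 0)  # x1 enters, x4 leaves
print(B, z0)  # [3, 1] 6
```

The selection-rule sketches in the following chapters reuse this dictionary convention.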

3. Dantzig's Simplex Method

3.1 Introduction

In 1947 Dantzig [Da8], [Da9] designed the largest-coefficient simplex method for solving a linear program. This method is very efficient and widely used in practice ([Bi9]), and has been shown to have expected polynomial-time behaviour in theory ([Dan8], [Bor8], [Sma8]). In this chapter we examine Dantzig's original two-phase simplex method for solving linear programs, and the lexicographic minimum ratio test for avoiding cycles.

3.2 Dantzig's Simplex Method

Dantzig's simplex method is a gradient ascent approach that iteratively improves the objective value while maintaining primal feasibility. Given a primal feasible basis, the algorithm selects the co-basic variable that has the largest positive coefficient in the z-row of the current dictionary to enter the basis. This greedy choice is a result of the desire to increase the value of $z$. The leaving variable is chosen by a ratio test: primal feasibility is maintained by choosing the leaving variable that imposes the most stringent upper bound on the increase of the entering variable.

8 Problem.: Given a mn matri A, m-dimensional vector b, n-dimensional vector c, and a primal feasible basis B (and co-basis N), solve the linear program: ma c, A b,. Method. (Largest-Coefficient Simple Method): Add slack variables and let the initial primal feasible dictionary (i.e. AB b ) can be written as: B AB b AB ANN z cbab b ( cn cba B AN) N () Step : If all the coefficients of the co-basic variables in the z-row are non-positive, then set B AB b and N. The solution is optimal. Done. Step : Otherwise let j N be the inde of variable with the largest positive coefficient in the z-row of the current dictionary (break ties by choosing minimum inde). Step : Let K be the subset of B, where for every k K, the coefficient α kj is negative. If K is empty, then stop: the linear program is unbounded (the certificate direction is given by the n-dimensional vector that represents j in terms of the n decision variables). Step : Otherwise, choose i to be the minimum inde, ( kj i K, such that b i α ij) ( b k α ) for all k K (ratio test with minimum inde for breaking ties). Step 5: Set B B {i} {j}, N N {i} {j}. Compute the new dictionary (). Go to Step.

3.3 Initialization: Phase One

In order to solve a linear program, Dantzig's simplex method requires an initial primal feasible basis. In many cases a phase one simplex method must be executed to attain such a basis. As a result, Dantzig's simplex method is a two-phase algorithm.

Problem 3.2: Given an $m \times n$ matrix $A$ and an $m$-dimensional vector $b$, find a primal feasible basis $B$ (and co-basis $N$) for the system:

$$Ax \le b, \quad x \ge 0. \qquad (2)$$

Method 3.2 (Phase One Simplex):
Step 1: Introduce an artificial variable $x_0$ and formulate the following auxiliary linear program:

$$\max\, (-x_0), \quad Ax - x_0 \mathbf{1} \le b, \quad x \ge 0, \; x_0 \ge 0. \qquad (3)$$

Step 2: Obtain a basic feasible solution by setting all of the original variables to zero and making the value of $x_0$ sufficiently large.
Step 3: Solve (3) using Dantzig's simplex method (Method 3.1).
Step 4: If the optimal value of (3) is zero, then the optimal basis of (3) is a feasible basis of (2). Otherwise the system is infeasible.

Example 3.1 (Dantzig's two-phase simplex method): Consider a small LP whose constraints are converted to a system of equations by introducing and solving for the slack variables. (Gaussian elimination can be used to solve for $m$ of the variables, or to prove that no such system exists.)

The resulting dictionary is primal infeasible, so the artificial variable $x_0$ is introduced and the auxiliary linear program is solved to complete phase one: a first pivot brings $x_0$ into the basis and yields a primal feasible dictionary for the auxiliary problem, and two further largest-coefficient pivots reach an optimal phase one solution with value zero. Proceeding to phase two, the original objective row is restored, substituting the basic variables by their defining equations so that the objective row is expressed in terms of the co-basic variables; a single further pivot achieves the optimal phase two solution.
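The construction of the auxiliary problem is purely mechanical; a small sketch of ours (hypothetical names) builds the phase one data from $A$ and $b$:

```python
from fractions import Fraction

def phase_one_data(A, b):
    """Build the auxiliary LP of Method 3.2:  max -x0  subject to
    sum_j a_ij * x_j - x0 <= b_i,  x >= 0,  x0 >= 0  (x0 = last column).
    The original system is feasible iff the auxiliary optimum is zero."""
    A_aux = [list(row) + [Fraction(-1)] for row in A]
    c_aux = [Fraction(0)] * len(A[0]) + [Fraction(-1)]
    return A_aux, list(b), c_aux

# Setting the original variables to zero and x0 >= max(0, -min(b)) gives a
# feasible point; pivoting x0 in against the most negative b_i realizes it.
A = [[Fraction(1), Fraction(1)], [Fraction(-1), Fraction(2)]]
b = [Fraction(-2), Fraction(4)]
A_aux, b_aux, c_aux = phase_one_data(A, b)
print(A_aux[0], c_aux)
```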

3.4 Degeneracy and Cycling

If a linear program has a basic feasible solution where one or more basic variables have a value of zero, then this solution is a degenerate basic feasible solution. The presence of degeneracy can have the following consequences:
1. A simplex pivot from one feasible basis to another may not improve the value of the objective function $z$. This phenomenon is known as stalling.
2. The simplex method might cycle and never reach the optimal solution.

Theorem 3.1: The simplex method is guaranteed to stop in a finite number of iterations if there is no degeneracy.

Proof. There are only a finite number of bases, clearly not more than $\binom{m+n}{m}$, and every non-degenerate simplex pivot strictly increases the value of the objective function. This implies that a basis cannot be encountered twice.

Hoffman [Hof5] constructed the first example of a linear program that cycles. Lee [Lee97] describes the geometry behind Hoffman's example. Wolfe [Wol6], and Kotiah and Steinberg [Kot78], reported examples of practical linear problems that cycled. We present the example found in [Chv8], a modification of the example constructed by Marshall and Suurballe [Ma69].

Eample. (Cycle): Maimize 9 57 z, subject to: (/) (/) ) (/ 9 (5/) ) (/ ) (/ Introduce slack variables 5, 6, and 7 to obtain the starting dictionary: 7 6 5 9 57.5.5.5 9.5 5.5.5 z {5,6,7} Pivot: 5 5 5 7 5 6 5 5 8 5 8 8 5 z {,6,7} Pivot: 6 6 5 6 5 7 6 5 6 5 98.5.5 6.75.5.5.75.5.75.75.5.5.5 z {,,7} Pivot: 6 5 7 6 5 6 5 9 5 8 9.5.5 5.5.5 8 z {,,7} Pivot: 6 5 7 6 5 6 5 7.5.5 9.5.5.5.5.5.5 z {,,7}

5 8 9 6 Pivot: 5 7.5.5.5 6 {,5,7} z 9 6 Pivot: 6 returns to basis {5,6,7}..5 Leicographic Minimum Ratio Test Dantzig, Orden and Wolfe [Dan55] developed the leicographic minimum ratio test for avoiding cycling. The algorithm is equivalent to method. ecept that the algorithm breaks ties for the leaving variable by vector leicography instead of variable indices..5. Definitions A vector v is leicographically positive if v, and the first nonzero element of v is positive. Given two vectors v and v, v is leicographically greater than v if ( v v) is leicographically positive. Given a set of vectors, v i is the leicographically minimum vector if the other vectors of the set are all leicographically greater than v i..5. Breaking Ties Theorem.: The simple method is finite if ties for the leaving variables are broken using Dantzig s leicographic minimum ratio test. Proof.
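The three definitions translate directly into code; this small helper sketch of ours can serve as the comparator needed by the lexicographic ratio test:

```python
def lex_positive(v):
    """True iff v is nonzero and its first nonzero entry is positive."""
    for x in v:
        if x != 0:
            return x > 0
    return False

def lex_greater(u, v):
    """True iff u is lexicographically greater than v, i.e. u - v is
    lexicographically positive."""
    return lex_positive([a - b for a, b in zip(u, v)])

def lex_min(rows):
    """Lexicographically minimum vector of a set (used to break ratio-test
    ties among the candidate leaving rows)."""
    best = rows[0]
    for r in rows[1:]:
        if lex_greater(best, r):
            best = r
    return best

print(lex_positive([0, 0, 2, -5]))      # True
print(lex_greater([1, 0], [0, 9]))      # True
print(lex_min([[0, 1, 3], [0, 1, 2]]))  # [0, 1, 2]
```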

3.5.2 Breaking Ties

Theorem 3.2: The simplex method is finite if ties for the leaving variable are broken using Dantzig's lexicographic minimum ratio test.

Proof. The simplex method starts with a feasible basis, in other words with $A_B = I$ and $A_B^{-1}b \ge 0$. The rows of the initial matrix defined by $[A_B^{-1}b \;\; A_B^{-1}] = [b \;\; I]$ are lexicographically positive. The lexicographic ratio test of Dantzig et al. maintains the rows of the matrix $[A_B^{-1}b \;\; A_B^{-1}]$ lexicographically positive during every iteration of the simplex method: let $x_j$ be the variable chosen to enter the basis. If the ratio test (Step 4 of Method 3.1) results in a tie, choose the basic variable whose row is the lexicographically minimum row of all the rows of $[A_B^{-1}b \;\; A_B^{-1}]/\alpha_{ij}$ (the matrix with each row's elements divided by the coefficient $\alpha_{ij}$ of $x_j$ in that row) which are involved in the tie. Since the rows of this matrix are linearly independent, this always yields a unique selection. Because we choose to pivot on the lexicographically minimum row, the resulting pivot operation maintains all the rows of $[A_B^{-1}b \;\; A_B^{-1}]$ lexicographically positive. A positive multiple of this row is added to the objective row; hence the objective vector $[c_B A_B^{-1}b \;\; c_B A_B^{-1}]$ increases lexicographically at every iteration, even in the presence of degeneracy, and cycling is avoided. Dantzig's lexicographic two-phase simplex method is therefore a finite algorithm.

3.6 Fundamental Theorem of Linear Programming

Recall Theorem 2.1 (Fundamental Theorem of Linear Programming): Every LP problem satisfies exactly one of the following three conditions:
1. It is infeasible (primal inconsistent).
2. It is unbounded (dual inconsistent).
3. It has an optimal solution.

Proof. Phase one of Dantzig's two-phase simplex algorithm determines that either the problem is infeasible or it returns a basic feasible solution. Phase two determines that either the problem is unbounded or delivers a basic optimal solution.

3.7 Dual Simplex Method

The Duality Theorem for linear programming states that a primal linear program shares the same optimal value as its dual. Therefore, we can apply the simplex method to the dual linear program to attain optimality. In section 2.7.3 we noted that every primal dictionary is the mirror image of the corresponding dual dictionary:

$$\begin{array}{ll} x_i = \bar{b}_i + \sum_{j \in N} \alpha_{ij} x_j \;\;(i \in B), & \qquad y_j = -\bar{z}_j - \sum_{i \in B} \alpha_{ij} y_i \;\;(j \in N), \\[2pt] z = d + \sum_{j \in N} \bar{z}_j x_j, & \qquad w = d + \sum_{i \in B} \bar{b}_i y_i \end{array}$$

(primal dictionary) (dual dictionary)

Equation 6: Primal and dual dictionary relationship

The coefficients appearing in a row of a primal dictionary are found, with opposite signs, in the corresponding column of the dual dictionary. Lemke [Lem5] designed the dual simplex method, which performs the simplex method on the dual problem using only the primal formulation/dictionary structure:

6 Problem.: Given a mn matri A, an m-dimensional vector b, a n-dimensional vector c, and a dual feasible basis B (and non-basis N), solve the linear program: ma c, A b,. Method. (Dual Simple): Add slack variables and let the current dual feasible dictionary, i.e. ( cn cbab AN), can be written as: B AB b AB ANN z cbab b ( cn cba B AN) N () Step : If AB b, then set B AB b and N. The solution is optimal. Done. Step : Otherwise let i B be the inde of basic variable with the most negative coefficient in the b-column of the current dictionary (break ties by choosing minimum inde). Step : Let K be the subset of N, where for every k K, the coefficient α ik is positive. If K is empty, then stop: the linear program is infeasible (the entering row containing only non-positive coefficients provides the certificate linear combination that causes infeasibility). Step : Otherwise, choose j to be the minimum inde, j K, such that ( z j α ij) ( z k α ik) for all k K (ratio test with minimum inde for breaking ties). Step 5: Set B B {i} {j}, N N {i} {j}. Compute the new dictionary () and go to Step.

Clearly the dual simplex method is useful when the initial basis is dual feasible and primal infeasible. The dual simplex method is often used for sensitivity analysis; see Chvátal [Chv8] and Schrijver [Sch86] for further details.

3.8 Complexity Results

The worst-case complexity of the simplex algorithm is not known. Klee and Minty [Kle7] showed that it is at least an exponential-time algorithm: the following linear program requires $2^n - 1$ iterations (or pivots):

$$\text{Maximize } \sum_{j=1}^{n} 10^{n-j} x_j \quad \text{subject to: } \; 2\sum_{j=1}^{i-1} 10^{i-j} x_j + x_i \le 100^{i-1} \;\;(i = 1, 2, \dots, n), \quad x_j \ge 0 \;\;(j = 1, 2, \dots, n).$$

Equation 7: Klee-Minty example

However, Dantzig [Dan6] argued that for practical linear programming problems with $m < 50$ and $m + n < 200$, the number of iterations is usually less than $3m/2$ and rarely up to $3m$. The number of iterations usually increases proportionally to $m$ and very slowly with $n$. Dantzig [Dan8], Borgwardt [Bor8], [Bor87] and Smale [Sma8] provide theoretical explanations of this result.
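A generator for this family, assuming the form displayed above (our sketch), is handy for experiments; feeding its output to an implementation of Method 3.1 exhibits the exponential pivot count:

```python
from fractions import Fraction

def klee_minty(n):
    """Return (c, A, b) for: maximize sum_j 10^(n-j) x_j subject to
    2*sum_{j<i} 10^(i-j) x_j + x_i <= 100^(i-1), x >= 0 (1-based i, j)."""
    c = [Fraction(10) ** (n - j) for j in range(1, n + 1)]
    A, b = [], []
    for i in range(1, n + 1):
        row = [Fraction(2) * Fraction(10) ** (i - j) for j in range(1, i)]
        row += [Fraction(1)] + [Fraction(0)] * (n - i)
        A.append(row)
        b.append(Fraction(100) ** (i - 1))
    return c, A, b

c, A, b = klee_minty(3)
print(b)  # [Fraction(1, 1), Fraction(100, 1), Fraction(10000, 1)]
```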

3.9 Comments

The first algorithm designed to solve linear programs is still the preferred method of choice in practice. On average, the two-phase simplex method is a linear-time algorithm with respect to the number of constraints in the input. However, there are a few downsides to Dantzig's method:
1. Depending on the implementation, finding an entering variable might require up to $n$ comparisons to find the largest coefficient in the objective row.
2. The lexicographic ratio test is an expensive operation that must be performed in order to avoid cycling in the presence of degeneracy. Up to $mn$ comparisons are required for the ratio test. It is often left out, since it can be complex to implement.
3. If an initial primal feasible basis is not available, then an auxiliary linear program must be solved to attain one.

In the next chapter we present Bland's pivot rule for the simplex algorithm. It is a simple rule that is finite and does not require lexicographic ratio tests.

4. Bland's Pivot Rule

4.1 Introduction

In 1977, Bland [Bla77] presented a simple finite pivot selection rule (known as Bland's rule) for the simplex method. The resulting algorithm is similar to Dantzig's simplex method: it attempts to increase the objective value while preserving primal feasibility. It is a two-phase algorithm, but it avoids computations relating to the lexicography of the leaving row, and also coefficient comparisons for choosing the entering variable. In this chapter we look at Bland's smallest-index rule and provide a proof of finiteness. Theoretical complexity results are discussed.

4.2 Bland's Rule

Bland's rule chooses the entering variable by its index: the entering variable is chosen to be the variable with a positive coefficient in the objective row with the smallest index. Dantzig's ratio test, with smallest index for breaking ties, is used to choose the leaving variable in order to preserve primal feasibility.

Problem 4.1: Given an $m \times n$ matrix $A$, an $m$-dimensional vector $b$, an $n$-dimensional vector $c$, and a primal feasible basis $B$ (and co-basis $N$), solve the linear program: max $cx$, $Ax \le b$, $x \ge 0$.

Algorithm. (Simple with Bland s rule): Add slack variables and let the current primal feasible dictionary (i.e. AB b ) can be written as: B AB b AB ANN z cbab b ( cn cba B AN) N () Step : If all the coefficients of the non-basic variables in the z-row are non-positive, then set B AB b and N. The solution is optimal. Done. Step : Otherwise let j N be the smallest inde of a variable with a positive coefficient in the z-row of the current dictionary (break ties by choosing minimum inde). Step : Let K be the subset of B, where for every k K, the coefficient α kj is negative. If K is empty, then stop: the linear program is unbounded (the certificate direction is given by the n-dimensional vector that represents j in terms of the n decision variables). Step : Otherwise, choose i to be the minimum inde, ( kj i K, such that b i α ij) ( b k α ) for all k K (ratio test with minimum inde for breaking ties). Step 5: Set B B {i} {j}, N N {i} {j}. Compute the new dictionary (). Go to Step.. Initialization In order to solve a linear program, Bland s rule, like every simple method, requires an initial primal feasible basis. A phase one may have to be eecuted to attain

4.3 Initialization

In order to solve a linear program, Bland's rule, like every simplex method, requires an initial primal feasible basis. A phase one may have to be executed to attain such a basis. The simplex method with Bland's rule is a two-phase algorithm: an auxiliary problem must be set up and solved using Bland's pivoting rule to attain an initial primal feasible basis.

4.4 Proof of Finiteness

Theorem 4.1 (Algorithm 4.1 is finite): The simplex method is finite if the entering and leaving variables are selected by Bland's smallest-index rule in every iteration.

The proof we present uses ideas from the proof of finiteness of the least-index criss-cross algorithm by Fukuda and Terlaky [Fuk97].

Proof. Let us assume there exists a system on which the simplex method using Bland's rule does not terminate. There are only a finite number of bases, clearly not more than $\binom{m+n}{m}$; hence we must assume the simplex method using Bland's rule cycles. Let max $cx$ s.t. $Ax \le b$, $x \ge 0$ be a system that causes Algorithm 4.1 to cycle. We will assume that every variable in the system enters and leaves the basis during the cycle; otherwise we can use a smaller example of this system that cycles, obtained by removing the variables that are not involved. Let $x_k$ be the variable with the largest index among the system's variables. In order for $x_k$ to enter and leave the basis, the following two situations must occur:

Situation LEAVE ($x_k$ leaves the basis):

Figure 3: Largest-index variable leaves basis in Bland's rule

Situation ENTER ($x_k$ enters the basis):

Figure 4: Largest-index variable enters basis in Bland's rule

Now consider the following two situations for any system max $cx$, $Ax \le b$, $x \ge 0$:

Situation OPTIMAL (optimal solution):

Figure 5: Optimal dictionary sign structure

Situation UNBOUNDED (unbounded system):

Figure 6: Primal unbounded dictionary sign structure

Both situations are terminal. By the Fundamental Theorem of Linear Programming, the system cannot have an optimal solution and also be dual inconsistent (unbounded and/or have an unbounded direction). Only one of situation OPTIMAL and situation UNBOUNDED can occur for a given system max $cx$, $Ax \le b$, $x \ge 0$. Let us apply this fact to our cycling system with the following substitution: replace $x_k$ by $(-1)x_k$. Examine the original situations:

Situation LEAVE ($x_k$ leaves the basis):

Figure 7: Situation LEAVE after substitution

Situation ENTER ($x_k$ enters the basis):

Figure 8: Situation ENTER after substitution

Note that situation ENTER now corresponds to situation OPTIMAL, and situation LEAVE corresponds to situation UNBOUNDED (unbounded direction). In a cycle both situations must be encountered, but this contradicts the fact that a system cannot be both

inconsistent and have a feasible solution. Algorithm 4.1 is therefore finite, and either ends with an optimal solution or with a proof of inconsistency.

4.5 Lexicographic Increase

A corollary of Theorem 4.1 is that during a pivot sequence of Bland's smallest-index simplex method the two almost-terminal dictionaries, situation ENTER and situation LEAVE, cannot both be encountered. A nice application of this corollary is the construction of a vector $L$ that increases lexicographically after each iteration of Bland's simplex method. The construction of the vector $L$ is defined by Fukuda and Matsui [Fuk89], but in the context of another finite pivot rule (the least-index criss-cross method; see chapter 5). We extend their results to Bland's smallest-index rule. Let $L$ be a 0-1 vector indexed by the $m+n$ variable indices in decreasing order: $L = (L_{m+n}, L_{m+n-1}, \dots, L_2, L_1)$. Initially, set $L = (0, 0, \dots, 0)$. After every Bland-rule pivot on the indices $i$ and $j$, update $L$ as follows: let $q = \max\{i, j\}$ and

$$L_k = \begin{cases} 0 & \text{if } k < q \\ 1 - L_q & \text{if } k = q \\ L_k & \text{if } k > q \end{cases} \qquad k \in \{1, 2, \dots, m+n\}.$$
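The update is easy to implement; a sketch of ours, storing $L$ with `L[k]` holding the bit of variable index $k+1$ and reading lexicographic order from the highest index down:

```python
def update_L(L, i, j):
    """Bland-rule bookkeeping: after a pivot on indices i and j, zero every
    bit below q = max(i, j), flip bit q, and keep the rest unchanged."""
    q = max(i, j)
    for k in range(q - 1):
        L[k] = 0
    L[q - 1] = 1 - L[q - 1]

L = [0, 0, 0, 0, 0]
update_L(L, 2, 4); print(L)  # [0, 0, 0, 1, 0]
update_L(L, 1, 3); print(L)  # [0, 0, 1, 1, 0]  (lexicographically larger)
```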

Theorem 4.2: In Bland's smallest-index rule, the vector $L$ increases monotonically in the sense of lexicographic ordering, and hence the method terminates in a finite number of steps.

Proof. Let $(i, j)$ be the next Bland-rule pivot and let $q = \max\{i, j\}$. We show that at every iteration the vector $L$ satisfies $L_q = 0$ before the update, which implies that the vector $L$ strictly increases. Assume that $L$ does not strictly increase after some iteration $k$, i.e. $L_t = 1$ for the maximum pivot index $t$ of that iteration, before we update $L$. In order for this to occur, there must have been a previous iteration where $t$ was also chosen as the maximum pivot index. Let $k'$ be the most recent such iteration before $k$. Then $L_t = 1$ at all iterations from $k'$ to $k$, and the maximum pivot index at each iteration strictly between $k'$ and $k$ is lower than $t$. Thus $x_t$ is the variable with the maximum index that enters and leaves the basis during this pivot sequence in which $L$ does not increase. This corresponds to situation ENTER and situation LEAVE respectively, and the two situations cannot both be encountered during Bland's smallest-index rule. Hence the vector $L$ strictly increases lexicographically after every iteration.

4.6 Complexity Results

Avis and Chvátal [Av78] showed that the worst-case number of iterations required by Bland's pivoting rule is bounded from below by the $n$-th Fibonacci number; this lower bound was later improved (see [Sch86]). The example Avis and Chvátal provide is a disguised form of the Klee-Minty cube:

$$\text{Maximize } \sum_{j=1}^{n} \varepsilon^{n-j} x_j \quad \text{subject to: } \; 2\sum_{j=1}^{i-1} \varepsilon^{i-j} x_j + x_i \le 1 \;\;(i = 1, 2, \dots, n), \quad x_j \ge 0 \;\;(j = 1, 2, \dots, n), \qquad (4.1)$$

where $0 < \varepsilon < 1/2$,

for which they show that the number of iterations required, $\mathrm{bland}(n)$, is bounded from below by

$$\mathrm{bland}(n) \;\ge\; \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^{\!n} - \left(\frac{1-\sqrt{5}}{2}\right)^{\!n}\right].$$

Replacing the right-hand sides of (4.1) by zeroes forms a new LP that is completely degenerate:

$$\text{Maximize } \sum_{j=1}^{n} \varepsilon^{n-j} x_j \quad \text{subject to: } \; 2\sum_{j=1}^{i-1} \varepsilon^{i-j} x_j + x_i \le 0 \;\;(i = 1, 2, \dots, n), \quad x_j \ge 0 \;\;(j = 1, 2, \dots, n), \qquad (4.2)$$

with $0 < \varepsilon < 1/2$. Bland's smallest-index rule generates the same sequence of pivots on (4.2) as on (4.1). This result of Avis and Chvátal provides a lower bound on the worst-case number of stalling iterations that the simplex method with Bland's rule might require. Avis and Chvátal also provided results from Monte Carlo experiments on a number of pivot rules run on various types of linear programs. These results will be discussed when we present our experimental results in chapter 7.

4.7 Comments

Bland published the first finite pivot rules for the simplex method. Although an important and exciting discovery for theoretical purposes, his rule shares many of the costly attributes of Dantzig's largest-coefficient simplex method, and the ratio test used to preserve feasibility restricts the potential pivot paths to optimality. However, Bland's findings led to the discovery of another pivot method that makes use of non-simplex paths: the least-index criss-cross method.
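For a sense of how fast the lower bound above grows, a tiny sketch of ours computes the Fibonacci bound exactly and via the closed form:

```python
from math import sqrt

def fib_lower_bound(n):
    """n-th Fibonacci number, the Avis-Chvátal lower bound on bland(n)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def binet(n):
    """Closed form of the same bound (cf. the displayed formula)."""
    phi = (1 + sqrt(5)) / 2
    return (phi ** n - (1 - phi) ** n) / sqrt(5)

for n in (10, 20, 30):
    print(n, fib_lower_bound(n), round(binet(n)))
# 10 55 55 / 20 6765 6765 / 30 832040 832040
```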

5. Criss-Cross Methods

5.1 Introduction

In 1969, Zionts [Zo69] published a new method for solving linear programs that requires no initialization (phase one/auxiliary problem). The method, named the criss-cross method, alternates between the primal simplex method and the dual simplex method while the basis remains neither primal feasible nor dual feasible. Once primal or dual feasibility is attained, Zionts' criss-cross method reduces to the primal simplex method or the dual simplex method respectively. Thus either lexicographic ratio tests with Dantzig's rule, or Bland's rule, are required in order to avoid cycles in the simplex method caused by degeneracy. However, even though Zionts' criss-cross method tends to reach optimality, this is not sufficient to prove finiteness of the method when dealing with bases that are both primal and dual infeasible, as we will show in section 5.7. The first finite criss-cross algorithm, the least-index criss-cross method, was discovered independently by Chang [Chn79], Terlaky [Ter85] and Wang [Wan87]. Jensen's general recursion [Jen85] implicitly includes the criss-cross method. The least-index criss-cross method is a descendant of Zionts' criss-cross, built on the observation that it is not necessary to maintain primal or dual feasibility in order to get a sequence of pivots that leads to the optimal solution. In this chapter we present the least-index criss-cross method for solving linear programs.

5.2 Least-Index Criss-Cross

The least-index criss-cross method is a finite algorithm for computing the optimality of a linear program. The method does not require any type of initialization (auxiliary phase one problems), nor does it maintain primal/dual feasibility using ratio tests. It is a concise algorithm whose simplicity is due to its primal-dual symmetry.

Problem 5.1: Given an $m \times n$ matrix $A$, an $m$-dimensional vector $b$, and an $n$-dimensional vector $c$, find the optimal solution of the linear program: max $cx$, $Ax \le b$, $x \ge 0$.

Algorithm 5.1 (Least-Index Criss-Cross): Add slack variables and let the initial dictionary be written as:

$$x_B = A_B^{-1}b - A_B^{-1}A_N\, x_N, \qquad z = c_B A_B^{-1} b + (c_N - c_B A_B^{-1} A_N)\, x_N. \qquad (1)$$

Step 1: If the current basic solution is both primal feasible ($A_B^{-1}b \ge 0$) and dual feasible ($c_N - c_B A_B^{-1}A_N \le 0$), then set $x_B = A_B^{-1}b$ and $x_N = 0$. The solution is optimal. Done.
Step 2p: Let $i \in B$ be the smallest index of a variable with a negative entry in the b-column.
Step 2d: Let $j \in N$ be the smallest index of a variable in the z-row with a positive coefficient.
Step 3: If $i < j$ (or no such $j$ exists), go to Step 4p. Otherwise $j < i$ (or no such $i$ exists); go to Step 4d.
Step 4p: Let $K$ be the subset of $N$ where, for every $k \in K$, the coefficient $\alpha_{ik}$ is positive. If $K$ is empty, then stop: the linear program is primal inconsistent (the certificate of infeasibility is given by the non-positive linear combination of the entering row). Otherwise, let $j$ be the smallest index in $K$. Go to Step 5.
Step 4d: Let $K$ be the subset of $B$ where, for every $k \in K$, the coefficient $\alpha_{kj}$ is negative. If $K$ is empty, then stop: the linear program is dual inconsistent. Otherwise let $i$ be the smallest index in $K$. Go to Step 5.
Step 5: Set $B = B - \{i\} + \{j\}$, $N = N - \{j\} + \{i\}$. Compute the new dictionary (1). Go to Step 1.
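The least-index selection of Steps 2-4 condenses to a few comparisons; a sketch of ours, in the same dictionary conventions as the code in chapter 2:

```python
def crisscross_select(bbar, alpha, zbar, basis, cobasis):
    """Least-index criss-cross selection.  Returns ('optimal', None),
    ('primal_inconsistent', r), ('dual_inconsistent', c), or
    ('pivot', (r, c)) for the row/column positions of the pivot."""
    # Smallest index over all primal- and dual-infeasible variables.
    prim = [(basis[r], r) for r in range(len(bbar)) if bbar[r] < 0]
    dual = [(cobasis[c], c) for c in range(len(zbar)) if zbar[c] > 0]
    if not prim and not dual:
        return ('optimal', None)
    i = min(prim) if prim else None
    j = min(dual) if dual else None
    if j is None or (i is not None and i[0] < j[0]):
        r = i[1]                  # Step 4p: pivot in row r
        K = [(cobasis[c], c) for c in range(len(zbar)) if alpha[r][c] > 0]
        if not K:
            return ('primal_inconsistent', r)
        return ('pivot', (r, min(K)[1]))
    c = j[1]                      # Step 4d: pivot in column c
    K = [(basis[r], r) for r in range(len(bbar)) if alpha[r][c] < 0]
    if not K:
        return ('dual_inconsistent', c)
    return ('pivot', (min(K)[1], c))
```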

5.3 Proof of Finiteness

We present a simplified version of the proof given by Fukuda and Terlaky [Fuk97].

Theorem 5.1 (Algorithm 5.1 is finite): The least-index criss-cross method is finite.

Proof. If the least-index criss-cross method does not terminate then it must cycle, since there are only a finite number of possible bases. Let us assume that the least-index criss-cross method cycles. Let max $cx$ s.t. $Ax \le b$, $x \ge 0$ be a system that causes Algorithm 5.1 to cycle. We will assume that every variable in the system enters and leaves the basis during the cycle; otherwise we can use a smaller example of this system that cycles, obtained by removing the variables that are not involved. Let $x_k$ be the variable with the largest index among the system's variables. The contradiction lies in the proof that $x_k$ cannot enter and leave the basis by the criss-cross method: