Finite Pivot Algorithms and Feasibility. Bohdan Lubomyr Kaluzny School of Computer Science, McGill University Montreal, Quebec, Canada May 2001


Finite Pivot Algorithms and Feasibility

Bohdan Lubomyr Kaluzny
School of Computer Science, McGill University
Montreal, Quebec, Canada
May 2001

A thesis submitted to the Faculty of Graduate Studies and Research in partial fulfillment of the requirements of the degree of Master of Science.

Supervised by Professor David Avis, School of Computer Science, McGill University.

Copyright Bohdan Lubomyr Kaluzny, 2001

Abstract

This thesis studies the classical finite pivot methods for solving linear programs and their efficiency in attaining primal feasibility. We review Dantzig's largest-coefficient simplex method, Bland's smallest-index rule, and the least-index criss-cross method. We present the b-rule: a simple algorithm, based on Bland's smallest-index rule, for solving systems of linear inequalities (feasibility of linear programs). We prove that the b-rule is finite, from which we then prove Farkas' Lemma, the Duality Theorem for Linear Programming, and the Fundamental Theorem of Linear Inequalities. We present experimental results that compare the speed of the b-rule to the classical methods.

Resumé

This thesis studies the efficiency of the classical finite pivot methods, which solve linear programming problems, in attaining a feasible solution. We review Dantzig's largest-coefficient simplex method, Bland's smallest-index rule, and the least-index criss-cross method. We present the b-rule: a simple algorithm, based on Bland's simplex rule, that solves a system of linear constraints (feasibility of a linear program). We prove that the b-rule is finite. This leads to a proof of Farkas' Lemma, the Duality Theorem of Linear Programming, and the Fundamental Theorem of Linear Inequalities. Finally, we compare results of empirical experiments that demonstrate the speed of the b-rule against the classical finite methods.

Statement of Originality

Assistance for this thesis, research and writing, has been received only where mentioned in the acknowledgements. Chapters 3, 4, and 5 present a review of literature. In addition to this survey, the observations in section 4.5 (lexicographically increasing vector for Bland's rule), section 5.5 (primal feasibility using criss-cross methods), and section 5.8 (practical criss-cross methods) represent an original contribution to knowledge. The main contributions of the thesis, chapters 6 and 7, except where noted otherwise, are also original contributions to knowledge.

Acknowledgements

I would first like to thank David Avis for being a great supervisor and mentor. My master's degree and this thesis could not have been completed without his helpful discussions, constant encouragement, patience, and wise direction. I thank him for introducing me to the field of operations research and for keeping me interested! I would like to thank NSERC, FCAR, the School of Computer Science, and McGill University for providing me with the financial and educational resources I needed to carry out my research. I thank my family and friends for their support and for giving me the opportunity to escape the abstract world in my mind once in a while. I dedicate my thesis to my sisters Darianna and Zoriana, my brother Oleh, and to my parents for providing me with love, support, and everything else I needed so that I could focus on advancing my education. Thank you!

Table of Contents

ABSTRACT
RESUMÉ
ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF EQUATIONS
LIST OF FIGURES

1. INTRODUCTION

2. FUNDAMENTAL CONCEPTS AND NOTATION
   2.1 INTRODUCTION
   2.2 VECTORS AND MATRICES
   2.3 LINEAR SYSTEMS
   2.4 ELIMINATION METHODS
   2.5 LINEAR COMBINATIONS
   2.6 LINEAR PROGRAMS
   2.7 DICTIONARIES
   2.8 PIVOTING

3. DANTZIG'S SIMPLEX METHOD
   3.1 INTRODUCTION
   3.2 DANTZIG'S SIMPLEX METHOD
   3.3 INITIALIZATION: PHASE ONE
   3.4 DEGENERACY AND CYCLING
   3.5 LEXICOGRAPHIC MINIMUM RATIO TEST
   3.6 FUNDAMENTAL THEOREM OF LINEAR PROGRAMMING
   3.7 DUAL SIMPLEX METHOD
   3.8 COMPLEXITY RESULTS
   3.9 COMMENTS

4. BLAND'S PIVOT RULE
   4.1 INTRODUCTION
   4.2 BLAND'S RULE
   4.3 INITIALIZATION
   4.4 PROOF OF FINITENESS
   4.5 LEXICOGRAPHIC INCREASE
   4.6 COMPLEXITY RESULTS
   4.7 COMMENTS

5. CRISS-CROSS METHODS
   5.1 INTRODUCTION
   5.2 LEAST-INDEX CRISS-CROSS
   5.3 PROOF OF FINITENESS
   5.4 LEXICOGRAPHIC INCREASE
   5.5 CRISS-CROSS: PRIMAL FEASIBILITY
   5.6 COMPLEXITY
   5.7 PRACTICAL CRISS-CROSS VARIANTS
   5.8 COMMENTS

6. THE b-RULE
   6.1 INTRODUCTION
   6.2 NOTATION
   6.3 NON-NEGATIVE SOLUTION TO A SYSTEM OF LINEAR EQUATIONS
   6.4 PROOF OF FINITENESS
   6.5 NON-NEGATIVE SOLUTION TO A SYSTEM OF LINEAR INEQUALITIES
   6.6 FEASIBILITY OF A LINEAR PROGRAM
   6.7 SOLUTION TO GENERAL LINEAR SYSTEMS
   6.8 FUNDAMENTAL THEOREM OF LINEAR INEQUALITIES
   6.9 FARKAS' LEMMA
   6.10 DUALITY THEOREM FOR LINEAR PROGRAMMING
   6.11 SOLVING A LINEAR PROGRAM
   6.12 CONCLUSIONS

7. EXPERIMENTAL RESULTS
   7.1 INTRODUCTION
   7.2 PREVIOUS WORK
   7.3 RANDOM LPS AND FEASIBILITY
   7.4 LOW DIMENSIONAL TESTS
   7.5 HIGH DIMENSIONAL TESTS
   7.6 SPARSE LPS
   7.7 CONCLUSIONS

8. CONCLUSION

APPENDIX A
APPENDIX B
BIBLIOGRAPHY

List of Tables

Table 1: Primal-Dual possibilities
Table 2: (Avis and Chvátal) Dantzig's largest-coefficient simplex
Table 3: (Avis and Chvátal) Bland's smallest-index rule
Table 4: (Namiki) Simplex vs. Criss-Cross
Table 5: Comparison of finite methods on low dimensional feasible/infeasible LPs
Table 6: Comparison of finite methods on high dimensional feasible/infeasible LPs
Table 7: Comparison of random feasible linear programs
Table 8: Comparison of random infeasible linear programs
Table 9: Tests on random sparse linear programs

List of Equations

Equation 1: Primal linear program in standard form
Equation 2: Dual linear program in standard form
Equation 3: Dictionary of a linear program
Equation 4: LP dictionary in matrix form
Equation 5: Dual dictionary
Equation 6: Primal and dual dictionary relationship
Equation 7: Klee-Minty example
Equation 8: b-rule LP formulation
Equation 9: b-rule dictionary for solving LPs
Equation 10: Kuhn and Quandt random linear program model
Equation 11: Namiki's random LP model for testing criss-cross
Equation 12: Model for feasible and infeasible random LPs

List of Figures

Figure 1: Terminal dictionary sign structures
Figure 2: Admissible pivots
Figure 3: Largest-index variable leaves basis in Bland's rule
Figure 4: Largest-index variable enters basis in Bland's rule
Figure 5: Optimal dictionary sign structure
Figure 6: Primal unbounded dictionary sign structure
Figure 7: Situation L after substitution
Figure 8: Situation E after substitution
Figure 9: Dictionary structure when largest-index enters basis in criss-cross
Figure 10: Dictionary structure when largest-index variable enters basis in criss-cross
Figure 11: Terminal dictionary sign structures
Figure 12: Entering situations after substitution
Figure 13: Leaving situations after substitution
Figure 14: Terminal dictionary sign structures for systems requiring non-negativity
Figure 15: Admissible pivot for the b-rule
Figure 16: Largest-index variable leaves basis in b-rule
Figure 17: Largest-index variable enters basis in b-rule
Figure 18: Optimal dictionary sign structure for the b-rule
Figure 19: Infeasible dictionary sign structure for the b-rule
Figure 20: Leaving dictionary after substitution
Figure 21: Entering dictionary after substitution

1. Introduction

Solving a system of linear equations has been of interest to humans since the second millennium B.C. Today, the Gaussian elimination method [Gauss] is taught to students as part of their basic high school math curriculum. On the other hand, most university students would not know how to solve a system of linear equations with non-negativity constraints on the variables, let alone a system of linear inequalities. Algorithms for finding a solution to a system of linear inequalities are relatively new: first studied by Fourier in the 19th century [Fou9] and later re-discovered by several mathematicians ([Mot6], [Din8]). Since the discovery of the simplex method for linear programming by Dantzig in 1947 [Dan8], more attention has been given to the problem of solving linear systems. The Gaussian elimination method for solving a system of linear equations is a polynomial-time algorithm. Until the recent discoveries of polynomial-time methods by Khachian [Kha8] (ellipsoid method) and Karmarkar [Kar8] (interior point method), the complexity of linear programming (and of solving systems of linear inequalities) was an open problem. While these solutions give polynomial-time algorithms with respect to the number of bits of input, pivot methods may yield a polynomial-time algorithm with respect to the number of variables and constraints only. However, whether a polynomial-time pivot algorithm exists remains an intriguing open problem. Dantzig's simplex method, although known to be very efficient in practice, is a worst-case exponential-time pivot algorithm.

In this thesis we review the classical finite pivot methods for solving linear programs and their efficiency in attaining primal feasibility (solving a system of linear inequalities) or proving infeasibility. The aim of this thesis is twofold: first we exhibit a simple algorithm for solving a system of linear inequalities that could be taught to students at the high school level as complementary material. Secondly, we examine the applications of our algorithm in the theory of linear programming and linear inequalities, and we compare its efficiency to the classical finite pivot methods for attaining feasibility in linear programs. The thesis is organized as follows. In chapter 2 we define the fundamental concepts and introduce notation. We review Dantzig's two-phase largest-coefficient simplex method with lexicographic ratio test in chapter 3, and Bland's smallest-index rule in chapter 4. In chapter 5 we study finite criss-cross methods and extend the least-index criss-cross method to solve linear systems to primal feasibility. In chapters 6 and 7 we present our main results. Chapter 6 is self-contained: we present the b-rule, a simple method for solving systems of linear inequalities based on the dual of Bland's smallest-index rule [Bla77]. The finiteness of the b-rule results in simple, easy-to-follow proofs of Farkas' Lemma, the Duality Theorem for linear programming, and the Fundamental Theorem of linear inequalities. Hence we suggest it be used as a pedagogical tool in the instruction of students being introduced to linear programming. The b-rule is also an alternative to the phase one methods for attaining a basic feasible solution of a linear program. In chapter 7 we define a random linear programming problem model that generates both feasible and infeasible problems and use it to compare the efficiency of the b-rule to the classical finite methods presented in chapters 3, 4, and 5.

2. Fundamental Concepts and Notation

2.1 Introduction

We assume the reader is familiar with the basic elements of linear algebra such as vectors, matrices, and their properties. For an in-depth introduction to linear algebra, please see Lay [La9]. Chvátal [Chv8] provides an excellent introduction to linear programming. In this chapter we define general concepts and introduce notation used throughout this thesis.

2.2 Vectors and Matrices

An n-dimensional vector v is a list of n real numbers v_1, v_2, ..., v_n, usually expressed by one of the following notations: (v_1, v_2, ..., v_n), the row vector [v_1 v_2 ... v_n], or the column vector [v_1 ... v_n]^T. The set of all vectors with n entries is denoted by R^n. A real number is a 1-dimensional vector. An m×n matrix A is a collection of m n-dimensional row vectors, or equivalently of n m-dimensional column vectors, with entries a_11, ..., a_1n in the first row through a_m1, ..., a_mn in the last row. The element (number) of the i-th row and the j-th column of a matrix A is denoted by a_ij. The set of all m×n matrices is denoted by

R^{m×n}. An m-dimensional row vector and an n-dimensional column vector are also 1×m and n×1 matrices respectively. See Appendix A for a short overview of vector and matrix arithmetic and other properties.

2.3 Linear Systems

A linear equation in the variables x_1, ..., x_n is an equation that can be written in the form a_1x_1 + a_2x_2 + ... + a_nx_n = b, where b and the coefficients a_1, ..., a_n are real numbers known in advance. Similarly, a linear inequality can be written in the form a_1x_1 + a_2x_2 + ... + a_nx_n ≤ b. A (linear) system of linear equations (or inequalities) is a collection of one or more linear equations (inequalities) involving the same set of variables, say x_1, ..., x_n. A solution of a linear system is an assignment of values (real numbers) to the variables of the system such that every equation/inequality is satisfied.

2.3.1 Matrix Notation

The information of a linear system of m equations/inequalities in n variables can be recorded using a coefficient matrix A ∈ R^{m×n}, a column vector x ∈ R^n, and a column vector b ∈ R^m: the system a_11x_1 + ... + a_1nx_n ≤ b_1, ..., a_m1x_1 + ... + a_mnx_n ≤ b_m is written compactly as Ax ≤ b.

2.3.2 Equivalent Forms

A system of linear inequalities can have alternative forms, for example:

Ax ≥ b is equivalent to (−A)x ≤ (−b), and the mixed system Ax ≤ b, Cx ≥ d is equivalent to Ax ≤ b, (−C)x ≤ (−d). A system of linear equations can be interpreted as a system of linear inequalities: Ax = b is equivalent to Ax ≤ b together with Ax ≥ b.

2.3.3 Linear Subsystems

Given a system of linear inequalities Ax ≤ b, removing p inequalities results in a new system A'x ≤ b', where A' ∈ R^{(m−p)×n} and b' ∈ R^{m−p}. A'x ≤ b' is a subsystem of Ax ≤ b.

2.4 Elimination Methods

2.4.1 Gaussian Elimination

In the 19th century Gauss [Gauss] discovered an algorithm for solving an arbitrary system of linear equations. The method consists of successive elimination of variables and equations (back substitution). Similar elimination methods date back to antiquity. Chvátal [Chv8] provides a concise presentation of the method and discusses its accuracy and speed. For more information concerning the history of elimination methods see [Str67].

2.4.2 Fourier-Motzkin Elimination

Fourier, and later Motzkin, discovered a method similar to Gaussian elimination applied to a system of linear inequalities. For an in-depth look, see [Ku56].
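Fourier-Motzkin elimination removes a variable by pairing every inequality in which it has a positive coefficient with every inequality in which it is negative, scaling so the variable cancels. The following is an illustrative sketch, not taken from the thesis; the function name and the (coefficients, right-hand side) data layout are our own assumptions.

```python
from fractions import Fraction

def fm_eliminate(ineqs, k):
    """Eliminate variable k from a system of linear inequalities.

    Each inequality is (a, b), meaning a[0]*x0 + ... + a[n-1]*x_{n-1} <= b.
    Returns an equivalent system in the remaining variables (column k is
    kept but becomes zero everywhere, to keep indexing simple).
    """
    pos, neg, rest = [], [], []
    for a, b in ineqs:
        if a[k] > 0:
            pos.append((a, b))
        elif a[k] < 0:
            neg.append((a, b))
        else:
            rest.append((a, b))
    out = list(rest)
    # From a.x <= b with a[k] > 0 and c.x <= d with c[k] < 0, scale each
    # so the coefficient of x_k is +1 and -1 respectively, then add.
    for a, b in pos:
        for c, d in neg:
            s, t = Fraction(1, a[k]), Fraction(-1, c[k])
            row = [s * ai + t * ci for ai, ci in zip(a, c)]
            out.append((row, s * b + t * d))
    return out

# x + y <= 4,  -x + y <= 2,  -y <= 0; eliminating x leaves 2y <= 6 and -y <= 0.
projected = fm_eliminate([([1, 1], 4), ([-1, 1], 2), ([0, -1], 0)], 0)
```

Repeating the elimination for every variable decides feasibility: the system is infeasible exactly when a contradictory inequality 0 ≤ b with b < 0 appears.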

2.5 Linear Combinations

2.5.1 Linear Combinations

Given vectors v_1, v_2, ..., v_p ∈ R^n and given scalars c_1, c_2, ..., c_p, the vector w defined by w = c_1v_1 + c_2v_2 + ... + c_pv_p is called a linear combination of v_1, v_2, ..., v_p using weights c_1, c_2, ..., c_p.

2.5.2 Linear Dependence and Independence

A set of vectors {v_1, ..., v_p} ⊆ R^d is said to be linearly dependent if there exist weights c_1, c_2, ..., c_p, not all zero, such that c_1v_1 + c_2v_2 + ... + c_pv_p = 0. Otherwise, the set of vectors {v_1, ..., v_p} is linearly independent.

2.6 Linear Programs

If c_1, c_2, ..., c_n are real numbers, then the function f of real variables x_1, x_2, ..., x_n defined by f(x_1, x_2, ..., x_n) = c_1x_1 + c_2x_2 + ... + c_nx_n is called a linear function. Linear equations and linear inequalities are also known as linear constraints. The problem of maximizing (or minimizing) a linear function subject to a finite number of linear constraints is called linear programming.

2.6.1 Standard Form

Within this thesis, we consider linear programs in the following standard form:

(P)  Maximize cx  subject to:  Ax ≤ b,  x ≥ 0,  where c ∈ R^n, A ∈ R^{m×n}, and b ∈ R^m are given.

Equation 1: Primal linear program in standard form

We will also refer to this formulation as the primal form of the linear program ("primal LP" for short).

2.6.2 Terminology

The linear function cx, which we attempt to optimize, is called the objective function. A feasible solution is an assignment of values to the decision variables x_1, x_2, ..., x_n such that all the constraints are satisfied. A feasible solution that optimizes the objective function is an optimal solution. A linear program that does not admit any feasible solution is called infeasible, and an unbounded linear program has feasible solutions but no optimal solution.

2.6.3 Fundamental Theorem of Linear Programming

Theorem 2.1 (Fundamental Theorem of Linear Programming): Every LP problem satisfies exactly one of the following three conditions:
1. It is infeasible.
2. It is unbounded.
3. It has an optimal solution.

In order to solve a linear program, it is necessary to obtain a certificate of one of these three terminal conditions. We provide a proof of the theorem in chapter 3.

2.6.4 Duality: Certificate of Optimality

Every primal LP problem admits a dual problem of the form:

(D)  Minimize yb  subject to:  yA ≥ c,  y ≥ 0   (dual LP)

Equation 2: Dual linear program in standard form

Table 1 shows the different combinations possible for a primal-dual pair. The dual LP represents a linear combination of the primal LP constraints. The importance of duality, embodied in the Duality Theorem, is that every feasible solution of the dual LP yields a bound on the optimal value of the primal LP. The Duality Theorem provides us with the ability to certify whether a solution to an LP is optimal or not.

                     Dual: Optimal   Dual: Infeasible   Dual: Unbounded
Primal: Optimal           Yes             No                 No
Primal: Infeasible        No              Yes                Yes
Primal: Unbounded         No              Yes                No

Table 1: Primal-Dual possibilities

Theorem 2.2 (Weak Duality Theorem): If x̃ is a primal feasible solution and ỹ is a dual feasible solution, then c x̃ ≤ ỹ b.

Theorem 2.3 (Strong Duality Theorem): If an LP has an optimal solution x* then the dual problem has an optimal solution y* and their optimal values are equal: c x* = y* b.
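Weak duality is easy to verify numerically: any primal feasible x and dual feasible y bracket the optimal value, since cx ≤ yb. A small illustrative check (the problem data and function names below are our own, not from the thesis):

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def is_primal_feasible(A, b, x):
    """x >= 0 and Ax <= b, for the primal  max cx s.t. Ax <= b, x >= 0."""
    return all(xi >= 0 for xi in x) and \
        all(dot(row, x) <= bi for row, bi in zip(A, b))

def is_dual_feasible(A, c, y):
    """y >= 0 and yA >= c, for the dual  min yb s.t. yA >= c, y >= 0."""
    cols = list(zip(*A))  # columns of A
    return all(yi >= 0 for yi in y) and \
        all(dot(y, col) >= cj for col, cj in zip(cols, c))

# max x1 + x2  s.t.  x1 + 2*x2 <= 4,  x1 <= 3  (a made-up example)
A, b, c = [[1, 2], [1, 0]], [4, 3], [1, 1]
x, y = [1, 1], [1, 1]  # one feasible primal point, one feasible dual point
assert is_primal_feasible(A, b, x) and is_dual_feasible(A, c, y)
assert dot(c, x) <= dot(y, b)  # weak duality: 2 <= 7
```

Strong duality then says the gap between the two bounds closes exactly at the optimal pair.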

Theorem 2.4 (Complementary Slackness): If an LP has an optimal solution x* and the dual has an optimal solution y*, then y*(b − Ax*) = 0 and (y*A − c)x* = 0.

2.6.5 Certificate of Unboundedness

Theorem 2.5 (Unboundedness Certificate): An LP in standard form is unbounded if and only if it has a feasible solution x̃ and there exists (a direction) d such that d ≥ 0, Ad ≤ 0, and cd > 0.

2.6.6 Certificate of Infeasibility

Theorem 2.6 (Farkas' Lemma (variant)): The system Ax ≤ b, x ≥ 0 of linear inequalities is infeasible if and only if the system w ≥ 0, wA ≥ 0, wb < 0 has a solution.

We prove the above theorems in chapter 6.

2.7 Dictionaries

To clarify pivot algorithms for solving linear programming problems, it is convenient to use a dictionary representation of the LP system.

2.7.1 Slack Variables

Given an LP,

Maximize Σ_{j=1..n} c_j x_j  subject to:  Σ_{j=1..n} a_ij x_j ≤ b_i (i = 1, 2, ..., m),  x_j ≥ 0 (j = 1, 2, ..., n),   (2.7.1a)

we denote the objective function by z and introduce the slack variables x_{n+1}, x_{n+2}, ..., x_{n+m}, defined as:

x_{n+i} = b_i − Σ_{j=1..n} a_ij x_j   (i = 1, 2, ..., m)
z = Σ_{j=1..n} c_j x_j   (2.7.1b)

Equation 3: Dictionary of a linear program

Every dictionary associated with (2.7.1a) is a system of linear equations in the decision variables x_1, x_2, ..., x_n, the slack variables x_{n+1}, x_{n+2}, ..., x_{n+m} as defined in (2.7.1b), and z. For example, (2.7.1b) is a dictionary representation of (2.7.1a). Every solution of the set of equations comprising a dictionary is also a solution of (2.7.1a) and vice versa, if and only if the solution values of the variables (including slacks) are non-negative.

2.7.2 Basic Feasible Solutions

The equations of every dictionary express m of the variables x_1, x_2, ..., x_{n+m}, and the objective function z, in terms of the remaining n variables. The m variables are known as basic variables. Basic variables constitute a basis. Similarly, the n remaining variables are referred to as co-basic (or non-basic) and constitute a co-basis (non-basis).

(Footnote: In the event that no slack variables are given, an initial LP basis (or a proof that none exists) can easily be found by Gaussian elimination. For the remainder of this thesis, unless otherwise noted, we assume an initial basic solution, feasible or not, can be found.)
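The construction of the slack-variable dictionary, and the basic solution obtained by setting the co-basic variables to zero, can be sketched as follows. This is an illustrative sketch: the representation (a mapping from each basic variable's index to its row) and the function names are our own choices, not the thesis's.

```python
from fractions import Fraction

def initial_dictionary(A, b, c):
    """Build the slack-variable dictionary for  max cx s.t. Ax <= b, x >= 0.

    Decision variables are numbered 1..n and slacks n+1..n+m.  Each basic
    row is (constant, coefficients of x_1..x_n), read as
    x_{n+i} = b_i - sum_j a_ij x_j; the z-row is returned separately.
    """
    n, m = len(c), len(b)
    rows = {}
    for i in range(m):
        rows[n + i + 1] = (Fraction(b[i]), [Fraction(-a) for a in A[i]])
    zrow = (Fraction(0), [Fraction(cj) for cj in c])
    return rows, zrow

def basic_solution(rows):
    """Set the co-basic variables to 0: each basic variable equals its constant."""
    return {i: const for i, (const, _) in rows.items()}

# x1 + 2*x2 <= 4,  3*x1 + x2 <= 6,  maximize 5*x1 + 4*x2 (made-up data):
rows, zrow = initial_dictionary([[1, 2], [3, 1]], [4, 6], [5, 4])
# Basic solution: x3 = 4, x4 = 6 -- feasible, since both values are non-negative.
```

Checking whether all constants are non-negative is exactly the test for a primal feasible dictionary described above.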

There are only a finite number of bases, clearly not more than (m+n choose m). If setting the co-basic variables of a given dictionary to zero results in the basic variables evaluating to non-negative values, then the dictionary is a primal feasible dictionary. Solutions of this type are basic feasible solutions. The fundamental theorem of linear programming implies that if an LP has a feasible solution, then it also has a basic feasible solution, and similarly, if an LP has an optimal solution, then it has a basic optimal solution.

2.7.3 Matrix Notation

Given an LP in standard form (2.7.1a), we introduce the slack variables x_{n+1}, x_{n+2}, ..., x_{n+m} and record the new problem in matrix notation:

Max cx  s.t.:  Ax = b,  x ≥ 0,

where A is now a matrix with m rows and n+m columns. The first n columns form the original coefficient matrix (the a_ij's), and the last m columns form the identity matrix. The row vector c has length n+m, with the first n entries containing the cost coefficients c_j from (2.7.1a); the remaining m entries of c are set to zero. x = [x_1 ... x_n x_{n+1} ... x_{n+m}]^T is a column vector reflecting the addition of the m slack variables, and the entries of the column vector b ∈ R^m are the b_i's (i = 1, ..., m). Thus A is the

partitioned matrix [A' I], where A' is the original coefficient matrix and I ∈ R^{m×m}. Let B be the set of indices of the variables in the basis, and N be the set of indices of the variables in the co-basis. We write A as [A_B A_N], and c as [c_B c_N]. Thus, a dictionary in matrix notation is recorded as:

x_B = A_B^{-1} b − A_B^{-1} A_N x_N
z = c_B A_B^{-1} b + (c_N − c_B A_B^{-1} A_N) x_N   (2.7.3a)

Equation 4: LP dictionary in matrix form

A_B^{-1} b is the vector specifying the current values of the basic variables. Let b̄ = A_B^{-1} b (the b-column), and Ā = A_B^{-1} A_N, where ᾱ_ij represents the coefficient of the co-basic variable j in the dictionary row of the basic variable i (ᾱ_ij should not be confused with row i, column j of A, which is denoted a_ij). Let z̄_j represent the coefficient of the non-basic variable j in the objective row (z-row) of the current dictionary. Given a (primal) basis, the dual dictionary of the primal dictionary (2.7.3a) is represented as:

y_N = (c_B A_B^{-1} A_N − c_N)^T + (A_B^{-1} A_N)^T y_B
w = −c_B A_B^{-1} b + (A_B^{-1} b)^T y_B   (2.7.3b)

Equation 5: Dual dictionary

The dual basis is N, the primal co-basis. Similarly, the dual co-basis is B, the primal basis. Note that (2.7.3a) and (2.7.3b) are mirror images: the rows of a primal dictionary correspond to the negative of the columns of its dual dictionary.

2.7.4 Basis

Definition 2.1: A basis B is a maximal subset of indices {1, 2, ..., n+m} such that the corresponding column vectors of the matrix A are independent.

Definition 2.2: Given a basis B, setting the co-basic variables to zero and evaluating the basic variables results in a basic solution. A basic solution of a basis B is primal feasible if b̄_i ≥ 0 for all i ∈ B. It is dual feasible if z̄_j ≤ 0 for all j ∈ N. An LP is primal inconsistent if it does not have a primal feasible basis. A dual inconsistent LP does not have a dual feasible basis.

Theorem 2.7: A linear program max cx, Ax = b, x ≥ 0 is infeasible (primal inconsistent) if there exists a basis such that some row element i of b̄ = A_B^{-1} b is negative while the corresponding row of A_B^{-1} A_N is non-negative.

Proof. The statement implies that the variable x_i is expressed as a negative constant minus a non-negative linear combination of the non-negative variables x_j, for j ∈ N. This is an unsatisfiable constraint, since x_i must be non-negative. □

Theorem 2.8: A linear program is unbounded (dual inconsistent) if it is feasible and there exists a basis such that some column j of A_B^{-1} A_N is non-positive while the corresponding entry of (c_N − c_B A_B^{-1} A_N) is positive.

Proof. Starting from the basic feasible solution, we can increase the value of x_j indefinitely: since column j of A_B^{-1} A_N is non-positive, the basic variables will remain feasible, and

the positive entry of (c_N − c_B A_B^{-1} A_N) implies the objective value z will increase in direct proportion to x_j. □

2.7.6 Dictionary Structures

We represent a dictionary (2.7.3a) by a table structure of coefficients for a given basis: the left column holds A_B^{-1} b and c_B A_B^{-1} b, and the body holds A_B^{-1} A_N and (c_N − c_B A_B^{-1} A_N). The sign structures of optimal, primal inconsistent, and dual inconsistent dictionaries are illustrated in Figure 1. We indicate the negative, non-positive, zero, non-negative, and positive components by −, ⊖, 0, ⊕, +, respectively.

Figure 1: Terminal dictionary sign structures

2.8 Pivoting

Given a dictionary D with basis B, a pivot operation is the process of swapping a variable i ∈ B with a variable j ∈ N and re-solving the system in terms of the new basis (B ← B \ {i} ∪ {j}, and N ← N \ {j} ∪ {i}). Given a dictionary:

x̂_i = b̄_i − ᾱ_i1 x_1 − ... − ᾱ_ij x_j − ... − ᾱ_in x_n   (one row per basic variable)
z = z̄_0 + z̄_1 x_1 + ... + z̄_j x_j + ... + z̄_n x_n

where x̂_i represents the i-th variable of the current basis ordered by indices, and x_j represents the j-th variable of the non-basis (refer to section 2.7 for the definitions of b̄_i and ᾱ_ij). Pivoting on (i, j) solves row i for x_j and substitutes the result into every other row, performing the following operation:

x_j = b̄_i/ᾱ_ij − (1/ᾱ_ij) x̂_i − Σ_{k≠j} (ᾱ_ik/ᾱ_ij) x_k
x̂_ℓ = (b̄_ℓ − ᾱ_ℓj b̄_i/ᾱ_ij) + (ᾱ_ℓj/ᾱ_ij) x̂_i − Σ_{k≠j} (ᾱ_ℓk − ᾱ_ℓj ᾱ_ik/ᾱ_ij) x_k   (for each basic row ℓ ≠ i)
z = (z̄_0 + z̄_j b̄_i/ᾱ_ij) − (z̄_j/ᾱ_ij) x̂_i + Σ_{k≠j} (z̄_k − z̄_j ᾱ_ik/ᾱ_ij) x_k

Many rules for selecting the entering and leaving variables have been proposed with the goal of moving from a given basis to the optimal basis, thus solving a linear program (see [Dan5], [Bla77], [Zo69]). For a survey of recent pivot rules for linear programming, see Terlaky and Zhang [Ter9]. A pivot rule is finite if it reaches the optimal basic solution after a finite number of steps. Otherwise the pivot rule cycles. A pivot method is called a simplex method if it preserves the (primal/dual) feasibility of the basic solution. Pivot rules that maintain primal feasibility and attempt to reach dual feasibility (and thus optimality) require that the initial basis be primal feasible, and are called two-phase rules because of the need to obtain an initial primal feasible basis (phase one) before proceeding to solve to optimality. Pivot methods that do not preserve feasibility and hence require only one phase are called criss-cross methods. Combinatorial pivot

rules are pivot rules that are only concerned with the signs of the coefficients of a dictionary. Fukuda and Terlaky [Fu97] define an admissible pivot to be a pivot on (i, j) such that either b̄_i < 0 and ᾱ_ij < 0 (type 1), or z̄_j > 0 and ᾱ_ij > 0 (type 2). See Figure 2 for the sign structures of dictionaries with admissible pivots.

Figure 2: Admissible pivots
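The pivot operation described above can be sketched on a dictionary stored as a table of coefficients. The storage convention below (each row reads basic variable = constant + coefficients · co-basic variables, with the z-row last) is one of several equivalent layouts and is our own choice, as are the names; exact rational arithmetic avoids round-off.

```python
from fractions import Fraction

def pivot(T, basis, cobasis, r, s):
    """Pivot: basic variable basis[r] leaves, co-basic variable cobasis[s] enters.

    T holds one row per basic variable plus a final z-row; each row is
    [const, coeffs...] (entries must be Fractions), read as
    basic_var = const + coeffs . cobasic_vars.
    """
    a = T[r][s + 1]
    assert a != 0, "pivot element must be nonzero"
    # Re-solve row r for the entering variable cobasis[s].
    new_r = [-v / a for v in T[r]]
    new_r[s + 1] = 1 / a  # coefficient of the (now co-basic) leaving variable
    # Substitute the new expression for the entering variable into every other row.
    for q in range(len(T)):
        if q == r:
            continue
        f = T[q][s + 1]
        T[q] = [v + f * w for v, w in zip(T[q], new_r)]
        T[q][s + 1] = f * new_r[s + 1]
    T[r] = new_r
    basis[r], cobasis[s] = cobasis[s], basis[r]

# Example dictionary:  x3 = 4 - x1,  z = 0 + 3 x1.  Pivot x1 in, x3 out:
T = [[Fraction(4), Fraction(-1)], [Fraction(0), Fraction(3)]]
basis, cobasis = [3], [1]
pivot(T, basis, cobasis, 0, 0)
# T now reads  x1 = 4 - x3,  z = 12 - 3 x3.
```

The update is exactly the substitution displayed in the text: row r is solved for the entering variable, and that expression replaces the entering variable in every remaining row, including the z-row.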

3. Dantzig's Simplex Method

3.1 Introduction

In 1947 Dantzig [Da8], [Da9] designed the largest-coefficient simplex method for solving a linear program. This method is very efficient and widely used in practice ([Bi9]), and has been shown to have expected polynomial-time behaviour in theory ([Dan8], [Bor8], [Sma8]). In this chapter we examine Dantzig's original two-phase simplex method for solving linear programs, and the lexicographic minimum ratio test for avoiding cycles.

3.2 Dantzig's Simplex Method

Dantzig's simplex method is a gradient ascent approach that iteratively improves the objective value while maintaining primal feasibility. Given a primal feasible basis, the algorithm selects the co-basic variable that has the largest positive coefficient in the z-row of the current dictionary to enter the basis. This greedy choice is a result of the desire to increase the value of z. The leaving variable is chosen by a ratio test: primal feasibility is maintained by choosing the leaving variable that imposes the most stringent upper bound on the increase of the entering variable.

Problem 3.1: Given an m×n matrix A, an m-dimensional vector b, an n-dimensional vector c, and a primal feasible basis B (and co-basis N), solve the linear program: max cx, Ax ≤ b, x ≥ 0.

Method 3.1 (Largest-Coefficient Simplex Method): Add slack variables so that the initial primal feasible dictionary (i.e. A_B^{-1} b ≥ 0) can be written as:

x_B = A_B^{-1} b − A_B^{-1} A_N x_N
z = c_B A_B^{-1} b + (c_N − c_B A_B^{-1} A_N) x_N   (3.1)

Step 1: If all the coefficients of the co-basic variables in the z-row are non-positive, then set x_B = A_B^{-1} b and x_N = 0. The solution is optimal. Done.

Step 2: Otherwise let j ∈ N be the index of the variable with the largest positive coefficient in the z-row of the current dictionary (break ties by choosing the minimum index).

Step 3: Let K be the subset of B consisting of every k whose dictionary row places a decreasing (negative) coefficient on x_j. If K is empty, then stop: the linear program is unbounded (the certificate direction is given by the n-dimensional vector that represents x_j in terms of the n decision variables).

Step 4: Otherwise, choose the leaving variable i ∈ K whose row imposes the most stringent upper bound b̄_i/|ᾱ_ij| on the increase of x_j (ratio test, with minimum index for breaking ties).

Step 5: Set B ← B \ {i} ∪ {j}, N ← N \ {j} ∪ {i}. Compute the new dictionary (3.1). Go to Step 1.
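The steps of the largest-coefficient simplex method above can be sketched directly. This is an illustrative sketch under assumptions of our own: b ≥ 0 so that the all-slack basis is primal feasible (phase one is omitted), ties are broken by smallest index, and no anti-cycling safeguard is included, so the sketch can cycle on degenerate inputs.

```python
from fractions import Fraction

def simplex(A, b, c):
    """Largest-coefficient simplex for  max cx s.t. Ax <= b, x >= 0, b >= 0.

    Returns ('optimal', value, x) or ('unbounded', None, None); exact
    rational arithmetic throughout.
    """
    m, n = len(A), len(c)
    # Dictionary: basic_var = row[0] + row[1:] . cobasic_vars; last row is z.
    T = [[Fraction(b[i])] + [Fraction(-a) for a in A[i]] for i in range(m)]
    T.append([Fraction(0)] + [Fraction(cj) for cj in c])
    basis = list(range(n + 1, n + m + 1))   # slacks start in the basis
    cobasis = list(range(1, n + 1))
    while True:
        z = T[-1]
        # Step 1/2: entering variable = largest positive z-row coefficient,
        # smallest variable index on ties.
        cand = [(s, z[s + 1]) for s in range(n) if z[s + 1] > 0]
        if not cand:
            x = [Fraction(0)] * n
            for i, bi in enumerate(basis):
                if bi <= n:
                    x[bi - 1] = T[i][0]
            return 'optimal', z[0], x
        s = max(cand, key=lambda t: (t[1], -cobasis[t[0]]))[0]
        # Step 3: rows whose coefficient of the entering variable is negative.
        K = [r for r in range(m) if T[r][s + 1] < 0]
        if not K:
            return 'unbounded', None, None
        # Step 4: most stringent bound, minimum basic index on ties.
        r = min(K, key=lambda r: (-T[r][0] / T[r][s + 1], basis[r]))
        # Step 5: pivot on (r, s).
        a = T[r][s + 1]
        new_r = [-v / a for v in T[r]]
        new_r[s + 1] = 1 / a
        for q in range(m + 1):
            if q != r:
                f = T[q][s + 1]
                T[q] = [v + f * w for v, w in zip(T[q], new_r)]
                T[q][s + 1] = f * new_r[s + 1]
        T[r] = new_r
        basis[r], cobasis[s] = cobasis[s], basis[r]

# max 3 x1 + 2 x2  s.t.  x1 + x2 <= 4,  x1 + 3 x2 <= 6  (made-up data)
status, value, x = simplex([[1, 1], [1, 3]], [4, 6], [3, 2])
```

On the made-up example the method terminates at the vertex x = (4, 0) with objective value 12, after a single pivot.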

3.3 Initialization: Phase One

In order to solve a linear program, Dantzig's simplex method requires an initial primal feasible basis. In many cases a phase one simplex method must be executed to attain such a basis. As a result, Dantzig's simplex method is a two-phase algorithm.

Problem 3.2: Given an m×n matrix A and an m-dimensional vector b, find a primal feasible basis B (and co-basis N) such that:

Ax ≤ b,  x ≥ 0.   (3.2)

Method 3.2 (Phase One Simplex):

Step 1: Introduce an artificial variable x_0 and formulate the following auxiliary linear program:

max (−x_0),  Ax − x_0 1 ≤ b,  x ≥ 0, x_0 ≥ 0,   (3.3)

where 1 denotes the all-ones vector.

Step 2: Obtain a basic feasible solution by setting all of the original variables to zero and making the value of x_0 sufficiently large.

Step 3: Solve (3.3) using Dantzig's simplex method (Method 3.1).

Step 4: If the optimal value of (3.3) is zero, then the optimal basis of (3.3) is a feasible basis of (3.2). Otherwise the system is infeasible.

Example 3.1 (Dantzig's two-phase simplex method): Introduce and solve for the slack variables to obtain a system of equations:

(Footnote: Gaussian elimination can be used to solve for m of the variables, or to prove that no such system exists.)

[The numerical dictionaries of Example 3.1 are garbled in this transcription. The recoverable outline: the starting dictionary is primal infeasible, so the artificial variable x_0 is introduced and pivoted into the basis, giving a primal feasible dictionary for the auxiliary problem; further pivots drive the auxiliary objective w to its optimal value of zero, completing phase one; the original objective row is then restored, with the basic variables substituted out so that it is expressed in terms of the co-basic variables, and one more pivot reaches the optimal phase two solution.]

3.4 Degeneracy and Cycling

If a linear program has a basic feasible solution where one or more basic variables have a value of zero, then this solution is a degenerate basic feasible solution. The presence of degeneracy can have the following consequences:

1. A simplex pivot from one feasible basis to another may not improve the value of the objective function z. This phenomenon is known as stalling.
2. The simplex method might cycle and never reach the optimal solution.

Theorem 3.1: The simplex method is guaranteed to stop in a finite number of iterations if there is no degeneracy.

Proof. There are only a finite number of bases, clearly not more than (m+n choose m), and every non-degenerate simplex pivot strictly increases the value of the objective function. This implies that a basis cannot be encountered twice. □

Hoffman [Hof5] constructed the first example of a linear program that cycles. Lee [Lee97] describes the geometry behind Hoffman's example. Wolfe [Wol6], and Kotiah and Steinberg [Kot78], reported examples of practical linear problems that cycled. We present the example found in [Chv8], a modification of the example constructed by Marshall and Suurballe [Ma69].
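Cycling can be observed mechanically: record the basis after every pivot and stop when one recurs, as happens in the cycling example that follows. A minimal sketch (the function name and representation are our own):

```python
def detect_cycle(bases):
    """Return (first, second) positions of the first repeated basis, or None.

    bases: sequence of collections of basic-variable indices, in pivot order.
    """
    seen = {}
    for t, basis in enumerate(bases):
        key = frozenset(basis)  # a basis is a set: order does not matter
        if key in seen:
            return seen[key], t
        seen[key] = t
    return None

# A basis sequence that returns to its starting basis after six pivots:
seq = [{5, 6, 7}, {1, 6, 7}, {1, 2, 7}, {2, 3, 7}, {3, 4, 7}, {4, 5, 7}, {5, 6, 7}]
assert detect_cycle(seq) == (0, 6)
```

Since the objective value never changes along degenerate pivots, nothing in the dictionary itself signals the repetition; only the basis history reveals it.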

Example 3.2 (Cycling): Maximize z = 10x_1 − 57x_2 − 9x_3 − 24x_4, subject to:

(1/2)x_1 − (11/2)x_2 − (5/2)x_3 + 9x_4 ≤ 0
(1/2)x_1 − (3/2)x_2 − (1/2)x_3 + x_4 ≤ 0
x_1 ≤ 1
x_1, x_2, x_3, x_4 ≥ 0.

Introduce slack variables x_5, x_6, and x_7 to obtain the starting dictionary with basis {5,6,7}. Applying the largest-coefficient rule, with ties broken by smallest index, produces a sequence of degenerate pivots through the bases {5,6,7}, {1,6,7}, {1,2,7}, {2,3,7}, and {3,4,7}.

One further pivot reaches {4,5,7}, and on the next pivot x_6 returns to the basis: the starting basis {5,6,7} recurs, and the simplex method cycles.

3.5 Lexicographic Minimum Ratio Test

Dantzig, Orden and Wolfe [Dan55] developed the lexicographic minimum ratio test for avoiding cycling. The algorithm is equivalent to Method 3.1 except that it breaks ties for the leaving variable by vector lexicography instead of variable indices.

3.5.1 Definitions

A vector v is lexicographically positive if v ≠ 0 and the first nonzero element of v is positive. Given two vectors v¹ and v², v¹ is lexicographically greater than v² if (v¹ − v²) is lexicographically positive. Given a set of vectors, v_i is the lexicographically minimum vector if the other vectors of the set are all lexicographically greater than v_i.

3.5.2 Breaking Ties

Theorem 3.2: The simplex method is finite if ties for the leaving variables are broken using Dantzig's lexicographic minimum ratio test.

Proof.

The simplex method starts with a feasible basis, in other words with A_B = I and A_B^{-1}b = b ≥ 0. The rows of the initial matrix [A_B^{-1}b | A_B^{-1}] = [b | I] are lexicographically positive. The lexicographic ratio test of Dantzig et al. maintains the rows of the matrix [A_B^{-1}b | A_B^{-1}] lexicographically positive during every iteration of the simplex method: let x_j be the variable chosen to enter the basis. If the ratio test results in a tie, choose as leaving variable the basic variable whose row is the lexicographically minimum of all the rows of [(A_B^{-1}b | A_B^{-1}) / α_j] involved in the tie (the matrix with each row divided by the coefficient α_j of x_j in that row). Since the rows of this matrix are linearly independent, this always yields a unique selection. Because we pivot on the lexicographically minimum row, the pivot operation keeps all the rows of [A_B^{-1}b | A_B^{-1}] lexicographically positive. A positive multiple of the pivot row is added to the objective row, hence the objective vector [c_B A_B^{-1}b | c_B A_B^{-1}] increases lexicographically at every iteration, even in the presence of degeneracy, and cycling is avoided. Dantzig's lexicographic two-phase simplex method is therefore a finite algorithm.

3.6 Fundamental Theorem of Linear Programming

Recall the Fundamental Theorem of Linear Programming: Every LP problem satisfies exactly one of the following three conditions:

1. It is infeasible (primal inconsistent).
2. It is unbounded (dual inconsistent).
3. It has an optimal solution.
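Before moving on, the lexicographic comparisons used in the ratio test above are easy to make concrete. The sketch below is illustrative only (the function names are my own, not from the thesis); exact rational arithmetic avoids the rounding that would blur lexicographic comparisons:

```python
from fractions import Fraction

def lex_positive(v):
    """A vector is lexicographically positive if it is nonzero and its
    first nonzero entry is positive."""
    for entry in v:
        if entry != 0:
            return entry > 0
    return False  # the zero vector is not lexicographically positive

def lex_min(rows):
    """Index of the lexicographically minimum row: the row r such that
    every other row minus r is lexicographically positive."""
    best = 0
    for i in range(1, len(rows)):
        diff = [a - b for a, b in zip(rows[best], rows[i])]
        if lex_positive(diff):  # rows[best] is lexicographically greater
            best = i
    return best

# Ties in the ratio test are broken by comparing the scaled rows:
rows = [[Fraction(0), Fraction(1), Fraction(-2)],
        [Fraction(0), Fraction(1), Fraction(-3)]]
print(lex_min(rows))  # -> 1: the second row is lexicographically smaller
```

The first nonzero entry decides the comparison, so ties in the usual ratio test (equal first components) are resolved by the later columns of the scaled rows.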

Proof. Phase one of Dantzig's two-phase simplex algorithm either determines that the problem is infeasible or returns a basic feasible solution. Phase two either determines that the problem is unbounded or delivers a basic optimal solution.

3.7 Dual Simplex Method

The Duality Theorem for linear programming states that a primal linear program shares the same optimal value as its dual. Therefore, we can apply the simplex method to the dual linear program to attain optimality. We noted earlier that every primal dictionary is the mirror image of the corresponding dual dictionary:

  z = z_d + Σ_{j∈N} z_j x_j
  x_i = b_i − Σ_{j∈N} α_ij x_j   (i ∈ B)
  (primal dictionary)

  −w = −z_d − Σ_{i∈B} b_i y_i
  y_j = −z_j + Σ_{i∈B} α_ij y_i   (j ∈ N)
  (dual dictionary)

Equation 6: Primal and dual dictionary relationship

The coefficients appearing in a row of a primal dictionary are found, with opposite signs, in the corresponding column of the dual dictionary. Lemke [Lem5] designed the dual simplex method, which performs the simplex method on the dual problem using only the primal formulation/dictionary structure:

Problem: Given an m×n matrix A, an m-dimensional vector b, an n-dimensional vector c, and a dual feasible basis B (and non-basis N), solve the linear program: max cx, Ax ≤ b, x ≥ 0.

Method (Dual Simplex): Add slack variables and let the current dual feasible dictionary (i.e. c_N − c_B A_B^{-1} A_N ≤ 0) be written as:

  x_B = A_B^{-1}b − A_B^{-1}A_N x_N
  z = c_B A_B^{-1}b + (c_N − c_B A_B^{-1}A_N) x_N

Step 1: If A_B^{-1}b ≥ 0, then set x_B = A_B^{-1}b and x_N = 0. The solution is optimal. Done.

Step 2: Otherwise let i ∈ B be the index of the basic variable with the most negative coefficient in the b-column of the current dictionary (break ties by choosing the minimum index).

Step 3: Let K be the subset of N such that for every k ∈ K the coefficient α_ik is positive. If K is empty, then stop: the linear program is infeasible (the pivot row, containing only non-positive coefficients, provides the certificate linear combination that establishes infeasibility).

Step 4: Otherwise, choose j to be the minimum index j ∈ K such that (z_j / α_ij) ≤ (z_k / α_ik) for all k ∈ K (ratio test, with minimum index for breaking ties).

Step 5: Set B := (B \ {i}) ∪ {j}, N := (N \ {j}) ∪ {i}. Compute the new dictionary and go to Step 1.
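The index selections of the method above can be sketched in a few lines. This is one illustrative reading, not the thesis's code: the names are my own, and the sign conventions (eligible columns have α_ik > 0, ratios z_k / α_ik compared with minimum-index tie-breaking) follow the dictionary layout used in the steps above; other layouts flip these signs.

```python
from fractions import Fraction

def dual_simplex_pivot(b_bar, z_bar, alpha, basis, nonbasis):
    """One pivot selection of the dual simplex method.
    b_bar[i]: value of basic variable basis[i]; z_bar[j]: z-row
    coefficient of nonbasic variable nonbasis[j]; alpha[i][j]: the
    dictionary coefficient (integer data assumed, for exact ratios).
    Returns ('optimal',), ('infeasible', leaving variable) or
    ('pivot', leaving variable, entering variable)."""
    # Steps 1-2: leaving variable = most negative b entry, smallest
    # variable index on ties.
    neg = [(b_bar[i], basis[i], i) for i in range(len(basis)) if b_bar[i] < 0]
    if not neg:
        return ('optimal',)
    _, _, r = min(neg, key=lambda t: (t[0], t[1]))
    # Step 3: candidate entering columns have a positive coefficient in row r.
    K = [j for j in range(len(nonbasis)) if alpha[r][j] > 0]
    if not K:
        return ('infeasible', basis[r])
    # Step 4: minimum ratio test, ties broken by the smallest variable index.
    s = min(K, key=lambda j: (Fraction(z_bar[j], alpha[r][j]), nonbasis[j]))
    return ('pivot', basis[r], nonbasis[s])

print(dual_simplex_pivot([-2, 1], [-3, -1], [[1, 2], [0, 1]],
                         [5, 6], [1, 2]))
# -> ('pivot', 5, 1): x_5 leaves, x_1 enters
```

A full dual simplex implementation would follow this selection with the usual pivot update of Step 5 and loop back to the optimality test.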

Clearly the dual simplex method is useful when the initial basis is dual feasible and primal infeasible. The dual simplex method is often used for sensitivity analysis; see Chvátal [Chv8] and Schrijver [Sch86] for further details.

3.8 Complexity Results

The worst-case complexity of the simplex algorithm is not known. Klee and Minty [Kle7] showed that it is at least an exponential-time algorithm: the following linear program requires 2^n − 1 iterations (or pivots):

  Maximize Σ_{j=1}^{n} 10^{n−j} x_j
  subject to: 2 Σ_{j=1}^{i−1} 10^{i−j} x_j + x_i ≤ 100^{i−1}   (i = 1, 2, ..., n)
              x_j ≥ 0   (j = 1, 2, ..., n)

Equation 7: Klee-Minty example

However, Dantzig [Dan6] argued that for practical linear programming problems with m < 50 and m + n < 200, the number of iterations is usually less than 3m/2 and rarely up to 3m. The number of iterations usually increases proportionally to m and very slowly with n. Dantzig [Dan8], Borgwardt [Bor8], [Bor87] and Smale [Sma8] provide theoretical explanations of this result.
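The Klee-Minty data is easy to generate, which makes the exponential behaviour convenient to study experimentally. A small generator, assuming the usual powers-of-ten statement of the cube (the one used in Chvátal's presentation):

```python
def klee_minty(n):
    """Objective c, constraint matrix A and right-hand side b for the
    n-dimensional Klee-Minty cube:
        maximize  sum_j 10^(n-j) x_j
        s.t.      2 * sum_{j<i} 10^(i-j) x_j + x_i <= 100^(i-1),  x >= 0."""
    c = [10 ** (n - j) for j in range(1, n + 1)]
    A = [[2 * 10 ** (i - j) if j < i else (1 if j == i else 0)
          for j in range(1, n + 1)] for i in range(1, n + 1)]
    b = [100 ** (i - 1) for i in range(1, n + 1)]
    return c, A, b

c, A, b = klee_minty(3)
print(c)  # [100, 10, 1]
print(A)  # [[1, 0, 0], [20, 1, 0], [200, 20, 1]]
print(b)  # [1, 100, 10000]
```

Feeding these instances to a largest-coefficient simplex code and counting pivots is a direct way to observe the 2^n − 1 growth for small n.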

3.9 Comments

The first algorithm designed to solve linear programs is still the preferred method of choice in practice. On average the two-phase simplex method is a linear-time algorithm with respect to the number of constraints in the input. However, there are several downsides to Dantzig's method:

1. Depending on the implementation, finding an entering variable might require up to n comparisons to find the largest coefficient in the objective row.
2. The lexicographic ratio test is an expensive operation that must be performed in order to avoid cycling in the presence of degeneracy. Up to mn comparisons are required for the ratio test. It is often left out since it can be complex to implement.
3. If an initial primal feasible basis is not available, then an auxiliary linear program must be solved to attain one.

In the next chapter we present Bland's pivot rule for the simplex algorithm. It is a simple rule that is finite and does not require lexicographic ratio tests.
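Downside 1 above is easy to see in code: the largest-coefficient rule must scan the entire z-row, while a smallest-index rule of the kind studied next may stop at the first positive coefficient. A minimal sketch (function names are my own):

```python
def entering_dantzig(z_row):
    """Largest-coefficient rule: scan the whole z-row (up to n
    comparisons) and return the index of the largest positive
    coefficient, or None if all are non-positive (optimal)."""
    best = None
    for j, coeff in enumerate(z_row):
        if coeff > 0 and (best is None or coeff > z_row[best]):
            best = j
    return best

def entering_bland(z_row):
    """Smallest-index rule: return the first index with a positive
    coefficient; the scan may stop early."""
    for j, coeff in enumerate(z_row):
        if coeff > 0:
            return j
    return None

row = [-1, 3, 0, 7, -2]
print(entering_dantzig(row), entering_bland(row))  # -> 3 1
```

Both rules agree on when the current basis is optimal (no positive coefficient remains); they differ only in which improving column they select.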

4. Bland's Pivot Rule

4.1 Introduction

In 1977, Bland [Bla77] presented a simple finite pivot selection rule (known as Bland's rule) for the simplex method. The resulting algorithm is similar to Dantzig's simplex method: it attempts to increase the objective value while preserving primal feasibility. It is a two-phase algorithm, but it avoids computations relating to the lexicography of the leaving row, and also coefficient comparisons for choosing the entering variable. In this chapter we look at Bland's smallest-index rule and provide a proof of finiteness. Theoretical complexity results are discussed.

4.2 Bland's Rule

Bland's rule chooses the entering variable by its index: the entering variable is chosen to be the variable with a positive coefficient in the objective row with the smallest index. Dantzig's ratio test, with smallest index for breaking ties, is used to choose the leaving variable in order to preserve primal feasibility.

Problem: Given an m×n matrix A, an m-dimensional vector b, an n-dimensional vector c, and a primal feasible basis B (and co-basis N), solve the linear program: max cx, Ax ≤ b, x ≥ 0.

Algorithm (Simplex with Bland's rule): Add slack variables and let the current primal feasible dictionary (i.e. A_B^{-1}b ≥ 0) be written as:

  x_B = A_B^{-1}b − A_B^{-1}A_N x_N
  z = c_B A_B^{-1}b + (c_N − c_B A_B^{-1}A_N) x_N

Step 1: If all the coefficients of the non-basic variables in the z-row are non-positive, then set x_B = A_B^{-1}b and x_N = 0. The solution is optimal. Done.

Step 2: Otherwise let j ∈ N be the smallest index of a variable with a positive coefficient in the z-row of the current dictionary.

Step 3: Let K be the subset of B such that for every k ∈ K the coefficient α_kj is negative. If K is empty, then stop: the linear program is unbounded (the certificate direction is given by the n-dimensional vector that represents x_j in terms of the n decision variables).

Step 4: Otherwise, choose i to be the minimum index i ∈ K such that (b_i / α_ij) ≤ (b_k / α_kj) for all k ∈ K (ratio test, with minimum index for breaking ties).

Step 5: Set B := (B \ {i}) ∪ {j}, N := (N \ {j}) ∪ {i}. Compute the new dictionary. Go to Step 1.

4.3 Initialization

In order to solve a linear program, Bland's rule, like every simplex method, requires an initial primal feasible basis. A phase one may have to be executed to attain

such a basis. The simplex method with Bland's rule is a two-phase algorithm: an auxiliary problem must be set up and solved using Bland's pivoting rule to attain an initial primal feasible basis.

4.4 Proof of Finiteness

Theorem (finiteness): The simplex method is finite if the entering and leaving variables are selected by Bland's smallest-index rule in every iteration.

The proof we present uses ideas from the proof of finiteness of the least-index criss-cross algorithm by Fukuda and Terlaky [Fuk97].

Proof. Let us assume there exists a system on which the simplex method using Bland's rule does not terminate. There are only a finite number of bases, clearly not more than (m+n choose m), hence we must assume the simplex method using Bland's rule cycles. Let max cx s.t. Ax ≤ b, x ≥ 0 be a system that causes the algorithm to cycle. We will assume that every variable in the system enters and leaves the basis during the cycle; otherwise we can use a smaller example of this system that cycles, obtained by removing the variables that are not involved. Let x_k be the variable with the largest index. In order for x_k to enter and leave the basis, the following two situations must occur:

Situation LEAVE (x_k leaves the basis):

Figure 3: Largest-index variable leaves the basis in Bland's rule

Situation ENTER (x_k enters the basis):

Figure 4: Largest-index variable enters the basis in Bland's rule

Now consider the following two situations for any system max cx, Ax ≤ b, x ≥ 0:

Situation OPTIMAL (optimal solution):

Figure 5: Optimal dictionary sign structure

Situation UNBOUNDED (unbounded system):

Figure 6: Primal unbounded dictionary sign structure

Both situations are terminal. By the Fundamental Theorem of Linear Programming the system cannot have an optimal solution and also be dual inconsistent (unbounded and/or have an unbounded direction). Only one of situation OPTIMAL and situation UNBOUNDED can occur for a given system max cx, Ax ≤ b, x ≥ 0. Let us apply this fact to our cycling system with the following substitution: replace x_k by (−1)·x_k. Examine the original situations:

Situation LEAVE (x_k leaves the basis):

Figure 7: Situation LEAVE after substitution

Situation ENTER (x_k enters the basis):

Figure 8: Situation ENTER after substitution

Note that situation ENTER now corresponds to situation OPTIMAL, and situation LEAVE corresponds to situation UNBOUNDED (unbounded direction). In a cycle both situations must be encountered, but this contradicts the fact that a system cannot be both

inconsistent and have a feasible solution. The algorithm is finite and either ends with an optimal solution or with a proof of inconsistency.

4.5 Lexicographic Increase

A corollary of the theorem above is that during a pivot sequence of Bland's smallest-index simplex method the two almost-terminal dictionaries, situation ENTER and situation LEAVE, cannot both be encountered. A nice application of this corollary is the construction of a vector L that increases lexicographically after each iteration of Bland's simplex method. The construction of the vector L is given by Fukuda and Matsui [Fuk89], but in the context of another finite pivot rule (the least-index criss-cross method; see chapter 5). We extend their results to Bland's smallest-index rule:

Let L be a 0-1 vector indexed by the m+n variable indices in decreasing order: L = (L_{m+n}, L_{m+n−1}, ..., L_2, L_1). Initially, set L = (0, 0, ..., 0). After every Bland rule pivot on the indices i and j, let q = max{i, j} and update L as follows, for all k ∈ {1, 2, ..., m+n}:

  L_k := 0         if k < q
  L_q := 1 − L_q   if k = q
  L_k := L_k       if k > q

Theorem: In Bland's smallest-index rule, the vector L increases monotonically in the sense of lexicographic ordering, and hence the method terminates in a finite number of steps.

Proof. Let (i, j) be the next Bland rule pivot and let q = max{i, j}. We show that at every iteration the vector L satisfies L_q = 0 before the update, which implies that the vector L strictly increases. Assume that L does not strictly increase after some iteration k, i.e. L_t = 1 for the maximum pivot index t before we update L. In order for this to occur, there must have been a previous iteration where t was also chosen as the maximum pivot index. Let k' be the most recent such iteration before k. Then L_t = 1 at all iterations from k' to k, and the maximum pivot index at each iteration between k' and k is lower than t. Thus t is the maximum index of a variable that enters and leaves the basis during this pivot sequence, corresponding to situation ENTER and situation LEAVE respectively. The two situations cannot both be encountered during Bland's smallest-index rule. Hence the vector L strictly increases lexicographically after every iteration.

4.6 Complexity Results

Avis and Chvátal [Av78] showed that the worst-case number of iterations required by Bland's pivoting rule is bounded from below by the n-th Fibonacci number. This lower bound was later improved; see [Sch86]. The example Avis and Chvátal provide is a disguised form of the Klee-Minty cube:

  Maximize Σ_{j=1}^{n} ε^{n−j} x_j
  subject to: 2 Σ_{j=1}^{i−1} ε^{i−j} x_j + x_i ≤ 1   (i = 1, 2, ..., n)
              x_j ≥ 0   (j = 1, 2, ..., n)
  where 0 < ε < 1/2   (4.1)

for which they show that the number of iterations required, bland(n), is bounded from below by:

  bland(n) ≥ (1/√5) · [ ((1+√5)/2)^n − ((1−√5)/2)^n ]

Replacing the right-hand sides of (4.1) by zeroes forms a new LP that is completely degenerate:

  Maximize Σ_{j=1}^{n} ε^{n−j} x_j
  subject to: 2 Σ_{j=1}^{i−1} ε^{i−j} x_j + x_i ≤ 0   (i = 1, 2, ..., n)
              x_j ≥ 0   (j = 1, 2, ..., n)
  where 0 < ε < 1/2   (4.2)

Bland's smallest-index rule generates the same sequence of pivots in (4.2) as in (4.1). This result of Avis and Chvátal provides a lower bound on the worst-case number of stalling iterations that the simplex method with Bland's rule might require. Avis and Chvátal also provided results from Monte-Carlo experiments on a number of pivot rules run on various types of linear programs. These results will be discussed when we present experimental results in chapter 7.

4.7 Comments

Bland published the first finite pivot rules for the simplex method. Although an important and exciting discovery for theoretical purposes, the rule shares many of the costly attributes of Dantzig's largest-coefficient simplex method. The ratio test to preserve feasibility restricts the potential pivot paths to optimality. However, Bland's findings

lead to the discovery of another pivot method that makes use of non-simplex paths: the least-index criss-cross method.
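The two selections of Bland's rule studied in this chapter (entering variable by smallest index among positive z-row coefficients, leaving variable by the minimum ratio test with smallest-index tie-breaking) can be sketched as follows. This is an illustrative sketch rather than the thesis's implementation, and it assumes the dictionary convention x_i = b_i − Σ_j α_ij x_j, under which candidate leaving rows have α_ij > 0; the signs flip for the opposite layout.

```python
from fractions import Fraction

def bland_pivot(b_bar, z_bar, alpha, basis, nonbasis):
    """One pivot selection under Bland's smallest-index rule.
    z_bar[j]: z-row coefficient of nonbasic variable nonbasis[j];
    b_bar[i]: value of basic variable basis[i]; alpha[i][j]: dictionary
    coefficient (integer data assumed, for exact ratios).  Returns
    ('optimal',), ('unbounded', entering variable) or
    ('pivot', leaving variable, entering variable)."""
    # Entering: positive z-row coefficient with the smallest variable index.
    cand = [j for j in range(len(nonbasis)) if z_bar[j] > 0]
    if not cand:
        return ('optimal',)
    s = min(cand, key=lambda j: nonbasis[j])
    # Leaving: minimum ratio b_bar[i] / alpha[i][s] over rows with
    # alpha[i][s] > 0, ties broken by the smallest variable index.
    K = [i for i in range(len(basis)) if alpha[i][s] > 0]
    if not K:
        return ('unbounded', nonbasis[s])
    r = min(K, key=lambda i: (Fraction(b_bar[i], alpha[i][s]), basis[i]))
    return ('pivot', basis[r], nonbasis[s])

print(bland_pivot([4, 3], [-1, 2, 3], [[0, 2, 1], [1, 0, 5]],
                  [4, 5], [1, 2, 3]))
# -> ('pivot', 4, 2): x_4 leaves, x_2 enters
```

No coefficient magnitudes are compared when choosing the entering variable, which is exactly what makes the rule cheap and, as proved above, finite.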

5. Criss-Cross Methods

5.1 Introduction

In 1969, Zionts [Zo69] published a new method for solving linear programs that requires no initialization (phase one/auxiliary problem). The method, named the criss-cross method, alternates between the primal simplex method and the dual simplex method while the basis remains neither primal feasible nor dual feasible. Once primal or dual feasibility is attained, Zionts' criss-cross method reduces to the primal simplex method or the dual simplex method respectively. Thus either lexicographic ratio tests with Dantzig's rule, or Bland's rule, are required in order to avoid the cycles in the simplex method caused by degeneracy. However, even though Zionts' criss-cross method tends to reach optimality, this is not sufficient to prove finiteness of the method when dealing with bases that are both primal and dual infeasible, as we will show in section 5.7. The first finite criss-cross algorithm, the least-index criss-cross method, was discovered independently by Chang [Chn79], Terlaky [Ter85] and Wang [Wan87]. Jensen's general recursion [Jen85] implicitly includes the criss-cross method. The least-index criss-cross method is a descendant of Zionts' criss-cross built on the observation that it is not necessary to maintain primal or dual feasibility in order to get a sequence of pivots that leads to the optimal solution. In this chapter we present the least-index criss-cross method for solving linear programs.

5.2 Least-Index Criss-Cross

The least-index criss-cross method is a finite algorithm for computing the optimal solution of a linear program. The method does not require any type of initialization (auxiliary phase-one problems), nor does it maintain primal/dual feasibility using ratio tests. It is a concise algorithm whose simplicity is due to its primal-dual symmetry.

Problem: Given an m×n matrix A, an m-dimensional vector b, and an n-dimensional vector c, find the optimal solution of the linear program: max cx, Ax ≤ b, x ≥ 0.

Algorithm (Least-Index Criss-Cross): Add slack variables and let the initial dictionary be written as:

  x_B = A_B^{-1}b − A_B^{-1}A_N x_N
  z = c_B A_B^{-1}b + (c_N − c_B A_B^{-1}A_N) x_N

Step 1: If the current basic solution is both primal feasible (A_B^{-1}b ≥ 0) and dual feasible (c_N − c_B A_B^{-1}A_N ≤ 0), then set x_B = A_B^{-1}b and x_N = 0. The solution is optimal. Done.

Step 2p: Let i ∈ B be the smallest index of a variable with a negative entry in the b-column.

Step 2d: Let j ∈ N be the smallest index of a variable in the z-row with a positive coefficient.

Step 3: If i < j, then go to Step 4p. Otherwise j < i; go to Step 4d.

Step 4p: Let K be the subset of N such that for every k ∈ K the coefficient α_ik is positive. If K is empty, then stop: the linear program is primal inconsistent (the

certificate of infeasibility is given by the non-positive linear combination of the pivot row). Otherwise, let j be the smallest index in K. Go to Step 5.

Step 4d: Let K be the subset of B such that for every k ∈ K the coefficient α_kj is negative. If K is empty, then stop: the linear program is dual inconsistent. Otherwise let i be the smallest index in K. Go to Step 5.

Step 5: Set B := (B \ {i}) ∪ {j}, N := (N \ {j}) ∪ {i}. Compute the new dictionary. Go to Step 1.

5.3 Proof of Finiteness

We present a simplified version of the proof given by Fukuda and Terlaky [Fuk97].

Theorem (finiteness): The least-index criss-cross method is finite.

Proof. If the least-index criss-cross method does not terminate then it must cycle, since there are only a finite number of possible bases. Let us assume that the least-index criss-cross method cycles. Let max cx s.t. Ax ≤ b, x ≥ 0 be a system that causes the algorithm to cycle. We will assume that every variable in the system enters and leaves the basis during the cycle; otherwise we can use a smaller example of this system that cycles, obtained by removing the variables that are not involved. Let x_k be the variable with the largest index. The contradiction lies in showing that x_k cannot both enter and leave the basis under the criss-cross method:


More information

Linear Programming Redux

Linear Programming Redux Linear Programming Redux Jim Bremer May 12, 2008 The purpose of these notes is to review the basics of linear programming and the simplex method in a clear, concise, and comprehensive way. The book contains

More information

OPERATIONS RESEARCH. Linear Programming Problem

OPERATIONS RESEARCH. Linear Programming Problem OPERATIONS RESEARCH Chapter 1 Linear Programming Problem Prof. Bibhas C. Giri Department of Mathematics Jadavpur University Kolkata, India Email: bcgiri.jumath@gmail.com MODULE - 2: Simplex Method for

More information

Review Solutions, Exam 2, Operations Research

Review Solutions, Exam 2, Operations Research Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To

More information

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations

1. Algebraic and geometric treatments Consider an LP problem in the standard form. x 0. Solutions to the system of linear equations The Simplex Method Most textbooks in mathematical optimization, especially linear programming, deal with the simplex method. In this note we study the simplex method. It requires basically elementary linear

More information

IE 400: Principles of Engineering Management. Simplex Method Continued

IE 400: Principles of Engineering Management. Simplex Method Continued IE 400: Principles of Engineering Management Simplex Method Continued 1 Agenda Simplex for min problems Alternative optimal solutions Unboundedness Degeneracy Big M method Two phase method 2 Simplex for

More information

CHAPTER 2. The Simplex Method

CHAPTER 2. The Simplex Method CHAPTER 2 The Simplex Method In this chapter we present the simplex method as it applies to linear programming problems in standard form. 1. An Example We first illustrate how the simplex method works

More information

ECE 307 Techniques for Engineering Decisions

ECE 307 Techniques for Engineering Decisions ECE 7 Techniques for Engineering Decisions Introduction to the Simple Algorithm George Gross Department of Electrical and Computer Engineering University of Illinois at Urbana-Champaign ECE 7 5 9 George

More information

In Chapters 3 and 4 we introduced linear programming

In Chapters 3 and 4 we introduced linear programming SUPPLEMENT The Simplex Method CD3 In Chapters 3 and 4 we introduced linear programming and showed how models with two variables can be solved graphically. We relied on computer programs (WINQSB, Excel,

More information

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory

Motivating examples Introduction to algorithms Simplex algorithm. On a particular example General algorithm. Duality An application to game theory Instructor: Shengyu Zhang 1 LP Motivating examples Introduction to algorithms Simplex algorithm On a particular example General algorithm Duality An application to game theory 2 Example 1: profit maximization

More information

::::: OFTECHY. .0D 0 ::: ::_ I;. :.!:: t;0i f::t l. :- - :.. :?:: : ;. :--- :-.-i. .. r : : a o er -,:I :,--:-':: : :.:

::::: OFTECHY. .0D 0 ::: ::_ I;. :.!:: t;0i f::t l. :- - :.. :?:: : ;. :--- :-.-i. .. r : : a o er -,:I :,--:-':: : :.: ,-..., -. :', ; -:._.'...,..-.-'3.-..,....; i b... {'.'',,,.!.C.,..'":',-...,'. ''.>.. r : : a o er.;,,~~~~~~~~~~~~~~~~~~~~~~~~~.'. -...~..........".: ~ WS~ "'.; :0:_: :"_::.:.0D 0 ::: ::_ I;. :.!:: t;0i

More information

AM 121: Intro to Optimization Models and Methods Fall 2018

AM 121: Intro to Optimization Models and Methods Fall 2018 AM 121: Intro to Optimization Models and Methods Fall 2018 Lecture 5: The Simplex Method Yiling Chen Harvard SEAS Lesson Plan This lecture: Moving towards an algorithm for solving LPs Tableau. Adjacent

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Linear Programming Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The standard Linear Programming (LP) Problem Graphical method of solving LP problem

More information

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method)

Developing an Algorithm for LP Preamble to Section 3 (Simplex Method) Moving from BFS to BFS Developing an Algorithm for LP Preamble to Section (Simplex Method) We consider LP given in standard form and let x 0 be a BFS. Let B ; B ; :::; B m be the columns of A corresponding

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.433: Combinatorial Optimization Michel X. Goemans February 28th, 2013 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

The dual simplex method with bounds

The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

Linear programming on Cell/BE

Linear programming on Cell/BE Norwegian University of Science and Technology Faculty of Information Technology, Mathematics and Electrical Engineering Department of Computer and Information Science Master Thesis Linear programming

More information

Relation of Pure Minimum Cost Flow Model to Linear Programming

Relation of Pure Minimum Cost Flow Model to Linear Programming Appendix A Page 1 Relation of Pure Minimum Cost Flow Model to Linear Programming The Network Model The network pure minimum cost flow model has m nodes. The external flows given by the vector b with m

More information

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod

Contents. 4.5 The(Primal)SimplexMethod NumericalExamplesoftheSimplexMethod Contents 4 The Simplex Method for Solving LPs 149 4.1 Transformations to be Carried Out On an LP Model Before Applying the Simplex Method On It... 151 4.2 Definitions of Various Types of Basic Vectors

More information

Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems

Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16: Linear programming. Optimization Problems Optimization WS 13/14:, by Y. Goldstein/K. Reinert, 9. Dezember 2013, 16:38 2001 Linear programming Optimization Problems General optimization problem max{z(x) f j (x) 0,x D} or min{z(x) f j (x) 0,x D}

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Chap6 Duality Theory and Sensitivity Analysis

Chap6 Duality Theory and Sensitivity Analysis Chap6 Duality Theory and Sensitivity Analysis The rationale of duality theory Max 4x 1 + x 2 + 5x 3 + 3x 4 S.T. x 1 x 2 x 3 + 3x 4 1 5x 1 + x 2 + 3x 3 + 8x 4 55 x 1 + 2x 2 + 3x 3 5x 4 3 x 1 ~x 4 0 If we

More information

Termination, Cycling, and Degeneracy

Termination, Cycling, and Degeneracy Chapter 4 Termination, Cycling, and Degeneracy We now deal first with the question, whether the simplex method terminates. The quick answer is no, if it is implemented in a careless way. Notice that we

More information

An upper bound for the number of different solutions generated by the primal simplex method with any selection rule of entering variables

An upper bound for the number of different solutions generated by the primal simplex method with any selection rule of entering variables An upper bound for the number of different solutions generated by the primal simplex method with any selection rule of entering variables Tomonari Kitahara and Shinji Mizuno February 2012 Abstract Kitahara

More information

Dual Basic Solutions. Observation 5.7. Consider LP in standard form with A 2 R m n,rank(a) =m, and dual LP:

Dual Basic Solutions. Observation 5.7. Consider LP in standard form with A 2 R m n,rank(a) =m, and dual LP: Dual Basic Solutions Consider LP in standard form with A 2 R m n,rank(a) =m, and dual LP: Observation 5.7. AbasisB yields min c T x max p T b s.t. A x = b s.t. p T A apple c T x 0 aprimalbasicsolutiongivenbyx

More information

Lesson 27 Linear Programming; The Simplex Method

Lesson 27 Linear Programming; The Simplex Method Lesson Linear Programming; The Simplex Method Math 0 April 9, 006 Setup A standard linear programming problem is to maximize the quantity c x + c x +... c n x n = c T x subject to constraints a x + a x

More information

Introduction to optimization

Introduction to optimization Introduction to optimization Geir Dahl CMA, Dept. of Mathematics and Dept. of Informatics University of Oslo 1 / 24 The plan 1. The basic concepts 2. Some useful tools (linear programming = linear optimization)

More information

ORF 522. Linear Programming and Convex Analysis

ORF 522. Linear Programming and Convex Analysis ORF 5 Linear Programming and Convex Analysis Initial solution and particular cases Marco Cuturi Princeton ORF-5 Reminder: Tableaux At each iteration, a tableau for an LP in standard form keeps track of....................

More information

3.3 Sensitivity Analysis

3.3 Sensitivity Analysis 26 LP Basics II 3.3 Sensitivity Analysis Analyse the stability of an optimal (primal or dual) solution against the (plus and minus) changes of an coefficient in the LP. There are two types of analyses

More information

3. THE SIMPLEX ALGORITHM

3. THE SIMPLEX ALGORITHM Optimization. THE SIMPLEX ALGORITHM DPK Easter Term. Introduction We know that, if a linear programming problem has a finite optimal solution, it has an optimal solution at a basic feasible solution (b.f.s.).

More information

Math 5593 Linear Programming Week 1

Math 5593 Linear Programming Week 1 University of Colorado Denver, Fall 2013, Prof. Engau 1 Problem-Solving in Operations Research 2 Brief History of Linear Programming 3 Review of Basic Linear Algebra Linear Programming - The Story About

More information

Lecture 9 Tuesday, 4/20/10. Linear Programming

Lecture 9 Tuesday, 4/20/10. Linear Programming UMass Lowell Computer Science 91.503 Analysis of Algorithms Prof. Karen Daniels Spring, 2010 Lecture 9 Tuesday, 4/20/10 Linear Programming 1 Overview Motivation & Basics Standard & Slack Forms Formulating

More information

CO 602/CM 740: Fundamentals of Optimization Problem Set 4

CO 602/CM 740: Fundamentals of Optimization Problem Set 4 CO 602/CM 740: Fundamentals of Optimization Problem Set 4 H. Wolkowicz Fall 2014. Handed out: Wednesday 2014-Oct-15. Due: Wednesday 2014-Oct-22 in class before lecture starts. Contents 1 Unique Optimum

More information

Linear Programming. 1 An Introduction to Linear Programming

Linear Programming. 1 An Introduction to Linear Programming 18.415/6.854 Advanced Algorithms October 1994 Lecturer: Michel X. Goemans Linear Programming 1 An Introduction to Linear Programming Linear programming is a very important class of problems, both algorithmically

More information

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information

Polynomiality of Linear Programming

Polynomiality of Linear Programming Chapter 10 Polynomiality of Linear Programming In the previous section, we presented the Simplex Method. This method turns out to be very efficient for solving linear programmes in practice. While it is

More information

Linear Programming, Lecture 4

Linear Programming, Lecture 4 Linear Programming, Lecture 4 Corbett Redden October 3, 2016 Simplex Form Conventions Examples Simplex Method To run the simplex method, we start from a Linear Program (LP) in the following standard simplex

More information

Introduction to Linear Programming

Introduction to Linear Programming Nanjing University October 27, 2011 What is LP The Linear Programming Problem Definition Decision variables Objective Function x j, j = 1, 2,..., n ζ = n c i x i i=1 We will primarily discuss maxizming

More information

Introduction to Mathematical Programming

Introduction to Mathematical Programming Introduction to Mathematical Programming Ming Zhong Lecture 22 October 22, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 16 Table of Contents 1 The Simplex Method, Part II Ming Zhong (JHU) AMS Fall 2018 2 /

More information

CPS 616 ITERATIVE IMPROVEMENTS 10-1

CPS 616 ITERATIVE IMPROVEMENTS 10-1 CPS 66 ITERATIVE IMPROVEMENTS 0 - APPROACH Algorithm design technique for solving optimization problems Start with a feasible solution Repeat the following step until no improvement can be found: change

More information

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang

Applications. Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Introduction to Large-Scale Linear Programming and Applications Stephen J. Stoyan, Maged M. Dessouky*, and Xiaoqing Wang Daniel J. Epstein Department of Industrial and Systems Engineering, University of

More information

III. Linear Programming

III. Linear Programming III. Linear Programming Thomas Sauerwald Easter 2017 Outline Introduction Standard and Slack Forms Formulating Problems as Linear Programs Simplex Algorithm Finding an Initial Solution III. Linear Programming

More information

Algorithmic Game Theory and Applications. Lecture 7: The LP Duality Theorem

Algorithmic Game Theory and Applications. Lecture 7: The LP Duality Theorem Algorithmic Game Theory and Applications Lecture 7: The LP Duality Theorem Kousha Etessami recall LP s in Primal Form 1 Maximize c 1 x 1 + c 2 x 2 +... + c n x n a 1,1 x 1 + a 1,2 x 2 +... + a 1,n x n

More information

MATH 445/545 Homework 2: Due March 3rd, 2016

MATH 445/545 Homework 2: Due March 3rd, 2016 MATH 445/545 Homework 2: Due March 3rd, 216 Answer the following questions. Please include the question with the solution (write or type them out doing this will help you digest the problem). I do not

More information

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P)

Lecture 10: Linear programming. duality. and. The dual of the LP in standard form. maximize w = b T y (D) subject to A T y c, minimize z = c T x (P) Lecture 10: Linear programming duality Michael Patriksson 19 February 2004 0-0 The dual of the LP in standard form minimize z = c T x (P) subject to Ax = b, x 0 n, and maximize w = b T y (D) subject to

More information

Lecture 10: Linear programming duality and sensitivity 0-0

Lecture 10: Linear programming duality and sensitivity 0-0 Lecture 10: Linear programming duality and sensitivity 0-0 The canonical primal dual pair 1 A R m n, b R m, and c R n maximize z = c T x (1) subject to Ax b, x 0 n and minimize w = b T y (2) subject to

More information

The Simplex and Policy Iteration Methods are Strongly Polynomial for the Markov Decision Problem with Fixed Discount

The Simplex and Policy Iteration Methods are Strongly Polynomial for the Markov Decision Problem with Fixed Discount The Simplex and Policy Iteration Methods are Strongly Polynomial for the Markov Decision Problem with Fixed Discount Yinyu Ye Department of Management Science and Engineering and Institute of Computational

More information

3. Linear Programming and Polyhedral Combinatorics

3. Linear Programming and Polyhedral Combinatorics Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans April 5, 2017 3. Linear Programming and Polyhedral Combinatorics Summary of what was seen in the introductory

More information

Farkas Lemma, Dual Simplex and Sensitivity Analysis

Farkas Lemma, Dual Simplex and Sensitivity Analysis Summer 2011 Optimization I Lecture 10 Farkas Lemma, Dual Simplex and Sensitivity Analysis 1 Farkas Lemma Theorem 1. Let A R m n, b R m. Then exactly one of the following two alternatives is true: (i) x

More information

Example Problem. Linear Program (standard form) CSCI5654 (Linear Programming, Fall 2013) Lecture-7. Duality

Example Problem. Linear Program (standard form) CSCI5654 (Linear Programming, Fall 2013) Lecture-7. Duality CSCI5654 (Linear Programming, Fall 013) Lecture-7 Duality Lecture 7 Slide# 1 Lecture 7 Slide# Linear Program (standard form) Example Problem maximize c 1 x 1 + + c n x n s.t. a j1 x 1 + + a jn x n b j

More information

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane

More information

Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem

Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem Volume 118 No. 6 2018, 287-294 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu Criss-cross Method for Solving the Fuzzy Linear Complementarity Problem

More information