ON THE PARAMETRIC LCP: A HISTORICAL PERSPECTIVE


Richard W. Cottle, Department of Management Science and Engineering, Stanford University. ICCP 2014, Humboldt University, Berlin, August 2014.

The year 2014
A few important centennials. The birth of GEORGE B. DANTZIG.

This talk is meant to review
1. the origins of the PLCP
2. some PLCP algorithms
3. the equivalence of some PLCP algorithms

Developments in the early 50s: The SDPA
- E.M.L. BEALE, An alternative method for linear programming, Proceedings of the Cambridge Philosophical Society 50 (1954), 513-523.
- G.B. DANTZIG, Composite Simplex-Dual Simplex Algorithm I. [Notes on Linear Programming: Part XI.] The RAND Corp., 26 April 1954.
- WM. ORCHARD-HAYS, A composite simplex algorithm II. RM-1275. The RAND Corp., 1954.

Developments in the early 60s
- G.B. DANTZIG, Linear Programming and Extensions, Princeton University Press, 1963, pp. 245-247.
- R.W. COTTLE, Nonlinear Programs with Positively Bounded Jacobians, Ph.D. thesis, University of California, Berkeley, 1964.
- C.E. LEMKE AND J.T. HOWSON, JR., Equilibrium points of bimatrix games, SIAM Journal on Applied Mathematics 12 (1964), 413-423.
- C.E. LEMKE, Bimatrix equilibrium points and mathematical programming, Management Science 11 (1965), 681-689.

Developments in the late 60s
- G.B. DANTZIG AND R.W. COTTLE, Positive (semi-)definite programming, in (J. Abadie, ed.) Nonlinear Programming, North-Holland, 1967, pp. 55-73.
- R.W. COTTLE AND G.B. DANTZIG, Complementary pivot theory of mathematical programming, Linear Algebra and its Applications 1 (1968), 103-125.
- R.W. COTTLE, The principal pivoting method of quadratic programming, in (G.B. Dantzig and A.F. Veinott, Jr., eds.) Mathematics of the Decision Sciences, Part 1 [Lectures in Applied Mathematics, Volume 11], Providence, Rhode Island, 1968, pp. 144-162.

Developments in the early 70s
- S.R. MCCAMMON, Complementary Pivoting, Ph.D. thesis, Rensselaer Polytechnic Institute, 1970.
- K.G. MURTY, On the parametric complementarity problem, Engineering Summer Conference Notes, University of Michigan, August 1971.
- A. RAVINDRAN, Computational aspects of Lemke's complementary algorithm applied to linear programs, Opsearch 7 (1970), 241-262.
- C. VAN DE PANNE, A complementary variant of Lemke's method for the linear complementarity problem, Mathematical Programming 7 (1974), 283-310.

Developments in the 80s
- I.J. LUSTIG, Comparisons of Composite Simplex Algorithms, Ph.D. thesis, Stanford University, 1987.

[Overview diagram relating the LCP algorithms discussed in the talk: LS1, Composite, PPM, PPMP, SDPA, LS1P.]

Primal LP as in SDPA
Dantzig (1963) considered the primal LP
  (P)  minimize 0^T v + c^T x
       subject to Iv + Ax = b,  x ≥ 0, v ≥ 0,  A ∈ R^{m×n}.
The identity matrix I is the initial basis in [ I A ]. Neither b nor c is assumed to be nonnegative.

The dual of (P)
Let Ã = [ I A ] and c̃^T = [ 0^T c^T ]. Then the dual of (P) is
  (D̃)  maximize ỹ^T b
        subject to ỹ^T Ã ≤ c̃^T,  ỹ free.
A feasible ỹ must be nonpositive. The dual can be rewritten in terms of y = −ỹ as
  (D)  maximize −y^T b
       subject to −y^T A ≤ c^T,  y ≥ 0.

Optimality conditions for (P)
The optimality conditions for (P) are:
  u = c + A^T y
  v = b − Ax
  u, v, x, y ≥ 0
  x^T u + y^T v = 0
This can be viewed as a composite problem. It is also a special LCP.
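
These conditions assemble directly into an LCP from the LP data. A minimal sketch of the construction (Python with numpy assumed; the function name lp_to_lcp is illustrative, not from the talk):

```python
import numpy as np

def lp_to_lcp(A, b, c):
    """Assemble the LCP (q, M) encoding the optimality conditions
    u = c + A^T y >= 0,  v = b - A x >= 0,  x^T u + y^T v = 0
    for the LP  min c^T x  s.t.  A x <= b,  x >= 0  (A is m x n)."""
    A = np.asarray(A, float)
    b = np.asarray(b, float).ravel()
    c = np.asarray(c, float).ravel()
    m, n = A.shape
    # w = (u, v), z = (x, y):  w = q + M z with the bisymmetric block structure
    M = np.block([[np.zeros((n, n)), A.T],
                  [-A, np.zeros((m, m))]])
    q = np.concatenate([c, b])
    return q, M
```

A solution of the resulting (q, M), when one exists, carries a candidate optimal x in the first n components of z and a candidate optimal dual y in the remaining components.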

Dantzig's Primal LP as in SDPA
Dantzig (1963) formulated the self-dual parametric LP
  (P)  minimize 0^T v + (c + d̄λ)^T x
       subject to Iv + Ax = b + dλ,  x ≥ 0, v ≥ 0,  A ∈ R^{m×n}.
The identity matrix I is the initial basis in [ I A ]. Neither b nor c is assumed to be nonnegative.
The parameter λ ranges over [0, λ̄], with λ̄ = −min{ min_i b_i, min_j c_j }. The covering vectors satisfy d̄ ≥ 0 with c + d̄λ̄ ≥ 0, and d ≥ 0 with b + dλ̄ ≥ 0.

Tableau for SDPA
In tableau form, Dantzig's (P) would be given as
   ζ     v      x     1
   0     I      A     b
   1    0^T    c^T    0
where ζ denotes the value of the objective function.

Tableau for the parametric algorithm
In tableau form, the parametric version of (P) would be
    v      x     1    λ
    I      A     b    d
   0^T    c^T    0    0
   0^T    d̄^T    0    0
The ζ-column has been omitted.
Assumption: for all λ ∈ [0, λ̄], the vector (c + d̄λ, b + dλ) has at most one zero component (and exactly one when λ = λ̄).

The algorithmic concept
- If b ≥ 0 and c ≥ 0, then x = 0 and y = 0 are optimal solutions of (P) and (D). Stop. Else λ̄ > 0.
- If c_r + d̄_r λ = 0, attempt to perform a (primal) simplex method pivot step in the column of x_r.
- If b_r + d_r λ = 0, attempt to perform a dual simplex method pivot step in the row of v_r.
- If neither pivot is possible, stop: (P) has no optimal solution.
- Reduce λ to 0 or to the next critical value. Repeat.

Primal simplex pivots
If decreasing λ makes c_r + d̄_r λ = 0, we have d̄_r > 0. The method acts as though c_r + d̄_r λ < 0 (as it would be for slightly smaller λ). This is followed by a minimum ratio test: the variable x_r is increased as much as possible so as to maintain primal feasibility,
  x_r ← min_i { b̄_i / ā_ir : ā_ir > 0 }.
If ā_ir ≤ 0 for all i, then (D) is infeasible. Stop.

Dual simplex pivots
If decreasing λ makes b̄_r + d_r λ = 0, we have d_r > 0. The method acts as though b̄_r + d_r λ < 0 (as it would be for slightly smaller λ). This is followed by a minimum ratio test: the variable y_r is increased as much as possible so as to maintain dual feasibility,
  y_r ← min_j { c̄_j / (−ā_rj) : ā_rj < 0 }.
If ā_rj ≥ 0 for all j, then (P) is infeasible. Stop.
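
The two ratio tests can be written down directly. A small sketch (numpy assumed; the function names are illustrative), returning the pivot index, or None when the corresponding infeasibility is detected:

```python
import numpy as np

def primal_ratio_test(a_col, b_rhs):
    """Entering column a_col = (a_ir)_i, current rhs b_rhs = (b_i)_i.
    Returns the row i minimizing b_i / a_ir over a_ir > 0,
    or None if the column is nonpositive (then (D) is infeasible)."""
    pos = np.where(a_col > 0)[0]
    if pos.size == 0:
        return None
    return pos[np.argmin(b_rhs[pos] / a_col[pos])]

def dual_ratio_test(a_row, c_costs):
    """Leaving row a_row = (a_rj)_j, current reduced costs c_costs = (c_j)_j.
    Returns the column j minimizing c_j / (-a_rj) over a_rj < 0,
    or None if the row is nonnegative (then (P) is infeasible)."""
    neg = np.where(a_row < 0)[0]
    if neg.size == 0:
        return None
    return neg[np.argmin(c_costs[neg] / (-a_row[neg]))]
```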

The LCP (q, M) in dictionary form
The system equation w = q + Mz is represented in dictionary form as
          1      z_1     z_2    ...    z_n
   w_1   q_1    m_11    m_12   ...    m_1n
   w_2   q_2    m_21    m_22   ...    m_2n
    :     :       :       :             :
   w_n   q_n    m_n1    m_n2   ...    m_nn

The (Symmetric) Principal Pivoting Method (PPM): Generalities
General features:
- Limited to sufficient matrices
- Not parametric
- Works with a sequence of major cycles, each of which is associated with a distinguished basic variable
- Each major cycle ends with the distinguished variable becoming nonbasic or with an unblocked driving (increasing nonbasic) variable
- Preserves complementarity of bases
- Monotonically reduces the number of negative variables

Principal pivoting
For M ∈ R^{n×n} with det(M_αα) ≠ 0, the corresponding principal pivotal transformation of M is the matrix ℘_α(M) = M′ with blocks
  M′_αα = M_αα^{-1}
  M′_αᾱ = −M_αα^{-1} M_αᾱ
  M′_ᾱα = M_ᾱα M_αα^{-1}
  M′_ᾱᾱ = M_ᾱᾱ − M_ᾱα M_αα^{-1} M_αᾱ
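
A sketch of the principal pivotal transform as a routine (numpy assumed; principal_pivot_transform is an illustrative name, and alpha is any index set with nonsingular M_αα):

```python
import numpy as np

def principal_pivot_transform(M, alpha):
    """Principal pivotal transform of M on the index set alpha,
    requiring det(M[alpha, alpha]) != 0."""
    M = np.asarray(M, float)
    n = M.shape[0]
    a = np.asarray(alpha, int)
    ac = np.setdiff1d(np.arange(n), a)          # complementary index set
    Maa, Mab = M[np.ix_(a, a)], M[np.ix_(a, ac)]
    Mba, Mbb = M[np.ix_(ac, a)], M[np.ix_(ac, ac)]
    Maa_inv = np.linalg.inv(Maa)
    Mp = np.empty_like(M)
    Mp[np.ix_(a, a)] = Maa_inv
    Mp[np.ix_(a, ac)] = -Maa_inv @ Mab
    Mp[np.ix_(ac, a)] = Mba @ Maa_inv
    Mp[np.ix_(ac, ac)] = Mbb - Mba @ Maa_inv @ Mab
    return Mp
```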

The idea behind the PPM
Consider w = q + Mz. Initially m_ij = ∂w_i/∂z_j and m_ii ≥ 0 for all i.
- If q_i < 0 and m_ii > 0, increasing z_i makes w_i increase.
- If q_k ≥ 0 and m_ki < 0, the increase of z_i could make w_k decrease to 0 (before w_i increases to 0).
- If M is sufficient, m_ii = 0, and m_ki < 0, then m_ik > 0, so the increase of z_k would make w_i increase to zero.

Steps of the (Symmetric) PPM (simple case)
Step 0. Input data; choose µ < min_i q_i < 0.
Step 1. Determine the distinguished variable, w^ν_r.
Step 2. Determine the blocking variable, w^ν_s.
Step 3. Pivot ⟨w_s, z_r⟩; return to Step 1.
Many details have been omitted. The important features are:
- If s = r and m^ν_rr > 0, pivot ⟨w^ν_r, z^ν_r⟩.
- If s ≠ r and m^ν_ss > 0, pivot ⟨w^ν_s, z^ν_s⟩.
- If s ≠ r and m^ν_ss = 0, pivot ⟨w^ν_s, z^ν_r⟩, ⟨w^ν_r, z^ν_s⟩.

Bisymmetry
A matrix M ∈ R^{n×n} is bisymmetric if there exists an n×n permutation matrix P such that
  PMP^T = [  G   A^T ]
          [ −A    H  ]
where G and H are symmetric. Symmetry (M − M^T = 0) and skew-symmetry (M + M^T = 0) are special cases of bisymmetry.

Invariance under principal pivoting
If all principal pivotal transforms ℘_α(M) of all members M of a class C of square matrices belong to that class, the class is said to be invariant under principal pivoting:
  M ∈ C  ⟹  ℘_α(M) ∈ C.
Examples of such classes are the positive semidefinite matrices and the bisymmetric matrices.
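
A quick numerical spot-check of the invariance claim for the positive semidefinite class (numpy assumed; here positive semidefiniteness of a possibly nonsymmetric M means x^T M x ≥ 0, a property of its symmetric part):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
B = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
K = K - K.T                                           # skew-symmetric part
M = B @ B.T + K                                       # x^T M x = x^T B B^T x >= 0

a = np.arange(k)                                      # alpha = {0, 1}
ac = np.arange(k, n)
Maa_inv = np.linalg.inv(M[np.ix_(a, a)])
Mp = np.empty_like(M)
Mp[np.ix_(a, a)] = Maa_inv
Mp[np.ix_(a, ac)] = -Maa_inv @ M[np.ix_(a, ac)]
Mp[np.ix_(ac, a)] = M[np.ix_(ac, a)] @ Maa_inv
Mp[np.ix_(ac, ac)] = M[np.ix_(ac, ac)] - M[np.ix_(ac, a)] @ Maa_inv @ M[np.ix_(a, ac)]

# positive semidefiniteness lives in the symmetric part; it should
# survive the principal pivotal transform
eigs = np.linalg.eigvalsh((Mp + Mp.T) / 2)
print(eigs.min() >= -1e-9)                            # expected: True
```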

The LP case
The matrix M of an LCP (q, M) associated with a linear program in nonnegative variables is
- skew-symmetric
- positive semidefinite

On K(M) and its convexity
Given M ∈ R^{n×n}, the union of the corresponding complementary cones is the closed cone
  K(M) = ∪_α pos B_α,  where B_α := [ I_·ᾱ  −M_·α ],
       = { q ∈ R^n : (q, M) has a solution }.
The cone K(M) is convex if and only if, for all q ∈ R^n,
  FEA(q, M) ≠ ∅  ⟹  SOL(q, M) ≠ ∅.
When this holds, we say M ∈ Q_0.

On the monotonicity of the parameter
Suppose the LCP (q, M) represents the optimality conditions of an LP for which min_i q_i < 0. In these parametric methods (and LS1), the parameter λ will first be increased to the value λ̄ > 0. Several authors have established that λ does not increase during the solution process.
Why should this be the case? It has to do with the positive semidefiniteness of M and the invariance of positive semidefiniteness under principal pivoting.

Why the parameter does not increase in the LP case
Let (q, M) represent an LP with min_i q_i < 0, and let λ* = min{ λ ≥ 0 : FEA(q + dλ, M) ≠ ∅ }.
- M is skew-symmetric.
- All skew-symmetric matrices are positive semidefinite.
- All positive semidefinite matrices belong to P_0 ∩ Q_0.
- K(M) is convex for all M ∈ Q_0.
- Hence q + dλ ∈ K(M) for every λ ∈ [λ*, λ̄].
- Positive semidefiniteness is invariant under principal pivoting.

Lemke's Algorithm, Scheme 1 (LS1)
Step 0. Initialization. Input (q, d, M) with q + dz_0 ≥ 0 for all z_0 ≥ z̄_0 = max{ −q_i/d_i : q_i < 0 }. Stop if q ≥ 0. Else some w_r blocks z_0: pivot ⟨w_r, z_0⟩, where r = arg max{ −q_i/d_i : q_i < 0 }.
Step 1. Finding the blocking variable (if any). The new driving variable is the complement of the last blocking variable. Increase the driving variable. Stop if it is unblocked. (Interpret this outcome, if possible.)
Step 2. Pivoting. If z_0 blocks the driving variable, pivot ⟨z_0, driving variable⟩: a solution has been found. Otherwise pivot ⟨blocking variable, driving variable⟩ and return to Step 1 with the complement of the blocking variable as the new driving variable.
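
A compact tableau-style sketch of Scheme 1 (numpy assumed; the function name, the default covering vector d = (1, ..., 1), and the simple tie handling are illustrative choices, and degeneracy is not treated carefully):

```python
import numpy as np

def lemke_scheme1(q, M, d=None, max_iter=500, tol=1e-9):
    """Sketch of Lemke's Scheme 1 for the LCP (q, M): seek w, z >= 0 with
    w = q + M z and w^T z = 0, using the covering vector d (> 0 where q < 0)."""
    q = np.asarray(q, float)
    M = np.asarray(M, float)
    n = q.size
    d = np.ones(n) if d is None else np.asarray(d, float)
    if (q >= 0).all():
        return np.zeros(n)                       # z = 0 already solves (q, M)
    # Dictionary for  I w - M z - d z0 = q ; columns: w_1..w_n | z_1..z_n | z0 | rhs
    T = np.hstack([np.eye(n), -M, -d[:, None], q[:, None]])
    basis = list(range(n))                       # row i currently holds w_i
    entering = 2 * n                             # z0 enters first
    r = int(np.argmin(q / d))                    # w_r blocks z0
    for _ in range(max_iter):
        T[r] /= T[r, entering]                   # pivot on (r, entering)
        for i in range(n):
            if i != r:
                T[i] -= T[i, entering] * T[r]
        leaving, basis[r] = basis[r], entering
        if leaving == 2 * n:                     # z0 became nonbasic: solution found
            z = np.zeros(n)
            for i, var in enumerate(basis):
                if n <= var < 2 * n:
                    z[var - n] = T[i, -1]
            return z
        entering = leaving + n if leaving < n else leaving - n   # complement drives next
        col = T[:, entering]
        if (col <= tol).all():
            return None                          # unblocked driving variable: ray termination
        ratios = np.where(col > tol, T[:, -1] / np.where(col > tol, col, 1.0), np.inf)
        r = int(np.argmin(ratios))
        z0_row = basis.index(2 * n)
        if col[z0_row] > tol and ratios[z0_row] <= ratios[r] + tol:
            r = z0_row                           # prefer dropping z0 when it ties
    return None
```

Applied to an LCP assembled from an LP as in the earlier lp_to_lcp sketch, a returned z that is not None splits into candidate optimal primal variables x and dual variables y.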

Lemke's Algorithm (Scheme 1) in parametric form (LS1P)
Step 0. Input (q, d, M) where d ≥ 0 covers q. If q ≥ 0, stop: z = 0 solves (q, M). Otherwise, let z̄_0 = max_i{ −q_i/d_i } and let r = arg max_i{ −q_i/d_i }. Set ν = 0 and define (q^ν, d^ν, M^ν) = (q, d, M), (w^ν, z^ν) = (w, z), z^ν_0 = z̄_0.
Step 1. Define w^ν_r as the distinguished variable and its complement z^ν_r as the driving variable. Apply LS1 on (q^ν, d^ν, M^ν). This will yield either a solution of (q^ν, d^ν, M^ν) or termination on a ray (i.e., with an unblocked driving variable).
Step 2. If q^ν ≥ 0, decrease z_0 to zero. Stop: a solution of (q, M) has been found. If d^ν ≥ 0, stop: the algorithm terminates unsuccessfully (on a z_0-ray). Otherwise min_i d^ν_i < 0 < max_i d^ν_i. Let z^{ν−1}_0 denote the current value of z_0; it is either max{ −q^ν_i/d^ν_i : d^ν_i > 0 } or min{ −q^ν_i/d^ν_i : d^ν_i < 0 }. If z^{ν−1}_0 = max{ −q^ν_i/d^ν_i : d^ν_i > 0 }, set z^ν_0 = min{ −q^ν_i/d^ν_i : d^ν_i < 0 }; if z^{ν−1}_0 = min{ −q^ν_i/d^ν_i : d^ν_i < 0 }, set z^ν_0 = max{ −q^ν_i/d^ν_i : d^ν_i > 0 }. (Put w^ν = q^ν + d^ν z^ν_0.) Return to Step 1 with the unique r = arg min_i w^ν_i.

Dantzig's claim
Dantzig (1963): SDPA. Lemke (1965): LS1.
Question: How do the algorithms SDPA and LS1 compare when applied to the same LP?
Dantzig's claim: They are equivalent.
1970: Ravindran's paper, McCammon's thesis.

Ravindran's paper
A. RAVINDRAN, Computational aspects of Lemke's complementary pivot algorithm applied to linear programs, Opsearch 7 (1970), 241-262.
- was published in the same year as McCammon's Ph.D. thesis (to be discussed shortly)
- compares Dantzig's SDPA with Lemke's Scheme 1 on LPs
- gives a parametric, structure-preserving variant of Lemke's Scheme 1 on LPs
- does not establish the general equivalence of Lemke's Scheme 1 and its parametric form
- establishes that the parameter z_0 decreases monotonically

Ravindran's statement
Those who are familiar with the complementary pivot theory, can easily see that in our algorithm we are essentially applying Lemke's method but to a reduced tableau, taking advantage of M-structure. Dantzig has claimed that the Lemke's complementary pivot method to solve linear programs is identical with respect to the pivot steps to his self-dual parametric algorithm. Though this fact is not very obvious to see, using our algorithm we can see that the pivot steps are identical to those in the self-dual parametric algorithm though Dantzig uses the idea of primal and dual simplex method while we make use of complementarity between the variables. Since our method is a condensed form of Lemke's method in some sense, we have shown that Dantzig's claim may be valid.

McCammon's thesis
S.R. MCCAMMON, On Complementary Pivoting, Ph.D. thesis, Rensselaer Polytechnic Institute, Troy, N.Y., 1970.
- was supervised by Prof. Carleton E. Lemke
- generalizes the covering vector for Lemke's method
- discusses a few standard matrix classes
- develops the parametric version of Lemke's algorithm (replacing the pseudo-variable z_0 by a parameter θ)
- shows that the parametric pivoting algorithm and Lemke's algorithm are equivalent
- discusses the principal pivoting method

McCammon's thesis (continued)
- notes the strict decrease of z_0 in the case where M is a P-matrix
- alludes to the fact that the choice of covering vector can influence the number of iterations and the solution found
- discusses Dantzig's SDPA for LP and the restoration of skew symmetry after a principal block pivot of order 2
- gives a direct numerical comparison of the SDPA and the parametric pivoting method (LS1P)
- reaches a faulty conclusion from that comparison, which will be illustrated here

McCammon's statement
At the time of initiating the study which culminated in this dissertation, it was not clear what the relationship between the principal pivoting method and Lemke's method was, and it was felt that an investigation would clarify the similarities and distinctions. This dissertation, indeed, sheds light on this relationship. A related study was generated by two remarks of Dantzig made to Professor Lemke: in a letter to Lemke in 1965, Professor Dantzig expressed the feeling that the use of one pseudo-vector, as in Lemke's method, was not the most efficient way to handle the problem. This suggested the use of n pseudo-variables, and is considered in this thesis. Professor Dantzig also remarked in a conversation with Lemke in 1966 that the Lemke method when applied to the linear programming problem was equivalent to the self-dual parametric algorithm. This suggested the attempt to view Lemke's method as a Parametric method, and gave rise to the Parametric Pivoting Method developed by the author.

van de Panne's paper
C. VAN DE PANNE, A complementary variant of Lemke's method for the linear complementarity problem, Mathematical Programming 7 (1974), 283-310.
- cites McCammon but not Ravindran
- gives an algorithm that is related to McCammon's (i.e., LS1P) but is multi-parametric
- supposedly has the advantage of allowing for infeasibility tests
- asserts that the complementary variant and Lemke's method (LS1) are equivalent

van de Panne's statement
The complementary variant is related to the parametric pivot method proposed by McCammon.... In the latter method only λ is varied as a parameter, while in the complementary variant also a number of nonbasic variables are varied as parameters. Both McCammon's method and the complementary variant are equivalent to Lemke's method in the sense that the same successive solutions are generated. McCammon's method and the complementary variant are identical as long as no fixed variables appear in the latter method. McCammon's statement... that this parametric pivoting method and Dantzig's self-dual parametric method for linear programming are not necessarily equivalent is based on errors in his example; Lemke's method and consequently McCammon's method and the complementary variant applied to linear programming problems are equivalent to Dantzig's method.

Lustig's thesis
I.J. LUSTIG, Comparisons of Composite Simplex Algorithms, Ph.D. thesis, Stanford University, 1987.
- was supervised by Prof. George B. Dantzig
- took note of Ravindran's paper and McCammon's thesis, but not of van de Panne's paper
- compared the SDPA with LS1 (but not LS1P) applied to LP
- noted the strict decrease of the artificial variable z_0 in LS1
- commented on McCammon's conclusion regarding Dantzig's claim

Lustig's comments
McCammon (1970), in his Ph.D. thesis at Rensselaer Polytechnic Institute, considered Dantzig's claim as well. He presented his own parametric pivoting method for general linear complementarity problems... McCammon proved that his algorithm is equivalent to Lemke's algorithm applied to the LCP (q, M). McCammon, however, believed that the solution path of his algorithm did not correspond to that of Dantzig's self-dual algorithm. In the last section of Appendix II of his thesis, he applied his algorithm and Dantzig's algorithm to a numerical example in order to show that the two solution paths need not correspond. A careful examination of his calculations indicates that he applied the two algorithms to two different linear programs. A short description of McCammon's algorithm (without proof) is given by Lemke (1970), pp. 359-361. I find McCammon's proof and the statement of his algorithm somewhat unclear. It is possible that his algorithm is equivalent to Cottle's (1972) parametric pivoting algorithm. Cottle's algorithm is well-defined when M has positive principal minors or M is positive semi-definite. If they are equivalent, then it is not clear whether McCammon realized the necessity of having assumptions on the properties of M in order to make his algorithm well-defined.

Lustig's result (paraphrased)
Theorem. Assume that Dantzig's self-dual parametric algorithm is executed on a linear program (in inequality form), that Lemke's algorithm is executed on the corresponding LCP, and that an optimal solution is found by the self-dual method in l iterations. Then the pivots of iteration t of the self-dual algorithm correspond in a precise way to the pivots of iterations 2t − 1, 2t, and 2t + 1 of Lemke's algorithm, for t = 1, 2, ..., l.

McCammon's LP formulated and solved by SDPA
[Dantzig's self-dual parametric algorithm applied to McCammon's example: a linear program in the two nonnegative variables x_1, x_2 with objective coefficients 4 and 3 and a 2×2 constraint system, followed by the parametrized version of the problem (parameter θ) that the self-dual parametric method considers; thesis pages 82-83.]

McCammon's LP: solved Dantzig-style in tableaux
[Four tableaux over the columns y_1, y_2, x_1, x_2, 1, θ, annotated with the pivots ⟨y_1, x_2⟩ (P), ⟨y_2, x_2⟩ (D), ⟨x_2, x_1⟩ (D) and the θ-ranges θ ≥ 3, [3/2, 2], [2, 3], [0, 3/2]; final objective value ζ̂ = −11.]

LS1P on McCammon's LP: formulation of the model
McCammon formulated the system equation of a parametric LCP for solving his LP as
  w = q + dθ + Mz.
[Slide gives his numerical 4-vectors q and d and the 4×4 matrix M.]

LS1P on McCammon's LP: as he solved it in tableaux
[Three tableaux over the columns 1, θ, z_1, ..., z_4 with rows w_1, ..., w_4, annotated: θ ≥ 3, r = 2, pivot ⟨w_4, z_2⟩; θ ≥ 3, r = 2, pivot ⟨w_2, z_4⟩; θ ∈ [5/3, 3], r = 3, driving variable z_3.]
OOPS! How can this be?

LS1P on McCammon's LP: formulation of the model
The correct formulation of McCammon's system equation of a parametric LCP for solving his LP is
  w = q + dθ + Mz.
[Slide gives the corrected numerical q, d, and the 4×4 matrix M.]

LS1P on McCammon's LP: correctly solved in 7 tableaux
[Six intermediate tableaux with pivot annotations: θ ≥ 3, r = 2: ⟨w_3, z_2⟩ and ⟨w_2, z_3⟩; θ ∈ [2, 3], r = 3: ⟨w_1, w_2⟩ and ⟨z_2, z_1⟩; θ ∈ [3/2, 2], r = 4: ⟨w_2, z_4⟩ and ⟨w_4, z_2⟩.]

LS1P on McCammon's LP: reformulated and solved
[Final tableau.] θ ∈ [0, 3/2]; solution: (z_1, z_2) = (5, 3).

PPPM on LP is SDPA
Dantzig's LP in tableau form:
   ζ     v      x     1
   0     I      A     b
   1    0^T    c^T    0
In dictionary form, the LCP for Dantzig's LP is:
         1     x      y
   u     c     0     A^T
   v     b    −A      0

Primal pivot ⟨v_r, x_s⟩ in Dantzig's tableau
The relevant part of the tableau is
   v_r    x_j    x_s     1
    0     a_ij   a_is    b_i
    1     a_rj   a_rs    b_r
    0     c_j    c_s     0
After the primal pivot ⟨v_r, x_s⟩, the Dantzig-style subtableau becomes (a common factor 1/a_rs in front)
    v_r         x_j                     x_s          1
   −a_is    a_ij a_rs − a_rj a_is        0      b_i a_rs − b_r a_is
     1          a_rj                    a_rs         b_r
   −c_s     c_j a_rs − c_s a_rj          0         −b_r c_s

Pivot ⟨v_r, x_s⟩ in LCP tableau
In the LCP subtableau
           1      x_j      x_s
   v_i    b_i    −a_ij    −a_is
   v_r    b_r    −a_rj    −a_rs
the pivot ⟨v_r, x_s⟩ produces (a common factor 1/a_rs in front)
           1                        x_j                      v_r
   v_i    b_i a_rs − b_r a_is    −a_ij a_rs + a_rj a_is     a_is
   x_s    b_r                    −a_rj                       −1
This is the first half of the 2×2 block pivot ⟨v_r, x_s⟩, ⟨u_s, y_r⟩.

Pivot ⟨u_s, y_r⟩ in LCP tableau
In the LCP subtableau
           1      y_i     y_r
   u_j    c_j    a_ji    a_jr
   u_s    c_s    a_si    a_sr
(whose entries come from A^T, so that a_ji = a_ij), the pivot ⟨u_s, y_r⟩ produces (a common factor 1/a_sr in front)
           1                        y_i                      u_s
   u_j    c_j a_sr − c_s a_jr    a_ji a_sr − a_jr a_si      a_jr
   y_r    −c_s                   −a_si                        1
Since a_ji = a_ij for all i, j, the matrix resulting from ⟨u_s, y_r⟩ is the negative transpose of the one resulting from ⟨v_r, x_s⟩.
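
The negative-transpose relation is exactly what restores the skew-symmetric structure after the order-2 block pivot. A small numerical check under the LP block structure used above (numpy assumed; the indices s and r are illustrative, chosen so that a_rs ≠ 0):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4
A = rng.standard_normal((m, n))
# LCP matrix of the LP:  M = [[0, A^T], [-A, 0]]  (skew-symmetric)
M = np.block([[np.zeros((n, n)), A.T], [-A, np.zeros((m, m))]])

s, r = 1, 2                                   # pivot pair: x_s and y_r (a_rs != 0 assumed)
a = np.array([s, n + r])                      # alpha = {x_s, y_r}
ac = np.setdiff1d(np.arange(n + m), a)
Maa_inv = np.linalg.inv(M[np.ix_(a, a)])      # inverse of [[0, a_rs], [-a_rs, 0]]
Mp = np.empty_like(M)
Mp[np.ix_(a, a)] = Maa_inv
Mp[np.ix_(a, ac)] = -Maa_inv @ M[np.ix_(a, ac)]
Mp[np.ix_(ac, a)] = M[np.ix_(ac, a)] @ Maa_inv
Mp[np.ix_(ac, ac)] = M[np.ix_(ac, ac)] - M[np.ix_(ac, a)] @ Maa_inv @ M[np.ix_(a, ac)]

print(np.allclose(Mp, -Mp.T))                 # skew-symmetry preserved: expected True
```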

Comparison of SDPA and PPPM
[Side-by-side display of the SDPA subtableau after the primal pivot ⟨v_r, x_s⟩ and the two LCP (PPPM) subtableaux after ⟨v_r, x_s⟩ and ⟨u_s, y_r⟩, as derived on the two preceding slides.]

Further remarks
- The conclusion is the same if the first pivot is of the dual simplex type.
- As applied to linear programming, the SDPA and the PPPM are equivalent in terms of iterates, space, and computational effort.
Question: Did Dantzig realize this too?