ORC 64-33
DECEMBER 1964

UPDATING THE PRODUCT FORM OF THE INVERSE FOR THE REVISED SIMPLEX METHOD

by

George B. Dantzig
R. P. Harvey
R. D. McKnight

OPERATIONS RESEARCH CENTER
INSTITUTE OF ENGINEERING RESEARCH
UNIVERSITY OF CALIFORNIA - BERKELEY

UPDATING THE PRODUCT FORM OF THE INVERSE FOR THE REVISED SIMPLEX METHOD

by

G. B. Dantzig
University of California at Berkeley

R. P. Harvey and R. D. McKnight
Standard Oil Company of California

December 1964

ORC 64-33

The research of Roy Harvey and Robert McKnight was directly supported by Standard Oil Company of California. The research of George B. Dantzig was partly supported by O.N.R. Contract Nonr-222(85).

UPDATING THE PRODUCT FORM OF THE INVERSE FOR THE REVISED SIMPLEX METHOD

A method by which the number of transformation matrices in the product form of the inverse for the revised simplex method of solution of linear programs need never exceed m, the number of rows.

Computer codes for solving linear programs by the simplex method usually use one of three forms in representing the problem during the course of solution. These are (a) the standard form or original simplex method, (b) the revised simplex method with explicit inverse, and (c) the revised simplex method with inverse in product form (ref. 1). [For a comparison of the relative efficiencies of the three methods, see ref. 2.]

It is hoped that the method to be proposed will at least partially alleviate one of the principal disadvantages of (c), the product form algorithm, namely the need for frequent re-inversion of the basis in order to reduce the number of transformations, without sacrificing too much of its advantages, such as the sparseness of the inverse and the ease with which the inverse is kept current. The chief advantage of this proposal is that the number of non-zeros in the inverse representation is conserved and remains approximately constant after the initial build-up.

The product form of an inverse with which we are concerned here is the indicated product of a number of elementary m x m transformation matrices, each such matrix being an identity matrix with the exception of one column, the so-called transformation column. Computer codes

using the product form of the inverse need to store only these exceptional columns and their column indices (usually only the non-zero elements of these columns and their row positions are stored). The normal procedure is to adjoin an additional elementary matrix to this product every simplex iteration until such time as their number becomes too large for efficiency or the limit of storage capacity is reached; at which time the current basis matrix is re-inverted and k, k < m, new transformation matrices are formed.

For our purposes, the inverse of the basis B is represented by the product of k elementary transformation matrices T_t :

    B^{-1} = T_k T_{k-1} ... T_1 .

Let us assume that these k transformation matrices T_1, T_2, ..., T_k correspond to columns Q_1, Q_2, ..., Q_k which have replaced, in turn, each of the unit vectors U_1, U_2, ..., U_k in the initial unit basis. For the purpose of exposition only, suppose that the rows have been reordered so that h_t = t (of course, this is not necessary in practice), and call U_i = V_i. The representation of an outgoing column V_t in terms of a basis Q_1, Q_2, ..., Q_t, V_{t+1}, ..., V_m yields the usual system of product form relations:

(1)    V_t = τ_{1,t} Q_1 + τ_{2,t} Q_2 + ... + τ_{t,t} Q_t + τ_{t+1,t} V_{t+1} + ... + τ_{m,t} V_m ,

where τ_{t,t} ≠ 0, t = 1, 2, ..., k.

If, on the other hand, we denote by a_{i,t} the coefficients resulting when relation (1) is solved for Q_t, that is:

    a_{i,t} = -τ_{i,t}/τ_{t,t}  (i ≠ t),    a_{t,t} = 1/τ_{t,t} ,

we have an alternative way to express the system of product form relations:

(2)    Q_t = a_{1,t} Q_1 + a_{2,t} Q_2 + ... + a_{t,t} V_t + a_{t+1,t} V_{t+1} + ... + a_{m,t} V_m ,

where a_{t,t} ≠ 0, t = 1, 2, ..., k. Relation (2), which is the one we use in this paper, is the representation of an in-going column Q_t in terms of the basis Q_1, ..., Q_{t-1}, V_t, V_{t+1}, ..., V_m.

Now, suppose that a new vector Q_{k+1} is chosen to enter the basis. If it were to replace one of the unit vectors still in the basis, we would adjoin the transformation to product form (2) in the usual way. We will not concern ourselves further with this case. If, however, it were to replace Q_r, then a_{r,k+1} ≠ 0, and we have the following:

(3)    Q_1     = a_{1,1} V_1    + a_{2,1} V_2    + ... + a_{r,1} V_r    + ... + a_{k,1} V_k    + ... + a_{m,1} V_m
       Q_2     = a_{1,2} Q_1    + a_{2,2} V_2    + ... + a_{r,2} V_r    + ... + a_{k,2} V_k    + ... + a_{m,2} V_m
       ...
       Q_{r-1} = a_{1,r-1} Q_1  + a_{2,r-1} Q_2  + ... + a_{r,r-1} V_r  + ... + a_{k,r-1} V_k  + ... + a_{m,r-1} V_m
       Q_r     = a_{1,r} Q_1    + a_{2,r} Q_2    + ... + a_{r,r} V_r    + ... + a_{k,r} V_k    + ... + a_{m,r} V_m
       Q_{r+1} = a_{1,r+1} Q_1  + a_{2,r+1} Q_2  + ... + a_{r,r+1} Q_r  + ... + a_{k,r+1} V_k  + ... + a_{m,r+1} V_m
       ...
       Q_k     = a_{1,k} Q_1    + a_{2,k} Q_2    + ... + a_{r,k} Q_r    + ... + a_{k,k} V_k    + ... + a_{m,k} V_m
       Q_{k+1} = a_{1,k+1} Q_1  + a_{2,k+1} Q_2  + ... + a_{r,k+1} Q_r  + ... + a_{k,k+1} Q_k  + ... + a_{m,k+1} V_m

where a_{t,t} ≠ 0 for t = 1, 2, ..., k by (2).
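To make the representations above concrete, here is a minimal illustrative sketch (ours, not the report's; all names are assumptions): an elementary transformation matrix is stored as its single exceptional column, B^{-1} is applied as the product T_k ... T_1, and the a-coefficients of relation (2) are formed from the τ-coefficients of relation (1) by a_{t,t} = 1/τ_{t,t} and a_{i,t} = -τ_{i,t}/τ_{t,t}.

```python
# Illustrative sketch only, not the authors' code. An elementary m x m
# transformation matrix is an identity except for one column, so we store
# just (col, {row: value}) holding the non-zeros of that column.

def apply_eta(eta, v):
    """Return T*v for an elementary transformation matrix T."""
    col, column = eta
    result = list(v)          # identity contribution for rows i != col
    pivot_val = v[col]
    result[col] = 0.0         # column `col` of T replaces the unit column
    for row, value in column.items():
        result[row] += value * pivot_val
    return result

def apply_inverse(etas, v):
    """B^{-1} v = T_k T_{k-1} ... T_1 v ; `etas` is stored as T_1, ..., T_k."""
    for eta in etas:
        v = apply_eta(eta, v)
    return v

def eta_from_pivot(tau, t):
    """Solve relation (1) for Q_t: a_t = 1/tau_t, a_i = -tau_i/tau_t (i != t)."""
    pivot = tau[t]
    assert pivot != 0, "pivot coefficient tau_{t,t} must be non-zero"
    alpha = [-x / pivot for x in tau]
    alpha[t] = 1.0 / pivot
    return alpha
```

For example, with m = 3 and T_1 equal to the identity except for column 0 = (2, 1, 0)^T, apply_eta((0, {0: 2.0, 1: 1.0}), [1.0, 1.0, 1.0]) gives [2.0, 2.0, 1.0].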

Our objective is to eliminate Q_r using only a small amount of additional working storage. Relation r is used as a starting "eliminator," and relations r+1, r+2, ..., k+1 are updated, resulting in a new product form with just k transformations, as shown in (4):

(4)    Q_1     = a_{1,1} V_1    + a_{2,1} V_2    + ... + a_{r,1} V_r    + a_{r+1,1} V_{r+1}   + ... + a_{k,1} V_k   + ... + a_{m,1} V_m
       Q_2     = a_{1,2} Q_1    + a_{2,2} V_2    + ... + a_{r,2} V_r    + a_{r+1,2} V_{r+1}   + ... + a_{k,2} V_k   + ... + a_{m,2} V_m
       ...
       Q_{r-1} = a_{1,r-1} Q_1  + a_{2,r-1} Q_2  + ... + a_{r,r-1} V_r  + a_{r+1,r-1} V_{r+1} + ... + a_{k,r-1} V_k + ... + a_{m,r-1} V_m
       Q_{r+1} = a'_{1,r} Q_1   + a'_{2,r} Q_2   + ... + a'_{r,r} F_r   + a'_{r+1,r} F_{r+1}  + ... + a'_{k,r} V_k  + ... + a'_{m,r} V_m
       ...
       Q_{k+1} = a'_{1,k} Q_1   + a'_{2,k} Q_2   + ... + a'_{k,k} F_k   + ... + a'_{m,k} V_m

where a'_{t,t} ≠ 0 for t = r, ..., k, and for t ≥ r the pair (C_t, F_{t+1}), consisting of the pivot vector of updated relation t and the unit vector carried forward, is the same as either (F_t, V_{t+1}) or (V_{t+1}, F_t), where F_r is the same as V_r.

Before giving a description of the algorithm, which is oriented towards a computer code adaptation, it might be informative to consider an example. It will be helpful here to assume that all of the relations involved are expressed in the form of (2). For the sake of some generality, however, we will not reorder the rows. Therefore, the unit vectors will be the U_{h_t} and the pivot positions will not necessarily be on the diagonal. We will indicate the pivots by underlining. The example shows the original array, a "motion picture" of the step-by-step procedure, and finally the updated array. Each step consists of updating a relation by the application of an eliminator, and then forming a revised eliminator to be used on the next step by the use of the updated relation. The

initial eliminator is just the original expression for Q_1. An updated relation is allowed to pivot either on the same index position as that underlined in the original relation or on the position underlined in the current eliminator (the next eliminator will always have its underline in the alternative position to the one chosen for pivot). We will choose the one with the largest coefficient in absolute value, making an arbitrary choice in case of ties.

[The report here works the example numerically over two pages: the ORIGINAL RELATIONS with pivots underlined and the entering and dropped columns marked; Steps 1 through 4, each applying the current eliminator to update one relation and then forming the next (2nd through 4th) eliminator from the updated relation; and the final UPDATED RELATIONS. The coefficients are largely illegible in this scan and are not reproduced.]

This algorithm is one of several possible variants for updating the transformations. This one happens to proceed through the transformations in a forward direction. Backward-working algorithms can be formulated along very similar lines.

Method

Assume that, in addition to the foregoing k transformations a_t, t = 1, 2, ..., k, the following properly ordered arrays are given:

J ; J_i = column name associated with the basic variable with unit coefficient on row i of the canonical form.
X ; x_i = current value of the basic variable.
F ; f_i = storage containing the entering column (called J*) expressed in terms of the current basis.
H ; h_t = pivot row corresponding to the t-th transformation.

The following arrays are available for working storage:

D ; δ_i = for updating an a_t.
E ; e_i = for updating eliminators.
A third working array, whose name is illegible in this scan, records new pivot positions during updating.

The following one-cell arrays are given:

J* = name of the column entering the basic set.
k = number of transformations a_t.
r = pivot row corresponding to the (k+1)-st transformation.

and the following one-cell arrays are available for working storage:

t = running index on transformations.
p = pivot position of the updated a_t.
q = alternate pivot position not selected above.

We will use superscripts with arrays D and E to differentiate their contents during the updating steps.

Initialization

(A) Transform X: x'_r = x_r / f_r ; x'_i = x_i - x'_r f_i , i ≠ r.
(B) Replace J_r by the new column name J*.
(C) [This step is illegible in the scanned source.]
(D) Find t = s such that h_t = r ; if none, store a_{i,k+1} = f_i with h_{k+1} = r and go to (i).
(E) Set E^1 : e^1_i = a_{i,s}.
(F) Set q = r.

Algorithm

(a) If t = k+1, go to (g); otherwise form D^t :

    δ^t_q = a_{r,t} e^t_q    and    δ^t_i = a_{i,t} + a_{r,t} e^t_i  for i ≠ q.

(b) Choose new p = q and new q = h_t if |δ^t_q| > |δ^t_{h_t}| ; otherwise new p = h_t and q remains unchanged.

(c) Store the updated a_t , where a_{i,t} = δ^t_i , with updated pivot row h_t = p.

(d) If h_t = p, go to (e); otherwise exchange x'_{h_t} with x'_p and J_{h_t} with J_p.

(e) Form E^{t+1} : e^{t+1}_p = e^t_p / δ^t_p ; e^{t+1}_i = e^t_i - e^{t+1}_p δ^t_i , i ≠ p.

(f) Record the new pivot position p for transformation t ; increase t by 1 and go to (a).

(g) Form D^{k+1} : δ_q = f_r e^{k+1}_q ; δ_i = f_i + f_r e^{k+1}_i for i ≠ q.

(h) Store the updated a_k , where a_{i,k} = δ_i , with updated h_k = q.

(i) Exit.

Proof that there is always at least one non-zero pivot element

On each updating step except the last, this algorithm makes a choice between two positions for the pivot location. It is established below that at least one of these two positions has a non-zero coefficient.

Following, and adding to, the notation of the algorithm described earlier, denote by p(t) and q(t) the p and q position choices at the t-th stage. Suppose that the value of t for which h_t = r is s, so that t = s, s+1, ..., k+1 in the steps of the algorithm. Consider any value of t such that s ≤ t ≤ k. Let us make the inductive assumption that

    e^t_{q(t-1)} ≠ 0 .

From (a),

    δ^t_{h_t} = a_{h_t,t} + a_{r,t} e^t_{h_t} ,    δ^t_{q(t-1)} = a_{r,t} e^t_{q(t-1)} .

If a_{r,t} = 0, then δ^t_{h_t} ≠ 0, since the old pivot a_{h_t,t} ≠ 0 ; if a_{r,t} ≠ 0, then δ^t_{q(t-1)} ≠ 0 by our inductive assumption. Hence there exists at least one non-zero pivot choice.

We need also to show that e^{t+1}_{q(t)} ≠ 0. There are two cases to consider.

Case 1: Suppose q(t) = h_t and p(t) = q(t-1), implying a_{r,t} ≠ 0. The elimination of the q(t-1) term from the eliminator yields (see step (e))

    e^{t+1}_{q(t)} = e^t_{h_t} - (e^t_{q(t-1)} / δ^t_{q(t-1)}) δ^t_{h_t}
                   = e^t_{h_t} - (a_{h_t,t} + a_{r,t} e^t_{h_t}) / a_{r,t}
                   = - a_{h_t,t} / a_{r,t} ≠ 0 ,

since the old pivot a_{h_t,t} ≠ 0 by (2).

Case 2: Otherwise p(t) = h_t and q(t) = q(t-1), implying similarly a_{h_t,t} + a_{r,t} e^t_{h_t} ≠ 0. In this case, from (e) again,

    e^{t+1}_{q(t)} = e^t_{q(t-1)} - (e^t_{h_t} / δ^t_{h_t}) δ^t_{q(t-1)}
                   = e^t_{q(t-1)} - a_{r,t} e^t_{h_t} e^t_{q(t-1)} / (a_{h_t,t} + a_{r,t} e^t_{h_t})
                   = a_{h_t,t} e^t_{q(t-1)} / (a_{h_t,t} + a_{r,t} e^t_{h_t}) ≠ 0 .

To complete the inductive step we note that initially, for t = s, e^s_{q(s-1)} = e^s_r ≠ 0. Hence, by induction, e^{t+1}_{q(t)} ≠ 0, and also the coefficient of the pivot, δ^t_{p(t)} ≠ 0, for t = s, s+1, ..., k.

Finally, at step k+1 the pivot index is q(k), by (g). By definition f_r = a_{r,k+1} ≠ 0, and e^{k+1}_{q(k)} ≠ 0 from our inductive hypothesis,

so that the final pivot coefficient is also non-zero, completing our proof.

The procedure could be generalized to give more freedom as regards choice of pivot, at the expense of carrying along more eliminators. As noted earlier, it is possible to have backward elimination as well as forward.
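The core arithmetic of the forward pass, the substitution in step (a) and the eliminator update in step (e), can be sketched as follows (a simplified dense-vector illustration under our own naming, omitting the special handling of position q and the pivot bookkeeping):

```python
def substitute(a, e, a_r):
    """Step (a)-style substitution: eliminate a column from relation `a`,
    whose coefficient on the eliminated column is a_r, using eliminator `e`:
    delta_i = a_i + a_r * e_i."""
    return [a_i + a_r * e_i for a_i, e_i in zip(a, e)]

def update_eliminator(delta, e, p):
    """Step (e): remove position p from the eliminator using the updated
    relation `delta` pivoted at p:
    e'_p = e_p / delta_p ; e'_i = e_i - e'_p * delta_i for i != p."""
    e_p = e[p] / delta[p]
    new_e = [e_i - e_p * d_i for e_i, d_i in zip(e, delta)]
    new_e[p] = e_p
    return new_e
```

Each pass would apply substitute to the next relation, choose the larger of the two admissible pivot coefficients in absolute value, and then form the next eliminator with update_eliminator, so the transformation count stays at k.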