Computer-Aided Modeling of Reactive Systems by Warren E. Stewart and Michael Caracotsios. Copyright © 2008 John Wiley & Sons, Inc.

Appendix A

Solution of Linear Algebraic Equations

Linear systems of algebraic equations occur often in science and engineering as exact or approximate formulations of various problems. Such systems have appeared many times in this book. Exact linear problems appear in Chapter 2, and iterative linearization schemes are used in Chapters 6 and 7. This chapter gives a brief summary of properties of linear algebraic equation systems, in elementary and partitioned form, and of certain elimination methods for their solution. Gauss-Jordan elimination, Gaussian elimination, LU factorization, and their use on partitioned arrays are described. Some software for computational linear algebra is pointed out, and references for further reading are given.

A.1 INTRODUCTORY CONCEPTS AND OPERATIONS

A system of m linear algebraic equations in n unknowns,

    a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n = b_1
    a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n = b_2
    ...                                                     (A.1-1)
    a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n = b_m

can be written more concisely as

    Σ_{j=1}^{n} a_{ij} x_j = b_i,    i = 1, ..., m          (A.1-2)

Here a_{ij} denotes the coefficient of the jth unknown x_j in the ith equation, and the numbers a_{ij} and b_i (hence x_j) are all real. Often the number m of equations is equal to the number n of unknowns, but exceptions are common in optimization and modeling and will be noted in Chapters 6 and 7.
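For concreteness, the index form (A.1-2) can be exercised directly. The following sketch uses Python with NumPy for illustration only (the 2 × 2 coefficients and the trial solution are invented for this example; the software described in this book is Fortran/LAPACK-based):

    # Hypothetical 2 x 2 instance of Eqs. (A.1-1)/(A.1-2); values are invented.
    import numpy as np

    a = np.array([[2.0, 1.0],
                  [1.0, 3.0]])          # coefficients a_{ij}
    x = np.array([1.0, 2.0])            # a trial solution vector
    b = np.array([sum(a[i, j] * x[j] for j in range(2))
                  for i in range(2)])   # the sums of Eq. (A.1-2), written out
    print(b)                            # [4. 7.]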

Array notations are convenient for such problems. Let the bold letters

    A = | a_{11} ... a_{1n} |    x = | x_1 |    b = | b_1 |
        | ...           ... |        | ... |        | ... |        (A.1-3)
        | a_{m1} ... a_{mn} |        | x_n |        | b_m |

denote, respectively, the coefficient matrix A, the unknown column vector x, and the right-hand column vector b. Another way of representing an array is to enclose the expression for its elements in curly brackets: A = {a_{ij}}, x = {x_j}, b = {b_i}. Defining the matrix-vector product

    Ax = {Σ_{j=1}^{n} a_{ij} x_j}        (A.1-4)

in accordance with Eq. (A.1-2), one can summarize Eq. (A.1-1) concisely as

    Ax = b        (A.1-5)

Methods for solving Eq. (A.1-1) can then be described in terms of operations on the arrays A and b. Before doing so, we pause to summarize a few essentials of linear algebra; fuller treatments are given in Stewart (1973) and in Noble and Daniel (1988).

Matrices can be added if their corresponding dimensions are equal. The addition is done element by element:

    A + B = {a_{ij} + b_{ij}}        (A.1-6)

The difference A − B is formed analogously. It follows that two matrices are equal if their corresponding elements are equal; their difference is then a zero matrix 0, with every element equal to zero.

Multiplication of a matrix A by a scalar s is done element by element, and is commutative and associative:

    sA = As = {s a_{ij}}        (A.1-7)

Matrices A and B can be multiplied provided that they are conformable in the order written, that is, that the number of columns (column order) of the first matrix equals the number of rows (row order) of the second. The product C of an m × n matrix A and an n × p matrix B has the elements

    c_{ik} = Σ_{j=1}^{n} a_{ij} b_{jk}        (A.1-8)

Thus, the element in row i and column k of the product matrix is the sum over j of the products (element j in row i of the first matrix) times (element j in column k of the second matrix). When AB equals BA, we say that the multiplication is commutative; this property is limited to special pairs of square matrices of equal order.

The transpose of a matrix A is

    A^T = {a_{ij}}^T = {a_{ji}}        (A.1-9)
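The product rule (A.1-8) translates directly into code. The sketch below (illustrative, not from the book) forms C = AB from the definition and checks it against NumPy's built-in product:

    import numpy as np

    def matmul_by_definition(A, B):
        """C = AB from Eq. (A.1-8): c_{ik} = sum over j of a_{ij} b_{jk}."""
        m, n = A.shape
        n2, p = B.shape
        assert n == n2, "not conformable: columns of A must equal rows of B"
        C = np.zeros((m, p))
        for i in range(m):
            for k in range(p):
                C[i, k] = sum(A[i, j] * B[j, k] for j in range(n))
        return C

    A = np.arange(6.0).reshape(2, 3)    # a 2 x 3 matrix
    B = np.arange(12.0).reshape(3, 4)   # a conformable 3 x 4 matrix
    print(np.allclose(matmul_by_definition(A, B), A @ B))   # True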

Thus, the rows of A^T are the columns of A, and the columns of A^T are the rows of A. A matrix that equals its transpose is called symmetric. The transpose of a product of square or rectangular matrices is given by

    (AB)^T = B^T A^T        (A.1-10)

that is, by the product of the transposed matrices in reverse order.

A vector x, by convention, denotes a column of elements, hence a one-column matrix {x_{i1}}. The transpose x^T is a matrix {x_{1i}} by Eq. (A.1-9) and is a row of elements, ordered from left to right. The multiplication rule (A.1-8) and its corollary (A.1-10) thus hold directly for conforming products involving vectors. For example, Eq. (A.1-5) takes the alternate form

    x^T A^T = b^T        (A.1-11)

when transposed according to Eq. (A.1-10). Such transpositions arise naturally in certain computations with stoichiometric matrices, as noted in Chapter 2.

A set of row or column vectors v_1, ..., v_p is called linearly independent if its only vanishing linear combination Σ_i c_i v_i is the trivial one, with coefficients c_i all zero (otherwise, the set is called linearly dependent). Such a set provides the basis vectors v_1, ..., v_p of a p-dimensional linear space of vectors Σ_i c_i v_i, with the basis variables c_1, ..., c_p as coordinates. Constructions of basis vectors for the rows or columns of the coefficient matrix arise naturally in solving linear algebraic equations. The number, r, of row vectors in any basis for the rows is called the rank of the matrix, and is also the number of column vectors in any basis for the columns.

Equation (A.1-1) has solutions if and only if b is a linear combination of nonzero column vectors of A; then we say that b is in the column space of A. In such cases, if the rank, r, of A equals n there is a unique solution, whereas if r < n there is an infinity of solutions characterized by (n − r) free parameters. The possible outcomes are identified further in Section A.3.

The situation is simplest if the matrix A is square (m = n); then a unique solution vector x exists if and only if the rank of A equals its order n. Such a matrix A has a unique inverse, A^{-1}, defined by

    A A^{-1} = A^{-1} A = I        (A.1-12)

and is said to be nonsingular, or regular. Here I is the unit matrix

    I = | 1  0  ...  0 |
        | 0  1  ...  0 |
        | ...          |        (A.1-13)
        | 0  0  ...  1 |

of the same order as A.
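This solvability criterion can be tested numerically: b lies in the column space of A exactly when appending b to A leaves the rank unchanged. A small sketch (illustrative; the rank-1 matrix is invented for this example):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [3.0, 6.0]])           # rank 1: the columns are dependent
    b_in = np.array([1.0, 2.0, 3.0])     # a multiple of the first column
    b_out = np.array([1.0, 0.0, 0.0])    # outside the column space

    for b in (b_in, b_out):
        solvable = (np.linalg.matrix_rank(A) ==
                    np.linalg.matrix_rank(np.column_stack([A, b])))
        print(solvable)                  # True, then False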

The solution of Eq. (A.1-5) is then expressible as

    x = A^{-1} b        (A.1-14)

whatever the value of the n-vector b, though for computations there are better schemes, as described in the following sections.

A.2 OPERATIONS WITH PARTITIONED MATRICES

It is frequently useful to classify the equations and/or the variables of a problem into subsets; then Eq. (A.1-1) can be represented in partitioned forms such as

    | A_{11}  A_{12} |   | x_1 |   | b_1 |
    | A_{21}  A_{22} |   | x_2 | = | b_2 |        (A.2-1)
    | A_{31}  A_{32} |             | b_3 |

Here each entry denotes a subarray and is shown accordingly in bold type. Thus, the vector x is partitioned here into two subvectors, and the matrix A is partitioned into six submatrices. The submatrix A_{hk} contains the coefficients in the row subset h for the elements of the subvector x_k; thus, these arrays conform for multiplication in the order written.

Equations (A.1-4, 6, 7, 8, and 10) lead to analogous rules for addition and multiplication of partitioned arrays; thus, Eq. (A.1-4) yields the formula

    {Ax}_h = Σ_k A_{hk} x_k        (A.2-2)

for postmultiplication of a partitioned matrix {A_{hk}} by a conformally partitioned vector {x_k}. Equation (A.1-8) yields the formula

    C_{hq} = [AB]_{hq} = Σ_k A_{hk} B_{kq}        (A.2-3)

for the submatrices C_{hq} of a product of conformally partitioned matrices {A_{hk}} and {B_{kq}}. Application of Eq. (A.2-2) to Eq. (A.2-1) yields the partitioned equations

    A_{11} x_1 + A_{12} x_2 = b_1
    A_{21} x_1 + A_{22} x_2 = b_2        (A.2-4)
    A_{31} x_1 + A_{32} x_2 = b_3

when the resulting row groups are written out. The following example makes use of Eq. (A.2-3).
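Equation (A.2-2) is easily verified numerically: multiplying block by block must reproduce the unpartitioned product. A sketch with invented partition sizes (row subsets of 2, 2, and 1 rows; column subsets of 3 and 1 columns):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 4))
    x = rng.standard_normal(4)

    rows = [slice(0, 2), slice(2, 4), slice(4, 5)]      # row subsets h
    cols = [slice(0, 3), slice(3, 4)]                   # column subsets k
    for h in rows:
        block_sum = sum(A[h, k] @ x[k] for k in cols)   # Eq. (A.2-2)
        print(np.allclose(block_sum, (A @ x)[h]))       # True for each h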

Example A.1. Symbolic Inversion of a Partitioned Matrix

A formal inverse is desired for nonsingular matrices of the form

    A = | A_{11}  A_{12} |
        | A_{21}  A_{22} |        (A.2-5)

in which A_{11} and A_{22} are nonsingular (hence square). This result is used in Sections 6.5 and 7.5 for statistical estimation of parameter subsets. Let us denote the inverse by

    A^{-1} = | α  β |
             | γ  δ |        (A.2-6)

Then Eqs. (A.1-12) and (A.2-3) require

    | A_{11}  A_{12} |  | α  β |   | I_{11}  0      |
    | A_{21}  A_{22} |  | γ  δ | = | 0       I_{22} |        (A.2-7)

Interpreting this matrix equality as described in Eq. (A.1-6), we obtain four equations for the four unknown submatrices α, β, γ, and δ:

    A_{11} α + A_{12} γ = I_{11}        (A.2-8)
    A_{11} β + A_{12} δ = 0             (A.2-9)
    A_{21} α + A_{22} γ = 0             (A.2-10)
    A_{21} β + A_{22} δ = I_{22}        (A.2-11)

Premultiplication of Eq. (A.2-9) by A_{11}^{-1} gives

    β = −A_{11}^{-1} A_{12} δ        (A.2-12)

Premultiplication of Eq. (A.2-10) by A_{22}^{-1} gives

    γ = −A_{22}^{-1} A_{21} α        (A.2-13)

Insertion of the last result into Eq. (A.2-8) gives

    [A_{11} − A_{12} A_{22}^{-1} A_{21}] α = I_{11}        (A.2-14)

whence the submatrix α is

    α = [A_{11} − A_{12} A_{22}^{-1} A_{21}]^{-1}        (A.2-15)

Insertion of Eq. (A.2-12) into Eq. (A.2-11) gives

    [−A_{21} A_{11}^{-1} A_{12} + A_{22}] δ = I_{22}        (A.2-16)

whence the submatrix δ is

    δ = [A_{22} − A_{21} A_{11}^{-1} A_{12}]^{-1}        (A.2-17)

Assembling these results, we obtain the formula

    A^{-1} = | [A_{11} − A_{12} A_{22}^{-1} A_{21}]^{-1}                 −A_{11}^{-1} A_{12} [A_{22} − A_{21} A_{11}^{-1} A_{12}]^{-1} |
             | −A_{22}^{-1} A_{21} [A_{11} − A_{12} A_{22}^{-1} A_{21}]^{-1}     [A_{22} − A_{21} A_{11}^{-1} A_{12}]^{-1}            |        (A.2-18)

for the inverse of a matrix of any order, in terms of operations on its subarrays.
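Equation (A.2-18) is easy to check numerically before it is put to use. The sketch below (illustrative; block sizes and the random seed are invented) assembles the four submatrices and compares the result with a direct inversion; the diagonal blocks are shifted by a multiple of the unit matrix to keep them, almost surely, nonsingular:

    import numpy as np

    rng = np.random.default_rng(1)
    n1, n2 = 3, 2
    A11 = rng.standard_normal((n1, n1)) + 4 * np.eye(n1)
    A22 = rng.standard_normal((n2, n2)) + 4 * np.eye(n2)
    A12 = rng.standard_normal((n1, n2))
    A21 = rng.standard_normal((n2, n1))

    inv = np.linalg.inv
    alpha = inv(A11 - A12 @ inv(A22) @ A21)            # Eq. (A.2-15)
    delta = inv(A22 - A21 @ inv(A11) @ A12)            # Eq. (A.2-17)
    Ainv = np.block([[alpha,                   -inv(A11) @ A12 @ delta],
                     [-inv(A22) @ A21 @ alpha, delta]])   # Eq. (A.2-18)

    A = np.block([[A11, A12], [A21, A22]])
    print(np.allclose(Ainv, inv(A)))                   # True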

A.3 GAUSS-JORDAN REDUCTION

Gauss-Jordan reduction is a straightforward elimination method that solves for an additional unknown x_k at each stage. An augmented working array

    a = {α_{ij}} = [A  b]        (A.3-1)

is initialized with the elements of Eq. (A.1-1) and is transformed by stages to the solution. Stage k goes as follows:

1. Find α_{pk}, the absolutely largest current element in column k that lies in a row not yet selected for pivoting. If α_{pk} is negligible, skip to step 4; otherwise, accept it as the pivot for stage k and go to step 2.

2. Eliminate x_k from all rows but p by transforming the array as follows:

    α_{ij} ← α_{ij} − (α_{ik}/α_{pk}) α_{pj},    i ≠ p        (A.3-2)

3. Normalize the current pivotal row as follows:

    α_{pj} ← α_{pj}/α_{pk}        (A.3-3)

thus obtaining a coefficient of unity for x_k in the solution.

4. If k < min(m, n), increase k by 1 and go to step 1. Otherwise, the elimination is completed.
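Steps 1 through 4 translate into a short routine. The following Python/NumPy sketch is illustrative (the function name and the tolerance are choices made here, not the book's); it returns the transformed array and the list of pivotal rows, whose length is the rank r:

    import numpy as np

    def gauss_jordan(A, b, tol=1e-12):
        """Gauss-Jordan reduction of the augmented array of Eq. (A.3-1)."""
        a = np.column_stack([A.astype(float), b.astype(float)])
        m, n = A.shape
        pivoted = []                                     # rows already used
        for k in range(min(m, n)):
            free = [i for i in range(m) if i not in pivoted]
            p = max(free, key=lambda i: abs(a[i, k]))    # step 1
            if abs(a[p, k]) <= tol:
                continue                                 # negligible: step 4
            pivoted.append(p)
            for i in range(m):                           # step 2, Eq. (A.3-2)
                if i != p:
                    a[i, :] -= (a[i, k] / a[p, k]) * a[p, :]
            a[p, :] /= a[p, k]                           # step 3, Eq. (A.3-3)
        return a, pivoted

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([4.0, 7.0])
    a, piv = gauss_jordan(A, b)
    print(a[piv, -1])      # the solution [1. 2.], read from the pivotal rows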

The use of the largest available pivot element, α_{pk}, in column k limits the growth of rounding error by ensuring that the multipliers (α_{ik}/α_{pk}) in Eq. (A.3-2) all have magnitude 1 or less. This algorithm takes min(m, n) stages. The total number of pivots accepted is the rank, r, of the equation system. The final array can be decoded by associating each coefficient column with the corresponding variable. Rearranging the array to place the pivotal rows and columns in the order of selection, one can write the results in the partitioned form

    | I_r  A'_{12} |  | x_1 |   | b'_1 |
    | 0    0       |  | x_2 | = | b'_2 |        (A.3-4)

Here I_r is the unit matrix of order r, x_1 is the vector of r pivotal variables (those whose columns were selected for pivoting), x_2 is the vector of (n − r) nonpivotal variables, and A'_{12} is the r × (n − r) matrix of transformed coefficients of the x_2 variables. Expanding this partitioned equation, we obtain the pivotal equation set

    x_1 + A'_{12} x_2 = b'_1        (A.3-5)

and the nonpivotal equation set

    0 = b'_2        (A.3-6)

The possible outcomes of the reduction can now be summarized:

1. If r = m = n, the solution is unique. Equation (A.3-4) reduces to

    I_r x = x = b'        (A.3-7)

and all the equations and variables are pivotal.

2. If r = n < m, the solution is overdetermined. Equation (A.3-4) reduces to

    I_r x = x = b'_1        (A.3-8)

    0 = b'_2        (A.3-9)

If the vector b'_2 is zero, the problem is consistent and the solution is given by Eq. (A.3-8); otherwise, no n-vector x can satisfy all the equations. Depending on the causes of the discrepancies, one may need to reject or revise certain data, or else seek a compromise solution that minimizes the discrepancies in some sense; see Chapters 2, 6, and 7.

3. If r = m < n, the solution is underdetermined. Equation (A.3-4) reduces to the form in Eq. (A.3-5), with b'_1 = b' since all rows are pivotal when r = m. Thus, the pivotal variables (the set x_1) are obtained as explicit functions of the nonpivotal variables (the set x_2). The x_2 variables are called the parameters of the solution.

4. If r is less than both m and n, the reduction yields Eqs. (A.3-5) and (A.3-6). The vector b'_2 provides a numerical consistency test, and the x_2 variables serve as parameters of the solution.

The algorithm just described can also compute the inverse of a nonsingular matrix. For simplicity, consider the equations to be ordered originally so that each diagonal element a_{kk} becomes a pivot. Then the foregoing reduction transforms the coefficient matrix A into a unit matrix. It follows that the algorithm is equivalent to premultiplication of the initial array by A^{-1}. A simple way of computing A^{-1}, therefore, is to include a unit matrix in the initial array a. The final array then takes the form

    A^{-1} [A  b  I] = [I  x  A^{-1}]        (A.3-10)

giving A^{-1} in the space where I was initially inserted.

The inversion can be done more compactly by computing the inverse in the space originally occupied by A. Simple changes in steps 2 and 3 will accomplish this:

1. See above.

2. Extend the range of j in Eq. (A.3-2) to all j ≠ k. After the computation of the quotient (α_{ik}/α_{pk})^{(old)}, insert that quotient as the new element α_{ik}.

3. Replace α_{pk} by 1, and then divide every element of row p by α_{pk}^{(old)}.

4. See above.

Should A be singular, this method still gives a basis inverse, namely, the inverse of the submatrix of A in which the pivots were found. A version of this algorithm using strictly positive, diagonal pivots is used for constrained parameter estimation in Subroutine GREGPLUS, as described in Sections 6.4 and 7.3; there the determinant of the basis submatrix is obtained directly as the product of the pivotal divisors. Basis inverse matrices also prove useful in Sections 6.6 and 7.5 for interval estimation of parameters and related functions.
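The augmented form of Eq. (A.3-10) (not the compact in-place variant) can be exercised with the gauss_jordan sketch above by carrying b and a unit matrix through the reduction; the numbers repeat the earlier invented example:

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    b = np.array([[4.0], [7.0]])
    aug, piv = gauss_jordan(A, np.hstack([b, np.eye(2)]))   # array [A b I]
    n = A.shape[1]
    x, Ainv = aug[piv, n], aug[piv, n + 1:]     # decode [I x A^{-1}]
    print(x)                                    # [1. 2.]
    print(np.allclose(Ainv, np.linalg.inv(A)))  # True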

A.4 GAUSSIAN ELIMINATION

Gauss gave a shorter algorithm that introduces fewer roundings. The method has two parts: elimination and back-substitution. We describe it here for the array a of Eq. (A.3-1) with m = n. Stage k of the Gaussian elimination goes as follows:

1. Find α_{pk}, the absolutely largest of the elements α_{kk}, ..., α_{nk}. If α_{pk} is negligible, stop and declare the matrix singular. Otherwise, accept α_{pk} as the pivot for stage k, and (if p > k) interchange rows p and k, thus bringing the pivot element to the (k, k) position.

2. If k < n, eliminate x_k from rows k + 1, ..., n by transforming the array as follows [here (old) denotes element values just before this calculation]:

    α_{ij} = α_{ij}^{(old)} − (α_{ik}^{(old)}/α_{kk}) α_{kj}^{(old)},    i = k+1, ..., n        (A.4-1)

3. If k < n, increase k by 1 and go to step 1. Otherwise, the elimination is completed.

At the end of the elimination, the array takes the upper triangular form

    | u_{11}  u_{12}  ...  u_{1n}  b'_1 |
    | 0       u_{22}  ...  u_{2n}  b'_2 |
    | ...                               |        (A.4-2)
    | 0       0       ...  u_{nn}  b'_n |

Associating each coefficient u_{ij} with the corresponding element of x, we get the transformed equation system

    u_{11} x_1 + u_{12} x_2 + ... + u_{1n} x_n = b'_1
                 u_{22} x_2 + ... + u_{2n} x_n = b'_2
                                           ...                (A.4-3)
                                  u_{nn} x_n = b'_n

which is readily solved, beginning with the last equation:

    x_n = b'_n / u_{nn}
    x_i = (b'_i − Σ_{j=i+1}^{n} u_{ij} x_j) / u_{ii},    i = n−1, ..., 1        (A.4-4)

The solution is direct when Eqs. (A.4-4) are applied in the order given, since each right-hand member then contains only quantities already found.

For computation of a particular solution vector x, this method requires (1/3)(n^3 − n) + n^2 operations of multiplication or division, versus (1/2)(n^3 − n) + n^2 for the Gauss-Jordan method. Thus, the Gauss method takes about two-thirds as many operations. The computation of an inverse matrix takes O(n^3) operations of multiplication or division for either method.
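Both parts, the elimination with partial pivoting and the back-substitution of Eq. (A.4-4), fit in a few lines. Again a Python/NumPy sketch for illustration, with a tolerance chosen here rather than taken from the book:

    import numpy as np

    def gauss_solve(A, b, tol=1e-12):
        """Gaussian elimination (Section A.4) with back-substitution."""
        a = np.column_stack([A.astype(float), b.astype(float)])
        n = len(b)
        for k in range(n):
            p = k + np.argmax(np.abs(a[k:, k]))   # step 1: pivot search
            if abs(a[p, k]) <= tol:
                raise ValueError("matrix is singular")
            if p != k:
                a[[p, k]] = a[[k, p]]             # interchange rows p and k
            for i in range(k + 1, n):             # step 2, Eq. (A.4-1)
                a[i, k:] -= (a[i, k] / a[k, k]) * a[k, k:]
        x = np.zeros(n)                           # back-substitution, (A.4-4)
        for i in range(n - 1, -1, -1):
            x[i] = (a[i, n] - a[i, i + 1:n] @ x[i + 1:]) / a[i, i]
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    print(gauss_solve(A, np.array([4.0, 7.0])))   # [1. 2.]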

A.5 LU FACTORIZATION

LU factorization is an elegant scheme for computing x for various values of the right-hand vector b. The Gaussian algorithm described in Section A.4 transforms the matrix A into an upper triangular matrix U by operations equivalent to premultiplication of A by a nonsingular matrix. Denoting the latter matrix by L^{-1}, one obtains the representation

    L^{-1} A = U        (A.5-1)

for the transformation process. Premultiplication of this formula by L then gives

    A = LU        (A.5-2)

as an LU factorization of A. In applications one never computes the matrix L, but works with the lower triangular matrix L^{-1} defined by the row interchange list p(k) and the lower triangular array of multipliers m_{ik} = (α_{ik}/α_{kk}) for stages k = 1, ..., n of the elimination. For efficient use of computer memory the diagonal multipliers m_{kk} = 1 are not stored, while the other multipliers are saved as the subdiagonal elements of the transformed A matrix [in the region that is empty in Eq. (A.4-2)].

Premultiplication of Eq. (A.1-5) by L^{-1}, followed by use of Eq. (A.5-1), gives

    Ux = L^{-1} b = b'        (A.5-3)

as a matrix statement of Eq. (A.4-3). Premultiplication of Eq. (A.5-3) by U^{-1} gives

    x = U^{-1} b'        (A.5-4)

as a formal statement of the back-solution algorithm in Eq. (A.4-4). This two-step processing of b can be summarized as

    x = U^{-1} (L^{-1} b)        (A.5-5)

which is faster and gives less rounding error than the alternative of computing the inverse matrix and then using Eq. (A.1-14).

The pivot selection procedure used here is known as partial pivoting, because only one column is searched for the current pivot. A safer but slower procedure, known as full pivoting, is to select the current pivot as the absolutely largest remaining eligible element in the current transformed matrix.
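This factor-once, solve-many pattern is what library LU routines provide. The sketch below uses SciPy's lu_factor and lu_solve, which call the LAPACK routines dgetrf and dgetrs (see Section A.6); lu_factor returns U and the subdiagonal multipliers packed in a single array, plus the row-interchange list, in just the compact storage described above. The test matrix is invented:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[2.0, 1.0, 1.0],
                  [4.0, 3.0, 3.0],
                  [8.0, 7.0, 9.0]])
    lu, piv = lu_factor(A)                 # one factorization of A
    for b in (np.array([1.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0])):  # several right-hand vectors
        x = lu_solve((lu, piv), b)         # substitution steps only
        print(np.allclose(A @ x, b))       # True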

A.6 SOFTWARE

Excellent implementations of linear algebra algorithms are available from several sources, including MATLAB, IMSL, NAG, LINPACK, and LAPACK. LINPACK and LAPACK are available from www.netlib.org, along with codes for many other numerical algorithms. The LINPACK codes were written for serial machines, whereas LAPACK is designed for efficient utilization of parallel machines as well. LAPACK codes are used where applicable in the software described in this book.

PROBLEMS

A.A Matrix and Matrix-Vector Multiplication

Prove Eq. (A.1-8) by evaluating the contribution of element x_k to the ith element of the product vector (AB)x = A(Bx). Here A is m × n, B is n × p, and x is an arbitrary conforming vector.

A.B Transposition Rule for Matrix Products

Prove Eq. (A.1-10) by use of Eqs. (A.1-8) and (A.1-9). Again, take A to be m × n and B to be n × p.

A.C Inversion Formula

Verify Eq. (A.2-18) by premultiplication with the matrix of Eq. (A.2-5) according to Eq. (A.2-3).

A.D Symmetry and Matrix Inversion

Show, by use of Eqs. (A.1-12) and (A.1-10), that every symmetric invertible matrix A has a symmetric inverse.

A.E Inverse of a Matrix Product

Consider a product P = ABC of nonsingular matrices of equal order. Show that

    P^{-1} = C^{-1} B^{-1} A^{-1}        (A.E-1)

by forming P^{-1}P and simplifying. This result is used in Chapters 6 and 7 to obtain probability distributions for auxiliary functions of model parameters.

A.F Solution of a Linear System

Use Gaussian elimination and back-substitution to solve the linear system of Eq. (A.1-2), with m = n = 3, a_{ij} = 1/(i + j − 1), and b = {1, 0, 0}. The coefficient matrix A given here is called the Hilbert matrix; it arises in a nonrecommended brute-force construction of orthogonal polynomials from monomials t^j.
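As a numerical companion to Problem A.F (a cross-check on the arithmetic, not a substitute for the hand computation), SciPy can build the Hilbert matrix directly; its condition number is already large at n = 3, a hint of why the monomial construction is not recommended:

    import numpy as np
    from scipy.linalg import hilbert

    A = hilbert(3)                  # a_{ij} = 1/(i + j - 1), 1-based indices
    b = np.array([1.0, 0.0, 0.0])
    print(np.linalg.cond(A))        # about 524, large for so small a matrix
    x = np.linalg.solve(A, b)       # LAPACK-based solver; cf. Section A.6
    print(np.allclose(A @ x, b))    # True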

REFERENCES and FURTHER READING

Anderson, E., Z. Bai, C. Bischof, J. Demmel, J. J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, S. Ostrouchov, and D. Sorensen, LAPACK Users' Guide, Society for Industrial and Applied Mathematics, Philadelphia (1992).

Dongarra, J. J., J. R. Bunch, C. B. Moler, and G. W. Stewart, LINPACK Users' Guide, SIAM, Philadelphia (1979).

Golub, G. H., and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore (1983).

Noble, B., and J. W. Daniel, Applied Linear Algebra, 3rd edition, Prentice-Hall, Englewood Cliffs, NJ (1988).

Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes (FORTRAN Version), Cambridge University Press (1989).

Stewart, G. W., Introduction to Matrix Computations, Academic Press, New York (1973).