THE MINIMUM-NORM LEAST-SQUARES SOLUTION OF A LINEAR SYSTEM AND SYMMETRIC RANK-ONE UPDATES


Electronic Journal of Linear Algebra, Volume 22, pp. 480-489, May 2011.

XUZHOU CHEN AND JUN JI

(Received by the editors on November 22, 2010. Accepted for publication on May 8, 2011. Handling Editor: Daniel Szyld. Department of Computer Science, Fitchburg State University, Fitchburg, MA 01720, USA (xchen@fitchburgstate.edu). Department of Mathematics and Statistics, Kennesaw State University, Kennesaw, GA 30144, USA (jji@kennesaw.edu).)

Abstract. In this paper, we study the Moore-Penrose inverse of a symmetric rank-one perturbed matrix, from which a finite method is proposed for the minimum-norm least-squares solution of the system of linear equations $Ax = b$. This method is guaranteed to produce the required result.

Key words. Finite method, Linear system, Moore-Penrose inverse, Symmetric rank-one update.

AMS subject classifications. 15A06, 15A09, 65F05.

1. Introduction. Throughout this paper we use the standard notation of [2, 14]. $\mathbb{C}^n$ and $\mathbb{C}^{m \times n}$ stand for the $n$-dimensional complex vector space and the set of $m \times n$ matrices over the complex field $\mathbb{C}$, respectively. For a matrix $A \in \mathbb{C}^{m \times n}$, we denote by $R(A)$, $N(A)$, $A^*$, and $A^\dagger$ the range, null space, conjugate transpose, and Moore-Penrose generalized inverse of $A$, respectively. It is well known that $A^\dagger = A^{-1}$ for any nonsingular matrix $A$.

Let $A \in \mathbb{C}^{m \times n}$ and $b \in \mathbb{C}^m$. We consider the solution of the following system of linear equations:

(1.1)    $Ax = b$.

When $A$ is nonsingular, the system (1.1) has the unique solution $A^{-1}b$. In general, the system may not have a unique solution, or it may have no solution at all. We are interested in those $x$, called least-squares solutions, which minimize $\|b - Ax\|_2$. Among all the least-squares solutions there is a unique $\hat{x}$ such that $\|\hat{x}\|_2 < \|x\|_2$ for any other least-squares solution $x$; this $\hat{x}$ is called the minimum-norm least-squares (MNLS) solution of (1.1). It is shown in [2, Theorem 2.1.1] that $A^\dagger b$ is the MNLS solution of (1.1).
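As a quick numerical illustration of this definition (our example, not the paper's): NumPy's least-squares solver returns the minimum-norm solution when $A$ is rank deficient, and it agrees with the Moore-Penrose route $A^\dagger b$.

```python
# A minimal sketch (ours): for a rank-deficient A the least-squares solution
# is not unique; np.linalg.lstsq picks the minimum-norm one, which coincides
# with the Moore-Penrose route x = A^+ b.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])   # rank 1, so infinitely many LS solutions
b = np.array([1., 2.])

x_pinv = np.linalg.pinv(A) @ b                  # x = A^+ b
x_mnls = np.linalg.lstsq(A, b, rcond=None)[0]   # LAPACK's MNLS solution

assert np.allclose(x_pinv, x_mnls)
print(x_mnls)   # (1, 2, 3)/14, approximately [0.0714 0.1429 0.2143]
```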

For a nonsingular matrix $B$, the matrix $B + cd^*$ is also nonsingular if $1 + d^*B^{-1}c \neq 0$ for vectors $c$ and $d$. Moreover, we have

(1.2)    $(B + cd^*)^{-1} = B^{-1} - \frac{B^{-1}cd^*B^{-1}}{1 + d^*B^{-1}c}$.

The identity (1.2) is the well-known Sherman-Morrison formula for the inverse of a rank-one perturbed matrix; it was later generalized to rank-$k$ perturbations. The Moore-Penrose inverse of the rank-one modified matrix $B + cd^*$ for a general matrix $B$ is also available in the literature [2, Theorem 3.1.3]. More generalizations can be found in [3], [12], and [13].

Based on the Sherman-Morrison formula, a recursive iterative scheme for the solution of the system of linear equations $Ax = b$ with nonsingular $A$ was recently developed by Maponi [10]. Unfortunately, his algorithm breaks down even for a very simple problem (see Example 2 of Section 4), and a modification based on pivoting techniques has to be introduced to avoid the failure. In this paper, following the same idea of rank-one updates employed by Maponi, we develop a finite method which is guaranteed to carry out all the steps until the required solution is reached. Our approach is to construct a sequence of linear systems in such a way that the coefficient matrix of each linear system in the sequence is a symmetric rank-one modification of that of its predecessor. By employing the rank-one update formula for the generalized inverse, the MNLS solutions of all systems of linear equations in the sequence can be computed successively. As with the algorithms in [9, 11], there is no need to compute the generalized inverses of the intermediate matrices in the procedure.

The proposed algorithm differs from the classic Greville finite algorithm [6] and its variant [15] in several respects. One difference is that the algorithm in this paper is not streamlined while the other two algorithms are. The major difference, however, which makes our algorithm better than the algorithm in [15], lies in the handling of the intermediate matrix sequences during the iteration. We observe that the intermediate sequence $P_l$ of [15] for the MNLS solution is indeed $A_l^\dagger$, where $A_l$ is defined as in (2.4). Unlike the algorithm by Zhou et al., in which both $P_i$ and $Q_i$ of [15, Theorem 2.2] must be updated and matrix multiplication must be performed at each iteration, the sequence $\{A_l^\dagger\}$ in our notation is never computed explicitly, even though a Sherman-Morrison type formula for $\{A_l^\dagger\}$ is used in the derivation. Thus, our algorithm requires fewer memory locations and has lower computational complexity than the classic Greville algorithm and the algorithm by Zhou et al. [15]. The strategy of symmetric rank-one updating is also adopted in the online and real-time kernel recursive least-squares algorithm of [4].
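The identity (1.2) is also easy to confirm numerically; the random data below is our own, purely illustrative choice:

```python
# Numerical sanity check (ours) of the Sherman-Morrison identity (1.2).
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))     # nonsingular with probability one
c = rng.standard_normal((4, 1))
d = rng.standard_normal((4, 1))

Binv = np.linalg.inv(B)
denom = 1.0 + (d.T @ Binv @ c).item()    # must be nonzero for (1.2) to hold
lhs = np.linalg.inv(B + c @ d.T)
rhs = Binv - (Binv @ c @ d.T @ Binv) / denom
assert np.allclose(lhs, rhs)
```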

The rest of the paper is organized as follows. The existing results on the Moore-Penrose inverse of a symmetric rank-one perturbed matrix are summarized in Section 2. An effective finite method for computing the MNLS solution of a general linear system is proposed in Section 3. Some examples are included in Section 4.

2. Symmetric rank-one update and its M-P inverse. Notice that, as far as the MNLS solution is concerned, the general linear system (1.1) is equivalent to the normal equation

(2.1)    $A^*Ax = A^*b$.

Let $a_i \in \mathbb{C}^n$ be the $i$th column of the matrix $A^*$, so that

(2.2)    $A^* = (a_1\ a_2\ \cdots\ a_m)$.

Therefore, we can rewrite the normal equation (2.1) as the equivalent linear system

(2.3)    $\left(\sum_{i=1}^{m} a_i a_i^*\right) x = A^*b$.

Obviously, the problem of obtaining the MNLS solution of the linear system (1.1) becomes that of obtaining the minimum-norm solution of the linear system (2.3), whose coefficient matrix is a sum of $m$ rank-one matrices. Define

(2.4)    $\hat{b} = A^*b, \quad A_0 = 0, \quad A_l = \sum_{i=1}^{l} a_i a_i^* \ \ (l = 1, \ldots, m)$.

Our approach is to construct a finite sequence

(2.5)    $x_l = A_l^\dagger \hat{b}, \quad l = 0, 1, 2, \ldots, m$,

where $x_l$ is the MNLS solution of $A_l x = \hat{b}$ ($l = 0, 1, 2, \ldots, m$). However, we will not construct the sequence (2.5) by obtaining the MNLS solution of each linear system directly from a cold start. Since $A_l = A_{l-1} + a_l a_l^*$ is a rank-one modification of $A_{l-1}$, we compute $x_l$ in terms of $x_{l-1}$, starting from $x_0 = A_0^\dagger \hat{b}$. We would like the calculation of $x_0$ to be an easy one; in fact, the first linear system $A_0 x = \hat{b}$ of the sequence has the trivial MNLS solution $x_0 = A_0^\dagger \hat{b} = 0$. Consequently, the sequence $\{x_l\}_{l=0}^{m}$ can be constructed recursively and effectively. After $m$ iterations, $x_m = A_m^\dagger \hat{b}$ is finally reached. We note that the linear system (2.1) is the same as $A_m x = \hat{b}$.
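The equivalence just described can be checked directly. The snippet below (ours; it borrows the matrix of Example 1 from Section 4) confirms that $A_m = A^*A$ and that the minimum-norm solution of (2.3) is the MNLS solution of (1.1):

```python
# Check (ours) that A_m = sum_i a_i a_i^* equals A*A, cf. (2.4), and that
# A_m^+ b_hat equals the MNLS solution A^+ b of (1.1).
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])       # the matrix of Example 1 in Section 4
b = np.array([1., 1.])

a = A.conj().T                     # a[:, i] is the i-th column of A*
b_hat = a @ b                      # \hat{b} = A* b
A_m = sum(np.outer(a[:, i], a[:, i].conj()) for i in range(A.shape[0]))

assert np.allclose(A_m, A.conj().T @ A)
assert np.allclose(np.linalg.pinv(A_m) @ b_hat, np.linalg.pinv(A) @ b)
```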

Therefore,

$x_m = A_m^\dagger \hat{b} = \left(\sum_{i=1}^{m} a_i a_i^*\right)^\dagger A^*b = (A^*A)^\dagger A^*b = A^\dagger b$,

which is the MNLS solution of (1.1).

Observe that $A_l = A_{l-1} + a_l a_l^*$, so that $x_l = A_l^\dagger \hat{b} = (A_{l-1} + a_l a_l^*)^\dagger \hat{b}$. By utilizing the existing results on the Moore-Penrose inverse of the rank-one modified matrix $A_{l-1}$ and exploiting its special structure, we propose in the next section an effective method for computing the MNLS solution of the linear system $A_l x = \hat{b}$ from that of $A_{l-1} x = \hat{b}$. To this end, we need to establish a relation between $(A_{l-1} + a_l a_l^*)^\dagger$ and $A_{l-1}^\dagger$. For each $l = 1, 2, \ldots, m$, define

$k_l = A_{l-1}^\dagger a_l$, $\quad h_l = a_l^* A_{l-1}^\dagger$, $\quad u_l = (I - A_{l-1}A_{l-1}^\dagger)a_l$, $\quad v_l = a_l^*(I - A_{l-1}^\dagger A_{l-1})$, $\quad \beta_l = 1 + a_l^* A_{l-1}^\dagger a_l$.

Theorem 2.1. Let $A_l$ ($l = 0, 1, 2, \ldots, m$) be defined as in (2.4), let $A^*$ be partitioned as in (2.2), and let $\beta_l = 1 + a_l^* A_{l-1}^\dagger a_l$. Then $\beta_l \geq 1$ for each $l$.

Proof. Obviously, $\beta_1 = 1 + a_1^* A_0^\dagger a_1 = 1$. We only need to prove the result for $2 \leq l \leq m$. Observe that $A_{l-1} = \sum_{i=1}^{l-1} a_i a_i^*$ is positive semidefinite for $2 \leq l \leq m$. Let $r$ be the rank of $A_{l-1}$. Then $A_{l-1}$ is unitarily similar to a diagonal matrix, i.e., $A_{l-1} = UDU^*$, where $U$ is a unitary matrix and $D = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_r, 0, \ldots, 0)$ with $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > 0$. We can write $A_{l-1}^\dagger = UD^\dagger U^*$, which is also positive semidefinite due to the fact that $D^\dagger = \mathrm{diag}(1/\sigma_1, 1/\sigma_2, \ldots, 1/\sigma_r, 0, \ldots, 0)$. Hence, we have $\beta_l = 1 + a_l^* A_{l-1}^\dagger a_l \geq 1$.

The general result for the Moore-Penrose inverse of a rank-one modified matrix can be found in [2, Theorem 3.1.3]. For general $A \in \mathbb{C}^{m \times n}$, $c \in \mathbb{C}^m$, and $d \in \mathbb{C}^n$, the Moore-Penrose inverse of $A + cd^*$ can be expressed in terms of $A^\dagger$, $c$, and $d$ in six distinct cases. Because of the six distinct conditions and expressions, it is extremely difficult, if not impossible, to apply the result for a general matrix directly to construct a recursive scheme for computing the MNLS solution of a linear system. However, when the result is applied to our specially structured sequence (2.5), $\beta_l$ is, in view of Theorem 2.1, real and nonzero, which eliminates three cases. Due to the fact that $(A_{l-1}^\dagger)^* = (A_{l-1}^*)^\dagger = A_{l-1}^\dagger$, we also have $h_l = k_l^*$ and $v_l = u_l^*$, from which two of the three other cases can be combined into one. Thus, the six cases of [2, Theorem 3.1.3] are reduced to only two.

Theorem 2.2. Let $A_l = A_{l-1} + a_l a_l^*$ be defined as in (2.4).

1. If $u_l \neq 0$, then

(2.6)    $(A_{l-1} + a_l a_l^*)(A_{l-1} + a_l a_l^*)^\dagger = (A_{l-1} + a_l a_l^*)^\dagger(A_{l-1} + a_l a_l^*) = A_{l-1}A_{l-1}^\dagger + u_l u_l^\dagger$

and

$(A_{l-1} + a_l a_l^*)^\dagger = A_{l-1}^\dagger - k_l u_l^\dagger - (u_l^\dagger)^* k_l^* + \beta_l (u_l^\dagger)^* u_l^\dagger$.

2. If $u_l = 0$, then

(2.7)    $(A_{l-1} + a_l a_l^*)(A_{l-1} + a_l a_l^*)^\dagger = (A_{l-1} + a_l a_l^*)^\dagger(A_{l-1} + a_l a_l^*) = A_{l-1}A_{l-1}^\dagger = A_{l-1}^\dagger A_{l-1}$

and

$(A_{l-1} + a_l a_l^*)^\dagger = A_{l-1}^\dagger - \beta_l^{-1} k_l k_l^*$.

Proof. The result follows directly from [2, Theorem 3.1.3] and its proof. Details are omitted.

3. The finite method for the MNLS solution. Although the rank-one update formulas stated in Theorem 2.2 involve the Moore-Penrose inverses of matrices, the method proposed below generates the sequence of vectors $x_1, x_2, \ldots, x_m$ recursively without explicitly computing the Moore-Penrose generalized inverses of the intermediate matrices $A_l$ ($l = 1, \ldots, m$). To establish the iterative scheme from $x_{l-1}$ to $x_l$, we first define two auxiliary sequences of vectors

$y_{s,t} = A_s^\dagger a_t \quad$ and $\quad \tilde{y}_{s,t} = A_s y_{s,t} = A_s A_s^\dagger a_t$.

It is easily seen from $A_s A_s^\dagger = (A_s A_s^\dagger)^* = (A_s^\dagger)^* A_s^* = (A_s^*)^\dagger A_s^* = A_s^\dagger A_s$ that $y_{s,t}$ and $\tilde{y}_{s,t}$ are the MNLS solutions of the two auxiliary linear systems

$A_s y = a_t \quad$ and $\quad A_s \tilde{y} = A_s a_t$,

respectively. In what follows, we frequently employ the fact that $a^\dagger = a^*/\|a\|_2^2$ for any nonzero column vector $a$. In view of Theorem 2.2, we distinguish two cases in our analysis.

Case 1 ($u_l \neq 0$). It is easily seen from Theorem 2.2 that

(3.1)    $x_l = (A_{l-1} + a_l a_l^*)^\dagger \hat{b} = x_{l-1} - k_l u_l^\dagger \hat{b} - (u_l^\dagger)^* k_l^* \hat{b} + \beta_l (u_l^\dagger)^* u_l^\dagger \hat{b}$.
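Before continuing, the two cases of Theorem 2.2 can be sanity-checked numerically. The helper below is our own transcription of (2.6) and (2.7), compared against a direct pseudoinverse; the paper's algorithm in Section 3 deliberately never forms these matrices, so this exists only to verify the update formulas.

```python
# Our own transcription (a sketch) of the two cases of Theorem 2.2.
import numpy as np

def sym_rank_one_pinv_update(A_pinv, AA_pinv, a):
    """Given A^+, A A^+ and a vector a (A Hermitian PSD), return (A + a a*)^+."""
    k = A_pinv @ a                          # k_l = A^+ a
    u = a - AA_pinv @ a                     # u_l = (I - A A^+) a
    beta = 1.0 + np.vdot(a, A_pinv @ a)     # beta_l = 1 + a* A^+ a >= 1
    if not np.allclose(u, 0):               # case 1, formula (2.6)
        u_pinv = u.conj() / np.vdot(u, u)   # u^+ = u* / ||u||^2 (a row vector)
        return (A_pinv - np.outer(k, u_pinv)
                - np.outer(u_pinv.conj(), k.conj())
                + beta * np.outer(u_pinv.conj(), u_pinv))
    return A_pinv - np.outer(k, k.conj()) / beta   # case 2, formula (2.7)

rng = np.random.default_rng(1)
a1, a2 = rng.standard_normal(3), rng.standard_normal(3)
A1 = np.outer(a1, a1)                       # A_1 = a_1 a_1^*
updated = sym_rank_one_pinv_update(np.linalg.pinv(A1), A1 @ np.linalg.pinv(A1), a2)
assert np.allclose(updated, np.linalg.pinv(A1 + np.outer(a2, a2)))
```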

Note that

(3.2)    $k_l u_l^\dagger \hat{b} = A_{l-1}^\dagger a_l \left[(I - A_{l-1}A_{l-1}^\dagger)a_l\right]^\dagger \hat{b} = \frac{A_{l-1}^\dagger a_l (a_l - \tilde{y}_{l-1,l})^* \hat{b}}{a_l^*(a_l - A_{l-1}A_{l-1}^\dagger a_l)} = \frac{y_{l-1,l}\,(a_l^*\hat{b} - \tilde{y}_{l-1,l}^*\hat{b})}{a_l^*(a_l - \tilde{y}_{l-1,l})}$.

Similarly, we have

$(u_l^\dagger)^* k_l^* \hat{b} = \frac{a_l^* x_{l-1}\,(a_l - \tilde{y}_{l-1,l})}{a_l^*(a_l - \tilde{y}_{l-1,l})}$,

and

(3.3)    $\beta_l (u_l^\dagger)^* u_l^\dagger \hat{b} = (1 + a_l^* A_{l-1}^\dagger a_l) \frac{(a_l - \tilde{y}_{l-1,l})(a_l - \tilde{y}_{l-1,l})^*\hat{b}}{[a_l^*(a_l - \tilde{y}_{l-1,l})]^2} = \frac{(1 + a_l^* y_{l-1,l})(a_l - \tilde{y}_{l-1,l})^*\hat{b}}{[a_l^*(a_l - \tilde{y}_{l-1,l})]^2}\,(a_l - \tilde{y}_{l-1,l})$.

Combining (3.1) through (3.3), we have

(3.4)    $x_l = x_{l-1} - \frac{y_{l-1,l}(a_l^*\hat{b} - \tilde{y}_{l-1,l}^*\hat{b})}{a_l^*(a_l - \tilde{y}_{l-1,l})} - \frac{a_l^* x_{l-1}\,(a_l - \tilde{y}_{l-1,l})}{a_l^*(a_l - \tilde{y}_{l-1,l})} + \frac{(1 + a_l^* y_{l-1,l})(a_l - \tilde{y}_{l-1,l})^*\hat{b}}{[a_l^*(a_l - \tilde{y}_{l-1,l})]^2}\,(a_l - \tilde{y}_{l-1,l})$.

Following the same lines as in (3.2)-(3.3), we can write

$k_l u_l^\dagger a_t = \frac{y_{l-1,l}\,a_l^*(a_t - \tilde{y}_{l-1,t})}{a_l^*(a_l - \tilde{y}_{l-1,l})}, \quad (u_l^\dagger)^* k_l^* a_t = \frac{(a_l - \tilde{y}_{l-1,l})\,a_l^* y_{l-1,t}}{a_l^*(a_l - \tilde{y}_{l-1,l})}, \quad \beta_l (u_l^\dagger)^* u_l^\dagger a_t = \frac{1 + a_l^* y_{l-1,l}}{[a_l^*(a_l - \tilde{y}_{l-1,l})]^2}\,(a_l - \tilde{y}_{l-1,l})\,a_l^*(a_t - \tilde{y}_{l-1,t})$.

It is seen from Theorem 2.2 that

$y_{l,t} = A_l^\dagger a_t = (A_{l-1} + a_l a_l^*)^\dagger a_t = \left(A_{l-1}^\dagger - k_l u_l^\dagger - (u_l^\dagger)^* k_l^* + \beta_l (u_l^\dagger)^* u_l^\dagger\right) a_t$.

Thus, we have an iterative scheme for $y_{l,t}$:

(3.5)    $y_{l,t} = y_{l-1,t} - \frac{y_{l-1,l}\,a_l^*(a_t - \tilde{y}_{l-1,t})}{a_l^*(a_l - \tilde{y}_{l-1,l})} - \frac{(a_l - \tilde{y}_{l-1,l})\,a_l^* y_{l-1,t}}{a_l^*(a_l - \tilde{y}_{l-1,l})} + \frac{1 + a_l^* y_{l-1,l}}{[a_l^*(a_l - \tilde{y}_{l-1,l})]^2}\,(a_l - \tilde{y}_{l-1,l})\,a_l^*(a_t - \tilde{y}_{l-1,t})$.

For the auxiliary sequence $\tilde{y}_{l,t} = A_l y_{l,t}$, we could multiply both sides of (3.5) by $A_l$ and then simplify the resulting expression. In our derivation, however, we employ (2.6) instead:

$\tilde{y}_{l,t} = A_l A_l^\dagger a_t = \left(A_{l-1}A_{l-1}^\dagger + u_l u_l^\dagger\right) a_t = \tilde{y}_{l-1,t} + u_l u_l^\dagger a_t = \tilde{y}_{l-1,t} + \frac{(I - A_{l-1}A_{l-1}^\dagger)a_l\, a_l^*(I - A_{l-1}A_{l-1}^\dagger)a_t}{a_l^*(I - A_{l-1}A_{l-1}^\dagger)a_l}$.

Therefore, we have

(3.6)    $\tilde{y}_{l,t} = \tilde{y}_{l-1,t} + \frac{(a_l - \tilde{y}_{l-1,l})\,a_l^*(a_t - \tilde{y}_{l-1,t})}{a_l^*(a_l - \tilde{y}_{l-1,l})}$.

Case 2 ($u_l = 0$). Observe that

(3.7)    $\beta_l = 1 + a_l^* A_{l-1}^\dagger a_l = 1 + a_l^* y_{l-1,l}$

and

(3.8)    $k_l k_l^* \hat{b} = y_{l-1,l}\, a_l^* x_{l-1}$.

It is seen from Theorem 2.2 that

$x_l = (A_{l-1} + a_l a_l^*)^\dagger \hat{b} = \left(A_{l-1}^\dagger - \beta_l^{-1} k_l k_l^*\right)\hat{b} = x_{l-1} - \beta_l^{-1} k_l k_l^* \hat{b}$,

which, together with (3.7) and (3.8), implies

(3.9)    $x_l = x_{l-1} - \frac{y_{l-1,l}\, a_l^* x_{l-1}}{1 + a_l^* y_{l-1,l}}$.

By the same token, we can develop an iterative scheme for $y_{l,t}$:

(3.10)    $y_{l,t} = y_{l-1,t} - \frac{y_{l-1,l}\, a_l^* y_{l-1,t}}{1 + a_l^* y_{l-1,l}}$.

For $\tilde{y}_{l,t}$, in view of (2.7), we have

(3.11)    $\tilde{y}_{l,t} = \tilde{y}_{l-1,t}$.

Finally, since $A_0 = 0$, we have $x_0 = 0$, $y_{0,t} = 0$, and $\tilde{y}_{0,t} = 0$ ($t = 1, 2, \ldots, m$), from which we can compute the MNLS solution $x_m$ by applying (3.4) or (3.9) repeatedly with the help of the auxiliary sequences $y_{l,t}$ and $\tilde{y}_{l,t}$ ($t = l+1, l+2, \ldots, m$ and $l < m$). We summarize this procedure as follows.

Procedure for the MNLS solution of $Ax = b$.

Step 0. Input: $b$ and $A$. Let $a_i$ be the $i$th column of $A^*$.
Step 1. Initialization: compute $\hat{b} = A^*b$. Set $x_0 = 0$, $y_{0,t} = 0$, and $\tilde{y}_{0,t} = 0$ for all $t = 1, \ldots, m$.
Step 2. For $l = 1, 2, \ldots, m$:
(a) if $a_l - \tilde{y}_{l-1,l} \neq 0$ (i.e., $u_l \neq 0$), compute $x_l$ using (3.4), and compute $y_{l,t}$ and $\tilde{y}_{l,t}$ using (3.5) and (3.6), respectively, for all $t = l+1, l+2, \ldots, m$ (when $l < m$);
(b) if $a_l - \tilde{y}_{l-1,l} = 0$, compute $x_l$ using (3.9), and compute $y_{l,t}$ and $\tilde{y}_{l,t}$ using (3.10) and (3.11), respectively, for all $t = l+1, l+2, \ldots, m$ (when $l < m$).
Step 3. Output: the minimum-norm least-squares solution $x_m$.

Let us analyze the two cases of our method further. If $u_l = 0$ for some $l$, then we have

$a_l = A_{l-1}A_{l-1}^\dagger a_l = \left(\sum_{i=1}^{l-1} a_i a_i^*\right) A_{l-1}^\dagger a_l = \sum_{i=1}^{l-1} \left(a_i^* A_{l-1}^\dagger a_l\right) a_i$.

This means that if $u_l = 0$ for some $l$, then $a_l$ is a linear combination of $\{a_i : i = 1, 2, \ldots, l-1\}$. We end this section with the following interesting result, which shows that the converse is also true.

Theorem 3.1. Let $u_i$ and $a_i$ ($i = 1, 2, \ldots, m$) be defined as before. Then $u_l = 0$ if and only if $a_l$ is a linear combination of $\{a_1, a_2, \ldots, a_{l-1}\}$.

Proof. The definition $u_l = (I - A_{l-1}A_{l-1}^\dagger)a_l$ indicates that $u_l = 0$ is equivalent to $a_l \in N(I - A_{l-1}A_{l-1}^\dagger)$. Let $\hat{A}_{l-1} = [a_1\ a_2\ \cdots\ a_{l-1}]$. Obviously, we have $A_{l-1} = \hat{A}_{l-1}\hat{A}_{l-1}^*$. It is easily seen that

$R(\hat{A}_{l-1}) = R(\hat{A}_{l-1}\hat{A}_{l-1}^*) = R(A_{l-1}) = R(A_{l-1}A_{l-1}^\dagger) = N(I - A_{l-1}A_{l-1}^\dagger)$.

Therefore, $u_l = 0$ is equivalent to $a_l \in R(\hat{A}_{l-1})$, i.e., to $a_l$ being a linear combination of $\{a_1, a_2, \ldots, a_{l-1}\}$.

As a consequence of Theorem 3.1, if $A$ has full row rank, then $A^*$ has full column rank, which implies that $\{a_1, a_2, \ldots, a_l\}$ is linearly independent for every $l$; thus $u_l \neq 0$ for all $l$ in this case. Although the procedure for the MNLS solution does not impose any restriction on the rows of the matrix $A$, we believe it is better to first prune linearly dependent rows of $A$, as discussed in [7], and then apply the procedure. Doing so removes the cases $u_l = 0$ and thus reduces computational error and possibly storage as well as computational complexity.
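For readers who wish to experiment, here is a direct floating-point transcription of the procedure (our own sketch, not the authors' code): indices run from 0 rather than 1, and the tolerance used to decide $u_l = 0$ in Step 2 is our addition, since the paper works exactly.

```python
import numpy as np

def mnls(A, b, tol=1e-12):
    """Sketch of the Section 3 procedure: returns the MNLS solution of Ax = b.
    The tolerance deciding u_l = 0 is our own floating-point addition."""
    A = np.asarray(A, dtype=complex)
    m, n = A.shape
    a = A.conj().T                        # Step 0: a[:, l] = column l of A*
    b_hat = a @ b                         # Step 1: \hat{b} = A* b
    x = np.zeros(n, dtype=complex)        #         x_0 = 0
    y = np.zeros((m, n), dtype=complex)   # y[t]  holds y_{l-1,t};   y_{0,t} = 0
    yt = np.zeros((m, n), dtype=complex)  # yt[t] holds \tilde{y}_{l-1,t}
    for l in range(m):                    # Step 2 (l runs 0..m-1 here)
        al, yl = a[:, l], y[l]
        r = al - yt[l]                    # u_l = a_l - \tilde{y}_{l-1,l}
        d = np.vdot(al, r).real           # a_l^*(a_l - \tilde{y}_{l-1,l}) = ||u_l||^2
        beta = 1 + np.vdot(al, yl)        # beta_l, real and >= 1 by Theorem 2.1
        if d > tol:                       # Step 2(a): u_l != 0
            g = np.vdot(r, b_hat)         # (a_l - \tilde{y}_{l-1,l})^* \hat{b}
            x = x - yl * (g / d) - np.vdot(al, x) * r / d + beta * (g / d**2) * r  # (3.4)
            for t in range(l + 1, m):
                s = np.vdot(al, a[:, t] - yt[t])
                y[t] = (y[t] - yl * (s / d) - np.vdot(al, y[t]) * r / d
                        + beta * (s / d**2) * r)                                   # (3.5)
                yt[t] = yt[t] + (s / d) * r                                        # (3.6)
        else:                             # Step 2(b): u_l = 0
            x = x - yl * (np.vdot(al, x) / beta)                                   # (3.9)
            for t in range(l + 1, m):
                y[t] = y[t] - yl * (np.vdot(al, y[t]) / beta)                      # (3.10)
            # (3.11): yt is unchanged
    return x                              # Step 3: x_m
```

Running `mnls` on the data of the examples in Section 4 reproduces the sequences reported there; for instance, `mnls(np.array([[1., 2., 3.], [4., 5., 6.]]), np.array([1., 1.]))` returns `[-0.5, 0, 0.5]` up to rounding (cast to real with `np.real_if_close` if desired).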

4. Some examples. From the derivation of our method, we see that the Moore-Penrose inverses $A_l^\dagger$ are never formed explicitly. Moreover, the method is free of matrix multiplication. One major advantage of this recursive method is that it always carries out all steps and produces the required result. The method provides an alternative way of directly finding the MNLS solution of a system of linear equations. We illustrate our method with three examples.

Example 1. Let

(4.1)    $A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$ and $b = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

By using our method, we generate the sequence of vectors

$x_0 = (0,\ 0,\ 0)^T, \quad x_1 = (23/98,\ 23/49,\ 69/98)^T, \quad x_2 = (-1/2,\ 0,\ 1/2)^T$.

$x_2$ is the minimum-norm solution of the system of linear equations (4.1).

Example 2. Let

(4.2)    $A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 0 & -1 & 1 \end{pmatrix}$ and $b = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$.

This time, we have

$x_0 = (0,\ 0,\ 0)^T, \quad x_1 = (5/9,\ 5/9,\ 5/9)^T, \quad x_2 = (-1/4,\ -1/4,\ 5/2)^T, \quad x_3 = (2,\ -1,\ 0)^T$.

$x_3$ is the solution of the system of linear equations (4.2). Note that this is a nonsingular matrix taken from [10]. As was pointed out by the author, Algorithm 1 of [10] breaks down on it, and a modification based on pivoting techniques has to be introduced to avoid the failure. As we can see, however, our method solves it easily.

In both Examples 1 and 2, $A$ has full row rank. The sequence $\{x_i\}$ is indeed constructed by our method without calling Step 2(b), which confirms Theorem 3.1. In the following example, both Step 2(a) and Step 2(b) are called.

Example 3. Let

(4.3)    $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{pmatrix}$ and $b = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$.

By using our method, we generate the sequence of vectors

$x_0 = (0,\ 0)^T, \quad x_1 = (33/25,\ 66/25)^T, \quad x_2 = (3,\ -3/2)^T, \quad x_3 = (-1,\ 1)^T$.

$x_3$ is the MNLS solution of the system of linear equations (4.3). In this example, we do observe that $u_1, u_2 \neq 0$ but $u_3 = 0$, i.e., $x_3$ is computed in Step 2(b).
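The final vectors above are easy to confirm independently (our check, not part of the paper) against NumPy's pseudoinverse:

```python
# Independent check (ours) of the final vectors in Examples 1-3 via A^+ b.
import numpy as np

examples = [
    (np.array([[1., 2., 3.], [4., 5., 6.]]), np.array([1., 1.])),          # Ex. 1
    (np.array([[1., 1., 1.], [1., 1., 0.], [0., -1., 1.]]), np.ones(3)),   # Ex. 2
    (np.array([[1., 2.], [3., 4.], [5., 6.]]), np.ones(3)),                # Ex. 3
]
for A, b in examples:
    print(np.linalg.pinv(A) @ b)
# approximately: [-0.5  0.  0.5], [ 2. -1.  0.], [-1.  1.]
```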

REFERENCES

[1] Å. Björck. Numerical Methods for Least Squares Problems. SIAM, Philadelphia, 1996.
[2] S.L. Campbell and C.D. Meyer. Generalized Inverses of Linear Transformations. Pitman, London, 1979.
[3] X. Chen. The generalized inverses of perturbed matrices. Int. J. Comput. Math., 41:223-236, 1992.
[4] Y. Engel, S. Mannor, and R. Meir. The kernel recursive least-squares algorithm. IEEE Trans. Signal Process., 52:2275-2285, 2004.
[5] T.N.E. Greville. The pseudo-inverse of a rectangular or singular matrix and its application to the solution of systems of linear equations. SIAM Rev., 1:38-43, 1959.
[6] T.N.E. Greville. Some applications of the pseudoinverse of a matrix. SIAM Rev., 2:15-22, 1960.
[7] V. Lakshmikantham, S.K. Sen, and S. Sivasundaram. Concise row-pruning algorithm to invert a matrix. Appl. Math. Comput., 60:17-24, 1994.
[8] V. Lakshmikantham and S.K. Sen. Computational Error and Complexity in Science and Engineering. Elsevier, Amsterdam, 2005.
[9] E.A. Lord, V.Ch. Venkaiah, and S.K. Sen. A concise algorithm to solve under-/over-determined linear systems. Simulation, 54:239-240, 1990.
[10] P. Maponi. The solution of linear systems by using the Sherman-Morrison formula. Linear Algebra Appl., 420:276-294, 2007.
[11] S.K. Sen and S. Sen. O(n^3) g-inversion-free noniterative near-consistent linear system solver for minimum-norm least-squares and nonnegative solutions. J. Comput. Methods Sci. Eng., 6:71-85, 2006.
[12] S.K. Sen and E.V. Krishnamurthy. Rank-augmented LU-algorithm for computing Moore-Penrose matric inverses. IEEE Trans. Computers, C-23:199-201, 1974.
[13] S.R. Vatsya and C.C. Tai. Inverse of a perturbed matrix. Int. J. Comput. Math., 23:77-84, 1988.
[14] G.R. Wang, Y.M. Wei, and S. Qiao. Generalized Inverses: Theory and Computations. Science Press, Beijing, 2004.
[15] J. Zhou, Y. Zhu, X.R. Li, and Z. You. Variants of the Greville formula with applications to exact recursive least squares. SIAM J. Matrix Anal. Appl., 24:150-164, 2002.