An Alternative Proof of the Greville Formula


JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 94, No. 1, pp. 23-28, JULY 1997

An Alternative Proof of the Greville Formula

F. E. UDWADIA(1) AND R. E. KALABA(2)

Abstract. A simple proof of the Greville formula for the recursive computation of the Moore-Penrose (MP) inverse of a matrix is presented. The proof utilizes no more than the elementary properties of the MP inverse.

Key Words. Moore-Penrose inverse, simple proof for recursive relation, Greville formula.

1. Introduction

The recursive determination of the MP inverse of a matrix has found extensive application in the fields of statistical inference and estimation theory (Refs. 1 and 2), and more recently in the field of analytical dynamics (Ref. 3). The reason for its extensive applicability is that it provides a systematic method to generate updates whenever data or new information is added sequentially and updated estimates taking this additional information into account are required.

The recursive scheme for the computation of the Moore-Penrose (MP) inverse of a matrix (Refs. 4 and 5) was ingeniously obtained in a famous paper by Greville in 1960 (Ref. 6). However, due to the complexity of the solution technique, the Greville proof is not quoted or outlined even in specialized texts which deal solely with generalized inverses of matrices (e.g., Refs. 2 and 7-9), though his result is invariably stated because of its wide applicability. In this paper, we present a simple proof of the Greville result based on nothing more than the elementary properties of the MP inverse of a matrix.

The Greville result (1960) amounts to the following (Ref. 6). Let B be an m x k matrix, and let it be partitioned as B = [A, a], where A consists of

(1) Professor of Mechanical Engineering, Civil Engineering, and Business, University of Southern California, Los Angeles, California.
(2) Professor of Biomedical Engineering, Electrical Engineering, and Economics, University of Southern California, Los Angeles, California.

0022-3239/97/0700-0023$12.50/0 © 1997 Plenum Publishing Corporation

the first k-1 columns of B and a is its last column. Since the case where a = 0 is trivial, we shall consider in what follows only the case where a ≠ 0. Then, the Moore-Penrose inverse of B can be written, utilizing knowledge of A+, as

$$ B^+ = \begin{bmatrix} A^+ - A^+ a\, b^T \\ b^T \end{bmatrix}, \tag{1} $$

where

$$ b^T = [(I - AA^+)a]^+, \qquad \text{if } a \neq AA^+ a, \tag{2a} $$

$$ b^T = a^T (A^+)^T A^+ / (1 + a^T (A^+)^T A^+ a), \qquad \text{if } a = AA^+ a. \tag{2b} $$

Throughout this paper, the superscript + will indicate the MP inverse. In applications, the column vector a comprises new or additional information, while the matrix A comprises accumulated past data. The generalized inverse B+ of the updated matrix B is then sought, given that the generalized inverse A+ of the matrix A corresponding to the past accumulated data is available. Since right multiplication of AA+ by any m-vector in the column space of A leaves that vector unchanged, Eq. (2a), where a ≠ AA+a, deals with a vector a, or new data, which is not in the column space of A. When a = AA+a, as in Eq. (2b), the vector a is in the column space of A.

2. Proof for the Recursive Determination of the Moore-Penrose Inverse of a Matrix

Consider the least-squares problem

$$ (Bx - b)^T (Bx - b) = \min. \tag{3} $$

Let the m x k matrix B be partitioned as [A, a], where A is an m x (k-1) matrix and a is an m-vector. Similarly, let the column vector x be partitioned as x = [z^T, s]^T, where z is a (k-1)-vector and s is a scalar. Equation (3) can then be expressed in the following form:

$$ (Az + as - b)^T (Az + as - b) = \min, \quad \text{over all vectors } z \text{ and scalars } s. \tag{4} $$

The least-squares minimum-length solution of (3), by the definition of the MP inverse, can be written as

$$ \begin{bmatrix} z \\ s \end{bmatrix} = B^+ b, \tag{5} $$

for arbitrary b. The solution given by Eq. (5) can be interpreted as follows. We are looking for all those pairs (z, s), from among all possible (k-1)-vectors z and scalars s, such that

$$ J(z, s) = (Az + as - b)^T (Az + as - b) $$

is a minimum; from these pairs, we select that pair for which the (k-1)-vector z and the scalar s are such that z^T z + s^2 is a minimum.

First, we begin by setting s = s0, where s0 is some fixed scalar. Thus, we have

$$ J(z, s_0) = (Az + a s_0 - b)^T (Az + a s_0 - b). \tag{6} $$

Minimizing J(z, s0) such that z^T z is also a minimum over all (k-1)-vectors z, using the definition of the MP inverse, we get

$$ z(s_0) = A^+ (b - a s_0). \tag{7} $$

Thus, for a given s0, the vector z is a function of s0. Using Eq. (7) in Eq. (6), we can now find s0 such that

$$ J(z(s_0), s_0) = [(I - AA^+)(b - a s_0)]^T [(I - AA^+)(b - a s_0)] \tag{8} $$

is a minimum. Depending on the vector c = (I - AA+)a, we must now deal with two distinct cases: c ≠ 0 and c = 0.

(i) For c ≠ 0, the unique value of s0 which minimizes J(z(s0), s0) is given by

$$ s_0 = c^+ (I - AA^+) b. \tag{9} $$

But the MP inverse of a nonzero m-vector c is given by

$$ c^+ = c^T / (c^T c), \tag{10} $$

and so Eq. (9) becomes

$$ s_0 = c^T (I - AA^+) b / (c^T c). \tag{11} $$

Moreover, since

$$ c^T = a^T (I - AA^+) $$

and the matrix I - AA+ is symmetric and idempotent, Eq. (11) reduces simply to

$$ s_0 = c^T b / (c^T c) = c^+ b. \tag{12} $$

Combining this last expression with Eq. (7), we can rewrite Eq. (5) as

$$ \begin{bmatrix} z \\ s \end{bmatrix} = \begin{bmatrix} A^+ - A^+ a\, c^+ \\ c^+ \end{bmatrix} b. \tag{13} $$

Noting that this is true for all m-vectors b, we obtain

$$ B^+ = \begin{bmatrix} A^+ - A^+ a\, c^+ \\ c^+ \end{bmatrix}. \tag{14} $$

(ii) For c = 0, we observe that J(z(s0), s0), as given in Eq. (8), is not a function of s0. We thus only need to minimize

$$ J_1(s_0) = z(s_0)^T z(s_0) + s_0^2 $$

over all values of s0, where z(s0) is given by Eq. (7). For convenience, we shall write it as

$$ J_1(s_0) = (v_1 - v_2 s_0)^T (v_1 - v_2 s_0) + s_0^2, \tag{15} $$

where

$$ v_1 = A^+ b, \qquad v_2 = A^+ a. $$

The value of s0 that minimizes J1 is obtained from

$$ dJ_1/ds_0 = -2 v_2^T (v_1 - v_2 s_0) + 2 s_0 = 0, \tag{16} $$

yielding

$$ s_0 = v_2^T v_1 / (1 + v_2^T v_2). \tag{17} $$

Since

$$ d^2 J_1 / ds_0^2 = 2 (1 + v_2^T v_2) $$

is greater than zero, the s0 obtained in Eq. (17) indeed gives a minimum for J1. Substituting the values of v1 and v2 into Eq. (17), we then obtain

$$ s_0 = a^T (A^+)^T A^+ b / (1 + a^T (A^+)^T A^+ a), \tag{18} $$

which may be simplified to

$$ s_0 = e^T b, \tag{19} $$

where

$$ e = (A A^T)^+ a / (1 + a^T (A A^T)^+ a). \tag{20} $$

It is easy to show that e = 0 if and only if a = 0. The "if" part of the statement is obvious. Since the denominator of (20) is never zero, e = 0 implies

(A A^T)+ a = 0. Taking the singular value decomposition of A to be A = U Λ V^T (where Λ is nonsingular and square), e = 0 then requires U Λ^{-2} U^T a = 0, which in turn requires that U^T a = 0. But our condition c = 0 implies that a = AA+a, which in turn requires that a = U U^T a. Using the fact that U^T a = 0, the last equation implies a = 0. Hence, a ≠ 0 implies e ≠ 0.

Using Eq. (7), Eq. (5) can now be written as

$$ \begin{bmatrix} z \\ s \end{bmatrix} = \begin{bmatrix} A^+ - A^+ a\, e^T \\ e^T \end{bmatrix} b, \tag{21} $$

from which it follows, as before, that

$$ B^+ = \begin{bmatrix} A^+ - A^+ a\, e^T \\ e^T \end{bmatrix}. \tag{22} $$

Equations (14) and (22) constitute the Greville result. Equation (14) is identical to Eq. (2a); when e^T is set equal to c+, Eqs. (22) and (2b) become identical, because c = e/(e^T e). In the event that a, or equivalently e, is a null vector, then c is a null vector. □

It is perhaps worthwhile noting that the three properties of the MP inverse which we have mainly used in obtaining the recursive relation are: (i) the MP inverse solves the least-squares minimum-length problem; (ii) the MP inverse of a nonzero column vector is proportional to its transpose; and (iii) the matrix I - AA+ is symmetric and idempotent.

References

1. RAO, C. R., A Note on a Generalized Inverse of a Matrix with Applications to Problems in Mathematical Statistics, Journal of the Royal Statistical Society, Series B, Vol. 24, pp. 152-158, 1962.
2. GRAYBILL, F., Matrices with Applications in Statistics, 2nd Edition, Wadsworth, Belmont, California, 1983.
3. UDWADIA, F. E., and KALABA, R. E., Analytical Dynamics: A New Approach, Cambridge University Press, Cambridge, England, 1996.
4. MOORE, E. H., On the Reciprocal of the General Algebraic Matrix, Bulletin of the American Mathematical Society, Vol. 26, pp. 394-395, 1920.
5. PENROSE, R., A Generalized Inverse for Matrices, Proceedings of the Cambridge Philosophical Society, Vol. 51, pp. 406-413, 1955.
6. GREVILLE, T. N. E., Some Applications of the Pseudoinverse of a Matrix, SIAM Review, Vol. 2, pp. 15-22, 1960.

7. ALBERT, A., Regression and the Moore-Penrose Pseudoinverse, Academic Press, New York, New York, 1972.
8. PRINGLE, R. M., and RAYNER, A. A., Generalized Inverse Matrices, Hafner, New York, New York, 1971.
9. RAO, C. R., and MITRA, S. K., Generalized Inverse of Matrices and Its Applications, Wiley, New York, New York, 1971.
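The two-case recursion derived above lends itself to a direct numerical check. The following sketch (not part of the original paper; the function and variable names are ours, written in Python with NumPy) implements the update for B = [A, a] and compares it against a pseudoinverse computed from scratch, exercising both the case where a lies outside the column space of A and the case where it lies inside.

```python
import numpy as np

def greville_update(A, A_pinv, a, tol=1e-12):
    """Given A and its MP inverse A+, return the MP inverse of B = [A, a],
    i.e. B+ = [[A+ - A+ a b^T], [b^T]] with b^T chosen by the two cases."""
    a = a.reshape(-1, 1)
    d = A_pinv @ a                       # d = A+ a
    c = a - A @ d                        # c = (I - A A+) a
    if np.linalg.norm(c) > tol:
        # Case (i): a not in the column space of A; b^T = c+ = c^T / (c^T c).
        b_T = c.T / float(c.T @ c)
    else:
        # Case (ii): a = A A+ a; b^T = e^T with
        # e = (A A^T)+ a / (1 + a^T (A A^T)+ a), using (A A^T)+ = (A+)^T A+.
        M = np.linalg.pinv(A @ A.T)
        b_T = (a.T @ M) / float(1.0 + a.T @ M @ a)
    return np.vstack([A_pinv - d @ b_T, b_T])

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
Ap = np.linalg.pinv(A)

a1 = rng.standard_normal(6)              # generically not in the column space of A
B1 = np.hstack([A, a1[:, None]])
print(np.allclose(greville_update(A, Ap, a1), np.linalg.pinv(B1)))  # case (i)

a2 = A @ rng.standard_normal(3)          # in the column space of A
B2 = np.hstack([A, a2[:, None]])
print(np.allclose(greville_update(A, Ap, a2), np.linalg.pinv(B2)))  # case (ii)
```

Appending columns one at a time with such an update costs only matrix-vector products per step, which is the source of the formula's appeal in sequential estimation.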