
THE MOORE-PENROSE GENERALIZED INVERSE OF A MATRIX

A Dissertation Submitted For The Award of the Degree of Master of Philosophy In Mathematics

Purva Rajwade
School of Mathematics, Devi Ahilya Vishwavidyalaya (NAAC Accredited Grade "A"), Indore (M.P.)
2013-2014

Contents

Introduction
Chapter 1. Preliminaries
Chapter 2. A generalized inverse for matrices
Chapter 3. Method of elementary transformation to compute Moore-Penrose inverse
References

Introduction

The dissertation is mainly a reading of two research papers ([1], [2]) listed in the references. These papers study the generalized inverse of a matrix defined in [1]. It is defined for any matrix $A$ as the unique solution $X$ of the following four equations:
$AXA = A$ (1)
$XAX = X$ (2)
$(AX)^* = AX$ (3)
$(XA)^* = XA$ (4)
Chapter 1, titled Preliminaries, contains some basic results which we shall use in subsequent chapters: the definitions of Hermitian idempotents, principal idempotent elements and the polar representation of a matrix, followed by some results from [3] and [4]. Chapter 2 starts with the definition of this generalization of the inverse of a matrix as the unique solution of the above set of equations. Such a generalized inverse exists for any (rectangular) matrix with complex elements; it is called the Moore-Penrose inverse. Lemma 2.4 proves $(A^\dagger)^\dagger = A$, $(A^*)^\dagger = (A^\dagger)^*$, that $A^\dagger = A^{-1}$ for a non-singular matrix $A$, and other elementary results. We shall show that, using the singular value decomposition $A = VBW^*$, where $V$ and $W$ are unitary and $B$ is diagonal, $A^\dagger = WB^\dagger V^*$. A new type of spectral decomposition is given, $A = \sum_{\alpha>0} \alpha U_\alpha$, the sum being finite over real values of $\alpha$; hence we get $A^\dagger = \sum_{\alpha>0} \alpha^{-1} U_\alpha^*$. Next, we find the polar representation $A = HV$, where $H = \sum_{\alpha>0} \alpha U_\alpha U_\alpha^*$. Chapter 3 gives a method to compute the Moore-Penrose inverse by elementary transformations.

Chapter 1: Preliminaries

Recall that the conjugate transpose $A^* = (\bar{A})^T$ of a matrix $A$ has the following properties:
$(A^*)^* = A$
$(A + B)^* = A^* + B^*$
$(\lambda A)^* = \bar{\lambda} A^*$
$(BA)^* = A^* B^*$
$AA^* = 0 \Rightarrow A = 0$
Since
$\mathrm{Trace}(AA^*) = \sum_{i=1}^{n} \langle a_i, a_i \rangle = \sum_{i=1}^{n} \sum_{j=1}^{n} |a_{ij}|^2,$
where $a_i$ denotes the $i$-th row of $A$, i.e., the trace of $AA^*$ is the sum of the squares of the moduli of the elements of $A$, the last property follows. Observe that using the fourth and fifth properties we can obtain the rule
$BAA^* = CAA^* \Rightarrow BA = CA$ (1.1)
Indeed,
$(BAA^* - CAA^*)(B - C)^* = BAA^*B^* - CAA^*B^* - BAA^*C^* + CAA^*C^*$
$= (BA - CA)A^*B^* - (BA - CA)A^*C^*$
$= (BA - CA)(A^*B^* - A^*C^*)$
$= (BA - CA)(BA - CA)^*,$
so if $BAA^* = CAA^*$, the left-hand side vanishes, hence $(BA - CA)(BA - CA)^* = 0$ and $BA = CA$ by the fifth property.

Similarly,
$(BA^*A - CA^*A)(B - C)^* = (BA^* - CA^*)(BA^* - CA^*)^*$
and hence
$BA^*A = CA^*A \Rightarrow BA^* = CA^*$ (1.2)

Definition 1.1 (Hermitian idempotents). A Hermitian idempotent matrix is one satisfying $EE^* = E$, that is, $E^* = E$ and $E^2 = E$.

Note 1.2. If $E^* = E$ and $E^2 = E$, then clearly $EE^* = E$. Conversely, if $EE^* = E$ then
$E^* = (EE^*)^* = EE^* = E$, and hence $E^2 = EE = EE^* = E$.

Definition 1.3 (Principal idempotent elements of a matrix). For any square matrix $A$ there exists a unique set of matrices $K_\lambda$, defined for each complex number $\lambda$, such that
$K_\lambda K_\mu = \delta_{\lambda\mu} K_\lambda$ (1.3)
$\sum_\lambda K_\lambda = I$ (1.4)
$AK_\lambda = K_\lambda A$ (1.5)
$(A - \lambda I) K_\lambda$ is nilpotent (1.6)
The non-zero $K_\lambda$'s are called the principal idempotent elements of the matrix.

Remark 1.4 (Existence of the $K_\lambda$'s). Let
$\varphi(x) = (x - \lambda_1)^{n_1} \cdots (x - \lambda_r)^{n_r}$
be the minimal polynomial of $A$, where the factors $(x - \lambda_i)^{n_i}$ are mutually coprime, i.e., for $i \neq j$ there exist $f_i(x)$, $f_j(x)$ such that
$f_i(x)(x - \lambda_i)^{n_i} + f_j(x)(x - \lambda_j)^{n_j} = 1.$
We can write $\varphi(x) = (x - \lambda_i)^{n_i} \psi_i(x)$, where
$\psi_i(x) = \prod_{j \neq i} (x - \lambda_j)^{n_j}.$

As the $\psi_i$'s are coprime, there exist polynomials $\chi_i(x)$ such that
$\sum_i \chi_i(x)\, \psi_i(x) = 1.$
Put $K_{\lambda_i} = \chi_i(A)\, \psi_i(A)$, with the other $K_\lambda$'s zero, so that $\sum K_\lambda = I$. Further,
$(A - \lambda_i I) K_{\lambda_i} = (A - \lambda_i I)\, \chi_i(A)\, \psi_i(A) \quad\Rightarrow\quad [(A - \lambda_i I) K_{\lambda_i}]^{n_i} = 0,$
since the $n_i$-th power contains the factor $(A - \lambda_i I)^{n_i} \psi_i(A) = \varphi(A) = 0$. If $\lambda$ is not an eigenvalue of $A$, then $K_\lambda$ is zero, so the sum in equation (1.4) is finite. Further note that $K_\lambda K_\mu = 0$ if $\lambda \neq \mu$ (the product contains the factor $\varphi(A) = 0$), and as the sum $\sum K_\lambda = I$ is finite, multiplying it by $K_\lambda$ gives $K_\lambda^2 = K_\lambda$. Hence
$K_\lambda K_\mu = \delta_{\lambda\mu} K_\lambda.$
It is clear that $AK_\lambda = K_\lambda A$, since each $K_\lambda$ is a polynomial in $A$.

Theorem 1.5 (Polar representation of a matrix). Any square matrix is the product of a Hermitian matrix with a unitary matrix.

Theorem 1.6 ([3], 3.5.6). The following are equivalent:
1. $r(AB) = r(B)$.
2. The row space of $AB$ is the same as the row space of $B$.
3. $B = DAB$ for some matrix $D$.

Theorem 1.7 (Rank cancellation laws, [3], 3.5.7).
1. If $ABC = ABD$ and $r(AB) = r(B)$, then $BC = BD$.
2. If $CAB = DAB$ and $r(AB) = r(A)$, then $CA = DA$.

Definition 1.8 (Rank factorization). Let $A$ be an $m \times n$ matrix with rank $r \geq 1$. Then $(P, Q)$ is said to be a rank factorization of $A$ if $P$ is $m \times r$, $Q$ is $r \times n$ and $A = PQ$.

Theorem 1.9. If a matrix $A$ is idempotent then its rank and trace are equal.

Proof. Let $r \geq 1$ be the rank of $A$ and $(P, Q)$ be a rank factorization of $A$. Since $A$ is idempotent, $A^2 = A$, i.e.,
$PQPQ = PQ = P I_r Q.$
Since $P$ can be cancelled on the left and $Q$ can be cancelled on the right (writing $PQPQ = P(QP)Q$ and using the rank cancellation laws, as $r(PQ) = r(P) = r(Q) = r$), we get
$QP = I_r.$
Now
$\mathrm{trace}(I_r) = r$
and
$\mathrm{trace}(PQ) = \mathrm{trace}(QP) = r.$
Hence the rank is equal to the trace.

Theorem 1.10 ([3], 8.7.8). A matrix is unitarily similar to a diagonal matrix if and only if it is normal.

Theorem 1.11 ([4], Chapter 8, Theorem 18). Let $V$ be a finite dimensional inner product space, and let $T$ be a self-adjoint linear operator on $V$. Then there is an orthonormal basis for $V$, each vector of which is a characteristic vector for $T$.

Corollary 1.12 ([4], Chapter 8, Corollary to Theorem 18). Let $A$ be an $n \times n$ Hermitian (self-adjoint) matrix. Then there is a unitary matrix $P$ such that $P^{-1}AP$ is diagonal, that is, $A$ is unitarily equivalent to a diagonal matrix.

Note 1.13. If two matrices $A$ and $B$ are Hermitian and have the same eigenvalues, then they are equivalent under a unitary transformation.

Theorem 1.14 ([4], Chapter 9, Theorem 13). Let $V$ be a finite dimensional inner product space and $T$ a non-negative operator on $V$. Then $T$ has a unique non-negative square root, that is, there is one and only one non-negative operator $N$ on $V$ such that $N^2 = T$.

Theorem 1.15 ([4], Chapter 9, Theorem 14). Let $V$ be a finite dimensional inner product space and let $T$ be any linear operator on $V$. Then there exist a unitary operator $U$ on $V$ and a non-negative operator $N$ on $V$ such that $T = UN$. The non-negative operator $N$ is unique. If $T$ is invertible, the operator $U$ is also unique.

Remark 1.16. If a matrix $T$ is non-singular then its polar representation is unique.
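
Theorem 1.9 is easy to check numerically. The following sketch (Python with numpy; the matrices B and C are arbitrary illustrative choices, not taken from the dissertation) builds an idempotent matrix as an oblique projector and compares its rank with its trace.

```python
import numpy as np

# Numerical check of Theorem 1.9: an idempotent matrix has rank = trace.
# P = B (CB)^{-1} C is idempotent: P^2 = B (CB)^{-1} (CB) (CB)^{-1} C = P.
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 x 2, rank 2
C = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])     # 2 x 3
P = B @ np.linalg.inv(C @ B) @ C                     # 3 x 3 idempotent

assert np.allclose(P @ P, P)                         # idempotency
print(np.linalg.matrix_rank(P), np.trace(P))         # 2  2.0
```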

Chapter 2: A generalized inverse for matrices

The following theorem gives the generalized inverse of a matrix: it is the unique solution of a certain set of equations.

Theorem 2.1. The four equations
$AXA = A$ (2.1)
$XAX = X$ (2.2)
$(AX)^* = AX$ (2.3)
$(XA)^* = XA$ (2.4)
have a unique solution for any matrix $A$.

Proof. First, we observe that equations (2.2) and (2.3) are equivalent to the single equation
$XX^*A^* = X$ (2.5)
Indeed, substituting (2.3) into (2.2) gives
$X = XAX = X(AX)^* = XX^*A^*.$
Conversely, suppose equation (2.5) holds. Then
$AX = AXX^*A^* = (AX)(AX)^*.$
Observe that $(AX)(AX)^*$ is Hermitian; thus $(AX)^* = AX$. If we put (2.3) in (2.5), we get equation (2.2). Similarly, equations (2.1) and (2.4) are equivalent to the single equation
$XAA^* = A^*$ (2.6)

Indeed, (2.1) and (2.4) give
$A^* = (AXA)^* = (XA)^*A^* = XAA^*.$
Further, if $XAA^* = A^*$, then multiplying on the right by $X^*$ gives
$XAA^*X^* = A^*X^* = (XA)^*;$
since $XAA^*X^* = (XA)(XA)^*$ is Hermitian, $(XA)^* = XA$, which is (2.4). Also, taking conjugate transposes in $XAA^* = A^*$,
$A = (XAA^*)^* = AA^*X^* = A(XA)^* = AXA,$
which is (2.1).

Thus it is sufficient to find an $X$ satisfying (2.5) and (2.6). Such an $X$ will exist if a $B$ can be found satisfying
$BA^*AA^* = A^*.$
Then $X = BA^*$ satisfies (2.6). Observe that, from equation (2.6),
$XAA^* = A^* \Rightarrow (XA)A^* = A^* \Rightarrow A^*X^*A^* = A^*$ (using (2.4)) $\Rightarrow BA^*X^*A^* = BA^* \Rightarrow XX^*A^* = X,$
i.e., $X$ also satisfies (2.5).

As a matrix satisfies its characteristic equation, the expressions $A^*A, (A^*A)^2, \ldots$ cannot all be linearly independent, i.e., there are scalars $\lambda_i$, $i = 1, 2, \ldots, k$, not all zero, such that
$\lambda_1 A^*A + \lambda_2 (A^*A)^2 + \cdots + \lambda_k (A^*A)^k = 0$ (2.7)
Note that $k$ need not be unique. Let $\lambda_r$ be the first non-zero $\lambda$; then (2.7) becomes
$\lambda_r (A^*A)^r + \lambda_{r+1} (A^*A)^{r+1} + \cdots + \lambda_k (A^*A)^k = 0.$

This gives
$(A^*A)^r = -\lambda_r^{-1}\left[\lambda_{r+1}(A^*A)^{r+1} + \cdots + \lambda_k(A^*A)^k\right] = -\lambda_r^{-1}\left[\lambda_{r+1}I + \lambda_{r+2}A^*A + \cdots + \lambda_k(A^*A)^{k-r-1}\right](A^*A)^{r+1}.$
If we put
$B = -\lambda_r^{-1}\left[\lambda_{r+1}I + \lambda_{r+2}A^*A + \cdots + \lambda_k(A^*A)^{k-r-1}\right],$
then $B(A^*A)^{r+1} = (A^*A)^r$. We can write this equation as
$B(A^*A)^r(A^*A) = (A^*A)^{r-1}(A^*A)$
$\Rightarrow B(A^*A)^rA^* = (A^*A)^{r-1}A^*$ (by (1.2))
$\Rightarrow B(A^*A)^r = (A^*A)^{r-1}$ (by (1.1))
Thus, by repeated applications of (1.2) and (1.1), we get
$B(A^*A)^2 = A^*A$ and then $BA^*AA^* = A^*$ (again by (1.2)).
This is what was required.

Now, to show that this $X$ is unique, let $X$ and $Y$ both satisfy (2.5) and (2.6), and hence (2.1)-(2.4). Substituting (2.4) in (2.2) gives, for $Y$,
$Y = A^*Y^*Y$ (2.8)
and substituting (2.3) in (2.1) and taking conjugate transposes gives
$A^* = A^*AY$ (2.9)
Now
$X = XX^*A^*$ (by (2.5))
$= XX^*A^*AY$ (by (2.9))
$= XAY$ (since $XX^*A^* = X$)
$= XAA^*Y^*Y$ (by (2.8))
$= A^*Y^*Y$ (by (2.6))
$= Y$ (by (2.8))

Thus the solution of (2.1), (2.2), (2.3), (2.4) is unique.

Conversely, if $A^*X^*X = X$, then $A^*X^*XA = XA$, and the left-hand side, being $(XA)^*(XA)$, is Hermitian, so $(XA)^* = XA$. Now, if we substitute $(XA)^* = XA$ in $A^*X^*X = X$, we get $XAX = X$, which is (2.2). Thus (2.4) and (2.2) together are equivalent to (2.8). Similarly, (2.3) and (2.1) together are equivalent to (2.9).

Definition 2.2 (Generalized inverse). The unique solution of
$AXA = A, \quad XAX = X, \quad (AX)^* = AX, \quad (XA)^* = XA$
is called the generalized inverse of $A$. We write $X = A^\dagger$.

Note 2.3. To calculate $A^\dagger$, we only need to solve the two unilateral linear equations
$XAA^* = A^*$ (2.10)
$A^*AY = A^*$ (2.11)
and put $A^\dagger = XAY$. Note that $XA$ and $AY$ are then Hermitian and satisfy
$AXA = A = AYA$
(use the cancellation laws). Then:
1. $AA^\dagger A = AXAYA = AYA = A$
2. $A^\dagger AA^\dagger = XAYAXAY = XAXAY = XAY = A^\dagger$
3. $(AA^\dagger)^* = (AXAY)^* = (AY)^* = AY = AXAY = AA^\dagger$ (since $AY$ is Hermitian)
4. $(A^\dagger A)^* = (XAYA)^* = (XA)^* = XA = XAYA = A^\dagger A$ (since $XA$ is Hermitian)
Thus, if $X$ and $Y$ are solutions of the unilateral linear equations (2.10) and (2.11), then $XAY$ is the generalized inverse.
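
As a quick sanity check, numpy's `np.linalg.pinv` computes exactly this generalized inverse, and the four defining equations can be verified directly. A minimal sketch (the random test matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))  # rectangular, rank <= 3
X = np.linalg.pinv(A)   # Moore-Penrose inverse

# The four Penrose equations (2.1)-(2.4); .conj().T is the conjugate transpose.
print(np.allclose(A @ X @ A, A))              # (2.1) AXA = A
print(np.allclose(X @ A @ X, X))              # (2.2) XAX = X
print(np.allclose((A @ X).conj().T, A @ X))   # (2.3) (AX)* = AX
print(np.allclose((X @ A).conj().T, X @ A))   # (2.4) (XA)* = XA
```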

Moreover, (2.5) and (2.6) are also satisfied by $A^\dagger$, i.e.,
$A^\dagger A^{\dagger*}A^* = A^\dagger$ (2.12)
$A^\dagger AA^* = A^*$ (2.13)
and (2.8) and (2.9) read
$A^\dagger = A^*A^{\dagger*}A^\dagger$ (2.14)
$A^* = A^*AA^\dagger$ (2.15)

Lemma 2.4.
2.4.1 $(A^\dagger)^\dagger = A$.
2.4.2 $(A^*)^\dagger = (A^\dagger)^*$.
2.4.3 If $A$ is non-singular, $A^\dagger = A^{-1}$.
2.4.4 $(\lambda A)^\dagger = \lambda^{-1}A^\dagger$ for $\lambda \neq 0$.
2.4.5 $(A^*A)^\dagger = A^\dagger A^{\dagger*}$.
2.4.6 If $U$ and $V$ are unitary, $(UAV)^\dagger = V^*A^\dagger U^*$.
2.4.7 If $A = \sum_i A_i$, where $A_iA_j^* = 0$ and $A_i^*A_j = 0$ whenever $i \neq j$, then $A^\dagger = \sum_i A_i^\dagger$.
2.4.8 If $A$ is normal, then $A^\dagger A = AA^\dagger$ and $(A^n)^\dagger = (A^\dagger)^n$.
2.4.9 $A$, $A^\dagger$, $A^\dagger A$ and $AA^\dagger$ all have rank equal to the trace of $A^\dagger A$.

Proof. 2.4.1. To show that $A$ is the generalized inverse of $A^\dagger$, we must show
$A^\dagger AA^\dagger = A^\dagger, \quad AA^\dagger A = A, \quad (A^\dagger A)^* = A^\dagger A, \quad (AA^\dagger)^* = AA^\dagger,$
which are exactly (2.2), (2.1), (2.4), (2.3). Hence $(A^\dagger)^\dagger = A$.

2.4.2. To show that the generalized inverse of $A^*$ is $A^{\dagger*}$, i.e., that (2.1), (2.2), (2.3), (2.4) hold when $X$ is replaced by $A^{\dagger*}$ and $A$ by $A^*$:
$A^*A^{\dagger*}A^* = (AA^\dagger A)^* = A^*$ (by (2.1))
$A^{\dagger*}A^*A^{\dagger*} = (A^\dagger AA^\dagger)^* = A^{\dagger*}$ (by (2.2))

$(A^*A^{\dagger*})^* = A^\dagger A = (A^\dagger A)^*$ (by (2.4)) $= A^*A^{\dagger*}$
$(A^{\dagger*}A^*)^* = AA^\dagger = (AA^\dagger)^*$ (by (2.3)) $= A^{\dagger*}A^*$
Hence $(A^*)^\dagger = A^{\dagger*} = (A^\dagger)^*$.

2.4.3. Observe that
$AA^{-1}A = A, \quad A^{-1}AA^{-1} = A^{-1}, \quad (AA^{-1})^* = I^* = I = AA^{-1}, \quad (A^{-1}A)^* = I^* = I = A^{-1}A,$
so $A^\dagger = A^{-1}$.

2.4.4. To show that $\lambda^{-1}A^\dagger$ is the generalized inverse of $\lambda A$ (for $\lambda \neq 0$):
$(\lambda A)(\lambda^{-1}A^\dagger)(\lambda A) = \lambda\, AA^\dagger A = \lambda A$ (by (2.1))
$(\lambda^{-1}A^\dagger)(\lambda A)(\lambda^{-1}A^\dagger) = \lambda^{-1}A^\dagger AA^\dagger = \lambda^{-1}A^\dagger$ (by (2.2))
$\left((\lambda A)(\lambda^{-1}A^\dagger)\right)^* = (AA^\dagger)^* = AA^\dagger = (\lambda A)(\lambda^{-1}A^\dagger)$ (by (2.3))
$\left((\lambda^{-1}A^\dagger)(\lambda A)\right)^* = (A^\dagger A)^* = A^\dagger A = (\lambda^{-1}A^\dagger)(\lambda A)$ (by (2.4))

2.4.5. To show that $A^\dagger A^{\dagger*}$ is the generalized inverse of $A^*A$:
$(A^*A)(A^\dagger A^{\dagger*})(A^*A) = A^*(AA^\dagger)(AA^\dagger)^*A = A^*(AA^\dagger)A = A^*A$ (by (2.3) and (2.1))
$(A^\dagger A^{\dagger*})(A^*A)(A^\dagger A^{\dagger*}) = A^\dagger(AA^\dagger)^*(AA^\dagger)A^{\dagger*} = A^\dagger(AA^\dagger)A^{\dagger*} = (A^\dagger AA^\dagger)A^{\dagger*} = A^\dagger A^{\dagger*}$ (by (2.3) and (2.2))

For the remaining two conditions,
$(A^*A)(A^\dagger A^{\dagger*}) = A^*(AA^\dagger)A^{\dagger*} = A^*(AA^\dagger)^*A^{\dagger*} = (AA^\dagger A)^*A^{\dagger*} = A^*A^{\dagger*} = (A^\dagger A)^* = A^\dagger A$ (by (2.3), (2.1), (2.4))
and similarly
$(A^\dagger A^{\dagger*})(A^*A) = A^\dagger(AA^\dagger)^*A = A^\dagger(AA^\dagger)A = A^\dagger A$ (by (2.3), (2.2)),
so both products equal the Hermitian matrix $A^\dagger A$. Hence $(A^*A)^\dagger = A^\dagger A^{\dagger*}$.

2.4.6. To show that $V^*A^\dagger U^*$ is the generalized inverse of $UAV$. Note that since $U$ and $V$ are unitary, $UU^* = U^*U = I$ and $VV^* = V^*V = I$. Then
$(UAV)(V^*A^\dagger U^*)(UAV) = U(AA^\dagger A)V = UAV$ (by (2.1))
$(V^*A^\dagger U^*)(UAV)(V^*A^\dagger U^*) = V^*(A^\dagger AA^\dagger)U^* = V^*A^\dagger U^*$ (by (2.2))
$\left((UAV)(V^*A^\dagger U^*)\right)^* = (UAA^\dagger U^*)^* = U(AA^\dagger)^*U^* = UAA^\dagger U^*$ (by (2.3)) $= (UAV)(V^*A^\dagger U^*)$ (since $VV^* = I$)

$\left((V^*A^\dagger U^*)(UAV)\right)^* = (V^*A^\dagger AV)^* = V^*(A^\dagger A)^*V = V^*A^\dagger AV$ (by (2.4)) $= (V^*A^\dagger U^*)(UAV)$ (since $U^*U = I$)

2.4.7. To show that $\sum_i A_i^\dagger$ is the generalized inverse of $\sum_i A_i$. First observe that, since $A_j^\dagger = A_j^*A_j^{\dagger*}A_j^\dagger$ (by (2.14)) and $A_iA_j^* = 0$ whenever $i \neq j$,
$A_iA_j^\dagger = A_iA_j^*A_j^{\dagger*}A_j^\dagger = 0$, whenever $i \neq j$.
Also, since $A_i^\dagger = A_i^\dagger A_i^{\dagger*}A_i^*$ (by (2.12)) and $A_i^*A_j = 0$ whenever $i \neq j$,
$A_i^\dagger A_j = A_i^\dagger A_i^{\dagger*}A_i^*A_j = 0$, whenever $i \neq j$.
Now,
$\left(\sum_i A_i\right)\left(\sum_j A_j^\dagger\right)\left(\sum_k A_k\right) = \sum_i A_iA_i^\dagger A_i = \sum_i A_i$
and similarly
$\left(\sum_i A_i^\dagger\right)\left(\sum_j A_j\right)\left(\sum_k A_k^\dagger\right) = \sum_i A_i^\dagger A_iA_i^\dagger = \sum_i A_i^\dagger.$

Then,
$\left(\left(\sum_i A_i\right)\left(\sum_j A_j^\dagger\right)\right)^* = \left(\sum_i A_iA_i^\dagger\right)^* = \sum_i A_iA_i^\dagger = \left(\sum_i A_i\right)\left(\sum_j A_j^\dagger\right)$
(since each $A_iA_i^\dagger$ is Hermitian and $A_iA_j^\dagger = 0$ for $i \neq j$), and similarly
$\left(\left(\sum_i A_i^\dagger\right)\left(\sum_j A_j\right)\right)^* = \sum_i A_i^\dagger A_i = \left(\sum_i A_i^\dagger\right)\left(\sum_j A_j\right).$
Hence $\left(\sum_i A_i\right)^\dagger = \sum_i A_i^\dagger$.

2.4.8. Since $AA^*$ is Hermitian and, as we proved in 2.4.5, $(A^*A)^\dagger = A^\dagger A^{\dagger*}$, we can similarly show that $(AA^*)^\dagger = A^{\dagger*}A^\dagger$. Using these facts we see that
$A^\dagger A = (A^*A)^\dagger(A^*A)$ (indeed $(A^*A)^\dagger(A^*A) = A^\dagger A^{\dagger*}A^*A = A^\dagger(AA^\dagger)^*A = A^\dagger AA^\dagger A = A^\dagger A$)
$= (AA^*)^\dagger(AA^*)$ (since $A$ is normal)
$= A^{\dagger*}A^\dagger AA^*$
$= A^{\dagger*}A^*$ (using (2.13))
$= (AA^\dagger)^* = AA^\dagger$ (since $AA^\dagger$ is Hermitian).
Now, to show that $(A^\dagger)^n$ is the generalized inverse of $A^n$: as $AA^\dagger = A^\dagger A$,
$(A^n)(A^\dagger)^n(A^n) = (AA^\dagger A)^n = A^n$
$(A^\dagger)^n(A^n)(A^\dagger)^n = (A^\dagger AA^\dagger)^n = (A^\dagger)^n$
$\left((A^n)(A^\dagger)^n\right)^* = \left((AA^\dagger)^n\right)^* = \left((AA^\dagger)^*\right)^n = (AA^\dagger)^n = (A^n)(A^\dagger)^n$
$\left((A^\dagger)^n(A^n)\right)^* = \left((A^\dagger A)^n\right)^* = \left((A^\dagger A)^*\right)^n = (A^\dagger A)^n = (A^\dagger)^n(A^n)$

So $(A^n)^\dagger = (A^\dagger)^n$.

2.4.9. First note that
$(A^\dagger A)^2 = A^\dagger AA^\dagger A = A^\dagger A,$
i.e., $A^\dagger A$ is an idempotent; by Theorem 1.9 its rank equals its trace. Moreover, $r(A) = r(AA^\dagger A) \leq r(A^\dagger A) \leq r(A)$ and $r(A^\dagger) = r(A^\dagger AA^\dagger) \leq r(A^\dagger A) \leq r(A^\dagger)$, and the same argument applies to $AA^\dagger$; hence $A$, $A^\dagger$, $A^\dagger A$ and $AA^\dagger$ all have rank equal to the trace of $A^\dagger A$.

Remark 2.5. Since by equation (2.12) we can write
$A^\dagger = A^\dagger A^{\dagger*}A^* = (A^*A)^\dagger A^*$ (by 2.4.5) (2.16)
we can calculate the generalized inverse of a matrix $A$ from the generalized inverse of $A^*A$. As $A^*A$ is Hermitian, it can be reduced to diagonal form by a unitary transformation, i.e., $A^*A = UDU^*$, where $U$ is unitary and $D = \mathrm{diag}(\alpha_1, \alpha_2, \ldots, \alpha_n)$. Then
$D^\dagger = \mathrm{diag}(\alpha_1^\dagger, \alpha_2^\dagger, \ldots, \alpha_n^\dagger),$
where $\alpha^\dagger = \alpha^{-1}$ for $\alpha \neq 0$ and $0^\dagger = 0$. By 2.4.6 we can write
$(A^*A)^\dagger = UD^\dagger U^*$
and so
$A^\dagger = UD^\dagger U^*A^*$ (by (2.16)).

Note 2.6. By the singular value decomposition, we know that any square matrix $A$ can be written in the form $A = VBW^*$, where $V$ and $W$ are unitary and $B$ is diagonal. Indeed, since $AA^*$ and $A^*A$ are both Hermitian and have the same eigenvalues, there exists a unitary matrix $T$ such that $TAA^*T^* = A^*A$ (by 1.13). Observe that
$(TA)(TA)^* = TAA^*T^* = A^*A$
$(TA)^*(TA) = A^*T^*TA = A^*A$ (since $T^*T = I$),
i.e., $TA$ is normal and so diagonable by a unitary transformation (by 1.10); this yields the decomposition $A = VBW^*$. As above, by 2.4.6 we get
$A^\dagger = WB^\dagger V^*.$
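
Note 2.6 translates directly into an algorithm: compute an SVD and invert only the non-zero diagonal entries. A sketch in Python/numpy (the tolerance and the test matrix are illustrative choices):

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """A+ = W B+ V* where A = V B W* is the singular value decomposition."""
    V, s, Wh = np.linalg.svd(A)                   # numpy returns A = V @ diag(s) @ Wh
    s_dag = np.array([1.0 / x if x > tol else 0.0 for x in s])  # B+ inverts non-zero entries
    B_dag = np.zeros((A.shape[1], A.shape[0]))
    np.fill_diagonal(B_dag, s_dag)
    return Wh.conj().T @ B_dag @ V.conj().T       # W B+ V*

A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0,  1.0],
              [1.0, 1.0,  0.0]])                  # singular, rank 2
print(np.allclose(pinv_via_svd(A), np.linalg.pinv(A)))  # True
```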

Remark 2.7. The map $A \mapsto A^\dagger$ from $M_{m \times n}$ to $M_{n \times m}$ is not continuous. Consider
$A_\epsilon = \begin{pmatrix} 1 & 0 \\ 0 & \epsilon \end{pmatrix}$, so that $A_\epsilon^\dagger = A_\epsilon^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & \epsilon^{-1} \end{pmatrix}$ for $\epsilon \neq 0$.
We have
$\lim_{\epsilon \to 0} A_\epsilon = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},$
which is a singular matrix although each $A_\epsilon$ is non-singular. Thus, in this case,
$A_\epsilon \to A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$, but $A_\epsilon^\dagger \not\to A^\dagger = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$.
However, if the rank of $A$ is kept fixed, then the map is continuous [5].

Theorem 2.8. A necessary and sufficient condition for the equation $AXB = C$ to have a solution is
$AA^\dagger CB^\dagger B = C,$
in which case the general solution is
$X = A^\dagger CB^\dagger + Y - A^\dagger AYBB^\dagger,$
where $Y$ is arbitrary.

Proof. Suppose $X$ satisfies $AXB = C$. Then
$C = AXB = AA^\dagger AXBB^\dagger B = AA^\dagger CB^\dagger B.$
Conversely, if $C = AA^\dagger CB^\dagger B$, then $X = A^\dagger CB^\dagger$ is a particular solution of $AXB = C$. For the general solution we have to solve $AXB = 0$. For
$X = Y - A^\dagger AYBB^\dagger,$
where $Y$ is arbitrary, we have
$AXB = AYB - AA^\dagger AYBB^\dagger B = AYB - AYB = 0,$
since $AA^\dagger A = A$ and $BB^\dagger B = B$. Conversely, if $AXB = 0$, then taking $Y = X$ gives $X = Y - A^\dagger AYBB^\dagger$, so every solution of the homogeneous equation is of this form.
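
Both Remark 2.7 and Theorem 2.8 are easy to see numerically. A small sketch (Python/numpy; all test matrices are arbitrary illustrative choices):

```python
import numpy as np

# Remark 2.7: A_eps = diag(1, eps) converges to diag(1, 0), but its
# pseudoinverse diag(1, 1/eps) diverges -- the map A -> A+ is not continuous.
for eps in (1e-1, 1e-3, 1e-6):
    print(np.linalg.pinv(np.diag([1.0, eps]))[1, 1])   # 1/eps, growing without bound

# Theorem 2.8: solvability of AXB = C and its general solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((2, 5))
C = A @ rng.standard_normal((3, 2)) @ B            # consistent by construction
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

print(np.allclose(A @ Ap @ C @ Bp @ B, C))         # solvability: A A+ C B+ B = C
Y = rng.standard_normal((3, 2))                    # arbitrary
X = Ap @ C @ Bp + Y - Ap @ A @ Y @ B @ Bp          # general solution
print(np.allclose(A @ X @ B, C))                   # True
```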

Corollary 2.9. The general solution of the vector equation $Px = c$ is
$x = P^\dagger c + (I - P^\dagger P)y,$
where $y$ is arbitrary, provided that the equation has a solution.

Proof. By the above theorem (taking $A = P$, $B = I$ and $C = c$),
$x = P^\dagger c + y - P^\dagger Py = P^\dagger c + (I - P^\dagger P)y,$
where $y$ is arbitrary.

Corollary 2.10. A necessary and sufficient condition for the equations $AX = C$, $XB = D$ to have a common solution is that each equation should individually have a solution and that $AD = CB$.

Proof. The condition is obviously necessary, since
$AX = C \Rightarrow AXB = CB \Rightarrow AD = CB$ (since $XB = D$).
Now, to show the condition is sufficient, put
$X = A^\dagger C + DB^\dagger - A^\dagger ADB^\dagger.$
Since each equation individually has a solution, $AA^\dagger C = C$ and $DB^\dagger B = D$. Then
$AX = AA^\dagger C + ADB^\dagger - AA^\dagger ADB^\dagger = AA^\dagger C + ADB^\dagger - ADB^\dagger = AA^\dagger C = C$
and
$XB = A^\dagger CB + DB^\dagger B - A^\dagger ADB^\dagger B$
$= A^\dagger CB + D - A^\dagger AD$ (since $DB^\dagger B = D$)
$= A^\dagger CB + D - A^\dagger CB$ (since $AD = CB$)
$= D.$
So $X = A^\dagger C + DB^\dagger - A^\dagger ADB^\dagger$ is a common solution provided the conditions $AA^\dagger C = C$, $DB^\dagger B = D$ and $AD = CB$ are satisfied.
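
For the vector equation of Corollary 2.9 this is the familiar least-squares picture: $P^\dagger c$ is the minimum-norm particular solution and $(I - P^\dagger P)y$ ranges over the null space of $P$. A minimal sketch (Python/numpy, arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(2)
P = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 6))  # 4 x 6, rank <= 3
c = P @ rng.standard_normal(6)          # consistent right-hand side

Pp = np.linalg.pinv(P)
assert np.allclose(P @ Pp @ c, c)       # the equation has a solution

y = rng.standard_normal(6)              # arbitrary
x = Pp @ c + (np.eye(6) - Pp @ P) @ y   # general solution of Px = c
print(np.allclose(P @ x, c))            # True
print(np.linalg.norm(Pp @ c) <= np.linalg.norm(x))  # P+c is the minimum-norm solution
```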

Lemma 2.11.
2.11.1. $A^\dagger A$, $AA^\dagger$, $I - A^\dagger A$ and $I - AA^\dagger$ are all Hermitian idempotents.
2.11.2. If $E$ is a Hermitian idempotent, then $E^\dagger = E$.
2.11.3. $K$ is idempotent if and only if there exist Hermitian idempotents $E$ and $F$ such that $K = (FE)^\dagger$, in which case $K = EKF$.

Proof. 2.11.1. First,
$(A^\dagger A)(A^\dagger A) = (A^\dagger AA^\dagger)A = A^\dagger A$ (by (2.2)),
and $A^\dagger A$ is Hermitian by (2.4). Similarly, $(AA^\dagger)(AA^\dagger) = AA^\dagger$, which is Hermitian by (2.3). Then
$(I - A^\dagger A)(I - A^\dagger A) = I - 2A^\dagger A + A^\dagger AA^\dagger A = I - A^\dagger A$
and
$(I - AA^\dagger)(I - AA^\dagger) = I - 2AA^\dagger + AA^\dagger AA^\dagger = I - AA^\dagger,$
and both are Hermitian, being differences of Hermitian matrices.

2.11.2. Suppose $E^* = E$ and $E^2 = E$. Then
$EEE = E$

and
$(EE)^* = E^*E^* = EE,$
so (2.1), (2.2), (2.3) and (2.4) hold with both $A$ and $X$ equal to $E$. Hence $E^\dagger = E$.

2.11.3. First let $K$ be idempotent, i.e., $K^2 = K$. Then
$K^\dagger = K^\dagger KK^\dagger = K^\dagger(K \cdot K)K^\dagger = (K^\dagger K)(KK^\dagger) = FE,$
where $F = K^\dagger K$ and $E = KK^\dagger$, so that $K = (K^\dagger)^\dagger = (FE)^\dagger$. Clearly $F$ and $E$ are Hermitian idempotents (by 2.11.1). Further,
$EKF = KK^\dagger KK^\dagger K = KK^\dagger K = K.$
Conversely, suppose $K = (FE)^\dagger$. By (2.14), $K = (FE)^*K^*K = EFK^*K$, so $EK = K$; by (2.12), $K = KK^*(FE)^* = KK^*EF$, so $KF = K$; hence $K = EKF$. Then
$K^2 = (EKF)(EKF) = E\left[(FE)^\dagger(FE)(FE)^\dagger\right]F = E(FE)^\dagger F = EKF = K$ (since $A^\dagger AA^\dagger = A^\dagger$),
so $K$ is idempotent.

Theorem 2.12. If
$E_\lambda = I - \{(A - \lambda I)^n\}^\dagger (A - \lambda I)^n$
and
$F_\lambda = I - (A - \lambda I)^n \{(A - \lambda I)^n\}^\dagger,$

where $n$ is sufficiently large (e.g. the order of $A$), then the principal idempotent elements of $A$ are given by
$K_\lambda = (F_\lambda E_\lambda)^\dagger.$
Further, $n$ can be taken as unity if and only if $A$ is diagonalizable.

Proof. First suppose that $A$ is diagonalizable. Put
$E_\lambda = I - (A - \lambda I)^\dagger (A - \lambda I)$
and
$F_\lambda = I - (A - \lambda I)(A - \lambda I)^\dagger.$
By 2.11.1, $E_\lambda$ and $F_\lambda$ are Hermitian idempotents. If $\lambda$ is not an eigenvalue of $A$, then for no non-zero $x$ is $(A - \lambda I)x = 0$, so
$\mathrm{Ker}(A - \lambda I) = 0 \Rightarrow A - \lambda I$ is invertible $\Rightarrow (A - \lambda I)^\dagger = (A - \lambda I)^{-1}$ (by 2.4.3) $\Rightarrow (A - \lambda I)^\dagger(A - \lambda I) = I \Rightarrow E_\lambda = F_\lambda = 0.$
Now,
$(A - \mu I)E_\mu = (A - \mu I)\left[I - (A - \mu I)^\dagger(A - \mu I)\right] = (A - \mu I) - (A - \mu I)(A - \mu I)^\dagger(A - \mu I) = 0.$
Thus
$(A - \mu I)E_\mu = 0$ (2.17)
and similarly
$F_\lambda(A - \lambda I) = 0$ (2.18)
From (2.17), $AE_\mu = \mu E_\mu$, so

$F_\lambda AE_\mu = \mu F_\lambda E_\mu$ (2.19)
From (2.18), $F_\lambda A = \lambda F_\lambda$, so
$F_\lambda AE_\mu = \lambda F_\lambda E_\mu$ (2.20)
Equations (2.19) and (2.20) imply
$\lambda F_\lambda E_\mu = F_\lambda AE_\mu = \mu F_\lambda E_\mu \Rightarrow (\lambda - \mu)F_\lambda E_\mu = 0 \Rightarrow F_\lambda E_\mu = 0$ if $\lambda \neq \mu$ (2.21)
By 2.11.3 we have $K_\lambda = (F_\lambda E_\lambda)^\dagger$ and
$K_\lambda = E_\lambda K_\lambda F_\lambda = E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda$ (2.22)
So, for $\lambda = \mu$,
$K_\lambda K_\mu = E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda = E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda = K_\lambda,$
and if $\lambda \neq \mu$, $K_\lambda K_\mu = 0$, since $F_\lambda E_\mu = 0$. Hence equation (2.21) gives
$K_\lambda K_\mu = \delta_{\lambda\mu} K_\lambda$ (2.23)
and also
$F_\lambda K_\mu E_\nu = \delta_{\lambda\mu}\delta_{\mu\nu} F_\lambda E_\lambda$ (2.24)
Next, let $Z_\alpha$ be any eigenvector of $A$ corresponding to the eigenvalue $\alpha$ (i.e., $(A - \alpha I)Z_\alpha = 0$). Then
$E_\alpha Z_\alpha = \left[I - (A - \alpha I)^\dagger(A - \alpha I)\right]Z_\alpha = Z_\alpha - (A - \alpha I)^\dagger(A - \alpha I)Z_\alpha = Z_\alpha.$
Since $A$ is diagonalizable, any column vector $x$ conformable with $A$ is expressible as a finite sum of eigenvectors over all complex $\lambda$, i.e.,
$x = \sum_\lambda Z_\lambda = \sum_\lambda E_\lambda x_\lambda.$

Similarly, if $y$ is conformable with $A$, it is expressible as
$y^* = \sum_\lambda y_\lambda^* F_\lambda.$
Now
$y^*\left(\sum_\mu K_\mu\right)x = \left(\sum_\lambda y_\lambda^* F_\lambda\right)\left(\sum_\mu K_\mu\right)\left(\sum_\nu E_\nu x_\nu\right) = \sum_\lambda y_\lambda^* F_\lambda E_\lambda x_\lambda$ (by (2.24))
$= \left(\sum_\lambda y_\lambda^* F_\lambda\right)\left(\sum_\nu E_\nu x_\nu\right)$ (by (2.21))
$= y^*x.$
Hence
$\sum_\mu K_\mu = I$ (2.25)
Also, from equation (2.17) we have $(A - \lambda I)E_\lambda = 0$, so $AE_\lambda = \lambda E_\lambda$ and
$AE_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda = \lambda E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda,$
i.e., by (2.22),
$AK_\lambda = \lambda K_\lambda$ (2.26)
From (2.18), $F_\lambda A = \lambda F_\lambda$, so
$E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda A = \lambda E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda,$
i.e., by (2.22),
$K_\lambda A = \lambda K_\lambda$ (2.27)
From (2.26) and (2.27),
$AK_\lambda = \lambda K_\lambda = K_\lambda A$ (2.28)
Thus conditions (1.5) and (1.6) are satisfied (indeed $(A - \lambda I)K_\lambda = 0$). Now, as $\sum K_\lambda = I$,
$A = \sum_\lambda \lambda K_\lambda$ (2.29)
Conversely, let $n = 1$ and suppose $A$ is not diagonalizable. Observe that, by (2.28),
$AK_\lambda x = \lambda K_\lambda x,$

that is, for any vector $x$, $K_\lambda x$ is an eigenvector of $A$ corresponding to $\lambda$ (or zero). Therefore $x = \sum K_\lambda x$ expresses $x$ as a sum of eigenvectors of $A$, contradicting the assumption that $A$ is not diagonalizable. Note that (2.28) was deduced without assuming the diagonability of $A$.

Now we shall prove that for any set of $K_\lambda$'s satisfying (1.3), (1.4), (1.5) and (1.6) we have $K_\lambda = (F_\lambda E_\lambda)^\dagger$, where $F_\lambda$ and $E_\lambda$ are as defined. We must have
$\sum K_\lambda = I, \quad (A - \lambda I)^n K_\lambda = 0 = K_\lambda(A - \lambda I)^n,$
where $n$ is sufficiently large. This gives
$E_\lambda K_\lambda = K_\lambda = K_\lambda F_\lambda$ (2.30)
As, for $\lambda \neq \mu$, $(x - \lambda)^n$ and $(x - \mu)^n$ are coprime, there are polynomials $P(x)$ and $Q(x)$ such that
$I = (A - \lambda I)^n P(A) + Q(A)(A - \mu I)^n$ (2.31)
Now,
$F_\lambda(A - \lambda I)^n = \left[I - (A - \lambda I)^n\{(A - \lambda I)^n\}^\dagger\right](A - \lambda I)^n = 0,$
since $(A - \lambda I)^n\{(A - \lambda I)^n\}^\dagger(A - \lambda I)^n = (A - \lambda I)^n$. Similarly,
$(A - \mu I)^n E_\mu = 0.$
Hence, using (2.31),
$F_\lambda E_\mu = 0$, if $\lambda \neq \mu$,
and therefore, using (2.30),
$F_\lambda K_\mu = 0 = K_\lambda E_\mu$, if $\lambda \neq \mu$.
Since $\sum K_\lambda = I$,
$F_\lambda K_\lambda = F_\lambda, \quad K_\lambda E_\lambda = E_\lambda$ (2.32)
Now, use (2.30) and (2.32) to see that
$(F_\lambda E_\lambda)K_\lambda(F_\lambda E_\lambda) = F_\lambda E_\lambda$
$K_\lambda(F_\lambda E_\lambda)K_\lambda = K_\lambda$
$(F_\lambda E_\lambda K_\lambda)^* = F_\lambda E_\lambda K_\lambda$
$(K_\lambda F_\lambda E_\lambda)^* = K_\lambda F_\lambda E_\lambda$

These equations can be verified as below:
$(F_\lambda E_\lambda)K_\lambda(F_\lambda E_\lambda) = F_\lambda(E_\lambda K_\lambda)F_\lambda E_\lambda = F_\lambda(K_\lambda F_\lambda)E_\lambda$ (by (2.30)) $= F_\lambda K_\lambda E_\lambda$ (by (2.30)) $= F_\lambda E_\lambda$ (by (2.32))
$K_\lambda(F_\lambda E_\lambda)K_\lambda = K_\lambda F_\lambda(E_\lambda K_\lambda) = K_\lambda F_\lambda K_\lambda$ (by (2.30)) $= K_\lambda F_\lambda$ (by (2.32)) $= K_\lambda$ (by (2.30))
$(F_\lambda E_\lambda K_\lambda)^* = (F_\lambda K_\lambda)^*$ (by (2.30)) $= F_\lambda^*$ (by (2.32)) $= F_\lambda$ (since $F_\lambda$ is a Hermitian idempotent) $= F_\lambda K_\lambda$ (by (2.32)) $= F_\lambda E_\lambda K_\lambda$ (by (2.30))
$(K_\lambda F_\lambda E_\lambda)^* = (K_\lambda E_\lambda)^*$ (by (2.30)) $= E_\lambda^*$ (by (2.32)) $= E_\lambda$ (since $E_\lambda$ is a Hermitian idempotent) $= K_\lambda E_\lambda$ (by (2.32)) $= K_\lambda F_\lambda E_\lambda$ (by (2.30))
These are exactly the four Penrose equations for $F_\lambda E_\lambda$ with $X = K_\lambda$. Hence
$(F_\lambda E_\lambda)^\dagger = K_\lambda$
and $K_\lambda$ is unique.

Corollary 2.13. If $A$ is normal, it is diagonalizable and its principal idempotent elements are Hermitian.

Proof. If $A$ is normal then $A - \lambda I$ is also normal. Then, by 2.4.8,
$(A - \lambda I)(A - \lambda I)^\dagger = (A - \lambda I)^\dagger(A - \lambda I),$
so (with $n = 1$)
$E_\lambda = I - (A - \lambda I)^\dagger(A - \lambda I) = F_\lambda,$
and $K_\lambda = (F_\lambda E_\lambda)^\dagger = (E_\lambda)^\dagger = E_\lambda$ (by 2.11.2) is Hermitian, since $E_\lambda$ and $F_\lambda$ are both Hermitian.
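
Theorem 2.12 is constructive: $E_\lambda$, $F_\lambda$ and $K_\lambda = (F_\lambda E_\lambda)^\dagger$ can all be computed from pseudoinverses. The sketch below (Python/numpy; the non-normal test matrix is an arbitrary choice) recovers the principal idempotents of a diagonalizable matrix with $n = 1$ and checks (1.3), (1.4), (1.5) and (2.29).

```python
import numpy as np

def principal_idempotent(A, lam):
    """K_lam = (F_lam E_lam)+ with E_lam = I - M+ M, F_lam = I - M M+, M = A - lam I
    (Theorem 2.12 with n = 1, valid for diagonalizable A)."""
    n = A.shape[0]
    M = A - lam * np.eye(n)
    Mp = np.linalg.pinv(M)
    E = np.eye(n) - Mp @ M
    F = np.eye(n) - M @ Mp
    return np.linalg.pinv(F @ E)

A = np.array([[2.0, 1.0], [0.0, 5.0]])   # diagonalizable but not normal
K2 = principal_idempotent(A, 2.0)
K5 = principal_idempotent(A, 5.0)

print(np.allclose(K2 + K5, np.eye(2)))       # (1.4): sum K_lam = I
print(np.allclose(K2 @ K5, 0 * A))           # (1.3): K_lam K_mu = 0
print(np.allclose(A @ K2, K2 @ A))           # (1.5): A K_lam = K_lam A
print(np.allclose(2 * K2 + 5 * K5, A))       # (2.29): A = sum lam K_lam
```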

Note 2.14. If $A$ is normal then
$A^\dagger = \sum_\lambda \lambda^\dagger E_\lambda,$
since
$A^\dagger = \left(\sum_\lambda \lambda E_\lambda\right)^\dagger = \sum_\lambda (\lambda E_\lambda)^\dagger$ (by 2.4.7)
$= \sum_\lambda \lambda^\dagger E_\lambda^\dagger$ (by 2.4.4)
$= \sum_\lambda \lambda^\dagger E_\lambda$ (since $E_\lambda$ is a Hermitian idempotent, $E_\lambda^\dagger = E_\lambda$ by 2.11.2),
where $\lambda^\dagger = \lambda^{-1}$ for $\lambda \neq 0$ and $0^\dagger = 0$.

A new type of spectral decomposition. In view of the above note it is clear that if $A$ is normal then we get a simple expression for $A^\dagger$ in terms of its principal idempotent elements. Below, we prove a new type of spectral decomposition, valid for an arbitrary matrix, from which we again get a relatively simple expression for $A^\dagger$.

Theorem 2.15. Any matrix $A$ is uniquely expressible in the form
$A = \sum_{\alpha > 0} \alpha U_\alpha,$
this being a finite sum over real values of $\alpha$, where
$U_\alpha^\dagger = U_\alpha^*$ (2.33)
$U_\alpha^* U_\beta = 0$ (2.34)
$U_\alpha U_\beta^* = 0$ (2.35)
if $\alpha \neq \beta$. Thus, arguing as in the above note (using 2.4.7 and 2.4.4), we can write
$A^\dagger = \sum_{\alpha > 0} \alpha^{-1} U_\alpha^*$ (since $U_\alpha^\dagger = U_\alpha^*$).

Proof. Equations (2.33), (2.34) and (2.35) can be comprehensively written as
$U_\alpha U_\beta^* U_\gamma = \delta_{\alpha\beta}\delta_{\beta\gamma} U_\alpha$ (2.36)
For $\alpha = \beta = \gamma$,
$U_\alpha U_\alpha^* U_\alpha = U_\alpha$ and hence also $U_\alpha^* U_\alpha U_\alpha^* = U_\alpha^*.$
Also, note that $U_\alpha U_\alpha^*$ and $U_\alpha^* U_\alpha$ are both Hermitian. Therefore, by the uniqueness of the generalized inverse,
$U_\alpha^\dagger = U_\alpha^*$ (2.37)

Also, for $\alpha \neq \beta$,
$U_\alpha U_\alpha^* U_\beta = 0$ and $U_\alpha U_\beta^* U_\beta = 0$
respectively imply
$U_\alpha^* U_\beta = 0$ and $U_\alpha U_\beta^* = 0$ (by (1.1) and (1.2)).
Define
$E_\lambda = I - (A^*A - \lambda I)^\dagger(A^*A - \lambda I).$
The matrix $A^*A$ is normal, being Hermitian, and is non-negative definite. Hence the non-zero $E_\lambda$'s are its principal idempotent elements (by Corollary 2.13), and $E_\lambda = 0$ unless $\lambda \geq 0$. Thus
$A^*A = \sum_\lambda \lambda E_\lambda$ and $(A^*A)^\dagger = \sum_\lambda \lambda^\dagger E_\lambda.$
Hence
$A^\dagger A = (A^*A)^\dagger(A^*A) = \sum_\lambda \lambda^\dagger \lambda E_\lambda = \sum_{\lambda > 0} E_\lambda.$
Put
$U_\alpha = \begin{cases} \alpha^{-1}AE_{\alpha^2} & \text{if } \alpha > 0 \\ 0 & \text{otherwise} \end{cases}$
Then
$\sum_{\alpha > 0} \alpha U_\alpha = \sum_{\alpha > 0} \alpha\alpha^{-1}AE_{\alpha^2} = A\sum_{\lambda > 0} E_\lambda = AA^\dagger A = A.$
Also, if $\alpha, \beta, \gamma > 0$, then
$U_\alpha U_\beta^* U_\gamma = (\alpha^{-1}AE_{\alpha^2})(\beta^{-1}AE_{\beta^2})^*(\gamma^{-1}AE_{\gamma^2})$
$= \alpha^{-1}\beta^{-1}\gamma^{-1}AE_{\alpha^2}E_{\beta^2}A^*AE_{\gamma^2}$ (since $E_{\beta^2}^* = E_{\beta^2}$)
$= \alpha^{-1}\beta^{-1}\gamma^{-1}AE_{\alpha^2}E_{\beta^2}\left(\sum_\lambda \lambda E_\lambda\right)E_{\gamma^2}$
$= \alpha^{-1}\beta^{-1}\gamma^{-1}\gamma^2\, AE_{\alpha^2}E_{\beta^2}E_{\gamma^2}$ (since $E_\lambda E_\mu = \delta_{\lambda\mu}E_\lambda$)
$= \delta_{\alpha\beta}\delta_{\beta\gamma}\,\alpha^{-1}AE_{\alpha^2} = \delta_{\alpha\beta}\delta_{\beta\gamma}U_\alpha.$
For uniqueness, suppose
$A = \sum_{\alpha > 0} \alpha V_\alpha$, where $V_\alpha V_\beta^* V_\gamma = \delta_{\alpha\beta}\delta_{\beta\gamma}V_\alpha.$

Then $V_\alpha^* V_\alpha$ (for $\alpha > 0$) and $I - \sum_{\beta > 0} V_\beta^* V_\beta$ are the principal idempotent elements of $A^*A$ corresponding to the eigenvalues $\alpha^2$ and $0$ respectively (indeed $A^*A = \sum_{\alpha > 0} \alpha^2 V_\alpha^* V_\alpha$, and these are Hermitian idempotents which are mutually orthogonal and sum to $I$). Hence
$V_\alpha^* V_\alpha = E_{\alpha^2}$, where $\alpha > 0$.
So
$U_\alpha = \alpha^{-1}AE_{\alpha^2} = \alpha^{-1}\left(\sum_{\beta > 0}\beta V_\beta\right)V_\alpha^* V_\alpha = V_\alpha.$

Note 2.16. Thus, by 2.4.7 and 2.4.4 (as in Note 2.14),
$A^\dagger = \sum_{\alpha > 0} \alpha^{-1}U_\alpha^*.$

Remark 2.17. Put
$H = \sum_{\alpha > 0} \alpha\, U_\alpha U_\alpha^*.$
Clearly, $H$ is non-negative definite Hermitian (since $U_\alpha = 0$ unless $\alpha > 0$) and
$H^2 = \sum_{\alpha,\beta} \alpha\beta\, U_\alpha U_\alpha^* U_\beta U_\beta^* = \sum_{\alpha > 0} \alpha^2\, U_\alpha U_\alpha^* = \left(\sum_{\alpha > 0}\alpha U_\alpha\right)\left(\sum_{\alpha > 0}\alpha U_\alpha\right)^* = AA^*.$
This means $H$ must be unique, as the non-negative square root of $AA^*$ (Theorem 1.14). Also, each $U_\alpha U_\alpha^*$ is a Hermitian idempotent, so by 2.4.7 and 2.4.4,
$H^\dagger = \sum_{\alpha > 0} \alpha^{-1}\, U_\alpha U_\alpha^*.$
Now
$HH^\dagger = \sum_{\alpha,\beta} \alpha\beta^{-1}\, U_\alpha U_\alpha^* U_\beta U_\beta^* = \sum_{\alpha > 0} U_\alpha U_\alpha^* = \left(\sum_{\alpha > 0}\alpha U_\alpha\right)\left(\sum_{\alpha > 0}\alpha^{-1}U_\alpha^*\right) = AA^\dagger.$
Similarly, $H^\dagger H = AA^\dagger$. Hence
$HH^\dagger = H^\dagger H = AA^\dagger.$
Now, since $AA^*$ and $A^*A$ are both Hermitian and have the same eigenvalues, they are equivalent under a unitary transformation (by 1.13), i.e., there is a unitary matrix $W$ satisfying
$WA^*A = AA^*W.$
Putting
$V = H^\dagger A + W - WA^\dagger A,$

we get
$VV^* = (H^\dagger A + W - WA^\dagger A)(A^*H^\dagger + W^* - A^\dagger AW^*)$
$= H^\dagger AA^*H^\dagger + H^\dagger AW^* - H^\dagger AA^\dagger AW^* + WA^*H^\dagger + WW^* - WA^\dagger AW^* - WA^\dagger AA^*H^\dagger - WA^\dagger AW^* + WA^\dagger AA^\dagger AW^*$
$= H^\dagger AA^*H^\dagger + I - WA^\dagger AW^*$ (since $AA^\dagger A = A$, $A^\dagger AA^* = A^*$ by (2.13), and $WW^* = I$)
$= H^\dagger AA^*H^\dagger + I - AA^\dagger$ (since $WA^\dagger A = AA^\dagger W$: as $(A^*A)^\dagger$ is a polynomial in $A^*A$, $WA^*A = AA^*W$ gives $WA^\dagger A = W(A^*A)^\dagger(A^*A) = (AA^*)^\dagger(AA^*)W = AA^\dagger W$; and $AA^\dagger WW^* = AA^\dagger$)
$= H^\dagger H^2 H^\dagger + I - AA^\dagger$ (since $H^2 = AA^*$)
$= (H^\dagger H)(HH^\dagger) + I - AA^\dagger$
$= (AA^\dagger)^2 + I - AA^\dagger$ (since $HH^\dagger = H^\dagger H = AA^\dagger$)
$= I$ (since $(AA^\dagger)^2 = AA^\dagger$),
so $V$ is unitary. Also
$HV = HH^\dagger A + HW - HWA^\dagger A$
$= AA^\dagger A + HW - HAA^\dagger W$ (since $WA^\dagger A = AA^\dagger W$)
$= A + HW - HHH^\dagger W$ (since $AA^\dagger = HH^\dagger$)
$= A + HW - HW = A$ (since $HHH^\dagger = H$),
which is the polar representation of $A$.

Remark 2.18. The polar representation is unique if $A$ is non-singular (by 1.15). If we require $A = HU$, where $U^\dagger = U^*$ and $UU^* = H^\dagger H$, the representation is always

unique, and also exists for rectangular matrices. The uniqueness of $H$ follows from
$AA^* = HU(HU)^* = HUU^*H^* = H(H^\dagger H)H = (HH^\dagger H)H = HH = H^2$
(since $H$ is Hermitian), so that $H$ is the unique non-negative square root of $AA^*$; and then
$H^\dagger A = H^\dagger HU = UU^*U = U,$
so $U$ is unique as well. If we put
$G = \sum_{\alpha > 0} \alpha\, U_\alpha^* U_\alpha,$
we get the alternative representation $A = UG$. In this case
$U = AG^\dagger + W - WA^\dagger A.$
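
The constructions of Theorem 2.15 and Remarks 2.17-2.18 can be read off an SVD: grouping equal singular values $\alpha$ gives the $U_\alpha$, and $H = \sum \alpha U_\alpha U_\alpha^*$ becomes $V\,\mathrm{diag}(s)\,V^*$. A sketch (Python/numpy; the test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

Vs, s, Wh = np.linalg.svd(A)             # A = Vs @ diag(s) @ Wh
H = Vs @ np.diag(s) @ Vs.conj().T        # H = sum alpha U_alpha U_alpha* = (AA*)^(1/2)
V = Vs @ Wh                              # the unitary factor

print(np.allclose(H @ H, A @ A.conj().T))        # H^2 = AA*
print(np.allclose(V @ V.conj().T, np.eye(4)))    # V is unitary
print(np.allclose(H @ V, A))                     # polar representation A = HV
```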

Chapter 3: Method of elementary transformation to compute Moore-Penrose inverse

We consider the following lemma (Lemma 3.1 in [5]).

Lemma 3.1. Suppose that $A \in \mathbb{C}^{m \times n}$, $B \in \mathbb{C}^{m \times p}$, $C \in \mathbb{C}^{q \times n}$ and $D \in \mathbb{C}^{q \times p}$. Then
$r(D - CA^\dagger B) = r\begin{pmatrix} A^HAA^H & A^HB \\ CA^H & D \end{pmatrix} - r(A).$

Theorem 3.2. Suppose that $A \in \mathbb{C}^{m \times n}$ and $X \in \mathbb{C}^{k \times l}$, $1 \leq k \leq n$, $1 \leq l \leq m$. If
$r\begin{pmatrix} A^HAA^H & A^H\begin{pmatrix} I_l \\ 0 \end{pmatrix} \\ (I_k, 0)A^H & X \end{pmatrix} = r(A)$ (3.1)
then
$X = (I_k, 0)\, A^\dagger \begin{pmatrix} I_l \\ 0 \end{pmatrix}.$

Proof. Using Lemma 3.1 (with $B = \begin{pmatrix} I_l \\ 0 \end{pmatrix}$, $C = (I_k, 0)$ and $D = X$), we can write
$r\begin{pmatrix} A^HAA^H & A^H\begin{pmatrix} I_l \\ 0 \end{pmatrix} \\ (I_k, 0)A^H & X \end{pmatrix} = r\left(X - (I_k, 0)A^\dagger\begin{pmatrix} I_l \\ 0 \end{pmatrix}\right) + r(A)$ (3.2)
So if
$r\begin{pmatrix} A^HAA^H & A^H\begin{pmatrix} I_l \\ 0 \end{pmatrix} \\ (I_k, 0)A^H & X \end{pmatrix} = r(A)$

then
$X = (I_k, 0)\, A^\dagger \begin{pmatrix} I_l \\ 0 \end{pmatrix}.$

Method of elementary transformation to compute the Moore-Penrose inverse. When $k = n$ and $l = m$, then
$(I_k, 0) = I_n$ and $\begin{pmatrix} I_l \\ 0 \end{pmatrix} = I_m,$
and hence the matrix in the above theorem becomes
$\begin{pmatrix} A^HAA^H & A^H \\ A^H & X \end{pmatrix}.$
Then, to compute the generalized inverse of a matrix, we follow these steps:
1. Compute the partitioned matrix
$B = \begin{pmatrix} A^HAA^H & A^H \\ A^H & 0 \end{pmatrix},$
i.e., start with $X = 0$.
2. Bring the block $A^HAA^H$ to the form
$\begin{pmatrix} I_{r(A)} & 0 \\ 0 & 0 \end{pmatrix}$
by applying elementary transformations to the whole matrix. In this process the blocks $B(1,2)$ and $B(2,1)$ (both equal to $A^H$) are transformed accordingly.
3. Make the blocks in positions $B(1,2)$ and $B(2,1)$ of the new partitioned matrix zero by further row and column operations using the block $I_{r(A)}$. This gives
$\begin{pmatrix} I_{r(A)} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -A^\dagger \end{pmatrix};$
in this process, the block $X = 0$ becomes $X - A^\dagger = -A^\dagger$, from which $A^\dagger$ can be read off.

Numerical example. Let
$A = \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 1 & 1 & 0 \end{pmatrix}.$
Then
$A^H = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ -1 & 1 & 0 \end{pmatrix}$
and
$A^HAA^H = \begin{pmatrix} 3 & 0 & 3 \\ 0 & 3 & 3 \\ -3 & 3 & 0 \end{pmatrix}.$
1. Compute the partitioned matrix
$B = \begin{pmatrix} A^HAA^H & A^H \\ A^H & 0 \end{pmatrix} = \begin{pmatrix} 3 & 0 & 3 & 1 & 0 & 1 \\ 0 & 3 & 3 & 0 & 1 & 1 \\ -3 & 3 & 0 & -1 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}$
2. To bring the block $A^HAA^H$ of $B(1,1)$ to the form $\begin{pmatrix} I_{r(A)} & 0 \\ 0 & 0 \end{pmatrix}$ (here $r(A) = 2$): the row operations $r_1(1) + r_3$ and $r_2(-1) + r_3$ clear the third row,
$\begin{pmatrix} 3 & 0 & 3 & 1 & 0 & 1 \\ 0 & 3 & 3 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}$
and then the column operations $c_1(-1) + c_3$ and $c_2(-1) + c_3$ clear the third column:

$\begin{pmatrix} 3 & 0 & 0 & 1 & 0 & 1 \\ 0 & 3 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}$
3. Using the non-zero diagonal block we clear $B(1,2)$ and $B(2,1)$. The column operations $c_1(-\frac{1}{3}) + c_4$, $c_2(-\frac{1}{3}) + c_5$, $c_1(-\frac{1}{3}) + c_6$, $c_2(-\frac{1}{3}) + c_6$ give
$\begin{pmatrix} 3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & -\frac{1}{3} & 0 & -\frac{1}{3} \\ 0 & 1 & 0 & 0 & -\frac{1}{3} & -\frac{1}{3} \\ -1 & 1 & 0 & \frac{1}{3} & -\frac{1}{3} & 0 \end{pmatrix}$
and the row operations $r_1(-\frac{1}{3}) + r_4$, $r_2(-\frac{1}{3}) + r_5$, $r_1(\frac{1}{3}) + r_6$, $r_2(-\frac{1}{3}) + r_6$ give
$\begin{pmatrix} 3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\frac{1}{3} & 0 & -\frac{1}{3} \\ 0 & 0 & 0 & 0 & -\frac{1}{3} & -\frac{1}{3} \\ 0 & 0 & 0 & \frac{1}{3} & -\frac{1}{3} & 0 \end{pmatrix}$
(Scaling rows 1 and 2 by $\frac{1}{3}$ now brings the top-left block exactly to $I_2$, without affecting the bottom-right block.) Using the theorem stated above, the block $X = 0$ has become $-A^\dagger$, so
$A^\dagger = \frac{1}{3}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \\ -1 & 1 & 0 \end{pmatrix} = \frac{1}{3}A^H.$
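
The example can be cross-checked against numpy (a minimal check, assuming the matrix $A$ as reconstructed above; the identity $A^\dagger = A^H(A^HAA^H)^\dagger A^H$ underlying Theorem 3.2 is also verified):

```python
import numpy as np

A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0,  1.0],
              [1.0, 1.0,  0.0]])
Ah = A.conj().T

print(np.linalg.matrix_rank(A))       # 2: A is singular, so A+ != A^{-1}
A_dag = np.linalg.pinv(A)
print(np.allclose(A_dag, Ah / 3))     # A+ = (1/3) A^H for this particular A
print(np.allclose(A_dag, Ah @ np.linalg.pinv(Ah @ A @ Ah) @ Ah))  # A+ = A^H (A^H A A^H)+ A^H
```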

References

[1] R. Penrose, A Generalized Inverse for Matrices, Proceedings of the Cambridge Philosophical Society, 51 (1955), 406-413.
[2] W. Guo and T. Huang, Method of Elementary Transformation to Compute Moore-Penrose Inverse, Applied Mathematics and Computation, 216 (2010), 1614-1617.
[3] A. Ramachandra Rao and P. Bhimasankaram, Linear Algebra, Hindustan Book Agency, 2000.
[4] K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall of India, 1971.
[5] G. W. Stewart, On the Continuity of the Generalized Inverse, SIAM Journal on Applied Mathematics, 17(1) (1969), 33-45.