THE MOORE-PENROSE GENERALIZED INVERSE OF A MATRIX


A Dissertation Submitted for the Award of the Degree of Master of Philosophy in Mathematics

Purva Rajwade

School of Mathematics, Devi Ahilya Vishwavidyalaya (NAAC Accredited Grade A), Indore (M.P.)

Contents

Introduction
Chapter 1: Preliminaries
Chapter 2: A generalized inverse for matrices
Chapter 3: Method of elementary transformation to compute the Moore-Penrose inverse
References

Introduction

The dissertation is mainly a reading of two research papers ([1], [2]) listed in the references. These papers study the generalized inverse of matrices defined in [1]. It is defined for any matrix $A$ and is the unique solution $X$ of the following four equations:

$AXA = A$ (1)

$XAX = X$ (2)

$(AX)^* = AX$ (3)

$(XA)^* = XA$ (4)

Chapter 1, titled Preliminaries, contains some basic results which we shall use in subsequent chapters. It contains the definitions of Hermitian idempotents, principal idempotent elements and the polar representation of a matrix, followed by some results from [3] and [4].

Chapter 2 starts with the definition of a generalization of the inverse of a matrix, as the unique solution of a certain set of equations. Such a generalized inverse exists for any (rectangular) matrix with complex elements. This generalized inverse is called the Moore-Penrose inverse. Lemma 2.4 proves $(A^\dagger)^\dagger = A$, $(A^*)^\dagger = (A^\dagger)^*$, that $A^\dagger = A^{-1}$ for a non-singular matrix $A$, and other elementary results. We shall show that, using the singular value decomposition $A = VBW^*$ where $V$ and $W$ are unitary and $B$ is diagonal, $A^\dagger = WB^\dagger V^*$. A new type of spectral decomposition is given,

$A = \sum_{\alpha>0} \alpha U_\alpha,$

the sum being finite over real values of $\alpha$. Hence we get $A^\dagger = \sum \alpha^{-1} U_\alpha^*$. Next, we find the polar representation $A = HV$, where $H = \sum \alpha U_\alpha U_\alpha^*$.

Chapter 3 gives a method to compute the Moore-Penrose inverse by elementary transformations.
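The four equations can be checked numerically. Below is a minimal sketch, assuming Python with NumPy, where np.linalg.pinv plays the role of the generalized inverse studied in this dissertation; the matrix and the random seed are arbitrary illustrations.

```python
# Verify the four defining equations (1)-(4) for X = pinv(A).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
X = np.linalg.pinv(A)          # candidate generalized inverse

H = lambda M: M.conj().T       # conjugate transpose, A -> A*

print(np.allclose(A @ X @ A, A))       # (1) AXA = A
print(np.allclose(X @ A @ X, X))       # (2) XAX = X
print(np.allclose(H(A @ X), A @ X))    # (3) (AX)* = AX
print(np.allclose(H(X @ A), X @ A))    # (4) (XA)* = XA
```

Each check prints True, in line with Theorem 2.1 of Chapter 2.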

Chapter 1: Preliminaries

Recall that the conjugate transpose $A^* = (\bar{A})^T$ of a matrix $A$ has the following properties:

$(A^*)^* = A$

$(A + B)^* = A^* + B^*$

$(\lambda A)^* = \bar{\lambda} A^*$

$(BA)^* = A^* B^*$

$AA^* = 0 \Rightarrow A = 0$

Since

$\operatorname{Trace}(AA^*) = \sum_{i=1}^{n} \langle a_i, a_i \rangle = \sum_{i=1}^{n} \sum_{j=1}^{n} |a_{ij}|^2,$

i.e., the trace of $AA^*$ is the sum of the squares of the moduli of the elements of $A$. Hence the last property.

Observe that using the fourth and fifth properties we can obtain the rule

$BAA^* = CAA^* \Rightarrow BA = CA$ (1.1)

since

$(BAA^* - CAA^*)(B - C)^* = BAA^*B^* - CAA^*B^* - BAA^*C^* + CAA^*C^*$
$= (BA - CA)A^*B^* - (BA - CA)A^*C^*$
$= (BA - CA)(A^*B^* - A^*C^*)$
$= (BA - CA)(BA - CA)^*,$

so $BAA^* = CAA^*$ makes the left-hand side zero, and then $(BA - CA)(BA - CA)^* = 0$ gives $BA = CA$ by the fifth property.

Similarly,

$(BA^*A - CA^*A)(B - C)^* = (BA^* - CA^*)(BA^* - CA^*)^*$

and hence

$BA^*A = CA^*A \Rightarrow BA^* = CA^*$ (1.2)

Definition 1.1 (Hermitian idempotents). A Hermitian idempotent matrix is one satisfying $EE^* = E$, that is, $E^* = E$ and $E^2 = E$.

Note 1.2. If $E^* = E$ and $E^2 = E$, then $EE^* = E^2 = E$. Conversely, if $EE^* = E$, then $E^* = (EE^*)^* = EE^* = E$, and hence $E^2 = EE^* = E$.

Definition 1.3 (Principal idempotent elements of a matrix). For any square matrix $A$ there exists a unique set of matrices $K_\lambda$, defined for each complex number $\lambda$, such that

$K_\lambda K_\mu = \delta_{\lambda\mu} K_\lambda$ (1.3)

$\sum K_\lambda = I$ (1.4)

$AK_\lambda = K_\lambda A$ (1.5)

$(A - \lambda I)K_\lambda$ is nilpotent (1.6)

The non-zero $K_\lambda$'s are called the principal idempotent elements of the matrix.

Remark 1.4 (Existence of the $K_\lambda$'s). Let $\varphi(x) = (x - \lambda_1)^{n_1} \cdots (x - \lambda_r)^{n_r}$ be the minimal polynomial of $A$, where the factors $(x - \lambda_i)^{n_i}$ are mutually coprime, i.e., there exist $f_i(x)$, $f_j(x)$ such that

$f_i(x)(x - \lambda_i)^{n_i} + f_j(x)(x - \lambda_j)^{n_j} = 1, \quad i \neq j.$

We can write $\varphi(x) = (x - \lambda_i)^{n_i} \psi_i(x)$, where

$\psi_i(x) = \prod_{\substack{j=1 \\ j \neq i}}^{r} (x - \lambda_j)^{n_j}$

As the $\psi_i$'s are coprime, there exist polynomials $\chi_i(x)$ such that $\sum \chi_i(x)\psi_i(x) = 1$. Put $K_{\lambda_i} = \chi_i(A)\psi_i(A)$, with the other $K_\lambda$'s zero, so that $\sum K_\lambda = I$. Further,

$(A - \lambda_i I)K_{\lambda_i} = (A - \lambda_i I)\chi_i(A)\psi_i(A) \Rightarrow [(A - \lambda_i I)K_{\lambda_i}]^{n_i} = 0,$

since $\varphi(A) = 0$ occurs as a factor of the expanded power. If $\lambda$ is not an eigenvalue of $A$, then $K_\lambda$ is zero, so the sum in equation (1.4) is finite. Further note that $K_\lambda K_\mu = 0$ if $\lambda \neq \mu$, and as the sum $\sum K_\lambda = I$ is finite, it is clear that $K_\lambda^2 = K_\lambda$. Hence

$K_\lambda K_\mu = \delta_{\lambda\mu} K_\lambda.$

Since each $K_\lambda$ is a polynomial in $A$,

$AK_\lambda = K_\lambda A.$

Theorem 1.5 (Polar representation of a matrix). Any square matrix is the product of a Hermitian matrix with a unitary matrix.

Theorem 1.6 ([3], 3.5.6). The following are equivalent:
1. $r(AB) = r(B)$.
2. The row space of $AB$ is the same as the row space of $B$.
3. $B = DAB$ for some matrix $D$.

Theorem 1.7 (Rank cancellation laws, [3], 3.5.7).
1. If $ABC = ABD$ and $r(AB) = r(B)$, then $BC = BD$.
2. If $CAB = DAB$ and $r(AB) = r(A)$, then $CA = DA$.

Definition 1.8 (Rank factorization). Let $A$ be an $m \times n$ matrix with rank $r \geq 1$. Then $(P, Q)$ is said to be a rank factorization of $A$ if $P$ is $m \times r$, $Q$ is $r \times n$ and $A = PQ$.

Theorem 1.9. If a matrix $A$ is idempotent, then its rank and trace are equal.

Proof. Let $r \geq 1$ be the rank of $A$ and $(P, Q)$ a rank factorization of $A$. Since $A$ is idempotent, i.e., $A^2 = A$,

$PQPQ = PQ = PI_rQ.$

Since $P$ can be cancelled on the left and $Q$ on the right (writing $P(QP)Q = PI_rQ$ and applying the rank cancellation laws), we get

$QP = I_r.$

Now $\operatorname{trace}(I_r) = r$ and $\operatorname{trace}(PQ) = \operatorname{trace}(QP) = r$. Hence the rank is equal to the trace.

Theorem 1.10 ([3], 8.7.8). A matrix is unitarily similar to a diagonal matrix if and only if it is normal.

Theorem 1.11 ([4], Chapter 8, Theorem 18). Let $V$ be a finite dimensional inner product space, and let $T$ be a self-adjoint linear operator on $V$. Then there is an orthonormal basis for $V$, each vector of which is a characteristic vector for $T$.

Corollary 1.12 ([4], Chapter 8, Corollary to Theorem 18). Let $A$ be an $n \times n$ Hermitian (self-adjoint) matrix. Then there is a unitary matrix $P$ such that $P^{-1}AP$ is diagonal; that is, $A$ is unitarily equivalent to a diagonal matrix.

Note 1.13. If two matrices $A$ and $B$ are Hermitian and have the same eigenvalues, then they are equivalent under a unitary transformation.

Theorem 1.14 ([4], Chapter 9, Theorem 13). Let $V$ be a finite dimensional inner product space and $T$ a non-negative operator on $V$. Then $T$ has a unique non-negative square root; that is, there is one and only one non-negative operator $N$ on $V$ such that $N^2 = T$.

Theorem 1.15 ([4], Chapter 9, Theorem 14). Let $V$ be a finite dimensional inner product space and let $T$ be any linear operator on $V$. Then there exist a unitary operator $U$ on $V$ and a non-negative operator $N$ on $V$ such that $T = UN$. The non-negative operator $N$ is unique. If $T$ is invertible, the operator $U$ is also unique.

Remark 1.16. If a matrix $T$ is non-singular, then its polar representation is unique.
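Theorem 1.9 is easy to test numerically. The sketch below (assuming NumPy; the oblique-projector construction is an illustrative choice, not from the text) builds an idempotent from a rank-factorization-like pair and compares its rank and trace.

```python
# For an idempotent K, rank(K) = trace(K) (Theorem 1.9).
import numpy as np

rng = np.random.default_rng(1)
P = rng.standard_normal((5, 2))            # P is 5 x 2
Q = rng.standard_normal((2, 5))            # Q is 2 x 5
K = P @ np.linalg.inv(Q @ P) @ Q           # K @ K = K, an oblique projector

print(np.allclose(K @ K, K))               # idempotent
print(np.linalg.matrix_rank(K))            # 2
print(round(np.trace(K), 6))               # 2.0
```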

Chapter 2: A generalized inverse for matrices

The following theorem gives the generalized inverse of a matrix. It is the unique solution of a certain set of equations.

Theorem 2.1. The four equations

$AXA = A$ (2.1)

$XAX = X$ (2.2)

$(AX)^* = AX$ (2.3)

$(XA)^* = XA$ (2.4)

have a unique solution for any matrix $A$.

Proof. First, we observe that equations (2.2) and (2.3) are equivalent to the single equation

$XX^*A^* = X$ (2.5)

Substitute equation (2.3) in (2.2) to get

$X(AX)^* = X \Rightarrow XX^*A^* = X.$

Conversely, suppose equation (2.5) holds. We have

$AXX^*A^* = AX \Rightarrow AX(AX)^* = AX.$

Observe that $AX(AX)^*$ is Hermitian; thus $(AX)^* = AX$. If we put (2.3) in (2.5), we get equation (2.2). Similarly, equations (2.1) and (2.4) are equivalent to the single equation

$XAA^* = A^*$ (2.6)

since (2.1) and (2.4) give

$AXA = A \Rightarrow (XA)^*A^* = A^* \Rightarrow XAA^* = A^*.$

Further, if $XAA^* = A^*$, then $(XAA^*)^* = AA^*X^* = A$, and so

$XAA^*X^* = XA \Rightarrow XA(XA)^* = XA \Rightarrow (XA)^* = XA$

(since $XA(XA)^*$ is Hermitian). Next, if we substitute (2.4) in (2.6), we get (2.1).

Thus it is sufficient to find an $X$ satisfying (2.5) and (2.6). Such an $X$ will exist if a $B$ can be found satisfying

$BA^*AA^* = A^*.$

Then $X = BA^*$ satisfies (2.6). Observe that, from equation (2.6),

$XAA^* = A^* \Rightarrow (XA)^*A^* = A^*$ (from (2.4)) $\Rightarrow A^*X^*A^* = A^* \Rightarrow BA^*X^*A^* = BA^* \Rightarrow XX^*A^* = X,$

i.e., $X$ also satisfies (2.5). As a matrix satisfies its characteristic equation, the expressions $A^*A, (A^*A)^2, \ldots$ cannot all be linearly independent, i.e., there are $\lambda_i$, $i = 1, 2, \ldots, k$, such that

$\lambda_1 A^*A + \lambda_2 (A^*A)^2 + \cdots + \lambda_k (A^*A)^k = 0$ (2.7)

where $\lambda_1, \lambda_2, \ldots, \lambda_k$ are not all zero. Note that $k$ need not be unique. Let $\lambda_r$ be the first non-zero $\lambda$; then (2.7) becomes

$\lambda_r (A^*A)^r + \lambda_{r+1} (A^*A)^{r+1} + \cdots + \lambda_k (A^*A)^k = 0$

If we put

$B = -\lambda_r^{-1}\left[\lambda_{r+1} I + \lambda_{r+2} A^*A + \cdots + \lambda_k (A^*A)^{k-r-1}\right]$

then

$(A^*A)^r = -\lambda_r^{-1}\left[\lambda_{r+1}(A^*A)^{r+1} + \cdots + \lambda_k (A^*A)^k\right] = B(A^*A)^{r+1}.$

We can write this equation as

$B(A^*A)^{r+1} = (A^*A)^r$
$\Rightarrow B(A^*A)^rA^* = (A^*A)^{r-1}A^*$ (by (1.2))
$\Rightarrow B(A^*A)^r = (A^*A)^{r-1}$ (by (1.1))

Thus, by repeated applications of (1.2) and (1.1), we get $B(A^*A)^2 = A^*A$, and finally

$BA^*AA^* = A^*$ (again by (1.2)),

which is what was required.

Now, to show that this $X$ is unique, let $X$ and $Y$ both satisfy (2.5) and (2.6), hence all four equations. If we substitute (2.4) in (2.2) and (2.3) in (2.1), we get

$Y = A^*Y^*Y$ (2.8)

$A^* = A^*AY$ (2.9)

Now

$X = XX^*A^*$ (by (2.5))
$= XX^*A^*AY$ (by (2.9))
$= XAY$ (since $A = AXA = (AX)^*A = X^*A^*A$, so $XX^*A^*A = XA$)
$= XAA^*Y^*Y$ (by (2.8))
$= A^*Y^*Y$ (by (2.6))
$= Y$ (by (2.8))

Thus the solution of (2.1), (2.2), (2.3), (2.4) is unique.

Conversely, if $A^*X^*X = X$, then $A^*X^*XA = XA$, and the left-hand side $(XA)^*XA$ is Hermitian, so $(XA)^* = XA$. Now, if we substitute $(XA)^* = XA$ in $A^*X^*X = X$, we get $XAX = X$, which is (2.2). Thus, (2.4) and (2.2) are equivalent to (2.8). Similarly, (2.3) and (2.1) are equivalent to (2.9).

Definition 2.2 (Generalized inverse). The unique solution of

$AXA = A, \quad XAX = X, \quad (AX)^* = AX, \quad (XA)^* = XA$

is called the generalized inverse of $A$. We write $X = A^\dagger$.

Note 2.3. To calculate $A^\dagger$, we only need to solve the two unilateral linear equations

$XAA^* = A^*$ (2.10)

$A^*AY = A^*$ (2.11)

and put $A^\dagger = XAY$. Note that $XA$ and $AY$ are Hermitian and satisfy $AXA = A = AYA$ (use the cancellation laws). Then:

1. $AA^\dagger A = AXAYA = AYA = A$
2. $A^\dagger AA^\dagger = XAYAXAY = XAXAY = XAY = A^\dagger$
3. $(AA^\dagger)^* = (AXAY)^* = (AY)^* = AY = AXAY = AA^\dagger$ (since $AY$ is Hermitian)
4. $(A^\dagger A)^* = (XAYA)^* = (XA)^* = XA = XAYA = A^\dagger A$ (since $XA$ is Hermitian)

Thus, if $X$ and $Y$ are solutions of the unilateral linear equations (2.10) and (2.11), then $XAY$ is the generalized inverse. Moreover, (2.5) and (2.6) are also satisfied, i.e.,

$A^\dagger(A^\dagger)^*A^* = A^\dagger$ (2.12)

$A^\dagger AA^* = A^*$ (2.13)

and (2.8) and (2.9) become

$A^*(A^\dagger)^*A^\dagger = A^\dagger$ (2.14)

$A^*AA^\dagger = A^*$ (2.15)
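Note 2.3 can be exercised numerically: solve the two unilateral equations by any means and multiply. In the sketch below (assuming NumPy; np.linalg.lstsq returns an exact solution here because both systems are consistent), the product coincides with np.linalg.pinv.

```python
# A† = X A Y, where X A A* = A* and A* A Y = A*  (Note 2.3).
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # rank 2
Astar = A.conj().T

# X (A A*) = A*  is solved via its transpose  (A A*)^T X^T = A.
X = np.linalg.lstsq((A @ Astar).T, Astar.T, rcond=None)[0].T
# (A* A) Y = A*  is solved directly.
Y = np.linalg.lstsq(Astar @ A, Astar, rcond=None)[0]

print(np.allclose(X @ A @ Y, np.linalg.pinv(A)))   # True
```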

Lemma 2.4.
1. $(A^\dagger)^\dagger = A$
2. $(A^*)^\dagger = (A^\dagger)^*$
3. If $A$ is non-singular, $A^\dagger = A^{-1}$
4. $(\lambda A)^\dagger = \lambda^\dagger A^\dagger$
5. $(A^*A)^\dagger = A^\dagger(A^\dagger)^*$
6. If $U$ and $V$ are unitary, $(UAV)^\dagger = V^*A^\dagger U^*$
7. If $A = \sum A_i$, where $A_iA_j^* = 0$ and $A_i^*A_j = 0$ whenever $i \neq j$, then $A^\dagger = \sum A_i^\dagger$
8. If $A$ is normal, $A^\dagger A = AA^\dagger$ and $(A^n)^\dagger = (A^\dagger)^n$
9. $A$, $A^*A$, $A^\dagger$ and $A^\dagger A$ all have rank equal to the trace of $A^\dagger A$.

Proof.

1. To show that $A$ is the generalized inverse of $A^\dagger$, i.e., to show that

$A^\dagger AA^\dagger = A^\dagger, \quad AA^\dagger A = A, \quad (A^\dagger A)^* = A^\dagger A, \quad (AA^\dagger)^* = AA^\dagger,$

which are (2.2), (2.1), (2.4), (2.3). Hence $(A^\dagger)^\dagger = A$.

2. To show that the generalized inverse of $A^*$ is $(A^\dagger)^*$, i.e., to show that (2.1), (2.2), (2.3), (2.4) hold when $X$ is replaced by $(A^\dagger)^*$ and $A$ by $A^*$:

$A^*(A^\dagger)^*A^* = (AA^\dagger A)^* = A^*$ (by (2.1))

$(A^\dagger)^*A^*(A^\dagger)^* = (A^\dagger AA^\dagger)^* = (A^\dagger)^*$ (by (2.2))

$(A^*(A^\dagger)^*)^* = A^\dagger A = (A^\dagger A)^* = A^*(A^\dagger)^*$ (by equation (2.4))

$((A^\dagger)^*A^*)^* = AA^\dagger = (AA^\dagger)^* = (A^\dagger)^*A^*$ (by equation (2.3))

3. Observe that

$AA^{-1}A = A, \quad A^{-1}AA^{-1} = A^{-1}, \quad (AA^{-1})^* = I^* = I = AA^{-1}, \quad (A^{-1}A)^* = I^* = I = A^{-1}A.$

Hence $A^\dagger = A^{-1}$.

4. To show that $\lambda^\dagger A^\dagger$ is the generalized inverse of $\lambda A$ (here $\lambda^\dagger = \lambda^{-1}$ for $\lambda \neq 0$ and $0^\dagger = 0$):

$(\lambda A)(\lambda^\dagger A^\dagger)(\lambda A) = \lambda\lambda^\dagger\lambda AA^\dagger A = \lambda A$ (by (2.1))

$(\lambda^\dagger A^\dagger)(\lambda A)(\lambda^\dagger A^\dagger) = \lambda^\dagger\lambda\lambda^\dagger A^\dagger AA^\dagger = \lambda^\dagger A^\dagger$ (by (2.2))

$((\lambda A)(\lambda^\dagger A^\dagger))^* = \lambda\lambda^\dagger(AA^\dagger)^* = \lambda\lambda^\dagger AA^\dagger = (\lambda A)(\lambda^\dagger A^\dagger)$ (by (2.3))

$((\lambda^\dagger A^\dagger)(\lambda A))^* = \lambda^\dagger\lambda(A^\dagger A)^* = \lambda^\dagger\lambda A^\dagger A = (\lambda^\dagger A^\dagger)(\lambda A)$ (by (2.4))

5. To show that $A^\dagger(A^\dagger)^*$ is the generalized inverse of $A^*A$:

$(A^*A)(A^\dagger(A^\dagger)^*)(A^*A) = A^*(AA^\dagger)(AA^\dagger)^*A = A^*AA^\dagger AA^\dagger A = A^*AA^\dagger A$ (by (2.2)) $= A^*A$ (by (2.1))

$(A^\dagger(A^\dagger)^*)(A^*A)(A^\dagger(A^\dagger)^*) = A^\dagger(AA^\dagger)^*(AA^\dagger)(A^\dagger)^* = A^\dagger AA^\dagger AA^\dagger(A^\dagger)^* = A^\dagger(A^\dagger)^*$ (by (2.2), applied twice)

Further,

$(A^\dagger(A^\dagger)^*)(A^*A) = A^\dagger(A^\dagger)^*A^*A = A^\dagger A$ (by (2.12)),

which is Hermitian by (2.4), and

$(A^*A)(A^\dagger(A^\dagger)^*) = A^*AA^\dagger(A^\dagger)^* = A^*(A^\dagger)^*$ (by (2.15)) $= (A^\dagger A)^* = A^\dagger A$ (by (2.4)),

so (2.3) and (2.4) hold as well.

6. To show that $V^*A^\dagger U^*$ is the generalized inverse of $UAV$. Note that since $U$ and $V$ are unitary, $UU^* = U^*U = I$ and $VV^* = V^*V = I$. Then

$(UAV)(V^*A^\dagger U^*)(UAV) = UAA^\dagger AV = UAV$ (by (2.1))

$(V^*A^\dagger U^*)(UAV)(V^*A^\dagger U^*) = V^*A^\dagger AA^\dagger U^* = V^*A^\dagger U^*$ (by (2.2))

$((UAV)(V^*A^\dagger U^*))^* = (UAA^\dagger U^*)^* = U(AA^\dagger)^*U^* = UAA^\dagger U^*$ (by (2.3)) $= (UAV)(V^*A^\dagger U^*)$ (since $VV^* = I$)

$((V^*A^\dagger U^*)(UAV))^* = (V^*A^\dagger AV)^* = V^*(A^\dagger A)^*V = V^*A^\dagger AV$ (by (2.4)) $= (V^*A^\dagger U^*)(UAV)$ (since $U^*U = I$)

7. To show that $\sum A_i^\dagger$ is the generalized inverse of $\sum A_i$. First observe that, since

$A_j^\dagger = A_j^*(A_j^\dagger)^*A_j^\dagger$ (by (2.14))

and $A_iA_j^* = 0$ whenever $i \neq j$,

$A_iA_j^\dagger = A_iA_j^*(A_j^\dagger)^*A_j^\dagger = 0$, whenever $i \neq j.$

Also, as

$A_i^\dagger = A_i^\dagger(A_i^\dagger)^*A_i^*$ (by (2.12))

and $A_i^*A_j = 0$ whenever $i \neq j$,

$A_i^\dagger A_j = 0$, whenever $i \neq j.$

Now, all cross terms vanishing,

$\left(\sum_i A_i\right)\left(\sum_j A_j^\dagger\right)\left(\sum_k A_k\right) = \sum_i A_iA_i^\dagger A_i = \sum_i A_i.$

Similarly, as above,

$\left(\sum_i A_i^\dagger\right)\left(\sum_j A_j\right)\left(\sum_k A_k^\dagger\right) = \sum_i A_i^\dagger A_iA_i^\dagger = \sum_i A_i^\dagger.$

Then,

$\left(\left(\sum_i A_i\right)\left(\sum_j A_j^\dagger\right)\right)^* = \left(\sum_i A_iA_i^\dagger\right)^* = \sum_i (A_iA_i^\dagger)^* = \sum_i A_iA_i^\dagger = \left(\sum_i A_i\right)\left(\sum_j A_j^\dagger\right)$

(since $A_iA_j^\dagger = 0$ whenever $i \neq j$). Similarly, as above,

$\left(\left(\sum_i A_i^\dagger\right)\left(\sum_j A_j\right)\right)^* = \sum_i A_i^\dagger A_i = \left(\sum_i A_i^\dagger\right)\left(\sum_j A_j\right).$

8. Since $AA^*$ is Hermitian and, as we have proved in 2.4(5), $(A^*A)^\dagger = A^\dagger(A^\dagger)^*$, we can similarly show that $(AA^*)^\dagger = (A^\dagger)^*A^\dagger$. Using this fact we see that, for normal $A$,

$A^\dagger A = A^\dagger(A^\dagger)^*A^*A$ (by (2.12))
$= (A^*A)^\dagger(A^*A)$ (by 2.4(5))
$= (AA^*)^\dagger(AA^*)$ (since $A$ is normal)
$= (A^\dagger)^*A^\dagger AA^*$
$= (A^\dagger)^*A^*$ (using (2.13))
$= (AA^\dagger)^* = AA^\dagger$ (since $(AA^\dagger)^* = AA^\dagger$)

Now, to show that $(A^\dagger)^n$ is the generalized inverse of $A^n$. As $AA^\dagger = A^\dagger A$,

$A^n(A^\dagger)^nA^n = (AA^\dagger A)^n = A^n$

$(A^\dagger)^nA^n(A^\dagger)^n = (A^\dagger AA^\dagger)^n = (A^\dagger)^n$

$(A^n(A^\dagger)^n)^* = ((AA^\dagger)^n)^* = ((AA^\dagger)^*)^n = (AA^\dagger)^n = A^n(A^\dagger)^n$

$((A^\dagger)^nA^n)^* = ((A^\dagger A)^n)^* = ((A^\dagger A)^*)^n = (A^\dagger A)^n = (A^\dagger)^nA^n$

So $(A^n)^\dagger = (A^\dagger)^n$.

9. First note that

$(A^\dagger A)^2 = A^\dagger AA^\dagger A = A^\dagger A,$

i.e., $A^\dagger A$ is an idempotent. By Theorem 1.9, its rank is its trace. Since $A = A(A^\dagger A)$, $A^\dagger = (A^\dagger A)A^\dagger$ and $A^*A = A^*A(A^\dagger A)$, the matrices $A$, $A^*A$, $A^\dagger$ and $A^\dagger A$ all have the same rank, which therefore equals the trace of $A^\dagger A$.

Remark 2.5. Since by equation (2.12) we can write

$A^\dagger = A^\dagger(A^\dagger)^*A^* = (A^*A)^\dagger A^*$ (by 2.4(5)) (2.16)

we can calculate the generalized inverse of a matrix $A$ from the generalized inverse of $A^*A$. As $A^*A$ is Hermitian, it can be reduced to diagonal form by a unitary transformation, i.e., $A^*A = UDU^*$, where $U$ is unitary and $D = \operatorname{diag}(\alpha_1, \alpha_2, \ldots, \alpha_n)$. Then

$D^\dagger = \operatorname{diag}(\alpha_1^\dagger, \alpha_2^\dagger, \ldots, \alpha_n^\dagger).$

By 2.4(6) we can write

$(A^*A)^\dagger = UD^\dagger U^*$, and hence $A^\dagger = UD^\dagger U^*A^*$ (by (2.16)).

Note 2.6. By the singular value decomposition, we know that any square matrix $A$ can be written in the form $A = VBW^*$, where $V$ and $W$ are unitary and $B$ is diagonal. Indeed, since $AA^*$ and $A^*A$ are both Hermitian and have the same eigenvalues, there exists a unitary matrix $T$ such that $TAA^*T^* = A^*A$ (by 1.13). Observe that

$(TA)(TA)^* = TAA^*T^* = A^*A$ and $(TA)^*(TA) = A^*T^*TA = A^*A$ (since $T^*T = I$),

i.e., $TA$ is normal and so diagonalizable by a unitary transformation (from 1.10). As above, by 2.4(6) we get

$A^\dagger = WB^\dagger V^*.$
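Both recipes — $A^\dagger = (A^*A)^\dagger A^*$ via a unitary diagonalization of $A^*A$, and $A^\dagger = WB^\dagger V^*$ via the singular value decomposition — are easy to realize numerically. A sketch assuming NumPy; the 1e-10 cut-off for zero eigenvalues/singular values is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank 2

# Remark 2.5:  A† = (A*A)† A*  with  A*A = U D U*.
d, U = np.linalg.eigh(A.conj().T @ A)
d_dag = np.array([1 / x if x > 1e-10 else 0.0 for x in d])      # D†
A_dag1 = (U * d_dag) @ U.conj().T @ A.conj().T

# Note 2.6:  A = V B W*  gives  A† = W B† V*.
V, b, Wh = np.linalg.svd(A, full_matrices=False)
b_dag = np.array([1 / x if x > 1e-10 else 0.0 for x in b])      # B†
A_dag2 = (Wh.conj().T * b_dag) @ V.conj().T

print(np.allclose(A_dag1, np.linalg.pinv(A)))   # True
print(np.allclose(A_dag2, np.linalg.pinv(A)))   # True
```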

Remark 2.7. Observe that the map ${}^\dagger : M_{m \times n}(\mathbb{R}) \to M_{n \times m}(\mathbb{R})$, $A \mapsto A^\dagger$, need not be continuous. Consider

$A_\epsilon = \begin{pmatrix} 1 & 0 \\ 0 & \epsilon \end{pmatrix}.$

We have

$A_\epsilon^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & \epsilon^{-1} \end{pmatrix}$ and $\lim_{\epsilon \to 0} A_\epsilon = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},$

which is a singular matrix, while each $A_\epsilon$ ($\epsilon \neq 0$) is non-singular. Thus, in this case, $A_\epsilon \to A$ but

$A_\epsilon^\dagger = A_\epsilon^{-1} \not\to A^\dagger = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.$

But if the rank of $A$ is kept fixed, then the function ${}^\dagger$ is continuous.

Theorem 2.8. A necessary and sufficient condition for the equation $AXB = C$ to have a solution is

$AA^\dagger CB^\dagger B = C,$

in which case the general solution is

$X = A^\dagger CB^\dagger + Y - A^\dagger AYBB^\dagger,$

where $Y$ is arbitrary.

Proof. Suppose $X$ satisfies $AXB = C$. Then

$C = AXB = AA^\dagger AXBB^\dagger B = AA^\dagger CB^\dagger B.$

Conversely, if $C = AA^\dagger CB^\dagger B$, then $X = A^\dagger CB^\dagger$ is a particular solution of $AXB = C$. For the general solution we have to solve $AXB = 0$. For

$X = Y - A^\dagger AYBB^\dagger,$

where $Y$ is arbitrary, we have, since $AA^\dagger A = A$ and $BB^\dagger B = B$,

$AXB = AYB - AA^\dagger AYBB^\dagger B = 0.$

(Conversely, every solution $X$ of $AXB = 0$ is of this form, with $Y = X$.)
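A numerical sketch of Theorem 2.8, assuming NumPy; $C$ is manufactured from a known solution so that the equation is solvable:

```python
# AXB = C is solvable iff A A† C B† B = C; then
# X = A† C B† + Y - A† A Y B B† solves it for every Y.
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((2, 5))
C = A @ rng.standard_normal((3, 2)) @ B            # solvable by construction

Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)
print(np.allclose(A @ Ad @ C @ Bd @ B, C))         # consistency condition

Y = rng.standard_normal((3, 2))                    # arbitrary
X = Ad @ C @ Bd + Y - Ad @ A @ Y @ B @ Bd
print(np.allclose(A @ X @ B, C))                   # True
```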

Corollary 2.9. The general solution of the vector equation $Px = c$ is

$x = P^\dagger c + (I - P^\dagger P)y,$

where $y$ is arbitrary, provided that the equation has a solution.

Proof. By the above theorem (with $A = P$, $B = I$, $C = c$),

$x = P^\dagger cI + y - P^\dagger PyII = P^\dagger c + y - P^\dagger Py = P^\dagger c + (I - P^\dagger P)y,$

where $y$ is arbitrary.

Corollary 2.10. A necessary and sufficient condition for the equations $AX = C$, $XB = D$ to have a common solution is that each equation should individually have a solution and that $AD = CB$.

Proof. The condition is obviously necessary, since

$AX = C \Rightarrow AXB = CB \Rightarrow AD = CB$ (since $XB = D$).

Now, to show the condition is sufficient, put

$X = A^\dagger C + DB^\dagger - A^\dagger ADB^\dagger.$

Then

$AX = AA^\dagger C + ADB^\dagger - AA^\dagger ADB^\dagger = AA^\dagger C + ADB^\dagger - ADB^\dagger = AA^\dagger C$

and

$XB = A^\dagger CB + DB^\dagger B - A^\dagger ADB^\dagger B = A^\dagger CB + DB^\dagger B - A^\dagger CB$ (since $AD = CB$ gives $A^\dagger ADB^\dagger B = A^\dagger CBB^\dagger B = A^\dagger CB$) $= DB^\dagger B.$

So $X = A^\dagger C + DB^\dagger - A^\dagger ADB^\dagger$ will be a common solution if the conditions $AA^\dagger C = C$, $DB^\dagger B = D$ and $AD = CB$ are satisfied, the first two holding precisely when $AX = C$ and $XB = D$ are individually solvable.
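Similarly, Corollary 2.10's common-solution formula can be checked directly (assuming NumPy; the data $C$, $D$ are built from one $X_0$ so that $AD = CB$ holds):

```python
# X = A†C + DB† - A†ADB† is a common solution of AX = C, XB = D.
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((2, 5))
X0 = rng.standard_normal((3, 2))
C, D = A @ X0, X0 @ B                       # then AD = A X0 B = CB

Ad, Bd = np.linalg.pinv(A), np.linalg.pinv(B)
X = Ad @ C + D @ Bd - Ad @ A @ D @ Bd
print(np.allclose(A @ X, C), np.allclose(X @ B, D))   # True True
```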

Lemma 2.11.
1. $A^\dagger A$, $AA^\dagger$, $I - A^\dagger A$, $I - AA^\dagger$ are all Hermitian idempotents.
2. If $E$ is a Hermitian idempotent, $E^\dagger = E$.
3. $K$ is idempotent if and only if there exist Hermitian idempotents $E$ and $F$ such that $K = (FE)^\dagger$, in which case $K = EKF$.

Proof.

1. First, to show that $(A^\dagger A)(A^\dagger A) = A^\dagger A$:

$(A^\dagger A)(A^\dagger A) = A^\dagger(AA^\dagger A) = A^\dagger A$ (by (2.1)),

and $(A^\dagger A)^* = A^\dagger A$ by (2.4). Similarly,

$(AA^\dagger)(AA^\dagger) = (AA^\dagger A)A^\dagger = AA^\dagger$ (by (2.1)),

and $(AA^\dagger)^* = AA^\dagger$ by (2.3). Now,

$(I - A^\dagger A)(I - A^\dagger A) = I - A^\dagger A - A^\dagger A + A^\dagger AA^\dagger A = I - A^\dagger A$

and

$(I - AA^\dagger)(I - AA^\dagger) = I - AA^\dagger - AA^\dagger + AA^\dagger AA^\dagger = I - AA^\dagger,$

both Hermitian since $A^\dagger A$ and $AA^\dagger$ are.

2. Suppose $E^* = E$ and $E^2 = E$. Then $EEE = E$

and $(EE)^* = E^*E^* = EE$. Therefore (2.1), (2.2), (2.3) and (2.4) hold with $A = E$ and $X = E$, so $E^\dagger = E$.

3. First let $K$ be idempotent, i.e., $K^2 = K$. Then

$K = (K^\dagger)^\dagger = (K^\dagger KK^\dagger)^\dagger = (K^\dagger(KK)K^\dagger)^\dagger = ((K^\dagger K)(KK^\dagger))^\dagger = (FE)^\dagger,$

where $F = K^\dagger K$ and $E = KK^\dagger$. Clearly $F$ and $E$ are Hermitian idempotents. Further,

$EKF = KK^\dagger KK^\dagger K = KK^\dagger K = K.$

Conversely, if $K = (FE)^\dagger$, then $K = EKF$: by (2.12) and (2.14), $KK^*EF = K$ and $EFK^*K = K$, whence $KF = K$ and $EK = K$. So

$K^2 = (EKF)(EKF) = E(FE)^\dagger(FE)(FE)^\dagger F = E(FE)^\dagger F$ (since $A^\dagger AA^\dagger = A^\dagger$) $= EKF = K,$

so $K$ is idempotent.
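Part 3 of the lemma can be observed numerically for an oblique (non-Hermitian) idempotent. A sketch assuming NumPy; the construction of $K$ is an illustrative choice:

```python
# For idempotent K: with E = K K† and F = K† K,  K = (FE)† = E K F.
import numpy as np

rng = np.random.default_rng(11)
P = rng.standard_normal((4, 2))
Q = rng.standard_normal((2, 4))
K = P @ np.linalg.inv(Q @ P) @ Q          # idempotent, not Hermitian

Kd = np.linalg.pinv(K)
E, F = K @ Kd, Kd @ K                     # Hermitian idempotents

print(np.allclose(K, np.linalg.pinv(F @ E)))   # K = (FE)†
print(np.allclose(K, E @ K @ F))               # K = E K F
```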

Theorem 2.12. If

$E_\lambda = I - \{(A - \lambda I)^n\}^\dagger(A - \lambda I)^n$

and

$F_\lambda = I - (A - \lambda I)^n\{(A - \lambda I)^n\}^\dagger,$

where $n$ is sufficiently large (e.g. the order of $A$), then the principal idempotent elements of $A$ are given by $K_\lambda = (F_\lambda E_\lambda)^\dagger$. Further, $n$ can be taken as unity if and only if $A$ is diagonalizable.

Proof. First suppose that $A$ is diagonalizable. Put

$E_\lambda = I - (A - \lambda I)^\dagger(A - \lambda I)$ (2.17)

$F_\lambda = I - (A - \lambda I)(A - \lambda I)^\dagger$ (2.18)

By 2.11(1), $E_\lambda$ and $F_\lambda$ are Hermitian idempotents. If $\lambda$ is not an eigenvalue of $A$, then $(A - \lambda I)x \neq 0$ for every non-zero $x$, so $\ker(A - \lambda I) = 0$ and $A - \lambda I$ is invertible; hence $(A - \lambda I)^\dagger = (A - \lambda I)^{-1}$ (by 2.4(3)), $(A - \lambda I)^\dagger(A - \lambda I) = I$, and $E_\lambda = F_\lambda = 0$.

Now,

$(A - \mu I)E_\mu = (A - \mu I)\left[I - (A - \mu I)^\dagger(A - \mu I)\right] = (A - \mu I) - (A - \mu I)(A - \mu I)^\dagger(A - \mu I) = 0.$

Similarly, $F_\lambda(A - \lambda I) = 0$. So $(A - \mu I)E_\mu = 0$ gives

$AE_\mu = \mu E_\mu$

and hence

$F_\lambda AE_\mu = \mu F_\lambda E_\mu$ (2.19)

Also $F_\lambda(A - \lambda I) = 0 \Rightarrow F_\lambda A = \lambda F_\lambda$, so

$F_\lambda AE_\mu = \lambda F_\lambda E_\mu$ (2.20)

(2.19) and (2.20) imply $\lambda F_\lambda E_\mu = \mu F_\lambda E_\mu$, i.e., $(\lambda - \mu)F_\lambda E_\mu = 0$, so

$F_\lambda E_\mu = 0$ if $\lambda \neq \mu$ (2.21)

By 2.11(3) we have $K_\lambda = (F_\lambda E_\lambda)^\dagger$ and

$K_\lambda = E_\lambda K_\lambda F_\lambda = E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda$ (2.22)

So, for $\lambda = \mu$,

$K_\lambda K_\mu = E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda = E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda = K_\lambda,$

and if $\lambda \neq \mu$, $K_\lambda K_\mu = 0$, since $F_\lambda E_\mu = 0$. Hence, also using equation (2.21),

$K_\lambda K_\mu = \delta_{\lambda\mu}K_\lambda$ (2.23)

$F_\lambda K_\mu E_\nu = \delta_{\lambda\mu}\delta_{\mu\nu}F_\lambda E_\lambda$ (2.24)

Next, let $Z_\alpha$ be any eigenvector of $A$ corresponding to the eigenvalue $\alpha$ (i.e., $(A - \alpha I)Z_\alpha = 0$). Then

$E_\alpha Z_\alpha = \left[I - (A - \alpha I)^\dagger(A - \alpha I)\right]Z_\alpha = Z_\alpha - (A - \alpha I)^\dagger(A - \alpha I)Z_\alpha = Z_\alpha.$

Since $A$ is diagonalizable, any column vector $x$ conformable with $A$ is expressible as a finite sum of eigenvectors over all complex $\lambda$, i.e.,

$x = \sum Z_\lambda = \sum E_\lambda x_\lambda$

Similarly, if $y$ is conformable with $A$, it is expressible as $y^* = \sum y_\lambda^*F_\lambda$. Now

$y^*\left(\sum K_\mu\right)x = \left(\sum y_\lambda^*F_\lambda\right)\left(\sum K_\mu\right)\left(\sum E_\nu x_\nu\right) = \sum y_\lambda^*F_\lambda E_\lambda x_\lambda$ (by (2.23) and (2.24)) $= \left(\sum y_\lambda^*F_\lambda\right)\left(\sum E_\nu x_\nu\right) = y^*x.$

Hence

$\sum K_\mu = I$ (2.25)

Also from equation (2.17) we have $(A - \lambda I)E_\lambda = 0$, i.e., $AE_\lambda = \lambda E_\lambda$, so

$AE_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda = \lambda E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda$, i.e., $AK_\lambda = \lambda K_\lambda$ (by (2.22)) (2.26)

Also $F_\lambda(A - \lambda I) = 0 \Rightarrow F_\lambda A = \lambda F_\lambda$, and hence

$E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda A = \lambda E_\lambda(F_\lambda E_\lambda)^\dagger F_\lambda$, i.e., $K_\lambda A = \lambda K_\lambda$ (by (2.22)) (2.27)

From (2.26) and (2.27) we have

$AK_\lambda = \lambda K_\lambda = K_\lambda A$ (2.28)

Thus conditions (1.5) and (1.6) are satisfied. Now, as $\sum K_\lambda = I$,

$A = \sum \lambda K_\lambda$ (2.29)

Conversely, let $n = 1$ and suppose $A$ is not diagonalizable. Observe that, by (2.28),

$AK_\lambda x = \lambda K_\lambda x,$

that is, for any vector $x$, $K_\lambda x$ is an eigenvector of $A$ corresponding to $\lambda$ (or zero). Therefore, $x = \sum K_\lambda x$ expresses any $x$ as a sum of eigenvectors of $A$, contradicting the assumption. Note that (2.28) was deduced without assuming the diagonalizability of $A$.

Now we shall prove that any set of $K_\lambda$'s satisfying (1.3), (1.4), (1.5) and (1.6) must be given by $K_\lambda = (F_\lambda E_\lambda)^\dagger$, where $F_\lambda$ and $E_\lambda$ are as defined. We must have $\sum K_\lambda = I$ and

$(A - \lambda I)^nK_\lambda = 0 = K_\lambda(A - \lambda I)^n,$

where $n$ is sufficiently large. This gives

$E_\lambda K_\lambda = K_\lambda = K_\lambda F_\lambda$ (2.30)

As, for $\lambda \neq \mu$, $(x - \lambda)^n$ and $(x - \mu)^n$ are coprime, there are polynomials $P(x)$ and $Q(x)$ such that

$I = (A - \lambda I)^nP(A) + Q(A)(A - \mu I)^n$ (2.31)

Now, since

$F_\lambda(A - \lambda I)^n = \left[I - (A - \lambda I)^n\{(A - \lambda I)^n\}^\dagger\right](A - \lambda I)^n = 0$

(using $(A - \lambda I)^n\{(A - \lambda I)^n\}^\dagger(A - \lambda I)^n = (A - \lambda I)^n$), and similarly $(A - \mu I)^nE_\mu = 0$, multiplying (2.31) by $F_\lambda$ on the left and $E_\mu$ on the right gives

$F_\lambda E_\mu = 0$, if $\lambda \neq \mu$ (use (2.31))

$F_\lambda K_\mu = 0 = K_\lambda E_\mu$, if $\lambda \neq \mu$ (use (2.30))

Hence

$F_\lambda K_\lambda = F_\lambda, \quad K_\lambda E_\lambda = E_\lambda$ (since $\sum K_\lambda = I$) (2.32)

Now, use (2.30) and (2.32) to see that

$(F_\lambda E_\lambda)K_\lambda(F_\lambda E_\lambda) = F_\lambda E_\lambda$

$K_\lambda(F_\lambda E_\lambda)K_\lambda = K_\lambda$

$(F_\lambda E_\lambda K_\lambda)^* = F_\lambda E_\lambda K_\lambda$

$(K_\lambda F_\lambda E_\lambda)^* = K_\lambda F_\lambda E_\lambda$

These equations can be verified as below:

$(F_\lambda E_\lambda)K_\lambda(F_\lambda E_\lambda) = F_\lambda(E_\lambda K_\lambda)F_\lambda E_\lambda = F_\lambda K_\lambda F_\lambda E_\lambda$ (by (2.30)) $= F_\lambda K_\lambda E_\lambda$ (by (2.30)) $= F_\lambda E_\lambda$ (by (2.32))

$K_\lambda(F_\lambda E_\lambda)K_\lambda = K_\lambda F_\lambda(E_\lambda K_\lambda) = K_\lambda F_\lambda K_\lambda$ (by (2.30)) $= K_\lambda F_\lambda$ (by (2.32)) $= K_\lambda$ (by (2.30))

$(F_\lambda E_\lambda K_\lambda)^* = (F_\lambda K_\lambda)^*$ (by (2.30)) $= F_\lambda^*$ (by (2.32)) $= F_\lambda$ (since $F_\lambda$ is a Hermitian idempotent) $= F_\lambda K_\lambda$ (by (2.32)) $= F_\lambda E_\lambda K_\lambda$ (by (2.30))

$(K_\lambda F_\lambda E_\lambda)^* = (K_\lambda E_\lambda)^*$ (by (2.30)) $= E_\lambda^*$ (by (2.32)) $= E_\lambda$ (since $E_\lambda$ is a Hermitian idempotent) $= K_\lambda E_\lambda$ (by (2.32)) $= K_\lambda F_\lambda E_\lambda$ (by (2.30))

Hence

$(F_\lambda E_\lambda)^\dagger = K_\lambda$

and $K_\lambda$ is unique.

Corollary 2.13. If $A$ is normal, it is diagonalizable and its principal idempotent elements are Hermitian.

Proof. If $A$ is normal, then $A - \lambda I$ is also normal. Then, by 2.4(8),

$(A - \lambda I)(A - \lambda I)^\dagger = (A - \lambda I)^\dagger(A - \lambda I).$

Then $E_\lambda = I - (A - \lambda I)^\dagger(A - \lambda I) = F_\lambda$, and $K_\lambda = (F_\lambda E_\lambda)^\dagger$ is Hermitian since $E_\lambda$ and $F_\lambda$ are both Hermitian.
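For a diagonalizable (not necessarily normal) matrix, Theorem 2.12 with $n = 1$ can be tested numerically. A sketch assuming NumPy; the matrix $A$ below, with eigenvalues 2, 2, 5, is an illustrative choice:

```python
# Principal idempotents via K_l = (F_l E_l)†, with
# E_l = I - (A - l I)†(A - l I),  F_l = I - (A - l I)(A - l I)†.
import numpy as np

lam = np.array([2.0, 2.0, 5.0])
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])              # invertible change of basis
A = S @ np.diag(lam) @ np.linalg.inv(S)      # diagonalizable, non-normal

I = np.eye(3)
Ks = {}
for l in np.unique(lam):
    M = A - l * I
    Md = np.linalg.pinv(M)
    E, F = I - Md @ M, I - M @ Md
    Ks[l] = np.linalg.pinv(F @ E)

print(np.allclose(sum(Ks.values()), I))                       # (1.4)
print(all(np.allclose(A @ K, K @ A) for K in Ks.values()))    # (1.5)
print(np.allclose(sum(l * K for l, K in Ks.items()), A))      # A = sum l K_l
```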

Note 2.14. If $A$ is normal, then $A^\dagger = \sum \lambda^\dagger E_\lambda$, since

$A^\dagger = \left(\sum \lambda E_\lambda\right)^\dagger = \sum(\lambda E_\lambda)^\dagger$ (by 2.4(7)) $= \sum \lambda^\dagger E_\lambda^\dagger$ (by 2.4(4)) $= \sum \lambda^\dagger E_\lambda$ (since $E_\lambda$ is a Hermitian idempotent, so $E_\lambda^\dagger = E_\lambda$)

A new type of spectral decomposition: In view of the above note it is clear that if $A$ is normal then we get a simple expression for $A^\dagger$ in terms of its principal idempotent elements. However, below, we prove a new type of spectral decomposition, valid for any matrix, from which we get a comparably simple expression for $A^\dagger$.

Theorem 2.15. Any matrix $A$ is uniquely expressible in the form

$A = \sum_{\alpha>0} \alpha U_\alpha,$

this being a finite sum over real values of $\alpha$, where

$U_\alpha^\dagger = U_\alpha^*$ (2.33)

$U_\alpha^*U_\beta = 0$ (2.34)

$U_\alpha U_\beta^* = 0$ (2.35)

if $\alpha \neq \beta$. Thus, using the above note, we can write

$A^\dagger = \sum \alpha^{-1}U_\alpha^*$ (since $U_\alpha^\dagger = U_\alpha^*$).

Proof. Equations (2.33), (2.34) and (2.35) can be comprehensively written as

$U_\alpha U_\beta^*U_\gamma = \delta_{\alpha\beta}\delta_{\beta\gamma}U_\alpha$ (2.36)

For $\alpha = \beta = \gamma$,

$U_\alpha U_\alpha^*U_\alpha = U_\alpha \Rightarrow U_\alpha^*U_\alpha U_\alpha^* = U_\alpha^*.$

Also, note that $U_\alpha U_\alpha^*$ and $U_\alpha^*U_\alpha$ are both Hermitian. Therefore, by uniqueness of the generalized inverse,

$U_\alpha^\dagger = U_\alpha^*$ (2.37)

Also,

$U_\alpha U_\alpha^*U_\beta = 0$ and $U_\alpha U_\beta^*U_\beta = 0$ ($\alpha \neq \beta$)

respectively imply

$U_\alpha^*U_\beta = 0$ and $U_\alpha U_\beta^* = 0$ (by (1.1) and (1.2)).

Define

$E_\lambda = I - (A^*A - \lambda I)^\dagger(A^*A - \lambda I).$

The matrix $A^*A$ is normal, being Hermitian, and is non-negative definite. Hence the non-zero $E_\lambda$'s are its principal idempotent elements (by Corollary 2.13), and $E_\lambda = 0$ unless $\lambda \geq 0$. Thus

$A^*A = \sum \lambda E_\lambda$ and $(A^*A)^\dagger = \sum \lambda^\dagger E_\lambda.$

Hence

$A^\dagger A = A^\dagger(A^\dagger)^*A^*A = (A^*A)^\dagger(A^*A) = \sum \lambda^\dagger\lambda E_\lambda = \sum_{\lambda>0} E_\lambda.$

Put

$U_\alpha = \begin{cases} \alpha^{-1}AE_{\alpha^2} & \text{if } \alpha > 0, \\ 0 & \text{otherwise.} \end{cases}$

Then

$\sum_{\alpha>0} \alpha U_\alpha = \sum_{\alpha>0} \alpha\alpha^{-1}AE_{\alpha^2} = A\sum_{\lambda>0} E_\lambda = AA^\dagger A = A.$

Also, if $\alpha, \beta, \gamma > 0$, then

$U_\alpha U_\beta^*U_\gamma = (\alpha^{-1}AE_{\alpha^2})(\beta^{-1}AE_{\beta^2})^*(\gamma^{-1}AE_{\gamma^2})$
$= \alpha^{-1}\beta^{-1}\gamma^{-1}AE_{\alpha^2}E_{\beta^2}A^*AE_{\gamma^2}$ (since $E_{\beta^2}^* = E_{\beta^2}$)
$= \alpha^{-1}\beta^{-1}\gamma^{-1}AE_{\alpha^2}E_{\beta^2}\left(\sum \lambda E_\lambda\right)E_{\gamma^2}$
$= \alpha^{-1}\beta^{-1}\gamma^{-1}AE_{\alpha^2}E_{\beta^2}\gamma^2E_{\gamma^2}$ (since $E_\lambda E_\mu = \delta_{\lambda\mu}E_\lambda$)
$= \delta_{\alpha\beta}\delta_{\beta\gamma}\alpha^{-1}AE_{\alpha^2} = \delta_{\alpha\beta}\delta_{\beta\gamma}U_\alpha.$

For uniqueness, suppose

$A = \sum_{\alpha>0} \alpha V_\alpha$, where $V_\alpha V_\beta^*V_\gamma = \delta_{\alpha\beta}\delta_{\beta\gamma}V_\alpha.$

Then $V_\alpha^*V_\alpha$ ($\alpha > 0$) and $I - \sum_{\beta>0} V_\beta^*V_\beta$ are the principal idempotent elements of $A^*A$ corresponding to the eigenvalues $\alpha^2$ and $0$ respectively. Hence

$V_\alpha^*V_\alpha = E_{\alpha^2}$, where $\alpha > 0.$

So

$U_\alpha = \alpha^{-1}AE_{\alpha^2} = \alpha^{-1}\left(\sum_{\beta>0} \beta V_\beta\right)V_\alpha^*V_\alpha = V_\alpha.$

Note 2.16. $A^\dagger = \sum_{\alpha>0} \alpha^{-1}U_\alpha^*$.
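The decomposition of Theorem 2.15 and the resulting formula for $A^\dagger$ can be built from the singular value decomposition: $U_\alpha$ collects the rank-one pieces belonging to the singular value $\alpha$. A sketch assuming NumPy; rounding to 12 digits is an illustrative way to group equal singular values:

```python
# A = sum_a a U_a  and  A† = sum_a a^{-1} U_a*  (Theorem 2.15, Note 2.16).
import numpy as np

rng = np.random.default_rng(15)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # rank 2

P, s, Qh = np.linalg.svd(A)
alphas = [a for a in np.unique(s.round(12)) if a > 1e-10]
U = {a: sum(np.outer(P[:, i], Qh[i, :])        # rank-one piece p_i q_i*
            for i in range(len(s)) if np.isclose(s[i], a))
     for a in alphas}

print(np.allclose(sum(a * U[a] for a in alphas), A))
print(np.allclose(sum(U[a].conj().T / a for a in alphas),
                  np.linalg.pinv(A)))
```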

Remark 2.17. Put

$H = \sum \alpha U_\alpha U_\alpha^*.$

Clearly, $H$ is non-negative definite Hermitian (since $U_\alpha = 0$ unless $\alpha > 0$) and

$H^2 = \sum \alpha^2U_\alpha U_\alpha^*U_\alpha U_\alpha^* = \sum \alpha^2U_\alpha U_\alpha^* = \left(\sum_{\alpha>0} \alpha U_\alpha\right)\left(\sum_{\alpha>0} \alpha U_\alpha\right)^* = AA^*.$

This means $H$ must be unique (by 1.14). Also

$H^\dagger = \left(\sum \alpha U_\alpha U_\alpha^*\right)^\dagger = \sum \alpha^{-1}(U_\alpha U_\alpha^*)^\dagger = \sum \alpha^{-1}U_\alpha U_\alpha^*,$

since each $U_\alpha U_\alpha^*$ is a Hermitian idempotent. Now

$HH^\dagger = \sum_{\alpha,\beta} \alpha\beta^{-1}U_\alpha U_\alpha^*U_\beta U_\beta^* = \sum U_\alpha U_\alpha^* = \left(\sum \alpha U_\alpha\right)\left(\sum \alpha^{-1}U_\alpha^*\right) = AA^\dagger.$

Similarly, $H^\dagger H = AA^\dagger$. Hence

$HH^\dagger = H^\dagger H = AA^\dagger.$

Now, since $AA^*$ and $A^*A$ are both Hermitian and have the same eigenvalues, they are equivalent under a unitary transformation (by 1.13), i.e., there is a unitary matrix $W$ satisfying

$WA^*A = AA^*W.$

Putting

$V = H^\dagger A + W - WA^\dagger A,$

we get

$VV^* = (H^\dagger A + W - WA^\dagger A)(A^*H^\dagger + W^* - A^\dagger AW^*).$

Note first that $WA^*A = AA^*W$ gives $(A^*A)^\dagger = W^*(AA^*)^\dagger W$ (by 2.4(6)), and hence

$WA^\dagger A = W(A^*A)^\dagger(A^*A) = (AA^*)^\dagger(AA^*)W = AA^\dagger W.$

Expanding $VV^*$, the cross terms cancel in pairs, since

$H^\dagger A(A^\dagger A)W^* = H^\dagger(AA^\dagger A)W^* = H^\dagger AW^*$ and $WA^\dagger AA^*H^\dagger = WA^*H^\dagger$ (since $A^\dagger AA^* = A^*$, by (2.13)),

and the remaining terms give

$VV^* = H^\dagger AA^*H^\dagger + WW^* - WA^\dagger AW^* = (H^\dagger H)(HH^\dagger) + I - AA^\dagger WW^*$ (since $H^2 = AA^*$ and $WA^\dagger A = AA^\dagger W$) $= AA^\dagger + I - AA^\dagger = I.$

Also

$HV = HH^\dagger A + HW - HWA^\dagger A = AA^\dagger A + HW - HAA^\dagger W$ (since $WA^\dagger A = AA^\dagger W$) $= A + HW - HH^\dagger HW$ (since $AA^\dagger = HH^\dagger$) $= A + HW - HW = A,$

which is the polar representation of $A$.

Remark 2.18. The polar representation is unique if $A$ is non-singular (by 1.15). If we require $A = HU$, where $U^\dagger = U^*$ and $UU^* = H^\dagger H$, the representation is always unique, and also exists for rectangular matrices. The uniqueness of $H$ follows from

$AA^* = HU(HU)^* = HUU^*H^* = HH^\dagger HH = HH = H^2$ (since $H$ is Hermitian and $HH^\dagger H = H$),

so that $H$ is the unique non-negative square root of $AA^*$; and then

$H^\dagger A = H^\dagger HU = UU^*U = U,$

so $U$ is unique as well. If we put

$G = \sum \alpha U_\alpha^*U_\alpha,$

we get the alternative representation $A = UG$. In this case

$U = AG^\dagger + W - WA^\dagger A.$
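A numerical sketch of Remark 2.17, assuming NumPy. Here $H$ and a suitable unitary $W$ are taken from the SVD $A = P\Sigma Q^*$ — an illustrative construction: $H = P\Sigma P^*$ is the non-negative square root of $AA^*$, and $W = PQ^*$ satisfies $WA^*A = AA^*W$:

```python
# Polar representation A = H V with V = H† A + W - W A† A unitary.
import numpy as np

rng = np.random.default_rng(17)
A = rng.standard_normal((3, 1)) @ rng.standard_normal((1, 3))  # singular 3x3

P, s, Qh = np.linalg.svd(A)
H = (P * s) @ P.conj().T                  # H = P diag(s) P*, H^2 = AA*
W = P @ Qh                                # unitary, W A*A = AA* W
Hd, Ad = np.linalg.pinv(H), np.linalg.pinv(A)

V = Hd @ A + W - W @ Ad @ A
print(np.allclose(V @ V.conj().T, np.eye(3)))   # V unitary
print(np.allclose(H @ V, A))                    # A = H V
```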

Chapter 3: Method of elementary transformation to compute the Moore-Penrose inverse

We consider the following lemma (Lemma 3.1 in [5]). In this chapter $A^H = A^*$ denotes the conjugate transpose, following [2].

Lemma 3.1. Suppose that $A \in \mathbb{C}^{m \times n}$, $B \in \mathbb{C}^{m \times p}$, $C \in \mathbb{C}^{q \times n}$ and $D \in \mathbb{C}^{q \times p}$. Then

$r(D - CA^\dagger B) = r\begin{pmatrix} A^HAA^H & A^HB \\ CA^H & D \end{pmatrix} - r(A).$

Theorem 3.2. Suppose that $A \in \mathbb{C}^{m \times n}$, $X \in \mathbb{C}^{k \times l}$, $1 \leq k \leq n$, $1 \leq l \leq m$. If

$r\begin{pmatrix} A^HAA^H & A^H\begin{pmatrix} I_l \\ 0 \end{pmatrix} \\ (I_k, 0)A^H & X \end{pmatrix} = r(A)$ (3.1)

then

$X = (I_k, 0)A^\dagger\begin{pmatrix} I_l \\ 0 \end{pmatrix}.$

Proof. Using Lemma 3.1, we can write

$r\begin{pmatrix} A^HAA^H & A^H\begin{pmatrix} I_l \\ 0 \end{pmatrix} \\ (I_k, 0)A^H & X \end{pmatrix} = r\left(X - (I_k, 0)A^\dagger\begin{pmatrix} I_l \\ 0 \end{pmatrix}\right) + r(A)$ (3.2)

So if

$r\begin{pmatrix} A^HAA^H & A^H\begin{pmatrix} I_l \\ 0 \end{pmatrix} \\ (I_k, 0)A^H & X \end{pmatrix} = r(A),$

then $r\left(X - (I_k, 0)A^\dagger\begin{pmatrix} I_l \\ 0 \end{pmatrix}\right) = 0$, i.e.,

$X = (I_k, 0)A^\dagger\begin{pmatrix} I_l \\ 0 \end{pmatrix}.$

Method of elementary transformation to compute the Moore-Penrose inverse: When $k = n$, $l = m$, then

$(I_k, 0) = I_n$ and $\begin{pmatrix} I_l \\ 0 \end{pmatrix} = I_m,$

and hence the matrix in the above theorem becomes

$\begin{pmatrix} A^HAA^H & A^H \\ A^H & X \end{pmatrix},$

whose rank equals $r(A)$ precisely when $X = A^\dagger$. Then, to compute the generalized inverse of a matrix, we follow the following steps:

1. Form the partitioned matrix

$B = \begin{pmatrix} A^HAA^H & A^H \\ A^H & 0 \end{pmatrix}.$

2. Make the block $A^HAA^H$ become $\begin{pmatrix} I_{r(A)} & 0 \\ 0 & 0 \end{pmatrix}$ by applying elementary transformations. In this process the blocks $A^H$ in positions $B(1,2)$ and $B(2,1)$ will be transformed accordingly.

3. Make the blocks $B(1,2)$ and $B(2,1)$ of the new partitioned matrix zero by applying the block $I_{r(A)}$. In this process the zero block in position $B(2,2)$ becomes $-A^\dagger$, so that $A^\dagger$ is read off (with a change of sign) from the fully reduced matrix.
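The block elimination behind the method rests on the identity $A^\dagger = A^H(A^HAA^H)^\dagger A^H$, which is what the cleared $(2,2)$ block produces (up to sign). A numerical sketch assuming NumPy:

```python
# The Schur-complement identity behind the elimination, and the rank
# condition (3.1) with k = n, l = m and X = A†.
import numpy as np

rng = np.random.default_rng(19)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # rank 2
Ah = A.conj().T
M = Ah @ A @ Ah                                                 # A^H A A^H

Ad = np.linalg.pinv(A)
print(np.allclose(Ad, Ah @ np.linalg.pinv(M) @ Ah))             # A† = A^H M† A^H

B = np.block([[M, Ah], [Ah, Ad]])        # bordered matrix with X = A†
print(np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A))     # True
```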

Numerical example: [The worked example — the specific matrix $A$, the partitioned matrix $B = \begin{pmatrix} A^HAA^H & A^H \\ A^H & 0 \end{pmatrix}$, and the sequence of elementary row and column operations ($r_1(-1)+r_2$, $c_1(-1)+c_2$, $c_2(-1)+c_3$, $r_2(-1)+r_3$, ...) reducing it — is not recoverable from this copy; see [2] for the complete computation.]

References

[1] R. Penrose, A generalized inverse for matrices, Proceedings of the Cambridge Philosophical Society, 51 (1955), 406-413.

[2] W. Guo and T. Huang, Method of elementary transformation to compute Moore-Penrose inverse, Applied Mathematics and Computation, 216 (2010).

[3] A. Ramachandra Rao and P. Bhimasankaram, Linear Algebra, Hindustan Book Agency.

[4] K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall of India.

[5] G. W. Stewart, On the continuity of the generalized inverse, SIAM Journal on Applied Mathematics, vol. 17, no. 1 (1969).
