Generalized core inverses of matrices
Sanzhang Xu, Jianlong Chen, Julio Benítez and Dingguo Wang

arXiv preprint, v1 [math.RA], 13 Sep 2017

Abstract: In this paper, we introduce two new generalized inverses of matrices, namely, the ⟨i,m⟩-core inverse and the (j,m)-core inverse. The ⟨i,m⟩-core inverse of a complex matrix extends the notions of the core inverse defined by Baksalary and Trenkler [1] and the core-EP inverse defined by Manjunatha Prasad and Mohana [13]. The (j,m)-core inverse of a complex matrix extends the notions of the core inverse and the DMP-inverse defined by Malik and Thome [12]. Moreover, formulae and properties of these two new concepts are investigated by using matrix decompositions and matrix powers.

Key words: ⟨i,m⟩-core inverse, (j,m)-core inverse, core inverse, DMP-inverse, core-EP inverse.

AMS subject classifications: 15A09, 15A23.

1 Introduction

Let ℂ^{m×n} denote the set of all m×n complex matrices. Let A*, R(A) and rk(A) denote the conjugate transpose, the column space and the rank of A ∈ ℂ^{m×n}, respectively. For A ∈ ℂ^{m×n}, if X ∈ ℂ^{n×m} satisfies AXA = A, XAX = X, (AX)* = AX and (XA)* = XA, then X is called a Moore-Penrose inverse of A. This matrix X is unique and is denoted by A†. A matrix X ∈ ℂ^{n×m} is called an outer inverse of A if it satisfies XAX = X; a {2,3}-inverse of A if it satisfies XAX = X and (AX)* = AX; a {1,3}-inverse of A if it satisfies AXA = A and (AX)* = AX; and a {1,2,3}-inverse of A if it satisfies AXA = A, XAX = X and (AX)* = AX.

The core inverse of a complex matrix was introduced by Baksalary and Trenkler [1]. Let A ∈ ℂ^{n×n}. A matrix X ∈ ℂ^{n×n} is called a core inverse of A if it satisfies AX = P_A and R(X) ⊆ R(A), where P_A denotes the orthogonal projector onto R(A). If such a matrix exists, then it is unique and is denoted by A^{#○}. For a square complex matrix A, the three conditions "A is core invertible", "A is group invertible" and "rk(A) = rk(A²)" are equivalent (see [1]). We denote ℂ^{CM}_n = {A ∈ ℂ^{n×n} : rk(A) = rk(A²)}. Let A ∈ ℂ^{n×n}.
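As a quick numerical illustration (a sketch of ours, not part of the paper; it assumes NumPy, whose `np.linalg.pinv` computes the Moore-Penrose inverse), the four defining equations can be checked directly:

```python
import numpy as np

# Verify the four Penrose equations AXA = A, XAX = X,
# (AX)* = AX, (XA)* = XA for X = A^dagger.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])          # a 3x2 matrix of rank 2
X = np.linalg.pinv(A)

assert np.allclose(A @ X @ A, A)               # AXA = A
assert np.allclose(X @ A @ X, X)               # XAX = X
assert np.allclose((A @ X).conj().T, A @ X)    # AX is Hermitian
assert np.allclose((X @ A).conj().T, X @ A)    # XA is Hermitian
```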
A matrix X ∈ ℂ^{n×n} such that XA^{k+1} = A^k, XAX = X and AX = XA is called the Drazin inverse of A and is denoted by A^D. The smallest such integer k is called the

Sanzhang Xu (xusanzhang5222@126.com): School of Mathematics, Southeast University, Nanjing 210096, China. Jianlong Chen (corresponding author: jlchen@seu.edu.cn): School of Mathematics, Southeast University, Nanjing 210096, China. Julio Benítez (jbenitez@mat.upv.es): Universitat Politècnica de València, Instituto de Matemática Multidisciplinar, Valencia 46022, Spain. Dingguo Wang (dingguo95@126.com): School of Mathematical Sciences, Qufu Normal University, Qufu, China.
Drazin index of A, denoted by ind(A). Any square matrix with finite index is Drazin invertible. If ind(A) ≤ 1, then the Drazin inverse of A is called the group inverse and is denoted by A^#.

The DMP-inverse of a complex matrix was introduced by Malik and Thome [12]. Let A ∈ ℂ^{n×n} with ind(A) = k. A matrix X ∈ ℂ^{n×n} is called a DMP-inverse of A if it satisfies XAX = X, XA = A^D A and A^k X = A^k A†. If such a matrix X exists, then it is unique and is denoted by A^{D,†}. Malik and Thome gave several characterizations of the DMP-inverse by using the decomposition of Hartwig and Spindelböck [10].

The notion of the core-EP inverse of a complex matrix was introduced by Manjunatha Prasad and Mohana [13]. A matrix X ∈ ℂ^{n×n} is a core-EP inverse of A ∈ ℂ^{n×n} if X is an outer inverse of A satisfying R(X) = R(X*) = R(A^k), where k is the index of A. If such a matrix X exists, then it is unique and is denoted by A^{†○}.

In addition, 1_n and 0_n will denote the n×1 column vectors all of whose components are 1 and 0, respectively. 0_{m×n} (abbreviated 0) denotes the zero matrix of size m×n. If S is a subspace of ℂ^n, then P_S stands for the orthogonal projector onto the subspace S. A matrix A ∈ ℂ^{n×n} is called an EP matrix if R(A) = R(A*); A is called Hermitian if A* = A, and A is unitary if AA* = I_n, where I_n denotes the identity matrix of size n. Let ℕ denote the set of positive integers.

2 Preliminaries

A decomposition related to the matrix decomposition of Hartwig and Spindelböck [10] was given by Benítez in [2, Theorem 2.1]; a simpler proof of this decomposition can be found in [3]. Let us start this section with the concept of principal angles.

Definition 2.1. [16] Let S_1 and S_2 be two nontrivial subspaces of ℂ^n. We define the principal angles θ_1, …, θ_r ∈ [0, π/2] between S_1 and S_2 by cos θ_i = σ_i(P_{S_1} P_{S_2}) for i = 1, …, r, where r = min{dim S_1, dim S_2}. The real numbers σ_i(P_{S_1} P_{S_2}) are the singular values of P_{S_1} P_{S_2}.

The following theorem can be found in [2, Theorem 2.1].

Theorem 2.2.
Let A ∈ ℂ^{n×n}, r = rk(A), and let θ_1, …, θ_p be the principal angles between R(A) and R(A*) belonging to ]0, π/2[. Denote by x and y the multiplicities of the angles 0 and π/2 as canonical angles between R(A) and R(A*), respectively. Then there exists a unitary matrix U ∈ ℂ^{n×n} such that

A = U [ MC  MS ; 0  0 ] U*,   (2.1)

where M ∈ ℂ^{r×r} is nonsingular,

C = diag(0_y, cos θ_1, …, cos θ_p, 1_x),

S = [ diag(1_y, sin θ_1, …, sin θ_p)  0_{(p+y)×(n−(r+p+y))} ; 0_{x×(p+y)}  0_{x×(n−(r+p+y))} ],
and r = y + p + x. Furthermore, x and y + n − r are the multiplicities of the singular values 1 and 0 of P_{R(A)} P_{R(A*)}, respectively.

In this decomposition one has C² + SS* = I_r. Recall that A† always exists, and that A^# exists if and only if C is nonsingular, in view of [2, Theorem 3.7]. The following equalities hold:

A† = U [ CM^{−1}  0 ; S*M^{−1}  0 ] U*,   A^# = U [ C^{−1}M^{−1}  C^{−1}M^{−1}C^{−1}S ; 0  0 ] U*.

By [3, Theorem 2], we have

A^D = U [ (MC)^D  ((MC)^D)² MS ; 0  0 ] U*.   (2.2)

We also have

A A† = U [ I_r  0 ; 0  0 ] U*,   (2.3)

A^{#○} = A^# A A† = U [ C^{−1}M^{−1}  0 ; 0  0 ] U*.   (2.4)

Lemma 2.3. [17, Theorem 3.1] Let A ∈ ℂ^{n×n}. Then A is core invertible if and only if there exists X ∈ ℂ^{n×n} such that (AX)* = AX, XA² = A and AX² = X. In this situation, A^{#○} = X.

Lemma 2.4. Let A ∈ ℂ^{n×n}. If there exists X ∈ ℂ^{n×n} such that AX^{k+1} = X^k and XA^{k+1} = A^k for some k ∈ ℕ, then for m ∈ ℕ we have

(1) A^k = X^m A^{k+m};
(2) X^k = A^m X^{k+m};
(3) A^k X^k = A^{k+m} X^{k+m};
(4) X^k A^k = X^{k+m} A^{k+m};
(5) A^k = A^m X^m A^k;
(6) X^k = X^m A^m X^k.

Proof. (1). For m = 1 the claim is exactly the hypothesis XA^{k+1} = A^k. If the formula holds for some m ∈ ℕ, then X^{m+1} A^{k+m+1} = X X^m A^{k+m} A = X A^k A = X A^{k+1} = A^k.

(3). It is easy to check that A^k X^k = A^{k+1} X^{k+1} by AX^{k+1} = X^k. The equality A^k X^k = A^{k+m} X^{k+m} then follows by induction.

(5). From (1) we have A^k = X^k A^{2k}. Thus, by AX^{k+1} = X^k,

A^k = X^k A^{2k} = AX^{k+1} A^{2k} = AX^k XA^{2k} = A(AX^{k+1})XA^{2k} = A² X^{k+2} A^{2k} = A² X² X^k A^{2k} = ⋯ = A^m X^m X^k A^{2k} = A^m X^m A^k.

The proofs of (2), (4) and (6) are similar to those of (1), (3) and (5), respectively.
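The index ind(A) used throughout can also be computed numerically as the smallest k at which the ranks of successive powers stabilize, rk(A^k) = rk(A^{k+1}). A minimal sketch, assuming NumPy; the helper name `drazin_index` is ours:

```python
import numpy as np

def drazin_index(A, tol=1e-10):
    """Smallest k with rk(A^k) == rk(A^(k+1)); returns 0 for a nonsingular A."""
    n = A.shape[0]
    k, P = 0, np.eye(n)
    while np.linalg.matrix_rank(P, tol=tol) != np.linalg.matrix_rank(P @ A, tol=tol):
        P = P @ A            # P holds A^k
        k += 1
    return k

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # nilpotent, A^2 = 0
assert drazin_index(A) == 2
```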
Lemma 2.5. Let A ∈ ℂ^{n×n}. If there exists X ∈ ℂ^{n×n} such that AX^{k+1} = X^k and XA^{k+1} = A^k for some k ∈ ℕ, then A^D = X^{k+1} A^k.

Proof. A is Drazin invertible since XA^{k+1} = A^k; we check that A^D = X^{k+1} A^k. Having in mind AX^{k+1} = X^k and XA^{k+1} = A^k, we get

A(X^{k+1} A^k) = X^k A^k = X^k (XA^{k+1}) = (X^{k+1} A^k)A.   (2.5)

That is, X^{k+1} A^k and A commute. Then, by (1) and (4) in Lemma 2.4, we have

(X^{k+1} A^k) A (X^{k+1} A^k) = X^{k+1} A^{k+1} X^{k+1} A^k = X^k A^k (X^{k+1} A^k) = X^k X^{k+1} A^k A^k = X^{k+1} X^k A^{2k} = X^{k+1} A^k.   (2.6)

From (1) in Lemma 2.4, we have

(X^{k+1} A^k) A^{k+1} = X (X^k A^{2k}) A = X A^k A = X A^{k+1} = A^k.   (2.7)

Thus A^D = X^{k+1} A^k by the definition of the Drazin inverse, in view of (2.5), (2.6) and (2.7).

Remark 2.6. From the proofs of Lemma 2.4 and Lemma 2.5 it is clear that both lemmas remain valid in rings. Moreover, for an element a of a ring R, a is Drazin invertible if and only if there exist x ∈ R and k ∈ ℕ such that ax^{k+1} = x^k and xa^{k+1} = a^k.

The following lemma is similar to [12, Theorem 2.5].

Lemma 2.7. Let A ∈ ℂ^{n×n} be of the form (2.1). Then

A^{D,†} = U [ (MC)^D  0 ; 0  0 ] U*.   (2.8)

Proof. By (2.2) and (2.3), we have A^D = U [ (MC)^D  ((MC)^D)² MS ; 0  0 ] U* and A A† = U [ I_r  0 ; 0  0 ] U*, respectively. Thus, by the definition of the DMP-inverse,

A^{D,†} = A^D A A† = U [ (MC)^D  ((MC)^D)² MS ; 0  0 ] [ I_r  0 ; 0  0 ] U* = U [ (MC)^D  0 ; 0  0 ] U*.

Lemma 2.8. [15, Corollary 3.3] Let A ∈ ℂ^{n×n} be a matrix of index k. Then A A^{†○} = A^k (A^k)†.
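Lemma 2.5 can be illustrated numerically. The sketch below is ours; it relies on the classical representation A^D = A^k (A^{2k+1})† A^k (not stated in the paper) and takes X = A^D itself, which satisfies AX^{k+1} = X^k and XA^{k+1} = A^k, so the lemma's formula X^{k+1} A^k must recover A^D:

```python
import numpy as np

def drazin(A, k):
    # Classical representation A^D = A^k (A^{2k+1})^+ A^k with k >= ind(A).
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])     # a 3x3 matrix with ind(A) = 2
k = 2
X = drazin(A, k)                    # X = A^D satisfies both hypotheses
assert np.allclose(A @ np.linalg.matrix_power(X, k + 1), np.linalg.matrix_power(X, k))
assert np.allclose(X @ np.linalg.matrix_power(A, k + 1), np.linalg.matrix_power(A, k))
# conclusion of Lemma 2.5: A^D = X^{k+1} A^k
assert np.allclose(np.linalg.matrix_power(X, k + 1) @ np.linalg.matrix_power(A, k), X)
```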
3 The ⟨i,m⟩-core inverse

Let us start this section by introducing the definition of the ⟨i,m⟩-core inverse.

Definition 3.1. Let A ∈ ℂ^{n×n} and m, i ∈ ℕ. A matrix X ∈ ℂ^{n×n} is called an ⟨i,m⟩-core inverse of A if it satisfies

X = A^D A X  and  A^m X = A^i (A^i)†.   (3.1)

It will be proved that if X exists, then it is unique; it is denoted by A_{⟨i,m⟩}.

Theorem 3.2. Let A ∈ ℂ^{n×n}. If there exists X ∈ ℂ^{n×n} such that (3.1) holds, then X is unique.

Proof. Assume that X satisfies the system (3.1), that is, X = A^D A X and A^m X = A^i (A^i)†. Thus X = A^D A X = (A^D)^m A^m X = (A^D)^m A^i (A^i)†. Therefore X is unique, by the uniqueness of A^D and A^i (A^i)†.

Theorem 3.3. The system (3.1) is consistent if and only if i ≥ ind(A). In this case, the solution of (3.1) is X = (A^D)^m A^i (A^i)†.

Proof. Assume that i ≥ ind(A). Let X = (A^D)^m A^i (A^i)†. We have

A^D A X = A^D A (A^D)^m A^i (A^i)† = (A^D)^m A^D A A^i (A^i)† = (A^D)^m A^i (A^i)† = X;
A^m X = A^m (A^D)^m A^i (A^i)† = A^D A A^i (A^i)† = A^i (A^i)†.

Thus the system (3.1) is consistent and its solution is X = (A^D)^m A^i (A^i)†. Conversely, if the system (3.1) is consistent, then there exists X such that X = A^D A X and A^m X = A^i (A^i)†. Then X = A^D A X = (A^D)^m A^m X = (A^D)^m A^i (A^i)† and A^i (A^i)† = A^m X = A^m (A^D)^m A^i (A^i)† = A A^D A^i (A^i)†. Hence A^i = A^i (A^i)† A^i = A A^D A^i (A^i)† A^i = A A^D A^i, that is, i ≥ ind(A).

Example 3.4. We give an example showing that if i < ind(A), then the system (3.1) is not consistent. Let A = [ 0  1 ; 0  0 ]. It is easy to see that ind(A) = 2 and A^D = 0. Let i = 1 and suppose that X is a solution of the system (3.1). Then X = A^D A X = 0, which gives A A† = A^m X = 0, and thus A = A A† A = 0, a contradiction.

Remark 3.5. If i ≥ ind(A), then A_{⟨i,m+1⟩} = A^D A_{⟨i,m⟩}.

Remark 3.6. The ⟨i,m⟩-core inverse is a generalization of the core inverse and the core-EP inverse.
More precisely, we have the following statements:

(1) If m = i = ind(A) = 1, then the ⟨1,1⟩-core inverse coincides with the core inverse;
(2) If m = 1 and i = ind(A), then the ⟨i,1⟩-core inverse coincides with the core-EP inverse.
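Theorem 3.3's closed form X = (A^D)^m A^i (A^i)† is easy to evaluate numerically. The following sketch is ours, assuming NumPy; `im_core` and `drazin` are hypothetical helper names, and A^D is computed via the classical representation A^D = A^k (A^{2k+1})† A^k. It checks the two defining equations of (3.1):

```python
import numpy as np

def drazin(A, k):
    # Classical representation A^D = A^k (A^{2k+1})^+ A^k with k >= ind(A).
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

def im_core(A, i, m, k):
    # <i,m>-core inverse X = (A^D)^m A^i (A^i)^+ (Theorem 3.3; needs i >= ind(A) = k).
    Ai = np.linalg.matrix_power(A, i)
    return np.linalg.matrix_power(drazin(A, k), m) @ Ai @ np.linalg.pinv(Ai)

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])     # ind(A) = 2
i, m, k = 2, 1, 2
X = im_core(A, i, m, k)
AD = drazin(A, k)
Ai = np.linalg.matrix_power(A, i)
assert np.allclose(AD @ A @ X, X)                      # X = A^D A X
assert np.allclose(np.linalg.matrix_power(A, m) @ X,   # A^m X = A^i (A^i)^+
                   Ai @ np.linalg.pinv(Ai))
```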
For the convenience of the reader, we now justify statements (1) and (2) of Remark 3.6.

(1). If m = i = ind(A) = 1, then A is group invertible, A^D = A^#, and (3.1) is equivalent to X = A^# A X and AX = A A†. Thus X = A^# A X = A^# A A†, and (AX)* = (A A†)* = A A† = AX. Moreover, since R(A^#) ⊆ R(A) gives A A† A^# = A^#, we get AX² = (AX)X = A A† A^# A A† = A^# A A† = X, and XA² = A^# A A† A A = A^# A A = A. Hence the ⟨1,1⟩-core inverse coincides with the core inverse, by Lemma 2.3. Note that if A is group invertible, then X is the core inverse of A if and only if X = A^# A X and AX = A A†.

(2). If m = 1 and i = ind(A), then by Theorem 3.3, A_{⟨i,1⟩} exists and A_{⟨i,1⟩} = A^D A^i (A^i)†. Let us denote X = A_{⟨i,1⟩} = A^D A^i (A^i)†. Observe that AX = A^i (A^i)† is Hermitian. Now, XAX = A^D A^i (A^i)† A^i (A^i)† = A^D A^i (A^i)† = X, that is, X is an outer inverse of A. From A^i = A^D A^{i+1} = A^D A^i (A^i)† A^i A = X A^{i+1} we get R(A^i) ⊆ R(X). Also,

AX² = (AX)X = A^i (A^i)† A^D A^i (A^i)† = A^i (A^i)† A^i A^D (A^i)† = A^D A^i (A^i)† = X,

which implies X = (AX)* X, and therefore R(X) ⊆ R(X*). Finally, X* = [A^D A^i (A^i)†]* = A^i (A^i)† (A^D)* implies R(X*) ⊆ R(A^i). Hence R(X) = R(X*) = R(A^i). Therefore, the ⟨i,1⟩-core inverse coincides with the core-EP inverse, by the definition of the core-EP inverse.

From the above statements, we have the following theorem.

Theorem 3.7. Let A ∈ ℂ^{n×n} with i = ind(A). Then X is the core-EP inverse of A if and only if X = A^D A X and AX = A^i (A^i)†.

Corollary 3.8. Let A ∈ ℂ^{n×n} with ind(A) = 1. Then X is the core inverse of A if and only if X = A^# A X and AX = A A†.

For any A ∈ ℂ^{n×n}, either A^l = 0 for some l ∈ ℕ, or A^l ≠ 0 for all positive integers l. Moreover, if ind(A) = k, then G_k B_k is nonsingular (see [4, 6, 8]), where A = B_1 G_1 is a full rank factorization of A and G_l B_l = B_{l+1} G_{l+1} is a full rank factorization of G_l B_l, l = 1, …, k−1. When A^k ≠ 0, it can be written as

A^k = B_1 ⋯ B_k G_k ⋯ G_1.   (3.2)
We have the following results (see [4, Theorem 4] or [7, Theorem 7.8.2]):

ind(A) = k, when G_k B_k is nonsingular;  ind(A) = k + 1, when G_k B_k = 0,

and

A^D = B_1 ⋯ B_k (G_k B_k)^{−(k+1)} G_k ⋯ G_1, when G_k B_k is nonsingular;  A^D = 0, when G_k B_k = 0.   (3.3)

In the sequel, we always assume that A^k ≠ 0. It is well known that if A = EF is a full rank factorization of A, where r = rk(A), E ∈ ℂ^{n×r} and F ∈ ℂ^{r×n}, then (see [7, Theorem 1.3.2])

A† = F* (F F*)^{−1} (E* E)^{−1} E*.   (3.4)
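Formula (3.4) can be verified numerically from an explicit full rank factorization (a sketch of ours, assuming NumPy):

```python
import numpy as np

# A = E F with E of full column rank and F of full row rank (rank 2 here).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])
E = np.array([[1.0, 0.0],
              [2.0, 0.0],
              [0.0, 1.0]])
F = np.array([[1.0, 2.0, 3.0],
              [1.0, 1.0, 1.0]])
assert np.allclose(E @ F, A)

# (3.4): A^+ = F* (F F*)^{-1} (E* E)^{-1} E*
X = F.conj().T @ np.linalg.inv(F @ F.conj().T) @ np.linalg.inv(E.conj().T @ E) @ E.conj().T
assert np.allclose(X, np.linalg.pinv(A))
```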
Remark 3.9. The notation and results of the preceding paragraph will be used many times in the sequel.

We now investigate the ⟨i,m⟩-core inverse of a matrix A ∈ ℂ^{n×n} by using Remark 3.9.

Theorem 3.10. Let A ∈ ℂ^{n×n} with ind(A) = k. If i ≥ k, then A_{⟨i,m⟩} = A_{⟨k,m⟩}.

Proof. Since ind(A) = k, we have R(A^k) = R(A^i) for any i ≥ k, and therefore A^k (A^k)† = A^i (A^i)†. The conclusion now follows from Theorem 3.3.

Remark 3.11. Theorem 3.10 can also be proved as follows. Since the argument of this remark will be used several times in the sequel, we write it out here.

Proof. If A is nilpotent, then A^D = 0, and hence, by Theorem 3.3, A_{⟨i,m⟩} = A_{⟨k,m⟩} = 0. Therefore, we may assume that A^k ≠ 0. By equality (3.2), we have

A^k = B_1 ⋯ B_k G_k ⋯ G_1,   (3.5)

where A = B_1 G_1 is a full rank factorization of A and G_l B_l = B_{l+1} G_{l+1} is a full rank factorization of G_l B_l, l = 1, …, k−1. Let M = B_1 ⋯ B_k, N = G_k ⋯ G_1 and L = G_k B_k. We first show that

A^i = B_1 ⋯ B_k (G_k B_k)^{i−k} G_k ⋯ G_1 = M L^{i−k} N.

In fact,

A^i = B_1 ⋯ B_i G_i ⋯ G_1 = B_1 ⋯ B_{i−1} (B_i G_i) G_{i−1} ⋯ G_1 = B_1 ⋯ B_{i−1} (G_{i−1} B_{i−1}) G_{i−1} ⋯ G_1 = B_1 ⋯ B_{i−2} (G_{i−2} B_{i−2})² G_{i−2} ⋯ G_1 = ⋯ = B_1 ⋯ B_k (G_k B_k)^{i−k} G_k ⋯ G_1 = M L^{i−k} N.   (3.6)

If we let M_1 = M L^{i−k}, then A^i = M L^{i−k} N = M_1 N is a full rank factorization of A^i (see [8, p. 183]). Thus

(A^i)† = N* (N N*)^{−1} (M_1* M_1)^{−1} M_1*.   (3.7)

Note that N M = G_k ⋯ G_1 B_1 ⋯ B_k = L^k. By Theorem 3.3, (3.3) and (3.7) we have

A_{⟨i,1⟩} = A^D A^i (A^i)† = M L^{−k−1} N M L^{i−k} N (A^i)†
= M L^{−k−1} N M L^{i−k} N N* (N N*)^{−1} (M_1* M_1)^{−1} M_1*
= M L^{i−k−1} N N* (N N*)^{−1} (M_1* M_1)^{−1} M_1*
= M L^{i−k−1} (M_1* M_1)^{−1} M_1*
= M L^{i−k−1} [(L^{i−k})* M* M L^{i−k}]^{−1} (L^{i−k})* M*
= M L^{i−k−1} L^{k−i} (M* M)^{−1} [(L^{i−k})*]^{−1} (L^{i−k})* M*
= M L^{−1} (M* M)^{−1} M*.   (3.8)
The last expression does not depend on i, hence A_{⟨i,1⟩} = A_{⟨k,1⟩}. Thus, by Remark 3.5, we have

A_{⟨i,m⟩} = A^D A_{⟨i,m−1⟩} = A^D (A^D A_{⟨i,m−2⟩}) = (A^D)² A_{⟨i,m−2⟩} = ⋯ = (A^D)^{m−1} A_{⟨i,1⟩} = (A^D)^{m−1} A_{⟨k,1⟩} = A_{⟨k,m⟩}.

Remark 3.12. By Theorem 3.10, it is enough to investigate the case i = ind(A) = k when we discuss the ⟨i,m⟩-core inverse of a matrix A ∈ ℂ^{n×n}. That is, Theorem 3.10 is a key theorem.

Theorem 3.13. Let A ∈ ℂ^{n×n} with ind(A) = k and k, m ∈ ℕ. If A = B_1 G_1 is a full rank factorization of A and G_l B_l = B_{l+1} G_{l+1} is a full rank factorization of G_l B_l, l = 1, …, k−1, then A_{⟨k,m⟩} = M L^{−m} M†, where M = B_1 ⋯ B_k, N = G_k ⋯ G_1 and L = G_k B_k.

Proof. By the proof of Remark 3.11, we have A_{⟨k,1⟩} = M L^{−1} (M* M)^{−1} M* and N M = L^k. We now prove that (A^D)^s A_{⟨k,1⟩} = M L^{−s−1} (M* M)^{−1} M* for any s ∈ ℕ. By (3.3) we have A^D = B_1 ⋯ B_k (G_k B_k)^{−(k+1)} G_k ⋯ G_1 = M L^{−k−1} N. When s = 1, we have

A^D A_{⟨k,1⟩} = M L^{−k−1} N M L^{−1} (M* M)^{−1} M* = M L^{−k−1} (N M) L^{−1} (M* M)^{−1} M* = M L^{−k−1} L^k L^{−1} (M* M)^{−1} M* = M L^{−2} (M* M)^{−1} M*.

Assume that (A^D)^{s−1} A_{⟨k,1⟩} = M L^{−s} (M* M)^{−1} M*. Then

(A^D)^s A_{⟨k,1⟩} = A^D (A^D)^{s−1} A_{⟨k,1⟩} = A^D M L^{−s} (M* M)^{−1} M* = M L^{−k−1} N M L^{−s} (M* M)^{−1} M* = M L^{−k−1} L^k L^{−s} (M* M)^{−1} M* = M L^{−s−1} (M* M)^{−1} M*.

Thus, by Remark 3.5, we have

A_{⟨k,m⟩} = (A^D)^{m−1} A_{⟨k,1⟩} = M L^{−m} (M* M)^{−1} M* = M L^{−m} M†,

where the last step uses that M has full column rank, so M† = (M* M)^{−1} M*.

In the following theorem, we give a canonical form for the ⟨k,m⟩-core inverse of a matrix A ∈ ℂ^{n×n} by using the matrix decomposition in Theorem 2.2. We will also use the following simple fact: let X ∈ ℂ^{n×m} and b ∈ ℂ^n; if y ∈ ℂ^m satisfies X* X y = X* b, then X X† b = X y.

Theorem 3.14. Let A ∈ ℂ^{n×n} have the form (2.1) with ind(A) = k, and let m ∈ ℕ. Then

A_{⟨k,m⟩} = U [ (MC)_{⟨k−1,m⟩}  0 ; 0  0 ] U*.   (3.9)

Proof. Let r be the rank of A. By Theorem 3.3 we have A_{⟨k,m⟩} = (A^D)^m A^k (A^k)†.
Since A has the form given in Theorem 2.2, we have

A^k = U [ (MC)^k  (MC)^{k−1} MS ; 0  0 ] U*.   (3.10)

Let b ∈ ℂ^n be arbitrary and decompose b = U [ b_1 ; b_2 ], where b_1 ∈ ℂ^r. Let x ∈ ℂ^n satisfy (A^k)* A^k x = (A^k)* b [this x always exists because the normal equations always have a solution]. Decompose x = U [ x_1 ; x_2 ], where x_1 ∈ ℂ^r. Let us denote N = (MC)^{k−1} M, so that A^k = U [ NC  NS ; 0  0 ] U*. Using (3.10), the normal equations become

C N* N (C x_1 + S x_2) = C N* b_1  and  S* N* N (C x_1 + S x_2) = S* N* b_1.

Premultiplying the first equality by C and the second by S, and then adding them, we get N* N (C x_1 + S x_2) = N* b_1 (recall that C² + SS* = I_r), and hence N (C x_1 + S x_2) = N N† b_1. Now,

A^k (A^k)† b = A^k x = U [ NC  NS ; 0  0 ] [ x_1 ; x_2 ] = U [ N(C x_1 + S x_2) ; 0 ] = U [ N N† b_1 ; 0 ] = U [ N N†  0 ; 0  0 ] U* b.

Since b is arbitrary, A^k (A^k)† = U [ N N†  0 ; 0  0 ] U*.

Now we prove that N N† = (MC)^{k−1} ((MC)^{k−1})†. Recall that N = (MC)^{k−1} M, and so R(N) ⊆ R((MC)^{k−1}). Since M is nonsingular, rk(N) = rk((MC)^{k−1}), and therefore R(N) = R((MC)^{k−1}). Since N N† and (MC)^{k−1} ((MC)^{k−1})† are the orthogonal projectors onto R(N) and R((MC)^{k−1}), respectively, we get N N† = (MC)^{k−1} ((MC)^{k−1})†.

By (2.2) we have

A^D = U [ (MC)^D  ((MC)^D)² MS ; 0  0 ] U*.   (3.11)

Thus (A^D)^m = U [ ((MC)^D)^m  ((MC)^D)^{m+1} MS ; 0  0 ] U*. Since ind(A) = k, we have A^D A^{k+1} = A^k. Using the representations of A^D and A^k given in (3.11) and (3.10), respectively,

[ (MC)^D  ((MC)^D)² MS ; 0  0 ] [ (MC)^{k+1}  (MC)^k MS ; 0  0 ] = [ (MC)^k  (MC)^{k−1} MS ; 0  0 ].
Therefore,

(MC)^D (MC)^k M [ C  S ] = (MC)^{k−1} M [ C  S ].   (3.12)

Having in mind that C² + SS* = I_r, postmultiplying (3.12) by [ C ; S* ] gives (MC)^D (MC)^k M = (MC)^{k−1} M, and from the nonsingularity of M we obtain (MC)^D (MC)^k = (MC)^{k−1}, and so ind(MC) ≤ k − 1. Therefore,

A_{⟨k,m⟩} = (A^D)^m A^k (A^k)†
= U [ ((MC)^D)^m  ((MC)^D)^{m+1} MS ; 0  0 ] [ (MC)^{k−1} ((MC)^{k−1})†  0 ; 0  0 ] U*
= U [ ((MC)^D)^m (MC)^{k−1} ((MC)^{k−1})†  0 ; 0  0 ] U*
= U [ (MC)_{⟨k−1,m⟩}  0 ; 0  0 ] U*.

Remark 3.15. If we use the decomposition of Hartwig and Spindelböck [10, Corollary 6], then an expression of the ⟨k,m⟩-core inverse of A is A_{⟨k,m⟩} = U [ (ΣK)_{⟨k−1,m⟩}  0 ; 0  0 ] U*, which is similar to the expression of A_{⟨k,m⟩} in Theorem 3.14. Since it can be proved like Theorem 3.14, we omit the proof.

Let A ∈ ℂ^{n×n} with ind(A) = k. The Jordan canonical form of A is P^{−1} A P = J, where P ∈ ℂ^{n×n} is nonsingular and J ∈ ℂ^{n×n} is a block diagonal matrix composed of Jordan blocks. In the following theorem, we compute the ⟨k,m⟩-core inverse by using the Jordan canonical form of A.

Theorem 3.16. Let A ∈ ℂ^{n×n} with ind(A) = k. Then A_{⟨k,m⟩} = P_1 D^{−m} P_1†, where A = P [ D  0 ; 0  N ] P^{−1} with D ∈ ℂ^{r×r} nonsingular, N nilpotent, and P = [ P_1  P_2 ] with P_1 ∈ ℂ^{n×r}.

Proof. The Jordan canonical form of A is P^{−1} A P = J, where P ∈ ℂ^{n×n} is nonsingular and J ∈ ℂ^{n×n} is block diagonal. Rearranging the blocks of J, we can write A = P [ D  0 ; 0  N ] P^{−1}, where D is nonsingular and N is nilpotent. It is well known that

A^D = P [ D^{−1}  0 ; 0  0 ] P^{−1}  and  A^k = P [ D^k  0 ; 0  0 ] P^{−1}.

If we let P = [ P_1  P_2 ] and P^{−1} = [ Q_1 ; Q_2 ], then

(A^D)^m A^k = [ P_1  P_2 ] [ D^{−m}  0 ; 0  0 ] [ D^k  0 ; 0  0 ] [ Q_1 ; Q_2 ] = P_1 D^{k−m} Q_1.

Observe that A^k = (P_1 D^k) Q_1 is a full rank factorization of A^k. Hence by (3.4) we have

(A^k)† = (P_1 D^k Q_1)† = Q_1* (Q_1 Q_1*)^{−1} [(P_1 D^k)* P_1 D^k]^{−1} (P_1 D^k)*
= Q_1* (Q_1 Q_1*)^{−1} D^{−k} (P_1* P_1)^{−1} [(D^k)*]^{−1} (D^k)* P_1*
= Q_1* (Q_1 Q_1*)^{−1} D^{−k} (P_1* P_1)^{−1} P_1*
= Q_1† D^{−k} P_1†.
By Theorem 3.3, we have A_{⟨k,m⟩} = (A^D)^m A^k (A^k)†. Thus

A_{⟨k,m⟩} = (A^D)^m A^k (A^k)† = P_1 D^{k−m} Q_1 Q_1* (Q_1 Q_1*)^{−1} D^{−k} (P_1* P_1)^{−1} P_1* = P_1 D^{k−m} D^{−k} (P_1* P_1)^{−1} P_1* = P_1 D^{−m} P_1†.

Proposition 3.17. Let A ∈ ℂ^{n×n}. If i ≥ ind(A), then A^m A_{⟨i,m⟩} is the projector onto R(A^i) along R(A^i)^⊥.

Proof. It is trivial.

In the following proposition, we investigate some properties of the ⟨i,m⟩-core inverse.

Proposition 3.18. Let A ∈ ℂ^{n×n} and m, i ∈ ℕ. If i ≥ ind(A), then

(1) A_{⟨i,m⟩} is a {2,3}-inverse of A^m;
(2) A_{⟨i,m⟩} = (A^D)^m P_{A^i};
(3) (A_{⟨i,m⟩})^n = (A^D)^{m(n−1)} A_{⟨i,m⟩};
(4) A^i A_{⟨i,m⟩} = A_{⟨i,m⟩} A^i if and only if R(A^i)^⊥ ⊆ N((A^D)^m);
(5) A_{⟨i,m⟩} = A implies that A is EP.

Proof. (1). By Theorem 3.3 we have A_{⟨i,m⟩} = (A^D)^m A^i (A^i)†, thus

A_{⟨i,m⟩} A^m A_{⟨i,m⟩} = (A^D)^m A^i (A^i)† A^m (A^D)^m A^i (A^i)† = (A^D)^m A^i (A^i)† A^i A^m (A^D)^m (A^i)† = (A^D)^m A^i A^m (A^D)^m (A^i)† = (A^D)^m A^m (A^D)^m A^i (A^i)† = A^D A (A^D)^m A^i (A^i)† = (A^D)^m A^i (A^i)† = A_{⟨i,m⟩}.

Thus A_{⟨i,m⟩} is a {2,3}-inverse of A^m, in view of A^m A_{⟨i,m⟩} = A^i (A^i)†, which is Hermitian.

(2) is trivial.

(3). By (A_{⟨i,m⟩})² = (A^D)^m A^i (A^i)† (A^D)^m A^i (A^i)† = (A^D)^m (A^D)^m A^i (A^i)† = (A^D)^m A_{⟨i,m⟩} and induction, it is easy to check (3).

(4). By R(I_n − A^i (A^i)†) = N((A^i)*), we have

A^i A_{⟨i,m⟩} = A_{⟨i,m⟩} A^i ⟺ A^i (A^D)^m A^i (A^i)† = (A^D)^m A^i (A^i)† A^i ⟺ A^i (A^D)^m A^i (A^i)† = (A^D)^m A^i ⟺ A^i (A^D)^m (I_n − A^i (A^i)†) = 0 ⟺ R(I_n − A^i (A^i)†) ⊆ N(A^i (A^D)^m) ⟺ N((A^i)*) ⊆ N((A^D)^m) ⟺ R(A^i)^⊥ ⊆ N((A^D)^m).
(5). Let A be written in the form (2.1). By Theorem 3.14 we have A_{⟨i,m⟩} = U [ (MC)_{⟨i−1,m⟩}  0 ; 0  0 ] U*. Thus A_{⟨i,m⟩} = A implies MS = 0. From the nonsingularity of M, we have S = 0, which is equivalent to saying that A is EP, in view of [2, Theorem 3.7].

4 The (j,m)-core inverse

Let us start this section by introducing the definition of the (j,m)-core inverse.

Definition 4.1. Let A ∈ ℂ^{n×n} and m, j ∈ ℕ. A matrix X ∈ ℂ^{n×n} is called a (j,m)-core inverse of A if it satisfies

X = A^D A X  and  A^m X = A^m (A^j)†.   (4.1)

Theorem 4.2. Let A ∈ ℂ^{n×n}. If the system (4.1) is consistent, then its solution is unique.

Proof. Assume that X satisfies (4.1), that is, X = A^D A X and A^m X = A^m (A^j)†. Then X = A^D A X = (A^D)^m A^m X = (A^D)^m A^m (A^j)† = A^D A (A^j)†. Thus X is unique.

By Theorem 4.2, if X exists, then it is unique; it is denoted by A_{(j,m)}.

Theorem 4.3. Let A ∈ ℂ^{n×n} and m, j ∈ ℕ. Then

(1) If m ≥ ind(A), then the system (4.1) is consistent and its solution is X = A^D A (A^j)†;
(2) If the system (4.1) is consistent, then ind(A) ≤ max{j, m}.

Proof. (1). Let X = A^D A (A^j)†. We have A^D A X = A^D A A^D A (A^j)† = A^D A (A^j)† = X and, since m ≥ ind(A), A^m X = A^m A^D A (A^j)† = A^D A A^m (A^j)† = A^m (A^j)†.

(2). If the system (4.1) is consistent, then there exists X such that X = A^D A X = (A^D)^m A^m X = (A^D)^m A^m (A^j)† = A^D A (A^j)† and A^m (A^j)† = A^m X = A^m A^D A (A^j)† = A^m (A^D)^j A^j (A^j)†. Thus A^m (A^j)† A^j = A^m (A^D)^j A^j (A^j)† A^j = A^m (A^D)^j A^j = A^m A^D A. If m ≥ j, then A^m A^D A = A^m (A^j)† A^j = A^{m−j} A^j (A^j)† A^j = A^{m−j} A^j = A^m, that is, ind(A) ≤ m. If j > m, then A^j = A^j (A^j)† A^j = A^{j−m} A^m (A^j)† A^j = A^{j−m} A^m A^D A = A^j A^D A, that is, ind(A) ≤ j. Therefore, ind(A) ≤ max{j, m}.

Example 4.4. We give an example showing that if m < ind(A), then the system (4.1) is not consistent. Let A be the same matrix as in Example 3.4. It is easy to see that ind(A) = 2 and A^D = 0.
Let m = j = 1 and suppose that X is a solution of the system (4.1). Then X = A^D A X = 0, which gives A A† = A X = 0, and thus A = A A† A = 0, a contradiction.
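Theorem 4.3(1) also gives a directly computable formula for the (j,m)-core inverse when m ≥ ind(A). A numerical sketch of ours follows (the helper name `jm_core` is hypothetical, and A^D is again computed via the classical representation A^D = A^k (A^{2k+1})† A^k); it checks the defining system (4.1) with m = ind(A):

```python
import numpy as np

def jm_core(A, j, k):
    # (j,k)-core inverse with k = ind(A): X = A^D A (A^j)^+ (Theorem 4.3(1)).
    Ak = np.linalg.matrix_power(A, k)
    AD = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak
    return AD @ A @ np.linalg.pinv(np.linalg.matrix_power(A, j))

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])     # ind(A) = 2
j, k = 2, 2
X = jm_core(A, j, k)
Ak = np.linalg.matrix_power(A, k)
AD = Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak
Aj = np.linalg.matrix_power(A, j)
assert np.allclose(AD @ A @ X, X)                    # X = A^D A X
assert np.allclose(Ak @ X, Ak @ np.linalg.pinv(Aj))  # A^k X = A^k (A^j)^+
```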
Example 4.5. The converse of Theorem 4.3 (1) is not true. Let m = 1 and j = 3. If we let

A = [ 0  1  0 ; 0  0  1 ; 0  0  0 ],

then ind(A) = 3 and A³ = 0. Hence X = 0 is a solution of (4.1), but m < ind(A).

Example 4.6. If ind(A) ≤ max{j, m}, then the system (4.1) may not be consistent. If we let

A = [ 1  1  0 ; 0  0  1 ; 0  0  0 ],

then A³ = A² = [ 1  1  1 ; 0  0  0 ; 0  0  0 ], A^D = A² and ind(A) = 2. Let m = 1 and j = 2; then ind(A) ≤ max{j, m}. It is easy to check that

(A²)† = (1/3) [ 1  0  0 ; 1  0  0 ; 1  0  0 ].

If the system (4.1) had a solution X, then X = A^D A X = A^D A (A²)† and A (A²)† = A X = A A^D A (A²)† = A⁴ (A²)† = A² (A²)† would hold. But

A (A²)† = (1/3) [ 2  0  0 ; 1  0  0 ; 0  0  0 ] ≠ [ 1  0  0 ; 0  0  0 ; 0  0  0 ] = A² (A²)†.

Thus, the system (4.1) is not consistent.

Remark 4.7. If m ≥ ind(A) = k, it is not difficult to see that A_{(j,m)} = A_{(j,m+1)}. That is to say, the (j,m)-core inverse of A coincides with the (j,m+1)-core inverse of A. Thus, in the sequel, we only discuss the case m = ind(A).

Theorem 4.8. Let A, X ∈ ℂ^{n×n} and k, j ∈ ℕ. If ind(A) = k and X is the (j,k)-core inverse of A, then X^j A^j X^j = (A^D)^{j(j−1)} X^j and X A^j = A^D A.

Proof. By the definition of the (j,k)-core inverse, we have X = A^D A X and A^k X = A^k (A^j)†. By X = A^D A (A^j)†, it is easy to check that X^{n+1} = (A^D)^j X^n for arbitrary n ∈ ℕ, which gives X^j = (A^D)^{j(j−1)} X. Hence

X A^j = A^D A (A^j)† A^j = (A^D)^j A^j (A^j)† A^j = (A^D)^j A^j = A^D A;

X^j A^j X^j = (A^D)^{j(j−1)} X A^j X^j = (A^D)^{j(j−1)} A^D A (A^j)† A^j X^j = (A^D)^{j(j−1)} (A^D)^j A^j (A^j)† A^j X^j = (A^D)^{j(j−1)} (A^D)^j A^j X^j = (A^D)^{j(j−1)} A^D A X X^{j−1} = (A^D)^{j(j−1)} A^D A A^D A (A^j)† X^{j−1} = (A^D)^{j(j−1)} A^D A (A^j)† X^{j−1} = (A^D)^{j(j−1)} X X^{j−1} = (A^D)^{j(j−1)} X^j.

Corollary 4.9. Let A, X ∈ ℂ^{n×n} with ind(A) = k. If X is the (1,k)-core inverse of A, then XAX = X and XA = A^D A.

The (j,m)-core inverse is a generalization of the core inverse and the DMP-inverse, in view of Theorem 4.8 and the following remarks.
Remark 4.10. When j = m = 1 = ind(A), the equations in (4.1) are equivalent to XAX = X, XA = A^# A and AX = A A†. Thus AX = A A† implies that (AX)* = AX; XA = A^# A gives XA² = A and AXA = A; and X = XAX = A^# A X = A A^# X, which means that R(X) ⊆ R(A). Then X = AY for some Y ∈ ℂ^{n×n}, and thus X = AY = AXAY = AX². Therefore, A^{#○} = X by Lemma 2.3. In a word, the (1,1)-core inverse coincides with the usual core inverse.

Remark 4.11. If we let j = 1 and m = ind(A) = k, then the equations in (4.1) are equivalent to XAX = X, XA = A^D A and A^k X = A^k A†, by Theorem 4.8. Thus the (1,k)-core inverse coincides with the DMP-inverse.

From Remark 4.11, Theorem 4.8 and the definition of the (j,k)-core inverse, we have the following theorem, which says that the conditions XAX = X and XA = A^D A in the definition of the DMP-inverse can be replaced by X = A^D A X.

Theorem 4.12. Let A ∈ ℂ^{n×n} with k = ind(A). Then X ∈ ℂ^{n×n} is the DMP-inverse of A if and only if X = A^D A X and A^k X = A^k A†.

In the following theorem, we give a canonical form for the (j,k)-core inverse of a matrix A ∈ ℂ^{n×n} by using the matrix decomposition in Theorem 2.2.

Theorem 4.13. Let A ∈ ℂ^{n×n} have the form (2.1) with ind(A) = k, and let j ∈ ℕ. Then

A_{(j,k)} = U [ (MC)^D (MC)_{(j−1,k)}  0 ; 0  0 ] U*.   (4.2)

Proof. By Theorem 4.3 and the idempotency of A^D A we have

A_{(j,k)} = A^D A (A^j)† = (A^D)^j A^j (A^j)†.   (4.3)

From the proof of Theorem 3.14, we have

A^j (A^j)† = U [ (MC)^{j−1} ((MC)^{j−1})†  0 ; 0  0 ] U*.   (4.4)

By (2.2) we have A^D = U [ (MC)^D  ((MC)^D)² MS ; 0  0 ] U*, and thus

(A^D)^j = U [ ((MC)^D)^j  ((MC)^D)^{j+1} MS ; 0  0 ] U*.   (4.5)

By the proof of Theorem 3.14, we have ind(MC) ≤ k − 1 < k. From (4.3), (4.4) and (4.5),
we have

A_{(j,k)} = (A^D)^j A^j (A^j)†
= U [ ((MC)^D)^j  ((MC)^D)^{j+1} MS ; 0  0 ] [ (MC)^{j−1} ((MC)^{j−1})†  0 ; 0  0 ] U*
= U [ ((MC)^D)^j (MC)^{j−1} ((MC)^{j−1})†  0 ; 0  0 ] U*
= U [ (MC)^D ((MC)^D)^{j−1} (MC)^{j−1} ((MC)^{j−1})†  0 ; 0  0 ] U*
= U [ (MC)^D (MC)^D MC ((MC)^{j−1})†  0 ; 0  0 ] U*
= U [ (MC)^D (MC)_{(j−1,k)}  0 ; 0  0 ] U*.

Remark 4.14. If we use the decomposition of Hartwig and Spindelböck [10, Corollary 6], then an expression of the (j,k)-core inverse of A is A_{(j,k)} = U [ (ΣK)^D (ΣK)_{(j−1,k)}  0 ; 0  0 ] U*, which is similar to the expression of A_{(j,k)} in Theorem 4.13. Since it can be proved like Theorem 4.13, we omit the proof.

Theorem 4.15. Let A, X ∈ ℂ^{n×n} and ind(A) = k. If (A^k X^k)* = A^k X^k, A X^{k+1} = X^k and X A^{k+1} = A^k, then A is (k,k)-core invertible and A_{(k,k)} = X^k.

Proof. By Lemma 2.4 and Lemma 2.5, we have A^k X^k A^k = A^k, X^k A^k X^k = X^k, A^k = X^k A^{2k} and A^D = X^{k+1} A^k. The equalities (A^k X^k)* = A^k X^k and A^k X^k A^k = A^k imply that X^k is a {1,3}-inverse of A^k, and hence A^k (A^k)† = A^k X^k. From A^D = X^{k+1} A^k, we can obtain (A^D)^k = X^{k−1} A^D by induction. Thus

A_{(k,k)} = A^D A (A^k)† = (A^D)^k A^k (A^k)† = (A^D)^k A^k X^k = X^{k−1} A^D A^k X^k = X^{k−1} X^{k+1} A^k A^k X^k = X^{2k} A^{2k} X^k = X^k (X^k A^{2k}) X^k = X^k A^k X^k = X^k.

Proposition 4.16. Let A ∈ ℂ^{n×n} with j ≥ ind(A) = k. If A is (j,k)-core invertible, then A^j A_{(j,k)} is the projector onto R(A^j) along R(A^j)^⊥.

Proof. It is trivial.

In the following proposition, we investigate some properties of the (j,k)-core inverse.

Proposition 4.17. Let A ∈ ℂ^{n×n} with j ≥ ind(A) = k. If A is (j,k)-core invertible, then

(1) A_{(j,k)} is a {1,2,3}-inverse of A^j;
(2) A_{(j,k)} = (A^D)^j P_{A^j};
(3) (A_{(j,k)})^n = ((A^D)^j (A^j)†)^{n/2} if n is even, and (A_{(j,k)})^n = A^j ((A^D)^j (A^j)†)^{(n+1)/2} if n is odd;
(4) A_{(j,k)} A^D = (A^D)^{j+1};
(5) A^j A_{(j,k)} = A_{(j,k)} A^j if and only if R(A^j)^⊥ ⊆ N(A^D);
(6) A_{(j,k)} = A implies that A is EP.

Proof. (1). By Theorem 4.3 we have A_{(j,k)} = A^D A (A^j)† = (A^D)^j A^j (A^j)†, thus

A^j A_{(j,k)} A^j = A^j (A^D)^j A^j (A^j)† A^j = A^j (A^D)^j A^j = A^j A^D A = A^j;
A_{(j,k)} A^j A_{(j,k)} = (A^D)^j A^j (A^j)† A^j A_{(j,k)} = A^D A A_{(j,k)} = A_{(j,k)};
A^j A_{(j,k)} = A^j (A^D)^j A^j (A^j)† = A^j (A^j)†.

(2) is trivial.

(3). By (A_{(j,k)})² = (A^D)^j A^j (A^j)† (A^D)^j A^j (A^j)† = (A^D)^j (A^j)† and induction, it is easy to check (3).

(4). A_{(j,k)} A^D = (A^D)^j A^j (A^j)† A^D = (A^D)^j A^j (A^j)† (A^D)^j A^j A^D = (A^D)^{j+1}.

(5). By R(I_n − A^j (A^j)†) = N((A^j)*) and N(A^D A) = N(A^D), we have

A^j A_{(j,k)} = A_{(j,k)} A^j ⟺ A^j (A^D)^j A^j (A^j)† = (A^D)^j A^j (A^j)† A^j ⟺ A^j (A^D)^j A^j (A^j)† = (A^D)^j A^j ⟺ A^j (A^D)^j (I_n − A^j (A^j)†) = 0 ⟺ R(I_n − A^j (A^j)†) ⊆ N(A^D A) ⟺ N((A^j)*) ⊆ N(A^D) ⟺ R(A^j)^⊥ ⊆ N(A^D).

(6). Let A be written in the form (2.1). By Theorem 4.13 we have A_{(j,k)} = U [ (MC)^D (MC)_{(j−1,k)}  0 ; 0  0 ] U*. Thus A_{(j,k)} = A implies MS = 0. From the nonsingularity of M we have S = 0, which is equivalent to saying that A is EP, in view of [2, Theorem 3.7].

In the following proposition, we give the relationship between the (j,k)-core inverse, the DMP-inverse and the core-EP inverse.

Proposition 4.18. Let A ∈ ℂ^{n×n} with ind(A) = k. Then A_{(k,k)} = A^{D,†} (A^D)^{k−1} A A^{†○}.
Proof. We have A^k (A^k)† = A A^{†○} by Lemma 2.8, and A^{D,†} = A^D A A†. Thus

A_{(k,k)} = A^D A (A^k)† = (A^D)^k A^k (A^k)† = A^D A^k (A^D)^{k−1} (A^k)† = A^D A A† A^k (A^D)^{k−1} (A^k)† = A^{D,†} (A^D)^{k−1} A^k (A^k)† = A^{D,†} (A^D)^{k−1} A A^{†○}.

In the following theorem, we give a relationship between the ⟨i,m⟩-core inverse and the (j,m)-core inverse.

Theorem 4.19. Let A ∈ ℂ^{n×n} with ind(A) = k. Then A_{⟨k,m⟩} = A_{(m,k)} for any m ≥ k.

Proof. By Theorem 4.3, we have A_{(m,k)} = A^D A (A^m)† = (A^D)^k A^k (A^m)†. By the proof of Remark 3.11, we have A^k = MN and NM = L^k, where M = B_1 ⋯ B_k, N = G_k ⋯ G_1 and L = G_k B_k. It is easy to see that (A^D)^s = M L^{−k−s} N for any s ∈ ℕ, by NM = L^k. Thus (A^D)^k = M L^{−2k} N and (A^D)^k A^k = M L^{−2k} N M N = M L^{−2k} L^k N = M L^{−k} N. By the proof of Remark 3.11, A^m = M L^{m−k} N = M_1 N is a full rank factorization of A^m, where M_1 = M L^{m−k}, and (A^m)† = N* (N N*)^{−1} (M_1* M_1)^{−1} M_1*. By Theorem 3.13, we have A_{⟨k,m⟩} = M L^{−m} M†. In the following steps, we show that A_{(m,k)} = M L^{−m} M† as well. From A_{(m,k)} = (A^D)^k A^k (A^m)†, we have

A_{(m,k)} = (A^D)^k A^k (A^m)† = M L^{−k} N N* (N N*)^{−1} (M_1* M_1)^{−1} M_1*
= M L^{−k} (M_1* M_1)^{−1} M_1*
= M L^{−k} [(L^{m−k})* M* M L^{m−k}]^{−1} (L^{m−k})* M*
= M L^{−k} L^{k−m} (M* M)^{−1} [(L^{m−k})*]^{−1} (L^{m−k})* M*
= M L^{−m} (M* M)^{−1} M* = M L^{−m} M†.

Theorem 4.20. Let A ∈ ℂ^{n×n} with i ≥ ind(A) = k. Then A_{(i,k)} = P_1 D^{−i} P_1†, where A = P [ D  0 ; 0  N ] P^{−1} with D ∈ ℂ^{r×r} nonsingular, N nilpotent, and P = [ P_1  P_2 ] with P_1 ∈ ℂ^{n×r}.

Proof. It follows easily from Theorem 3.16 and Theorem 4.19.

ACKNOWLEDGMENTS

This research is supported by the National Natural Science Foundation of China and the Natural Science Foundation of Jiangsu Province (No. BK…). The first author is grateful to the China Scholarship Council for supporting his further study at the Universitat Politècnica de València, Spain.
References

[1] O.M. Baksalary, G. Trenkler, Core inverse of matrices, Linear Multilinear Algebra 58(6) (2010).

[2] J. Benítez, A new decomposition for square matrices, Electron. J. Linear Algebra 20 (2010).

[3] J. Benítez, X.J. Liu, A short proof of a matrix decomposition with applications, Linear Algebra Appl. 438 (2013).

[4] J.L. Chen, Group inverses and Drazin inverses of matrices over rings (Chinese), J. Xinjiang Univ. Natur. Sci. 9(1) (1992).

[5] J.L. Chen, H.H. Zhu, P. Patrício, Y.L. Zhang, Characterizations and representations of core and dual core inverses, Canad. Math. Bull., doi:10.4153/CMB…

[6] R.E. Cline, An application of representations for the generalized inverse of a matrix, Tech. Summary Rep. 592, Mathematics Research Center, U.S. Army, University of Wisconsin, Madison.

[7] S.L. Campbell, C.D. Meyer, Generalized Inverses of Linear Transformations, SIAM, Philadelphia, 2009.

[8] R.E. Cline, Inverses of rank invariant powers of a matrix, SIAM J. Numer. Anal. 5 (1968).

[9] M.C. Gouveia, R. Puystjens, About the group inverse and Moore-Penrose inverse of a product, Linear Algebra Appl. 150 (1991).

[10] R.E. Hartwig, K. Spindelböck, Matrices for which A* and A† commute, Linear Multilinear Algebra 14 (1983).

[11] C.H. Hung, T.L. Markham, The Moore-Penrose inverse of a partitioned matrix M = [ A  D ; B  C ], Czech. Math. J. 25(3) (1975).

[12] S.B. Malik, N. Thome, On a new generalized inverse for matrices of an arbitrary index, Appl. Math. Comput. 226 (2014).

[13] K. Manjunatha Prasad, K.S. Mohana, Core-EP inverse, Linear Multilinear Algebra 62(6) (2014).

[14] D.S. Rakić, N.Č. Dinčić, D.S. Djordjević, Group, Moore-Penrose, core and dual core inverse in rings with involution, Linear Algebra Appl. 463 (2014).

[15] H.X. Wang, Core-EP decomposition and its applications, Linear Algebra Appl. 508 (2016).

[16] H.K. Wimmer, Canonical angles of unitary spaces and perturbations of direct complements, Linear Algebra Appl. 287 (1999).
[17] S.Z. Xu, J.L. Chen, X.X. Zhang, New characterizations for core inverses in rings with involution, Front. Math. China 12(1) (2017).