Some additive results on Drazin inverse


Linear Algebra and its Applications 322 (2001) 207–217
www.elsevier.com/locate/laa

Robert E. Hartwig^a, Guorong Wang^{a,b,1}, Yimin Wei^{c,*,2}

a Mathematics Department, North Carolina State University, Raleigh, NC 27695-8205, USA
b Mathematics Department, Shanghai Normal University, Shanghai 200234, People's Republic of China
c Department of Mathematics and Lab of Maths for Nonlinear Science, Fudan University, Shanghai 200433, People's Republic of China

Received 18 June 1999; accepted 31 July 2000
Submitted by H.J. Werner

Abstract

Some additive perturbation results for Drazin inverses are given. In particular, a formula is given for the Drazin inverse of a sum of two matrices when one of the products of these matrices vanishes. Some special applications of this are also considered. © 2001 Elsevier Science Inc. All rights reserved.

AMS classification: 15A09; 15A23

Keywords: Drazin inverse; Additive results; Perturbation

1. Background results

If $A$ is an $n \times n$ complex matrix, then the Drazin inverse [4] of $A$, denoted by $A^D$, is the unique matrix $X$ satisfying the relations
$$A^{k+1}X = A^k, \qquad XAX = X, \qquad AX = XA,$$
where $k = \operatorname{Ind}(A)$, the index of $A$, is the smallest nonnegative integer for which $\operatorname{rank}(A^k) = \operatorname{rank}(A^{k+1})$.

* Corresponding author. E-mail addresses: guowangc@online.sh.cn (G. Wang), ymwei@fudan.edu.cn (Y. Wei).
1 Project 19971057 supported by National Natural Science Foundation of China.
2 Project 19901006 supported by National Natural Science Foundation of China, Doctoral Point Foundation of China and Science Foundation of Laboratory of Computational Physics.
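These three defining relations are easy to exercise numerically. The sketch below is an illustration added here, not part of the original paper; it uses the known representation $A^D = A^l (A^{2l+1})^\dagger A^l$, valid for any $l \ge \operatorname{Ind}(A)$ (taking $l = n$ is always safe, since the index never exceeds the matrix size), with the Moore-Penrose pseudoinverse supplying the middle factor. The test matrix is this sketch's own choice.

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^l (A^(2l+1))^+ A^l, valid for any l >= Ind(A).

    l = n (the matrix size) is a safe upper bound for the index; the middle
    factor is the Moore-Penrose pseudoinverse.
    """
    n = A.shape[0]
    Al = np.linalg.matrix_power(A, n)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Al

# Example: eigenvalue 2 plus a 2 x 2 nilpotent block, so Ind(A) = 2.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
X = drazin(A)
k = 2  # Ind(A); any exponent >= Ind(A) works in the first relation
print(np.allclose(np.linalg.matrix_power(A, k + 1) @ X, np.linalg.matrix_power(A, k)),
      np.allclose(X @ A @ X, X),
      np.allclose(A @ X, X @ A))
```

For large or ill-conditioned matrices the high powers can overflow or lose accuracy, so this should be read as a sanity-check tool rather than a production algorithm.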

In particular, when $\operatorname{Ind}(A) = 1$, the matrix $X$ is called the group inverse of $A$ and is denoted by $X = A^\#$. If $A$ is nonsingular, then it is easily seen that $\operatorname{Ind}(A) = 0$ and $A^D = A^{-1}$.

The concept of a Drazin inverse has been shown to be very useful in various applied mathematical settings. For example, applications to singular differential or difference equations, Markov chains, cryptography, iterative methods, and multibody system dynamics can be found in the literature [6–10,12–15], respectively.

This paper is concerned with the Drazin inverse $(P + Q)^D$ of the sum of two matrices $P$ and $Q$. This problem was first considered by Drazin in 1958 in his celebrated paper [4], where it was proved that
$$(P + Q)^D = P^D + Q^D \quad \text{provided } QP = PQ = 0. \tag{1.1}$$
The general question of how to express $(P + Q)^D$ as a function of $P$, $Q$, $P^D$, $Q^D$, without side conditions, is very difficult and remains open.

The aim of this paper is to extend Drazin's result to the case where only the one-sided condition $PQ = 0$ is assumed. We then use this new result to analyze a special class of perturbations of the type $A - X$. We shall assume familiarity with the theory of Drazin inverses as given in [1], and we denote $Z_A = I - AA^D$ for a square matrix $A$.

In this paper, we wish to examine some special cases of (1.1) and then extend formula (1.1) to the noncommutative case where just $PQ = 0$. This case is useful in several applications, such as in the splitting of matrices and iteration theory. We start by discussing the commutative additive result.

Lemma 1.1. If $AB = BA$ and $A = C_A + N_A$, $B = C_B + N_B$ are the core-nilpotent decompositions of $A$ and $B$, respectively, then
$$\begin{aligned}(A + B)^D &= (C_A + C_B)^D\left[I + (C_A + C_B)^D (N_A + N_B)\right]^{-1}\\ &= (C_A + C_B)^D\left[I + (C_A + C_B)^D N_A\right]^{-1}\left[I + (C_A + C_B)^D N_B\right]^{-1}.\end{aligned} \tag{1.2}$$

Proof.
Write $A + B = (C_A + C_B) + (N_A + N_B)$, where $N_A + N_B$ is nilpotent, and apply Lemma 4 of [9]: if $N^k = 0$ and $AN = NA$, then
$$(A + N)^D = A^D\left(I + A^D N\right)^{-1}, \qquad A^D = (A + N)^D\left[I - (A + N)^D N\right]^{-1}. \qquad \square$$
For example, if $B = Z_A$, then
$$(A + Z_A)^{-1} = (C_A + Z_A)^{-1}(I + N_A)^{-1}, \tag{1.3}$$
where $A + Z_A = (I + N_A)(C_A + Z_A)$; here $A = (A - Z_A) + Z_A$ is a commuting sum of an invertible and an idempotent matrix.
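Drazin's commutative formula (1.1) can be confirmed on a small example. The following sketch is an addition for illustration, not from the paper; it builds $P$ and $Q$ supported on complementary diagonal blocks, so that $PQ = QP = 0$, and the `drazin` helper uses the pseudoinverse representation $A^D = A^l (A^{2l+1})^\dagger A^l$ with $l = n$, an assumption of this sketch.

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^l (A^(2l+1))^+ A^l with l = n >= Ind(A)."""
    n = A.shape[0]
    Al = np.linalg.matrix_power(A, n)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Al

# P acts on the first two coordinates and Q on the last two, so PQ = QP = 0.
P = np.zeros((4, 4)); P[:2, :2] = [[1.0, 1.0], [0.0, 0.0]]  # idempotent block
Q = np.zeros((4, 4)); Q[2:, 2:] = [[0.0, 1.0], [0.0, 0.0]]  # nilpotent block
assert np.allclose(P @ Q, 0) and np.allclose(Q @ P, 0)

# Drazin's formula (1.1): (P + Q)^D = P^D + Q^D under QP = PQ = 0.
print(np.allclose(drazin(P + Q), drazin(P) + drazin(Q)))
```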

We now turn to the noncommutative additive theorems. These theorems are motivated by the question of splitting a matrix into two or more suitable (and hopefully easier) parts, such that the Drazin inverse of the sum can be computed in terms of the Drazin inverses of the pieces. Undoubtedly, these will also generalize the commutative case (1.1).

2. A one-sided splitting theorem

Consider a given matrix $A = P + Q$, where $PQ = 0$. This amounts to being given a solution to the nonlinear equation $XA = X^2$. The best way of solving this is by passing to canonical forms, turning a horizontal problem into a vertical block problem. Indeed, this is precisely the way in which we shall tackle the one-sided Drazin inverse problem.

Theorem 2.1. If $PQ = 0$, then
$$(P + Q)^D = \left(I - QQ^D\right)\left[\sum_{n=0}^{k-1} Q^n \left(P^D\right)^n\right] P^D + Q^D\left[\sum_{n=0}^{k-1} \left(Q^D\right)^n P^n\right]\left(I - PP^D\right), \tag{2.1}$$
$$(P + Q)(P + Q)^D = \left(I - QQ^D\right)\left[\sum_{n=0}^{k-1} Q^n \left(P^D\right)^n\right] PP^D + QQ^D\left[\sum_{n=0}^{k-1} \left(Q^D\right)^n P^n\right]\left(I - PP^D\right) + QQ^D PP^D, \tag{2.2}$$
where $\max\{\operatorname{Ind}(P), \operatorname{Ind}(Q)\} \le k \le \operatorname{Ind}(P) + \operatorname{Ind}(Q)$.

Proof. Under the assumption $PQ = 0$, we have
$$P^D Q = PQ^D = 0, \qquad Z_P Q = Q \quad\text{and}\quad PZ_Q = P. \tag{2.3}$$
Using Cline's formula [3], $(AB)^D = A\left[(BA)^D\right]^2 B$, we have
$$(P + Q)^D = \left([I, Q]\begin{bmatrix} P \\ I \end{bmatrix}\right)^D = [I, Q]\left(\begin{bmatrix} P & PQ \\ I & Q \end{bmatrix}^D\right)^2\begin{bmatrix} P \\ I \end{bmatrix},$$
in which we set $PQ = 0$. We now recall the result of Theorem 1 of [9], which states that
$$\begin{bmatrix} P & 0 \\ I & Q \end{bmatrix}^D = \begin{bmatrix} P^D & 0 \\ R & Q^D \end{bmatrix},$$
in which $R = -Q^D P^D + Z_Q Y_k \left(P^D\right)^{k+1} + \left(Q^D\right)^{k+1} Y_k Z_P$, where $Y_k = Q^{k-1} + Q^{k-2}P + \cdots + QP^{k-2} + P^{k-1}$ and $\max\{\operatorname{Ind}(P), \operatorname{Ind}(Q)\} \le k \le \operatorname{Ind}(P) + \operatorname{Ind}(Q)$.

Hence,
$$(P + Q)^D = [I, Q]\left(\begin{bmatrix} P^D & 0 \\ R & Q^D \end{bmatrix}\right)^2\begin{bmatrix} P \\ I \end{bmatrix} = P^D + QRPP^D + QQ^D RP + Q^D.$$
Substituting for $R$ now yields the desired result (2.1). It is straightforward to prove (2.2) from (2.1) and (2.3). $\square$

There are now numerous special cases that follow at once.

Corollary 2.1. Let $PQ = 0$ and suppose that $\max\{\operatorname{Ind}(P), \operatorname{Ind}(Q)\} \le k \le \operatorname{Ind}(P) + \operatorname{Ind}(Q)$.

(i) If $Q$ is nilpotent, then $(P + Q)^D = P^D + Q\left(P^D\right)^2 + \cdots + Q^{k-1}\left(P^D\right)^k$.
(ii) If $Q^2 = 0$, then $(P + Q)^D = P^D + Q\left(P^D\right)^2$.
(iii) If $P$ is nilpotent, then $(P + Q)^D = Q^D + \left(Q^D\right)^2 P + \cdots + \left(Q^D\right)^k P^{k-1}$.
(iv) If $P^2 = 0$, then $(P + Q)^D = Q^D + \left(Q^D\right)^2 P$.
(v) If $P^2 = P$, then $(P + Q)^D = \left(I - QQ^D\right)\left(I + Q + \cdots + Q^{k-1}\right)P + Q^D(I - P)$, and $(P + Q)^D(I - P) = Q^D(I - P)$.
(vi) If $Q^2 = Q$, then $(P + Q)^D = (I - Q)P^D + Q\left(I + P + \cdots + P^{k-1}\right)\left(I - PP^D\right)$, and $(I - Q)(P + Q)^D = (I - Q)P^D$.
(vii) If $PR = 0$, then $(P + Q)^D R = \left(I - QQ^D\right)P^D R + Q^D R = Q^D R$.

Theorem 2.1 may be used to obtain several additional perturbation results concerning the matrix $\Gamma = A - X$. Needless to say, these are rather special, since addition and inversion rarely mix. First, a useful telescoping result.

Lemma 2.1. If $AF = FA$ and $FX = X$, then
$$(AF - X)^k X = (A - X)^k X \quad \text{for all } k = 0, 1, \ldots \tag{2.4}$$

Proof. The case $k = 1$ is clear. Since $AF = FA$, $(I - F)X = 0$, and all terms in the expansion of $(A - X)^k X$ contain at least one power of $X$, we see that
$$(I - F)(A - X)^k X = 0. \tag{2.5}$$
Moreover, $(AF - X)(A - X)^k X = AF(A - X)^k X - X(A - X)^k X$, which by (2.5) yields
$$A(A - X)^k X - X(A - X)^k X = (A - X)^{k+1} X. \tag{2.6}$$
Now suppose that $(AF - X)^k X = (A - X)^k X$. Then $(AF - X)^{k+1} X = (AF - X)(A - X)^k X$, which by (2.6) reduces to $(A - X)^{k+1} X$. $\square$

We are now ready for our special perturbation results.
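Before turning to those, Theorem 2.1 itself can be sanity-checked numerically. The sketch below is illustrative only, not part of the paper: it implements the right-hand side of (2.1) verbatim and compares it with a directly computed Drazin inverse on a pair with $PQ = 0$ but $QP \ne 0$, where the two-sided formula (1.1) does not apply. The `drazin` helper and the test matrices are this sketch's own assumptions.

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^l (A^(2l+1))^+ A^l with l = n >= Ind(A)."""
    n = A.shape[0]
    Al = np.linalg.matrix_power(A, n)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Al

def additive_drazin(P, Q, k):
    """Right-hand side of formula (2.1); requires PQ = 0 and
    max{Ind(P), Ind(Q)} <= k <= Ind(P) + Ind(Q)."""
    I = np.eye(P.shape[0])
    Pd, Qd = drazin(P), drazin(Q)
    mp = np.linalg.matrix_power
    S1 = sum(mp(Q, n) @ mp(Pd, n) for n in range(k))
    S2 = sum(mp(Qd, n) @ mp(P, n) for n in range(k))
    return (I - Q @ Qd) @ S1 @ Pd + Qd @ S2 @ (I - P @ Pd)

# One-sided example: PQ = 0 but QP != 0, so Drazin's condition in (1.1) fails.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])   # nilpotent, Ind(P) = 2
Q = np.diag([1.0, 0.0, 2.0])      # Ind(Q) = 1
assert np.allclose(P @ Q, 0) and not np.allclose(Q @ P, 0)
print(np.allclose(additive_drazin(P, Q, k=2), drazin(P + Q)))
```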

Corollary 2.2. Consider $\Gamma = A - X$ and suppose that $F$ is an idempotent that commutes with $A$. Also let $\max\{\operatorname{Ind}(A), \operatorname{Ind}(X)\} \le k \le \operatorname{Ind}(A) + \operatorname{Ind}(X)$. If $FX = X$ and $R = \Gamma F = AF - XF$, then
$$\begin{aligned}(A - X)^D ={}& R^D - \sum_{i=0}^{k-1}\left(R^D\right)^{i+2} X(I - F)A^i\left(I - AA^D\right)\\ &+ \left(I + R^D X\right)(I - F)A^D - \left(I - RR^D\right)\sum_{i=0}^{k-2}(A - X)^i X(I - F)\left(A^D\right)^{i+2}.\end{aligned} \tag{2.7}$$

Proof. Suppose that $FX = X$. Then $\Gamma = A - X = P + Q$, where $P = A(I - F)$ and $Q = AF - FX$. It follows from $F^2 = F$ that $(I - F)^2 = I - F$ and $(I - F)^D = I - F$. Since
$$PQ = A(I - F)(AF - FX) = A(I - F)AF = A^2[(I - F)F] = 0,$$
we may apply Theorem 2.1 to give
$$(P + Q)^D = \left(I - QQ^D\right)V + W\left(I - PP^D\right) = T_1 + T_2,$$
where
$$V = P^D + Q\left(P^D\right)^2 + \cdots + Q^{k-1}\left(P^D\right)^k \quad\text{and}\quad W = Q^D + \left(Q^D\right)^2 P + \cdots + \left(Q^D\right)^k P^{k-1},$$
which requires that we compute $Q^D$ and $P^D$. The latter is easily found because $A$ and $F$ commute. Thus,
$$P^D = [A(I - F)]^D = (I - F)A^D \quad\text{and}\quad PP^D = (I - F)AA^D.$$
On the other hand, in order to compute $Q^D$ we split $Q$ further as $Q = R - S$, where $R = (A - X)F = AF - FXF$ and $S = FX(I - F)$. Since
$$SR = FX(I - F)(FA - FXF) = FX[(I - F)F](A - XF) = 0$$
and $S^2 = FX[(I - F)F]X(I - F) = 0$, we at once see that
$$Q^D = (-S + R)^D = R^D - \left(R^D\right)^2 S$$

and
$$QQ^D = (R - S)\left[R^D - \left(R^D\right)^2 S\right]$$
from (iv) of Corollary 2.1. Since $SR^D = SR\left(R^D\right)^2 = 0$, the latter reduces to $QQ^D = RR^D - R^D S$. Next, $R^D P = 0$, because
$$RP = (AF - FXF)A(I - F) = (A - FX)[F(I - F)]A = 0,$$
and so we have $Q^D P = -\left(R^D\right)^2 SP = -\left(R^D\right)^2 XP$. Likewise, again since $SR^D = 0$, we arrive at
$$\left(Q^D\right)^2 P = \left[R^D - \left(R^D\right)^2 S\right]\left[-\left(R^D\right)^2 XP\right] = -\left(R^D\right)^3 XP.$$
Repeating this, we obtain
$$\left(Q^D\right)^{t+1} P^t = -\left(R^D\right)^{t+2} XP^t \quad\text{for } t = 1, 2, \ldots,$$
which when substituted yields the second term:
$$\begin{aligned} T_2 &= W\left(I - PP^D\right) = \left[R^D - \left(R^D\right)^2 S - \left(R^D\right)^3 XP - \cdots - \left(R^D\right)^{k+1} XP^{k-1}\right]\left(I - PP^D\right)\\ &= \left[R^D - \left(R^D\right)^2 X(I - F) - \left(R^D\right)^3 XA(I - F) - \cdots - \left(R^D\right)^{k+1} XA^{k-1}(I - F)\right]\\ &\quad - \left[R^D - \left(R^D\right)^2 X(I - F) - \cdots - \left(R^D\right)^{k+1} XA^{k-1}(I - F)\right](I - F)AA^D\\ &= R^D - \sum_{i=0}^{k-1}\left(R^D\right)^{i+2} X(I - F)A^i\left(I - AA^D\right). \end{aligned}$$
Let us next examine the first term
$$T_1 = \left(I - QQ^D\right)V = \left[I - \left(RR^D - R^D S\right)\right]\left[P^D + Q\left(P^D\right)^2 + \cdots + Q^{k-1}\left(P^D\right)^k\right].$$
We first compute the powers $Q^i\left(P^D\right)^{i+1} = (AF - X)^i(I - F)\left(A^D\right)^{i+1}$. For $i = 1$, this gives $(AF - X)(I - F)\left(A^D\right)^2 = -X(I - F)\left(A^D\right)^2$, while for higher powers of $i$ we may use Lemma 2.1 to arrive at
$$Q^i\left(P^D\right)^{i+1} = (AF - X)^{i-1}(AF - X)(I - F)\left(A^D\right)^{i+1} = -(AF - X)^{i-1}X(I - F)\left(A^D\right)^{i+1} = -(A - X)^{i-1}X(I - F)\left(A^D\right)^{i+1}.$$
Now

$$S(A - X)^{i-1}X = X(I - F)(A - X)(A - X)^{i-2}X = XA(I - F)(A - X)^{i-2}X = \cdots = XA^{i-1}(I - F)X = 0$$
for all $i$, and
$$R^D(I - F) = \left(R^D\right)^2 R(I - F) = \left(R^D\right)^2(A - X)[F(I - F)] = 0.$$
Thus, $T_1$ collapses to
$$\begin{aligned} T_1 &= \left(I - RR^D + R^D S\right)(I - F)A^D + \left(I - RR^D + R^D S\right)\left[Q\left(P^D\right)^2 + \cdots + Q^{k-1}\left(P^D\right)^k\right]\\ &= \left[I + R^D X(I - F)\right](I - F)A^D - \left(I - RR^D\right)\sum_{i=1}^{k-1}(A - X)^{i-1}X(I - F)\left(A^D\right)^{i+1}\\ &= \left(I + R^D X\right)(I - F)A^D - \left(I - RR^D\right)\sum_{i=0}^{k-2}(A - X)^i X(I - F)\left(A^D\right)^{i+2}, \end{aligned}$$
completing the proof. $\square$

Let us now use this result to analyze some special perturbations of the matrix $A - X$. We thereby extend earlier work by several authors [12–14,16,17] and partially solve a problem posed in 1975 by Campbell and Meyer [2], who considered it difficult to establish norm estimates for the perturbation of the Drazin inverse. In the following special cases, we assume that $FX = X$ and $R = AF - XF$.

Case (1): $XF = 0$. Clearly $\left(R^D\right)^i = \left(A^D\right)^i F$ and $S = X$. Moreover, $(A - X)^i FX = A^i X$ for $i \ge 0$. Thus, (2.7) reduces to
$$(A - X)^D = A^D F - \sum_{i=0}^{k-1}\left(A^D\right)^{i+2} XA^i\left(I - AA^D\right) + \left(I - F + A^D X\right)A^D - \sum_{i=0}^{k-2} A^i\left(I - AA^D\right)X\left(A^D\right)^{i+2}. \tag{2.8}$$

Case (1a): $XF = 0$ and $F = AA^D$. If we, in addition, assume that $F = AA^D$, then $XA^D = 0$ and (2.8) reduces to
$$(A - X)^D = A^D - \sum_{i=0}^{k-1}\left(A^D\right)^{i+2} XA^i. \tag{2.9}$$
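Formula (2.9) is concrete enough to test directly. In the sketch below (an illustration added here; the matrices and the pseudoinverse-based `drazin` helper are this sketch's own choices), $A$ is singular with $F = AA^D$, and $X$ satisfies the Case (1a) hypotheses $FX = X$ and $XF = 0$.

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^l (A^(2l+1))^+ A^l with l = n >= Ind(A)."""
    n = A.shape[0]
    Al = np.linalg.matrix_power(A, n)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Al

A = np.diag([2.0, 3.0, 0.0])
X = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
Ad = drazin(A)
F = A @ Ad                                               # F = A A^D
assert np.allclose(F @ X, X) and np.allclose(X @ F, 0)   # Case (1a) hypotheses

# Right-hand side of (2.9), with k = 2 = max{Ind(A), Ind(X)}.
mp = np.linalg.matrix_power
rhs = Ad - sum(mp(Ad, i + 2) @ X @ mp(A, i) for i in range(2))
print(np.allclose(drazin(A - X), rhs))
```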

Case (1b): $XF = 0$ and $F = I - AA^D$. In this case, $A^D X = 0$ and (2.8) becomes
$$(A - X)^D = A^D - \sum_{i=0}^{k-2} A^i X\left(A^D\right)^{i+2}. \tag{2.10}$$

Case (2): $F = AA^D$. Now $AA^D X = X$ and $R = A^2 A^D\left(I - A^D XAA^D\right)$, and (2.7) simplifies to
$$(A - X)^D = R^D - \sum_{i=0}^{k-1}\left(R^D\right)^{i+2} XA^i\left(I - AA^D\right). \tag{2.11}$$
If we set $U = I - A^D XAA^D$ and $V = I - AA^D XA^D$, then $UA^D = A^D V$ and $R = A^2 A^D U = VA^2 A^D$. If we assume now that $U$ is invertible, then so is $V$, and $U^{-1}A^D = A^D V^{-1}$. It is now easily verified that $R^\#$ exists and equals $R^\# = U^{-1}A^D = A^D V^{-1}$. In fact,
$$RR^\# = A^2 A^D UU^{-1}A^D = AA^D = A^D V^{-1}VA^2 A^D = R^\# R,$$
$R^2 R^\# = RAA^D = R$, and $R^\# RR^\# = U^{-1}A^D AA^D = U^{-1}A^D = R^\#$. We thus have:

Case (2a): $F = AA^D$, and $U = I - A^D XAA^D$ is invertible. In this case, (2.11) becomes
$$(A - X)^D = R^\# - \sum_{i=0}^{k-1}\left(R^\#\right)^{i+2} XA^i\left(I - AA^D\right), \tag{2.12}$$
where $R = A^2 A^D U$ and $R^\# = U^{-1}A^D$. In general, however, $\left(R^\#\right)^i \ne U^{-i}\left(A^D\right)^i$.

Remark. The matrix $U = I - A^D XAA^D$ is invertible if and only if $I - A^D X$ is nonsingular. This result generalizes the main results in [12–14,16,17].

Case (2b): $F = I - AA^D$. This time $A^D X = A^D F = 0$, and (2.7) becomes
$$(A - X)^D = R^D + \left(I + R^D X\right)A^D - \left(I - RR^D\right)\sum_{i=0}^{k-2}(A - X)^i X\left(A^D\right)^{i+2}, \tag{2.13}$$

where $R = A\left(I - AA^D\right) - \left(I - AA^D\right)X\left(I - AA^D\right)$.

Case (3): $AA^D XF = XFAA^D = XF$, $U = I - A^D XF$ is invertible, and $(AF)^\#$ exists. Now
$$R = AF - XF = AF - AA^D XF = AF\left(I - A^D XF\right) = AFU = VFA,$$
where $V = I - XFA^D$. Furthermore, $A^D FV = UA^D F$. We may now conclude that $U$ is invertible exactly when $V$ is, in which case $Y = U^{-1}A^D F = A^D FV^{-1}$. Now
$$RY = AFU\left(U^{-1}A^D F\right) = AA^D F = A^D FV^{-1}(VFA) = YR.$$
Lastly, $Y^2 R = U^{-1}A^D F\left(AA^D F\right) = U^{-1}A^D F = Y$ and
$$R^2 Y = RAA^D F = A^2 A^D F - XFAA^D F = A^2 A^D F - XF.$$
Now if $(AF)^\#$ exists, then $AF = AF(AF)^\# AF = A^2 A^D F$, and then $R^2 Y = AF - XF = R$; i.e., $Y = R^\#$, and (2.7) becomes
$$(A - X)^D = R^\# - \sum_{i=0}^{k-1}\left(R^\#\right)^{i+2} X\left(I - AA^D\right)A^i. \tag{2.14}$$

Case (4): $FX = XF = X$. In this case, (2.7) reduces to just
$$(A - X)^D = R^D + (I - F)A^D. \tag{2.15}$$
If, in addition, $F = AA^D$ and $U = I - A^D X$ is invertible, this reduces even further to [16]
$$(A - X)^D = R^D = U^{-1}A^D. \tag{2.16}$$

Case (5): If $X = A^2 A^D$, then $\Gamma$ is nilpotent and $\Gamma^D = 0$.

3. Concluding remarks

In this paper, we have constructed a formula for the Drazin inverse of $P + Q$ when $P$ and $Q$ satisfy the one-sided condition $PQ = 0$. This result generalizes Drazin's previous result [4] and admits several special cases, one of which partially answers a problem on the perturbation of the Drazin inverse posed in 1975 by Campbell and Meyer [2].
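As a final numerical illustration (not part of the original paper), the closed form (2.16) of Case (4), namely $(A - X)^D = U^{-1}A^D$ with $U = I - A^D X$, can be checked on a perturbation supported on the core of $A$; the example matrices and the pseudoinverse-based `drazin` helper are this sketch's own assumptions.

```python
import numpy as np

def drazin(A):
    """Drazin inverse via A^D = A^l (A^(2l+1))^+ A^l with l = n >= Ind(A)."""
    n = A.shape[0]
    Al = np.linalg.matrix_power(A, n)
    return Al @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Al

# Case (4) with F = A A^D: the perturbation X lives on the "core" of A.
A = np.diag([2.0, 5.0, 0.0])
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
Ad = drazin(A)
F = A @ Ad
assert np.allclose(F @ X, X) and np.allclose(X @ F, X)   # F X = X F = X

U = np.eye(3) - Ad @ X        # invertible here, as (2.16) requires
print(np.allclose(drazin(A - X), np.linalg.inv(U) @ Ad))  # formula (2.16)
```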

We may generalize Theorem 2.1 slightly as follows. Since
$$\left(\begin{bmatrix} P & PQ \\ I & Q \end{bmatrix}^D\right)^k = \left(\begin{bmatrix} P & PQ \\ I & Q \end{bmatrix}^k\right)^D = \begin{bmatrix} P(P + Q)^{k-1} & P(P + Q)^{k-1}Q \\ (P + Q)^{k-1} & (P + Q)^{k-1}Q \end{bmatrix}^D$$
for all $k = 1, 2, \ldots$, we may extend the above to the case where $P(P + Q)^{k-1}Q = 0$. In fact,
$$\begin{aligned}(P + Q)^D &= [I, Q]\left(\begin{bmatrix} P & PQ \\ I & Q \end{bmatrix}^k\right)^D\begin{bmatrix} P & PQ \\ I & Q \end{bmatrix}^{k-2}\begin{bmatrix} P \\ I \end{bmatrix}\\ &= [I, Q]\begin{bmatrix} P(P + Q)^{k-1} & 0 \\ (P + Q)^{k-1} & (P + Q)^{k-1}Q \end{bmatrix}^D\begin{bmatrix} P(P + Q)^{k-3} & P(P + Q)^{k-3}Q \\ (P + Q)^{k-3} & (P + Q)^{k-3}Q \end{bmatrix}\begin{bmatrix} P \\ I \end{bmatrix}. \end{aligned}$$
This requires the computation of $\left[P(P + Q)^{k-1}\right]^D$ and $\left[(P + Q)^{k-1}Q\right]^D$, which may actually be easier than the computation of $(P + Q)^D$.

A second attempt to generalize Theorem 2.1 would be to assume only that $P^2 Q = 0$. Needless to say, this is best attempted via the block form, which in turn should give a suitable horizontal formula. This will be attempted elsewhere.

Acknowledgment

The authors would like to thank the referee for his/her comments on the presentation of this paper.

References

[1] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, Wiley, New York, 1974.
[2] S.L. Campbell, C.D. Meyer Jr., Continuity properties of the Drazin inverse, Linear Algebra Appl. 10 (1975) 77–83.
[3] R.E. Cline, An application of representation of a matrix, MRC Technical Report #592, 1965.
[4] M.P. Drazin, Pseudoinverses in associative rings and semigroups, Amer. Math. Monthly 65 (1958) 506–514.
[5] G. Ehrlich, Unit regular rings, Portugal. Math. 27 (1968) 209–212.

[6] R. Gabriel, R.E. Hartwig, The Drazin inverse as a gradient, Linear Algebra Appl. 63 (1984) 237–252.
[7] M. Hanke, Iterative consistency: a concept for the solution of singular linear systems, SIAM J. Matrix Anal. Appl. 15 (1994) 569–577.
[8] R.E. Hartwig, More on the Souriau–Frame algorithm, SIAM J. Appl. Math. 31 (1976) 42–46.
[9] R.E. Hartwig, J.M. Shoaf, Group inverses and Drazin inverses of bidiagonal and triangular Toeplitz matrices, J. Austral. Math. Soc. Ser. A 24 (1977) 10–34.
[10] R.E. Hartwig, J. Levine, Applications of the Drazin inverse to the Hill cryptographic system, Part III, Cryptologia 5 (1981) 67–77.
[11] N. Jacobson, Lectures in Abstract Algebra, vol. 2, Van Nostrand, Princeton, NJ, 1953.
[12] C.D. Meyer Jr., The condition of a finite Markov chain and perturbation bounds for the limiting probabilities, SIAM J. Algebraic Discrete Methods 1 (1980) 273–283.
[13] C.D. Meyer Jr., J.M. Shoaf, Updating finite Markov chains by using techniques of group inversion, J. Statist. Comput. Simulation 11 (1980) 163–181.
[14] J.M. Shoaf, The Drazin inverse of a rank-one modification of a square matrix, Ph.D. Dissertation, North Carolina State University, NC, USA, 1975.
[15] B. Simeon, C. Führer, P. Rentrop, The Drazin inverse in multibody system dynamics, Numer. Math. 64 (1993) 521–539.
[16] Y. Wei, G. Wang, The perturbation theory for the Drazin inverse and its applications, Linear Algebra Appl. 258 (1997) 179–186.
[17] Y. Wei, On the perturbation of the group inverse and the oblique projection, Appl. Math. Comput. 98 (1999) 29–42.