Some results about the index of matrix and Drazin inverse


Mathematical Sciences Vol. 4, No. 3 (2010) 283-294

M. Nikuie (a,1), M.K. Mirnia (b), Y. Mahmoudi (b)

(a) Member of Young Researchers Club, Islamic Azad University-Tabriz Branch
(b) Islamic Azad University-Tabriz Branch

Abstract: In this paper, some results about the index of a matrix and the Drazin inverse are given. We then explain their applications in solving singular linear systems of equations.

Keywords: Index, Drazin inverse, Singular linear system, Jordan normal form.

(c) 2010 Published by Islamic Azad University-Karaj Branch.

1 Introduction

Let C^{n×n} denote the space of matrices of order n over the complex field. Every matrix A in C^{n×n} has an index, which we use to obtain the Drazin inverse of A. Whether or not A is singular, the index and the Drazin inverse of A exist and are unique [10]. In view of this fact, the aim of this paper is to give some new results about them. The Drazin inverse has various applications in the theory of finite Markov chains, the study of singular differential and difference equations, the investigation of Cesaro-Neumann iterations, cryptography, iterative methods in numerical analysis, multibody system dynamics, and elsewhere [4,8,9]. Computing the Drazin inverse has been an active topic in recent years [3,10].

Section 2 provides preliminaries on the index of a matrix and the Drazin inverse; applications of them in solving singular linear systems of equations are also explained there. Some new results on the index of a matrix and the Drazin inverse are given in Section 3. Numerical examples illustrating the previous sections are given in Section 4.

1 Corresponding author. E-mail address: nikoie_m@yahoo.com

2 Preliminaries and Basic Definitions

We first present some definitions and theorems about the index of a matrix and the Drazin inverse which are needed in this paper.

Definition 2.1  Let A in C^{n×n}. The index of A, denoted ind(A), is the smallest nonnegative integer k such that

  rank(A^{k+1}) = rank(A^k).    (1)

Equivalently, ind(A) is the dimension of the largest Jordan block corresponding to the zero eigenvalue of A [9]. For any matrix A in C^{n×n} the Jordan normal form can be built and is unique up to the order of its blocks [6]; therefore the index of A exists and is unique.

Definition 2.2  A number λ in C is called an eigenvalue of the matrix A if there is a vector x ≠ 0 such that Ax = λx. Any such vector is called an eigenvector of A associated with the eigenvalue λ. The set

  L(λ) = { x : (A - λI)x = 0 }

forms a linear subspace of C^n of dimension

  ρ(λ) = dim L(λ) = n - rank(A - λI),

which is the maximum number of linearly independent eigenvectors associated with the eigenvalue λ. It is easily seen that φ(µ) = det(A - µI) is an nth-degree polynomial of the form

  φ(µ) = (-1)^n (µ^n + α_{n-1} µ^{n-1} + ... + α_0).    (2)

It is called the characteristic polynomial of the matrix A; its zeros are the eigenvalues of A. If λ_1, ..., λ_k are the distinct zeros of φ(µ), then φ can be represented in the form

  φ(µ) = (-1)^n (µ - λ_1)^{σ_1} (µ - λ_2)^{σ_2} ··· (µ - λ_k)^{σ_k}.

The integer σ_i, which we also denote by σ(λ_i), is called the multiplicity of the eigenvalue λ_i.

Theorem 2.3  If λ is an eigenvalue of a matrix A in C^{n×n}, then

  1 ≤ ρ(λ) ≤ σ(λ) ≤ n.    (3)

Proof  See [6].

Definition 2.4  Let A in C^{n×n} with ind(A) = k. The matrix X of order n is the Drazin inverse of A, denoted by A^D, if X satisfies the conditions

  AX = XA,  XAX = X,  A^{k+1} X = A^k.    (4)

When ind(A) = 1, A^D is called the group inverse of A and is denoted by A^g. For any matrix A in C^{n×n}, the Drazin inverse A^D of A exists and is unique [3,10].

Theorem 2.5  Let A in C^{n×n} with ind(A) = k and rank(A^k) = r. Write the Jordan normal form of A as

  A = P [ D 0 ; 0 N ] P^{-1},

where P is a nonsingular matrix, D is a nonsingular matrix of order r, and N is a nilpotent matrix with N^k = 0. Then the Drazin inverse of A is

  A^D = P [ D^{-1} 0 ; 0 0 ] P^{-1}.

When ind(A) = 1, clearly N = 0 [3,10].

Corollary 2.6  Let ind(A) = 1 and D = I. Then D^{-1} = I, and thus A^g = A.
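Definition 2.1 and the conditions (4) can be checked mechanically. The following NumPy sketch is our own illustration, not part of the paper: the index is found by comparing ranks of successive powers, and the Drazin inverse is computed with the standard identity A^D = A^k (A^{2k+1})^+ A^k, valid for any k ≥ ind(A), where (·)^+ denotes the Moore-Penrose pseudoinverse. The helper names `matrix_index` and `drazin` are ours.

```python
import numpy as np

def matrix_index(A):
    """Smallest k >= 0 with rank(A^(k+1)) = rank(A^k), as in Definition 2.1."""
    n = A.shape[0]
    Ak = np.eye(n)                     # A^0
    r_prev = n                         # rank(A^0) = n
    for k in range(n + 1):
        Ak = Ak @ A                    # now Ak = A^(k+1)
        r_next = np.linalg.matrix_rank(Ak)
        if r_next == r_prev:
            return k
        r_prev = r_next
    return n                           # never reached: ind(A) <= n

def drazin(A, k):
    """A^D via the identity A^D = A^k (A^(2k+1))^+ A^k, k >= ind(A)."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# A singular idempotent matrix has index one; its Drazin inverse is the
# group inverse and must satisfy the three conditions (4).
A = np.array([[1.0, 0.0], [1.0, 0.0]])     # A^2 = A, singular
k = matrix_index(A)
X = drazin(A, k)
print(k)                                                   # 1
print(np.allclose(A @ X, X @ A),
      np.allclose(X @ A @ X, X),
      np.allclose(np.linalg.matrix_power(A, k + 1) @ X,
                  np.linalg.matrix_power(A, k)))           # True True True
```

For this idempotent A the computation returns X = A, in agreement with Corollary 2.6. Note that rank decisions in floating point depend on the tolerance used by `matrix_rank`.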

Theorem 2.7  If Ax = b is a consistent or inconsistent singular linear system with ind(A) = k, then the linear system of equations

  A^{k+1} x = A^k b    (5)

is consistent.

Proof  The linear system Ax = b has a solution if and only if rank(A) = rank[A b] [8]. From (1), rank(A^{k+1}) = rank(A^k), and since R(A^{k+1}) is contained in R(A^k) this forces R(A^{k+1}) = R(A^k); hence A^k b lies in R(A^{k+1}) and rank(A^{k+1}) = rank[A^{k+1} A^k b]. Therefore (5) is consistent.

According to [5] and the properties of the Drazin inverse, in order to obtain the Drazin inverse solution, the projection method solves the consistent or inconsistent singular linear system Ax = b with ind(A) = k through solving the consistent singular linear system (5) [9].

Theorem 2.8  A^D b is a solution of

  Ax = b,  k = ind(A),    (6)

if and only if b belongs to R(A^k), and A^D b is the unique solution of (6) provided that x belongs to R(A^k).

We now explain applications of the index of a matrix and the Drazin inverse in solving singular linear systems of equations. Singular linear systems arise in many different scientific applications; notably, partial differential equations discretized with finite difference or finite element methods can yield singular linear systems. Large singular linear systems can be solved either with sparse decomposition techniques or with iterative methods. For consistent singular linear systems, these two approaches can also be combined into a method that uses an approximate decomposition as a preconditioner for an iterative method. However, preconditioned iterative methods cannot be used for inconsistent singular linear systems [9]. A Cramer rule for A^D b was given in [7]. Singular linear systems with index one arise in many applications, such as Markov chain modelling and numerical experiments on the perturbed Navier-Stokes equations. Therefore, for a singular linear system with unit index, we must solve the system

  A^2 x = Ab    (7)

which is consistent. So we can choose all kinds of preconditioned methods, such as PCG, PGMRES, etc., to solve system (7). In [9] Wei proposed a two-step algorithm for solving singular linear systems with index one: in an implementation we first solve the system Ay = Ab (the inner iteration), and then solve Ax = y (the outer iteration).

3 Some Results

In this section, we give some results about the index of a matrix and the Drazin inverse.

Theorem 3.1  If A in C^{n×n} is a nonsingular matrix, then ind(A) = 0.

Proof  Let A in C^{n×n} be nonsingular. Then rank(A) = rank(A^0) = rank(I_n) = n, so k = 0 already satisfies (1), and ind(A) = 0. Moreover, the eigenvalues of A are nonzero. Therefore, if A is nonsingular then ind(A) = 0 and A^D = A^{-1}, which satisfies the conditions (4).

Theorem 3.2  If A in C^{n×n} is a matrix with index one, then rank(A) = rank(A^g).

Proof  From rank(A^g) = rank(A^g A A^g) ≤ rank(A A^g) ≤ rank(A) we have

  rank(A^g) ≤ rank(A).    (8)

Since rank(A) = rank(A A^g A) ≤ rank(A^g A) ≤ rank(A^g), we also have

  rank(A) ≤ rank(A^g).    (9)

Thus from (8) and (9) we get rank(A) = rank(A^g).

Corollary 3.3  Let A in C^{n×n} with ind(A) > 1. Then rank(A^D) < rank(A).

Theorem 3.4  Let A in C^{n×n}. Then A^g exists if and only if rank(A) = rank(A^2) [1].

Corollary 3.5  The index of any singular idempotent matrix equals one. The converse of this statement is not always true.
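Theorem 3.2 and Corollary 3.3 can be observed numerically. This is a sketch of ours, reusing the standard identity A^D = A^k (A^{2k+1})^+ A^k from above (not a construction from the paper):

```python
import numpy as np

def drazin(A, k):
    # A^D = A^k (A^(2k+1))^+ A^k for any k >= ind(A); (.)^+ is the pseudoinverse.
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

rank = np.linalg.matrix_rank

A1 = np.array([[1.0, 0.0], [1.0, 0.0]])    # idempotent and singular, ind(A1) = 1
print(rank(drazin(A1, 1)) == rank(A1))      # True: Theorem 3.2

A2 = np.array([[0.0, 1.0], [0.0, 0.0]])    # nilpotent, ind(A2) = 2, so A2^D = 0
print(rank(drazin(A2, 2)) < rank(A2))       # True: Corollary 3.3
```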

Theorem 3.6  If A in C^{n×n} is a symmetric matrix with index one, then A^g = (A^g)^T.

Proof  Since A^g is the group inverse of A, we have

  A A^g = A^g A,  A^g A A^g = A^g,  A A^g A = A.

From A = A^T, taking transposes gives

  (A^g)^T A = A (A^g)^T,  (A^g)^T A (A^g)^T = (A^g)^T,  A (A^g)^T A = A.    (10)

From (10) and Definition 2.4, (A^g)^T is a group inverse of A. The group inverse of a matrix is unique; therefore A^g = (A^g)^T.

Corollary 3.7  For any matrix A in C^{n×n} with index one, (A^g)^g = A.

Theorem 3.8  If A in C^{n×n} with ind(A) = k, then ind(A) = ind(A^T).

Proof  Since ind(A) = k, by Definition 2.1 we have rank(A^{k+1}) = rank(A^k). From (A^n)^T = (A^T)^n we have (A^{k+1})^T = (A^T)^{k+1} and (A^k)^T = (A^T)^k. Since a matrix and its transpose have equal rank,

  rank(A^{k+1}) = rank((A^T)^{k+1}),  rank(A^k) = rank((A^T)^k).

Thus rank((A^T)^{k+1}) = rank((A^T)^k), so ind(A^T) ≤ k; applying the same argument with A and A^T interchanged gives equality. Therefore ind(A) = ind(A^T).

Theorem 3.9  If λ ≠ 0 is an eigenvalue of A in C^{n×n}, then 1/λ is an eigenvalue of A^D.

Proof  From Ax = λx (x ≠ 0) we have A A^D x = λ A^D x and A^D A A^D x = λ A^D A^D x. From (4) we get A^D x = λ A^D A^D x. Setting y = A^D x, this reads (1/λ) y = A^D y. Moreover y ≠ 0: if A^D x = 0 then, with k = ind(A), (4) gives A^k x = A^{k+1} A^D x = 0, while A^k x = λ^k x ≠ 0, a contradiction. Therefore 1/λ is an eigenvalue of A^D.
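Theorem 3.9 can be observed numerically. In the sketch below (ours, again assuming the identity A^D = A^k (A^{2k+1})^+ A^k), A has eigenvalues 2 and 0 with index one, and the nonzero eigenvalue of A^D comes out as 1/2:

```python
import numpy as np

def drazin(A, k):
    # Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, k >= ind(A) (standard identity).
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[2.0, 1.0], [0.0, 0.0]])   # eigenvalues 2 and 0, ind(A) = 1
AD = drazin(A, 1)
eigs = np.sort(np.linalg.eigvals(AD).real)
print(np.allclose(eigs, [0.0, 0.5]))     # True: 1/2 is an eigenvalue of A^D
```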

Theorem 3.10  If A in C^{n×n} is a nilpotent matrix of nilpotency index k (that is, A^k = 0 and A^{k-1} ≠ 0), then ind(A) = k.

Proof  From A^k = 0 we have rank(A^{k+1}) = rank(A^k) = 0, while rank(A^{k-1}) > 0 = rank(A^k), so k is the smallest integer satisfying (1). Therefore ind(A) = k.

Corollary 3.11  Let A in C^{n×n} be a nilpotent matrix of nilpotency index n, so that ind(A) = n and rank(A^n) = 0. Then in Theorem 2.5 we have r = 0, the nonsingular block D is absent, and therefore A^D = 0.

Theorem 3.12  If A in C^{n×n}, then 0 ≤ ind(A) ≤ n.

Proof  By Theorem 3.1, ind(A) = 0 when A is nonsingular, and by Theorem 3.10, ind(A) = n when A is nilpotent of nilpotency index n, so both bounds are attained. In general, by Definition 2.1 ind(A) equals the dimension of the largest Jordan block corresponding to the zero eigenvalue, which is at most the multiplicity σ(0), and σ(0) ≤ n by Theorem 2.3. Therefore 0 ≤ ind(A) ≤ n.

4 Numerical Examples

We now give the following examples to illustrate the present results.

Example 4.1  Determine the index and Drazin inverse of the matrix

  A = [ 2 -3 -5 ; -1 4 5 ; 1 -3 -4 ].

It is obvious that rank(A) = rank(A^2), so ind(A) = 1. Moreover, the matrix A has the eigenvalues λ1 = 0 and λ2 = 1 with multiplicities

  σ(λ1) = 1,  ρ(λ1) = 1,

  σ(λ2) = 2,  ρ(λ2) = 2.

Thus the Jordan normal form of A is

  J = P A P^{-1} = [ [1] 0 0 ; 0 [1] 0 ; 0 0 [0] ],   P = [ 1 -3 -4 ; 2 -3 -5 ; 1 -3 -5 ],    (11)

where P is a nonsingular matrix. The dimension of the largest Jordan block corresponding to the zero eigenvalue in (11) equals one. Moreover A = A^2, so A is an idempotent matrix, and by Corollary 3.5 ind(A) = 1. From Theorem 2.5 we have

  A = P^{-1} [ D 0 ; 0 N ] P = [ 2 -3 -5 ; -1 4 5 ; 1 -3 -4 ],

where D = [ 1 0 ; 0 1 ] is a nonsingular matrix of order 2 and N = 0 is nilpotent. Therefore the group inverse of A is

  A^g = P^{-1} [ D^{-1} 0 ; 0 0 ] P = [ 2 -3 -5 ; -1 4 5 ; 1 -3 -4 ].

Moreover D = I, so by Corollary 2.6 we have A^g = A. The pair A, A^g satisfies the conditions (4); thus A^g is the group inverse of A.

The system

  2x1 - x2 + x3 = 1
  -3x1 + 4x2 - 3x3 = 1
  -5x1 + 5x2 - 4x3 = 2

is inconsistent. Its coefficient matrix is A^T with ind(A^T) = 1 (Theorem 3.8), so by Theorem 2.7 the following system, obtained as in (5), is consistent:

  2x1 - x2 + x3 = 3
  -3x1 + 4x2 - 3x3 = -5
  -5x1 + 5x2 - 4x3 = -8
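The claims of Example 4.1 are easy to confirm numerically. This check is ours, not part of the paper:

```python
import numpy as np

A = np.array([[ 2.0, -3.0, -5.0],
              [-1.0,  4.0,  5.0],
              [ 1.0, -3.0, -4.0]])

print(np.allclose(A @ A, A))        # True: A is idempotent, so ind(A) = 1 and A^g = A
Ag = A
# the group-inverse conditions (4) with k = 1:
print(np.allclose(A @ Ag, Ag @ A),
      np.allclose(Ag @ A @ Ag, Ag),
      np.allclose(A @ A @ Ag, A))   # True True True

# The system A^T x = b with b = (1, 1, 2)^T is inconsistent,
# while replacing the right-hand side by A^T b = (3, -5, -8)^T makes it consistent.
rank = np.linalg.matrix_rank
b = np.array([1.0, 1.0, 2.0])
print(rank(np.column_stack([A.T, b])) > rank(A.T))          # True: inconsistent
print(rank(np.column_stack([A.T, A.T @ b])) == rank(A.T))   # True: consistent
```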

Example 4.2  Determine the index and Drazin inverse of the matrix

  B = [ 1 1 3 ; 5 2 6 ; -2 -1 -3 ].

It is obvious that rank(B^3) = rank(B^4), so ind(B) = 3. Moreover, the matrix B has the single eigenvalue λ = 0 with

  σ(λ) = 3,  ρ(λ) = 1.

So the Jordan normal form of B is

  J = P B P^{-1} = [ 0 1 0 ; 0 0 1 ; 0 0 0 ],   P = [ -1 0 -1 ; 1 0 0 ; 1 1 3 ],    (12)

where P is a nonsingular matrix. The dimension of the largest Jordan block corresponding to the zero eigenvalue in (12) equals 3. Moreover B^3 = 0, so by Theorem 3.10 ind(B) = 3. By Theorem 2.5 we have

  B = P^{-1} [ 0 1 0 ; 0 0 1 ; 0 0 0 ] P = [ 1 1 3 ; 5 2 6 ; -2 -1 -3 ].

Therefore the Drazin inverse of B is

  B^D = P^{-1} [ 0 0 0 ; 0 0 0 ; 0 0 0 ] P = 0.

Moreover B in C^{3×3} is a nilpotent matrix of nilpotency index 3, so by Corollary 3.11 we have B^D = 0. The pair B, B^D fulfils the conditions (4); thus B^D is the Drazin inverse of B.

Example 4.3  Consider the symmetric matrix

  C = [ 1 1 1 ; 1 -1/3 1 ; 1 1 1 ].

The matrix C has the eigenvalues λ1 = 0, λ2 = -1, λ3 = 8/3. The index of C equals one, because rank(C) = rank(C^2). So the Jordan normal form of C is

  J = P C P^{-1} = [ [-1] 0 0 ; 0 [8/3] 0 ; 0 0 [0] ],   P = [ 1/11 -3/11 1/11 ; 9/22 3/11 9/22 ; 1/2 0 -1/2 ],    (13)

where P is a nonsingular matrix. The dimension of the largest Jordan block corresponding to the zero eigenvalue in (13) equals one. By Theorem 2.5, with J^g = [ -1 0 0 ; 0 3/8 0 ; 0 0 0 ], we have

  C^g = P^{-1} J^g P = (1/16) [ 1 6 1 ; 6 -12 6 ; 1 6 1 ].

The pair C, C^g satisfies the conditions (4); thus C^g is the group inverse of C. It is clear that rank(C) = rank(C^g) (Theorem 3.2) and (C^g)^g = C (Corollary 3.7).

Consider the following singular linear system with index one:

  x1 + x2 + x3 = 3
  x1 - (1/3) x2 + x3 = 1
  x1 + x2 + x3 = 3    (14)

Since b = (3, 1, 3)^T belongs to R(C), by Theorem 2.8 the solution of (14) is

  x = C^g b = (1/16) [ 1 6 1 ; 6 -12 6 ; 1 6 1 ] (3, 1, 3)^T = (3/4, 3/2, 3/4)^T.

5 Conclusions

In this paper, we proved some properties of the index of a matrix and the Drazin inverse and gave illustrative examples.
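Examples 4.2 and 4.3 can be verified in the same way; the following check is our own sketch:

```python
import numpy as np

# Example 4.2: B^3 = 0, so ind(B) = 3 and, by Corollary 3.11, B^D = 0.
B = np.array([[ 1.0,  1.0,  3.0],
              [ 5.0,  2.0,  6.0],
              [-2.0, -1.0, -3.0]])
print(np.allclose(np.linalg.matrix_power(B, 3), 0))    # True

# Example 4.3: C is symmetric with index one; C^g as computed in the text.
C = np.array([[1.0,  1.0,      1.0],
              [1.0, -1.0 / 3,  1.0],
              [1.0,  1.0,      1.0]])
Cg = np.array([[1.0,   6.0, 1.0],
               [6.0, -12.0, 6.0],
               [1.0,   6.0, 1.0]]) / 16
# group-inverse conditions (4) with k = 1:
print(np.allclose(C @ Cg, Cg @ C),
      np.allclose(Cg @ C @ Cg, Cg),
      np.allclose(C @ Cg @ C, C))                      # True True True
b = np.array([3.0, 1.0, 3.0])
print(np.allclose(Cg @ b, [0.75, 1.5, 0.75]))          # True: the solution of (14)
```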

Acknowledgment

This paper is an outcome of project No. 88724, supported by the Young Researchers Club, Islamic Azad University-Tabriz Branch. The authors thank the Young Researchers Club, Islamic Azad University-Tabriz Branch for its support.

References

[1] Bhaskara K., Meyer C., The Theory of Generalized Inverses over Commutative Rings, London and New York, 2002.

[2] Campbell S.L., Meyer C., Generalized Inverses of Linear Transformations, Pitman, London, 1979.

[3] Chen J., Xu Z. (2008) Comment on condition number of Drazin inverse and their condition number of singular systems, Applied Mathematics and Computation, 199, 512-526.

[4] Chen J., Xu Z. (2008) Comment on condition number of Drazin inverse and their condition number of singular systems, Applied Mathematics and Computation, 199, 512-526.

[5] Lin L., Wei Y., Zhang N. (2008) Convergence and quotient convergence of iterative methods for solving singular linear equations with index one, Linear Algebra and its Applications, 203, 35-45.

[6] Stoer J., Bulirsch R., Introduction to Numerical Analysis, Springer, 1992.

[7] Wang G. (1989) A Cramer rule for finding the solution of a class of singular equations, Linear Algebra and its Applications, 1, 27-34.

[8] Wei Y. (1998) Index splitting for the Drazin inverse and the singular linear system, Applied Mathematics and Computation, 95, 115-124.

[9] Wei Y., Zhou J. (2006) A two-step algorithm for solving singular linear systems with index one, Applied Mathematics and Computation, 175, 472-485.

[10] Zheng L. (2001) A characterization of the Drazin inverse, Linear Algebra and its Applications, 335, 183-188.