Electronic Journal of Linear Algebra, Volume 20 (2010), Article 45.

The eigenvalue distribution of block diagonally dominant matrices and block H-matrices

Cheng-Yi Zhang (chengyizhang@yahoo.com.cn), Shuanghua Luo, Aiqun Huang, Chengxian Xu

Recommended citation: Zhang, Cheng-Yi; Luo, Shuanghua; Huang, Aiqun; and Xu, Chengxian. (2010), "The eigenvalue distribution of block diagonally dominant matrices and block H-matrices", Electronic Journal of Linear Algebra, Volume 20. DOI: https://doi.org/10.13001/1081-3810.1398

This article is brought to you for free and open access by Wyoming Scholars Repository (http://repository.uwyo.edu/ela). For more information, please contact scholcom@uwyo.edu.
THE EIGENVALUE DISTRIBUTION OF BLOCK DIAGONALLY DOMINANT MATRICES AND BLOCK H-MATRICES

CHENG-YI ZHANG, SHUANGHUA LUO, AIQUN HUANG, AND JUNXIANG LU

Abstract. The paper studies the eigenvalue distribution of some special matrices, including block diagonally dominant matrices and block H-matrices. A well-known theorem of Taussky on the eigenvalue distribution is extended to such matrices. Conditions on a block matrix are also given so that it has certain numbers of eigenvalues with positive and negative real parts.

Key words. Eigenvalues, Block diagonally dominant, Block H-matrix, Non-Hermitian positive (negative) definite.

AMS subject classifications. 15A15, 15F10.

1. Introduction. The eigenvalue distribution of a matrix has important consequences and applications (see e.g., [4], [6], [9], [12]). For example, consider the ordinary differential equation (cf. Section 2.0.1 of [4])

(1.1) dx/dt = −A[x(t) − x̂],

where A ∈ C^{n×n} and x(t), x̂ ∈ C^n. The vector x̂ is an equilibrium of this system. It is not difficult to see that the equilibrium x̂ of system (1.1) is globally stable if and only if each eigenvalue of A has positive real part, which concerns the eigenvalue distribution of the matrix A. The analysis of stability of such a system appears in mathematical biology, neural networks, as well as in many problems in control theory. Therefore, there is considerable interest in the eigenvalue distribution of some special matrices A, and some results are classical. For example, Taussky in 1949 [14] stated the following theorem.

Received by the editors March 25, 2009. Accepted for publication July 31, 2010. Handling Editor: Joao Filipe Queiro.

Department of Mathematics, School of Science, Xi'an Polytechnic University, Xi'an, Shaanxi 710048, P.R. China; Corresponding author (chengyizhang@yahoo.com.cn or zhangchengyi_2004@163.com).

Department of Mathematics, School of Science, Xi'an Polytechnic University, Xi'an, Shaanxi 710048, P.R. China (iwantflyluo@163.com).
Department of Mathematics, School of Science, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P.R. China.

Department of Mathematics, School of Science, Xi'an Polytechnic University, Xi'an, Shaanxi 710048, P.R. China (jun-xianglu@163.com). The work of this author was supported by the Natural Science Basic Research Project in Shaanxi Province: 2010JQ1016.

621
622 C.-y. Zhang, S. Luo, A. Huang, and J. Lu

Theorem 1.1. ([14]) Let A = (a_ij) ∈ C^{n×n} be strictly or irreducibly diagonally dominant with positive real diagonal entries a_ii for all i ∈ N = {1, 2, …, n}. Then for an arbitrary eigenvalue λ of A, we have Re(λ) > 0.

Tong [15] improved Taussky's result in [14] and proposed the following theorem on the eigenvalue distribution of strictly or irreducibly diagonally dominant matrices.

Theorem 1.2. ([15]) Let A = (a_ij) ∈ C^{n×n} be strictly or irreducibly diagonally dominant with real diagonal entries a_ii for all i ∈ N. Then A has |J_+(A)| eigenvalues with positive real part and |J_−(A)| eigenvalues with negative real part, where

J_+(A) = {i | a_ii > 0, i ∈ N},  J_−(A) = {i | a_ii < 0, i ∈ N}.

Later, Jiaju Zhang [23], Zhaoyong You et al. [18] and Jianzhou Liu et al. [6] extended Tong's results in [15] to conjugate diagonally dominant matrices, generalized conjugate diagonally dominant matrices and H-matrices, respectively. Liu's result is as follows.

Theorem 1.3. ([6]) Let A = (a_ij) ∈ H_n with real diagonal entries a_ii for all i ∈ N. Then A has |J_+(A)| eigenvalues with positive real part and |J_−(A)| eigenvalues with negative real part.

Recently, Cheng-yi Zhang et al. ([21], [22]) generalized Tong's results in [15] to nonsingular diagonally dominant matrices with complex diagonal entries and established the following conclusion.

Theorem 1.4. ([21], [22]) Given a matrix A ∈ C^{n×n}, if Â is nonsingular diagonally dominant, where Â = (â_ij) ∈ C^{n×n} is defined by

â_ij = Re(a_ii), if i = j;  a_ij, if i ≠ j,

then A has |J_{R+}(A)| eigenvalues with positive real part and |J_{R−}(A)| eigenvalues with negative real part, where

J_{R+}(A) = {i | Re(a_ii) > 0, i ∈ N},  J_{R−}(A) = {i | Re(a_ii) < 0, i ∈ N}.

However, there exists a dilemma in practical application. That is, for a large-scale matrix, or for a matrix which is neither a diagonally dominant matrix nor an H-matrix, it is very difficult to verify whether it has the properties required by these theorems. On the other hand, David G. Feingold and Richard S.
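Theorem 1.2 is easy to check numerically. The sketch below (matrix invented for illustration; numpy assumed) builds a strictly diagonally dominant matrix with two positive and two negative diagonal entries and compares the predicted and computed eigenvalue counts:

```python
import numpy as np

# Hypothetical 4x4 strictly diagonally dominant matrix with real diagonal
# entries of mixed sign (not taken from the paper).
A = np.array([
    [ 5.0,  1.0, -1.0,  0.5],
    [ 0.5, -4.0,  1.0,  1.0],
    [ 1.0,  0.5,  6.0, -2.0],
    [-0.5,  1.0,  1.0, -3.5],
])

# Strict diagonal dominance: |a_ii| > sum_{j != i} |a_ij| for every row.
off = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
assert np.all(np.abs(np.diag(A)) > off)

# Theorem 1.2 predicts: the number of eigenvalues with positive real part
# equals the number of positive diagonal entries, and likewise for negative.
eigs = np.linalg.eigvals(A)
n_pos_pred = int(np.sum(np.diag(A) > 0))
n_pos = int(np.sum(eigs.real > 0))
print(n_pos_pred, n_pos)   # 2 2
```

Here the four Gershgorin disks split into two on each side of the imaginary axis, so the count can also be read off directly from the classical disk theorem.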
Varga [2], Zhao-yong You and Zong-qian Jiang [18] and Shu-huang Xiang [17], respectively, generalized the concept of diagonally dominant matrices and proposed two classes of block diagonally dominant matrices, i.e., I-block diagonally dominant matrices [2] and II-block diagonally dominant matrices [14], [15]. Later, Ben Polman [10], F. Robert [11], Yong-zhong Song [13], L.Yu. Kolotilina [5] and Cheng-yi Zhang and Yao-tang Li [20]
Block Diagonally Dominant Matrices and Block H-matrices 623

also presented two classes of block H-matrices, namely I-block H-matrices [11] and II-block H-matrices [10], on the basis of the previous work.

It is known that a block diagonally dominant matrix is not always a diagonally dominant matrix (an example is given in [2, (2.6)]). So suppose a matrix A is neither strictly (or irreducibly) diagonally dominant nor an H-matrix. Using an appropriate partitioning of A, can we obtain its eigenvalue distribution when it is block diagonally dominant or a block H-matrix? David G. Feingold and Richard S. Varga (1962) showed that an I-block strictly or irreducibly diagonally dominant matrix has the same property as the one in Theorem 1.1. The result reads as follows.

Theorem 1.5. ([2]) Let A = (A_lm)_{s×s} ∈ C_s^{n×n} be I-block strictly or irreducibly diagonally dominant with all the diagonal blocks being M-matrices. Then for an arbitrary eigenvalue λ of A, we have Re(λ) > 0.

The purpose of this paper is to establish some theorems on the eigenvalue distribution of block diagonally dominant matrices and block H-matrices. Following the result of David G. Feingold and Richard S. Varga, the well-known theorem of Taussky on the eigenvalue distribution is extended to block diagonally dominant matrices and block H-matrices with each diagonal block being non-Hermitian positive definite. Then, the eigenvalue distribution of some special matrices, including block diagonally dominant matrices and block H-matrices, is studied further to give conditions on the block matrix A = (A_lm)_{s×s} ∈ C_s^{n×n} such that the matrix A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part; here J_+(A) (J_−(A)) denotes the set of all indices of non-Hermitian positive (negative) definite diagonal blocks of A, and n_k is the order of the diagonal block A_kk for k ∈ J_+(A) ∪ J_−(A). The paper is organized as follows.
Some notation and preliminary results about special matrices, including block diagonally dominant matrices and block H-matrices, are given in Section 2. The theorem of Taussky on the eigenvalue distribution is extended to block diagonally dominant matrices and block H-matrices in Section 3. Some results on the eigenvalue distribution of block diagonally dominant matrices and block H-matrices are then presented in Section 4. Conclusions are given in Section 5.

2. Preliminaries. In this section we present some notions and preliminary results about special matrices that are used in this paper. Throughout the paper, we denote the conjugate transpose of the vector x, the conjugate transpose of the matrix A, the spectral norm of the matrix A and the cardinality of the set α by x^H, A^H, ‖A‖ and |α|, respectively. C^{m×n} (R^{m×n}) will be used to denote the set of all m×n complex (real) matrices. Let A = (a_ij) ∈ R^{m×n} and B = (b_ij) ∈ R^{m×n}; we write A ≥ B,
624 C.-y. Zhang, S. Luo, A. Huang, and J. Lu

if a_ij ≥ b_ij holds for all i = 1, 2, …, m, j = 1, 2, …, n. A matrix A = (a_ij) ∈ R^{n×n} is called a Z-matrix if a_ij ≤ 0 for all i ≠ j. We will use Z_n to denote the set of all n×n Z-matrices. A matrix A = (a_ij) ∈ R^{n×n} is called an M-matrix if A ∈ Z_n and A^{-1} ≥ 0. M_n will be used to denote the set of all n×n M-matrices. The comparison matrix of a given matrix A = (a_ij) ∈ C^{n×n}, denoted by µ(A) = (µ_ij), is defined by

µ_ij = |a_ii|, if i = j;  −|a_ij|, if i ≠ j.

It is clear that µ(A) ∈ Z_n for a matrix A ∈ C^{n×n}. A matrix A ∈ C^{n×n} is called an H-matrix if µ(A) ∈ M_n. H_n will denote the set of all n×n H-matrices.

A matrix A ∈ C^{n×n} is called Hermitian if A^H = A and skew-Hermitian if A^H = −A. A Hermitian matrix A ∈ C^{n×n} is called Hermitian positive definite if x^H A x > 0 for all 0 ≠ x ∈ C^n, and Hermitian negative definite if x^H A x < 0 for all 0 ≠ x ∈ C^n. A matrix A ∈ C^{n×n} is called non-Hermitian positive definite if Re(x^H A x) > 0 for all 0 ≠ x ∈ C^n, and non-Hermitian negative definite if Re(x^H A x) < 0 for all 0 ≠ x ∈ C^n. Let A ∈ C^{n×n}; then H = (A + A^H)/2 and S = (A − A^H)/2 are called the Hermitian part and the skew-Hermitian part of the matrix A, respectively. Furthermore, A is non-Hermitian positive (negative) definite if and only if H is Hermitian positive (negative) definite (see [3, 7, 8]).

Let x = (x_1, x_2, …, x_n)^T ∈ C^n. The Euclidean norm of the vector x is defined by ‖x‖ = (x^H x)^{1/2} = (∑_{i=1}^n |x_i|²)^{1/2}, and the spectral norm of the matrix A ∈ C^{n×n} is defined by

(2.1) ‖A‖ = sup_{0≠x∈C^n} (‖Ax‖/‖x‖) = sup_{‖y‖=1} ‖Ay‖.

If A is nonsingular, it is useful to point out that

(2.2) ‖A^{-1}‖^{-1} = inf_{0≠x∈C^n} (‖Ax‖/‖x‖).

With (2.2), we can then define ‖A^{-1}‖^{-1} by continuity to be zero whenever A is singular. Therefore, for B ∈ C^{n×n} and 0 ≠ C ∈ C^{n×m},

(2.3) ‖B^{-1}C‖^{-1} = inf_{0≠x∈C^n} (‖B^H x‖/‖C^H x‖)

if B is nonsingular, and

(2.4) ‖B^{-1}C‖^{-1} = 0
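The comparison matrix µ(A) and the H-matrix test via the M-matrix characterization above can be sketched in a few lines (numpy assumed; the inverse-nonnegativity check follows the definition directly and is an illustrative sketch, not a production-grade test):

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix mu(A): |a_ii| on the diagonal, -|a_ij| off it."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_h_matrix(A, tol=1e-12):
    """A is an H-matrix iff mu(A) is an M-matrix, i.e. a Z-matrix with a
    nonnegative inverse.  Assumes mu(A) is nonsingular."""
    M = comparison_matrix(A)
    return bool(np.all(np.linalg.inv(M) >= -tol))

A = np.array([[4.0, -1.0,  1.0],
              [2.0,  5.0, -1.0],
              [0.0,  1.0,  3.0]])
print(is_h_matrix(A))   # True: A is strictly diagonally dominant
```

Strict diagonal dominance of A makes µ(A) a strictly diagonally dominant Z-matrix with positive diagonal, hence an M-matrix, so the check returns True.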
by continuity if B is singular.

Block Diagonally Dominant Matrices and Block H-matrices 625

A matrix A ∈ C^{n×n} (n ≥ 2) is called reducible if there exists an n×n permutation matrix P such that

P A P^T = [ A_11  A_12 ; 0  A_22 ],

where A_11 ∈ C^{r×r}, A_22 ∈ C^{(n−r)×(n−r)}, 1 ≤ r < n. If no such permutation matrix exists, then A is called irreducible. A = (a_11) ∈ C^{1×1} is irreducible if a_11 ≠ 0, and reducible otherwise.

A matrix A = (a_ij) ∈ C^{n×n} is diagonally dominant by row if

(2.5) |a_ii| ≥ ∑_{j=1, j≠i}^n |a_ij|

holds for all i ∈ N = {1, 2, …, n}. If the inequality in (2.5) holds strictly for all i ∈ N, A is called strictly diagonally dominant by row; if A is irreducible and diagonally dominant with inequality (2.5) holding strictly for at least one i ∈ N, A is called irreducibly diagonally dominant by row. By D_n, SD_n and ID_n we denote the sets of matrices which are n×n diagonally dominant, n×n strictly diagonally dominant and n×n irreducibly diagonally dominant, respectively.

Let A = (a_ij) ∈ C^{n×n} be partitioned in the following form:

(2.6) A = [ A_11 A_12 … A_1s ; A_21 A_22 … A_2s ; … ; A_s1 A_s2 … A_ss ],

where A_ll is an n_l × n_l nonsingular principal submatrix of A for all l ∈ S = {1, 2, …, s} and ∑_{l=1}^s n_l = n. By C_s^{n×n} we denote the set of all s×s block matrices in C^{n×n} partitioned as (2.6). Note: A = (A_lm)_{s×s} ∈ C_s^{n×n} implies that each diagonal block A_ll of the block matrix A is nonsingular for all l ∈ S.

Let A = (A_lm)_{s×s} ∈ C_s^{n×n} and S = {1, 2, …, s}. We define the index sets

J_+(A) = {i | A_ii is non-Hermitian positive definite, i ∈ S},
J_−(A) = {i | A_ii is non-Hermitian negative definite, i ∈ S}.

From the index sets above, we know that J_+(A) = S shows that each diagonal block A_ll of A is non-Hermitian positive definite for all l ∈ S, and J_+(A) ∪ J_−(A) = S
626 C.-y. Zhang, S. Luo, A. Huang, and J. Lu

shows that each diagonal block A_ll of A is either non-Hermitian positive definite or non-Hermitian negative definite for all l ∈ S.

Let A = (A_lm)_{s×s} ∈ C_s^{n×n}. Then A is called block irreducible if either its I-block comparison matrix µ_I(A) = (w_lm) ∈ R^{s×s} or its II-block comparison matrix µ_II(A) = (m_lm) ∈ R^{s×s} is irreducible, where

(2.7) w_lm = ‖A_ll^{-1}‖^{-1}, if l = m; −‖A_lm‖, if l ≠ m;  m_lm = 1, if l = m; −‖A_ll^{-1} A_lm‖, if l ≠ m.

A block matrix A = (A_lm)_{s×s} ∈ C_s^{n×n} is called I-block diagonally dominant if its I-block comparison matrix µ_I(A) ∈ D_s. If µ_I(A) ∈ SD_s, A is called I-block strictly diagonally dominant; and if µ_I(A) ∈ ID_s, A is called I-block irreducibly diagonally dominant. Similarly, a block matrix A = (A_lm)_{s×s} ∈ C_s^{n×n} is called II-block diagonally dominant if its II-block comparison matrix µ_II(A) ∈ D_s. If µ_II(A) ∈ SD_s, A is called II-block strictly diagonally dominant; and if µ_II(A) ∈ ID_s, A is called II-block irreducibly diagonally dominant.

A block matrix A = (A_lm)_{s×s} ∈ C_s^{n×n} is called an I-block H-matrix (resp., a II-block H-matrix) if its I-block comparison matrix µ_I(A) = (w_lm) ∈ R^{s×s} (resp., its II-block comparison matrix µ_II(A) = (m_lm) ∈ R^{s×s}) is an s×s M-matrix.

In the rest of this paper, we denote the set of all s×s I-block (strictly, irreducibly) diagonally dominant matrices, all s×s II-block (strictly, irreducibly) diagonally dominant matrices, all s×s I-block H-matrices and all s×s II-block H-matrices by IBD_s (IBSD_s, IBID_s), IIBD_s (IIBSD_s, IIBID_s), IBH_s and IIBH_s, respectively.

Next, we give some lemmas to be used in the following sections.

Lemma 2.1. (see [2, 5]) If a block matrix A = (A_lm)_{s×s} ∈ IBSD_s ∪ IBID_s, then A is nonsingular.

Lemma 2.2. IBSD_s ∪ IBID_s ⊆ IBH_s and IIBSD_s ∪ IIBID_s ⊆ IIBH_s.

Proof. According to the definition of I-block strictly or irreducibly diagonally dominant matrices, µ_I(A) ∈ SD_s ∪ ID_s for any block matrix A ∈ IBSD_s ∪ IBID_s. Since SD_s ∪ ID_s ⊆ H_s (see Lemma 2.3 in [21]), µ_I(A) ∈ H_s. As a result, A ∈ IBH_s by the definition of I-block H-matrices.
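The two block comparison matrices in (2.7) can be computed directly. The helper below is an illustrative sketch (numpy assumed; the caller supplies the block orders n_l, and spectral norms are numpy's matrix 2-norm):

```python
import numpy as np

def block_comparison_matrices(A, sizes):
    """I-block and II-block comparison matrices as in (2.7).

    `sizes` lists the orders n_l of the diagonal blocks.
    """
    idx = np.cumsum([0] + list(sizes))
    s = len(sizes)
    mu_I = np.zeros((s, s))
    mu_II = np.zeros((s, s))
    for l in range(s):
        A_ll_inv = np.linalg.inv(A[idx[l]:idx[l+1], idx[l]:idx[l+1]])
        mu_I[l, l] = 1.0 / np.linalg.norm(A_ll_inv, 2)   # ||A_ll^{-1}||^{-1}
        mu_II[l, l] = 1.0
        for m in range(s):
            if m != l:
                A_lm = A[idx[l]:idx[l+1], idx[m]:idx[m+1]]
                mu_I[l, m] = -np.linalg.norm(A_lm, 2)
                mu_II[l, m] = -np.linalg.norm(A_ll_inv @ A_lm, 2)
    return mu_I, mu_II

# 4x4 matrix split into two 2x2 blocks: diagonal blocks 3I, couplings 0.5I, 0.2I.
A = np.array([[3.0, 0.0, 0.5, 0.0],
              [0.0, 3.0, 0.0, 0.5],
              [0.2, 0.0, 3.0, 0.0],
              [0.0, 0.2, 0.0, 3.0]])
mu_I, mu_II = block_comparison_matrices(A, [2, 2])
print(mu_I)   # rows strictly diagonally dominant, so A is in IBSD_s
```

For this example µ_I(A) = [[3, −0.5], [−0.2, 3]] and µ_II(A) = [[1, −1/6], [−1/15, 1]]; both are strictly diagonally dominant M-matrix candidates, so A is both I-block and II-block strictly diagonally dominant.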
Therefore, IBSD_s ∪ IBID_s ⊆ IBH_s. Similarly, we can prove IIBSD_s ∪ IIBID_s ⊆ IIBH_s.

Lemma 2.3. Let A = (A_lm)_{s×s} ∈ C_s^{n×n}. Then A ∈ IIBH_s (IBH_s) if and only if there exists a block diagonal matrix D = diag(d_1 I_1, …, d_s I_s), where d_l > 0 and I_l is the n_l × n_l identity matrix for all l ∈ S with ∑_{l=1}^s n_l = n, such that AD ∈ IIBSD_s (IBSD_s).
Block Diagonally Dominant Matrices and Block H-matrices 627

Proof. Using Theorem 6.2.3 (M_35) in [1, pp. 136-137], the conclusion of this lemma is obtained immediately.

Lemma 2.4. (see [7]) If a matrix A ∈ C^{n×n} is non-Hermitian positive definite, then for an arbitrary eigenvalue λ of A, we have Re(λ) > 0.

Lemma 2.5. (see [16]) Let A ∈ C^{n×n}. Then

(2.8) ‖A‖ = [ρ(A^H A)]^{1/2},

where ρ(A^H A) is the spectral radius of A^H A. In particular, if A is Hermitian, then ‖A‖ = ρ(A).

3. Some generalizations of Taussky's theorem. In this section, the famous theorem of Taussky on the eigenvalue distribution is extended to block diagonally dominant matrices and block H-matrices. The following lemmas will be used in this section.

Lemma 3.1. (see [8]) Let A = (a_ij) ∈ C^{n×n} be non-Hermitian positive definite with Hermitian part H = (A + A^H)/2. Then

(3.1) ‖A^{-1}‖ ≤ ‖H^{-1}‖.

Lemma 3.2. Let A ∈ C^{n×n} be non-Hermitian positive definite with Hermitian part H = (A + A^H)/2. Then for an arbitrary complex number α ≠ 0 with Re(α) ≥ 0, we have

(3.2) ‖(αI + A)^{-1}‖ ≤ ‖H^{-1}‖,

where I is the identity matrix and ‖A‖ is the spectral norm of the matrix A.

Proof. Since A is non-Hermitian positive definite, for an arbitrary complex number α ≠ 0 with Re(α) ≥ 0, the matrix αI + A is non-Hermitian positive definite. It then follows from Lemma 3.1 that

(3.3) ‖(αI + A)^{-1}‖ ≤ ‖[Re(α)I + H]^{-1}‖.

Since H is Hermitian positive definite, so is Re(α)I + H. Thus, the smallest eigenvalue of Re(α)I + H is τ(Re(α)I + H) = Re(α) + τ(H), where τ(·) denotes the smallest eigenvalue. Following Lemma 2.5, we have

(3.4) ‖[Re(α)I + H]^{-1}‖ = ρ([Re(α)I + H]^{-1}) = [τ(Re(α)I + H)]^{-1} = [Re(α) + τ(H)]^{-1} ≤ [τ(H)]^{-1} = ρ(H^{-1}) = ‖H^{-1}‖.
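The bound (3.2) of Lemma 3.2 can be sanity-checked numerically. The sketch below (random data with a fixed seed, numpy assumed) builds a non-Hermitian positive definite A = H + S from a forced positive definite Hermitian part H and a random skew part S, then verifies the inequality for one α with Re(α) ≥ 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Hermitian positive definite part H and a random skew-Hermitian part S.
B = rng.standard_normal((n, n))
H = B @ B.T + n * np.eye(n)          # positive definite by construction
K = rng.standard_normal((n, n))
S = (K - K.T) / 2                    # real skew-symmetric
A = H + S                            # non-Hermitian positive definite

alpha = 0.7 + 2.0j                   # any alpha != 0 with Re(alpha) >= 0
lhs = np.linalg.norm(np.linalg.inv(alpha * np.eye(n) + A), 2)
rhs = np.linalg.norm(np.linalg.inv(H), 2)
print(lhs <= rhs)                    # True, as Lemma 3.2 predicts
```

Varying α along the imaginary axis shows why the lemma is useful later: the bound is uniform over the closed right half-plane, which is exactly what the block Gershgorin arguments of Sections 3 and 4 need.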
628 C.-y. Zhang, S. Luo, A. Huang, and J. Lu

Then it follows from (3.3) and (3.4) that ‖(αI + A)^{-1}‖ ≤ ‖H^{-1}‖, which completes the proof.

Lemma 3.3. (see [2]; the generalization of the Gersgorin Circle Theorem) Let A ∈ C^{n×n} be partitioned as in (2.6). Then each eigenvalue λ of A satisfies

(3.5) ‖(λI − A_ll)^{-1}‖^{-1} ≤ ∑_{m≠l} ‖A_lm‖

for at least one l ∈ S.

Theorem 3.4. Let A = (A_lm)_{s×s} ∈ C_s^{n×n} with J_+(A) = S. If Â ∈ IBSD_s, where Â = (Â_lm)_{s×s} is defined by

(3.6) Â_lm = H_ll = (A_ll + A_ll^H)/2, if l = m;  A_lm, otherwise,

then for any eigenvalue λ of A, we have Re(λ) > 0.

Proof. The conclusion can be proved by contradiction. Assume that λ is an eigenvalue of A with Re(λ) ≤ 0. Following Lemma 3.3, (3.5) holds for at least one l ∈ S. Since J_+(A) = S shows that each diagonal block A_ll of A is non-Hermitian positive definite for all l ∈ S, it follows from Lemma 3.2 that

‖(λI − A_ll)^{-1}‖ ≤ ‖H_ll^{-1}‖,

and hence

(3.7) ‖(λI − A_ll)^{-1}‖^{-1} ≥ ‖H_ll^{-1}‖^{-1}

for all l ∈ S. According to (3.5) and (3.7), we have that

‖H_ll^{-1}‖^{-1} ≤ ∑_{m≠l} ‖A_lm‖

holds for at least one l ∈ S. This shows µ_I(Â) = (w_lm) ∉ SD_s. As a result, Â ∉ IBSD_s, which is in contradiction with the assumption Â ∈ IBSD_s. Therefore, the conclusion of this theorem holds.

Theorem 3.5. Let A = (A_lm)_{s×s} ∈ C_s^{n×n} with J_+(A) = S. If Â ∈ IBH_s, where Â = (Â_lm)_{s×s} is defined in (3.6), then for any eigenvalue λ of A, we have Re(λ) > 0.

Proof. Since Â ∈ IBH_s, it follows from Lemma 2.3 that there exists a block diagonal matrix D = diag(d_1 I_1, …, d_s I_s), where d_l > 0 and I_l is the n_l × n_l identity matrix for all l ∈ S with ∑_{l=1}^s n_l = n, such that ÂD ∈ IBSD_s, i.e.,

(3.8) ‖H_ll^{-1}‖^{-1} d_l > ∑_{m≠l} ‖A_lm‖ d_m
Block Diagonally Dominant Matrices and Block H-matrices 629

holds for all l ∈ S. Multiplying inequality (3.8) by d_l^{-1}, we get that

(3.9) ‖H_ll^{-1}‖^{-1} > ∑_{m≠l} d_l^{-1} ‖A_lm‖ d_m = ∑_{m≠l} ‖d_l^{-1} A_lm d_m‖

holds for all l ∈ S. Since J_+(D^{-1}AD) = J_+(A) = S and (3.9) shows D^{-1}ÂD ∈ IBSD_s, Theorem 3.4 yields that for any eigenvalue λ of D^{-1}AD, we have Re(λ) > 0. Again, since A has the same eigenvalues as D^{-1}AD, for any eigenvalue λ of A, we have Re(λ) > 0. This completes the proof.

Using Lemma 2.2 and Theorem 3.5, we can obtain the following corollary.

Corollary 3.6. Let A = (A_lm)_{s×s} ∈ C_s^{n×n} with J_+(A) = S. If Â ∈ IBID_s, where Â = (Â_lm)_{s×s} is defined in (3.6), then for any eigenvalue λ of A, we have Re(λ) > 0.

The following will extend the result of Theorem 1.1 to II-block diagonally dominant matrices and II-block H-matrices. First, we introduce some relevant lemmas.

Lemma 3.7. (see [3]) Let A ∈ C^{n×n}. Then the following conclusions are equivalent:
1. A is Hermitian positive definite;
2. A is Hermitian and each eigenvalue of A is positive;
3. A^{-1} is also Hermitian positive definite.

Lemma 3.8. Let A ∈ C^{n×n} be nonsingular. Then

(3.10) ‖A^{-1}‖^{-1} = [τ(A^H A)]^{1/2},

where τ(A^H A) denotes the minimal eigenvalue of the matrix A^H A.

Proof. It follows from equality (2.8) in Lemma 2.5 that

(3.11) ‖A^{-1}‖ = [ρ((A^{-1})^H A^{-1})]^{1/2} = [ρ((AA^H)^{-1})]^{1/2} = [ρ((A^H A)^{-1})]^{1/2} = [τ(A^H A)]^{-1/2},

which yields equality (3.10).

Lemma 3.9. Let A ∈ C^{n×n} be Hermitian positive definite and let B ∈ C^{n×m}. Then for an arbitrary complex number α ≠ 0 with Re(α) ≥ 0, we have

(3.12) ‖(αI + A)^{-1}B‖ ≤ ‖A^{-1}B‖,

where I is the identity matrix and ‖A‖ is the spectral norm of the matrix A.
630 C.-y. Zhang, S. Luo, A. Huang, and J. Lu

Proof. The conclusion is obvious if B = 0. Since A is Hermitian positive definite and α ≠ 0 with Re(α) ≥ 0, αI + A is non-Hermitian positive definite. Hence, αI + A is nonsingular. As a result, A^{-1}B ≠ 0 and consequently (αI + A)^{-1}B ≠ 0 for B ≠ 0. Thus, if B ≠ 0, it follows from (2.1) and (2.2) that

(3.13)
‖A^{-1}B‖ / ‖(αI + A)^{-1}B‖
= sup_{‖x‖=1} ‖A^{-1}Bx‖ / sup_{‖y‖=1} ‖(αI + A)^{-1}By‖
≥ inf_{‖y‖=1} ( ‖A^{-1}By‖ / ‖(αI + A)^{-1}By‖ )   (set x = y)
≥ inf_{0≠z∈C^n} ( ‖A^{-1}z‖ / ‖(αI + A)^{-1}z‖ )   (set z = By ∈ C^n)
= inf_{0≠u∈C^n} ( ‖A^{-1}(αI + A)u‖ / ‖u‖ )   (u = (αI + A)^{-1}z ∈ C^n)
= ‖[A^{-1}(αI + A)]^{-1}‖^{-1}   (from (2.2))
= ‖(αA^{-1} + I)^{-1}‖^{-1}.

According to Lemma 3.8,

(3.14) ‖(αA^{-1} + I)^{-1}‖^{-1} = [τ((αA^{-1} + I)^H (αA^{-1} + I))]^{1/2} = [τ(I + 2Re(α)A^{-1} + |α|² A^{-2})]^{1/2}.

Since A is Hermitian positive definite, it follows from Lemma 3.7 that A^{-1} and A^{-2} are also. Therefore, α ≠ 0 together with Re(α) ≥ 0 implies that 2Re(α)A^{-1} + |α|² A^{-2} is also Hermitian positive definite. As a result, τ(2Re(α)A^{-1} + |α|² A^{-2}) > 0. Thus, following (3.13) and (3.14), we get

(3.15) ‖A^{-1}B‖ / ‖(αI + A)^{-1}B‖ ≥ [τ(I + 2Re(α)A^{-1} + |α|² A^{-2})]^{1/2} = [1 + τ(2Re(α)A^{-1} + |α|² A^{-2})]^{1/2} > 1,

from which it is easy to obtain (3.12). This completes the proof.

Theorem 3.10. Let A = (A_lm)_{s×s} ∈ IIBSD_s ∪ IIBID_s with J_+(A) = S. If Â = A, where Â is defined in (3.6), then for any eigenvalue λ of A, we have Re(λ) > 0.

Proof. We prove by contradiction. Assume that λ is an eigenvalue of A with Re(λ) ≤ 0. Then the matrix λI − A is singular. Since Â = A and (3.6) imply that the
Block Diagonally Dominant Matrices and Block H-matrices 631

diagonal block A_ll of A is Hermitian for all l ∈ S, J_+(A) = S yields that the diagonal block A_ll of A is Hermitian positive definite for all l ∈ S. Thus λI − A_ll is non-Hermitian negative definite for all l ∈ S. As a result, D(λ) = diag(λI − A_11, …, λI − A_ss) is also non-Hermitian negative definite. Therefore, the matrix A(λ) := [D(λ)]^{-1}(λI − A) = (Ã_lm)_{s×s} ∈ C_s^{n×n} is singular, where Ã_ll = I_l, the n_l × n_l identity matrix, and Ã_lm = (λI − A_ll)^{-1}A_lm for l ≠ m and l, m ∈ S. It follows from Lemma 3.9 that

(3.16) ‖(λI − A_ll)^{-1}A_lm‖ ≤ ‖A_ll^{-1}A_lm‖

for l ≠ m and l, m ∈ S. Since A ∈ IIBSD_s ∪ IIBID_s,

(3.17) 1 ≥ ∑_{m≠l} ‖A_ll^{-1}A_lm‖

holds for all l ∈ S, and the inequality in (3.17) holds strictly for at least one l ∈ S. Then from (3.16) and (3.17), we have that

(3.18) 1 ≥ ∑_{m≠l} ‖(λI − A_ll)^{-1}A_lm‖

holds for all l ∈ S, and the inequality in (3.18) holds strictly for at least one l ∈ S. If A ∈ IIBSD_s, then the inequalities in (3.17), and hence those in (3.18), hold strictly for all l ∈ S; that is, A(λ) ∈ IBSD_s. If A is block irreducible, then so is A(λ). As a result, A ∈ IIBID_s yields A(λ) ∈ IBID_s. Therefore, A ∈ IIBSD_s ∪ IIBID_s implies A(λ) ∈ IBSD_s ∪ IBID_s. Using Lemma 2.1, A(λ) is nonsingular and consequently λI − A is nonsingular, which contradicts the singularity of λI − A. This shows that the assumption is incorrect. Thus, for any eigenvalue λ of A, we have Re(λ) > 0.

Theorem 3.11. Let A = (A_lm)_{s×s} ∈ C_s^{n×n} with J_+(A) = S. If Â ∈ IIBSD_s ∪ IIBID_s, where Â is defined by (3.6), then for any eigenvalue λ of A, we have Re(λ) > 0.

Proof. Since Â ∈ IIBSD_s ∪ IIBID_s, it follows from Lemma 2.2 that Â ∈ IIBH_s and Â^H ∈ IIBH_s. Following Lemma 2.3, there exists a block diagonal matrix D = diag(d_1 I_1, …, d_s I_s), where d_l > 0 and I_l is the n_l × n_l identity matrix for all l ∈ S with ∑_{l=1}^s n_l = n, such that Â^H D = (DÂ)^H ∈ IIBSD_s. Again, since Â is II-block strictly or irreducibly diagonally dominant by row, so is DÂ. Furthermore, (3.6) implies that the diagonal blocks of Â are all Hermitian, and hence, so are the diagonal blocks of DÂ and (DÂ)^H.
Therefore, from the definition of a II-block strictly or irreducibly
632 C.-y. Zhang, S. Luo, A. Huang, and J. Lu

diagonally dominant matrix by row, we have that

(3.19) 1 ≥ ∑_{m≠l} ‖(d_l Â_ll)^{-1}(d_l Â_lm)‖ = ∑_{m≠l} ‖(d_l Â_ll)^{-1}(d_l A_lm)‖

and

(3.20) 1 > ∑_{m≠l} ‖(d_l Â_ll)^{-1}(d_m Â_ml^H)‖ = ∑_{m≠l} ‖(d_l Â_ll)^{-1}(d_m A_ml^H)‖

hold for all l ∈ S. Therefore, according to (3.19) and (3.20), we have

∑_{m≠l} ‖(d_l A_ll + d_l A_ll^H)^{-1}(d_l A_lm + d_m A_ml^H)‖
= ∑_{m≠l} ‖(2 d_l Â_ll)^{-1}(d_l A_lm + d_m A_ml^H)‖
≤ (1/2) ∑_{m≠l} [ ‖(d_l Â_ll)^{-1}(d_l A_lm)‖ + ‖(d_l Â_ll)^{-1}(d_m A_ml^H)‖ ]
< (1/2)(1 + 1) = 1,

which indicates that DA + (DA)^H = DÂ + (DÂ)^H ∈ IIBSD_s. Again, since J_+(A) = S, the diagonal block of A + A^H, namely A_ll + A_ll^H = 2Â_ll, is Hermitian positive definite for all l ∈ S. As a result, the diagonal block of DA + (DA)^H, namely d_l A_ll + d_l A_ll^H = 2 d_l Â_ll, is also Hermitian positive definite for all l ∈ S. Thus, it follows from Theorem 3.10 that for any eigenvalue µ of DA + (DA)^H, Re(µ) > 0. Since DA + (DA)^H is Hermitian, each eigenvalue µ of DA + (DA)^H is positive. Then Lemma 3.7 yields that DA + (DA)^H is Hermitian positive definite.

From the proof above, we conclude that there exists a block diagonal matrix D = diag(d_1 I_1, …, d_s I_s), where d_l > 0 and I_l is the n_l × n_l identity matrix for all l ∈ S with ∑_{l=1}^s n_l = n, such that DA + (DA)^H is Hermitian positive definite. It follows from Lyapunov's theorem (see [4], p. 96) that A is positive stable, i.e., for any eigenvalue λ of A, we have Re(λ) > 0, which completes the proof.

Theorem 3.12. Let A = (A_lm)_{s×s} ∈ C_s^{n×n} with J_+(A) = S. If Â ∈ IIBH_s, where Â is defined in (3.6), then for any eigenvalue λ of A, we have Re(λ) > 0.
Block Diagonally Dominant Matrices and Block H-matrices 633

Proof. Similar to the proof of Theorem 3.5, the proof of this theorem is obtained by Lemma 2.3 and Theorem 3.11.

Using Theorem 3.5 and Theorem 3.12, we obtain a sufficient condition for the system (1.1) to be globally stable.

Corollary 3.13. Let A = (A_lm)_{s×s} ∈ C_s^{n×n} with J_+(A) = S. If Â ∈ IIBH_s (IBH_s), where Â is defined in (3.6), then the equilibrium x̂ of system (1.1) is globally stable.

4. The eigenvalue distribution of block diagonally dominant matrices and block H-matrices. In this section, some theorems on the eigenvalue distribution of block diagonally dominant matrices and block H-matrices are presented, generalizing Theorem 1.2, Theorem 1.3 and Theorem 1.4.

Theorem 4.1. Let A = (A_lm)_{s×s} ∈ C_s^{n×n} with J_+(A) ∪ J_−(A) = S. If Â ∈ IBSD_s, where Â is defined in (3.6), then A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part.

Proof. Consider the block Gersgorin disks of the matrix A given in (3.5),

Γ_l : ‖(A_ll − λI)^{-1}‖^{-1} ≤ ∑_{m≠l} ‖A_lm‖,  l ∈ S.

Let

R_1 = ∪_{k∈J_+(A)} Γ_k,  R_2 = ∪_{k∈J_−(A)} Γ_k.

Since J_+(A) ∪ J_−(A) = S, we have R_1 ∪ R_2 = ∪_{l∈S} Γ_l. Therefore, it follows from Lemma 3.3 that each eigenvalue λ of the matrix A lies in R_1 ∪ R_2. Furthermore, R_1 lies to the right of the imaginary axis and R_2 lies to the left of the imaginary axis in the complex plane. Then A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part in R_1 and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part in R_2. Otherwise, A has an eigenvalue λ_{k_0} ∈ R_1 with Re(λ_{k_0}) ≤ 0 such that for at least one l ∈ J_+(A),

(4.1) ‖(A_ll − λ_{k_0}I)^{-1}‖^{-1} ≤ ∑_{m≠l} ‖A_lm‖.

Then it follows from Lemma 3.2 that

(4.2) ‖H_ll^{-1}‖^{-1} ≤ ‖(A_ll − λ_{k_0}I)^{-1}‖^{-1}

for at least one l ∈ J_+(A). It then follows from (4.1) and (4.2) that

(4.3) ‖H_ll^{-1}‖^{-1} ≤ ∑_{m≠l} ‖A_lm‖
634 C.-y. Zhang, S. Luo, A. Huang, and J. Lu

for at least one l ∈ J_+(A). Inequality (4.3) contradicts Â ∈ IBSD_s. By the same method, we can obtain a contradiction if there exists an eigenvalue with nonnegative real part in R_2. Hence, if Â ∈ IBSD_s and J_+(A) ∪ J_−(A) = S, then for an arbitrary eigenvalue λ_i of A, we have Re(λ_i) ≠ 0 for i = 1, 2, …, n. Again, J_+(A) ∩ J_−(A) = ∅ yields R_1 ∩ R_2 = ∅. Since R_1 and R_2 are both closed sets and R_1 ∩ R_2 = ∅, an eigenvalue λ_i in R_1 cannot jump into R_2, and an eigenvalue λ_i in R_2 cannot jump into R_1. Thus, A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part in R_1 and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part in R_2. This completes the proof.

Theorem 4.2. Let A = (A_lm)_{s×s} ∈ C_s^{n×n} with J_+(A) ∪ J_−(A) = S. If Â ∈ IBH_s, where Â is defined in (3.6), then A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part.

Proof. Since Â ∈ IBH_s, it follows from Lemma 2.3 and the proof of Theorem 3.5 that there exists a block diagonal matrix D = diag(d_1 I_1, …, d_s I_s), where d_l > 0 and I_l is the n_l × n_l identity matrix for all l ∈ S with ∑_{l=1}^s n_l = n, such that D^{-1}ÂD ∈ IBSD_s, i.e.,

(4.4) ‖H_ll^{-1}‖^{-1} > ∑_{m≠l} d_l^{-1} ‖A_lm‖ d_m = ∑_{m≠l} ‖d_l^{-1} A_lm d_m‖

holds for all l ∈ S. Since J_+(D^{-1}AD) = J_+(A) and J_−(D^{-1}AD) = J_−(A), Theorem 4.1 yields that D^{-1}AD has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part. Again, since A has the same eigenvalues as D^{-1}AD, A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part. This completes the proof.

Following Lemma 2.2 and Theorem 4.2, we have the following corollary.

Corollary 4.3. Let A = (A_lm)_{s×s} ∈ C_s^{n×n} with J_+(A) ∪ J_−(A) = S. If Â ∈ IBID_s, where Â is defined in (3.6), then A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part.

Now, we consider the eigenvalue distribution of II-block diagonally dominant matrices and II-block H-matrices.
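A small numerical check of the counting results of this section can be sketched as follows. The 2×2 blocks are invented for the demo; the Hermitian parts of the diagonal blocks are ±definite and the coupling blocks are small enough that the hypothesis Â ∈ IBSD_s holds:

```python
import numpy as np

# Diagonal blocks: Hermitian part 3I (positive definite) and -4I
# (negative definite); small off-diagonal coupling blocks.
A11 = np.array([[ 3.0, 1.0], [-1.0,  3.0]])
A22 = np.array([[-4.0, 0.5], [-0.5, -4.0]])
E = 0.1 * np.ones((2, 2))            # ||E|| = 0.2 << 3 and 4
A = np.block([[A11, E], [E, A22]])

# Theorem 4.1-style prediction: n_1 = 2 eigenvalues with positive real
# part (from the block in J_+) and n_2 = 2 with negative real part.
eigs = np.linalg.eigvals(A)
n_pos = int(np.sum(eigs.real > 0))
n_neg = int(np.sum(eigs.real < 0))
print(n_pos, n_neg)   # 2 2
```

Here µ_I(Â) has rows (3, −0.2) and (−0.2, 4), so Â is I-block strictly diagonally dominant and the predicted split of the spectrum across the imaginary axis is guaranteed.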
In the following lemma, a further extension of the Gersgorin Circle Theorem is given.

Lemma 4.4. If for each block row of the block matrix A = (A_lm)_{s×s} ∈ C_s^{n×n} there exists at least one off-diagonal block not equal to zero, then for each eigenvalue λ of A,

(4.5) ∑_{m≠l} ‖(A_ll − λI)^{-1}A_lm‖ ≥ 1

holds for at least one l ∈ S, where ‖(A_ll − λI)^{-1}A_lm‖ = +∞ (in the sense of (2.4)) if A_ll − λI
Block Diagonally Dominant Matrices and Block H-matrices 635

is singular and A_lm ≠ 0, for l ≠ m and l, m ∈ S.

Proof. The proof is by contradiction. Assume that λ is an eigenvalue of A such that

(4.6) ∑_{m≠l} ‖(A_ll − λI)^{-1}A_lm‖ < 1

holds for all l ∈ S. It follows from (4.6) that A_ll − λI is nonsingular for all l ∈ S. Otherwise, there exists at least one l_0 ∈ S such that A_{l_0 l_0} − λI is singular. Since there exists at least one off-diagonal block A_{l_0 m} ≠ 0 for some m ∈ S in the l_0-th block row, (2.4) yields ‖(A_{l_0 l_0} − λI)^{-1}A_{l_0 m}‖ = +∞, and consequently ∑_{m≠l_0} ‖(A_{l_0 l_0} − λI)^{-1}A_{l_0 m}‖ = +∞, which contradicts (4.6). Therefore, A_ll − λI is nonsingular for all l ∈ S, which leads to the nonsingularity of the block diagonal matrix D(λ) = diag(A_11 − λI, …, A_ss − λI). Further, (4.6) also shows A − λI ∈ IIBSD_s. Thus, A(λ) := [D(λ)]^{-1}(A − λI) = (Ã_lm)_{s×s} ∈ IBSD_s. Then we have from Lemma 2.1 that A(λ) is nonsingular. As a result, A − λI is nonsingular, which contradicts the assumption that λ is an eigenvalue of A. Hence, if λ is an eigenvalue of A, then A − λI cannot be II-block strictly diagonally dominant, which gives the conclusion of this lemma.

Theorem 4.5. Let A = (A_lm)_{s×s} ∈ IIBSD_s with J_+(A) ∪ J_−(A) = S. If Â = A, where Â = (Â_lm)_{s×s} is defined in (3.6), then A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part.

Proof. The proof proceeds with the following two cases.

(i) If for each block row of the block matrix A there exists at least one off-diagonal block not equal to zero, one may consider the block Gersgorin disks of the matrix A given in (4.5),

G_l : ∑_{m≠l} ‖(A_ll − λI)^{-1}A_lm‖ ≥ 1,  l ∈ S.

Let

R_1 = ∪_{k∈J_+(A)} G_k,  R_2 = ∪_{k∈J_−(A)} G_k.

Since J_+(A) ∪ J_−(A) = S, we have R_1 ∪ R_2 = ∪_{l∈S} G_l. Therefore, it follows from Lemma 4.4 that each eigenvalue λ of the matrix A lies in R_1 ∪ R_2. Furthermore, R_1 lies to the right of the imaginary axis and R_2 lies to the left of the imaginary axis in the complex plane.
Then A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part in R_1 and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part in R_2. Otherwise, assume
636 C.-y. Zhang, S. Luo, A. Huang, and J. Lu

that A has an eigenvalue λ_{k_0} ∈ R_1 such that Re(λ_{k_0}) ≤ 0. Therefore, we have from Lemma 4.4 that

(4.7) ∑_{m≠l} ‖(A_ll − λ_{k_0}I)^{-1}A_lm‖ ≥ 1

holds for at least one l ∈ J_+(A). Since (3.6) and Â = A imply that the diagonal blocks of A are all Hermitian, the diagonal block A_ll of the block matrix A is Hermitian positive definite for all l ∈ J_+(A). Then it follows from Lemma 3.9 that

(4.8) ‖(λ_{k_0}I − A_ll)^{-1}A_lm‖ ≤ ‖A_ll^{-1}A_lm‖

for all l ∈ J_+(A) and m ≠ l, m ∈ S. Inequalities (4.7) and (4.8) yield that

(4.9) 1 ≤ ∑_{m≠l} ‖A_ll^{-1}A_lm‖

holds for at least one l ∈ J_+(A). Inequality (4.9) contradicts A ∈ IIBSD_s. By the same method, we can obtain a contradiction if there exists an eigenvalue with nonnegative real part in R_2. Hence, if A ∈ IIBSD_s and J_+(A) ∪ J_−(A) = S, then for an arbitrary eigenvalue λ_i of A, we have Re(λ_i) ≠ 0 for i = 1, 2, …, n. Again, J_+(A) ∩ J_−(A) = ∅ yields R_1 ∩ R_2 = ∅. Since R_1 and R_2 are both closed sets and R_1 ∩ R_2 = ∅, an eigenvalue λ_i in R_1 cannot jump into R_2, and an eigenvalue λ_i in R_2 cannot jump into R_1. Thus, A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part in R_1 and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part in R_2. This completes the proof of (i).

(ii) Next we prove the case when there exist some block rows of the block matrix A with all their off-diagonal blocks equal to zero. Let ω ⊆ S denote the set containing the block row indices of such block rows. Then there exists an n × n permutation matrix P such that

(4.10) P A P^T = [ A(ω′)  A(ω′, ω) ; 0  A(ω) ],

where ω′ = S − ω, A(ω′) = (A_lm)_{l,m∈ω′} has no block rows with all their off-diagonal blocks equal to zero, A(ω) = (A_lm)_{l,m∈ω} is a block diagonal matrix, and A(ω′, ω) = (A_lm)_{l∈ω′, m∈ω}. It is easy to see that the partition of (4.10) does not destroy the partition of (2.6). Further, (4.10) shows that

(4.11) σ(A) = σ(A(ω′)) ∪ σ(A(ω)),

where σ(A) denotes the spectrum of the matrix A. Since A(ω′) is a block principal submatrix of A and J_+(A) ∪ J_−(A) = S, we have J_+[A(ω′)] ∪ J_−[A(ω′)] = ω′. Further,
Block Diagonally Dominant Matrices and Block H-matrices 637

A = Â ∈ IIBSD_s gives A(ω′) = Â(ω′) ∈ IIBSD_{|ω′|}. Again, since A(ω′) = (A_lm)_{l,m∈ω′} has no block rows with all their off-diagonal blocks equal to zero, i.e., for each block row of the block matrix A(ω′) there exists at least one off-diagonal block not equal to zero, it follows from the proof of (i) that A(ω′) has ∑_{k∈J_+[A(ω′)]} n_k eigenvalues with positive real part and ∑_{k∈J_−[A(ω′)]} n_k eigenvalues with negative real part.

Let us consider the matrix A(ω). Since A(ω) is a block principal submatrix of the block matrix A = (A_lm)_{s×s} ∈ IIBSD_s with J_+(A) ∪ J_−(A) = S, we have J_+[A(ω)] ∪ J_−[A(ω)] = ω. Since A(ω) = (A_lm)_{l,m∈ω} is a block diagonal matrix, the diagonal block A_ll of A(ω) is either non-Hermitian positive definite or non-Hermitian negative definite for each l ∈ ω, and consequently

(4.12) σ(A(ω)) = ∪_{l∈ω} σ(A_ll) = ( ∪_{l∈J_+[A(ω)]} σ(A_ll) ) ∪ ( ∪_{l∈J_−[A(ω)]} σ(A_ll) ).

The equality (4.12) and Lemma 2.4 show that A(ω) has ∑_{k∈J_+[A(ω)]} n_k eigenvalues with positive real part and ∑_{k∈J_−[A(ω)]} n_k eigenvalues with negative real part.

Since

J_+(A) ∪ J_−(A) = S = ω ∪ ω′ = (J_+[A(ω)] ∪ J_−[A(ω)]) ∪ (J_+[A(ω′)] ∪ J_−[A(ω′)]) = (J_+[A(ω)] ∪ J_+[A(ω′)]) ∪ (J_−[A(ω)] ∪ J_−[A(ω′)]),

we have

(4.13) J_+(A) = J_+[A(ω)] ∪ J_+[A(ω′)],  J_−(A) = J_−[A(ω)] ∪ J_−[A(ω′)].

Again, ω′ = S − ω implies ω ∩ ω′ = ∅, which yields

(4.14) J_+[A(ω)] ∩ J_+[A(ω′)] = ∅,  J_−[A(ω)] ∩ J_−[A(ω′)] = ∅.

According to (4.13), (4.14) and the partition (2.6) of A, it is not difficult to see that

(4.15) ∑_{k∈J_+(A)} n_k = ∑_{k∈J_+[A(ω)]} n_k + ∑_{k∈J_+[A(ω′)]} n_k,  ∑_{k∈J_−(A)} n_k = ∑_{k∈J_−[A(ω)]} n_k + ∑_{k∈J_−[A(ω′)]} n_k.

From (4.15) and the eigenvalue distributions of A(ω′) and A(ω) given above, it is not difficult to see that A has ∑_{k∈J_+(A)} n_k eigenvalues with positive real part and ∑_{k∈J_−(A)} n_k eigenvalues with negative real part.

We conclude from the proofs of (i) and (ii) that the proof of this theorem is completed.

Theorem 4.6. Let A = (A_lm)_{s×s} ∈ IIBH_s with J_+(A) ∪ J_−(A) = S.
If $\hat{A} = A$, where $\hat{A} = (\hat{A}_{lm})_{s \times s}$ is defined in (3.6), then $A$ has $\sum_{k \in J_+(A)} n_k$ eigenvalues with positive real part and $\sum_{k \in J_-(A)} n_k$ eigenvalues with negative real part.
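As a quick numerical illustration of the count asserted in this theorem (a sketch with made-up data, not an example from the paper), the following Python snippet builds a small block matrix whose Hermitian diagonal blocks are positive definite (first block row, so $1 \in J_+(A)$) and negative definite (second block row, so $2 \in J_-(A)$), verifies that block strict diagonal dominance holds, and counts the eigenvalues by the sign of their real parts:

```python
import numpy as np

# Hypothetical 2x2 block matrix with n_1 = n_2 = 2.
A11 = np.array([[4.0, 1.0], [1.0, 3.0]])    # Hermitian positive definite (l in J_+)
A22 = -np.array([[5.0, 0.5], [0.5, 2.0]])   # Hermitian negative definite (l in J_-)
A12 = np.array([[0.3, 0.0], [0.0, 0.2]])    # small off-diagonal blocks
A21 = np.array([[0.1, 0.2], [0.0, 0.3]])

A = np.block([[A11, A12], [A21, A22]])

# Check block strict diagonal dominance in the spectral norm:
#   sum_{m != l} ||A_ll^{-1} A_lm|| < 1 for every block row l.
for diag, off in ((A11, A12), (A22, A21)):
    assert np.linalg.norm(np.linalg.solve(diag, off), 2) < 1.0

# Count eigenvalues by the sign of their real parts.
eigs = np.linalg.eigvals(A)
n_pos = int(np.sum(eigs.real > 0))
n_neg = int(np.sum(eigs.real < 0))
print(n_pos, n_neg)   # expect 2 and 2
```

With $n_1 = n_2 = 2$, the theorem predicts exactly $n_1 = 2$ eigenvalues with positive real part and $n_2 = 2$ with negative real part, matching the printed counts.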
Proof. Since $\hat{A} = A \in BH_s$, it follows from Lemma 2.3 that there exists a block diagonal matrix $D = \mathrm{diag}(d_1 I_{n_1}, d_2 I_{n_2}, \ldots, d_s I_{n_s})$, where $d_l > 0$, $I_{n_l}$ is the $n_l \times n_l$ identity matrix for all $l \in S$, and $\sum_{l=1}^{s} n_l = n$, such that $\hat{A}D = AD \in BSD_s$, i.e.,

(4.16)  $1 > \sum_{m \in S,\, m \ne l} \|(A_{ll} d_l)^{-1} (A_{lm} d_m)\| = \sum_{m \in S,\, m \ne l} \|(d_l^{-1} A_{ll} d_l)^{-1} (d_l^{-1} A_{lm} d_m)\|$

holds for all $l \in S$. Inequality (4.16) shows that $B = D^{-1} A D = D^{-1} \hat{A} D = \hat{B} \in BSD_s$. Since $B = D^{-1} A D$ has the same diagonal blocks as the matrix $A$, it follows from Theorem 4.5 that $B$ has $\sum_{k \in J_+(A)} n_k$ eigenvalues with positive real part and $\sum_{k \in J_-(A)} n_k$ eigenvalues with negative real part, and so does $A$, since $A$ and $B$ are similar.

Corollary 4.7. Let $A = (A_{lm})_{s \times s} \in BID_s$ with $J_+(A) \cup J_-(A) = S$. If $\hat{A} = A$, where $\hat{A} = (\hat{A}_{lm})_{s \times s}$ is defined in (3.6), then $A$ has $\sum_{k \in J_+(A)} n_k$ eigenvalues with positive real part and $\sum_{k \in J_-(A)} n_k$ eigenvalues with negative real part.

Proof. The proof follows directly from Lemma 2.2 and Theorem 4.6.

5. Conclusions. This paper concerns the eigenvalue distribution of block diagonally dominant matrices and block H-matrices. Following the result of Feingold and Varga, a well-known theorem of Taussky on the eigenvalue distribution is extended to block diagonally dominant matrices and block H-matrices with each diagonal block being non-Hermitian positive definite. Then, the eigenvalue distribution of some special matrices, including block diagonally dominant matrices and block H-matrices, is studied further to give conditions on the block matrix $A$ under which $A$ has $\sum_{k \in J_+(A)} n_k$ eigenvalues with positive real part and $\sum_{k \in J_-(A)} n_k$ eigenvalues with negative real part.

Acknowledgment. The authors would like to thank the anonymous referees for their valuable comments and suggestions, which actually stimulated this work.

REFERENCES

[1] A. Berman and R. J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York, 1979.
[2] D. G. Feingold and R. S. Varga.
Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem. Pacific J. Math., 12:1241–1250, 1962.
[3] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, New York, 1985.
[4] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, New York, 1991.
[5] L. Yu. Kolotilina. Nonsingularity/singularity criteria for nonstrictly block diagonally dominant matrices. Linear Algebra Appl., 359:133–159, 2003.
[6] J. Liu and Y. Huang. Some properties on Schur complements of H-matrices and diagonally dominant matrices. Linear Algebra Appl., 389:365–380, 2004.
[7] J.-j. Li. The positive definiteness of complex matrix. Mathematics in Practice and Theory, 25(2):59–63, 1995.
[8] R. Mathias. Matrices with positive definite Hermitian part: Inequalities and linear systems. SIAM J. Matrix Anal. Appl., 13(2):640–654, 1992.
[9] A. N. Michel and R. K. Miller. Qualitative Analysis of Large Scale Dynamical Systems. Academic Press, New York, 1977.
[10] B. Polman. Incomplete blockwise factorizations of (block) H-matrices. Linear Algebra Appl., 90:119–132, 1987.
[11] F. Robert. Blocs-H-matrices et convergence des méthodes itératives classiques par blocs. Linear Algebra Appl., 2:223–265, 1969.
[12] D. D. Siljak. Large-Scale Dynamical Systems: Stability and Structure. Elsevier North-Holland, Inc., New York, 1978.
[13] Y.-z. Song. The convergence of block AOR iterative methods. Appl. Math., 6(1):39–45, 1993.
[14] O. Taussky. A recurring theorem on determinants. Amer. Math. Monthly, 56:672–676, 1949.
[15] W. Tong. On the distribution of eigenvalues of some matrices. Acta Mathematica Sinica, 20(4):273–275, 1977.
[16] R. S. Varga. Matrix Iterative Analysis. Prentice-Hall, Inc., 1962.
[17] S.-h. Xiang and Z.-y. You. Weak block diagonally dominant matrices, weak block H-matrices and their applications. Linear Algebra Appl., 282:263–274, 1998.
[18] Z.-y. You and Z.-q. Jiang. The diagonal dominance of block matrices. Journal of Xi'an Jiaotong University, 18(3):123–125, 1984.
[19] Z.-y. You and L. Li. The distribution of eigenvalues of generalized conjugate diagonally dominant matrices. Journal of Mathematical Research and Exposition, 9(2):309–310, 1989.
[20] C.-y. Zhang, Y.-t. Li, and F. Chen. On Schur complements of block diagonally dominant matrices. Linear Algebra Appl., 414:533–546, 2006.
[21] C.-y. Zhang, C. Xu, and Y.-t. Li.
The eigenvalue distribution on Schur complements of H-matrices. Linear Algebra Appl., 422:250–264, 2007.
[22] C.-y. Zhang, S. Luo, F. Xu, and C. Xu. The eigenvalue distribution on Schur complement of nonstrictly diagonally dominant matrices and general H-matrices. Electron. J. Linear Algebra, 18:801–820, 2009.
[23] J. Zhang. The distribution of eigenvalues of conjugate diagonally dominant matrices. Acta Mathematica Sinica, 23(4):544–546, 1980.