Some bounds for the spectral radius of the Hadamard product of matrices

Guang-Hui Cheng, Xiao-Yu Cheng, Ting-Zhu Huang, Tin-Yau Tam

June 1, 2004

Abstract. Some bounds for the spectral radius of the Hadamard product of two nonnegative matrices are given. Some results involve M-matrices.

1 Introduction

For any two n × n matrices A = (a_ij) and B = (b_ij), the Hadamard product of A and B is A ∘ B := (a_ij b_ij). It is known [6, p. 358] that if A, B ∈ R^{n×n} are nonnegative matrices, then

    ρ(A ∘ B) ≤ ρ(A)ρ(B),    (1)

where ρ(A) denotes the spectral radius of A, which for nonnegative A equals the Perron root of A. See [3] for some generalizations. The inequality is neat and symmetric: it is unchanged if A and B are switched (unlike the usual matrix product, the Hadamard product is commutative). By the monotonicity of the Perron root of a nonnegative matrix B ∈ R^{n×n} [5, p. 491], one has

    max_i b_ii ≤ ρ(B).    (2)

The lower bound is clearly attained when B is a nonnegative diagonal matrix. Is it possible to have a better bound of the following form for ρ(A ∘ B), where A, B ≥ 0?

    ρ(A ∘ B) ≤ ρ(A) max_i b_ii    (3)

If the suspected inequality (3) were true, it would provide a better and computationally simpler upper bound. However, the answer is negative, in view of the following example.

Mathematics Subject Classifications: 15A42, 15A48.
Address of the first three authors: School of Applied Mathematics, University of Electronic Science and Technology of China, Chengdu, 610054, P.R. China. The research is supported by NCET of China and the Foundation of the Key Lab of Comp. Phy., Beijing Appl. Phy. and Comp. Math. Institute.
Address of the fourth author: Department of Mathematics and Statistics, Auburn University, Auburn, AL 36849-5310, USA. e-mail: tamtiny@auburn.edu
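As a quick sanity check of (1) and (2), not part of the original paper, the following minimal NumPy sketch draws random nonnegative matrices and confirms both inequalities numerically; the helper name spectral_radius is ours.

    import numpy as np

    def spectral_radius(M):
        # largest eigenvalue modulus
        return np.abs(np.linalg.eigvals(M)).max()

    rng = np.random.default_rng(0)
    for _ in range(1000):
        n = int(rng.integers(2, 6))
        A = rng.random((n, n))           # entrywise nonnegative
        B = rng.random((n, n))
        hadamard = A * B                 # Hadamard (entrywise) product
        # inequality (1): rho(A o B) <= rho(A) rho(B)
        assert spectral_radius(hadamard) <= spectral_radius(A) * spectral_radius(B) + 1e-12
        # inequality (2): max_i b_ii <= rho(B)
        assert B.diagonal().max() <= spectral_radius(B) + 1e-12
    print("inequalities (1) and (2) hold on all random samples")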

EXAMPLE 1 Let

    A = [0 1; 1 0],    B = [1 2; 2 1],    A ∘ B = [0 2; 2 0].

Evidently ρ(A) = 1, ρ(B) = 3, ρ(A ∘ B) = 2, and max_{i=1,2} b_ii = 1. So ρ(A ∘ B) > ρ(A) max_{1≤i≤2} b_ii, and (3) fails. The example can be extended to the n × n case (n ≥ 3) by attaching I_{n-2} via direct sum.

In the next section we provide a sufficient condition for (3) to be valid. It turns out the condition is satisfied by an important class of matrices, the inverse M-matrices. In Section 3 we provide another upper bound.

2 Some bounds and diagonal dominance

A matrix A is said to be diagonally dominant of its row entries (respectively, of its column entries) if |a_ii| ≥ |a_ij| (respectively |a_ii| ≥ |a_ji|) for each i = 1, ..., n and all j ≠ i. Similarly, A is diagonally subdominant of its row entries (respectively, of its column entries) if the inequalities are reversed. Strict diagonal dominance is defined similarly [6, p. 125].

THEOREM 1 Let A ≥ 0, B ≥ 0 be n × n nonnegative matrices and let D be a positive diagonal matrix.

1. If DBD^{-1} is diagonally dominant of its column (or row) entries, then

    ρ(B) ≤ tr B,    (4)

and

    ρ(A ∘ B) ≤ ρ(A) max_i b_ii.    (5)

2. If DBD^{-1} is diagonally subdominant of its column (or row) entries, then ρ(B) ≥ tr B and ρ(A) min_i b_ii ≤ ρ(A ∘ B).

PROOF. (1) Notice that A ∘ (DBD^{-1}) = D(A ∘ B)D^{-1}, and hence ρ(A ∘ B) = ρ(A ∘ (DBD^{-1})). Moreover the diagonal entries of B and DBD^{-1} are the same. So
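A short numerical check of Example 1 (ours, not from the paper), confirming that the suspected bound (3) is violated:

    import numpy as np

    A = np.array([[0., 1.], [1., 0.]])
    B = np.array([[1., 2.], [2., 1.]])
    rho = lambda M: np.abs(np.linalg.eigvals(M)).max()   # spectral radius
    print(rho(A), rho(B), rho(A * B))                    # approximately 1.0, 3.0, 2.0
    print(rho(A * B) > rho(A) * B.diagonal().max())      # True: (3) fails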

we may assume that B itself is diagonally dominant of its column (or row) entries. Suppose B is diagonally dominant of its column entries, i.e., b_ji ≤ b_ii for all j ≠ i. Then, entrywise,

    A ∘ B ≤ A diag(b_11, ..., b_nn) ≤ (max_i b_ii) A.    (6)

By the monotonicity of the Perron root [5, p. 491],

    ρ(A ∘ B) ≤ ρ(A diag(b_11, ..., b_nn)) ≤ ρ(A) max_i b_ii,    (7)

which yields (5) immediately. To obtain (4), set A = J_n, the n × n matrix of all ones, in the first inequality of (7). Then ρ(B) ≤ ρ(J_n diag(b_11, ..., b_nn)) = tr B, since the nonnegative matrix J_n diag(b_11, ..., b_nn) has rank at most 1 and its spectral radius therefore equals its trace. If B is diagonally dominant of its row entries, consider A^T ∘ B^T = (A ∘ B)^T and use the fact that the spectrum is invariant under transposition. The proof of the second statement is similar.

REMARK 1

1. The upper bound tr B in (4) is attained by choosing B = J_n.

2. Though max_i b_ii ≤ ρ(B) is true for all nonnegative B, the inequality ρ(B) ≤ tr B in (4) may fail if the assumption of the theorem is dropped; for example, take the irreducible B = [0 1; 1 0].

3. It is not true that if A ≥ 0 and B ≥ 0 are both diagonally dominant of their column (or row) entries, then ρ(A ∘ B) ≤ (max_i a_ii)(max_i b_ii); for example,

    A = [2 1; 1 1.5],    B = [2 1; 1 2],    A ∘ B = [4 1; 1 3],

with ρ(A ∘ B) ≈ 4.6180 > 4 = (max_{i=1,2} a_ii)(max_{i=1,2} b_ii).

The following provides a representation of the maximum diagonal entry of B.

COROLLARY 1 Let B ≥ 0 be an n × n nonnegative matrix. If there exists a positive diagonal D such that DBD^{-1} is diagonally dominant of its column (or row) entries, then

    max{ρ(A ∘ B) : A ≥ 0, ρ(A) = 1} = max_i b_ii.

PROOF. Use Theorem 1 (1) for the upper bound, and consider A = I_n, the n × n identity matrix, to attain it.

Let Z_n := {A ∈ R^{n×n} : a_ij ≤ 0, i ≠ j}. A matrix A ∈ Z_n is called an M-matrix [1, 6] if there exist P ≥ 0 and s > 0 such that A = sI_n - P and s > ρ(P),
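A small numerical illustration of Theorem 1 (1) and of the counterexample in Remark 1 (3); this sketch is ours and not part of the original text. The matrix B below is diagonally dominant of its column entries (so D = I works in the theorem).

    import numpy as np

    rho = lambda M: np.abs(np.linalg.eigvals(M)).max()

    # Each diagonal entry of B dominates the other entries in its column.
    B = np.array([[3., 1., 2.],
                  [2., 4., 1.],
                  [1., 2., 5.]])
    A = np.random.default_rng(1).random((3, 3))          # random nonnegative A
    print(rho(B) <= np.trace(B) + 1e-12)                 # inequality (4)
    print(rho(A * B) <= rho(A) * B.diagonal().max() + 1e-12)   # inequality (5)

    # Remark 1 (3): diagonal dominance of both factors does not give
    # rho(A o B) <= (max a_ii)(max b_ii).
    A2 = np.array([[2., 1.], [1., 1.5]])
    B2 = np.array([[2., 1.], [1., 2.]])
    print(rho(A2 * B2))   # about 4.618, which exceeds 4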

where ρ(p ) is the spectral radius of the nonnegative P, I n is the n n identity matrix. Denote by M n the set of all n n nonsingular M-matrices. The matrices in M 1 n := {A 1 : A M n } are called inverse M-matrices. It is known that A Z n is in M 1 n if and only if A is nonsingular and A 0 [6, p.114-115]. It is clear that M n and M 1 n are invariant under similarity via a positive diagonal matrix D, i.e., DM n D 1 = M n and DM 1 n D 1 = M 1 n. It is also known that M 1 n is Hadamard-closed if and only if n 3 [8]. Compare the following with [6, p.377]. COROLLARY 2 Let A 0, B 0 be n n nonnegative matrices. If B M 1 n, then ρ(b) tr B, and Hence if B M 1 n, then ρ(a B) ρ(a) max b ii. max{ρ(a B) : A 0, ρ(a) = 1} = max b ii. PROOF. Since B 1 M n, there exists a positive diagonal D such that DB 1 D 1 is strictly row diagonally dominant [6, p.114-115]. Then the inverse DBD 1 is strictly diagonally dominant of its column entries [6, p.125], [9, Lemma 2.2] (or see Proposition 1 in the next section). Then apply Theorem 1 (1). 3 A sharper upper bound when B 1 is an M-matrix The inequality in Corollary 2 has an resemblance of [6, Theorem 5.7.31, p.375] which asserts that if A, B M n, then where τ(a B 1 ) τ(a) min β ii, (8) τ(a) = min{re λ : λ σ(a)}, and σ(a) denotes the spectrum of A M n. It is known that [6, p.129-130] τ(a) = 1 ρ(a 1 ), and is a positive real eigenvalue of A M n. The number τ(a) is often called the minimum eigenvalue of the M-matrix A. Indeed τ(a) = s ρ(p ), if A = si n P where s > ρ(p ), P 0 [6, p.130]. So τ(a) is a measure of how close A M n to be singular. 4

It is known that A ∘ B^{-1} ∈ M_n if A, B ∈ M_n [6, p. 359]. Chen [2] provides a sharper lower bound for τ(A ∘ B^{-1}) which clearly improves (8):

    τ(A ∘ B^{-1}) ≥ τ(A)τ(B) min_i [(a_ii/τ(A) + b_ii/τ(B) - 1) β_ii / b_ii].    (9)

Since a_ii > τ(A) for all i = 1, ..., n [1, p. 159], (9) implies (8) immediately. Inequality (9) may be rewritten in the following form:

    ρ((A ∘ B^{-1})^{-1}) ≤ ρ(A^{-1})ρ(B^{-1}) / min_i [(a_ii ρ(A^{-1}) + b_ii ρ(B^{-1}) - 1) β_ii / b_ii].

However it is not an upper bound for ρ(A ∘ B^{-1}). In view of Corollary 2, and motivated by Chen's bound and its proof, we provide a sharper upper bound for ρ(A ∘ B), where A ≥ 0 and B ∈ M_n^{-1}. We will need the lemma in [9, Lemma 2.2] (a weaker version is found in [7, Lemma 2.2]). The following is a slight extension, since a strictly diagonally dominant matrix is nonsingular by the well-known Gersgorin theorem [6, p. 31].

PROPOSITION 1 Let A ∈ C^{n×n} be a nonsingular matrix that is diagonally dominant by row (respectively, by column), i.e.,

    |a_ii| ≥ Σ_{j≠i} |a_ij|    (respectively |a_ii| ≥ Σ_{j≠i} |a_ji|)

for all i = 1, ..., n. If A^{-1} = (b_ij), then for all i ≠ j,

    |b_ji| ≤ (Σ_{k≠j} |a_jk| / |a_jj|) |b_ii|    (respectively |b_ij| ≤ (Σ_{k≠j} |a_kj| / |a_jj|) |b_ii|).

PROOF. Let A ∈ C^{n×n} be a nonsingular matrix that is diagonally dominant by row. Each a_ii ≠ 0, for otherwise the whole ith row would be zero and A would be singular. Apply [6, p. 129, Problem 17(a)] to obtain

    |b_ji| ≤ (Σ_{k≠j} |a_jk| / |a_jj|) |b_ii|.

Consider A^T for the column version.

THEOREM 2 Suppose A ≥ 0 and B ∈ M_n^{-1}, and write B^{-1} = (β_ij).

1. If A is nilpotent, i.e., ρ(A) = 0, then ρ(A ∘ B) = 0.

2. If A is not nilpotent, then

    ρ(A ∘ B) ≤ (ρ(A)/ρ(B)) max_i [(a_ii/ρ(A) + β_ii ρ(B) - 1) b_ii / β_ii] ≤ ρ(A) max_i b_ii.
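The following numerical sketch (ours, not part of the paper) tests Theorem 2 (2) on a randomly generated inverse M-matrix B and a random nonnegative A: it checks that ρ(A ∘ B) is at most the first bound, which in turn is at most ρ(A) max_i b_ii.

    import numpy as np

    rho = lambda M: np.abs(np.linalg.eigvals(M)).max()

    rng = np.random.default_rng(3)
    n = 5
    P = rng.random((n, n))
    Binv = (rho(P) + 0.5) * np.eye(n) - P     # nonsingular M-matrix B^{-1}
    B = np.linalg.inv(Binv)                   # B is an inverse M-matrix
    A = rng.random((n, n))                    # nonnegative, not nilpotent

    a, b, beta = A.diagonal(), B.diagonal(), Binv.diagonal()
    bound1 = rho(A) / rho(B) * ((a / rho(A) + beta * rho(B) - 1) * b / beta).max()
    bound2 = rho(A) * b.max()
    print(rho(A * B) <= bound1 + 1e-12)   # first inequality of Theorem 2 (2)
    print(bound1 <= bound2 + 1e-12)       # second inequality of Theorem 2 (2)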

PROOF. (1) If A is nilpotent and nonnegative, then A is permutationally similar to a strictly upper triangular (nonnegative) matrix by [4, Theorem 3]. So is A ∘ B. Hence ρ(A ∘ B) = 0.

(2) For the second inequality, observe that max_i a_ii ≤ ρ(A) for the nonnegative matrix A, that b_ii ≥ 0, and that [1, p. 159]

    β_ii > τ(B^{-1}) > 0 for all i = 1, ..., n,    (10)

since B^{-1} ∈ M_n. For the first inequality we essentially follow the idea of Chen's proof. Since ρ(·), the Hadamard product, and matrix inversion are continuous, we may assume that A ≥ 0 and B^{-1} = αI - P ∈ M_n, where P ≥ 0 and α > ρ(P), are irreducible (thus B ≥ 0 is also irreducible, since irreducibility is invariant under taking the inverse); otherwise we place sufficiently small values ε > 0 in the positions in which A and P have zeros. Let v and w be the right Perron eigenvectors of P^T and A respectively, i.e., v, w ∈ R^n are positive vectors such that

    (B^{-1})^T v = τ(B^{-1}) v,    Aw = ρ(A) w.

So v^T B^{-1} = τ(B^{-1}) v^T, or equivalently,

    Σ_{k=1}^n v_k β_kj = τ(B^{-1}) v_j,    j = 1, ..., n.

Define

    C := V B^{-1},    V := diag(v_1, ..., v_n).

Since the off-diagonal entries β_ij, i ≠ j, of B^{-1} are nonpositive and τ(B^{-1}) > 0, we have Σ_{k≠j} |v_k β_kj| = v_j (β_jj - τ(B^{-1})) < v_j β_jj for each j, so C is strictly diagonally dominant by column. Apply Proposition 1 to C^{-1} = B V^{-1} to obtain, for all i ≠ j,

    b_ij = (b_ij / v_j) v_j ≤ (Σ_{k≠j} v_k |β_kj| / (v_j β_jj)) (b_ii / v_i) v_j = (β_jj - τ(B^{-1})) v_j b_ii / (β_jj v_i),

since v_j > 0, β_kj ≤ 0 (k ≠ j), and β_jj > 0. Hence for i ≠ j,

    b_ij ≤ (β_jj - τ(B^{-1})) v_j b_ii / (β_jj v_i).

Now define a positive vector z ∈ R^n by

    z_i := w_i β_ii / (v_i (β_ii - τ(B^{-1}))) > 0,    i = 1, ..., n.

Then

    [(A ∘ B)z]_i = a_ii b_ii z_i + Σ_{j≠i} a_ij b_ij z_j
                 ≤ a_ii b_ii z_i + Σ_{j≠i} a_ij (β_jj - τ(B^{-1})) v_j b_ii / (β_jj v_i) · w_j β_jj / (v_j (β_jj - τ(B^{-1})))

                 = a_ii b_ii z_i + (b_ii / v_i) Σ_{j≠i} a_ij w_j
                 = a_ii b_ii z_i + (b_ii / v_i) (ρ(A) - a_ii) w_i
                 = b_ii z_i [a_ii + (1/β_ii)(ρ(A) - a_ii)(β_ii - τ(B^{-1}))]
                 = (b_ii / β_ii) ρ(A) τ(B^{-1}) [a_ii/ρ(A) + β_ii/τ(B^{-1}) - 1] z_i
                 ≤ ρ(A) τ(B^{-1}) max_i [(a_ii/ρ(A) + β_ii/τ(B^{-1}) - 1) b_ii / β_ii] z_i.

By [1, p. 28, Theorem 2.1.11] we have the desired first inequality, since τ(B^{-1}) = 1/ρ(B). Now by (2), applied to A, we have a_ii/ρ(A) ≤ 1, so

    (a_ii/ρ(A) + β_ii ρ(B) - 1) b_ii / β_ii ≤ β_ii ρ(B) b_ii / β_ii = ρ(B) b_ii,    i = 1, ..., n.

So the second inequality (see Corollary 2) follows immediately.

REMARK 2 The first inequality in Theorem 2 is no longer true if we merely assume that B is nonsingular and nonnegative. For example, if

    A = [0 1 0; 0 0 1; 1 0 0],    B = [1 2 0; 0 1 2; 2 0 1],
    A ∘ B = [0 2 0; 0 0 2; 2 0 0],    B^{-1} = (1/9) [1 -2 4; 4 1 -2; -2 4 1],

then ρ(A) = 1, ρ(B) = 3, ρ(A ∘ B) = 2, but

    (ρ(A)/ρ(B)) max_i [(a_ii/ρ(A) + β_ii ρ(B) - 1) b_ii / β_ii] = -2.
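A quick numerical check of the numbers in Remark 2 (this snippet is ours): the right-hand side of the first bound in Theorem 2 evaluates to -2, strictly below ρ(A ∘ B) = 2, so the bound indeed fails for this nonsingular nonnegative B.

    import numpy as np

    rho = lambda M: np.abs(np.linalg.eigvals(M)).max()

    A = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])
    B = np.array([[1., 2., 0.], [0., 1., 2.], [2., 0., 1.]])
    Binv = np.linalg.inv(B)                 # equals (1/9)[[1,-2,4],[4,1,-2],[-2,4,1]]

    a, b, beta = A.diagonal(), B.diagonal(), Binv.diagonal()
    bound = rho(A) / rho(B) * ((a / rho(A) + beta * rho(B) - 1) * b / beta).max()
    print(rho(A), rho(B), rho(A * B))       # approximately 1.0, 3.0, 2.0
    print(bound)                            # -2.0: smaller than rho(A o B), so the bound fails here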

References

[1] A. Berman and R.J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Classics in Applied Mathematics 9, SIAM, Philadelphia, 1994.

[2] S.C. Chen, A lower bound for the minimum eigenvalue of the Hadamard product of matrices, Linear Algebra Appl., 378 (2004), 159-166.

[3] L. Elsner, C.R. Johnson and J.A. Dias da Silva, The Perron root of a weighted geometric mean of nonnegative matrices, Linear and Multilinear Algebra, 24 (1988), 1-13.

[4] J.Z. Hearon, Compartmental matrices with single root and nonnegative nilpotent matrices, Math. Biosci., 14 (1972), 135-142.

[5] R.A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, 1985.

[6] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, 1991.

[7] Y. Song, On an inequality for the Hadamard product of an M-matrix and its inverse, Linear Algebra Appl., 305 (2000), 99-105.

[8] B.Y. Wang, X. Zhang and F. Zhang, On the Hadamard product of inverse M-matrices, Linear Algebra Appl., 305 (2000), 23-31.

[9] X. Yong and Z. Wang, On a conjecture of Fiedler and Markham, Linear Algebra Appl., 288 (1999), 259-267.