Some bounds for the spectral radius of the Hadamard product of matrices


Some bounds for the spectral radius of the Hadamard product of matrices
Tin-Yau Tam, Mathematics & Statistics, Auburn University
Georgia State University, May 28, 05, in honor of Prof. Jean H. Bevis

Some bounds for the spectral radius of the Hadamard product of two nonnegative matrices are given. Some results involve M-matrices. Joint work with Guang-Hui Cheng, Xiao-Yu Cheng, and Ting-Zhu Huang (University of Electronic Science and Technology of China).

I. Introduction

Given A, B ∈ C^{n×n}, the Hadamard product of A and B is A∘B = (a_ij b_ij).

Example 1.
A = [1 2; 3 4], B = [2 1; 1 0], A∘B = [2 2; 3 0].

If A, B ≥ 0, then
ρ(A∘B) ≤ ρ(A)ρ(B), (1)
where ρ(A) is the spectral radius of A.

Proof: The Kronecker product A⊗B ≥ 0 since A, B ≥ 0, and ρ(A⊗B) = ρ(A)ρ(B).
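Inequality (1) can be spot-checked numerically on Example 1. The helpers below (`rho2`, `hadamard`) are our own, not from the talk; `rho2` uses the closed-form eigenvalues of a 2×2 matrix, which are real here because the product of the off-diagonal entries is nonnegative.

```python
import math

def rho2(M):
    # Spectral radius of a 2x2 matrix [[a, b], [c, d]] with b*c >= 0:
    # the eigenvalues ((a + d) +/- sqrt((a - d)^2 + 4*b*c)) / 2 are then real.
    (a, b), (c, d) = M
    disc = math.sqrt((a - d) ** 2 + 4 * b * c)
    return max(abs((a + d + disc) / 2), abs((a + d - disc) / 2))

def hadamard(A, B):
    # Entrywise (Hadamard) product A o B = (a_ij * b_ij).
    n = len(A)
    return [[A[i][j] * B[i][j] for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[2, 1], [1, 0]]
AB = hadamard(A, B)                      # [[2, 2], [3, 0]], as in Example 1
assert rho2(AB) <= rho2(A) * rho2(B)     # inequality (1)
```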

A∘B is a principal submatrix of A⊗B; apply monotonicity of the Perron root.

Also by monotonicity of the Perron root, for B ≥ 0,
max_{i=1,...,n} b_ii ≤ ρ(B). (2)
The lower bound is attained when B ≥ 0 is diagonal.

Is it possible to have a better bound like the following for ρ(A∘B), where A, B ≥ 0?
ρ(A∘B) ≤ ρ(A) max_{i=1,...,n} b_ii (3)

Example 2.
A = [0 1; 1 0], B = [1 2; 2 1], A∘B = [0 2; 2 0].

Evidently ρ(A) = 1, ρ(B) = 3, ρ(A∘B) = 2, and max_{i=1,2} b_ii = 1. So
ρ(A∘B) = 2 > 1 = ρ(A) max_{1≤i≤2} b_ii,
and (3) fails in general.
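The numbers in Example 2 can be verified directly; `rho2` is our own 2×2 helper, valid whenever the off-diagonal product is nonnegative.

```python
import math

def rho2(M):
    # Spectral radius of a 2x2 matrix [[a, b], [c, d]] with b*c >= 0 (real eigenvalues).
    (a, b), (c, d) = M
    disc = math.sqrt((a - d) ** 2 + 4 * b * c)
    return max(abs((a + d + disc) / 2), abs((a + d - disc) / 2))

A = [[0, 1], [1, 0]]
B = [[1, 2], [2, 1]]
AB = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]   # A o B = [[0, 2], [2, 0]]

max_bii = max(B[0][0], B[1][1])          # = 1
assert rho2(A) == 1 and rho2(B) == 3 and rho2(AB) == 2
assert rho2(AB) > rho2(A) * max_bii      # so (3) fails for this pair
```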

Goal: provide a sufficient condition for (3) to be valid. It turns out the condition is satisfied by an important class of matrices called inverse M-matrices.

II. Some bounds and diagonal dominance

A matrix A is said to be diagonally dominant of its row entries (respectively, of its column entries) if |a_ii| ≥ |a_ij| (respectively |a_ii| ≥ |a_ji|) for each i = 1,...,n and all j ≠ i.

Example 3.
A = [4 1; 3 2]
is diagonally dominant of its column entries but not of its row entries.

Similarly we define diagonally subdominant of its row entries (respectively, of its column entries) by reversing the inequalities. Strict diagonal dominance is defined similarly.
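In code, the two notions differ only in which off-diagonal entries are compared against the diagonal. A minimal sketch (the helper names are ours), checked against Example 3:

```python
def dominant_of_row_entries(A):
    # |a_ii| >= |a_ij| for every i and every j != i
    n = len(A)
    return all(abs(A[i][i]) >= abs(A[i][j]) for i in range(n) for j in range(n) if j != i)

def dominant_of_column_entries(A):
    # |a_ii| >= |a_ji| for every i and every j != i
    n = len(A)
    return all(abs(A[i][i]) >= abs(A[j][i]) for i in range(n) for j in range(n) if j != i)

A = [[4, 1], [3, 2]]                      # Example 3
assert dominant_of_column_entries(A)      # 4 >= 3 and 2 >= 1
assert not dominant_of_row_entries(A)     # fails since a_22 = 2 < 3 = a_21
```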

Theorem 1. Let A ≥ 0, B ≥ 0 be n×n nonnegative matrices.

1. If there exists a positive diagonal D such that DBD^{-1} is diagonally dominant of its column (or row) entries, then
ρ(B) ≤ tr B, (4)
and
ρ(A∘B) ≤ ρ(A) max_{i=1,...,n} b_ii. (5)

2. If there exists a positive diagonal D such that DBD^{-1} is diagonally subdominant of its column (or row) entries, then
tr B ≤ ρ(B) and ρ(A) min_{i=1,...,n} b_ii ≤ ρ(A∘B).
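Before the proof, a quick numerical spot-check of part 1 with D = I and a B that is already diagonally dominant of its column entries (`rho2` is our own 2×2 helper):

```python
import math

def rho2(M):
    # Spectral radius of a 2x2 matrix [[a, b], [c, d]] with b*c >= 0 (real eigenvalues).
    (a, b), (c, d) = M
    disc = math.sqrt((a - d) ** 2 + 4 * b * c)
    return max(abs((a + d + disc) / 2), abs((a + d - disc) / 2))

# B is diagonally dominant of its column entries (take D = I in the theorem).
A = [[1, 2], [3, 4]]
B = [[2, 1], [1, 2]]
AB = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]   # [[2, 2], [3, 8]]

trB = B[0][0] + B[1][1]
max_bii = max(B[0][0], B[1][1])
assert rho2(B) <= trB                     # (4): rho(B) = 3 <= 4 = tr B
assert rho2(AB) <= rho2(A) * max_bii      # (5)
```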

Proof: A∘(DBD^{-1}) = D(A∘B)D^{-1}, and hence ρ(A∘B) = ρ(A∘(DBD^{-1})). Also diag B = diag DBD^{-1}. So we may assume that B is diagonally dominant of its column (or row) entries. Then
A∘B ≤ A diag(b_11,...,b_nn) ≤ (max_{i=1,...,n} b_ii) A.
By the monotonicity of the Perron root,
ρ(A∘B) ≤ ρ(A diag(b_11,...,b_nn)) ≤ ρ(A) max_{i=1,...,n} b_ii, (6)
which yields (5) immediately. To obtain (4), set A = J_n (the all-ones matrix) in the first inequality of (6). Then
ρ(B) ≤ ρ(J_n diag(b_11,...,b_nn)) = tr B,

since rank(J_n diag(b_11,...,b_nn)) ≤ 1.

Remark 1.
1. The upper bound tr B in (4) is attained by B = J_n.
2. Though max_{i=1,...,n} b_ii ≤ ρ(B) is true for every B ≥ 0, the bound ρ(B) ≤ tr B in (4) may not hold if the assumption in the theorem is dropped; for example, the irreducible
B = [0 1; 1 0]
has ρ(B) = 1 > 0 = tr B.
3. It is not true that if A ≥ 0 and B ≥ 0 are both diagonally dominant of their column (or row) entries, then ρ(A∘B) ≤ max_{i=1,...,n} a_ii · max_{i=1,...,n} b_ii. For example,
A = [2 1; 1 1.5], B = [2 1; 1 2], A∘B = [4 1; 1 3],

with ρ(a B) 4.6180 > 4 = max i=1,2 a ii max i=1,2 b ii. Corollary 1. Let B 0 be an n n nonnegative matrix. If there exists a positive diagonal D such that DBD 1 is diagonally dominant of its column (or row) entries, then max{ρ(a B) : A 0, ρ(a) = 1} = max i=1,...,n b ii. M-matrices: Z n := {A R n n : a ij 0, i j}. A matrix A Z n is called an M-matrix if there exists an P 0 and s > 0 such that A = si n P and s > ρ(p ),

M_n = the set of all n×n nonsingular M-matrices. The matrices in M_n^{-1} := {A^{-1} : A ∈ M_n} are called inverse M-matrices.

Known: for A ∈ Z_n, A ∈ M_n if and only if A is nonsingular and A^{-1} ≥ 0.

Corollary 2. Let A ≥ 0, B ≥ 0 be n×n nonnegative matrices. If B ∈ M_n^{-1}, then
ρ(B) ≤ tr B and ρ(A∘B) ≤ ρ(A) max_{i=1,...,n} b_ii.
Hence if B ∈ M_n^{-1}, then
max{ρ(A∘B) : A ≥ 0, ρ(A) = 1} = max_{i=1,...,n} b_ii.
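Corollary 2 can be checked on a concrete inverse M-matrix: B below is verified to lie in M_n^{-1} by inverting it and inspecting signs. The 2×2 inverse formula and `rho2` are our own helpers.

```python
import math

def rho2(M):
    # Spectral radius of a 2x2 matrix [[a, b], [c, d]] with b*c >= 0 (real eigenvalues).
    (a, b), (c, d) = M
    disc = math.sqrt((a - d) ** 2 + 4 * b * c)
    return max(abs((a + d + disc) / 2), abs((a + d - disc) / 2))

def inv2(M):
    # Inverse of a 2x2 matrix via the adjugate formula.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

B = [[2, 1], [1, 2]]
Binv = inv2(B)                            # (1/3) * [[2, -1], [-1, 2]]
# B^(-1) is a Z-matrix (off-diagonal entries <= 0) whose inverse B is nonnegative,
# i.e. B^(-1) is a nonsingular M-matrix, so B is an inverse M-matrix.
assert Binv[0][1] <= 0 and Binv[1][0] <= 0

A = [[0, 1], [1, 0]]
AB = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]   # [[0, 1], [1, 0]]
assert rho2(AB) <= rho2(A) * max(B[0][0], B[1][1])   # Corollary 2: 1 <= 1 * 2
assert rho2(B) <= B[0][0] + B[1][1]                  # rho(B) = 3 <= tr B = 4
```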

Proof: Since B^{-1} ∈ M_n, there exists a positive diagonal D such that DB^{-1}D^{-1} is strictly row diagonally dominant. The inverse DBD^{-1} of DB^{-1}D^{-1} is then strictly diagonally dominant of its column entries. Now apply Theorem 1(1).

III. A sharper upper bound when B^{-1} is an M-matrix

The inequality in Corollary 2 bears a resemblance to a known result which asserts that if A, B ∈ M_n, then
τ(A∘B^{-1}) ≥ τ(A) min_{i=1,...,n} β_ii, (7)
where B^{-1} = (β_ij),
τ(A) = min{Re λ : λ ∈ σ(A)},
and σ(A) is the spectrum of A ∈ M_n.

Known: τ(A) = 1/ρ(A^{-1}), and τ(A) is a positive eigenvalue of A ∈ M_n.

The number τ(a) is often called the minimum eigenvalue of the M-matrix A. Indeed τ(a) = s ρ(p ), if A = si n P where s > ρ(p ), P 0. So τ(a) is a measure of how close A M n to be singular. Known: A B 1 M n if A, B M n. Chen (2004) provided a sharper lower bound for τ(a B 1 ) which improves (7): τ(a B 1 ) τ(a)τ(b) min i=1,...,n τ(a) min i=1,...,n β ii [( aii τ(a) + Since a ii > τ(a) for all i = 1,..., n. b ii τ(b) 1 ) ] βii b ii. (8) (9)

Inequality (8) may be rewritten in the following form:
ρ((A∘B^{-1})^{-1}) ≤ ρ(A^{-1})ρ(B^{-1}) / min_{i=1,...,n} [ (a_ii ρ(A^{-1}) + b_ii ρ(B^{-1}) − 1) β_ii/b_ii ].
However, Chen's result is not an upper bound for ρ(A∘B^{-1}). In view of Corollary 2 and motivated by Chen's bound and its proof, we provide a sharper upper bound for ρ(A∘B), where A ≥ 0 and B ∈ M_n^{-1}.

Theorem 2. Suppose A ≥ 0 and B ∈ M_n^{-1}, and write B^{-1} = (β_ij).

1. If A is nilpotent, i.e., ρ(A) = 0, then ρ(A∘B) = 0.

2. If A is not nilpotent, then
ρ(A∘B) ≤ (ρ(A)/ρ(B)) max_{i=1,...,n} [ (a_ii/ρ(A) + β_ii ρ(B) − 1) b_ii/β_ii ] ≤ ρ(A) max_{i=1,...,n} b_ii.
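Reading the sharper bound as ρ(A∘B) ≤ (ρ(A)/ρ(B)) max_i (a_ii/ρ(A) + β_ii ρ(B) − 1) b_ii/β_ii (our reconstruction of the display above), here is a numerical check on A = J_2 and the inverse M-matrix B = [2 1; 1 2]; `rho2` is our own helper.

```python
import math

def rho2(M):
    # Spectral radius of a 2x2 matrix [[a, b], [c, d]] with b*c >= 0 (real eigenvalues).
    (a, b), (c, d) = M
    disc = math.sqrt((a - d) ** 2 + 4 * b * c)
    return max(abs((a + d + disc) / 2), abs((a + d - disc) / 2))

A = [[1, 1], [1, 1]]                       # rho(A) = 2, not nilpotent
B = [[2, 1], [1, 2]]                       # inverse M-matrix, rho(B) = 3
Binv = [[2 / 3, -1 / 3], [-1 / 3, 2 / 3]]  # B^(-1); beta_ii = 2/3
AB = [[A[i][j] * B[i][j] for j in range(2)] for i in range(2)]   # equals B here

rA, rB = rho2(A), rho2(B)
# Sharper bound of Theorem 2(2), as read above:
bound = (rA / rB) * max(
    (A[i][i] / rA + Binv[i][i] * rB - 1) * B[i][i] / Binv[i][i] for i in range(2))
assert rho2(AB) <= bound + 1e-12                      # first inequality (tight here)
assert bound <= rA * max(B[0][0], B[1][1]) + 1e-12    # <= rho(A) * max b_ii = 4
```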

Remark 2. The first inequality in Theorem 2 is no longer true if we merely assume that B is nonsingular and nonnegative. For example, if
A = [0 1 0; 0 0 1; 1 0 0], B = [1 2 0; 0 1 2; 2 0 1],
then
A∘B = [0 2 0; 0 0 2; 2 0 0], B^{-1} = (1/9)[1 −2 4; 4 1 −2; −2 4 1],
so that ρ(A) = 1, ρ(B) = 3, ρ(A∘B) = 2, but
(ρ(A)/ρ(B)) max_{i=1,...,n} [ (a_ii/ρ(A) + β_ii ρ(B) − 1) b_ii/β_ii ] = −2,
which is not even nonnegative. Note that B^{-1} is not an M-matrix (it has positive off-diagonal entries), so B ∉ M_n^{-1}.
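The counterexample can be verified numerically. `spectral_radius` below is our own pure-Python power iteration; the identity shift exploits ρ(M + I) = ρ(M) + 1 for M ≥ 0 and makes the Perron root strictly dominant, so the iteration converges even for the periodic matrices in this example.

```python
def spectral_radius(M, iters=500):
    # Power iteration on M + I; for M >= 0, rho(M + I) = rho(M) + 1 and the
    # shift makes the Perron root strictly dominant in modulus.
    n = len(M)
    S = [[M[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    x = [1.0] * n
    m = 1.0
    for _ in range(iters):
        y = [sum(S[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(y)
        x = [v / m for v in y]
    return m - 1

A = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
B = [[1, 2, 0], [0, 1, 2], [2, 0, 1]]
AB = [[A[i][j] * B[i][j] for j in range(3)] for i in range(3)]   # = 2 * A

Binv = [[1/9, -2/9, 4/9], [4/9, 1/9, -2/9], [-2/9, 4/9, 1/9]]
# Confirm the stated inverse: B * Binv = I (up to rounding).
for i in range(3):
    for j in range(3):
        s = sum(B[i][k] * Binv[k][j] for k in range(3))
        assert abs(s - (1 if i == j else 0)) < 1e-12

rA, rB, rAB = spectral_radius(A), spectral_radius(B), spectral_radius(AB)
bound = (rA / rB) * max(
    (A[i][i] / rA + Binv[i][i] * rB - 1) * B[i][i] / Binv[i][i] for i in range(3))
assert abs(rA - 1) < 1e-6 and abs(rB - 3) < 1e-6 and abs(rAB - 2) < 1e-6
assert bound < 0 < rAB     # the "bound" is -2: Theorem 2 really needs B in M_n^(-1)
```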