
Spectral inequalities and equalities involving products of matrices

Chi-Kwong Li* (*Research supported by an NSF grant.)
Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu)

Yiu-Tung Poon
Department of Mathematics, Iowa State University, Ames, Iowa 50011 (poon@math.iastate.edu)

Abstract

Let $A_1, \dots, A_k$ be $n \times n$ matrices. We study inequalities and equalities involving eigenvalues, diagonal entries, and singular values of $A = A_1 \cdots A_k$ and those of $A_1, \dots, A_k$. It is shown that the matrices attaining the equalities often have a special reducible structure. The results are then applied to study normality and reducibility of matrices, extending some results and answering some questions of Miranda, Wang and Zhang.

Keywords: matrices, eigenvalues, singular values. AMS Classifications: 15A42, 15A18

1 Introduction

Let $A \in M_n$. Denote by $s_1(A) \ge \cdots \ge s_n(A)$ the singular values of $A$, by $\lambda_1(A), \dots, \lambda_n(A)$ the eigenvalues of $A$ ordered so that $|\lambda_1(A)| \ge \cdots \ge |\lambda_n(A)|$, and by $d_1(A), \dots, d_n(A)$ the diagonal entries of $A$. Let $A_1, \dots, A_k \in M_n$ and $A = A_1 \cdots A_k$. It is known [1, 6] that for $r = 1, \dots, n$, one has
$$\prod_{j=1}^r |\lambda_j(A)| \le \prod_{j=1}^r s_j(A) \le \prod_{j=1}^r \prod_{i=1}^k s_j(A_i), \tag{1.1}$$
$$\Big|\sum_{j=1}^r \lambda_j(A)\Big| \le \sum_{j=1}^r |\lambda_j(A)| \le \sum_{j=1}^r \prod_{i=1}^k s_j(A_i), \tag{1.2}$$
$$\Big|\sum_{j=1}^r d_j(A)\Big| \le \sum_{j=1}^r |d_j(A)| \le \sum_{j=1}^r s_j(A) \le \sum_{j=1}^r \prod_{i=1}^k s_j(A_i). \tag{1.3}$$

In this paper, we characterize those matrices $A_1, \dots, A_k$ for which any one of the equalities in (1.1)--(1.3) holds. If the equality under consideration does not involve $\prod_{j=1}^r \prod_{i=1}^k s_j(A_i)$ or $\sum_{j=1}^r \prod_{i=1}^k s_j(A_i)$, then one can only deduce conditions on the matrix $A$, and such conditions have been determined in [3]. Thus, we will focus on equalities that always involve the quantities $\prod_{j=1}^r \prod_{i=1}^k s_j(A_i)$ or $\sum_{j=1}^r \prod_{i=1}^k s_j(A_i)$. It turns out that the extreme matrices $A_1, \dots, A_k$ attaining the equalities often have a special reducible structure. The results are then used to study normality and reducibility of matrices, extending the results in [7, 10] and answering

some questions in [7]. For example, in [7, Lemma 3 and Theorem 2], characterizations were given of those matrices $A_1, A_2$ and $A = A_1A_2$ in $M_n$ such that $\sum_{j=1}^r s_j(A) = \sum_{j=1}^r \prod_{i=1}^2 s_j(A_i)$ and $\sum_{j=1}^r |d_j(A)| = \sum_{j=1}^r \prod_{i=1}^2 s_j(A_i)$. Furthermore, it was suggested (Comment 2 in [7]) that the results can be extended to more than two matrices. We show in Section 2 (the discussion after Theorem 2.2) that this comment is not accurate, and we extend the results using information on the ranks of the matrices $A_1, \dots, A_k$ (Theorem 2.4). Furthermore, our results (Theorem 2.2 and Corollary 2.7) answer the questions raised in Comments 3 and 5 of [7].

In [10], the authors studied the equality cases in (1.2) when $r = n$ and each $A_i = B$ or $B^*$ for a fixed $B$. Some sufficient conditions for the matrix $B$ to be normal were given. However, those conditions are not necessary in general. A complete understanding of these conditions follows readily from our results in Section 4 (see Corollary 4.2).

We shall use the following notation of majorization in our discussion; see [6]. For two real vectors $x$ and $y$ in $\mathbf{R}^n$, if the sum of the $m$ largest entries of $x$ is not larger than that of $y$ for $m = 1, \dots, n$, we write $x \prec_w y$; if in addition the sum of all the entries of $x$ is the same as that of $y$, we write $x \prec y$. For two nonnegative vectors $x$ and $y$ in $\mathbf{R}^n$, if the product of the $m$ largest entries of $x$ is not larger than that of $y$ for $m = 1, \dots, n$, we write $\ln x \prec_w \ln y$; if in addition the product of all the entries of $x$ is the same as that of $y$, we write $\ln x \prec \ln y$. For a complex vector $x = (x_1, \dots, x_n)$ we write $|x| = (|x_1|, \dots, |x_n|)$. We remark that our results are valid for real matrices as long as the statements do not involve complex numbers.

2 Main Theorems

We begin with the following lemma, which plays a crucial role in proving our main theorems.

Lemma 2.1 Let $A = A_1 \cdots A_k$, where $k \ge 2$ and $A_i \in M_n$ satisfies $\mathrm{rank}(A_i) \ge r$ for all $i = 1, \dots, k$. Suppose
$$\prod_{j=1}^r s_j(A) = \prod_{j=1}^r \prod_{i=1}^k s_j(A_i).$$

If $U_0$ and $U_k$ are unitary matrices such that $U_0^* A U_k = B \oplus C$ with $B \in M_r$ satisfying $|\det(B)| = \prod_{j=1}^r \prod_{i=1}^k s_j(A_i)$, then there exist unitary $U_1, \dots, U_{k-1}$ such that $U_{i-1}^* A_i U_i = B_i \oplus C_i$ with $B_i \in M_r$ and $|\det(B_i)| = \prod_{j=1}^r s_j(A_i)$ for $i = 1, \dots, k$.

Proof. We prove the result by induction on $k \ge 2$. Suppose $k = 2$. Let $U_0$ and $U_2$ be unitary so that $U_0^* A_1 A_2 U_2 = B \oplus C$ with $B \in M_r$ such that $|\det(B)| = \prod_{j=1}^r s_j(A_1)s_j(A_2)$. Let $U_1$ be unitary so that the last $n - r$ columns of $U_1$ are orthogonal to the first $r$ columns of $A_2 U_2$. Then
$$U_1^* A_2 U_2 = \begin{pmatrix} B_2 & X_2 \\ 0 & C_2 \end{pmatrix}$$
for some $B_2 \in M_r$. If
$$U_0^* A_1 U_1 = \begin{pmatrix} B_1 & X_1 \\ Y_1 & C_1 \end{pmatrix}$$
with $B_1 \in M_r$, then the leading $r \times r$ submatrix of $U_0^* A_1 A_2 U_2$ is just $B_1 B_2$. It is known [9] that $s_j(B_i) \le s_j(A_i)$ for $j = 1, \dots, r$ and $i = 1, 2$. Hence
$$\prod_{j=1}^r s_j(A_1)s_j(A_2) = |\det(B)| = |\det(B_1 B_2)| \le \prod_{j=1}^r s_j(B_1)s_j(B_2) \le \prod_{j=1}^r s_j(A_1)s_j(A_2).$$
It follows that $s_j(A_i) = s_j(B_i)$ for $j = 1, \dots, r$ and $i = 1, 2$. By Lemma 2.1 in [3], we conclude that $U_{i-1}^* A_i U_i = B_i \oplus C_i$ for $i = 1, 2$, as asserted.

Now, suppose the result is valid for products of $k - 1$ matrices with $k > 2$. Let $A = A_1 \cdots A_k$ and $\tilde A_2 = A_2 \cdots A_k$. Then $\prod_{j=1}^p s_j(\tilde A_2) \le \prod_{j=1}^p \prod_{i=2}^k s_j(A_i)$ for $p = 1, \dots, n$. Let $U_0$ and $U_k$ be unitary such that $U_0^* A_1 \tilde A_2 U_k = B \oplus C$ with $B \in M_r$ satisfying $|\det(B)| = \prod_{j=1}^r \prod_{i=1}^k s_j(A_i)$. By the case $k = 2$ applied to the product $A_1 \tilde A_2$, there exists $U_1$ such that $U_0^* A_1 U_1 = B_1 \oplus C_1$ and $U_1^* \tilde A_2 U_k = \tilde B_2 \oplus \tilde C_2$ with $|\det(B_1)| = \prod_{j=1}^r s_j(A_1)$, and hence $|\det(\tilde B_2)| = \prod_{j=1}^r s_j(A)/|\det(B_1)| = \prod_{j=1}^r \prod_{i=2}^k s_j(A_i)$. By the induction assumption applied to $U_1^* \tilde A_2 U_k$, there exist unitary $U_2, \dots, U_{k-1}$ such that $U_{i-1}^* A_i U_i = B_i \oplus C_i$ with $B_i \in M_r$ and $|\det(B_i)| = \prod_{j=1}^r s_j(A_i)$ for $i = 2, \dots, k$. $\Box$

Theorem 2.2 Let $1 \le r \le n$. Suppose $A_1, \dots, A_k \in M_n$, where $k > 1$, and $A = A_1 \cdots A_k$. Then
$$\prod_{j=1}^r s_j(A) = \prod_{j=1}^r \prod_{i=1}^k s_j(A_i) \tag{2.1}$$
if and only if one of the following is satisfied.

(a) $r = n$.

(b) One of the matrices $A_j$ has rank less than $r$, so that both sides of (2.1) equal zero.
(c) There exist unitary matrices $U_0, U_1, \dots, U_k$ such that $U_{i-1}^* A_i U_i = B_i \oplus C_i$, where $B_i \in M_r$ has singular values $s_1(A_i), \dots, s_r(A_i)$.

Consequently,
$$\prod_{j=1}^r |\lambda_j(A)| = \prod_{j=1}^r \prod_{i=1}^k s_j(A_i) \tag{2.2}$$
if and only if (a), (b), or (c) with $U_0 = U_k$ holds.

Proof. If (a) or (b) holds, then clearly we have (2.1). If (c) holds, then for $X = [I_r \; 0_{r,n-r}]$ we have $\prod_{j=1}^r \prod_{i=1}^k s_j(A_i) = |\det(X U_0^* A U_k X^t)| \le \prod_{j=1}^r s_j(A)$. By (1.1), we see that (2.1) holds. Now, suppose $A = A_1 \cdots A_k$ satisfies (2.1), and assume that neither (a) nor (b) holds. Let $U_0$ and $U_k$ be unitary such that $U_0^* A U_k = \mathrm{diag}(s_1(A), \dots, s_n(A))$. By Lemma 2.1, we get condition (c).

The proof of the last assertion is similar; the only difference is in the last part. Suppose (2.2) is true. Then $\prod_{j=1}^r |\lambda_j(A)| = \prod_{j=1}^r s_j(A)$. If neither (a) nor (b) holds, then by Theorem 2.2 in [3] there exists a unitary matrix $U_k$ such that $U_k^* A U_k = B \oplus C$, where $B \in M_r$ has singular values $s_1(A), \dots, s_r(A)$. By Lemma 2.1, we get condition (c). $\Box$

Consider matrices $A_1, \dots, A_k$ and $A = A_1 \cdots A_k$ in $M_n$. When $k = 2$, it was shown in [7, Lemma 3, Theorem 2] that
$$\sum_{j=1}^r s_j(A) = \sum_{j=1}^r \prod_{i=1}^2 s_j(A_i) \tag{2.3}$$
if and only if there exist unitary $U_0, U_1, U_2$ such that
$$U_{i-1}^* A_i U_i = \mathrm{diag}(s_1(A_i), \dots, s_r(A_i)) \oplus B_i \tag{2.4}$$
for $i = 1, 2$; and
$$\sum_{j=1}^r |d_j(A)| = \sum_{j=1}^r \prod_{i=1}^2 s_j(A_i) \tag{2.5}$$
if and only if $A = B \oplus C$ with $B \in M_r$, there exists a diagonal unitary matrix $D \in M_r$ such that $DB$ is positive semi-definite with eigenvalues $\prod_{i=1}^2 s_1(A_i) \ge \cdots \ge \prod_{i=1}^2 s_r(A_i)$, and (2.4) holds.

At the end of the paper, the author suggested (in Comment 2) that the same result holds for $k > 2$. However, this comment is not accurate. In fact, if $\mathrm{rank}(A_1) = p < r$, then neither (2.3) nor (2.5) imposes much of a constraint on $s_j(A_i)$ for $j > p$. To see an extreme example, let $A_1 = 0$. Then (2.3) and (2.5) hold, but nothing can be said about $A_2, \dots, A_k$. Such a problem does not arise when $k = 2$ because of the following

reason: if $A_1$ has rank $p < r$, one can first find unitary $X_0, X_1, X_2$ so that $X_{i-1}^* A_i X_i = \mathrm{diag}(s_1(A_i), \dots, s_p(A_i)) \oplus B_i$, where $B_1 = 0$. Then one can find unitary $Y_1$ and $Y_2$ so that $Y_1^* B_2 Y_2 = \mathrm{diag}(s_{p+1}(A_2), \dots, s_n(A_2))$. As a result, $U_0 = X_0$, $U_1 = X_1(I_p \oplus Y_1)$ and $U_2 = X_2(I_p \oplus Y_2)$ satisfy (2.4) for $i = 1, 2$.

To get around the problem mentioned above, we make use of the quantity
$$m = \begin{cases} r & \text{if } k = 2, \\ \min(\{r\} \cup \{\mathrm{rank}(A_i) : 1 \le i \le k\}) & \text{otherwise}, \end{cases} \tag{2.6}$$
in our results. In particular, it follows from Theorem 2.4 below that if $\mathrm{rank}(A_i) \ge r$ for all $i$, then (2.3) holds if and only if there exist unitary $U_0, \dots, U_k$ such that (2.4) holds for $i = 1, \dots, k$. Actually, (2.3) is just one of many conditions leading to (2.4), as shown in the theorem following the next definition.

Definition 2.3 Let $P = [0, \infty)$. For $r \ge 1$ let $F_r$ be the set of functions $f : P^r \to \mathbf{R}$ such that for any $x, y \in P^r$ with $\ln x \prec_w \ln y$ we have $f(x) \le f(y)$, with $f(x) = f(y)$ if and only if $x = Qy$ for a permutation matrix $Q$.

The set $F_r$ contains many different functions; see [6, Chapter 3]. Examples include the $m$th elementary symmetric function $E_m(x_1, \dots, x_r)$ with $1 \le m \le r$, and the $\ell_p$-norm on $\mathbf{R}^r$ with $1 \le p \le \infty$. In particular, the function $f(x_1, \dots, x_r) = x_1 + \cdots + x_r$ is in $F_r$. Thus it follows from (1.1) that conditions (a)--(c) in the following theorem are equivalent.

Theorem 2.4 Let $A_1, \dots, A_k \in M_n$ with $k > 1$, and let $A = A_1 \cdots A_k$. If $1 \le r \le n$, then the following conditions are equivalent.

(a) $f(s_1(A), \dots, s_r(A)) = f\big(\prod_{i=1}^k s_1(A_i), \dots, \prod_{i=1}^k s_r(A_i)\big)$ for all (or some) $f \in F_r$.

(b) $s_1(A) + \cdots + s_r(A) = \prod_{i=1}^k s_1(A_i) + \cdots + \prod_{i=1}^k s_r(A_i)$.

(c) $s_j(A) = \prod_{i=1}^k s_j(A_i)$ for $j = 1, \dots, r$.

(d) There exist unitary $U_0, \dots, U_k$ such that $U_{i-1}^* A_i U_i = \mathrm{diag}(s_1(A_i), \dots, s_m(A_i)) \oplus B_i$ for $i = 1, \dots, k$, where $m$ is defined as in (2.6).

Proof. By the remark before the theorem, conditions (a)--(c) are equivalent. The implication (d) $\Rightarrow$ (a) is clear. Now, suppose (c) holds. If $k = 2$, then condition (d) holds by the result in [7]. Suppose $k > 2$.
Then $s_j(A) = \prod_{i=1}^k s_j(A_i)$ for $j = 1, \dots, m$. We prove condition (d) by induction on $m$ as follows. If $m = 1$, the result follows from Theorem 2.2. Suppose the result is valid for $m - 1$ singular values. Since $s_1(A) = \prod_{i=1}^k s_1(A_i)$, by Theorem 2.2 there exist unitary $X_0, \dots, X_k$ such that $X_{i-1}^* A_i X_i = [s_1(A_i)] \oplus \tilde A_i$. By the induction assumption applied to $\tilde A_1, \dots, \tilde A_k$, there exist unitary $Y_0, \dots, Y_k$ such that $Y_{i-1}^* \tilde A_i Y_i = \mathrm{diag}(s_2(A_i), \dots, s_m(A_i)) \oplus B_i$ for $i = 1, \dots, k$. Setting $U_i = X_i([1] \oplus Y_i)$ for $i = 0, \dots, k$, we get condition (d). $\Box$
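The block structure in condition (d) is easy to illustrate numerically. The following sketch (Python with NumPy; the construction and names are ours, chosen for illustration, and are not part of the paper) builds $A_i = U_{i-1}(D_i \oplus C_i)U_i^*$ with prescribed leading diagonal blocks and verifies condition (c), $s_j(A) = \prod_{i=1}^k s_j(A_i)$ for $j = 1, \dots, r$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, r = 6, 3, 2

def random_unitary(n, rng):
    # QR factorization of a complex Gaussian matrix yields a unitary factor Q
    q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return q

U = [random_unitary(n, rng) for _ in range(k + 1)]
As, tops = [], []
for i in range(k):
    top = np.sort(rng.uniform(1.0, 2.0, size=r))[::-1]   # s_1(A_i) >= ... >= s_r(A_i) >= 1
    tail = rng.uniform(0.0, 0.5, size=n - r)             # block C_i: strictly smaller singular values
    # A_i = U_{i-1} (diag(top) (+) diag(tail)) U_i^*, as in condition (d)
    As.append(U[i] @ np.diag(np.concatenate([top, tail])) @ U[i + 1].conj().T)
    tops.append(top)

A = np.linalg.multi_dot(As)
s = np.linalg.svd(A, compute_uv=False)
# Condition (c): the r largest singular values of A are the entrywise products
assert np.allclose(s[:r], np.prod(tops, axis=0))
```

The intermediate unitaries $U_1, \dots, U_{k-1}$ cancel in the product, so $A = U_0 \big(\prod_i D_i \oplus \prod_i C_i\big) U_k^*$ and the equality case is attained by construction.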

Corollary 2.5 Suppose $A_1, \dots, A_k \in M_n$ with $k > 1$, and $1 \le r \le n$. Let $m$ be defined as in (2.6). The following conditions (a.i)--(a.iii) are equivalent, and conditions (b.i)--(b.iii) are equivalent.

(a.i) $\sum_{j=1}^r |d_j(A)| = \sum_{j=1}^r \prod_{i=1}^k s_j(A_i)$.

(a.ii) $A = B \oplus C$ with $B \in M_r$, and there exists a diagonal unitary matrix $D \in M_r$ such that $DB$ is positive semi-definite with eigenvalues $\prod_{i=1}^k s_1(A_i) \ge \cdots \ge \prod_{i=1}^k s_r(A_i)$.

(a.iii) There exist unitary matrices $U_0, \dots, U_k \in M_n$ such that $U_0 D = U_k = V \oplus I_{n-r}$ for some diagonal unitary matrix $D$, and $U_{i-1}^* A_i U_i = \mathrm{diag}(s_1(A_i), \dots, s_m(A_i)) \oplus C_i$ for $i = 1, \dots, k$.

(b.i) $\big|\sum_{j=1}^r d_j(A)\big| = \sum_{j=1}^r \prod_{i=1}^k s_j(A_i)$.

(b.ii) $A = B \oplus C$, where $B \in M_r$ is a unit multiple of a positive semi-definite matrix with eigenvalues $\prod_{i=1}^k s_1(A_i) \ge \cdots \ge \prod_{i=1}^k s_r(A_i)$.

(b.iii) There exist unitary matrices $U_0, \dots, U_k \in M_n$ such that $U_0 = e^{\sqrt{-1}\,t} U_k$ and $U_k = V \oplus I_{n-r}$ with $t \in \mathbf{R}$, and $U_{i-1}^* A_i U_i = \mathrm{diag}(s_1(A_i), \dots, s_m(A_i)) \oplus C_i$ for $i = 1, \dots, k$.

Proof. The implication (a.i) $\Rightarrow$ (a.ii) follows from the fact that (a.i) holds if and only if the last two inequalities in (1.3) become equalities, together with Theorem 3.1 in [3]. The implication (a.ii) $\Rightarrow$ (a.iii) follows from Theorem 2.2. The implication (a.iii) $\Rightarrow$ (a.i) is clear. The proof of the equivalence of (b.i)--(b.iii) is similar. $\Box$

We now turn to the inequalities in (1.2). Clearly, the first inequality becomes an equality if and only if the $\lambda_j(A)$, $j = 1, \dots, r$, all have the same argument. The equality case of the second inequality can be treated in the same way as the equality case of $\sum_{j=1}^r |d_j(A)| = \sum_{j=1}^r \prod_{i=1}^k s_j(A_i)$ once we put $A$ in triangular form by a suitable unitary similarity.

Corollary 2.6 Suppose $A_1, \dots, A_k \in M_n$ with $k > 1$, and $1 \le r \le n$. Let $m$ be defined as in (2.6). The following conditions are equivalent.

(a) $f(|\lambda_1(A)|, \dots, |\lambda_r(A)|) = f\big(\prod_{i=1}^k s_1(A_i), \dots, \prod_{i=1}^k s_r(A_i)\big)$ for all (or some) $f \in F_r$.

(b) $|\lambda_1(A)| + \cdots + |\lambda_r(A)| = \prod_{i=1}^k s_1(A_i) + \cdots + \prod_{i=1}^k s_r(A_i)$.

(c) $|\lambda_j(A)| = \prod_{i=1}^k s_j(A_i)$ for $j = 1, \dots, r$.

(d) There exist unitary $U_0, \dots, U_k$ such that $U_k^* U_0$ is a diagonal matrix and $U_{i-1}^* A_i U_i = \mathrm{diag}(s_1(A_i), \dots, s_m(A_i)) \oplus B_i$ for $i = 1, \dots, k$.

Corollary 2.7 Suppose $A_1, \dots, A_k \in M_n$ with $k > 1$, and $1 \le r \le n$. Let $m$ be defined as in (2.6). Then
$$\prod_{j=1}^r \lambda_j(A) = \prod_{j=1}^r \prod_{i=1}^k s_j(A_i) \tag{2.7}$$
if and only if there exist unitary $U_0, \dots, U_k$ such that $U_k^* U_0$ is a scalar matrix and $U_{i-1}^* A_i U_i = \mathrm{diag}(s_1(A_i), \dots, s_m(A_i)) \oplus B_i$ for $i = 1, \dots, k$.

Define the generalized spectral radius by
$$\rho(A_1, \dots, A_k) = \max\Big\{ \Big|\mathrm{tr}\Big(\prod_{j=1}^k U_j^* A_j U_j\Big)\Big| : U_j \text{ is unitary and } U_j^* A_j U_j \text{ is triangular for } j = 1, \dots, k \Big\}.$$
The quantity $\rho(C, A)$ is known as the $C$-spectral radius of $A$ in the literature (see [4] and [5]) and has been studied as a generalization of the spectral radius of $A$ (which corresponds to $C = \mathrm{diag}(1, 0, \dots, 0)$). It is known that $\rho(C, A) \le \sum_{j=1}^n s_j(C)s_j(A)$, and the equality case has been determined. Here we study the equality case for the inequality
$$\rho(A_1, \dots, A_k) \le \sum_{j=1}^n \prod_{i=1}^k s_j(A_i).$$

Proposition 2.8 Let $A_1, \dots, A_k \in M_n$ be such that $\min\{\mathrm{rank}(A_i) : 1 \le i \le k\} = r$. Then
$$\rho(A_1, \dots, A_k) = \sum_{j=1}^n \prod_{i=1}^k s_j(A_i) \tag{2.8}$$
if and only if each $A_i$ is unitarily similar to $\mathrm{diag}(\lambda_1(A_i), \dots, \lambda_r(A_i)) \oplus B_i$ with $B_i \in M_{n-r}$, so that $\prod_{i=1}^k \lambda_j(A_i) = e^{\sqrt{-1}\,t} \prod_{i=1}^k s_j(A_i)$ for all $j = 1, \dots, r$ and some $t \in \mathbf{R}$ independent of $j$.

Proof. The ($\Leftarrow$) part is clear. Suppose (2.8) holds. Let $U_i$ be unitary such that $U_i^* A_i U_i$ is in upper triangular form satisfying
$$\Big|\mathrm{tr}\Big(\prod_{i=1}^k U_i^* A_i U_i\Big)\Big| = \rho(A_1, \dots, A_k) = \sum_{j=1}^n \prod_{i=1}^k s_j(A_i).$$

Let $A = \prod_{j=1}^k U_j^* A_j U_j$. Since
$$|\mathrm{tr}\, A| \le \sum_{j=1}^n |\lambda_j(A)| \le \sum_{j=1}^n s_j(A) \le \sum_{j=1}^n \prod_{i=1}^k s_j(A_i),$$
we see that all the inequalities become equalities. It follows that the first $r$ diagonal entries of $A$ must equal $e^{\sqrt{-1}\,t} \prod_{i=1}^k s_1(A_i), \dots, e^{\sqrt{-1}\,t} \prod_{i=1}^k s_r(A_i)$ for some $t \in \mathbf{R}$. Hence, the first $r$ diagonal entries of each $U_j^* A_j U_j$ must be $\lambda_1(A_j), \dots, \lambda_r(A_j)$ with $|\lambda_j(A_i)| = s_j(A_i)$ for $j = 1, \dots, r$. As a result, $U_j^* A_j U_j = D_j \oplus B_j$ for some diagonal matrix $D_j \in M_r$. Applying a suitable permutation similarity to all the $D_j$ yields the conclusion. $\Box$

3 Powers of a Single Matrix

The results in Section 2 can be used to study normality of matrices. We begin with the following theorem.

Theorem 3.1 Let $A \in M_n$. The following conditions are equivalent.

(a) $A$ is normal.

(b) There exists $m > 1$ such that $s_j(A^m) = s_j(A)^m$ for $j = 1, \dots, n$.

(c) There exists $m > 1$ such that $f(s_1(A^m), \dots, s_n(A^m)) = f(s_1(A)^m, \dots, s_n(A)^m)$ for all (or some) $f \in F_n$.

(d) There exists $m > 1$ such that $s_1(A^m) + \cdots + s_n(A^m) = s_1(A)^m + \cdots + s_n(A)^m$.

(e) There exists $m \ge 1$ such that $|\lambda_j(A^m)| = s_j(A)^m$ for $j = 1, \dots, n$.

(f) There exists $m \ge 1$ such that $f(|\lambda_1(A^m)|, \dots, |\lambda_n(A^m)|) = f(s_1(A)^m, \dots, s_n(A)^m)$ for all (or some) $f \in F_n$.

(g) There exists $m \ge 1$ such that $|\lambda_1(A^m)| + \cdots + |\lambda_n(A^m)| = s_1(A)^m + \cdots + s_n(A)^m$.

(h) There exist $m > 1$ and a unitary $U$ such that $\sum_{j=1}^n |d_j(U^* A^m U)| = \sum_{j=1}^n s_j(A)^m$.

(i) There exist $m > 1$ and a unitary $U$ such that $|\mathrm{tr}(A^m U)| = \sum_{j=1}^n s_j(A)^m$.

Proof. If (a) holds, then all the other conditions hold. By Theorem 2.4, Corollary 2.5, and Corollary 2.6, if any of (b)--(i) holds, then $s_j(A^2) = s_j(A)^2$ for all $j = 1, \dots, n$. As a result, $\mathrm{tr}\big(A^2 (A^*)^2\big) = \mathrm{tr}\big((AA^*)^2\big)$, and hence $\mathrm{tr}\big((AA^* - A^*A)(AA^* - A^*A)^*\big) = 0$. Thus $A$ is normal. $\Box$

Corollary 3.2 Let $A \in M_n$ and $m \ge 1$. Then $|\mathrm{tr}\, A^m| = \sum_{j=1}^n s_j(A)^m$ if and only if $A$ is normal and the unitary part of $A$ in its polar decomposition $A = PU$ satisfies $U^m = e^{\sqrt{-1}\,t} I$ for some $t \in \mathbf{R}$.

Proof. The sufficiency part is clear. For the necessity part, note that
$$|\mathrm{tr}\, A^m| = \Big|\sum_{j=1}^n \lambda_j(A^m)\Big| \le \sum_{j=1}^n |\lambda_j(A^m)| \le \sum_{j=1}^n s_j(A)^m.$$
Since the inequalities become equalities, we see that $A$ is normal by Theorem 3.1, and the $\lambda_j(A)^m$ have the same argument for all $j$. Thus $U^m$ is a scalar matrix. $\Box$

Sometimes, we can use equality cases involving part of the singular values of $A^m$ to derive reducibility for $A$.

Corollary 3.3 Suppose $A \in M_n$. The following conditions are equivalent.

(a) $A$ is unitarily similar to $B \oplus C$, where $B \in M_r$ is normal with singular values $s_j(A)$ for $j = 1, \dots, r$.

(b) There exists $m \ge 1$ such that $|\lambda_j(A^m)| = s_j(A)^m$ for $j = 1, \dots, r$.

(c) There exists $m \ge 1$ such that $|\lambda_1(A^m)| + \cdots + |\lambda_r(A^m)| = s_1(A)^m + \cdots + s_r(A)^m$.

(d) There exists $m \ge 1$ such that $f(|\lambda_1(A^m)|, \dots, |\lambda_r(A^m)|) = f(s_1(A)^m, \dots, s_r(A)^m)$ for all (or some) $f \in F_r$.

Consequently, $\prod_{j=1}^r \lambda_j(A^m) = \prod_{j=1}^r s_j(A)^m$ if and only if (a) holds and the unitary part of the polar decomposition $B = PU$ satisfies $U^m = e^{\sqrt{-1}\,t} I_r$.

Corollary 3.4 Let $A \in M_n$ and $m > 1$.

(i) We have $\sum_{j=1}^r |d_j(A^m)| = \sum_{j=1}^r s_j(A)^m$ if and only if $A = B \oplus C$, where $B \in M_r$ is normal with singular values $s_1(A), \dots, s_r(A)$.

(ii) We have $\big|\sum_{j=1}^r d_j(A^m)\big| = \sum_{j=1}^r s_j(A)^m$ if and only if $A = B \oplus C$, where $B \in M_r$ is normal with singular values $s_1(A), \dots, s_r(A)$ and the unitary part of the polar decomposition $B = PU$ satisfies $U^m = e^{\sqrt{-1}\,t} I_r$.

Corollary 3.5 Let $A \in M_n$ have rank at least $r$. The following conditions are equivalent.

(a) $A$ is unitarily similar to $B \oplus C$ with $B \in M_r$ satisfying $|\det(B)| = \prod_{j=1}^r s_j(A)$.

(b) There exists $m \ge 1$ such that $\prod_{j=1}^r |\lambda_j(A^m)| = \prod_{j=1}^r s_j(A)^m$.

(c) $A$ is unitarily similar to a matrix with diagonal entries $d_1, \dots, d_n$ such that $\prod_{j=1}^r |d_j| = \prod_{j=1}^r s_j(A)$.
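Condition (b) of Theorem 3.1 is easy to probe numerically. A short sketch (Python with NumPy; the matrix choices and the helper name are ours, for illustration only) checks that $s_j(A^m) = s_j(A)^m$ holds for a normal matrix and fails for a generic non-normal one:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 3
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# A normal matrix: unitarily diagonalizable, with complex eigenvalues
Q, _ = np.linalg.qr(X)
N = Q @ np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n)) @ Q.conj().T

def power_gap(A, m):
    # max_j | s_j(A^m) - s_j(A)^m |, the deviation in Theorem 3.1 (b)
    sm = np.linalg.svd(np.linalg.matrix_power(A, m), compute_uv=False)
    s = np.linalg.svd(A, compute_uv=False)
    return np.max(np.abs(sm - s ** m))

assert power_gap(N, m) < 1e-8   # normal: equality for every power m
assert power_gap(X, m) > 1e-8   # generic non-normal: strict inequality
```

For a normal matrix the singular values of $A^m$ are the $m$th powers $|\lambda_j(A)|^m = s_j(A)^m$, so the gap vanishes up to rounding; for a generic matrix the gap is bounded away from zero, in line with the equivalence (a) $\Leftrightarrow$ (b).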

Apart from the above corollaries, for a given $r < n$, even if there exists $m > 1$ such that $s_j(A^m) = s_j(A)^m$ for all $j = 1, \dots, r$, we may not be able to deduce reducibility for $A$. For example, let $A \in M_n$ be the upper triangular nilpotent Jordan block. If $r \le n - 2$, then $s_j(A^2) = s_j(A)^2$ for all $j = 1, \dots, r$. Clearly, $A$ has no reducing subspace. Nonetheless, one can use the argument in [8] (see also [2, pp. 44-45]) to get the following.

Theorem 3.6 Let $A \in M_n$, and let $p$ be the degree of the minimal polynomial of $A$. The following conditions are equivalent.

(a) $A$ is normal.

(b) There exists an integer $m \ge p$ such that $s_j(A^m) = s_j(A)^m$ for all $j = 1, \dots, n$.

(c) There exists an integer $m \ge p$ such that $s_1(A^m) + \cdots + s_n(A^m) = s_1(A)^m + \cdots + s_n(A)^m$.

(d) There exists an integer $m \ge p$ such that for all (or some) $f \in F_n$, $f(s_1(A^m), \dots, s_n(A^m)) = f(s_1(A)^m, \dots, s_n(A)^m)$.

4 Words involving $A$ and $A^*$

In [10], the authors used trace equalities for matrices of the form $A_1 \cdots A_k$ with $A_i \in \{A, A^*\}$ to give sufficient conditions for the normality of $A$. Such a product is denoted by $W(A, A^*)$ and referred to as a word with letters $A$ and $A^*$. If $W(A, A^*) = A^m$ or $(A^*)^m$, then we are back to the study in Section 3. We also exclude the words $W(A, A^*) = (AA^*)^m$ and $(A^*A)^m$, which reduce to the problem of studying $\mathrm{tr}(X^m)$ with $X = AA^*$ or $A^*A$. In the following, we show that one can get necessary and sufficient conditions for the trace equalities considered in [10], and obtain other equivalent conditions for normality in terms of other trace equalities.

Theorem 4.1 Let $A \in M_n$, and let $W(A, A^*)$ be a word of length $m > 1$ not equal to $A^m$, $(A^*)^m$, $(AA^*)^{m/2}$ or $(A^*A)^{m/2}$. Then the following conditions (a.i)--(a.iii) are equivalent, and conditions (b.i)--(b.iii) are equivalent.

(a.i) $f\big(s_1(W(A, A^*)), \dots, s_n(W(A, A^*))\big) = f\big(s_1(A)^m, \dots, s_n(A)^m\big)$ for all (or some) $f \in F_n$.

(a.ii) $s_j(W(A, A^*)) = s_j(A)^m$ for all $j = 1, \dots, n$.
(a.iii) $A$ is normal, or $W(A, A^*)$ is of the form $A(A^*A)^{(m-1)/2}$ or $A^*(AA^*)^{(m-1)/2}$.

(b.i) $f\big(|\lambda_1(W(A, A^*))|, \dots, |\lambda_n(W(A, A^*))|\big) = f\big(s_1(A)^m, \dots, s_n(A)^m\big)$ for all (or some) $f \in F_n$.

(b.ii) $|\lambda_j(W(A, A^*))| = s_j(A)^m$ for all $j = 1, \dots, n$.

(b.iii) $A$ is normal.

Proof. (a.i) $\Leftrightarrow$ (a.ii) follows from the definition of $F_n$ and the fact that
$$\ln\big(s_1(W(A, A^*)), \dots, s_n(W(A, A^*))\big) \prec_w \ln\big(s_1(A)^m, \dots, s_n(A)^m\big).$$

(a.ii) $\Rightarrow$ (a.iii): Suppose (a.ii) holds. If there are two consecutive letters $A$ in $W(A, A^*)$, then by Theorem 2.4 (d) there exist unitary matrices $X, Y, Z$ such that $X^* A Y = Y^* A Z = \mathrm{diag}(s_1(A), \dots, s_n(A))$. Thus $A^2$ has singular values $s_1(A)^2, \dots, s_n(A)^2$, and hence $A$ is normal by Theorem 3.1. Similarly, if there are two consecutive letters $A^*$ in $W(A, A^*)$, then we are done. If neither of the two cases holds, then $W(A, A^*)$ must be of the form $(AA^*)^s A$ or $(A^*A)^s A^*$ for some $s$.

(a.iii) $\Rightarrow$ (a.i) is clear.

The proof of (b.iii) $\Rightarrow$ (b.i) $\Rightarrow$ (b.ii) is similar. Suppose (b.ii) holds. Then (a.ii) holds, and hence (a.iii) follows. Since $XY$ and $YX$ have the same eigenvalues, if $W(A, A^*)$ has the form $A(A^*A)^{(m-1)/2}$, we see that $\lambda_j\big((A^*A)^{(m-1)/2} A\big) = \lambda_j(W(A, A^*))$, so that $|\lambda_j\big((A^*A)^{(m-1)/2} A\big)| = s_j(A)^m$ for all $j$. Now, the last two letters of the word $(A^*A)^{(m-1)/2} A$ are both equal to $A$, and we conclude that $A$ is normal. Similarly, we can show that $A$ is normal if $W(A, A^*) = A^*(AA^*)^{(m-1)/2}$. $\Box$

The following corollary follows easily.

Corollary 4.2 Let $A \in M_n$, and let $W(A, A^*)$ be a word of length $m > 1$ not equal to $A^m$, $(A^*)^m$, $(AA^*)^{m/2}$ or $(A^*A)^{m/2}$. Suppose $p$ of the letters in $W(A, A^*)$ equal $A$, and let $q = m - p$. Then $|\mathrm{tr}\, W(A, A^*)| = \sum_{j=1}^n s_j(A)^m$ if and only if $A$ is unitarily similar to the diagonal matrix $DB$, where $D$ is a diagonal unitary matrix and $B = \mathrm{diag}(s_1(A), \dots, s_n(A))$, satisfying $D^{p-q} = \mu I$ for some unit scalar $\mu$; in particular,

(i) $A$ is simply normal, with no additional condition, if $p = q$,

(ii) $A$ is a multiple of a positive semi-definite matrix if $|p - q| = 1$,

(iii) $A$ is a multiple of a Hermitian matrix if $|p - q| = 2$.

In [10], the authors proved that $A$ is normal if
$$|\mathrm{tr}\, W(A, A^*)| = \sum_{j=1}^n s_j(A)^m. \tag{4.1}$$
By Corollary 4.2, we see that one can get more precise information about $A$ if (4.1) holds. Furthermore, we have the following corollary.

Corollary 4.3 Let $A \in M_n$, and let $W(A, A^*)$ be a word with $2m$ letters such that $m$ of the letters equal $A$, but $W(A, A^*) \ne (AA^*)^m, (A^*A)^m$. Then $A$ is normal if and only if one or both of the following equalities holds:
$$W(A, A^*) = A^m (A^*)^m, \qquad W(A, A^*) = (A^*)^m A^m.$$

Remark 4.4 In general, one cannot get a reducibility condition on $A$ if the equality involves only part of the singular values of $W(A, A^*)$. For example, let $A \in M_n$ be the upper triangular nilpotent Jordan block, and let $W(A, A^*) = AAA^*A^*$. Then
$$\prod_{j=1}^k \lambda_j(W(A, A^*)) = \prod_{j=1}^k s_j(W(A, A^*)) = \prod_{j=1}^k s_j(A)^4 \quad \text{for } k = 1, \dots, n-2,$$
but $A$ has no reducing subspace.

References

[1] A. Horn, On the singular values of a product of completely continuous operators, Proc. Nat. Acad. Sci. USA 36 (1950), 374-375.

[2] R.A. Horn and C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, 1991.

[3] C.K. Li, Matrices with some extremal properties, Linear Algebra Appl. 101 (1988), 255-267.

[4] C.K. Li, C-numerical ranges and C-numerical radii, Linear and Multilinear Algebra 37 (1994), 51-82.

[5] C.K. Li, T.Y. Tam and N.K. Tsing, The generalized spectral radius, numerical radius and spectral norm, Linear and Multilinear Algebra 16 (1984), 215-237.

[6] A.W. Marshall and I. Olkin, Inequalities: Theory of Majorization and Its Applications, Academic Press, 1979.

[7] H.F. Miranda, Singular values, diagonal elements, and extreme matrices, Linear Algebra Appl. 305 (2000), 151-159.

[8] V. Pták, Isometric parts of an operator and the critical exponent, Časopis Pěst. Mat. 101 (1976), 383-388.

[9] R.C. Thompson, Principal submatrices IX: Interlacing inequalities for singular values of submatrices, Linear Algebra Appl. 5 (1972), 1-12.

[10] B.-Y. Wang and F. Zhang, Words and normality of matrices, Linear and Multilinear Algebra 40 (1995), 111-118.