Positive definite preserving linear transformations on symmetric matrix spaces


arXiv:1008.1347v1 [math.RA] 7 Aug 2010

Huynh Dinh Tuan, Tran Thi Nha Trang, Doan The Hieu
Hue Geometry Group, College of Education, Hue University
34 Le Loi, Hue, Vietnam
dthehieu@yahoo.com

August 10, 2010

Abstract. Based on some simple facts about the Hadamard product, characterizations of the positive definite preserving linear transformations on real symmetric matrix spaces are given under an additional assumption: rank T(E_ii) = 1, i = 1, 2, ..., n, or T(A) > 0 ⟹ A > 0.

AMS Subject Classification (2000): Primary 15A86; Secondary 15A18, 15A04.

Keywords: Linear preserver problems, Hadamard product, Symmetric matrix, Positive definite.

(The authors are supported in part by a Nafosted Grant.)

1 Introduction

In recent years, one of the active topics in matrix theory has been the study of linear preserver problems (LPPs). These problems involve linear transformations on matrix spaces that leave certain functions, subsets, relations, etc. invariant. Many LPPs have been treated, while a lot of other ones are still open. For more details about LPPs (the history, the results, and open problems) we refer the reader to [2], [4], [5], [8], and the references therein.

On real symmetric or complex Hermitian matrices, one can consider the LPP of inertia. We say A ∈ G(r,s,t) if r, s, t are the numbers of positive, negative, and zero eigenvalues of

a matrix A, respectively. The LPP of inertia asks to characterize the linear transformations T preserving G(r,s,t), i.e. T(G(r,s,t)) ⊂ G(r,s,t). It is well known (see [8]) that a linear preserver of G(r,s,t) on Hermitian matrices is of the form A ↦ WAW* or A ↦ WA^tW* for some invertible W, except in the cases rs = 0 or r = s. The preservers of G(n,0,0), the class of positive definite matrices, on complex Hermitian or real symmetric matrices are not known, although one can easily find many non-standard ones, for example (on complex Hermitian matrices) A ↦ Σ_{i=1}^r W_i A W_i*, where W_i, i = 1, 2, ..., r, are invertible matrices. In general, these cannot be reduced to a single congruence, while there are some preservers that cannot be written as a sum of congruences (see [5, Section 5]).

In this paper, we consider the LPP of G(n,0,0) on real symmetric matrices. For simplicity, we sometimes write A > 0 (A ≥ 0) instead of "A is positive definite (positive semi-definite)" and denote by S_n(R) the space of real symmetric matrices of size n. Theorem 11 characterizes the linear transformations preserving G(n,0,0) and satisfying rank T(E_ii) = 1, i = 1, 2, ..., n. They are of the form A ↦ W(H ∘ A)W^t, where W is an invertible matrix, H is a positive semi-definite matrix with diag H = diag I_n, and the symbol ∘ stands for the Hadamard product. As far as we know, this is the first time the Hadamard product appears in a linear preserver problem.

If a preserver T on real symmetric matrices is of the form A ↦ WAW^t, where W is invertible, then obviously T(A) > 0 implies A > 0. Theorem 12 shows that this condition is also sufficient for T to be of that form. Indeed, Theorem 12 characterizes the preservers of G(n,0,0) satisfying the additional condition T(A) > 0 ⟹ A > 0. It is proved that this condition implies rank T(E_ii) = 1, i = 1, 2, ..., n, and hence T has the form given in Theorem 11.
But in this case the matrix H is of rank 1, and therefore we can express T in the standard form A ↦ WAW^t. The proof of the theorem also shows that the condition T(A) > 0 ⟹ A > 0 is equivalent to each of: det T(A) ≠ 0 ⟹ det A ≠ 0, i.e. T preserves singularity; T(𝒜) = 𝒜; and T preserving the set of all singular, positive semi-definite symmetric matrices. Here 𝒜 is the set of all positive definite symmetric matrices.

2 Some useful lemmas

It is well known that a symmetric matrix A is diagonalizable and is associated with the quadratic form q(x) = ⟨Ax, x⟩, x = (x_1, x_2, ..., x_n) ∈ R^n. By a fundamental theorem

of linear algebra, there exists an orthonormal basis {e_1, e_2, ..., e_n} such that q can be brought to the diagonal form q(x_1, x_2, ..., x_n) = Σ_{i=1}^n λ_i x_i^2. In terms of matrices we can state:

Lemma 1. For any A ∈ S_n(R), there exists an orthogonal matrix W such that WAW^t is a diagonal matrix.

If we set x_i = √|λ_i| e_i, then we have two similar lemmas.

Lemma 2. For any A ∈ S_n(R) of rank r, there exist linearly independent (pairwise orthogonal) vectors x_1, x_2, ..., x_r such that A = Σ_{i=1}^r k_i x_i x_i^t, k_i ∈ {−1, 1}. Moreover, if A is positive semi-definite, then A = Σ_{i=1}^r x_i x_i^t.

Lemma 3. For any positive semi-definite matrix A ∈ S_n(R) of rank r, there exists an invertible matrix W such that A = W(Σ_{i=1}^r E_ii)W^t, where E_ii = e_i e_i^t. Moreover, if A is invertible, then A = WW^t.

Below are some more lemmas we need for the rest of the paper.

Lemma 4. Let A = Σ_{i=1}^r x_i x_i^t be a positive semi-definite matrix of rank r in S_n(R). If every entry on its diagonal is non-zero, then there exists a vector x = (x_1, x_2, ..., x_n), x_i ≠ 0, i = 1, 2, ..., n, such that A = xx^t + B, where B is a positive semi-definite matrix of rank r − 1.

Proof. Suppose that x_i = (x_{i1}, x_{i2}, ..., x_{in}). Let u_1 = α_1 x_1 + β_1 x_2 = (u_{11}, u_{12}, ..., u_{1n}) and v_1 = β_1 x_1 − α_1 x_2, where α_1 and β_1 are chosen so that α_1, β_1 > 0, α_1^2 + β_1^2 = 1, and u_{1k} ≠ 0 whenever x_{1k} ≠ 0 or x_{2k} ≠ 0. Obviously x_1 x_1^t + x_2 x_2^t = u_1 u_1^t + v_1 v_1^t. Choosing u_2 = α_2 u_1 + β_2 x_3 in the same way, and so on, we get A = u_{r−1} u_{r−1}^t + Σ_{i=1}^{r−1} v_i v_i^t. Let x = u_{r−1} and B = Σ_{i=1}^{r−1} v_i v_i^t; the lemma is proved.

Lemma 5. Let A = Σ_{i=1}^r k_i x_i x_i^t be a matrix of rank r in S_n(R), k_i ∈ {−1, 1}, i = 1, 2, ..., r, with x_1, ..., x_r linearly independent. If x ∉ ⟨x_1, ..., x_r⟩, where ⟨x_1, ..., x_r⟩ is the linear subspace generated by x_1, ..., x_r, then rank(A + xx^t) > rank A.

Proof. Since x_1, ..., x_r, x are linearly independent, (A + xx^t)u = 0 forces u to be orthogonal to each of x_1, ..., x_r and to x. This means that rank(A + xx^t) ≥ r + 1 > r = rank A.

Lemma 6. Let A_1, A_2, ..., A_n be non-empty finite subsets of R^n, A_i ≠ {0}, i = 1, 2, ..., n. If for every i ∈ {1, 2, ..., n}, A_i ⊄ ⟨∪_{j≠i} A_j⟩, then rank A_1 = rank A_2 = ... = rank A_n = 1. (Here rank A_i stands for the dimension of the subspace ⟨A_i⟩ spanned by A_i.)
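The decompositions above are easy to illustrate numerically. The following sketch (a hypothetical illustration, not part of the original paper; the variable names are ours) rebuilds a symmetric matrix from its spectral decomposition as in Lemma 2, and checks the rank inequality of Lemma 5 for a vector outside the column span.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric matrix of size 4 and rank 3 (M has full column rank
# almost surely).
M = rng.standard_normal((4, 3))
A = M @ np.diag([1.0, -2.0, 3.0]) @ M.T

# Lemma 2: the spectral decomposition A = sum_i k_i x_i x_i^t with
# x_i = sqrt(|lambda_i|) e_i and k_i = sign(lambda_i).
lam, E = np.linalg.eigh(A)
rebuilt = np.zeros_like(A)
for l, e in zip(lam, E.T):
    if abs(l) > 1e-10:                   # skip the (numerically) zero eigenvalue
        x_i = np.sqrt(abs(l)) * e
        rebuilt += np.sign(l) * np.outer(x_i, x_i)
assert np.allclose(rebuilt, A)

# Lemma 5: adding xx^t for a vector x outside the span of the x_i
# strictly increases the rank.
x = np.linalg.svd(A)[0][:, -1]           # spans the null direction of A
assert np.linalg.matrix_rank(A + np.outer(x, x)) > np.linalg.matrix_rank(A)
```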

Proof. Suppose v_1, v_2 ∈ A_i are linearly independent. Then we can take a basis B of ⟨∪_j A_j⟩ contained in ∪_j A_j and containing v_1, v_2. Since card(B) ≤ n and two of its elements already lie in A_i, there exists an index k ≠ i such that no element of A_k belongs to B, i.e. A_k ⊂ ⟨B⟩ ⊂ ⟨∪_{j≠k} A_j⟩, a contradiction.

Lemma 7. A linear transformation (on symmetric matrices) preserving positive definiteness preserves positive semi-definiteness.

Proof. T is continuous (being linear on a finite-dimensional space) and the topological closure of 𝒜 is ℬ, where ℬ is the set of all positive semi-definite symmetric matrices; hence T(ℬ) = T(cl 𝒜) ⊂ cl T(𝒜) ⊂ cl 𝒜 = ℬ.

3 The Hadamard product

The Hadamard product is a simple matrix product, sometimes called the entrywise product. This product is much less widely understood, although it has nice properties and some applications in statistics and physics (see [2], [3]). For (m × n)-matrices A = (a_ij) and B = (b_ij), the Hadamard product of A and B is another (m × n)-matrix, denoted by A ∘ B and defined by A ∘ B = (a_ij b_ij).

It is easy to see that the Hadamard product is linear, commutative, and has a nice relationship with diagonalizable matrices. Let A = (a_ij) be a diagonalizable matrix of size n and λ_1, λ_2, ..., λ_n be its eigenvalues. There exists an invertible matrix W such that A = WDW^{-1}, where D = (d_ij) is the diagonal matrix whose diagonal entries are the λ_i, i.e. d_ii = λ_i, i = 1, 2, ..., n. We can verify the following:

(a_11, a_22, ..., a_nn)^t = (W ∘ (W^{-1})^t)(λ_1, λ_2, ..., λ_n)^t.

Theorem 8 (The Schur product theorem). If A, B are positive semi-definite, then so is A ∘ B.

Moreover, we have:

Proposition 9. If A ∈ S_n(R) is positive definite, B ∈ S_n(R) is positive semi-definite, and all entries on the diagonal of B are non-zero, then A ∘ B is positive definite.

Proof. By Lemma 4, B = xx^t + C, where x = (x_1, x_2, ..., x_n), x_i ≠ 0, i = 1, 2, ..., n, and C ≥ 0. Suppose A = Σ_{i=1}^n x_i x_i^t, where x_1, x_2, ..., x_n are linearly independent. Because

Σ_i α_i (x_i ∘ x) = 0 ⟺ (Σ_i α_i x_i) ∘ x = 0 ⟺ Σ_i α_i x_i = 0,

the vectors x_1 ∘ x, x_2 ∘ x, ..., x_n ∘ x are linearly independent. Thus,

A ∘ B = A ∘ (xx^t) + A ∘ C ≥ A ∘ (xx^t) = Σ_{i=1}^n (x_i x_i^t) ∘ (xx^t) = Σ_{i=1}^n (x_i ∘ x)(x_i ∘ x)^t > 0.

The following proposition is useful for the proofs of the main theorems.

Proposition 10. Let x = (x_1, x_2, ..., x_n), y = (y_1, y_2, ..., y_n) ∈ R^n, where x_i ≠ 0, i = 1, 2, ..., n. If x, y are linearly independent, then there exists a singular positive semi-definite matrix A of rank n − 1 such that A ∘ (xx^t + yy^t) is positive definite.

Proof. Without loss of generality, we can suppose that y_1/x_1 ≠ y_n/x_n. Let e_1, e_2, ..., e_n be the standard basis of R^n and A = Σ_{i=1}^{n−1} z_i z_i^t, where z_1 = e_1 + e_n and z_i = e_i, i = 2, ..., n − 1. We can see that A is a positive semi-definite matrix of rank n − 1 and that {z_1 ∘ y, z_1 ∘ x, z_2 ∘ x, ..., z_{n−1} ∘ x} is a basis of R^n. Arguing as in the proof of Proposition 9, the claim follows because

A ∘ (xx^t + yy^t) = (Σ_{i=1}^{n−1} z_i z_i^t) ∘ (xx^t + yy^t) ≥ (Σ_{i=1}^{n−1} z_i z_i^t) ∘ (xx^t) + (z_1 z_1^t) ∘ (yy^t) = Σ_{i=1}^{n−1} (z_i ∘ x)(z_i ∘ x)^t + (z_1 ∘ y)(z_1 ∘ y)^t > 0.

4 The main theorems

Theorem 11. Let T : S_n(R) → S_n(R) be a linear transformation preserving positive definiteness with rank T(E_ii) = 1, i = 1, 2, ..., n. Then there exist an invertible matrix W and a positive semi-definite matrix H, diag H = diag I_n, such that for every A ∈ S_n(R),

T(A) = W(H ∘ A)W^t.    (1)

Proof. Since T preserves positive definiteness, T(I_n) is a positive definite matrix. Then there exists an invertible matrix W_1 such that T(I_n) = W_1 W_1^t. We can verify that the linear operator T_1(A) = W_1^{-1} T(A) (W_1^t)^{-1} also preserves positive definiteness, rank T_1(E_ii) =

1, i = 1, 2, ..., n, and moreover T_1(I_n) = I_n. Suppose T_1(E_ii) = u_i u_i^t, i = 1, ..., n. Since I_n = T_1(I_n) = Σ_{i=1}^n T_1(E_ii) = Σ_{i=1}^n u_i u_i^t, the set {u_1, u_2, ..., u_n} is an orthonormal basis of R^n. Let W_2 = [u_1, ..., u_n] be the orthogonal matrix whose i-th column is u_i and consider T_2 : S_n(R) → S_n(R), T_2(A) = W_2^t T_1(A) W_2, A ∈ S_n(R). We can verify that T_2 has the same properties as T_1 and moreover T_2(E_ii) = E_ii, i = 1, ..., n.

Let T_2(E_ij + E_ji) = A. For every y = (y_1, y_2, ..., y_n) with y_i = 0 (so that ⟨T_2(E_ii)y, y⟩ = ⟨E_ii y, y⟩ = 0), we claim that

⟨Ay, y⟩ = 0.    (2)

Indeed, suppose that there exists such a y ∈ R^n with ⟨Ay, y⟩ ≠ 0. We assume that ⟨Ay, y⟩ < 0 (the case ⟨Ay, y⟩ > 0 has a similar proof). Then we can choose a small enough positive number ε and a big enough positive number β (say βε > 1) such that ⟨Ay, y⟩ + ε⟨(I − E_ii)y, y⟩ < 0 and X = (β − ε)E_ii + εI + (E_ij + E_ji) is positive definite. But then ⟨T_2(X)y, y⟩ = ⟨Ay, y⟩ + ε⟨(I − E_ii)y, y⟩ < 0, a contradiction.

The equality (2) means that all entries of A are zero except those lying in the i-th column or the i-th row. The equality (2) also holds for y = (y_1, y_2, ..., y_n) with y_j = 0, and therefore all entries of A are zero except those lying in the j-th column or the j-th row. Thus, for every i, j with i ≠ j,

T_2(E_ij + E_ji) = h_ij(E_ij + E_ji).

Set H = (h_ij)_{n×n}, h_11 = h_22 = ... = h_nn = 1. Since H = T_2(𝟏), where 𝟏 is the square matrix all of whose entries are 1, H is positive semi-definite by Lemma 7. We can verify that T_2(A) = H ∘ A for all A ∈ S_n(R). Letting W = W_1 W_2, we obtain (1).

Conversely, it is easy to see that if T has the form (1), then T preserves positive definiteness, by Proposition 9, and rank T(E_ii) = 1 for all i = 1, 2, ..., n.

Theorem 12. Let T : S_n(R) → S_n(R) be a linear operator preserving positive definiteness. Then T satisfies the condition T^{-1}(𝒜) ⊂ 𝒜 (equivalently, T(A) > 0 ⟹ A > 0) if and only if there exists an invertible W such that for every A ∈ S_n(R),

T(A) = WAW^t.    (3)
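Before turning to the proof, both characterizations can be sanity-checked numerically. The sketch below (a hypothetical illustration, not part of the original paper; the names are ours) builds a map of the form (1) from a random invertible W and a positive semi-definite H with unit diagonal, verifies on random samples that it preserves positive definiteness (as Proposition 9 guarantees), and checks that a single congruence of the form (3) also preserves singularity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def is_pd(S):
    # A symmetric matrix is positive definite iff its smallest eigenvalue
    # is positive; a relative tolerance guards against round-off.
    w = np.linalg.eigvalsh(S)
    return bool(w[0] > 1e-10 * max(1.0, w[-1]))

# A positive semi-definite H with diag H = diag I_n (a correlation-type matrix).
G = rng.standard_normal((n, n))
d = np.sqrt(np.diag(G @ G.T))
H = (G @ G.T) / np.outer(d, d)

W = rng.standard_normal((n, n))          # invertible with probability 1

def T_hadamard(A):
    # The form (1) of Theorem 11: A -> W (H ∘ A) W^t.
    return W @ (H * A) @ W.T

def T_congruence(A):
    # The form (3) of Theorem 12: A -> W A W^t.
    return W @ A @ W.T

for _ in range(100):
    B = rng.standard_normal((n, n))
    A = B @ B.T + np.eye(n)              # a random positive definite matrix
    assert is_pd(T_hadamard(A))          # form (1) preserves definiteness
    assert is_pd(T_congruence(A))        # so does form (3) ...

# ... and a congruence also preserves singularity:
v = rng.standard_normal(n)
A_sing = np.outer(v, v)                  # rank one, hence singular for n > 1
assert not is_pd(A_sing) and not is_pd(T_congruence(A_sing))
```

Only the congruence form (3) satisfies the two-sided condition of Theorem 12; a map of the form (1) with rank H > 1 can send some singular matrix to a positive definite one, which is exactly how the proof rules that case out.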

Proof. It is easy to see that if T(A) = WAW^t, then T^{-1}(𝒜) ⊂ 𝒜. Now suppose that T^{-1}(𝒜) ⊂ 𝒜.

First, we prove that rank T(E_ii) = 1, i = 1, ..., n. If there were an index i with T(E_ii) = 0, then Σ_{j≠i} T(E_jj) = T(Σ_{j≠i} E_jj) = T(I_n) > 0, a contradiction because det Σ_{j≠i} E_jj = 0. Thus, T(E_ii) ≠ 0, i = 1, 2, ..., n. Suppose rank T(E_ii) = r_i ≥ 1, i = 1, ..., n. Since T(E_ii) is positive semi-definite (Lemma 7), T(E_ii) = Σ_{j=1}^{r_i} u_ij u_ij^t. Denote A_i = {u_ij : j = 1, 2, ..., r_i}. If there exists an index i such that A_i ⊂ ⟨∪_{j≠i} A_j⟩, then by Lemma 5, n = rank T(I_n) ≤ rank Σ_{j≠i} T(E_jj), a contradiction, because Σ_{j≠i} E_jj is singular and hence Σ_{j≠i} T(E_jj) = T(Σ_{j≠i} E_jj) cannot be positive definite. Thus, A_i ⊄ ⟨∪_{j≠i} A_j⟩, and hence rank T(E_ii) = 1, i = 1, ..., n, by virtue of Lemma 6.

By Theorem 11, there exists an invertible matrix W_1 such that T(A) = W_1(H ∘ A)W_1^t, A ∈ S_n(R), and, following the proof of Theorem 11, diag H = diag I_n. Suppose that rank H = r > 1. By Lemma 4, and because rank H > 1, H = xx^t + yy^t + B, where x, y are linearly independent, x = (x_1, x_2, ..., x_n), x_i ≠ 0, i = 1, 2, ..., n, and B is positive semi-definite. By Proposition 10, there exists a singular (and positive semi-definite) matrix A such that A ∘ (xx^t + yy^t) is positive definite. But A ∘ H ≥ A ∘ (xx^t + yy^t) > 0 and hence T(A) > 0, while A is singular. This contradiction means that rank H = 1, so H = uu^t, where u = (u_1, u_2, ..., u_n), u_i ≠ 0, i = 1, 2, ..., n. Since diag H = diag I_n, u_i ∈ {−1, 1}, i = 1, 2, ..., n. It is not hard to see that H ∘ A = (Σ_i u_i E_ii) A (Σ_i u_i E_ii); setting W = W_1(Σ_i u_i E_ii), the theorem is proved.

Remark. The positive definite preservers of standard form (expressed by a single congruence) are exactly the ones satisfying T^{-1}(𝒜) ⊂ 𝒜.

Following the proof of Theorem 12, it is not hard to prove the following.

Corollary 13. The condition T^{-1}(𝒜) ⊂ 𝒜 in Theorem 12 is equivalent to each of the following:

1. T^{-1}(𝒞) ⊂ 𝒞, where 𝒞 is the set of all invertible symmetric matrices, i.e. T preserves singularity.

2. T preserves the set of all singular, positive semi-definite symmetric matrices.

3. T(𝒜) = 𝒜.

References

[1] Cao, C. and Tang, X., Determinant preserving transformations on symmetric matrix spaces, Electronic Journal of Linear Algebra, Vol. 11, 205-211 (2004).

[2] Horn, R. A. and Johnson, C. R., Topics in Matrix Analysis, Cambridge University Press, Cambridge (1991).

[3] Johnson, C., Matrix Theory and Applications, American Mathematical Society (1990).

[4] Li, C. K. and Tsing, N. K., Linear preserver problems: a brief introduction and some special techniques, Directions in matrix theory (Auburn, AL, 1990), Linear Algebra Appl. 162/164, 217-235 (1992).

[5] Li, C.-K. and Pierce, S., Linear preserver problems, Amer. Math. Monthly 108, 591-605 (2001).

[6] Loewy, R., Linear maps which preserve a balanced nonsingular inertia class, Linear Algebra Appl. 134, 165-179 (1990).

[7] Loewy, R., Linear maps which preserve an inertia class, SIAM J. Matrix Anal. Appl. 11, 107-112 (1990).

[8] Loewy, R., A survey of linear preserver problems - chapter 3: Inertia preservers, Linear Multilinear Algebra 33, 22-30 (1992).

[9] Pierce, S. and Rodman, L., Linear preservers of the class of Hermitian matrices with balanced inertia, SIAM J. Matrix Anal. Appl., Vol. 9, No. 4, 461-472 (1988).