
An Interlacing Property of Eigenvalues of Strictly Totally Positive Matrices

Allan Pinkus

Abstract. We prove results concerning the interlacing of eigenvalues of principal submatrices of strictly totally positive matrices.

1. Introduction

The central results concerning eigenvalues and eigenvectors of strictly totally positive (STP) matrices were proved by Gantmacher and Krein in their 1937 paper [4]. (An announcement appeared in 1935 in [3]. Chapter II of the book of Gantmacher and Krein [5] is a somewhat expanded version of this paper.) Among the results obtained in that paper is that an $n \times n$ STP matrix $A$ has $n$ positive, simple eigenvalues, and that the $n-1$ positive, simple eigenvalues of the principal submatrices obtained by deleting the first (or last) row and column of $A$ strictly interlace those of the original matrix. It has been known for many years that this interlacing property does not hold for all principal submatrices. An example may be found in the 1974 paper of Karlin and Pinkus [7].

Let $\lambda_1 > \cdots > \lambda_n > 0$ denote the eigenvalues of $A$, and $\mu_1^{(k)} > \cdots > \mu_{n-1}^{(k)} > 0$ the eigenvalues of the principal submatrix of $A$ obtained by deleting its $k$th row and column. It easily follows from the Perron theorem on positive matrices that $\lambda_1 > \mu_1^{(k)}$ for each $k$. Moreover, in 1985 Friedland [2] proved that

$$\mu_1^{(k)} > \lambda_2 \quad \text{and} \quad \mu_{n-1}^{(k)} > \lambda_n$$

for each $k$, $1 < k < n$. In this note we prove the following result.
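The classical inequalities recalled above are easy to observe numerically. The following sketch (not part of the paper) builds a small symmetric STP matrix from the kernel $e^{xy}$ at increasing nodes — a standard source of STP matrices, cf. Karlin [6] — and checks the Perron and Friedland inequalities; the node choice and the use of numpy are assumptions of this illustration.

```python
import numpy as np

# The kernel exp(x*y) at strictly increasing nodes yields an STP matrix
# (a classical example; see Karlin [6]). Nodes chosen arbitrarily here.
x = np.array([0.5, 1.0, 1.5, 2.0])
A = np.exp(np.outer(x, x))                   # symmetric STP matrix, n = 4
n = len(x)

lam = np.sort(np.linalg.eigvalsh(A))[::-1]   # lambda_1 > ... > lambda_n

# Gantmacher-Krein: n positive, simple eigenvalues.
assert np.all(lam > 0) and np.all(np.diff(lam) < 0)

for k in range(n):
    Ak = np.delete(np.delete(A, k, axis=0), k, axis=1)
    mu = np.sort(np.linalg.eigvalsh(Ak))[::-1]   # mu_1^(k) > ... > mu_{n-1}^(k)
    assert lam[0] > mu[0]                    # Perron: lambda_1 > mu_1^(k)
    if 0 < k < n - 1:                        # interior k (1 < k < n, 1-based)
        assert mu[0] > lam[1]                # Friedland: mu_1^(k) > lambda_2
        assert mu[-1] > lam[-1]              # Friedland: mu_{n-1}^(k) > lambda_n
```

The assertions pass for any choice of strictly increasing nodes, since they restate the theorems; the particular matrix is only for concreteness.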

Theorem 1. Let $A$ be an $n \times n$ STP matrix as above. Then for each $k$, $1 < k < n$,

$$\lambda_{j-1} > \mu_j^{(k)} > \lambda_{j+1}, \qquad j = 1, \ldots, n-1$$

(where $\lambda_0 = \lambda_1$).

Before proving this result we introduce some notation and recall some known facts. For a non-zero vector $c = (c_1, \ldots, c_n) \in \mathbb{R}^n$, we define two notions of the number of sign changes of the vector $c$. These are:

$S^-(c)$ — the number of sign changes in the sequence $c_1, \ldots, c_n$ with zero terms discarded.

$S^+(c)$ — the maximum number of sign changes in the sequence $c_1, \ldots, c_n$, where zero terms are arbitrarily assigned the values $+1$ and $-1$.

For example, $S^-(1, 0, 1, -1, 0, 1) = 2$ and $S^+(1, 0, 1, -1, 0, 1) = 4$.

A matrix $A$ is said to be an STP matrix if all its minors are strictly positive. STP matrices were independently introduced by Schoenberg in 1930, see [8], and by Krein and some of his colleagues in the 1930s. The main result of Gantmacher and Krein on the eigenvalues and eigenvectors of STP matrices is the following.

Theorem A. Let $A$ be an $n \times n$ STP matrix. Then $A$ has $n$ positive, simple eigenvalues. Let $\lambda_1 > \cdots > \lambda_n > 0$ denote these eigenvalues, and let $u^k$ be a real eigenvector (unique up to multiplication by a nonzero constant) associated with the eigenvalue $\lambda_k$. Then

$$q - 1 \le S^-\Bigl(\sum_{i=q}^{p} c_i u^i\Bigr) \le S^+\Bigl(\sum_{i=q}^{p} c_i u^i\Bigr) \le p - 1$$

for each $1 \le q \le p \le n$ (and $c_i$ not all zero). In particular $S^-(u^j) = S^+(u^j) = j - 1$ for $j = 1, \ldots, n$. In addition, the eigenvalues of the principal submatrices obtained by deleting the first (or last) row and column of $A$ strictly interlace those of the original matrix.

Proofs of this result may be found in Gantmacher, Krein [4], in Gantmacher, Krein [5], and in Ando [1].

A very important property of STP matrices is that of variation diminishing. This was Schoenberg's initial contribution to the theory. The particular result we need may be found in Gantmacher, Krein [5] and in Karlin [6], and is the following.

Theorem B. Let $A$ be an $n \times m$ STP matrix. Then for each vector $x \in \mathbb{R}^m$, $x \ne 0$,

$$S^+(Ax) \le S^-(x).$$
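The sign-change counts $S^-$ and $S^+$, and the variation-diminishing property of Theorem B, can be experimented with directly. Below is a minimal pure-Python sketch (not from the paper); the brute-force computation of $S^+$ and the small exp-kernel STP matrix are illustrative assumptions.

```python
import math
from itertools import product

def S_minus(c):
    """S^-(c): sign changes in the sequence c with zero entries discarded."""
    s = [t for t in c if t != 0]
    return sum(1 for a, b in zip(s, s[1:]) if a * b < 0)

def S_plus(c):
    """S^+(c): maximum sign changes over all +/-1 assignments to zero entries."""
    zeros = [i for i, t in enumerate(c) if t == 0]
    best = 0
    for signs in product((1.0, -1.0), repeat=len(zeros)):
        s = list(c)
        for i, sg in zip(zeros, signs):
            s[i] = sg
        best = max(best, S_minus(s))
    return best

# The paper's example vector (1, 0, 1, -1, 0, 1):
assert S_minus([1, 0, 1, -1, 0, 1]) == 2
assert S_plus([1, 0, 1, -1, 0, 1]) == 4

# Theorem B spot check on a 3 x 3 STP matrix built from the kernel exp(x*y):
nodes = [0.5, 1.0, 1.5]
A = [[math.exp(xi * xj) for xj in nodes] for xi in nodes]
v = [-1.0, -1.0, 3.0]                        # S^-(v) = 1
Av = [sum(a * b for a, b in zip(row, v)) for row in A]
assert S_plus(Av) <= S_minus(v)              # variation diminishing
```

The brute force over zero assignments is exponential in the number of zeros, which is harmless for the short vectors used here.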

2. Proof of Theorem 1

We first prove that for any $k$ and $j = 1, \ldots, n-1$,

$$\mu_j^{(k)} > \lambda_{j+1}.$$

Let $u = (u_1, \ldots, u_n)$ denote a real eigenvector of $A$ with eigenvalue $\lambda_{j+1}$. That is, $Au = \lambda_{j+1} u$. Then from Theorem A, $S^+(u) = S^-(u) = j$. Let $A_k$ denote the principal submatrix of $A$ obtained by deleting the $k$th row and column. Let $v$ denote a real eigenvector of $A_k$ with eigenvalue $\mu_j^{(k)}$. That is, $A_k v = \mu_j^{(k)} v$. From Theorem A, $S^+(v) = S^-(v) = j - 1$.

From the vector $v \in \mathbb{R}^{n-1}$ we construct the vector $v^0 \in \mathbb{R}^n$ by simply inserting a $0$ at the $k$th coordinate. That is, we will write (somewhat abusing notation)

$$v = (v_1, \ldots, v_{k-1}, v_{k+1}, \ldots, v_n)^T$$

and set

$$v^0 = (v_1, \ldots, v_{k-1}, 0, v_{k+1}, \ldots, v_n)^T.$$

Let $v^*$ be defined by

$$A v^0 = \mu_j^{(k)} v^*.$$

Thus

$$v^* = (v_1, \ldots, v_{k-1}, w_k, v_{k+1}, \ldots, v_n)^T,$$

for some easily calculated $w_k$. From Theorem B and the definitions of $v$, $v^0$ and $v^*$ we have

$$j - 1 = S^+(v) \ge S^+(v^*) \ge S^-(v^*) \ge S^-(v) = j - 1.$$

Thus $S^+(v^*) = S^-(v^*) = j - 1$.

If $u_k = 0$, then the vector obtained by deleting $u_k$ from $u$ is an eigenvector of $A_k$ with eigenvalue $\lambda_{j+1}$. Since this new vector also has exactly $j$ sign changes, it follows from Theorem A that $\lambda_{j+1} = \mu_{j+1}^{(k)} < \mu_j^{(k)}$.
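The construction of $v^0$, $v^*$ and $w_k$ above can be seen concretely. The sketch below (not from the paper; numpy and the exp-kernel test matrix are assumptions of the illustration) inserts a zero into an eigenvector of $A_k$, applies $A$, and checks that the result agrees with $v^0$ in every coordinate except the $k$th.

```python
import numpy as np

x = np.array([0.5, 1.0, 1.5, 2.0])
A = np.exp(np.outer(x, x))                  # symmetric STP matrix (illustrative)
k = 1                                       # delete the 2nd row/column (0-based)

Ak = np.delete(np.delete(A, k, axis=0), k, axis=1)
vals, vecs = np.linalg.eigh(Ak)
order = np.argsort(vals)[::-1]              # decreasing eigenvalues
j = 1                                       # pick mu_j^(k): the 2nd largest
mu = vals[order[j]]
v = vecs[:, order[j]]                       # A_k v = mu v

v0 = np.insert(v, k, 0.0)                   # v^0: zero inserted at slot k
v_star = A @ v0 / mu                        # defined by A v^0 = mu v*
w_k = v_star[k]                             # the "easily calculated" w_k

# v* agrees with v^0 in every coordinate except the kth:
off_k = np.arange(len(v0)) != k
assert np.allclose(v_star[off_k], v0[off_k])
```

The off-$k$ agreement is exactly the identity $(A v^0)_i = \mu_j^{(k)} v_i$ for $i \ne k$, which holds because $A_k v = \mu_j^{(k)} v$.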

If $w_k = 0$, then $v^* = v^0$ is an eigenvector of $A$ with $j - 1$ sign changes. As such, $\mu_j^{(k)} = \lambda_j > \lambda_{j+1}$. Thus we may assume that both $u_k$ and $w_k$ are non-zero. Since $u$ and $v$ are defined up to multiplication by a non-zero constant, we may assume, based on the above, that $u_k$ and $w_k$ are both positive. Set

$$c^* = \inf\{c : c > 0,\ S^-(c v^* + u) \le j - 1\}.$$

We claim that $c^*$ is a well-defined positive number. This is a consequence of the following. For all $c$ sufficiently large, e.g., such that $c |v_i| > |u_i|$ if $v_i \ne 0$, we have

$$S^-(c v^* + u) \le S^+(c v^* + u) \le S^+(v^*) = j - 1.$$

(Here we have used the fact that $c v_k^* + u_k = c w_k + u_k$ and $w_k$ are of the same sign.) For all $c$ sufficiently small and positive, e.g., such that $c |v_i| < |u_i|$ if $u_i \ne 0$, we have

$$S^-(c v^* + u) \ge S^-(u) = j.$$

From the definition of $c^*$ and continuity properties of $S^+$ and $S^-$, it follows that

$$S^-(c^* v^* + u) \le j - 1 < S^+(c v^* + u)$$

for all $c \le c^*$. Now

$$A(c^* v^0 + u) = c^* \mu_j^{(k)} v^* + \lambda_{j+1} u.$$

Let

$$\hat{c} = \frac{c^* \mu_j^{(k)}}{\lambda_{j+1}} > 0.$$

From Theorem B,

$$S^+(\hat{c} v^* + u) \le S^-(c^* v^0 + u).$$

Since $c^* v^0 + u$ and $c^* v^* + u$ differ only in the $k$th coordinate, where both are positive, it follows that

$$S^+(\hat{c} v^* + u) \le S^-(c^* v^* + u) \le j - 1.$$

This implies, from the above, that $\hat{c} > c^*$. Thus $\mu_j^{(k)} > \lambda_{j+1}$.

The proof of the reverse inequality $\lambda_{j-1} > \mu_j^{(k)}$ for any $k$ and $j = 2, \ldots, n-1$ is essentially the same.
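The interlacing just proved can be observed on a concrete example. This numerical sketch (not from the paper; numpy and the exp-kernel test matrix are assumptions) checks both inequalities of Theorem 1 for every interior $k$.

```python
import numpy as np

x = np.array([0.5, 1.0, 1.5, 2.0])
A = np.exp(np.outer(x, x))                  # symmetric STP matrix, n = 4
n = len(x)
lam = np.sort(np.linalg.eigvalsh(A))[::-1]  # lambda_1 > ... > lambda_n

for k in range(1, n - 1):                   # interior k (1 < k < n, 1-based)
    Ak = np.delete(np.delete(A, k, axis=0), k, axis=1)
    mu = np.sort(np.linalg.eigvalsh(Ak))[::-1]
    for j in range(n - 1):                  # 0-based; j = 1, ..., n-1 in the paper
        upper = lam[j - 1] if j > 0 else lam[0]   # lambda_0 := lambda_1
        assert upper > mu[j] > lam[j + 1]   # Theorem 1
```

The $j = 1$ case reduces to the Perron and Friedland inequalities already quoted in the introduction; the remaining cases are the new content of Theorem 1.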

Let $x$ be a real eigenvector of $A$ with eigenvalue $\lambda_{j-1}$. That is, $Ax = \lambda_{j-1} x$. Note that $S^+(x) = S^-(x) = j - 2$. The vectors $v$, $v^0$ and $v^*$ are as above. If $w_k = 0$, then $\mu_j^{(k)} = \lambda_j < \lambda_{j-1}$. If $x_k = 0$, then $\lambda_{j-1} = \mu_{j-1}^{(k)} > \mu_j^{(k)}$. We may thus assume that $w_k$ and $x_k$ are both positive. We set

$$a^* = \inf\{a : a > 0,\ S^-(a x + v^*) \le j - 2\},$$

and now follow the previous proof.

Since STP matrices are dense in the set of all totally positive (TP) matrices (matrices all of whose minors are nonnegative), and because of the continuity of the eigenvalues as functions of the matrix entries, we have:

Corollary 2. Let $A$ be an $n \times n$ TP matrix with eigenvalues

$$\lambda_1 \ge \cdots \ge \lambda_n \ge 0$$

(listed to their algebraic multiplicity). Let

$$\mu_1^{(k)} \ge \cdots \ge \mu_{n-1}^{(k)} \ge 0$$

denote the eigenvalues of the principal submatrix of $A$ obtained by deleting its $k$th row and column. Then for any $k$, $1 < k < n$,

$$\lambda_{j-1} \ge \mu_j^{(k)} \ge \lambda_{j+1}, \qquad j = 1, \ldots, n-1$$

(where $\lambda_0 = \lambda_1$).

References

1. Ando, T., Totally positive matrices, Lin. Alg. and Appl. 90 (1987), 165–219.
2. Friedland, S., Weak interlacing properties of totally positive matrices, Lin. Alg. and Appl. 71 (1985), 95–100.
3. Gantmacher, F. R., and M. G. Krein, Sur les matrices oscillatoires, C. R. Acad. Sci. (Paris) 201 (1935), 577–579.
4. Gantmacher, F. R., and M. G. Krein, Sur les matrices complètement non négatives et oscillatoires, Compositio Math. 4 (1937), 445–476.

5. Gantmacher, F. R., and M. G. Krein, Ostsillyatsionye Matritsy i Yadra i Malye Kolebaniya Mekhanicheskikh Sistem, Moskva-Leningrad, 1950; German transl. as Oszillationsmatrizen, Oszillationskerne und kleine Schwingungen mechanischer Systeme, Akademie Verlag, Berlin, 1960; English transl. as Oscillation Matrices and Kernels and Small Vibrations of Mechanical Systems, USAEC, 1961.
6. Karlin, S., Total Positivity. Volume 1, Stanford University Press, Stanford, CA, 1968.
7. Karlin, S., and A. Pinkus, Oscillation properties of generalized characteristic polynomials for totally positive and positive definite matrices, Lin. Alg. and Appl. 8 (1974), 281–312.
8. Schoenberg, I. J., Über variationsvermindernde lineare Transformationen, Math. Z. 32 (1930), 321–328.

Allan Pinkus
Department of Mathematics
Technion, I. I. T.
Haifa 32000
Israel
pinkus@tx.technion.ac.il