SPACES OF MATRICES WITH SEVERAL ZERO EIGENVALUES


M. D. ATKINSON

Let V be an n-dimensional vector space over some field F, |F| ≥ n, and let 𝒳 be a space of linear mappings from V into itself (𝒳 ≤ Hom(V, V)) with the property that every mapping has at least r zero eigenvalues. If r = 0 this condition is vacuous, but if r = 1 it states that 𝒳 is a space of singular mappings; in this case Flanders [1] has shown that dim 𝒳 ≤ n(n−1) and that if dim 𝒳 = n(n−1) then either 𝒳 = Hom(V, U) for some (n−1)-dimensional subspace U of V or there exists v ∈ V, v ≠ 0, such that 𝒳 = {X ∈ Hom(V, V) : vX = 0}. At the other extreme r = n, the case of nilpotent mappings, a theorem of Gerstenhaber [2] states that dim 𝒳 ≤ ½n(n−1) and that if dim 𝒳 = ½n(n−1) then 𝒳 is the full algebra of strictly lower triangular matrices (with respect to some suitable basis of V).

In this paper we intend to derive similar results for a general value of r, thereby providing some common ground for the above theorems. The condition on the cardinal of F plays an important role in our proofs, although we do not know whether it is essential to the theorems. Flanders also required the constraint char F ≠ 2 at one point in his proof; we do not require this, and in fact our type of argument allows the field in Flanders' theorem to have arbitrary characteristic.

THEOREM 1. Let V be an n-dimensional vector space over some field F, |F| ≥ n, and let 𝒳 be a subspace of Hom(V, V). Suppose that every mapping in 𝒳 has at least r zero eigenvalues. Then dim 𝒳 ≤ ½r(r−1) + n(n−r).

Proof. Gerstenhaber's theorem proves our result in the case n − r = 0 and provides the first step of an induction on n − r. Assume now that n > r > 0 and that the theorem holds for spaces whose transformations all have more than r zero eigenvalues. Since ½r(r+1) + n(n−r−1) < ½r(r−1) + n(n−r), we can suppose that 𝒳 contains a transformation with precisely r zero eigenvalues (otherwise the inductive hypothesis already gives a stronger bound).
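The shape of the extremal spaces is easy to check numerically for small n and r: the space of block lower triangular matrices with a strictly lower triangular r × r block in the top-left corner has dimension exactly ½r(r−1) + n(n−r), and each of its members has at least r zero eigenvalues. The sketch below (an illustration only, not part of the original argument; the sizes n = 5, r = 3 and the random seed are arbitrary choices) verifies both facts for a random member.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 3

# A generic member of the space { [L 0; M N] : L strictly lower triangular }.
L = np.tril(rng.integers(-3, 4, (r, r)), k=-1)   # strictly lower triangular r x r
M = rng.integers(-3, 4, (n - r, r))              # arbitrary (n-r) x r
N = rng.integers(-3, 4, (n - r, n - r))          # arbitrary (n-r) x (n-r)
A = np.block([[L, np.zeros((r, n - r), dtype=np.int64)],
              [M, N]]).astype(float)

# The algebraic multiplicity of the eigenvalue 0 equals n - rank(A^n);
# block triangularity forces it to be at least r, since the block L is nilpotent.
mult_zero = n - np.linalg.matrix_rank(np.linalg.matrix_power(A, n))
assert mult_zero >= r

# Free entries: r(r-1)/2 in L, r(n-r) in M, (n-r)^2 in N -- in total exactly
# the bound of Theorem 1.
dim = r * (r - 1) // 2 + r * (n - r) + (n - r) ** 2
assert dim == r * (r - 1) // 2 + n * (n - r)
print("bound attained:", dim)
```

Theorem 2 below shows that, up to simultaneous similarity, spaces of this block lower triangular kind are essentially the only ones attaining the bound.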
By an appropriate choice of basis for V we may take this transformation to be (represented by)

    A = [ A₁  0  ]
        [ 0   A₂ ]

with A₁ an r × r nilpotent matrix and A₂ non-singular. By taking A₁ in rational canonical form we may take it to be strictly lower triangular. From now on we shall write the matrices of 𝒳 in the form

    [ J  K ]
    [ M  N ]

partitioned so that J is an r × r matrix.

Consider any matrix B ∈ 𝒳 of the form [ J K; 0 N ]. For all α ∈ F,

    B + αA = [ J + αA₁   K       ]
             [ 0         N + αA₂ ]

has at least r zero eigenvalues and, except when −α is one of the at most n − r eigenvalues of A₂⁻¹N, the matrix N + αA₂ = A₂(A₂⁻¹N + αI) is non-singular. Hence for all but at most n − r values of α, J + αA₁ is

Received 24 May, 1979. [Bull. London Math. Soc., 12 (1980), 89–95]

nilpotent. However, the coefficients of the characteristic polynomial of J + αA₁ are polynomials in α which, since A₁ is singular, are of degree at most r − 1. The non-leading coefficients of the characteristic polynomial all vanish for at least r values of α (since |F| ≥ n) and so are identically zero, i.e. J + αA₁ is nilpotent for all α and in particular J is nilpotent.

Let 𝒥 be the space of nilpotent r × r matrices J which arise in this way; then dim 𝒥 ≤ ½r(r−1). Let J₁, …, J_t be a basis for 𝒥 and consider the r² t-vectors l_{pq} whose k-th component is (J_k)_{pq}. Since t ≤ ½r(r−1) we may choose a set L of ½r(r−1) index pairs (p, q) such that {l_{pq} : (p, q) ∈ L} spans the space generated by all the l_{pq}. Let U be the complementary set of ½r(r+1) index pairs. Then every r × r matrix J may be written uniquely as J = J_L + J_U, where (J_L)_{pq} = 0 if (p, q) ∈ U and (J_U)_{pq} = 0 if (p, q) ∈ L. By the choice of L we have the condition

    J ∈ 𝒥 and J_L = 0 implies J = 0,

or equivalently

    [ J_L + J_U  K ] ∈ 𝒳 and J_L = 0 implies J_U = 0.   (*)
    [ M          N ]

By expressing each matrix of 𝒳 in this form we may define a linear map φ on 𝒳 by

    φ: [ J_L + J_U  K ]  ↦  [ J_L  0 ]
       [ M          N ]     [ 0    0 ]

Then, by (*), the kernel of φ consists of the matrices of 𝒳 with zero leading r × r block. Let

    𝒦 = { K : [ 0 K; M N ] ∈ 𝒳 for some M, N },
    ℳ = { M : [ 0 K; M N ] ∈ 𝒳 for some K, N },
    𝒩 = { N : [ 0 K; M N ] ∈ 𝒳 for some K, M }.

Clearly

    dim 𝒳 = dim range(φ) + dim kernel(φ)
          ≤ dim 𝒥 + dim 𝒩 + dim ℳ + dim 𝒦
          ≤ ½r(r−1) + (n−r)² + dim ℳ + dim 𝒦

and the proof will be completed when we have shown that dim ℳ + dim 𝒦 ≤ r(n−r). To do this we shall construct a non-singular bilinear form θ : 𝒜_{r,n−r} × 𝒜_{n−r,r} → F, where 𝒜_{r,n−r}, 𝒜_{n−r,r} denote the full spaces of r × (n−r) and (n−r) × r matrices over F, and show that θ vanishes on 𝒦 × ℳ. This will prove that ℳ is contained in the annihilator of 𝒦, which has dimension r(n−r) − dim 𝒦, giving dim ℳ ≤ r(n−r) − dim 𝒦 as required.

Consider any matrices K ∈ 𝒦, M ∈ ℳ. Then there exist

    Y = [ 0   K  ] = (y_{ij}) ∈ 𝒳   and   X = [ 0  K′ ] = (x_{ij}) ∈ 𝒳.
        [ M′  N′ ]                            [ M  N  ]

Since X + αA + βY has at least r zero eigenvalues for all

α, β, the coefficient of α^{n−r−1} β λ^{r−1} in det(X + αA + βY − λI) is zero (see [2], p. 615). The total coefficient of λ^{r−1} in this determinant is (apart from a sign) the sum of all (n−r+1)-rowed principal minors of X + αA + βY, and we require the coefficient of α^{n−r−1}β in this sum. Consider a typical one of these minors,

    det [ J′ + αA′₁   *        ]
        [ *           N′ + αA′₂ ]

where J′ + αA′₁ is the principal submatrix of X + αA + βY whose rows are indexed by a set S ⊆ {1, 2, …, r}, N′ + αA′₂ is the principal submatrix whose rows are indexed by a set T ⊆ {r+1, …, n}, and |S ∪ T| = n − r + 1. By direct calculation the coefficient of α^{n−r−1}β in this minor is, to within a sign,

    Σ_{i,j ∈ S} Σ_{s,t ∈ T} C_{ij} D_{st} y_{it} x_{sj} ,

where (C_{ij}) is the matrix of cofactors of A′₁ and (D_{st}) is the matrix of cofactors of A′₂. We shall express such a coefficient as y Z xᵀ, where y and x are the vectors listing the relevant entries (y_{it}) and (x_{sj}), and Z is an r(n−r) × r(n−r) matrix whose entries depend upon (C_{ij}) and (D_{st}). If |S| > 1 then, since A′₁ is strictly lower triangular, C_{ij} = 0 for i, j ∈ S with i ≥ j, and the matrix Z then has zeros above the diagonal, where each diagonal 0 represents an (n−r) × (n−r) matrix of zeros. If |S| = 1, say S = {i}, then |T| = n − r, C_{ii} = 1 and (D_{st}) = E is the matrix of cofactors of A₂. In this case the matrix Z has the form

where each diagonal block is an (n−r) × (n−r) matrix, E is the i-th diagonal block, and there are zeros elsewhere. Summing over all appropriate S, T we obtain the coefficient of α^{n−r−1} β λ^{r−1} in the form y Z xᵀ, where Z has the blocks E down its diagonal and zeros above the diagonal. Since A₂ is non-singular so also is E, and thus Z is non-singular. This matrix Z defines our non-singular bilinear form vanishing on 𝒦 × ℳ, and so completes the proof of the theorem.

In the next theorem we consider the case where dim 𝒳 is maximal and determine the possibilities for 𝒳. We first prove

LEMMA. Let V be an n-dimensional vector space over some field F, |F| ≥ n, and let 𝒳 be a subspace of Hom(V, V). Suppose that W₁ and W₂ are 𝒳-invariant subspaces of V such that

(i) dim W₁ = s₁, dim V/W₂ = s₂, dim W₁ ∩ W₂ = t,
(ii) 𝒳 induces spaces of nilpotent mappings in each of W₁ and V/W₂,
(iii) every transformation in 𝒳 has at least r zero eigenvalues.

Then dim 𝒳 ≤ ½r(r−1) + n(n−r) − (s₁−t)(n−s₂−t).

Proof. W₁ + W₂ has dimension dim W₁ + dim W₂ − dim W₁ ∩ W₂ = s₁ + n − s₂ − t, and we may choose a basis of it having the form

    e₁, …, e_t, f₁, …, f_{s₁−t}, g₁, …, g_{n−s₂−t}

such that e₁, …, e_t is a basis for W₁ ∩ W₂; e₁, …, e_t, f₁, …, f_{s₁−t} is a basis for W₁; and e₁, …, e_t, g₁, …, g_{n−s₂−t} is a basis for W₂. We complete this to a basis of V with h₁, …, h_{s₂−s₁+t}. With respect to this basis each transformation of 𝒳 has a matrix of the form

    [ N₁  0   0  0  ]
    [ A₁  N₂  0  0  ]
    [ A₂  0   B  0  ]
    [ *   *   *  N₃ ]

N₁ and N₂ are nilpotent t × t and (s₁−t) × (s₁−t) matrices since 𝒳 restricted to W₁ is a space of nilpotent mappings. N₃ is a nilpotent (s₂−s₁+t) × (s₂−s₁+t) matrix since 𝒳 induces nilpotent transformations on V/(W₁ + W₂). Since the whole matrix has at least r zero eigenvalues, the (n−s₂−t) × (n−s₂−t) matrix B has at least r − s₂ − t zero

eigenvalues. Now, by Theorem 1, the space of all blocks

    [ N₁  0  ]
    [ A₁  N₂ ]

has dimension at most ½s₁(s₁−1), the space of all matrices N₃ has dimension at most ½(s₂−s₁+t)(s₂−s₁+t−1), and the space of all matrices B has dimension at most

    ½(r−s₂−t)(r−s₂−t−1) + (n−s₂−t)(n−r).

A routine calculation now shows that

    dim 𝒳 ≤ ½r(r−1) + n(n−r) − (s₁−t)(n−s₂−t).

THEOREM 2. Let V be an n-dimensional vector space over some field F, |F| ≥ n, and let 𝒳 be a subspace of Hom(V, V). Suppose that every mapping in 𝒳 has at least r zero eigenvalues and that 𝒳 has dimension precisely ½r(r−1) + n(n−r). Then V contains 𝒳-invariant subspaces V₁, V₂ with V₁ ≤ V₂ and dim V₁ + dim V/V₂ = r, and such that 𝒳 operates on each of V₁ and V/V₂ as the full algebra of strictly lower triangular matrices (with respect to suitable bases). Moreover, the subspaces V₁, V₂ are uniquely determined by 𝒳 unless r = n.

Proof. We continue to use the machinery and notation A, L, U, φ, 𝒥, ℳ, 𝒩, 𝒦, θ introduced in the proof of Theorem 1. From

    ½r(r−1) + n(n−r) = dim 𝒳 ≤ dim 𝒥 + dim 𝒩 + dim ℳ + dim 𝒦

we obtain dim 𝒥 = ½r(r−1), dim 𝒩 = (n−r)², dim ℳ + dim 𝒦 = r(n−r), and ℳ, 𝒦 are the full annihilators of each other with respect to the bilinear form θ. Moreover, identifying 𝒦 with the set of matrices [0 K; 0 0], K ∈ 𝒦, etc., we see that the kernel of φ decomposes through 𝒦, ℳ and 𝒩. In particular, since dim 𝒩 = (n−r)², 𝒳 contains a matrix with zero leading r × r block whose final (n−r) × (n−r) block is non-singular. This latter matrix is similar to one of the block diagonal form of A, and so we could have taken it as our matrix A to begin with. Let us suppose that this had been done, so that now A₁ = 0. With this choice of A the bilinear form θ(U, V) is easily seen to be tr(UV).

Let X = [ H J; M N ] be any matrix in 𝒳 and consider the characteristic polynomial det(X + αA − λI). In this polynomial the coefficients of

α^{n−r}, α^{n−r}λ, …, α^{n−r}λ^{r−1} are all zero and are plainly equal (apart from a fixed non-zero factor) to the coefficients of 1, λ, …, λ^{r−1} in det(H − λI). Thus H is nilpotent. Let ℋ be the space of matrices H arising in this way. Then ½r(r−1) = dim 𝒥 ≤ dim ℋ ≤ ½r(r−1) by Gerstenhaber's theorem, and so dim ℋ = ½r(r−1). By Gerstenhaber's theorem again, ℋ may be reduced to the full algebra of strictly lower triangular matrices by a similarity matrix T. Therefore by transforming 𝒳 by the similarity matrix

    [ T  0 ]
    [ 0  I ]

we can take ℋ to be the full algebra of strictly lower triangular matrices. Having done this we may compute the coefficient of α^{n−r−1}λ^{r−1} in det(X + αA − λI). This yields

    tr(JM) = 0.

Consider any matrix Y = [ H K; 0 N ] in 𝒳. Then for all M ∈ ℳ there exists Z = [ 0 J; M N′ ] ∈ 𝒳, and we have tr(JM) = 0 and, since Y + Z ∈ 𝒳, tr((K + J)M) = 0. Hence tr(KM) = 0 and K is in the annihilator of ℳ with respect to θ, i.e. K ∈ 𝒦. Hence 𝒳 contains

    [ H  0 ]
    [ 0  N ]

for all strictly lower triangular H and all N. Now we show that either every matrix in 𝒳 has first row zero or every matrix in 𝒳 has r-th column zero. Suppose that this is not so, in which case we may find some matrix X = (x_{ij}) which has neither first row nor r-th column zero, say x_{1j}, x_{ir} ≠ 0 for some i, j > r. As shown above, 𝒳 contains also the matrix W = [ H 0; 0 N ] where H has ones in the (t+1, t) positions, t = 1, 2, …, r−1, and zeros elsewhere, and N is any matrix with a single one in every row and column except for the i-th row and j-th column and zeros elsewhere. But then ±x_{1j}x_{ir} is the coefficient of α^{n−2} in det(X + αW) and so is zero since r ≥ 1 (every matrix of 𝒳 is singular, so det(X + αW) vanishes identically), a contradiction.

In terms of matrices, the existence part of the theorem asserts that the matrices of 𝒳 are simultaneously similar to matrices

    [ L₁  0  0  ]
    [ *   *  0  ]
    [ *   *  L₂ ]

where L₁, L₂ are strictly lower triangular matrices of degrees r₁, r₂ with r₁ + r₂ = r.
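The combinatorial step in the contradiction argument — that once the leading r × r block is strictly lower triangular, the coefficient of α^{n−2} in det(X + αW) reduces to ±x_{1j}x_{ir} — can be checked numerically. The sketch below takes n = 4, r = 2, i = 3, j = 4 (arbitrary illustrative choices); the matrix X is an arbitrary stand-in, not a member of an actual space 𝒳, so only the coefficient identity is being tested.

```python
import numpy as np

n, r = 4, 2
rng = np.random.default_rng(2)
i, j = 3, 4                        # the indices i, j > r of the text (1-based)

# X stands in for a matrix whose leading r x r block is strictly lower
# triangular; the remaining entries are arbitrary nonzero integers.
X = rng.integers(1, 5, (n, n)).astype(float)
X[:r, :r] = np.tril(X[:r, :r], k=-1)

# W = [H 0; 0 N]: H has ones in the (t+1, t) positions, and N has a single
# one in every row and column except the i-th row and j-th column.
W = np.zeros((n, n))
W[1, 0] = 1.0                      # H: one in position (2, 1)
W[3, 2] = 1.0                      # N: one in position (4, 3)

# det(X + a*W) has degree n-2 = 2 in a; interpolating through 3 sample
# points recovers it exactly, and the leading coefficient should agree with
# x_{1j} * x_{ir} up to sign.
alphas = np.arange(3, dtype=float)
dets = [np.linalg.det(X + a * W) for a in alphas]
lead = np.polyfit(alphas, dets, 2)[0]
assert np.isclose(abs(lead), abs(X[0, j - 1] * X[i - 1, r - 1]))
print("coefficient of a^(n-2) matches x_{1j} * x_{ir} up to sign")
```

The other possible completion of the permutation, x_{1r}x_{ij}, vanishes precisely because the (1, r) entry lies strictly above the diagonal of the leading block, which is what the earlier reduction of ℋ to strictly lower triangular form guarantees.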
We can now prove this statement by induction, taking as inductive hypothesis that the statement is true for smaller values of n and all appropriate values of r. In the case where the matrices of 𝒳 have zero first row, we consider the space 𝒳′ of (n−1) × (n−1) matrices obtained by deleting the first row and first column. It is clear that each matrix of 𝒳′ has at least r − 1 zero eigenvalues and hence

    dim 𝒳′ ≤ ½(r−1)(r−2) + (n−1)(n−r).

On the other hand, from the definition of 𝒳′,

    dim 𝒳′ ≥ dim 𝒳 − (n−1) = ½(r−1)(r−2) + (n−1)(n−r).

Thus equality holds and the inductive hypothesis can be applied to 𝒳′. Hence, for some non-singular (n−1) × (n−1) matrix T, the matrices of T⁻¹𝒳′T have the form

described above, with L′₁, L₂ strictly lower triangular of degrees r₁ − 1, r₂ and (r₁ − 1) + r₂ = r − 1. But then, taking

    S = [ 1  0 ]
        [ 0  T ]

the matrices of S⁻¹𝒳S have the required form. In the case where the matrices of 𝒳 have zero r-th column we use a similar argument, taking 𝒳′ to be the space obtained by deleting the r-th row and r-th column from the matrices of 𝒳.

Finally, returning to the language of vector spaces and linear mappings, suppose that U₁, U₂ are two subspaces of V which have all the stated properties of V₁, V₂. Let W₁ = U₁ + V₁, W₂ = U₂ ∩ V₂, dim W₁ = s₁, dim V/W₂ = s₂, dim W₁ ∩ W₂ = t. Then 𝒳 induces nilpotent mappings in both W₁ and V/W₂, and therefore 𝒳, W₁, W₂ satisfy all the conditions of the lemma. It follows that

    ½r(r−1) + n(n−r) ≤ ½r(r−1) + n(n−r) − (s₁−t)(n−s₂−t).

But (s₁−t)(n−s₂−t) is non-negative and so is zero. Hence s₁ = t or n = s₂ + t. From s₁ = t it follows that W₁ ≤ W₂, i.e. U₁ + V₁ ≤ U₂ ∩ V₂, and hence U₁ = V₁, U₂ = V₂, since otherwise all transformations in 𝒳 would have more than r zero eigenvalues, contradicting Theorem 1. From n = s₂ + t it follows that W₂ ≤ W₁ and 𝒳 is a space of nilpotent matrices, i.e. r = n.

References

1. H. Flanders, "On spaces of linear transformations with bounded rank", J. London Math. Soc., 37 (1962), 10-16.
2. M. Gerstenhaber, "On nilalgebras and linear varieties of nilpotent matrices I", Amer. J. Math., 80 (1958), 614-622.

Department of Computing Mathematics,
University College,
Cardiff.
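As a closing remark, the dimension bookkeeping behind the induction step of Theorem 2 — that deleting the first row and column lowers the bound ½r(r−1) + n(n−r) by exactly n − 1, giving the bound for parameters (n−1, r−1) — is a polynomial identity that can be machine-checked over a sample range (the range of n below is an arbitrary choice; this is a sanity check, not part of the proof).

```python
# Verify r(r-1)/2 + n(n-r) - (n-1) == (r-1)(r-2)/2 + (n-1)(n-r)
# for all 1 <= r <= n in a sample range, as used in the induction step.
for n in range(2, 40):
    for r in range(1, n + 1):
        lhs = r * (r - 1) // 2 + n * (n - r) - (n - 1)
        rhs = (r - 1) * (r - 2) // 2 + (n - 1) * (n - r)
        assert lhs == rhs
print("identity verified")
```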