The Maximum Dimension of a Subspace of Nilpotent Matrices of Index 2


The Maximum Dimension of a Subspace of Nilpotent Matrices of Index 2

L. G. Sweet
Department of Mathematics and Statistics, University of Prince Edward Island
Charlottetown, PEI C1A 4P3, Canada
sweet@upei.ca

J. A. MacDougall
School of Mathematical and Physical Sciences, University of Newcastle
Callaghan, NSW 2308, Australia
jimmacdougall@newcastle.edu.au

Abstract. A matrix M is nilpotent of index 2 if M^2 = 0. Let V be a space of nilpotent n × n matrices of index 2 over a field k where card k > n, and suppose that r is the maximum rank of any matrix in V. The object of this paper is to give an elementary proof of the fact that dim V ≤ r(n − r). We show that the inequality is sharp and construct all such subspaces of maximum dimension. We use the result to find the maximum dimension of spaces of anti-commuting matrices and of zero subalgebras of special Jordan algebras.

Keywords: nilpotent matrix, matrix rank
MSC: 15A

1 Introduction

An n × n matrix M is nilpotent if M^t = 0 for some t > 0. We are concerned with linear spaces of nilpotent matrices over a field k. As far back as 1959, Gerstenhaber [4] showed that the maximum dimension of a space of nilpotent n × n matrices is n(n − 1)/2. In this paper we are interested in matrices that are nilpotent of index 2. Naturally such a space will have smaller dimension. We show that the maximum dimension of such a space depends on the maximum r of the ranks of the matrices in the space: it is r(n − r). This bound is sharp, and we characterize those spaces attaining the maximum dimension.

While this might seem to be a very specialized result, it has some important consequences. It gives an immediate proof that r(n − r) is the maximum possible dimension of a space of anti-commuting matrices over any field with card k > n/2 and char k ≠ 2. It also shows that r(n − r) is the maximum dimension of a zero subalgebra of a special Jordan algebra. All of the proofs involve only elementary linear algebra.

Related work has been done by Brualdi and Chavey [2], who investigated the more general problem of finding the maximal dimension of a space of nilpotent matrices of bounded index k. Their arguments are combinatorial in nature and do not imply our result. Atkinson and Lloyd [1] and others have also studied spaces of matrices of bounded rank, but their results do not overlap ours.

2 Preliminary Theorems

Theorem 1. Let V be a space of n × n matrices over a field k where card k ≥ n. Let A ∈ V have the property that r = rank A ≥ rank X for every X ∈ V. If a ∈ ker A then Ba ∈ Im A for all B ∈ V.

Proof. The result is obvious when r = n, so assume r < n. Let S = {a_1, a_2, ..., a_k} be a basis of ker A and extend S to a basis B_1 = {a_1, a_2, ..., a_k, a_{k+1}, ..., a_n} of k^n. Then T = {Aa_{k+1}, Aa_{k+2}, ..., Aa_n} is a basis of Im A, and we extend T to a basis B_2 = {c_1, c_2, ..., c_k, Aa_{k+1}, ..., Aa_n} of k^n. Now let the vectors in B_1 form the columns of a matrix Q and let the vectors in B_2 form the columns of a matrix P. Then

    P^{-1} A Q = [ 0  0   ]
                 [ 0  I_r ]

where I_r is the r × r identity matrix. Let B be any matrix in V and write

    P^{-1} B Q = [ B_1  B_2 ]
                 [ B_3  B_4 ]

where B_4 is an r × r matrix. Then for any x ∈ k we have

    P^{-1} (B + xA) Q = [ B_1  B_2        ]
                        [ B_3  B_4 + xI_r ]

Let S be any (r + 1) × (r + 1) submatrix of P^{-1}(B + xA)Q containing B_4 + xI_r. Then det S = 0, and since card k ≥ n > r, each coefficient of this polynomial in x must be identically 0. The fact that the coefficient of x^r is 0 implies that each element of B_1 must be 0. So

    P^{-1} B Q = [ 0    B_2 ]
                 [ B_3  B_4 ]

Now suppose a ∈ ker A. Then a = Σ_{i=1}^{k} x_i a_i = Q(x_1, ..., x_k, 0, ..., 0)^T. So we have

    Ba = BQ(x_1, ..., x_k, 0, ..., 0)^T
       = P [ 0    B_2 ] (x_1, ..., x_k, 0, ..., 0)^T
           [ B_3  B_4 ]
       = P(0, ..., 0, y_{k+1}, ..., y_n)^T
       = Σ_{j=1}^{n−k} y_{k+j} Aa_{k+j} ∈ Im A.

We note here that the proof actually only required the cardinality of the field k to be more than r, the maximum rank of a matrix in V.

Lemma 1. Let V be a space of m × n matrices over any field k and partition each A ∈ V as

    A = [ A_1  A_2 ]
        [ A_3  A_4 ]

Let W = { A_1 : A ∈ V } and U = { A ∈ V : A_1 = 0 }. Then dim W + dim U = dim V.

Proof. Let dim W = s, dim U = t, and dim V = k. There must exist independent matrices B_1, B_2, ..., B_s ∈ V so that if

    B_i = [ B_{i,1}  B_{i,2} ]
          [ B_{i,3}  B_{i,4} ]

then {B_{1,1}, B_{2,1}, ..., B_{s,1}} is a basis of W. Extend B_1, B_2, ..., B_s to a basis B = {B_1, B_2, ..., B_s, ..., B_k} of V. For 1 ≤ j ≤ k − s let

    B_{s+j} = [ B_{s+j,1}  B_{s+j,2} ]
              [ B_{s+j,3}  B_{s+j,4} ]

Then

    B_{s+j,1} = c_{s+j,1} B_{1,1} + c_{s+j,2} B_{2,1} + ... + c_{s+j,s} B_{s,1}

for suitable scalars c_{s+j,1}, c_{s+j,2}, ..., c_{s+j,s}. Replace B_{s+j} by

    B'_{s+j} = B_{s+j} − (c_{s+j,1} B_1 + c_{s+j,2} B_2 + ... + c_{s+j,s} B_s)

and let B' = {B_1, B_2, ..., B_s, B'_{s+1}, B'_{s+2}, ..., B'_k}. It is easy to show that B' is a basis of V and that U = span{B'_{s+1}, B'_{s+2}, ..., B'_k}.

Note: Lemma 1 is more significant than it first appears. For our proof we chose A_1 as the space W, but in principle there is nothing special about that choice. In fact a result similar to the statement of the Lemma holds if A_1 is replaced by any set of r fixed positions in the matrices found in V, so long as U is chosen as the set of complementary positions. We will use this principle repeatedly in Section 3.

We need the following lemma in Section 3. It is equivalent to a known result, but we include a short and simple proof.

Lemma 2. Let V be a subspace of M_{m,n}(k), where k is any field, and let

    V^R = { A ∈ M_{n,m}(k) : XA = 0 for all X ∈ V }.

Then dim V + dim V^R ≤ mn.

Proof. Let V_i be the space spanned by the i-th rows of the elements of V, and let V_i^R be the space spanned by the i-th columns of the elements of V^R. Then V_i^R is contained in the nullspace of A_i, where A_i is a matrix whose rows form a basis of V_i. This implies that dim V_i + dim V_i^R ≤ n. But now

    dim V + dim V^R ≤ Σ_{i=1}^{m} dim V_i + Σ_{i=1}^{m} dim V_i^R = Σ_{i=1}^{m} (dim V_i + dim V_i^R) ≤ mn.
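To make the definition of V^R concrete, the inequality of Lemma 2 is easy to check numerically for a randomly chosen space. The following is a minimal sketch, assuming NumPy; the spanning matrices and the rank-based dimension counts are ours and are only for illustration.

```python
import numpy as np

# Sketch of Lemma 2: for a space V of m x n matrices spanned by X_1, ..., X_d,
# V^R = { A (n x m) : X A = 0 for all X in V }, and dim V + dim V^R <= mn.
rng = np.random.default_rng(3)
m, n, d = 4, 5, 3
spanning = [rng.integers(-2, 3, (m, n)).astype(float) for _ in range(d)]

# dim V = rank of the vectorized spanning set
dim_V = np.linalg.matrix_rank(np.stack([X.ravel() for X in spanning]))

# A lies in V^R exactly when every column of A lies in the common kernel of the X_i,
# i.e. in the null space of the stacked matrix S, so dim V^R = m * dim ker S.
S = np.vstack(spanning)
dim_VR = m * (n - np.linalg.matrix_rank(S))

print(dim_V, dim_VR, dim_V + dim_VR <= m * n)   # the last value should be True
```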

Proof As in the proof of Theorem 1 we can assume that each B V is of the form B1 B (1) B B 4 where B 4 is r r, and we showed there that B 1 = There we considered the determinant of any (r + 1) (r + 1) submatrix of P 1 (B + xa)q containing B 4 + xi r The fact that the coefficient of x n 1 in this polynomial must be implies that each row of B is orthogonal (in the usual sense) to each column of B Hence B B = Let W = { ( B 4 ) { B and U = V B In U, let W 1 = { ( B ) { B and U 1 = U ( ) B X If B = U 1 and C = U, then B + C U, which Y implies that B Y =, and so W 1 {(B ) R Now by Lemma Then by Lemma 1 dim W 1 + dim {(B ) = dim W 1 + dim U 1 (n r)r dim U = dim W 1 + dim U 1 (n r)r and dim W + dim U = dim V r + (n r)r = nr Using the techniques of the previous lemmas and theorems, we show how to construct (up to equivalence) all spaces of n n matrices of bounded rank r and dimension nr The derivation is somewhat tedious, but we include it here because we will use the same method in Section 4 to characterize the spaces of matrices of nilindex having maximum dimension Let V be such a space Then if A V we may assume that A A = A A 4 where A A =, dim W = dim {(A 4 ) = r and { A dim U = dim V = r(n r) A Suppose there exists a B U of the form B B = B 5

such that there is a non-zero entry b = b i,j in the B corner of B Let c = b k,l be any entry in the B corner of B Since dim W = r there must exist a matrix D in V such that D D = D D 4 in which the j th column of D 4 and the k th row of D 4 are filled with s and the remaining submatrix of D 4 consists of the identity I r 1 Now let S be the (r + 1) (r + 1) submatrix of B + xd containing b, c and D 4 Then det S = and the constant term of the polynomial is ±bc and so c = Hence B = ( B ) But if X U then X + xb U and it follows that in fact { A U = Finally let E be any matrix in V where E E = E E 4 and let X be any matrix in U Then E + xx V and hence E {(A ) R But dim {(A ) = dim U = r(n r) and so Lemma implies that E = ; therefore { A V = A 4 where A and A are arbitrary A similar argument shows that if no B U has a suitable b i,j in the B corner, then { V = A A 4 We summarize this discussion in the following Theorem Let V be a space of n n matrices of rank at most r over a field k with card k n If dim V = nr then up to equivalence V is of the form { { A or () A 4 A A 4 where A 4 is r r 6

3 The Main Result

Theorem 4. Let V be a space of n × n nilpotent matrices of index 2 over a field k where card k > n/2. Suppose rank X ≤ r for all X ∈ V. Then dim V ≤ r(n − r).

Proof. Let Q ∈ V be such that rank Q = r. By rearranging a Jordan basis it is easy to show that Q is similar to

    [ 0  0  I_r ]
    [ 0  0  0   ]
    [ 0  0  0   ]

where I_r is an r × r identity matrix and the diagonal blocks have sizes r, n − 2r and r. Without loss of generality we replace Q by the above matrix. Let A ∈ V. The proof of Theorem 1 actually only requires that card k > r, and r ≤ n/2 here, so we can apply Theorem 1 to assume that

    A = [ A_1  A_2  A_3 ]
        [ 0    0    A_4 ]
        [ 0    0    A_5 ]

where A_1, A_3 and A_5 are r × r matrices. But (Q + A)^2 = QA + AQ = 0, and this shows that A_5 = −A_1. We proceed in cases, depending on whether A_1 = 0 for every A ∈ V and on whether the blocks A_2 and A_4 are present (that is, on whether r < n/2).

Case 1. Suppose that A_1 = 0 for every A ∈ V, so that each A ∈ V is of the form

    [ 0  A_2  A_3 ]
    [ 0  0    A_4 ]
    [ 0  0    0   ]

Now let W = { (A_3) } and U = { A ∈ V : A_3 = 0 }. Clearly dim W ≤ r^2. In U, let W_1 = { (A_2) } and U_1 = { A ∈ U : A_2 = 0 }. Suppose B ∈ U and C ∈ U_1, so that

    B = [ 0  B_2  0   ]        and        C = [ 0  0  0   ]
        [ 0  0    B_4 ]                       [ 0  0  C_4 ]
        [ 0  0    0   ]                       [ 0  0  0   ]

Then (B + C)^2 = 0 implies that B_2 C_4 = 0. So if T = { (A_4) : A ∈ U_1 }, then T ⊆ W_1^R, and by Lemma 2

    dim W_1 + dim T = dim W_1 + dim U_1 ≤ (n − 2r)r,

so that dim U = dim W_1 + dim U_1 ≤ (n − 2r)r by Lemma 1. Applying Lemma 1 once more,

    dim V = dim W + dim U ≤ r^2 + (n − 2r)r = nr − r^2.        (3)
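Before turning to the remaining cases, the block computation above (that (Q + A)^2 = 0 forces A_5 = −A_1) can be checked symbolically. A minimal sketch, assuming SymPy; the symbol m stands for the middle block size n − 2r.

```python
from sympy import MatrixSymbol, ZeroMatrix, Identity, BlockMatrix, block_collapse, symbols

r, m = symbols('r m', positive=True, integer=True)   # m plays the role of n - 2r
A1, A3, A5 = (MatrixSymbol(name, r, r) for name in ('A1', 'A3', 'A5'))
A2 = MatrixSymbol('A2', r, m)
A4 = MatrixSymbol('A4', m, r)
Z = ZeroMatrix

Q = BlockMatrix([[Z(r, r), Z(r, m), Identity(r)],
                 [Z(m, r), Z(m, m), Z(m, r)],
                 [Z(r, r), Z(r, m), Z(r, r)]])
A = BlockMatrix([[A1, A2, A3],
                 [Z(m, r), Z(m, m), A4],
                 [Z(r, r), Z(r, m), A5]])

# Since Q^2 = 0 and A^2 = 0, the condition (Q + A)^2 = 0 reduces to QA + AQ = 0.
print(block_collapse(Q * A + A * Q))
# Expected: the only (potentially) nonzero block sits in position (1, 3) and equals
# A1 + A5, which is why A5 = -A1.
```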

Case 2. Suppose the blocks A_2 and A_4 do not exist. In this case r = n/2 and each A ∈ V is of the form

    A = [ A_1  A_3 ]
        [ 0   −A_1 ]

If A_1 = 0 for each A ∈ V then dim V ≤ r^2 = nr − r^2, so we assume there exists an A with A_1 ≠ 0. Let W = { (A_1) } and let r_1 be the largest rank of any matrix in W. Since A^2 = 0 forces A_1^2 = 0, W is a space of nilpotent matrices of index 2 and bounded rank r_1, so by induction we may assume dim W ≤ r_1(r − r_1). As above, there exists a matrix in V which is similar to

    Q_1 = [ N  A_2 ]
          [ 0  −N  ]

where N is the canonical r × r nilpotent matrix of index 2 and rank r_1. Here Q_1^2 = 0 is equivalent to NA_2 = A_2 N, and, writing A_2 in blocks compatible with N, this forces an r_1 × (r − r_1) block of A_2 to be 0. Now let U = { B ∈ V : B_1 = 0 }, so that each B ∈ U has the form

    B = [ 0  B_3 ]
        [ 0  0   ]

If B ∈ U then (Q_1 + B)^2 = 0 implies that NB_3 = B_3 N; in particular the corresponding r_1 × (r − r_1) block of B_3 is 0, and so dim U ≤ r^2 − r_1(r − r_1). Hence by Lemma 1

    dim V = dim W + dim U ≤ rr_1 − r_1^2 + r^2 − r_1(r − r_1) = r^2 = nr − r^2.

Case 3. Suppose there exists an A ∈ V with A_1 ≠ 0 and the blocks A_2 and A_4 do exist. Note that r < n/2. Each A ∈ V is of the form

    A = [ A_1  A_2  A_3 ]
        [ 0    0    A_4 ]
        [ 0    0   −A_1 ]

Let W = { (A_1) }. Then, as above, by induction we may assume dim W ≤ rr_1 − r_1^2, where r_1 is the largest rank of any matrix in W. Also we may assume there is a matrix in V which is similar to

    Q_1 = [ N  A_2'  A_3' ]
          [ 0  0     A_4' ]
          [ 0  0    −N    ]

where N is again the canonical r × r nilpotent matrix of index 2 and rank r_1. Partitioning compatibly with N, the block A_2' splits into an (r − r_1) × (n − 2r) piece and an r_1 × (n − 2r) piece, and A_4' splits into an (n − 2r) × r_1 piece and an (n − 2r) × (r − r_1) piece. Then Q_1^2 = 0 implies that the r_1 × (n − 2r) piece of A_2' and the (n − 2r) × r_1 piece of A_4' are 0.

Let U = { A ∈ V : A_1 = 0 }, so that each B ∈ U has the form

    B = [ 0  B_2  B_3 ]
        [ 0  0    B_4 ]
        [ 0  0    0   ]

As above, if B ∈ U then (Q_1 + B)^2 = 0 forces the r_1 × (n − 2r) piece of B_2 and the (n − 2r) × r_1 piece of B_4 to vanish, and it ties the remaining pieces of B_3 to one another and to B_2 and B_4; the sub-block of B_3 left unrestricted is a block B' of size (r − r_1) × (r − r_1). Also, if x ∈ k then rank(Q_1 + xB) ≤ r, and this implies that rank B' ≤ r − 2r_1. Hence if S = { (B') : B ∈ U }, then dim S ≤ (r − 2r_1)(r − r_1) by Theorem 2. Now in U let

    W_1 = { (B_3) : B ∈ U }        and        T = { B_3 : B ∈ U with B' = 0 }.        (4)

Then, using Lemma 1 again,

    dim W_1 = dim S + dim T ≤ (r − 2r_1)(r − r_1) + rr_1 − r_1^2 = r^2 − 2rr_1 + r_1^2.

Finally let U_1 = { B ∈ U : B_3 = 0 }. Using Lemma 2 and the argument found in Case 1, it can be shown that dim U_1 ≤ (n − 2r)(r − r_1). Now using Lemma 1 and simplifying,

    dim V = dim W + dim U = dim W + dim W_1 + dim U_1
          ≤ rr_1 − r_1^2 + r^2 − 2rr_1 + r_1^2 + (n − 2r)(r − r_1)
          = nr − r^2 + rr_1 − nr_1.

But r < n/2, so rr_1 < nr_1, and hence

    dim V < r(n − r).
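Looking back at Case 2, the block identity that drives the induction there can be verified the same way: for A = [[A_1, A_3], [0, −A_1]], the condition A^2 = 0 is equivalent to A_1^2 = 0 together with A_1 A_3 = A_3 A_1, so the A_1 blocks themselves form a space of nilpotent matrices of index 2. A minimal sketch, assuming SymPy.

```python
from sympy import MatrixSymbol, ZeroMatrix, BlockMatrix, block_collapse, symbols

r = symbols('r', positive=True, integer=True)
A1 = MatrixSymbol('A1', r, r)
A3 = MatrixSymbol('A3', r, r)

A = BlockMatrix([[A1, A3],
                 [ZeroMatrix(r, r), -A1]])

print(block_collapse(A * A))
# Expected blocks: (1,1) = A1**2, (1,2) = A1*A3 - A3*A1, (2,2) = A1**2, the rest zero,
# so A^2 = 0 forces A1**2 = 0 and A1*A3 = A3*A1.
```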

There are some important consequences that follow immediately from this theorem. Indeed, it was questions like these that originally interested us in spaces of nilpotent matrices.

Corollary 1. Let V be a space of pairwise anti-commuting n × n matrices over a field k where card k > n/2 and char k ≠ 2. If rank A ≤ r for all A ∈ V then dim V ≤ r(n − r).

Proof. Since char k ≠ 2, putting Y = X in the relation XY = −YX shows that X^2 = 0 for every X ∈ V, so the matrices in V are nilpotent of index 2 and Theorem 4 applies.

Let A be the algebra of all n × n matrices over a field k where char k ≠ 2. Define a new multiplication by X ∘ Y = (1/2)(XY + YX). Then A with this new operation is a Jordan algebra; it is often called a special Jordan algebra.

Corollary 2. Let A be the special Jordan algebra constructed from the n × n matrices over a field k where card k > n/2 and char k ≠ 2. If A_1 is a zero subalgebra of A then dim A_1 ≤ r(n − r), where r is the maximum rank of any matrix in A_1.

Proof. In a zero subalgebra X ∘ Y = 0, so XY + YX = 0, and the result follows directly from Corollary 1.

4 The Spaces of Maximum Dimension

We now show that the inequality in our main result is sharp by constructing spaces which have the maximum dimension r(n − r). In addition, the spaces constructed below are, up to similarity, the only ones reaching the maximum dimension. Again we consider the three cases.

Case 1. Let V_1 be the space of all matrices of the form

    [ 0  A_2  A_3 ]
    [ 0  0    0   ]
    [ 0  0    0   ]

(block sizes r, n − 2r and r as before), where A_2 is any r × (n − 2r) matrix and A_3 is any r × r matrix. Clearly V_1 is a space of nilpotent matrices of index 2 and bounded rank r, and dim V_1 = nr − r^2. Similarly, the space V_2 of all matrices of the form

    [ 0    0  0 ]
    [ A_4  0  0 ]
    [ A_3  0  0 ]

where A_4 is any (n − 2r) × r matrix and A_3 is any r × r matrix, is also a space of nilpotent matrices of index 2 and bounded rank r with dim V_2 = nr − r^2. If a space of this type has the maximum dimension r(n − r), we can show that it must be one of these; the argument is very similar to the derivation of Theorem 3, so we omit the details.

Case 2. Let V_3 be the space of all matrices of the form

    [ 0  A ]
    [ 0  0 ]

where A is any n/2 × n/2 matrix (n even). Then V_3 is a space of nilpotent matrices of index 2 and bounded rank r = n/2, and dim V_3 = r(n − r) = n^2/4.
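Both constructions are easy to verify numerically. The following is a minimal sketch, assuming NumPy, for the Case 1 space V_1 (the Case 2 space can be checked in exactly the same way); random sampling only illustrates the rank and nilpotency claims, it is not a proof.

```python
import numpy as np

# Sketch: V_1 consists of all matrices [[0, A2, A3], [0, 0, 0], [0, 0, 0]]
# with block sizes r, n - 2r, r, so dim V_1 = r(n - 2r) + r^2 = r(n - r).
rng = np.random.default_rng(2)
n, r = 9, 3
print(r * (n - 2 * r) + r * r == r * (n - r))    # True: the dimension count

for _ in range(100):
    M = np.zeros((n, n))
    M[:r, r:] = rng.normal(size=(r, n - r))      # arbitrary blocks A2 and A3
    assert np.allclose(M @ M, 0)                 # nilpotent of index 2
    assert np.linalg.matrix_rank(M) <= r         # rank bounded by r
print("all sampled elements are nilpotent of index 2 with rank at most r")
```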

Again we can show that these are the only subspaces of the Case 2 type that achieve the maximum dimension. The argument is similar to that of Theorem 3 and we omit it.

Case 3. Note that in this case we showed that dim V < nr − r^2, and so no subspaces of maximum dimension of this type exist.

References

[1] M. D. Atkinson and S. Lloyd, Large spaces of matrices of bounded rank, Quart. J. Math. Oxford Ser. (2) 31 (1980), 253–262.

[2] R. A. Brualdi and K. L. Chavey, Linear spaces of Toeplitz and nilpotent matrices, J. Combin. Theory Ser. A 63 (1993), 65–78.

[3] H. Flanders, On spaces of linear transformations with bounded rank, J. London Math. Soc. 37 (1962), 10–16.

[4] M. Gerstenhaber, On nilalgebras and varieties of nilpotent matrices, III, Ann. of Math. (2) 70 (1959), 167–205.

[5] R. Meshulam, On the maximal rank in a subspace of matrices, Quart. J. Math. Oxford Ser. (2) 36 (1985), 225–229.