POLYNOMIAL EQUATIONS OVER MATRICES

Robert Lee Wilson

Here are two well-known facts about polynomial equations over the complex numbers $\mathbb{C}$:

(I) (Vieta Theorem) For any complex numbers $x_1, \dots, x_n$ (not necessarily distinct), there is a unique monic polynomial over $\mathbb{C}$,
$$f(x) = x^n + a_{n-1}x^{n-1} + \dots + a_1 x + a_0 = (x - x_1)\cdots(x - x_n),$$
such that the equation $f(x) = 0$ has roots $x_1, \dots, x_n$ (counting multiplicities). Then
$$a_{n-i} = (-1)^i \sum_{j_1 < \dots < j_i} x_{j_1} \cdots x_{j_i}.$$

(II) (Fundamental Theorem of Algebra) The equation over $\mathbb{C}$
$$x^n + a_{n-1}x^{n-1} + \dots + a_1 x + a_0 = 0$$
has a root. Consequently, it has exactly $n$ roots (counting multiplicities).

This article describes the similar statements that can be made about a polynomial equation
$$X^n + A_{n-1}X^{n-1} + \dots + A_1 X + A_0 = 0$$
over $M_k(\mathbb{C})$, the algebra of $k \times k$ matrices over $\mathbb{C}$. For simplicity I will assume $n = k = 2$. All the results can be extended to general $n$ and $k$, but the notation gets more complicated. Such problems for polynomials of low degree (particularly for $n = 2$) have been treated by several authors, since they arise naturally in control theory and in functional analysis. See, for example, [GHR], [LR]. For general $n$, the solution of the diagonalizable case is due to Fuchs and Schwarz [FS].
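Fact (I) is easy to confirm numerically. The following sketch (not from the article; the cubic and its roots are arbitrary illustrative choices) expands $(x-2)(x+1)(x-3)$ and compares each coefficient with the corresponding signed elementary symmetric function:

```python
from itertools import combinations
from math import prod

# Roots of a monic cubic; an arbitrary illustrative choice.
roots = [2, -1, 3]
n = len(roots)

# Expand f(x) = (x - 2)(x + 1)(x - 3); coefficients are listed from x^3
# down, so coeffs[i] is a_{n-i}, the coefficient of x^(n-i).
coeffs = [1]
for r in roots:
    shifted = coeffs + [0]                  # current polynomial times x
    scaled = [0] + [r * c for c in coeffs]  # current polynomial times r
    coeffs = [s - c for s, c in zip(shifted, scaled)]

print(coeffs)  # [1, -4, 1, 6], i.e. f(x) = x^3 - 4x^2 + x + 6

# Vieta: a_{n-i} = (-1)^i e_i, where e_i is the i-th elementary symmetric
# function of the roots.
for i in range(1, n + 1):
    e_i = sum(prod(c) for c in combinations(roots, i))
    assert coeffs[i] == (-1) ** i * e_i
```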

The differences between the situation for equations over the complex numbers and the situation for equations over matrices arise for two reasons: multiplication in $M_k(\mathbb{C})$ is not commutative and not all matrices in $M_k(\mathbb{C})$ are invertible.

Analogues of the Vieta Theorem

Let $X_1$ and $X_2$ be roots of the quadratic equation $X^2 + A_1X + A_0 = 0$ over any algebra (not necessarily commutative, e.g., $M_k(\mathbb{C})$). Then we have
$$X_1^2 + A_1X_1 + A_0 = 0 \quad \text{and} \quad X_2^2 + A_1X_2 + A_0 = 0.$$
Taking the difference gives
$$X_1^2 - X_2^2 + A_1(X_1 - X_2) = 0.$$
Replace $X_1^2 - X_2^2$ by
$$X_1(X_1 - X_2) + (X_1 - X_2)X_2.$$
(This is the noncommutative version of the well-known formula $u^2 - v^2 = (u+v)(u-v)$ that holds in the commutative case.) We obtain
$$(1) \qquad X_1(X_1 - X_2) + (X_1 - X_2)X_2 + A_1(X_1 - X_2) = 0.$$
Thus
$$-A_1(X_1 - X_2) = X_1(X_1 - X_2) + (X_1 - X_2)X_2.$$
This has two consequences: First, if $X_1 - X_2$ is invertible we may multiply on the right by its inverse and obtain
$$-A_1 = X_1 + (X_1 - X_2)X_2(X_1 - X_2)^{-1}.$$
To make this look nicer, write $y_1 = X_1$ and $y_2 = (X_1 - X_2)X_2(X_1 - X_2)^{-1}$.

Then
$$-A_1 = y_1 + y_2 \quad \text{and} \quad A_0 = y_2y_1.$$
In other words:
$$A_1 = -(y_1 + y_2), \qquad A_0 = y_2y_1.$$
These are close analogues of the formulas in the commutative case, but they involve rational functions, not just polynomials. This generalization of the Vieta Theorem (which can be extended to equations of degree $n$ by using the theory of quasideterminants) is due to Gelfand and Retakh [GR1], [GR2].

Second, equation (1) may be rewritten as
$$(2) \qquad (X_1 + A_1)(X_1 - X_2) = -(X_1 - X_2)X_2.$$
Now work in $M_2(\mathbb{C})$ and take
$$X_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \qquad X_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
Then
$$(X_1 - X_2)X_2 = \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \neq 0.$$
But the rows of the matrix $(X_1 + A_1)(X_1 - X_2)$ must be contained in the row space of the matrix
$$X_1 - X_2 = \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix},$$
which is spanned by $(1, -1)$, while the nonzero row of $(X_1 - X_2)X_2$ is $(0, 1)$. Thus it is impossible to satisfy (2), and hence there is no quadratic equation over $M_2(\mathbb{C})$ with roots $X_1$ and $X_2$.

Solutions of polynomial equations: how many solutions can there be?

Some examples:

(1) $X^2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ has no solution.

To see this, suppose that $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is a solution and observe that
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^2 = \begin{pmatrix} a^2 + bc & b(a+d) \\ c(a+d) & bc + d^2 \end{pmatrix}.$$
If this is equal to $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, then $b(a+d) = 1$, so $a + d \neq 0$. But $c(a+d) = 0$, so $c = 0$. But then (comparing the entries on the diagonal) $a^2 = 0 = d^2$, so $a = d = 0$ and $a + d = 0$, a contradiction. We will return to this example later and see another, less computational, way to analyze it.

(2) $X^2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$ has infinitely many solutions. For example, $\begin{pmatrix} 0 & x \\ 0 & 0 \end{pmatrix}$ is a solution for any $x \in \mathbb{C}$.

(3) $X^2 = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$ has four solutions:
$$X = \begin{pmatrix} \pm 1 & 0 \\ 0 & \pm 2 \end{pmatrix}.$$
It is clear that these four matrices are solutions. To see that there are no other solutions, let $X = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$. Then, as before,
$$X^2 = \begin{pmatrix} a^2 + bc & b(a+d) \\ c(a+d) & bc + d^2 \end{pmatrix}.$$
If this is equal to $\begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$, then
$$b(a+d) = 0 = c(a+d).$$

If $a + d \neq 0$, then $b = c = 0$, $a^2 = 1$, and $d^2 = 4$, giving the four solutions listed. If $a + d = 0$, then $a = -d$, so $a^2 = d^2$. But then
$$1 = a^2 + bc = bc + d^2 = 4,$$
a contradiction.

(4) $X^2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ has infinitely many solutions. The same analysis as in (3) shows that
$$X = \begin{pmatrix} a & b \\ c & -a \end{pmatrix}$$
is a solution whenever $a^2 + bc = 1$. If $b = c$, these are just the matrices of reflections in $\mathbb{C}^2$.

Here is a question we will answer: For what values of $l$ is there a quadratic equation $X^2 + A_1X + A_0 = 0$ with exactly $l$ solutions? So far, we know that there is such an equation if $l = 0$, $4$, or $\infty$. We have not yet seen the answer for $l = 1, 2, 3, 5, 6, 7, 8, \dots$.

We want to find all solutions to
$$(3) \qquad X^2 + A_1X + A_0 = 0.$$
First we need to think about how we can specify a solution $X$. There are several ways to do this:

(a) Write down the entries of $X$, e.g., $X = \begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}$.

(b) Write down $X\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ (the first column of $X$) and $X\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ (the second column of $X$). E.g.,
$$X\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad \text{and} \quad X\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 3 \end{pmatrix}$$
describes the same matrix as in (a).

(c) Write down $Xv_1$ and $Xv_2$ for any basis $\{v_1, v_2\}$ of $\mathbb{C}^2$. E.g.,
$$X\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \quad \text{and} \quad X\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \end{pmatrix}$$
describes the same matrix as in (a) and (b).

(d) Write down a basis $\{v_1, v_2\}$ for $\mathbb{C}^2$ consisting of eigenvectors for $X$ and specify the corresponding eigenvalue for each eigenvector. (Recall that a nonzero vector $v$ is an eigenvector for $X$ with corresponding eigenvalue $\lambda$ if $Xv = \lambda v$.) Note that requiring that $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ be an eigenvector for $X$ with corresponding eigenvalue $1$ and that $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ be an eigenvector for $X$ with corresponding eigenvalue $3$ describes the same matrix as in (a), (b) and (c).

Note that replacing $v_i$ by $c_iv_i$, where $c_1, c_2 \neq 0$, describes the same matrix. Thus, if we let
$$S = \left\{ \begin{pmatrix} 1 \\ u \end{pmatrix} \,\middle|\, u \in \mathbb{C} \right\} \cup \left\{ \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\},$$
we may assume that the eigenvectors are chosen from $S$.

It is important to note that not every matrix can be described in this way. The matrices that can be described this way are called diagonalizable matrices.

Diagonalizable solutions

We can now find all diagonalizable solutions to equation (3). Let $X$ be such a solution and let $v$ be an eigenvector for $X$ with corresponding eigenvalue $\lambda$. Then
$$0 = (X^2 + A_1X + A_0)v = X^2v + A_1Xv + A_0v = \lambda^2v + \lambda A_1v + A_0v = (\lambda^2I + \lambda A_1 + A_0)v.$$
This is equivalent to requiring that
$$\det(\lambda^2I + \lambda A_1 + A_0) = 0 \quad \text{and} \quad v \in \operatorname{Null}(\lambda^2I + \lambda A_1 + A_0).$$
Thus $\lambda$ is a root of the equation
$$\det(t^2I + tA_1 + A_0) = 0,$$
a fourth degree equation.
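The eigenvalue condition can be checked mechanically: every eigenvalue of a solution must be a root of the quartic $\det(t^2I + tA_1 + A_0)$. A sketch with a hypothetical example equation (the matrices $A_1$, $A_0$ below are illustrative choices):

```python
def det_quartic(t, A1, A0):
    # Evaluate det(t^2 I + t A1 + A0) at a scalar t.
    M = [[t * t * (1 if i == j else 0) + t * A1[i][j] + A0[i][j]
          for j in range(2)] for i in range(2)]
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Hypothetical example equation X^2 + A1 X + A0 = 0.
A1 = [[-1, 1], [0, 1]]
A0 = [[0, 1], [0, 0]]

# Its quartic is t^2 (t - 1)(t + 1); the integer roots in a small window:
roots = [t for t in range(-5, 6) if det_quartic(t, A1, A0) == 0]
print(roots)  # [-1, 0, 1]

# X = diag(0, -1) solves this equation, and its eigenvalues 0 and -1 are
# indeed among the roots of the quartic.
X = [[0, 0], [0, -1]]
val = [[mul(X, X)[i][j] + mul(A1, X)[i][j] + A0[i][j] for j in range(2)]
       for i in range(2)]
assert val == [[0, 0], [0, 0]]
assert 0 in roots and -1 in roots
```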

Each of its roots is a possible eigenvalue for $X$, and the corresponding eigenvectors may be taken from the nonzero elements of
$$S \cap \operatorname{Null}(\lambda^2I + \lambda A_1 + A_0).$$
Now the matrix $t^2I + tA_1 + A_0$ is a matrix with entries in $\mathbb{C}[t]$. A general result (rational canonical form) says that
$$P(t)(t^2I + tA_1 + A_0)Q(t) = \begin{pmatrix} f_1(t) & 0 \\ 0 & f_2(t) \end{pmatrix},$$
where $P(t)$ and $Q(t)$ are invertible matrices over $\mathbb{C}[t]$ with $\det P(t) = \det Q(t) = 1$, and $f_1(t)$, $f_2(t)$ are monic polynomials in $t$ with $f_1(t)$ dividing $f_2(t)$. (The polynomial $f_1(t)$ is the greatest common divisor of all of the entries of the matrix $t^2I + tA_1 + A_0$.) Then
$$\det(t^2I + tA_1 + A_0) = f_1(t)f_2(t),$$
and $\lambda$ is a root of $\det(t^2I + tA_1 + A_0) = 0$ if and only if $(t - \lambda) \mid f_2(t)$. Furthermore, if $(t - \lambda) \mid f_1(t)$, then $\lambda^2I + \lambda A_1 + A_0 = 0$ and so
$$\operatorname{Null}(\lambda^2I + \lambda A_1 + A_0) = \mathbb{C}^2.$$
If $(t - \lambda) \mid f_2(t)$ but $(t - \lambda) \nmid f_1(t)$, then
$$\operatorname{Null}(\lambda^2I + \lambda A_1 + A_0) = \operatorname{Span}\left\{ Q(\lambda)\begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}$$
is a one-dimensional subspace.

We can now describe how to find all diagonalizable solutions of equation (3). Denote the roots of $\det(t^2I + tA_1 + A_0) = 0$ by $\lambda_i$, $1 \le i \le m$, where $m \le 4$.

First suppose $f_1(t) = 1$. Then, for any $i$, $1 \le i \le m$, we have $\dim \operatorname{Null}(\lambda_i^2I + \lambda_iA_1 + A_0) = 1$. Therefore this space is spanned by a single vector of $S$. Denote this vector by $v_i$. Then, for any pair $(i, j)$, $1 \le i < j \le m$, with $v_i \neq v_j$, there is a solution $X$ to equation (3) such that $v_i$ is an eigenvector with eigenvalue $\lambda_i$ and $v_j$ is an eigenvector with eigenvalue $\lambda_j$.

Every diagonalizable solution to equation (3) arises in this way. Note that in this case there are at most $6 = \binom{4}{2}$ diagonalizable solutions.

Next suppose $f_1(t)$ has a single root $\lambda_1$. Then $m \le 3$ (for $f_2(t)$ has at most three roots, one of which is $\lambda_1$). In this case,
$$\dim \operatorname{Null}(\lambda_1^2I + \lambda_1A_1 + A_0) = 2 \quad \text{and} \quad \dim \operatorname{Null}(\lambda_i^2I + \lambda_iA_1 + A_0) = 1 \ \text{for } 2 \le i \le m.$$
Therefore, for $2 \le i \le m$, $\operatorname{Null}(\lambda_i^2I + \lambda_iA_1 + A_0)$ is spanned by a single vector of $S$. Denote this vector by $v_i$. Now, since $\operatorname{Null}(\lambda_1^2I + \lambda_1A_1 + A_0) = \mathbb{C}^2$, $\lambda_1I$ is a solution. Also, if $2 \le i \le m$ and if $v \in S$ with $v \neq v_i$, there is a solution $X$ to equation (3) such that $v$ is an eigenvector for $X$ with eigenvalue $\lambda_1$ and $v_i$ is an eigenvector for $X$ with eigenvalue $\lambda_i$. Here there are infinitely many choices for $v$ and hence infinitely many solutions. In addition, if $v_2 \neq v_3$, there is a solution $X$ to equation (3) such that $v_2$ is an eigenvector for $X$ with eigenvalue $\lambda_2$ and $v_3$ is an eigenvector for $X$ with eigenvalue $\lambda_3$. Every diagonalizable solution to equation (3) arises in this way. Notice that in this case there are infinitely many diagonalizable solutions if and only if $m > 1$.

Finally, suppose $f_1(t)$ has two distinct roots $\lambda_1 \neq \lambda_2$. Then $f_1(t) = f_2(t)$, and $\lambda_1$, $\lambda_2$ are the only roots of $\det(t^2I + tA_1 + A_0) = 0$. Furthermore,
$$\dim \operatorname{Null}(\lambda_i^2I + \lambda_iA_1 + A_0) = 2, \quad i = 1, 2.$$
Then $\lambda_1I$, $\lambda_2I$ are solutions, and for any pair $(v_1, v_2)$ of distinct elements of $S$ there is a solution $X$ to equation (3) such that $v_1$ is an eigenvector for $X$ with eigenvalue $\lambda_1$ and $v_2$ is an eigenvector for $X$ with eigenvalue $\lambda_2$. Every diagonalizable solution to equation (3) arises in this way. In this case there are infinitely many diagonalizable solutions.

This discussion goes over without serious change to the case of equations of degree $n$ over $M_k(\mathbb{C})$. The maximum finite number of diagonalizable solutions becomes $\binom{nk}{k}$. This maximum finite number of diagonalizable solutions (and the analysis of the problem using eigenvectors and eigenvalues) is due to Fuchs and Schwarz [FS].
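The recipe above, i.e. pick roots $\lambda_i \neq \lambda_j$ whose associated vectors satisfy $v_i \neq v_j$ and let $X$ have $v_i$, $v_j$ as eigenvectors, can be sketched in code. The equation below is a hypothetical illustration (not taken from the article): its quartic has roots $0, 1, -1, 2$, and we assemble the solution for the pair $(-1, 2)$ as $X = V \operatorname{diag}(-1, 2) V^{-1}$:

```python
from fractions import Fraction as F

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

# Hypothetical example equation X^2 + A1 X + A0 = 0.
A1 = [[F(-1), F(1)], [F(2), F(-1)]]
A0 = [[F(0), F(-1)], [F(0), F(0)]]

# Eigenvalue -1 with eigenvector (1,1) and eigenvalue 2 with eigenvector
# (1,-2); the columns of V are the chosen eigenvectors.
V = [[F(1), F(1)], [F(1), F(-2)]]
D = [[F(-1), F(0)], [F(0), F(2)]]
X = mul(mul(V, D), inv(V))
print(X)  # [[0, -1], [-2, 1]] (as Fractions)

# Check that X solves the equation.
val = [[mul(X, X)[i][j] + mul(A1, X)[i][j] + A0[i][j] for j in range(2)]
       for i in range(2)]
assert val == [[0, 0], [0, 0]]
```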

Examples:

(1) Find all diagonalizable solutions of $X^2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$.

The possible eigenvalues are the roots of
$$\det\left(t^2I - \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\right) = t^4 = 0.$$
Thus $0$ is the only possible eigenvalue. The possible eigenvectors are the nonzero elements of
$$\operatorname{Null}\begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}.$$
These are just the nonzero multiples of $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Thus we cannot find a basis for $\mathbb{C}^2$ consisting of eigenvectors for $X$, so there are no diagonalizable solutions.

(2) Find all diagonalizable solutions of $X^2 = 0$.

The possible eigenvalues are the roots of $\det(t^2I) = t^4 = 0$, so, again, $0$ is the only possible eigenvalue. The possible eigenvectors are the nonzero elements of the nullspace of the zero matrix, i.e., all nonzero vectors in $\mathbb{C}^2$. Thus we may take any basis for $\mathbb{C}^2$ and declare that both elements of the basis are eigenvectors with corresponding eigenvalue $0$. This means that $X$ is the zero matrix. Thus there is only one diagonalizable solution.

(3) Find all diagonalizable solutions of $X^2 = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$.

The possible eigenvalues are the roots of
$$\det\left(t^2I - \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}\right) = \det\begin{pmatrix} t^2-1 & 0 \\ 0 & t^2-4 \end{pmatrix} = (t-1)(t+1)(t-2)(t+2) = 0.$$
Thus the possible eigenvalues are $1, -1, 2, -2$. If $1$ or $-1$ is an eigenvalue, the corresponding eigenvector must be a nonzero element in
$$\operatorname{Null}\begin{pmatrix} 0 & 0 \\ 0 & -3 \end{pmatrix},$$
that is, a nonzero multiple of $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. If $2$ or $-2$ is an eigenvalue, the corresponding eigenvector must be a nonzero element of
$$\operatorname{Null}\begin{pmatrix} 3 & 0 \\ 0 & 0 \end{pmatrix},$$
that is, a nonzero multiple of $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$.

Thus, in determining $X$, there are two choices ($\pm 1$) for the eigenvalue to assign to $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and two choices ($\pm 2$) for the eigenvalue to assign to $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$. Thus there are four diagonalizable solutions.

(4) Find all diagonalizable solutions of $X^2 = I$.

The possible eigenvalues are the roots of
$$\det(t^2I - I) = (t-1)^2(t+1)^2 = 0.$$
Thus the possible eigenvalues are $1$ and $-1$. In either case, any vector in $S$ is a possible eigenvector. Thus we may choose $\{v_1, v_2\}$ to be any pair of distinct elements of $S$ and declare both elements to be eigenvectors for $1$. This gives the identity matrix. We may also declare both elements to be eigenvectors for $-1$. This gives $-I$. But we may also declare $v_1$ to be an eigenvector corresponding to $1$ and $v_2$ to be an eigenvector corresponding to $-1$. There are infinitely many ways to do this, so there are infinitely many solutions. (Note that if we choose an orthogonal basis for $\mathbb{C}^2$ this procedure produces the matrix of a reflection.)

Non-diagonalizable solutions

It is well-known that if a $k$ by $k$ matrix has $k$ distinct eigenvalues, then the matrix is diagonalizable. Thus if a $2$ by $2$ matrix is not diagonalizable, it can have only one eigenvalue. Note that if $X^2 + A_1X + A_0 = 0$ and if we set $Y = X + rI$, then
$$Y^2 + (A_1 - 2rI)Y + (r^2I - rA_1 + A_0) = 0.$$
Since $\lambda$ is an eigenvalue of $X$ if and only if $r + \lambda$ is an eigenvalue of $Y$, we may assume, by replacing $X$ by $X + rI$ for an appropriate $r$, that $0$ is an eigenvalue of $X$. Since we are assuming that $X$ is not diagonalizable, $0$ is then the only eigenvalue of $X$. This implies that $X^2 = 0$ (for the characteristic polynomial of $X$ must be $t^2$ and, by the Cayley-Hamilton Theorem, $X$ satisfies its characteristic polynomial).
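The substitution $Y = X + rI$ above is a polynomial identity, so it can be spot-checked on arbitrary matrices: for any $X$, the expression $(X+rI)^2 + (A_1 - 2rI)(X+rI) + (r^2I - rA_1 + A_0)$ equals $X^2 + A_1X + A_0$. A quick sketch with arbitrary integer matrices (illustrative choices, not solutions of anything in particular):

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(*Ms):
    return [[sum(M[i][j] for M in Ms) for j in range(2)] for i in range(2)]

def smul(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

I2 = [[1, 0], [0, 1]]

# Arbitrary test data.
X = [[1, 2], [3, 4]]
A1 = [[0, 1], [1, 0]]
A0 = [[2, 0], [0, 5]]
r = 3

Y = add(X, smul(r, I2))
B1 = add(A1, smul(-2 * r, I2))                 # A1 - 2rI
B0 = add(smul(r * r, I2), smul(-r, A1), A0)    # r^2 I - r A1 + A0

lhs = add(mul(Y, Y), mul(B1, Y), B0)   # Y^2 + B1 Y + B0
rhs = add(mul(X, X), mul(A1, X), A0)   # X^2 + A1 X + A0
assert lhs == rhs
print("shift identity holds")
```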

Hence it is sufficient to consider nilpotent solutions of $X^2 + A_1X + A_0 = 0$. Since we are assuming that $X$ is not diagonalizable, $X \neq 0$. Hence there is a linearly independent set $\{v_1, v_2\}$ such that
$$Xv_2 = v_1 \quad \text{and} \quad Xv_1 = 0.$$
Then
$$0 = (X^2 + A_1X + A_0)v_1 = X^2v_1 + A_1Xv_1 + A_0v_1 = A_0v_1$$
and
$$0 = (X^2 + A_1X + A_0)v_2 = X^2v_2 + A_1Xv_2 + A_0v_2 = A_1v_1 + A_0v_2.$$

Recall that $\mathbb{C}[t]$ denotes the algebra of polynomials in $t$ with coefficients from the complex numbers. Let $(t^2)$ denote the ideal in $\mathbb{C}[t]$ generated by $t^2$, that is, the set of all polynomials divisible by $t^2$. Then the quotient algebra $\mathbb{C}[t]/(t^2)$ denotes the algebra of polynomials where we set $t^2 = 0$. We write $(\mathbb{C}[t]/(t^2))^2$ for the set of all $\begin{pmatrix} g_1(t) \\ g_2(t) \end{pmatrix}$ where $g_1(t), g_2(t) \in \mathbb{C}[t]/(t^2)$. Note that $M_2(\mathbb{C}[t])$ acts on $(\mathbb{C}[t]/(t^2))^2$ by matrix multiplication. Note also that we may write any element of $(\mathbb{C}[t]/(t^2))^2$ uniquely in the form $w_0 + tw_1$ where $w_0, w_1 \in \mathbb{C}^2$. Now let $v_1, v_2$ be as above; that is,
$$A_0v_1 = 0 \quad \text{and} \quad A_1v_1 + A_0v_2 = 0.$$
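The two linear conditions $A_0v_1 = 0$ and $A_1v_1 + A_0v_2 = 0$ are easy to confirm on a concrete nilpotent solution. A sketch for the illustrative equation $X^2 + X + \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = 0$ (an assumed example, verified inside the code), whose nilpotent solution is $X = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}$:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mv(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

# Illustrative equation and its nilpotent solution (verified below).
A1 = [[1, 0], [0, 1]]
A0 = [[0, 1], [0, 0]]
X = [[0, -1], [0, 0]]

# X is nilpotent and solves the equation.
assert mul(X, X) == [[0, 0], [0, 0]]
val = [[mul(X, X)[i][j] + mul(A1, X)[i][j] + A0[i][j] for j in range(2)]
       for i in range(2)]
assert val == [[0, 0], [0, 0]]

# A pair v1, v2 with X v2 = v1 != 0 and X v1 = 0:
v2 = [0, 1]
v1 = mv(X, v2)
assert v1 == [-1, 0] and mv(X, v1) == [0, 0]

# The two linear conditions from the text:
assert mv(A0, v1) == [0, 0]
assert [mv(A1, v1)[i] + mv(A0, v2)[i] for i in range(2)] == [0, 0]
print("conditions A0 v1 = 0 and A1 v1 + A0 v2 = 0 hold")
```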

Note that this is equivalent to
$$(4) \qquad (t^2I + tA_1 + A_0)(v_1 + tv_2) = 0 \quad \text{in } (\mathbb{C}[t]/(t^2))^2.$$
Recall that
$$P(t)(t^2I + tA_1 + A_0)Q(t) = \begin{pmatrix} f_1(t) & 0 \\ 0 & f_2(t) \end{pmatrix},$$
where $P(t)$ and $Q(t)$ are invertible matrices over $\mathbb{C}[t]$ and $f_1(t)$, $f_2(t)$ are monic polynomials in $t$ with $f_1(t)$ dividing $f_2(t)$. The same decomposition holds over $\mathbb{C}[t]/(t^2)$. Note that (4) has no solution with $v_1 \neq 0$ unless $t^2 \mid f_2(t)$; therefore, from now on we assume $t^2 \mid f_2(t)$. Note that if $t^2 \mid f_1(t)$, then our equation is $X^2 = 0$ and the set of solutions is the set of all nilpotent matrices.

After setting $t^2 = 0$, the left-hand column of $Q(t)$ may be written as $p_0 + tp_1$ and the right-hand column may be written as $q_0 + tq_1$, where $p_0, p_1, q_0, q_1 \in \mathbb{C}^2$. Note that $\{p_0, q_0\}$ is linearly independent, since $p_0$ and $q_0$ are the columns of the invertible matrix $Q(0)$.

If $f_1(t) = t$, then the set of solutions of (4) is
$$\operatorname{Span}\{tp_0,\ q_0 + tq_1,\ tq_0\}.$$
Thus there is a solution $X$ with
$$X(ap_0 + q_1) = q_0, \qquad X(q_0) = 0,$$
whenever $\{ap_0 + q_1, q_0\}$ is linearly independent. Since $\{p_0, q_0\}$ is linearly independent, there are infinitely many such solutions.

If $t \nmid f_1(t)$, then the set of solutions of (4) is
$$\operatorname{Span}\{q_0 + tq_1,\ tq_0\}.$$
If $\{q_0, q_1\}$ is linearly independent, there is a unique nilpotent solution $X$ with
$$Xq_1 = q_0, \qquad Xq_0 = 0.$$
If $\{q_0, q_1\}$ is linearly dependent, there is no nilpotent solution.

Example: Find all nilpotent solutions of $X^2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$.

We write
$$\begin{pmatrix} -1 & 0 \\ -t^2 & -1 \end{pmatrix}\begin{pmatrix} t^2 & -1 \\ 0 & t^2 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & -t^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & t^4 \end{pmatrix}.$$
Thus
$$q_0 = \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \qquad q_1 = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$
Since the set $\{q_0, q_1\}$ is linearly dependent, there is no nilpotent solution.

Example: Find all nilpotent solutions of $X^2 + X + \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} = 0$.

Here
$$t^2I + tI + \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} t^2+t & -1 \\ 0 & t^2+t \end{pmatrix}$$
and
$$\begin{pmatrix} -1 & 0 \\ -(t^2+t) & -1 \end{pmatrix}\begin{pmatrix} t^2+t & -1 \\ 0 & t^2+t \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & -(t^2+t) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & t^2(t+1)^2 \end{pmatrix}.$$
Thus
$$q_0 = \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \qquad q_1 = \begin{pmatrix} 0 \\ -1 \end{pmatrix},$$
so there is a unique nilpotent solution, namely $X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$.
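Uniqueness of a nilpotent solution can be corroborated by brute force. For the illustrative equation $X^2 + X + \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} = 0$: every $2 \times 2$ nilpotent matrix has the form $\begin{pmatrix} a & b \\ c & -a \end{pmatrix}$ with $a^2 + bc = 0$, and a search over a small integer grid (which of course only samples the full complex family) finds exactly one nilpotent solution:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A1 = [[1, 0], [0, 1]]     # the equation X^2 + X + A0 = 0
A0 = [[0, -1], [0, 0]]

found = []
for a in range(-4, 5):
    for b in range(-4, 5):
        for c in range(-4, 5):
            if a * a + b * c != 0:
                continue                   # not nilpotent
            X = [[a, b], [c, -a]]
            val = [[mul(X, X)[i][j] + mul(A1, X)[i][j] + A0[i][j]
                    for j in range(2)] for i in range(2)]
            if val == [[0, 0], [0, 0]]:
                found.append(X)

print(found)  # [[[0, 1], [0, 0]]]
```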

How many solutions can there be?

We can now answer our earlier question: How many solutions can the equation $X^2 + A_1X + A_0 = 0$ have? We have seen that if this equation has a finite number of solutions, then the number of diagonalizable solutions is at most $\binom{m}{2}$, where $m$ is the number of distinct roots of
$$(5) \qquad \det(t^2I + tA_1 + A_0) = 0.$$
We have also seen that the equation can have a nilpotent solution only if $0$ is a root of (5) with multiplicity greater than $1$. Furthermore, the number of nilpotent solutions is $0$, $1$, or $\infty$.

Therefore, if (5) has four distinct roots and the number of solutions is finite, this number is at most $6$. If (5) has a single repeated root (which we may assume to be $0$) and the number of solutions is finite, there is at most $1$ non-diagonalizable solution and at most $3 = \binom{3}{2}$ diagonalizable solutions. Finally, if (5) has two repeated roots and the number of solutions is finite, there are at most two non-diagonalizable solutions (one corresponding to each root of (5)). Also, $m = 2$, so there is at most $1$ diagonalizable solution. Thus, in any case, the maximum possible finite number of solutions is $6$.

The following examples show that for each $l = 0, 1, 2, 3, 4, 5, 6, \infty$ there is an equation with exactly $l$ solutions. Each example may be analyzed using the methods above.

Examples:

(0) $X^2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ has no solutions. (We saw above that it has no diagonalizable solutions and no nilpotent solutions.)

(1a) $X^2 + \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}X + \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} = 0$ has a unique (nilpotent) solution.

We have
$$\det\begin{pmatrix} t^2+t & -1 \\ 0 & t^2 \end{pmatrix} = t^3(t+1),$$
so the possible eigenvalues for a solution $X$ are $0$ and $-1$. The possible eigenvectors corresponding to $0$ are the nonzero elements in the null space of $\begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. The possible eigenvectors corresponding to $-1$ are the nonzero elements in the null space of $\begin{pmatrix} 0 & -1 \\ 0 & 1 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Therefore there is no diagonalizable solution.

Now
$$\begin{pmatrix} -1 & 0 \\ -t^2 & -1 \end{pmatrix}\begin{pmatrix} t^2+t & -1 \\ 0 & t^2 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & -(t^2+t) \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & t^4+t^3 \end{pmatrix}.$$
Thus
$$q_0 = \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \qquad q_1 = \begin{pmatrix} 0 \\ -1 \end{pmatrix},$$
so there is a unique nilpotent solution, $X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. Since $-1$ is a simple root of (5), there is no non-diagonalizable solution with eigenvalue $-1$. Thus the equation has exactly one solution.

(1b) $X^2 + \begin{pmatrix} 0 & 2 \\ 0 & 2 \end{pmatrix}X + \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} = 0$ has a unique (diagonalizable) solution.

We have
$$\det\begin{pmatrix} t^2 & 2t+1 \\ 0 & (t+1)^2 \end{pmatrix} = t^2(t+1)^2,$$
so the possible eigenvalues for a solution $X$ are $0$ and $-1$. The possible eigenvectors corresponding to $0$ are the nonzero elements in the null space of $\begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. The possible eigenvectors corresponding to $-1$ are the nonzero elements in the null space of $\begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Thus there is a unique diagonalizable solution,
$$X = \begin{pmatrix} 0 & -1 \\ 0 & -1 \end{pmatrix}.$$
Now
$$\begin{pmatrix} 1 & 0 \\ (2t-1)(t+1)^2 & 1 \end{pmatrix}\begin{pmatrix} t^2 & 2t+1 \\ 0 & (t+1)^2 \end{pmatrix}\begin{pmatrix} 4 & -(2t+1) \\ 1-2t & t^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & t^2(t+1)^2 \end{pmatrix}.$$
Thus
$$q_0 = \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \qquad q_1 = \begin{pmatrix} -2 \\ 0 \end{pmatrix},$$
so there is no nilpotent solution.

A non-diagonalizable solution $X$ of this equation with eigenvalue $-1$ (of multiplicity $2$) would correspond (upon replacing $X$ by $X + I$) to a nilpotent solution of the equation
$$X^2 + \begin{pmatrix} -2 & 2 \\ 0 & 0 \end{pmatrix}X + \begin{pmatrix} 1 & -1 \\ 0 & 0 \end{pmatrix} = 0.$$
Since
$$\begin{pmatrix} 1 & 0 \\ t^2(2t-3) & 1 \end{pmatrix}\begin{pmatrix} (t-1)^2 & 2t-1 \\ 0 & t^2 \end{pmatrix}\begin{pmatrix} 4 & -(2t-1) \\ 3-2t & (t-1)^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & t^2(t-1)^2 \end{pmatrix},$$
we have, in this case,
$$q_0 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad q_1 = \begin{pmatrix} -2 \\ -2 \end{pmatrix},$$
so there is no nilpotent solution. Therefore there is no non-diagonalizable solution with eigenvalue $-1$ of the original equation, and the original equation has exactly one solution.

(2) $X^2 + X + \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = 0$ has two solutions, neither of which is diagonalizable.

We have
$$\det\begin{pmatrix} t^2+t & 1 \\ 0 & t^2+t \end{pmatrix} = t^2(t+1)^2,$$
so the possible eigenvalues for a solution $X$ are $0$ and $-1$. The possible eigenvectors corresponding to $0$ are the nonzero elements in the null space of $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. The possible eigenvectors corresponding to $-1$ are the nonzero elements in the null space of $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Thus there are no diagonalizable solutions.

Now
$$\begin{pmatrix} 1 & 0 \\ -(t^2+t) & 1 \end{pmatrix}\begin{pmatrix} t^2+t & 1 \\ 0 & t^2+t \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & t^2+t \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & t^2(t+1)^2 \end{pmatrix}.$$
Thus
$$q_0 = \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \qquad q_1 = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
so there is a unique nilpotent solution, $X = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}$.

A non-diagonalizable solution $X$ of this equation with eigenvalue $-1$ (of multiplicity $2$) would correspond (upon replacing $X$ by $X + I$) to a nilpotent solution of the equation
$$X^2 - X + \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = 0.$$
Since
$$\begin{pmatrix} 1 & 0 \\ -(t^2-t) & 1 \end{pmatrix}\begin{pmatrix} t^2-t & 1 \\ 0 & t^2-t \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & t^2-t \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & t^2(t-1)^2 \end{pmatrix},$$
we have, in this case,
$$q_0 = \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \qquad q_1 = \begin{pmatrix} 0 \\ -1 \end{pmatrix},$$
so there is a unique nilpotent solution, $Y = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. This corresponds to the solution
$$X = Y - I = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}$$
of the original equation. This is the unique non-diagonalizable solution with eigenvalue $-1$. Thus the equation has exactly two solutions.

(3) $X^2 + \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}X + \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = 0$ has three solutions, one nilpotent and two diagonalizable.

We have
$$\det\begin{pmatrix} t^2-t & t+1 \\ 0 & t^2+t \end{pmatrix} = t^2(t+1)(t-1),$$
so the possible eigenvalues for a solution $X$ are $0$, $1$, and $-1$. The possible eigenvectors corresponding to $0$ are the nonzero elements in the null space of $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. The possible eigenvectors corresponding to $1$ are the nonzero elements in the null space of $\begin{pmatrix} 0 & 2 \\ 0 & 2 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. The possible eigenvectors corresponding to $-1$ are the nonzero elements in the null space of $\begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$. Thus there are two diagonalizable solutions:
$$\begin{pmatrix} 0 & 0 \\ 0 & -1 \end{pmatrix} \ (\text{eigenvalues } 0, -1) \qquad \text{and} \qquad \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \ (\text{eigenvalues } 1, -1).$$
Now
$$\begin{pmatrix} 1 & 0 \\ (t^3-t^2-2t)/2 & 1 \end{pmatrix}\begin{pmatrix} t^2-t & t+1 \\ 0 & t^2+t \end{pmatrix}\begin{pmatrix} 1/2 & -(t+1) \\ (2-t)/2 & t^2-t \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & t^2(t+1)(t-1) \end{pmatrix}.$$
Thus
$$q_0 = \begin{pmatrix} -1 \\ 0 \end{pmatrix}, \qquad q_1 = \begin{pmatrix} -1 \\ -1 \end{pmatrix},$$
so there is a unique nilpotent solution, $X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$. Since $1$ and $-1$ are simple roots of (5), there are no other non-diagonalizable solutions, and the equation has exactly three solutions.

the null space of. The only element of S in this space is two diagonalizable solutions:. Now (t 3 + t t)/ Thus,.Thus there are t + t t t t t + t t + t t. (t + )(t ) q, q so there is a unique nilpotent solution (4) X has four (diagonalizable) solutions. 4 We have t det t t (t )(t + )(t )(t + ). 4t Since this factors into distinct linear factors, all solutions are diagonalizable. We have found these solutions above. 3 (5) X + X + has five (diagonalizable) solutions. We have t det 3t + t t t(t )(t + )(t ), + t so the possible eigenvalues for a solution X are,,,. The possible eigenvectors corresponding to are the nonzero elements in the null space of. The only element of S in this space is. The possible eigenvectors corresponding 4 to are the nonzero elements in the null space of. The only element of S in this space is. The possible eigenvectors corresponding to are the nonzero elements in the null space of 6,.. The only element of S in this space is 8. The

(6) $X^2 + \begin{pmatrix} -1 & 1 \\ 2 & -1 \end{pmatrix}X + \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} = 0$ has six (diagonalizable) solutions.

We have
$$\det\begin{pmatrix} t^2-t & t-1 \\ 2t & t^2-t \end{pmatrix} = t(t-1)(t+1)(t-2),$$
so the possible eigenvalues for a solution $X$ are $0$, $1$, $-1$, and $2$. The possible eigenvectors corresponding to $0$ are the nonzero elements in the null space of $\begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$. The possible eigenvectors corresponding to $1$ are the nonzero elements in the null space of $\begin{pmatrix} 0 & 0 \\ 2 & 0 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$. The possible eigenvectors corresponding to $-1$ are the nonzero elements in the null space of $\begin{pmatrix} 2 & -2 \\ -2 & 2 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$. The possible eigenvectors corresponding to $2$ are the nonzero elements in the null space of $\begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix}$. The only element of $S$ in this space is $\begin{pmatrix} 1 \\ -2 \end{pmatrix}$. There is a diagonalizable solution corresponding to each pair of distinct possible eigenvalues. Thus the pair $(0,1)$ corresponds to $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$; the pair $(0,-1)$ corresponds to $\begin{pmatrix} 0 & -1 \\ 0 & -1 \end{pmatrix}$; the pair $(0,2)$ corresponds to $\begin{pmatrix} 0 & -1 \\ 0 & 2 \end{pmatrix}$; the pair $(1,-1)$ corresponds to $\begin{pmatrix} -1 & 0 \\ -2 & 1 \end{pmatrix}$; the pair $(1,2)$ corresponds to $\begin{pmatrix} 2 & 0 \\ -2 & 1 \end{pmatrix}$; and the pair $(-1,2)$ corresponds to $\begin{pmatrix} 0 & -1 \\ -2 & 1 \end{pmatrix}$.

($\infty$) $X^2 = I$ has infinitely many solutions.

References

[FS] D. Fuchs and A. Schwarz, A Matrix Vieta Theorem, E. B. Dynkin Seminar, Amer. Math. Soc. Transl. Ser. 2, vol. 169 (1996).
[GHR] I. Gohberg, P. Lancaster, and L. Rodman, Matrix Polynomials, Academic Press, 1982.
[GR1] I. Gelfand and V. Retakh, Noncommutative Vieta Theorem and Symmetric Functions, Gelfand Mathematical Seminars 1993-95, Birkhauser (1996).
[GR2] I. Gelfand and V. Retakh, Quasideterminants, I, Selecta Math. (N.S.) 3 (1997), no. 4.
[LR] P. Lancaster and L. Rodman, Algebraic Riccati Equations, Clarendon Press, 1995.

Department of Mathematics, Rutgers University, Piscataway, NJ 08854-8019
E-mail address: rwilson@math.rutgers.edu