University of Colorado Denver
Department of Mathematical and Statistical Sciences
Applied Linear Algebra PhD Preliminary Exam
June 3, 2014

Name:

Exam Rules:
This is a closed book exam.
Once the exam begins, you have 4 hours to do your best. Submit as many solutions as you can. All solutions will be graded, and your final grade will be based on your six best solutions. Each problem is worth 20 points.
Justify your solutions: cite theorems that you use, provide counterexamples for disproof, give explanations, and show calculations for numerical problems.
If you are asked to prove a theorem, do not merely quote that theorem as your proof; instead, produce an independent proof.
Begin each solution on a new page and use additional paper, if necessary.
Write only on one side of the paper. Write legibly using a dark pencil or pen.
Ask the proctor if you have any questions.

Good luck!

1  2  3  4
5  6  7  Total

DO NOT TURN THE PAGE UNTIL TOLD TO DO SO

Applied Linear Algebra Preliminary Exam Committee: Joshua French (Chair), Julien Langou, Anatolii Puhalskii
1. Assume the following general definition for a real positive semidefinite matrix: an n x n real matrix A is said to be positive semidefinite if and only if, for all vectors x in R^n, x^T A x >= 0. In particular, this definition allows real matrices which are not symmetric to be positive semidefinite.

(a) Prove that if A and B are real symmetric positive semidefinite matrices and A is nonsingular, then AB has only real nonnegative eigenvalues. (10 pts)

(b) Provide a counterexample showing that the requirement that the matrices are symmetric cannot be dropped. (10 pts)

Solution.

(a) Since A is symmetric positive definite, A^{1/2} and A^{-1/2} are well defined. The matrix AB has the same eigenvalues as the similar matrix A^{-1/2}(AB)A^{1/2} = A^{1/2} B A^{1/2}. The latter matrix is self-adjoint and positive semidefinite, so it has real nonnegative eigenvalues.

Note: The result also holds if we remove the assumption that A is nonsingular; in other words, A and B only need to be two n x n symmetric positive semidefinite matrices. The proof gets a little trickier, though.

(b) One needs to provide positive semidefinite matrices A and B, with A nonsingular, such that AB has an eigenvalue which is not real and nonnegative. Given question (a), we understand that either A or B (or both) has to be nonsymmetric. To create a nonsymmetric positive semidefinite matrix A, one simply takes a symmetric positive definite matrix H and adds an antisymmetric matrix S; then A = H + S is positive semidefinite, since x^T A x = x^T H x >= 0 (the antisymmetric part contributes x^T S x = 0). For example, we can take

    A = [  1  1 ]        B = [ 2  0 ]
        [ -1  1 ],           [ 0  1 ].

Then A is positive semidefinite and nonsingular (det A = 2), B is symmetric positive definite, and

    AB = [  2  1 ]
         [ -2  1 ]

has eigenvalues (3 +/- i sqrt(7))/2, which are not real nonnegative.
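A quick numerical sketch of part (b), assuming NumPy is available: we take A = H + S with H = I and S antisymmetric (an illustrative choice of entries), a symmetric positive definite B, and check that AB has non-real eigenvalues.

```python
import numpy as np

# A = H + S with H = I (symmetric positive definite) and S antisymmetric,
# so x^T A x = x1^2 + x2^2 >= 0 and A is nonsingular (det A = 2).
A = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
# B symmetric positive definite.
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

eigs = np.linalg.eigvals(A @ B)  # AB = [[2, 1], [-2, 1]]
print(eigs)  # complex conjugate pair (3 +/- i*sqrt(7))/2
```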
2. (a) Suppose A and B are real-valued symmetric n x n matrices. Show that

    (trace(AB))^2 <= trace(A^2) trace(B^2).

What are the conditions for equality to hold? (10 pts)

(b) Let A be a real m x n matrix. Show that

    trace(AA^T) <= ( trace( (AA^T)^{1/2} ) )^2.

When does equality hold? (10 pts)

Solution.

(a) Since B is symmetric, trace(AB) = sum_{i,j} a_{ij} b_{ij}. By the Cauchy-Schwarz inequality,

    (trace(AB))^2 = ( sum_{i,j} a_{ij} b_{ij} )^2 <= ( sum_{i,j} a_{ij}^2 )( sum_{i,j} b_{ij}^2 ) = trace(A^2) trace(B^2).

For equality to hold, one of the matrices has to be a scalar multiple of the other.

(b) Let AA^T = P^T D P, where D is a nonnegative diagonal matrix (with diagonal entries lambda_1, ..., lambda_m) and P is an orthogonal matrix. Then

    trace(AA^T) = trace(D) = sum_i lambda_i <= ( sum_i sqrt(lambda_i) )^2 = ( trace(D^{1/2}) )^2 = ( trace( (AA^T)^{1/2} ) )^2.

The fact that sum_i lambda_i <= ( sum_i sqrt(lambda_i) )^2 comes from expanding the square on the right side: it equals sum_i lambda_i plus the nonnegative cross terms 2 sum_{i<j} sqrt(lambda_i lambda_j). Equality holds if and only if D has at most one nonzero entry, so AA^T has at most one nonzero eigenvalue, that is, A has at most one nonzero singular value.
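Both inequalities can be spot-checked numerically (a sketch assuming NumPy): for (b), note that trace(AA^T) is the sum of the squared singular values of A, while trace((AA^T)^{1/2}) is the sum of the singular values themselves.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3

# (a): (trace(AB))^2 <= trace(A^2) trace(B^2) for random symmetric A, B.
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = X + X.T, Y + Y.T
assert np.trace(A @ B) ** 2 <= np.trace(A @ A) * np.trace(B @ B) + 1e-9

# (b): sum of squared singular values <= (sum of singular values)^2.
M = rng.standard_normal((m, n))
s = np.linalg.svd(M, compute_uv=False)  # singular values of M
assert np.sum(s**2) <= np.sum(s) ** 2 + 1e-9
print("both inequalities hold")
```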
3. Let f : M_n(R) -> M_n(R), A -> A^T.

(a) What are the eigenvalues of f? (10 pts)

(b) Is f diagonalizable? If yes, give a basis of eigenvectors. If no, give as many linearly independent eigenvectors as possible. (10 pts)

Solution.

It is clear that f^2 = Id, therefore p(x) = (x - 1)(x + 1) satisfies p(f) = 0. This implies that the eigenvalues of f lie in the set {-1, 1}. Also, p(f) = 0 implies that f is diagonalizable, since p is a polynomial with only simple roots that annihilates f.

Now, any symmetric matrix is an eigenvector associated with eigenvalue 1, and any eigenvector associated with eigenvalue 1 is a symmetric matrix. If we call S_n the subspace of symmetric matrices and E_1 the eigenspace of f associated with eigenvalue 1, we have S_n = E_1.

It is also clear that any antisymmetric matrix is an eigenvector associated with eigenvalue -1, and any eigenvector associated with eigenvalue -1 is an antisymmetric matrix. If we call A_n the subspace of antisymmetric matrices and E_{-1} the eigenspace of f associated with eigenvalue -1, we have A_n = E_{-1}.

We know that M_n = S_n (+) A_n (direct sum). Therefore we can diagonalize f by taking a basis of S_n together with a basis of A_n to form a basis of M_n.
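As a concrete check (a sketch assuming NumPy), we can represent the transpose map on M_3(R), identified with R^9 by row-stacking, as a 9 x 9 permutation matrix and confirm its eigenvalues are 1 with multiplicity dim S_3 = 6 and -1 with multiplicity dim A_3 = 3.

```python
import numpy as np

n = 3
# K is the matrix of f(A) = A^T on M_3(R) ~ R^9, with A stored row by row:
# entry (i, j) of A sits at index i*n + j, and f sends it to index j*n + i.
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        K[i * n + j, j * n + i] = 1.0

# K is a symmetric permutation matrix, so eigvalsh applies.
eigs = np.sort(np.linalg.eigvalsh(K))
print(eigs)  # [-1, -1, -1, 1, 1, 1, 1, 1, 1]
```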
4. Define the n x n matrix

    A_n = [ a+b   b    b   ...   b  ]
          [  a   a+b   b   ...   b  ]
          [  a    a   a+b  ...   b  ]
          [ ...                 ... ]
          [  a    a    a   ...  a+b ]

(entries a+b on the diagonal, b above the diagonal, a below the diagonal).

(a) Compute D_n = det(A_n). (10 pts)

(b) Give the value of D_n for n = 10, a = 2, and b = -1. (10 pts)

Solution.

We perform (in this order) the row operations L_n <- L_n - L_{n-1}, then L_{n-1} <- L_{n-1} - L_{n-2}, ..., and finally L_2 <- L_2 - L_1. (These operations do not change the value of the determinant.) We get

    D_n = | a+b   b    b   ...   b  |
          | -b    a    0   ...   0  |
          |  0   -b    a   ...   0  |
          | ...                 ... |
          |  0   ...   0   -b    a  |

Expanding with respect to the last column, we get two terms: the (n,n) entry a times the same determinant of size n-1, which is D_{n-1}; and the (1,n) entry b with sign (-1)^{1+n}, times the minor obtained by deleting the first row and last column, which is lower triangular with -b on the diagonal and hence equals (-b)^{n-1}. Since (-1)^{1+n} b (-b)^{n-1} = b^n, we get the recurrence

    D_n = b^n + a D_{n-1}.

We have D_1 = a + b. (Note: we could also get D_1 from D_1 = b + a D_0 if we define D_0 to be 1.)
So we get D_2 = b^2 + a D_1 = b^2 + ab + a^2.

Quick check: D_2 = det [ a+b  b ; a  a+b ] = (a+b)^2 - ab = b^2 + ab + a^2.

So we get D_3 = b^3 + a D_2 = b^3 + ab^2 + a^2 b + a^3. Continuing in the same manner, we get

    D_n = b^n + a b^{n-1} + ... + a^{n-1} b + a^n = sum_{k=0}^{n} a^k b^{n-k}.

We can simplify by noticing that

    (a - b)(b^n + a b^{n-1} + ... + a^{n-1} b + a^n) = a^{n+1} - b^{n+1}.

So, if a != b, we have

    D_n = (a^{n+1} - b^{n+1}) / (a - b),

and, if a = b, we get

    D_n = (n+1) a^n.

(And we check that the latter expression for a = b is the limit of the former expression when b goes to a.)

For n = 10, a = 2, and b = -1, we get

    D_10 = (2^{11} - (-1)^{11}) / (2 - (-1)) = 2049 / 3 = 683.
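A numerical sketch (assuming NumPy) that checks the closed form D_n = (a^{n+1} - b^{n+1})/(a - b) against a direct determinant computation, including the requested value D_10 = 683 for a = 2, b = -1:

```python
import numpy as np

def A_mat(n, a, b):
    """Build A_n: a+b on the diagonal, b above it, a below it."""
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            M[i, j] = a + b if i == j else (b if j > i else a)
    return M

a, b = 2.0, -1.0
for n in range(1, 9):
    Dn = np.linalg.det(A_mat(n, a, b))
    closed = (a ** (n + 1) - b ** (n + 1)) / (a - b)
    assert np.isclose(Dn, closed)

print(round(np.linalg.det(A_mat(10, 2.0, -1.0))))  # 683
```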
5. Suppose that u and v are nonzero vectors in a real inner product space V.

(a) Prove that (||u|| + ||v||) <u,v> / (||u|| ||v||) <= ||u + v||. (10 pts)

(b) Prove or disprove the following identity: (||u|| + ||v||) <u,v> / (||u|| ||v||) = ||u + v||. (10 pts)

Solution.

(a) Case 1: <u,v> <= 0. The inequality follows trivially since a norm is nonnegative: the left side is no more than 0 while the right side is no less than 0.

Case 2: <u,v> > 0. Squaring the left side, we have

    (||u|| + ||v||)^2 <u,v>^2 / (||u||^2 ||v||^2)
      = (||u||^2 + ||v||^2 + 2 ||u|| ||v||) <u,v> <u,v> / (||u||^2 ||v||^2)
      <= (||u||^2 + ||v||^2 + 2 ||u|| ||v||) <u,v> / (||u|| ||v||)      (1)
      = (||u||/||v||) <u,v> + (||v||/||u||) <u,v> + 2 <u,v>             (2)
      <= ||u||^2 + ||v||^2 + 2 <u,v>                                    (3)
      = ||u + v||^2.                                                    (4)

Steps (1) and (3) are obtained by applying the Cauchy-Schwarz inequality <u,v> <= ||u|| ||v||, while (2) and (4) are obtained by simplifying. Taking square roots (both sides are positive in this case) gives the inequality.

(b) The identity is false. Let u = (1, 0), v = (0, 1), and use the Euclidean inner product (dot product). Then the left side of the identity becomes (1 + 1)(0)/((1)(1)) = 0 while the right side is ||(1, 1)|| = sqrt(2). (Note: one can also use one-dimensional vectors: u = (1), v = (-1).)
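A numerical sketch (assuming NumPy) that checks inequality (a) on many random vectors and reproduces the counterexample to the identity in (b):

```python
import numpy as np

rng = np.random.default_rng(1)

# (a): (||u|| + ||v||) <u,v> / (||u|| ||v||) <= ||u + v|| for random vectors.
for _ in range(1000):
    u, v = rng.standard_normal(4), rng.standard_normal(4)
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    lhs = (nu + nv) * np.dot(u, v) / (nu * nv)
    assert lhs <= np.linalg.norm(u + v) + 1e-9

# (b): counterexample u = (1, 0), v = (0, 1): left side 0, right side sqrt(2).
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
lhs = (np.linalg.norm(u) + np.linalg.norm(v)) * np.dot(u, v)
print(lhs, np.linalg.norm(u + v))  # 0.0 1.4142...
```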
6. Let V be a vector space, let f in L(V), and let p be a projection (so p in L(V) and p^2 = p). Prove that

    Null(f o p) = Null(p) (+) (Null(f) /\ Range(p)).   (20 pts)

Solution.

Firstly, we would like to prove that Null(p) + (Null(f) /\ Range(p)) is contained in Null(f o p). (Note: we recall that if A, B and C are subspaces, to prove that A + B is contained in C, we just need to prove that A and B are each contained in C.)

Null(p) is contained in Null(f o p): Let x be in Null(p); then p(x) = 0, so (f o p)(x) = f(p(x)) = f(0) = 0, so x is in Null(f o p).

Null(f) /\ Range(p) is contained in Null(f o p): Let x be in Null(f) /\ Range(p). Since x is in Range(p), there exists y such that x = p(y). Since x is in Null(f), we have f(x) = 0. Now let us look at (f o p)(x). (Note: we want to prove that (f o p)(x) = 0.) We have

    (f o p)(x) = (f o p)(p(y)) = f(p^2(y)) = f(p(y)) = f(x) = 0,

where we have used, in order, the facts that x = p(y), that p^2 = p, that p(y) = x, and that f(x) = 0. This proves that x is in Null(f o p).

We proved that Null(p) + (Null(f) /\ Range(p)) is contained in Null(f o p).

Secondly, we would like to prove that Null(f o p) is contained in Null(p) + (Null(f) /\ Range(p)). Let x be in Null(f o p). We can write x as

    x = (x - p(x)) + p(x),

where

(a) (x - p(x)) is in Null(p). Indeed, p(x - p(x)) = p(x) - p^2(x), but p = p^2, so p(x - p(x)) = 0, so (x - p(x)) is in Null(p).

(b) p(x) is in Null(f) /\ Range(p). It is a fact that p(x) is in Range(p). Moreover, since x is in Null(f o p), we have that (f o p)(x) = f(p(x)) = 0, which proves that p(x) is in Null(f). So p(x) is in Null(f) /\ Range(p).
Therefore we have that Null(f o p) is contained in Null(p) + (Null(f) /\ Range(p)).

At this point, we proved that

    Null(f o p) = Null(p) + (Null(f) /\ Range(p)).

It remains to prove that the sum is direct. Let x be in Null(p) /\ (Null(f) /\ Range(p)). Then x is in Range(p), so there exists u in V such that x = p(u); but x is in Null(p), so p(x) = 0, so p^2(u) = 0; but p^2 = p, so p(u) = 0, so x = 0. We proved that Null(p) /\ (Null(f) /\ Range(p)) = {0}, so the sum in the previous paragraph is direct. We are done, and we can conclude that

    Null(f o p) = Null(p) (+) (Null(f) /\ Range(p)).
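The dimension count implied by the direct sum can be verified numerically (a sketch assuming NumPy; the orthogonal projection and the rank-deficient f are illustrative choices): dim Null(f o p) should equal dim Null(p) + dim(Null(f) /\ Range(p)).

```python
import numpy as np

def null_basis(M, tol=1e-10):
    """Orthonormal basis of Null(M), computed via the SVD."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

rng = np.random.default_rng(2)
n, k = 6, 3
# p: orthogonal projection onto a random k-dimensional subspace (p^2 = p).
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
p = Q @ Q.T
# f: a random rank-2 map, so Null(f) is nontrivial.
f = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))

Np = null_basis(p)          # basis of Null(p)
Nf, Rp = null_basis(f), Q   # bases of Null(f) and Range(p)
# dim(U /\ W) = dim U + dim W - rank([U W])
inter = Nf.shape[1] + Rp.shape[1] - np.linalg.matrix_rank(np.hstack([Nf, Rp]))
lhs = null_basis(f @ p).shape[1]   # dim Null(f o p)
assert lhs == Np.shape[1] + inter
print(lhs, Np.shape[1], inter)
```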
7. (a) Let n in N \ {0, 1} (so n >= 2) and A in M_n(C) such that rank(A) = 1. Prove that A is diagonalizable if and only if trace(A) != 0. (10 pts)

(b) Let a_1, ..., a_n in C \ {0} (so the a_i's are nonzero complex numbers), and let A = (a_i / a_j)_{1 <= i,j <= n}. (This means that the entry (i,j) of A is a_i / a_j.) Show that A is diagonalizable. Give a basis of eigenvectors (with the associated eigenvalues) for A. (10 pts)

Solution.

(a) First we note that rank(A) = 1 implies dim(Null(A)) = n - 1 (by the rank theorem). So, if rank(A) = 1 and n >= 2, then dim(Null(A)) >= 1, and so 0 is an eigenvalue of A. We call nu_0 the geometric multiplicity of the eigenvalue 0, mu_0 the algebraic multiplicity of the eigenvalue 0, and E_0 the eigenspace associated with the eigenvalue 0.

Now, since dim(Null(A)) = n - 1, we have dim(E_0) = n - 1; in other words, the geometric multiplicity of the eigenvalue 0 is nu_0 = n - 1. We know that, for a given eigenvalue, the algebraic multiplicity is always greater than or equal to the geometric multiplicity. For the eigenvalue 0, this reads nu_0 <= mu_0. For a rank-1 matrix, there are therefore only two cases: either nu_0 = mu_0 = n - 1, or nu_0 = n - 1, mu_0 = n.

Case 1: nu_0 = mu_0 = n - 1. In this case, since mu_0 = n - 1, there has to exist another eigenvalue lambda different from zero (because the algebraic multiplicities of the eigenvalues have to sum to n). For that eigenvalue lambda, the geometric multiplicity nu_lambda is at least 1, but can be no more than 1 (because its algebraic multiplicity is n - mu_0 = 1). So nu_lambda = 1. So we have nu_lambda = 1 and nu_0 = n - 1, and the geometric multiplicities sum to n, so A is diagonalizable. We also note that, in this case, trace(A) = lambda (the trace is the sum of the eigenvalues counted with their algebraic multiplicities), and so, in this case, trace(A) != 0.

Case 2: nu_0 = n - 1, mu_0 = n. In this case, since mu_0 = n, A only has the eigenvalue 0. Since nu_0 = n - 1 < n, A is not diagonalizable, and trace(A) = 0.

Starting from a rank-1 matrix, we found two possibilities: either nu_0 = mu_0 = n - 1, in which case A is diagonalizable and trace(A) != 0; or nu_0 = n - 1, mu_0 = n, in which case A is not diagonalizable and trace(A) = 0. This enables us to conclude that, for a rank-1 matrix A,

    A is diagonalizable  <=>  trace(A) != 0.
(b) We observe that the matrix has rank 1. Indeed,

    A = [ a_1 ]
        [ ... ]  ( 1/a_1  ...  1/a_n ).
        [ a_n ]

We also have trace(A) = n != 0. So, by the previous question, A is diagonalizable. By the analysis in (a), A has the eigenvalue 0 with geometric multiplicity n - 1 and the eigenvalue trace(A) = n with geometric multiplicity 1.

Eigenvalue 0: To find n - 1 linearly independent eigenvectors associated with eigenvalue 0, we want to find a basis for the null space of A, which is the same as the null space of the single row

    ( 1/a_1  1/a_2  ...  1/a_n ).

We have (for example) that x_1 is a leading variable and that x_2, x_3, ..., x_n are free variables. This gives the general solution

    x = ( -(a_1/a_2) x_2 - (a_1/a_3) x_3 - ... - (a_1/a_n) x_n,  x_2,  x_3,  ...,  x_n )^T
      = x_2 v_1 + x_3 v_2 + ... + x_n v_{n-1}.

So a basis for E_0 is, for example,

    v_1 = (-a_1/a_2, 1, 0, ..., 0)^T,
    v_2 = (-a_1/a_3, 0, 1, 0, ..., 0)^T,
    ...
    v_{n-1} = (-a_1/a_n, 0, ..., 0, 1)^T.

Eigenvalue n: We see that an eigenvector for eigenvalue n is, for example,

    v_n = (a_1, a_2, ..., a_n)^T.

Answer: The above (v_1, ..., v_n) is a basis of C^n made of eigenvectors of A.
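A numerical sketch of part (b), assuming NumPy: for random nonzero complex a_i (an illustrative choice with moduli bounded away from 0), the matrix (a_i/a_j) has eigenvalue 0 with multiplicity n - 1 and eigenvalue n, with (a_1, ..., a_n)^T as an eigenvector.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
# Nonzero complex a_i with moduli in [0.5, 2] to keep the ratios well scaled.
a = rng.uniform(0.5, 2.0, n) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))
A = np.outer(a, 1.0 / a)   # A[i, j] = a_i / a_j, a rank-1 matrix

eigs = np.linalg.eigvals(A)
# Eigenvalue 0 with multiplicity n-1, eigenvalue n with multiplicity 1.
assert np.sum(np.abs(eigs) < 1e-8) == n - 1
assert np.sum(np.abs(eigs - n) < 1e-6) == 1
# a itself is an eigenvector for the eigenvalue n: A a = n a.
assert np.allclose(A @ a, n * a)
print("eigenvalues:", np.round(eigs, 8))
```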