
University of Colorado Denver
Department of Mathematical and Statistical Sciences
Applied Linear Algebra Ph.D. Preliminary Exam
May 25th, 2018

Name: ______________________

Exam Rules:

This exam lasts 4 hours. There are 8 problems. Each problem is worth 20 points. All solutions will be graded and your final grade will be based on your six best problems. Your final score will be out of 120 points.

You are not allowed to use books or any other auxiliary material on this exam.

Start each problem on a separate sheet of paper, write only on one side, and label all of your pages in consecutive order (e.g., use 1-1, 1-2, 1-3, ..., 2-1, 2-2, 2-3, ...).

Read all problems carefully, and write your solutions legibly using a dark pencil or pen in essay-style using full sentences and correct mathematical notation.

Justify your solutions: cite theorems you use, provide counterexamples for disproof, give clear but concise explanations, and show calculations for numerical problems. If you are asked to prove a theorem, you may not merely quote or rephrase that theorem as your solution; instead, you must produce a complete proof.

Parts of a multipart question are not necessarily worth the same number of points.

If you feel that any problem or any part of a problem is ambiguous or may have been stated incorrectly, please indicate your interpretation of that problem as part of your solution. Your interpretation should be such that the problem is not trivial.

Please ask the proctor if you have any questions.

Score: 1. ___  2. ___  3. ___  4. ___  5. ___  6. ___  7. ___  8. ___  Total: ___

DO NOT TURN THE PAGE UNTIL TOLD TO DO SO.

Applied Linear Algebra Preliminary Exam Committee: Varis Carey, Stephen Hartke (Chair), and Yaning Liu.

Problem 1

Let V = P_2(R), the space of real-valued polynomials of total degree less than or equal to 2, and let S = {x, 1 - x, 1 - x^2} be a set of vectors in V.
(a) Show that S is a basis of V.
(b) Let T : V → V be given by p(x) ↦ x p'(x). Find the matrix of T with respect to S.
(c) Is T invertible? If not, express its nullspace in terms of S.

Solution.
(a) Writing the vectors of S in coordinates with respect to the standard basis (1, x, x^2) and placing them as the columns of a matrix yields

    [ 0   1   1 ]
    [ 1  -1   0 ]
    [ 0   0  -1 ]

Exchanging row 1 and row 2 gives an upper triangular matrix with a pivot in every column, so the vectors in S are linearly independent; since dim P_2(R) = 3, S is a basis.

(b) Write S_1 = x, S_2 = 1 - x, S_3 = 1 - x^2. Then T(S_1) = T(x) = x = S_1, with coordinate vector [1, 0, 0]; T(S_2) = T(1) - T(x) = 0 - x = -S_1, with coordinate vector [-1, 0, 0]; and T(S_3) = T(1) - T(x^2) = -2x^2 = -2S_1 - 2S_2 + 2S_3, with coordinate vector [-2, -2, 2]. So

    M_T = [ 1  -1  -2 ]
          [ 0   0  -2 ]
          [ 0   0   2 ]

(c) M_T has linearly dependent columns, so T is not invertible. Any constant polynomial is mapped to the zero polynomial, so the nullspace of T (one-dimensional, since M_T has two pivots) is {c(S_1 + S_2) : c ∈ R}.
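For readers who want to double-check part (b) computationally, here is a minimal NumPy sketch (not part of the exam solution). It assumes that the matrix of T in the standard basis (1, x, x^2) is diag(0, 1, 2), since T(1) = 0, T(x) = x, T(x^2) = 2x^2, and that P holds the coordinates of S as columns.

```python
# Quick numerical sanity check of M_T; the matrices A and P are assumptions
# read off from the solution above, not data from the exam itself.
import numpy as np

A = np.diag([0.0, 1.0, 2.0])           # T in the standard basis (1, x, x^2)
P = np.array([[0.0, 1.0, 1.0],         # columns: coordinates of x, 1 - x, 1 - x^2
              [1.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])

M_T = np.linalg.inv(P) @ A @ P         # T in the basis S
print(M_T)  # expected: [[1, -1, -2], [0, 0, -2], [0, 0, 2]]
```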

Problem 2

Suppose V_1, V_2, ..., V_s are subspaces of V.
(a) Show that the sum of V_1, V_2, ..., V_s is a direct sum if and only if V_i ∩ Σ_{j≠i} V_j = {0} for i = 1, ..., s.
(b) Show that the sum of V_1, V_2, ..., V_s is a direct sum if and only if V_1 ∩ V_2 = {0}, (V_1 + V_2) ∩ V_3 = {0}, ..., (V_1 + V_2 + ... + V_{s-1}) ∩ V_s = {0}.

Solution.
(a) Suppose V_i ∩ Σ_{j≠i} V_j = {0} for i = 1, ..., s. Let v_k ∈ V_k, k = 1, ..., s, be such that v_1 + v_2 + ... + v_i + ... + v_s = 0. Then for each i = 1, ..., s,

    v_i = -v_1 - ... - v_{i-1} - v_{i+1} - ... - v_s ∈ V_i ∩ Σ_{j≠i} V_j = {0}.

So v_i = 0 for i = 1, ..., s, and hence Σ_{i=1}^{s} V_i is a direct sum.

Now suppose Σ_{i=1}^{s} V_i is a direct sum. For each i and any v ∈ V_i ∩ Σ_{j≠i} V_j, we may write

    v = v_1 + ... + v_{i-1} + v_{i+1} + ... + v_s, with v_j ∈ V_j, while also v ∈ V_i.

So

    v_1 + ... + v_{i-1} + (-v) + v_{i+1} + ... + v_s = 0.

Since Σ_{i=1}^{s} V_i is a direct sum, v = 0. So V_i ∩ Σ_{j≠i} V_j = {0} for i = 1, ..., s.

(b) Suppose V_1 + ... + V_s is a direct sum. Then by part (a) we have V_i ∩ Σ_{j≠i} V_j = {0} for i = 1, ..., s. For each i, it is obvious that

    V_i ∩ Σ_{j<i} V_j ⊆ V_i ∩ Σ_{j≠i} V_j.

So V_i ∩ Σ_{j<i} V_j = {0} for i = 2, ..., s, which are exactly the stated conditions.

On the other hand, we use induction to show the sufficiency. For s = 2, the condition V_1 ∩ V_2 = {0} immediately shows that V_1 + V_2 is a direct sum. Now suppose the sufficiency holds for s = k; we show it also holds for s = k + 1. Suppose we have

    V_1 ∩ V_2 = {0}, (V_1 + V_2) ∩ V_3 = {0}, ..., (V_1 + V_2 + ... + V_k) ∩ V_{k+1} = {0}.

By the induction hypothesis, the first k - 1 conditions above imply that V_1 + ... + V_k is a direct sum. Now let v_i ∈ V_i with v_1 + ... + v_k + v_{k+1} = 0. The last condition above gives v_{k+1} = -(v_1 + ... + v_k) ∈ (V_1 + ... + V_k) ∩ V_{k+1} = {0}, so v_{k+1} = 0; then v_1 + ... + v_k = 0, and since V_1 + ... + V_k is a direct sum, v_1 = ... = v_k = 0. Hence V_1 + ... + V_k + V_{k+1} is a direct sum.
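As a small numerical illustration of the criterion in (b) (not part of the proof), one can check directness of a sum by comparing ranks: the sum V_1 + ... + V_s is direct exactly when rank([V_1 ... V_s]) equals dim V_1 + ... + dim V_s. The subspaces of R^3 below are an arbitrary example chosen for the demonstration.

```python
# Rank-based check of whether a sum of subspaces is direct.
import numpy as np

def is_direct_sum(blocks):
    """blocks: list of matrices whose columns span V_1, ..., V_s."""
    total_dim = sum(np.linalg.matrix_rank(B) for B in blocks)
    return np.linalg.matrix_rank(np.hstack(blocks)) == total_dim

V1 = np.array([[1.0], [0.0], [0.0]])   # span{e1}
V2 = np.array([[0.0], [1.0], [0.0]])   # span{e2}
V3 = np.array([[1.0], [1.0], [0.0]])   # span{e1 + e2}

print(is_direct_sum([V1, V2]))         # True:  V1 ∩ V2 = {0}
print(is_direct_sum([V1, V2, V3]))     # False: (V1 + V2) ∩ V3 ≠ {0}
```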

Problem 3

Let A be a real matrix satisfying A^3 = A.
(a) Prove that A can be diagonalized.
(b) If A is a 3 × 3 matrix, how many different possible similarity classes are there for A?

Solution.
(a) Note that A satisfies A^3 - A = 0. Hence the minimal polynomial of A divides x^3 - x = x(x - 1)(x + 1), which factors into distinct linear factors. Since the minimal polynomial has no repeated roots, each Jordan block in the Jordan canonical form of A is 1 × 1. Thus the Jordan canonical form of A is a diagonal matrix.

(b) Since the minimal polynomial divides x^3 - x, the possible eigenvalues of A are 0, 1, and -1. Let t_λ denote the number of Jordan blocks for eigenvalue λ (equivalently, t_λ is the geometric multiplicity of λ). Then t_0 + t_1 + t_{-1} = 3, where each t_λ is a nonnegative integer. There are 10 solutions for (t_0, t_1, t_{-1}):

    (3, 0, 0), (0, 3, 0), (0, 0, 3), (2, 1, 0), (2, 0, 1), (1, 2, 0), (1, 0, 2), (0, 2, 1), (0, 1, 2), (1, 1, 1).

Alternatively, a combinatorial stars-and-bars argument shows there are C(5, 2) = 10 solutions. Thus, there are 10 different possible similarity classes.
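A quick combinatorial check of the count in (b), not part of the proof: enumerate the nonnegative integer solutions of t_0 + t_1 + t_{-1} = 3, each of which determines one similarity class of a diagonalizable 3 × 3 matrix with eigenvalues drawn from {0, 1, -1}.

```python
# Count the solutions directly and compare with the stars-and-bars formula.
from itertools import product
from math import comb

solutions = [t for t in product(range(4), repeat=3) if sum(t) == 3]
print(len(solutions))   # 10
print(comb(5, 2))       # 10, the stars-and-bars count
```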

Problem 4

Let T be a self-adjoint operator on an n-dimensional inner product space V. Let λ_0 be an eigenvalue of T. Show that the (algebraic) multiplicity of λ_0 equals dim E(λ_0, T), i.e., the dimension of the eigenspace of T corresponding to λ_0.

Solution. Let m be the algebraic multiplicity of λ_0. Then m ≥ dim E(λ_0, T), since m is defined to be the dimension of the generalized eigenspace of T corresponding to λ_0, which contains E(λ_0, T).

Next we show that m ≤ dim E(λ_0, T). Since T is self-adjoint, the Spectral Theorem gives an orthonormal basis e_1, ..., e_n of V such that

    M(T, (e_1, ..., e_n)) = diag(λ_1, λ_2, ..., λ_n).

The diagonal entries are the eigenvalues of T, repeated according to algebraic multiplicity, so λ_0 appears exactly m times on the diagonal; without loss of generality, λ_1 = ... = λ_m = λ_0. Then T e_i = λ_0 e_i for i = 1, ..., m, so e_1, ..., e_m are m linearly independent vectors in E(λ_0, T) and dim E(λ_0, T) ≥ m. As a result, we have m = dim E(λ_0, T).
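Here is a numerical illustration (not a proof) for a concrete self-adjoint operator: for a real symmetric matrix, each eigenvalue's multiplicity as a root of the characteristic polynomial matches the dimension of its eigenspace. The example matrix below is an assumption chosen so that the eigenvalue 1 has multiplicity 2.

```python
# Compare algebraic and geometric multiplicity of an eigenvalue of a symmetric matrix.
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])          # symmetric, eigenvalues 1, 1, 3

lam0 = 1.0
eigvals = np.linalg.eigvalsh(A)
alg_mult = int(np.sum(np.isclose(eigvals, lam0)))
geo_mult = 3 - np.linalg.matrix_rank(A - lam0 * np.eye(3))   # dim null(A - lam0 I)
print(alg_mult, geo_mult)   # both 2
```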

Problem 5

Let T and S be linear maps on an inner product space V such that for every vector v ∈ V,

    ⟨Tv, Tv⟩ = ⟨Sv, Sv⟩.

Show that range T and range S are isomorphic.

Solution. Define ϕ : range T → range S by ϕ(Tv) = Sv for any v ∈ V. We show that ϕ is an isomorphism between range T and range S.

First, the map ϕ is well defined. Suppose Tv = Tw for some w ∈ V. Then T(v - w) = 0, so

    ⟨S(v - w), S(v - w)⟩ = ⟨T(v - w), T(v - w)⟩ = ⟨0, 0⟩ = 0.

Hence S(v - w) = 0, and Sv = Sw.

The map ϕ is also linear, since

    ϕ(Tv + Tw) = ϕ(T(v + w)) = S(v + w) = Sv + Sw,
    ϕ(λTv) = ϕ(T(λv)) = S(λv) = λSv.

Finally, ϕ is clearly surjective onto range S. For injectivity, suppose Sv = Sw; then the same argument as in the well-definedness step, with the roles of S and T exchanged, gives Tv = Tw. Hence ϕ is injective, and ϕ is an isomorphism between range T and range S.
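A numerical illustration of the statement (not a proof), under the assumption that S = QT for an orthogonal matrix Q: then ⟨Sv, Sv⟩ = ⟨Tv, Tv⟩ for every v, and the two ranges have the same dimension, which in finite dimensions is equivalent to being isomorphic. The rank-2 matrix T below is an arbitrary example.

```python
# Check that norms agree and that the two ranges have equal dimension.
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[1.0, 2.0, 3.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])                    # a rank-2 operator on R^3
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal matrix
S = Q @ T

v = rng.standard_normal(3)
print(np.isclose(v @ T.T @ T @ v, v @ S.T @ S @ v))          # <Tv,Tv> = <Sv,Sv>
print(np.linalg.matrix_rank(T), np.linalg.matrix_rank(S))    # both 2
```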

Problem 6

Let V be a vector space over C.
(a) Give an example of an operator T that is surjective but not invertible.
(b) Give an example of an operator S that has no eigenvalue.
(c) Let V = C^∞, the space of all infinite sequences of complex numbers, and let U be the set of vectors in V with finitely many non-zero entries. Prove that U is an infinite-dimensional subspace of V.

Solution.
(a) Since T is to be surjective but not invertible, V must be infinite-dimensional, and T must have a non-trivial nullspace. Consider the vector space of infinite sequences, C^∞, and the left-shift operator

    T : V → V,  (z_1, z_2, z_3, ...) ↦ (z_2, z_3, ...).

T is clearly surjective, since T(z_1, z_1, z_2, ...) = (z_1, z_2, ...), but it is not injective, since (z_1, 0, 0, ...) is mapped to 0_V even when z_1 ≠ 0. That T is linear is trivial to check.

(b) Again, since every operator on a finite-dimensional complex vector space has an eigenvalue, V must be infinite-dimensional. Consider the right-shift operator

    S : V → V,  (z_1, z_2, ...) ↦ (0, z_1, z_2, ...).

Suppose Sv = λv. Comparing first entries gives 0 = λ v_1, so λ = 0 or v_1 = 0. If λ = 0, then Sv = 0, which forces v = 0_V, so v is not an eigenvector by definition. If v_1 = 0 (and λ ≠ 0), then (Sv)_2 = v_1 = 0 = λ v_2 implies v_2 = 0; by induction, v = 0_V, so again v is not an eigenvector. Therefore S has no eigenvalue.

(c) U is a subspace of V: the vector of all zeros has finitely many non-zero entries; for any scalar k and u ∈ U, ku has finitely many non-zero entries; and if u_1 has M non-zero entries and u_2 has N non-zero entries, then u_1 + u_2 has at most M + N non-zero entries, which is also finite, so u_1 + u_2 ∈ U. Clearly each standard vector e_i (with a 1 in position i and 0 elsewhere) is in U, and e_i cannot be written as a linear combination of the e_j with j ≠ i. We thus have an infinite linearly independent set in U, so U is infinite-dimensional.
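For a small computational illustration of part (a) (not a proof), one can restrict attention to finitely supported sequences, i.e., the subspace U of part (c), represent a sequence as a Python dict {index: value} of its non-zero entries, and implement the left shift; the left shift on U is likewise surjective but not injective.

```python
# Left shift on finitely supported sequences, encoded as {index: value} dicts.
def left_shift(z):
    """T(z_1, z_2, z_3, ...) = (z_2, z_3, ...)."""
    return {i - 1: v for i, v in z.items() if i >= 2}

z = {1: 5.0, 3: 2.0}     # the sequence (5, 0, 2, 0, 0, ...)
w = {1: -7.0, 3: 2.0}    # differs from z only in the first entry

print(left_shift(z) == left_shift(w))                       # True: not injective
print(left_shift({i + 1: v for i, v in z.items()}) == z)    # True: z has a preimage
```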

Problem 7

Let A ∈ R^{m×n} and B ∈ R^{n×m}. Prove that tr(AB) = tr(BA).

Solution. Let e_i denote the i-th standard basis vector of the appropriate dimension. For any square matrix C we have C_{ij} = ⟨e_i, C e_j⟩, so tr(C) = Σ_{i=1}^{n} ⟨e_i, C e_i⟩. Since AB is m × m, we have

    tr(AB) = Σ_{i=1}^{m} ⟨e_i, AB e_i⟩.

Note that ⟨e_i, AB e_i⟩ = ⟨A^T e_i, B e_i⟩. Here B e_i is a column vector whose entries form the i-th column of B, i.e., (B_{1i}, B_{2i}, ..., B_{ni}), and A^T e_i is a column vector whose entries form the i-th column of A^T, i.e., ((A^T)_{1i}, (A^T)_{2i}, ..., (A^T)_{ni}) = (A_{i1}, A_{i2}, ..., A_{in}), the i-th row of A. The inner product of these two vectors is Σ_{j=1}^{n} A_{ij} B_{ji}. Combining, we have

    tr(AB) = Σ_{i=1}^{m} ⟨e_i, AB e_i⟩ = Σ_{i=1}^{m} Σ_{j=1}^{n} A_{ij} B_{ji}.

We also have, by an almost identical argument,

    tr(BA) = Σ_{j=1}^{n} ⟨e_j, BA e_j⟩ = Σ_{j=1}^{n} Σ_{i=1}^{m} B_{ji} A_{ij}.

Switching the order of summation and using the commutativity of multiplication in R finishes the proof.
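A quick numerical spot check of tr(AB) = tr(BA) for rectangular matrices, not part of the proof; the shapes m = 3, n = 5 and the random entries are arbitrary choices.

```python
# Verify the trace identity on a random example.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))   # A in R^{m x n}
B = rng.standard_normal((5, 3))   # B in R^{n x m}

print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True
```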

Problem 8

Let W_1 and W_2 be two subspaces of an n-dimensional vector space V with dim W_1 + dim W_2 = n. Show that there exists an operator T on V such that null T = W_1 and range T = W_2.

Solution. Suppose dim W_1 = s and dim W_2 = m = n - s. Let v_1, v_2, ..., v_m be a basis of W_2. Let w_1, w_2, ..., w_s be a basis of W_1, and extend it to a basis of V:

    w_1, w_2, ..., w_s, w_{s+1}, ..., w_n.

Let T be the operator defined by

    T w_1 = 0, T w_2 = 0, ..., T w_s = 0, T w_{s+1} = v_1, T w_{s+2} = v_2, ..., T w_n = v_m.

Such an operator exists, since a linear map is determined by prescribing its values on a basis. Note that range T = span(v_1, ..., v_m) = W_2. In addition, it is obvious that

    W_1 = span(w_1, ..., w_s) ⊆ null T.

Conversely, let v ∈ null T and write v = k_1 w_1 + ... + k_s w_s + k_{s+1} w_{s+1} + ... + k_n w_n. Then

    0 = Tv = T(k_1 w_1 + ... + k_n w_n) = k_{s+1} T w_{s+1} + ... + k_n T w_n = k_{s+1} v_1 + ... + k_n v_m,

and since v_1, ..., v_m are linearly independent, k_{s+1} = ... = k_n = 0. So v ∈ W_1, which means null T ⊆ W_1. Hence null T = W_1.
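A concrete finite-dimensional illustration of the construction (not part of the proof): in R^4, take W_1 = span{e_1, e_2} and W_2 = span{e_1 + e_2, e_3}, an arbitrary choice with dim W_1 + dim W_2 = 4; send the extending basis vectors e_3, e_4 to the chosen basis of W_2 and check the nullspace and range dimensions.

```python
# Build T by prescribing its values on a basis and check null T and range T.
import numpy as np

w_basis = np.eye(4)                     # columns w_1, ..., w_4; W_1 = span{w_1, w_2}
v1 = np.array([1.0, 1.0, 0.0, 0.0])     # basis of W_2
v2 = np.array([0.0, 0.0, 1.0, 0.0])

# Columns of T are the images of w_1, ..., w_4: namely 0, 0, v1, v2.
T = np.column_stack([np.zeros(4), np.zeros(4), v1, v2])

print(np.linalg.matrix_rank(T))               # 2 = dim W_2  (range T = W_2)
print(4 - np.linalg.matrix_rank(T))           # 2 = dim W_1  (null T = W_1)
print(np.allclose(T @ w_basis[:, :2], 0.0))   # w_1, w_2 lie in null T
```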