University of Colorado Denver Department of Mathematical and Statistical Sciences Applied Linear Algebra Ph.D. Preliminary Exam January 22, 2016


University of Colorado Denver, Department of Mathematical and Statistical Sciences. Applied Linear Algebra PhD Preliminary Exam, January 22, 2016.

Name:

Exam Rules: This exam lasts 4 hours. There are 8 problems. Each problem is worth 20 points. All solutions will be graded, and your final grade will be based on your six best problems; your final score will count out of 120 points. You are not allowed to use books or any other auxiliary material on this exam. Start each problem on a separate sheet of paper, write only on one side, and label all of your pages in consecutive order (e.g., use 1-1, 1-2, 1-3, ..., 2-1, 2-2, 2-3, ...). Read all problems carefully, and write your solutions legibly using a dark pencil or pen, in essay style, using full sentences and correct mathematical notation. Justify your solutions: cite theorems you use, provide counterexamples for disproof, give clear but concise explanations, and show calculations for numerical problems. If you are asked to prove a theorem, you may not merely quote or rephrase that theorem as your solution; instead, you must produce an independent proof. If you feel that any problem or any part of a problem is ambiguous or may have been stated incorrectly, please indicate your interpretation of that problem as part of your solution. Your interpretation should be such that the problem is not trivial. Please ask the proctor if you have any other questions.

[Score table for problems 1 through 8, total out of 120.]

DO NOT TURN THE PAGE UNTIL TOLD TO DO SO.

Applied Linear Algebra Preliminary Exam Committee: Varis Carey, Stephen Hartke, and Julien Langou (Chair).

Problem 1. Let V be an inner product space over C, with inner product ⟨u, v⟩.
(a) Prove that any finite set S of nonzero, pairwise orthogonal vectors is linearly independent.
(b) If T : V → V is a linear operator satisfying ⟨T(u), v⟩ = ⟨u, T(v)⟩ for all u, v ∈ V, prove that all eigenvalues of T are real.
(c) If T : V → V is a linear operator satisfying ⟨T(u), v⟩ = ⟨u, T(v)⟩ for all u, v ∈ V, prove that the eigenvectors of T associated with distinct eigenvalues λ and μ are orthogonal.

Solution: (Note: in the solution we write T(u) as Tu; this is standard notation. We take the inner product to be linear in its first slot.)

(a) Let S = (v1, ..., vn) be a finite set of n nonzero, pairwise orthogonal vectors. We want to prove that (v1, ..., vn) is linearly independent. Let α1, ..., αn be n complex numbers such that α1 v1 + ... + αn vn = 0. We want to prove that α1 = ... = αn = 0. Taking the inner product of the previous equality with v1, we get
0 = ⟨α1 v1 + ... + αn vn, v1⟩ = α1 ⟨v1, v1⟩ + ... + αn ⟨vn, v1⟩ = α1 ⟨v1, v1⟩.
(The first equality is what we start with, the second is first-slot linearity of ⟨·, ·⟩, and the third is pairwise orthogonality.) So we get α1 ⟨v1, v1⟩ = 0. But v1 is nonzero, so ⟨v1, v1⟩ is not zero, and so this equality implies α1 = 0. With a similar method repeated n − 1 times, we get α2 = ... = αn = 0.

(b) Let T : V → V be a self-adjoint linear operator (so that ⟨Tu, v⟩ = ⟨u, Tv⟩ for all u, v ∈ V). Let λ be an eigenvalue of T. We want to prove that λ is real. Let v be an associated (nonzero) eigenvector of λ, so Tv = λv. Taking the inner product of the previous equality with v in the first slot, we get
⟨v, Tv⟩ = ⟨v, λv⟩ = λ̄ ⟨v, v⟩.
Taking the inner product of the same equality with v in the second slot, we get
⟨Tv, v⟩ = ⟨λv, v⟩ = λ ⟨v, v⟩.
Since ⟨v, Tv⟩ = ⟨Tv, v⟩, we get λ̄ ⟨v, v⟩ = λ ⟨v, v⟩. Since v is nonzero, ⟨v, v⟩ is not zero, and so this equality implies λ = λ̄. So λ is real.
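The claims in parts (b) and (c) can be sanity-checked numerically. Below is a minimal sketch, assuming NumPy is available; the Hermitian matrix T is an arbitrary example chosen for this check, not part of the exam.

```python
# Numerical illustration of Problem 1 (b) and (c): for a self-adjoint
# (Hermitian) operator, eigenvalues are real and eigenvectors belonging
# to distinct eigenvalues are orthogonal.
import numpy as np

T = np.array([[2.0, 1 + 1j],
              [1 - 1j, 3.0]])          # T equals its conjugate transpose
assert np.allclose(T, T.conj().T)

# (b) all eigenvalues are real: the characteristic polynomial here is
# x^2 - 5x + 4, with real roots 1 and 4.
vals = np.linalg.eigvals(T)
assert np.allclose(vals.imag, 0.0, atol=1e-10)

# (c) eigenvectors for the distinct eigenvalues 1 and 4 are orthogonal.
eigvals, eigvecs = np.linalg.eigh(T)   # eigh assumes a Hermitian input
inner = np.vdot(eigvecs[:, 0], eigvecs[:, 1])
assert abs(inner) < 1e-10
```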

(c) Let T : V → V be a self-adjoint linear operator (so that ⟨Tu, v⟩ = ⟨u, Tv⟩ for all u, v ∈ V). Let (u, λ) and (v, μ) be two eigenpairs of T where λ and μ are distinct. We want to prove that ⟨u, v⟩ = 0. We have Tu = λu; taking the inner product with v, we get
⟨Tu, v⟩ = ⟨λu, v⟩ = λ ⟨u, v⟩.
We also have Tv = μv; taking the inner product with u, we get
⟨u, Tv⟩ = ⟨u, μv⟩ = μ̄ ⟨u, v⟩ = μ ⟨u, v⟩.
(The last equality comes from part (b): μ is real.) But ⟨u, Tv⟩ = ⟨Tu, v⟩, so we get
λ ⟨u, v⟩ = μ ⟨u, v⟩,
in other words,
(λ − μ) ⟨u, v⟩ = 0.
We have that λ and μ are distinct, so λ − μ ≠ 0, and so we must have ⟨u, v⟩ = 0.

Problem 2.
(a) Let A be a 2-by-2 real matrix of the form A = [λ 0; 0 λ], where λ ∈ R. Prove that A has a square root: that is, there exists a matrix B such that B² = A.
(b) Prove that a real symmetric matrix having the property that every negative eigenvalue occurs with even multiplicity has a square root.

Solution:
(a) Let A be a 2-by-2 real matrix of the form A = [λ 0; 0 λ], where λ ∈ R.
Method 1: Take B = [0 λ; 1 0]. Observe that B² = [λ 0; 0 λ] = A.
Method 2: (Much longer, but perhaps more intuitive.) There are two cases. Either λ is nonnegative, in which case a square root of A is B = [λ^(1/2) 0; 0 λ^(1/2)]; or λ is negative, in which case a square root of A is B = [0 −(−λ)^(1/2); (−λ)^(1/2) 0]. In both cases we have exhibited a square root of A. We note that, for the negative case, we were inspired by [0 −1; 1 0]² = [−1 0; 0 −1]. This can be understood by saying that two 90-degree rotations compose to one 180-degree rotation.

(b) Let A be a real symmetric matrix having the property that every negative eigenvalue occurs with even multiplicity. (We note that the question does not specify whether the multiplicity is algebraic or geometric. Since the matrix is symmetric, we recall that the geometric multiplicity and the algebraic multiplicity are the same.) Since A is symmetric, we can use the spectral theorem to diagonalize A in an orthonormal basis: there exist real matrices V and D such that A = V D Vᵀ, where D is diagonal with the eigenvalues of A on its diagonal, ordered from smallest to largest, and V is an orthogonal matrix (i.e., V Vᵀ = Vᵀ V = I).

Now we pay special attention to the matrix D, and we explain below why D has a square root. We call λ⁻_i, ..., λ⁻_1 the i negative eigenvalues of A, ordered from smallest to largest, and λ⁺_1, ..., λ⁺_j the j nonnegative eigenvalues of A, ordered from smallest to largest. The matrix D is therefore made of scalar diagonal blocks of the form λI:
D = diag(D_{λ⁻_i}, ..., D_{λ⁻_1}, D_{λ⁺_1}, ..., D_{λ⁺_j}).
By part (a), each block D_λ has a square root B_λ. Either λ is nonnegative, and then simply B_λ = λ^(1/2) I; or λ is negative, in which case the dimension of D_λ is even (by the assumption of the problem), and so we can split D_λ into identical 2-by-2 scalar blocks of the form [λ 0; 0 λ], each of which has a square root by part (a), for example [0 −(−λ)^(1/2); (−λ)^(1/2) 0]. So each block D_λ has a square root B_λ, whether λ is nonnegative or negative, and so we set
B_D = diag(B_{λ⁻_i}, ..., B_{λ⁻_1}, B_{λ⁺_1}, ..., B_{λ⁺_j}),
and we have D = B_D B_D. This explains why D has a square root B_D. Now we call B_A = V B_D Vᵀ, and we see that
A = V D Vᵀ = V B_D B_D Vᵀ = V B_D (Vᵀ V) B_D Vᵀ = (V B_D Vᵀ)(V B_D Vᵀ) = B_A B_A,
so that B_A is a square root of A.
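The construction above can be sketched numerically. Below is a minimal check, assuming NumPy is available; the diagonal matrix D = diag(−4, −4, 9) is a hypothetical example in which the negative eigenvalue −4 has even multiplicity.

```python
# Sketch of Problem 2: a real symmetric matrix whose negative eigenvalues
# all have even multiplicity has a real square root.
import numpy as np

# Part (a): B = [0 lam; 1 0] squares to lam * I for any real lam.
lam = -4.0
B = np.array([[0.0, lam],
              [1.0, 0.0]])
assert np.allclose(B @ B, lam * np.eye(2))

# Part (b): assemble a square root block-diagonally for D = diag(-4, -4, 9).
s = np.sqrt(4.0)                      # (-lam)^(1/2) for lam = -4
rot = s * np.array([[0.0, -1.0],      # (s * 90-degree rotation)^2 = -s^2 I
                    [1.0,  0.0]])
BD = np.zeros((3, 3))
BD[:2, :2] = rot                      # square root of the -4 block
BD[2, 2] = 3.0                        # square root of the 9 block
D = np.diag([-4.0, -4.0, 9.0])
assert np.allclose(BD @ BD, D)        # BD is a real square root of D
```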

Problem 3. Let A and B be two complex square matrices, and suppose that A and B have the same eigenvectors. Show that if the minimal polynomial of A is (x + 1)² and the characteristic polynomial of B is x⁵, then B³ = 0.

Solution: Since A and B have the same eigenvectors, the matrices A and B have the same dimension. (Note: since we are working with complex square matrices, A and B each have at least one eigenvector, and so the statement "A and B have the same eigenvectors" cannot be vacuously true.) Since the characteristic polynomial of B is of degree 5, B is 5-by-5; consequently, so is A.

Since the minimal polynomial of A is (x + 1)², the only eigenvalue of A is −1, every Jordan block of A has size at most 2, and at least one block has size exactly 2. So the Jordan form of A is one of:
(a) two Jordan blocks of size 2 and one Jordan block of size 1, in which case dim(Null(A + I)) = 3;
(b) one Jordan block of size 2 and three Jordan blocks of size 1, in which case dim(Null(A + I)) = 4.
So dim(Null(A + I)) is either 3 or 4.

Since the characteristic polynomial of B is x⁵, 0 is the only eigenvalue of B. Since A and B have the same eigenvectors, since B has only one eigenvalue, 0, and since dim(Null(A + I)) is 3 or 4, dim(Null(B)) is either 3 or 4 as well. So B has either 3 or 4 Jordan blocks. This means that the possible Jordan block structures for B are:
(a) 3 Jordan blocks (dim(Null(B)) = 3): B has two Jordan blocks of size 2 and one Jordan block of size 1. In that case, the minimal polynomial of B is x², so B² = 0, and so indeed B³ = 0.
(b) 3 Jordan blocks (dim(Null(B)) = 3): B has one Jordan block of size 3 and two Jordan blocks of size 1. In that case, the minimal polynomial of B is x³, and so B³ = 0.
(c) 4 Jordan blocks (dim(Null(B)) = 4): B has one Jordan block of size 2 and three Jordan blocks of size 1. In that case, the minimal polynomial of B is x², so B² = 0, and so indeed B³ = 0.
In all three cases, we see that B³ = 0.
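Case (b) above, the one that actually needs the third power, can be checked numerically. A minimal sketch, assuming NumPy is available:

```python
# Check of Problem 3's conclusion on one possible Jordan form of B:
# a 5-by-5 nilpotent matrix with one Jordan block of size 3 and two
# blocks of size 1 satisfies B^3 = 0 but B^2 != 0 (minimal polynomial x^3).
import numpy as np

B = np.zeros((5, 5))
B[0, 1] = 1.0   # Jordan block of size 3 in the top-left corner
B[1, 2] = 1.0

assert not np.allclose(np.linalg.matrix_power(B, 2), np.zeros((5, 5)))
assert np.allclose(np.linalg.matrix_power(B, 3), np.zeros((5, 5)))
```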

Problem 4. Let A be an m-by-n complex matrix. Let
B = [0 A; A* 0].
Prove that ‖B‖₂ = ‖A‖₂.

Solution: By definition of the 2-norm, we have
‖A‖₂ = max of ‖Ax‖₂ over x ∈ Cⁿ with ‖x‖₂ = 1,
‖A*‖₂ = max of ‖A*y‖₂ over y ∈ Cᵐ with ‖y‖₂ = 1,
‖B‖₂ = max of ‖Bz‖₂ over z ∈ Cᵐ⁺ⁿ with ‖z‖₂ = 1.
We also know that ‖A*‖₂ = ‖A‖₂.

(‖A‖₂ ≤ ‖B‖₂.) Let x ∈ Cⁿ be such that ‖x‖₂ = 1 and ‖Ax‖₂ = ‖A‖₂. Consider the vector z = (0; x) in Cᵐ⁺ⁿ. Then ‖z‖₂ = ‖x‖₂ = 1, and
Bz = [0 A; A* 0] (0; x) = (Ax; 0), so ‖Bz‖₂ = ‖Ax‖₂ = ‖A‖₂.
Since ‖B‖₂ is the maximum of ‖Bz‖₂ over unit vectors z, and since we have found a z such that ‖z‖₂ = 1 and ‖Bz‖₂ = ‖A‖₂, this proves that ‖A‖₂ ≤ ‖B‖₂.

(‖B‖₂ ≤ ‖A‖₂.) Let z ∈ Cᵐ⁺ⁿ be such that ‖z‖₂ = 1. Let y ∈ Cᵐ and x ∈ Cⁿ be such that z = (y; x). We have ‖y‖₂² + ‖x‖₂² = 1. We now look at ‖Bz‖₂:
Bz = [0 A; A* 0] (y; x) = (Ax; A*y), so ‖Bz‖₂ = (‖Ax‖₂² + ‖A*y‖₂²)^(1/2).
But, on the one hand, ‖Ax‖₂ ≤ ‖A‖₂ ‖x‖₂. And, on the other hand, ‖A*y‖₂ ≤ ‖A*‖₂ ‖y‖₂, and, since ‖A*‖₂ = ‖A‖₂, we get ‖A*y‖₂ ≤ ‖A‖₂ ‖y‖₂. So
‖Bz‖₂ ≤ ‖A‖₂ (‖x‖₂² + ‖y‖₂²)^(1/2),
and, since ‖x‖₂² + ‖y‖₂² = 1, we find ‖Bz‖₂ ≤ ‖A‖₂. We proved that, for all z such that ‖z‖₂ = 1, we have ‖Bz‖₂ ≤ ‖A‖₂; so, since ‖B‖₂ is the maximum of ‖Bz‖₂ over unit vectors z, we get that ‖B‖₂ ≤ ‖A‖₂.

We can now conclude: ‖B‖₂ = ‖A‖₂.

Note: Another way to go about this problem is to know the relation between the SVD of A and the eigenvalues of B. For example, in the case m ≥ n, A has n singular values, which we denote (σ₁, ..., σₙ). Then B has m + n eigenvalues, and they are (σ₁, ..., σₙ, −σ₁, ..., −σₙ, 0, ..., 0), where there are m − n zeros at the end (to make m + n eigenvalues; indeed m + n = n + n + (m − n)). The proof of this result is by construction. (The construction is actually hinted at by the work above in this problem.) Below is the proof of this result. Note that once we know this result, it is clear that ‖B‖₂ = ‖A‖₂: indeed, (1) the 2-norm is the maximum singular value, so ‖A‖₂ = maxᵢ σᵢ, and (2) for a Hermitian matrix (i.e., B), the 2-norm is the maximum absolute value of an eigenvalue.

Proof of the previously claimed result. (Note: this proof would have been needed for full credit.) Let m ≥ n, and let A be m-by-n. We consider the full singular value decomposition of A:
A = (U₁ U₂) [Σ; 0] W*,
where (a) U₁ is m-by-n, U₂ is m-by-(m − n), Σ is n-by-n, the zero block is (m − n)-by-n, and W is n-by-n; (b) (U₁ U₂) is unitary; (c) W is unitary; (d) Σ is diagonal with the singular values of A on its diagonal. Now we form
V = [ (√2/2)U₁  (√2/2)U₁  U₂ ; (√2/2)W  −(√2/2)W  0 ]  and  D = diag(Σ, −Σ, 0),
where the zero block in D is (m − n)-by-(m − n). We claim (simple computations) that (a) B = V D V*, (b) V V* = V* V = I_{m+n}, and (c) D is diagonal. So V is a matrix of eigenvectors of B, and the eigenvalues of B are on the diagonal of D. A similar result exists for n ≥ m.
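Both the norm identity and the claimed eigenvalue structure of B can be checked numerically. A minimal sketch, assuming NumPy is available; the random complex matrix A is an arbitrary example.

```python
# Numerical check of Problem 4: for the Hermitian block matrix
# B = [0 A; A* 0], the spectral norm of B equals that of A, and the
# eigenvalues of B are +/- the singular values of A, padded with zeros.
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

B = np.block([[np.zeros((m, m)), A],
              [A.conj().T, np.zeros((n, n))]])

# spectral norms agree
assert np.allclose(np.linalg.norm(A, 2), np.linalg.norm(B, 2))

# eigenvalues of B: (sigma_1..sigma_n, -sigma_1..-sigma_n, m - n zeros)
sv = np.linalg.svd(A, compute_uv=False)
ev = np.sort(np.linalg.eigvalsh(B))
expected = np.sort(np.concatenate([-sv, np.zeros(m - n), sv]))
assert np.allclose(ev, expected)
```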

Problem 5. Given the 2-by-2 real matrix A = [0 1; a b], determine the set of all real a, b such that:
(a) A is orthogonal;
(b) A is symmetric positive definite;
(c) A is nilpotent;
(d) A is unitarily diagonalizable.

Solution:
(a) We want A to be orthogonal, that is, AᵀA = AAᵀ = I. This means that the rows of A have to form an orthonormal basis. The first row of A is (0, 1), so the second row has to be (±1, 0). We have A = [0 1; ±1 0]: a = ±1, b = 0.
(b) We want A to be symmetric positive definite. Clearly, by symmetry, a has to be 1. Now, we know that a symmetric positive definite matrix has a positive diagonal (because for every nonzero vector x we need xᵀAx > 0, and a_ii = eᵢᵀAeᵢ). But a₁₁ = 0, so no value of b can make A symmetric positive definite. There is no solution.
(c) We want A to be nilpotent. For a 2-by-2 matrix, A nilpotent requires A² = 0. We have
A² = [0 1; a b][0 1; a b] = [a b; ab a + b²].
For this to be the zero matrix, we need a = 0 and b = 0. We have A = [0 1; 0 0]: a = 0, b = 0.
(d) We want A to be unitarily diagonalizable, which is equivalent to A being normal. For A to be normal, we need AᵀA = AAᵀ. We have
AᵀA = [0 a; 1 b][0 1; a b] = [a² ab; ab 1 + b²] and AAᵀ = [0 1; a b][0 a; 1 b] = [1 b; b a² + b²].
Equating these leads to a² = 1 and ab = b, so either a = 1 and b is any real number, or a = −1 and b = 0. We have either A = [0 1; 1 b] with b ∈ R, or A = [0 1; −1 0].
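Each characterization above can be spot-checked numerically. A minimal sketch, assuming NumPy is available; the particular (a, b) values are chosen to match each part's answer.

```python
# Spot checks for Problem 5, with A = [0 1; a b].
import numpy as np

def A_of(a, b):
    return np.array([[0.0, 1.0], [a, b]])

# (a) orthogonal: a = +/-1, b = 0
for a in (1.0, -1.0):
    A = A_of(a, 0.0)
    assert np.allclose(A.T @ A, np.eye(2))

# (c) nilpotent: a = 0, b = 0
A = A_of(0.0, 0.0)
assert np.allclose(A @ A, np.zeros((2, 2)))

# (d) normal (hence unitarily diagonalizable):
# a = 1 with any b, or a = -1 with b = 0
for a, b in ((1.0, 2.5), (1.0, -3.0), (-1.0, 0.0)):
    A = A_of(a, b)
    assert np.allclose(A.T @ A, A @ A.T)
```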

Problem 6. (Note: questions (a) and (b) are not related.)
(a) We consider C([0, 1]), the space of continuous functions on [0, 1]. We equip C([0, 1]) with the inner product ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx. We consider the subspace P₁ of all polynomials of degree 1 or less on the unit interval 0 ≤ x ≤ 1. Find the least squares approximation to the function f(x) = x³ by a polynomial p ∈ P₁ on the interval [0, 1], i.e., find p ∈ P₁ that minimizes ‖p − f‖₂.
(b) We consider the vector space P₂ of all polynomials of degree 2 or less on the unit interval 0 ≤ x ≤ 1. We consider the set of functions
S = {p ∈ P₂ : ∫₀¹ p(x) dx = ∫₀¹ p′(x) dx}.
Show that this is a linear subspace of P₂, determine its dimension, and find a basis for S.

Solution:
(a) Goal: We want to find the orthogonal projection p of f onto P₁ (orthogonal in the sense of the given inner product).
Method: (1) Get a basis (e₁, e₂) of P₁. (2) Construct an orthonormal basis (q₁, q₂) of P₁ from (e₁, e₂) using the Gram-Schmidt procedure: (i) r₁₁ = ‖e₁‖, (ii) q₁ = e₁/r₁₁, (iii) r₁₂ = ⟨q₁, e₂⟩, (iv) w = e₂ − r₁₂ q₁, (v) r₂₂ = ‖w‖, (vi) q₂ = w/r₂₂. (3) Compute p, the orthogonal projection of f onto P₁, with the formula
p = ⟨q₁, f⟩ q₁ + ⟨q₂, f⟩ q₂.
Computation:
(1) e₁ = 1 and e₂ = x form a basis of P₁; let us take this one.
(2) (i) ‖e₁‖² = ⟨e₁, e₁⟩ = ∫₀¹ dx = 1, so r₁₁ = ‖e₁‖ = 1. (ii) q₁ = e₁/r₁₁ = 1. (iii) r₁₂ = ⟨q₁, e₂⟩ = ∫₀¹ x dx = 1/2. (iv) w = e₂ − r₁₂ q₁ = x − 1/2. (v) r₂₂ = ‖w‖ = (∫₀¹ (x − 1/2)² dx)^(1/2) = 1/√12. (vi) q₂ = w/r₂₂ = √3 (2x − 1).
So (1, √3(2x − 1)) is an orthonormal basis of P₁.
(3) (i) ⟨q₁, f⟩ = ∫₀¹ x³ dx = 1/4. (ii) ⟨q₂, f⟩ = ∫₀¹ √3 (2x − 1) x³ dx = 3√3/20. (iii)
p = ⟨q₁, f⟩ q₁ + ⟨q₂, f⟩ q₂ = 1/4 + (3√3/20) · √3 (2x − 1) = (9/10)x − 1/5.
The orthogonal projection of f onto P₁ is p = (9/10)x − 1/5. It is also the least squares approximation to the function f(x) = x³ by a polynomial in P₁ on the interval [0, 1]; that is, p minimizes ‖q − f‖₂ over all q ∈ P₁.

(b) To prove that S is a linear subspace, one takes two polynomials p and q in S and two real numbers α and β, and proves that the polynomial αp + βq is in S. This is a routine exercise; we skip the writing here. We now want to find a basis of S and determine its dimension. Let p ∈ P₂; then p writes
p = ax² + bx + c.
If p is in S, it must satisfy the constraint ∫₀¹ p(x) dx = ∫₀¹ p′(x) dx, that is,
∫₀¹ (ax² + bx + c) dx = ∫₀¹ (2ax + b) dx,
that is,
a/3 + b/2 + c = a + b,
that is,
4a + 3b − 6c = 0.
Two linearly independent vectors (a, b, c) satisfying this equation are, for example, (3, 0, 2) and (0, 2, 1). So a basis for S is, for example, (3x² + 2, 2x + 1). S is of dimension 2.
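Both answers can be verified in exact arithmetic: the residual f − p of the projection must be orthogonal to 1 and x, and the basis polynomials of S must satisfy the constraint 4a + 3b − 6c = 0. A minimal sketch using only the standard library:

```python
# Exact check of Problem 6 using rational arithmetic.
from fractions import Fraction

def int_xk(k):
    # integral over [0, 1] of x^k
    return Fraction(1, k + 1)

# (a) residual r(x) = x^3 - (9/10)x + 1/5; check <r, 1> = <r, x> = 0
nine_tenths = Fraction(9, 10)
one_fifth = Fraction(1, 5)
r_dot_1 = int_xk(3) - nine_tenths * int_xk(1) + one_fifth * int_xk(0)
r_dot_x = int_xk(4) - nine_tenths * int_xk(2) + one_fifth * int_xk(1)
assert r_dot_1 == 0
assert r_dot_x == 0

# (b) basis polynomials 3x^2 + 2 and 2x + 1, i.e. (a, b, c) = (3, 0, 2)
# and (0, 2, 1), satisfy 4a + 3b - 6c = 0
for a, b, c in ((3, 0, 2), (0, 2, 1)):
    assert 4 * a + 3 * b - 6 * c == 0
```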

Problem 7. Let E, F, and G be vector spaces. Let f ∈ L(E, F) and g ∈ L(F, G). Prove that:
Range(g ∘ f) = Range(g) ⟺ Null(g) + Range(f) = F.

Solution:
(⟸) We assume that Null(g) + Range(f) = F. It is clear that Range(g ∘ f) ⊆ Range(g). We want to prove that Range(g) ⊆ Range(g ∘ f). Let x ∈ Range(g); we want to prove that x ∈ Range(g ∘ f). Since x ∈ Range(g), there exists y ∈ F such that x = g(y). Since y ∈ F and F = Null(g) + Range(f) (by our assumption), there exist u ∈ Null(g) and v ∈ Range(f) such that y = u + v. Since v ∈ Range(f), there exists w ∈ E such that v = f(w). Now we have x = g(y), y = u + v, and v = f(w), so x = g(u + f(w)) = g(u) + (g ∘ f)(w). But u ∈ Null(g), so x = (g ∘ f)(w), so x ∈ Range(g ∘ f).
(⟹) We assume that Range(g ∘ f) = Range(g). It is clear that Null(g) + Range(f) ⊆ F. We want to prove that F ⊆ Null(g) + Range(f). Let x ∈ F; we want to find u ∈ Null(g) and v ∈ Range(f) such that x = u + v. Clearly, g(x) ∈ Range(g), so (by our assumption) g(x) ∈ Range(g ∘ f), and so there exists z ∈ E such that g(x) = (g ∘ f)(z). Let u = x − f(z) and v = f(z). One checks that (1) x = u + v, (2) u ∈ Null(g) (since g(u) = g(x) − g(f(z)) = 0), and (3) v ∈ Range(f). So x ∈ Null(g) + Range(f).
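The equivalence can be illustrated with concrete matrices, identifying f and g with the maps x ↦ Fx and y ↦ Gy. A minimal sketch, assuming NumPy is available; these particular matrices are an arbitrary example, not part of the exam.

```python
# Illustration of Problem 7:
# Range(g o f) = Range(g)  <=>  Null(g) + Range(f) = F.
import numpy as np

F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])       # f: R^2 -> R^3, range = span(e1, e2)
G = np.array([[1.0, 0.0, 1.0]])  # g: R^3 -> R,  null space = {x1 + x3 = 0}

# Range(g o f) = Range(g): same column space, checked here via equal rank
# (both are subspaces of R, so equal rank implies equality).
assert np.linalg.matrix_rank(G @ F) == np.linalg.matrix_rank(G)

# Null(g) + Range(f) = R^3: a basis of Null(G) together with the columns
# of F spans all of R^3 (rank 3).
null_basis = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [-1.0, 0.0]])   # columns span Null(G)
combined = np.hstack([null_basis, F])
assert np.linalg.matrix_rank(combined) == 3
```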

Problem 8. Let A and B be n × n complex matrices such that AB = BA. Show that if A has n distinct eigenvalues, then A, B, and AB are all diagonalizable.

Solution: This question was given as question #4 in the June 2012 Linear Algebra Preliminary Exam; please see the answer there.
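The statement can at least be illustrated numerically: when A has distinct eigenvalues and commutes with B, the eigenbasis of A diagonalizes A, B, and AB simultaneously. A minimal sketch, assuming NumPy is available; the matrices below are hypothetical examples built to commute.

```python
# Numerical illustration of Problem 8: if A has n distinct eigenvalues and
# AB = BA, then A, B, and AB are simultaneously diagonalizable.
import numpy as np

P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])                       # invertible basis change
Pinv = np.linalg.inv(P)
A = P @ np.diag([1.0, 2.0, 3.0]) @ Pinv               # distinct eigenvalues
B = P @ np.diag([5.0, 5.0, 7.0]) @ Pinv               # commutes with A
assert np.allclose(A @ B, B @ A)

# A's eigenbasis diagonalizes B and AB as well.
eigvals, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
for M in (A, B, A @ B):
    D = Vinv @ M @ V
    assert np.allclose(D, np.diag(np.diag(D)), atol=1e-8)
```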