Math 291-2: Final Exam Solutions Northwestern University, Winter 2016


1. Determine whether each of the following statements is true or false. If it is true, explain why; if it is false, give a counterexample.

(a) If A and B are diagonalizable 2×2 matrices, then A + B is diagonalizable.

(b) There is no 3×3 matrix which transforms a sphere of radius 1 into a sphere of radius 3 and at the same time transforms a cube with edges of length 2 into one with edges of length 1.

(c) The limit lim_{(x,y)→(0,0)} x^4 y^4 / (x^2 + y^4)^3 does not exist.

Solution. (a) This is false. For instance,

    A = [ 1 0 ]   and   B = [ 0 1 ]
        [ 0 0 ]             [ 0 1 ]

are both diagonalizable, but

    A + B = [ 1 1 ]
            [ 0 1 ]

is not. (A point of clarification: some attempted solutions claimed that the zero matrix is not diagonalizable, but the zero matrix is diagonal, and any diagonal matrix is diagonalizable!)

(b) This is true. If A were going to be such a matrix, then we would need to have

    volume of output sphere = |det A| (volume of input sphere)

based on the expansion-factor interpretation of |det A|, so in particular this would imply that |det A| > 1. But at the same time a larger cube is being transformed into a smaller cube, which would require that |det A| < 1, which is not compatible with the first inequality.

(c) This is true. Approaching the origin along x = 0 gives 0 as the value for the limit, while approaching along x = y^2 gives 1/8, since when x = y^2 we have:

    x^4 y^4 / (x^2 + y^4)^3 = y^12 / (2y^4)^3 = y^12 / (8 y^12) = 1/8.

Since approaching (0, 0) along different paths gives different values for the limit, the limit in question does not exist.

2. Suppose Q is an n×n matrix such that Qx · Qy = 4(x · y) for all x, y ∈ R^n. Show that det Q = ±2^n. Hint: Figure out how to relate Q to something which is orthogonal.

Proof 1. The given equality gives

    ((1/2)Qx) · ((1/2)Qy) = x · y for all x, y ∈ R^n.

This says that (1/2)Q preserves dot products, so (1/2)Q is orthogonal and hence

    det((1/2)Q) = ±1,

since the determinant of an orthogonal matrix is ±1. On the other hand,

    det((1/2)Q) = (1/2)^n det Q

since (1/2)Q is obtained by multiplying each of the n rows of Q by 1/2, and by multilinearity of the determinant, each such row operation scales the determinant by that same amount. Thus

    (1/2)^n det Q = ±1,

so det Q = ±2^n as claimed.

Proof 2. Using properties of transposes, we have

    x · 4y = 4(x · y) = Qx · Qy = x · Q^T Qy for all x, y ∈ R^n.

This implies that Q^T Qy = 4y for any y (since in general, x · v = x · w for all x implies v = w), so Q^T Q = 4I. Thus

    (det Q)^2 = (det Q^T)(det Q) = det(Q^T Q) = det(4I) = 4^n,

where det(4I) = 4^n since 4I is a diagonal matrix consisting of n fours down the diagonal. Hence det Q = ±2^n as claimed.

Proof 3. The given equality gives

    ‖Qx‖^2 = Qx · Qx = 4(x · x) = 4‖x‖^2 for any x ∈ R^n,

so ‖Qx‖ = 2‖x‖ for any x. Also, for any nonzero x, y we have:

    (Qx · Qy) / (‖Qx‖ ‖Qy‖) = 4(x · y) / (2‖x‖ · 2‖y‖) = (x · y) / (‖x‖ ‖y‖).

The first fraction is the cosine of the angle between Qx and Qy and the second is the cosine of the angle between x and y, so this equality implies that these angles must be the same, and hence Q preserves angles. Now, take the parallelepiped in R^n formed by the standard basis vectors e_1, ..., e_n. Applying Q scales the length of each edge of this parallelepiped by 2 and leaves all right angles as right angles, so the resulting parallelepiped has volume given by the product of the lengths of its edges, which is 2^n since each such length is 2 times the corresponding length of 1 in the original parallelepiped. Thus the expansion factor |det Q| is 2^n, so det Q = ±2^n as claimed.

3. For n ≥ 2, let T : P_n(R) → P_n(R) be the linear transformation defined by T(p(x)) = x^2 p''(x). That is, T sends a polynomial p(x) to x^2 p''(x). Determine all eigenvalues and eigenvectors of T.

Solution. The n ≥ 2 stipulation is actually irrelevant here; it was left over from an earlier version of this problem which asked to show that T was not diagonalizable, and that requires n ≥ 2. First, we have T(1) = 0 and T(x) = 0, so any nonzero polynomial of degree at most 1 is an eigenvector with eigenvalue 0. For any 2 ≤ k ≤ n, we have

    T(x^k) = x^2 [k(k−1) x^{k−2}] = k(k−1) x^k,

so x^k and any of its nonzero scalar multiples is an eigenvector of T with eigenvalue k(k−1). So far this gives that anything of the form c x^k for 0 ≤ k ≤ n and nonzero c ∈ R is an eigenvector with eigenvalue

k(k−1). But since these include the basis 1, x, ..., x^n of P_n(R), there can be no further eigenvalues. Hence we have found all eigenvalues and eigenvectors.

Alternatively, denote an arbitrary p(x) ∈ P_n(R) by p(x) = a_0 + a_1 x + ... + a_n x^n. In order for this to be an eigenvector of T, we need:

    2a_2 x^2 + 3(2) a_3 x^3 + ... + n(n−1) a_n x^n = λ(a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ... + a_n x^n),

since the left side is T(p(x)) = x^2 p''(x) and the right side is λp(x). Comparing coefficients on both sides shows that such an equality holds only when a_2 = ... = a_n = 0 and λ = 0, or a_i = 0 for i ≠ k and λ = k(k−1), where k in the latter case is between 2 and n. Thus we get the same eigenvalues and eigenvectors as in the first method.

4. Let A be an n×n symmetric matrix. Show that all eigenvalues of A are positive if and only if x^T Ax > 0 for all nonzero x ∈ R^n.

Proof. This was a discussion problem. Orthogonally diagonalize A to get an orthonormal eigenbasis u_1, ..., u_n of R^n. Denote by c_1, ..., c_n the coordinates corresponding to this basis and by λ_1, ..., λ_n the associated eigenvalues. Then we can rewrite q(x) = x^T Ax as

    q(c) = λ_1 c_1^2 + ... + λ_n c_n^2

relative to this basis. If all the eigenvalues of A are positive, then q(c) > 0 for all c ≠ 0, since each term in the rewritten expression for q is nonnegative and at least one will be strictly positive. Conversely, if q(x) > 0 for all x ≠ 0, then in particular q(u_i) = λ_i > 0, since the coordinates for u_i are c_i = 1 and c_j = 0 for j ≠ i. Hence all the eigenvalues of A are positive.

5. Suppose f : R^n → R^m has the property that for any x ∈ R^n, ‖f(x)‖ ≤ ‖x‖ and

    lim_{h→0} ‖f(x + h) − f(x)‖ / ‖h‖ = 0.

Show that f is the zero function.

Proof. This was also a discussion problem, only with an extra assumption. The given limit property implies that Df(a) = 0 at every a ∈ R^n. Indeed, writing the given limit as

    lim_{h→0} ‖f(a + h) − f(a) − 0h‖ / ‖h‖ = 0,

where 0 in the numerator denotes the zero matrix, shows that f is differentiable at a with 0 being the matrix satisfying the required property of the derivative of f at a. Hence Df(a) = 0 as claimed. Thus all entries of Df(a) are zero, so all partial derivatives of all components of f are 0 at a. Since this is true for any a ∈ R^n, f is constant. Since ‖f(0)‖ ≤ ‖0‖ = 0, we must have f(0) = 0, so the constant which f equals must be zero, as claimed.
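The counterexample in part (a) of Problem 1 is easy to sanity-check by machine. A small sketch, assuming the sympy library is available:

```python
# Sanity check for Problem 1(a): A and B are each diagonalizable,
# but their sum A + B is not.
from sympy import Matrix

A = Matrix([[1, 0], [0, 0]])  # diagonal, hence diagonalizable
B = Matrix([[0, 1], [0, 1]])  # distinct eigenvalues 0 and 1, hence diagonalizable

print(A.is_diagonalizable())        # True
print(B.is_diagonalizable())        # True
print((A + B).is_diagonalizable())  # False: [[1, 1], [0, 1]] is a Jordan block
```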
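Problem 2 can likewise be checked numerically: any Q of the form 2 times an orthogonal matrix satisfies the hypothesis Qx · Qy = 4(x · y), and its determinant comes out to ±2^n. A sketch assuming numpy:

```python
# Numerical check for Problem 2: Q = 2 * (orthogonal matrix) satisfies
# Qx . Qy = 4 (x . y), and |det Q| = 2^n.
import numpy as np

rng = np.random.default_rng(0)
n = 5
Q0, _ = np.linalg.qr(rng.standard_normal((n, n)))  # Q0 is orthogonal
Q = 2 * Q0

x = rng.standard_normal(n)
y = rng.standard_normal(n)
assert np.isclose((Q @ x) @ (Q @ y), 4 * (x @ y))   # dot products scale by 4
assert np.isclose(abs(np.linalg.det(Q)), 2.0 ** n)  # |det Q| = 2^5 = 32
```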
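The equivalence in Problem 4 can also be spot-checked on an example. The sketch below, again assuming numpy, builds a symmetric matrix guaranteed to have positive eigenvalues and tests x^T Ax on random nonzero vectors:

```python
# Spot check for Problem 4: all eigenvalues of the symmetric matrix A are
# positive, and correspondingly x^T A x > 0 for randomly sampled nonzero x.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M.T @ M + np.eye(4)  # symmetric; every eigenvalue is at least 1

assert np.allclose(A, A.T)
assert np.all(np.linalg.eigvalsh(A) > 0)
for _ in range(1000):
    x = rng.standard_normal(4)  # nonzero with probability 1
    assert x @ A @ x > 0
```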

6. Suppose g : R^n → R^m and F : R^{m+n} → R^m are each differentiable and that F(g(x), x) = 0 for all x ∈ R^n. Write the Jacobian matrix of F at a point y ∈ R^{m+n} as

    DF(y) = ( A(y)  B(y) ),

where A(y) is the m×m matrix consisting of the first m columns of DF(y) and B(y) is the m×n matrix consisting of the last n columns. If det A(g(x), x) ≠ 0 for each x ∈ R^n, show that

    Dg(x) = −A(g(x), x)^{−1} B(g(x), x).

Hint: Express the function x ↦ F(g(x), x) as a composition of differentiable functions.

What's the point? This problem seems to have come out of nowhere, but it is actually part of the statement of what is known as the Implicit Function Theorem, which is one of the most important theorems in analysis. The setup is as follows. Say we have some equation of the form

    F(y_1, ..., y_m, x_1, ..., x_n) = 0.

The Implicit Function Theorem says that, under the determinant assumption given in the problem, the variables y_1, ..., y_m can be solved for in terms of the variables x_1, ..., x_n; more concretely, the given equation implicitly defines y = (y_1, ..., y_m) as a function of x = (x_1, ..., x_n), meaning there is some differentiable function g such that y = g(x). (Think of how the equation of a circle x^2 + y^2 = 1 implicitly defines y as a function of x via y = ±√(1 − x^2). This is also analogous to how, when solving a system of linear equations Ax = b, the non-free variables can be expressed in terms of the free variables; indeed, the Implicit Function Theorem is precisely the non-linear analog of this fact about systems of linear equations.) The point of this problem is that it describes the Jacobian matrix of the implicit function g in terms of the Jacobian matrix of F. Note, however, that you do not need to know anything about this theorem in order to solve this problem, which is just an application of the chain rule.

Proof. Consider the function h : R^n → R^{m+n} defined by h(x) = (g(x), x). The composition F ∘ h is the function x ↦ F(g(x), x), which we are assuming in the problem to be identically zero. Hence D(F ∘ h)(x) = 0 at all x ∈ R^n. On the other hand, the chain rule gives:

    D(F ∘ h)(x) = DF(h(x)) Dh(x) = ( A(g(x), x)  B(g(x), x) ) [ Dg(x) ; I ],

where Dh(x) = [ Dg(x) ; I ] denotes the (m + n) × n matrix whose upper portion is the m × n matrix Dg(x) and whose lower portion is the n × n identity matrix I. (This is found by computing the partial derivatives of

    h(x_1, ..., x_n) = (g_1(x_1, ..., x_n), ..., g_m(x_1, ..., x_n), x_1, ..., x_n),

where g_1, ..., g_m are the components of g; the partial derivatives of the first m components contribute Dg, and the partial derivatives of the final n components are either 0's or 1's, which is what gives the identity matrix portion.) Thus we have:

    0 = ( A(g(x), x)  B(g(x), x) ) [ Dg(x) ; I ] = A(g(x), x) Dg(x) + B(g(x), x).

Since det A(g(x), x) ≠ 0, the matrix A(g(x), x) is invertible, and solving for Dg(x) gives

    Dg(x) = −A(g(x), x)^{−1} B(g(x), x)

as claimed. (This was certainly the toughest problem, but it illustrates the power of being able to express derivatives in terms of Jacobian matrices, in particular when it comes to working out chain rule applications.)

7. Suppose f : R^n → R is C^2. Fix a unit vector u ∈ R^n and define the differentiable function g : R^n → R by g(x) = D_u f(x). Show that the maximal directional derivative of g at x ∈ R^n occurs in the direction of D^2 f(x)u, which is the product of the Hessian matrix D^2 f(x) and the vector u.

What's the point? Again, this problem seems to have come out of nowhere. But actually, this problem is related to the notion of a second-order directional derivative, which is analogous to a second-order partial derivative, only now we differentiate in arbitrary directions. Given unit vectors u and v, the second-order directional derivative D_v D_u f(x) measures how the directional derivative in the u-direction changes as you move in the v-direction. It turns out that if f is C^2, this second-order directional derivative is given by

    D_v D_u f(x) = v^T D^2 f(x) u.

Thus, just as the Jacobian matrix Df(x) encodes information about all possible directional derivatives, the Hessian matrix D^2 f(x) encodes information about all possible second-order directional derivatives. Note, however, that you do not need to know anything about second-order directional derivatives in order to solve this problem.

Proof. Since f is differentiable, D_u f(x) = ∇f(x) · u for any x ∈ R^n. Thus g is explicitly given by

    g(x) = D_u f(x) = ∇f(x) · u = (∂f/∂x_1)(x) u_1 + ... + (∂f/∂x_n)(x) u_n,

where u = (u_1, ..., u_n). The direction of the maximal directional derivative of g at x ∈ R^n is given by ∇g(x), so we must compute this gradient. Differentiating the explicit expression for g given above with respect to each of x_1, ..., x_n shows that:

    ∇g(x) = [ (∂^2 f/∂x_1 ∂x_1)(x) u_1 + ... + (∂^2 f/∂x_1 ∂x_n)(x) u_n ]
            [                          ...                              ]
            [ (∂^2 f/∂x_n ∂x_1)(x) u_1 + ... + (∂^2 f/∂x_n ∂x_n)(x) u_n ]

where the i-th entry is the derivative of (∂f/∂x_1)(x) u_1 + ... + (∂f/∂x_n)(x) u_n with respect to x_i. But this expression can be written as

    ∇g(x) = [ (∂^2 f/∂x_1 ∂x_1)(x) ... (∂^2 f/∂x_1 ∂x_n)(x) ] [ u_1 ]
            [          ...                      ...          ] [ ... ]
            [ (∂^2 f/∂x_n ∂x_1)(x) ... (∂^2 f/∂x_n ∂x_n)(x) ] [ u_n ]

which is the transpose of D^2 f(x) times u. Since f is C^2, the Hessian D^2 f(x) is symmetric, so we get

    ∇g(x) = (D^2 f(x))^T u = D^2 f(x) u

as the direction of the maximal directional derivative at x, as claimed.
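The circle example mentioned in Problem 6 gives a concrete check of the formula Dg(x) = −A(g(x), x)^{−1} B(g(x), x): taking F(y, x) = x^2 + y^2 − 1 and g(x) = √(1 − x^2) (the upper semicircle), the formula should reproduce g'(x). A sketch assuming sympy:

```python
# Check of Problem 6's formula on the circle: F(y, x) = x^2 + y^2 - 1,
# g(x) = sqrt(1 - x^2), with m = n = 1 so A = dF/dy and B = dF/dx.
from sympy import symbols, sqrt, diff, simplify

x, y = symbols('x y', real=True)
F = x**2 + y**2 - 1
g = sqrt(1 - x**2)

A = diff(F, y)  # the "first m columns" of DF (a 1x1 block here)
B = diff(F, x)  # the "last n columns" of DF (also 1x1)

formula = (-B / A).subs(y, g)               # -A^{-1} B evaluated at (g(x), x)
assert simplify(formula - diff(g, x)) == 0  # agrees with g'(x) = -x/sqrt(1 - x^2)
```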
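Finally, the identity ∇g(x) = D^2 f(x)u from Problem 7 can be verified symbolically for a sample function; the particular f below is an arbitrary choice. A sketch assuming sympy:

```python
# Symbolic check for Problem 7: for g = D_u f = grad(f) . u, the gradient
# of g equals the Hessian of f times u (using that mixed partials commute).
from sympy import symbols, Matrix, Rational, cos, hessian, simplify

x1, x2 = symbols('x1 x2', real=True)
f = x1**3 * x2 + cos(x1 * x2)                 # any C^2 function would do
u = Matrix([Rational(3, 5), Rational(4, 5)])  # a unit vector in R^2

grad_f = Matrix([f.diff(x1), f.diff(x2)])
g = (grad_f.T * u)[0, 0]  # g(x) = D_u f(x) = grad f(x) . u
grad_g = Matrix([g.diff(x1), g.diff(x2)])

assert simplify(grad_g - hessian(f, (x1, x2)) * u) == Matrix([0, 0])
```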