MIT 18.06 Final Exam Solutions, Spring 2017


Problem 1: For some real matrix A, two given vectors a_1, a_2 in R^3 form a basis for its column space and three given vectors n_1, n_2, n_3 in R^5 form a basis for its null space:

    C(A) = span{a_1, a_2},    N(A) = span{n_1, n_2, n_3}.

(a) What is the size m × n of A, what is its rank, and what are the dimensions of C(A^T) and N(A^T)?

Solution: A must be a 3 × 5 matrix of rank 2 (the dimension of the column space). C(A^T) must have the same dimension 2, and N(A^T) must have dimension 3 − 2 = 1.

(b) Give one possible matrix A with this C(A) and N(A).

Solution: We have to make all the columns out of the two C(A) vectors. Let's make the first column a_1. Each null-space vector then forces a linear relation among the columns: from the first null-space vector, the second column is then determined; from the second null-space vector, the fifth column is determined; from the third null-space vector, the third column is determined. The fourth column must be independent and give us the other C(A) vector, so we can just make it a_2. Of course, there are many other possible solutions.
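The dimension bookkeeping in part (a) is easy to check numerically; here is a minimal NumPy sketch, using a hypothetical 3 × 5 rank-2 matrix (the entries are illustrative, not the exam's):

```python
import numpy as np

# Hypothetical 3x5 matrix of rank 2: the third row is the sum of the first two.
A = np.array([[1., 0., 1., 0., 2.],
              [0., 1., 1., 1., 0.],
              [1., 1., 2., 1., 2.]])

m, n = A.shape
r = np.linalg.matrix_rank(A)
print(m, n, r)        # 3 5 2: a 3x5 matrix of rank 2
print(n - r, m - r)   # 3 1: dim N(A) and dim N(A^T)
```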

(c) Give a right-hand side b for which Ax = b has a solution, and give all the solutions x for your A from the previous part. (Hint: you should not have to do Gaussian elimination.)

Solution: We just need b to be in the column space, e.g. b = a_1 (the first column of our A). Then a particular solution is x = (1, 0, 0, 0, 0), and to get all possible solutions we just need to add arbitrary multiples of the null-space vectors:

    x = (1, 0, 0, 0, 0) + c_1 n_1 + c_2 n_2 + c_3 n_3

for any scalars c_1, c_2, c_3. (Again, there are many possible solutions to this part.)

(d) For a right-hand side b not in the column space, the equation Ax = b has no solutions. Instead, give another right-hand side b̂ for which Ax̂ = b̂ is solvable and yields a least-squares solution x̂ (i.e. x̂ minimizes ‖Ax − b‖). b̂ must be the ______ of b onto the subspace ______. (Hint: if you find yourself solving a system of equations, you are missing a way to do it much more easily. The answer should not depend on your choice of A matrix in part b.)

Solution: We just need to project b onto C(A). If you look closely, you'll notice that the basis of C(A) given above is actually orthogonal (but not orthonormal), so the orthogonal projection is easy. Calling the two basis vectors a_1 and a_2, the projection is

    b̂ = (a_1^T b / a_1^T a_1) a_1 + (a_2^T b / a_2^T a_2) a_2.

We could also have used the projection formula b̂ = Pb for P = Â(Â^T Â)^{-1}Â^T, where Â = (a_1 a_2). This is pretty easy, too, since Â^T Â is diagonal (it is diagonal because the basis is orthogonal), making it easy to invert.
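Part (d) can be sanity-checked in a few lines. A sketch, assuming illustrative orthogonal basis vectors a_1, a_2 and right-hand side b (not the exam's numbers):

```python
import numpy as np

a1 = np.array([1., 1., 2.])
a2 = np.array([1., 1., -1.])   # orthogonal to a1, but not unit length
b  = np.array([0., 1., 0.])    # not in span{a1, a2}

# Orthogonal basis: project onto each basis vector separately and add.
bhat = (a1 @ b) / (a1 @ a1) * a1 + (a2 @ b) / (a2 @ a2) * a2

# Equivalent projection matrix P = Ahat (Ahat^T Ahat)^{-1} Ahat^T;
# Ahat^T Ahat is diagonal precisely because the basis is orthogonal.
Ahat = np.column_stack([a1, a2])
P = Ahat @ np.linalg.inv(Ahat.T @ Ahat) @ Ahat.T
print(bhat)
print(P @ b)                   # same vector
```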

Problem 2: Suppose you have data points (x_i, y_i) for i = 1, 2, …, N, and you want to fit them to a power-law curve y(x) = a x^b for some a and b. Equivalently, you want to fit log y_i to

    log y = log(a x^b) = b log x + log a.

Describe how to find a and b to minimize the sum of the squares of the errors:

    s(a, b) = Σ_{i=1}^{N} (b log x_i + log a − log y_i)².

Write down a 2 × 2 system of equations for the vector z = (b, log a); you can leave your equations in the form of a product of matrices/vectors as long as you say what the matrices/vectors are. (Hint: rewrite it as an 18.06-style least-squares problem with matrices/vectors.)

Solution: In linear-algebra form, we write s = ‖Az − b‖², where A is the N × 2 matrix

    A = [ log x_1   1 ]
        [ log x_2   1 ]
        [    ⋮      ⋮ ]
        [ log x_N   1 ]

and b is the N-component vector b = (log y_1, log y_2, …, log y_N). Then minimizing s over z is just an ordinary least-squares problem, and the solution is given by the normal equations A^T A z = A^T b, which is a 2 × 2 system of equations.
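In code, the whole fit is a few lines. A sketch with synthetic data; the sample size and the true values a = 2.5, b = 1.7 are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
x = rng.uniform(1.0, 10.0, N)
y = 2.5 * x**1.7 * rng.lognormal(0.0, 0.1, N)  # noisy power law y = a x^b

# Least squares for log y = b log x + log a, with z = (b, log a).
A = np.column_stack([np.log(x), np.ones(N)])
rhs = np.log(y)
z = np.linalg.solve(A.T @ A, A.T @ rhs)        # the 2x2 normal equations
b_fit, a_fit = z[0], np.exp(z[1])
print(b_fit, a_fit)                            # approximately 1.7 and 2.5
```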

Problem 3: Suppose that the real matrix A = (a_1 a_2 a_3 a_4) has four orthogonal but not orthonormal columns a_i, with lengths ‖a_1‖ = 1, ‖a_2‖ = 2, ‖a_3‖ = 3, ‖a_4‖ = 2. (That is, a_i^T a_j = 0 for i ≠ j.)

(a) Write an explicit expression for the solution x to Ax = b in terms of dot products, additions, and multiplications by scalars.

Solution: All we are doing is writing b = a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 x_4, i.e. we are writing b in the basis of the columns of A. Since this is an orthogonal basis, we have seen many times in class that we just need to take dot products: a_i^T b = a_i^T a_i x_i for i = 1, 2, 3, 4, so x_i = a_i^T b / a_i^T a_i and

    x = (a_1^T b / 1, a_2^T b / 4, a_3^T b / 9, a_4^T b / 4).

Equivalently, since A^T A is the diagonal matrix D = diag(1, 4, 9, 4), we have D^{-1} A^T A = I, which means that A^{-1} = D^{-1} A^T, and hence x = D^{-1} A^T b. If you write it out, this is essentially the same as the answer above (note that inverting a diagonal matrix is easy).

(b) Write A as a sum of four rank-1 matrices.

Solution: If you understand how rank-1 matrices (outer products) work, there is a trivial way to do this. If we let e_1 = (1, 0, 0, 0), e_2 = (0, 1, 0, 0), e_3 = (0, 0, 1, 0), and e_4 = (0, 0, 0, 1), then

    A = a_1 e_1^T + a_2 e_2^T + a_3 e_3^T + a_4 e_4^T.

(c) If we write the matrix B = A · diag(6, 3, 2, 3), then what is B^T B? Hence, for any x, ‖Bx‖/‖x‖ = ______.

Solution: If B = AS where S = diag(6, 3, 2, 3), then B^T B = S^T A^T A S = SDS, where D = diag(1, 4, 9, 4) as above, and multiplying out SDS gives

    B^T B = diag(36, 36, 36, 36) = 36 I.

(That is, B/6 is actually a unitary matrix.) Hence ‖Bx‖² = (Bx)^H (Bx) = x^H B^T B x = 36 x^H x = 36 ‖x‖², and ‖Bx‖/‖x‖ = 6.

(d) Write the SVD A = UΣV^T: explicitly give the singular values σ (the diagonal of Σ) and the singular vectors (columns of U, V, possibly in terms of the columns a_i). Hint: what is A^T A, and what are its eigenvectors (this should give you either U or V) and eigenvalues (related somehow to σ)? Recall also from homework that AV = UΣ.

Solution: Σ = diag(1, 2, 3, 2), with U = (a_1/1  a_2/2  a_3/3  a_4/2) and V = I. There are many ways to see this. The easiest way is by inspection, if you really understand the SVD: the columns of A are already orthogonal, so we just need to normalize them to length 1 to get an orthonormal basis U of the column space, and we recover A just by multiplying by the diagonal matrix Σ of the lengths. But if we want to do it the long way, it is not too bad either. The long way is to first find the eigenvalues and eigenvectors of A^T A = VΣ^TΣV^T. The nonzero eigenvalues are the σ_i², and the corresponding eigenvectors are the columns of V. But this is easy, since A^T A is diagonal! Hence we see that the singular values σ are just the lengths 1, 2, 3, 2 of the four columns of A. The eigenvectors of a diagonal matrix are just V = I. Then we get U from U = AVΣ^{-1}: that is (as we saw in homework), for each right singular vector v_i and nonzero singular value σ_i, there is a corresponding left singular vector u_i = Av_i/σ_i. But since V = I and the σ_i are just the lengths of the four columns of A, we immediately see that U consists of the columns of A normalized by their lengths.
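All of the claims above are quick to verify numerically. A sketch that manufactures a matrix with orthogonal columns of lengths 1, 2, 3, 2 from an arbitrary orthonormal basis (the random construction is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthonormal columns
A = Q @ np.diag([1., 2., 3., 2.])                 # column lengths 1, 2, 3, 2

print(np.round(A.T @ A, 12))                      # D = diag(1, 4, 9, 4)

B = A @ np.diag([6., 3., 2., 3.])                 # rescale each column to length 6
print(np.round(B.T @ B, 12))                      # 36 I

sigma = np.linalg.svd(A, compute_uv=False)
print(sigma)                                      # 3, 2, 2, 1 (NumPy sorts them)
```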

Problem 4: Suppose that A and B are two real m × m matrices, and B is invertible.

(a) Circle which (if any) of the following must be true: C(A) = C(AB), C(A) = C(BA), N(A) = N(AB), N(A) = N(BA).

Solution: C(A) = C(AB) and N(A) = N(BA). Proof (not required): Since B is invertible, if y ∈ C(A) then y = Ax for some x and y = ABz for z = B^{-1}x, hence y ∈ C(AB), and vice versa. Similarly, if Ax = 0 [x ∈ N(A)] then BAx = 0 [x ∈ N(BA)] and vice versa (multiplying both sides by B^{-1}). However, C(A) ≠ C(BA) in general: a counterexample is

    A = [ 0 1 ]      B = [ 0 1 ]
        [ 0 0 ],         [ 1 0 ],

for which C(A) is spanned by (1, 0) and C(BA) is spanned by (0, 1). Similarly, N(A) ≠ N(AB) for the same counterexample: N(A) is spanned by (1, 0) and N(AB) is spanned by (0, 1).
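The counterexample reduces to rank computations: appending the columns of AB to A does not increase the rank, while appending the columns of BA does. A small check:

```python
import numpy as np

A = np.array([[0., 1.],
              [0., 0.]])
B = np.array([[0., 1.],
              [1., 0.]])   # a permutation matrix, so invertible

rank = np.linalg.matrix_rank
print(rank(A), rank(np.column_stack([A, A @ B])))  # 1, 1: C(AB) = C(A)
print(rank(np.column_stack([A, B @ A])))           # 2: C(BA) != C(A)
```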

(b) Circle which (if any) of the following matrices must be symmetric: ABBA, A^T B B^T A, A^T B A B^T, A^T B B^{-1} A, A^T (B^T)^{-1} B^{-1} A?

Solution: A^T B B^T A, A^T B B^{-1} A = A^T A, and A^T (B^T)^{-1} B^{-1} A = A^T (B^{-1})^T B^{-1} A are symmetric, as can easily be seen by applying the transpose formula (the transpose of a product is the product of the transposes in reverse order).

(c) If B is a projection matrix (as well as invertible), then AB = ______.

Solution: An invertible projection matrix must be B = I, hence AB = A.

(d) Do AB and BA have the same eigenvalues? Give a reason if true, a counterexample if false.

Solution: They are similar (AB = B^{-1}(BA)B), and hence have the same eigenvalues.

(e) Suppose A has rank r. Say as many true things as possible about the eigenvalues of C = A^T B^T B A that would not necessarily be true if C were just a random m × m matrix.

Solution: C is real-symmetric, so the eigenvalues of C are real. N(A^T B^T B A) = N(A), so C has exactly m − r zero eigenvalues; and C is positive semidefinite, so the remaining r eigenvalues are positive.
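Parts (d) and (e) can be spot-checked with random matrices (the sizes and seed here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))   # invertible with probability 1

# (d) AB and BA are similar, hence share eigenvalues.
ev_AB = np.sort_complex(np.linalg.eigvals(A @ B))
ev_BA = np.sort_complex(np.linalg.eigvals(B @ A))
print(np.allclose(ev_AB, ev_BA))  # True

# (e) C = A^T B^T B A is symmetric positive semidefinite.
C = A.T @ B.T @ B @ A
print(np.allclose(C, C.T), (np.linalg.eigvalsh(C) >= -1e-9).all())
```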

Problem 5: You have a matrix A with the factorization A = BC, where B is lower triangular and C = B^T is upper triangular.

(a) What is the product of the eigenvalues of A?

Solution: The product of the eigenvalues is det A = det B · det C. B and C are triangular matrices, so their determinants are just the products of their diagonal entries, 4 each; hence det A = 4 · 4 = 16.

(b) Solve Ax = b for x. (Hint: if you find yourself doing Gaussian elimination, you are missing something.)

Solution: The purpose of Gaussian elimination is to factorize A into a product of triangular matrices, but this A is already factorized into a product of triangular matrices. So, we just need to do one forward-substitution and one back-substitution step. In particular, since Ax = BCx = b, we let Cx = y and first solve By = b for y by forward substitution (since B is lower triangular), and then solve Cx = y for x by back substitution (since C is upper triangular). Forward substitution gives y_1 from the first equation alone, then y_2 from the second, then y_3 from the third; back substitution on Cx = y then gives x_3, then x_2, then x_1, and hence the solution is x = (1, 0, 1).

There are alternative methods. You could multiply out BC to find A explicitly. If you stare at this matrix for a little while, you might notice that b is the sum of the first and third columns, which immediately gives the solution x = (1, 0, 1) (this often works in the toy problems of 18.06, where the solutions are almost always small integers). You could, of course, do Gaussian elimination on A; this works, but is a lot of pointless work because triangular factors of A were already given to you! You could also write x = A^{-1}b = C^{-1}B^{-1}b, but if you find yourself explicitly computing inverse matrices you are almost always doing more work than you should: you should read an expression like B^{-1}b as "solve By = b" (usually by elimination, but in this case by forward substitution).

(c) Gaussian elimination (without row swaps) produces an A = LU factorization, but you can tell at a glance that this is not the same as the factorization above, because L is always a lower-triangular matrix with 1s on the diagonal. Find the ordinary LU factorization of A (the matrices L and U) by multiplying and/or dividing the factors above with some diagonal matrix.

Solution: We just write A = BC = (BD^{-1})(DC), where D is the diagonal matrix whose entries are the diagonal entries of B. Dividing the columns of B by the diagonal elements makes them equal to 1, so

    A = (BD^{-1})(DC) = LU

with L = BD^{-1} lower triangular with 1s on the diagonal and U = DC upper triangular.
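The forward/back-substitution strategy is exactly what library routines provide. A sketch with a hypothetical lower-triangular factor B consistent with part (a) (diagonal 1, 2, 2, so det A = 16), and with b chosen as the sum of the first and third columns of A so that x = (1, 0, 1):

```python
import numpy as np
from scipy.linalg import solve_triangular

# Hypothetical factor: lower triangular with diagonal (1, 2, 2), det B = 4.
B = np.array([[1., 0., 0.],
              [1., 2., 0.],
              [2., 1., 2.]])
C = B.T                       # upper triangular; det A = det B * det C = 16
A = B @ C
b = A[:, 0] + A[:, 2]         # sum of columns 1 and 3, so x = (1, 0, 1)

y = solve_triangular(B, b, lower=True)    # forward substitution
x = solve_triangular(C, y, lower=False)   # back substitution
print(x)                                  # [1. 0. 1.]
```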

Problem 6:

(a) One of the eigenvalues of a 3 × 3 matrix A, whose second and third rows are equal, is λ = 3. Find the other two eigenvalues. (Hint: one eigenvalue should be obvious from inspection of A, since A is ______. You shouldn't need to explicitly write down and solve any cubic equation, because once you find two eigenvalues you can get the third from the ______ of A.)

Solution: A is obviously singular (the second and third rows are the same), so it must have an eigenvalue λ = 0. The trace (the sum of the diagonal entries) is 9, which must be the sum of the eigenvalues, so the third eigenvalue must be λ = 9 − 3 − 0 = 6.

(b) For a 4 × 4 matrix B, the polynomial det(B − λI) has three distinct roots λ = 1, 0.3, 0.7. You find that, for some vector x, the vector B^n x is getting longer and longer as n grows. It must be the case that B is a ______ matrix. Approximately what is ‖B^{n+1}x‖/‖B^n x‖ for large n?

Solution: The matrix B must be defective. (If it were diagonalizable, then B^n x would only contain λ^n terms, which don't grow, since |λ| ≤ 1 here.) It is 4 × 4 with only three distinct eigenvalues, so one of the eigenvalues must be a double root, corresponding to a 2 × 2 Jordan block and a single additional Jordan vector to supplement the three eigenvectors. For such a defective matrix, there is a term (from the Jordan vector) that goes as nλ^{n−1}. For this to grow, it must be the λ = 1 eigenvalue that is repeated, giving a term that grows proportional to n. This must dominate for large n, hence we should have ‖B^{n+1}x‖/‖B^n x‖ ≈ 1.

More explicitly, call the three eigenvectors x_1, x_2, x_3, and the Jordan vector j. If we write x = c_1 x_1 + d j + c_2 x_2 + c_3 x_3 in this basis for some coefficients c_1, d, c_2, c_3, then we will have

    B^n x = 1^n c_1 x_1 + 1^n d j + n 1^{n−1} d x_1 + (0.3)^n c_2 x_2 + (0.7)^n c_3 x_3,

and for large n this is ≈ n d x_1.
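The nλ^{n−1} growth in part (b) is easiest to see on the 2 × 2 Jordan block itself; a minimal sketch (illustrative, not the exam's B):

```python
import numpy as np

J = np.array([[1., 1.],
              [0., 1.]])      # Jordan block: defective, double eigenvalue 1
x = np.array([0., 1.])        # the Jordan vector j

for n in (10, 100, 1000):
    xn  = np.linalg.matrix_power(J, n) @ x
    xn1 = np.linalg.matrix_power(J, n + 1) @ x
    # J^n j = j + n x_1, so the norm grows linearly and the ratio tends to 1.
    print(n, xn, np.linalg.norm(xn1) / np.linalg.norm(xn))
```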

(c) A positive Markov matrix M has a steady-state eigenvector v. What is M^n x for n → ∞?

Solution: M^n x must approach a multiple of the steady-state eigenvector for any x. Which multiple? Multiplying by a Markov matrix conserves the sum of the entries of a vector, so M^n x must approach the multiple of v whose entries sum to the same value as the entries of x.

(d) For a real matrix C, and almost any randomly chosen initial x(0), the equation dx/dt = Cx has solutions that are oscillating and decaying (for large t) as a function of t. Circle all of the things that could possibly be true of C: symmetric, anti-symmetric, orthogonal, Markov, diagonalizable, defective, singular, diagonal.

Solution: To have solutions that are oscillating and decaying, there must be a complex eigenvalue λ with a negative real part. Furthermore, to get this for almost any initial condition, the other solutions must be decaying (faster) too: the other eigenvalues must have (larger) negative real parts. This immediately rules out symmetric (real λ), anti-symmetric (purely imaginary λ), Markov (a λ = 1), and singular (a λ = 0) matrices. It can't be diagonal, because the matrix is real, and a real diagonal matrix has real eigenvalues (the diagonal entries). This leaves: orthogonal, diagonalizable, or defective. All of those are possible. (Orthogonal matrices can have complex eigenvalues with negative real parts, as long as |λ| = 1. Both diagonalizable and defective matrices can have such eigenvalues, of course. Defective matrices give a term in x(t) that goes like te^{λt}, but this is still decaying if λ has a negative real part.)
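Part (c) in action: a sketch with a small made-up positive Markov matrix (columns summing to 1), showing that M^n x preserves the entry sum and approaches the correspondingly scaled steady state:

```python
import numpy as np

M = np.array([[0.8, 0.3],
              [0.2, 0.7]])    # positive Markov matrix (columns sum to 1)
x = np.array([1., 5.])        # entries sum to 6

xn = np.linalg.matrix_power(M, 100) @ x
print(xn, xn.sum())           # ~ [3.6 2.4], entry sum still 6

# Steady-state eigenvector (eigenvalue 1), scaled to the same entry sum:
w, V = np.linalg.eig(M)
v = V[:, np.argmax(w.real)].real
print(6 * v / v.sum())        # ~ [3.6 2.4] as well
```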

Problem 7: The matrix A is real-symmetric and positive-definite. Using it, we write a recurrence equation

    x_n − x_{n+1} = A(x_n + x_{n+1})

for a sequence of vectors x_0, x_1, ….

(a) For any x_n, the recurrence relation above defines a unique solution x_{n+1}. Why?

Solution: First, let's rearrange the recurrence to get an equation for x_{n+1}. We have x_n − Ax_n = x_{n+1} + Ax_{n+1}, or, in matrix form:

    (I + A)x_{n+1} = (I − A)x_n  ⟹  x_{n+1} = (I + A)^{-1}(I − A)x_n.

This always has a unique solution, since I + A is invertible: A is positive-definite with eigenvalues λ > 0, so I + A has eigenvalues 1 + λ > 1.

(b) If Av = λv and x_0 = v, give an equation for x_n in terms of v, n, and λ.

Solution: From above, x_n = [(I + A)^{-1}(I − A)]^n x_0. If x_0 is an eigenvector v of A (also an eigenvector of I + A and I − A), then A just acts like the scalar λ and we get

    x_n = [(1 − λ)/(1 + λ)]^n v.

(c) For an arbitrary x_0, does the length of the solution x_n grow with n, decay with n, oscillate, or approach a nonzero steady state?

Solution: We can expand any vector x_0 in the basis of the eigenvectors v_1, v_2, …, v_m, i.e. x_0 = Σ_i c_i v_i. Each term uses the formula from the previous part, so

    x_n = Σ_i c_i [(1 − λ_i)/(1 + λ_i)]^n v_i.

But each of these terms is decaying: λ_i > 0 (A is positive-definite), so |1 − λ_i| < 1 + λ_i and the ratio satisfies |(1 − λ_i)/(1 + λ_i)| < 1. So x_n → 0 as n → ∞.

(d) Suppose A is 4 × 4 and the eigenvalues are λ_1 = 0.1, λ_2 = 1, λ_3 = 2, λ_4 = 3, and the corresponding eigenvectors are v_1, v_2, v_3, and v_4 (all normalized to length ‖v_k‖ = 1). The initial x_0 is some arbitrary vector. Write an exact formula for x_n in terms of x_0 and these eigenvectors and eigenvalues; you should get a sum of four terms. Which term should typically dominate for large n?

Solution: As above, but we can furthermore use the fact that the eigenvectors are orthonormal (they are orthogonal since A is a real-symmetric matrix with distinct eigenvalues, and we normalized them to length 1) to say that the coefficients are c_i = v_i^T x_0, and hence

    x_n = Σ_{i=1}^{4} [(1 − λ_i)/(1 + λ_i)]^n v_i v_i^T x_0
        = (0.9/1.1)^n v_1 v_1^T x_0 + 0^n v_2 v_2^T x_0 + (−1/3)^n v_3 v_3^T x_0 + (−1/2)^n v_4 v_4^T x_0.

For large n, the first term obviously dominates.
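Finally, a numerical sketch of Problem 7: build a real-symmetric positive-definite A with the part (d) eigenvalues (the orthonormal eigenvector basis is randomly chosen for illustration), iterate the recurrence, and watch ‖x_n‖ decay with the predicted ratio 0.9/1.1 ≈ 0.82:

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([0.1, 1.0, 2.0, 3.0]) @ Q.T   # real-symmetric positive-definite

I = np.eye(4)
x = rng.standard_normal(4)
prev = np.linalg.norm(x)
for n in range(1, 31):
    x = np.linalg.solve(I + A, (I - A) @ x)   # x_{n+1} = (I+A)^{-1}(I-A) x_n
    cur = np.linalg.norm(x)
    if n % 10 == 0:
        print(n, cur, cur / prev)             # ratio -> 0.9/1.1 ~ 0.818
    prev = cur
```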