1. Let A = [ 3 2 ; -5 5 ]. Find all (complex) eigenvalues and eigenvectors of A.

The eigenvalues are the roots of the characteristic polynomial, det(A - λI). We can compute

A - λI = [ 3 - λ 2 ; -5 5 - λ ],

from which det(A - λI) = (3 - λ)(5 - λ) - (2)(-5) = λ² - 8λ + 15 + 10 = λ² - 8λ + 25. Set this equal to zero to get λ² - 8λ + 25 = 0. Use the quadratic formula and we have

λ = ( -(-8) ± √( (-8)² - 4(1)(25) ) ) / ( 2(1) ) = ( 8 ± √(-36) ) / 2 = 4 ± 3i.

The eigenspace corresponding to λ = 4 + 3i is the null space of

A - (4 + 3i)I = [ -1 - 3i 2 ; -5 1 - 3i ].

Since λ is an eigenvalue of A, the matrix must be singular, so the second row is a scalar multiple of the first. The first equation then gives us (-1 - 3i)x₁ + 2x₂ = 0, from which 2x₂ = (1 + 3i)x₁, and so [ 2 ; 1 + 3i ] is an eigenvector corresponding to λ = 4 + 3i. We can replace i by -i everywhere to get that [ 2 ; 1 - 3i ] is an eigenvector of A corresponding to λ = 4 - 3i. These are bases for their respective eigenspaces, so the eigenvectors corresponding to λ = 4 + 3i are all nonzero scalar multiples of [ 2 ; 1 + 3i ]. Similarly, the eigenvectors corresponding to λ = 4 - 3i are all nonzero scalar multiples of [ 2 ; 1 - 3i ]. The "nonzero" matters, as the zero vector is never an eigenvector of anything.

Full credit was given for finding one eigenvector corresponding to each eigenvalue.
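As a quick sanity check (added here; the Python and variable names are mine, not part of the exam), the computation can be reproduced in a few lines, assuming A = [ 3 2 ; -5 5 ] as above:

```python
# Sketch: re-derive the eigenvalues from the characteristic polynomial
# lambda^2 - 8*lambda + 25 via the quadratic formula, using cmath so the
# complex roots come out directly.
import cmath

a, b, c = 1, -8, 25                    # coefficients of the polynomial
disc = cmath.sqrt(b * b - 4 * a * c)   # sqrt(-36) = 6i
lam1 = (-b + disc) / (2 * a)           # 4 + 3i
lam2 = (-b - disc) / (2 * a)           # 4 - 3i

# Verify that v = (2, 1 + 3i) is an eigenvector for lam1, i.e. A v = lam1 v.
A = [[3, 2], [-5, 5]]
v = [2, 1 + 3j]
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
assert all(abs(Av[i] - lam1 * v[i]) < 1e-12 for i in range(2))
```

Swapping v for its conjugate checks the other eigenpair the same way.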

2. Find bases for the row space and column space of the matrix

3 3 3 5 3 4

We start by putting the matrix in row echelon form:

3 3 R 3 5 +R 3 4 3R 3 5 5 7 5 5 7 8 8 3.4 5 5 7 8 8 3.4 9. 3.4. 5R +8R 3.4 9. swap swap

From this, we can see that the first, second, and fourth columns are the pivot columns, so the corresponding columns of the original matrix form a basis for the column space: 3, 3, 3 5 4

The nonzero rows of the matrix in row echelon form are a basis for the row space: {[ 3 ], [.4], [ ]}. The top three rows of the original matrix do not form a basis for the row space, however, as the first row is the sum of the second and third rows, so they are not linearly independent.

I was surprised at how many different answers people gave for the row space. Most of the answers were correct, too. The final answer depends on how far you go toward reduced row echelon form before deciding that it's obvious which columns are the pivot columns and stopping.
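The pivot-column recipe used above is mechanical enough to sketch in Python. This is an illustration added here, not part of the exam: the matrix M and the helper name find_pivot_columns are stand-ins, not the exam's matrix.

```python
# Sketch: row reduce over the rationals and report the pivot columns.
# The pivot columns of the echelon form tell you which columns of the
# ORIGINAL matrix form a basis for the column space; the nonzero rows of
# the echelon form are a basis for the row space.
from fractions import Fraction

def find_pivot_columns(rows):
    rows = [[Fraction(x) for x in r] for r in rows]
    pivots, r = [], 0
    for c in range(len(rows[0])):
        # look for a usable pivot in column c, at row r or below
        pr = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if pr is None:
            continue                      # no pivot in this column
        rows[r], rows[pr] = rows[pr], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][c] / rows[r][c]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    return pivots

M = [[1, 2, 3, 4],     # stand-in example: column 2 is twice column 1,
     [2, 4, 7, 9],     # so it cannot be a pivot column
     [1, 2, 4, 5]]
print(find_pivot_columns(M))   # 0-based indices of the pivot columns
```

For this M the pivot columns are the first and third, so columns 1 and 3 of M are a basis for its column space.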

3. Let L : P₂ → P₂ be a linear transformation defined by L(p(x)) = xp′(x), where p′(x) is the derivative of p(x). Let S = {1, 1 + x, 1 + x + x²} be an ordered basis for P₂. Find the matrix that represents L with respect to the basis S.

We apply L to the vectors in S and compute L(1) = 0, L(1 + x) = x, and L(1 + x + x²) = x + 2x². The matrix should have columns consisting of [L(1)]_S, [L(1 + x)]_S, and [L(1 + x + x²)]_S. The first of these is trivial, as [0]_S = [ 0 ; 0 ; 0 ]. For the others, we need to compute [x]_S and [x + 2x²]_S. There are various ways to do this, and if you could write each of these as a linear combination of the vectors in S by hand, that was fine. A more systematic way is to apply the natural isomorphism M : P₂ → R³ given by M(a + bx + cx²) = [ a ; b ; c ], so that we can do all computations in R³. If we do this, then our basis S becomes [ 1 ; 0 ; 0 ], [ 1 ; 1 ; 0 ], and [ 1 ; 1 ; 1 ], L(1 + x) = x becomes [ 0 ; 1 ; 0 ], and L(1 + x + x²) = x + 2x² becomes [ 0 ; 1 ; 2 ]. Thus, we wish to write [ 0 ; 1 ; 0 ] and [ 0 ; 1 ; 2 ] as linear combinations of [ 1 ; 0 ; 0 ], [ 1 ; 1 ; 0 ], and [ 1 ; 1 ; 1 ]. If we make a matrix A = [ 1 1 1 ; 0 1 1 ; 0 0 1 ], then this is equivalent to solving Ax = b for each of b = [ 0 ; 1 ; 0 ] and b = [ 0 ; 1 ; 2 ]. You can do this by row operations. It turns out that A⁻¹ is pretty easy to compute by cofactors, so we can compute A⁻¹ = [ 1 -1 0 ; 0 1 -1 ; 0 0 1 ], and then the solution is x = A⁻¹b. From this, we compute [L(1 + x)]_S = A⁻¹[ 0 ; 1 ; 0 ] = [ -1 ; 1 ; 0 ] and [L(1 + x + x²)]_S = A⁻¹[ 0 ; 1 ; 2 ] = [ -1 ; -1 ; 2 ].

Now we have all of the columns for the matrix of L with respect to the basis S, so we make the matrix [ 0 -1 -1 ; 0 1 -1 ; 0 0 2 ].
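The change-of-basis bookkeeping here is easy to get wrong, so here is a hedged Python check (added here, not part of the exam). Polynomials a + bx + cx² are stored as coefficient lists [a, b, c], a convention introduced for this sketch.

```python
# Sketch: compute the matrix of L(p) = x p'(x) with respect to
# S = {1, 1 + x, 1 + x + x^2}, doing all coordinate work in R^3.
from fractions import Fraction

def L(p):
    a, b, c = p
    return [0, b, 2 * c]        # p' = b + 2c x, so x p'(x) = b x + 2c x^2

S = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]   # images of S under M(a+bx+cx^2) = (a,b,c)

def coords(p, basis):
    """Solve c0*basis[0] + c1*basis[1] + c2*basis[2] = p by elimination."""
    n = len(basis)
    # augmented matrix whose columns are the basis vectors
    M = [[Fraction(basis[j][i]) for j in range(n)] + [Fraction(p[i])]
         for i in range(n)]
    for col in range(n):
        pr = next(i for i in range(col, n) if M[i][col] != 0)
        M[col], M[pr] = M[pr], M[col]
        for i in range(n):
            if i != col and M[i][col] != 0:
                f = M[i][col] / M[col][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

cols = [coords(L(v), S) for v in S]          # [L(v)]_S for each v in S
matrix = [[cols[j][i] for j in range(3)] for i in range(3)]  # as columns
print(matrix)
```

The printed matrix should agree with the one found by hand above.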

4. Let A = [ 3 5 -3 ; 0 -2 1 ; 0 0 2 ]. Find matrices P and D with D diagonal such that either A = PDP⁻¹ or D = P⁻¹AP. Be sure to specify whether you want A = PDP⁻¹ or D = P⁻¹AP. You are not required to compute P⁻¹.

Since A is upper triangular, we can read off the eigenvalues as the numbers on its diagonal and get λ = 3, -2, 2. The matrix D should be a diagonal matrix with the eigenvalues on the diagonal, so D = [ 3 0 0 ; 0 -2 0 ; 0 0 2 ]. The matrix P should have as its columns eigenvectors corresponding to λ = 3, -2, and 2, respectively. Since A is a 3 × 3 matrix with three distinct eigenvalues, each of the eigenspaces must have dimension 1, and it suffices to find an eigenvector for each eigenvalue.

For λ = 3, we have A - 3I = [ 0 5 -3 ; 0 -5 1 ; 0 0 -1 ]. The first column is clearly not a pivot column, so x₁ can be anything. Since we only need one eigenvector, let's take x₁ = 1. Back substitution yields x₃ = 0 and x₂ = 0, from which we get the eigenvector [ 1 ; 0 ; 0 ].

For λ = -2, we have A - (-2)I = [ 5 5 -3 ; 0 0 1 ; 0 0 4 ]. From this, it is clear that the second column is not a pivot column, so x₂ can be anything. Back substitution quickly yields x₃ = 0, so the top equation gives us 5x₁ + 5x₂ + 0 = 0, and so x₁ = -x₂. If we set x₂ = 1, we get x₁ = -1, and [ -1 ; 1 ; 0 ] is an eigenvector.

For λ = 2, we have A - 2I = [ 1 5 -3 ; 0 -4 1 ; 0 0 0 ]. This time, the third column is not a pivot column, so x₃ can be anything. The second row gives us -4x₂ + x₃ = 0, from which x₃ = 4x₂. One easy solution to this is x₃ = 4, x₂ = 1. The top row gives us x₁ + 5x₂ - 3x₃ = 0, from which x₁ = 3x₃ - 5x₂ = 3(4) - 5(1) = 7. Thus, [ 7 ; 1 ; 4 ] is an eigenvector.

If we take P = [ 1 -1 7 ; 0 1 1 ; 0 0 4 ], then we get AP = PD, from which D = P⁻¹AP.
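Diagonalization answers are cheap to verify: AP should equal PD column by column. A minimal Python check (added here; A, P, and D are the reconstructed values, so treat them as assumptions):

```python
# Check AP == PD for the diagonalization worked out above.
A = [[3, 5, -3], [0, -2, 1], [0, 0, 2]]
P = [[1, -1, 7], [0, 1, 1], [0, 0, 4]]     # columns are eigenvectors
D = [[3, 0, 0], [0, -2, 0], [0, 0, 2]]     # eigenvalues on the diagonal

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

assert matmul(A, P) == matmul(P, D)        # equivalent to D = P^{-1} A P
print("AP == PD")
```

Since P is upper triangular with nonzero diagonal, it is invertible, so AP = PD really does give D = P⁻¹AP.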

5. Find the best fit line in the sense of ordinary least squares to the points (1, 2), (2, 1), and (3, 1).

The usual equation for a line is y = mx + b. If we plug in the three points, we get 2 = m + b, 1 = 2m + b, and 1 = 3m + b. These give us a system of equations

[ 1 1 ; 2 1 ; 3 1 ][ m ; b ] = [ 2 ; 1 ; 1 ].

The best solution in the sense of least squares is x̂ = (AᵀA)⁻¹Aᵀb. We can compute

AᵀA = [ 1 2 3 ; 1 1 1 ][ 1 1 ; 2 1 ; 3 1 ] = [ 14 6 ; 6 3 ],

det(AᵀA) = (14)(3) - (6)(6) = 6,

(AᵀA)⁻¹ = (1/6)[ 3 -6 ; -6 14 ],

Aᵀb = [ 1 2 3 ; 1 1 1 ][ 2 ; 1 ; 1 ] = [ 7 ; 4 ],

x̂ = (AᵀA)⁻¹Aᵀb = (1/6)[ 3 -6 ; -6 14 ][ 7 ; 4 ] = (1/6)[ -3 ; 14 ] = [ -1/2 ; 7/3 ].

Therefore, the constants for the best fit line are [ m ; b ] = [ -1/2 ; 7/3 ], from which m = -1/2 and b = 7/3, so y = -x/2 + 7/3.

Scoring on this problem ended up being close to binary, as it wasn't hard if you knew how, but a little under half of the class didn't.
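The normal-equations arithmetic can be checked exactly with Python's Fraction type (a sketch added here, assuming the points (1, 2), (2, 1), (3, 1) and solving by Cramer's rule rather than by inverting AᵀA):

```python
# Sketch: least-squares line y = m x + b through (1,2), (2,1), (3,1)
# via the normal equations (A^T A) [m, b]^T = A^T y.
from fractions import Fraction as F

xs, ys = [1, 2, 3], [2, 1, 1]
Sxx = sum(x * x for x in xs)                  # 14
Sx = sum(xs)                                  # 6
n = len(xs)                                   # 3
Sxy = sum(x * y for x, y in zip(xs, ys))      # 7
Sy = sum(ys)                                  # 4

det = Sxx * n - Sx * Sx                       # det(A^T A) = 6
m = F(n * Sxy - Sx * Sy, det)                 # Cramer's rule: -1/2
b = F(Sxx * Sy - Sx * Sxy, det)               # Cramer's rule: 7/3
print(m, b)
```

Cramer's rule on the 2 × 2 system gives the same m and b as multiplying by (AᵀA)⁻¹, but avoids writing out the inverse.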

6. Let L : Rⁿ → Rᵐ be a linear transformation with ker L = {0}. Show that m ≥ n.

By Theorem 6.3, L(x) = Ax for some m × n matrix A. The kernel of L is the null space of A, so ker L = {0} means that the null space of A is {0}. This means that Ax = 0 has only the trivial solution x = 0. Therefore, every column of A must be a pivot column. A has n columns, and hence n pivot columns. Each pivot column requires a pivot in a distinct row, so A has at least n rows. Since the number of rows of A is m, we have m ≥ n.

Another approach is to cite Theorem 6.6, which states that dim ker L + dim range L = dim V. From the setup, we have V = Rⁿ, so dim V = n. If ker L = {0}, then dim ker L = dim {0} = 0. Plugging these in, we get dim range L = n. The range of L is a subspace of Rᵐ, so we have dim range L ≤ dim Rᵐ = m, from which n ≤ m.

The first solution was the intended solution to this problem, though I was aware that there are a number of ways to do the problem. A number of students tried something along the lines of the second solution, but most didn't catch that the range of L is a subspace of Rᵐ.

7. Let A be a square matrix with only one eigenvalue, whether real or complex. Show that A is diagonalizable if and only if A is a scalar matrix.

Suppose first that A is diagonalizable. Then A = PDP⁻¹ for some diagonal matrix D. Every entry on the diagonal of D must be an eigenvalue of A. Since A has only one eigenvalue, say λ, all of the entries on the diagonal of D are the same. Therefore, D = λI is a scalar matrix. We can compute A = PDP⁻¹ = P(λI)P⁻¹ = P(λP⁻¹) = λ(PP⁻¹) = λI, so A is a scalar matrix.

For the converse, if A is a scalar matrix, then it is diagonal, and we can take D = A and P = I to get PDP⁻¹ = IAI⁻¹ = A, so A is diagonalizable.

Instead of saying that diagonalizable implies scalar, we can show that not scalar implies not diagonalizable. If A is not a scalar matrix but has λ as its only eigenvalue, then A - λI ≠ O, for otherwise we would have A = λI. If A is n × n, then A - λI has rank at least 1, and hence nullity at most n - 1. Therefore, there are at most n - 1 linearly independent eigenvectors corresponding to the eigenvalue λ. Since λ is the only eigenvalue, there are not n linearly independent eigenvectors of A, and so A is not diagonalizable.

When I wrote this problem, I had initially thought of making it ask you to show that diagonalizable implies scalar for a matrix with only one eigenvalue. But then I thought, well, the converse is completely trivial, as you only need to observe that a scalar matrix is already diagonal, and hence diagonalizable. So the idea was to make it an if and only if problem to pad it with a few easy points. I didn't expect more students to be able to do the hard direction than the easy one.
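The "not scalar implies not diagonalizable" direction can be seen concretely on a small example chosen here for illustration (it is not from the exam): a 2 × 2 upper triangular matrix whose only eigenvalue is 5.

```python
# A = [[5, 1], [0, 5]] has 5 as its only eigenvalue but is not scalar.
# A - 5I = [[0, 1], [0, 0]] has rank 1, so the eigenspace (its null space)
# is 1-dimensional: there is no basis of R^2 consisting of eigenvectors.
lam = 5
A = [[5, 1], [0, 5]]
B = [[A[0][0] - lam, A[0][1]],
     [A[1][0], A[1][1] - lam]]          # B = A - lam*I = [[0, 1], [0, 0]]
# B is already in row echelon form, so its rank is its number of nonzero rows.
rank = sum(1 for row in B if any(x != 0 for x in row))
nullity = 2 - rank
assert rank == 1 and nullity == 1       # only one independent eigenvector
print("nullity:", nullity)
```

With nullity 1 < 2, the matrix A cannot have two linearly independent eigenvectors, matching the rank/nullity count in the argument above.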

8. Let A be a matrix. It can be shown that det(AᵀA) and det(AAᵀ) are always well-defined. Either prove that det(AᵀA) = det(AAᵀ) or else give a counterexample.

Let A = [ 1 ; 1 ]. We can compute AᵀA = [ 1 1 ][ 1 ; 1 ] = [ 2 ], from which det(AᵀA) = 2. We can also compute AAᵀ = [ 1 ; 1 ][ 1 1 ] = [ 1 1 ; 1 1 ], from which det(AAᵀ) = 0. Therefore, det(AᵀA) ≠ det(AAᵀ).

If you assume that A is square, then it is always true that det(AᵀA) = det(AAᵀ). However, the problem does not assert that A is square. Assuming that A is square and proceeding to prove that det(AᵀA) = det(AAᵀ) would get you half credit.

Only three people got this problem right. If you pick A to be a non-square matrix whose rows or columns are a linearly independent set of vectors, it will be a counterexample. In particular, if you pick a non-square matrix A and fill in numbers at random, it will usually be a counterexample. So the statement isn't just barely false, but wildly false, unless you assume that all matrices are square, as most of the class did.
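The counterexample is small enough to verify by hand, but for completeness here is the check in Python (assuming the 2 × 1 matrix A = [ 1 ; 1 ] used above):

```python
# A is 2x1, so A^T A is 1x1 while A A^T is 2x2.
A = [[1], [1]]
AtA = sum(A[k][0] * A[k][0] for k in range(2))                   # 1x1 entry: 2
AAt = [[A[i][0] * A[j][0] for j in range(2)] for i in range(2)]  # [[1,1],[1,1]]

det_AtA = AtA                                                # det of a 1x1
det_AAt = AAt[0][0] * AAt[1][1] - AAt[0][1] * AAt[1][0]      # 1*1 - 1*1 = 0
assert det_AtA != det_AAt
print(det_AtA, det_AAt)
```

AAᵀ is a rank-one 2 × 2 matrix, which is why its determinant vanishes while det(AᵀA) does not.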