Math 110 Linear Algebra Midterm 2 Review October 28, 2017


Material

Material covered on the midterm includes:

All lectures from Thursday, Sept. 21st to Tuesday, Oct. 24th
Homeworks 9 to 17
Quizzes 5 to 9

Sections in the book related to this material are:

Section 2.5, Change of Coordinate Matrices
Section 3.1, Elementary Matrices
Sections 3.3 and 3.4, Systems of Linear Equations
Sections 4.1 to 4.4, Determinants
Section 5.1, Eigenvalues and Eigenvectors
Section 5.2, Diagonalization
Section 5.4, Invariant Subspaces and the Cayley-Hamilton Theorem
Section 7.1, Jordan Canonical Form, esp. Jordan Blocks and Generalized Eigenvectors

In particular, it is a good idea to go back through the homework assignments and read through them with an eye towards understanding what is meant by the hints, and especially by the sections titled "Think". Now is the time to assemble your understanding of the material into a more cohesive whole, so think about general ideas as well as computations.

This document will review some small subset of the material from the past five weeks, with an emphasis on applying concepts to particular problems. The material presented here is in no way comprehensive, so make sure to review what you need to from other course materials. Best of luck studying!

Change of Coordinates and Diagonalization

Recall that if T : V → W is a linear transformation and β = {β_1, ..., β_n} and γ = {γ_1, ..., γ_m} are ordered bases of V and W respectively, then [T]_β^γ is the matrix of T with respect to the bases β and γ, whose i-th column is the vector [T(β_i)]_γ, the coefficients of T(β_i) when represented as a linear combination of the vectors in γ.

Now suppose that V = W, so that all operations are happening within the same vector space. Then Q = [Id]_β^γ is the change of coordinates matrix from basis β to basis γ.
It has the useful properties that:

Q[v]_β = [v]_γ for any vector v ∈ V
[T]_β = Q^{-1} [T]_γ Q for any linear operator T : V → V

A linear operator T : V → V is called diagonalizable if there is an ordered basis β such that [T]_β is diagonal. Note that this is the case iff there exists a basis of V consisting of eigenvectors of T, in which case [T]_β is diagonal with diagonal entries equal to the eigenvalues of T.
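Both identities can be sanity-checked numerically. The sketch below is an added illustration (the bases and the operator are arbitrary example values, not from the course): it builds Q = [Id]_β^γ column by column and verifies both properties for a small example in R².

```python
import numpy as np

# Two bases of R^2, stored as the columns of matrices (hypothetical examples).
P_beta = np.array([[1.0, 0.0], [1.0, 1.0]])   # beta  = {(1,1), (0,1)}
P_gamma = np.array([[2.0, 1.0], [0.0, 1.0]])  # gamma = {(2,0), (1,1)}

# A linear operator T on R^2, written in standard coordinates.
A = np.array([[3.0, 1.0], [2.0, 0.0]])

# Coordinate vectors: [v]_beta solves P_beta @ [v]_beta = v.
v = np.array([4.0, 7.0])
v_beta = np.linalg.solve(P_beta, v)
v_gamma = np.linalg.solve(P_gamma, v)

# Change of coordinates matrix Q = [Id]_beta^gamma:
# its i-th column is [beta_i]_gamma, i.e. Q solves P_gamma @ Q = P_beta.
Q = np.linalg.solve(P_gamma, P_beta)

# Matrices of T relative to each basis.
T_beta = np.linalg.solve(P_beta, A @ P_beta)
T_gamma = np.linalg.solve(P_gamma, A @ P_gamma)

# Property 1: Q [v]_beta = [v]_gamma.
assert np.allclose(Q @ v_beta, v_gamma)
# Property 2: [T]_beta = Q^{-1} [T]_gamma Q.
assert np.allclose(T_beta, np.linalg.inv(Q) @ T_gamma @ Q)
print("both change-of-coordinates identities hold")
```

Swapping in any other invertible `P_beta`, `P_gamma`, and `A` leaves both assertions true, which is exactly what the two properties claim.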

Another important characterization of diagonalizable operators is the following. T : V → V is diagonalizable iff:

The characteristic polynomial of T, given by char_T(t) = det(T - t·Id), splits into a product of linear factors
Each eigenspace E_λ corresponding to an eigenvalue λ has dimension (or geometric multiplicity) equal to the algebraic multiplicity of λ in char_T(t)

In this case, an eigenbasis for T can be formed by taking a union of bases for each of the eigenspaces E_λ.

Problem 1. Let T be the transformation of R³ which is a rotation about the axis spanned by v_r = (1, 1, 0) by 90° counterclockwise. Find the matrix [T]_E for T in terms of the standard basis E. Is T diagonalizable? Is T diagonalizable if considered as a linear operator on C³ over the field C?

Solution. Begin by representing T in terms of a basis which is related to the geometry of the transformation.

[Figure: the vectors v_r = (1, 1, 0), v_1 = (-1, 1, 0), and v_2 = (0, 0, √2) drawn relative to the x, y, z axes.]

In particular, as pictured above, the vector v_r itself is fixed by the transformation, and the vectors v_1 = (-1, 1, 0) and v_2 = (0, 0, √2) are contained in the plane perpendicular to v_r, which is mapped to itself by the rotation. Further, these two vectors are perpendicular and of equal length inside of that plane, so that v_1 maps to v_2 under the rotation, and v_2 maps to -v_1. This gives a very simple form for the transformation in terms of the basis β = {v_r, v_1, v_2}:

β = {(1, 1, 0), (-1, 1, 0), (0, 0, √2)},    [T]_β =
[ 1   0   0 ]
[ 0   0  -1 ]
[ 0   1   0 ]

To find [T]_E, we will need to apply the change of coordinates formula:

[T]_E = [Id]_β^E [T]_β [Id]_E^β

In particular, the change of coordinates matrix [Id]_β^E can be formed by using the vectors in β as the columns of a matrix, so

[Id]_β^E =
[ 1  -1   0  ]
[ 1   1   0  ]
[ 0   0   √2 ]

Further, we have [Id]_E^β = ([Id]_β^E)^{-1}, so a short computation using row reduction gives

[Id]_E^β =
[  1/2   1/2    0   ]
[ -1/2   1/2    0   ]
[   0     0   1/√2  ]

Finally, we are able to compute

[T]_E = [Id]_β^E [T]_β [Id]_E^β

      [ 1  -1   0  ] [ 1   0   0 ] [  1/2   1/2    0   ]
    = [ 1   1   0  ] [ 0   0  -1 ] [ -1/2   1/2    0   ]
      [ 0   0   √2 ] [ 0   1   0 ] [   0     0   1/√2  ]

      [  1/2    1/2    1/√2 ]
    = [  1/2    1/2   -1/√2 ]
      [ -1/√2  1/√2     0   ]

At this point, it is good to check that the matrix A = [T]_E which we computed actually does what we want: Av_r = v_r, Av_1 = v_2, and Av_2 = -v_1. In fact, this is the case.

For diagonalizability, we start by computing the characteristic polynomial of T. Notice that we can compute the characteristic polynomial with the determinantal formula using the matrix of T with respect to any basis we want. With that in mind, [T]_β is a much simpler matrix, so it will probably make our computation a little easier. We have

char_T(λ) = det([T]_β - λI_3) = (1 - λ) · det [ -λ  -1 ]  = -(λ - 1)(λ² + 1)
                                              [  1  -λ ]

The polynomial λ² + 1 has no real roots, so it doesn't factor into a product of linear terms with real coefficients. In particular, this means that char_T(λ) doesn't split over R, and thus the operator is not diagonalizable.

However, thinking of T as a linear operator over C, the story is different. We have

char_T(λ) = -(λ - 1)(λ² + 1) = -(λ - 1)(λ + i)(λ - i)

Thus the characteristic polynomial does split over C, and in fact it has three distinct eigenvalues. This already implies that T is diagonalizable over C, since the geometric multiplicity of an eigenvalue is always at least 1. We can compute the eigenvectors associated to these eigenvalues by finding a basis for the null spaces of the matrices B_λ = [T]_β - λI_3. We have

      [ 0   0   0 ]        [ 1-i  0   0 ]          [ 1+i  0   0 ]
B_1 = [ 0  -1  -1 ],  B_i = [  0  -i  -1 ],  B_{-i} = [  0   i  -1 ]
      [ 0   1  -1 ]        [  0   1  -i ]          [  0   1   i ]

After row reducing, we find reduced row echelon forms

      [ 0 1 0 ]        [ 1 0  0 ]          [ 1 0 0 ]
B_1 → [ 0 0 1 ],  B_i → [ 0 1 -i ],  B_{-i} → [ 0 1 i ]
      [ 0 0 0 ]        [ 0 0  0 ]          [ 0 0 0 ]
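As a cross-check on the hand computation, a numerical eigensolver (which works over C) recovers the same eigenvalues and eigenvectors. This block is an added illustration, not part of the original solution; the matrix is [T]_β from Problem 1.

```python
import numpy as np

# [T]_beta from Problem 1: the rotation written in the adapted basis.
T_beta = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0, 0.0]])

# numpy computes eigenvalues over C, where the rotation is diagonalizable.
evals, evecs = np.linalg.eig(T_beta)

# The eigenvalues should be 1, i, -i in some order.
assert np.allclose(sorted(evals, key=lambda z: z.imag), [-1j, 1.0, 1j])

# Each column of evecs is an eigenvector: T_beta v = lambda v.
for lam, v in zip(evals, evecs.T):
    assert np.allclose(T_beta @ v, lam * v)
print("eigenvalues of [T]_beta:", evals)
```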

This gives a single eigenvector u_λ spanning each eigenspace,

[u_1]_β = (1, 0, 0),  [u_i]_β = (0, i, 1),  [u_{-i}]_β = (0, -i, 1)

Thus γ = {u_1, u_i, u_{-i}} is an eigenbasis of T, and

D = [T]_γ =
[ 1   0   0 ]
[ 0   i   0 ]
[ 0   0  -i ]

is a diagonal matrix. Notice that because we computed the null space of the matrix [T - λ Id]_β, the eigenvectors we obtained are represented using coordinates in terms of β. To diagonalize [T]_E, we will need to do one last computation to find the right change of coordinates matrix. Since we've expressed γ as vectors in terms of β, we have

[Id]_γ^β =
[ 1   0   0 ]
[ 0   i  -i ]
[ 0   1   1 ]

Since we already have a change of coordinates matrix from β to E, we compute

Q = [Id]_γ^E = [Id]_β^E [Id]_γ^β

  [ 1  -1   0  ] [ 1   0   0 ]   [ 1  -i    i  ]
= [ 1   1   0  ] [ 0   i  -i ] = [ 1   i   -i  ]
  [ 0   0   √2 ] [ 0   1   1 ]   [ 0   √2   √2 ]

Then with these choices of D and Q, we finally can diagonalize [T]_E by Q^{-1} [T]_E Q = D.

Determinants

Recall that for a square n × n matrix A, we can consider the determinant of A in a number of different ways, which can be useful theoretically or computationally in different contexts. In particular, we've looked at rook placements, cofactor expansion, and row reduction as methods for working with determinants. We know the following properties of determinants. If A and B are n × n matrices, then:

Determinants are multiplicative: det(AB) = det(A) det(B)
Determinants are preserved by similarity: if A ~ B, then det(A) = det(B)
Determinants characterize invertibility: A is invertible iff det(A) ≠ 0, and if A is invertible, then det(A^{-1}) = 1/det(A)
Determinants are preserved by transposes: det(A^t) = det(A)

In particular, because determinants are preserved by similarity, for a linear transformation T : V → V where V is a finite-dimensional vector space, we can define det(T) = det([T]_β), where β is any basis of V. This makes sense (is "well-defined") since for different bases β and γ, the matrices [T]_β and [T]_γ are similar by the change of coordinates formula, so the determinant doesn't depend on which basis you choose.
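This basis-independence is easy to check numerically. The sketch below is an added illustration that assumes the two matrices computed in Problem 1: it verifies that [T]_β and [T]_E have the same determinant (namely 1, as expected for a rotation), and that [T]_E acts on v_r, v_1, v_2 as claimed.

```python
import numpy as np

s = np.sqrt(2.0)

# Matrix of the rotation from Problem 1 in the adapted basis beta = {v_r, v_1, v_2}.
T_beta = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.0, -1.0],
                   [0.0, 1.0, 0.0]])

# Matrix of the same rotation in the standard basis E.
T_E = np.array([[0.5, 0.5, 1.0 / s],
                [0.5, 0.5, -1.0 / s],
                [-1.0 / s, 1.0 / s, 0.0]])

# The two matrices are similar, so they have the same determinant.
assert np.isclose(np.linalg.det(T_beta), np.linalg.det(T_E))
assert np.isclose(np.linalg.det(T_E), 1.0)

# Sanity check: T_E fixes the axis v_r and sends v_1 -> v_2 -> -v_1.
v_r = np.array([1.0, 1.0, 0.0])
v_1 = np.array([-1.0, 1.0, 0.0])
v_2 = np.array([0.0, 0.0, s])
assert np.allclose(T_E @ v_r, v_r)
assert np.allclose(T_E @ v_1, v_2)
assert np.allclose(T_E @ v_2, -v_1)
print("det is basis-independent and T_E acts as expected")
```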
Also recall how determinants interact with elementary row operations:

Adding a multiple of one row of a matrix to another doesn't change the determinant.
Multiplying a row of a matrix by a scalar α multiplies the determinant by α.
Swapping two rows of a matrix changes the sign of the determinant.
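The three row-operation rules can be demonstrated on a random matrix. This is an added sketch; the matrix, the rows chosen, and the scalar are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
d = np.linalg.det(A)

# 1. Adding a multiple of one row to another leaves the determinant unchanged.
B = A.copy()
B[2] += 3.0 * B[0]
assert np.isclose(np.linalg.det(B), d)

# 2. Multiplying a row by a scalar alpha multiplies the determinant by alpha.
alpha = -2.5
C = A.copy()
C[1] *= alpha
assert np.isclose(np.linalg.det(C), alpha * d)

# 3. Swapping two rows changes the sign of the determinant.
S = A.copy()
S[[0, 3]] = S[[3, 0]]
assert np.isclose(np.linalg.det(S), -d)
print("all three row-operation rules check out")
```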

The same properties also hold for the corresponding elementary column operations.

Finally, recall an important characteristic of determinants: the determinant of a matrix can be thought of as a polynomial, with the entries of the matrix as the variables. This comes from the rook placement definition of determinants. For instance:

det [ a  b ]  = ad - bc
    [ c  d ]

This is a polynomial with variables a, b, c, and d. In general, for an n × n determinant, the polynomial will have n! terms, each of which is a product of n different entries of the matrix.

Problem 2. Compute the determinant det(A) of a given 4 × 4 matrix A.

Solution. We row reduce A, using only the operation of adding a scalar multiple of one row to another, to form a new matrix B which is upper triangular. This matrix will have the same determinant as A, and the determinant of an upper triangular matrix is equal to the product of its diagonal entries. For the matrix in this problem, the product of the diagonal entries of B is 1 · 1 · 3 · 6 = 18, so det(A) = 18.

Problem 3. Let N be an n × n matrix which is nilpotent, i.e. for which there is some positive integer k such that N^k = 0.

1. Prove that for t ≠ 0, N - tI_n is invertible, with inverse (N - tI_n)^{-1} = (-t^{-k})(N^{k-1} + tN^{k-2} + ⋯ + t^{k-2}N + t^{k-1}I_n).
2. Use this fact to prove that char_N(t) = (-1)^n t^n.
3. Prove that in fact, N^n = 0.

Solution. For the first part, we simply multiply:

(N - tI_n) · (-t^{-k})(N^{k-1} + tN^{k-2} + ⋯ + t^{k-2}N + t^{k-1}I_n)
  = (-t^{-k})(N^k + tN^{k-1} + ⋯ + t^{k-1}N - tN^{k-1} - ⋯ - t^{k-1}N - t^k I_n)
  = (-t^{-k})(N^k - t^k I_n)
  = (-t^{-k})(-t^k I_n)
  = I_n

For the second part, first multiply the equation I_n = (N - tI_n)(N - tI_n)^{-1} on both sides by -t^k, and then use properties of determinants to compute as follows:

(-t^k)^n = (-1)^n t^{nk} = det(-t^k I_n)
  = det((N - tI_n)(N^{k-1} + tN^{k-2} + ⋯ + t^{k-2}N + t^{k-1}I_n))
  = det(N - tI_n) · det(N^{k-1} + tN^{k-2} + ⋯ + t^{k-2}N + t^{k-1}I_n)

Now note that both of the determinants at the end of this computation are polynomials in t, since the entries of each matrix are themselves polynomials in t. In particular, this means that det(N - tI_n) divides (-1)^n t^{nk}. Since the characteristic polynomial has degree n and leading coefficient (-1)^n, the only possibility is that char_N(t) = det(N - tI_n) = (-1)^n t^n.

For the last part, apply the Cayley-Hamilton theorem, which states that a matrix's characteristic polynomial kills the matrix. In this case, this shows us that (-1)^n N^n = 0, so in particular, N^n = 0.

Invariant and Cyclic Subspaces

Recall that if V is a vector space, W ⊆ V is a subspace, and T : V → V is a linear operator on V, we say that W is T-invariant if T(w) ∈ W for any vector w ∈ W. An equivalent statement is that T(W) ⊆ W. We have identified a number of subspaces that end up being T-invariant for a linear operator T, including:

V itself, and the trivial subspace {0}
The image Im(T) of T, and also the image Im(T^k) of T^k for any k ≥ 1
Any eigenspace E_λ of T, especially E_0 = ker(T)
Any generalized eigenspace K_λ of T

Recall also that if x ∈ V, then the space

W = Span({x, T(x), T²(x), ...})

which is the span of all images of x under repeated applications of T, is called the T-cyclic subspace generated by x. For the rest of this document, denote this subspace by C_T(x), but note that this is not standard notation, so do not use it (on a midterm, say) without first explaining what it means. Finally, remember that a subspace W is called T-cyclic if W = C_T(x) for some vector x ∈ W, in which case x is called a cyclic generator of W. One important property of cyclic subspaces is that a T-cyclic subspace is always T-invariant, but not necessarily vice versa.

Problem 4. Give a linear operator T : R² → R² and a subspace W of R² such that W is T-invariant, but not T-cyclic.

Solution. Let T = 0, the zero operator which maps every vector to the zero vector, and let W = R². W is T-invariant because for any vector v ∈ W, T(v) = 0 ∈ W.
However, for any x, we have

C_T(x) = Span({x, T(x), T²(x), ...}) = Span({x, 0, 0, ...}) = Span({x})

so in particular C_T(x) has dimension at most 1. Since W = R² has dimension 2, it is impossible that W = C_T(x) for any x, so we see that W can't be expressed in that form, and thus is not T-cyclic.

Problem 5. Let V = P_n(R) be the vector space of polynomials of degree at most n with real coefficients, and let T be the derivative operator. Prove that V is T-cyclic. What happens if V = P(R) is the space of all polynomials, with no restriction on degree?

Solution. P_n(R) is T-cyclic because it can be expressed as C_T(p), where p is the polynomial given by p(x) = x^n. To see this, notice that the set {p, T(p), T²(p), ...} can be written as

{x^n, nx^{n-1}, n(n-1)x^{n-2}, ..., n!x, n!, 0, 0, ...}

Aside from the zero vector, this gives a basis of P_n(R). In particular, C_T(p) is the entire space P_n(R), so we have that P_n(R) = C_T(p), and the space is T-cyclic.

For V = P(R) the story is a little different. Notice first that for a nonzero polynomial p of degree k, taking the derivative decreases the degree by 1. In particular, by applying T to p multiple times, you will never get a polynomial whose degree is larger than the degree k you started with. This is problematic, because it means that the set {p, T(p), T²(p), T³(p), ...} consists only of polynomials of degree at most k, and so

C_T(p) ⊆ P_k(R) ⊊ P(R)

That is, C_T(p) forms a subspace of (in fact is equal to) the polynomials of degree at most k, and thus is a proper subspace of P(R). This means that P(R) can't be the cyclic subspace C_T(p) for any generating polynomial p, so in this case, we see that the space is not T-cyclic.
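The cyclic-generator claim in Problem 5 can be spot-checked numerically for a small n. The sketch below is my own addition: it represents P_n(R) by coefficient vectors in R^{n+1} in the basis {1, x, ..., x^n}, builds the matrix of the derivative operator, and verifies that x^n together with its iterated derivatives spans the whole space.

```python
import numpy as np

n = 5  # check P_5(R); any small n works

# Matrix of the derivative operator T on P_n(R) in the basis {1, x, ..., x^n}:
# T(x^j) = j * x^(j-1), so column j has the entry j in row j-1.
D = np.zeros((n + 1, n + 1))
for j in range(1, n + 1):
    D[j - 1, j] = j

# Coefficient vector of p(x) = x^n.
p = np.zeros(n + 1)
p[n] = 1.0

# Collect p, T(p), T^2(p), ..., T^n(p) as the columns of a matrix.
vecs = []
v = p
for _ in range(n + 1):
    vecs.append(v)
    v = D @ v
M = np.column_stack(vecs)

# Their span is all of P_n(R) iff this matrix has full rank n + 1.
assert np.linalg.matrix_rank(M) == n + 1
print("x^n is a cyclic generator of P_n(R) for n =", n)
```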