MATH 304 Linear Algebra Lecture 23: Diagonalization. Review for Test 2.


Diagonalization. Let L be a linear operator on a finite-dimensional vector space V. Then the following conditions are equivalent:
- the matrix of L with respect to some basis is diagonal;
- there exists a basis for V formed by eigenvectors of L.
The operator L is diagonalizable if it satisfies these conditions.

Let A be an n×n matrix. Then the following conditions are equivalent:
- A is the matrix of a diagonalizable operator;
- A is similar to a diagonal matrix, i.e., A = UBU^{-1}, where the matrix U is invertible and the matrix B is diagonal;
- there exists a basis for R^n formed by eigenvectors of A.
The matrix A is diagonalizable if it satisfies these conditions. Otherwise A is called defective.
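This equivalence suggests a numerical test: A is diagonalizable exactly when its eigenvectors span the whole space. Below is a minimal NumPy sketch (my addition, not part of the lecture; the helper name is_diagonalizable and the rank tolerance are illustrative, and the rank test is a numerical heuristic that can misjudge nearly defective matrices):

```python
import numpy as np

def is_diagonalizable(A, tol=1e-10):
    """Heuristic check: A is diagonalizable iff its eigenvectors span C^n."""
    _, U = np.linalg.eig(A)          # columns of U are eigenvectors
    return np.linalg.matrix_rank(U, tol=tol) == A.shape[0]

print(is_diagonalizable(np.array([[2., 1.], [1., 2.]])))  # True
print(is_diagonalizable(np.array([[1., 1.], [0., 1.]])))  # False: defective
```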

Theorem 1. If v_1, v_2, ..., v_k are eigenvectors of a linear operator L associated with distinct eigenvalues λ_1, λ_2, ..., λ_k, then v_1, v_2, ..., v_k are linearly independent.

Theorem 2. Let λ_1, λ_2, ..., λ_k be distinct eigenvalues of a linear operator L. For each 1 ≤ i ≤ k, let S_i be a basis for the eigenspace associated with the eigenvalue λ_i. Then the union S_1 ∪ S_2 ∪ ... ∪ S_k is a linearly independent set.

Corollary. Let A be an n×n matrix such that the characteristic equation det(A - λI) = 0 has n distinct real roots. Then (i) there exists a basis for R^n consisting of eigenvectors of A; (ii) all eigenspaces of A are one-dimensional.

Example. A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}. The matrix A has two eigenvalues: 1 and 3. The eigenspace of A associated with the eigenvalue 1 is the line spanned by v_1 = (-1, 1). The eigenspace of A associated with the eigenvalue 3 is the line spanned by v_2 = (1, 1). The eigenvectors v_1 and v_2 form a basis for R^2. Thus the matrix A is diagonalizable. Namely, A = UBU^{-1}, where B = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}, U = \begin{pmatrix} -1 & 1 \\ 1 & 1 \end{pmatrix}.

Example. A = \begin{pmatrix} 1 & 1 & -1 \\ 1 & 1 & 1 \\ 0 & 0 & 2 \end{pmatrix}. The matrix A has two eigenvalues: 0 and 2. The eigenspace corresponding to 0 is spanned by v_1 = (-1, 1, 0). The eigenspace corresponding to 2 is spanned by v_2 = (1, 1, 0) and v_3 = (-1, 0, 1). The eigenvectors v_1, v_2, v_3 form a basis for R^3. Thus the matrix A is diagonalizable. Namely, A = UBU^{-1}, where B = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}, U = \begin{pmatrix} -1 & 1 & -1 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
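A quick numerical cross-check of this factorization (my addition; the matrices are exactly the ones from the example):

```python
import numpy as np

A = np.array([[1., 1., -1.],
              [1., 1.,  1.],
              [0., 0.,  2.]])
B = np.diag([0., 2., 2.])
U = np.array([[-1., 1., -1.],
              [ 1., 1.,  0.],
              [ 0., 0.,  1.]])

# The columns of U are the eigenvectors v1, v2, v3, so A = U B U^{-1}.
print(np.allclose(A, U @ B @ np.linalg.inv(U)))  # True
```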

Problem. Diagonalize the matrix A = \begin{pmatrix} 4 & 3 \\ 0 & 1 \end{pmatrix}.

We need to find a diagonal matrix B and an invertible matrix U such that A = UBU^{-1}. Suppose that v_1 = \begin{pmatrix} x_1 \\ y_1 \end{pmatrix}, v_2 = \begin{pmatrix} x_2 \\ y_2 \end{pmatrix} is a basis for R^2 formed by eigenvectors of A, i.e., Av_i = λ_i v_i for some λ_i ∈ R. Then we can take B = \begin{pmatrix} λ_1 & 0 \\ 0 & λ_2 \end{pmatrix}, U = \begin{pmatrix} x_1 & x_2 \\ y_1 & y_2 \end{pmatrix}. Note that U is the transition matrix from the basis v_1, v_2 to the standard basis.

Problem. Diagonalize the matrix A = \begin{pmatrix} 4 & 3 \\ 0 & 1 \end{pmatrix}.

Characteristic equation of A: \begin{vmatrix} 4-λ & 3 \\ 0 & 1-λ \end{vmatrix} = 0. Hence (4-λ)(1-λ) = 0, so λ_1 = 4 and λ_2 = 1. Associated eigenvectors: v_1 = (1, 0)^T and v_2 = (-1, 1)^T. Thus A = UBU^{-1}, where B = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}, U = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}.

Problem. Let A = \begin{pmatrix} 4 & 3 \\ 0 & 1 \end{pmatrix}. Find A^5.

We know that A = UBU^{-1}, where B = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}, U = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}. Then A^5 = UBU^{-1} UBU^{-1} UBU^{-1} UBU^{-1} UBU^{-1} = UB^5U^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1024 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1024 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1024 & 1023 \\ 0 & 1 \end{pmatrix}.
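The same computation in NumPy (my addition; it mirrors the slide by powering only the diagonal of B):

```python
import numpy as np

U = np.array([[1., -1.], [0., 1.]])
B = np.diag([4., 1.])
A = U @ B @ np.linalg.inv(U)          # reconstructs [[4, 3], [0, 1]]

# A^5 = U B^5 U^{-1}; B^5 is computed entrywise on the diagonal.
A5 = U @ np.diag(np.diag(B) ** 5) @ np.linalg.inv(U)
print(A5)                                             # [[1024. 1023.] [0. 1.]]
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```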

Problem. Let A = \begin{pmatrix} 4 & 3 \\ 0 & 1 \end{pmatrix}. Find a matrix C such that C^2 = A.

We know that A = UBU^{-1}, where B = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}, U = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}. Suppose that D^2 = B for some matrix D, and let C = UDU^{-1}. Then C^2 = UDU^{-1} UDU^{-1} = UD^2U^{-1} = UBU^{-1} = A. We can take D = \begin{pmatrix} √4 & 0 \\ 0 & √1 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}. Then C = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}.
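The square-root construction, verified numerically (my addition, using the same D = B^{1/2} as on the slide):

```python
import numpy as np

A = np.array([[4., 3.], [0., 1.]])
U = np.array([[1., -1.], [0., 1.]])
B = np.diag([4., 1.])

# D = sqrt(B) entrywise, then C = U D U^{-1} satisfies C^2 = U D^2 U^{-1} = A.
D = np.diag(np.sqrt(np.diag(B)))
C = U @ D @ np.linalg.inv(U)
print(C)                      # [[2. 1.] [0. 1.]]
print(np.allclose(C @ C, A))  # True
```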

Initial value problem for a system of linear ODEs:
dx/dt = 4x + 3y, dy/dt = y, x(0) = 1, y(0) = 1.

The system can be rewritten in vector form: dv/dt = Av, where A = \begin{pmatrix} 4 & 3 \\ 0 & 1 \end{pmatrix}, v = \begin{pmatrix} x \\ y \end{pmatrix}. The matrix A is diagonalizable: A = UBU^{-1}, where B = \begin{pmatrix} 4 & 0 \\ 0 & 1 \end{pmatrix}, U = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}. Let w = \begin{pmatrix} w_1 \\ w_2 \end{pmatrix} be the coordinates of the vector v relative to the basis v_1 = (1, 0)^T, v_2 = (-1, 1)^T of eigenvectors of A. Then v = Uw, i.e., w = U^{-1}v.

It follows that dw/dt = d/dt(U^{-1}v) = U^{-1} dv/dt = U^{-1}Av = U^{-1}AUw. Hence dw/dt = Bw, that is, dw_1/dt = 4w_1, dw_2/dt = w_2. General solution: w_1(t) = c_1 e^{4t}, w_2(t) = c_2 e^t, where c_1, c_2 ∈ R. Initial condition: w(0) = U^{-1}v(0) = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}. Thus w_1(t) = 2e^{4t}, w_2(t) = e^t. Then \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = Uw(t) = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 2e^{4t} \\ e^t \end{pmatrix} = \begin{pmatrix} 2e^{4t} - e^t \\ e^t \end{pmatrix}.
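The closed-form solution can be cross-checked against e^{At}v(0) = U e^{Bt} U^{-1} v(0), computed with plain NumPy through the same diagonalization (my addition):

```python
import numpy as np

U = np.array([[1., -1.], [0., 1.]])
Uinv = np.linalg.inv(U)
v0 = np.array([1., 1.])

def solution(t):
    # For diagonal B = diag(4, 1), e^{Bt} = diag(e^{4t}, e^{t}).
    return U @ np.diag([np.exp(4 * t), np.exp(t)]) @ Uinv @ v0

t = 0.3
x, y = solution(t)
print(np.isclose(x, 2 * np.exp(4 * t) - np.exp(t)))  # True
print(np.isclose(y, np.exp(t)))                      # True
```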

There are two obstructions to diagonalization. They are illustrated by the following examples.

Example 1. A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. Here det(A - λI) = (λ - 1)^2, hence λ = 1 is the only eigenvalue. The associated eigenspace is the line t(1, 0), so there are not enough eigenvectors to form a basis.

Example 2. A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. Here det(A - λI) = λ^2 + 1, so there are no real eigenvalues or eigenvectors. (However, there are complex eigenvalues and eigenvectors.)
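Both obstructions are visible numerically (my addition): for the defective matrix, eig returns two numerically parallel eigenvectors; for the rotation matrix, it returns complex eigenvalues.

```python
import numpy as np

# Example 1: repeated eigenvalue, one-dimensional eigenspace.
vals, vecs = np.linalg.eig(np.array([[1., 1.], [0., 1.]]))
print(vals)                                    # [1. 1.]
print(np.linalg.matrix_rank(vecs, tol=1e-10))  # 1, not 2

# Example 2: no real eigenvalues.
vals, _ = np.linalg.eig(np.array([[0., -1.], [1., 0.]]))
print(vals)                                    # [0.+1.j 0.-1.j]
```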

Topics for Test 2

Coordinates and linear transformations (Leon 3.5, 4.1-4.3)
- Coordinates relative to a basis
- Change of basis, transition matrix
- Matrix transformations
- Matrix of a linear mapping

Orthogonality (Leon 5.1-5.6)
- Inner products and norms
- Orthogonal complement, orthogonal projection
- Least squares problems
- The Gram-Schmidt orthogonalization process

Eigenvalues and eigenvectors (Leon 6.1, 6.3)
- Eigenvalues, eigenvectors, eigenspaces
- Characteristic polynomial
- Diagonalization

Sample problems for Test 2

Problem 1 (15 pts.) Let M_{2,2}(R) denote the vector space of 2×2 matrices with real entries. Consider a linear operator L : M_{2,2}(R) → M_{2,2}(R) given by L\begin{pmatrix} x & y \\ z & w \end{pmatrix} = \begin{pmatrix} x & y \\ z & w \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}. Find the matrix of the operator L with respect to the basis E_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, E_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, E_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, E_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.

Problem 2 (20 pts.) Find a linear polynomial which is the best least squares fit to the following data:

x    | -2 | -1 | 0 | 1 | 2
f(x) | -3 | -2 | 1 | 2 | 5

Problem 3 (25 pts.) Let V be a subspace of R^4 spanned by the vectors x_1 = (1, -1, 1, -1) and x_2 = (1, 0, 3, 0). (i) Find an orthonormal basis for V. (ii) Find an orthonormal basis for the orthogonal complement V^⊥.

Problem 4 (30 pts.) Let A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}. (i) Find all eigenvalues of the matrix A. (ii) For each eigenvalue of A, find an associated eigenvector. (iii) Is the matrix A diagonalizable? Explain. (iv) Find all eigenvalues of the matrix A^2.

Bonus Problem 5 (15 pts.) Let L : V → W be a linear mapping of a finite-dimensional vector space V to a vector space W. Show that dim Range(L) + dim ker(L) = dim V.

Problem 1. Let M_{2,2}(R) denote the vector space of 2×2 matrices with real entries. Consider a linear operator L : M_{2,2}(R) → M_{2,2}(R) given by L\begin{pmatrix} x & y \\ z & w \end{pmatrix} = \begin{pmatrix} x & y \\ z & w \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}. Find the matrix of the operator L with respect to the basis E_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, E_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, E_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, E_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.

Let M_L denote the desired matrix. By definition, M_L is a 4×4 matrix whose columns are the coordinates of the matrices L(E_1), L(E_2), L(E_3), L(E_4) with respect to the basis E_1, E_2, E_3, E_4.

L(E_1) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} = 1E_1 + 2E_2 + 0E_3 + 0E_4,
L(E_2) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 3 & 4 \\ 0 & 0 \end{pmatrix} = 3E_1 + 4E_2 + 0E_3 + 0E_4,
L(E_3) = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 2 \end{pmatrix} = 0E_1 + 0E_2 + 1E_3 + 2E_4,
L(E_4) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 3 & 4 \end{pmatrix} = 0E_1 + 0E_2 + 3E_3 + 4E_4.

It follows that M_L = \begin{pmatrix} 1 & 3 & 0 & 0 \\ 2 & 4 & 0 & 0 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 2 & 4 \end{pmatrix}.
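The column-by-column construction of M_L can be mirrored in NumPy (my addition): in the basis E_1, E_2, E_3, E_4, the coordinates of a 2×2 matrix are just its row-major flattening.

```python
import numpy as np

R = np.array([[1., 2.], [3., 4.]])   # L(X) = X R

basis = [np.array([[1., 0.], [0., 0.]]),   # E1
         np.array([[0., 1.], [0., 0.]]),   # E2
         np.array([[0., 0.], [1., 0.]]),   # E3
         np.array([[0., 0.], [0., 1.]])]   # E4

# Columns of M_L are the coordinates of L(E_i) = E_i R.
M_L = np.column_stack([(E @ R).flatten() for E in basis])
print(M_L)
# [[1. 3. 0. 0.]
#  [2. 4. 0. 0.]
#  [0. 0. 1. 3.]
#  [0. 0. 2. 4.]]
```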

Thus the relation \begin{pmatrix} x_1 & y_1 \\ z_1 & w_1 \end{pmatrix} = \begin{pmatrix} x & y \\ z & w \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} is equivalent to the relation \begin{pmatrix} x_1 \\ y_1 \\ z_1 \\ w_1 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 0 & 0 \\ 2 & 4 & 0 & 0 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 2 & 4 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}.

Problem 2. Find a linear polynomial which is the best least squares fit to the following data:

x    | -2 | -1 | 0 | 1 | 2
f(x) | -3 | -2 | 1 | 2 | 5

We are looking for a function f(x) = c_1 + c_2 x, where c_1, c_2 are unknown coefficients. The data of the problem give rise to an overdetermined system of linear equations in the variables c_1 and c_2:
c_1 - 2c_2 = -3,
c_1 - c_2 = -2,
c_1 = 1,
c_1 + c_2 = 2,
c_1 + 2c_2 = 5.
This system is inconsistent.

We can represent the system as a matrix equation Ac = y, where A = \begin{pmatrix} 1 & -2 \\ 1 & -1 \\ 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{pmatrix}, c = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}, y = \begin{pmatrix} -3 \\ -2 \\ 1 \\ 2 \\ 5 \end{pmatrix}. The least squares solution c of the above system is a solution of the normal system A^T Ac = A^T y, which here reads \begin{pmatrix} 5 & 0 \\ 0 & 10 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 3 \\ 20 \end{pmatrix}, so c_1 = 3/5 and c_2 = 2. Thus the function f(x) = 3/5 + 2x is the best least squares fit to the above data among linear polynomials.
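The normal-system computation in NumPy, cross-checked against the built-in least-squares solver (my addition):

```python
import numpy as np

x = np.array([-2., -1., 0., 1., 2.])
y = np.array([-3., -2., 1., 2., 5.])

A = np.column_stack([np.ones_like(x), x])   # columns for c1 and c2

# Solve the normal system A^T A c = A^T y ...
c = np.linalg.solve(A.T @ A, A.T @ y)
print(c)                                    # [0.6 2. ], i.e. c1 = 3/5, c2 = 2

# ... or let lstsq minimize ||Ac - y|| directly.
c_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(c, c_ls))                 # True
```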

Problem 3. Let V be a subspace of R^4 spanned by the vectors x_1 = (1, -1, 1, -1) and x_2 = (1, 0, 3, 0). (i) Find an orthonormal basis for V.

First we apply the Gram-Schmidt orthogonalization process to the vectors x_1, x_2 and obtain an orthogonal basis v_1, v_2 for the subspace V:
v_1 = x_1 = (1, -1, 1, -1),
v_2 = x_2 - ((x_2 · v_1)/(v_1 · v_1)) v_1 = (1, 0, 3, 0) - (4/4)(1, -1, 1, -1) = (0, 1, 2, 1).
Then we normalize the vectors v_1, v_2 to obtain an orthonormal basis w_1, w_2 for V:
‖v_1‖ = 2, hence w_1 = v_1/‖v_1‖ = (1/2)(1, -1, 1, -1);
‖v_2‖ = √6, hence w_2 = v_2/‖v_2‖ = (1/√6)(0, 1, 2, 1).

Problem 3. Let V be a subspace of R^4 spanned by the vectors x_1 = (1, -1, 1, -1) and x_2 = (1, 0, 3, 0). (ii) Find an orthonormal basis for the orthogonal complement V^⊥.

Since the subspace V is spanned by the vectors (1, -1, 1, -1) and (1, 0, 3, 0), it is the row space of the matrix A = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 1 & 0 & 3 & 0 \end{pmatrix}. Then the orthogonal complement V^⊥ is the nullspace of A. To find the nullspace, we convert the matrix A to reduced row echelon form:
\begin{pmatrix} 1 & -1 & 1 & -1 \\ 1 & 0 & 3 & 0 \end{pmatrix} → \begin{pmatrix} 1 & 0 & 3 & 0 \\ 1 & -1 & 1 & -1 \end{pmatrix} → \begin{pmatrix} 1 & 0 & 3 & 0 \\ 0 & 1 & 2 & 1 \end{pmatrix}.

Hence a vector (x_1, x_2, x_3, x_4) ∈ R^4 belongs to V^⊥ if and only if \begin{pmatrix} 1 & 0 & 3 & 0 \\ 0 & 1 & 2 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, i.e., x_1 + 3x_3 = 0 and x_2 + 2x_3 + x_4 = 0, which gives x_1 = -3x_3, x_2 = -2x_3 - x_4. The general solution of the system is (x_1, x_2, x_3, x_4) = (-3t, -2t - s, t, s) = t(-3, -2, 1, 0) + s(0, -1, 0, 1), where t, s ∈ R. It follows that V^⊥ is spanned by the vectors x_3 = (0, -1, 0, 1) and x_4 = (-3, -2, 1, 0).

The vectors x_3 = (0, -1, 0, 1) and x_4 = (-3, -2, 1, 0) form a basis for the subspace V^⊥. It remains to orthogonalize and normalize this basis:
v_3 = x_3 = (0, -1, 0, 1),
v_4 = x_4 - ((x_4 · v_3)/(v_3 · v_3)) v_3 = (-3, -2, 1, 0) - (2/2)(0, -1, 0, 1) = (-3, -1, 1, -1);
‖v_3‖ = √2, hence w_3 = v_3/‖v_3‖ = (1/√2)(0, -1, 0, 1);
‖v_4‖ = √12 = 2√3, hence w_4 = v_4/‖v_4‖ = (1/(2√3))(-3, -1, 1, -1).
Thus the vectors w_3 = (1/√2)(0, -1, 0, 1) and w_4 = (1/(2√3))(-3, -1, 1, -1) form an orthonormal basis for V^⊥.
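Both orthonormal bases can be reproduced with a small Gram-Schmidt routine (my addition; gram_schmidt is an illustrative helper, not a library function):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    ortho = []
    for v in map(np.asarray, vectors):
        for u in ortho:
            v = v - (v @ u) * u   # u is already a unit vector
        ortho.append(v / np.linalg.norm(v))
    return ortho

w1, w2 = gram_schmidt([(1., -1., 1., -1.), (1., 0., 3., 0.)])   # basis of V
w3, w4 = gram_schmidt([(0., -1., 0., 1.), (-3., -2., 1., 0.)])  # basis of V-perp

# Every vector in the V basis is orthogonal to every vector in the V-perp basis.
print(np.allclose([w1 @ w3, w1 @ w4, w2 @ w3, w2 @ w4], 0))     # True
```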

Problem 3. Let V be a subspace of R^4 spanned by the vectors x_1 = (1, -1, 1, -1) and x_2 = (1, 0, 3, 0). (i) Find an orthonormal basis for V. (ii) Find an orthonormal basis for the orthogonal complement V^⊥.

Alternative solution: First we extend the set x_1, x_2 to a basis x_1, x_2, x_3, x_4 for R^4. Then we orthogonalize and normalize the latter. This yields an orthonormal basis w_1, w_2, w_3, w_4 for R^4. By construction, w_1, w_2 is an orthonormal basis for V. It follows that w_3, w_4 is an orthonormal basis for V^⊥.

The set x_1 = (1, -1, 1, -1), x_2 = (1, 0, 3, 0) can be extended to a basis for R^4 by adding two vectors from the standard basis. For example, we can add the vectors e_3 = (0, 0, 1, 0) and e_4 = (0, 0, 0, 1). To show that x_1, x_2, e_3, e_4 is indeed a basis for R^4, we check that the matrix whose rows are these vectors is nonsingular. Expanding the determinant along the last two rows,
\begin{vmatrix} 1 & -1 & 1 & -1 \\ 1 & 0 & 3 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{vmatrix} = \begin{vmatrix} 1 & -1 \\ 1 & 0 \end{vmatrix} = 1 ≠ 0.

To orthogonalize the basis x_1, x_2, e_3, e_4, we apply the Gram-Schmidt process:
v_1 = x_1 = (1, -1, 1, -1),
v_2 = x_2 - ((x_2 · v_1)/(v_1 · v_1)) v_1 = (1, 0, 3, 0) - (4/4)(1, -1, 1, -1) = (0, 1, 2, 1),
v_3 = e_3 - ((e_3 · v_1)/(v_1 · v_1)) v_1 - ((e_3 · v_2)/(v_2 · v_2)) v_2 = (0, 0, 1, 0) - (1/4)(1, -1, 1, -1) - (2/6)(0, 1, 2, 1) = (-1/4, -1/12, 1/12, -1/12) = (1/12)(-3, -1, 1, -1),
v_4 = e_4 - ((e_4 · v_1)/(v_1 · v_1)) v_1 - ((e_4 · v_2)/(v_2 · v_2)) v_2 - ((e_4 · v_3)/(v_3 · v_3)) v_3 = (0, 0, 0, 1) + (1/4)(1, -1, 1, -1) - (1/6)(0, 1, 2, 1) + (1/12)(-3, -1, 1, -1) = (0, -1/2, 0, 1/2) = (1/2)(0, -1, 0, 1).

It remains to normalize the vectors v_1 = (1, -1, 1, -1), v_2 = (0, 1, 2, 1), v_3 = (1/12)(-3, -1, 1, -1), v_4 = (1/2)(0, -1, 0, 1):
‖v_1‖ = 2, hence w_1 = v_1/‖v_1‖ = (1/2)(1, -1, 1, -1);
‖v_2‖ = √6, hence w_2 = v_2/‖v_2‖ = (1/√6)(0, 1, 2, 1);
‖v_3‖ = 1/√12 = 1/(2√3), hence w_3 = v_3/‖v_3‖ = (1/(2√3))(-3, -1, 1, -1);
‖v_4‖ = 1/√2, hence w_4 = v_4/‖v_4‖ = (1/√2)(0, -1, 0, 1).
Thus w_1, w_2 is an orthonormal basis for V while w_3, w_4 is an orthonormal basis for V^⊥.

Problem 4. Let A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}. (i) Find all eigenvalues of the matrix A.

The eigenvalues of A are the roots of the characteristic equation det(A - λI) = 0. We obtain
det(A - λI) = \begin{vmatrix} 1-λ & 2 & 0 \\ 1 & 1-λ & 1 \\ 0 & 2 & 1-λ \end{vmatrix} = (1-λ)^3 - 2(1-λ) - 2(1-λ) = (1-λ)((1-λ)^2 - 4) = (1-λ)((1-λ) - 2)((1-λ) + 2) = -(λ - 1)(λ + 1)(λ - 3).
Hence the matrix A has three eigenvalues: -1, 1, and 3.
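A numerical check of the eigenvalues and the characteristic polynomial (my addition; np.poly returns the monic coefficients of det(λI - A), which here equals (λ-1)(λ+1)(λ-3)):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [1., 1., 1.],
              [0., 2., 1.]])

print(np.sort(np.linalg.eigvals(A)))  # [-1.  1.  3.]

# Coefficients of det(lambda*I - A) = lambda^3 - 3*lambda^2 - lambda + 3:
print(np.poly(A))                     # [ 1. -3. -1.  3.]
```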

Problem 4. Let A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}. (ii) For each eigenvalue of A, find an associated eigenvector.

An eigenvector v = (x, y, z) of the matrix A associated with an eigenvalue λ is a nonzero solution of the vector equation (A - λI)v = 0, that is, \begin{pmatrix} 1-λ & 2 & 0 \\ 1 & 1-λ & 1 \\ 0 & 2 & 1-λ \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}. To solve the equation, we convert the matrix A - λI to reduced row echelon form.

First consider the case λ = -1. The row reduction yields
A + I = \begin{pmatrix} 2 & 2 & 0 \\ 1 & 2 & 1 \\ 0 & 2 & 2 \end{pmatrix} → \begin{pmatrix} 1 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 2 & 2 \end{pmatrix} → \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 2 & 2 \end{pmatrix} → \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix} → \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix}.
Hence (A + I)v = 0 if and only if x - z = 0 and y + z = 0. The general solution is x = t, y = -t, z = t, where t ∈ R. In particular, v_1 = (1, -1, 1) is an eigenvector of A associated with the eigenvalue -1.

Secondly, consider the case λ = 1. The row reduction yields
A - I = \begin{pmatrix} 0 & 2 & 0 \\ 1 & 0 & 1 \\ 0 & 2 & 0 \end{pmatrix} → \begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \\ 0 & 2 & 0 \end{pmatrix} → \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
Hence (A - I)v = 0 if and only if x + z = 0 and y = 0. The general solution is x = -t, y = 0, z = t, where t ∈ R. In particular, v_2 = (-1, 0, 1) is an eigenvector of A associated with the eigenvalue 1.

Finally, consider the case λ = 3. The row reduction yields
A - 3I = \begin{pmatrix} -2 & 2 & 0 \\ 1 & -2 & 1 \\ 0 & 2 & -2 \end{pmatrix} → \begin{pmatrix} 1 & -1 & 0 \\ 1 & -2 & 1 \\ 0 & 2 & -2 \end{pmatrix} → \begin{pmatrix} 1 & -1 & 0 \\ 0 & -1 & 1 \\ 0 & 2 & -2 \end{pmatrix} → \begin{pmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix} → \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 0 \end{pmatrix}.
Hence (A - 3I)v = 0 if and only if x - z = 0 and y - z = 0. The general solution is x = t, y = t, z = t, where t ∈ R. In particular, v_3 = (1, 1, 1) is an eigenvector of A associated with the eigenvalue 3.
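Each hand-computed eigenvector can be verified directly against Av = λv (my addition):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [1., 1., 1.],
              [0., 2., 1.]])

pairs = [(-1., np.array([ 1., -1., 1.])),
         ( 1., np.array([-1.,  0., 1.])),
         ( 3., np.array([ 1.,  1., 1.]))]

for lam, v in pairs:
    print(lam, np.allclose(A @ v, lam * v))   # True for all three
```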

Problem 4. Let A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}. (iii) Is the matrix A diagonalizable? Explain.

The matrix A is diagonalizable, i.e., there exists a basis for R^3 formed by its eigenvectors. Namely, the vectors v_1 = (1, -1, 1), v_2 = (-1, 0, 1), and v_3 = (1, 1, 1) are eigenvectors of the matrix A associated with distinct eigenvalues. Therefore these vectors are linearly independent. It follows that v_1, v_2, v_3 is a basis for R^3. Alternatively, the existence of a basis for R^3 consisting of eigenvectors of A already follows from the fact that the matrix A has three distinct eigenvalues.

Problem 4. Let A = \begin{pmatrix} 1 & 2 & 0 \\ 1 & 1 & 1 \\ 0 & 2 & 1 \end{pmatrix}. (iv) Find all eigenvalues of the matrix A^2.

Suppose that v is an eigenvector of the matrix A associated with an eigenvalue λ, that is, v ≠ 0 and Av = λv. Then A^2 v = A(Av) = A(λv) = λ(Av) = λ(λv) = λ^2 v. Therefore v is also an eigenvector of the matrix A^2, and the associated eigenvalue is λ^2. We already know that the matrix A has eigenvalues -1, 1, and 3. It follows that A^2 has eigenvalues 1 and 9. Since a 3×3 matrix can have up to 3 eigenvalues, we need an additional argument to show that 1 and 9 are the only eigenvalues of A^2. Indeed, A = UBU^{-1} with B = diag(-1, 1, 3), so A^2 = UB^2U^{-1} with B^2 = diag(1, 1, 9); hence the eigenvalues of A^2 are exactly 1 (of multiplicity 2) and 9.
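The squaring of the eigenvalues is easy to confirm numerically (my addition):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [1., 1., 1.],
              [0., 2., 1.]])

print(np.sort(np.linalg.eigvals(A)))       # [-1.  1.  3.]
print(np.sort(np.linalg.eigvals(A @ A)))   # [1.  1.  9.]  (squares, 1 twice)
```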

Bonus Problem 5. Let L : V → W be a linear mapping of a finite-dimensional vector space V to a vector space W. Show that dim Range(L) + dim ker(L) = dim V.

The kernel ker(L) is a subspace of V. It is finite-dimensional since the vector space V is. Take a basis v_1, v_2, ..., v_k for the subspace ker(L), then extend it to a basis v_1, v_2, ..., v_k, u_1, u_2, ..., u_m for the entire space V.

Claim: The vectors L(u_1), L(u_2), ..., L(u_m) form a basis for the range of L.

Assuming the claim is proved, we obtain dim Range(L) = m, dim ker(L) = k, and dim V = k + m, which together give the desired identity.

Claim: The vectors L(u_1), L(u_2), ..., L(u_m) form a basis for the range of L.

Proof (spanning): Any vector w ∈ Range(L) is represented as w = L(v) for some v ∈ V. Then v = α_1 v_1 + α_2 v_2 + ... + α_k v_k + β_1 u_1 + β_2 u_2 + ... + β_m u_m for some α_i, β_j ∈ R. It follows that
w = L(v) = α_1 L(v_1) + ... + α_k L(v_k) + β_1 L(u_1) + ... + β_m L(u_m) = β_1 L(u_1) + ... + β_m L(u_m),
since L(v_i) = 0 for each v_i ∈ ker(L). Thus Range(L) is spanned by the vectors L(u_1), ..., L(u_m).

Proof (linear independence): Suppose that t_1 L(u_1) + t_2 L(u_2) + ... + t_m L(u_m) = 0 for some t_i ∈ R. Let u = t_1 u_1 + t_2 u_2 + ... + t_m u_m. Since L(u) = t_1 L(u_1) + t_2 L(u_2) + ... + t_m L(u_m) = 0, the vector u belongs to the kernel of L. Therefore u = s_1 v_1 + s_2 v_2 + ... + s_k v_k for some s_j ∈ R. It follows that
t_1 u_1 + t_2 u_2 + ... + t_m u_m - s_1 v_1 - s_2 v_2 - ... - s_k v_k = u - u = 0.
Linear independence of the vectors v_1, ..., v_k, u_1, ..., u_m implies that t_1 = ... = t_m = 0 (as well as s_1 = ... = s_k = 0). Thus the vectors L(u_1), L(u_2), ..., L(u_m) are linearly independent.
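Rank-nullity can be sanity-checked numerically (my addition): for a matrix L, dim Range(L) is the rank, and an orthonormal basis of ker(L) can be read off the SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 7, 4                                # dim V = 7, dim W = 4
L = rng.standard_normal((m, n))            # a random linear map R^7 -> R^4

rank = np.linalg.matrix_rank(L)            # dim Range(L)

# Rows of Vt beyond the rank form an orthonormal basis of ker(L).
_, _, Vt = np.linalg.svd(L)
kernel_basis = Vt[rank:]
print(rank + len(kernel_basis) == n)       # True: dim Range + dim ker = dim V
print(np.allclose(L @ kernel_basis.T, 0))  # the basis vectors really map to 0
```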