LECTURE 12

0. Distinct eigenvalues

I haven't gotten around to stating the following important theorem:

Theorem: A matrix with n distinct eigenvalues is diagonalizable.

Proof (sketch). Suppose n = 2, and let λ_1 and λ_2 be the eigenvalues, v_1, v_2 the eigenvectors. But for all we know, v_1 and v_2 are not linearly independent! Suppose they're not; then we have

c_1 v_1 + c_2 v_2 = 0

with c_1 and c_2 not both 0. Multiplying both sides by A, we get

c_1 λ_1 v_1 + c_2 λ_2 v_2 = 0.

Multiplying the first equation by λ_1 gives

c_1 λ_1 v_1 + c_2 λ_1 v_2 = 0

and subtracting gives

c_2 (λ_2 - λ_1) v_2 = 0,

and from this (since λ_2 ≠ λ_1!) we can conclude c_2 = 0. It follows similarly that c_1 = 0, a contradiction. A similar argument gives the result for any n, but it's not as easy as Strang makes it seem; it requires the fact that the Vandermonde matrix is invertible (see Strang, p. 98). There is a quick numerical check of this theorem after the definitions below.

Apropos of nothing, I also want to comment:

Fact. A is invertible if and only if 0 is not an eigenvalue of A.

1. Symmetric, Hermitian, unitary matrices

Spectral theorem: A (real) symmetric matrix is diagonalizable. Strangely enough, the best way to prove this (and I think Strang's proof is very good) is to use complex matrices.

Definitions: Recall that the complex conjugate of a number a + bi is a - bi. Similarly, the complex conjugate of a matrix A is the matrix obtained by replacing each entry with its complex conjugate. A^H ("A Hermitian") is the complex conjugate of A^T.
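Here is that numerical check: a quick sanity test of the distinct-eigenvalues theorem, in Python with numpy. This is my own sketch, not from Strang; the 3 x 3 matrix below is an arbitrary example with three distinct eigenvalues.

import numpy as np

# An arbitrary example with distinct eigenvalues 2, 3, 5.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

evals, V = np.linalg.eig(A)   # columns of V are eigenvectors of A
print(evals)                  # 2, 3, 5 (order may vary): all distinct

# Distinct eigenvalues force the eigenvectors to be independent,
# so V is invertible and V^{-1} A V is the diagonal matrix of eigenvalues.
D = np.linalg.inv(V) @ A @ V
print(np.allclose(D, np.diag(evals)))   # True: A is diagonalizable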

A is symmetric if A = A^T. A is Hermitian if A = A^H. A is unitary if A A^H = I. Note that "unitary" is the complex analogue of "orthogonal." Indeed, a real unitary matrix is orthogonal. Note also that (AB)^H = B^H A^H.

Give the example of heat diffusion on a circle to suggest the ubiquity of symmetric matrices.

Examples: A typical Hermitian matrix is

[ 1  i ]
[ -i 1 ]

Compute, just for fun, that the eigenvalues are 0 and 2. That they're real numbers, despite the fact that the matrix is complex, is no coincidence! We might want to analyze this before we think about unitary matrices too much.

Definition: Let v be a vector with complex entries. We define

|v| = (v^H v)^{1/2}   (positive square root).

Before we prove the spectral theorem, let's prove a theorem that's both stronger and weaker.

Theorem. Let A be an arbitrary matrix. There exists a unitary matrix U such that U^{-1} A U is upper triangular.

We don't have to assume A is symmetric, as in the spectral theorem, but we get a weaker conclusion as a result.

We proceed as follows. We know A has at least one eigenvector v_1. Let U_1 be a unitary matrix of the form

U_1 = [ v_1 v_2 ... v_n ]

where the latter columns are unspecified. (How do we know there is such a U_1? The Gram-Schmidt process...) What does U_1^{-1} A U_1 look like? Well, what does it do to e_1? It sends it to

U_1^{-1} A U_1 e_1 = U_1^{-1} A v_1 = λ_1 U_1^{-1} v_1 = λ_1 e_1.

So U_1^{-1} A U_1 looks like

[ λ_1 * * ]
[ 0   * * ]
[ 0   * * ]

(Of course, it is only for convenience that we write a 3 x 3 matrix; the proof works for any n.) And now we move on down to the southeast corner of the matrix, which is a 2 x 2 matrix A_2. Now A_2 has an eigenvector v_2. So write U_2 for a unitary matrix with a 1 in the upper-left corner and v_2 as the second column. (This is a little hard for me to type.) What does U_2^{-1} U_1^{-1} A U_1 U_2 look like? Well, multiplying by U_2 and U_2^{-1} doesn't affect the first row or column at all; but it turns the second column into

[ *   ]
[ λ_2 ]
[ 0   ]

just as U_1 turned the first column into

[ λ_1 ]
[ 0   ]
[ 0   ]

Now we just continue this process until we get a triangular matrix similar to A. I should remark here that to believe the proof (which is well written up in Strang) you should really work out a 3 x 3 example on your own. Once you do that, you will really see that the argument is correct. Before you do that, I think it may not be so convincing.

Remark: Our favorite example of non-diagonalizable matrices,

[ 0 1 ]
[ 0 0 ]

is certainly similar to a triangular matrix, because it is itself triangular!

Note that the fact that U was unitary played essentially no role in the above proof. Why did we bother? We bothered because now we're ready to prove the spectral theorem. In fact, we'll prove something even a little better.
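Two numerical asides, both my own sketches (assuming Python with numpy and scipy). First, the Hermitian matrix from the example above really does have real eigenvalues 0 and 2. Second, the triangularization theorem just proved is Schur's theorem, and scipy exposes it as scipy.linalg.schur.

import numpy as np
from scipy.linalg import schur

# The Hermitian example from above.
A = np.array([[1, 1j],
              [-1j, 1]])
print(np.allclose(A, A.conj().T))   # True: A equals its conjugate transpose
print(np.linalg.eigvalsh(A))        # [0. 2.]: real, as promised

# Schur triangularization of our favorite non-diagonalizable matrix:
# B = Z T Z^H with Z unitary and T upper triangular.
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])
T, Z = schur(B, output='complex')
print(np.allclose(B, Z @ T @ Z.conj().T))       # True
print(np.allclose(Z @ Z.conj().T, np.eye(2)))   # True: Z is unitary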

Theorem: Every Hermitian matrix is diagonalizable. In particular, every real symmetric matrix is diagonalizable.

Proof. Let A be a Hermitian matrix. By the above theorem, A is triangularizable; that is, we can find a unitary matrix U such that

U^{-1} A U = T

with T upper triangular.

Lemma. U^{-1} A U is Hermitian.

Proof of Lemma. Since U is unitary, U^{-1} = U^H, and since A is Hermitian, A^H = A; so

(U^{-1} A U)^H = U^H A^H (U^{-1})^H = U^{-1} A U.

Now we're done! Because if T is upper triangular, then T^H is lower triangular. Ask: can we write down a triangular Hermitian matrix? Is there such a thing? Take some time to talk about that. (The answer: a matrix that is both upper triangular and Hermitian must be diagonal, with real diagonal entries.) So we have shown that in fact A is similar to a real diagonal matrix, which is to say that A is diagonalizable, and all the eigenvalues of A are real: the very phenomenon we observed for the Hermitian matrix above!

More than that: suppose A is real symmetric. Then the fact that the eigenvalues of A are real means that the eigenvectors of A are also real. So U is a real unitary matrix, so U U^H = U U^T = I; that is, U is an orthogonal matrix. This proves the remarkable fact that the eigenvectors of a symmetric matrix are mutually orthogonal.

Theorem. Let A be a Hermitian matrix. Then the eigenvalues of A are real and the eigenvectors are orthogonal (in the Hermitian sense: v_i^H v_j = 0).

LECTURE 13

Example: I want to talk about the diffusion of heat around a circular loop of wire. Suppose you have such a loop, and you start by heating one point. How quickly does the heat diffuse over the whole loop?

Draw the circle, and choose n nodes where we're going to measure temperature. Then the temperature is governed by the difference equation

x_{t+1} = A x_t

where A is a matrix with a_ij = 1/2 whenever |i - j| = 1 or |i - j| = n - 1. (This matrix is a pain to type.) For n = 3, for instance, the matrix is

A_3 = [ 0   1/2 1/2 ]
      [ 1/2 0   1/2 ]
      [ 1/2 1/2 0   ]
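Here is a sketch of this setup in numpy (my own addition): build A_n for general n, then watch the spectral theorem in action via eigh, numpy's eigenvalue routine for symmetric and Hermitian matrices, which returns real eigenvalues and orthonormal eigenvectors.

import numpy as np

def diffusion_matrix(n):
    # a_ij = 1/2 whenever nodes i and j are neighbors around the loop
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = 0.5
        A[i, (i - 1) % n] = 0.5
    return A

A = diffusion_matrix(10)
print(np.allclose(A, A.T))               # True: A is symmetric

evals, V = np.linalg.eigh(A)             # eigh: for symmetric/Hermitian matrices
print(evals)                             # all real, ascending; the largest is 1.0
print(np.allclose(V.T @ V, np.eye(10)))  # True: the eigenvectors are orthonormal

(Running this also turns up an eigenvalue of exactly -1 when n is even, coming from the alternating vector (1, -1, 1, ...); that is worth keeping in mind when we ask below how fast the iteration settles down.)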

Question. Suppose there are 10 nodes. Start with a temperature of 1 at the top node, 0 elsewhere. How long will it take before the temperature is approximately even; say, until the temperature is within 0.01 of 1/10 everywhere?

Well, one writes the initial state as

x_0 = c_1 v_1 + c_2 v_2 + ... + c_10 v_10.

One sees that λ_1 = 1 is an eigenvalue, with eigenvector

v_1 = (1, 1, ..., 1)^T.

We know that the eigenvalues are all real numbers (which corresponds with our intuition that there shouldn't be any oscillation in the temperature; it should just progress steadily towards the equilibrium). Physical intuition also tells us that all eigenvalues are between -1 and 1; otherwise the temperatures would spiral out of control! (It would be nice to prove this fact mathematically, though.)

So write the eigenvalues in order of absolute value:

1 = λ_1 ≥ |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_10|.

And write our initial state

x_0 = c_1 v_1 + c_2 v_2 + ... + c_10 v_10.

Observation 0: c_1 = 1/10.

Observation 1: |c_i v_i| < 1 for all i, because c_i v_i is nothing other than the orthogonal projection of x_0 onto the line spanned by v_i.

So after a long time,

x_k = A^k x_0 = (1/10) v_1 + c_2 λ_2^k v_2 + ... + c_10 λ_10^k v_10.

So the discrepancy between x_k and the uniform temperature distribution is

d = c_2 λ_2^k v_2 + ... + c_10 λ_10^k v_10.

How big is this discrepancy? Well, note that

|d| ≤ |λ_2|^k + |λ_3|^k + ... + |λ_10|^k ≤ 100 |λ_2|^k,

so in particular, if 100 |λ_2|^k < 0.01 it follows that the length of d, whence certainly the difference at any point between the temperature and 1/10, is less than 0.01.

So it all comes down to estimating λ_2. How the heck are we supposed to do that? This kind of problem is hugely important. In this simple case, the answer: we know λ_2 = cos π/10 = 0.9995. (This can be proved by mathematical induction using the theory of Tchebysheff polynomials.) And we find that, a priori, 8860 units of time suffice! To be honest, I'm not a good enough computer programmer to check how good an approximation this is. Compare with the seven-shuffle theorem.

2. Similar matrices represent the same linear transformation with respect to different bases

I just want to say a word about this important idea. Let W = (w_1, ..., w_n) be a basis for R^n.

Example: w_1 = [ 1 ], w_2 = [ 1  ]
               [ 1 ]        [ -1 ]

Let T : R^n -> R^n be a linear transformation. We can write

T w_1 = c_11 w_1 + c_21 w_2

and

T w_2 = c_12 w_1 + c_22 w_2.

In that case, write

[T]_W = [ c_11 c_12 ]
        [ c_21 c_22 ].

Remark. If W is the standard basis, then [T]_W is the usual matrix representation of T.

Remark. If W consists of eigenvectors for T, then [T]_W is diagonal. As in the case above, where T is the transformation usually written as

[ 9/8 7/8 ]
[ 7/8 9/8 ].
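Checking that remark numerically (my own sketch in numpy): with S the matrix whose columns are the eigenvector basis w_1, w_2, conjugating by S diagonalizes T. This is also a preview of the Fact about connecting matrices just below.

import numpy as np

# The transformation from the example, in the standard basis.
T = np.array([[9/8, 7/8],
              [7/8, 9/8]])

# Columns are the eigenvector basis: w_1 = (1, 1), w_2 = (1, -1).
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# [T]_W = S^{-1} [T]_V S, with V the standard basis.
T_W = np.linalg.inv(S) @ T @ S
print(T_W)   # diagonal, with the eigenvalues 2 and 1/4 on the diagonal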

Remark. Let V and W be two bases. We can write

w_1 = s_11 v_1 + s_21 v_2   and   w_2 = s_12 v_1 + s_22 v_2,

and the corresponding matrix

S = [ s_11 s_12 ]
    [ s_21 s_22 ]

is called (by me) the connecting matrix between V and W. For instance, if V is the standard basis, then S is the usual matrix

S = [ w_1 ... w_n ].

Fact. [T]_W = S^{-1} [T]_V S. In particular, the two matrices are similar. This is a little messy to prove, so I shan't; it's easiest if we take V to be the standard basis.

Remark: Note that the eigenvalues of a linear transformation do not depend on the basis; that fits well with our knowledge that similar matrices have the same eigenvalues.

3. Jordan canonical form

OK, that was fun. Now I want to talk a little about Jordan canonical form. Remember our goal: given A, find a nice matrix which is similar to A. We've already seen that we can find a matrix similar to A which is upper triangular. We can do a little better: we can find a matrix J similar to A which is of Jordan block form. I'll write this on the board, and say that it makes it not too bad to compute the exponential.
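Jordan form is numerically fragile, so numpy doesn't compute it, but sympy can do it exactly. A sketch (my own addition), with a small matrix chosen to have a repeated eigenvalue and only one independent eigenvector, so it is not diagonalizable:

import sympy as sp

# 2 is a double eigenvalue of A with a one-dimensional eigenspace.
A = sp.Matrix([[1, 1],
               [-1, 3]])

P, J = A.jordan_form()       # A = P * J * P^{-1}
sp.pprint(J)                 # [[2, 1], [0, 2]]: a single 2 x 2 Jordan block
print(sp.simplify(P * J * P.inv() - A))   # the 2 x 2 zero matrix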