
18.03 Class 33, Apr 28, 2010

Eigenvalues and eigenvectors

1. Linear algebra
2. Ray solutions
3. Eigenvalues
4. Eigenvectors
5. Initial values

[1] Prologue on linear algebra.

Recall

    [a b ; c d] [x ; y] = x [a ; c] + y [b ; d] :

a matrix times a column vector is the linear combination of the columns of the matrix, weighted by the entries in the column vector.

When is this product zero? One way is for x = 0 = y. If [a ; c] and [b ; d] point in different directions, this is the ONLY way. But if they lie along a single line, we can find x and y so that the sum cancels.

Write A = [a b ; c d] and u = [x ; y], so we have been thinking about A u = 0 as an equation for u. It always has the "trivial" solution u = 0 = [0 ; 0] : 0 is a linear combination of the two columns in a "trivial" way, with 0 coefficients, and we are asking when it is a linear combination of them in a different, "nontrivial" way.

We get a nonzero solution [x ; y] exactly when the slopes of the vectors [a ; c] and [b ; d] coincide: c/a = d/b, or ad - bc = 0. This combination of the entries of A is so important that it is called the "determinant" of the matrix:

    det(A) = ad - bc.

We have found:

Theorem: A u = 0 has a nontrivial solution exactly when det(A) = 0.

If A is a larger *square* matrix, the same theorem still holds, with the appropriate definition of the number det(A).

[2] Solve u' = A u : for example with A = [-2 1 ; -4 3].

The Mathlet "Linear Phase Portraits: Matrix Entry" shows that some trajectories seem to lie along rays from the origin. (We saw these also in the rabbits example on Monday. They were not present in the Romeo and Juliet example, though!) That is to say, we are going to look for a solution of the form

    u(t) = r(t) v ,   v not 0.

One thing is for sure: u'(t) also points in the same or the reverse direction:
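The determinant criterion above is easy to try numerically. Here is a minimal Python sketch (not part of the lecture; the matrix [1 3 ; 2 6], whose columns are parallel, is a made-up example):

```python
# Hypothetical example: a 2x2 matrix whose columns lie along a single line,
# so A u = 0 has a nontrivial solution exactly as the theorem predicts.
def det2(a, b, c, d):
    """Determinant ad - bc of the matrix [a b ; c d]."""
    return a * d - b * c

# Columns [1 ; 2] and [3 ; 6] are parallel, so the determinant vanishes...
a, b, c, d = 1, 3, 2, 6
print(det2(a, b, c, d))  # 0

# ...and the nontrivial combination x = 3, y = -1 sends them to zero:
x, y = 3, -1
combo = (x * a + y * b, x * c + y * d)
print(combo)  # (0, 0)
```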

    u'(t) = r'(t) v.

Use the equation to express u'(t) in terms of A and v:

    u'(t) = A u(t) = A r v = r A v ,

so

    r' v = r A v.

So A v points in the same or the reverse direction as v:

    A v = lambda v

for some number lambda. (This letter is always used in this context.)

An *eigenvalue* for A is a number lambda such that A v = lambda v for some nonzero v. A vector v such that A v = lambda v is an eigenvector for the value lambda. [Thus the zero vector is an eigenvector for every lambda. A number is an eigenvalue for A exactly when it possesses a nonzero eigenvector.]

I showed how this looks on the Mathlet "Matrix/Vector".

[3] This is a pure linear algebra problem: A is a square matrix, and we are looking for nonzero vectors v such that A v = lambda v for some number lambda. Let's try to find v. In order to get all the v's together, rewrite the right-hand side as

    lambda v = (lambda I) v ,

where I is the identity matrix [1 0 ; 0 1], and lambda I is the matrix with lambda down the diagonal. Then we can put this on the left:

    0 = A v - (lambda I) v = (A - lambda I) v.

Don't forget, we are looking for a nonzero v. We have just found an exact condition for such a solution:

    det(A - lambda I) = 0.

This is an equation in lambda; we will find lambda first, and then set about solving for v (knowing in advance only that there IS a nonzero solution).

In our example, then, we subtract lambda from both diagonal entries and take the determinant:

    A - lambda I = [ -2-lambda  1 ; -4  3-lambda ]

    det(A - lambda I) = (-2-lambda)(3-lambda) + 4 = lambda^2 - lambda - 6 + 4 = lambda^2 - lambda - 2.

This is the "characteristic polynomial"

    p_A(lambda) = det(A - lambda I)
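The computation just done generalizes: for any 2x2 matrix, det(A - lambda I) = lambda^2 - (trace A) lambda + det A. As an illustrative sketch (not from the lecture), the quadratic formula then recovers the two eigenvalues of the example matrix:

```python
import math

# Characteristic polynomial of the lecture's example A = [-2 1 ; -4 3]:
# det(A - lambda I) = lambda^2 - (a+d) lambda + (ad - bc).
a, b, c, d = -2.0, 1.0, -4.0, 3.0
trace, det = a + d, a * d - b * c        # 1, -2
disc = trace * trace - 4.0 * det         # discriminant of the quadratic, 9
lam1 = (trace - math.sqrt(disc)) / 2.0   # -1.0
lam2 = (trace + math.sqrt(disc)) / 2.0   # 2.0
print(lam1, lam2)
```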

of A, and its roots are the "characteristic values" or "eigenvalues" of A. In our case

    p_A(lambda) = (lambda + 1)(lambda - 2)

and there are two roots, lambda_1 = -1 and lambda_2 = 2. (The order is irrelevant.)

[4] Now we can find those special directions. There is one line for lambda_1 and another for lambda_2. We have to find a nonzero solution v of

    (A - lambda I) v = 0 ,

e.g. with lambda = lambda_1 = -1:

    A - lambda I = [ -1 1 ; -4 4 ].

There is a nontrivial linear relation between the columns:

    (A - lambda I) [1 ; 1] = 0.

All we are claiming is that A [1 ; 1] = - [1 ; 1], and you can check this directly. Any such v (even zero) is called an "eigenvector" of A.

It means that there is a ray solution of the form r(t) v with v = [1 ; 1]. We have

    r' v = r A v = r lambda v ,

so (since v is nonzero)

    r' = lambda r ,

and solving this goes straight back to Day One:

    r = c e^{lambda t} ,

so for us

    r = c e^{-t} ,

and we have found our first straight-line solution:

    u = e^{-t} [1 ; 1].

In fact we've found all the solutions which occur along that line:

    u = c e^{-t} [1 ; 1].

Any one of these solutions is called a "normal mode."

General fact: the eigenvalue turns out to play a much more important role than it looked like it would: the ray solutions are *exponential* solutions e^{lambda t} v, where lambda is an eigenvalue for
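The claim A [1 ; 1] = - [1 ; 1] can be checked directly, as the lecture says; a few lines of Python (illustrative only) do the arithmetic:

```python
# Direct check that [1 ; 1] is an eigenvector of A = [-2 1 ; -4 3]
# with eigenvalue -1 (pure Python, no libraries assumed).
A = [[-2, 1], [-4, 3]]
v = [1, 1]
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
print(Av)  # [-1, -1], i.e. A v = -v, so lambda = -1
```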

the matrix and v is a nonzero eigenvector for this eigenvalue.

The second eigenvalue, lambda_2 = 2, leads to

    A - lambda I = [ -4 1 ; -4 1 ] ,

and [ -4 1 ; -4 1 ] v = 0 has the nonzero solution v = [1 ; 4], so [1 ; 4] is a nonzero eigenvector for the eigenvalue lambda = 2, and there is another ray solution

    e^{2t} [1 ; 4].

[5] The general solution to u' = A u will be a linear combination of the two eigensolutions (as long as there are two distinct eigenvalues). In our example, the general solution is a linear combination of the normal modes:

    u = c1 e^{-t} [1 ; 1] + c2 e^{2t} [1 ; 4].

We can solve for c1 and c2 using an initial condition: say for example u(0) = [1 ; 0]. Well,

    u(0) = c1 [1 ; 1] + c2 [1 ; 4] = [ c1 + c2 ; c1 + 4 c2 ] ,

and for this to be [1 ; 0] we must have 3 c2 = -1, so c2 = -1/3 and c1 = -4 c2 = 4/3:

    u(t) = (4/3) e^{-t} [1 ; 1] - (1/3) e^{2t} [1 ; 4].

When t is very negative, -10 say, the first term is very big and the second tiny: the solution is very near the line through [1 ; 1]. As t gets near zero, the two terms become comparable and the solution curves around. As t gets large, 10 say, the second term is very big and the first is tiny: the solution becomes asymptotic to the line through [1 ; 4].
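The 2x2 system for c1 and c2 can be solved mechanically; here is an illustrative Python sketch (not part of the lecture) using Cramer's rule with exact fractions:

```python
from fractions import Fraction

# Solve u(0) = c1 [1;1] + c2 [1;4] = [1;0], i.e. the system
#   c1 +   c2 = 1
#   c1 + 4 c2 = 0
# by Cramer's rule on the coefficient matrix [1 1 ; 1 4].
det = Fraction(1 * 4 - 1 * 1)        # determinant = 3
c1 = Fraction(1 * 4 - 1 * 0) / det   # 4/3
c2 = Fraction(1 * 0 - 1 * 1) / det   # -1/3
print(c1, c2)
```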

MIT OpenCourseWare
http://ocw.mit.edu

18.03 Differential Equations
Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.