Understanding the Matrix Exponential


Lecture 8
Math 634
9/17/99

Transformations

Now that we have a representation of the solution of constant-coefficient initial-value problems, we should ask ourselves: What good is it? Does the power series formula for the matrix exponential provide an efficient means for calculating exact solutions? Not usually. Is it an efficient way to compute accurate numerical approximations to the matrix exponential? Not according to Matrix Computations by Golub and Van Loan. Does it provide insight into how solutions behave? It is not clear that it does. There are, however, transformations that may help us handle these problems.

Suppose that $B, P \in \mathcal{L}(\mathbb{R}^n, \mathbb{R}^n)$ are related by a similarity transformation; i.e., $B = QPQ^{-1}$ for some invertible $Q$. Calculating, we find that

$$e^B = \sum_{k=0}^\infty \frac{B^k}{k!} = \sum_{k=0}^\infty \frac{(QPQ^{-1})^k}{k!} = \sum_{k=0}^\infty \frac{QP^kQ^{-1}}{k!} = Q\left(\sum_{k=0}^\infty \frac{P^k}{k!}\right)Q^{-1} = Qe^PQ^{-1}.$$

It would be nice if, given $B$, we could choose $Q$ so that $P$ were a diagonal matrix, since

$$e^{\operatorname{diag}\{p_1, p_2, \dots, p_n\}} = \operatorname{diag}\{e^{p_1}, e^{p_2}, \dots, e^{p_n}\}.$$

Unfortunately, this cannot always be done. Over the next few lectures, we will show that what can be done, in general, is to pick $Q$ so that $P = S + N$, where $S$ is a semisimple matrix with a fairly simple form, $N$ is a nilpotent matrix of a fairly simple form, and $S$ and $N$ commute. (Recall that a matrix is semisimple if it is diagonalizable over the complex numbers and that a matrix is nilpotent if some power of the matrix is 0.) The forms of $S$ and $N$ are simple enough that we can calculate their exponentials fairly easily, and then we can multiply them to get the exponential of $S + N$.

We will spend a significant amount of time carrying out the project described in the previous paragraph, even though it is linear algebra that some
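The similarity identity $e^{QPQ^{-1}} = Qe^PQ^{-1}$ is easy to check numerically. The following is a quick sketch (not part of the original notes), assuming NumPy and SciPy are available; the particular diagonal $P$ and random $Q$ are arbitrary choices for illustration.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A diagonal P and a random Q (invertible with probability 1).
P = np.diag([1.0, -0.5, 2.0])
Q = rng.standard_normal((3, 3))
B = Q @ P @ np.linalg.inv(Q)

# e^B computed directly vs. via the similarity transformation:
# e^{QPQ^{-1}} = Q e^P Q^{-1}.  For diagonal P, e^P = diag(e^{p_i}).
lhs = expm(B)
rhs = Q @ np.diag(np.exp(np.diag(P))) @ np.linalg.inv(Q)
print(np.allclose(lhs, rhs))  # True
```

Note that `expm` uses a scaling-and-squaring algorithm rather than the power series, consistent with the remark above that the series is not the method of choice numerically.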

of you have probably seen before. Since understanding the behavior of constant-coefficient systems plays a vital role in helping us understand more complicated systems, I feel that the time investment is worth it. The particular approach we will take follows chapters 3, 4, 5, and 6, and appendix 3 of Hirsch and Smale fairly closely.

Eigensystems

Given $B \in \mathcal{L}(\mathbb{R}^n, \mathbb{R}^n)$, recall that $\lambda \in \mathbb{C}$ is an eigenvalue of $B$ if $Bx = \lambda x$ for some nonzero $x \in \mathbb{R}^n$ or if $\mathbb{B}x = \lambda x$ for some nonzero $x \in \mathbb{C}^n$, where $\mathbb{B}$ is the complexification of $B$; i.e., the element of $\mathcal{L}(\mathbb{C}^n, \mathbb{C}^n)$ which agrees with $B$ on $\mathbb{R}^n$. (Just as we often identify a linear operator with a matrix representation of it, we will usually not make a distinction between an operator on a real vector space and its complexification.) A nonzero vector $x$ for which $Bx = \lambda x$ for some scalar $\lambda$ is an eigenvector. An eigenvalue $\lambda$ with corresponding eigenvector $x$ form an eigenpair $(\lambda, x)$.

If an operator $A \in \mathcal{L}(\mathbb{R}^n, \mathbb{R}^n)$ is chosen at random, it would almost surely have $n$ distinct eigenvalues $\{\lambda_1, \dots, \lambda_n\}$ and $n$ corresponding linearly independent eigenvectors $\{x_1, \dots, x_n\}$. If this is the case, then $A$ is similar to the (possibly complex) diagonal matrix

$$\begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}.$$

More specifically,

$$A \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}.$$

If the eigenvalues of $A$ are real and distinct, then this means that

$$tA = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \begin{bmatrix} t\lambda_1 & & \\ & \ddots & \\ & & t\lambda_n \end{bmatrix} \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix}^{-1},$$
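For a matrix with real, distinct eigenvalues, the resulting diagonalization of $e^{tA}$ can be verified directly. Here is a minimal sketch (not from the notes), assuming NumPy and SciPy; the example matrix and the value of $t$ are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

# A hypothetical example matrix with real, distinct eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, X = np.linalg.eig(A)    # columns of X are eigenvectors x_1, ..., x_n
t = 0.7

# e^{tA} = [x_1 ... x_n] diag(e^{t lambda_i}) [x_1 ... x_n]^{-1}
via_eig = X @ np.diag(np.exp(t * lam)) @ np.linalg.inv(X)
print(np.allclose(expm(t * A), via_eig))  # True
```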

and the formula for the matrix exponential then yields

$$e^{tA} = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \begin{bmatrix} e^{t\lambda_1} & & \\ & \ddots & \\ & & e^{t\lambda_n} \end{bmatrix} \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix}^{-1}.$$

This formula should make clear how the projections of $e^{tA}x_0$ grow or decay as $t \to \pm\infty$. The same sort of analysis works when the eigenvalues are (nontrivially) complex, but the resulting formula is not as enlightening. In addition to the difficulty of a complex change of basis, the behavior of $e^{t\lambda_k}$ is less clear when $\lambda_k$ is not real.

One way around this is the following. Sort the eigenvalues (and eigenvectors) of $A$ so that the complex conjugate eigenvalues $\{\lambda_1, \bar\lambda_1, \dots, \lambda_m, \bar\lambda_m\}$ come first and are grouped together, and so that the real eigenvalues $\{\lambda_{m+1}, \dots, \lambda_r\}$ come last. For $k \le m$, set $a_k = \operatorname{Re}\lambda_k \in \mathbb{R}$, $b_k = \operatorname{Im}\lambda_k \in \mathbb{R}$, $y_k = \operatorname{Re} x_k \in \mathbb{R}^n$, and $z_k = \operatorname{Im} x_k \in \mathbb{R}^n$. Then

$$Ay_k = A\operatorname{Re} x_k = \operatorname{Re} Ax_k = \operatorname{Re} \lambda_k x_k = (\operatorname{Re}\lambda_k)(\operatorname{Re} x_k) - (\operatorname{Im}\lambda_k)(\operatorname{Im} x_k) = a_k y_k - b_k z_k,$$

and

$$Az_k = A\operatorname{Im} x_k = \operatorname{Im} Ax_k = \operatorname{Im} \lambda_k x_k = (\operatorname{Im}\lambda_k)(\operatorname{Re} x_k) + (\operatorname{Re}\lambda_k)(\operatorname{Im} x_k) = b_k y_k + a_k z_k.$$

Using these facts, we have $A = QPQ^{-1}$, where

$$Q = \begin{bmatrix} z_1 & y_1 & \cdots & z_m & y_m & x_{m+1} & \cdots & x_r \end{bmatrix}$$

and
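The identities $Ay_k = a_k y_k - b_k z_k$ and $Az_k = b_k y_k + a_k z_k$ can be checked on a concrete matrix. The sketch below (not from the notes, assuming NumPy) uses a hypothetical $2 \times 2$ example with eigenvalues $1 \pm 3i$; the derivation holds for whichever complex phase `eig` happens to give the eigenvector.

```python
import numpy as np

# Check Ay_k = a_k y_k - b_k z_k and Az_k = b_k y_k + a_k z_k on an example.
A = np.array([[1.0, -3.0],
              [3.0,  1.0]])           # eigenvalues 1 +/- 3i
lam, X = np.linalg.eig(A)
k = np.argmax(lam.imag)               # take the lambda_k with positive imaginary part
a, b = lam[k].real, lam[k].imag       # a_k = Re(lambda_k), b_k = Im(lambda_k)
y, z = X[:, k].real, X[:, k].imag     # y_k = Re(x_k), z_k = Im(x_k)

print(np.allclose(A @ y, a*y - b*z))  # True
print(np.allclose(A @ z, b*y + a*z))  # True
```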

$$P = \begin{bmatrix}
a_1 & -b_1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
b_1 & a_1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & a_m & -b_m & 0 & \cdots & 0 \\
0 & 0 & \cdots & b_m & a_m & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & \lambda_{m+1} & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & \lambda_r
\end{bmatrix}.$$

In order to compute $e^{tA}$ from this formula, we'll need to know how to compute $e^{tA_k}$, where

$$A_k = \begin{bmatrix} a_k & -b_k \\ b_k & a_k \end{bmatrix}.$$

This can be done using the power series formula. An alternative approach is to realize that

$$\begin{bmatrix} x(t) \\ y(t) \end{bmatrix} := e^{tA_k} \begin{bmatrix} c \\ d \end{bmatrix}$$

is supposed to solve the IVP

$$\begin{cases} \dot x = a_k x - b_k y \\ \dot y = b_k x + a_k y \\ x(0) = c \\ y(0) = d. \end{cases} \tag{1}$$

Since we can check that the solution of (1) is

$$\begin{cases} x(t) = e^{a_k t}(c \cos b_k t - d \sin b_k t) \\ y(t) = e^{a_k t}(d \cos b_k t + c \sin b_k t), \end{cases}$$
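That the claimed solution of (1) agrees with $e^{tA_k}(c, d)^T$ can be confirmed numerically. Below is a small sketch (not from the notes, assuming NumPy and SciPy); the values of $a_k$, $b_k$, $c$, $d$, and the sample times are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

a_k, b_k = 0.4, 1.5       # hypothetical values of a_k, b_k
c, d = 2.0, -1.0          # initial condition (c, d)
A_k = np.array([[a_k, -b_k],
                [b_k,  a_k]])

for t in (0.0, 0.5, 1.3):
    # the claimed solution of the IVP (1)
    x = np.exp(a_k*t) * (c*np.cos(b_k*t) - d*np.sin(b_k*t))
    y = np.exp(a_k*t) * (d*np.cos(b_k*t) + c*np.sin(b_k*t))
    assert np.allclose(expm(t * A_k) @ np.array([c, d]), [x, y])
print("solution formula matches e^{tA_k}(c,d)")
```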

we can conclude that

$$e^{tA_k} = \begin{bmatrix} e^{a_k t} \cos b_k t & -e^{a_k t} \sin b_k t \\ e^{a_k t} \sin b_k t & e^{a_k t} \cos b_k t \end{bmatrix}.$$

Putting this all together and using the form of $P$, we see that $e^{tA} = Qe^{tP}Q^{-1}$, where

$$e^{tP} = \begin{bmatrix} B_1 & 0 \\ 0^T & B_2 \end{bmatrix},$$

$$B_1 = \begin{bmatrix}
e^{a_1 t} \cos b_1 t & -e^{a_1 t} \sin b_1 t & \cdots & 0 & 0 \\
e^{a_1 t} \sin b_1 t & e^{a_1 t} \cos b_1 t & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & e^{a_m t} \cos b_m t & -e^{a_m t} \sin b_m t \\
0 & 0 & \cdots & e^{a_m t} \sin b_m t & e^{a_m t} \cos b_m t
\end{bmatrix},$$

$$B_2 = \begin{bmatrix} e^{\lambda_{m+1} t} & & \\ & \ddots & \\ & & e^{\lambda_r t} \end{bmatrix},$$

and $0$ is a $2m \times (r - m)$ block of 0's. This representation of $e^{tA}$ shows that not only may the projections of $e^{tA}x_0$ grow or decay exponentially, they may also exhibit sinusoidal oscillatory behavior.
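The whole construction, building $Q$ from the real and imaginary parts of the eigenvectors, assembling the block form of $P$, and computing $e^{tA} = Qe^{tP}Q^{-1}$, can be exercised end to end. The sketch below (not from the notes, assuming NumPy and SciPy) uses a hypothetical $3 \times 3$ matrix with one complex-conjugate pair $0.5 \pm 2i$ and one real eigenvalue $1$, conjugated by a random similarity so the structure is not visible by inspection.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# A real 3x3 matrix with eigenvalues 0.5 +/- 2i and 1, hidden by a similarity.
S = rng.standard_normal((3, 3))
A = S @ np.array([[0.5, -2.0, 0.0],
                  [2.0,  0.5, 0.0],
                  [0.0,  0.0, 1.0]]) @ np.linalg.inv(S)

lam, X = np.linalg.eig(A)
k = np.argmax(lam.imag)                # lambda_1 = a_1 + i b_1 with b_1 > 0
j = np.argmin(np.abs(lam.imag))        # the real eigenvalue lambda_2
a, b = lam[k].real, lam[k].imag
y, z = X[:, k].real, X[:, k].imag      # y_1 = Re(x_1), z_1 = Im(x_1)

# Q = [z_1  y_1  x_2] and the block form of P, as in the notes.
Q = np.column_stack([z, y, X[:, j].real])
P = np.array([[a,  -b,  0.0],
              [b,   a,  0.0],
              [0.0, 0.0, lam[j].real]])
assert np.allclose(A, Q @ P @ np.linalg.inv(Q))   # A = Q P Q^{-1}

# e^{tP} assembled from the rotation-scaling block and the real eigenvalue.
t = 0.3
etP = np.zeros((3, 3))
etP[:2, :2] = np.exp(a*t) * np.array([[np.cos(b*t), -np.sin(b*t)],
                                      [np.sin(b*t),  np.cos(b*t)]])
etP[2, 2] = np.exp(lam[j].real * t)
print(np.allclose(expm(t * A), Q @ etP @ np.linalg.inv(Q)))  # True
```

The rotation-scaling blocks are exactly where the exponential growth/decay (via $e^{a_k t}$) and the sinusoidal oscillation (via $\cos b_k t$, $\sin b_k t$) described above enter.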