Homework 3 Solutions Math 309, Fall 2015


7.8.2. One easily checks that the only eigenvalue of the coefficient matrix is $\lambda = 0$. To find the associated eigenvector, we have
\[
\begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad\Longrightarrow\quad
v = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \text{ (up to scalar multiplication).}
\]
Since the corresponding eigenspace is clearly one-dimensional, the eigenvalue $\lambda = 0$ is deficient, and so there exists an associated generalized eigenvector $w$. To find it, we solve
\[
\begin{pmatrix} 4 & -2 \\ 8 & -4 \end{pmatrix}
\begin{pmatrix} w_1 \\ w_2 \end{pmatrix}
= \begin{pmatrix} 1 \\ 2 \end{pmatrix}
\quad\Longrightarrow\quad
w_2 = 2w_1 - \tfrac{1}{2}.
\]
Therefore, our generalized eigenvector will have the form
\[
w = \begin{pmatrix} 0 \\ -\tfrac{1}{2} \end{pmatrix} + k \begin{pmatrix} 1 \\ 2 \end{pmatrix}.
\]
Here $k$ is an arbitrary constant, and so we set it equal to $0$ and plug it into our formula for the solution:
\[
x(t) = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + c_2 \left[ \begin{pmatrix} 1 \\ 2 \end{pmatrix} t + \begin{pmatrix} 0 \\ -\tfrac{1}{2} \end{pmatrix} \right].
\]

7.8.5. We compute the characteristic polynomial of the coefficient matrix to be $-\lambda^3 + 3\lambda^2 - 4$, and looking for small integer roots we observe that $\lambda = -1$ is such a root. Polynomial division shows that $\lambda = 2$ (a double root) is the only other root. We first find the eigenvector associated to the (simple) eigenvalue $\lambda = -1$:
\[
\begin{pmatrix} 2 & 1 & 1 \\ 2 & 2 & -1 \\ 0 & -1 & 2 \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
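As a quick numerical sanity check of Problem 7.8.2 (taking the coefficient matrix to be [[4, -2], [8, -4]], as read off the displays above), numpy confirms that 0 is a double, deficient eigenvalue and that w = (0, -1/2) really is a generalized eigenvector:

```python
import numpy as np

# Coefficient matrix of Problem 7.8.2, as read off the solution above
A = np.array([[4.0, -2.0],
              [8.0, -4.0]])

# Both eigenvalues should be (numerically) zero
eigenvalues = np.linalg.eigvals(A)
print(np.allclose(eigenvalues, 0, atol=1e-6))   # True

# rank(A - 0*I) = 1, so the eigenspace for lambda = 0 is one-dimensional
print(np.linalg.matrix_rank(A))                 # 1

# v = (1, 2) spans the eigenspace, and w = (0, -1/2) satisfies A w = v
v = np.array([1.0, 2.0])
w = np.array([0.0, -0.5])
print(np.allclose(A @ v, 0))                    # True
print(np.allclose(A @ w, v))                    # True
```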

Solving this system however you like, we find that, up to scalar multiplication,
\[
v^{(1)} = \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix}.
\]
We next compute the eigenspace associated to $\lambda = 2$:
\[
\begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ 0 & -1 & -1 \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
Again solving this system any way you like, we conclude that the eigenspace is one-dimensional and generated by
\[
v^{(2)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.
\]
Since the eigenvalue $\lambda = 2$ thus has geometric multiplicity $1$ and algebraic multiplicity $2$, it is deficient and so there exists an associated generalized eigenvector:
\[
\begin{pmatrix} -1 & 1 & 1 \\ 2 & -1 & -1 \\ 0 & -1 & -1 \end{pmatrix}
\begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix}
= \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.
\]
The last equation here implies $w_2 + w_3 = 1$, and plugging that into the second equation gives us $w_1 = 1$. We thus conclude that our generalized eigenvector has the form
\[
w = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + k \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.
\]
Plugging this all into our solution formula when $n = 3$ (taking $k = 0$), we find that
\[
x(t) = c_1 e^{-t} \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix}
+ c_2 e^{2t} \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}
+ c_3 \left[ t e^{2t} \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} + e^{2t} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} \right].
\]
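As in the previous problem, the claims above can be spot-checked numerically, assuming the coefficient matrix is [[1, 1, 1], [2, 1, -1], [0, -1, 1]] (the matrix consistent with the characteristic polynomial and eigensystems displayed above):

```python
import numpy as np

# Coefficient matrix consistent with the computations in Problem 7.8.5
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, -1.0],
              [0.0, -1.0, 1.0]])

# Monic characteristic polynomial should be lambda^3 - 3*lambda^2 + 4
print(np.allclose(np.poly(A), [1, -3, 0, 4], atol=1e-6))       # True

# v1 = (-3, 4, 2) is an eigenvector for lambda = -1
v1 = np.array([-3.0, 4.0, 2.0])
print(np.allclose(A @ v1, -v1))                                # True

# lambda = 2 is deficient: rank(A - 2I) = 2 leaves a 1-dimensional eigenspace
print(np.linalg.matrix_rank(A - 2*np.eye(3)))                  # 2

# w = (1, 1, 0) is a generalized eigenvector: (A - 2I) w = v2 = (0, 1, -1)
v2 = np.array([0.0, 1.0, -1.0])
w = np.array([1.0, 1.0, 0.0])
print(np.allclose((A - 2*np.eye(3)) @ w, v2))                  # True
```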

Note: The answer in your textbook looks a bit different than this, and this is because they chose a generalized eigenvector which corresponds to $k = 1$ in my computation, that is,
\[
w = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}.
\]

7.8.6. The characteristic polynomial is given by
\[
\det \begin{pmatrix} -\lambda & 1 & 1 \\ 1 & -\lambda & 1 \\ 1 & 1 & -\lambda \end{pmatrix}
= -\lambda^3 + 3\lambda + 2.
\]
Checking for small integer roots and using polynomial division, we see that this factors as $-(\lambda - 2)(\lambda + 1)^2$, and hence our eigenvalues are $\lambda = 2$ and $\lambda = -1$. Computing the eigenvector for $\lambda = 2$ amounts to solving
\[
\begin{pmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}.
\]
Since the third row is minus the sum of the first two, this is equivalent to solving the system
\[
\begin{cases} -2v_1 + v_2 + v_3 = 0 \\ \phantom{-2}v_1 - 2v_2 + v_3 = 0. \end{cases}
\]
It's easy to see (for example, by subtracting the first equation from the second to get $3v_1 - 3v_2 = 0$ and then plugging that result into either equation) that the solution to this system is $v_1 = v_2 = v_3$, so we set
\[
v = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.
\]
Now to find the eigenspace associated to $\lambda = -1$ we solve
\[
\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},
\]

which is easily seen to be equivalent to the single equation $v_1 = -v_2 - v_3$. Our eigenspace thus consists of vectors of the form
\[
\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}
= \begin{pmatrix} -v_2 - v_3 \\ v_2 \\ v_3 \end{pmatrix}
= v_2 \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + v_3 \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}.
\]
This, of course, means that the eigenspace $E_{-1}$ is two-dimensional and spanned by the eigenvectors
\[
v^{(1)} = \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}
\quad\text{and}\quad
v^{(2)} = \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}.
\]
Since the algebraic and geometric multiplicity of $\lambda = -1$ are both $2$, this eigenvalue is not deficient and there is thus no need to look for generalized eigenvectors. By our usual formula, the general solution to this problem thus has the form
\[
x(t) = c_1 e^{2t} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}
+ c_2 e^{-t} \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}
+ c_3 e^{-t} \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}.
\]

7.8.15. Let $P$ be the coefficient matrix. Observe that the characteristic polynomial for $P$ is given by
\[
\det \begin{pmatrix} a - \lambda & b \\ c & d - \lambda \end{pmatrix}
= \lambda^2 - (a + d)\lambda + (ad - bc). \tag{1}
\]
Observe that in each of the formulas we have learned for the general solution of the homogeneous equation $x' = Px$, the only way we will always have $x(t) \to 0$ as $t \to \infty$ is if all eigenvalues satisfy $\operatorname{Re}(\lambda) < 0$ (i.e., have negative real part). Now if $r_1$ and $r_2$ are the two (possibly identical) eigenvalues of $P$, the characteristic polynomial factors as
\[
(\lambda - r_1)(\lambda - r_2) = \lambda^2 - (r_1 + r_2)\lambda + r_1 r_2. \tag{2}
\]
There are two cases to consider: either $r_1, r_2$ are both real or they are complex conjugates. In the former case, the roots are both negative if and only if their

sum is negative and their product is positive. Comparing (1) and (2), this is equivalent to
\[
r_1 + r_2 = a + d < 0 \quad\text{and}\quad r_1 r_2 = ad - bc > 0,
\]
as claimed. Now if $r_1$ and $r_2$ are complex conjugates, we automatically have $r_1 r_2 \geq 0$, with equality holding if and only if $r_1 = r_2 = 0$ (recall that $(a + bi)(a - bi) = a^2 + b^2$). Since complex conjugates have the same real part, they will both have negative real part if and only if their sum is negative and their product, being nonzero, is positive, so the conclusion again follows by comparing (1) and (2).

7.9.4. We will do this and the following problems from Section 7.9 using diagonalization/Jordan normal forms. One easily computes the characteristic polynomial of the coefficient matrix $P$ to be $\lambda^2 + \lambda - 6$, and so the eigenvalues of $P$ are $\lambda = 2$ and $\lambda = -3$. We find that the corresponding eigenspaces are spanned by
\[
v^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}
\quad\text{and}\quad
v^{(2)} = \begin{pmatrix} 1 \\ -4 \end{pmatrix}
\]
respectively. We thus compute the diagonalizing matrix $T$ and its inverse to be
\[
T = \begin{pmatrix} 1 & 1 \\ 1 & -4 \end{pmatrix},
\qquad
T^{-1} = \begin{pmatrix} 4/5 & 1/5 \\ 1/5 & -1/5 \end{pmatrix}.
\]
Setting $x = Ty$ as usual, the given equation is equivalent to
\[
y' = \begin{pmatrix} 2 & 0 \\ 0 & -3 \end{pmatrix} y + h,
\quad\text{where}\quad
h = T^{-1} g = \begin{pmatrix} (4/5)e^{-2t} - (2/5)e^{t} \\ (1/5)e^{-2t} + (2/5)e^{t} \end{pmatrix}.
\]
This, of course, implies that $y_1$ satisfies
\[
y_1' = 2y_1 + \tfrac{4}{5}e^{-2t} - \tfrac{2}{5}e^{t}
\quad\Longrightarrow\quad
\frac{d}{dt}\left(e^{-2t} y_1\right) = \tfrac{4}{5}e^{-4t} - \tfrac{2}{5}e^{-t},
\]
and so integrating yields
\[
e^{-2t} y_1 = -\tfrac{1}{5}e^{-4t} + \tfrac{2}{5}e^{-t} + c_1
\quad\Longrightarrow\quad
y_1(t) = -\tfrac{1}{5}e^{-2t} + \tfrac{2}{5}e^{t} + c_1 e^{2t}.
\]
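The integrating-factor step for $y_1$ is easy to get sign-wrong, so here is a sympy check that the formula just obtained actually solves $y_1' = 2y_1 + \tfrac45 e^{-2t} - \tfrac25 e^{t}$ (a sketch; the symbol names are mine):

```python
import sympy as sp

t, c1 = sp.symbols('t c1')

# y1 as obtained above in Problem 7.9.4
y1 = -sp.Rational(1, 5)*sp.exp(-2*t) + sp.Rational(2, 5)*sp.exp(t) + c1*sp.exp(2*t)

# Substitute back into y1' = 2*y1 + (4/5)*exp(-2t) - (2/5)*exp(t)
rhs = 2*y1 + sp.Rational(4, 5)*sp.exp(-2*t) - sp.Rational(2, 5)*sp.exp(t)
residual = sp.simplify(sp.diff(y1, t) - rhs)
print(residual)   # 0
```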

A very similar computation gives us that
\[
y_2(t) = \tfrac{1}{5}e^{-2t} + \tfrac{1}{10}e^{t} + c_2 e^{-3t},
\]
and so our general solution is given by
\[
x(t) = Ty
= \begin{pmatrix} 1 & 1 \\ 1 & -4 \end{pmatrix}
\begin{pmatrix} -\tfrac{1}{5}e^{-2t} + \tfrac{2}{5}e^{t} + c_1 e^{2t} \\ \tfrac{1}{5}e^{-2t} + \tfrac{1}{10}e^{t} + c_2 e^{-3t} \end{pmatrix}
= \begin{pmatrix} \tfrac{1}{2}e^{t} + c_1 e^{2t} + c_2 e^{-3t} \\ -e^{-2t} + c_1 e^{2t} - 4c_2 e^{-3t} \end{pmatrix}.
\]
We express our answer in the more illuminating form
\[
x(t) = \begin{pmatrix} \tfrac{1}{2}e^{t} \\ -e^{-2t} \end{pmatrix}
+ c_1 e^{2t} \begin{pmatrix} 1 \\ 1 \end{pmatrix}
+ c_2 e^{-3t} \begin{pmatrix} 1 \\ -4 \end{pmatrix}.
\]

7.9.7. The characteristic polynomial of the coefficient matrix $P$ is $\lambda^2 - 2\lambda - 3$, and so the eigenvalues of $P$ are $\lambda = 3$ and $\lambda = -1$. We easily find associated eigenvectors:
\[
v^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}
\quad\text{and}\quad
v^{(2)} = \begin{pmatrix} 1 \\ -2 \end{pmatrix}.
\]
We thus compute the diagonalizing matrix $T$ and its inverse:
\[
T = \begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix},
\qquad
T^{-1} = \begin{pmatrix} 1/2 & 1/4 \\ 1/2 & -1/4 \end{pmatrix}.
\]
Letting $x = Ty$, we thus have
\[
y' = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix} y + h,
\quad\text{where}\quad
h = T^{-1} g = \begin{pmatrix} \tfrac{3}{4}e^{t} \\ \tfrac{5}{4}e^{t} \end{pmatrix}.
\]
Solving for $y_1$ and $y_2$ as in the previous problem, we find that
\[
y = \begin{pmatrix} -\tfrac{3}{8}e^{t} + c_1 e^{3t} \\ \tfrac{5}{8}e^{t} + c_2 e^{-t} \end{pmatrix}
\]

and hence
\[
x(t) = Ty
= \begin{pmatrix} 1 & 1 \\ 2 & -2 \end{pmatrix}
\begin{pmatrix} -\tfrac{3}{8}e^{t} + c_1 e^{3t} \\ \tfrac{5}{8}e^{t} + c_2 e^{-t} \end{pmatrix}
= \begin{pmatrix} \tfrac{1}{4}e^{t} + c_1 e^{3t} + c_2 e^{-t} \\ -2e^{t} + 2c_1 e^{3t} - 2c_2 e^{-t} \end{pmatrix}.
\]
We again record our final answer in the form
\[
x(t) = \begin{pmatrix} \tfrac{1}{4}e^{t} \\ -2e^{t} \end{pmatrix}
+ c_1 e^{3t} \begin{pmatrix} 1 \\ 2 \end{pmatrix}
+ c_2 e^{-t} \begin{pmatrix} 1 \\ -2 \end{pmatrix}.
\]

7.9.10 (BONUS!). This problem would be straightforward using the method of undetermined coefficients, but it is a bit tricky to do using a complex diagonalizing matrix. One easily checks that the eigenvalues of the coefficient matrix are $\lambda = i$ and $\lambda = -i$. To find the corresponding (complex) eigenvector for $\lambda = i$, we must solve
\[
\begin{pmatrix} 2 - i & -5 \\ 1 & -2 - i \end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\quad\Longrightarrow\quad
v = \begin{pmatrix} 5 \\ 2 - i \end{pmatrix} \text{ (up to scalar mult.).}
\]
Now we learned that the eigenvector corresponding to the conjugate eigenvalue is simply the conjugate of this, and so a diagonalizing matrix for the coefficient matrix is
\[
T = \begin{pmatrix} 5 & 5 \\ 2 - i & 2 + i \end{pmatrix},
\]
whose inverse is given by
\[
T^{-1} = \frac{1}{10i} \begin{pmatrix} 2 + i & -5 \\ -2 + i & 5 \end{pmatrix}
= \begin{pmatrix} \tfrac{1}{10} - \tfrac{1}{5}i & \tfrac{1}{2}i \\ \tfrac{1}{10} + \tfrac{1}{5}i & -\tfrac{1}{2}i \end{pmatrix}.
\]
Hence, defining $h$ as in the previous problems, we have
\[
h = T^{-1} g
= \begin{pmatrix} \tfrac{1}{10} - \tfrac{1}{5}i & \tfrac{1}{2}i \\ \tfrac{1}{10} + \tfrac{1}{5}i & -\tfrac{1}{2}i \end{pmatrix}
\begin{pmatrix} 0 \\ \cos t \end{pmatrix}
= \begin{pmatrix} \tfrac{1}{2}i \cos t \\ -\tfrac{1}{2}i \cos t \end{pmatrix}.
\]
At this point, we recall the following two important identities, both of which follow directly from Euler's formula:
\[
\cos t = \tfrac{1}{2}\left(e^{it} + e^{-it}\right), \tag{3}
\]
\[
\sin t = -\tfrac{1}{2}i\left(e^{it} - e^{-it}\right). \tag{4}
\]
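Before pushing through the complex algebra, it is worth confirming the eigendata numerically; the block below assumes the coefficient matrix [[2, -5], [1, -2]] and the diagonalizing matrix $T$ built above from $v = (5, 2-i)$:

```python
import numpy as np

# Coefficient matrix of the bonus problem, as read off the displays above
A = np.array([[2.0, -5.0],
              [1.0, -2.0]])

# Its eigenvalues should be +i and -i
eigs = np.linalg.eigvals(A)
print(np.allclose(sorted(eigs.imag), [-1.0, 1.0]))   # True
print(np.allclose(eigs.real, 0.0))                   # True

# T is built from the eigenvector (5, 2-i) and its conjugate,
# so T^{-1} A T should be diag(i, -i)
T = np.array([[5.0, 5.0],
              [2.0 - 1.0j, 2.0 + 1.0j]])
D = np.linalg.inv(T) @ A @ T
print(np.allclose(D, np.diag([1.0j, -1.0j])))        # True
```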

Now setting $x = Ty$ as usual (observe that $y$ could thus be complex), we have that
\[
y' = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} y
+ \begin{pmatrix} \tfrac{1}{2}i \cos t \\ -\tfrac{1}{2}i \cos t \end{pmatrix}.
\]
Let's solve the first component equation using (3):
\[
y_1' = iy_1 + \tfrac{1}{4}i e^{it} + \tfrac{1}{4}i e^{-it}
\quad\Longrightarrow\quad
\frac{d}{dt}\left(e^{-it} y_1\right) = \tfrac{1}{4}i + \tfrac{1}{4}i e^{-2it}.
\]
Integrating yields
\[
e^{-it} y_1 = \tfrac{1}{4}it - \tfrac{1}{8}e^{-2it} + c_1.
\]
For the purposes of this problem, we will assume all arbitrary constants are zero because we are only interested in finding a particular solution (we learned how to find the general solution in Section 7.6). We thus conclude that
\[
y_1(t) = \tfrac{1}{4}it e^{it} - \tfrac{1}{8}e^{-it}.
\]
Rather than going through a nearly identical computation to compute $y_2$, observe by conjugating the differential equation for $y_2$ that $y_2 := \overline{y_1}$ is a solution of the second component equation. We thus conclude (with zero arbitrary constants) that
\[
y_2(t) = -\tfrac{1}{4}it e^{-it} - \tfrac{1}{8}e^{it}.
\]
We thus conclude that
\[
x(t) = Ty
= \begin{pmatrix} 5 & 5 \\ 2 - i & 2 + i \end{pmatrix}
\begin{pmatrix} \tfrac{1}{4}it e^{it} - \tfrac{1}{8}e^{-it} \\ -\tfrac{1}{4}it e^{-it} - \tfrac{1}{8}e^{it} \end{pmatrix}.
\]
Multiplying this out, we see that
\[
x(t) = \begin{pmatrix}
\tfrac{5}{4} t\, i\left(e^{it} - e^{-it}\right) - \tfrac{5}{8}\left(e^{it} + e^{-it}\right) \\[4pt]
\tfrac{1}{2} t\, i\left(e^{it} - e^{-it}\right) + \tfrac{1}{4} t\left(e^{it} + e^{-it}\right) - \tfrac{1}{4}\left(e^{it} + e^{-it}\right) - \tfrac{1}{8} i\left(e^{it} - e^{-it}\right)
\end{pmatrix}.
\]
Though this looks messy, I've suggestively grouped terms so that we can apply the identities (3) and (4):
\[
x(t) = \begin{pmatrix}
-\tfrac{5}{2} t \sin t - \tfrac{5}{4}\cos t \\[4pt]
-t \sin t + \tfrac{1}{2} t \cos t - \tfrac{1}{2}\cos t + \tfrac{1}{4}\sin t
\end{pmatrix}.
\]
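Since this computation involved a fair amount of complex bookkeeping, here is a sympy check that the final vector really satisfies $x' = Ax + g$, with $A = [[2, -5], [1, -2]]$ and $g = (0, \cos t)$ as reconstructed from the displays above:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, -5], [1, -2]])
g = sp.Matrix([0, sp.cos(t)])

# Particular solution found above
x = sp.Matrix([
    -sp.Rational(5, 2)*t*sp.sin(t) - sp.Rational(5, 4)*sp.cos(t),
    -t*sp.sin(t) + sp.Rational(1, 2)*t*sp.cos(t)
        - sp.Rational(1, 2)*sp.cos(t) + sp.Rational(1, 4)*sp.sin(t),
])

# The residual x' - (A x + g) should simplify to the zero vector
residual = (x.diff(t) - (A*x + g)).applyfunc(sp.simplify)
print(residual.T)   # Matrix([[0, 0]])
```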

Note: As has happened before, this particular solution is not the precise particular solution given in the back of your textbook. However, recall that if one adds any solution of the homogeneous problem $x' = Px$ to a particular solution of the nonhomogeneous equation, then we obtain a new particular solution. So what's happened here is that my particular solution and the book's differ by a solution of the homogeneous problem. Seeing that the book expresses this homogeneous part of the solution as
\[
x_{\text{hom}}(t)
= c_1 \begin{pmatrix} 5\cos t \\ 2\cos t + \sin t \end{pmatrix}
+ c_2 \begin{pmatrix} 5\sin t \\ -\cos t + 2\sin t \end{pmatrix},
\]
we observe that if we choose $c_1 = -\tfrac{1}{4}$ and $c_2 = 0$ and add the resulting function to the book's particular solution, you will obtain mine.

7.9.16. Since $\varphi$ is a general solution of the nonhomogeneous problem, it satisfies $\varphi' = P\varphi + g$. Now $v$, being a particular solution of the same nonhomogeneous problem, satisfies $v' = Pv + g$. Subtracting the second equation from the first gives us $(\varphi - v)' = P(\varphi - v)$, and so clearly $u := \varphi - v$ is a solution of the homogeneous equation $x' = Px$. Then since $\varphi = (\varphi - v) + v = u + v$, the assertion is proved.
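To close the loop on the note about differing particular solutions: each homogeneous solution quoted from the textbook can be verified directly to satisfy $x' = Px$, again taking $P = [[2, -5], [1, -2]]$ as reconstructed above:

```python
import sympy as sp

t = sp.symbols('t')
P = sp.Matrix([[2, -5], [1, -2]])

# The two real homogeneous solutions quoted from the textbook's answer
x1 = sp.Matrix([5*sp.cos(t), 2*sp.cos(t) + sp.sin(t)])
x2 = sp.Matrix([5*sp.sin(t), -sp.cos(t) + 2*sp.sin(t)])

# Each should satisfy x' = P x, so adding any linear combination of them
# to one particular solution produces another particular solution
r1 = (x1.diff(t) - P*x1).applyfunc(sp.simplify)
r2 = (x2.diff(t) - P*x2).applyfunc(sp.simplify)
print(r1.T, r2.T)   # Matrix([[0, 0]]) Matrix([[0, 0]])
```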