Differential Equations (part 3): Systems of First-Order Differential Equations (by Evan Dummit, 2016, v. 2.00)

Contents
6 Systems of First-Order Linear Differential Equations
  6.1 General Theory of (First-Order) Linear Systems
  6.2 Eigenvalue Method (Nondefective Coefficient Matrices)

6 Systems of First-Order Linear Differential Equations

In many (perhaps most) applications of differential equations, we have not one but several quantities which change over time and interact with one another. Examples include the populations of the various species in an ecosystem, the concentrations of molecules involved in a chemical reaction, the motion of objects in a physical system, and the availability and production of items (goods, labor, materials) in economic processes.

In this chapter, we will outline the basic theory of systems of differential equations. As with the other differential equations we have studied, we cannot solve arbitrary systems in full generality: in fact, it is very difficult even to solve individual nonlinear differential equations, let alone a system of nonlinear equations. We will therefore restrict our attention to systems of linear differential equations: as with our study of higher-order linear differential equations, there is an underlying vector-space structure to the solutions which we will explain. We will discuss how to solve many examples of homogeneous systems having constant coefficients.

6.1 General Theory of (First-Order) Linear Systems

Before we start our discussion of systems of linear differential equations, we first observe that we can reduce any system of linear differential equations to a system of first-order linear differential equations (in more variables): if we define new variables equal to the higher-order derivatives of our old variables, then we can rewrite the old system as a system of first-order equations.

Example: Convert the single 3rd-order equation y''' + y' = 0 to a system of first-order equations.
- If we define new variables z = y' and w = y'' = z', then the original equation tells us that y''' = -y', so w' = y''' = -y' = -z. Thus, this single 3rd-order equation is equivalent to the first-order system y' = z, z' = w, w' = -z.

Example: Convert the system y_1'' + y_1 = 1 and y_2'' + y_1' + y_2' sin(x) = e^x to a system of first-order equations.
- If we define new variables z_1 = y_1' and z_2 = y_2', then z_1' = y_1'' = -y_1 + 1 and z_2' = y_2'' = e^x - y_1' - y_2' sin(x) = e^x - z_1 - z_2 sin(x). So this system is equivalent to the first-order system y_1' = z_1, y_2' = z_2, z_1' = -y_1 + 1, z_2' = e^x - z_1 - z_2 sin(x).

Thus, whatever we can show about solutions of systems of first-order linear equations will carry over to arbitrary systems of linear differential equations. So we will talk only about systems of first-order linear differential equations from now on.

Definition: The standard form of a system of first-order linear differential equations with unknown functions y_1, y_2, ..., y_n is

  y_1' = a_{1,1}(x) y_1 + a_{1,2}(x) y_2 + ... + a_{1,n}(x) y_n + q_1(x)
  y_2' = a_{2,1}(x) y_1 + a_{2,2}(x) y_2 + ... + a_{2,n}(x) y_n + q_2(x)
  ...
  y_n' = a_{n,1}(x) y_1 + a_{n,2}(x) y_2 + ... + a_{n,n}(x) y_n + q_n(x)

for some functions a_{i,j}(x) and q_i(x), for 1 ≤ i, j ≤ n.
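The reduction just described is also how such equations are handled numerically: once rewritten as a first-order system, a standard vector-valued integrator applies. Here is a minimal sketch (my code, not part of the notes) integrating the first example, y''' + y' = 0, as the system y' = z, z' = w, w' = -z with a hand-rolled classical Runge-Kutta step; for the initial conditions y(0) = 0, y'(0) = 1, y''(0) = 0, the exact solution is y = sin(x).

```python
# Integrate y''' + y' = 0 by rewriting it as the first-order system
# y' = z, z' = w, w' = -z (the reduction from the example above).
import math

def f(state):
    y, z, w = state
    return (z, w, -z)  # (y', z', w')

def rk4(state, h, steps):
    """Classical fourth-order Runge-Kutta for an autonomous first-order system."""
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + h / 2 * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + h / 2 * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + h * k for s, k in zip(state, k3)))
        state = tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

# start at (y, y', y'') = (0, 1, 0) and integrate to x = 1
y1, z1, w1 = rk4((0.0, 1.0, 0.0), h=0.001, steps=1000)
print(y1 - math.sin(1.0))  # difference is negligible
```

At this step size the computed value agrees with sin(1) far beyond the accuracy one would need in practice, as expected for a fourth-order method.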

We can write this system more compactly using matrices: if A is the n x n matrix whose (i, j) entry is a_{i,j}(x), q = [q_1(x); ... ; q_n(x)], and y = [y_1(x); ... ; y_n(x)] so that y' = [y_1'(x); ... ; y_n'(x)], we can write the system more compactly as y' = Ay + q.

We say that the system is homogeneous if q = 0, and it is nonhomogeneous otherwise. Most of the time we will be dealing with systems with constant coefficients, in which the entries of A are constant functions.

An initial condition for this system consists of n pieces of information: y_1(x_0) = b_1, y_2(x_0) = b_2, ..., y_n(x_0) = b_n, where x_0 is the starting value for x and the b_i are constants. Equivalently, it is a condition of the form y(x_0) = b for some vector b.

We also have a version of the Wronskian in this setting, for checking whether function vectors are linearly independent:

Definition: Given n vectors v_1 = [y_{1,1}(x); ... ; y_{n,1}(x)], ..., v_n = [y_{1,n}(x); ... ; y_{n,n}(x)] of length n with functions as entries, their Wronskian is defined as the determinant

  W = det [[y_{1,1}, y_{1,2}, ..., y_{1,n}], [y_{2,1}, y_{2,2}, ..., y_{2,n}], ..., [y_{n,1}, y_{n,2}, ..., y_{n,n}]].

By our results on row operations and determinants, we immediately see that n function vectors v_1, ..., v_n of length n are linearly independent if their Wronskian is not the zero function.

Many of the theorems about general systems of first-order linear equations are very similar to the theorems about nth-order linear equations.

Theorem (Existence-Uniqueness): For a system of first-order linear differential equations, if the coefficient functions a_{i,j}(x) and nonhomogeneous terms q_i(x) are each continuous in an interval around x_0 for all 1 ≤ i, j ≤ n, then the system

  y_1' = a_{1,1}(x) y_1 + a_{1,2}(x) y_2 + ... + a_{1,n}(x) y_n + q_1(x)
  y_2' = a_{2,1}(x) y_1 + a_{2,2}(x) y_2 + ... + a_{2,n}(x) y_n + q_2(x)
  ...
  y_n' = a_{n,1}(x) y_1 + a_{n,2}(x) y_2 + ... + a_{n,n}(x) y_n + q_n(x)

with initial conditions y_1(x_0) = b_1, ..., y_n(x_0) = b_n has a unique solution (y_1, y_2, ..., y_n) on that interval.

This theorem is not trivial to prove, and we will omit the proof.

Example: The system y' = e^x y + sin(x) z, z' = 3x^2 y has a unique solution for every initial condition y(x_0) = b_1, z(x_0) = b_2.
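As a quick numerical illustration (example mine, not from the notes), a Wronskian can be evaluated at sample points; since linear independence already follows once the Wronskian is nonzero somewhere, a single nonzero value settles the question. Here the vectors v_1(x) = (cos x, -sin x) and v_2(x) = (sin x, cos x) have Wronskian identically equal to 1:

```python
# Wronskian of two function vectors of length 2, evaluated at a point x:
# the determinant of the 2x2 matrix whose columns are v1(x) and v2(x).
import math

def wronskian_2x2(v1, v2, x):
    a, c = v1(x)  # first column
    b, d = v2(x)  # second column
    return a * d - b * c

v1 = lambda x: (math.cos(x), -math.sin(x))
v2 = lambda x: (math.sin(x), math.cos(x))

values = [wronskian_2x2(v1, v2, x) for x in (0.0, 0.5, 2.0)]
print(values)  # every value equals cos^2 x + sin^2 x = 1
```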
Proposition: Suppose y_par is one solution to the matrix system y' = Ay + q. Then the general solution y_gen to this equation may be written as y_gen = y_par + y_hom, where y_hom is a solution to the homogeneous system y' = Ay.

Proof: Suppose that y_1 and y_2 are solutions to the general equation. Then (y_1 - y_2)' = y_1' - y_2' = (Ay_1 + q) - (Ay_2 + q) = A(y_1 - y_2), meaning that their difference y_1 - y_2 is a solution to the homogeneous equation.
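A small numerical check of this proposition (my example, not from the notes): take the constant-coefficient system y' = Ay + q with A = [[0, 1], [-1, 0]] and q = (1, 0). The constant vector y_par = (0, -1) is a particular solution, y_hom = (cos x, -sin x) solves the homogeneous system, and their sum y(x) = (cos x, -1 - sin x) should then solve the nonhomogeneous system:

```python
# Check that y_par + y_hom solves y' = Ay + q for
# A = [[0, 1], [-1, 0]], q = (1, 0).
import math

def y(x):
    # y_par = (0, -1) plus y_hom = (cos x, -sin x)
    return (math.cos(x), -1.0 - math.sin(x))

def y_prime(x):
    # derivative of y, computed by hand
    return (-math.sin(x), -math.cos(x))

def rhs(x):
    # A y(x) + q, written out entrywise
    y1, y2 = y(x)
    return (y2 + 1.0, -y1)

residuals = [max(abs(p - r) for p, r in zip(y_prime(x), rhs(x)))
             for x in (0.0, 1.0, 2.5)]
print(residuals)  # zero up to rounding
```

The residuals vanish up to rounding, matching the decomposition y_gen = y_par + y_hom.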

Theorem (Homogeneous Systems): If the coefficient functions a_{i,j}(x) are continuous on an interval I for each 1 ≤ i, j ≤ n, then the set of solutions y to the homogeneous system y' = Ay on I is an n-dimensional vector space.

Proof: First, the solution space is a subspace, since it satisfies the subspace criterion:
- S1: The zero function is a solution.
- S2: If y_1 and y_2 are solutions, then (y_1 + y_2)' = y_1' + y_2' = Ay_1 + Ay_2 = A(y_1 + y_2), so y_1 + y_2 is also a solution.
- S3: If α is a scalar and y is a solution, then (αy)' = αy' = α(Ay) = A(αy), so αy is also a solution.

Now we need to show that the solution space is n-dimensional. We will do this by finding a basis. Choose any x_0 in I. By the existence part of the existence-uniqueness theorem, for each 1 ≤ i ≤ n there exists a solution z_i such that z_i(x_0) is the ith unit coordinate vector of R^n: that is, z_{i,i}(x_0) = 1 and z_{i,j}(x_0) = 0 for all j ≠ i.

The functions z_1, z_2, ..., z_n are linearly independent because their Wronskian matrix evaluated at x = x_0 is the identity matrix. (In particular, the Wronskian is not the zero function.)

Now suppose y is any solution to the homogeneous equation, with y(x_0) = c, say with entries c_1, c_2, ..., c_n. Then the function z = c_1 z_1 + c_2 z_2 + ... + c_n z_n also has z(x_0) = c and is a solution to the homogeneous equation. But by the uniqueness part of the existence-uniqueness theorem, there is only one such function, so we must have y(x) = z(x) for all x: therefore y = c_1 z_1 + c_2 z_2 + ... + c_n z_n, meaning that y is in the span of z_1, z_2, ..., z_n.

This is true for any solution function y, so z_1, z_2, ..., z_n span the solution space. Since they are also linearly independent, they form a basis of the solution space, and because there are n of them, we see that the solution space is n-dimensional.

If we combine the above results, we can write down a fairly nice form for the solutions of a general system of first-order differential equations:

Corollary: The general solution to the nonhomogeneous equation y' = Ay + q has the form y = y_par + C_1 z_1 + C_2 z_2 + ... + C_n z_n, where y_par is any one particular solution of the nonhomogeneous equation, z_1, ..., z_n are a basis for the solutions to the homogeneous equation, and C_1, ..., C_n are arbitrary constants.

This corollary says that, in order to find the general solution, we only need to find one function which satisfies the nonhomogeneous equation, and then solve the homogeneous equation.

6.2 Eigenvalue Method (Nondefective Coefficient Matrices)

We now restrict our discussion to homogeneous first-order systems with constant coefficients: those of the form

  y_1' = a_{1,1} y_1 + a_{1,2} y_2 + ... + a_{1,n} y_n
  y_2' = a_{2,1} y_1 + a_{2,2} y_2 + ... + a_{2,n} y_n
  ...
  y_n' = a_{n,1} y_1 + a_{n,2} y_2 + ... + a_{n,n} y_n

which we will write in matrix form as y' = Ay, with y = [y_1; ... ; y_n] and A the n x n matrix whose (i, j) entry is the constant a_{i,j}.
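In the 2 x 2 examples below, the eigenvalues come from the characteristic polynomial det(tI - A) = t^2 - (a + d)t + (ad - bc) for A = [[a, b], [c, d]]. A small helper (name and code mine, just for double-checking hand computations) applies this formula together with the quadratic formula; using cmath.sqrt keeps it correct when the eigenvalues are complex:

```python
# Eigenvalues of a 2x2 matrix A = [[a, b], [c, d]] via the characteristic
# polynomial t^2 - (a + d) t + (ad - bc) and the quadratic formula.
import cmath

def eigenvalues_2x2(a, b, c, d):
    trace, det = a + d, a * d - b * c
    disc = cmath.sqrt(trace * trace - 4 * det)  # complex sqrt handles both cases
    return ((trace + disc) / 2, (trace - disc) / 2)

print(eigenvalues_2x2(2, 1, 1, 2))   # a real pair: 3 and 1
print(eigenvalues_2x2(0, 1, -1, 0))  # a complex-conjugate pair: i and -i
```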

Our starting point for solving such systems is to observe that if v is an eigenvector of A with eigenvalue λ, then y = e^{λx} v is a solution to y' = Ay. This follows simply by differentiating y = e^{λx} v with respect to x: we see y' = λ e^{λx} v = λy = Ay.

In the event that A has n linearly independent eigenvectors, we will therefore obtain n solutions to the differential equation. If these solutions are linearly independent, then since we know the solution space is n-dimensional, we would be able to conclude that our solutions are a basis for the solution space.

Theorem (Eigenvalue Method): If A has n linearly independent eigenvectors v_1, v_2, ..., v_n with associated eigenvalues λ_1, λ_2, ..., λ_n, then the general solution to the matrix differential system y' = Ay is given by y = C_1 e^{λ_1 x} v_1 + C_2 e^{λ_2 x} v_2 + ... + C_n e^{λ_n x} v_n, where C_1, ..., C_n are arbitrary constants.

Recall that if λ is a root of the characteristic equation k times, we say that λ has multiplicity k, and if the eigenspace for λ has dimension less than k, we say that λ is defective. The theorem allows us to solve the matrix differential system for any nondefective matrix.

Proof: By the observation above, each of e^{λ_1 x} v_1, e^{λ_2 x} v_2, ..., e^{λ_n x} v_n is a solution to y' = Ay. We claim that they are a basis for the solution space.

To show this, we know by our earlier results that the solution space of the system y' = Ay is n-dimensional: thus, if we can show that these solutions are linearly independent, we would be able to conclude that they are a basis for the solution space.

We can compute the Wronskian of these solutions: after factoring out the exponentials from each column, we obtain W = e^{(λ_1 + ... + λ_n)x} det(M), where M is the matrix whose columns are v_1, ..., v_n. The exponential is always nonzero, and the vectors v_1, v_2, ..., v_n are (by hypothesis) linearly independent, meaning that det(M) is nonzero. Thus, W is nonzero, so e^{λ_1 x} v_1, e^{λ_2 x} v_2, ..., e^{λ_n x} v_n are linearly independent.

Since these solutions are therefore a basis for the solution space, we immediately conclude that the general solution to y' = Ay has the form y = C_1 e^{λ_1 x} v_1 + C_2 e^{λ_2 x} v_2 + ... + C_n e^{λ_n x} v_n, for arbitrary constants C_1, ..., C_n.

The theorem allows us to solve all homogeneous systems of linear differential equations whose coefficient matrix A has n linearly independent eigenvectors. (Such matrices are called nondefective matrices.)

Example: Find all functions y_1 and y_2 such that y_1' = y_1 - 3y_2 and y_2' = y_1 + 5y_2.
- The coefficient matrix is A = [[1, -3], [1, 5]], whose characteristic polynomial is det(tI - A) = det [[t-1, 3], [-1, t-5]] = (t-1)(t-5) + 3 = t^2 - 6t + 8 = (t-2)(t-4). Thus, the eigenvalues of A are λ = 2, 4.
- For λ = 2, we want to find the nullspace of 2I - A = [[1, 3], [-1, -3]]. By row-reducing we find the row-echelon form is [[1, 3], [0, 0]], so the 2-eigenspace is 1-dimensional and is spanned by [-3; 1].

- For λ = 4, we want to find the nullspace of 4I - A = [[3, 3], [-1, -1]]. By row-reducing we find the row-echelon form is [[1, 1], [0, 0]], so the 4-eigenspace is 1-dimensional and is spanned by [-1; 1].
- Thus, the general solution to the system is y = C_1 [-3; 1] e^{2x} + C_2 [-1; 1] e^{4x}.

Example: Find all functions y_1, y_2, y_3 such that

  y_1' = y_1 - 3y_2 + 7y_3
  y_2' = -y_1 - y_2 + y_3
  y_3' = -y_1 + y_2 - 3y_3

- The coefficient matrix is A = [[1, -3, 7], [-1, -1, 1], [-1, 1, -3]], whose characteristic polynomial is det(tI - A) = det [[t-1, 3, -7], [1, t+1, -1], [1, -1, t+3]] = t^3 + 3t^2 + 2t = t(t+1)(t+2). Thus, the eigenvalues of A are λ = 0, -1, -2.
- For λ = 0, we want to find the nullspace of -A = [[-1, 3, -7], [1, 1, -1], [1, -1, 3]]. By row-reducing we find the row-echelon form is [[1, 0, 1], [0, 1, -2], [0, 0, 0]], so the 0-eigenspace is 1-dimensional and is spanned by [-1; 2; 1].
- For λ = -1, we want to find the nullspace of -I - A = [[-2, 3, -7], [1, 0, -1], [1, -1, 2]]. By row-reducing we find the row-echelon form is [[1, 0, -1], [0, 1, -3], [0, 0, 0]], so the (-1)-eigenspace is 1-dimensional and is spanned by [1; 3; 1].
- For λ = -2, we want to find the nullspace of -2I - A = [[-3, 3, -7], [1, -1, -1], [1, -1, 1]]. By row-reducing we find the row-echelon form is [[1, -1, 0], [0, 0, 1], [0, 0, 0]], so the (-2)-eigenspace is 1-dimensional and is spanned by [1; 1; 0].
- Thus, the general solution to the system is [y_1; y_2; y_3] = C_1 [-1; 2; 1] + C_2 [1; 3; 1] e^{-x} + C_3 [1; 1; 0] e^{-2x}.

In the event that the coefficient matrix has complex-conjugate eigenvalues, we generally want to rewrite the resulting solutions as real-valued functions.
- Suppose A has a complex eigenvalue λ = a + bi with associated eigenvector v = w_1 + i w_2, and write λ* and v* for their complex conjugates. Then λ* = a - bi has eigenvector v* = w_1 - i w_2, so we obtain the two solutions e^{λx} v and e^{λ*x} v* to the system y' = Ay.
- Now we observe that (1/2)(e^{λx} v + e^{λ*x} v*) = e^{ax} (w_1 cos(bx) - w_2 sin(bx)) and (1/(2i))(e^{λx} v - e^{λ*x} v*) = e^{ax} (w_1 sin(bx) + w_2 cos(bx)), and the latter solutions are real-valued.
- Thus, to obtain real-valued solutions, we can replace the two complex-valued solutions e^{λx} v and e^{λ*x} v* with the two real-valued solutions e^{ax} (w_1 cos(bx) - w_2 sin(bx)) and e^{ax} (w_1 sin(bx) + w_2 cos(bx)), which are simply the real part and the imaginary part of e^{λx} v, respectively.

Example: Find all real-valued functions y_1 and y_2 such that y_1' = y_2 and y_2' = -y_1.

- The coefficient matrix is A = [[0, 1], [-1, 0]], whose characteristic polynomial is det(tI - A) = det [[t, -1], [1, t]] = t^2 + 1. Thus, the eigenvalues of A are λ = ±i.
- For λ = i, we want to find the nullspace of iI - A = [[i, -1], [1, i]]. By row-reducing we find the row-echelon form is [[1, i], [0, 0]], so the i-eigenspace is 1-dimensional and is spanned by [1; i].
- For λ = -i, we can take the complex conjugate of the eigenvector for λ = i to see that [1; -i] is an eigenvector.
- The general solution, as a complex-valued function, is y = C_1 [1; i] e^{ix} + C_2 [1; -i] e^{-ix}.
- We want real-valued solutions, so we must replace the complex-valued solutions with real-valued ones. We have λ = i and v = [1; i], so that w_1 = [1; 0] and w_2 = [0; 1]. Thus, the equivalent real-valued solutions are w_1 cos(x) - w_2 sin(x) = [cos(x); -sin(x)] and w_1 sin(x) + w_2 cos(x) = [sin(x); cos(x)].
- The system's solution is then [y_1; y_2] = C_1 [cos(x); -sin(x)] + C_2 [sin(x); cos(x)].

Example: Find all real-valued functions y_1 and y_2 such that y_1' = 3y_1 - 2y_2 and y_2' = y_1 + y_2.
- The coefficient matrix is A = [[3, -2], [1, 1]], whose characteristic polynomial is det(tI - A) = det [[t-3, 2], [-1, t-1]] = t^2 - 4t + 5. Thus, the eigenvalues of A are λ = 2 ± i by the quadratic formula.
- For λ = 2 + i, we want to find the nullspace of (2+i)I - A = [[-1+i, 2], [-1, 1+i]]. By row-reducing we find the row-echelon form is [[1, -1-i], [0, 0]], so the (2+i)-eigenspace is 1-dimensional and is spanned by [1+i; 1].
- For λ = 2 - i, we can take the complex conjugate of the eigenvector for λ = 2 + i to see that [1-i; 1] is an eigenvector.
- The general solution, as a complex-valued function, is y = C_1 [1+i; 1] e^{(2+i)x} + C_2 [1-i; 1] e^{(2-i)x}.
- We want real-valued solutions, so we must replace the complex-valued solutions with real-valued ones. We have λ = 2 + i and v = [1+i; 1], so that w_1 = [1; 1] and w_2 = [1; 0]. Thus, the real-valued solutions are e^{2x} (w_1 cos(x) - w_2 sin(x)) = e^{2x} [cos(x) - sin(x); cos(x)] and e^{2x} (w_1 sin(x) + w_2 cos(x)) = e^{2x} [sin(x) + cos(x); sin(x)].
- The system's solution is then [y_1; y_2] = C_1 e^{2x} [cos(x) - sin(x); cos(x)] + C_2 e^{2x} [sin(x) + cos(x); sin(x)].
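As a spot-check on the real-solution recipe (a sketch of mine, not from the notes), take the 2 x 2 coefficient matrix A = [[3, -2], [1, 1]] with eigenvalues 2 ± i from the example above. The real part of e^{(2+i)x} [1+i; 1] is e^{2x} [cos(x) - sin(x); cos(x)], and it should satisfy y' = Ay; we compare a hand-computed derivative against Ay at a few sample points:

```python
# Verify that the real part of e^{(2+i)x} (1+i, 1), namely
# y(x) = e^{2x} (cos x - sin x, cos x), solves y' = A y for A = [[3, -2], [1, 1]].
import math

def y(x):
    e = math.exp(2 * x)
    return (e * (math.cos(x) - math.sin(x)), e * math.cos(x))

def y_prime(x):
    # product rule applied by hand
    e = math.exp(2 * x)
    return (e * (math.cos(x) - 3 * math.sin(x)),
            e * (2 * math.cos(x) - math.sin(x)))

def rhs(x):
    # A y(x), written out entrywise
    y1, y2 = y(x)
    return (3 * y1 - 2 * y2, y1 + y2)

residuals = [max(abs(p - r) for p, r in zip(y_prime(x), rhs(x)))
             for x in (0.0, 0.7, 1.5)]
print(residuals)  # zero up to rounding
```

The same check with the imaginary part e^{2x} [sin(x) + cos(x); sin(x)] also comes out to zero, confirming both real-valued solutions.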

Example: Find all real-valued functions y_1, y_2, y_3 such that

  y_1' = 3y_1 - 7y_2 - 3y_3
  y_2' = y_1 - 4y_2 - 2y_3
  y_3' = y_1 + 2y_2 + 2y_3

- The coefficient matrix is A = [[3, -7, -3], [1, -4, -2], [1, 2, 2]], whose characteristic polynomial is det(tI - A) = det [[t-3, 7, 3], [-1, t+4, 2], [-1, -2, t-2]] = (t+1)(t^2 - 2t + 2). The eigenvalues of A are λ = -1, 1 ± i by the quadratic formula.
- For λ = -1, we want to find the nullspace of -I - A = [[-4, 7, 3], [-1, 3, 2], [-1, -2, -3]]. By row-reducing we find the row-echelon form is [[1, 0, 1], [0, 1, 1], [0, 0, 0]], so the (-1)-eigenspace is 1-dimensional and is spanned by [1; 1; -1].
- For λ = 1 + i, we want to find the nullspace of (1+i)I - A = [[-2+i, 7, 3], [-1, 5+i, 2], [-1, -2, -1+i]]. By row-reducing we find the row-echelon form is [[1, 0, (1-3i)/5], [0, 1, (2-i)/5], [0, 0, 0]], so the (1+i)-eigenspace is 1-dimensional and is spanned by [-1+3i; -2+i; 5].
- For λ = 1 - i, we can take the complex conjugate of the eigenvector for λ = 1 + i to see that [-1-3i; -2-i; 5] is an eigenvector.
- The general solution, as a complex-valued function, is [y_1; y_2; y_3] = C_1 [1; 1; -1] e^{-x} + C_2 [-1+3i; -2+i; 5] e^{(1+i)x} + C_3 [-1-3i; -2-i; 5] e^{(1-i)x}, but we need to replace the complex-valued solutions with real-valued ones.
- We have λ = 1 + i and v = [-1+3i; -2+i; 5], so that w_1 = [-1; -2; 5] and w_2 = [3; 1; 0]. Thus, the real-valued solutions are e^x (w_1 cos(x) - w_2 sin(x)) = e^x [-cos(x) - 3 sin(x); -2 cos(x) - sin(x); 5 cos(x)] and e^x (w_1 sin(x) + w_2 cos(x)) = e^x [-sin(x) + 3 cos(x); -2 sin(x) + cos(x); 5 sin(x)].
- Then [y_1; y_2; y_3] = C_1 [1; 1; -1] e^{-x} + C_2 e^x [-cos(x) - 3 sin(x); -2 cos(x) - sin(x); 5 cos(x)] + C_3 e^x [-sin(x) + 3 cos(x); -2 sin(x) + cos(x); 5 sin(x)].

Well, you're at the end of my handout. Hope it was helpful.

Copyright notice: This material is copyright Evan Dummit, 2012-2016. You may not reproduce or distribute this material without my express permission.