LS.5 Theory of Linear Systems

1. General linear ODE systems and independent solutions.

We have studied the homogeneous system of ODEs with constant coefficients,

(1)      x′ = Ax,

where A is an n x n matrix of constants (n = 2, 3). We described how to calculate the eigenvalues and corresponding eigenvectors for the matrix A, and how to use them to find n independent solutions to the system (1).

With this concrete experience solving low-order systems with constant coefficients, what can be said in general when the coefficients are not constant, but functions of the independent variable t?

We can still write the linear system in the matrix form (1), but now the matrix entries will be functions of t:

(2)      x′ = a(t)x + b(t)y
         y′ = c(t)x + d(t)y ,

or in matrix form,

         ( x )′     ( a(t)  b(t) ) ( x )
         ( y )   =  ( c(t)  d(t) ) ( y ) ,

or in more abridged notation, valid for n x n linear homogeneous systems,

(3)      x′ = A(t)x.

Note how the matrix becomes a function of t; we call it a matrix-valued function of t, since to each value of t the function (rule) assigns a matrix:

         t0  →  A(t0)  =  ( a(t0)  b(t0) )
                          ( c(t0)  d(t0) ) .

In the rest of this chapter we will often not write the variable t explicitly, but it is always understood that the matrix entries are functions of t.

We will sometimes use n = 2 or 3 in the statements and examples in order to simplify the exposition, but the definitions, results, and the arguments which prove them are essentially the same for higher values of n.

Definition 5.1  Solutions x1(t), ..., xn(t) to (3) are called linearly dependent if there are constants ci, not all of which are 0, such that

(4)      c1x1(t) + ... + cnxn(t) = 0,   for all t.

If there is no such relation, i.e., if

(5)      c1x1(t) + ... + cnxn(t) = 0 for all t   ⟹   all ci = 0,

the solutions are called linearly independent, or simply independent.

The phrase "for all t" is often omitted in practice, as being understood. This can lead to ambiguity; to avoid it, we will use the symbol ≡ 0 for "identically 0", meaning zero for all t; the symbol ≢ 0 means "not identically 0", i.e., there is some t-value for which it is not zero. For example, (4) would be written

         c1x1(t) + ... + cnxn(t) ≡ 0 .

Theorem 5.1  If x1, ..., xn is a linearly independent set of solutions to the n x n system x′ = A(t)x, then the general solution to the system is

(6)      x = c1x1 + ... + cnxn .

Such a linearly independent set is called a fundamental set of solutions.

This theorem is the reason for expending so much effort in LS.2 and LS.3 on finding two independent solutions, when n = 2 and A is a constant matrix. In this chapter, the matrix A is not constant; nevertheless, (6) is still true.

Proof. There are two things to prove:

(a) All vector functions of the form (6) really are solutions to x′ = Ax.

This is the superposition principle for solutions of the system; it is true because the system is linear. The matrix notation makes it easy to prove. We have

         (c1x1 + ... + cnxn)′ = c1x1′ + ... + cnxn′
                              = c1Ax1 + ... + cnAxn ,   since xi′ = Axi
                              = A(c1x1 + ... + cnxn),   by the distributive law (see LS.1).

(b) All solutions to the system are of the form (6).

This is harder to prove, and will be the main result of the next section.

2. The existence and uniqueness theorem for linear systems.

For simplicity, we stick with n = 2, but the results here are true for all n. There are two questions that need answering about the general linear system

(2)      x′ = a(t)x + b(t)y
         y′ = c(t)x + d(t)y .

The first is from the previous section: to show that all solutions are of the form x = c1x1 + c2x2, where the xi form a fundamental set (i.e., neither is a constant multiple of the other). (The fact that we can write down all solutions to a linear system in this way is one of the main reasons why such systems are so important.)

An even more basic question for the system (2) is: how do we know that it has two linearly independent solutions?
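Before turning to the non-constant case, the superposition computation in the proof of Theorem 5.1 can be checked concretely. The sketch below is our own illustration (the system x′ = Ax with A = [[0, 1], [-1, 0]] and its solutions are an assumed example, not taken from the notes); it verifies that a linear combination of two solutions is again a solution, and that the two solutions are independent at t = 0.

```python
import math

# Assumed example system x' = A x with A = [[0, 1], [-1, 0]], whose solutions
# x1(t) = (cos t, -sin t) and x2(t) = (sin t, cos t) form a fundamental set.
A = [[0.0, 1.0], [-1.0, 0.0]]

def mat_vec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

# The two solutions and their exact derivatives.
x1  = lambda t: [math.cos(t), -math.sin(t)]
dx1 = lambda t: [-math.sin(t), -math.cos(t)]
x2  = lambda t: [math.sin(t),  math.cos(t)]
dx2 = lambda t: [math.cos(t), -math.sin(t)]

c1, c2 = 3.0, -2.0
for t in [0.0, 0.7, 2.0]:
    # Superposition: (c1 x1 + c2 x2)' should equal A (c1 x1 + c2 x2).
    lhs = [c1*dx1(t)[i] + c2*dx2(t)[i] for i in range(2)]
    rhs = mat_vec(A, [c1*x1(t)[i] + c2*x2(t)[i] for i in range(2)])
    assert all(abs(lhs[i] - rhs[i]) < 1e-12 for i in range(2))

# x1(0) = (1, 0) and x2(0) = (0, 1) are independent: the determinant with
# these columns is nonzero, so {x1, x2} is a fundamental set.
det0 = x1(0)[0]*x2(0)[1] - x2(0)[0]*x1(0)[1]
assert abs(det0 - 1.0) < 1e-12
```

Here the derivatives are supplied exactly, so the check is algebraic rather than numerical; the same pattern works for any constant matrix once its solutions are known.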
For systems with a constant coefficient matrix A, we showed in the previous chapters how to solve them explicitly to get two independent solutions. But the general non-constant linear system (2) does not have solutions given by explicit formulas or procedures.

The answers to these questions are based on the following theorem.

Theorem 5.2  Existence and uniqueness theorem for linear systems.

If the entries of the square matrix A(t) are continuous on an open interval I containing t0, then the initial value problem

(7)      x′ = A(t)x,   x(t0) = x0

has one and only one solution x(t) on the interval I.

The proof is difficult, and we shall not attempt it. More important is to see how it is used. The three theorems following answer the questions posed, for the 2 x 2 system (2). They are true for n > 2 as well, and the proofs are analogous. In the theorems, we assume the entries of A(t) are continuous on an open interval I; then the conclusions are valid on the interval I. (For example, I could be the whole t-axis.)

Theorem 5.2A  Linear independence theorem.
Let x1(t) and x2(t) be two solutions to (2) on the interval I, such that at some point t0 in I, the vectors x1(t0) and x2(t0) are linearly independent. Then

a) the solutions x1(t) and x2(t) are linearly independent on I, and
b) the vectors x1(t1) and x2(t1) are linearly independent at every point t1 of I.

Proof.
a) By contradiction. If they were dependent on I, one would be a constant multiple of the other, say x2(t) = c1x1(t); then x2(t0) = c1x1(t0), showing them dependent at t0.

b) By contradiction. If there were a point t1 on I where they were dependent, say x2(t1) = c1x1(t1), then x2(t) and c1x1(t) would be solutions to (2) which agreed at t1; hence by the uniqueness statement in Theorem 5.2, x2(t) = c1x1(t) on all of I, showing them linearly dependent on I.

Theorem 5.2B  General solution theorem.
a) The system (2) has two linearly independent solutions.
b) If x1(t) and x2(t) are any two linearly independent solutions, then every solution x can be written in the form (8), for some choice of c1 and c2:

(8)      x = c1x1 + c2x2 .

Proof. Choose a point t = t0 in the interval I.
a) According to Theorem 5.2, there are two solutions x1, x2 to (3), satisfying respectively the initial conditions

(9)      x1(t0) = i ,   x2(t0) = j ,

where i and j are the usual unit vectors in the xy-plane. Since the two solutions are linearly independent when t = t0, they are linearly independent on I, by Theorem 5.2A.

b) Let u(t) be a solution to (2) on I. Since x1 and x2 are independent at t0, using the parallelogram law of addition we can find constants c1′ and c2′ such that

(10)     u(t0) = c1′x1(t0) + c2′x2(t0).

The vector equation (10) shows that the solutions u(t) and c1′x1(t) + c2′x2(t) agree at t0; therefore, by the uniqueness statement in Theorem 5.2, they are equal on all of I, that is,

         u(t) = c1′x1(t) + c2′x2(t)   on I.
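Step b) amounts to solving a 2 x 2 linear system at the single point t0. The sketch below illustrates this with an assumed example (our choice, not from the notes): the system x′ = [[0, 1], [-1, 0]]x, its fundamental set x1 = (cos t, -sin t), x2 = (sin t, cos t), and a solution u whose coefficients we pretend not to know. Cramer's rule recovers c1′ and c2′ from the values at t0, and uniqueness then extends the identity to all t.

```python
import math

# Assumed example system x' = [[0, 1], [-1, 0]] x with fundamental set {x1, x2};
# u = 5 x1 + 2 x2, but we pretend only its values are known.
x1 = lambda t: [math.cos(t), -math.sin(t)]
x2 = lambda t: [math.sin(t),  math.cos(t)]
u  = lambda t: [5*math.cos(t) + 2*math.sin(t),
                -5*math.sin(t) + 2*math.cos(t)]

t0 = 1.3
a, b = x1(t0)            # first column:  x1(t0)
c, d = x2(t0)            # second column: x2(t0)
p, q = u(t0)
det = a*d - b*c          # nonzero, since x1(t0), x2(t0) are independent
c1p = (p*d - c*q) / det  # Cramer's rule for u(t0) = c1' x1(t0) + c2' x2(t0)
c2p = (a*q - p*b) / det

assert abs(c1p - 5.0) < 1e-12 and abs(c2p - 2.0) < 1e-12

# By uniqueness (Theorem 5.2), u(t) = c1' x1(t) + c2' x2(t) for ALL t in I,
# not just at t0 -- check at a few other points.
for t in [0.0, 2.0, -1.0]:
    for i in range(2):
        assert abs(u(t)[i] - (c1p*x1(t)[i] + c2p*x2(t)[i])) < 1e-12
```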

3. The Wronskian

We saw in chapter LS.1 that a standard way of testing whether a set of n n-vectors is linearly independent is to see if the n x n determinant having them as its rows or columns is non-zero. This is also an important method when the n-vectors are solutions to a system; the determinant is given a special name. (Again, we will assume n = 2, but the definitions and results generalize to any n.)

Definition 5.3  Let x1(t) and x2(t) be two 2-vector functions. We define their Wronskian to be the determinant

(11)     W(x1, x2)(t)  =  | x1(t)  x2(t) |
                          | y1(t)  y2(t) |

whose columns are the two vector functions.

The independence of the two vector functions should be connected with their Wronskian not being zero. At least at individual points, the relationship is clear; using the result mentioned above, we can say

(12)     W(x1, x2)(t0) = 0   ⟺   x1(t0) and x2(t0) are dependent.

However, for vector functions the relationship is clear-cut only when x1 and x2 are solutions to a well-behaved ODE system (2). The theorem is:

Theorem 5.3  Wronskian vanishing theorem.
On an interval I where the entries of A(t) are continuous, let x1 and x2 be two solutions to (2), and W(t) their Wronskian (11). Then either

a) W(t) ≡ 0 on I, and x1 and x2 are linearly dependent on I, or
b) W(t) is never 0 on I, and x1 and x2 are linearly independent on I.

Proof. Using (12), there are just two possibilities.

a) x1 and x2 are linearly dependent on I; say x2 = c1x1. In this case they are dependent at each point of I, and W(t) ≡ 0 on I, by (12).

b) x1 and x2 are linearly independent on I, in which case by Theorem 5.2A they are linearly independent at each point of I, and so W(t) is never zero on I, by (12).

Exercises: Section 4E
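The dichotomy of Theorem 5.3 can be seen in a small computation. Below is a sketch using an assumed example (the system x′ = [[0, 1], [-1, 0]]x, our choice for illustration): for an independent pair of solutions the Wronskian is never zero, while for a dependent pair it vanishes identically.

```python
import math

def wronskian(u, v, t):
    # 2x2 determinant with columns u(t), v(t), as in (11).
    return u(t)[0]*v(t)[1] - v(t)[0]*u(t)[1]

# Solutions of the assumed example system x' = [[0, 1], [-1, 0]] x.
x1 = lambda t: [math.cos(t), -math.sin(t)]
x2 = lambda t: [math.sin(t),  math.cos(t)]
x3 = lambda t: [2*math.cos(t), -2*math.sin(t)]   # x3 = 2 x1: dependent pair

for t in [-3.0, 0.0, 0.5, 10.0]:
    # Case b): W(x1, x2)(t) = cos^2 t + sin^2 t = 1, never zero.
    assert abs(wronskian(x1, x2, t) - 1.0) < 1e-12
    # Case a): W(x1, x3)(t) is identically zero.
    assert abs(wronskian(x1, x3, t)) < 1e-12
```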

M.I.T. 18.03 Ordinary Differential Equations
18.03 Notes and Exercises
© Arthur Mattuck and M.I.T. 1988, 1992, 1996, 2003, 2007, 2011