ORDINARY DIFFERENTIAL EQUATIONS

Lecture 11: Fundamental Solution Sets for Linear Homogeneous Second-Order ODEs (Revised 22 March 2009 @ 09:16)

Professor Stephen H Saperstone, Department of Mathematical Sciences, George Mason University, Fairfax, VA 22030, email: sap@gmu.edu

Copyright 2009 by Stephen H Saperstone. All rights reserved.

11.1 EXISTENCE & UNIQUENESS OF SOLUTIONS

The discussion immediately following the heading Initial Conditions in Lecture 10 suggests how to establish existence and uniqueness for IVPs when the underlying ODE is a linear homogeneous second-order ODE with constant coefficients. In particular, the calculations in 10.11 demonstrate that the constants of integration are uniquely determined by the initial conditions. The following theorem is an analog of the FTEU for first-order IVPs. Because of the simple nature of a linear homogeneous second-order ODE with constant coefficients, we are able to provide a strong and conclusive theorem below for the existence and uniqueness of solutions. The proof of the theorem is not hidden in a hyperlink; the ideas developed in the proof are needed later in the development of properties of solutions to linear homogeneous second-order ODEs.

Theorem 11.1 (Existence & Uniqueness - EU for Homogeneous Linear Second-Order IVP) For any values of the initial constants, the IVP has a unique solution whose maximal interval of definition is the entire t-axis.
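
Before turning to the proof, here is a quick computational illustration of Theorem 11.1. The particular IVP below (y'' + 3y' + 2y = 0 with y(0) = 2, y'(0) = -1) is an assumed sample, not one of the lecture's examples; SymPy's dsolve returns exactly one solution, and that solution is defined for every value of t.

    # Illustrative sketch (assumed sample IVP, not from the lecture): a
    # constant-coefficient homogeneous second-order IVP has exactly one
    # solution, defined on the whole t-axis, as Theorem 11.1 asserts.
    import sympy as sp

    t = sp.symbols('t')
    y = sp.Function('y')

    ode = sp.Eq(y(t).diff(t, 2) + 3*y(t).diff(t) + 2*y(t), 0)   # y'' + 3y' + 2y = 0
    sol = sp.dsolve(ode, y(t), ics={y(0): 2, y(t).diff(t).subs(t, 0): -1})
    print(sol)   # the unique solution 3*exp(-t) - exp(-2*t)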

PROOF: According to Theorem 10.9, the general solution to the ODE of Eqn. (11.1) has the form where the functions and are determined by the characteristic roots, from the following table:

Table 11.1 - Component Solutions to Eqn. (11.1)

Then the initial conditions imply a pair of equations for the unknown constants. Since we may regard and as known numbers (remember, and are known from Table 11.1, and the initial time is given), we can rewrite this pair as a system of linear algebraic equations in the two unknowns and. According to Cramer's Rule this system has a unique solution for and if and only if the determinant of the coefficients is nonzero. The value of this determinant depends on the functions and on the initial time. Denote its value by. We verify in each of the three cases that this determinant is never zero.

CASE 1:

The last expression is never zero for any value of the initial time.

CASE 2: Again, the last expression is never zero for any value of the initial time.

CASE 3: The last expression is never zero for any value of the initial time.

Q.E.D.
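
The working part of the proof is a 2-by-2 linear system for the two constants, which Cramer's Rule solves uniquely exactly when the coefficient determinant is nonzero. The sketch below reproduces that step symbolically for the case of distinct real characteristic roots; the notation (roots r1 and r2, component solutions e^{r1 t} and e^{r2 t}, initial data y(t0) = y0 and y'(t0) = v0) is assumed, since the displayed formulas are not reproduced in this transcription.

    # Sketch of the Cramer's-rule step (assumed notation: distinct real roots
    # r1 != r2, component solutions e^{r1 t} and e^{r2 t}).
    import sympy as sp

    t0, r1, r2, y0, v0 = sp.symbols('t0 r1 r2 y0 v0')

    y1, y2 = sp.exp(r1*t0), sp.exp(r2*t0)            # component solutions at t = t0
    y1p, y2p = r1*sp.exp(r1*t0), r2*sp.exp(r2*t0)    # their derivatives at t = t0

    A = sp.Matrix([[y1, y2], [y1p, y2p]])            # coefficient matrix of the linear system
    print(sp.simplify(A.det()))                      # (r2 - r1)*exp((r1 + r2)*t0): nonzero when r1 != r2

    c1, c2 = A.solve(sp.Matrix([y0, v0]))            # the unique constants of the general solution
    print(sp.simplify(c1), sp.simplify(c2))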

11.2 THE WRONSKIAN

The expression defined in Eqn. (11.2) extends to ANY pair of functions (not necessarily solutions to a linear second-order homogeneous ODE) and any time.

Definition 11.2 (Wronskian) Given any two functions and defined for all, the Wronskian of and is given by the formula and is defined for all.

In view of the special solution pairs listed in Table 11.1 of Theorem 11.1, the following table summarizes the calculations made in that proof.

Table 11.2 - The Wronskian for Eqn. (11.2)

Since all solutions of are defined for all, we can replace in the definition of by any time. Thus we have the important definition regarding solutions to the ODE. Although and refer to the particular solutions identified in Table 11.2, the expression for the determinant can be based on ANY two solutions - not necessarily the pair for each case. The following theorem of Abel characterizes the Wronskian.

Theorem 11.3 (Abel - The Wronskian Condition) Suppose and are any two solutions to the ODE. Then the Wronskian
1. satisfies the first-order linear (Abel's) ODE, and
2. is either never zero for any time or zero for all times.

PROOF OF ABEL'S THEOREM

We see from Table 11.2 that for all. Not all pairs of solutions have this property, as shown in the next example.

11.4 We saw in 10.6 that the ODE has the general solution. By first setting and we get the solution. Now set and to get the solution. The Wronskian of the pair is. Thus just because and are both solutions, it does not follow that their Wronskian is nonzero. The problem here is that is a multiple of. End of 11.4

A solution pair to the ODE for which the Wronskian is never zero is special. We will see later that the Wronskian distinguishes amongst pairs of solutions that can serve as a general solution and those that cannot. For now we single out those solution pairs for which the Wronskian is never zero.

Definition 11.5 (FSS) A pair of solutions and to the ODE is called a Fundamental Set of Solutions (FSS) if the Wronskian is never zero on.

Thus each of the pairs listed in Table 11.2 is a FSS. We summarize the preceding discussion in the following FSS Theorem.

Theorem 11.6 (FSS) Every linear homogeneous ODE of the form
has a FSS. Moreover, every solution to Eqn. (11.2) on can be expressed in the form for some choice of constants. The constants are uniquely determined from any set of initial conditions.

11.3 LINEAR INDEPENDENCE OF SOLUTIONS

Suppose is an FSS for Eqn. (11.2). The Wronskian condition for the FSS has an important geometric interpretation. Since the Wronskian is nonzero for all times, the columns of the matrix must be linearly independent for all times; that is, no nonzero linear combination of the columns is the zero vector. Thus the only way for a linear combination of the columns to vanish for all times is for both coefficients to be zero. In particular, with regard to the first component of this vector equation, the only way that a linear combination of the solutions can vanish for all times is for both coefficients to be zero. This property may be better understood when we examine its negation: suppose there are some nonzero coefficients so that the combination vanishes for all times. Then we could express one of the functions as a multiple of the other. For instance, if one coefficient is nonzero, then we can write one solution as a multiple of the other. This would imply that the Wronskian is zero. Since the Wronskian is nonzero for all times, the ONLY WAY that the combination can vanish for all times is for both coefficients to be zero.
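
Before the formal definition of linear independence is stated below, here is a small symbolic check of the Wronskian machinery of 11.2 and 11.3. The sample equation y'' + 3y' + 2y = 0 and its solutions e^{-t} and e^{-2t} are assumptions chosen for illustration; the lecture's own displayed equations are not reproduced in this transcription.

    # Hedged illustration of Definition 11.2 and Theorem 11.3 (Abel), using the
    # assumed sample equation y'' + 3y' + 2y = 0.
    import sympy as sp

    t = sp.symbols('t')
    y1, y2 = sp.exp(-t), sp.exp(-2*t)                 # an FSS for the assumed ODE

    W = sp.wronskian([y1, y2], t)                     # y1*y2' - y1'*y2
    print(sp.simplify(W))                             # -exp(-3*t): never zero

    # Abel's ODE: for y'' + b*y' + c*y = 0 the Wronskian satisfies W' + b*W = 0 (here b = 3)
    print(sp.simplify(W.diff(t) + 3*W))               # 0

    # A dependent pair, as in 11.4: one solution is a multiple of the other
    print(sp.simplify(sp.wronskian([sp.exp(-t), 5*sp.exp(-t)], t)))   # 0 for all t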

Definition 11.7 (Linear Independence of Functions on ) Two functions and defined on are called linearly independent (abbreviated LI) if the ONLY WAY the linear combination can vanish identically is for both coefficients to be zero. If and are not linearly independent, they are called linearly dependent (abbreviated LD).

In view of the Wronskian condition (Abel's Theorem), we can summarize the preceding in the following theorem.

Theorem 11.8 (Linear Independence of Solutions) Suppose and are solutions to the ODE. Then and are linearly independent if and only if the Wronskian is nonzero on.

11.9 Show that the functions and are LI by using the definition of LI.

Solution: Suppose for all. We must show that both coefficients are zero. As the equation holds for all, it must in particular hold when. Thus. Finally, if for all, then must equal zero. Thus and are LI. End of 11.9

11.10 Show that the functions and are LI by using Theorem 11.8.

Solution: Calculate
As is never zero on, the functions and are LI. End of 11.10

11.11 Show that the functions and are LI by using the definition of LI.

Solution: Suppose for all. We must show that both coefficients are zero. As the equation holds for all, it must in particular hold when and when. (We choose these values so that we get a pair of equations for the two coefficients.) Thus and. Together these equations imply and. End of 11.11

Note that the hypothesis of Theorem 11.8 assumes that the functions and are solutions to the ODE. In the case of and, we know that these constitute a FSS for the ODE. (The reader should verify this.) Consequently, we are allowed to use the Wronskian to determine LI of the pair as we did in 11.10. In the case of and, we suspect that this pair isn't a FSS for any ODE of the form Eqn. (11.2). This is because neither function appears in Table 11.2: none of the possibilities for the characteristic roots can give rise to either of these functions. Nevertheless, it is instructive to calculate the Wronskian in this case.

11.12 Calculate where

Solution: To evaluate we must differentiate the piecewise representation of. It will be easiest to do this if we first express
piecewise. Then. Although the Wronskian vanishes on, we cannot conclude that and are LD. That conclusion would be valid if both functions were solutions to an ODE of the form Eqn. (11.2). As this is not the case, Theorem 11.8 is silent regarding the LI and/or LD of and. End of 11.12

The definitions of the Wronskian and of linear independence do not require the two functions to be solutions to the ODE. But Abel's theorem (the Wronskian condition) says that if the two functions ARE solutions, then the Wronskian condition holds on. This leaves open the possibility that two functions can be linearly independent on and yet their Wronskian can vanish, provided that they ARE NOT solutions. This fact is confirmed by the following simple example. Indeed, we conclude that the functions from 11.12 CANNOT be solutions to ANY ODE of the form Eqn. (11.2).

11.13 The functions and are linearly independent on, yet their Wronskian is neither zero for all times nor nonzero for all times.

Solution: To establish linear independence we must show that implies both coefficients are zero. As the equation holds for all, it must in particular hold when and when. (We choose these values so that we get a pair of equations for the two coefficients.) Thus and.
Together these equations imply and. Hence the pair is LI. Now calculate the Wronskian, which is zero at just one point. This one exception is enough to break the rule. Thus and cannot be solutions to an ODE of the form Eqn. (11.2). End of 11.13

Alert 11.14 The hypotheses of Theorem 11.8 and of Abel's theorem require that and be solutions to the ODE. Otherwise the conclusion of the theorems is false. For instance, we just saw in 11.12 that the Wronskian of the functions and is zero for all times. In order that this result not contradict the conclusion of Abel's theorem, it must be that and cannot be solutions of a linear homogeneous second-order ODE. End of Alert 11.14

The definition of linear independence can be extended to any number of functions. In the case of just two functions, we can extend the collinearity property of LD vectors in the plane to obtain a necessary and sufficient condition for LD of two functions.

Theorem 11.15 (Simple Test for LI) Two functions and defined on are linearly independent if and only if their quotient is not a constant function on.

Using this characterization of LI, we readily see that in each of the three cases the pairs of functions that constitute the general solution are LI on.

Table 11.3 - FSS for Eqn. (11.2)

The reader should verify that each of the pairs listed in Table 11.3 is LI on. For example, the pair is LI because the quotient
is certainly nonconstant on.

11.16 Show that the functions and are LI by using Theorem 11.15.

Solution: Calculate the quotient, which is nonconstant on. End of 11.16

11.17 Show that the functions and are LI by using Theorem 11.15.

Solution: Calculate the quotient, which is nonconstant on. End of 11.17
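
The quotient test of Theorem 11.15 is easy to automate. The pairs below are assumed stand-ins patterned after the three cases of Table 11.3 (distinct real roots, a repeated root, complex roots), since the lecture's displayed pairs are not reproduced here; in each case the quotient has a nonzero derivative, so it is nonconstant and the pair is LI.

    # Hedged sketch of the quotient test (Theorem 11.15) on assumed sample pairs
    # patterned after the three cases of Table 11.3.
    import sympy as sp

    t = sp.symbols('t')

    pairs = [(sp.exp(2*t), sp.exp(-t)),                      # distinct real roots
             (sp.exp(3*t), t*sp.exp(3*t)),                   # repeated real root
             (sp.exp(t)*sp.cos(2*t), sp.exp(t)*sp.sin(2*t))] # complex conjugate roots

    for f, g in pairs:
        ratio = sp.simplify(g / f)
        is_li = sp.simplify(ratio.diff(t)) != 0              # nonconstant quotient <=> LI
        print(ratio, '->', 'LI' if is_li else 'LD')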

11.4 FUNDAMENTAL SOLUTION SETS

It turns out that the pairs listed in Table 11.3 aren't the only bases for solutions to Eqn. (11.2).

Theorem 11.18 (Fundamental Solution Set) A pair of solutions to the ODE is a FSS if and only if the pair is LI.

11.19 Verify that is a FSS for

Solution: We see from Table 11.2 that, so the Wronskian is never zero. Hence, is a FSS for the ODE. To show that is a FSS, we need to show that (1) and are solutions to the ODE, and (2) and are LI on. To show (1) we need only observe that both and are linear combinations of the solutions and. To show (2) we apply the definition of LI to the pair. Let. Then. Since and are LI on, then. Simple algebra implies that and. Thus and are LI. End of 11.18

11.18 suggests that, given a FSS for the ODE, we can create another FSS by taking appropriate linear combinations of and. The following theorem shows how to generate all possible fundamental solution sets for the ODE.

Theorem 11.19 (All Possible FSS) Suppose and are fundamental solution sets for the ODE. Then there is an invertible matrix C so that.

What are the implications of this theorem?

1. ALL fundamental solution sets are related by an invertible matrix that is easy to calculate.
2. There are infinitely many fundamental solution sets.

We include the proof of this theorem because the steps involved are useful in problem solving.

Proof of All Possible FSS: Suppose and are fundamental solution sets for the ODE.
In particular, since is a solution, it must be a linear combination of and, say. Likewise, since is a solution, it must also be a linear combination of and, say. Since and are LI, then. Calculate: Then for all implies. As and are LI, then. But the only values of and that solve these two equations are zero. In other words, the ONLY solution to the vector equation is the zero vector. According to Items 5 and 4 of the Invertible Matrix Theorem, it follows that C has nonzero determinant, so that C is invertible. Q.E.D.

11.20 Determine a FSS for the ODE

Solution: The characteristic equation is. Factor to get the roots. According to Table 11.3 a FSS is. End of 11.20
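
A sketch in the spirit of 11.20 and Theorem 11.19. The ODE used below, y'' - y' - 6y = 0 with characteristic equation r^2 - r - 6 = 0, is an assumed sample (the displayed equation of 11.20 is not reproduced in this transcription): the roots give an FSS as in Table 11.3, the Wronskian confirms it, and an invertible matrix C produces another FSS, as Theorem 11.19 predicts.

    # Hedged sketch for 11.20 and Theorem 11.19 on an assumed sample ODE
    # y'' - y' - 6y = 0 with characteristic equation r^2 - r - 6 = 0.
    import sympy as sp

    t, r = sp.symbols('t r')

    roots = sp.solve(sp.Eq(r**2 - r - 6, 0), r)
    print(roots)                                        # the two distinct real roots -2 and 3

    fss = [sp.exp(rt*t) for rt in roots]                # {e^{-2t}, e^{3t}}, per Table 11.3
    print(sp.simplify(sp.wronskian(fss, t)))            # a nonzero multiple of exp(t): never zero, so an FSS

    C = sp.Matrix([[1, 1], [1, -1]])                    # any invertible matrix gives another FSS
    new_fss = [C[0, 0]*fss[0] + C[0, 1]*fss[1],
               C[1, 0]*fss[0] + C[1, 1]*fss[1]]
    print(sp.simplify(sp.wronskian(new_fss, t)))        # det(C) times the previous Wronskian: still never zero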

11.21 Consider the ODE. Determine a FSS that satisfies and.

Solution: Start with the FSS from 11.20. Set. First find values of and that satisfy. Substitute to get. Then and, so that. Next find values of and that satisfy. Substitute to get. Then and, so that. Thus the FSS we want is. End of 11.21

11.22 Solve the IVP

Solution: The characteristic equation is. Factorization is too difficult, so we use the quadratic formula:
From Table 11.1 we have and, and a FSS is. The general solution is given by. To compute the values of and from the initial conditions, we first need a formula for. The values for and imply. Then and, so that the solution to the IVP is. End of 11.22

11.5 TRANSLATION INVARIANCE

Because the independent variable is not present in the ODE, time-translations of solutions are solutions. That is, if is a solution, then is a solution for any. This property is easily verified as follows. Since is a solution, then is true for all. Hence for any, must be true for all. Hence is a solution. Thus if is a solution to the IVP, there is no loss of generality in translating the solution so that the value of the solution is at and the value of the derivative is at. This is the significance of the following theorem: all clocks start at.

Theorem 11.23 (Translate solutions to start at ) Suppose is a solution to the IVP. Then is a solution to the IVP.

PROOF: We saw immediately before the statement of Theorem 11.23 that the translate must solve the ODE. It is also easy to verify that the initial conditions are satisfied. Indeed, since solves the IVP of Eqn. (11.1), then we must have. Next differentiate to get. Then. Thus solves the IVP of Eqn. (11.3). Q.E.D.
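
A short symbolic confirmation of the translation property just proved. The sample equation y'' + 3y' + 2y = 0 and its general solution are assumptions used only for this check; any constant-coefficient homogeneous equation would do.

    # Hedged check of Theorem 11.23 on the assumed sample equation y'' + 3y' + 2y = 0.
    import sympy as sp

    t, t0, c1, c2 = sp.symbols('t t0 c1 c2')

    phi = c1*sp.exp(-t) + c2*sp.exp(-2*t)        # general solution of the assumed ODE
    shifted = phi.subs(t, t - t0)                # the time-translated function phi(t - t0)

    residual = shifted.diff(t, 2) + 3*shifted.diff(t) + 2*shifted
    print(sp.simplify(residual))                 # 0: the translate is again a solution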

Visual evidence of Theorem 11.23 is presented in Lecture 12 as well as in the following animation: TIME TRANSLATION (& UNIQUENESS) ANIMATION

11.6 THE IDEAL FSS

Frequently we need to calculate a special FSS with the following property.

Definition 11.24 (Ideal FSS) A FSS that satisfies is called ideal.

Note that an ideal FSS for an ODE must be unique. This is because the solution to any IVP of the form must be unique. In particular, the solutions to each of the IVPs and are unique. Moreover, these two solutions constitute an ideal FSS.

11.25 Check to see that constitute an ideal FSS.

Solution: Calculate
Then. End of 11.25

11.26 Check to see that and do not constitute an ideal FSS. Then show how the pair can be transformed into an ideal FSS.

Solution: Calculate. Then. We leave it to the reader to check that yields an ideal FSS. End of 11.26

11.27 Determine the ideal FSS for

Solution: We see from Table 11.2 that is a FSS for. Set and. Denote by the ideal FSS called for. According to Theorem 11.19 there is an invertible matrix C so that. The requirements for an ideal FSS imply that. In order to use these requirements we must first differentiate Eqn. (11.4) to obtain
Set in Eqn. (11.4) and use Eqn. (11.5a) to get. Similarly, set in Eqn. (11.6) and use Eqn. (11.5b) to get. These two vector equations can be combined into the single matrix equation. It follows that the coefficients and we want are the entries of the inverse of the matrix. In fact, set. Then we have the matrix equation, which implies. Use row reduction (or any other applicable method) to get. Finally, from Eqn. (11.4) we have the ideal FSS, or. The reader might recognize these functions from calculus, namely. By definition (from calculus). Note that
and. Alternatively, the calculation of the coefficients may be done without matrix methods. You just need to solve the system of 4 scalar equations obtained from Eqn. (11.6a), namely, and from Eqn. (11.6b). The first and third of these equations yield and. End of 11.27

Remark 11.28 Because the IVP has a unique solution, the solution formulas for differing FSS must be algebraically equivalent.

The next example illustrates Remark 11.28.

11.29 Solve the IVP using two different FSS: and.

Solution: Each of the bases gives rise to an apparently distinct general solution, namely and. (We use different symbols for the coefficients, as their values will differ when calculating them to solve the IVP.) The solution to the IVP for one of the FSS was obtained in 10.12. To calculate the solution to the IVP for the other FSS, we first set in Eqn. (11.8): or (where we have used the definitions of and given above). It follows that. Now differentiate Eqn. (11.8) with respect to
Let in Eqn. (11.9): or. It follows that. So the solution to the IVP is. As the solution to the IVP must be unique, we need to check that the two solutions and are algebraically equivalent. We get. End of 11.29

Theorem 11.30 (Why an ideal FSS is called "ideal") If is the ideal FSS for the ODE, then for any initial conditions the solution to the IVP has the form

11.31 Solve the IVP using its ideal FSS.

Solution: The ideal FSS is from 11.27. It follows immediately from Theorem 11.30 that the solution of the IVP is. End of 11.31
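
To close, here is a sketch of the matrix computation of 11.27 together with the payoff of Theorem 11.30. The transcription omits the ODE of 11.27, but the cosh/sinh remark suggests y'' - y = 0 with FSS {e^t, e^{-t}}; that assumption is used below. Inverting the matrix of values and derivatives at t = 0 produces the ideal FSS, and the solution of any IVP for that equation is then read off as y0*N1 + v0*N2 with no linear system to solve.

    # Hedged sketch of 11.27 and Theorem 11.30, assuming the ODE y'' - y = 0.
    import sympy as sp

    t, y0, v0 = sp.symbols('t y0 v0')
    y1, y2 = sp.exp(t), sp.exp(-t)                       # assumed FSS from Table 11.2

    # Matrix of values and derivatives at t = 0; the columns of its inverse give
    # the coefficients that build N1 and N2.
    M = sp.Matrix([[y1.subs(t, 0), y2.subs(t, 0)],
                   [y1.diff(t).subs(t, 0), y2.diff(t).subs(t, 0)]])
    C = M.inv()

    N1 = C[0, 0]*y1 + C[1, 0]*y2                         # N1(0) = 1, N1'(0) = 0
    N2 = C[0, 1]*y1 + C[1, 1]*y2                         # N2(0) = 0, N2'(0) = 1
    print(sp.simplify(N1 - sp.cosh(t).rewrite(sp.exp)),  # 0: N1 = cosh t
          sp.simplify(N2 - sp.sinh(t).rewrite(sp.exp)))  # 0: N2 = sinh t

    # Theorem 11.30: the IVP solution is y0*N1 + v0*N2.
    sol = y0*N1 + v0*N2
    print(sp.simplify(sol.diff(t, 2) - sol))                    # 0: satisfies y'' - y = 0
    print(sol.subs(t, 0), sp.simplify(sol.diff(t).subs(t, 0)))  # y0  v0: matches the initial data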