First-Order Ordinary Differential Equations II: Autonomous Case. David Levermore, Department of Mathematics, University of Maryland.


First-Order Ordinary Differential Equations II: Autonomous Case
David Levermore
Department of Mathematics, University of Maryland
25 February 2009

These notes cover some of the material on first-order ordinary differential equations that we covered in class. As the presentation of this material in class was somewhat different from that in the book, I felt that a written review closely following the class presentation might be appreciated.

3. First-Order Equations: Autonomous Case

3.1. Autonomous Equations. The next simple form of the general first-order equation to treat is that of autonomous equations. These have the form

    dy/dt = g(y).   (3.1)

Here the derivative of y with respect to t is given by a function g(y) that is independent of t.

3.1.1. Recipe for Solving Autonomous Equations. Just as we did for the linear case, we will reduce the autonomous case to the explicit case. The trick to doing this is to consider t to be a function of y. This trick works over intervals on which the solution y(t) is a strictly monotonic function of t, which will be the case over intervals where g(y(t)) is never zero. In that case, by the chain rule we have

    dt/dy = 1/(dy/dt) = 1/g(y).

This is an explicit equation for the derivative of t with respect to y. It can be integrated to obtain

    t = G(y) + c,   where G'(y) = 1/g(y) and c is any constant.   (3.2)

We claim that if you solve this equation for y as a differentiable function of t then the result will be a solution of (3.1) wherever g(y) ≠ 0. Indeed, suppose that Y(t) is differentiable and satisfies t = G(Y(t)) + c. Upon differentiating both sides of this equation with respect to t we see that

    1 = G'(Y(t)) Y'(t) = Y'(t)/g(Y(t)),   wherever g(Y(t)) ≠ 0.

It follows that y = Y(t) satisfies (3.1). Notice that (3.2) gives solutions of (3.1) implicitly as t = G(y) + c, where c is an arbitrary constant. To find an explicit solution, you must solve this equation for y as a function of t. This means finding an inverse function of G, namely a function G^{-1} with the property that G^{-1}(G(y)) = y for every y in some subset of the domain of G.
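The recipe can be sanity-checked numerically. The sketch below is an illustration added to these notes, not part of them: it picks the sample right-hand side g(y) = e^{-y} (my choice, for which G(y) = e^y and the explicit solution is y = log(t - c)), and confirms with a finite-difference quotient that the resulting y(t) satisfies dy/dt = g(y).

```python
import math

# Hedged sketch: check the recipe t = G(y) + c on the sample equation
# dy/dt = g(y) with g(y) = exp(-y), chosen purely for illustration.
# Here G'(y) = 1/g(y) = exp(y), so G(y) = exp(y) and, taking c = 0,
# the explicit solution is y = G^{-1}(t - c) = log(t - c).

def g(y):
    return math.exp(-y)

def y_explicit(t, c=0.0):
    return math.log(t - c)   # defined only for t > c

# Finite-difference check that dy/dt matches g(y) along the solution.
h = 1e-6
for t in [1.0, 2.0, 5.0]:
    dydt = (y_explicit(t + h) - y_explicit(t - h)) / (2 * h)
    assert abs(dydt - g(y_explicit(t))) < 1e-6
print("recipe verified for g(y) = exp(-y)")
```

The same check works for any g with an elementary antiderivative of 1/g; only the formulas for G and G^{-1} change.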

For every such inverse function, a family of explicit solutions to (3.1) is then given by

    y = G^{-1}(t - c).   (3.3)

As we will see in examples below, such a solution may not exist for every value of t - c, or there may be more than one solution.

3.1.2. Initial-Value Problems for Autonomous Equations. In order to pick a unique solution from the family (3.3) one must impose an additional condition that determines c. As for linear equations, we do this by imposing an initial condition of the form

    y(t_I) = y_I,

where t_I is called the initial time and y_I is called the initial value. The combination of the differential equation (3.1) with the above initial condition is the initial-value problem

    dy/dt = g(y),   y(t_I) = y_I.   (3.4)

There are two possibilities: either g(y_I) = 0 or g(y_I) ≠ 0.

If g(y_I) = 0 then it is clear that y(t) = y_I is a solution of (3.4) that is defined for every t. Because this solution does not depend on t it is called a stationary solution. Every zero of g is also called a stationary point because it yields such a stationary solution. These points are also sometimes called either equilibrium points or critical points.

Example. Consider the autonomous equation

    dy/dt = 4y - y^3.

Because 4y - y^3 = y(2 - y)(2 + y), we see that y = 0, y = -2, and y = 2 are stationary points of this equation.

On the other hand, if g(y_I) ≠ 0 then you can try to use recipe (3.2) to obtain an implicit relation t = G(y) + c. The initial condition then implies that t_I = G(y_I) + c, whereby c = t_I - G(y_I). The solution of the initial-value problem (3.4) is then given implicitly by

    t = G(y) + t_I - G(y_I).

To find an explicit solution, you must solve this equation for y as a function of t. There may be more than one solution of this equation. If so, be sure to take the one that satisfies the initial condition. There will always be exactly one solution that does satisfy the initial condition for all times t in some open interval that contains the initial time t_I. The largest such interval is the interval of existence for the solution.

Example. Find the solution to the initial-value problem

    dy/dt = y^2,   y(0) = y_o.

Identify its interval of existence.

Solution. We see that y = 0 is the only stationary point of this equation. Hence, if y_o = 0 then y(t) = 0 is a stationary solution whose interval of existence is (-∞, ∞). So let us suppose that y_o ≠ 0. The solution is given implicitly by

    t = ∫ dy/y^2 = -1/y + c.

The initial condition then implies that 0 = -1/y_o + c, whereby c = 1/y_o. The solution is therefore given implicitly by

    t = -1/y + 1/y_o.

This may be solved for y explicitly by first noticing that

    1/y = 1/y_o - t = (1 - y_o t)/y_o,

and then taking the reciprocal of each side to obtain

    y = y_o/(1 - y_o t).

It is clear from this formula that the solution ceases to exist when t = 1/y_o. Therefore if y_o > 0 then the interval of existence of the solution is (-∞, 1/y_o), while if y_o < 0 then the interval of existence of the solution is (1/y_o, ∞). Notice that both of these intervals contain the initial time 0. Finally, notice that our explicit solution recovers the stationary solution when y_o = 0 even though it was derived assuming that y_o ≠ 0.

Remark. The above example already shows some important differences between nonlinear and linear equations. For one, it shows that for nonlinear equations the interval of existence can depend on the initial data. This is in marked contrast with linear equations, where you can read off the interval of existence from the coefficient and the forcing. Another difference it shows is that solutions of nonlinear equations can blow up even when the equation has no singularities in it. This is in marked contrast with linear equations, where the solution will not blow up if the coefficient and forcing are continuous everywhere.
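As a numerical aside (an illustration added to these notes, not part of them), the explicit solution y(t) = y_o/(1 - y_o t) of the example above can be checked directly against dy/dt = y^2, and its blow-up near t = 1/y_o observed:

```python
# Hedged sketch: the explicit solution y(t) = y_o / (1 - y_o t) from the
# example above, checked against dy/dt = y^2 and its blow-up time 1/y_o.

def y(t, y_o):
    return y_o / (1.0 - y_o * t)

h = 1e-7
y_o = 2.0          # then the blow-up time is 1/y_o = 0.5
for t in [-1.0, 0.0, 0.25, 0.4]:
    dydt = (y(t + h, y_o) - y(t - h, y_o)) / (2 * h)
    assert abs(dydt - y(t, y_o) ** 2) < 1e-4   # dy/dt = y^2 holds
assert y(0.0, y_o) == y_o                      # initial condition
assert y(0.499999, y_o) > 1e5                  # rapid growth near t = 1/y_o
print("solution of dy/dt = y^2 verified; it blows up as t -> 1/y_o")
```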

You might think that the blow-up seen in the last example had something to do with the fact that the stationary point y = 0 was bad for our recipe. However, the next example shows that blow-up happens even when there are no stationary points.

Example. Find the solution to the initial-value problem

    dy/dt = 1 + y^2,   y(0) = y_o.

Give its interval of existence.

Solution. Because 1 + y^2 > 0, we see there are no stationary solutions of this equation. Solutions are given implicitly by

    t = ∫ dy/(1 + y^2) = tan^{-1}(y) + c.

The initial condition then implies that

    0 = tan^{-1}(y_o) + c,   whereby c = -tan^{-1}(y_o).

Here we adopt the common convention that -π/2 < tan^{-1}(y_o) < π/2. The solution is therefore given implicitly by

    t = tan^{-1}(y) - tan^{-1}(y_o).

This may be solved for y explicitly by first noticing that

    tan^{-1}(y) = t + tan^{-1}(y_o),

and then taking the tangent of both sides to obtain

    y = tan( t + tan^{-1}(y_o) ).

Because tan becomes singular at -π/2 and π/2, the interval of existence for this solution is

    ( -tan^{-1}(y_o) - π/2 , -tan^{-1}(y_o) + π/2 ).

Notice that this interval contains the initial time 0 because -π/2 < tan^{-1}(y_o) < π/2.

The next example shows another way in which solutions of nonlinear equations can break down.

Example. Find the solution to the initial-value problem

    dy/dt = -1/y,   y(0) = y_o ≠ 0.

Give its interval of existence.
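The tangent formula from this example can likewise be verified numerically (again a sketch added for illustration, not part of the notes): the function y = tan(t + arctan(y_o)) should satisfy dy/dt = 1 + y^2 throughout the stated interval of existence.

```python
import math

# Hedged check of y = tan(t + arctan(y_o)) for dy/dt = 1 + y^2,
# on the interval of existence (-arctan(y_o) - pi/2, -arctan(y_o) + pi/2).

def y(t, y_o):
    return math.tan(t + math.atan(y_o))

h = 1e-7
y_o = 1.0
lo = -math.atan(y_o) - math.pi / 2
hi = -math.atan(y_o) + math.pi / 2
for t in [lo + 0.1, 0.0, hi - 0.1]:
    dydt = (y(t + h, y_o) - y(t - h, y_o)) / (2 * h)
    assert abs(dydt - (1.0 + y(t, y_o) ** 2)) < 1e-3
assert abs(y(0.0, y_o) - y_o) < 1e-12   # initial condition
assert lo < 0.0 < hi                     # the interval contains t = 0
```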

Solution. Notice that because the right-hand side of the differential equation does not make sense when y = 0, no solution may take that value. In particular, we require that y_o ≠ 0. Because -1/y ≠ 0, we see there are no stationary solutions of this equation. Solutions are therefore given implicitly by

    t = -(1/2) y^2 + c.

The initial condition then implies that

    0 = -(1/2) y_o^2 + c,   whereby c = (1/2) y_o^2.

The solution is therefore given implicitly by

    t = (1/2) y_o^2 - (1/2) y^2.

This may be solved for y explicitly by first noticing that

    y^2 = y_o^2 - 2t,

and then taking the square root of both sides to obtain

    y = ±√( y_o^2 - 2t ).

You must then choose the sign of the square root so that the solution agrees with the initial data, that is, positive when y_o > 0 and negative when y_o < 0. In either case you obtain

    y = sgn(y_o) √( y_o^2 - 2t ),

where

    sgn(y_o) = 1 if y_o > 0,   0 if y_o = 0,   -1 if y_o < 0.

Finally, notice that to keep the argument of the square root positive we must require that t < (1/2) y_o^2. The interval of existence for this solution is therefore ( -∞ , (1/2) y_o^2 ).

Remark. In the above example the solution does not blow up as t approaches (1/2) y_o^2. Indeed, the solution y(t) approaches 0 as t approaches (1/2) y_o^2. However, the derivative of the solution is given by

    dy/dt = -sgn(y_o) / √( y_o^2 - 2t ),

which does blow up as t approaches (1/2) y_o^2. This happens because the solution approaches the value 0, where the right-hand side of the equation is not defined.
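The distinction drawn in the remark above, that the solution stays bounded while its derivative blows up, can be seen numerically. The sketch below (an added illustration, not part of the notes) checks y = sgn(y_o) √(y_o^2 - 2t) against dy/dt = -1/y and confirms that y itself tends to 0 at the endpoint t = y_o^2/2.

```python
import math

# Hedged check of y = sgn(y_o) * sqrt(y_o^2 - 2t) for dy/dt = -1/y,
# which exists for t < y_o^2 / 2 and whose *derivative* blows up there.

def sgn(x):
    return (x > 0) - (x < 0)

def y(t, y_o):
    return sgn(y_o) * math.sqrt(y_o * y_o - 2.0 * t)

h = 1e-7
for y_o in [3.0, -3.0]:
    for t in [-2.0, 0.0, 4.0]:            # all below y_o^2 / 2 = 4.5
        dydt = (y(t + h, y_o) - y(t - h, y_o)) / (2 * h)
        assert abs(dydt - (-1.0 / y(t, y_o))) < 1e-5
    assert y(0.0, y_o) == y_o             # initial condition
# the solution stays bounded, but the slope -1/y diverges as t -> 4.5
assert abs(y(4.5 - 1e-9, 3.0)) < 1e-3
```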

The next example shows another instance in which we must be careful.

Example. Find solutions to the initial-value problem

    dy/dt = 3 y^{2/3},   y(0) = 0.

Solution. We see that y = 0 is a stationary point of this equation. Therefore y(t) = 0 is a stationary solution whose interval of existence is (-∞, ∞). However, let us carry out our recipe to see where it leads. Nonstationary solutions are given implicitly by

    t = ∫ dy/(3 y^{2/3}) = y^{1/3} + c.

Upon solving this for y we find y = (t - c)^3, where c is an arbitrary constant. The initial condition then implies that 0 = (0 - c)^3 = -c^3, whereby c = 0. We have thereby found two solutions of the initial-value problem: y(t) = 0 and y(t) = t^3. As we will now show, there are many others. Let a and b be any two numbers such that a ≤ 0 ≤ b and a < b. Then define y(t) by

    y(t) = (t - a)^3  for t < a,
           0          for a ≤ t < b,
           (t - b)^3  for b ≤ t.

You will understand this function better if you graph it, as we did in class. It is clearly a differentiable function with

    dy/dt (t) = 3(t - a)^2  for t < a,
                0           for a ≤ t < b,
                3(t - b)^2  for b ≤ t,

whereby it clearly satisfies the initial-value problem. Its interval of existence is (-∞, ∞). Similarly, for every a ≤ 0 you can construct the solution

    y(t) = (t - a)^3  for t < a,
           0          for a ≤ t,

while for every b ≥ 0 you can construct the solution

    y(t) = 0          for t < b,
           (t - b)^3  for b ≤ t.
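The non-uniqueness above is easy to exhibit concretely. The sketch below (an added illustration, not part of the notes) verifies that y(t) = 0 and y(t) = t^3 are two different functions satisfying the same initial-value problem dy/dt = 3y^{2/3}, y(0) = 0.

```python
# Hedged sketch: two distinct solutions of dy/dt = 3 y^(2/3), y(0) = 0,
# illustrating the non-uniqueness described above.

def rhs(y):
    # y^(2/3) means (cube root of y) squared, so it equals |y|^(2/3);
    # using abs keeps the computation real for negative y.
    return 3.0 * abs(y) ** (2.0 / 3.0)

def y_zero(t):
    return 0.0

def y_cubic(t):
    return t ** 3

h = 1e-5
for t in [-1.0, 0.5, 2.0]:
    d0 = (y_zero(t + h) - y_zero(t - h)) / (2 * h)
    d1 = (y_cubic(t + h) - y_cubic(t - h)) / (2 * h)
    assert abs(d0 - rhs(y_zero(t))) < 1e-6
    assert abs(d1 - rhs(y_cubic(t))) < 1e-6
assert y_zero(0.0) == y_cubic(0.0) == 0.0   # same initial condition
```

The piecewise solutions built from a and b in the text pass the same check, since each piece is either 0 or a shifted cubic.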

Remark. The above example shows a very important difference between nonlinear and linear equations. Specifically, it shows that for nonlinear equations an initial-value problem may not have a unique solution.

3.1.3. Existence and Uniqueness for Autonomous Equations. The non-uniqueness seen in the last example arises because g(y) = 3 y^{2/3} does not behave nicely at the stationary point y = 0. It is clear that g is continuous at 0, but because g'(y) = 2 y^{-1/3} we see that g is not differentiable at 0. The following fact states that differentiability of g is enough to ensure that the solution of the initial-value problem exists and is unique.

Theorem. Let g(y) be a function defined over an open interval (a, b) such that g is continuous over (a, b), and g has a finite number of zeros in (a, b) and is differentiable at each of these zeros. Then for every initial point y_o in (a, b) there exists a unique solution y = Y(t) to the initial-value problem

    dy/dt = g(y),   y(0) = y_o,   (3.5)

for so long as Y(t) remains within (a, b). Moreover, this solution is continuously differentiable and is determined by our recipe. Namely, either g(y_o) = 0 and Y(t) = y_o is a stationary solution, or g(y_o) ≠ 0 and Y(t) is a nonstationary solution that satisfies

    G(Y(t)) = t,   Y(0) = y_o,

where G(y) is defined by the definite integral

    G(y) = ∫_{y_o}^{y} dz/g(z),

whenever the point y is in (a, b) and neither y nor any point between y and y_o is a stationary point of g.

In particular, if g is differentiable over (-∞, ∞) then the initial-value problem (3.5) has a unique solution that either exists for all time or blows up in a finite time. Moreover, this solution is continuously differentiable and is determined by our recipe. This behavior can be seen in the examples above with g(y) = y^2 and g(y) = 1 + y^2. Indeed, it is seen whenever g(y) is a polynomial.

The above theorem implies that if the initial point y_o lies between two stationary points within (a, b) then the solution Y(t) exists for all time. This is because the uniqueness assertion implies Y(t) cannot cross any stationary point, and is therefore trapped within (a, b). In particular, if g is differentiable over (-∞, ∞) then the only solutions that might

blow up in a finite time are those that are not trapped above and below by stationary points.

Example. If g(y) = y^2 then the only stationary point is y = 0. Because g(y) > 0 when y ≠ 0, you see that every nonstationary solution Y(t) will be an increasing function of t. This fact is verified by the formula we derived earlier,

    Y(t) = y_o/(1 - y_o t).

When y_o > 0 the interval of existence is (-∞, 1/y_o), and we see that Y(t) → +∞ as t → 1/y_o while Y(t) → 0 as t → -∞. In this case the solution is trapped below as t → -∞ by the stationary point y = 0. Similarly, when y_o < 0 the interval of existence is (1/y_o, ∞), and we see that Y(t) → -∞ as t → 1/y_o while Y(t) → 0 as t → ∞. In this case the solution is trapped above as t → ∞ by the stationary point y = 0.

You do not need to find an explicit solution to determine the interval of existence of a solution. Rather, you can do it directly from the equation G(y) = t, where

    G(y) = ∫_{y_o}^{y} dz/g(z).

For example, if g(y) > 0 over [y_o, ∞) then G(y) will be increasing over [y_o, ∞) and the solution Y(t) will then exist for all t ≥ 0 such that

    t < lim_{y → +∞} G(y).

If the above limit is finite then the solution blows up as t approaches the value of the right-hand side above. Similarly, if g(y) > 0 over (-∞, y_o] then G(y) will be increasing over (-∞, y_o] and the solution Y(t) will then exist for all t ≤ 0 such that

    lim_{y → -∞} G(y) < t.

If the above limit is finite then the solution blows down (diverges to -∞) as t approaches the value of the left-hand side above.

Finally, finding explicit solutions by our recipe can get complicated even when g(y) is a fairly simple polynomial.

Example. Find the general solution of

    dy/dt = 4y - y^3.

Solution. Because 4y - y^3 = y(2 - y)(2 + y), we see that y = 0, y = -2, and y = 2 are stationary solutions of this equation. Nonstationary solutions are given implicitly by

    t = ∫ dy/(4y - y^3).
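The claim that the blow-up time can be read off from the limit of G(y) = ∫_{y_o}^{y} dz/g(z) can be tested numerically without ever solving the equation. The sketch below (an added illustration, not part of the notes) estimates the limit for g(y) = y^2 with y_o = 2 by a crude midpoint rule, and recovers the known blow-up time 1/y_o = 0.5.

```python
# Hedged sketch: estimate the blow-up time of dy/dt = y^2, y(0) = y_o > 0,
# directly from G(y) = integral from y_o to y of dz / g(z), without using
# the explicit solution. The limit as y -> +infinity should equal 1/y_o.

def G(y, y_o, n=400000):
    # simple midpoint rule for the integral of 1/z^2 from y_o to y
    total, step = 0.0, (y - y_o) / n
    for k in range(n):
        z = y_o + (k + 0.5) * step
        total += step / (z * z)
    return total

y_o = 2.0
approx_blowup = G(1e4, y_o)   # G at a large y stands in for the limit
assert abs(approx_blowup - 1.0 / y_o) < 1e-3
print("estimated blow-up time:", approx_blowup)
```

A uniform grid is wasteful here; it suffices for the demonstration, but an adaptive or transformed quadrature would be the practical choice.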

A partial fraction decomposition yields the identity

    1/(4y - y^3) = 1/( y(2 - y)(2 + y) ) = (1/4)/y + (1/8)/(2 - y) - (1/8)/(2 + y).

(You should be able to write down such partial fraction identities directly.) Hence, solutions are given implicitly by

    t = ∫ [ (1/4)/y + (1/8)/(2 - y) - (1/8)/(2 + y) ] dy
      = (1/4) log|y| - (1/8) log|2 - y| - (1/8) log|2 + y| + c
      = (1/8) log(y^2) - (1/8) log|2 - y| - (1/8) log|2 + y| + c
      = (1/8) log( y^2/|4 - y^2| ) + c,

where c is an arbitrary constant. Upon solving this for y^2 we find

    log( y^2/|4 - y^2| ) = 8(t - c),

which leads to

    y^2/|4 - y^2| = e^{8(t-c)}.

This can then be broken down into two cases. First, if y^2 < 4 then

    y^2/(4 - y^2) = e^{8(t-c)},

which implies that

    y^2 = 4/(1 + e^{-8(t-c)}),

and finally that

    y = ±2/√(1 + e^{-8(t-c)}).

These solutions exist for every time t. They vanish as t → -∞ and approach ±2 as t → +∞. On the other hand, if y^2 > 4 then

    y^2/(y^2 - 4) = e^{8(t-c)},

which implies that

    y^2 = 4/(1 - e^{-8(t-c)}).

So long as the denominator is positive, we find the solutions

    y = ±2/√(1 - e^{-8(t-c)}).

The denominator is positive if and only if t > c. Therefore these solutions exist for every time t > c. They diverge (blow up) to ±∞ as t → c+ and approach ±2 as t → +∞.

We therefore have found the stationary solutions y = 0, y = -2, and y = 2, and the nonstationary solutions

    y = -2/√(1 - e^{-8(t-c)})   when -∞ < y < -2 and t > c,
    y = -2/√(1 + e^{-8(t-c)})   when -2 < y < 0,
    y = +2/√(1 + e^{-8(t-c)})   when 0 < y < 2,
    y = +2/√(1 - e^{-8(t-c)})   when 2 < y < ∞ and t > c,

where c is an arbitrary constant. This list includes every solution of the equation.
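Since the algebra in this last example is long, a numerical spot check is reassuring. The sketch below (an added illustration, not part of the notes) verifies, with c = 0 for concreteness, that the inner branches (|y| < 2) and outer branches (|y| > 2) listed above do satisfy dy/dt = 4y - y^3.

```python
import math

# Hedged check of the nonstationary branches found above for
# dy/dt = 4y - y^3, taking c = 0 for concreteness.

def rhs(y):
    return 4.0 * y - y ** 3

def y_inner(t, sign):      # branch with 0 < |y| < 2, defined for all t
    return sign * 2.0 / math.sqrt(1.0 + math.exp(-8.0 * t))

def y_outer(t, sign):      # branch with |y| > 2, defined for t > c = 0
    return sign * 2.0 / math.sqrt(1.0 - math.exp(-8.0 * t))

h = 1e-6
for sign in (+1.0, -1.0):
    for t in [-1.0, 0.0, 1.0]:
        d = (y_inner(t + h, sign) - y_inner(t - h, sign)) / (2 * h)
        assert abs(d - rhs(y_inner(t, sign))) < 1e-4
    for t in [0.5, 1.0, 2.0]:
        d = (y_outer(t + h, sign) - y_outer(t - h, sign)) / (2 * h)
        assert abs(d - rhs(y_outer(t, sign))) < 1e-4
print("all four nonstationary branches satisfy dy/dt = 4y - y^3")
```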