Power Series Solutions for Ordinary Differential Equations


Power Series Solutions for Ordinary Differential Equations
James K. Peterson
Department of Biological Sciences and Department of Mathematical Sciences, Clemson University
December 4, 2017

Outline
1. Power Series Solutions of Ordinary Differential Equations

Power Series Solutions of Ordinary Differential Equations

You all probably know how to solve ordinary differential equations like

y'' + 3y' + 4y = 0,

which is an example of a linear second order differential equation with constant coefficients. To find a solution we find a twice differentiable function y(x) which satisfies the given dynamics. But what about a model like

x y'' + 4x^2 y' + 5y = 0   or   y'' + 3x y' + 6x^3 y = 0?

The coefficients here are not constants; they are polynomial functions of the independent variable x. You don't usually see how to solve such equations in your first course in ordinary differential equations. In this course, we have learned more of the requisite analysis, which enables us to understand what to do. However, the full proofs of some of our assertions will elude us, as there are still theorems in this area that we do not have the background to prove. These details can be found in older books such as E. Ince, Ordinary Differential Equations, from 1926.

Power Series Solutions of Ordinary Differential Equations

We are going to solve models such as

p(x) y'' + q(x) y' + r(x) y = 0,   y(x_0) = y_0, y'(x_0) = y_1

using a power series solution of the form y(x) = Σ_{n=0}^∞ a_n (x - x_0)^n. We must use a powerful theorem from Ince's text:

Theorem. Let p(x) y'' + q(x) y' + r(x) y = 0 be a given differential equation where p(x), q(x) and r(x) are polynomials in x. We say a is an ordinary point of this equation if p(a) ≠ 0; otherwise, we say a is a singular point. At any ordinary point a, the solution to this equation is y(x) = Σ_{n=0}^∞ a_n (x - a)^n, and the radius of convergence R of this solution satisfies R ≥ d, where d is the distance from a to the nearest (possibly complex) root of the leading coefficient polynomial p.
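To make the distance d concrete, here is a small illustrative helper (the function name and the sample polynomial are my own choices, not from the text): it estimates the guaranteed radius of convergence at an ordinary point a as the distance from a to the nearest complex root of p.

```python
# Illustrative sketch (names are hypothetical): the theorem says a series
# solution centered at an ordinary point a converges at least out to the
# distance d from a to the nearest root of p, where roots may be complex.
import numpy as np

def series_radius(p_coeffs, a):
    """Distance from the center a to the nearest complex root of p.
    p_coeffs lists the polynomial coefficients, highest degree first."""
    roots = np.roots(p_coeffs)
    return min(abs(r - a) for r in roots)

# Example: p(x) = x^2 + 1 has roots +/- i.  On the real line p never
# vanishes, yet the complex roots still limit convergence: a series
# centered at a = 0 is only guaranteed for |x| < 1.
print(series_radius([1.0, 0.0, 1.0], 0.0))
```

This is why the theorem is stated over the complex plane: real-variable inspection of p alone can miss the roots that actually bound R.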

Power Series Solutions of Ordinary Differential Equations

Comment In general, d is the distance to the nearest singularity of the coefficient functions; the definition of a singularity is more technical than being a root of a polynomial when the coefficient functions are not polynomials.

Proof Chapters 5 and 15 in Ince discuss the necessary background to study nth order ordinary differential equations of the form

w^(n) + p_1(z) w^(n-1) + ... + p_{n-1}(z) w^(1) + p_n(z) w = 0,   w^(i)(z_0) = w_i, 0 ≤ i ≤ n-1,

where w(z) is a function of the complex variable z = x + i y in the complex plane C. To understand this sort of equation, you must study what are called functions of a complex variable. Each complex coefficient function is assumed to be analytic at z_0, which means we can write

Power Series Solutions of Ordinary Differential Equations

Proof

p_1(z) = Σ_{k=0}^∞ a_k^1 (z - z_0)^k
p_2(z) = Σ_{k=0}^∞ a_k^2 (z - z_0)^k
...
p_i(z) = Σ_{k=0}^∞ a_k^i (z - z_0)^k
...
p_n(z) = Σ_{k=0}^∞ a_k^n (z - z_0)^k

where each of these power series converges in the ball in the complex plane centered at z_0 of radius r_i.

Power Series Solutions of Ordinary Differential Equations

Proof Hence, there is a smallest radius r = min{r_1, ..., r_n} for which they all converge. In the past, we have studied convergence of power series in a real variable x - x_0, and the radius of convergence r of such a series gives an interval of convergence (x_0 - r, x_0 + r). Here, the interval of convergence becomes a ball in the complex plane centered at z_0 = x_0 + i y_0:

B_r(z_0) = {z = x + i y : (x - x_0)^2 + (y - y_0)^2 < r^2}

However, the coefficient functions need not be so nice. They need not be analytic at the point z_0. A more general class of coefficient functions are those whose power series expansions have a more general form at a given point ζ. A point ζ is called a regular singular point of this equation if each of the coefficient functions p_i can be written as p_i(z) = P_i(z)/(z - ζ)^i where P_i is analytic at ζ. Thus, if ζ = 0 were a regular singular point, we could write P_i(z) = Σ_{k=0}^∞ b_k^i z^k and have

Power Series Solutions of Ordinary Differential Equations

Proof

p_1(z) = (1/z) P_1(z) = (1/z) Σ_{k=0}^∞ b_k^1 z^k
p_2(z) = (1/z^2) P_2(z) = (1/z^2) Σ_{k=0}^∞ b_k^2 z^k
...
p_i(z) = (1/z^i) P_i(z) = (1/z^i) Σ_{k=0}^∞ b_k^i z^k
...
p_n(z) = (1/z^n) P_n(z) = (1/z^n) Σ_{k=0}^∞ b_k^n z^k

Now our model converted to the complex plane is p(z) y'' + q(z) y' + r(z) y = 0, which can be rewritten as

Power Series Solutions of Ordinary Differential Equations

Proof

y'' + (q(z)/p(z)) y' + (r(z)/p(z)) y = 0.

Thus p_1(z) = q(z)/p(z) and p_2(z) = r(z)/p(z). Let's consider an example: say p(z) = z, q(z) = 2z^2 + 4z^3 and r(z) = z + 2. Then

p_1(z) = (2z^2 + 4z^3)/z = (1/z)(2z^2 + 4z^3)   and   p_2(z) = (z + 2)/z = (1/z^2)(z^2 + 2z),

and since 2z^2 + 4z^3 and z^2 + 2z are analytic at 0, z = 0 is a regular singular point. Note if the root of p occurred with higher multiplicity, say p(z) = z^3, the requirements of a regular singular point would not be met, since r(z)/p(z) would have a pole of order three at z = 0. Note the only zero of p(z) here is z = 0, so we can solve this equation at a point like z = 1 as a power series expansion y(z) = Σ_{k=0}^∞ a_k (z - 1)^k, and the radius of convergence of this expansion is at least R = 1, the distance from z = 1 to the zero z = 0 of p(z).
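The regular singular point check in this example can be automated; the following sketch assumes sympy is available and mirrors the computation above: z = 0 is a regular singular point when z·p_1(z) and z^2·p_2(z) stay finite there.

```python
# Sketch (sympy assumed): check that z = 0 is a regular singular point of
# p(z) y'' + q(z) y' + r(z) y = 0 for p = z, q = 2z^2 + 4z^3, r = z + 2.
import sympy as sp

z = sp.symbols('z')
p, q, r = z, 2*z**2 + 4*z**3, z + 2
p1 = sp.cancel(q / p)      # coefficient of y' after dividing through by p
p2 = sp.cancel(r / p)      # coefficient of y

# Regular singular point at 0: z*p1 and z^2*p2 must have finite limits.
lim1 = sp.limit(z * p1, z, 0)
lim2 = sp.limit(z**2 * p2, z, 0)
print(lim1, lim2)
```

Both limits come out finite here, confirming the hand computation.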

Power Series Solutions of Ordinary Differential Equations Proof The proof that the radius of convergence is the R mentioned is a bit complicated and is done in Chapter 16 of Ince. Once you have taken a nice course in complex variable theory it should be accessible. So when you get a chance, look it up!

Example Solve the model y'' + y = 0 using power series methods.

The coefficient functions here are constants, so the power series solution can be computed at any point a and the radius of convergence will be R = ∞. Let's find a solution at a = 0. We assume y(x) = Σ_{n=0}^∞ a_n x^n. From our study of power series, we know the first and second derived series have the same radius of convergence and differentiation can be done term by term. Thus,

y'(x) = Σ_{n=1}^∞ n a_n x^{n-1}
y''(x) = Σ_{n=2}^∞ n(n-1) a_n x^{n-2}

The model we need to solve then becomes

Σ_{n=2}^∞ n(n-1) a_n x^{n-2} + Σ_{n=0}^∞ a_n x^n = 0

The powers of x in the two series have different indices. Change the summation variable in the first one with k = n - 2:

Σ_{n=2}^∞ n(n-1) a_n x^{n-2} = Σ_{k=0}^∞ (k+2)(k+1) a_{k+2} x^k,

as k = n - 2 tells us n = k + 2 and n - 1 = k + 1. Since the choice of summation variable does not matter, relabel the k above back to an n to get

Σ_{k=0}^∞ (k+2)(k+1) a_{k+2} x^k = Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n

The series problem to solve is then

Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n + Σ_{n=0}^∞ a_n x^n = 0

Hence, we have, for all x,

Σ_{n=0}^∞ {(n+2)(n+1) a_{n+2} + a_n} x^n = 0

The only way this can be true is if each of the coefficients of x^n vanishes. This gives us what is called a recursion relation the coefficients must satisfy. For all n ≥ 0, we have

(n+2)(n+1) a_{n+2} + a_n = 0   ⟹   a_{n+2} = - a_n / ((n+2)(n+1))    (1)

By direct calculation, we find

a_2 = - a_0/(1·2)
a_3 = - a_1/(2·3) = - a_1/3!
a_4 = - a_2/(3·4) = - (1/(3·4)) (- a_0/(1·2)) = a_0/(1·2·3·4) = a_0/4!
a_5 = - a_3/(4·5) = a_1/5!
...

In general, it can be very hard to detect a pattern in these coefficients, but here we can find one easily. We see

a_{2k} = (-1)^k a_0/(2k)!
a_{2k+1} = (-1)^k a_1/(2k+1)!

We have found the solution is

y(x) = a_0 + a_1 x - a_0 x^2/2! - a_1 x^3/3! + a_0 x^4/4! + a_1 x^5/5! - ...
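As a sanity check, the recursion (1) can be run in a few lines of Python (a throwaway helper of my own, not part of the text) and compared with the closed forms just derived:

```python
# Generate a_0 ... a_{N-1} from the recursion a_{n+2} = -a_n/((n+2)(n+1))
# and compare with a_{2k} = (-1)^k a_0/(2k)!, a_{2k+1} = (-1)^k a_1/(2k+1)!.
from math import factorial

def series_coeffs(a0, a1, N):
    a = [0.0] * N
    a[0], a[1] = a0, a1
    for n in range(N - 2):
        a[n + 2] = -a[n] / ((n + 2) * (n + 1))
    return a

a = series_coeffs(1.0, 1.0, 8)   # take a_0 = a_1 = 1 for the check
print(a)   # 1, 1, -1/2, -1/6, 1/24, 1/120, -1/720, -1/5040
```

The even-indexed entries alternate in sign every two steps, exactly as the closed forms predict.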

Define two new series by

y_0(x) = 1 - x^2/2! + x^4/4! - ... + (-1)^k x^{2k}/(2k)! + ...
y_1(x) = x - x^3/3! + x^5/5! - ... + (-1)^k x^{2k+1}/(2k+1)! + ...

It is easy to prove that for any convergent series, α Σ_{n=0}^∞ a_n x^n = Σ_{n=0}^∞ α a_n x^n. The series Σ_{n=0}^∞ (-1)^n x^{2n}/(2n)! and Σ_{n=0}^∞ (-1)^n x^{2n+1}/(2n+1)! converge for all x by the ratio test, so we can write the solution y(x) as

y(x) = a_0 Σ_{n=0}^∞ (-1)^n x^{2n}/(2n)! + a_1 Σ_{n=0}^∞ (-1)^n x^{2n+1}/(2n+1)! = a_0 y_0(x) + a_1 y_1(x)

Since the power series for y_0 and y_1 converge on all of R, we also know how to compute their derived series to get

y_0'(x) = Σ_{n=1}^∞ (-1)^n (2n) x^{2n-1}/(2n)!
y_1'(x) = Σ_{n=0}^∞ (-1)^n (2n+1) x^{2n}/(2n+1)!

Next, note y_0(x) and y_1(x) are linearly independent on R, as if

c_0 y_0(x) + c_1 y_1(x) = 0 for all x,

then we also have

c_0 y_0'(x) + c_1 y_1'(x) = 0 for all x.

Since these equations must hold for all x, in particular they hold for x = 0, giving

c_0 y_0(0) + c_1 y_1(0) = c_0 (1) + c_1 (0) = 0
c_0 y_0'(0) + c_1 y_1'(0) = c_0 (0) + c_1 (1) = 0

In matrix-vector form this becomes

[ 1  0 ] [ c_0 ]   [ 0 ]
[ 0  1 ] [ c_1 ] = [ 0 ]

The determinant of the coefficient matrix of this linear system is 1, which is nonzero, and so the only solution is c_0 = c_1 = 0, implying these functions are linearly independent on R. Note y_0 is the solution to the problem

y'' + y = 0, y(0) = 1, y'(0) = 0

and y_1 solves

y'' + y = 0, y(0) = 0, y'(0) = 1

As usual, the solution to

y'' + y = 0, y(0) = α, y'(0) = β

is the linear combination of the two linearly independent solutions y_0 and y_1, giving

y(x) = α y_0(x) + β y_1(x)

Of course, from earlier courses and our understanding of the Taylor series expansions of cos(x) and sin(x), we also know y_0(x) = cos(x) and y_1(x) = sin(x). The point of this example is that we can do a similar analysis for the more general problem with polynomial coefficients.
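A quick numerical check (my own snippet; the sample point and term count are arbitrary choices) confirms that partial sums of the two series reproduce cos and sin:

```python
# Compare partial sums of the two series solutions of y'' + y = 0
# with cos(x) and sin(x) at a sample point.
from math import cos, sin, factorial

def y0(x, terms=25):
    return sum((-1)**n * x**(2*n) / factorial(2*n) for n in range(terms))

def y1(x, terms=25):
    return sum((-1)**n * x**(2*n + 1) / factorial(2*n + 1) for n in range(terms))

x = 1.3
print(abs(y0(x) - cos(x)), abs(y1(x) - sin(x)))   # both essentially zero
```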

Example Solve the model y'' + 6y' + 9y = 0 using power series methods.

Again, the coefficient functions here are constants, so the power series solution can be computed at any point a and the radius of convergence will be R = ∞. Let's find a solution at a = 0. We assume y(x) = Σ_{n=0}^∞ a_n x^n. Thus,

y'(x) = Σ_{n=1}^∞ n a_n x^{n-1}
y''(x) = Σ_{n=2}^∞ n(n-1) a_n x^{n-2}

The model we need to solve then becomes

Σ_{n=2}^∞ n(n-1) a_n x^{n-2} + 6 Σ_{n=1}^∞ n a_n x^{n-1} + 9 Σ_{n=0}^∞ a_n x^n = 0

The powers of x in the series have different indices. We change the summation variables in both derivative terms:

k = n - 1:   Σ_{n=1}^∞ n a_n x^{n-1} = Σ_{k=0}^∞ (k+1) a_{k+1} x^k
k = n - 2:   Σ_{n=2}^∞ n(n-1) a_n x^{n-2} = Σ_{k=0}^∞ (k+2)(k+1) a_{k+2} x^k

Since the choice of summation variable does not matter, relabel the k above back to an n to get

Σ_{k=0}^∞ (k+1) a_{k+1} x^k = Σ_{n=0}^∞ (n+1) a_{n+1} x^n
Σ_{k=0}^∞ (k+2)(k+1) a_{k+2} x^k = Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n

The series problem to solve is then

Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n + 6 Σ_{n=0}^∞ (n+1) a_{n+1} x^n + 9 Σ_{n=0}^∞ a_n x^n = 0

Hence, we have, for all x,

Σ_{n=0}^∞ {(n+2)(n+1) a_{n+2} + 6(n+1) a_{n+1} + 9 a_n} x^n = 0

The only way this can be true is if each of the coefficients of x^n vanishes. This gives us the recursion relation the coefficients must satisfy: for all n ≥ 0,

(n+2)(n+1) a_{n+2} + 6(n+1) a_{n+1} + 9 a_n = 0   ⟹   a_{n+2} = - (6(n+1) a_{n+1} + 9 a_n) / ((n+2)(n+1))

By direct calculation, we find

a_2 = - (6a_1 + 9a_0)/(1·2) = -3a_1 - (9/2) a_0
a_3 = - (6(2)a_2 + 9a_1)/(2·3) = -2a_2 - (3/2) a_1 = -2(-3a_1 - (9/2) a_0) - (3/2) a_1 = 6a_1 + 9a_0 - (3/2) a_1 = 9a_0 + (9/2) a_1

a_4 = - (6(3)a_3 + 9a_2)/(3·4) = - (3/2) a_3 - (3/4) a_2
    = - (3/2)(9a_0 + (9/2) a_1) - (3/4)(-3a_1 - (9/2) a_0)
    = - (27/2) a_0 - (27/4) a_1 + (9/4) a_1 + (27/8) a_0
    = - (81/8) a_0 - (9/2) a_1

We have found the solution is

y(x) = a_0 + a_1 x + (-3a_1 - (9/2) a_0) x^2 + (9a_0 + (9/2) a_1) x^3 + (-(81/8) a_0 - (9/2) a_1) x^4 + ...

Define two new series by

y_0(x) = 1 - (9/2) x^2 + 9 x^3 - (81/8) x^4 + ...
y_1(x) = x - 3 x^2 + (9/2) x^3 - (9/2) x^4 + ...

Hence, we can write the solution as

y(x) = a_0 {1 - (9/2) x^2 + 9 x^3 - (81/8) x^4 + ...} + a_1 {x - 3 x^2 + (9/2) x^3 - (9/2) x^4 + ...} = a_0 y_0(x) + a_1 y_1(x)

Our theorem guarantees that the solution y(x) converges on all of R, so the series y_0 we get by setting a_0 = 1 and a_1 = 0 converges too. Setting a_0 = 0 and a_1 = 1 then shows y_1 converges. Since the power series for y_0 and y_1 converge on R, we also know how to compute their derived series to get

y_0'(x) = -9x + 27x^2 - (81/2) x^3 + ...
y_1'(x) = 1 - 6x + (27/2) x^2 - 18 x^3 + ...
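The hand computations of a_2, a_3 and a_4 can be double-checked by running the recursion symbolically; this is a sketch assuming sympy is available:

```python
# Run the recursion a_{n+2} = -(6(n+1) a_{n+1} + 9 a_n)/((n+2)(n+1))
# symbolically in a_0, a_1 and confirm the coefficients found above.
import sympy as sp

a0, a1 = sp.symbols('a0 a1')
a = [a0, a1]
for n in range(3):
    a.append(sp.expand(-(6*(n + 1)*a[n + 1] + 9*a[n]) / ((n + 2)*(n + 1))))

print(a[2], a[3], a[4], sep='\n')
```

Working symbolically keeps the fractions exact, so any slip in the by-hand arithmetic shows up immediately.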

Next, note y_0(x) and y_1(x) are linearly independent on R, as if

c_0 y_0(x) + c_1 y_1(x) = 0 for all x,

then we also have

c_0 y_0'(x) + c_1 y_1'(x) = 0 for all x.

Since these equations must hold for all x, in particular they hold for x = 0, giving

c_0 y_0(0) + c_1 y_1(0) = c_0 (1) + c_1 (0) = 0
c_0 y_0'(0) + c_1 y_1'(0) = c_0 (0) + c_1 (1) = 0

which tells us c_0 = c_1 = 0, implying these functions are linearly independent on R.

Note y_0 is the solution to the problem

y'' + 6y' + 9y = 0, y(0) = 1, y'(0) = 0

The general solution is y(x) = A e^{-3x} + B x e^{-3x}, and for these initial conditions we find A = 1 and B = 3, giving y_0(x) = (1 + 3x) e^{-3x}. Using the Taylor series expansion of e^{-3x}, we find

(1 + 3x) e^{-3x} = (1 - 3x + (9/2) x^2 - (27/6) x^3 + ...) + 3x (1 - 3x + (9/2) x^2 - (27/6) x^3 + ...)
  = 1 + (3 - 3) x + ((9/2) - 9) x^2 + (-(27/6) + (27/2)) x^3 + ...
  = 1 - (9/2) x^2 + 9 x^3 + ...

which is the series we found using the power series method!

Then y_1 solves

y'' + 6y' + 9y = 0, y(0) = 0, y'(0) = 1

The general solution is y(x) = A e^{-3x} + B x e^{-3x}, and for these initial conditions we find A = 0 and B = 1, giving y_1(x) = x e^{-3x}. Using the Taylor series expansion of e^{-3x}, we find

x e^{-3x} = x (1 - 3x + (9/2) x^2 - (27/6) x^3 + ...) = x - 3x^2 + (9/2) x^3 - (9/2) x^4 + ...

which is the series we found using the power series method! As usual, the solution to

y'' + 6y' + 9y = 0, y(0) = α, y'(0) = β

is the linear combination of the two linearly independent solutions y_0 and y_1, giving y(x) = α y_0(x) + β y_1(x).
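Both closed forms can also be checked numerically against the recursion (again a throwaway snippet of my own; the sample point x = 0.4 and the 40-term cutoff are arbitrary choices):

```python
# Verify numerically that partial sums of the recursion coefficients
# reproduce y0(x) = (1 + 3x) e^{-3x} and y1(x) = x e^{-3x}.
from math import exp

def coeffs(a0, a1, N):
    a = [0.0] * N
    a[0], a[1] = a0, a1
    for n in range(N - 2):
        a[n + 2] = -(6*(n + 1)*a[n + 1] + 9*a[n]) / ((n + 2)*(n + 1))
    return a

def partial_sum(a, x):
    return sum(c * x**k for k, c in enumerate(a))

x = 0.4
y0_val = partial_sum(coeffs(1.0, 0.0, 40), x)   # a_0 = 1, a_1 = 0 branch
y1_val = partial_sum(coeffs(0.0, 1.0, 40), x)   # a_0 = 0, a_1 = 1 branch
print(y0_val - (1 + 3*x) * exp(-3*x), y1_val - x * exp(-3*x))
```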

Homework 38

38.1 Solve y'' + 4y = 0 using the Power Series method.

1. Use our theorems to find the radius of convergence R.
2. Find the two solutions y_0 and y_1.
3. Show y_0 and y_1 are linearly independent.
4. Write down the Initial Value Problems y_0 and y_1 satisfy.
5. Find the solution to this model with y(0) = 3 and y'(0) = 2.
6. Express y_0 and y_1 in terms of traditional functions.

38.2 Solve y'' + y' - 6y = 0 using the Power Series method.

1. Use our theorems to find the radius of convergence R.
2. Find the two solutions y_0 and y_1.
3. Show y_0 and y_1 are linearly independent.
4. Write down the Initial Value Problems y_0 and y_1 satisfy.
5. Find the solution to this model with y(0) = 1 and y'(0) = 4.
6. Express y_0 and y_1 in terms of traditional functions.