Power Series Solutions of Ordinary Differential Equations


Power Series Solutions for Ordinary Differential Equations

James K. Peterson
Department of Biological Sciences and Department of Mathematical Sciences, Clemson University

December 4, 2017

Outline:
Power Series Solutions of Ordinary Differential Equations
A Constant Coefficient Example

You all probably know how to solve ordinary differential equations like y'' + 3y' + 4y = 0, which is an example of a linear second order differential equation with constant coefficients. To find a solution we find a twice differentiable function y(x) which satisfies the given dynamics. But what about a model like x y'' + 4x^2 y' + 5y = 0 or y'' + 3x y' + 6x^3 y = 0? The coefficients here are not constants; they are polynomial functions of the independent variable x. You don't usually see how to solve such equations in your first course in ordinary differential equations. In this course, we have learned more of the requisite analysis that enables us to understand what to do. However, the full proofs of some of our assertions will elude us, as there are still theorems in this area that we do not have the background to prove. These details can be found in older books such as E. Ince, Ordinary Differential Equations, from 1926. We are going to solve models such as

p(x) y'' + q(x) y' + r(x) y = 0, y(x_0) = y_0, y'(x_0) = y_1

using a power series solution of the form y(x) = Σ_{n=0}^∞ a_n (x - x_0)^n. We rely on a powerful theorem from Ince's text:

Theorem. Let p(x) y'' + q(x) y' + r(x) y = 0 be a given differential equation where p(x), q(x) and r(x) are polynomials in x. We say a is an ordinary point of this equation if p(a) ≠ 0; otherwise we say a is a singular point. At any ordinary point a the equation has a solution y(x) = Σ_{n=0}^∞ a_n (x - a)^n, and the radius of convergence R of this solution satisfies R = d, where d is the distance from a to the nearest root of the coefficient polynomial p.
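The theorem turns the convergence question into a root-finding problem: find the roots of p in the complex plane and measure the distance from the expansion point to the nearest one. The sketch below illustrates that rule with numpy; the helper name series_radius and the sample polynomials are my own illustration, not part of the lecture.

```python
# Minimal sketch of the radius-of-convergence rule: R is the distance from the
# expansion point a to the nearest (complex) root of the leading coefficient p.
import numpy as np

def series_radius(p_coeffs, a):
    """p_coeffs: coefficients of p, highest degree first; a: expansion point."""
    roots = np.roots(p_coeffs)            # complex roots of p
    if len(roots) == 0:
        return np.inf                     # nonzero constant p: no singular points
    return min(abs(root - a) for root in roots)

# p(x) = x, expanding about a = 1: nearest root is 0, so R = 1.
print(series_radius([1, 0], a=1.0))       # -> 1.0
# p(x) = x^2 + 4 has roots +/- 2i, so expanding about a = 0 gives R = 2.
print(series_radius([1, 0, 4], a=0.0))    # -> 2.0 (up to floating point)
```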

Comment: In general, d is the distance to the nearest singularity of the coefficient functions; if the coefficient functions are not polynomials, the definition of a singularity is more technical than simply being a root of a polynomial.

Proof: Chapters 5 and 15 of Ince discuss the necessary background to study n-th order ordinary differential equations of the form

w^(n) + p_1(z) w^(n-1) + ... + p_{n-1}(z) w^(1) + p_n(z) w = 0, w^(i)(z_0) = z_i, 0 ≤ i ≤ n - 1,

where w(z) is a function of the complex variable z = x + iy in the complex plane C. To understand this sort of equation, you must study what are called functions of a complex variable. Each complex coefficient function is assumed to be analytic at z_0, which means we can write

p_1(z) = Σ_{k=0}^∞ a_k^1 (z - z_0)^k
p_2(z) = Σ_{k=0}^∞ a_k^2 (z - z_0)^k
...
p_i(z) = Σ_{k=0}^∞ a_k^i (z - z_0)^k
...
p_n(z) = Σ_{k=0}^∞ a_k^n (z - z_0)^k,

where the i-th power series converges in the ball in the complex plane centered at z_0 of radius r_i.

Hence, there is a smallest radius r = min{r_1, ..., r_n} for which they all converge. In the past, we have studied convergence of power series in a real variable x - x_0, where the radius of convergence r gives an interval of convergence (x_0 - r, x_0 + r). Here, the interval of convergence becomes a ball in the complex plane centered at z_0 = x_0 + i y_0:

B_r(z_0) = { z = x + iy : (x - x_0)^2 + (y - y_0)^2 < r^2 }

However, the coefficient functions need not be so nice: they need not be analytic at the point z_0. A more general class of coefficient functions are those whose power series expansions have a more general form at a given point ζ. A point ζ is called a regular singular point of this equation if each coefficient function p_i can be written as p_i(z) = P_i(z)/(z - ζ)^i, where P_i is analytic at ζ. Thus, if ζ = 0 is a regular singular point, we can write P_i(z) = Σ_{k=0}^∞ b_k^i z^k and have

p_1(z) = (1/z) P_1(z) = (1/z) Σ_{k=0}^∞ b_k^1 z^k
p_2(z) = (1/z^2) P_2(z) = (1/z^2) Σ_{k=0}^∞ b_k^2 z^k
...
p_i(z) = (1/z^i) P_i(z) = (1/z^i) Σ_{k=0}^∞ b_k^i z^k
...
p_n(z) = (1/z^n) P_n(z) = (1/z^n) Σ_{k=0}^∞ b_k^n z^k

Now our model converted to the complex plane is p(z) y'' + q(z) y' + r(z) y = 0, which can be rewritten as

y'' + (q(z)/p(z)) y' + (r(z)/p(z)) y = 0.

Thus p_1(z) = q(z)/p(z) and p_2(z) = r(z)/p(z). Let's consider an example: say p(z) = z, q(z) = 2z^2 + 4z - 3 and r(z) = z + 2. Then

p_1(z) = (2z^2 + 4z - 3)/z and p_2(z) = (z + 2)/z = (z^2 + 2z)/z^2,

and so z = 0 is a regular singular point. Note that if instead p(z) = z^2, the root of p occurs with multiplicity two and the requirements of a regular singular point are not met. Since the only zero of p(z) = z is z = 0, we can solve this equation at a point like z = 1 as a power series expansion y(z) = Σ_{k=0}^∞ a_k (z - 1)^k, and the radius of convergence is R = 1, as that is the distance from z = 1 to the zero z = 0 of p(z).

The proof that the radius of convergence is the R mentioned is a bit complicated and is done in Chapter 16 of Ince. Once you have taken a nice course in complex variable theory it should be accessible. So when you get a chance, look it up!
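The regular-singular-point test used in this example can be phrased mechanically: ζ is a regular singular point when (z - ζ) p_1(z) and (z - ζ)^2 p_2(z) are analytic at ζ. Below is a rough sympy sketch of that check for the second-order case; the function name is_regular_singular and the pole test via limits are my own simplification, not Ince's formulation.

```python
# Sketch: z = zeta is a regular singular point of p y'' + q y' + r y = 0 when
# (z - zeta)*q/p and (z - zeta)**2*r/p have finite limits (no pole) at zeta.
import sympy as sp

z = sp.symbols('z')

def is_regular_singular(p, q, r, zeta=0):
    P1 = sp.simplify((z - zeta) * q / p)
    P2 = sp.simplify((z - zeta)**2 * r / p)
    return all(sp.limit(P, z, zeta).is_finite for P in (P1, P2))

# The lecture's example: p = z, q = 2z^2 + 4z - 3, r = z + 2  ->  True
print(is_regular_singular(z, 2*z**2 + 4*z - 3, z + 2))
# With p = z^2 the condition fails (irregular singular point)  ->  False
print(is_regular_singular(z**2, 2*z**2 + 4*z - 3, z + 2))
```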

Example: Solve the model y'' + y = 0 using power series methods.

The coefficient functions here are constants, so the power series solution can be computed at any point a and the radius of convergence will be R = ∞. Let's find a solution at a = 0. We assume y(x) = Σ_{n=0}^∞ a_n x^n. From our study of power series, we know the first and second derived series have the same radius of convergence and differentiation can be done term by term. Thus,

y'(x) = Σ_{n=1}^∞ n a_n x^{n-1}
y''(x) = Σ_{n=2}^∞ n(n-1) a_n x^{n-2}

The model we need to solve then becomes

Σ_{n=2}^∞ n(n-1) a_n x^{n-2} + Σ_{n=0}^∞ a_n x^n = 0

The powers of x in the two series have different indices. Change the summation variable in the first one to k = n - 2:

Σ_{n=2}^∞ n(n-1) a_n x^{n-2} = Σ_{k=0}^∞ (k+2)(k+1) a_{k+2} x^k,

as k = n - 2 tells us n = k + 2 and n - 1 = k + 1. Since the choice of summation variable does not matter, relabel the k above back to an n to get

Σ_{k=0}^∞ (k+2)(k+1) a_{k+2} x^k = Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n
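The index shift is easy to get wrong by one, so it is worth checking on a truncated series. The following sympy snippet is my own illustration, not part of the lecture; it verifies that the reindexed sum reproduces the original one term for term.

```python
# Sketch: check  sum_{n=2..N+2} n(n-1) a_n x^(n-2)  ==  sum_{k=0..N} (k+2)(k+1) a_{k+2} x^k
import sympy as sp

x = sp.symbols('x')
N = 8
a = sp.symbols(f'a0:{N + 3}')           # symbols a0 .. a_{N+2}

lhs = sum(n*(n - 1)*a[n]*x**(n - 2) for n in range(2, N + 3))
rhs = sum((k + 2)*(k + 1)*a[k + 2]*x**k for k in range(0, N + 1))
print(sp.simplify(lhs - rhs))            # -> 0
```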

The series problem to solve is then

Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n + Σ_{n=0}^∞ a_n x^n = 0

Hence, we have, for all x,

Σ_{n=0}^∞ { (n+2)(n+1) a_{n+2} + a_n } x^n = 0

The only way this can be true is if each of the coefficients of x^n vanishes. This gives us what is called a recursion relation the coefficients must satisfy: for all n ≥ 0,

(n+2)(n+1) a_{n+2} + a_n = 0, i.e. a_{n+2} = - a_n / ((n+2)(n+1))

By direct calculation, we find

a_2 = - a_0 / (1·2)
a_3 = - a_1 / (2·3) = - a_1 / 3!
a_4 = - a_2 / (3·4) = - ( - a_0/(1·2) ) / (3·4) = a_0 / 4!
a_5 = - a_3 / (4·5) = a_1 / 5!
...

In general, it may not be possible to find a closed-form pattern for these coefficients, but here the pattern is easy to spot:

a_{2k} = (-1)^k a_0 / (2k)!
a_{2k+1} = (-1)^k a_1 / (2k+1)!

We have found the solution is

y(x) = a_0 + a_1 x - a_0 x^2/2! - a_1 x^3/3! + a_0 x^4/4! + a_1 x^5/5! - ...
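The recursion can also be iterated directly to confirm the claimed pattern. This small sketch, using Python's fractions module, is my own illustration; with a_0 = a_1 = 1 both patterns collapse to a_n = (-1)^(n//2)/n!.

```python
# Sketch: iterate  a_{n+2} = -a_n / ((n+2)(n+1))  and compare with the closed form.
from fractions import Fraction
from math import factorial

def series_coefficients(a0, a1, N):
    """Return a_0, ..., a_N for y'' + y = 0 from the recursion."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(N - 1):
        a.append(-a[n] / ((n + 2) * (n + 1)))
    return a

a = series_coefficients(1, 1, 10)
for n, an in enumerate(a):
    assert an == Fraction((-1) ** (n // 2), factorial(n)), (n, an)
print([str(c) for c in a[:6]])   # ['1', '1', '-1/2', '-1/6', '1/24', '1/120']
```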

Define two new series by

y_0(x) = 1 - x^2/2! + x^4/4! - ... + (-1)^k x^{2k}/(2k)! + ...
y_1(x) = x - x^3/3! + x^5/5! - ... + (-1)^k x^{2k+1}/(2k+1)! + ...

It is easy to prove that for any convergent series, α Σ a_n x^n = Σ α a_n x^n. The series Σ (-1)^n x^{2n}/(2n)! and Σ (-1)^n x^{2n+1}/(2n+1)! converge for all x by the ratio test, so we can write the solution y(x) as

y(x) = a_0 Σ_{n=0}^∞ (-1)^n x^{2n}/(2n)! + a_1 Σ_{n=0}^∞ (-1)^n x^{2n+1}/(2n+1)! = a_0 y_0(x) + a_1 y_1(x)

Since the power series for y_0 and y_1 converge on all of R, we also know how to compute their derived series to get

y_0'(x) = Σ_{n=1}^∞ (-1)^n (2n) x^{2n-1}/(2n)!
y_1'(x) = Σ_{n=0}^∞ (-1)^n (2n+1) x^{2n}/(2n+1)!

Next, note y_0(x) and y_1(x) are linearly independent on R, for if

c_0 y_0(x) + c_1 y_1(x) = 0 for all x,

then we also have

c_0 y_0'(x) + c_1 y_1'(x) = 0 for all x.

Since these equations must hold for all x, in particular they hold for x = 0, giving

c_0 y_0(0) + c_1 y_1(0) = c_0 (1) + c_1 (0) = 0
c_0 y_0'(0) + c_1 y_1'(0) = c_0 (0) + c_1 (1) = 0

In matrix-vector form this becomes

[ 1  0 ] [ c_0 ]   [ 0 ]
[ 0  1 ] [ c_1 ] = [ 0 ]

The determinant of the coefficient matrix of this linear system is 1, which is nonzero, and so the only solution is c_0 = c_1 = 0, implying these functions are linearly independent on R.

Note y_0 is the solution to the problem

y'' + y = 0, y(0) = 1, y'(0) = 0

and y_1 solves

y'' + y = 0, y(0) = 0, y'(0) = 1.

As usual, the solution to y'' + y = 0, y(0) = α, y'(0) = β is the linear combination of the two linearly independent solutions y_0 and y_1, giving

y(x) = α y_0(x) + β y_1(x)

Of course, from earlier courses and our understanding of the Taylor series expansions of cos(x) and sin(x), we also know y_0(x) = cos(x) and y_1(x) = sin(x). The point of this example is that we can do a similar analysis for the more general problem with polynomial coefficients.
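Since y_0 and y_1 are identified with cos and sin, a quick numerical sanity check is possible: partial sums of the two series should agree with math.cos and math.sin to machine precision for moderate x. This check is my own illustration, not part of the argument.

```python
# Sketch: partial sums of y0 and y1 agree with cos and sin.
import math

def y0(x, terms=30):
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))

def y1(x, terms=30):
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(y0(x) - math.cos(x)) < 1e-12
    assert abs(y1(x) - math.sin(x)) < 1e-12
print("partial sums match cos and sin")
```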

Example: Solve the model y'' + 6y' + 9y = 0 using power series methods.

Again, the coefficient functions here are constants, so the power series solution can be computed at any point a and the radius of convergence will be R = ∞. Let's find a solution at a = 0. We assume y(x) = Σ_{n=0}^∞ a_n x^n. Thus,

y'(x) = Σ_{n=1}^∞ n a_n x^{n-1}
y''(x) = Σ_{n=2}^∞ n(n-1) a_n x^{n-2}

The model we need to solve then becomes

Σ_{n=2}^∞ n(n-1) a_n x^{n-2} + 6 Σ_{n=1}^∞ n a_n x^{n-1} + 9 Σ_{n=0}^∞ a_n x^n = 0

The powers of x in the series have different indices. We change the summation variables in both derivative terms:

k = n - 1:  Σ_{n=1}^∞ n a_n x^{n-1} = Σ_{k=0}^∞ (k+1) a_{k+1} x^k
k = n - 2:  Σ_{n=2}^∞ n(n-1) a_n x^{n-2} = Σ_{k=0}^∞ (k+2)(k+1) a_{k+2} x^k

Since the choice of summation variable does not matter, relabel the k above back to an n to get

Σ_{k=0}^∞ (k+1) a_{k+1} x^k = Σ_{n=0}^∞ (n+1) a_{n+1} x^n
Σ_{k=0}^∞ (k+2)(k+1) a_{k+2} x^k = Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n

The series problem to solve is then

Σ_{n=0}^∞ (n+2)(n+1) a_{n+2} x^n + 6 Σ_{n=0}^∞ (n+1) a_{n+1} x^n + 9 Σ_{n=0}^∞ a_n x^n = 0

Hence, we have, for all x,

Σ_{n=0}^∞ { (n+2)(n+1) a_{n+2} + 6(n+1) a_{n+1} + 9 a_n } x^n = 0

The only way this can be true is if each of the coefficients of x^n vanishes. This gives us a recursion relation the coefficients must satisfy: for all n ≥ 0,

(n+2)(n+1) a_{n+2} + 6(n+1) a_{n+1} + 9 a_n = 0, i.e. a_{n+2} = - (6(n+1) a_{n+1} + 9 a_n) / ((n+2)(n+1))

By direct calculation, we find

a_2 = - (6 a_1 + 9 a_0) / (1·2) = -3 a_1 - (9/2) a_0
a_3 = - (6(2) a_2 + 9 a_1) / (2·3) = -2 a_2 - (3/2) a_1 = -2( -3 a_1 - (9/2) a_0 ) - (3/2) a_1 = 6 a_1 + 9 a_0 - (3/2) a_1 = 9 a_0 + (9/2) a_1
a_4 = - (6(3) a_3 + 9 a_2) / (3·4) = -(3/2) a_3 - (3/4) a_2 = -(3/2)( 9 a_0 + (9/2) a_1 ) - (3/4)( -3 a_1 - (9/2) a_0 )
    = -(27/2) a_0 - (27/4) a_1 + (9/4) a_1 + (27/8) a_0 = -(9/2) a_1 - (81/8) a_0

We have found the solution is

y(x) = a_0 + a_1 x + ( -3 a_1 - (9/2) a_0 ) x^2 + ( 9 a_0 + (9/2) a_1 ) x^3 + ( -(9/2) a_1 - (81/8) a_0 ) x^4 + ...

Define two new series by

y_0(x) = 1 - (9/2) x^2 + 9 x^3 - (81/8) x^4 + ...
y_1(x) = x - 3 x^2 + (9/2) x^3 - (9/2) x^4 + ...
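As a check of the arithmetic, the three-term recursion can be iterated with exact rational arithmetic. The sketch below is my own illustration, using Python's fractions module; it reproduces the coefficients of y_0 and y_1 listed above.

```python
# Sketch: iterate  a_{n+2} = -(6(n+1) a_{n+1} + 9 a_n) / ((n+2)(n+1))
# and recover the series for y0 (a0=1, a1=0) and y1 (a0=0, a1=1).
from fractions import Fraction

def coeffs(a0, a1, N):
    a = [Fraction(a0), Fraction(a1)]
    for n in range(N - 1):
        a.append(-(6*(n + 1)*a[n + 1] + 9*a[n]) / ((n + 2)*(n + 1)))
    return a

print([str(c) for c in coeffs(1, 0, 4)])   # ['1', '0', '-9/2', '9', '-81/8']
print([str(c) for c in coeffs(0, 1, 4)])   # ['0', '1', '-3', '9/2', '-9/2']
```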

Hence, we can write the solution as

y(x) = a_0 { 1 - (9/2) x^2 + 9 x^3 - (81/8) x^4 + ... } + a_1 { x - 3 x^2 + (9/2) x^3 - (9/2) x^4 + ... } = a_0 y_0(x) + a_1 y_1(x)

Our theorem guarantees that the solution y(x) converges on all of R, so the series y_0 we get by setting a_0 = 1 and a_1 = 0 converges too. Setting a_0 = 0 and a_1 = 1 then shows y_1 converges. Since the power series for y_0 and y_1 converge on R, we also know how to compute their derived series to get

y_0'(x) = -9 x + 27 x^2 - (81/2) x^3 + ...
y_1'(x) = 1 - 6 x + (27/2) x^2 - 18 x^3 + ...

Next, note y_0(x) and y_1(x) are linearly independent on R, for if

c_0 y_0(x) + c_1 y_1(x) = 0 for all x,

then we also have

c_0 y_0'(x) + c_1 y_1'(x) = 0 for all x.

Since these equations must hold for all x, in particular they hold for x = 0, giving

c_0 y_0(0) + c_1 y_1(0) = c_0 (1) + c_1 (0) = 0
c_0 y_0'(0) + c_1 y_1'(0) = c_0 (0) + c_1 (1) = 0

which tells us c_0 = c_1 = 0, implying these functions are linearly independent on R.

Note y_0 is the solution to the problem

y'' + 6y' + 9y = 0, y(0) = 1, y'(0) = 0

The general solution is y(x) = A e^{-3x} + B x e^{-3x}, and for these initial conditions we find A = 1 and B = 3, giving y_0(x) = (1 + 3x) e^{-3x}. Using the Taylor series expansion of e^{-3x}, we find

(1 + 3x) e^{-3x} = ( 1 - 3x + (9/2) x^2 - (27/6) x^3 + ... ) + 3x ( 1 - 3x + (9/2) x^2 - (27/6) x^3 + ... )
                 = 1 + (3 - 3) x + ( (9/2) - 9 ) x^2 + ( -(27/6) + (27/2) ) x^3 + ...
                 = 1 - (9/2) x^2 + 9 x^3 + ...

which is the series we found using the power series method!

Then y_1 solves

y'' + 6y' + 9y = 0, y(0) = 0, y'(0) = 1

The general solution is y(x) = A e^{-3x} + B x e^{-3x}, and for these initial conditions we find A = 0 and B = 1, giving y_1(x) = x e^{-3x}. Using the Taylor series expansion of e^{-3x}, we find

x e^{-3x} = x ( 1 - 3x + (9/2) x^2 - (27/6) x^3 + ... ) = x - 3 x^2 + (9/2) x^3 - (9/2) x^4 + ...

which is the series we found using the power series method! As usual, the solution to y'' + 6y' + 9y = 0, y(0) = α, y'(0) = β is the linear combination of the two linearly independent solutions y_0 and y_1, giving

y(x) = α y_0(x) + β y_1(x)
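A quick way to confirm this match is to expand the closed-form solutions directly and compare with the coefficients produced by the recursion. The snippet below is my own check using sympy's series expansion, not part of the lecture.

```python
# Sketch: Taylor-expand the closed forms (1+3x)e^(-3x) and x e^(-3x)
# and compare with the recursion's coefficients.
import sympy as sp

x = sp.symbols('x')
print(sp.series((1 + 3*x)*sp.exp(-3*x), x, 0, 5))  # 1 - 9*x**2/2 + 9*x**3 - 81*x**4/8 + O(x**5)
print(sp.series(x*sp.exp(-3*x), x, 0, 5))          # x - 3*x**2 + 9*x**3/2 - 9*x**4/2 + O(x**5)
```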

Homework 38

38.1 Solve y'' + 4y = 0 using the power series method.
1. Use our theorems to find the radius of convergence R.
2. Find the two solutions y_0 and y_1.
3. Show y_0 and y_1 are linearly independent.
4. Write down the Initial Value Problems that y_0 and y_1 satisfy.
5. Find the solution to this model with y(0) = 3 and y'(0) = 2.
6. Express y_0 and y_1 in terms of traditional functions.

38.2 Solve y'' + y' - 6y = 0 using the power series method.
1. Use our theorems to find the radius of convergence R.
2. Find the two solutions y_0 and y_1.
3. Show y_0 and y_1 are linearly independent.
4. Write down the Initial Value Problems that y_0 and y_1 satisfy.
5. Find the solution to this model with y(0) = 1 and y'(0) = 4.
6. Express y_0 and y_1 in terms of traditional functions.