Notes for Expansions/Series and Differential Equations

In the last discussion, we considered perturbation methods for constructing solutions/roots of algebraic equations. Three types of problems were illustrated, starting from the simplest: regular (straightforward) expansions, non-uniform expansions requiring modification of the process through inner and outer expansions, and singular perturbations. Before proceeding further, we first define more clearly the various types of expansions of functions.

1. Convergent and Divergent Expansions/Series

Consider a series, which is the sum of the terms of a sequence of numbers. Given a sequence $\{a_1, a_2, a_3, a_4, a_5, \ldots, a_n, \ldots\}$, the nth partial sum $S_n$ is the sum of the first n terms of the sequence, that is,

$S_n = \sum_{k=1}^{n} a_k$.    (1)

A series is convergent if the sequence of its partial sums $\{S_1, S_2, S_3, \ldots, S_n, \ldots\}$ converges. In more formal language, a series converges if there exists a limit $S$ such that for any arbitrarily small positive number $\varepsilon > 0$, there is a large integer $N$ such that for all $n \geq N$,

$|S_n - S| \leq \varepsilon$.    (2)

A series that is not convergent is said to be divergent.

Examples of convergent and divergent series:

The reciprocals of the powers of 2 produce a convergent series:

$\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \frac{1}{32} + \frac{1}{64} + \cdots = 1$.    (3)

The reciprocals of the positive integers produce a divergent series (the harmonic series):

$\frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \cdots$    (4)

Alternating the signs of the reciprocals of the positive integers produces a convergent series:

$\frac{1}{1} - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots = \ln 2$.    (5)

The reciprocals of the prime numbers produce a divergent series:

$\frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \frac{1}{13} + \cdots$    (6)
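As a quick numerical illustration of examples (3)-(5), the following Python sketch (an addition, not part of the original notes; the helper name partial_sum is arbitrary) computes partial sums of the three series:

    import math

    def partial_sum(term, n):
        """S_n = a_1 + a_2 + ... + a_n for the term function a_k = term(k)."""
        return sum(term(k) for k in range(1, n + 1))

    for n in (10, 100, 1000, 10000):
        print(n,
              partial_sum(lambda k: 0.5 ** k, n),             # (3): approaches 1
              partial_sum(lambda k: 1.0 / k, n),              # (4): keeps growing (diverges like ln n)
              partial_sum(lambda k: (-1) ** (k + 1) / k, n))  # (5): approaches ln 2
    print("ln 2 =", math.log(2.0))

The geometric and alternating sums settle down, while the harmonic partial sums keep growing without bound.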

Convergence tests:

There are a number of methods for determining whether a series is convergent or divergent.

Comparison test: The terms of the sequence $\{a_1, a_2, a_3, a_4, a_5, \ldots, a_n, \ldots\}$ are compared to those of another sequence $\{b_1, b_2, b_3, b_4, b_5, \ldots, b_n, \ldots\}$. If, for all $n$, $0 \leq a_n \leq b_n$ and $\sum b_n$ converges, then so does $\sum a_n$. However, if, for all $n$, $0 \leq b_n \leq a_n$ and $\sum b_n$ diverges, then so does $\sum a_n$.

Ratio test: Assume that for all $n$, $a_n > 0$. Suppose that there exists an $r \geq 0$ such that

$\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = r$.    (7)

If $r < 1$, the series converges. If $r > 1$, then the series diverges. If $r = 1$, the ratio test is inconclusive, and the series may converge or diverge.

Root test (or nth root test): Suppose that the terms of the sequence under consideration are non-negative, and that there exists $r \geq 0$ such that

$\lim_{n\to\infty} \sqrt[n]{a_n} = r$.    (8)

If $r < 1$, the series converges. If $r > 1$, the series diverges. If $r = 1$, the root test is inconclusive, and the series may converge or diverge. The root test is equivalent to the ratio test whenever the limit in the ratio test exists.

Integral test: The series can be compared to an integral to establish convergence or divergence. Let $f(n) = a_n$ define a positive and monotone decreasing function $f$. If

$\int_{1}^{\infty} f(x)\,dx = \lim_{t\to\infty} \int_{1}^{t} f(x)\,dx < \infty$,    (9)

then the series converges. If, however, the integral diverges, the series does so as well.
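For example, the limits in (7) and (8) can be estimated numerically. The sketch below (an addition, not from the notes) does this for $a_n = n^2/2^n$; both estimates approach $r = 1/2 < 1$, so $\sum a_n$ converges:

    def a(n):
        # terms a_n = n^2 / 2^n
        return n ** 2 / 2.0 ** n

    for n in (10, 100, 1000):
        print(n,
              a(n + 1) / a(n),      # estimate of the ratio-test limit (7)
              a(n) ** (1.0 / n))    # estimate of the root-test limit (8)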

Limit comparison test: If $a_n, b_n > 0$ for all $n$, and the limit $\lim_{n\to\infty} \frac{a_n}{b_n}$ exists and is not zero, then $\sum a_n$ converges if and only if $\sum b_n$ converges.

Alternating series test: Also known as the Leibniz criterion, the alternating series test states that for an alternating series of the form $\sum (-1)^n a_n$, if $\{a_n\}$ is monotone decreasing and has a limit of 0, then the series converges.

Cauchy condensation test: If $\{a_n\}$ is a monotone decreasing sequence of non-negative terms, then $\sum_{n=1}^{\infty} a_n$ converges if and only if $\sum_{k=0}^{\infty} 2^k a_{2^k}$ converges.

Other tests for convergence include Dirichlet's test, Abel's test and Raabe's test.

Conditional and absolute convergence:

Note that for any sequence $\{a_1, a_2, a_3, a_4, a_5, \ldots, a_n, \ldots\}$, $a_n \leq |a_n|$ for all $n$. Therefore,

$\sum a_n \leq \sum |a_n|$.

This means that if $\sum |a_n|$ converges, then $\sum a_n$ also converges (but not vice-versa).

If the series $\sum |a_n|$ converges, then the series $\sum a_n$ is absolutely convergent. An absolutely convergent series is one in which the length of the line created by joining together all of the increments to the partial sum is finitely long. The power series of the exponential function is absolutely convergent everywhere.

If the series $\sum a_n$ converges but the series $\sum |a_n|$ diverges, then the series $\sum a_n$ is conditionally convergent.

The Riemann series theorem states that if a series converges conditionally, it is possible to rearrange the terms of the series in such a way that the series converges to any value, or even diverges.
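As an illustration of the Riemann series theorem (an added sketch, not part of the original notes), the greedy rearrangement below reorders the conditionally convergent alternating harmonic series (5) so that its partial sums approach an arbitrarily chosen target value, here 0.5:

    def rearranged_partial_sum(target, n_terms):
        """Greedily rearrange 1 - 1/2 + 1/3 - 1/4 + ... so the partial sums
        approach `target` (Riemann series theorem)."""
        pos, neg = 1, 2        # next odd denominator (positive term) and even denominator (negative term)
        s = 0.0
        for _ in range(n_terms):
            if s <= target:    # below the target: add the next positive term 1/pos
                s += 1.0 / pos
                pos += 2
            else:              # above the target: add the next negative term -1/neg
                s -= 1.0 / neg
                neg += 2
        return s

    print(rearranged_partial_sum(0.5, 100000))   # close to 0.5, although the usual ordering gives ln 2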

Uniform convergence:

Let $\{f_1, f_2, f_3, f_4, \ldots, f_n, \ldots\}$ be a sequence of functions. The series $\sum f_n$ is said to converge uniformly to $f$ if the sequence $\{S_n\}$ of partial sums defined by

$S_n(x) = \sum_{k=1}^{n} f_k(x)$    (10)

converges uniformly to $f$.

Cauchy convergence criterion:

The Cauchy convergence criterion states that a series $\sum a_n$ converges if and only if the sequence of its partial sums is a Cauchy sequence. This means that for every $\varepsilon > 0$, there is a positive integer $N$ such that for all $n \geq m \geq N$ we have

$\left| \sum_{k=m}^{n} a_k \right| < \varepsilon$,    (11)

which is equivalent to

$\lim_{n\to\infty} \sum_{k=n}^{n+m} a_k = 0$ uniformly in $m$.    (12)

Radius of convergence:

The radius of convergence of a power series is a non-negative quantity, either a real number or $\infty$, that represents a range (within the radius) in which the function will converge. Consider a complex power series $f$ defined as

$f(z) = \sum_{n=0}^{\infty} c_n (z-a)^n$,    (13)

where $a$ is a constant, the center of the disk of convergence, $c_n$ is the nth complex coefficient, and $z$ is a complex variable. The radius of convergence $r$ is a non-negative real number or $\infty$, such that the series converges if $|z - a| < r$ and diverges if $|z - a| > r$. In other words, the series converges if $z$ is close enough to the center and diverges if it is too far away. The radius of convergence is infinite if the series converges for all complex numbers $z$.

The radius of convergence can be found by applying the root test to the terms of the series. The root test uses the number

$C = \limsup_{n\to\infty} \sqrt[n]{|f_n|}$,    (14)

where $f_n$ is the nth term $c_n(z-a)^n$ ("lim sup" denotes the limit superior). The root test states that the series converges if $C < 1$ and diverges if $C > 1$. It follows that the power series converges if the distance from $z$ to the center $a$ is less than

$r = \frac{1}{\limsup_{n\to\infty} \sqrt[n]{|c_n|}}$,    (15)

and diverges if the distance exceeds that number. Note that $r = 1/0$ is interpreted as an infinite radius, meaning that $f$ is an entire function.

The limit involved in the ratio test is usually easier to compute, but the limit may fail to exist, in which case the root test should be used. The ratio test uses the limit

$L = \lim_{n\to\infty} \left| \frac{f_{n+1}}{f_n} \right|$.    (16)

In the case of a power series, this can be used to find that

$r = \lim_{n\to\infty} \left| \frac{c_n}{c_{n+1}} \right|$.    (17)

2. Asymptotic Expansions

An asymptotic expansion, asymptotic series or Poincaré expansion is a formal series of functions which has the property that truncating the series after a finite number of terms provides an approximation to a given function as the argument of the function tends towards a particular, often infinite, point.

Let $\varphi_n$ be a sequence of continuous functions on some domain, and let $L$ be a (possibly infinite) limit point of the domain. Then the sequence constitutes an asymptotic scale (a set of gauge functions) if, for every $n$,

$\varphi_{n+1}(x) = o(\varphi_n(x))$ as $x \to L$.

If $f$ is a continuous function on the domain of the gauge functions, an asymptotic expansion of $f$ with respect to the scale is a formal series $\sum_{n=0}^{\infty} a_n \varphi_n(x)$ such that, for any fixed $N$,

$f(x) = \sum_{n=0}^{N} a_n \varphi_n(x) + O(\varphi_{N+1}(x))$ as $x \to L$.    (18)

In this case, we write

$f(x) \sim \sum_{n=0}^{\infty} a_n \varphi_n(x)$ as $x \to L$.    (19)
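For instance (an added sketch, not in the original notes), the ratio formula (17) recovers the radius of convergence $r = 1$ for $\sum_{n\geq 1} z^n/n^2$ and an unbounded (infinite) radius for the exponential series $\sum_n z^n/n!$:

    import math

    def radius_by_ratio(c, n):
        """Approximate r = lim |c_n / c_{n+1}| using a single large n (formula (17))."""
        return abs(c(n) / c(n + 1))

    print(radius_by_ratio(lambda n: 1.0 / n ** 2, 10 ** 6))        # about 1
    print(radius_by_ratio(lambda n: 1.0 / math.factorial(n), 50))  # 51: grows with n, so r = infinity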

The most common type of asymptotic expansion is a power series in either positive or negative powers of the variable. While a convergent Taylor series fits the definition as given, a non-convergent series is what is usually intended by the phrase. Methods of generating such expansions include the Euler-Maclaurin summation formula and integral transforms such as the Laplace and Mellin transforms. Repeated integration by parts will often lead to an asymptotic expansion.

Examples of asymptotic expansions:

Gamma function:

$\frac{e^x}{x^x\sqrt{2\pi x}}\,\Gamma(x+1) \sim 1 + \frac{1}{12x} + \frac{1}{288x^2} - \cdots$ as $x \to \infty$.    (20)

Exponential integral:

$x e^x E_1(x) \sim \sum_{n=0}^{\infty} \frac{(-1)^n n!}{x^n}$ as $x \to \infty$.    (21)

Riemann zeta function:

$\zeta(s) \sim \sum_{n=1}^{N} n^{-s} + \frac{N^{1-s}}{s-1} - \frac{N^{-s}}{2} + N^{-s}\sum_{m=1}^{\infty} \frac{B_{2m}\, s^{\overline{2m-1}}}{(2m)!\, N^{2m-1}}$,    (22)

where $B_{2m}$ are Bernoulli numbers and $s^{\overline{2m-1}}$ is a rising factorial. This expansion is valid for all complex $s$ and is often used to compute the zeta function by using a large enough value of $N$, for instance $N > |s|$.

Error function:

$\sqrt{\pi}\, x\, e^{x^2}\operatorname{erfc}(x) \sim \sum_{n=0}^{\infty} (-1)^n \frac{(2n-1)!!}{(2x^2)^n}$ as $x \to \infty$.    (23)

We now move on to differential equations, and proceed in the same manner, starting from the simplest and moving to the more complex.
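As a numerical check (an added sketch, not part of the notes, using only the standard-library math.erfc), a few terms of the error-function expansion (23) already give several correct digits at moderate $x$, even though the full series diverges:

    import math

    def erfc_asymptotic(x, n_terms):
        """Truncation of erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * sum_n (-1)^n (2n-1)!!/(2x^2)^n,
        i.e. expansion (23) rearranged to approximate erfc(x) itself."""
        s, double_fact = 0.0, 1.0            # (2n-1)!! with the convention (-1)!! = 1
        for n in range(n_terms):
            s += (-1) ** n * double_fact / (2.0 * x * x) ** n
            double_fact *= 2 * n + 1         # update to the next double factorial
        return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

    x = 3.0
    for n_terms in (1, 2, 4, 8):
        print(n_terms, erfc_asymptotic(x, n_terms), math.erfc(x))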

1. Introduction to Perturbation Techniques for Differential Equations
(http://www.sm.luth.se/~johanb/applmath/chapen)

Consider the motion of the earth around the sun (illustrated by a figure in the original notes). Let the mass of the earth be $m$. The motion of the earth around the sun is an ideal case, where we have no influences from other celestial bodies. The motion $y = y(t)$ of the earth is governed by Newton's law

$m\ddot{y} = F_{sun}$.    (24)

We now imagine the motion being perturbed by a comet passing close to the earth. Then the perturbed system will have an extra term:

$m\ddot{y} = F_{sun} + \varepsilon F_{comet}$.    (25)

It is not unreasonable to assume that the solution of the perturbed problem might be of the form

$y(t) = y_0(t) + \varepsilon y_1(t) + \varepsilon^2 y_2(t) + \cdots$,    (26)

where $y_0(t)$ is the solution of the unperturbed (original) problem and the additional terms are correction terms.

2. The Main Idea

Consider a second-order differential equation, written in compact form as

$F(t, y, y', y''; \varepsilon) = 0$, $0 \leq t \leq t_0$,    (27)

where $\varepsilon$ is a small parameter, $0 < \varepsilon \ll 1$. We will try to solve this equation by assuming a straightforward perturbation expansion

$y(t) = y_0(t) + \varepsilon y_1(t) + \varepsilon^2 y_2(t) + \cdots$,    (28)

determining the first few terms (say, $y_0(t)$, $y_1(t)$, $y_2(t)$, ...) of the expansion, and then using the sum of the first few terms, say

$y_{appr}(t) = y_0(t) + \varepsilon y_1(t) + \varepsilon^2 y_2(t)$,    (29)

as an approximation to the solution of the differential equation. Here $y_0(t)$ is the leading or zeroth-order term, the solution of the unperturbed problem

$F(t, y_0, y_0', y_0''; 0) = 0$, $0 \leq t \leq t_0$.    (30)

The terms $\varepsilon y_1(t), \varepsilon^2 y_2(t), \ldots$ are higher-order terms that are usually "small" in an asymptotic sense.

Remark: The unperturbed problem can in many cases be solved exactly.

3. Example: Motion of a Particle in a Nonlinear Resistive Medium

Suppose that a body with mass $m$ and initial velocity $V_0$ moves in a fluid in rectilinear motion. Let the body experience resistance to its motion, with the resistance force governed by

$F = -av + bv^2$,    (31)

where $v = v(t)$, $t > 0$, is the speed of the body at time $t$, and $a$ and $b$ are positive constants with $b \ll a$. Newton's second law then gives the equation of motion for the body as

$m\frac{dv}{dt} = -av + bv^2$, $v(0) = V_0$.    (32)

Remark: If $b = 0$, then the equation is linear and its solution is

$v(t) = V_0 e^{-at/m}$.    (33)

Scaling: Introduce the dimensionless variables

$y = v/V_0$, $x = t/(m/a)$.    (34)

Then $y$ is the non-dimensional speed and $x$ is the non-dimensional time. By the chain rule, the time derivative of the speed transforms as

$\frac{dv}{dt} = \frac{dv}{dy}\frac{dy}{dx}\frac{dx}{dt} = V_0\,\frac{dy}{dx}\,\frac{a}{m}$,    (35)

and thus the differential equation in terms of the non-dimensional variables is

$\frac{dy}{dx} = -y + \varepsilon y^2$, $x > 0$, $y(0) = 1$,    (36a,b)

where we have introduced a small parameter $\varepsilon$ by the definition $\varepsilon = bV_0/a \ll 1$.

Remark 1: The unperturbed problem is given by

$\frac{dy}{dx} = -y$, $x > 0$, $y(0) = 1$,    (37a,b)

and it has the solution $y(x) = e^{-x}$.

Remark 2: The perturbed or complete problem (equations (36a,b)) in the present case can actually be solved exactly without relying on any approximation techniques, and this exact solution is (see Chapter notes)

$y(x) = \frac{e^{-x}}{1 + \varepsilon(e^{-x} - 1)}$.    (38)

Perturbation Solution:

Consider the perturbation approach to solving the above equation now for small $\varepsilon$. The equation is

$\frac{dy}{dx} = -y + \varepsilon y^2$, $y(0) = 1$.    (39a,b)

We expand the solution $y(x, \varepsilon)$ in a perturbation series (in powers of $\varepsilon$, for sufficiently small $\varepsilon \ll 1$):

$y(x, \varepsilon) = y_0(x) + \varepsilon y_1(x) + \varepsilon^2 y_2(x) + \cdots$    (40)

Substituting this expansion into the differential equation yields

$y_0'(x) + \varepsilon y_1'(x) + \varepsilon^2 y_2'(x) + \cdots = -[y_0(x) + \varepsilon y_1(x) + \varepsilon^2 y_2(x) + \cdots] + \varepsilon\,[y_0(x) + \varepsilon y_1(x) + \varepsilon^2 y_2(x) + \cdots]^2 = -y_0(x) + \varepsilon[-y_1(x) + y_0^2(x)] + \varepsilon^2[-y_2(x) + 2y_0(x)y_1(x)] + \cdots$,    (41)

where the last expression gives the terms collected in powers of $\varepsilon$. Inserting the solution expansion into the initial condition gives

$y_0(0) + \varepsilon y_1(0) + \varepsilon^2 y_2(0) + \cdots = 1$.    (42)

Now consider the limiting case, that is, let $\varepsilon \to 0$. Then

$y_0(0) = 1$.    (43)

Now, let us subtract $y_0(0) = 1$ on both sides of (42) and divide the remaining expression by $\varepsilon$ (this is allowed as $\varepsilon \neq 0$). We get

$y_1(0) + \varepsilon y_2(0) + \varepsilon^2 y_3(0) + \cdots = 0$.    (44)

Again taking the limit $\varepsilon \to 0$, we get

$y_1(0) = 0$,    (45)

and if we repeat this procedure, we get

$y_1(0) = y_2(0) = y_3(0) = \cdots = 0$.    (46)

Now, comparing terms of the various powers of $\varepsilon$ on both sides of the differential equation (equation (41)) as well as the initial condition (equations (43)-(46)), we get the following sequence of equations with associated initial conditions, together with their solutions:

$\varepsilon^0$: $y_0'(x) = -y_0$, $x > 0$, $y_0(0) = 1$; solution $y_0(x) = e^{-x}$.    (47)

$\varepsilon^1$: $y_1'(x) = -y_1 + y_0^2$, $x > 0$, $y_1(0) = 0$, or $y_1'(x) + y_1 = e^{-2x}$, $y_1(0) = 0$; solution $y_1(x) = e^{-x} - e^{-2x}$.    (48)

$\varepsilon^2$: $y_2'(x) = -y_2 + 2y_0 y_1$, $x > 0$, $y_2(0) = 0$, or $y_2'(x) + y_2 = 2(e^{-2x} - e^{-3x})$, $y_2(0) = 0$; solution $y_2(x) = e^{-x} - 2e^{-2x} + e^{-3x}$.    (49)

An approximate solution is thus

$y_{appr}(x) = y_0 + \varepsilon y_1 + \varepsilon^2 y_2 = e^{-x} + \varepsilon(e^{-x} - e^{-2x}) + \varepsilon^2(e^{-x} - 2e^{-2x} + e^{-3x})$.    (50)

Recall that in the language of asymptotic expansions, this is a three-term solution.

Comparison with Exact Solution:

Recall that the exact solution of the system

$\frac{dy}{dx} = -y + \varepsilon y^2$, $y(0) = 1$,    (39a,b)

is given by

$y(x) = \frac{e^{-x}}{1 + \varepsilon(e^{-x} - 1)}$.    (38)
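Before expanding this exact solution analytically, a quick numerical check (an added Python sketch, not part of the notes) compares it with the three-term approximation (50); halving $\varepsilon$ reduces the maximum error by roughly a factor of 8, anticipating the $O(\varepsilon^3)$ estimate derived next.

    import math

    def y_exact(x, eps):
        # equation (38)
        return math.exp(-x) / (1.0 + eps * (math.exp(-x) - 1.0))

    def y_appr(x, eps):
        # three-term expansion, equation (50)
        e1, e2, e3 = math.exp(-x), math.exp(-2 * x), math.exp(-3 * x)
        return e1 + eps * (e1 - e2) + eps ** 2 * (e1 - 2 * e2 + e3)

    for eps in (0.2, 0.1, 0.05):
        err = max(abs(y_exact(0.01 * k, eps) - y_appr(0.01 * k, eps)) for k in range(501))
        print(eps, err)   # the error drops by roughly a factor of 8 each time eps is halved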

This solution can be expanded in powers of $\varepsilon$ utilizing the binomial expansion

$\frac{1}{1+z} = (1+z)^{-1} = 1 - z + z^2 - z^3 + \cdots$    (51)

for small values of $z$. Using the expression $z = \varepsilon(e^{-x} - 1)$ in equation (51), we get

$y_{exact}(x) = \frac{e^{-x}}{1 + \varepsilon(e^{-x}-1)} = e^{-x}\left[1 - \varepsilon(e^{-x}-1) + \varepsilon^2(e^{-x}-1)^2 - \cdots\right] = e^{-x} + \varepsilon(e^{-x} - e^{-2x}) + \varepsilon^2(e^{-x} - 2e^{-2x} + e^{-3x}) + \cdots$    (52)

Now compare this solution with the approximate three-term solution derived earlier (repeated here):

$y_{appr}(x) = y_0 + \varepsilon y_1 + \varepsilon^2 y_2 = e^{-x} + \varepsilon(e^{-x} - e^{-2x}) + \varepsilon^2(e^{-x} - 2e^{-2x} + e^{-3x})$.    (50)

Clearly, the error $E$ in the approximation is

$E = y_{exact} - y_{appr} = \varepsilon^3 m_3(x) + \varepsilon^4 m_4(x) + \cdots$

for some functions $m_3(x), m_4(x), \ldots$. As we discussed in the context of algebraic equations, the error $E$ is of order $\varepsilon^3$, and we write it as $E = O(\varepsilon^3)$, where O stands for "big Oh".

4. A Nonlinear Oscillator

Consider a mass $m$ that is suspended from a spring, where the restoring force $F$ in the spring is related to the stretch $y$ of the spring by

$F = -\left(k\,y(t) + a\,y^3(t)\right)$,    (53)

with the coefficient $a$ of the cubic term small. The distance $y$ is the displacement of the mass particle, and there is no gravity (or the system is in a horizontal plane). Newton's second law then gives the equation of motion as

$m\frac{d^2y}{dt^2} = -ky - ay^3$,    (54)

with the initial conditions

$y(0) = A$, $\frac{dy}{dt}(0) = 0$,    (55a,b)

that is, the particle is displaced initially a distance $A$ and then released. To proceed with the analysis, it is convenient to first rescale the equation. This involves scaling of time as well as of the displacement. Let

$\tau = \frac{t}{\sqrt{m/k}}$, $u = y/A$.    (56a,b)

The equation of motion and the initial conditions then transform to

$\frac{d^2u}{d\tau^2} + u + \varepsilon u^3 = 0$, $u(0) = 1$, $\frac{du}{d\tau}(0) = 0$.    (57a,b,c)

Here $\varepsilon = aA^2/k \ll 1$, which is assumed to be the small parameter representing small nonlinear effects, and it is a dimensionless parameter. This is Duffing's equation, which we studied before when we considered second-order conservative systems. We will here try to solve this equation using the straightforward perturbation expansion.

Let us assume the solution of Duffing's equation in the form of the straightforward expansion in powers of $\varepsilon$:

$u(\tau, \varepsilon) = u_0(\tau) + \varepsilon u_1(\tau) + \varepsilon^2 u_2(\tau) + \cdots$    (58)

Substituting in the rescaled differential equation (equation (57a)) as well as the initial conditions (equations (57b,c)) gives:

$\frac{d^2}{d\tau^2}\left[u_0(\tau) + \varepsilon u_1(\tau) + \varepsilon^2 u_2(\tau) + \cdots\right] + u_0(\tau) + \varepsilon u_1(\tau) + \varepsilon^2 u_2(\tau) + \cdots + \varepsilon\left[u_0(\tau) + \varepsilon u_1(\tau) + \varepsilon^2 u_2(\tau) + \cdots\right]^3 = 0$,
$u_0(0) + \varepsilon u_1(0) + \varepsilon^2 u_2(0) + \cdots = 1$,
$u_0'(0) + \varepsilon u_1'(0) + \varepsilon^2 u_2'(0) + \cdots = 0$.    (59a,b,c)

Now we compare equal powers of $\varepsilon$ in each of these expressions to get the sequence of initial value problems:

$\varepsilon^0$: $\frac{d^2u_0}{d\tau^2} + u_0(\tau) = 0$, $u_0(0) = 1$, $u_0'(0) = 0$;    (60a,b,c)

$u_0(\tau) = \cos(\tau)$.    (61)

$\varepsilon^1$: $\frac{d^2u_1}{d\tau^2} + u_1(\tau) + u_0^3 = 0$, $u_1(0) = 0$, $u_1'(0) = 0$;
or $\frac{d^2u_1}{d\tau^2} + u_1(\tau) = -\cos^3(\tau) = -\left[3\cos(\tau) + \cos(3\tau)\right]/4$, $u_1(0) = 0$, $u_1'(0) = 0$;    (62a,b,c)

$u_1(\tau) = \left[\cos(3\tau) - \cos(\tau)\right]/32 - 3\tau\sin(\tau)/8$.    (63)

These are equations (60) and (62) for the first two terms in the solution expansion, and their solutions are given in equations (61) and (63). An approximate two-term solution (or a two-term approximation) is then

$u_{appr} = \cos(\tau) + \varepsilon\left[\{\cos(3\tau) - \cos(\tau)\}/32 - 3\tau\sin(\tau)/8\right]$.    (64)

Note that:

(i) the leading term $\cos(\tau)$ seems correct.

(ii) if $\tau < T_0$ for some bounded constant $T_0$ and $\varepsilon$ is "small", the correction term in the brackets is bounded and is "small".

(iii) if we let $\tau$ be large ($\tau \to \infty$), the correction term can be large even though $\varepsilon$ is small. More specifically, the correction term (second term) becomes of the same order as the first term in the expansion. So the validity of this solution approximation depends on $\tau$; that is, the expansion is non-uniform. Recall the situation in the algebraic case when a similar circumstance arose.

Remark: The problem in (iii) is due to the secular term

$-3\tau\sin(\tau)/8$.    (65)

There are many approaches to remove this difficulty. These include the method of Poincaré-Lindstedt, the method of multiple time scales, the method of strained coordinates, the method of averaging, etc. We will study some of these methods now. All these techniques introduce a way of removing the terms that give rise to the secular term in the solution.
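To see the non-uniformity concretely, the sketch below (an addition for illustration; it uses SciPy's solve_ivp, which is not part of the original notes) integrates Duffing's equation (57) numerically and compares it with the two-term expansion (64): the error is small for moderate $\tau$ but grows roughly linearly because of the secular term.

    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.1

    def duffing(t, y):
        # y[0] = u, y[1] = du/dtau ;  u'' + u + eps*u^3 = 0
        return [y[1], -y[0] - eps * y[0] ** 3]

    def u_two_term(tau):
        # straightforward two-term expansion, equation (64)
        return np.cos(tau) + eps * ((np.cos(3 * tau) - np.cos(tau)) / 32 - 3 * tau * np.sin(tau) / 8)

    tau = np.linspace(0.0, 100.0, 2001)
    sol = solve_ivp(duffing, (0.0, 100.0), [1.0, 0.0], t_eval=tau, rtol=1e-9, atol=1e-9)

    for T in (10, 30, 100):
        mask = tau <= T
        print(T, np.max(np.abs(sol.y[0][mask] - u_two_term(tau[mask]))))
    # the maximum error grows with the length of the time interval: the expansion is non-uniform in tau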

5. Poincaré-Lindstedt Method

The Poincaré-Lindstedt method is one of the many methods to avoid secular terms in the solution. The basic idea is to scale time through an unknown frequency $\omega$, which is determined such that the solution becomes uniform. So, we let

$\tau = \omega t$,    (66)

where

$\omega = \omega(\varepsilon) = 1 + \varepsilon\omega_1 + \varepsilon^2\omega_2 + \cdots$,    (67)

and then the solution is expressed in the straightforward expansion

$u(\tau, \varepsilon) = u_0(\tau) + \varepsilon u_1(\tau) + \varepsilon^2 u_2(\tau) + \cdots$

Example: We apply this technique now to the example of Duffing's equation,

$\frac{d^2u}{dt^2} + u + \varepsilon u^3 = 0$, $u(0) = 1$, $\frac{du}{dt}(0) = 0$,    (68)

again. Using the change of variables

$\tau = \omega t$, $u = u(\tau)$, $\omega = 1 + \varepsilon\omega_1 + \varepsilon^2\omega_2 + \cdots$,    (69a,b,c)

the derivatives transform via the chain rule to

$\frac{du}{dt} = \omega\frac{du}{d\tau}$, $\frac{d^2u}{dt^2} = \omega^2\frac{d^2u}{d\tau^2}$.    (70a,b)

The problem (68) is thus transformed to

$\omega^2\frac{d^2u}{d\tau^2} + u + \varepsilon u^3 = 0$,    (71)

$u(0) = 1$, $\frac{du}{d\tau}(0) = 0$.    (72a,b)

Substituting the expansions for $\omega$ and $u$ into these expressions, we get

$(1 + \varepsilon\omega_1 + \varepsilon^2\omega_2 + \cdots)^2\frac{d^2}{d\tau^2}\left[u_0 + \varepsilon u_1 + \cdots\right] + \left[u_0 + \varepsilon u_1 + \cdots\right] + \varepsilon\left[u_0 + \varepsilon u_1 + \cdots\right]^3 = 0$    (73)

and

$u_0(0) + \varepsilon u_1(0) + \cdots = 1$, $u_0'(0) + \varepsilon u_1'(0) + \cdots = 0$.    (74a,b)

Now, comparing terms of equal powers of $\varepsilon$ in equation (73) as well as in (74a,b), we get, for the first two powers,

$\varepsilon^0$: $\frac{d^2u_0}{d\tau^2} + u_0 = 0$, $u_0(0) = 1$, $u_0'(0) = 0$; $u_0(\tau) = \cos(\tau)$;    (75)

$\varepsilon^1$: $\frac{d^2u_1}{d\tau^2} + u_1 = -2\omega_1\frac{d^2u_0}{d\tau^2} - u_0^3 = \left(2\omega_1 - \tfrac{3}{4}\right)\cos(\tau) - \tfrac{1}{4}\cos(3\tau)$, $u_1(0) = 0$, $u_1'(0) = 0$.    (76)

The first right-hand (non-homogeneous) term in equation (76) is in resonance with the homogeneous solution of the differential equation. It would thus lead to a secular contribution proportional to $\tau\sin(\tau)$ in the solution of equation (76), that is, in the first correction term, which is of order $\varepsilon$. This can be avoided by setting the coefficient of this term in equation (76) to zero. Thus, choosing

$\omega_1 = \frac{3}{8}$,    (77)

we see that we can avoid the secular term in the solution at order $\varepsilon$. Equation (76) then reduces to

$\frac{d^2u_1}{d\tau^2} + u_1 = -\frac{1}{4}\cos(3\tau)$, $u_1(0) = 0$, $u_1'(0) = 0$,

with the particular solution $\frac{1}{32}\cos(3\tau)$ and, after applying the initial conditions,

$u_1(\tau) = \frac{1}{32}\left[\cos(3\tau) - \cos(\tau)\right]$.    (78)

Combining the solutions for the two terms, we have a first-order perturbation solution of Duffing's equation (equations (57)):

$u(\tau) \approx \cos(\tau) + \frac{\varepsilon}{32}\left[\cos(3\tau) - \cos(\tau)\right]$,    (79)

where, recalling that $\tau = \omega t$ and $\omega = 1 + \varepsilon\omega_1 + O(\varepsilon^2)$, we have

$\omega \approx 1 + \frac{3\varepsilon}{8}$,    (80)

$u(t) \approx \cos(\omega t) + \frac{\varepsilon}{32}\left[\cos(3\omega t) - \cos(\omega t)\right]$.    (81)

This expansion contains no secular term: the correction remains small for all $\tau$ when $\varepsilon$ is small, so the approximation is uniform.
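A quick numerical check of the Poincaré-Lindstedt result (81) (an added sketch, again using SciPy for the reference solution) shows that, unlike the straightforward expansion, its error stays small over long times:

    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.1
    omega = 1.0 + 3.0 * eps / 8.0      # equation (80)

    def duffing(t, y):
        return [y[1], -y[0] - eps * y[0] ** 3]

    def u_pl(t):
        # Poincare-Lindstedt first-order solution, equation (81)
        return np.cos(omega * t) + eps / 32.0 * (np.cos(3 * omega * t) - np.cos(omega * t))

    t = np.linspace(0.0, 100.0, 2001)
    sol = solve_ivp(duffing, (0.0, 100.0), [1.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-9)
    print(np.max(np.abs(sol.y[0] - u_pl(t))))
    # far smaller than the secular-expansion error over the same interval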

6. Order-Notation

We write

$f(\varepsilon) = o(g(\varepsilon))$ as $\varepsilon \to 0$    (82)

if

$\lim_{\varepsilon\to 0} \frac{f(\varepsilon)}{g(\varepsilon)} = 0$.    (83)

Then we say that $f$ is small-order of $g$ as $\varepsilon$ goes to 0. This small o is also called "small oh".

We write

$f(\varepsilon) = O(g(\varepsilon))$ as $\varepsilon \to 0$    (84)

if there is a positive constant $M$ such that

$|f(\varepsilon)| \leq M\,|g(\varepsilon)|$    (85)

for all $\varepsilon$ belonging to some neighborhood of 0. One says that $f$ is large-order of $g$ as $\varepsilon$ goes to 0. This large O is also called "big oh".

Example:

$\varepsilon^2 = o(\varepsilon)$ as $\varepsilon \to 0$,    (86)

since

$\lim_{\varepsilon\to 0} \frac{\varepsilon^2}{\varepsilon} = \lim_{\varepsilon\to 0} \varepsilon = 0$.    (87)

Example:

$\sin(\varepsilon) = O(\varepsilon)$ as $\varepsilon \to 0$,    (88)

since

$|\sin(\varepsilon)| \leq |\varepsilon|$    (89)

for all $\varepsilon$. However, we do not have that

$\sin(\varepsilon) = o(\varepsilon)$,    (90)

since

$\lim_{\varepsilon\to 0} \frac{\sin(\varepsilon)}{\varepsilon} = 1 \neq 0$.    (91)
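A small numerical illustration of the two symbols (an addition, not in the notes): the ratio $f/g$ tends to 0 in the small-o case but only stays bounded in the big-O case.

    import math

    for eps in (0.1, 0.01, 0.001, 0.0001):
        print(eps,
              eps ** 2 / eps,          # -> 0 : eps^2 = o(eps)
              math.sin(eps) / eps)     # -> 1 : sin(eps) = O(eps) but not o(eps)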

for all. However, we don't have that since (90) (9) 7. Regular perturbation does not always work Example: Consider the boundary value problem that depends on a small parameter: Let us assume a solution for y=y(t,ε) in the form of a straight forward expansion : (9a,b,c) and substitute in the equations (9) to get the differential equation (93) and the boundary conditions (94) (95) Comparing for different powers of, we get at the lowest order: (96a,b,c) Note that: (i) the general solution of the differential equation in equation (96a), which is a firstorder equation as opposed to the original second-order system, is (97) 8

(ii) the boundary condition at $t = 0$ is

$y_0(0) = 0$,    (98)

which gives the solution

$y_0(t) \equiv 0$.    (99)

Note that the boundary value at $t = 1$,

$y_0(1) = 1$,    (100)

is then not fulfilled.

(iii) using the boundary value at $t = 1$,

$y_0(1) = 1$,    (101)

we get the solution

$y_0(t) = e^{1-t}$,    (102)

but then the boundary value at $t = 0$,

$y_0(0) = 0$,    (103)

is not fulfilled.

This shows that the regular perturbation expansion does not work for this problem, at least in this form.

8. Inner and outer approximations

Now, we want to discover the exact nature of the solution of the differential equation (boundary value problem) and boundary conditions in equations (92a,b,c), which will also allow us to understand why the above perturbation approach failed. Again consider the problem

$\varepsilon\frac{d^2y}{dt^2} + (1+\varepsilon)\frac{dy}{dt} + y = 0$, $0 < t < 1$, $y(0) = 0$, $y(1) = 1$.    (92a,b,c)

Note that this is a linear second-order differential equation with constant coefficients, so the solution should be easy for you to derive. It can be shown that the exact solution $y = y(t)$ is

$y(t) = \frac{e^{-t} - e^{-t/\varepsilon}}{e^{-1} - e^{-1/\varepsilon}}$.    (104)

This exact solution is plotted in a figure in the original notes. Now we consider the perturbation techniques.

1) Let $\varepsilon = 0$. Then, as seen above, we get the unperturbed problem

$\frac{dy}{dt} + y = 0$, $y(1) = 1$.    (105)

The general solution of the differential equation is

$y(t) = Ce^{-t}$.    (106)

The solution that fulfills the boundary condition at $t = 1$ is

$y_o(t) = e^{1-t}$.    (107)

This solution is called an outer solution; it agrees well with the exact solution for "large" $t$ ($t$ close to 1). Clearly, for small $t$, the outer solution deviates significantly from the true solution.

2) For "small" $t$ ($t$ close to 0), a good approximation to the solution (called the inner approximation) is

$y_i(t) = e\left(1 - e^{-t/\varepsilon}\right)$.    (108)

This approximation is close to the exact solution near $t = 0$, but fails outside this neighborhood.

9. Singular perturbation - when does regular perturbation not work?

We saw in the case of algebraic equations that there can be many reasons for the failure of straightforward expansions. These include situations when

1) the highest-order derivative is multiplied by $\varepsilon$;
2) the problem totally changes character when the parameter $\varepsilon$ is equal to zero;
3) the problem is defined over an infinite domain;
4) singular points are present in the domain;
5) the equation models physical processes with several time or length scales.

Problems of types 1-5 are called singular perturbation problems. In many cases one deals with problems containing boundary layers. We can roughly treat these problems by

i) letting $\varepsilon = 0$ to get a good approximation in the outer region;
ii) rescaling the problem to get an inner approximation;
iii) matching the inner and outer approximations.

The result is a matched asymptotic expansion. We now elaborate on these steps for the example under consideration in the following sections.
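The sketch below (an addition; plain Python, no plotting) tabulates the exact solution (104) against the outer approximation (107) and the inner approximation (108) for $\varepsilon = 0.05$, showing that each approximation is accurate only in its own region:

    import math

    eps = 0.05

    def y_exact(t):
        # equation (104)
        return (math.exp(-t) - math.exp(-t / eps)) / (math.exp(-1) - math.exp(-1 / eps))

    def y_outer(t):
        # equation (107): valid away from t = 0
        return math.exp(1 - t)

    def y_inner(t):
        # equation (108): valid inside the boundary layer near t = 0
        return math.e * (1.0 - math.exp(-t / eps))

    for t in (0.01, 0.05, 0.1, 0.5, 1.0):
        print(t, y_exact(t), y_outer(t), y_inner(t))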

10. The outer approximation

We get the outer approximation by setting $\varepsilon = 0$ in equations (92a,b,c). In our example this means that we should solve the equation

$\frac{dy_o}{dt} + y_o = 0$, $y_o(1) = 1$.

This has the solution

$y_o(t) = e^{1-t}$    (109)

(here the subscript o refers to "outer"), which agrees well with the exact solution

$y(t) = \frac{e^{-t} - e^{-t/\varepsilon}}{e^{-1} - e^{-1/\varepsilon}}$,    (110)

since

$e^{-t/\varepsilon} \to 0$    (111)

and

$e^{-1/\varepsilon} \to 0$    (112)

for small $\varepsilon$ (with $t$ bounded away from 0). Continuing further, if $t = O(1)$, we have that $e^{-t/\varepsilon} = o(\varepsilon^n)$ for every $n > 0$, which implies that

$y(t) = y_o(t) + o(\varepsilon^n)$ for every $n > 0$.    (113)

Let us define (5) so that we get (6) The equation then transforms to (7) Consider the coefficients of the three terms in the equation above: (8) The problem in the original equation is that the coefficient of the highest derivative y" is small compared to the others. To avoid this problem we thus choose the main coefficient to be of the same order as one of the other coefficients and that the other two coefficients are comparably small. We demonstrate this dominant terms argument (previously considered as well in the algebraic equations and in differential equations) with the procedure below (recall that is small): 3

Case 1): $\delta = \varepsilon$ (balancing the first and second coefficients): the coefficients become $1/\varepsilon$, approximately $1/\varepsilon$, and $1$, so the remaining coefficient is comparably small.

Case 2): $\delta = \sqrt{\varepsilon}$ (balancing the first and third coefficients): the coefficients become $1$, approximately $1/\sqrt{\varepsilon}$, and $1$, so the middle coefficient dominates the two that were balanced.

Case 3): $\delta = 1$ (no rescaling): the coefficients become $\varepsilon$, approximately $1$, and $1$, so the leading coefficient is still small; this is the original difficulty.

We see that we only have one possibility, and that is Case 1, where the leading coefficient is of the same order as the largest of the remaining coefficients. We therefore choose

$\delta = \varepsilon$.    (119)

The transformed equation then becomes

$\frac{d^2z}{d\tau^2} + (1+\varepsilon)\frac{dz}{d\tau} + \varepsilon z = 0$.    (120)

In this equation, posed on the expanded time scale $\tau = t/\varepsilon$, $\varepsilon$ appears as a regular perturbation parameter, and so we can again use a straightforward expansion for the solution $z(\tau)$. Thus, at the lowest order, we now put $\varepsilon = 0$ to get the equation

$\frac{d^2z_0}{d\tau^2} + \frac{dz_0}{d\tau} = 0$,    (121)

with the solution

$z_0(\tau) = a + b\,e^{-\tau}$.    (122)

Note that this is a second-order equation, so there are two constants of integration. Thus, both boundary conditions can be used. The boundary condition $z(0) = y(0) = 0$ yields

$b = -a$.    (123)

Thus this solution, also called an inner approximation, is

$y_i(t) = z_0(t/\varepsilon) = a\left(1 - e^{-t/\varepsilon}\right)$.    (124)

The remaining problem is to determine the constant $a$ and match the inner and outer approximations.

12. Matching

The inner and outer solutions, as well as their regions of applicability (an inner region near $t = 0$, an outer region away from it, and an overlapping region in between), are sketched in a figure in the original notes. In the overlapping region we let

$t \to 0$ while $\frac{t}{\varepsilon} \to \infty$ as $\varepsilon \to 0$,    (125)

and introduce the intermediate variable

$\eta = \frac{t}{\sqrt{\varepsilon}}$.    (126)

This is a time scale that is "between" the inner time scale

$\tau = \frac{t}{\varepsilon}$    (127)

and the outer time scale

$t$,    (128)

and thus the name intermediate time scale. To be able to match the approximations, we require that the outer and inner approximations (written with respect to the intermediate variable) must agree in the limit as $\varepsilon \to 0$, that is,

$\lim_{\varepsilon\to 0} y_i\left(\sqrt{\varepsilon}\,\eta\right) = \lim_{\varepsilon\to 0} y_o\left(\sqrt{\varepsilon}\,\eta\right)$    (129)

for a fixed (positive) value of $\eta$, the intermediate variable. In our case, this means that

$\lim_{\varepsilon\to 0} a\left(1 - e^{-\eta/\sqrt{\varepsilon}}\right) = a = \lim_{\varepsilon\to 0} e^{1 - \sqrt{\varepsilon}\,\eta} = e$.    (130)

This implies that $a = e$, and that the inner approximation therefore must be

$y_i(t) = e\left(1 - e^{-t/\varepsilon}\right)$.    (131)

This is known as the matching principle in the method of matched asymptotic expansions.

Finally, we want to find a solution that is valid on the whole interval $[0,1]$. We therefore construct $y_u$ from the sum of the inner and outer approximations minus their common limit $e$ in the overlapping region (since it would otherwise be counted twice):

$y_u(t) = y_o(t) + y_i(t) - e = e^{1-t} - e^{1-t/\varepsilon}$.    (132)

When $t$ is in the outer region, the second term is small and $y_u$ is thus approximately $e^{1-t}$, which is exactly the outer approximation. When $t$ is in the boundary layer, the first term is close to $e$ and $y_u$ is thus approximately $e\left(1 - e^{-t/\varepsilon}\right)$, which is exactly the inner approximation. In the overlapping region, both the inner and the outer approximations are approximately equal to $e$, which makes the sum of $y_i$ and $y_o$ close to $2e$ there, that is, twice as much as it should be. That is why we have to subtract the common limit from the sum.

If we insert $y_u$ into the original differential equation, we see that

$\varepsilon y_u'' + (1+\varepsilon)y_u' + y_u = 0$,    (133)

that is, $y_u$ satisfies the equation exactly on the interval $(0,1)$. If we investigate the boundary conditions, we see that

$y_u(0) = 0$, $y_u(1) = 1 - e^{1-1/\varepsilon}$.    (134)

The left condition is exactly fulfilled, and the right one is fulfilled up to $o(\varepsilon^n)$ for any $n > 0$, since $e^{-1/\varepsilon}$ tends to zero faster than any power of $\varepsilon$. We thus see that $y_u$ is a uniformly good approximation on the interval $[0,1]$.
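The composite approximation can also be checked numerically (an added sketch): for $\varepsilon = 0.05$ the difference between $y_u$ from (132) and the exact solution (104) is uniformly tiny on $[0,1]$.

    import math

    eps = 0.05

    def y_exact(t):
        # equation (104)
        return (math.exp(-t) - math.exp(-t / eps)) / (math.exp(-1) - math.exp(-1 / eps))

    def y_uniform(t):
        # composite approximation (outer + inner - common limit), equation (132)
        return math.exp(1 - t) - math.exp(1 - t / eps)

    print(max(abs(y_exact(k / 1000.0) - y_uniform(k / 1000.0)) for k in range(1001)))
    # of the order of e^(1 - 1/eps), i.e. far smaller than any power of eps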

13. Another example of singular perturbations

Example: Consider the differential equation (135). This again is a boundary value problem on a bounded domain, with $\varepsilon$ as a small parameter. We get the outer approximation by putting $\varepsilon = 0$ and solving the resulting equation with the outer boundary condition, (136), that is, (137).

We now look for the inner approximation. Let us define the inner variable by a scaling of the form $\tau = t/\delta(\varepsilon)$. The equation is then transformed to (138). Comparing the leading coefficient with the other coefficients ($\varepsilon$ being assumed small), there are now two cases, Case 1) and Case 2). We see that in only one of these the remaining coefficient is much smaller than the other two. We therefore choose the corresponding scaling $\delta$ and make the change of variables.

Equation (138) then becomes (139). We get the inner approximation (at the lowest order) when we put $\varepsilon = 0$ and solve the resulting equation, (140), which in the original variables is (141). The constants $a$ and $b$ in this inner solution need to be determined by first imposing the boundary condition at $t = 0$, and then by matching the inner and outer solutions at an intermediate scale. The boundary condition at $t = 0$ gives (142).

Let us match these approximations, equations (137) and (142). Introducing an intermediate variable as before, the matching condition (143) then implies (144), and the inner approximation becomes fully determined. The final composite approximation $y_u$, equation (145), is obtained by adding the inner and outer approximations and subtracting their common limit in the overlapping region.