Asymptotic Expansions

Suppose we have a function $f(x)$ of a single real parameter $x$ and we are interested in an approximation to $f(x)$ for $x$ close to $x_0$. The approximation might depend on whether $x_0$ is approached from below or from above. Nevertheless, assume that once the direction is fixed the limit $\lim_{x\to x_0} f(x) = A$ exists. The limit value $A$ can be finite, $0$, or $\infty$, but in all cases what we really want to know is the rate at which $f(x)\to A$ as $x\to x_0$, i.e., how the function $f(x)$ approaches the value $A$ asymptotically as $x\to x_0$ along the chosen direction. In short, we would like to know the asymptotic behavior of $f(x)$ as $x\to x_0$. The asymptotic behavior can be studied by comparing the function $f(x)$ as $x\to x_0$ with some known functions, called gauge functions. Asymptotic analysis thus requires determining the relative order of functions.

The above example is a particular case of coordinate perturbations of a function of a single real variable. Perturbation methods generally deal with functions $f(x_1,\dots,x_n;\epsilon_1,\dots,\epsilon_m)$ depending on two sets of arguments: the set $(x_1,\dots,x_n)$ are called variables and the set $(\epsilon_1,\dots,\epsilon_m)$ are called parameters. The distinction between the two sets is not intrinsic and depends on the context. Asymptotic analysis can be used for both parameter and coordinate perturbation methods.

The interest in the asymptotic analysis of functions stems from the fact that it provides approximations to the functions close to the point(s) of interest. Suppose we are interested in an approximation to the function $f(x;\epsilon)$ for $x$ in some region $D$ as the parameter $\epsilon$ approaches some distinguished value $\epsilon_0$. Unless otherwise stated we assume $\epsilon_0 = 0$ and that $\epsilon$ approaches $0$ through real positive values: $\epsilon\to 0^+$. In the following, unless necessary, we shall not specify the direction along which the distinguished value is approached. Nevertheless, it must be kept in mind that the asymptotic behavior may depend on the chosen path. An approximation to $f(x;\epsilon)$ is a function $g(x;\epsilon)$ which is uniformly close to $f(x;\epsilon)$ as $\epsilon\to 0$ for all $x$ in $D$.

Definition. The function $g(x;\epsilon)$ is an approximation to $f(x;\epsilon)$ as $\epsilon\to 0$ uniformly valid to order $\eta(\epsilon)$ for $x\in D$ if
\[
\lim_{\epsilon\to 0}\frac{|f(x;\epsilon)-g(x;\epsilon)|}{\eta(\epsilon)} = 0 \quad\text{uniformly for } x\in D,
\]
i.e., if for any small positive number $\delta$ there exists $\epsilon_\delta>0$ such that for all $x\in D$
\[
|f(x;\epsilon)-g(x;\epsilon)| \le \delta\,\eta(\epsilon) \quad\text{for } 0<\epsilon<\epsilon_\delta,
\]
with $\delta$ and $\epsilon_\delta$ independent of $x$.

The function $\eta(\epsilon)$ is called a gauge function. The simplest gauge functions are the powers $\dots,\epsilon^{-2},\epsilon^{-1},1,\epsilon,\epsilon^2,\dots$, but other choices are possible.

Even if a function of a single variable $f(x)$ can always be seen as a function $f(x;\epsilon)$ where $x$, or $\epsilon$, is kept fixed, the case of functions with more than one argument needs some care. Thus, we first consider real-valued functions of a single variable $f(x)$ and then extend the definitions to the case of functions $f(x;\epsilon)$. Generalization to other cases is straightforward.

1. The Order Relations

The concept of order of magnitude of functions is central in asymptotic analysis. The precise mathematical definition of the intuitive ideas of "same order of magnitude", "smaller order of magnitude" and "equivalent" is provided by the Bachmann-Landau symbols $O$, $o$ and $\sim$.

1.1. The O and $\sim$ symbols

The formal definition of the $O$-symbol used in asymptotic analysis is:

Definition. Let $D\subseteq\mathbb{R}$, $x_0$ a condensation point of $D$ and $f,g : D\setminus\{x_0\}\to\mathbb{R}$ real functions defined on $D$, except possibly at $x_0$. We say $f = O(g)$ as $x\to x_0$ if there exist a positive constant $C$ and a punctured neighborhood $U_C(x_0)$ of $x_0$ such that
\[
|f(x)| \le C\,|g(x)| \quad\text{for all } x\in U_C(x_0)\cap D. \tag{1}
\]
We say that $f(x)$ is asymptotically bounded by $g(x)$ as $x\to x_0$, or simply $f(x)$ is order "big O" of $g(x)$ as $x\to x_0$. Notice that if the ratio $f/g$ is defined then (1) implies that $f/g$ is bounded above by $C$ in $U_C(x_0)\cap D$, or, equivalently, that
\[
\lim_{x\to x_0}\left|\frac{f(x)}{g(x)}\right| < \infty \quad\text{in } D.
\]
The above definition does not inquire into the fate of $f$ and $g$ at $x_0$. At $x_0$ they can be either defined or not. If $f = O(g)$ as $x\to x_0$ for all $x_0$ in $D$ then $f$ is bounded by $g$ everywhere in $D$. In this case we say that $f = O(g)$ in $D$.

The $O$-symbol provides a one-sided bound: the statement $f = O(g)$ as $x\to x_0$ does not necessarily imply that $f$ and $g$ are of the same order of magnitude. Consider for example the functions $f = x^\alpha$ and $g = x^\beta$, where $\alpha$ and $\beta$ are constants with $\alpha>\beta$. Then $x^\alpha = O(x^\beta)$ as $x\to 0^+$ because the choice $C = \delta^{\alpha-\beta}$ and $U_C(0): 0<x<\delta$ satisfies (1). Moreover, $x^\alpha/x^\beta \to 0$ as $x\to 0^+$ in this case. The reverse statement $x^\beta = O(x^\alpha)$ as $x\to 0^+$ is clearly not true.

Two functions $f$ and $g$ are strictly of the same order of magnitude if $f = O(g)$ and $g = O(f)$ as $x\to x_0$. To stress this point one sometimes introduces the notation
\[
f = O_s(g), \quad\text{as } x\to x_0,
\]
for $f$ strictly of order $g$, to indicate that $f = O(g)$ and $g = O(f)$ as $x\to x_0$.

In this case $\lim_{x\to x_0} f/g$ exists and is neither zero nor infinity. We shall not emphasize the distinction and use only the symbol $O$. However, the possible one-sided nature of the $O$-symbol should always be kept in mind.

If $\lim_{x\to x_0}(f/g)$ exists and is equal to $1$ we say that $f(x)$ is asymptotically equal to $g(x)$ as $x\to x_0$, or simply $f(x)$ goes as $g(x)$ as $x\to x_0$, and write
\[
f \sim g, \quad\text{as } x\to x_0.
\]
For example, as $x\to 0$,
\[
\sin x \sim x, \qquad \sin(7x) = O(x), \qquad 1-\cos x = O(x^2),
\]
\[
\log(1+x) \sim x, \qquad e^x \sim 1, \qquad \sin(1/x) = O(1).
\]
Notice that $1 = O(\sin(1/x))$ as $x\to 0$ is not true, because $\sin(1/x)$ vanishes in every neighborhood of $x=0$. Similarly, as $x\to+\infty$,
\[
\cosh(3x) = O(e^{3x}), \qquad \log(1+7x) \sim \log x, \qquad \sum_{n=1}^{N} a_n x^n = O(x^N).
\]
In the case of functions $f(x;\epsilon)$, with $x$ varying over some specified domain $D$ and $\epsilon$ on the interval $I: 0<\epsilon\le\epsilon_1$, we say that
\[
f(x;\epsilon) = O\bigl[g(x;\epsilon)\bigr] \quad\text{as } \epsilon\to 0 \text{ in } D,
\]
if for each $x$ in $D$ there exist a positive number $C_x$ and a punctured neighborhood $N_{C_x}$ of $\epsilon=0$ such that
\[
|f(x;\epsilon)| \le C_x\,|g(x;\epsilon)| \quad\text{for all } \epsilon\in N_{C_x}\cap I. \tag{2}
\]
If the ratio $f/g$ exists then from (2) it follows that
\[
\lim_{\epsilon\to 0}\left|\frac{f(x;\epsilon)}{g(x;\epsilon)}\right| < \infty, \quad\text{for all } x\in D.
\]
The value of the limit may depend on $x$. If the positive constant $C$ and the neighborhood $N_C$ do not depend on $x$, the relation is said to be uniformly valid in $D$. For example,
\[
\cos(x+\epsilon) = O(1) \quad\text{as } \epsilon\to 0 \text{ uniformly in } \mathbb{R},
\]
\[
e^{\epsilon x}-1 = O(\epsilon) \quad\text{as } \epsilon\to 0 \text{ nonuniformly in } \mathbb{R}.
\]
Notice that, while not uniformly valid in $D$, a relation might be uniformly valid in a subdomain of $D$. For example, the second relation is uniformly valid in the subdomain $x\in[0,1]\subset\mathbb{R}$. As in the case of a single variable function, if $\lim_{\epsilon\to 0}(f/g) = 1$ for all $x\in D$ we say that $f$ is asymptotically equal to $g$ in $D$ and write
\[
f(x;\epsilon) \sim g(x;\epsilon) \quad\text{as } \epsilon\to 0 \text{ for } x\in D.
\]
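A quick numerical check illustrates the difference between the uniform and the nonuniform order relation above. The following is a minimal Python sketch (the value of $\epsilon$ and the sample points are arbitrary choices, not taken from the text): the ratio $|e^{\epsilon x}-1|/\epsilon$ stays finite for each fixed $x$ but the bound grows with $x$, whereas $|\cos(x+\epsilon)|$ is bounded by $1$ independently of $x$.

```python
import numpy as np

# Nonuniform relation: |e^{eps*x} - 1| <= C_x * eps, with C_x depending on x.
# Uniform relation:    |cos(x + eps)| <= 1,          with a constant independent of x.
eps = 1e-6
for x in [1.0, 10.0, 100.0, 1000.0]:
    ratio = abs(np.expm1(eps * x)) / eps      # ~ |x| for small eps: the "constant" C_x grows with x
    print(f"x = {x:7.1f}   |e^(eps x)-1|/eps = {ratio:10.4f}   |cos(x+eps)| = {abs(np.cos(x + eps)):.4f}")
```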

1.2. The o-symbol

The formal definition of the $o$-symbol is:

Definition. Let $D\subseteq\mathbb{R}$, $x_0$ a condensation point of $D$ and $f,g : D\setminus\{x_0\}\to\mathbb{R}$ real functions defined on $D$, except possibly at $x_0$. We say $f = o(g)$ as $x\to x_0$ if for every $\delta>0$ there exists a punctured neighborhood $U_\delta(x_0)$ of $x_0$ such that
\[
|f(x)| \le \delta\,|g(x)| \quad\text{for all } x\in U_\delta(x_0)\cap D. \tag{3}
\]
This inequality indicates that $f$ becomes arbitrarily small compared to $g$ as $x\to x_0$. Thus, if the ratio $f/g$ is defined, (3) implies that
\[
\lim_{x\to x_0}\left|\frac{f(x)}{g(x)}\right| = 0.
\]
Notice that $f = o(g)$ as $x\to x_0$ always implies $f = O(g)$ as $x\to x_0$, though not in the strict sense. The converse is clearly not true. Moreover, the asymptotic equivalence $f(x)\sim g(x)$ as $x\to x_0$ means that $f(x) = g(x) + o\bigl(g(x)\bigr)$ as $x\to x_0$.

If $f = o(g)$ as $x\to x_0$ we say that $f(x)$ is asymptotically smaller than $g(x)$ as $x\to x_0$, or simply $f(x)$ is order "little o" of $g(x)$ as $x\to x_0$. Often the alternative notation $f\ll g$ as $x\to x_0$, read $f$ much smaller than $g$ as $x\to x_0$, is used in place of $f = o(g)$.

As examples, as $x\to 0$,
\[
x^3 = o(x), \qquad \cos x = o(x^{-1}), \qquad \sin(7x) = o(1),
\]
\[
\log(1+x^2) = o(x), \qquad e^x - 1 = o(x^{1/3}), \qquad e^{-1/x^2} = o(x^n) \text{ for all } n\in\mathbb{N},
\]
while as $x\to+\infty$,
\[
x^2 = o(x^3), \qquad \log x = o(x), \qquad \sum_{n=1}^{N} a_n x^n = o(x^{N+1}).
\]
Generalization to functions $f(x;\epsilon)$ with $x$ in some domain $D$ and $\epsilon$ in the interval $I: 0<\epsilon\le\epsilon_1$ is immediate. The statement
\[
f(x;\epsilon) = o\bigl[g(x;\epsilon)\bigr] \quad\text{as } \epsilon\to 0 \text{ for } x\in D,
\]
means that for each $x$ in $D$ and any given $\delta>0$ there exists a punctured neighborhood $I_\delta(x): 0<\epsilon\le\epsilon_\delta(x)$ of $\epsilon=0$ such that
\[
|f(x;\epsilon)| \le \delta\,|g(x;\epsilon)| \quad\text{for all } \epsilon\in I_\delta(x)\cap I.
\]
This inequality implies that if the ratio $f/g$ exists then
\[
\lim_{\epsilon\to 0}\left|\frac{f(x;\epsilon)}{g(x;\epsilon)}\right| = 0, \quad\text{for all } x\in D.
\]

If $\epsilon_\delta$ is independent of $x$ we say that $f(x;\epsilon) = o(g(x;\epsilon))$ as $\epsilon\to 0^+$ uniformly in $D$. For example,
\[
\cos(x+\epsilon) = o(\epsilon^{-1}) \quad\text{as } \epsilon\to 0 \text{ uniformly in } \mathbb{R},
\]
\[
e^{\epsilon x}-1 = o(\epsilon^{1/2}) \quad\text{as } \epsilon\to 0 \text{ nonuniformly in } \mathbb{R},
\]
\[
e^{\epsilon x}-1 = o(\epsilon^{1/2}) \quad\text{as } \epsilon\to 0 \text{ uniformly in } D: |x|\le 1.
\]
In asymptotic analysis the $O$ (in the strict sense) and $\sim$ symbols are usually more relevant than the $o$-symbol, because the latter hides much information. We are more interested in knowing how a function behaves close to some point than in merely knowing that it is much smaller than some other function. For example,
\[
\sin x = x + o(x^2) \quad\text{as } x\to 0
\]
tells us that $\sin x - x$ vanishes faster than $x^2$ as $x\to 0$, while
\[
\sin x = x + O(x^3) \quad\text{as } x\to 0 \quad\text{(in the strict sense)}
\]
tells us that $\sin x - x$ vanishes as $x^3$ as $x\to 0$.

2. Asymptotic Expansion

An asymptotic expansion describes the asymptotic behavior of a function close to a point in terms of some reference functions, the gauge functions. The definition was introduced by Poincaré (1886) and provides a mathematical framework for the use of many divergent series.

We shall call the set of functions $\varphi_n : D\setminus\{x_0\}\to\mathbb{R}$, $n = 0,1,\dots$, an asymptotic sequence as $x\to x_0$ if for each $n$ we have
\[
\varphi_{n+1}(x) = o\bigl[\varphi_n(x)\bigr] \quad\text{as } x\to x_0.
\]
Some examples of asymptotic sequences are:

Example 2.1. The functions $\varphi_n(x) = (x-x_0)^n$ form an asymptotic sequence as $x\to x_0$.

Example 2.2. The functions $\varphi_n(x) = x^{n/2}$, $\varphi_n(x) = (\log x)^{-n}$ and $\varphi_n(x) = (\sin x)^n$ are examples of asymptotic sequences as $x\to 0^+$, while the functions $\varphi_n(x) = x^{-n}$ form an asymptotic sequence as $x\to+\infty$.

Example 2.3. The sequence $\{\log x, x^2, x^3, \dots\}$ forms an asymptotic sequence as $x\to 0^+$.

Example 2.4. The sequence $\{\dots, x^{-2}, x^{-1}, 1, x, x^2, \dots\}$ forms an asymptotic sequence as $x\to 0$, while the sequence $\{\dots, x^2, x, 1, x^{-1}, x^{-2}, \dots\}$ forms an asymptotic sequence as $x\to\infty$.

The asymptotic expansion of a function $f(x)$ can then be defined as follows.

Definition. Let the set of functions $\varphi_n(x)$, $n = 0,1,2,\dots$, form an asymptotic sequence as $x\to x_0$ and let $a_n$ be a sequence of numbers. Then the formal series $\sum_n a_n\varphi_n(x)$ is an asymptotic expansion of $f(x)$ as $x\to x_0$ with gauge functions $\varphi_n(x)$ if, and only if, for any $N$:
\[
f(x) - \sum_{n=0}^{N} a_n\varphi_n(x) = o\bigl[\varphi_N(x)\bigr] \quad\text{as } x\to x_0, \tag{4}
\]
or, equivalently,
\[
f(x) - \sum_{n=0}^{N-1} a_n\varphi_n(x) = O\bigl[\varphi_N(x)\bigr] \quad\text{as } x\to x_0. \tag{5}
\]
In this case we shall write
\[
f(x) \sim \sum_{n=0}^{\infty} a_n\varphi_n(x), \quad\text{as } x\to x_0. \tag{6}
\]
Asymptotic expansions of this form are also called asymptotic series.

The above definition of asymptotic expansion is not general enough to handle functions of more than one variable. Thus we use the more general definition:

Definition. Let the set of functions $\varphi_n(\epsilon)$, $n = 0,1,2,\dots$, form an asymptotic sequence as $\epsilon\to 0$. We say that the sequence of functions $g_n(x;\epsilon)$ forms an asymptotic sequence of approximations to $f(x;\epsilon)$ as $\epsilon\to 0$, uniformly valid for $x$ in some domain $D$, with gauge functions $\varphi_n(\epsilon)$, if
\[
\lim_{\epsilon\to 0}\frac{f(x;\epsilon) - g_n(x;\epsilon)}{\varphi_n(\epsilon)} = 0, \quad\text{for all } n, \tag{7}
\]
uniformly in $D$.

The endpoints of the domain $D$ where the approximation is valid may be functions of $\epsilon$ and may depend on $n$. Intervals with $\epsilon$-dependent boundaries are essential in asymptotic analysis because gauge functions are not unique.

If (7) holds then a method to construct an asymptotic expansion of $f(x;\epsilon)$ is to find a sequence of functions $a_n(x;\epsilon)$ such that
\[
g_n(x;\epsilon) = \sum_{m=0}^{n} a_m(x;\epsilon). \tag{8}
\]
Thus, since (7) holds, the formal series $\sum_n a_n(x;\epsilon)$ is a generalized asymptotic expansion of $f(x;\epsilon)$ and we write
\[
f(x;\epsilon) \sim \sum_{n=0}^{\infty} a_n(x;\epsilon) \quad\text{as } \epsilon\to 0, \tag{9}
\]
uniformly in $D$. Notice that nothing is said about the convergence of the series.

Often the series (9) does not converge, and even if it does, it may not converge to $f(x;\epsilon)$. The formal expression (9) is just a restatement of the asymptotic result (7) together with (8). It is left as an exercise to show that this definition, when applied to functions of a single variable $f(x)$, is equivalent to (6).

In the case of functions $f(x;\epsilon)$ the $a_n(x;\epsilon)$ may not have a simple expression in terms of $\varphi_n(\epsilon)$. Nevertheless, following (6):

Definition. The asymptotic expansion of the function $f(x;\epsilon)$, with $x$ varying in some domain $D$, as $\epsilon\to 0$ in the sense of Poincaré, or simply of Poincaré type, is an asymptotic expansion of the form
\[
f(x;\epsilon) \sim \sum_{n=0}^{\infty} a_n(x)\,\varphi_n(\epsilon), \quad\text{as } \epsilon\to 0, \tag{10}
\]
uniformly in $D$. The functions $a_n(x)$ are independent of $\epsilon$ and play the role of coefficients of the asymptotic expansion.

Poincaré type asymptotic expansions have useful properties not shared by generalized asymptotic expansions, for example they can be added or integrated term by term; hence they are largely used in asymptotic analysis. Their form is, however, not general enough to handle all problems that can be studied with perturbation techniques. Nevertheless, they can generally be used as a starting point to construct more suitable generalized asymptotic expansions.

Since Poincaré type expansions simply extend the definition of asymptotic expansion of functions $f(x)$ to functions $f(x;\epsilon)$, they are usually introduced using the following equivalent definition:

Definition. Let the set of functions $\varphi_n(\epsilon)$, $n = 0,1,2,\dots$, form an asymptotic sequence as $\epsilon\to 0$ and $a_n(x)$ a sequence of functions independent of $\epsilon$. Then the formal series $\sum_n a_n(x)\varphi_n(\epsilon)$ is an asymptotic expansion of Poincaré type of $f(x;\epsilon)$ as $\epsilon\to 0$ with gauge functions $\varphi_n(\epsilon)$, uniformly valid for $x$ in some domain $D$, if, and only if, for any $N$:
\[
f(x;\epsilon) - \sum_{n=0}^{N} a_n(x)\,\varphi_n(\epsilon) = o\bigl[\varphi_N(\epsilon)\bigr] \quad\text{as } \epsilon\to 0, \tag{11}
\]
or, equivalently,
\[
f(x;\epsilon) - \sum_{n=0}^{N-1} a_n(x)\,\varphi_n(\epsilon) = O\bigl[\varphi_N(\epsilon)\bigr] \quad\text{as } \epsilon\to 0, \tag{12}
\]
uniformly in $D$.

2.1. Uniqueness of Poincaré type Expansions and Subdominance

The asymptotic expansion of a function is not unique, because there exist infinitely many asymptotic sequences that can be used in the expansion.

However, given an asymptotic sequence of gauge functions $\varphi_n$, the Poincaré type expansion in terms of the $\varphi_n$ is unique. Consider for simplicity a function $f(x)$ of a single variable; the result holds for the Poincaré type expansions of functions $f(x;\epsilon)$ as well. From (5) it follows that
\[
a_N = \lim_{x\to x_0}\frac{f(x) - \sum_{n=0}^{N-1} a_n\varphi_n(x)}{\varphi_N(x)}.
\]
Thus, if the function $f$ admits an asymptotic expansion with respect to the sequence of gauge functions $\varphi_n$, the expansion is unique.

The converse is not true. For any asymptotic sequence $\varphi_n(x)$ as $x\to x_0$ there always exists a function $\psi(x)$ which is transcendentally small with respect to the sequence, i.e.,
\[
\psi = o(\varphi_n), \quad \forall n, \quad\text{as } x\to x_0.
\]
One may then form a sequence of transcendentally small terms, say by positive powers of $\psi$. A consequence of this is that an asymptotic expansion does not determine the function $f$ uniquely; hence different functions may have the same asymptotic expansion with respect to a given asymptotic sequence of gauge functions $\varphi_n$. This is illustrated by the following example.

Example 2.5. For any constant $c$, the functions
\[
f(x) = \frac{1}{1-x} \qquad\text{and}\qquad g(x) = \frac{1}{1-x} + c\,e^{-1/x}
\]
have the same asymptotic expansion as $x\to 0^+$ in terms of the gauge functions $\varphi_n(x) = x^n$:
\[
f(x) \sim g(x) \sim 1 + x + x^2 + \dots \quad\text{as } x\to 0^+,
\]
because $e^{-1/x}$ is transcendentally small with respect to $x^n$: $e^{-1/x} = o(x^n)$ as $x\to 0^+$ for every $n\in\mathbb{N}$.

An asymptotic expansion establishes an equivalence relation among functions: if for all $n$
\[
f(x) - g(x) = o\bigl[\varphi_n(x)\bigr] \quad\text{as } x\to x_0,
\]
then the functions $f$ and $g$ are asymptotically equivalent with respect to the gauge functions $\varphi_n$. The function $h(x) = f(x) - g(x)$ is said to be subdominant to the asymptotic expansion as $x\to x_0$. An asymptotic expansion is thus asymptotic to a whole class of functions that differ from one another by subdominant functions.

Notice that the concept of transcendental smallness is an asymptotic one. Transcendentally small terms might be numerically important for values of $x$ not too close to $x_0$, even if these values seem small.
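The last remark is easy to see numerically. The sketch below (Python; the choice $c = 1$ and the sample points are arbitrary) compares the two functions of Example 2.5: the subdominant term $c\,e^{-1/x}$ lies below every power of $x$ as $x\to 0^+$, yet at moderate values such as $x = 0.5$ it is far from negligible.

```python
import math

def f(x):
    return 1.0 / (1.0 - x)

def g(x, c=1.0):
    return 1.0 / (1.0 - x) + c * math.exp(-1.0 / x)

for x in [0.5, 0.2, 0.1, 0.05, 0.02]:
    sub = math.exp(-1.0 / x)          # the transcendentally small term
    print(f"x = {x:5.2f}   f = {f(x):.8f}   g = {g(x):.8f}   e^(-1/x) = {sub:.3e}   x^10 = {x**10:.3e}")
# At x = 0.5 the subdominant term (~0.135) is larger than x^10, while at
# x = 0.02 it is ~2e-22, far below x^10: the same term is or is not negligible
# depending on how close x is to 0.
```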

2.2. Uniformly vs Nonuniformly valid Asymptotic Expansions

In perturbation calculations one often encounters situations where the quantity to be expanded is a function $f(x;\epsilon)$, where $x$ is a variable independent of $\epsilon$ varying in some domain $D$, and $\epsilon$ is the perturbation parameter, say $\epsilon\to 0$. In these cases one generally constructs an asymptotic expansion of the form (10) with some gauge functions $\varphi_n(\epsilon)$. The functions $\varphi_n(\epsilon)$ are not uniquely determined by the problem; rather, they are typically found in the course of the solution of the problem. Due to this arbitrariness different asymptotic approximations can be constructed. Nevertheless, only asymptotic expansions uniformly valid in the whole domain $D$ lead to useful approximations to $f(x;\epsilon)$.

An asymptotic expansion of $f(x;\epsilon)$, e.g., $\sum_n a_n(x)\varphi_n(\epsilon)$, is in general only formal. However, even if not convergent, the partial sum of its first $N$ terms is well defined for any finite $N$ and gives an approximation to the function $f(x;\epsilon)$. Thus we can write
\[
f(x;\epsilon) - \sum_{n=0}^{N-1} a_n(x)\,\varphi_n(\epsilon) = R_N(x;\epsilon),
\]
where $R_N(x;\epsilon)$ is the remainder, or error, of the approximation. The asymptotic properties of the gauge functions $\varphi_n(\epsilon)$ ensure that for any fixed $N$ and $x\in D$, see (7) or (12),
\[
R_N(x;\epsilon) = O\bigl(\varphi_N(\epsilon)\bigr), \quad\text{as } \epsilon\to 0. \tag{13}
\]
The asymptotic expansion gives both an approximation to $f(x;\epsilon)$ as $\epsilon\to 0$ and an estimate of the error of the approximation. To be a useful approximation, however, the error should be a function of $\epsilon$ independent of the value of $x$, i.e., (13) must be uniformly valid in $D$. Useful asymptotic expansions of $f(x;\epsilon)$ are only those uniformly valid in $D$.

Notice that since $\varphi_n(\epsilon) = o(\varphi_{n-1}(\epsilon))$ as $\epsilon\to 0$, the requirement of uniform validity implies that each coefficient $a_n(x)$ of a Poincaré type expansion cannot be more singular than $a_{n-1}(x)$ for all $x$ in the domain $D$. In other words, as $\epsilon\to 0$ each term $a_n(x)\varphi_n(\epsilon)$ must be a small correction to the preceding term irrespective of the value of $x$.

Example 2.6. The expansion
\[
\sin(x+\epsilon) = \Bigl(1 - \frac{\epsilon^2}{2} + \dots\Bigr)\sin x + \Bigl(\epsilon - \frac{\epsilon^3}{3!} + \dots\Bigr)\cos x
= \sin x + \epsilon\cos x - \frac{\epsilon^2}{2}\sin x - \frac{\epsilon^3}{3!}\cos x + \dots, \quad\text{as } \epsilon\to 0,
\]
is a uniformly valid asymptotic expansion in $\mathbb{R}$. The coefficient of any power $\epsilon^n$ is bounded for all $x\in\mathbb{R}$, so $a_n(x)$ is no more singular than $a_{n-1}(x)$ and the expansion is uniformly valid in $\mathbb{R}$.
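A numerical illustration of Example 2.6 (a minimal Python sketch; the grid of $x$ values and the truncation at the $\epsilon^3$ term are arbitrary choices): the error of the truncated expansion of $\sin(x+\epsilon)$ is $O(\epsilon^4)$ with a constant that does not depend on $x$.

```python
import numpy as np

def partial(x, eps):
    # first four terms of the expansion of sin(x + eps) in powers of eps
    return np.sin(x) + eps*np.cos(x) - eps**2/2*np.sin(x) - eps**3/6*np.cos(x)

x = np.linspace(-50.0, 50.0, 2001)          # a wide range of x values
for eps in [1e-1, 1e-2, 1e-3]:
    err = np.max(np.abs(np.sin(x + eps) - partial(x, eps)))
    print(f"eps = {eps:6.0e}   max error over x = {err:.3e}   error/eps^4 = {err/eps**4:.4f}")
# The ratio error/eps^4 stays close to 1/24 (the next Taylor coefficient)
# independently of x, consistent with uniform validity on R.
```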

Example 2.7. Consider the function
\[
f(x;\epsilon) = \frac{1}{x+\epsilon},
\]
defined for all $x\ge 0$ and $\epsilon>0$. If $x>0$, then
\[
f(x;\epsilon) \sim \frac{1}{x}\Bigl[1 - \frac{\epsilon}{x} + \frac{\epsilon^2}{x^2} - \dots\Bigr], \quad\text{as } \epsilon\to 0. \tag{14}
\]
Each term of this expansion is singular at $x=0$, and is more singular than the preceding one. The expansion is not uniformly valid in the domain $D: x\ge 0$. However, it is uniformly valid in any domain $D: x\ge x_0$, where $x_0$ is a positive number independent of $\epsilon$. Uniform validity breaks down as we get too close to $x=0$, signaling that some change must occur near $x=0$. Indeed for $x=0$ the expansion has a completely different form:
\[
f(0;\epsilon) \sim \frac{1}{\epsilon}, \quad\text{as } \epsilon\to 0. \tag{15}
\]
The expansion (14) becomes nonuniformly valid when two successive terms are of the same order, i.e., when
\[
\frac{\epsilon}{x} = O(1) \quad\Longleftrightarrow\quad x = O(\epsilon) \quad\text{as } \epsilon\to 0.
\]
If we introduce the variable $\xi = x/\epsilon$ and rewrite $f(x;\epsilon)$ in terms of $\xi$, a different asymptotic expansion emerges:
\[
f(\epsilon\xi;\epsilon) \sim \frac{1}{\epsilon}\,\frac{1}{\xi+1}, \quad\text{as } \epsilon\to 0, \tag{16}
\]
which is uniformly valid in any domain $D: 0\le\xi\le\xi_0 < 1/\epsilon$, where $\xi_0$ is a constant. Notice that the expansion (16) is uniformly valid in the region where the expansion (14) is not. Moreover, (16) matches with (15) in the limit $\xi\to 0$ and with (14) as $\xi\gg 1$. We shall give the precise meaning of the vague word "matches" later, when discussing asymptotic matching.
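The outer expansion (14), the value (15) at $x = 0$, and the inner form (16) of Example 2.7 can be compared numerically. Below is a minimal Python sketch (the truncation order and sample points are arbitrary; note that for this simple $f$ the leading inner term is already exact):

```python
import numpy as np

eps = 1e-3

def f(x):
    return 1.0 / (x + eps)

def outer(x, N=3):
    # (14): (1/x) * sum_{n<N} (-eps/x)^n, useful for x >> eps
    return sum((-eps / x)**n for n in range(N)) / x

def inner(x):
    # (16): (1/eps) * 1/(xi + 1) with xi = x/eps; for this f it is exact
    return 1.0 / (eps * (x / eps + 1.0))

for x in [1.0, 1e-1, 1e-2, 1e-3, 1e-4, 0.0]:
    row = f"x = {x:8.1e}   exact = {f(x):12.5e}   inner = {inner(x):12.5e}"
    if x > 0:
        row += f"   outer = {outer(x):12.5e}"
    print(row)
# The outer sum is accurate for x >> eps but deteriorates term by term as x
# approaches eps, where the inner form takes over; at x = 0 the inner form
# reduces to 1/eps, i.e. (15).
```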

2.3. Asymptotic versus Convergent series

Convergence of a series is an intrinsic property of the coefficients of the series: one can prove the convergence of a series without knowing the function to which it converges. Asymptoticity is, instead, a relative concept. It concerns both the coefficients of the series and the function to which the series is asymptotic. To prove that a series is asymptotic to the function $f(x)$, both the coefficients of the series and the function $f(x)$ must be considered. As a consequence, a convergent series need not be asymptotic, and an asymptotic series need not be convergent.

To understand the difference between convergence and asymptoticity, consider the asymptotic series $\sum_n a_n\varphi_n(x)$, where $a_n$ are some coefficients and $\varphi_n(x)$ an asymptotic sequence as $x\to x_0$. The series is in general formal because it may not converge. The partial sum
\[
S_N(x) = \sum_{n=0}^{N-1} a_n\varphi_n(x)
\]
is, nevertheless, well defined for any $N$. The convergence of the series is concerned with the behavior of $S_N(x)$ as $N\to\infty$ for fixed $x$, whereas the asymptotic property as $x\to x_0$ is concerned with the behavior of $S_N(x)$ as $x\to x_0$ for fixed $N$.

A convergent series defines for each $x$ a unique limiting sum $S_\infty(x)$. However, in contrast with an asymptotic expansion, it does not provide information on how rapidly it converges nor on how well $S_N(x)$ approximates $S_\infty(x)$ for fixed $N$. An asymptotic series, on the contrary, does not define a unique limiting function $f(x)$, but rather a whole class of asymptotically equivalent functions. Hence it does not provide an arbitrarily accurate approximation to $f(x)$ for $x\to x_0$. Still, the partial sums $S_N(x)$ provide good approximations to the value of $f(x)$ for $x$ sufficiently close to $x_0$. The quality of the approximation, nevertheless, depends on how close $x$ is to $x_0$. This leads to the problem of the optimal truncation of the asymptotic expansion, i.e., of the optimal number of terms to be retained in the partial sum. (The optimal truncation can be affected by the presence of transcendentally small subdominant functions, because they may give finite contributions.)

The following example illustrates these differences.

Example 2.8. Consider the error function, or Gauss error function, defined as
\[
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt. \tag{17}
\]
Since the Maclaurin series of $e^z$ is absolutely convergent for all $z$, expanding $e^{-t^2}$ in powers of $t^2$ and integrating term by term, we obtain the convergent power series expansion of $\mathrm{erf}(x)$:
\[
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\Bigl[x - \frac{1}{3}x^3 + \frac{1}{2\cdot 5}x^5 - \dots\Bigr]
= \frac{2}{\sqrt{\pi}}\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)\,n!}\,x^{2n+1}.
\]
The series converges for all $x\in\mathbb{R}$ because the ratio between the $(n+1)$-th and the $n$-th term,
\[
\left|\frac{u_{n+1}}{u_n}\right| = \frac{2n+1}{2n+3}\,\frac{x^2}{n+1},
\]
is smaller than $1$ for all $x\in\mathbb{R}$ and $n$ large enough. For moderate values of $x$ the convergence is, however, very slow. For example, 31 terms are needed to obtain an approximation to $\mathrm{erf}(3)$ with an accuracy of $10^{-5}$. A better result can be obtained using an asymptotic expansion.

To derive an asymptotic expansion of $\mathrm{erf}(x)$ we rewrite (17) as
\[
\mathrm{erf}(x) = 1 - \frac{2}{\sqrt{\pi}}\int_x^{\infty} e^{-t^2}\,dt
= 1 - \frac{1}{\sqrt{\pi}}\int_{x^2}^{\infty} s^{-1/2} e^{-s}\,ds.
\]

Integrating by parts twice we get:
\[
\mathrm{erf}(x) = 1 - \frac{1}{\sqrt{\pi}}\Bigl[\frac{e^{-x^2}}{x} - \frac{1}{2}\int_{x^2}^{\infty} s^{-3/2} e^{-s}\,ds\Bigr]
= 1 - \frac{1}{\sqrt{\pi}}\Bigl[\frac{e^{-x^2}}{x} - \frac{e^{-x^2}}{2x^3} + \frac{3}{4}\int_{x^2}^{\infty} s^{-5/2} e^{-s}\,ds\Bigr].
\]
The procedure can be easily iterated to any order $N$ by introducing the functions
\[
F_n(x) = \int_{x^2}^{\infty} s^{-n-1/2} e^{-s}\,ds
= \Bigl[-s^{-n-1/2} e^{-s}\Bigr]_{x^2}^{\infty} - \Bigl(n+\frac{1}{2}\Bigr)\int_{x^2}^{\infty} s^{-n-3/2} e^{-s}\,ds
= \frac{e^{-x^2}}{x^{2n+1}} - \Bigl(n+\frac{1}{2}\Bigr) F_{n+1}(x),
\]
leading to the series
\[
\mathrm{erf}(x) = 1 - \frac{1}{\sqrt{\pi}} F_0(x)
= 1 - \frac{1}{\sqrt{\pi}}\Bigl[\frac{e^{-x^2}}{x} - \frac{1}{2} F_1(x)\Bigr]
= 1 - \frac{1}{\sqrt{\pi}}\Bigl[e^{-x^2}\Bigl(\frac{1}{x} - \frac{1}{2x^3}\Bigr) + \frac{1\cdot 3}{2^2} F_2(x)\Bigr]
= \dots
\]
\[
= 1 - \frac{e^{-x^2}}{\sqrt{\pi}}\sum_{n=0}^{N-1} (-1)^n\,\frac{(2n-1)!!}{2^n\,x^{2n+1}} + R_N(x),
\]
where $(2n-1)!! = 1\cdot 3\cdots(2n-1)$, and
\[
R_N(x) = \frac{(-1)^N}{\sqrt{\pi}}\,\frac{(2N-1)!!}{2^N}\,F_N(x).
\]
The series diverges for any value of $x$ because the ratio between the $(n+1)$-th and the $n$-th term,
\[
\left|\frac{u_{n+1}}{u_n}\right| = \frac{(2n+1)!!}{2^{n+1}\,x^{2n+3}}\,\frac{2^n\,x^{2n+1}}{(2n-1)!!} = \frac{2n+1}{2x^2}, \tag{18}
\]
is larger than $1$ whenever $2n+1 > 2x^2$. In spite of that,
\[
|R_N(x)| = C_N \int_{x^2}^{\infty} s^{-N-1/2} e^{-s}\,ds
\le \frac{C_N}{x^{2N+1}}\int_{x^2}^{\infty} e^{-s}\,ds
= \frac{C_N}{x^{2N+1}}\,e^{-x^2},
\]
where $C_N = (2N-1)!!/(2^N\sqrt{\pi})$.

It follows that
\[
\mathrm{erf}(x) - \Bigl[1 - \frac{e^{-x^2}}{\sqrt{\pi}}\sum_{n=0}^{N-1} (-1)^n\,\frac{(2n-1)!!}{2^n\,x^{2n+1}}\Bigr]
= O\Bigl(\frac{e^{-x^2}}{x^{2N+1}}\Bigr), \quad\text{as } x\to\infty.
\]
Therefore the series
\[
\mathrm{erf}(x) \sim 1 - \frac{e^{-x^2}}{\sqrt{\pi}}\sum_{n=0}^{\infty} (-1)^n\,\frac{(2n-1)!!}{2^n\,x^{2n+1}} \tag{19}
\]
is an asymptotic expansion of $\mathrm{erf}(x)$ as $x\to\infty$. Only the first two terms of (19) suffice to obtain an approximation to $\mathrm{erf}(3)$ with an accuracy of $10^{-5}$. In spite of this, we cannot improve the approximation at will by adding more terms. From (18) it follows that the ratio between two successive terms of the series becomes larger than $1$ for $n \gtrsim x^2$. Hence, the asymptotic series (19) has an optimal truncation for $N$ equal to the largest integer smaller than $x^2$.

In this example we have encountered an asymptotic expansion of the form
\[
f(x) \sim g(x)\sum_n a_n\varphi_n(x), \quad\text{as } x\to x_0,
\]
where the function $g(x)$ is bounded as $x\to x_0$. In the example $g(x)\propto e^{-x^2}$, clearly bounded as $x\to\infty$. This reflects the non-uniqueness of gauge functions: since $g(x)$ is bounded as $x\to x_0$, the functions $\tilde\varphi_n(x) = g(x)\,\varphi_n(x)$ form an asymptotic sequence as $x\to x_0$, and thus we can equally well write
\[
f(x) \sim \sum_n a_n\tilde\varphi_n(x) \quad\text{as } x\to x_0.
\]
This provides additional flexibility, but should be used with care because the differences might be numerically relevant for $x$ not too close to $x_0$.

Example 2.9. The asymptotic expansion of the function $f(x) = 1/(1-x)$ as $x\to 0$ with gauge functions $\varphi_n(x) = x^n$ is:
\[
\frac{1}{1-x} \sim \sum_{n=0}^{\infty} x^n, \quad\text{as } x\to 0. \tag{20}
\]
The series with gauge functions $\tilde\varphi_n(x) = (1+x)\,x^n$,
\[
\frac{1}{1-x} \sim \sum_{n=0}^{\infty} (1+x)\,x^n, \tag{21}
\]
is also an asymptotic expansion of $f(x)$ as $x\to 0$, because the function $g(x) = 1+x$ is bounded as $x\to 0$. Both expressions, (20) and (21), provide an asymptotic expansion of $f(x) = 1/(1-x)$ as $x\to 0$, but the approximations to $f(x)$ for $x$ close to $0$ are different. The difference decreases as $x$ approaches $0$, but for small finite values of $x$ it might be numerically relevant. For example, for $x = 0.1$ the first three terms of (20) give $1.11$ while those of (21) give $1.221$.
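The claims in Example 2.8 are easy to check numerically. The sketch below (Python; the target accuracy $10^{-5}$ and the point $x = 3$ are taken from the text, the sampled truncation orders are arbitrary) sums the convergent Maclaurin series and the divergent asymptotic series (19) for $\mathrm{erf}(3)$, and exhibits the optimal truncation of the latter.

```python
import math

x = 3.0
exact = math.erf(x)

# Convergent Maclaurin series: count terms needed for 1e-5 accuracy.
s, n = 0.0, 0
while True:
    s += 2/math.sqrt(math.pi) * (-1)**n * x**(2*n + 1) / ((2*n + 1)*math.factorial(n))
    n += 1
    if abs(s - exact) < 1e-5:
        break
print(f"Maclaurin series: {n} terms needed for |error| < 1e-5")

# Asymptotic series (19): terms t_n = (-1)^n (2n-1)!! / (2^n x^(2n+1)).
def asymptotic(x, N):
    t, s = 1.0 / x, 1.0 / x
    for n in range(1, N):
        t *= -(2*n - 1) / (2 * x**2)
        s += t
    return 1.0 - math.exp(-x**2) / math.sqrt(math.pi) * s

for N in [2, 5, 9, 12, 15]:
    print(f"asymptotic, N = {N:2d} terms: error = {abs(asymptotic(x, N) - exact):.3e}")
# A couple of terms already beat the 1e-5 target; the error keeps decreasing up
# to N ~ x^2 = 9 (optimal truncation) and grows again beyond that.
```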

2.4. Asymptotic power expansions

Asymptotic expansions of the form
\[
f(x) \sim \sum_{n=0}^{\infty} a_n (x-x_0)^n, \quad\text{as } x\to x_0,
\]
are called asymptotic power expansions, or asymptotic power series, and are among the most common and useful asymptotic expansions.

One of the fundamental notions of real analysis is the Taylor expansion of infinitely differentiable ($C^\infty$) functions. If $f(x)$ is a smooth, infinitely differentiable function in a neighborhood of $x = x_0$, then Taylor's theorem ensures that
\[
f(x) = \sum_{n=0}^{N}\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n + o\bigl(|x-x_0|^N\bigr), \quad\text{as } x\to x_0.
\]
Hence, $f(x)$ has the asymptotic power expansion
\[
f(x) \sim \sum_{n=0}^{\infty}\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n \quad\text{as } x\to x_0. \tag{22}
\]
If, and only if, $f(x)$ is analytic at $x = x_0$ does the asymptotic power series converge to $f(x)$ in a neighborhood $|x-x_0|<r$ of $x_0$. In this case the series gives the usual Taylor series representation of the function about $x_0$. Nevertheless, the Taylor expansion of $f(x)$ at $x = x_0$ does not provide an asymptotic expansion of $f(x)$ as $x\to x_1\neq x_0$, even if it converges. Thus partial sums of the Taylor expansion do not usually give good approximations to $f(x)$ as $x\to x_1\neq x_0$.

If $f(x)$ is $C^\infty$ but not analytic at $x = x_0$, the series (22) need not converge in any neighborhood of $x_0$, or, if it does converge, its sum need not be $f(x)$. See Example 2.5, where the function $g(x)$ is $C^\infty$ but not analytic at $x = 0$ for any $c\neq 0$.

For asymptotic power expansions the following theorem, due to Borel, provides, in some sense, the inverse of Taylor's theorem. Borel's theorem states that for any sequence of real numbers $a_n$ there exists an infinitely differentiable function $f(x)$ such that $a_n = f^{(n)}(x_0)/n!$ and
\[
f(x) \sim \sum_{n=0}^{\infty} a_n (x-x_0)^n, \quad\text{as } x\to x_0.
\]
The function need not be unique. Hence, unlike for convergent Taylor series, there are no restrictions on the growth rate of the coefficients $a_n$ of an asymptotic power series, as shown in the following example.

Example 2.10. Consider the integral (Euler, 1754):
\[
f(x) = \int_0^{\infty} dt\,\frac{e^{-t}}{1+xt}, \tag{23}
\]
defined for $x\ge 0$. Integrating by parts we get
\[
f(x) = \int_0^{\infty} dt\,\frac{1}{1+xt}\,\frac{d}{dt}\bigl(-e^{-t}\bigr)
= 1 - x\int_0^{\infty} dt\,\frac{e^{-t}}{(1+xt)^2}.
\]
One more integration by parts gives
\[
f(x) = 1 - x + 2x^2\int_0^{\infty} dt\,\frac{e^{-t}}{(1+xt)^3}.
\]
Thus, after $N$ integrations by parts:
\[
f(x) = \sum_{n=0}^{N-1} (-1)^n\, n!\, x^n + R_N(x),
\]
where
\[
R_N(x) = (-1)^N N!\, x^N \int_0^{\infty} dt\,\frac{e^{-t}}{(1+xt)^{N+1}}.
\]
The coefficients of the power series grow factorially, and the ratio between two successive terms, $|u_n/u_{n-1}| = nx$, becomes larger than $1$ for $n > 1/x$. The series does not converge for any $x>0$; nevertheless, for any $N$
\[
|R_N(x)| \le N!\, x^N \int_0^{\infty} dt\, e^{-t} = N!\, x^N. \tag{24}
\]
Thus $R_N(x) = o(x^{N-1})$ as $x\to 0$, and
\[
\int_0^{\infty} dt\,\frac{e^{-t}}{1+xt} \sim \sum_{n=0}^{\infty} (-1)^n\, n!\, x^n, \quad\text{as } x\to 0^+.
\]
The lack of convergence of the series can be traced back to the fact that the integral (23) is not defined for $x<0$, because the integrand has a non-integrable singularity at $t = -1/x$. Any convergent power series must converge in a disk of finite radius centered at $x = 0$. Since the asymptotic power series does not converge, its partial sums do not provide an arbitrarily accurate approximation to the integral (23) for any fixed $x>0$. However, from (24) it follows that for any fixed $N$ the error is polynomial in $x$.

What is the optimal truncation of the series, that is, the one which gives the best approximation? The ratio between two successive terms, $|u_n/u_{n-1}| = nx$, grows with $n$ and becomes larger than $1$ for $n > 1/x$. The best approximation then occurs for $N \approx [1/x]$, for which $R_N \approx (1/x)!\,x^{1/x}$. Using Stirling's formula $N! \approx \sqrt{2\pi N}\, N^N e^{-N}$, valid for $N\gg 1$, the error of the optimal truncation reads:
\[
R_N(x) \approx \sqrt{\frac{2\pi}{x}}\, e^{-1/x}, \quad\text{as } x\to 0^+.
\]
Thus, while for arbitrary fixed $N$ the partial sum is polynomially accurate in $x$, the optimally truncated sum is exponentially accurate.
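The behaviour of the divergent series of Example 2.10 can be seen directly. Below is a Python sketch (the value $x = 0.1$ is an arbitrary choice, and scipy is used only to evaluate the integral (23) numerically) that compares partial sums with the integral and exhibits the optimal truncation near $N\approx 1/x$.

```python
import math
from scipy.integrate import quad

x = 0.1
exact, _ = quad(lambda t: math.exp(-t) / (1 + x*t), 0, math.inf)

def partial_sum(x, N):
    # sum_{n=0}^{N-1} (-1)^n n! x^n
    return sum((-1)**n * math.factorial(n) * x**n for n in range(N))

for N in [2, 5, 8, 10, 12, 15, 20]:
    err = abs(partial_sum(x, N) - exact)
    print(f"N = {N:2d}   error = {err:.3e}")
# The error decreases down to roughly sqrt(2*pi/x)*exp(-1/x) ~ 4e-4 around
# N ~ 1/x = 10 and then grows rapidly: the series is divergent, but its
# optimal truncation is exponentially accurate.
```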

2.5. Stokes Phenomenon

The asymptotic expansion as $z\to z_0$ of a complex function $f(z)$ with an essential singularity at $z = z_0$ is typically valid only in wedge-shaped regions $\alpha < \arg(z-z_0) < \beta$, and the function has different asymptotic expansions in different wedges. For definiteness we shall discuss the case $z_0 = \infty$; the same scenario, however, appears for essential singularities at any $z_0\in\mathbb{C}$. The change of the form of the asymptotic expansion across the boundaries of the wedges is called the Stokes phenomenon.

The origin of the Stokes phenomenon can be understood as follows. If two functions $f(z)$ and $g(z)$ are asymptotically equal as $z\to\infty$, this means that the function $h(z) = f(z)-g(z)$ is subdominant compared with $g(z)$ as $z\to\infty$. As $z$ approaches the boundary of the wedge of validity of the expansion, the subdominant $h(z)$ grows in magnitude compared to the dominant term $g(z)$. At the boundary of the wedge the functions $h(z)$ and $g(z)$ are of the same order of magnitude and the distinction between dominant and subdominant disappears. As $z$ crosses the boundary, the functions $h(z)$ and $g(z)$ exchange their roles, and on the other side $g(z)\ll h(z)$ as $z\to\infty$. This exchange is the Stokes phenomenon, and it is typical of exponential functions.

Example 2.11. Consider the complex function
\[
f(z) = \sinh z = \frac{e^{z} - e^{-z}}{2}.
\]
In the half plane $\mathrm{Re}\, z > 0$ both $\sinh z$ and $e^{z}$ grow exponentially as $|z|\to\infty$ and
\[
\sinh z \sim \tfrac{1}{2} e^{z}, \quad\text{as } |z|\to\infty,\ \mathrm{Re}\, z > 0.
\]
In this region the difference $h(z) = -\tfrac{1}{2} e^{-z}$ between $\sinh z$ and $g(z) = \tfrac{1}{2} e^{z}$ is transcendentally small as $|z|\to\infty$. In the half plane $\mathrm{Re}\, z < 0$ the function $h(z) = -\tfrac{1}{2} e^{-z}$ is no longer transcendentally small, and indeed
\[
\sinh z \sim -\tfrac{1}{2} e^{-z}, \quad\text{as } |z|\to\infty,\ \mathrm{Re}\, z < 0.
\]
In the region $\mathrm{Re}\, z < 0$ it is the function $g(z) = \tfrac{1}{2} e^{z}$ that is subdominant. The two leading asymptotic terms $\tfrac{1}{2} e^{z}$ and $-\tfrac{1}{2} e^{-z}$ switch from dominant to subdominant on the line $\mathrm{Re}\, z = 0$ (anti-Stokes line), where they are purely oscillatory. They are most unequal in magnitude on the line $\mathrm{Im}\, z = 0$ (Stokes line), where they are purely real and either exponentially growing or decreasing as $|z|\to\infty$.

The Stokes and anti-Stokes lines are not unique and do not really have a precise definition, because the region where the function $f(z)$ has a given asymptotic expansion is itself somewhat vague, reflecting the non-uniqueness of gauge functions. In the above example, we could equally well take as Stokes line any line $\mathrm{Im}\, z = a$. As a general definition, the anti-Stokes lines are roughly where some term in the asymptotic expansion changes from increasing to decreasing; hence, anti-Stokes lines bound regions where the function has different asymptotic behaviors. The Stokes lines are lines along which some term approaches infinity or zero fastest.
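The exchange of dominance in Example 2.11 can be made concrete with a short Python check (the radius $|z| = 15$ and the sample angles are arbitrary choices): the relative difference between $\sinh z$ and $\tfrac{1}{2}e^{z}$ is transcendentally small deep in the half plane $\mathrm{Re}\,z>0$, of order one on the anti-Stokes line $\mathrm{Re}\,z=0$, and huge for $\mathrm{Re}\,z<0$, where $-\tfrac{1}{2}e^{-z}$ takes over.

```python
import numpy as np

R = 15.0
for theta_deg in [0, 45, 85, 90, 95, 135, 180]:
    z = R * np.exp(1j * np.deg2rad(theta_deg))
    f = np.sinh(z)
    g_plus = 0.5 * np.exp(z)       # dominant term for Re z > 0
    g_minus = -0.5 * np.exp(-z)    # dominant term for Re z < 0
    print(f"arg z = {theta_deg:3d} deg   |f/g+ - 1| = {abs(f/g_plus - 1):.2e}"
          f"   |f/g- - 1| = {abs(f/g_minus - 1):.2e}")
# Since f/g+ = 1 - e^{-2z}, the deviation is e^{-2 Re z}: ~9e-14 at arg z = 0,
# exactly 1 on the anti-Stokes line arg z = 90 deg, and enormous at 180 deg,
# where g- approximates sinh z to exponential accuracy instead.
```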

3. Elementary Operations on Poincaré type Asymptotic Expansions

In singular perturbation methods we usually assume some form of the expansions, substitute them into the equations, and perform elementary operations such as addition, subtraction, exponentiation, integration and differentiation. These operations are generally carried out without justification, although the expansions can be divergent. Nevertheless, conditions under which these operations are justified can be given. In the following we shall discuss some elementary properties of the Poincaré type expansion (10) of functions $f(x;\epsilon)$. If not specified, the gauge functions $\varphi_n(\epsilon)$ are generic.

3.1. Equating Coefficients.

It is not strictly correct to write
\[
\sum_n a_n(x)\,\varphi_n(\epsilon) \sim \sum_n b_n(x)\,\varphi_n(\epsilon), \quad\text{as } \epsilon\to 0, \tag{25}
\]
because asymptotic series can only be asymptotic to functions. What we mean with (25), however, is that the functions to which $\sum_n a_n(x)\varphi_n(\epsilon)$ and $\sum_n b_n(x)\varphi_n(\epsilon)$ are asymptotic are asymptotically equivalent with respect to the gauge functions $\varphi_n(\epsilon)$ as $\epsilon\to 0$, i.e., as $\epsilon\to 0$ they differ by terms $o(\varphi_n(\epsilon))$ for all $n$. The uniqueness of the Poincaré type expansion for given gauge functions $\varphi_n(\epsilon)$ then implies that the coefficients of $\varphi_n(\epsilon)$ in the two series must agree. Hence, we can equate the coefficients of the two expansions and say that from (25) it follows that $a_n(x) = b_n(x)$.

3.2. Addition and Subtraction.

Addition and subtraction are in general justified, thus
\[
\alpha f(x;\epsilon) + \beta g(x;\epsilon) \sim \sum_n \bigl[\alpha a_n(x) + \beta b_n(x)\bigr]\varphi_n(\epsilon), \quad\text{as } \epsilon\to 0,
\]
where $\alpha$ and $\beta$ are constants and $a_n(x)$ and $b_n(x)$ are the expansion coefficients of $f$ and $g$, respectively. The proof is simple.

3.3. Multiplication and Division.

Multiplication of asymptotic expansions is justified only if the result is still an asymptotic expansion. The formal product of $\sum_n a_n\varphi_n$ and $\sum_n b_n\varphi_n$ generates all products $\varphi_n\varphi_m$ of gauge functions, and in general it is not possible to rearrange them into an asymptotic sequence. Multiplication is justified only for asymptotic sequences $\varphi_n$ such that the products $\varphi_n\varphi_m$ lead to an asymptotic sequence, or possess asymptotic expansions.

An important class of asymptotic expansions with this property are asymptotic power expansions. In this case the multiplication is straightforward. Suppose $f(x;\epsilon) \sim \sum_n a_n(x)\,\epsilon^n$ and $g(x;\epsilon) \sim \sum_n b_n(x)\,\epsilon^n$ as $\epsilon\to 0$; then
\[
f(x;\epsilon)\, g(x;\epsilon) \sim \sum_{n=0}^{\infty} c_n(x)\,\epsilon^n \quad\text{as } \epsilon\to 0,
\]
where
\[
c_n(x) = \sum_{m=0}^{n} a_m(x)\, b_{n-m}(x). \tag{26}
\]

Proof. Using (26), and rearranging the sums, the remainder $fg - \sum_{n=0}^{N} c_n\epsilon^n$ takes the form
\[
f(x;\epsilon)g(x;\epsilon) - \sum_{n=0}^{N}\sum_{m=0}^{n} a_m b_{n-m}\,\epsilon^n
= f(x;\epsilon)g(x;\epsilon) - \sum_{m=0}^{N} a_m\epsilon^m \sum_{n=m}^{N} b_{n-m}\,\epsilon^{n-m}
\]
\[
= g(x;\epsilon)\Bigl[f(x;\epsilon) - \sum_{m=0}^{N} a_m\epsilon^m\Bigr]
+ \sum_{m=0}^{N} a_m\epsilon^m\Bigl[g(x;\epsilon) - \sum_{p=0}^{N-m} b_p\,\epsilon^p\Bigr].
\]
To simplify the notation the dependence of $a_n(x)$ and $b_n(x)$ on $x$ is not shown. By assumption $g(x;\epsilon) - b_0(x) = O(\epsilon)$ as $\epsilon\to 0$ and
\[
f(x;\epsilon) - \sum_{n=0}^{N} a_n\epsilon^n = o\bigl(\epsilon^N\bigr), \qquad
g(x;\epsilon) - \sum_{n=0}^{N} b_n\epsilon^n = o\bigl(\epsilon^N\bigr).
\]
Hence both terms in the last equality are $o(\epsilon^N)$ as $\epsilon\to 0$, and
\[
f(x;\epsilon)\,g(x;\epsilon) - \sum_{n=0}^{N} c_n(x)\,\epsilon^n = o\bigl(\epsilon^N\bigr), \quad\text{as } \epsilon\to 0,
\]
which concludes the proof.
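Formula (26) is just the Cauchy product of the two power series in $\epsilon$. A small sketch (Python with sympy, kept symbolic so the coefficients are exact; the example coefficient sequences are arbitrary) checks that multiplying the truncated sums reproduces the coefficients $c_n$:

```python
import sympy as sp

eps, x = sp.symbols('epsilon x', positive=True)
N = 5

# Arbitrary example coefficients: f ~ sum a_n eps^n, g ~ sum b_n eps^n.
a = [sp.cos(n*x) for n in range(N)]
b = [1/sp.Integer(n + 1) + x**n for n in range(N)]

# Coefficients of the product from (26): c_n = sum_{m<=n} a_m b_{n-m}.
c = [sum(a[m]*b[n - m] for m in range(n + 1)) for n in range(N)]

# Cross-check against direct multiplication of the truncated sums.
f_trunc = sum(a[n]*eps**n for n in range(N))
g_trunc = sum(b[n]*eps**n for n in range(N))
prod = sp.expand(f_trunc * g_trunc)
for n in range(N):
    assert sp.simplify(prod.coeff(eps, n) - c[n]) == 0
print("coefficients of the product agree with (26) up to order", N - 1)
```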

The division of asymptotic expansions can be handled by treating it as the inverse of multiplication, and hence division is not justified in general. In the particular case of asymptotic power expansions with $b_0(x)\neq 0$, using (26) it follows that
\[
\frac{f(x;\epsilon)}{g(x;\epsilon)} \sim \sum_{n=0}^{\infty} d_n(x)\,\epsilon^n \quad\text{as } \epsilon\to 0,
\]
with $d_0(x) = a_0(x)/b_0(x)$ and
\[
d_n(x) = \frac{1}{b_0(x)}\Bigl[a_n(x) - \sum_{m=0}^{n-1} d_m(x)\, b_{n-m}(x)\Bigr], \qquad n\ge 1.
\]
If the expansion coefficients $a_n(x)$ and/or $b_n(x)$ are functions of $x$, the division of the asymptotic expansions might lead to non-uniformly valid expansions. When this occurs the division is not justified anymore. Example 2.7, with $f(x;\epsilon)\equiv 1$ and $g(x;\epsilon) = x+\epsilon$ as $\epsilon\to 0$, illustrates this point.

3.4. Exponentiation.

Exponentiation of asymptotic expansions is generally not justified. As for multiplication, exponentiation produces all products of gauge functions. Thus exponentiation can be justified only for asymptotic expansions such that the products $\varphi_n\varphi_m$ form an asymptotic sequence, or possess asymptotic expansions. As for division, if the coefficients of the asymptotic expansion are functions of $x$, exponentiation might result in a non-uniformly valid expansion, and the exponentiation is not justified anymore. For instance, Example 2.7 shows that the asymptotic expansion of $(x+\epsilon)^{-1}$ as $\epsilon\to 0$ is not uniformly valid for $x\ge 0$. The following example further illustrates this point.

Example 3.1. Consider a function $f(x;\epsilon)$ with the asymptotic expansion
\[
f(x;\epsilon) \sim x + \epsilon \quad\text{as } \epsilon\to 0^+,
\]
uniformly valid for $x\ge 0$. Then
\[
\sqrt{f(x;\epsilon)} \sim \sqrt{x+\epsilon} \sim \sqrt{x}\Bigl[1 + \frac{\epsilon}{2x} - \frac{\epsilon^2}{8x^2} + \dots\Bigr] \quad\text{as } \epsilon\to 0^+.
\]
This expansion is uniformly valid on any domain $D: x\ge x_0$, where $x_0$ is a positive constant, but not in the domain $x\ge 0$. The asymptotic expansion breaks down when $x/\epsilon = O(1)$, and indeed for $x = 0$ the expansion has a completely different form:
\[
\sqrt{f(0;\epsilon)} \sim \sqrt{\epsilon} \quad\text{as } \epsilon\to 0^+.
\]

3.4.1. Integration.

Asymptotic expansions can generally be integrated term by term with respect to the expansion parameter $\epsilon$ and/or the variable $x$.

If $f(x;\epsilon)$ and the gauge functions $\varphi_n(\epsilon)$ are integrable functions of $\epsilon$ in the interval $I: \epsilon\in[0,\epsilon_0]$, where $\epsilon_0$ is a constant, then term-by-term integration is justified and
\[
\int_0^{\epsilon} d\epsilon'\, f(x;\epsilon') \sim \sum_n a_n(x) \int_0^{\epsilon} d\epsilon'\,\varphi_n(\epsilon') \quad\text{as } \epsilon\to 0.
\]

Proof. The proof is straightforward. Since $f - \sum_{n=0}^{N} a_n\varphi_n = o(\varphi_N)$ as $\epsilon\to 0$, then:
\[
\int_0^{\epsilon} d\epsilon'\, f(x;\epsilon') - \sum_{n=0}^{N} a_n(x)\int_0^{\epsilon} d\epsilon'\,\varphi_n(\epsilon')
= \int_0^{\epsilon} d\epsilon'\,\Bigl[f(x;\epsilon') - \sum_{n=0}^{N} a_n(x)\,\varphi_n(\epsilon')\Bigr]
\]
\[
= \int_0^{\epsilon} d\epsilon'\, o\bigl[\varphi_N(\epsilon')\bigr]
= o\Bigl[\int_0^{\epsilon} d\epsilon'\,\varphi_N(\epsilon')\Bigr], \quad\text{as } \epsilon\to 0.
\]

In the case of asymptotic power expansions $f(x;\epsilon) \sim \sum_{n=0}^{\infty} a_n(x)\,\epsilon^n$ as $\epsilon\to 0$, termwise integration is always justified and leads to
\[
\int_0^{\epsilon} d\epsilon'\, f(x;\epsilon') \sim \sum_{n=0}^{\infty} \frac{a_n(x)}{n+1}\,\epsilon^{n+1}, \quad\text{as } \epsilon\to 0.
\]
On the contrary, the first two terms of the asymptotic expansion
\[
f(x;\epsilon) \sim \sum_{n=0}^{\infty} a_n(x)\,\epsilon^{-n}, \quad\text{as } \epsilon\to\infty,
\]
are not integrable at $\epsilon = \infty$, and termwise integration of the series is not justified. Nevertheless,
\[
\int_{\epsilon}^{\infty} dt\,\Bigl[f(x;t) - a_0(x) - a_1(x)\, t^{-1}\Bigr] \sim \sum_{n=2}^{\infty} \frac{a_n(x)}{n-1}\,\epsilon^{1-n}, \quad\text{as } \epsilon\to\infty.
\]
The proof is straightforward and is left as an exercise. The extension to asymptotic expansions with gauge functions $\varphi_n(\epsilon) = \epsilon^{\alpha_n}$ as $\epsilon\to 0$, or $\varphi_n(\epsilon) = \epsilon^{-\alpha_n}$ as $\epsilon\to\infty$, with $\alpha_{n+1} > \alpha_n$, is trivial.

Similarly, if $f(x;\epsilon)$ and the $a_n(x)$ are integrable functions of $x$ in some interval $I: x\in[x_0,x_1]$, then for any $x\in I$
\[
\int_{x_0}^{x} dx'\, f(x';\epsilon) \sim \sum_n \varphi_n(\epsilon) \int_{x_0}^{x} dx'\, a_n(x'), \quad\text{as } \epsilon\to 0.
\]
The proof is straightforward.
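A quick numerical check of term-by-term integration in the power-series case (a Python sketch; the test function $f(\epsilon)=\cos\epsilon/(1+\epsilon)$, its truncation order, and the sample values of $\epsilon$ are arbitrary choices, and scipy is used only for the reference integral): integrating the first terms of the Maclaurin expansion reproduces $\int_0^\epsilon f$ with an error one power of $\epsilon$ beyond the last retained term.

```python
import math
from scipy.integrate import quad

# f(eps) = cos(eps)/(1+eps) ~ 1 - eps + eps^2/2 - eps^3/2 + ...  as eps -> 0
coeffs = [1.0, -1.0, 0.5, -0.5]               # a_0 ... a_3

def term_by_term(eps):
    # integral of the truncated expansion: sum a_n eps^{n+1}/(n+1)
    return sum(a * eps**(n + 1) / (n + 1) for n, a in enumerate(coeffs))

for eps in [0.2, 0.1, 0.05]:
    exact, _ = quad(lambda t: math.cos(t) / (1 + t), 0.0, eps)
    err = abs(exact - term_by_term(eps))
    print(f"eps = {eps:5.2f}   error = {err:.3e}   error/eps^5 = {err/eps**5:.3f}")
# The error scales like eps^5, one order beyond the integrated eps^4 term,
# as expected from integrating the o(eps^3) remainder.
```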

3.5. Differentiation.

Termwise differentiation of asymptotic expansions, either with respect to the perturbation parameter $\epsilon$ or with respect to the variable $x$, might lead to nonuniformly valid asymptotic expansions. When this occurs, term-by-term differentiation of the asymptotic expansion is not justified. In addition, subdominant terms might give non-subdominant contributions under differentiation, as illustrated by the following example.

Example 3.2. The functions $f(x)$ and
\[
g(x) = f(x) + e^{-(x-x_0)^{-2}}\sin\bigl(e^{(x-x_0)^{-2}}\bigr)
\]
have the same asymptotic power expansion as $x\to x_0$, because $e^{-(x-x_0)^{-2}}$ is transcendentally small with respect to any power $(x-x_0)^n$ as $x\to x_0$, so the last term is subdominant. However, their derivatives $f'(x)$ and
\[
g'(x) = f'(x) + 2\, e^{-(x-x_0)^{-2}} (x-x_0)^{-3}\sin\bigl(e^{(x-x_0)^{-2}}\bigr)
- 2\,(x-x_0)^{-3}\cos\bigl(e^{(x-x_0)^{-2}}\bigr)
\]
do not have the same power expansion as $x\to x_0$, because $(x-x_0)^{-3}$ is not subdominant to $(x-x_0)^n$ as $x\to x_0$.

To differentiate term by term the asymptotic expansion
\[
f(x;\epsilon) \sim \sum_n a_n(x)\,\varphi_n(\epsilon), \quad\text{as } \epsilon\to 0, \tag{27}
\]
some additional conditions are needed. For instance, assume it is known that the partial derivative $\partial f(x;\epsilon)/\partial\epsilon$ exists and possesses an asymptotic expansion as $\epsilon\to 0$ with some gauge functions $\psi_n(\epsilon)$ and expansion coefficients $a_n(x)$. If $\partial f(x;\epsilon)/\partial\epsilon$ and the $\psi_n(\epsilon)$ are integrable functions of $\epsilon$ in the interval $I: 0\le\epsilon\le\epsilon_0$, then this asymptotic expansion can be integrated term by term with respect to $\epsilon$, leading to the asymptotic expansion (27) of $f(x;\epsilon)$ with
\[
\varphi_n(\epsilon) = \int_0^{\epsilon} d\epsilon'\,\psi_n(\epsilon').
\]
Under these conditions termwise differentiation of (27) with respect to $\epsilon$ is clearly justified and leads to:
\[
\frac{\partial f(x;\epsilon)}{\partial\epsilon} \sim \sum_n a_n(x)\,\varphi_n'(\epsilon), \quad\text{as } \epsilon\to 0.
\]
Similarly, assume it is known that $\partial f(x;\epsilon)/\partial x$ exists and possesses an asymptotic expansion as $\epsilon\to 0$ with gauge functions $\varphi_n(\epsilon)$ and expansion coefficients $b_n(x)$, and, moreover, that $\partial f(x;\epsilon)/\partial x$ and the $b_n(x)$ are integrable functions of $x$ in the interval $I: x_0\le x\le x_1$. Then the asymptotic expansion can be integrated term by term with respect to $x$, obtaining the asymptotic expansion (27) of $f(x;\epsilon)$ with $a_n(x) = \int_{x_0}^{x} dx'\, b_n(x')$, valid for $x\in I$.

Thus under these conditions termwise differentiation of the asymptotic expansion (27) with respect to $x$ is justified and leads to:
\[
\frac{\partial f(x;\epsilon)}{\partial x} \sim \sum_n a_n'(x)\,\varphi_n(\epsilon), \quad\text{as } \epsilon\to 0.
\]
We shall come back to this point when discussing the asymptotic expansion of the solutions of differential equations.

4. Asymptotic Expansion of Integrals

Asymptotic expansion of integrals is a collection of techniques originally developed because many special functions used in Physics and Applied Mathematics have integral representations. However, the asymptotic expansion of integrals finds applications in different fields, such as Statistical Mechanics and Field Theory. We shall not discuss the asymptotic expansion of integrals in general here, but rather consider just two examples to illustrate the techniques.

4.1. Generalized Stieltjes Series and Integrals.

Consider the function $f(x)$ defined by the Stieltjes integral
\[
f(x) = \int_0^{\infty} dt\,\frac{\rho(t)}{1+xt}, \qquad x\ge 0, \tag{28}
\]
where $\rho(t)$ is a generic non-negative function:
\[
\rho(t)\ge 0, \qquad t\ge 0.
\]
We have already encountered this type of integral in Example 2.10 with $\rho(t) = e^{-t}$. There we obtained the asymptotic power expansion of $f(x)$ as $x\to 0$ by successive integrations by parts. For a generic function $\rho(t)$ this procedure cannot be applied. Nevertheless, an asymptotic power expansion of $f(x)$ as $x\to 0$ can be obtained by substituting into the integrand the asymptotic power expansion
\[
\frac{1}{1+xt} \sim \sum_{n=0}^{\infty} (-xt)^n, \quad\text{as } x\to 0,
\]
and integrating term by term. Thus, if the moments
\[
a_n = \int_0^{\infty} dt\,\rho(t)\, t^n \tag{29}
\]
exist and are finite for every $n$, the function $f(x)$ has, as $x\to 0$, the asymptotic expansion
\[
f(x) \sim \sum_{n=0}^{\infty} (-1)^n a_n x^n, \quad\text{as } x\to 0^+, \tag{30}
\]
called a Stieltjes series.
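For $\rho(t) = e^{-t}$ the moments (29) are $a_n = n!$ and (30) reduces to the Euler series of Example 2.10. The sketch below (Python; the choice $\rho(t) = e^{-t}$ and the value $x = 0.05$ are for illustration only, and scipy is used for the numerical integrals) builds the Stieltjes series directly from the moments and compares a few partial sums with the integral (28).

```python
import math
from scipy.integrate import quad

x = 0.05
rho = lambda t: math.exp(-t)

# Moments a_n = int_0^inf rho(t) t^n dt  (here a_n = n!, computed numerically).
moments = [quad(lambda t, n=n: rho(t) * t**n, 0, math.inf)[0] for n in range(15)]

exact, _ = quad(lambda t: rho(t) / (1 + x*t), 0, math.inf)

for N in [3, 6, 10, 14]:
    partial = sum((-1)**n * moments[n] * x**n for n in range(N))
    print(f"N = {N:2d}   partial sum = {partial:.8f}   error = {abs(partial - exact):.2e}")
# With x = 0.05 the optimal truncation is near N ~ 1/x = 20, so in the range
# shown the partial sums steadily approach the value of the integral.
```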

The series does not converge for any such function $\rho(t)$: a convergent power series in $x$ converges inside a disk of finite radius centered at $x = 0$, but the Stieltjes integral (28) is not defined for negative $x$, because the singularity at $t = -1/x$ is not integrable. The Stieltjes series (30) therefore cannot be convergent.

Proof. The proof that (30) is an asymptotic power expansion as $x\to 0$ is simple. Using (29) and the identity, valid for any $N$,
\[
\sum_{n=0}^{N} z^n = \frac{1-z^{N+1}}{1-z},
\]
we have
\[
f(x) - \sum_{n=0}^{N} (-1)^n a_n x^n
= \int_0^{\infty} dt\,\rho(t)\Bigl[\frac{1}{1+xt} - \sum_{n=0}^{N}(-xt)^n\Bigr]
= \int_0^{\infty} dt\,\rho(t)\,\frac{(-xt)^{N+1}}{1+xt}.
\]
Hence
\[
\Bigl|f(x) - \sum_{n=0}^{N} (-1)^n a_n x^n\Bigr| \le x^{N+1}\int_0^{\infty} dt\,\rho(t)\, t^{N+1} = a_{N+1}\, x^{N+1},
\]
and
\[
f(x) - \sum_{n=0}^{N} (-1)^n a_n x^n = o\bigl(x^N\bigr), \quad\text{as } x\to 0,
\]
which concludes the proof.

4.2. Euler Gamma Function.

The Euler Gamma function $\Gamma(\nu)$ is defined as
\[
\Gamma(\nu) = \int_0^{\infty} dt\, t^{\nu-1} e^{-t}, \qquad \mathrm{Re}\,\nu > 0. \tag{31}
\]
For simplicity we consider the case of real positive $\nu$. Using the functional relation $\nu\Gamma(\nu) = \Gamma(1+\nu)$ and introducing the new variable $t = \nu(y+1)$, $\Gamma(\nu)$ can be written as
\[
\Gamma(\nu) = \frac{1}{\nu}\Gamma(1+\nu) = \nu^{\nu} e^{-\nu}\int_{-1}^{\infty} dy\, e^{\nu h(y)}, \tag{32}
\]
where
\[
h(y) = \ln(1+y) - y.
\]
The function $h(y)$ has a global maximum at $y = 0$, see Figure 1.

Figure 1: The function $h(y) = \ln(1+y) - y$.

When $\nu\gg 1$ only the neighborhood of $y = 0$ contributes significantly to the integral, because contributions from $y$ outside this neighborhood are exponentially depressed. Then $\Gamma(\nu)$ can be estimated for $\nu\gg 1$ by restricting the integration to the neighborhood of $y = 0$, where $h(y)$ attains its maximum value (Laplace's method). We thus divide the interval of integration into three regions:
\[
\int_{-1}^{\infty} dy\, e^{\nu h(y)} = \int_{-1}^{-\epsilon} dy\, e^{\nu h(y)} + \int_{-\epsilon}^{\epsilon} dy\, e^{\nu h(y)} + \int_{\epsilon}^{\infty} dy\, e^{\nu h(y)},
\]
where $\epsilon\ll 1$ is arbitrary, but sufficiently small so that in the interval $-\epsilon\le y\le\epsilon$ the function $h(y)$ can be replaced by its Maclaurin expansion:
\[
h(y) = -\frac{1}{2}y^2 + \frac{1}{3}y^3 - \frac{1}{4}y^4 + O(y^5). \tag{33}
\]
The parameter $\epsilon$ may depend on $\nu$, but for the moment it is sufficient to know that $\epsilon\ll 1$ so that (33) can be used. The functional dependence of $\epsilon$ on $\nu$ will be fixed later to make the whole calculation consistent. As $\nu\to+\infty$ the first and third integrals are of order
\[
\int_{-1}^{-\epsilon} dy\, e^{\nu h(y)} = O\bigl(e^{\nu h(-\epsilon)}\bigr) = O\bigl(e^{-\nu\epsilon^2/2}\bigr), \qquad
\int_{\epsilon}^{\infty} dy\, e^{\nu h(y)} = O\bigl(e^{\nu h(\epsilon)}\bigr) = O\bigl(e^{-\nu\epsilon^2/2}\bigr).
\]
Provided $\epsilon\sqrt{\nu}\gg 1$, i.e., $\nu^{-1/2}\ll\epsilon\ll 1$, their contribution is exponentially small as $\nu\gg 1$.

Replacing $h(y)$ in the second integral by the expansion (33), and introducing the variable $x = y\sqrt{\nu}$, the integral in (32) becomes, as $\nu\to+\infty$,
\[
\int_{-\epsilon}^{\epsilon} dy\, e^{\nu h(y)}
= \nu^{-1/2}\int_{-\epsilon\sqrt{\nu}}^{\epsilon\sqrt{\nu}} dx\,
e^{-\frac{1}{2}x^2 + \frac{x^3}{3\sqrt{\nu}} - \frac{x^4}{4\nu} + O(x^5/\nu^{3/2})}
+ O\bigl(e^{-\nu\epsilon^2/2}\bigr). \tag{34}
\]
Now we use the arbitrariness of $\epsilon$ and take $\epsilon = \nu^{-a}$ with $1/3 < a < 1/2$. This choice not only ensures that $\epsilon\ll 1$ and $\epsilon\sqrt{\nu}\gg 1$ as $\nu\to\infty$, but also that for $|x|\le\epsilon\sqrt{\nu}$:
\[
\frac{x^m}{\nu^{m/2-1}} \ll \frac{x^n}{\nu^{n/2-1}}, \qquad m>n, \quad n,m = 3,4,5,\dots
\]
The exponential under the integral in (34) is then of the form $\exp[-x^2/2 + g_\nu(x)]$ with $|g_\nu(x)|\ll 1$ as $\nu\to+\infty$ in the whole domain of integration $[-\epsilon\sqrt{\nu},\epsilon\sqrt{\nu}]$. Expanding the exponential $\exp[g_\nu(x)]$ in powers of $g_\nu(x)$ we obtain:
\[
\int_{-\epsilon}^{\epsilon} dy\, e^{\nu h(y)}
= \nu^{-1/2}\int_{-\epsilon\sqrt{\nu}}^{\epsilon\sqrt{\nu}} dx\, e^{-\frac{1}{2}x^2}
\Bigl[1 + g_\nu(x) + \frac{1}{2}g_\nu(x)^2 + \dots\Bigr]
+ O\bigl(e^{-\nu\epsilon^2/2}\bigr).
\]
The integration limits $\pm\epsilon\sqrt{\nu}$ can be safely extended to $\pm\infty$ with an error of order $O(e^{-\nu\epsilon^2/2})$, exponentially small as $\nu\to+\infty$. Thus, introducing the Gaussian average
\[
\langle f\rangle = \int_{-\infty}^{+\infty}\frac{dx}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2} f(x),
\]
the asymptotic expansion of the function $\Gamma(\nu)$ as $\nu\to+\infty$ can be written as
\[
\Gamma(\nu) \sim \nu^{\nu-1/2} e^{-\nu}\sqrt{2\pi}\,
\Bigl[1 + \langle g_\nu\rangle + \frac{1}{2}\langle g_\nu^2\rangle + \frac{1}{3!}\langle g_\nu^3\rangle + \dots\Bigr],
\]
where
\[
g_\nu(x) = \frac{x^3}{3\sqrt{\nu}} - \frac{x^4}{4\nu} + O(x^5/\nu^{3/2}).
\]
The averages can be evaluated using the well known result for the moments of the Gaussian distribution with zero mean and unit variance: $\langle x^{2n}\rangle = (2n-1)!!$ and $\langle x^{2n+1}\rangle = 0$. To leading order we have
\[
\langle g_\nu\rangle = -\frac{1}{4\nu}\langle x^4\rangle + O(1/\nu^2) = -\frac{3}{4\nu} + O(1/\nu^2), \qquad
\langle g_\nu^2\rangle = \frac{1}{9\nu}\langle x^6\rangle + O(1/\nu^2) = \frac{5\cdot 3}{9\nu} + O(1/\nu^2),
\]
and $\langle g_\nu^3\rangle = O(1/\nu^2)$, so that
\[
\Gamma(\nu) \sim \nu^{\nu-1/2} e^{-\nu}\sqrt{2\pi}\,\Bigl[1 + \frac{1}{12\nu} + O(1/\nu^2)\Bigr], \quad\text{as } \nu\to+\infty, \tag{35}
\]
known as Stirling's formula, after James Stirling, even though it was first stated by Abraham de Moivre.
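Formula (35) is easy to test against an accurate Gamma function. A minimal Python check (the sample values of $\nu$ are arbitrary):

```python
import math

def stirling(nu, correction=True):
    # leading Laplace result, optionally with the 1/(12 nu) correction of (35)
    s = math.sqrt(2*math.pi) * nu**(nu - 0.5) * math.exp(-nu)
    return s * (1 + 1/(12*nu)) if correction else s

for nu in [2.0, 5.0, 10.0, 50.0]:
    exact = math.gamma(nu)
    e0 = abs(stirling(nu, correction=False)/exact - 1)
    e1 = abs(stirling(nu, correction=True)/exact - 1)
    print(f"nu = {nu:5.1f}   rel. error, leading term = {e0:.2e}   with 1/(12 nu) term = {e1:.2e}")
# The leading term has relative error ~1/(12 nu); adding the first correction
# reduces it to O(1/nu^2), consistent with (35).
```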

The proof that (35) is an asymptotic expansion of $\Gamma(\nu)$ as $\nu\to+\infty$ is straightforward. It is enough to compute the next terms in the expansion and estimate the error when the series is truncated at the first $N$ terms. The contributions of order $O(e^{-\nu\epsilon^2/2})$ are transcendentally small with respect to any power $\nu^{-n}$ as $\nu\to+\infty$ and can be ignored.

The asymptotic expansion (35) of $\Gamma(\nu)$ has been derived using the integral representation (31) and Laplace's method. An asymptotic expansion of $\Gamma(\nu)$ as $\nu\to+\infty$ can also be obtained starting from the integral representation
\[
\ln\Gamma(\nu) = \Bigl(\nu-\frac{1}{2}\Bigr)\ln\nu - \nu + \ln\sqrt{2\pi}
+ 2\int_0^{\infty} dt\,\frac{\arctan(t/\nu)}{e^{2\pi t}-1}.
\]
As $\nu\to+\infty$ only values $t/\nu\ll 1$ contribute to the integral, because otherwise the integrand is $O(e^{-2\pi\nu})$. Then, using the Maclaurin series
\[
\arctan(x) = \sum_{n=1}^{\infty} (-1)^{n-1}\frac{x^{2n-1}}{2n-1},
\]
valid for $|x|<1$, the integral becomes
\[
\int_0^{\infty} dt\,\frac{\arctan(t/\nu)}{e^{2\pi t}-1}
\sim \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{(2n-1)\,\nu^{2n-1}}
\int_0^{\infty} dt\,\frac{t^{2n-1}}{e^{2\pi t}-1}. \tag{36}
\]
We have neglected terms $O(e^{-2\pi\nu})$, transcendentally small with respect to any $\nu^{-n}$ as $\nu\to+\infty$. The integral in (36) is equal to
\[
\int_0^{\infty} dt\,\frac{t^{2n-1}}{e^{2\pi t}-1} = (-1)^{n-1}\frac{B_{2n}}{4n}, \tag{37}
\]
where the $B_n$ are the Bernoulli numbers, given by the coefficients of the expansion
\[
\frac{z}{e^z-1} = \sum_{n=0}^{\infty} B_n\frac{z^n}{n!}, \qquad |z|<2\pi.
\]
Inserting (37) into (36) leads to the asymptotic expansion
\[
\ln\Gamma(\nu) \sim \Bigl(\nu-\frac{1}{2}\Bigr)\ln\nu - \nu + \ln\sqrt{2\pi}
+ \sum_{n=1}^{\infty}\frac{B_{2n}}{2n(2n-1)\,\nu^{2n-1}}, \quad\text{as } \nu\to+\infty.
\]
The first even Bernoulli numbers are $B_2 = 1/6$, $B_4 = -1/30$ and $B_6 = 1/42$, so
\[
\ln\Gamma(\nu) \sim \Bigl(\nu-\frac{1}{2}\Bigr)\ln\nu - \nu + \ln\sqrt{2\pi}
+ \frac{1}{12\nu} - \frac{1}{360\nu^3} + \frac{1}{1260\nu^5} + O(\nu^{-7}), \quad\text{as } \nu\to+\infty.
\]
The series is not convergent, but the error is of the order of the first omitted term, as can be proved using Taylor's theorem and the properties of term-by-term integration of asymptotic expansions.
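The series for $\ln\Gamma(\nu)$ can be checked term by term. A short Python sketch (the test value $\nu = 5$ and the truncation orders are arbitrary choices) using the Bernoulli numbers $B_2 = 1/6$, $B_4 = -1/30$, $B_6 = 1/42$:

```python
import math

B = {2: 1/6, 4: -1/30, 6: 1/42}   # first even Bernoulli numbers

def lngamma_asym(nu, terms):
    # (nu - 1/2) ln nu - nu + ln sqrt(2 pi) + sum_k B_{2k} / (2k (2k-1) nu^{2k-1})
    s = (nu - 0.5)*math.log(nu) - nu + 0.5*math.log(2*math.pi)
    for k in range(1, terms + 1):
        s += B[2*k] / (2*k*(2*k - 1) * nu**(2*k - 1))
    return s

nu = 5.0
exact = math.lgamma(nu)
for terms in range(0, 4):
    err = abs(lngamma_asym(nu, terms) - exact)
    print(f"{terms} correction term(s): error = {err:.3e}")
# Each extra term reduces the error by roughly a factor nu^2, and the error is
# of the order of the first omitted term, as stated above.
```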