Mathematische Methoden der Physik II


Supplements to the lecture course "Mathematische Methoden der Physik II"

J. Mark Heinzle
Gravitational Physics, Faculty of Physics, University of Vienna

Version 27/0/200


Fourier analysis

Fourier series

We first recall the theorem that states sufficient conditions under which the Fourier series of a function $f(\theta)$ may be differentiated term by term in order to obtain the Fourier series of $f'(\theta)$: Let $f$ be $2\pi$-periodic, continuous, and twice piecewise continuously differentiable. Let the Fourier series of $f$ be given by

$$f(\theta) = \sum_{n\in\mathbb{Z}} c_n e^{in\theta}.$$

Then the Fourier series of $f'$ is given by

$$f'(\theta) = \sum_{n\in\mathbb{Z}} i n\, c_n e^{in\theta}.$$

Analogously, the trigonometric ("real") Fourier series of $f(\theta)$ can be differentiated term by term to obtain the one of $f'(\theta)$.

Let us consider an example. The triangle wave is (the $2\pi$-periodic extension of) the function

$$f(\theta) = |\theta| \qquad (\theta \in (-\pi,\pi]). \tag{1.1}$$

The Fourier series of the triangle wave has been computed in an earlier example. We have

$$f(\theta) = \frac{\pi}{2} - \frac{4}{\pi} \sum_{n\in\mathbb{N}_{\mathrm{odd}}} \frac{\cos n\theta}{n^2}, \tag{1.1F}$$

where $\mathbb{N}_{\mathrm{odd}}$ denotes the odd natural numbers.
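As a quick numerical illustration (not part of the original notes; the helper name and the truncation order $N = 51$ are arbitrary choices of mine), the following Python sketch sums a truncated version of (1.1F) and compares it with $|\theta|$ on $(-\pi,\pi]$.

```python
import numpy as np

# Partial sum of the triangle-wave Fourier series (1.1F):
#   f(theta) ~ pi/2 - (4/pi) * sum over odd n of cos(n*theta)/n^2
def triangle_partial_sum(theta, N=51):
    n = np.arange(1, N + 1, 2)                       # odd n only
    return np.pi / 2 - (4 / np.pi) * np.sum(
        np.cos(np.outer(theta, n)) / n**2, axis=1)

theta = np.linspace(-np.pi, np.pi, 1001)
err = np.max(np.abs(triangle_partial_sum(theta) - np.abs(theta)))
print(f"max deviation from |theta| with N = 51 terms: {err:.2e}")
```

The deviation shrinks like $O(1/N)$, reflecting the uniform convergence of (1.1F).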

Note that $f$ is continuous and piecewise continuously differentiable, hence the Fourier series converges pointwise.

We are interested in the Fourier series of the function

$$g(\theta) = \operatorname{sgn}\theta \qquad (\theta \in (-\pi,\pi]). \tag{1.2}$$

This function is the (piecewise) derivative of the triangle wave (1.1), i.e., for $\theta \neq 0$ and $\theta \neq \pi$ we have $f'(\theta) = g(\theta)$. Hence, to obtain the Fourier series of (1.2) we merely differentiate the original Fourier series (1.1F) term by term,

$$f(\theta) = \frac{\pi}{2} - \frac{4}{\pi} \sum_{n\in\mathbb{N}_{\mathrm{odd}}} \frac{\cos n\theta}{n^2}, \tag{1.1F}$$

$$g(\theta) = f'(\theta) = \frac{4}{\pi} \sum_{n\in\mathbb{N}_{\mathrm{odd}}} \frac{(\sin n\theta)\, n}{n^2} = \frac{4}{\pi} \sum_{n\in\mathbb{N}_{\mathrm{odd}}} \frac{\sin n\theta}{n}. \tag{1.2F}$$

It is important to note that term by term differentiation of the Fourier series of $f(\theta)$ is possible because the assumptions of the theorem are met, i.e., the function $f(\theta)$ is continuous and twice piecewise continuously differentiable. Note that it is straightforward to check these assumptions: continuity of $f(\theta)$ is immediate; $f'(\theta) = g(\theta)$ is piecewise continuous and piecewise continuously differentiable, because $f''(\theta) = g'(\theta)$ is the zero function except at the two points, $0$ and $\pi$, where the derivative is ill-defined, i.e.,

$$g'(\theta) = 0 \qquad (\forall\,\theta \in (-\pi,\pi] \text{ with } \theta \neq 0 \text{ and } \theta \neq \pi). \tag{1.3}$$
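Again as an illustration that is not in the original notes, one can check numerically that the term-by-term differentiated series (1.2F) reproduces $\operatorname{sgn}\theta$ away from the jump points (convergence near $\theta = 0$ and $\theta = \pm\pi$ is slow and oscillatory, as usual for a discontinuous function).

```python
import numpy as np

# Partial sum of the differentiated series (1.2F):
#   g(theta) ~ (4/pi) * sum over odd n of sin(n*theta)/n
def square_partial_sum(theta, N=201):
    n = np.arange(1, N + 1, 2)
    return (4 / np.pi) * np.sum(np.sin(np.outer(theta, n)) / n, axis=1)

theta = np.linspace(-np.pi, np.pi, 2001)
away_from_jumps = (np.abs(theta) > 0.3) & (np.abs(theta) < np.pi - 0.3)
err = np.max(np.abs(square_partial_sum(theta)[away_from_jumps]
                    - np.sign(theta[away_from_jumps])))
print(f"max deviation from sgn(theta) away from the jumps: {err:.2e}")
```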

A natural question to ask is whether we may continue (1.1F) and (1.2F):

$$f(\theta) = \frac{\pi}{2} - \frac{4}{\pi} \sum_{n\in\mathbb{N}_{\mathrm{odd}}} \frac{\cos n\theta}{n^2}, \tag{1.1F}$$

$$g(\theta) = f'(\theta) = \frac{4}{\pi} \sum_{n\in\mathbb{N}_{\mathrm{odd}}} \frac{(\sin n\theta)\, n}{n^2} = \frac{4}{\pi} \sum_{n\in\mathbb{N}_{\mathrm{odd}}} \frac{\sin n\theta}{n}, \tag{1.2F}$$

$$g'(\theta) = f''(\theta) = \frac{4}{\pi} \sum_{n\in\mathbb{N}_{\mathrm{odd}}} \frac{(\cos n\theta)\, n}{n} = \frac{4}{\pi} \sum_{n\in\mathbb{N}_{\mathrm{odd}}} \cos n\theta. \tag{1.3F}$$

However, there is something wrong here. The function $g'(\theta)$ ($= f''(\theta)$) is the zero function (except at the two points, $0$ and $\pm\pi$, where it is ill-defined). Therefore, its Fourier series should be zero, shouldn't it?

We see that it is not possible (at least not in a straightforward manner) to continue the term by term differentiation. But this is to be expected. The theorem tells us that term by term differentiation is alright if the function is continuous (and twice piecewise continuously differentiable). However, while these assumptions are met for $f(\theta)$, the function $g(\theta)$ ($= f'(\theta)$) violates the assumption of continuity. In brief, we should in fact expect something to go wrong when we continue beyond (1.2F).

In the following, by using the delta function, we add another twist to the story. The (periodic) delta function (which is in fact not a function but a "distribution") is an object that can be thought of as arising as the limit of a sequence of functions. Unfortunately, this limit is taken in a sense we are presently unable to understand, but let us not bother too much about mathematical rigor in the present context. Let us concentrate on a particular sequence of functions that converges to the delta function. For $0 < \varepsilon < \pi$ we define

$$d_\varepsilon(\theta) = \begin{cases} \frac{1}{2\varepsilon} & \theta \in (-\varepsilon,\varepsilon) \\ 0 & \text{else.} \end{cases}$$

The figure shows $d_1(\theta)$, $d_{1/2}(\theta)$, $d_{1/4}(\theta)$, and $d_{1/8}(\theta)$. As $\varepsilon \to 0$, the width of $d_\varepsilon$ converges to zero while the height diverges. There is a perfect balance, however: the important fact about each of the functions $d_\varepsilon(\theta)$ is that the area is always $1$, i.e.,

$$\int_{-\pi}^{\pi} d_\varepsilon(\theta)\, d\theta = 1,$$

independently of $\varepsilon$; i.e., the limit of the area as $\varepsilon \to 0$ is $1$.

The delta function $\delta(\theta)$ can be thought of as the limit of $d_\varepsilon(\theta)$ as $\varepsilon \to 0$, i.e., an object that is infinitely thin but infinitely high in a well-balanced manner. It is an object that is zero away from $\theta = 0$ and infinite at $\theta = 0$, where the area under the point at infinity at $\theta = 0$ is still $1$,

$$\int_{-\pi}^{\pi} \delta(\theta)\, d\theta := \lim_{\varepsilon\to 0} \int_{-\pi}^{\pi} d_\varepsilon(\theta)\, d\theta = 1.$$

(The integral is not an integral in the sense of Riemann or Lebesgue.)
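A one-line numerical check of the normalization (my addition): the area under $d_\varepsilon$ is $1$ for every $\varepsilon$.

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 1_000_000, endpoint=False)
dtheta = theta[1] - theta[0]
for eps in (1.0, 0.5, 0.25, 0.125):
    d = np.where(np.abs(theta) < eps, 1.0 / (2 * eps), 0.0)   # box delta sequence
    print(f"eps = {eps:5.3f}:  area = {np.sum(d) * dtheta:.4f}")
```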

In fact, since we are in the realm of $2\pi$-periodic functions, the functions $d_\varepsilon(\theta)$ are periodically extended (and thus there are peaks at $2\pi k$, $k \in \mathbb{Z}$). The limit of the sequence $d_\varepsilon(\theta)$ is the periodic delta function $\delta(\theta)$ with infinities at $2\pi k$, $k \in \mathbb{Z}$.

The fundamental property of the (periodic) delta function is given by its action on test functions $\psi(\theta)$. (We refrain from introducing the concept of the space of test functions in the present context but merely note that these functions are necessarily smooth.) We have

$$\int_{-\pi}^{\pi} \delta(\theta)\,\psi(\theta)\, d\theta = \psi(0). \tag{1.4}$$

To obtain this relation we define

$$\int_{-\pi}^{\pi} \delta(\theta)\psi(\theta)\, d\theta := \lim_{\varepsilon\to 0} \int_{-\pi}^{\pi} d_\varepsilon(\theta)\psi(\theta)\, d\theta$$

and use the sequence $d_\varepsilon(\theta)$. We have

$$\int_{-\pi}^{\pi} d_\varepsilon(\theta)\psi(\theta)\, d\theta = \frac{1}{2\varepsilon}\int_{-\varepsilon}^{\varepsilon} \psi(\theta)\, d\theta = \frac{1}{2\varepsilon}\int_{-\varepsilon}^{\varepsilon} \bigl(\psi(0) + \psi'(0)\theta + O(\theta^2)\bigr)\, d\theta = \frac{1}{2\varepsilon}\bigl(2\varepsilon\psi(0) + O(\varepsilon^3)\bigr) = \psi(0) + O(\varepsilon^2),$$

and (1.4) follows by taking the limit $\varepsilon \to 0$.

Note that the sequence $d_\varepsilon(\theta)$ is merely one of infinitely many sequences of functions that converge to the delta function; it is a "delta sequence". Another well-known delta sequence is a sequence of Gauss functions (bell curves), where, however, in the periodic case we are treating, these functions have to be cut off at $\pm\pi$ and periodically extended.
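The defining relation (1.4) can be probed numerically as well; the sketch below (my addition) uses an arbitrary smooth $2\pi$-periodic test function and exhibits the quadratic rate in $\varepsilon$ obtained above.

```python
import numpy as np

def d_eps(theta, eps):
    """Box delta sequence: 1/(2*eps) on (-eps, eps), zero elsewhere."""
    return np.where(np.abs(theta) < eps, 1.0 / (2 * eps), 0.0)

psi = lambda theta: np.exp(np.cos(theta))        # smooth, 2*pi-periodic test function

theta = np.linspace(-np.pi, np.pi, 1_000_000, endpoint=False)
dtheta = theta[1] - theta[0]
for eps in (0.5, 0.25, 0.125):
    integral = np.sum(d_eps(theta, eps) * psi(theta)) * dtheta
    print(f"eps = {eps:5.3f}:  integral = {integral:.6f}   (psi(0) = {psi(0.0):.6f})")
```

The error decreases by roughly a factor of four whenever $\varepsilon$ is halved, consistent with the $O(\varepsilon^2)$ estimate.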

A natural question to ask is whether the periodic delta function admits a Fourier series (despite the fact that it is not a proper function). And the answer is yes (at least in a "distributional sense"). To derive the Fourier series of the periodic delta function we use the delta sequence $d_\varepsilon(\theta)$. We compute the Fourier series of $d_\varepsilon(\theta)$ (which does not pose any problems, since $d_\varepsilon(\theta)$ is piecewise smooth):

$$c_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} d_\varepsilon(\theta)\, d\theta = \frac{1}{2\pi},$$

$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} d_\varepsilon(\theta) e^{-in\theta}\, d\theta = \frac{1}{2\pi}\,\frac{1}{2\varepsilon}\int_{-\varepsilon}^{\varepsilon} e^{-in\theta}\, d\theta = \frac{1}{2\pi}\,\frac{i}{2\varepsilon n}\bigl(e^{-in\varepsilon} - e^{in\varepsilon}\bigr) = \frac{1}{2\pi}\,\frac{\sin n\varepsilon}{n\varepsilon} = \frac{1}{2\pi}\bigl(1 + O(n\varepsilon)\bigr) \qquad (n \neq 0).$$

Taking the limit $\varepsilon \to 0$ we obtain the Fourier coefficients of the periodic delta function,

$$\delta(\theta): \quad c_n = \frac{1}{2\pi} \quad \forall\, n.$$

Therefore, for the Fourier series we get

$$\delta(\theta) \sim \frac{1}{2\pi}\sum_{n\in\mathbb{Z}} e^{in\theta}.$$

Note that this Fourier series does not converge in a usual sense (which is why we refrain from writing "=").

Note that we could have derived the Fourier coefficients of the periodic delta function directly by using the fundamental property (1.4):

$$c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} \delta(\theta) e^{-in\theta}\, d\theta = \frac{1}{2\pi}\, e^{-in\theta}\Big|_{\theta=0} = \frac{1}{2\pi}.$$

There are of course delta functions whose peak is not located at $\theta = 0$. The translated delta function $\delta(\theta - \theta_0)$ has its peak at $\theta = \theta_0$ (and at $\theta_0 + 2\pi k$, $k \in \mathbb{Z}$). Its Fourier series is

$$\delta(\theta - \theta_0) \sim \frac{1}{2\pi}\sum_{n\in\mathbb{Z}} e^{-in\theta_0} e^{in\theta}.$$

This is in analogy with the general statement that the Fourier coefficients of the function $f(\theta - \theta_0)$ are $c_n e^{-in\theta_0}$ if the Fourier coefficients of $f(\theta)$ are $c_n$. The proof is a simple exercise.
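The claim that the coefficients approach the constant value $1/(2\pi)$ can be verified directly; in the sketch below (my addition, with the coefficient index $n = 5$ chosen arbitrarily) $c_5$ of $d_\varepsilon$ is computed by simple quadrature and compared with both $1/(2\pi)$ and the closed form $\sin(n\varepsilon)/(2\pi n\varepsilon)$.

```python
import numpy as np

def coeff(n, eps, num=1_000_000):
    """c_n = (1/(2*pi)) * integral over (-pi, pi] of d_eps(theta) * exp(-i*n*theta)."""
    theta = np.linspace(-np.pi, np.pi, num, endpoint=False)
    dtheta = theta[1] - theta[0]
    d = np.where(np.abs(theta) < eps, 1.0 / (2 * eps), 0.0)
    return np.sum(d * np.exp(-1j * n * theta)) * dtheta / (2 * np.pi)

n = 5
for eps in (0.1, 0.01):
    exact = np.sin(n * eps) / (2 * np.pi * n * eps)
    print(f"eps = {eps}:  c_5 = {coeff(n, eps).real:.5f}   "
          f"closed form = {exact:.5f}   1/(2 pi) = {1 / (2 * np.pi):.5f}")
```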

Now let us return to the function $g(\theta)$ of (1.2) and compute its derivative. Away from $\theta = 0$ (and $2\pi k$, $k \in \mathbb{Z}$) and $\theta = \pi$ (and $(2k+1)\pi$, $k \in \mathbb{Z}$) the derivative exists (in the conventional sense) and it vanishes. What about the derivative at $\theta = 0$? Let the derivative be denoted by $g'(\theta)$, as usual. We recall that $g(\theta)$ measures the increase of area under $g'(\theta)$. At $\theta = 0$, the function $g(\theta)$ has a jump discontinuity where the height of the jump is $2$ units. Hence, the area of $g'(\theta)$ under the point $\theta = 0$ must be $2$, or, in other words, we expect that $g'(\theta)$ is related to $2\delta(\theta)$. At $\theta = \pi$, $g(\theta)$ has a jump discontinuity where the height of the jump is $-2$ units. Hence, the area of $g'(\theta)$ under the point $\theta = \pi$ must be $-2$, or, in other words, we expect that $g'(\theta)$ is related to $-2\delta(\theta - \pi)$. In brief, we obtain

$$g'(\theta) = 2\delta(\theta) - 2\delta(\theta - \pi). \tag{1.5}$$

(Recall that we are dealing with periodic functions; the l.h. side and the r.h. side are periodically extended from $(-\pi,\pi]$ to the entire real line.)

Since we know the Fourier series of $\delta(\theta)$ and $\delta(\theta - \theta_0)$ (where $\theta_0 = \pi$ in the present context), we are able to obtain the Fourier series of $g'(\theta)$:

$$g'(\theta) = 2\bigl[\delta(\theta) - \delta(\theta - \pi)\bigr] \sim \frac{2}{2\pi}\Bigl[\sum_{n\in\mathbb{Z}} e^{in\theta} - \sum_{n\in\mathbb{Z}} e^{-in\pi} e^{in\theta}\Bigr] = \frac{1}{\pi}\sum_{n\in\mathbb{Z}} \bigl(1 - e^{-in\pi}\bigr) e^{in\theta} = \frac{1}{\pi}\sum_{n\in\mathbb{Z}} \bigl(1 - (-1)^n\bigr) e^{in\theta}$$
$$= \frac{2}{\pi}\sum_{n\in\mathbb{Z}_{\mathrm{odd}}} e^{in\theta} = \frac{2}{\pi}\sum_{n\in\mathbb{N}_{\mathrm{odd}}} \bigl(e^{in\theta} + e^{-in\theta}\bigr) = \frac{4}{\pi}\sum_{n\in\mathbb{N}_{\mathrm{odd}}} \cos n\theta. \tag{1.6}$$

Interestingly enough, this result coincides with (1.3F), which has been obtained by differentiating (1.2F) term by term. What does this mean? Term by term differentiation of Fourier series can be extended beyond the point where the theorem we quoted at the beginning of this section applies; however, by proceeding beyond this point one leaves the realm of conventional functions and enters the realm of distributions (i.e., generalized functions like the delta function). In the present context, the road to success is to regard $g'(\theta)$ not as a (piecewise) function, see (1.3), which is the conventional view, but as the distribution

$$g'(\theta) = 2\delta(\theta) - 2\delta(\theta - \pi).$$
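Even though (1.6) does not converge pointwise, its partial sums act correctly on smooth test functions; the sketch below (my addition, with an arbitrary test function) checks that $\int S_N(\theta)\psi(\theta)\,d\theta$ approaches $2\psi(0) - 2\psi(\pi)$, as the identification with $2\delta(\theta) - 2\delta(\theta-\pi)$ demands.

```python
import numpy as np

# Partial sum S_N of (1.6): g'(theta) ~ (4/pi) * sum over odd n of cos(n*theta)
def S(theta, N):
    n = np.arange(1, N + 1, 2)
    return (4 / np.pi) * np.sum(np.cos(np.outer(theta, n)), axis=1)

psi = lambda theta: np.exp(np.cos(theta)) + np.sin(theta)   # smooth, 2*pi-periodic

theta = np.linspace(-np.pi, np.pi, 20000, endpoint=False)
dtheta = theta[1] - theta[0]
target = 2 * psi(0.0) - 2 * psi(np.pi)
for N in (1, 3, 11):
    integral = np.sum(S(theta, N) * psi(theta)) * dtheta
    print(f"N = {N:3d}:  integral = {integral:+.5f}   2*psi(0) - 2*psi(pi) = {target:+.5f}")
```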

(However, we recall that the Fourier series of a distribution is in general not convergent.)

The dominated convergence theorem

Let $(g_n)_{n\in\mathbb{N}}$ be a sequence of integrable functions on an interval $I$ (where $I = \mathbb{R}$ or similar is possible). Suppose that $g_n(x)$ converges pointwise to a function $g(x)$, i.e.,

$$g_n(x) \to g(x) \quad (n\to\infty) \qquad \forall\, x \in I.$$

We are interested in the question of whether the integrals converge as well. Does

$$\int_I g_n(x)\, dx \to \int_I g(x)\, dx$$

hold? Put differently: does

$$\lim_{n\to\infty} \int_I g_n(x)\, dx = \int_I \underbrace{\lim_{n\to\infty} g_n(x)}_{=\,g(x)}\, dx$$

hold? Can the limits be interchanged? (A side remark: in general it is not even clear whether $g(x)$ is integrable.)

In general the statement is not true. As an example consider the sequence of functions

$$g_n(x) = \begin{cases} n^2 & 0 < x \le \frac{1}{n} \\ 0 & \text{otherwise} \end{cases}$$

on $I = [0,1]$. Obviously,

$$g_n(x) \to 0 \quad (n\to\infty) \qquad \forall\, x \in I;$$

nevertheless we find

$$\lim_{n\to\infty} \int_0^1 g_n(x)\, dx = \lim_{n\to\infty} n^2\,\frac{1}{n} = \lim_{n\to\infty} n = \infty \;\neq\; \int_0^1 \underbrace{\lim_{n\to\infty} g_n(x)}_{=\,0}\, dx = 0.$$
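The failure of the limit interchange in this example is easy to see numerically (my addition): the integral of $g_n$ equals $n$ for every $n$, while $g_n(x) \to 0$ at each fixed $x$.

```python
import numpy as np

def g(n, x):
    """g_n(x) = n^2 on (0, 1/n], zero otherwise."""
    return np.where((x > 0) & (x <= 1.0 / n), float(n)**2, 0.0)

x = np.linspace(0.0, 1.0, 1_000_001)
dx = x[1] - x[0]
for n in (10, 100, 1000):
    print(f"n = {n:5d}:  g_n(0.37) = {float(g(n, 0.37)):.1f}   "
          f"integral over [0,1] ~ {np.sum(g(n, x)) * dx:.2f}")
```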

The dominated convergence theorem (Lebesgue) gives a criterion that guarantees that the limits may be interchanged. If there exists a positive, absolutely integrable function $M(x)$, i.e.,

$$\int_I M(x)\, dx < \infty, \quad \text{i.e.,} \quad M(x) \in L^1(I),$$

and if the sequence of functions is dominated by $M(x)$, i.e.,

$$|g_n(x)| \le M(x) \qquad \forall\, n,$$

then it is guaranteed that $g(x)$ is integrable and that

$$\lim_{n\to\infty} \int_I g_n(x)\, dx = \int_I \underbrace{\lim_{n\to\infty} g_n(x)}_{=\,g(x)}\, dx.$$

The Riemann-Lebesgue lemma

The Riemann-Lebesgue lemma shows that the Fourier transform of a function decays at infinity. That is, if $f(x) \in L^1$, then $\hat f(\xi) \to 0$ for $\xi \to \pm\infty$.

We do not prove this lemma in the general case. But for functions $f(x)$ that are not only $L^1$ but also continuous and (at least piecewise) continuously differentiable, in such a way that $f'(x) \in L^1$, the proof becomes very simple. We may then use that

$$\widehat{f'}(\xi) = i\xi\, \hat f(\xi).$$

From this it follows trivially that

$$\hat f(\xi) = -\frac{i}{\xi}\, \widehat{f'}(\xi).$$

We recall that the Fourier transform of an $L^1$ function is bounded. Hence $|\widehat{f'}(\xi)| \le \text{const}$, and we see that

$$\hat f(\xi) = O\bigl(|\xi|^{-1}\bigr) \qquad (\xi \to \pm\infty).$$

In particular, $\hat f(\xi) \to 0$ for $\xi \to \pm\infty$, which proves the Riemann-Lebesgue lemma in the special case considered here.
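To see the decay bound in action (my addition), one can take a concrete function satisfying the assumptions, e.g. the tent function $f(x) = \max(1-|x|,0)$, for which $\int |f'(x)|\,dx = 2$, and evaluate $\hat f(\xi) = \int f(x)\, e^{-i\xi x}\, dx$ by quadrature:

```python
import numpy as np

f = lambda x: np.maximum(1.0 - np.abs(x), 0.0)   # continuous, piecewise C^1, ||f'||_{L^1} = 2

x = np.linspace(-1.0, 1.0, 2_000_001)
dx = x[1] - x[0]
for xi in (5.0, 50.0, 500.0):
    fhat = np.sum(f(x) * np.exp(-1j * xi * x)) * dx
    print(f"xi = {xi:6.1f}:  |fhat(xi)| = {abs(fhat):.3e}   bound 2/|xi| = {2.0 / xi:.3e}")
```

The transform indeed tends to zero and stays below the bound $\lVert f'\rVert_{L^1}/|\xi|$ derived above.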

In general one can say that the decay of $\hat f(\xi)$ is the better, the smoother $f(x)$ is. Suppose $f(x)$ is twice continuously differentiable (and $f''(x) \in L^1$); then

$$\hat f(\xi) = -\frac{i}{\xi}\, \widehat{f'}(\xi) = -\frac{1}{\xi^2}\, \widehat{f''}(\xi),$$

because repeated differentiation in position space corresponds to repeated multiplication by $i\xi$ in momentum space (Fourier space). Since $\widehat{f''}(\xi)$ is in turn bounded, it follows that

$$\hat f(\xi) = O\bigl(|\xi|^{-2}\bigr) \qquad (\xi \to \pm\infty).$$

Obviously this procedure can be continued if higher continuous differentiability of $f(x)$ is given. And what if $f(x) \in C^\infty$? Then $\hat f(\xi)$ decays faster than any power of $|\xi|$, e.g. exponentially. This is clearly visible in the examples discussed in the lecture:

$$f(x) = e^{-a x^2/2} \qquad\Longrightarrow\qquad \hat f(\xi) = \sqrt{\frac{2\pi}{a}}\, e^{-\xi^2/(2a)}, \tag{1.7a}$$

$$f(x) = \frac{1}{x^2 + a^2} \qquad\Longrightarrow\qquad \hat f(\xi) = \frac{\pi}{a}\, e^{-a|\xi|}. \tag{1.7b}$$

The example

$$f(x) = \begin{cases} b & x \in [-a,a] \\ 0 & \text{otherwise} \end{cases} \qquad\Longrightarrow\qquad \hat f(\xi) = \frac{2b \sin a\xi}{\xi} \tag{1.7c}$$

from the lecture shows, on the other hand, that the decay of the Fourier transform can also be better than the above considerations would lead one to expect. In this example the function is not continuous, which has the consequence that $\widehat{f'}(\xi) = i\xi\,\hat f(\xi)$ does not hold. Although the above argument is therefore not applicable, one nevertheless finds that $\hat f(\xi) = O(|\xi|^{-1})$ as $\xi \to \pm\infty$.
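All three pairs can be checked numerically with the same convention $\hat f(\xi) = \int f(x)\, e^{-i\xi x}\, dx$; the sketch below (my addition, with arbitrarily chosen values of $a$, $b$ and $\xi$ and arbitrary truncation lengths $L$) compares simple quadrature against the closed forms (1.7a)-(1.7c).

```python
import numpy as np

def fhat(f, xi, L, num=4_000_001):
    """Quadrature approximation of the transform: integral over [-L, L] of f(x) exp(-i*xi*x) dx."""
    x = np.linspace(-L, L, num)
    return (np.sum(f(x) * np.exp(-1j * xi * x)) * (x[1] - x[0])).real

a, b, xi = 2.0, 1.5, 3.0
cases = [
    ("(1.7a) Gaussian",   lambda x: np.exp(-a * x**2 / 2),              40.0,
     np.sqrt(2 * np.pi / a) * np.exp(-xi**2 / (2 * a))),
    ("(1.7b) Lorentzian", lambda x: 1.0 / (x**2 + a**2),                2000.0,
     (np.pi / a) * np.exp(-a * abs(xi))),
    ("(1.7c) box",        lambda x: np.where(np.abs(x) <= a, b, 0.0),   4.0,
     2 * b * np.sin(a * xi) / xi),
]
for name, f, L, exact in cases:
    print(f"{name}:  quadrature = {fhat(f, xi, L):+.5f}   closed form = {exact:+.5f}")
```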

The delta function

Let us give a crash course on distributions with a focus on the delta function (= delta distribution).

In physics we are used to the concept of a point particle, like a point charge or a point mass. A point particle is an object that occupies nothing more than a point in space; it lacks spatial extension. Nonetheless, a point particle is supposed to possess physical properties like charge or mass. Obviously, the concept of a point particle is an idealization, and an extremely useful idealization at that. The reason for the usefulness is the lack of internal degrees of freedom: a point particle represents an object whose internal structure is nonexistent and thus irrelevant in a given context. Therefore, the treatment of point particles is simpler than that of (real-world) extended objects.

The charge density (or mass density) $\rho(x)$ of a point particle is an entity that is concentrated at one point. (Note that we are restricting ourselves to the one-dimensional case in the present context.) Since the charge is the integral of the charge density, to obtain a finite charge, $\rho(x)$ cannot be a conventional function; it has to be a distribution; in this case, $\rho(x) \propto \delta(x)$. Let us elaborate.

Define, for $\varepsilon > 0$, the sequence of functions

$$d_\varepsilon(x) = \begin{cases} \frac{1}{2\varepsilon} & x \in (-\varepsilon,\varepsilon) \\ 0 & \text{else.} \end{cases} \tag{1.8}$$

As $\varepsilon \to 0$, the width of $d_\varepsilon$ converges to zero while the height diverges. There is a perfect balance, however: the important fact about each of the functions $d_\varepsilon(x)$ is that the area is always $1$, i.e.,

$$\int_{-\infty}^{\infty} d_\varepsilon(x)\, dx = 1,$$

independently of $\varepsilon$; in particular, the limit of the area as $\varepsilon \to 0$ is $1$. A sequence of functions that exhibits the same property is, e.g., the sequence of Gauss curves

$$\tilde d_\varepsilon(x) = \frac{1}{\varepsilon\sqrt{\pi}}\, e^{-x^2/\varepsilon^2}. \tag{1.9}$$

The delta function $\delta(x)$ can be thought of as the limit of $d_\varepsilon(x)$, or of another sequence like $\tilde d_\varepsilon(x)$, as $\varepsilon \to 0$, i.e., an object that is infinitely thin but infinitely high in a well-balanced manner.

Obviously, the limit of the sequence $d_\varepsilon(x)$ is not a pointwise limit of functions (which does not exist). The delta function is the limit of the sequence $d_\varepsilon(x)$ in an integrated sense: we say that the sequence $d_\varepsilon(x)$ has a limit (in the sense of distributions) because

$$\lim_{\varepsilon\to 0} \int_{-\infty}^{\infty} d_\varepsilon(x)\psi(x)\, dx \tag{1.10}$$

exists for arbitrary test functions $\psi(x)$, which means, for our purposes, that $\psi(x)$ is $C^\infty$ and goes to zero sufficiently fast as $x \to \pm\infty$. This limit then defines the

delta function through

$$\int_{-\infty}^{\infty} \delta(x)\psi(x)\, dx := \lim_{\varepsilon\to 0} \int_{-\infty}^{\infty} d_\varepsilon(x)\psi(x)\, dx. \tag{1.11}$$

The r.h. side of (1.11) can be computed explicitly by using the Taylor expansion of $\psi(x)$. Namely,

$$\lim_{\varepsilon\to 0} \int_{-\infty}^{\infty} d_\varepsilon(x)\psi(x)\, dx = \lim_{\varepsilon\to 0} \frac{1}{2\varepsilon}\int_{-\varepsilon}^{\varepsilon} \psi(x)\, dx = \lim_{\varepsilon\to 0} \frac{1}{2\varepsilon}\int_{-\varepsilon}^{\varepsilon} \Bigl[\psi(0) + \psi'(0)\,x + \psi''(0)\,\frac{x^2}{2} + O(x^3)\Bigr] dx$$
$$= \lim_{\varepsilon\to 0} \frac{1}{2\varepsilon}\Bigl[2\varepsilon\,\psi(0) + \psi''(0)\,\frac{\varepsilon^3}{3} + O(\varepsilon^4)\Bigr] = \lim_{\varepsilon\to 0} \Bigl[\psi(0) + \psi''(0)\,\frac{\varepsilon^2}{6} + O(\varepsilon^3)\Bigr] = \psi(0).$$

Therefore, the definition of the delta function (1.11) can be rephrased in terms of its action on test functions,

$$\int_{-\infty}^{\infty} \delta(x)\psi(x)\, dx := \psi(0). \tag{1.12}$$

We find that $\delta(x)$ cannot be viewed as a function; however, we can define $\delta(x)$ in terms of (1.12), i.e., in terms of an appropriate integral involving test functions (where we note that the integral is not an integral in the sense of Riemann or Lebesgue). Note that this is the principle on which the theory of distributions is based in general: a distribution is not defined like a function but by its action on test functions (where this action is assumed to be linear and continuous). The delta function (= delta distribution) is defined by its action (1.12) on test functions. Below we will see that, e.g., $\delta'(x)$ is given by the action (1.16).

Using $\psi(x) \equiv 1$ in (1.12) we find

$$\int_{-\infty}^{\infty} \delta(x)\, dx = 1, \tag{1.13}$$

hence the area under the delta function is $1$.
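The Taylor computation predicts the leading correction $\psi''(0)\,\varepsilon^2/6$; this can be checked numerically (my addition; the Gaussian test function is an arbitrary choice).

```python
import numpy as np

psi = lambda x: np.exp(-x**2)            # test function with psi(0) = 1 and psi''(0) = -2

for eps in (0.2, 0.1, 0.05):
    x = np.linspace(-eps, eps, 100_001)
    dx = x[1] - x[0]
    # trapezoidal rule for (1/(2*eps)) * integral of psi over (-eps, eps)
    integral = (np.sum(psi(x)) - 0.5 * (psi(x[0]) + psi(x[-1]))) * dx / (2 * eps)
    print(f"eps = {eps:4.2f}:  (integral - psi(0)) / eps^2 = {(integral - 1.0) / eps**2:+.4f}"
          f"   (psi''(0)/6 = {-2.0 / 6.0:+.4f})")
```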

Remark. Incidentally, the constant function $1$ is not a good test function, because it does not fall off at infinity. But we can use a function $\psi(x)$ that is equal to $1$ in a neighborhood of $x = 0$ and falls off outside that neighborhood. Since $\delta(x) = 0$ away from $x = 0$ we still have $\psi(x)\delta(x) = \delta(x)$, and the stated relation is true. For details we refer to the end of this section.

In complete analogy with (1.12) we may define a delta function whose peak is centered at $x = x_0$ instead of at $x = 0$. This function is $\delta(x - x_0)$ and we have

$$\int_{-\infty}^{\infty} \delta(x - x_0)\psi(x)\, dx := \psi(x_0). \tag{1.14}$$

The delta function is not a function; however, since it can be integrated, it is reasonable to ask whether $\delta(x)$ possesses an antiderivative (Stammfunktion). We define the Heaviside function $\Theta(x)$ as

$$\Theta(x) = \begin{cases} 0 & x < 0 \\ 1 & x \ge 0. \end{cases} \tag{1.15}$$

Our claim is that $\Theta'(x) = \delta(x)$. Let us prove this claim (in different ways and with different levels of rigor).

The first idea that comes to mind would probably be to consider the expression

$$\int_{-\infty}^{x} \delta(y)\, dy.$$

Unfortunately, this expression does not formally make sense (because it cannot be obtained from $\int \delta(y)\psi(y)\, dy$ by inserting a smooth test function). However, if we don't bother, then a short glance at the delta function makes it clear that

$$\int_{-\infty}^{x} \delta(y)\, dy = \Theta(x).$$

The second approach to show the claim $\Theta'(x) = \delta(x)$ is rigorous: recalling that $d_\varepsilon(x)$ converges to $\delta$ we consider

$$\int_{-\infty}^{x} d_\varepsilon(y)\, dy = \begin{cases} 0 & x \le -\varepsilon \\ \frac{1}{2\varepsilon}(x + \varepsilon) & x \in (-\varepsilon,\varepsilon) \\ 1 & x \ge \varepsilon. \end{cases}$$

Obviously, these antiderivatives of the functions $d_\varepsilon(x)$ converge to $\Theta(x)$ as $\varepsilon \to 0$ (in a conventional sense). Therefore, $\Theta(x)$ must be the antiderivative of $\delta(x)$.
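This convergence is easy to see concretely (my addition; the sample points are arbitrary): the running integral of $d_\varepsilon$ is the piecewise linear ramp written above, and it approaches $\Theta(x)$ at every fixed $x \neq 0$.

```python
import numpy as np

def ramp(x, eps):
    """Antiderivative of d_eps: 0 for x <= -eps, (x + eps)/(2*eps) in between, 1 for x >= eps."""
    return np.clip((x + eps) / (2 * eps), 0.0, 1.0)

xs = np.array([-0.5, -0.01, 0.01, 0.5])
print("x values:          ", xs)
for eps in (0.1, 0.01, 0.001):
    print(f"ramp, eps = {eps:5.3f}:", np.round(ramp(xs, eps), 4))
print("Theta(x):          ", np.where(xs >= 0, 1.0, 0.0))
```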

The third approach makes direct use of the definition (1.12). Let us compute $\int \Theta'(x)\psi(x)\, dx$. By partial integration we obtain

$$\int_{-\infty}^{\infty} \Theta'(x)\psi(x)\, dx = -\int_{-\infty}^{\infty} \Theta(x)\psi'(x)\, dx = -\int_0^{\infty} \psi'(x)\, dx = -\psi(x)\Big|_0^{\infty} = \psi(0),$$

where we have used (twice) that $\psi(x)$ is a test function and thus falls off as $x \to \pm\infty$. Therefore, since $\Theta'(x)$ acts on test functions in exactly the same way as $\delta(x)$ (and since its action on test functions is the defining property of every distribution), we find that $\Theta'(x) = \delta(x)$.

What is the derivative of the delta function? To answer this question we can simply differentiate a sequence of functions, like $d_\varepsilon(x)$ or $\tilde d_\varepsilon(x)$, that converges to $\delta(x)$ and check

$$\lim_{\varepsilon\to 0} \int_{-\infty}^{\infty} d_\varepsilon'(x)\psi(x)\, dx.$$

The distribution $\delta'(x)$ is then defined through

$$\int_{-\infty}^{\infty} \delta'(x)\psi(x)\, dx = \lim_{\varepsilon\to 0} \int_{-\infty}^{\infty} d_\varepsilon'(x)\psi(x)\, dx.$$

However, we are able to proceed in a direct manner. By using partial integration (and the fact that test functions fall off at infinity) we obtain

$$\int_{-\infty}^{\infty} \delta'(x)\psi(x)\, dx = -\int_{-\infty}^{\infty} \delta(x)\psi'(x)\, dx = -\psi'(0). \tag{1.16}$$

This equation defines the derivative of the delta function: $\delta(x)$ is a distribution that acts on test functions according to (1.12); $\delta'(x)$ is a distribution that acts on test functions according to (1.16).
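The defining property (1.16) can be tested with the differentiated Gaussian delta sequence $\tilde d_\varepsilon'(x)$: the integrals $\int \tilde d_\varepsilon'(x)\psi(x)\, dx$ should approach $-\psi'(0)$ (a sketch under these assumptions, my addition; the test function is an arbitrary choice).

```python
import numpy as np

def dtilde_prime(x, eps):
    """Derivative of the Gaussian delta sequence (1.9), exp(-x^2/eps^2)/(eps*sqrt(pi))."""
    return (-2.0 * x / eps**2) * np.exp(-x**2 / eps**2) / (eps * np.sqrt(np.pi))

psi = lambda x: np.sin(2 * x) * np.exp(-x**2)   # Schwartz-class test function with psi'(0) = 2

x = np.linspace(-5.0, 5.0, 2_000_001)
dx = x[1] - x[0]
for eps in (0.4, 0.1, 0.02):
    val = np.sum(dtilde_prime(x, eps) * psi(x)) * dx
    print(f"eps = {eps:4.2f}:  integral = {val:+.5f}   (-psi'(0) = -2)")
```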

Let us conclude by computing the Fourier transform of the delta function. To this end we note that

$$\hat d_\varepsilon(\xi) = \frac{\sin \varepsilon\xi}{\varepsilon\xi}, \qquad \hat{\tilde d}_\varepsilon(\xi) = e^{-\varepsilon^2\xi^2/4},$$

which follows directly from (1.7a) and (1.7c) by choosing $a$ and $b$ appropriately. Clearly, both $\hat d_\varepsilon(\xi) \to 1$ and $\hat{\tilde d}_\varepsilon(\xi) \to 1$ as $\varepsilon \to 0$. Therefore,

$$\hat\delta(\xi) = 1.$$

A direct way to obtain $\hat\delta(\xi)$ is through (1.12). We have

$$\hat\delta(\xi) = \int_{-\infty}^{\infty} \delta(x)\, e^{-i\xi x}\, dx = 1.$$

Remark. Note that the argument is valid despite the fact that $e^{-i\xi x}$ is not a test function, because one may use appropriate cut-off functions; cf. the comment following (1.13) and the remark at the end of this section.

Using the Fourier inversion formula we thus find

$$\delta(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat\delta(\xi)\, e^{i\xi x}\, d\xi = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\xi x}\, d\xi.$$

Likewise,

$$\delta(x-y) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\xi(x-y)}\, d\xi. \tag{1.17}$$

This representation of the delta function can be used to prove the Fourier inversion formula. We claim that

$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \hat f(\xi)\, e^{i\xi x}\, d\xi.$$

Using (1.17), the proof is simple,

$$\frac{1}{2\pi}\int_{-\infty}^{\infty} \hat f(\xi)\, e^{i\xi x}\, d\xi = \frac{1}{2\pi}\iint f(y)\, e^{-i\xi y}\, e^{i\xi x}\, dy\, d\xi = \int dy\, f(y)\, \frac{1}{2\pi}\int e^{i\xi(x-y)}\, d\xi = \int dy\, f(y)\,\delta(x-y) = f(x).$$
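The inversion formula itself is easy to probe numerically; with the Gaussian pair from (1.7a) (taking $a = 1$), the sketch below (my addition) evaluates $\frac{1}{2\pi}\int \hat f(\xi)\, e^{i\xi x}\, d\xi$ by quadrature and recovers $f(x) = e^{-x^2/2}$.

```python
import numpy as np

fhat = lambda xi: np.sqrt(2 * np.pi) * np.exp(-xi**2 / 2)   # transform of exp(-x^2/2), cf. (1.7a)

xi = np.linspace(-40.0, 40.0, 800_001)
dxi = xi[1] - xi[0]
for x in (0.0, 1.0, 2.5):
    f_rec = np.sum(fhat(xi) * np.exp(1j * xi * x)).real * dxi / (2 * np.pi)
    print(f"x = {x:3.1f}:  inversion integral = {f_rec:.6f}   exp(-x^2/2) = {np.exp(-x**2 / 2):.6f}")
```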

Remark. This is a final remark on test functions. By definition, a test function $\psi(x)$ (of the Schwartz class) must be $C^\infty$ and go to zero as $x \to \pm\infty$ faster than any power of $x$. A good example is $\psi(x) = e^{-x^2}$. However, let us construct a test function $\psi(x)$ like the one required for (1.13), which is supposed to be equal to $1$ in a neighborhood of $x = 0$. The classic example on which this construction is based is the function

$$\varphi(x) = \begin{cases} 0 & x \le 0 \\ e^{-1/x} & x > 0, \end{cases}$$

which is $C^\infty$. (It is a good exercise to prove that $\varphi(x)$ is $C^\infty$; the interesting point is of course $x = 0$. The simplest method is to successively use de l'Hospital's rule.) We set

$$\psi(x) = \begin{cases} 1 & x \in [-1,1] \\ \Bigl(1 + e^{-\frac{1}{|x|-1} + |x|}\Bigr)^{-1} & |x| > 1. \end{cases}$$

Obviously, $\psi(x) = 1$ in a neighborhood of $x = 0$, and $\psi(x) \to 0$ as $x \to \pm\infty$ exponentially and thus faster than any power of $x$. (Why? For large $|x|$ we have $\frac{1}{|x|-1} \approx \frac{1}{|x|}$, which is small for large $|x|$. Hence $-\frac{1}{|x|-1} + |x| \approx |x|$, and $1 + e^{-\frac{1}{|x|-1} + |x|} \approx 1 + e^{|x|} \approx e^{|x|}$. Therefore, $\psi(x) \approx e^{-|x|}$ for large $|x|$.) Furthermore, $\psi(x)$ is smooth (i.e., $C^\infty$) everywhere, including $x = \pm 1$. (Why? For $|x|$ close to $1$, the expression $\frac{1}{|x|-1}$ is large, hence $-\frac{1}{|x|-1} + |x| \approx -\frac{1}{|x|-1}$. Since $e^{-\frac{1}{|x|-1}}$ is extremely small, we find $\psi(x) \approx 1 - e^{-\frac{1}{|x|-1}}$ for $|x|$ close to $1$, and smoothness follows in close analogy with the smoothness of the example $\varphi(x)$.)
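As a small numerical check of the key property of $\varphi$ (my addition): its one-sided difference quotients at $x = 0$ tend to zero, the fingerprint of a $C^\infty$ function that vanishes to infinite order at the origin.

```python
import math

def phi(x):
    """phi(x) = exp(-1/x) for x > 0 and 0 otherwise; smooth and flat to all orders at x = 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

for h in (0.2, 0.1, 0.05, 0.02):
    d1 = phi(h) / h                                    # first difference quotient at 0
    d2 = (phi(2 * h) - 2 * phi(h) + phi(0.0)) / h**2   # second difference quotient at 0
    print(f"h = {h:4.2f}:  phi(h)/h = {d1:.3e}   second difference quotient = {d2:.3e}")
```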