1. Distributions

A distribution is a continuous linear functional T : D → R (or C), where D = C_c^∞(R^n).

1.1. Topology on D. Let K ⊂ R^n be a compact set. Define

D_K := {f ∈ D : f = 0 outside K},    ‖f‖_{K,m} := sup_{x ∈ K, |s| ≤ m} |D^s f(x)|.

Then f_n → f means:
(1) supp f_n ⊂ K, the same K for all n;
(2) for every D^s, D^s f_n → D^s f uniformly.

T is continuous means that for every K there are C_K and m ≥ 0 such that

|T(ψ)| ≤ C_K ‖ψ‖_{K,m}   for all ψ ∈ D_K.

1.2. The distribution x_+^λ. The functional

T_λ(ϕ) := ∫_0^∞ x^λ ϕ(x) dx

is well defined for λ ∈ C with Re λ > −1. Integrate by parts with u = ϕ(x), dv = x^λ dx, so that du = ϕ′(x) dx and v = x^{λ+1}/(λ+1); the boundary term vanishes because ϕ has compact support and Re(λ+1) > 0, so

T_λ(ϕ) = ∫_0^∞ x^λ ϕ(x) dx = [ x^{λ+1}/(λ+1) · ϕ(x) ]_0^∞ − (1/(λ+1)) ∫_0^∞ x^{λ+1} ϕ′(x) dx = −(1/(λ+1)) T_{λ+1}(ϕ′).

The right-hand side makes sense for Re(λ+1) > −1, that is Re λ > −2. In this way we can extend T_λ to Re λ > −2, but there is a pole at λ = −1:

lim_{λ→−1} (λ+1) T_λ(ϕ) = −∫_0^∞ ϕ′(x) dx = −ϕ(x) |_0^∞ = ϕ(0) = δ(ϕ).

So δ is the residue of T_λ at λ = −1.
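A quick numerical illustration of the residue computation (a sketch, not part of the notes; the test function ϕ(x) = e^{−x²} and the quadrature parameters are arbitrary choices). It evaluates the analytically continued expression (λ+1)T_λ(ϕ) = −∫_0^∞ x^{λ+1} ϕ′(x) dx for λ near −1 and compares it with ϕ(0) = 1:

```python
import math

def phi(x):
    # test function phi(x) = e^{-x^2}, so phi(0) = 1
    return math.exp(-x * x)

def phi_prime(x):
    return -2.0 * x * math.exp(-x * x)

def regularized(lam, n=200000, upper=10.0):
    # (lam+1) T_lam(phi) computed via the integrated-by-parts formula
    # -int_0^inf x^{lam+1} phi'(x) dx (valid for Re lam > -2), trapezoidal rule
    h = upper / n
    s = 0.0
    for i in range(n + 1):
        x = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * (x ** (lam + 1.0)) * phi_prime(x)
    return -s * h

# as lam -> -1, (lam+1) T_lam(phi) -> phi(0) = 1: delta is the residue at lam = -1
print(regularized(-0.999))   # close to 1
```

The direct integral ∫_0^∞ x^λ ϕ(x) dx blows up like ϕ(0)/(λ+1) as λ → −1; the integrated-by-parts form stays finite, which is exactly the point of the continuation.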

1.3. Distributions with support at 0.

Definition. We say a distribution T vanishes in an open set U if T(ϕ) = 0 for every ϕ ∈ C_c^∞(R) with support in U. The support of a distribution T is the smallest closed set F such that T vanishes in the open set U = R \ F.

Theorem. A distribution T with supp T = {0} is a finite linear combination of derivatives of δ.

Proof. Let ψ be a function in D which is 1 on (−1/2, 1/2) and vanishes outside (−1, 1), and write ψ_ε(x) := ψ(x/ε), so that ψ_ε is identically 1 on (−ε/2, ε/2) and supported in (−ε, ε). Let f_ε := f ψ_ε. Because of the support property, T(f) = T(f_ε): indeed f = f_ε near 0, so T(f − f_ε) = 0.

Because of continuity, there are C and k such that, taking K = [−1, 1],

|T(f)| ≤ C sup_{|j| ≤ k, x ∈ K} |D^j f(x)|.

We will apply this to the functions f_ε with ε → 0. By direct calculation, D^j f_ε is a sum of terms of the form ε^{−(j−i)} (D^{j−i}ψ)(x/ε) · D^i f(x) with i ≤ j ≤ k.

Assume first that D^j f(0) = 0 for all j ≤ k. By Taylor's formula, |D^s f(x)| ≤ C |x|^{k+1−s} near 0, so on the support of ψ_ε (where |x| ≤ ε)

|D^s f_ε| ≤ C ε^{k+1−s}.

Hence |T(f_ε)| ≤ C ε, and since T(f) = T(f_ε) for every ε, T(f) = 0.

Now suppose f is arbitrary. The function

f_k := ( ∑_{j ≤ k} (f^{(j)}(0)/j!) x^j ) ψ ∈ C_c^∞

has the same first k derivatives at 0 as f. So by the previous argument, T(f − f_k) = 0. But then

T(f) = T(f_k) = ∑_{j ≤ k} (f^{(j)}(0)/j!) T(x^j ψ),

and the numbers T(x^j ψ)/j! are fixed constants independent of f. Thus we have proved that

(1.3.1) T(f) = ∑_{j ≤ k} c_j D^j f(0).
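The truncation step in the proof, replacing f by a cutoff Taylor polynomial f_k with the same first k derivatives at 0, can be seen numerically. A sketch (not from the notes; f(x) = e^x and k = 3 are arbitrary choices, and the cutoff ψ is omitted since it equals 1 at the points sampled):

```python
import math

K = 3  # order of the truncation

def f(x):
    # sample f; every derivative of e^x at 0 equals 1
    return math.exp(x)

def f_trunc(x):
    # Taylor polynomial of order K at 0; the cutoff psi is identically 1
    # near 0, so it is omitted on the small region sampled below
    return sum(x ** j / math.factorial(j) for j in range(K + 1))

# f - f_trunc vanishes to order K at 0: |f(x) - f_trunc(x)| <= C |x|^{K+1},
# which is what forces T(f - f_trunc) = 0 for T of order K supported at {0}.
ratios = [abs(f(x) - f_trunc(x)) / x ** (K + 1) for x in (0.1, 0.01, 0.001)]
print(ratios)   # all close to 1/(K+1)! = 1/24
```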

2. Fourier Inversion

2.1. Recall that the Fourier transform

(2.1.1) F_x(f)(ξ) := ∫ e^{−iξx} f(x) dx

takes the Schwartz space S to itself. We would like to compute its inverse. The main theorem is the following.

Theorem. The operator

F_ξ(φ)(x) := (1/2π) ∫ e^{iξx} φ(ξ) dξ

satisfies

F_ξ F_x(f) = f,   F_x F_ξ(φ) = φ.

We will treat one of the equations; the other one is the mirror image. The natural thing to try is to write out the composition and change the order of integration:

(2.1.2) F_ξ F_x(f)(y) = (1/2π) ∫ e^{iξy} ∫ e^{−iξx} f(x) dx dξ = (1/2π) ∫∫ e^{−iξ(x−y)} f(x) dξ dx.

But the inner integral does not make sense. Formally,

(2.1.3) ∫ e^{−iξ(x−y)} dξ

is the Fourier transform of the function φ(ξ) = 1 at x − y:

(2.1.4) φ̂(x − y) = ∫ e^{−i(x−y)ξ} · 1 dξ.

If we write ψ_x(ξ) := (1/2π) e^{ixξ}, then the inner integral in (2.1.2) is, formally, ψ̂_x(y). To deal with the inability to just change the order of integration and evaluate the inner integral, we resort to distributions.

Definition. Let T be a tempered distribution. We define the Fourier transform T̂ by

T̂(f) := T(f̂).

This is well defined, because f̂ ∈ S. When T = L_φ, this coincides with the usual Fourier transform:

(2.1.5) L̂_φ(f) = ∫ φ(ξ) f̂(ξ) dξ = ∫ φ(ξ) ∫ e^{−iξx} f(x) dx dξ = ∫ f(x) ∫ e^{−iξx} φ(ξ) dξ dx = ∫ f(x) φ̂(x) dx.

The change of order of integration is justified because both f and φ are in S.

2.2. From this point of view, Fourier inversion comes down to asking: which tempered distribution T satisfies T̂ = δ? The equation is the same as f(0) = T(f̂). Given such a T, we can implement Fourier inversion as follows. Given x, define the translation-by-x operator L_x as

(2.2.1) L_x(f)(y) := f(x + y).

Then

(2.2.2) f(x) = (L_x(f))(0) = T( L̂_x(f) ).

On the other hand, substituting u = x + y,

(2.2.3) L̂_x(f)(ξ) = ∫ e^{−iξy} L_x(f)(y) dy = ∫ e^{−iξy} f(x + y) dy = ∫ e^{−iξ(u−x)} f(u) du = e^{iξx} f̂(ξ).

So equation (2.2.2) becomes

(2.2.4) f(x) = T( e^{iξx} f̂(ξ) )

(T acts on the right-hand side as a function of ξ).

2.3. Let h_t(x) := e^{−x²t²}. This function is in S. Then

(2.3.1) lim_{t→0} e^{−x²t²} f(x) = f(x),   |e^{−x²t²} f(x)| ≤ |f(x)|,

so by the Lebesgue dominated convergence theorem,

(2.3.2) lim_{t→0} L_{h_t}(f) = lim_{t→0} ∫ e^{−x²t²} f(x) dx = ∫ f(x) dx.

On the other hand,

(2.3.3) L̂_{h_t}(f) = L_{p_t}(f), where p_t = ĥ_t,

by formula (4.1.6) and Lemma 4.1 in L. Gross's notes; p_t is a Gaussian approximate identity of total mass 2π, so L_{p_t}(f) → 2π f(0) as t → 0. Applying (2.3.2) to f̂ and comparing, ∫ f̂(ξ) dξ = 2π f(0). The conclusion is that T = 1/(2π) works. Putting all of this together, we get the formula for Fourier inversion.
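With the conventions above (F_x carries no constant, F_ξ carries the 1/2π), Fourier inversion can be checked numerically on a Gaussian. A sketch (not from the notes; the grids and the test function are arbitrary choices):

```python
import cmath
import math

def fourier(f, xs, xi):
    # Riemann sum for  f_hat(xi) = int e^{-i xi x} f(x) dx
    h = xs[1] - xs[0]
    return sum(cmath.exp(-1j * xi * x) * f(x) for x in xs) * h

def inv_fourier(fhat_vals, xis, x):
    # Riemann sum for  (1/2 pi) int e^{i xi x} f_hat(xi) d xi
    h = xis[1] - xis[0]
    s = sum(cmath.exp(1j * xi * x) * v for xi, v in zip(xis, fhat_vals))
    return s * h / (2 * math.pi)

def f(x):
    # Gaussian; exact transform is f_hat(xi) = sqrt(2 pi) e^{-xi^2/2}
    return math.exp(-x * x / 2)

xs = [-12.0 + 0.02 * i for i in range(1201)]
xis = [-12.0 + 0.1 * i for i in range(241)]
fhat = [fourier(f, xs, xi) for xi in xis]

# F_xi F_x f = f: recover f at a few sample points
recovered = [inv_fourier(fhat, xis, x).real for x in (0.0, 0.5, 1.0)]
print(recovered)   # close to [f(0), f(0.5), f(1)] = [1.0, 0.8825, 0.6065]
```

Because the Gaussian decays rapidly, plain Riemann sums on a truncated line already reproduce f to many digits; this is the discrete shadow of the h_t-regularization argument above.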

3. Oscillator Representation

3.1. Recall V = S(R), the rapidly decreasing functions. The Lie algebra

sl(2, C) = span{ e = ( 0 1 ; 0 0 ), h = ( 1 0 ; 0 −1 ), f = ( 0 0 ; 1 0 ) }

acts on V by the formulas

(3.1.1) ϖ(h) = x ∂_x + 1/2,   ϖ(e) = (i/2) x²,   ϖ(f) = (i/2) ∂_x².

For a linear space V, denote by End V the space of linear maps L : V → V. This is again a vector space, with the usual addition and scalar multiplication

(3.1.2) (L₁ + L₂)(v) := L₁(v) + L₂(v),   (αL)(v) := α L(v).

To say that a Lie algebra g acts on a space V means that there is a linear map

(3.1.3) ϖ : g → End(V)

which also satisfies

(3.1.4) ϖ([x, y]) = ϖ(x) ϖ(y) − ϖ(y) ϖ(x).

We can also exponentiate the operators in (3.1.1):

(1) ω(e^{th}) F(x) = e^{t/2} F(e^t x);
(2) ω(e^{te}) F(x) = e^{itx²/2} F(x);
(3) ω(e^{tf}) F = convolution with ((1+i)/2) (πt)^{−1/2} e^{ix²/2t}.

(1) and (2) are easy; (3) needs the Fourier transform. The reason for the i is that (1) and (2) are unitary operators with respect to L²(R).

We are interested in the operator

−2i ϖ(e − f) = x² − ∂_x²,   k := −i(e − f).

This is called the Hermite operator. We want eigenfunctions. If we write

a = x + ∂_x,   a⁺ = x − ∂_x,

we get

[a, a⁺] = 2,   (1/4)(a⁺ a + a a⁺) = ϖ(k).

This is also a Lie algebra representation, of the Heisenberg algebra {p, q, z} with [p, q] = z, [p, z] = 0, [q, z] = 0; the central element z acts by multiplication by 2. By induction,

[a, (a⁺)^j] = 2j (a⁺)^{j−1};

for example, using a a⁺ = a⁺ a + 2,

[a, (a⁺)²] = a (a⁺)² − (a⁺)² a = (a⁺ a + 2) a⁺ − (a⁺)² a = a⁺ (a⁺ a + 2) + 2a⁺ − (a⁺)² a = 4 a⁺.

Let v₀ = e^{−x²/2}. Then a v₀ = 0. Set v_j := (a⁺)^j v₀ ∈ S(R), j ∈ N, so that a⁺ v_j = v_{j+1}. Furthermore

a v_j = a (a⁺)^j v₀ = 2j (a⁺)^{j−1} v₀ = 2j v_{j−1},

(v_j, v_l) = 2^l l! δ_{jl} (v₀, v₀) = 2^l l! √π δ_{jl},   since (v₀, v₀) = ∫ e^{−x²} dx = √π.

Conclusion. v_j = P_j(x) e^{−x²/2} with P_j a polynomial, called a Hermite polynomial.

Theorem. The Hermite functions form an orthogonal basis for L²(R), and

ϖ(k) v_j = (j + 1/2) v_j.

4. Review of some Linear Algebra

4.1. Let A be an n × n matrix with complex entries. The minimal polynomial m = m_A is the monic polynomial m(t) of lowest degree such that m(A) = 0. The characteristic polynomial p = p_A is defined as

(4.1.1) p(t) = det(tI − A).

Then

(4.1.2) m(t) = ∏ (t − λ_i)^{m_i},   p(t) = ∏ (t − λ_i)^{n_i}.

The λ_i are called the (generalized) eigenvalues. The Cayley–Hamilton theorem implies that 1 ≤ m_i ≤ n_i.
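As a numerical sanity check of the Hermite construction in Section 3: since a⁺ = x − ∂_x sends P(x)e^{−x²/2} to (2xP − P′)e^{−x²/2}, the polynomials obey the recurrence P_{j+1} = 2xP_j − P_j′, and the orthogonality relations (v_j, v_l) = 2^l l! √π δ_{jl} can be tested by quadrature. A sketch (not part of the notes; the interval [−8, 8] and grid size are arbitrary choices):

```python
import math

def deriv(c):
    # derivative of the polynomial with coefficient list c (c[i] = coeff of x^i)
    return [i * c[i] for i in range(1, len(c))] or [0.0]

def times_2x(c):
    return [0.0] + [2.0 * a for a in c]

def polysub(c, d):
    n = max(len(c), len(d))
    c = c + [0.0] * (n - len(c))
    d = d + [0.0] * (n - len(d))
    return [a - b for a, b in zip(c, d)]

def evalp(c, x):
    # Horner evaluation
    r = 0.0
    for a in reversed(c):
        r = r * x + a
    return r

# v_0 = e^{-x^2/2}; a+ = x - d/dx sends P e^{-x^2/2} to (2xP - P') e^{-x^2/2},
# so the Hermite polynomials satisfy P_{j+1} = 2x P_j - P_j'.
P = [[1.0]]
for j in range(5):
    P.append(polysub(times_2x(P[j]), deriv(P[j])))

def inner(j, l, n=4000, L=8.0):
    # (v_j, v_l) = int P_j(x) P_l(x) e^{-x^2} dx, trapezoidal rule on [-L, L]
    h = 2 * L / n
    s = 0.0
    for i in range(n + 1):
        x = -L + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * evalp(P[j], x) * evalp(P[l], x) * math.exp(-x * x)
    return s * h

# (v_j, v_l) = 2^l * l! * sqrt(pi) * delta_{jl}
print(inner(2, 3))   # close to 0
print(inner(3, 3))   # close to 2^3 * 3! * sqrt(pi)
```

The recurrence reproduces the physicists' Hermite polynomials (P₁ = 2x, P₂ = 4x² − 2, ...), matching the normalization a v_j = 2j v_{j−1} used above.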

4.2. The results in this section hold over any field. Assume that m = pq, with p, q polynomials such that (p, q) = 1. The Euclidean algorithm implies that there are polynomials a, b such that

(4.2.1) ap + bq = 1.

Then

(4.2.2) a(A)p(A) + b(A)q(A) = I.

Let

(4.2.3) V_p := q(A)V,   V_q := p(A)V.

The first conclusion is that

(4.2.4) V = V_p + V_q,   p(A)V_p = (0),   q(A)V_q = (0).

The second part is clear; for example p(A)V_p = p(A)q(A)V = m(A)V = (0) by the definition of m. For the first part,

(4.2.5) v = Iv = p(A)a(A)v + q(A)b(A)v,

with p(A)a(A)v ∈ V_q and q(A)b(A)v ∈ V_p. In addition V_p ∩ V_q = (0): if v = p(A)x = q(A)y, then

(4.2.6) v = a(A)p(A)v + b(A)q(A)v = a(A)p(A)q(A)y + b(A)q(A)p(A)x = a(A)m(A)y + b(A)m(A)x = 0 + 0 = 0.

Since A commutes with p(A) and q(A), it preserves both V_p and V_q. So if we change bases in such a way that, say, the first r vectors form a basis of V_p and the last s vectors a basis of V_q, then the matrix A becomes block diagonal:

(4.2.7) A = ( A_p 0 ; 0 A_q ).

The minimal polynomial of A_p is p, and the minimal polynomial of A_q is q. A similar result holds when m decomposes into more than two factors, mutually prime to each other.

4.3. We return to the setting of Section 4.1. There is a basis in which the matrix A is block diagonal,

(4.3.1) A = diag(A_{λ₁}, A_{λ₂}, ..., A_{λ_k}).

Each A_{λ_i} has minimal polynomial (t − λ_i)^{m_i}, and each block has size n_i. The spaces V_{λ_i} are called generalized λ_i-eigenspaces.
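The decomposition of Section 4.2 can be run on a concrete matrix. A sketch (the matrix A and the Bézout pair are hand-picked for this example, not from the notes): for A below, m(t) = (t−2)²(t−3); with p = (t−2)² and q = t−3, the identity (t−2)² − (t−1)(t−3) = 1 gives a = 1, b = −(t−1).

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matsub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def matadd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(c, M):
    return [[c * x for x in row] for row in M]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Z = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
A = [[2, 1, 0], [0, 2, 0], [0, 0, 3]]   # minimal polynomial m(t) = (t-2)^2 (t-3)

pA = matmul(matsub(A, scal(2, I)), matsub(A, scal(2, I)))  # p(A) = (A - 2I)^2
qA = matsub(A, scal(3, I))                                 # q(A) = A - 3I
aA = I                                                     # a(t) = 1
bA = scal(-1, matsub(A, I))                                # b(t) = -(t - 1)

assert matmul(pA, qA) == Z                           # m(A) = p(A) q(A) = 0
assert matadd(matmul(aA, pA), matmul(bA, qA)) == I   # (4.2.2): a(A)p(A) + b(A)q(A) = I

Ep = matmul(bA, qA)          # b(A)q(A) projects onto V_p = q(A)V along V_q
assert matmul(Ep, Ep) == Ep  # idempotent, as a projection should be
print("primary decomposition identities verified")
```

Here Ep turns out to be diag(1, 1, 0): its image is the generalized 2-eigenspace, and I − Ep projects onto the 3-eigenspace, realizing the block decomposition (4.2.7).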