MATH 205C: STATIONARY PHASE LEMMA

For $\omega \gg 1$, consider an integral of the form
\[
I(\omega) = \int e^{i\omega f(x)} u(x)\,dx,
\]
where $u \in C_c^\infty(\mathbb{R}^n)$ is complex valued, with support in a compact set $K$, and $f \in C^\infty(\mathbb{R}^n)$ is real valued. Thus $|I(\omega)| \leq C(K) \sup |u|$; we are interested in the asymptotic behavior as $\omega \to \infty$.

If $f$ has no critical points, then as $\omega \to \infty$ the exponential becomes highly oscillatory, and one expects that $I(\omega) \to 0$ rapidly. Indeed, $|I(\omega)| \leq C_k \omega^{-k}$ for all $k$ in this case. To see this, we note that if $f'$ never vanishes then, with $|f'(x)|^2 = \sum_j (\partial_j f)^2$,
\[
e^{i\omega f(x)} = (i\omega)^{-1} L e^{i\omega f(x)},\qquad L = |f'(x)|^{-2} \sum_j (\partial_j f)\,\partial_j,
\]
and so, with $L^t$ the transpose of $L$, i.e. $L^t u = -\sum_j \partial_j\big( |f'(x)|^{-2} (\partial_j f)\, u \big)$, with the relevant factors acting as multiplication operators,
\[
I(\omega) = (i\omega)^{-1} \int (L e^{i\omega f(\cdot)})(x)\, u(x)\,dx = (i\omega)^{-1} \int e^{i\omega f(x)}\, (L^t u)(x)\,dx,
\]
so by induction
\[
I(\omega) = (i\omega)^{-k} \int e^{i\omega f(x)}\, ((L^t)^k u)(x)\,dx,
\]
leading to the conclusion that
\[
(1)\qquad |I(\omega)| \leq C(K, f, k)\,\omega^{-k} \sum_{|\alpha| \leq k} \sup |D^\alpha u|.
\]
Of course, here we only needed $u \in C^k$ with support in $K$ for this estimate.

For a moment, also consider non-real valued $f$ with $\operatorname{Im} f \geq 0$, so $|e^{i\omega f(x)}| = e^{-\omega \operatorname{Im} f(x)} \leq 1$ for $\omega > 0$. Then, as long as $f' \neq 0$, the above calculation goes through if we replace $f'$ by its complex conjugate in the definition of $L$, so (1) also holds. Moreover, if $f$ does have some critical points, but at these $\operatorname{Im} f > 0$, then a partition of unity argument (noting that the set of critical points is closed, as is the set of points where $f$ is real, while $\operatorname{supp} u$ is compact) allows one to reduce consideration of the integral to the two cases where either $\operatorname{Im} f \geq 0$ and $f' \neq 0$, which we just analyzed, or instead $\operatorname{Im} f > 0$ (hence bounded below by a positive constant). In the latter case one actually gets $|I(\omega)| \leq e^{-\omega \inf \operatorname{Im} f}\, \|u\|_{L^1}$, i.e. one has exponential decay. Thus, if $\operatorname{Im} f \geq 0$, and in addition $f' = 0$ implies $\operatorname{Im} f > 0$, then (1) still holds.

Returning to real valued $f$, the interesting case is if $f$ has some critical points, and the simplest setting is if these are non-degenerate, i.e. if $f'(x_0) = 0$ implies that the Hessian $f''$ is invertible at $x_0$. Thus, one assumes that if $f'(x_0) = 0$ then
\[
(2)\qquad f(x) = f(x_0) + \tfrac{1}{2} Q(x - x_0,\, x - x_0) + R(x, x_0),
\]
where $|R(x, x_0)| \leq C|x - x_0|^3$.
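
The rapid decay (1) away from critical points is easy to observe numerically. The following is a minimal sketch (my illustration, not part of the notes): the phase $f(x) = x + x^3/3$ has $f'(x) = 1 + x^2 > 0$, so there are no critical points on the support of the bump $u$, and $\omega^4 |I(\omega)|$ should stay bounded (in fact tend to $0$); the particular bump profile, grid size, and Riemann-sum quadrature are ad hoc choices.

```python
import numpy as np

def bump(x):
    # C_c^infty bump supported in (-1, 1)
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, np.exp(-1.0 / np.maximum(1.0 - x**2, 1e-300)), 0.0)

f = lambda x: x + x**3 / 3.0        # f'(x) = 1 + x^2 > 0: no critical points

x = np.linspace(-1.0, 1.0, 400001)  # fine grid; integrand is smooth with compact support
dx = x[1] - x[0]
for omega in [10, 20, 40, 80, 160]:
    I = np.sum(np.exp(1j * omega * f(x)) * bump(x)) * dx
    # omega^4 * |I| should stay bounded (in fact decay): faster-than-any-power decay
    print(f"omega={omega:4d}  |I|={abs(I):.3e}  omega^4*|I|={omega**4 * abs(I):.3e}")
```

Since the integrand is smooth with compact support, the Riemann sum converges to $I(\omega)$ extremely fast once the grid resolves the oscillation, so the printed decay reflects the integral itself, not the quadrature.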

Here $Q$ is a non-degenerate symmetric bilinear (quadratic) form; writing it as $Q(x - x_0, x - x_0) = \langle A(x - x_0),\, x - x_0\rangle$ with the standard inner product on $\mathbb{R}^n$, $A$ is a symmetric invertible linear operator on $\mathbb{R}^n$. The signature of $Q$, and of $A$, can be defined as the pair $(k, n-k)$, where $k$ is the maximal dimension of a subspace on which $Q$ is positive definite, or equivalently the number of positive eigenvalues of $A$. It is also useful to have the single number $\operatorname{sign} Q = \operatorname{sign} A = k - (n-k)$, the difference of the number of positive and negative eigenvalues of $A$.

Note that non-degenerate critical points are isolated, for $f'(x) - A(x - x_0)$ vanishes quadratically at $x_0$, and $A$ is invertible. Thus, by using a partition of unity, when the critical points are non-degenerate one can easily arrange that there is a single critical point $x_0$ over $\operatorname{supp} u$, which one may arrange to be at $0$ by a translation of the coordinates. In this case one has $|f'(x)| \geq C|x - x_0|$ near $x_0$ for a suitable $C > 0$, so $|f'(x)|^{-1} \leq C^{-1}|x - x_0|^{-1}$.

It is useful to observe that even though, when $f$ has a non-degenerate critical point at $0$, the above integration by parts argument breaks down in general, it continues to work if $u$ vanishes to sufficiently high order at $0$. It is convenient to be somewhat more general (due to the singularities the integration by parts argument induces), so suppose that $u \in C^\infty(\mathbb{R}^n \setminus \{0\})$, with support in $K$, satisfies $|D^\alpha u| \leq C|x|^{2k - |\alpha|}$ for $|\alpha| \leq k$. Then, using the divergence theorem in the last step,
\[
I(\omega) = \lim_{\rho \to 0} \int_{|x| \geq \rho} e^{i\omega f(x)} u(x)\,dx = (i\omega)^{-1} \lim_{\rho \to 0} \int_{|x| \geq \rho} (L e^{i\omega f(\cdot)})(x)\, u(x)\,dx
\]
\[
= (i\omega)^{-1} \lim_{\rho \to 0} \Big( \int_{|x| \geq \rho} e^{i\omega f(x)}\, (L^t u)(x)\,dx + \int_{|x| = \rho} |f'|^{-2} \Big( \sum_j (\partial_j f)\,\nu_j \Big) e^{i\omega f(x)} u(x)\,dS(x) \Big),
\]
where $\nu$ is the outward unit normal of $\mathbb{R}^n \setminus \{x : |x| < \rho\}$. Now, by the assumptions on $u$, the estimate on $|f'|^{-1}$ and the vanishing of $f'$ at $0$, the integrand in the surface integral is bounded by $C\rho^{2k-1}$, and thus the surface integral tends to $0$ as $\rho \to 0$. Correspondingly,
\[
I(\omega) = (i\omega)^{-1} \lim_{\rho \to 0} \int_{|x| \geq \rho} e^{i\omega f(x)}\, (L^t u)(x)\,dx = (i\omega)^{-1} \int e^{i\omega f(x)}\, (L^t u)(x)\,dx,
\]
as $L^t u$ is bounded by the assumptions. Thus,
\[
|I(\omega)| \leq C(K, f)\,\omega^{-1} \sum_{|\alpha| \leq 1} \sup |D^\alpha u|\,|x|^{-2+|\alpha|}.
\]
Further, if $u \in C^\infty(\mathbb{R}^n \setminus \{0\})$, with support in $K$, satisfies $|D^\alpha u| \leq C|x|^{2k - |\alpha|}$ for $|\alpha| \leq k$, then $L^t u \in C^\infty(\mathbb{R}^n \setminus \{0\})$, with support in $K$, satisfies $|D^\alpha L^t u| \leq C|x|^{2(k-1) - |\alpha|}$ for $|\alpha| \leq k - 1$. Thus, the argument can be applied iteratively to conclude that
\[
(3)\qquad |I(\omega)| \leq C_k(K, f)\,\omega^{-k} \sum_{|\alpha| \leq k} \sup |D^\alpha u|\,|x|^{-2k+|\alpha|}.
\]
We now show how to use this to reduce the case of general $f$ to that of a quadratic form, $\tfrac12 Q(x - x_0, x - x_0)$. With the notation of (2), let
\[
f_s(x) = f(x_0) + \tfrac12 Q(x - x_0,\, x - x_0) + s R(x, x_0),\qquad s \in [0, 1],
\]
so $f_0$ is quadratic (plus a constant), and $f_1 = f$. Let
\[
I(\omega, s) = \int e^{i\omega f_s(x)} u(x)\,dx.
\]
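
As a concrete illustration of the linear algebra (my addition, with the saddle $f(x, y) = x^2 - y^2$ as the example), the quantities attached to $A$, namely the signature pair $(k, n-k)$, $\operatorname{sign} A$, and $|\det A|^{-1/2}$, which enter the expansion later in the notes, can all be read off the eigenvalues:

```python
import numpy as np

# Hessian at the critical point 0 of the saddle f(x, y) = x^2 - y^2:
A = np.array([[2.0, 0.0],
              [0.0, -2.0]])

lam = np.linalg.eigvalsh(A)              # eigenvalues of the symmetric matrix A
k = int(np.sum(lam > 0))                 # number of positive eigenvalues
n = len(lam)
print("signature pair (k, n-k):", (k, n - k))        # here (1, 1)
print("sign A = k - (n-k):", k - (n - k))            # here 0
print("|det A|^(-1/2):", abs(np.prod(lam)) ** -0.5)  # prefactor in the expansion
```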

Then differentiating $2k$ times in $s$ yields
\[
I^{(2k)}(\omega, s) = (i\omega)^{2k} \int e^{i\omega f_s(x)}\, R(x, x_0)^{2k}\, u(x)\,dx.
\]
Since $R$ is $C^\infty$ and vanishes cubically at $x = x_0$, the function $v_{x_0}(x) = R(x, x_0)^{2k} u(x)$ satisfies
\[
|(D^\alpha v_{x_0})(x)| \leq C|x - x_0|^{6k - |\alpha|}
\]
when $u \in C^{3k}(\mathbb{R}^n)$ with compact support in $K$ and $|\alpha| \leq 3k$. Thus, applying (3) with $3k$ in place of $k$, we deduce that
\[
|I^{(2k)}(\omega, s)| \leq C\,\omega^{2k}\,\omega^{-3k} = C\,\omega^{-k}.
\]
By Taylor's theorem,
\[
(4)\qquad \Big| I(\omega, 1) - \sum_{j < 2k} \frac{1}{j!}\, I^{(j)}(\omega, 0) \Big| \leq \sup_{s \in [0,1]} \frac{1}{(2k)!}\, \big| I^{(2k)}(\omega, s) \big| \leq C\,\omega^{-k},
\]
so, modulo $\omega^{-k}$ decay, $I(\omega)$ can be calculated merely by calculating
\[
I^{(j)}(\omega, 0) = (i\omega)^j \int e^{i\omega f_0(x)}\, R(x, x_0)^j\, u(x)\,dx
\]
\[
(5)\qquad\qquad = (i\omega)^j\, e^{i\omega f(x_0)} \int e^{i\omega Q(x - x_0,\, x - x_0)/2}\, R(x, x_0)^j\, u(x)\,dx
\]
for $j < 2k$, i.e. we are reduced to the case of purely quadratic phase.

We now want to compute integrals of the form
\[
I_0(\omega) = \int e^{i\omega \langle Ax, x\rangle/2}\, u(x)\,dx,
\]
with $u$ being $C^\infty$. Note that we could replace $u$ by any function that has the same Taylor series to order $2k$, in view of our preceding estimates, and one way to proceed would be to compute such integrals explicitly. Instead, we rewrite, using Parseval's formula,
\[
I_0(\omega) = \int (\mathcal{F}^{-1} e^{i\omega\langle A\cdot,\cdot\rangle/2})(\xi)\,(\mathcal{F}u)(\xi)\,d\xi.
\]
We make a general definition: if $P \in C^\infty(\mathbb{R}^n)$ with $|D^\alpha P(\xi)| \leq C_\alpha \langle\xi\rangle^{M_\alpha}$, i.e. if $P$ is polynomially bounded with all derivatives, let $P(D)u = \mathcal{F}^{-1}\big(P(\cdot)\,(\mathcal{F}u)(\cdot)\big)$, so $P(D)$ maps $\mathcal{S}$ to itself and $\mathcal{S}'$ to itself. Thus, we have
\[
(6)\qquad I_0(\omega) = (2\pi)^n \big( (\mathcal{F}^{-1} e^{i\omega\langle A\cdot,\cdot\rangle/2})(D)\, u \big)(0).
\]
Our first task is to compute the inverse Fourier transform of the imaginary Gaussian. First, recall that
\[
\big(\mathcal{F}(t \mapsto e^{-t^2/2})\big)(\tau) = \sqrt{2\pi}\, e^{-\tau^2/2}
\quad\text{and}\quad
\big(\mathcal{F}^{-1}(t \mapsto e^{-t^2/2})\big)(\tau) = (2\pi)^{-1/2}\, e^{-\tau^2/2}.
\]
A change of variables gives that for $\alpha > 0$,
\[
\big(\mathcal{F}^{-1}(t \mapsto e^{-\alpha t^2/2})\big)(\tau) = (2\pi)^{-1/2}\, \alpha^{-1/2}\, e^{-\tau^2/(2\alpha)}.
\]
Now, both sides are analytic functions of $\alpha$ with values in $\mathcal{S}(\mathbb{R})$ in $\operatorname{Re}\alpha > 0$, and both sides are continuous functions of $\alpha$ with values in $\mathcal{S}'(\mathbb{R})$ in $\operatorname{Re}\alpha \geq 0$, $\alpha \neq 0$, so the formula continues to hold there (with the square root being the standard one for positive reals, with a branch cut along the negative reals).
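
One can sanity-check this formula numerically for a complex $\alpha$ in the half-plane of analyticity (my addition, not part of the notes; I use the one-dimensional instance of the convention $(\mathcal{F}^{-1} v)(\tau) = (2\pi)^{-1} \int e^{it\tau} v(t)\,dt$ appearing below, and the sample value $\alpha = 1 - 2i$, the truncation, and the grid are arbitrary choices):

```python
import numpy as np

# Convention: (Finv v)(tau) = (2 pi)^(-1) * integral of e^{i t tau} v(t) dt  (n = 1)
def Finv(v, tau, t, dt):
    return np.sum(np.exp(1j * t * tau) * v) * dt / (2 * np.pi)

alpha = 1.0 - 2.0j                       # Re(alpha) > 0, so the Gaussian decays
t = np.linspace(-50.0, 50.0, 400001)
dt = t[1] - t[0]
v = np.exp(-alpha * t**2 / 2)

for tau in [0.0, 1.0, 2.5]:
    lhs = Finv(v, tau, t, dt)
    # principal branch of alpha^(-1/2), as in the analytic continuation argument
    rhs = (2 * np.pi) ** -0.5 * alpha ** -0.5 * np.exp(-tau**2 / (2 * alpha))
    print(f"tau={tau:.1f}  |lhs-rhs|={abs(lhs - rhs):.2e}")
```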

In particular, when $\alpha = \mp i\beta$, $\beta > 0$, we deduce that
\[
\big(\mathcal{F}^{-1}(t \mapsto e^{\pm i\beta t^2/2})\big)(\tau) = (2\pi)^{-1/2}\, \beta^{-1/2}\, e^{\pm i\pi/4}\, e^{\mp i\tau^2/(2\beta)}.
\]
Now, if $A$ is a non-degenerate symmetric linear map on $\mathbb{R}^n$, then there is an orthogonal transformation $O$ diagonalizing $A$, i.e. $\Lambda = O A O^t = \operatorname{diag}(\lambda_j)$, with the rows of $O$ given by orthonormal eigenvectors $e_j$ of $A$, and with $\lambda_j \neq 0$ the eigenvalues of $A$, so $\langle Ax, x\rangle = \sum_j \lambda_j \langle e_j, x\rangle^2$. Thus, in the coordinates $y_j = \langle e_j, x\rangle$, i.e. $y = Ox$, keeping in mind that $O$ has determinant of absolute value $1$,
\[
(2\pi)^{-n} \int e^{ix\cdot\xi}\, e^{i\langle Ax,x\rangle/2}\,dx
= (2\pi)^{-n} \int e^{iy\cdot O\xi}\, e^{i\sum_j \lambda_j y_j^2/2}\,dy
= (2\pi)^{-n/2}\, |\lambda_1|^{-1/2} \cdots |\lambda_n|^{-1/2}\, e^{i(\pi/4)\sum_j \operatorname{sign}\lambda_j}\, e^{-i\sum_j (O\xi)_j^2/(2\lambda_j)}
\]
\[
= (2\pi)^{-n/2}\, |\det A|^{-1/2}\, e^{i(\pi/4)\operatorname{sign} A}\, e^{-i\langle (OAO^t)^{-1} O\xi,\, O\xi\rangle/2}
= (2\pi)^{-n/2}\, |\det A|^{-1/2}\, e^{i(\pi/4)\operatorname{sign} A}\, e^{-i\langle A^{-1}\xi,\,\xi\rangle/2}.
\]
Thus,
\[
(7)\qquad (\mathcal{F}^{-1} e^{i\omega\langle A\cdot,\cdot\rangle/2})(\xi) = (2\pi\omega)^{-n/2}\, |\det A|^{-1/2}\, e^{i(\pi/4)\operatorname{sign} A}\, e^{-i\omega^{-1}\langle A^{-1}\xi,\,\xi\rangle/2}.
\]
Now, by Taylor's formula
\[
f(y) = \sum_{j<k} \frac{f^{(j)}(x)}{j!}\,(y-x)^j + \frac{(y-x)^k}{(k-1)!} \int_0^1 (1-t)^{k-1} f^{(k)}(ty + (1-t)x)\,dt,
\]
we have for $w \in \mathbb{R}$ (or indeed for $w$ with $\operatorname{Im} w \geq 0$)
\[
\Big| e^{iw} - \sum_{j<k} \frac{(iw)^j}{j!} \Big| \leq \frac{|w|^k}{(k-1)!} \int_0^1 (1-t)^{k-1}\,dt = \frac{|w|^k}{k!}.
\]
Thus, with $P(\cdot) = \langle B\cdot,\cdot\rangle$, using Plancherel's theorem,
\[
\Big\| e^{iP(D)} u - \sum_{j<k} \frac{(iP(D))^j}{j!}\, u \Big\|_{L^2} \leq \frac{1}{k!}\, \big\| P(D)^k u \big\|_{L^2}.
\]
Now, we would like to replace the $L^2$ norm on the left by just the absolute value of the value of the argument at $0$, in order to obtain an expansion for (6) (while strengthening the norm on the right, of course); this goal is reached if we can replace the $L^2$ norm by the $L^\infty$ norm, for the argument is Schwartz when $u \in C_c^\infty(\mathbb{R}^n)$. But by the Sobolev embedding, for integer $s > n/2$ there is $C > 0$ such that
\[
\|v\|_{L^\infty} \leq C\Big( \|v\|_{L^2} + \sum_{|\alpha| = s} \|D^\alpha v\|_{L^2} \Big).
\]
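
At this point the leading term of the expansion of $I_0(\omega)$ is already determined by (6) and (7): for $n = 1$ and $A \neq 0$ it is $(2\pi/(\omega|A|))^{1/2}\, e^{i(\pi/4)\operatorname{sign} A}\, u(0)$, with error $O(\omega^{-3/2})$. A short numerical check (my illustration, not from the notes; the bump, the value $A = -3$, and the grid are ad hoc choices):

```python
import numpy as np

def bump(x):
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, np.exp(-1.0 / np.maximum(1.0 - x**2, 1e-300)), 0.0)

A = -3.0                                 # nondegenerate "Hessian", sign A = -1
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
u0 = float(bump(0.0))

for omega in [50, 100, 200, 400]:
    I0 = np.sum(np.exp(1j * omega * A * x**2 / 2) * bump(x)) * dx
    lead = (2 * np.pi / (omega * abs(A))) ** 0.5 * np.exp(1j * (np.pi / 4) * np.sign(A)) * u0
    # the error is O(omega^(-3/2)), so this product should stay bounded:
    print(f"omega={omega:4d}  |I0 - lead| * omega^1.5 = {abs(I0 - lead) * omega**1.5:.4f}")
```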

As $P(D)$ and $e^{iP(D)}$ commute with $D^\alpha$ (for they are all Fourier multipliers), this gives
\[
\Big\| e^{iP(D)} u - \sum_{j<k} \frac{(iP(D))^j}{j!}\,u \Big\|_{L^\infty}
\leq C\Big( \Big\| e^{iP(D)} u - \sum_{j<k} \frac{(iP(D))^j}{j!}\,u \Big\|_{L^2} + \sum_{|\alpha|=s} \Big\| D^\alpha\Big( e^{iP(D)} u - \sum_{j<k} \frac{(iP(D))^j}{j!}\,u \Big) \Big\|_{L^2} \Big)
\]
\[
= C\Big( \Big\| e^{iP(D)} u - \sum_{j<k} \frac{(iP(D))^j}{j!}\,u \Big\|_{L^2} + \sum_{|\alpha|=s} \Big\| e^{iP(D)} (D^\alpha u) - \sum_{j<k} \frac{(iP(D))^j}{j!}\,D^\alpha u \Big\|_{L^2} \Big)
\]
\[
\leq \frac{C}{k!}\Big( \|P(D)^k u\|_{L^2} + \sum_{|\alpha|=s} \|P(D)^k D^\alpha u\|_{L^2} \Big)
\leq \frac{C}{k!}\,\|B\|^k \Big( \|\Delta^k u\|_{L^2} + \sum_{|\alpha|=s} \|\Delta^k D^\alpha u\|_{L^2} \Big)
\leq C_k\,\|B\|^k \sum_{|\beta| \leq 2k+s} \|D^\beta u\|_{L^2},
\]
where in the penultimate step we used that $|P(\xi)| \leq \|B\|\,|\xi|^2$ for $\xi \in \mathbb{R}^n$, so that $\|P(\xi)^k\,\mathcal{F}u\|_{L^2} \leq \|B\|^k\,\big\|\,|\xi|^{2k}\,\mathcal{F}u\big\|_{L^2}$, and thus $\|P(D)^k u\|_{L^2} \leq \|B\|^k\,\|\Delta^k u\|_{L^2}$ by Plancherel, and then in the last step we expanded $\Delta = \sum_{j=1}^n D_j^2$. Here $C_k$ depends on the Sobolev inequality parameters $C$, $s$ and on $k$ only.

Applying this with $B = -(2\omega)^{-1} A^{-1}$, (6) and (7) give
\[
(8)\qquad \Big| (2\pi)^{-n/2}\, I_0(\omega) - \omega^{-n/2}\, |\det A|^{-1/2}\, e^{i(\pi/4)\operatorname{sign} A} \Big( \sum_{j<k} \frac{\big( -i(2\omega)^{-1} \langle A^{-1}D, D\rangle \big)^j}{j!}\, u \Big)(0) \Big|
\leq \pi^{-n/2}\, |\det A|^{-1/2}\, (2\omega)^{-n/2-k}\, C_k\, \|A^{-1}\|^k \sum_{|\beta| \leq 2k+s} \|D^\beta u\|_{L^2}.
\]
Combining this with (4) and (5), we deduce that for suitable differential operators $L_j$ of order $2j$, with $L_0$ the identity,
\[
(9)\qquad \Big| I(\omega) - e^{i\omega f(x_0)}\, (2\pi)^{n/2}\, \omega^{-n/2}\, |\det f''(x_0)|^{-1/2}\, e^{i(\pi/4)\operatorname{sign} f''(x_0)} \sum_{j<k} \omega^{-j}\,(L_j u)(x_0) \Big|
\leq \omega^{-n/2-k}\, C_k(f''(x_0)) \sum_{|\beta| \leq 2k+s} \|D^\beta u\|_{L^2},
\]
which is the stationary phase lemma. Notice that if $(D^\alpha u)(x_0)$ vanishes for $|\alpha| \leq 2l+1$, $l < k$, then the first $l+1$ terms in the sum on the left hand side vanish, so the expansion starts with $\omega^{-n/2-l-1}$. Assuming $f(x_0) = 0$, since
\[
\omega^l\,(\partial_\omega^l I)(\omega) = \omega^l \int e^{i\omega f(x)}\,(i f(x))^l\, u(x)\,dx,
\]
and $f$ vanishes quadratically at $x_0$, derivatives of $I$ in $\omega$ possess a similar expansion, starting with a multiple of $\omega^{-n/2}$. Factoring out $e^{i\omega f(x_0)}$ in general,
\[
(10)\qquad \Big| \omega^l\,\big(\partial_\omega^l (e^{-if(x_0)\omega} I)\big)(\omega) - (2\pi)^{n/2}\, \omega^{-n/2}\, |\det f''(x_0)|^{-1/2}\, e^{i(\pi/4)\operatorname{sign} f''(x_0)} \sum_{j<k} \omega^{-j}\,(L_{l,j} u)(x_0) \Big|
\leq \omega^{-n/2-k}\, C_{k,l}(f''(x_0)) \sum_{|\beta| \leq 2k+2l+s} \|D^\beta u\|_{L^2},
\]
where the $L_{l,j}$ are differential operators of the form $L_{l,j} u = L_{l+j}\big( (i(f - f(x_0)))^l\, u \big)$.
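
In one dimension, (8) says that after the leading term the next correction is $(i/(2\omega A))\, u''(0)$ times the same prefactor, since $\langle A^{-1}D, D\rangle u = -A^{-1} u''$ for $D = -i\,d/dx$. The sketch below (mine, not from the notes; the bump, $A = 2$, the finite-difference step, and the grid are ad hoc choices) checks that the two-term expansion has $O(\omega^{-5/2})$ error:

```python
import numpy as np

def bump(x):
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, np.exp(-1.0 / np.maximum(1.0 - x**2, 1e-300)), 0.0)

A = 2.0
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
u0 = float(bump(0.0))
h = 1e-4
# u''(0) by central differences (analytically u''(0) = -2/e for this bump)
upp0 = float((bump(h) - 2 * bump(0.0) + bump(-h)) / h**2)

for omega in [50, 100, 200, 400]:
    I0 = np.sum(np.exp(1j * omega * A * x**2 / 2) * bump(x)) * dx
    pref = (2 * np.pi / (omega * abs(A))) ** 0.5 * np.exp(1j * (np.pi / 4) * np.sign(A))
    # j = 1 term of (8) in one dimension: <A^{-1}D, D> u = -u''/A, so the correction
    # is (-i/(2*omega)) * (-upp0/A) = i*upp0 / (2*omega*A)
    two_terms = pref * (u0 + 1j * upp0 / (2 * omega * A))
    # the error should now be O(omega^(-5/2)), so this product should stay bounded:
    print(f"omega={omega:4d}  |I0 - two_terms| * omega^2.5 = {abs(I0 - two_terms) * omega**2.5:.4f}")
```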

One can integrate this expression to obtain the already-known expansion for $I$, which shows that the coefficients in the expansion of $\omega^l \partial_\omega^l (e^{-if(x_0)\omega} I)$ are the term-by-term differentiated coefficients of the expansion of $e^{-if(x_0)\omega} I$. Finally, $(\partial_\omega^l I)(\omega) = \big( \partial_\omega^l \big( e^{if(x_0)\omega}\, (e^{-if(x_0)\omega} I) \big) \big)(\omega)$ can be computed by the product rule.

One can also allow parametric dependence on another variable $y \in \mathbb{R}^m$, i.e. for $f \in C^\infty(\mathbb{R}^{n+m})$ with $\operatorname{Im} f \geq 0$, consider the integral
\[
I(\omega, y) = \int_{\mathbb{R}^n} e^{i\omega f(x,y)}\, u(x, y)\,dx,\qquad u \in C_c^\infty(\mathbb{R}^{n+m}).
\]
As before, $I$ decays rapidly in $\omega$, uniformly in $y$, unless $f$ has critical points in $x$ at which it is real, i.e. unless there exists $(x_0, y_0)$ such that $f'_x(x_0, y_0) = 0$ and $\operatorname{Im} f(x_0, y_0) = 0$. In addition, differentiation under the integral sign shows that in this case $I$ is $C^\infty$ in $(\omega, y)$, with $D_y^\alpha D_\omega^l I$ still rapidly decreasing.

If $f$ is real and has some critical point $(x_0, y_0)$ in $x$ which is non-degenerate, i.e. $f''_{xx}$ is invertible, so that $d_x f'_{x_1}, \ldots, d_x f'_{x_n}$ are linearly independent at $(x_0, y_0)$, then the implicit function theorem guarantees that the joint zero set of $f'_{x_1}, \ldots, f'_{x_n}$ is a $C^\infty$ graph over a neighborhood of $y_0$ in $\mathbb{R}^m$, i.e. there is a $C^\infty$ function $X$, defined on a neighborhood of $y_0$, such that in a neighborhood of $(x_0, y_0)$ the critical points of $f$ in $x$ are given by $(X(y), y)$, and these are non-degenerate. Thus, the previous arguments are applicable, uniformly in $y$, so the stationary phase expansion (10) holds with the $y$ dependence added in.

Writing
\[
I(\omega, y) = e^{i\omega f(X(y),y)} \int_{\mathbb{R}^n} e^{i\omega F(x,y)}\, u(x, y)\,dx,\qquad F(x, y) = f(x, y) - f(X(y), y),
\]
notice that, with $\partial_{y_j}$ standing for the derivative in the $j$th component of the second slot on the right hand side,
\[
(\partial_{y_j} F)(x, y) = (\partial_{y_j} f)(x, y) - (\partial_{y_j} f)(X(y), y) - \big( (\partial_x f)(X(y), y) \big) \cdot \frac{\partial X}{\partial y_j}(y),
\]
and thus $\partial_{y_j} F$ vanishes at $x = X(y)$, since that is a critical point of $f$ in $x$. Since $\partial_x f$, and thus $\partial_x F$, have a non-degenerate zero at $x = X(y)$, we actually obtain
\[
(\partial_{y_j} F)(x, y) = G_j(x, y) \cdot (\partial_x F)(x, y)
\]
with $G_j$ smooth. Differentiation under the integral sign gives, with
\[
\tilde I(\omega, y) = \int_{\mathbb{R}^n} e^{i\omega F(x,y)}\, u(x, y)\,dx,
\]
that
\[
\partial_{y_j} \tilde I(\omega, y) = \int_{\mathbb{R}^n} \big( i\omega\, \partial_{y_j} F(x,y) \big)\, e^{i\omega F(x,y)}\, u(x,y)\,dx + \int_{\mathbb{R}^n} e^{i\omega F(x,y)}\, \partial_{y_j} u(x,y)\,dx.
\]
Now, the second term has a stationary phase expansion as before. On the other hand, the first term is
\[
\int_{\mathbb{R}^n} G_j(x,y) \cdot \big( i\omega\, \partial_x F(x,y) \big)\, e^{i\omega F(x,y)}\, u(x,y)\,dx
= \int_{\mathbb{R}^n} G_j(x,y) \cdot \big( \partial_x e^{i\omega F(x,y)} \big)\, u(x,y)\,dx
= -\int_{\mathbb{R}^n} e^{i\omega F(x,y)}\, \partial_x \cdot \big( G_j(x,y)\, u(x,y) \big)\,dx,
\]
so the stationary phase expansion is again applicable.
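
Numerically, the function $X(y)$ produced by the implicit function theorem is exactly what Newton's method computes when solving $f'_x(x, y) = 0$ in $x$ for each frozen $y$. A small sketch (my illustration, not from the notes; the phase $f(x, y) = \cos x + xy$ is an arbitrary example, chosen so that $X(y) = \arcsin y$ is available in closed form as a cross-check):

```python
import numpy as np

# Phase with a nondegenerate critical point in x for each parameter y:
#   f(x, y) = cos(x) + x*y,   f_x = -sin(x) + y,   f_xx = -cos(x) != 0 near x = 0
f_x  = lambda x, y: -np.sin(x) + y
f_xx = lambda x, y: -np.cos(x)

def X(y, x0=0.0, steps=25):
    """Critical point x = X(y): solve f_x(x, y) = 0 by Newton's method."""
    x = x0
    for _ in range(steps):
        x = x - f_x(x, y) / f_xx(x, y)
    return x

for y in [-0.5, 0.0, 0.5]:
    xy = X(y)
    print(f"y={y:+.1f}  X(y)={xy:+.8f}  arcsin(y)={np.arcsin(y):+.8f}  f_x={f_x(xy, y):+.1e}")
```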

Thus, $\partial_{y_j} I$ possesses a stationary phase expansion. Since the expansion can be integrated in $y_j$, this proves that the coefficients of the expansion of the derivative are the differentiated coefficients of the expansion of $I$ (including the statement that the latter are differentiable). Iterating the argument shows that $\omega^l\, \partial_y^\alpha\, \partial_\omega^l I$ still possesses an expansion as in (10), with the expansion given by term-by-term differentiation.

We finally discuss an interpretation of the stationary phase result by partially compactifying (or bordifying) the $(\omega, y)$-space. Thus, consider the new variable $h = \omega^{-1} \in (0, 1]$. Then the stationary phase lemma, together with $\omega\,\partial_\omega = -h\,\partial_h$ and the fact that $\omega^l \partial_\omega^l$ can be rewritten as a linear combination of $(\omega\,\partial_\omega)^j$ for $j \leq l$, and similarly for $h^l \partial_h^l$, shows that
\[
I(h, y) = \int e^{if(x,y)/h}\, u(x, y)\,dx = e^{if(X(y),y)/h}\, h^{n/2}\, J(h, y),
\]
with
\[
h^l\, \partial_h^l\, \partial_y^\alpha J \sim \sum_j h^j\, \tilde L_{j,l,\alpha} u,
\]
where $\sim$ means that if one sums over $j < k$, then one obtains an error term bounded by $C_{l,\alpha,k}\, h^k$, and the series on the right is the term-by-term derivative of $\sum_j h^j\, \tilde L_{j,0,0} u$. But notice that $h^l\, \partial_h^l \sum_{j<l} h^j\, \tilde L_{j,0,0} u = 0$, so in particular, taking $k = l$,
\[
\big| h^l\, \partial_h^l\, \partial_y^\alpha J \big| \leq C_{l,\alpha,l}\, h^l,\quad\text{i.e.}\quad \big| \partial_h^l\, \partial_y^\alpha J \big| \leq C_{l,\alpha,l}
\]
for all $l$ and $\alpha$, i.e. all partial derivatives of the $C^\infty$ function $J$ on $(0,1]_h \times \mathbb{R}^m_y$ are bounded. This implies that $J$ has a unique $C^\infty$ extension to $[0,1] \times \mathbb{R}^m$: uniqueness is automatic, as $(0,1] \times \mathbb{R}^m$ is dense in $[0,1] \times \mathbb{R}^m$. On the other hand, $\partial_h^l\, \partial_y^\alpha J$ extends continuously to $h = 0$ since
\[
\partial_h^l\, \partial_y^\alpha J(h_0, y) = \partial_h^l\, \partial_y^\alpha J(1, y) - \int_{h_0}^1 \partial_h^{l+1}\, \partial_y^\alpha J(h, y)\,dh
\]
and $\partial_h^{l+1}\, \partial_y^\alpha J(h, y)$ is bounded; and then repeated integration shows that for $l' \leq l$, $\alpha' \leq \alpha$, $\partial_h^{l'}\, \partial_y^{\alpha'} J$ extends as well (which of course would follow from the same argument with $(l, \alpha)$ replaced by $(l', \alpha')$), with the extension given by the appropriate integral of $\partial_h^{l+1}\, \partial_y^\alpha J(h, y)$, so in particular this extension is differentiable $(l - l', \alpha - \alpha')$ times. As $l$ and $\alpha$ are arbitrary, we deduce that the extension of $J$ we defined is $C^\infty$; in view of the uniqueness, one usually simply writes $J$ for the extension as well.

A slight generalization is then obtained by allowing $u$ to depend on $h$ as well, i.e. consider $u \in C_c^\infty(\mathbb{R}^n \times \mathbb{R}^m \times [0,1])$, and consider
\[
J(h, y) = \int e^{if(x,y)/h}\, u(x, y, h)\,dx.
\]
Regarding the $h$ in $u$ as one of the parameters $y$, we then have $J(h, y) = \hat J(h, y, h)$, where
\[
\hat J(h, y, \tilde h) = \int e^{if(x,y)/h}\, u(x, y, \tilde h)\,dx = e^{if(X(y),y)/h}\, h^{n/2}\, \hat J_0(h, y, \tilde h)
\]
with $\hat J_0 \in C^\infty([0,1] \times \mathbb{R}^m \times [0,1])$, so restricting to $\tilde h = h$ yields
\[
J(h, y) = e^{if(X(y),y)/h}\, h^{n/2}\, \tilde J(h, y),
\]
with $\tilde J \in C^\infty([0,1] \times \mathbb{R}^m)$.
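
The smooth extension of $J$ to $h = 0$ can also be observed numerically. A minimal sketch (my addition, not part of the notes; it uses $n = 1$ and $f(x) = x^2/2$, so $X = 0$ and $f(X) = 0$, with an ad hoc bump and grid), checking that $J(h) = h^{-1/2}\, I(1/h)$ converges, at rate $O(h)$, to $J(0) = (2\pi)^{1/2} e^{i\pi/4}\, u(0)$ as $h \to 0$:

```python
import numpy as np

def bump(x):
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, np.exp(-1.0 / np.maximum(1.0 - x**2, 1e-300)), 0.0)

# f(x) = x^2/2, so X = 0 and f(X) = 0; J(h) = h^(-1/2) * I(1/h) should extend
# smoothly to h = 0 with J(0) = (2 pi)^(1/2) * e^(i pi/4) * u(0).
x = np.linspace(-1.0, 1.0, 800001)
dx = x[1] - x[0]
J0 = np.sqrt(2 * np.pi) * np.exp(1j * np.pi / 4) * float(bump(0.0))

for h in [0.1, 0.05, 0.02, 0.01]:
    I = np.sum(np.exp(1j * x**2 / (2 * h)) * bump(x)) * dx
    J = I / np.sqrt(h)
    print(f"h={h:.2f}  |J(h) - J(0)| = {abs(J - J0):.3e}")   # -> 0, roughly like h
```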