MATH 205C: STATIONARY PHASE LEMMA


For $\omega\to\infty$, consider an integral of the form
\[
I(\omega)=\int e^{i\omega f(x)}u(x)\,dx,
\]
where $u\in C_c^\infty(\mathbb{R}^n)$ is complex valued, with support in a compact set $K$, and $f\in C^\infty(\mathbb{R}^n)$ is real valued. Thus $|I(\omega)|\le C(K)\sup|u|$; we are interested in the asymptotic behavior as $\omega\to\infty$.

If $f$ has no critical points, then as $\omega\to\infty$, the exponential becomes highly oscillatory, and one expects that $I(\omega)\to 0$ rapidly. Indeed, $|I(\omega)|\le C_k\omega^{-k}$ for all $k$ in this case. To see this, we note that if $f'$ never vanishes then, with $|f'(x)|^2=\sum_j(\partial_j f)^2$,
\[
e^{i\omega f(x)}=\frac{1}{i\omega}\,L e^{i\omega f(x)},\qquad L=|f'(x)|^{-2}\sum_j(\partial_j f)\,\partial_j,
\]
and so, with $L^t$ the transpose of $L$, i.e. $L^t=-\sum_j\partial_j\,|f'(x)|^{-2}(\partial_j f)$, with the relevant factors acting as multiplication operators,
\[
I(\omega)=\frac{1}{i\omega}\int (Le^{i\omega f(\cdot)})(x)\,u(x)\,dx=\frac{1}{i\omega}\int e^{i\omega f(x)}(L^t u)(x)\,dx,
\]
so by induction
\[
I(\omega)=(i\omega)^{-k}\int e^{i\omega f(x)}((L^t)^k u)(x)\,dx,
\]
leading to the conclusion that
\[
\tag{1} |I(\omega)|\le C_k(K,f)\,\omega^{-k}\sum_{|\alpha|\le k}\sup|D^\alpha u|.
\]
Of course, here we only needed $u\in C^k$ with support in $K$ for this estimate.

For a moment, also consider non-real valued $f$ with $\operatorname{Im}f\ge 0$, so $|e^{i\omega f(x)}|=e^{-\omega\operatorname{Im}f(x)}\le 1$ for $\omega>0$. Then, as long as $f'\ne 0$, the above calculation goes through if we replace $f'$ by its complex conjugate in the definition of $L$, so (1) also holds. Moreover, if $f$ does have some critical points, but at these $\operatorname{Im}f>0$, then a partition of unity argument (noting that the set of critical points is closed, as is the set of points where $f$ is real, while $\operatorname{supp}u$ is compact) allows one to reduce consideration of the integral to the two cases where either $\operatorname{Im}f\ge 0$ and $f'\ne 0$, which we just analyzed, or instead $\operatorname{Im}f>0$ (hence bounded below by a positive constant). In the latter case, one actually gets $|I(\omega)|\le e^{-\omega\inf\operatorname{Im}f}\|u\|_{L^1}$, i.e. one has exponential decay. Thus, if $\operatorname{Im}f\ge 0$, and in addition $f'=0$ implies $\operatorname{Im}f>0$, then (1) still holds.

Returning to real valued $f$, the interesting case is if $f$ has some critical points, and the simplest setting is if these are nondegenerate, i.e. if $f'(x_0)=0$ implies that the Hessian $f''$ is invertible at $x_0$.
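The contrast between a phase with no critical points and one with a nondegenerate critical point is easy to see numerically. The following sketch (the bump function, grid size, and helper names are our own choices, not part of the notes) integrates $e^{i\omega f(x)}u(x)$ for $f(x)=x$, which has $f'\equiv 1$, and for $f(x)=x^2/2$, which has a critical point at $0$: the first integral is rapidly decreasing, while the second decays only like $\omega^{-1/2}$, with leading coefficient $\sqrt{2\pi/\omega}\,u(0)$ as derived later in the notes.

```python
import numpy as np

def bump(x):
    # u(x) = exp(-1/(1-x^2)) on (-1,1), extended by 0: u is in C_c^inf(R)
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def oscillatory_integral(omega, f):
    # I(omega) = int e^{i omega f(x)} u(x) dx by a fine Riemann sum;
    # supp u is compact, so summing over [-1,1] suffices
    x = np.linspace(-1.0, 1.0, 400001)
    dx = x[1] - x[0]
    return np.sum(np.exp(1j * omega * f(x)) * bump(x)) * dx

I_flat = abs(oscillatory_integral(100.0, lambda x: x))         # f' never vanishes
I_stat = abs(oscillatory_integral(100.0, lambda x: x**2 / 2))  # f'(0) = 0, f''(0) = 1

# stationary case: leading term sqrt(2*pi/omega) * u(0), with u(0) = 1/e here
leading = np.sqrt(2 * np.pi / 100.0) * np.exp(-1.0)
```

Already at $\omega=100$ the non-stationary integral is several orders of magnitude below the stationary one, consistent with (1) versus the $\omega^{-1/2}$ behavior at a critical point.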
Thus, one assumes that if $f'(x_0)=0$ then
\[
\tag{2} f(x)=f(x_0)+\tfrac12 Q(x-x_0,x-x_0)+R(x,x_0),
\]
where $|R(x,x_0)|\le C|x-x_0|^3$, with $Q$ a nondegenerate symmetric bilinear form; writing it as $Q(x-x_0,x-x_0)=\langle A(x-x_0),x-x_0\rangle$ for the standard inner product on $\mathbb{R}^n$, $A$ is a symmetric invertible linear operator on $\mathbb{R}^n$. The signature of $Q$, and of $A$, can be defined as the pair $(k,n-k)$, where $k$ is the maximal dimension of a subspace on which $Q$ is positive definite, or equivalently the number of positive eigenvalues of $A$. It is also useful to have a single number $\operatorname{sign}Q=\operatorname{sign}A=k-(n-k)$, the difference of the number of positive and negative eigenvalues of $A$.

Note that nondegenerate critical points are isolated, for $f'(x)-A(x-x_0)$ vanishes quadratically at $x_0$, and $A$ is invertible. Thus, by using a partition of unity, when the critical points are nondegenerate, one can easily arrange that there is a single critical point $x_0$ over $\operatorname{supp}u$, which one may arrange to be at $0$ by a translation of the coordinates. In this case, one has $|f'(x)|\ge C|x-x_0|$ for a suitable $C>0$ near $x_0$, so $|f'(x)|^{-1}\le C^{-1}|x-x_0|^{-1}$.

It is useful to observe that even if $f$ has a nondegenerate critical point at $0$, so that the above integration by parts argument breaks down in general, it continues to work if $u$ vanishes to sufficiently high order at $0$. It is convenient to be somewhat more general (due to the singularities the integration by parts argument induces), so suppose that $u\in C^\infty(\mathbb{R}^n\setminus\{0\})$, with support in $K$, with $|D^\alpha u|\le C|x|^{2k-|\alpha|}$ for $|\alpha|\le k$. Then, by the divergence theorem in the last step,
\[
\begin{aligned}
I(\omega)&=\lim_{\rho\to 0}\int_{|x|\ge\rho}e^{i\omega f(x)}u(x)\,dx
=\frac{1}{i\omega}\lim_{\rho\to 0}\int_{|x|\ge\rho}(Le^{i\omega f(\cdot)})(x)\,u(x)\,dx\\
&=\frac{1}{i\omega}\lim_{\rho\to 0}\Bigl(\int_{|x|\ge\rho}e^{i\omega f(x)}(L^t u)(x)\,dx
+\int_{|x|=\rho}|f'|^{-2}\Bigl(\sum_j(\partial_j f)\,\nu_j\Bigr)e^{i\omega f(x)}u(x)\,dS(x)\Bigr),
\end{aligned}
\]
where $\nu$ is the outward unit normal of $\mathbb{R}^n\setminus\{x:|x|\le\rho\}$. Now, by the assumptions on $u$, the estimate on $|f'|^{-1}$ and the vanishing of $f'$ at $0$, the integrand in the surface integral is bounded by $C\rho^{2k-1}$, and thus the surface integral, taken over a sphere of area $O(\rho^{n-1})$, has limit $0$ as $\rho\to 0$. Correspondingly,
\[
I(\omega)=\frac{1}{i\omega}\lim_{\rho\to 0}\int_{|x|\ge\rho}e^{i\omega f(x)}(L^t u)(x)\,dx=\frac{1}{i\omega}\int e^{i\omega f(x)}(L^t u)(x)\,dx,
\]
as $L^t u$ is bounded by the assumptions. Thus,
\[
|I(\omega)|\le C(K,f)\,\omega^{-1}\sum_{|\alpha|\le 1}\sup\bigl(|D^\alpha u|\,|x|^{-2+|\alpha|}\bigr).
\]
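The signature data of $A$ can be read off numerically from its eigenvalues. A small illustrative sketch (the helper name and its tolerance parameter are our own):

```python
import numpy as np

def signature(A, tol=1e-12):
    # returns (k, n-k, sign A) for a real symmetric invertible matrix A:
    # k = number of positive eigenvalues, sign A = k - (n - k)
    lam = np.linalg.eigvalsh(A)  # real spectrum, since A is symmetric
    if np.any(np.abs(lam) <= tol):
        raise ValueError("A is numerically degenerate")
    k = int(np.sum(lam > 0))
    n = A.shape[0]
    return k, n - k, 2 * k - n

# Q(v, v) = <Av, v> with two positive directions and one negative one
A = np.array([[2.0, 0.0, 0.5],
              [0.0, -1.0, 0.0],
              [0.5, 0.0, 3.0]])
sig = signature(A)
```

For this $A$ the signature pair is $(2,1)$ and $\operatorname{sign}A=1$.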
Further, if $u\in C^\infty(\mathbb{R}^n\setminus\{0\})$, with support in $K$, with $|D^\alpha u|\le C|x|^{2k-|\alpha|}$ for $|\alpha|\le k$, then $L^t u\in C^\infty(\mathbb{R}^n\setminus\{0\})$, with support in $K$, with $|D^\alpha L^t u|\le C|x|^{2(k-1)-|\alpha|}$ for $|\alpha|\le k-1$. Thus, the argument can be applied iteratively to conclude that
\[
\tag{3} |I(\omega)|\le C_k(K,f)\,\omega^{-k}\sum_{|\alpha|\le k}\sup\bigl(|D^\alpha u|\,|x|^{-2k+|\alpha|}\bigr).
\]
We now show how to use this to reduce the case of general $f$ to that of a quadratic form, $\frac12 Q(x-x_0,x-x_0)$. With the notation of (2), let
\[
f_s(x)=f(x_0)+\tfrac12 Q(x-x_0,x-x_0)+sR(x,x_0),\qquad s\in[0,1],
\]
so $f_0$ is quadratic (plus a constant), and $f_1=f$. Let
\[
I(\omega,s)=\int e^{i\omega f_s(x)}u(x)\,dx.
\]
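The reduction can be illustrated numerically (a sketch; the particular cubic perturbation is our own choice). Take $n=1$, $f(x)=x^2/2+x^3/6$, so that $f_0(x)=x^2/2$ and $R(x,0)=x^3/6$: the integrals for $f$ and for $f_0$ agree to leading order, and their difference decays faster than either integral.

```python
import numpy as np

def bump(x):
    # smooth compactly supported amplitude, supp u = [-1, 1]
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def I(omega, f):
    # int e^{i omega f(x)} u(x) dx over supp u, by a fine Riemann sum
    x = np.linspace(-1.0, 1.0, 400001)
    dx = x[1] - x[0]
    return np.sum(np.exp(1j * omega * f(x)) * bump(x)) * dx

f_full = lambda x: x**2 / 2 + x**3 / 6  # only critical point on supp u is 0
f_quad = lambda x: x**2 / 2             # the purely quadratic model f_0

diff_100 = abs(I(100.0, f_full) - I(100.0, f_quad))
diff_400 = abs(I(400.0, f_full) - I(400.0, f_quad))
size_100 = abs(I(100.0, f_quad))
```

The difference is a small fraction of the integral itself at $\omega=100$, and shrinks by much more than the factor $2$ that a $\omega^{-1/2}$ quantity would lose when $\omega$ is quadrupled.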
Then differentiating in $s$, $2k$ times, yields
\[
I^{(2k)}(\omega,s)=(i\omega)^{2k}\int e^{i\omega f_s(x)}R(x,x_0)^{2k}u(x)\,dx.
\]
Since $R$ is $C^\infty$ and vanishes cubically at $x=x_0$, $v_{x_0}(x)=R(x,x_0)^{2k}u(x)$ satisfies $|(D^\alpha v_{x_0})(x)|\le C|x-x_0|^{6k-|\alpha|}$ when $u\in C^{3k}(\mathbb{R}^n)$ with compact support in $K$ and $|\alpha|\le 3k$. Thus, applying (3) with $3k$ in place of $k$, we deduce that
\[
|I^{(2k)}(\omega,s)|\le C\,\omega^{2k}\,\omega^{-3k}=C\omega^{-k}.
\]
By Taylor's theorem,
\[
\tag{4} \Bigl|I(\omega,1)-\sum_{j<2k}\frac{1}{j!}I^{(j)}(\omega,0)\Bigr|\le\sup_{s\in[0,1]}\frac{1}{(2k)!}\bigl|I^{(2k)}(\omega,s)\bigr|,
\]
so modulo $\omega^{-k}$ decay, $I(\omega)$ can be calculated merely by calculating
\[
\tag{5}
I^{(j)}(\omega,0)=(i\omega)^j\int e^{i\omega f_0(x)}R(x,x_0)^j u(x)\,dx
=(i\omega)^j e^{i\omega f(x_0)}\int e^{i\omega Q(x-x_0,x-x_0)/2}R(x,x_0)^j u(x)\,dx
\]
for $j<2k$, i.e. we are reduced to the case of purely quadratic phase.

We now want to compute integrals of the form
\[
I_0(\omega)=\int e^{i\omega\langle Ax,x\rangle/2}u(x)\,dx,
\]
with $u$ being $C^\infty$. Note that we could replace $u$ by any function that has the same Taylor series to order $2k$, in view of our preceding estimates, and one way to proceed would be to compute such integrals explicitly. Instead, we rewrite using Parseval's formula:
\[
I_0(\omega)=(2\pi)^{-n}\int(\mathcal{F}e^{i\omega\langle A\cdot,\cdot\rangle/2})(\xi)\,(\mathcal{F}u)(\xi)\,d\xi.
\]
We make a general definition: if $P\in C^\infty(\mathbb{R}^n)$ with $|D^\alpha P(\xi)|\le C_\alpha\langle\xi\rangle^{M_\alpha}$, i.e. if $P$ is polynomially bounded with all derivatives, let
\[
P(D)u=\mathcal{F}^{-1}\bigl(P(\cdot)(\mathcal{F}u)(\cdot)\bigr),
\]
so $P(D)$ maps $\mathcal{S}$ to itself and $\mathcal{S}'$ to itself. Thus, we have
\[
\tag{6} I_0(\omega)=\bigl((\mathcal{F}e^{i\omega\langle A\cdot,\cdot\rangle/2})(D)u\bigr)(0).
\]
Our first task is to compute the inverse Fourier transform of the imaginary Gaussian. First, recall that
\[
(\mathcal{F}(t\mapsto e^{-t^2/2}))(\tau)=\sqrt{2\pi}\,e^{-\tau^2/2},
\qquad
(\mathcal{F}^{-1}(t\mapsto e^{-t^2/2}))(\tau)=\frac{1}{\sqrt{2\pi}}\,e^{-\tau^2/2}.
\]
A change of variables gives that for $\alpha>0$
\[
(\mathcal{F}(t\mapsto e^{-\alpha t^2/2}))(\tau)=\sqrt{2\pi}\,\alpha^{-1/2}e^{-\tau^2/(2\alpha)}.
\]
Now, both sides are analytic functions of $\alpha$ with values in $\mathcal{S}(\mathbb{R})$ in $\operatorname{Re}\alpha>0$, and both sides are continuous functions of $\alpha$ with values in $\mathcal{S}'(\mathbb{R})$ in $\operatorname{Re}\alpha\ge 0$, $\alpha\ne 0$, so the formula continues to hold there (with the square root being the standard one
for positive reals, with a branch cut along the negative reals). In particular, when $\alpha=\mp i\beta$, $\beta>0$, we deduce that
\[
(\mathcal{F}(t\mapsto e^{\pm i\beta t^2/2}))(\tau)=\sqrt{2\pi}\,\beta^{-1/2}e^{\pm i\pi/4}e^{\mp i\tau^2/(2\beta)}.
\]
Now, if $A$ is a nondegenerate symmetric linear map on $\mathbb{R}^n$, then there is an orthogonal transformation $O$ diagonalizing $A$ as $\Lambda=OAO^t=\operatorname{diag}(\lambda_j)$, with rows of $O$ given by orthonormal eigenvectors $e_j$ of $A$, and with $\lambda_j\ne 0$ the eigenvalues of $A$, so $\langle Ax,x\rangle=\sum_j\lambda_j\langle e_j,x\rangle^2$, i.e., in coordinates $y_j=\langle e_j,x\rangle$, $y=Ox$, keeping in mind that $O$ has determinant of absolute value $1$,
\[
\begin{aligned}
\int e^{-ix\cdot\xi}e^{i\langle Ax,x\rangle/2}\,dx
&=\int e^{-iy\cdot O\xi}e^{i\sum_j\lambda_j y_j^2/2}\,dy\\
&=(2\pi)^{n/2}|\lambda_1|^{-1/2}\cdots|\lambda_n|^{-1/2}\,e^{i(\pi/4)\sum_j\operatorname{sign}\lambda_j}\,e^{-i\sum_j(O\xi)_j^2/(2\lambda_j)}\\
&=(2\pi)^{n/2}|\det A|^{-1/2}\,e^{i(\pi/4)\operatorname{sign}A}\,e^{-i\langle(OAO^t)^{-1}O\xi,O\xi\rangle/2}\\
&=(2\pi)^{n/2}|\det A|^{-1/2}\,e^{i(\pi/4)\operatorname{sign}A}\,e^{-i\langle A^{-1}\xi,\xi\rangle/2}.
\end{aligned}
\]
Thus,
\[
\tag{7} (\mathcal{F}e^{i\omega\langle A\cdot,\cdot\rangle/2})(\xi)=\Bigl(\frac{2\pi}{\omega}\Bigr)^{n/2}|\det A|^{-1/2}\,e^{i(\pi/4)\operatorname{sign}A}\,e^{-i\langle A^{-1}\xi,\xi\rangle/(2\omega)}.
\]
Now, by Taylor's formula,
\[
f(y)=\sum_{j<k}\frac{f^{(j)}(x)}{j!}(y-x)^j+\frac{(y-x)^k}{(k-1)!}\int_0^1(1-t)^{k-1}f^{(k)}(ty+(1-t)x)\,dt,
\]
so for $w\in\mathbb{R}$ (or indeed for $w$ with $\operatorname{Im}w\le 0$)
\[
\Bigl|e^{-iw}-\sum_{j<k}\frac{(-iw)^j}{j!}\Bigr|\le\frac{|w|^k}{(k-1)!}\int_0^1(1-t)^{k-1}\,dt=\frac{|w|^k}{k!}.
\]
Thus, with $P(\xi)=\langle B\xi,\xi\rangle$, using Plancherel's theorem,
\[
\Bigl\|e^{-iP(D)}u-\sum_{j<k}\frac{(-iP(D))^j u}{j!}\Bigr\|_{L^2}\le\frac{1}{k!}\,\|P(D)^k u\|_{L^2}.
\]
Now, we would like to replace the $L^2$ norm on the left by just the absolute value of the value of the argument at $0$, in order to obtain an expansion for (6) (while strengthening the norm on the right, of course); this goal is reached if we can replace the $L^2$ norm by the $L^\infty$ norm, for the argument is Schwartz when $u\in C_c^\infty(\mathbb{R}^n)$. But by the Sobolev embedding, for $s>n/2$ integer, there is $C>0$ such that
\[
\|v\|_{L^\infty}\le C\Bigl(\|v\|_{L^2}+\sum_{|\alpha|=s}\|D^\alpha v\|_{L^2}\Bigr),
\]
so as $P(D)$ and $e^{-iP(D)}$ commute with $D^\alpha$ (for they are all Fourier multipliers),
\[
\begin{aligned}
\Bigl\|e^{-iP(D)}u-\sum_{j<k}\frac{(-iP(D))^j u}{j!}\Bigr\|_{L^\infty}
&\le C\Bigl(\Bigl\|e^{-iP(D)}u-\sum_{j<k}\frac{(-iP(D))^j u}{j!}\Bigr\|_{L^2}
+\sum_{|\alpha|=s}\Bigl\|D^\alpha\Bigl(e^{-iP(D)}u-\sum_{j<k}\frac{(-iP(D))^j u}{j!}\Bigr)\Bigr\|_{L^2}\Bigr)\\
&=C\Bigl(\Bigl\|e^{-iP(D)}u-\sum_{j<k}\frac{(-iP(D))^j u}{j!}\Bigr\|_{L^2}
+\sum_{|\alpha|=s}\Bigl\|e^{-iP(D)}(D^\alpha u)-\sum_{j<k}\frac{(-iP(D))^j D^\alpha u}{j!}\Bigr\|_{L^2}\Bigr)\\
&\le\frac{C}{k!}\Bigl(\|P(D)^k u\|_{L^2}+\sum_{|\alpha|=s}\|P(D)^k D^\alpha u\|_{L^2}\Bigr)\\
&\le\frac{C}{k!}\,\|B\|^k\Bigl(\|\Delta^k u\|_{L^2}+\sum_{|\alpha|=s}\|\Delta^k D^\alpha u\|_{L^2}\Bigr)
\le C'\,\|B\|^k\sum_{|\beta|\le 2k+s}\|D^\beta u\|_{L^2},
\end{aligned}
\]
where in the penultimate step we used that $|P(\xi)|\le\|B\|\,|\xi|^2$ for $\xi\in\mathbb{R}^n$, so $\|P(\xi)^k\mathcal{F}u\|_{L^2}\le\|B\|^k\,\||\xi|^{2k}\mathcal{F}u\|_{L^2}$ and thus $\|P(D)^k u\|_{L^2}\le\|B\|^k\|\Delta^k u\|_{L^2}$ by Plancherel, and then in the last step we expanded $\Delta=\sum_{j=1}^n D_j^2$. Here $C'$ depends on the Sobolev inequality parameters $C$, $s$ and on $k$ only. Applying this with $B=\omega^{-1}A^{-1}/2$, (6) and (7) give
\[
\tag{8}
\Bigl|I_0(\omega)-\Bigl(\frac{2\pi}{\omega}\Bigr)^{n/2}|\det A|^{-1/2}e^{i(\pi/4)\operatorname{sign}A}\sum_{j<k}\frac{1}{j!}\Bigl(\frac{-i\langle A^{-1}D,D\rangle}{2\omega}\Bigr)^j u(0)\Bigr|
\le\Bigl(\frac{2\pi}{\omega}\Bigr)^{n/2}|\det A|^{-1/2}\,C_k\Bigl(\frac{\|A^{-1}\|}{2\omega}\Bigr)^k\sum_{|\beta|\le 2k+s}\|D^\beta u\|_{L^2}.
\]
Combining this with (4) and (5), we deduce that for $L_j$ differential operators of order $2j$, and $L_0$ the identity,
\[
\tag{9}
\Bigl|I(\omega)-e^{i\omega f(x_0)}\Bigl(\frac{2\pi}{\omega}\Bigr)^{n/2}|\det f''(x_0)|^{-1/2}e^{i(\pi/4)\operatorname{sign}f''(x_0)}\sum_{j<k}\omega^{-j}L_j u(x_0)\Bigr|
\le\omega^{-n/2-k}\,C_k(f)\sum_{|\beta|\le 2k+s}\|D^\beta u\|_{L^2},
\]
which is the stationary phase lemma. Notice that if $(D^\alpha u)(x_0)$ vanishes for $|\alpha|\le 2l+1$, $l<k$, then the first $l+1$ terms in the sum on the left hand side vanish, so the expansion starts with $\omega^{-n/2-l-1}$.

Assuming $f(x_0)=0$, since
\[
\omega^l(\partial_\omega^l I)(\omega)=\omega^l\int e^{i\omega f(x)}\bigl(if(x)\bigr)^l u(x)\,dx,
\]
and $f$ vanishes quadratically at $x_0$, derivatives of $I$ in $\omega$ possess a similar expansion, starting with a multiple of $\omega^{-n/2}$. Factoring out $e^{i\omega f(x_0)}$ in general,
\[
\tag{10}
\Bigl|\omega^l\partial_\omega^l\bigl(e^{-if(x_0)\omega}I\bigr)(\omega)-\Bigl(\frac{2\pi}{\omega}\Bigr)^{n/2}|\det f''(x_0)|^{-1/2}e^{i(\pi/4)\operatorname{sign}f''(x_0)}\sum_{j<k}\omega^{-j}L_{l,j}u(x_0)\Bigr|
\le\omega^{-n/2-k}\,C_{k,l}(f)\sum_{|\beta|\le 2k+2l+s}\|D^\beta u\|_{L^2},
\]
where $L_{l,j}$ are differential operators of the form $L_{l+j}\,(f(x)-f(x_0))^l$, i.e. $u\mapsto L_{l+j}((f-f(x_0))^l u)$ with $L_{l+j}$ of order $2(l+j)$. One can integrate this expression to obtain the already-known expansion for $I$, which shows that the coefficients in the expansion of $\omega^l\partial_\omega^l(e^{-if(x_0)\omega}I)$ are the term-by-term differentiated coefficients of the expansion of $e^{-if(x_0)\omega}I$. Finally, $(\partial_\omega^l I)(\omega)=\partial_\omega^l\bigl(e^{if(x_0)\omega}(e^{-if(x_0)\omega}I)\bigr)(\omega)$ can be computed by the product rule.

One can also allow parametric dependence on another variable $y\in\mathbb{R}^m$, i.e. for $f\in C^\infty(\mathbb{R}^{n+m})$ with $\operatorname{Im}f\ge 0$, consider the integral
\[
I(\omega,y)=\int_{\mathbb{R}^n}e^{i\omega f(x,y)}u(x,y)\,dx,\qquad u\in C_c^\infty(\mathbb{R}^{n+m}).
\]
As before, $I$ decays rapidly in $\omega$, uniformly in $y$, unless $f$ has critical points in $x$ at which it is real, i.e. unless there exists $(x_0,y_0)$ such that $f_x'(x_0,y_0)=0$ and $\operatorname{Im}f(x_0,y_0)=0$. In addition, differentiation under the integral sign shows that in this case $I$ is $C^\infty$ in $(\omega,y)$, with $D_y^\alpha D_\omega^l I$ still rapidly decreasing.

If $f$ is real, and has some critical point $(x_0,y_0)$ in $x$ which is nondegenerate, i.e. $f_{xx}''$ is invertible, so $d_x f_{x_1}',\ldots,d_x f_{x_n}'$ are linearly independent at $(x_0,y_0)$, then the implicit function theorem guarantees that the joint zero set of $f_{x_1}',\ldots,f_{x_n}'$ is a $C^\infty$ graph over a neighborhood of $y_0$ in $\mathbb{R}^m$, i.e. there is a $C^\infty$ function $X$ defined on a neighborhood of $y_0$ such that in a neighborhood of $(x_0,y_0)$, the critical points of $f$ are given by $(X(y),y)$, and these are nondegenerate. Thus, the previous arguments are applicable, uniformly in $y$, so the stationary phase expansion (10) holds with the $y$ dependence added in. Writing
\[
I(\omega,y)=e^{i\omega f(X(y),y)}\int_{\mathbb{R}^n}e^{i\omega F(x,y)}u(x,y)\,dx,\qquad F(x,y)=f(x,y)-f(X(y),y),
\]
notice that, with $\partial_{y_j}$ standing for the $j$th component of the derivative in the second slot on the right hand side,
\[
(\partial_{y_j}F)(x,y)=(\partial_{y_j}f)(x,y)-(\partial_{y_j}f)(X(y),y)-\Bigl\langle(\partial_x f)(X(y),y),\frac{\partial X}{\partial y_j}(y)\Bigr\rangle,
\]
and thus vanishes at $x=X(y)$, since $X(y)$ is a critical point of $f$ in $x$ (so the last term vanishes identically) and the first two terms cancel there. Since $\partial_x f$, and thus $\partial_x F$, has a nondegenerate zero at $x=X(y)$, we actually obtain smooth vector fields $G_j$ with
\[
(\partial_{y_j}F)(x,y)=G_j(x,y)\cdot(\partial_x F)(x,y).
\]
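In one dimension the leading term of (9) reads $I(\omega)\approx e^{i\omega f(x_0)}\sqrt{2\pi/\omega}\,|f''(x_0)|^{-1/2}e^{i(\pi/4)\operatorname{sign}f''(x_0)}u(x_0)$, with an error of order $\omega^{-3/2}$. The following sketch (our own choice of phase, amplitude, and grid) checks both the value and the decay rate of the error for $f(x)=-\cos x$, whose only critical point on the support of $u$ is $x_0=0$, with $f(0)=-1$ and $f''(0)=1$:

```python
import numpy as np

def bump(x):
    # smooth compactly supported amplitude, supp u = [-1, 1], u(0) = 1/e
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def I(omega):
    # int e^{i omega f(x)} u(x) dx with f(x) = -cos(x), by fine Riemann sum
    x = np.linspace(-1.0, 1.0, 400001)
    dx = x[1] - x[0]
    return np.sum(np.exp(-1j * omega * np.cos(x)) * bump(x)) * dx

def leading(omega):
    # e^{i omega f(0)} sqrt(2 pi/omega) e^{i pi/4} u(0)
    return np.exp(-1j * omega) * np.sqrt(2 * np.pi / omega) \
        * np.exp(1j * np.pi / 4) * np.exp(-1.0)

err_150 = abs(I(150.0) - leading(150.0))
err_600 = abs(I(600.0) - leading(600.0))
```

Quadrupling $\omega$ should shrink the error by roughly $4^{3/2}=8$, well beyond the factor $2$ that the leading term itself loses.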
Differentiation under the integral sign gives, with
\[
\tilde I(\omega,y)=\int_{\mathbb{R}^n}e^{i\omega F(x,y)}u(x,y)\,dx,
\]
\[
\partial_{y_j}\tilde I(\omega,y)=\int_{\mathbb{R}^n}\bigl(i\omega\,\partial_{y_j}F(x,y)\bigr)e^{i\omega F(x,y)}u(x,y)\,dx+\int_{\mathbb{R}^n}e^{i\omega F(x,y)}\,\partial_{y_j}u(x,y)\,dx.
\]
Now, the second term has a stationary phase expansion as before. On the other hand, the first term is
\[
\int_{\mathbb{R}^n}G_j(x,y)\cdot\bigl(i\omega\,\partial_x F(x,y)\bigr)e^{i\omega F(x,y)}u(x,y)\,dx
=\int_{\mathbb{R}^n}G_j(x,y)\cdot\bigl(\partial_x e^{i\omega F(x,y)}\bigr)u(x,y)\,dx
=-\int_{\mathbb{R}^n}e^{i\omega F(x,y)}\,\partial_x\cdot\bigl(G_j(x,y)u(x,y)\bigr)\,dx,
\]
so the stationary phase expansion is again applicable. Thus, $\partial_{y_j}I$ possesses a stationary phase expansion. Since the expansion can be integrated in $y_j$, this proves
that the coefficients of the expansion of the derivative are the differentiated coefficients of the expansion of $I$ (including the statement that the latter are differentiable). Iterating the argument shows that $\omega^l\partial_y^\alpha\partial_\omega^l I$ still possesses an expansion as in (10), with the expansion given by term-by-term differentiation.

We finally discuss an interpretation of the stationary phase result by partially compactifying (or bordifying) the $(\omega,y)$-space. Thus, consider the new variable $h=\omega^{-1}\in(0,1]$. Then the stationary phase lemma, together with $\omega\partial_\omega=-h\partial_h$ and the fact that $\omega^l\partial_\omega^l$ can be rewritten as a linear combination of $(\omega\partial_\omega)^j$ for $j\le l$, and similarly for $h^l\partial_h^l$, shows that
\[
I(h,y)=\int e^{if(x,y)/h}u(x,y)\,dx=e^{if(X(y),y)/h}h^{n/2}J(h,y),
\]
with
\[
h^l\partial_h^l\partial_y^\alpha J\sim\sum_j h^j\tilde L_{j,l,\alpha}u,
\]
where $\sim$ means that if one sums over $j<k$, then one obtains an error term bounded by $C_{l,\alpha,k}h^k$, and the series on the right is the term-by-term derivative of $\sum_j h^j\tilde L_{j,0,0}u$. But notice that $\partial_h^l$ annihilates $\sum_{j<l}h^j\tilde L_{j,0,0}u$, so the coefficients $\tilde L_{j,l,\alpha}u$ vanish for $j<l$; in particular, taking $k=l$,
\[
|h^l\partial_h^l\partial_y^\alpha J|\le C_{l,\alpha,l}h^l,\quad\text{i.e.}\quad |\partial_h^l\partial_y^\alpha J|\le C_{l,\alpha,l}
\]
for all $l$ and $\alpha$, i.e. all partial derivatives of the $C^\infty$ function $J$ on $(0,1]_h\times\mathbb{R}^m_y$ are bounded. This implies that $J$ has a unique $C^\infty$ extension to $[0,1]\times\mathbb{R}^m$: uniqueness is automatic as $(0,1]\times\mathbb{R}^m$ is dense in $[0,1]\times\mathbb{R}^m$. On the other hand, $\partial_h^l\partial_y^\alpha J$ extends continuously to $h=0$ since
\[
\partial_h^l\partial_y^\alpha J(h_0,y)=\partial_h^l\partial_y^\alpha J(1,y)-\int_{h_0}^1\partial_h^{l+1}\partial_y^\alpha J(h,y)\,dh,
\]
and $\partial_h^{l+1}\partial_y^\alpha J(h,y)$ is bounded; and then repeated integration shows that for $l'\le l$, $\alpha'\le\alpha$, $\partial_h^{l'}\partial_y^{\alpha'}J$ extends as well (which of course would follow from the same argument with $(l,\alpha)$ replaced by $(l',\alpha')$), with extension given by the appropriate integral of $\partial_h^{l+1}\partial_y^\alpha J(h,y)$, so in particular this extension is differentiable $(l-l',\alpha-\alpha')$ times. As $l$ and $\alpha$ are arbitrary, we deduce that the extension of $J$ we defined is $C^\infty$; in view of the uniqueness, one usually simply writes $J$ for the extension as well. A slight generalization is then obtained by allowing $u$ to depend on $h$ as well, i.e.
consider $u\in C_c^\infty(\mathbb{R}^n\times\mathbb{R}^m\times[0,1])$, and consider
\[
J(h,y)=\int e^{if(x,y)/h}u(x,y,h)\,dx.
\]
Regarding the $h$ in $u$ as one of the parameters $y$ (writing it as $h'$ to distinguish it from the $h$ in the phase), we have then $J(h,y)=\hat J(h,y,h)$, where
\[
\hat J(h,y,h')=\int e^{if(x,y)/h}u(x,y,h')\,dx=e^{if(X(y),y)/h}h^{n/2}\hat J_0(h,y,h')
\]
with $\hat J_0\in C^\infty([0,1]\times\mathbb{R}^m\times[0,1])$, so restricting to $h'=h$ yields
\[
J(h,y)=e^{if(X(y),y)/h}h^{n/2}\tilde J(h,y),
\]
with $\tilde J\in C^\infty([0,1]\times\mathbb{R}^m)$.
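The bordified picture can also be illustrated numerically (again a sketch with our own data). Take $n=1$, no parameters $y$, and $f(x)=x^2/2$, so $X\equiv 0$ and $f(X)=0$: then $J(h)=h^{-1/2}\int e^{ix^2/(2h)}u(x)\,dx$ should extend continuously to $h=0$ with boundary value $J(0)=\sqrt{2\pi}\,e^{i\pi/4}u(0)$, and $J(h)-J(0)=O(h)$.

```python
import numpy as np

def bump(x):
    # smooth compactly supported amplitude, supp u = [-1, 1], u(0) = 1/e
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def J(h):
    # J(h) = h^{-1/2} int e^{i x^2/(2h)} u(x) dx, the rescaled integral
    x = np.linspace(-1.0, 1.0, 400001)
    dx = x[1] - x[0]
    return np.sum(np.exp(1j * x**2 / (2 * h)) * bump(x)) * dx / np.sqrt(h)

J0 = np.sqrt(2 * np.pi) * np.exp(1j * np.pi / 4) * np.exp(-1.0)  # value at h = 0
d_coarse = abs(J(0.05) - J0)
d_fine = abs(J(0.01) - J0)
```

Shrinking $h$ by a factor of $5$ should shrink $|J(h)-J(0)|$ by roughly the same factor, consistent with $J$ being $C^\infty$ (in particular Lipschitz) up to $h=0$.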