Gaussian beams and the propagation of singularities

Dominic Dold, Karen Habermann, Ellen Powell

March 17, 2014

Contents

1 Introduction
2 Geometrical optics
  2.1 Construction of the approximation
  2.2 The eikonal equation
3 The construction of Gaussian beams
  3.1 The construction for the wave equation
4 Reflections
5 Propagation of Singularities
  5.1 Wave Front Sets
    5.1.1 Fourier Transforms
    5.1.2 Definition
    5.1.3 Examples
    5.1.4 Effect of Partial Differential Operators
  5.2 Propagation of Singularities
References

1 Introduction

This report deals with linear hyperbolic partial differential equations. The prototype of such an equation is the classical wave equation

$$ \Box u := \left( \frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x_1^2} - \dots - \frac{\partial^2}{\partial x_n^2} \right) u = 0 \qquad (1.1) $$

for a function $u$ on $\mathbb{R}^{n+1}$. Our notion of hyperbolicity is made precise in the following

Definition 1.1. Let $P$ be a linear partial differential operator of order $m$ acting on real-valued functions $(t,x) \mapsto u(t,x)$ with $(t,x) \in \mathbb{R} \times \mathbb{R}^n$. Setting $f_\alpha := \exp(i\alpha(t\tau + x \cdot \xi))$, the principal symbol of $P$ is defined by the polynomial

$$ p_m(t,x,\tau,\xi) := \lim_{\alpha \to \infty} \alpha^{-m} \, [f_{-\alpha} P f_\alpha](t,x), \qquad (1.2) $$

of degree $m$. We call $P$ strictly hyperbolic if $s \mapsto p_m(t,x,s,\xi)$ has $m$ distinct real roots for all $\xi \neq 0$.

Remark 1.2. The coefficient of $\partial^m/\partial t^m$ in a strictly hyperbolic operator of order $m$ is non-zero. We therefore assume this coefficient to be one.

It is a feature of the wave equation to allow for travelling wave packets, i.e. for solutions which are localised in space and propagate in time along certain curves, and this property is shared by the whole class of linear hyperbolic PDEs $Pu = 0$ on $\mathbb{R}^{n+1}$. One can construct such localised solutions by means of Gaussian beams, which are approximate solutions of the form

$$ (t,x) \mapsto e^{ik\psi(t,x)} \left( a_0(t,x) + \frac{a_1(t,x)}{k} + \dots + \frac{a_N(t,x)}{k^N} \right), \qquad (1.3) $$

where $\psi$ and the $a_i$, for $0 \le i \le N$, $N \in \mathbb{N}$, are smooth and $k > 1$. One shows that these Gaussian beams must be concentrated around ray paths.

Definition 1.3. Let $P$ be a strictly hyperbolic operator and $p_m$ its principal symbol. Let $(\hat{t}, \hat{x}, \hat{\xi}) \in \mathbb{R} \times \mathbb{R}^n \times (\mathbb{R}^n \setminus \{0\})$ and choose $\hat{\tau} \in \mathbb{R}$ such that

$$ p_m(\hat{t}, \hat{x}, \hat{\tau}, \hat{\xi}) = 0. \qquad (1.4) $$

Then $\gamma : s \mapsto (t(s), x(s), \tau(s), \xi(s))$ is a null bicharacteristic curve through $(\hat{t}, \hat{x}, \hat{\xi})$ if the Hamiltonian system

$$ \dot{x} = \frac{\partial p_m}{\partial \xi}, \quad \dot{t} = \frac{\partial p_m}{\partial \tau}, \quad \dot{\xi} = -\frac{\partial p_m}{\partial x}, \quad \dot{\tau} = -\frac{\partial p_m}{\partial t} \qquad (1.5) $$

is satisfied with $(t(0), x(0), \tau(0), \xi(0)) = (\hat{t}, \hat{x}, \hat{\tau}, \hat{\xi})$. We call the projection of a bicharacteristic to $(t,x)$-space a ray path.
Remark 1.4. Recall that for $\Omega \subseteq \mathbb{R}^{2(n+1)}$ and

$$ F : \Omega \to \mathbb{R}, \quad (t,x,\tau,\xi) \mapsto F(t,x,\tau,\xi), \qquad (1.6) $$

the Hamiltonian vector field is defined by

$$ H_F := \frac{\partial F}{\partial \tau} \frac{\partial}{\partial t} - \frac{\partial F}{\partial t} \frac{\partial}{\partial \tau} + \sum_{i=1}^{n} \left( \frac{\partial F}{\partial \xi_i} \frac{\partial}{\partial x_i} - \frac{\partial F}{\partial x_i} \frac{\partial}{\partial \xi_i} \right). \qquad (1.7) $$

Thus each null bicharacteristic curve $\gamma$ satisfies $\dot{\gamma} = H_{p_m}(\gamma)$.
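As an illustration (not part of the notes), the Hamiltonian system (1.5) can be integrated numerically for the wave operator, whose principal symbol is $p_2 = -\tau^2 + |\xi|^2$. A minimal sketch with a hand-rolled RK4 step:

```python
import numpy as np

def p2(state):
    # Principal symbol of the wave operator: p2 = -tau^2 + |xi|^2.
    t, x, tau, xi = state
    return -tau**2 + np.dot(xi, xi)

def hamilton_rhs(state):
    # Hamilton's equations (1.5) for p2; the derivatives are computed by
    # hand since p2 is explicit.
    t, x, tau, xi = state
    return (-2.0 * tau,          # dt/ds   =  dp2/dtau
            2.0 * xi,            # dx/ds   =  dp2/dxi
            0.0,                 # dtau/ds = -dp2/dt = 0
            np.zeros_like(xi))   # dxi/ds  = -dp2/dx = 0

def rk4_step(state, h):
    def add(s, k, c):
        return tuple(a + c * b for a, b in zip(s, k))
    k1 = hamilton_rhs(state)
    k2 = hamilton_rhs(add(state, k1, h / 2))
    k3 = hamilton_rhs(add(state, k2, h / 2))
    k4 = hamilton_rhs(add(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Null initial data: tau = -|xi| makes p2 vanish (here |xi| = 1).
xi0 = np.array([0.6, 0.8])
state = (0.0, np.array([1.0, -2.0]), -1.0, xi0)
for _ in range(200):           # integrate up to s = 2
    state = rk4_step(state, 0.01)
```

Along the flow the symbol stays zero and the ray path moves at unit speed, as Definition 1.3 suggests; since the coefficients are constant, the bicharacteristic is a straight line.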

Another characteristic of hyperbolic equations is that they allow for solutions with singularities, in sharp contrast to elliptic partial differential equations, where a solution $u$ of $Du = f$ for elliptic $D$ and smooth $f$ is always smooth. This gives rise to the interesting question of how singularities in solutions to a hyperbolic PDE propagate. The notion of propagation of singularities will be made precise in due course.

This report is structured as follows. First we sketch the method of approximating solutions to the linear wave equation using geometrical optics, which is closely related to the Gaussian beam ansatz, and we explain the limitations of the geometrical method. The subsequent part is devoted to the construction of Gaussian beams on $\mathbb{R}^{n+1}$, for arbitrary strictly hyperbolic operators $P$ and for the special case of the wave operator $\Box$. Thereafter we explore how to adapt this construction to the background $\mathbb{R} \times \Omega$ with a bounded domain $\Omega \subseteq \mathbb{R}^n$. The last sections deal with the aforementioned propagation of singularities and apply the Gaussian beam approximation to characterise it. The main reference for the theory presented in this report is Ralston's article [3]. The section about geometrical optics follows the lines of Taylor [5].

2 Geometrical optics

The geometric optics approximation is an important tool in studying the wave equation. The threefold purpose of this section is to introduce this method, to show its limitations and to prepare the way for the Gaussian beam approximation. We therefore do not intend to give an account of the geometrical optics ansatz in its most general form, but rather present an approach parallel to our construction of Gaussian beams.

Consider again the wave equation $\Box u = 0$ on $\mathbb{R}^{n+1}$. We aim to find approximate solutions of the form

$$ (t,x) \mapsto e^{ik\psi(t,x)} \sum_{j \ge 0} \frac{a_j(t,x)}{(ik)^j} \qquad (2.1) $$

with $a_j, \psi \in C^\infty((-T,T) \times \mathbb{R}^n)$ for some $T > 0$. More precisely, we want to achieve that, for $v_N := e^{ik\psi} \sum_{j=0}^{N} a_j (ik)^{-j}$,

$$ \Box v_N = O(k^{-\nu}) \qquad (2.2) $$

in $C^{N+1-\nu}$ for $0 \le \nu \le N$. We will see that we can arrange this if, for all $N$, we require

$$ v_N(0,x) = a(x) \, e^{ik\varphi(x)} \qquad (2.3) $$

where $a \in C_0^\infty(\mathbb{R}^n)$ and $\varphi \in C^\infty(\mathbb{R}^n)$ with $\nabla\varphi \neq 0$ on a neighbourhood of $\operatorname{supp} a$.

2.1 Construction of the approximation

One easily calculates that

$$ \Box v_N = e^{ik\psi} \, \Box\!\left( a_0 + \dots + \frac{a_N}{(ik)^N} \right) + 2ik \, e^{ik\psi} \left( \frac{\partial\psi}{\partial t} \frac{\partial}{\partial t} - \sum_{i=1}^{n} \frac{\partial\psi}{\partial x_i} \frac{\partial}{\partial x_i} \right)\!\left( a_0 + \dots + \frac{a_N}{(ik)^N} \right) + ik \, (\Box\psi) \, v_N - k^2 \left( \left(\frac{\partial\psi}{\partial t}\right)^2 - |\nabla_x\psi|^2 \right) v_N. \qquad (2.4)-(2.6) $$

The coefficient of $k^2$ is

$$ - e^{ik\psi} \, a_0 \left( \left(\frac{\partial\psi}{\partial t}\right)^2 - |\nabla_x\psi|^2 \right), \qquad (2.7) $$

the coefficient of $k^1$ can be written as

$$ i \, e^{ik\psi} \, a_0 \, \Box\psi + i \, e^{ik\psi} \left( \left(\frac{\partial\psi}{\partial t}\right)^2 - |\nabla_x\psi|^2 \right) a_1 + i \, e^{ik\psi} \left( 2 \frac{\partial\psi}{\partial t} \frac{\partial a_0}{\partial t} - 2 \nabla_x\psi \cdot \nabla_x a_0 \right), \qquad (2.8)-(2.9) $$

and for $k^{1-j}$, $j \ge 1$, we have

$$ i^{1-j} e^{ik\psi} \, a_j \, \Box\psi + i^{1-j} e^{ik\psi} \, a_{j+1} \left( \left(\frac{\partial\psi}{\partial t}\right)^2 - |\nabla_x\psi|^2 \right) + 2 i^{1-j} e^{ik\psi} \left( \frac{\partial\psi}{\partial t} \frac{\partial a_j}{\partial t} - \sum_{i=1}^{n} \frac{\partial\psi}{\partial x_i} \frac{\partial a_j}{\partial x_i} \right) + i^{1-j} e^{ik\psi} \, \Box a_{j-1}. \qquad (2.10)-(2.11) $$

We will set these coefficients successively equal to zero. The coefficient of $k^2$ vanishes if $\psi$ satisfies the eikonal equation

$$ -\left(\frac{\partial\psi}{\partial t}\right)^2 + |\nabla_x\psi|^2 = 0 \qquad (2.12) $$

with initial datum $\psi(0,x) = \varphi(x)$. Below we will prove that there is a neighbourhood $U$ of $K := \operatorname{supp} a$ and a $T > 0$ such that the eikonal equation exhibits a unique solution for each choice of sign, i.e. a $\psi \in C^\infty((-T,T) \times U)$ with

$$ \psi(0,x) = \varphi(x), \qquad \frac{\partial\psi}{\partial t}(0,x) = \pm|\nabla_x\varphi(x)|. \qquad (2.13) $$

Remark 2.1. Equation (2.12) can be written as

$$ p_2\left(t, x, \frac{\partial\psi}{\partial t}, \nabla_x\psi\right) = 0 \qquad (2.14) $$

with the principal symbol $p_2$ of the wave equation. In the geometrical optics ansatz we require the principal symbol to vanish, but this may lead to a solution which is only local in time, i.e. $T < \infty$, meaning that we cannot construct a global approximate solution to the wave equation. In the Gaussian beam approximation, we require equation (2.14) to hold only up to a certain order, which will be crucial to guarantee global approximations.

Proceeding to the next order, we see that the coefficient of $k^1$ vanishes if the transport equation

$$ 2 \frac{\partial\psi}{\partial t} \frac{\partial a_0}{\partial t} = 2 \nabla_x\psi \cdot \nabla_x a_0 - a_0 \, \Box\psi \qquad (2.15) $$

is satisfied. Noting that $\partial_t\psi \neq 0$ on $U$ for $t$ sufficiently small, one shows as in [5] that this PDE has unique solutions once initial conditions are specified. Equation (2.3) implies that

$$ a_0(0,x) = a(x). \qquad (2.16) $$

Thus we get $a_0 \in C^\infty((-T,T) \times U)$, compactly supported in $U$ on every time slice, for $T$ small enough. The terms in equation (2.10) vanish provided that

$$ 2 \frac{\partial\psi}{\partial t} \frac{\partial a_j}{\partial t} = 2 \nabla_x\psi \cdot \nabla_x a_j - a_j \, \Box\psi - \Box a_{j-1}. \qquad (2.17) $$

In the light of (2.3), we require

$$ a_j(0,x) = 0. \qquad (2.18) $$

Hence the transport equation (2.17) has a unique solution $a_j \in C^\infty((-T,T) \times U)$, compactly supported in $U$ on each time slice, for $T$ small enough. The above construction yields that

$$ \Box v_N = (ik)^{-N} (\Box a_N) \, e^{ik\psi}, \qquad (2.19) $$

and we conclude that

$$ \Box v_N = O(k^{-\nu}) \qquad (2.20) $$

in $C^{N+1-\nu}((-T,T) \times U)$ for $0 \le \nu \le N$. Therefore the geometrical optics ansatz gives a suitable approximation to solutions of the wave equation. Furthermore, this approximation induces possible initial conditions for an initial value problem in $\mathbb{R}^{n+1}$.

2.2 The eikonal equation

A crucial step in the construction above was the appeal to solutions of the eikonal equation. Considering eikonal equations of the form

$$ p_m\left(t, x, \frac{\partial\psi}{\partial t}, \nabla_x\psi\right) = 0 \qquad (2.21) $$

to high order is a key idea in the Gaussian beam approximation. To introduce the abstract methods which are needed to study this problem, we will give an account of how existence and uniqueness of solutions to the eikonal equation can be shown. We will conclude this section by explaining why solutions to this equation need not be global, which results in a breakdown of the geometrical optics approximation.

The eikonal equation is of the more general form

$$ F(x, du) = 0 \qquad (2.22) $$

for $F$ smooth on $\mathbb{R}^{2(n+1)}$. Let $S = \{t = 0\}$ and $v$ smooth on $S$. Require $u|_S = v$. Let $x_0 \in S$ and $\xi_0 := \nabla_x v(x_0)$. Let $\tau_0 \in \mathbb{R}$ be such that

$$ F(x_0, \tau_0, \xi_0) = 0. \qquad (2.23) $$

Note that if $F$ is the principal symbol of a strictly hyperbolic operator, then such a choice is always possible provided that $\xi_0 \neq 0$. We assume that $S$ satisfies the noncharacteristic hypothesis

$$ \frac{\partial F}{\partial \tau}(x_0, \tau_0, \xi_0) \neq 0. \qquad (2.24) $$

If $F$ is the principal symbol of $\Box$, then this is satisfied as long as $\xi_0 \neq 0$.

We look for a solution by appealing to the theory of Hamiltonian systems. Recall that a symplectic form $\sigma$ on $\mathbb{R}^{2(n+1)} = \{(x,\xi) \in \mathbb{R}^{n+1} \times \mathbb{R}^{n+1}\}$ is given by

$$ \sigma = \sum_{j} d\xi_j \wedge dx_j. \qquad (2.25) $$

Now let $\Lambda$ be the graph of a function $\xi = \Xi(x)$ in $\mathbb{R}^{2(n+1)}$. Then the following holds.

Proposition 2.2. The surface $\Lambda$ is locally the graph of $du$ for a smooth function $u$ if and only if

$$ \frac{\partial \Xi_j}{\partial x_k} = \frac{\partial \Xi_k}{\partial x_j} \qquad (2.26) $$

for all $j, k$.

Proof. Condition (2.26) is equivalent to saying that $\sum_i \Xi_i(x) \, dx_i$ is closed. By the Poincaré lemma, there is a smooth $u$ such that $du = \sum_i \Xi_i(x) \, dx_i$ locally, but this is equivalent to $\Lambda$ being locally the graph of $du$.

Proposition 2.3. The surface $\Lambda$ is locally the graph of $du$ if and only if $\sigma(X, Y) = 0$ for all vectors $X, Y$ tangent to $\Lambda$.

Proof. It suffices to check the condition on $\sigma$ on a generating system of tangent vectors. Define

$$ X_j = \frac{\partial}{\partial x_j} + \sum_{l} \frac{\partial \Xi_l}{\partial x_j} \frac{\partial}{\partial \xi_l} \qquad (2.27) $$

for $j = 1, \dots, n$. Then these generate the tangent space and one easily checks that

$$ \sigma(X_k, X_j) = \frac{\partial \Xi_j}{\partial x_k} - \frac{\partial \Xi_k}{\partial x_j}. \qquad (2.28) $$

The result follows from the previous proposition.

We define a set $\Sigma \subseteq \mathbb{R}^{2(n+1)}$ over $S$ by

$$ \Sigma = \{(x, \xi) : t = 0, \ \xi_j = \partial_j v, \ F(x, \tau, \xi) = 0\} \qquad (2.29) $$

using $x = (t, x)$. The noncharacteristic condition implies by the implicit function theorem that there is a local smooth function $\tau(x)$ such that $F(x, \tau, \xi) = 0$. Thus $\Sigma$ is an $n$-dimensional surface. Define $\Lambda$ to be the union of the integral curves of the Hamiltonian vector field $H_F$ through $\Sigma$. By the noncharacteristic condition, $H_F$ has a nonvanishing $\partial/\partial t$ component, so that $\Lambda$ has dimension $n+1$ and is the graph of a function $\xi = \Xi(x)$ in a neighbourhood of $x_0$.
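A concrete solution of the eikonal equation (2.12) is the outgoing spherical phase $\psi = |x| - t$ (an illustrative example, not taken from the notes). A quick finite-difference check in two space dimensions:

```python
import numpy as np

def psi(t, x1, x2):
    # Outgoing spherical phase psi = |x| - t, smooth away from the origin.
    return np.hypot(x1, x2) - t

def eikonal_residual(t, x1, x2, h=1e-5):
    # Central differences for psi_t and grad_x psi, then the left-hand
    # side of (2.12): -(psi_t)^2 + |grad_x psi|^2.
    pt = (psi(t + h, x1, x2) - psi(t - h, x1, x2)) / (2 * h)
    p1 = (psi(t, x1 + h, x2) - psi(t, x1 - h, x2)) / (2 * h)
    p2 = (psi(t, x1, x2 + h) - psi(t, x1, x2 - h)) / (2 * h)
    return -pt**2 + p1**2 + p2**2

res = eikonal_residual(0.7, 1.0, -2.0)   # residual at a sample point
```

The residual vanishes up to discretisation error, since $\partial_t\psi = -1$ and $|\nabla_x\psi| = 1$ away from the origin.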

Theorem 2.4. The surface $\Lambda$ is locally the graph of $du$ for a solution $u$ to

$$ F(x, du) = 0, \qquad u|_S = v. \qquad (2.30) $$

Proof. Let $X, Y$ be vector fields tangent to $\Lambda$ at $(x, \xi) \in \Lambda$. By the previous propositions, it suffices to show $\sigma(X, Y) = 0$.

First, suppose that $x \in S$, i.e. $(x, \xi) \in \Sigma$. Decompose $X = X_1 + X_2$, $Y = Y_1 + Y_2$ such that $X_1, Y_1$ are tangent to $\Sigma$ and $X_2, Y_2$ are multiples of $H_F$. The surface $\Sigma$ is the graph of a gradient if considered as its restriction to $\mathbb{R}^{2n}$, forgetting $t$ and $\tau$. By Proposition 2.3 we have $\sigma(X_1, Y_1) = 0$. Recalling that $\sigma(H_F, \cdot) = dF$ and that $F = 0$ along integral curves of $H_F$, it follows that

$$ \sigma(X, Y) = \sigma(X_1, Y_1) + \sigma(X_2, Y_1) + \sigma(X_1, Y_2) + \sigma(X_2, Y_2) = 0. \qquad (2.31) $$

Denote the flow generated by $H_F$ by $F_s$. Now let $(x, \xi) \in \Sigma$ and $X, Y$ be tangent at $F_s(x, \xi) \in \Lambda$. We have

$$ \sigma(X, Y) = (F_s^* \sigma)(F_{-s*} X, F_{-s*} Y), \qquad (2.32) $$

where $F_s^*$ denotes the pullback. Note that $F_{-s*} X, F_{-s*} Y$ are tangent at $\Sigma$. Using that the flow generated by $H_F$ leaves the symplectic form invariant, we get

$$ \sigma(X, Y) = \sigma(F_{-s*} X, F_{-s*} Y) = 0, \qquad (2.33) $$

which proves the theorem.

Remark 2.5. This type of construction using the framework of symplecticity is exactly the one which will be used in the construction of a $\psi$ such that

$$ p_m\left(x, \frac{\partial\psi}{\partial x}\right) = 0. \qquad (2.34) $$

The above existence proof of solutions to (2.30) immediately shows why we cannot expect global solutions in general. If integral curves of $H_F$ cross, i.e. if $F_s$ is not injective anymore, then solutions break down.

Example 2.6. Let $F$ be the principal symbol of the wave equation in $\mathbb{R}^{2+1}$, i.e. $F(t,x,y,\tau,\xi) = -\tau^2 + |\xi|^2$. We choose the initial datum $v(x,y) = \sin x + \cos y$ on $S = \{t = 0\}$. Then $\nabla v \neq 0$ for $(x,y) \in (-\pi/2, \pi/2) \times (-\pi, \pi)$. Defining $\Sigma$ as above, we act with the Hamiltonian vector field

$$ H_F = -2\sqrt{\cos^2 x + \sin^2 y} \, \frac{\partial}{\partial t} + 2\cos x \, \frac{\partial}{\partial x} - 2\sin y \, \frac{\partial}{\partial y}. \qquad (2.35) $$

One can easily find two integral curves which cross in finite time.
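The breakdown in Example 2.6 can be illustrated in one space dimension (a simplified analogue, not the example itself): for $F = -\tau^2 + \xi^2$ with the root $\tau = -|\xi|$, rays are straight lines of unit speed in the direction of $\operatorname{sign}(\varphi'(x_0))$, and the focusing datum $\varphi(x) = \cos x$ makes rays from either side of $x = 0$ collide.

```python
import numpy as np

def ray(x0, t):
    # Ray path of unit speed in direction sign(phi'(x0)) for phi(x) = cos x:
    # dx/dt = xi/|xi| with xi = phi'(x0) = -sin(x0), frozen along the ray
    # (the choice tau = -|xi| fixes the orientation in time).
    return x0 + np.sign(-np.sin(x0)) * t

# Two rays launched symmetrically about x = 0 head toward each other and
# meet at x = 0 at time t = 1/2; beyond the crossing a single-valued phase
# cannot exist.
t_cross = 0.5
left, right = ray(-0.5, t_cross), ray(0.5, t_cross)
```

After the crossing time the map from initial points to ray positions is no longer injective, which is exactly the failure mode of the geometrical optics phase.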

3 The construction of Gaussian beams

For the pure construction of Gaussian beams, we do not need a distinguished time coordinate, as we do not have to think about going forward or backward in time. Therefore, in order to ease notation, we use $x_0 = t$ until specified otherwise. In particular, we write $x$ to denote $(x_0, x_1, \dots, x_n) \in \mathbb{R}^{n+1}$.

Let $P(x, D)$ be a strictly hyperbolic linear partial differential operator of order $m$ with real principal symbol $p_m(x, \xi)$, and let $\Gamma$ be a smooth curve in $\mathbb{R}^{n+1}$ given by $x(s)$ for $s \in \mathbb{R}$. We are interested in finding asymptotic solutions $u(x, k)$ to the partial differential equation $P(x, D)u = 0$ which become concentrated on $\Gamma$ as $k \to \infty$. According to the Gaussian beam ansatz, we consider functions $u$ of the form

$$ u(x, k) = e^{ik\psi(x)} \left( a_0(x) + \frac{a_1(x)}{k} + \dots + \frac{a_N(x)}{k^N} \right). \qquad (3.1) $$

As we want $u$ to be an asymptotic solution to $P(x, D)u = 0$, we aim to find $a_0(x), a_1(x), \dots, a_N(x)$ and $\psi(x)$ such that

$$ P(x, D)u = O(k^{-M}) \qquad (3.2) $$

for some large $M$. On the other hand, we also want $u$ to become concentrated on $\Gamma$ as $k \to \infty$, and for that reason we would like to choose $\psi(x)$ such that for all $s \in \mathbb{R}$

(a) $\psi(x(s))$ is real-valued, and

(b) $\operatorname{Im}\left( \frac{\partial^2\psi}{\partial x_i \partial x_j} \right)(x(s))$ is positive definite on vectors orthogonal to $\dot{x}(s)$.

We note that if both (a) and (b) are satisfied, then $u$ rapidly decreases off $\Gamma$, because $e^{ik\psi(x)}$ looks like a Gaussian distribution with variance proportional to $k^{-1}$ on planes perpendicular to $\Gamma$. Thus, after multiplying $u$ by a $k$-independent function which vanishes outside a small enough neighbourhood of $\Gamma$ but which is also equal to one on an even smaller neighbourhood of $\Gamma$, we obtain an asymptotic solution to $P(x, D)u = 0$ which indeed becomes concentrated on $\Gamma$ as $k \to \infty$. Moreover, the rapid decrease off $\Gamma$ implied by choosing $\psi(x)$ subject to (a) and (b) is also used in establishing (3.2). As we see later, the estimate (3.2) follows from the vanishing of $P(x, D)u$ on $\Gamma$ to sufficiently high order.

Applying $P(x, D)$ to the general form (3.1) gives

$$ P(x, D)u = k^m \, p_m(x, \nabla\psi(x)) \, e^{ik\psi(x)} \, a_0(x) + O(k^{m-1}). $$
Thus, in order to achieve (3.2), we might want to try to find $\psi(x)$ such that $p_m(x, \nabla\psi) \equiv 0$. However, in the geometric optics section we saw that this could be too much to ask, as it might result in solutions breaking down in finite time. It turns out that it suffices to have $p_m(x, \nabla\psi)$ vanish to high order on $\Gamma$.

Let us now see what conditions we obtain by requiring that $f(x) = p_m(x, \nabla\psi(x))$ vanishes to high order on $\Gamma$. Vanishing of order zero gives

$$ p_m(x(s), \xi(s)) = 0 \qquad (3.3) $$

where $\xi(s) = \nabla\psi(x(s))$, whereas vanishing of order one yields (using the summation convention)

$$ 0 = \frac{\partial f}{\partial x_i} = \frac{\partial p_m}{\partial x_i} + \frac{\partial p_m}{\partial \xi_k} \frac{\partial^2\psi}{\partial x_i \partial x_k} \qquad (3.4) $$

along $\Gamma$ for $i = 0, 1, \dots, n$. Before we have a look at the higher orders of $f$, we consider condition (3.4) in more detail. Since the principal symbol $p_m$ is real, taking the imaginary part of (3.4) gives

$$ 0 = \frac{\partial p_m}{\partial \xi_k}(x(s), \xi(s)) \, \operatorname{Im}\left( \frac{\partial^2\psi}{\partial x_i \partial x_k} \right)(x(s)). $$

Thus, for all $s \in \mathbb{R}$ we need $\frac{\partial p_m}{\partial \xi}(x(s), \xi(s))$ to be parallel to $\dot{x}(s)$, since otherwise we could find some $s_0 \in \mathbb{R}$ for which $\operatorname{Im}\left( \frac{\partial^2\psi}{\partial x_i \partial x_k} \right)(x(s_0))$ was not positive definite on vectors orthogonal to $\dot{x}(s_0)$. After a reparametrisation, we can then assume that

$$ \dot{x}(s) = \frac{\partial p_m}{\partial \xi}(x(s), \xi(s)) $$

for all $s \in \mathbb{R}$. Plugging this into condition (3.4) yields

$$ 0 = \frac{\partial p_m}{\partial x_i} + \frac{dx_k}{ds} \frac{\partial^2\psi}{\partial x_i \partial x_k} = \frac{\partial p_m}{\partial x_i} + \frac{d\xi_i}{ds}, $$

from which we obtain that $\dot{\xi}(s) = -\frac{\partial p_m}{\partial x}(x(s), \xi(s))$. Hence, we cannot construct a Gaussian beam along $\Gamma$ unless $(x(s), \xi(s))$ is a bicharacteristic curve, cf. the definition below.

Definition 3.1. A curve $(x(s), \eta(s))$ is a bicharacteristic for a linear partial differential operator $P(x, D)$ of order $m$ if it is a solution of

$$ \dot{x}(s) = \frac{\partial p_m}{\partial \eta}(x(s), \eta(s)) \quad \text{and} \quad \dot{\eta}(s) = -\frac{\partial p_m}{\partial x}(x(s), \eta(s)), $$

where $p_m$ is the principal symbol of $P(x, D)$.

It follows straight from the definition that $p_m(x(s), \eta(s))$ is constant along any bicharacteristic curve $(x(s), \eta(s))$. This helps in dealing with requirement (3.3) in our Gaussian beam construction. Provided that, for the curve $\Gamma$ along which we want to construct a Gaussian beam, the curve $(x(s), \xi(s)) = (x(s), \nabla\psi(x(s)))$ is indeed a bicharacteristic, we only need to check that $p_m(x(0), \xi(0)) = 0$ to ensure that (3.3) holds for all $s \in \mathbb{R}$. Bicharacteristic curves which satisfy (3.3) are commonly referred to as null bicharacteristics.

Hence, so far we have established that unless $(x(s), \xi(s))$ is a null bicharacteristic curve there is no hope of constructing a Gaussian beam along $\Gamma = \{x(s)\}$. Therefore, throughout the remainder of the construction we shall assume that $(x(s), \xi(s))$ is a null bicharacteristic curve. Under this assumption we are also guaranteed that $\dot{x}(s) \neq 0$ for all $s \in \mathbb{R}$, for the following reasons. Since $P(x, D)$ is a strictly hyperbolic operator, the polynomial

$$ g(\xi_0) = p_m(x_0, x_1, \dots, x_n, \xi_0, \xi_1, \dots, \xi_n) $$

cannot have multiple roots for $(\xi_1, \dots, \xi_n) \neq 0$. In particular, from $p_m(x(s), \xi(s)) \equiv 0$ it follows that

$$ \frac{\partial p_m}{\partial \xi_0}(x(s), \xi(s)) \neq 0 $$

and therefore

$$ \dot{x}(s) = \frac{\partial p_m}{\partial \xi}(x(s), \xi(s)) \neq 0, $$

as claimed. Moreover, note that for bicharacteristic curves $(x(s), \xi(s))$, condition (3.4) is only the compatibility condition

$$ \dot{\xi}_i(s) = \frac{\partial^2\psi}{\partial x_i \partial x_j} \, \dot{x}_j(s). $$
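The constancy of $p_m$ along bicharacteristics can also be seen numerically in a variable-coefficient case. The model below, $\partial_t^2 - c(x)^2\partial_x^2$ with $p = -\tau^2 + c(x)^2\xi^2$, is an assumed example, not taken from the notes:

```python
import numpy as np

def c(x):          # variable wave speed (illustrative choice)
    return 1.0 + 0.5 * np.sin(x)

def dc(x):         # its derivative
    return 0.5 * np.cos(x)

def p(state):
    # p(t, x, tau, xi) = -tau^2 + c(x)^2 xi^2, the symbol of the assumed
    # model operator d_t^2 - c(x)^2 d_x^2.
    t, x, tau, xi = state
    return -tau**2 + (c(x) * xi)**2

def rhs(state):
    # The system of Definition 3.1 for this symbol.
    t, x, tau, xi = state
    return np.array([-2.0 * tau,                    # dt/ds   =  dp/dtau
                     2.0 * c(x)**2 * xi,            # dx/ds   =  dp/dxi
                     0.0,                           # dtau/ds = -dp/dt
                     -2.0 * c(x) * dc(x) * xi**2])  # dxi/ds  = -dp/dx

def rk4(state, h, steps):
    for _ in range(steps):
        k1 = rhs(state); k2 = rhs(state + h/2*k1)
        k3 = rhs(state + h/2*k2); k4 = rhs(state + h*k3)
        state = state + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return state

x0, xi0 = 0.3, 1.0
start = np.array([0.0, x0, -c(x0) * xi0, xi0])   # null data: p = 0
end = rk4(start, 1e-3, 5000)                     # integrate up to s = 5
```

Here $\xi$ genuinely varies along the curve, yet $p$ remains (numerically) zero, so the null condition (3.3) only needs to be imposed at $s = 0$.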

Let us now consider what conditions we obtain by requiring that $f(x) = p_m(x, \nabla\psi(x))$ vanishes to second order on $\Gamma$. As we will see below, this gives rise to a non-linear ordinary differential equation, and the crucial part of the Gaussian beam construction will be to show that one can solve this differential equation globally. We want

$$ 0 = \frac{\partial^2 f}{\partial x_j \partial x_i} = \frac{\partial^2 p_m}{\partial x_j \partial x_i} + \frac{\partial^2 p_m}{\partial \xi_k \partial x_i} \frac{\partial^2\psi}{\partial x_j \partial x_k} + \frac{\partial^2 p_m}{\partial x_j \partial \xi_k} \frac{\partial^2\psi}{\partial x_i \partial x_k} + \frac{\partial^2 p_m}{\partial \xi_l \partial \xi_k} \frac{\partial^2\psi}{\partial x_i \partial x_k} \frac{\partial^2\psi}{\partial x_j \partial x_l} + \frac{\partial p_m}{\partial \xi_k} \frac{\partial^3\psi}{\partial x_j \partial x_k \partial x_i} \qquad (3.5) $$

to hold along $\Gamma$ for $i, j = 0, 1, \dots, n$. Introducing the matrices

$$ M(s)_{ij} = \frac{\partial^2\psi}{\partial x_i \partial x_j}(x(s)), \qquad A(s)_{ij} = \frac{\partial^2 p_m}{\partial x_i \partial x_j}(x(s), \xi(s)), $$
$$ B(s)_{ij} = \frac{\partial^2 p_m}{\partial \xi_i \partial x_j}(x(s), \xi(s)), \qquad C(s)_{ij} = \frac{\partial^2 p_m}{\partial \xi_i \partial \xi_j}(x(s), \xi(s)), $$

one can rewrite the second order condition (3.5) as the matrix equation

$$ 0 = A + MB + B^T M + MCM + \frac{dM}{ds}. \qquad (3.6) $$

Note that the matrices $A(s)$, $B(s)$ and $C(s)$ are known, as both the principal symbol $p_m$ and the null bicharacteristic curve $(x(s), \xi(s))$ are given. Thus, in order to have $p_m(x, \nabla\psi(x))$ vanish to second order on $\Gamma$, we need to construct $\psi$ such that the matrix $M(s)$ satisfies the non-linear ordinary differential equation (3.6) globally on $\Gamma$. This turns out to be possible due to (3.6) being a Riccati equation for the matrix $M(s)$. Moreover, when solving (3.6) for the matrix $M(s)$, we want the solution $M(s)$ to be symmetric, we need $M(s)\dot{x}(s) = \dot{\xi}(s)$ to hold true for all $s \in \mathbb{R}$, and due to the desired condition (b) on $\psi$ we also want $\operatorname{Im} M(s)$ to be positive definite on the orthogonal complement of $\dot{x}(s)$. As we see below, it will be enough to ensure that $M(0)$ has these three properties.

To construct a solution to (3.6), we start by choosing matrix solutions to the linear system

$$ \dot{Y} = BY + CN, \qquad \dot{N} = -AY - B^T N. \qquad (3.7) $$

By linearity, there exists a unique global solution $(Y(s), N(s))$ to this system of ordinary differential equations for any initial data $(Y(0), N(0))$. Furthermore, if $Y(s)$ is invertible around $s = s_0$, then $NY^{-1}$ is a solution to (3.6) around $s_0$. This follows from the calculation

$$ \frac{d}{ds}\left( NY^{-1} \right) = \dot{N}Y^{-1} - NY^{-1}\dot{Y}Y^{-1} = -A - B^T NY^{-1} - NY^{-1}B - NY^{-1}CNY^{-1}. $$
The remarkable property of the Gaussian beam construction is that one can choose the initial data $(Y(0), N(0))$ so that $Y(s)$ is invertible for all $s$. Let $M$ be a symmetric matrix such that $\operatorname{Im} M$ is positive definite on the orthogonal complement of $\dot{x}(0)$ and such that $M\dot{x}(0) = \dot{\xi}(0)$. In the following sequence of lemmas, we establish that if one chooses $(I, M)$ as initial data, then $Y(s)$ is invertible for all $s$ and $M(s) = N(s)Y^{-1}(s)$ inherits the three desired properties from $M$. In particular, note that we have $M(0) = N(0)Y^{-1}(0) = M$.
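The passage from the Riccati equation (3.6) to the linear system (3.7) can be exercised numerically. The coefficient matrices below are an assumed example (real, with $A$ and $C$ symmetric, and $\operatorname{Im} M_0 = I$ positive definite on all of $\mathbb{C}^2$, which sidesteps the special $\dot{x}$ direction); the sketch checks that $M(s) = N(s)Y(s)^{-1}$ stays symmetric with positive definite imaginary part and satisfies (3.6):

```python
import numpy as np

# Illustrative real coefficient matrices with A and C symmetric (assumed
# example, not derived from a specific operator).
A = np.array([[1.0, 0.3], [0.3, -0.5]])
B = np.array([[0.2, -0.1], [0.4, 0.0]])
C = np.array([[0.7, 0.2], [0.2, 1.1]])
M0 = 1j * np.eye(2)          # symmetric, Im M0 positive definite

def YN(s, steps=4000):
    # Integrate the linear system (3.7): Y' = B Y + C N, N' = -A Y - B^T N,
    # with initial data (Y, N) = (I, M0), by classical RK4.
    Y, N = np.eye(2, dtype=complex), M0.copy()
    h = s / steps
    f = lambda Y, N: (B @ Y + C @ N, -A @ Y - B.T @ N)
    for _ in range(steps):
        k1 = f(Y, N)
        k2 = f(Y + h/2*k1[0], N + h/2*k1[1])
        k3 = f(Y + h/2*k2[0], N + h/2*k2[1])
        k4 = f(Y + h*k3[0], N + h*k3[1])
        Y = Y + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        N = N + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return Y, N

def M(s):
    Y, N = YN(s)
    return N @ np.linalg.inv(Y)   # Y stays invertible, as in the lemmas below

Ms = M(3.0)
```

The conservation argument of the following lemmas is what guarantees that the asserted properties persist for all $s$, not just near $s = 0$.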

Lemma 3.2. Let $(Y(s), N(s))$ be the solution to (3.7) with initial data $(Y(0), N(0)) = (I, M)$. Then it holds true that

$$ (\dot{x}(s), \dot{\xi}(s)) = (Y(s)\dot{x}(0), N(s)\dot{x}(0)). $$

Proof. Using that $(x(s), \xi(s))$ is a bicharacteristic curve, one computes

$$ \frac{d}{ds}\dot{x}_i = \frac{\partial^2 p_m}{\partial x_j \partial \xi_i}\dot{x}_j + \frac{\partial^2 p_m}{\partial \xi_j \partial \xi_i}\dot{\xi}_j = B_{ij}\dot{x}_j + C_{ij}\dot{\xi}_j $$

as well as

$$ \frac{d}{ds}\dot{\xi}_i = -\frac{\partial^2 p_m}{\partial x_j \partial x_i}\dot{x}_j - \frac{\partial^2 p_m}{\partial \xi_j \partial x_i}\dot{\xi}_j = -A_{ij}\dot{x}_j - B^T_{ij}\dot{\xi}_j. $$

Thus, $(\dot{x}(s), \dot{\xi}(s))$ and $(Y(s)\dot{x}(0), N(s)\dot{x}(0))$ solve the same linear system of ordinary differential equations. The equality of the two curves follows because we additionally have $Y(0)\dot{x}(0) = \dot{x}(0)$ and

$$ N(0)\dot{x}(0) = M\dot{x}(0) = \dot{\xi}(0) $$

due to our choice of $M$.

For the next lemmas, we need the symplectic form $\sigma(\chi_1, \chi_2)$ acting on pairs $\chi_1(s) = (y_1(s), \eta_1(s))$ and $\chi_2(s) = (y_2(s), \eta_2(s))$ of vector solutions to (3.7). The bilinear form $\sigma(\chi_1, \chi_2)$ is given by

$$ \sigma(\chi_1, \chi_2) = y_2 \cdot \eta_1 - y_1 \cdot \eta_2. $$

Besides, we need the complexified form

$$ \sigma_C(\chi_1, \chi_2) = \sigma(\chi_1, \bar{\chi}_2). $$

Proposition 3.3. If $\chi_1$ and $\chi_2$ are vector solutions to (3.7), then both the symplectic form $\sigma(\chi_1, \chi_2)$ and the complexified form $\sigma_C(\chi_1, \chi_2)$ are constant in $s$.

Proof. The proof relies on the observation that the entries of $A$, $B$ and $C$ are real and that $A$ and $C$ are symmetric. Differentiating the symplectic form with respect to $s$ yields

$$ \frac{d}{ds}\sigma(\chi_1, \chi_2) = \frac{d}{ds}\left( y_2 \cdot \eta_1 - y_1 \cdot \eta_2 \right) = \dot{y}_2 \cdot \eta_1 + y_2 \cdot \dot{\eta}_1 - \dot{y}_1 \cdot \eta_2 - y_1 \cdot \dot{\eta}_2 $$
$$ = By_2 \cdot \eta_1 + C\eta_2 \cdot \eta_1 - y_2 \cdot Ay_1 - y_2 \cdot B^T\eta_1 - By_1 \cdot \eta_2 - C\eta_1 \cdot \eta_2 + y_1 \cdot Ay_2 + y_1 \cdot B^T\eta_2, $$

which equals zero due to

$$ y_1 \cdot Ay_2 = y_1^T A y_2 = y_1^T A^T y_2 = Ay_1 \cdot y_2 \quad \text{and} \quad C\eta_2 \cdot \eta_1 = C\eta_1 \cdot \eta_2, $$

using the symmetry of $A$ and $C$, as well as $By_2 \cdot \eta_1 = y_2 \cdot B^T\eta_1$ and $By_1 \cdot \eta_2 = y_1 \cdot B^T\eta_2$. By using that the entries of $A$, $B$ and $C$ are real, one similarly proves the constancy of the complexified form.

Lemma 3.4. Let $(Y(s), N(s))$ be the solution to (3.7) with initial data $(Y(0), N(0)) = (I, M)$. Then $Y(s)$ is invertible for all $s$.

Proof. Let $s_0$ be arbitrary and suppose $Y(s_0)a = 0$ for some vector $a \in \mathbb{C}^{n+1}$. The aim is to deduce that $a$ must then be the zero vector, as this will imply that $Y(s_0)$ is invertible.

Let us consider $\chi(s) = (y(s), \eta(s)) = (Y(s)a, N(s)a)$, which has to be a vector solution of (3.7) because $(Y(s), N(s))$ is a solution to (3.7) and $a$ is constant. From the conservation of the complexified form it follows that

$$ 0 = \sigma_C(\chi(s_0), \chi(s_0)) = \sigma_C(\chi(0), \chi(0)) = \bar{y}(0) \cdot \eta(0) - y(0) \cdot \bar{\eta}(0) = \bar{a} \cdot Ma - a \cdot \bar{M}\bar{a} = 2i \, \bar{a} \cdot (\operatorname{Im} M) \, a. $$

By assumption, $\operatorname{Im} M$ is positive definite on the orthogonal complement of $\dot{x}(0)$, and therefore the last equation implies that $a = \beta\dot{x}(0)$ for some constant $\beta \in \mathbb{C}$. By using Lemma 3.2, we further deduce that

$$ 0 = Y(s_0)a = \beta Y(s_0)\dot{x}(0) = \beta\dot{x}(s_0). $$

Previously, we established that $\dot{x}(s_0) \neq 0$, and so it follows that $\beta = 0$. Hence $a = 0$ and the matrix $Y(s_0)$ is indeed invertible.

Thus, if $(Y(s), N(s))$ is the solution to (3.7) with initial data $(Y(0), N(0)) = (I, M)$, then $M(s) = N(s)Y^{-1}(s)$ is well-defined and therefore a global solution to (3.6). It remains to prove that $M(s)$ has all the desired properties mentioned above. One of them is an immediate conclusion from Lemma 3.2, as we have

$$ \dot{\xi}(s) = N(s)\dot{x}(0) = M(s)Y(s)\dot{x}(0) = M(s)\dot{x}(s) $$

for all $s \in \mathbb{R}$. The other two properties are covered by the following two lemmas.

Lemma 3.5. Let $(Y(s), N(s))$ be the solution to (3.7) with initial data $(Y(0), N(0)) = (I, M)$. Then $M(s) = N(s)Y^{-1}(s)$ is a symmetric matrix for all $s \in \mathbb{R}$.

Proof. The proof mainly uses the constancy of the symplectic form. Let $y_i(s)$, $0 \le i \le n$, denote the column vectors of $Y(s)$, let $\eta_i(s)$, $0 \le i \le n$, denote those of $N(s)$, and let $\chi_i(s) = (y_i(s), \eta_i(s))$. By construction of $M(s)$, we have $\eta_i(s) = M(s)y_i(s)$ and therefore, for any $i, j \in \{0, 1, \dots, n\}$, it holds true that

$$ \sigma(\chi_i(s), \chi_j(s)) = y_j(s) \cdot \eta_i(s) - y_i(s) \cdot \eta_j(s) = y_j(s) \cdot M(s)y_i(s) - y_i(s) \cdot M(s)y_j(s). $$

Due to the symmetry of $M(0) = M$ and the constancy of the symplectic form, it follows that

$$ y_j(s) \cdot M(s)y_i(s) - y_i(s) \cdot M(s)y_j(s) = \sigma(\chi_i(s), \chi_j(s)) = \sigma(\chi_i(0), \chi_j(0)) = y_j(0) \cdot M(0)y_i(0) - y_i(0) \cdot M(0)y_j(0) = 0. $$

On the other hand, by Lemma 3.4 we know that the vectors $y_i(s)$, $0 \le i \le n$, form a basis of $\mathbb{C}^{n+1}$, and hence the latter equation implies that $M(s)$ is symmetric for all $s \in \mathbb{R}$.

Lemma 3.6. Let $M(s)$ be given as in the previous lemma.
Then for all $s \in \mathbb{R}$ the matrix $\operatorname{Im} M(s)$ is positive definite on the orthogonal complement of $\dot{x}(s)$.

Proof. This proof relies on the conservation of the complexified form $\sigma_C$. Fix $s_0 \in \mathbb{R}$ and let $y(s_0)$ be an arbitrary non-zero vector in the orthogonal complement of $\dot{x}(s_0)$. Due to Lemma 3.4 there exist $b_0, b_1, \dots, b_n \in \mathbb{C}$ such that

$$ y(s_0) = \sum_{i=0}^{n} b_i \, y_i(s_0), $$

where $y_i(s)$, $0 \le i \le n$, are the column vectors of $Y(s)$. Similarly, for $\eta_i(s)$, $0 \le i \le n$, being the column vectors of $N(s)$, we introduce

$$ \chi(s) = \sum_{i=0}^{n} b_i \, (y_i(s), \eta_i(s)). $$

As before, one can compute that

$$ \sigma_C(\chi(s), \chi(s)) = 2i \, \bar{y}(s) \cdot (\operatorname{Im} M(s)) \, y(s) $$

for $y(s) = \sum_{i=0}^{n} b_i y_i(s)$. Moreover, if $y(0)$ were of the form $y(0) = \beta\dot{x}(0)$ for some constant $\beta$, then we would get $y(s_0) = \beta\dot{x}(s_0)$ as a consequence of Lemma 3.2. However, this contradicts our assumption that $y(s_0)$ is a non-zero vector in the orthogonal complement of $\dot{x}(s_0)$. Thus, the vector $y(0)$ cannot be parallel to $\dot{x}(0)$. Since $\operatorname{Im} M(0) = \operatorname{Im} M$ is positive definite on the orthogonal complement of $\dot{x}(0)$, it follows that

$$ \bar{y}(0) \cdot (\operatorname{Im} M(0)) \, y(0) > 0, $$

and hence, by the constancy of the complexified form,

$$ 2i \, \bar{y}(s_0) \cdot (\operatorname{Im} M(s_0)) \, y(s_0) = \sigma_C(\chi(s_0), \chi(s_0)) = \sigma_C(\chi(0), \chi(0)) = 2i \, \bar{y}(0) \cdot (\operatorname{Im} M(0)) \, y(0) > 0. $$

As $y(s_0)$ was an arbitrary vector orthogonal to $\dot{x}(s_0)$, we deduce that $\operatorname{Im} M(s)$ is indeed positive definite on the orthogonal complement of $\dot{x}(s)$ for all $s \in \mathbb{R}$.

In conclusion, provided that $\operatorname{Im}\left(\frac{\partial^2\psi}{\partial x_i \partial x_j}\right)(x(0))$ is positive definite on the orthogonal complement of $\dot{x}(0)$, we can make $p_m(x, \nabla\psi(x))$ vanish to second order on $\Gamma$. This completes the crucial part of the construction of the phase $\psi$.

By all means, we may want to require that $f(x) = p_m(x, \nabla\psi(x))$ vanishes on $\Gamma$ to higher order than two. However, it turns out that this gives rise to linear ordinary differential equations, which are easier to deal with than the second order condition (3.5). More precisely, for any multi-index $\alpha$ of length $r$, the equations $0 = \partial^\alpha f$ along $\Gamma$ are of the form

$$ 0 = \frac{\partial p_m}{\partial \xi_j} \frac{\partial}{\partial x_j}\left( \partial_x^\alpha \psi \right) + \sum_{|\beta| = r} c_{\alpha\beta} \, \partial_x^\beta \psi + d_\alpha, \qquad (3.8) $$

cf. [2], where the coefficients $c_{\alpha\beta}$ and $d_\alpha$ depend on the partial derivatives of $\psi$ up to order $r - 1$. Since

$$ \frac{\partial p_m}{\partial \xi_j} \frac{\partial}{\partial x_j}\left( \partial_x^\alpha \psi \right) = \frac{d}{ds}\left( \partial_x^\alpha \psi \right) $$

along $\Gamma$, we can solve the equations (3.8) as a linear system of ordinary differential equations in $s$. By linearity, there exists a unique global solution to this system for any initial data. Thus, it suffices to prescribe $\partial^\alpha\psi(x(0))$ for $|\alpha| = r$ to get the $r$-th order partial derivatives of $\psi$ on the whole curve $\Gamma$ which make $p_m(x, \nabla\psi(x))$ vanish to $r$-th order. We only need to take care of two things. Firstly, due to the dependency of $c_{\alpha\beta}$ and $d_\alpha$ on lower order partial derivatives, we need to determine the partials of $\psi$ recursively.
Secondly, we need to ensure that they satisfy compatibility conditions such as

$$ \frac{\partial^3\psi}{\partial x_i \partial x_j \partial x_k}(x(s)) \, \dot{x}_k(s) = \frac{d}{ds}\left( \frac{\partial^2\psi}{\partial x_i \partial x_j}(x(s)) \right). $$

However, it suffices to choose the partial derivatives of $\psi$ to be compatible at $x(0)$, as they will then stay compatible for all $s$. Overall, we have established that we can make $p_m(x, \nabla\psi(x))$ vanish to arbitrary finite order on $\Gamma$ provided that

$$ \nabla\psi(x(s)) = \xi(s), $$

where xs, ξs is the null bicharacteristic curve corresponding to Γ, as well as that Im 2 ψ i j x0 is positive definite on the orthogonal complement of ẋ0. Throughout the remainder of this section, assume that p m x, ψx vanishes to finite order R on Γ. Having finished the construction of the phase ψ, we still need to determine the Taylor series of a 0 x, a 1 x,..., a N x along the curve Γ. By going back to our ansatz 3.1, we see that the only powers of k which Px, Du can contain are k m, k m 1,..., k N+1, k N. Thus, Px, Du is of the form N Px, Du = c s xk s e ikψx s= m for coefficients c m x, c m+1 x,..., c N x. We already saw that c m x = p m x, ψ a 0 x. To determine c m+1 x, we need to have a look at how Ok m 1 terms occur in Px, Du. One such term simply arises from the order m 1 terms in Px, D, i.e. one contribution to c m+1 x is p m 1 x, ψ a 0 x, where p m 1 is the symbol for the terms of order m 1 in Px, D. Similar to c m, another term in c m+1 is given by p m x, ψ a 1 x. The last two terms in c m+1 arise from all the order k terms in Px, D acting on the a 0 x e ikψx part of ux, k. To get a term of Ok m 1 we need k 1 of the x-derivatives to act on e ikψx with the other derivative acting on the product of a 0 and terms obtained by differentiating e ikψx with respect to x. In total, one gets pm a0 c m+1 x = 1 i which is of the form ξ j x, ψ j 1 + 2i 2 p m ξ j ξ k + p m x, ψ x, ψ a 1 c m+1 x = L a 0 + p m x, ψ a 1. Similarly, it is possible to show that c m+r+1 x = L a r + p m x, ψ a r+1 + g r, 2 ψ + p m 1 x, ψ a 0 3.9 j k r = 1,..., N + m where g r is a function depending on ψ, a 0,..., a r 1 and their derivatives and where a r 0 for r > N. Thus, determining the partials of a 0, a 1,..., a N along Γ again reduces to solving linear systems of ordinary differential equations. From 3.9 we deduce that whenever p m x, ψ vanishes to order R on Γ we can choose the Taylor series of a 0 on Γ up to order R 2 in such a way that c m+1 x vanishes to order R 2 on Γ. 
Similarly, we can choose the Taylor series of $a_{r-1}(x)$ so that $c_{-m+r}(x)$ vanishes to order $R-2r$ on $\Gamma$.

This nearly concludes the construction of Gaussian beams. As mentioned right at the beginning of the section, it only remains to multiply $u(x,k)$ by a $k$-independent function which vanishes outside a small neighbourhood $O$ of $\Gamma$ but is identically one on a smaller neighbourhood of $\Gamma$. This ensures that $u(x,k)$ really does become concentrated on $\Gamma$ as $k \to \infty$. Even though we have now finished the construction of Gaussian beams, we still need to justify that we actually met our original aim (3.2), i.e. we still need to show that $u(x,k)$ is indeed an asymptotic solution of the partial differential equation $P(x,D)u = 0$. For this, we make use of the following lemma.

Lemma 3.7 Let $T > 0$ be given and let $c(x)$ be a function on $\mathbb{R}^{n+1}$ which vanishes to order $S-1$ on the curve $\Gamma$, for some $S \geq 2$. Suppose both that $\operatorname{supp} c \cap \{ |x_0| \leq T \}$ is compact and that $\operatorname{Im}\psi(x) \geq a\, d^2(x)$ on this set for some constant $a > 0$, where $d(x)$ denotes the distance from the point $x \in \mathbb{R}^{n+1}$ to $\Gamma$. Then there exists a constant $C$ such that

$$\int_{|x_0| \leq T} \bigl| c(x)\, e^{ik\psi(x)} \bigr|^2\, dx \leq C\, k^{-S-n/2}.$$

Proof. In a neighbourhood of $\Gamma \cap \{ |x_0| \leq T \}$ we can choose $k$-independent coordinates $z_0, z_1, \dots, z_n$ such that the curve $\Gamma$ is parametrised by $z_0 = s,\ z_1 = 0, \dots, z_n = 0$ and such that we also have

$$d^2(x(z)) \geq z_1^2 + \dots + z_n^2. \qquad (3.10)$$

Using these coordinates, we then introduce the $k$-dependent coordinates $y_0, y_1, \dots, y_n$ given by $y_0 = z_0,\ y_1 = k^{1/2} z_1, \dots, y_n = k^{1/2} z_n$. From (3.10) as well as our assumption on $\operatorname{Im}\psi(x)$ we deduce that

$$\bigl| \exp\bigl( ik\psi( x(y_0, k^{-1/2}y_1, \dots, k^{-1/2}y_n) ) \bigr) \bigr| \leq \exp\bigl( -ka\, d^2( x(y_0, k^{-1/2}y_1, \dots, k^{-1/2}y_n) ) \bigr) \leq \exp\bigl( -a( y_1^2 + \dots + y_n^2 ) \bigr).$$

Finally, we want to change from $x$ to $y$ variables in the integral $\int_{|x_0| \leq T} | c(x)\, e^{ik\psi(x)} |^2\, dx$. Since the Jacobian of the transformation from $x$ to $z$ coordinates is independent of $k$, whereas the Jacobian for changing variables from $z$ to $y$ equals $k^{-n/2}$, it follows that the new integrand is bounded above by

$$C k^{-n/2} \bigl| c( x(y_0, k^{-1/2}y_1, \dots, k^{-1/2}y_n) ) \bigr|^2 e^{-2a(y_1^2 + \dots + y_n^2)} \leq C k^{-S-n/2} \bigl| k^{S/2}\, c( x(y_0, k^{-1/2}y_1, \dots, k^{-1/2}y_n) ) \bigr|^2 e^{-2a(y_1^2 + \dots + y_n^2)}$$

for some constant $C$.
However, by assumption $c$ vanishes to order $S-1 \geq 1$ on $\Gamma$ and $\operatorname{supp} c \cap \{ |x_0| \leq T \}$ is compact. Therefore, $k^{S/2}\, c( x(y_0, k^{-1/2}y_1, \dots, k^{-1/2}y_n) )$ remains bounded on $\operatorname{supp} c \cap \{ |x_0| \leq T \}$ as $k \to \infty$, and the estimate of the lemma follows.

By a repeated application of Lemma 3.7, we are now able to estimate the Sobolev $s$-norm $\|Pu\|_s$ of $P(x,D)u$ on $\{ |x_0| \leq T \}$. Since $c_{-m+r}(x)$, $r = 0, \dots, N+m$, vanishes to order $R-2r$ on $\Gamma$, the lemma yields

$$\int_{|x_0| \leq T} \bigl| c_{-m+r}(x)\, k^{m-r}\, e^{ik\psi(x)} \bigr|^2\, dx \leq C k^{2(m-r)}\, k^{-(R+1-2r)-n/2} = C k^{2m-(R+1)-n/2},$$

provided we choose the neighbourhood $O$ which was introduced above so that $\operatorname{Im}\psi(x) \geq a\, d^2(x)$ for $x \in O \cap \{ |x_0| \leq T \}$. By further using the inequality $(w_1 + w_2 + \dots + w_l)^2 \leq l\,( w_1^2 + w_2^2 + \dots + w_l^2 )$, we

deduce that

$$\|Pu\|_0 = \Bigl( \int_{|x_0| \leq T} |Pu|^2\, dx \Bigr)^{1/2} = \Bigl( \int_{|x_0| \leq T} \Bigl| \sum_{j=-m}^{N} c_j(x)\, k^{-j}\, e^{ik\psi(x)} \Bigr|^2 dx \Bigr)^{1/2} \leq \Bigl( (N+m+1) \sum_{j=-m}^{N} \int_{|x_0| \leq T} \bigl| c_j(x)\, k^{-j}\, e^{ik\psi(x)} \bigr|^2 dx \Bigr)^{1/2} \leq D\, k^{m-(R+1)/2-n/4}$$

for some $k$-independent constant $D$. Noting that differentiating $P(x,D)u$ with respect to $x$ only multiplies the coefficients $c_j(x)$ by $k$ or decreases the order to which they vanish on $\Gamma$ by at most one, we similarly prove that

$$\|Pu\|_s \leq D\, k^{m+s-(R+1)/2-n/4}. \qquad (3.11)$$

Thus, if we choose $R$ large enough, then we can indeed achieve our aim (3.2). We end the general discussion with a remark whose importance will become clear in the section on propagation of singularities.

Remark 3.8 By going back to the Gaussian beam construction, one sees that it is possible to choose the phase $\psi$ depending smoothly on $(x(0), \xi(0))$ as well as on its Taylor series at $x(0)$ up to order $R$, in a way which makes $p_m(x, \nabla\psi)$ vanish to order $R$ on $\Gamma \cap \{ |x_0| < T \}$. Similarly, each $a_{r-1}(x)$ can be chosen as a smooth function of its Taylor series at $x(0)$ to order $R-2r$, of the Taylor series of $a_0(x), \dots, a_{r-2}(x)$ at $x(0)$ up to orders $R-2, \dots, R-2(r-1)$, respectively, and of the Taylor series of $\psi$ at $x(0)$ to order $R$. Finally, one can also establish that the constant $C$ in Lemma 3.7 is uniform both in $(x(0), \xi(0))$ and in the Taylor series of $\psi$ and $a_{r-1}$ up to orders $R$ and $R-2r$, respectively, provided that all the data in consideration lie in a bounded set and that we have a uniform lower bound on the positive definiteness of $\operatorname{Im}\bigl( \frac{\partial^2 \psi}{\partial x_i \partial x_j} \bigr)(x(0))$ on the orthogonal complement of $\dot{x}(0)$.

3.1 The construction for the wave equation

Having discussed the construction of Gaussian beams for a general strictly hyperbolic partial differential operator $P(x,D)$, we conclude this section by demonstrating the construction for the two-dimensional wave equation

$$\Box u = -\frac{\partial^2 u}{\partial t^2} + \frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} = 0.$$

To be consistent with the notation used in the first part of this section, we again replace $t$ by $x_0$. The corresponding principal symbol is then

$$p_2(x, \xi) = -\xi_0^2 + \xi_1^2 + \xi_2^2.$$
First, we note that the wave equation is indeed strictly hyperbolic: for any fixed $(\xi_1, \xi_2) \neq 0$ the polynomial $g(\xi_0) = -\xi_0^2 + \xi_1^2 + \xi_2^2$ has two distinct real roots, as required. Thus, we can construct Gaussian beams along its null bicharacteristic curves. They are given as solutions of

$$\dot{x}_0 = -2\xi_0, \qquad \dot{x}_i = 2\xi_i \ (i = 1, 2), \qquad \dot{\xi} = 0,$$

subject to the additional condition $p_2(x(s), \xi(s)) = 0$. From $\dot{\xi} = 0$ we deduce that $\xi_0$, $\xi_1$ and $\xi_2$ need to be constant along any bicharacteristic curve, which further implies that $x(s)$ is of the form $x(s) = x(0) + s\,(-2\xi_0, 2\xi_1, 2\xi_2)$. For instance, the curve

$$(x(s), \xi(s)) = \bigl( (s, 0, s),\ (-\tfrac12, 0, \tfrac12) \bigr)$$

is a bicharacteristic which is clearly null. We now restrict our attention to this specific null bicharacteristic curve and construct a Gaussian beam along its projection $\Gamma$, which is given by $(s, 0, s)$. Let us make the ansatz

$$u(x, k) = e^{ik\psi(x)}\, a_0(x) \qquad (3.12)$$

and, as in the general construction, start by determining conditions on the phase $\psi$. We have already ensured that $(x(s), \xi(s))$ is a null bicharacteristic curve. Furthermore, we need to satisfy $\nabla\psi(x(s)) = \xi(s)$, i.e.

$$\nabla\psi(s, 0, s) = \bigl( -\tfrac12, 0, \tfrac12 \bigr), \qquad (3.13)$$

as well as condition (3.6), which reduces to

$$0 = M C M + \frac{dM}{ds} \qquad \text{for } C = \begin{pmatrix} -2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}.$$

To solve the latter for $M(s)_{ij} = \frac{\partial^2 \psi}{\partial x_i \partial x_j}(x(s))$, we first need to choose an appropriate symmetric matrix $M(0)$. On the one hand, it needs to obey $M(0)\,\dot{x}(0) = \dot{\xi}(0)$, i.e.

$$M(0) \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},$$

and on the other hand, we also want $\operatorname{Im} M(0)$ to be positive definite on the orthogonal complement of $\dot{x}(0) = (1, 0, 1)$, which is spanned by $(0, 1, 0)$ and $(1, 0, -1)$. It is straightforward to check that

$$M(0) = \begin{pmatrix} bi & 0 & -bi \\ 0 & ai & 0 \\ -bi & 0 & bi \end{pmatrix}$$

is an admissible choice provided the constants $a$ and $b$ are both positive. To find the matrix $M(s)$ it then remains to solve the linear system $\dot{Y} = CN$, $\dot{N} = 0$ with initial data $(Y(0), N(0)) = (I, M(0))$. From $\dot{N} = 0$ we get $N(s) = N(0) = M(0)$. Plugging this into the first differential equation gives $\dot{Y} = CM(0)$, whose solution is $Y(s) = Y(0) + sCM(0) = I + sCM(0)$. Thus, we obtain

$$M(s) = N(s)\, Y^{-1}(s) = M(0)\, \bigl( I + sCM(0) \bigr)^{-1}. \qquad (3.14)$$
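As a sanity check, the closed-form solution (3.14) can be verified numerically. The sketch below (with the hypothetical values $a = 1$, $b = 2$) confirms that $M(s)$ solves the Riccati equation $\dot{M} + MCM = 0$ and that $\operatorname{Im} M(s)$ stays positive definite on the orthogonal complement of $\dot{x} = (1, 0, 1)$:

```python
import numpy as np

# Hypothetical parameter values a, b > 0 for the matrix M(0) from the text.
a, b = 1.0, 2.0
C = np.diag([-2.0, 2.0, 2.0])                # Hessian of p2 in xi
M0 = np.array([[b * 1j, 0, -b * 1j],
               [0, a * 1j, 0],
               [-b * 1j, 0, b * 1j]])

def M(s):
    # closed-form solution (3.14)
    return M0 @ np.linalg.inv(np.eye(3) + s * C @ M0)

# Riccati residual dM/ds + M C M via a central difference at s = 0.7
s, h = 0.7, 1e-6
dM = (M(s + h) - M(s - h)) / (2 * h)
residual = np.abs(dM + M(s) @ C @ M(s)).max()

# positive definiteness of Im M(s) on span{(0,1,0), (1,0,-1)}
V = np.array([[0, 1.0, 0], [1.0, 0, -1.0]]).T
eigs = np.linalg.eigvalsh(V.T @ M(s).imag @ V)
print(residual, eigs)
```

The residual is at finite-difference rounding level, and both restricted eigenvalues of $\operatorname{Im} M(s)$ are strictly positive, as the construction requires.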

From the general discussion, we know that $p_2(x, \nabla\psi(x))$ will vanish to at least second order on $\Gamma$ if we choose the phase $\psi$ such that (3.13) and (3.14) are satisfied. An example of a function $\psi$ with the proper first and second partial derivatives on $(s, 0, s)$ is

$$\psi(x_0, x_1, x_2) = \frac{x_2 - x_0}{2} + \frac{a^2 x_0 x_1^2}{1 + 4a^2 x_0^2} + i \left( \frac{a x_1^2}{2(1 + 4a^2 x_0^2)} + \frac{b (x_2 - x_0)^2}{2} \right). \qquad (3.15)$$

In fact, one can check that with this choice of phase, $p_2(x, \nabla\psi(x))$ vanishes even to third order on $\Gamma$. We now still need to determine the $a_0(x)$ in our ansatz (3.12). As in the general construction, we use $c_{-2+1}(x) = c_{-1}(x)$ to find a function $a_0(x)$ which works. For $j = 0, 1, 2$ we compute

$$\frac{\partial u}{\partial x_j} = e^{ik\psi}\, \frac{\partial a_0}{\partial x_j} + ik\, \frac{\partial \psi}{\partial x_j}\, e^{ik\psi}\, a_0$$

and

$$\frac{\partial^2 u}{\partial x_j^2} = e^{ik\psi}\, \frac{\partial^2 a_0}{\partial x_j^2} + 2ik\, \frac{\partial \psi}{\partial x_j}\, \frac{\partial a_0}{\partial x_j}\, e^{ik\psi} + ik\, \frac{\partial^2 \psi}{\partial x_j^2}\, e^{ik\psi}\, a_0 - k^2 \left( \frac{\partial \psi}{\partial x_j} \right)^2 e^{ik\psi}\, a_0.$$

Collecting the terms in front of $k$ then yields

$$c_{-1}(x) = i \sum_{j=0}^{2} \varepsilon_j \left( 2\, \frac{\partial \psi}{\partial x_j}\, \frac{\partial a_0}{\partial x_j} + \frac{\partial^2 \psi}{\partial x_j^2}\, a_0 \right), \qquad (\varepsilon_0, \varepsilon_1, \varepsilon_2) = (-1, 1, 1),$$

where the signs $\varepsilon_j$ come from applying $\Box = -\partial_0^2 + \partial_1^2 + \partial_2^2$. Due to $\nabla\psi(s, 0, s) = (-\tfrac12, 0, \tfrac12)$ as well as

$$\frac{da_0}{ds}(s, 0, s) = \frac{\partial a_0}{\partial x_0}(s, 0, s) + \frac{\partial a_0}{\partial x_2}(s, 0, s),$$

it follows that $c_{-1}(x)$ vanishes on $(s, 0, s)$ if and only if

$$\frac{da_0}{ds} + (\Box\psi)\, a_0 = 0 \qquad (3.16)$$

holds on $\Gamma$. Since $\Box\psi = ia\,(1 + 2ais)^{-1}$ on $\Gamma$ for the phase $\psi$ as given in (3.15), we can use separation of variables to solve (3.16) for $a_0(x(s))$. If we take $a_0(x(0)) = 1$, this simply yields

$$a_0(x(s)) = (1 + 2ais)^{-1/2}, \qquad (3.17)$$

where the branch of the square root is chosen so that we really do have $a_0(x(0)) = 1$. This fixes the function $a_0(x)$ on all of $\Gamma$. However, we need $a_0(x)$ to be defined globally, or at least on a neighbourhood of $\Gamma$. A choice of $a_0(x)$ which is consistent with (3.17) is

$$a_0(x_0, x_1, x_2) = (1 + 2aix_0)^{-1/2}.$$

One can check that this makes $c_{-1}(x)$ vanish to first order on $\Gamma$, which is exactly what we need, as $p_2(x, \nabla\psi(x))$ vanishes to third order on $(s, 0, s)$. Finally, from (3.11) it follows immediately that

$$\bigl\| P(x,D)\bigl( e^{ik\psi(x)}\, a_0(x) \bigr) \bigr\|_0 \leq D\, k^{2 + 0 - (3+1)/2 - 2/4} = D\, k^{-1/2} = O(k^{-1/2}).$$

Thus, we have found functions $\psi(x)$ and $a_0(x)$ turning (3.12) indeed into an asymptotic solution of the wave equation which becomes concentrated on the curve $(s, 0, s)$ as $k \to \infty$.
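The explicit phase (3.15) and amplitude (3.17) can also be checked numerically. The sketch below (hypothetical values $a = 1$, $b = 2$, $s = 0.8$) verifies $\nabla\psi(s,0,s) = (-\tfrac12, 0, \tfrac12)$, the eikonal identity $p_2(x, \nabla\psi) = 0$ on $\Gamma$, and the transport equation (3.16):

```python
import numpy as np

a, b = 1.0, 2.0   # hypothetical values of the free constants a, b > 0

def psi(x0, x1, x2):
    # the explicit phase (3.15)
    return ((x2 - x0) / 2
            + a**2 * x0 * x1**2 / (1 + 4 * a**2 * x0**2)
            + 1j * (a * x1**2 / (2 * (1 + 4 * a**2 * x0**2))
                    + b * (x2 - x0)**2 / 2))

def grad_psi(x, h=1e-6):
    # central-difference gradient of the (complex-valued) phase
    g = []
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g.append((psi(*(x + e)) - psi(*(x - e))) / (2 * h))
    return np.array(g)

s = 0.8
g = grad_psi(np.array([s, 0.0, s]))
p2 = -g[0]**2 + g[1]**2 + g[2]**2         # principal symbol at grad psi

# transport equation (3.16): d a0/ds + (box psi) a0 = 0,
# with box psi = i a (1 + 2 i a s)^{-1} on Gamma
a0 = lambda s: (1 + 2j * a * s)**(-0.5)
h = 1e-6
transport = ((a0(s + h) - a0(s - h)) / (2 * h)
             + 1j * a / (1 + 2j * a * s) * a0(s))
print(g, p2, transport)
```

Both residuals come out at rounding level, confirming that the beam really is an approximate solution concentrated on $(s, 0, s)$.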

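The norm estimates above ultimately rest on the scaling of Lemma 3.7, which can be tested directly in a one-dimensional model. The following sketch uses hypothetical data $c(x) = x^3$ (vanishing to order $S-1 = 2$ at $\Gamma = \{0\}$) and $\psi = iax^2$ (so $\operatorname{Im}\psi = a\,d^2$ exactly), for which the lemma predicts $\int |c\, e^{ik\psi}|^2\, dx \approx C k^{-S-n/2}$ with $S = 3$, $n = 1$:

```python
import numpy as np

# One-dimensional model of Lemma 3.7: c(x) = x^S, psi = i a x^2, Gamma = {0}.
def beam_integral(k, S=3, a=1.0, L=5.0, N=200001):
    x = np.linspace(-L, L, N)
    # |c(x) exp(ik psi)|^2 = x^{2S} exp(-2 k a x^2)
    integrand = np.abs(x**S * np.exp(1j * k * 1j * a * x**2))**2
    return integrand.sum() * (x[1] - x[0])   # simple Riemann sum

k1, k2 = 100.0, 400.0
S, n = 3, 1
ratio = beam_integral(k1) / beam_integral(k2)
predicted = (k2 / k1)**(S + n / 2)           # = 4**3.5 = 128
print(ratio, predicted)
```

The measured ratio of the two integrals matches the predicted $k^{-S-n/2}$ scaling to high accuracy.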
4 Reflections

We have seen so far that we can construct a Gaussian beam on $\mathbb{R}^{n+1}$ satisfying the smallness criterion

$$\|Pu\|_s \leq C k^{m+s-(R+1)/2-n/4}. \qquad (4.1)$$

In this section we want to consider boundary effects. Let therefore $\Omega$ be a bounded domain in $\mathbb{R}^n$ with smooth boundary and $D := \mathbb{R} \times \Omega$. For $i = 1, \dots, l$, let $B_i$ be a linear differential operator of order $m_i$ with principal symbol $b_i$, and impose the boundary conditions

$$B_i u = 0 \qquad (4.2)$$

on $\partial D = \mathbb{R} \times \partial\Omega$. If a ray path $x(s)$ hits $\partial D$ at $x(s_0)$, then the Gaussian beam shall be reflected at $x(s_0)$ according to the boundary conditions. For the reflected Gaussian beam, we start with the ansatz

$$u = e^{ik\psi}\bigl( a_0 + \dots + a_N k^{-N} \bigr) + \sum_{j=1}^{l} e^{ik\psi_j}\bigl( a_0^j + \dots + a_N^j k^{-N} \bigr), \qquad (4.3)$$

where the first summand is given by the construction in the previous section. To carry out the construction for the reflected beam, we need to make two essential assumptions.

Assumption 1 (Non-grazing hypothesis) Let $\nu = (0, \bar\nu)$ denote the inner unit normal to $\partial D$ at $x(s_0)$. We assume that

$$t \mapsto p_m\bigl( x(s_0), \xi(s_0) + t\nu \bigr) \qquad (4.4)$$

has $m$ distinct roots in the complex plane. Note that $0$ is always a root because $(x(s_0), \xi(s_0))$ is a null bicharacteristic.

Example 4.1 When dealing with the wave operator it becomes evident why this assumption is called the non-grazing hypothesis. Letting $p_2$ denote the corresponding principal symbol and using the notation $\xi = (\xi_0, \bar\xi)$, we have

$$p_2\bigl( x(s_0), \xi(s_0) + t\nu \bigr) = -\xi_0(s_0)^2 + | \bar\xi(s_0) + t\bar\nu |^2 = p_2\bigl( x(s_0), \xi(s_0) \bigr) + 2t\, \bar\xi(s_0) \cdot \bar\nu + t^2 = t\,\bigl( t + 2\,\bar\xi(s_0) \cdot \bar\nu \bigr) \qquad (4.5, 4.6)$$

since $p_2(x(s), \xi(s)) = 0$ for all $s$. Thus the assumption is equivalent to $\bar\xi(s_0) \cdot \bar\nu \neq 0$, i.e. $\nu \cdot \dot{x}(s_0) \neq 0$, which does not allow for beams hitting the boundary tangentially.

Remark 4.2 For all operators $P$, the non-grazing hypothesis implies, but is not necessarily equivalent to,

$$\nu \cdot \dot{x}(s_0) = \nu \cdot \frac{\partial p_m}{\partial \xi}\bigl( x(s_0), \xi(s_0) \bigr) \neq 0. \qquad (4.7)$$

Denote by $\tau_i$, $i = 1, \dots, k_0$, the real roots of (4.4) satisfying

$$\left[ \nu \cdot \frac{\partial p_m}{\partial \xi} \Big/ \frac{\partial p_m}{\partial \xi_0} \right] \bigl( x(s_0), \xi(s_0) + \tau\nu \bigr) > 0. \qquad (4.8)$$

All strictly complex roots appear in conjugate pairs. Label the roots with $\operatorname{Im}\tau_j > 0$ by $j \in \{ k_0 + 1, \dots, k_0 + (m-k)/2 \}$, where $k$ is the total number of real roots of (4.4).
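For the wave operator of Example 4.1, the nonzero real root $\tau = -2\,\bar\xi(s_0)\cdot\bar\nu$ of (4.4) produces exactly the specular reflection law. A quick numerical check with hypothetical values of $\xi$ and $\bar\nu$:

```python
import numpy as np

# Hypothetical null covector for the wave operator: p2 = -xi0^2 + |xibar|^2 = 0
xibar = np.array([0.6, 0.8])             # spatial part of xi, |xibar| = 1
xi0 = 1.0
nubar = np.array([0.0, 1.0])             # spatial part of the inner unit normal

tau = -2 * xibar @ nubar                 # the nonzero real root of (4.4)
xibar_ref = xibar + tau * nubar          # spatial part of xi(s0) + tau nu

p2_ref = -xi0**2 + xibar_ref @ xibar_ref
print(tau, xibar_ref, p2_ref)
# the reflected momentum is again null; its tangential part is preserved
# and its normal part flips sign: the classical law of reflection
```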
Assumption 2 Define a matrix $b$ with components

$$b_{ij} = b_i\bigl( x(s_0), \xi(s_0) + \tau_j \nu \bigr) \qquad (4.9)$$

for $i = 1, \dots, l$, $j = 1, \dots, k_0 + (m-k)/2$. We assume that $\operatorname{rank} b = l$. Moreover, we assume that the number of boundary conditions is $l = k_0 + (m-k)/2$.

Let $1 \leq j \leq k_0$. Since $\frac{\partial p_m}{\partial \xi}\bigl( x(s_0), \xi(s_0) + \tau_j\nu \bigr)$ is real, we can define the bicharacteristic curve starting at $(x(s_0), \xi(s_0) + \tau_j\nu)$ via

$$\Gamma_j : s \mapsto (x(s), \xi(s)), \qquad (4.10)$$
$$\dot{x} = \frac{\partial p_m}{\partial \xi}, \qquad \dot{\xi} = -\frac{\partial p_m}{\partial x}. \qquad (4.11)$$

Lemma 4.3 The ray path $\Gamma_j$ moves forward in time, i.e. $x_0 > x_0(s_0)$ on $\Gamma_j$ in $D$.

Proof. In terms of the bicharacteristics defined above, equation (4.8) reads as

$$\frac{\nu \cdot \dot{x}(s_0)}{\dot{x}_0(s_0)} > 0. \qquad (4.12)$$

Thus $\Gamma_j$ enters $D$ as $x_0$ increases. Strict hyperbolicity implies that

$$\dot{x}_0(s) = \frac{\partial p_m}{\partial \xi_0}\bigl( x(s), \xi(s) \bigr) \neq 0 \qquad (4.13)$$

and thus $x_0 > x_0(s_0)$ on $\Gamma_j$ in $D$.

Remark 4.4 Note that if $x_0(s)$ decreases as $s$ increases, we follow the ray path backwards in $s$.

Lemma 4.5 Assume $1 \leq j \leq k_0$. Let $\psi_j = \psi$ on $\partial D$ and

$$\nabla\psi_j(x(s)) = \xi(s). \qquad (4.14)$$

Then the function $\psi_j$ can be chosen so that $p_m(x, \nabla\psi_j)$ vanishes to order $R$ on the curve $\Gamma_j$, with

$$\operatorname{Im}\left( \frac{\partial^2 \psi_j}{\partial x_i\, \partial x_k} \right)(x(s)) \qquad (4.15)$$

positive definite on the orthogonal complement of $\dot{x}(s)$.

Remark 4.6 Note that our requirement on $\nabla\psi_j$ is compatible with $\psi|_{\partial D} = \psi_j|_{\partial D}$, since $\xi(s_0)$ and $\xi(s_0) + \tau_j\nu$ differ only in the direction normal to $\partial D$.

Proof. First observe that the compatibility condition

$$\sum_{i=0}^{n} \frac{\partial^{|\alpha|+1} \psi_j}{\partial x_i\, \partial x^\alpha}(x(s_0))\, \frac{\partial p_m}{\partial \xi_i}\bigl( x(s_0), \xi(s_0) + \tau_j\nu \bigr) = \frac{d}{ds} \left[ \frac{\partial^{|\alpha|} \psi_j}{\partial x^\alpha}(x(s)) \right]_{s = s_0} \qquad (4.16)$$

holds for the $\alpha$th derivative of $\psi_j$ at $x(s_0)$. In particular, this implies that $\psi_j(x(s)) \in \mathbb{R}$ by (4.14). As

$$\operatorname{Im}\left( \frac{\partial^2 \psi}{\partial x_i\, \partial x_k} \right)(x(s)) \qquad (4.17)$$

is positive definite on vectors orthogonal to $\dot{x}(s)$, but zero tangential to $\Gamma$, we find that

$$\operatorname{Im}\left( \frac{\partial^2 \psi}{\partial x_i\, \partial x_k} \right)(x(s_0)) \qquad (4.18)$$

is positive definite on the tangent plane to $\partial D$ at $x(s_0)$, which is not tangential to the curve by (4.7). Since $\psi_j = \psi$ on $\partial D$,

$$\operatorname{Im}\left( \frac{\partial^2 \psi_j}{\partial x_i\, \partial x_k} \right)(x(s_0)) \qquad (4.19)$$

is also positive definite on this plane. We have already established that $\psi_j(x(s)) \in \mathbb{R}$ for all $s$; thus $\operatorname{Im}\bigl( \frac{\partial^2 \psi_j}{\partial x_i\, \partial x_k} \bigr)(x(s_0)) = 0$ on vectors parallel to $\dot{x}$. Therefore we conclude that

$$\operatorname{Im}\left( \frac{\partial^2 \psi_j}{\partial x_i\, \partial x_k} \right)(x(s_0)) \qquad (4.20)$$

is positive definite on the plane orthogonal to $\Gamma_j$ by the non-grazing condition

$$\nu \cdot \dot{x}(s_0) = \nu \cdot \frac{\partial p_m}{\partial \xi}\bigl( x(s_0), \xi(s_0) + \tau_j\nu \bigr) = \frac{d}{dt}\, p_m\bigl( x(s_0), \xi(s_0) + t\nu \bigr) \Big|_{t = \tau_j} \neq 0. \qquad (4.21)$$

Now it is possible to perform the same construction we used to find $\psi$ in the previous section to construct $\psi_j$ such that

$$p_m(x, \nabla\psi_j) = 0 \qquad (4.22)$$

to order $R$ on $\Gamma_j$. Since (4.21) holds, equation (4.16) enables us to always express a derivative in the $\nu$ direction through the other derivatives. This was not an issue in the original construction, since we started the Gaussian beam on the time slice $\{ x_0 = 0 \}$, so that the analogue of (4.21) was given by $\dot{x}_0(s_0) \neq 0$.

To determine $\psi_j$ for $k_0 + 1 \leq j \leq k_0 + (m-k)/2$ we can exploit a more direct approach.

Lemma 4.7 Let $j > k_0$ and assume $\psi_j = \psi$ on $\partial D$ with

$$\nabla\psi_j(x(s_0)) = \xi(s_0) + \tau_j\nu. \qquad (4.23)$$

Then we can construct $\psi_j$ such that $p_m(x, \nabla\psi_j) = 0$ to order $R$ at $x(s_0)$. Moreover, the Taylor series of $\psi_j$ around $x(s_0)$ is determined uniquely, and there are a $c > 0$ and a neighbourhood $U \subset \bar{D}$ of $x(s_0)$ such that

$$\operatorname{Im}\psi_j(x) \geq c\, |x - x(s_0)|^2 \qquad (4.24)$$

for all $x \in U$.

Proof. If $p_m(x, \nabla\psi_j) = 0$ up to order $R$ and $|\alpha| \leq R$, we have the expression

$$0 = \sum_{i=0}^{n} \frac{\partial^{|\alpha|+1} \psi_j}{\partial x_i\, \partial x^\alpha}(x(s_0))\, \frac{\partial p_m}{\partial \xi_i}\bigl( x(s_0), \nabla\psi_j(x(s_0)) \bigr) + q_\alpha(x(s_0)), \qquad (4.25)$$

where $q_\alpha$ is a function depending only on the $\frac{\partial^{|\beta|} \psi_j}{\partial x^\beta}$ for $|\beta| \leq |\alpha|$. Since $\psi_j = \psi$ on the boundary and

$$\nu \cdot \frac{\partial p_m}{\partial \xi}\bigl( x(s_0), \xi(s_0) + \tau_j\nu \bigr) \neq 0 \qquad (4.26)$$

by the non-grazing condition, this determines the Taylor series uniquely. For the rest of the proof, assume that near $x(s_0)$ the domain $D$ is locally defined by $x_n \geq 0$ and that $x(s_0) = 0$. Then

$$\operatorname{Im} \nabla\psi_j(x(s_0)) = (0, \dots, 0, \operatorname{Im}\tau_j) \qquad (4.27)$$

and

$$\operatorname{Im}\left( \frac{\partial^2 \psi_j}{\partial x_i\, \partial x_k} \right)(x(s_0)) = \operatorname{Im}\left( \frac{\partial^2 \psi}{\partial x_i\, \partial x_k} \right)(x(s_0)) =: A \qquad (4.28)$$

for $0 \leq i, k \leq n-1$. Using the notation $x = (x', x_n)$, this yields

$$\operatorname{Im}\psi_j = \operatorname{Im}\tau_j \cdot x_n + \tfrac12\, x' \cdot A x' + x_n\, K \cdot x' + O(x_n^2) + O(|x - x(s_0)|^3). \qquad (4.29)$$

Since $A$ is positive definite and $\operatorname{Im}\tau_j > 0$, we have $x_n \bigl( \operatorname{Im}\tau_j - |K|\,|x'| \bigr) > 0$ for $x_n > 0$ and $|x'| < \operatorname{Im}\tau_j / |K|$. Hence, for $|x - x(s_0)|$ small enough, there is a constant $c > 0$ such that $\operatorname{Im}\psi_j(x)$ satisfies the inequality stated above.

It remains to construct the coefficients $a_0^j, \dots, a_N^j$. For $u$ satisfying the ansatz (4.3), we will write

$$Pu = \sum_{r=-m}^{N} c_r k^{-r} e^{ik\psi} + \sum_{j=1}^{l} \sum_{r=-m}^{N} c_r^j k^{-r} e^{ik\psi_j}. \qquad (4.30)$$

Since $\psi = \psi_j$ on $\partial D$, we get on $\partial D$

$$B_j u = \sum_{r=-m_j}^{N} d_r^j(x)\, k^{-r} e^{ik\psi}. \qquad (4.31)$$

Lemma 4.8 One can choose the Taylor series of the $a_0^j, \dots, a_N^j$ such that $d^j_{-m_j+s}$ vanishes to order $R-2s$ on $\partial D$, and such that $c^j_{-m+s}(x)$ vanishes to order $R-2s$ on $\Gamma_j$ for $j \leq k_0$ and to order $R-2s$ at $x(s_0)$ for $j > k_0$.

Proof. Using the ansatz (4.3) and assuming that $B_j$ is of order $m_j$, we get

$$B_j u = \sum_{r=-m_j}^{N} d_r^j(x)\, k^{-r} e^{ik\psi} \qquad (4.32)$$

for $j = 1, \dots, l$ on $\partial D$, with

$$d^j_{-m_j} = b_j(x, \nabla\psi)\, a_0 + b_j(x, \nabla\psi_1)\, a_0^1 + \dots + b_j(x, \nabla\psi_l)\, a_0^l = b_j(x, \nabla\psi)\, a_0 + \sum_{i=1}^{l} b_{ji}\, a_0^i \qquad (4.33, 4.34)$$

and, for $s \geq 1$,

$$d^j_{-m_j+s} = b_j(x, \nabla\psi)\, a_s + b_j(x, \nabla\psi_1)\, a_s^1 + \dots + b_j(x, \nabla\psi_l)\, a_s^l + g_{js} = b_j(x, \nabla\psi)\, a_s + \sum_{i=1}^{l} b_{ji}\, a_s^i + g_{js}, \qquad (4.35, 4.36)$$

where $g_{js}$ is a function of the $a_r, a_r^1, \dots, a_r^l$ for $r = 0, \dots, s-1$ and their derivatives. Since $b$ is invertible, requiring $d^j_{-m_j+s}$ to vanish to order $R-2s$ determines the Taylor series of the $a_s^j$ uniquely up to order $R-2s$. For $1 \leq j \leq k_0$, we can use the method described in the previous section to choose the Taylor series of $a_0^j, \dots, a_N^j$ on $\Gamma_j$ such that $c^j_{-m+s}$ vanishes to order $R-2s$ on $\Gamma_j$. For $j > k_0$, we use the analogue of equation (4.25) for $c^j_{-m+s}$ to show that $c^j_{-m+s}$ vanishes to order $R-2s$ at $x(s_0)$.

As in the previous construction, the approximation is finished once we multiply the terms by suitable bump functions in order to localise the solutions in space.

Theorem 4.9 The above construction gives

$$\|Pu\|_s \leq C k^{m+s-(R+1)/2-n/4} \qquad (4.37)$$

and

$$\|B_i u\|_s \leq C k^{m_i+s-(R+1)/2-n/4}, \qquad (4.38)$$

where $\|\cdot\|_s$ denotes the Sobolev norm of order $s$ on $D \cap \{ |x_0| < T \}$.

Proof. One can modify Lemma 3.7 to see that the contributions of

$$\bigl\| P\bigl( e^{ik\psi_j} ( a_0^j + \dots + a_N^j k^{-N} ) \bigr) \bigr\|_s \qquad (4.39)$$

for $j > k_0$ are of higher order in $1/k$ than those for $j \leq k_0$. For the latter, and for $\|B_i u\|_s$, we can use the results from the previous section.

Remark 4.10 The constants in the inequalities are uniform for $y = x(0)$ and $\eta = \xi(0)$ in a neighbourhood of an admissible value, as in the previous section, although now this neighbourhood might have to be small to avoid grazing.

Remark 4.11 By Lemma 4.3 all ray paths go forward in time and, since we multiplied with suitable bump functions, none of the modifications made in this section to account for the boundary conditions will influence the initial values at $x_0 = 0$.
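The algebraic core of Lemma 4.8 is a linear solve against the invertible matrix $b$ at each order $s$: the leading amplitudes of the reflected beams are whatever makes the boundary coefficient $d^j_{-m_j}$ vanish. A minimal numerical sketch with hypothetical reflection data ($l = 2$ boundary conditions):

```python
import numpy as np

# Hypothetical reflection matrix b_ij = b_i(x(s0), xi(s0) + tau_j nu)
b = np.array([[1.0, 0.5],
              [0.3, 2.0]])           # rank b = l = 2, so b is invertible

# Requiring d^j_{-m_j} = b_j(x, grad psi) a_0 + sum_i b_ji a_0^i = 0
# at x(s0) gives a linear system for the reflected leading amplitudes.
a0 = 1.0                              # incoming leading amplitude at x(s0)
rhs = -np.array([1.0, -0.5]) * a0     # hypothetical values of b_j(x, grad psi) a_0

a0_refl = np.linalg.solve(b, rhs)     # the amplitudes a_0^1, a_0^2
residual = b @ a0_refl - rhs
print(a0_refl, residual)
```

Higher orders $s \geq 1$ reuse the same matrix $b$, with the lower-order terms collected into the inhomogeneity $g_{js}$.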

5 Propagation of Singularities

One of the main differences between hyperbolic and, say, elliptic partial differential equations is (assuming all differential operators have infinitely differentiable coefficients) that they may admit solutions which fail to be smooth. The notion of propagation of singularities arises when one studies the points at which solutions of hyperbolic PDEs are not infinitely differentiable. The term propagation refers to the way in which singularities at a given time $t_0$ translate to singularities at later times.

The Gaussian beam construction for strictly hyperbolic partial differential operators provides asymptotic solutions to the related PDEs which are concentrated near ray paths, i.e. projections of null bicharacteristic curves. It turns out that one can use the construction, for an operator $P$, to prove that singularities in solutions of $Pu = f$ can only propagate along these paths. To illustrate this, consider the operator $Pu = \frac{\partial u}{\partial t}$: the solutions of $Pu = 0$ are the functions $u$ which are independent of $t$. This means that there exist solutions which vanish away from lines $\{ (x_0, t) : t \in \mathbb{R} \}$ for fixed $x_0 \in \mathbb{R}^n$, and clearly in this case the singularities of the solutions propagate along these lines.

The idea of a singularity of a distribution can be refined by studying what is known as its wave front set, a concept introduced by Hörmander in [1]. As is now standard, we will give our propagation of singularities result in terms of wave front sets, and so the following section consists of a brief introduction to the idea and the underlying theory.

5.1 Wave Front Sets

5.1.1 Fourier Transforms

The key motivation for the definitions that follow is that, for an integrable function on $\mathbb{R}^n$, there is a relationship between smoothness and decay of the Fourier transform at infinity.
This essentially arises from the fact that for any sufficiently nice $f$ we have

$$\widehat{D_x^\alpha f}(\xi) = \xi^\alpha \hat{f}(\xi) \qquad \text{and} \qquad \widehat{x^\beta f}(\xi) = (-D_\xi)^\beta \hat{f}(\xi),$$

where $\alpha$ and $\beta$ are multi-indices and, for computational ease, we write $D^\alpha = i^{-|\alpha|} \partial^\alpha$. In fact, as we will see shortly, we can characterise smoothness of a function $f$ in terms of bounds on $\hat{f}$. This characterisation can be extended to distributions $f \in \mathcal{D}'(\mathbb{R}^n)$, and motivates the definition of the wave front set in terms of bounds on the Fourier transform. More precisely, given $f \in L^1(\mathbb{R}^n)$ we define its Fourier transform by

$$\hat{f}(\xi) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{-ix\cdot\xi} f(x)\, dx$$

and have the following results:

Lemma 5.1 If $f \in C_0^\infty(\mathbb{R}^n)$ then for each $N \in \mathbb{N}$ there is a $C_N$ such that

$$|\hat{f}(\xi)| \leq C_N (1 + |\xi|)^{-N}. \qquad (5.1)$$

Proof. This follows from the fact that if $f \in C_0^\infty(\mathbb{R}^n)$ then

$$(1 + |\xi|^2)^M \hat{f}(\xi) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\, (1 - \Delta)^M f(x)\, dx,$$

which holds since

$$\widehat{(1-\Delta)^M f}(\xi) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} e^{-ix\cdot\xi}\, (1-\Delta)^M f(x)\, dx = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} f(x)\, (1-\Delta_x)^M e^{-ix\cdot\xi}\, dx = (1 + |\xi|^2)^M \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} f(x)\, e^{-ix\cdot\xi}\, dx = (1 + |\xi|^2)^M \hat{f}(\xi) \qquad (5.2\text{-}5.5)$$

and the second line follows by integration by parts, since the boundary terms disappear for $f$ compactly supported and $(1-\Delta_x)^M e^{-ix\cdot\xi} = (1+|\xi|^2)^M e^{-ix\cdot\xi}$. From here, writing

$$\hat{f}(\xi) = (1+|\xi|^2)^{-M}\, (1+|\xi|^2)^{M} \hat{f}(\xi) = (1+|\xi|^2)^{-M}\, \widehat{(1-\Delta)^M f}(\xi)$$

yields the result, since $f \in C_0^\infty(\mathbb{R}^n)$ implies that $(1-\Delta)^M f \in L^1$, and so $\widehat{(1-\Delta)^M f}(\xi)$ is bounded by a constant depending only on $M$.

Lemma 5.2 If $f, \hat{f} \in L^1$ and $\hat{f}$ obeys the estimates (5.1), then $f \in C^\infty(\mathbb{R}^n)$ (after modification on a set of measure zero).

Proof. Since $f$ and $\hat{f}$ are both in $L^1$, the Fourier inversion formula holds (see Rudin [Theorem 9.11]). Then, for any multi-index $\beta$,

$$f(x) = (2\pi)^{-n/2} \int_{\mathbb{R}^n} e^{ix\cdot\xi}\, \hat{f}(\xi)\, d\xi \quad \text{a.e.},$$
$$D_x^\beta f(x) = (2\pi)^{-n/2} \int_{\mathbb{R}^n} D_x^\beta e^{ix\cdot\xi}\, \hat{f}(\xi)\, d\xi = (2\pi)^{-n/2} \int_{\mathbb{R}^n} \xi^\beta \hat{f}(\xi)\, e^{ix\cdot\xi}\, d\xi, \qquad (5.6\text{-}5.8)$$

where the estimates on $\hat{f}$ justify differentiating under the integral. The final expression is continuous, as $\xi^\beta \hat{f} \in L^1$ (another consequence of the bounds (5.1)) and the inverse Fourier transform of an $L^1$ function is continuous. We have shown that all derivatives of $f$ are continuous, and so $f \in C^\infty(\mathbb{R}^n)$ as required.

Theorem 5.3 (Characterisation of smoothness) A function $f \in L^1_{\mathrm{loc}}(\mathbb{R}^n)$ is equivalent to a smooth function in a neighbourhood of $x_0$ if and only if there is a non-negative function $\rho \in C_0^\infty(\mathbb{R}^n)$ with $\rho(x_0) = 1$ such that $\widehat{\rho f}(\xi)$ satisfies the estimates (5.1).

Proof. Assume that there exists such a $\rho$. Then $\rho f \in L^1$ since $f \in L^1_{\mathrm{loc}}$, and $\widehat{\rho f} \in L^1$ also, as a result of the estimates (5.1). By Lemma 5.2 this means that $\rho f \in C^\infty(\mathbb{R}^n)$, and so $f$ is equivalent to a $C^\infty$ function on a neighbourhood of $x_0$. Conversely, if $f$ is equivalent to a $C^\infty$ function on a neighbourhood of $x_0$, then building $\rho$ non-negative and smooth, supported in this neighbourhood (w.l.o.g. assume the support is compact), with $\rho(x_0) = 1$, gives $\rho f \in C_0^\infty(\mathbb{R}^n)$. Then $\widehat{\rho f}$ satisfies the estimates (5.1) by Lemma 5.1.

As mentioned before, this characterisation can be extended to distributions $f \in \mathcal{D}'(\mathbb{R}^n)$. Say that $f \in \mathcal{D}'(\mathbb{R}^n)$ is equivalent to a $C^\infty$ function $g$ on a neighbourhood $O \subset \mathbb{R}^n$ if for all $\varphi \in \mathcal{D}(O)$

$$\langle f, \varphi \rangle = \int_{\mathbb{R}^n} g\varphi\, dx,$$

and for $\rho \in C_0^\infty(\mathbb{R}^n) = \mathcal{D}(\mathbb{R}^n)$ define

$$\widehat{\rho f}(\xi) = (2\pi)^{-n/2} \bigl\langle f, \rho\, e^{-ix\cdot\xi} \bigr\rangle. \qquad (5.9)$$

Note that if $f \in \mathcal{D}(\mathbb{R}^n)$, this corresponds to the Fourier transform of $\rho f$ as before.
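The correspondence between decay and smoothness in Lemma 5.1 and Theorem 5.3 is easy to observe numerically: the discrete Fourier transform of a smooth function decays far faster than that of a function with a kink. The grid and test functions below are hypothetical choices for the demonstration:

```python
import numpy as np

N = 1024
x = -np.pi + 2 * np.pi * np.arange(N) / N

smooth = np.exp(-x**2 / (2 * 0.3**2))        # C-infinity (narrow Gaussian)
kink = np.maximum(0.0, 1.0 - np.abs(x))      # continuous triangle, not C^1

F_smooth = np.abs(np.fft.fft(smooth))
F_kink = np.abs(np.fft.fft(kink))

# relative size of the Fourier coefficients at frequencies ~ 45-55,
# compared to each function's largest coefficient
band = slice(45, 56)
r_smooth = F_smooth[band].max() / F_smooth.max()
r_kink = F_kink[band].max() / F_kink.max()
print(r_smooth, r_kink)
```

For the smooth function the high-frequency coefficients sit at rounding-error level, while the kinked function's coefficients decay only polynomially (like $|\xi|^{-2}$ for the triangle).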
With this set up, we obtain the corresponding theorem:

Theorem 5.4 $f \in \mathcal{D}'(\mathbb{R}^n)$ is equivalent to a $C^\infty$ function on a neighbourhood of $x_0$ if and only if there exists $\rho$ as before with $\widehat{\rho f}(\xi)$ satisfying the estimates (5.1).

Proof. Suppose such a $\rho$ exists. Then $\langle f_\rho, \varphi \rangle := \langle f, \rho\varphi \rangle$ defines an element $f_\rho$ of $\mathcal{E}'(\mathbb{R}^n)$, since $\rho$ is compactly supported. Its Fourier transform $\hat{f}_\rho \in \mathcal{S}'(\mathbb{R}^n)$ is in fact a function, satisfying

$$\hat{f}_\rho(\xi) = (2\pi)^{-n/2} \bigl\langle f_\rho, e^{-ix\cdot\xi} \bigr\rangle,$$

since we have, for all $\varphi \in \mathcal{S}(\mathbb{R}^n)$,

$$\langle \hat{f}_\rho, \varphi \rangle = \langle f_\rho, \hat\varphi \rangle = \Bigl\langle f_\rho,\ (2\pi)^{-n/2} \int_{\mathbb{R}^n} e^{-ix\cdot\xi} \varphi(\xi)\, d\xi \Bigr\rangle = (2\pi)^{-n/2} \int_{\mathbb{R}^n} \bigl\langle f_\rho, e^{-ix\cdot\xi} \bigr\rangle\, \varphi(\xi)\, d\xi, \qquad (5.10\text{-}5.12)$$

where the final equality is justified by the fact that $f_\rho$ is compactly supported. Note that this, along with definition (5.9), means that $\hat{f}_\rho(\xi) = \widehat{\rho f}(\xi)$ for all $\xi$.

Now, since $f_\rho \in \mathcal{E}'(\mathbb{R}^n) \subset \mathcal{S}'(\mathbb{R}^n)$, the Fourier inversion formula for $\mathcal{S}'(\mathbb{R}^n)$ holds (see [4]): with our normalisation,

$$\hat{\hat{f}}_\rho = \check{f}_\rho,$$

where for $\varphi \in \mathcal{D}(\mathbb{R}^n)$, $\check\varphi(x) = \varphi(-x)$, and for $g \in \mathcal{D}'(\mathbb{R}^n)$, $\check{g}$ is defined by $\langle \check{g}, \varphi \rangle = \langle g, \check\varphi \rangle$. The Fourier transform of $\hat{f}_\rho$ must satisfy, for all $\varphi \in \mathcal{S}(\mathbb{R}^n)$,

$$\langle \hat{\hat{f}}_\rho, \varphi \rangle = \langle \hat{f}_\rho, \hat\varphi \rangle = \int_{\mathbb{R}^n} \hat{f}_\rho(\xi)\, \hat\varphi(\xi)\, d\xi = (2\pi)^{-n/2} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \hat{f}_\rho(\xi)\, e^{-ix\cdot\xi}\, \varphi(x)\, dx\, d\xi = \int_{\mathbb{R}^n} \Bigl( (2\pi)^{-n/2} \int_{\mathbb{R}^n} \hat{f}_\rho(\xi)\, e^{-ix\cdot\xi}\, d\xi \Bigr) \varphi(x)\, dx, \qquad (5.13\text{-}5.16)$$

where the final line uses Fubini's theorem, justified by the bounds (5.1) on $\widehat{\rho f}(\xi) = \hat{f}_\rho(\xi)$. The same bounds give that $\int_{\mathbb{R}^n} \hat{f}_\rho(\xi)\, e^{-ix\cdot\xi}\, d\xi$ is a $C^\infty$ function of $x$ (as in the proof of Lemma 5.2), and so $\hat{\hat{f}}_\rho = \check{f}_\rho$ satisfies

$$\langle \check{f}_\rho, \varphi \rangle = \int_{\mathbb{R}^n} g\varphi\, dx$$

for some $C^\infty$ function $g$ and all $\varphi \in \mathcal{D}(\mathbb{R}^n)$. Clearly this implies we have the same result for $f_\rho$, with corresponding smooth function $\check{g}$. Finally, since $\langle f_\rho, \varphi \rangle = \langle f, \rho\varphi \rangle$ for all $\varphi$, we can deduce the existence of a function $g$ such that for all $\varphi$ supported in $O \subset \operatorname{supp}\rho$, a neighbourhood of $x_0$,

$$\langle f, \varphi \rangle = \int_{\mathbb{R}^n} g\varphi\, dx$$