3. John–Nirenberg inequality, Part I

A function $\varphi \in L^1(J)$ belongs to the space $BMO(J)$ if
$$\sup_{I} \frac{1}{|I|}\int_I \big|\varphi(s) - \langle\varphi\rangle_I\big|\,ds < \infty$$
for all subintervals $I \subset J$. If the same is true for the dyadic subintervals $I \in \mathcal{D}(J)$ only, we will write $\varphi \in BMO^d(J)$. In fact, the following is true:
$$\varphi \in BMO(J) \iff \sup_{I \subset J}\Big(\frac{1}{|I|}\int_I \big|\varphi(s) - \langle\varphi\rangle_I\big|^p\,ds\Big)^{1/p} < \infty \qquad \text{for every } p \in (0,\infty).$$
If we factor over the constants, we get a normed space, where the expression on the right-hand side can be taken as one of the equivalent norms for any $p \in [1,\infty)$. In what follows, we will use the $L^2$-based norm:
$$\|\varphi\|_{BMO(J)} = \sup_{I \subset J}\Big(\frac{1}{|I|}\int_I \big|\varphi(s) - \langle\varphi\rangle_I\big|^2\,ds\Big)^{1/2} = \sup_{I \subset J}\big(\langle\varphi^2\rangle_I - \langle\varphi\rangle_I^2\big)^{1/2}.$$
The $BMO$ ball of radius $\varepsilon$ centered at $0$ will be denoted by $BMO_\varepsilon$.

Using the Haar decomposition $\varphi(s) = \langle\varphi\rangle_J + \sum_{I \in \mathcal{D}(J)}(\varphi, h_I)\,h_I(s)$ (with $L^2$-normalized Haar functions $h_I$), we can write down the expression for the norm in the following way:
$$\|\varphi\|_{BMO^d(J)}^2 = \sup_{L \in \mathcal{D}(J)} \frac{1}{|L|}\sum_{I \in \mathcal{D}(L)}(\varphi, h_I)^2 = \sup_{L \in \mathcal{D}(J)} \big\langle |\varphi - \langle\varphi\rangle_L|^2 \big\rangle_L.$$

Theorem (John–Nirenberg). There exist absolute constants $c_1$ and $c_2$ such that for all $\varphi \in BMO_\varepsilon(J)$
$$\big|\{s \in J \colon |\varphi(s) - \langle\varphi\rangle_J| \ge \lambda\}\big| \le c_1\,|J|\,e^{-c_2\lambda/\|\varphi\|_{BMO}}.$$

An equivalent integral form of the same assertion is the following.

Theorem. There exists an absolute constant $\varepsilon_0$ such that for any $\varphi \in BMO_\varepsilon(J)$ with $\varepsilon < \varepsilon_0$ the inequality
$$\langle e^{\varphi}\rangle_J \le c\,e^{\langle\varphi\rangle_J}$$
holds with a constant $c = c(\varepsilon)$ not depending on $\varphi$.

We shall prove the theorem in this integral form and find the sharp constant $c(\varepsilon)$. Our Bellman function,
$$\boldsymbol{B}(x;\varepsilon) = \sup_{\varphi \in BMO_\varepsilon(J)}\big\{\langle e^{\varphi}\rangle_J \colon \langle\varphi\rangle_J = x_1,\ \langle\varphi^2\rangle_J = x_2\big\},$$
is well-defined on the domain
$$\Omega_\varepsilon = \big\{x = (x_1, x_2)\colon x_1^2 \le x_2 \le x_1^2 + \varepsilon^2\big\}.$$
First, we will consider the dyadic problem and deduce the main inequality for the dyadic Bellman function.
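Before passing to the dyadic problem, let us note why $\Omega_\varepsilon$ is the natural domain here (this is implicit in the definition of $\boldsymbol{B}$): for every test function $\varphi \in BMO_\varepsilon(J)$ the Cauchy–Schwarz inequality and the norm bound give
$$\langle\varphi\rangle_J^2 \le \langle\varphi^2\rangle_J \qquad\text{and}\qquad \langle\varphi^2\rangle_J - \langle\varphi\rangle_J^2 \le \|\varphi\|_{BMO(J)}^2 \le \varepsilon^2,$$
so every Bellman point $x = (\langle\varphi\rangle_J, \langle\varphi^2\rangle_J)$ lies in $\Omega_\varepsilon$.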

Lemma 3.1 (Main inequality). For every pair of points $x^{\pm}$ from $\Omega_\varepsilon$ such that their mean $x = (x^+ + x^-)/2$ is also in $\Omega_\varepsilon$, the following inequality holds:
$$(3.1)\qquad \boldsymbol{B}(x) \ge \frac{\boldsymbol{B}(x^+) + \boldsymbol{B}(x^-)}{2}.$$

Proof. The proof literally repeats the proof of the main inequality for the Buckley Bellman function. We split the integral in the definition of $\boldsymbol{B}$ into two parts: the integral over $J_+$ and the one over $J_-$:
$$\frac{1}{|J|}\int_J e^{\varphi(s)}\,ds = \frac12\Big[\frac{1}{|J_+|}\int_{J_+} e^{\varphi(s)}\,ds + \frac{1}{|J_-|}\int_{J_-} e^{\varphi(s)}\,ds\Big].$$
Now we choose functions $\varphi^{\pm}$ on the intervals $J_{\pm}$ that almost give us the supremum in the definition of $\boldsymbol{B}(x^{\pm})$, i.e.,
$$\frac{1}{|J_{\pm}|}\int_{J_{\pm}} e^{\varphi^{\pm}(s)}\,ds \ge \boldsymbol{B}(x^{\pm}) - \eta,$$
for a fixed small $\eta > 0$. Then for the function $\varphi$ on $J$, defined as $\varphi^+$ on $J_+$ and $\varphi^-$ on $J_-$, we obtain the inequality
$$(3.2)\qquad \frac{1}{|J|}\int_J e^{\varphi(s)}\,ds \ge \frac{\boldsymbol{B}(x^+) + \boldsymbol{B}(x^-)}{2} - \eta.$$
Observe that the compound function $\varphi$ is an admissible test function corresponding to the point $x$. Indeed, $x^{J_\pm} = x^{\pm}$ by the choice of $\varphi^{\pm}$, and by construction $\varphi^{\pm} \in BMO^d_\varepsilon(J_{\pm})$; therefore, the function $\varphi$ satisfies the inequality $\langle\varphi^2\rangle_I - \langle\varphi\rangle_I^2 \le \varepsilon^2$ for all $I \in \mathcal{D}(J_+)$, since $\varphi^+$ does, and for all $I \in \mathcal{D}(J_-)$, since $\varphi^-$ does. Lastly, $\langle\varphi^2\rangle_J - \langle\varphi\rangle_J^2 \le \varepsilon^2$, because, by assumption, $x \in \Omega_\varepsilon$. We can now take the supremum in (3.2) over all admissible functions $\varphi$, which yields
$$\boldsymbol{B}(x) \ge \frac{\boldsymbol{B}(x^+) + \boldsymbol{B}(x^-)}{2} - \eta,$$
which proves the main inequality, because $\eta$ is arbitrarily small.

For the Buckley inequality Bellman function, the next step was to reduce the number of variables. Here it is possible as well, but we postpone this procedure and first derive a boundary condition for $\boldsymbol{B}$.

Lemma 3.2 (Boundary condition).
$$(3.3)\qquad \boldsymbol{B}(x_1, x_1^2) = e^{x_1}.$$

Proof. The function $\varphi(s) \equiv x_1$ is the only test function corresponding to the point $x = (x_1, x_1^2)$, because equality in the Hölder inequality $\langle\varphi\rangle_J^2 \le \langle\varphi^2\rangle_J$ occurs only for constant functions. Hence, $\langle e^{\varphi}\rangle_J = e^{x_1}$.

Now we are ready to describe super-solutions as functions verifying the main inequality and the boundary condition.

Lemma 3.3 (Bellman induction). If $B$ is a continuous function on the domain $\Omega_\varepsilon$, satisfying the main inequality (3.1) for any pair of points $x^{\pm} \in \Omega_\varepsilon$ such that $x = \frac{x^+ + x^-}{2} \in \Omega_\varepsilon$, and the boundary condition (3.3), then $B(x) \ge \boldsymbol{B}(x)$.
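Both in the proof just given and in the proof below we tacitly use the following elementary identity: since $|J_{\pm}| = |J|/2$,
$$\langle\varphi\rangle_J = \frac{\langle\varphi\rangle_{J_+} + \langle\varphi\rangle_{J_-}}{2}, \qquad \langle\varphi^2\rangle_J = \frac{\langle\varphi^2\rangle_{J_+} + \langle\varphi^2\rangle_{J_-}}{2},$$
that is, $x^{J} = \frac12\big(x^{J_+} + x^{J_-}\big)$ for the Bellman points $x^{I} = (\langle\varphi\rangle_I, \langle\varphi^2\rangle_I)$.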

Proof. Fix a bounded function $\varphi \in BMO_\varepsilon(J)$. By the main inequality we have
$$B(x^{J}) \ge \frac{B(x^{J_+}) + B(x^{J_-})}{2} \ge \dots \ge \frac{1}{|J|}\sum_{I \in \mathcal{D}_n(J)} |I|\,B(x^{I}) = \frac{1}{|J|}\int_J B\big(x^{(n)}(s)\big)\,ds,$$
where $\mathcal{D}_n(J)$ is the family of dyadic subintervals of $J$ of length $2^{-n}|J|$ and $x^{(n)}(s) = x^{I}$ when $s \in I$, $I \in \mathcal{D}_n(J)$. By the Lebesgue differentiation theorem we have $x^{(n)}(s) \to (\varphi(s), \varphi^2(s))$ almost everywhere. Now, we can pass to the limit in this inequality as $n \to \infty$. Since $\varphi$ is assumed to be bounded, $x^{(n)}(s)$ runs in a bounded and, therefore, compact subdomain of $\Omega_\varepsilon$. Since $B$ is continuous, it is bounded on any compact set and so, by the Lebesgue dominated convergence theorem, we can pass to the limit in the integral; using the boundary condition (3.3), we obtain
$$(3.4)\qquad B(x^{J}) \ge \frac{1}{|J|}\int_J B\big(\varphi(s), \varphi^2(s)\big)\,ds = \frac{1}{|J|}\int_J e^{\varphi(s)}\,ds = \langle e^{\varphi}\rangle_J.$$
To complete the proof of the lemma, we need to pass from bounded to arbitrary $BMO$ test functions. To this end, we will use the following result.

Lemma 3.4. Fix $\varphi \in BMO(J)$ and two real numbers $c$, $d$ such that $c < d$. Let $\varphi_{c,d}$ be the cut-off of $\varphi$ at heights $c$ and $d$:
$$(3.5)\qquad \varphi_{c,d}(s) = \begin{cases} c, & \text{if } \varphi(s) \le c;\\ \varphi(s), & \text{if } c < \varphi(s) < d;\\ d, & \text{if } \varphi(s) \ge d.\end{cases}$$
Then
$$\langle\varphi_{c,d}^2\rangle_I - \langle\varphi_{c,d}\rangle_I^2 \le \langle\varphi^2\rangle_I - \langle\varphi\rangle_I^2, \qquad \forall I \subset J,$$
and, consequently,
$$\|\varphi_{c,d}\|_{BMO} \le \|\varphi\|_{BMO}.$$

Proof. First, let us note that it is sufficient to prove this lemma for a one-sided cut, for example, for $c = -\infty$. We then get the full statement by applying this argument twice. Indeed, if we denote by $C_d\varphi$ the cut-off of $\varphi$ from above at height $d$, i.e., $C_d\varphi = \varphi_{-\infty,d}$, then $\varphi_{c,d} = -C_{-c}(-C_d\varphi)$.

Take a measurable subset $I \subset J$ and let $I_1 = \{s \in I\colon \varphi(s) < d\}$ and $I_2 = \{s \in I\colon \varphi(s) \ge d\}$. Let $\beta_k = |I_k|/|I|$, $k = 1, 2$. We have the following identity:
$$\big[\langle\varphi^2\rangle_I - \langle\varphi\rangle_I^2\big] - \big[\langle(C_d\varphi)^2\rangle_I - \langle C_d\varphi\rangle_I^2\big] = \beta_2\big[\langle\varphi^2\rangle_{I_2} - \langle\varphi\rangle_{I_2}^2\big] + \beta_1\beta_2\big[\langle\varphi\rangle_{I_2} - d\big]\big[\langle\varphi\rangle_{I_2} + d - 2\langle\varphi\rangle_{I_1}\big],$$
which proves the lemma, because $\langle\varphi\rangle_{I_2} \ge d \ge \langle\varphi\rangle_{I_1}$.

Now, let $\varphi \in BMO_\varepsilon(J)$ be a function bounded from above. Then, by the above lemma, $\varphi_n = \varphi_{-n,+\infty} \in BMO_\varepsilon(J)$, and, according to (3.4), we have
$$B\big(\langle\varphi_n\rangle_J, \langle\varphi_n^2\rangle_J\big) \ge \langle e^{\varphi_n}\rangle_J.$$
Since $\varphi$ is bounded from above, the functions $e^{\varphi_n}$ admit a summable majorant; as $B$ is continuous, we can pass to the limit and obtain the estimate (3.4) for any function $\varphi$ bounded from above. Finally, we repeat this approximation procedure for an arbitrary $\varphi$: now we take $\varphi_n = \varphi_{-\infty,n}$ and use the monotone convergence theorem to pass to the limit in the right-hand side of the inequality.
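The identity in the proof of Lemma 3.4 can be verified by a direct computation, which we sketch here for convenience; write $a_k = \langle\varphi\rangle_{I_k}$. Since $C_d\varphi = \varphi$ on $I_1$ and $C_d\varphi = d$ on $I_2$,
$$\langle\varphi^2\rangle_I - \langle(C_d\varphi)^2\rangle_I = \beta_2\big(\langle\varphi^2\rangle_{I_2} - d^2\big), \qquad \langle C_d\varphi\rangle_I^2 - \langle\varphi\rangle_I^2 = \beta_2(d - a_2)\big(2\beta_1 a_1 + \beta_2 d + \beta_2 a_2\big),$$
and the sum of these two differences, after regrouping with $\beta_1 + \beta_2 = 1$, is exactly $\beta_2\big(\langle\varphi^2\rangle_{I_2} - a_2^2\big) + \beta_1\beta_2(a_2 - d)(a_2 + d - 2a_1)$.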

So, we have proved the inequality $B(x^{J}) \ge \langle e^{\varphi}\rangle_J$ for arbitrary $\varphi \in BMO_\varepsilon(J)$. Taking the supremum over all admissible test functions corresponding to the point $x$, we get $B(x) \ge \boldsymbol{B}(x)$, which completes the proof of Lemma 3.3.

As before, we pass from the finite-difference inequality (3.1) to the infinitesimal one:
$$(3.6)\qquad \frac{d^2B}{dx^2} = \begin{pmatrix} B_{x_1x_1} & B_{x_1x_2}\\ B_{x_1x_2} & B_{x_2x_2} \end{pmatrix} \le 0,$$
and we will require this Hessian matrix to be degenerate, i.e., $\det\big(\frac{d^2B}{dx^2}\big) = 0$. Again, to solve this PDE, we use a homogeneity property to reduce the problem to an ODE.

Lemma 3.5 (Homogeneity). There exists a function $G$ on the interval $[0, \varepsilon^2]$ such that
$$\boldsymbol{B}(x;\varepsilon) = e^{x_1}\,G(x_2 - x_1^2), \qquad G(0) = 1.$$

Proof. Let $\varphi$ be an arbitrary test function and $x = (\langle\varphi\rangle, \langle\varphi^2\rangle)$ its Bellman point. Then the function $\tilde\varphi = \varphi + \tau$ is also a test function with the same norm, and its Bellman point is $\tilde x = (x_1 + \tau,\ x_2 + 2\tau x_1 + \tau^2)$. Therefore,
$$\boldsymbol{B}(\tilde x) = \sup_{\tilde\varphi}\langle e^{\tilde\varphi}\rangle = e^{\tau}\sup_{\varphi}\langle e^{\varphi}\rangle = e^{\tau}\boldsymbol{B}(x).$$
Choosing $\tau = -x_1$ we get
$$\boldsymbol{B}(x) = e^{-\tau}\boldsymbol{B}(x_1 + \tau,\ x_2 + 2\tau x_1 + \tau^2) = e^{x_1}\boldsymbol{B}(0,\ x_2 - x_1^2).$$
Setting $G(t) = \boldsymbol{B}(0, t)$ completes the proof.

Since $G > 0$, we can introduce $g(t) = \log G(t)$ and look for a function $B$ of the form
$$B(x_1, x_2) = e^{x_1 + g(x_2 - x_1^2)}.$$
By direct calculation, we get
$$B_{x_1x_1} = \big(1 - 2g' - 4x_1 g' + 4x_1^2 (g')^2 + 4x_1^2 g''\big)B,\qquad B_{x_1x_2} = \big(g' - 2x_1 (g')^2 - 2x_1 g''\big)B,\qquad B_{x_2x_2} = \big((g')^2 + g''\big)B,$$
where $g$ and its derivatives are evaluated at $t = x_2 - x_1^2$. The partial differential equation $\det\big(\frac{d^2B}{dx^2}\big) = 0$ then turns into the following ordinary differential equation:
$$\big(1 - 2g' - 4x_1 g' + 4x_1^2 (g')^2 + 4x_1^2 g''\big)\big((g')^2 + g''\big) = \big(g' - 2x_1(g')^2 - 2x_1 g''\big)^2,$$
which reduces to
$$g'' - 2g'g'' - 2(g')^3 = 0.$$
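To check the last reduction, it is convenient to abbreviate $C = (g')^2 + g''$ (a shorthand used only here); then $B_{x_1x_1} = (1 - 2g' - 4x_1g' + 4x_1^2C)B$, $B_{x_1x_2} = (g' - 2x_1C)B$, $B_{x_2x_2} = CB$, and
$$\frac{B_{x_1x_1}B_{x_2x_2} - B_{x_1x_2}^2}{B^2} = (1 - 2g' - 4x_1g')\,C + 4x_1^2C^2 - (g')^2 + 4x_1g'C - 4x_1^2C^2 = (1 - 2g')C - (g')^2 = g'' - 2g'g'' - 2(g')^3.$$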

Dividing by $2(g')^3$ (since we are not interested in constant solutions), we get
$$\Big(\frac{1}{g'} - \frac{1}{4(g')^2}\Big)' = 1,$$
which yields
$$\frac{1}{g'} - \frac{1}{4(g')^2} = t + \mathrm{const},$$
or, equivalently,
$$-\Big(1 - \frac{1}{2g'}\Big)^2 = t + \mathrm{const}, \qquad t \in [0, \varepsilon^2].$$
Since the left-hand side is non-positive, the constant cannot be greater than $-\varepsilon^2$; let us denote it by $-\delta$, where $\delta \ge \varepsilon^2$. Thus, we have two possible solutions:
$$\frac{1}{2g'_{\pm}} = 1 \pm \sqrt{\delta - t}.$$
Using the boundary condition $g(0) = 0$, we obtain
$$g_{\pm}(t) = \int_0^t \frac{ds}{2\big(1 \pm \sqrt{\delta - s}\big)} = \log\frac{1 \pm \sqrt{\delta - t}}{1 \pm \sqrt{\delta}} \pm \big(\sqrt{\delta} - \sqrt{\delta - t}\big).$$
This yields two solutions for $B$:
$$B_{\pm}(x) = \frac{1 \pm \sqrt{\delta - x_2 + x_1^2}}{1 \pm \sqrt{\delta}}\, \exp\Big\{x_1 \pm \Big(\sqrt{\delta} - \sqrt{\delta - x_2 + x_1^2}\Big)\Big\}.$$

Homework assignment.
1) Check that the quadratic form of the Hessian is
$$\sum_{i,j=1}^{2}\frac{\partial^2 B_{\pm}}{\partial x_i\,\partial x_j}\,\Delta_i\Delta_j = \pm\,\frac{\exp\big\{x_1 \pm \big(\sqrt{\delta} - \sqrt{\delta - x_2 + x_1^2}\big)\big\}}{4\big(1 \pm \sqrt{\delta}\big)\sqrt{\delta - x_2 + x_1^2}}\Big(\Delta_2 - 2\big(x_1 \mp \sqrt{\delta - x_2 + x_1^2}\big)\Delta_1\Big)^2.$$
2) Describe the extremal trajectories along which the Hessian degenerates.
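As a quick consistency check (independent of the homework), note that both candidates satisfy the boundary condition (3.3): on the lower boundary $x_2 = x_1^2$ we have $\sqrt{\delta - x_2 + x_1^2} = \sqrt{\delta}$, and therefore
$$B_{\pm}(x_1, x_1^2) = \frac{1 \pm \sqrt{\delta}}{1 \pm \sqrt{\delta}}\,\exp\big\{x_1 \pm (\sqrt{\delta} - \sqrt{\delta})\big\} = e^{x_1}.$$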

4. Homogeneous Monge–Ampère equation

We will look for solutions of the system
$$(4.1)\qquad B_{x_1x_1}B_{x_2x_2} = (B_{x_1x_2})^2, \qquad B_{x_1x_1} \le 0, \qquad B_{x_2x_2} \le 0,$$
subject to certain boundary conditions. By the nature of the problem, we are interested in finding the smallest possible solution of (4.1). If a linear function fulfills the boundary conditions, it clearly is that smallest solution. In what follows we assume that $B$ is not linear. This means that at each point $x$ of the domain there exists a unique (up to a scalar coefficient) vector, say $\Theta(x)$, lying in the kernel of the matrix $\frac{d^2B}{dx^2}$.

Let us show that the functions $B_{x_i}$ are constant along the vector field $\Theta$. For a differentiable function $f$, the vector field tangent to the level set $f(x_1, x_2) = \mathrm{const}$ has the form $\binom{-f_{x_2}}{f_{x_1}}$ (it is orthogonal to $\nabla f$). Thus, we need to check that both vectors $\binom{-(B_{x_i})_{x_2}}{(B_{x_i})_{x_1}}$, $i = 1, 2$, are in the kernel of the Hessian (i.e., proportional to the kernel vector $\Theta$). This is a direct consequence of (4.1). For example, for $i = 1$ we have
$$\begin{pmatrix} B_{x_1x_1} & B_{x_1x_2}\\ B_{x_1x_2} & B_{x_2x_2}\end{pmatrix}\begin{pmatrix} -(B_{x_1})_{x_2}\\ (B_{x_1})_{x_1}\end{pmatrix} = \begin{pmatrix} -B_{x_1x_1}B_{x_1x_2} + B_{x_1x_2}B_{x_1x_1}\\ -B_{x_1x_2}B_{x_1x_2} + B_{x_2x_2}B_{x_1x_1}\end{pmatrix} = \begin{pmatrix} 0\\ 0\end{pmatrix}.$$
If we parameterize the integral curves of the field $\Theta$ by some parameter $s$, we can write $B_{x_i} = t_i(s)$, $s = s(x_1, x_2)$. Any $B_{x_i}$ that is not identically constant can itself be taken as $s$, but it is usually more convenient to parameterize the integral curves by some other parameter with a clear geometrical meaning.

Now we show that the function $t_0 := B - x_1 B_{x_1} - x_2 B_{x_2}$ is also constant along the integral curves. Since
$$\frac{\partial t_0}{\partial x_1} = B_{x_1} - t_1 - x_1\frac{\partial t_1}{\partial x_1} - x_2\frac{\partial t_2}{\partial x_1} = -x_1 B_{x_1x_1} - x_2 B_{x_1x_2}$$
and
$$\frac{\partial t_0}{\partial x_2} = B_{x_2} - x_1\frac{\partial t_1}{\partial x_2} - t_2 - x_2\frac{\partial t_2}{\partial x_2} = -x_1 B_{x_1x_2} - x_2 B_{x_2x_2},$$
we have
$$\begin{pmatrix} -(t_0)_{x_2}\\ (t_0)_{x_1}\end{pmatrix} = -x_1\begin{pmatrix} -(t_1)_{x_2}\\ (t_1)_{x_1}\end{pmatrix} - x_2\begin{pmatrix} -(t_2)_{x_2}\\ (t_2)_{x_1}\end{pmatrix} \in \operatorname{Ker}\frac{d^2B}{dx^2}.$$
Thus, we have proved that any solution of the homogeneous Monge–Ampère equation has the representation
$$(4.2)\qquad B = t_0 + x_1 t_1 + x_2 t_2,$$
where the coefficients $t_i$ are constant along the vector field generated by the kernel of the Hessian.

Now we prove that the integral curves of this vector field are in fact straight lines, given by the equation
$$(4.3)\qquad dt_0 + x_1\,dt_1 + x_2\,dt_2 = 0.$$
This is, indeed, the equation of a straight line, because all the differentials are constant along the trajectory. It can be rewritten as a usual linear equation with constant coefficients if we choose a parametrization of the trajectories. For example, if we take $s = t_0$, then (4.3) turns into
$$1 + x_1\frac{dt_1}{dt_0} + x_2\frac{dt_2}{dt_0} = 0,$$
where the coefficients $\frac{dt_i}{dt_0}$, being functions of $t_0$, are constant on each trajectory.

Let us now deduce equation (4.3). On the one hand,
$$(4.4)\qquad dB = B_{x_1}\,dx_1 + B_{x_2}\,dx_2 = t_1\,dx_1 + t_2\,dx_2;$$
on the other hand, from (4.2) we have
$$(4.5)\qquad dB = dt_0 + t_1\,dx_1 + x_1\,dt_1 + t_2\,dx_2 + x_2\,dt_2.$$
Comparing (4.4) and (4.5) yields (4.3).
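As a toy illustration of the representation (4.2) and of the straight-line trajectories (this example is not from the text; it is added only to make the objects concrete), take $B(x_1, x_2) = -(x_1 + x_2)^2$. Then
$$B_{x_1x_1} = B_{x_2x_2} = B_{x_1x_2} = -2,$$
so (4.1) holds and the kernel of the Hessian is spanned by $\Theta = \binom{1}{-1}$. Here
$$t_1 = t_2 = -2(x_1 + x_2), \qquad t_0 = B - x_1 t_1 - x_2 t_2 = (x_1 + x_2)^2,$$
all of which are constant along the lines $x_1 + x_2 = \mathrm{const}$, the integral curves of $\Theta$; moreover $t_0 + x_1 t_1 + x_2 t_2 = (x_1 + x_2)^2 - 2(x_1 + x_2)^2 = B$, in accordance with (4.2), and along each such line $dt_0 = dt_1 = dt_2 = 0$, so (4.3) holds trivially.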