The Infinity-Laplacian and Its Properties


U.U.D.M. Project Report 2017:40
Department of Mathematics, Uppsala University
Julia Landström
Degree Project C in Mathematics, 15.0 credits
Supervisor: Kaj Nyström
Examiner: Martin Herschend
December 2017

Contents

1 Introduction
2 The p-Laplace Equation
  2.1 Letting p → ∞
3 The Infinity-Laplace Equation
  3.1 Mean Value Formula
  3.2 Viscosity Solutions
    3.2.1 The Comparison Principle
4 Tug-of-War
  4.1 The game
    4.1.1 p-harmonious functions
    4.1.2 A heuristic argument
    4.1.3 Tug-of-war and p-harmonious functions
    4.1.4 Maximum principles
    4.1.5 Letting ε → 0

1 Introduction

The aim is to show that the $\infty$-Laplace equation can be obtained from the $p$-Laplace equation by letting $p \to \infty$. G. Aronsson derived the $\infty$-Laplace equation in 1967. Several properties and applications emerge in the limit $p \to \infty$; we will concentrate on the stochastic game tug-of-war. Another example, important but not treated here, is image processing, where the $\infty$-Laplace equation is better suited than the ordinary Laplace equation: its most advantageous property is that boundary values can be prescribed even at isolated boundary points, while the boundary may be arbitrarily irregular. Since there is no flow along the contours, i.e. in the directions of the stream lines, the $\infty$-Laplacian is just the second derivative in the direction of the gradient:
$$\frac{\Delta_\infty u}{|\nabla u|^2} = \frac{\partial^2 u}{\partial \nu^2}, \qquad \nu = \frac{\nabla u}{|\nabla u|}.$$

The modern theory of viscosity solutions, developed for partial differential equations in general, applies to the $\infty$-Laplace equation. It is this theory that enables a proper definition of solutions, since it circumvents the problem that the second derivatives do not always exist. One example of a viscosity solution of $\Delta_\infty u = 0$ (in $\mathbb{R}^2$) is the function $u(x, y) = x^{4/3} - y^{4/3}$. An example of a viscosity subsolution in $\mathbb{R}^n$ is the function $x \mapsto |x - x_0|$. If $\int_{\mathbb{R}^n} \rho(y)\,dy < \infty$, where $\rho \geq 0$, then the function defined by the superposition
$$V(x) = \int_{\mathbb{R}^n} |x - y|\,\rho(y)\,dy$$
is a viscosity subsolution. [1]

The connection between probability and the $\infty$-Laplace equation was discovered by Y. Peres, O. Schramm, S. Sheffield and D. Wilson in 2009, through the stochastic zero-sum game tug-of-war. A brief summary of the game follows. A token is placed at some point $x_0$ in a domain; at step $n$ it moves within the ball $B_\varepsilon(x_n)$. With probability $\alpha/2$ the first player moves the token to a new position $x_{n+1}$, and with the same probability $\alpha/2$ the second player moves it. With probability $\beta = 1 - \alpha \in [0, 1]$ the token instead takes a random step in $B_\varepsilon(x_n)$, a discrete counterpart of Brownian motion, which neither player controls. The game ends when the token leaves the domain, and the second player then pays the first player the amount given by a boundary pay-off function evaluated at the exit point. [2]
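The rules above can be sketched as a Monte Carlo simulation. The following is a toy illustration of my own, not from the thesis (names such as `tug_of_war_value` are hypothetical): the domain is the interval $(0, 1)$, and both players use the greedy strategy, which is reasonable here since the pay-off is monotone in the exit position.

```python
import random

def tug_of_war_value(x0, eps, payoff, alpha=1.0, n_games=4000, seed=1):
    """Monte Carlo estimate of the tug-of-war game value on the interval (0, 1).

    Player I (the maximizer) greedily steps +eps, Player II (the minimizer)
    steps -eps; with probability beta = 1 - alpha the token instead takes a
    uniform random step in (-eps, eps), the "noise" standing in for Brownian
    motion.  The game ends when the token leaves (0, 1).
    """
    random.seed(seed)
    total = 0.0
    for _ in range(n_games):
        x = x0
        while 0.0 < x < 1.0:
            if random.random() < alpha:
                # a fair coin decides which player moves; each plays greedily
                x += eps if random.random() < 0.5 else -eps
            else:
                x += random.uniform(-eps, eps)
        total += payoff(x)
    return total / n_games

# Pay-off 1 if the token exits at the right end, 0 at the left end.
F = lambda x: 1.0 if x >= 1.0 else 0.0
value = tug_of_war_value(0.5, 0.1, F)
```

In one dimension the $\infty$-harmonic function with these boundary values is linear, $u(x) = x$, so the estimated value should be close to $x_0 = 0.5$; with noise ($\alpha < 1$) the game value instead approximates a $p$-harmonious function, as discussed later in the thesis.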

2 The p-laplace Equation In order to show existence and uniqueness for the Dirichlet boundary value problem, the Sobolev space needs to be introduced. Definition 1. (Sobolev space) The Sobolev space is denoted by W 1,p () and consists of functions u, such that u, u L p (), u = ( u, u,..., u ). (1) x 1 x 2 x n Provided with the following norm, which is a Banach space 1. [1] u W 1,p ()= u L p () + u L p () (2) Hence, all Lipschitz continuous functions defined in the domain belongs to the space W 1, () and W 1,p 0 () is the closure C0 () with respect to the Sobolev norm. It is necessary for p to have a large value and if p > n, then the boundary values are considered in the classical sense and the Sobolev space only consists of continuous functions (n = the number of coordinates in R n ). Consequently, when p > n all domains are regular for the Dirichlet problem. A variant of Morrey s inequality from (Lindqvist 2015, 11) with the important property that the constant remains bounded when p is large follows, Lemma 2. Let p > n and assume that is an arbitrary bounded domain in R n. If v W 1,p 0 (), then v(x) v(y) 2pn n x y 1 p v p n L p () (3) for a.e. x, y. One can redefine v in a set of measure zero and extend it to the boundary so that v C 1 n p () and v = 0. The inequality is valid for many domains even if the boundary values are not required to be zero. For instance, the inequality is valid for v W 1,p (), if the domain is a cube Q. 1 "A Banach space is a normed vector space which is complete under the metric associated with the norm." [3] 3

Consider $v \in C^1(Q) \cap W^{1,p}(Q)$; when $p \to \infty$ the behavior of the constant is crucial. Let $v_Q$ denote the average of $v$ over $Q$. Starting from
$$v(x) - v(y) = -\int_0^1 \frac{d}{dt}\, v(x + t(y - x))\,dt = -\int_0^1 \langle y - x, \nabla v(x + t(y - x))\rangle\,dt,$$
we take the average with respect to $y$ over $Q$:
$$|v(x) - v_Q| = \Big|\fint_Q \int_0^1 \langle y - x, \nabla v(x + t(y - x))\rangle\,dt\,dy\Big|$$
$$\leq \operatorname{diam}(Q) \fint_Q \int_0^1 |\nabla v(x + t(y - x))|\,dt\,dy \leq \operatorname{diam}(Q) \int_0^1 \Big(\fint_Q |\nabla v(x + t(y - x))|^p\,dy\Big)^{\frac{1}{p}} dt.$$
Now let $\xi = x + t(y - x)$, $d\xi = t^n\,dy$; then we obtain for the inner integral
$$\fint_Q |\nabla v(x + t(y - x))|^p\,dy = \frac{1}{t^n |Q|} \int_{Q_t} |\nabla v(\xi)|^p\,d\xi \leq \frac{1}{t^n |Q|} \int_Q |\nabla v(\xi)|^p\,d\xi,$$
where $Q_t \subset Q$. Then, recalling that $p > n$,
$$|v(x) - v_Q| \leq \frac{\operatorname{diam}(Q)}{|Q|^{1/p}} \int_0^1 t^{-\frac{n}{p}}\,dt\; \|\nabla v\|_{L^p(Q)} = \frac{\operatorname{diam}(Q)}{1 - \frac{n}{p}}\, \frac{\|\nabla v\|_{L^p(Q)}}{|Q|^{1/p}}.$$
By the triangle inequality we obtain
$$|v(x) - v(y)| \leq |v(x) - v_Q| + |v(y) - v_Q| \leq \frac{2p}{p - n}\, \operatorname{diam}(Q)\, \frac{\|\nabla v\|_{L^p(Q)}}{|Q|^{1/p}}.$$

To sum up, given $x$ and $y$ one can pick an auxiliary cube $Q' \subset \Omega$ containing them with $|x - y| \approx \operatorname{diam}(Q')$.

Theorem 3 (cf. Lindqvist 2015, 13–14). Pick $p > n$ and consider an arbitrary bounded domain $\Omega \subset \mathbb{R}^n$. Assume that $g \in C(\overline{\Omega}) \cap W^{1,p}(\Omega)$ is given. Then there exists a unique function $u \in C(\overline{\Omega}) \cap W^{1,p}(\Omega)$ with boundary values $g$ which minimizes the variational integral
$$I(v) = \int_\Omega |\nabla v|^p\,dx \qquad (4)$$
among all similar functions. The minimizer is a weak solution of the $p$-Laplace equation, i.e.
$$\int_\Omega \langle |\nabla u|^{p-2}\nabla u, \nabla \eta\rangle\,dx = 0 \qquad (5)$$
whenever $\eta \in C_0^\infty(\Omega)$. On the other hand, a weak solution in $C(\overline{\Omega}) \cap W^{1,p}(\Omega)$ is always a minimizer (among functions with its own boundary values).

Proof. The following is based on the proof in (Lindqvist 2015, 14–16). The uniqueness of the minimizer follows, upon integration, from the strict convexity
$$\Big|\frac{\nabla u_1 + \nabla u_2}{2}\Big|^p < \frac{|\nabla u_1|^p + |\nabla u_2|^p}{2} \quad \text{when } \nabla u_1 \neq \nabla u_2.$$
Namely, if $u_1$ and $u_2$ were two minimizers, $\frac{u_1 + u_2}{2}$ would be admissible and
$$I(u_1) \leq I\Big(\frac{u_1 + u_2}{2}\Big) < \frac{I(u_1) + I(u_2)}{2} = I(u_1),$$
unless $\nabla u_1 = \nabla u_2$ almost everywhere (a.e.). Since the boundary values coincide, $u_1 = u_2$, and the contradiction is avoided.

From the minimizing property one can deduce the Euler–Lagrange equation. The function $v(x) = u(x) + \varepsilon\eta(x)$ is admissible, so $I(u) \leq I(u + \varepsilon\eta)$. Then, by infinitesimal calculus,
$$\frac{d}{d\varepsilon}\, I(u + \varepsilon\eta) = 0 \quad \text{for } \varepsilon = 0.$$
Therefore the first variation vanishes, i.e. equation (5) holds.

The existence of the minimizer can be shown by the Dirichlet method in the calculus of variations, due to Lebesgue. Let
$$I_0 = \inf_v \int_\Omega |\nabla v|^p\,dx,$$
the infimum being taken over the class of admissible functions, so that $0 \leq I_0 \leq I(g) < \infty$. Consider a minimizing sequence of admissible functions $v_j$,
$$\lim_{j \to \infty} I(v_j) = I_0.$$
We may suppose that $I(v_j) < I_0 + 1$, $j = 1, 2, \dots$, and that
$$\min_{\partial\Omega} g \leq v_j(x) \leq \max_{\partial\Omega} g \quad \text{in } \Omega,$$
considering the functions to be cut at the constant heights $\min g$ and $\max g$; observe that the integral decreases by this procedure. The Sobolev norms $\|v_j\|_{W^{1,p}(\Omega)}$ are then uniformly bounded. (The conventional way is not to cut the functions but to use the Sobolev inequality
$$\|v_j - g\|_{L^p(\Omega)} \leq C\, \|\nabla (v_j - g)\|_{L^p(\Omega)}$$
to bound the norms uniformly.) Indeed, for $j = 1, 2, 3, \dots$,
$$\|v_j\|_{L^p(\Omega)} \leq \|v_j - g\|_{L^p(\Omega)} + \|g\|_{L^p(\Omega)} \leq C\,\|\nabla(v_j - g)\|_{L^p(\Omega)} + \|g\|_{L^p(\Omega)}$$
$$\leq C\big[\|\nabla v_j\|_{L^p(\Omega)} + \|\nabla g\|_{L^p(\Omega)}\big] + \|g\|_{L^p(\Omega)} \leq C\big[(I_0 + 1)^{1/p} + \|\nabla g\|_{L^p(\Omega)}\big] + \|g\|_{L^p(\Omega)} \leq M < \infty.$$

By weak compactness we can conclude that there exist a function $u \in W^{1,p}(\Omega)$ and a subsequence such that
$$\nabla v_{j_k} \rightharpoonup \nabla u \quad \text{weakly in } L^p(\Omega).$$
From Lemma 2 we see that $u \in C(\overline{\Omega})$, since $p > n$. Thus $u = g$ on $\partial\Omega$ and the continuity is established! From the weak lower semi-continuity of convex integrals we obtain
$$I(u) \leq \liminf_{k \to \infty} I(v_{j_k}) = I_0.$$
On the other hand $I(u) \geq I_0$, since $u$ is admissible. Hence $u$ is a minimizer and the existence is established!

It remains to show that the weak solutions of the Euler–Lagrange equation are minimizers. (Note that there are variational integrals for which this is not true.) Integrating the inequality²
$$|\nabla(u + \eta)|^p \geq |\nabla u|^p + p\,\langle |\nabla u|^{p-2}\nabla u, \nabla \eta\rangle,$$
we get
$$\int_\Omega |\nabla(u + \eta)|^p\,dx \geq \int_\Omega |\nabla u|^p\,dx + 0 = \int_\Omega |\nabla u|^p\,dx.$$
Thus $u$ is a minimizer. So the weak solutions of the Euler–Lagrange equation are minimizers!

Remark: Even if the given boundary values $g$ are merely continuous, $g \in C(\partial\Omega)$ but $g \notin W^{1,p}(\Omega)$, there exists a unique $p$-harmonic function $u \in C(\overline{\Omega})$ with boundary values $g$. It may then happen that $\int_\Omega |\nabla u|^p\,dx = \infty$. (For the ordinary Dirichlet integral, $p = 2$, Hadamard gave a counterexample.)

2.1 Letting $p \to \infty$

To obtain the proper passage to the limit in the $p$-Laplace equation $\Delta_p u = 0$ as $p \to \infty$, the connection with the problem of extending Lipschitz boundary values to the whole domain has to be made. Lipschitz continuity is defined as follows.

Definition 4. A function $f: \Omega \to \mathbb{R}$ is Lipschitz continuous with constant $L$ if
$$|f(x) - f(y)| \leq L\,|x - y|, \quad \text{where } x, y \in \Omega.$$

Let $g: \partial\Omega \to \mathbb{R}$ be Lipschitz continuous,
$$|g(\xi_1) - g(\xi_2)| \leq L\,|\xi_1 - \xi_2|, \quad \text{where } \xi_1, \xi_2 \in \partial\Omega.$$

The following bounds were found by McShane and Whitney. For a Lipschitz continuous function $u: \overline{\Omega} \to \mathbb{R}$ with constant $L$ and boundary values $g = u|_{\partial\Omega}$, the following holds:
$$\max_{\xi \in \partial\Omega}\{g(\xi) - L\,|x - \xi|\} \leq u(x) \leq \min_{\xi \in \partial\Omega}\{g(\xi) + L\,|x - \xi|\}.$$
The two bounds, as functions of $x$, are themselves Lipschitz extensions of $g$ with the Lipschitz constant $L$. Note that uniqueness fails in several variables: the two bounds need not coincide, and then there are many Lipschitz continuous extensions with the same Lipschitz constant.

² Observe that, since $|w|^p$ is convex, the inequality $|b|^p \geq |a|^p + p\,\langle |a|^{p-2}a,\; b - a\rangle$ holds for vectors.
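The McShane–Whitney bounds can be computed directly. Below is a small numerical sketch of my own (the function names are hypothetical, not from the thesis): the boundary of the unit square is sampled at finitely many points, and the two extremal extensions are evaluated at an interior point.

```python
import math

def mcshane_whitney(boundary_pts, g, L):
    """Return the two extremal Lipschitz extensions of g (given on a finite
    sample of boundary points) with Lipschitz constant L:

    lower(x) = max_xi (g(xi) - L|x - xi|)   (McShane)
    upper(x) = min_xi (g(xi) + L|x - xi|)   (Whitney)
    """
    def dist(x, y):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    def lower(x):
        return max(g(xi) - L * dist(x, xi) for xi in boundary_pts)

    def upper(x):
        return min(g(xi) + L * dist(x, xi) for xi in boundary_pts)

    return lower, upper

# Boundary of the unit square, sampled; g(x, y) = x*y has Lipschitz
# constant sqrt(2) on the square, since |grad g| = |(y, x)| <= sqrt(2).
pts = [(i / 20, t) for i in range(21) for t in (0.0, 1.0)] \
    + [(t, i / 20) for i in range(21) for t in (0.0, 1.0)]
g = lambda p: p[0] * p[1]
lower, upper = mcshane_whitney(pts, g, math.sqrt(2))
```

At the centre $(0.5, 0.5)$ the two bounds differ, which illustrates the failure of uniqueness: every Lipschitz extension with the same constant lies between them, and there are many.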

Rademacher's theorem states that a Lipschitz function is a.e. differentiable.

Theorem 5 (Rademacher's Theorem). Let the function $f$ be Lipschitz continuous. Then $f$ is totally differentiable a.e. in the domain: the expansion
$$f(y) = f(x) + \langle \nabla f(x), y - x\rangle + o(|y - x|), \quad \text{as } y \to x,$$
holds at almost every point $x$.

The following is a special case of the Ascoli theorem from (Lindqvist 2015, 8).

Theorem 6 (Ascoli's Theorem). Let the sequence $f_k: \Omega \to \mathbb{R}$ be equibounded,
$$\sup_\Omega |f_k(x)| \leq M < \infty, \quad k = 1, 2, \dots,$$
and equicontinuous,
$$|f_k(x) - f_k(y)| \leq C\,|x - y|^\alpha, \quad k = 1, 2, \dots.$$
Then there exist a continuous function $f$ and a subsequence $f_{k_j}$ such that $f_{k_j} \to f$ locally uniformly in $\Omega$. If the domain is bounded, all the functions can be extended continuously to the boundary and the convergence is uniform in the closure $\overline{\Omega}$.

Proof. The proof follows from (Lindqvist 2015, 9–10). Construct a subsequence which converges at the rational points, and let $q_1, q_2, q_3, \dots$ be a numbering of the rational points in $\Omega$. By the assumptions the sequence $f_1(q_1), f_2(q_1), f_3(q_1), \dots$ is bounded, and according to Weierstrass's theorem³ it has a convergent subsequence $f_{1j}(q_1)$, $j = 1, 2, 3, \dots$. Now consider the point $q_2$: the sequence $f_{11}(q_2), f_{12}(q_2), f_{13}(q_2), \dots$ is bounded, so one can extract a convergent subsequence $f_{21}(q_2), f_{22}(q_2), f_{23}(q_2), \dots$. Proceeding to extract subsequences of subsequences, the following scheme is obtained:

$f_{11}, f_{12}, f_{13}, f_{14}, \dots$ converges at $q_1$
$f_{21}, f_{22}, f_{23}, f_{24}, \dots$ converges at $q_1, q_2$
$f_{31}, f_{32}, f_{33}, f_{34}, \dots$ converges at $q_1, q_2, q_3$
$f_{41}, f_{42}, f_{43}, f_{44}, \dots$ converges at $q_1, q_2, q_3, q_4$
$\dots$

The diagonal sequence $f_{11}, f_{22}, f_{33}, f_{44}, \dots$ converges at every rational point. Simplify the notation by writing $f_{k_j} = f_{jj}$.

³ Theorem (Weierstrass): every bounded infinite subset of $\mathbb{R}^k$ has a limit point in $\mathbb{R}^k$. [4]

We claim that the constructed diagonal sequence converges at each point of the domain, rational or not. Let $x$ be an arbitrary point and let $q$ be a rational point near $x$. Then
$$|f_{k_j}(x) - f_{k_i}(x)| \leq |f_{k_j}(x) - f_{k_j}(q)| + |f_{k_j}(q) - f_{k_i}(q)| + |f_{k_i}(q) - f_{k_i}(x)| \leq 2C\,|x - q|^\alpha + |f_{k_j}(q) - f_{k_i}(q)|.$$
Given $\varepsilon > 0$, fix $q$ so close to $x$ that $2C\,|x - q|^\alpha < \varepsilon/2$; this is possible since the rational points are dense⁴. At the rational point $q$ the sequence converges, so
$$|f_{k_j}(x) - f_{k_i}(x)| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon$$
for sufficiently large indices $i, j$. By Cauchy's general convergence criterion⁵ the sequence converges at the point $x$. This establishes the existence of the pointwise limit
$$f(x) = \lim_{j \to \infty} f_{k_j}(x).$$

It remains to show that the convergence is locally uniform. Let $K \subset \Omega$ be compact, covered by balls $B(x, r)$ with diameter $2r = \varepsilon^{1/\alpha}$. Then $K$ is covered by a finite number of these balls,
$$K \subset \bigcup_{m=1}^N B(x_m, r).$$
Pick a rational point $q_m \in B(x_m, r)$ from each ball. Since only a finite number of the chosen rational points $q_m$ are involved, an index $N_\varepsilon$ can be fixed such that
$$\max_m |f_{k_j}(q_m) - f_{k_i}(q_m)| < \varepsilon, \quad \text{for } i, j > N_\varepsilon.$$
Let $x \in K$ be arbitrary; it belongs to some ball $B(x_m, r)$. Hence we can write
$$|f_{k_j}(x) - f_{k_i}(x)| \leq 2C\,|x - q_m|^\alpha + |f_{k_j}(q_m) - f_{k_i}(q_m)| \leq 2C(2r)^\alpha + |f_{k_j}(q_m) - f_{k_i}(q_m)| \leq 2C\varepsilon + \varepsilon, \qquad (6)$$
for $i, j > N_\varepsilon$. Since the index $N_\varepsilon$ is independent of how the point $x$ is chosen, the convergence is uniform in $K$.

⁴ Definition: Let $X$ be a metric space. $E \subset X$ is dense in $X$ if every point of $X$ is a limit point of $E$, or a point of $E$ (or both). [4]
⁵ Cauchy convergence criterion: a sequence converges in $\mathbb{R}^k$ if and only if it is a Cauchy sequence. [4]

So, recall that we want to extend the Lipschitz boundary values over the whole domain. Extending $g$ as in Definition 4, we have $g \in C(\overline{\Omega}) \cap W^{1,\infty}(\Omega)$ with
$$\|\nabla g\|_{L^\infty(\Omega)} \leq L.$$
(To be clear, the notation refers to the extended function.) We want to minimize
$$I_p(v) = \int_\Omega |\nabla v|^p\,dx$$
among all functions in $W^{1,p}(\Omega)$ with boundary values $v - g \in W^{1,p}_0(\Omega)$, and then let $p \to \infty$. Recall $p > n$. By Theorem 3 a unique minimizer $u_p$ exists, $u_p \in C(\overline{\Omega}) \cap W^{1,p}(\Omega)$, and on the boundary $u_p = g$. From the minimization property we obtain
$$\|\nabla u_p\|_{L^p(\Omega)} \leq \|\nabla g\|_{L^p(\Omega)} \leq L\,|\Omega|^{1/p}.$$
Using both Ascoli's theorem and Morrey's inequality (Lemma 2, applied to $v_p = u_p - g$), it follows that
$$|u_p(x) - u_p(y)| \leq |g(x) - g(y)| + |v_p(x) - v_p(y)|$$
$$\leq L\,|x - y| + \frac{2pn}{p - n}\,|x - y|^{1 - \frac{n}{p}}\,\|\nabla v_p\|_{L^p(\Omega)}$$
$$\leq L\,|x - y| + \frac{2pn}{p - n}\,|x - y|^{1 - \frac{n}{p}}\big(\|\nabla u_p\|_{L^p(\Omega)} + \|\nabla g\|_{L^p(\Omega)}\big) \leq L\,|x - y| + LC\,|x - y|^{1 - \frac{n}{p}}$$
when $p > n + 1$, together with the bound $\|u_p\|_{L^\infty(\Omega)} \leq \|g\|_{L^\infty(\Omega)} + LC$. Since the family $\{u_p\}$, $p > n + 1$, is equibounded and equicontinuous, Ascoli's theorem (Theorem 6) is applicable: it is possible to extract a subsequence $p_j \to \infty$ such that
$$u_{p_j} \to u_\infty \quad \text{uniformly in } \overline{\Omega},$$
where $u_\infty \in C(\overline{\Omega})$ has boundary values $u_\infty = g$ on $\partial\Omega$. Hence
$$|u_\infty(x) - u_\infty(y)| \leq C\,|x - y|,$$
where $C$ depends on $\Omega$ and $L$.
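The fact that $L^p$-norms approach the $L^\infty$-norm as $p \to \infty$ is used repeatedly in this passage and the next. A toy numerical illustration of my own (on the unit interval, whose measure is 1, so the norms are increasing in $p$):

```python
# Illustration of ||f||_{L^p} -> ||f||_{L^inf} as p -> infinity, for
# f(x) = x on (0, 1): exactly, ||f||_p = (1/(p+1))^(1/p) and ||f||_inf = 1.

def lp_norm(f, p, n=100000):
    # midpoint-rule approximation of (int_0^1 |f(x)|^p dx)^(1/p)
    s = sum(abs(f((k + 0.5) / n)) ** p for k in range(n)) / n
    return s ** (1 / p)

f = lambda x: x
norms = [lp_norm(f, p) for p in (2, 10, 100, 1000)]
# the norms increase towards the essential supremum 1
```

On a measure-1 space the $L^p$-norms are non-decreasing in $p$ by Hölder's inequality, which is why the sequence climbs monotonically towards $\operatorname{ess\,sup} |f| = 1$.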

For $p_j > s$, Hölder's inequality yields
$$\Big(\fint_\Omega |\nabla u_{p_j}|^s\,dx\Big)^{\frac{1}{s}} \leq \Big(\fint_\Omega |\nabla u_{p_j}|^{p_j}\,dx\Big)^{\frac{1}{p_j}} \leq \Big(\fint_\Omega |\nabla g|^{p_j}\,dx\Big)^{\frac{1}{p_j}} \leq L.$$
Hence $\nabla u_{p_j} \rightharpoonup \nabla u_\infty$ weakly in $L^s(\Omega)$, at least for a subsequence of $p_j$. From weak lower semi-continuity we have
$$\int_\Omega |\nabla u_\infty|^s\,dx \leq \liminf_{j \to \infty} \int_\Omega |\nabla u_{p_j}|^s\,dx.$$
Since weak convergence in $L^p$ implies weak convergence in $L^s$ for $s < p$, a single subsequence can be extracted through a diagonalization procedure such that
$$\nabla u_{p_j} \rightharpoonup \nabla u_\infty \quad \text{weakly in each } L^s(\Omega) \text{ simultaneously}, \quad n + 1 < s < \infty.$$
Hence
$$\Big(\fint_\Omega |\nabla u_\infty|^s\,dx\Big)^{\frac{1}{s}} \leq L.$$
Since $s$ may be taken arbitrarily large, letting $s \to \infty$ it follows that
$$\|\nabla u_\infty\|_{L^\infty(\Omega)} \leq L,$$
where $L$ is the Lipschitz constant of $g$; thus $u_\infty$ is itself Lipschitz continuous with constant $L$.

Theorem 7 (The Existence Theorem). Given $g \in C(\partial\Omega) \cap W^{1,\infty}(\Omega)$, there exists a function $u_\infty \in C(\overline{\Omega}) \cap W^{1,\infty}(\Omega)$ with boundary values $u_\infty = g$ on $\partial\Omega$ which has the following minimizing property in each subdomain $D \subset \Omega$: if $v \in C(\overline{D}) \cap W^{1,\infty}(D)$ and $v = u_\infty$ on $\partial D$, then
$$\|\nabla u_\infty\|_{L^\infty(D)} \leq \|\nabla v\|_{L^\infty(D)}.$$
Moreover, $u_\infty$ can be obtained as the uniform limit $\lim u_{p_j}$, where $u_{p_j}$ denotes the solution of the $p_j$-Laplace equation with boundary values $g$.

Proof. The following is based on the proof in (Lindqvist 2015, 19–21). We show that the minimizing property holds for the constructed function $u_\infty = \lim_j u_{p_j}$ in $D$. (Observe that uniqueness is not proved here.) Let $v$ be a given function in $D$ as in the theorem, and let $v_{p_j}$ denote the solution of $\Delta_{p_j} v_{p_j} = 0$ in $D$ with boundary values $v_{p_j} = v\,(= u_\infty)$ on $\partial D$. Hence the minimization property of $v_{p_j}$ is
$$\int_D |\nabla v_{p_j}|^{p_j}\,dx \leq \int_D |\nabla v|^{p_j}\,dx.$$
We claim that $v_{p_j} \to u_\infty$ uniformly in $D$. Since
$$v_{p_j} - u_\infty = (v_{p_j} - u_{p_j}) + (u_{p_j} - u_\infty),$$
it is enough to show that $\|v_{p_j} - u_{p_j}\|_{L^\infty(D)} \to 0$. Using the fact that for the difference of two solutions of the $p$-Laplace equation the maximum is attained at the boundary,
$$\max_D (v_{p_j} - u_{p_j}) = \max_{\partial D} (v_{p_j} - u_{p_j}) = \max_{\partial D} (u_\infty - u_{p_j}) \leq \|u_\infty - u_{p_j}\|_{L^\infty(D)} \to 0,$$
and the same holds for $u_{p_j} - v_{p_j}$. Thus the claim has been proved.

To justify the fact used above, suppose that
$$\alpha = \max_D (v_p - u_p) > \max_{\partial D} (v_p - u_p) = \beta,$$
and define the open set
$$G = \Big\{ x \in D : v_p(x) - u_p(x) > \frac{\alpha + \beta}{2} \Big\}.$$
Then $G \subset\subset D$, and on the boundary $\partial G$ we have $v_p = u_p + \frac{\alpha + \beta}{2}$. Thus $v_p$ and $u_p + \frac{\alpha + \beta}{2}$ are both solutions of the $p$-Laplace equation in $G$ with the same boundary values, so by uniqueness they coincide in $G$. This is a contradiction at the interior maximum point(s); consequently, as claimed, the maximum is at the boundary.

Recall that $v_{p_j} \to u_\infty$ in $D$. Now fix $s \gg 1$. Then
$$\Big(\fint_D |\nabla v_{p_j}|^s\,dx\Big)^{\frac{1}{s}} \leq \Big(\fint_D |\nabla v_{p_j}|^{p_j}\,dx\Big)^{\frac{1}{p_j}} \leq \Big(\fint_D |\nabla v|^{p_j}\,dx\Big)^{\frac{1}{p_j}} \leq \|\nabla v\|_{L^\infty(D)}$$

for $p_j > s$. By weak lower semi-continuity, it is possible to extract a weakly convergent subsequence $\nabla v_{p_j} \rightharpoonup \nabla u_\infty$ in $L^s(D)$, and then
$$\Big(\fint_D |\nabla u_\infty|^s\,dx\Big)^{\frac{1}{s}} \leq \|\nabla v\|_{L^\infty(D)}.$$
Thus, letting $s \to \infty$,
$$\|\nabla u_\infty\|_{L^\infty(D)} \leq \|\nabla v\|_{L^\infty(D)}.$$

If the classical solutions do not have critical points, the uniqueness can be shown by proving the comparison principle
$$\min_{\partial\Omega}(v - u) \leq \min_\Omega(v - u), \qquad (7)$$
where $u, v \in C(\overline{\Omega}) \cap C^2(\Omega)$ satisfy $\Delta_\infty u \geq 0$ and $\Delta_\infty v \leq 0$ pointwise in the bounded domain $\Omega$, under the extra assumption $\nabla v \neq 0$.

Proof. Use the antithesis $\min_\Omega(v - u) < \min_{\partial\Omega}(v - u)$. Note that $\nabla v = \nabla u$ holds at an interior minimum point, but this is not yet a contradiction. So let us consider
$$w = \frac{1 - e^{-\alpha v}}{\alpha} = v - \frac{\alpha v^2}{2} + \dots,$$
where $\alpha > 0$ is sufficiently small such that still
$$\min_\Omega(w - u) < \min_{\partial\Omega}(w - u).$$
At an interior minimum point of $w - u$ we have $\nabla w = \nabla u$ and $D^2(w - u) \geq 0$, hence $0 \leq \Delta_\infty u \leq \Delta_\infty w$. But
$$\Delta_\infty w = e^{-3\alpha v}\big(\Delta_\infty v - \alpha\,|\nabla v|^4\big) \leq -\alpha\, e^{-3\alpha v}\, |\nabla v|^4 < 0.$$
This is a contradiction, and we have uniqueness for the Dirichlet boundary value problem.

Remark: There is a problem of unique continuation, and some properties are lost as $p \to \infty$. When $p$ is finite the principle of unique continuation holds in the plane,

though in space the problem appears to be open. However, for the $\infty$-Laplace equation it fails! There is an example that shows this:
$$u(x, y) = \begin{cases} 1 - \sqrt{x^2 + y^2} & \text{if } x \geq 0,\ y \geq 0,\\[2pt] 1 - y & \text{if } x \leq 0,\ y \geq 0, \end{cases} \qquad (8)$$
set in the half-plane $y \geq 0$. When drawing the surface $z = u(x, y)$, we notice a cone together with one of its tangent planes. The function (8) is a viscosity solution of the $\infty$-Laplace equation, and in some subdomain it coincides with the $\infty$-harmonic function $1 - y$ without being identical to it.

Real analyticity is also lost. The $p$-harmonic functions are of class $C^{1,\alpha}_{\mathrm{loc}}$, i.e. their gradients are locally Hölder continuous, and in the open set where their gradients are not zero the functions are real analytic. This does not hold for the $\infty$-harmonic functions. For the equation $\Delta_\infty u = 0$ the critical points can be removed by adding an extra variable $x_{n+1}$: observe that both $u(x_1, x_2, \dots, x_n)$ and
$$v(x_1, x_2, \dots, x_n, x_{n+1}) = u(x_1, x_2, \dots, x_n) + x_{n+1}$$
are $\infty$-harmonic, and $\nabla v \neq 0$. So, in some sense, the local situation with entirely no critical points is always reached. [1]

3 The Infinity-Laplace Equation

The $p$-Laplace equation,
$$\Delta_p u \equiv |\nabla u|^{p-4}\big( |\nabla u|^2\, \Delta u + (p - 2)\,\Delta_\infty u \big) = 0, \qquad (9)$$
has the $\infty$-Laplace equation as its limit when $p \to \infty$. Dividing by $|\nabla u|^{p-4}$ and by $p - 2$ we obtain
$$\frac{|\nabla u|^2\, \Delta u}{p - 2} + \Delta_\infty u = 0,$$
and letting $p \to \infty$,
$$\Delta_\infty u = 0.$$
The solutions are called $\infty$-harmonic functions, and the $\infty$-Laplacian is denoted
$$\Delta_\infty u \equiv \sum_{i,j=1}^n \frac{\partial u}{\partial x_i}\, \frac{\partial u}{\partial x_j}\, \frac{\partial^2 u}{\partial x_i \partial x_j},$$
where $u = u(x_1, x_2, \dots, x_n)$ is a function of $n$ variables.

Classical solutions

A classical solution is a function $u \in C^2(\Omega)$ satisfying the equation pointwise, where the domain $\Omega$ lies in the $n$-dimensional Euclidean space $\mathbb{R}^n$. Examples of classical solutions, in domains where they are of class $C^2$, are
$$\langle a, x\rangle + b, \qquad b\,|x - a| + c, \qquad a\sqrt{x_1^2 + x_2^2 + \dots + x_n^2} + b,$$
$$x_1^{4/3} - x_2^{4/3}, \qquad \arctan\Big(\frac{x_2}{x_1}\Big), \qquad \arctan\Big(\frac{x_3}{\sqrt{x_1^2 + x_2^2}}\Big),$$
$$a_1 x_1^{4/3} + a_2 x_2^{4/3} + \dots + a_n x_n^{4/3}, \quad \text{where } a_1^3 + a_2^3 + \dots + a_n^3 = 0.$$
All solutions $u \in C^1$ of the eikonal equation $|\nabla u| = C$ are covered because
$$\Delta_\infty u = \frac{1}{2}\,\big\langle \nabla u, \nabla |\nabla u|^2\big\rangle.$$
Solutions in disjoint variables can be superposed, for instance
$$\sqrt{x_1^2 + x_2^2} - 7\sqrt{x_3^2 + x_4^2} + x_5 + x_6^{4/3} - x_7^{4/3}.$$
The classical solutions cannot solve the Dirichlet problem, since they are too few.

Gradient flow

For the classical solutions there is an interesting interpretation of the equation $\Delta_\infty u = 0$.
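One of the examples above, $u = x_1^{4/3} - x_2^{4/3}$, can be checked numerically. The sketch below is my own (the test point is an arbitrary choice away from the coordinate axes, where $u$ is $C^2$); it approximates $\Delta_\infty u = u_x^2 u_{xx} + 2 u_x u_y u_{xy} + u_y^2 u_{yy}$ by central finite differences:

```python
# Finite-difference check that u(x, y) = x^(4/3) - y^(4/3) satisfies
# the infinity-Laplace equation away from the axes.

def u(x, y):
    return x ** (4 / 3) - y ** (4 / 3)

def infinity_laplacian(f, x, y, h=1e-4):
    # central differences for first and second derivatives
    ux = (f(x + h, y) - f(x - h, y)) / (2 * h)
    uy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    uxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    uyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    uxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return ux ** 2 * uxx + 2 * ux * uy * uxy + uy ** 2 * uyy

residual = infinity_laplacian(u, 0.7, 0.3)
```

Analytically $u_x^2 u_{xx} = 64/81$ and $u_y^2 u_{yy} = -64/81$ while $u_{xy} = 0$, so the terms cancel exactly and the residual is only discretization noise.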

Let $x(t) = (x_1(t), x_2(t), \dots, x_n(t))$ be a sufficiently smooth curve. Differentiating $|\nabla u(x(t))|^2$ along the curve we obtain
$$\frac{d}{dt}\,|\nabla u(x(t))|^2 = 2 \sum_{i,j=1}^n \frac{\partial u}{\partial x_i}\, \frac{\partial^2 u}{\partial x_i \partial x_j}\, \frac{dx_j}{dt}.$$
Hence, if the curve is a solution of
$$\frac{dx}{dt} = \nabla u(x(t)), \qquad (10)$$
i.e. a stream line of the gradient flow, we obtain along it
$$\frac{d}{dt}\,|\nabla u(x(t))|^2 = 2\,\Delta_\infty u(x(t)).$$
Note that $|\nabla u|$ denotes the speed, and it is constant along the stream lines precisely when $\Delta_\infty u = 0$. The fastest growth occurs in the direction of the gradient, and the level surfaces of the function $u$ have normal $\nabla u$.

Variational solutions

Let $g$ be a Lipschitz continuous function defined on the boundary of a bounded domain $\Omega$. Again, start by minimizing the variational integral
$$I(u) = \int_\Omega |\nabla u(x)|^p\,dx$$
among all functions in $C(\overline{\Omega}) \cap C^1(\Omega)$ with boundary values $g$. Hence a solution $u_p$ of the $p$-Laplace equation (9) exists, which is the unique minimizer. A variational solution of the $\infty$-Laplace equation is then obtained as a uniform limit
$$u_\infty = \lim_{j \to \infty} u_{p_j}$$
through suitable sequences $p_j \to \infty$, and the function is Lipschitz continuous. Because
$$\lim_{p \to \infty} \Big( \fint_D |\nabla u(x)|^p\,dx \Big)^{\frac{1}{p}} = \operatorname*{ess\,sup}_{x \in D}\, |\nabla u(x)| = \|\nabla u\|_{L^\infty(D)},$$
a natural conjecture is that $u_\infty$ is a minimizer of $\|\nabla u\|_{L^\infty}$. For each subdomain $D \subset \Omega$ it must then hold that
$$\|\nabla u_\infty\|_{L^\infty(D)} \leq \|\nabla v\|_{L^\infty(D)}$$
for all $v$ that are Lipschitz continuous in $D$ with $v = u_\infty$ on the boundary $\partial D$. Considering that (for convex $D$)
$$\sup_{x, y \in D}\, \frac{|u(x) - u(y)|}{|x - y|} = \|\nabla u\|_{L^\infty(D)},$$
one can interpret this as having an optimal Lipschitz extension of the boundary values $g$. Later on, we will see that all solutions are variational!

3.1 Mean Value Formula

The $\infty$-harmonic functions have some kind of mean value property, and the formula is used in game theory, as we will show further on. There are some properties that one should bear in mind. From Gauss we know that a continuous function $u$ is harmonic if and only if it complies with the mean value formula: $\Delta u = 0$ in $\Omega$ if and only if
$$u(x) = \fint_{B(x,\varepsilon)} u(y)\,dy + o(\varepsilon^2) \quad \text{as } \varepsilon \to 0, \qquad (11)$$
for every point $x \in \Omega$. There exists a mean value formula for $p$-harmonic functions as well:
$$u(x) = \frac{p - 2}{p + n}\cdot \frac{\displaystyle\max_{\overline{B}(x,\varepsilon)}\{u\} + \min_{\overline{B}(x,\varepsilon)}\{u\}}{2} + \frac{2 + n}{p + n}\, \fint_{B(x,\varepsilon)} u(y)\,dy + o(\varepsilon^2), \quad \text{as } \varepsilon \to 0,$$
where the first term is non-linear and the second term is linear; note that the larger $p$ is, the "more" non-linear the formula. This formula is used in game theory, where the coefficients add up to 1 (coefficients = probabilities):
$$\frac{p - 2}{p + n} + \frac{2 + n}{p + n} = 1.$$
Note that when $p = 2$ the formula reduces to equation (11), and when $p = \infty$ the asymptotic formula is
$$u(x) = \frac{\displaystyle\max_{\overline{B}(x,\varepsilon)}\{u\} + \min_{\overline{B}(x,\varepsilon)}\{u\}}{2} + o(\varepsilon^2), \quad \text{as } \varepsilon \to 0. \qquad (12)$$
It turns out that $u \in C(\Omega)$ is a viscosity solution of $\Delta_\infty u = 0$ in $\Omega$ if and only if equation (12) holds, in a suitable sense, at every point in $\Omega$. This can be made precise via the following lemma from (Lindqvist 2015, 34).

Lemma 8. Assume that $\varphi \in C^2(\Omega)$ and that $\nabla\varphi(x_0) \neq 0$ at the point $x_0 \in \Omega$. Then the asymptotic formula
$$\varphi(x_0) = \frac{\displaystyle\max_{\overline{B}(x_0,\varepsilon)}\{\varphi\} + \min_{\overline{B}(x_0,\varepsilon)}\{\varphi\}}{2} - \frac{\varepsilon^2}{2}\, \frac{\Delta_\infty\varphi(x_0)}{|\nabla\varphi(x_0)|^2} + o(\varepsilon^2), \quad \text{as } \varepsilon \to 0, \qquad (13)$$
holds.

Proof. The proof follows the one in (Lindqvist 2015, 34–35); Taylor's formula and some infinitesimal calculus are used. Because $\nabla\varphi(x_0) \neq$

$0$, it is possible to pick $\varepsilon$ sufficiently small such that $|\nabla\varphi(x)| > 0$ when $|x - x_0| \leq \varepsilon$. The extremal values in $\overline{B}(x_0, \varepsilon)$ are attained at points $x$ for which
$$\frac{\nabla\varphi(x)}{|\nabla\varphi(x)|} = \pm\,\frac{x - x_0}{\varepsilon}, \quad |x - x_0| = \varepsilon, \qquad (14)$$
as can be discovered by using a Lagrange multiplier; for $\varepsilon$ small, the $+$-sign corresponds to the maximum. The maximum and minimum points are thus approximately the opposite endpoints of some diameter. Further,
$$\frac{\nabla\varphi(x)}{|\nabla\varphi(x)|} = \frac{\nabla\varphi(x_0)}{|\nabla\varphi(x_0)|} + O(\varepsilon), \quad \text{as } \varepsilon \to 0. \qquad (15)$$
For $x \in \partial B(x_0, \varepsilon)$, let $x^*$ denote the exactly opposite endpoint of the diameter, i.e. $x + x^* = 2x_0$. By Taylor expansion,
$$\varphi(y) = \varphi(x_0) + \langle \nabla\varphi(x_0), y - x_0\rangle + \tfrac{1}{2}\,\langle D^2\varphi(x_0)(y - x_0), y - x_0\rangle + o(|y - x_0|^2);$$
adding the expansions for $y = x$ and $y = x^*$, we have
$$\varphi(x) + \varphi(x^*) = 2\varphi(x_0) + \langle D^2\varphi(x_0)(x - x_0), x - x_0\rangle + o(|x - x_0|^2).$$
Observe that the first order terms cancel. Pick $x$ to be a maximal point,
$$\varphi(x) = \max_{|y - x_0| = \varepsilon}\{\varphi(y)\}.$$
The following inequality is obtained by inserting (14) together with (15):
$$\max_{\overline{B}(x_0,\varepsilon)}\{\varphi\} + \min_{\overline{B}(x_0,\varepsilon)}\{\varphi\} \geq \varphi(x) + \varphi(x^*)$$
$$= 2\varphi(x_0) + \langle D^2\varphi(x_0)(x - x_0), x - x_0\rangle + o(|x - x_0|^2)$$
$$= 2\varphi(x_0) + \varepsilon^2\,\Big\langle D^2\varphi(x_0)\,\frac{\nabla\varphi(x_0)}{|\nabla\varphi(x_0)|},\, \frac{\nabla\varphi(x_0)}{|\nabla\varphi(x_0)|}\Big\rangle + o(\varepsilon^2) = 2\varphi(x_0) + \varepsilon^2\, \frac{\Delta_\infty\varphi(x_0)}{|\nabla\varphi(x_0)|^2} + o(\varepsilon^2).$$
Now pick $x$ to be a minimal point,
$$\varphi(x) = \min_{|y - x_0| = \varepsilon}\{\varphi(y)\};$$
then the opposite inequality can be deduced, since
$$\max_{\overline{B}(x_0,\varepsilon)}\{\varphi\} + \min_{\overline{B}(x_0,\varepsilon)}\{\varphi\} \leq \varphi(x^*) + \varphi(x).$$
Hence the asymptotic formula (13) has been proved.
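Both sides of the asymptotic formula (13) can be evaluated numerically. The sketch below is my own (for these test functions the extrema over the closed ball are attained on the bounding circle, so sampling the circle suffices). It uses $\varphi(x, y) = x^2$, for which $\Delta_\infty\varphi/|\nabla\varphi|^2 = 8x^2/4x^2 = 2$, and the $\infty$-harmonic $u = x^{4/3} - y^{4/3}$, for which the correction term vanishes:

```python
import math

def half_max_plus_min(phi, x0, eps, n_samples=20000):
    """(max + min)/2 of phi over the circle of radius eps around x0."""
    vals = [phi(x0[0] + eps * math.cos(2 * math.pi * k / n_samples),
                x0[1] + eps * math.sin(2 * math.pi * k / n_samples))
            for k in range(n_samples)]
    return (max(vals) + min(vals)) / 2

eps = 0.01

# Test function with a non-vanishing correction term in (13).
phi = lambda x, y: x ** 2
lhs = half_max_plus_min(phi, (1.0, 0.0), eps) - phi(1.0, 0.0)
rhs = (eps ** 2 / 2) * 2.0   # (eps^2/2) * Delta_inf(phi)/|grad phi|^2 at (1,0)

# Infinity-harmonic function: the correction term is zero, so the
# deviation from the mean value formula (12) should be o(eps^2).
u = lambda x, y: x ** (4 / 3) - y ** (4 / 3)
dev = half_max_plus_min(u, (0.7, 0.3), eps) - u(0.7, 0.3)
```

For $\varphi = x^2$ at $(1, 0)$ the computation is even exact: $\max = (1+\varepsilon)^2$, $\min = (1-\varepsilon)^2$, so the left-hand side equals $\varepsilon^2$, matching the predicted correction.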

Note that when $\nabla\varphi(x_0) = 0$ there is an issue with formula (13). Hence the following definition from (Lindqvist 2015, 36) contains adjustments for this situation.

Definition 9. A function $u \in C(\Omega)$ satisfies the formula
$$u(x) = \frac{\displaystyle\max_{\overline{B}(x,\varepsilon)}\{u\} + \min_{\overline{B}(x,\varepsilon)}\{u\}}{2} + o(\varepsilon^2), \quad \text{as } \varepsilon \to 0, \qquad (16)$$
in the viscosity sense if the two following conditions hold:

If $x_0 \in \Omega$ and if $\varphi \in C^2(\Omega)$ touches $u$ from below at $x_0$, then
$$\varphi(x_0) \geq \frac{1}{2}\Big( \max_{|y - x_0| \leq \varepsilon}\{\varphi(y)\} + \min_{|y - x_0| \leq \varepsilon}\{\varphi(y)\} \Big) + o(\varepsilon^2), \quad \text{as } \varepsilon \to 0.$$
Besides, if it so happens that $\nabla\varphi(x_0) = 0$, we require that the test function satisfies $D^2\varphi(x_0) \leq 0$.

If $x_0 \in \Omega$ and if $\psi \in C^2(\Omega)$ touches $u$ from above at $x_0$, then
$$\psi(x_0) \leq \frac{1}{2}\Big( \max_{|y - x_0| \leq \varepsilon}\{\psi(y)\} + \min_{|y - x_0| \leq \varepsilon}\{\psi(y)\} \Big) + o(\varepsilon^2), \quad \text{as } \varepsilon \to 0.$$
Besides, if it so happens that $\nabla\psi(x_0) = 0$, we require that the test function satisfies $D^2\psi(x_0) \geq 0$.

Remark: When $\nabla\varphi(x_0) = 0$ or $\nabla\psi(x_0) = 0$, the respective conditions can be written in the form
$$\lim_{y \to x_0} \frac{\varphi(y) - \varphi(x_0)}{|y - x_0|^2} \leq 0, \qquad \lim_{y \to x_0} \frac{\psi(y) - \psi(x_0)}{|y - x_0|^2} \geq 0.$$

The main result for the mean value formula is the following characterization theorem from (Lindqvist 2015, 36).

Theorem 10. Let $u \in C(\Omega)$. Then $\Delta_\infty u = 0$ in the viscosity sense if and only if the mean value formula (16) holds in the viscosity sense.

Proof. The proof follows from (Lindqvist 2015, 36–37). Consider subsolutions. Assume that $\psi$ touches $u$ from above at the point $x_0$. If $\nabla\psi(x_0) \neq 0$ then, according to formula (13), the condition $\Delta_\infty\psi(x_0) \geq 0$ is equivalent to the inequality for $\psi$ in Definition 9, and this case is clear. Suppose therefore that $\nabla\psi(x_0) = 0$. If $u$ satisfies formula (16) in the viscosity sense, the extra assumption
$$\lim_{y \to x_0} \frac{\psi(y) - \psi(x_0)}{|y - x_0|^2} \geq 0$$
can be used. With
$$\psi(x_\varepsilon) = \min_{|x - x_0| \leq \varepsilon} \psi(x),$$

conclude that

$$\begin{aligned}
\liminf_{\varepsilon \to 0}\, \frac{1}{\varepsilon^2}\Bigl\{ \frac12\Bigl( \max_{\overline{B}(x_0,\varepsilon)}\psi + \min_{\overline{B}(x_0,\varepsilon)}\psi \Bigr) - \psi(x_0) \Bigr\}
&= \liminf_{\varepsilon \to 0}\, \frac{1}{\varepsilon^2}\Bigl\{ \frac12\Bigl( \max_{\overline{B}(x_0,\varepsilon)}\psi - \psi(x_0) \Bigr) + \frac12\Bigl( \min_{\overline{B}(x_0,\varepsilon)}\psi - \psi(x_0) \Bigr) \Bigr\} \\
&\ge \liminf_{\varepsilon \to 0}\, \frac{1}{2\varepsilon^2}\Bigl( \min_{\overline{B}(x_0,\varepsilon)}\psi - \psi(x_0) \Bigr) \\
&= \frac12\,\liminf_{\varepsilon \to 0}\, \frac{\psi(x_\varepsilon) - \psi(x_0)}{|x_\varepsilon - x_0|^2}\cdot\frac{|x_\varepsilon - x_0|^2}{\varepsilon^2} \;\ge\; 0,
\end{aligned}$$

where $\frac{|x_\varepsilon - x_0|^2}{\varepsilon^2} \le 1$. Thus the inequality in Definition 9 has been proved. Conversely, the condition $\Delta_\infty\psi(x_0) \ge 0$ is automatically satisfied when $\nabla\psi(x_0) = 0$. Hence the mean value property in the viscosity sense guarantees that a viscosity subsolution of the $\infty$-Laplace equation is obtained.

3.2 Viscosity Solutions

The use of viscosity solutions circumvents the problem that the second derivatives do not always exist for solutions of the $\infty$-Laplace equation. In the Dirichlet problem smooth boundary values can be prescribed such that no $C^2$ solution can attain them: smoothness fails at the critical points, $\nabla u = 0$!

Theorem 11 (Aronsson's Theorem). Assume that $u \in C^2(\Omega)$ and that $\Delta_\infty u = 0$ in $\Omega$. Then either $u$ reduces to a constant or $\nabla u \ne 0$ in $\Omega$.

Proof. The case $n = 2$ was proved with methods from complex analysis by Aronsson in [5]; the case $n \ge 3$ is in [6].

Aronsson's theorem has major significance for the Dirichlet problem

$$\begin{cases} \Delta_\infty u = 0 & \text{in } \Omega, \\ u = g & \text{on } \partial\Omega, \end{cases} \tag{17}$$

if $g \in C(\partial\Omega)$ is chosen such that every function $f \in C(\overline{\Omega}) \cap C^2(\Omega)$ with boundary values $f = g$ on $\partial\Omega$ must have at least one critical point in $\Omega$. The following is a topological phenomenon related to index theory: pick $\Omega$ as the unit disk $x^2 + y^2 < 1$ and $g(x, y) = xy$. Then, for every function $f$ with the boundary

values $f(\xi, \eta) = \xi\eta$ on $\xi^2 + \eta^2 = 1$, whether a solution or not, there must exist a critical point in the open unit disk. If a solution of the Dirichlet problem were of class $C^2$, Aronsson's theorem (11) would be contradicted: even for boundaries and boundary values of class $C^2$, a solution of the Dirichlet problem cannot always be of class $C^2$. This can also be seen by using symmetry. Suppose that $\nabla u \ne 0$ in the disc. One concludes that $u(x, y) = u(-x, -y)$, and differentiating at the origin gives $\nabla u(0,0) = -\nabla u(0,0)$, so $\nabla u(0,0) = 0$. Thus there exists a critical point in any case! Therefore we must introduce the concept of viscosity solutions.

Crandall, Evans, Lions, Ishii and Jensen, among others, developed the modern theory of viscosity solutions. At first it covered only first order equations; later it was developed for second order equations. Burgers' equation is one example of how the label "viscosity" was established for first order equations. To the equation

$$\begin{cases} \dfrac{\partial u}{\partial t} + \dfrac12 \dfrac{\partial u^2}{\partial x} = 0 & (-\infty < x < \infty,\; t > 0), \\ u(x, 0) = g(x), \end{cases} \tag{18}$$

an extremely small term representing viscosity is added. For this Cauchy problem even smooth initial values $g(x)$ can force the solution $u = u(x, t)$ to develop shocks; besides, uniqueness fails frequently. By adding a term $\varepsilon\,\partial^2 u_\varepsilon/\partial x^2$, representing artificial viscosity, Burgers' equation becomes

$$\begin{cases} \dfrac{\partial u_\varepsilon}{\partial t} + \dfrac12 \dfrac{\partial u_\varepsilon^2}{\partial x} = \varepsilon\, \dfrac{\partial^2 u_\varepsilon}{\partial x^2}, \\ u_\varepsilon(x, 0) = g(x). \end{cases}$$

Now $u_\varepsilon = u_\varepsilon(x, t)$ is smooth and unique. Additionally, $u_\varepsilon(x, t) \to u(x, t)$; this $u$ is the desired viscosity solution of the original equation (18). Note that this is the method of vanishing viscosity. However, to identify the viscosity solution, the concept can be formulated directly from the original equation, without the limit procedure in $\varepsilon$.

Return to the $\infty$-Laplace equation. Instead of considering the limit procedure

$$\Delta_\infty u_\varepsilon + \varepsilon\,\Delta u_\varepsilon = 0, \qquad \text{as } \varepsilon \to 0, \tag{19}$$

a concept of viscosity solution will be formulated directly from the original equation. Equation (19) is the Euler-Lagrange equation of the functional

$$J(v) = \int_\Omega e^{\frac{|\nabla v|^2}{2\varepsilon}}\, dx.$$

Start with infinitesimal calculus.
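The vanishing viscosity method can be illustrated numerically. The sketch below (an assumed finite-difference discretization, not taken from the text) evolves the regularized Burgers equation for two viscosities; the smaller $\varepsilon$ produces the sharper shock profile.

```python
import numpy as np

# Minimal sketch of the vanishing viscosity method for Burgers' equation,
#   u_t + (u^2/2)_x = eps * u_xx,
# on a periodic grid with non-negative initial data (so a simple upwind flux
# is valid).  The discretization is illustrative, not from the text.
N = 400
x = np.linspace(-1.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
T = 1.0

def solve(eps):
    u = 0.5 * (1.0 + np.sin(np.pi * x))      # smooth data; a shock forms anyway
    dt = 0.2 * min(dx, dx ** 2 / eps)        # stability: advection + diffusion
    t = 0.0
    while t < T:
        f = 0.5 * u ** 2                     # flux; wave speed u >= 0 here
        conv = (f - np.roll(f, 1)) / dx      # upwind difference of the flux
        diff = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
        u = u + dt * (eps * diff - conv)
        t += dt
    return u

u_coarse = solve(1e-2)                       # strongly smoothed shock
u_fine = solve(1e-3)                         # close to the entropy solution
```

The scheme is monotone under the chosen time step, so both solutions stay within the initial bounds $[0, 1]$, while the maximal grid gradient grows as $\varepsilon$ shrinks.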
Let $v : \Omega \to \mathbb{R}$ be given, and suppose that $\varphi \in C^2(\Omega)$ touches $v$ from below at some point $x_0$:

$$\begin{cases} v(x_0) = \varphi(x_0), \\ v(x) > \varphi(x) & \text{when } x \ne x_0. \end{cases} \tag{20}$$

If $v$ is smooth, then by infinitesimal calculus

$$\nabla v(x_0) = \nabla\varphi(x_0), \qquad D^2 v(x_0) \ge D^2\varphi(x_0).$$

Here

$$D^2 v(x_0) = \Bigl( \frac{\partial^2 v}{\partial x_i \partial x_j}(x_0) \Bigr)_{n \times n}$$

is the Hessian matrix evaluated at the touching point $x_0$. For a symmetric matrix $A = (a_{ij})$ the following ordering is used:

$$A \ge 0 \iff \sum_{i,j=1}^n a_{ij}\,\xi_i\xi_j \ge 0 \quad \text{for all } \xi = (\xi_1, \xi_2, \ldots, \xi_n);$$

here $A = D^2(v - \varphi)(x_0)$. In particular, it follows that $\varphi_{x_j x_j}(x_0) \le v_{x_j x_j}(x_0)$ for $j = 1, 2, \ldots, n$, and thus

$$\Delta\varphi(x_0) \le \Delta v(x_0), \qquad \Delta_\infty\varphi(x_0) \le \Delta_\infty v(x_0), \qquad \Delta_p\varphi(x_0) \le \Delta_p v(x_0) \quad \text{for } n < p \le \infty.$$

If $v$ is a supersolution, i.e. $\Delta_p v \le 0$, then $\Delta_p\varphi(x_0) \le 0$. The last inequality still makes sense even if $v$ does not have any derivatives. This motivates the following definition.

Definition 12. Let $n < p \le \infty$. If $\Delta_p\varphi(x_0) \le 0$ whenever $x_0 \in \Omega$ and $\varphi \in C^2(\Omega)$ are such that $\varphi$ touches $v$ from below at $x_0$, then $v \in C(\Omega)$ is a viscosity supersolution of the equation $\Delta_p v = 0$ in $\Omega$. If $\Delta_p\psi(x_0) \ge 0$ whenever $x_0 \in \Omega$ and $\psi \in C^2(\Omega)$ are such that $\psi$ touches $u$ from above at $x_0$, then $u \in C(\Omega)$ is a viscosity subsolution of the equation $\Delta_p u = 0$ in $\Omega$. If $h \in C(\Omega)$ is both a viscosity supersolution and a viscosity subsolution, then $h$ is a viscosity solution.

Proposition 13 (Consistency). This proposition is from (Lindqvist 2015, 27). A function $u \in C^2(\Omega)$ is a viscosity solution of $\Delta_p u = 0$ if and only if $\Delta_p u(x) = 0$ holds pointwise in $\Omega$.

Proof. This proof follows (Lindqvist 2015, 27). Consider the case of subsolutions. Let $u$ be a viscosity subsolution. At a point $x_0$, $u$ itself would do as a test function, yielding $\Delta_p u(x_0) \ge 0$, except that the touching in (20) must be strict. In agreement with formula (20), use therefore the test function

$$\psi(x) = u(x) + |x - x_0|^4,$$

for which the touching from above is strict when $x \ne x_0$. Since the first and second derivatives of the quartic term vanish at $x_0$,

$$0 \le \Delta_p\psi(x_0) = \Delta_p u(x_0).$$

If $u$ is a classical subsolution, i.e. $\Delta_p u(x) \ge 0$ at every point, then Definition 12 must be verified. For a test function $\psi$ touching from above at $x_0$, the inequality $\Delta_p\psi(x_0) \ge \Delta_p u(x_0)$ is true by the infinitesimal calculus above. Thus, since $\Delta_p u(x_0) \ge 0$, it follows that $\Delta_p\psi(x_0) \ge 0$.

An important example of a viscosity solution of $\Delta_\infty u = 0$ in $\mathbb{R}^2$ is the function

$$u(x, y) = |x|^{4/3} - |y|^{4/3}.$$

Note that at the coordinate axes the second derivatives blow up, and there is no admissible test function at the origin. Indeed,

$$u \in C^{1,1/3}_{loc}(\mathbb{R}^2) \cap W^{2,\,3/2 - \varepsilon}_{loc}(\mathbb{R}^2), \qquad \varepsilon > 0.$$

Every viscosity solution in the plane is believed to belong to $C^{1,1/3}_{loc}(\mathbb{R}^2)$, i.e. its gradient is locally Hölder continuous with exponent $\frac13$.

An example of a viscosity subsolution in $\mathbb{R}^n$ is the function $|x - x_0|$. In fact, another viscosity subsolution is the function defined by the superposition

$$V(x) = \int_{\mathbb{R}^n} |x - y|\,\rho(y)\, dy, \qquad \rho \ge 0,$$

provided that $\int_{\mathbb{R}^n} |y|\,\rho(y)\, dy < \infty$. In three-dimensional space one has $\Delta\Delta\,|x - x_0| = -8\pi\,\delta_{x_0}$ for the biharmonic operator, so that $\Delta\Delta V(x) = -8\pi\rho(x)$. If $V$ is given, we can thus find an appropriate $\rho$!

Before proving that $u_\infty$ is a viscosity solution, one must prove that $u_{p_j}$ is a viscosity solution of the $p_j$-Laplace equation; recall that the variational solution

$$u_\infty = \lim_{j \to \infty} u_{p_j}$$

was constructed through a sequence of solutions of $p$-Laplace equations in the proof of the Existence Theorem (7).
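Away from the coordinate axes the example above can be checked directly. The sketch below uses the hand-computed derivatives of $u(x,y) = |x|^{4/3} - |y|^{4/3}$ (the sampled points are an arbitrary illustration) and verifies that $\Delta_\infty u = \sum u_{x_i} u_{x_j} u_{x_i x_j}$ vanishes there.

```python
import numpy as np

# Check that u(x,y) = |x|^{4/3} - |y|^{4/3} satisfies Delta_inf u = 0 away
# from the coordinate axes, using its exact, hand-computed derivatives.
def delta_inf(x, y):
    ux = (4 / 3) * np.sign(x) * np.abs(x) ** (1 / 3)
    uy = -(4 / 3) * np.sign(y) * np.abs(y) ** (1 / 3)
    uxx = (4 / 9) * np.abs(x) ** (-2 / 3)
    uyy = -(4 / 9) * np.abs(y) ** (-2 / 3)
    uxy = 0.0   # mixed derivative vanishes: u is a sum of one-variable terms
    return ux ** 2 * uxx + 2 * ux * uy * uxy + uy ** 2 * uyy

rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(200, 2))
pts = pts[(np.abs(pts[:, 0]) > 0.1) & (np.abs(pts[:, 1]) > 0.1)]  # off the axes
residuals = np.array([delta_inf(px, py) for px, py in pts])
```

Analytically $u_x^2 u_{xx} = 64/81$ and $u_y^2 u_{yy} = -64/81$ at every such point, so the residuals are zero up to floating-point rounding.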

Lemma 14. Assume that $v \in C(\Omega) \cap W^{1,p}(\Omega)$ satisfies $\Delta_p v \le 0$ in the weak sense,

$$\int_\Omega \langle |\nabla v|^{p-2}\nabla v, \nabla\eta \rangle\, dx \ge 0, \qquad \eta \ge 0,\; \eta \in C_0^\infty(\Omega).$$

Then $v$ is also a viscosity supersolution in $\Omega$.

Proof. The proof follows (Lindqvist 2015, 28-29). If the claim does not hold, there is at some point $x_0$ a test function $\varphi$ touching $v$ from below such that $\Delta_p\varphi(x_0) > 0$. By continuity, $\Delta_p\varphi(x) > 0$ when $|x - x_0| < 2\rho$, for some small radius $\rho > 0$. Hence $\varphi$ is a classical subsolution in the ball $B(x_0, 2\rho)$. Set

$$\psi(x) = \varphi(x) + \frac12 \min_{\partial B(x_0,\rho)}\{v - \varphi\} = \varphi(x) + \frac{m}{2}.$$

It is clear that $\psi < v$ on $\partial B(x_0, \rho)$ and $\psi(x_0) > v(x_0)$. Now let the component of the open set $\{\psi > v\} \cap B(x_0, \rho)$ which includes the point $x_0$ be denoted by $D_\rho$. Hence $\psi = v$ on $\partial D_\rho$. By the Comparison Principle⁶, we have $\psi \le v$ in $D_\rho$, since $\psi$ is a weak subsolution and $v$ a weak supersolution of the same equation. This implies the contradiction $\psi(x_0) \le v(x_0) < \psi(x_0)$, and thus the lemma follows.

Lemma 15. Assume that $f_j \to f$ uniformly in $\Omega$, where $f_j, f \in C(\Omega)$. If $\varphi$ touches $f$ from below at $x_0$, then there are points $x_j \to x_0$ so that, for some subsequence,

$$f_j(x_j) - \varphi(x_j) = \min_\Omega \{f_j - \varphi\}.$$

⁶ Put the test function $\eta = [\psi - v]^+$ into

$$\int_{D_\rho} \langle |\nabla\psi|^{p-2}\nabla\psi, \nabla\eta \rangle\, dx \le 0, \qquad \int_{D_\rho} \langle |\nabla v|^{p-2}\nabla v, \nabla\eta \rangle\, dx \ge 0,$$

and subtract to get

$$\int_{D_\rho} \langle |\nabla\psi|^{p-2}\nabla\psi - |\nabla v|^{p-2}\nabla v,\, \nabla(\psi - v)^+ \rangle\, dx \le 0.$$

By an elementary inequality,

$$0 \ge \int_{D_\rho} \langle |\nabla\psi|^{p-2}\nabla\psi - |\nabla v|^{p-2}\nabla v,\, \nabla(\psi - v)^+ \rangle\, dx \ge 2^{2-p} \int_{D_\rho} |\nabla(\psi - v)^+|^p\, dx.$$

Thus $\nabla(\psi - v)^+ = 0$, and the result follows.

Proof. The proof follows (Lindqvist 2015, 29-30). Let $B_r = B(x_0, r)$ be a small ball. Since $f_j - \varphi = (f - \varphi) + (f_j - f)$ and $f_j \to f$ uniformly,

$$\inf_{\Omega \setminus B_r}\{f_j - \varphi\} \ge \frac12 \inf_{\Omega \setminus B_r}\{f - \varphi\} > 0$$

when $j$ is sufficiently large. Thus

$$\inf_{\Omega \setminus B_r}\{f_j - \varphi\} > f_j(x_0) - \varphi(x_0), \qquad j > j_r,$$

since the left-hand side has a positive limit whereas the right-hand side approaches zero. Hence the infimum over $\Omega \setminus B_r$ is bigger than the value at the center, so there is a point $x_j \in B(x_0, r)$ such that

$$\min_{\overline{B}_r}\{f_j - \varphi\} = f_j(x_j) - \varphi(x_j), \qquad j > j_r.$$

Then let $r \to 0$ through a sequence $r = 1, \frac12, \frac13, \ldots$ to finish the proof.

Theorem 16. The function $v_\infty$ is a viscosity solution of the equation

$$\max\{\varepsilon - |\nabla v|,\ \Delta_\infty v\} = 0.$$

That is,

$$\varepsilon - |\nabla\varphi(x_0)| \le 0 \quad \text{and} \quad \Delta_\infty\varphi(x_0) \le 0$$

hold for a test function touching from below, and

$$\varepsilon - |\nabla\psi(x_0)| \ge 0 \quad \text{or} \quad \Delta_\infty\psi(x_0) \ge 0$$

holds for a test function touching from above. In the special case $\varepsilon = 0$, the equation reads $\Delta_\infty v = 0$.

Proof. The proof follows (Lindqvist 2015, 32-33). First verify that $v_p$ is a viscosity solution of its own Euler-Lagrange equation

$$-\Delta_p v = \varepsilon^{p-1}, \tag{21}$$

in weak form

$$\int_\Omega \langle |\nabla v|^{p-2}\nabla v, \nabla\eta \rangle\, dx = \varepsilon^{p-1}\int_\Omega \eta\, dx, \qquad \eta \in C_0^\infty(\Omega).$$

This would be equivalent to the pointwise equation (21) if $v$ were of class $C^2(\Omega)$ and $p \ge 2$. The proof of this is the same as in Lemma 14. Hence, let $p_j \to \infty$. The case of supersolutions reads

$$\Delta_{p_j}\phi(x_0) \le -\varepsilon^{p_j-1}$$

with $\phi$ touching $v_{p_j}$ from below at the point $x_0$. Now assume that $\varphi$ touches $v_\infty$ from below at $x_0$, i.e. $\varphi(x_0) = v_\infty(x_0)$ and $\varphi(x) < v_\infty(x)$ when $x \ne x_0$. By Lemma 15 there are points $x_k \to x_0$ such that $v_{p_{j_k}} - \varphi$ attains its minimum at $x_k$. Hence

$$\Delta_{p_{j_k}}\varphi(x_k) \le -\varepsilon^{p_{j_k}-1},$$

explicitly,

$$|\nabla\varphi(x_k)|^{p_{j_k}-4}\Bigl\{ |\nabla\varphi(x_k)|^2\,\Delta\varphi(x_k) + (p_{j_k}-2)\,\Delta_\infty\varphi(x_k) \Bigr\} \le -\varepsilon^{p_{j_k}-1}.$$

It is impossible that $\nabla\varphi(x_k) = 0$ if $\varepsilon > 0$. So, dividing by $(p_{j_k}-2)\,|\nabla\varphi(x_k)|^{p_{j_k}-4}$, we obtain

$$\Delta_\infty\varphi(x_k) + \frac{|\nabla\varphi(x_k)|^2\,\Delta\varphi(x_k)}{p_{j_k}-2} \le -\frac{\varepsilon^3}{p_{j_k}-2}\Bigl( \frac{\varepsilon}{|\nabla\varphi(x_k)|} \Bigr)^{p_{j_k}-4}. \tag{22}$$

By continuity, the left-hand side tends to $\Delta_\infty\varphi(x_0)$. If $|\nabla\varphi(x_0)| < \varepsilon$, a contradiction occurs: the right-hand side tends to $-\infty$. Hence the first required inequality,

$$\varepsilon - |\nabla\varphi(x_0)| \le 0,$$

is established; it implies that the right-hand side of equation (22) tends to zero. The second required inequality,

$$\Delta_\infty\varphi(x_0) \le 0,$$

follows, and this concludes the case $\varepsilon > 0$.

When $\varepsilon = 0$ there is nothing to show if $\nabla\varphi(x_0) = 0$, since then $\Delta_\infty\varphi(x_0) = 0$. But if $\nabla\varphi(x_0) \ne 0$, then

$$\Delta_\infty\varphi(x_k) + \frac{|\nabla\varphi(x_k)|^2\,\Delta\varphi(x_k)}{p_{j_k}-2} \le 0$$

when $k$ is large, and it follows that $\Delta_\infty\varphi(x_0) \le 0$. This concludes the case of supersolutions. (The case of subsolutions is easier.)

Now we want to prove uniqueness of viscosity solutions for the Dirichlet problem in a bounded domain; this was done by Jensen.

Theorem 17 (Jensen's Theorem). Let $\Omega$ be an arbitrary bounded domain in $\mathbb{R}^n$. Given a Lipschitz continuous function $f : \partial\Omega \to \mathbb{R}$, there exists a unique viscosity solution $u \in C(\overline{\Omega})$ with boundary values $f$. Moreover, $u \in W^{1,\infty}(\Omega)$, and $\|\nabla u\|_{L^\infty}$ has a bound depending only on the Lipschitz constant of $f$.

Let us present two auxiliary equations, with $\varepsilon > 0$:

$$\begin{aligned}
&\max\{\varepsilon - |\nabla u^+|,\ \Delta_\infty u^+\} = 0, && \text{Upper equation}, \\
&\Delta_\infty u = 0, && \text{Equation}, \\
&\min\{|\nabla u^-| - \varepsilon,\ \Delta_\infty u^-\} = 0, && \text{Lower equation}.
\end{aligned}$$

When the equations have the same boundary values, the order is $u^- \le u \le u^+$. For the Lower equation the subsolutions are needed, and for the Upper equation the supersolutions.

Lemma 18. A variational solution of the Upper equation satisfies $\|\nabla v\|_{L^\infty(\Omega)} \le L + \varepsilon$.

Note that for the Lower equation the approximation stages are

$$\Delta_p u = +\varepsilon^{p-1}, \qquad \int_\Omega \langle |\nabla u|^{p-2}\nabla u, \nabla\eta \rangle\, dx = -\varepsilon^{p-1}\int_\Omega \eta\, dx, \qquad \int_\Omega \Bigl( \frac1p |\nabla u|^p + \varepsilon^{p-1} u \Bigr) dx = \min.$$

Further, let $u_p^-, u_p, u_p^+$ be the solutions of the Lower equation, the equation, and the Upper equation at the $p$-stage, with the same boundary values $f$; then by comparison $u_p^- \le u_p \le u_p^+$. The equations have weak solutions, which are viscosity solutions. Pick a sequence $p \to \infty$ such that

$$u_p^- \to u^-, \qquad u_p \to h, \qquad u_p^+ \to u^+.$$

We have

$$\operatorname{div}\bigl(|\nabla u_p^+|^{p-2}\nabla u_p^+\bigr) = -\varepsilon^{p-1}, \qquad \operatorname{div}\bigl(|\nabla u_p^-|^{p-2}\nabla u_p^-\bigr) = +\varepsilon^{p-1}.$$

Now take $u_p^+ - u_p^-$ as a test function in the weak formulations of the equations and subtract:

$$\int_\Omega \langle |\nabla u_p^+|^{p-2}\nabla u_p^+ - |\nabla u_p^-|^{p-2}\nabla u_p^-,\, \nabla u_p^+ - \nabla u_p^- \rangle\, dx = 2\varepsilon^{p-1}\int_\Omega (u_p^+ - u_p^-)\, dx.$$

The elementary inequality for vectors (with $p > 2$),

$$\langle |b|^{p-2}b - |a|^{p-2}a,\, b - a \rangle \ge 2^{2-p}|b - a|^p,$$

gives

$$2^{2-p}\int_\Omega |\nabla u_p^+ - \nabla u_p^-|^p\, dx \le 2\varepsilon^{p-1}\int_\Omega (u_p^+ - u_p^-)\, dx.$$
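The elementary vector inequality above is easy to spot-check numerically. The sketch below samples random vector pairs for one value of $p > 2$ (the distribution, dimension and exponent are arbitrary illustrative choices).

```python
import numpy as np

# Spot-check of the elementary vector inequality (p > 2):
#   <|b|^{p-2} b - |a|^{p-2} a, b - a>  >=  2^{2-p} |b - a|^p
rng = np.random.default_rng(1)
p = 4.0
ok = True
for _ in range(1000):
    a = rng.normal(size=3)
    b = rng.normal(size=3)
    lhs = np.dot(np.linalg.norm(b) ** (p - 2) * b
                 - np.linalg.norm(a) ** (p - 2) * a, b - a)
    rhs = 2.0 ** (2 - p) * np.linalg.norm(b - a) ** p
    ok = ok and lhs >= rhs - 1e-12       # small slack for rounding
print(ok)
```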

By extracting the $p$th roots and letting $p \to \infty$ along the sequence, we get

$$\|\nabla u^+ - \nabla u^-\|_{L^\infty(\Omega)} \le 2\varepsilon.$$

Integration gives

$$\|u^+ - u^-\|_{L^\infty(\Omega)} \le 2\varepsilon \operatorname{diam}(\Omega).$$

For the constructed functions the result

$$u^- \le h \le u^+ \le u^- + C\varepsilon$$

is needed. The following lemma from (Lindqvist 2015, 49) implies uniqueness.

Lemma 19. If $u \in C(\overline{\Omega})$ is an arbitrary viscosity solution of the equation $\Delta_\infty u = 0$ with $u = f$ on $\partial\Omega$, then $u^- \le u \le u^+$, where $u^-, u^+$ are the constructed variational solutions of the auxiliary equations. In fact, if there are two viscosity solutions $u_1, u_2$, then

$$-C\varepsilon \le u^- - u^+ \le u_1 - u_2 \le u^+ - u^- \le C\varepsilon,$$

where $\varepsilon > 0$ is arbitrary. Thus $u_1 = u_2$. This implies that every viscosity solution is a variational one and that the variational solutions are unique!

3.2.1 The Comparison Principle

The variational solutions $u^+$ and $u^-$ from the auxiliary equations satisfy the inequalities

$$\varepsilon \le |\nabla u^\pm| \le L + \varepsilon.$$

The upper bound (Lemma 18) holds a.e. in $\Omega$, while the lower bound holds in the viscosity sense, i.e. for the test functions. The following lemma from (Lindqvist 2015, 46) will be used in what follows.

Lemma 20 (Ishii's Lemma). Let $u$ and $v$ belong to $C(\overline{\Omega})$. If there exists an interior point $(x_j, y_j)$ at which the maximum

$$\max_{x,y \in \overline{\Omega}}\Bigl\{ u(x) - v(y) - \frac{j}{2}|x - y|^2 \Bigr\}$$

is attained, then there are symmetric matrices $X_j$ and $Y_j$ with $X_j \le Y_j$ so that

$$\bigl(j(x_j - y_j),\, X_j\bigr) \in \overline{J}^{\,2,+}u(x_j), \qquad \bigl(j(x_j - y_j),\, Y_j\bigr) \in \overline{J}^{\,2,-}v(y_j).$$

Lemma 21 (Lindqvist 2015, 50). If $u$ is a viscosity solution of the equation $\Delta_\infty u = 0$ and if $u \le f = u^+$ on $\partial\Omega$, then $u \le u^+$ in $\Omega$. The analogous comparison holds for the viscosity supersolutions above $u^-$.

Proof. The proof follows (Lindqvist 2015, 50-52). By adding a constant we may suppose that $u^+ > 0$. Denote $v = u^+$ and claim that $v \ge u$. Use the antithesis

$$\max_{\overline{\Omega}}(u - v) > \max_{\partial\Omega}(u - v).$$

First construct a strict supersolution $w = g(v)$ of the Upper equation, so that

$$\max_{\overline{\Omega}}(u - w) > \max_{\partial\Omega}(u - w) \tag{23}$$

and

$$\Delta_\infty w \le -\mu < 0.$$

Use the approximation

$$g(t) = \log\bigl(1 + A(e^t - 1)\bigr), \qquad A > 1;$$

when $A = 1$, $g(t) = t$. Assume $t > 0$; then

$$0 < g(t) - t < A - 1, \qquad 0 < g'(t) - 1 < A - 1.$$

We have

$$g''(t) = -\frac{(A-1)\,g'(t)^2}{A e^t},$$

which is used to deduce the equation for $w = g(v)$. By differentiation,

$$w = g(v), \qquad w_{x_i} = g'(v)\,v_{x_i}, \qquad w_{x_i x_j} = g''(v)\,v_{x_i}v_{x_j} + g'(v)\,v_{x_i x_j},$$

$$\Delta_\infty w = g'(v)^3\,\Delta_\infty v + g''(v)\,g'(v)^2\,|\nabla v|^4.$$

Multiplying the Upper equation $\max\{\varepsilon - |\nabla v|, \Delta_\infty v\} \le 0$ for supersolutions by $g'(v)^3$, we have

$$\Delta_\infty w \le g''(v)\,g'(v)^2\,|\nabla v|^4 = -(A-1)A^{-1}e^{-v}\,g'(v)^4\,|\nabla v|^4.$$

Note that the right-hand side is negative. Using the inequality $\varepsilon \le |\nabla v|$ and $g'(v) > 1$, we have the estimate

$$\Delta_\infty w \le -\varepsilon^4 (A-1)A^{-1}e^{-v}.$$
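The claimed properties of the auxiliary function $g$ can be verified numerically. The sketch below checks the bounds on $g(t) - t$ and $g'(t)$ and compares the closed-form $g''$ against a finite-difference derivative (the value $A = 1.2$ and the grid are arbitrary illustrative choices).

```python
import numpy as np

# Numerical check of the auxiliary function g(t) = log(1 + A(e^t - 1)):
# for A > 1 and t > 0,
#   0 < g(t) - t < A - 1,   1 < g'(t) < A,   g''(t) = -(A-1) g'(t)^2 / (A e^t).
A = 1.2
t = np.linspace(1e-3, 5.0, 1000)
g = np.log(1 + A * (np.exp(t) - 1))
gp = A * np.exp(t) / (1 + A * (np.exp(t) - 1))      # g'(t), computed by hand
gpp_formula = -(A - 1) * gp ** 2 / (A * np.exp(t))  # claimed closed form of g''
gpp_numeric = np.gradient(gp, t)                    # finite-difference check
```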

Fix $A$ so close to $1$ that

$$0 < w - v = g(v) - v < A - 1 < \delta,$$

where $\delta$ is so small that equation (23) holds. By these adjustments, the tiny negative quantity

$$-\mu = -\varepsilon^4 (A-1)A^{-1}e^{-\max_{\overline{\Omega}} v}$$

will do in the strict equation. Hence the final inequality is $\Delta_\infty w \le -\mu$. Since $g'(v) > 1$, the bound $\varepsilon \le |\nabla w|$, which will be needed, is preserved. Strictly speaking, in these calculations the function $v$ should be replaced by a test function $\varphi$ touching $v$ from below at some point $x_0$, and $w$ by the corresponding test function $g(\varphi)$. The following has now been proved:

$$\Delta_\infty\phi(y_0) \le -\mu, \qquad \varepsilon \le |\nabla\phi(y_0)|,$$

whenever $\phi$ touches $w$ from below at $y_0$.

Since we want to use Ishii's Lemma (20), start by doubling the variables:

$$M_j = \sup_{x,y}\Bigl( u(x) - w(y) - \frac{j}{2}|x - y|^2 \Bigr).$$

The maximum is attained at interior points $x_j, y_j$ for large indices $j$; moreover $x_j \to \hat{x}$ and $y_j \to \hat{x}$, where $\hat{x}$ is some interior point: due to equation (23), $\hat{x}$ cannot lie on the boundary. By Ishii's Lemma there exist symmetric $n \times n$ matrices $X_j$ and $Y_j$ with $X_j \le Y_j$ such that

$$\bigl(j(x_j - y_j),\, X_j\bigr) \in \overline{J}^{\,2,+}u(x_j), \qquad \bigl(j(x_j - y_j),\, Y_j\bigr) \in \overline{J}^{\,2,-}w(y_j),$$

"where the closures of the subjets appear". The equations can be written as

$$j^2\langle Y_j(x_j - y_j),\, x_j - y_j \rangle \le -\mu, \qquad j^2\langle X_j(x_j - y_j),\, x_j - y_j \rangle \ge 0.$$

Now subtract to reach the contradiction

$$j^2\langle (Y_j - X_j)(x_j - y_j),\, x_j - y_j \rangle \le -\mu < 0,$$

which contradicts the ordering $X_j \le Y_j$, since that ordering gives $j^2\langle (Y_j - X_j)(x_j - y_j),\, x_j - y_j \rangle \ge 0$. Therefore the antithesis does not hold, and we have $u \le v$.

At last, we establish the general form of the comparison principle (Lindqvist 2015, 52).

Theorem 22 (The Comparison Principle). Suppose that $\Delta_\infty u \ge 0$ and $\Delta_\infty v \le 0$ in the viscosity sense in a bounded domain $\Omega$. If

$$\liminf v \ge \limsup u \quad \text{on } \partial\Omega,$$

then $v \ge u$ in $\Omega$.

Proof. The proof follows from Lemma 21 (Lindqvist 2015, 52). For the indirect proof, let $\varepsilon > 0$ and consider a component $D_\varepsilon$ of the domain $\{v + \varepsilon < u\}$. Clearly $\overline{D}_\varepsilon \subset \Omega$. For the domain $D_\varepsilon$ (not for $\Omega$) construct the auxiliary functions $u^+$ and $u^-$, with boundary values $u^+ = u^- = u = v + \varepsilon$ on $\partial D_\varepsilon$. By Lemma 21, $u \le u^+$ and $u^- \le v + \varepsilon$ in $D_\varepsilon$; together with $u^+ \le u^- + C\varepsilon$ this gives

$$u \le u^+ \le u^- + C\varepsilon \le v + \varepsilon + C\varepsilon \quad \text{in } D_\varepsilon.$$

Since $\varepsilon$ is arbitrary, the result follows.

The existence and uniqueness of variational solutions of the Dirichlet boundary value problem have now been shown. Since the variational solutions are also viscosity solutions, these must also be unique. To sum up, there exists only one variational solution with given boundary values, and the existence was determined in Theorem (7). [1]

4 Tug-of-War

In 2009, O. Schramm, Y. Peres, D. Wilson and S. Sheffield discovered the relation between probability theory and the $\infty$-Laplace equation: the equation was detected by using the mathematical game tug-of-war. [1] In this section we will exhibit the relation by introducing $p$-harmonious functions.

4.1 The game

Let us consider a two-player zero-sum game in a bounded domain $\Omega \subset \mathbb{R}^N$. Pick $\Gamma_D \subset \partial\Omega$ and $\Gamma_N = \partial\Omega \setminus \Gamma_D$, and let $F : \Gamma_D \to \mathbb{R}$ be Lipschitz continuous. The game begins when a player places a token at a point $x_0 \in \Omega \setminus \Gamma_D$; then a coin is tossed, and the player who wins may move the game position to any $x_1 \in \overline{B}_\varepsilon(x_0)$. At every turn the coin is tossed and the winner chooses the new game state $x_k \in \overline{B}_\varepsilon(x_{k-1})$. The game stops when the token has reached some $x_\tau \in \Gamma_D$; player I then earns $F(x_\tau)$ and player II earns $-F(x_\tau)$ (loses $F(x_\tau)$). Hence $F$ is called the final payoff function. The strategies adopted by the players and the coin tosses determine the game states $x_0, x_1, x_2, \ldots, x_\tau$, where every $x_k$ except $x_0$ is a random variable. We have that: [10]

1. The initial state $x_0 \in \Omega \setminus \Gamma_D$ is public knowledge (both player I and player II know it).
2. Each player $i$ chooses an action $a_0^i \in \overline{B}_\varepsilon(0)$ that is announced to the other player; this defines an action profile $a_0 = \{a_0^1, a_0^2\} \in \overline{B}_\varepsilon(0) \times \overline{B}_\varepsilon(0)$.
3. The new state is $x_1 \in \overline{B}_\varepsilon(x_0)$.

In the following sections suppose that $2 \le p < \infty$. The $p$-Laplacian is given by $\Delta_p u = \operatorname{div}(|\nabla u|^{p-2}\nabla u)$.

Games that approximate the $p$-Laplacian. Intuitively, the game is described by summing all three cases with their corresponding probabilities:

1. Player I moves the token from the current position $x_n$, with probability $\frac{\alpha}{2}$, to a new position $x_{n+1}$ within the ball $B_\varepsilon(x_n)$.
2. Player II moves the token from the current position, with probability $\frac{\alpha}{2}$, to a new position in the same ball $B_\varepsilon(x_n)$.
3. The token moves to a random point in the ball $B_\varepsilon(x_n)$ with probability $\beta = 1 - \alpha$, where $\alpha \in [0, 1]$.

When the token is no longer in the domain the game

has come to an end. Then, at the exit position $x_\tau$ of the token, player II pays player I an amount equal to the value of a given boundary payoff function. [2] The expected payoff at each point can be calculated. In this version of tug-of-war with noise, the noise is distributed uniformly on $B_\varepsilon(x)$. Hence we are now allowed to use the dynamic programming principle, in the following form,

$$u_\varepsilon(x) = \frac{\alpha}{2}\Bigl\{ \sup_{B_\varepsilon(x)} u_\varepsilon + \inf_{B_\varepsilon(x)} u_\varepsilon \Bigr\} + \beta \fint_{B_\varepsilon(x)} u_\varepsilon\, dy, \tag{24}$$

to draw the conclusion that the game has a value and that the value is $p$-harmonious.

4.1.1 $p$-harmonious functions

Hence the aim is to study the functions that satisfy formula (24), where $\varepsilon > 0$ is fixed and $\alpha + \beta = 1$ with $\alpha, \beta$ non-negative. From now on these kinds of functions will be called $p$-harmonious functions.

Now let $\varepsilon > 0$ be fixed and consider a bounded domain $\Omega \subset \mathbb{R}^N$. For $x \in \Omega$ the ball $B_\varepsilon(x)$ is not necessarily included in $\Omega$; therefore the boundary values for $p$-harmonious functions are imposed on a compact boundary strip of width $\varepsilon$,

$$\Gamma_\varepsilon = \{x \in \mathbb{R}^N \setminus \Omega : d(x, \partial\Omega) \le \varepsilon\},$$

instead of on the boundary $\partial\Omega$ itself.

Definition 23. Let $\varepsilon > 0$ and let $\Omega$ be a bounded domain. The function $u_\varepsilon$ is $p$-harmonious in $\Omega$ with boundary values given by a bounded Borel function $F : \Gamma_\varepsilon \to \mathbb{R}$ if

$$u_\varepsilon(x) = \frac{\alpha}{2}\Bigl\{ \sup_{B_\varepsilon(x)} u_\varepsilon + \inf_{B_\varepsilon(x)} u_\varepsilon \Bigr\} + \beta \fint_{B_\varepsilon(x)} u_\varepsilon\, dy, \qquad x \in \Omega,$$

where $\alpha = \frac{p-2}{p+N}$ and $\beta = \frac{2+N}{p+N}$, and $u_\varepsilon(x) = F(x)$ for every $x \in \Gamma_\varepsilon$.

Now let us motivate the name $p$-harmonious. We know from before that if $u$ is harmonic, then the mean value property is satisfied. When $\alpha = 0$ and $\beta = 1$, equation (24) becomes

$$u(x) = \fint_{B_\varepsilon(x)} u\, dy, \tag{25}$$

and when $\alpha = 1$ and $\beta = 0$, equation (24) becomes

$$u_\varepsilon(x) = \frac12\Bigl\{ \sup_{B_\varepsilon(x)} u_\varepsilon + \inf_{B_\varepsilon(x)} u_\varepsilon \Bigr\}; \tag{26}$$
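Definition 23 can be explored numerically by iterating the mean value formula to a fixed point. The sketch below is an assumed 1-D discretization (grid size, iteration count and boundary data are illustrative choices): with $F(x) = x$ on the strip, and since in one dimension every $p$-harmonic function is linear, the computed $p$-harmonious function should be close to $u(x) = x$.

```python
import numpy as np

# Fixed-point iteration of the p-harmonious formula of Definition 23 on a
# 1-D grid over (0, 1), boundary strip of width eps, boundary data F(x) = x.
p, N_dim, eps = 4.0, 1, 0.1
alpha = (p - 2) / (p + N_dim)
beta = (2 + N_dim) / (p + N_dim)
h = 0.01
k = int(round(eps / h))                           # ball radius in grid points
x = -eps + h * np.arange(int(round((1 + 2 * eps) / h)) + 1)
inside = (x > 0) & (x < 1)
u = x.copy()                                      # F(x) = x on the strip
u[inside] = 0.0                                   # arbitrary interior start
for _ in range(3000):
    new = u.copy()
    for i in np.where(inside)[0]:
        w = u[i - k : i + k + 1]                  # values in B_eps(x_i)
        new[i] = 0.5 * alpha * (w.max() + w.min()) + beta * w.mean()
    u = new
err = np.abs(u[inside] - x[inside]).max()         # distance to the linear function
```

On this grid the linear function is an exact fixed point of the iteration, so the error should shrink to grid precision.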

functions satisfying these equations are called harmonious functions, and they are values of tug-of-war games. If we let $\varepsilon \to 0$, these functions approximate solutions of the $\infty$-Laplacian! Remember that the $p$-Laplacian can be written as

$$\Delta_p u = \operatorname{div}(|\nabla u|^{p-2}\nabla u) = |\nabla u|^{p-2}\bigl\{ (p-2)\,\Delta_\infty u + \Delta u \bigr\}, \tag{27}$$

where here $\Delta_\infty u = |\nabla u|^{-2}\langle D^2 u\, \nabla u, \nabla u \rangle$ denotes the normalized $\infty$-Laplacian. The goal is to prove that the $p$-harmonious functions are uniquely determined by their boundary values and converge uniformly to the $p$-harmonic function with the given boundary data.

4.1.2 A heuristic argument

It follows from the expansion in equation (27) that $u$ is a solution to $\Delta_p u = 0$ if and only if

$$(p-2)\,\Delta_\infty u + \Delta u = 0. \tag{28}$$

This still holds in the viscosity sense when $\nabla u = 0$, as has been shown in [9]. By taking the average over $B_\varepsilon(x)$ of the classical Taylor expansion⁷ we get, for smooth $u$,

$$\fint_{B_\varepsilon(x)} u\, dy - u(x) = \frac{\varepsilon^2}{2(n+2)}\,\Delta u(x) + O(\varepsilon^3), \tag{29}$$

using the notation

$$\fint_{B_\varepsilon(x)} u\, dy = \frac{1}{|B_\varepsilon(x)|}\int_{B_\varepsilon(x)} u\, dy.$$

Next, notice that the maximizing direction is nearly the same as the gradient direction. By summing the two Taylor expansions at $x \pm \varepsilon\frac{\nabla u(x)}{|\nabla u(x)|}$ we obtain

$$\frac12\Bigl\{ \sup_{B_\varepsilon(x)} u + \inf_{B_\varepsilon(x)} u \Bigr\} - u(x) \approx \frac12\Bigl\{ u\Bigl(x + \varepsilon\frac{\nabla u(x)}{|\nabla u(x)|}\Bigr) + u\Bigl(x - \varepsilon\frac{\nabla u(x)}{|\nabla u(x)|}\Bigr) \Bigr\} - u(x) = \frac{\varepsilon^2}{2}\,\Delta_\infty u(x) + O(\varepsilon^3). \tag{30}$$

By multiplying equations (30) and (29) by the constants $\alpha$ and $\beta$ ($\alpha + \beta = 1$) and adding, we get

$$\frac{\alpha}{2}\Bigl\{ \sup_{B_\varepsilon(x)} u + \inf_{B_\varepsilon(x)} u \Bigr\} + \beta \fint_{B_\varepsilon(x)} u\, dy - u(x) = \frac{\varepsilon^2}{2}\Bigl( \alpha\,\Delta_\infty u(x) + \frac{\beta}{n+2}\,\Delta u(x) \Bigr) + O(\varepsilon^3).$$

⁷ $u(y) = u(x) + \langle \nabla u(x),\, y - x \rangle + \frac12\langle D^2 u(x)(y - x),\, y - x \rangle + O(|y - x|^3)$.
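A small computation confirms that the weights $\alpha = \frac{p-2}{p+N}$, $\beta = \frac{2+N}{p+N}$ already used in Definition 23 sum to one and interpolate between the two extreme games: $\alpha = 0$, $\beta = 1$ at $p = 2$ (pure random walk, the mean value property (25)) and $\alpha \to 1$, $\beta \to 0$ as $p \to \infty$ (pure tug-of-war, formula (26)).

```python
# The weights from Definition 23: alpha multiplies the tug-of-war part
# (infinity-Laplacian), beta the random-walk part (Laplacian).
def weights(p, N):
    alpha = (p - 2) / (p + N)
    beta = (2 + N) / (p + N)
    return alpha, beta

a2, b2 = weights(2, 2)            # p = 2: pure random walk
a_inf, b_inf = weights(1e6, 2)    # large p: almost pure tug-of-war
sums = [sum(weights(p, 3)) for p in (2, 3, 10, 100)]
```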

Hence, choose $\alpha$ and $\beta$ such that the operator in formula (28) is obtained on the right-hand side. It is from this that we obtain the constants $\alpha$ and $\beta$ used previously in Definition 23:

$$\alpha = \frac{p-2}{p+N}, \qquad \beta = \frac{2+N}{p+N}. \tag{31}$$

So now we have

$$u(x) = \frac{\alpha}{2}\Bigl\{ \sup_{B_\varepsilon(x)} u + \inf_{B_\varepsilon(x)} u \Bigr\} + \beta \fint_{B_\varepsilon(x)} u\, dy + O(\varepsilon^3), \qquad \text{as } \varepsilon \to 0.$$

4.1.3 Tug-of-war and $p$-harmonious functions

Now we want to find the link between tug-of-war games and $p$-harmonious functions. Let $\varepsilon > 0$ be fixed and again consider the two-player zero-sum game tug-of-war, now with noise. Place the token at a point $x_0$ and toss a biased coin with probabilities $\alpha$ for heads and $\beta$ for tails, $\alpha + \beta = 1$. If the coin shows heads, the players play tug-of-war: the winner of a fair toss moves the game position to a new position $x_1 \in \overline{B}_\varepsilon(x_0)$. If the coin shows tails, the game state moves to a random point in the ball $B_\varepsilon(x_0)$, chosen according to the uniform probability. The game then continues in the same way from $x_1$. Thus this game gives an infinite sequence of game states $x_0, x_1, \ldots$, where every $x_k$ is a random variable. Let $x_\tau \in \Gamma_\varepsilon$ be the first point of the sequence in $\Gamma_\varepsilon$, so that $\tau$ is the hitting time of the boundary strip $\Gamma_\varepsilon$. The payoff is then $F(x_\tau)$, where $F : \Gamma_\varepsilon \to \mathbb{R}$ is the given payoff function; player I earns $F(x_\tau)$ while player II loses $F(x_\tau)$. Since $\beta > 0$, or equivalently $p < \infty$, the game ends almost surely (a.s.):

$$\mathbb{P}^{x_0}_{S_I, S_{II}}\bigl(\{\omega \in H^\infty : \tau(\omega) < \infty\}\bigr) = 1$$

for any strategies $S_I, S_{II}$. The values of the game for player I and player II are then given by

$$u_\varepsilon^I(x_0) = \sup_{S_I}\,\inf_{S_{II}}\, \mathbb{E}^{x_0}_{S_I, S_{II}}[F(x_\tau)], \qquad u_\varepsilon^{II}(x_0) = \inf_{S_{II}}\,\sup_{S_I}\, \mathbb{E}^{x_0}_{S_I, S_{II}}[F(x_\tau)].$$

Note that when the game begins at $x_0$, the quantities $u_\varepsilon^I(x_0)$ and $u_\varepsilon^{II}(x_0)$ are the best expected outcomes each player can guarantee. Let us look at the dynamic programming principle applied to the game.

Lemma 24 (The dynamic programming principle). The value function for player I satisfies the following formulas:

$$\begin{cases} u_\varepsilon^I(x_0) = \dfrac{\alpha}{2}\Bigl\{ \sup_{\overline{B}_\varepsilon(x_0)} u_\varepsilon^I + \inf_{\overline{B}_\varepsilon(x_0)} u_\varepsilon^I \Bigr\} + \beta \displaystyle\fint_{B_\varepsilon(x_0)} u_\varepsilon^I\, dy, & x_0 \in \Omega, \\ u_\varepsilon^I(x_0) = F(x_0), & x_0 \in \Gamma_\varepsilon. \end{cases} \tag{32}$$
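The game just described is easy to simulate. The sketch below is a 1-D Monte Carlo illustration with assumed strategies (player I always pulls right, player II always pulls left) of tug-of-war with noise on $(0, 1)$ with payoff $F(x) = x$; under these strategies each round has mean zero, so the position is a martingale and the empirical expected payoff started at $x_0$ should be close to $x_0$, i.e. to the linear $p$-harmonic function $u(x) = x$.

```python
import numpy as np

# Monte Carlo sketch of tug-of-war with noise on the interval (0, 1) with
# payoff F(x) = x on the boundary strip.  Strategies are assumed for
# illustration: player I moves +eps (maximizing), player II moves -eps.
rng = np.random.default_rng(2)
p, N_dim, eps = 4.0, 1, 0.1
alpha = (p - 2) / (p + N_dim)     # probability of a tug-of-war round

def play(x0):
    x = x0
    while 0.0 < x < 1.0:
        if rng.random() < alpha:                  # heads: fair toss, full pull
            x += eps if rng.random() < 0.5 else -eps
        else:                                     # tails: uniform noise in B_eps(x)
            x += rng.uniform(-eps, eps)
    return x                                      # exit point in the strip, F(x) = x

x0 = 0.3
value = np.mean([play(x0) for _ in range(20000)])
```

By optional stopping the exact value is $x_0$; the simulation reproduces it within Monte Carlo error.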