Gradient Estimates of Mean Curvature Equations and Hessian Equations with Neumann Boundary Condition. Xinan Ma. NUS, Dec. 11, 2014.
Four Kinds of Equations. Laplace's equation: $\Delta u = f(x)$; mean curvature equation: $\mathrm{div}\Big(\frac{Du}{\sqrt{1+|Du|^2}}\Big) = f(x,u)$; Monge-Ampère equation: $\det D^2u = f(x,u,Du)$; Hessian equation and curvature equation ($1<k<n$): $\sigma_k(D^2u)(x) = f(x,u,Du)$, $\sigma_k(\lambda(a_{ij}))(x) = f(x,u,Du)$, where $a_{ij}$ is a curvature matrix.
Two Typical Boundary Value Problems. Dirichlet problem: $u = \varphi(x)$ on $\partial\Omega$; Neumann problem: $\frac{\partial u}{\partial\gamma} = \psi(x,u)$ on $\partial\Omega$.
History of the Neumann Problem for Second Order Elliptic Equations. Dirichlet problem: there are many basic results; more details can be found in the book of Gilbarg-Trudinger. Neumann problem: the existence problem has long remained open.
History of the Neumann Problem for Second Order Elliptic Equations. Laplace's equation and linear equations: 1930s; Gilbarg-Trudinger's book, Theorem 6.31. Mean curvature equations: open. Monge-Ampère equations: 1986, Lions-Trudinger-Urbas, existence and uniqueness of classical convex solutions. Hessian and curvature equations ($1<k<n$): open; true for the ball, by Trudinger (1986), for the Hessian equation. There are some existence theorems for the $\sigma_k$ Yamabe problem with Neumann boundary condition, for example by Aobing Li, S. Chen, etc. There is a standard theory for nonlinear uniformly elliptic equations with Neumann boundary condition, or the more general oblique derivative boundary condition: Lieberman (quasilinear); Lieberman-Trudinger (fully nonlinear elliptic equations), existence and uniqueness of classical solutions.
Motivation. Consider the following problem (Lieberman's book, The Oblique Derivative Problem for Elliptic Equations, page 360):
\[\mathrm{div}\Big(\frac{Du}{\sqrt{1+|Du|^2}}\Big) = f(x,u) \quad\text{in } \Omega, \qquad v^{q-1}\frac{\partial u}{\partial\gamma} + \psi(x,u) = 0 \quad\text{on } \partial\Omega,\]
where $v = (1+|Du|^2)^{1/2}$. The case $q = 0$ corresponds to the prescribed contact angle boundary condition. I shall come back to this question.
The case $q = 1$ corresponds to the Neumann boundary condition. Lieberman has proved the cases $q = 0$ and $q > 1$, where $b(x,u,Du) = v^{q-1}\frac{\partial u}{\partial\gamma} + \psi(x,u)$. For $q = 1$, Lieberman's method is not successful!
Main Result: Gradient Estimate for the Neumann Problem of Mean Curvature Equations. Consider the following Neumann problem:
\[\mathrm{div}\Big(\frac{Du}{\sqrt{1+|Du|^2}}\Big) = f(x,u) \quad\text{in } \Omega, \tag{1}\]
\[\frac{\partial u}{\partial\gamma} = \psi(x,u) \quad\text{on } \partial\Omega, \tag{2}\]
where $\Omega$ is a bounded $C^3$ domain in $\mathbb{R}^n$, $n \ge 2$, $\gamma$ is the inner unit normal to $\partial\Omega$, and $f$, $\psi$ are given functions defined on $\overline\Omega \times \mathbb{R}$.
Main Result: Gradient Estimate for the Neumann Problem of Mean Curvature Equations. Furthermore, we assume there exist positive constants $M_0$, $L_1$, $L_2$ such that
\[|u| \le M_0 \quad\text{in } \Omega, \tag{3}\]
\[f_z(x,z) \ge 0 \quad\text{in } \overline\Omega\times[-M_0,M_0], \tag{4}\]
\[|f(x,z)| + |f_x(x,z)| \le L_1 \quad\text{in } \overline\Omega\times[-M_0,M_0], \tag{5}\]
\[\|\psi(x,z)\|_{C^3(\overline\Omega\times[-M_0,M_0])} \le L_2. \tag{6}\]
Main Result: Gradient Estimate for the Neumann Problem of Mean Curvature Equations. Theorem 1. Suppose $u \in C^2(\overline\Omega)\cap C^3(\Omega)$ is a solution to the problem (1), (2) and satisfies (3). If $f$, $\psi$ satisfy the conditions (4), (5), (6) respectively, then there exists a small positive constant $\mu_0$ such that
\[\sup_{\overline\Omega_{\mu_0}} |Du| \le \max\{M_1, M_2\},\]
where $M_1$ is a positive constant depending only on $n$, $\mu_0$, $M_0$, $L_1$, which comes from the interior gradient estimate; $M_2$ is a positive constant depending only on $n$, $\Omega$, $\mu_0$, $M_0$, $L_1$, $L_2$.
Existence Theorem. Theorem 2. Let $\Omega \subset \mathbb{R}^n$ be a bounded domain with $\partial\Omega \in C^3$, $n \ge 2$, and let $\gamma$ be the inward unit normal to $\partial\Omega$. Suppose $\psi(x) \in C^3(\overline\Omega)$. Then the problem
\[\mathrm{div}\Big(\frac{Du}{\sqrt{1+|Du|^2}}\Big) = u \quad\text{in } \Omega, \qquad \frac{\partial u}{\partial\gamma} = \psi(x) \quad\text{on } \partial\Omega,\]
admits a unique $C^2(\overline\Omega)$ solution.
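As a sanity check on Theorem 2, the one-dimensional analogue of this Neumann problem can be solved numerically. The sketch below (assuming SciPy's `solve_bvp`; the boundary data `psi0`, `psi1` are illustrative values, not taken from the talk) solves $(u'/\sqrt{1+u'^2})' = u$ on $(0,1)$, i.e. $u'' = u\,(1+u'^2)^{3/2}$, with $u'(0)=\psi_0$, $u'(1)=\psi_1$:

```python
import numpy as np
from scipy.integrate import solve_bvp

# 1-D analogue of Theorem 2:  (u'/sqrt(1+u'^2))' = u  on (0, 1),
# i.e.  u'' = u * (1 + u'^2)^(3/2),  with u'(0) = psi0, u'(1) = psi1.
psi0, psi1 = 0.2, -0.2          # illustrative Neumann data

def rhs(t, y):
    # y[0] = u, y[1] = u'
    return np.vstack([y[1], y[0] * (1.0 + y[1]**2)**1.5])

def bc(ya, yb):
    return np.array([ya[1] - psi0, yb[1] - psi1])

x = np.linspace(0.0, 1.0, 50)
y0 = np.zeros((2, x.size))      # start Newton's iteration from u = 0
sol = solve_bvp(rhs, bc, x, y0, tol=1e-8)

print(sol.status, float(np.max(np.abs(sol.sol(x)[1]))))  # status 0 means converged
```

The right-hand side $f(u) = u$ has $f_z = 1 > 0$, mirroring condition (4); this strict monotonicity is what makes the Neumann problem uniquely solvable.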
Three Aspects of the History of Gradient Estimates for Mean Curvature Equations. Interior gradient estimates; boundary gradient estimates for the prescribed Dirichlet problem; boundary gradient estimates for the capillary boundary value problem (the prescribed contact angle problem).
Main Methods Integral estimates; Maximum principle.
History of Interior Gradient Estimates for Mean Curvature Equations. 1954, Finn: 2-dim minimal surface; 1969, Bombieri-De Giorgi-Miranda: n-dim minimal surface; 1970, Ladyzhenskaya-Ural'tseva: general prescribed mean curvature equation; 1972, Trudinger: a new proof; 1976, L. Simon: general elliptic equations of divergence type. Integral estimates.
History of Interior Gradient Estimates for Mean Curvature Equations. Maximum principle: 1983, Korevaar: normal variation method, minimal surface; 1998, Xu-Jia Wang: Bernstein method for mean curvature equations, a new proof.
History of the Dirichlet Problem for Mean Curvature Equations. Dirichlet problem:
\[\mathrm{div}\Big(\frac{Du}{\sqrt{1+|Du|^2}}\Big) = f(x) \quad\text{in } \Omega, \tag{7}\]
\[u = \varphi(x) \quad\text{on } \partial\Omega. \tag{8}\]
1965, Finn: the 2-dim minimal surface equation with (8) is solvable for arbitrary $\varphi \in C^0(\partial\Omega)$ iff $\Omega$ is convex; 1968, Jenkins-Serrin: the n-dim minimal surface equation with (8) is solvable for arbitrary $\varphi \in C^0(\partial\Omega)$ iff the mean curvature of $\partial\Omega$ is nonnegative at every point of $\partial\Omega$; 1969, Serrin: the Dirichlet problem (7), (8) is solvable for arbitrary $\varphi \in C^0(\partial\Omega)$ iff the mean curvature $H$ of $\partial\Omega$ satisfies $H(x) \ge \frac{|f(x)|}{n-1}$ on $\partial\Omega$.
History of the Capillary Problem for Mean Curvature Equations. Capillary problem:
\[\mathrm{div}\Big(\frac{Du}{\sqrt{1+|Du|^2}}\Big) = f(x,u) \quad\text{in } \Omega, \tag{9}\]
\[\frac{\partial u}{\partial\gamma} = \cos\theta(x)\,\sqrt{1+|Du|^2} \quad\text{on } \partial\Omega. \tag{10}\]
1973, Ural'tseva: conormal problem; 1976, Simon-Spruck, Gerhardt; 1983, Lieberman: conormal problem for general elliptic equations of variational type. Integration by parts.
History of the Capillary Problem for Mean Curvature Equations. Maximum principle: 1975, Spruck: 2-dim, positive gravity; 1988, Korevaar: normal variation method, n-dim, positive gravity; 1984-1987, Lieberman: quasilinear equations; 1988, Lieberman: zero gravity; 1996, Bo Guan: Korevaar's method, parabolic mean curvature equations; 1999, Bo Guan: parabolic curvature equations, $\cos\theta < \frac{\sqrt3}{2}$.
Preliminaries. Let $\Omega$ be a bounded $C^3$ domain in $\mathbb{R}^n$, $n\ge2$, and let $\gamma$ be the inner unit normal to $\partial\Omega$. Set
\[d(x) = \mathrm{dist}(x,\partial\Omega), \qquad \Omega_\mu = \{x\in\Omega : d(x) < \mu\}.\]
Then there exists a positive constant $\mu_1 > 0$ such that $d(x) \in C^3(\overline\Omega_{\mu_1})$.
Preliminaries. As mentioned in Simon-Spruck (1976), or on page 331 of Lieberman's (2013) book, we can take $\gamma = Dd$ in $\Omega_{\mu_1}$; then $\gamma$ is a $C^2(\overline\Omega_{\mu_1})$ vector field with the following properties:
\[|D\gamma| + |D^2\gamma| \le C(n,\Omega) \ \text{in } \Omega_{\mu_1}, \qquad |\gamma| = 1, \quad \sum_{1\le i\le n}\gamma_i D_j\gamma_i = 0, \quad \sum_{1\le i\le n}\gamma_i D_i\gamma_j = 0 \ \text{in } \Omega_{\mu_1}. \tag{11}\]
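For the unit disc these properties of $\gamma = Dd$ can be verified symbolically; a minimal sketch (assuming SymPy), with $d(x) = 1 - |x|$ the distance to the boundary inside the disc:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
d = 1 - sp.sqrt(x**2 + y**2)            # distance to the unit circle, inside the disc
gamma = [sp.diff(d, x), sp.diff(d, y)]  # inner unit normal, extended as gamma = Dd

# |gamma| = 1
assert sp.simplify(gamma[0]**2 + gamma[1]**2 - 1) == 0
# sum_i gamma_i D_j gamma_i = 0  (differentiate |gamma|^2 = 1)
for s in (x, y):
    assert sp.simplify(sum(g * sp.diff(g, s) for g in gamma)) == 0
# sum_i gamma_i D_i gamma_j = 0  (gamma is constant along normal lines)
for gj in gamma:
    assert sp.simplify(gamma[0] * sp.diff(gj, x) + gamma[1] * sp.diff(gj, y)) == 0
print("all identities in (11) hold for the disc")
```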
Preliminaries. Let
\[c^{ij} = \delta_{ij} - \gamma_i\gamma_j \quad\text{in } \Omega_{\mu_1}. \tag{12}\]
For any vector $\zeta \in \mathbb{R}^n$, denote by $\zeta'$ the tangential part of $\zeta$, whose $i$-th component is $\sum_{1\le j\le n} c^{ij}\zeta_j$. We write $D'u$ for the tangential part of $Du$; then
\[|D'u|^2 = \sum_{1\le i,j\le n} c^{ij}u_iu_j. \tag{13}\]
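Numerically, $c^{ij} = \delta_{ij} - \gamma_i\gamma_j$ is the orthogonal projection onto the tangent plane, which is what makes (13) work; a small NumPy check (a random unit vector stands in for $\gamma$):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = rng.normal(size=4)
gamma /= np.linalg.norm(gamma)               # unit normal
C = np.eye(4) - np.outer(gamma, gamma)       # c^{ij} = delta_ij - gamma_i gamma_j

u = rng.normal(size=4)                       # stands for Du at a boundary point
Dtu = C @ u                                  # tangential part D'u

assert np.allclose(C @ C, C)                 # projection: C^2 = C
assert np.allclose(C @ gamma, 0.0)           # kills the normal direction
assert np.isclose(Dtu @ Dtu, u @ C @ u)      # |D'u|^2 = sum c^{ij} u_i u_j, as in (13)
```

The last assertion uses $C^2 = C$: $|Cu|^2 = u^TC^TCu = u^TCu$.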
Preliminaries. Set
\[a^{ij}(Du) = v^2\delta_{ij} - u_iu_j, \tag{14}\]
\[v = (1+|Du|^2)^{1/2}. \tag{15}\]
Then the problem (1), (2) is equivalent to the following:
\[\sum_{i,j=1}^n a^{ij}u_{ij} = f(x,u)\,v^3 \quad\text{in } \Omega, \tag{16}\]
\[\frac{\partial u}{\partial\gamma} = \psi(x,u) \quad\text{on } \partial\Omega. \tag{17}\]
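The equivalence of (1) and (16) is the pointwise identity $v^3\,\mathrm{div}(Du/v) = \sum a^{ij}u_{ij}$; it can be checked symbolically in two dimensions (a sketch assuming SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
ux, uy = sp.diff(u, x), sp.diff(u, y)
v = sp.sqrt(1 + ux**2 + uy**2)

# left-hand side of (1): div(Du / v)
lhs = sp.diff(ux / v, x) + sp.diff(uy / v, y)

# right-hand side of (16): sum a^{ij} u_{ij} with a^{ij} = v^2 delta_ij - u_i u_j
a = [[v**2 - ux**2, -ux*uy], [-ux*uy, v**2 - uy**2]]
vars_ = [x, y]
rhs = sum(a[i][j] * sp.diff(u, vars_[i], vars_[j]) for i in range(2) for j in range(2))

assert sp.simplify(lhs * v**3 - rhs) == 0    # v^3 * div(Du/v) = sum a^{ij} u_{ij}
print("identity (1) <-> (16) verified in 2-D")
```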
Preliminaries. Lemma (Gilbarg-Trudinger, Theorem 16.5). Suppose $u \in C^3(\Omega)$ is a solution of (1) and satisfies (3). If $f$ satisfies the conditions (4), (5), then for any subdomain $\Omega' \subset\subset \Omega$ we have
\[\sup_{\Omega'} |Du| \le M_1,\]
where $M_1$ is a positive constant depending only on $n$, $M_0$, $\mathrm{dist}(\Omega',\partial\Omega)$, $L_1$.
Idea of the Proof of Theorem 1. We combine the techniques of Spruck (1975), Lieberman (1988), Wang (1998), etc. Choose an auxiliary function involving $|Du|^2$ and other lower-order terms, and then apply the maximum principle to this auxiliary function in the annular domain $\Omega_{\mu_0}$ ($0 < \mu_0 < \mu_1$).
Idea of the Proof of Theorem 1. Set $w = u - \psi(x,u)d$. Choose the auxiliary function
\[\Phi(x) = \log|Dw|^2 \cdot e^{1+M_0+u} \cdot e^{\alpha_0 d}, \qquad x \in \overline\Omega_{\mu_0},\]
where $\alpha_0 = \|\psi\|_{C^0(\overline\Omega\times[-M_0,M_0])} + C_0 + 1$ and $C_0$ depends on $n$, $\Omega$.
Idea of the Proof of Theorem 1. To simplify the computations, let
\[\varphi(x) = \log\Phi(x) = \log\log|Dw|^2 + h(u) + g(d), \tag{18}\]
where
\[h(u) = 1 + M_0 + u, \qquad g(d) = \alpha_0 d. \tag{19}\]
Idea of the Proof of Theorem 1. Suppose $\varphi$ attains its maximum at a point $x_0 \in \overline\Omega_{\mu_0}$, where $0 < \mu_0 < \mu_1$ is a small constant to be determined later. We divide the proof of Theorem 1 into three cases. Case I: if $x_0 \in \partial\Omega$, the Hopf lemma gives the bound on $|Du|(x_0)$. Case II: if $x_0 \in \partial\Omega_{\mu_0}\cap\Omega$, the bound follows from the interior gradient estimate. Case III: if $x_0 \in \Omega_{\mu_0}$, we use the maximum principle to bound $|Du|(x_0)$.
Proof of Theorem 1: Case I. If $x_0 \in \partial\Omega$, we shall prove that $|Du|(x_0)$ is bounded. First, differentiating $\varphi$ along the normal direction, we have
\[\varphi_\gamma = \frac{\sum_{1\le i\le n}(|Dw|^2)_i\gamma_i}{|Dw|^2\log|Dw|^2} + h'u_\gamma + g'. \tag{20}\]
Next we compute $\sum_{1\le i\le n}(|Dw|^2)_i\gamma_i$.
Proof of Theorem 1: Case I. Since
\[w_i = u_i - \psi_u u_i d - \psi_{x_i}d - \psi\gamma_i, \tag{21}\]
\[|Dw|^2 = |D'w|^2 + w_\gamma^2, \tag{22}\]
\[w_\gamma = u_\gamma - \psi_u u_\gamma d - \sum_{1\le i\le n}\psi_{x_i}\gamma_i d - \psi = 0 \quad\text{on } \partial\Omega, \tag{23}\]
we get
\[(|Dw|^2)_i = (|D'w|^2)_i \quad\text{on } \partial\Omega. \tag{24}\]
Proof of Theorem 1: Case I. From (12), (13) and (24), we have
\[\sum_{1\le i\le n}(|Dw|^2)_i\gamma_i = 2\sum_{1\le i,k,l\le n} c^{kl}u_{ki}u_l\gamma_i - 2\sum_{1\le k,l\le n} c^{kl}u_l D_k\psi, \tag{25}\]
where $D_k\psi = \psi_{x_k} + \psi_u u_k$.
Proof of Theorem 1: Case I. Differentiating (2) in the tangential directions, we obtain
\[\sum_{1\le k\le n} c^{kl}(u_\gamma)_k = \sum_{1\le k\le n} c^{kl}D_k\psi. \tag{26}\]
It follows that
\[\sum_{1\le i,k\le n} c^{kl}u_{ik}\gamma_i = -\sum_{1\le i,k\le n} c^{kl}u_i(\gamma_i)_k + \sum_{1\le k\le n} c^{kl}D_k\psi. \tag{27}\]
Inserting (27) into (25) and combining (2), (20), we have
\[|Dw|^2\log|Dw|^2\,\varphi_\gamma(x_0) = \big(g'(0) + h'\psi\big)|Dw|^2\log|Dw|^2 - 2\sum_{1\le i,k,l\le n} c^{kl}u_iu_l(\gamma_i)_k. \tag{28}\]
Proof of Theorem 1: Case I. From (21), we obtain
\[|Dw|^2 = |Du|^2 - \psi^2 \quad\text{on } \partial\Omega. \tag{29}\]
Assume $|Du|(x_0) \ge \big(100 + 2\|\psi\|^2_{C^0(\overline\Omega\times[-M_0,M_0])}\big)^{1/2}$, otherwise we already have the estimate. Then at $x_0$,
\[\tfrac12|Du|^2 \le |Dw|^2 \le |Du|^2, \tag{30}\]
\[|Dw|^2 \ge 50. \tag{31}\]
Proof of Theorem 1: Case I. Inserting (30) and (31) into (28), we have
\[|Dw|^2\log|Dw|^2\,\varphi_\gamma(x_0) \ge \big(\alpha_0 - \|\psi\|_{C^0(\overline\Omega\times[-M_0,M_0])} - C_0\big)|Dw|^2\log|Dw|^2 = |Dw|^2\log|Dw|^2 > 0.\]
On the other hand, by the Hopf lemma,
\[\varphi_\gamma(x_0) \le 0, \tag{32}\]
a contradiction. So we have
\[|Du|(x_0) \le \big(100 + 2\|\psi\|^2_{C^0(\overline\Omega\times[-M_0,M_0])}\big)^{1/2}. \tag{33}\]
Proof of Theorem 1: Case II. If $x_0 \in \partial\Omega_{\mu_0}\cap\Omega$, the bound follows from the interior gradient estimate. From Lemma 3, we have
\[\sup_{\partial\Omega_{\mu_0}\cap\Omega} |Du| \le \widetilde M_1, \tag{34}\]
where $\widetilde M_1$ depends only on $n$, $M_0$, $\mu_0$, $L_1$.
Proof of Theorem 1: Case III (Preliminaries). If $x_0 \in \Omega_{\mu_0}$, we prove that $|Du|(x_0)$ is bounded. In this case $x_0$ is a critical point of $\varphi$. We choose normal coordinates at $x_0$: rotating the coordinate system suitably, we may assume that $u_i(x_0) = 0$ for $2\le i\le n$ and $u_1(x_0) = |Du| > 0$. We may further assume that the matrix $(u_{ij}(x_0))_{2\le i,j\le n}$ is diagonal.
Proof of Theorem 1: Case III (Preliminaries). Take $\mu_2 \le \frac{1}{100L_2}$, so that
\[|\psi_u|\mu_2 \le \tfrac1{100}, \qquad \tfrac{99}{100} \le 1 - \psi_u\mu_2 \le \tfrac{101}{100}. \tag{35}\]
Choose
\[\mu_0 = \tfrac12\min\{\mu_1, \mu_2, 1\}.\]
Proof of Theorem 1: Case III (Preliminaries). For simplicity, set
\[w = u - G, \qquad G = \psi(x,u)d.\]
Then
\[w_k = (1-G_u)u_k - G_{x_k}. \tag{36}\]
At $x_0$,
\[|Dw|^2 = w_1^2 + \sum_{2\le i\le n} w_i^2, \tag{37}\]
\[w_i = -G_{x_i} = -\psi_{x_i}d - \psi\gamma_i, \qquad i = 2,\dots,n, \tag{38}\]
\[w_1 = (1-G_u)u_1 - G_{x_1} = (1-G_u)u_1 - \psi_{x_1}d - \psi\gamma_1. \tag{39}\]
Proof of Theorem 1: Case III (Preliminaries). Assume
\[u_1 = |Du|(x_0) \ge 3000\big(1 + \|\psi\|^2_{C^0(\overline\Omega\times[-M_0,M_0])}\big). \tag{40}\]
Then we have
\[\tfrac9{10}u_1^2 \le |Dw|^2 \le \tfrac{11}{10}u_1^2, \qquad \tfrac9{10}u_1^2 \le w_1^2 \le \tfrac{11}{10}u_1^2. \tag{41}\]
By the choice of $\mu_0$ and (35), we obtain
\[\tfrac{99}{100} \le 1-G_u \le \tfrac{101}{100}. \tag{42}\]
Proof of Theorem 1: Case III (Step 1: Derive Formula (51)). We divide the rest of the proof of Theorem 1 into three steps. From now on, all calculations are done at the fixed point $x_0$. Step 1: we use the equation (16) to derive the formula (51).
Proof of Theorem 1: Case III (Step 1: Derive Formula (51)). Taking the first derivatives of $\varphi$,
\[\varphi_i = \frac{(|Dw|^2)_i}{|Dw|^2\log|Dw|^2} + h'u_i + g'\gamma_i. \tag{43}\]
From $\varphi_i(x_0) = 0$, we have
\[(|Dw|^2)_i = -|Dw|^2\log|Dw|^2\,(h'u_i + g'\gamma_i). \tag{44}\]
Taking derivatives of $\varphi$ again,
\[\varphi_{ij} = \frac{(|Dw|^2)_{ij}}{|Dw|^2\log|Dw|^2} - (1+\log|Dw|^2)\frac{(|Dw|^2)_i(|Dw|^2)_j}{(|Dw|^2\log|Dw|^2)^2} + h'u_{ij} + h''u_iu_j + g''\gamma_i\gamma_j + g'(\gamma_i)_j. \tag{45}\]
Proof of Theorem 1: Case III (Step 1: Derive Formula (51)). Using (44), it follows that
\[0 \ge \sum_{1\le i,j\le n} a^{ij}\varphi_{ij} =: I_1 + I_2, \tag{46}\]
where
\[I_1 = \frac{1}{|Dw|^2\log|Dw|^2}\sum_{1\le i,j\le n} a^{ij}(|Dw|^2)_{ij}, \tag{47}\]
\[I_2 = \sum_{1\le i,j\le n} a^{ij}\Big\{h'u_{ij} + \big[h'' - (1+\log|Dw|^2)h'^2\big]u_iu_j + \big[g'' - (1+\log|Dw|^2)g'^2\big]\gamma_i\gamma_j - 2(1+\log|Dw|^2)h'g'\gamma_iu_j + g'(\gamma_i)_j\Big\}. \tag{48}\]
Proof of Theorem 1: Case III (Step 1: Derive Formula (51)). From the choice of coordinates, we have
\[a^{11} = 1, \qquad a^{ii} = v^2 = 1+u_1^2 \ (2\le i\le n), \qquad a^{ij} = 0 \ (i\ne j,\ 1\le i,j\le n). \tag{49}\]
It follows that
\[I_2 \ge fv^3 - (1 + c^{11}\alpha_0^2)u_1^2\log|Dw|^2 - C_1u_1^2. \tag{50}\]
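The normalization (49) is elementary linear algebra: once $Du = (u_1,0,\dots,0)$, the matrix $a^{ij} = v^2\delta_{ij} - u_iu_j$ is diagonal with the stated entries. A quick NumPy check (the values of $n$ and $u_1$ are illustrative):

```python
import numpy as np

# Check (49): with Du = (u1, 0, ..., 0) and a^{ij} = v^2 delta_ij - u_i u_j,
# the matrix is diagonal with a^{11} = 1 and a^{ii} = v^2 = 1 + u1^2 for i >= 2.
n, u1 = 4, 3.0
Du = np.zeros(n)
Du[0] = u1
v2 = 1.0 + Du @ Du                       # v^2 = 1 + |Du|^2
a = v2 * np.eye(n) - np.outer(Du, Du)

assert np.isclose(a[0, 0], 1.0)                          # a^{11} = v^2 - u1^2 = 1
assert all(np.isclose(a[i, i], v2) for i in range(1, n)) # a^{ii} = v^2, i >= 2
assert np.allclose(a - np.diag(np.diag(a)), 0.0)         # off-diagonal entries vanish
```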
Proof of Theorem 1: Case III (Step 1: Derive Formula (51)). Then we have
\[0 \ge \sum_{1\le i,j\le n} a^{ij}\varphi_{ij} =: Q_1 + Q_2 + Q_3, \tag{51}\]
where $Q_1$ contains all the terms quadratic in $u_{ij}$, $Q_2$ contains all the terms linear in $u_{ij}$, and the remaining terms are denoted by $Q_3$.
Proof of Theorem 1: Case III (Step 1: Derive Formula (51)). More precisely, the terms quadratic in $u_{ij}$ are
\[Q_1 = \frac{1}{|Dw|^2\log|Dw|^2}\Big[2(1-G_u)^2u_{11}^2 + 2(1-G_u)^2(v^2+1)\sum_{2\le i\le n}u_{1i}^2 + 2(1-G_u)^2v^2\sum_{2\le i\le n}u_{ii}^2 + \frac{4(1-G_u)u_1}{v^2}\,u_{11}\sum_{1\le k\le n}w_ku_{k1} + 4(1-G_u)u_1\sum_{2\le i\le n}u_{1i}\sum_{1\le k\le n}w_ku_{ki}\Big]. \tag{52}\]
Proof of Theorem 1: Case III (Step 1: Derive Formula (51)). The terms linear in $u_{ij}$ are
\[Q_2 = \frac{1}{|Dw|^2\log|Dw|^2}\Big[\big(2(1-G_u)fu_1v - 4G_{uu}u_1 - 4G_{ux_1}\big)\sum_{1\le k\le n}w_ku_{k1} - 4v^2\sum_{2\le i\le n}G_{ux_i}\sum_{1\le k\le n}w_ku_{ki} - 4(1-G_u)\big(G_{uu}u_1^2 + 2G_{ux_1}u_1 + G_{x_1x_1}\big)u_{11} - 4(1-G_u)(v^2+1)\sum_{2\le i\le n}\big(G_{ux_i}u_1 + G_{x_1x_i}\big)u_{1i} - 4(1-G_u)v^2\sum_{2\le i\le n}G_{x_ix_i}u_{ii}\Big]. \tag{53}\]
Proof of Theorem 1: Case III (Step 1: Derive Formula (51)). The remaining terms are
\[Q_3 = I_2 + \frac{1}{|Dw|^2\log|Dw|^2}\Big[2(1-G_u)f_uv^3u_1w_1 - 2G_{uu}fv^3u_1w_1 + 2\big(G_{uu}u_1^2 + 2G_{ux_1}u_1 + G_{x_1x_1}\big)^2 + 2(v^2+1)\sum_{2\le i\le n}\big(G_{ux_i}u_1 + G_{x_1x_i}\big)^2 + 2(1-G_u)v^3\sum_{1\le k\le n}f_{x_k}w_k + \cdots\Big], \tag{54}\]
where the omitted terms collect the products of $f$, $v$, $u_1$, $w_k$ with the derivatives of $G$ up to third order (for instance $G_{uux_k}$, $G_{uuu}$, $G_{ux_1x_k}$, $G_{x_ix_ix_k}$, $G_{x_1x_1x_k}$); each of them is $O(u_1^2)$ after division by $|Dw|^2\log|Dw|^2$.
Proof of Theorem 1: Case III (Step 1: Derive Formula (51)). From the estimate on $I_2$, we have
\[Q_3 \ge fv^3 - \frac{2fG_{uu}v^3u_1w_1}{|Dw|^2\log|Dw|^2} - (1+c^{11}\alpha_0^2)u_1^2\log|Dw|^2 - C_2u_1^2, \tag{55}\]
where $C_2$ depends on $n$, $\Omega$, $M_0$, $\mu_0$, $L_1$, $L_2$. In the above computation of $Q_3$ we used the relation $f_u \ge 0$.
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)). Step 2: in this step we treat the terms $Q_1$, $Q_2$ and derive the formula (65). Let
\[A = |Dw|^2\log|Dw|^2. \tag{56}\]
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)). By $\varphi_i = 0$ we have the following formulas:
\[\sum_{1\le k\le n} w_kw_{ki} = -\frac{h'}{2}Au_i - \frac{g'\gamma_i}{2}A, \qquad i = 1,2,\dots,n. \tag{57}\]
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)). Then we get
\[(1-G_u)\sum_{1\le k\le n} w_ku_{ki} = -\frac{h'}{2}Au_i - \frac{g'\gamma_i}{2}A + G_{uu}w_1u_1u_i + G_{ux_i}w_1u_1 + u_i\sum_{1\le k\le n}w_kG_{ux_k} + \sum_{1\le k\le n}w_kG_{x_kx_i}, \qquad i = 1,\dots,n, \tag{58}\]
and in particular
\[(1-G_u)\sum_{1\le k\le n} w_ku_{k1} = -\frac{h'}{2}Au_1 - \frac{g'\gamma_1}{2}A + G_{uu}u_1^2w_1 + G_{ux_1}w_1u_1 + u_1\sum_{1\le k\le n}w_kG_{ux_k} + \sum_{1\le k\le n}w_kG_{x_kx_1}. \tag{59}\]
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)).
\[(1-G_u)\sum_{1\le k\le n} w_ku_{ki} = -\frac{g'\gamma_i}{2}A + G_{ux_i}w_1u_1 + \sum_{1\le k\le n}w_kG_{x_kx_i}, \qquad i = 2,\dots,n. \tag{60}\]
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)).
\[u_{1i} = -\frac{g'\gamma_iA}{2(1-G_u)w_1} - \frac{w_i}{w_1}u_{ii} + \frac{G_{ux_i}u_1w_1 + \sum_{1\le k\le n}w_kG_{x_kx_i}}{(1-G_u)w_1}, \qquad i = 2,3,\dots,n, \tag{61}\]
\[u_{11} = \sum_{2\le i\le n}\frac{w_i^2}{w_1^2}u_{ii} - \frac{h'Au_1}{2(1-G_u)w_1} + \frac{D}{1-G_u}. \tag{62}\]
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)). Here we let
\[D = G_{uu}u_1^2 - \frac{g'\gamma_1A}{2w_1} + G_{ux_1}u_1 + \frac{u_1}{w_1}\sum_{1\le k\le n}w_kG_{ux_k} + \frac{g'A}{2w_1^2}\sum_{2\le i\le n}w_i\gamma_i - \frac{u_1}{w_1}\sum_{2\le i\le n}G_{ux_i}w_i + \frac{1}{w_1}\sum_{1\le k\le n}w_kG_{x_kx_1} - \frac{1}{w_1^2}\sum_{\substack{1\le k\le n \\ 2\le i\le n}}w_iw_kG_{x_kx_i} = O(u_1^2). \tag{63}\]
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)). From (16) and (62), we have
\[\sum_{2\le i\le n}\Big(v^2 + \frac{w_i^2}{w_1^2}\Big)u_{ii} = fv^3 + \frac{h'Au_1}{2(1-G_u)w_1} - \frac{D}{1-G_u}. \tag{64}\]
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)). From (59)-(62), we deal with each term in $Q_1$, $Q_2$ and get (65):
\[0 \ge \sum_{1\le i,j\le n} a^{ij}\varphi_{ij} =: J_1 + J_2, \tag{65}\]
where $J_1$ contains only the terms with $u_{ii}$, and the other terms belong to $J_2$.
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)).
\[J_1 =: \frac1A\big[J_{11} + J_{12}\big], \tag{66}\]
where $J_{11}$ contains the terms quadratic in $u_{ii}$ ($i\ge2$), and $J_{12}$ the terms linear in $u_{ii}$ ($i\ge2$). It follows that
\[J_{11} = 2(1-G_u)^2\Big\{\sum_{2\le i\le n}d_ie_iu_{ii}^2 + 2\sum_{2\le i<j\le n}\frac{w_i^2w_j^2}{w_1^4}u_{ii}u_{jj}\Big\}, \tag{67}\]
where
\[d_i = v^2 + \frac{w_i^2}{w_1^2}, \qquad i = 2,3,\dots,n, \tag{68}\]
\[e_i = 1 + \frac{w_i^2}{w_1^2}, \qquad i = 2,3,\dots,n. \tag{69}\]
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)). Substituting (59)-(62) into the linear terms of (52), (53), every term containing some $u_{ii}$ ($i\ge2$) acquires an explicit coefficient built from $A$, $f$, $v$, $u_1$, $w_1$, $w_i$, $\gamma_i$ and the derivatives of $G$; collecting them, we write
\[J_{12} =: \sum_{2\le i\le n} K_iu_{ii}, \tag{70}\]
where each coefficient satisfies
\[K_i = O(A), \qquad i = 2,\dots,n. \tag{71}\]
Proof of Theorem 1: Case III (Step 2: Derive Formula (65)). We write the other terms as $J_2$; then
\[J_2 = Q_3 - h'fvu_1^2 + \frac{2fG_{uu}vu_1^3w_1}{A} + \frac{h'^2Au_1^2}{2w_1^2} + \frac{h'^2Au_1^3}{(1-G_u)v^2w_1} + \frac{g'^2A}{2w_1^2} + \frac{c^{11}g'^2Au_1}{(1-G_u)w_1} + O(u_1^2) \ge \frac14(1+c^{11}\alpha_0^2)u_1^2\log|Dw|^2 - C_6u_1^2. \tag{72}\]
Proof of Theorem 1: Case III (Step 3: Deal with $J_1$ and Complete the Proof). Step 3: in this step we concentrate on $J_1$ and then complete the proof of Theorem 1 via Lemma 4. From (64), we have
\[u_{22} = -\frac1{d_2}\sum_{3\le i\le n}d_iu_{ii} + \frac1{d_2}\Big[fv^3 + \frac{h'Au_1}{2(1-G_u)w_1} - \frac{D}{1-G_u}\Big]. \tag{73}\]
Proof of Theorem 1: Case III (Step 3: Deal with $J_1$ and Complete the Proof). By (73), we have
\[J_1 = \frac{2(1-G_u)^2}{Ad_2}\Big[\sum_{3\le i\le n}b_{ii}u_{ii}^2 + 2\sum_{3\le i<j\le n}b_{ij}u_{ii}u_{jj} + \sum_{3\le i\le n}\widetilde K_iu_{ii}\Big] + R. \tag{74}\]
Proof of Theorem 1: Case III (Step 3: Deal with $J_1$ and Complete the Proof). Here
\[b_{ii} = e_2d_i^2 + e_id_id_2 - \frac{2w_2^2w_i^2}{w_1^4}d_i =: 2u_1^4 + A_{1i}u_1^2 + A_{2i}, \qquad i \ge 3, \tag{75}\]
\[b_{ij} = e_2d_id_j - \frac{w_2^2w_i^2}{w_1^4}d_j - \frac{w_2^2w_j^2}{w_1^4}d_i + \frac{w_i^2w_j^2}{w_1^4}d_2 =: u_1^4 + G_{ij}u_1^2 + \widehat G_{ij}, \qquad i\ne j,\ i,j\ge3.\]
Proof of Theorem 1: Case III (Step 3: Deal with $J_1$ and Complete the Proof).
\[A_{1i} = 4 + (w_2^2 + w_i^2)\frac{u_1^2}{w_1^2}, \qquad A_{2i} = 2 + (3w_2^2 + 5w_i^2)\frac{u_1^2}{w_1^2} + \frac{2w_2^2 + 4w_i^2}{w_1^2} + w_i^2(w_2^2 + w_i^2)\frac{u_1^2}{w_1^4} + \frac{2w_i^2(w_2^2 + w_i^2)}{w_1^4},\]
\[G_{ij} = 2 + w_2^2\frac{u_1^2}{w_1^2}, \qquad \widehat G_{ij} = 1 + (2w_2^2 + w_i^2 + w_j^2)\frac{u_1^2}{w_1^2} + \frac{w_2^2 + w_i^2 + w_j^2}{w_1^2} + \frac{2w_i^2w_j^2}{w_1^4} - \frac{w_2^2w_i^2w_j^2}{w_1^6}. \tag{76}\]
Proof of Theorem 1: Case III (Step 3: Deal with $J_1$ and Complete the Proof).
\[\widetilde K_i = -2e_2\Big[fv^3 + \frac{h'Au_1}{2(1-G_u)w_1} - \frac{D}{1-G_u}\Big]d_i + \frac{2w_2^2w_i^2}{w_1^4}\Big[fv^3 + \frac{h'Au_1}{2(1-G_u)w_1} - \frac{D}{1-G_u}\Big] + K_id_2 - K_2d_i = O(u_1^5), \tag{77}\]
\[R = \frac{2e_2(1-G_u)^2}{Ad_2}\Big[fv^3 + \frac{h'Au_1}{2(1-G_u)w_1} - \frac{D}{1-G_u}\Big]^2 + \frac{K_2}{Ad_2}\Big[fv^3 + \frac{h'Au_1}{2(1-G_u)w_1} - \frac{D}{1-G_u}\Big] = O\Big(\frac{u_1^2}{\log u_1}\Big). \tag{78}\]
Proof of Theorem 1: Case III (Step 3: Deal with $J_1$ and Complete the Proof). Now we use Lemma 4: if there is a sufficiently large positive constant $C_9$ such that
\[|Du|(x_0) \ge C_9, \tag{79}\]
then we have
\[J_1 \ge -C_{10}\frac{u_1^6}{Ad_2} - C_{11}u_1^2. \tag{80}\]
Proof of Theorem 1: Case III (Step 3: Deal with $J_1$ and Complete the Proof). Using the estimates on $J_1$ in (80) and on $J_2$ in (72), by (65) we have
\[0 \ge \sum_{1\le i,j\le n} a^{ij}\varphi_{ij} \ge \frac14u_1^2\log|Dw|^2 - C_{12}u_1^2. \tag{81}\]
Hence there exists a positive constant $C_{13}$ such that
\[|Du|(x_0) \le C_{13}. \tag{82}\]
Proof of Theorem 1: Case III (Step 3: Deal with $J_1$ and Complete the Proof). So from Case I, Case II and (82), we have
\[|Du|(x_0) \le C_{14}, \qquad x_0 \in \overline\Omega_{\mu_0}.\]
Proof of Theorem 1: Case III (Step 3: Deal with $J_1$ and Complete the Proof). Since $\varphi(x) \le \varphi(x_0)$ for $x \in \overline\Omega_{\mu_0}$, there exists $M_2$ such that
\[|Du|(x) \le M_2 \quad\text{in } \overline\Omega_{\mu_0}. \tag{83}\]
So at last we get the estimate
\[\sup_{\overline\Omega_{\mu_0}} |Du| \le \max\{M_1, M_2\},\]
which completes the proof of Theorem 1.
Lemma 4. Suppose $(b_{ij})$, $\widetilde K_i$ are defined as before, and consider the quadratic form
\[Q(x_3,x_4,\dots,x_n) = \sum_{3\le i\le n}b_{ii}x_i^2 + 2\sum_{3\le i<j\le n}b_{ij}x_ix_j + \sum_{3\le i\le n}\widetilde K_ix_i. \tag{84}\]
Then there exists a sufficiently large positive constant $C_{15}$, depending only on $n$, $\Omega$, $\mu_0$, $M_0$, $L_1$, $L_2$, such that if
\[|Du|(x_0) = u_1(x_0) \ge C_{15}, \tag{85}\]
then
\[Q(x_3,x_4,\dots,x_n) \ge -C_{16}u_1^6, \tag{86}\]
where $C_{16}$ depends on $n$, $\Omega$, $\mu_0$, $M_0$, $L_1$, $L_2$.
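The structure behind Lemma 4 can be illustrated at leading order: by (75), $b_{ii}\approx 2u_1^4$ and $b_{ij}\approx u_1^4$, so the quadratic part of $Q$ is positive definite with smallest eigenvalue of order $u_1^4$, while $\widetilde K_i = O(u_1^5)$; minimizing then gives a lower bound of order $-u_1^6$, matching (86). A NumPy sketch of this leading-order model (the specific numbers are illustrative only, not taken from the talk):

```python
import numpy as np

u1, m = 50.0, 5                            # m plays the role of the n-2 slots x_3..x_n
B = u1**4 * (np.eye(m) + np.ones((m, m)))  # leading order: b_ii = 2 u1^4, b_ij = u1^4
K = u1**5 * np.linspace(0.2, 1.0, m)       # some O(u1^5) linear coefficients

lam_min = np.linalg.eigvalsh(B)[0]         # smallest eigenvalue; equals u1^4 here
x_star = -0.5 * np.linalg.solve(B, K)      # exact minimizer of Q(x) = x^T B x + K^T x
q_min = x_star @ B @ x_star + K @ x_star

assert lam_min >= 0.99 * u1**4
assert q_min >= -(K @ K) / (4.0 * lam_min)   # generic bound -|K|^2 / (4 lambda_min)
assert q_min >= -m * u1**6                   # an O(u1^6) lower bound, as in (86)
```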
Proof of the Existence Theorem. For the $C^0$ estimates we use the methods introduced by Concus-Finn (1974) and Spruck (1975). As in Simon-Spruck (1976), we use the continuity method to complete the proof of Theorem 2. Consider the following family of Neumann problems:
\[\mathrm{div}\Big(\frac{Du}{\sqrt{1+|Du|^2}}\Big) = u \quad\text{in } \Omega, \tag{87}\]
\[\frac{\partial u}{\partial\gamma} = \tau\psi(x) \quad\text{on } \partial\Omega. \tag{88}\]
For $\tau = 0$, $u \equiv 0$ is the unique solution; $\tau = 1$ corresponds to the solution we seek.
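The continuity method for (87)-(88) can be mimicked in the one-dimensional analogue: step $\tau$ from 0 to 1 and warm-start each Neumann problem from the previous solution (a sketch assuming SciPy's `solve_bvp`; the boundary data `psi0`, `psi1` are illustrative):

```python
import numpy as np
from scipy.integrate import solve_bvp

# 1-D sketch of the continuity method for (87)-(88): solve
# u'' = u (1 + u'^2)^{3/2},  u'(0) = tau*psi0, u'(1) = tau*psi1,
# stepping tau from 0 to 1 with warm starts.
psi0, psi1 = 0.5, -0.3
x = np.linspace(0.0, 1.0, 60)
y = np.zeros((2, x.size))        # tau = 0: u = 0 is the unique solution

def rhs(t, y):
    return np.vstack([y[1], y[0] * (1.0 + y[1]**2)**1.5])

for tau in np.linspace(0.0, 1.0, 11):
    def bc(ya, yb, tau=tau):
        return np.array([ya[1] - tau * psi0, yb[1] - tau * psi1])
    sol = solve_bvp(rhs, bc, x, y, tol=1e-8)
    assert sol.status == 0       # each step is solvable from the warm start
    y = sol.sol(x)

print(float(y[0].min()), float(y[0].max()))
```

The warm start is the numerical counterpart of openness in the continuity method: each new problem is a small perturbation of one already solved.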
Comparison with Lieberman's Result. Recall the boundary condition from page 360 of Lieberman's book:
\[b(x,z,p) = v^{q-1}\,p\cdot\gamma + \psi(x,z) = 0 \quad\text{on } \partial\Omega. \tag{89}\]
Remark 1: for $q = 0$, Xu can give new proofs, using $\Phi(x) = \log(v + \psi u_\gamma)\,h(u)\,g(d)$; see her 2014 USTC thesis. Remark 2: using $\Phi(x) = \log|D'u|^2\,h(u)\,g(d)$, Xu can give a unified treatment of $q = 0$ and $q > 1$ in arXiv:1411.5790.
Comparison with Lieberman's Result. Remark 3: $q = 1$, the Neumann problem: Lieberman's approach does not work out. This is because $b_p\cdot\gamma = 1$ and $\delta b(x,z,p) = -\psi$, so his condition (9.64h) on page 356, $\delta b \le o(b_p\cdot\gamma)$, does not hold, where $\delta f(x,z,p) = p\cdot f_p(x,z,p)$. It is easy to check that when $q = 1$ and $b(x,z,p) = p\cdot\gamma + \psi(x,z)$, we have $b_{p_i} = \gamma_i$, $b_p\cdot\gamma = 1$, and $\delta b(x,z,p) = u_\gamma = -\psi$.
Hessian Equations. Using similar techniques we can get the gradient estimate for the Hessian equation with the Neumann boundary value problem:
\[\begin{cases} S_k(D^2u) = f(x,u) & \text{in } \Omega,\\[2pt] \dfrac{\partial u}{\partial\gamma} = \varphi(x,u) & \text{on } \partial\Omega. \end{cases} \tag{90}\]
Our main result is as follows:
\[\sup_{\overline\Omega_{\mu_0}} |Du| \le \max\{\widetilde M_1, \widetilde M_2\}, \tag{91}\]
where $\widetilde M_1$ is a positive constant depending only on $n$, $k$, $\mu_0$, $M_0$, $L_1$, which comes from the interior gradient estimates, and $\widetilde M_2$ is a positive constant depending only on $n$, $k$, $\Omega$, $\mu_0$, $M_0$, $L_1$, $L_2$.
Hessian Equations. We first recall an interior estimate of Chou-Wang and Trudinger. Let $\Omega \subset \mathbb{R}^n$ be a bounded domain. Suppose $u \in C^3(\Omega)$ is a $k$-admissible solution of
\[\sigma_k(D^2u) = f(x,u) \quad\text{in } \Omega \tag{92}\]
satisfying $|u| \le M_0$. Suppose there exist positive constants $L_1$, $L_2$ such that
\[f(x,z) > 0, \qquad |f(x,z)| + |f_x(x,z)| + |f_z(x,z)| \le L_1 \quad\text{in } \overline\Omega\times[-M_0,M_0], \qquad \|\varphi(x,z)\|_{C^3(\overline\Omega\times[-M_0,M_0])} \le L_2. \tag{93}\]
Then for all $\Omega' \subset\subset \Omega$,
\[\sup_{\Omega'} |Du| \le \widetilde M_1, \tag{94}\]
where $\widetilde M_1$ is a positive constant depending on $n$, $k$, $M_0$, $\mathrm{dist}(\Omega',\partial\Omega)$, $L_1$.
Hessian Equations. We consider the auxiliary function
\[G(x) = \log|Dw|^2 + h(u) + g(d), \tag{95}\]
where
\[w(x) = u(x) - \varphi(x,u)d(x), \tag{96}\]
\[h(u) = \log(1 + 4M_0 + u), \tag{97}\]
\[g(d) = \alpha_0 d, \tag{98}\]
in which $\alpha_0$ is a large constant to be chosen later. By (97) we have
\[\log(1+3M_0) \le h(u) \le \log(1+5M_0), \tag{99}\]
\[\frac{1}{1+5M_0} \le h'(u) \le \frac{1}{1+3M_0}, \tag{100}\]
\[\frac{1}{(1+5M_0)^2} \le -h''(u) \le \frac{1}{(1+3M_0)^2}. \tag{101}\]
Hessian Equations. We have
\[w_i = u_i - (\varphi_{x_i} + \varphi_uu_i)d - \varphi d_i. \tag{102}\]
If we assume that $|Du| \ge 8nL_2$ and $\mu_0 \le \frac1{2L_2}$, it follows that
\[\tfrac14|Du| \le |Dw| \le 2|Du|. \tag{103}\]
These inequalities will be used below. We assume that $G(x)$ attains its maximum at $x_0 \in \overline\Omega_{\mu_0}$, where $0 < \mu_0 < \mu_1$ is a sufficiently small number to be chosen later.
Hessian Equations. If the maximum of $G$ is attained on the boundary, then at the maximum point we have
\[0 \ge G_\gamma = \frac{2\sum_p w_pw_{p\gamma}}{|Dw|^2} + g' + h'u_\gamma. \tag{104}\]
Taking the tangential derivative of the Neumann boundary condition, and choosing $\alpha_0 = C_1 + C_2 + \frac{L_2}{1+3M_0} + 1$, we get
\[0 \ge G_\gamma \ge \alpha_0 - C_1 - h'\varphi - \frac{C_2}{|Dw|} \ge 1 - \frac{C_2}{|Dw|}. \tag{105}\]
Thus we have the estimate
\[|Dw|(x_0) \le C_2. \tag{106}\]
Hessian Equations. Near-boundary estimates: take the first and second derivatives of the auxiliary function:
\[0 = G_i = \frac{2\sum_{p=1}^n w_pw_{pi}}{|Dw|^2} + g'D_id + h'u_i, \tag{107}\]
\[G_{ij} = \frac{2\sum_{p=1}^n \big(w_{pj}w_{pi} + w_pw_{pji}\big)}{|Dw|^2} - \frac{4\sum_{p,q=1}^n w_pw_{pi}w_qw_{qj}}{|Dw|^4} \tag{108}\]
\[\qquad\qquad + g''D_idD_jd + g'D_{ij}d + h''u_iu_j + h'u_{ij}. \tag{109}\]
Now we choose coordinates at $x_0$ such that $|Dw| = w_1$ and $(u_{ij})_{2\le i,j\le n}$ is diagonal.
Hessian Equations. Choose $\mu_0 \le \mu_1 := \frac1{2L_2}$, so that $1 - \varphi_ud \ge \frac12$. Suppose that $|Du|(x_0) > M_1 := 64nL_2$; then for $i \ge 2$
\[|u_i| \le \frac1{16n}|Du|, \tag{110}\]
and
\[u_1 \ge \tfrac12|Du|. \tag{111}\]
Hessian Equations. We also get the key fact that
\[u_{11} \le -\frac1{128}h'|Du|^2 < 0, \tag{112}\]
where we assume that $\mu_0 \le \mu_2 := \frac1{64C_4(1+4M_0)}$.
Hessian Equations. If $\mu_0 \le \mu_3 := \frac1{32C_9(1+4M_0)^2(n-k+1)}$ is small, we get
\[\frac{h'F^{11}u_1^2}{8} \ge C_9\,\mathcal{F}\,|Du|^2, \qquad \mathcal{F} = \sum_iF^{ii}. \tag{113}\]
Hessian Equations. Finally,
\[0 \ge \frac{h'\,\mathcal{F}\,|Du|^2}{32(n-k+1)} - C_{11} > 0, \tag{114}\]
a contradiction, provided that $|Du|(x_0) \ge M_4$. We conclude that if $\mu_0 = \min\{\mu_1,\mu_2,\mu_3\}$, then we have the estimate
\[|Du|(x_0) \le \max\{M_1, M_2, M_3, M_4\}. \tag{115}\]
Review. Laplace's equation and linear equations: 1930s; Gilbarg-Trudinger's book, Theorem 6.31. Mean curvature equations: 2014, Ma-Xu, arXiv:1406.0046. Nonlinear uniformly elliptic equations: 1984, Lieberman (quasilinear); 1986, Lieberman-Trudinger (fully nonlinear). Monge-Ampère equations: 1986, Lions-Trudinger-Urbas. Hessian equations ($1<k<n$) with Neumann boundary condition: 2014, Ma-Qiu-Xu, $C^1$ estimate. Hessian equation with contact angle boundary condition: $C^1$ boundary estimate, up to now only for $k = 2$. Monge-Ampère equation with contact angle condition: the $C^1$ estimate is still open, since, as Urbas (IHP 1995) mentioned, the Lions-Trudinger-Urbas (1986) technique does not work.
Thank you for your attention!