Advances in Mathematics 225 (2010) 1616–1633
www.elsevier.com/locate/aim

A Brunn–Minkowski inequality for the Hessian eigenvalue in three-dimensional convex domain

Pan Liu (a), Xi-Nan Ma (b,*), Lu Xu (c)

(a) Department of Mathematics, East China Normal University, Shanghai 200241, China
(b) Department of Mathematics, University of Science and Technology of China, Hefei 230026, Anhui Province, China
(c) Wuhan Institute of Physics and Mathematics, The Chinese Academy of Sciences, Wuhan 430071, Hubei Province, China

Received 8 October 2008; accepted 11 April 2010
Available online 6 May 2010
Communicated by Luis Caffarelli

Abstract

We use deformation methods to obtain the strict log-concavity of the solution of a class of Hessian equations in bounded convex domains in $\mathbb{R}^3$. As an application we obtain the Brunn–Minkowski inequality for the Hessian eigenvalue and characterize the equality case in bounded strictly convex domains in $\mathbb{R}^3$.
© 2010 Elsevier Inc. All rights reserved.

Keywords: Constant rank theorem; Hessian equation; Eigenvalue; Brunn–Minkowski inequality

1. Introduction

Convexity has long been an issue of interest in partial differential equations: it connects geometric properties to analytic inequalities. In 1976, Brascamp and Lieb [2] established the log-concavity of the fundamental solution of the diffusion equation with convex potential in a bounded convex domain in $\mathbb{R}^n$. As a consequence, they proved the log-concavity of the first eigenfunction

  Research of the first author was supported by NSFC No. 10601017; the second author was supported by NSFC No. 10671186 and the Hundred Talents Program of the Chinese Academy of Sciences; the third author was supported by NSFC No. 10901159.
  * Corresponding author.
  E-mail addresses: pliu@math.ecnu.edu.cn (P. Liu), xinan@ustc.edu.cn (X.-N. Ma), xulu@wipm.ac.cn (L. Xu).

0001-8708/$ – see front matter © 2010 Elsevier Inc. All rights reserved.
doi:10.1016/j.aim.2010.04.003
of the Laplace equation in convex domains. At the same time, they obtained the following Brunn–Minkowski inequality for the first eigenvalue:

    $\lambda_1\big((1-t)K_0 + tK_1\big)^{-1/2} \ge (1-t)\lambda_1(K_0)^{-1/2} + t\lambda_1(K_1)^{-1/2}$,   (1.1)

where $t \in [0,1]$ and $K_0, K_1$ are nonempty convex bodies in $\mathbb{R}^n$. In fact, in that paper they proved that this inequality holds for all compact connected domains with sufficiently regular boundary.

One is always interested in the equality case. For example, Jerison [7] pointed out that it is related to the uniqueness of the solution of the Minkowski problem for $\lambda_1$. In [5], Colesanti provided a new proof of (1.1) for convex bodies, which essentially tells us that equality holds if and only if $K_0$ is homothetic to $K_1$. At the same time, he asked whether the same kind of result holds for a class of fully nonlinear elliptic operators, called Hessian operators, other than the Laplace operator. We consider the following eigenvalue problem for a bounded strictly convex domain $K \subset \mathbb{R}^n$ with $C^\infty$ boundary:

    $S_k(D^2u) = \lambda(K)(-u)^k$,  $u < 0$ in $K$,  $u = 0$ on $\partial K$,   (1.2)

where $S_k$ is the so-called $k$-Hessian operator: for $k = 1, \dots, n$ and a $C^2$ function $u$, $S_k(D^2u)$ is the $k$-th elementary symmetric function of the eigenvalues of the Hessian matrix of $u$. If moreover $u$ satisfies $S_i(D^2u) > 0$ for all $1 \le i \le k$, then we call $u$ an admissible solution of (1.2) (see for example [4]). Equivalently we can define

    $\lambda(K) = \inf\Big\{ \int_K (-u)\,S_k(D^2u)\,dx \Big/ \int_K (-u)^{k+1}\,dx \Big\}$,   (1.3)

where the infimum is taken over admissible functions $u \in C^2(K) \cap C(\overline K)$ with $u = 0$ on $\partial K$. This functional $\lambda(K)$ is clearly homogeneous of degree $-2k$, i.e. $\lambda(sK) = s^{-2k}\lambda(K)$ for $s > 0$.

Note that when $k = 1$, Eq. (1.2) corresponds to the Laplace operator, and the Brunn–Minkowski inequality for $\lambda(K)$ is just (1.1). When $k = n$, Eq. (1.2) corresponds to the Monge–Ampère operator; the Brunn–Minkowski inequality for $\lambda(K)$ was obtained by Salani [14], namely $\lambda^{-1/(2n)}(K)$ is concave with respect to Minkowski addition, and equality holds if and only if $K_0$ is homothetic to $K_1$.
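The homogeneity of degree $-2k$ can be checked directly from the eigenvalue problem; the following short computation (our addition, for the reader's convenience) records it.

```latex
% Scaling of the k-Hessian eigenvalue under the dilation K -> sK, s > 0.
% Let u solve S_k(D^2 u) = \lambda(K)(-u)^k in K, and set u_s(x) := s^2\,u(x/s) on sK.
% Then D^2 u_s(x) = (D^2 u)(x/s), hence
S_k(D^2 u_s)(x) = \lambda(K)\bigl(-u(x/s)\bigr)^{k}
                = \lambda(K)\,s^{-2k}\bigl(-u_s(x)\bigr)^{k},
% so u_s is an admissible eigenfunction on sK with eigenvalue
\lambda(sK) = s^{-2k}\,\lambda(K).
% In particular \lambda^{-1/(2k)} is homogeneous of degree 1; for k = 2 in R^3
% this is the source of the exponent -1/4 appearing in Theorem 1.2.
```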
Wang [16] proved that for $1 < k < n$, Eq. (1.2) admits a negative admissible solution $u \in C^\infty(K) \cap C^{1,1}(\overline K)$, unique up to a positive factor. Furthermore, Eq. (1.2) has exactly one positive eigenvalue in a convex domain with smooth boundary (in fact, Wang's result is true for a larger class of domains). In this paper, we use Wang's result to deal with the case $k = 2$ in 3-dimensional convex domains.

Theorem 1.1. Suppose $K$ is a bounded smooth strictly convex domain in $\mathbb{R}^3$, and $u \in C^\infty(K) \cap C^{1,1}(\overline K)$ is the unique (up to a positive factor) admissible solution of

    $S_2(D^2u) = \lambda(K)u^2$,  $u < 0$ in $K$,  $u = 0$ on $\partial K$;   (1.4)

then $v = -\log(-u)$ is a strictly convex function in $K$.
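Behind the change of unknown $v = -\log(-u)$ is a rank-one expansion of $S_2$; we sketch the computation (our addition; it is carried out in Section 2) to show where the matrix $P$ comes from.

```latex
% With u = -e^{-v} one computes u_{ij} = e^{-v}(v_{ij} - v_i v_j), so that
S_2(D^2 u) = e^{-2v}\, S_2\!\bigl(D^2 v - Dv \otimes Dv\bigr).
% Since S_2 is a quadratic polynomial and S_2(\xi\otimes\xi) = 0 for rank-one matrices,
S_2(W - \xi\otimes\xi)
  = S_2(W) - \bigl(\operatorname{Tr}(W)\,|\xi|^{2} - \xi^{T} W \xi\bigr)
  = S_2(W) - \operatorname{Tr}(PW),
\qquad P := |\xi|^{2} I - \xi\otimes\xi .
% Taking W = D^2 v and \xi = Dv, the factors e^{-2v} cancel against
% \lambda u^2 = \lambda e^{-2v}, and Eq. (1.4) becomes
S_2(D^2 v) - \operatorname{Tr}\!\bigl(P(Dv)\,D^2 v\bigr) = \lambda(K).
```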
With this result we will prove the Brunn–Minkowski inequality for the positive eigenvalue of the $S_2$ operator. Our method is from Colesanti [5] and Salani [14].

Theorem 1.2. Suppose $K_0, K_1$ are bounded smooth strictly convex domains in $\mathbb{R}^3$ and $t \in [0,1]$. Then the functional $\lambda$ satisfies the inequality

    $\lambda\big((1-t)K_0 + tK_1\big)^{-1/4} \ge (1-t)\lambda(K_0)^{-1/4} + t\lambda(K_1)^{-1/4}$.   (1.5)

Moreover, equality holds if and only if $K_0$ is homothetic to $K_1$.

The plan of the paper is as follows. In Section 2 we prove that the Hessian of $v$ has constant rank if the function $v$ in Theorem 1.1 is convex; this is essential for our paper. In Section 3, combining the boundary estimates, we use the deformation process to show that the function $v$ is strictly convex, which completes the proof of Theorem 1.1. In the last section, we prove the Brunn–Minkowski inequality and characterize the equality case.

2. A constant rank theorem

In this section we establish a constant rank theorem for convex solutions of the related nonlinear elliptic equation. In what follows, $\mathcal{S}^n$ denotes the set of symmetric $n \times n$ matrices, and $\mathcal{S}^n_+$ ($\mathcal{S}^n_{++}$) is the subset of positive semidefinite (positive definite) matrices. Let $K \subset \mathbb{R}^3$ be any bounded domain. Note that if we let $v = -\log(-u)$, then Eq. (1.4) is equivalent to

    $S_2(D^2v) - \mathrm{Tr}\big(P(Dv)D^2v\big) = \lambda(K)$ in $K$,  $v(x) \to +\infty$ as $x \to \partial K$,   (2.1)

where $\mathrm{Tr}(A)$ denotes the trace of the matrix $A$, and $P(Dv) = (P_{ij})$ is the matrix with $P_{ij} = |Dv|^2\delta_{ij} - v_iv_j$, $i,j = 1,2,3$. It follows that $P$ is positive semidefinite, and $\mathrm{Tr}(PD^2v) = \sum_{i,j=1}^3 P_{ij}v_{ij}$. By a simple observation, if $u$ is an admissible solution of Eq. (1.4), then $v$ is an admissible solution of Eq. (2.1).

One of the main ingredients of the proof of Theorem 1.1 is a constant rank theorem, stated as follows (see e.g. [3] and [9]). Some related results have been obtained in [11].

Theorem 2.1. Suppose $v$ is a $C^4(K)$ admissible solution of (2.1), and the Hessian matrix $(v_{ij})$ of $v$ is positive semidefinite in $K \subset \mathbb{R}^3$, where $K$ is any domain. Then $(v_{ij})$ has constant rank in $K$.

Proof.
Let $v$ be an admissible solution of Eq. (2.1), and let $W = (v_{ij})$ denote the Hessian matrix of $v$. Since $v$ is admissible, from Eq. (2.1) the rank of $W$ can only be 2 or 3. Suppose $z_0 \in K$ is a point at which $W(z_0)$ has minimal rank; we shall show that $W$ has constant rank in $K$. If the minimal rank is 3 there is nothing to prove, so we may assume that $W(z_0)$ has rank 2.

Let $\phi(z) = \det W(z)$, and define $U = \{z \in K:\ \phi(z) = 0\}$. We will show that $U = K$, which means $W$ is of constant rank in $K$. First, $U$ is not empty, since clearly $z_0 \in U$. Note that, by the
definition of $U$, $U$ is a closed set in $K$. We shall show that there is a small open neighborhood $\mathcal{O}$ of $z_0$ with $\phi(z) \equiv 0$ in $\mathcal{O}$; since the same argument applies at every point of $U$, this implies that $U$ is an open set in $K$. Since $K$ is connected, we must then have $U = K$.

Following the notations in [3] and [9], for two functions $h, k$ defined in an open set $\mathcal{O} \subset K$ and $y \in \mathcal{O}$, we say that $h(y) \lesssim k(y)$ if there exist positive constants $c_1$ and $c_2$ such that

    $(h - k)(y) \le \big(c_1|\nabla\phi| + c_2\phi\big)(y)$.   (2.2)

We write $h(y) \sim k(y)$ if both $h(y) \lesssim k(y)$ and $k(y) \lesssim h(y)$ hold. Next, we write $h \lesssim k$ if the above inequality holds in $\mathcal{O}$, with the constants $c_1$ and $c_2$ independent of $y$ in this neighborhood. Finally, $h \sim k$ if $h \lesssim k$ and $k \lesssim h$.

Now let

    $F = F(Dv, D^2v) := S_2(D^2v) - \mathrm{Tr}\big(P(Dv)D^2v\big)$,

and we shall use the following notations:

    $F^{ij} = \dfrac{\partial F}{\partial v_{ij}}$,  $F^{ij,rs} = \dfrac{\partial^2 F}{\partial v_{ij}\,\partial v_{rs}}$,  $F_{v_l} = \dfrac{\partial F}{\partial v_l}$,  $F^{ij}_{v_l} = \dfrac{\partial^2 F}{\partial v_{ij}\,\partial v_l}$,  $F_{v_kv_l} = \dfrac{\partial^2 F}{\partial v_k\,\partial v_l}$.

We shall show that

    $\sum_{i,j=1}^3 F^{ij}\phi_{ij} \lesssim 0$   (2.3)

in a small open neighborhood $\mathcal{O}$ of $z_0$. Since $\phi \ge 0$ in $K$ and $\phi(z_0) = 0$, it then follows from the strong minimum principle that $\phi \equiv 0$ in $\mathcal{O}$.

In order to prove (2.3) at an arbitrary point $z \in \mathcal{O}$, as in Caffarelli–Friedman [3] we choose normal coordinates, i.e. we perform a rotation $T_z$ about $z$ so that in the new coordinates $W$ is diagonal at $z$ and $v_{11} \ge v_{22} \ge v_{33}$ at $z$; consequently we can choose $T_z$ to vary smoothly with $z$. If we can establish (2.3) at $z$ under the assumption that $W$ is diagonal at $z$, then going back to the original coordinates we find that (2.3) remains valid, with new coefficients $c_1, c_2$ in (2.2) depending smoothly on the independent variable. Thus it remains to establish (2.3) under the assumption that $W$ is diagonal at $z$.

Since the rank of $W$ is at least 2, there exists a positive constant $C$, depending only on $\|v\|_{C^4}$, such that $v_{11} \ge v_{22} \ge C$ at $z$. In the following, all calculations are carried out at the point $z$ using the notation $\lesssim$, with the understanding that the constants in (2.2) are under control.
Next we compute $\phi$ and its first and second derivatives in the directions $e_i, e_j$. We find

    $0 \le \phi \sim v_{33}$,   (2.4)

    $\phi_i \sim v_{33i}$,   (2.5)

and

    $\phi_{ij} \sim v_{11}v_{22}v_{33ij} - 2v_{11}v_{23i}v_{23j} - 2v_{22}v_{13i}v_{13j}$.   (2.6)
Differentiating Eq. (2.1) along $e_3$ once, we get

    $F^{ij}v_{ij3} + F_{v_l}v_{l3} = 0$.   (2.7)

In this paper the summation convention over repeated indices is employed. From (2.4)–(2.5), and since $(v_{ij})$ is diagonal at $z$, one can see that

    $F^{11}v_{113} + F^{22}v_{223} + 2F^{12}v_{123} \sim 0$.   (2.8)

Differentiating Eq. (2.1) along $e_3$ twice, we get

    $F^{ij}v_{ij33} + F^{ij,rs}v_{ij3}v_{rs3} + 2F^{ij}_{v_l}v_{ij3}v_{l3} + F_{v_l}v_{l33} + F_{v_lv_s}v_{l3}v_{s3} = 0$;   (2.9)

it follows that

    $F^{ij}v_{ij33} \sim -F^{ij,rs}v_{ij3}v_{rs3}$,

which together with (2.6) implies

    $F^{ij}\phi_{ij} \lesssim -v_{11}v_{22}F^{ij,rs}v_{ij3}v_{rs3} - 2v_{11}F^{ij}v_{23i}v_{23j} - 2v_{22}F^{ij}v_{13i}v_{13j}$.   (2.10)

Now we compute the partial derivatives of $F$ with respect to $v_{ij}$. At $z$ it follows that

    $F^{11} \sim v_{22} - (v_2^2 + v_3^2)$,  $F^{22} \sim v_{11} - (v_1^2 + v_3^2)$,  $F^{12} = F^{21} = v_1v_2$,   (2.11)

    $F^{11,22} = F^{22,11} = 1$,  $F^{12,21} = F^{21,12} = -1$.   (2.12)

Hence we have

    $F^{ij}\phi_{ij} \lesssim 2v_{11}v_{22}\big(v_{123}^2 - v_{113}v_{223}\big) - 2v_{11}\big(F^{11}v_{123}^2 + 2F^{12}v_{123}v_{223} + F^{22}v_{223}^2\big) - 2v_{22}\big(F^{11}v_{113}^2 + 2F^{12}v_{113}v_{123} + F^{22}v_{123}^2\big)$.   (2.13)

We want to prove that

    $F^{ij}\phi_{ij} \lesssim 0$.

There are two possibilities.
Case 1. If $F^{12}v_{123} \ne 0$: put (2.8) into (2.13) to substitute for $2F^{12}v_{123}$ above; then

    $F^{ij}\phi_{ij} \lesssim 2\big(v_{11}v_{22} - v_{11}F^{11} - v_{22}F^{22}\big)\big(v_{123}^2 - v_{113}v_{223}\big) \sim -2\lambda\big(v_{123}^2 - v_{113}v_{223}\big)$.   (2.14)

The last relation comes from the original equation:

    $\lambda = F \sim v_{11}v_{22} - (v_1^2 + v_3^2)v_{22} - (v_2^2 + v_3^2)v_{11} = v_{11}F^{11} + v_{22}F^{22} - v_{11}v_{22}$.

If $v_{113}v_{223} \le 0$, then $v_{123}^2 - v_{113}v_{223} \ge 0$ and the proof of (2.3) is concluded. While if $v_{113}v_{223} > 0$, by using (2.8) and (2.11) again we have

    $4v_1^2v_2^2v_{123}^2 \sim \big(F^{11}v_{113} + F^{22}v_{223}\big)^2 \ge 4F^{11}F^{22}v_{113}v_{223} \sim 4\big[\lambda + (v_1^2 + v_3^2)(v_2^2 + v_3^2)\big]v_{113}v_{223} \ge 4v_1^2v_2^2v_{113}v_{223}$,   (2.15)

so that $v_{123}^2 \gtrsim v_{113}v_{223}$. Recalling (2.14), this implies that (2.3) is true.

Case 2. If $F^{12}v_{123} = 0$: we still use (2.8), now to substitute for $v_{223}$ in (2.13), namely $v_{223} \sim -(F^{11}/F^{22})v_{113}$, while the terms containing $F^{12}v_{123}$ vanish. Then

    $F^{ij}\phi_{ij} \lesssim \dfrac{2}{F^{22}}\big(v_{11}v_{22} - v_{11}F^{11} - v_{22}F^{22}\big)\big(F^{22}v_{123}^2 + F^{11}v_{113}^2\big) \sim -\dfrac{2\lambda}{F^{22}}\big(F^{22}v_{123}^2 + F^{11}v_{113}^2\big) \lesssim 0$,   (2.16)

since $F^{11}, F^{22} > 0$ by the ellipticity of Eq. (2.1) at admissible solutions. So the proof of (2.3) is concluded, and the proof of Theorem 2.1 is completed. □
3. Proof of Theorem 1.1

In this section, we use the deformation process to prove Theorem 1.1. The constant rank theorem is a very powerful tool for producing strictly convex solutions of nonlinear elliptic equations; see for example Caffarelli and Friedman [3], Korevaar and Lewis [9] and Guan and Ma [6]. Here we follow the approach of Korevaar and Lewis [9] and Ma and Xu [11]. First we need to understand the radial solution of Eq. (1.4) on a ball. Then we study the geometric properties of the solution near the convex boundary.

The following lemma is well known; for completeness we give the proof along the idea of McCuan [12] (see pp. 172–173 in [12]).

Lemma 3.1. Let $B_R(o)$ be the ball in $\mathbb{R}^3$ centered at the origin with radius $R > 0$. Let $u \in C^\infty(B_R) \cap C^{1,1}(\overline{B_R})$ be the eigenfunction of Eq. (1.4) in $B_R(o)$. Then $v = -\log(-u)$ is a strictly convex function in $B_R(o)$.

Proof. By the uniqueness of the solution of Eq. (1.4) up to a constant factor, we know the solution $u$ is a radial function. We set $u(x) = \varphi(|x|) = \varphi(r)$ for $r = |x| \in [0, R]$, with $\varphi(r) < 0$ for $r \in [0, R)$. Then $\varphi$ is increasing in $(0, R)$, and $\varphi'(0) = \varphi(R) = 0$. Since

    $\dfrac{\partial r}{\partial x_i} = \dfrac{x_i}{r}$,  $\dfrac{\partial^2 r}{\partial x_i \partial x_j} = -r^{-3}x_ix_j + r^{-1}\delta_{ij}$,

it follows that

    $u_{ij} = \big(\varphi''r^{-2} - \varphi'r^{-3}\big)x_ix_j + \varphi'r^{-1}\delta_{ij}$

and

    $S_2(D^2u) = 2\varphi''\varphi'r^{-1} + (\varphi')^2r^{-2}$.   (3.1)

For $v = -\log(-\varphi)$, we have $\varphi = -e^{-v}$, hence

    $\varphi' = e^{-v}v'$,  $\varphi'' = e^{-v}\big[v'' - (v')^2\big]$.

So Eq. (1.4) for $u$ transforms into the following equation for $v$:

    $2rv'v'' - 2r(v')^3 + (v')^2 = \lambda r^2$  for $0 < r < R$.   (3.2)

It follows that
    $v'(0) = 0$,  $v'(r) > 0$ for $0 < r < R$,  and  $v''(0) \ge 0$.   (3.3)

Now let $L = \lim_{r\to 0^+}\big[v'(r)/r\big]$; then L'Hôpital's rule provides that

    $L = \lim_{r\to 0^+} v''(r) = v''(0)$.   (3.4)

From (3.2)–(3.4), it follows that

    $v''(0) = \sqrt{\lambda/3} > 0$.   (3.5)

We assume by contradiction the existence of a smallest positive $r_o$ for which $v''(r_o) = 0$. We know $v'(r) > 0$ for $0 < r < R$ and $v''(r) > 0$ for $0 < r < r_o$. Differentiating (3.2) and evaluating at $r = r_o$, we obtain

    $v'''(r_o) = \dfrac{\lambda}{v'(r_o)} + \dfrac{(v'(r_o))^2}{r_o} > 0$,   (3.6)

which contradicts the sign of $v''$: since $v'' > 0$ on $(0, r_o)$ and $v''(r_o) = 0$, we must have $v'''(r_o) \le 0$. Hence $v$ is strictly convex in $[0, R)$ and the lemma is proven. □

Now we state the following well-known boundary convexity lemma; see for example Caffarelli and Friedman [3] (p. 450, Lemma 4.3) or Korevaar [8] (p. 610, Lemma 2.4).

Lemma 3.2. (See [3] or [8].) Let $\Omega \subset \mathbb{R}^n$ be smooth, bounded and strictly convex (i.e. all the principal curvatures of $\partial\Omega$ are positive). Let $u \in C^\infty(\Omega) \cap C^{1,1}(\overline\Omega)$ satisfy

    $u < 0$ in $\Omega$,  $u = 0$ and $Du \cdot \nu > 0$ on $\partial\Omega$,   (3.7)

where $\nu$ is the exterior normal to $\partial\Omega$. Let

    $\Omega_\varepsilon = \{x \in \Omega:\ d(x, \partial\Omega) > \varepsilon\}$   (3.8)

and let $v = f(u)$. Then for small enough $\varepsilon > 0$ the function $v$ is strictly convex in the boundary strip $\Omega \setminus \Omega_\varepsilon$ if $f$ satisfies

    (i) $f' > 0$,  (ii) $f'' > 0$,  (iii) $\lim_{u\to 0^-} \dfrac{f'(u)}{f''(u)} = 0$.   (3.9)

Remark 3.3. In Korevaar [8], the function $u \in C^2(\overline\Omega)$. But we can follow the calculation in Caffarelli and Friedman [3] to see that the same result holds in our case $u \in C^\infty(\Omega) \cap C^{1,1}(\overline\Omega)$.
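For the function used in Theorem 1.1 the hypotheses (3.9) are immediate to check; we record the computation (our addition):

```latex
% Take f(u) = -\log(-u) for u < 0, so that v = f(u) = -\log(-u). Then
f'(u)  = \frac{1}{-u} > 0, \qquad
f''(u) = \frac{1}{u^{2}} > 0, \qquad
\frac{f'(u)}{f''(u)} = -u \;\longrightarrow\; 0 \quad (u \to 0^{-}),
% so conditions (i)-(iii) of Lemma 3.2 hold, and v = -\log(-u) is strictly convex
% in a boundary strip of any smooth bounded strictly convex domain.
```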
Now we use the deformation technique combined with Theorem 2.1 (the constant rank theorem) to obtain the proof of Theorem 1.1, as in Korevaar and Lewis [9] and Ma and Xu [11]. For completeness we repeat part of their proof.

Proof of Theorem 1.1. If $K$ is the ball $B_R(o)$, then by Lemma 3.1, for the solution $u$ of (1.4), $v = -\log(-u)$ is a strictly convex function in $B_R(o)$.

For an arbitrary bounded strictly convex domain $K$, set $K_t = (1-t)B_R(o) + tK$, $0 \le t \le 1$. From the theory of convex bodies (see for example Sections 1.7, 1.8 and 2.5 in the book [15], and Section 3.1 in the book [17]), we can deform $B_R(o)$ continuously into $K$ by the family $K_t$, $0 \le t < 1$, of strictly convex domains, in such a way that $K_t \to K_s$ as $t \to s$ in the sense of Hausdorff distance, whenever $0 \le s \le 1$. The deformation is also chosen so that $\partial K_t$, $0 \le t < 1$, can be locally represented, for some $\alpha$, $0 < \alpha < 1$, by a function whose norm in the space $C^{2,\alpha}$ of functions with Hölder continuous second derivatives depends only on $\delta$, whenever $0 < t \le \delta < 1$.

Suppose $u_t \in C^\infty(K_t) \cap C^{1,1}(\overline{K_t})$ is the admissible solution of (1.4), $v_t := -\log(-u_t)$, and $H_t$ is the corresponding Hessian matrix of $v_t$. First, $H_0$ is positive definite, and from the boundary estimate Lemma 3.2 we have that $H_\delta$ is positive definite in an $\varepsilon$-neighborhood of $\partial K_\delta$. From the a priori estimates for solutions of the Hessian equation [16], we know this bound depends only on the uniformly bounded geometry of $K_t$, which in turn depends on the geometry of $K$ and on $t$. We conclude that if $v(\cdot, s)$ is strictly convex for all $0 \le s < t$, then $v(\cdot, t)$ is convex. If for some $\delta$, $0 < \delta < 1$, $H_\delta$ were positive semidefinite but not positive definite in $K_\delta$, this would be impossible by the constant rank theorem (Theorem 2.1) and the boundary estimate (Lemma 3.2). We conclude that $H_\delta$ is positive definite, and hence $v = -\log(-u)$ is strictly convex in $K$. □

4. Proof of Theorem 1.2

The aim of this section is to prove Theorem 1.2. We first state some elementary propositions on the convexity of matrix functions.
First we recall Jensen's inequality for means (see [2]). If $a, b$ are positive real numbers, $\alpha \in [-\infty, +\infty]$ and $\lambda \in (0,1)$, we define

    $m_\alpha(a,b,\lambda) = \big[(1-\lambda)a^\alpha + \lambda b^\alpha\big]^{1/\alpha}$  if $\alpha \in (-\infty, 0) \cup (0, +\infty)$,
    $m_{-\infty}(a,b,\lambda) = \min(a,b)$,
    $m_0(a,b,\lambda) = a^{1-\lambda}b^{\lambda}$,
    $m_{+\infty}(a,b,\lambda) = \max(a,b)$.

Jensen's inequality for means implies that

    $m_\alpha(a,b,\lambda) \le m_\beta(a,b,\lambda)$  if $\alpha \le \beta$.   (4.1)

In particular, the arithmetic–geometric mean inequality holds:

    $a^{1-\lambda}b^{\lambda} \le (1-\lambda)a + \lambda b$  for every $a, b \ge 0$, $\lambda \in [0,1]$.

Lemma 4.1. If $f(x)$ is a positive concave function in $\mathbb{R}^n$, then $f^{-1/2}$ is convex.

Proof. Note that the concavity of $f$ means
    $f\big((1-t)x + ty\big) \ge (1-t)f(x) + tf(y)$,  $x, y \in \mathbb{R}^n$, $t \in [0,1]$;   (4.2)

thus we have

    $f\big((1-t)x + ty\big)^{-1/2} \le \big[(1-t)f(x) + tf(y)\big]^{-1/2} = \Big[(1-t)\big(f(x)^{-1/2}\big)^{-2} + t\big(f(y)^{-1/2}\big)^{-2}\Big]^{-1/2} \le (1-t)f(x)^{-1/2} + tf(y)^{-1/2}$,   (4.3)

where the last inequality comes from Jensen's inequality above ($m_{-2} \le m_1$). This says that $f^{-1/2}$ is convex. □

Remark 4.2. Note that if equality holds in (4.3), i.e.

    $f\big((1-t)x + ty\big)^{-1/2} = (1-t)f(x)^{-1/2} + tf(y)^{-1/2}$,   (4.4)

then the two inequalities in (4.3) must become equalities at the same time; that is, both

    $f\big((1-t)x + ty\big) = (1-t)f(x) + tf(y)$   (4.5)

and

    $\Big[(1-t)\big(f(x)^{-1/2}\big)^{-2} + t\big(f(y)^{-1/2}\big)^{-2}\Big]^{-1/2} = (1-t)f(x)^{-1/2} + tf(y)^{-1/2}$   (4.6)

hold.

Proposition 4.3. As in Section 2, we let $P(Dv) = (P_{ij})$ be the matrix with $P_{ij} = |Dv|^2\delta_{ij} - v_iv_j$, $i,j = 1,2,3$, and we let $\mathrm{Tr}(PA) = \sum_{i,j=1}^3 P_{ij}a_{ij}$ for $A = (a_{ij})$. If $|Dv| \ne 0$, then the function

    $f(A) := \Big[\dfrac{S_2(A^{-1})}{\mathrm{Tr}(PA^{-1})}\Big]^{1/2}$

is convex in $A \in \mathcal{S}^3_{++}$, and $\big[\mathrm{Tr}(PA^{-1})\big]^{-1}$ is concave in $A \in \mathcal{S}^3_{++}$.

Proof. The concavity of $\big[\mathrm{Tr}(PA^{-1})\big]^{-1}$ in $A \in \mathcal{S}^3_{++}$ follows from the appendix in [1]. Now we concentrate on the proof of the first part. By Lemma 4.1, it is sufficient to prove that

    $f(A)^{-2} = \dfrac{\mathrm{Tr}(PA^{-1})}{S_2(A^{-1})}$

is concave in $A$. Since $P \in \mathcal{S}^3_+$, it can be written as $O\bar PO^T$ with $OO^T = I$ and $\bar P$ the diagonal matrix of the eigenvalues of $P$. Then we have

    $\dfrac{\mathrm{Tr}(PA^{-1})}{S_2(A^{-1})} = \dfrac{\mathrm{Tr}(O\bar PO^TA^{-1})}{S_2(A^{-1})} = \dfrac{\mathrm{Tr}\big(\bar P(O^TAO)^{-1}\big)}{S_2\big((O^TAO)^{-1}\big)} = \dfrac{\mathrm{Tr}(\bar P\tilde A^{-1})}{S_2(\tilde A^{-1})}$,   (4.7)

with $\tilde A = O^TAO$. Hence without loss of generality we may assume that $P = \bar P$ is diagonal; in our case $\bar P = \mathrm{diag}\big(|Dv|^2, |Dv|^2, 0\big)$. Since $f(A)^{-2}$ is homogeneous of degree 1 in $A$, its concavity is equivalent to superadditivity. We are going to show that
    $\dfrac{\mathrm{Tr}\big(P(A+B)^{-1}\big)}{S_2\big((A+B)^{-1}\big)} \ge \dfrac{\mathrm{Tr}(PA^{-1})}{S_2(A^{-1})} + \dfrac{\mathrm{Tr}(PB^{-1})}{S_2(B^{-1})}$   (4.8)

for any $A, B \in \mathcal{S}^3_{++}$. In fact, using $A^{-1} = \mathrm{adj}(A)/\det A$ and $S_2(A^{-1}) = \mathrm{Tr}(A)/\det A$, a direct calculation shows that

    $\dfrac{\mathrm{Tr}(PA^{-1})}{S_2(A^{-1})} = \dfrac{P_{11}(A_{22}A_{33} - A_{23}A_{32}) + P_{22}(A_{11}A_{33} - A_{13}A_{31}) + P_{33}(A_{11}A_{22} - A_{12}A_{21})}{A_{11} + A_{22} + A_{33}}$
    $\qquad = \dfrac{P_{11}A_{22}A_{33} + P_{22}A_{11}A_{33} + P_{33}A_{11}A_{22}}{\mathrm{Tr}A} - \dfrac{P_{11}A_{23}A_{32} + P_{22}A_{13}A_{31} + P_{33}A_{12}A_{21}}{\mathrm{Tr}A}$;

then we have

    $\dfrac{\mathrm{Tr}\big(P(A+B)^{-1}\big)}{S_2\big((A+B)^{-1}\big)} - \dfrac{\mathrm{Tr}(PA^{-1})}{S_2(A^{-1})} - \dfrac{\mathrm{Tr}(PB^{-1})}{S_2(B^{-1})}$
    $\quad = \displaystyle\sum_{i=1}^3 \dfrac{C_i\big[A_{ii}\mathrm{Tr}B - B_{ii}\mathrm{Tr}A\big]^2}{\mathrm{Tr}A\,\mathrm{Tr}B\,\mathrm{Tr}(A+B)} + \dfrac{P_{11}\big[A_{23}\mathrm{Tr}B - B_{23}\mathrm{Tr}A\big]^2}{\mathrm{Tr}A\,\mathrm{Tr}B\,\mathrm{Tr}(A+B)} + \dfrac{P_{22}\big[A_{13}\mathrm{Tr}B - B_{13}\mathrm{Tr}A\big]^2}{\mathrm{Tr}A\,\mathrm{Tr}B\,\mathrm{Tr}(A+B)} + \dfrac{P_{33}\big[A_{12}\mathrm{Tr}B - B_{12}\mathrm{Tr}A\big]^2}{\mathrm{Tr}A\,\mathrm{Tr}B\,\mathrm{Tr}(A+B)} \ge 0$,

where $C_i = \frac{1}{2}\sum_{j=1}^3 P_{jj} - P_{ii} \ge 0$. □

Remark 4.4. Note that in our case we actually have $C_1 = C_2 = P_{33} = 0$ and $C_3 = P_{11} = P_{22} = |Dv|^2 > 0$, so the equality in (4.8) holds if and only if

    $\dfrac{\mathrm{Tr}A}{\mathrm{Tr}B} = \dfrac{A_{33}}{B_{33}} = \dfrac{A_{13}}{B_{13}} = \dfrac{A_{23}}{B_{23}}$.   (4.9)

Now we state another useful result.

Proposition 4.5. $S_2^{-1/2}(A^{-1})$ is concave in $A \in \mathcal{S}^3_{++}$.

Proof. This is a special case of Theorem 15.16 in [10]. □

Along the ideas of Colesanti [5] and Salani [14], we now prove Theorem 1.2.

Proof of Theorem 1.2. For $i = 0, 1$, let $K_i$ be a convex domain in $\mathbb{R}^3$, and let $u_i$ be the solution of

    $S_2(D^2u_i) = \lambda(K_i)u_i^2$,  $u_i < 0$ in $\mathrm{int}(K_i)$,  $u_i = 0$ on $\partial K_i$;

then the function $v_i(x) = -\log(-u_i(x))$ solves
    $S_2(D^2v_i) - \mathrm{Tr}\big(P(Dv_i)D^2v_i\big) = \lambda(K_i)$ in $\mathrm{int}(K_i)$,  $v_i(x) \to +\infty$ as $x \to \partial K_i$,   (4.10)

where $P = (P_{ij})$ with $P_{ij} = |Dv|^2\delta_{ij} - v_iv_j$. We have proved (Theorem 1.1) that $v_i$ is strictly convex in $K_i$, so that

    $\det D^2v_i(x) > 0$,  $x \in \mathrm{int}(K_i)$.   (4.11)

By the boundary condition satisfied by $v_i$, we have

    $\nabla v_i\big(\mathrm{int}(K_i)\big) = \mathbb{R}^3$.   (4.12)

Let us now consider the conjugate function $v_i^*$ of $v_i$:

    $v_i^*(\rho) = \sup_{x\in K_i}\big[\langle x, \rho\rangle - v_i(x)\big]$,  $\rho \in \mathbb{R}^3$.

For the basic properties of this function we refer to [13]; here we just point out some facts connected with our concerns. Note that $v_i^*$ is defined on the image of $K_i$ through the gradient map of $v_i$, which is, by (4.12), the whole $\mathbb{R}^3$. Moreover, as $v_i$ is strictly convex, $v_i^* \in C^1(\mathbb{R}^3)$, and $\nabla v_i^*$ is the inverse map of $\nabla v_i$:

    $x = \nabla v_i^*\big(\nabla v_i(x)\big)$,  $x \in K_i$.

In particular this identity and (4.11) imply that $v_i^* \in C^2(\mathbb{R}^3)$ and

    $D^2v_i(x) = \big[D^2v_i^*\big(\nabla v_i(x)\big)\big]^{-1}$,  $x \in K_i$.   (4.13)

Let $t \in [0,1]$ and define $K_t = (1-t)K_0 + tK_1$. Now we introduce a new function $w$ on $K_t$ as follows:

    $w(z) = \min\big\{(1-t)v_0(x) + tv_1(y):\ x \in K_0,\ y \in K_1,\ (1-t)x + ty = z\big\}$;   (4.14)

$w$ is called the infimal convolution of $v_0$ and $v_1$ in $K_t$. It is a strictly convex function, and from the boundary conditions in problem (4.10) it can be deduced that

    $\lim_{z\to\partial K_t} w(z) = +\infty$.   (4.15)

Moreover $w$ satisfies the following identity (see Theorem 16.4 in [13]):

    $w^* = (1-t)v_0^* + tv_1^*$  in $\mathbb{R}^3$.   (4.16)

Now (4.11), (4.13), and (4.16) imply that $w^*$ is $C^2(\mathbb{R}^3)$, strictly convex, and $D^2w^* > 0$ in $\mathbb{R}^3$.
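A quadratic example (our addition, purely illustrative) shows how the conjugate-function identities above operate:

```latex
% For v(x) = \tfrac12\langle Ax, x\rangle with A \in S^3_{++}, the supremum in the
% definition of v^* is attained at x = A^{-1}\rho, so
v^*(\rho) = \sup_{x}\bigl[\langle x,\rho\rangle - \tfrac12\langle Ax,x\rangle\bigr]
          = \tfrac12\langle A^{-1}\rho, \rho\rangle .
% Hence \nabla v^*(\rho) = A^{-1}\rho inverts the gradient map \nabla v(x) = Ax, and
D^2 v^* = A^{-1} = (D^2 v)^{-1},
% which is identity (4.13). Moreover, for two such quadratics with matrices A_0, A_1,
% the identity w^* = (1-t)v_0^* + t v_1^* shows that the infimal convolution w is the
% quadratic with Hessian
D^2 w = \bigl[(1-t)A_0^{-1} + t A_1^{-1}\bigr]^{-1}.
```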
Consequently, $w \in C^2\big(\mathrm{int}(K_t)\big)$. Let us fix $z \in K_t$; by the definition of $w$ and the boundary conditions in (4.10), there exist unique $x \in \mathrm{int}(K_0)$ and $y \in \mathrm{int}(K_1)$ such that

    $z = (1-t)x + ty$  and  $w(z) = (1-t)v_0(x) + tv_1(y)$.   (4.17)

By the Lagrange multiplier theorem one deduces immediately that

    $\nabla v_0(x) = \nabla v_1(y) := \rho$.   (4.18)

On the other hand,

    $\nabla w^*(\rho) = (1-t)\nabla v_0^*(\rho) + t\nabla v_1^*(\rho) = (1-t)x + ty = z = \nabla w^*\big(\nabla w(z)\big)$.   (4.19)

Hence by the injectivity of $\nabla w^*$ we have

    $\nabla w(z) = \rho$.   (4.20)

Therefore,

    $D^2w(z) = \big[D^2w^*(\rho)\big]^{-1} = \big[(1-t)D^2v_0^*(\rho) + tD^2v_1^*(\rho)\big]^{-1} = \big[(1-t)\big(D^2v_0(x)\big)^{-1} + t\big(D^2v_1(y)\big)^{-1}\big]^{-1}$.   (4.21)

Note that (4.18) and (4.20) imply that the matrix $P$ satisfies

    $P\big(\nabla w(z)\big) = P\big(\nabla v_0(x)\big) = P\big(\nabla v_1(y)\big) = P(\rho) := \tilde P$.   (4.22)

Now we state the following Claim A.

Claim A.

    $S_2\big(D^2w(z)\big) - \mathrm{Tr}\big(\tilde P D^2w(z)\big) \le \max_{i\in\{0,1\}} \lambda(K_i)$  for all $z \in K_t$.   (4.23)

If Claim A is true, then we define

    $\bar u(z) := -e^{-w(z)}$,  $z \in K_t$,

so that $\bar u$ has the following properties:

    $S_2(D^2\bar u) \le \max_{i\in\{0,1\}} \lambda(K_i)\,\bar u^2$,  $\bar u < 0$ in $\mathrm{int}(K_t)$,  $\bar u = 0$ on $\partial K_t$.   (4.24)

Multiplying both sides of the inequality above by $-\bar u$ and integrating over $K_t$, we obtain
    $\max_{i\in\{0,1\}} \lambda(K_i) \ge \dfrac{\int_{K_t}(-\bar u)\,S_2(D^2\bar u)\,dz}{\int_{K_t}(-\bar u)^3\,dz} \ge \lambda(K_t)$,   (4.25)

where the last inequality follows from the definition of $\lambda$ in (1.3). Hence

    $\max\big\{\lambda(K_0), \lambda(K_1)\big\} \ge \lambda\big((1-t)K_0 + tK_1\big)$,  for all $K_0, K_1$ and $t \in [0,1]$.   (4.26)

In order to get the Brunn–Minkowski inequality, we just replace $K_0$ with $\tilde K_0$, $K_1$ with $\tilde K_1$ and $t$ with $\tilde t$, in which

    $\tilde K_0 = \big[\lambda(K_0)\big]^{1/4}K_0$,  $\tilde K_1 = \big[\lambda(K_1)\big]^{1/4}K_1$,  $\tilde t = \dfrac{t\big[\lambda(K_1)\big]^{-1/4}}{(1-t)\big[\lambda(K_0)\big]^{-1/4} + t\big[\lambda(K_1)\big]^{-1/4}}$.   (4.27)

Proof of Claim A. Since the solution $w$ is strictly convex in $K_t$, there exists a unique point $z_o \in K_t$ such that $\nabla w(z_o) = 0$. We divide the proof of Claim A into two cases.

Case 1. At the unique point $z_o \in K_t$ with $\nabla w(z_o) = 0$. In this case, we know from (4.18)–(4.20) that there are unique $x_o \in K_0$ and $y_o \in K_1$ such that $z_o = (1-t)x_o + ty_o$ and $\nabla v_0(x_o) = \nabla v_1(y_o) = 0$. Moreover by (4.10), we have

    $\lambda(K_0) = S_2\big(D^2v_0(x_o)\big)$,  $\lambda(K_1) = S_2\big(D^2v_1(y_o)\big)$.

In order to prove Claim A, i.e. (4.23), we only need to prove

    $S_2\big(D^2w(z_o)\big) \le \max\big\{S_2\big(D^2v_0(x_o)\big),\ S_2\big(D^2v_1(y_o)\big)\big\}$.   (4.28)

Since we have (4.21), it follows that

    $D^2w(z_o) = \big[(1-t)\big(D^2v_0(x_o)\big)^{-1} + t\big(D^2v_1(y_o)\big)^{-1}\big]^{-1}$;   (4.29)

using Proposition 4.5, we have

    $S_2^{-1/2}\big(D^2w(z_o)\big) \ge (1-t)S_2^{-1/2}\big(D^2v_0(x_o)\big) + tS_2^{-1/2}\big(D^2v_1(y_o)\big)$,   (4.30)

which yields (4.28) and finishes the proof of this case.

Case 2. At a point $z \in K_t$ with $\nabla w(z) \ne 0$. In this case, we know from (4.18)–(4.20) that there are unique $x \in K_0$ and $y \in K_1$ such that $z = (1-t)x + ty$ and $\nabla v_0(x) = \nabla v_1(y) \ne 0$. Using Proposition 4.3 we have

    $\dfrac{S_2\big(D^2w(z)\big)}{\mathrm{Tr}\big(\tilde PD^2w(z)\big)} \le (1-t)\,\dfrac{S_2\big(D^2v_0(x)\big)}{\mathrm{Tr}\big(\tilde PD^2v_0(x)\big)} + t\,\dfrac{S_2\big(D^2v_1(y)\big)}{\mathrm{Tr}\big(\tilde PD^2v_1(y)\big)}$.   (4.31)
Consequently it follows from (4.10) that

    $\dfrac{S_2\big(D^2w(z)\big)}{\mathrm{Tr}\big(\tilde PD^2w(z)\big)} \le (1-t)\Big[\dfrac{\lambda(K_0)}{\mathrm{Tr}(\tilde PD^2v_0(x))} + 1\Big] + t\Big[\dfrac{\lambda(K_1)}{\mathrm{Tr}(\tilde PD^2v_1(y))} + 1\Big]$
    $\qquad \le \max_{i\in\{0,1\}}\lambda(K_i)\Big[\dfrac{1-t}{\mathrm{Tr}(\tilde PD^2v_0(x))} + \dfrac{t}{\mathrm{Tr}(\tilde PD^2v_1(y))}\Big] + 1 \le \dfrac{\max_{i\in\{0,1\}}\lambda(K_i)}{\mathrm{Tr}(\tilde PD^2w(z))} + 1$,   (4.32)

where the last inequality again comes from Proposition 4.3 (the concavity of $[\mathrm{Tr}(\tilde PA^{-1})]^{-1}$). Multiplying through by $\mathrm{Tr}(\tilde PD^2w(z)) > 0$, we now get the inequality

    $S_2\big(D^2w(z)\big) - \mathrm{Tr}\big(\tilde PD^2w(z)\big) \le \max_{i\in\{0,1\}}\lambda(K_i)$,  for all $z \in K_t$,   (4.33)

and complete the proof of Claim A. Up to now, we have completed the proof of the Brunn–Minkowski inequality. □

The equality case. Now we deal with the equality case in Theorem 1.2. If $K_0$ is homothetic to $K_1$, that is, if $K_0 = sK_1 + x_0$ for some $s > 0$ and $x_0 \in \mathbb{R}^3$, then equality holds in (1.5) by the homogeneity of $\lambda$ and by its invariance with respect to translations.

Conversely, if equality holds in (1.5), then the previous arguments show that equality must hold in (4.26), up to a normalization of the involved sets. Namely, let $\tilde K_0$, $\tilde K_1$ and $\tilde t$ be as in (4.27) and let $\tilde K_t = (1-\tilde t)\tilde K_0 + \tilde t\tilde K_1$. Thanks to the homogeneity of $\lambda$ we may assume

    $\lambda(\tilde K_t) = \lambda(\tilde K_0) = \lambda(\tilde K_1) = 1$,

hence we reduce the equality case to the situation in which the bodies $K_0$, $K_1$ and $K_t$ have the same eigenvalue 1. We shall prove the following Claim B.

Claim B. For $x \in K_0$ and $y \in K_1$ such that $z = (1-t)x + ty$ and $\nabla v_0(x) = \nabla v_1(y) = \nabla w(z)$, we have

    $D^2v_0(x) = D^2v_1(y)$.   (4.34)

If we have Claim B, then, as in Colesanti (p. 129 in [5]), we conclude that

    $D^2v_0(x) = D^2v_1(y) \ \Longrightarrow\ D^2v_0^*(\rho) = D^2v_1^*(\rho)$ for all $\rho \in \mathbb{R}^3$
    $\Longrightarrow\ \nabla v_0^*(\rho) = \nabla v_1^*(\rho) + x_0$ for all $\rho$ and for some fixed $x_0 \in \mathbb{R}^3$.
Finally we have

    $K_0 = \nabla v_0^*(\mathbb{R}^3) = \nabla v_1^*(\mathbb{R}^3) + x_0 = K_1 + x_0$,

and then we complete the proof of Theorem 1.2.

Proof of Claim B. As for Claim A above, we divide the proof of Claim B into two cases.

Case 1. At the unique point $z_o \in K_t$ with $\nabla w(z_o) = 0$. In this case, we know from (4.18)–(4.20) that there are unique $x_o \in K_0$ and $y_o \in K_1$ such that $z_o = (1-t)x_o + ty_o$ and $\nabla v_0(x_o) = \nabla v_1(y_o) = 0$. Since we have (4.21), it follows that

    $D^2w(z_o) = \big[(1-t)\big(D^2v_0(x_o)\big)^{-1} + t\big(D^2v_1(y_o)\big)^{-1}\big]^{-1}$;   (4.35)

using the equality case in Proposition 4.5 (Lieberman [10]), we have the equality

    $S_2^{-1/2}\big(D^2w(z_o)\big) = (1-t)S_2^{-1/2}\big(D^2v_0(x_o)\big) + tS_2^{-1/2}\big(D^2v_1(y_o)\big)$   (4.36)

if and only if

    $D^2v_0(x_o) = D^2v_1(y_o)$.   (4.37)

Case 2. At a point $z \in K_t$ with $\nabla w(z) \ne 0$. In this case, we know from (4.18)–(4.20) that there are $x \in K_0$ and $y \in K_1$ such that $z = (1-t)x + ty$ and $\nabla v_0(x) = \nabla v_1(y) \ne 0$. Then the calculations above show that all the inequalities in (4.31)–(4.32) become equalities; in particular from (4.31) we have equality in the first step. For simplicity, let $A = \big(D^2v_0(x)\big)^{-1}$ and $B = \big(D^2v_1(y)\big)^{-1}$; then from the above equality and Remark 4.2 we must have

    $\dfrac{\mathrm{Tr}\big(\tilde P((1-t)A + tB)^{-1}\big)}{S_2\big(((1-t)A + tB)^{-1}\big)} = (1-t)\,\dfrac{\mathrm{Tr}(\tilde PA^{-1})}{S_2(A^{-1})} + t\,\dfrac{\mathrm{Tr}(\tilde PB^{-1})}{S_2(B^{-1})}$.   (4.38)

By homogeneity of degree 1, this is equivalent to the equality case in (4.8). Now from Remark 4.4, we have

    $\dfrac{\mathrm{Tr}A}{\mathrm{Tr}B} = \dfrac{A_{33}}{B_{33}} = \dfrac{A_{13}}{B_{13}} = \dfrac{A_{31}}{B_{31}} = \dfrac{A_{23}}{B_{23}} = \dfrac{A_{32}}{B_{32}} := c$.   (4.39)
Notice that the equality in (4.32) implies

    $\big[\mathrm{Tr}\big(\tilde P((1-t)A + tB)^{-1}\big)\big]^{-1} = (1-t)\big[\mathrm{Tr}(\tilde PA^{-1})\big]^{-1} + t\big[\mathrm{Tr}(\tilde PB^{-1})\big]^{-1}$.

By homogeneity of degree 1, this is equivalent to

    $\big[\mathrm{Tr}(\tilde PA^{-1})\big]^{-1} + \big[\mathrm{Tr}(\tilde PB^{-1})\big]^{-1} - \big[\mathrm{Tr}\big(\tilde P(A + B)^{-1}\big)\big]^{-1} = 0$.   (4.40)

With the help of the arguments in the proof of Proposition 4.3, we may assume $\tilde P$ is diagonal with $\tilde P_{11} = \tilde P_{22} = |Dv|^2 > 0$ and $\tilde P_{33} = 0$. Hence a simple calculation gives

    $\big[\mathrm{Tr}(\tilde PA^{-1})\big]^{-1} = \dfrac{\det A}{\tilde P_{11}\big(A_{22}A_{33} - A_{23}^2\big) + \tilde P_{22}\big(A_{11}A_{33} - A_{13}^2\big)}$.   (4.41)

Putting together (4.39)–(4.41), we obtain

    $(A_{12} - cB_{12})^2 + (A_{22} - cB_{22})^2 = 0$,   (4.42)

which implies

    $A_{12} = cB_{12}$,  $A_{22} = cB_{22}$.   (4.43)

From (4.39) and (4.43), we have $A = cB$, due to the symmetry of the matrices involved. Now the equality in (4.31) immediately implies $A = B$, i.e. $D^2v_0(x) = D^2v_1(y)$, thanks to the homogeneity of degree 1 of $\mathrm{Tr}(\tilde PA^{-1})/S_2(A^{-1})$ and the normalization $\lambda(\tilde K_0) = \lambda(\tilde K_1) = 1$.

Now we complete the proof of Claim B, and this finishes the proof of Theorem 1.2. □

We suspect that Theorem 1.2 should also be true in the higher-dimensional case.

Acknowledgments

The authors would like to thank Professor Pengfei Guan for helpful discussions. The authors would also like to thank the referee for his/her careful reading and helpful suggestions.
References

[1] O. Alvarez, J.M. Lasry, P.L. Lions, Convex viscosity solutions and state constraints, J. Math. Pures Appl. 76 (1997) 265–288.
[2] H.J. Brascamp, E.H. Lieb, On extensions of the Brunn–Minkowski and Prékopa–Leindler theorems, including inequalities for log-concave functions, with an application to the diffusion equation, J. Funct. Anal. 22 (1976) 366–389.
[3] L. Caffarelli, A. Friedman, Convexity of solutions of some semilinear elliptic equations, Duke Math. J. 52 (1985) 431–455.
[4] L. Caffarelli, L. Nirenberg, J. Spruck, The Dirichlet problems for nonlinear second order elliptic equations III: Functions of the eigenvalues of the Hessian, Acta Math. 155 (1985) 261–301.
[5] A. Colesanti, Brunn–Minkowski inequalities for variational functionals and related problems, Adv. Math. 194 (2005) 105–140.
[6] P. Guan, X.N. Ma, The Christoffel–Minkowski problem I: Convexity of solutions of a Hessian equation, Invent. Math. 151 (2003) 553–577.
[7] D. Jerison, The direct method in the calculus of variations for convex bodies, Adv. Math. 122 (1996) 262–279.
[8] N. Korevaar, Convex solutions to nonlinear elliptic and parabolic boundary value problems, Indiana Univ. Math. J. 32 (1983) 603–614.
[9] N. Korevaar, J. Lewis, Convex solutions of certain elliptic equations have constant rank Hessians, Arch. Ration. Mech. Anal. 97 (1987) 19–32.
[10] G. Lieberman, Second Order Parabolic Differential Equations, World Scientific, 1996.
[11] X.N. Ma, L. Xu, The convexity of solutions of a class of Hessian equation in bounded convex domain in R^3, J. Funct. Anal. 255 (7) (2008) 1713–1723.
[12] J. McCuan, Concavity, quasiconcavity, and quasilinear elliptic equations, Taiwanese J. Math. 6 (2002) 157–174.
[13] R.T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, 1970.
[14] P. Salani, A Brunn–Minkowski inequality for the Monge–Ampère eigenvalue, Adv. Math. 194 (2005) 67–86.
[15] R. Schneider, Convex Bodies: The Brunn–Minkowski Theory, Cambridge University Press, 1993.
[16] X.J. Wang, A class of fully nonlinear elliptic equations and related functionals, Indiana Univ. Math. J. 43 (1994) 25–54.
[17] X.P. Zhu, Lectures on Mean Curvature Flows, AMS/IP, 2002.