KLS-type isoperimetric bounds for log-concave probability measures


Annali di Matematica Pura ed Applicata
DOI 10.1007/s10231-015-0483-

KLS-type isoperimetric bounds for log-concave probability measures

Sergey G. Bobkov · Dario Cordero-Erausquin

Received: 8 April 2014 / Accepted: 4 January 2015
© Fondazione Annali di Matematica Pura ed Applicata and Springer-Verlag Berlin Heidelberg 2015

Abstract  The paper considers geometric lower bounds on the isoperimetric constant for logarithmically concave probability measures, extending and refining some results by Kannan et al. (Discrete Comput. Geom. 13:541–559, 1995).

Keywords  Isoperimetric inequalities · Logarithmically concave distributions · Geometric functional inequalities

Mathematics Subject Classification  52A40 · 60E15 · 46B09

Sergey G. Bobkov: Research partially supported by NSF grant and Simons Fellowship.
S. G. Bobkov · D. Cordero-Erausquin (corresponding author): Université Pierre et Marie Curie, Paris, France. e-mail: dario.cordero@imj-prg.fr

1 Introduction

Let μ be a log-concave probability measure on ℝⁿ with density f, thus satisfying

f((1 − t)x + ty) ≥ f(x)^{1−t} f(y)^{t}, for all x, y ∈ ℝⁿ, t ∈ (0, 1).  (1.1)

We consider an optimal value h = h(μ), called an isoperimetric constant of μ, in the isoperimetric-type inequality

μ⁺(A) ≥ h min{μ(A), 1 − μ(A)}.  (1.2)

Here, A is an arbitrary Borel subset of ℝⁿ of measure μ(A) with μ-perimeter

μ⁺(A) = liminf_{ε → 0} (μ(A_ε) − μ(A))/ε,

where A_ε = {x ∈ ℝⁿ : |x − a| < ε for some a ∈ A} denotes an open ε-neighborhood of A with respect to the Euclidean distance.
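As a quick one-dimensional illustration of these definitions, consider the two-sided exponential (Laplace) measure ν on the real line with density (1/2)e^{−|x|}. For a half-line A = (−∞, x] with x ≤ 0 one has ν(A) = (1/2)e^{x} and ν⁺(A) = (1/2)e^{x}, so that ν⁺(A) = min{ν(A), 1 − ν(A)}; half-lines turn out to be extremal for this measure, and h(ν) = 1. This classical example is consistent with the identity h(μ) = 2 f(m), m the median, which is used in Section 3 below.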

The geometric characteristic h is related to a number of interesting analytic inequalities, such as the Poincaré-type inequality

∫ |∇u|² dμ ≥ λ₁ ∫ u² dμ

(which is required to hold for all smooth bounded functions u on ℝⁿ with ∫ u dμ = 0). In general, the optimal value λ₁, also called the spectral gap, satisfies λ₁ ≥ h²/4. This relation goes back to the work by J. Cheeger in the framework of Riemannian manifolds [10] and to earlier works by V. G. Maz'ya (cf. [13,18,19]). The problem of bounding these two quantities from below has a long history. Here, we specialize to the class of log-concave probability measures, in which case λ₁ and h are equivalent (λ₁ ≤ 36 h², cf. [16,20]).

As an important particular case, one may consider a uniform distribution μ = μ_K in a given convex body K in ℝⁿ (i.e., the normalized Lebesgue measure on K). In this situation, Kannan, Lovász and Simonovits proposed the following geometric bound on the isoperimetric constant. With K, they associate the function

χ_K(x) = max{|h| : x + h ∈ K and x − h ∈ K}, x ∈ ℝⁿ,  (1.3)

which expresses one half of the length of the longest interval inside K with center at x (note that χ_K = 0 outside K). Equivalently, in terms of the diameter, χ_K(x) = (1/2) diam((K − x) ∩ (x − K)). It is shown in [15] that any convex body K admits an isoperimetric inequality

μ⁺(A) ≥ (1/∫ χ_K dμ_K) μ(A)(1 − μ(A)),

which thus provides the bound

h(μ_K) ≥ 1/(2 ∫ χ_K dμ_K).  (1.4)

Some sharpenings of this result were considered in [8], where it is shown that

μ⁺(A) ≥ (c/∫ χ_K dμ_K) (μ(A)(1 − μ(A)))^{(n−1)/n},

where c > 0 is an absolute constant.

The purpose of the present note is to extend the KLS bound (1.4) to general log-concave probability measures. If μ on ℝⁿ has density f satisfying (1.1), define χ_{f,δ}(x), with a parameter 0 < δ < 1, to be the supremum over all |h| such that

√(f(x + h) f(x − h)) > δ f(x).  (1.5)

Note that, by the hypothesis (1.1), we have an opposite inequality with δ = 1, i.e.,

√(f(x + h) f(x − h)) ≤ f(x).  (1.6)

Hence, in some sense, χ_{f,δ}(x) measures the strength of log-concavity of f at a given point x. Further comments on the definition of χ_{f,δ}(x) will be given in the next section. Note, however, that in the convex body case, when μ = μ_K with density f(x) = (1/vol_n(K)) 1_K(x), we recover the original quantity above, namely χ_{f,δ}(x) = χ_K(x), for all 0 < δ < 1.
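For instance, when K = B_R is the Euclidean ball of radius R centered at the origin, the quantity (1.3) can be evaluated explicitly: the constraints |x + h| ≤ R and |x − h| ≤ R amount to |x|² + |h|² + 2|⟨x, h⟩| ≤ R², which is least restrictive when ⟨x, h⟩ = 0, so that χ_{B_R}(x) = √(R² − |x|²) for |x| ≤ R. This value reappears in Corollary 1.2 and in Example 1 of Section 5.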

Our main observation about the new geometric characteristic is the following:

Theorem 1.1  For any probability measure μ on ℝⁿ with a log-concave density f, and for any 0 < δ < 1, we have

h(μ) ≥ (1 − δ)/(C ∫ χ_{f,δ} dμ),  (1.7)

where C is an absolute constant. One may take C = 64.

We will comment on examples and applications later on (cf. Sect. 5). Here, let us only mention that in certain cases (1.7) provides a sharp estimate with respect to the dimension n. For example, for the standard Gaussian and other rotationally invariant measures, normalized by the condition ∫ |x|² dμ(x) = n, both sides of (1.7) are of order 1 (with a fixed value of δ). One may also consider the class of uniformly convex log-concave measures, in which case the functions χ_{f,δ} are necessarily bounded on the whole space.

In fact, the bound (1.7) admits a further self-improvement by reducing the integration on the right-hand side to a smaller region whose measure is, however, not small. On this step, one may apply recent stability results due to E. Milman. In particular, it is shown in [21] that, if a log-concave probability measure ν is absolutely continuous with respect to the given log-concave probability measure μ on ℝⁿ, and if these measures are close in total variation so that

‖ν − μ‖_TV = sup_A |ν(A) − μ(A)| ≤ α < 1,

then h(μ) ≥ c_α h(ν) with some constant c_α > 0 depending on α only. As we will see, the above condition with a universal value of α is always fulfilled for the normalized restriction ν of μ to the Euclidean ball B_R with center at the origin and of radius R = ∫ |x| dμ(x). As a result, this leads to:

Corollary 1.2  For any probability measure μ on ℝⁿ with a log-concave density f, and for any 0 < δ < 1, we have

h(μ) ≥ c_δ / ∫_{|x| ≤ R} χ_{f|B_R,δ} dμ,  (1.8)

where the constant c_δ > 0 depends on δ only.

In particular, since χ_{f|B_R,δ}(x) ≤ χ_{B_R}(x) = √(R² − |x|²) (cf. Lemma 2.3 below), the estimate (1.8) implies

h(μ) ≥ c_δ / ∫_{|x| ≤ R} √(R² − |x|²) dμ(x) ≥ c_δ / (∫_{|x| ≤ R} (R² − |x|²) dμ(x))^{1/2}.

By another application of Cauchy's inequality (and taking, e.g., δ = 1/2), we get with some universal constant C

h(μ) ≥ C / Var(|X|²)^{1/4},  (1.9)

where X is a random vector in ℝⁿ distributed according to μ. This bound was obtained in [5] as a possible sharpening of another estimate by Kannan, Lovász and Simonovits,

h(μ) ≥ C / E|X| = C/R  (1.10)

(cf. also [2]). It is interesting that the two alternative bounds, (1.4) and (1.10), are characterized in [15] as non-comparable. Now, we see that the χ-estimate given in Corollary 1.2 unites and refines all these results, including (1.9) as well.
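To compare (1.9) and (1.10) on a concrete case, take μ = γ_n, the standard Gaussian measure: then E|X| is of order √n and Var(|X|²) = 2n, so (1.10) yields h(γ_n) ≥ c/√n, while (1.9) yields the better estimate h(γ_n) ≥ c n^{−1/4}. The true value h(γ_n) is dimension-free (of order 1), so both bounds may still be far from optimal in high dimension.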

2 Reduction to dimension one

The proof of Theorem 1.1 and Corollary 1.2 uses the localization lemma of Lovász and Simonovits [17], see also [11,12] for further developments. We state it below in an equivalent form.

Lemma 2.1 ([17])  Suppose that u₁ and u₂ are continuous integrable functions defined on an open convex set E ⊂ ℝⁿ and such that, for any compact segment Δ ⊂ E and for any positive affine function l on Δ,

∫_Δ u₁ l^{n−1} ≥ 0  ⟹  ∫_Δ u₂ l^{n−1} ≥ 0.  (2.1)

Then,

∫_E u₁ ≥ 0  ⟹  ∫_E u₂ ≥ 0.  (2.2)

The one-dimensional integrals in (2.1) are taken with respect to the Lebesgue measure on Δ, while the integrals in (2.2) are n-dimensional.

This lemma allows one to reduce various multidimensional integral relations to the corresponding one-dimensional relations with additional weights of the type l^{n−1}. In some reductions, the next lemma, which appears in a more general form in [15], is more convenient.

Lemma 2.2 ([15])  Suppose that u_i, i = 1, 2, 3, 4, are nonnegative continuous functions defined on an open convex set E ⊂ ℝⁿ and such that, for any compact segment Δ ⊂ E and for any positive affine function l on Δ,

∫_Δ u₁ l^{n−1} · ∫_Δ u₂ l^{n−1} ≤ ∫_Δ u₃ l^{n−1} · ∫_Δ u₄ l^{n−1}.  (2.3)

Then,

∫_E u₁ · ∫_E u₂ ≤ ∫_E u₃ · ∫_E u₄.  (2.4)

Lemmas 2.1 and 2.2 remain valid for many discontinuous functions u_i as well, such as the indicator functions of open or closed sets in the space.

If a log-concave function f is defined on a convex set E ⊂ ℝⁿ, we always extend it to be zero outside E. The definition (1.5) is applied with the convention that χ_{f,δ} = 0 outside the supporting set E_f = {f > 0}. More precisely, for x ∈ E_f,

χ_{f,δ}(x) = sup{ |h| : x ± h ∈ E_f, √(f(x + h) f(x − h)) > δ f(x) }.

One may always assume that E_f is relatively open (i.e., open in the minimal affine subspace of ℝⁿ containing E_f). In that case, the function

ρ(x, h) = √(f(x + h) f(x − h)) / f(x)

is continuous on the relatively open set {(x, h) ∈ ℝⁿ × ℝⁿ : x ± h ∈ E_f}, so that any set of the form

{x ∈ E_f : χ_{f,δ}(x) > r} = {x ∈ E_f : x ± h ∈ E_f, ρ(x, h) > δ for some |h| > r}

is open in E_f. This means that the function χ_{f,δ} is lower semi-continuous on E_f and therefore Borel measurable on ℝⁿ.

We will need two basic properties of the functions χ_{f,δ}, which are collected in the next lemma.

Lemma 2.3  Let f be a log-concave function defined on a convex set E ⊂ ℝⁿ, and let 0 < δ < 1.

(a) If F is a convex subset of E, then χ_{f|F,δ} ≤ χ_{f,δ} on F.
(b) If g is a positive log-concave function defined on a convex subset F of E, then χ_{fg,δ} ≤ χ_{f,δ} on F.

In fact, the first property follows from the second one with the particular log-concave function g = 1_F. To prove (b), let x ∈ F be fixed. Applying the definition and the inequality (1.6) with g, we get

χ_{fg,δ}(x) = sup{ |h| : x ± h ∈ F, √(f(x + h) f(x − h)) √(g(x + h) g(x − h)) > δ f(x) g(x) }
           ≤ sup{ |h| : x ± h ∈ F, √(f(x + h) f(x − h)) > δ f(x) } ≤ χ_{f,δ}(x).

It is also worthwhile to emphasize the homogeneity property of the functional X → ∫ χ_{f,δ} dμ with respect to X, where μ is the distribution of X.

Lemma 2.4  Let a random vector X in ℝⁿ have the distribution μ with a log-concave density f. For λ > 0, let μ_λ denote the distribution and f_λ(x) = λ^{−n} f(x/λ), x ∈ ℝⁿ, the density of λX. Then

χ_{f_λ,δ}(x) = λ χ_{f,δ}(x/λ), for all x ∈ ℝⁿ.

In particular,

∫ χ_{f_λ,δ} dμ_λ = λ ∫ χ_{f,δ} dμ.

The statement is straightforward and does not need a separate proof. It is not used in the proof of Theorem 1.1, but just shows that the inequality (1.7) is homogeneous with respect to dilations of μ.

Following [15], let us now describe how to reduce Theorem 1.1 to a similar assertion in dimension one (with an additional factor of 2). Let E ⊂ ℝⁿ be an open convex supporting set for the log-concave density f of μ. We are going to apply Lemma 2.2 to the functions of the form

u₁ = 1_A, u₂ = 1_B, u₃ = 1_C, u₄(x) = (c/ε) χ_{f,δ}(x),

where A is an arbitrary closed subset of ℝⁿ, B = ℝⁿ \ A_ε, and C = ℝⁿ \ (A ∪ B), with fixed constants c > 0 and ε > 0. Then, (2.4) turns into

μ(A) μ(B) ≤ (c/ε) μ(C) ∫ χ_{f,δ} dμ,  (2.5)

and letting ε → 0, we arrive at the isoperimetric inequality of the form

μ(A)(1 − μ(A)) ≤ c μ⁺(A) ∫ χ_{f,δ} dμ.  (2.6)

Actually, (2.5) also follows from (2.6), cf. [8], and consequently the inequalities (2.5) and (2.6) are equivalent. It should be mentioned as well that, once (2.6) holds true in the class of all closed subsets of the space, it extends to all Borel sets in ℝⁿ.

Now, for the chosen functions u_i, the one-dimensional inequality (2.3) is the same as (2.5),

μ_l(A) μ_l(B) ≤ (c/ε) μ_l(C) ∫ χ_{f,δ} dμ_l,  (2.7)

but written for the probability measure μ_l on Δ with density

f_l(x) = dμ_l/dx = (1/Z) f(x) l(x)^{n−1}, x ∈ Δ.

Here Z = ∫_Δ f(x) l^{n−1}(x) dx is a normalizing constant, and dx denotes the Lebesgue measure on Δ. Likewise (2.6), it is equivalent to the isoperimetric inequality

μ_l(A)(1 − μ_l(A)) ≤ c μ_l⁺(A) ∫ χ_{f,δ} dμ_l.  (2.8)

The density of μ_l is log-concave and has the form f_l = fg, where g is log-concave on F = Δ. Hence, by Lemma 2.3, (2.8) will only be strengthened if we replace χ_{f,δ} by χ_{f_l,δ}. But the resulting stronger inequality

μ_l(A)(1 − μ_l(A)) ≤ c μ_l⁺(A) ∫ χ_{f_l,δ} dμ_l  (2.9)

is again equivalent to

μ_l(A) μ_l(B) ≤ (c/ε) μ_l(C) ∫ χ_{f_l,δ} dμ_l.  (2.10)

Thus, by Lemma 2.2, (2.5)–(2.6) will hold for μ with density f, as soon as the one-dimensional inequalities (2.9)–(2.10) hold true for μ_l with density f_l. In particular, (2.5)–(2.6) with a constant c will be true in the entire class of full-dimensional log-concave probability measures, as soon as they hold with the same constant in dimension one (since the assertion on the intervals Δ ⊂ E may be stated for intervals on the real line).

The inequalities (2.6) and (2.9) are not of the Cheeger type (1.2), but are equivalent to it within the universal factor of 2, in view of the relations

p(1 − p) ≤ min{p, 1 − p} ≤ 2 p(1 − p), 0 ≤ p ≤ 1.

Hence, we may summarize this reduction in the following:

Corollary 2.5  Given numbers C > 0 and 0 < δ < 1, assume that, for any probability measure μ on ℝ with a log-concave density f,

h(μ) ≥ (1 − δ)/(C ∫ χ_{f,δ} dμ).

Then, the same inequality with constant 2C in place of C holds true for any probability measure μ on ℝⁿ with a log-concave density f.
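To spell out the factor of 2 mentioned above: if μ(A)(1 − μ(A)) ≤ c μ⁺(A) J for all Borel sets A, where J = ∫ χ_{f,δ} dμ, then the relation min{p, 1 − p} ≤ 2 p(1 − p) gives μ⁺(A) ≥ (1/(2cJ)) min{μ(A), 1 − μ(A)}, that is, h(μ) ≥ 1/(2cJ). Conversely, a Cheeger-type bound implies (2.6) with the same constant, since p(1 − p) ≤ min{p, 1 − p}.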

3 Dimension one

Next, we consider the one-dimensional case in Theorem 1.1. Let μ be a probability measure on the real line with a log-concave density f supported on the interval (a, b), finite or not. In particular, f is bounded and continuous on that interval. As a first step, we will bound from below the function χ_{f,δ} near the median m of μ in terms of the maximum value M = sup_{a<x<b} f(x). Without loss of generality, assume that m = 0, that is, μ(−∞, 0] = μ[0, ∞) = 1/2.

Let F(x) = μ(a, x), a < x < b, denote the distribution function associated to μ, and let F^{−1} : (0, 1) → (a, b) be its inverse. The function I(t) = f(F^{−1}(t)) is concave on (0, 1), so, for each fixed t ∈ (0, 1), it admits the linear bounds

I(s) ≥ I(t) − (I(t)/t)(t − s), 0 < s ≤ t,
I(s) ≥ I(t) − (I(t)/(1 − t))(s − t), t ≤ s < 1.

Equivalently, after the change t = F(x), s = F(y), we have

f(y) ≥ f(x) − f(x) (F(x) − F(y))/F(x), a < y ≤ x < b,
f(y) ≥ f(x) − f(x) (F(y) − F(x))/(1 − F(x)), a < x ≤ y < b.

On the other hand, F(x) − F(y) ≤ M(x − y) and |F(x) − 1/2| ≤ M|x|, so that

F(x) ≥ 1/2 − M|x|, 1 − F(x) ≥ 1/2 − M|x|.

This yields

f(y) ≥ f(x) − M f(x) (x − y)/(1/2 − M|x|), a < y ≤ x < b,
f(y) ≥ f(x) − M f(x) (y − x)/(1/2 − M|x|), a < x ≤ y < b.

In particular, if |x| ≤ 1/(4M), in both cases we get

f(y) ≥ f(x) − 4M f(x) |y − x|, a < y < b.  (3.1)

For y = x ± h, rewrite the above as f(x ± h) ≥ f(x)(1 − 4M|h|). Hence, for any h > 0 such that x ± h ∈ (a, b),

√(f(x + h) f(x − h)) ≥ f(x)(1 − 4Mh).

If 4M|h| < 1, the condition x ± h ∈ (a, b) will be fulfilled automatically, since then |x ± h| < 1/(2M) and therefore ∫₀^{x+h} f(y) dy < 1/2 and ∫_{x−h}^{0} f(y) dy < 1/2. (Alternatively, in order to avoid the verification that x ± h ∈ (a, b), one could assume that f is positive on the whole real line.) In order to estimate χ_{f,δ}(x), it remains to solve 1 − 4Mh = δ, giving h = (1 − δ)/(4M). Thus, we arrive at:

Lemma 3.1  Let μ be a probability measure on the real line with median at the origin, having a log-concave density f, and let M = sup f. Then, for any 0 < δ < 1,

χ_{f,δ}(x) ≥ (1 − δ)/(4M), for all |x| ≤ 1/(4M).

Next, we need to integrate this inequality over the measure μ. This leads to

∫ χ_{f,δ} dμ ≥ ((1 − δ)/(4M)) ∫_{−1/(4M)}^{1/(4M)} f(x) dx.

By (3.1), applied to x = 0 and with x replacing y, we have f(x) ≥ f(0)(1 − 4M|x|), so

∫_{−1/(4M)}^{1/(4M)} f(x) dx ≥ f(0) ∫_{−1/(4M)}^{1/(4M)} (1 − 4M|x|) dx = f(0)/(4M).

Hence,

∫ χ_{f,δ} dμ ≥ f(0)(1 − δ)/(16M²).

But, by the concavity of I,

M = sup_{0<t<1} I(t) ≤ 2 I(1/2) = 2 f(0).

Thus, we obtain an integral bound

∫ χ_{f,δ} dμ ≥ (1 − δ)/(64 f(0)).

Finally, it is known that h(μ) = 2 f(0), cf. [2]. Hence, dropping the median assumption, we may conclude:

Lemma 3.2  Let μ be a probability measure on the real line with a log-concave density f. Then, for any 0 < δ < 1,

h(μ) ≥ (1 − δ)/(32 ∫ χ_{f,δ} dμ).

Proof of Theorem 1.1  Combine Lemma 3.2 with Corollary 2.5.

4 Concentration in the ball

In order to turn to Corollary 1.2, we need the following assertion, which might be of independent interest.

Proposition 4.1  If a random vector X in ℝⁿ has a log-concave distribution, then

P{|X| ≤ E|X|} ≥ 1/(24 e²).  (4.1)

Here, the numerical constant is apparently not optimal (but it is dimension-free).
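As a simple illustration of (4.1), if X is a standard Gaussian vector in ℝⁿ, then |X| is tightly concentrated around E|X| (which is of order √n), and P{|X| ≤ E|X|} is close to 1/2 for large n, comfortably above the universal constant in (4.1). The content of the proposition is that some such dimension-free lower bound survives for every log-concave distribution.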

Proof  The localization lemma (Lemma 2.1) allows us to reduce the statement to the case where the distribution of X is supported on some line of the Euclidean space. Indeed, (4.1) may be rewritten as the property that, for all a > 0,

E|X| ≤ a  ⟹  P{|X| ≤ a} ≥ 1/(24 e²).

In terms of the log-concave density of X, say f, this assertion is the same as

∫ (a − |x|) f(x) dx ≥ 0  ⟹  ∫ (1_{{|x| ≤ a}} − 1/(24 e²)) f(x) dx ≥ 0.

If this were not true (with a fixed value of a), then we would have

∫ (a − |x|) f(x) dx ≥ 0, ∫ (1_{{|x| ≤ a}} − 1/(24 e²)) f(x) dx < 0.

Hence, by Lemma 2.1, for some compact segment Δ ⊂ E_f and some positive affine function l on Δ,

∫_Δ (a − |x|) f(x) l(x)^{n−1} dx ≥ 0, ∫_Δ (1_{{|x| ≤ a}} − 1/(24 e²)) f(x) l(x)^{n−1} dx < 0.

But this would contradict the validity of (4.1) for a random vector X which takes values in Δ and has a density proportional to f(x) l(x)^{n−1} with respect to the uniform measure on Δ. Since f l^{n−1} is log-concave and defines a one-dimensional log-concave measure, the desired reduction has been achieved.

In the one-dimensional case, we may write X = a + ξθ, where the vectors a, θ ∈ ℝⁿ are orthogonal, |θ| = 1, and where ξ is a random variable having a log-concave distribution on the real line. Then, (4.1) turns into the bound

P{√(ξ² + |a|²) ≤ E √(ξ² + |a|²)} ≥ c  (4.2)

with the indicated constant c. To further simplify, we first notice that the inequality √(ξ² + |a|²) ≤ E √(ξ² + |a|²) is getting weaker as |a| grows, and hence the case a = 0 is the worst one. Indeed, this inequality may be rewritten as

ξ² ≤ ψ(t) ≡ (E √(ξ² + t))² − t, t = |a|².

We have ψ(0) = (E|ξ|)², while for t > 0,

ψ′(t) = E √(ξ² + t) · E (1/√(ξ² + t)) − 1 ≥ 0,

due to Jensen's inequality. Hence, ψ(t) is non-decreasing, proving the assertion.

Now, when a = 0, (4.2) becomes

P{|ξ| ≤ E|ξ|} ≥ c,  (4.3)

which is exactly (4.1) in dimension one. If, for example, ξ ≥ 0, this statement is known, since we always have

P{ξ ≤ E ξ} ≥ 1/e,  (4.4)

cf. [2].

However, (4.3) is more delicate and requires some analysis similar to that of the previous section.

Denote by μ the distribution of ξ. We use the same notations, such as f, F, I, m and M, as in the proof of Lemma 3.1. To prove (4.3), we may assume that Eξ ≤ 0. By the concavity of I,

I(t) ≥ 2 I(1/2) min{t, 1 − t}, 0 < t < 1.

Hence, after the change t = F(x), we have

f(x) ≥ 2 f(m) min{F(x), 1 − F(x)}, a < x < b,

and after integration,

μ(x₀, x₁) ≥ 2 f(m) ∫_{x₀}^{x₁} min{F(x), 1 − F(x)} dx, a ≤ x₀ ≤ x₁ ≤ b.

Note that the function u(x) = min{F(x), 1 − F(x)} has the Lipschitz semi-norm ‖u‖_Lip ≤ M, so

u(x) ≥ u(x₀) − M|x − x₀|, a < x < b.

Hence,

P{x₀ ≤ ξ ≤ x₁} = μ(x₀, x₁) ≥ 2 f(m) ∫_{x₀}^{x₁} (u(x₀) − M(x − x₀)) dx = 2 f(m)(x₁ − x₀)(u(x₀) − (M/2)(x₁ − x₀)).

Here, we choose x₀ = Eξ. Recall that, by (4.4), F(x₀) ≥ 1/e. An application of (4.4) to the random variable −ξ gives a similar bound 1 − F(x₀) ≥ 1/e, so that u(x₀) ≥ 1/e and thus

P{Eξ ≤ ξ ≤ x₁} ≥ 2 f(m)(x₁ − Eξ)(1/e − (M/2)(x₁ − Eξ)).

Here we put x₁ = Eξ + α E|ξ − Eξ| with a parameter 0 < α ≤ 1. Since Eξ ≤ 0 is assumed, it follows that x₁ ≤ E|ξ|. Thus, using x₁ − x₀ = α E|ξ − Eξ|, we get

P{Eξ ≤ ξ ≤ E|ξ|} ≥ 2α f(m) E|ξ − Eξ| (1/e − (Mα/2) E|ξ − Eξ|).  (4.5)

To simplify, let η be an independent copy of ξ. We have, again by the concavity of I,

E|ξ − Eξ| ≤ E|ξ − η| = ∫₀¹ ∫₀¹ |F^{−1}(t) − F^{−1}(s)| dt ds = ∫₀¹ ∫₀¹ |∫_s^t du/I(u)| dt ds
  ≤ ∫₀¹ ∫₀¹ |∫_s^t du/(2 I(1/2) min{u, 1 − u})| dt ds = (1/(2 I(1/2))) E|ξ̃ − η̃| = (1/(2 f(m))) E|ξ̃ − η̃|,

where ξ̃ and η̃ are independent and have a two-sided exponential distribution, with density (1/2) e^{−|x|}.

To further bound, just use

E|ξ̃ − η̃| ≤ (E(ξ̃ − η̃)²)^{1/2} = (2 Var(ξ̃))^{1/2} = 2.

Hence, E|ξ − Eξ| ≤ E|ξ − η| ≤ 1/f(m). In addition, as we have already mentioned, M ≤ 2 f(m), implying M E|ξ − Eξ| ≤ 2. Hence, from (4.5),

P{Eξ ≤ ξ ≤ E|ξ|} ≥ 2α f(m) E|ξ − Eξ| (1/e − α).

The expression on the right-hand side is maximized at α = 1/(2e), which gives

P{Eξ ≤ ξ ≤ E|ξ|} ≥ f(m) E|ξ − Eξ| / (2e²).  (4.6)

Now, 2 E|ξ − Eξ| ≥ E|ξ − η|, while, since I(t) ≤ M ≤ 2 f(m),

E|ξ − η| = 2 ∫_a^b F(x)(1 − F(x)) dx = 2 ∫₀¹ t(1 − t)/I(t) dt ≥ (2/M) ∫₀¹ t(1 − t) dt = 1/(3M) ≥ 1/(6 f(m)).

Hence, E|ξ − Eξ| ≥ 1/(12 f(m)), so the right-hand side expression in (4.6) is greater than or equal to 1/(24 e²). Proposition 4.1 is proved.

Proof of Corollary 1.2  Let X be a random vector in ℝⁿ with distribution μ, and let ν be the normalized restriction of μ to the Euclidean ball centered at the origin and with radius R = E|X| = ∫ |x| dμ(x). In terms of the density f of μ, the density of ν is just the function (1/p) f(x) 1_{{|x| ≤ R}}, where p = P{|X| ≤ R}. Hence,

‖ν − μ‖_TV = (1/2) ∫ |(1/p) f(x) 1_{{|x| ≤ R}} − f(x)| dx = 1 − p ≤ α,

where one may take α = 1 − 1/(24 e²), according to Proposition 4.1. Applying the perturbation result of E. Milman (mentioned in Sect. 1), we thus get that h(μ) ≥ c h(ν) with some absolute constant c > 0. It remains to apply Theorem 1.1 to ν, cf. (1.7), and then we arrive at (1.8). Corollary 1.2 is proved.
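For instance, tracking the constants obtained above: the density of ν is f_ν = p^{−1} f 1_{B_R}, for which χ_{f_ν,δ} = χ_{f|B_R,δ} (a constant factor does not affect the condition in (1.5)), and dν = p^{−1} 1_{B_R} dμ with p ≥ 1/(24 e²). Hence ∫ χ_{f_ν,δ} dν ≤ 24 e² ∫_{|x| ≤ R} χ_{f|B_R,δ} dμ, and Theorem 1.1 applied to ν shows that (1.8) holds with c_δ = c (1 − δ)/(64 · 24 e²), where c is the absolute constant from Milman's perturbation theorem.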

5 Examples

Let us describe how the χ-function may behave in a few basic examples.

1. (Convex body case.) As we have already mentioned, when μ is the normalized Lebesgue measure in a convex body K ⊂ ℝⁿ with density f = (1/vol_n(K)) 1_K, we have χ_{f,δ} = χ_K for all 0 < δ < 1. In particular, if K = B_R is the ball with center at the origin and of radius R,

χ_{f,δ}(x) = √(R² − |x|²).

Hence, ∫ χ_{f,δ} dμ ≤ C R/√n with some universal constant C. This leads in Theorem 1.1 to a correct bound on the isoperimetric constant, as emphasized in [15].

2. If μ is the one-sided exponential distribution on the real line with density f(x) = e^{−x} (x > 0), then χ_{f,δ}(x) = x for x > 0, thus independent of δ.

3. If μ is the two-sided exponential distribution on the real line with density f(x) = (1/2) e^{−|x|}, then χ_{f,δ}(x) = log(1/δ) + |x|.

4. If μ = γ_n is the standard Gaussian measure on ℝⁿ, that is, with density φ_n(x) = (2π)^{−n/2} e^{−|x|²/2}, we have

χ_{φ_n,δ}(x) = √(2 log(1/δ)),

which is independent of x.

5. Let μ be a probability measure on ℝⁿ having a log-concave density with respect to γ_n. That is, with respect to the Lebesgue measure, μ has density of the form f = φ_n g, where g is a log-concave function. By Lemma 2.3, χ_{f,δ} ≤ χ_{φ_n,δ}, so by the previous example,

χ_{f,δ}(x) ≤ √(2 log(1/δ)).

By Theorem 1.1, this gives h(μ) ≥ C with some universal constant C. Such a dimension-free result, in a sharper form of the Gaussian isoperimetric inequality, was first obtained by Bakry and Ledoux [1]. As was later shown by Caffarelli [9], the measures μ in this example can be treated as contractions of γ_n. This property allows one to extend many Sobolev-type inequalities about γ_n to μ. Localization arguments together with some extensions are discussed in [3] and [6].

6. More generally, let μ be a uniformly log-concave probability measure on ℝⁿ (with respect to the Euclidean norm). This means that μ has a log-concave density f = e^{−V}, where the convex function V satisfies

(V(x) + V(y))/2 − V((x + y)/2) ≥ α(|x − y|/2), for all x, y ∈ ℝⁿ,

with some α = α(t), increasing and positive for t > 0. The optimal function α is sometimes called the modulus of convexity of V. Note that the case α(t) = t²/2 returns us to the family described in Example 5. As for the general uniformly log-concave case, various isoperimetric inequalities of the form μ⁺(A) ≥ I(μ(A)), in which the behavior of I near zero is determined by the modulus of convexity, have been studied by Milman and Sodin [22].

To see how to apply Theorem 1.1 for obtaining a Cheeger-type isoperimetric inequality like (1.2), let us rewrite the definition of uniform log-concavity directly in terms of the density as

√(f(x + h) f(x − h)) ≤ e^{−α(|h|)} f(x).

Hence, once we know that α(t) > α(t₀) > 0 for all t > t₀, with some t₀ > 0, it follows that

χ_{f,δ}(x) ≤ t₀, for all x ∈ ℝⁿ, with δ = e^{−α(t₀)}.

Thus, as in Examples 4 and 5, the χ-function is uniformly bounded on the whole space, implying that

h(μ) ≥ C (1 − e^{−α(t₀)})/t₀

with some universal constant C. This example also shows that uniform log-concavity is a property of global character, while χ_{f,δ} reflects more the local behavior of the density.

7. As an example of a measure which is not uniformly log-concave, let μ be the rotationally invariant probability measure on ℝⁿ with density

f(x) = Z_n^{−1} e^{−√n |x|},

where Z_n is a normalizing constant. Note that ∫ |x|² dμ(x) is of order n. In order to find χ_{f,δ}(x), we need to solve the inequality

√n |x + h| + √n |x − h| ≤ 2 log(1/δ) + 2 √n |x|

and determine the maximal possible value of |h|. Rewrite it as

√(|x|² + 2⟨x, h⟩ + |h|²) + √(|x|² − 2⟨x, h⟩ + |h|²) ≤ (2/√n) log(1/δ) + 2|x|.

If |h| = r is fixed, the inner product t = ⟨x, h⟩ can vary from −r|x| to r|x|. Moreover, the left-hand side represents an even convex function in t, which thus attains its minimum at t = 0. In other words, if we want to prescribe to |h| as maximal a value as possible (to meet the inequality), we need to require that t = 0. In this case, the inequality is simplified to

√(|x|² + |h|²) ≤ (1/√n) log(1/δ) + |x|,

which is equivalent to

|h|² ≤ (1/n) log²(1/δ) + (2|x|/√n) log(1/δ).

Hence,

χ_{f,δ}(x) = ((1/n) log²(1/δ) + (2|x|/√n) log(1/δ))^{1/2}.

Integrating this inequality over μ and choosing, for example, δ = 1/e, we get

∫ χ_{f,δ} dμ ≤ C

with some universal constant C. Therefore, by Theorem 1.1, h(μ) ≥ C′ (with some other constant). Note that χ_{f,δ} is bounded by a δ-dependent constant on the ball of radius of order √n. Hence, one may also apply Corollary 1.2 without any integration.

Apparently, this example may further be extended to general spherically invariant log-concave probability measures μ on ℝⁿ, in which case it is already known that h(μ) is bounded from below by an absolute constant (under the normalization condition ∫ |x|² dμ(x) = n). Using a different argument, this class was previously considered in [4] and recently in [14], where results of this type are obtained even for more general log-concave densities f(x) = ρ(‖x‖) involving norms on ℝⁿ.
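To see why the integral above is of order 1: under this measure μ, |x| is typically of order √n, and at such points the formula for χ_{f,δ} gives χ_{f,δ}(x)² ≈ (2|x|/√n) log(1/δ) ≈ 2 log(1/δ), that is, the same dimension-free order as in the Gaussian Example 4, while the first term log²(1/δ)/n is negligible for large n.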

S. G. Bobkov, D. Cordero-Erausquin with some universal constant C. Therefore, by Theorem., h(μ) C (with some other constant). Note that, χ f,δ is bounded by a δ-dependent constant on the ball of radius of order n. Hence, one may also apply Corollary. without any integration. Apparently, this example may further be extended to general spherically invariant logconcave probability measures μ on R n, in which case it is already known that h(μ) is bounded by an absolute constant (under the normalization condition x dμ(x) = n). Using a different argument, this class was previously considered in [4] and recently in [4], where results of such type are obtained even for more general log-concave densities f (x) = ρ( x ) involving norms on R n. 8. Finally, as a somewhat negative example, let μ = ν n be the product of the two-sided exponential distribution ν from example, thus, with density f (x) = e (x + +x n ), x = (x,...,x n ), x i > 0. As easy to see, χ f,δ (x) = x, x R n +, which by Theorem., only gives h(μ) C n. Corollary. allows to improve this estimate to h(μ) Cn /4 which may also be seen from (.9). However, we cannot obtain the correct bound h(μ) C derivedin[7] using the induction argument. References. Bakry, D., Ledoux, M.: Lévy Gromov s isoperimetric inequality for an infinite dimensional diffusion generator. Invent. Math., 59 8 (996). Bobkov, S.G.: Isoperimetric and analytic inequalities for log-concave probability measures. Ann. Probab. 7(4), 903 9 (999) 3. Bobkov, S.G.: Localization proof of the isoperimetric inequality of Bakry Ledoux and some applications. Theory Probab. Appl. 47, 340 347 (00) 4. Bobkov, S.G.: Spectral gap and concentration for some spherically symmetric probability measures. Geom. Asp. Funct. Anal. Lect. Notes Math. 003, 37 43 (807) 5. Bobkov, S.G.: On isoperimetric constants for log-concave probability distributions. Geom. Asp. Funct. Anal. Lect. Notes Math. 007, 8 88 (90) 6. Bobkov, S.G.: Perturbations in the Gaussian isoperimetric inequality. J. Math. Sci. (N.Y.) 66(3), 5 38 (00) 7. Bobkov, S.G., Houdré, C.: Isoperimetric constants for product probability measures. Ann. Probab. 5(), 84 05 (997) 8. Bobkov, S.G., Zegarlinski, B.: Entropy bounds and isoperimetry. Mem. AMS 76(89), 69 (005) 9. Caffarelli, L.A.: Monotonicity properties of optimal transportation and the FKG and related inequalities. Comm. Math. Phys. 4, 547 563 (000) 0. Cheeger, J.: A lower bound for the smallest eigenvalue of the Laplacian. In: Problems in Analysis, Symposium in Honor of S. Bochner, pp. 95 99, Princeton University Press, Princeton (970). Fradelizi, M., Guédon, O.: The extreme points of subsets of s-concave probabilities and a geometric localization theorem. Discret. Comput. Geom. 3(), 37 335 (004). Fradelizi, M., Guédon, O.: A generalized localization theorem and geometric inequalities for convex bodies. Adv. Math. 04(), 509 59 (006) 3. Grigor yan, A.: Isoperimetric inequalities and capacities on Riemannian manifolds. The Maz ya Anniversary Collection, vol. (Rostock, 998), pp. 39 53; Oper. Theory Adv. Appl., vol. 09. Birkhäuser, Basel (999) 4. Huet, N.: Spectral gap for some invariant log-concave probability measures. Mathematika 57(), 5 6 (0) 5. Kannan, R., Lovász, L., Simonovits, M.: Isoperimetric problems for convex bodies and a localization lemma. Discret. Comput. Geom. 3, 54 559 (995) 6. Ledoux, M.: Spectral gap, logarithmic Sobolev constant, and geometric bounds. In: Surveys in Differential Geometry, Vol. IX, pp. 9 40. Int. Press, Somerville, MA (004) 7. 
Lovász, L., Simonovits, M.: Random walks in a convex body and an improved volume algorithm. Random Struct. Algorithms 4, 359 4 (993)

18. Maz'ya, V.G.: The negative spectrum of the n-dimensional Schrödinger operator. Sov. Math. Dokl. 3, 808–810 (1962). Translated from: Dokl. Akad. Nauk SSSR 144(4), 721–722 (1962)
19. Maz'ya, V.G.: On the solvability of the Neumann problem. Dokl. Akad. Nauk SSSR 147, 294–296 (1962) (Russian)
20. Milman, E.: On the role of convexity in isoperimetry, spectral gap and concentration. Invent. Math. 177(1), 1–43 (2009)
21. Milman, E.: Properties of isoperimetric, functional and transport-entropy inequalities via concentration. Probab. Theory Relat. Fields 152(3–4), 475–507 (2012)
22. Milman, E., Sodin, S.: An isoperimetric inequality for uniformly log-concave measures and uniformly convex bodies. J. Funct. Anal. 254(5), 1235–1268 (2008)