Pointwise estimates for marginals of convex bodies


Journal of Functional Analysis 254 (2008) 2275–2293
www.elsevier.com/locate/jfa

Pointwise estimates for marginals of convex bodies

R. Eldan a,1, B. Klartag b,*,2

a School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv 69978, Israel
b Department of Mathematics, Princeton University, Princeton, NJ 08544, USA

Received August 2007; accepted 27 August 2007
Available online 18 January 2008
Communicated by J. Bourgain

Abstract

We prove a pointwise version of the multi-dimensional central limit theorem for convex bodies. Namely, let μ be an isotropic, log-concave probability measure on R^n. For a typical subspace E ⊆ R^n of dimension n^c, consider the probability density of the projection of μ onto E. We show that the ratio between this probability density and the standard Gaussian density in E is very close to 1 in large parts of E. Here c > 0 is a universal constant. This complements a recent result by the second named author, where the total variation metric between the densities was considered.
© 2007 Elsevier Inc. All rights reserved.

Keywords: Central limit theorem; Convex bodies; Marginal distribution

1. Introduction

Suppose X is a random vector in R^n that is distributed uniformly in some convex set K ⊆ R^n. For a subspace E ⊆ R^n we denote by Proj_E the orthogonal projection operator onto E in R^n. The central limit theorem for convex bodies [7,8] asserts that there exists a subspace E ⊆ R^n,

* Corresponding author.
E-mail addresses: roneneldan@gmail.com (R. Eldan), bklartag@princeton.edu (B. Klartag).
1 Partly supported by the Israel Science Foundation.
2 Supported by the Clay Mathematics Institute and by NSF grant #DMS-45659.

0022-1236/$ – see front matter © 2007 Elsevier Inc. All rights reserved.
doi:10.1016/j.jfa.2007.08.014

with dim(E) ≥ n^c, such that the random vector Proj_E(X) is approximately Gaussian, in the total variation sense. This means that for a certain Gaussian random vector Γ in the subspace E,

sup_{A ⊆ E} | P{Proj_E(X) ∈ A} − P{Γ ∈ A} | ≤ C/n^c,   (1)

where the supremum runs over all measurable subsets A ⊆ E. Here, and throughout this note, the letters c, c′, c₁, C, c̃, C̃, etc. denote positive universal constants, whose value may change from one appearance to the next. The total variation estimate (1) implies that the density of Proj_E(X) is close to the density of Γ in the L¹-norm. In this note we observe that a stronger conclusion is within reach: one may deduce that the ratio between the density of Proj_E(X) and the density of Γ deviates from 1 by no more than Cn^{−c}, in the significant parts of the subspace E.

Let us introduce some notation. Write |·| for the standard Euclidean norm in R^n. A random vector Z in R^n is isotropic if the following normalization holds:

E Z = 0,  Cov(Z) = Id,   (2)

where Cov(Z) stands for the covariance matrix of Z, and Id is the identity matrix. The Grassmann manifold G_{n,l} of all l-dimensional subspaces of R^n carries a unique rotationally-invariant probability measure μ_{n,l}. Whenever we say that E is a random l-dimensional subspace in R^n, we relate to the above probability measure μ_{n,l}. Under the additional assumption that the random vector X is isotropic, the subspace E for which Proj_E(X) is approximately Gaussian may be chosen at random, and (1) will hold with high probability [7,8].

A function f : R^n → [0,∞) is log-concave if log f : R^n → [−∞,∞) is a concave function. The characteristic function of a convex set is log-concave. Throughout the entire discussion, the requirement that X be distributed uniformly in a convex body could have been relaxed to the weaker condition that X has a log-concave density. Our main result in this paper reads as follows:

Theorem 1. Let X be an isotropic random vector in R^n with a log-concave density. Let 1 ≤ l ≤ n^{c₁} be an integer. Then there exists a subset E ⊆ G_{n,l} with μ_{n,l}(E) ≥ 1 − C exp(−n^{c₂}) such that for any E ∈ E, the following holds. Denote by f_E the density of the random vector Proj_E(X). Then,

| f_E(x)/γ(x) − 1 | ≤ Cn^{−c₃}   (3)

for all x ∈ E with |x| ≤ n^{c₄}. Here, γ(x) = (2π)^{−l/2} exp(−|x|²/2) is the standard Gaussian density in E, and C, c₁, c₂, c₃, c₄ > 0 are universal constants.

Note that almost the entire mass of a standard l-dimensional Gaussian distribution is contained in a ball of radius 10√l about the origin. Therefore, (3) easily implies the total variation bound mentioned above. The history of the central limit theorem for convex bodies goes back to the conjectures and results of Brehm and Voigt [4] and Anttila, Ball and Perissinaki [2]; see [7] and references therein. The case l = 1 of Theorem 1 was proved in [8] using the moderate deviation estimates of Sodin [13]. The generalization to higher dimensions is the main contribution of the present paper. See also [3] and [1].
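The qualitative content of Theorem 1 is easy to observe empirically. The following sketch is our own illustration, not part of the paper: it samples a vector distributed uniformly in the isotropic cube [−√3, √3]^n (a log-concave, isotropic density) and compares the first moments of a one-dimensional marginal in a random direction with those of a standard Gaussian.

```python
import math
import random

random.seed(0)
n, N = 100, 20000
s = math.sqrt(3.0)  # Uniform[-s, s] has mean 0 and variance 1, so X is isotropic

# A uniformly distributed unit vector theta spans a "typical" subspace E (here l = 1)
theta = [random.gauss(0.0, 1.0) for _ in range(n)]
norm = math.sqrt(sum(t * t for t in theta))
theta = [t / norm for t in theta]

# Samples of Proj_E(X) = <X, theta>
proj = [sum(t * random.uniform(-s, s) for t in theta) for _ in range(N)]

mean = sum(proj) / N
var = sum((p - mean) ** 2 for p in proj) / N
kurt = sum((p - mean) ** 4 for p in proj) / (N * var ** 2)

# For a standard Gaussian: mean 0, variance 1, kurtosis 3
print(round(mean, 3), round(var, 3), round(kurt, 2))
```

For n = 100 the empirical mean, variance and kurtosis come out close to 0, 1 and 3 respectively; the systematic deviation of the kurtosis from 3 is of order Σᵢ θᵢ⁴ ≈ 3/n, in line with the power-law error bounds of the theorem.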

The basic idea of the proof of Theorem 1 is the following. It is shown in [8], using concentration techniques, that the density of Proj_E(X + Y) is pointwise approximately radial, where Y is an independent small Gaussian random vector. It is furthermore proved that the random vector X + Y is concentrated in a thin spherical shell. We combine these facts to deduce, in Section 2, that the density of Proj_E(X + Y) is not only approximately radial, but in fact very close to the Gaussian density in E. Then, in Section 3, we show that the addition of the Gaussian random vector Y is not required. That is, we prove that when a log-concave density convolved with a small Gaussian is almost Gaussian, then the original density is also approximately Gaussian.

2. Convolved marginals are Gaussian

For a dimension n and v > 0 we write

γ_n[v](x) = (2πv)^{−n/2} exp(−|x|²/(2v))   (x ∈ R^n).   (4)

That is, γ_n[v] is the density of a Gaussian random vector in R^n with mean zero and covariance matrix v·Id. Let X be an isotropic random vector with a log-concave density in R^n, and let Y be an independent Gaussian random vector in R^n whose density is γ_n[n^{−α}], for a parameter α to be specified later on. Denote by f_{X+Y} the density of the random vector X + Y. Our first step is to show that the density of the projection of X + Y onto a typical subspace is pointwise approximately Gaussian.

We follow the notation of [8]. For an integrable function f : R^n → [0,∞), a subspace E ⊆ R^n and a point x ∈ E we write

π_E(f)(x) = ∫_{x+E^⊥} f(y) dy,   (5)

where x + E^⊥ is the affine subspace orthogonal to E that passes through the point x. In other words, π_E(f) : E → [0,∞) is the marginal of f onto E. The group of all orthogonal transformations of determinant one in R^n is denoted by SO(n). Fix a dimension l and a subspace E₀ ⊆ R^n with dim(E₀) = l. For x ∈ E₀ and a rotation U ∈ SO(n), set

M_{f,E₀,x}(U) = log π_{E₀}(f ∘ U)(x).   (6)

Define

M(|x|) = ∫_{SO(n)} M_{f_{X+Y},E₀,x}(U) dμ_n(U),   (7)

where μ_n stands for the unique rotationally-invariant Haar probability measure on SO(n). Note that M(|x|) is independent of the direction of x, so it is well defined. We learned in [8] that the function U ↦ M_{f_{X+Y},E₀,x}(U) is highly concentrated with respect to U in the special orthogonal group SO(n), around its mean value M(|x|). This implies that the function π_E(f_{X+Y}) is almost spherically symmetric, for a typical subspace E. This information is contained in our next lemma, which is equivalent to [8, Lemma 3.3].

Lemma 2. Let 1 ≤ l ≤ n be integers, let 10 < α < 15 and denote λ = 1/(5(α+2)). Assume that l ≤ n^λ. Suppose that X is an isotropic random vector with a log-concave density and that Y is an independent random vector with density γ_n[n^{−2αλ}]. Denote the density of X + Y by f_{X+Y}. Let E ∈ G_{n,l} be a random subspace. Then, with probability greater than 1 − Ce^{−cn^{1/10}} of selecting E, we have

| log π_E(f_{X+Y})(x) − M(|x|) | ≤ Cn^{−λ},   (8)

for all x ∈ E with |x| ≤ 5n^{λ/2}. Here C, c > 0 are universal constants.

Sketch of proof. We need to follow the proof of Lemma 3.3 in [8], choosing, for instance, u = 9/10, λ = 1/(5(α+2)), k = n^λ and η = 1. Throughout the argument in [8], it was assumed that the dimension of the subspace is exactly k = n^λ, while in the present version of the statement, note that it could possibly be smaller, i.e., l ≤ k (note also that here, k need not be an integer). We re-run the proofs of Lemmas 2.7, 2.8, 3.1 and 3.3 from [8], allowing the dimension of the subspace we are working with to be smaller than k, noting that the reduction of the dimension always acts in our favor. We refer the reader to the original argument in the proof of Lemma 3.3 in [8] for further details. □

Our main goal in this section is to show that M(|x|) behaves approximately like log γ_n[1 + n^{−2αλ}](x). Once we prove this, it will follow from the above lemma that the density of X + Y is pointwise approximately Gaussian. Next we explain why no serious harm is done if we take the logarithm outside the integral in the definition of M(|x|). Denote, for x ∈ E₀,

M̃(|x|) = ∫_{SO(n)} π_{E₀}(f_{X+Y} ∘ U)(x) dμ_n(U).   (9)

Lemma 3. Under the notation and assumptions of Lemma 2, for |x| ≤ 5n^{λ/2} we have

| log M̃(|x|) − M(|x|) | ≤ C/n^{1/5},   (10)

where C > 0 is a universal constant.

Proof. Recall that E₀ ⊆ R^n is some fixed l-dimensional subspace with l ≤ n^λ. Fix x ∈ E₀ with |x| ≤ 5n^{λ/2}. Lemma 3.1 of [8] states that for any U₁, U₂ ∈ SO(n),

| M_{f_{X+Y},E₀,x}(U₁) − M_{f_{X+Y},E₀,x}(U₂) | ≤ Cn^{2λ(α+2)} d(U₁, U₂),   (11)

where d(U₁, U₂) stands for the geodesic distance between U₁ and U₂ in SO(n). As mentioned before, Lemma 3.1 is proved in [8] under the assumption that the dimension of the subspace E₀ is exactly n^λ. In our case, the dimension l might be smaller than n^λ, but a close inspection of the proofs in [8] reveals that the reduction of the dimension can only improve the estimates. Hence (11) holds true.

We apply the Gromov–Milman concentration inequality on SO(n), quoted as Proposition 3.2 in [8], and conclude from (11) that for any ε > 0,

μ_n{ U ∈ SO(n); | M_{f_{X+Y},E₀,x}(U) − M(|x|) | ≥ ε } ≤ C exp(−cnε²/L²),   (12)

with L = Cn^{2λ(α+2)}. That is, the distribution of

F(U) = (√n/L)( M_{f_{X+Y},E₀,x}(U) − M(|x|) )   (U ∈ SO(n))

on SO(n) has a subgaussian tail. Note also that ∫_{SO(n)} F(U) dμ_n(U) = 0. A standard computation shows that, for any p ≥ 1,

∫_{SO(n)} |F(U)|^p dμ_n(U) ≤ (C√p)^p,   (13)

where C is a universal constant. Hence, for any 0 < t ≤ c,

∫_{SO(n)} exp(tF(U)) dμ_n(U) ≤ 1 + t ∫_{SO(n)} F(U) dμ_n(U) + Σ_{i=2}^∞ (t^i/i!) ∫_{SO(n)} |F(U)|^i dμ_n(U)
≤ 1 + Σ_{i=2}^∞ (C√i·t)^i / i! ≤ 1 + C′ Σ_{j=1}^∞ (C′t²)^j / j! ≤ Σ_{j=0}^∞ (Ĉt²)^j / j! = exp(Ĉt²).   (14)

The left-hand side of (10) follows from Jensen's inequality. We use (14) for the value t = L/√n = Cn^{2λ(α+2)−1/2} = Cn^{−1/10}, to conclude that

M̃(|x|)/exp(M(|x|)) = ∫_{SO(n)} exp(M_{f_{X+Y},E₀,x}(U)) dμ_n(U) / exp(M(|x|))
= ∫_{SO(n)} exp( M_{f_{X+Y},E₀,x}(U) − M(|x|) ) dμ_n(U) ≤ exp(Ĉn^{−1/5}).

Taking logarithms of both sides completes the proof. □

Let X, Y, α, λ, l be as in Lemma 2. We choose a slightly different normalization. Define

Z = (X + Y)/√(1 + n^{−2αλ}),   (15)

and denote by f_Z the corresponding density. Clearly f_Z is isotropic and log-concave. Next we define, for x ∈ E₀,

M₁(|x|) := ∫_{SO(n)} π_{E₀}(f_Z ∘ U)(x) dμ_n(U).   (16)

Our goal is to show that the following estimate holds:

| M₁(|x|)/γ_l[1](x) − 1 | < C₁n^{−c₁}   (17)

for all x ∈ R^l with |x| < c₂n^{c₂}, for some universal constants C₁, c₁, c₂ > 0. We write S^{n−1} = {x ∈ R^n; |x| = 1}, the unit sphere in R^n. Define

f̄_Z(x) = ∫_{S^{n−1}} f_Z(|x|θ) dσ_n(θ) = ∫_{SO(n)} f_Z(Ux) dμ_n(U)   (x ∈ R^n),   (18)

where σ_n is the unique rotationally-invariant probability measure on S^{n−1}. Since f̄_Z is spherically symmetric, we shall also use the notation f̄_Z(|x|) = f̄_Z(x). Clearly, for any x ∈ E₀,

M₁(|x|) = ∫_{SO(n)} π_{E₀}(f_Z ∘ U)(x) dμ_n(U) = π_{E₀}(f̄_Z)(x).   (19)

We will use the following thin-shell estimate, proved in [8, Theorem 1.3].

Proposition 4. Let n ≥ 1 be an integer and let X be an isotropic random vector in R^n with a log-concave density. Then,

P{ | |X|/√n − 1 | ≥ n^{−1/15} } < C exp(−cn^{1/15}),   (20)

where C, c > 0 are universal constants.

Applying the above for f_Z, denoting ε = n^{−1/15}, and defining

A = { x ∈ R^n; √n(1 − ε) ≤ |x| ≤ √n(1 + ε) },

we get

∫_A f_Z(x) dx > 1 − Ce^{−cn^{1/15}}.   (21)
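The thin-shell phenomenon of Proposition 4 is also easy to visualize numerically. The sketch below is our own illustration (with the isotropic cube again standing in for a general isotropic log-concave density); it estimates the distribution of |X|/√n from samples.

```python
import math
import random

random.seed(1)
n, N = 400, 2000
s = math.sqrt(3.0)  # Uniform[-s, s]^n is an isotropic log-concave density

ratios = []
for _ in range(N):
    x = [random.uniform(-s, s) for _ in range(n)]
    ratios.append(math.sqrt(sum(t * t for t in x) / n))

mean = sum(ratios) / N
std = math.sqrt(sum((r - mean) ** 2 for r in ratios) / N)
print(round(mean, 3), round(std, 3))  # |X|/sqrt(n) concentrates near 1
```

For a product density such as this one, the standard deviation of |X|/√n is of order n^{−1/2} — much better than the universal n^{−1/15} width guaranteed by Proposition 4, which must cover every isotropic log-concave density.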

From the definition of f̄_Z, it is clear that the above inequality also holds when we replace f_Z with f̄_Z. In other words, if we define

g(t) = t^{n−1} ω_n f̄_Z(t)   (t ≥ 0),   (22)

where ω_n is the surface area of the unit sphere S^{n−1} in R^n, and use integration in polar coordinates, we get

∫_{√n(1−ε)}^{√n(1+ε)} g(t) dt > 1 − Ce^{−cn^{1/15}}.   (23)

Our next step is to apply the methods from Sodin's paper [13] in order to prove a generalization of [13, Theorem 2], for a multi-dimensional marginal rather than a one-dimensional marginal. Our estimate will be rather crude, but suitable for our needs. Denote by σ_{n,r} the unique rotationally-invariant probability measure on the Euclidean sphere of radius r around the origin in R^n. A standard calculation shows that the density of an l-dimensional marginal of σ_{n,r} is given by the following formula:

ψ_{n,l,r}(x) = ψ_{n,l,r}(|x|) := Γ_{n,l} (1/r^l) (1 − |x|²/r²)^{(n−l)/2 − 1} 1_{[−r,r]}(|x|),   (24)

where

Γ_{n,l} = (1/π)^{l/2} Γ(n/2) / Γ((n−l)/2)   (25)

and where 1_{[−r,r]} is the characteristic function of the interval [−r, r] (see for example [5, Remark 2.1]). When l ≤ √n we have Γ_{n,l}·(2π/n)^{l/2} ≈ 1. By the definition (22) of g, and since f̄_Z is spherically symmetric, we may write

π_{E₀}(f̄_Z)(x) = ∫₀^∞ ψ_{n,l,r}(|x|) g(r) dr   (x ∈ E₀).   (26)

Indeed, the measure whose density is f̄_Z equals ∫₀^∞ g(r)σ_{n,r} dr, hence its marginal onto E₀ has density x ↦ ∫₀^∞ ψ_{n,l,r}(x)g(r)dr. We will show that the above density is approximately Gaussian for x ∈ E₀ when |x| is not too large. But first we need the following technical lemma.

Lemma 5. Let g be the density defined in (22), and suppose that n ≥ C and l ≤ n^{1/20}. For ε = n^{−1/15} denote U = {t > 0; t < (1−ε)√n or t > (1+ε)√n}. Then,

∫_U t^{−l} g(t) dt < C′ exp(−c′n^{1/15}).   (27)

Here, c′, C′ > 0 are universal constants.
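Before entering the proofs, formula (24) is easy to sanity-check numerically. The sketch below is our own (the function names are ours); it evaluates ψ_{n,l,√n} against the standard Gaussian density of R^l, anticipating the approximation quantified later in (35).

```python
import math

def log_Gamma_nl(n, l):
    # log of the normalizing constant in (25): pi^(-l/2) * Gamma(n/2) / Gamma((n-l)/2)
    return -0.5 * l * math.log(math.pi) + math.lgamma(n / 2.0) - math.lgamma((n - l) / 2.0)

def psi(n, l, r, x):
    # (24): density of the l-dimensional marginal of the uniform probability
    # measure on the sphere of radius r in R^n, at a point at distance x < r
    return math.exp(log_Gamma_nl(n, l)) / r ** l * (1.0 - (x / r) ** 2) ** ((n - l) / 2.0 - 1.0)

def gauss(l, x):
    # standard Gaussian density in R^l at distance x from the origin
    return (2.0 * math.pi) ** (-l / 2.0) * math.exp(-x * x / 2.0)

n, l = 10_000, 3
for x in [0.0, 0.5, 1.0, 2.0]:
    print(x, round(psi(n, l, math.sqrt(n), x) / gauss(l, x), 4))
```

For r = √n the ratio stays within O(1/√n) of 1 on this range of |x|. As a second check, for n = 3, l = 1 the formula recovers Archimedes' observation that the one-dimensional marginal of the uniform measure on a sphere of radius r is uniform on [−r, r], with constant density 1/(2r).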

Proof. Define for convenience

h(t) = t^{−l} g(t).   (28)

Denote

A = [0, 1/(2n)],  B = [1/(2n), √n(1−ε)] ∪ [√n(1+ε), ∞),

and write

∫_U h(t) dt = ∫_A h(t) dt + ∫_B h(t) dt.   (29)

We estimate the two terms separately. For t > 1/(2n) we have

h(t) ≤ (2n)^l g(t) = e^{l log(2n)} g(t).   (30)

Thus we can estimate the second term as follows:

∫_B h(t) dt ≤ e^{l log(2n)} ∫_B g(t) dt < e^{l log(2n)} Ce^{−cn^{1/15}} < Ce^{−(1/2)cn^{1/15}},   (31)

where for the second inequality we apply the reformulation (23) of Proposition 4 (recall that ε = n^{−1/15} and that l ≤ n^{1/20}). To estimate the first term on the right-hand side of (29), we use the fact that f_Z is isotropic and log-concave, so we can use a crude bound for the isotropic constant (see e.g. [11, Theorem 5.14(e)] or [6, Corollary 4.3]) which gives sup_{R^n} f_Z < e^{n log n}; thus, also sup_{R^n} f̄_Z < e^{n log n}. Hence we can estimate

∫_A h(t) dt = ∫₀^{1/(2n)} t^{−l} g(t) dt = ∫₀^{1/(2n)} t^{n−l−1} ω_n f̄_Z(t) dt < (2n)^{−(n−l)} ω_n sup_{R^n} f̄_Z < e^{−(n−l)log(2n) + n log n} < e^{−n/2},   (32)

as ω_n < C. The combination of (31) and (32) completes the proof. □

We are now ready to show that the marginals of f̄_Z are approximately Gaussian. Note that by (19) and (26),

| M₁(|x|)/γ_l[1](x) − 1 | = | ∫₀^∞ ψ_{n,l,r}(|x|) g(r) dr / γ_l[1](x) − 1 |.   (33)

Our desired bound (17) is contained in the following lemma.

Lemma 6. Let 1 ≤ l ≤ n be integers, with n ≥ C and l ≤ n^{1/20}. Let g : R₊ → R₊ be a function that satisfies (23) and (27). Then we have

| ∫₀^∞ ψ_{n,l,r}(|x|) g(r) dr / γ_l[1](x) − 1 | < Cn^{−1/60}   (34)

for all x ∈ R^l with |x| < n^{1/40}, where C > 0 is a universal constant.

Proof. We begin by using a well-known fact, that follows from a straightforward computation using asymptotics of Γ-functions: for |x| < n^{1/8},

| ψ_{n,l,√n}(|x|)/γ_l[1](x) − 1 | = | (2π)^{l/2} Γ_{n,l} n^{−l/2} (1 − |x|²/n)^{(n−l)/2 − 1} e^{|x|²/2} − 1 | ≤ C/√n.   (35)

We omit the details of the simple computation. (An almost identical computation is done, for example, in [13, Lemma 1]. Note that in addition to the computation there, we have to use, e.g., Stirling's formula to estimate the constants ε_n.) Using the above fact (35), we see that it suffices to prove the following inequality:

| ∫₀^∞ ψ_{n,l,r}(|x|) g(r) dr / ψ_{n,l,√n}(|x|) − 1 | < Cn^{−1/60}   (36)

for all x ∈ R^l with |x| < n^{1/40}. To that end, fix x ∈ R^l with |x| < n^{1/40}, define

A = [√n(1 − n^{−1/15}), √n(1 + n^{−1/15})],  B = [0, ∞) \ A,

and write

∫₀^∞ ψ_{n,l,r}(|x|) g(r) dr = ∫_A ψ_{n,l,r}(|x|) g(r) dr + ∫_B ψ_{n,l,r}(|x|) g(r) dr.   (37)

We estimate the two terms separately. For the second term, we have

∫_B ψ_{n,l,r}(|x|) g(r) dr = Γ_{n,l} ∫_B (1/r^l)(1 − |x|²/r²)^{(n−l)/2 − 1} 1_{[−r,r]}(|x|) g(r) dr ≤ Γ_{n,l} ∫_B (1/r^l) g(r) dr < Γ_{n,l} Ce^{−cn^{1/15}},   (38)

where the last inequality follows from (27). Therefore,

∫_B ψ_{n,l,r}(|x|) g(r) dr / ψ_{n,l,√n}(|x|) < Ce^{−cn^{1/15}} n^{l/2} (1 − |x|²/n)^{−((n−l)/2 − 1)} ≤ C exp(−cn^{1/15} + |x|² + (1/2) l log n) < Ce^{−n^{1/20}}.   (39)

To estimate the first term on the right-hand side of (37), we will show that the following inequality holds:

| ∫_A ψ_{n,l,r}(|x|) g(r) dr / ( ψ_{n,l,√n}(|x|) ∫_A g(r) dr ) − 1 | < Cn^{−1/60}   (40)

for some constant C > 0. For r > 0 such that |x|/r < 1/2 we have

(d/dr) log ψ_{n,l,r}(|x|) = −l/r + ((n−l)/2 − 1)·2|x|²/(r³(1 − |x|²/r²)),   so   | (d/dr) log ψ_{n,l,r}(|x|) | ≤ l/r + 2n|x|²/r³.   (41)

Recalling that |x| < n^{1/40} and l ≤ n^{1/20}, the above estimate gives, for all r ∈ [√n/2, 3√n/2],

| (d/dr) log ψ_{n,l,r}(|x|) | ≤ 2n^{1/20 − 1/2} + 16n^{1 + 1/20 − 3/2} ≤ Cn^{−9/20},   (42)

which gives, for r ∈ [√n/2, 3√n/2],

| ψ_{n,l,r}(|x|)/ψ_{n,l,√n}(|x|) − 1 | ≤ Cn^{−9/20} |r − √n|.   (43)

Recall that for r ∈ A we have |r − √n| ≤ 2n^{13/30}. Hence the last estimate yields

| ∫_A ψ_{n,l,r}(|x|) g(r) dr / ( ψ_{n,l,√n}(|x|) ∫_A g(r) dr ) − 1 | ≤ Cn^{−9/20}·2n^{13/30} = C′n^{−1/60}.   (44)

Combining the last inequality with (23), we get

| ∫_A ψ_{n,l,r}(|x|) g(r) dr / ψ_{n,l,√n}(|x|) − 1 | < Ce^{−cn^{1/15}} + Cn^{−1/60} < C′n^{−1/60}.   (45)

From (39) and (45) we deduce (36), and the lemma is proved. □

Recall the definitions (9) and (16) of M̃(|x|) and M₁(|x|); the only difference between them is the normalization of X + Y. By an easy scaling argument, we deduce from (33) and Lemma 6 that when n ≥ C,

| M̃(|x|)/γ_l[1 + n^{−2αλ}](x) − 1 | < C₁n^{−1/60}   (46)

for all x ∈ R^l with |x| < n^{1/40}, for C₁ > 0 a universal constant. By substituting (10) and (46) into Lemma 2, we conclude the following.

Proposition 7. Let 1 ≤ l ≤ n be integers. Let 10 < α < 15 and denote λ = 1/(5(α+2)). Assume that l ≤ n^λ. Suppose that f : R^n → [0,∞) is a log-concave function that is the density of an isotropic random vector. Define g = f ∗ γ_n[n^{−2αλ}], the convolution of f and γ_n[n^{−2αλ}]. Let E ∈ G_{n,l} be a random subspace. Then, with probability greater than 1 − Ce^{−cn^{1/10}} of selecting E, we have

| π_E(g)(x)/γ_l[1 + n^{−2αλ}](x) − 1 | ≤ Cn^{−λ}   (47)

for all x ∈ E with |x| < n^{λ/2}, where C > 0 is a universal constant.

We did not have to explicitly assume that n ≥ C in Proposition 7, since otherwise the proposition is vacuously true. In the next section we will show that the above estimate still holds without taking the convolution, though perhaps with slightly worse constants.

3. Deconvolving the Gaussian

Our goal in this section is to establish the following principle. Suppose that X is a random vector with a log-concave density, and that Y is an independent, Gaussian random vector whose covariance matrix is small enough with respect to that of X. Then, in the case where X + Y is approximately Gaussian, the density of X is also approximately Gaussian, in a rather large domain. We begin with a lower bound for the density of X. Note that the notation n in this section corresponds to the dimension of the subspace, which was denoted by l in the previous section.

Lemma 8. Let n ≥ 1 be a dimension, and let α, β, ε, R > 0. Suppose that X is an isotropic random vector in R^n with a log-concave density, and that Y is an independent Gaussian random vector in R^n with mean zero and covariance matrix α·Id. Denote by f_X and f_{X+Y} the respective densities. Suppose that

f_{X+Y}(x) ≥ (1 − ε)γ_n[1 + α](x)   (48)

for all |x| ≤ R. Assume that α ≤ c₀n^{−8} and that

(10n)^{max{3β, 3/2}} α^{1/4} < ε < 1/10.   (49)

Then

f_X(x) ≥ (1 − 6ε)γ_n[1](x)   (50)

for all x ∈ R^n with |x| ≤ min{R − 1, (10n)^β}. Here, 0 < c₀ < 1 is a universal constant.

Proof. Suppose first that f_X is positive everywhere in R^n, and that log f_X is strictly concave. Fix x₀ ∈ R^n with |x₀| ≤ min{R − 1, (10n)^β}. Assume that ε′ > 0 is such that

f_X(x₀) < (1 − ε′)γ_n[1](x₀).   (51)

To prove the lemma (for the case where log f_X is strictly concave) it suffices to show that

ε′ ≤ 6ε.   (52)

Consider the level set L = {x ∈ R^n; f_X(x) ≥ f_X(x₀)}. Then L is convex and bounded, as f_X is log-concave and integrable (here we used the fact that f_X(x₀) > 0). Let H be an affine hyperplane that supports L at its boundary point x₀, and denote by D the open ball of radius α^{1/4} that is tangent to H at x₀ and is disjoint from the level set L. By definition, f_X(x) < f_X(x₀) for x ∈ D. Denote the center of D by x₁. Then |x₁ − x₀| = α^{1/4} with |x₀| ≤ (10n)^β, and a straightforward computation yields

|x₁|² − |x₀|² ≤ 2((10n)^β + α^{1/4})α^{1/4} ≤ ε,   (53)

where we used (49). Note that |x₁| ≤ |x₀| + α^{1/4} ≤ R. Apply the last inequality and (48) to obtain

f_{X+Y}(x₁) ≥ (1 − ε)γ_n[1 + α](x₀)·e^{−(|x₁|² − |x₀|²)/(2(1+α))} ≥ (1 − 2ε)γ_n[1 + α](x₀).   (54)

By definition,

f_{X+Y}(x₁) = ∫_{R^n} f_X(x)γ_n[α](x₁ − x)dx = ∫_{x∈D} f_X(x)γ_n[α](x₁ − x)dx + ∫_{x∉D} f_X(x)γ_n[α](x₁ − x)dx.   (55)

We will estimate both integrals. First, recall that f_X(x) < f_X(x₀) for x ∈ D and use (51) to deduce

∫_{x∈D} f_X(x)γ_n[α](x₁ − x)dx < f_X(x₀) < (1 − ε′)γ_n[1](x₀).   (56)

For the integral outside D, a rather rough estimate will suffice. We may write

∫_{x∉D} f_X(x)γ_n[α](x₁ − x)dx < P(|G_n| ≥ α^{−1/4}) · sup_{R^n} f_X,   (57)

where G_n ~ γ_n[1] is a standard Gaussian random vector. To bound the right-hand side term, we shall use a standard tail bound for the norm of a Gaussian random vector,

P(|G_n| > t√n) < Ce^{−ct²n}   (t ≥ C),   (58)

and the following crude bound for the isotropic constant of f_X (see, e.g., [11, Theorem 5.14(e)]),

sup_{R^n} f_X < e^{(1/2)n log n + 6n} < e^{Cn log n}.   (59)

Consequently,

∫_{x∉D} f_X(x)γ_n[α](x₁ − x)dx < Ce^{−c·α^{−1/2}}·e^{Cn log n} < e^{−α^{−1/3}},   (60)

for an appropriate choice of a sufficiently small universal constant c₀ > 0 (so that all other constants are absorbed). Combining (55), (56) and (60) gives

f_{X+Y}(x₁) < ( 1 − ε′ + e^{−α^{−1/3}}/γ_n[1](x₀) ) γ_n[1](x₀).   (61)

Using the fact that 2n + (10n)^{2β} < α^{−1/3}, which follows easily from our assumptions, we have

e^{−α^{−1/3}}/γ_n[1](x₀) = e^{|x₀|²/2 + (n/2)log(2π) − α^{−1/3}} < e^{−(1/2)α^{−1/3}} < ε²/2 < ε′/2   (62)

(for the last inequality, note that if ε′ < 6ε then (52) holds and we have nothing to prove; so we can assume that ε′ > ε). From (61) and (62) we obtain the bound

f_{X+Y}(x₁) < (1 − ε′/2)γ_n[1](x₀).   (63)

Combining (54) and (63) we get

(1 − 2ε)γ_n[1 + α](x₀) < (1 − ε′/2)γ_n[1](x₀).   (64)

A calculation yields

γ_n[1](x₀)/γ_n[1 + α](x₀) ≤ γ_n[1](0)/γ_n[1 + α](0) = (1 + α)^{n/2} < 1 + ε.   (65)

From the above two inequalities, we finally deduce

(1 − ε′/2)/(1 − 2ε) > γ_n[1 + α](x₀)/γ_n[1](x₀) > 1/(1 + ε) > 1 − ε   ⟹   ε′ < 6ε,   (66)

which proves (52). The lemma is proved, under the additional assumption that log f_X is strictly concave. The general case follows by a standard approximation argument. □

After proving a lower bound, we move to the upper bound. We will show that if we add to the requirements of the previous lemma an assumption that the density f_{X+Y} is bounded from above, then we can provide an upper bound for f_X.

Lemma 9. Let n, X, Y, α, β, ε, R, c₀ be defined as in Lemma 8, and suppose that all of the conditions of Lemma 8 are satisfied. Suppose that, in addition, we have the following upper bound for f_{X+Y}:

f_{X+Y}(x) < (1 + ε)γ_n[1 + α](x)   (67)

for all |x| < R. Then we have

f_X(x) < (1 + 8ε)γ_n[1](x)   (68)

for all x with |x| < min{(10n)^β, R} − 3.

Proof. Denote F(x) = −log f_X(x). Again we use the upper bound (59) for the supremum of the density,

F(x) ≥ −6n − (1/2)n log n ≥ −2n log n,   x ∈ R^n.   (69)

Use the conclusion of Lemma 8 to deduce that for |x| < min{(10n)^β, R} − 1 the following holds:

F(x) ≤ −log( γ_n[1](x)/2 ) ≤ log 2 + (n/2)log(2π) + (10n)^{2β}/2 ≤ 3(10n)^{max{2β, 3/2}}.   (70)

Next we will show that for x, y ∈ A = {x ∈ R^n; |x| < min{(10n)^β, R} − 2}, the following Lipschitz condition holds:

|F(x) − F(y)| ≤ 5(10n)^{max{2β, 3/2}} |x − y|.   (71)

To that end, denote a = 5(10n)^{max{2β, 3/2}} and suppose by contradiction that x, y ∈ A are such that

F(y) − F(x) > a|y − x|.   (72)

Since F(y) − F(x) < a (as implied by (69) and (70)), we have |y − x| < 1, and for the point

y₁ := x + (y − x)/|y − x|

we have, using the convexity of F,

F(y₁) − F(x) ≥ (F(y) − F(x))/|y − x| > a.

Note that |y₁| ≤ |x| + 1 < min{(10n)^β, R} − 1; thus we obtain a contradiction to (69) and (70). This proves (71). Therefore, given two points x, x′ ∈ A such that |x − x′| ≤ α^{1/4}, (71) implies

|F(x′) − F(x)| ≤ 5α^{1/4}(10n)^{max{2β, 3/2}} < ε/20.   (73)

Recall that F = −log f_X; hence the above translates to

|f_X(x′) − f_X(x)| < (e^{ε/20} − 1)f_X(x) < (ε/4)f_X(x).   (74)

Now, suppose x₀ ∈ R^n and 0 < ε′ < 1 are such that

f_X(x₀) > (1 + ε′)γ_n[1](x₀),   (75)

with |x₀| < min{(10n)^β, R} − 3. Again, to prove the lemma it suffices to show that in fact ε′ < 8ε. Let D be the ball of radius α^{1/4} around x₀. Since we can assume that ε′ > 2ε (otherwise, there is nothing to prove), we deduce from (74) and (75) that for all x ∈ D,

f_X(x) > (1 − ε/4)(1 + ε′)γ_n[1](x₀) > (1 + 3ε′/4)γ_n[1](x₀).   (76)

Thus,

f_{X+Y}(x₀) = ∫_{R^n} f_X(x)γ_n[α](x₀ − x)dx ≥ ∫_{x∈D} f_X(x)γ_n[α](x₀ − x)dx
> (1 + 3ε′/4)γ_n[1](x₀)·(1 − P(|G_n| > α^{−1/4})) > (1 + 2ε′/3)γ_n[1](x₀),   (77)

where in the last inequality we used the estimate (58) and the assumption ε′ > 2ε. Now, a computation yields

γ_n[1 + α](x₀)/γ_n[1](x₀) ≤ e^{(1/2)(|x₀|² − |x₀|²/(1+α))} = e^{α|x₀|²/(2(1+α))} ≤ e^{(10n)^{2β}α} < 1 + ε.   (78)

We thus obtain, combining (67) and (77), and using (78), that

(1 + 2ε′/3)/(1 + ε) < γ_n[1 + α](x₀)/γ_n[1](x₀) < 1 + ε,

so ε′ < 8ε, and the proof of the lemma is complete. □

The combination of the two lemmas above gives us the desired estimate for the density of X, as proclaimed in the beginning of this section.

4. Proof of the main theorem

Proof of Theorem 1. We may clearly assume that n exceeds some positive universal constant (otherwise, take E = ∅). Let 1 ≤ l ≤ n^{1/100} be an integer, and let δ be such that l = n^δ. Set α = 12 and λ = 1/(5(α+2)) = 1/70. Let Y be a Gaussian random vector in R^n with mean zero and covariance matrix n^{−2αλ}·Id, independent of X. We first apply Proposition 7 for the random vector

X + Y with parameters l and α (noting that l ≤ n^{1/100} ≤ n^λ). According to the conclusion of that proposition, if E is a random subspace of dimension l, then

| π_E(f_{X+Y})(x)/γ_l[1 + n^{−2αλ}](x) − 1 | ≤ Cn^{−1/100},   (79)

for all x ∈ E with |x| < n^{λ/2}, with probability greater than 1 − Ce^{−cn^{1/10}} of selecting E. Next, we apply Lemmas 8 and 9 in the l-dimensional subspace E, with the parameters

α = n^{−2αλ},  β = (6δ + 1)/(50δ),  R = n^{1/200},  ε = Cn^{−1/100},

where C is the constant from (79). It is straightforward to verify that the requirements of these two lemmas hold, since n may be assumed to exceed a given universal constant. According to the conclusions of Lemmas 8 and 9, for any x ∈ E with |x| ≤ (1/2)n^{1/200},

| π_E(f_X)(x)/γ_l[1](x) − 1 | ≤ C′n^{−1/100}.

This completes the proof. □

Remark. The numerical values of the exponents c₁, c₂, c₃, c₄ provided by our proof of Theorem 1 are far from optimal. The theorem is tight only in the sense that the power-law dependencies on n cannot be improved to, say, exponential dependence. The only constant among c₁, c₂, c₃, c₄ for which the best value is essentially known to us is c₂. It is clear from the proof that c₂ can be made arbitrarily close to 1 at the expense of decreasing the other constants. Note also that necessarily c₄ ≤ 1/4, as is shown by the example where X is distributed uniformly in a Euclidean ball (see [13, Section 4.1]).

5. An additional Gaussian deconvolution

In this section we improve an estimate from [7,8] which is related to Gaussian convolution. This improvement can be used to obtain slightly better bounds on certain exponents related to the central limit theorem for convex bodies. The following proposition was conjectured by Meckes [12].

Proposition 10. Let n ≥ 1 and let f : R^n → [0,∞) be an isotropic, log-concave density. Suppose that ε > 0 and denote g_ε = f ∗ γ_n[ε²], the convolution of f with γ_n[ε²]. Then

‖g_ε − f‖_{L¹(R^n)} = ∫_{R^n} |g_ε(x) − f(x)| dx ≤ Cnε,

where C > 0 is a universal constant.

Proposition 10 improves upon Lemma 5.1 in [7] and the results of Section 3 in [12], and it admits a simpler proof. It is straightforward to adapt the argument in [8], and to use Proposition 10 in place of the inferior Lemma 5.1 of [7]. This leads to slightly better estimates. For instance, we conclude that whenever X is a random vector with a log-concave density in R^n, one may find a subspace E ⊆ R^n of dimension, say, cn^{1/15} such that Proj_E(X) is approximately Gaussian, in

the total variation sense. The exponent 1/15 is probably far from optimal, yet it is better than previous bounds. Meckes has observed that Proposition 10 would follow from the next lemma.

Lemma 11. Let $f : \mathbb{R}^n \to [0, \infty)$ be a $C^2$-smooth, isotropic, log-concave density. Then,

$$\int_{\mathbb{R}^n} \bigl|\nabla f(x)\bigr|\,dx \le C' n,$$

where $C' > 0$ is a universal constant.

To see that Lemma 11 leads to Proposition 10, one only needs to apply an inequality from Ledoux [10]. In the notation of Proposition 10, it is proven in [10] that when f is $C^2$-smooth,

$$\|g_\varepsilon - f\|_{L^1(\mathbb{R}^n)} \le \varepsilon \int_{\mathbb{R}^n} \bigl|\nabla f(x)\bigr|\,dx. \tag{80}$$

Thus, Proposition 10 follows from Lemma 11 in virtue of (80), by approximating f with a $C^2$-smooth function. Proposition 10 and Lemma 11 are tight, for small $\varepsilon$, up to the value of the constants $C, C'$. This is shown, e.g., by the example of f being close to the isotropic, log-concave function that is proportional to the characteristic function of the cube $[-\sqrt{3}, \sqrt{3}]^n$.

Proof of Lemma 11. The case n = 1 is covered, e.g., in [12]. We assume from now on that $n \ge 2$. Our method builds on the main idea of the proof of Lemma 2.3 in [9]. Fix $x \in \mathbb{R}^n$. We claim that

$$\bigl|\nabla f(x)\bigr| \le C_1 n f(x) - C_2\, \nabla f(x) \cdot x, \tag{81}$$

for some universal constants $C_1, C_2 > 0$. Suppose first that $f(x) = 0$. Since $f \ge 0$ and f is $C^2$-smooth, then necessarily $\nabla f(x) = 0$. Therefore (81) is trivial in this case. It remains to prove (81) in the case where $f(x) > 0$. Denote $F = -\log f$. Then $F : \mathbb{R}^n \to (-\infty, \infty]$ is convex. Additionally, F is finite and $C^2$-smooth in a neighborhood of x. The graph of the convex function F lies entirely above the supporting hyperplane to F at x. That is,

$$F(x) + \nabla F(x) \cdot (y - x) \le F(y) \quad \text{for all } y \in \mathbb{R}^n.$$

Consequently, for any $y \in \mathbb{R}^n$,

$$\nabla F(x) \cdot y \le \bigl[F(y) - \inf F\bigr] + \nabla F(x) \cdot x.$$

By taking the supremum over all $y \in \mathbb{R}^n$ with $|y| \le 1/10$, we see that

$$\frac{|\nabla F(x)|}{10} \le \nabla F(x) \cdot x + \sup_{|y| \le 1/10} F(y) - \inf F. \tag{82}$$
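The supremum step leading to (82) can be spelled out; the following short elaboration (ours, not part of the original proof) records the elementary identity it relies on:

```latex
% Elaboration of the step preceding (82); not part of the original argument.
% For a fixed vector v \in \mathbb{R}^n one has \sup_{|y| \le r} v \cdot y = r|v|,
% attained at y = r v / |v| when v \ne 0 (Cauchy--Schwarz gives the upper bound).
% Applying this with v = \nabla F(x) and r = 1/10 to the inequality
%     \nabla F(x) \cdot y \le [F(y) - \inf F] + \nabla F(x) \cdot x
% yields
\frac{|\nabla F(x)|}{10}
  \;=\; \sup_{|y| \le 1/10} \nabla F(x) \cdot y
  \;\le\; \sup_{|y| \le 1/10} \bigl[ F(y) - \inf F \bigr]
          \;+\; \nabla F(x) \cdot x,
% which is precisely (82).
```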

Denote

$$K = \bigl\{x \in \mathbb{R}^n;\ f(x) \ge e^{-10n} \sup f\bigr\}.$$

Then K is clearly convex. Additionally,

$$\int_K f(x)\,dx \ge 1 - e^{-5n/4} \ge 9/10,$$

by Corollary 5.3 in [7] (we actually use the formulation from Lemma 2.2 in [8]). According to Lemma 5.4 from [7] we have the inclusion $\{y \in \mathbb{R}^n;\ |y| \le 1/10\} \subseteq K$. Therefore,

$$\sup_{|y| \le 1/10} F(y) - \inf F \le \sup_{y \in K} F(y) - \inf F \le \bigl[10n + \inf F\bigr] - \inf F = 10n.$$

Hence (82) implies that for any $x \in \mathbb{R}^n$,

$$\bigl|\nabla F(x)\bigr| \le 10\bigl(\nabla F(x) \cdot x\bigr) + 100n. \tag{83}$$

Since $\nabla f(x) = -f(x)\nabla F(x)$, then (81) follows from (83). This completes the proof of (81). Next, we integrate by parts and see that

$$\int_{\mathbb{R}^n} \nabla f(x) \cdot x\,dx = \sum_{i=1}^{n} \int_{\mathbb{R}^n} x_i\,\partial_i f\,dx_1 \dots dx_n = -\sum_{i=1}^{n} \int_{\mathbb{R}^n} f(x)\,dx = -n.$$

The boundary terms vanish, since $|x| f(x) \to 0$ as $|x| \to \infty$ (see, e.g., [9, Lemma 2.1]). According to (81),

$$\int_{\mathbb{R}^n} \bigl|\nabla f(x)\bigr|\,dx \le C_1 n \int_{\mathbb{R}^n} f(x)\,dx - C_2 \int_{\mathbb{R}^n} \nabla f(x) \cdot x\,dx = (C_1 + C_2)n. \qquad \square$$

Acknowledgments

The first named author would like to express his sincere gratitude to his supervisor, Prof. Vitali Milman, who introduced him to the subject, guided him and encouraged him to write this note. We would also like to thank Sasha Sodin and Prof. Vitali Milman for reviewing a preliminary version of this note.

References

[1] D. Alonso-Gutierrez, J. Bastero, J. Bernues, G. Paouris, High-dimensional random sections of isotropic convex bodies, preprint.
[2] M. Anttila, K. Ball, I. Perissinaki, The central limit problem for convex bodies, Trans. Amer. Math. Soc. 355 (12) (2003) 4723–4735.
[3] J. Bastero, J. Bernués, Asymptotic behavior of averages of k-dimensional marginals of measures on R^n, preprint, available at http://www.unizar.es/galdeano/preprints/5/preprint34.pdf.
[4] U. Brehm, J. Voigt, Asymptotics of cross sections for convex bodies, Beiträge Algebra Geom. 41 (2) (2000) 437–454.
[5] P. Diaconis, D. Freedman, A dozen de Finetti-style results in search of a theory, Ann. Inst. H. Poincaré Probab. Statist. 23 (2) (1987) 397–423.
[6] B. Klartag, On convex perturbations with a bounded isotropic constant, Geom. Funct. Anal. 16 (6) (2006) 1274–1290.
[7] B. Klartag, A central limit theorem for convex sets, Invent. Math. 168 (2007) 91–131.

[8] B. Klartag, Power-law estimates for the central limit theorem for convex sets, J. Funct. Anal. 245 (2007) 284–310.
[9] B. Klartag, Uniform almost sub-gaussian estimates for linear functionals on convex sets, Algebra i Analiz (St. Petersburg Math. J.) 19 (1) (2007) 109–148.
[10] M. Ledoux, Spectral gap, logarithmic Sobolev constant, and geometric bounds, in: Surveys Differ. Geom., vol. IX, Int. Press, Somerville, MA, 2004, pp. 219–240.
[11] L. Lovász, S. Vempala, The geometry of logconcave functions and sampling algorithms, Random Structures Algorithms 30 (3) (2007) 307–358.
[12] M.W. Meckes, Gaussian marginals of convex bodies with symmetries, available under http://arxiv.org/abs/math/0606073.
[13] S. Sodin, Tail-sensitive Gaussian asymptotics for marginals of concentrated measures in high dimension, in: Geometric Aspects of Functional Analysis, Israel Seminar, in: Lecture Notes in Math., vol. 1910, Springer, 2007, pp. 271–295.