Weak and strong moments of $\ell_r$-norms of log-concave vectors


Weak and strong moments of $\ell_r$-norms of log-concave vectors

Rafał Latała (based on joint work with Marta Strzelecka), University of Warsaw. Minneapolis, April 14, 2015.

Log-concave measures/vectors

A measure $\mu$ on a locally convex linear space $F$ is called logarithmically concave (log-concave in short) if for any compact nonempty sets $K, L \subset F$ and $\lambda \in [0,1]$,
$$\mu(\lambda K + (1-\lambda)L) \ge \mu(K)^{\lambda}\mu(L)^{1-\lambda}.$$
A random vector with values in $F$ is called log-concave if its distribution is logarithmically concave. By a result of Borell, an $n$-dimensional vector with full-dimensional support is log-concave iff it has a log-concave density, i.e. a density of the form $e^{-h}$, where $h$ is a convex function with values in $(-\infty, \infty]$.
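The defining inequality can be checked numerically for a concrete log-concave measure. A minimal sanity check (not part of the talk), assuming the standard Gaussian measure on $\mathbb{R}$ and intervals $K$, $L$, for which $\lambda K + (1-\lambda)L$ is again an interval:

```python
import math

def gauss_measure(a, b):
    """Standard Gaussian measure of the interval [a, b]."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2)))
    return Phi(b) - Phi(a)

def check(K, L, lam):
    """Verify mu(lam*K + (1-lam)*L) >= mu(K)^lam * mu(L)^(1-lam) for intervals."""
    a = lam * K[0] + (1 - lam) * L[0]
    b = lam * K[1] + (1 - lam) * L[1]
    lhs = gauss_measure(a, b)
    rhs = gauss_measure(*K) ** lam * gauss_measure(*L) ** (1 - lam)
    return lhs >= rhs

# the intervals below are arbitrary illustrative choices
print(all(check((0.0, 1.0), (-2.0, -0.5), l / 10) for l in range(11)))
```

For a log-concave measure the check holds for every $\lambda$; for a measure concentrated near two far-apart points it would fail for intermediate $\lambda$.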

Examples of log-concave vectors

- Gaussian vectors
- vectors with independent log-concave coordinates (in particular vectors with product exponential distribution)
- vectors uniformly distributed on convex bodies
- affine images of log-concave vectors
- sums of independent log-concave vectors
- weak limits of log-concave vectors

It may be shown that the class of log-concave distributions on $\mathbb{R}^n$ is the smallest class that contains uniform distributions on convex bodies and is closed under affine transformations and weak limits.

Isotropic vectors

Let $X = (X_1,\ldots,X_n)$ be a random vector in $\mathbb{R}^n$ such that $\mathbb{E}|X|^2 < \infty$. We say that the distribution of $X$ is isotropic if $\mathbb{E}X_i = 0$ and $\mathbb{E}X_iX_j = \delta_{i,j}$ for all $1 \le i,j \le n$. If $\mathbb{E}|X|^2 < \infty$ and $X$ has full-dimensional support, then there exists an affine transformation $T$ such that $TX$ is isotropic. Here and in the sequel $|x| = \|x\|_2$, where $\|x\|_r = (\sum_{i \le n} |x_i|^r)^{1/r}$ for $x \in \mathbb{R}^n$, $r \ge 1$.

Remark. By a result of Borell, for any log-concave vector $X$ and any seminorm $\|\cdot\|$, $\mathbb{E}\|X\|^p < \infty$. Moreover $(\mathbb{E}\|X\|^p)^{1/p} \le C\frac{p}{q}(\mathbb{E}\|X\|^q)^{1/q}$ for $p \ge q \ge 1$. (By $C$ we denote universal constants that may differ from line to line.)
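The affine normalization can be made explicit: $Tx = \mathrm{Cov}(X)^{-1/2}(x - \mathbb{E}X)$. A small numerical sketch (assuming numpy; the matrix `A` and the shift are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# a log-concave but non-isotropic sample: affine image of product exponentials
A = np.array([[2.0, 1.0], [0.0, 3.0]])
X = rng.laplace(size=(100_000, 2)) @ A.T + np.array([1.0, -2.0])

# T x = Cov^{-1/2}(x - mean) makes the sample (empirically) isotropic
mean = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
w, V = np.linalg.eigh(cov)                 # symmetric eigendecomposition
cov_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
Y = (X - mean) @ cov_inv_sqrt.T

print(np.allclose(Y.mean(axis=0), 0, atol=1e-6))                    # EY_i = 0
print(np.allclose(np.cov(Y, rowvar=False), np.eye(2), atol=1e-6))   # EY_iY_j = delta_ij
```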

Paouris inequality

One of the fundamental properties of log-concave vectors is the Paouris inequality.

Theorem (Paouris '06). For any log-concave vector $X$ in $\mathbb{R}^n$,
$$(\mathbb{E}|X|^p)^{1/p} \le C\bigl((\mathbb{E}|X|^2)^{1/2} + \sigma_X(p)\bigr) \quad \text{for } p \ge 1,$$
where
$$\sigma_X(p) := \sup_{|t|_2 \le 1}\Bigl(\mathbb{E}\Bigl|\sum_{i=1}^n t_iX_i\Bigr|^p\Bigr)^{1/p}.$$
Equivalently, in terms of tails we have
$$\mathbb{P}\bigl(|X| \ge Ct\,\mathbb{E}|X|\bigr) \le \exp\bigl(-\sigma_X^{-1}(t\,\mathbb{E}|X|)\bigr) \quad \text{for } t \ge 1.$$

Paouris inequality in the isotropic case

For an isotropic vector $X$, $\mathbb{E}|X| \le (\mathbb{E}|X|^2)^{1/2} = \sqrt{n}$ and $\sigma_X(2) = 1$. So if $X$ is isotropic log-concave then $\sigma_X(p) \le Cp$ for $p \ge 1$. Hence we have the following weaker form of the Paouris inequality.

Corollary. For any isotropic log-concave vector $X$ in $\mathbb{R}^n$,
$$(\mathbb{E}|X|^p)^{1/p} \le C(\sqrt{n} + p) \quad \text{for } p \ge 1,$$
and
$$\mathbb{P}(|X| \ge Ct\sqrt{n}) \le \exp(-t\sqrt{n}) \quad \text{for } t \ge 1.$$
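The corollary can be illustrated by Monte Carlo for a concrete isotropic log-concave vector: the product symmetric exponential distribution normalized to variance 1. A sketch assuming numpy (the observed ratios sit well below any reasonable universal constant):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 50, 200_000
# independent symmetric exponential coordinates scaled to variance 1 (isotropic, log-concave)
X = rng.laplace(scale=1 / np.sqrt(2), size=(N, n))
norms = np.linalg.norm(X, axis=1)

# empirical (E|X|^p)^{1/p} divided by sqrt(n) + p
ratios = {p: np.mean(norms ** p) ** (1 / p) / (np.sqrt(n) + p) for p in (1, 2, 4, 8)}
print({p: round(float(r), 2) for p, r in ratios.items()})
```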

Example

Let $Y = \sqrt{n}\,gU$, where $U$ has a uniform distribution on $S^{n-1}$ and $g$ is a standard normal $N(0,1)$ r.v., independent of $U$. Then it is easy to see that $Y$ is isotropic, rotationally invariant and for any seminorm $\|\cdot\|$ on $\mathbb{R}^n$,
$$(\mathbb{E}\|Y\|^p)^{1/p} = \sqrt{n}\,(\mathbb{E}|g|^p)^{1/p}(\mathbb{E}\|U\|^p)^{1/p} \sim \sqrt{pn}\,(\mathbb{E}\|U\|^p)^{1/p} \quad \text{for } p \ge 1.$$
In particular this implies that for any $t \in \mathbb{R}^n$,
$$\Bigl(\mathbb{E}\Bigl|\sum_{i=1}^n t_iY_i\Bigr|^p\Bigr)^{1/p} \le C\sqrt{\frac{p}{q}}\,\Bigl(\mathbb{E}\Bigl|\sum_{i=1}^n t_iY_i\Bigr|^q\Bigr)^{1/q} \quad \text{for } p \ge q \ge 1.$$
Therefore $(\mathbb{E}|Y|^p)^{1/p} \sim \sqrt{pn}$, $(\mathbb{E}|Y|^2)^{1/2} = \sqrt{n}$, $\sigma_Y(p) \le Cp$, and for $1 \ll p \le n$,
$$(\mathbb{E}|Y|^p)^{1/p} \gg (\mathbb{E}|Y|^2)^{1/2} + \sigma_Y(p).$$
It would be very valuable to have a nice characterization of random vectors which satisfy the Paouris inequality.
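Since $|Y|_2 = \sqrt{n}|g|$, the moments of $|Y|$ can be computed exactly from $\mathbb{E}|g|^p = 2^{p/2}\Gamma((p+1)/2)/\sqrt{\pi}$; no simulation is needed. The short computation below (a sketch; $n$ is chosen large only for illustration) shows the ratio of $(\mathbb{E}|Y|^p)^{1/p}$ to $\sqrt{n} + p$ growing with $p$, so $Y$ fails the Paouris-type bound:

```python
import math

def g_abs_moment(p):
    """(E|g|^p)^(1/p) for a standard normal g, via E|g|^p = 2^(p/2) Gamma((p+1)/2) / sqrt(pi)."""
    return (2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)) ** (1 / p)

n = 10_000
for p in (10, 50, 100):
    strong = math.sqrt(n) * g_abs_moment(p)   # (E|Y|^p)^{1/p}, since |Y|_2 = sqrt(n)|g|
    benchmark = math.sqrt(n) + p              # the Paouris-type right-hand side (up to constants)
    print(p, round(strong / benchmark, 1))    # the ratio grows with p
```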

Conjecture about weak and strong moments

It is natural to ask whether the Paouris inequality may be generalized to non-Euclidean norms. One may risk the following conjecture.

Conjecture. There exists a universal constant $C$ such that for any log-concave vector $X$ with values in a normed space $(F, \|\cdot\|)$,
$$(\mathbb{E}\|X\|^p)^{1/p} \le C\Bigl(\mathbb{E}\|X\| + \sup_{\varphi \in F^*, \|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}\Bigr) \quad \text{for } p \ge 1.$$
Remark. Obviously for $p \ge 1$, $(\mathbb{E}\|X\|^p)^{1/p} \ge \mathbb{E}\|X\|$, and strong moments dominate weak moments, i.e.
$$(\mathbb{E}\|X\|^p)^{1/p} \ge \sup_{\varphi \in F^*, \|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}.$$

Main result

Theorem. Let $X$ be a log-concave vector with values in a normed space $(F, \|\cdot\|)$ which may be isometrically embedded in $\ell_r$ for some $r \in [2,\infty)$. Then for $p \ge 1$,
$$(\mathbb{E}\|X\|^p)^{1/p} \le Cr\Bigl(\mathbb{E}\|X\| + \sup_{\varphi \in F^*, \|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}\Bigr).$$
Remark. Let $X$ and $F$ be as above. Then by Chebyshev's inequality we obtain a large deviation estimate for $\|X\|$:
$$\mathbb{P}\bigl(\|X\| \ge Crt\,\mathbb{E}\|X\|\bigr) \le \exp\bigl(-\sigma_{X,F}^{-1}(t\,\mathbb{E}\|X\|)\bigr) \quad \text{for } t \ge 1,$$
where
$$\sigma_{X,F}(p) := \sup_{\varphi \in F^*, \|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p} \quad \text{for } p \ge 1$$
denotes the weak $p$-th moment of $X$.
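The passage from the moment estimate to the tail estimate is the standard Chebyshev/Markov argument; a sketch of the step (notation as above, constants not optimized):

```latex
% Markov's inequality applied to \|X\|^p, using
% (\mathbb{E}\|X\|^p)^{1/p} \le Cr\,(\mathbb{E}\|X\| + \sigma_{X,F}(p)):
\mathbb{P}\bigl(\|X\| \ge e\,Cr\,(\mathbb{E}\|X\| + \sigma_{X,F}(p))\bigr)
  \le \frac{\mathbb{E}\|X\|^p}{e^p (Cr)^p (\mathbb{E}\|X\| + \sigma_{X,F}(p))^p}
  \le e^{-p}.
% For t \ge 1 take p := \sigma_{X,F}^{-1}(t\,\mathbb{E}\|X\|), so that
% \sigma_{X,F}(p) \le t\,\mathbb{E}\|X\|; the threshold above is then at most
% e\,Cr(1+t)\,\mathbb{E}\|X\| \le C'rt\,\mathbb{E}\|X\|, which yields
% \mathbb{P}(\|X\| \ge C'rt\,\mathbb{E}\|X\|) \le \exp(-\sigma_{X,F}^{-1}(t\,\mathbb{E}\|X\|)).
```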

Isomorphic embeddings

Remark. If $i \colon F \to \ell_r$ is an isomorphic embedding and $\lambda = \|i\|_{F \to \ell_r}\|i^{-1}\|_{i(F) \to F}$, then we may define another norm on $F$ by $\|x\|' := \|i(x)\|_r / \|i\|_{F \to \ell_r}$. Obviously $(F, \|\cdot\|')$ isometrically embeds in $\ell_r$; moreover $\|x\|' \le \|x\| \le \lambda\|x\|'$ for $x \in F$. Hence the previous theorem gives
$$(\mathbb{E}\|X\|^p)^{1/p} \le \lambda\bigl(\mathbb{E}(\|X\|')^p\bigr)^{1/p} \le Cr\lambda\Bigl(\mathbb{E}\|X\|' + \sup_{\varphi \in F^*, \|\varphi\|' \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}\Bigr) \le Cr\lambda\Bigl(\mathbb{E}\|X\| + \sup_{\varphi \in F^*, \|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}\Bigr).$$

Reduction to finite dimension

Since log-concavity is preserved under linear transformations and, by the Hahn–Banach theorem, any linear functional on a subspace of $\ell_r$ is a restriction of a functional on the whole $\ell_r$ with the same norm, it is enough to prove Theorem 2 for $F = \ell_r$. An easy approximation argument shows that we may consider finite-dimensional spaces $\ell_r^n$. To simplify the notation, for an $n$-dimensional vector $X$ and $p \ge 1$ we write
$$\sigma_{r,X}(p) := \sup_{\|t\|_{r'} \le 1}\Bigl(\mathbb{E}\Bigl|\sum_{i=1}^n t_iX_i\Bigr|^p\Bigr)^{1/p},$$
where $r'$ denotes the Hölder dual of $r$, i.e. $r' = \frac{r}{r-1}$.

Theorem. Let $X$ be a log-concave vector in $\mathbb{R}^n$ and $r \in [2,\infty)$. Then
$$(\mathbb{E}\|X\|_r^p)^{1/p} \le Cr\bigl(\mathbb{E}\|X\|_r + \sigma_{r,X}(p)\bigr) \quad \text{for } p \ge 1.$$

Modified bound for $\ell_r$-norms

The proof of the main result is based on the following estimate.

Theorem. Suppose that $r \in [2,\infty)$ and $X$ is a log-concave $n$-dimensional random vector. Let
$$d_i := (\mathbb{E}X_i^2)^{1/2}, \qquad d := \Bigl(\sum_{i=1}^n d_i^r\Bigr)^{1/r}. \tag{1}$$
Then for $p \ge r$ and $t \ge Cr\log\bigl(\tfrac{d}{\sigma_{r,X}(p)}\bigr)$,
$$\mathbb{E}\Bigl(\sum_{i=1}^n |X_i|^r\mathbf{1}_{\{|X_i| \ge td_i\}}\Bigr)^{p/r} \le \bigl(Cr\sigma_{r,X}(p)\bigr)^p.$$

The modified bound implies the comparison of weak and strong moments in $\ell_r^n$

Since $(\mathbb{E}\|X\|_r^p)^{1/p} \le Cp\,\mathbb{E}\|X\|_r$, we may assume that $p \ge r$. Let $d_i$ and $d$ be defined by (1). Then
$$d^2 = \bigl\|(\mathbb{E}X_i^2)_i\bigr\|_{r/2} \le \mathbb{E}\bigl\|(X_i^2)_i\bigr\|_{r/2} = \mathbb{E}\|X\|_r^2 \le C(\mathbb{E}\|X\|_r)^2.$$
Set $\tilde{p} := \inf\{q \ge p : \sigma_{r,X}(q) \ge d\}$. The modified bound applied with $\tilde{p}$ instead of $p$ and $t = 0$ yields
$$(\mathbb{E}\|X\|_r^p)^{1/p} \le (\mathbb{E}\|X\|_r^{\tilde{p}})^{1/\tilde{p}} \le Cr\sigma_{r,X}(\tilde{p}) = Cr\max\{d, \sigma_{r,X}(p)\} \le Cr\bigl(\mathbb{E}\|X\|_r + \sigma_{r,X}(p)\bigr).$$

Idea of the proof of the modified bound

The random vector $-X$ is also log-concave, has the same values of $d_i$, and $\sigma_{r,-X} = \sigma_{r,X}$. Hence it is enough to show the one-sided bound
$$\mathbb{E}\Bigl(\sum_{i=1}^n X_i^r\mathbf{1}_{\{X_i \ge td_i\}}\Bigr)^{p/r} \le \bigl(Cr\sigma_{r,X}(p)\bigr)^p$$
for $t \ge Cr\log\bigl(d/\sigma_{r,X}(p)\bigr)$. It is easy to reduce to the case when $t \ge Cr$ and $l = p/r$ is a positive integer. For $l = 1, 2, \ldots$ we have
$$\mathbb{E}\Bigl(\sum_{i=1}^n X_i^r\mathbf{1}_{\{X_i \ge td_i\}}\Bigr)^l \le \mathbb{E}\Bigl(\sum_{i=1}^n\sum_{k=0}^{\infty}\bigl(2^{k+1}td_i\bigr)^r\mathbf{1}_{\{X_i \ge 2^ktd_i\}}\Bigr)^l = (2t)^{rl}\sum_{i_1,\ldots,i_l=1}^n\;\sum_{k_1,\ldots,k_l=0}^{\infty} 2^{(k_1+\cdots+k_l)r}d_{i_1}^r\cdots d_{i_l}^r\,\mathbb{P}(B_{i_1,k_1,\ldots,i_l,k_l}),$$
where
$$B_{i_1,k_1,\ldots,i_l,k_l} := \{X_{i_1} \ge 2^{k_1}td_{i_1},\ldots,X_{i_l} \ge 2^{k_l}td_{i_l}\}.$$
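The first inequality in the display is pointwise: on the event $\{2^k td_i \le X_i < 2^{k+1}td_i\}$ the single term $(2^{k+1}td_i)^r$ already dominates $X_i^r$. A quick numerical check of this pointwise domination (a sketch; the parameter values are arbitrary):

```python
def lhs(x, r, t, d):
    # x^r * 1_{x >= t d}
    return x ** r if x >= t * d else 0.0

def rhs(x, r, t, d, kmax=60):
    # sum over k of (2^{k+1} t d)^r * 1_{x >= 2^k t d}
    return sum((2 ** (k + 1) * t * d) ** r for k in range(kmax) if x >= 2 ** k * t * d)

r, t, d = 3, 2.0, 0.7
print(all(lhs(x, r, t, d) <= rhs(x, r, t, d) for x in (0.01 * j for j in range(10_000))))
```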

Idea of the proof of the modified bound II

So we are to show that
$$m(l) := \sum_{k_1,\ldots,k_l=0}^{\infty}\;\sum_{i_1,\ldots,i_l=1}^n 2^{(k_1+\cdots+k_l)r}d_{i_1}^r\cdots d_{i_l}^r\,\mathbb{P}(B_{i_1,k_1,\ldots,i_l,k_l}) \le \Bigl(\frac{Cr\sigma_{r,X}(rl)}{t}\Bigr)^{rl}$$
for $t \ge Cr\max\bigl\{1, \log\bigl(\tfrac{d}{\sigma_{r,X}(rl)}\bigr)\bigr\}$. This is done by dividing the terms in $m(l)$ into a number of groups and estimating each of them.

Crucial technical estimate

Proposition. Let $X$, $r$, $d_i$ and $d$ be as before and $A := \{X \in K\}$, where $K$ is a convex set in $\mathbb{R}^n$ satisfying $0 < \mathbb{P}(A) \le 1/e$. Then for every $t \ge r$,
$$\sum_{i=1}^n \mathbb{E}\bigl(|X_i|^r\mathbf{1}_{A \cap \{|X_i| \ge td_i\}}\bigr) \le C^r\,\mathbb{P}(A)\bigl(\sigma_{r,X}^r(-\log\mathbb{P}(A)) + (dt)^r e^{-t/C}\bigr).$$
The proof is based on the fact that the vector $Y$ distributed as $X$ conditioned on the set $A = \{X \in K\}$, i.e. $\mathbb{P}(Y \in B) = \mathbb{P}(X \in B \cap K)/\mathbb{P}(X \in K)$, is again log-concave if $K$ is convex. To see that $\mathbb{E}Y_i^2$ cannot be large for too many $i$'s we use the Paouris inequality for $X$.

Case $r = \infty$

Recall the general conjecture about the comparison of weak and strong moments:
$$(\mathbb{E}\|X\|^p)^{1/p} \le C\Bigl(\mathbb{E}\|X\| + \sup_{\varphi \in F^*, \|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}\Bigr) \quad \text{for } p \ge 1. \tag{2}$$
Since every separable Banach space embeds in $\ell_\infty$, it is enough to prove (2) in $\ell_\infty^n$. It is known, but under the additional assumption that $X$ is isotropic.

Theorem. Let $X$ be an isotropic log-concave vector in $\mathbb{R}^n$. Then for any $a_1,\ldots,a_n$ and $p \ge 1$,
$$\Bigl(\mathbb{E}\max_i|a_iX_i|^p\Bigr)^{1/p} \le C\Bigl(\mathbb{E}\max_i|a_iX_i| + \max_i\bigl(\mathbb{E}|a_iX_i|^p\bigr)^{1/p}\Bigr).$$
The proof is completely different than in the case of $\ell_r$-norms. It uses exponential concentration of log-concave vectors.

Exponential concentration

Let $\mu$ be a measure on $\mathbb{R}^n$. We say that $\mu$ satisfies exponential concentration with constant $\alpha$ if for any Borel set $A$,
$$\mu(A) \ge \frac{1}{2} \;\Longrightarrow\; \mu(A + \alpha tB_2^n) \ge 1 - e^{-t} \quad \text{for } t \ge 0.$$
The fundamental open problem (the Kannan–Lovász–Simonovits conjecture) states that every isotropic log-concave measure satisfies exponential concentration with a universal $\alpha$. Klartag proved it with $\alpha \le Cn^{1/2-\varepsilon}$ with $\varepsilon \ge 1/30$. This was improved by Eldan (with the use of the result of Guédon–E. Milman) to $\alpha \le Cn^{1/3}\log^{1/2}(n+1)$.

Optimal concentration inequalities

For a probability measure $\mu$ on $\mathbb{R}^n$ define
$$\Lambda_\mu(y) = \log\int e^{\langle y,z\rangle}\,d\mu(z), \qquad \Lambda_\mu^*(x) = \sup_y\bigl(\langle y,x\rangle - \Lambda_\mu(y)\bigr)$$
and $B_\mu(t) = \{x \in \mathbb{R}^n : \Lambda_\mu^*(x) \le t\}$. We say that $\mu$ satisfies the optimal concentration inequality with constant $\alpha$ if for any Borel set $A$,
$$\mu(A) \ge \frac{1}{2} \;\Longrightarrow\; \mu(A + \alpha B_\mu(t)) \ge 1 - e^{-t} \quad \text{for } t \ge 1.$$
Proposition. If the law of an $n$-dimensional random vector $X$ satisfies the optimal concentration inequality with constant $\alpha$, then
$$(\mathbb{E}\|X\|^p)^{1/p} \le \mathbb{E}\|X\| + C\alpha\sup_{\varphi \in F^*, \|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p} \quad \text{for } p \ge 1.$$
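As a worked example (not on the slides): for the standard Gaussian measure $\gamma_n$ all three objects are explicit, and optimal concentration reduces to classical Gaussian concentration with Euclidean enlargement:

```latex
% Standard Gaussian measure \gamma_n: a worked example.
\Lambda_{\gamma_n}(y) = \log\int e^{\langle y,z\rangle}\,d\gamma_n(z) = \tfrac{|y|^2}{2},
\qquad
\Lambda^*_{\gamma_n}(x) = \sup_y\bigl(\langle y,x\rangle - \tfrac{|y|^2}{2}\bigr) = \tfrac{|x|^2}{2}
\quad (\text{supremum at } y = x),
\qquad
B_{\gamma_n}(t) = \{\,|x|^2/2 \le t\,\} = \sqrt{2t}\,B_2^n .
% Hence the optimal concentration inequality for \gamma_n is the classical
% Gaussian concentration inequality with enlargement \alpha\sqrt{2t}\,B_2^n.
```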

Example: Talagrand's two-level concentration

In the case when $\mu = \nu^n$ is the product exponential measure, i.e. the measure with the density $2^{-n}\exp(-\sum_{i \le n}|x_i|)$, it is easy to check that for $t \ge 1$,
$$B_{\nu^n}(t) \sim tB_1^n + \sqrt{t}B_2^n.$$
The optimal concentration inequality in this case is the Talagrand two-level concentration:
$$\nu^n(A) \ge \frac{1}{2} \;\Longrightarrow\; \nu^n\bigl(A + tB_1^n + \sqrt{t}B_2^n\bigr) \ge 1 - e^{-t/C} \quad \text{for } t \ge 1.$$
Optimal concentration inequalities are strictly related to infimum convolution inequalities.
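The two-level geometry of $B_{\nu}(t)$ can be seen numerically already in dimension one: for the symmetric exponential measure, $\Lambda_\nu(y) = -\log(1-y^2)$ for $|y| < 1$, and a grid computation of the Legendre transform shows $\Lambda^*_\nu$ quadratic near the origin and linear at infinity, which is exactly the $\sqrt{t}B_2 + tB_1$ shape. A sketch assuming numpy:

```python
import numpy as np

ys = np.linspace(-0.999, 0.999, 20001)
Lambda = -np.log(1.0 - ys ** 2)   # log-Laplace transform of the symmetric exponential measure

def Lambda_star(x):
    """Legendre transform sup_y (x*y - Lambda(y)), computed on a grid."""
    return float(np.max(x * ys - Lambda))

print(round(Lambda_star(0.2) / (0.2 ** 2 / 4), 2))  # quadratic regime: ratio near 1
print(round(Lambda_star(40.0) / 40.0, 2))           # linear regime: ratio near 0.9
```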

Optimal concentration inequalities: examples

Examples of vectors that satisfy the optimal concentration inequality with a universal constant:
- Gaussian vectors
- vectors with independent log-concave coordinates
- rotationally invariant log-concave vectors
- uniform distributions on $B_r^n$-balls
- log-concave vectors with densities of the form $\exp(-g(\|x\|_r))$, $1 \le r < \infty$, $g \colon [0,\infty) \to (-\infty,\infty]$ convex increasing

Corollary. For all random vectors listed above,
$$(\mathbb{E}\|X\|^p)^{1/p} \le \mathbb{E}\|X\| + C\sup_{\varphi \in F^*, \|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p} \quad \text{for } p \ge 1.$$

Unconditional vectors

We say that a random vector $X = (X_1,\ldots,X_n)$ has unconditional distribution if the distribution of $(\eta_1X_1,\ldots,\eta_nX_n)$ is the same as that of $X$ for any choice of signs $\eta_1,\ldots,\eta_n$.

Theorem. Let $X$ be an $n$-dimensional isotropic, unconditional, log-concave vector and $Y = (Y_1,\ldots,Y_n)$, where $Y_i$ are independent symmetric exponential r.v.'s with variance 1 (i.e. with the density $2^{-1/2}\exp(-\sqrt{2}|x|)$). Then for any norm $\|\cdot\|$ on $\mathbb{R}^n$ and $p \ge 1$,
$$(\mathbb{E}\|X\|^p)^{1/p} \le C\Bigl(\mathbb{E}\|Y\| + \sup_{\|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}\Bigr).$$
The proof is based on the Talagrand two-sided estimate of $\mathbb{E}\|Y\|$ and the Bobkov–Nazarov bound for the joint d.f. of $X$, which implies
$$(\mathbb{E}|\varphi(X)|^p)^{1/p} \le C(\mathbb{E}|\varphi(Y)|^p)^{1/p} \quad \text{for } p \ge 1.$$
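The normalization of the $Y_i$'s can be sanity-checked: a Laplace density $\frac{1}{2b}e^{-|x|/b}$ has variance $2b^2$, so $2^{-1/2}\exp(-\sqrt{2}|x|)$ corresponds to $b = 1/\sqrt{2}$ and variance 1. A crude numerical integration confirming this (a sketch):

```python
import math

# the claimed density of the Y_i's
f = lambda x: 2 ** -0.5 * math.exp(-math.sqrt(2) * abs(x))

dx = 1e-3
xs = [k * dx for k in range(-30_000, 30_001)]
mass = sum(f(x) for x in xs) * dx         # total mass: should be ~1
var = sum(x * x * f(x) for x in xs) * dx  # variance (mean is 0 by symmetry): should be ~1
print(round(mass, 3), round(var, 3))
```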

Unconditional vectors ctd.

Using the easy estimate $\mathbb{E}\|Y\| \le C\log n\,\mathbb{E}\|X\|$ we get:

Corollary. For any $n$-dimensional unconditional log-concave vector $X$, any norm $\|\cdot\|$ on $\mathbb{R}^n$ and $p \ge 1$ one has
$$(\mathbb{E}\|X\|^p)^{1/p} \le C\Bigl(\log n\,\mathbb{E}\|X\| + \sup_{\|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}\Bigr).$$
The Maurey–Pisier result implies $\mathbb{E}\|Y\| \le C\mathbb{E}\|X\|$ in spaces with nontrivial cotype.

Corollary. Let $X$ be as above, $2 \le q < \infty$ and suppose $F = (\mathbb{R}^n, \|\cdot\|)$ has a $q$-cotype constant bounded by $\beta < \infty$. Then
$$(\mathbb{E}\|X\|^p)^{1/p} \le C(q,\beta)\Bigl(\mathbb{E}\|X\| + \sup_{\|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}\Bigr).$$
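The $\log n$ loss is visible already for the $\ell_\infty$-norm: for $n$ i.i.d. variance-1 symmetric exponentials $\mathbb{E}\max_i|Y_i|$ grows like $(\log n)/\sqrt{2}$, while for a standard Gaussian vector $\mathbb{E}\max_i|X_i|$ grows only like $\sqrt{2\log n}$. A Monte Carlo sketch (assuming numpy; the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 1000, 5000
Y = rng.laplace(scale=1 / np.sqrt(2), size=(N, n))  # variance-1 symmetric exponentials
X = rng.standard_normal((N, n))                     # isotropic Gaussian comparison vector
ey = np.abs(Y).max(axis=1).mean()   # roughly (log n + gamma)/sqrt(2) ~ 5.3 for n = 1000
ex = np.abs(X).max(axis=1).mean()   # roughly sqrt(2 log n) up to lower-order terms ~ 3.4
print(round(ey, 2), round(ex, 2), round(ey / ex, 2))
```

The exponential maximum is larger, but only by a factor far below $\log n$, consistent with the corollary.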

Questions

The conjecture
$$(\mathbb{E}\|X\|^p)^{1/p} \le C\Bigl(\mathbb{E}\|X\| + \sup_{\varphi \in F^*, \|\varphi\| \le 1}(\mathbb{E}|\varphi(X)|^p)^{1/p}\Bigr) \quad \text{for } p \ge 1$$
seems to be rather hard in full generality. One may try to consider first some simpler open cases:
- $\ell_r$-norms with $1 \le r < 2$
- Orlicz norms or more general unconditional norms
- unconditional log-concave vectors

It is also not clear if one may improve the Paouris inequality to
$$(\mathbb{E}|X|^p)^{1/p} \le \mathbb{E}|X| + C\sigma_X(p) \quad \text{for } p \ge 1.$$

References

- R. Eldan, Thin shell implies spectral gap up to polylog via a stochastic localization scheme, Geom. Funct. Anal. 23 (2013), 532–569.
- B. Klartag, Power-law estimates for the central limit theorem for convex sets, J. Funct. Anal. 245 (2007), 284–310.
- R. Latała, Weak and strong moments of random vectors, in: Marcinkiewicz Centenary Volume, Banach Center Publ. 95, Polish Acad. Sci. Inst. Math., Warsaw, 2011, 115–121.
- R. Latała, M. Strzelecka, Weak and strong moments of $\ell_r$-norms of log-concave vectors, arXiv:1501.01649.
- R. Latała, J. O. Wojtaszczyk, On the infimum convolution inequality, Studia Math. 189 (2008), 147–187.
- G. Paouris, Concentration of mass on convex bodies, Geom. Funct. Anal. 16 (2006), 1021–1049.
- M. Talagrand, A new isoperimetric inequality and the concentration of measure phenomenon, in: Israel Seminar (GAFA), Lecture Notes in Math. 1469, Springer, Berlin, 1991, 94–124.

Thank you for your attention!