Invariances in spectral estimates
Franck Barthe, Dario Cordero-Erausquin
Paris-Est Marne-la-Vallée, January 2011
Notation

Given a probability measure ν on some Euclidean space, the Poincaré constant c_p(ν) is the best constant such that:
\[ \forall g \in L^2(\nu), \quad \mathrm{Var}_\nu(g) := \int \Big(g - \int g\, d\nu\Big)^2 d\nu \;\le\; c_p(\nu) \int |\nabla g|^2\, d\nu. \]

For a random vector X ∼ µ we write c_p(X) = c_p(µ), i.e. we ask
\[ \mathrm{Var}\big(g(X)\big) \le c_p(X)\, \mathbb{E}\big[ |\nabla g(X)|^2 \big]. \]

For a probability measure µ on R^n with density e^{−V}, or for a random vector X ∼ µ, we define the group of isometries leaving µ (or X) invariant as
\[ O_n(\mu) = O_n(X) := \{ R \in O_n \,;\ RX \sim X \} = \{ R \in O_n \,;\ V \circ R = V \}. \]
Furthermore, for any isometry R ∈ O_n set Fix(R) := {x ∈ R^n ; Rx = x}.
Open problems for log-concave distributions

We say X ∼ µ is a log-concave distribution on R^n if dµ = e^{−V} dx with V convex on R^n. We say it is isotropic if Cov(X) = Id.

KLS conjecture:
\[ \sup_n \ \sup_{\mu \ \text{isotropic log-concave}} c_p(\mu) < +\infty. \]

Variance conjecture:
\[ \sup_n \ \sup_X \ \frac{\mathrm{Var}(|X|^2)}{\mathbb{E}\,|X|^2} < +\infty, \]
where X ranges over isotropic log-concave random vectors in R^n.

These problems are hard, but partial (dimension-dependent) results are known, connected to deep facts about the distribution of mass for log-concave measures and convex bodies in high dimension.

Klartag proved that the variance conjecture holds for the class of unconditional measures.
Hörmander's L²-method

Lemma (Bochner, Hörmander, etc.)
Let V be a C²-smooth function on R^n with Z_V := ∫ e^{−V} < +∞, and introduce dµ(x) = e^{−V(x)} dx / Z_V. Then, for C > 0, the following assertions are equivalent:
1. For every function f ∈ L²(µ), we have
\[ \mathrm{Var}_\mu(f) \le C \int |\nabla f|^2\, d\mu. \]
2. For every (smooth) function u ∈ L²(µ), we have
\[ \frac{1}{C} \int |\nabla u|^2\, d\mu \;\le\; \int D^2V(x)\, \nabla u(x)\cdot\nabla u(x)\, d\mu(x) + \int \|D^2 u(x)\|^2\, d\mu(x). \]

Sketch of the proof. Introduce the differential operator Lu := Δu − ∇V·∇u. L is negative and self-adjoint on L²(µ), with ∫ u Lv dµ = −∫ ∇u·∇v dµ, and ker(L) = {constant functions}.

Crucial integration by parts formula: for all (smooth) u,
\[ \int (Lu)^2\, d\mu = \int D^2V(x)\, \nabla u(x)\cdot\nabla u(x)\, d\mu(x) + \int \|D^2 u(x)\|^2\, d\mu(x). \]
So the right-hand side in (2) is exactly ∫ (Lu)² dµ.

Proof of (2) ⇒ (1): for f ∈ L²(µ) with ∫ f dµ = 0, introduce u ∈ L²(µ) such that f = Lu and use
\[ \int f^2\, d\mu = \int f\, Lu\, d\mu = -\int \nabla f\cdot\nabla u\, d\mu \le \Big(\int |\nabla f|^2 d\mu\Big)^{1/2} \Big(\int |\nabla u|^2 d\mu\Big)^{1/2} \le \Big(\int |\nabla f|^2 d\mu\Big)^{1/2} \Big(C \int (Lu)^2 d\mu\Big)^{1/2}, \]
and ∫ (Lu)² dµ = ∫ f² dµ.

Formally, for the (negative) operator L = Δ − ∇V·∇ on L²(µ):
(1) expresses that −L ≥ 1/C (on the orthogonal of constants);
(2) expresses that (−L)² ≥ (1/C)(−L).

The equivalence also holds for classes of O_n(µ)-invariant functions, because L commutes with such isometries. More precisely, for R ∈ O_n(µ) (i.e. if V is R-invariant), assertion (1) restricted to R-invariant functions f is equivalent to assertion (2) restricted to R-invariant functions u.
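As a sanity check, the integration by parts formula above can be verified numerically in dimension one, where it reads ∫(Lu)² dµ = ∫ V″(u′)² dµ + ∫ (u″)² dµ with Lu = u″ − V′u′. The choices V(x) = x²/2 (standard Gaussian) and u(x) = sin x below are illustrative assumptions, not from the talk.

```python
import numpy as np

# Check, in dimension one, the Bochner integration-by-parts identity
#   ∫ (Lu)^2 dµ = ∫ V''(u')^2 dµ + ∫ (u'')^2 dµ,   Lu = u'' - V'u',
# for the illustrative choice V(x) = x^2/2 (so V' = x, V'' = 1) and u(x) = sin x.

x = np.linspace(-10.0, 10.0, 200_001)   # fine grid; Gaussian tails are negligible
h = x[1] - x[0]
w = np.exp(-x**2 / 2)
w /= w.sum() * h                        # density of µ (normalized Riemann sum)

u1 = np.cos(x)                          # u'
u2 = -np.sin(x)                         # u''
Lu = u2 - x * u1                        # Lu = u'' - V'u'

lhs = (Lu**2 * w).sum() * h
rhs = ((u1**2 + u2**2) * w).sum() * h   # V'' ≡ 1

print(lhs, rhs)                         # both ≈ 1 for this choice of V and u
```

For these choices both sides equal 1 exactly (a short Gaussian-moment computation), which the quadrature reproduces.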
Invariances

We write P_F for the orthogonal projection onto a subspace F. We will consider distributions µ on R^n satisfying (HYP):

(HYP) There exist R_1, …, R_m ∈ O_n(µ) and c_1, …, c_m > 0 such that, setting E_i := Fix(R_i)^⊥, we have
\[ \sum_{i=1}^m c_i\, P_{E_i} = \mathrm{Id}, \quad \text{i.e.} \quad \sum_{i=1}^m c_i\, |P_{E_i} v|^2 = |v|^2 \quad \forall v \in \mathbb{R}^n. \]

This covers a large class. Consider the case of hyperplane symmetries (= reflections): S_u(x) = x − 2(x·u)u for a unit vector u. Note that Fix(S_u) = u^⊥, so Fix(S_u)^⊥ = Ru.

(HYP') There exist reflections S_{u_1}, …, S_{u_m} ∈ O_n(µ) such that ∩_{j≤m} Fix(S_{u_j}) = {0}.

Then we have: (HYP') ⇒ (HYP).

Examples:
- If µ is unconditional: take S_{e_1}, …, S_{e_n} ∈ O_n(µ) and note Σ_i e_i ⊗ e_i = Id.
- If µ has the invariances of the simplex Δ_n, i.e. µ is invariant under permutations of the coordinates (note that O_n(Δ_n) = S_n), then we have (HYP') and (HYP): setting u_{ij} = (e_i − e_j)/√2 for i ≠ j, we have S_{u_{ij}} ∈ O_n(µ) and
\[ \frac{1}{n} \sum_{i \ne j} u_{ij} \otimes u_{ij} = \mathrm{Id} \quad \text{on the hyperplane } \Big\{ \textstyle\sum_i x_i = 0 \Big\}. \]
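Both decompositions of the identity can be checked numerically. The 1/n normalisation in the simplex case is our reading of the slide (the printed constant is garbled); the check below confirms it on the hyperplane {Σ x_i = 0}.

```python
import numpy as np

n = 5
I = np.eye(n)

# Unconditional case: sum_i e_i ⊗ e_i = Id on R^n.
S1 = sum(np.outer(I[i], I[i]) for i in range(n))
assert np.allclose(S1, I)

# Simplex case: u_ij = (e_i - e_j)/sqrt(2);  (1/n) sum_{i != j} u_ij ⊗ u_ij
# acts as the identity on the hyperplane E = {x : sum_i x_i = 0}.
S2 = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            u = (I[i] - I[j]) / np.sqrt(2.0)
            S2 += np.outer(u, u)
S2 /= n

v = np.array([1.0, 2.0, -3.0, 4.0, -4.0])   # coordinates sum to 0, so v ∈ E
assert np.allclose(S2 @ v, v)               # identity on E
assert np.allclose(S2 @ np.ones(n), 0.0)    # kills the direction (1,...,1)
print("both decompositions verified for n =", n)
```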
Restrictions

For F ⊂ R^n and E = F^⊥, and µ a measure on R^n with density e^{−V}, define the measure µ_{x,E} to be the normalized restriction of µ to x + E = P_F x + E, i.e. the probability measure on E given by
\[ d\mu_{x,E}(y) := \frac{ e^{-V(y + P_F x)}\, dy }{ \int_E e^{-V(z + P_F x)}\, dz }, \qquad y \in E. \]
In other words, if X ∼ µ, then the conditional law of X − P_F X given P_F X = P_F x is µ_{x,E}.

Theorem
Let X ∼ µ be a log-concave probability measure on R^n verifying (HYP). Then, for every smooth function f : R^n → R such that f ∘ R_i = f for all i ≤ m, we have
\[ \mathrm{Var}_\mu(f) \le \mathbb{E}\Big[ \sum_{i=1}^m c_p\big( X \mid P_{E_i^{\perp}} X \big)\, c_i\, |P_{E_i} \nabla f(X)|^2 \Big] = \int \sum_{i=1}^m c_p(\mu_{x,E_i})\, c_i\, |P_{E_i} \nabla f(x)|^2\, d\mu(x), \]
where c_p(X | P_{E_i^⊥} X) denotes the Poincaré constant of the conditional law of X given its projection onto E_i^⊥, i.e. c_p(µ_{x,E_i}).
Ideas of the argument

We need to understand how the L² argument combines with invariances. This is done by using suitable restrictions and Fubini. Let us explain the argument in the case where we work with invariance under a reflection S_θ.

If f is S_θ-invariant, then so is the u such that f = Lu. But this means that ∂_θ u is odd in the direction θ.

So on every line in the direction θ we have E[(∂_θ u)(X) | P_{θ^⊥} X] = 0, and therefore
\[ \mathbb{E}\big[ (\partial_\theta u)^2(X) \mid P_{\theta^{\perp}} X \big] \le c_p\big( X \mid P_{\theta^{\perp}} X \big)\, \mathbb{E}\big[ (\partial^2_{\theta\theta} u)^2(X) \mid P_{\theta^{\perp}} X \big]. \]
Therefore we have
\[ \mathbb{E}\Big[ \frac{1}{c_p(X \mid P_{\theta^{\perp}} X)}\, (\partial_\theta u)^2(X) \Big] \le \mathbb{E}\big[ (\partial^2_{\theta\theta} u)^2(X) \big]. \]

Putting all the invariances together, and using the decomposition of the identity to decompose the norms, we get, because of log-concavity (D²V ≥ 0):
\[ \int (Lu)^2\, d\mu \ \ge\ \int \|D^2 u\|^2\, d\mu \ \ge\ \sum_i c_i \int \|P_{E_i}\, D^2 u(x)\, P_{E_i}\|^2\, d\mu(x) \ \ge\ \sum_i c_i \int \frac{1}{c_p(\mu_{x,E_i})}\, |P_{E_i} \nabla u(x)|^2\, d\mu(x). \]
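For completeness, the norm decompositions invoked above can be spelled out as follows (our reconstruction, using only Σ c_i P_{E_i} = Id, the idempotence P_{E_i}² = P_{E_i} and the symmetry of A = D²u):

```latex
\|A\|_{HS}^{2}
  = \operatorname{tr}\Big( A \Big(\sum_{i} c_i P_{E_i}\Big) A \Big)
  = \sum_{i} c_i\, \|A P_{E_i}\|_{HS}^{2}
  \;\ge\; \sum_{i} c_i\, \|P_{E_i} A P_{E_i}\|_{HS}^{2},
\qquad
|\nabla u|^{2} = \sum_{i} c_i\, |P_{E_i} \nabla u|^{2}.
```

The middle quantity ‖P_{E_i} D²u P_{E_i}‖² is exactly the within-slice Hessian norm to which the conditional Poincaré inequality on each slice x + E_i is applied.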
Consequences

Recall the result: for f : R^n → R such that f ∘ R_i = f for all i ≤ m, we have
\[ \mathrm{Var}_\mu(f) \le \int \sum_{i=1}^m c_p(\mu_{x,E_i})\, c_i\, |P_{E_i} \nabla f(x)|^2\, d\mu(x). \]

Variance estimates (i.e. for f(x) = |x|²) for log-concave distributions. We can use a result of Bobkov/KLS to control the spectral gap of the conditioned measures:
\[ c_p(\nu) \le c \int |x|^2\, d\nu(x). \]

Examples of results:
- If µ is log-concave, isotropic and verifies (HYP), then
\[ \mathrm{Var}_\mu(|x|^2) \le C \sum_{i=1}^m c_i \dim(E_i)^2 \le C\, n\, \max_{i \le m} \dim(E_i). \]
- If µ is log-concave (isotropic) and satisfies (HYP'), then µ satisfies the variance conjecture: Var_µ(|x|²) ≤ C n.

What about the spectral gap? For this, we present two ways of proceeding:
- Make a symmetrization argument to extend our inequality to every function f. This requires the spectral gap of the Cayley graph given by the group of isometries and a family of generators.
- Analyze the invariance properties of eigenfunctions.
Examples of results for spectral gap

Theorem
Let X ∼ µ be a log-concave measure and R_1, …, R_m ∈ O_n(µ) such that ∩_{i≤m} Fix(R_i) = {0}. Then, setting E_i = Fix(R_i)^⊥, we have
\[ c_p(\mu) \le \max_{i \le m}\ \sup_{x \in \mathrm{Fix}(R_i)} c_p(\mu_{x,E_i}) \le c\, \max_{i \le m}\ \sup_x\ \mathbb{E}\big[ |P_{E_i} X|^2 \,\big|\, P_{\mathrm{Fix}(R_i)} X = x \big]. \]

Then, one can use the deep result of E. Milman on the stability of the spectral gap, and a cut-off to a bounded convex domain. Problem: we may lose the invariance. We also need to control the number of isometries we need.

Theorem
Under the same assumptions, suppose also that each R_i acts as a permutation on the set {E_1, …, E_m}. Then
\[ c_p(\mu) \le c\, \log(m)^2\, \max_{i \le m} \dim(E_i). \]

Theorem
If S_{u_1}, …, S_{u_m} ∈ O_n(µ) are reflections such that ∩_j Fix(S_{u_j}) = {0}, then
\[ c_p(\mu) \le c\, \log(n)^2. \]

In the sequel, we will use the following particular case of the result:

Theorem
If X ∼ µ is a log-concave measure and S_{u_1}, …, S_{u_m} ∈ O_n(µ) are reflections such that ∩_j Fix(S_{u_j}) = {0}, then
\[ c_p(\mu) \le \max_{i \le m}\ \sup_{x \perp u_i} c_p(\mu_{x, \mathbb{R}u_i}). \]
Conditioned spin system

Here µ is a probability measure on R having a spectral gap. Then c_p(µ^{⊗n}) = c_p(µ).

For n ≥ 1 and ρ ∈ R, let µ_{n,ρ} be the measure µ^{⊗n} conditioned (i.e. restricted and renormalized) to the affine hyperplane Σ_{i=1}^n x_i = ρ.

Question: is c_p(µ_{n,ρ}) uniformly bounded (in n and ρ)? (For many classes of measures it is enough to consider µ_{n,0}.)

What is known? Write e^{−V} for the density of µ on R.

Easy: if V″ ≥ c > 0, then the Bakry–Émery criterion applies to µ_{n,ρ}.

Much more difficult: assume µ has density e^{−φ−ψ}.
- Landim–Panizo–Yau (see also Chafaï, Grunewald–Otto–Reznikoff–Villani): if φ(x) = x²/2 and ‖ψ‖_∞ < +∞, ‖ψ′‖_∞ < +∞, then OK: sup_{n,ρ} c_p(µ_{n,ρ}) < +∞.
- Caputo ('03): if φ″ ≥ c > 0 and ‖ψ‖_∞ < +∞, ‖ψ′‖_∞ < +∞, AND moreover some further growth condition on ψ holds, then OK: sup_{n,ρ} c_p(µ_{n,ρ}) < +∞.

This case was recently completely solved by …
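For intuition, c_p(µ_{2,ρ}) can be estimated numerically for a given V. The sketch below (our own construction, not part of the talk) uses the standard ground-state transform: for a one-dimensional measure dν = e^{−W(t)} dt / Z, the operator −L is unitarily equivalent to the Schrödinger operator H = −d²/dt² + (W′²/4 − W″/2), whose spectral gap equals 1/c_p(ν). For the illustrative V(x) = x²/2, the conditioned measure has W(t) = t² + const, a Gaussian of variance 1/2, so c_p = 1/2 for every ρ.

```python
import numpy as np

# Estimate c_p(nu) for dnu = e^{-W} dt / Z via the ground-state transform:
# the spectral gap of H = -d^2/dt^2 + (W'^2/4 - W''/2) equals 1/c_p(nu).

def poincare_constant(Wp, Wpp, L=8.0, N=1200):
    """Finite-difference estimate of c_p for dnu = e^{-W} dt / Z on [-L, L]."""
    t = np.linspace(-L, L, N)
    h = t[1] - t[0]
    Veff = Wp(t)**2 / 4.0 - Wpp(t) / 2.0
    H = np.diag(2.0 / h**2 + Veff)            # 3-point Laplacian + potential
    off = -np.ones(N - 1) / h**2
    H += np.diag(off, 1) + np.diag(off, -1)   # Dirichlet boundary conditions
    ev = np.linalg.eigvalsh(H)
    return 1.0 / (ev[1] - ev[0])              # c_p = 1 / spectral gap

# mu_{2,rho} for V(x) = x^2/2:  W(t) = V(rho/2+t) + V(rho/2-t) = t^2 + const,
# so W' = 2t and W'' = 2, independently of rho.
cp = poincare_constant(lambda t: 2.0 * t, lambda t: 2.0 + 0.0 * t)
print(cp)   # ≈ 0.5
```

The computed value matches the exact Poincaré constant 1/2 of a Gaussian of variance 1/2, uniformly in ρ, as the Bakry–Émery case predicts.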
Our results

Theorem
Assume µ = e^{−V} is log-concave. Then
\[ \sup_{n,\rho}\, c_p(\mu_{n,\rho}) \le c\, \sup_\rho\, c_p(\mu_{2,\rho}) \le c\, \sup_{a \in \mathbb{R}} \Big( \int e^{-[V(a+t)+V(a-t)-2V(a)]}\, dt \Big)^{2}. \]

Example of application: V(a+t) + V(a−t) − 2V(a) ≥ c(t) with ∫ e^{−c(t)} dt < +∞. For instance: V(t) = |t|^β + ψ(t) with β ≥ 2 and ψ small. This also gives counter-examples for V(t) = |t|^β with β < 2.

Theorem
Assume µ = e^{−c(x²)−ψ(x)} with c convex and ψ small. Then
\[ \sup_{n,\rho}\, c_p(\mu_{n,\rho}) < +\infty. \]
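The dichotomy at β = 2 can be seen numerically (our illustrative sketch): for V(t) = |t|^β, compute I(a) = ∫ exp(−[V(a+t) + V(a−t) − 2V(a)]) dt. For β = 2 the second difference is 2t², so I(a) = √(π/2) for all a; for β = 1 it vanishes on |t| ≤ |a|, so I(a) ≥ 2|a| is unbounded, in line with the counter-examples for β < 2.

```python
import numpy as np

# I(a) = ∫ exp(-[V(a+t) + V(a-t) - 2V(a)]) dt  for  V(t) = |t|^beta.
t = np.linspace(-200.0, 200.0, 400_001)
h = t[1] - t[0]

def I(a, beta):
    V = lambda s: np.abs(s)**beta
    return np.sum(np.exp(-(V(a + t) + V(a - t) - 2 * V(a)))) * h

print(I(0.0, 2), I(50.0, 2))   # both ≈ sqrt(pi/2) ≈ 1.2533: bounded in a
print(I(0.0, 1), I(50.0, 1))   # ≈ 1 and ≈ 101: grows linearly in a
```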
Argument

The density of µ^{⊗n} is e^{−V(x_1)−V(x_2)−⋯−V(x_n)}, and so its restriction to {Σ x_i = ρ} has the symmetries of the simplex. So we know that, setting u_{i,j} = (e_i − e_j)/√2, we have
\[ c_p(\mu_{n,\rho}) \le c\, \sup_{i \ne j}\ \sup_{z \perp u_{i,j}} c_p\big( (\mu_{n,\rho})_{z,\, \mathbb{R}u_{i,j}} \big). \]

Fix z ⊥ u_{1,2} (so z_1 = z_2). The density of (µ_{n,ρ})_{z,Ru_{1,2}} on z + Ru_{1,2} is e^{−V(z_1 + t/√2) − V(z_1 − t/√2) − V(z_3) − ⋯ − V(z_n)}, renormalized. So
\[ d(\mu_{n,\rho})_{z,\mathbb{R}u_{1,2}} = \frac{ e^{-V(z_1 + t/\sqrt{2}) - V(z_1 - t/\sqrt{2})}\, dt }{ \int e^{-V(z_1 + s/\sqrt{2}) - V(z_1 - s/\sqrt{2})}\, ds }. \]
So indeed, up to a fixed dilation, it is of the form µ_{2,ρ'} with ρ' = 2z_1.
Argument

Estimation of c_p(µ_{2,ρ}). We are working with measures on R of the form
\[ d\nu(t) = e^{-V(a+t)-V(a-t)}\, \frac{dt}{Z}. \]
These are even log-concave measures on R, and by a result of Bobkov:
\[ c_p(\nu) \le c \int t^2\, d\nu(t). \]

Classical fact for f : R → R_+ even and log-concave:
\[ \frac{1}{12} \Big( \int f \Big)^{3} \le f(0)^2 \int t^2 f(t)\, dt \le \frac{1}{2} \Big( \int f \Big)^{3}. \]

So we have:
\[ c_p(\nu) \le c\, \Big( \int e^{-[V(a+t)+V(a-t)-2V(a)]}\, dt \Big)^{2}. \]
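The constants 1/12 and 1/2 in the classical fact above are our reconstruction of the garbled slide (1/12 is attained by indicators of symmetric intervals, 1/2 by e^{−|t|}); the check below confirms the two-sided bound on a few even log-concave densities.

```python
import numpy as np

# Ratio f(0)^2 * ∫ t^2 f(t) dt / (∫ f)^3, claimed to lie in [1/12, 1/2]
# for every even log-concave f : R -> R_+.
t = np.linspace(-60.0, 60.0, 1_200_001)
h = t[1] - t[0]

def ratio(f):
    vals = f(t)
    return f(0.0)**2 * np.sum(t**2 * vals) * h / (np.sum(vals) * h)**3

r_gauss = ratio(lambda s: np.exp(-s**2 / 2))    # exact value 1/(2*pi) ≈ 0.159
r_exp   = ratio(lambda s: np.exp(-np.abs(s)))   # exact value 1/2 (extremal)
r_quart = ratio(lambda s: np.exp(-s**4))

for r in (r_gauss, r_exp, r_quart):
    assert 1/12 - 1e-5 <= r <= 1/2 + 1e-5
print(r_gauss, r_exp, r_quart)
```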