Empirical Processes and random projections
B. Klartag, S. Mendelson

School of Mathematics, Institute for Advanced Study, Princeton, NJ 08540, USA.
Institute of Advanced Studies, The Australian National University, Canberra, ACT 0200, Australia.

Abstract

In this note we establish some bounds on the supremum of certain empirical processes indexed by sets of functions with the same $L_2$ norm. We present several geometric applications of this result, the most important of which is a sharpening of the Johnson-Lindenstrauss embedding Lemma. Our results apply to a large class of random matrices, as we only require that the matrix entries have a subgaussian tail.

1 Introduction

The goal of this article is to present bounds on the supremum of some empirical processes and to use these bounds in certain geometric problems. The original motivation for this study was the following problem: let $X_1,\ldots,X_k$ be independent random vectors in $\mathbb{R}^n$ and let $\Gamma : \mathbb{R}^n \to \mathbb{R}^k$ be the random operator
$$\Gamma v = \sum_{i=1}^k \langle X_i, v\rangle\, e_i,$$
where $\{e_1,\ldots,e_k\}$ is the standard orthonormal basis in $\mathbb{R}^k$. Assume for simplicity that $\mathbb{E}\langle X_i, t\rangle^2 = 1$ for any $i$ and any $t \in S^{n-1}$, where $S^{n-1}$ denotes the Euclidean sphere in $\mathbb{R}^n$. Given a subset $T \subset S^{n-1}$, we ask whether the random operator $\frac{1}{\sqrt k}\Gamma : \ell_2^n \to \ell_2^k$ almost preserves the norm of all elements of $T$. To ensure that $\frac{1}{\sqrt k}\Gamma$ is almost norm preserving on $T$ it suffices to analyze the supremum of the random variables
$$Z_t = \frac{1}{k}\|\Gamma t\|_{\ell_2^k}^2 - 1,$$
and to show that with positive probability, $\sup_{t\in T}|Z_t|$ is sufficiently small. An important example is when the random vector $X_i = (\xi_1^i,\ldots,\xi_n^i)$, where the $(\xi_j^i)$ are independent random variables with expectation 0, variance 1 and a subgaussian tail; thus $\Gamma$ is a random $k\times n$ subgaussian matrix. A standard concentration argument applied to each $Z_t$ individually shows that if $|T| = n$ and if $k \geq c\log n/\varepsilon^2$, then with high probability, for every $t\in T$,
$$1-\varepsilon \leq \frac{1}{\sqrt k}\|\Gamma t\|_{\ell_2^k} \leq 1+\varepsilon,$$
and thus $\frac{1}{\sqrt k}\Gamma$ almost preserves the norm of all the elements in $T$.
(Footnotes: Supported by NSF grant DMS-098 and the Bell Companies Fellowship. Supported by an Australian Research Council Discovery grant.)

This simple fact is the basis of the celebrated Johnson-Lindenstrauss Lemma [7], which analyzes the ability to almost isometrically embed subsets of $\ell_2$ in a
Hilbert space of a much smaller dimension; that is, for $A \subset \ell_2$, to find a mapping $f : A \to \ell_2^k$ such that for any $s,t\in A$,
$$(1-\varepsilon)\|s-t\|_{\ell_2} \leq \|f(s)-f(t)\|_{\ell_2^k} \leq (1+\varepsilon)\|s-t\|_{\ell_2}. \qquad (1)$$
We will call such a mapping an $\varepsilon$-isometry of $A$. In [7], Johnson and Lindenstrauss proved the following:

Theorem 1.1. There exists an absolute constant $c$ for which the following holds. If $A \subset \ell_2$, $|A| = n$ and $k = c\log n/\varepsilon^2$, there exists an $\varepsilon$-isometry $f : \ell_2 \to \ell_2^k$.

The actual proof yields a little more than this statement, from which the connection to the problem we are interested in should become clearer. By considering the set of normalized differences
$$T = \Big\{ \frac{x_i - x_j}{\|x_i - x_j\|} : i \neq j \Big\},$$
if one can find a linear mapping which almost preserves the norms of elements in $T$, this mapping would be the desired $\varepsilon$-isometry; and indeed, a random $k\times n$ subgaussian matrix is such a mapping. Of course, the complexity parameter for $T$ in the Johnson-Lindenstrauss Lemma is rather unsatisfying - the logarithm of the cardinality of the set $T$. One might ask if there is another, sharper complexity parameter that could be used in this case.

Let us consider a more general situation. Let $(T,d)$ be a metric space, let $F_T = \{f_t : t\in T\} \subset L_2(\mu)$ be a set of functions on a probability space $(\Omega,\mu)$ and denote $F_T^2 = \{f^2 : f\in F_T\}$. Given $X_1,\ldots,X_k$, independent random variables distributed according to $\mu$, let $\mu_k = \frac1k\sum_{i=1}^k \delta_{X_i}$ be the empirical (uniform) probability measure supported on $X_1,\ldots,X_k$; i.e. $\mu_k$ is a random discrete measure. For any class of functions $F \subset L_2(\mu)$, set
$$\|\mu_k - \mu\|_F = \sup_{f\in F}\Big|\int f\,d\mu_k - \int f\,d\mu\Big| = \sup_{f\in F}\Big|\frac1k\sum_{i=1}^k f(X_i) - \mathbb{E}f\Big|,$$
and note that $\|\mu_k-\mu\|_F$ is a random variable. Our goal is to find a bound for $\|\mu_k-\mu\|_{F_T^2}$ that holds with positive probability, under the assumption that $\int f^2\,d\mu = 1$ for any $f\in F_T$. This bound should depend on the geometry of $F_T$ and, under some additional mild assumptions, reflect the metric structure of $(T,d)$.
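As an aside (this sketch is not part of the original paper), the Johnson-Lindenstrauss statement above is easy to probe numerically: draw a Gaussian $k\times N$ matrix with $k$ of order $\log n/\varepsilon^2$ and measure the worst pairwise distortion over a random point set. The constant 8 and all dimensions below are arbitrary illustrative choices, not the constant $c$ of the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, eps = 200, 1000, 0.5
k = int(8 * np.log(n) / eps**2)               # k ~ c log(n)/eps^2, c = 8 chosen ad hoc

A = rng.standard_normal((n, N))                # n points in R^N
G = rng.standard_normal((k, N)) / np.sqrt(k)   # f(x) = Gx, a random linear map

def max_distortion(X, Y):
    """Largest relative change of a pairwise distance under the map X[i] -> Y[i]."""
    worst = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            d0 = np.linalg.norm(X[i] - X[j])
            d1 = np.linalg.norm(Y[i] - Y[j])
            worst = max(worst, abs(d1 / d0 - 1))
    return worst

print(max_distortion(A, A @ G.T))   # typically well below eps for this k
```

With these parameters the observed distortion is comfortably below $\varepsilon$; shrinking $k$ makes it grow like $\sqrt{\log n / k}$.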
One possible way of bounding $\|\mu_k-\mu\|_F$, which is almost sharp, is based on a symmetrization argument due to Giné and Zinn [5]: for every set $F$,
$$\mathbb{E}\|\mu_k-\mu\|_F \leq \frac{C}{k}\,\mathbb{E}_X\,\mathbb{E}_g \sup_{f\in F}\Big|\sum_{i=1}^k g_i f(X_i)\Big|,$$
where $g_1,\ldots,g_k$ are standard independent gaussian random variables and $C > 0$ is an absolute constant. Let
$$P_{X_1,\ldots,X_k}F = \{(f(X_1),\ldots,f(X_k)) : f\in F\} \subset \mathbb{R}^k$$
be a random coordinate projection of $F$. Fixing such a random projection, one needs to bound the expectation of the gaussian process indexed by that projection. To that end, let us remind the reader of the definition of the $\gamma$-functionals (see [18]):

Definition 1.2. For a metric space $(T,d)$ define
$$\gamma_\alpha(T,d) = \inf \sup_{t\in T} \sum_{s=0}^{\infty} 2^{s/\alpha}\, d(t, T_s),$$
where the infimum is taken with respect to all sequences of subsets $T_s \subset T$ with cardinality $|T_s| \leq 2^{2^s}$ and $|T_0| = 1$. If the metric is clear we denote the $\gamma$-functionals by $\gamma_\alpha(T)$.

By the celebrated majorizing measures Theorem (see the discussion before Theorem 2.1 for more details), if $\{X_t : t\in T\}$ is a gaussian process, and $d_2(s,t) = (\mathbb{E}|X_s - X_t|^2)^{1/2}$ is its covariance structure, then
$$c_1\gamma_2(T,d_2) \leq \mathbb{E}\sup_{t\in T} X_t \leq c_2\gamma_2(T,d_2),$$
where $c_1, c_2 > 0$ are absolute constants. Observe that for every $(X_1,\ldots,X_k)$ the process $\big\{\frac{1}{\sqrt k}\sum_{i=1}^k g_i f(X_i) : f\in F\big\}$ is gaussian and its covariance structure satisfies
$$\mathbb{E}_g\Big|\frac{1}{\sqrt k}\sum_{i=1}^k g_i f_1(X_i) - \frac{1}{\sqrt k}\sum_{i=1}^k g_i f_2(X_i)\Big|^2 = \|f_1 - f_2\|_{L_2(\mu_k)}^2.$$
Thus,
$$\mathbb{E}\|\mu_k - \mu\|_F \leq \frac{C}{\sqrt k}\,\mathbb{E}_X\, \gamma_2\big(P_{X_1,\ldots,X_k}F,\; L_2(\mu_k)\big),$$
and in particular this general bound holds for the set $F_T^2$. Unfortunately, although this bound is almost sharp and is given solely in terms of $\gamma_2$, it is less than satisfactory for our purposes, as it involves averaging $\gamma_2$ of random coordinate projections of $F$ with respect to the random $L_2(\mu_k)$ structure, and thus could be rather hard to estimate. On the other hand, if one wishes to estimate the expectation of the supremum of a general empirical process $\mathbb{E}\|\mu_k-\mu\|_F$ with a more accessible geometric parameter of the set $F$ which does not depend on the random coordinate structure of $F$, the best that one can hope for in general is a bound which is a combination of $\gamma_1$ and $\gamma_2$. This too follows from Talagrand's generic chaining approach [18] (see Theorem 2.1 and Lemma 3.2 here). The main result we present here is that if a process is indexed by a set of functions with the same $L_2$ norm, then with non-zero probability the $\gamma_1$ part of the bound can be removed.
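To make Definition 1.2 concrete (this sketch is not in the original paper), one can evaluate the chaining sum for an explicit, greedily chosen sequence of subsets of a finite set in Euclidean space. Since the definition takes an infimum over all admissible sequences $(T_s)$, the greedy farthest-point choice below only yields an upper bound on $\gamma_\alpha$.

```python
import numpy as np
from itertools import count

def gamma_alpha_upper(points, alpha=2):
    """Upper bound on gamma_alpha(T, d) for the Euclidean metric, via greedy nets.

    Picks nested subsets T_s with |T_0| = 1 and |T_s| <= 2^(2^s) by
    farthest-point sampling, then evaluates
        sup_t  sum_s 2^(s/alpha) * d(t, T_s).
    The infimum in Definition 1.2 runs over all admissible sequences,
    so this greedy choice gives only an upper bound.
    """
    T = np.asarray(points, dtype=float)
    n = len(T)
    # farthest-point ordering: every prefix is a reasonable covering set
    order = [0]
    dist = np.linalg.norm(T - T[0], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(dist))
        order.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(T - T[nxt], axis=1))
    total = np.zeros(n)
    for s in count():
        m = 1 if s == 0 else min(n, 2 ** (2 ** s))
        Ts = T[order[:m]]
        # d(t, T_s) for every t in T
        d_to_Ts = np.min(np.linalg.norm(T[:, None, :] - Ts[None, :, :], axis=2), axis=1)
        total += 2 ** (s / alpha) * d_to_Ts
        if m == n:   # T_s = T from here on, so all further terms vanish
            return float(np.max(total))

rng = np.random.default_rng(1)
T = rng.standard_normal((64, 5))
print(gamma_alpha_upper(T))
```

Because $|T_s|$ grows doubly exponentially, only $O(\log\log|T|)$ scales contribute for a finite set, which is what makes the sum cheap to evaluate.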
In some sense, the process behaves as if it were a purely subgaussian process, rather than a combination of a subgaussian and a sub-exponential one.
To formulate the exact result, we require the notion of the Orlicz norms $\psi_p$. Let $X$ be a random variable and define the $\psi_p$ norm of $X$ as
$$\|X\|_{\psi_p} = \inf\Big\{ C > 0 : \mathbb{E}\exp\Big(\frac{|X|^p}{C^p}\Big) \leq 2 \Big\}.$$
A standard argument shows that if $X$ has a bounded $\psi_p$ norm, then the tail of $X$ decays faster than $2\exp(-u^p/\|X\|_{\psi_p}^p)$ [19]. In particular, for $p = 2$, it means that $X$ has a subgaussian tail.

Theorem 1.3. Let $(\Omega,\mu)$ be a probability space and let $X, X_1, X_2,\ldots,X_k$ be independent random variables distributed according to $\mu$. Set $T$ to be a collection of functions, such that for every $f\in T$, $\mathbb{E}f^2(X) = \|f\|_{L_2}^2 = 1$ and $\|f\|_{\psi_2} \leq \beta$. Define the random variable
$$Z_f = \frac1k\sum_{i=1}^k f^2(X_i) - \|f\|_{L_2}^2.$$
Then for any $e^{-c\gamma_2^2(T,\psi_2)} < \delta < 1$, with probability larger than $1-\delta$,
$$\sup_{f\in T}|Z_f| \leq c(\delta,\beta)\,\frac{\gamma_2(T,\psi_2)}{\sqrt k},$$
where $c > 0$ is an absolute constant and $c(\delta,\beta)$ depends solely on $\delta, \beta$.

The three applications we present have a geometric flavor. The first, which motivated our study, is concerned with random projections and follows almost immediately from Theorem 1.3.

Theorem 1.4. For every $\beta > 0$ there exists a constant $c(\beta)$ for which the following holds. Let $T \subset S^{n-1}$ be a set and let $\Gamma : \mathbb{R}^n \to \mathbb{R}^k$ be a random matrix whose entries are independent, identically distributed random variables, that satisfy $\mathbb{E}\Gamma_{i,j} = 0$, $\mathbb{E}\Gamma_{i,j}^2 = 1$ and $\|\Gamma_{i,j}\|_{\psi_2} \leq \beta$. Then, with probability larger than $1/2$, for any $x\in T$ and any $\varepsilon \geq c(\beta)\gamma_2(T)/\sqrt k$,
$$1-\varepsilon \leq \frac{1}{\sqrt k}\|\Gamma x\|_{\ell_2^k} < 1+\varepsilon.$$

In the case of a Gaussian projection, our result was proved by Gordon (Corollary 2.3 in [6]). However, that proof uses the special structure of Gaussian variables and does not seem to generalize to arbitrary subgaussian random variables. Note that for a set $T \subset S^{n-1}$ of cardinality $n$, $\gamma_2(T) \leq c\sqrt{\log n}$. In particular, Theorem 1.4 improves the Johnson-Lindenstrauss embedding result for any random operator with i.i.d. $\psi_2$ entries which are centered and have variance 1. From here on we shall refer to such an operator as a $\psi_2$ operator. The novelty in Theorem 1.4 is that the $\sqrt{\log n}$ term can be improved to $\gamma_2(T)$ in the general case, for any $\psi_2$ operator.
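A numerical illustration of the content of Theorem 1.4 (not part of the original paper): random signs are centered, variance-1 and bounded, hence subgaussian, so a $\pm 1$ matrix with the $1/\sqrt k$ normalization should almost preserve the norm of every point of a moderate finite subset of the sphere. All dimensions below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 500, 300, 50             # ambient dimension, target dimension, |T|

# T: m random points on the unit sphere S^{n-1}
T = rng.standard_normal((m, n))
T /= np.linalg.norm(T, axis=1, keepdims=True)

# Gamma: k x n matrix of independent signs -- centered, variance 1,
# bounded, hence subgaussian (a "psi_2 operator" in the text's language)
Gamma = rng.choice([-1.0, 1.0], size=(k, n))

norms = np.linalg.norm(T @ Gamma.T, axis=1) / np.sqrt(k)
eps = float(np.max(np.abs(norms - 1)))
print(eps)    # decays like sqrt(log|T|/k) as k grows
```

For a set this small the observed $\varepsilon$ is tiny; the theorem's point is that for structured infinite sets the right scale is $\gamma_2(T)/\sqrt k$ rather than $\sqrt{\log|T|/k}$.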
The second application that follows from the theorem is an estimate on the so-called Gelfand numbers of a convex, centrally symmetric body, that is, on the ability to find sections of the body of a small Euclidean diameter, a question which is also known as the low $M^*$-estimate. Indeed, with a similar line of reasoning to the one used in [11, 6, 9], where such an estimate on the Gelfand numbers was obtained using a random orthogonal projection or a gaussian operator, one can establish the next corollary.

Corollary 1.5. There exists an absolute constant $c$ for which the following holds. Let $V \subset \mathbb{R}^n$ be a convex, centrally symmetric body, set $k \leq n$ and let $\Gamma$ be a random $k\times n$ $\psi_2$ matrix. Then, with probability at least $1/2$,
$$\mathrm{diam}(V \cap \ker\Gamma) \leq c\,\frac{\gamma_2(V)}{\sqrt k}.$$

Let us point out that this corollary does not use the full strength of our result, as its proof only requires that for $\rho \geq c\gamma_2(V)/\sqrt k$, $0 \notin \Gamma T$, where $T = (V/\rho)\cap S^{n-1}$. Thus, Corollary 1.5 is implied by a one-sided isomorphic condition, that $0 < \inf_{t\in T}\|\Gamma t\|$, rather than the two-sided quantitative estimate we actually have in Theorem 1.4.

The third application is related to a Dvoretzky-type theorem. For $T \subset \mathbb{R}^n$, the parameter $\gamma_2(T)$ has a natural geometric interpretation: it is approximately proportional to the mean width of $T$. Indeed, if $T \subset S^{n-1} \subset \mathbb{R}^n$, then $\gamma_2(T)$ is equivalent to $\sqrt n\, w(T)$, where $w(T)$ is the mean width of $T$ in $\mathbb{R}^n$ (see, for example, [10], chapter 9). Note that exactly the same estimate as in Theorem 1.4, which is $k \geq c\gamma_2^2(T)/\varepsilon^2$, appears in a result related to Dvoretzky's Theorem due to Milman (e.g. [9], Section 2.3.5). There, random $k$-dimensional projections of a convex body are analyzed for $k$ much larger than the critical value, that is, $k$ larger than the dimension in which a random projection is almost Euclidean. It turns out that even though a typical projection is far from being Euclidean, a regular behavior may be observed for the diameter of the projected body.
In our setting, this result could be described as follows: if $T \subset S^{n-1}$ is a centrally-symmetric set and $\Gamma$ is a (normalized) random orthogonal projection, then for $k \geq C\gamma_2^2(T)/\varepsilon^2$, with high probability,
$$1-\varepsilon \leq \frac{1}{\sqrt k}\max_{t\in T}\|\Gamma t\|_{\ell_2^k} \leq 1+\varepsilon. \qquad (3)$$
This consequence of a Dvoretzky-type theorem also follows from our arguments, even for an arbitrary $\psi_2$-matrix $\Gamma$. Let us emphasize that for $k \leq \gamma_2^2(T)$ the convex hull of $\Gamma T$ is approximately a Euclidean ball. Thus, for $k \ll \gamma_2^2(T)$ it is impossible to obtain an estimate in the spirit of (3), even for fixed $\varepsilon$, as the Euclidean ball cannot be well embedded in a lower-dimensional space.

We end this introduction with a notational convention. Throughout, all absolute constants are positive and are denoted by $c$ or $C$. Their values may change from line to line or even within the same line. $C(\varphi)$ and $C_\varphi$ denote a constant which depends only on the parameter $\varphi$ (which is usually a real number), and $a \sim_\varphi b$ means that $c_\varphi b \leq a \leq C_\varphi b$. If the constants are absolute we use the notation $a \sim b$.
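The equivalence between $\gamma_2(T)$ and the mean width can be explored numerically (a sketch, not in the original paper): by the majorizing measures theorem, $\gamma_2(T)$ is comparable, up to absolute constants, to the gaussian complexity $\mathbb{E}\sup_{t\in T}\langle g,t\rangle$, which is easy to estimate by Monte Carlo for a finite set.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_complexity(T, trials=2000):
    """Monte Carlo estimate of E sup_{t in T} <g, t>, g standard gaussian.

    By the majorizing measures theorem this quantity is equivalent,
    up to absolute constants, to gamma_2(T) with the Euclidean metric;
    it is also sqrt(n)/2 times the mean width of T, up to normalization.
    """
    T = np.asarray(T, dtype=float)
    g = rng.standard_normal((trials, T.shape[1]))
    return float(np.mean(np.max(g @ T.T, axis=1)))

# sanity check: for T = standard basis of R^n this is
# E max_i g_i, which grows like sqrt(2 log n)
n = 256
est = gaussian_complexity(np.eye(n))
print(est, np.sqrt(2 * np.log(n)))
```

The estimate for the standard basis comes out somewhat below $\sqrt{2\log n}$, as expected from the well-known second-order correction to the maximum of gaussians.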
2 Generic chaining

Our analysis is based on Talagrand's generic chaining approach [17, 18], which is a way of controlling the expectation of the supremum of centered processes indexed by a metric space (compare also with [1] for another application of a similar flavor). The generic chaining was introduced by Talagrand [18] as a way of simplifying the majorizing measures approach used in the analysis of some stochastic processes, in particular of gaussian processes, and in numerous other applications (see [18, 16, 3] and references therein). We will not discuss this beautiful approach in detail, but rather mention the results we require. The majorizing measures Theorem in its modern version states that for any $T \subset \ell_2$, $\gamma_2(T,\|\cdot\|_2)$ is equivalent to the expectation of the supremum of the gaussian process $\{X_t : t\in T\}$, where $X_t = \sum_i g_i t_i$. The upper bound in its original form is due to Fernique, building on earlier ideas of Preston [4, 12, 13], while the lower bound was established by Talagrand in [15]. Let us reformulate the corresponding result for subgaussian processes, as well as for processes whose tails are a mixture of a subgaussian part and a sub-exponential part.

Theorem 2.1 ([18]). Let $d_1$ and $d_2$ be metrics on a space $T$ and let $\{Z_t : t\in T\}$ be a centered stochastic process.
1. If there is a constant $\alpha$ such that for every $s,t\in T$ and every $u > 0$,
$$\Pr\{|Z_s - Z_t| > u\} \leq \alpha\exp\Big(-\frac{u^2}{d_2^2(s,t)}\Big),$$
then for any $t_0\in T$, $\mathbb{E}\sup_{t\in T}|Z_t - Z_{t_0}| \leq C\alpha\,\gamma_2(T,d_2)$.
2. If there is a constant $\alpha$ such that for every $s,t\in T$ and every $u > 0$,
$$\Pr\{|Z_s - Z_t| > u\} \leq \alpha\exp\Big(-\min\Big(\frac{u^2}{d_2^2(s,t)},\, \frac{u}{d_1(s,t)}\Big)\Big),$$
then for any $t_0\in T$, $\mathbb{E}\sup_{t\in T}|Z_t - Z_{t_0}| \leq C\alpha\,\big(\gamma_2(T,d_2) + \gamma_1(T,d_1)\big).$

The main result of this article, formulated in Theorem 1.3, follows as a particular case of the next proposition, and the rest of this section is devoted to its proof.

Proposition 2.2. Let $(T,d)$ be a metric space and let $\{Z_x\}_{x\in T}$ be a stochastic process.
Let $k > 0$, let $\varphi : [0,\infty) \to \mathbb{R}$, and set $W_x = \varphi(Z_x)$ and $\varepsilon = \frac{\gamma_2(T,d)}{\sqrt k}$. Assume that for some $\eta > 0$ and $e^{-c\eta k} < \delta < \frac14$, the following holds:

1. For any $x, y\in T$ and $t < \delta_0\sqrt k$, where $\delta_0^2 = \frac{4}{\eta}\log\frac1\delta$,
$$\Pr\Big\{|Z_x - Z_y| > \frac{t}{\sqrt k}\, d(x,y)\Big\} < \exp(-\eta t^2). \qquad (2)$$
2. For any $x, y\in T$ and $t > 1$,
$$\Pr\{|W_x - W_y| > t\, d(x,y)\} < \exp(-\eta k t^2).$$
3. For any $x\in T$, with probability larger than $1-\delta$, $|Z_x| < \varepsilon$.
4. $\varphi$ is increasing, differentiable at zero and $\varphi'(0) > 0$.

Then, with probability larger than $1-4\delta$,
$$\sup_{x\in T}|Z_x| < C\varepsilon,$$
where $C = C(\varphi,\delta,\eta) > 0$ depends solely on its arguments.

Note the difference between requirements 1 and 2 in Proposition 2.2. In 1 we are interested in the domain $t < \delta_0\sqrt k$, while in 2 the domain of interest is $t > 1$.

The proof is composed of two steps. The first reduces the problem of estimating $\sup_{t\in T}|Z_t|$ to the easier problem of estimating $\sup_{t\in T'}|Z_t|$, where $T' \subset T$ is an $\varepsilon$-cover of $T$ (that is, a set $T'$ such that for any $t\in T$ there is some $s\in T'$ for which $d(t,s) \leq \varepsilon$).

Lemma 2.3. Let $\{W_t\}$ be a stochastic process that satisfies requirement 2 in Proposition 2.2, and let $\varepsilon, k, \eta$ be as in Proposition 2.2. Then there exist constants $c_1$ and $c_2$, which depend solely on $\eta$, and a set $T' \subset T$ with $|T'| \leq 4^k$, such that with probability larger than $1 - e^{-c_1 k}$,
$$\sup_{t\in T} W_t < \sup_{t\in T'} W_t + c_2\varepsilon.$$

Proof. Let $\{T_s : s \geq 0\}$ be a collection of subsets of $T$ such that $|T_s| \leq 2^{2^s}$, $|T_0| = 1$, and for which $\gamma_2(T,d)$ is almost attained. For any $x\in T$, let $\pi_s(x)$ be a nearest point to $x$ in $T_s$ with respect to the metric $d$. We may assume that $\gamma_2(T,d) < \infty$, and thus the sequence $\pi_s(x)$ converges to $x$ for any $x\in T$. Clearly, by the definition of $\gamma_2$ and the triangle inequality,
$$\sum_{s=0}^{\infty} 2^{s/2}\, d(\pi_s(x), \pi_{s+1}(x)) \leq C\gamma_2(T,d). \qquad (4)$$
Let $s_0$ be the minimal integer such that $2^{s_0} > k$, put $T' = T_{s_0}$ and note that $|T'| \leq 2^{2^{s_0}} \leq 2^{2k} = 4^k$. It remains to prove that with high probability, for any $x\in T$,
$$W_x - W_{\pi_{s_0}(x)} = \sum_{s=s_0}^{\infty}\big(W_{\pi_{s+1}(x)} - W_{\pi_s(x)}\big) < C\varepsilon, \qquad (5)$$
as this implies that $\sup_{t\in T} W_t \leq \sup_{t\in T'} W_t + C\varepsilon$. Fix $s > s_0$ and $x\in T_{s+1}$, $y\in T_s$. Since $\sqrt{\log(|T_s|\,|T_{s+1}|)/k} > 1$, then by the increment assumption on $W$, the probability that
$$|W_x - W_y| < c_\eta \sqrt{\frac{\log(|T_s|\,|T_{s+1}|)}{k}}\; d(x,y) \qquad (6)$$
is larger than $1 - e^{-c'_\eta \log(|T_s|\,|T_{s+1}|)} > 1 - (|T_s|\,|T_{s+1}|)^{-2}$ for the appropriate choice of the constants $c_\eta, c'_\eta > 0$, which depend only on $\eta$. Hence, with probability larger than $1 - |T_{s+1}|^{-1}$, condition (6) holds for all $x\in T_{s+1}$ and $y\in T_s$. It follows that with probability larger than $1 - \exp(-c_1 k)$, for all $x\in T$ and any $s \geq s_0$,
$$\big|W_{\pi_{s+1}(x)} - W_{\pi_s(x)}\big| < C_\eta\, \frac{2^{s/2}}{\sqrt k}\; d(\pi_{s+1}(x), \pi_s(x)),$$
since $\log|T_s| \leq 2^s$. Applying (4), with probability at least $1 - \exp(-c_1 k)$, for any $x\in T$,
$$W_x - W_{\pi_{s_0}(x)} \leq \sum_{s=s_0}^{\infty}\big|W_{\pi_{s+1}(x)} - W_{\pi_s(x)}\big| < \frac{C_\eta}{\sqrt k}\sum_{s=s_0}^{\infty} 2^{s/2}\, d(\pi_{s+1}(x), \pi_s(x)) \leq \frac{C_\eta\,\gamma_2(T)}{\sqrt k},$$
and thus (5) holds, as $\gamma_2(T) = \sqrt k\,\varepsilon$.

The next step is to bound the maximum of a stochastic process when it is indexed by a small set.

Lemma 2.4. Let $(T,d)$ be a metric space, assume that $|T| \leq 4^k$ and let $\{Z_t\}$ be a stochastic process that satisfies requirements 1 and 3 in Proposition 2.2. If $\eta, \varepsilon, \delta, k$ are as in Proposition 2.2, then with probability larger than $1-3\delta$,
$$\sup_{t\in T}|Z_t| \leq C(\eta,\delta)\,\varepsilon,$$
where $C(\eta,\delta)$ depends solely on $\delta$ and $\eta$.

Proof. Let $\{T_s : s\in\mathbb{N}\}$ be a collection of subsets of $T$ such that $|T_s| \leq 2^{2^s}$, $|T_0| = 1$, and for which $\gamma_2(T)$ is almost attained. Again, for any $x\in T$, let $\pi_s(x)$ be a nearest point to $x$ in $T_s$, and because $|T| \leq 4^k$, we may assume that $T_s = T$ for $s > s_0 = \log_2 k + 1$. Thus, it is evident that
$$Z_x - Z_{\pi_0(x)} = \sum_{s=1}^{s_0+1}\big(Z_{\pi_s(x)} - Z_{\pi_{s-1}(x)}\big).$$
Fix some $u \leq \delta_0$ and $s \leq s_0+1$. Selecting $t = u\,2^{(s+1)/2}$ in condition 1 of Proposition 2.2, it follows that for a fixed $x\in T$,
$$\Pr\Big\{\big|Z_{\pi_s(x)} - Z_{\pi_{s-1}(x)}\big| > \frac{u\,2^{(s+1)/2}}{\sqrt k}\, d(\pi_s(x), \pi_{s-1}(x))\Big\} \leq e^{-\eta u^2 2^{s+1}}.$$
Hence, with probability larger than $1 - \sum_{s\geq 1} 2^{2^s}\,2^{2^{s-1}}\, e^{-\eta u^2 2^{s+1}}$, for every $x\in$
$T$,
$$|Z_x - Z_{\pi_0(x)}| \leq \sum_{s=1}^{s_0+1} \frac{u\,2^{s/2}}{\sqrt k}\, d(\pi_s(x), \pi_{s-1}(x)) \leq \frac{c u\,\gamma_2(T)}{\sqrt k}.$$
By condition 3 in Proposition 2.2, with probability larger than $1-\delta$ we have $|Z_{\pi_0(x)}| < \varepsilon$. Hence, with probability larger than $1 - \sum_{s\geq1} 2^{2^s}2^{2^{s-1}} e^{-\eta u^2 2^{s+1}} - \delta$, for every $x\in T$,
$$|Z_x| < |Z_{\pi_0(x)}| + \frac{cu\,\gamma_2(T)}{\sqrt k} \leq |Z_{\pi_0(x)}| + cu\varepsilon < c(u+1)\varepsilon.$$
It remains to estimate the probability, and to that end, put $u = \delta_0 = \sqrt{\frac{4}{\eta}\log\frac1\delta}$ and observe that
$$\sum_{s\geq1} 2^{2^s}2^{2^{s-1}} e^{-\eta u^2 2^{s+1}} \leq \sum_{s\geq1} \delta^{2^{s+1}} \leq 2\delta,$$
so the claim holds with probability larger than $1-3\delta$, as claimed.

Proof of Proposition 2.2. By condition 4, there is a constant $C_\varphi$ such that for any $0 < \varepsilon < 1$ and any $A \geq 1$,
$$\varphi^{-1}\big(\varphi(A\varepsilon) + c_2\varepsilon\big) \leq C(\varphi, A)\,\varepsilon. \qquad (7)$$
Let $T' \subset T$ be the set from Lemma 2.3. Applying Lemma 2.3, and since $\varphi$ is increasing,
$$\sup_{t\in T} Z_t = \varphi^{-1}\Big(\sup_{t\in T} W_t\Big) \leq \varphi^{-1}\Big(\sup_{t\in T'} W_t + c_2\varepsilon\Big) = \varphi^{-1}\Big(\varphi\Big(\sup_{t\in T'} Z_t\Big) + c_2\varepsilon\Big)$$
with probability larger than $1 - e^{-c_1 k} > 1-\delta$. Since $|T'| \leq 4^k$, then by Lemma 2.4, with probability larger than $1-3\delta$, $\sup_{t\in T'}|Z_t| < C(\eta,\delta)\varepsilon$, and hence, by (7), with probability larger than $1-4\delta$,
$$\sup_{t\in T} Z_t \leq \varphi^{-1}\big(\varphi(C(\eta,\delta)\varepsilon) + c_2\varepsilon\big) \leq c(\eta,\delta,\varphi)\,\varepsilon.$$
3 Empirical processes

Next we show how Theorem 1.3 follows from Proposition 2.2. A central ingredient in the proof is the well-known Bernstein inequality.

Theorem 3.1 ([19, 2]). Let $X_1,\ldots,X_k$ be independent random variables with zero mean such that for every $i$ and every $m \geq 2$,
$$\mathbb{E}|X_i|^m \leq \frac{m!\, M^{m-2}\, v_i}{2}.$$
Then, for any $v \geq \sum_i v_i$ and any $u > 0$,
$$\Pr\Big\{\Big|\sum_{i=1}^k X_i\Big| > u\Big\} \leq 2\exp\Big(-\frac{u^2}{2(v + uM)}\Big).$$

It is easy to see that if $\mathbb{E}\exp(|X|/b) \leq 2$, i.e., if $\|X\|_{\psi_1} \leq b$, then $\mathbb{E}|X|^m \leq 2 b^m m!$, and the assumptions of Theorem 3.1 are satisfied with $M = c\|X\|_{\psi_1}$ and $v = ck\|X\|_{\psi_1}^2$. Hence,
$$\Pr\Big\{\Big|\frac1k\sum_{i=1}^k X_i\Big| > u\Big\} \leq 2\exp\Big(-c\,k\min\Big\{\frac{u^2}{\|X\|_{\psi_1}^2},\; \frac{u}{\|X\|_{\psi_1}}\Big\}\Big). \qquad (8)$$

Lemma 3.2. Using the notation of Theorem 1.3, for any $f,g\in T$ and $u > 0$,
$$\Pr\big\{|Z_f - Z_g| > u\,\|f-g\|_{\psi_2}\big\} \leq 2\exp\big(-c\,k\min\{u^2, u\}\big)$$
and
$$\Pr\{|Z_f| > u\} \leq 2\exp\big(-c_\beta\, k\min\{u^2, u\}\big),$$
where $c$ is a universal constant and $c_\beta$ depends solely on $\beta$ from Theorem 1.3.

Proof. Clearly,
$$Z_f - Z_g = \frac1k\sum_{i=1}^k (f-g)(X_i)\,(f+g)(X_i).$$
Let $Y_i = (f-g)(X_i)(f+g)(X_i) = f^2(X_i) - g^2(X_i)$. Then for any $u > 0$,
$$\Pr\{|Y_i| > 4u^2\|f-g\|_{\psi_2}\|f+g\|_{\psi_2}\} \leq \Pr\{|(f-g)(X_i)| > 2u\|f-g\|_{\psi_2}\} + \Pr\{|(f+g)(X_i)| > 2u\|f+g\|_{\psi_2}\} \leq 4e^{-u^2},$$
which implies that $\|Y_i\|_{\psi_1} \leq c\|f-g\|_{\psi_2}\|f+g\|_{\psi_2} \leq c\beta\|f-g\|_{\psi_2}$, as $\|f\|_{\psi_2}, \|g\|_{\psi_2} \leq \beta$. In particular, since $Y_1,\ldots,Y_k$ are independent and $\mathbb{E}Y_i = 0$, the first part of the claim follows from (8). The estimate for $\Pr\{|Z_f| > u\}$ follows the same path, where we define $Y_i = f^2(X_i) - 1$, use the fact that $\|f(X)\|_{\psi_2} \leq \beta$ and apply (8).
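A quick simulation (not part of the original paper) of what (8) asserts for the variables relevant to Theorem 1.3: the average of $k$ centered squared gaussians, a $\psi_1$ variable, has tails controlled by $\exp(-ck\min\{u^2, u\})$. The constant $c = 1/4$ below is an illustrative choice, not the true constant in (8).

```python
import numpy as np

rng = np.random.default_rng(4)
k, reps, u = 400, 10000, 0.25

# X_i = g_i^2 - 1: centered squared gaussians, sub-exponential (psi_1)
g = rng.standard_normal((reps, k))
S = np.mean(g**2 - 1, axis=1)            # (1/k) * sum_i X_i, repeated reps times

emp_tail = float(np.mean(np.abs(S) > u))
# right-hand side of (8); c = 1/4 is purely illustrative
bound = 2 * np.exp(-0.25 * k * min(u**2, u))
print(emp_tail, bound)
```

At this deviation level $u^2 < u$, so the subgaussian branch of the minimum is active, and the empirical tail sits below the Bernstein-type bound.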
11 Proof of Theorem.3. We will show that the conditions -4 of Proposition. are satisfied, for the choice of ϕt = + t and df, g = f g ψ. Fix η c, the constant from Lemma 3.. Assume that t < δ 0 = 4 η log δ. By Lemma 3., P r Z f Z g > t f g ψ < exp η min{t, t } < exp η t. By the triangle inequality, W f W g = / / f X i g X i Applying 8 for t >, P r W f W g > t f g ψ P r = P r δ 0 f g X i f g X i > t f g ψ f g X i > t f g ψ < exp ct exp ηt since we assumed that η is smaller than the constant in 8. 3 For any x T, by Lemma 3., 4 ϕ 0 = > 0. P r Z x > ε < exp ηε < δ. / Remar: We may formulate a variant of Theorem.3, as follows Theorem 3.3 Let Ω, µ be a probability space and let X, X, X,..., X be independent random variables distributed according to µ. Set T to be a collection of functions such that for every f T, EfX = 0 and fx ψ β. Define a metric on T by df, g = max{ f gx ψ, f gx ψ } and consider the random variable Zf = fx i. Then for any e c γ T,d < δ <, with probability larger than δ, sup Zf cβ, δ γ T, d 9 f T where cβ, δ depends solely on β and δ and c is an absolute constant.
The proof of Theorem 3.3 is almost identical to that of Theorem 1.3, the only difference being the way requirement 2 in Proposition 2.2 is satisfied. In this case, taking $\varphi(t) = 1+t$, we have $W_f - W_g = Z_f - Z_g = \frac1k\sum_{i=1}^k (f-g)(X_i)$, and applying (8) for $t > 1$,
$$\Pr\{|W_f - W_g| > t\,d(f,g)\} \leq \Pr\Big\{\Big|\frac1k\sum_{i=1}^k (f-g)(X_i)\Big| > t\,\|(f-g)(X)\|_{\psi_1}\Big\} \leq 2\exp(-ckt)$$
by (8).

4 Applications to random embeddings

Theorem 4.1. For every $\beta > 0$ there exists a constant $c(\beta)$ for which the following holds. Let $T \subset S^{n-1}$ be a set, and let $\Gamma : \mathbb{R}^n \to \mathbb{R}^k$ be a random operator whose rows are independent vectors $\Gamma_1,\ldots,\Gamma_k\in\mathbb{R}^n$. Assume that for any $x\in\mathbb{R}^n$ and any $i$, $\mathbb{E}\langle\Gamma_i, x\rangle^2 = \|x\|^2$ and $\|\langle\Gamma_i, x\rangle\|_{\psi_2} \leq \beta\,\|\langle\Gamma_i, x\rangle\|_{L_2}$. Then, with probability larger than $1/2$, for any $x\in T$ and any $\varepsilon \geq c(\beta)\gamma_2(T)/\sqrt k$,
$$1-\varepsilon \leq \frac{1}{\sqrt k}\|\Gamma x\|_{\ell_2^k} < 1+\varepsilon.$$

Proof. For $x\in T$ define
$$Z_x = \frac1k\|\Gamma x\|_{\ell_2^k}^2 - 1 = \frac1k\sum_{i=1}^k \langle\Gamma_i, x\rangle^2 - 1.$$
Then $\mathbb{E}Z_x = 0$ and $\|\langle\Gamma_i, x-y\rangle\|_{\psi_2} \leq \beta\|x-y\|_{\ell_2^n}$. The assumptions of Theorem 1.3 are satisfied, and hence for $\delta = 1/2$, with probability at least $1/2$,
$$\sup_{x\in T}|Z_x| < \varepsilon,$$
as claimed.
An important example is when the elements of the matrix $\Gamma$ are independent random variables $\Gamma_{i,j}$, such that $\mathbb{E}\Gamma_{i,j} = 0$, $\mathrm{Var}(\Gamma_{i,j}) = 1$ and $\|\Gamma_{i,j}\|_{\psi_2} \leq \beta\|\Gamma_{i,j}\|_{L_2}$, since in that case Theorem 1.4 holds. Indeed, one just needs to verify that $\big\|\sum_{j=1}^n \Gamma_{i,j}x_j\big\|_{\psi_2} \leq c\beta\|x\|_{\ell_2^n}$. To that end, recall that for any centered random variable $X$,
$$\|X\|_{\psi_2} < c \;\Longrightarrow\; \forall u,\; \mathbb{E}\exp(uX) < \exp(3c^2u^2) \;\Longrightarrow\; \|X\|_{\psi_2} < 10c.$$
Hence, the fact that $\big\|\sum_{j=1}^n \Gamma_{i,j}x_j\big\|_{\psi_2} \leq c\beta\|x\|_{\ell_2^n}$ follows from
$$\mathbb{E}e^{u\sum_{j=1}^n \Gamma_{i,j}x_j} = \prod_{j=1}^n \mathbb{E}e^{ux_j\Gamma_{i,j}} \leq \prod_{j=1}^n e^{c\beta^2u^2x_j^2} = e^{c\beta^2u^2\|x\|_{\ell_2^n}^2}.$$

As was mentioned in the introduction, in the special case where the $\Gamma_{i,j}$ are independent, standard gaussian variables, shorter proofs of Theorem 1.4 exist. The first proof in that case is due to Gordon [6] and is based on a comparison theorem for gaussian variables. Another simple proof, which uses Talagrand's generic chaining approach, may be described as follows. Let $\Gamma$ be a $k\times n$ gaussian matrix. Since the gaussian measure is rotation invariant, it follows that for any $t\in S^{n-1}$, $\mathbb{E}\frac{1}{\sqrt k}\|\Gamma t\|_{\ell_2^k}$ is independent of $t$, and we denote it by $A$. It is standard to verify that $A$ is equivalent to an absolute constant. Consider the process $Z_t = A - \frac{1}{\sqrt k}\|\Gamma t\|_{\ell_2^k}$ indexed by $T\subset S^{n-1}$, and note that this process is centered. In [14], G. Schechtman proved that there is some absolute constant $c$ such that for any $s,t\in S^{n-1}$,
$$\Pr\Big\{\Big|\frac{1}{\sqrt k}\|\Gamma t\|_{\ell_2^k} - \frac{1}{\sqrt k}\|\Gamma s\|_{\ell_2^k}\Big| > u\Big\} \leq 2\exp\Big(-c\,\frac{ku^2}{\|s-t\|_{\ell_2^n}^2}\Big). \qquad (10)$$
Hence, the process $Z_t$ is subgaussian with respect to the (scaled) Euclidean metric, implying that for any $t_0\in T$,
$$\mathbb{E}\sup_{t\in T}|Z_t| \leq \mathbb{E}\sup_{t\in T}|Z_t - Z_{t_0}| + \mathbb{E}\Big|A - \frac{1}{\sqrt k}\|\Gamma t_0\|_{\ell_2^k}\Big| \leq C\,\frac{\gamma_2(T)}{\sqrt k} + \frac{C}{\sqrt k} \leq C'\varepsilon,$$
where the last inequality follows from Theorem 2.1, combined with a standard estimate on $\mathbb{E}\big|\|\Gamma t_0\|_{\ell_2^k} - \mathbb{E}\|\Gamma t_0\|_{\ell_2^k}\big|$ and the fact that $\varepsilon \geq C\gamma_2(T)/\sqrt k$.

A second, analogous case in which the preceding argument works is that of projections onto random subspaces. Consider $G_{n,k}$, the Grassmannian of $k$-dimensional subspaces of $\mathbb{R}^n$, and recall that there exists a unique rotation invariant probability measure on $G_{n,k}$. Assume that $P$ is an orthogonal projection onto a random $k$-dimensional subspace in $\mathbb{R}^n$ and let $\Gamma = \sqrt{\frac{n}{k}}\,P$.
Note that for a suitable $A$, which is equivalent to an absolute constant, the process $Z_t = A - \|\Gamma t\|_{\ell_2^n}$, indexed by $T\subset S^{n-1}$, is centered. Next, we shall prove
an analog of (10) for such random operators. As in the gaussian case, Theorem 2.1 then implies that all of our results also hold for random operators which are orthogonal projections onto random subspaces. The proof of the following lemma is similar to Schechtman's proof in [14].

Lemma 4.2. There exists an absolute constant $c$ for which the following holds. Fix $s,t\in S^{n-1}$ and let $P$ be a random projection of rank $k$. Then,
$$\Pr\Big\{\big|\|Ps\|_{\ell_2^n} - \|Pt\|_{\ell_2^n}\big| > u\sqrt{\frac{k}{n}}\,\|s-t\|_{\ell_2^n}\Big\} \leq 2\exp(-cu^2 k).$$

Proof. Let $d = \|s-t\|_{\ell_2^n}$ and set $\Omega_d = \{(x,y) : x,y\in S^{n-1},\ \|x-y\|_{\ell_2^n} = d\}$. There exists a unique rotation invariant probability measure $\mu_d$ on $\Omega_d$ (see, for example, the first pages of [10]). Rather than fixing $s,t$ and randomly selecting $P$, one may equivalently fix an orthogonal projection $P$ of rank $k$ and prove that
$$\mu_d\Big\{(s,t) : \big|\|Ps\|_{\ell_2^n} - \|Pt\|_{\ell_2^n}\big| > u\sqrt{\frac{k}{n}}\,\|s-t\|_{\ell_2^n}\Big\} \leq 2\exp(-cu^2 k).$$
Note that if one conditions on $x = s+t$, then $y = \frac{s-t}{d}$ is distributed uniformly on a Euclidean sphere. Thus it is enough to show that for any fixed $x$,
$$\Pr\Big\{y\in S^{n-1} : \Big|\Big\|P\frac{x+dy}{2}\Big\|_{\ell_2^n} - \Big\|P\frac{x-dy}{2}\Big\|_{\ell_2^n}\Big| > du\sqrt{\frac{k}{n}}\Big\} \leq 2\exp(-cu^2 k).$$
Since the function $f(y) = \big\|P\frac{x+dy}{2}\big\|_{\ell_2^n} - \big\|P\frac{x-dy}{2}\big\|_{\ell_2^n}$ is Lipschitz on $S^{n-1}$ with constant $d$, and since its expectation is zero, then by the standard concentration inequality on the sphere (see, e.g., [10]),
$$\Pr\Big\{y\in S^{n-1} : |f(y)| > \frac{vd}{\sqrt n}\Big\} \leq 2e^{-cv^2},$$
and choosing $v = u\sqrt k$ completes the proof.

Note that this argument relies heavily on the structure of the gaussian random variables (particularly, the rotation invariance), which is the reason why one can have a purely subgaussian behavior for the process $Z_t$. For a general $\psi_2$ operator, the best that one could hope for off-hand is a mixture of subgaussian and sub-exponential tails, and in order to obtain an upper bound solely in terms of $\gamma_2$ one requires the more general argument.

References

[1] S. Artstein, The change in the diameter of a convex body under a random sign-projection, preprint.
[2] G. Bennett, Probability inequalities for the sum of independent random variables, J. Am. Stat. Assoc. 57, 33-45, 1962.
[3] R.M. Dudley, Uniform Central Limit Theorems, Cambridge Studies in Advanced Mathematics 63, Cambridge University Press, 1999.
[4] X. Fernique, Régularité des trajectoires des fonctions aléatoires gaussiennes, École d'Été de Probabilités de St-Flour 1974, Lecture Notes in Mathematics 480, Springer-Verlag, 1-96, 1975.
[5] E. Giné, J. Zinn, Some limit theorems for empirical processes, Ann. Probab. 12(4), 929-989, 1984.
[6] Y. Gordon, On Milman's inequality and random subspaces which escape through a mesh in R^n, Geometric Aspects of Functional Analysis, Israel Seminar, Lecture Notes in Math. 1317, Springer-Verlag, 84-106, 1988.
[7] W.B. Johnson, J. Lindenstrauss, Extensions of Lipschitz mappings into a Hilbert space, Contemp. Math. 26, 189-206, 1984.
[8] V.D. Milman, Geometrical inequalities and mixed volumes in the local theory of Banach spaces, Astérisque 131, 1985.
[9] V.D. Milman, Topics in Asymptotic Geometric Analysis, Geom. Funct. Anal., Special Volume GAFA2000, Birkhäuser, 2000.
[10] V.D. Milman, G. Schechtman, Asymptotic theory of finite dimensional normed spaces, Lecture Notes in Mathematics 1200, Springer, 1986.
[11] A. Pajor, N. Tomczak-Jaegermann, Subspaces of small codimension of finite-dimensional Banach spaces, Proc. AMS 97(4), 637-642, 1986.
[12] C. Preston, Banach spaces arising from some integral inequalities, Indiana Math. J. 20, 1971.
[13] C. Preston, Continuity properties of some Gaussian processes, Ann. Math. Statist. 43, 1972.
[14] G. Schechtman, A remark concerning the dependence on ε in Dvoretzky's theorem, Geometric Aspects of Functional Analysis (1987-88), Lecture Notes in Math. 1376, Springer, Berlin, 274-277, 1989.
[15] M. Talagrand, Regularity of Gaussian processes, Acta Math. 159, 99-149, 1987.
[16] M. Talagrand, Majorizing measures: the generic chaining, Ann. Probab. 24, 1049-1103, 1996.
[17] M. Talagrand, Majorizing measures without measures, Ann. Probab. 29, 411-417, 2001.
[18] M. Talagrand, The generic chaining, forthcoming.
[19] A.W. van der Vaart, J.A. Wellner, Weak convergence and Empirical Processes, Springer-Verlag, 1996.
More informationSupremum of simple stochastic processes
Subspace embeddings Daniel Hsu COMS 4772 1 Supremum of simple stochastic processes 2 Recap: JL lemma JL lemma. For any ε (0, 1/2), point set S R d of cardinality 16 ln n S = n, and k N such that k, there
More informationPointwise estimates for marginals of convex bodies
Journal of Functional Analysis 54 8) 75 93 www.elsevier.com/locate/jfa Pointwise estimates for marginals of convex bodies R. Eldan a,1,b.klartag b,, a School of Mathematical Sciences, Tel-Aviv University,
More informationSparse Recovery with Pre-Gaussian Random Matrices
Sparse Recovery with Pre-Gaussian Random Matrices Simon Foucart Laboratoire Jacques-Louis Lions Université Pierre et Marie Curie Paris, 75013, France Ming-Jun Lai Department of Mathematics University of
More informationAn Algorithmist s Toolkit Nov. 10, Lecture 17
8.409 An Algorithmist s Toolkit Nov. 0, 009 Lecturer: Jonathan Kelner Lecture 7 Johnson-Lindenstrauss Theorem. Recap We first recap a theorem (isoperimetric inequality) and a lemma (concentration) from
More informationSpectral Gap and Concentration for Some Spherically Symmetric Probability Measures
Spectral Gap and Concentration for Some Spherically Symmetric Probability Measures S.G. Bobkov School of Mathematics, University of Minnesota, 127 Vincent Hall, 26 Church St. S.E., Minneapolis, MN 55455,
More informationSOME CONVERSE LIMIT THEOREMS FOR EXCHANGEABLE BOOTSTRAPS
SOME CONVERSE LIMIT THEOREMS OR EXCHANGEABLE BOOTSTRAPS Jon A. Wellner University of Washington The bootstrap Glivenko-Cantelli and bootstrap Donsker theorems of Giné and Zinn (990) contain both necessary
More informationIsoperimetry of waists and local versus global asymptotic convex geometries
Isoperimetry of waists and local versus global asymptotic convex geometries Roman Vershynin October 21, 2004 Abstract If two symmetric convex bodies K and L both have nicely bounded sections, then the
More informationALEXANDER KOLDOBSKY AND ALAIN PAJOR. Abstract. We prove that there exists an absolute constant C so that µ(k) C p max. ξ S n 1 µ(k ξ ) K 1/n
A REMARK ON MEASURES OF SECTIONS OF L p -BALLS arxiv:1601.02441v1 [math.mg] 11 Jan 2016 ALEXANDER KOLDOBSKY AND ALAIN PAJOR Abstract. We prove that there exists an absolute constant C so that µ(k) C p
More informationSections of Convex Bodies via the Combinatorial Dimension
Sections of Convex Bodies via the Combinatorial Dimension (Rough notes - no proofs) These notes are centered at one abstract result in combinatorial geometry, which gives a coordinate approach to several
More informationFunctional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...
Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................
More informationRandom sets of isomorphism of linear operators on Hilbert space
IMS Lecture Notes Monograph Series Random sets of isomorphism of linear operators on Hilbert space Roman Vershynin University of California, Davis Abstract: This note deals with a problem of the probabilistic
More informationOptimal terminal dimensionality reduction in Euclidean space
Optimal terminal dimensionality reduction in Euclidean space Shyam Narayanan Jelani Nelson October 22, 2018 Abstract Let ε (0, 1) and X R d be arbitrary with X having size n > 1. The Johnson- Lindenstrauss
More informationOn the minimum of several random variables
On the minimum of several random variables Yehoram Gordon Alexander Litvak Carsten Schütt Elisabeth Werner Abstract For a given sequence of real numbers a,..., a n we denote the k-th smallest one by k-
More informationAnother Low-Technology Estimate in Convex Geometry
Convex Geometric Analysis MSRI Publications Volume 34, 1998 Another Low-Technology Estimate in Convex Geometry GREG KUPERBERG Abstract. We give a short argument that for some C > 0, every n- dimensional
More informationRapid Steiner symmetrization of most of a convex body and the slicing problem
Rapid Steiner symmetrization of most of a convex body and the slicing problem B. Klartag, V. Milman School of Mathematical Sciences Tel Aviv University Tel Aviv 69978, Israel Abstract For an arbitrary
More informationOptimisation Combinatoire et Convexe.
Optimisation Combinatoire et Convexe. Low complexity models, l 1 penalties. A. d Aspremont. M1 ENS. 1/36 Today Sparsity, low complexity models. l 1 -recovery results: three approaches. Extensions: matrix
More informationFUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES. 1. Compact Sets
FUNCTIONAL ANALYSIS LECTURE NOTES: COMPACT SETS AND FINITE-DIMENSIONAL SPACES CHRISTOPHER HEIL 1. Compact Sets Definition 1.1 (Compact and Totally Bounded Sets). Let X be a metric space, and let E X be
More informationarxiv: v1 [math.pr] 22 May 2008
THE LEAST SINGULAR VALUE OF A RANDOM SQUARE MATRIX IS O(n 1/2 ) arxiv:0805.3407v1 [math.pr] 22 May 2008 MARK RUDELSON AND ROMAN VERSHYNIN Abstract. Let A be a matrix whose entries are real i.i.d. centered
More informationHouston Journal of Mathematics. c 2002 University of Houston Volume 28, No. 3, 2002
Houston Journal of Mathematics c 00 University of Houston Volume 8, No. 3, 00 QUOTIENTS OF FINITE-DIMENSIONAL QUASI-NORMED SPACES N.J. KALTON AND A.E. LITVAK Communicated by Gilles Pisier Abstract. We
More informationRegularity of the density for the stochastic heat equation
Regularity of the density for the stochastic heat equation Carl Mueller 1 Department of Mathematics University of Rochester Rochester, NY 15627 USA email: cmlr@math.rochester.edu David Nualart 2 Department
More informationOn a Generalization of the Busemann Petty Problem
Convex Geometric Analysis MSRI Publications Volume 34, 1998 On a Generalization of the Busemann Petty Problem JEAN BOURGAIN AND GAOYONG ZHANG Abstract. The generalized Busemann Petty problem asks: If K
More informationAPPROXIMATE ISOMETRIES ON FINITE-DIMENSIONAL NORMED SPACES
APPROXIMATE ISOMETRIES ON FINITE-DIMENSIONAL NORMED SPACES S. J. DILWORTH Abstract. Every ε-isometry u between real normed spaces of the same finite dimension which maps the origin to the origin may by
More informationProblem Set 6: Solutions Math 201A: Fall a n x n,
Problem Set 6: Solutions Math 201A: Fall 2016 Problem 1. Is (x n ) n=0 a Schauder basis of C([0, 1])? No. If f(x) = a n x n, n=0 where the series converges uniformly on [0, 1], then f has a power series
More informationGinés López 1, Miguel Martín 1 2, and Javier Merí 1
NUMERICAL INDEX OF BANACH SPACES OF WEAKLY OR WEAKLY-STAR CONTINUOUS FUNCTIONS Ginés López 1, Miguel Martín 1 2, and Javier Merí 1 Departamento de Análisis Matemático Facultad de Ciencias Universidad de
More informationNearest Neighbor Preserving Embeddings
Nearest Neighbor Preserving Embeddings Piotr Indyk MIT Assaf Naor Microsoft Research Abstract In this paper we introduce the notion of nearest neighbor preserving embeddings. These are randomized embeddings
More informationAuerbach bases and minimal volume sufficient enlargements
Auerbach bases and minimal volume sufficient enlargements M. I. Ostrovskii January, 2009 Abstract. Let B Y denote the unit ball of a normed linear space Y. A symmetric, bounded, closed, convex set A in
More informationSuper-Gaussian directions of random vectors
Super-Gaussian directions of random vectors Bo az Klartag Abstract We establish the following universality property in high dimensions: Let be a random vector with density in R n. The density function
More informationMetric spaces and metrizability
1 Motivation Metric spaces and metrizability By this point in the course, this section should not need much in the way of motivation. From the very beginning, we have talked about R n usual and how relatively
More informationOn John type ellipsoids
On John type ellipsoids B. Klartag Tel Aviv University Abstract Given an arbitrary convex symmetric body K R n, we construct a natural and non-trivial continuous map u K which associates ellipsoids to
More informationQUOTIENTS OF ESSENTIALLY EUCLIDEAN SPACES
QUOTIENTS OF ESSENTIALLY EUCLIDEAN SPACES T. FIGIEL AND W. B. JOHNSON Abstract. A precise quantitative version of the following qualitative statement is proved: If a finite dimensional normed space contains
More informationBasic Properties of Metric and Normed Spaces
Basic Properties of Metric and Normed Spaces Computational and Metric Geometry Instructor: Yury Makarychev The second part of this course is about metric geometry. We will study metric spaces, low distortion
More informationConcentration behavior of the penalized least squares estimator
Concentration behavior of the penalized least squares estimator Penalized least squares behavior arxiv:1511.08698v2 [math.st] 19 Oct 2016 Alan Muro and Sara van de Geer {muro,geer}@stat.math.ethz.ch Seminar
More informationUNIFORM EMBEDDINGS OF BOUNDED GEOMETRY SPACES INTO REFLEXIVE BANACH SPACE
UNIFORM EMBEDDINGS OF BOUNDED GEOMETRY SPACES INTO REFLEXIVE BANACH SPACE NATHANIAL BROWN AND ERIK GUENTNER ABSTRACT. We show that every metric space with bounded geometry uniformly embeds into a direct
More informationExercise Solutions to Functional Analysis
Exercise Solutions to Functional Analysis Note: References refer to M. Schechter, Principles of Functional Analysis Exersize that. Let φ,..., φ n be an orthonormal set in a Hilbert space H. Show n f n
More informationThe Rademacher Cotype of Operators from l N
The Rademacher Cotype of Operators from l N SJ Montgomery-Smith Department of Mathematics, University of Missouri, Columbia, MO 65 M Talagrand Department of Mathematics, The Ohio State University, 3 W
More informationProbabilistic Methods in Asymptotic Geometric Analysis.
Probabilistic Methods in Asymptotic Geometric Analysis. C. Hugo Jiménez PUC-RIO, Brazil September 21st, 2016. Colmea. RJ 1 Origin 2 Normed Spaces 3 Distribution of volume of high dimensional convex bodies
More informationat time t, in dimension d. The index i varies in a countable set I. We call configuration the family, denoted generically by Φ: U (x i (t) x j (t))
Notations In this chapter we investigate infinite systems of interacting particles subject to Newtonian dynamics Each particle is characterized by its position an velocity x i t, v i t R d R d at time
More informationDimensional behaviour of entropy and information
Dimensional behaviour of entropy and information Sergey Bobkov and Mokshay Madiman Note: A slightly condensed version of this paper is in press and will appear in the Comptes Rendus de l Académies des
More informationLectures in Geometric Functional Analysis. Roman Vershynin
Lectures in Geometric Functional Analysis Roman Vershynin Contents Chapter 1. Functional analysis and convex geometry 4 1. Preliminaries on Banach spaces and linear operators 4 2. A correspondence between
More informationStar bodies with completely symmetric sections
Star bodies with completely symmetric sections Sergii Myroshnychenko, Dmitry Ryabogin, and Christos Saroglou Abstract We say that a star body is completely symmetric if it has centroid at the origin and
More informationEntropy extension. A. E. Litvak V. D. Milman A. Pajor N. Tomczak-Jaegermann
Entropy extension A. E. Litvak V. D. Milman A. Pajor N. Tomczak-Jaegermann Dedication: The paper is dedicated to the memory of an outstanding analyst B. Ya. Levin. The second named author would like to
More informationThe Johnson-Lindenstrauss Lemma
The Johnson-Lindenstrauss Lemma Kevin (Min Seong) Park MAT477 Introduction The Johnson-Lindenstrauss Lemma was first introduced in the paper Extensions of Lipschitz mappings into a Hilbert Space by William
More informationCharacterization of Self-Polar Convex Functions
Characterization of Self-Polar Convex Functions Liran Rotem School of Mathematical Sciences, Tel-Aviv University, Tel-Aviv 69978, Israel Abstract In a work by Artstein-Avidan and Milman the concept of
More informationAsymptotic Geometric Analysis, Fall 2006
Asymptotic Geometric Analysis, Fall 006 Gideon Schechtman January, 007 1 Introduction The course will deal with convex symmetric bodies in R n. In the first few lectures we will formulate and prove Dvoretzky
More informationMethods for sparse analysis of high-dimensional data, II
Methods for sparse analysis of high-dimensional data, II Rachel Ward May 23, 2011 High dimensional data with low-dimensional structure 300 by 300 pixel images = 90, 000 dimensions 2 / 47 High dimensional
More informationLecture 13 October 6, Covering Numbers and Maurey s Empirical Method
CS 395T: Sublinear Algorithms Fall 2016 Prof. Eric Price Lecture 13 October 6, 2016 Scribe: Kiyeon Jeon and Loc Hoang 1 Overview In the last lecture we covered the lower bound for p th moment (p > 2) and
More informationGAUSSIAN MEASURE OF SECTIONS OF DILATES AND TRANSLATIONS OF CONVEX BODIES. 2π) n
GAUSSIAN MEASURE OF SECTIONS OF DILATES AND TRANSLATIONS OF CONVEX BODIES. A. ZVAVITCH Abstract. In this paper we give a solution for the Gaussian version of the Busemann-Petty problem with additional
More informationFrom now on, we will represent a metric space with (X, d). Here are some examples: i=1 (x i y i ) p ) 1 p, p 1.
Chapter 1 Metric spaces 1.1 Metric and convergence We will begin with some basic concepts. Definition 1.1. (Metric space) Metric space is a set X, with a metric satisfying: 1. d(x, y) 0, d(x, y) = 0 x
More informationNotes taken by Costis Georgiou revised by Hamed Hatami
CSC414 - Metric Embeddings Lecture 6: Reductions that preserve volumes and distance to affine spaces & Lower bound techniques for distortion when embedding into l Notes taken by Costis Georgiou revised
More informationMajorizing measures and proportional subsets of bounded orthonormal systems
Majorizing measures and proportional subsets of bounded orthonormal systems Olivier GUÉDON Shahar MENDELSON1 Alain PAJOR Nicole TOMCZAK-JAEGERMANN Abstract In this article we prove that for any orthonormal
More informationUpper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1
Upper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1 Feng Wei 2 University of Michigan July 29, 2016 1 This presentation is based a project under the supervision of M. Rudelson.
More informationSmallest singular value of sparse random matrices
Smallest singular value of sparse random matrices Alexander E. Litvak 1 Omar Rivasplata Abstract We extend probability estimates on the smallest singular value of random matrices with independent entries
More informationA NOTION OF NONPOSITIVE CURVATURE FOR GENERAL METRIC SPACES
A NOTION OF NONPOSITIVE CURVATURE FOR GENERAL METRIC SPACES MIROSLAV BAČÁK, BOBO HUA, JÜRGEN JOST, MARTIN KELL, AND ARMIN SCHIKORRA Abstract. We introduce a new definition of nonpositive curvature in metric
More informationRANDOM FIELDS AND GEOMETRY. Robert Adler and Jonathan Taylor
RANDOM FIELDS AND GEOMETRY from the book of the same name by Robert Adler and Jonathan Taylor IE&M, Technion, Israel, Statistics, Stanford, US. ie.technion.ac.il/adler.phtml www-stat.stanford.edu/ jtaylor
More informationA UNIVERSAL BANACH SPACE WITH A K-UNCONDITIONAL BASIS
Adv. Oper. Theory https://doi.org/0.5352/aot.805-369 ISSN: 2538-225X (electronic) https://projecteuclid.org/aot A UNIVERSAL BANACH SPACE WITH A K-UNCONDITIONAL BASIS TARAS BANAKH,2 and JOANNA GARBULIŃSKA-WȨGRZYN
More informationPOSITIVE DEFINITE FUNCTIONS AND MULTIDIMENSIONAL VERSIONS OF RANDOM VARIABLES
POSITIVE DEFINITE FUNCTIONS AND MULTIDIMENSIONAL VERSIONS OF RANDOM VARIABLES ALEXANDER KOLDOBSKY Abstract. We prove that if a function of the form f( K is positive definite on, where K is an origin symmetric
More informationPACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION
PACKING-DIMENSION PROFILES AND FRACTIONAL BROWNIAN MOTION DAVAR KHOSHNEVISAN AND YIMIN XIAO Abstract. In order to compute the packing dimension of orthogonal projections Falconer and Howroyd 997) introduced
More informationDvoretzky s Theorem and Concentration of Measure
Dvoretzky s Theorem and Concentration of Measure David Miyamoto November 0, 06 Version 5 Contents Introduction. Convex Geometry Basics................................. Motivation and the Gaussian Reformulation....................
More informationEmpirical Processes: General Weak Convergence Theory
Empirical Processes: General Weak Convergence Theory Moulinath Banerjee May 18, 2010 1 Extended Weak Convergence The lack of measurability of the empirical process with respect to the sigma-field generated
More informationOn the concentration of eigenvalues of random symmetric matrices
On the concentration of eigenvalues of random symmetric matrices Noga Alon Michael Krivelevich Van H. Vu April 23, 2012 Abstract It is shown that for every 1 s n, the probability that the s-th largest
More informationEigenvalue variance bounds for Wigner and covariance random matrices
Eigenvalue variance bounds for Wigner and covariance random matrices S. Dallaporta University of Toulouse, France Abstract. This work is concerned with finite range bounds on the variance of individual
More informationMATH 51H Section 4. October 16, Recall what it means for a function between metric spaces to be continuous:
MATH 51H Section 4 October 16, 2015 1 Continuity Recall what it means for a function between metric spaces to be continuous: Definition. Let (X, d X ), (Y, d Y ) be metric spaces. A function f : X Y is
More informationTHERE IS NO FINITELY ISOMETRIC KRIVINE S THEOREM JAMES KILBANE AND MIKHAIL I. OSTROVSKII
Houston Journal of Mathematics c University of Houston Volume, No., THERE IS NO FINITELY ISOMETRIC KRIVINE S THEOREM JAMES KILBANE AND MIKHAIL I. OSTROVSKII Abstract. We prove that for every p (1, ), p
More informationLECTURE 15: COMPLETENESS AND CONVEXITY
LECTURE 15: COMPLETENESS AND CONVEXITY 1. The Hopf-Rinow Theorem Recall that a Riemannian manifold (M, g) is called geodesically complete if the maximal defining interval of any geodesic is R. On the other
More informationLocal semiconvexity of Kantorovich potentials on non-compact manifolds
Local semiconvexity of Kantorovich potentials on non-compact manifolds Alessio Figalli, Nicola Gigli Abstract We prove that any Kantorovich potential for the cost function c = d / on a Riemannian manifold
More informationMAT 570 REAL ANALYSIS LECTURE NOTES. Contents. 1. Sets Functions Countability Axiom of choice Equivalence relations 9
MAT 570 REAL ANALYSIS LECTURE NOTES PROFESSOR: JOHN QUIGG SEMESTER: FALL 204 Contents. Sets 2 2. Functions 5 3. Countability 7 4. Axiom of choice 8 5. Equivalence relations 9 6. Real numbers 9 7. Extended
More informationNotes on Gaussian processes and majorizing measures
Notes on Gaussian processes and majorizing measures James R. Lee 1 Gaussian processes Consider a Gaussian process {X t } for some index set T. This is a collection of jointly Gaussian random variables,
More informationKRIVINE SCHEMES ARE OPTIMAL
KRIVINE SCHEMES ARE OPTIMAL ASSAF NAOR AND ODED REGEV Abstract. It is shown that for every N there exists a Borel probability measure µ on { 1, 1} R { 1, 1} R such that for every m, n N and x 1,..., x
More informationCOARSELY EMBEDDABLE METRIC SPACES WITHOUT PROPERTY A
COARSELY EMBEDDABLE METRIC SPACES WITHOUT PROPERTY A PIOTR W. NOWAK ABSTRACT. We study Guoliang Yu s Property A and construct metric spaces which do not satisfy Property A but embed coarsely into the Hilbert
More informationON ASSOUAD S EMBEDDING TECHNIQUE
ON ASSOUAD S EMBEDDING TECHNIQUE MICHAL KRAUS Abstract. We survey the standard proof of a theorem of Assouad stating that every snowflaked version of a doubling metric space admits a bi-lipschitz embedding
More informationSzemerédi s Lemma for the Analyst
Szemerédi s Lemma for the Analyst László Lovász and Balázs Szegedy Microsoft Research April 25 Microsoft Research Technical Report # MSR-TR-25-9 Abstract Szemerédi s Regularity Lemma is a fundamental tool
More informationApplied Analysis (APPM 5440): Final exam 1:30pm 4:00pm, Dec. 14, Closed books.
Applied Analysis APPM 44: Final exam 1:3pm 4:pm, Dec. 14, 29. Closed books. Problem 1: 2p Set I = [, 1]. Prove that there is a continuous function u on I such that 1 ux 1 x sin ut 2 dt = cosx, x I. Define
More informationSome Useful Background for Talk on the Fast Johnson-Lindenstrauss Transform
Some Useful Background for Talk on the Fast Johnson-Lindenstrauss Transform Nir Ailon May 22, 2007 This writeup includes very basic background material for the talk on the Fast Johnson Lindenstrauss Transform
More information