Tail estimates for norms of sums of log-concave random vectors


Radosław Adamczak, Rafał Latała, Alexander E. Litvak, Alain Pajor, Nicole Tomczak-Jaegermann

Abstract

We establish new tail estimates for order statistics and for the Euclidean norms of projections of an isotropic log-concave random vector. More generally, we prove tail estimates for the norms of projections of sums of independent log-concave random vectors, and uniform versions of these in the form of tail estimates for operator norms of matrices and their sub-matrices in the setting of a log-concave ensemble. This is used to study a quantity A_{k,m} that controls uniformly the operator norm of the sub-matrices with k rows and m columns of a matrix A with independent isotropic log-concave random rows. We apply our tail estimates of A_{k,m} to the study of the Restricted Isometry Property that plays a major role in the Compressive Sensing theory.

AMS Classification: 46B06, 15B52, 60E15, 60B20

Key Words and Phrases: log-concave random vectors, concentration inequalities, deviation inequalities, random matrices, order statistics of random vectors, Compressive Sensing, Restricted Isometry Property.

Research partially supported by the MNiSW Grant no. N N and the Foundation for Polish Science. Research partially supported by the E.W.R. Steacie Memorial Fellowship. Research partially supported by the ANR project ANR-08-BLAN. This author holds the Canada Research Chair in Geometric Analysis.

1 Introduction

In recent years a lot of work has been done on the study of the empirical covariance matrix, and on understanding related random matrices with independent rows or columns. In particular, such matrices appear naturally in two important and distinct directions: namely, the approximation of covariance matrices of high-dimensional distributions by empirical covariance matrices, and the Restricted Isometry Property of sensing matrices defined in Compressive Sensing theory.

To illustrate, let n, N be integers. For 1 ≤ m ≤ N, by U_m = U_m(R^N) we denote the set of m-sparse vectors of norm one, that is, vectors x ∈ S^{N−1} with at most m non-zero coordinates. For any n × N random matrix A, treating A as a linear operator A : R^N → R^n, we define δ_m(A) by

δ_m(A) = sup_{x∈U_m} | ‖Ax‖² − E‖Ax‖² |.

Here ‖·‖ denotes the Euclidean norm on R^n. Now let X ∈ R^N be a centered random vector with covariance matrix equal to the identity, that is, E(X ⊗ X) = Id; such vectors are called isotropic. Consider n independent random vectors X_1, ..., X_n distributed as X and let A be the n × N matrix whose rows are X_1, ..., X_n. Then

δ_m(A/√n) = sup_{x∈U_m} (1/n) | ‖Ax‖² − E‖Ax‖² | = sup_{x∈U_m} (1/n) | Σ_{i=1}^n ( ⟨X_i, x⟩² − E⟨X_i, x⟩² ) |.   (1.1)

In the particular case m = N it is also easy to check that

δ_N(A/√n) = ‖ (1/n) Σ_{i=1}^n X_i ⊗ X_i − E(X ⊗ X) ‖.   (1.2)

We first discuss the case n ≥ N. In this case we will work only with the parameter δ_N(A/√n). By the law of large numbers, under some moment hypothesis, the empirical covariance matrix (1/n) Σ_{i=1}^n X_i ⊗ X_i converges to E(X ⊗ X) = Id in the operator norm, as n → ∞. A natural goal, important for many classes of distributions, is to get quantitative estimates of the rate of this convergence, in other words, to estimate the error term δ_N(A/√n) with high probability, as n → ∞.
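This rate-of-convergence question is easy to probe numerically. The sketch below is ours, not from the paper: it draws rows with i.i.d. symmetric exponential coordinates (an isotropic log-concave distribution) and computes δ_N(A/√n) as the operator-norm distance between the empirical covariance matrix and the identity; the function name `delta_N` is our own.

```python
import numpy as np

def delta_N(A):
    """Operator-norm distance between the empirical covariance matrix
    (1/n) * sum_i X_i (x) X_i of the rows of A and the identity."""
    n, N = A.shape
    emp_cov = (A.T @ A) / n
    return np.linalg.norm(emp_cov - np.eye(N), ord=2)

rng = np.random.default_rng(0)
N = 20
# Coordinates are i.i.d. symmetric exponential (Laplace) with variance 1,
# so each row is an isotropic log-concave random vector.
sample = lambda n: rng.laplace(scale=1 / np.sqrt(2), size=(n, N))

err_small = delta_N(sample(200))
err_large = delta_N(sample(20000))
```

With the larger sample the error term is markedly smaller, consistent with a rate of order √(N/n).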

This question was raised and investigated in [17], motivated by a problem of complexity in computing volumes in high dimensions. In this setting it was natural to consider uniform measures on convex bodies, or, more generally, log-concave measures (see below for all the definitions). Partial solutions were given in [10] and [26] soon after the question was raised, and in the intervening years further partial solutions were produced. A full and optimal answer to the Kannan-Lovász-Simonovits question was given in [5] and [7]. For recent results on similar questions for other distributions, see e.g. [29, 27]. The answer from [5] and [7] to the K-L-S question on the rate of convergence stated that

P( sup_{x∈S^{N−1}} | (1/n) Σ_{i=1}^n ⟨X_i, x⟩² − 1 | ≥ C √(N/n) ) ≤ e^{−c√N},   (1.3)

where C and c are absolute positive constants. The proofs are based on an approach initiated by J. Bourgain [10], in which the following norm of a matrix played a central role. Let 1 ≤ k ≤ n; then

A_{k,N} = sup_{J⊂{1,...,n}, |J|=k} sup_{x∈S^{N−1}} ( Σ_{j∈J} ⟨X_j, x⟩² )^{1/2} = sup_{J⊂{1,...,n}, |J|=k} sup_{x∈S^{N−1}} ‖P_J Ax‖,   (1.4)

where for J ⊂ {1,...,n}, P_J denotes the orthogonal projection onto the coordinate subspace R^J of R^n.

To understand the role of A_{k,N} in estimating δ_N(A/√n), let us explain the standard approach. For each individual x on the sphere, the rate of convergence may be estimated via some probabilistic concentration inequality. The method consists of a discretization of the sphere and then the use of an approximation argument to complete the proof. This approach works perfectly as long as the trade-off between complexity and concentration allows it. Thus, when the random variables (1/n) Σ_{i=1}^n ⟨X_i, x⟩² satisfy a concentration inequality good enough to handle uniformly exponentially many points, the method works. This is the case, for instance, when the random variables ⟨X, x⟩ are sub-Gaussian or bounded, due to Bernstein inequalities. In the general case, we decompose the function Σ_{i=1}^n ⟨X_i, x⟩² as the sum of two terms, the first being its truncation at the level B², for some B > 0.
Now let us discuss the second term in the decomposition of Σ_{i=1}^n ⟨X_i, x⟩². Let E_B = E_B(x) = { i ≤ n : |⟨X_i, x⟩| > B }.

For simplicity, let us assume that the maximal cardinality of the sets of the family {E_B(x) : x ∈ S^{N−1}} is a fixed non-random number k; then clearly the second term is controlled by Σ_{i∈E_B} ⟨X_i, x⟩² ≤ A²_{k,N}. In order to estimate k, let x be such that k = |E_B(x)| = |E_B|; then B²k = B²|E_B| ≤ Σ_{i∈E_B} ⟨X_i, x⟩². Thus we get the implicit relation B²k ≤ A²_{k,N}. From this relation and an estimate of the parameter A_{k,N} we eventually deduce an upper bound for k. To conclude the argument of Bourgain, the bounded part is uniformly estimated by a classical concentration inequality and the rest is controlled by the parameter A_{k,N}. Notice that we only need tail inequalities to estimate A_{k,N}, that is, to control uniformly the norms of sub-matrices of A. This is still a difficult task, however, because of the high complexity of the problem and the lack of matching probability estimates; a more sophisticated argument was developed in [5] to handle it.

We now pass to the complementary case n < N, which is one of the central points of the present paper, and was announced in [4]. Let A be an n × N random matrix with rows X_1, ..., X_n which are independent, random, centered, and with covariance matrices equal to the identity, but not necessarily identically distributed. Clearly, A is then not invertible. The uniform concentration on the sphere U_N = S^{N−1} which appeared in the definition of δ_m(A/√n) for m = N does not hold, and the expressions in (1.1) are not uniformly small on U_N = S^{N−1}. The best one can hope for is that A may be almost norm-preserving on some subsets of S^{N−1}. This is true for the subsets U_m, for some 1 ≤ m ≤ N, and is indeed measured by δ_m(A/√n). The parameter δ_m plays a major role in Compressive Sensing theory, and an important question is to bound it from above with high probability, for some fixed m. For example, it can be directly used to express the so-called Restricted Isometry Property (RIP) introduced by E. Candes and T. Tao in [12], which in turn ensures that every m-sparse vector x can be

reconstructed from its compression Ax, with n ≪ N, by the so-called ℓ_1-minimization method. For matrices with independent rows X_1, ..., X_n, questions on the RIP were understood and solved in the case of Gaussian and sub-Gaussian measurements (see [12], [23] and [8]). When X_1, ..., X_n are independent log-concave isotropic random vectors, these questions remained open, and this is one of our motivations for this article.

For an n × N matrix A and m ≤ N, the definition of δ_m(A/√n) implies a uniform control of the norms of all sub-matrices of A/√n with n rows and m columns. Passing to transposed matrices, it implies a uniform control of ‖P_I X_i‖ over all I ⊂ {1, 2, ..., N} of cardinality m and all 1 ≤ i ≤ n. In order to verify a necessary condition that, for some m, δ_m(A/√n) is small with high probability, one needs an upper estimate for sup{ ‖P_I X‖ : |I| = m } valid with high probability. The probabilistic inequality from [24],

P( ‖P_I X‖ ≥ C t √m ) ≤ e^{−t√m},   (1.5)

valid for t ≥ 1, is optimal for each individual I, but it does not allow one to get a uniform estimate directly by a union bound argument, because the probability estimate does not match the cardinality of the family of the I's. Thus the first natural goal we address in this paper is to get uniform tail estimates for some norms of log-concave random vectors.

This heuristic analysis points to the main objective and novelty of the present paper, namely the study of high-dimensional log-concave measures and a deeper understanding of such measures and their convolutions via new tail estimates for norms of sums of projections of log-concave random vectors. To emphasize the uniform character of our tail estimates, for an integer N ≥ 1, an N-dimensional random vector Z, an integer 1 ≤ m ≤ N, and t ≥ 1, we consider the event

Ω(Z, t, m, N) = { sup_{I⊂{1,...,N}, |I|=m} ‖P_I Z‖ ≥ C t √m log(eN/m) },   (1.6)

where C is a sufficiently large absolute constant. Note that the cut-off level in this definition is of the order of the median of the supremum for the exponential random vector.

Recall that X, X_1, ..., X_n denote N-dimensional independent log-concave isotropic random vectors, and A is the n × N matrix whose rows are X_1, ..., X_n. A chain of main results of this paper provides estimates for P(Ω(Z, t, m, N)) in the cases when (i) Z = X; and, more generally, (ii) Z = Y is a weighted sum Y = Σ_{i=1}^n x_i X_i, where x = (x_i)_{i=1}^n ∈ R^n, with control of the Euclidean and supremum norms of x; (iii) a uniform version of (ii) in the form of tail estimates for operator norms of sub-matrices of A.

Our first main theorem solves the question of uniform tail estimates for projections of a log-concave random vector discussed above.

Theorem 1.1. Let X be an N-dimensional log-concave isotropic random vector. For any 1 ≤ m ≤ N and t ≥ 1,

P( Ω(X, t, m, N) ) ≤ exp( − t √m log(eN/m) / log(em) ).

The proof of the theorem is based on tail estimates for order statistics of isotropic log-concave vectors. By (X*_i)_{i≤N} we denote the non-increasing rearrangement of (|X(i)|)_{i≤N}. Combining (1.5) with methods of [18] and the formula

sup_{I⊂{1,...,N}, |I|=m} ‖P_I X‖ = ( Σ_{i≤m} (X*_i)² )^{1/2}

will complete the argument. Let us also mention that further applications (in Section 4) of inequalities of this type require a stronger probability bound that involves a natural parameter σ_X(p), defined in (3.3), determined by an ℓ_p-weak behavior of the random vector X.

More generally, the next step provides tail estimates for Euclidean norms of weighted sums of independent isotropic log-concave random vectors. Let x = (x_i)_{i=1}^n ∈ R^n and set Y = Σ_{i=1}^n x_i X_i = A^⊤x. The key estimate used later, Theorem 4.3, provides uniform estimates for the Euclidean norm of projections of Y. Namely, for every x ∈ R^n, P(Ω(Y, t, m, N)) is exponentially small, with specific estimates depending on whether the ratio ‖x‖_∞/‖x‖_2 is larger or smaller than 1/√m. Since the precise formulations of the probability estimates are rather convoluted, we do not state them here and refer the reader to Section 4.
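The supremum over all (N choose m) coordinate projections appearing in these events is computable without enumeration: by the formula above it equals the Euclidean norm of the m largest coordinates in absolute value. A quick numerical check of that identity (the helper names are ours, not the paper's):

```python
import numpy as np
from itertools import combinations

def sup_proj_norm_bruteforce(z, m):
    """sup over all |I| = m of ||P_I z||, by enumerating coordinate subsets."""
    return max(np.linalg.norm(z[list(I)])
               for I in combinations(range(len(z)), m))

def sup_proj_norm_sorted(z, m):
    """The same quantity via the non-increasing rearrangement:
    (sum of the m largest z_i^2)^(1/2)."""
    zs = np.sort(np.abs(z))[::-1]          # z*_1 >= z*_2 >= ...
    return float(np.sqrt(np.sum(zs[:m] ** 2)))

rng = np.random.default_rng(1)
z = rng.laplace(scale=1 / np.sqrt(2), size=8)  # isotropic log-concave coordinates
pairs = [(sup_proj_norm_bruteforce(z, m), sup_proj_norm_sorted(z, m))
         for m in (1, 3, 8)]
```

For m = N the quantity is simply the full Euclidean norm of z.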

The last step of this chain of results estimating probabilities of (1.6) is connected with the family of parameters A_{k,m}, with 1 ≤ k ≤ n and 1 ≤ m ≤ N, defined by

A_{k,m} = sup_{J⊂{1,...,n}, |J|=k} sup_{x∈U_m} ( Σ_{j∈J} ⟨X_j, x⟩² )^{1/2} = sup_{J⊂{1,...,n}, |J|=k} sup_{x∈U_m} ‖P_J Ax‖.   (1.7)

That is, A_{k,m} is the maximal operator norm over all sub-matrices of A with k rows and m columns, and for m = N it obviously coincides with (1.4). Finding bounds on the deviation of A_{k,m} is one of our main goals. To develop an intuition for this result we state it below in a slightly less technical form. Full details are contained in Theorem 5.1.

Theorem 1.2. For any t ≥ 1 and n ≤ N we have

P( A_{k,m} ≥ C t λ ) ≤ exp( − t λ / √(log(3m)) ),

where λ = √(log log(3m)) ( √m log(eN/m) + √k log(en/k) ) and C is a universal constant.

The threshold value λ is optimal, up to the factor of √(log log(3m)). Assuming additionally unconditionality of the distributions of the rows or columns, this factor can be removed to get a sharp estimate (see [3]).

We make several comments about the proof. Set Γ = A^⊤. Then

A_{k,m} = sup_{I⊂{1,...,N}, |I|=m} sup_{x∈U_k(R^n)} ‖ Σ_i x_i P_I X_i ‖ = sup_{I⊂{1,...,N}, |I|=m} sup_{x∈U_k(R^n)} ‖P_I Γx‖.

To bound A_{k,m} one then has to prove uniformity with respect to two families of a different character: one coming from the cardinality of the family {I ⊂ {1,...,N} : |I| = m}, and the other from the complexity of U_k(R^n). This leads us to distinguish two cases, depending on the relation between k and the quantity

k_m = inf{ l ≥ 1 : √m log(eN/m) ≤ √l log(en/l) }.

First, if k ≥ k_m, we adjust a chaining argument similar to the one from [5] to reduce the problem to the case k ≤ k_m. In this step we use the uniform tail estimate from Theorem 3.4 for the Euclidean norm of the family of vectors
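For small matrices the parameter A_{k,m} of (1.7) can be evaluated by brute force, since the inner supremum over m-sparse unit vectors is the largest singular value of the corresponding k × m sub-matrix. This sketch is ours (illustrative only; `A_km` is our own name):

```python
import numpy as np
from itertools import combinations

def A_km(A, k, m):
    """A_{k,m}: maximal operator (spectral) norm over all k x m
    sub-matrices of A, computed by exhaustive enumeration."""
    n, N = A.shape
    return max(np.linalg.norm(A[np.ix_(J, I)], ord=2)
               for J in combinations(range(n), k)
               for I in combinations(range(N), m))

rng = np.random.default_rng(2)
# rows with i.i.d. variance-1 symmetric exponential coordinates: isotropic, log-concave
A = rng.laplace(scale=1 / np.sqrt(2), size=(4, 6))
```

Note that A_{k,m} is non-decreasing in both k and m, and for k = n, m = N it is the operator norm of A itself.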

{P_I X : |I| = m}. Next, we use a different chain decomposition of x and apply Theorem 4.3.

As already alluded to, an independent interest of this paper lies in upper bounds for δ_m(A/√n), where A is our n × N random matrix. We presently return to this subject to explain the connections. The family A_{k,m} plays a very essential role in studies of the Restricted Isometry constant, and in fact applies even in a more general setting. Namely, for an arbitrary subset T ⊂ R^N and 1 ≤ k ≤ n define the parameter A_k(T) by

A_k(T) = sup_{I⊂{1,...,n}, |I|=k} sup_{y∈T} ( Σ_{i∈I} ⟨X_i, y⟩² )^{1/2}.

Thus A_{k,m} = A_k(U_m). The parameter A_k(T) was studied in [22] by means of Talagrand's γ-functionals. The following lemma reduces a concentration inequality to a deviation inequality and hence is useful in studies of the RIP. It is based on a truncation argument similar to Bourgain's approach presented earlier.

Lemma 1.3. Let X_1, ..., X_n be independent isotropic random vectors in R^N. Let T ⊂ S^{N−1} be a finite set. Let 0 < θ < 1 and B ≥ 1. Then with probability at least 1 − |T| exp(−3θ²n/8B²) one has

sup_{y∈T} (1/n) | Σ_{i=1}^n ( ⟨X_i, y⟩² − E⟨X_i, y⟩² ) | ≤ θ + (1/n) ( A_k(T)² + E A_k(T)² ),

where k ≤ n is the largest integer satisfying k ≤ (A_k(T)/B)².

In this paper we focus on the compressive sensing setting, where T is the set of sparse vectors. The lemma above shows that, after a suitable discretisation, estimating δ_m, or checking the RIP, can be reduced to estimating A_{k,m}. This generalizes naturally Bourgain's approach explained above for m = N. Using the lemma, we can show that if 0 < θ < 1, B ≥ 1, and m ≤ N satisfy m log(CN/m) ≤ 3θ²n/16B², then with probability at least 1 − exp(−3θ²n/16B²) one has

δ_m(A/√n) ≤ θ + (1/n) ( A²_{k,m} + E A²_{k,m} ),

where k ≤ n is the largest integer satisfying k ≤ (A_{k,m}/B)² (note that k is a random variable). Combining this with the tail inequalities from Theorem 1.2 allows us to prove the following result on the RIP of matrices with independent isotropic log-concave rows.

Theorem 1.4. Let 0 < θ < 1 and 1 ≤ m ≤ n ≤ N. Let A be an n × N random matrix with independent isotropic log-concave rows. There exists c(θ) > 0 such that δ_m(A/√n) ≤ θ with overwhelming probability, whenever m log²(2N/m) log log(3m) ≤ c(θ) n.

The result is optimal, up to the factor log log(3m), as shown in [6]. As for Theorem 5.1, assuming unconditionality of the distributions of the rows, this factor can be removed (see [3]).

The paper is organized as follows. In the next section we collect the notation and the necessary preliminary tools concerning log-concave random variables. In Section 3, given an isotropic log-concave random vector X, we present several uniform tail estimates for Euclidean norms of the whole family of projections of X onto coordinate subspaces of dimension m. As already mentioned, these estimates are based on tail estimates for order statistics of X. The main result, Theorem 3.4, provides a strong probability bound in terms of the ℓ_p-weak parameter σ_X(p) defined in (3.3). The proofs of the main technical results, Theorems 3.2 and 3.4, are given in Section 7. Section 4 provides tail estimates for Euclidean norms of projections of weighted sums of independent isotropic log-concave random vectors. The proof of the main Theorem 4.3 is a combination of Theorem 3.4 and the one-dimensional Proposition 4.1. In Section 5 we prove the result announced above on the deviation of A_{k,m}. Section 6 treats the Restricted Isometry Property and estimates of δ_m(A/√n). The last Section 7 is devoted to the proofs of the technical results of Section 3.
Acknowledgement: The research on this project was partially done when the authors participated in the Thematic Program on Asymptotic Geometric Analysis at the Fields Institute in Toronto in Fall 2010 and in the Discrete Analysis Programme at the Isaac Newton Institute in Cambridge in Spring. The authors wish to thank these institutions for their hospitality and excellent working conditions.

2 Notation and preliminaries

Let L be an origin-symmetric convex compact body in R^d. It is the unit ball of a norm that we denote by ‖·‖_L. Let K ⊂ R^d. We say that a set Λ ⊂ K is an ε-net of K with respect to the metric corresponding to L if K ⊂ ∪_{z∈Λ} (z + εL). In other words, for every x ∈ K there exists z ∈ Λ such that ‖x − z‖_L ≤ ε. We will mostly use ε-nets in the case K = L. It is well known, and follows by the standard volume argument, that for every symmetric convex compact body K in R^d and every ε > 0 there exists an ε-net Λ of K, with respect to the metric corresponding to K, of cardinality not exceeding (1 + 2/ε)^d. It is also easy to see that K ⊂ (1 − ε)^{−1} conv(Λ). In particular, for any convex positively 1-homogeneous function f one has

sup_{x∈K} f(x) ≤ (1 − ε)^{−1} sup_{x∈Λ} f(x).

A random vector X in R^n is called isotropic if

E⟨X, y⟩ = 0,  E⟨X, y⟩² = ‖y‖²  for all y ∈ R^n,

in other words, if X is centered and its covariance matrix E(X ⊗ X) is the identity. A random vector X in R^n is called log-concave if for all compact nonempty sets A, B ⊂ R^n and θ ∈ [0, 1],

P( X ∈ θA + (1 − θ)B ) ≥ P(X ∈ A)^θ P(X ∈ B)^{1−θ}.

By a result of Borell [9], a random vector X with full-dimensional support is log-concave if and only if it admits a log-concave density f, i.e. a density for which

f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^{1−θ}  for all x, y ∈ R^n, θ ∈ [0, 1].

It is known that any affine image, in particular any projection, of a log-concave random vector is log-concave. Moreover, if X and Y are independent log-concave random vectors, then so is X + Y (see [9, 14, 25]). One important and simple model of a centered log-concave random variable with variance 1 is the symmetric exponential random variable E, which has density f(t) = 2^{−1/2} exp(−√2 |t|). In particular, for every s > 0 we have P(|E| ≥ s) = exp(−√2 s).
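The ε-net facts at the start of this section can be illustrated numerically: a maximal ε-separated subset of a set is automatically an ε-net of it, and the volume argument caps the cardinality of any ε-separated subset of the unit ball by (1 + 2/ε)^d. A small sketch for the Euclidean ball (ours, not from the paper; `greedy_net` is our own name):

```python
import numpy as np

def greedy_net(points, eps):
    """Greedily collect an eps-separated subset of a point cloud;
    by maximality the result is automatically an eps-net of the cloud."""
    net = []
    for p in points:
        if all(np.linalg.norm(p - q) > eps for q in net):
            net.append(p)
    return np.array(net)

rng = np.random.default_rng(3)
d, eps = 2, 0.5
pts = rng.uniform(-1, 1, size=(20000, d))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]   # dense sample of B_2^d
net = greedy_net(pts, eps)
# distance from each sampled point to its nearest net point
dists = np.min(np.linalg.norm(pts[:, None, :] - net[None, :, :], axis=2), axis=1)
```

Every sampled point ends up within ε of the net, while the net itself stays much smaller than the volume-argument bound (1 + 2/ε)^d = 25 here.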

Every centered log-concave random variable Z with variance 1 satisfies a sub-exponential inequality: for every s > 0,

P( |Z| ≥ s ) ≤ C exp(−s/C),   (2.1)

where C > 0 is an absolute constant (see [9]).

Definition 2.1. For a random variable Z we define the ψ_1-norm by

‖Z‖_{ψ_1} = inf { C > 0 : E exp(|Z|/C) ≤ 2 },

and we say that Z is ψ_1 with constant ψ if ‖Z‖_{ψ_1} ≤ ψ.

A consequence of (2.1) is that there exists an absolute constant C > 0 such that any centered log-concave random variable with variance 1 is ψ_1 with constant C. It is well known that the ψ_1-norm of a random variable may be estimated from the growth of its moments. More precisely, if a random variable Z is such that for every p ≥ 1, (E|Z|^p)^{1/p} ≤ pK for some K > 0, then ‖Z‖_{ψ_1} ≤ cK, where c is an absolute constant.

By ‖·‖ we denote the standard Euclidean norm on R^n; we write |A| for the cardinality of a set A. By ⟨·, ·⟩ we denote the standard inner product on R^n. We denote by B_2^n and S^{n−1} the standard Euclidean unit ball and unit sphere in R^n. A vector x ∈ R^n is called sparse, or k-sparse for some 1 ≤ k ≤ n, if the cardinality of its support satisfies |supp x| ≤ k. We let

U_k = U_k(R^n) := { x ∈ S^{n−1} : x is k-sparse }.   (2.2)

For any subset I ⊂ {1,...,N}, let P_I denote the orthogonal projection onto the coordinate subspace R^I := { y ∈ R^N : supp y ⊂ I }. We will use the letters C, C_0, C_1, ..., c, c_0, c_1, ... to denote positive absolute constants whose values may differ at each occurrence.

3 New bounds for log-concave vectors

In this section we state several new estimates for Euclidean norms of log-concave random vectors. The proofs of Theorems 3.2 and 3.4 are given in Section 7.

We start with the following theorem, which was essentially proved by Paouris in [24]. Indeed, it is a consequence of Theorem 8.2 combined with Lemma 3.9 of that paper, after checking that Lemma 3.9 holds not only for convex bodies but for log-concave measures as well.

Theorem 3.1. For any N-dimensional log-concave random vector X and any p ≥ 1 we have

( E‖X‖^p )^{1/p} ≤ C ( ( E‖X‖² )^{1/2} + sup_{t∈S^{N−1}} ( E|⟨t, X⟩|^p )^{1/p} ),   (3.1)

where C is an absolute constant.

Remarks. 1. It is well known (cf. [9]) that if Z is a log-concave random variable then

( E|Z|^p )^{1/p} ≤ C (p/q) ( E|Z|^q )^{1/q}   for p ≥ q ≥ 2.

If Z is symmetric one may in fact take C = 1 (cf. Proposition 3.8 in [20]), and if Z is centered then, denoting by Z' an independent copy of Z, we get for p ≥ q ≥ 2,

( E|Z|^p )^{1/p} ≤ ( E|Z − Z'|^p )^{1/p} ≤ (p/q) ( E|Z − Z'|^q )^{1/q} ≤ 2 (p/q) ( E|Z|^q )^{1/q}.

Therefore, if X ∈ R^N is isotropic log-concave then

sup_{t∈S^{N−1}} ( E|⟨t, X⟩|^p )^{1/p} ≤ p sup_{t∈S^{N−1}} ( E|⟨t, X⟩|² )^{1/2} = p.

Also note that ( E‖X‖² )^{1/2} = √N. Combining these estimates with inequality (3.1), we get that ( E‖X‖^p )^{1/p} ≤ C( √N + p ). Using Chebyshev's inequality we conclude that there exists C > 0 such that for every isotropic log-concave random vector X ∈ R^N and every s ≥ 1,

P( ‖X‖ ≥ C s √N ) ≤ e^{−s√N},   (3.2)

which is Theorem 1.1 from [24].

2. It is well known (and follows from [9]) that for any p ≥ 1, ( E‖X‖^{2p} )^{1/2p} ≤ C ( E‖X‖^p )^{1/p}, where C is an absolute constant. From the comparison between

the first and second moments it is clear that inequality (3.1) is in fact an equivalence. Moreover, there exists C > 0 such that

P( ‖X‖ ≥ C ( ( E‖X‖² )^{1/2} + sup_{t∈S^{N−1}} ( E|⟨t, X⟩|^p )^{1/p} ) ) ≤ e^{−p}

and

P( ‖X‖ ≥ (1/C) ( ( E‖X‖² )^{1/2} + sup_{t∈S^{N−1}} ( E|⟨t, X⟩|^p )^{1/p} ) ) ≥ min{ 1/C, e^{−p} }.

The upper bound follows trivially from Chebyshev's inequality. The lower bound is a consequence of the Paley-Zygmund inequality and the comparison between the p-th and 2p-th moments of ‖X‖.

3. Since for any Euclidean norm ‖·‖' on R^N there exists a linear map T such that ‖x‖' = ‖Tx‖, and the class of log-concave random vectors is closed under linear transformations, Theorem 3.1 implies that for any N-dimensional log-concave vector X, any Euclidean norm ‖·‖' on R^N, and p ≥ 1 we have

( E‖X‖'^p )^{1/p} ≤ C ( ( E‖X‖'² )^{1/2} + sup_{‖t‖'_* ≤ 1} ( E|⟨t, X⟩|^p )^{1/p} ),

where (R^N, ‖·‖'_*) is the dual space to (R^N, ‖·‖'). It is an open problem whether such an inequality holds for arbitrary norms (see [19] for a discussion of this question and for related results).

We now introduce our main technical notation. For a random vector X = (X(1), ..., X(N)) in R^N, p ≥ 1 and t > 0, consider the functions

σ_X(p) = sup_{t∈S^{N−1}} ( E|⟨t, X⟩|^p )^{1/p}   (3.3)

and

N_X(t) = Σ_{i=1}^N 1_{{X(i) ≥ t}}.

That is, N_X(t) is equal to the number of coordinates of X larger than or equal to t. By σ_X^{−1} we denote the inverse of σ_X, i.e., σ_X^{−1}(s) = sup{ t : σ_X(t) ≤ s }. Remark 1 after Theorem 3.1 implies that for isotropic vectors X, σ_X(tp) ≤ 2tσ_X(p) for p ≥ 2, t ≥ 1, and σ_X^{−1}(2ts) ≥ tσ_X^{−1}(s) for t, s ≥ 1.
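The counting function N_X records one-sided exceedances; combined with the same function for −X, it controls the order statistics of the absolute coordinates: if the l-th largest absolute coordinate is at least t, then at least half of those l coordinates exceed t, or at least half of the coordinates of −X do. A quick numerical check of this implication, used below to convert Theorem 3.2 into bounds on order statistics (helper names are ours):

```python
import numpy as np

def N_X(x, t):
    """One-sided counting function: number of coordinates with x_i >= t."""
    return int(np.sum(x >= t))

def order_stat(x, l):
    """l-th largest coordinate of x in absolute value (1-indexed)."""
    return float(np.sort(np.abs(x))[::-1][l - 1])

rng = np.random.default_rng(4)
x = rng.laplace(scale=1 / np.sqrt(2), size=50)  # variance-1 symmetric exponential
# whenever X*_l >= t, at least l/2 coordinates of x or of -x are >= t
checks = [(N_X(x, t) >= l / 2) or (N_X(-x, t) >= l / 2)
          for l in range(1, 51) for t in (0.1, 0.5, 1.0)
          if order_stat(x, l) >= t]
```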

We also denote the nonincreasing rearrangement of |X(1)|, ..., |X(N)| by X*_1 ≥ X*_2 ≥ ... ≥ X*_N. One of the main technical tools of this paper says:

Theorem 3.2. For any N-dimensional log-concave isotropic random vector X, p ≥ 2 and t ≥ C log( Nt²/σ_X²(p) ) we have

E ( t² N_X(t) )^p ≤ ( C σ_X(p) )^{2p},

where C is an absolute positive constant.

We apply Theorem 3.2 to obtain probability estimates on the order statistics X*_l.

Theorem 3.3. For any N-dimensional log-concave isotropic random vector X, any 1 ≤ l ≤ N and t ≥ C log(eN/l),

P( X*_l ≥ t ) ≤ exp( − σ_X^{−1}( t√l / C ) ),

where C is an absolute positive constant.

Proof. Observe that σ_{−X}(p) = σ_X(p) and that X*_l ≥ t implies that N_X(t) ≥ l/2 or N_{−X}(t) ≥ l/2. So by Chebyshev's inequality and Theorem 3.2,

P( X*_l ≥ t ) ≤ (2/(t²l))^p ( E( t² N_X(t) )^p + E( t² N_{−X}(t) )^p ) ≤ 2 ( √2 C σ_X(p) / (t√l) )^{2p},

provided that t ≥ C log( Nt²/σ_X²(p) ), where C is an absolute positive constant. To conclude the proof it is enough to take p = σ_X^{−1}( t√l/(Ce) ) and to notice that the restriction on t follows from the condition t ≥ C log(eN/l). ∎

We can now state one of the main results of this paper.

Theorem 3.4. Let X be an isotropic log-concave random vector in R^N and m ≤ N. For any t ≥ 1,

P( sup_{I⊂{1,...,N}, |I|=m} ‖P_I X‖ ≥ C t √m log(eN/m) ) ≤ exp( − σ_X^{−1}( t √m log(eN/m) ) / log(em/m_0) ),

where C is an absolute positive constant and

m_0 = m_0(X, m, t) = sup{ k : √k log(eN/k) ≤ σ_X^{−1}( t √m log(eN/m) ) }.

Remark. We believe that the probability estimate should not contain any logarithmic term in the denominator, but it seems that our methods fail to show it. However, this is not crucial in the sequel.

Since σ_X(p) ≤ p, Theorem 1.1 is an immediate consequence of Theorem 3.4.

4 Tail estimates for projections of sums of log-concave random vectors

We shall now study the consequences that the results of Section 3 have for tail estimates for Euclidean norms of projections of sums of log-concave random vectors. Namely, we investigate the behavior of a random vector Y = Y(x) = Σ_{i=1}^n x_i X_i, where X_1, ..., X_n are independent isotropic log-concave random vectors in R^N and x = (x_i)_{i=1}^n ∈ R^n is a fixed vector. We provide uniform bounds on projections of such a vector. We start with the following proposition.

Proposition 4.1. Let X_1, ..., X_n be independent isotropic log-concave random vectors in R^N, x = (x_i)_{i=1}^n ∈ R^n, and Y = Σ_{i=1}^n x_i X_i. Then for every p ≥ 1 one has

σ_Y(p) = sup_{t∈S^{N−1}} ( E|⟨t, Y⟩|^p )^{1/p} ≤ C ( √p ‖x‖_2 + p ‖x‖_∞ ),

where C is an absolute positive constant.

Proof. For every t ∈ S^{N−1} we have ⟨t, Y⟩ = Σ_{i=1}^n x_i ⟨t, X_i⟩. Let E_i be independent symmetric exponential random variables with variance 1. Let t ∈ S^{N−1} and x = (x_i)_{i=1}^n ∈ R^n. The variables Z_i = ⟨t, X_i⟩ are one-dimensional, centered, log-concave with variance 1; therefore, by (2.1), for every s > 0 one has

P( |Z_i| ≥ s ) ≤ C_0 P( |E_i| ≥ s/C_0 ).

Let ε_i be independent Bernoulli ±1 random variables, independent also from the Z_i. A classical symmetrization argument and Lemma 4.6 of [21] imply that there exists C such that

( E|⟨t, Y⟩|^p )^{1/p} ≤ 2 ( E| Σ_{i=1}^n x_i ε_i Z_i |^p )^{1/p} ≤ C ( E| Σ_{i=1}^n x_i E_i |^p )^{1/p}.

The well-known estimate (which follows e.g. from Theorem 1 in [16])

( E| Σ_{i=1}^n x_i E_i |^p )^{1/p} ≤ C ( √p ‖x‖_2 + p ‖x‖_∞ )

concludes the proof. ∎

Corollary 4.2. Let X_1, ..., X_n, x and Y be as in Proposition 4.1 and 1 ≤ l ≤ N. Then for any t ≥ C ‖x‖_2 log(eN/l) one has

P( Y*_l ≥ t ) ≤ exp( − (1/C) min{ t²l/‖x‖_2², t√l/‖x‖_∞ } ),

where C is an absolute positive constant.

Proof. The vector Z = Y/‖x‖_2 is isotropic and log-concave. Moreover, by Proposition 4.1 we have

σ_Z(p) = (1/‖x‖_2) σ_Y(p) ≤ C ( √p + p ‖x‖_∞/‖x‖_2 ).

Therefore for every t ≥ C_1,

σ_Z^{−1}(t) ≥ (1/C_2) min{ t², t ‖x‖_2/‖x‖_∞ },

and by Theorem 3.3 we get for every t ≥ C_3 ‖x‖_2 log(eN/l),

P( Y*_l ≥ t ) = P( Z*_l ≥ t/‖x‖_2 ) ≤ exp( − σ_Z^{−1}( t√l/(C_4‖x‖_2) ) ) ≤ exp( − (1/C) min{ t²l/‖x‖_2², t√l/‖x‖_∞ } ). ∎

The next theorem provides uniform estimates for the Euclidean norm of projections of the sums Y(x) considered above, in terms of the Euclidean and ℓ_∞ norms of the vector x ∈ R^n.

Theorem 4.3. Let X_1, ..., X_n be independent isotropic log-concave random vectors in R^N, x = (x_i)_{i=1}^n ∈ R^n, and Y = Σ_{i=1}^n x_i X_i. Assume that ‖x‖_2 ≤ 1, ‖x‖_∞ ≤ b ≤ 1 and let 1 ≤ m ≤ N.

(i) If b ≥ 1/√m then for any t ≥ 1,

P( sup_{I⊂{1,...,N}, |I|=m} ‖P_I Y‖ ≥ C t √m log(eN/m) ) ≤ exp( − t √m log(eN/m) / ( b log(e²b²m) ) );

(ii) if b ≤ 1/√m then for any t ≥ 1,

P( sup_{I⊂{1,...,N}, |I|=m} ‖P_I Y‖ ≥ C t √m log(eN/m) ) ≤ exp( − min{ t² m log²(eN/m), t √m log(eN/m)/b } ),

where C is an absolute positive constant.

Remark. Basically the same proof as the one given below shows that in (i) the term log(e²b²m) may be replaced by log(e²b²m/t²) and the condition b ≤ 1 by b ≤ t. We omit the details.

The proof of Theorem 4.3 is based on Theorem 3.4. Let us first note that we may assume that the vector Y is isotropic, i.e. ‖x‖_2 = 1. Indeed, we may find a vector y = (y_1, ..., y_l) such that ‖y‖_∞ ≤ b and ‖x‖² + ‖y‖² = 1, and take Y' = Σ_{i=1}^l y_i G_i, where the G_i are i.i.d. canonical N-dimensional Gaussian vectors, independent of the vectors X_i. Then the vector Y + Y' is isotropic, satisfies the assumptions of the theorem, and for any u > 0,

P( sup_{|I|=m} ‖P_I Y‖ ≥ u ) ≤ 2 P( sup_{|I|=m} ‖P_I (Y + Y')‖ ≥ u ).

Similarly as in the proof of Corollary 4.2, for t ≥ C we have

σ_Y^{−1}(t) ≥ (1/C) min{ t², t/‖x‖_∞ }.   (4.1)

This allows us to estimate the quantity m_0 in Theorem 3.4. For 1/√m ≤ b ≤ 1, define m_1 = m_1(b) > 0 by the equation

√(m_1(b)) log( eN/m_1(b) ) = (√m/b) log( eN/m ).   (4.2)

One may show that m_1(b) ≍ ( (√m/b) log(eN/m) / log(eNb/√m) )²; we will, however, need only the following simple estimate.

Lemma 4.4. If 1/√m ≤ b ≤ 1 then log( m/m_1(b) ) ≤ 2 log( e√m b ).

Proof. Let f(z) = √z log(eN/z). Using 1/√m ≤ b, we observe

f( 1/(e²b²) ) = (1/(eb)) ( log(eN/m) + log(e²b²m) ) = (1/(eb)) ( log(eN/m) + 2 log(e√m b) ) < (√m/b) log(eN/m) = f( m_1(b) ).

Since f increases on (0, N], we obtain m_1(b) ≥ 1/(e²b²), which implies the result. ∎

Proof of Theorem 4.3. As noticed above (after the remark following the statement of Theorem 4.3), without loss of generality we may assume that ‖x‖_2 = 1, i.e. that Y is isotropic.

(i) Assume b ≥ 1/√m. By (4.1), for every t ≥ C/(√m log(eN/m)) we have

σ_Y^{−1}( t √m log(eN/m) ) ≥ (1/(Cb)) t √m log(eN/m).   (4.3)

By (4.2) it follows that for every t ≥ 1,

σ_Y^{−1}( Ct √m log(eN/m) ) ≥ t √(m_1(b)) log( eN/m_1(b) ).

By the definition of m_0 given in Theorem 3.4, this implies that m_0(Y, m, Ct) ≥ m_1(b) − 1, and since m_0(Y, m, Ct) ≥ 1, we get m_0(Y, m, Ct) ≥ m_1(b)/2. By Lemma 4.4 this yields

log( em/m_0(Y, m, Ct) ) ≤ log(2e) + log( m/m_1(b) ) ≤ 4 log( e√m b ).

Writing t' = 2t log(e²b²m) and applying (4.3) we obtain

σ_Y^{−1}( t' √m log(eN/m) ) / log( em/m_0(Y, m, Ct') ) ≥ t √m log(eN/m) / (Cb).

Theorem 3.4, applied with Ct' instead of t, implies the result (one only needs to adjust the absolute constants).

(ii) Assume b ≤ 1/√m. By (4.1), for every t ≥ C we have

σ_Y^{−1}( t √m log(eN/m) ) ≥ (1/C) min{ t² m log²(eN/m), t √m log(eN/m)/b } ≥ √m log(eN/m),

which by the definition of m_0 implies that m_0(Y, m, Ct) = m. As in part (i), Theorem 3.4 implies the result. ∎

5 Uniform bounds for norms of sub-matrices

In this section we establish uniform estimates for norms of sub-matrices of a random matrix, namely for the quantity A_{k,m} defined below. Fix integers n and N. Let X_1, ..., X_n ∈ R^N be independent log-concave isotropic random vectors. Let A be the n × N random matrix with rows X_1, ..., X_n. For any subsets J ⊂ {1,...,n} and I ⊂ {1,...,N}, by A(J, I) we denote the sub-matrix of A consisting of the rows indexed by elements from J and the columns indexed by elements from I. Let k ≤ n and m ≤ N. We define the parameter A_{k,m} by

A_{k,m} = sup ‖ A(J, I) : ℓ_2^m → ℓ_2^k ‖,   (5.1)

where the supremum is taken over all subsets J ⊂ {1,...,n} and I ⊂ {1,...,N} with cardinalities |J| = k, |I| = m. That is, A_{k,m} is the maximal operator norm of a sub-matrix of A with k rows and m columns. It is often more convenient to work with matrices with log-concave columns rather than rows; therefore in this section we fix the notation Γ = A^⊤. Thus Γ is an N × n matrix with columns X_1, ..., X_n. In particular, given x ∈ R^n, the sum Y = Σ_{i=1}^n x_i X_i considered in Section 4 satisfies Y = Γx. Clearly,

‖Γ(I, J)‖ = ‖A(J, I)‖,

so that, recalling that U_k was defined in (2.2), we have

A_{k,m} = Γ_{m,k} = sup{ ‖P_I Γx‖ : I ⊂ {1,...,N}, |I| = m, x ∈ U_k }.   (5.2)

Define λ_{k,m} and λ_m by

λ_{k,m} = √(log log(3m)) ( √m log( e max{N, n}/m ) + √k log(en/k) ),   (5.3)

λ_m = √(log log(3m)) √m log( e max{N, n}/m ).   (5.4)

The following theorem is our main result providing estimates for the operator norms of sub-matrices of A and of Γ. Its first part, in the case n ≤ N, was stated as Theorem 1.2.

Theorem 5.1. There exists a positive absolute constant C such that for any positive integers n, N, k ≤ n, m ≤ N and any t ≥ 1 one has

P( A_{k,m} ≥ C t λ_{k,m} ) ≤ exp( − t λ_{k,m} / √(log(3m)) ).

In particular, there exists an absolute positive constant C_1 such that for every t ≥ 1 and for every m ≤ N one has

P( ∃ k ≤ n : A_{k,m} ≥ C_1 t λ_{k,m} ) ≤ exp( − t λ_m ).   (5.5)

First we show the "in particular" part, which is easy.

Proof of inequality (5.5). The main part of the theorem implies that for every t ≥ 1,

p := P( ∃ k ≤ n : A_{k,m} ≥ C t λ_{k,m} ) ≤ Σ_{k=1}^n exp( − t λ_{k,m} / √(log(3m)) ).

Thus, if m > log³n then for every t ≥ 100 one has

p ≤ n exp( − t λ_m ) ≤ exp( − (t/2) λ_m ).

If m ≤ log³n (in particular n ≥ 3) then for every t ≥ 100 one has

p ≤ Σ_{k ≤ log²n} exp( − t λ_{k,m} / √(log(3m)) ) + Σ_{k > log²n} exp( − t λ_{k,m} / √(log(3m)) )
  ≤ (log²n) exp( − t λ_m ) + Σ_{k > log²n} exp( − t a_{k,m} ),

where

a_{k,m} = √(log log(3m)) √k log(en/k) / √(log(3m)).

Since m ≤ log³n, we obtain for every t ≥ 100,

Σ_{k > log²n} exp( − t a_{k,m} ) ≤ exp( − t λ_m ),

hence

p ≤ exp( − (t/2) λ_m ) + exp( − t λ_m ) ≤ exp( − (t/4) λ_m ).

The result follows by writing t = 100t' and by adjusting the absolute constants. ∎

Now we prove the main part of the theorem. Its proof consists of two steps that depend on the relation between m and k. Step I is applicable if √m log(eN/m) < √k log(en/k), and it reduces this case to the second, complementary case √k log(en/k) ≤ √m log(eN/m). The latter case will then be treated in Step II. To make this reduction we define k_m as follows:

k_m = inf{ k ∈ N : k ≤ n and √k log(en/k) ≥ √m log(eN/m) };   (5.6)

of course, if the set in (5.6) is empty, we immediately pass to Step II.

5.1 Step I: √k log(en/k) > √m log(eN/m); in particular k ≥ k_m.

Proposition 5.2. Assume that k ≥ k_m. Then for any t ≥ 1 we have

sup_{I⊂{1,...,N}, |I|=m} sup_{x∈U_k} ‖P_I Γx‖ ≤ C sup_{I⊂{1,...,N}, |I|=m} sup_{x∈U_{k_m}} ‖P_I Γx‖ + t √m log(eN/m) + t √k log(en/k)

with probability at least

1 − n exp( − t ( √m log(eN/m) + √k log(en/k) ) / log(em) ) − exp( − t k log(en/k) ),   (5.8)

where C is a positive absolute constant.

The proof of Proposition 5.2 is based on the ideas from [5]. We start it with the following fact.

Proposition 5.3. Let (X_i)_{i≤n} be independent centered random vectors in R^N and ψ > 0 be such that

E exp( |⟨X_i, θ⟩| / ψ ) ≤ 2   for all i ≤ n, θ ∈ S^{N−1}.

Then for 1 ≤ p ≤ n and t ≥ 1, with probability at least 1 − exp( −tp log(en/p) ) the following holds: for all y, z ∈ U_p and all E, F ⊂ {1,...,n} with E ∩ F = ∅,

| ⟨ Σ_{i∈E} y_i X_i, Σ_{j∈F} z_j X_j ⟩ | ≤ 20 t p log(en/p) ψ max_{i∈E} |y_i| ( Σ_{j∈F} z_j² )^{1/2} Γ_p,

where Γ_p := max_{x∈U_p} ‖Γx‖ = max_{x∈U_p} ‖ Σ_{i=1}^n x_i X_i ‖.

Proof. In this proof we use, for simplicity, the notation [n] = {1,...,n}. First let us fix sets E, F ⊂ [n] with E ∩ F = ∅. Since we consider y, z ∈ U_p, without loss of generality we may assume that |E|, |F| ≤ p. For z ∈ U_p denote Y_F(z) = Σ_{j∈F} z_j X_j and Z_F(z) = Y_F(z)/‖Y_F(z)‖ (if Y_F(z) = 0 we set Z_F(z) = 0). For any y, z ∈ U_p we have

| ⟨ Σ_{i∈E} y_i X_i, Y_F(z) ⟩ | ≤ Σ_{i∈E} |y_i| |⟨X_i, Y_F(z)⟩| ≤ max_{i∈E} |y_i| ‖Y_F(z)‖ Σ_{i∈E} |⟨ X_i, Z_F(z) ⟩|.

The random vector Z_F(z) is independent of the vectors X_i, i ∈ E; moreover ‖Z_F(z)‖ ≤ 1 and ‖Y_F(z)‖ ≤ (Σ_{j∈F} z_j²)^{1/2} Γ_p. Therefore for any z ∈ U_p and u > 0,

P( ∃ y ∈ U_p: |⟨Σ_{i∈E} y_i X_i, Σ_{j∈F} z_j X_j⟩| > uψ max_{i∈E}|y_i| (Σ_{j∈F} z_j²)^{1/2} Γ_p )
≤ P( Σ_{i∈E} |⟨X_i, Z_F(z)⟩| ≥ uψ ) ≤ e^{−u} E exp( Σ_{i∈E} |⟨X_i, Z_F(z)⟩|/ψ ) ≤ 2^{|E|} e^{−u} ≤ 2^p e^{−u}.

Let N_F denote a 1/2-net (in the Euclidean metric) in B₂ⁿ ∩ ℝ^F of cardinality at most 5^{|F|} ≤ 5^p. We have

p_{E,F}(u) := P( ∃ y, z ∈ U_p: |⟨Σ_{i∈E} y_i X_i, Σ_{j∈F} z_j X_j⟩| > 2uψ max_{i∈E}|y_i| (Σ_{j∈F} z_j²)^{1/2} Γ_p )
≤ P( ∃ y ∈ U_p, z ∈ N_F: |⟨Σ_{i∈E} y_i X_i, Σ_{j∈F} z_j X_j⟩| > uψ max_{i∈E}|y_i| (Σ_{j∈F} z_j²)^{1/2} Γ_p )
≤ Σ_{z∈N_F} P( ∃ y ∈ U_p: |⟨Σ_{i∈E} y_i X_i, Σ_{j∈F} z_j X_j⟩| > uψ max_{i∈E}|y_i| (Σ_{j∈F} z_j²)^{1/2} Γ_p )
≤ 10^p e^{−u}.

Hence

P( ∃ y, z ∈ U_p, ∃ E, F ⊂ [n], |E|, |F| ≤ p, E ∩ F = ∅: |⟨Σ_{i∈E} y_i X_i, Σ_{j∈F} z_j X_j⟩| > 2uψ max_{i∈E}|y_i| (Σ_{j∈F} z_j²)^{1/2} Γ_p )
≤ Σ_{E,F⊂[n], |E|,|F|≤p, E∩F=∅} p_{E,F}(u) ≤ (en/p)^{2p} 10^p e^{−u} ≤ e^{−u/2},

provided that u ≥ 10p log(ne/p). This implies the desired result.

Before formulating the next proposition we recall the following elementary lemma (see e.g. Lemma 3.2 in [5]).

Lemma 5.4. Let x₁,…,x_n ∈ ℝ^N. Then

| Σ_{i≠j} ⟨x_i, x_j⟩ | ≤ 4 max_{E⊂{1,…,n}} | Σ_{i∈E} Σ_{j∈E^c} ⟨x_i, x_j⟩ |.

Proposition 5.5. Let (X_i)_{i≤n} be independent centered random vectors in ℝ^N and ψ > 0 be such that

E exp(|⟨X_i, θ⟩|/ψ) ≤ 2   for all i ≤ n, θ ∈ S^{N−1}.

Let p ≤ n and t ≥ 1. Then with probability at least 1 − exp(−tp ln(en/p)), for all x ∈ U_p,

‖Σ_{i=1}^n x_i X_i‖ ≤ C ( max_{i≤n} ‖X_i‖ + tp log(en/p) ψ ‖x‖_∞ ),

where C is an absolute constant.

Proof. As in the previous proof we set [n] = {1,…,n}. Fix α > 0 and define

Γ_p(α) = sup { ‖Σ_{i=1}^n x_i X_i‖ : x ∈ U_p, ‖x‖_∞ ≤ α }.

For E ⊂ [n] with |E| ≤ p let N_E(α) denote a 1/2-net in ℝ^E ∩ B₂ⁿ ∩ αB_∞ⁿ with respect to the metric defined by B₂ⁿ ∩ αB_∞ⁿ. We may choose N_E(α) of cardinality at most 5^{|E|} ≤ 5^p. Let N(α) = ∪_{|E|=p} N_E(α); then

|N(α)| ≤ (5en/p)^p   and   Γ_p(α) ≤ 2 sup_{x∈N(α)} ‖Σ_{i=1}^n x_i X_i‖.   (5.9)

Fix E ⊂ [n] with |E| = p and x ∈ N_E(α). We have

‖Σ_{i=1}^n x_i X_i‖² = Σ_{i=1}^n x_i² ‖X_i‖² + Σ_{i≠j} x_i x_j ⟨X_i, X_j⟩.

Therefore Lemma 5.4 gives

‖Σ_{i=1}^n x_i X_i‖² ≤ max_{i≤n} ‖X_i‖² + 4 sup_{F⊂E} | ⟨Σ_{i∈F} x_i X_i, Σ_{j∈E∖F} x_j X_j⟩ |.

Notice that for any F ⊂ E, max_{i∈F}|x_i| ≤ α and ‖Σ_{j∈E∖F} x_j X_j‖ ≤ Γ_p(α); hence, as in the proof of Proposition 5.3, we can show that

P( |⟨Σ_{i∈F} x_i X_i, Σ_{j∈E∖F} x_j X_j⟩| > uψαΓ_p(α) ) < 2^{|F|} e^{−u}.

Thus

P( ‖Σ_{i=1}^n x_i X_i‖² > max_{i≤n} ‖X_i‖² + 4uψαΓ_p(α) ) ≤ Σ_{F⊂E} 2^{|F|} e^{−u} ≤ 3^{|E|} e^{−u}.

This together with (5.9) and the union bound implies

P( Γ_p(α)² > 4 max_i ‖X_i‖² + 32uψαΓ_p(α) ) ≤ (15en/p)^p e^{−u},

hence

P( Γ_p(α) > 2√2 max_i ‖X_i‖ + 32uψα ) ≤ (15en/p)^p e^{−u}.   (5.10)

Using that Γ_p(α) ≤ Γ_p(β) for β ≥ α > 0, we obtain for every l ≥ 1

P( ∃ x ∈ U_p: ‖Σ_{i=1}^n x_i X_i‖ > 2√2 max_i ‖X_i‖ + uψ max{‖x‖_∞, 2^{−l}} )
≤ P( ∃ 0 ≤ j ≤ l−1: Γ_p(2^{−j}) > 2√2 max_i ‖X_i‖ + (1/2) uψ 2^{−j} )
≤ Σ_{j=0}^{l−1} P( Γ_p(2^{−j}) > 2√2 max_i ‖X_i‖ + 32 (u/64) ψ 2^{−j} ) ≤ l (15en/p)^p e^{−u/64},

where the last inequality follows by (5.10). Taking l of order log p (so that ‖x‖_∞ ≥ 1/√p ≥ 2^{−l} for x ∈ U_p) and u = Ctp log(en/p), we obtain the result.

Proof of Proposition 5.2. For any I ⊂ {1,…,N}, the vector P_I X is isotropic and log-concave in ℝ^I, hence it satisfies the ψ₁ bound with a universal constant.

We fix t ≥ 10. Let s ≥ 1 be the smallest integer such that k2^{−s} < k̄. Set k_μ = ⌈k2^{1−μ}⌉ − ⌈k2^{−μ}⌉ for μ = 1,…,s and k_{s+1} = ⌈k2^{−s}⌉. Then

max{1, k2^{−μ}} ≤ k_μ ≤ k2^{1−μ},   k̄/2 ≤ k_{s+1} ≤ k̄,   and   Σ_{μ=1}^{s+1} k_μ = k.   (5.11)

Consider an arbitrary vector x = (x(i))_i ∈ U_k and let n₁,…,n_k be pairwise distinct integers such that |x(n₁)| ≥ |x(n₂)| ≥ … ≥ |x(n_k)| and x(i) = 0 for i ∉ {n₁,…,n_k}. For μ = 1,…,s+1 let F_μ = {n_i}_{j_μ < i ≤ j_{μ+1}}, where j_μ = Σ_{r<μ} k_r (j₁ = 0). Let x_{F_μ} be the coordinate projection of x onto ℝ^{F_μ}. Note that for each μ ≤ s we have ‖x_{F_μ}‖ ≤ 1 and, for μ ≥ 2, ‖x_{F_μ}‖_∞ ≤ 1/√(j_μ) ≤ √(2^μ/k).

The equality x = Σ_{μ=1}^{s+1} x_{F_μ} yields that for every I ⊂ {1,…,N} of cardinality m,

‖P_I Γx‖ ≤ ‖P_I Γx_{F_{s+1}}‖ + Σ_{μ=1}^s ‖P_I Γx_{F_μ}‖ ≤ sup_{|I|=m} sup_{x∈U_k̄} ‖P_I Γx‖ + Σ_{μ=1}^s ‖P_I Γx_{F_μ}‖,

where in the second inequality we used that k_{s+1} ≤ k̄. Taking the suprema over I of cardinality m and x ∈ U_k we obtain

A_{k,m} ≤ sup_{|I|=m} sup_{x∈U_k̄} ‖P_I Γx‖ + sup_{|I|=m} sup_{x∈U_k} ‖Σ_{μ=1}^s P_I Γx_{F_μ}‖.   (5.12)

Note that

‖Σ_{μ=1}^s P_I Γx_{F_μ}‖² = Σ_{μ=1}^s ‖P_I Γx_{F_μ}‖² + 2 Σ_{μ=1}^{s−1} ⟨P_I Γx_{F_μ}, Σ_{ν=μ+1}^s P_I Γx_{F_ν}⟩.   (5.13)

We are going to use Proposition 5.5 to estimate the first summand and Proposition 5.3 to estimate the second one. First note that by the definition of k̄ and s we have

(N choose m) ≤ exp(m log(eN/m)) ≤ exp(k̄ log(en/k̄))   and   k̄ ≤ k2^{1−s} ≤ 2k̄,  so that  2^s ≤ 2k/k̄ ≤ 2n.

Hence, using the definition of the k_μ's, we observe that for t ≥ 10 we have

Σ_{μ=1}^s (N choose m) exp(−tk_μ log(en/k_μ)) ≤ s exp(k̄ log(en/k̄)) exp(−tk_s log(en/k_s)) ≤ (1/2) exp(−(t/2) k̄ log(en/k̄)).

Since x_{F_μ} ∈ U_{k_μ} for every x ∈ U_k and μ = 1,…,s, the union bound and Proposition 5.5 imply that with probability at least

1 − Σ_{μ=1}^s (N choose m) exp(−tk_μ log(en/k_μ)) ≥ 1 − exp(−(t/2) k̄ log(en/k̄)),

for every x ∈ U_k, every I of cardinality m, and every μ ∈ {1,…,s} one has

‖P_I Γx_{F_μ}‖ ≤ C ( ‖x_{F_μ}‖ max_i ‖P_I X_i‖ + t k_μ log(en/k_μ) √(2^μ/k) ),   (5.14)

where C is an absolute constant. Similarly, by Proposition 5.3, with probability at least

1 − (1/2) exp(−(t/2) k̄ log(en/k̄)),

for every x ∈ U_k, every I of cardinality m and every μ ∈ {1,…,s} one has

|⟨P_I Γx_{F_μ}, Σ_{ν=μ+1}^s P_I Γx_{F_ν}⟩| ≤ C t k_μ log(en/k_μ) √(2^μ/k) A_{k,m},   (5.15)

where we have used the facts that Σ_{ν=μ+1}^s x_{F_ν} is k-sparse and Σ_ν ‖x_{F_ν}‖² ≤ 1. Using (5.14), (5.15) and the decomposition above, we conclude that there exists an absolute constant C₁ > 0 such that with probability at least 1 − exp(−(t/2) k̄ log(en/k̄)),

A²_{k,m} ≤ C₁ ( sup_{|I|=m} sup_{x∈U_k̄} ‖P_I Γx‖² + max_i max_{|I|=m} ‖P_I X_i‖² + t² Σ_{μ=1}^s k2^{−μ} log²(en2^μ/k) + t Σ_{μ=1}^s √(k2^{−μ}) log(en2^μ/k) A_{k,m} )
≤ C₁ ( sup_{|I|=m} sup_{x∈U_k̄} ‖P_I Γx‖² + max_i max_{|I|=m} ‖P_I X_i‖² + t² k log²(en/k) + t √k log(en/k) A_{k,m} ).

Thus, with the same probability,

A_{k,m} ≤ C₂ ( sup_{|I|=m} sup_{x∈U_k̄} ‖P_I Γx‖ + max_i max_{|I|=m} ‖P_I X_i‖ + t √k log(en/k) ),

where C₂ > 0 is an absolute constant. But by Theorem 1.1 and the union bound we have, for every t ≥ 1,

max_i max_{|I|=m} ‖P_I X_i‖ ≤ C₃ t ( √(m log(en/m)) + √(k̄ log(en/k̄)) ),

with probability larger than or equal to

p₀ := 1 − n exp( −t (m log(en/m) + k̄ log(en/k̄)) / log(em) )

(we added the term depending on k̄ to get a better probability; we may do it by adjusting t). This proves the result for t ≥ 10 with probability p₀ − exp(−(t/2) k̄ log(en/k̄)). Passing to t₀ = t/20 and adjusting absolute constants, we complete the proof.

5.2 Step II: k log(en/k) ≤ m log(en/m); in particular k ≤ k̄.

In this case we have to be a little more careful than in the previous case with the choice of nets. We will need the following lemma, in which Ũ_k denotes the set of k-sparse vectors of Euclidean norm at most one.

Lemma 5.6. Suppose that k ≤ n and that k₁, k₂, …, k_s are positive integers such that k₁ ≥ k₂ ≥ … ≥ k_s and k₁ + … + k_s ≥ k, and set k_{s+1} = 1. We may then find a finite subset N of (3/2)Ũ_k satisfying the following.

(i) For any x ∈ U_k there exists y ∈ N such that x − y ∈ (1/2)Ũ_k.

(ii) Any x ∈ N may be represented in the form x = π₁(x) + … + π_s(x), where the vectors π₁(x),…,π_s(x) have disjoint supports, |supp π_i(x)| ≤ k_i for i = 1,…,s,

Σ_{i=1}^s k_{i+1} ‖π_i(x)‖_∞² ≤ 4,   and   |π_i(N)| = |{π_i(x) : x ∈ N}| ≤ (en/k_i)^{3k_i}   for i = 1,…,s.
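The decomposition in part (ii) can be sanity-checked numerically. The sketch below is illustrative only: the block sizes, the random vector, and the coordinate-wise grid (a crude stand-in for the nets used in the proof) are assumptions of this sketch, not the construction of the lemma. It splits a k-sparse unit vector into blocks by decreasing coefficient magnitude, checks the ℓ∞ control, and rounds each block:

```python
import numpy as np

def block_decompose(x, sizes):
    # Split supp(x) into disjoint blocks F_1, ..., F_s with |F_i| <= sizes[i-1]:
    # F_s gets the sizes[-1] largest coordinates (in absolute value),
    # F_{s-1} the next sizes[-2] largest, and so on down to F_1.
    n = len(x)
    order = np.argsort(-np.abs(x))            # indices by decreasing |x_j|
    blocks = [np.zeros(n) for _ in sizes]
    pos = 0
    for i in reversed(range(len(sizes))):     # fill F_s first
        idx = order[pos:pos + sizes[i]]
        blocks[i][idx] = x[idx]
        pos += sizes[i]
    return blocks

rng = np.random.default_rng(0)
n, k = 64, 16
sizes = [8, 4, 4]                 # k_1 >= k_2 >= k_3 with k_1 + k_2 + k_3 = k
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)            # x in U_k

blocks = block_decompose(x, sizes)
assert np.allclose(sum(blocks), x)            # disjoint supports, sum = x

# l_infty control: coefficients of F_i are dominated by those of F_{i+1},
# so ||x_{F_i}||_inf <= 1/sqrt(k_{i+1})  (with k_{s+1} = 1 for the top block)
ksizes = sizes + [1]
for i in range(len(sizes)):
    assert np.abs(blocks[i]).max() <= 1.0 / np.sqrt(ksizes[i + 1]) + 1e-12

# crude "net": round each block to a grid of mesh k_i/(2n)
pi = [s / (2.0 * n) * np.round(b * 2.0 * n / s) for b, s in zip(blocks, sizes)]
err = np.linalg.norm(x - sum(pi))
assert err <= 0.5                 # as in the lemma: ||x - pi(x)|| <= 1/2
```

The grid here only mimics the approximation quality of the nets N_i(F); the actual proof chooses net points inside the sets S_i^F so that the ℓ∞ bounds of part (ii) are preserved exactly.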

Proof. First note that we can assume that k₁ + k₂ + … + k_s = k. Indeed, otherwise denote by j the largest integer such that k_j + k_{j+1} + … + k_s ≤ k. If j = s then set k_j′ = k; if j < s then set k_i′ = k_i for j < i ≤ s, k_j′ = k − k_{j+1} − k_{j+2} − … − k_s, π_i(x) = 0 for i < j, and repeat the proof below for the sequence k_j′, k_{j+1}, …, k_s, k_{s+1}, where k_{s+1} = 1 as before.

Recall that for F ⊂ {1,…,n}, ℝ^F denotes the set of all vectors in ℝⁿ with support contained in F. For i = 1,…,s and F ⊂ {1,…,n} of cardinality at most k_i let N_i(F) denote a subset of S_i^F := ℝ^F ∩ B₂ⁿ ∩ k_{i+1}^{−1/2} B_∞ⁿ such that

S_i^F ⊂ N_i(F) + (k_i/(2n)) ( B₂ⁿ ∩ k_{i+1}^{−1/2} B_∞ⁿ ).

A standard volumetric argument shows that we may choose N_i(F) of cardinality at most (6n/k_i)^{|F|} ≤ (6n/k_i)^{k_i} (additionally, without loss of generality we assume that 0 ∈ N_i(F)). We set N_i := ∪_{|F|≤k_i} N_i(F); then

|N_i| ≤ (en/k_i)^{k_i} (6n/k_i)^{k_i} ≤ (en/k_i)^{3k_i}.

Fix x ∈ U_k, let F_s denote the set of indices of the k_s largest coefficients of x, F_{s−1} the set of indices of the next k_{s−1} largest coefficients, etc. Then x = x_{F_s} + x_{F_{s−1}} + … + x_{F_1}, ‖x_{F_s}‖_∞ ≤ 1 and

‖x_{F_i}‖_∞ ≤ k_{i+1}^{−1/2} ‖x_{F_{i+1}}‖ ≤ k_{i+1}^{−1/2}   for i < s.

In particular, x_{F_i} ∈ ℝ^{F_i} ∩ B₂ⁿ ∩ k_{i+1}^{−1/2} B_∞ⁿ for all i = 1,…,s. Let π_i(x) be a vector in N_i(F_i) such that

‖x_{F_i} − π_i(x)‖ ≤ k_i/(2n)   and   ‖x_{F_i} − π_i(x)‖_∞ ≤ k_i/(2n√(k_{i+1})).

Define also π(x) = π₁(x) + … + π_s(x). Then

‖x − π(x)‖ ≤ Σ_{i=1}^s ‖x_{F_i} − π_i(x)‖ ≤ Σ_{i=1}^s k_i/(2n) = k/(2n) ≤ 1/2

and

Σ_{i=1}^s k_{i+1} ‖π_i(x)‖_∞² ≤ 2 Σ_{i=1}^s k_{i+1} ( ‖x_{F_i}‖_∞² + ‖x_{F_i} − π_i(x)‖_∞² ) ≤ 2 Σ_{i=1}^s ‖x_{F_{i+1}}‖² + 2 Σ_{i=1}^s (k_i/(2n))² ≤ 2‖x‖² + k²/(2n²) ≤ 4.

Thus we complete the proof by letting

N = {π(x) : x ∈ U_k}.

Lemma 5.7. Suppose that m ≤ n ≤ N and k ≤ min{n, k̄}. Then for some positive integer s ≤ C log log(3m) we can find s+1 positive integers k₁ = k, k_i ∈ [m^{1/4}/6, m] for 2 ≤ i ≤ s, and k_{s+1} = 1 satisfying

k_i log(en/k_i) ≤ 20 g(k_{i+1})   for i = 1,…,s,   (5.16)

where C is an absolute positive constant and

g(z) = √(zm) log(em)/log(em²/z)   if z < m,
g(z) = z min{ log(en/z), log²(em) }   if z ≥ m.

Proof. Let us define

h(z) = z log(en/z)   and   H(z) = z log²(eN/z).

Notice that h(z) ≤ H(z), h is increasing on (0, n] and H is increasing on (0, N]. It is also easy to see that h(2z) ≤ 2h(z) for z ∈ [1, n]. We first establish some relations between the functions g and H. It is not hard to check that log^{3/2}(e²m) ≤ e²√m; therefore for z ∈ [1, m],

H( √(zm)/log^{3/2}(e²m) ) = ( √(zm)/log^{3/2}(e²m) ) log²( eN log^{3/2}(e²m)/√(zm) ) ≤ 2g(z).   (5.17)

Write z = pm with p ∈ (0,1]; then H(2z) = 2pm log²(eN/(2pm)), and

H(2z) ≤ 10 g(z),   (5.18)

where the last inequality follows since

sup_{p∈(0,1]} 2p log(e²/p) (1 + log(1/p)) = 2 sup_{u≥0} e^{−u} (2 + u)(1 + u) ≤ 10.

Let us define the increasing sequence l₀, l₁, …, l_{s−1} by the formula l₀ = 1 and h(l_i) = 10g(l_{i−1}), i = 1, 2, …, where s is the smallest number such that l_{s−1} ≥ m (if at some moment 10g(l_{j−1}) ≥ h(m) we set l_j = m and s = j+1). First we show that such an s exists and satisfies s ≤ C log log(3m) for some absolute constant C > 0. We will use that h(z) ≤ H(z). By (5.17), if l_{i−1} ≤ m then l_i ≥ √(l_{i−1} m)/log^{3/2}(e²m), which implies l₁ ≥ √m/log^{3/2}(e²m) and, by induction,

l_i ≥ m^{1−2^{−i}} / log³(e²m)   for i = 0, 1, 2, …   (5.19)

In particular we have, for some absolute constant C₁ > 0, l_{s₁} ≥ m/(2 log³(e²m)) for some s₁ ≤ C₁ log log(3m). By (5.18) we have h(2z) ≤ H(2z) ≤ 10g(z) for z ≤ m, so if l_{i−1} ≤ m then l_i ≥ 2l_{i−1}. It implies that for some s ≤ s₁ + C₂ log log(3m) ≤ C log log(3m) we indeed have l_{s−1} ≥ m.

Finally we choose the sequence (k_i)_{i≤s+1} in the following way. Let k₁ = k and k_i := min{m, l_{s+1−i}} for i = 2,…,s+1 (note that k₂ = m and k_{s+1} = 1). Using h(2z) ≤ 2h(z) and the construction of the l_i's, we obtain (5.16) for i ≥ 2, while for i = 1, by the definition of k̄ and since k ≤ k̄, we clearly have h(k₁) = h(k) ≤ 2g(m) = 2g(k₂). Since (l_i) is increasing we also observe by (5.19) that k_i ≥ m^{1/4}/6 for 2 ≤ i ≤ s. This completes the proof.

Proposition 5.8. Suppose that m ≤ N ≤ n and k ≤ min{n, k̄}. Then for t ≥ 1,

P( sup_{I⊂{1,…,N}, |I|=m} sup_{x∈U_k} ‖P_I Γx‖ ≥ C t log log(3m) √(m log(en/m)) ) ≤ exp( −t log log(3m) · m log(en/m) / log(em) ),

where C is a universal constant.

Proof. Let k₁,…,k_{s+1} be given by Lemma 5.7 and let N ⊂ (3/2)Ũ_k be as in Lemma 5.6. Notice that

sup_{I⊂{1,…,N}, |I|=m} sup_{x∈U_k} ‖P_I Γx‖ ≤ 2 sup_{I⊂{1,…,N}, |I|=m} sup_{x∈N} ‖P_I Γx‖,

so we will estimate the latter quantity. Let us fix x ∈ N and 1 ≤ i ≤ s. We apply Theorem 4.3 to the vector

y = π_i(x) / ( √(k_{i+1}) ‖π_i(x)‖_∞ + ‖π_i(x)‖ )

(observe that ‖y‖ ≤ 1 and ‖y‖_∞ ≤ 1/√(k_{i+1}); on the other hand 1/√(k_{i+1}) ≥ 1/√m) to get, for u > 0,

P( sup_{|I|=m} ‖P_I Γπ_i(x)‖ ≥ C ( √(k_{i+1}) ‖π_i(x)‖_∞ + ‖π_i(x)‖ ) ( √(m log(en/m)) + u ) ) ≤ exp(100 g(k_{i+1})) exp( −u√m / (C√(log(em))) ),

where g is as in Lemma 5.7. By the properties of the net N guaranteed by Lemma 5.6 and by (5.16),

|π_i(N)| exp(−100 g(k_{i+1})) ≤ 1.

Therefore for all u > 0 and i = 1,…,s,

P( sup_{x∈N} sup_{|I|=m} ‖P_I Γπ_i(x)‖ ≥ C ( √(k_{i+1}) ‖π_i(x)‖_∞ + ‖π_i(x)‖ ) ( √(m log(en/m)) + u ) ) ≤ exp( −u√m / (C√(log(em))) ).

We have for any x ∈ N,

Σ_{i=1}^s ( √(k_{i+1}) ‖π_i(x)‖_∞ + ‖π_i(x)‖ ) ≤ √s ( Σ_{i=1}^s k_{i+1} ‖π_i(x)‖_∞² )^{1/2} + √s ( Σ_{i=1}^s ‖π_i(x)‖² )^{1/2} ≤ C √s ≤ C √(log log(3m)).

Therefore for any u₁,…,u_s > 0,

P( sup_{x∈N} sup_{|I|=m} ‖P_I Γx‖ ≥ C √(log log(3m)) ( √(m log(en/m)) + Σ_{i=1}^s u_i ) )
≤ Σ_{i=1}^s P( sup_{x∈N} sup_{|I|=m} ‖P_I Γπ_i(x)‖ ≥ C ( √(k_{i+1}) ‖π_i(x)‖_∞ + ‖π_i(x)‖ ) ( √(m log(en/m)) + u_i ) )
≤ Σ_{i=1}^s exp( −u_i √m / (C√(log(em))) ).

Hence it is enough to choose u_s = Ct log log(3m) √(m log(en/m)) and u_i = (1/s) Ct log log(3m) √(m log(en/m)) for i = 1,…,s−1, and to use the fact that k_i ≥ m^{1/4}/6 for i = 2,…,s.

5.3 Conclusion of the proof of Theorem 5.1

Proof. First notice that it is sufficient to consider the case n ≤ N. Indeed, if n > N we may find independent isotropic n-dimensional log-concave random vectors X̃₁,…,X̃_n such that X_i = P_{{1,…,N}} X̃_i for 1 ≤ i ≤ n. Let Ã be the n×n matrix with rows X̃₁,…,X̃_n and

Ã_{k,m} := sup { ‖P_I Ãᵀ x‖ : I ⊂ {1,…,n}, |I| = m, x ∈ U_k }.

Then obviously Ã_{k,m} ≥ A_{k,m}, and this allows us to immediately deduce the case N < n from the case N = n.

If √(k log(en/k)) ≤ √(k̄ log(en/k̄)) + √(m log(en/m)), we may apply results of [5]. Recall that Γ = Aᵀ. Let s ≥ √(k log(en/k)) + √(m log(en/m)). Applying (in particular) part of Theorem 3.13 of [5] and Paouris' theorem (inequality (3.2)) together with the union bound to the columns of the m×n matrix P_I Γ, and adjusting corresponding constants, we obtain that

P( sup_{x∈U_k} ‖P_I Γx‖ ≥ Cs ) ≤ exp(−2s²)

34 for any I {1,..., N} with I = cf. Theore 3.6 of [5]. Therefore PA k, Cs P sup P I Γx Cs N exp 2s. x U k I = By the definition of k we get N exp logen/ exp k logen/k, hence for s as above PA k, Cs exp k logen/k exp 2s exp s and Theore 5.1 follows in this case. Finally assue that n N and that k logen/k + logen/ k logen/k. For siplicity put a k = k logen/k, b = logen/, d = log log3. If k k then Theore 5.1 follows by Proposition 5.8 applied with t 0 = t1 + a k /d a. If k k then we apply Proposition 5.8 with the sae t 0 and Propositions 5.2 with t 1 = tb d + a k /a k + b to obtain Theore 5.1 note that C logen/ log N log n, so loge the factor n in the probability in Propositions 5.2 can be eliinated. 6 The Restricted Isoetry Property Fix integers n and N 1 and let A be an n N atrix. Consider the proble of reconstructing any vector x R N with short support sparse vectors fro the data Ax R n, with a fast algorith. Copressive Sensing provides a way of reconstructing the original signal x fro its copression Ax with n N by the so-called l 1 -iniization ethod see [15, 11, 13]. Let δ = δ A = sup Ax 2 E Ax 2 x U be the Restricted Isoetry Constant RIC of order, introduced in [12]. Its iportant feature is that if δ 2 is appropriately sall then every -sparse vector x can be reconstructed fro its copression Ax by the l 1 -iniization ethod. The goal is to check this property for certain odels of atrices. 34

The articles [1, 2, 5, 6, 7] considered random matrices with independent columns and investigated the RIP for various models of matrices, including the log-concave Ensemble built with independent isotropic log-concave columns. In this setting, the quantity A_{n,m} played a central role.

In this section we consider n×N random matrices A with independent rows X_i. For T ⊂ ℝ^N the quantity A_k(T) has been defined in (1.8), and A_{k,m} = A_k(U_m) was estimated in the previous section. We start with a general Lemma 6.1, which will be used to show that, after a suitable discretization, one can reduce a concentration inequality to a deviation inequality; in particular, checking the RIP is reduced to estimating A_{k,m}. It is a slight strengthening of Lemma 1.3 from the introduction.

Lemma 6.1. Let X₁,…,X_n be independent isotropic random vectors in ℝ^N. Let T ⊂ S^{N−1} be a finite set. Let 0 < θ < 1 and B ≥ 1. Then with probability at least 1 − |T| exp(−3θ²n/(8B²)) one has

sup_{y∈T} | (1/n) Σ_{i=1}^n ( ⟨X_i, y⟩² − E⟨X_i, y⟩² ) | ≤ θ + (1/n) A_k(T)² + sup_{y∈T} (1/n) Σ_{i=1}^n E ⟨X_i, y⟩² 1_{{|⟨X_i,y⟩| ≥ B}} ≤ θ + (1/n) ( A_k(T)² + E A_k(T)² ),

where k ≤ n is the largest integer satisfying k ≤ (A_k(T)/B)².

Remark. Note that k in Lemma 6.1 is a random variable.

To prove Lemma 6.1 we need Bernstein's inequality (see e.g. [28]).

Proposition 6.2. Let Z_i be independent centered random variables such that |Z_i| ≤ a for all 1 ≤ i ≤ n. Then for all τ ≥ 0 one has

P( | (1/n) Σ_{i=1}^n Z_i | ≥ τ ) ≤ exp( − nτ² / (2(σ² + aτ/3)) ),

where σ² = (1/n) Σ_{i=1}^n Var Z_i.
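Bernstein's inequality can be checked empirically. The sketch below uses Rademacher variables (so a = 1 and σ² = 1); the sample sizes and the threshold τ are arbitrary choices of this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, tau = 400, 20000, 0.1
# Z_i Rademacher: centered, |Z_i| <= a = 1, Var Z_i = 1, hence sigma^2 = 1
Z = rng.choice([-1.0, 1.0], size=(reps, n))
empirical = np.mean(np.abs(Z.mean(axis=1)) >= tau)

a, sigma2 = 1.0, 1.0
bound = np.exp(-n * tau**2 / (2.0 * (sigma2 + a * tau / 3.0)))
assert empirical <= bound   # empirical tail (about 0.05) vs bound (about 0.14)
```

For this choice of parameters the event |(1/n)ΣZ_i| ≥ 0.1 is a two-standard-deviation event, so the empirical frequency sits comfortably below the Bernstein bound.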

Proof of Lemma 6.1. For y ∈ T let

S(y) = (1/n) | Σ_{i=1}^n ( ⟨X_i, y⟩² − E⟨X_i, y⟩² ) |

and observe that

S(y) ≤ (1/n) | Σ_{i=1}^n ( min(|⟨X_i,y⟩|, B)² − E min(|⟨X_i,y⟩|, B)² ) | + (1/n) Σ_{i=1}^n ( ⟨X_i,y⟩² − B² ) 1_{{|⟨X_i,y⟩| ≥ B}} + (1/n) Σ_{i=1}^n E ( ⟨X_i,y⟩² − B² ) 1_{{|⟨X_i,y⟩| ≥ B}}.

We denote the three summands by S₁(y), S₂(y), S₃(y), respectively, and we estimate each of them separately.

Estimate for S₁(y): We will use Bernstein's inequality (Proposition 6.2). Given y ∈ T let

Z_i(y) = min(|⟨X_i,y⟩|, B)² − E min(|⟨X_i,y⟩|, B)²,   for i ≤ n.

Then |Z_i(y)| ≤ B², so a = B². By isotropicity of X, for every i ≤ n one has

Var Z_i(y) ≤ E min(|⟨X_i,y⟩|, B)⁴ ≤ E ⟨X_i,y⟩² B² = B²,

which implies σ² ≤ B². By Proposition 6.2,

P( | (1/n) Σ_{i=1}^n Z_i(y) | ≥ θ ) ≤ exp( − nθ² / (2(B² + B²θ/3)) ) ≤ exp( − 3θ²n / (8B²) ).

Then, by the union bound,

P( sup_{y∈T} S₁(y) ≥ θ ) = P( sup_{y∈T} | (1/n) Σ_{i=1}^n Z_i(y) | ≥ θ ) ≤ |T| exp( − 3θ²n / (8B²) ).

Estimates for S₂(y) and S₃(y): For every y ∈ T consider

E_B(y) = { i ≤ n : |⟨X_i, y⟩| ≥ B },

and let k = sup_{y∈T} |E_B(y)|.

Then, by the definition of A_k(T),

B² k = B² sup_{y∈T} |E_B(y)| ≤ sup_{y∈T} Σ_{i∈E_B(y)} ⟨X_i, y⟩² ≤ A_k(T)².

This yields

k ≤ A_k(T)² / B²,

and therefore k ≤ k₀, where k₀ ≤ n is the biggest integer satisfying k₀ ≤ (A_{k₀}(T)/B)² (the integer appearing in the statement of the lemma). Using the definition of A_k(T) again we observe

sup_{y∈T} S₂(y) ≤ (1/n) sup_{y∈T} Σ_{i=1}^n ⟨X_i, y⟩² 1_{{|⟨X_i,y⟩| ≥ B}} = (1/n) sup_{y∈T} Σ_{i∈E_B(y)} ⟨X_i, y⟩² ≤ (1/n) A_{k₀}(T)².

Similarly,

sup_{y∈T} S₃(y) ≤ (1/n) sup_{y∈T} E Σ_{i=1}^n ⟨X_i, y⟩² 1_{{|⟨X_i,y⟩| ≥ B}} ≤ (1/n) E A_{k₀}(T)².

Combining the estimates for S₁(y), S₂(y), S₃(y) we obtain the desired result.

By an approximation argument, Lemma 6.1 has the following immediate consequence (cf. [7]).

Corollary 6.3. Let 0 < θ < 1 and B ≥ 1. Let n, N be positive integers and let A be an n×N matrix whose rows are independent isotropic random vectors X_i, i ≤ n. Assume that m ≤ N satisfies

m log(11eN/m) ≤ 3θ²n/(16B²).

Then with probability at least

1 − exp( − 3θ²n / (16B²) )

one has

δ_m(A/√n) = sup_{y∈U_m} | (1/n) Σ_{i=1}^n ( ⟨X_i, y⟩² − E⟨X_i, y⟩² ) | ≤ 2θ + (2/n) A_{k,m}² + 2 sup_{y∈U_m} (1/n) Σ_{i=1}^n E ⟨X_i, y⟩² 1_{{|⟨X_i,y⟩| ≥ B}} ≤ 2θ + (2/n) ( A_{k,m}² + E A_{k,m}² ),

where k ≤ n is the largest integer satisfying k ≤ (A_{k,m}/B)².

Remarks. 1. Note that, as in Lemma 6.1, k in Corollary 6.3 is a random variable.

2. In all our applications we would like to have A_{k,m}² and E A_{k,m}² of order θn. To obtain this, we choose the parameter B appropriately.

3. Note that A_{k,m} is increasing in k; therefore we immediately have that if m ≤ N satisfies m log(11eN/m) ≤ 3θ²n/(16B²) and E A_{n,m}² ≤ θn, then with probability at least

1 − exp( − 3θ²n / (16B²) ) − P( A_{n,m}² > θn )

one has δ_m(A/√n) ≤ 6θ.   (6.1)

Proof of Corollary 6.3. Let N be a 1/5-net in U_m of cardinality |N| ≤ 11^m (eN/m)^m = (11eN/m)^m; we can construct N in such a way that for every y ∈ U_m there exists z_y ∈ N with z_y/‖z_y‖ ∈ U_m and ‖y − z_y‖ ≤ 1/5. By the assumption on m,

log |N| ≤ m log(11eN/m) ≤ 3θ²n/(16B²),

and thus

|N| exp( − 3θ²n/(8B²) ) ≤ exp( − 3θ²n/(16B²) ).
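The cardinality of the net rests on the standard volumetric estimate: an ε-separated subset of the unit sphere S^{m−1} has at most (1+2/ε)^m points, which for ε = 1/5 gives the factor 11^m above. A quick numerical sanity check, in which the greedy construction and the sample size are assumptions of this sketch:

```python
import numpy as np

def greedy_separated(points, eps):
    # Greedily keep points at pairwise distance > eps; the kept set is
    # eps-separated (hence of size <= (1+2/eps)^dim by volume comparison)
    # and, by construction, an eps-net of the input cloud.
    kept = []
    for p in points:
        if not kept or np.linalg.norm(np.array(kept) - p, axis=1).min() > eps:
            kept.append(p)
    return np.array(kept)

rng = np.random.default_rng(3)
m, eps = 3, 1.0 / 5.0
pts = rng.standard_normal((5000, m))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # points on S^{m-1}
net = greedy_separated(pts, eps)

assert len(net) <= 11 ** m        # volumetric bound: (1 + 2/eps)^m = 11^m
for p in pts[:200]:               # net property on the sampled cloud
    assert np.linalg.norm(net - p, axis=1).min() <= eps
```

On the sparse sphere U_m one applies this spherical bound on each of the (N choose m) ≤ (eN/m)^m supports, which is exactly how the cardinality (11eN/m)^m in the proof arises.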


List Scheduling and LPT Oliver Braun (09/05/2017)

List Scheduling and LPT Oliver Braun (09/05/2017) List Scheduling and LPT Oliver Braun (09/05/207) We investigate the classical scheduling proble P ax where a set of n independent jobs has to be processed on 2 parallel and identical processors (achines)

More information

1. INTRODUCTION AND RESULTS

1. INTRODUCTION AND RESULTS SOME IDENTITIES INVOLVING THE FIBONACCI NUMBERS AND LUCAS NUMBERS Wenpeng Zhang Research Center for Basic Science, Xi an Jiaotong University Xi an Shaanxi, People s Republic of China (Subitted August 00

More information

Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization

Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization Robust Principal Coponent Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optiization John Wright, Arvind Ganesh, Shankar Rao, and Yi Ma Departent of Electrical Engineering University

More information

On the Use of A Priori Information for Sparse Signal Approximations

On the Use of A Priori Information for Sparse Signal Approximations ITS TECHNICAL REPORT NO. 3/4 On the Use of A Priori Inforation for Sparse Signal Approxiations Oscar Divorra Escoda, Lorenzo Granai and Pierre Vandergheynst Signal Processing Institute ITS) Ecole Polytechnique

More information

A Note on the Applied Use of MDL Approximations

A Note on the Applied Use of MDL Approximations A Note on the Applied Use of MDL Approxiations Daniel J. Navarro Departent of Psychology Ohio State University Abstract An applied proble is discussed in which two nested psychological odels of retention

More information

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket.

Generalized eigenfunctions and a Borel Theorem on the Sierpinski Gasket. Generalized eigenfunctions and a Borel Theore on the Sierpinski Gasket. Kasso A. Okoudjou, Luke G. Rogers, and Robert S. Strichartz May 26, 2006 1 Introduction There is a well developed theory (see [5,

More information

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon

Model Fitting. CURM Background Material, Fall 2014 Dr. Doreen De Leon Model Fitting CURM Background Material, Fall 014 Dr. Doreen De Leon 1 Introduction Given a set of data points, we often want to fit a selected odel or type to the data (e.g., we suspect an exponential

More information

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval

Uniform Approximation and Bernstein Polynomials with Coefficients in the Unit Interval Unifor Approxiation and Bernstein Polynoials with Coefficients in the Unit Interval Weiang Qian and Marc D. Riedel Electrical and Coputer Engineering, University of Minnesota 200 Union St. S.E. Minneapolis,

More information

Some Classical Ergodic Theorems

Some Classical Ergodic Theorems Soe Classical Ergodic Theores Matt Rosenzweig Contents Classical Ergodic Theores. Mean Ergodic Theores........................................2 Maxial Ergodic Theore.....................................

More information

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD

ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY Physical and Matheatical Sciences 04,, p. 7 5 ON THE TWO-LEVEL PRECONDITIONING IN LEAST SQUARES METHOD M a t h e a t i c s Yu. A. HAKOPIAN, R. Z. HOVHANNISYAN

More information

The Weierstrass Approximation Theorem

The Weierstrass Approximation Theorem 36 The Weierstrass Approxiation Theore Recall that the fundaental idea underlying the construction of the real nubers is approxiation by the sipler rational nubers. Firstly, nubers are often deterined

More information

Computable Shell Decomposition Bounds

Computable Shell Decomposition Bounds Coputable Shell Decoposition Bounds John Langford TTI-Chicago jcl@cs.cu.edu David McAllester TTI-Chicago dac@autoreason.co Editor: Leslie Pack Kaelbling and David Cohn Abstract Haussler, Kearns, Seung

More information

On Poset Merging. 1 Introduction. Peter Chen Guoli Ding Steve Seiden. Keywords: Merging, Partial Order, Lower Bounds. AMS Classification: 68W40

On Poset Merging. 1 Introduction. Peter Chen Guoli Ding Steve Seiden. Keywords: Merging, Partial Order, Lower Bounds. AMS Classification: 68W40 On Poset Merging Peter Chen Guoli Ding Steve Seiden Abstract We consider the follow poset erging proble: Let X and Y be two subsets of a partially ordered set S. Given coplete inforation about the ordering

More information

Solutions of some selected problems of Homework 4

Solutions of some selected problems of Homework 4 Solutions of soe selected probles of Hoework 4 Sangchul Lee May 7, 2018 Proble 1 Let there be light A professor has two light bulbs in his garage. When both are burned out, they are replaced, and the next

More information

A Simple Homotopy Algorithm for Compressive Sensing

A Simple Homotopy Algorithm for Compressive Sensing A Siple Hootopy Algorith for Copressive Sensing Lijun Zhang Tianbao Yang Rong Jin Zhi-Hua Zhou National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China Departent of Coputer

More information

Computational and Statistical Learning Theory

Computational and Statistical Learning Theory Coputational and Statistical Learning Theory TTIC 31120 Prof. Nati Srebro Lecture 2: PAC Learning and VC Theory I Fro Adversarial Online to Statistical Three reasons to ove fro worst-case deterinistic

More information

Distributed Subgradient Methods for Multi-agent Optimization

Distributed Subgradient Methods for Multi-agent Optimization 1 Distributed Subgradient Methods for Multi-agent Optiization Angelia Nedić and Asuan Ozdaglar October 29, 2007 Abstract We study a distributed coputation odel for optiizing a su of convex objective functions

More information

The Distribution of the Covariance Matrix for a Subset of Elliptical Distributions with Extension to Two Kurtosis Parameters

The Distribution of the Covariance Matrix for a Subset of Elliptical Distributions with Extension to Two Kurtosis Parameters journal of ultivariate analysis 58, 96106 (1996) article no. 0041 The Distribution of the Covariance Matrix for a Subset of Elliptical Distributions with Extension to Two Kurtosis Paraeters H. S. Steyn

More information

3.3 Variational Characterization of Singular Values

3.3 Variational Characterization of Singular Values 3.3. Variational Characterization of Singular Values 61 3.3 Variational Characterization of Singular Values Since the singular values are square roots of the eigenvalues of the Heritian atrices A A and

More information

Highly Robust Error Correction by Convex Programming

Highly Robust Error Correction by Convex Programming Highly Robust Error Correction by Convex Prograing Eanuel J. Candès and Paige A. Randall Applied and Coputational Matheatics, Caltech, Pasadena, CA 9115 Noveber 6; Revised Noveber 7 Abstract This paper

More information

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t.

This model assumes that the probability of a gap has size i is proportional to 1/i. i.e., i log m e. j=1. E[gap size] = i P r(i) = N f t. CS 493: Algoriths for Massive Data Sets Feb 2, 2002 Local Models, Bloo Filter Scribe: Qin Lv Local Models In global odels, every inverted file entry is copressed with the sae odel. This work wells when

More information

Optimal Jamming Over Additive Noise: Vector Source-Channel Case

Optimal Jamming Over Additive Noise: Vector Source-Channel Case Fifty-first Annual Allerton Conference Allerton House, UIUC, Illinois, USA October 2-3, 2013 Optial Jaing Over Additive Noise: Vector Source-Channel Case Erah Akyol and Kenneth Rose Abstract This paper

More information

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1.

M ath. Res. Lett. 15 (2008), no. 2, c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS. Van H. Vu. 1. M ath. Res. Lett. 15 (2008), no. 2, 375 388 c International Press 2008 SUM-PRODUCT ESTIMATES VIA DIRECTED EXPANDERS Van H. Vu Abstract. Let F q be a finite field of order q and P be a polynoial in F q[x

More information

Chaotic Coupled Map Lattices

Chaotic Coupled Map Lattices Chaotic Coupled Map Lattices Author: Dustin Keys Advisors: Dr. Robert Indik, Dr. Kevin Lin 1 Introduction When a syste of chaotic aps is coupled in a way that allows the to share inforation about each

More information

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis

E0 370 Statistical Learning Theory Lecture 6 (Aug 30, 2011) Margin Analysis E0 370 tatistical Learning Theory Lecture 6 (Aug 30, 20) Margin Analysis Lecturer: hivani Agarwal cribe: Narasihan R Introduction In the last few lectures we have seen how to obtain high confidence bounds

More information

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product.

Reed-Muller Codes. m r inductive definition. Later, we shall explain how to construct Reed-Muller codes using the Kronecker product. Coding Theory Massoud Malek Reed-Muller Codes An iportant class of linear block codes rich in algebraic and geoetric structure is the class of Reed-Muller codes, which includes the Extended Haing code.

More information

Introduction to Discrete Optimization

Introduction to Discrete Optimization Prof. Friedrich Eisenbrand Martin Nieeier Due Date: March 9 9 Discussions: March 9 Introduction to Discrete Optiization Spring 9 s Exercise Consider a school district with I neighborhoods J schools and

More information

Fixed-to-Variable Length Distribution Matching

Fixed-to-Variable Length Distribution Matching Fixed-to-Variable Length Distribution Matching Rana Ali Ajad and Georg Böcherer Institute for Counications Engineering Technische Universität München, Gerany Eail: raa2463@gail.co,georg.boecherer@tu.de

More information

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008

Constrained Consensus and Optimization in Multi-Agent Networks arxiv: v2 [math.oc] 17 Dec 2008 LIDS Report 2779 1 Constrained Consensus and Optiization in Multi-Agent Networks arxiv:0802.3922v2 [ath.oc] 17 Dec 2008 Angelia Nedić, Asuan Ozdaglar, and Pablo A. Parrilo February 15, 2013 Abstract We

More information

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization

Support Vector Machine Classification of Uncertain and Imbalanced data using Robust Optimization Recent Researches in Coputer Science Support Vector Machine Classification of Uncertain and Ibalanced data using Robust Optiization RAGHAV PAT, THEODORE B. TRAFALIS, KASH BARKER School of Industrial Engineering

More information

Necessity of low effective dimension

Necessity of low effective dimension Necessity of low effective diension Art B. Owen Stanford University October 2002, Orig: July 2002 Abstract Practitioners have long noticed that quasi-monte Carlo ethods work very well on functions that

More information

Asymptotic mean values of Gaussian polytopes

Asymptotic mean values of Gaussian polytopes Asyptotic ean values of Gaussian polytopes Daniel Hug, Götz Olaf Munsonius and Matthias Reitzner Abstract We consider geoetric functionals of the convex hull of norally distributed rando points in Euclidean

More information

Suprema of Chaos Processes and the Restricted Isometry Property

Suprema of Chaos Processes and the Restricted Isometry Property Suprea of Chaos Processes and the Restricted Isoetry Property Felix Kraher, Shahar Mendelson, and Holger Rauhut June 30, 2012; revised June 20, 2013 Abstract We present a new bound for suprea of a special

More information

Probability Distributions

Probability Distributions Probability Distributions In Chapter, we ephasized the central role played by probability theory in the solution of pattern recognition probles. We turn now to an exploration of soe particular exaples

More information

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness

A Note on Scheduling Tall/Small Multiprocessor Tasks with Unit Processing Time to Minimize Maximum Tardiness A Note on Scheduling Tall/Sall Multiprocessor Tasks with Unit Processing Tie to Miniize Maxiu Tardiness Philippe Baptiste and Baruch Schieber IBM T.J. Watson Research Center P.O. Box 218, Yorktown Heights,

More information

THE POLYNOMIAL REPRESENTATION OF THE TYPE A n 1 RATIONAL CHEREDNIK ALGEBRA IN CHARACTERISTIC p n

THE POLYNOMIAL REPRESENTATION OF THE TYPE A n 1 RATIONAL CHEREDNIK ALGEBRA IN CHARACTERISTIC p n THE POLYNOMIAL REPRESENTATION OF THE TYPE A n RATIONAL CHEREDNIK ALGEBRA IN CHARACTERISTIC p n SHEELA DEVADAS AND YI SUN Abstract. We study the polynoial representation of the rational Cherednik algebra

More information

An RIP-based approach to Σ quantization for compressed sensing

An RIP-based approach to Σ quantization for compressed sensing An RIP-based approach to Σ quantization for copressed sensing Joe-Mei Feng and Felix Kraher October, 203 Abstract In this paper, we provide new approach to estiating the error of reconstruction fro Σ quantized

More information

Kernel Methods and Support Vector Machines

Kernel Methods and Support Vector Machines Intelligent Systes: Reasoning and Recognition Jaes L. Crowley ENSIAG 2 / osig 1 Second Seester 2012/2013 Lesson 20 2 ay 2013 Kernel ethods and Support Vector achines Contents Kernel Functions...2 Quadratic

More information

LORENTZ SPACES AND REAL INTERPOLATION THE KEEL-TAO APPROACH

LORENTZ SPACES AND REAL INTERPOLATION THE KEEL-TAO APPROACH LORENTZ SPACES AND REAL INTERPOLATION THE KEEL-TAO APPROACH GUILLERMO REY. Introduction If an operator T is bounded on two Lebesgue spaces, the theory of coplex interpolation allows us to deduce the boundedness

More information

arxiv: v1 [math.na] 10 Oct 2016

arxiv: v1 [math.na] 10 Oct 2016 GREEDY GAUSS-NEWTON ALGORITHM FOR FINDING SPARSE SOLUTIONS TO NONLINEAR UNDERDETERMINED SYSTEMS OF EQUATIONS MÅRTEN GULLIKSSON AND ANNA OLEYNIK arxiv:6.395v [ath.na] Oct 26 Abstract. We consider the proble

More information

Note on generating all subsets of a finite set with disjoint unions

Note on generating all subsets of a finite set with disjoint unions Note on generating all subsets of a finite set with disjoint unions David Ellis e-ail: dce27@ca.ac.uk Subitted: Dec 2, 2008; Accepted: May 12, 2009; Published: May 20, 2009 Matheatics Subject Classification:

More information

Asynchronous Gossip Algorithms for Stochastic Optimization

Asynchronous Gossip Algorithms for Stochastic Optimization Asynchronous Gossip Algoriths for Stochastic Optiization S. Sundhar Ra ECE Dept. University of Illinois Urbana, IL 680 ssrini@illinois.edu A. Nedić IESE Dept. University of Illinois Urbana, IL 680 angelia@illinois.edu

More information

On Constant Power Water-filling

On Constant Power Water-filling On Constant Power Water-filling Wei Yu and John M. Cioffi Electrical Engineering Departent Stanford University, Stanford, CA94305, U.S.A. eails: {weiyu,cioffi}@stanford.edu Abstract This paper derives

More information

Exact tensor completion with sum-of-squares

Exact tensor completion with sum-of-squares Proceedings of Machine Learning Research vol 65:1 54, 2017 30th Annual Conference on Learning Theory Exact tensor copletion with su-of-squares Aaron Potechin Institute for Advanced Study, Princeton David

More information

COS 424: Interacting with Data. Written Exercises

COS 424: Interacting with Data. Written Exercises COS 424: Interacting with Data Hoework #4 Spring 2007 Regression Due: Wednesday, April 18 Written Exercises See the course website for iportant inforation about collaboration and late policies, as well

More information

Computable Shell Decomposition Bounds

Computable Shell Decomposition Bounds Journal of Machine Learning Research 5 (2004) 529-547 Subitted 1/03; Revised 8/03; Published 5/04 Coputable Shell Decoposition Bounds John Langford David McAllester Toyota Technology Institute at Chicago

More information

arxiv: v1 [math.nt] 14 Sep 2014

arxiv: v1 [math.nt] 14 Sep 2014 ROTATION REMAINDERS P. JAMESON GRABER, WASHINGTON AND LEE UNIVERSITY 08 arxiv:1409.411v1 [ath.nt] 14 Sep 014 Abstract. We study properties of an array of nubers, called the triangle, in which each row

More information

A note on the realignment criterion

A note on the realignment criterion A note on the realignent criterion Chi-Kwong Li 1, Yiu-Tung Poon and Nung-Sing Sze 3 1 Departent of Matheatics, College of Willia & Mary, Williasburg, VA 3185, USA Departent of Matheatics, Iowa State University,

More information

On the Homothety Conjecture

On the Homothety Conjecture On the Hoothety Conjecture Elisabeth M Werner Deping Ye Abstract Let K be a convex body in R n and δ > 0 The hoothety conjecture asks: Does K δ = ck iply that K is an ellipsoid? Here K δ is the convex

More information

Weighted- 1 minimization with multiple weighting sets

Weighted- 1 minimization with multiple weighting sets Weighted- 1 iniization with ultiple weighting sets Hassan Mansour a,b and Özgür Yılaza a Matheatics Departent, University of British Colubia, Vancouver - BC, Canada; b Coputer Science Departent, University

More information

Physics 215 Winter The Density Matrix

Physics 215 Winter The Density Matrix Physics 215 Winter 2018 The Density Matrix The quantu space of states is a Hilbert space H. Any state vector ψ H is a pure state. Since any linear cobination of eleents of H are also an eleent of H, it

More information

arxiv: v1 [cs.ds] 17 Mar 2016

arxiv: v1 [cs.ds] 17 Mar 2016 Tight Bounds for Single-Pass Streaing Coplexity of the Set Cover Proble Sepehr Assadi Sanjeev Khanna Yang Li Abstract arxiv:1603.05715v1 [cs.ds] 17 Mar 2016 We resolve the space coplexity of single-pass

More information