ROW PRODUCTS OF RANDOM MATRICES


MARK RUDELSON

Abstract. Let $\Delta_1,\dots,\Delta_K$ be $d\times n$ matrices. We define the row product of these matrices as a $d^K\times n$ matrix whose rows are entry-wise products of rows of $\Delta_1,\dots,\Delta_K$. This construction arises in certain computer science problems. We study the question to which extent the spectral and geometric properties of the row product of independent random matrices resemble those properties for a $d^K\times n$ matrix with independent random entries. In particular, we show that the largest and the smallest singular values of these matrices are of the same order, as long as $n\ll d^K$. We also consider the problem of privately releasing summary information about a database, and use the previous results to obtain a bound for the minimal amount of noise which has to be added to the released data to avoid a privacy breach.

Key words and phrases: random matrices, extreme singular values, privacy protection. Research was supported in part by NSF grants DMS-0907023 and DMS-1161372.

1. Introduction

This paper discusses spectral and geometric properties of a certain class of random matrices with dependent rows, which are constructed from random matrices with independent entries. Such constructions first appeared in computer science, in the study of privacy protection for contingency tables. The behavior of the extreme singular values of various random matrices with dependent entries has been extensively studied in recent years [1], [2], [9], [16], [22]. These matrices arise in asymptotic geometric analysis [1], signal processing [2], [16], statistics [22], etc. The row products studied below also originated in a computer science problem [9].

For two matrices with the same number of rows we define the row product as a matrix whose rows consist of the entry-wise products of the rows of the original matrices.

Definition 1.1. Let $x$ and $y$ be $1\times n$ matrices. Denote by $x\otimes_r y$ the $1\times n$ matrix whose entries are the products of the corresponding entries of $x$ and $y$: $(x\otimes_r y)(j)=x(j)\,y(j)$. If $A$ is an $N\times n$ matrix and $B$ is an $M\times n$ matrix, denote by $A\otimes_r B$ the $NM\times n$ matrix whose rows are the entry-wise products of the rows of $A$ and $B$:
$(A\otimes_r B)_{(j-1)M+k}=A_j\otimes_r B_k$,
where $(A\otimes_r B)_l$, $A_j$, $B_k$ denote the rows of the corresponding matrices.

Row products arise in a number of computer science related problems. They were introduced in [7] and studied in [24] in the theory of probabilistic automata. They have also appeared in compressed sensing, see [3] and [6], as well as in privacy protection problems [9]. These papers use different notation for the row product; we adopt the one from [6].

This paper considers spectral and geometric properties of row products of a finite number of independent random matrices. The definition above assumes a certain order of the rows of the matrix $A\otimes_r B$. This order, however, is not important, since changing the relative positions of the rows of a matrix does not affect its eigenvalues and singular values. Therefore, to simplify the notation, we will denote the row of the matrix $C=A\otimes_r B$ corresponding to the rows $A_j$ and $B_k$ by $C_{j,k}$. We will use a similar convention for the rows of row products of more than two matrices.

Recall that the singular values of an $N\times n$ random matrix $A$ are the eigenvalues of $(A^*A)^{1/2}$ arranged in non-increasing order: $s_1(A)\ge s_2(A)\ge\dots\ge s_n(A)\ge 0$. The first and the last singular values have a clear geometric meaning: $s_1(A)$ is the norm of $A$ considered as a linear operator from $\ell_2^n$ to $\ell_2^N$, and if $n\le N$ and $\operatorname{rank}(A)=n$, then $s_n(A)$ is the reciprocal of the norm of $A^{-1}$ considered as a linear operator from $\ell_2^N\supset A\mathbb{R}^n$ to $\ell_2^n$. The quantity $\kappa(A)=s_1(A)/s_n(A)$, called the condition number of $A$, controls the error level and the rate of convergence of many algorithms in numerical linear algebra. Matrices with a bounded condition number are nice embeddings of $\mathbb{R}^n$ into $\mathbb{R}^N$, i.e., they do not significantly distort the Euclidean structure. This property holds, in particular, for random $N\times n$ matrices with independent centered subgaussian entries having unit variance, as long as $N\ge n$.

Obviously, the row product of several matrices is a submatrix of their tensor product. This fact, however, does not provide much information about the spectral properties of the row product, since they can be quite different from those of the tensor product. In particular, for random matrices the spectra of $A\otimes B$ and $A\otimes_r B$ are indeed very different. For example, let $d\le n\le d^2$, and consider $d\times n$ matrices $A$ and $B$ with independent $\pm1$ random entries. The spectrum of $A\otimes B$ is the product of the spectra of $A$ and $B$, so the norm of $A\otimes B$ will be of order $O\big((\sqrt n+\sqrt d\,)^2\big)=O(n)$, and the last singular value is $O\big((\sqrt n-\sqrt d\,)^2\big)$ whenever $d<n$, see [17]. On the other hand, computer experiments show that the extreme singular values of the row product behave as for a $d^2\times n$ matrix with independent entries, i.e., the first singular value is $O(d+\sqrt n)=O(d)$, and the last one is of order $d-\sqrt n$, see [9]. Based on this data, it was conjectured that the extreme singular values of the row product of several random matrices behave like those of a matrix with independent entries. This was established in [9] up to logarithmic terms, whose powers depended on the number of multipliers. We remove these logarithmic terms in Theorems 1.3 and 1.5 for row products of any fixed number of random matrices with independent bounded entries.

To formulate these results more precisely, we introduce a class of uniformly bounded random variables whose variances are uniformly bounded below. To shorten the notation, we summarize their properties in the following definition.

Definition 1.2. Let $\delta>0$. We will call a random variable $\xi$ a $\delta$-random variable if $|\xi|\le 1$ a.s., $\mathbb{E}\xi=0$, and $\mathbb{E}\xi^2\ge\delta^2$.

We start with an estimate of the norm of the row product of random matrices with independent $\delta$-random entries.

Theorem 1.3. Let $\Delta_1,\dots,\Delta_K$ be $d\times n$ matrices with independent $\delta$-random entries. Then the $K$-times entry-wise product $\Delta_1\otimes_r\Delta_2\otimes_r\dots\otimes_r\Delta_K$ is a $d^K\times n$ matrix satisfying
$\mathbb{P}\Big(\|\Delta_1\otimes_r\dots\otimes_r\Delta_K\|\ge C\big(d^{K/2}+n^{1/2}\big)\Big)\le\exp\Big(-c\,\frac{d+n}{d^{K-1}}\Big)$.
The constants $C,c$ may depend upon $K$ and $\delta$.

The paper [9] uses an $\varepsilon$-net argument to bound the norm of the row product. This is one of the sources of the logarithmic terms in the bound. To eliminate these terms, we use a different approach. The expectation of the norm is bounded using the moment method, which is one of the standard tools of random matrix theory. The moment method allows one to bound the probability as well; however, the estimate obtained this way would be too weak for our purposes. Instead, we apply the measure concentration inequality for convex functions, which is derived from Talagrand's measure concentration theorem.
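As a concrete illustration of the row product and of the computer experiment described above, here is a short NumPy sketch (my own illustration, not from the paper; the sizes are arbitrary choices with $d\le n\le d^2$):

```python
import numpy as np

def row_product(A, B):
    """Row product of A (N x n) and B (M x n): the NM x n matrix whose
    row (j-1)*M + k is the entry-wise product of row j of A and row k of B."""
    N, n = A.shape
    M, n2 = B.shape
    assert n == n2, "both matrices must have the same number of columns"
    # Broadcasting builds all N*M entry-wise row products at once.
    return (A[:, None, :] * B[None, :, :]).reshape(N * M, n)

rng = np.random.default_rng(0)
d, n = 30, 200                              # d <= n <= d**2, as in the example
A = rng.choice([-1.0, 1.0], size=(d, n))
B = rng.choice([-1.0, 1.0], size=(d, n))

C = row_product(A, B)                       # d^2 x n matrix with dependent rows
s = np.linalg.svd(C, compute_uv=False)
# The experiments cited above suggest s_1(C) ~ d + sqrt(n) and
# s_n(C) ~ d - sqrt(n), as for a d^2 x n matrix with independent entries.
print(s[0], s[-1])
```

For comparison, `np.linalg.svd(rng.choice([-1.0, 1.0], size=(d * d, n)), compute_uv=False)` gives the singular values of a genuinely independent $d^2\times n$ sign matrix.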

The bound for the norm in Theorem 1.3 is the same as for a $d^K\times n$ random matrix with bounded or subgaussian i.i.d. entries, while the probability estimate is significantly weaker than in the independent case. Nevertheless, the estimate of Theorem 1.3 is optimal both in terms of the norm bound and the probability (see Remarks 5.3 and 5.5 for details). In the case $d^K\ge n$, which is the important one for us, the assertion of Theorem 1.3 reads
$\mathbb{P}\Big(\|\Delta_1\otimes_r\dots\otimes_r\Delta_K\|\ge C\sqrt{d^K}\Big)\le\exp(-cd)$.

It is well known that with high probability a random $N\times n$ matrix $A$ with independent identically distributed bounded centered random entries has a bounded condition number whenever $N\ge n$ (see, e.g., [15]). Our next result shows that the same happens for the row products of random matrices as well. For the next theorem we need the iterated logarithm function.

Definition 1.4. For $q\in\mathbb{N}$ define the function $\log_{(q)}:(0,\infty)\to\mathbb{R}$ by induction:
(1) $\log_{(1)}t=\max(\log t,1)$;
(2) $\log_{(q+1)}t=\log_{(1)}\big(\log_{(q)}t\big)$.

Throughout the paper we assume that the constants appearing in various inequalities may depend upon the parameters $K$, $q$, $\delta$, but are independent of the sizes of the matrices and the nature of the random variables.

Theorem 1.5. Let $K,q,n,d$ be natural numbers. Assume that
$n\le\frac{c\,d^K}{\log_{(q)}d}$.
Let $\Delta_1,\dots,\Delta_K$ be $d\times n$ matrices with independent $\delta$-random entries. Then the $K$-times entry-wise product $\Delta_1\otimes_r\Delta_2\otimes_r\dots\otimes_r\Delta_K$ satisfies
$\mathbb{P}\Big(s_n\big(\Delta_1\otimes_r\dots\otimes_r\Delta_K\big)\le c\sqrt{d^K}\Big)\le C\exp(-cd)$.

This bound, together with the norm estimate above, shows that the condition number of the row product of matrices with $\delta$-random entries exceeds a constant with probability $O(\exp(-cd))$. While this probability is close to 0, it is much bigger than that for a $d^K\times n$ random matrix with independent random entries, in which case it is of order $\exp(-d^K)$. However, it is easy to show that this estimate is optimal (see Remarks 5.3 and 8.2). This weak probability bound renders standard approaches to singular value estimates unusable. In particular, the size of a $(1/2)$-net on the sphere $S^{n-1}$ is exponential in $n$, so the union bound in the $\varepsilon$-net argument breaks down.

This weaker bound not only makes the proofs more technically involved, but also leads to qualitative effects which cannot be observed in the context of random matrices with independent entries. One of the main applications of random matrices in asymptotic geometric analysis is to finding roughly Euclidean, or almost Euclidean, sections of convex bodies. In particular, the classical theorem of Kashin [8] states that a random section of the unit ball of $\ell_1^N$ by a linear subspace of dimension proportional to $N$ is roughly Euclidean. The original proof of Kashin used a random $\pm1$ matrix to construct these sections. The optimal bounds were obtained by Gluskin, who used random Gaussian matrices [5]. The particular structure of the $\ell_1$ norm plays no role in this result, and it can be extended to a larger class of convex bodies. Let $D\subset\mathbb{R}^N$ be a convex symmetric body such that $B_2^N\subset D$, and define the volume ratio [19] of $D$ by
$\mathrm{vr}(D)=\Big(\frac{\mathrm{vol}(D)}{\mathrm{vol}(B_2^N)}\Big)^{1/N}$.
Assume that the volume ratio of $D$ is bounded: $\mathrm{vr}(D)\le V$. Then for a random $N\times n$ matrix $A$ with independent entries satisfying certain conditions,
$\mathbb{P}\Big(\exists x\in\mathbb{R}^n:\ Ax\in(cV)^{-N/n}\,N^{1/2}\,\|x\|_2\,D\Big)\le\exp(-cn)$.
This fact was originally established in [18], and extended in [12] to a broad class of random matrices with independent entries. However, the volume ratio theorem does not hold for the row product of random matrices. We show in Lemma 3.2 that there exists a convex symmetric body $D\subset\mathbb{R}^{d^K}$ with bounded volume ratio such that
$\inf_{x\in S^{n-1}}\big\|\big(\Delta_1\otimes_r\dots\otimes_r\Delta_K\big)x\big\|_D\le c\,(Kd)^{1/2}$
with probability 1. For $K>1$ this bound is significantly lower than $\sqrt N=d^{K/2}$, which corresponds to the independent entries case. Surprisingly, despite the fact that the general volume ratio theorem breaks down, it still holds for the original case of the $\ell_1$ ball. The main result of this paper is the following theorem.

Theorem 1.6. Let $K,q,n,d$ be natural numbers. Assume that
$n\le\frac{c\,d^K}{\log_{(q)}d}$,
and let $\Delta_1,\dots,\Delta_K$ be $d\times n$ matrices with independent $\delta$-random entries. Then the $K$-times entry-wise product $\Delta=\Delta_1\otimes_r\Delta_2\otimes_r\dots\otimes_r\Delta_K$ is a $d^K\times n$ matrix satisfying
$\mathbb{P}\Big(\exists x\in S^{n-1}:\ \|\Delta x\|_1\le c\,d^K\Big)\le C\exp(-cd)$.

Note that results similar to Theorems 1.3, 1.5, and 1.6 remain valid if the matrices $\Delta_1,\dots,\Delta_K$ have different numbers of rows; the proofs require only minor changes.

The rest of the paper is organized as follows. In Section 2 we consider the privacy protection problem from which the study of row products originated. We derive an estimate on the minimal amount of noise needed to avoid a privacy breach from Theorem 1.5. Section 3 introduces the necessary notation. Section 4 contains an outline of the proofs of Theorems 1.3 and 1.6. Theorem 1.3 is proved in the first part of Section 5. The rest of that section and Section 6 develop the technical tools needed to prove Theorem 1.6. In Section 7 we introduce a new technical method for obtaining lower estimates. The minimal norm of $Ax$ over the unit sphere is frequently bounded via an $\varepsilon$-net argument. The implementation of this approach in [9] was one of the main sources of the parasitic logarithmic terms. In Section 7 the lower bound is handled differently. The required bound is written as the infimum of a random process. The most powerful method of controlling the supremum of a random process is chaining, i.e., representing the process as a sum of increments and controlling the increments separately [21]. This method, however, cannot be directly applied to control the infimum of a positive random process: indeed, lower estimates for the increments cannot be automatically combined to obtain a lower estimate for the sum. Nevertheless, in Lemma 7.1 we develop a variant of chaining which allows one to control the infimum of a process. This chaining lemma is the major step in proving Theorem 1.6, which is presented in Section 8, where we also derive Theorem 1.5 from it.

Acknowledgement: the author thanks Dick Windecker and Martin Strauss for pointing out the papers [7, 24, 3, 6], and the referee for correcting numerous typos in the first version of the paper.

2. Minimal noise for attribute non-privacy

Marginal, or contingency, tables are the standard way of releasing statistical summaries of data. Consider a database $D$, which we view as a $d\times n$ matrix with entries from $\{0,1\}$. The columns of the matrix are $n$ individual records, and the rows correspond to $d$ attributes of each record. Each attribute is binary, so it may be either present or absent. For any set of $K+1$ different attributes we release the percentage of records having all attributes from this set. The list of these values for all $\binom{d}{K+1}$ sets forms the contingency table. In the row product notation, the contingency table is the subset of the coordinates of the vector
$y=\underbrace{D\otimes_r\dots\otimes_r D}_{K+1\ \text{times}}\,w$
which correspond to all sets of $K+1$ different rows of the matrix $D$. Here $w\in\mathbb{R}^n$ is the vector with coordinates $w=(1,\dots,1)$.

The attribute non-privacy model refers to the situation when $d-1$ rows of the database $D$ are publicly available, or leaked, and one row is sensitive. The analysis of the more general case, where there is more than one sensitive attribute, can easily be reduced to this setting. For a comparison of this model with other privacy models see [9] and the references therein.

Denote the $(d-1)\times n$ submatrix of $D$ corresponding to the non-sensitive attributes by $D'$, and the sensitive vector by $x$. Then the coordinates of $y$ contain all the coordinates of the vector
$z=\Big(\underbrace{D'\otimes_r\dots\otimes_r D'}_{K\ \text{times}}\otimes_r\,x^T\Big)\,w=\Big(\underbrace{D'\otimes_r\dots\otimes_r D'}_{K\ \text{times}}\Big)\,x$
which correspond to $K$ different rows of the matrix $D'$. Hence, if the database $D$ is generic, then the sensitive vector $x$ can be reconstructed from $D'$ and the released vector $z$ by solving a linear system.

To avoid this privacy breach, the contingency table is released with some random noise. This noise should be sufficient to make the reconstruction impossible, and at the same time small enough that the summary data presented in the contingency table remains reliable. Let $z_{\mathrm{noise}}$ be the vector of added noise. Let $\widetilde D$ be the $\binom{d-1}{K}\times n$ submatrix of $D'\otimes_r\dots\otimes_r D'$ corresponding to all $K$-element subsets of $\{1,\dots,d-1\}$. If the last singular value of $\widetilde D$ is positive, then one can form the left inverse $\widetilde D_L^{-1}$ of $\widetilde D$, and $\|\widetilde D_L^{-1}\|=s_n^{-1}(\widetilde D)$. In this case, knowing the released data $z+z_{\mathrm{noise}}$, we can approximate the sensitive vector $x$ by $\bar x=\widetilde D_L^{-1}(z+z_{\mathrm{noise}})$. Then
$\|\bar x-x\|_2=\|\widetilde D_L^{-1}z_{\mathrm{noise}}\|_2\le\|\widetilde D_L^{-1}\|\,\|z_{\mathrm{noise}}\|_2$.
Therefore, if $\|z_{\mathrm{noise}}\|_2=o(\sqrt n)\,s_n(\widetilde D)$, then $\|\bar x-x\|_2=o(\sqrt n)$. Since the coordinates of $x$ are 0 or 1, we can reconstruct $(1-o(1))n$ coordinates of $x$ by rounding the coordinates of $\bar x$. Thus, a lower estimate of $s_n(\widetilde D)$ provides a lower bound for the norm of the noise vector $z_{\mathrm{noise}}$.
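The reconstruction attack just described can be made concrete with a small numerical sketch (my own illustration, not from the paper: the sizes, the noise level, and the use of least squares as the left inverse are illustrative choices):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
d1, n, K = 20, 50, 2          # d-1 public attributes, n records, K-fold products

D_pub = rng.integers(0, 2, size=(d1, n)).astype(float)   # public part D'
x = rng.integers(0, 2, size=n).astype(float)             # sensitive 0/1 row

# \tilde D: the rows of D' (x)_r ... (x)_r D' indexed by K-element row subsets
D_tilde = np.array([np.prod(D_pub[list(S)], axis=0)
                    for S in combinations(range(d1), K)])

z = D_tilde @ x                                          # released statistics
z_noisy = z + 0.01 * rng.standard_normal(len(z))         # small added noise

# Left inverse applied via least squares: x_bar ~ D_tilde^+ (z + noise)
x_bar, *_ = np.linalg.lstsq(D_tilde, z_noisy, rcond=None)
recovered = np.round(x_bar)
print("fraction of coordinates recovered:", np.mean(recovered == x))
```

Rounding succeeds as long as the reconstruction error stays below $1/2$ per coordinate; increasing the noise until $\|z_{\mathrm{noise}}\|_2$ is comparable to $\sqrt n\,s_n(\widetilde D)$ makes the rounding step fail, which is exactly the threshold discussed above.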

We analyze below the case of a random database. Assume that the entries of the database are independent $\{0,1\}$ variables, and the entries in the same row are identically distributed. This means that the distribution of any given attribute is the same for each record, but different attributes can be distributed differently. We exclude almost degenerate attributes, i.e., the attributes having probabilities very close to 0 or 1. In this case the bound on the minimal amount of noise follows from the following theorem.

Theorem 2.1. Let $K,q,n,d$ be natural numbers. Assume that $n\le\frac{c\,d^K}{\log_{(q)}d}$. Let $0<\underline p<\overline p<1$, and let $p_1,\dots,p_d$ be any numbers such that $\underline p<p_j<\overline p$. Consider a $d\times n$ matrix $A$ with independent Bernoulli entries $a_{j,k}$ satisfying $\mathbb{P}(a_{j,k}=1)=p_j$ for all $j=1,\dots,d$, $k=1,\dots,n$. Then the $K$-times entry-wise product $\widetilde A=A\otimes_r A\otimes_r\dots\otimes_r A$ is a $d^K\times n$ matrix satisfying
$\mathbb{P}\Big(s_n(\widetilde A)\le c'\sqrt{d^K}\Big)\le C'\exp(-cd)$.
The constants $c,c',C,C'$ may depend upon the parameters $K,q,\underline p,\overline p$.

Proof. This theorem will follow from Theorem 1.5, after we pass to a row product of matrices having independent $\delta$-random entries. To this end, notice that if an $m\times n$ matrix $U'$ is formed from an $M\times n$ matrix $U$ by taking a subset of its rows, then $s_n(U')\le s_n(U)$.

Let $d=2Kd'+m$, where $0\le m<2K$. For $j=1,\dots,K$ denote by $\Gamma_j^1$ the submatrix of $A$ consisting of the rows $(2j-2)d'+1,\dots,(2j-1)d'$, and by $\Gamma_j^0$ the submatrix consisting of the rows $(2j-1)d'+1,\dots,2jd'$. Let $D_j^1,D_j^0\in\mathbb{R}^{d'}$ be the vectors with coordinates
$D_j^1=\big(p_{(2j-2)d'+1},\dots,p_{(2j-1)d'}\big)$ and $D_j^0=\big(p_{(2j-1)d'+1},\dots,p_{2jd'}\big)$.
Set $\Delta_j=D_j^0\otimes_r\Gamma_j^1-D_j^1\otimes_r\Gamma_j^0$. Then $\Delta_1,\dots,\Delta_K$ are $d'\times n$ matrices with independent $\delta$-random entries for some $\delta$ depending on $\underline p,\overline p$.

Let $U_s$, $s=1,2,3$, be $N_s\times n$ matrices, and let $D\in\mathbb{R}^{N_2}$ be a vector with coordinates satisfying $|d_j|\le1$ for all $j$. Then for any $x\in\mathbb{R}^n$
$\big\|U_1\otimes_r\big(D^T\otimes_r U_2\big)\otimes_r U_3\,x\big\|_2\le\big\|U_1\otimes_r U_2\otimes_r U_3\,x\big\|_2$.
Indeed, any coordinate of $U_1\otimes_r(D^T\otimes_r U_2)\otimes_r U_3\,x$ equals the corresponding coordinate of $U_1\otimes_r U_2\otimes_r U_3\,x$ multiplied by some $d_j$, so the inequality above follows from the bound on $|d_j|$. This argument shows that for any $(\varepsilon_1,\dots,\varepsilon_K)\in\{0,1\}^K$
$\big\|\big(D_1^{1-\varepsilon_1}\otimes_r\Gamma_1^{\varepsilon_1}\big)\otimes_r\dots\otimes_r\big(D_K^{1-\varepsilon_K}\otimes_r\Gamma_K^{\varepsilon_K}\big)x\big\|_2\le\big\|\Gamma_1^{\varepsilon_1}\otimes_r\dots\otimes_r\Gamma_K^{\varepsilon_K}\,x\big\|_2$.
Therefore,
$\big\|\Delta_1\otimes_r\dots\otimes_r\Delta_K\,x\big\|_2\le\sum_{\varepsilon=(\varepsilon_1,\dots,\varepsilon_K)\in\{0,1\}^K}\big\|\Gamma_1^{\varepsilon_1}\otimes_r\dots\otimes_r\Gamma_K^{\varepsilon_K}\,x\big\|_2\le 2^K\,\big\|A\otimes_r\dots\otimes_r A\,x\big\|_2$,
because each $\Gamma_1^{\varepsilon_1}\otimes_r\dots\otimes_r\Gamma_K^{\varepsilon_K}$ is a submatrix of $A\otimes_r\dots\otimes_r A$. Thus, for any $t>0$,
$\mathbb{P}\big(s_n(A\otimes_r\dots\otimes_r A)<t\big)\le\mathbb{P}\big(s_n(\Delta_1\otimes_r\dots\otimes_r\Delta_K)<2^K t\big)$.
To complete the proof we use Theorem 1.5 with $d'$ in place of $d$, and note that $d\le 3Kd'$. □

3. Notation and preliminary results

The coordinates of a vector $x\in\mathbb{R}^n$ are denoted by $x(1),\dots,x(n)$. Throughout the paper we will intermittently consider $x$ as a vector in $\mathbb{R}^n$ and as an $n\times1$ matrix. The sequence $e_1,\dots,e_n$ stands for the standard basis of $\mathbb{R}^n$. For $1\le p<\infty$ denote by $B_p^n$ the unit ball of the space $\ell_p^n$:
$B_p^n=\Big\{x\in\mathbb{R}^n:\ \|x\|_p=\Big(\sum_{j=1}^n|x(j)|^p\Big)^{1/p}\le1\Big\}$.
By $S^{n-1}$ we denote the Euclidean unit sphere. Denote by $\|A\|$ the operator norm of the matrix $A$, and by $\|A\|_{HS}$ the Hilbert–Schmidt norm:
$\|A\|_{HS}=\Big(\sum_{j,k}|a_{j,k}|^2\Big)^{1/2}$.
The volume of a convex set $D\subset\mathbb{R}^n$ will be denoted $\mathrm{vol}(D)$, and the cardinality of a finite set $J$ by $|J|$. By $\lfloor x\rfloor$ we denote the integer part of $x\in\mathbb{R}$.

Throughout the paper we denote by $K$ the number of terms in the row product, by $q$ the number of iterations of the logarithm, and by $\delta^2$ the minimum of the variances of the entries of the random matrices. $C$, $c$, etc. denote constants which may depend on the parameters $K$, $q$, and $\delta$, and whose value may change from line to line.

Let $V\subset\mathbb{R}^n$ be a compact set, and let $\varepsilon>0$. A set $N\subset V$ is called an $\varepsilon$-net if for any $x\in V$ there exists $y\in N$ such that $\|x-y\|_2\le\varepsilon$. If $T:\mathbb{R}^n\to\mathbb{R}^m$ is a linear operator, and $N$ and $N'$ are $\varepsilon$-nets in $B_2^n$ and $B_2^m$ respectively, then
$\|T\|\le(1-\varepsilon)^{-1}\sup_{x\in N}\|Tx\|_2\le(1-\varepsilon)^{-2}\sup_{x\in N}\sup_{y\in N'}\langle Tx,y\rangle$.
We will use the following volumetric estimate. Let $V\subset B_2^n$. Then for any $\varepsilon<1$ there exists an $\varepsilon$-net $N\subset V$ such that
$|N|\le\Big(\frac{3}{\varepsilon}\Big)^n$.

We will repeatedly use Talagrand's measure concentration inequality for convex functions (see [20], Theorem 6.6, or [10], Corollary 4.9).

Theorem (Talagrand). Let $X_1,\dots,X_n$ be independent random variables with values in $[-1,1]$. Let $f:[-1,1]^n\to\mathbb{R}$ be a convex $L$-Lipschitz function, i.e.,
$\forall x,y\in[-1,1]^n\quad |f(x)-f(y)|\le L\,\|x-y\|_2$.
Denote by $M$ the median of $f(X_1,\dots,X_n)$. Then for any $t>0$,
$\mathbb{P}\big(|f(X_1,\dots,X_n)-M|\ge t\big)\le4\exp\Big(-\frac{t^2}{16L^2}\Big)$.

To estimate various norms we will divide the coordinates of a vector $x\in\mathbb{R}^n$ into blocks. Let $\pi:\{1,\dots,n\}\to\{1,\dots,n\}$ be a permutation rearranging the absolute values of the coordinates of $x$ in non-increasing order: $|x(\pi(1))|\ge|x(\pi(2))|\ge\dots$. For $l<n$ and $m\ge0$ define $N_0=0$, $N_m=\sum_{j=0}^{m-1}4^j\,l$, and set
$I_m=\pi\big(\{N_m+1,\dots,N_{m+1}\}\big)$.
In other words, $I_0$ contains the $l$ largest coordinates of $x$, $I_1$ contains the $4l$ next largest, etc. We continue as long as $I_m\ne\emptyset$. The block $I_m$ will be called the $m$-th block of type $l$ of the coordinates of $x$. Denote by $x_I$ the restriction of $x$ to the coordinates from the set $I$. We need the following standard lemma.

Lemma 3.1. Let $b<1$ and let $x\in B_2^n\cap b\,B_\infty^n$. For $l\le b^{-2}$ consider the blocks $I_0,I_1,\dots$ of type $l$ of the coordinates of $x$. Then
$\sum_{m\ge0}|I_m|\,\|x_{I_m}\|_\infty^2\le5$.

Proof. Note that the absolute value of any non-zero coordinate of $x_{I_{m-1}}$ is greater than or equal to $\|x_{I_m}\|_\infty$. Hence, $|I_{m-1}|\,\|x_{I_m}\|_\infty^2\le\|x_{I_{m-1}}\|_2^2$, so
$\sum_{m\ge0}|I_m|\,\|x_{I_m}\|_\infty^2=l\,\|x_{I_0}\|_\infty^2+\sum_{m\ge1}4\,|I_{m-1}|\,\|x_{I_m}\|_\infty^2\le l\,b^2+4\sum_{m\ge1}\|x_{I_{m-1}}\|_2^2\le5$. □

The next lemma shows that Theorem 1.6 cannot be extended from the $\ell_1$ norm to a general Banach space whose unit ball has a bounded volume ratio.

Lemma 3.2. There exists a convex symmetric body $D\subset\mathbb{R}^{d^K}$ such that $B_2^{d^K}\subset D$,
$\Big(\frac{\mathrm{vol}(D)}{\mathrm{vol}(B_2^{d^K})}\Big)^{1/d^K}\le C$,
satisfying
$\inf_{x\in S^{n-1}}\big\|\big(\Delta_1\otimes_r\dots\otimes_r\Delta_K\big)x\big\|_D\le c\,(Kd)^{1/2}$
for all $d\times n$ matrices $\Delta_1,\dots,\Delta_K$ with entries $1$ or $-1$.

Proof. Set
$W=\big\{\varepsilon_1\otimes\dots\otimes\varepsilon_K:\ \varepsilon_1,\dots,\varepsilon_K\in\{-1,1\}^d\big\}$,
and let $D=\mathrm{conv}\big((Kd)^{-1/2}W\cup B_2^{d^K}\big)$. To estimate the volume ratio of $D$ we use Urysohn's inequality [13]:
$\Big(\frac{\mathrm{vol}(D)}{\mathrm{vol}(B_2^{d^K})}\Big)^{1/d^K}\le d^{-K/2}\,\mathbb{E}\sup_{x\in D}\langle g,x\rangle$,
where $g$ is a standard Gaussian vector in $\mathbb{R}^{d^K}$. Since $D\subset(Kd)^{-1/2}\,\mathrm{conv}(W)+B_2^{d^K}$, the right-hand side of the previous inequality is bounded by
$1+d^{-K/2}(Kd)^{-1/2}\,\mathbb{E}\sup_{x\in W}\langle g,x\rangle\le1+c\,(Kd)^{-1/2}\log^{1/2}|W|$,
where $|W|$ is the cardinality of $W$. Since $|W|=2^{dK}$, the volume ratio of $D$ is bounded by an absolute constant.

Let $e_1$ be the first basis vector of $\mathbb{R}^n$. The lemma now follows from the identity
$\big(\Delta_1\otimes_r\dots\otimes_r\Delta_K\big)e_1=\varepsilon_1\otimes\dots\otimes\varepsilon_K$,
where $\varepsilon_1,\dots,\varepsilon_K$ are the first columns of the matrices $\Delta_1,\dots,\Delta_K$. □
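The block decomposition of type $l$ and the bound of Lemma 3.1 can be checked numerically; the following sketch (NumPy, my own illustration) builds the blocks and evaluates the sum $\sum_{m\ge0}|I_m|\,\|x_{I_m}\|_\infty^2$:

```python
import numpy as np

def blocks_of_type_l(x, l):
    """Split the coordinate indices of x into blocks I_0, I_1, ... of type l:
    I_0 holds the l largest |x(i)|, I_1 the next 4l, and I_m the next 4^m * l."""
    order = np.argsort(-np.abs(x))      # permutation pi: |x| in non-increasing order
    blocks, start, m = [], 0, 0
    while start < len(x):
        size = (4 ** m) * l
        blocks.append(order[start:start + size])
        start += size
        m += 1
    return blocks

# Numerical check of Lemma 3.1: for x in B_2^n with ||x||_inf <= b and l <= b^{-2},
# the sum over blocks of |I_m| * ||x_{I_m}||_inf^2 is at most 5.
rng = np.random.default_rng(2)
n, b = 1000, 0.1
x = rng.standard_normal(n)
x = np.minimum(np.abs(x), b) * np.sign(x)   # enforce ||x||_inf <= b
x /= max(1.0, np.linalg.norm(x))            # enforce ||x||_2 <= 1
l = 100                                     # l <= b**-2
total = sum(len(I) * np.max(np.abs(x[I])) ** 2 for I in blocks_of_type_l(x, l))
print(total)
```

The bound holds for any such vector, not just random ones; the random `x` here simply exercises the truncation and normalization conditions of the lemma.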

4. Outline of the proof

We begin with proving Theorem 1.3. We use the moment method, which is one of the standard random matrix theory tools. To estimate the norm of a rectangular random matrix $A$ with centered entries, one considers the matrix $(A^*A)^p$ for some large $p\in\mathbb{N}$, and evaluates the expectation of its trace using combinatorics. Since $\|A\|^{2p}\le\mathrm{tr}\,(A^*A)^p$, any estimate of the trace translates into an estimate for the norm. Following a variant of this approach developed in [4], we obtain an upper bound for the norm of the row product of independent random matrices, which is valid with probability close to 1. However, the moment method alone is insufficient to obtain an exponential bound for the probability. To improve the probability estimate, we combine the bound for the median of the norm, obtained by the moment method, with a measure concentration theorem. To this end we extend Talagrand's measure concentration theorem for convex functions to functions which are polyconvex, i.e., convex with respect to certain subsets of the coordinates.

Before tackling the small ball probability estimate for $\min_{x\in S^{n-1}}\|\Delta_1\otimes_r\dots\otimes_r\Delta_K\,x\|_1$, we consider the easier problem of finding a lower bound for $\|\Delta_1\otimes_r\dots\otimes_r\Delta_K\,x\|_1$ for a fixed vector $x\in S^{n-1}$. The entries of the row product are not independent, so to take advantage of independence, we condition on $\Delta_1,\dots,\Delta_{K-1}$. To use Talagrand's theorem in this context, we have to bound the Lipschitz constant of this norm from above, and its median from below. Such bounds are not available for all matrices $\Delta_1,\dots,\Delta_{K-1}$, but they can be obtained for typical matrices, namely outside of a set of small probability. Moreover, the bounds will depend on the vector $x$, so to obtain them we have to prove these estimates for all submatrices of the row product. This is done in Sections 5.2 and 5.3. Using these results, we bound the small ball probability in Section 6. Actually, we prove a stronger estimate for the Lévy concentration function, which is the supremum of the small ball probabilities over all balls of a fixed radius.

The final step of the proof is combining the individual small ball probability estimates to obtain an estimate of the minimal $\ell_1$-norm over the sphere. This is usually done by introducing an $\varepsilon$-net and approximating a point on the sphere by one of its elements. Since the small ball probability depends on the direction of the vector $x$, one $\varepsilon$-net would not be enough. A modification of this method using several $\varepsilon$-nets was developed in [11]. However, its implementation for the row products leads to the appearance of parasitic logarithmic terms, whose degrees rapidly grow with $K$ [9]. To avoid these terms, we develop a new chaining argument in Section 7. Unlike the standard chaining argument, which is used to bound the supremum of a random process, the method of Section 7 applies to the infimum. In Section 8 we combine the chaining lemma with the Lévy concentration function bound of Section 6 to complete the proof of Theorem 1.6, and derive Theorem 1.5 from it. We also show that the image of $\mathbb{R}^n$ under the row product of random matrices is a Kashin subspace, i.e., the $\ell_1$ and $\ell_2$ norms are equivalent on this space.

5. Norm estimates

5.1. Norm of the matrix. We start with a preliminary estimate of the operator norm of the row product of random matrices. To this end we use the moment method, which is based on bounding the expectation of the trace of high powers of the matrix. This approach, which is standard in the theory of random matrices with independent entries, carries over to the row product setting as well.

Theorem 5.1. Let $\Delta_1,\dots,\Delta_K$ be $d\times n$ matrices with independent $\delta$-random entries. Let $p\in\mathbb{N}$ be a number such that $p\le c\,n^{1/(12K)}$. Then the $K$-times entry-wise product $\Delta=\Delta_1\otimes_r\Delta_2\otimes_r\dots\otimes_r\Delta_K$ is a $d^K\times n$ matrix satisfying
$\mathbb{E}\|\Delta\|^{2p}\le p^{2K+1}\,n\,\big(d^{1/2}+n^{1/(2K)}\big)^{2pK}$.

Proof. The proof of this theorem closely follows [4], so we will only sketch it. Denote the entries of the matrix $\Delta_l$ by $\delta^{(l)}_{i,j}$, so the entry of the matrix $\Delta$ corresponding to the product of the entries in the rows $i_1,i_2,\dots,i_K$ and column $j$ is $\delta^{(1)}_{i_1,j}\cdots\delta^{(K)}_{i_K,j}$. Then
$\mathbb{E}\|\Delta\|^{2p}\le\mathbb{E}\,\mathrm{tr}\,(\Delta^T\Delta)^p=\sum_{V}\mathbb{E}\Big[\delta^{(1)}_{i_1(1),j_1}\cdots\delta^{(K)}_{i_K(1),j_1}\,\delta^{(1)}_{i_1(2),j_1}\cdots\delta^{(K)}_{i_K(2),j_1}\cdots\delta^{(1)}_{i_1(2p),j_p}\cdots\delta^{(K)}_{i_K(2p),j_p}\Big].$
Here $V$ is the set of admissible multi-paths, i.e., sequences of $2p$ lists $\big\{(i_1(m),j_m),\dots,(i_K(m),j_m)\big\}_{m=1}^{2p}$ such that
(1) the column number $j_m$ is the same for all entries of the list $m$;
(2) the first list is arbitrary;
(3) the entries of the second list are in the same column as the entries of the first list, the entries of the third list are in the same rows as the respective entries of the second list, etc.;
(4) the entries of the last list are in the same rows as the respective entries of the first list;
(5) every entry appearing in each path appears at least twice.

Since the entries of the matrices $\Delta_1,\dots,\Delta_K$ are uniformly bounded, the expectations are uniformly bounded as well, so $\mathbb{E}\|\Delta\|^{2p}\le|V|$. To estimate the cardinality of $V$, denote by $\beta(r_1,\dots,r_K,c)$ the number of admissible multi-paths whose entries are taken from exactly $r_1$ rows of the matrix $\Delta_1$, exactly $r_2$ rows of the matrix $\Delta_2$, etc., and exactly $c$ columns of each matrix. Note that the set of columns through which the path goes is common for the matrices $\Delta_1,\dots,\Delta_K$. An admissible multi-path can be viewed as an ordered $K$-tuple of closed paths $q_1,\dots,q_K$ of length $2p+1$ in the $d\times n$ bi-partite graph, such that $q_1(2j)=q_2(2j)=\dots=q_K(2j)$ for $j=1,\dots,p$, and each edge is traveled at least twice by each path. With this notation we have
(5.1) $\mathbb{E}\|\Delta\|^{2p}\le\sum_{(r_1,\dots,r_K,c)\in J}\beta(r_1,\dots,r_K,c)$,
where $J$ is the set of sequences of natural numbers $(r_1,\dots,r_K,c)$ satisfying $r_l+c\le p+1$ for each $l=1,\dots,K$. The inequality here follows from condition (5) above.

Let $\gamma(r_1,\dots,r_K,c)$ be the number of admissible multi-paths which go through the first $r_1$ rows of the matrix $\Delta_1$, the first $r_2$ rows of the matrix $\Delta_2$, etc., and the first $c$ columns. Then
$\beta(r_1,\dots,r_K,c)\le\binom{n}{c}\prod_{l=1}^{K}\binom{d}{r_l}\,\gamma(r_1,\dots,r_K,c)$.

We call a closed path of length $2p+1$ in the $d\times n$ bi-partite graph standard if
(1) it starts with the edge $(1,1)$;
(2) if the path visits a new left (right) vertex, then its number is the minimal among the left (right) vertices which have not yet been visited by this path;
(3) each edge in the path is traveled at least twice.

Let $m(r,c)$ be the number of standard paths through $r$ left vertices and $c$ right vertices of the bi-partite graph. Then
$$\gamma(r_1, \dots, r_K, c) \le c! \prod_{l=1}^{K} r_l!\, m(r_l, c).$$
This inequality follows from the fact that all $K$ paths in the admissible multi-path visit a new column vertex at the same time, so the column vertex enumeration defined by different paths of the same multi-path is consistent. Combining the two previous estimates, we get
$$\beta(r_1, \dots, r_K, c) \le n^c \prod_{l=1}^{K} d^{r_l}\, m(r_l, c).$$
The inequality on page 260 of [4] reads
$$m(r,c) \le \binom{2p}{2r} p^{12(p - r - c + 1)}\, 4^r.$$
Substituting it into the inequality above, we obtain
$$(5.2)\qquad \sum_{J} \beta(r_1, \dots, r_K, c) \le \sum_{c=1}^{p} \prod_{l=1}^{K} \left( \sum_{r_l=1}^{p+1-c} n^{c/K}\, d^{r_l} \binom{2p}{2r_l} p^{12(p - r_l - c + 1)}\, 4^{r_l} \right).$$
To estimate the last quantity note that since $p \le \frac{1}{2} n^{1/(12K)}$,
$$\sum_{r_l=1}^{p+1-c} n^{c/K}\, d^{r_l} \binom{2p}{2r_l} p^{12(p - r_l - c + 1)}\, 4^{r_l} \le p^2\, n^{1/K} \sum_{r_l=0}^{p} \binom{2p}{2r_l} \left( d^{1/2} \right)^{2r_l} \left( n^{1/(2K)} \right)^{2(p - r_l)} \le p^2\, n^{1/K} \left( d^{1/2} + n^{1/(2K)} \right)^{2p}.$$
Finally, combining this with (5.1) and (5.2), we conclude
$$\mathbb{E}\|\Delta\|^{2p} \le \sum_{c=1}^{p} \left( p^2\, n^{1/K} \left( d^{1/2} + n^{1/(2K)} \right)^{2p} \right)^K \le p^{2K+1}\, n \left( d^{1/2} + n^{1/(2K)} \right)^{2pK}. \qquad \Box$$
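The row product is easy to form numerically. The sketch below is a minimal illustration, not code from the paper: the helper `row_product` and all parameter values are our own choices. It builds $\Delta_1 \otimes_r \cdots \otimes_r \Delta_K$ for $\pm 1$ matrices and compares the operator norm with the scale $d^{K/2} + n^{1/2}$ appearing in Theorem 5.1.

```python
import numpy as np

def row_product(mats):
    """Row product of d x n matrices: the row indexed by (i_1, ..., i_K)
    is the entrywise product of row i_1 of the first matrix, ...,
    row i_K of the last one."""
    prod = np.ones((1, mats[0].shape[1]))
    for m in mats:
        # pair every row built so far with every row of the next matrix
        prod = (prod[:, None, :] * m[None, :, :]).reshape(-1, m.shape[1])
    return prod

rng = np.random.default_rng(0)
d, n, K = 4, 30, 2
mats = [rng.choice([-1.0, 1.0], size=(d, n)) for _ in range(K)]
delta = row_product(mats)

assert delta.shape == (d**K, n)
op_norm = np.linalg.norm(delta, 2)   # largest singular value
bound = d**(K / 2) + np.sqrt(n)      # the scaling of Theorem 5.1
print(op_norm / bound)               # a modest constant
```

The trivial Frobenius-norm bound already gives $\|\Delta\| \le \sqrt{d^K n}$, so the interest of Theorem 5.1 is that the true operator norm is of the much smaller order $d^{K/2} + n^{1/2}$.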

Applying Chebyshev's inequality, we can derive a large deviation estimate from the moment estimate of Theorem 5.1.

Corollary 5.2. Under the conditions of Theorem 5.1,
$$\mathbb{P}\left( \|\Delta_1 \otimes_r \cdots \otimes_r \Delta_K\| \ge C \left( d^{K/2} + n^{1/2} \right) \right) \le \exp\left( -c\, n^{1/(12K)} \right).$$

Remark 5.3. The bound for the norm appearing in Corollary 5.2 matches that for a random matrix with centered i.i.d. entries. This bound is optimal for the row products as well. To see it, assume that the entries of $\Delta_1, \dots, \Delta_K$ are independent $\pm 1$ random variables. Then $\|(\Delta_1 \otimes_r \cdots \otimes_r \Delta_K) e_1\|_2 = d^{K/2}$. Also, if $x \in S^{n-1}$ is such that $x(j) = n^{-1/2} \delta_{1,j}$, where $\delta_{1,j}$ is an entry in the first row of the matrix $\Delta_1 \otimes_r \cdots \otimes_r \Delta_K$, then $\|(\Delta_1 \otimes_r \cdots \otimes_r \Delta_K) x\|_2 \ge n^{1/2}$.

More precise versions of the moment method show that the bound of the type of Theorem 1.3 is valid for bigger values of $p$ as well, and lead to a more precise large deviation bound. We do not pursue this direction here, since these bounds are not powerful enough for our purposes. Instead, we use the previous corollary to bound the median of the norm of $\Delta_1 \otimes_r \cdots \otimes_r \Delta_K$, and apply measure concentration.

The standard tool for deriving measure concentration results for norms of random matrices is Talagrand's measure concentration theorem for convex functions. However, this theorem is not available in our context, since the norm of $\Delta_1 \otimes_r \cdots \otimes_r \Delta_K$ is not a convex function of the entries of $\Delta_1, \dots, \Delta_K$. We will modify this theorem to apply it to polyconvex functions.

Lemma 5.4. Consider a function $F: \mathbb{R}^{KM} \to \mathbb{R}$. For $1 \le k \le K$ and $x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_K \in \mathbb{R}^M$ define a function $f_{x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_K}: \mathbb{R}^M \to \mathbb{R}$ by
$$f_{x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_K}(x) = F(x_1, \dots, x_{k-1}, x, x_{k+1}, \dots, x_K).$$
Assume that for all $1 \le k \le K$ and for all $x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_K \in B_\infty^M$ the functions $f_{x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_K}$ are $L$-Lipschitz and convex. Let $(\varepsilon_1, \dots, \varepsilon_K) = (\nu_{1,1}, \dots, \nu_{1,M}, \dots, \nu_{K,1}, \dots, \nu_{K,M}) \in \mathbb{R}^{KM}$ be a set of independent random variables, whose absolute values are uniformly bounded by 1. If
$$\mathbb{P}\left( F(\varepsilon_1, \dots, \varepsilon_K) \le \mu \right) \ge 1 - 2 \cdot 4^{-K},$$

then for any $t > 0$
$$\mathbb{P}\left( F(\varepsilon_1, \dots, \varepsilon_K) \ge \mu + t \right) \le 4^K \exp\left( -\frac{c t^2}{K^2 L^2} \right).$$

Proof. We prove this lemma by induction on $K$. In case $K = 1$ the assertion of the lemma follows immediately from Talagrand's measure concentration theorem for convex functions. Assume that the lemma holds for $K - 1$. Let $F: \mathbb{R}^{KM} \to \mathbb{R}$ be a function satisfying the assumptions of the lemma. Set
$$\Omega = \left\{ (x_1, \dots, x_{K-1}) \in B_\infty^{(K-1)M} \;\middle|\; \mathbb{P}\left( F(x_1, \dots, x_{K-1}, \varepsilon_K) > \mu \right) \ge 1/2 \right\}.$$
Then Chebyshev's inequality yields
$$(5.3)\qquad \mathbb{P}\left( (\varepsilon_1, \dots, \varepsilon_{K-1}) \in \Omega \right) \le 4^{-(K-1)}.$$
By Talagrand's theorem, for any $(x_1, \dots, x_{K-1}) \in B_\infty^{(K-1)M} \setminus \Omega$
$$\mathbb{P}\left( F(x_1, \dots, x_{K-1}, \varepsilon_K) \ge \mu + \frac{t}{K} \right) \le 2 \exp\left( -\frac{c t^2}{K^2 L^2} \right).$$
Hence,
$$\mathbb{P}\left( F(\varepsilon_1, \dots, \varepsilon_K) \ge \mu + \frac{t}{K} \;\middle|\; (\varepsilon_1, \dots, \varepsilon_{K-1}) \in B_\infty^{(K-1)M} \setminus \Omega \right) \le 2 \exp\left( -\frac{c t^2}{K^2 L^2} \right).$$
Define
$$\Xi = \left\{ x_K \in B_\infty^M \;\middle|\; \mathbb{P}\left( F(\varepsilon_1, \dots, \varepsilon_{K-1}, x_K) \ge \mu + \frac{t}{K} \;\middle|\; (\varepsilon_1, \dots, \varepsilon_{K-1}) \in B_\infty^{(K-1)M} \setminus \Omega \right) > 4^{-(K-1)} \right\}.$$
The previous estimate and Chebyshev's inequality imply
$$\mathbb{P}(\varepsilon_K \in \Xi) \le 2 \cdot 4^{K-1} \exp\left( -\frac{c t^2}{K^2 L^2} \right).$$
If $x_K \in \Xi^c$, then combining the conditional probability bound with the estimate (5.3), we obtain
$$\mathbb{P}\left( F(\varepsilon_1, \dots, \varepsilon_{K-1}, x_K) \ge \mu + \frac{t}{K} \right) \le \mathbb{P}\left( F(\varepsilon_1, \dots, \varepsilon_{K-1}, x_K) \ge \mu + \frac{t}{K} \;\middle|\; (\varepsilon_1, \dots, \varepsilon_{K-1}) \in B_\infty^{(K-1)M} \setminus \Omega \right) + \mathbb{P}(\Omega) \le 2 \cdot 4^{-(K-1)}.$$

Hence, applying the induction hypothesis with $\frac{K-1}{K} t$ in place of $t$, we get
$$\mathbb{P}\left( F(\varepsilon_1, \dots, \varepsilon_{K-1}, x_K) \ge \mu + \frac{t}{K} + \frac{K-1}{K} t \right) \le 4^{K-1} \exp\left( -\frac{c t^2}{K^2 L^2} \right).$$
Finally,
$$\mathbb{P}\left( F(\varepsilon_1, \dots, \varepsilon_K) \ge \mu + t \right) \le \mathbb{P}\left( F(\varepsilon_1, \dots, \varepsilon_K) \ge \mu + t \;\middle|\; \varepsilon_K \in B_\infty^M \setminus \Xi \right) + \mathbb{P}(\varepsilon_K \in \Xi) \le 4^K \exp\left( -\frac{c t^2}{K^2 L^2} \right),$$
which completes the proof of the induction step. $\Box$

This concentration inequality combined with Corollary 5.2 allows us to establish the correct probability bound for large deviations of the norm of the row product of random matrices.

Proof of Theorem 1.3. For $k = 1, \dots, K$ let $\varepsilon_k \in \mathbb{R}^{dn}$ be the entries of the matrix $\Delta_k$ rewritten as a vector. For any matrices $\Delta_1, \dots, \Delta_{k-1}, \Delta_{k+1}, \dots, \Delta_K$ the function
$$f_{\Delta_1, \dots, \Delta_{k-1}, \Delta_{k+1}, \dots, \Delta_K}(\Delta_k) = \left\| \Delta_1 \otimes_r \cdots \otimes_r \Delta_{k-1} \otimes_r \Delta_k \otimes_r \Delta_{k+1} \otimes_r \cdots \otimes_r \Delta_K \right\|$$
is convex. Also, since the absolute values of the entries of the matrices $\Delta_1, \dots, \Delta_{k-1}, \Delta_{k+1}, \dots, \Delta_K$ do not exceed 1,
$$\left| f(\Delta_k) - f(\Delta'_k) \right| \le \left\| \Delta_1 \otimes_r \cdots \otimes_r (\Delta_k - \Delta'_k) \otimes_r \cdots \otimes_r \Delta_K \right\| \le \left\| \Delta_1 \otimes_r \cdots \otimes_r (\Delta_k - \Delta'_k) \otimes_r \cdots \otimes_r \Delta_K \right\|_{HS} \le d^{(K-1)/2} \left\| \Delta_k - \Delta'_k \right\|_{HS},$$
so the Lipschitz constant of this function does not exceed $d^{(K-1)/2}$. By Corollary 5.2, we can take $\mu = C(d^{K/2} + n^{1/2})$. Applying Lemma 5.4 with $t = C(d^{K/2} + n^{1/2})$ finishes the proof. $\Box$

Remark 5.5. The probability bound of Theorem 1.3 is optimal. Indeed, assume first that $d^K \ge n$, and let $\Delta_1, \dots, \Delta_K$ be $d \times n$ matrices with independent $\pm 1$ random entries. Choose a number $s \in \mathbb{N}$ such that $s > C$, where $C$ is the constant in Theorem 1.3, and set $x = (e_1 + \cdots + e_s)/\sqrt{s}$. With probability $2^{-sKd}$ all entries in the first $s$ columns of these matrices equal 1, so
$$\|(\Delta_1 \otimes_r \cdots \otimes_r \Delta_K) x\|_2 = \sqrt{s}\, d^{K/2}.$$
In the opposite case, $n > d^K$, set $s = C^2 n / d^K$, where the constant $C$ is the same as above. Then for $x$ defined above we have

$$\|(\Delta_1 \otimes_r \cdots \otimes_r \Delta_K) x\|_2 = \sqrt{s}\, d^{K/2} = C \sqrt{n}$$
with probability at least $2^{-sKd} = \exp(-c K n / d^{K-1})$.

5.2. Norms of the submatrices. We start with two deterministic lemmas. The first one is a trivial bound for the norm of the row product of two matrices.

Lemma 5.6. Let $U$ be an $M \times n$ matrix, and let $V$ be a $d \times n$ matrix. Assume that $|v_{i,j}| \le 1$ for all entries of the matrix $V$. Then
$$\|U \otimes_r V\| \le \sqrt{d}\, \|U\|.$$

Proof. The matrix $U \otimes_r V$ consists of $d$ blocks $U \otimes_r v_j$, $j = 1, \dots, d$, where $v_j$ is a row of $V$. For any $x \in \mathbb{R}^n$
$$\|(U \otimes_r v_j) x\|_2 = \left\| U \left( v_j \otimes_r x^T \right)^T \right\|_2 \le \|U\| \left\| \left( v_j \otimes_r x^T \right)^T \right\|_2 \le \|U\| \|x\|_2.$$
Hence,
$$\|U \otimes_r V\|^2 \le \sum_{j=1}^{d} \|U \otimes_r v_j\|^2 \le d\, \|U\|^2. \qquad \Box$$

The second lemma is based on the block decomposition of the coordinates of a vector.

Lemma 5.7. Let $T: \mathbb{R}^n \to \mathbb{R}^m$ be a linear operator. Set $L = \lfloor \frac{1}{4} \log_2 n \rfloor$ and let $1 \le L_0 < L$. For $l = 1, \dots, L$ denote
$$M_l = \left\{ x \in B_2^n \;\middle|\; |\mathrm{supp}(x)| \le 4^l, \text{ and } x(j) \in \{0, -2^{-l}, 2^{-l}\} \text{ for all } j \right\}.$$
Let $b \le 2^{-L_0}$. Then
$$\left\| T: B_2^n \cap b B_\infty^n \to B_2^m \right\| \le 5 \left( \sum_{l=L_0}^{L} \max_{z \in M_l} \|T z\|_2^2 \right)^{1/2}.$$

Proof. Let $x \in B_2^n \cap b B_\infty^n$. Let $I_0, I_1, \dots, I_{L-L_0}$ be blocks of type $4^{L_0}$ of coordinates of $x$. Recall that $|I_m| = 4^{L_0+m}$. If $x_{I_m} \ne 0$, set
$$y_m = |I_m|^{-1/2}\, \frac{x_{I_m}}{\|x_{I_m}\|_\infty},$$
otherwise $y_m = 0$. Then $\|y_m\|_\infty = |I_m|^{-1/2} = 2^{-(L_0+m)}$ and $\|y_m\|_2 \le 1$, so $y_m \in \mathrm{conv}(M_{L_0+m})$ for all $m$. By the Cauchy-Schwarz inequality,
$$\|T x\|_2 \le \sum_{m=0}^{L-L_0} \|T x_{I_m}\|_2 \le \left( \sum_{m=0}^{L-L_0} |I_m| \|x_{I_m}\|_\infty^2 \right)^{1/2} \left( \sum_{m=0}^{L-L_0} \|T y_m\|_2^2 \right)^{1/2} \le \left( \sum_{m=0}^{L-L_0} |I_m| \|x_{I_m}\|_\infty^2 \right)^{1/2} \left( \sum_{m=0}^{L-L_0} \max_{z \in M_{L_0+m}} \|T z\|_2^2 \right)^{1/2}.$$
The estimate of Lemma 3.1 completes the proof. $\Box$
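Lemma 5.6 above is deterministic, so it can be checked directly on random inputs. The following sketch (variable names and parameter values are ours, chosen only for illustration) forms $U \otimes_r V$ as a stack of $d$ blocks, one per row of $V$, and verifies the bound $\|U \otimes_r V\| \le \sqrt{d}\,\|U\|$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, n, d = 5, 12, 6
U = rng.standard_normal((M, n))
V = rng.uniform(-1.0, 1.0, size=(d, n))   # |v_ij| <= 1, as in Lemma 5.6

# U (x)_r V consists of the d blocks U * v_j (columns of U scaled by a row of V)
W = np.vstack([U * V[j] for j in range(d)])

lhs = np.linalg.norm(W, 2)
rhs = np.sqrt(d) * np.linalg.norm(U, 2)
assert lhs <= rhs + 1e-9                   # the bound of Lemma 5.6
```

Each block equals $U\,\mathrm{diag}(v_j)$ with $\|\mathrm{diag}(v_j)\| \le 1$, which is exactly the mechanism of the proof: the squared norm of the stack is at most the sum of the $d$ squared block norms.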

For $k \in \mathbb{N}$ denote by $W_k$ the set of all $d^k \times n$ matrices $V$ satisfying
$$(5.4)\qquad \|V_J\| \le C_k \left( d^{k/2} + |J|^{1/2} \log^{k/2} \frac{en}{|J|} \right)$$
for all non-empty subsets $J \subset \{1, \dots, n\}$. Here $V_J$ denotes the submatrix of $V$ with columns belonging to $J$, and $C_k$ is a constant depending on $k$ only. This definition obviously depends on the choice of the constants $C_k$. These constants will be defined inductively in the proof of Lemma 5.9 and then fixed for the rest of the paper. We will prove that the row product of random matrices satisfies condition (5.4) with high probability. To this end we need an estimate of the norm of a vector consisting of i.i.d. blocks of coordinates.

Lemma 5.8. Let $W$ be an $m \times n$ matrix. Let $\theta \in \mathbb{R}^n$ be a vector with independent $\delta$-random coordinates. For $l \in \mathbb{N}$ let $Y_1, \dots, Y_l$ be independent copies of the random variable $Y = \|W \theta\|_2$. Then for any $s > 0$
$$\mathbb{P}\left( \sum_{j=1}^{l} Y_j^2 \ge 4 l \|W\|_{HS}^2 + s \right) \le 2^l \exp\left( -\frac{c s}{\|W\|^2} \right).$$

Proof. Note that $F: \mathbb{R}^n \to \mathbb{R}$, $F(x) = \|W x\|_2$ is a Lipschitz convex function with the Lipschitz constant $\|W\|$. By Talagrand's theorem
$$\mathbb{P}\left( |Y - M| \ge t \right) \le 4 \exp\left( -\frac{t^2}{16 \|W\|^2} \right),$$
where $M = M(Y)$ is the median of $Y$. For $j = 1, \dots, l$ set $Z_j = Y_j - M$. Then the previous inequality means that $Z_j$ is a $\psi_2$ random variable, i.e.,
$$\mathbb{E} \exp\left( \frac{c Z_j^2}{\|W\|^2} \right) \le 2$$
for some constant $c > 0$. By the Chebyshev inequality and the independence of $Z_1, \dots, Z_l$,
$$\mathbb{P}\left( \sum_{j=1}^{l} Z_j^2 > t \right) = \mathbb{P}\left( \exp\left( \frac{c}{\|W\|^2} \sum_{j=1}^{l} Z_j^2 \right) > \exp\left( \frac{c t}{\|W\|^2} \right) \right) \le 2^l \exp\left( -\frac{c t}{\|W\|^2} \right).$$
Using the elementary inequality $x^2 \le 2(x-a)^2 + 2a^2$, valid for all $x, a \in \mathbb{R}$, we derive that
$$\mathbb{P}\left( \sum_{j=1}^{l} Y_j^2 > 2 l M^2 + 2t \right) \le \mathbb{P}\left( \sum_{j=1}^{l} Z_j^2 > t \right) \le 2^l \exp\left( -\frac{c t}{\|W\|^2} \right).$$

By Markov's inequality, $M^2 = M(Y)^2 \le 2 \mathbb{E} Y^2$. To finish the proof, notice that since the coordinates of $\theta$ are independent,
$$\mathbb{E} Y^2 = \sum_{j=1}^{m} \sum_{k=1}^{n} w_{j,k}^2\, \mathbb{E}\theta_k^2 \le \|W\|_{HS}^2. \qquad \Box$$

The next lemma shows that a typical row product of random matrices satisfies (5.4).

Lemma 5.9. Let $d, n, k \in \mathbb{N}$ be numbers satisfying $n \le d^{k+1/2}$. Let $\Delta_1, \dots, \Delta_k$ be $d \times n$ matrices with independent $\delta$-random entries. There exist numbers $C_1, \dots, C_k > 0$ such that
$$\mathbb{P}\left( \Delta_1 \otimes_r \cdots \otimes_r \Delta_k \notin W_k \right) \le k e^{-cd}.$$

Proof. We use induction on $k$.

Step 1. Let $k = 1$. In this case $\Delta_1$ is a matrix with independent $\delta$-random entries. For such matrices the result is standard and follows from an easy covering argument. Let $x \in S^{d-1}$, and let $y \in S^{n-1} \cap \mathbb{R}^J$. Then $\langle x, (\Delta_1)_J y \rangle$ is a linear combination of independent $\delta$-random variables. By Hoeffding's inequality (see, e.g., [23]),
$$\mathbb{P}\left( \langle x, (\Delta_1)_J y \rangle > t \right) \le e^{-c t^2}$$
for any $t \ge 1$. Let $J \subset \{1, \dots, n\}$, $|J| = m$. Let $N$ be a $1/2$-net in $S^{d-1}$, and let $M$ be a $1/2$-net in $S^{n-1} \cap \mathbb{R}^J$. Then
$$\|(\Delta_1)_J\| \le 4 \sup_{x \in N} \sup_{y \in M} \langle x, (\Delta_1)_J y \rangle.$$
The nets $N$ and $M$ can be chosen so that $|N| \le 6^d$ and $|M| \le 6^m$. Combining this with the union bound, we get
$$\mathbb{P}\left( \|(\Delta_1)_J\| \ge 4t \right) \le |N| \cdot |M| \cdot e^{-c t^2} \le \exp\left( -c t^2 + (m + d) \log 6 \right) \le e^{-c' t^2},$$
provided that $t \ge C \sqrt{d + m}$. Let $t = t_m = \tau \left( \sqrt{d} + \sqrt{m \log(en/m)} \right)$, with $\tau > C$ to be chosen later, and set $C_1 = 4\tau$. Taking the union bound, we get
$$\mathbb{P}(\Delta_1 \notin W_1) \le \sum_{m=1}^{n} \sum_{|J| = m} \mathbb{P}\left( \|(\Delta_1)_J\| > 4 t_m \right) \le \sum_{m=1}^{n} \binom{n}{m} e^{-c' t_m^2} \le \sum_{m=1}^{n} \exp\left( -c' \tau^2 \left( d + m \log \frac{en}{m} \right) + m \log \frac{en}{m} \right).$$

We can choose the constant $\tau$ so that the last expression does not exceed $e^{-d}$.

Step 2. Let $k > 1$, and assume that $C_1, \dots, C_{k-1}$ are already defined. It is enough to find $C_k > 0$ such that for any $U \in W_{k-1}$ with $|u_{i,j}| \le 1$ for all $i, j$,
$$(5.5)\qquad \mathbb{P}\left( U \otimes_r \Delta_k \notin W_k \right) \le e^{-cd}.$$
Indeed, in this case
$$\mathbb{P}\left( \Delta_1 \otimes_r \cdots \otimes_r \Delta_k \notin W_k \;\middle|\; \Delta_1 \otimes_r \cdots \otimes_r \Delta_{k-1} \in W_{k-1} \right) \le e^{-cd}.$$
Hence, the induction hypothesis yields
$$\mathbb{P}\left( \Delta_1 \otimes_r \cdots \otimes_r \Delta_k \notin W_k \right) \le \mathbb{P}\left( \Delta_1 \otimes_r \cdots \otimes_r \Delta_k \notin W_k \;\middle|\; \Delta_1 \otimes_r \cdots \otimes_r \Delta_{k-1} \in W_{k-1} \right) + \mathbb{P}\left( \Delta_1 \otimes_r \cdots \otimes_r \Delta_{k-1} \notin W_{k-1} \right) \le k e^{-cd}.$$

Fix $U \in W_{k-1}$. To shorten the notation, denote $W = U \otimes_r \Delta_k$. For $j \in \mathbb{N}$ define $m_j$ as the smallest number $m$ satisfying
$$d^j \le m \log^j \frac{en}{m}.$$
Our strategy of proving (5.5) will depend on the cardinality of the set $J \subset \{1, \dots, n\}$ appearing in (5.4).

Consider first any set $J$ such that $|J| \le m_{k-1}$. By Lemma 5.6,
$$\|W_J\| \le \sqrt{d}\, \|U_J\| \le \sqrt{d}\, C_{k-1} \left( d^{(k-1)/2} + |J|^{1/2} \log^{(k-1)/2} \frac{en}{|J|} \right) \le 2 C_{k-1} d^{k/2},$$
and so $W$ satisfies the condition (5.4) with constant $2 C_{k-1}$ for all such $J$.

Now consider all sets $J$ such that $m_{k-1} < |J| \le m_k$. The previous argument shows that any vector $y \in S^{n-1}$ with $|\mathrm{supp}(y)| \le m_{k-1}$ satisfies $\|W y\|_2 \le 2 C_{k-1} d^{k/2}$. Any $x \in S^{n-1}$ can be decomposed as $x = y + z$, where $|\mathrm{supp}(y)| \le m_{k-1}$ and $\|z\|_\infty \le m_{k-1}^{-1/2}$. Therefore, to prove (5.5), it is enough to show that
$$\mathbb{P}\left( \exists J \subset \{1, \dots, n\},\ m_{k-1} < |J| \le m_k:\ \left\| W_J: B_2^n \cap m_{k-1}^{-1/2} B_\infty^n \to B_2^{d^k} \right\| > C d^{k/2} \right) \le e^{-cd}.$$
To this end, take any $z \in S^{n-1}$ such that $|\mathrm{supp}(z)| \le m_k$ and $\|z\|_\infty \le m_{k-1}^{-1/2}$. We will obtain a uniform bound on $\|W z\|_2$ over all such $z$, and use the $\varepsilon$-net argument to derive a bound for $\|W_J\|$ from it.

Let $M$ be the minimal natural number such that $4^M m_{k-1} \ge m_k$. Let $I_0, \dots, I_M$ be blocks of type $m_{k-1}$ of the coordinates of $z$. Since $U \in W_{k-1}$, for any $m \le M$
$$\|U_{I_m}\| \le C_{k-1} \left( d^{(k-1)/2} + |I_m|^{1/2} \log^{(k-1)/2} \frac{en}{|I_m|} \right) \le 2 C_{k-1} |I_m|^{1/2} \log^{(k-1)/2}(en),$$
because $|I_m| \ge m_{k-1}$. Let $\varepsilon = (\varepsilon_1, \dots, \varepsilon_n)$ be a row of the matrix $\Delta_k$. Then the coordinates of the vector $W z$ corresponding to this row form the vector $(U \otimes_r \varepsilon) z = (U \otimes_r z^T) \varepsilon^T$. Let $U'$ be the $d^{k-1} \times |J|$ matrix defined as $U' = (U \otimes_r z^T)_J$. The inequality above and Lemma 3.1 imply
$$\|U'\| \le \sum_{m=0}^{M} \|U_{I_m}\| \|z_{I_m}\|_\infty \le 2 C_{k-1} \log^{(k-1)/2}(en) \sum_{m=0}^{M} |I_m|^{1/2} \|z_{I_m}\|_\infty \le 10\, C_{k-1} \log^{(k-1)/2}(en).$$
Also, since all entries of $U$ have absolute value at most 1, $\|U'\|_{HS}^2 \le d^{k-1}$. The sequence of coordinates of the vector $W z$ consists of $d$ independent copies of $U' \varepsilon^T$. Therefore, applying Lemma 5.8 with $l = d$ and $s = t d^k$, we get
$$p(z) := \mathbb{P}\left( \|W z\|_2^2 \ge (4 + t) d^k \right) \le 2^d \exp\left( -\frac{c t d^k}{\|U'\|^2} \right) \le 2^d \exp\left( -\frac{t d^k}{c'_k \log^{k-1}(en)} \right),$$
where $c'_k = 100\, C_{k-1}^2 / c$. By the volumetric estimate, we can construct a $1/2$-net $N$ for the set
$$E_k := \left\{ z \in S^{n-1} \;\middle|\; |\mathrm{supp}(z)| \le m_k,\ \|z\|_\infty \le m_{k-1}^{-1/2} \right\}$$
in the Euclidean metric, such that
$$|N| \le \binom{n}{m_k} 6^{m_k} \le \exp\left( 2 m_k \log \frac{en}{m_k} \right).$$
Since $n \le d^{k+1/2}$ and $m_k \le d^k$, we have $\log(en) \le 2k \log(en/m_k)$, and so
$$m_k \log \frac{en}{m_k} \le \frac{(2k)^{k-1} d^k}{\log^{k-1}(en)}.$$

Hence, we can choose the constant $t = t_k$ large enough so that
$$\mathbb{P}\left( \exists z \in N:\ \|W z\|_2^2 \ge C'_k d^k \right) \le |N| \cdot 2^d \exp\left( -\frac{t_k d^k}{c'_k \log^{k-1}(en)} \right) \le \exp\left( -\frac{d^k}{\log^{k-1}(en)} \right),$$
with the constant $C'_k = 4 + t_k$. Thus,
$$\mathbb{P}\left( \exists z \in E_k:\ \|W z\|_2^2 \ge 4 C'_k d^k \right) \le \exp\left( -\frac{d^k}{\log^{k-1}(en)} \right),$$
which implies condition (5.4) with constant $(4 C_{k-1}^2 + 4 C'_k)^{1/2}$ for all sets $J$ such that $|J| \le m_k$.

Finally, consider any set $J$ with $|J| \ge m_k$. As in the previous case, we can split any vector $x \in S^{n-1}$ as $x = y + z$, where $|\mathrm{supp}(y)| \le m_k$ and $\|z\|_\infty \le m_k^{-1/2}$. The previous argument shows that with probability greater than $1 - \exp(-d^k/\log^{k-1}(en))$,
$$\|W y\|_2 \le \left( 4 C_{k-1}^2 + 4 C'_k \right)^{1/2} d^{k/2}$$
for all such $y$. Therefore, it is enough to estimate $\max \|W_J z\|_2$ over $z \in B_2^n \cap m_k^{-1/2} B_\infty^n$. A $1/2$-net in the set $B_2^n \cap m_k^{-1/2} B_\infty^n$ is too big, so following the argument used in the previous case would lead to losses that break down the proof. Instead, we will use the sets $M_l$ defined in Lemma 5.7 and obtain the bounds for $\max \|W_J z\|_2$ for each set separately.

To this end, set $b = 1/\sqrt{m_k}$, and let $L_0$ be the largest number such that $2^{-L_0} \ge b$. Let $l \ge L_0$ and take any $x \in M_l$. Choose any set $I \supset \mathrm{supp}(x)$ such that $|I| = 4^l$. As in the previous case, let $U'$ be the $d^{k-1} \times 4^l$ matrix defined as $U' = (U \otimes_r x^T)_I$. Since all non-zero coordinates of $x$ have absolute value $2^{-l} = 1/\sqrt{|I|}$, the assumption $U \in W_{k-1}$ implies
$$\|U'\| \le \frac{1}{\sqrt{|I|}} \|U_I\| \le \frac{C_{k-1}}{\sqrt{|I|}} \left( d^{(k-1)/2} + |I|^{1/2} \log^{(k-1)/2} \frac{en}{|I|} \right) \le 2 C_{k-1} \log^{(k-1)/2} \frac{en}{4^l}.$$
The last inequality holds since for any $m \ge m_{k-1}$
$$d^{(k-1)/2} \le \sqrt{m} \log^{(k-1)/2} \frac{en}{m}.$$
Also, as before, all entries of $U$ have absolute value at most 1, so $\|U'\|_{HS}^2 \le d^{k-1}$. The sequence of coordinates of the vector $W x$ consists

of $d$ independent copies of $U' \varepsilon_I^T$. Therefore, applying Lemma 5.8, we get
$$\mathbb{P}\left( \|W x\|_2^2 \ge 4 d \cdot d^{k-1} + s \right) \le 2^d \exp\left( -\frac{c s}{\|U'\|^2} \right) \le 2^d \exp\left( -\frac{s}{c_k \log^{k-1}(en/4^l)} \right),$$
where $c_k = 4 C_{k-1}^2 / c$. Set $s = s(l) = 2 c_k\, 4^l \log^k(en/4^l)$. Then
$$s(l) \ge 2 c_k m_k \log^k \frac{en}{m_k} \ge 2 c_k d^k,$$
so the previous inequality can be rewritten as
$$\mathbb{P}\left( \|W x\|_2^2 \ge 2 s(l) \right) \le \exp\left( -2 \cdot 4^l \log \frac{en}{4^l} \right).$$
Hence, the union bound implies that there exists a constant $C''_k$ satisfying
$$\mathbb{P}\left( \exists l \ge L_0\ \exists x \in M_l:\ \|W x\|_2^2 > C''_k s(l) \right) \le \sum_{l=L_0}^{L} \binom{n}{4^l} 3^{4^l} \exp\left( -2 \cdot 4^l \log \frac{en}{4^l} \right) \le \sum_{l=L_0}^{L} \exp\left( -4^l \log \frac{en}{4^l} \right) \le \exp\left( -\frac{d^k}{\log^{k-1}(en)} \right).$$
Define the event $\Omega_1$ by
$$\Omega_1 = \left\{ \forall l \ge L_0\ \forall x \in M_l:\ \|W x\|_2^2 \le C''_k s(l) \right\}.$$
The previous inequality means that $\mathbb{P}(\Omega_1^c) \le \exp(-d^k/\log^{k-1}(en))$. Assume that the event $\Omega_1$ occurs. Let $J \subset \{1, \dots, n\}$ be such that $|J| \ge m_k$, and choose $L$ so that $4^{L-1} < |J| \le 4^L$. Applying Lemma 5.7 to $T = W_J$ and $b = 1/\sqrt{m_k}$, we obtain
$$\left\| W_J: B_2^n \cap m_k^{-1/2} B_\infty^n \to B_2^{d^k} \right\| \le 5 \left( \sum_{l=L_0}^{L} C''_k s(l) \right)^{1/2} \le C'''_k \left( 4^L \log^k \frac{en}{4^L} \right)^{1/2} \le 4 C'''_k\, |J|^{1/2} \log^{k/2} \frac{en}{|J|}.$$
This shows that condition (5.4) holds with $C_k = (4 C_{k-1}^2 + 4 C'_k + 4 C'''_k)^{1/2}$ for all non-empty sets $J \subset \{1, \dots, n\}$. This completes the induction step and the proof of Lemma 5.9. $\Box$
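Condition (5.4) can be probed empirically for small sizes. In the sketch below the helper `row_product` and the constant $C = 3$ are our own illustration choices, not the inductively defined constants $C_k$ of Lemma 5.9; the check simply confirms that the norms of random column submatrices stay below the $W_k$-shaped bound $C(d^{k/2} + |J|^{1/2}\log^{k/2}(en/|J|))$.

```python
import numpy as np

def row_product(mats):
    """Row product of d x n matrices (all rows of entrywise products)."""
    prod = np.ones((1, mats[0].shape[1]))
    for m in mats:
        prod = (prod[:, None, :] * m[None, :, :]).reshape(-1, m.shape[1])
    return prod

rng = np.random.default_rng(2)
d, n, k = 4, 40, 2
V = row_product([rng.choice([-1.0, 1.0], size=(d, n)) for _ in range(k)])

def w_k_bound(m, C=3.0):
    # C = 3 is an arbitrary illustration constant, not the C_k of Lemma 5.9
    return C * (d**(k / 2) + np.sqrt(m) * np.log(np.e * n / m)**(k / 2))

for m in (2, 8, 20, n):
    J = rng.choice(n, size=m, replace=False)
    assert np.linalg.norm(V[:, J], 2) <= w_k_bound(m)
```

For very small $|J|$ the $d^{k/2}$ term dominates, while for $|J|$ close to $n$ the $|J|^{1/2}\log^{k/2}(en/|J|)$ term takes over, mirroring the case split in the proof of Lemma 5.9.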

5.3. Lower bounds for the Q-norm. To obtain bounds for the Lévy concentration function below, we need a lower estimate for a certain norm of the row product of random matrices.

Definition 5.10. Let $U = (u_{j,k})$ be an $M \times m$ matrix. Denote
$$\|U\|_Q = \sum_{j=1}^{M} \left( \sum_{k=1}^{m} u_{j,k}^2 \right)^{1/2}.$$

In other words, $\|\cdot\|_Q$ is the norm in the Banach space $\ell_1^M(\ell_2^m)$. If $U$ is an $M \times m$ matrix with independent centered entries of unit variance, then for any $x \in \mathbb{R}^m$,
$$\mathbb{E} \left\| U \otimes_r x^T \right\|_Q \le \sum_{j=1}^{M} \left( \mathbb{E} \sum_{k=1}^{m} u_{j,k}^2 x_k^2 \right)^{1/2} = M \|x\|_2.$$
Moreover, if the coordinates of $x$ are commensurate, we can expect that a reverse inequality would follow from the Central Limit Theorem. This observation leads to the following definition. Let $V_L$ be the set of $d^L \times n$ matrices $A$ such that for any $x \in \mathbb{R}^n$
$$(5.6)\qquad \left\| A \otimes_r x^T \right\|_Q \ge c\, d^L \|x\|_2.$$
We will show below that the row product of $L$ independent $d \times n$ random matrices belongs to $V_L$ with high probability, provided that the constant $c$ in (5.6) is appropriately chosen. To this end, consider the behavior of $\|\Delta_1 \otimes_r \cdots \otimes_r \Delta_L \otimes_r x^T\|_Q$ for a fixed vector $x \in \mathbb{R}^n$.

Lemma 5.11. Let $\Delta_1, \dots, \Delta_L$ be $d \times m$ random matrices with independent $\delta$-random entries. Then for any $x \in \mathbb{R}^m$
$$\mathbb{P}\left( \left\| \Delta_1 \otimes_r \cdots \otimes_r \Delta_L \otimes_r x^T \right\|_Q \le c\, d^L \|x\|_2 \right) \le \exp\left( -\frac{c\, d^L \|x\|_2^2}{\|x\|_\infty^2} \right).$$

Proof. Without loss of generality, assume that $\|x\|_2 = 1$, so $\|x\|_\infty \le 1$. Let $\alpha > 0$, and let $\nu_1, \dots, \nu_m \in [0,1]$ be independent random variables satisfying $\mathbb{E}\nu_j \ge \alpha$ for all $j = 1, \dots, m$. The standard symmetrization and Bernstein's inequality [23] yield
$$\mathbb{P}\left( \left| \sum_{j=1}^{m} x_j^2 \nu_j - \mathbb{E} \sum_{j=1}^{m} x_j^2 \nu_j \right| > t \right) \le 2 \exp\left( -\frac{t^2}{2 \sum_{j=1}^{m} x_j^4 + t/3} \right).$$
Setting $t = (\alpha/2) \|x\|_2^2$, and using $\sum_{j} x_j^4 \le \|x\|_\infty^2 \|x\|_2^2$ and $\|x\|_\infty \le 1$, we get
$$\mathbb{P}\left( \sum_{j=1}^{m} x_j^2 \nu_j < \frac{\alpha}{2} \|x\|_2^2 \right) \le 2 \exp\left( -\frac{\alpha^2 \|x\|_2^2}{16 \|x\|_\infty^2} \right).$$

Applying the previous inequality to the random variable $Y_i$, $i = 1, \dots, d^L$, which is the $\ell_2$-norm of a row of the matrix $\Delta_1 \otimes_r \cdots \otimes_r \Delta_L \otimes_r x^T$, we obtain
$$\mathbb{P}\left( Y_i < c \|x\|_2 \right) \le 2 \exp\left( -\frac{c \|x\|_2^2}{\|x\|_\infty^2} \right).$$
Let $0 < \theta < 1$. If $\|\Delta_1 \otimes_r \cdots \otimes_r \Delta_L \otimes_r x^T\|_Q = \sum_{i=1}^{d^L} Y_i \le \theta c\, d^L \|x\|_2$, then $Y_i < c \|x\|_2$ for at least $(1 - \theta) d^L$ indices $i$. Hence,
$$\mathbb{P}\left( \left\| \Delta_1 \otimes_r \cdots \otimes_r \Delta_L \otimes_r x^T \right\|_Q \le \theta c\, d^L \|x\|_2 \right) \le \binom{d^L}{(1-\theta) d^L} \left( 2 \exp\left( -\frac{c \|x\|_2^2}{\|x\|_\infty^2} \right) \right)^{(1-\theta) d^L} \le \exp\left( -d^L \left( c (1-\theta) \frac{\|x\|_2^2}{\|x\|_\infty^2} - \theta \log \frac{e}{\theta} \right) \right) \le \exp\left( -\frac{c}{2}\, d^L\, \frac{\|x\|_2^2}{\|x\|_\infty^2} \right),$$
if $\theta$ is small enough. $\Box$

We will use Lemma 5.11 to show that the row product $\Delta_1 \otimes_r \cdots \otimes_r \Delta_{K-1}$ satisfies condition (5.6) with high probability.

Lemma 5.12. There exists a constant $c > 0$ for which the following holds. Let $K > 1$ and $n \le d^K$, and let $\Delta_1, \dots, \Delta_{K-1}$ be $d \times n$ matrices with independent $\delta$-random entries. Then
$$\mathbb{P}\left( \Delta_1 \otimes_r \cdots \otimes_r \Delta_{K-1} \notin V_{K-1} \right) \le \exp\left( -c\, d^{K-1} \right).$$

Proof. Denote for shortness $\Delta = \Delta_1 \otimes_r \cdots \otimes_r \Delta_{K-1}$. To conclude that $\Delta \in V_{K-1}$, it is enough to show that condition (5.6) holds for any $x \in S^{n-1}$. For $x \in S^{n-1}$ denote by $\Omega(x)$ the set of matrices $A$ such that $\|A \otimes_r x^T\|_Q \ge c\, d^{K-1}$. For $L = K-1$ Lemma 5.11 yields
$$(5.7)\qquad \mathbb{P}\left( \Delta \in \Omega^c(x) \right) \le \exp\left( -\frac{c\, d^{K-1}}{2 \|x\|_\infty^2} \right).$$
As the first step in proving the lemma, we will show that for $A = \Delta$ condition (5.6) holds for all $x$ from some subset of the sphere. More precisely, we will prove the following claim.

Claim. Let $a > 0$ and $m \le n$. Denote
$$S(a, m) = \left\{ x \in S^{n-1} \;\middle|\; \|x\|_\infty \le a,\ |\mathrm{supp}(x)| \le m \right\}.$$
If $a^2 m \log d \le C d^{K-1}$, then
$$\mathbb{P}\left( \Delta \notin \bigcap_{x \in S(a,m)} \Omega(x) \right) \le \exp\left( -\frac{c'\, d^{K-1}}{a^2} \right).$$

It is enough to prove the claim for $0 < a \le 1$. Note that if $0 \le y(j) \le x(j)$ for all $j = 1, \dots, n$, then $\|\Delta \otimes_r y^T\|_Q \le \|\Delta \otimes_r x^T\|_Q$. Hence, to prove the claim, it is enough to construct a set $N$ of vectors $y \in B_2^n \setminus \frac{1}{2} B_2^n$ such that for any $x \in S(a,m)$ there is $y \in N$ with $y(j) \le |x(j)|$ for all $j$, and
$$\mathbb{P}\left( \Delta \notin \bigcap_{y \in N} \Omega(y) \right) \le \exp\left( -\frac{c'\, d^{K-1}}{a^2} \right).$$
Set
$$N = \left\{ y \in \frac{1}{2\sqrt{m}} \mathbb{Z}^n \;\middle|\; |\mathrm{supp}(y)| \le m,\ \|y\|_\infty \le a,\ \frac{1}{2} \le \|y\|_2 \le 1 \right\}.$$
By the volumetric considerations,
$$|N| \le \binom{n}{m} C^m \le \exp\left( C m \log n \right) \le \exp\left( C' m \log d \right),$$
since $n \le d^K$. For $x \in S(a,m)$ consider the vector $y$ with coordinates
$$y(j) = \frac{1}{2\sqrt{m}} \left\lfloor 2 \sqrt{m}\, |x(j)| \right\rfloor.$$
Then $y(j) \le |x(j)|$, and $\|y\|_2 \ge \|x\|_2 - \|x - y\|_2 \ge 1/2$, so $y \in N$. By the union bound and (5.7),
$$\mathbb{P}\left( \Delta \notin \bigcap_{y \in N} \Omega(y) \right) \le |N| \exp\left( -\frac{c\, d^{K-1}}{2 a^2} \right).$$
The claim now follows from the assumption $a^2 m \log d \le C d^{K-1}$ for a suitable constant $C$.

The lemma can be easily derived from the claim. For $a$ and $m$ as above denote $\Omega(a,m) = \bigcap_{x \in S(a,m)} \Omega(x)$. Set
$$a_i = 3 d^{(1-i)K/6}, \qquad m_i = \min\left( d^{iK/3}, n \right), \qquad i = 1, 2, 3.$$
Then $m_3 = n$, and the condition $a_i^2 m_i \log d \le C d^{K-1}$, $i = 1, 2, 3$, is satisfied. Set
$$V = \bigcap_{i=1}^{3} \Omega(a_i, m_i).$$
By the claim, $\mathbb{P}(\Delta \notin V) \le \exp(-c\, d^{K-1})$. Assume now that $\Delta \in V$. Using the non-increasing rearrangement of $|x(j)|$, we can decompose any $x \in S^{n-1}$ as $x = x_1 + x_2 + x_3$, where $x_1, x_2, x_3$ have disjoint supports, $|\mathrm{supp}(x_i)| \le m_i$ and $\|x_i\|_\infty \le a_i/3$. By the triangle inequality, $\|x_i\|_2 \ge 1/3$ for some $i$. Thus,
$$\left\| \Delta \otimes_r x^T \right\|_Q \ge \left\| \Delta \otimes_r x_i^T \right\|_Q = \|x_i\|_2 \left\| \Delta \otimes_r \left( \frac{x_i}{\|x_i\|_2} \right)^T \right\|_Q \ge \frac{c}{3}\, d^{K-1},$$
since $x_i / \|x_i\|_2 \in S(a_i, m_i)$. This proves the lemma with constant $c/3$. $\Box$
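The Q-norm of Definition 5.10 is straightforward to compute. For $\pm 1$ entries the lower bound (5.6) is in fact exact with $c = 1$, since every row of $A \otimes_r x^T$ then has $\ell_2$-norm exactly $\|x\|_2$; the sketch below (helper names are ours) checks this. For general $\delta$-random entries the bound holds only with high probability, which is the content of Lemmas 5.11 and 5.12.

```python
import numpy as np

def q_norm(U):
    """Norm of Definition 5.10: sum of the Euclidean norms of the rows."""
    return np.linalg.norm(U, axis=1).sum()

rng = np.random.default_rng(3)
rows, n = 16, 25                 # "rows" plays the role of d**L
A = rng.choice([-1.0, 1.0], size=(rows, n))
x = rng.standard_normal(n)

Ax = A * x                       # the row product A (x)_r x^T: row j is a_j * x
# for +-1 entries every row of A (x)_r x^T has l2-norm exactly ||x||_2,
# so condition (5.6) holds here with c = 1:
assert np.isclose(q_norm(Ax), rows * np.linalg.norm(x))
```

This also illustrates why the Q-norm is the right quantity for the anti-concentration argument of Section 6: it aggregates the row-wise spread of $A \otimes_r x^T$ rather than its largest singular direction.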

6. Bounds for the Lévy concentration function

Definition 6.1. Let $\rho > 0$. Define the Lévy concentration function of a random vector $X \in \mathbb{R}^n$ by
$$L_1(X, \rho) = \sup_{x \in \mathbb{R}^n} \mathbb{P}\left( \|X - x\|_1 \le \rho \right).$$

Unlike the standard definition of the Lévy concentration function, we use the $\ell_1$-norm instead of the $\ell_2$-norm. We need the following standard lemma.

Lemma 6.2. Let $X \in \mathbb{R}^n$ be a random vector, and let $X'$ be an independent copy of $X$. Then for any $\rho > 0$
$$L_1(X, \rho) \le \mathbb{P}^{1/2}\left( \|X - X'\|_1 \le 2\rho \right).$$

Proof. Let $y \in \mathbb{R}^n$ be any vector. Then
$$\mathbb{P}^2\left( \|X - y\|_1 \le \rho \right) = \mathbb{P}\left( \|X - y\|_1 \le \rho \text{ and } \|X' - y\|_1 \le \rho \right) \le \mathbb{P}\left( \|X - X'\|_1 \le 2\rho \right).$$
Taking the supremum over $y \in \mathbb{R}^n$ proves the lemma. $\Box$

In the next lemma, we bound the Lévy concentration function using Talagrand's inequality, in the same way it was done in the proof of Lemma 5.8.

Lemma 6.3. Let $U = (u_{i,j})$ be any $N \times n$ matrix, and let $\varepsilon = (\varepsilon_1, \dots, \varepsilon_n)^T$ be a vector with independent $\delta$-random coordinates. Then for any $x \in \mathbb{R}^n$
$$(6.1)\qquad L_1\left( (U \otimes_r \varepsilon^T) x,\ c \left\| U \otimes_r x^T \right\|_Q \right) \le 2 \exp\left( -c\, \frac{\left\| U \otimes_r x^T \right\|_Q^2}{N \left\| U \otimes_r x^T \right\|^2} \right).$$

Proof. Note that $(U \otimes_r \varepsilon^T) x = (U \otimes_r x^T) \varepsilon$. Let $\varepsilon'_1, \dots, \varepsilon'_n$ be independent copies of $\varepsilon_1, \dots, \varepsilon_n$. Applying Lemma 6.2, we obtain for any $\rho > 0$
$$(6.2)\qquad L_1\left( (U \otimes_r x^T) \varepsilon, \rho \right) \le \mathbb{P}^{1/2}\left( \left\| (U \otimes_r x^T)(\varepsilon - \varepsilon') \right\|_1 \le 2\rho \right).$$
Consider a function $F: \mathbb{R}^n \to \mathbb{R}$, defined by $F(y) = \|(U \otimes_r x^T) y\|_1$, where $y \in \mathbb{R}^n$. Then $F$ is a convex function with the Lipschitz constant
$$L \le \left\| U \otimes_r x^T: B_2^n \to B_1^N \right\| \le \sqrt{N} \left\| U \otimes_r x^T \right\|.$$
By Talagrand's measure concentration theorem,
$$\mathbb{P}\left( \left| F(\varepsilon - \varepsilon') - M(F) \right| > s \right) \le 4 \exp\left( -\frac{c s^2}{L^2} \right),$$

where $M(F)$ is a median of $F$, considered as a function on $\mathbb{R}^n$ equipped with the probability measure defined by the vector $\varepsilon - \varepsilon'$. This tail estimate implies
$$\left| M(F) - \mathbb{E} F \right| \le c_1 L \le c_1 \sqrt{N} \left\| U \otimes_r x^T \right\|.$$
By Lemma 2.6 of [15] we have
$$\mathbb{E} F = \mathbb{E} \sum_{i=1}^{N} \left| \sum_{j=1}^{n} u_{i,j}\, x(j) (\varepsilon_j - \varepsilon'_j) \right| \ge c_2 \sum_{i=1}^{N} \left( \sum_{j=1}^{n} u_{i,j}^2\, x^2(j) \right)^{1/2} = c_2 \left\| U \otimes_r x^T \right\|_Q.$$
Note that if the constant $c$ in the formulation of the lemma is chosen small enough, we may assume that
$$2 c_1 \sqrt{N} \left\| U \otimes_r x^T \right\| \le \frac{c_2}{2} \left\| U \otimes_r x^T \right\|_Q.$$
Indeed, if this inequality does not hold, the right-hand side of (6.1) would be greater than 1. Combining the previous estimates yields $M(F) \ge (c_2/2) \|U \otimes_r x^T\|_Q$. Hence,
$$\mathbb{P}\left( \left\| (U \otimes_r x^T)(\varepsilon - \varepsilon') \right\|_1 \le \frac{c_2}{4} \left\| U \otimes_r x^T \right\|_Q \right) \le \mathbb{P}\left( F(\varepsilon - \varepsilon') \le M(F) - \frac{1}{4} M(F) \right) \le 4 \exp\left( -\frac{c M^2(F)}{L^2} \right) \le 4 \exp\left( -c'\, \frac{\left\| U \otimes_r x^T \right\|_Q^2}{N \left\| U \otimes_r x^T \right\|^2} \right).$$
This inequality and (6.2), applied with $\rho = \frac{c_2}{8} \|U \otimes_r x^T\|_Q$, finish the proof. $\Box$

For the next result we need the following standard lemma.

Lemma 6.4. Let $s_1, \dots, s_d$ be independent non-negative random variables such that $\mathbb{P}(s_j \le R) \le p$ for all $j$. Then
$$\mathbb{P}\left( \sum_{j=1}^{d} s_j \le \frac{1}{2} R d \right) \le (4p)^{d/2}.$$

Proof. If $\sum_{j=1}^{d} s_j \le \frac{1}{2} R d$, then $s_j \le R$ for at least $d/2$ indices $j$, and the estimate follows from the union bound over the sets of such indices. $\Box$

Combining Lemma 6.3 with this inequality, we obtain the tensorized version of Lemma 6.3.

Corollary 6.5. Let $U = (u_{i,j})$ be any $N \times n$ matrix, and let $V$ be a $d \times n$ matrix with independent $\delta$-random entries. Then for any

$x \in \mathbb{R}^n$
$$(6.3)\qquad L_1\left( (U \otimes_r V) x,\ c d \left\| U \otimes_r x^T \right\|_Q \right) \le C^d \exp\left( -c\, d\, \frac{\left\| U \otimes_r x^T \right\|_Q^2}{N \left\| U \otimes_r x^T \right\|^2} \right).$$

Proof. The coordinates of the vector $(U \otimes_r V) x \in \mathbb{R}^{Nd}$ consist of $d$ independent blocks $(U \otimes_r \varepsilon_1^T) x, \dots, (U \otimes_r \varepsilon_d^T) x$, where $\varepsilon_1, \dots, \varepsilon_d$ are the rows of $V$. The corollary follows from Lemma 6.4, applied to the random variables $s_j = \|(U \otimes_r \varepsilon_j^T) x - y_j\|_1$, where $y_1, \dots, y_d \in \mathbb{R}^N$ are any fixed vectors. $\Box$

To prove Theorem 1.6 we have to bound the probability that the matrix $\Delta_1 \otimes_r \cdots \otimes_r \Delta_K$ maps some vector from the unit sphere into a small $\ell_1$-ball. Before doing that, we consider an easier problem of estimating the probability that this matrix maps a fixed vector into a small $\ell_1$-ball. We phrase this estimate in terms of the Lévy concentration function.

Lemma 6.6. Let $U \in W_{K-1} \cap V_{K-1}$ be a $d^{K-1} \times n$ matrix, and let $\Delta_K$ be a $d \times n$ random matrix with independent $\delta$-random entries. For any $x \in \mathbb{R}^n$
$$L_1\left( (U \otimes_r \Delta_K) x,\ c\, d^K \|x\|_2 \right) \le \exp\left( -\frac{c\, d\, \|x\|_2^2}{\|x\|_\infty^2} \right) + \exp\left( -\frac{c\, d^K}{\log^{K-1}(en)} \right).$$

Proof. To use Corollary 6.5, we have to estimate the $Q$-norm and the operator norm of $U \otimes_r x^T$. The estimate of the $Q$-norm is given by (5.6). To estimate the operator norm, assume that $\|x\|_2 = 1$, and set $s = \|x\|_\infty$. Let $L$ be the maximal number $l$ such that $2^{-l} s \sqrt{n} \ge 1$, and let $J_0, \dots, J_L$ be the blocks of coordinates of $x$ of type $s$. Then $\|x_{J_l}\|_\infty \le 2^{-l}\, \|x\|_\infty$, and by Lemma 3.1,
$$\sum_{l=0}^{L} |J_l|^{1/2} \left\| x_{J_l} \right\|_\infty \le 5 \|x\|_2.$$