Concentration of the Spectral Measure for Large Random Matrices with Stable Entries

Size: px
Start display at page:

Download "Concentration of the Spectral Measure for Large Random Matrices with Stable Entries"

Transcription

1 E l e c t r o n i c J o u r n a l o f P r o b a b i l i t y Vol , Paper no. 5, pages Journal URL Concentration of the Spectral Measure for Large Random Matrices with Stable Entries Christian Houdré Hua Xu Abstract We derive concentration inequalities for functions of the empirical measure of large random matrices with infinitely divisible entries, in particular, stable or heavy tails ones. We also give concentration results for some other functionals of these random matrices, such as the largest eigenvalue or the largest singular value. Key words: Spectral Measure, Random Matrices, Infinitely divisibility, Stable Vector, Concentration. AMS 2 Subject Classification: Primary 6E7, 6F1, 15A42, 15A52. Submitted to EJP on June 12, 27, final version accepted January 18, 28. Georgia Institute of Technology, School of Mathematics, Atlanta, Georgia, 3332, houdre@math.gatech.edu Georgia Institute of Technology, School of Mathematics, Atlanta, Georgia, 3332, xu@math.gatech.edu 17

2 1 Introduction and Statements of Results: Large random matrices have recently attracted a lot of attention in fields such as statistics, mathematical physics or combinatorics e.g., see Mehta [24], Bai and Silverstein [4], Johnstone [18], Anderson, Guionnet and Zeitouni [2]. For various classes of matrix ensembles, the asymptotic behavior of the, properly centered and normalized, spectral measure or of the largest eigenvalue is understood. Many of these results hold true for matrices with independent entries satisfying some moment conditions Wigner [35], Tracy and Widom [33], Soshnikov [28], Girko [8], Pastur [25], Bai [3], Götze and Tikhomirov [9]. There is relatively little work outside the independent or finite second moment assumptions. Let us mention Soshnikov [3] who, using ideas from perturbation theory, studied the distribution of the largest eigenvalue of Wigner matrices with entries having heavy tails. Recall that a real or complex Wigner matrix is a symmetric or Hermitian matrix whose entries M i,i, 1 i, and M i,j, 1 i < j, form two independent families of iid complex valued in the Hermitian case random variables. In particular, see [3], for a properly normalized Wigner matrix with entries belonging to the domain of attraction of an α-stable law, lim P λ max x = exp x α here λ max is the largest eigenvalue of such a normalized matrix. Further, Soshnikov and Fyodorov [32], using the method of determinants, derived results for the largest singular value of K rectangular matrices with independent Cauchy entries, showing that the largest singular value of such a matrix is of order K 2 2 see also the survey article [31], where band and sparse matrices are studied. On another front, Guionnet and Zeitouni [1], gave concentration results for functionals of the empirical spectral measure of, self-adjoint, random matrices whose entries are independent and either satisfy a logarithmic Sobolev inequality or are compactly supported. They obtained, for such matrices, the subgaussian decay of the tails of the empirical spectral measure when it deviates from its mean. They also noted that their technique could be applied to prove results for the largest eigenvalue or for the spectral radius of such matrices. Alon, Krivelevich and Vu [1] further obtained concentration results for any of the eigenvalues of a Wigner matrix with uniformly bounded entries see, Ledoux [2] for more developments and references. Our purpose in the present work is to deal with matrices whose entries form a general infinitely divisible vector, and in particular a stable one without independence assumption. As well known, unless degenerated, an infinitely divisible random variable cannot be bounded. We obtain concentration results for functionals of the corresponding empirical spectral measure, allowing for any type of light or heavy tails. The methodologies developed here apply as well to the largest eigenvalue or to the spectral radius of such random matrices. Following the lead of Guionnet and Zeitouni [1], let us start by setting our notation and framework. Let M C be the set of Hermitian matrices with complex entries, which is throughout equipped with the Hilbert-Schmidt or Frobenius or entrywise Euclidean norm: M = trm M = M i,j 2. Let f be a real valued function on R. The function f can be viewed as mapping M C to M C. Indeed, for M = M i,j 1 i,j M C, so that M = UDU, where D is a 18 i,j=1

3 diagonal matrix, with real entries λ 1,..., λ, and U is a unitary matrix, set fλ 1 fm = UfDU fλ 2, fd = fλ Let trm = i=1 M i,i be the trace operator on M C and set also tr M = 1 M i,i. For a random Hermitian matrix with eigenvalues λ 1, λ 2,..., λ, let F x = 1 i=1 1 {λ i x} be the corresponding empirical spectral distribution function. As well known, if M is a Hermitian Wigner matrix with E[M 1,1 ] = E[M 1,2 ] =, E[ M 1,2 2 ] = 1, and E[M 2 1,1 ] <, the spectral measure of M/ converges to the semicircle law: σdx = 4 x 2 1 { x 2} dx/2π [2]. We study below the tail behavior of either the spectral measure or the linear statistic of fm for classes of matrices M. Still following Guionnet and Zeitouni, we focus on a general random matrix X A given as follows: X A = X A i,j 1 i,j, X A = X A, X A i,j = 1 A i,j ω i,j, with ω i,j 1 i,j = ωi,j R + 1ωi,j I 1 i,j, ω i,j = ω j,i, and where ω i,j, 1 i j is a complex valued random variable with law P i,j = Pi,j R + 1Pi,j I, 1 i j, with P I i,i = by the Hermite property. Moreover, the matrix A = A i,j 1 i,j is Hermitian with, in most cases, non-random complex valued entries uniformly bounded, say, by a. Different choices for the entries of A allow to cover various types of ensembles. For instance, if ω i,j, 1 i < j, and ω i,i, 1 i, are iid, 1 random variables, taking A i,i = 2 and A i,j = 1, for 1 i < j gives the GOE Gaussian Orthogonal Ensemble. If ωi,j R, ωi i,j, 1 i < j, and ωr i,i, 1 i, are iid, 1 random variables, taking A i,i = 1 and A i,j = 1/ 2, for 1 i < j gives the GUE Gaussian Unitary Ensemble see [24]. Moreover, if ωi,j R, ωi i,j, 1 i < j, and ωr i,i, 1 i, are two independent families of real valued random variables, taking A i,j = for i j large and A i,j = 1 otherwise, gives band matrices. Proper choices of non-random A i,j also make it possible to cover Wishart matrices, as seen in the later part of this section. In certain instances, A can also be chosen to be random, like in the case of diluted matrices, in which case A i,j, 1 i j, are iid Bernoulli random variables see [1]. On R 2, let P be the joint law of the random vector X = ωi,i R, ωr i,j, ωi i,j, 1 i < j, where it is understood that the indices for ωi,i R are 1 i. Let E be the corresponding expectation. Denote by ˆµ A the empirical spectral measure of the eigenvalues of X A, and further note that tr fx A = 1 trfx A = fxˆµ Adx, 19 i=1 R

4 for any bounded Borel function f. For a Lipschitz function f : R d R, set fx fy f Lip = sup, x y x y where throughout is the Euclidean norm, and where we write f Lipc whenever f Lip c. Each element M of M C has a unique collection of eigenvalues λ = λm = λ 1,, λ listed in non increasing order according to multiplicity in the simplex S = {λ 1 λ : λ i R, 1 i }, where throughout S is equipped with the Euclidian norm λ = i=1 λ2 i. It is a classical result sometimes called Lidskii s theorem [26], that the map M C S which associates to each Hermitian matrix its ordered list of real eigenvalues is 1-Lipschitz [11], [19]. For a matrix X A under consideration with eigenvalues λx A, it is then clear that the map ϕ : ωi,i R, ωr i,j, ωi i,j 1 i<j λx A is Lipschitz, from R 2, to S,, with Lipschitz constant bounded by a 2/. Moreover, for any real valued Lipschitz function F on S with Lipschitz constant F Lip, the map F ϕ is Lipschitz, from R 2, to R, with Lipschitz constant at most a F Lip 2/. Appropriate choices of F [19], [2] ensure that the maximal eigenvalue λ max X A = λ 1 X A and, in fact, any one of the eigenvalues is a Lipschitz function with Lipschitz constants at most a 2/. Similarly, the spectral radius ρx A = max λ i and 1 i tr fx A, where f : R R is a Lipschitz function, are themselves Lipschitz functions with Lipschitz constants at most a 2/ and f Lip /, respectively. These observations and our results are also valid for the real symmetric matrices, with proper modification of the Lipschitz constants. ext, recall that X is a d-dimensional infinitely divisible random vector without Gaussian component, X IDβ,, ν, if its characteristic function is given by, ϕ X t = Ee i t,x { = exp i t, β + R d } e i t,u 1 i t, u 1 u 1 νdu, 1.1 where t, β R d and ν the Lévy measure is a positive measure on BR d, the Borel σ-field of R d, without atom at the origin, and such that R d 1 u 2 νdu < +. The vector X has independent components if and only if its Lévy measure ν is supported on the axes of R d and is thus of the form: νdx 1,...,dx d = d dx 1... dx k 1 ν k dx k dx k+1... dx d, 1.2 k=1 for some one-dimensional Lévy measures ν k. Moreover, the ν k are the same for all k = 1,...,d, if and only if X has identically distributed components. The following proposition gives an estimate on any median or the mean, if it exists of a Lipschitz function of an infinitely divisible vector X. It is used in most of the results presented in this paper. The first part is a consequence of Theorem 1 in [14], while the proof of the second part can be obtained as in [14]. 11

5 Proposition 1.1. Let X = ω R i,i, ωr i,j, ωi i,j 1 i<j IDβ,, ν in R 2. Let V 2 x = u x u 2 νdu, νx = u >x νdu, and for any γ >, let p γ = inf { x > : < V 2 x/x 2 γ }. Let f Lip1, then for any γ such that νp γ 1/4, i any median mfx of fx satisfies γ mfx f G 1 γ := p γ + 3kγ 1/4 + E γ, ii the mean E [fx] of fx, if it exists, satisfies γ E [fx] f G 2 γ := p γ + kγ 1/4 + E γ, where k γ x, x >, is the solution, in y, of the equation y y + γln 1 + y = lnx, γ and where E γ = 2 k=1 1/2 2 e k, β e k, y νdy + e k, y νdy, 1.3 p γ< y 1 1< y p γ with e 1, e 2,...,e 2 being the canonical basis of R 2. Our first result deals with the spectral measure of a Hermitian matrix whose entries on and above the diagonal form an infinitely divisible random vector with finite exponential moments. Below, for any b >, c >, let Lip b c = { f : R R : f Lip c, f b }, while for a fixed compact set K R, with diameter K = sup x y, let x,y K where suppf is the support of f. Lip K c := {f : R R : f Lip c, suppf K}, Theorem 1.2. Let X = ωi,i R, ωr i,j, ωi i,j 1 i<j be a random vector with joint law P IDβ,, ν such that E [e t X ] < +, for some t >. Let T = sup{t : E [ e t X ] < + } and let h 1 be the inverse of hs = u e s u 1 νdu, < s < T. R 2 i For any compact set K R, P sup tr fx A E [tr fx A ] f Lip K 1 8 K for all > such that 2 < 8 K ht /. exp { } 2 8 K h 1 sds,

6 ii P sup tr fx A E [tr fx A ] f Lip b 1 { C, b exp } 2 C,b h 1 sds, 1.5 for all > such that 2 C, bht /, where C, b = C G 2 γ + ht + b, with G 2 γ as in Proposition 1.1, C a universal constant, and with t the solution, in t, of tht t hsds ln12b/ =. Remark 1.3. i The order of C, b in part ii can be made more specific. Indeed, it will be clear from the proof of this theorem see 2.39, that for any fixed t, < t T, ln 12b C, b C t + t hsds t + G 2 γ. ii As seen from the proof see 2.38, in the statement of the above theorem, G 2 γ can be replaced by E [ X ]. ow E [ X ] is of order, since min j=1,2,..., 2E[ X j ] E [ X ] max E [ ] X 2 j=1,2,..., 2 j, 1.6 where the X j, j = 1, 2,..., 2 are the components of X. Actually, an estimate more precise than 1.6 is given by a result of Marcus and Rosiński [22] which asserts that if E[X] =, then 1 4 x E [ X ] 17 8 x, where x is the solution of the equation: V 2 x x 2 + Ux x with V 2 x as defined before and Ux = u x u νdu, x >. = 1, 1.7 iii As usual, one can easily pass from the mean E [tr f] to any median mtr f in either 1.4 or 1.5. Indeed, for any 2b, if sup tr f mtr f, f Lip b 1 there exist a function f Lip b 1 and a median mtr f of tr f, such that either tr f mtr f or tr f mtr f. Without loss of generality assuming the former, otherwise dealing with the latter with f, consider the function gy = mindy, A, /2, y R 2, where A = {tr f mtr f}. Clearly 112

7 g Lip b 1, E [tr g] /4, and therefore tr g E [tr g] /4, which indicates that sup tr g E [tr g] g Lip b 1 4. Hence, P sup f Lip b 1 tr f mtr f P sup g Lip b 1 tr g E [tr g] iv When the entries of X are independent, and under a finite exponential moment assumption, the dependency in of the function h above and below can sometimes be improved. We refer the reader to [15] where some of these generic problems are discussed and tackled. ext, recall see [7], [19] that the Wasserstein distance between any two probability measures µ 1 and µ 2 on R is defined by. d W µ 1, µ 2 = sup fdµ 1 fdµ R f Lip b 1 Hence, Theorem 1.2 actually gives a concentration result, with respect to the Wasserstein distance, for the empirical spectral measure ˆµ A, when it deviates from its mean E [ˆµ A ]. As in [1], we can also obtain a concentration result for the distance between any particular probability measure and the empirical spectral measure. Proposition 1.4. Let X = ω R i,i, ωr i,j, ωi i,j 1 i<j be a random vector with joint law P IDβ,, ν such that E [ e t X ] < +, for some t >. Let T = sup{t > : E [ e t X ] < + } and let h 1 be the inverse of hs = R 2 u es u 1νdu, < s < T. Then, for any probability measure µ, R P d W ˆµ A,µ E [d W ˆµ A,µ] { } exp h 1 sds, 1.1 for all < < ht /. Of particular importance is the case of an infinitely divisible vector having boundedly supported Lévy measure. We then have: Corollary 1.5. Let X = ω R i,i, ωr i,j, ωi i,j 1 i<j be a random vector with joint law P IDβ,, ν such that ν has bounded support. Let R = inf{r > : νx : x > r = }, let V 2 = V 2 R = R 2 u 2 νdu, and for x > let lx = 1 + xln1 + x x. 113

8 i For any >, P sup tr fx A E [tr fx A ] f Lip b 1 { where C, b C, b = C exp G 2 γ + V 2 V 2 R 2 l R R 2 C, bv 2 }, 1.11 e t R 1 + b, with G 2 γ as in Proposition 1.1, C a universal constant, and t the solution, in t, of V 2 R 2 tre tr e tr + 1 = ln 12b. ii For any probability measure µ on R, and any >, P d W ˆµ A,µ E [d W ˆµ A,µ] { exp + V 2 R R R 2 ln } 1 + R2 V Remark 1.6. i As in Theorem 1.2, the dependency of C, b in and b can be made more precise. A key step in the proof of 1.11 is to choose τ such that E [tr 1 { XA τ}] /12b, and then C, b is determined by τ. Minimizing, in t, the right hand side of 2.38, leads to the following estimate { } E [tr 1 { XA τ}] exp V 2 R R 2 l τ G 2 γ V 2, where lx = 1 + xln1 + x x. For x 1, 2lx xlnx. Hence one can choose τ to be the solution, in x, of the equation x R xr 12b ln = 2 ln V 2. It then follows that C, b can be taken to be C G 2 γ + τ + b. Without the finite exponential moment assumption, an interesting class of random matrices with infinitely divisible entries are the ones with stable entries, which we now analyze. Recall that X in R d is α-stable, < α < 2, if its Lévy measure ν is given, for any Borel set B BR d, by + νb = σdξ 1 B rξ dr S d 1 r1+α,

9 where σ, the spherical component of the Lévy measure, is a finite positive measure on S d 1, the unit sphere of R d. Since the expected value of the spectral measure of a matrix with α-stable entries might not exist, we study the deviation from a median. Here is a sample result. Theorem 1.7. Let < α < 2, and let X = ω R i,i, ωr i,j, ωi i,j 1 i<j be an α-stable random vector in R 2, with Lévy measure ν given by i Let f Lip1, and let mtr fx A be any median of tr fx A. Then, P tr fx A mtr fx A Cα ασs2 1 α α, 1.14 whenever > [ 1/α, 2σS Cα] 2 1 and where Cα = 4 α 2 α + eα/α2 α. ii Let λ max X A be the largest eigenvalue of X A, and let mλ max X A be any median of λ max X A, then P λ max X A mλ max X A Cα ασs2 1 α/2 α, 1.15 whenever > [ 1/α, 2σS Cα] 2 1 and where Cα = 4 α 2 α + eα/α2 α. Remark 1.8. Let M be a Wigner matrix whose entries M i,i,1 i,m R i,j,1 i<j, and MI i,j, 1 i<j, are iid random variables, such that the distribution of M 1,1 belongs to the domain of attraction of an α-stable distribution, i.e., for any >, P M 1,1 > = L α, for some slowly varying positive function L such that lim Lt/L = 1, for all t >. Soshnikov [3] showed that, for any >, lim P λ max b 1 M = 1 exp α, where b is a normalizing factor such that lim 2 Lb /b α = 2 and where λ maxb 1 M is the largest eigenvalue of b 1 M. In fact lim 2 α ǫ /b = and lim b / 2 α +ǫ =, for any ǫ >. As stated in [13], when the random vector X is in the domain of attraction of an α-stable distribution, concentration inequalities similar to 1.14 or 1.15 can be obtained for general Lipschitz function. In particular, if the Lévy measure of X is given by + νb = σdξ 1 B rξ Lrdr, 1.16 r1+α S 2 1 for some slowly varying function L on [, +, and if we still choose the normalizing factor b such that lim σs 2 1 Lb /b α is constant, then, 115

10 P λ max b 1 M mλ maxb 1 M CασS2 1 2 α/2 b α L b 2 α, 1.17 whenever b α 2 1+α/2 CασS 2 1 L b / 2. ow, recall that for an 2 dimensional vector with iid entries, σs 2 1 = 2 ˆσ1 + ˆσ 1, where ˆσ1 is short for σ1,,..., and similarly for ˆσ 1. Thus, for fixed, our result gives the correct order of the upper bound for large values of, since for > 1, e 1 e α 1 e α 1 α. Moreover, in the stable case, L becomes constant, and b = 2/α. Since λ max 2/α M is a Lipschitz function of the entries of the matrix M with Lipschitz constant at most 2 2/α, for any median mλ max 2/α M of λ max 2/α M, we have, P λ max 2 α M mλmax ˆσ1+ˆσ α M Cα 2 α/2 α, 1.18 whenever [ 2Cα ˆσ1 + ˆσ 1 ] 1/α. Furthermore, using Theorem 1 in [14], it is not difficult to see that mλ max 2/α M can be upper and lower bounded independently of. Finally, an argument as in Remark 1.15 below will give a lower bound on λ max 2/α M of the same order as The following proposition will give an estimate on any median of a Lipschitz function of X, where X is a stable vector. It is the version of Proposition 1.1 for α-stable vectors. Proposition 1.9. Let < α < 2, and let X = ω R i,i, ωr i,j, ωi i,j 1 i<j be an α-stable random vector in R 2, with Lévy measure ν given by Let f Lip1, then i any median mfx of fx satisfies mfx f J 1 α := σs 2 1 4α 1/α α 42 α + 3k α 42 α 1/4 +E, 1.19 ii the mean E [fx] of fx, if it exists, satisfies E [fx] f J 2 α := σs 2 1 4α 1/α α 42 α + k α 42 α 1/4 +E,

11 where k α/42 α x, x >, is the solution, in y, of the equation y y + α ln α 42 αy = lnx, α and where E = 2 k=1 e k, β 4σS 2 1 1/α< y 1 e k, y νdy α + 1/2 2 1< y 4σS 2 1 1/α e k, y νdy, 1.21 α with e 1, e 2,...,e 2 being the canonical basis of R 2. Remark 1.1. When the components of X are independent, a direct computation shows that, 1/α up to a constant, E in both J 1 α and J 2 α is dominated by, as. σs 2 1 4α In complete similarity to the finite exponential moments case, we can obtain concentration results for the spectral measure of matrices with α-stable entries. Theorem Let < α < 2, and let X = ω R i,i, ωr i,j, ωi i,j 1 i<j be an α-stable random vector in R 2, with Lévy measure ν given by i Then, where P C, b, α = sup f Lip b 1 tr fx A E [tr fx A ] C, b, α aα σs 2 1 α α 1, α J1 α+1 1+α C 1 α + b + C 2 α, with C 1 α and C 2 α constants depending only on α, and with J 1 α as in Proposition 1.9. ii For any probability measure µ, P d W ˆµ A,µ md W ˆµ A,µ Cα ασs2 1 α α, 1.23 whenever [ 1/α 2σS Cα] 2 1 and where Cα = 4 α 2 α + eα/α2 α. 117

12 It is also possible to obtain concentration results for smaller values of. Indeed, the lower and intermediate range for the stable deviation obtained in [5] provide the appropriate tools to achieve such results. We refer to [5] for complete arguments, and only provide below a sample result. Theorem Let < α < 2, and let X = ω R i,i, ωr i,j, ωi i,j 1 i<j be an α-stable random vector in R 2, with Lévy measure ν given by For any ǫ >, there exists ηǫ, and constants D 1 = D 1 α, a,, σs 2 1 and D 2 = D 2 α, a,, σs 2 1, such that for all < < ηǫ, P sup tr fx A E [tr fx A ] f Lip b ǫ D 1 α+1 α exp D 2 2α+1 α Remark i In 1.14, 1.15 or 1.23, the constant Cα is not of the right order as α 2. It is, however, a simple matter to adapt Theorem 2 of [13] to obtain, at the price of worsening the range of validity of the concentration inequalities, the right order in the constants as α 2. ii Let us now provide some estimation of D 1 and D 2, which are needed for comparison with the GUE{ results of [1] see } iii below. Let Cα = 2 α eα + 2 α/22 α, Kα = max 2 α /α 1, Cα, Lα = α 1/α α/α 1 2 α/1 and let 2α 1 D =2 α 12 Cα Kα 1 α 1 α J2 αb 1 α CασS 2 1 As shown in the proof of the theorem, D 1 = 24D, while 1 α α 12 Cα Kα 1 α b α+1 α b 1 α D 2 = Lα σs α 1 α α D α α 1 Thus, as +, D 1 is of order 1/2 σs 2 1 1/α, while D 2 is of order 3α/2α 2 σs 2 1 2/1 α. iii Guionnet and Zeitouni [1], obtained concentration results for the spectral measure of matrices with independent entries, which are either compactly supported or satisfy a logarithmic Sobolev inequality. In particular for the elements of the GUE, their upper bound of concentration for the spectral measure is C 1 + b 3/2 exp 3/2 { C 2 8ca 22 5 C 1 + b 3/ }, 1.26

13 where C 1 and C 2 are universal constants. In Theorem 1.12, the order, in b, of D 1 is at most b α+1/α, while that of D 2 is at least b α+1/α 1. For α close to 2, this order is thus consistent with the one in Taking into account part ii above, the order of the constants in 1.24 are correct when α 2. Following [5] see also Remark 4 in [21], we can recover a suboptimal Gaussian result by considering a particular stable random vector X α and letting α 2. Toward this end, let X α be the stable random vector whose Lévy measure has for spherical component σ, the uniform measure with total mass σs 2 1 = 2 2 α. As α converges to 2, X α converges in distribution to a standard normal random vector. Also, as α 2, the range of in Theorem 1.12 becomes, + while the constants in the concentration bound do converge. Thus, the right hand side of 1.24 becomes { } D 1 exp D 3/2 2 5, which is of the same order, in, as However our order in is suboptimal. iv In the proof of Theorem 1.12, the desired estimate in 2.56 is achieved through a truncation of order 1/α, which, when α 2, is of the same order as the one used in obtaining However, for the GUE result, using Gaussian concentration, a truncation of order ln12b/ gives a slightly better bound, namely, C 1 ln 12b { exp C } 8ca 2 ln 12b, where C 1 and C 2 are absolute constants different from those of Wishart matrices are of interest in many contexts, in particular as sample covariance matrices in statistics. Recall that M = Y Y is a complex Wishart matrix if Y is a K matrix, K >, with entries Y i,j = Y R i,j + 1Y I i,j a real Wishart matrix is defined similarly with YI i,j = and M = Y t Y. Recall also that if the entries of Y are iid centered random variables with finite variance σ 2, the empirical distribution of the eigenvalues of Y Y/ converges as K,, and K/ γ, + to the Marčenko-Pastur law [4], [23] with density p γ x = 1 2πxγσ 2 c2 xx c 1, c 1 x c 2, where c 1 = σ 2 1 γ 1/2 2 and c 2 = σ γ 1/2 2. When the entries of Y are iid normal, Johansson [16] and Johnstone [17] showed, in the complex and real case respectively, that the properly normalized largest eigenvalue converges in distribution to the Tracy-Widom law [33], [34]. Soshnikov [29] extended the results of Johansson and Johnstone to Wishart matrices with symmetric subgaussian entries under the condition that K = O 1/3. As kindly indicated to us by a referee, this last condition has recently been removed by Péché. Soshnikov and Fyodorov [32] studied the distribution of the largest eigenvalue of the Wishart matrix Y Y, when the entries of Y are iid Cauchy random variables. We are interested here in concentration for the linear statistics of the spectral measure and for the largest eigenvalue of the Wishart matrix Y Y, where the entries of Y form an infinitely divisible vector and, in particular, a 119

14 stable one. We restrict our work to the complex framework, the real framework being essentially the same. It is not difficult to see that if Y has iid Gaussian entries, Y Y has infinitely divisible entries, each with a Lévy measure without a known explicit form. However the dependence structure among the entries of Y Y prevents the vector of entries to be, itself, infinitely divisible this is a well known fact originating with Lévy, see [27]. Thus the methodology, we used till this point, cannot be directly applied to deal with functions of eigenvalues of Y Y. However, concentration results can be obtained when we consider the following facts, due to Guionnet and Zeitouni [1] and already used for that purpose in their paper. Let for 1 i K, 1 j K for + 1 i K +, K + 1 j K + A i,j = 1 for 1 i K, K + 1 j K + 1 for + 1 i K +, 1 j K, and for 1 i K, 1 j K for + 1 i K +, K + 1 j K + ω i,j = Ȳ i,j for 1 i K, K + 1 j K + Y i,j for + 1 i K +, 1 j K, Y then X A = M Y K+ K+ C, and X 2 Y A = Y YY Moreover, since the spectrum of Y Y differs from that of YY only by the multiplicity of the zero eigenvalue, for any function f, one has and trfx 2 A = 2trfY Y + K f, λ max M 1/2 = max 1 i λ ix A, where M 1/2 is the unique positive semi-definite square root of M = Y Y. ext let P K, be the joint law of Y R i,j,yi i,j 1 i K,1 j on R 2K, and let E K, be the corresponding expectation. We present below, in the infinitely divisible case, a concentration result for the largest eigenvalue λ max M, of the Wishart matrices M = Y Y. The concentration for the linear statistic tr fm could also be obtained using the above observations. Corollary Let M = Y Y, with Y i,j = Y R i,j + 1Y I i,j. i Let X = Yi,j R,YI i,j 1 i,1 j K be a random vector with joint law P K, IDβ,, ν such that E K, [e t X ] < +, for some t >. Let T = sup{t > : E K, [e t X ] < + } and let h 1 be the inverse of hs = u e s u 1νdu, < s < T. R 2K 12

15 Then, P K, λ max M 1/2 E K, [λ max M 1/2 ] e R / 2 h 1 sds, 1.29 for all < < ht. ii Let X = Yi,j R,YI i,j 1 i K,1 j be an α-stable random vector with Lévy measure ν given by νb = S σdξ + 2K 1 1 B rξdr/r 1+α. Then, P K, λ max M 1/2 mλ max M 1/2 Cα 2 ασs2k 1 α, whenever > [ 2σS 2K 1 Cα ] 1/α and where Cα = 4 α 2 α + eα/α2 α. Remark i As already mentioned, Soshnikov and Fyodorov [32] obtained the asymptotic behavior of the largest eigenvalue of the Wishart matrix Y Y when the entries of, the K matrix, Y are iid Cauchy random variables. They further argue that although the typical eigenvalues of Y Y are of order K, the correct order for the largest one is K 2 2. The above corollary combined with Remark 1.1 and the estimate 1.19, shows that when the entries of Y form an α-stable random vector, the largest eigenvalue of Y Y is of order at most σs 2K 1 2/α. There is also a lower concentration result, described next, which leads to a lower bound on the order of this largest eigenvalue. Thus, from these two estimates, if the entries of Y are iid α-stable, the largest eigenvalue of Y Y is of order K 2/α 2/α. ii Let X IDβ,, ν in R d, then see Lemma 5.4 in [6] for any x >, and any norm on R d, P X x 1 { 1 exp ν { u R d : u 2x }}. 4 But, λ max M 1/2 is a norm of the vector X = Y R i,j,yi i,j, which we denote by X λ, if X is a stable vector in R 2K. P K, λ max M 1/2 mλ max M 1/2 = P K, λ max M 1/2 + mλ max M 1/2 1 { 1 exp ν { λ max M 1/2 2 + mλ max M 1/2 }} 4 1 { 1 exp ν { X λ 2 + mλ max M 1/2 }} 4 = 1 { σ S 2K 1 } 1 exp λ 4 α + mλ max M 1/2 α, 1.3 where S 2K 1 λ is the unit sphere relative to the norm λ and where σ is the spherical part of the Lévy measure corresponding to this norm. Moreover, if the components of X are independent, in which case the Lévy measure is supported on the axes of R 2K, σ S 2K 1 λ is of order K, and so, as above, the largest eigenvalue of M 1/2 is of order K 1/α 1/α. 121

16 iii For any function f such that gx = fx 2 is Lipschitz with Lipschitz constant g Lip := f L, trgx A = trfx 2 A is a Lipschitz function of the entries of Y with Lipschitz constant at most 2 f L K +. Hence, under the assumptions of part i of Corollary 1.14, P K, tr fm E K, [tr fm] K + { / 2K+ f L } exp h 1 sds, 1.31 for all < < f L ht / 2K +. iv Under the assumptions of part ii of Corollary 1.14, for any function f such that gx = fx 2 is Lipschitz with g Lip = f L, and any median mtr fm of tr fm we have: P K, tr fm mtr fm K + f α L σs 2K 1 Cα 2 α K + α α, 1.32 whenever > f L [ 2σS 2K 1 Cα ] 1/α / 2K +, and where Cα = 4 α 2 α + eα/α2 α. Remark In the absence of finite exponential moments, the methods described in the present paper extend beyond the heavy tail case and apply to any random matrix whose entries on and above the main diagonal form an infinitely divisible vector X. However, to obtain explicit concentration estimates, we do need explicit bounds on V 2 and on ν. Such bounds are not always available when further knowledge on the Lévy measure of X is lacking. 2 Proofs: We start with a proposition, which is a direct consequence of the concentration inequalities obtained in [12] for general Lipschitz function of infinitely divisible random vectors with finite exponential moment. Proposition 2.1. Let X = ωi,i R, ωr i,j, ωi i,j 1 i<j be a random vector with joint law P IDβ,, ν such that E [ e t X ] < +, for some t > and let T = sup{t > : E [ e t X ] < + }. Let h 1 be the inverse of hs = u e s u 1 νdu, < s < T. R 2 i For any Lipschitz function f, P tr fx A E [tr fx A ] exp for all < < f Lip ht /. { f Lip h 1 sds }, 122

17 ii Let λ max X A be the largest eigenvalue of the matrix X A. Then, P λ max X A E [λ max X A ] exp for all < < ht /. { } h 1 sds, Proof of Theorem 1.2: For part i, following the proof of Theorem 1.3 of [1], without loss of generality, by shift invariance, assume that min{x : x K} =. ext, for any v >, let if x g v x = x if < x < v 2.33 v if x v. Clearly g v Lip1 with g v = v. ext for any function f Lip K 1, any >, define recursively f x = for x, and for j 1 x j, j = 1,..., x, let x f x = g j, where g j := 21 {fj >f j 1 } 1g x j 1. Then f f and the 1-Lipschitz function f is the sum of at most K / functions g j Lip1, regardless of the function f. ow, for > 2, P tr fx A E [tr fx A ] P sup f Lip K 1 sup f Lip K 1 j=1 { tr f X A E tr f X A + tr fx A tr f X A + E [tr fx A ] E [tr f X A ] } P sup tr f X A E tr f X A > 2 f K 8 K sup P tr g j g j Lip1 exp X A E [tr g j X A] 2 K { 2 } 8 K h 1 sds, K ht /, and where the last inequality follows from part i of whenever < < the previous proposition by taking also = /4. 123

18 In order to prove part ii, for any f Lip b 1, i.e, such that f Lip 1, f b, and any τ >, let f τ be given via: fx if x < τ fτ signfτx τ if τ x < τ + fτ f τ x = f τ + signf τx + τ if τ f τ < x τ otherwise. Clearly f τ Lip1 and suppf τ [ τ f τ, τ + fτ ]. Moreover, sup tr fx A E tr fx A f Lip b 1 sup f Lip b 1 sup f Lip b 1 tr f τ X A E tr f τ X A + sup f Lip b 1 tr fx A f τ X A E [tr fx A f τ X A ] tr f τ X A E tr f τ X A tr g b X A τ +2E [tr g b X A τ], 2.36 with g b given as in ow, P sup tr fx A E tr fx A f Lip b 1 P sup f τ X A E f Lip b 1 tr tr f τ X A 3 + P 2tr g b X A τ + 2E [tr g b X A τ] 2 3 P sup f τ X A E f Lip b 1 tr tr f τ X A 3 +P tr g b X A τ E [tr g b X A τ] 3 2E [tr g b X A τ] P sup f τ X A E f Lip b 1 tr tr f τ X A 3 +P tr g b X A τ E [tr g b X A τ] 3 2bE [tr 1 { XA τ}] Let us first bound the second probability in Recall that the spectral radius ρx A = max λ i is a Lipschitz function of X with Lipschitz constant at most a 2/. Hence, for any 1 i 124

19 < t T, and γ > such that νp γ 1/4, E [tr 1 { XA τ}] = 1 P ρx A τ i=1 P ρx A P λ i X A τ E [ ρx A ] { } exp Ht τ G 2 γ t τ G 2 γ 2.38 where, above, we have used Proposition 1.1 in the next to last inequality and where the last inequality follows from Theorem 1 in [12] p with t Ht = hsds = e t u t u 1 νdu. R 2 We want to choose τ, such that E [tr 1 { XA τ}] /12b. This can be achieved if 12b ln + Ht τ G 2 γ t Since and d 2 ln 12b dt 2 d ln 12b + Ht = dt t + Ht t tht ln 12b Ht t 2, = t3 H t 2ttht ln 12b Ht t 4, it is clear that the right hand side of 2.39 is minimized when t = t, where t is the solution of and the minimum is then ht. Thus, if tht Ht ln 12b τ = C, b := =, G 2 γ + ht, 2.4 then and so, E [tr 1 { XA τ}] 12b, P tr g b X A τ E [tr g b X A τ] 3 2bE [tr 1 { XA τ}] P tr g b X A τ E [tr g b X A τ] 6 { } 6 exp h 1 sds,

20 for all < < 6 ht /, where Proposition 2.1 is used in the last inequality. For τ chosen as in 2.4, letting K = [ τ b, τ + b], it follows that for any f Lip b 1, f τ Lip K 1. By part i, the first term in 2.37 is such that P sup f τ X A E f Lip b 1 tr tr f τ X A 3 P sup f τ X A E f τ Lip K 1 tr [tr f τ X A ] 3 { 48C }, b + b C,b+b exp h 1 sds, 2.42 for all < C, b + b ht /. Hence, returning to 2.37, using 2.41 and 2.42 and for { < min 6 h T /, 144 C, b + b } ht /, we have P sup f Lip b C, b+b tr fx A E tr fx A exp { C, b + b C,b+b h 1 sds } +exp { 6 h 1 sds { } C,b+b exp h 1 sds, 2.43 } since only the case 2b presents some interest otherwise the probability in the statement of the theorem is zero. Part ii is then proved. Proof of Proposition 1.4: As a function of x R 2, d W ˆµ A, µx is Lipschitz with Lipschitz constant at most /. Indeed, for x, y R 2, d W ˆµ A,µx = sup tr fxa x fdµ f Lip b 1 R sup tr fx A x tr fx A y f Lip b 1 + sup tr fx A y fdµ f Lip b 1 R x y + d Wˆµ A,µy

21 Theorem 1.4 then follows from Theorem 1 in [12]. Proof of Corollary 1.5: For Lévy measures with bounded support, E [ e t X ] < +, for all t, and moreover Hence and Thus, one can take { exp x Ht = e ht V 2 tr 1. R t hsds V 2 R 2 s tr 1 tr, } { x x h 1 sds exp R R + V 2 R 2 ln 1 + Rx } V 2. C, b = C G 2 γ + V 2 R e t R 1 + b, where t is the solution, in t, of Applying Theorem 1.2 ii yields the result. V 2 R 2 tre tr e tr + 1 = ln 12b. In order to prove Theorem 1.11, we first need the following lemma, whose proof is essentially as the proof of Theorem 1 in [13]. Lemma 2.2. Let X = ω R i,i, ωr i,j, ωi i,j 1 i<j be an α-stable vector, < α < 2, with Lévy measure ν given by For any x, x 1 >, let g x,x 1 x = g x1 x x, where g x1 x is defined as in Then, tr P g x,x 1 X A E [tr g x,x 1 X A ] Cα aα σs 2 1 α α, whenever 1+α > 2 1+α σs 2 1 x 1 /α 1+α and where Cα = 2 5α/2 2eα + 2 α/α2 α. Proof of Theorem 1.11 For part i, first consider f Lip K 1. Using the same approximation as in Theorem 1.2, any function f Lip K 1 can be approximated by f, which is the sum of at most K / functions g j Lip1, regardless of the function f. ow, and as before, for > 2, 127

22 whenever P sup tr fx A E tr fx A f Lip K 1 tr P g j X A E [tr g j X A] K sup g j Lip b1 j=1,, K 2 K 4 K 8 α a α C 2 ασs 2 1 K α α 2α, K > 2 σs α, α and where the last inequality follows from Lemma 2.2, taking also = /4. For any f Lip b 1, and any τ >, let f τ be given as in Then, f τ Lip K 1, where K = [ τ b, τ + b], and moreover, P sup f Lip b 1 tr fx A E tr fx A P tr g τ,b X A E [tr g τ,b X A ] 3 2bE [tr 1 { XA τ}] + P sup f τ X A E f τ Lip K 1 tr tr f τ X A The spectral radius ρx A is a Lipschitz function of X with Lipschitz constant at most /. Then by Theorem 1 in [13], E [tr 1 { XA τ}] = 1 P λ i X A τ i=1 P ρx A > τ P ρx A mρx A > τ J 1 α C 1α2 α/2 a α σs 2 1 α/2 τ J 1 α α, 2.48 whenever α τ J 1 α 2C 1α2 α/2 a α σs 2 1 α/2, 2.49 and where C 1 α = 4 α 2 α + eα/α2 α. ow, if τ is chosen such that C 1 α2 α/2 a α σs 2 1 α/2 τ J 1 α α 12b, 128

23 that is, if it then follows that α τ J 1 α 12bC 1α2 α/2 a α σs 2 1 α/2, 2.5 E [tr 1 { XA τ}] 12b. Since g τ,b X A is the sum of two functions of the type studied in Lemma 2.2 with x 1 = b, we have, P tr g τ,b X A E [tr g τ,b X A ] 3 2bE [tr 1 { XA τ}] 2P tr g τ,b X A E [tr g τ,b X A ] 12 2C 2 α 12α a α σs 2 1 α α, 2.51 whenever 2 1+α12 1+α 1+α σs 2 1 b >, 2.52 α and where C 2 α = 2 5α/2 2eα+2 α/α2 α. The respective ranges 2.5 and 2.52 suggest that one can choose, for example, τ = J 1 α +. Then, there exists α, a,, ν such that for > α, a,, ν, P sup tr fx A E [tr fx A ] f Lip b 1 P sup f τ X A E f τlip K 1 tr [tr f τ X A ] 3 + P tr g τ,b X A E [tr g τ,b X A ] 3 2bE [tr 1 { XA τ}] C 3 αa α σs α J 1 α + b + α 1+2α + C 4αa α σs 2 1 α α, where C 3 α = 2 4+2α 12 α C 2 α, C 4 α = 212 α C 2 α and α, a,, ν is such that 2.46 and 2.52 hold. Part ii is a direct consequence of Theorem 1 of [13], since d W ˆµ A, µ Lip / as shown in the proof of Proposition 1.4. Proof of Theorem For any f Lip1, Theorem 1 in [13] gives a concentration inequality for fx, when it deviates from one of its medians. For 1 < α < 2, a completely similar even simpler argument gives the following result, P fx E [fx] x CασS2 1 x α,

24 whenever x α KασS 2 1, where Cα = 2 α eα + 2 α/α2 α and Kα = max { 2 α /α 1, Cα }. ext, following the proof of Theorem 1.2, approximate any function f Lip b 1 by f τ Lip [ τ b,τ+b] 1 defined via Hence, P sup tr fx A E tr fx A f Lip b 1 P sup f τ X A E f τ Lip K 1 tr tr f τ X A 3 +P tr g τ,b X A E [tr g τ,b X A ] 3 2bE [tr 1 { XA τ}] For ρx A the spectral radius of the matrix X A, and for any τ, such that τ E [ρx A ] KασS 2 1 1/α, E tr 1 { XA >τ} P ρx A E [ρx A ] τ E [ρx A ] αcασs 2 1 τ E [ρx A ] α, 2.55 where we have used, in the last inequality, 2.53 and the fact that ρx A Lip /. For Q >, let τ = E [ρx A ] + Q 1/α. With this choice, we then have: E tr 1 { XA >τ} αcασs 2 1 τ E [ρx A ] α provided Q α / > KασS 2 1 /, and αcασs 2 1 Q α 12b, 2.56 αcασs 2 1 /Q α 1/12b. ow, taking Q = 12bCασS 2 1 1/α /, and recalling, for 1 < α < 2, the lower range concentration result for stable vectors Theorem 1 and Remark 3 in [5]: For any ǫ >, there exists η ǫ, such that for all < < f Lip η ǫ/, P tr fx A E tr fx A { 2 α α 1 α α 1 1 α 1 + ǫexp σs 2 1 1/α 1 α f Lip α 1 α α 1 }

25 With arguments as in the proof of Theorem 1.2, if < ηǫ := 72 1/α 1/2 J 2 α + b + KασS 2 1 η ǫ, there exist constants D 1 α, a,, σs 2 1 and D 2 α, a,, σs 2 1, such that the first term in 2.54 is bounded above by 1 + ǫ D 1 α, a,, σs 2 1 exp α+1 α D 2 α, a,, σs 2 1 2α+1 α Indeed, with the choice of τ above and D as in 1.25, 2τ + b D / 1/α. Moreover, as in obtaining 2.34, D 1 can be chosen to be 24D, while D 2 can be chosen to be 2 α 1 α 1 α α 1 α σs α 1 α α D α α 1 ext, as already mentioned, J 2 α can be replaced by E [ X ]. In fact, according to 1.7 and an estimation in [14], if E [X] =, then 1 42 α 1/ασS2 1 1/α E [ X ] αα 1 1/α σs2 1 1/α. Finally, note that, as in the proof of Theorem 1.2 ii, the second term in 2.54 is dominated by the first term. The theorem is then proved, with the constant D 1 a,, σs 2 1 magnified by 2. Proof of Corollary As a function of Y R i,j,yi i,j 1 i K,1 j, with the choice of A made in 1.27, λ max X A Lip 2. Hence parti is a direct application of Theorem 1 in [12], while partii can be obtained by applying Theorem 1.7. Acknowledgements Both authors would like to thank the organizers of the Special Program on High-Dimensional Inference and Random Matrices at SAMSI. Their hospitality and support, through the grant DMS-11269, greatly facilitated the completion of this paper. 131

26 References [1]. Alon, M. Krivelevich, V. Vu, On the concentration of eigenvalues of random symmetric matrices. Isr. J. Math., 131, 22, MR [2] G.W. Anderson, A. Guionnet, O. Zeitouni, Lecture notes on random matrices. SAMSI. September, 26. [3] Z.D. Bai, Circular law. Ann. Probab., 25, 1997, MR [4] Z.D. Bai, J.W. Silverstein, Spectral analysis of large dimensional random matrices. Science Press, Beijing, 26. [5] J.-C. Breton, C. Houdré, On finite range stable-type concentration. To appear Theory Probab. Appl. 27. [6] J.-C. Breton, C. Houdré,. Privault, Dimension free and infinite variance tail estimates on Poisson space. Acta Appli. Math., 95, 27, MR [7] R.M. Dudley, Real analysis and probability. Cambridge University Press, 22. MR [8] V.L. Girko, Spectral theory of random matrices. Moscow, auka, MR [9] F. Götze, A. Tikhomirov, On the circular law. Preprint. Avalaible at Math arxiv 72386, 27. [1] A. Guionnet, O. Zeitouni, Concentration of spectral measure for large matrices. Electron. Comm. Probab. 5 2, MR [11] R.A. Horn, C. Johnson, Topics in matrix analysis. Cambridge Univ. Press, MR [12] C. Houdré, Remarks on deviation inequalities for functions of infinitely divisible random vectors. Ann. Probab. 3 22, MR19216 [13] C. Houdré, P. Marchal, On the concentration of measure phenomenon for stable and related random vectors. Ann. Probab , MR2636 [14] C. Houdré, P. Marchal, Median, concentration and fluctuations for Lévy processes. Stochastic Processes and their Applications 27. [15] C. Houdré, P. Marchal, P. Reynaud-Bouret, Concentration for norms of infinitely divisible vectors with independent components. Bernoulli 28. [16] K. Johansson, On fluctuations of eigenvalues of random Hermitian matrices. Duke math. J , MR [17] I.M. Johnstone, On the distribution of the largest principal component. Ann. Statist., 29, 21, MR

27 [18] I.M. Johnstone, High dimensional statistical inference and random matrices. International Congress of Mathematicians. Vol. I, , Eur. Math. Soc., Zürich, 27 MR [19] M. Ledoux, The concentration of measure phenomenon. Math. Surveys Monogr., 89 American Mathematical Society, Providence, RI, 21. MR [2] M. Ledoux, Deviation inequalities on largest eigenvalues. Geometric Aspects of Functional Analysis, Israel Seminar 24-25, Lecture otes in Math. 191, Springer 27. MR [21] P. Marchal, Measure concentration for stable laws with index close to 2. Electron. Comm. Probab. 1 25, MR [22] M.B. Marcus, J. Rosiński, L 1 -norms of infinitely divisible random vectors and certain stochastic integrals. Electron. Comm. Probab. 6 21, MR [23] V.A. Marčenko, L.A. Pastur. Distributions of eigenvalues of some sets of random matrices. Math. USSR-Sb., , [24] M.L. Mehta, Random matrices, 2nd ed. Academic Press, San Diego, MR [25] L.A. Pastur, On the spectrum of random matrices. Teor. Mat. Fiz , MR47552 [26] B. Simon, Trace ideals and their applications. Cambridge University Press, MR [27] K-I. Sato, Lévy processes and infinitely divisible distributions. Cambridge University Press, Cambridge, MR [28] A. Soshnikov, Universality at the edge of the spectrum in Wigner random matrices. Comm. Math. Phys , MR [29] A. Soshnikov, A note on universality of the distribution of the largest eigenvalues in certain sample covariance matrices. Dedicated to David Ruelle and Yasha Sinai on the occasion of their 65th birthdays. J. Stat. Phys., 18, os 5/6, 22, MR [3] A. Soshnikov, Poisson statistics for the largest eigenvalues of Wigner random matrices with heavy tails. Elec. Commun. Probab. 9 24, MR [31] A. Soshnikov, Poisson statistics for the largest eigenvalues in random matrix ensembles. Mathematical physics of quantum mechanics, , Lecture otes in Phys., 69, Springer, Berlin, 26. MR [32] A. Soshnikov, Y. V. Fyodorov, On the largest singular values of random matrices with independent Cauchy entries. J. Math. Phys , MR [33] C. Tracy, H. Widom, Level-spacing distribution and the Airy kernel. Comm. Math. Phys , MR [34] C. Tracy, H. Widom, On orthorgonal and symplectic random matrix ensembles. Comm. Math. Phys , MR

28 [35] E.P. Wigner, On the distribution of the roots of certain symmetric matrices. Ann. of Math , MR

ASPECTS OF RANDOM MATRIX THEORY: CONCENTRATION AND SUBSEQUENCE PROBLEMS

ASPECTS OF RANDOM MATRIX THEORY: CONCENTRATION AND SUBSEQUENCE PROBLEMS ASPECTS OF RADO ATRIX THEORY: COCETRATIO AD SUBSEQUECE PROBLES A Thesis Presented to The Academic Faculty by Hua Xu In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the

More information

Exponential tail inequalities for eigenvalues of random matrices

Exponential tail inequalities for eigenvalues of random matrices Exponential tail inequalities for eigenvalues of random matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify

More information

On the concentration of eigenvalues of random symmetric matrices

On the concentration of eigenvalues of random symmetric matrices On the concentration of eigenvalues of random symmetric matrices Noga Alon Michael Krivelevich Van H. Vu April 23, 2012 Abstract It is shown that for every 1 s n, the probability that the s-th largest

More information

Concentration Inequalities for Random Matrices

Concentration Inequalities for Random Matrices Concentration Inequalities for Random Matrices M. Ledoux Institut de Mathématiques de Toulouse, France exponential tail inequalities classical theme in probability and statistics quantify the asymptotic

More information

Eigenvalue variance bounds for Wigner and covariance random matrices

Eigenvalue variance bounds for Wigner and covariance random matrices Eigenvalue variance bounds for Wigner and covariance random matrices S. Dallaporta University of Toulouse, France Abstract. This work is concerned with finite range bounds on the variance of individual

More information

On the Concentration of Measure Phenomenon for Stable and Related Random Vectors

On the Concentration of Measure Phenomenon for Stable and Related Random Vectors The Annals of Probability 2003, Vol. 00, No. 00, 000 000 1 On the Concentration of Measure Phenomenon for Stable and elated andom Vectors By Christian Houdré 2 and Philippe Marchal 3 Université Paris XII

More information

Fluctuations from the Semicircle Law Lecture 4

Fluctuations from the Semicircle Law Lecture 4 Fluctuations from the Semicircle Law Lecture 4 Ioana Dumitriu University of Washington Women and Math, IAS 2014 May 23, 2014 Ioana Dumitriu (UW) Fluctuations from the Semicircle Law Lecture 4 May 23, 2014

More information

1 Intro to RMT (Gene)

1 Intro to RMT (Gene) M705 Spring 2013 Summary for Week 2 1 Intro to RMT (Gene) (Also see the Anderson - Guionnet - Zeitouni book, pp.6-11(?) ) We start with two independent families of R.V.s, {Z i,j } 1 i

More information

A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices

A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices Michel Ledoux Institut de Mathématiques, Université Paul Sabatier, 31062 Toulouse, France E-mail: ledoux@math.ups-tlse.fr

More information

Concentration inequalities: basics and some new challenges

Concentration inequalities: basics and some new challenges Concentration inequalities: basics and some new challenges M. Ledoux University of Toulouse, France & Institut Universitaire de France Measure concentration geometric functional analysis, probability theory,

More information

COMPLEX HERMITE POLYNOMIALS: FROM THE SEMI-CIRCULAR LAW TO THE CIRCULAR LAW

COMPLEX HERMITE POLYNOMIALS: FROM THE SEMI-CIRCULAR LAW TO THE CIRCULAR LAW Serials Publications www.serialspublications.com OMPLEX HERMITE POLYOMIALS: FROM THE SEMI-IRULAR LAW TO THE IRULAR LAW MIHEL LEDOUX Abstract. We study asymptotics of orthogonal polynomial measures of the

More information

The circular law. Lewis Memorial Lecture / DIMACS minicourse March 19, Terence Tao (UCLA)

The circular law. Lewis Memorial Lecture / DIMACS minicourse March 19, Terence Tao (UCLA) The circular law Lewis Memorial Lecture / DIMACS minicourse March 19, 2008 Terence Tao (UCLA) 1 Eigenvalue distributions Let M = (a ij ) 1 i n;1 j n be a square matrix. Then one has n (generalised) eigenvalues

More information

STAT C206A / MATH C223A : Stein s method and applications 1. Lecture 31

STAT C206A / MATH C223A : Stein s method and applications 1. Lecture 31 STAT C26A / MATH C223A : Stein s method and applications Lecture 3 Lecture date: Nov. 7, 27 Scribe: Anand Sarwate Gaussian concentration recap If W, T ) is a pair of random variables such that for all

More information

A Generalization of Wigner s Law

A Generalization of Wigner s Law A Generalization of Wigner s Law Inna Zakharevich June 2, 2005 Abstract We present a generalization of Wigner s semicircle law: we consider a sequence of probability distributions (p, p 2,... ), with mean

More information

Random matrices: Distribution of the least singular value (via Property Testing)

Random matrices: Distribution of the least singular value (via Property Testing) Random matrices: Distribution of the least singular value (via Property Testing) Van H. Vu Department of Mathematics Rutgers vanvu@math.rutgers.edu (joint work with T. Tao, UCLA) 1 Let ξ be a real or complex-valued

More information

Interpolating the arithmetic geometric mean inequality and its operator version

Interpolating the arithmetic geometric mean inequality and its operator version Linear Algebra and its Applications 413 (006) 355 363 www.elsevier.com/locate/laa Interpolating the arithmetic geometric mean inequality and its operator version Rajendra Bhatia Indian Statistical Institute,

More information

arxiv: v3 [math-ph] 21 Jun 2012

arxiv: v3 [math-ph] 21 Jun 2012 LOCAL MARCHKO-PASTUR LAW AT TH HARD DG OF SAMPL COVARIAC MATRICS CLAUDIO CACCIAPUOTI, AA MALTSV, AD BJAMI SCHLI arxiv:206.730v3 [math-ph] 2 Jun 202 Abstract. Let X be a matrix whose entries are i.i.d.

More information

Markov operators, classical orthogonal polynomial ensembles, and random matrices

Markov operators, classical orthogonal polynomial ensembles, and random matrices Markov operators, classical orthogonal polynomial ensembles, and random matrices M. Ledoux, Institut de Mathématiques de Toulouse, France 5ecm Amsterdam, July 2008 recent study of random matrix and random

More information

Determinantal point processes and random matrix theory in a nutshell

Determinantal point processes and random matrix theory in a nutshell Determinantal point processes and random matrix theory in a nutshell part II Manuela Girotti based on M. Girotti s PhD thesis, A. Kuijlaars notes from Les Houches Winter School 202 and B. Eynard s notes

More information

Comparison Method in Random Matrix Theory

Comparison Method in Random Matrix Theory Comparison Method in Random Matrix Theory Jun Yin UW-Madison Valparaíso, Chile, July - 2015 Joint work with A. Knowles. 1 Some random matrices Wigner Matrix: H is N N square matrix, H : H ij = H ji, EH

More information

Random regular digraphs: singularity and spectrum

Random regular digraphs: singularity and spectrum Random regular digraphs: singularity and spectrum Nick Cook, UCLA Probability Seminar, Stanford University November 2, 2015 Universality Circular law Singularity probability Talk outline 1 Universality

More information

arxiv: v1 [math.pr] 22 Dec 2018

arxiv: v1 [math.pr] 22 Dec 2018 arxiv:1812.09618v1 [math.pr] 22 Dec 2018 Operator norm upper bound for sub-gaussian tailed random matrices Eric Benhamou Jamal Atif Rida Laraki December 27, 2018 Abstract This paper investigates an upper

More information

Strong Convergence of the Empirical Distribution of Eigenvalues of Large Dimensional Random Matrices

Strong Convergence of the Empirical Distribution of Eigenvalues of Large Dimensional Random Matrices Strong Convergence of the Empirical Distribution of Eigenvalues of Large Dimensional Random Matrices by Jack W. Silverstein* Department of Mathematics Box 8205 North Carolina State University Raleigh,

More information

Lecture I: Asymptotics for large GUE random matrices

Lecture I: Asymptotics for large GUE random matrices Lecture I: Asymptotics for large GUE random matrices Steen Thorbjørnsen, University of Aarhus andom Matrices Definition. Let (Ω, F, P) be a probability space and let n be a positive integer. Then a random

More information

Random Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws. Symeon Chatzinotas February 11, 2013 Luxembourg

Random Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws. Symeon Chatzinotas February 11, 2013 Luxembourg Random Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws Symeon Chatzinotas February 11, 2013 Luxembourg Outline 1. Random Matrix Theory 1. Definition 2. Applications 3. Asymptotics 2. Ensembles

More information

Random Matrix: From Wigner to Quantum Chaos

Random Matrix: From Wigner to Quantum Chaos Random Matrix: From Wigner to Quantum Chaos Horng-Tzer Yau Harvard University Joint work with P. Bourgade, L. Erdős, B. Schlein and J. Yin 1 Perhaps I am now too courageous when I try to guess the distribution

More information

Valerio Cappellini. References

Valerio Cappellini. References CETER FOR THEORETICAL PHYSICS OF THE POLISH ACADEMY OF SCIECES WARSAW, POLAD RADOM DESITY MATRICES AD THEIR DETERMIATS 4 30 SEPTEMBER 5 TH SFB TR 1 MEETIG OF 006 I PRZEGORZAłY KRAKÓW Valerio Cappellini

More information

The Tracy-Widom distribution is not infinitely divisible.

The Tracy-Widom distribution is not infinitely divisible. The Tracy-Widom distribution is not infinitely divisible. arxiv:1601.02898v1 [math.pr] 12 Jan 2016 J. Armando Domínguez-Molina Facultad de Ciencias Físico-Matemáticas Universidad Autónoma de Sinaloa, México

More information

arxiv: v2 [math.pr] 2 Nov 2009

arxiv: v2 [math.pr] 2 Nov 2009 arxiv:0904.2958v2 [math.pr] 2 Nov 2009 Limit Distribution of Eigenvalues for Random Hankel and Toeplitz Band Matrices Dang-Zheng Liu and Zheng-Dong Wang School of Mathematical Sciences Peking University

More information

Invertibility of random matrices

Invertibility of random matrices University of Michigan February 2011, Princeton University Origins of Random Matrix Theory Statistics (Wishart matrices) PCA of a multivariate Gaussian distribution. [Gaël Varoquaux s blog gael-varoquaux.info]

More information

The norm of polynomials in large random matrices

The norm of polynomials in large random matrices The norm of polynomials in large random matrices Camille Mâle, École Normale Supérieure de Lyon, Ph.D. Student under the direction of Alice Guionnet. with a significant contribution by Dimitri Shlyakhtenko.

More information

Random Matrices: Invertibility, Structure, and Applications

Random Matrices: Invertibility, Structure, and Applications Random Matrices: Invertibility, Structure, and Applications Roman Vershynin University of Michigan Colloquium, October 11, 2011 Roman Vershynin (University of Michigan) Random Matrices Colloquium 1 / 37

More information

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices S. Dallaporta University of Toulouse, France Abstract. This note presents some central limit theorems

More information

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University

PCA with random noise. Van Ha Vu. Department of Mathematics Yale University PCA with random noise Van Ha Vu Department of Mathematics Yale University An important problem that appears in various areas of applied mathematics (in particular statistics, computer science and numerical

More information

Stable Process. 2. Multivariate Stable Distributions. July, 2006

Stable Process. 2. Multivariate Stable Distributions. July, 2006 Stable Process 2. Multivariate Stable Distributions July, 2006 1. Stable random vectors. 2. Characteristic functions. 3. Strictly stable and symmetric stable random vectors. 4. Sub-Gaussian random vectors.

More information

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535

Additive functionals of infinite-variance moving averages. Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Additive functionals of infinite-variance moving averages Wei Biao Wu The University of Chicago TECHNICAL REPORT NO. 535 Departments of Statistics The University of Chicago Chicago, Illinois 60637 June

More information

Wigner s semicircle law

Wigner s semicircle law CHAPTER 2 Wigner s semicircle law 1. Wigner matrices Definition 12. A Wigner matrix is a random matrix X =(X i, j ) i, j n where (1) X i, j, i < j are i.i.d (real or complex valued). (2) X i,i, i n are

More information

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices László Erdős University of Munich Oberwolfach, 2008 Dec Joint work with H.T. Yau (Harvard), B. Schlein (Cambrigde) Goal:

More information

Rectangular Young tableaux and the Jacobi ensemble

Rectangular Young tableaux and the Jacobi ensemble Rectangular Young tableaux and the Jacobi ensemble Philippe Marchal October 20, 2015 Abstract It has been shown by Pittel and Romik that the random surface associated with a large rectangular Young tableau

More information

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES

ON THE HÖLDER CONTINUITY OF MATRIX FUNCTIONS FOR NORMAL MATRICES Volume 10 (2009), Issue 4, Article 91, 5 pp. ON THE HÖLDER CONTINUITY O MATRIX UNCTIONS OR NORMAL MATRICES THOMAS P. WIHLER MATHEMATICS INSTITUTE UNIVERSITY O BERN SIDLERSTRASSE 5, CH-3012 BERN SWITZERLAND.

More information

A determinantal formula for the GOE Tracy-Widom distribution

A determinantal formula for the GOE Tracy-Widom distribution A determinantal formula for the GOE Tracy-Widom distribution Patrik L. Ferrari and Herbert Spohn Technische Universität München Zentrum Mathematik and Physik Department e-mails: ferrari@ma.tum.de, spohn@ma.tum.de

More information

Lectures 6 7 : Marchenko-Pastur Law

Lectures 6 7 : Marchenko-Pastur Law Fall 2009 MATH 833 Random Matrices B. Valkó Lectures 6 7 : Marchenko-Pastur Law Notes prepared by: A. Ganguly We will now turn our attention to rectangular matrices. Let X = (X 1, X 2,..., X n ) R p n

More information

KLS-TYPE ISOPERIMETRIC BOUNDS FOR LOG-CONCAVE PROBABILITY MEASURES. December, 2014

KLS-TYPE ISOPERIMETRIC BOUNDS FOR LOG-CONCAVE PROBABILITY MEASURES. December, 2014 KLS-TYPE ISOPERIMETRIC BOUNDS FOR LOG-CONCAVE PROBABILITY MEASURES Sergey G. Bobkov and Dario Cordero-Erausquin December, 04 Abstract The paper considers geometric lower bounds on the isoperimetric constant

More information

LARGE DEVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILED DEPENDENT RANDOM VECTORS*

LARGE DEVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILED DEPENDENT RANDOM VECTORS* LARGE EVIATION PROBABILITIES FOR SUMS OF HEAVY-TAILE EPENENT RANOM VECTORS* Adam Jakubowski Alexander V. Nagaev Alexander Zaigraev Nicholas Copernicus University Faculty of Mathematics and Computer Science

More information

The Matrix Dyson Equation in random matrix theory

The Matrix Dyson Equation in random matrix theory The Matrix Dyson Equation in random matrix theory László Erdős IST, Austria Mathematical Physics seminar University of Bristol, Feb 3, 207 Joint work with O. Ajanki, T. Krüger Partially supported by ERC

More information

1 Tridiagonal matrices

1 Tridiagonal matrices Lecture Notes: β-ensembles Bálint Virág Notes with Diane Holcomb 1 Tridiagonal matrices Definition 1. Suppose you have a symmetric matrix A, we can define its spectral measure (at the first coordinate

More information

Assessing the dependence of high-dimensional time series via sample autocovariances and correlations

Assessing the dependence of high-dimensional time series via sample autocovariances and correlations Assessing the dependence of high-dimensional time series via sample autocovariances and correlations Johannes Heiny University of Aarhus Joint work with Thomas Mikosch (Copenhagen), Richard Davis (Columbia),

More information

Convergence of spectral measures and eigenvalue rigidity

Convergence of spectral measures and eigenvalue rigidity Convergence of spectral measures and eigenvalue rigidity Elizabeth Meckes Case Western Reserve University ICERM, March 1, 2018 Macroscopic scale: the empirical spectral measure Macroscopic scale: the empirical

More information

The Hadamard product and the free convolutions

The Hadamard product and the free convolutions isid/ms/205/20 November 2, 205 http://www.isid.ac.in/ statmath/index.php?module=preprint The Hadamard product and the free convolutions Arijit Chakrabarty Indian Statistical Institute, Delhi Centre 7,

More information

arxiv: v1 [math-ph] 19 Oct 2018

arxiv: v1 [math-ph] 19 Oct 2018 COMMENT ON FINITE SIZE EFFECTS IN THE AVERAGED EIGENVALUE DENSITY OF WIGNER RANDOM-SIGN REAL SYMMETRIC MATRICES BY G.S. DHESI AND M. AUSLOOS PETER J. FORRESTER AND ALLAN K. TRINH arxiv:1810.08703v1 [math-ph]

More information

Painlevé Representations for Distribution Functions for Next-Largest, Next-Next-Largest, etc., Eigenvalues of GOE, GUE and GSE

Painlevé Representations for Distribution Functions for Next-Largest, Next-Next-Largest, etc., Eigenvalues of GOE, GUE and GSE Painlevé Representations for Distribution Functions for Next-Largest, Next-Next-Largest, etc., Eigenvalues of GOE, GUE and GSE Craig A. Tracy UC Davis RHPIA 2005 SISSA, Trieste 1 Figure 1: Paul Painlevé,

More information

Universal Fluctuation Formulae for one-cut β-ensembles

Universal Fluctuation Formulae for one-cut β-ensembles Universal Fluctuation Formulae for one-cut β-ensembles with a combinatorial touch Pierpaolo Vivo with F. D. Cunden Phys. Rev. Lett. 113, 070202 (2014) with F.D. Cunden and F. Mezzadri J. Phys. A 48, 315204

More information

Supermodular ordering of Poisson arrays

Supermodular ordering of Poisson arrays Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore

More information

arxiv: v5 [math.na] 16 Nov 2017

arxiv: v5 [math.na] 16 Nov 2017 RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem

More information

Inhomogeneous circular laws for random matrices with non-identically distributed entries

Inhomogeneous circular laws for random matrices with non-identically distributed entries Inhomogeneous circular laws for random matrices with non-identically distributed entries Nick Cook with Walid Hachem (Telecom ParisTech), Jamal Najim (Paris-Est) and David Renfrew (SUNY Binghamton/IST

More information

Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium

Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium SEA 06@MIT, Workshop on Stochastic Eigen-Analysis and its Applications, MIT, Cambridge,

More information

SELF-ADJOINTNESS OF SCHRÖDINGER-TYPE OPERATORS WITH SINGULAR POTENTIALS ON MANIFOLDS OF BOUNDED GEOMETRY

SELF-ADJOINTNESS OF SCHRÖDINGER-TYPE OPERATORS WITH SINGULAR POTENTIALS ON MANIFOLDS OF BOUNDED GEOMETRY Electronic Journal of Differential Equations, Vol. 2003(2003), No.??, pp. 1 8. ISSN: 1072-6691. URL: http://ejde.math.swt.edu or http://ejde.math.unt.edu ftp ejde.math.swt.edu (login: ftp) SELF-ADJOINTNESS

More information

Bulk scaling limits, open questions

Bulk scaling limits, open questions Bulk scaling limits, open questions Based on: Continuum limits of random matrices and the Brownian carousel B. Valkó, B. Virág. Inventiones (2009). Eigenvalue statistics for CMV matrices: from Poisson

More information

The largest eigenvalues of the sample covariance matrix. in the heavy-tail case

The largest eigenvalues of the sample covariance matrix. in the heavy-tail case The largest eigenvalues of the sample covariance matrix 1 in the heavy-tail case Thomas Mikosch University of Copenhagen Joint work with Richard A. Davis (Columbia NY), Johannes Heiny (Aarhus University)

More information

Non white sample covariance matrices.

Non white sample covariance matrices. Non white sample covariance matrices. S. Péché, Université Grenoble 1, joint work with O. Ledoit, Uni. Zurich 17-21/05/2010, Université Marne la Vallée Workshop Probability and Geometry in High Dimensions

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Homework 1. Yuan Yao. September 18, 2011

Homework 1. Yuan Yao. September 18, 2011 Homework 1 Yuan Yao September 18, 2011 1. Singular Value Decomposition: The goal of this exercise is to refresh your memory about the singular value decomposition and matrix norms. A good reference to

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

Concentration Properties of Restricted Measures with Applications to Non-Lipschitz Functions

Concentration Properties of Restricted Measures with Applications to Non-Lipschitz Functions Concentration Properties of Restricted Measures with Applications to Non-Lipschitz Functions S G Bobkov, P Nayar, and P Tetali April 4, 6 Mathematics Subject Classification Primary 6Gxx Keywords and phrases

More information

Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction

Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction Random Matrix Theory and its applications to Statistics and Wireless Communications Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction Sergio Verdú Princeton University National

More information

On singular values distribution of a matrix large auto-covariance in the ultra-dimensional regime. Title

On singular values distribution of a matrix large auto-covariance in the ultra-dimensional regime. Title itle On singular values distribution of a matrix large auto-covariance in the ultra-dimensional regime Authors Wang, Q; Yao, JJ Citation Random Matrices: heory and Applications, 205, v. 4, p. article no.

More information

A Note on Jackknife Based Estimates of Sampling Distributions. Abstract

A Note on Jackknife Based Estimates of Sampling Distributions. Abstract A Note on Jackknife Based Estimates of Sampling Distributions C. Houdré Abstract Tail estimates of statistics are given; they depend on jackknife estimates of variance of the statistics of interest. 1

More information

Large sample covariance matrices and the T 2 statistic

Large sample covariance matrices and the T 2 statistic Large sample covariance matrices and the T 2 statistic EURANDOM, the Netherlands Joint work with W. Zhou Outline 1 2 Basic setting Let {X ij }, i, j =, be i.i.d. r.v. Write n s j = (X 1j,, X pj ) T and

More information

Convex inequalities, isoperimetry and spectral gap III

Convex inequalities, isoperimetry and spectral gap III Convex inequalities, isoperimetry and spectral gap III Jesús Bastero (Universidad de Zaragoza) CIDAMA Antequera, September 11, 2014 Part III. K-L-S spectral gap conjecture KLS estimate, through Milman's

More information

RANDOM MATRIX THEORY AND TOEPLITZ DETERMINANTS

RANDOM MATRIX THEORY AND TOEPLITZ DETERMINANTS RANDOM MATRIX THEORY AND TOEPLITZ DETERMINANTS David García-García May 13, 2016 Faculdade de Ciências da Universidade de Lisboa OVERVIEW Random Matrix Theory Introduction Matrix ensembles A sample computation:

More information

Probability and Measure

Probability and Measure Part II Year 2018 2017 2016 2015 2014 2013 2012 2011 2010 2009 2008 2007 2006 2005 2018 84 Paper 4, Section II 26J Let (X, A) be a measurable space. Let T : X X be a measurable map, and µ a probability

More information

Semicircle law on short scales and delocalization for Wigner random matrices

Semicircle law on short scales and delocalization for Wigner random matrices Semicircle law on short scales and delocalization for Wigner random matrices László Erdős University of Munich Weizmann Institute, December 2007 Joint work with H.T. Yau (Harvard), B. Schlein (Munich)

More information

From the mesoscopic to microscopic scale in random matrix theory

From the mesoscopic to microscopic scale in random matrix theory From the mesoscopic to microscopic scale in random matrix theory (fixed energy universality for random spectra) With L. Erdős, H.-T. Yau, J. Yin Introduction A spacially confined quantum mechanical system

More information

Convergence of the Ensemble Kalman Filter in Hilbert Space

Convergence of the Ensemble Kalman Filter in Hilbert Space Convergence of the Ensemble Kalman Filter in Hilbert Space Jan Mandel Center for Computational Mathematics Department of Mathematical and Statistical Sciences University of Colorado Denver Parts based

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Density of States for Random Band Matrices in d = 2

Density of States for Random Band Matrices in d = 2 Density of States for Random Band Matrices in d = 2 via the supersymmetric approach Mareike Lager Institute for applied mathematics University of Bonn Joint work with Margherita Disertori ZiF Summer School

More information

Random Matrix Theory and its Applications to Econometrics

Random Matrix Theory and its Applications to Econometrics Random Matrix Theory and its Applications to Econometrics Hyungsik Roger Moon University of Southern California Conference to Celebrate Peter Phillips 40 Years at Yale, October 2018 Spectral Analysis of

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

ON THE QR ITERATIONS OF REAL MATRICES

ON THE QR ITERATIONS OF REAL MATRICES Unspecified Journal Volume, Number, Pages S????-????(XX- ON THE QR ITERATIONS OF REAL MATRICES HUAJUN HUANG AND TIN-YAU TAM Abstract. We answer a question of D. Serre on the QR iterations of a real matrix

More information

Lectures 2 3 : Wigner s semicircle law

Lectures 2 3 : Wigner s semicircle law Fall 009 MATH 833 Random Matrices B. Való Lectures 3 : Wigner s semicircle law Notes prepared by: M. Koyama As we set up last wee, let M n = [X ij ] n i,j=1 be a symmetric n n matrix with Random entries

More information

MEASURE CONCENTRATION FOR COMPOUND POIS- SON DISTRIBUTIONS

MEASURE CONCENTRATION FOR COMPOUND POIS- SON DISTRIBUTIONS Elect. Comm. in Probab. 11 (26), 45 57 ELECTRONIC COMMUNICATIONS in PROBABILITY MEASURE CONCENTRATION FOR COMPOUND POIS- SON DISTRIBUTIONS I. KONTOYIANNIS 1 Division of Applied Mathematics, Brown University,

More information

Least singular value of random matrices. Lewis Memorial Lecture / DIMACS minicourse March 18, Terence Tao (UCLA)

Least singular value of random matrices. Lewis Memorial Lecture / DIMACS minicourse March 18, Terence Tao (UCLA) Least singular value of random matrices Lewis Memorial Lecture / DIMACS minicourse March 18, 2008 Terence Tao (UCLA) 1 Extreme singular values Let M = (a ij ) 1 i n;1 j m be a square or rectangular matrix

More information

Entropy and Ergodic Theory Lecture 15: A first look at concentration

Entropy and Ergodic Theory Lecture 15: A first look at concentration Entropy and Ergodic Theory Lecture 15: A first look at concentration 1 Introduction to concentration Let X 1, X 2,... be i.i.d. R-valued RVs with common distribution µ, and suppose for simplicity that

More information

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces.

Math 350 Fall 2011 Notes about inner product spaces. In this notes we state and prove some important properties of inner product spaces. Math 350 Fall 2011 Notes about inner product spaces In this notes we state and prove some important properties of inner product spaces. First, recall the dot product on R n : if x, y R n, say x = (x 1,...,

More information

Compound matrices and some classical inequalities

Compound matrices and some classical inequalities Compound matrices and some classical inequalities Tin-Yau Tam Mathematics & Statistics Auburn University Dec. 3, 04 We discuss some elegant proofs of several classical inequalities of matrices by using

More information

SHARP BOUNDARY TRACE INEQUALITIES. 1. Introduction

SHARP BOUNDARY TRACE INEQUALITIES. 1. Introduction SHARP BOUNDARY TRACE INEQUALITIES GILES AUCHMUTY Abstract. This paper describes sharp inequalities for the trace of Sobolev functions on the boundary of a bounded region R N. The inequalities bound (semi-)norms

More information

Central Limit Theorems for linear statistics for Biorthogonal Ensembles

Central Limit Theorems for linear statistics for Biorthogonal Ensembles Central Limit Theorems for linear statistics for Biorthogonal Ensembles Maurice Duits, Stockholm University Based on joint work with Jonathan Breuer (HUJI) Princeton, April 2, 2014 M. Duits (SU) CLT s

More information

Large deviations for random projections of l p balls

Large deviations for random projections of l p balls 1/32 Large deviations for random projections of l p balls Nina Gantert CRM, september 5, 2016 Goal: Understanding random projections of high-dimensional convex sets. 2/32 2/32 Outline Goal: Understanding

More information

ON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT

ON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT Elect. Comm. in Probab. 10 (2005), 36 44 ELECTRONIC COMMUNICATIONS in PROBABILITY ON THE ZERO-ONE LAW AND THE LAW OF LARGE NUMBERS FOR RANDOM WALK IN MIXING RAN- DOM ENVIRONMENT FIRAS RASSOUL AGHA Department

More information

Estimation of the Global Minimum Variance Portfolio in High Dimensions

Estimation of the Global Minimum Variance Portfolio in High Dimensions Estimation of the Global Minimum Variance Portfolio in High Dimensions Taras Bodnar, Nestor Parolya and Wolfgang Schmid 07.FEBRUARY 2014 1 / 25 Outline Introduction Random Matrix Theory: Preliminary Results

More information

Fluctuations of the free energy of spherical spin glass

Fluctuations of the free energy of spherical spin glass Fluctuations of the free energy of spherical spin glass Jinho Baik University of Michigan 2015 May, IMS, Singapore Joint work with Ji Oon Lee (KAIST, Korea) Random matrix theory Real N N Wigner matrix

More information

Introduction and Preliminaries

Introduction and Preliminaries Chapter 1 Introduction and Preliminaries This chapter serves two purposes. The first purpose is to prepare the readers for the more systematic development in later chapters of methods of real analysis

More information

Journal of Inequalities in Pure and Applied Mathematics

Journal of Inequalities in Pure and Applied Mathematics Journal of Inequalities in Pure and Applied Mathematics MATRIX AND OPERATOR INEQUALITIES FOZI M DANNAN Department of Mathematics Faculty of Science Qatar University Doha - Qatar EMail: fmdannan@queduqa

More information

ON ALLEE EFFECTS IN STRUCTURED POPULATIONS

ON ALLEE EFFECTS IN STRUCTURED POPULATIONS PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 00, Number 0, Pages 000 000 S 0002-9939(XX)0000-0 ON ALLEE EFFECTS IN STRUCTURED POPULATIONS SEBASTIAN J. SCHREIBER Abstract. Maps f(x) = A(x)x of

More information

Second Order Freeness and Random Orthogonal Matrices

Second Order Freeness and Random Orthogonal Matrices Second Order Freeness and Random Orthogonal Matrices Jamie Mingo (Queen s University) (joint work with Mihai Popa and Emily Redelmeier) AMS San Diego Meeting, January 11, 2013 1 / 15 Random Matrices X

More information

Math212a1413 The Lebesgue integral.

Math212a1413 The Lebesgue integral. Math212a1413 The Lebesgue integral. October 28, 2014 Simple functions. In what follows, (X, F, m) is a space with a σ-field of sets, and m a measure on F. The purpose of today s lecture is to develop the

More information

Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference

Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference Alan Edelman Department of Mathematics, Computer Science and AI Laboratories. E-mail: edelman@math.mit.edu N. Raj Rao Deparment

More information

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS

PROBABILITY: LIMIT THEOREMS II, SPRING HOMEWORK PROBLEMS PROBABILITY: LIMIT THEOREMS II, SPRING 218. HOMEWORK PROBLEMS PROF. YURI BAKHTIN Instructions. You are allowed to work on solutions in groups, but you are required to write up solutions on your own. Please

More information

Kernel families of probability measures. Saskatoon, October 21, 2011

Kernel families of probability measures. Saskatoon, October 21, 2011 Kernel families of probability measures Saskatoon, October 21, 2011 Abstract The talk will compare two families of probability measures: exponential, and Cauchy-Stjelties families. The exponential families

More information