ON THE CONVERGENCE OF THE NEAREST NEIGHBOUR EIGENVALUE SPACING DISTRIBUTION FOR ORTHOGONAL AND SYMPLECTIC ENSEMBLES


ON THE CONVERGENCE OF THE NEAREST NEIGHBOUR EIGENVALUE SPACING DISTRIBUTION FOR ORTHOGONAL AND SYMPLECTIC ENSEMBLES

Dissertation submitted for the doctoral degree in the natural sciences (Doktorgrad der Naturwissenschaften) at the Faculty of Mathematics of the Ruhr-Universität Bochum

Submitted by Kristina Beatrice Schubert

April 2012


Contents

1. Introduction
   Some aspects of random matrix theory
      Random matrix ensembles
      Eigenvalues of random matrices
      Scaling the eigenvalues
      The empirical distribution of the spacings
   The main theorem
   Numerical experiments

I. Analysis of the spacings of adjacent eigenvalues of random matrices
   The measures γ(k, H)
   Correlation functions
      Correlation functions for β = 2
      Correlation functions and the matrix kernels K_{N,β} with β = 1, 4
   Formulation of the main theorem
   Results for B_{N,k}^{(β)}, S_β, D_β and I_β, β = 1, 4
      Results for B_{N,k}^{(β)} with β = 1, 4
      Integrability of S_β, D_β and I_β for β = 1, 4
   Expected values for γ and σ
      Expected values for γ(k, H)
      Results for E_{N,β}(∫ f dσ(k, N))

7. The limiting spacing distribution
      Gap probabilities and correlation functions
      Gap probabilities and the limiting spacing distribution
      Tail estimates for µ_β
   An estimate on the variance of γ
      Main estimate and first reduction step for β = 1, 2, 4
      Proof of theorem 8.1 for β = 2
      The second reduction step
      The third reduction step
         Notation for the third reduction step
   The fundamental estimate
      Preparing for the proof of the fundamental estimate
         Notation for the fundamental estimate
         Basic estimates for the proof of the fundamental estimate
      Proof of the fundamental estimate for β = 1
      Proof of the fundamental estimate for β = 4
   Completing the proof of the main result
      Intermediate result for the main proof
      Final estimates for the proof of the main theorem

II. Numerical experiments using MATLAB
   Introduction to numerical experiments
   Theoretical outline of the numerical experiments
      Generating the matrices and scaling the eigenvalues
      The Kolmogorov-Smirnov distance
      The limiting spacing distributions
      The expected Kolmogorov-Smirnov distance
   The results
      Observations for unfolded statistics
      Observations for localised statistics
      Summary of the results
   Implementing the algorithm in MATLAB
      Generating the matrices
      Rescaling the eigenvalues
      The limiting spacing distributions

      The Kolmogorov-Smirnov distance
      Further modifications and the final code
   Choice of parameters
      The number of repetitions per matrix size (nrep)
      Matrix sizes
      The points at which to approximate F_β
      Summary for the choice of parameters (including Wigner ensembles)
   Linear regression
   Plots and tables

Bibliography
List of Symbols


Chapter 1

Introduction

The roots of random matrix theory can be traced back for more than a century and can e.g. be seen in the study of the Haar measure on classical groups and in statistics (see the preface of [23] for an outline). The field experienced a first boost in the 1950s due to the remarkable idea of E. Wigner to model highly excited energy levels of heavy nuclei by the spectrum of random matrices. Over the years a number of different applications of random matrices have been discovered, culminating in a second boost 15 years ago when the connection to certain combinatorial models was established.

In random matrix theory one distinguishes between several types of ensembles, where each ensemble is given by a set of matrices together with a probability measure on these matrices. The most studied ensembles of random matrices are the Gaussian ensembles (c.f. p.4), consisting of either real symmetric, complex Hermitian or quaternionic self-dual matrices with independent Gaussian entries (as permitted by the symmetry). Gaussian ensembles have been generalised in many ways. Arguably, the three most prominent generalisations are invariant ensembles (c.f. p.9), Wigner ensembles (c.f. p.10) and β-Hermite ensembles (c.f. p.12), which will all be discussed in this thesis. For general information on the rich field of random matrices see the recent monographs [2], [3], [8], [9], [23], [37] and the survey on Wigner matrices [17].

A striking feature of random matrix theory is the universality of local eigenvalue statistics. Here local statistics means that one focusses on some (small) part of the spectrum and that one rescales the eigenvalues such that the average spacing is of order one. It turns out that in this regime various quantities, e.g. k-point correlation functions, gap probabilities and the distribution of spacings, converge as the matrix size tends to infinity. Moreover, the limits are universal in the sense that, within a given symmetry class (see e.g. [2, chapter 3] for symmetry classes), there exist many different ensembles with the same limiting behaviour. A second

and quite surprising aspect of universality is that the limiting distributions of random matrix theory also appear in a number of seemingly unrelated models of combinatorics and statistical mechanics.

In this thesis we consider the empirical distribution of the spacings of adjacent eigenvalues (c.f. definition 1.6 for the relevant counting measure). In particular, we study the expected Kolmogorov-Smirnov distance EKS_N (c.f. (1.49)) of the empirical spacing distribution from its limit. Our two main results are:

(A) We can use the same methods as in [28] to express the empirical distribution of the spacings in terms of the marginal densities of the joint distribution of the eigenvalues, called k-point correlation functions. These k-point correlation functions can be represented in terms of kernel functions K_{N,2} for β = 2 and in terms of 2×2 matrix-kernel functions K_{N,β} for β = 1 and β = 4 (see [45]). It is our main result (see theorem 4.3) for orthogonal/unitary/symplectic (i.e. β = 1/2/4) invariant ensembles to reduce the question of the convergence EKS_N → 0 for N → ∞ to the convergence of the corresponding kernel functions K_{N,β} → K_β. More precisely, we will show that uniform convergence of K_{N,β} → K_β in the region of interest with arbitrarily slow rates (see assumption 4.1) suffices to prove EKS_N → 0. In particular, our theorem covers all invariant ensembles for which the convergence of K_{N,β} to K_β has been proved using a Riemann-Hilbert approach (see remark 4.2).

(B) Our numerical experiments conducted with MATLAB for Wigner and β-Hermite ensembles clearly show that EKS_N converges to zero precisely with the rate N^{-1/2} (see chapter 12 for the numerical results). This suggests that there should be a version of the central limit theorem, most likely universal, for the empirical distribution of spacings.

Observe that the settings for these two results are slightly different. In (A) we consider only a small part of the spectrum (c.f. localised statistics, p.17) and the number of eigenvalues contributing to the empirical spacing distribution is o(N). On the other hand, to obtain the rate N^{-1/2} of EKS_N in the numerical experiments, one needs to consider a region where the number of eigenvalues grows proportionally with N (c.f. unfolded statistics, p.15). We therefore use different methods of rescaling for (A) and (B).

The method of proof for the result (A) follows the path devised by N. Katz and P. Sarnak in [28] for circular ensembles with β = 2. In this thesis we extend their method in two ways. Firstly, we introduce a localisation, which was not necessary in [28] since they only deal with the case of constant spectral density. For unitary ensembles this localisation was already carried out in [39]. Secondly, and more importantly, we consider orthogonal and symplectic ensembles (β = 1 and β = 4),

where the relation between the matrix-kernel functions K_{N,β} and the spacing distribution is much more involved than for β = 2. The proof of the main theorem (result (A)) comes in three steps: First, one establishes that the expected value of the empirical spacing distribution E_{N,β}(∫_0^s dσ(N)) (see definition 1.6) converges pointwise to the distribution function of some limiting measure, F_β(s) = ∫_0^s dµ_β. This convergence is well known, although it seems that the details have so far only been worked out in the case β = 2 (see e.g. [3], [9]). In order to obtain the convergence

E_{N,β}(∫_0^s dσ(N)) − ∫_0^s dµ_β → 0    (1.1)

for any given s ∈ R, we provide a bound on the variance of ∫_0^s dσ(N) in the second step. This is the most challenging part in generalising the method of Katz and Sarnak to β = 1, 4. Here we found the representation of the k-point correlation functions in terms of K_{N,β} as provided in [45] useful. Finally, the desired result is obtained by controlling the s-dependence of the bound on (1.1) together with tail estimates on µ_β.

In the literature the expected Kolmogorov-Smirnov distance of the empirical spacing distribution from its limit has so far only been considered for circular ensembles. The convergence to zero for β = 2, which was shown by Katz and Sarnak in [28], was sharpened by Soshnikov [42], proving almost sure convergence for β = 1 and β = 2, but not for β = 4. Moreover, a central limit theorem for β = 1 and β = 2 is shown in [42], i.e. the appropriately normalised random variables

ζ_N(s) = ∫_0^s dσ(N, H) − E_{N,β}(∫_0^s dσ(N, H))    (1.2)

converge to a Gaussian process ζ with E(ζ(s)) = 0 and for which E(ζ(s)ζ(t)) can be expressed in terms of the k-point correlation functions. In the case β = 2 there is a related result within the theory of determinantal point processes, which we would like to mention.
In [3, corollary ] it is shown that for a certain class of determinantal point processes with constant intensities the empirical spacing distribution converges to µ_β almost surely as the number of considered points tends to infinity. This result does not directly apply to random matrix models. Nevertheless it is of interest since, after an appropriate rescaling, the distribution of the random eigenvalues converges to a determinantal point process for which this result holds.

The remainder of this introductory chapter is organised as follows: In section 1.1 we first introduce different matrix ensembles, including Gaussian ensembles (GOE, GUE, GSE), invariant ensembles, Wigner ensembles and β-ensembles. The expected eigenvalue distribution and the (limiting) spectral density are defined next. As we consider rescaled eigenvalues, we introduce different rescalings

of the eigenvalues, adapted to the settings of results (A) and (B) (unfolded and localised statistics). The quantities of interest, i.e. the empirical distribution of the spacings and the related counting measure(s), are introduced there as well. In section 1.2 we can formulate a preliminary version of the main theorem (result (A)) and present an outline of its proof, which composes part I of this thesis. In section 1.3 we add some remarks about our numerical experiments.

1.1 Some aspects of random matrix theory

We give an overview of several random matrix ensembles and provide some facts about the eigenvalues of random matrices. Afterwards we introduce two possible rescalings of the eigenvalues and the empirical distribution function of the spacings.

Random matrix ensembles

The most studied ensembles of random matrices are the Gaussian Orthogonal Ensemble (GOE), Gaussian Unitary Ensemble (GUE) and Gaussian Symplectic Ensemble (GSE). These ensembles consist of a set of matrices obeying a symmetry condition together with a probability measure. The latter is induced by the product measure of those matrix entries which are not constrained by the symmetry condition. The precise definitions of the three Gaussian ensembles are given in the subsequent paragraphs.

Gaussian Orthogonal Ensemble, GOE

The Gaussian Orthogonal Ensemble consists of real symmetric N×N matrices M = (m_ij)_{1≤i,j≤N} with independent entries in the upper triangular part, distributed according to

m_ij ∼ N(0, 1/(4N)),   1 ≤ i < j ≤ N,    (1.3)
m_ii ∼ N(0, 1/(2N)),   1 ≤ i ≤ N.    (1.4)

I.e. the entries in the upper triangular part are independent Gaussians with mean zero and variance 1/(4N), resp. variance 1/(2N) on the diagonal. The entries in the remaining lower triangular part are determined by the symmetry constraint.
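The definition above is straightforward to simulate. The thesis performs its experiments in MATLAB; purely as an illustration (not the author's code), a numpy sketch of a GOE sampler with the variances of (1.3) and (1.4) could look like this:

```python
import numpy as np

def sample_goe(n, rng=None):
    """Real symmetric matrix with independent upper-triangular entries:
    off-diagonal ~ N(0, 1/(4n)), diagonal ~ N(0, 1/(2n)), as in (1.3)-(1.4)."""
    rng = np.random.default_rng() if rng is None else rng
    upper = rng.normal(0.0, np.sqrt(1.0 / (4 * n)), size=(n, n))
    m = np.triu(upper, k=1)
    m = m + m.T                                    # symmetry constraint
    m += np.diag(rng.normal(0.0, np.sqrt(1.0 / (2 * n)), size=n))
    return m
```

With this normalisation the spectrum concentrates on [−1, 1] for large n, in line with the discussion of the variances that follows.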
The choice of the variances in (1.3) and (1.4) for GOE is motivated as follows: When scaling the variances of the matrix entries by 1/N (multiplied by a constant), the uniform limit of the empirical distribution of the eigenvalues does not depend

on the matrix size N (see Wigner's semi-circle law in theorem 1.3). In addition, the variances of the entries on the diagonal and the upper triangular part differ by a factor 2, which ensures that the GOE is a special case of an orthogonal invariant ensemble (a precise definition of invariant ensembles is given in the course of this section). The exact constants in the variances are chosen such that the interval that the eigenvalues can asymptotically be expected to lie in is normalised to [−1, 1]. Similar arguments motivate the variances of the matrix entries in the Gaussian Unitary and Symplectic Ensemble, which we will now define.

Gaussian Unitary Ensemble, GUE

The Gaussian Unitary Ensemble consists of complex Hermitian N×N matrices M = (m_ij)_{1≤i,j≤N}, where the real and imaginary parts of the entries in the strict upper triangular part and the real diagonal entries are independently distributed according to

Re(m_ij), Im(m_ij) ∼ N(0, 1/(8N)),   1 ≤ i < j ≤ N,    (1.5)
m_ii ∼ N(0, 1/(4N)),   1 ≤ i ≤ N.    (1.6)

The lower triangular part of M is given by the symmetry condition m_ij = \bar{m}_{ji} for i > j.

Gaussian Symplectic Ensemble, GSE

Before we define the Gaussian Symplectic Ensemble, we recall some facts about quaternions and introduce some related definitions. We represent quaternions by 2×2 matrices with complex entries of the form

q = \begin{pmatrix} u & v \\ -\bar{v} & \bar{u} \end{pmatrix},   u, v ∈ C.    (1.7)

The quaternion adjoint of q is then given by

adj(q) := \begin{pmatrix} \bar{u} & -v \\ \bar{v} & u \end{pmatrix}.    (1.8)

A quaternionic N×N matrix M, represented as a 2N×2N complex matrix with 2×2 blocks M_ij, 1 ≤ i, j ≤ N, of form (1.7), is called self-dual if we have

M_ji = adj(M_ij),   1 ≤ i ≤ j ≤ N.    (1.9)

We observe that the diagonal blocks M_ii ∈ C^{2×2}, i = 1,…,N, of a self-dual matrix M are hence of the form

M_ii = \begin{pmatrix} u_{ii} & 0 \\ 0 & u_{ii} \end{pmatrix},   1 ≤ i ≤ N,   u_{ii} ∈ R.    (1.10)

In order to derive another characterisation of self-dual matrices, we observe that with

σ := \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}    (1.11)

we have

σ \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix}^t σ^t = \begin{pmatrix} q_{22} & -q_{12} \\ -q_{21} & q_{11} \end{pmatrix}   for any q_{ij} ∈ C, i, j = 1, 2.    (1.12)

On the one hand, this implies that the 2×2 blocks M_ij of a 2N×2N complex matrix M are of form (1.7) if and only if

σ (M_ij^*)^t σ^t = M_ij   for all i, j = 1,…,N,    (1.13)

where M_ij^* denotes the complex conjugate transpose of M_ij. Equation (1.13) is equivalent to

J (M^*)^t J^t = M   with J := diag(σ,…,σ) ∈ R^{2N×2N}.    (1.14)

On the other hand, (1.12) implies that σ q^t σ^t = adj(q) for any quaternion q, and hence for the 2×2 blocks M_ij of form (1.7) we have

M_ji = adj(M_ij) for all 1 ≤ i ≤ j ≤ N   ⟺   J M^t J^t = M.    (1.15)

Combining equations (1.14) and (1.15), we derive for a quaternionic matrix M (respectively a 2N×2N complex matrix M):

M is self-dual   ⟺   M = M^* = J M^t J^t   ⟺   M is Hermitian with 2×2 blocks of form (1.7).

Hence a self-dual quaternionic matrix is in particular Hermitian and has 2N real eigenvalues. Moreover, each of these 2N eigenvalues has even multiplicity, which is e.g. shown in [9, (2.3.3)]. The proof therein is based on the fact that for any (non-zero) eigenvector x of M with real eigenvalue ν the vector J x̄ is also an eigenvector of M with eigenvalue ν, and these two eigenvectors are linearly independent. We can now define the Gaussian Symplectic Ensemble.
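Before doing so, the chain of equivalences just derived lends itself to a quick numerical sanity check. As an illustration only (numpy rather than the thesis's MATLAB), one can assemble a random self-dual matrix blockwise, verify M = M^* = J M^t J^t, and observe the even multiplicity of the eigenvalues:

```python
import numpy as np

def quat_block(u, v):
    """2x2 complex representation (1.7) of a quaternion."""
    return np.array([[u, v], [-np.conj(v), np.conj(u)]])

def random_self_dual(n, rng):
    """Random quaternionic self-dual matrix as a 2n x 2n complex matrix:
    off-diagonal blocks of form (1.7), M_ji = adj(M_ij), real scalar diagonal."""
    m = np.zeros((2 * n, 2 * n), dtype=complex)
    for i in range(n):
        m[2*i:2*i+2, 2*i:2*i+2] = rng.normal() * np.eye(2)   # u_ii * I, (1.10)
        for j in range(i + 1, n):
            q = quat_block(rng.normal() + 1j * rng.normal(),
                           rng.normal() + 1j * rng.normal())
            m[2*i:2*i+2, 2*j:2*j+2] = q
            m[2*j:2*j+2, 2*i:2*i+2] = q.conj().T             # adj(q), (1.8)
    return m

rng = np.random.default_rng(1)
m = random_self_dual(5, rng)
sigma = np.array([[0.0, 1.0], [-1.0, 0.0]])                  # (1.11)
J = np.kron(np.eye(5), sigma)                                # (1.14)
assert np.allclose(m, m.conj().T)        # Hermitian
assert np.allclose(J @ m.T @ J.T, m)     # M = J M^t J^t, (1.15)
ev = np.sort(np.linalg.eigvalsh(m))
assert np.allclose(ev[0::2], ev[1::2])   # eigenvalues come in pairs
```

The last assertion is exactly the Kramers-type degeneracy argument sketched above: x and J x̄ are independent eigenvectors for the same eigenvalue.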

The Gaussian Symplectic Ensemble consists of quaternionic self-dual N×N matrices, resp. 2N×2N complex Hermitian matrices with blocks of form (1.7). The 2×2 blocks q_ij in the upper triangular part,

q_ij = \begin{pmatrix} u_{ij} & v_{ij} \\ -\bar{v}_{ij} & \bar{u}_{ij} \end{pmatrix},   1 ≤ i < j ≤ N,

are given by four real random variables, distributed according to

Re(u_ij), Im(u_ij), Re(v_ij), Im(v_ij) ∼ N(0, 1/(16N)).    (1.16)

The real entries of the 2×2 blocks on the diagonal,

q_ii = \begin{pmatrix} u_{ii} & 0 \\ 0 & u_{ii} \end{pmatrix},   1 ≤ i ≤ N,

are distributed according to

u_ii ∼ N(0, 1/(8N)),   1 ≤ i ≤ N.    (1.17)

In addition, all the random variables in (1.16) and (1.17) are stochastically independent. For matrices from the GSE each of the 2N real eigenvalues is only considered with half its multiplicity, leading to N eigenvalues as for GOE and GUE.

Some remarks on the three Gaussian ensembles

Let P_{N,β} denote the probability measure on the set of real symmetric (β = 1), complex Hermitian (β = 2) or quaternionic self-dual (β = 4) matrices. The names Gaussian Orthogonal, Unitary and Symplectic Ensemble are due to the fact that the probability measures P_{N,β} are invariant under orthogonal (for GOE), unitary (for GUE) and unitary-symplectic (for GSE) transformations. More precisely, for any measurable set A we have

P_{N,β}(A) = P_{N,β}(U^{-1} A U)    (1.18)

for all orthogonal U in the case of GOE, unitary U in the case of GUE, and unitary and symplectic U in the case of GSE. Recall that a 2N×2N complex matrix U is called symplectic if we have (with J and σ as in (1.11), (1.14))

U J U^t = J

or equivalently, for the respective 2×2 blocks U_ij, we have

σ (U_ji)^t σ^t = (U^{-1})_ij.    (1.19)

For unitary matrices, i.e. U^{-1} = U^* = \bar{U}^t, equation (1.19) simplifies to the requirement that the 2×2 blocks U_ij are of form (1.7). Because of the invariance property in (1.18), the probability measure P_{N,β} factorises as a measure on the eigenvalues and the uniform measure on the eigenvectors. Moreover, the joint probability distribution of the eigenvalues on the Weyl chamber

W_N := {(x_1,…,x_N) ∈ R^N : x_1 ≤ x_2 ≤ … ≤ x_N}    (1.20)

can be calculated explicitly (see e.g. [23]) and is given by

dP_{N,β}(λ̃_1,…,λ̃_N) = \frac{1}{Z_{N,β}} \prod_{j<k} |λ̃_k − λ̃_j|^β \prod_{j=1}^{N} e^{−βNλ̃_j^2} dλ̃_1 … dλ̃_N    (1.21)

with β = 1 for GOE, β = 2 for GUE and β = 4 for GSE, where Z_{N,β} denotes the normalisation constant such that P_{N,β} is a probability measure on W_N. Observe that the eigenvalues are denoted by λ̃_i rather than λ_i because the latter notation is reserved for the rescaled eigenvalues. We come back to the scaling of the eigenvalues later in this section.

The following theorem (see e.g. [37, theorem 3.3.1]) shows that the Gaussian ensembles are the only ones with both the independence of the entries (as symmetry permits) and the invariance property given in (1.18).

Theorem 1.1 (c.f. [37]). Consider the set of real symmetric (β = 1) or complex Hermitian (β = 2) or quaternionic self-dual (β = 4) N×N matrices, each with a probability measure P_{N,β}. Suppose the following:

1. For any measurable set A and orthogonal U ∈ R^{N×N} for β = 1, resp. unitary U ∈ C^{N×N} for β = 2, resp. unitary-symplectic U ∈ C^{2N×2N} for β = 4, we have P_{N,β}(A) = P_{N,β}(U^{-1} A U).

2. For each matrix H the density P_{N,β}(H) factorises in functions of single matrix entries of H. Due to symmetry constraints the product then has N(N+1)/2 (for β = 1), resp. N^2 (for β = 2), resp. N(2N−1) (for β = 4) factors.

The joint distribution of the eigenvalues is then given by

dP_{N,β}(λ̃_1,…,λ̃_N) = \frac{1}{Z_{N,β}} \prod_{j<k} |λ̃_k − λ̃_j|^β \prod_{j=1}^{N} e^{−aλ̃_j^2 + bλ̃_j + c} dλ̃_1 … dλ̃_N

for some a > 0 and b, c ∈ R. Hence, up to scaling, we derive the same joint eigenvalue distributions as in (1.21).

Beyond the three classical Gaussian ensembles (GOE, GUE and GSE) several generalisations of the latter may be considered. For example, on the one hand, the restriction that the distributions of the entries of real symmetric, complex Hermitian or self-dual quaternionic matrices be Gaussian may be loosened, and we can consider independent entries drawn from other distributions. These matrices are called Wigner matrices (real, complex or quaternionic), and a precise definition will be given in the course of this section. On the other hand, considering the eigenvalue distribution given in (1.21), one may ask if there are random matrices with such a joint eigenvalue distribution where β is an arbitrary positive real number. The answer is provided by certain tridiagonal matrix ensembles belonging to so-called β-Hermite ensembles, which will be described at the end of this section. Wigner ensembles and β-Hermite ensembles will be included in our later numerical experiments, but in our analytical considerations we focus on invariant ensembles, which are characterised by (1.18). Invariant ensembles are considered in the subsequent paragraph.

Invariant ensembles

As already indicated by theorem 1.1, the classical Gaussian ensembles may also be defined as real symmetric, complex Hermitian or quaternionic self-dual matrices with independent entries (except those subject to symmetry constraints), together with the requirement that the probability measure on the matrices obeys the invariance property given in (1.18). Abandoning the independence property of the matrix entries leads to invariant ensembles, which are defined as follows (see e.g.
[9, section 2.1]): Orthogonal ensembles (β = 1) consist of real symmetric matrices together with a probability measure P_{N,β} that is invariant under orthogonal conjugation. Unitary ensembles (β = 2) consist of Hermitian matrices together with a probability measure P_{N,β} that is invariant under unitary conjugation.

Symplectic ensembles (β = 4) consist of quaternionic self-dual matrices together with a probability measure P_{N,β} that is invariant under unitary-symplectic conjugation.

Wide classes of invariant matrix ensembles are given by probability measures P_{N,β} of the form

P_{N,β}(M) dM = \frac{1}{Z_{N,β}} e^{−N tr(V(M))} dM,    (1.22)

where Z_{N,β} denotes the normalisation constant and V : R → R has the additional property

lim_{|x|→∞} \frac{V(x)}{ln(x^2 + 1)} = ∞.    (1.23)

Here condition (1.23) ensures that the measure e^{−V(x)} dx has finite moments (see e.g. [13, (1.2)]). Observe that dM in (1.22) denotes the product measure on the algebraically independent entries of M, which is obtained similarly to the product measure for the three Gaussian ensembles (see (1.3) to (1.6), (1.16), (1.17)), with the only difference that the normal distribution has to be replaced by the Lebesgue measure. As common in random matrix literature, we focus on invariant ensembles with probability measures given by (1.22) and (1.23). We set

w_β^{(N)}(x) := e^{−N V(x)} for β = 1, 2,   and   e^{−2N V(x)} for β = 4.    (1.24)

Then the joint probability density of the eigenvalues on W_N is given by (see e.g. [9, (2.53)])

dP_{N,β}(λ̃_1,…,λ̃_N) = \frac{1}{Z_{N,β}} \prod_{j<k} |λ̃_k − λ̃_j|^β \prod_{j=1}^{N} w_β^{(N)}(λ̃_j) dλ̃_1 … dλ̃_N.    (1.25)

This wide class of invariant ensembles with (1.25) will be studied in our main theorem and our analytical considerations, but except for the three Gaussian ensembles (where V(x) = x^2) they are not well suited for numerical considerations. This is because it is unclear how to implement the generation of random matrices with the probability measure in (1.22), i.e. the joint eigenvalue distribution (1.25). Hence we perform our numerical experiments for Wigner matrix ensembles and β-Hermite ensembles, and we will now introduce these ensembles.

Wigner matrices

Wigner matrix ensembles consist of matrices obeying a symmetry condition (either symmetric, Hermitian or self-dual) with independent random matrix entries

except for those subject to symmetry constraints. There is no standard definition of Wigner ensembles in the literature. Usually one requires at least that all entries have mean zero and prescribed variances. For our purposes it is convenient to specialise further and assume that the entries are identically distributed except for a factor on the diagonal. We use the following definitions:

Real Wigner ensembles consist of real symmetric N×N matrices H = (h_jk)_{1≤j,k≤N} with h_jk = \frac{1}{2\sqrt{N}} x_jk, 1 ≤ j ≤ k ≤ N, where x_jk (1 ≤ j < k ≤ N) and \frac{1}{\sqrt{2}} x_kk (1 ≤ k ≤ N) are i.i.d. random variables with mean zero and variance 1. Observe that this implies Var(h_kk) = \frac{1}{2N} for 1 ≤ k ≤ N and Var(h_jk) = \frac{1}{4N} for 1 ≤ j < k ≤ N.

Complex Wigner ensembles consist of complex Hermitian N×N matrices H = (h_jk)_{1≤j,k≤N} with h_jk = \frac{1}{\sqrt{8N}} (x_jk + i y_jk), 1 ≤ j < k ≤ N, and h_kk = \frac{1}{\sqrt{8N}} x_kk, 1 ≤ k ≤ N, where x_jk, y_jk (1 ≤ j < k ≤ N) and \frac{1}{\sqrt{2}} x_kk (1 ≤ k ≤ N) are i.i.d. random variables with mean zero and variance 1. Hence we have Var(h_kk) = \frac{1}{4N}, 1 ≤ k ≤ N, and Var(Re(h_jk)) = Var(Im(h_jk)) = \frac{1}{8N}, 1 ≤ j < k ≤ N.

Quaternionic Wigner ensembles consist of quaternionic self-dual N×N matrices, or according to (1.15) equivalently of 2N×2N complex Hermitian matrices H = (h_jk)_{1≤j,k≤N}, h_jk ∈ C^{2×2}, where h_jk is of form (1.7), i.e.

h_jk = \frac{1}{4\sqrt{N}} \begin{pmatrix} u_{jk} & v_{jk} \\ -\bar{v}_{jk} & \bar{u}_{jk} \end{pmatrix} ∈ C^{2×2},   1 ≤ j < k ≤ N,    (1.26)

h_kk = \frac{1}{4\sqrt{N}} \begin{pmatrix} u_{kk} & 0 \\ 0 & u_{kk} \end{pmatrix} ∈ R^{2×2},   1 ≤ k ≤ N.    (1.27)

Moreover, Re(u_jk), Im(u_jk), Re(v_jk), Im(v_jk) (1 ≤ j < k ≤ N) and \frac{1}{\sqrt{2}} u_kk (1 ≤ k ≤ N) are i.i.d. random variables with mean zero and variance 1.

Wigner matrices with non-centred entries

In addition to the standard Wigner ensembles we will include real, complex and quaternionic Wigner matrices with non-centred entries in our numerical experiments. These are given by the above definitions abandoning the restriction on the expectation of the matrix entries.
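Since only the mean and the variances are prescribed, non-Gaussian entry distributions are admissible. As an illustrative numpy sketch (not the thesis's MATLAB code), a real Wigner matrix with Rademacher (±1) entries and the variances stated above can be generated as follows:

```python
import numpy as np

def sample_real_wigner_rademacher(n, rng):
    """Real symmetric Wigner matrix h_jk = x_jk / (2 sqrt(n)) with
    x_jk = +/-1 off the diagonal and x_kk = sqrt(2) * (+/-1) on it, so that
    Var(h_jk) = 1/(4n) for j < k and Var(h_kk) = 1/(2n)."""
    x = rng.choice([-1.0, 1.0], size=(n, n))
    h = np.triu(x, k=1)
    h = h + h.T                                    # symmetry constraint
    h += np.sqrt(2.0) * np.diag(rng.choice([-1.0, 1.0], size=n))
    return h / (2.0 * np.sqrt(n))
```

In the spirit of universality, the local eigenvalue statistics of such a matrix are expected to agree with those of the GOE in the large-N limit.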

18 12 1. Introduction β-hermite ensembles A real matrix H belongs to a β-hermite ensemble if the joint probability distribution of the eigenvalues is given by (1.21) for some β >. These distributions have been studied in the physics literature as log-gases. In terms of random matrix theory it was only Dumitriu and Edelman in their important work [14] who introduced ensembles of real tridiagonal matrices that have the corresponding joint distribution of the eigenvalues. Following the work of Trotter in the GOE case [46], Dumitriu and Edelman obtained these tridiagonal matrices H β for the three classical Gaussian ensembles by tridiagonalising matrices from the GOE, GUE and GSE using successive Householder transformations. Moreover, Dumitriu and Edelman were able to provide generalisations of the tridiagonal matrices H β for any β >, for which they could prove that the joint distribution of the eigenvalues is given by (1.21). The matrices H β are real symmetric tridiagonal matrices that can schematically be depicted by H β 1 2 β (, 2) χ ( 1)β χ ( 1)β (, 2) χ ( 2)β χ ( 2)β (, 2) χ ( 3)β χ ( 3)β (1.28) χ β χ β (, 2) Here χ d denotes the square root of a chi-square distributed number with d degrees of freedom, i.e. the probability distribution function of the random variable χ nβ is given by Γ(nβ/2) xnβ 1 e x2. The diagonal entries and the 1 sub diagonal entries are mutually independent and the super diagonal entries are determined by the required symmetry of the matrix. ote that the auxiliary factor 1/ 2β, which is absent in [14], stems from the deviant variances used in our considerations. 
For our numerical experiments the matrices H_β can easily be implemented for any real positive β, but we will focus on β > 2 for reasons concerning the implementation of the limiting spacing distribution, which will be discussed later.

Eigenvalues of random matrices

In this subsection we introduce some definitions and facts about the eigenvalues of random matrices from the ensembles introduced above. In addition, we explain how to rescale these eigenvalues such that we can observe a universal

behaviour of the spacings of adjacent eigenvalues. We denote the ordered eigenvalues of a random N×N matrix of a β-Hermite, invariant or Wigner ensemble by λ̃_1^{(N)} ≤ λ̃_2^{(N)} ≤ … ≤ λ̃_N^{(N)}. In some cases we may omit the superscript if the matrix size can be recovered from the context. Recall that for β-Hermite and invariant ensembles there are explicit formulas for the joint distribution of the eigenvalues on the Weyl chamber available (see (1.21) and (1.25)). We begin our discussion of the eigenvalue statistics by introducing the notion of limiting spectral density.

Definition 1.2. Let λ̃_1^{(N)} ≤ λ̃_2^{(N)} ≤ … ≤ λ̃_N^{(N)} denote the ordered eigenvalues of a matrix from a β-Hermite, invariant or Wigner ensemble as introduced above.

i) We set

F_N(s) := \frac{1}{N} E_{N,β}( #{j : λ̃_j^{(N)} ≤ s} ).

The function F_N is called the expected eigenvalue distribution (for matrices of size N). Note that here E_{N,β} denotes the expectation with respect to the probability measure P_{N,β} on the matrices.

ii) If the distribution F_N is absolutely continuous, its density is denoted by ψ_N, i.e.

F_N(s) = \int_{−∞}^{s} ψ_N(t) dt.

In this case ψ_N is called the expected eigenvalue density (for matrices of size N).

iii) If the expected eigenvalue distributions (F_N)_N converge uniformly to a function F, i.e.

lim_{N→∞} sup_{x∈R} |F_N(x) − F(x)| = 0,    (1.29)

then F is called the limiting expected eigenvalue distribution. Note that because of the uniform convergence F is actually a distribution function.

iv) If the limiting expected eigenvalue distribution F exists and is absolutely continuous with density ψ, i.e.

F(s) = \int_{−∞}^{s} ψ(t) dt,    (1.30)

then ψ is called limiting spectral density (or limiting expected eigenvalue density).

For β-Hermite ensembles and for rather general classes of Wigner ensembles the limiting expected eigenvalue density ψ takes a universal form. This universal behaviour is described by Wigner's semi-circle law, which we state below. The theorem requires some additional properties of the distributions of the matrix entries of Wigner matrices, on which we comment afterwards.

Theorem 1.3 (Wigner's semi-circle law). We consider β-Hermite ensembles with β > 0 and Wigner matrix ensembles as defined above, with some additional conditions on the distributions of the matrix entries. Then the limiting expected eigenvalue density is given by

ψ(t) = \frac{2}{π} \sqrt{1 − t^2} \, χ_{[−1,1]}(t).    (1.31)

It was Wigner himself who provided the above theorem in the 1950s (see [50]) for an ensemble of real symmetric matrices with independent entries (as symmetry permits), whose distribution laws are symmetric, have the same second moments, and all other moments exist with a uniform upper bound. In [51] these restrictions could be loosened, and he stated the theorem for real symmetric matrices with independent entries (as symmetry permits) that all have mean zero and the same second moments, and in addition all higher moments exist. A slightly different version of the theorem, concerning the associated counting measure of the eigenvalues

L_N := \frac{1}{N} \sum_{i=1}^{N} δ_{λ̃_i^{(N)}},

can e.g. be found in [3, theorems and 2.2.1]. More precisely, the theorems in [3] state the weak convergence in probability of L_N to the measure with density ψ(t) = \frac{2}{π}\sqrt{1−t^2} χ_{[−1,1]}. The proof is provided for real Wigner matrices as introduced above (recall that the variance for the off-diagonal entries and the diagonal entries differs by a factor two) with the additional assumption that all moments of the entries exist.
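The semi-circle law is easy to probe numerically: the second moment of ψ in (1.31) is ∫ t² ψ(t) dt = 1/4, and the mean squared eigenvalue of a single large matrix should already be close to that value. A small numpy illustration (assuming the GOE variances of (1.3)-(1.4); not part of the thesis's code):

```python
import numpy as np

# Second moment of the semi-circle density psi(t) = (2/pi) sqrt(1-t^2)
# on [-1, 1] equals 1/4; compare with one GOE sample.
rng = np.random.default_rng(2)
n = 400
upper = rng.normal(0.0, np.sqrt(1.0 / (4 * n)), size=(n, n))
h = np.triu(upper, k=1)
h = h + h.T                                    # symmetrise
h += np.diag(rng.normal(0.0, np.sqrt(1.0 / (2 * n)), size=n))
second_moment = np.mean(np.linalg.eigvalsh(h) ** 2)   # close to 0.25
```

The agreement at a single matrix size already illustrates the concentration behind the weak convergence of L_N stated above.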
The last assumption can be omitted, and the proof in [3] can be extended to complex and quaternionic Wigner matrices as introduced above. Some remarks about the evolution of Wigner's semi-circle law can be found in [3, section 2.7]. Note that matrices from Wigner ensembles with non-centred entries correspond to standard Wigner matrices under a rank one Hermitian deterministic perturbation, and L_N converges weakly in expectation and almost surely to the semicircular distribution given in (1.31) (see e.g. [7]). For general β-Hermite ensembles the theorem can e.g. be found in [4, (5.3)] and [53]. Observe that some of the references refer to slightly differently scaled matrix entries than introduced here, but taking this scaling into account we arrive at the limiting density ψ given in (1.31) for the matrix models of this section.

For the invariant ensembles that we consider, the limiting expected eigenvalue density ψ depends on V (see (1.22)). If V satisfies (1.23), the support of ψ is compact and a finite union of intervals (see [13]).

Scaling the eigenvalues

The probability measures P_{N,β} have been chosen such that the eigenvalues accumulate on some compact set, which is the support of the limiting expected eigenvalue density ψ. The spacings of the eigenvalues λ̃_i^{(N)} of an N×N matrix can therefore be expected to be of order 1/N. In order to study the distribution of the spacings, we rescale the eigenvalues such that the spacings of adjacent eigenvalues are normalised to unity. This can be done in different ways, and we introduce two rescalings, one leading to unfolded statistics, the other one to localised statistics. In both cases we start with the ordered original eigenvalues of a random matrix, λ̃_1^{(N)} ≤ λ̃_2^{(N)} ≤ … ≤ λ̃_N^{(N)}, and the rescaled ordered eigenvalues will be denoted by λ_1^{(N)} ≤ λ_2^{(N)} ≤ … ≤ λ_N^{(N)}. The superscript for both the rescaled and the unrescaled eigenvalues may be omitted if the matrix size can be recovered from the context.

Unfolded statistics

The spacings of the rescaled eigenvalues can be normalised to unity by rescaling such that the rescaled eigenvalues λ_1^{(N)} ≤ λ_2^{(N)} ≤ … ≤ λ_N^{(N)} lie in [0, N] and their expected eigenvalue density (see definition 1.2) is almost constant at 1, leading to an expectation of one rescaled eigenvalue per unit interval. In the case of unfolded statistics we obtain the rescaled eigenvalues λ_i^{(N)} from the original eigenvalues via

λ̃_i^{(N)} ↦ λ_i^{(N)} := N F(λ̃_i^{(N)}),    (1.32)

where F denotes the limiting expected eigenvalue distribution (see (1.29)). Indeed, as the original eigenvalues λ̃_i^{(N)} can be expected to lie in supp(ψ) for large matrix sizes, the rescaled eigenvalues λ_i^{(N)} can be expected to lie in [0, N] (see (1.29)). Concerning the expected eigenvalue density of the rescaled eigenvalues, we observe

that for large matrix sizes F_N is close to F, and we have for u ∈ [0, N]:

(1/N) E_{N,β}(#{i : λ_i ≤ u}) = (1/N) E_{N,β}(#{i : N F(λ̃_i) ≤ u})
= (1/N) E_{N,β}(#{i : λ̃_i ≤ F^{-1}(u/N)})
= F_N(F^{-1}(u/N))
≈ F(F^{-1}(u/N)) = u/N = ∫_0^u (1/N) dt.

Hence the spacings of adjacent rescaled eigenvalues are on average normalised to unity. The method of rescaling just described, using the limiting expected distribution function F, is commonly referred to as unfolded statistics. It is well known that different rescalings are needed at boundary points of the support of the limiting spectral density ψ, i.e. we have to restrict our attention to eigenvalues that lie in the so-called bulk of the spectrum. Unfolded statistics will only be used in our numerical experiments and hence only for Wigner ensembles and β-Hermite ensembles. For these ensembles Wigner's semi-circle law (theorem 1.3) is valid and hence we have to consider eigenvalues in some interval I = [I_min, I_max] ⊂ supp(ψ) = [-1, 1]. In addition, the explicit formula for F in the case of Wigner and β-Hermite ensembles, provided by Wigner's semi-circle law, ensures that (1.32) can easily be implemented in MATLAB, and hence unfolded statistics are well suited for our numerical experiments.

For our analytical considerations we introduce a linear rescaling that is better suited for our analysis. It can be motivated by linearising the rescaling in (1.32) in a small neighbourhood of a given point a ∈ supp(ψ) with ψ(a) > 0. More precisely, we have

F(λ̃_i) ≈ F(a) + F'(a)(λ̃_i - a) = F(a) + ψ(a)(λ̃_i - a).  (1.33)

Being interested in the spacings λ_{j+1} - λ_j, we may neglect the term F(a) in (1.33). This motivates the definition of the rescaling for localised statistics specified in the next paragraph.
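Although the thesis implements (1.32) in MATLAB, the unfolding step itself is small and can be sketched in a few lines of Python, assuming the semicircle normalisation supp(ψ) = [-1, 1], for which F(x) = 1/2 + (x√(1-x²) + arcsin x)/π. For illustration the sketch feeds F i.i.d. draws from ψ rather than actual eigenvalues; the function names and the rejection sampler are our own assumptions.

```python
import numpy as np

def semicircle_cdf(x):
    # Limiting distribution F for supp(psi) = [-1, 1]; one checks F' = psi.
    x = np.clip(np.asarray(x, dtype=float), -1.0, 1.0)
    return 0.5 + (x * np.sqrt(1.0 - x * x) + np.arcsin(x)) / np.pi

def unfold(eigs):
    # Unfolded statistics (1.32): lambda_i = N * F(lambda_tilde_i).
    eigs = np.sort(np.asarray(eigs, dtype=float))
    return len(eigs) * semicircle_cdf(eigs)

# i.i.d. draws from psi by rejection sampling; after unfolding, the points are
# uniform on [0, N], so the mean nearest-neighbour spacing is ~1.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 20000)
y = rng.uniform(0.0, 2.0 / np.pi, 20000)
samples = x[y <= (2.0 / np.pi) * np.sqrt(1.0 - x * x)][:2000]
mean_spacing = float(np.mean(np.diff(unfold(samples))))
```

This only demonstrates the normalisation of the spacings; the repulsion between eigenvalues, which shapes the spacing distribution itself, is of course absent for i.i.d. samples.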

Localised statistics

Let a be a point in the interior of the support of the limiting expected eigenvalue density ψ with ψ(a) > 0. For each matrix size N we consider an interval

I_N = [a - δ_N, a + δ_N] ⊂ supp(ψ)  (1.34)

such that for the length |I_N| of the interval I_N we have

N |I_N| → ∞ for N → ∞ and |I_N| → 0 for N → ∞.  (1.35)

We expect about N |I_N| ψ(a) eigenvalues in I_N for large matrix sizes, because ψ_N(x) ≈ ψ(x) and on the very small interval I_N the limiting expected eigenvalue density ψ is almost constant at ψ(a). Indeed, we have

E_{N,β}(#{j : λ̃_j^{(N)} ∈ I_N}) = N ∫_{I_N} ψ_N(t) dt  (1.36)
≈ N ∫_{I_N} ψ(t) dt  (1.37)
≈ N |I_N| ψ(a).  (1.38)

The expected difference of two adjacent eigenvalues near a will approximately be 1/(N ψ(a)), because the limiting expected eigenvalue density ψ is almost constant on the small interval I_N. Therefore, and according to the discussion around (1.33), we rescale the original eigenvalues in the following way:

λ̃_i^{(N)} ↦ λ_i^{(N)} := N (λ̃_i^{(N)} - a) ψ(a).  (1.39)

The above rescaling is defined for all i = 1, ..., N, but it is only used for those eigenvalues λ̃_i^{(N)} that lie in the interval I_N. For later use we denote

A_N := N ψ(a) (I_N - a) = {N ψ(a) (t - a) : t ∈ I_N} ⊂ R  (1.40)

and observe that for the length of A_N we have

|A_N| = N ψ(a) |I_N| → ∞ for N → ∞.  (1.41)

Hence for rescaled eigenvalues obtained by (1.39) we have

λ̃_i^{(N)} ∈ I_N ⟺ λ_i^{(N)} ∈ A_N.  (1.42)

Although the linear rescaling in the case of localised statistics is well suited for analytical considerations and still implies convergence of the empirical distribution function of the rescaled spacings (see main theorem in chapter 4), it has two

major disadvantages with regard to numerical implementations. On the one hand, by replacing F by its linearisation we lose accuracy, and on the other hand, we are forced to let the interval length tend to zero for increasing matrix sizes. This implies that in the case of localised statistics far fewer eigenvalues are taken into account for a given matrix size. This is why localised statistics are not as well suited for our numerical experiments as unfolded statistics. Nevertheless, we will include localised statistics in the experiments in order to study the rate of convergence in the main theorem. We sum up the features of the considered setting of random matrices in the following remark.

Remark 1.4. In our later analytical considerations (chapter 2 to chapter 9) we use the following setting: We consider matrices from invariant ensembles as introduced in section 1.1.1. We suppose that the probability measures P_{N,β} on the matrices are of the form (1.22), depending on a function V : R → R. Let V satisfy (1.23) and let the spectral density ψ exist (see definition 1.2). The eigenvalues of a matrix H are denoted by λ̃_1^{(N)} ≤ λ̃_2^{(N)} ≤ ..., and we may write λ̃_i^{(N)}(H) instead of λ̃_i^{(N)} if we want to indicate the respective matrix H. The rescaled eigenvalues are obtained according to (1.39) and are denoted by λ_1^{(N)} ≤ λ_2^{(N)} ≤ ... ≤ λ_N^{(N)} (again we may stress the dependence on H in the notation). We assume that I_N is an interval that satisfies (1.34) and (1.35) for some point a in the interior of supp(ψ) with ψ(a) > 0. The interval A_N is then given in (1.40), implying (1.41) and (1.42). Some results will be derived under the assumption of the main theorem (see assumption 4.1), which imposes an (implicit) condition on the function V and the intervals I_N respectively A_N.

Having rescaled the eigenvalues, we can now consider the spacing distribution and introduce some notation (section 1.1.4) before we can state a preliminary version of our main theorem (section 1.2).
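The localised rescaling of remark 1.4 — select the window I_N = [a - δ_N, a + δ_N] and apply (1.39) — can be sketched as follows. This is Python rather than the thesis's MATLAB; the choice δ_N = N^(-α) anticipates the intervals used in the numerical experiments, and the toy point set with constant density 1/2 (so ψ(a) = 0.5) as well as all names are our own assumptions.

```python
import numpy as np

def localise(eigs, n, a, psi_a, alpha=0.5):
    # Window I_N = [a - N^-alpha, a + N^-alpha] as in (1.34)/(1.35), followed
    # by the linear rescaling (1.39): lambda_i = N * (lambda_tilde_i - a) * psi(a).
    delta = float(n) ** (-alpha)
    eigs = np.asarray(eigs, dtype=float)
    window = np.sort(eigs[(eigs >= a - delta) & (eigs <= a + delta)])
    return n * (window - a) * psi_a

# Toy illustration: n points with constant density 1/2 on [-1, 1].  About
# n * |I_N| * psi(a) points survive the windowing, the rescaled points fill
# A_N = [-n * psi(a) * delta, n * psi(a) * delta], and the mean spacing is ~1.
rng = np.random.default_rng(2)
n = 10000
points = rng.uniform(-1.0, 1.0, n)
rescaled = localise(points, n, a=0.2, psi_a=0.5)
mean_spacing = float(np.mean(np.diff(rescaled)))
```

Note how few points remain (here about N^(1-α) ψ(a) · 2 ≈ 100 out of 10000), which is exactly the drawback for numerical experiments discussed above.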
The empirical distribution of the spacings

In order to introduce the main quantity to be studied, which is called the expected Kolmogorov-Smirnov distance of the empirical spacing distribution from its limiting distribution (for unfolded or localised statistics), we have to define the relevant counting measures of the spacings. For our analysis and our numerical experiments these differ slightly by a normalisation pre-factor. We start with the introduction of two counting measures for our numerical experiments, one for localised and one for unfolded statistics.

Definition 1.5. Let H denote a random matrix of a Wigner, invariant or β-Hermite ensemble and let λ_1 ≤ λ_2 ≤ ... ≤ λ_N denote the rescaled ordered eigenvalues that were either obtained by (1.32) (unfolded statistics) or by (1.39) (localised statistics).

a) Let I ⊂ (-1, 1) be an interval. If the rescaled eigenvalues λ_1 ≤ λ_2 ≤ ... ≤ λ_N were obtained by (1.32) (unfolded statistics), we denote the number of original eigenvalues in I by S_I(H), i.e.

S_I(H) := #{i : λ̃_i ∈ I},

and set

σ_N^{num}(H) := 1/(S_I(H) - 1) Σ_{j=1,...,N-1; λ̃_j, λ̃_{j+1} ∈ I} δ_{λ_{j+1} - λ_j}.  (1.43)

b) If the rescaled eigenvalues λ_1 ≤ λ_2 ≤ ... ≤ λ_N were obtained by (1.39) (localised statistics) for some interval I_N as in remark 1.4, we denote the number of original eigenvalues in I_N by S_{I_N}(H), i.e.

S_{I_N}(H) := #{i : λ̃_i ∈ I_N},

and set

σ_N^{num}(H) := 1/(S_{I_N}(H) - 1) Σ_{j=1,...,N-1; λ̃_j, λ̃_{j+1} ∈ I_N} δ_{λ_{j+1} - λ_j}.  (1.44)

Note that in the notation of the counting measures σ_N^{num}(H), which will be used in our numerical experiments only, we suppressed the dependence on the type of rescaling, such that the claim that we want to support numerically (see (1.52) and (1.53) below) can be formulated to apply to both unfolded and localised statistics. Moreover, the counting measures depend on the intervals I resp. I_N, but we also suppress this dependency in the notation. Note further that for unfolded as well as for localised statistics ∫_0^s dσ_N^{num}(H) is the empirical distribution function of the rescaled spacings (for eigenvalues in I resp. I_N) of the matrix H, evaluated at some point s. In the main theorem of this thesis we will only study localised statistics and the relevant counting measure will be similar to (1.44). For analytical reasons the random quantity S_{I_N}(H) - 1 in (1.44) is replaced by the deterministic term N ψ(a) |I_N|, which is, according to (1.36) to (1.38), close to the expected number of eigenvalues (resp. spacings belonging to eigenvalues) in the interval I_N.
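A minimal Python sketch of the empirical spacing distribution function s ↦ ∫_0^s dσ_N^{num}(H): for simplicity it keeps all consecutive pairs, so the total number of spacings plays the role of S_I(H) - 1 and the window restriction of definition 1.5 is left out; the function name is ours.

```python
import numpy as np

def empirical_spacing_cdf(rescaled, s):
    # s -> (number of nearest-neighbour spacings <= s) / (number of spacings),
    # i.e. the empirical distribution function of the rescaled spacings.
    spacings = np.sort(np.diff(np.sort(np.asarray(rescaled, dtype=float))))
    return np.searchsorted(spacings, np.asarray(s, dtype=float),
                           side="right") / len(spacings)
```

For example, the rescaled points 0, 1, 3 have spacings 1 and 2, so the function jumps by 1/2 at s = 1 and at s = 2.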
We continue with the introduction of the counting measure for our analytical studies (for localised statistics).

Definition 1.6. We consider the setting of remark 1.4. For a real symmetric, complex Hermitian or quaternionic self-dual matrix H with rescaled eigenvalues λ_1 ≤ λ_2 ≤ ... ≤ λ_N obtained by (1.39) (localised statistics), we set

σ_N(H) := 1/(N ψ(a) |I_N|) Σ_{j=1,...,N-1; λ̃_j, λ̃_{j+1} ∈ I_N} δ_{λ_{j+1} - λ_j}.  (1.45)

The convergence of the empirical distribution function of the spacings, given by s ↦ ∫_0^s dσ_N(H), as N → ∞ is the quantity of interest in our main theorem.

The main theorem

With the notation we introduced so far we can now formulate a preliminary version of the main theorem (for the final version see theorem 4.3).

Theorem 1.7 (Preliminary version of the main theorem). We consider the setting of remark 1.4. Let P_{N,β} denote the probability measure for invariant matrix ensembles as introduced in section 1.1.1. Let E_{N,β} denote the expected value with respect to P_{N,β} and let the counting measure σ_N be given by definition 1.6 with respect to some interval I_N as in remark 1.4. Then there exists a measure μ_β such that for a large class of potentials V we have

lim_{N→∞} E_{N,β}( sup_{s∈R} | ∫_0^s dσ_N - ∫_0^s dμ_β | ) = 0.  (1.46)

Remark 1.8. i) The difference between the final version of the main theorem (theorem 4.3) and theorem 1.7 is that the phrase "for a large class of potentials V" will be made precise. However, the hypothesis of the theorem will not be formulated as an explicit assumption on V, but rather on the matrix kernels K_{N,β} (see assumption 4.1), which will be introduced in chapter 3. The required uniform convergence of K_{N,β} in the region of interest is what one would expect from the known universality results. In fact, in all cases where universality has been shown for invariant ensembles by a Riemann-Hilbert approach, the kernels K_{N,β} satisfy our assumption 4.1 (cf. remark 4.2) and thus (1.46) holds.

ii) In chapter 6 (theorem 6.4) we prove the existence and uniqueness of a measure μ_β, called the limiting spacing distribution, with

lim_{N→∞} E_{N,β}( ∫_0^s dσ_N ) = ∫_0^s dμ_β.  (1.47)

In the proof of the intermediate result (1.47) we do not use the known connection between the limiting spacing distribution and the gap probabilities. We rather derive this relation as a result of our representation of μ_β (see (7.2) together with lemma 7.8 and lemma 7.11).

iii) For later reference we introduce some notation for the quantity of interest in the main theorem. The Kolmogorov-Smirnov distance of the empirical spacing distribution from the limiting spacing distribution is denoted by

KS_N := sup_{s∈R} | ∫_0^s dσ_N(H) - ∫_0^s dμ_β |,  (1.48)

and for its expected value we set

EKS_N := E_{N,β}( sup_{s∈R} | ∫_0^s dσ_N - ∫_0^s dμ_β | ).  (1.49)

iv) As a by-product of our proof one could also formulate the hypothesis of theorem 4.3 in terms of the k-point correlation functions, which would be more suitable for Wigner matrix ensembles. At this point, the existing results (see e.g. [19], [17]) are not in a form that can be applied to theorem 4.3 (see remark 4.5).

As mentioned above, the main theorem is stated in chapter 4. Its proof is the content of part I of this thesis (chapter 2 to chapter 9) and it comes in three steps, as already noted. The main result of the first step (chapter 2 to chapter 7) concerns the convergence of the expected empirical distribution function of the spacings, E_{N,β}( ∫_0^s dσ_N ). The derivation of the desired results for E_{N,β}( ∫_0^s dσ_N ) is organised as follows: Following the arguments of [28], we introduce in chapter 2 auxiliary measures γ_N(H), which are closely related to the measures σ_N(H). The advantage of γ_N(H) is that ∫_0^s dγ_N(H) are symmetric functions of the eigenvalues of H.
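The distance KS_N in (1.48) can be evaluated exactly for a finite sample, since the supremum over s is attained at a jump of the empirical distribution function; it suffices to compare the limiting distribution function with the empirical one just before and just after each jump. A small Python helper (our own, not thesis code):

```python
import numpy as np

def ks_distance(spacings, limit_cdf):
    # Exact Kolmogorov-Smirnov distance between the empirical distribution of
    # the given spacings and a (vectorised) limiting distribution function.
    x = np.sort(np.asarray(spacings, dtype=float))
    k = len(x)
    f = limit_cdf(x)
    upper = np.abs(np.arange(1, k + 1) / k - f)  # just after each jump
    lower = np.abs(np.arange(0, k) / k - f)      # just before each jump
    return float(max(upper.max(), lower.max()))
```

For instance, three spacings 0.25, 0.5, 0.75 compared against the uniform distribution on [0, 1] give the distance |0 - 0.25| = 0.25, attained just before the first jump.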
In order to consider the expectation with respect to H, we have to study the marginal densities of the joint probability density of the eigenvalues of a matrix given in (1.25) with respect to k marginal variables for any 1 ≤ k ≤ N. These densities are called k-point correlation functions. In chapter 3 we recall their known representations in terms of the (matrix) kernel functions

K_{N,β} and we discuss known universality results. After providing further results about the (rescaled) k-point correlation functions in chapter 5, we consider the expected values of γ_N and σ_N in chapter 6. We prove the existence of the limiting spacing measures μ_β defined via (1.47) and provide an estimate on | ∫_0^s dμ_β - ∫_0^s dσ_N(H) |. Relating the limiting spacing measures μ_β to gap probabilities and proving a tail estimate on μ_β, which will be needed in the later proof of the main theorem (see chapter 7), completes the first step of the proof.

The second step in the proof of theorem 4.3 is to provide sufficient estimates for the variances of the auxiliary measures γ_N. These estimates are the content of chapter 8 and are the heart of the proof of the main theorem. In particular, in the cases β = 1 and β = 4 this necessitates a detailed notation and a series of intermediate results. The estimates on the variances of γ_N imply that for given s ∈ R we have (see remark 6.5)

E_{N,β}( | ∫_0^s dσ_N - ∫_0^s dμ_β | ) → 0 for N → ∞.  (1.50)

The final step in the proof of the main theorem (chapter 9) is to consider the supremum over all s ∈ R in (1.50) (cf. (1.49)) before taking the expected value. To this end we introduce nodes, at which we consider the expectation in (1.50), and control the dependence on these nodes. Together with the tail estimates on μ_β we complete the proof of the main theorem in chapter 9.

Numerical experiments

In part II of this thesis we present numerical experiments using MATLAB that provide rates of convergence for

EKS_N := E_{N,β}( sup_{s∈R} | ∫_0^s dσ_N^{num} - ∫_0^s dμ_β | ).  (1.51)

Observe that in part II of this thesis EKS_N differs slightly from (1.49), as the measure dσ_N(H) is replaced by the closely related measure dσ_N^{num}(H). We conduct our experiments for some Wigner ensembles and some general β-Hermite ensembles (including GOE, GUE and GSE) for both unfolded and localised statistics (see (1.32) and (1.39)).
In the case of unfolded statistics we provide numerical evidence for the claim that in the considered ensembles the leading asymptotic of EKS_N (as in (1.51)) is C N^{-1/2}, i.e.

EKS_N ≈ C N^{-1/2}.  (1.52)
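How such a rate is read off in practice: fit log EKS_N against log N by least squares and take the negative slope as the estimate of the exponent. The sketch below uses synthetic values lying exactly on C N^{-1/2} (the constant C = 0.7 is an arbitrary stand-in, not experimental data from the thesis):

```python
import numpy as np

# Synthetic stand-in data; a real experiment would put measured EKS_N here.
ns = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
eks = 0.7 * ns ** -0.5

# Least-squares fit of log EKS_N = log C - k * log N.
slope, intercept = np.polyfit(np.log(ns), np.log(eks), 1)
rate, prefactor = -float(slope), float(np.exp(intercept))
```

On exact power-law data the fit recovers the exponent k = 1/2 and the prefactor C; on noisy measured values the same fit yields the estimates reported in part II.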

The observed N^{-1/2} asymptotics strongly suggests that there should be a corresponding limit law similar to the one shown by Soshnikov [42] (see (1.2)). For localised statistics we observe that the rate of convergence of EKS_N depends on the choice of the interval I_N (see (1.34), (1.35)). Hence we consider intervals I_N = [a - N^{-α}, a + N^{-α}] for some a ∈ (-1, 1) and some α ∈ (0, 1). For the considered settings our experiments show that

EKS_N ≈ C N^{-k(α)}  (1.53)

for some k(α) > 0.

For Wigner matrices it could be shown that several local statistics of the eigenvalues, such as correlation functions and gap probabilities, converge to the same limit as in the respective Gaussian case, where β = 1 corresponds to real Wigner matrices, β = 2 to complex matrices and β = 4 to quaternionic matrices (see e.g. a series of papers by Erdős et al. [16], [18], [19], [2], the survey [17], and [43] by Tao and Vu). The convergence of the k-point correlation functions of Wigner matrices suggests that the empirical distribution of the spacings also converges to the respective limit (see (1.47)). The eigenvalue statistics of β-Hermite ensembles have not yet been exhausted, although some results, e.g. for the one- and two-point correlation functions (see [23]) and the asymptotic behaviour of the gap probabilities (see e.g. [47]), are available. However, neither for Wigner nor for β-Hermite ensembles does a rigorous proof of the convergence of the empirical spacing distribution (see (1.47)) seem to have been attempted yet.

One important issue in the numerical experiments is the approximation of the limiting measures μ_β resp. their densities p_β. For β = 1, 2, 4 we use the MATLAB toolbox by Bornemann (cf. [5]) for a fast and precise evaluation of the related gap probabilities. Then we obtain the limiting densities p_β by numerical differentiation (see chapter 7). For β ∈ R₊ \ {1, 2, 4} no such precise numerical schemes for the evaluation of p_β are available.
Instead we use the generalised Wigner surmise (see [33] and section 11.3 of this thesis). In fact, our numerical experiments provide a strong indication for the validity of this surmise (see remark 11.1). A detailed description of our numerical experiments and the results, together with selected plots, is given in part II of this thesis. An outline is given at the beginning of part II.
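For reference, the generalised Wigner surmise takes the form p_β(s) = a_β s^β exp(-b_β s²), with a_β and b_β determined by requiring unit total mass and unit mean spacing; for β = 1 this reduces to the classical surmise (π/2) s exp(-π s²/4). A small Python helper under this formulation (our own sketch; see [33] and section 11.3 for the surmise itself):

```python
import math

def surmise_constants(beta):
    # p_beta(s) = a * s^beta * exp(-b * s^2) with
    #   b = (Gamma((beta+2)/2) / Gamma((beta+1)/2))^2   (unit mean spacing)
    #   a = 2 * b^((beta+1)/2) / Gamma((beta+1)/2)      (unit total mass)
    g1 = math.gamma((beta + 1.0) / 2.0)
    g2 = math.gamma((beta + 2.0) / 2.0)
    b = (g2 / g1) ** 2
    a = 2.0 * b ** ((beta + 1.0) / 2.0) / g1
    return a, b

def surmise_density(beta, s):
    a, b = surmise_constants(beta)
    return a * s ** beta * math.exp(-b * s * s)
```

For β = 1 this gives (a, b) = (π/2, π/4), and for β = 2 it gives (32/π², 4/π), matching the classical GOE and GUE surmises.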


More information

TOP EIGENVALUE OF CAUCHY RANDOM MATRICES

TOP EIGENVALUE OF CAUCHY RANDOM MATRICES TOP EIGENVALUE OF CAUCHY RANDOM MATRICES with Satya N. Majumdar, Gregory Schehr and Dario Villamaina Pierpaolo Vivo (LPTMS - CNRS - Paris XI) Gaussian Ensembles N = 5 Semicircle Law LARGEST EIGENVALUE

More information

Random matrices: Distribution of the least singular value (via Property Testing)

Random matrices: Distribution of the least singular value (via Property Testing) Random matrices: Distribution of the least singular value (via Property Testing) Van H. Vu Department of Mathematics Rutgers vanvu@math.rutgers.edu (joint work with T. Tao, UCLA) 1 Let ξ be a real or complex-valued

More information

Homogenization of the Dyson Brownian Motion

Homogenization of the Dyson Brownian Motion Homogenization of the Dyson Brownian Motion P. Bourgade, joint work with L. Erdős, J. Yin, H.-T. Yau Cincinnati symposium on probability theory and applications, September 2014 Introduction...........

More information

Determinantal point processes and random matrix theory in a nutshell

Determinantal point processes and random matrix theory in a nutshell Determinantal point processes and random matrix theory in a nutshell part I Manuela Girotti based on M. Girotti s PhD thesis and A. Kuijlaars notes from Les Houches Winter School 202 Contents Point Processes

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices

Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices Local semicircle law, Wegner estimate and level repulsion for Wigner random matrices László Erdős University of Munich Oberwolfach, 2008 Dec Joint work with H.T. Yau (Harvard), B. Schlein (Cambrigde) Goal:

More information

Triangular matrices and biorthogonal ensembles

Triangular matrices and biorthogonal ensembles /26 Triangular matrices and biorthogonal ensembles Dimitris Cheliotis Department of Mathematics University of Athens UK Easter Probability Meeting April 8, 206 2/26 Special densities on R n Example. n

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

Lectures 2 3 : Wigner s semicircle law

Lectures 2 3 : Wigner s semicircle law Fall 009 MATH 833 Random Matrices B. Való Lectures 3 : Wigner s semicircle law Notes prepared by: M. Koyama As we set up last wee, let M n = [X ij ] n i,j=1 be a symmetric n n matrix with Random entries

More information

Abstract. 2. We construct several transcendental numbers.

Abstract. 2. We construct several transcendental numbers. Abstract. We prove Liouville s Theorem for the order of approximation by rationals of real algebraic numbers. 2. We construct several transcendental numbers. 3. We define Poissonian Behaviour, and study

More information

Semicircle law on short scales and delocalization for Wigner random matrices

Semicircle law on short scales and delocalization for Wigner random matrices Semicircle law on short scales and delocalization for Wigner random matrices László Erdős University of Munich Weizmann Institute, December 2007 Joint work with H.T. Yau (Harvard), B. Schlein (Munich)

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Eigenvalue PDFs. Peter Forrester, M&S, University of Melbourne

Eigenvalue PDFs. Peter Forrester, M&S, University of Melbourne Outline Eigenvalue PDFs Peter Forrester, M&S, University of Melbourne Hermitian matrices with real, complex or real quaternion elements Circular ensembles and classical groups Products of random matrices

More information

Diskrete Mathematik und Optimierung

Diskrete Mathematik und Optimierung Diskrete Mathematik und Optimierung Steffen Hitzemann and Winfried Hochstättler: On the Combinatorics of Galois Numbers Technical Report feu-dmo012.08 Contact: steffen.hitzemann@arcor.de, winfried.hochstaettler@fernuni-hagen.de

More information

The Free Central Limit Theorem: A Combinatorial Approach

The Free Central Limit Theorem: A Combinatorial Approach The Free Central Limit Theorem: A Combinatorial Approach by Dennis Stauffer A project submitted to the Department of Mathematical Sciences in conformity with the requirements for Math 4301 (Honour s Seminar)

More information

Random matrix pencils and level crossings

Random matrix pencils and level crossings Albeverio Fest October 1, 2018 Topics to discuss Basic level crossing problem 1 Basic level crossing problem 2 3 Main references Basic level crossing problem (i) B. Shapiro, M. Tater, On spectral asymptotics

More information

A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices

A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices Michel Ledoux Institut de Mathématiques, Université Paul Sabatier, 31062 Toulouse, France E-mail: ledoux@math.ups-tlse.fr

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction

Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction Random Matrix Theory and its applications to Statistics and Wireless Communications Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction Sergio Verdú Princeton University National

More information

Second Order Freeness and Random Orthogonal Matrices

Second Order Freeness and Random Orthogonal Matrices Second Order Freeness and Random Orthogonal Matrices Jamie Mingo (Queen s University) (joint work with Mihai Popa and Emily Redelmeier) AMS San Diego Meeting, January 11, 2013 1 / 15 Random Matrices X

More information

Least singular value of random matrices. Lewis Memorial Lecture / DIMACS minicourse March 18, Terence Tao (UCLA)

Least singular value of random matrices. Lewis Memorial Lecture / DIMACS minicourse March 18, Terence Tao (UCLA) Least singular value of random matrices Lewis Memorial Lecture / DIMACS minicourse March 18, 2008 Terence Tao (UCLA) 1 Extreme singular values Let M = (a ij ) 1 i n;1 j m be a square or rectangular matrix

More information

Quantum Computing Lecture 2. Review of Linear Algebra

Quantum Computing Lecture 2. Review of Linear Algebra Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces

More information

DISTRIBUTION OF EIGENVALUES FOR THE ENSEMBLE OF REAL SYMMETRIC TOEPLITZ MATRICES

DISTRIBUTION OF EIGENVALUES FOR THE ENSEMBLE OF REAL SYMMETRIC TOEPLITZ MATRICES DISTRIBUTIO OF EIGEVALUES FOR THE ESEMBLE OF REAL SYMMETRIC TOEPLITZ MATRICES CHRISTOPHER HAMMOD AD STEVE J. MILLER Abstract. Consider the ensemble of real symmetric Toeplitz matrices, each independent

More information

Elementary linear algebra

Elementary linear algebra Chapter 1 Elementary linear algebra 1.1 Vector spaces Vector spaces owe their importance to the fact that so many models arising in the solutions of specific problems turn out to be vector spaces. The

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

Random Matrices: Invertibility, Structure, and Applications

Random Matrices: Invertibility, Structure, and Applications Random Matrices: Invertibility, Structure, and Applications Roman Vershynin University of Michigan Colloquium, October 11, 2011 Roman Vershynin (University of Michigan) Random Matrices Colloquium 1 / 37

More information

A Generalization of Wigner s Law

A Generalization of Wigner s Law A Generalization of Wigner s Law Inna Zakharevich June 2, 2005 Abstract We present a generalization of Wigner s semicircle law: we consider a sequence of probability distributions (p, p 2,... ), with mean

More information

BALANCING GAUSSIAN VECTORS. 1. Introduction

BALANCING GAUSSIAN VECTORS. 1. Introduction BALANCING GAUSSIAN VECTORS KEVIN P. COSTELLO Abstract. Let x 1,... x n be independent normally distributed vectors on R d. We determine the distribution function of the minimum norm of the 2 n vectors

More information

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices

A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices A Note on the Central Limit Theorem for the Eigenvalue Counting Function of Wigner and Covariance Matrices S. Dallaporta University of Toulouse, France Abstract. This note presents some central limit theorems

More information

Algebra I Fall 2007

Algebra I Fall 2007 MIT OpenCourseWare http://ocw.mit.edu 18.701 Algebra I Fall 007 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 18.701 007 Geometry of the Special Unitary

More information

Maximal height of non-intersecting Brownian motions

Maximal height of non-intersecting Brownian motions Maximal height of non-intersecting Brownian motions G. Schehr Laboratoire de Physique Théorique et Modèles Statistiques CNRS-Université Paris Sud-XI, Orsay Collaborators: A. Comtet (LPTMS, Orsay) P. J.

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

Gaussian Random Fields

Gaussian Random Fields Gaussian Random Fields Mini-Course by Prof. Voijkan Jaksic Vincent Larochelle, Alexandre Tomberg May 9, 009 Review Defnition.. Let, F, P ) be a probability space. Random variables {X,..., X n } are called

More information

Multi Degrees of Freedom Systems

Multi Degrees of Freedom Systems Multi Degrees of Freedom Systems MDOF s http://intranet.dica.polimi.it/people/boffi-giacomo Dipartimento di Ingegneria Civile Ambientale e Territoriale Politecnico di Milano March 9, 07 Outline, a System

More information

Math 108b: Notes on the Spectral Theorem

Math 108b: Notes on the Spectral Theorem Math 108b: Notes on the Spectral Theorem From section 6.3, we know that every linear operator T on a finite dimensional inner product space V has an adjoint. (T is defined as the unique linear operator

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

A fast randomized algorithm for overdetermined linear least-squares regression

A fast randomized algorithm for overdetermined linear least-squares regression A fast randomized algorithm for overdetermined linear least-squares regression Vladimir Rokhlin and Mark Tygert Technical Report YALEU/DCS/TR-1403 April 28, 2008 Abstract We introduce a randomized algorithm

More information

Stochastic Differential Equations Related to Soft-Edge Scaling Limit

Stochastic Differential Equations Related to Soft-Edge Scaling Limit Stochastic Differential Equations Related to Soft-Edge Scaling Limit Hideki Tanemura Chiba univ. (Japan) joint work with Hirofumi Osada (Kyushu Unv.) 2012 March 29 Hideki Tanemura (Chiba univ.) () SDEs

More information

LUCK S THEOREM ALEX WRIGHT

LUCK S THEOREM ALEX WRIGHT LUCK S THEOREM ALEX WRIGHT Warning: These are the authors personal notes for a talk in a learning seminar (October 2015). There may be incorrect or misleading statements. Corrections welcome. 1. Convergence

More information

Rectangular Young tableaux and the Jacobi ensemble

Rectangular Young tableaux and the Jacobi ensemble Rectangular Young tableaux and the Jacobi ensemble Philippe Marchal October 20, 2015 Abstract It has been shown by Pittel and Romik that the random surface associated with a large rectangular Young tableau

More information

The norm of polynomials in large random matrices

The norm of polynomials in large random matrices The norm of polynomials in large random matrices Camille Mâle, École Normale Supérieure de Lyon, Ph.D. Student under the direction of Alice Guionnet. with a significant contribution by Dimitri Shlyakhtenko.

More information

Brown University Analysis Seminar

Brown University Analysis Seminar Brown University Analysis Seminar Eigenvalue Statistics for Ensembles of Random Matrices (especially Toeplitz and Palindromic Toeplitz) Steven J. Miller Brown University Providence, RI, September 15 th,

More information

Trades in complex Hadamard matrices

Trades in complex Hadamard matrices Trades in complex Hadamard matrices Padraig Ó Catháin Ian M. Wanless School of Mathematical Sciences, Monash University, VIC 3800, Australia. February 9, 2015 Abstract A trade in a complex Hadamard matrix

More information

Central Limit Theorems for linear statistics for Biorthogonal Ensembles

Central Limit Theorems for linear statistics for Biorthogonal Ensembles Central Limit Theorems for linear statistics for Biorthogonal Ensembles Maurice Duits, Stockholm University Based on joint work with Jonathan Breuer (HUJI) Princeton, April 2, 2014 M. Duits (SU) CLT s

More information

Multivariate Distributions

Multivariate Distributions IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate

More information

On the principal components of sample covariance matrices

On the principal components of sample covariance matrices On the principal components of sample covariance matrices Alex Bloemendal Antti Knowles Horng-Tzer Yau Jun Yin February 4, 205 We introduce a class of M M sample covariance matrices Q which subsumes and

More information

c 1 v 1 + c 2 v 2 = 0 c 1 λ 1 v 1 + c 2 λ 1 v 2 = 0

c 1 v 1 + c 2 v 2 = 0 c 1 λ 1 v 1 + c 2 λ 1 v 2 = 0 LECTURE LECTURE 2 0. Distinct eigenvalues I haven t gotten around to stating the following important theorem: Theorem: A matrix with n distinct eigenvalues is diagonalizable. Proof (Sketch) Suppose n =

More information