MATH 505b Project Random Matrices


Emre Demirkaya, Mark Duggan, Melike Sirlanci Tuysuzoglu

April 30, 2014

1 Introduction

Studies of Random Matrix Theory began in 1928, when J. Wishart studied the distribution of eigenvalues of random matrices of fixed dimension in his paper "The Generalized Product Moment Distribution in Samples from a Normal Multivariate Population." If the entries of the matrix are Gaussian (the nonasymptotic setting), the joint distribution of the eigenvalues is known explicitly. In 1955, E. Wigner studied the asymptotic behavior of eigenvalues of random matrices in "Characteristic Vectors of Bordered Matrices with Infinite Dimensions." His motivation came from nuclear physics: the statistics of experimentally measured atomic energy levels can be explained by the limiting spectrum of random matrices. That is, he studied the distribution of eigenvalues of random matrices as the matrix dimension tends to infinity. One striking result, which can be regarded as a central limit theorem of random matrix theory, is that the asymptotic eigenvalue distribution of a random matrix does not depend on the distribution of its entries. In this project, we focus on a theorem about the asymptotic behavior of normalized symmetric random matrices, which will be defined in the next section. As a further study, one may ask the natural question: how fast does the eigenvalue distribution converge as $n$ increases?

2 Preliminaries

In this section, we give the definition of Wigner matrices and other important definitions.

Definition 2.1 (Wigner Matrix): A Wigner matrix is a symmetric $n \times n$ matrix $M_n$ with entries
$$M_n(j, i) = M_n(i, j) = \begin{cases} Z_{i,j} & i < j, \\ Y_i & i = j, \end{cases}$$
where $\{Z_{i,j}\}_{1 \le i < j}$ and $\{Y_i\}_{1 \le i}$ are two families of independent identically distributed real-valued random variables with mean 0 and variance 1.

Remark 2.2: We call the matrix $M_n/\sqrt{n}$ a normalized Wigner matrix. We do not divide by $\sqrt{n}$ on a whim: notice that the expected norm $E\|M_n/\sqrt{n}\|$ remains bounded as $n \to \infty$, while $E\|M_n\| \to \infty$ as $n \to \infty$.

Example 2.3: When the entries are normally distributed (i.e. Gaussian), we call $M_n$ a Gaussian Wigner matrix. When $Y_i \sim N(0, 2)$ and $Z_{i,j}$ is a standard normal, we call this the Gaussian Orthogonal Ensemble, which earns its name because it is invariant under orthogonal transformations. The Gaussian Orthogonal Ensemble is useful because it is highly symmetric.

Definition 2.4 (Empirical Spectral Distribution): For a normalized Wigner matrix $X_n = M_n/\sqrt{n}$, we consider its $n$ real eigenvalues $\lambda_1(X_n) \le \dots \le \lambda_n(X_n)$ (the eigenvalues are real because $X_n$ is symmetric). We call
$$\mu_n(x) := \frac{1}{n} \sum_{j=1}^{n} \mathbf{1}_{\{\lambda_j(X_n) \le x\}}$$
the empirical spectral distribution. Notice that for each realization $X_n(\omega)$, $\omega \in \Omega$, $\mu_n$ is a cumulative distribution function.

Definition 2.5 (The Empirical Spectral Measure): We define the empirical spectral measure
$$\bar{\mu}_n := E\left[\frac{1}{n} \sum_{j=1}^{n} \mathbf{1}_{\{\lambda_j(X_n) \le x\}}\right],$$
defined by $\int f \, d\bar{\mu}_n = E\left[\int f \, d\mu_n\right]$ for every compactly supported continuous real function $f$.

Definition 2.6 (Weak Convergence): A sequence $\nu_n$ of deterministic probability measures on a space $\Omega$ with associated $\sigma$-algebra $\mathcal{F}$ is said to converge weakly to a probability measure $\nu$ if, for every continuous bounded function $f$,
$$\int f \, d\nu_n \to \int f \, d\nu$$
as $n \to \infty$. If $\nu_n$ is random, then we consider weak convergence to $\nu$ either in probability or almost surely.

3 Eigenvalue Distribution of Wigner Matrices

In this section, we aim to prove the following theorem, the main result of this project.

Theorem 3.1: Let $\{M_n\}_{n=1}^{\infty}$ be a sequence of Wigner matrices, and for each $n$ denote $X_n = M_n/\sqrt{n}$. Then $\mu_n$ converges weakly, in probability, to the semicircle distribution
$$\sigma(x)\,dx = \frac{1}{2\pi}\sqrt{4 - x^2}\,\mathbf{1}_{\{|x| \le 2\}}\,dx.$$

Proof: We want to show that $\int f\,d\mu_n \to \int f\,\sigma(x)\,dx$ in probability for any continuous bounded function $f$. In other words, we want to show that for every $\epsilon > 0$,
$$P\left(\left|\int f\,d\mu_n - \int f\,\sigma(x)\,dx\right| > \epsilon\right) \to 0$$
for any continuous bounded function $f$.

In order to rewrite these integrals as linear combinations of moments, we will replace $f$ by polynomials using the Weierstrass Approximation Theorem (Theorem 4.1). Let $B > 0$ be a fixed constant and $k$ any positive integer. Using Markov's inequality and the bound $\mathbf{1}_{\{|x|>B\}} \le |x|^k/B^k$, we obtain
$$P\left(\int |x|^k\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n > \epsilon\right) \le \frac{1}{\epsilon}\,E\left[\int |x|^k\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n\right] \le \frac{1}{\epsilon B^k}\,E\left[\int x^{2k}\,d\mu_n\right] = \frac{1}{\epsilon B^k}\int x^{2k}\,d\bar{\mu}_n.$$

By Lemma 4.2, $\int x^{2k}\,d\bar{\mu}_n$ converges (deterministically) to $\int x^{2k}\,\sigma(x)\,dx$ for any positive integer $k$. Hence
$$\limsup_{n\to\infty} P\left(\int |x|^k\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n > \epsilon\right) \le \frac{1}{\epsilon B^k}\int x^{2k}\,\sigma(x)\,dx.$$
On the other hand, by Lemma 4.4 we know $\int x^{2k}\,\sigma(x)\,dx = C_k$, and since $C_k \le 4^k$, combining the last two displays yields
$$\limsup_{n\to\infty} P\left(\int |x|^k\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n > \epsilon\right) \le \frac{4^k}{\epsilon B^k}.$$
Now let
$$a_k = \limsup_{n\to\infty} P\left(\int |x|^k\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n > \epsilon\right), \qquad b_k = \frac{4^k}{\epsilon B^k},$$
and choose $B > 4$. We claim that $a_k = 0$ for all $k \in \mathbb{N}$.

Suppose the contrary: there is $k_0 \in \mathbb{N}$ with $a_{k_0} > 0$. Since $|x|^k$ increases with $k$ on $\{|x| > B\}$ (as $B > 1$), the sequence $a_k$ is increasing, so $a_k \ge a_{k_0}$ for all $k \ge k_0$. But $a_k \le b_k$, and therefore
$$0 = \lim_{k\to\infty} b_k \ge \limsup_{k\to\infty} a_k \ge a_{k_0} > 0,$$
which is a contradiction. So $a_k = 0$ for all $k \in \mathbb{N}$.

Now, let $\epsilon > 0$ and let $f : \mathbb{R} \to \mathbb{R}$ be bounded and continuous; suppose in addition that $f$ is compactly supported on $[-B, B]$. By the Weierstrass Approximation Theorem, we can choose a polynomial $p$ such that $|p(x) - f(x)| \le \epsilon/4$ for all $x \in [-B, B]$. Remembering that $f$ vanishes outside $[-B, B]$ and that $\sigma$ is supported on $[-2, 2] \subseteq [-B, B]$, the triangle inequality gives
$$\left|\int f\,d\mu_n - \int f\,\sigma(x)\,dx\right| \le \int |f - p|\,d\mu_n + \int |f - p|\,\sigma(x)\,dx + \left|\int p\,d\mu_n - \int p\,\sigma(x)\,dx\right|$$
$$\le \int |f - p|\,\mathbf{1}_{\{|x|\le B\}}\,d\mu_n + \int |f - p|\,\mathbf{1}_{\{|x|\le B\}}\,\sigma(x)\,dx + \int |p|\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n + \left|\int p\,d\mu_n - \int p\,\sigma(x)\,dx\right|.$$
Since $|p(x) - f(x)| \le \epsilon/4$ for all $x \in [-B, B]$ and $\mu_n$, $\sigma$ are probability measures, the first two terms are each at most $\epsilon/4$, so
$$\left|\int f\,d\mu_n - \int f\,\sigma(x)\,dx\right| \le \frac{\epsilon}{2} + \int |p|\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n + \left|\int p\,d\mu_n - \int p\,\sigma(x)\,dx\right|.$$
Inserting $\int p\,d\bar{\mu}_n$ and using the triangle inequality once more,
$$P\left(\left|\int f\,d\mu_n - \int f\,\sigma(x)\,dx\right| > \epsilon\right) \le P\left(\int |p|\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n + \left|\int p\,d\mu_n - \int p\,d\bar{\mu}_n\right| + \left|\int p\,d\bar{\mu}_n - \int p\,\sigma(x)\,dx\right| > \frac{\epsilon}{2}\right)$$
$$\le P\left(\int |p|\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n > \frac{\epsilon}{6}\right) + P\left(\left|\int p\,d\bar{\mu}_n - \int p\,\sigma(x)\,dx\right| > \frac{\epsilon}{6}\right) + P\left(\left|\int p\,d\mu_n - \int p\,d\bar{\mu}_n\right| > \frac{\epsilon}{6}\right). \qquad (1)$$
Indeed, the event on the left is contained in the union of the three events on the right: if all three quantities were at most $\epsilon/6$, their sum would be at most $\epsilon/2$.

Since $p$ is a polynomial, the first probability in (1) goes to 0 as $n \to \infty$ by the first part of the proof (where we showed $a_k = 0$ for all $k$). By Lemma 4.2, the second probability converges to 0 as $n \to \infty$ (the quantity inside is deterministic). Finally, the third probability also goes to 0 as $n \to \infty$, by Lemma 4.3.

Thus, for any $\epsilon > 0$ and any bounded continuous function $f$, we have
$$\lim_{n\to\infty} P\left(\left|\int f\,d\mu_n - \int f\,\sigma(x)\,dx\right| > \epsilon\right) = 0.$$

Now, by using Lemma 4.5, we can prove a stronger version of this result, which can be considered a corollary of the main theorem.

Corollary 3.2: Let $\{M_n\}_{n=1}^{\infty}$ be a sequence of Wigner matrices, and for each $n$ denote $X_n = M_n/\sqrt{n}$. Then $\mu_n$ converges weakly, almost surely, to the semicircle distribution.

Proof: The convergence in Lemma 4.2 is already deterministic, so we only need to upgrade the convergence in probability of Lemma 4.3 to almost sure convergence. First using Markov's inequality (applied to the squared deviation) and then Lemma 4.5, we get
$$\sum_{n=1}^{\infty} P\left(\left|\int x^k\,d\mu_n - \int x^k\,d\bar{\mu}_n\right| > \epsilon\right) \le \frac{1}{\epsilon^2} \sum_{n=1}^{\infty} E\left[\left(\int x^k\,d\mu_n - E\left[\int x^k\,d\mu_n\right]\right)^2\right] \le C_1 + \frac{1}{\epsilon^2} \sum_{n=1}^{\infty} \frac{C}{n^2} < \infty.$$
The constant $C_1$ accounts for the finitely many initial terms, since the inequality in Lemma 4.5 holds only when $n$ is sufficiently large. Now, by the Borel-Cantelli Lemma (Lemma 4.6), we immediately get
$$P\left(\limsup_{n\to\infty} \left\{\left|\int x^k\,d\mu_n - \int x^k\,d\bar{\mu}_n\right| > \epsilon\right\}\right) = 0.$$
This means that $\left|\int x^k\,d\mu_n - \int x^k\,d\bar{\mu}_n\right| \to 0$ as $n \to \infty$ almost surely.

Now, again assuming $f$ is compactly supported on $[-B, B]$ and taking the polynomial $p$ approximating $f$ on $[-B, B]$, we get, as in the proof of the main theorem,
$$\left|\int f\,d\mu_n - \int f\,\sigma(x)\,dx\right| \le \int |f - p|\,\mathbf{1}_{\{|x|\le B\}}\,d\mu_n + \int |f - p|\,\mathbf{1}_{\{|x|\le B\}}\,\sigma(x)\,dx + \int |p|\,\mathbf{1}_{\{|x|>B\}}\,d\mu_n + \left|\int p\,d\bar{\mu}_n - \int p\,\sigma(x)\,dx\right| + \left|\int p\,d\mu_n - \int p\,d\bar{\mu}_n\right|.$$
By the choice of $p$, the first two terms can be made arbitrarily small. The third term can be made arbitrarily small almost surely by the moment estimates above, the fourth tends to 0 deterministically, and the last converges to 0 almost surely. Therefore, for every bounded continuous function $f$, $\int f\,d\mu_n - \int f\,\sigma(x)\,dx \to 0$ almost surely, which means the empirical spectral distribution converges weakly, almost surely, to the semicircle distribution.
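The moment method behind the proof can be checked numerically: already for a single large normalized Wigner matrix, the empirical even moments $\int x^{2k}\,d\mu_n$ are close to the Catalan numbers of Lemma 4.4. A minimal Python/NumPy sketch (our own, using Gaussian entries as in Example 2.3):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Normalized Wigner matrix with standard normal entries (Definition 2.1).
upper = np.triu(rng.standard_normal((n, n)), 1)
X = (upper + upper.T + np.diag(rng.standard_normal(n))) / np.sqrt(n)
eigs = np.linalg.eigvalsh(X)

# Empirical even moments int x^{2k} d mu_n = (1/n) sum_j lambda_j^{2k};
# by Lemmas 4.2 and 4.4 these should approach C_1 = 1, C_2 = 2, C_3 = 5.
m2, m4, m6 = (float(np.mean(eigs ** (2 * k))) for k in (1, 2, 3))
print(m2, m4, m6)
```

Lemma 4.5 explains why a single sample suffices here: the variance of each empirical moment decays like $1/n^2$, so the fluctuations at $n = 1000$ are already tiny.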

4 Technical Theorems and Lemmas

Theorem 4.1 (Weierstrass Approximation Theorem): If $f$ is a continuous, real-valued function on $[a, b]$ and any $\epsilon > 0$ is given, there exists a polynomial $p$ such that $|f(x) - p(x)| < \epsilon$ for all $x \in [a, b]$. In other words, any continuous function on a closed and bounded interval can be uniformly approximated on that interval by polynomials to any degree of accuracy.

Lemma 4.2: For any positive integer $k$, $\int x^k\,d\bar{\mu}_n$ converges (deterministically) to $\int x^k\,\sigma(x)\,dx$ as $n \to \infty$.

Lemma 4.3: Let $\epsilon > 0$ be given and $k$ be a positive integer. Then
$$\lim_{n\to\infty} P\left(\left|\int x^k\,d\mu_n - \int x^k\,d\bar{\mu}_n\right| > \epsilon\right) = 0.$$

Lemma 4.4: The moments of the semicircle law are given by
$$\int x^k\,\sigma(x)\,dx = \begin{cases} C_{k/2} & k \text{ even}, \\ 0 & k \text{ odd}, \end{cases}$$
where $C_n$ denotes the $n$-th Catalan number, namely $C_n = \frac{1}{n+1}\binom{2n}{n}$.

Lemma 4.5: Let $X_n$ be a normalized Wigner matrix with empirical spectral distribution $\mu_n$. Then for every fixed $k$, there exists a constant $C$ not depending on $n$ such that
$$E\left[\left(\int x^k\,d\mu_n - E\left[\int x^k\,d\mu_n\right]\right)^2\right] \le \frac{C}{n^2}$$
for sufficiently large $n$.

Lemma 4.6 (Borel-Cantelli Lemma): Let $\{E_n\}_{n=1}^{\infty}$ be a sequence of events in some probability space. If
$$\sum_{n=1}^{\infty} P(E_n) < \infty, \quad \text{then} \quad P\left(\limsup_{n\to\infty} E_n\right) = 0.$$
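Lemma 4.4 is easy to verify numerically: integrating $x^k\,\sigma(x)$ over $[-2, 2]$ with a simple midpoint rule reproduces the Catalan numbers for even $k$ and 0 for odd $k$. A short Python sketch (the helper names are ours):

```python
import numpy as np
from math import comb, pi

def catalan(k):
    # C_k = (1/(k+1)) * binom(2k, k); always an integer.
    return comb(2 * k, k) // (k + 1)

def semicircle_moment(k, steps=200_000):
    # Midpoint rule for int_{-2}^{2} x^k * (1/(2*pi)) * sqrt(4 - x^2) dx.
    dx = 4.0 / steps
    xs = -2.0 + (np.arange(steps) + 0.5) * dx
    return float(np.sum(xs ** k * np.sqrt(4.0 - xs ** 2)) * dx / (2.0 * pi))

moments = [semicircle_moment(k) for k in range(7)]
print(moments)  # even entries near 1, 1, 2, 5; odd entries near 0
```

The odd moments vanish by symmetry of the integrand, which the symmetric midpoint grid captures almost exactly.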

Empirical Results

Figure 1: Eigenvalues of a Gaussian random matrix in the complex plane (n = 100 and n = 1000)

Figure 2: Eigenvalues of a Gaussian random matrix in the complex plane (n = 2000 and n = 4000)
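Figures 1 and 2 illustrate a different phenomenon from the semicircle law: the eigenvalues of a non-symmetric Gaussian matrix normalized by $\sqrt{n}$ are complex and spread out over the unit disk (the circular law). A Python analogue of the Mathematica experiment in Section 5 (our own sketch, not the report's code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Non-symmetric Gaussian matrix scaled by 1/sqrt(n): eigenvalues are complex.
A = rng.standard_normal((n, n)) / np.sqrt(n)
eigs = np.linalg.eigvals(A)

# Empirically, the spectrum fills the unit disk in the complex plane.
radius = float(np.abs(eigs).max())
inside = float(np.mean(np.abs(eigs) <= 1.05))
print(radius, inside)
```

Plotting `eigs.real` against `eigs.imag` together with the unit circle reproduces the scatter plots in Figures 1 and 2.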

Figure 3: Eigenvalues of Wigner matrices, n = 10, 100, and 1000

Figure 4: Eigenvalues of a collection of 1, 10, and 100 Wigner matrices with fixed matrix size n = 10

5 Mathematica Code of Simulations

Normalized Symmetric Random Matrix

(* Entries are scaled to variance 1/n but not mean-centered, so distributions
   with nonzero mean (e.g. Exponential) are shifted relative to Definition 2.1. *)
RandomSymmetricMatrix[dist_, n_] :=
  Module[{mat = (1/(Sqrt[n*Variance[dist]]))*RandomVariate[dist, {n, n}]},
    UpperTriangularize[mat, 1] + Transpose[UpperTriangularize[mat]]]

dists = {StudentTDistribution[4], NormalDistribution[],
   LaplaceDistribution[0, 1], ExponentialDistribution[1]};
ev = Eigenvalues[RandomSymmetricMatrix[#, 10^2]] & /@ dists;
edist = EstimatedDistribution[#, WignerSemicircleDistribution[r]] & /@ ev;

h[dist_, data_, i_] :=
  Histogram[data, 20, "PDF",
    ChartStyle -> (ColorData["Gradients"][[RandomInteger[{1, 51}]]]),
    BaseStyle -> {FontFamily -> "Verdana"}, PlotLabel -> dists[[i]],
    PlotRange -> {0, 1.5 PDF[dist, 0]}, ImageSize -> 280,
    Epilog -> Inset[Framed[
        Style[Grid[{{"Estimated Distribution:"}, {dist}}], 10],
        RoundingRadius -> 10, FrameStyle -> GrayLevel@0.3,
        Background -> LightOrange],
      {Right, 1.45 PDF[dist, 0]}, {Right, Top}]]

distplot[dist_, data_] :=
  Plot[PDF[dist, x], {x, Min[data], Max[data]}, PlotStyle -> {Thick, Black}]

Grid[Partition[
  Table[Show[h[edist[[i]], ev[[i]], i], distplot[edist[[i]], ev[[i]]]], {i, 4}], 2]]

Normalized Gaussian Random Matrix

n = {100, 1000, 2000, 4000};
dist = NormalDistribution[];
GaussianRandomMatrix = (1/(Sqrt[#*Variance[dist]]))*RandomVariate[dist, {#, #}] & /@ n;
ev = Eigenvalues[#] & /@ GaussianRandomMatrix;
Table[Show[
  ListPlot[{Re[#], Im[#]} & /@ ev[[i]], AspectRatio -> 1],
  Graphics@Circle[{0, 0}, 1]], {i, 4}]

Collection of Normalized Symmetric Random Matrices

RandomSymmetricMatrix[dist_, n_] :=
  Module[{mat = (1/(Sqrt[n*Variance[dist]]))*RandomVariate[dist, {n, n}]},
    UpperTriangularize[mat, 1] + Transpose[UpperTriangularize[mat]]]

dists = NormalDistribution[];
collection = Table[Eigenvalues[RandomSymmetricMatrix[dists, 10]], {i, 100}];
ev = Flatten[collection];
edist = EstimatedDistribution[ev, WignerSemicircleDistribution[r]];

h[dist_, data_] :=
  Histogram[data, 20, "PDF",
    ChartStyle -> (ColorData["Gradients"][[RandomInteger[{1, 51}]]]),
    BaseStyle -> {FontFamily -> "Verdana"}, PlotLabel -> dists,
    PlotRange -> {0, 1.5 PDF[dist, 0]}, ImageSize -> 280,
    Epilog -> Inset[Framed[
        Style[Grid[{{"Estimated Distribution:"}, {dist}}], 10],
        RoundingRadius -> 10, FrameStyle -> GrayLevel@0.3,
        Background -> LightOrange],
      {Right, 1.45 PDF[dist, 0]}, {Right, Top}]]

distplot[dist_, data_] :=
  Plot[PDF[dist, x], {x, Min[data], Max[data]}, PlotStyle -> {Thick, Black}]

Show[h[edist, ev], distplot[edist, ev]]

6 References

Feier, A. R. (2012). Methods of Proof in Random Matrix Theory. Unpublished.

Weisstein, Eric W. "Weierstrass Approximation Theorem." From MathWorld, A Wolfram Web Resource. http://mathworld.wolfram.com/weierstrassapproximationtheorem.html

Anderson, G. W., Guionnet, A., Zeitouni, O. An Introduction to Random Matrices, pp. 1-5.