Random matrix theory and log-correlated Gaussian fields


1 Random matrix theory and log-correlated Gaussian fields. Nick Simm. Collaborators: Yan Fyodorov and Boris Khoruzhenko (Queen Mary, London). Mathematics Institute, Warwick University, Coventry, UK. XI Brunel-Bielefeld RMT Workshop, December 2015. Research supported by a Leverhulme ECF fellowship.

2 Outline

3 Contents Introduction Regularization 1 Regularization 2 Distribution of the maximum value

4 Log-correlated Gaussian fields: what and why?

9 Log-correlated Gaussian fields: what and why? A log-correlated Gaussian field is a random distribution V(x) with Gaussian finite-dimensional distributions and a covariance kernel of the form E(V(x)V(x')) ~ -log|x - x'| as x' → x. Why: Central object of Kahane's theory of multiplicative chaos: e^{V(x)}. Liouville quantum gravity, e.g. Rhodes, Vargas, et al. 2015. Non-trivial extreme value statistics relating to branching Brownian motions and Gaussian free fields: Ding, Roy, Zeitouni 2015. Connections to random matrix theory and number theory: Fyodorov, Keating, Le Doussal, Arguin, Belius, Harper, Bourgade, Webb, ...

10 Random matrices

14 Random matrices A random Hermitian matrix H of size N x N taken from, e.g., the Gaussian Unitary Ensemble: P(H) = (1/C) e^{-2N Tr(H²)}. Wigner's semicircle law (1955): lim_{N→∞} (1/N) R_1(x) = (2/π) √(1 - x²) for |x| ≤ 1, and 0 for |x| > 1. In this talk we study the characteristic polynomial of H: p_N(x) = det(xI - H). Viewpoint: p_N(x) is a sequence (N = 1, 2, 3, ...) of stochastic processes in x.
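
A minimal numerical sketch of this setup (my own, not from the talk): under the normalization P(H) ∝ e^{-2N Tr(H²)} the entry variances below are 1/(4N), which puts the semicircle on [-1, 1], and the characteristic polynomial can then be evaluated on a grid as a random function of x.

```python
import numpy as np

def sample_gue(N, rng):
    """Draw H from the GUE normalized as P(H) ~ exp(-2N Tr H^2),
    so the semicircle law is supported on [-1, 1]."""
    # off-diagonal entries: E|H_ij|^2 = 1/(4N); diagonal: Var(H_ii) = 1/(4N)
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / (4.0 * np.sqrt(N))
    H = A + A.conj().T
    np.fill_diagonal(H, rng.standard_normal(N) / (2.0 * np.sqrt(N)))
    return H

rng = np.random.default_rng(0)
N = 50
lam = np.linalg.eigvalsh(sample_gue(N, rng))     # eigenvalues, roughly semicircular on [-1, 1]

# the characteristic polynomial p_N(x) = det(xI - H) as a random function of x
x = np.linspace(-1.0, 1.0, 400)
log_abs_pN = np.array([np.log(np.abs(xi - lam)).sum() for xi in x])
print(lam.min(), lam.max())                      # close to -1 and +1 already for N = 50
```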

15 Peaks and troughs of the characteristic polynomial Figure: a plot of a single realization of |p_N(x)| e^{-E log|p_N(x)|} for N = 50. Above: a random curve (stochastic process) depending on N. Is there a limit as N → ∞?

16 Larger N: logarithmic scale Figure: a plot of a single realization of log|p_N(x)| - E log|p_N(x)| for a larger value of N.

17 Colours of noise

22 Colours of noise A time signal V(t) can be described by a spectral density S(ω) = lim_{T→∞} (1/T) E |∫_{-T}^{T} V(t) e^{-iωt} dt|². For example (figure: sample paths V(t) and log-log spectra): white noise, S(f) = 1/f^0; pink noise, S(f) = 1/f; red noise, S(f) = 1/f². A well-known example is the random Fourier series V(t) = Σ_{k=1}^∞ √2 sin(kπt)/(kπ) X_k, with X_k i.i.d. standard Gaussians, which represents a Brownian bridge on [0, 1].
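
The random Fourier series on the slide can be sampled directly; a small self-contained sketch (mine, not the talk's), truncating the series at a finite number of modes:

```python
import numpy as np

def brownian_bridge_series(t, n_modes, rng):
    """Partial sum of V(t) = sum_k sqrt(2) sin(k*pi*t)/(k*pi) X_k, X_k iid N(0,1);
    for large n_modes this approximates a Brownian bridge on [0, 1]."""
    k = np.arange(1, n_modes + 1)
    X = rng.standard_normal(n_modes)
    return (np.sqrt(2.0) * np.sin(np.pi * np.outer(t, k)) / (np.pi * k)) @ X

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 500)
path = brownian_bridge_series(t, n_modes=2000, rng=rng)
print(path[0], path[-1])    # both exactly 0: the bridge is pinned at the endpoints
```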

23 Characteristic function of log p N (x)

27 Characteristic function of log|p_N(x)| Study the statistics in Fourier space: φ_N(x, α) = E(e^{Σ_{k=1}^K α_k log|p_N(x_k)|}). Diagonalize H = UDU*: the above is equal to c^{-1} ∫_R ··· ∫_R Π_{j=1}^N w_K(λ_j) Π_{1≤i<j≤N} (λ_i - λ_j)² dλ_1 ··· dλ_N. The weight (symbol) has Fisher-Hartwig type singularities: w_K(λ) = e^{-2Nλ²} Π_{k=1}^K |λ - x_k|^{α_k}, Re(α_k) > -1/2. Hankel determinant: φ_N(x, α) ∝ det_{N×N} { ∫_R λ^{i+j} w_K(λ) dλ }_{i,j=0}^{N-1}.
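
Behind the Hankel representation is the standard Andreief/Heine-type identity: the expectation equals the ratio of the Hankel determinant built from w_K to the one built from the pure Gaussian weight. A brute-force numerical check for a small matrix size (my own sketch; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def sample_gue(N, rng):
    # GUE with P(H) ~ exp(-2N Tr H^2): semicircle supported on [-1, 1]
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / (4.0 * np.sqrt(N))
    H = A + A.conj().T
    np.fill_diagonal(H, rng.standard_normal(N) / (2.0 * np.sqrt(N)))
    return H

N = 6                                 # small, so the Hankel determinants stay well conditioned
xs, alphas = [0.3, -0.4], [0.8, 0.6]  # singularity locations x_k and exponents alpha_k

def weight(lam, with_singularities):
    w = np.exp(-2.0 * N * lam ** 2)
    if with_singularities:
        for xk, ak in zip(xs, alphas):
            w *= np.abs(lam - xk) ** ak
    return w

def hankel_det(with_singularities):
    mom = [quad(lambda l, m=m: l ** m * weight(l, with_singularities),
                -3.0, 3.0, points=xs, limit=200)[0] for m in range(2 * N - 1)]
    return np.linalg.det(np.array([[mom[i + j] for j in range(N)] for i in range(N)]))

ratio = hankel_det(True) / hankel_det(False)   # should equal E prod_k |det(x_k I - H)|^{alpha_k}

rng = np.random.default_rng(2)
mc = np.mean([np.prod([np.abs(np.linalg.det(xk * np.eye(N) - sample_gue(N, rng))) ** ak
                       for xk, ak in zip(xs, alphas)]) for _ in range(20000)])
print(ratio, mc)   # the determinant ratio and the Monte Carlo estimate should roughly agree
```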

28 Asymptotics of the Hankel determinant

34 Asymptotics of the Hankel determinant Krasovsky (2007) computed the asymptotics of this determinant: E(e^{Σ_{k=1}^K 2α_k log|p_N(x_k)|}) = Π_{k=1}^K C(α_k) (1 - x_k²)^{α_k²/2} (N/2)^{α_k²} e^{(2x_k² - 1)α_k N} · Π_{1≤i<j≤K} (2|x_i - x_j|)^{-2α_i α_j} · [1 + O(log N / N)]. What does this tell us about the statistics of log|p_N(x)|? Quadratic α_k² in the exponential: Gaussianity. The |x_i - x_j| factor: O(1) logarithmic correlations. The factor (N/2)^{α_k²}: the variance diverges like log(N). So the limit does not exist! Unless one rescales by √(log N)... but this kills the subtle correlations log|x_i - x_j|. How can we regularize?
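
One way to make this reading explicit (not on the slide; a standard cumulant-matching step, under the sign conventions used above): set s_k := 2α_k and compare the logarithm of the left-hand side with the Gaussian cumulant expansion.

```latex
% Gaussian cumulant expansion of the Laplace transform, with s_k := 2\alpha_k
\log \mathbb{E}\, e^{\sum_k s_k \log|p_N(x_k)|}
  = \sum_k s_k\, \mathbb{E}\log|p_N(x_k)|
  + \tfrac12 \sum_k s_k^2\, \mathrm{Var}\big(\log|p_N(x_k)|\big)
  + \sum_{i<j} s_i s_j\, \mathrm{Cov}\big(\log|p_N(x_i)|, \log|p_N(x_j)|\big) + \dots
% Matching the (N/2)^{\alpha_k^2} and (2|x_i-x_j|)^{-2\alpha_i\alpha_j} factors then gives
\mathrm{Var}\big(\log|p_N(x)|\big) \approx \tfrac12 \log\tfrac{N}{2}, \qquad
\mathrm{Cov}\big(\log|p_N(x_i)|, \log|p_N(x_j)|\big) \approx -\tfrac12 \log\big(2|x_i - x_j|\big).
```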

35 Regularizing the logarithmic divergence

37 Regularizing the logarithmic divergence Regularization 1: We embed the characteristic polynomial in a space of distributions, obtaining a limiting object which is a random generalized function. The latter will be a Gaussian field on a function space, with logarithmic correlations. Here the convergence takes place on global scales (all eigenvalues are involved). Regularization 2: Instead of taking x ∈ [-1,1] we go slightly complex: x → z = x + (t + iη)/d_N for some large parameter d_N and η > 0. This leads to a well-defined Gaussian process in the limit (no normalization necessary). Here the limit takes place on mesoscopic scales (only a fraction of the eigenvalues are involved).

38 Contents Introduction Regularization 1 Regularization 2 Distribution of the maximum value

39 Theorem: Weak convergence in a Sobolev space

42 Theorem: Weak convergence in a Sobolev space Let V_N(x) = log|det(x - H)| and x ∈ (-1,1). Act on test functions: V_N[f] := Σ_{k=0}^∞ c_k(V_N) c_k(f), f ∈ W^{(-a)}, where the c_k(f) are Chebyshev coefficients, c_k(f) := (2/π) ∫_{-1}^{1} T_k(x) f(x) (1 - x²)^{-1/2} dx, T_k(x) = cos(k cos^{-1}(x)), and W^{(a)} := {f ∈ L²[-1,1] : Σ_k |c_k(f)|² (1 + k²)^a < ∞}. Theorem (Fyodorov, Khoruzhenko and Simm '13): Let X_1, X_2, ... be i.i.d. standard Gaussians. Then, as N → ∞, log|det(x - H)| - E[...] converges to Σ_{k=1}^∞ (X_k/√k) T_k(x) weakly in W^{(a)} with a < -1/2.
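
The Chebyshev coefficients in the theorem are convenient to compute via the substitution x = cos θ, which turns the weighted integral into a plain cosine integral; a small self-contained sketch (mine, not the talk's):

```python
import numpy as np

def chebyshev_coeff(f, k, n_grid=4000):
    """c_k(f) = (2/pi) * int_{-1}^{1} T_k(x) f(x) / sqrt(1 - x^2) dx,
    computed with x = cos(theta) and a midpoint rule on (0, pi)."""
    theta = (np.arange(n_grid) + 0.5) * np.pi / n_grid
    return 2.0 * np.mean(np.cos(k * theta) * f(np.cos(theta)))

# sanity check: for f = T_3 the coefficients should be the unit vector e_3
T3 = lambda x: 4.0 * x ** 3 - 3.0 * x
print([round(chebyshev_coeff(T3, k), 6) for k in range(6)])   # ~ [0, 0, 0, 1, 0, 0]
```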

43 Properties of the limiting object

46 Properties of the limiting object The limiting object V ∈ W^{(a)} is log-correlated: E(V(x)V(x')) = Σ_{k=1}^∞ T_k(x) T_k(x')/k = -(1/2) log(2|x - x'|). It acts on test functions via V[f] = Σ_{k=1}^∞ (X_k/√k) c_k(f), f ∈ W^{(-a)}. Thus V is the centered Gaussian field on W^{(a)} with covariance operator specified by E(V[f]V[g]) = -(1/2) ∫∫ log(2|x - y|) f(x) g(y) µ(dx) µ(dy), where µ(dx) = (1 - x²)^{-1/2} dx, i.e. V has logarithmic correlations.
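
The closed form of the covariance comes from the classical series Σ_{k≥1} cos(kθ)cos(kφ)/k; a quick numerical check of the identity as stated above (my own check, truncating the series):

```python
import numpy as np

def cov_partial_sum(x, y, n_terms=200_000):
    # sum_{k=1}^{n_terms} T_k(x) T_k(y) / k, using T_k(cos t) = cos(k t)
    k = np.arange(1, n_terms + 1)
    return np.sum(np.cos(k * np.arccos(x)) * np.cos(k * np.arccos(y)) / k)

x, y = 0.3, -0.55
print(cov_partial_sum(x, y), -0.5 * np.log(2.0 * abs(x - y)))   # should nearly agree
```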

47 Idea of proof:

51 Idea of proof: (Step 1) Convergence of finite-dimensional distributions: We expand the logarithm, log|det(x - H)| = -Σ_{k=1}^∞ T_k(x) X_k(H)/√k, where (with λ_1, ..., λ_N denoting the eigenvalues of H) X_k(H) := (2/√k) Σ_{j=1}^N T_k(λ_j) + (2/√k) Σ_{j=1}^N (1 - χ_{[-1,1]}(λ_j)) (T_k(λ_j) - (λ_j - √(λ_j² - 1))^k). The first term converges as N → ∞ to i.i.d. standard Gaussians X_k (Johansson '98). We prove that the remainder → 0 in probability. (Step 2) Tightness: We prove the estimate lim_{N→∞} Σ_{k=0}^∞ (1 + k²)^a Var(X_k(H)) < ∞, a < -1/2.

52 Contents Introduction Regularization 1 Regularization 2 Distribution of the maximum value

53 Fractional Brownian motion

58 Fractional Brownian motion Let B(ds) be the Gaussian white noise measure with E(B(ds)) = 0 and E(B(ds_1)B(ds_2)) = δ(s_1 - s_2) ds_1 ds_2. For 0 < H < 1, we define fBm as (see Taqqu et al. '03) B_H(t) = (1/C) ∫_0^∞ [e^{iωt} - 1] ω^{-(H+1/2)} dB̃(ω) + c.c., where dB̃(ω) = dB_1(ω) + i dB_2(ω). From the definition one sees that B_H(t) is a zero-mean Gaussian field with E[B_H(t)B_H(s)] = (σ²/2)(|t|^{2H} + |s|^{2H} - |t - s|^{2H}). Self-similarity: B_H(at) =_d a^H B_H(t). The parameter H is the Hurst index. Loosely speaking, one may say that fBm has spectral density S(ω) ~ ω^{-2H-1}. This motivates us to consider the limit H → 0. Undefined!

59 The regularization of fbm for H 0

62 The regularization of fbm for H → 0 Solution: For η > 0, introduce the 1-parameter extension B_H^(η)(t) = ∫_0^∞ e^{-ηω} ω^{-(H+1/2)} [e^{iωt} - 1] dB̃(ω) + c.c. For fixed η > 0, the limit H → 0 is well defined and yields a particular regularization of 1/f-noise. We denote it B_0^(η)(t). Properties: B_0^(η)(t) is a zero-mean Gaussian process; it has stationary increments; and it is self-similar: B_0^(Tη)(Tt) =_d B_0^(η)(t). Covariance structure: E((B_0^(η)(t_1) - B_0^(η)(t_2))²) = log((t_1 - t_2)²/η² + 1). Regularized log-correlations on scale η > 0.
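
A crude way to simulate both objects is to discretize the frequency integral in the spectral representation; the sketch below is mine (overall constants are not tracked, and the frequency grid is an assumption), with η = 0 and 0 < H < 1 giving plain fBm, and η > 0, H = 0 giving the regularization B_0^(η).

```python
import numpy as np

def sample_spectral(t, H, eta, rng, omega_max=200.0, n_omega=20000):
    """Discretize  int_0^inf e^{-eta w} w^{-(H+1/2)} (e^{i w t} - 1) dB(w) + c.c.
    on a uniform frequency grid; constants/normalization are not tracked."""
    w = (np.arange(n_omega) + 0.5) * (omega_max / n_omega)
    dw = omega_max / n_omega
    dB = (rng.standard_normal(n_omega) + 1j * rng.standard_normal(n_omega)) * np.sqrt(dw)
    kernel = np.exp(-eta * w) * (np.exp(1j * np.outer(t, w)) - 1.0) / w ** (H + 0.5)
    return 2.0 * np.real(kernel @ dB)      # "+ c.c." doubles the real part

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 100)
path_fbm = sample_spectral(t, H=0.7, eta=0.0, rng=rng)    # fractional Brownian motion
path_h0  = sample_spectral(t, H=0.0, eta=0.1, rng=rng)    # regularized 1/f noise, H = 0
```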

63 Scaling regimes in Random Matrix Theory

67 Scaling regimes in Random Matrix Theory Let λ_1, ..., λ_N be the eigenvalues of H. The results of the previous few slides describe the limiting law of X_N := Σ_{j=1}^N f(d_N(λ_j - x)) - E[...] for specific f, with d_N = 1, x = 0. Three distinct scales: Global regime: d_N = 1. Many results available; one expects Gaussian fluctuations of X_N as N → ∞. Mesoscopic regime: 1 ≪ d_N ≪ N, e.g. d_N = N^α with 0 < α < 1. See Erdős and Knowles '13, Duits and Johansson '13, Bourgade et al. '14, Breuer and Duits '14, Lodhia and Simm '15, Johansson and Lambert '15, Lambert '15 (see his poster). Local regime: d_N = N. The CLT fails; not enough eigenvalues.

71 Theorem: Mesoscopic regime and fbm Regularization 2: Shift the argument of p_N(x): x → z_t := x + (t + iη)/d_N with x, t, η fixed and η > 0. Define W_N^(η)(t) := -log|p_N(z_t)| + log|p_N(z_0)|. Theorem (Fyodorov, Khoruzhenko and Simm '13): Let d_N → ∞ with d_N = o(N/log(N)). After centering W̃ := W - E(W), we obtain the convergence in law (W̃_N^(η)(t_1), ..., W̃_N^(η)(t_k)) →_d (B_0^(η)(t_1), ..., B_0^(η)(t_k)) as N → ∞, where B_0^(η)(t) is our regularization of fBm with H = 0: B_0^(η)(t) = ∫_0^∞ (e^{-ηω}/√ω) [e^{itω} - 1] dB̃(ω) + c.c.

72 The Fourier decomposition

76 The Fourier decomposition An exact identity rewrites W_N^(η)(t) in the form of a Fourier integral: W_N^(η)(t) = -log|det(H - E - (t + iη)/d_N)| + log|det(H - E - iη/d_N)| = ∫_0^∞ (e^{-ηω}/√ω) [e^{itω} - 1] b_N(ω) dω + c.c., where the b_N(ω) are Fourier coefficients, b_N(ω) = (1/√ω) Tr(e^{iωd_N(H - E)}) = (1/√ω) Σ_{j=1}^N e^{iωd_N(λ_j - E)}. Our definition of the limit object was just B_0^(η)(t) = ∫_0^∞ (e^{-ηω}/√ω) [e^{itω} - 1] dB̃(ω) + c.c. We are motivated to prove the following result.

77 White noise limit for Fourier coefficients

80 White noise limit for Fourier coefficients Let U ∈ U(N) with Haar measure. Recall: Theorem (Diaconis and Shahshahani '94): Let b_N(k) = (1/√k) Tr(U^k). Then for any fixed k_1, ..., k_m, we have (b_N(k_1), ..., b_N(k_m)) → Z := (Z_1, ..., Z_m) as N → ∞, where Z is a discrete white noise. Theorem (Fyodorov, Khoruzhenko, Simm '13): We defined b_N(ω) = (1/√ω) Tr(e^{iωd_N(H - x)}). Let x ∈ (-1,1) and set d_N = N^α with 0 < α < 1. Then we have the convergence in distribution (b_N(ω_1), ..., b_N(ω_m)) → {dB̃(ω_1), ..., dB̃(ω_m)} as N → ∞. Continuum (mesoscopic) analogue of Diaconis-Shahshahani.
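
The Diaconis-Shahshahani statement is easy to test numerically by sampling Haar unitaries via QR of a complex Ginibre matrix (with the usual phase correction); a sketch of my own, not from the talk:

```python
import numpy as np

def haar_unitary(N, rng):
    """Sample U from Haar measure on U(N): QR of a Ginibre matrix, phases fixed."""
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(4)
N, n_samples, k = 30, 3000, 4
b = np.array([np.sum(np.linalg.eigvals(haar_unitary(N, rng)) ** k) / np.sqrt(k)
              for _ in range(n_samples)])
# b_N(k) = Tr(U^k)/sqrt(k) is approximately a standard complex Gaussian:
# E|b|^2 ~ 1, with real and imaginary parts each of variance ~ 1/2
print(np.mean(np.abs(b) ** 2), np.var(b.real), np.var(b.imag))
```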

84 Universality of mesoscopic fluctuations - Wigner matrices Consider a general Wigner matrix W consisting of i.i.d. random variables with E W_ij = 0 and E|W_ij|² = 1. Assume: the distribution of W_ij has sub-Gaussian decay. Theorem (Lodhia and Simm '15): Fix E ∈ (-1 + δ, 1 - δ) and let z_t := E + (t + iη)/d_N. Assume that 1 ≪ d_N ≪ N^{1/3}. Then the sequence of random functions W_N^(η)(t) = -log|det(z_t - W)| + log|det(z_0 - W)| converges as N → ∞, in the sense of finite-dimensional distributions, to B_0^(η)(t): (W̃_N(t_1), ..., W̃_N(t_k)) → (B_0^(η)(t_1), ..., B_0^(η)(t_k)), k = 1, 2, 3, .... Boutet de Monvel, Khorunzhy: 1 ≪ d_N ≪ N^{1/8}. Four moments: If the W_ij match moments with the GUE to order 4, one gets all of 1 ≪ d_N ≪ N (using arguments of Tao and Vu '12).

85 Contents Introduction Regularization 1 Regularization 2 Distribution of the maximum value

86 Statistics of the maximum Now we ask a different question. What about the statistics of the highest peak of the characteristic polynomial? Figure: a single realization of |p_N(x)| e^{-E log|p_N(x)|}. What are the statistics of the maximum value? (see the red dot)

89 Analytical approach Step 1. Random partition function: Z_N(β) = ∫_{-1}^{1} e^{-βV_N(x)} ρ_sc(x) dx with potential V_N(x) = -2(log|p_N(x)| - E log|p_N(x)|). The free energy F(β) = -β^{-1} log Z_N(β) satisfies lim_{β→∞} F(β) = min_{x∈[-1,1]} V_N(x). Step 2. Moments of Z_N(β): E Z_N(β)^K = ∫_{[-1,1]^K} E(e^{-β Σ_{k=1}^K V_N(x_k)}) Π_{k=1}^K ρ_sc(x_k) dx_k. Integrand: use the Krasovsky '07 asymptotics (but...)
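
A rough numerical illustration of Step 1 for one GUE realization (my own sketch): the centering E log|p_N(x)| is replaced by its leading-order value N(x² - 1/2 - log 2) on (-1,1), so constants are off by lower-order terms, and the partition function is evaluated with the sign convention reconstructed above (e^{-βV_N} = e^{2β(log|p_N| - E log|p_N|)}).

```python
import numpy as np

def sample_gue(N, rng):
    # GUE with P(H) ~ exp(-2N Tr H^2): semicircle supported on [-1, 1]
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / (4.0 * np.sqrt(N))
    H = A + A.conj().T
    np.fill_diagonal(H, rng.standard_normal(N) / (2.0 * np.sqrt(N)))
    return H

rng = np.random.default_rng(5)
N = 400
lam = np.linalg.eigvalsh(sample_gue(N, rng))

x = np.linspace(-0.99, 0.99, 4000)
log_pN = np.array([np.log(np.abs(xi - lam)).sum() for xi in x])
mean_log_pN = N * (x ** 2 - 0.5 - np.log(2.0))        # leading-order E log|p_N(x)|
field = 2.0 * (log_pN - mean_log_pN)                  # approximately centered field, -V_N(x)

rho_sc = (2.0 / np.pi) * np.sqrt(1.0 - x ** 2)
beta = 0.5
Z = np.sum(np.exp(beta * field) * rho_sc) * (x[1] - x[0])   # Z_N(beta) by a Riemann sum
print("max of centered field:", field.max(), " vs 2 log N =", 2.0 * np.log(N))
print("Z_N(beta):", Z)
```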

93 Large N limit of the moments Inserting the Krasovsky '07 asymptotics: E(Z_N(β)^K) ≈ c_β^K N^{Kβ²} ∫_{[-1,1]^K} Π_{k=1}^K dx_k (1 - x_k²)^{a/2} Π_{1≤i<j≤K} |x_i - x_j|^{-2β²}. Selberg's integral! For β² < 1 the latter equals c_β^K N^{Kβ²} Π_{j=0}^{K-1} Γ(a + 1 - jβ²)² Γ(1 - (j + 1)β²) / Γ(2a + 2 - (K + j - 1)β²). Problem: the asymptotics are not uniform in x_1, ..., x_K. Claeys and Krasovsky '14, Claeys and Fahs '15: rigorous transition asymptotics for the determinants with K = 2. Main task: analytic continuation of Selberg's integral to complex K.

98 Freezing and temperature duality: β → 1/β The obtained analytic continuation of the moments turns out to be invariant under β → 1/β. Freezing hypothesis: Thermodynamic quantities which for β < 1 are invariant under β → 1/β retain, for all β > 1, their value at the critical point β = 1. The hypothesis is supported by heuristic reasoning, numerics and, more recently, by rigorous calculation, e.g. Subag and Zeitouni 2014, Arguin, Belius and Bourgade 2015. This philosophy allows us to obtain the β → ∞ limit by setting β = 1. Rigorously proving these conjectures represents an ongoing and significant mathematical challenge.

99 Conjecture

101 Conjecture Consider the random variable V_N := max_{x∈[-1,1]} (2 log|p_N(x)| - E 2 log|p_N(x)|). Conjecture (Fyodorov and Simm '15): We have the convergence in distribution V_N - 2 log(N) + (3/2) log(log(N)) →_d u as N → ∞, where u is a continuous random variable characterized by E(e^{su}) = (1/C) 2^{4s} Γ(s + 1) Γ(s + 3) G(s + 7/2)² / (π^s G(s + 1) G(s + 6)), and G(s) is the Barnes G-function, satisfying G(s + 1) = Γ(s) G(s), G(1) = 1.

104 Recent developments: Unitary matrices Theorem (Arguin, Belius and Bourgade '15. Freezing.): Let U ∈ U(N) be chosen from the unitary group with Haar measure and P_N(θ) = det(e^{iθ} - U). Then the following limits hold in probability: lim_{N→∞} (1/(β log N)) log( (N/2π) ∫_0^{2π} |P_N(θ)|^{2β} dθ ) = β + 1/β for β < 1, and = 2 for β ≥ 1; and lim_{N→∞} max_{θ∈[0,2π]} 2 log|P_N(θ)| / log(N) = 2. Open questions: The correction term (3/2) log(log(N))? The fluctuating O(1) term?
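
An empirical look at the maximum for a single Haar unitary matrix (my own sketch; convergence in N is slow, so the numbers only agree roughly). The centering printed alongside is the conjectured one discussed as an open question on the slide, 2 log N - (3/2) log log N, up to an O(1) fluctuation.

```python
import numpy as np

def haar_unitary(N, rng):
    # Haar measure on U(N) via QR of a complex Ginibre matrix, phases fixed
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2.0)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(6)
N = 300
phases = np.angle(np.linalg.eigvals(haar_unitary(N, rng)))    # eigenangles of U

theta = np.linspace(0.0, 2.0 * np.pi, 8000, endpoint=False)
log_PN = np.array([np.log(np.abs(np.exp(1j * th) - np.exp(1j * phases))).sum()
                   for th in theta])   # log|P_N(theta)|; for CUE, E log|P_N(theta)| = 0

print("max 2 log|P_N|        :", 2.0 * log_PN.max())
print("2 log N - 1.5 loglog N:", 2.0 * np.log(N) - 1.5 * np.log(np.log(N)))
```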

105 Y.V. Fyodorov, B.A. Khoruzhenko and N.J. Simm. Fractional Brownian motion with Hurst index H = 0 and the Gaussian Unitary Ensemble. arXiv preprint. A. Lodhia and N.J. Simm. Mesoscopic linear statistics of Wigner matrices. arXiv preprint. Y.V. Fyodorov and N.J. Simm. On the distribution of the maximum of GUE characteristic polynomials. arXiv:1503.07110. Thank you!

106 Proof: Mesoscopic Riemann-Hilbert problem

109 Proof: Mesoscopic Riemann-Hilbert problem The proof is based on the RHP for orthogonal polynomials (extending Krasovsky's approach to the mesoscopic regime). Problem: control the error term R when the singularities come very close to the real axis. Figure: the N-dependent contour for the R Riemann-Hilbert problem. The RHP is very useful for resolving this construction down to almost optimal scales, d_N = o(N/log(N)).


More information

Meixner matrix ensembles

Meixner matrix ensembles Meixner matrix ensembles W lodek Bryc 1 Cincinnati April 12, 2011 1 Based on joint work with Gerard Letac W lodek Bryc (Cincinnati) Meixner matrix ensembles April 12, 2011 1 / 29 Outline of talk Random

More information

Local Kesten McKay law for random regular graphs

Local Kesten McKay law for random regular graphs Local Kesten McKay law for random regular graphs Roland Bauerschmidt (with Jiaoyang Huang and Horng-Tzer Yau) University of Cambridge Weizmann Institute, January 2017 Random regular graph G N,d is the

More information

Statistical signal processing

Statistical signal processing Statistical signal processing Short overview of the fundamentals Outline Random variables Random processes Stationarity Ergodicity Spectral analysis Random variable and processes Intuition: A random variable

More information

Chapter 29. Quantum Chaos

Chapter 29. Quantum Chaos Chapter 29 Quantum Chaos What happens to a Hamiltonian system that for classical mechanics is chaotic when we include a nonzero h? There is no problem in principle to answering this question: given a classical

More information

Determinantal point processes and random matrix theory in a nutshell

Determinantal point processes and random matrix theory in a nutshell Determinantal point processes and random matrix theory in a nutshell part II Manuela Girotti based on M. Girotti s PhD thesis, A. Kuijlaars notes from Les Houches Winter School 202 and B. Eynard s notes

More information

A determinantal formula for the GOE Tracy-Widom distribution

A determinantal formula for the GOE Tracy-Widom distribution A determinantal formula for the GOE Tracy-Widom distribution Patrik L. Ferrari and Herbert Spohn Technische Universität München Zentrum Mathematik and Physik Department e-mails: ferrari@ma.tum.de, spohn@ma.tum.de

More information

Primes, queues and random matrices

Primes, queues and random matrices Primes, queues and random matrices Peter Forrester, M&S, University of Melbourne Outline Counting primes Counting Riemann zeros Random matrices and their predictive powers Queues 1 / 25 INI Programme RMA

More information

NEW FUNCTIONAL INEQUALITIES

NEW FUNCTIONAL INEQUALITIES 1 / 29 NEW FUNCTIONAL INEQUALITIES VIA STEIN S METHOD Giovanni Peccati (Luxembourg University) IMA, Minneapolis: April 28, 2015 2 / 29 INTRODUCTION Based on two joint works: (1) Nourdin, Peccati and Swan

More information

Mod-φ convergence I: examples and probabilistic estimates

Mod-φ convergence I: examples and probabilistic estimates Mod-φ convergence I: examples and probabilistic estimates Valentin Féray (joint work with Pierre-Loïc Méliot and Ashkan Nikeghbali) Institut für Mathematik, Universität Zürich Summer school in Villa Volpi,

More information

Geometric RSK, Whittaker functions and random polymers

Geometric RSK, Whittaker functions and random polymers Geometric RSK, Whittaker functions and random polymers Neil O Connell University of Warwick Advances in Probability: Integrability, Universality and Beyond Oxford, September 29, 2014 Collaborators: I.

More information

Nonparametric Drift Estimation for Stochastic Differential Equations

Nonparametric Drift Estimation for Stochastic Differential Equations Nonparametric Drift Estimation for Stochastic Differential Equations Gareth Roberts 1 Department of Statistics University of Warwick Brazilian Bayesian meeting, March 2010 Joint work with O. Papaspiliopoulos,

More information

Optimal Estimation of a Nonsmooth Functional

Optimal Estimation of a Nonsmooth Functional Optimal Estimation of a Nonsmooth Functional T. Tony Cai Department of Statistics The Wharton School University of Pennsylvania http://stat.wharton.upenn.edu/ tcai Joint work with Mark Low 1 Question Suppose

More information

Digital Image Processing

Digital Image Processing Digital Image Processing 2D SYSTEMS & PRELIMINARIES Hamid R. Rabiee Fall 2015 Outline 2 Two Dimensional Fourier & Z-transform Toeplitz & Circulant Matrices Orthogonal & Unitary Matrices Block Matrices

More information

1 Tridiagonal matrices

1 Tridiagonal matrices Lecture Notes: β-ensembles Bálint Virág Notes with Diane Holcomb 1 Tridiagonal matrices Definition 1. Suppose you have a symmetric matrix A, we can define its spectral measure (at the first coordinate

More information

Dynamical approach to random matrix theory

Dynamical approach to random matrix theory Dynamical approach to random matrix theory László Erdős, Horng-Tzer Yau May 9, 207 Partially supported by ERC Advanced Grant, RAMAT 338804 Partially supported by the SF grant DMS-307444 and a Simons Investigator

More information

Gaussian Free Field in beta ensembles and random surfaces. Alexei Borodin

Gaussian Free Field in beta ensembles and random surfaces. Alexei Borodin Gaussian Free Field in beta ensembles and random surfaces Alexei Borodin Corners of random matrices and GFF spectra height function liquid region Theorem As Unscaled fluctuations Gaussian (massless) Free

More information

Random Toeplitz Matrices

Random Toeplitz Matrices Arnab Sen University of Minnesota Conference on Limits Theorems in Probability, IISc January 11, 2013 Joint work with Bálint Virág What are Toeplitz matrices? a0 a 1 a 2... a1 a0 a 1... a2 a1 a0... a (n

More information

arxiv: v2 [math.pr] 16 Aug 2014

arxiv: v2 [math.pr] 16 Aug 2014 RANDOM WEIGHTED PROJECTIONS, RANDOM QUADRATIC FORMS AND RANDOM EIGENVECTORS VAN VU DEPARTMENT OF MATHEMATICS, YALE UNIVERSITY arxiv:306.3099v2 [math.pr] 6 Aug 204 KE WANG INSTITUTE FOR MATHEMATICS AND

More information

Numerical analysis and random matrix theory. Tom Trogdon UC Irvine

Numerical analysis and random matrix theory. Tom Trogdon UC Irvine Numerical analysis and random matrix theory Tom Trogdon ttrogdon@math.uci.edu UC Irvine Acknowledgements This is joint work with: Percy Deift Govind Menon Sheehan Olver Raj Rao Numerical analysis and random

More information

SEA s workshop- MIT - July 10-14

SEA s workshop- MIT - July 10-14 Matrix-valued Stochastic Processes- Eigenvalues Processes and Free Probability SEA s workshop- MIT - July 10-14 July 13, 2006 Outline Matrix-valued stochastic processes. 1- definition and examples. 2-

More information