Low-temperature random matrix theory at the soft edge

Alan Edelman
Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA

Per-Olof Persson
Department of Mathematics, University of California, Berkeley, CA 94720, USA

Brian D. Sutton
Department of Mathematics, Randolph-Macon College, Ashland, VA 23005, USA

Abstract

Low-temperature random matrix theory is the study of random eigenvalues as energy is removed. In standard notation, β is identified with inverse temperature, and low temperatures are achieved through the limit β → ∞. In this paper, we derive statistics for low-temperature random matrices at the soft edge, which describes the extreme eigenvalues for many random matrix distributions. Specifically, new asymptotics are found for the expected value and standard deviation of the general-β Tracy-Widom distribution. The new techniques utilize beta ensembles, stochastic differential operators, and Riccati diffusions. The asymptotics fit known high-temperature statistics curiously well.
I. INTRODUCTION

With modern technology, a random matrix can be frozen, leaving a deterministic object that is often easier to study. As heat is reapplied, the matrix thaws, and the effects of randomness can be studied in a new way. This article focuses on the asymptotic regime β → ∞, reviewing prior work on finite beta ensembles and presenting new results on stochastic differential operators and Riccati diffusions. In the end, we gain a better understanding of the low-temperature regime and introduce methods that may apply generally to all β > 0.

With the publication of Dyson's "Statistical Theory of the Energy Levels of Complex Systems" in 1962 [10], a bifurcation of random matrix theory became inevitable. Dyson showed that the eigenvalues of random matrices obey the laws of a log gas from statistical mechanics, with the division algebra of the matrix entries determining the temperature of the gas. Specifically, the parameter β equaled inverse temperature: β = 1/(kT). While the random matrices seemed limited to three classes, as Dyson himself emphasized in his "Threefold Way" paper of the same year, the eigenvalues generalized naturally to any β > 0. A researcher might choose to study random matrices at β = 1, 2, 4 or their eigenvalues for any β > 0, but the choice would likely lead to totally different methods.

In more recent times, classical and general-β random matrix theory have been reunited by tridiagonal beta ensembles, stochastic differential operators, and Riccati diffusions. Tridiagonal beta ensembles, introduced by Dumitriu and Edelman [8], extend random matrices to general β > 0. Stochastic differential operators, introduced by Alan Edelman at the 2003 SIAM Conference on Applied Linear Algebra [11] and developed by Edelman and Sutton [12] and Ramírez, Rider, and Virág [15], are the n → ∞ continuum limits of random matrices.
Riccati diffusions for beta ensembles, introduced by Ramírez, Rider, and Virág [15], and the equivalent Sturm sequence characterization, discovered independently by Albrecht, Chan, and Edelman [1], transform the eigenvalue problems from second-order differential equations to first-order diffusion processes. With these new tools, we can work directly with general-β operators rather than disembodied general-β eigenvalues. Still, there is much work to be done. Although the operators have been extended, few classical methods have made the transition. In this article, we develop new methods for working in the β → ∞ regime. Specifically, we investigate two asymptotic expressions for the soft edge as β → ∞. The soft edge can be investigated by taking an n → ∞ limit of many different random matrix
distributions.

Figure 1. Largest-eigenvalue distributions f_β(x) for several values of β

For a concrete case, let G be an n-by-n matrix with iid real standard Gaussian entries, and let H = (1/2)(G + Gᵀ). Then H is a random symmetric matrix, and its distribution is called the Gaussian orthogonal ensemble (GOE) or the Hermite ensemble with β = 1. The soft edge appears as n approaches ∞ and the spectrum is recentered and rescaled to illuminate the largest eigenvalue. The limiting eigenvalue distribution is often called the Tracy-Widom distribution with β = 1 [17]. If G has complex or quaternion entries, then the Tracy-Widom distribution with β = 2 or β = 4 is realized [18]. Beta ensembles generalize the β = 1, 2, 4 triad to arbitrary β > 0. Tracy-Widom distributions for several values of β are shown in Figure 1. They were computed with the numerical routine of Bloemendal and Sutton [2].

We argue the following two asymptotic statistics, in which λ_k^β is the centered and scaled kth eigenvalue at the soft edge, Ai is the bounded Airy function, a_k is the kth zero of Ai, and G(x, x) is the diagonal of a Green's function defined in (7):

SD[λ_k^β] ∼ (2/√β) ( ∫₀^∞ ((1/Ai'(a_k)) Ai(x + a_k))⁴ dx )^{1/2},   β → ∞,   (1)

E[λ_k^β] ∼ a_k − (4/β) ∫₀^∞ G(x, x) ((1/Ai'(a_k)) Ai(x + a_k))² dx,   β → ∞.   (2)

Our methods support error terms of O(β⁻¹) and O(β^{−3/2}) for (1) and (2), respectively. For k = 1, we have the following numerics:
SD[λ₁^β] ∼ c₁ β^{−1/2},   β → ∞,   (3)

E[λ₁^β] ∼ a₁ + c₂ β⁻¹,   β → ∞,   (4)

where the numerical constants c₁ and c₂ come from evaluating the integrals in (1) and (2) at k = 1. These are plotted in Figure 2 and compared with known statistics for β = 1, 2, 4 [5]:

(Table: exact mean and standard deviation for β = 1, 2, 4 alongside the asymptotic predictions.)

Our normalization at β = 4 differs from that of Tracy and Widom by a factor of 2^{1/6} [2]. The asymptotics fit the known values remarkably well. The ideas behind (1) and (2) extend naturally to higher moments and higher-order asymptotics. We feel that their potential for further development is as exciting as the surprisingly good fits of Figure 2.

Equation (1) was found earlier by Dumitriu and Edelman [9]. Their argument, based on finite-dimensional general-β matrix models, is reviewed in Section II. Two new arguments, based on stochastic differential operators and Riccati diffusions, are given in Sections III and IV, respectively. Equation (2) is derived for the first time in Section III. The stochastic operator and Riccati diffusion methods appear to be novel.

II. FINITE BETA ENSEMBLES

Dumitriu and Edelman previously derived expression (1) for the standard deviation using their beta ensembles [9]. This section reviews their argument.
Figure 2. Mean and standard deviation asymptotics at the soft edge

The Gaussian orthogonal, unitary, and symplectic ensembles have joint eigenvalue density

const · e^{−(β/2) ∑_{i=1}^n λ_i²} ∏_{1 ≤ i < j ≤ n} |λ_i − λ_j|^β   (5)

for β = 1, 2, 4, respectively. The β-Hermite ensemble is the n-by-n random symmetric
tridiagonal matrix

H_β = (1/√(2β)) · tridiag( √2 G₁, √2 G₂, …, √2 G_n ; χ_{(n−1)β}, χ_{(n−2)β}, …, χ_{2β}, χ_β ),   (6)

with the diagonal listed first and the off-diagonal second, in which G₁, …, G_n are standard Gaussian variables, χ_r is a chi-distributed random variable with r degrees of freedom, and all entries in the upper-triangular part are independent [8]. The random tridiagonal has the eigenvalue density (5) for all positive β. The ensemble is extended further by defining

H_∞ = lim_{β→∞} H_β = (1/√2) · tridiag( 0, 0, …, 0 ; √(n−1), √(n−2), …, √2, √1 ).

This deterministic matrix encodes the three-term recurrence for Hermite polynomials. Its eigenvalues are the roots h₁ > h₂ > ⋯ > h_n of the nth Hermite polynomial H_n, and its kth eigenvector is v_k = (H_{n−1}(h_k), H_{n−2}(h_k), …, H₁(h_k), H₀(h_k)).

Large-β asymptotics at the soft edge concern the largest eigenvalues of H_β as β → ∞ and n → ∞. Dumitriu and Edelman let β → ∞ first and n → ∞ second. First, they show

lim_{β→∞} √β (H_β − H_∞) = Z almost surely,

in which Z is a symmetric tridiagonal matrix with independent mean-zero Gaussian entries having standard deviation 1 on the diagonal and 1/2 on the superdiagonal. Essentially,

H_β ≈ H_∞ + (1/√β) Z,   β → ∞.
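The tridiagonal model (6) is simple to sample directly. The sketch below is our own illustration (assuming NumPy; the helper name `hermite_beta` is ours): it draws one matrix and prints the centered and scaled largest eigenvalue, which should look like a Tracy-Widom sample.

```python
import numpy as np

def hermite_beta(n, beta, rng=None):
    """Sample the n-by-n tridiagonal beta-Hermite ensemble (6):
    prefactor 1/sqrt(2*beta), diagonal sqrt(2)*G_i, off-diagonal
    chi variables with (n-1)*beta, ..., beta degrees of freedom."""
    rng = np.random.default_rng() if rng is None else rng
    diag = np.sqrt(2.0) * rng.standard_normal(n)
    dof = beta * np.arange(n - 1, 0, -1)        # (n-1)beta, ..., beta
    off = np.sqrt(rng.chisquare(dof))           # chi_r = sqrt(chisquare_r)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return H / np.sqrt(2.0 * beta)

# One draw at the soft edge: lambda_max is near sqrt(2n), and
# sqrt(2) * n**(1/6) * (lambda_max - sqrt(2n)) is approximately Tracy-Widom.
n, beta = 400, 2.0
H = hermite_beta(n, beta, np.random.default_rng(0))
lam_max = np.linalg.eigvalsh(H)[-1]
print(np.sqrt(2.0) * n ** (1 / 6) * (lam_max - np.sqrt(2.0 * n)))
```

Only O(n) random variables are needed per sample, which is what makes the tridiagonal model so convenient for general-β experiments.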
When β is large, eigenvalue perturbation theory applies. The kth eigenvalue λ_k(H_β) satisfies

λ_k(H_β) ≈ h_k + (1/√β) · (v_kᵀ Z v_k)/(v_kᵀ v_k),   β → ∞.

Next, n approaches ∞. In this limit, the largest eigenvalue tends toward infinity, while its nearest neighbor becomes arbitrarily close. To see the eigenvalue more clearly, a recentering and rescaling are necessary:

√2 n^{1/6} (λ_k(H_β) − √(2n)) ≈ √2 n^{1/6} (h_k − √(2n)) + √2 n^{1/6} (1/√β) (v_kᵀ Z v_k)/(v_kᵀ v_k),   β → ∞.

The left-hand side converges in distribution to the general-β Tracy-Widom distribution. Using orthogonal polynomial asymptotics, Dumitriu and Edelman show that the right-hand side converges to a Gaussian whose mean is the Airy zero a_k and whose standard deviation is given by (1).

III. STOCHASTIC DIFFERENTIAL OPERATORS

In this section, we compute the mean eigenvalue asymptotics (2) for the first time and rederive the standard deviation result (1). Our method is based on the stochastic operator approach. The stochastic operator approach works with n → ∞ continuum limits of random matrices, rather than the limiting eigenvalue distributions alone. In particular, the Hermite ensemble H_β of (6) has a continuum limit when scaled at the soft edge [12]:

√2 n^{1/6} (H_β − √(2n) I) → A_β,   n → ∞,

in which A_β is the stochastic Airy operator

A_β = d²/dx² − x + (2/√β) W_x,   b.c.'s f(0) = f(+∞) = 0.

W_x denotes a diagonal white noise process, so that ∫_a^b W_x f(x) dx = ∫_a^b f(x) dB_x is a stochastic integral. The continuum limit is justified by viewing the tridiagonal matrix as a finite difference approximation of the continuous operator [12, 15]. In particular, the first k eigenvectors of H_β sample the first k eigenfunctions of A_β on a grid.
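The finite-difference picture just described is easy to reproduce: replace d²/dx² by a central difference on a truncated interval [0, L] and approximate the white noise by independent Gaussians of variance 1/h per grid cell. The sketch below is our own discretization (assuming NumPy; the truncation length L, grid size n, and the name `stochastic_airy_eigs` are arbitrary choices, not the authors' code). Switching the noise off (β = ∞) recovers the first Airy zero.

```python
import numpy as np

def stochastic_airy_eigs(beta, n=2000, L=10.0, k=1, rng=None):
    """Central-difference discretization of A_beta = d^2/dx^2 - x + (2/sqrt(beta)) W_x
    on [0, L] with Dirichlet boundary conditions.  White noise is modeled by
    independent N(0, 1/h) values on the grid.  Returns the k largest eigenvalues."""
    rng = np.random.default_rng() if rng is None else rng
    h = L / n
    x = h * np.arange(1, n)                        # interior grid points
    noise = (2.0 / np.sqrt(beta)) * rng.standard_normal(n - 1) / np.sqrt(h)
    main = -2.0 / h**2 - x + noise                 # f'' - x f + noise, discretized
    off = np.ones(n - 2) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(A)[::-1][:k]

# Sanity check at beta = infinity (noise coefficient 2/sqrt(inf) = 0): the top
# eigenvalue of the deterministic Airy operator should be close to a_1 = -2.3381.
top = stochastic_airy_eigs(np.inf, k=1)[0]
print(top)
```

For finite β, repeated calls give samples whose histogram approximates the general-β Tracy-Widom density of Figure 1.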
When β = ∞, the stochastic Airy operator becomes just the Airy operator A_∞ = d²/dx² − x. Its eigenvalues are the zeros a_k of the Airy function, and its eigenfunctions are (1/Ai'(a_k)) Ai(x + a_k), k = 1, 2, 3, …. Our plan is to defrost the deterministic Airy operator and study

A_β = d²/dx² − x + ε W_x

for a small ε = 2/√β. This approach appeared for the first time in Sutton's Ph.D. thesis [16].

A. Eigenvalue perturbation theory

Express the kth eigenvalue λ_k^β and its associated eigenfunction v_k^β in asymptotic series:

λ_k^β = λ^(0) + ε λ^(1) + ε² λ^(2) + ε³ λ^(3) + ⋯,
v_k^β = v^(0) + ε v^(1) + ε² v^(2) + ε³ v^(3) + ⋯.

Eigenvalue perturbation theory yields

λ^(0) = a_k,   v^(0) = (1/Ai'(a_k)) Ai(x + a_k),
λ^(1) = ⟨v^(0), W_x v^(0)⟩,   v^(1) = −(A_∞ − a_k)⁺ (W_x v^(0)),
λ^(2) = ⟨v^(0), W_x v^(1)⟩,   v^(2) = −(A_∞ − a_k)⁺ (W_x v^(1) − λ^(1) v^(1)),
λ^(3) = ⟨v^(0), W_x v^(2)⟩, …,

with (L − λ)⁺ denoting the Moore-Penrose pseudoinverse of L − λ, a.k.a. the reduced resolvent of L with respect to λ [13, §II.2.2].

B. Green's function

The specific pseudoinverse (A_∞ − a_k)⁺ is an integral operator whose kernel is a generalized Green's function. In the following formula, Bi is the second standard solution of Airy's equation [7, §9.8(i)]:
[(A_∞ − a_k)⁺ f](x) = ∫₀^∞ G(x, y) f(y) dy,

G(x, y) = −[Ai(x+a_k) Ai'(y+a_k) + Ai'(x+a_k) Ai(y+a_k)] / Ai'(a_k)²
          + π (Bi'(a_k)/Ai'(a_k)) Ai(x+a_k) Ai(y+a_k)
          − π { Bi(x+a_k) Ai(y+a_k),   x ≤ y
              { Ai(x+a_k) Bi(y+a_k),   x > y.      (7)

Hence, (A_∞ − a_k)(A_∞ − a_k)⁺ f = (A_∞ − a_k)⁺ (A_∞ − a_k) f = f − ⟨v^(0), f⟩ v^(0) for every sufficiently smooth f. To prove (7), we show the following [16, §V.4]:

1. G is symmetric,
2. G satisfies the boundary conditions G(0, y) = lim_{x→+∞} G(x, y) = 0,
3. (A_∞ − a_k) G = −v_k(x) v_k(y) for x ≠ y, in which v_k(x) is the eigenfunction (1/Ai'(a_k)) Ai(x+a_k),
4. lim_{δ→0} [dG/dx]_{x=y−δ}^{x=y+δ} = 1, and
5. ∫₀^∞ G(x, y) (1/Ai'(a_k)) Ai(x+a_k) dx = 0.

G is symmetric by inspection. At x = 0, we find

G(0, y) = −(1/Ai'(a_k)) Ai(y+a_k) − π Bi(a_k) Ai(y+a_k) = −(1/Ai'(a_k)) [1 + π Ai'(a_k) Bi(a_k)] Ai(y+a_k).

From the Wronskian W{Ai, Bi} = Ai(x) Bi'(x) − Ai'(x) Bi(x) = 1/π [7, 9.2.E7], we find, at a_k where Ai(a_k) = 0,

1/π = Ai(a_k) Bi'(a_k) − Ai'(a_k) Bi(a_k) = −Ai'(a_k) Bi(a_k),

which shows G(0, y) = 0. At the other boundary, lim_{x→+∞} G(x, y) = 0 because Ai and Ai' decay at infinity.
Away from the diagonal,

d²G/dx² = −(1/Ai'(a_k)²) (x+a_k) Ai(x+a_k) Ai'(y+a_k) − (1/Ai'(a_k)²) [Ai(x+a_k) + (x+a_k) Ai'(x+a_k)] Ai(y+a_k)
          + π (Bi'(a_k)/Ai'(a_k)) (x+a_k) Ai(x+a_k) Ai(y+a_k)
          − π { (x+a_k) Bi(x+a_k) Ai(y+a_k),   x < y
              { (x+a_k) Ai(x+a_k) Bi(y+a_k),   x > y

and

(x+a_k) G = −(1/Ai'(a_k)²) (x+a_k) [Ai(x+a_k) Ai'(y+a_k) + Ai'(x+a_k) Ai(y+a_k)]
            + π (Bi'(a_k)/Ai'(a_k)) (x+a_k) Ai(x+a_k) Ai(y+a_k)
            − π { (x+a_k) Bi(x+a_k) Ai(y+a_k),   x ≤ y
                { (x+a_k) Ai(x+a_k) Bi(y+a_k),   x > y,

and so

(A_∞ − a_k) G = d²G/dx² − (x+a_k) G = −(1/Ai'(a_k)²) Ai(x+a_k) Ai(y+a_k),   x ≠ y.

Looking for a jump discontinuity, we compute

dG/dx = −(1/Ai'(a_k)²) [Ai'(x+a_k) Ai'(y+a_k) + (x+a_k) Ai(x+a_k) Ai(y+a_k)]
        + π (Bi'(a_k)/Ai'(a_k)) Ai'(x+a_k) Ai(y+a_k)
        − π { Bi'(x+a_k) Ai(y+a_k),   x < y
            { Ai'(x+a_k) Bi(y+a_k),   x > y

and find, using the Wronskian again,

lim_{δ→0} [dG/dx]_{x=y−δ}^{x=y+δ} = −π Ai'(y+a_k) Bi(y+a_k) + π Bi'(y+a_k) Ai(y+a_k) = 1.

Finally, we want to show ∫₀^∞ G(x, y) (1/Ai'(a_k)) Ai(x+a_k) dx = 0, or equivalently ∫₀^∞ G(x, y) Ai(x +
a_k) dx = 0. The following integral computations are straightforward [7, §9.10(iv)]:

∫₀^∞ Ai²(x+a_k) dx = Ai'(a_k)²,
∫_y^∞ Ai²(x+a_k) dx = Ai'(y+a_k)² − (y+a_k) Ai²(y+a_k),
∫₀^y Ai(x+a_k) Bi(x+a_k) dx = −Ai'(y+a_k) Bi'(y+a_k) + (y+a_k) Ai(y+a_k) Bi(y+a_k) + Ai'(a_k) Bi'(a_k),
∫₀^∞ Ai(x+a_k) Ai'(x+a_k) dx = 0.

The desired integral ∫₀^∞ G(x, y) Ai(x+a_k) dx is the sum of the following five integrals:

−(1/Ai'(a_k)²) ∫₀^∞ Ai(x+a_k) Ai'(y+a_k) Ai(x+a_k) dx = −Ai'(y+a_k),

−(1/Ai'(a_k)²) ∫₀^∞ Ai'(x+a_k) Ai(y+a_k) Ai(x+a_k) dx = 0,

π (Bi'(a_k)/Ai'(a_k)) ∫₀^∞ Ai(x+a_k) Ai(y+a_k) Ai(x+a_k) dx = π Ai'(a_k) Bi'(a_k) Ai(y+a_k),

−π ∫₀^y Bi(x+a_k) Ai(y+a_k) Ai(x+a_k) dx = π Ai(y+a_k) Ai'(y+a_k) Bi'(y+a_k) − π (y+a_k) Ai²(y+a_k) Bi(y+a_k) − π Ai'(a_k) Bi'(a_k) Ai(y+a_k),

−π ∫_y^∞ Ai(x+a_k) Bi(y+a_k) Ai(x+a_k) dx = −π Ai'(y+a_k)² Bi(y+a_k) + π (y+a_k) Ai²(y+a_k) Bi(y+a_k).

All terms cancel, leaving ∫₀^∞ G(x, y) Ai(x+a_k) dx = 0. The Green's function is justified.
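The properties above can also be confirmed numerically. The sketch below is our own check (assuming SciPy's `airy`, `ai_zeros`, and `quad`; the function name `G` is ours): it implements (7) for k = 1 and verifies the boundary condition (property 2) and the orthogonality (property 5) at a test point.

```python
import numpy as np
from scipy.special import airy, ai_zeros
from scipy.integrate import quad

a, _, _, aip = ai_zeros(1)
a1, Aip = a[0], aip[0]                 # a_1 and Ai'(a_1)
Bip = airy(a1)[3]                      # Bi'(a_1)

def G(x, y):
    # Green's function (7) for k = 1, with the sign conventions used above.
    Aix, Aipx, Bix, _ = airy(x + a1)
    Aiy, Aipy, Biy, _ = airy(y + a1)
    g = -(Aix * Aipy + Aipx * Aiy) / Aip**2 + np.pi * (Bip / Aip) * Aix * Aiy
    return g - np.pi * (Bix * Aiy if x <= y else Aix * Biy)

y = 0.7
b0 = G(0.0, y)                         # property 2: should vanish
# property 5: orthogonality to Ai(x + a_1); split at x = y (kink in G)
I = (quad(lambda x: G(x, y) * airy(x + a1)[0], 0, y)[0]
     + quad(lambda x: G(x, y) * airy(x + a1)[0], y, np.inf)[0])
print(b0, I)
```

Both printed quantities should be zero up to quadrature and rounding error.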
C. Eigenvalue asymptotics

Now the eigenvalue perturbation terms can be computed. At the order of ε = 2/√β, the eigenvalue perturbation is Gaussian:

λ^(1) = ∫₀^∞ (1/Ai'(a_k)) Ai(x+a_k) · W_x · (1/Ai'(a_k)) Ai(x+a_k) dx,

or more simply,

λ^(1) = ∫₀^∞ ((1/Ai'(a_k)) Ai(x+a_k))² dB_x.

Its mean is 0 and its standard deviation is

σ = ( ∫₀^∞ ((1/Ai'(a_k)) Ai(x+a_k))⁴ dx )^{1/2}.

The perturbation of the mean is on the order of ε² = 4/β. We find

v^(1)(x) = −(A_∞ − a_k)⁺ [ W_x (1/Ai'(a_k)) Ai(x+a_k) ] = −∫₀^∞ G(x, y) W_y (1/Ai'(a_k)) Ai(y+a_k) dy

and

λ^(2) = −∫₀^∞ (1/Ai'(a_k)) Ai(x+a_k) W_x ∫₀^∞ G(x, y) W_y (1/Ai'(a_k)) Ai(y+a_k) dy dx,

or more simply,

λ^(2) = −(1/Ai'(a_k)²) ∫₀^∞ ∫₀^∞ Ai(x+a_k) G(x, y) Ai(y+a_k) dB_y dB_x.

The expected value is determined by the autocorrelation E[∬ f(x, y) dB_y dB_x] = ∫ f(x, x) dx (informally, E[W_x W_y] = δ(x − y)):

E[λ^(2)] = −∫₀^∞ G(x, x) ((1/Ai'(a_k)) Ai(x+a_k))² dx.

Asymptotic relations (1) and (2) follow.
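The integral factor in (1) is a straightforward quadrature. A short sketch (our own code, assuming SciPy) evaluates the standard-deviation constant c₁ of (3) at k = 1:

```python
import numpy as np
from scipy.special import airy, ai_zeros
from scipy.integrate import quad

a, _, _, aip = ai_zeros(1)
a1, Aip1 = a[0], aip[0]            # first Airy zero and Ai'(a_1)

# Integral factor in (1) at k = 1
I, _ = quad(lambda x: (airy(x + a1)[0] / Aip1) ** 4, 0, np.inf)
c_sd = 2.0 * np.sqrt(I)            # SD[lambda_1^beta] ~ c_sd / sqrt(beta)
print(c_sd)
```

The mean constant c₂ of (4) can be evaluated the same way once the Green's function diagonal G(x, x) from (7) is coded.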
IV. RICCATI DIFFUSION

The eigenvalue-eigenvector equation for the stochastic Airy operator is

( d²/dx² − x + (2/√β) W_x − ζ ) f = 0.

The Riccati transform Y_x = f'(x)/f(x) produces the equivalent diffusion process

dY_x = (x + ζ − Y_x²) dx + (2/√β) dB_x,

and the boundary conditions f(0) = f(+∞) = 0 translate to

Y_0 = +∞,   Y_x ∼ Ai'(x)/Ai(x) ∼ −√x,   x → +∞ [15].

In the eigenvalue-eigenvector equation, if ζ is to the right of all eigenvalues, then a solution f(x) satisfying the left boundary condition will fail to meet the right boundary condition. Experiencing no sign changes and failing to decay, it will grow without bound on the same order as Bi(x) as x → +∞. The analogous statement for the Riccati diffusion is this: if ζ dominates all eigenvalues, then a solution Y_x to the diffusion process, started at Y_0 = +∞, will have no poles on the x > 0 half-line and will become asymptotic to Bi'(x)/Bi(x) ∼ +√x as x → +∞. Vice versa, if ζ is to the left of some eigenvalue, then f(x) satisfying f(0) = 0 will experience a sign change for some positive x, and Y_x satisfying Y_0 = +∞ will reach −∞ at the same finite x. Thus,

Pr[λ₁ < ζ] = Pr_{(0,+∞)}[Y_x does not hit −∞],

in which the notation indicates a diffusion started at Y_0 = +∞ and run forward in time x. Equivalently, setting t = x + ζ, we have

dY_t = (t − Y_t²) dt + (2/√β) dB_t

and

Pr[λ₁ < ζ] = Pr_{(ζ,+∞)}[Y_t does not hit −∞]

as t runs from ζ to +∞. The CDF F(t) = F(t, +∞) of the rightmost eigenvalue will fall out as a special case after computing F(t₀, y₀) = Pr_{(t₀,y₀)}[Y_t does not hit −∞] for a general initial condition Y_{t₀} = y₀. Below, we write F(t, y) instead of F(t₀, y₀) to keep the notation clean. The hitting probability has been investigated by Bloemendal and Virág [3, 4]. It is given by Kolmogorov's backward equation:
∂F/∂t + (t − y²) ∂F/∂y + (2/β) ∂²F/∂y² = 0,
F(+∞, y) = 1,
F(t, y) ∼ Φ( (√β/2) (t − y²)/√(−y) ),   y → −∞.

Φ denotes the CDF of the standard Gaussian distribution.

A. Amelioration as β → ∞

This PDE has a simple solution when β = ∞, specifically F(t, y) = 1[y > Ai'(t)/Ai(t)]. In addition, it is well behaved for β < ∞. However, as β → ∞, the equation is dominated by convection, and the solution develops a region of rapid change around y = Ai'(t)/Ai(t). The developing cliff breaks into a jump discontinuity once β reaches infinity.

To inspect the region of rapid change (the interesting region when β ≫ 1) we apply a change of variables. Let F(t, y) = Φ((√β/2) u(t, y)) with Φ(z) the CDF of the standard Gaussian distribution. We have

∂F/∂t = Φ'((√β/2) u) · (√β/2) ∂u/∂t,
∂F/∂y = Φ'((√β/2) u) · (√β/2) ∂u/∂y,
∂²F/∂y² = Φ''((√β/2) u) · (β/4) (∂u/∂y)² + Φ'((√β/2) u) · (√β/2) ∂²u/∂y².

Noting that Φ''(z) = −z Φ'(z) and dividing by Φ'((√β/2) u) · (√β/2) gives

∂u/∂t + (t − y²) ∂u/∂y − (u/2) (∂u/∂y)² + (2/β) ∂²u/∂y² = 0.   (8)

The y → −∞ boundary condition on F corresponds to u(t, y) ∼ (t − y²)/√(−y), y → −∞. The boundary condition at t = +∞ is not necessary for the derivation below, but it is discussed in a separate paper by Bloemendal and Sutton [2]. Our goal is to analyze the solution u(t, y) to the above PDE. We shall ultimately arrive at the asymptotic expression (1). We start with the educated guess that the second-derivative
term in (8) is negligible as β → ∞ and study

∂u/∂t + (t − y²) ∂u/∂y − (u/2) (∂u/∂y)² = 0,   (9)
u(t, y) ∼ (t − y²)/√(−y),   y → −∞.   (10)

Because F(t, y) jumps from 0 to 1 across y = Ai'(t)/Ai(t), the transformed u(t, y) should have a sign change along that curve. We intend to verify this and to linearize the solution about the contour u(t, y) = 0.

B. Method of characteristics

The first-order PDE (9) can be solved by the method of characteristics. Let p = ∂u/∂t and q = ∂u/∂y. Then the PDE can be written

G(t, y, u, p, q) = p + (t − y²) q − (1/2) u q² = 0.   (11)

The solution is a surface in (t, y, u, p, q)-space. We introduce a parameter s and seek a curve (t(s), y(s), u(s), p(s), q(s)) along the surface. This is obtained from an initial condition (t₀, y₀, u₀, p₀, q₀) and the characteristic strip equations [19]:

∂t/∂s = G_p = 1,
∂y/∂s = G_q = (t − y²) − u q,
∂u/∂s = p G_p + q G_q = p + (t − y²) q − u q²,
∂p/∂s = −G_t − p G_u = −q + (1/2) p q²,
∂q/∂s = −G_y − q G_u = 2 y q + (1/2) q³.

Note that G = 0 implies p = −(t − y²) q + (1/2) u q². Hence, p can be eliminated:

∂t/∂s = 1,
∂y/∂s = (t − y²) − u q,
∂u/∂s = −(1/2) u q²,
∂q/∂s = 2 y q + (1/2) q³.
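Independently of this asymptotic analysis, the hitting-time characterization Pr[λ₁ < ζ] = Pr_{(ζ,+∞)}[Y_t does not hit −∞] can be checked by brute-force simulation of the diffusion. The Euler sketch below is our own illustration (assuming NumPy; the truncation parameters `y0` and `cap` stand in for the boundary condition Y = +∞ and for absorption at −∞, and `tw_cdf_mc` is a hypothetical name):

```python
import numpy as np

def tw_cdf_mc(zeta, beta, trials=500, dt=1e-3, t_end=5.0, y0=10.0, cap=10.0, rng=None):
    """Monte-Carlo estimate of Pr[lambda_1 < zeta]: run dY = (t - Y^2) dt
    + (2/sqrt(beta)) dB forward from t = zeta and count the paths that never
    blow down to -infinity.  Y = +infinity is truncated to y0; a path is
    declared absorbed once it falls below -cap, after which the drift
    t - Y^2 < 0 would keep it falling."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.full(trials, float(y0))
    alive = np.ones(trials, dtype=bool)
    sig = 2.0 / np.sqrt(beta)
    t = zeta
    while t < t_end:
        dy = (t - y**2) * dt + sig * np.sqrt(dt) * rng.standard_normal(trials)
        y = np.where(alive, y + dy, y)             # frozen once absorbed
        alive &= y > -cap
        np.clip(y, -cap - 1.0, y0, out=y)          # keep the Euler step stable
        t += dt
    return alive.mean()

# beta -> infinity is essentially deterministic: zeta below a_1 = -2.338
# should give probability 0, zeta above a_1 should give probability 1.
F_below = tw_cdf_mc(-3.0, 1e8, trials=4, rng=np.random.default_rng(1))
F_above = tw_cdf_mc(-1.0, 1e8, trials=4, rng=np.random.default_rng(1))
print(F_below, F_above)
```

For moderate β, many trials trace out the general-β Tracy-Widom CDF F(ζ), though a PDE solver along the lines of [2] is far more accurate than this Monte-Carlo sketch.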
Recall that we are most interested in the region around u = 0. Perhaps there is a solution to the characteristic strip equations with u(s) = 0 identically. Taking t(s) = s, this would require ∂y/∂s = s − y², which implies y(s) = Ai'(s)/Ai(s). This reduces the differential equation for q to

∂q/∂s = 2 (Ai'(s)/Ai(s)) q + (1/2) q³.

Every function of the form

q(s) = ± Ai(s)² / ( ∫_s^b Ai(w)⁴ dw )^{1/2}   (12)

is a solution. The initial condition is determined by (10). We find p = ∂u/∂t ∼ (−y)^{−1/2} with y = Ai'(s)/Ai(s) ∼ −s^{1/2}, so p ∼ s^{−1/4} as s → +∞. So in (11), p ∼ s^{−1/4} and t − y² ∼ −(1/2) s^{−1/2}, and therefore q ∼ 2 s^{1/4} as s → +∞. This fixes a positive sign on (12). Further, if the upper limit of integration b were less than +∞, then (12) would decay exponentially. However, with b = +∞, the solution has precisely the desired asymptotics q(s) ∼ 2 s^{1/4}. Solving for p, the characteristic is

t = s,
y = Ai'(s)/Ai(s),
u = 0,
p = ( Ai'(s)² − s Ai(s)² ) / ( ∫_s^∞ Ai(w)⁴ dw )^{1/2},
q = Ai(s)² / ( ∫_s^∞ Ai(w)⁴ dw )^{1/2}.

C. Eigenvalue asymptotics

The Tracy-Widom distribution focuses attention on y = +∞, i.e., s = a₁. At this terminal point of the characteristic,

p(a₁) = ( Ai'(a₁)² − a₁ Ai(a₁)² ) / ( ∫_{a₁}^∞ Ai(w)⁴ dw )^{1/2} = ( ∫₀^∞ ((1/Ai'(a₁)) Ai(x+a₁))⁴ dx )^{−1/2}.
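One can confirm numerically that (12) with b = +∞ and the positive sign solves the reduced equation for q. A sketch of this check, assuming SciPy (the helper name `q` is ours):

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import quad

def q(s):
    # Equation (12) with b = +infinity and the positive sign.
    I, _ = quad(lambda w: airy(w)[0] ** 4, s, np.inf)
    return airy(s)[0] ** 2 / np.sqrt(I)

# Check dq/ds = 2*(Ai'(s)/Ai(s))*q + q^3/2 at a test point s = 1.
s, h = 1.0, 1e-3
lhs = (q(s + h) - q(s - h)) / (2.0 * h)        # centered difference for q'(s)
ai_s, aip_s, _, _ = airy(s)
rhs = 2.0 * (aip_s / ai_s) * q(s) + 0.5 * q(s) ** 3
print(lhs, rhs)
```

The two printed values should agree to roughly the accuracy of the finite difference.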
Hence, u(t, +∞) can be linearized as follows:

u(t, +∞) ≈ 0 + p(a₁)(t − a₁),   t → a₁.

Then, the Tracy-Widom distribution is approximated by

F(t, +∞) ≈ Φ( (√β/2) p(a₁)(t − a₁) ),   t → a₁,   β → ∞.

That is, for large β, the distribution should be approximately normal with mean a₁ and standard deviation

(2/√β) · 1/p(a₁) = (2/√β) ( ∫₀^∞ ((1/Ai'(a₁)) Ai(x+a₁))⁴ dx )^{1/2}.

This is another argument supporting (1).

V. CONCLUSION

General-β random matrix theory is still a challenging business. We have shown how the asymptotic regime β → ∞ can suggest new methods and provide new data.

[1] James T. Albrecht, Cy P. Chan, and Alan Edelman. Sturm sequences and random eigenvalue distributions. Foundations of Computational Mathematics, 9(4):461–483, 2009.
[2] Alex Bloemendal and Brian D. Sutton. General-beta computation at the soft edge. In preparation.
[3] Alex Bloemendal and Bálint Virág. Limits of spiked random matrices I. arXiv:1011.1877, November 2010.
[4] Alex Bloemendal and Bálint Virág. Limits of spiked random matrices II. arXiv:1109.3704, September 2011.
[5] F. Bornemann. On the numerical evaluation of distributions in random matrix theory: a review. Markov Processes and Related Fields, 16(4):803–866, 2010.
[6] R. Courant and D. Hilbert. Methods of mathematical physics. Vol. I. Interscience Publishers, Inc., New York, N.Y., 1953.
[7] NIST Digital Library of Mathematical Functions. Release 1.0.6. Online companion to [14].
[8] Ioana Dumitriu and Alan Edelman. Matrix models for beta ensembles. J. Math. Phys., 43(11):5830–5847, 2002.
[9] Ioana Dumitriu and Alan Edelman. Eigenvalues of Hermite and Laguerre ensembles: large beta asymptotics. Annales de l'Institut Henri Poincaré. Probabilités et Statistiques, 41(6):1083–1099, 2005.
[10] Freeman J. Dyson. Statistical theory of the energy levels of complex systems. I. Journal of Mathematical Physics, 3:140–156, 1962.
[11] Alan Edelman. Stochastic differential equations and random matrices. SIAM Conference on Applied Linear Algebra, Williamsburg, Virginia, 2003.
[12] Alan Edelman and Brian D. Sutton. From random matrices to stochastic operators. J. Stat. Phys., 127(6):1121–1165, 2007.
[13] Tosio Kato. Perturbation theory for linear operators. Classics in Mathematics. Springer-Verlag, Berlin, 1995. Reprint of the 1980 edition.
[14] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors. NIST Handbook of Mathematical Functions. Cambridge University Press, New York, NY, 2010. Print companion to [7].
[15] José Ramírez, Brian Rider, and Bálint Virág. Beta ensembles, stochastic Airy spectrum, and a diffusion. J. Amer. Math. Soc., 24(4):919–944, 2011.
[16] Brian D. Sutton. The stochastic operator approach to random matrix theory. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, 2005.
[17] Craig A. Tracy and Harold Widom. Level-spacing distributions and the Airy kernel. Physics Letters B, 305(1-2):115–118, 1993.
[18] Craig A. Tracy and Harold Widom. On orthogonal and symplectic matrix ensembles. Communications in Mathematical Physics, 177(3):727–754, 1996.
[19] Daniel Zwillinger. Handbook of differential equations. Academic Press Inc., Boston, MA, 1989.
More informationEigenvalue variance bounds for Wigner and covariance random matrices
Eigenvalue variance bounds for Wigner and covariance random matrices S. Dallaporta University of Toulouse, France Abstract. This work is concerned with finite range bounds on the variance of individual
More informationPage 404. Lecture 22: Simple Harmonic Oscillator: Energy Basis Date Given: 2008/11/19 Date Revised: 2008/11/19
Page 404 Lecture : Simple Harmonic Oscillator: Energy Basis Date Given: 008/11/19 Date Revised: 008/11/19 Coordinate Basis Section 6. The One-Dimensional Simple Harmonic Oscillator: Coordinate Basis Page
More informationQuantum Chaos: An Exploration of the Stadium Billiard Using Finite Differences
Quantum Chaos: An Exploration of the Stadium Billiard Using Finite Differences Kyle Konrad & Dhrubo Jyoti Math 53: Chaos! Professor Alex Barnett Dartmouth College December 4, 2009 Abstract We investigate
More information8.1 Concentration inequality for Gaussian random matrix (cont d)
MGMT 69: Topics in High-dimensional Data Analysis Falll 26 Lecture 8: Spectral clustering and Laplacian matrices Lecturer: Jiaming Xu Scribe: Hyun-Ju Oh and Taotao He, October 4, 26 Outline Concentration
More informationOPSF, Random Matrices and Riemann-Hilbert problems
OPSF, Random Matrices and Riemann-Hilbert problems School on Orthogonal Polynomials in Approximation Theory and Mathematical Physics, ICMAT 23 27 October, 207 Plan of the course lecture : Orthogonal Polynomials
More informationON THE CONVERGENCE OF THE NEAREST NEIGHBOUR EIGENVALUE SPACING DISTRIBUTION FOR ORTHOGONAL AND SYMPLECTIC ENSEMBLES
O THE COVERGECE OF THE EAREST EIGHBOUR EIGEVALUE SPACIG DISTRIBUTIO FOR ORTHOGOAL AD SYMPLECTIC ESEMBLES Dissertation zur Erlangung des Doktorgrades der aturwissenschaften an der Fakultät für Mathematik
More informationPerformance Evaluation of Generalized Polynomial Chaos
Performance Evaluation of Generalized Polynomial Chaos Dongbin Xiu, Didier Lucor, C.-H. Su, and George Em Karniadakis 1 Division of Applied Mathematics, Brown University, Providence, RI 02912, USA, gk@dam.brown.edu
More informationSequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes
Sequential Monte Carlo methods for filtering of unobservable components of multidimensional diffusion Markov processes Ellida M. Khazen * 13395 Coppermine Rd. Apartment 410 Herndon VA 20171 USA Abstract
More informationSeparation of Variables in Linear PDE: One-Dimensional Problems
Separation of Variables in Linear PDE: One-Dimensional Problems Now we apply the theory of Hilbert spaces to linear differential equations with partial derivatives (PDE). We start with a particular example,
More informationLimits of spiked random matrices I
Limits of spiked random matrices I Alex Bloemendal Bálint Virág arxiv:1011.1877v2 [math.pr] 16 Sep 2011 September 16, 2011 Abstract Given a large, high-dimensional sample from a spiked population, the
More informationHow long does it take to compute the eigenvalues of a random symmetric matrix?
How long does it take to compute the eigenvalues of a random symmetric matrix? Govind Menon (Brown University) Joint work with: Christian Pfrang (Ph.D, Brown 2011) Percy Deift, Tom Trogdon (Courant Institute)
More informationarxiv:hep-th/ v1 14 Oct 1992
ITD 92/93 11 Level-Spacing Distributions and the Airy Kernel Craig A. Tracy Department of Mathematics and Institute of Theoretical Dynamics, University of California, Davis, CA 95616, USA arxiv:hep-th/9210074v1
More informationComparison of Virginia s College and Career Ready Mathematics Performance Expectations with the Common Core State Standards for Mathematics
Comparison of Virginia s College and Career Ready Mathematics Performance Expectations with the Common Core State Standards for Mathematics February 17, 2010 1 Number and Quantity The Real Number System
More informationORIGINS. E.P. Wigner, Conference on Neutron Physics by Time of Flight, November 1956
ORIGINS E.P. Wigner, Conference on Neutron Physics by Time of Flight, November 1956 P.W. Anderson, Absence of Diffusion in Certain Random Lattices ; Phys.Rev., 1958, v.109, p.1492 L.D. Landau, Fermi-Liquid
More informationOPTIMAL PERTURBATION OF UNCERTAIN SYSTEMS
Stochastics and Dynamics, Vol. 2, No. 3 (22 395 42 c World Scientific Publishing Company OPTIMAL PERTURBATION OF UNCERTAIN SYSTEMS Stoch. Dyn. 22.2:395-42. Downloaded from www.worldscientific.com by HARVARD
More informationFourier series: Fourier, Dirichlet, Poisson, Sturm, Liouville
Fourier series: Fourier, Dirichlet, Poisson, Sturm, Liouville Joseph Fourier (1768-1830) upon returning from Egypt in 1801 was appointed by Napoleon Prefect of the Department of Isères (where Grenoble
More informationFrom the mesoscopic to microscopic scale in random matrix theory
From the mesoscopic to microscopic scale in random matrix theory (fixed energy universality for random spectra) With L. Erdős, H.-T. Yau, J. Yin Introduction A spacially confined quantum mechanical system
More informationTail sums of Wishart and GUE eigenvalues beyond the bulk edge.
Tail sums of Wishart and GUE eigenvalues beyond the bulk edge. arxiv:74.6398v2 [math.st] 24 Jul 27 Iain M. Johnstone Stanford University and Australian National University July 26, 27 Abstract Consider
More informationMultiple Random Variables
Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x
More informationOnline solution of the average cost Kullback-Leibler optimization problem
Online solution of the average cost Kullback-Leibler optimization problem Joris Bierkens Radboud University Nijmegen j.bierkens@science.ru.nl Bert Kappen Radboud University Nijmegen b.kappen@science.ru.nl
More informationComparison Method in Random Matrix Theory
Comparison Method in Random Matrix Theory Jun Yin UW-Madison Valparaíso, Chile, July - 2015 Joint work with A. Knowles. 1 Some random matrices Wigner Matrix: H is N N square matrix, H : H ij = H ji, EH
More informationIntroduction to Theory of Mesoscopic Systems
Introduction to Theory of Mesoscopic Systems Boris Altshuler Princeton University, Columbia University & NEC Laboratories America Lecture 3 Beforehand Weak Localization and Mesoscopic Fluctuations Today
More informationStochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions
International Journal of Control Vol. 00, No. 00, January 2007, 1 10 Stochastic Optimization with Inequality Constraints Using Simultaneous Perturbations and Penalty Functions I-JENG WANG and JAMES C.
More informationAiry and Pearcey Processes
Airy and Pearcey Processes Craig A. Tracy UC Davis Probability, Geometry and Integrable Systems MSRI December 2005 1 Probability Space: (Ω, Pr, F): Random Matrix Models Gaussian Orthogonal Ensemble (GOE,
More informationRANDOM MATRIX THEORY AND TOEPLITZ DETERMINANTS
RANDOM MATRIX THEORY AND TOEPLITZ DETERMINANTS David García-García May 13, 2016 Faculdade de Ciências da Universidade de Lisboa OVERVIEW Random Matrix Theory Introduction Matrix ensembles A sample computation:
More informationMaximal height of non-intersecting Brownian motions
Maximal height of non-intersecting Brownian motions G. Schehr Laboratoire de Physique Théorique et Modèles Statistiques CNRS-Université Paris Sud-XI, Orsay Collaborators: A. Comtet (LPTMS, Orsay) P. J.
More informationUniversality for random matrices and log-gases
Universality for random matrices and log-gases László Erdős IST, Austria Ludwig-Maximilians-Universität, Munich, Germany Encounters Between Discrete and Continuous Mathematics Eötvös Loránd University,
More informationMarkov operators, classical orthogonal polynomial ensembles, and random matrices
Markov operators, classical orthogonal polynomial ensembles, and random matrices M. Ledoux, Institut de Mathématiques de Toulouse, France 5ecm Amsterdam, July 2008 recent study of random matrix and random
More informationMATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018
Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry
More informationRandom Matrix: From Wigner to Quantum Chaos
Random Matrix: From Wigner to Quantum Chaos Horng-Tzer Yau Harvard University Joint work with P. Bourgade, L. Erdős, B. Schlein and J. Yin 1 Perhaps I am now too courageous when I try to guess the distribution
More information1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem.
STATE EXAM MATHEMATICS Variant A ANSWERS AND SOLUTIONS 1 1.1 Limits and Continuity. Precise definition of a limit and limit laws. Squeeze Theorem. Intermediate Value Theorem. Extreme Value Theorem. Definition
More informationDS-GA 1002 Lecture notes 2 Fall Random variables
DS-GA 12 Lecture notes 2 Fall 216 1 Introduction Random variables Random variables are a fundamental tool in probabilistic modeling. They allow us to model numerical quantities that are uncertain: the
More informationIntroduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim
Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its
More informationAsymptotic series in quantum mechanics: anharmonic oscillator
Asymptotic series in quantum mechanics: anharmonic oscillator Facultat de Física, Universitat de Barcelona, Diagonal 645, 0808 Barcelona, Spain Thesis supervisors: Dr Bartomeu Fiol Núñez, Dr Alejandro
More informationA Novel Nonparametric Density Estimator
A Novel Nonparametric Density Estimator Z. I. Botev The University of Queensland Australia Abstract We present a novel nonparametric density estimator and a new data-driven bandwidth selection method with
More informationSquared Bessel Process with Delay
Southern Illinois University Carbondale OpenSIUC Articles and Preprints Department of Mathematics 216 Squared Bessel Process with Delay Harry Randolph Hughes Southern Illinois University Carbondale, hrhughes@siu.edu
More informationCOMPUTATION OF BESSEL AND AIRY FUNCTIONS AND OF RELATED GAUSSIAN QUADRATURE FORMULAE
BIT 6-85//41-11 $16., Vol. 4, No. 1, pp. 11 118 c Swets & Zeitlinger COMPUTATION OF BESSEL AND AIRY FUNCTIONS AND OF RELATED GAUSSIAN QUADRATURE FORMULAE WALTER GAUTSCHI Department of Computer Sciences,
More informationPseudospectra and Nonnormal Dynamical Systems
Pseudospectra and Nonnormal Dynamical Systems Mark Embree and Russell Carden Computational and Applied Mathematics Rice University Houston, Texas ELGERSBURG MARCH 1 Overview of the Course These lectures
More informationFinite Rank Perturbations of Random Matrices and Their Continuum Limits. Alexander Bloemendal
Finite Rank Perturbations of Random Matrices and Their Continuum Limits by Alexander Bloemendal A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy Graduate Department
More informationKolmogorov Equations and Markov Processes
Kolmogorov Equations and Markov Processes May 3, 013 1 Transition measures and functions Consider a stochastic process {X(t)} t 0 whose state space is a product of intervals contained in R n. We define
More informationCESARO OPERATORS ON THE HARDY SPACES OF THE HALF-PLANE
CESARO OPERATORS ON THE HARDY SPACES OF THE HALF-PLANE ATHANASIOS G. ARVANITIDIS AND ARISTOMENIS G. SISKAKIS Abstract. In this article we study the Cesàro operator C(f)() = d, and its companion operator
More informationUniversal phenomena in random systems
Tuesday talk 1 Page 1 Universal phenomena in random systems Ivan Corwin (Clay Mathematics Institute, Columbia University, Institute Henri Poincare) Tuesday talk 1 Page 2 Integrable probabilistic systems
More informationTHE SIMPLE URN PROCESS AND THE STOCHASTIC APPROXIMATION OF ITS BEHAVIOR
THE SIMPLE URN PROCESS AND THE STOCHASTIC APPROXIMATION OF ITS BEHAVIOR MICHAEL KANE As a final project for STAT 637 (Deterministic and Stochastic Optimization) the simple urn model is studied, with special
More information1 Brownian Local Time
1 Brownian Local Time We first begin by defining the space and variables for Brownian local time. Let W t be a standard 1-D Wiener process. We know that for the set, {t : W t = } P (µ{t : W t = } = ) =
More informationConvergence of the Ensemble Kalman Filter in Hilbert Space
Convergence of the Ensemble Kalman Filter in Hilbert Space Jan Mandel Center for Computational Mathematics Department of Mathematical and Statistical Sciences University of Colorado Denver Parts based
More informationMath Ordinary Differential Equations
Math 411 - Ordinary Differential Equations Review Notes - 1 1 - Basic Theory A first order ordinary differential equation has the form x = f(t, x) (11) Here x = dx/dt Given an initial data x(t 0 ) = x
More informationOPTIMAL PERTURBATION OF UNCERTAIN SYSTEMS
Stochastics and Dynamics c World Scientific Publishing Company OPTIMAL PERTURBATION OF UNCERTAIN SYSTEMS BRIAN F. FARRELL Division of Engineering and Applied Sciences, Harvard University Pierce Hall, 29
More informationThis is a closed everything exam, except for a 3x5 card with notes. Please put away all books, calculators and other portable electronic devices.
Math 54 final, Spring 00, John Lott This is a closed everything exam, except for a x5 card with notes. Please put away all books, calculators and other portable electronic devices. You need to justify
More informationLAW OF LARGE NUMBERS FOR THE SIRS EPIDEMIC
LAW OF LARGE NUMBERS FOR THE SIRS EPIDEMIC R. G. DOLGOARSHINNYKH Abstract. We establish law of large numbers for SIRS stochastic epidemic processes: as the population size increases the paths of SIRS epidemic
More informationBeyond Wiener Askey Expansions: Handling Arbitrary PDFs
Journal of Scientific Computing, Vol. 27, Nos. 1 3, June 2006 ( 2005) DOI: 10.1007/s10915-005-9038-8 Beyond Wiener Askey Expansions: Handling Arbitrary PDFs Xiaoliang Wan 1 and George Em Karniadakis 1
More informationCounting Matrices Over a Finite Field With All Eigenvalues in the Field
Counting Matrices Over a Finite Field With All Eigenvalues in the Field Lisa Kaylor David Offner Department of Mathematics and Computer Science Westminster College, Pennsylvania, USA kaylorlm@wclive.westminster.edu
More informationTASEP on a ring in sub-relaxation time scale
TASEP on a ring in sub-relaxation time scale Jinho Baik and Zhipeng Liu October 27, 2016 Abstract Interacting particle systems in the KPZ universality class on a ring of size L with OL number of particles
More informationLecture 3: Central Limit Theorem
Lecture 3: Central Limit Theorem Scribe: Jacy Bird (Division of Engineering and Applied Sciences, Harvard) February 8, 003 The goal of today s lecture is to investigate the asymptotic behavior of P N (
More informationFree Probability, Sample Covariance Matrices and Stochastic Eigen-Inference
Free Probability, Sample Covariance Matrices and Stochastic Eigen-Inference Alan Edelman Department of Mathematics, Computer Science and AI Laboratories. E-mail: edelman@math.mit.edu N. Raj Rao Deparment
More informationCharacterizations of free Meixner distributions
Characterizations of free Meixner distributions Texas A&M University March 26, 2010 Jacobi parameters. Matrix. β 0 γ 0 0 0... 1 β 1 γ 1 0.. m n J =. 0 1 β 2 γ.. 2 ; J n =. 0 0 1 β.. 3............... A
More informationSpectral inequalities and equalities involving products of matrices
Spectral inequalities and equalities involving products of matrices Chi-Kwong Li 1 Department of Mathematics, College of William & Mary, Williamsburg, Virginia 23187 (ckli@math.wm.edu) Yiu-Tung Poon Department
More informationPhysics 250 Green s functions for ordinary differential equations
Physics 25 Green s functions for ordinary differential equations Peter Young November 25, 27 Homogeneous Equations We have already discussed second order linear homogeneous differential equations, which
More informationCOMPLEX HERMITE POLYNOMIALS: FROM THE SEMI-CIRCULAR LAW TO THE CIRCULAR LAW
Serials Publications www.serialspublications.com OMPLEX HERMITE POLYOMIALS: FROM THE SEMI-IRULAR LAW TO THE IRULAR LAW MIHEL LEDOUX Abstract. We study asymptotics of orthogonal polynomial measures of the
More informationHomogenization of the Dyson Brownian Motion
Homogenization of the Dyson Brownian Motion P. Bourgade, joint work with L. Erdős, J. Yin, H.-T. Yau Cincinnati symposium on probability theory and applications, September 2014 Introduction...........
More informationM 340L CS Homework Set 12 Solutions. Note: Scale all eigenvectors so the largest component is +1.
M 34L CS Homework Set 2 Solutions Note: Scale all eigenvectors so the largest component is +.. For each of these matrices, find the characteristic polynomial p( ) det( A I). factor it to get the eigenvalues:,
More information