Distributions on matrix moment spaces


Holger Dette, Matthias Guhlich
Ruhr-Universität Bochum
Fakultät für Mathematik
Bochum, Germany

Jan Nagel
Technische Universität München
Fakultät für Mathematik
Garching, Germany

April 6, 2012

Abstract

In this paper we define distributions on the moment spaces corresponding to p × p real or complex matrix measures on the real line with unbounded support. For random vectors on the unbounded matricial moment spaces we prove convergence in distribution to the Gaussian orthogonal ensemble or the Gaussian unitary ensemble, respectively.

Keywords and Phrases: moment spaces, random matrices, matrix measures, random moments, Gaussian ensemble, Laguerre ensemble, Wigner's semicircle law, Marchenko-Pastur law

AMS Subject Classification: 60F05; 15A52; 30E05

1 Introduction

In recent years there has been considerable interest in generalizing many results of classical moment theory to the case of matrix measures. Wiener and Masani (1957) introduced matrix measures on the unit circle in the study of multivariate stochastic processes and their spectral theory. Whittle (1963) followed the same approach and established a connection to matrix polynomials, that is, polynomials with matricial coefficients. Karlin and McGregor (1959) had already studied a random walk with a doubly infinite transition matrix with the help of matrix polynomials, although without special consideration of the matricial structure. Delsarte et al. (1978) orthogonalized polynomials with respect to matrix measures on the unit circle. Duran and van Assche (1995), Duran (1995, 1999) and Duran and Lopez-Rodriguez (1996) were the first to investigate matrix orthogonal polynomials with respect to matrix measures on the real line and generalized

many results from the scalar case to the matrix case. Typical examples include the three-term recursion, quadrature formulas and ratio asymptotics. Applications to stochastic processes with two-dimensional state space were discussed by Dette et al. (2006), who expressed transition probabilities and the recurrence of states in terms of matrix measures and matrix orthogonal polynomials. In contrast to moment spaces corresponding to (probability) measures, the structure of moment spaces corresponding to matrix measures is much richer and not very well understood. In the scalar case Chang et al. (1993) investigated a uniform distribution on the moment space corresponding to measures on the interval [0, 1]. Their investigation was motivated by the consideration of a typical point in the moment space, and they studied the asymptotic properties of random moment vectors with increasing dimension. Gamboa and Lozada-Chang (2004) considered large deviation principles for random moment sequences on this space, while Lozada-Chang (2005) investigated similar problems for moment spaces corresponding to more general functions defined on a bounded set. More recently, Gamboa and Rouault (2009) discussed random spectral measures related to moment spaces of measures on the interval [0, 1] and moment spaces related to measures defined on the unit circle. Dette and Nagel (2012a) considered distributions on moment spaces corresponding to scalar measures on the real line with unbounded support. For matrix measures, each moment of a matrix measure is a symmetric (hermitian) matrix, and Dette and Studden (2002) obtained a characterization of the compact moment space corresponding to matrix measures on a compact interval. Dette and Nagel (2012b) used these results to investigate the asymptotic properties of random vectors with values in the moment space corresponding to matrix measures on the interval [0, 1].
The aim of the present paper is to get a better understanding of the properties in the non-compact case. For this purpose we define probability distributions on matrix moment spaces corresponding to measures with unbounded support and study their asymptotic behavior with increasing dimension. The remaining part of this paper is organized as follows. In Section 2 we introduce the basic notation, define distributions on the moment spaces corresponding to matrix measures on unbounded intervals and state our main results. In Section 3 we consider matrix orthogonal polynomials and their relation to moments of matrix measures. In Section 4 we use this relation to prove our main results. In Section 5 we extend these results to matrix moment spaces corresponding to matrix measures with complex entries. Finally, some technical details are deferred to the appendix.

2 Matrix moment spaces

Throughout this paper let (S_p(R), B(S_p(R))) denote the measurable space of all p × p symmetric matrices with real entries, where B(S_p(R)) is the Borel field corresponding to the Frobenius norm ||A|| = (tr(A^2))^{1/2} on S_p(R). The set S_p^+(R) ⊂ S_p(R) denotes the subset of positive definite matrices

and for a matrix A ∈ S_p(R), |A| is the determinant of A. Let T be a subset of the real line with corresponding Borel field B(T). A (S_p(R)-valued) matrix measure Σ on a measurable space (T, B(T)) is a p × p matrix of signed measures on (T, B(T)) such that for all Borel sets A ⊂ T the matrix Σ(A) is symmetric and nonnegative definite. Additionally we require the matrix measure to be normalized, that is Σ(T) = I_p, where I_p denotes the p × p identity matrix. We consider on S_p(R) the integration operator

(2.1) dX := ∏_{i ≤ j} dx_{ij},

the product Lebesgue measure with respect to the independent entries of a symmetric matrix. For an integrable function f : S_p(R) → R the integral

(2.2) ∫ f(X) dX

is the iterated integral with respect to each of the elements x_{ij}, i ≤ j (see Muirhead (1982) or Gupta and Nagar (2000)). The k-th moment of a matrix measure is then defined as

(2.3) M_k(Σ) := ∫ x^k dΣ(x) for k ≥ 0.

The set of all R-valued matrix measures on (T, B(T)) for which all moments exist is denoted by P_p(T) and we define the n-th moment space of matrix measures by

(2.4) M_{p,n}(T) := { (M_1(Σ), ..., M_n(Σ))^T | Σ ∈ P_p(T) }.

Analogous to the compact case in Dette and Nagel (2012b) we obtain a characterization of the moment spaces M_{p,n}([0, ∞)) and M_{p,n}(R) in terms of Hankel matrices, which are defined for matrices M_k ∈ S_p(R), k ≥ 0, as

(2.5) H_{2m} = (M_{i+j})_{i,j=0}^{m},  \underline{H}_{2m} = (M_{i+j+1} − M_{i+j+2})_{i,j=0}^{m−1},

and

(2.6) H_{2m+1} = (M_{i+j+1})_{i,j=0}^{m},  \underline{H}_{2m+1} = (M_{i+j} − M_{i+j+1})_{i,j=0}^{m}.

The following lemmas give a characterization of M_{p,n}([0, ∞)) and M_{p,n}(R). The proof follows by similar arguments as in Dette and Studden (2002) and is therefore omitted. Note that these authors consider non-normalized measures, but the arguments can be extended to matrix probability measures.

Lemma 2.1. A vector of matrices (M_1, ..., M_n)^T ∈ S_p(R)^n is an element of the moment space M_{p,n}([0, ∞)) if and only if for all k ≤ n the Hankel matrices H_k are nonnegative definite. The vector (M_1, ..., M_n)^T is an interior point of M_{p,n}([0, ∞)) if and only if for all k ≤ n the matrices H_k are positive definite.

We define the vectors of matrix moments

\underline{h}_{2m}^T = (M_{m+1}, ..., M_{2m}),  \underline{h}_{2m+1}^T = (M_{m+1}, ..., M_{2m+1}),
\overline{h}_{2m}^T = (M_m − M_{m+1}, ..., M_{2m−1} − M_{2m}),  \overline{h}_{2m+1}^T = (M_{m+1} − M_{m+2}, ..., M_{2m} − M_{2m+1}),

and the symmetric matrices

(2.7) \underline{M}_{n+1} := \underline{h}_n^T H_{n−1}^{−1} \underline{h}_n,  n ≥ 1,
(2.8) M_{n+1}^+ := M_n − \overline{h}_n^T \underline{H}_{n−1}^{−1} \overline{h}_n,  n ≥ 2,

where for the sake of completeness we set \underline{M}_1 = 0_p, M_1^+ = M_0 and M_2^+ = M_1. If (M_1, ..., M_n)^T is in the interior of M_{p,n}([0, ∞)), there exists, in contrast to the compact case, no upper bound for the (n + 1)-th moment. A vector of symmetric matrices (M_1, ..., M_{n+1})^T is a moment vector in M_{p,n+1}([0, ∞)) if and only if

(2.9) \underline{M}_{n+1} ≤ M_{n+1},

where the matrices are compared with respect to the Loewner ordering. The inequality (2.9) follows because M_{n+1} − \underline{M}_{n+1} is the Schur complement of M_{n+1} in H_{n+1}, and the matrix H_{n+1} is nonnegative definite if and only if the matrix H_{n−1} and the Schur complement M_{n+1} − \underline{M}_{n+1} are nonnegative definite. (M_1, ..., M_{n+1})^T is an interior point of M_{p,n+1}([0, ∞)) if and only if \underline{M}_{n+1} < M_{n+1} holds. For the remaining case of the whole real line, we obtain the following characterization of elements of the corresponding moment space.

Lemma 2.2. The vector of matrices (M_1, ..., M_n)^T ∈ S_p(R)^n is a moment vector in M_{p,n}(R) if and only if for all k with 2k ≤ n the Hankel matrices H_{2k} are nonnegative definite. The vector (M_1, ..., M_n)^T is in the interior of M_{p,n}(R) if and only if for all k with 2k ≤ n the Hankel matrices H_{2k} are positive definite.

Consider the moment vector (M_1, ..., M_{2n})^T ∈ Int M_{p,2n}(R). There exist no bounds (with respect to the Loewner ordering) for the next moment, that is, for any M_{2n+1} ∈ S_p(R) the matrix vector (M_1, ..., M_{2n+1})^T is an interior point of M_{p,2n+1}(R).
For the next even moment we can define as in (2.7) the matrix \underline{M}_{2n+2}. Then (M_1, ..., M_{2n+2})^T is a moment vector if and only if \underline{M}_{2n+2} ≤ M_{2n+2}, and it is in the interior of M_{p,2n+2}(R) if and only if \underline{M}_{2n+2} < M_{2n+2}.
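The Hankel conditions of Lemmas 2.1 and 2.2 are straightforward to test numerically. The following sketch is my own illustration (the function names are mine), assuming the Stieltjes-type convention H_{2m} = (M_{i+j})_{i,j ≤ m} and H_{2m+1} = (M_{i+j+1})_{i,j ≤ m}; it assembles the block Hankel matrices from a list of p × p moment matrices and checks nonnegative definiteness via eigenvalues:

```python
import numpy as np
from math import factorial

def hankel_block(moments, k):
    """Block Hankel matrix H_k of Lemma 2.1.

    moments[j] is the p x p matrix M_j (with moments[0] = M_0 = I_p).
    H_{2m} = (M_{i+j})_{i,j=0..m},  H_{2m+1} = (M_{i+j+1})_{i,j=0..m}.
    """
    m, shift = divmod(k, 2)
    return np.block([[moments[i + j + shift] for j in range(m + 1)]
                     for i in range(m + 1)])

def in_moment_space_halfline(moments, tol=1e-9):
    """Check (M_1, ..., M_n) in M_{p,n}([0, inf)) via H_1, ..., H_n >= 0."""
    n = len(moments) - 1
    return all(np.linalg.eigvalsh(hankel_block(moments, k)).min() >= -tol
               for k in range(1, n + 1))

# Scalar example (p = 1): the moments k! of the exponential distribution
# form a valid Stieltjes moment sequence, while (1, 2, 1) violates
# the condition m_2 >= m_1^2.
good = [np.array([[float(factorial(k))]]) for k in range(7)]
bad = [np.array([[1.0]]), np.array([[2.0]]), np.array([[1.0]])]
ok_good = in_moment_space_halfline(good)
ok_bad = in_moment_space_halfline(bad)
```

For the interior of the moment space one would require the minimal eigenvalues to be strictly positive instead.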

Now we define a density on the moment space M_{p,n}([0, ∞)) by

(2.10) g_{p,n}^{(γ,δ)}(M_n) = ∏_{k=1}^n c_{p,k} |(M_{k−1} − \underline{M}_{k−1})^{−1}(M_k − \underline{M}_k)|^{γ_k} exp( −δ_k tr[(M_{k−1} − \underline{M}_{k−1})^{−1}(M_k − \underline{M}_k)] ) 1{M_k > \underline{M}_k},

where the parameters satisfy γ_k > (p − 1)/2 and δ_k > 0, and we will show in Section 4 that the normalization constant is given by

(2.11) c_{p,k} = δ_k^{pγ_k + p(p+1)(n−k+1)/2} / Γ_p( γ_k + (p + 1)(n − k + 1)/2 ).

Here Γ_p denotes the multivariate gamma function, which is defined by

Γ_p(z) := ∫_{S_p^+(R)} |X|^{z − (p+1)/2} e^{−tr(X)} dX

[see Muirhead (1982)]. The density g_{p,n}^{(γ,δ)} is the matrix analog of the density on the scalar moment space considered by Dette and Nagel (2012a), which in turn is the natural extension of beta-type densities in the compact case. Our next result gives the asymptotic distribution of the vector of the first k components of random matrix moments distributed according to the density g_{p,n}^{(γ,δ)}. For this purpose recall that a random symmetric p × p matrix X is governed by the Gaussian orthogonal ensemble (GOE_p) if its density is given by

(2.12) f_G(X) = (2π)^{−p/2} π^{−p(p−1)/4} e^{−tr(X^2)/2}.

We define the matricial Marchenko-Pastur distribution η_p by

(2.13) dη_p(x) := ( √(x(4 − x)) / (2πx) ) 1{0 < x < 4} dx · I_p,

and obtain from the diagonal structure of this measure that the k-th moment satisfies M_k(η_p) = c_k I_p, where c_k is the k-th moment of the scalar Marchenko-Pastur distribution, that is, the k-th Catalan number (see e.g. Tulino and Verdú (2004)).

Theorem 2.3. Assume that the vector of random moments M_n ∈ M_{p,n}([0, ∞)) has a distribution with density g_{p,n}^{(γ,δ)}, where δ_i = n(p + 1)/2 for all i. For k ≥ 1 denote by M_k^{(n)} the projection of M_n onto the first k matrices and let M_k(η_p) = (c_1 I_p, ..., c_k I_p)^T contain the first k moments of the matricial Marchenko-Pastur distribution η_p. Then the convergence

√( n(p + 1)/2 ) (C^{−1} ⊗ I_p)(M_k^{(n)} − M_k(η_p)) →_D X_k,  n → ∞,

holds, where C ∈ R^{k×k} is a lower triangular matrix with entries C_{1,1} = ... = C_{k,k} = 1,

C_{i,j} = \binom{2i}{i−j} − \binom{2i}{i−j−1},  j < i,

and X_k = (X_1, ..., X_k)^T is a vector of independent, GOE_p-distributed random matrices.
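The centering constants c_k in Theorem 2.3 are the Catalan numbers, which satisfy both the closed form c_k = \binom{2k}{k}/(k + 1) and the convolution recursion c_{k+1} = Σ_{i=0}^{k} c_i c_{k−i}. A small sketch (my own illustration, not code from the paper) confirming that the two descriptions agree:

```python
from math import comb

def catalan(k):
    # k-th moment of the scalar Marchenko-Pastur law on (0, 4)
    return comb(2 * k, k) // (k + 1)

def catalan_by_convolution(n):
    # c_0 = 1, c_{k+1} = sum_{i=0}^{k} c_i c_{k-i}
    c = [1]
    for k in range(n):
        c.append(sum(c[i] * c[k - i] for i in range(k + 1)))
    return c

cats = catalan_by_convolution(6)
```

The convolution form reflects the recursive structure of the moments under the canonical variables used in Section 4, where all canonical variables of η_p equal the identity.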

6 The proof of Theorem 2.3 and Theorem 2.4 below is referred to Section 4, which contains also several results of independent interest. It requires some more detailed explanation of the relation between the moment of a matrix measure and the coefficients of matrix orthogonal polynomials, which will be presented in the following section. For the definition of a density on the moment space M p,n (R) we also need some basic facts about matrix polynomials. A matrix polynomial is a polynomial P (x) = A n x n + A n x n A 0, with coefficients A k R p p. If Σ P p (R) is a matrix measure, we define the (left) inner product on the space of matrix polynomials by ( p ) (2.4) P (x), Q(x) := P (x)dσ(x)q(x) T = P (x) ik Q(x) jl dµ kl (x). k,l= i,j p This inner product is matrix valued and R-linear in both arguments, but it is not a scalar product. The inner product (2.4) was considered by Duran (995) and several applications are discussed in Sinap and van Assche (996). It is possible to construct matrix polynomials P k (x) orthogonal with respect to the inner product,, that is, (2.5) P n (x), P m (x) = 0 p for n m. The degree of a matrix polynomial P n (x) = n k=0 A kx k is n, if A n 0 p and P n (x) is called monic, if A n = I p. In order to construct monic orthogonal polynomials up to degree N with respect to a matrix measure Σ, we assume that the Hankel matrix M 0 M N H 2N 2 =. M N.... M 2N 2 of the moments of Σ is positive definite. By Lemma 2.2 this is equivalent to the fact that (M,..., M 2N 2 ) T Int M p,2n 2 (R). If P n (x) = n k=0 A kx k is a monic matrix polynomial of degree n < N, then for any z R p \ {0}, z T P n (x), P n (x) z =z T (A T 0,..., A T n)h 2n (A T 0,..., A T n) T z =(z T A T 0,..., z T A T n)h 2n (z T A T 0,..., z T A T n) T > 0. This shows that P n (x), P n (x) is positive definite. 
Then the Gram-Schmidt-procedure can be applied to the matrix monomials I p, xi p, x 2 I p,..., which results in P 0 (x) = I p and recursively (2.6) P n (x) =x n I p x n I p, P n (x) P n (x), P n (x) P n (x)... The assumption that the polynomials P n (x) is monic is indeed necessary, otherwise P n (x), P n (x) would be singular for all vectors z which are in the kernel of all matrices A,..., A n. 6

7 x n I p, P 0 (x) P 0 (x), P 0 (x) P 0 (x) for n N. The basic properties of the inner product yield (2.5) for n m. Note that if H 2N is not positive definite, P N (x), P N (x) may be singular and P N+ (x) can not be defined. In the following discussion we suppose that H 2N is positive definite for all N. As in the scalar case, the monic orthogonal matrix polynomials satisfy a three term recursion (2.7) xp n (x) = P n+ (x) + B n+ P n (x) + A n P n (x), n with matricial recursion coefficients A n, B n+ R p p. By an induction argument, the recursion coefficients can be recursively calculated from (2.8) (2.9) P n (x), x n I p =A n A n... A 0, P n (x), x n+ I p =B n+ A n... A 0 + A n B n A n... A A n... A B A 0 where A 0 = I p, I p = I p = M 0 (see Wall (948) for the scalar versions). Equation (2.8) gives the identity A k = P k (x), P k (x) P k (x), P k (x) and from equation (2.9) we get a recursion for B k. The mapping M 2n (B, A,..., B n ) is even invertible, since equations (2.8) and (2.9) allow a recursive calculation of the moments from the recursion coefficients. For our probabilistic analysis we introduce the symmetrized matrices (2.20) A k (M 2n ) := P k (x), P k (x) /2 P k (x), P k (x) P k (x), P k (x) /2 for k n and for k n, (2.2) B k (M 2n ) := P k (x), P k (x) /2 B k P k (x), P k (x) /2. Here we write A k (M 2n ) and B k (M 2n ) to emphasize the dependence of the recursion coefficients from the moments. Now we define a density on the moment space M p,n (R) by (2.22) h (γ,δ) 2n (M 2n ) = n c Bk exp ( δ 2k tr B k (M 2n ) 2) k= n c Ak A k (M 2n ) γ k exp ( δ 2k tr A k (M 2n )) {Ak >0 p} k= with parameters γ k >, δ k > 0. The normalization constants c Bk and c Ak are given by (2.23) (2.24) c Bk = 2 p/2 ( 2δ2k π ) p(p+)/4, δ pγ k+p(p+)(n k) 2k c Ak = Γ p (γ k + (p + )(n k)). 7

Again, this density defines the natural analog of the distributions analysed in the scalar case. The following result gives a central limit theorem for the corresponding vector of random moments. The centering constants are the moments of the matrix-semicircle distribution ρ_p defined by

(2.25) dρ_p(x) := ( √(4 − x^2) / (2π) ) 1{−2 < x < 2} dx · I_p.

Theorem 2.4. Assume that the vector of random moments M_{2n} ∈ M_{p,2n}(R) has a distribution with density h_{2n}^{(γ,δ)}, where the parameters of the density satisfy δ_{2i} = n(p + 1) and δ_{2i−1} = n(p + 1)/2 for all i. For k fixed denote by M_k^{(n)} = (M_1, ..., M_k) the vector of the first k matrices of M_{2n}. Then

√( n(p + 1) ) (D^{−1} ⊗ I_p)(M_k^{(n)} − M_k(ρ_p)) →_D X_k,  n → ∞,

where M_k(ρ_p) contains the first k moments of the matrix-semicircle distribution, the matrices in X_k = (X_1, ..., X_k)^T are independent and GOE_p-distributed and D is a k × k lower triangular matrix with D_{i,j} = 0 if i + j is odd, D_{1,1} = ... = D_{k,k} = 1 and the remaining entries are given by

D_{i,j} = \binom{i}{(i−j)/2} − \binom{i}{(i−j)/2 − 1}.

Remark 2.5. Dette and Nagel (2012b) proved a similar result for random matrix moments uniformly distributed on the space M_{p,n}([0, 1]). It is also possible to obtain results of this type for the more general class of densities

f_{p,n}^{(γ,δ)}(M_n) = ∏_{k=1}^n c_{p,k} |(M_k^+ − \underline{M}_k)^{−1}(M_k − \underline{M}_k)|^{γ_k} |(M_k^+ − \underline{M}_k)^{−1}(M_k^+ − M_k)|^{δ_k} 1{\underline{M}_k < M_k < M_k^+},

where γ = (γ_k)_{k≥1}, δ = (δ_k)_{k≥1} are fixed sequences of parameters with γ_k, δ_k > −1. If γ_k = δ_k = 0 for all k, we obtain the uniform distribution on M_{p,n}([0, 1]) considered in this reference.

3 Matrix orthogonal polynomials

An important tool for the proofs of the results in Section 2 are matrix orthogonal polynomials. If the support of a matrix measure Σ is a subset of [0, ∞), it follows by similar arguments as in Dette and Studden (2002) that there exists a sequence of matrices Z_n ∈ R^{p×p} such that for n ≥ 1 the recursion coefficients in (2.17) of the monic matrix polynomials are given by

(3.1) A_n^T = Z_{2n−1} Z_{2n},
(3.2) B_n^T = Z_{2n−2} + Z_{2n−1},

9 where for the sake of completeness we define Z 0 = 0 p (note that these authors define the inner product in a different way). Throughout this paper we call the variables Z k (and similar quantities) canonical variables. Dette and Studden (2002) also showed the representation (3.3) Z n = (M n M n ) (M n M n ) for n where M0 = 0 p. Since we assume all Hankel matrices H 2N to be positive definite, we have M n > Mn. This section will provide a recursive method to calculate the moments from the recursion coefficients or from the canonical variables Z n. A similar result in the scalar case was shown by Skibinsky (968). We first need to make some definitions. Denote by P(x) T = (P 0 (x) T, P (x) T, P 2 (x) T,... ) the vector of monic orthogonal matrix polynomials and by F (x) T = (I p, xi p, x 2 I p,... ) the vector of matricial monomials. If (3.4) B I p A B 2 I p J = A 2 B 3 I p is the block-tridiagonal matrix of the recursion coefficients, the recursion (2.7) can be written as (3.5) JP(x) = xp(x). The infinite Hankel matrix of the moments of Σ is denoted by (3.6) M = (M i+j ) i,j 0, and let L be the lower block-triangular matrix containing the coefficients of the matrix polynomials, that is (3.7) P(x) = LF (x). Furthermore, R is the shift-operator 0 p I p 0 p... R = 0 p 0 p I p 0 p Although we consider matrices of infinite size, a multiplication of two matrices involves only a finite sum and all products are well-defined. The following Lemma is due to Dette and Nagel (202a) 9

10 Lemma 3.. The matrix L defined by (3.7) is non-singular and the inverse K = L is recursively defined by RK = KJ. Let D = diag(d 0, D,... ) be the block-diagonal matrix with diagonal entries D n = P n (x), P n (x) then the moment matrix M of Σ has the representation M = KDK T. If we write out the block entries in RK = KJ, we see that the blocks of the matrix K can be calculated with the recursion K i+,j = K i,j + K i,j B j+ + K i,j+ A j+, 0 j i where K i,j = 0 p if j < 0 and K i,i = I p (note that the numbering of the blocks in K starts at 0). By the last assertion in Lemma 3., the moment M n is then given by the block in the position n, 0 in the matrix KDK T and therefore M n = K n,0 D 0,0 K 0,0 = K n,0 M 0 In order to obtain a recursion for the matrices Z n, we need the following lemma to express the canonical variables as the recursion coefficients of another measure. The proof is given in the appendix. Lemma 3.2. Let Σ be a matrix measure on [0, ) and Σ s the symmetric matrix measure on R defined by (3.8) Σ s ([ x, x]) = Σ([0, x 2 ]) for all x 0. Then the monic matrix polynomials orthogonal with respect to Σ s satisfy the recursion xp n (x) = P n+ (x) + Z n (Σ) T P n (x), n 0 where Z n (Σ) is the matrix in the recursion of the polynomials orthogonal with respect to Σ. Lemma 3. together with Lemma 3.2 enables us to prove a recursive relation to compute the moments of Σ from the canonical variables Z n. Theorem 3.3. Let Σ P p ([0, )) be a matrix probability measure with canonical variables Z n defined in (3.3). Define the triangular array of matrices G i,j, i, j 0 by G i,j = 0 if i > j, G 0,j = I p and for j i. Then G n,n = M n (Σ). G i,j = G i,j + Z j i+ G i,j 0
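Theorem 3.3 is an explicit algorithm: fill the triangular array G and read the moments off the diagonal, M_n(Σ) = G_{n,n}. The sketch below is my own illustration; I read the (garbled) recursion as G_{i,j} = G_{i,j−1} + Z_{j−i+1} G_{i−1,j}, with the canonical variable acting from the left, which is consistent with M_1 = Z_1 and M_2 = Z_1^2 + Z_1 Z_2:

```python
import numpy as np

def moments_from_canonical(Z):
    """Triangular recursion of Theorem 3.3: G_{0,j} = I_p, G_{i,j} = 0
    for i > j, G_{i,j} = G_{i,j-1} + Z_{j-i+1} G_{i-1,j}; then
    M_n = G_{n,n}.  Z is the list (Z_1, ..., Z_n) of p x p arrays."""
    n, p = len(Z), Z[0].shape[0]
    G = [[np.eye(p) for _ in range(n + 1)]]                      # row i = 0
    G += [[np.zeros((p, p)) for _ in range(n + 1)] for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            G[i][j] = G[i][j - 1] + Z[j - i] @ G[i - 1][j]       # Z[j-i] = Z_{j-i+1}
    return [G[k][k] for k in range(1, n + 1)]

# All Z_k = I_p yields the matricial Marchenko-Pastur moments c_k I_p,
# where c_k is the k-th Catalan number.
mom_mp = moments_from_canonical([np.eye(2)] * 5)
# Scalar check of the ordering: M_1 = z_1, M_2 = z_1^2 + z_1 z_2.
mom_scalar = moments_from_canonical([np.array([[2.0]]), np.array([[3.0]])])
```

With all canonical variables equal to one, the recursion reduces to the classical Catalan triangle, which explains why the diagonal reproduces the Catalan numbers.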

11 Proof: For the matrix measure Σ on [0, ) we define the symmetric measure Σ s by (3.8). According to Lemma 3.2 the monic orthogonal polynomials P n (x) of Σ s of even (odd) degree are even (odd) functions. This implies for the block matrix K = (K i,j ) i,j 0 in Lemma 3., which satisfies F (x) = KP(x), that block K i,j is the matrix of zeros if i + j is odd. Furthermore, the blocks in the matrix J of the recursion coefficients are B n = 0 p and A n = Z T n. Here Z n is calculated from the moments of Σ, not of Σ s. The equation RK = KJ gives the recursion (3.9) K i+2j,i = K i+2j,i + K i+2j,i+ Z T i+. Now define G i,j = K T i+j,j i for i j, else G i,j = 0 p, then the matrices G i,j satisfy the asserted recursion. From the equation M = KDK T in Lemma 3. we obtain M n (Σ) = M 2n (Σ s ) = K 0,0 D 0 K T 2n,0 = K T 2n,0 = G n,n. Here we used that Σ is a probability measure and D 0 = M 0 = I p. 4 Weak convergence of random matrix moments 4. Matrix measures on the half-line and proof of Theorem Convergence of canonical variables Dette and Nagel (202b) use symmetrized canonical moments to study random matrix moment on the interval [0, ]. The auxiliary variables in the study of the random moments on the halfline are symmetrized versions of the canonical variables Z k in the recursion of matrix polynomials orthogonal on [0, ). For moments M n = (M,..., M n ) T in the interior of M p,n ([0, )) we define (4.) Z k := (M k M k ) /2 (M k M k )(M k M k ) /2 S + p (R) for k =,..., n. The symmetric matrix Z k is positive definite and similar to Z k. We obtain a mapping (4.2) ψ p,n : { Int M β p,n ([0, )) S + p (R) n M n Z n = (Z,..., Z n ) T, that maps a moment vector to the vector of corresponding canonical variables. If Z k and M,..., M k are given, M k can be calculated with equation (4.). Recursively, all moments M,..., M n can be recovered from canonical variables Z,..., Z n and the mapping ψ p,n is oneto-one. 
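The mapping ψ_{p,n} can be evaluated numerically: \underline{M}_k is the Schur-complement lower bound of (2.7), obtained from the bordered Hankel matrix H_k, and (4.1) then yields Z_k. A sketch in the scalar case p = 1 (my own illustration; function names are mine), which also checks that the Marchenko-Pastur moments, i.e. the Catalan numbers, are mapped to canonical variables all equal to 1:

```python
import numpy as np

def hankel(m, k):
    """Scalar H_k: H_{2r} = (m_{i+j})_{i,j<=r}, H_{2r+1} = (m_{i+j+1})_{i,j<=r}."""
    r, shift = divmod(k, 2)
    return np.array([[m[i + j + shift] for j in range(r + 1)]
                     for i in range(r + 1)], dtype=float)

def lower_bound(m, k):
    """Schur-complement bound (2.7): writing H_k = [[A, h], [h^T, m_k]],
    the lower bound for the moment m_k is h^T A^{-1} h."""
    if k == 1:
        return 0.0
    H = hankel(m, k)
    A, h = H[:-1, :-1], H[:-1, -1]
    return float(h @ np.linalg.solve(A, h))

def canonical_variables(m):
    """psi_{1,n}: moments (m_0 = 1, m_1, ..., m_n) -> (z_1, ..., z_n) with
    z_k = (m_k - mlow_k) / (m_{k-1} - mlow_{k-1}), as in (3.3)."""
    n = len(m) - 1
    z, prev = [], 1.0                     # m_0 minus its lower bound is 1
    for k in range(1, n + 1):
        gap = m[k] - lower_bound(m, k)
        z.append(gap / prev)
        prev = gap
    return z

z_mp = canonical_variables([1, 1, 2, 5, 14])    # Marchenko-Pastur moments
z_exp = canonical_variables([1, 1, 2, 6, 24])   # moments k! of Exp(1)
```

For the exponential distribution the same computation gives z = (1, 1, 2, 2), matching the Laguerre recursion coefficients b_n = 2n − 1, a_n = n^2 through the relations (3.1) and (3.2).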
The next Theorem gives the distribution of the canonical variables Z n of the random

12 vector of moments with density g p,n (γ,δ) defined in (2.0). Recall that the distribution of a random variable X S p (R) is given by the Laguerre orthogonal ensemble LOE p (γ, W ) with parameter γ > (p ) and scale matrix W 2 S+ p (R), if it has the density (4.3) f L (X) = Γ p (γ) W γ X γ 2 (p+) e tr W X {X S + p (R)} [see e.g. Muirhead (982)]. If W = I p, we call this distribution central Laguerre orthogonal ensemble and write X LOE p (γ). Theorem 4.. If the vector of random moments M n has a distribution with density g p,n (γ,δ) Z n = (Z,..., Z n ) = ψ p,n (M n ), then the random matrices Z,..., Z n are independent and and Z k LOE p (γ k + (p + )(n k + ), δ k 2 I p) for k =,..., n. Proof: By definition of ψ p,n the canonical variable Z k depends only on M,..., M k, and consequently the Jacobian determinant is obtained as vec(z n ) n vec(m n ) = vec(z k ) vec(m k ) = n M k M k 2 (p+) k= k= n n = Z... Z k 2 (p+) = Z k 2 (p+)(n k). k=2 Here we use the fact, that for X, A S p (R) the determinant of the Jacobi matrix J(Φ A ) of the transformation Φ A : X AXA is given by J(Φ A ) = A p+ [see e.g. Mathai (997)]. The joint density of the canonical variables Z,..., Z n is given by g (γ,δ) Z (Z n ) = k= n c p,k Z k γ k+ 2 (p+)(n k) exp( δ k tr Z k ) {Zk >0 p}, k= and we obtain the distribution according to Definition 4.3. Note that this argument also gives the value of the normalization constant in (2.). The following result provides the weak convergence of the Laguerre orthogonal ensemble if the parameter tends to infinity. Theorem 4.2. Suppose X n LOE p (γ n ) is a sequence of random matrices with parameters γ n. Then it holds that D (X n γ n I p ) GOE p. γn n 2
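Theorem 4.2 lends itself to a quick simulation check. For integer 2γ the central ensemble LOE_p(γ) coincides with a scaled Wishart matrix: X = G G^T / 2 with G a p × 2γ standard normal matrix has density proportional to |X|^{γ − (p+1)/2} e^{−tr X}. A Monte Carlo sketch (my own illustration, assuming this Wishart construction) confirming E[X] = γ I_p and the GOE limit variances after standardization (1 on the diagonal, 1/2 off the diagonal):

```python
import numpy as np

def sample_loe(p, gamma, rng):
    """Draw from the central LOE_p(gamma) via the Wishart construction
    X = G G^T / 2, with G a p x (2*gamma) standard normal matrix
    (this requires 2*gamma to be a positive integer)."""
    G = rng.standard_normal((p, int(round(2 * gamma))))
    return G @ G.T / 2.0

rng = np.random.default_rng(0)
p, gamma, N = 3, 200, 4000
mean = np.zeros((p, p))
std_00, std_01 = np.empty(N), np.empty(N)
for t in range(N):
    X = sample_loe(p, gamma, rng)
    mean += X / N
    Y = (X - gamma * np.eye(p)) / np.sqrt(gamma)  # standardization of Thm 4.2
    std_00[t], std_01[t] = Y[0, 0], Y[0, 1]

var_diag = std_00.var()   # GOE limit: variance 1 on the diagonal
var_off = std_01.var()    # GOE limit: variance 1/2 off the diagonal
```

The variances match the GOE density (2.12), in which the off-diagonal entries carry half the variance of the diagonal ones because tr(X^2) counts them twice.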

13 Proof: The moment generating function of X n (i.e., the joint moment generating function of all real random variables in X n ) can be written as a function (4.4) E [ e tr(kxn)] of a symmetric matrix K S p (R) (compare Gupta and Nagar (2000)). By Definition 4.3, E [ e tr(kxn)] = S p + (R) Γ p (γ n ) X γn 2 (p+) e tr X e tr(kx) dx I = I p K γn p K γn X γn 2 (p+) e tr(ip K)X dx = I p K γn Γ p (γ n ) S + p (R) for K < I p. The moment generating function of the standardized random variable is therefore given by E [e ] tr(k γn (X n γ ni p)) = E [e ] tr( γn KX n) e γ n tr K = I p K e γn γ n tr K. γn Let κ,..., κ p denote the eigenvalues of the matrix K, then the moment generating function can be written as p ( κ ) γn p { ( i e γ nκ i = exp γ n log κ ) i } γ n κ i. γn γn i= Now expanding the logarithm yields for I p < K < I p E [e ] { p tr(k γn (X n γ ni p)) = exp γ n k i= k= { } p p = exp 2 κ2 i + k κk 2 k i γn n i= k=3 i= i= ( κi γn ) k γ n κ i } { } ( ) exp 2 κ2 i = exp tr K2. 2 Because exp( tr 2 K2 ) is the moment generating function of the Gaussian orthogonal ensemble the assertion of Theorem 2.3 follows. The random canonical variables Z k are distributed according to the Laguerre orthogonal ensemble and the weak convergence for n follows directly from Theorem 4.2. Note that Z k LOE p (γ k + (p + )(n k + ), δ 2 k I p) implies that δ k Z k LOE p (γ k + (p + )(n k + )) 2 is distributed according to the central Laguerre orthogonal ensemble. The appropriately standardized random matrix δk (Z k I p ) = (δ k Z k δ k ) δk tends in distribution to the Gaussian orthogonal ensemble if δ k depends on n and tends to infinity with the same rate as γ k + (p + )(n k + ). This yields the following result. 2 3

14 Theorem 4.3. Assume that the vector of random moments M n has a distribution with density g p,n (γ,δ) defined in (2.0) and parameters δ k = (p + )n for all k. Then for each component Z 2 k of Z n = (Z,..., Z n ) = ψ p,n (M n ) the weak convergence 2 (p + )n(z D k I p ) GOE p n holds. The non-standardized canonical variable Z k converges under the assumptions of Theorem 4.3 in probability to the identity matrix. Recalling the definition of the matricial Marchenko-Pastur distribution (2.3) and definition (4.) the diagonal structure is carried over to the canonical variables. Therefore we obtain from the scalar case [see Lemma 3.3 in Dette and Nagel (202a)] Z 0 n = ψ p,n (M n (η p )) = ψ,n (c n ) I p = (,..., ) T I p = (I p,... I p ) T. In other words, the random canonical variables Z k converge in the situation of Theorem 4.3 in probability to the corresponding variables of the matricial Marchenko-Pastur distribution η p. By the continuity of the mapping ψp,n, the random moments M k converge to the moments c k I p of the matricial Marchenko-Pastur distribution Proof of Theorem 2.3 For a proof of Theorem 2.3 we need the following concept of differentiability on the level of matrices. Definition 4.4. Let O be an open subset of S p (R) n. A mapping F : O (R p p ) m is called matrix differentiable at X 0, if there exists a matrix L R mp np, such that F (X 0 + H ) F (X 0 ) = LH + o( H ) as H 0. In this case we define the matrix derivative of F at X 0 as F (X 0 ) = F X (X 0 ) = L. Matrix differentiability is a more restrictive form of Fréchet differentiability (for more details on Fréchet differentiability, we refer to Averbukh and Smolyanov (968)). In fact, we require the linear mapping which is the derivative in the Fréchet sense to be a left multiplication, so that we can apply a product rule. If F, G : O R p p are matrix differentiable at X 0, G(X 0 ) = xi p, then F G is matrix differentiable at X 0 and (F G) (X 0 ) = G(X 0 )F (X 0 ) + F (X 0 )G (X 0 ). 
Now the assertion of Theorem 2.3 is proven analogously to Theorem 2.2 in Dette and Nagel (2012b) and follows from Theorem 4.3 and Lemma 4.5 below, which shows that the inverse of the mapping ψ_{p,k} is matrix differentiable. The matrix derivative is the Kronecker product of the derivative in the scalar case with the identity matrix.

15 Lemma 4.5. The mapping ψ p,k : S+ p (R) k Int M p,k ([0, )) with ψ p,k (Z k) = M k is matrix differentiable at Z 0 k = (I p,..., I p ) T with derivative Proof: For r k consider the mapping (ψ p,k ) (Z 0 k) = C I p. F r : S + p (R) k R p p, F r (Z) = Z r, which maps the symmetric canonical variables onto the r-th non-symmetric canonical variable. We have the relation Z r = (M r M r ) /2 Z r (M r M r ) /2. Now arguments similar to those in the proof of Theorem 4.5 in Dette and Nagel (202b) show Z r Z (Z0 ) = e T r I p = z r z (z 0 ) I p. This means that the matrix derivative is the Kronecker product of the derivative in the scalar case with the identity matrix. By Lemma 3.2, the moments are sums of products of canonical variables and so the product rule implies the same structure for the derivative of the mapping ψ p,k, which yields ψ p,k Z (Z0 k) = M k Z (Z0 k) = m k z (z 0 k ) I p, and the assertion follows from the calculations in the scalar case (see Dette and Nagel (202a)). 4.2 Matrix measures on R and proof of Theorem Convergence of recursion coefficients In this section we prove the result for the moment space corresponding to matrix measures Σ P p (R). In this case the auxiliary variables A k and B k are the symmetrized versions of the recursion coefficients of orthogonal matrix polynomials defined in equations (2.20) and (2.2), respectively. Note that A k S p (R) and A k > 0 p which follows from the symmetry of the inner product in (2.20) and from the fact that P k (x), P k (x) is positive definite. The following Lemma is proven in the appendix and shows the symmetry of the recursion variables B k as well. Note that the matrices A k and A k and the matrices B k and B k are similar. Lemma 4.6. The matrix B k defined in (2.2) is symmetric, that is B k S p (R). 5

16 With definitions (2.20) and (2.2) we construct the continuous mapping (4.5) ξ p,2n : { Int Mp,2n (R) (S p (R) S + p (R)) n S p (R) M 2n R 2n = (B, A, B 2,..., B n ) T. The symmetric recursion coefficients are in a one-to-one correspondence with the non-symmetric coefficients and the mapping ξ p,2n is one-to-one (Note that A = A and P n (x), P n (x) = A... A n.) Theorem 4.7. Assume that the vector of random moments M 2n Int M p,2n (R) has a distribution with density h (γ,δ) 2n. Then the random recursion coefficients B, A,..., B n are independent and 2δ2k B k GOE p, δ 2k A k LOE p (γ k + (p + )(2n 2k)). 2 Proof: By the equations (2.9) and (2.2) the recursion coefficient B k depends only on the moments M,..., M 2k. The random variable A k depends only on M,... M 2k by identity (2.20). The Jacobi matrix of ξ p,2n is therefore a lower block-triangular matrix with determinant ( vec(ξ p,2n (M 2n )) n ) ( vec(m 2n ) = vec(b k ) n ) vec(a k ) vec(m 2k ) vec(m 2k ). We obtain from (2.20) k= A k = P k (x), P k (x) /2 M 2k P k (x), P k (x) /2 +R, where R is independent from M 2k and therefore (see Mathai (997)) vec(a k ) vec(m 2k ) = P k (x), P k (x) 2 (p+) = A k... A 2 (p+) = A k... A 2 (p+). Rearranging equation (2.9) and an application of (2.2) yields for the other recursion coefficient B k = P k (x), P k (x) /2 B k P k (x), P k (x) /2 = P k (x), P k (x) /2 M 2k (A k... A ) P k (x), P k (x) /2 +R 2 = P k (x), P k (x) /2 M 2k P k (x), P k (x) /2 +R 2, with a matrix R 2 not depending on M 2k, which gives vec(b k ) vec(m 2k ) = P k (x), P k (x) 2 (p+) = A k... A 2 (p+). k= 6

17 This gives for the Jacobian determinant of ξ p,2n ( vec(ξ p,2n (M 2n )) n ) ( vec(m 2n ) = n ) A k... A 2 (p+) A k... A 2 (p+) k=2 k=2 ( n ) ( n ) = A k 2 (p+)(n k) A k 2 (p+)(n k) k= n = k= The recursion coefficients have the joint density h (γ,δ) R (R 2n ) = A k 2 (p+)(2n 2k ). n c Bk exp ( ) δ 2k tr Bk 2 k= n k= c Ak A k γ k+ 2 (p+)(2n 2k ) exp ( δ 2k tr A k ) {Ak >0 p}. k= The product structure of the density yields the independence of the recursion parameters and the density of 2δ 2k B k is given by c Bk 2δ 2k I p 2 (p+) exp ( 2 tr B2 k) = cbk 2δ2k 2 p(p+) exp ( 2 tr B2 k). Consequently 2δ 2k B k GOE p and c Bk = 2δ 2k 2 p(p+) 2π 2 (p+) p = 2 p/2 ( 2δ2k π ) p(p+)/4. The distribution of A k is the non-central Laguerre orthogonal Ensemble LOE p (γ k + (p + )(n k), δ 2k I p), which implies δ 2k A k LOE p (γ k + (p + )(n k)) and c Ak = ( Γ p (γ k + (p + )(n k)) δ 2k I p γ k+(p+)(n k) ) = (Γ p (γ k + (p + )(n k))) δ pγ k+p(p+)(n k) 2k. This proves Theorem 4.7 and the identities (2.23) and (2.24). Theorem 4.8. If in the situation of Theorem 4.7 the parameters satisfy δ 2k = (p + )n and δ 2k = 2 (p + )n, then we have for the kth recursion coefficient B k (p + )nbk GOE p and the kth rescaled recursion coefficient A k converges in distribution to the Gaussian orthogonal Ensemble, that is D (p + )n(ak I p ) GOE p. n 7

Theorem 4.8 is a direct consequence of the distribution in Theorem 4.7. The weak convergence of $A_k$ follows from the convergence of the Laguerre orthogonal ensemble in Theorem 4.2.

Under the assumptions of Theorem 4.8 the matrix vector $R_k^{(n)}$, which contains the first $k$ matrix entries of $R_{2n-1}=(B_1,A_1,\dots,B_n)^T$, converges for fixed $k$ in probability to $R_k^0$, the projection of $R^0=(0_p,I_p,0_p,I_p,\dots)^T$ onto the first $k$ matrix entries. It follows directly from the scalar case in Dette and Nagel (2012a) that $R_k^0=\xi_{p,k}(M_k^0)$, where $M_k^0$ is the projection of $M^0=(0_p,c_1I_p,0_p,c_2I_p,\dots)^T$ onto the first $k$ matrices (the generalization of definition (4.5) to an even number of matricial arguments is straightforward). The vector $M^0$ contains the moments of the semicircle distribution $\rho$ multiplied by the identity matrix; therefore $M^0=M(\rho_p)$ is the moment vector of the matrix-semicircle distribution.

Proof of Theorem 2.4

The proof of Theorem 2.4 now follows, as the proof of Theorem 2.3, from the following lemma and the weak convergence of the random recursion coefficients in Theorem 4.8.

Lemma 4.9. The mapping $\xi_{p,k}^{-1}$ with $\xi_{p,k}^{-1}(R_k)=M_k$ is matrix differentiable at $R_k^0$ with derivative $(\xi_{p,k}^{-1})'(R_k^0)=D\otimes I_p$.

Proof: A similar argumentation as in the proof of Lemma 4.5 shows

$$\frac{\partial A_i}{\partial R}(R_k^0)=e_{2i}^T\otimes I_p=\frac{\partial a_i}{\partial r}(r_k^0)\otimes I_p \quad\text{for } 2i\le k,\qquad \frac{\partial B_i}{\partial R}(R_k^0)=e_{2i-1}^T\otimes I_p=\frac{\partial b_i}{\partial r}(r_k^0)\otimes I_p \quad\text{for } 2i-1\le k.$$

From the structure of these derivatives and Lemma 3.1 we conclude that the derivative of $M_i$ with respect to the recursion coefficients is the Kronecker product of the derivative in the scalar case with the identity matrix, and

$$\frac{\partial \xi_{p,k}^{-1}}{\partial R}(R_k^0)=\frac{\partial M_k}{\partial R}(R_k^0)=\frac{\partial m_k}{\partial r}(r_k^0)\otimes I_p=D\otimes I_p.$$
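The identification of $M^0$ with the semicircle moments can be checked numerically in the scalar case $p=1$: the truncated Jacobi matrix built from the limiting recursion coefficients $b_k=0$, $a_k=1$ (the entries of $R^0$) reproduces the moments of the semicircle distribution on $[-2,2]$, whose even moments are the Catalan numbers. The following is an illustrative sketch, not taken from the original text:

```python
import numpy as np

# Scalar case p = 1: recursion coefficients b_k = 0, a_k = 1, as in
# R^0 = (0, 1, 0, 1, ...).  The associated truncated Jacobi matrix has
# zeros on the diagonal and ones on the off-diagonals.
N = 8  # truncation size; moments of order < 2N are unaffected by truncation
J = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

# m_j = <e_0, J^j e_0> is the j-th moment of the spectral measure at e_0,
# i.e. of the semicircle distribution on [-2, 2].
moments = [np.linalg.matrix_power(J, j)[0, 0] for j in range(8)]

catalan = [1, 1, 2, 5]  # C_0, ..., C_3
assert [moments[2 * m] for m in range(4)] == catalan   # even moments
assert all(moments[2 * m + 1] == 0 for m in range(4))  # odd moments vanish
```

The even moments count Dyck paths in the Jacobi matrix, which is exactly the moment computation behind $M^0=M(\rho_p)$ in the matrix case (there, each scalar moment is multiplied by $I_p$).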

5 Complex random moments

To a large extent, the case of complex matrix measures can be treated analogously to the case of real matrix measures. For the sake of brevity we state only the results and omit the proofs. On the space of Hermitian $p\times p$ matrices $\mathcal S_p(\mathbb C)$ the integration operator changes to

$$dX=\prod_{i=1}^{p}dx_{ii}\prod_{i<j}d\,\mathrm{Re}\,x_{ij}\,d\,\mathrm{Im}\,x_{ij},$$

that is, we integrate with respect to the $p^2$ independent real entries of a Hermitian matrix. The $k$th moment of a complex matrix measure on $T$ is defined as

$$M_k(\Sigma):=\int_T x^k\,d\Sigma(x)\in\mathcal S_p(\mathbb C),$$

where $\mathcal S_p(\mathbb C)$ denotes the space of $p\times p$ Hermitian matrices. The complex $n$th moment space is denoted by $\mathcal M^2_{p,n}(T)$.

5.1 Random moments of complex matrix measures on the half-line

We define the density on the complex moment space $\mathcal M^2_{p,n}([0,\infty))$ as in the real case by

(5.1) $$g^{(\gamma,\delta)}_{p,n}(M_n)=\prod_{k=1}^{n}c_{p,k}\,\bigl|(M_{k-1}-M_{k-1}^-)^{-1}(M_k-M_k^-)\bigr|^{\gamma_k}\exp\Bigl(-\delta_k\operatorname{tr}\bigl((M_{k-1}-M_{k-1}^-)^{-1}(M_k-M_k^-)\bigr)\Bigr)\,\mathbf 1_{\{M_k>M_k^-\}},$$

where the parameters satisfy $\gamma_k>-1$ and $\delta_k>0$ and the normalization constant is given by

(5.2) $$c_{p,k}=\frac{\delta_k^{\,p\gamma_k+p^2(n-k+1)}}{\Gamma_p^{(2)}\bigl(\gamma_k+p(n-k+1)\bigr)}.$$

The canonical variables $\tilde Z_k$ in (3.3) are well defined, and as in the real case the symmetrized version is given by

(5.3) $$Z_k:=(M_{k-1}-M_{k-1}^-)^{-1/2}(M_k-M_k^-)(M_{k-1}-M_{k-1}^-)^{-1/2}\in\mathcal S_p^+(\mathbb C)$$

for $k=1,\dots,n$. The Hermitian matrix $Z_k$ is positive definite and similar to $\tilde Z_k$. We obtain a mapping

(5.4) $$\psi_{p,n}\colon \operatorname{Int}\mathcal M^2_{p,n}([0,\infty))\to \bigl(\mathcal S_p^+(\mathbb C)\bigr)^n,\qquad M_n\mapsto Z_n=(Z_1,\dots,Z_n),$$

that maps a moment vector to the vector of corresponding canonical variables. Recall that a random variable $X\in\mathcal S_p(\mathbb C)$ is distributed according to the Laguerre unitary ensemble $\mathrm{LUE}_p(\gamma,W)$ with parameter $\gamma>p-1$ and scale matrix $W\in\mathcal S_p^+(\mathbb C)$ if it has the density

$$f_L(X)=\frac{1}{\Gamma_p^{(2)}(\gamma)}\,|W|^{-\gamma}\,|X|^{\gamma-p}\,e^{-\operatorname{tr}(W^{-1}X)}\,\mathbf 1_{\{X\in\mathcal S_p^+(\mathbb C)\}}.$$

If $W=I_p$, we call this distribution the central Laguerre unitary ensemble and write $X\sim\mathrm{LUE}_p(\gamma)$. $\Gamma_p^{(2)}$ is the complex multivariate gamma function, given by

$$\Gamma_p^{(2)}(z):=\int_{\mathcal S_p^+(\mathbb C)}|X|^{z-p}\,e^{-\operatorname{tr}X}\,dX.$$

Proceeding as in Section 4 gives the following result.

Theorem 5.1. Assume that the vector of random moments $M_n$ has a distribution with density $g^{(\gamma,\delta)}_{p,n}$ and $Z_n=(Z_1,\dots,Z_n)=\psi_{p,n}(M_n)$. Then the random matrices $Z_1,\dots,Z_n$ are independent and

$$Z_k\sim\mathrm{LUE}_p\bigl(\gamma_k+p(n-k+1),\,\delta_k^{-1}I_p\bigr)$$

for $k=1,\dots,n$.

We proceed with asymptotic results for the random matricial moments in $\mathcal M^2_{p,n}([0,\infty))$ for $n\to\infty$. The following theorem provides the weak convergence of the Laguerre unitary ensemble if the parameter tends to infinity. For this purpose recall that a random variable $X$ taking values in $\mathcal S_p(\mathbb C)$ is distributed according to the Gaussian unitary ensemble $\mathrm{GUE}_p$ if it has the density

$$f_G(X)=(2\pi)^{-p/2}\,\pi^{-p(p-1)/2}\,e^{-\frac12\operatorname{tr}X^2}.$$

Theorem 5.2. Suppose $X_n\sim\mathrm{LUE}_p(\gamma_n)$ is a sequence of random matrices with parameters $\gamma_n\to\infty$. Then it holds that

$$\frac{1}{\sqrt{\gamma_n}}\,(X_n-\gamma_n I_p)\ \xrightarrow{\ \mathcal D\ }\ \mathrm{GUE}_p\qquad(n\to\infty).$$

The remaining arguments in Section 4 stay essentially unchanged, which yields the following result on the weak convergence of random complex moments.

Theorem 5.3. Assume that the vector of random moments $M_n\in\mathcal M^2_{p,n}([0,\infty))$ has a distribution with density $g^{(\gamma,\delta)}_{p,n}$ and parameters $\delta_i=np$ for all $i$. For fixed $k$ denote by $M_k^{(n)}$ the projection of $M_n$ onto the first $k$ matrices, and assume that $M_k(\eta_p)=(c_1I_p,\dots,c_kI_p)$ contains the first $k$ moments of the matricial Marchenko-Pastur distribution $\eta_p$.
Then the convergence np(c I p )(M (n) D k M k (η p )) X k n holds, where C R k k is the lower triangular matrix in Theorem 2.3 and X k = (X,..., X k ) is a vector of independent, GUE p -distributed random matrices. 20
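The Laguerre unitary ensemble appearing in Theorems 5.1 and 5.2 can be sampled concretely. The complex-Wishart representation of the central $\mathrm{LUE}_p(\gamma)$ used below (valid for integer $\gamma\ge p$) and the helper name are our own illustrative assumptions, not constructions from the paper:

```python
import numpy as np

def sample_lue(p, gamma, rng):
    """Sample from the central LUE_p(gamma): density proportional to
    det(X)^(gamma-p) exp(-tr X) on positive definite Hermitian matrices,
    realized as Z Z^* with Z a p x gamma matrix of standard complex
    Gaussians (integer gamma >= p assumed)."""
    z = (rng.standard_normal((p, gamma))
         + 1j * rng.standard_normal((p, gamma))) / np.sqrt(2.0)
    return z @ z.conj().T

rng = np.random.default_rng(1)
p, gamma = 3, 2000
X = sample_lue(p, gamma, rng)
Y = (X - gamma * np.eye(p)) / np.sqrt(gamma)  # rescaling of Theorem 5.2

assert np.allclose(X, X.conj().T)         # X is Hermitian
assert np.all(np.linalg.eigvalsh(X) > 0)  # and positive definite
assert np.allclose(Y, Y.conj().T)         # the rescaled matrix stays Hermitian
```

For large $\gamma$ the matrix $Y$ is approximately $\mathrm{GUE}_p$-distributed, which is the content of Theorem 5.2; no distributional assertion is made here, only the structural properties are checked.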

5.2 Random moments of complex matrix measures on R

The density on $\mathcal M^2_{p,2n-1}(\mathbb R)$ in the complex case is given by

(5.5) $$h^{(\gamma,\delta)}_{2n-1}(M_{2n-1})=\prod_{k=1}^{n}c_{B_k}\exp\bigl(-\delta_{2k-1}\operatorname{tr}B_k(M_{2n-1})^2\bigr)\ \prod_{k=1}^{n-1}c_{A_k}\,\bigl|A_k(M_{2n-1})\bigr|^{\gamma_k}\exp\bigl(-\delta_{2k}\operatorname{tr}A_k(M_{2n-1})\bigr)\,\mathbf 1_{\{A_k>0_p\}}$$

with parameters $\gamma_k>-1$, $\delta_k>0$, where the normalization constants $c_{B_k}$ and $c_{A_k}$ are given by

(5.6) $$c_{B_k}=2^{-p/2}\Bigl(\frac{2\delta_{2k-1}}{\pi}\Bigr)^{p^2/2},$$
(5.7) $$c_{A_k}=\frac{\delta_{2k}^{\,p\gamma_k+p^2(2n-2k)}}{\Gamma_p^{(2)}\bigl(\gamma_k+p(2n-2k)\bigr)}.$$

Theorem 5.4. Assume that the vector of random moments $M_{2n-1}\in\operatorname{Int}\mathcal M^2_{p,2n-1}(\mathbb R)$ has a distribution with density $h^{(\gamma,\delta)}_{2n-1}$. Then the random recursion coefficients $B_1,A_1,\dots,B_n$ are independent and

$$\sqrt{2\delta_{2k-1}}\,B_k\sim\mathrm{GUE}_p,\qquad \delta_{2k}A_k\sim\mathrm{LUE}_p\bigl(\gamma_k+p(2n-2k)\bigr).$$

Theorem 5.5. If in the situation of Theorem 5.4 the parameters satisfy $\delta_{2k}=2np$ and $\delta_{2k-1}=np$, then we have for the recursion coefficient $B_k$

$$\sqrt{2np}\,B_k\sim\mathrm{GUE}_p,$$

and the rescaled recursion coefficients $A_k$ converge in distribution to the Gaussian unitary ensemble,

$$\sqrt{2np}\,(A_k-I_p)\ \xrightarrow{\ \mathcal D\ }\ \mathrm{GUE}_p\qquad(n\to\infty).$$

The following result completes this section and gives a central limit theorem for the complex random moments.

Theorem 5.6. Assume that the vector of random moments $M_{2n-1}\in\mathcal M^2_{p,2n-1}(\mathbb R)$ has a distribution with density $h^{(\gamma,\delta)}_{2n-1}$, where the parameters satisfy $\delta_{2i}=2np$ and $\delta_{2i-1}=np$ for all $i$. For fixed $k$ denote by $M_k^{(n)}$ the vector of the first $k$ matrices in $M_{2n-1}$. Then

$$\sqrt{2np}\,(D\otimes I_p)\bigl(M_k^{(n)}-M_k(\rho_p)\bigr)\ \xrightarrow{\ \mathcal D\ }\ X_k\qquad(n\to\infty),$$

where $M_k(\rho_p)$ are the first $k$ moments of the matrix-semicircle distribution, $X_k=(X_1,\dots,X_k)$ is a vector of independent, $\mathrm{GUE}_p$-distributed random variables and $D$ is the $k\times k$ lower triangular matrix in Theorem 2.4.
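For reference, a matrix with the $\mathrm{GUE}_p$ density $f_G$ above can be generated entry-wise: standard normal diagonal entries and off-diagonal entries with independent $N(0,1/2)$ real and imaginary parts. This is an illustrative sketch with our own sampler name:

```python
import numpy as np

def sample_gue(p, rng):
    """Sample from GUE_p: density proportional to exp(-tr(X^2)/2) on
    Hermitian p x p matrices.  Symmetrizing a complex Gaussian matrix
    gives N(0,1) on the diagonal and N(0,1/2) real/imaginary parts off
    the diagonal."""
    g = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
    return (g + g.conj().T) / 2.0

rng = np.random.default_rng(2)
p = 4
X = sample_gue(p, rng)

assert np.allclose(X, X.conj().T)         # Hermitian
assert np.allclose(np.diag(X).imag, 0.0)  # real diagonal
# tr(X^2) = sum of |X_ij|^2 over all entries, the exponent in f_G:
assert np.isclose(np.trace(X @ X).real, np.sum(np.abs(X) ** 2))
```

The last assertion checks that the exponent $\operatorname{tr}X^2$ in $f_G$ equals the squared Euclidean norm of the $p^2$ independent real entries, which is why the density factorizes into Gaussian marginals.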

6 Appendix

6.1 Proof of Lemma 3.2

The monic polynomials orthogonal with respect to $\Sigma^s$ satisfy a recursion as in (2.7). Equation (2.6) and a simple induction argument yield that $P_{2n}(x)$ is an even function and $P_{2n+1}(x)$ is an odd function; consequently $B_n=0_p$ for all $n$. For the other recursion coefficients we have

$$A_n^T=\bigl(M_{2n-2}(\Sigma^s)-M_{2n-2}^-(\Sigma^s)\bigr)^{-1}\bigl(M_{2n}(\Sigma^s)-M_{2n}^-(\Sigma^s)\bigr).$$

A suitable approximation of the identity by step functions shows that the even moments of $\Sigma^s$ are given by

$$M_{2n}(\Sigma^s)=\int x^{2n}\,d\Sigma^s(x)=\int x^n\,d\Sigma(x)=M_n(\Sigma).$$

Therefore $M_n^-(\Sigma)\le M_{2n}^-(\Sigma^s)$ and $M_{2n}^-(\Sigma^s)\le M_n^-(\Sigma)$, which implies $M_{2n}^-(\Sigma^s)=M_n^-(\Sigma)$. We get for the recursion coefficient

$$A_n^T=\bigl(M_{n-1}(\Sigma)-M_{n-1}^-(\Sigma)\bigr)^{-1}\bigl(M_n(\Sigma)-M_n^-(\Sigma)\bigr)=Z_n(\Sigma).$$

6.2 Proof of Lemma 4.6

Denote by $C_{n-1}^{(n)}$ the coefficient of $x^{n-1}$ in $P_n(x)$; then by the recursion (2.7) we have $C_{k-1}^{(k)}=-\tilde B_k+C_{k-2}^{(k-1)}$. The construction of the matrix orthogonal polynomials in (2.6) yields

$$C_{k-1}^{(k)}=-\langle x^kI_p,P_{k-1}(x)\rangle\,\langle P_{k-1}(x),P_{k-1}(x)\rangle^{-1}.$$

Another application of the recursion gives for the recursion coefficient

$$\tilde B_k=C_{k-2}^{(k-1)}-C_{k-1}^{(k)} = \langle x^kI_p,P_{k-1}(x)\rangle\,\langle P_{k-1}(x),P_{k-1}(x)\rangle^{-1}-\langle x^{k-1}I_p,P_{k-2}(x)\rangle\,\langle P_{k-2}(x),P_{k-2}(x)\rangle^{-1}$$
$$=\langle x^{k-1}I_p,xP_{k-1}(x)\rangle\,\langle P_{k-1}(x),P_{k-1}(x)\rangle^{-1}-\langle x^{k-1}I_p,P_{k-2}(x)\rangle\,\langle P_{k-2}(x),P_{k-2}(x)\rangle^{-1}$$
$$=\Bigl(\langle x^{k-1}I_p,P_k(x)\rangle+\langle x^{k-1}I_p,P_{k-1}(x)\rangle\tilde B_k^T+\langle x^{k-1}I_p,P_{k-2}(x)\rangle\tilde A_{k-1}^T\Bigr)\langle P_{k-1}(x),P_{k-1}(x)\rangle^{-1}-\langle x^{k-1}I_p,P_{k-2}(x)\rangle\,\langle P_{k-2}(x),P_{k-2}(x)\rangle^{-1}$$
$$=\langle P_{k-1}(x),P_{k-1}(x)\rangle\,\tilde B_k^T\,\langle P_{k-1}(x),P_{k-1}(x)\rangle^{-1}.$$

The last equation implies

$$B_k=\langle P_{k-1}(x),P_{k-1}(x)\rangle^{-1/2}\,\tilde B_k\,\langle P_{k-1}(x),P_{k-1}(x)\rangle^{1/2} = \langle P_{k-1}(x),P_{k-1}(x)\rangle^{1/2}\,\tilde B_k^T\,\langle P_{k-1}(x),P_{k-1}(x)\rangle^{-1/2} = B_k^T.$$

Acknowledgements. The work of the authors was supported by the Sonderforschungsbereich Tr/12 (project C2, Fluctuations and universality of invariant random matrix ensembles).

References

Averbukh, V. and Smolyanov, O. (1968). The various definitions of the derivative in linear topological spaces. Russ. Math. Surv., 23:67-113.

Chang, F. C., Kemperman, J. H. B., and Studden, W. J. (1993). A normal limit theorem for moment sequences. Ann. Probab., 21:1295-1309.

Delsarte, P., Genin, Y. V., and Kamp, Y. G. (1978). Orthogonal polynomial matrices on the unit circle. IEEE Trans. Circuits Syst., 25:149-160.

Dette, H. and Nagel, J. (2012a). Distributions on unbounded moment spaces and random moment sequences. To appear in: Annals of Probability.

Dette, H. and Nagel, J. (2012b). Matrix measures, random moments and Gaussian ensembles. Journal of Theoretical Probability.

Dette, H., Reuther, B., Studden, W. J., and Zygmunt, M. (2006). Matrix measures and random walks with a block tridiagonal transition matrix. SIAM J. Matrix Anal. Appl., 29:117-142.

Dette, H. and Studden, W. J. (2002). Matrix measures, moment spaces and Favard's theorem for the interval [0,1] and [0,∞). Linear Algebra Appl., 345:169-193.

Duran, A. J. (1995). On orthogonal polynomials with respect to a positive definite matrix of measures. Canad. J. Math., 47:88-112.

Duran, A. J. (1999). Ratio asymptotics for orthogonal matrix polynomials. J. Approximation Theory, 100.

Duran, A. J. and Lopez-Rodriguez, P. (1996). Orthogonal matrix polynomials: zeros and Blumenthal's theorem. J. Approximation Theory, 84.

Duran, A. J. and van Assche, W. (1995). Orthogonal matrix polynomials and higher-order recurrence relations. Linear Algebra Appl., 219.

Gamboa, F. and Lozada-Chang, L. V. (2004). Large deviations for random power moment problem. Ann. Probab., 32:2819-2837.

Gamboa, F. and Rouault, A. (2009). Canonical moments and random spectral measures. J. Theoret. Probab., DOI 10.1007/s...

Gupta, A. K. and Nagar, D. K. (2000). Matrix Variate Distributions. Chapman & Hall/CRC, Boca Raton.

Karlin, S. and McGregor, J. (1959). Random walks. Illinois J. Math., 3:66-81.

Lozada-Chang, L. V. (2005). Asymptotic behavior of moment sequences. Electron. J. Probab., 10.

Mathai, A. (1997). Jacobians of Matrix Transformations and Functions of Matrix Argument. World Scientific Publ., Singapore.

Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. John Wiley & Sons, New York.

Sinap, A. and van Assche, W. (1996). Orthogonal matrix polynomials and applications. J. Comput. Appl. Math., 66:27-52.

Skibinsky, M. (1968). Extreme nth moments for distributions on [0,1] and the inverse of a moment space map. J. Appl. Probab., 5:693-701.

Tulino, A. M. and Verdú, S. (2004). Random Matrix Theory and Wireless Communications. Found. Trends Commun. Inf. Theory, now Publishers Inc., Hanover, MA.

Wall, H. S. (1948). Analytic Theory of Continued Fractions. D. Van Nostrand Company, Inc., New York.

Whittle, P. (1963). On the fitting of multivariate autoregressions, and the approximate canonical factorization of a spectral density matrix. Biometrika, 50:129-134.

Wiener, N. and Masani, P. (1957). The prediction theory of multivariate stochastic processes. Acta Math., 98:111-150.


Department of Applied Mathematics Faculty of EEMCS. University of Twente. Memorandum No Birth-death processes with killing Department of Applied Mathematics Faculty of EEMCS t University of Twente The Netherlands P.O. Box 27 75 AE Enschede The Netherlands Phone: +3-53-48934 Fax: +3-53-48934 Email: memo@math.utwente.nl www.math.utwente.nl/publications

More information

Review of Basic Concepts in Linear Algebra

Review of Basic Concepts in Linear Algebra Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra

More information

1 Intro to RMT (Gene)

1 Intro to RMT (Gene) M705 Spring 2013 Summary for Week 2 1 Intro to RMT (Gene) (Also see the Anderson - Guionnet - Zeitouni book, pp.6-11(?) ) We start with two independent families of R.V.s, {Z i,j } 1 i

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Supermodular ordering of Poisson arrays

Supermodular ordering of Poisson arrays Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore

More information

The Dirichlet Problem for Infinite Networks

The Dirichlet Problem for Infinite Networks The Dirichlet Problem for Infinite Networks Nitin Saksena Summer 2002 Abstract This paper concerns the existence and uniqueness of solutions to the Dirichlet problem for infinite networks. We formulate

More information

ETNA Kent State University

ETNA Kent State University Electronic Transactions on Numerical Analysis Volume 14, pp 127-141, 22 Copyright 22, ISSN 168-9613 ETNA etna@mcskentedu RECENT TRENDS ON ANALYTIC PROPERTIES OF MATRIX ORTHONORMAL POLYNOMIALS F MARCELLÁN

More information

Orthogonal Symmetric Toeplitz Matrices

Orthogonal Symmetric Toeplitz Matrices Orthogonal Symmetric Toeplitz Matrices Albrecht Böttcher In Memory of Georgii Litvinchuk (1931-2006 Abstract We show that the number of orthogonal and symmetric Toeplitz matrices of a given order is finite

More information

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018

MATH 315 Linear Algebra Homework #1 Assigned: August 20, 2018 Homework #1 Assigned: August 20, 2018 Review the following subjects involving systems of equations and matrices from Calculus II. Linear systems of equations Converting systems to matrix form Pivot entry

More information

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH

ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH ON ORTHOGONAL REDUCTION TO HESSENBERG FORM WITH SMALL BANDWIDTH V. FABER, J. LIESEN, AND P. TICHÝ Abstract. Numerous algorithms in numerical linear algebra are based on the reduction of a given matrix

More information

A Generalization of Wigner s Law

A Generalization of Wigner s Law A Generalization of Wigner s Law Inna Zakharevich June 2, 2005 Abstract We present a generalization of Wigner s semicircle law: we consider a sequence of probability distributions (p, p 2,... ), with mean

More information

Introduction to orthogonal polynomials. Michael Anshelevich

Introduction to orthogonal polynomials. Michael Anshelevich Introduction to orthogonal polynomials Michael Anshelevich November 6, 2003 µ = probability measure on R with finite moments m n (µ) = R xn dµ(x)

More information

On the smallest eigenvalues of covariance matrices of multivariate spatial processes

On the smallest eigenvalues of covariance matrices of multivariate spatial processes On the smallest eigenvalues of covariance matrices of multivariate spatial processes François Bachoc, Reinhard Furrer Toulouse Mathematics Institute, University Paul Sabatier, France Institute of Mathematics

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

An Arnoldi Gram-Schmidt process and Hessenberg matrices for Orthonormal Polynomials

An Arnoldi Gram-Schmidt process and Hessenberg matrices for Orthonormal Polynomials [ 1 ] University of Cyprus An Arnoldi Gram-Schmidt process and Hessenberg matrices for Orthonormal Polynomials Nikos Stylianopoulos, University of Cyprus New Perspectives in Univariate and Multivariate

More information

Random matrices: Distribution of the least singular value (via Property Testing)

Random matrices: Distribution of the least singular value (via Property Testing) Random matrices: Distribution of the least singular value (via Property Testing) Van H. Vu Department of Mathematics Rutgers vanvu@math.rutgers.edu (joint work with T. Tao, UCLA) 1 Let ξ be a real or complex-valued

More information

MATH 581D FINAL EXAM Autumn December 12, 2016

MATH 581D FINAL EXAM Autumn December 12, 2016 MATH 58D FINAL EXAM Autumn 206 December 2, 206 NAME: SIGNATURE: Instructions: there are 6 problems on the final. Aim for solving 4 problems, but do as much as you can. Partial credit will be given on all

More information

arxiv: v4 [math.pr] 10 Sep 2018

arxiv: v4 [math.pr] 10 Sep 2018 Lectures on the local semicircle law for Wigner matrices arxiv:60.04055v4 [math.pr] 0 Sep 208 Florent Benaych-Georges Antti Knowles September, 208 These notes provide an introduction to the local semicircle

More information

A determinantal formula for the GOE Tracy-Widom distribution

A determinantal formula for the GOE Tracy-Widom distribution A determinantal formula for the GOE Tracy-Widom distribution Patrik L. Ferrari and Herbert Spohn Technische Universität München Zentrum Mathematik and Physik Department e-mails: ferrari@ma.tum.de, spohn@ma.tum.de

More information

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES

AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES AN ELEMENTARY PROOF OF THE SPECTRAL RADIUS FORMULA FOR MATRICES JOEL A. TROPP Abstract. We present an elementary proof that the spectral radius of a matrix A may be obtained using the formula ρ(a) lim

More information

SOLUTION OF THE TRUNCATED PARABOLIC MOMENT PROBLEM

SOLUTION OF THE TRUNCATED PARABOLIC MOMENT PROBLEM SOLUTION OF THE TRUNCATED PARABOLIC MOMENT PROBLEM RAÚL E. CURTO AND LAWRENCE A. FIALKOW Abstract. Given real numbers β β (2n) {β ij} i,j 0,i+j 2n, with γ 00 > 0, the truncated parabolic moment problem

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

MATH 532: Linear Algebra

MATH 532: Linear Algebra MATH 532: Linear Algebra Chapter 5: Norms, Inner Products and Orthogonality Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2015 fasshauer@iit.edu MATH 532 1 Outline

More information

1 Math 241A-B Homework Problem List for F2015 and W2016

1 Math 241A-B Homework Problem List for F2015 and W2016 1 Math 241A-B Homework Problem List for F2015 W2016 1.1 Homework 1. Due Wednesday, October 7, 2015 Notation 1.1 Let U be any set, g be a positive function on U, Y be a normed space. For any f : U Y let

More information

A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices

A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices A Remark on Hypercontractivity and Tail Inequalities for the Largest Eigenvalues of Random Matrices Michel Ledoux Institut de Mathématiques, Université Paul Sabatier, 31062 Toulouse, France E-mail: ledoux@math.ups-tlse.fr

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Introduction to Orthogonal Polynomials: Definition and basic properties

Introduction to Orthogonal Polynomials: Definition and basic properties Introduction to Orthogonal Polynomials: Definition and basic properties Prof. Dr. Mama Foupouagnigni African Institute for Mathematical Sciences, Limbe, Cameroon and Department of Mathematics, Higher Teachers

More information

Markov Chains, Stochastic Processes, and Matrix Decompositions

Markov Chains, Stochastic Processes, and Matrix Decompositions Markov Chains, Stochastic Processes, and Matrix Decompositions 5 May 2014 Outline 1 Markov Chains Outline 1 Markov Chains 2 Introduction Perron-Frobenius Matrix Decompositions and Markov Chains Spectral

More information

A CLASS OF SCHUR MULTIPLIERS ON SOME QUASI-BANACH SPACES OF INFINITE MATRICES

A CLASS OF SCHUR MULTIPLIERS ON SOME QUASI-BANACH SPACES OF INFINITE MATRICES A CLASS OF SCHUR MULTIPLIERS ON SOME QUASI-BANACH SPACES OF INFINITE MATRICES NICOLAE POPA Abstract In this paper we characterize the Schur multipliers of scalar type (see definition below) acting on scattered

More information

FINITE-DIMENSIONAL LINEAR ALGEBRA

FINITE-DIMENSIONAL LINEAR ALGEBRA DISCRETE MATHEMATICS AND ITS APPLICATIONS Series Editor KENNETH H ROSEN FINITE-DIMENSIONAL LINEAR ALGEBRA Mark S Gockenbach Michigan Technological University Houghton, USA CRC Press Taylor & Francis Croup

More information

Numerical Methods for Random Matrices

Numerical Methods for Random Matrices Numerical Methods for Random Matrices MIT 18.95 IAP Lecture Series Per-Olof Persson (persson@mit.edu) January 23, 26 1.9.8.7 Random Matrix Eigenvalue Distribution.7.6.5 β=1 β=2 β=4 Probability.6.5.4.4

More information

Determinantal point processes and random matrix theory in a nutshell

Determinantal point processes and random matrix theory in a nutshell Determinantal point processes and random matrix theory in a nutshell part I Manuela Girotti based on M. Girotti s PhD thesis and A. Kuijlaars notes from Les Houches Winter School 202 Contents Point Processes

More information

Compound matrices and some classical inequalities

Compound matrices and some classical inequalities Compound matrices and some classical inequalities Tin-Yau Tam Mathematics & Statistics Auburn University Dec. 3, 04 We discuss some elegant proofs of several classical inequalities of matrices by using

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

JACOBI S ITERATION METHOD

JACOBI S ITERATION METHOD ITERATION METHODS These are methods which compute a sequence of progressively accurate iterates to approximate the solution of Ax = b. We need such methods for solving many large linear systems. Sometimes

More information

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013

Multivariate Gaussian Distribution. Auxiliary notes for Time Series Analysis SF2943. Spring 2013 Multivariate Gaussian Distribution Auxiliary notes for Time Series Analysis SF2943 Spring 203 Timo Koski Department of Mathematics KTH Royal Institute of Technology, Stockholm 2 Chapter Gaussian Vectors.

More information

COMPUTING MOMENTS OF FREE ADDITIVE CONVOLUTION OF MEASURES

COMPUTING MOMENTS OF FREE ADDITIVE CONVOLUTION OF MEASURES COMPUTING MOMENTS OF FREE ADDITIVE CONVOLUTION OF MEASURES W LODZIMIERZ BRYC Abstract. This short note explains how to use ready-to-use components of symbolic software to convert between the free cumulants

More information

Numerical Linear Algebra Homework Assignment - Week 2

Numerical Linear Algebra Homework Assignment - Week 2 Numerical Linear Algebra Homework Assignment - Week 2 Đoàn Trần Nguyên Tùng Student ID: 1411352 8th October 2016 Exercise 2.1: Show that if a matrix A is both triangular and unitary, then it is diagonal.

More information

STA 294: Stochastic Processes & Bayesian Nonparametrics

STA 294: Stochastic Processes & Bayesian Nonparametrics MARKOV CHAINS AND CONVERGENCE CONCEPTS Markov chains are among the simplest stochastic processes, just one step beyond iid sequences of random variables. Traditionally they ve been used in modelling a

More information