Assessing the dependence of high-dimensional time series via sample autocovariances and correlations

1 Assessing the dependence of high-dimensional time series via sample autocovariances and correlations. Johannes Heiny, University of Aarhus. Joint work with Thomas Mikosch (Copenhagen), Richard Davis (Columbia), Alex Aue, Debashis Paul (UC Davis) and Jianfeng Yao (HKU). Copenhagen. J. Heiny, Large correlation and covariance matrices, 1 / 36

2 Motivation: S&P 500 Index. Figure: estimated lower and upper tail indices of the log-returns of 478 time series in the S&P 500 index.

3 Setup. Data matrix $X = X_p$: a $p \times n$ matrix with iid centered columns, $X = (X_{it})_{i=1,\dots,p;\, t=1,\dots,n}$. Sample covariance matrix $S = \frac{1}{n} X X^\top$. Ordered eigenvalues of $S$: $\lambda_1(S) \ge \lambda_2(S) \ge \dots \ge \lambda_p(S)$. Applications: Principal Component Analysis, Linear Regression, ...

4 Sample Correlation Matrix. Sample correlation matrix $R$ with entries $R_{ij} = \frac{S_{ij}}{\sqrt{S_{ii} S_{jj}}}$, $i, j = 1, \dots, p$, and eigenvalues $\lambda_1(R) \ge \lambda_2(R) \ge \dots \ge \lambda_p(R)$.
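As a minimal numpy sketch (not taken from the talk), the construction of $S$ and $R$ from a simulated data matrix looks as follows; the Gaussian entries are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 1000
X = rng.standard_normal((p, n))      # p x n data matrix with iid centered entries

S = X @ X.T / n                      # sample covariance, S = (1/n) X X'
d = np.sqrt(np.diag(S))
R = S / np.outer(d, d)               # R_ij = S_ij / sqrt(S_ii S_jj)

lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # lambda_1(R) >= ... >= lambda_p(R)

assert np.allclose(np.diag(R), 1.0)  # a correlation matrix has unit diagonal
assert abs(lam.sum() - p) < 1e-6     # trace(R) = p, so the eigenvalues sum to p
```

Note that, matching the model's centered entries, no sample mean is subtracted here; `np.corrcoef` would center each row first.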

5–6 The Model. Data structure: $X_p = A_p Z_p$, where $A_p$ is a deterministic $p \times p$ matrix such that $(\|A_p\|)$ is bounded, and $Z_p = (Z_{it})_{i=1,\dots,p;\, t=1,\dots,n}$ has iid, centered entries with unit variance (if finite). Population covariance matrix $\Sigma = A A^\top$. Population correlation matrix $\Gamma = (\operatorname{diag}(\Sigma))^{-1/2}\, \Sigma\, (\operatorname{diag}(\Sigma))^{-1/2}$. Note: $\mathbb{E}[S] = \Sigma$ but $\mathbb{E}[R_{ij}] = \Gamma_{ij} + O(n^{-1})$.
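A hedged numpy sketch of the population quantities: the mixing matrix $A$ below is an arbitrary illustrative choice, not one from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 4
A = rng.standard_normal((p, p))                   # illustrative deterministic mixing matrix

Sigma = A @ A.T                                   # population covariance, Sigma = A A'
Dinv = np.diag(1.0 / np.sqrt(np.diag(Sigma)))
Gamma = Dinv @ Sigma @ Dinv                       # Gamma = diag(Sigma)^{-1/2} Sigma diag(Sigma)^{-1/2}

assert np.allclose(np.diag(Gamma), 1.0)           # unit diagonal
assert np.all(np.linalg.eigvalsh(Gamma) >= -1e-10)  # Gamma is positive semidefinite
```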

7–8 The Model. Sample vs. population: covariance matrix $S$ vs. $\Sigma$; correlation matrix $R$ vs. $\Gamma$. Growth regime: $n = n_p$ and $p/n_p \to \gamma \in [0, \infty)$ as $p \to \infty$. High dimension: $\lim_p p/n \in (0, \infty)$. Moderate dimension: $\lim_p p/n = 0$.

9 Main Result. Approximation under finite fourth moment: assume $X = AZ$ and $\mathbb{E}[Z_{11}^4] < \infty$. Then, as $p \to \infty$, $\sqrt{n/p}\, \|\operatorname{diag}(S) - \operatorname{diag}(\Sigma)\| \to 0$ a.s. Approximation under infinite fourth moment: assume $X = Z$ and $\mathbb{E}[Z_{11}^4] = \infty$. Then, as $p \to \infty$, $c_{np}^{-1}\, \|S - \operatorname{diag}(S)\| \to 0$ a.s.

10–11 Main Result. Assume $X = AZ$ and $\mathbb{E}[Z_{11}^4] < \infty$. Then, as $p \to \infty$, $\sqrt{n/p}\, \|\operatorname{diag}(S) - \operatorname{diag}(\Sigma)\| \to 0$ a.s. and $\sqrt{n/p}\, \|(\operatorname{diag}(S))^{1/2} - (\operatorname{diag}(\Sigma))^{1/2}\| \to 0$ a.s. Relevance: note that $R = (\operatorname{diag}(S))^{-1/2}\, S\, (\operatorname{diag}(S))^{-1/2}$, $S = \frac{1}{n} X X^\top$ and $R = Y Y^\top$, where $Y = (Y_{ij})_{p \times n}$ with $Y_{ij} = X_{ij} / \sqrt{\sum_{t=1}^n X_{it}^2}$. In general, any two entries of $Y$ are dependent.
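The identity $R = (\operatorname{diag}(S))^{-1/2} S (\operatorname{diag}(S))^{-1/2} = Y Y^\top$ with self-normalized rows can be checked numerically; a small sketch under simulated Gaussian data:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 6, 50
X = rng.standard_normal((p, n))

S = X @ X.T / n
Dinv = np.diag(1.0 / np.sqrt(np.diag(S)))
R = Dinv @ S @ Dinv                               # R = diag(S)^{-1/2} S diag(S)^{-1/2}

# self-normalized matrix Y with Y_ij = X_ij / sqrt(sum_t X_it^2)
Y = X / np.sqrt((X**2).sum(axis=1, keepdims=True))

assert np.allclose(R, Y @ Y.T)                    # R = Y Y'
```

The factor $1/n$ cancels in the normalization, which is why $R$ depends on the data only through the self-normalized entries $Y_{ij}$.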

12–14 A Comparison Under Finite Fourth Moment. Approximation of the sample correlation matrix: assume $X = AZ$ and $\mathbb{E}[Z_{11}^4] < \infty$. Then $\sqrt{n/p}\, \| R - \underbrace{(\operatorname{diag}(\Sigma))^{-1/2}\, S\, (\operatorname{diag}(\Sigma))^{-1/2}}_{S_Q} \| \to 0$ a.s. Spectrum comparison: an application of Weyl's inequality yields $\sqrt{n/p}\, \max_{i=1,\dots,p} |\lambda_i(R) - \lambda_i(S_Q)| \le \sqrt{n/p}\, \|R - S_Q\| \to 0$ a.s. Operator norm consistent estimation: $\|R - \Gamma\| = O(\sqrt{p/n})$ a.s.
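The operator-norm rate $\|R - \Gamma\| = O(\sqrt{p/n})$ can be illustrated by simulation; a sketch with $\Gamma = I$ (iid Gaussian entries, an assumption for illustration) where quadrupling $n$ should roughly halve the error:

```python
import numpy as np

rng = np.random.default_rng(8)

def corr_error(p, n, rng):
    """Spectral-norm error ||R - Gamma|| for iid N(0,1) data, where Gamma = I."""
    X = rng.standard_normal((p, n))
    S = X @ X.T / n
    d = np.sqrt(np.diag(S))
    R = S / np.outer(d, d)
    return np.linalg.norm(R - np.eye(p), 2)

e1 = corr_error(100, 1000, rng)   # p/n = 0.1
e2 = corr_error(100, 4000, rng)   # p/n = 0.025, sqrt(p/n) halved

assert e2 < e1                    # error shrinks with growing n
assert e1 < 1.0                   # roughly 2*sqrt(p/n) + p/n here
```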

15 Notation. Empirical spectral distribution of a $p \times p$ matrix $C$ with real eigenvalues $\lambda_1(C), \dots, \lambda_p(C)$: $F_C(x) = \frac{1}{p} \sum_{i=1}^p \mathbf{1}_{\{\lambda_i(C) \le x\}}$, $x \in \mathbb{R}$. Stieltjes transform: $s_C(z) = \int_{\mathbb{R}} \frac{1}{x - z}\, dF_C(x) = \frac{1}{p} \operatorname{tr}\big((C - zI)^{-1}\big)$, $z \in \mathbb{C}^+$. Limiting spectral distribution: weak convergence of $(F_{C_p})$ to a distribution function $F$, a.s.
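Both objects are easy to compute for a finite matrix; a minimal numpy sketch checking the two equivalent forms of the Stieltjes transform:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n = 200, 400
X = rng.standard_normal((p, n))
S = X @ X.T / n
lam = np.linalg.eigvalsh(S)

def esd(x):
    """Empirical spectral distribution F_S(x) = (1/p) #{i : lambda_i <= x}."""
    return np.mean(lam <= x)

def stieltjes(z):
    """s_S(z) = integral of 1/(x - z) dF_S(x), evaluated via the eigenvalues."""
    return np.mean(1.0 / (lam - z))

z = 1.0 + 0.5j
s1 = stieltjes(z)
s2 = np.trace(np.linalg.inv(S - z * np.eye(p))) / p   # (1/p) tr (S - zI)^{-1}

assert abs(s1 - s2) < 1e-8      # the two definitions agree
assert s1.imag > 0              # s maps C^+ into C^+
assert esd(lam.max()) == 1.0
```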

16 Limiting Spectral Distribution of R. Assume $X = AZ$, $\mathbb{E}[Z_{11}^4] < \infty$ and that $F_\Gamma$ converges to a probability distribution $H$. (1) If $p/n \to \gamma \in (0, \infty)$, then $F_R$ converges weakly to a distribution function $F_{\gamma,H}$ whose Stieltjes transform $s$ satisfies $s(z) = \int \frac{dH(t)}{t(1 - \gamma - \gamma z s(z)) - z}$, $z \in \mathbb{C}^+$. (2) If $p/n \to 0$, then $F_{\sqrt{n/p}\,(R - \Gamma)}$ converges weakly to a distribution function $F$ whose Stieltjes transform $s$ satisfies $s(z) = -\int \frac{dH(t)}{z + t\, \tilde{s}(z)}$, $z \in \mathbb{C}^+$, where $\tilde{s}$ is the unique solution to $\tilde{s}(z) = -\int \frac{t\, dH(t)}{z + t\, \tilde{s}(z)}$, $z \in \mathbb{C}^+$.

17 Special Case A = I. Simplified assumptions: (1) iid, symmetric entries $X_{it} \stackrel{d}{=} X$; (2) growth regime: $\lim_p p/n = \gamma \in [0, 1]$.

18–19 Marčenko–Pastur and Semicircle Law. The Marčenko–Pastur law $F_\gamma$ has density $f_\gamma(x) = \frac{1}{2\pi x \gamma} \sqrt{(b - x)(x - a)}$ if $x \in [a, b]$, and $0$ otherwise, where $a = (1 - \sqrt{\gamma})^2$ and $b = (1 + \sqrt{\gamma})^2$. Semicircle law SC.
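The Marčenko–Pastur density can be sketched in a few lines; the numerical check below (integrating to total mass 1 for $\gamma \le 1$) is an illustration, not part of the talk:

```python
import numpy as np

def mp_density(x, gamma):
    """Marchenko-Pastur density f_gamma with support [a, b]."""
    a = (1 - np.sqrt(gamma)) ** 2
    b = (1 + np.sqrt(gamma)) ** 2
    x = np.asarray(x, dtype=float)
    f = np.zeros_like(x)
    m = (x >= a) & (x <= b)
    f[m] = np.sqrt((b - x[m]) * (x[m] - a)) / (2 * np.pi * x[m] * gamma)
    return f

gamma = 0.5
a, b = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
xs = np.linspace(a, b, 200001)
f = mp_density(xs, gamma)
mass = np.sum((f[1:] + f[:-1]) * np.diff(xs)) / 2   # trapezoidal rule

assert abs(mass - 1.0) < 1e-3   # f_gamma integrates to 1 when gamma <= 1
```

For $\gamma > 1$ the law would additionally carry a point mass $1 - 1/\gamma$ at zero, which is outside the regime $\gamma \in [0, 1]$ considered here.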

20–21 Extreme Eigenvalues. Largest and smallest eigenvalues of $R$: if $p/n \to \gamma \in [0, 1]$ and $\mathbb{E}[X^4] < \infty$, then $\sqrt{n/p}\, (\lambda_1(R) - 1) \to 2 + \sqrt{\gamma}$ a.s. and $\sqrt{n/p}\, (\lambda_p(R) - 1) \to -2 + \sqrt{\gamma}$ a.s. Earlier: $\|R - \Gamma\| = O(\sqrt{p/n})$ a.s. In this case: $\sqrt{n/p}\, \|R - \Gamma\| \to 2 + \sqrt{\gamma}$ a.s.

22–23 Limiting Spectral Distribution. Marčenko–Pastur theorem: assume $\mathbb{E}[X^2] = 1$. Then $(F_S)$ converges weakly to $F_\gamma$. If $\mathbb{E}[X^4] < \infty$ and $p/n \to 0$, then $(F_{\sqrt{n/p}\,(S - I)})$ converges weakly to SC. JH (2018+): under the domain-of-attraction type condition for the Gaussian law, $\lim_{p \to \infty} \sqrt{np}\; \mathbb{E}[Y_{11}^4] = 0$, the sequence $(F_R)$ converges weakly to $F_\gamma$. If in addition $p/n \to 0$, then $(F_{\sqrt{n/p}\,(R - I)})$ converges weakly to SC. Here $Y_{ij} = X_{ij} / \sqrt{\sum_{t=1}^n X_{it}^2}$.

24–25 Simulation Study. Regular variation with index $\alpha > 0$: $P(|X| > x) = x^{-\alpha} L(x)$, where $L$ is a slowly varying function. This implies $\mathbb{E}[|X|^{\alpha + \varepsilon}] = \infty$ for any $\varepsilon > 0$. Procedure: (1) simulate $X$; (2) plot histograms of $(\lambda_i(R))$ and $(\lambda_i(S))$; (3) compare with the Marčenko–Pastur density.
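A sketch of this procedure, assuming symmetric pure-Pareto entries ($P(|X| > x) = x^{-\alpha}$, $x \ge 1$) standardized to unit variance; with $\alpha = 6 > 4$ the fourth moment is finite, so both spectra should fill out the Marčenko–Pastur support:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, n, p = 6.0, 2000, 1000        # E[X^4] < inf since alpha > 4
gamma = p / n

# symmetric Pareto entries via inverse-transform sampling, then standardized
U = rng.uniform(size=(p, n))
X = U ** (-1.0 / alpha) * rng.choice([-1.0, 1.0], size=(p, n))
X = (X - X.mean()) / X.std()

S = X @ X.T / n
d = np.sqrt(np.diag(S))
R = S / np.outer(d, d)

lamS = np.linalg.eigvalsh(S)
lamR = np.linalg.eigvalsh(R)

# the bulk of both spectra lies near the Marchenko-Pastur support [a, b]
a, b = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
assert np.mean((lamS > a - 0.1) & (lamS < b + 0.1)) > 0.95
assert np.mean((lamR > a - 0.1) & (lamR < b + 0.1)) > 0.95
```

In the talk's histograms the comparison is visual; the assertions above are only a crude stand-in for the plot step.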

26 Figure: eigenvalue histograms for $\alpha = 6$, $n = 2000$, $p = 1000$.


28 Infinite Fourth Moment. Regular variation with index $\alpha \in (0, 4)$. Normalizing sequence $(a_{np}^2)$ such that $np\, P(X^2 > a_{np}^2 x) \to x^{-\alpha/2}$, as $n \to \infty$, for $x > 0$. Then $a_{np} = (np)^{1/\alpha}\, \ell(np)$ for a slowly varying function $\ell$.

29 Reduction to Diagonal. Assume $X$ with iid regularly varying entries, $\alpha \in (0, 4)$, and $p = n^\beta \ell(n)$ with $\beta \in [0, 1]$. We have $a_{np}^{-2}\, \|X X^\top - \operatorname{diag}(X X^\top)\| \xrightarrow{P} 0$, where $\|\cdot\|$ denotes the spectral norm and $(X X^\top)_{ij} = \sum_{t=1}^n X_{it} X_{jt}$.

30 Eigenvalues. Weyl's inequality: $\max_{i=1,\dots,p} |\lambda_i(A + B) - \lambda_i(A)| \le \|B\|$. Choose $A + B = X X^\top$ and $A = \operatorname{diag}(X X^\top)$ to obtain $a_{np}^{-2} \max_{i=1,\dots,p} |\lambda_i(X X^\top) - \lambda_i(\operatorname{diag}(X X^\top))| \xrightarrow{P} 0$, $n \to \infty$. Note: the limit theory for $(\lambda_i(S))$ is thereby reduced to the diagonal entries $(S_{ii})$.
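This reduction can be tried numerically. The sketch below assumes pure Pareto tails, so that $a_{np} = (np)^{1/\alpha}$ without the slowly varying factor; the assertion checks the Weyl bound itself:

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, n, p = 1.6, 1000, 200              # infinite fourth (and second) moment regime
a_np = (n * p) ** (1.0 / alpha)           # normalization for pure Pareto tails

U = rng.uniform(size=(p, n))
X = U ** (-1.0 / alpha) * rng.choice([-1.0, 1.0], size=(p, n))   # P(|X| > x) = x^{-alpha}

XXt = X @ X.T
D = np.diag(np.diag(XXt))

lam_full = np.sort(np.linalg.eigvalsh(XXt))[::-1]
lam_diag = np.sort(np.diag(XXt))[::-1]    # eigenvalues of diag(XX') are its diagonal entries

# Weyl: max_i |lambda_i(XX') - lambda_i(diag(XX'))| <= ||XX' - diag(XX')||
gap = np.max(np.abs(lam_full - lam_diag)) / a_np ** 2
norm = np.linalg.norm(XXt - D, 2) / a_np ** 2

assert gap <= norm + 1e-6
```

In repeated runs the normalized `norm` is typically small, matching the reduction-to-diagonal theorem, but convergence in probability is slow for $\alpha$ near 2.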

31 Example: Eigenvalues. Figure: smoothed histogram, based on simulations, of the approximation error for the normalized eigenvalue $a_{np}^{-2} \lambda_1(S)$ for entries $X_{it}$ with $\alpha = 1.6$, $\beta = 1$, $n = 1000$ and $p = 200$.

32 Eigenvectors. Let $v_k$ denote the unit eigenvector of $S$ associated with $\lambda_k(S)$. The unit eigenvectors of $\operatorname{diag}(S)$ are the canonical basis vectors $e_j$. Eigenvector theorem: assume $X$ with iid regularly varying entries with index $\alpha \in (0, 4)$ and $p_n = n^\beta \ell(n)$ with $\beta \in [0, 1]$. Then for any fixed $k \ge 1$, $\|v_k - e_{L_k}\|_{\ell^2} \xrightarrow{P} 0$, $n \to \infty$, where $L_k$ is the index of the $k$-th largest diagonal entry of $S$.

33 Localization vs. Delocalization. Figures: components (index vs. size) of the eigenvector $v_1$ for Pareto data, $X \sim \text{Pareto}(0.8)$ (localized), and for normal data, $X \sim N(0, 1)$ (delocalized); $p = 200$.

34 Point Process of Normalized Eigenvalues. Point process convergence: $N_n = \sum_{i=1}^p \delta_{a_{np}^{-2} \lambda_i(X X^\top)} \xrightarrow{d} \sum_{i=1}^\infty \delta_{\Gamma_i^{-2/\alpha}} = N$. The limit is a PRM on $(0, \infty)$ with mean measure $\mu(x, \infty) = x^{-\alpha/2}$, $x > 0$, and $\Gamma_i = E_1 + \dots + E_i$, with $(E_i)$ iid standard exponential.

35–36 Point Process of Normalized Eigenvalues. Limiting distribution: for $k \ge 1$, $\lim_{n} P(a_{np}^{-2} \lambda_k(X X^\top) \le x) = \lim_n P(N_n(x, \infty) < k) = P(N(x, \infty) < k) = \sum_{s=0}^{k-1} \frac{(x^{-\alpha/2})^s}{s!}\, e^{-x^{-\alpha/2}}$, $x > 0$. Largest eigenvalue: $\frac{n}{a_{np}^2}\, \lambda_1(S) \xrightarrow{d} \Gamma_1^{-2/\alpha}$, where the limit has a Fréchet distribution with parameter $\alpha/2$. References: Soshnikov (2006), Auffinger et al. (2009), Auffinger and Tang (2016), Davis et al. (2014), JH and Mikosch (2016).
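Since $\Gamma_1$ is standard exponential, the Fréchet form of the limit, $P(\Gamma_1^{-2/\alpha} \le x) = e^{-x^{-\alpha/2}}$, can be verified by a quick Monte Carlo sketch (an illustration, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(6)
alpha = 1.6
N = 200_000

# Gamma_1 = E_1 is standard exponential; the limit of (n / a_np^2) lambda_1(S)
# is claimed to be Gamma_1^{-2/alpha}
G1 = rng.exponential(size=N)
V = G1 ** (-2.0 / alpha)

# Frechet(alpha/2) distribution function: P(V <= x) = exp(-x^{-alpha/2})
x = 2.0
empirical = np.mean(V <= x)
theoretical = np.exp(-x ** (-alpha / 2))

assert abs(empirical - theoretical) < 0.01
```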

37 Figure: $\alpha = 3.99$, $n = 2000$, $p = 1000$.

38 Figure: $\alpha = 3$, $n = 2000$, $p = 1000$.

39 Figure: $\alpha = 2.1$, $n = 10000$, $p = 1000$.

40–41 Heavy Tails and Dependence. $(Z_{it})$: iid field of regularly varying random variables. Stochastic volatility model: $X = (Z_{it}\, \sigma_{it}^{(n)})_{p \times n}$. Generate a deterministic covariance structure $A$: $X = A^{1/2} Z$; Davis et al. (2014).

42–44 Heavy Tails and Dependence. $(Z_{it})$: iid field of regularly varying random variables. Dependence among rows and columns: $X_{it} = \sum_{l=0}^\infty \sum_{k=0}^\infty h_{kl}\, Z_{i-k, t-l}$ with some constants $h_{kl}$; Davis et al. (2016). Relation to the iid case: $X X^\top = \sum_{l_1, l_2 = 0}^\infty \sum_{k_1, k_2 = 0}^\infty h_{k_1 l_1} h_{k_2 l_2}\, Z(k_1, l_1)\, Z^\top(k_2, l_2)$, where $Z(k, l) = (Z_{i-k, t-l})_{i=1,\dots,p;\, t=1,\dots,n}$, $k, l \in \mathbb{Z}$. Location of squares: $M_{ij} = \sum_{l \in \mathbb{Z}} h_{il} h_{jl}$, $i, j \in \mathbb{Z}$.

45 Autocovariance Matrices. For $s \ge 0$, let $X_n(s) = (X_{i, t+s})_{i=1,\dots,p;\, t=1,\dots,n}$, $n \ge 1$; then $X_n = X_n(0)$. Autocovariance matrix for lag $s$: $X_n(0)\, X_n^\top(s)$. Limit theory for the singular values of such matrices.
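Constructing the lag-$s$ autocovariance matrix from a simulated strip of data is straightforward; a minimal sketch (Gaussian noise is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(7)
p, n, s = 5, 100, 2

# simulate a strip long enough to shift by s columns
Z = rng.standard_normal((p, n + s))

def X_n(lag):
    """Lagged p x n data matrix X_n(lag) = (X_{i, t+lag})."""
    return Z[:, lag:lag + n]

C_s = X_n(0) @ X_n(s).T                           # lag-s autocovariance matrix X_n(0) X_n(s)'

# C_s is not symmetric for s > 0, so one studies its singular values
sv = np.linalg.svd(C_s, compute_uv=False)

assert C_s.shape == (p, p)
assert np.all(sv >= 0) and not np.allclose(C_s, C_s.T)
```

For $s = 0$ this reduces to $n S$, which is symmetric, and singular values coincide with eigenvalues; the asymmetry for $s > 0$ is why the talk switches to singular-value limit theory.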

46–47 Autocovariance Eigenvectors. Figure: coordinates (index vs. size) of the 1st to 5th eigenvectors of $P(0,0)$.

48 Future Work. Distribution-free measures of dependence: Spearman's rank correlation matrices; Kendall's $\tau$ correlation matrices (a U-statistic!).

49 Thank you!


Random Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws. Symeon Chatzinotas February 11, 2013 Luxembourg Random Matrix Theory Lecture 1 Introduction, Ensembles and Basic Laws Symeon Chatzinotas February 11, 2013 Luxembourg Outline 1. Random Matrix Theory 1. Definition 2. Applications 3. Asymptotics 2. Ensembles

More information

Applications and fundamental results on random Vandermon

Applications and fundamental results on random Vandermon Applications and fundamental results on random Vandermonde matrices May 2008 Some important concepts from classical probability Random variables are functions (i.e. they commute w.r.t. multiplication)

More information

Asymptotic Distribution of the Largest Eigenvalue via Geometric Representations of High-Dimension, Low-Sample-Size Data

Asymptotic Distribution of the Largest Eigenvalue via Geometric Representations of High-Dimension, Low-Sample-Size Data Sri Lankan Journal of Applied Statistics (Special Issue) Modern Statistical Methodologies in the Cutting Edge of Science Asymptotic Distribution of the Largest Eigenvalue via Geometric Representations

More information

ECO 513 Fall 2009 C. Sims CONDITIONAL EXPECTATION; STOCHASTIC PROCESSES

ECO 513 Fall 2009 C. Sims CONDITIONAL EXPECTATION; STOCHASTIC PROCESSES ECO 513 Fall 2009 C. Sims CONDIIONAL EXPECAION; SOCHASIC PROCESSES 1. HREE EXAMPLES OF SOCHASIC PROCESSES (I) X t has three possible time paths. With probability.5 X t t, with probability.25 X t t, and

More information

Inhomogeneous circular laws for random matrices with non-identically distributed entries

Inhomogeneous circular laws for random matrices with non-identically distributed entries Inhomogeneous circular laws for random matrices with non-identically distributed entries Nick Cook with Walid Hachem (Telecom ParisTech), Jamal Najim (Paris-Est) and David Renfrew (SUNY Binghamton/IST

More information

A new method to bound rate of convergence

A new method to bound rate of convergence A new method to bound rate of convergence Arup Bose Indian Statistical Institute, Kolkata, abose@isical.ac.in Sourav Chatterjee University of California, Berkeley Empirical Spectral Distribution Distribution

More information

Semicircle law on short scales and delocalization for Wigner random matrices

Semicircle law on short scales and delocalization for Wigner random matrices Semicircle law on short scales and delocalization for Wigner random matrices László Erdős University of Munich Weizmann Institute, December 2007 Joint work with H.T. Yau (Harvard), B. Schlein (Munich)

More information

CS281 Section 4: Factor Analysis and PCA

CS281 Section 4: Factor Analysis and PCA CS81 Section 4: Factor Analysis and PCA Scott Linderman At this point we have seen a variety of machine learning models, with a particular emphasis on models for supervised learning. In particular, we

More information

Random Correlation Matrices, Top Eigenvalue with Heavy Tails and Financial Applications

Random Correlation Matrices, Top Eigenvalue with Heavy Tails and Financial Applications Random Correlation Matrices, Top Eigenvalue with Heavy Tails and Financial Applications J.P Bouchaud with: M. Potters, G. Biroli, L. Laloux, M. A. Miceli http://www.cfm.fr Portfolio theory: Basics Portfolio

More information

Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction

Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction Random Matrix Theory and its applications to Statistics and Wireless Communications Eigenvalues and Singular Values of Random Matrices: A Tutorial Introduction Sergio Verdú Princeton University National

More information

Estimation of the Global Minimum Variance Portfolio in High Dimensions

Estimation of the Global Minimum Variance Portfolio in High Dimensions Estimation of the Global Minimum Variance Portfolio in High Dimensions Taras Bodnar, Nestor Parolya and Wolfgang Schmid 07.FEBRUARY 2014 1 / 25 Outline Introduction Random Matrix Theory: Preliminary Results

More information

1 Tridiagonal matrices

1 Tridiagonal matrices Lecture Notes: β-ensembles Bálint Virág Notes with Diane Holcomb 1 Tridiagonal matrices Definition 1. Suppose you have a symmetric matrix A, we can define its spectral measure (at the first coordinate

More information

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular

DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix to upper-triangular form) Given: matrix C = (c i,j ) n,m i,j=1 ODE and num math: Linear algebra (N) [lectures] c phabala 2016 DEN: Linear algebra numerical view (GEM: Gauss elimination method for reducing a full rank matrix

More information

Sliced Inverse Regression

Sliced Inverse Regression Sliced Inverse Regression Ge Zhao gzz13@psu.edu Department of Statistics The Pennsylvania State University Outline Background of Sliced Inverse Regression (SIR) Dimension Reduction Definition of SIR Inversed

More information

Comparison Method in Random Matrix Theory

Comparison Method in Random Matrix Theory Comparison Method in Random Matrix Theory Jun Yin UW-Madison Valparaíso, Chile, July - 2015 Joint work with A. Knowles. 1 Some random matrices Wigner Matrix: H is N N square matrix, H : H ij = H ji, EH

More information

Dependence. MFM Practitioner Module: Risk & Asset Allocation. John Dodson. September 11, Dependence. John Dodson. Outline.

Dependence. MFM Practitioner Module: Risk & Asset Allocation. John Dodson. September 11, Dependence. John Dodson. Outline. MFM Practitioner Module: Risk & Asset Allocation September 11, 2013 Before we define dependence, it is useful to define Random variables X and Y are independent iff For all x, y. In particular, F (X,Y

More information

10-704: Information Processing and Learning Fall Lecture 9: Sept 28

10-704: Information Processing and Learning Fall Lecture 9: Sept 28 10-704: Information Processing and Learning Fall 2016 Lecturer: Siheng Chen Lecture 9: Sept 28 Note: These notes are based on scribed notes from Spring15 offering of this course. LaTeX template courtesy

More information

Review. DS GA 1002 Statistical and Mathematical Models. Carlos Fernandez-Granda

Review. DS GA 1002 Statistical and Mathematical Models.   Carlos Fernandez-Granda Review DS GA 1002 Statistical and Mathematical Models http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall16 Carlos Fernandez-Granda Probability and statistics Probability: Framework for dealing with

More information

Regular Variation and Extreme Events for Stochastic Processes

Regular Variation and Extreme Events for Stochastic Processes 1 Regular Variation and Extreme Events for Stochastic Processes FILIP LINDSKOG Royal Institute of Technology, Stockholm 2005 based on joint work with Henrik Hult www.math.kth.se/ lindskog 2 Extremes for

More information

Invertibility of random matrices

Invertibility of random matrices University of Michigan February 2011, Princeton University Origins of Random Matrix Theory Statistics (Wishart matrices) PCA of a multivariate Gaussian distribution. [Gaël Varoquaux s blog gael-varoquaux.info]

More information

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition

Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Linear Algebra review Powers of a diagonalizable matrix Spectral decomposition Prof. Tesler Math 283 Fall 2018 Also see the separate version of this with Matlab and R commands. Prof. Tesler Diagonalizing

More information

ECE 598: Representation Learning: Algorithms and Models Fall 2017

ECE 598: Representation Learning: Algorithms and Models Fall 2017 ECE 598: Representation Learning: Algorithms and Models Fall 2017 Lecture 1: Tensor Methods in Machine Learning Lecturer: Pramod Viswanathan Scribe: Bharath V Raghavan, Oct 3, 2017 11 Introduction Tensors

More information

High Dimensional Covariance and Precision Matrix Estimation

High Dimensional Covariance and Precision Matrix Estimation High Dimensional Covariance and Precision Matrix Estimation Wei Wang Washington University in St. Louis Thursday 23 rd February, 2017 Wei Wang (Washington University in St. Louis) High Dimensional Covariance

More information

7. MULTIVARATE STATIONARY PROCESSES

7. MULTIVARATE STATIONARY PROCESSES 7. MULTIVARATE STATIONARY PROCESSES 1 1 Some Preliminary Definitions and Concepts Random Vector: A vector X = (X 1,..., X n ) whose components are scalar-valued random variables on the same probability

More information

Markov chains and the number of occurrences of a word in a sequence ( , 11.1,2,4,6)

Markov chains and the number of occurrences of a word in a sequence ( , 11.1,2,4,6) Markov chains and the number of occurrences of a word in a sequence (4.5 4.9,.,2,4,6) Prof. Tesler Math 283 Fall 208 Prof. Tesler Markov Chains Math 283 / Fall 208 / 44 Locating overlapping occurrences

More information

Probabilistic Latent Semantic Analysis

Probabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjin@postech.ac.kr

More information

Notes on Time Series Modeling

Notes on Time Series Modeling Notes on Time Series Modeling Garey Ramey University of California, San Diego January 17 1 Stationary processes De nition A stochastic process is any set of random variables y t indexed by t T : fy t g

More information

. a m1 a mn. a 1 a 2 a = a n

. a m1 a mn. a 1 a 2 a = a n Biostat 140655, 2008: Matrix Algebra Review 1 Definition: An m n matrix, A m n, is a rectangular array of real numbers with m rows and n columns Element in the i th row and the j th column is denoted by

More information

Inference for High Dimensional Robust Regression

Inference for High Dimensional Robust Regression Department of Statistics UC Berkeley Stanford-Berkeley Joint Colloquium, 2015 Table of Contents 1 Background 2 Main Results 3 OLS: A Motivating Example Table of Contents 1 Background 2 Main Results 3 OLS:

More information

Linear Algebra in Computer Vision. Lecture2: Basic Linear Algebra & Probability. Vector. Vector Operations

Linear Algebra in Computer Vision. Lecture2: Basic Linear Algebra & Probability. Vector. Vector Operations Linear Algebra in Computer Vision CSED441:Introduction to Computer Vision (2017F Lecture2: Basic Linear Algebra & Probability Bohyung Han CSE, POSTECH bhhan@postech.ac.kr Mathematics in vector space Linear

More information

Universality of local spectral statistics of random matrices

Universality of local spectral statistics of random matrices Universality of local spectral statistics of random matrices László Erdős Ludwig-Maximilians-Universität, Munich, Germany CRM, Montreal, Mar 19, 2012 Joint with P. Bourgade, B. Schlein, H.T. Yau, and J.

More information

Review (probability, linear algebra) CE-717 : Machine Learning Sharif University of Technology

Review (probability, linear algebra) CE-717 : Machine Learning Sharif University of Technology Review (probability, linear algebra) CE-717 : Machine Learning Sharif University of Technology M. Soleymani Fall 2012 Some slides have been adopted from Prof. H.R. Rabiee s and also Prof. R. Gutierrez-Osuna

More information

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there

More information

Determinantal point processes and random matrix theory in a nutshell

Determinantal point processes and random matrix theory in a nutshell Determinantal point processes and random matrix theory in a nutshell part II Manuela Girotti based on M. Girotti s PhD thesis, A. Kuijlaars notes from Les Houches Winter School 202 and B. Eynard s notes

More information

MIT Spring 2015

MIT Spring 2015 Regression Analysis MIT 18.472 Dr. Kempthorne Spring 2015 1 Outline Regression Analysis 1 Regression Analysis 2 Multiple Linear Regression: Setup Data Set n cases i = 1, 2,..., n 1 Response (dependent)

More information

Does k-th Moment Exist?

Does k-th Moment Exist? Does k-th Moment Exist? Hitomi, K. 1 and Y. Nishiyama 2 1 Kyoto Institute of Technology, Japan 2 Institute of Economic Research, Kyoto University, Japan Email: hitomi@kit.ac.jp Keywords: Existence of moments,

More information

Basic concepts of probability theory

Basic concepts of probability theory Basic concepts of probability theory Random variable discrete/continuous random variable Transform Z transform, Laplace transform Distribution Geometric, mixed-geometric, Binomial, Poisson, exponential,

More information

Practical conditions on Markov chains for weak convergence of tail empirical processes

Practical conditions on Markov chains for weak convergence of tail empirical processes Practical conditions on Markov chains for weak convergence of tail empirical processes Olivier Wintenberger University of Copenhagen and Paris VI Joint work with Rafa l Kulik and Philippe Soulier Toronto,

More information

Eigenvalue spectra of time-lagged covariance matrices: Possibilities for arbitrage?

Eigenvalue spectra of time-lagged covariance matrices: Possibilities for arbitrage? Eigenvalue spectra of time-lagged covariance matrices: Possibilities for arbitrage? Stefan Thurner www.complex-systems.meduniwien.ac.at www.santafe.edu London July 28 Foundations of theory of financial

More information

The Multivariate Normal Distribution. In this case according to our theorem

The Multivariate Normal Distribution. In this case according to our theorem The Multivariate Normal Distribution Defn: Z R 1 N(0, 1) iff f Z (z) = 1 2π e z2 /2. Defn: Z R p MV N p (0, I) if and only if Z = (Z 1,..., Z p ) T with the Z i independent and each Z i N(0, 1). In this

More information

Notes on Random Vectors and Multivariate Normal

Notes on Random Vectors and Multivariate Normal MATH 590 Spring 06 Notes on Random Vectors and Multivariate Normal Properties of Random Vectors If X,, X n are random variables, then X = X,, X n ) is a random vector, with the cumulative distribution

More information