Cleaning correlation matrices, Random Matrix Theory & HCIZ integrals
1 Cleaning correlation matrices, Random Matrix Theory & HCIZ integrals. J.-P. Bouchaud, with M. Potters, L. Laloux, R. Allez, J. Bun, S. Majumdar
2 Portfolio theory: Basics. Portfolio weights $w_i$, asset returns $X_i^t$. If the expected/predicted gains are $g_i$, then the expected gain of the portfolio is $G = \sum_i w_i g_i$. Let risk be defined as the variance of the portfolio returns (maybe not a good definition!): $R^2 = \sum_{ij} w_i \sigma_i C_{ij} \sigma_j w_j$, where $\sigma_i^2$ is the variance of asset $i$ and $C_{ij}$ is the correlation matrix.
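The two definitions above can be checked on a toy portfolio; this is a minimal numerical sketch (the asset numbers are invented for illustration):

```python
import numpy as np

# Toy sketch (numbers invented): expected gain G = sum_i w_i g_i and
# risk R^2 = sum_ij w_i sigma_i C_ij sigma_j w_j.
w = np.array([0.5, 0.5])              # portfolio weights w_i
g = np.array([0.02, 0.04])            # predicted gains g_i
sigma = np.array([0.1, 0.2])          # volatilities sigma_i
C = np.array([[1.0, 0.5],
              [0.5, 1.0]])            # correlation matrix C_ij

G = w @ g                             # expected portfolio gain
cov = np.outer(sigma, sigma) * C      # covariance sigma_i C_ij sigma_j
R2 = w @ cov @ w                      # portfolio variance (risk^2)
```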
3 Markowitz Optimization. Find the portfolio with maximum expected return for a given risk or, equivalently, minimum risk for a given return $G$. In matrix notation: $w_C = G\,\frac{C^{-1}g}{g^T C^{-1} g}$, where all gains are measured with respect to the risk-free rate and $\sigma_i = 1$ (absorbed in $g_i$). Note: in the presence of non-linear constraints, e.g. $\sum_i |w_i| \leq A$, this becomes a spin-glass problem! (see [JPB, Galluccio, Potters])
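A minimal sketch of the Markowitz formula (toy inputs of my own choosing, with $\sigma_i = 1$ as on the slide); the resulting portfolio attains the target gain $G$ with risk $1/(g^T C^{-1} g)$:

```python
import numpy as np

# Sketch of w_C = G C^{-1} g / (g^T C^{-1} g) on invented inputs.
def markowitz_weights(C, g, G=1.0):
    x = np.linalg.solve(C, g)         # C^{-1} g without forming the inverse
    return G * x / (g @ x)

C = np.array([[1.0, 0.3],
              [0.3, 1.0]])
g = np.array([0.5, 1.0])
w = markowitz_weights(C, g, G=1.0)

gain = w @ g                          # should equal the target G = 1
risk2 = w @ C @ w                     # minimal risk 1/(g^T C^{-1} g)
```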
4 Markowitz Optimization. More explicitly: $w \propto \sum_\alpha \lambda_\alpha^{-1}(\psi_\alpha \cdot g)\,\psi_\alpha = g + \sum_\alpha (\lambda_\alpha^{-1} - 1)(\psi_\alpha \cdot g)\,\psi_\alpha$. Compared to the naive allocation $w \propto g$: eigenvectors with $\lambda \gg 1$ are projected out; eigenvectors with $\lambda \ll 1$ are overallocated. Very important for stat-arb strategies (for example).
5 Empirical Correlation Matrix. Before inverting them, how should one estimate/clean correlation matrices? Empirical equal-time correlation matrix $E$: $E_{ij} = \frac{1}{T}\sum_t \frac{X_i^t X_j^t}{\sigma_i \sigma_j}$. Order $N^2$ quantities estimated with $NT$ data points. When $T < N$, $E$ is not even invertible. Typically: $N = \dots$; $T = \dots$ days (10 years; beware of high frequencies), so $q := N/T = O(1)$.
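The construction of $E$ can be sketched on synthetic data ($N$, $T$ are illustrative choices giving $q = N/T = 0.5$; the returns are i.i.d., so the true $C$ is the identity):

```python
import numpy as np

# Building the empirical correlation matrix E from synthetic returns.
rng = np.random.default_rng(0)
N, T = 50, 100                        # q = N/T = 0.5
X = rng.standard_normal((T, N))       # X[t, i]: return of asset i at time t

Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # zero mean, unit sigma_i
E = Xs.T @ Xs / T                     # E_ij = (1/T) sum_t X_i^t X_j^t
```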
6 Risk of Optimized Portfolios. In-sample risk (for $G = 1$): $R^2_{in} = w_E^T E\, w_E = \frac{1}{g^T E^{-1} g}$. True minimal risk: $R^2_{true} = w_C^T C\, w_C = \frac{1}{g^T C^{-1} g}$. Out-of-sample risk: $R^2_{out} = w_E^T C\, w_E = \frac{g^T E^{-1} C E^{-1} g}{(g^T E^{-1} g)^2}$.
7 Risk of Optimized Portfolios. Let $E$ be a noisy, unbiased estimator of $C$. Using convexity arguments, and for large matrices: $R^2_{in} \leq R^2_{true} \leq R^2_{out}$. In fact, using RMT: $R^2_{out} = R^2_{true}\,(1-q)^{-1} = R^2_{in}\,(1-q)^{-2}$, independently of $C$! (for large $N$). If $C$ has some time dependence (beyond observation noise), one expects an even worse underestimation.
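The $(1-q)$ factors can be checked by Monte Carlo in the null case $C = I$ with an arbitrary gain vector $g$ (all choices below are illustrative):

```python
import numpy as np

# Monte-Carlo check of R2_in ~ R2_true (1-q) and R2_out ~ R2_true/(1-q).
rng = np.random.default_rng(1)
N, T = 200, 800
q = N / T
C = np.eye(N)
g = rng.standard_normal(N)

X = rng.standard_normal((T, N))       # Gaussian returns with true cov C = I
E = X.T @ X / T                       # noisy, unbiased estimator of C

def min_risk_w(M, g):                 # Markowitz weights for G = 1
    x = np.linalg.solve(M, g)
    return x / (g @ x)

w_E, w_C = min_risk_w(E, g), min_risk_w(C, g)
R2_in = w_E @ E @ w_E                 # in-sample risk
R2_true = w_C @ C @ w_C               # true minimal risk
R2_out = w_E @ C @ w_E                # out-of-sample risk
```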
8 In Sample vs. Out of Sample. [Figure: return vs. risk for raw in-sample, cleaned in-sample, cleaned out-of-sample and raw out-of-sample portfolios]
9 Rotational invariance hypothesis (RIH). In the absence of any cogent prior on the eigenvectors, one can assume that $C$ is a member of a Rotationally Invariant Ensemble (RIH). Surely not true for the market mode $v_1 \approx (1,1,\dots,1)/\sqrt{N}$, with $\lambda_1 \approx N\bar\rho$, but OK in the bulk (see below). A more plausible assumption: factor models, hierarchical, block-diagonal $C$'s ("Parisi matrices"). Cleaning $E$ within the RIH: keep the eigenvectors, play with the eigenvalues. The simplest, classical scheme, shrinkage: $\hat C = (1-\alpha)E + \alpha I$, i.e. $\lambda_C = (1-\alpha)\lambda_E + \alpha$, with $\alpha \in [0,1]$.
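Linear shrinkage is a one-liner; the sketch below also verifies the stated eigenvalue map $\lambda_C = (1-\alpha)\lambda_E + \alpha$ on a random symmetric matrix:

```python
import numpy as np

# Linear shrinkage keeps the eigenvectors and maps each eigenvalue of E
# to (1 - alpha) * lambda_E + alpha.
def shrink(E, alpha):
    return (1.0 - alpha) * E + alpha * np.eye(E.shape[0])

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 20))
E = (A + A.T) / 2                     # any symmetric matrix will do here
alpha = 0.3
lam_E = np.linalg.eigvalsh(E)
lam_S = np.linalg.eigvalsh(shrink(E, alpha))
```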
10 RMT: from $\rho_C(\lambda)$ to $\rho_E(\lambda)$. Solution using different techniques (replicas, diagrams, free matrices) gives the resolvent $G_E(z) = N^{-1}\,\mathrm{Tr}(zI - E)^{-1}$ as: $G_E(z) = \int d\lambda\, \frac{\rho_C(\lambda)}{z - \lambda\,(1 - q + qzG_E(z))}$. Note: one should work from $\rho_C \to G_E$. Example 1: $C = I$ (null hypothesis), Marcenko-Pastur [1967]: $\rho_E(\lambda) = \frac{\sqrt{(\lambda_+ - \lambda)(\lambda - \lambda_-)}}{2\pi q\lambda}$, $\lambda \in [\lambda_-, \lambda_+]$ with $\lambda_\pm = (1 \pm \sqrt{q})^2$. Suggests a second cleaning scheme (eigenvalue clipping, [Laloux et al. 1997]): any eigenvalue beyond the Marcenko-Pastur edge can be trusted, the rest is noise.
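The Marcenko-Pastur density is easy to evaluate numerically; as a sanity check, it integrates to one over its support (the value $q = 0.25$ is an arbitrary example):

```python
import numpy as np

# Marcenko-Pastur density for C = I and q = N/T < 1; normalized to one
# over the support [(1 - sqrt(q))^2, (1 + sqrt(q))^2].
def mp_density(lam, q):
    lm, lp = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2
    lam = np.asarray(lam, dtype=float)
    rho = np.zeros_like(lam)
    m = (lam > lm) & (lam < lp)
    rho[m] = np.sqrt((lp - lam[m]) * (lam[m] - lm)) / (2 * np.pi * q * lam[m])
    return rho

q = 0.25
lm, lp = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2
grid = np.linspace(lm, lp, 20001)
mass = mp_density(grid, q).sum() * (grid[1] - grid[0])   # total mass, ~ 1
```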
11 Eigenvalue clipping. Eigenvalues $\lambda < \lambda_+$ are replaced by a unique value, chosen so as to preserve $\mathrm{Tr}\,C = N$.
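A sketch of the clipping recipe (replacing the sub-edge eigenvalues by their average preserves the trace; the factor-plus-noise data below are invented for the demo):

```python
import numpy as np

# Eigenvalue clipping: eigenvalues below the MP edge (1 + sqrt(q))^2 are
# replaced by their average (trace-preserving); eigenvectors are kept.
def clip_eigenvalues(E, q):
    lam, V = np.linalg.eigh(E)
    noise = lam < (1 + np.sqrt(q))**2
    lam = lam.copy()
    if noise.any():
        lam[noise] = lam[noise].mean()
    return (V * lam) @ V.T            # V diag(lam) V^T

# demo: one strong common factor plus idiosyncratic noise
rng = np.random.default_rng(5)
N, T = 50, 200
X = 0.5 * rng.standard_normal((T, 1)) + rng.standard_normal((T, N))
X = (X - X.mean(0)) / X.std(0)
E = X.T @ X / T
E_clean = clip_eigenvalues(E, N / T)
```

The top (market-mode-like) eigenvalue sits far above the MP edge and is left untouched.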
12 RMT: from $\rho_C(\lambda)$ to $\rho_E(\lambda)$. The same resolvent relation, $G_E(z) = \int d\lambda\, \frac{\rho_C(\lambda)}{z - \lambda\,(1 - q + qzG_E(z))}$, applied to Example 2: power-law spectrum (motivated by data): $\rho_C(\lambda) = \frac{\mu A}{(\lambda - \lambda_0)^{1+\mu}}\,\Theta(\lambda - \lambda_{min})$. Suggests a third cleaning scheme (eigenvalue substitution, Potters et al. 2009, El Karoui 2010): $\lambda_E$ is replaced by the theoretical $\lambda_C$ with the same rank $k$.
13 Empirical Correlation Matrix. [Figure: eigenvalue density $\rho(\lambda)$ and eigenvalue vs. rank, comparing data with the dressed and raw power laws ($\mu = 2$) and Marcenko-Pastur; MP and generalized MP fits of the spectrum]
14 Eigenvalue cleaning. [Figure: out-of-sample risk $R^2$ vs. $\alpha$ for different one-parameter cleaning schemes: classical shrinkage, Ledoit-Wolf shrinkage, power-law substitution, eigenvalue clipping]
15 A RIH Bayesian approach. All the above schemes lack a rigorous framework and are at best ad-hoc recipes. A Bayesian framework: suppose $C$ belongs to a RIE, with prior $P(C)$, and assume Gaussian returns. Then one needs: $\langle C \rangle_{\{X_i^t\}} = \int \mathcal{D}C\, C\, P(C|\{X_i^t\})$ with $P(C|\{X_i^t\}) = Z^{-1}\exp\left[-N\,\mathrm{Tr}\,V(C,\{X_i^t\})\right]$, where (Bayes): $V(C,\{X_i^t\}) = \frac{1}{2q}\left[\log C + E C^{-1}\right] + V_0(C)$.
16 A Bayesian approach: a fully soluble case. $V_0(C) = (1+b)\log C + b\,C^{-1}$, $b > 0$: Inverse Wishart. $\rho_C(\lambda) \propto \frac{\sqrt{(\lambda_+ - \lambda)(\lambda - \lambda_-)}}{\lambda^2}$, with $\lambda_\pm = \left(1 + b \pm \sqrt{(1+b)^2 - b^2/4}\right)/b$. In this case the matrix integral can be done, leading exactly to the shrinkage recipe, with $\alpha = f(b,q)$. Note that $b$ can be determined from the empirical spectrum of $E$, using the generalized MP formula.
17 The general case: HCIZ integrals. A Coulomb gas approach: integrate over the orthogonal group, $C = O\Lambda O^T$, where $\Lambda$ is diagonal: $\int \mathcal{D}O\, \exp\left[-\frac{N}{2q}\,\mathrm{Tr}\left[\log\Lambda + E\,O^T\Lambda^{-1}O + 2q\,V_0(\Lambda)\right]\right]$. Can one obtain a large-$N$ estimate of the HCIZ integral $F(\rho_A,\rho_B) = \lim_{N\to\infty} \frac{1}{N^2}\ln\int \mathcal{D}O\, \exp\left[\frac{N}{2q}\,\mathrm{Tr}\,A\,O^T B\,O\right]$ in terms of the spectra of $A$ and $B$?
18 The general case: HCIZ integrals. Can one obtain a large-$N$ estimate of the HCIZ integral $F(\rho_A,\rho_B) = \lim_{N\to\infty} \frac{1}{N^2}\ln\int \mathcal{D}O\, \exp\left[\frac{N}{2q}\,\mathrm{Tr}\,A\,O^T B\,O\right]$ in terms of the spectra of $A$ and $B$? When $A$ (or $B$) is of finite rank, such a formula exists in terms of the R-transform of $B$ [Marinari, Parisi & Ritort, 1995]. When the ranks of $A, B$ are of order $N$, there is a formula due to Matytsin [1994] (in the unitary case), later proven rigorously by Guionnet & Zeitouni, but its derivation is quite obscure...
19 An instanton approach to large-$N$ HCIZ. Consider Dyson's Brownian motion for matrices. The eigenvalues obey: $dx_i = \sqrt{\frac{2}{\beta N}}\,dW + \frac{1}{N}\sum_{j\neq i}\frac{dt}{x_i - x_j}$. Constrain $x_i(t=0) = \lambda_{A,i}$ and $x_i(t=1) = \lambda_{B,i}$. The probability of such a path is given by a large-deviation/instanton formula, with: $\frac{d^2 x_i}{dt^2} = -\frac{2}{N^2}\sum_{l\neq i}\frac{1}{(x_i - x_l)^3}$.
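Dyson Brownian motion is easy to sample directly: the eigenvalues of a symmetric matrix Brownian motion obey exactly the SDE above (with $\beta = 1$). The sketch below is the unconstrained version, not the bridge conditioned on $\lambda_A$, $\lambda_B$; the normalization is chosen so that at $t=1$ the matrix is a standard GOE with semicircle support $[-2,2]$:

```python
import numpy as np

# Unconstrained Dyson Brownian motion (beta = 1), sampled by accumulating
# symmetric Gaussian increments on a matrix and diagonalizing at the end.
rng = np.random.default_rng(6)
N, n_steps = 50, 100
dt = 1.0 / n_steps

H = np.zeros((N, N))
for _ in range(n_steps):
    A = rng.standard_normal((N, N)) * np.sqrt(dt / (2 * N))
    H += A + A.T                      # symmetric matrix-Brownian increment
lam = np.linalg.eigvalsh(H)           # eigenvalue positions at time t = 1
```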
20 An instanton approach to large-$N$ HCIZ. Constrain $x_i(t=0) = \lambda_{A,i}$ and $x_i(t=1) = \lambda_{B,i}$; the probability of such a path is given by a large-deviation/instanton formula, with: $\frac{d^2 x_i}{dt^2} = -\frac{2}{N^2}\sum_{l\neq i}\frac{1}{(x_i - x_l)^3}$. This can be interpreted as the motion of particles interacting through an attractive two-body potential $\phi(r) = -(Nr)^{-2}$. Using the virial formula, one finally gets Matytsin's equations: $\partial_t\rho + \partial_x[\rho v] = 0$, $\partial_t v + v\,\partial_x v = \pi^2\rho\,\partial_x\rho$.
21 An instanton approach to large-$N$ HCIZ. Finally, the action associated with these trajectories is: $S \approx \frac{1}{2}\int dx\,\rho\left[v^2 + \frac{\pi^2}{3}\rho^2\right] - \frac{1}{2}\left[\int dx\,dy\,\rho_Z(x)\rho_Z(y)\ln|x-y|\right]_{Z=A}^{Z=B}$. Now, the link with HCIZ comes from noticing that the propagator of Brownian motion in matrix space is: $P(B|A) \propto \exp\left[-\frac{N}{2}\mathrm{Tr}(A-B)^2\right] = \exp\left[-\frac{N}{2}\left(\mathrm{Tr}A^2 + \mathrm{Tr}B^2 - 2\,\mathrm{Tr}\,A O B O^T\right)\right]$. Disregarding the eigenvectors of $B$ (i.e. integrating over $O$) leads to another expression for $P(\{\lambda_{B,i}\}|\{\lambda_{A,j}\})$ in terms of HCIZ that can be compared to the one using instantons. The final result for $F(\rho_A, \rho_B)$ is exactly Matytsin's expression, up to details (!)
22 Back to eigenvalue cleaning... Estimating HCIZ at large $N$ is only the first step, but... one still needs to apply it to $B = C^{-1}$, $A = E$, and to compute also correlation functions such as $\langle O_{ij}^2 \rangle$ between the eigenvectors of $E$ and those of $C^{-1}$, with the HCIZ weight. As we were working on this we discovered the work of Ledoit-Péché that solves the problem exactly using tools from RMT...
23 The Ledoit-Péché magic formula. The Ledoit-Péché [2011] formula is a non-linear shrinkage, given by: $\lambda_C = \frac{\lambda_E}{\left|1 - q + q\,\lambda_E \lim_{\epsilon\to0} G_E(\lambda_E - i\epsilon)\right|^2}$. Note 1: it is independent of $C$: only $G_E$ is needed (and is observable)! Note 2: when applied to the case where $C$ is inverse Wishart, this gives back the linear shrinkage. Note 3: still to be done: re-obtain these results using the HCIZ route (many interesting intermediate results to hope for!)
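A numerical sketch of the formula, with the resolvent $G_E(z) = N^{-1}\,\mathrm{Tr}(zI - E)^{-1}$ estimated from the sample eigenvalues themselves; the smoothing scale $\epsilon \sim N^{-1/2}$ is a heuristic choice of mine. In the null case $C = I$ the cleaned eigenvalues should collapse towards 1:

```python
import numpy as np

# Ledoit-Peche non-linear shrinkage: lambda_C = lambda_E / |1 - q +
# q lambda_E G_E(lambda_E - i eps)|^2, with G_E estimated from the data.
def ledoit_peche(lam, q, eps=None):
    N = lam.size
    if eps is None:
        eps = 1.0 / np.sqrt(N)        # heuristic smoothing scale
    z = lam - 1j * eps
    # G_E(z) = (1/N) sum_k 1/(z - lam_k), evaluated at each z_i
    G = (1.0 / (z[:, None] - lam[None, :])).sum(axis=1) / N
    return lam / np.abs(1.0 - q + q * lam * G) ** 2

# null test: true C = I, so cleaned eigenvalues should concentrate near 1
rng = np.random.default_rng(7)
N, T = 200, 800
X = rng.standard_normal((T, N))
lam = np.linalg.eigvalsh(X.T @ X / T)
lam_clean = ledoit_peche(lam, N / T)
```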
24 Eigenvalue cleaning: Ledoit-Péché. [Figure] Fit of the empirical distribution with $V_0(z) = a/z + b/z^2 + c/z^3$.
25 What about eigenvectors? Up to now, most results using RMT focus on eigenvalues. What about eigenvectors? What natural null hypothesis beyond the RIH? Are eigenvalues/eigendirections stable in time? An important source of risk for market/sector-neutral portfolios: a sudden or gradual rotation of the top eigenvectors! ... a little movie ...
26 What about eigenvectors? Correlation matrices need a certain time $T$ to be measured. Even if the true $C$ is fixed, its empirical determination fluctuates: $E_t = C + \text{noise}$. What is the dynamics of the empirical eigenvectors induced by measurement noise? Can one detect a genuine evolution of these eigenvectors beyond noise effects?
27 What about eigenvectors? More generally, can one say something about the eigenvectors of randomly perturbed matrices: $H = H_0 + \epsilon H_1$, where $H_0$ is deterministic or random (e.g. GOE) and $H_1$ is random?
28 Eigenvector exchange. An issue: upon pseudo-collisions of eigenvalues, eigenvectors exchange. Example: $2\times 2$ matrices, $H_{11} = a$, $H_{22} = a + \epsilon$, $H_{21} = H_{12} = c$: $\lambda_\pm = a + \frac{\epsilon}{2} \pm \sqrt{c^2 + \frac{\epsilon^2}{4}}$. Let $c$ vary: quasi-crossing for $c \to 0$, with an exchange of the top eigenvector: $(1,1) \to (1,-1)$. For large matrices these exchanges are extremely numerous: a labelling problem.
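The $2\times 2$ example can be checked directly: the eigenvalue formula is exact, and flipping the sign of $c$ (with $|c| \gg \epsilon$) flips the top eigenvector between $\approx(1,1)/\sqrt{2}$ and $\approx(1,-1)/\sqrt{2}$:

```python
import numpy as np

# 2x2 quasi-crossing: H = [[a, c], [c, a + eps]],
# lambda_pm = a + eps/2 +- sqrt(c^2 + eps^2/4).
def top_pair(a, eps, c):
    H = np.array([[a, c], [c, a + eps]])
    lam, V = np.linalg.eigh(H)        # eigenvalues in ascending order
    return lam[-1], V[:, -1]          # top eigenvalue and eigenvector

a, eps = 1.0, 1e-3
lam_p, v_p = top_pair(a, eps, c=+1.0)   # components with the same sign
lam_m, v_m = top_pair(a, eps, c=-1.0)   # components with opposite signs
```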
29 Subspace stability. An idea: follow the subspace spanned by $P$ eigenvectors: $\psi_{k+1}, \psi_{k+2}, \dots, \psi_{k+P} \to \psi'_{k+1}, \psi'_{k+2}, \dots, \psi'_{k+P}$. Form the $P\times P$ matrix of scalar products: $G_{ij} = \langle \psi_{k+i}|\psi'_{k+j}\rangle$. The determinant of this matrix is insensitive to label permutations and is a measure of the overlap between the two $P$-dimensional subspaces. $D = -P^{-1}\ln|\det G|$ is a measure of how well the first subspace can be approximated by the second.
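A sketch of the overlap measure: $D$ vanishes when the two subspaces coincide (even after an arbitrary rotation or relabelling within the subspace), and is strictly positive otherwise (the perturbation size 0.1 below is an arbitrary illustration):

```python
import numpy as np

# D = -(1/P) ln |det G|, with G_ij the scalar products between two sets of
# P orthonormal vectors (the columns of V1 and V2).
def subspace_D(V1, V2):
    G = V1.T @ V2                     # P x P overlap matrix
    sign, logabsdet = np.linalg.slogdet(G)
    return -logabsdet / G.shape[0]

rng = np.random.default_rng(8)
N, P = 10, 3
V1 = np.linalg.qr(rng.standard_normal((N, P)))[0]   # orthonormal columns
R = np.linalg.qr(rng.standard_normal((P, P)))[0]    # rotation within span
D_same = subspace_D(V1, V1 @ R)       # same subspace: D ~ 0
V2 = np.linalg.qr(V1 + 0.1 * rng.standard_normal((N, P)))[0]
D_pert = subspace_D(V1, V2)           # perturbed subspace: D > 0
```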
30 Intermezzo. Non-equal-time correlation matrices: $E^\tau_{ij} = \frac{1}{T}\sum_t \frac{X_i^t X_j^{t+\tau}}{\sigma_i\sigma_j}$, an $N\times N$ matrix, but not symmetrical: leader-lagger relations. General rectangular correlation matrices: $G_{\alpha i} = \frac{1}{T}\sum_{t=1}^T Y_\alpha^t X_i^t$, with $N$ input factors $X$ and $M$ output factors $Y$. Example: $Y_\alpha^t = X_j^{t+\tau}$, $N = M$.
31 Intermezzo: Singular values. Singular values: square roots of the non-zero eigenvalues of $GG^T$ or $G^TG$, with associated eigenvectors $u^k_\alpha$ and $v^k_i$: $1 \geq s_1 > s_2 > \dots > s_{\min(M,N)} \geq 0$. Interpretation: $k=1$ gives the best linear combination of input variables, with weights $v^1_i$, to optimally predict the linear combination of output variables with weights $u^1_\alpha$, with cross-correlation $s_1$. So $s_1$ is a measure of the predictive power of the set of $X$'s with respect to the $Y$'s. The other singular values correspond to orthogonal, less predictive, linear combinations.
32 Intermezzo: Benchmark. Null hypothesis: no correlations between the $X$'s and the $Y$'s: $G_{true} = 0$. But arbitrary correlations among the $X$'s ($C_X$) and among the $Y$'s ($C_Y$) are possible. Consider exactly normalized principal components of the sample variables $X$'s and $Y$'s: $\hat X_i^t = \frac{1}{\sqrt{\lambda_i}}\sum_j U_{ij} X_j^t$; $\hat Y_\alpha^t = \dots$, and define $\hat G = \hat Y \hat X^T$.
33 Intermezzo: Random SVD. Final result ([Wachter, 1980]; [Laloux, Miceli, Potters, JPB]): $\rho(s) = (m+n-1)^+\,\delta(s-1) + \frac{\sqrt{(s^2-\gamma_-)(\gamma_+ - s^2)}}{\pi s(1-s^2)}$, with $\gamma_\pm = n + m - 2mn \pm 2\sqrt{mn(1-n)(1-m)}$, $0 \leq \gamma_\pm \leq 1$, where $n = N/T$ and $m = M/T$. This is the analogue of the Marcenko-Pastur result for rectangular correlation matrices. Many applications: finance, econometrics ("large" models), genomics, etc., and subspace stability!
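The support edges $\gamma_\pm$ can be checked by Monte Carlo: under the null, exactly normalized principal components behave like random orthonormal frames, and the singular values of $\hat G$ are cosines of principal angles between two random subspaces (all sizes below are illustrative):

```python
import numpy as np

# Null-hypothesis check of the singular-value support [gamma_-, gamma_+]
# (in s^2), with n = N/T and m = M/T.
def gamma_pm(n, m):
    c = n + m - 2 * m * n
    r = 2 * np.sqrt(m * n * (1 - n) * (1 - m))
    return c - r, c + r

rng = np.random.default_rng(9)
T, N, M = 1000, 200, 100
n, m = N / T, M / T
Ux = np.linalg.qr(rng.standard_normal((T, N)))[0]   # normalized PCs of X
Uy = np.linalg.qr(rng.standard_normal((T, M)))[0]   # normalized PCs of Y
s = np.linalg.svd(Uy.T @ Ux, compute_uv=False)      # singular values
g_minus, g_plus = gamma_pm(n, m)
```

With these sizes $m + n - 1 < 0$, so there is no $\delta(s-1)$ atom and all $s^2$ fall inside $[\gamma_-, \gamma_+]$ up to finite-size effects.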
34 Back to eigenvectors. Extend the target subspace to avoid edge effects: compare $\psi_{k+1}, \psi_{k+2}, \dots, \psi_{k+P}$ with $\psi'_{k-Q+1}, \dots, \psi'_{k+Q}$. Form the rectangular matrix of scalar products: $G_{ij} = \langle \psi_{k+i}|\psi'_{k+j}\rangle$. The singular values of $G$ indicate how well the perturbed vectors approximate the initial ones: $D = -\frac{1}{P}\sum_i \ln s_i$.
35 Null hypothesis. Note: if $P$ and $Q$ are large, $D$ can be accidentally small. One can compute $D$ exactly in the limit $P, Q, N \to \infty$, with fixed $p = P/N$ and $q = Q/N$. Final result (same problem as above!): $D = -\int_0^1 ds\, \ln s\, \rho(s)$, with $\rho(s) = \frac{\sqrt{(s^2-\gamma_-)(\gamma_+ - s^2)}}{\pi s(1-s^2)}$ and $\gamma_\pm = p + q - 2pq \pm 2\sqrt{pq(1-p)(1-q)}$, $0 \leq \gamma_\pm \leq 1$.
36 Back to eigenvectors: perturbation theory. Consider a randomly perturbed matrix: $H = H_0 + \epsilon H_1$. Perturbation theory to second order in $\epsilon$ yields: $D \approx \frac{\epsilon^2}{2P}\sum_{i \in \{k+1,\dots,k+P\}}\ \sum_{j \notin \{k-Q+1,\dots,k+Q\}} \left(\frac{\langle\psi_i|H_1|\psi_j\rangle}{\lambda_i - \lambda_j}\right)^2$. The full distribution of $s$ can again be computed exactly (in some limits) using free random matrix tools.
37 GOE: the full SV spectrum. Initial eigenspace: spanned by $[a, b] \subset [-2,2]$, $b - a = \dots$ Target eigenspace: spanned by $[a-\delta, b+\delta] \subset [-2,2]$. Two cases (set $s = \epsilon^2 \hat s$): weak fluctuations: $\rho(\hat s)$ is a semicircle centered around $\delta^{-1}$, of width $\dots$; strong fluctuations: $\rho(\hat s) \sim \sqrt{\hat s_{min}}/\hat s^2$, with $\hat s_{min} \ll 1$ and $\hat s_{max} \sim \delta^{-1}$.
38 The case of correlation matrices. Consider the empirical correlation matrix: $E = C + \eta$, with $\eta = \frac{1}{T}\sum_{t=1}^T \left(X^t (X^t)^T - C\right)$. The noise $\eta$ is correlated as: $\langle \eta_{ij}\eta_{kl}\rangle = \frac{1}{T}\left(C_{ik}C_{jl} + C_{il}C_{jk}\right)$, from which one derives: $D \approx \frac{1}{2TP}\sum_{i=1}^{P}\sum_{j=Q+1}^{N}\frac{\lambda_i\lambda_j}{(\lambda_i-\lambda_j)^2}$ (and a similar equation for the eigenvalues).
39 Stability of eigenvalues: Correlations. Eigenvalues clearly change: well-known correlation crises.
40 Stability of eigenspaces: Correlations. [Figure: $D(\tau)$ for a given $T$, $P = 5$, $Q = 10$]
41 Stability of eigenspaces: Correlations. [Figure: $D(\tau = T)$ for $P = 5$, $Q = 10$]
42 Conclusion. Many RMT tools are available to understand the eigenvalue spectrum and to suggest cleaning schemes. The understanding of eigenvectors is comparatively poorer. The dynamics of the top eigenvector (aka the market mode) is relatively well understood. A plausible, realistic model for the true evolution of $C$ is still lacking (many crazy attempts: multivariate GARCH, BEKK, etc., but second-generation models are on their way).
43 Bibliography.
J.-P. Bouchaud, M. Potters, Financial Applications of Random Matrix Theory: a short review, in The Oxford Handbook of Random Matrix Theory (2011).
R. Allez, J.-P. Bouchaud, Eigenvector dynamics: general theory and some applications, arXiv.
P.-A. Reigneron, R. Allez, J.-P. Bouchaud, Principal regression analysis and the index leverage effect, Physica A 390 (2011).
More informationCPSC 340: Machine Learning and Data Mining. More PCA Fall 2017
CPSC 340: Machine Learning and Data Mining More PCA Fall 2017 Admin Assignment 4: Due Friday of next week. No class Monday due to holiday. There will be tutorials next week on MAP/PCA (except Monday).
More information7 Principal Component Analysis
7 Principal Component Analysis This topic will build a series of techniques to deal with high-dimensional data. Unlike regression problems, our goal is not to predict a value (the y-coordinate), it is
More informationMeasures and Jacobians of Singular Random Matrices. José A. Díaz-Garcia. Comunicación de CIMAT No. I-07-12/ (PE/CIMAT)
Measures and Jacobians of Singular Random Matrices José A. Díaz-Garcia Comunicación de CIMAT No. I-07-12/21.08.2007 (PE/CIMAT) Measures and Jacobians of singular random matrices José A. Díaz-García Universidad
More informationLog Covariance Matrix Estimation
Log Covariance Matrix Estimation Xinwei Deng Department of Statistics University of Wisconsin-Madison Joint work with Kam-Wah Tsui (Univ. of Wisconsin-Madsion) 1 Outline Background and Motivation The Proposed
More informationBalanced Truncation 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.242, Fall 2004: MODEL REDUCTION Balanced Truncation This lecture introduces balanced truncation for LTI
More informationLecture Notes 1: Vector spaces
Optimization-based data analysis Fall 2017 Lecture Notes 1: Vector spaces In this chapter we review certain basic concepts of linear algebra, highlighting their application to signal processing. 1 Vector
More informationELEMENTS OF PROBABILITY THEORY
ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable
More informationAlgebraic models for higher-order correlations
Algebraic models for higher-order correlations Lek-Heng Lim and Jason Morton U.C. Berkeley and Stanford Univ. December 15, 2008 L.-H. Lim & J. Morton (MSRI Workshop) Algebraic models for higher-order correlations
More informationIntroduction Eigen Values and Eigen Vectors An Application Matrix Calculus Optimal Portfolio. Portfolios. Christopher Ting.
Portfolios Christopher Ting Christopher Ting http://www.mysmu.edu/faculty/christophert/ : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 November 4, 2016 Christopher Ting QF 101 Week 12 November 4,
More informationAPPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.
APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product
More informationLINEAR ALGEBRA SUMMARY SHEET.
LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized
More information9.520: Class 20. Bayesian Interpretations. Tomaso Poggio and Sayan Mukherjee
9.520: Class 20 Bayesian Interpretations Tomaso Poggio and Sayan Mukherjee Plan Bayesian interpretation of Regularization Bayesian interpretation of the regularizer Bayesian interpretation of quadratic
More information1 Tridiagonal matrices
Lecture Notes: β-ensembles Bálint Virág Notes with Diane Holcomb 1 Tridiagonal matrices Definition 1. Suppose you have a symmetric matrix A, we can define its spectral measure (at the first coordinate
More informationLinear Algebra Review. Vectors
Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More informationOn corrections of classical multivariate tests for high-dimensional data
On corrections of classical multivariate tests for high-dimensional data Jian-feng Yao with Zhidong Bai, Dandan Jiang, Shurong Zheng Overview Introduction High-dimensional data and new challenge in statistics
More information2. Review of Linear Algebra
2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear
More informationarxiv: v5 [math.na] 16 Nov 2017
RANDOM PERTURBATION OF LOW RANK MATRICES: IMPROVING CLASSICAL BOUNDS arxiv:3.657v5 [math.na] 6 Nov 07 SEAN O ROURKE, VAN VU, AND KE WANG Abstract. Matrix perturbation inequalities, such as Weyl s theorem
More informationHigh-resolution Parametric Subspace Methods
High-resolution Parametric Subspace Methods The first parametric subspace-based method was the Pisarenko method,, which was further modified, leading to the MUltiple SIgnal Classification (MUSIC) method.
More informationUniversality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium
Universality of distribution functions in random matrix theory Arno Kuijlaars Katholieke Universiteit Leuven, Belgium SEA 06@MIT, Workshop on Stochastic Eigen-Analysis and its Applications, MIT, Cambridge,
More informationTop eigenvalue of a random matrix: Large deviations
Top eigenvalue of a random matrix: Large deviations Satya N. Majumdar Laboratoire de Physique Théorique et Modèles Statistiques,CNRS, Université Paris-Sud, France First Appearence of Random Matrices Covariance
More informationProperties of Matrices and Operations on Matrices
Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,
More informationMultivariate Distributions
IEOR E4602: Quantitative Risk Management Spring 2016 c 2016 by Martin Haugh Multivariate Distributions We will study multivariate distributions in these notes, focusing 1 in particular on multivariate
More informationMathematical Methods wk 2: Linear Operators
John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm
More informationMultivariate Statistical Analysis
Multivariate Statistical Analysis Fall 2011 C. L. Williams, Ph.D. Lecture 4 for Applied Multivariate Analysis Outline 1 Eigen values and eigen vectors Characteristic equation Some properties of eigendecompositions
More informationEfficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter
Efficient Data Assimilation for Spatiotemporal Chaos: a Local Ensemble Transform Kalman Filter arxiv:physics/0511236 v1 28 Nov 2005 Brian R. Hunt Institute for Physical Science and Technology and Department
More informationRegression. Oscar García
Regression Oscar García Regression methods are fundamental in Forest Mensuration For a more concise and general presentation, we shall first review some matrix concepts 1 Matrices An order n m matrix is
More information