Generating random correlation matrices based on partial correlation vines and the onion method

Harry Joe, Department of Statistics, University of British Columbia

Abstract. Partial correlation vines and the onion method are presented for generating random correlation matrices. As a special case, a uniform distribution over the set of d x d positive definite correlation matrices obtains. Byproducts are: (a) For a uniform distribution over the space of d x d correlation matrices, the marginal distribution of each correlation is Beta(d/2, d/2) on (-1, 1). (b) An identity is obtained for the determinant of a correlation matrix R via partial correlations in a vine. (c) A formula is obtained for the volume of the set of d x d positive definite correlation matrices in d(d-1)/2-dimensional space.

Outline.
1. Statement of key results on generating random correlation matrices.
2. Parametrization with partial correlations.
3. Regular vines, D-vines, C-vines.
4. Random correlation matrices based on partial correlations.
5. Random correlation matrices based on the onion method.

Key results. R is a d x d correlation matrix.
1. There are several simple ways to generate a random R that is uniform over the set of positive definite d x d correlation matrices; more generally, with density proportional to [det(R)]^{α-1}.
2. Identity: det(R) = (1 - ρ_{12}^2)(1 - ρ_{23}^2)(1 - ρ_{13;2}^2) for d = 3; in general,
   det(R) = \prod_{i=1}^{d-1} (1 - ρ_{i,i+1}^2) \prod_{k=2}^{d-1} \prod_{j=1}^{d-k} (1 - ρ_{j,j+k;j+1,...,j+k-1}^2)
   [partial correlations in the double product].
3. The volume of the set of d x d correlation matrices in d(d-1)/2-dimensional space is
   2^{-(d-1)^2/4} π^{(d^2-1)/4} \prod_{m=1}^{(d-1)/2} Γ(2m) / Γ^{d-1}((d+1)/2), if d is odd;
   2^{(3d^2-4d)/4} π^{d(d-2)/4} Γ^{d}(d/2) \prod_{m=1}^{(d-2)/2} Γ(2m) / Γ^{d-1}(d), if d is even.

Partial correlations: the correlations ρ_{i,i+1} for i = 1,...,d-1 and the partial correlations ρ_{ij;i+1,...,j-1} for j - i >= 2; for example, ρ_{12}, ρ_{23}, ρ_{34}, ρ_{13;2}, ρ_{24;3}, ρ_{14;23} for d = 4. For multivariate normal random variables, partial correlations are conditional correlations; in general, they are functions of a correlation matrix. Advantage: the d(d-1)/2 parameters can independently take values in the interval (-1, 1); different vines can be used.
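The determinant identity in key result 2 is easy to check numerically. A minimal sketch, assuming NumPy; `pcor` is a helper defined here (not from the talk) that computes ρ_{ij;S} from the inverse of the submatrix of R on {i, j} ∪ S:

```python
import numpy as np

def pcor(R, i, j, S):
    """Partial correlation rho_{ij;S}, via the inverse of the
    submatrix of R indexed by {i, j} union S."""
    idx = [i, j] + list(S)
    P = np.linalg.inv(R[np.ix_(idx, idx)])
    return -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

rng = np.random.default_rng(0)
d = 4
X = rng.standard_normal((500, d))
R = np.corrcoef(X, rowvar=False)   # a random positive definite correlation matrix

# Right-hand side of the identity: product over the D-vine of (1 - rho^2).
rhs = 1.0
for k in range(1, d):              # lag k; k = 1 gives ordinary correlations
    for j in range(0, d - k):
        r = pcor(R, j, j + k, range(j + 1, j + k))
        rhs *= 1.0 - r ** 2

print(abs(np.linalg.det(R) - rhs))  # agreement to floating-point precision
```

With an empty conditioning set, `pcor` reduces to the ordinary correlation, so the k = 1 factors are the (1 - ρ_{i,i+1}^2) terms of the identity.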
Disadvantage: the reparametrization is non-unique and depends on the order of indexing, e.g., ρ_{12}, ρ_{13}, ρ_{23;1} or ρ_{12}, ρ_{23}, ρ_{13;2}.
d = 5: three different vines plus the matrix for the onion method are shown: the D-vine (which is like Markovian dependence), the C-vine, a regular partial correlation vine, and the onion method. The number of distinct vines increases quickly as d increases; the partial correlations are independently in (-1, 1). In each array below, the entries above the diagonal are the parameters of the method and the entries below the diagonal are the correlations.

D-vine:
[ 1       ρ_{12}   ρ_{13;2}  ρ_{14;23}  ρ_{15;234} ]
[ ρ_{12}  1        ρ_{23}    ρ_{24;3}   ρ_{25;34}  ]
[ ρ_{13}  ρ_{23}   1         ρ_{34}     ρ_{35;4}   ]
[ ρ_{14}  ρ_{24}   ρ_{34}    1          ρ_{45}     ]
[ ρ_{15}  ρ_{25}   ρ_{35}    ρ_{45}     1          ]

C-vine:
[ 1       ρ_{12}   ρ_{13}    ρ_{14}     ρ_{15}     ]
[ ρ_{12}  1        ρ_{23;1}  ρ_{24;1}   ρ_{25;1}   ]
[ ρ_{13}  ρ_{23}   1         ρ_{34;12}  ρ_{35;12}  ]
[ ρ_{14}  ρ_{24}   ρ_{34}    1          ρ_{45;123} ]
[ ρ_{15}  ρ_{25}   ρ_{35}    ρ_{45}     1          ]

Regular partial correlation vine:
[ 1       ρ_{12}   ρ_{13;2}  ρ_{14;2}   ρ_{15;24}  ]
[ ρ_{12}  1        ρ_{23}    ρ_{24}     ρ_{25;4}   ]
[ ρ_{13}  ρ_{23}   1         ρ_{34;12}  ρ_{35;124} ]
[ ρ_{14}  ρ_{24}   ρ_{34}    1          ρ_{45}     ]
[ ρ_{15}  ρ_{25}   ρ_{35}    ρ_{45}     1          ]

Onion method:
[ 1       ρ_{12}   ρ_{13}    ρ_{14}     ρ_{15}     ]
[ ρ_{12}  1        ρ_{23}    ρ_{24}     ρ_{25}     ]
[ ρ_{13}  ρ_{23}   1         ρ_{34}     ρ_{35}     ]
[ ρ_{14}  ρ_{24}   ρ_{34}    1          ρ_{45}     ]
[ ρ_{15}  ρ_{25}   ρ_{35}    ρ_{45}     1          ]

D-vine for d = 5: specify densities separately for each partial correlation; then the density of the correlation matrix R is

f_R(r) = f_{ρ_{12}}(r_{12}) f_{ρ_{23}}(r_{23}) f_{ρ_{34}}(r_{34}) f_{ρ_{45}}(r_{45}) × f_{ρ_{13;2}}(r_{13;2}) f_{ρ_{24;3}}(r_{24;3}) f_{ρ_{35;4}}(r_{35;4}) × f_{ρ_{14;23}}(r_{14;23}) f_{ρ_{25;34}}(r_{25;34}) × f_{ρ_{15;234}}(r_{15;234}) × Jacobian.

Under appropriate choices, this leads to a uniform density.

Reference for vines: D. Kurowicka, R. Cooke, Uncertainty Analysis with High Dimensional Dependence Modelling, Wiley, 2006. Vines are a graphical method for modeling multivariate dependencies.

[Diagram: trees of the D-vine for d = 4, with edge labels {12}, {23}, {34}; {13|2}, {24|3}; {14|23}, and of the C-vine for d = 4, with edge labels {12}, {13}, {14}; {23|1}, {24|1}; {34|12}.]
[Diagram: trees of the regular vine for d = 5, with edge labels {12}, {23}, {24}, {45}; {13|2}, {14|2}, {25|4}; {34|12}, {15|24}; {35|124}.]

Random correlation matrices [Joe, 2006, JMVA]. Partial correlation approach: choose distributions separately for ρ_{i,i+1}, i = 1,...,d-1, and ρ_{ij;i+1,...,j-1} for j - i >= 2. This leads to a joint distribution of {ρ_{ij} : 1 <= i < j <= d}. More generally, choose distributions separately for the partial correlations in a regular vine.

Previous papers: combinations of random eigenvalues and random (orthogonal) matrices, or a random T matrix leading to T'T; but cannot do distribution theory?

Motivations: generation of ellipsoidal clusters to study clustering methods; efficiency of estimating equation methods for the multivariate probit model; assessing the probability that a correlation matrix is a rank correlation matrix; prior for a correlation matrix in Bayesian statistics?

Main result. With appropriate choices of beta distributions for ρ_{i,i+1}, i = 1,...,d-1, and ρ_{ij;i+1,...,j-1} for j - i >= 2, one can get a joint density for {ρ_{ij} : 1 <= i < j <= d} proportional to [det(R)]^{α-1}, where α > 0. More generally, the same holds for partial correlations on a regular vine. In this case, the joint density is invariant to the order of indexing of variables for the partial correlations (and to the vine), and each ρ_{ij} marginally has a Beta(α - 1 + d/2, α - 1 + d/2) distribution on (-1, 1). Special case with α = 1: uniform density over the set of positive definite correlation matrices.

Anderson (1958): the partial correlation ρ_{j,j+k;j+1,...,j+k-1} (2 <= k <= d-1) is

ρ_{j,j+k;j+1,...,j+k-1} = [ρ_{j,j+k} - r_1'(j,k) R_2^{-1}(j,k) r_3(j,k)] / { [1 - r_1'(j,k) R_2^{-1}(j,k) r_1(j,k)] [1 - r_3'(j,k) R_2^{-1}(j,k) r_3(j,k)] }^{1/2},

where

R[j : j+k] = [ 1, r_1'(j,k), ρ_{j,j+k} ; r_1(j,k), R_2(j,k), r_3(j,k) ; ρ_{j+k,j}, r_3'(j,k), 1 ],

with r_1'(j,k) = (ρ_{j,j+1},...,ρ_{j,j+k-1}), r_3'(j,k) = (ρ_{j+k,j+1},...,ρ_{j+k,j+k-1}), and R_2(j,k) consisting of the middle k - 1 rows/columns. Hence

ρ_{j,j+k} = r_1'(j,k) R_2^{-1}(j,k) r_3(j,k) + ρ_{j,j+k;j+1,...,j+k-1} D_{jk},
where D_{jk} = { [1 - r_1'(j,k) R_2^{-1}(j,k) r_1(j,k)] [1 - r_3'(j,k) R_2^{-1}(j,k) r_3(j,k)] }^{1/2}.

d = 3 [ρ for the random variable, r for the dummy variable of a function]. Consider the transform (ρ_{12}, ρ_{13}, ρ_{23}) -> (ρ_{12}, ρ_{13}, ρ_{23;1}). Since

ρ_{23;1} = (ρ_{23} - ρ_{12} ρ_{13}) / [(1 - ρ_{12}^2)(1 - ρ_{13}^2)]^{1/2},

the Jacobian of the transformation is ∂r_{23}/∂r_{23;1} = [(1 - r_{12}^2)(1 - r_{13}^2)]^{1/2}. Note that

1 - r_{23;1}^2 = (1 - r_{12}^2 - r_{13}^2 - r_{23}^2 + 2 r_{12} r_{13} r_{23}) / [(1 - r_{12}^2)(1 - r_{13}^2)] = det(R(r_{12}, r_{13}, r_{23})) / [(1 - r_{12}^2)(1 - r_{13}^2)],

since det(R) = 1 - r_{12}^2 - r_{13}^2 - r_{23}^2 + 2 r_{12} r_{13} r_{23}, so that

det(R) = (1 - r_{12}^2)(1 - r_{13}^2)(1 - r_{23;1}^2).   (D1)

Beta(α, α) density on (-1, 1): [B(α, α)]^{-1} 2^{1-2α} (1 - u^2)^{α-1}; equivalently U = 2V - 1, V ~ Beta(α, α) on (0, 1). Let ρ_{12}, ρ_{13} ~ Beta(α_1, α_1) and ρ_{23;1} ~ Beta(α_2, α_2) on (-1, 1), independently. Then

f_{ρ_{12},ρ_{13},ρ_{23}}(r_{12}, r_{13}, r_{23}) ∝ (1 - r_{23;1}^2)^{α_2 - 1} [(1 - r_{12}^2)(1 - r_{13}^2)]^{α_1 - 3/2}.

If α_1 = α_2 + 1/2, then the density is ∝ [det(R)]^{α_2 - 1}.   (D2)

This is symmetric in r_{12}, r_{13}, r_{23}, which means the same joint density obtains from ρ_{12}, ρ_{23} ~ Beta(α_1, α_1), ρ_{13;2} ~ Beta(α_2, α_2), etc. With α_2 = 1, α_1 = 3/2: uniform over the set of correlation matrices.

d > 3: computer proof before math proof. Based on the results for the d = 3 case, an extension of (D1) was conjectured as an identity for det(R), along with the beta distributions on ρ_{ij;i+1,...,j-1} needed to get the joint density of (ρ_{ij}) to be ∝ [det(R)]^{α-1}. The identity was verified numerically, and the univariate margins of the random correlation matrix were also checked numerically, before proving the results for general d.

Results for d > 3.
Thm 1: det(R) = \prod_{i=1}^{d-1} (1 - ρ_{i,i+1}^2) \prod_{k=2}^{d-1} \prod_{j=1}^{d-k} (1 - ρ_{j,j+k;j+1,...,j+k-1}^2).
Thm 4: The determinant J_d of the Jacobian for the transform of (ρ_{12}, ρ_{23}, ρ_{13}, ρ_{34}, ρ_{24}, ρ_{14}, ρ_{45},..., ρ_{1d}) to (ρ_{12}, ρ_{23}, ρ_{13;2}, ρ_{34}, ρ_{24;3}, ρ_{14;23}, ρ_{45},..., ρ_{1d;2,...,d-1}) is

J_d = [ \prod_{i=1}^{d-1} (1 - ρ_{i,i+1}^2)^{d-2} \prod_{k=2}^{d-1} \prod_{i=1}^{d-k} (1 - ρ_{i,i+k;i+1,...,i+k-1}^2)^{d-1-k} ]^{1/2}.
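The d = 3 construction above can be simulated directly. A sketch assuming NumPy, with α_1 = 3/2 and α_2 = 1 (the uniform case): every draw is a valid correlation matrix, and the induced ρ_{23} has the same Beta(3/2, 3/2) margin on (-1, 1) as ρ_{12} and ρ_{13}:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# alpha_1 = 3/2, alpha_2 = 1: uniform over 3 x 3 correlation matrices
r12 = 2 * rng.beta(1.5, 1.5, n) - 1
r13 = 2 * rng.beta(1.5, 1.5, n) - 1
p23_1 = 2 * rng.beta(1.0, 1.0, n) - 1   # Beta(1,1) on (-1,1) = uniform

# invert the partial correlation: rho_23 = rho_12 rho_13 + rho_{23;1} * D
r23 = r12 * r13 + p23_1 * np.sqrt((1 - r12**2) * (1 - r13**2))

# every generated matrix is positive definite ...
det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
print(det.min() > 0)

# ... and rho_23 is marginally Beta(3/2, 3/2) on (-1,1): mean 0, variance 1/4
print(r23.mean(), r23.var())
```

Beta(3/2, 3/2) on (-1, 1) is the semicircle density ∝ (1 - u^2)^{1/2}, whose variance is 1/4, so the empirical variance of r23 lands near 0.25.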
Thm 5: If α_k = α + (d - 1 - k)/2, k = 1,...,d-1, and ρ_{i,i+k;i+1,...,i+k-1} is Beta(α_k, α_k) on (-1, 1) for 1 <= i < i + k <= d, then the joint density becomes c_d^{-1} [det{(r_{ij})_{1<=i,j<=d}}]^{α-1}, where the normalizing constant c_d is

c_d = \prod_{k=1}^{d-1} 2^{(2α-2+d-k)(d-k)} [B(α + (d-1-k)/2, α + (d-1-k)/2)]^{d-k}.

If α = 1 and α_k = (d + 1 - k)/2 for k = 1,...,d-1, leading to the uniform joint density for {ρ_{ij}, i < j}, then the normalizing constant is

c_d = \prod_{k=1}^{d-1} 2^{(d-k)^2} [B((d-k+1)/2, (d-k+1)/2)]^{d-k} = \prod_{k=1}^{d-1} 2^{k^2} [B((k+1)/2, (k+1)/2)]^{k},

and the recursion is c_d = c_{d-1} 2^{(d-1)^2} [B(d/2, d/2)]^{d-1}.

Byproduct: this normalizing constant is the volume of the set of d x d positive definite correlation matrices in d(d-1)/2-dimensional space; c_d / 2^{d(d-1)/2} decreases quickly as d increases.

d : c_d
3 : π^2 (1/2)
4 : π^2 (32/27)
5 : π^6 (3/128)
6 : π^6 (8192/253125)
7 : π^{12} (n_7/d_7)
8 : π^{12} (n_8/d_8)
9 : π^{20} (n_9/d_9)
10 : π^{20} (n_10/d_10)

Random correlation matrix based on vines [Lewandowski, Kurowicka and Joe, 2007]. Extensions of Theorems 1, 4, 5 in Joe (2006, JMVA). V = (e) is a partial correlation vine; for example, V = (12, 13, 14, 23;1, 24;1, 34;12) with depth (= order = #conditioned variables) (0, 0, 0, 1, 1, 2). For a vine based on d variables, there are d-1 edges with depth k_e = 0, d-2 edges with depth k_e = 1, ..., and 1 edge with depth k_e = d-2. Let R be a (random) correlation matrix based on distributions for {ρ_e : e ∈ V}.

T1: det(R) = \prod_{e∈V} (1 - ρ_e^2) [Kurowicka and Cooke, 2006, LAA] (product over edges of the vine).
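The product form of c_d with α = 1 is easy to evaluate; a quick sketch using only the standard library, checking it against the closed-form values in the table for d = 3, 4, 5:

```python
from math import gamma, pi

def beta(a, b):
    """Beta function via gamma functions."""
    return gamma(a) * gamma(b) / gamma(a + b)

def volume(d):
    """Volume of the set of d x d correlation matrices:
    c_d = prod_{k=1}^{d-1} 2^{k^2} B((k+1)/2, (k+1)/2)^k (alpha = 1)."""
    v = 1.0
    for k in range(1, d):
        v *= 2 ** (k * k) * beta((k + 1) / 2, (k + 1) / 2) ** k
    return v

print(volume(3), pi**2 / 2)         # both ~4.93
print(volume(4), 32 * pi**2 / 27)   # both ~11.70
print(volume(5), 3 * pi**6 / 128)   # both ~22.53
```

The same loop also gives the ratio c_d / 2**(d*(d-1)//2), which shows how quickly the elliptope shrinks relative to the enclosing hypercube as d grows.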
T4: The Jacobian of the transform from the correlations {ρ_{ij}} to the partial correlations {ρ_e} is { \prod_{e∈V} (1 - ρ_e^2)^{d-2-k_e} }^{1/2}.

T5: Let ρ_e ~ Beta(α_{k_e+1}, α_{k_e+1}) on (-1, 1) independently, where α_{k_e+1} = α + (d - 2 - k_e)/2. This gives the uniform distribution over the space of correlation matrices if α = 1, and then the marginal distribution for each correlation is Beta(d/2, d/2).

The C-vine algorithm for generating correlation matrices with density ∝ [det(R)]^{η-1}, η >= 1:
1. Initialization: β = η + (d - 1)/2.
2. Loop for k = 1,...,d-1:
   (a) β <- β - 1/2;
   (b) loop for i = k+1,...,d:
       (i) generate the partial correlation p_{k,i;1,...,k-1} ~ Beta(β, β) on (-1, 1);
       (ii) use the recursive formula on p_{k,i;1,...,k-1} to get r_{k,i} = r_{i,k} [no matrix inversions].
3. Return r, a d x d correlation matrix.

Onion method for random correlation matrices [Ghosh & Henderson, 2003; extended by Lewandowski, Kurowicka and Joe, 2007].

Result 1. Consider the spherical density c(1 - w'w)^{β-1} for w ∈ R^m, w'w <= 1, where c = Γ(β + m/2) π^{-m/2} / Γ(β). If W has this density, then W = V^{1/2} U, where V ~ Beta(m/2, β) and U is uniform on the surface of the m-dimensional hypersphere. If Q = AW, where A is an m x m non-singular matrix, then the density of Q is c [det(AA')]^{-1/2} (1 - q'[AA']^{-1} q)^{β-1}, q'[AA']^{-1} q <= 1.

Result 2. Partition r_{m+1} = [ r_m, q ; q', 1 ], where r_m is an m x m correlation matrix and q is an m-vector of correlations such that r_{m+1} is an (m+1) x (m+1) correlation matrix. Then det(r_{m+1}) = det(r_m)(1 - q' r_m^{-1} q). We use the upper-case versions of r_m, q, r_{m+1} to denote random vectors and matrices. Let β, β_m > 0. If R_m has density ∝ [det(r_m)]^{β_m - 1} and Q given R_m = r_m has density ∝ [det(r_m)]^{-1/2} (1 - q' r_m^{-1} q)^{β-1}, then the density of R_{m+1} is ∝ [det(r_m)]^{β_m - 3/2} (1 - q' r_m^{-1} q)^{β-1}. If β_m = β + 1/2, then the density of R_{m+1} is ∝ [det(r_{m+1})]^{β-1}.

Algorithm for the extended onion method to get random correlation matrices in dimension d with density ∝ [det(R)]^{η-1}, η > 0:
1. Initialization: β = η + (d - 2)/2; set r_{12} <- 2u - 1, where u ~ Beta(β, β), and r <- [ 1, r_{12} ; r_{12}, 1 ].
2. Loop for m = 2,...,d-1:
   (a) β <- β - 1/2;
   (b) generate y ~ Beta(m/2, β) and u = (u_1,...,u_m)' uniform on the surface of the m-dimensional hypersphere;
   (c) w <- y^{1/2} u; obtain the Cholesky decomposition AA' = r; set q <- Aw;
   (d) r <- [ r, q ; q', 1 ].
3. Return r, a d x d correlation matrix.

Why is this called the onion method???

By symmetry, each ρ_{jk} = R_{jk} in the correlation matrix has a marginal Beta(η + (d-2)/2, η + (d-2)/2) density on (-1, 1). For the special case η = 1, leading to the uniform distribution over the space of correlation matrices, the marginal density of each R_{jk} is Beta(d/2, d/2) on (-1, 1).

Computational time: In C, the onion method with incremental Cholesky is fastest, and the C-vine is faster than the onion method without incremental Cholesky. In Matlab, the C-vine (which avoids matrix inversions) is fastest; the D-vine is slower because of matrix inversions.

Other comments: If partial correlations in a particular vine are needed, then use the appropriate vine. By choosing the vine and the densities for the partial correlations in this vine, one could get random correlation matrices that have larger correlations at a few particular pairs.

References
H. Joe, Generating random correlation matrices based on partial correlations, Journal of Multivariate Analysis 97 (2006).
S. Ghosh, S.G. Henderson, Behavior of the NORTA method for correlated random vector generation as the dimension increases, ACM Transactions on Modeling and Computer Simulation (TOMACS) 13 (2003).
T. Bedford, R. Cooke, Vines - a new graphical model for dependent random variables, Annals of Statistics 30 (2002).
D. Kurowicka, R. Cooke, Uncertainty Analysis with High Dimensional Dependence Modelling, Wiley, 2006.
D. Kurowicka, R.M. Cooke, Completion problem with partial correlation vines, Linear Algebra and Its Applications 418 (2006).
D. Lewandowski, D. Kurowicka, H. Joe, Generating random correlation matrices based on vines and extended onion method, (2007), submitted.
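The C-vine and extended onion algorithms above are short enough to sketch in full. An illustrative Python/NumPy version (the function names `cvine_corr` and `onion_corr` are mine, not from the talk); with η = 1, both should give the uniform distribution over the elliptope, with each correlation marginally Beta(d/2, d/2) on (-1, 1), i.e. mean 0 and variance 1/(d+1):

```python
import numpy as np

def cvine_corr(d, eta, rng):
    """Random R with density prop. to det(R)^(eta-1), via the C-vine
    algorithm (partial correlations, no matrix inversions)."""
    beta = eta + (d - 1) / 2
    P = np.zeros((d, d))        # P[k, i]: partial corr of (k, i) given 0..k-1
    R = np.eye(d)
    for k in range(d - 1):
        beta -= 0.5
        for i in range(k + 1, d):
            P[k, i] = 2 * rng.beta(beta, beta) - 1   # Beta(beta, beta) on (-1, 1)
            p = P[k, i]
            for l in range(k - 1, -1, -1):           # recursion: partial -> raw corr
                p = p * np.sqrt((1 - P[l, i] ** 2) * (1 - P[l, k] ** 2)) \
                    + P[l, i] * P[l, k]
            R[k, i] = R[i, k] = p
    return R

def onion_corr(d, eta, rng):
    """Random R with density prop. to det(R)^(eta-1), via the extended
    onion method (grow R one dimension at a time)."""
    beta = eta + (d - 2) / 2
    r12 = 2 * rng.beta(beta, beta) - 1
    R = np.array([[1.0, r12], [r12, 1.0]])
    for m in range(2, d):
        beta -= 0.5
        y = rng.beta(m / 2, beta)
        u = rng.standard_normal(m)
        u /= np.linalg.norm(u)                  # uniform on the unit sphere
        w = np.sqrt(y) * u
        q = np.linalg.cholesky(R) @ w           # A A' = R, q = A w
        R = np.block([[R, q[:, None]], [q[None, :], np.ones((1, 1))]])
    return R

rng = np.random.default_rng(2)
d, eta = 4, 1.0                                 # eta = 1: uniform over the elliptope
samples = np.array([[cvine_corr(d, eta, rng)[0, 1] for _ in range(4000)],
                    [onion_corr(d, eta, rng)[0, 1] for _ in range(4000)]])
print(samples.mean(axis=1), samples.var(axis=1))   # each ~0 and ~1/(d+1) = 0.2
```

Both generators return symmetric positive definite matrices with unit diagonal; the incremental-Cholesky speedup mentioned for C is omitted here for clarity (a full Cholesky is recomputed at each step).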
More informationChapter 17: Undirected Graphical Models
Chapter 17: Undirected Graphical Models The Elements of Statistical Learning Biaobin Jiang Department of Biological Sciences Purdue University bjiang@purdue.edu October 30, 2014 Biaobin Jiang (Purdue)
More informationOrthonormal Transformations and Least Squares
Orthonormal Transformations and Least Squares Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 30, 2009 Applications of Qx with Q T Q = I 1. solving
More informationLecture 11. Linear systems: Cholesky method. Eigensystems: Terminology. Jacobi transformations QR transformation
Lecture Cholesky method QR decomposition Terminology Linear systems: Eigensystems: Jacobi transformations QR transformation Cholesky method: For a symmetric positive definite matrix, one can do an LU decomposition
More informationA Study of Numerical Elimination for the Solution of Multivariate Polynomial Systems
A Study of Numerical Elimination for the Solution of Multivariate Polynomial Systems W Auzinger and H J Stetter Abstract In an earlier paper we had motivated and described am algorithm for the computation
More informationBehaviour of the NORTA Method for Correlated. Random Vector Generation as the Dimension
Behaviour of the NORTA Method for Correlated Random Vector Generation as the Dimension Increases SOUMYADIP GHOSH and SHANE G. HENDERSON Cornell University The NORTA method is a fast general-purpose method
More informationINFORMATION THEORY AND STATISTICS
INFORMATION THEORY AND STATISTICS Solomon Kullback DOVER PUBLICATIONS, INC. Mineola, New York Contents 1 DEFINITION OF INFORMATION 1 Introduction 1 2 Definition 3 3 Divergence 6 4 Examples 7 5 Problems...''.
More informationThis can be accomplished by left matrix multiplication as follows: I
1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More informationLinear Algebra Methods for Data Mining
Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 1. Basic Linear Algebra Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Example
More informationA Probability Review
A Probability Review Outline: A probability review Shorthand notation: RV stands for random variable EE 527, Detection and Estimation Theory, # 0b 1 A Probability Review Reading: Go over handouts 2 5 in
More informationApplications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices
Applications of Randomized Methods for Decomposing and Simulating from Large Covariance Matrices Vahid Dehdari and Clayton V. Deutsch Geostatistical modeling involves many variables and many locations.
More informationSingular value decomposition (SVD) of large random matrices. India, 2010
Singular value decomposition (SVD) of large random matrices Marianna Bolla Budapest University of Technology and Economics marib@math.bme.hu India, 2010 Motivation New challenge of multivariate statistics:
More informationQuiz ) Locate your 1 st order neighbors. 1) Simplify. Name Hometown. Name Hometown. Name Hometown.
Quiz 1) Simplify 9999 999 9999 998 9999 998 2) Locate your 1 st order neighbors Name Hometown Me Name Hometown Name Hometown Name Hometown Solving Linear Algebraic Equa3ons Basic Concepts Here only real
More informationA Potpourri of Nonlinear Algebra
a a 2 a 3 + c c 2 c 3 =2, a a 3 b 2 + c c 3 d 2 =0, a 2 a 3 b + c 2 c 3 d =0, a 3 b b 2 + c 3 d d 2 = 4, a a 2 b 3 + c c 2 d 3 =0, a b 2 b 3 + c d 2 d 3 = 4, a 2 b b 3 + c 2 d 3 d =4, b b 2 b 3 + d d 2
More informationGaussian Process Vine Copulas for Multivariate Dependence
Gaussian Process Vine Copulas for Multivariate Dependence José Miguel Hernández Lobato 1,2, David López Paz 3,2 and Zoubin Ghahramani 1 June 27, 2013 1 University of Cambridge 2 Equal Contributor 3 Ma-Planck-Institute
More informationarxiv:math/ v1 [math.co] 13 Jul 2005
A NEW PROOF OF THE REFINED ALTERNATING SIGN MATRIX THEOREM arxiv:math/0507270v1 [math.co] 13 Jul 2005 Ilse Fischer Fakultät für Mathematik, Universität Wien Nordbergstrasse 15, A-1090 Wien, Austria E-mail:
More informationLeast-Squares Fitting of Model Parameters to Experimental Data
Least-Squares Fitting of Model Parameters to Experimental Data Div. of Mathematical Sciences, Dept of Engineering Sciences and Mathematics, LTU, room E193 Outline of talk What about Science and Scientific
More informationHypothesis testing in multilevel models with block circular covariance structures
1/ 25 Hypothesis testing in multilevel models with block circular covariance structures Yuli Liang 1, Dietrich von Rosen 2,3 and Tatjana von Rosen 1 1 Department of Statistics, Stockholm University 2 Department
More informationGROUP THEORY PRIMER. New terms: tensor, rank k tensor, Young tableau, Young diagram, hook, hook length, factors over hooks rule
GROUP THEORY PRIMER New terms: tensor, rank k tensor, Young tableau, Young diagram, hook, hook length, factors over hooks rule 1. Tensor methods for su(n) To study some aspects of representations of a
More informationMATH 38061/MATH48061/MATH68061: MULTIVARIATE STATISTICS Solutions to Problems on Random Vectors and Random Sampling. 1+ x2 +y 2 ) (n+2)/2
MATH 3806/MATH4806/MATH6806: MULTIVARIATE STATISTICS Solutions to Problems on Rom Vectors Rom Sampling Let X Y have the joint pdf: fx,y) + x +y ) n+)/ π n for < x < < y < this is particular case of the
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)
More information