
Multivariate Statistical Analysis, Fall 2011
C. L. Williams, Ph.D.
Lecture 4 for Applied Multivariate Analysis

Outline

1. Eigenvalues and eigenvectors
   - Characteristic equation
   - Some properties of eigendecompositions

There are many matrix decompositions: singular value, Cholesky, etc. Eigenvalues and eigenvectors of matrices are needed for some of the methods to be discussed, including principal components analysis and principal component regression. Determining the eigenvalues and eigenvectors of a matrix is a very difficult computational problem for all except the simplest cases; we will use the eigen() function in R to do this. These decompositions will form the core of at least half our multivariate methods (although we should mention at some point that we actually tend to use the singular value decomposition as a means of getting to these values). If A is a square p × p matrix, the eigenvalues (latent roots, characteristic roots) are the roots of the equation:

|A − λI| = 0
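A minimal sketch of this in R (the matrix S below is arbitrary, chosen only so the eigenvalues come out whole): eigen() finds the roots of the characteristic equation numerically, without ever forming the polynomial.

> S <- matrix(c(2, 0, 0,
+               0, 3, 4,
+               0, 4, 9), nrow = 3, byrow = TRUE)
> eigen(S)$values
[1] 11  2  1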

This (characteristic) equation is a polynomial of degree p in λ. The roots, the eigenvalues of A, are denoted by λ_1, λ_2, ..., λ_p. For each eigenvalue λ_i there is a corresponding eigenvector e_i, which can be found by solving:

(A − λ_i I) e_i = 0

There are many solutions for e_i. For our (statistical) purposes, we usually set it to have length 1, i.e. we obtain a normalised eigenvector for λ_i by

a_i = e_i / √(e_i'e_i)

Some properties of eigendecompositions:

(a) trace(A) = Σ_{i=1}^p λ_i
(b) |A| = Π_{i=1}^p λ_i

Also, if A is symmetric:

(c) The normalised eigenvectors corresponding to unequal eigenvalues are orthonormal (this is a bit of a circular definition: if the eigenvalues are equal, the corresponding eigenvectors are not unique, and one fix is to choose orthonormal eigenvectors).
(d) Correlation and covariance matrices are symmetric positive definite (or semi-definite). If such a matrix is of full rank p then all the eigenvalues are positive. If the matrix is of rank m < p then there will be m positive eigenvalues and p − m zero eigenvalues.
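Properties (a) and (b) are easy to verify numerically; a quick sketch in R with an arbitrary symmetric matrix:

> S <- matrix(c(4, 2,
+               2, 3), nrow = 2, byrow = TRUE)
> ev <- eigen(S)$values
> ev
[1] 5.561553 1.438447
> c(sum(ev), sum(diag(S)))   # (a) sum of eigenvalues = trace
[1] 7 7
> c(prod(ev), det(S))        # (b) product of eigenvalues = determinant
[1] 8 8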

Eigenanalysis: eigenvalues and eigenvectors. Consider the following matrix:

A = ( 1 4 )
    ( 2 3 )

Find the eigenvalues:

| 1−λ   4  |
|  2   3−λ | = (1 − λ)(3 − λ) − 8 = 0

⇒ 3 − 4λ + λ² − 8 = 0
⇒ λ² − 4λ − 5 = 0
⇒ (λ − 5)(λ + 1) = 0
⇒ λ = −1, 5
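The same answer from R (a sketch; for a non-symmetric matrix, eigen() orders the eigenvalues by decreasing modulus):

> A <- matrix(c(1, 4,
+               2, 3), nrow = 2, byrow = TRUE)
> eigen(A)$values
[1]  5 -1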

Now, assuming λ = −1:

A − λI = A + I = ( 1 4 )   ( 1 0 )   ( 2 4 )
                 ( 2 3 ) + ( 0 1 ) = ( 2 4 )

So that:

(A + I)e = 0  ⇒  ( 2 4 ) ( e_1 )   ( 0 )
                 ( 2 4 ) ( e_2 ) = ( 0 )

So two solutions are given by:

2e_1 + 4e_2 = 0
2e_1 + 4e_2 = 0

which are the same(!) and have a solution e_1 = 2, e_2 = −1, so (2, −1)' is an eigenvector. Given the length of this vector, √(2² + (−1)²) = √5, we have a normalised eigenvector:

(  2/√5 )
( −1/√5 )

A normalised vector is one scaled to have unit length. For the vector x this can be found by taking x / √(x'x). Trivial in R:

> x <- c(2, -1)
> z <- x / sqrt(sum(x * x))   # x'x is the sum of squares
> z
[1]  0.8944272 -0.4472136
> t(z) %*% z   ## check the length
     [,1]
[1,]    1

Also, we might like to consider the case where λ = 5:

A − λI = A − 5I = ( 1 4 )   ( 5 0 )   ( −4  4 )
                  ( 2 3 ) − ( 0 5 ) = (  2 −2 )

So that:

(A − 5I)e = 0  ⇒  ( −4  4 ) ( e_1 )   ( 0 )
                  (  2 −2 ) ( e_2 ) = ( 0 )

So two solutions are given by:

−4e_1 + 4e_2 = 0
 2e_1 − 2e_2 = 0

which are the same(!) and have a solution e_1 = 1, e_2 = 1, so (1, 1)' is an eigenvector. Given the length of this vector, √(1² + 1²) = √2, we have a normalised eigenvector:

( 1/√2 )
( 1/√2 )
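Continuing the R session from the eigenvalue check above, the eigenvectors agree with the hand computation too (a sketch; abs() is used because eigen() may return an eigenvector multiplied by -1, since both signs are equally valid):

> abs(eigen(A)$vectors)   # columns: (1/sqrt(2), 1/sqrt(2)) and (2/sqrt(5), 1/sqrt(5)), up to sign
          [,1]      [,2]
[1,] 0.7071068 0.8944272
[2,] 0.7071068 0.4472136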

Side note: Orthonormality. We have already discussed that two vectors x and y, of order k × 1, are orthogonal if x'y = 0. We can further say that if two vectors x and y are orthogonal and of unit length, i.e. if x'y = 0, x'x = 1 and y'y = 1, then they are orthonormal.

Orthonormality. Consider these three vectors. Which of these are orthogonal?

x = ( 1 )    y = ( 6 )    z = (  5 )
    ( 2 )        ( 7 )        (  4 )
    ( 3 )        ( 1 )        (  5 )
    ( 4 )        ( 2 )        ( −7 )

Normalise the two orthogonal vectors, so that we have two orthonormal vectors.

Now identify the two orthonormal vectors:

x_norm = ( 1/√30, 2/√30, 3/√30, 4/√30 )'
y_norm = ( 6/√90, 7/√90, 1/√90, 2/√90 )'
z_norm = ( 5/√115, 4/√115, 5/√115, −7/√115 )'

Since x'z = 1·5 + 2·4 + 3·5 + 4·(−7) = 5 + 8 + 15 − 28 = 0, the orthonormal pair is x_norm and z_norm.
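A sketch of the same checks in R (x, y, z as above; the rounding simply guards against floating-point noise):

> x <- c(1, 2, 3, 4); y <- c(6, 7, 1, 2); z <- c(5, 4, 5, -7)
> c(sum(x * y), sum(x * z), sum(y * z))   # only x'z = 0: x and z are orthogonal
[1] 31  0 49
> xn <- x / sqrt(sum(x^2)); zn <- z / sqrt(sum(z^2))
> round(c(sum(xn^2), sum(zn^2), sum(xn * zn)), 10)   # unit length, still orthogonal
[1] 1 1 0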

Further practice: Orthogonality. Find a vector which is orthogonal to

u = ( 1 )
    ( 3 )

One such vector is

(  3 )
( −1 )

since 1·3 + 3·(−1) = 0.

Do please note carefully that this is not a unique solution. Any w = (w_1, w_2)' that satisfies w_1 + 3w_2 = 0 will be orthogonal to u. Did you find any others?
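A quick numerical confirmation (a sketch): any multiple of w = (3, -1)' stays orthogonal to u.

> u <- c(1, 3)
> w <- c(3, -1)
> c(sum(u * w), sum(u * (-2 * w)), sum(u * (10 * w)))
[1] 0 0 0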

And a vector which is orthogonal to:

v = ( 2 )
    ( 4 )
    ( 1 )
    ( 2 )

One such vector is ( 0, 0, 2, −1 )', since 2·0 + 4·0 + 1·2 + 2·(−1) = 0.

If you are really comfortable with eigenanalysis of 2 × 2 matrices, have a go at these 3 × 3 matrices:

e = ( 1 4 0 )    f = ( 4 0 0 )    g = ( 13  4  2 )
    ( 4 1 0 )        ( 0 9 0 )        (  4 13  2 )
    ( 0 0 1 )        ( 0 0 1 )        (  2  2 10 )
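eigen() makes short work of checking your answers; a sketch, with e, f and g exactly as printed above:

> e <- matrix(c(1, 4, 0,
+               4, 1, 0,
+               0, 0, 1), nrow = 3, byrow = TRUE)
> eigen(e)$values
[1]  5  1 -3
> f <- diag(c(4, 9, 1))
> eigen(f)$values   # for a diagonal matrix these are just the diagonal entries
[1] 9 4 1
> g <- matrix(c(13, 4, 2,
+               4, 13, 2,
+               2, 2, 10), nrow = 3, byrow = TRUE)
> eigen(g)$values
[1] 18  9  9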

Transpose. Transposing turns the first column into the first row, etc. A transposed matrix is denoted by a superscripted T; in other words, A^T is the transpose of A. If

A = ( 3 1 )
    ( 5 6 )
    ( 4 4 )

then

A^T = ( 3 5 4 )
      ( 1 6 4 )

As with vectors, transposing matrices in R simply requires a call to t(); the dimensions can be checked with dim().
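For example (a sketch, using the matrix above):

> A <- matrix(c(3, 1,
+               5, 6,
+               4, 4), nrow = 3, byrow = TRUE)
> dim(A)
[1] 3 2
> t(A)
     [,1] [,2] [,3]
[1,]    3    5    4
[2,]    1    6    4
> dim(t(A))
[1] 2 3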

Symmetric matrices: symmetric around the diagonal i = j. A matrix A is symmetric whenever a_ij = a_ji. The correlation matrix and the variance-covariance matrix are the most common symmetric matrices we will encounter.
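In R, isSymmetric() checks this directly; a sketch with a small arbitrary data matrix (cov() always returns a symmetric matrix):

> X <- matrix(c(1, 2, 3, 4,
+               2, 4, 5, 4), ncol = 2)
> S <- cov(X)   # variance-covariance matrix of the two columns
> isSymmetric(S)
[1] TRUE
> S[1, 2] == S[2, 1]
[1] TRUE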

Diagonal matrices: elements on the diagonal (where i = j) and zero elsewhere (where i ≠ j). For example, the matrix A given as follows:

A = ( 13  0  0 )
    (  0 27  0 )
    (  0  0 16 )

is a diagonal matrix. To save paper and ink, A can also be written as:

A = diag( 13 27 16 )
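In R, diag() both builds a diagonal matrix from a vector and extracts the diagonal from a matrix (a sketch):

> A <- diag(c(13, 27, 16))
> A
     [,1] [,2] [,3]
[1,]   13    0    0
[2,]    0   27    0
[3,]    0    0   16
> diag(A)   # the same function extracts the diagonal
[1] 13 27 16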

Partitioned matrices. Finally, note that we can partition a large matrix into smaller ones:

( 2 5 4 )
( 0 7 8 )
( 4 3 4 )

So we could work with submatrices such as

( 0 7 )
( 4 3 )

e.g. if X is partitioned as ( X_1 ) and Y as ( Y_1 Y_2 Y_3 ), then:
                            ( X_2 )

XY = ( X_1 Y_1  X_1 Y_2  X_1 Y_3 )
     ( X_2 Y_1  X_2 Y_2  X_2 Y_3 )
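In R, a submatrix is extracted by row and column indexing (a sketch, using the 3 × 3 matrix above):

> M <- matrix(c(2, 5, 4,
+               0, 7, 8,
+               4, 3, 4), nrow = 3, byrow = TRUE)
> M[2:3, 1:2]   # rows 2-3, columns 1-2
     [,1] [,2]
[1,]    0    7
[2,]    4    3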