UVA CS 6316/4501 Fall 2016 Machine Learning. Lecture 18: Principal Component Analysis (PCA). Dr. Yanjun Qi, University of Virginia, Department of Computer Science. 1

(Course map: next after classification come K-means + GMM + Hierarchical clustering, basic PCA, Bayes-Net, and HMM.) 2

Where are we? → Five major sections of this course:
- Regression (supervised)
- Classification (supervised)
- Feature selection
- Unsupervised models
  - Dimension Reduction (PCA)
  - Clustering (K-means, GMM/EM, Hierarchical)
- Learning theory
- Graphical models
3

An unlabeled Dataset X: a data matrix of n observations on p variables x_1, x_2, ..., x_p.
Unsupervised learning = learning from raw (unlabeled, unannotated, etc.) data, as opposed to supervised learning, where a classification/regression label is given for each example.
Data/points/instances/examples/samples/records: [rows]
Features/attributes/dimensions/independent variables/covariates/predictors/regressors: [columns]

Today: Dimensionality Reduction (unsupervised) with Principal Components Analysis (PCA)
- Review of eigenvalues and eigenvectors
- How to project samples onto a line capturing the variation of the whole dataset → eigenvectors / eigenvalues of the covariance matrix
- PCA for dimension reduction
- Eigenfaces → PCA for face recognition
5

Review: Mean and Variance
Variance: Var(X) = E[(X - µ)^2]
Discrete RVs: V(X) = Σ_i (v_i - µ)^2 P(X = v_i)
Continuous RVs: V(X) = ∫ (x - µ)^2 f(x) dx
Covariance: Cov(X, Y) = E[(X - µ_x)(Y - µ_y)] = E[XY] - µ_x µ_y
6

Review: Covariance matrix
C = [ v(x_1)      c(x_1,x_2)  ...  c(x_1,x_p)
      c(x_1,x_2)  v(x_2)      ...  c(x_2,x_p)
      ...
      c(x_1,x_p)  c(x_2,x_p)  ...  v(x_p)     ]  = E[XX^T] - µµ^T
If the data are centered, the sample covariance matrix is obtained as C = (1/(n-1)) X^T X.
7
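
As a quick illustration (not from the slides), here is a minimal NumPy sketch of the centered sample covariance C = X^T X / (n-1); the data matrix X is made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # hypothetical data: n=100 observations, p=3 variables

Xc = X - X.mean(axis=0)                   # center each variable (column)
C = Xc.T @ Xc / (Xc.shape[0] - 1)         # sample covariance matrix, p x p

# agrees with NumPy's estimator (rowvar=False treats columns as variables, divisor n-1)
assert np.allclose(C, np.cov(X, rowvar=False))
print(np.round(C, 3))
```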

Review: Calculating eigenvalues and eigenvectors of the covariance matrix
The eigenvalues λ_i are found by solving the equation det(C - λI) = 0.
The eigenvectors are the columns of a matrix A such that C = A D A^T, where D = diag(λ_1, λ_2, ..., λ_p).
8 From Dr. S. Narasimhan

Review: Eigenvalue, e.g.
Let us take two variables with covariance c > 0:
C = [ 1  c        C - λI = [ 1-λ  c
      c  1 ]                 c    1-λ ]
det(C - λI) = (1 - λ)^2 - c^2
Solving this we find λ_1 = 1 + c and λ_2 = 1 - c < λ_1.
9 From Dr. S. Narasimhan

Review: Eigenvector, e.g.
Any eigenvector a satisfies the condition C a = λ a:
C a = [ 1  c ] [ a_1 ]  =  [ a_1 + c a_2 ]  =  λ [ a_1 ]
      [ c  1 ] [ a_2 ]     [ c a_1 + a_2 ]       [ a_2 ]
Solving, for λ_1 = 1 + c we get a_1 = a_2 (eigenvector A_1 proportional to (1, 1)^T), and for λ_2 = 1 - c we get a_1 = -a_2 (eigenvector A_2 proportional to (1, -1)^T).
In practice, much more advanced methods are used, e.g. the power method.
10 From Dr. S. Narasimhan
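
A small NumPy check of this 2x2 example, assuming an arbitrary value of c; np.linalg.eigh recovers λ = 1 ± c and the corresponding eigenvectors (up to sign).

```python
import numpy as np

c = 0.8                                   # any covariance 0 < c < 1 (illustrative choice)
C = np.array([[1.0, c],
              [c, 1.0]])

vals, vecs = np.linalg.eigh(C)            # eigenvalues in ascending order for symmetric matrices
print(vals)                               # [1 - c, 1 + c]
print(vecs)                               # columns ~ (1, -1)/sqrt(2) and (1, 1)/sqrt(2), up to sign

for lam, a in zip(vals, vecs.T):          # check C a = lambda a for every eigenpair
    assert np.allclose(C @ a, lam * a)
```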

Today: Dimensionality Reduction (unsupervised) with Principal Components Analysis (PCA)
- Review of eigenvalues and eigenvectors
- How to project samples onto a line capturing the variation of the whole dataset → eigenvectors / eigenvalues of the covariance matrix
- PCA for dimension reduction
- Eigenfaces → PCA for face recognition
11

An unlabeled Dataset X: a data matrix of n observations on p variables x_1, x_2, ..., x_p.
Data/points/instances/examples/samples/records: [rows]
Features/attributes/dimensions/independent variables/covariates/predictors/regressors: [columns]
12

The Goal: We wish to explain/summarize the underlying variance-covariance structure of a large set of variables through a few linear combinations of these variables. PCA was introduced by Pearson (1901) and Hotelling (1933). 13

Trick: Rotate the Coordinate Axes
Suppose we have a sample population measured on p random variables X_1, ..., X_p. Our goal is to develop a new set of K (K < p) axes (linear combinations of the original p axes) in the directions of greatest variability. (Figure: data cloud in the X_1, X_2 plane with rotated axes.) This can be accomplished by rotating the axes (if the data are centered). 14

Algebraic Interpretation
Given n points in a p-dimensional space, for large p, how do we project onto a lower-dimensional (K < p) space while preserving broad trends in the data and allowing it to be visualized?
FROM NOW ON we assume the data matrix is centered (we subtract the mean along each dimension and center the original axis system at the centroid of all data points, for simplicity). 15

Algebraic Interpretation 1D
Given n points in a p-dimensional space, for large p, how do we project onto a 1-dimensional space? Choose a line that fits the data so the points are spread out well along the line. 16 From Dr. S. Narasimhan

Let us see it on a figure. (Figure: two candidate projection lines, one labeled "Good" and one labeled "Better", along which the points are more spread out.) 17

Algebraic Interpretation 1D
Formally, we want to find a line that maximizes the sum of squares of the data samples' projections on that line. 18 From Dr. S. Narasimhan


Algebraic Interpretation 1D
Formally, we want to find a line (direction v) that maximizes the sum of squares of the data samples' projections on that line. The size of x's projection on the vector v is u = x^T v = v^T x, subject to v^T v = 1 (x: p x 1 vector, v: p x 1 vector, u: 1 x 1 scalar). 20

Algebraic Interpretation 1D
Formally, we want to find a line (direction v) that maximizes the sum of squares of the data samples' projections on that line. (Figure: centered X_{n x p} vs. not-centered X_{n x p}.) u = x^T v, subject to v^T v = 1 (x: p x 1 vector, v: p x 1 vector, u: 1 x 1 scalar). 21

Algebraic Interpretation 1D
arg max_v Σ_i (u_i)^2, where u_i is the projection of sample x_i onto v. 22

Algebraic Interpretation 1D
How is the sum of squares of projection lengths expressed in algebraic terms? Stack Point 1: x_1^T, Point 2: x_2^T, ..., Point n: x_n^T as the rows of the n x p matrix X, and let v be the p x 1 direction of the line; the sum of squared projections is then v^T X^T X v. 23 From Dr. S. Narasimhan

Algebraic Interpretation 1D
How is the sum of squares of projection lengths expressed in algebraic terms? max_v (v^T X^T X v), subject to v^T v = 1. 24

Algebraic Interpretation 1D
Rewriting this: max_v (v^T X^T X v), subject to v^T v = 1.
At the maximum, v^T X^T X v = λ = λ v^T v = v^T (λv), i.e. v^T (X^T X v - λv) = 0.
One can show that the maximum value of v^T X^T X v is obtained for those vectors/directions satisfying X^T X v = λv.
So, find the largest λ and the associated u such that the matrix X^T X, when applied to u, yields a new vector in the same direction as u, only scaled by a factor λ. 25

Algebraic Interpretation 1D
In general, (X^T X)v points in some other direction (different from v). If u is an eigenvector and λ is the corresponding eigenvalue, then X^T X u = λu. So, find the largest λ and the associated u such that the matrix X^T X, when applied to u, yields a new vector in the same direction as u, only scaled by a factor λ. 26
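
A minimal NumPy sketch of this idea, with a made-up centered data matrix: the eigenvector of X^T X with the largest eigenvalue is the direction v that maximizes the sum of squared projections.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # made-up data, n=200, p=5
X = X - X.mean(axis=0)                    # center the data (assumed throughout the slides)

lam, U = np.linalg.eigh(X.T @ X)          # eigen-decomposition of the symmetric matrix X^T X
v = U[:, -1]                              # eigenvector with the largest eigenvalue: 1st PC direction

u = X @ v                                 # projections u_i = x_i^T v
print((u ** 2).sum(), lam[-1])            # sum of squared projections equals the largest eigenvalue
```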

Algebraic Interpretation beyond 1D
How many eigenvectors are there? For real symmetric matrices, except in degenerate cases when eigenvalues repeat, there are p eigenvectors:
- u_1, ..., u_p are the eigenvectors
- λ_1 >= ... >= λ_p are the eigenvalues, ordered from large to small
- all eigenvectors are mutually orthogonal and therefore form a new basis
  - eigenvectors for distinct eigenvalues are mutually orthogonal
  - eigenvectors corresponding to the same eigenvalue have the property that any linear combination is also an eigenvector with the same eigenvalue; one can then find as many orthogonal eigenvectors as the number of repeats of the eigenvalue
27 From Dr. S. Narasimhan

Algebraic Interpretation beyond 1D
For matrices of the (symmetric) form X^T X, all eigenvalues are non-negative (see Handout 1, linear algebra review, pages 18-20). λ_1 >= ... >= λ_p are the eigenvalues, ordered from large to small, i.e. ordered by the PCs' importance. 28

PCA: Eigenvectors → Principal Components. (Figure: scatter plot with the 1st principal component u_1 and the 2nd principal component u_2 drawn as orthogonal axes through the data.) 29

PCA 1-D: How does the sum of squares of projection lengths relate to VARIANCE?
Formally, we want to find a line (direction) that maximizes the sum of squares of the data samples' projections on that line. The size of one sample x's projection on the vector v is u = x^T v = v^T x. In a new coordinate system with v as an axis, u is the position of sample x on this axis. 30

PCA 1-D: How does the sum of squares of projection lengths relate to VARIANCE?
Convert x_i onto v; its coordinate is u_i = x_i^T v. Consider the variation along direction v among all of the points {x_1, x_2, ..., x_n}: it is the variance of the positions {u_1, u_2, ..., u_n}. 31 From Dr. S. Narasimhan

PCA 1-D: How does the sum of squares of projection lengths relate to VARIANCE?
Var(u) = Σ_i (u_i - µ_u)^2 P(u = u_i) ∝ Σ_i (u_i)^2, assuming a centered data matrix (so µ_u = 0 and each P(u = u_i) = 1/n).
Formally, finding a line (direction v) that maximizes the sum of squares of the data samples' projections on that v line is the same as maximizing the variance of the data samples' projected representations on the v axis. 32
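
A quick numerical check of this equivalence, using synthetic data: for a centered data matrix, the (biased) variance of the projections equals (1/n) Σ_i (u_i)^2, so maximizing one maximizes the other.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
X = X - X.mean(axis=0)                     # centered data matrix (as assumed on the slide)

v = rng.normal(size=3)
v = v / np.linalg.norm(v)                  # an arbitrary unit direction

u = X @ v                                  # projected positions u_i = x_i^T v
n = len(u)

# With centered data, mean(u) = 0, so the (biased) variance is exactly (1/n) * sum(u_i^2):
print(u.var(), (u ** 2).sum() / n)         # identical up to floating-point error
```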

Today: Dimensionality Reduction (unsupervised) with Principal Components Analysis (PCA)
- Review of eigenvalues and eigenvectors
- How to project samples onto a line capturing the variation of the whole dataset → eigenvectors / eigenvalues of the covariance matrix
- PCA for dimension reduction
- Eigenfaces → PCA for face recognition
33

Applications
Uses:
- Data Visualization
- Data Reduction
- Data Classification
- Trend Analysis
- Factor Analysis
- Noise Reduction
Examples:
- How many unique sub-sets are in the sample?
- How are they similar / different?
- What are the underlying factors that influence the samples?
- How to best present what is interesting?
- Which sub-set does this new sample rightfully belong to?
34 From Dr. S. Narasimhan

Interpretation of PCA
From p original variables x_1, x_2, ..., x_p, produce k new variables u_1, u_2, ..., u_k:
u_1 = a_11 x_1 + a_12 x_2 + ... + a_1p x_p
u_2 = a_21 x_1 + a_22 x_2 + ... + a_2p x_p
...
(Figure: the case p = 2.) 35

Interpretation of PCA
The u_k's are the Principal Components, such that:
- the u_k's are uncorrelated (orthogonal)
- u_1 explains as much as possible of the original variance in the data set
- u_2 explains as much as possible of the remaining variance, etc.
- the k-th PC retains the k-th greatest fraction of the variation in the samples
36 From Dr. S. Narasimhan
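
A small synthetic check of the first two properties (data and seed are arbitrary): the covariance matrix of the PC scores comes out numerically diagonal, i.e. the new variables are uncorrelated, with variances decreasing from the first PC on.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))   # synthetic, correlated data
X = X - X.mean(axis=0)                                    # center

lam, U = np.linalg.eigh(np.cov(X, rowvar=False))
U = U[:, ::-1]                                            # order PCs by decreasing eigenvalue
scores = X @ U                                            # rows: (u_1, ..., u_p) for each sample

# The covariance of the PC scores is (numerically) diagonal, so the u_k's are uncorrelated,
# and the diagonal entries (their variances) decrease from PC1 to PC4.
print(np.round(np.cov(scores, rowvar=False), 6))
```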

Interpretation of PCA
The new variables (PCs) have a variance equal to their corresponding eigenvalue, since
Var(u_i) = u_i^T X^T X u_i = u_i^T λ_i u_i = λ_i u_i^T u_i = λ_i, for all i = 1, ..., p.
Small λ_i means small variance, i.e. the data change little in the direction of component u_i.
PCA is useful for finding new, more informative, uncorrelated features; it reduces dimensionality by rejecting low-variance features. 37

PCA Eigenvalues. (Figure: the same scatter plot, with the eigenvalues λ_1 and λ_2 indicating the spread of the data along the two principal components.) 38

PCA Summary until now
- Rotates a multivariate dataset into a new configuration which is easier to interpret.
- PCA is useful for finding new, more informative, uncorrelated features; it reduces dimensionality by rejecting low-variance features.
39

PCA for dimension reduction, e.g. p = 3 → picking the top 2 PCs corresponds to choosing a 2D linear plane. 40

Use PCA to reduce a higher dimension
- Suppose each data point is p-dimensional.
- The eigenvectors of the data covariance matrix define a new coordinate system.
- We can compress (i.e. project) the data points by using only the top few eigenvectors; this corresponds to choosing a linear subspace, representing points on a line, plane, or hyper-plane.
- These eigenvectors are known as the principal components.
41 From Dr. S. Narasimhan

How many components to keep?
I. Variance: keep enough PCs to have a cumulative variance explained by the PCs that is greater than 50-70%.
II. Scree plot: represents the ability of the PCs to explain the variation in the data; e.g. keep PCs with eigenvalues > 1.
42 From Dr. S. Narasimhan

Dimensionality Reduction, e.g. check the eigenvalues (I). 43

Dimensionality Reduction (2), e.g. check the percentage of kept variance
We can ignore the components of lesser significance. The relative variance explained by each PC is given by λ_i / Σ_j λ_j. (Figure: bar chart of variance (%) explained by PC1 through PC10.)
You do lose some information, but if the eigenvalues are small, you don't lose much:
- p dimensions in the original data
- calculate the p eigenvectors and eigenvalues
- choose only the first k eigenvectors, based on their eigenvalues
- the final projected data set has only k dimensions
44
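
A hedged sketch of rule I using scikit-learn's PCA (assuming scikit-learn is available): explained_variance_ratio_ gives λ_i / Σ_j λ_j, and its cumulative sum is used to pick k. The toy data and the 70% threshold are illustrative choices.

```python
import numpy as np
from sklearn.decomposition import PCA       # assumes scikit-learn is installed

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10)) @ rng.normal(size=(10, 10))   # toy data with correlated columns

pca = PCA().fit(X)                          # fit all p components
ratio = pca.explained_variance_ratio_       # lambda_i / sum_j lambda_j, largest first
cumulative = np.cumsum(ratio)

# keep the smallest k whose cumulative explained variance exceeds, say, 70%
k = int(np.searchsorted(cumulative, 0.70)) + 1
print(np.round(ratio, 3), "-> keep k =", k)
```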

Today: Dimensionality Reduction (unsupervised) with Principal Components Analysis (PCA)
- Review of eigenvalues and eigenvectors
- How to project samples onto a line capturing the variation of the whole dataset → eigenvectors / eigenvalues of the covariance matrix
- PCA for dimension reduction
- Eigenfaces → PCA for face recognition
45

Example 1: Application to images, e.g. the task of face recognition
1. Treat the pixels of each image as a vector.
2. Recognize a face by 1-nearest neighbor: given a query x and a face-image database y_1, ..., y_n of n different people, find k* = arg min_k || y_k - x ||.
46 From Prof. Derek Hoiem

Example 1: the space of all face images
- When viewed as vectors of pixel values, face images are extremely high-dimensional: a 100x100 image = 10,000 dimensions. Slow, and lots of storage.
- But very few 10,000-dimensional vectors are valid face images.
- We want to effectively model the subspace of face images.
47 From Prof. Derek Hoiem

Example 1: The space of all face images
Eigenface idea: construct a low-dimensional linear subspace that best explains the variation in the set of face images. 48 From Prof. Derek Hoiem

Example 1: Application to Faces, e.g. Eigenfaces (PCA on face images)
1. Compute the covariance matrix of the face images.
2. Compute the principal components ("eigenfaces"): the K eigenvectors with the largest eigenvalues.
3. Represent all face images in the dataset as linear combinations of eigenfaces; perform nearest neighbors on these projected low-dimensional coefficients.
M. Turk and A. Pentland, Face Recognition using Eigenfaces, CVPR 1991. 49
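
A minimal NumPy sketch of these steps (not the paper's own code): it computes the mean face, obtains the top-K eigenvectors of the covariance via an SVD of the centered data (equivalent, and in the spirit of the small-matrix trick in Turk & Pentland), and projects each training image. The `eigenfaces` helper and the random "images" are illustrative assumptions.

```python
import numpy as np

def eigenfaces(faces, k):
    """faces: n x d matrix, one flattened face image per row.
    Returns the mean face, the top-k eigenfaces, and the training coefficients."""
    mu = faces.mean(axis=0)
    A = faces - mu                                  # centered data, n x d
    # Eigenvectors of the d x d covariance are needed; since d is huge for real images,
    # Turk & Pentland diagonalize a small n x n matrix instead. An SVD of A gives the
    # same top eigenvectors (rows of Vt) without forming either matrix explicitly.
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    eigfaces = Vt[:k]                               # k x d, rows are eigenfaces u_1..u_k
    W = A @ eigfaces.T                              # n x k, row i = (u_1^T(x_i-mu), ..., u_k^T(x_i-mu))
    return mu, eigfaces, W

# hypothetical tiny example: 20 random "images" of 32x32 = 1024 pixels each
rng = np.random.default_rng(0)
faces = rng.random((20, 1024))
mu, eigfaces, W = eigenfaces(faces, k=5)
print(eigfaces.shape, W.shape)                      # (5, 1024) (20, 5)
```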

Example 1: Application to Faces. (Figure: grid of training face images.) 50

Example 1: Eigenfaces example
Top eigenvectors: u_1, ..., u_k. Mean: µ = (1/N) Σ_{k=1}^{N} x_k. 51 From Prof. Derek Hoiem

Example 1: Visualization of eigenfaces. (Figure: for each principal component (eigenvector) u_k, the images µ + 3σ_k u_k and µ - 3σ_k u_k are shown.) 52 From Prof. Derek Hoiem

Example 1: Representation and reconstruction of the original x
A face x in face-space coordinates: x maps to (w_1, ..., w_k), its new representation (the coefficients of its projection onto the eigenfaces). Remarkably few eigenvector terms are needed to give a fair likeness of most people's faces. → Subtract the mean along each dimension, in order to center the original axis system at the centroid of all data points. 53

Representation and reconstruction
A face x in face-space coordinates: x maps to (w_1, ..., w_k), the new representation.
Reconstruction: x̂ = µ + w_1 u_1 + w_2 u_2 + w_3 u_3 + w_4 u_4 + ...
A human face may be considered to be a linear combination of these standardized eigenfaces. 54 From Prof. Derek Hoiem

New representation in the lower-dimensional PC space. (Figure: the scatter plot again, showing a point (x_i1, x_i2) together with its coordinates (w_i,1, w_i,2) along the principal components.) 55 From Prof. Derek Hoiem

Key Property of the Eigenspace Representation
Given two images x̂_1, x̂_2 that are used to construct the eigenspace, let ĝ_1 be the eigenspace projection of image x̂_1 and ĝ_2 the eigenspace projection of image x̂_2. Then || ĝ_2 - ĝ_1 || ≈ || x̂_2 - x̂_1 ||. That is, distance in eigenspace is approximately equal to the distance between the two original images. 56

Classification / Recognition with eigenfaces
Step 1: Process labeled training images
- Find the mean µ and covariance matrix Σ.
- Find the k principal components (i.e. eigenvectors of Σ): u_1, ..., u_k.
- Project each training image x_i onto the subspace spanned by the top principal components: (w_i1, ..., w_ik) = (u_1^T(x_i - µ), ..., u_k^T(x_i - µ)).
M. Turk and A. Pentland, Face Recognition using Eigenfaces, CVPR 1991. 57

Classification / Recognition with eigenfaces
Step 2: Nearest-neighbor-based face classification
Given a novel image x:
- Project onto the k-PC subspace: (w_1, ..., w_k) = (u_1^T(x - µ), ..., u_k^T(x - µ)).
- Optional: check the reconstruction error || x - x̂ || to determine whether the image is really a face.
- Classify as the closest training face(s) in the lower, k-dimensional subspace.
M. Turk and A. Pentland, Face Recognition using Eigenfaces, CVPR 1991. 58
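
A sketch of Step 2 under the same assumptions as the earlier sketch (the `classify_face` helper and its arguments are hypothetical names; mu, eigfaces, W_train, and labels are assumed to come from Step 1, and face_threshold is a user-chosen constant): it projects a novel image, optionally rejects it by reconstruction error, and returns the nearest training face's label.

```python
import numpy as np

def classify_face(x, mu, eigfaces, W_train, labels, face_threshold=None):
    """Project a novel image x onto the eigenface subspace and return the label
    of the nearest training face; optionally reject non-faces by reconstruction error.
    mu: (d,), eigfaces: (k, d), W_train: (n, k), labels: length-n sequence."""
    w = eigfaces @ (x - mu)                        # (w_1,...,w_k) = (u_1^T(x-mu),...,u_k^T(x-mu))

    if face_threshold is not None:
        x_hat = mu + eigfaces.T @ w                # reconstruction from the k coefficients
        if np.linalg.norm(x - x_hat) > face_threshold:
            return None                            # large reconstruction error: probably not a face

    dists = np.linalg.norm(W_train - w, axis=1)    # distances in the k-dimensional subspace
    return labels[np.argmin(dists)]

# usage (hypothetical): label = classify_face(new_image, mu, eigfaces, W, labels)
```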

Is this a face or not? 59

Example 2: e.g. Handwritten Digits
16 x 16 gray-scale images; a total of 658 such '3's (130 are shown). Each image x_i is in R^256. Compute the principal components w_1, w_2. e.g. From the ESL book. 60

(Figure: the digits projected onto the first K = 2 principal components. e.g. From the ESL book.) 61

The new reduced representation is easier to visualize and interpret. e.g. From scikit-learn. 62

PCA summary
- A general dimensionality reduction technique.
- Preserves most of the variance with a much more compact representation.
- Lower storage requirements (eigenvectors + a few numbers per face/sample).
- Faster matching (since matching is done in the lower-dimensional space).
63

PCA & Gaussian Distributions
PCA is similar to learning a Gaussian distribution for the data: µ is the mean of the distribution, and the sample covariance matrix gives the estimate of the covariance. Dimension reduction occurs by ignoring the directions in which the covariance is small. 64

(1) Limitations of PCA
PCA is not effective for some datasets. For example, if the data are a set of strings (1,0,0,0,...), (0,1,0,0,...), ..., (0,0,0,...,1), then the eigenvalues do not fall off as PCA requires. 65

(2) PCA Limitations
The direction of maximum variance is not always good for classification (Example 1). For this case, the first PC is:
+ ideal for capturing global variance
+ not ideal for discrimination
66 From Prof. Derek Hoiem

PCA and Discrimination
PCA may not find the best directions for discriminating between two classes (Example 2). Example: suppose the two classes have 2D Gaussian densities shaped as ellipsoids. The 1st eigenvector is best for representing the probabilities / overall data trend; the 2nd eigenvector is best for discrimination. 67 From Prof. Derek Hoiem

Principal Component Analysis
- Task: dimension reduction
- Representation: Gaussian assumption
- Score function: direction of maximum variance
- Search/Optimization: eigen-decomposition
- Models, Parameters: principal components
68

References
- Hastie, Trevor, et al. The Elements of Statistical Learning. Vol. 2. New York: Springer, 2009.
- Dr. S. Narasimhan's PCA lectures.
- Prof. Derek Hoiem's eigenface lecture.
69

Extra: A 2D Numerical Example 70 From Dr. S. Narasimhan

PCA Example, STEP 1
Subtract the mean from each of the data dimensions. Subtracting the mean makes the variance and covariance calculations easier by simplifying their equations. The variance and covariance values are not affected by the mean value. 71 From Dr. S. Narasimhan

PCA Example, STEP 1
DATA (p = 2):
x1   x2
2.5  2.4
0.5  0.7
2.2  2.9
1.9  2.2
3.1  3.0
2.3  2.7
2.0  1.6
1.0  1.1
1.5  1.6
1.1  0.9
ZERO-MEAN DATA:
x1      x2
 .69     .49
-1.31   -1.21
 .39     .99
 .09     .29
1.29    1.09
 .49     .79
 .19    -.31
-.81    -.81
-.31    -.31
-.71   -1.01
72 From Dr. S. Narasimhan

PCA Example, STEP 2
Calculate the covariance matrix:
cov = [ .616555556  .615444444
        .615444444  .716555556 ]
Since the non-diagonal elements of this covariance matrix are positive, we should expect that the x1 and x2 variables increase together. 73 From Dr. S. Narasimhan

PCA Example, STEP 3
Calculate the eigenvectors and eigenvalues of the covariance matrix:
eigenvalues  = (1.28402771, .0490833989)
eigenvectors = [ -.677873399  -.735178656
                 -.735178656   .677873399 ]
(columns are the eigenvectors, in the same order as the eigenvalues)
74 From Dr. S. Narasimhan
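
Steps 1-3 of this worked example can be reproduced with a short NumPy sketch; note that np.linalg.eigh returns the eigenvalues in ascending order and the eigenvectors up to sign, so the numbers match the slide up to ordering and sign.

```python
import numpy as np

data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
                 [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])

centered = data - data.mean(axis=0)        # STEP 1: subtract the mean
cov = np.cov(centered, rowvar=False)       # STEP 2: covariance matrix (divisor n-1)
print(np.round(cov, 6))                    # ~[[0.616556, 0.615444], [0.615444, 0.716556]]

vals, vecs = np.linalg.eigh(cov)           # STEP 3: eigenvalues (ascending) and eigenvectors
print(vals)                                # ~[0.0490834, 1.2840277]
print(vecs)                                # columns are the eigenvectors, up to sign
```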

PCA Example, STEP 3
The eigenvectors are plotted as diagonal dotted lines on the plot. Note that they are perpendicular to each other, and that one of the eigenvectors goes through the middle of the points, like drawing a line of best fit. The second eigenvector gives us the other, less important, pattern in the data: all the points follow the main line but are off to the side of the main line by some amount. 75

PCA Example, STEP 4
Reduce dimensionality and form the feature vector. The eigenvector with the highest eigenvalue is the principal component of the data set. In our example, the eigenvector with the largest eigenvalue was the one that pointed down the middle of the data. Once the eigenvectors are found from the covariance matrix, the next step is to order them by eigenvalue, highest to lowest. This gives the components in order of significance. 76

PCA Example, STEP 4: Feature Vector
FeatureVector = (eig_1 eig_2 eig_3 ... eig_n)
We can either form a feature vector with both of the eigenvectors:
[ -.677873399  -.735178656
  -.735178656   .677873399 ]
or we can choose to leave out the smaller, less significant component and only have a single column:
[ -.677873399
  -.735178656 ]
Now, if you like, you can decide to ignore the components of lesser significance. You do lose some information, but if the eigenvalues are small, you don't lose much. 77

PCA Example, STEP 5: Deriving the new data
FinalData = RowFeatureVector x RowZeroMeanData
- RowFeatureVector is the matrix with the eigenvectors in the columns, transposed so that the eigenvectors are now in the rows, with the most significant eigenvector at the top.
- RowZeroMeanData is the mean-adjusted data, transposed, i.e. the data items are in the columns, with each row holding a separate dimension.
78 From Dr. S. Narasimhan

PCA Example, STEP 5
FinalData transposed (dimensions along the columns):
w1             w2
-.827970186    -.175115307
1.77758033      .142857227
-.992197494     .384374989
-.274210416     .130417207
-1.67580142    -.209498461
-.912949103     .175282444
 .0991094375   -.349824698
1.14457216      .0464172582
 .438046137     .0177646297
1.22382056     -.162675287
79 From Dr. S. Narasimhan

PCA Example, STEP 5 (figure). 80 From Dr. S. Narasimhan

Reconstruction of original data
If we reduced the dimensionality, then obviously, when reconstructing the data, we lose those dimensions we chose to discard. In our example, let us assume that we considered only the w1 dimension. 81 From Dr. S. Narasimhan

Reconstruction of original data
w1
-.827970186
1.77758033
-.992197494
-.274210416
-1.67580142
-.912949103
 .0991094375
1.14457216
 .438046137
1.22382056
82 From Dr. S. Narasimhan
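
A short NumPy sketch of this reconstruction (a sketch, not the tutorial's own code): keep only the leading eigenvector, compute the w1 coordinates, and map back with x̂ = µ + w1 · v1; every reconstructed point then lies on the first-PC line.

```python
import numpy as np

data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2], [3.1, 3.0],
                 [2.3, 2.7], [2.0, 1.6], [1.0, 1.1], [1.5, 1.6], [1.1, 0.9]])
mean = data.mean(axis=0)
centered = data - mean

vals, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
v1 = vecs[:, -1]                            # eigenvector with the largest eigenvalue (the w1 direction)

w1 = centered @ v1                          # new 1-D coordinates (match the w1 column above, up to sign)
reconstructed = mean + np.outer(w1, v1)     # x_hat = mu + w1 * v1
print(np.round(reconstructed, 3))           # every reconstructed point lies on the first-PC line
```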