Covariance to PCA. CS 510 Lecture #8 February 17, 2014

Status Update
Programming Assignment 2 is due March 7th. Expect questions about your progress at the start of class. I still owe you Assignment 1 back.

Overview: Goal
Assume you have a gallery (database) of images and a probe (test) image. The goal is to find the database image that is most similar to the probe image, where similarity is defined by some chosen measure, e.g. correlation.
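
A minimal sketch of this gallery search, assuming the probe and gallery images are already registered and the same size; the use of zero-mean normalized correlation and all names here are illustrative, not the course's prescribed implementation.

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean normalized correlation between two equal-size images."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe, gallery):
    """Return the index of the gallery image most similar to the probe."""
    scores = [normalized_correlation(probe, g) for g in gallery]
    return int(np.argmax(scores)), scores

# Toy example: three random 8x8 "gallery" images and a noisy copy of one of them.
rng = np.random.default_rng(0)
gallery = [rng.random((8, 8)) for _ in range(3)]
probe = gallery[1] + 0.05 * rng.standard_normal((8, 8))
idx, scores = best_match(probe, gallery)
print(idx, [round(s, 3) for s in scores])   # expect idx == 1
```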

Example: Finding Cats
Probe image (the image to be matched) and a gallery of database images.

Registration
Whole-image matching presumes alignment. Images are points in N-dimensional space, and the dimensions are meaningless unless the points correspond; comparisons are undefined if the sizes differ. These are the same restrictions as (strict) correlation. For faces, the eyes are often mapped to the same positions, which specifies rotation, translation, and scale.

Example: Aligned Cats
Probe image registered to the gallery, and the registered gallery of images.

Multiple Images of One Object
Another reason for matching a probe against a gallery is to sample possible object variants, e.g. the object seen from all (standard) viewpoints, or under all (standard) illuminations. The goal is then to find the pose or illumination condition. This is a bit brute force, but works well if the variants are present.

Alternate Example
Example probe image, and five of 71 gallery images (COIL).

But Wait, This is Expensive!
It is very costly to compare whole images. How can we save effort? A lot of effort! Just how much variation is there in faces, or in cats? Is there a systematic way to measure it and then work with less data? Yes, which takes us next to covariance.

Background Concepts: Variance
Variance measures spread about the central tendency (the mean). Variance is defined as
$\sigma^2 = \frac{1}{N}\sum_i (x_i - \bar{x})^2$
The square root of the variance is the standard deviation.
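
A quick numeric check of this definition (a minimal sketch; the sample values are arbitrary):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
var = np.mean((x - x.mean()) ** 2)   # (1/N) * sum of squared deviations from the mean
print(var, np.sqrt(var))             # variance 4.0, standard deviation 2.0
print(np.var(x))                     # NumPy's population variance agrees: 4.0
```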

Background Concepts: Covariance
Covariance measures whether two signals vary together:
$\Omega = \frac{1}{N}\sum_i (x_i - \bar{x})(y_i - \bar{y})$
How does this differ from correlation? What is the range of the covariance of two signals? Note that the covariance of two signals is a scalar.
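
To make the contrast with correlation concrete, here is a small sketch (the two signals are fabricated for illustration): covariance is an unnormalized, scale-dependent scalar, while correlation divides by both standard deviations and so always lies in [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
y = 2.0 * x + 0.5 * rng.standard_normal(1000)     # y varies together with x

# Covariance: a scalar whose magnitude depends on the units/scale of the signals.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))

# Correlation: covariance normalized by both standard deviations, bounded by [-1, 1].
corr_xy = cov_xy / (x.std() * y.std())

print(round(cov_xy, 3), round(corr_xy, 3))        # roughly 2.0 and 0.97
```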

Covariance Matrices (I)
Let x and y be sets of vectors. What if I want to know the relation between the i-th element of x and the j-th element of y?
$\Sigma = \begin{bmatrix} \sigma_{x_1,y_1} & \sigma_{x_1,y_2} & \cdots \\ \sigma_{x_2,y_1} & \sigma_{x_2,y_2} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}$, where $\sigma_{x_i,y_j} = \frac{1}{N}\sum \left(x_i - \bar{x}_i\right)\left(y_j - \bar{y}_j\right)$

Background Concepts: Outer Products
Remember outer products:
$\begin{bmatrix} a \\ b \\ c \end{bmatrix}\begin{bmatrix} d & e & f \end{bmatrix} = \begin{bmatrix} ad & ae & af \\ bd & be & bf \\ cd & ce & cf \end{bmatrix}$
Why? Because if I have two vectors, their covariance is their outer product.
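
The same outer product in NumPy, with the symbols a..f replaced by concrete numbers (a minimal sketch):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # plays the role of (a, b, c)
v = np.array([4.0, 5.0, 6.0])   # plays the role of (d, e, f)

outer = np.outer(u, v)          # 3x3 matrix with entries u[i] * v[j]
print(outer)
# [[ 4.  5.  6.]
#  [ 8. 10. 12.]
#  [12. 15. 18.]]
```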

Covariance Matrices (II)
The covariance between two vectors isn't too interesting, but what if I have two sets of vectors? Let X = {(a1, b1, c1), (a2, b2, c2)} and Y = {(d1, e1, f1), (d2, e2, f2)}. Assume the vectors are centered, meaning that the average X vector has been subtracted from every vector in the X set, and the average Y vector from every vector in the Y set. What is the covariance between the sets of vectors?
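
A sketch of the centering step, assuming each set is stored as a matrix with one vector per column (three dimensions down the rows, two vectors across the columns); the numeric values are arbitrary.

```python
import numpy as np

# Two vectors per set, three dimensions per vector, stored as columns.
X = np.array([[1.0, 3.0],    # a1, a2
              [2.0, 6.0],    # b1, b2
              [0.0, 4.0]])   # c1, c2
Y = np.array([[5.0, 1.0],    # d1, d2
              [7.0, 3.0],    # e1, e2
              [2.0, 0.0]])   # f1, f2

# Center each set: subtract its average vector from every vector in the set.
Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)
print(Xc.mean(axis=1), Yc.mean(axis=1))   # both averages are now zero vectors
```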

Covariance Matrices (III)
The covariance matrix is the sum of outer products, written as a single matrix product:
$\begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \\ c_1 & c_2 \end{bmatrix}\begin{bmatrix} d_1 & e_1 & f_1 \\ d_2 & e_2 & f_2 \end{bmatrix} = \begin{bmatrix} a_1 d_1 + a_2 d_2 & a_1 e_1 + a_2 e_2 & a_1 f_1 + a_2 f_2 \\ b_1 d_1 + b_2 d_2 & b_1 e_1 + b_2 e_2 & b_1 f_1 + b_2 f_2 \\ c_1 d_1 + c_2 d_2 & c_1 e_1 + c_2 e_2 & c_1 f_1 + c_2 f_2 \end{bmatrix}$
$\Omega_{i,j}$ is the covariance of position i in set X with position j in set Y; this assumes pairwise matches between the vectors.
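
Continuing the sketch from the previous slide, the matrix product of the centered, column-stacked sets is exactly the sum of the outer products of corresponding vectors:

```python
import numpy as np

X = np.array([[1.0, 3.0], [2.0, 6.0], [0.0, 4.0]])   # columns are the X vectors
Y = np.array([[5.0, 1.0], [7.0, 3.0], [2.0, 0.0]])   # columns are the Y vectors
Xc = X - X.mean(axis=1, keepdims=True)
Yc = Y - Y.mean(axis=1, keepdims=True)

# Sum of outer products of paired (centered) vectors ...
by_outer = sum(np.outer(Xc[:, k], Yc[:, k]) for k in range(Xc.shape[1]))
# ... equals the single matrix product Xc @ Yc.T
by_matmul = Xc @ Yc.T

print(np.allclose(by_outer, by_matmul))   # True
```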

Covariance Matrices (IV)
It is interesting and meaningful to look at the covariance of a set with itself:
$\begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \\ c_1 & c_2 \end{bmatrix}\begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{bmatrix} = \begin{bmatrix} a_1 a_1 + a_2 a_2 & a_1 b_1 + a_2 b_2 & a_1 c_1 + a_2 c_2 \\ b_1 a_1 + b_2 a_2 & b_1 b_1 + b_2 b_2 & b_1 c_1 + b_2 c_2 \\ c_1 a_1 + c_2 a_2 & c_1 b_1 + c_2 b_2 & c_1 c_1 + c_2 c_2 \end{bmatrix}$
Now how do you interpret $\Omega_{i,j}$? Does $\Sigma$ have any special properties?
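
A sketch of some properties you might notice (using made-up data and the 1/(n-1) normalization that the PCA slide below uses): the covariance of a set with itself is symmetric, has non-negative eigenvalues, and agrees with NumPy's np.cov.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 50))                  # 50 vectors of dimension 3, as columns
Xc = X - X.mean(axis=1, keepdims=True)

S = Xc @ Xc.T / (X.shape[1] - 1)                  # covariance of the set with itself

print(np.allclose(S, S.T))                        # symmetric: True
print(np.all(np.linalg.eigvalsh(S) >= -1e-12))    # non-negative eigenvalues (PSD): True
print(np.allclose(S, np.cov(X)))                  # agrees with np.cov: True
```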

Covariance Matrices (V)
Covariance matrices of 2D data sets are easy to draw. For each of the plotted data sets, what can you tell me about $\Omega_{x,y}$?

Principal Component Analysis
PCA: $\mathrm{SVD}(\mathrm{Cov}(X)) = \mathrm{SVD}(X X^T / (n-1))$
SVD/eigendecomposition: $X X^T = R \Lambda R^{-1}$, where R is a rotation matrix (the eigenvector matrix) and $\Lambda$ is a diagonal matrix (its diagonal values are the eigenvalues). The eigenvalues capture how much the dimensions in X co-vary; the eigenvectors show which combinations of dimensions tend to vary together.
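
A minimal PCA sketch along these lines, assuming the data matrix holds one point per column and is centered before forming the covariance; the synthetic data and variable names are illustrative only. Because the covariance matrix is symmetric, np.linalg.eigh returns real eigenvalues and orthonormal eigenvectors, so here $R^{-1} = R^T$.

```python
import numpy as np

rng = np.random.default_rng(3)
# 200 two-dimensional points with correlated dimensions, one point per column.
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[3.0, 1.5], [1.5, 1.0]], size=200).T

Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / (X.shape[1] - 1)                    # Cov(X)

eigvals, R = np.linalg.eigh(cov)                      # Lambda (diagonal) and R

print(np.allclose(cov, R @ np.diag(eigvals) @ R.T))   # R Lambda R^T reproduces Cov(X)
print(eigvals)                                        # variance along each eigenvector
```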

PCA (II)
The eigenvector with the largest eigenvalue is the direction of maximum variance. The eigenvector with the 2nd largest eigenvalue is orthogonal to the 1st and has the next greatest variance, and so on. The eigenvalues describe the amount of variance along the eigenvectors. We will motivate and illustrate these ideas using Gaussian random variables in the next lecture.
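
A short sketch of using that ordering for dimensionality reduction (again with fabricated data): sort the eigenvalues in descending order, read off the fraction of variance each direction explains, and project onto the top directions.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.multivariate_normal([0.0, 0.0, 0.0],
                            [[4.0, 1.0, 0.5],
                             [1.0, 2.0, 0.3],
                             [0.5, 0.3, 0.5]], size=500).T   # 3 dims x 500 points

Xc = X - X.mean(axis=1, keepdims=True)
eigvals, R = np.linalg.eigh(Xc @ Xc.T / (X.shape[1] - 1))

# eigh returns eigenvalues in ascending order; PCA wants them descending.
order = np.argsort(eigvals)[::-1]
eigvals, R = eigvals[order], R[:, order]

print(eigvals / eigvals.sum())    # fraction of total variance along each eigenvector
top2 = R[:, :2]                   # the two largest-variance directions
projected = top2.T @ Xc           # 2 x 500: the data expressed in the reduced space
print(projected.shape)
```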