Covariance to PCA. CS 510 Lecture #14 February 23, 2018



Overview: Goal
Assume you have a gallery (database) of images and a probe (test) image. The goal is to find the database image that is most similar to the probe image, where "similar" is defined according to some measure, e.g. correlation.
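As a concrete illustration (my addition, not from the slides), here is a minimal sketch of gallery matching using normalized correlation; the image arrays, sizes, and the helper names are made up for the example.

```python
import numpy as np

def normalized_correlation(a, b):
    """Correlation between two same-sized images, treated as flat vectors."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def best_match(probe, gallery):
    """Return the index of the gallery image most similar to the probe."""
    scores = [normalized_correlation(probe, g) for g in gallery]
    return int(np.argmax(scores))

# Toy usage with random "images"; real images must be registered first.
rng = np.random.default_rng(0)
gallery = [rng.random((32, 32)) for _ in range(5)]
probe = gallery[3] + 0.05 * rng.random((32, 32))
print(best_match(probe, gallery))  # expect 3
```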

Example: Finding Cats
Probe image: the image to be matched. Gallery of database images.

Registration
Whole-image matching presumes alignment: images are points in N-dimensional space, and the dimensions are meaningless unless the points correspond. Comparisons are undefined if the image sizes differ. These are the same restrictions as (strict) correlation. For faces, the eyes are often mapped to the same positions; this specifies the rotation, translation, and scale.
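As an illustration (not part of the slides), a minimal sketch of the 2D similarity transform (rotation, translation, scale) that such landmark alignment specifies; the landmark coordinates and parameter values are made up.

```python
import numpy as np

def similarity_transform(points, angle_rad, scale, translation):
    """Apply a 2D similarity transform (rotate, scale, translate) to Nx2 points."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    R = np.array([[c, -s],
                  [s,  c]])
    return scale * points @ R.T + np.asarray(translation)

# Hypothetical eye landmarks in a probe image, mapped toward canonical positions.
eyes = np.array([[100.0, 120.0], [160.0, 118.0]])
aligned = similarity_transform(eyes, angle_rad=0.02, scale=0.5, translation=[-20.0, -30.0])
print(aligned)
```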

Example: Aligned Cats
Probe image, registered to the gallery. Registered gallery of images.

Multiple Images of One Object
Another reason for matching a probe against a gallery is to sample possible object variants, e.g. the object seen from all (standard) viewpoints, or under all (standard) illuminations. Goal: find the pose or illumination condition. This is a bit brute force, but strong if the variants are present.

Alternate Example
Example probe image. Five of 71 gallery images (COIL).

But Wait, This is Expensive!
It is very costly to compare whole images. How can we save effort, and a lot of effort? Just how much variation is there in, say, the faces of cats? Is there a systematic way to measure that variation and then work with less data? Yes, which takes us next to covariance.

Background Concepts: Variance
Variance measures spread about the central tendency (the mean) and is defined as:
$$\sigma^2 = \frac{1}{N}\sum_{i}(x_i - \bar{x})^2$$
The square root of the variance is the standard deviation.
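A minimal numerical sketch (my addition, not from the slides) of the variance and standard deviation just defined, using the divide-by-N form from the slide; the data values are made up.

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
variance = np.mean((x - x.mean()) ** 2)   # (1/N) * sum((x_i - mean)^2)
std_dev = np.sqrt(variance)
print(variance, std_dev)                  # 4.0 2.0
```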

Background Concepts: Covariance
Covariance measures whether two signals vary together:
$$\Omega = \frac{1}{N}\sum_{l}(x_l - \bar{x})(y_l - \bar{y})$$
How does this differ from correlation? What is the range of the covariance of two signals? Note that the covariance of two signals is a scalar.
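A minimal sketch (an addition, not from the slides) contrasting covariance with the correlation coefficient, which is just covariance normalized by the two standard deviations and therefore restricted to [-1, 1]; the data are made up.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0                                # perfectly linearly related to x

cov = np.mean((x - x.mean()) * (y - y.mean()))   # a scalar, unbounded
corr = cov / (x.std() * y.std())                 # normalized, always in [-1, 1]
print(cov, corr)                                 # 4.0 1.0
```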

Covariance Matrices (I)
Let x and y be sets of vectors. What if I want to know the relation between the i-th element of x and the j-th element of y?
$$\Sigma = \begin{bmatrix} \sigma_{x_1,y_1} & \sigma_{x_1,y_2} & \cdots \\ \sigma_{x_2,y_1} & \sigma_{x_2,y_2} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}, \qquad \sigma_{x_i,y_j} = \frac{1}{N}\sum_{k}\left(x_{i,k} - \bar{x}_i\right)\left(y_{j,k} - \bar{y}_j\right)$$

Background Concepts: Outer Products
Remember outer products:
$$\begin{bmatrix} a \\ b \\ c \end{bmatrix} \begin{bmatrix} d & e & f \end{bmatrix} = \begin{bmatrix} ad & ae & af \\ bd & be & bf \\ cd & ce & cf \end{bmatrix}$$
Why? Because if I have two vectors, their covariance term is their outer product.
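A minimal sketch (added here, not from the slides) of the same outer product in NumPy; the numeric values stand in for (a, b, c) and (d, e, f).

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])          # column vector (a, b, c)
v = np.array([4.0, 5.0, 6.0])          # row vector (d, e, f)
print(np.outer(u, v))
# [[ 4.  5.  6.]
#  [ 8. 10. 12.]
#  [12. 15. 18.]]
```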

Covariance Matrices (II)
The covariance between two vectors isn't too interesting, just a set of scalars, but what if we have two sets of vectors?
Let X = {(a_1, b_1, c_1), (a_2, b_2, c_2)}
Let Y = {(d_1, e_1, f_1), (d_2, e_2, f_2)}
Assume the vectors are centered, meaning that the average X vector is subtracted from the X set and the average Y vector is subtracted from the Y set. What is the covariance between the sets of vectors?

Covariance Matrices (III)
The covariance matrix is the outer product:
$$\begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \\ c_1 & c_2 \end{bmatrix} \begin{bmatrix} d_1 & e_1 & f_1 \\ d_2 & e_2 & f_2 \end{bmatrix} = \begin{bmatrix} a_1 d_1 + a_2 d_2 & a_1 e_1 + a_2 e_2 & a_1 f_1 + a_2 f_2 \\ b_1 d_1 + b_2 d_2 & b_1 e_1 + b_2 e_2 & b_1 f_1 + b_2 f_2 \\ c_1 d_1 + c_2 d_2 & c_1 e_1 + c_2 e_2 & c_1 f_1 + c_2 f_2 \end{bmatrix}$$
$\Omega_{i,j}$ is the covariance of position i in set X with position j in set Y; this assumes pairwise matches.
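A minimal numerical sketch (an addition to the slides) of the same construction: two centered sets of 3-vectors, with the cross-covariance matrix formed as a sum of per-sample outer products; the data values are made up and the slide's unnormalized form is used.

```python
import numpy as np

# Two samples each of 3-dimensional vectors, one sample per row.
X = np.array([[1.0, 2.0, 3.0],
              [3.0, 0.0, 1.0]])
Y = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 2.0]])

# Center each set by subtracting its mean vector.
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)

# Sum of per-sample outer products; entry (i, j) pairs position i of X with position j of Y.
cross_cov = sum(np.outer(x, y) for x, y in zip(Xc, Yc))
print(cross_cov)
print(np.allclose(cross_cov, Xc.T @ Yc))   # same thing written as a matrix product: True
```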

Covariance Matrices (IV)
It is interesting and meaningful to look at the covariance of a set with itself:
$$\begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \\ c_1 & c_2 \end{bmatrix} \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{bmatrix} = \begin{bmatrix} a_1 a_1 + a_2 a_2 & a_1 b_1 + a_2 b_2 & a_1 c_1 + a_2 c_2 \\ b_1 a_1 + b_2 a_2 & b_1 b_1 + b_2 b_2 & b_1 c_1 + b_2 c_2 \\ c_1 a_1 + c_2 a_2 & c_1 b_1 + c_2 b_2 & c_1 c_1 + c_2 c_2 \end{bmatrix}$$
Now how do you interpret $\Omega_{i,j}$?
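A minimal sketch (not from the slides) of the covariance of a set with itself, showing that the result is symmetric and that its diagonal holds the per-dimension variances; the data are made up and the slide's unnormalized (no 1/N) form is used.

```python
import numpy as np

X = np.array([[1.0, 2.0, 3.0],
              [3.0, 0.0, 1.0],
              [2.0, 4.0, 2.0]])
Xc = X - X.mean(axis=0)                  # center the set

self_cov = Xc.T @ Xc                     # covariance of the set with itself (unnormalized)
print(self_cov)
print(np.allclose(self_cov, self_cov.T))              # symmetric: True
print(np.allclose(np.diag(self_cov) / len(X),         # diagonal / N = per-dimension variance
                  X.var(axis=0)))                     # True
```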

Covariance Matrices (V)
Covariance matrices of 2D data sets (easy to draw). For each of the example data sets shown: what can you tell me about $\Omega_{x,y}$?

Principal Component Analysis
$$\mathrm{PCA} \equiv \mathrm{SVD}(\mathrm{Cov}(X)) = \mathrm{SVD}\!\left(\frac{XX^T}{n-1}\right), \qquad \mathrm{SVD:}\; XX^T = R\,\Lambda\,R^{-1}$$
R is a rotation matrix (the eigenvector matrix); Λ is a diagonal matrix (its diagonal entries are the eigenvalues). The eigenvalues capture how much the dimensions in X co-vary. The eigenvectors show which combinations of dimensions tend to vary together.
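A minimal sketch (an addition, not the course code) of this decomposition in NumPy: the covariance of centered data is symmetric, so np.linalg.eigh returns the eigenvalues and the rotation R (eigenvectors as columns); the random data are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 200))            # 3 dimensions, 200 samples as columns
Xc = X - X.mean(axis=1, keepdims=True)   # center each dimension

cov = Xc @ Xc.T / (Xc.shape[1] - 1)      # Cov(X) = X X^T / (n - 1)
eigenvalues, R = np.linalg.eigh(cov)     # cov = R @ diag(eigenvalues) @ R.T

print(eigenvalues)                                        # variances along the eigenvectors
print(np.allclose(cov, R @ np.diag(eigenvalues) @ R.T))   # True
```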

PCA (II)
The eigenvector with the largest eigenvalue is the direction of maximum variance. The eigenvector with the 2nd-largest eigenvalue is orthogonal to the 1st and has the next greatest variance, and so on. The eigenvalues describe the amount of variance along the eigenvectors. We will motivate and illustrate these ideas using Gaussian random variables in the next lecture.
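Continuing the sketch above (my addition, not from the lecture): sorting the eigenpairs by eigenvalue and projecting onto the top-k eigenvectors is what lets us work with less data; the function name and data are made up.

```python
import numpy as np

def top_k_projection(Xc, k):
    """Project centered data (dims x samples) onto its k largest-variance eigenvectors."""
    cov = Xc @ Xc.T / (Xc.shape[1] - 1)
    eigenvalues, R = np.linalg.eigh(cov)
    order = np.argsort(eigenvalues)[::-1]        # sort eigenpairs, largest variance first
    top_vectors = R[:, order[:k]]                # dims x k
    return top_vectors.T @ Xc                    # k x samples: reduced representation

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 100))
Xc = X - X.mean(axis=1, keepdims=True)
print(top_k_projection(Xc, 2).shape)             # (2, 100)
```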