Signal Analysis. Principal Component Analysis


1 Multi-dimensional Signal Analysis Lecture 2E Principal Component Analysis

2 Subspace representation Note! Given a vector space V of dimension N, a scalar product defined by G_0, a subspace U of dimension M < N, an N × M basis matrix B of the subspace U, and a vector v ∈ V, we can determine the v_1 ∈ U that is closest to v. v_1 ∈ U is a subspace representation of v ∈ V. v_1 is independent of B, it only depends on U.

3 Subspace representation (figure: the vector v ∈ V and its projection v_1 onto the subspace U)

4 Subspace representation More precisely: the coordinates of v_1 relative to the basis B are given by c = G^{-1} B^T G_0 v, where G = B^T G_0 B.
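
To make the formula concrete, here is a minimal NumPy sketch (my own illustration, not part of the lecture; the function name and example data are invented) of computing c and v_1 for a given basis matrix B and metric G_0:

import numpy as np

def subspace_representation(v, B, G0):
    # Coordinates of the closest point v1 in span(B), under the scalar
    # product <u, w> = u^T G0 w:  c = G^{-1} B^T G0 v  with  G = B^T G0 B
    G = B.T @ G0 @ B
    c = np.linalg.solve(G, B.T @ G0 @ v)
    v1 = B @ c
    return v1, c

# Small check: project a vector in R^3 onto a 2-dimensional subspace U
rng = np.random.default_rng(0)
G0 = np.eye(3)                      # ordinary Euclidean scalar product
B = rng.standard_normal((3, 2))     # N x M basis matrix of U
v = rng.standard_normal(3)
v1, c = subspace_representation(v, B, G0)
print(np.allclose(B.T @ G0 @ (v - v1), 0.0))   # residual orthogonal to U -> True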

5 Stochastic signals In the previous applications, normalised convolution and filter optimisation, the basis was fixed. This means that U is fixed. An alternative approach is to allow the signal vector v to be a stochastic variable and to determine an M-dimensional subspace U such that v_1 is as close as possible to v on average.

6 Initial problem formulation We want to minimise ε = E ||v - v_1||², where E means taking the expectation value (mean), and the expectation is taken over all observations of the signal v. ε is minimised over all M-dimensional subspaces U.

7 Problem formulation To simplify things, we assume that V is real, V = R^N, that G_0 = I, and that B is an ON basis of the subspace U. This leads to: <u | v> = u^T v, B is real, and B^T B = I.

8 Problem formulation This simplification also means that v_1 = B c = B B^T v. We want to determine an M-dimensional subspace U (represented by an ON basis B) such that we minimise ε = E ||v - B B^T v||².

9 Solving the problem We assume that v has known statistical properties. We want to determine the B that minimises ε. This is the same as maximising ε_1 = E[v^T B B^T v] (why?) for an ON basis B. This is an optimisation problem with a set of constraints (B^T B = I).

10 Practical solution As soon as M (= the dimension of U = the number of columns in B) is determined, we optimise ε_1 over the N × M elements of B with the constraint B^T B = I. This can be solved by means of standard techniques for constrained optimisation, e.g. Lagrange's method.

11 A simple example Simple example: M = 1, so B has only one single column b_1. We want to maximise ε_1 = E[v^T b_1 b_1^T v] = E[b_1^T v v^T b_1] = b_1^T E[v v^T] b_1, with the constraint c = b_1^T b_1 = 1.

12 A simple example Use Lagrange's method: ∇ε_1 = λ ∇c, where the gradients are taken with respect to the elements of b_1. This leads to E[v v^T] b_1 = λ b_1.

13 A simple example Consequently: b_1 is an eigenvector, corresponding to the eigenvalue λ, of E[v v^T]. Remember: ||b_1|| = 1, since B is ON.

14 The correlation matrix It is clear that the matrix C = E[v v^T] is an important thing to know in order to solve the problem. Approximation of C from P samples: C ≈ (1/P) Σ_{k=1}^{P} v_k v_k^T. C is called the correlation matrix of the signal. C is symmetric and N × N. C is positive definite (why?).
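
As a small aside (my own sketch, not from the slides; the variable names and data are invented), the sample approximation of C is one line of NumPy if the P observations are stacked as rows:

import numpy as np

def correlation_matrix(samples):
    # samples: array of shape (P, N), one observation of the signal v per row
    # Sample estimate C = (1/P) * sum_k v_k v_k^T
    P = samples.shape[0]
    return samples.T @ samples / P

rng = np.random.default_rng(1)
V = rng.standard_normal((1000, 4))     # P = 1000 observations of v in R^4
C = correlation_matrix(V)
print(C.shape, np.allclose(C, C.T))    # (4, 4) True -- C is symmetric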

15 A simple example With this choice of b_1, ε_1 becomes ε_1 = b_1^T C b_1 = λ b_1^T b_1 = λ. Since we want to maximise ε_1 we should choose λ as large as possible: b_1 is a normalised eigenvector corresponding to the largest eigenvalue of C.

16 A simple example From ε_1 = λ_1 = the largest eigenvalue of C, it follows that ε = the sum of all eigenvalues of C except λ_1 (why?).

17 A slightly more complicated example Let U be 2-dimensional: B has two columns b_1 and b_2. We want to maximise ε_1 = E[v^T B B^T v] with the constraints c_1 = b_1^T b_1 = 1, c_2 = b_1^T b_2 = 0, c_3 = b_2^T b_2 = 1.

18 A slightly more complicated example In the general case, when M > 1, the subspace ON basis matrix B is not unique: B' = B Q with Q ∈ O(M) is also a solution (why?). We need additional constraints on B in order to make B unique. We choose: B^T C B is diagonal, i.e. the subspace basis vectors are uncorrelated.
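
Both points on this slide can be checked numerically. A hedged sketch (example data invented by me) that verifies that B Q spans the same subspace as B, and that the eigenvector basis makes B^T C B diagonal:

import numpy as np

rng = np.random.default_rng(2)
V = rng.standard_normal((500, 5))
C = V.T @ V / V.shape[0]                 # 5 x 5 correlation matrix estimate

w, U = np.linalg.eigh(C)                 # eigenvalues in ascending order
B = U[:, ::-1][:, :2]                    # eigenvectors of the 2 largest eigenvalues

# 1) B Q with Q in O(2) gives the same subspace: the projector B B^T is unchanged
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(B @ B.T, (B @ Q) @ (B @ Q).T))   # -> True

# 2) With eigenvectors as the basis, B^T C B is diagonal (uncorrelated coordinates),
#    whereas (B Q)^T C (B Q) is in general not diagonal
D = B.T @ C @ B
print(np.allclose(D, np.diag(np.diag(D))))         # -> True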

19 A slightly more complicated example This additional constraint leads to C b_1 = λ_1 b_1 and C b_2 = λ_2 b_2. Both b_1 and b_2 must be normalised and mutually orthogonal eigenvectors of C, with eigenvalues λ_1 and λ_2.

20 A slightly more complicated example We want to maximise ε_1 = E[v^T B B^T v] = b_1^T C b_1 + b_2^T C b_2 = λ_1 + λ_2 (why?), and therefore we should choose b_1 and b_2 as normalised eigenvectors corresponding to the two largest eigenvalues of C. Remember the multiplicity of eigenvalues: since C is symmetric, b_1 and b_2 can always be chosen as orthogonal!

21 Generalisation Based on these examples we present a general result. We want to determine an M-dimensional subspace U, described by an ON basis matrix B: 1. Form the correlation matrix C. 2. Compute the eigenvalues and eigenvectors of C. 3. The basis B consists of the M eigenvectors corresponding to the M largest eigenvalues of C.
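
The three steps translate directly into code. A minimal sketch (function and variable names are my own), assuming the observations of v are collected as the rows of an array:

import numpy as np

def pca_basis(samples, M):
    # 1. Form the correlation matrix C from the observations (one per row)
    C = samples.T @ samples / samples.shape[0]
    # 2. Compute eigenvalues and eigenvectors of C (eigh returns ascending order)
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    # 3. B = the eigenvectors corresponding to the M largest eigenvalues
    return eigvecs[:, order[:M]], eigvals[order]

rng = np.random.default_rng(3)
V = rng.standard_normal((2000, 8)) * np.array([5, 3, 1, .5, .3, .2, .1, .05])
B, lam = pca_basis(V, M=2)
print(B.shape)                            # (8, 2): ON basis of the subspace U
print(np.allclose(B.T @ B, np.eye(2)))    # -> True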

22 Generalisation This gives ε_1 = E[v^T B B^T v] = λ_1 + ... + λ_M and ε = λ_{M+1} + ... + λ_N, where λ_1 ≥ λ_2 ≥ ... ≥ λ_N are the eigenvalues of C.

23 Principal components The eigenvectors of C are in this context referred to as principal components. The magnitude of a principal component is given by the corresponding eigenvalue. The M-dimensional subspace U is spanned by the M largest principal components of C. To determine the basis B in this way is called principal component analysis, or PCA.

24 Analysis and reconstruction In PCA we analyse the signal v to get its coordinates relative to the subspace basis B: c = B^T v. If necessary, we can then reconstruct v_1 as v_1 = B c.
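
Analysis and reconstruction are then one matrix product each; a small self-contained sketch (names and data invented by me):

import numpy as np

def analyse(v, B):
    return B.T @ v      # c: coordinates of v relative to the subspace basis B

def reconstruct(c, B):
    return B @ c        # v1 = B c, the subspace representation of v

rng = np.random.default_rng(4)
B, _ = np.linalg.qr(rng.standard_normal((10, 3)))   # some ON basis, N = 10, M = 3
v = rng.standard_normal(10)
c = analyse(v, B)
v1 = reconstruct(c, B)
print(c.shape, np.allclose(v1, B @ B.T @ v))        # (3,) True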

25 Analysis and reconstruction (diagram: a stochastic process produces v ∈ R^N; PCA, based on C ∈ R^(N×N), gives the coordinates c ∈ R^M; reconstruction gives v_1 ∈ R^N.) In some applications, for example clustering, the reconstruction step is not relevant.

26 Applications In signal processing, PCA can be used for: General data analysis: for unknown data/signals, determine if the dimension can be reduced. Data/signal compression: an N-dimensional signal can be represented with an M-dimensional basis (M < N), which reduces the amount of data needed to store/transmit the signal. Data-dependent compression: (+) effective, since the compression is data dependent; (-) overhead, since B must be stored/transmitted as well.

27 Signal model PCA can typically be applied to signals with a statistical model (mean and C known). PCA can also be applied to a finite data set in the form of high-dimensional vectors (dimensionality reduction); C is then estimated from the finite data set.

28 How large subspace? The idea behind PCA is to estimate C from a large set of observations of the data/signal v and then choose the basis B as the M largest principal components. How do we know a suitable value for M? No optimal strategy! Application dependent.

29 How large subspace? In some applications M may be fixed, i.e., already given. In other applications it makes sense to analyse the eigenvalues λ_1, ..., λ_N in order to choose a suitable M, for example such that ε / ε_1 is small (why?). Usually each dimension of U corresponds to a cost, since it represents a coordinate of v_1 that needs to be stored or transmitted. We want to keep M as small as possible: a balance between cost and decrease in ε.
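
One way to turn the "ε / ε_1 small" rule of thumb into code; a hedged sketch where the function name and the threshold value are arbitrary choices of mine:

import numpy as np

def choose_M(eigvals, ratio=0.05):
    # eigvals: eigenvalues of C sorted in descending order
    # Smallest M such that eps / eps1 <= ratio, where
    # eps1 = lam_1 + ... + lam_M and eps = lam_{M+1} + ... + lam_N
    lam = np.asarray(eigvals, dtype=float)
    head = np.cumsum(lam)
    tail = lam.sum() - head
    return int(np.argmax(tail <= ratio * head)) + 1

lam = np.array([50.0, 10.0, 3.0, 1.0, 0.5, 0.2, 0.1])
print(choose_M(lam, ratio=0.05))   # -> 3 (residual 1.8 against 0.05 * 63 = 3.15)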

30 An example (figure: a pixel image used as the example data in the following slides)

31 An example Let us divide this image into 8 × 8 pixel blocks. This gives in total 4096 blocks. The pixels of each block constitute our signal v, a vector in R^64 (8 × 8 reshaped to a 64-dimensional column vector). From the 4096 observations of v we form the correlation matrix C. We also compute the eigenvalues and eigenvectors of C.
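
A sketch of this block-based pipeline (the actual lecture image is not available here, so a synthetic image stands in for it; everything below is my own illustration):

import numpy as np

rng = np.random.default_rng(5)
img = rng.standard_normal((256, 256))
# Crude low-pass plus a constant offset, to mimic the dominant DC/low-frequency
# content of a natural grey-level image
img = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1) + np.roll(img, 2, 0)) / 4 + 5.0

# Divide the image into 8 x 8 blocks and reshape each block to a vector in R^64
h, w = img.shape
blocks = (img.reshape(h // 8, 8, w // 8, 8)
             .transpose(0, 2, 1, 3)
             .reshape(-1, 64))                 # one block per row

# Correlation matrix over all block observations, and its eigendecomposition
C = blocks.T @ blocks / blocks.shape[0]        # 64 x 64
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print(blocks.shape[0], eigvals[:4])            # number of blocks, 4 largest eigenvalues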

32 The eigenvalues of C Here are the 64 eigenvalues of C (plot): one large eigenvalue, the rest are relatively small.

33 The eigenvalues of C If we remove the largest eigenvalue and plot the rest: (plot of the remaining 63 eigenvalues)

34 The eigenvectors of C Each eigenvector of C is a 64-dimensional vector that can be reshaped into an 8 × 8 block. (figure: the first six eigenvectors b_1, ..., b_6 displayed as 8 × 8 blocks)

35 The eigenvectors of C We notice that b_1 is approximately flat or constant (DC), b_2 and b_3 are approximately shaped like a plane with a slope in two perpendicular directions (linear), and b_4, b_5 and b_6 are approximately shaped like quadratic surfaces.

36 An example The distribution of the eigenvalues implies that already by going from 0 to 1 dimensions of U the error ε is reduced quite a lot. By adding a few more dimensions it should be possible to represent the signal quite well. Let us try with some different values for M and look at the result.

37 An example Each block, as a vector v ∈ R^64, is projected onto v_1 ∈ U. v_1 can be represented with only M coordinates. v_1 is a reasonably good approximation of v; at least the mean error should be low. How good is determined by the distribution of the eigenvalues relative to M.
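
The claim about the mean error can be checked numerically: over the observations used to estimate C, the mean of ||v - v_1||² equals the sum of the discarded eigenvalues. A self-contained sketch on synthetic block vectors (data invented by me):

import numpy as np

rng = np.random.default_rng(6)
# Synthetic "blocks": 4096 vectors in R^64 with a decaying variance per component
blocks = rng.standard_normal((4096, 64)) / (1.0 + np.arange(64))

C = blocks.T @ blocks / blocks.shape[0]
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

for M in (1, 2, 3, 6):
    B = eigvecs[:, :M]
    recon = blocks @ B @ B.T                    # v1 = B B^T v for every block (as rows)
    mse = np.mean(np.sum((blocks - recon) ** 2, axis=1))
    print(M, mse, eigvals[M:].sum())            # the two numbers agree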

38 M = 1 Here v is projected onto the first principal component b_1. Since b_1 is flat or constant, each block becomes flat or constant.

39 M = 2 Here v is projected onto the first two principal components b_1 and b_2. Since b_2 is a slope in one direction we can now represent that change in each block, but not a slope in the orthogonal direction!

40 M = 3 Here v is projected onto the first three principal components b_1 ... b_3. Since b_3 is a slope in the other direction we can now have slopes in any direction within each block.

41 M = 6 Here v is projected onto the first six principal components b_1 ... b_6. With 3 more dimensions, each block can contain more details. Here, the data has been reduced by a factor 64/6 ≈ 11.

42 General observation A general observation: in areas of low spatial frequency the approximation is good, in areas of high spatial frequency the approximation is worse. The more details, or the higher the spatial frequency, the more dimensions are needed for an accurate representation.

43 Karhunen-Loève transformation Introducing the new basis B in this way and computing the coordinates of v relative to B is sometimes also referred to as the Karhunen-Loève transformation. B^T v gives the transformed signal (= the coordinates of v relative to the basis B). B B^T v reconstructs the projected signal v_1.

44 Covariance and mean In some applications PCA is used to make a statistical analysis of a signal v. It is then very common to describe v in terms of its mean m_v and its covariance matrix Cov_v: m_v = E[v], Cov_v = E[(v - m_v)(v - m_v)^T].

45 Covariance and mean The PCA is then based on the covariance matrix Cov_v rather than on the correlation matrix C. This corresponds to translating the origin of the subspace U to the mean m_v and doing correlation-based PCA there. Both approaches are called PCA: statistical data analysis typically uses Cov_v, data compression typically uses C.
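
The difference between the two variants is essentially a mean subtraction. A minimal sketch (the function name, data and example below are my own):

import numpy as np

def pca(samples, M, use_covariance=True):
    # samples: (P, N) array with one observation of v per row
    X = np.asarray(samples, dtype=float)
    if use_covariance:
        X = X - X.mean(axis=0)        # translate the origin to the mean m_v
    S = X.T @ X / X.shape[0]          # Cov_v if centred, correlation matrix C otherwise
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:M]], eigvals[order]

rng = np.random.default_rng(7)
V = rng.standard_normal((1000, 3)) * np.array([1.0, 3.0, 0.5]) + np.array([10.0, 0.0, 0.0])
B_cov, _ = pca(V, M=1, use_covariance=True)
B_cor, _ = pca(V, M=1, use_covariance=False)
# With a large mean along the first axis, correlation-based PCA picks roughly the mean
# direction, while covariance-based PCA picks the second coordinate axis, which has
# the largest variation around the mean
print(np.round(B_cor[:, 0], 2), np.round(B_cov[:, 0], 2))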

46 The data matrix In the case that we approximate the statistics of v from a finite number P of observations, these can be described by a data matrix A that holds all the observed v in its columns. An estimate of C = E[v v^T] is then given by C ≈ (1/P) A A^T.

47 SVD vs EVD The principal components are the eigenvectors of A A^T. These are also the left singular vectors of A. An alternative way of computing the PCA, which in some cases may have better numerical properties: 1. Form the data matrix A. 2. Compute its SVD. 3. Form B from the left singular vectors that correspond to the M largest singular values.
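
A sketch comparing the two routes on invented data (the column convention for A follows slide 46): eigenvectors of (1/P) A A^T versus left singular vectors of A.

import numpy as np

rng = np.random.default_rng(8)
N, P, M = 6, 500, 2
A = rng.standard_normal((N, P))            # data matrix: one observation v per column

# Route 1 (EVD): eigenvectors of the estimated correlation matrix C = (1/P) A A^T
C = A @ A.T / P
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
B_evd = eigvecs[:, order[:M]]

# Route 2 (SVD): left singular vectors of A for the M largest singular values
Uleft, s, _ = np.linalg.svd(A, full_matrices=False)   # singular values sorted descending
B_svd = Uleft[:, :M]

# The two bases span the same subspace (individual vectors may differ in sign),
# and the eigenvalues of C are the squared singular values divided by P
print(np.allclose(B_evd @ B_evd.T, B_svd @ B_svd.T))        # -> True
print(np.allclose(eigvals[order][:M], s[:M] ** 2 / P))      # -> True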

48 What you should know includes Formulation of the problem that PCA solves: find a subspace U that minimises ε = E ||v - B B^T v||², where the subspace is represented by an ON basis B. Solution: B consists of the eigenvectors of C = E[v v^T] corresponding to the M largest eigenvalues, called the principal components; it can alternatively be computed by means of the SVD. ε = the sum of the residual eigenvalues of C. Applications: dimensionality reduction, signal compression.
