Eigenimaging for Facial Recognition


Aaron Kosmatin, Clayton Broman

December 2, 2010

Abstract

The interest of this paper is Principal Component Analysis (PCA), specifically its application to facial recognition, known as eigenfaces. Principal Component Analysis will be used to create a face space into which images can be projected and known faces recognized.

Facial recognition has many applications. It can be used for security, verifying that the person requesting access has clearance. It can be used in surveillance, picking a person of interest out of a crowd. It also has implications for differentiating other objects; the camera industry currently markets digital cameras with facial recognition software that works in real time to adjust focus and highlight people's faces.

There are three main approaches to facial recognition: Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Elastic Bunch Graph

Mapping (EBGM). LDA uses multiple images of the same person with different facial expressions to create a face class that belongs to a specific person. EBGM uses 3-dimensional analysis to create a wireframe of the face and uses that wireframe to recognize faces [3]. For the purposes of this project, PCA will be used.

The PCA approach was developed by Kirby and Sirovich [5]. This approach uses several images of different faces to create an average face. The differences of each face from the average face are used to create a vector representing the face in face space. Mathematically, PCA treats every image of the training set as a vector in a very high dimensional space. The eigenvectors of the covariance matrix of these vectors incorporate the variation among the face images [2].

A sample of 11 pictures from each of 15 different people will be used. Some of the pictures from each person will be used to create a training matrix, Γ. Using Γ, a program can be trained to project images into face space, assigning a vector to each image it is given. This vector can be saved for each person, creating a database composed of the persons and their associated vectors. A picture not included in Γ can be projected into face space to create its own vector. This vector can be compared against the database of vectors to find the identity of the person.

Principal Component Analysis is "a way of identifying patterns in data, and expressing the data in such a way as to highlight similarities and differences" [6]. For the purpose of facial recognition, the patterns correspond to facial features that vary among people, e.g. the width of the nose, the distance between the nose and mouth, and the distance between the eyes. The point of this section is to introduce PCA before applying it to the correlation, compression, and identification of an image. The majority of the steps taken in this introduction to PCA will be replicated, with only minor alterations, for the eigenface

recognition algorithm.

A good way to understand PCA is to start in two dimensions so it can be represented visually. For the purpose of this demonstration the representative dimensions will be X and Y, each containing a dataset of points in two dimensional space, Figure 1a. To begin, the covariance of the two datasets must be found. Covariance is related to variance, which measures how spread out data is and is defined as the standard deviation squared, $s^2$; but while variance applies only to a single dimension, covariance measures how the variation of one dimension's dataset relates to that of another.

Finding the covariance begins by calculating the average of each dimension of the data set. Next, the average of each dimension is subtracted from the points in the respective dimension. This yields a new dataset comprised of the deviations from the mean in each dimension, $X - \bar{X}$ and $Y - \bar{Y}$, which will henceforth be known as the adjusted dataset. This step adjusts the data to be centered about the origin, see Figure 1b. The standard deviations of X and Y can be found with:

$$s_X = \sqrt{\frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}}, \qquad s_Y = \sqrt{\frac{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}{n-1}}$$

The standard deviation can be used to find both the variance and the covariance of X and Y.
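As an illustration of these first steps (not part of the original project, whose programming environment is not stated in this transcription), a short numpy sketch that centers a small synthetic two-dimensional dataset and computes the sample standard deviations might look like the following; the data and variable names are invented for the example.

```python
import numpy as np

# A small two-dimensional dataset: X and Y are the two representative dimensions.
rng = np.random.default_rng(0)
X = rng.normal(5.0, 2.0, size=50)
Y = 0.8 * X + rng.normal(0.0, 0.5, size=50)

# Subtract each dimension's mean to obtain the adjusted (mean-centered) dataset.
X_adj = X - X.mean()
Y_adj = Y - Y.mean()

# Sample standard deviations, dividing by n - 1 as in the formulas above.
s_X = np.sqrt(np.sum(X_adj**2) / (len(X) - 1))
s_Y = np.sqrt(np.sum(Y_adj**2) / (len(Y) - 1))
print(s_X, np.std(X, ddof=1))  # the two values agree
```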

Figure 1: (a) The original data; (b) the adjusted data set.

The covariance of X and Y is:

$$\operatorname{cov}(X, Y) = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{n-1}$$

The variance of one dimension is simply $\operatorname{cov}(X, X)$:

$$\operatorname{cov}(X, X) = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(X_i - \bar{X})}{n-1} = \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1} = (s_X)^2$$

The covariance is a measure of how much the data in one dimension varies from that dimension's mean with respect to how much another dimension's data varies from its mean. If the value is positive, the two data sets increase together; if negative, one data set increases while the other decreases. A covariance of zero means that the data sets do not vary with respect to one another and the two dimensions are uncorrelated.

By putting the adjusted datasets, $X - \bar{X}$ and $Y - \bar{Y}$, into the columns of a matrix, the covariance matrix is born:

$$C = A^T A \quad \text{where} \quad A = [X - \bar{X},\; Y - \bar{Y}]$$

(The factor $1/(n-1)$ is omitted here; it rescales the eigenvalues but does not change the eigenvectors.) In this case the matrix has only two dimensions, but it is possible with this matrix technique to solve for N covariances at a time: as the number of dimensions increases to $\mathbb{R}^N$, the matrix A takes the shape $A = [X_1 - \bar{X}_1,\; X_2 - \bar{X}_2,\; \ldots,\; X_N - \bar{X}_N]$.
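Continuing the same illustrative sketch, the adjusted datasets can be stacked into the columns of A and the covariance matrix formed as described above. The $1/(n-1)$ factor is included in the code so the result matches numpy's own estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(5.0, 2.0, size=50)
Y = 0.8 * X + rng.normal(0.0, 0.5, size=50)

# Columns of A are the mean-adjusted datasets X - X̄ and Y - Ȳ.
A = np.column_stack([X - X.mean(), Y - Y.mean()])

# Covariance matrix: symmetric, with the variances on the main diagonal.
C = A.T @ A / (len(X) - 1)
print(C)
print(np.cov(X, Y))  # numpy's built-in estimate gives the same matrix
```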

Figure 2: The adjusted dataset with its eigenvectors; the dominant eigenvector is labeled.

There are some important things to notice about a covariance matrix. First, since cov(X, Y) = cov(Y, X), the matrix is symmetric across the main diagonal. Also, each value along the main diagonal is the variance of one dimension. The covariance matrix is always square and symmetric; in the face-recognition setting discussed later, where there are fewer samples than dimensions, it will also be singular. The dominant eigenvector creates a line of best fit through the adjusted data, and the other eigenvectors are orthogonal to this vector. The eigenvectors will be contained in a matrix V.

By observing Figure 2, it can be seen that the eigenvectors represent a line of best fit and a line perpendicular to it. Notice also that the number of eigenvectors corresponds to the number of dimensions the data sets are in and is independent of the number of data entries in each dimension. Any point of the adjusted dataset can be represented exactly as a linear combination of these eigenvectors. As the number of dimensions increases to $\mathbb{R}^3$ and beyond, the additional eigenvectors will also be mutually perpendicular.
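The eigenvectors and eigenvalues of this small covariance matrix can be computed and ordered as in the following sketch, again using the synthetic data from above (regenerated so the snippet runs on its own).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(5.0, 2.0, size=50)
Y = 0.8 * X + rng.normal(0.0, 0.5, size=50)
A = np.column_stack([X - X.mean(), Y - Y.mean()])
C = A.T @ A / (len(X) - 1)

# eigh is appropriate because C is symmetric; it returns unit eigenvectors
# as the columns of V, with eigenvalues in ascending order.
eigvals, V = np.linalg.eigh(C)

# Reorder so the dominant (largest-eigenvalue) eigenvector comes first.
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]
print("dominant eigenvector:", V[:, 0])
```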

The largest eigenvalues of the covariance matrix correspond to eigenvectors that point in the directions of greatest variance in the data. These eigenvectors are the principal eigenvectors, or principal components. It is possible to represent the data very closely with relatively few principal eigenvectors and not lose much information about the original data. The principal eigenvector in the case of the sample dataset is the vector that passes through the bulk of the data points and is aptly labeled the dominant eigenvector in Figure 2. Since the second eigenvector contributes so little to the position of the adjusted data, a good representation of the data can be given in one dimension. This concept is at the heart of PCA.

The last thing PCA does to the dataset is put it in terms of the eigenvectors of the covariance matrix:

$$\text{NewData} = V^T A^T$$

The new data is the product of the transpose of the eigenvector matrix and the transpose of the adjusted dataset. This shifts the axes of the dataset to fit more naturally with the data. In the event only one principal eigenvector is chosen to represent the data (to perhaps decrease computing time), the data is collapsed onto that principal eigenvector. Figure 3 plots the adjusted data (stars) alongside the adjusted data in terms of the dominant eigenvector (circles).

It is important to note that the last steps assume the eigenvectors of the covariance matrix are normalized. This matters so that no distortion of the data occurs when multiplying by the adjusted dataset; most computing programs normalize the eigenvectors automatically. It is also important for retrieving the original data set that the eigenvectors are unit vectors, so that $V^{-1} = V^T$, which simplifies the calculations greatly. The original data set can be recovered by reversing the process just described. That is:

$$\text{OriginalData}_X = (V \cdot \text{NewData})_X + \bar{X}, \qquad \text{OriginalData}_Y = (V \cdot \text{NewData})_Y + \bar{Y}$$
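A sketch of the projection and reconstruction steps, under the same synthetic-data assumptions as the earlier snippets, might be:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(5.0, 2.0, size=50)
Y = 0.8 * X + rng.normal(0.0, 0.5, size=50)
A = np.column_stack([X - X.mean(), Y - Y.mean()])
eigvals, V = np.linalg.eigh(A.T @ A)
V = V[:, np.argsort(eigvals)[::-1]]          # dominant eigenvector first

new_data = V.T @ A.T                         # NewData = V^T A^T

# Keep only the dominant component: zero out the second row of NewData.
reduced = new_data.copy()
reduced[1, :] = 0.0

# Reverse the process; V has orthonormal columns, so V^{-1} = V^T.
restored = (V @ reduced).T
restored_X = restored[:, 0] + X.mean()
restored_Y = restored[:, 1] + Y.mean()
```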

Figure 3: The adjusted dataset in terms of the dominant eigenvector.

Figure 4: The original dataset alongside the restored eigenvector dataset.

Figure 4 plots the original data (stars) alongside the data retrieved using only the single principal eigenvector (circles). Some of the information about the original data was lost, but the general relation of the points remains.

The first step in identifying the faces is to create the training matrix. For the purpose of this paper, a set of images from Yale will be used. The set consists of 11 images each of 15 different persons, all 320 × 243 pixels. The faces are in 8-bit greyscale, allowing for $2^8$, or 256, unique values per pixel. Each image is converted to a (320 · 243) × 1 vector (77,760 × 1), $\Gamma_i$. In this paper, the number of pixels, 77,760, will be referred to as N; the number of images in the training set will be referred to as M. The training matrix is created by grouping the vectors into a matrix such that:

$$\Gamma = [\Gamma_1, \Gamma_2, \ldots, \Gamma_M]$$

It is important to note that the values of the rows are not randomly distributed. Faces, being generally alike in overall structure, will have greyscale variations in roughly the same areas, corresponding to features such as eyes, cheeks, and lips. PCA can be used to find the areas of highest variance.

The first step in identification is to take the average of the columns of Γ to create the vector Ψ:

$$\Psi = \frac{1}{M}\sum_{i=1}^{M}\Gamma_i$$

Ψ, shown in Figure 5, is analogous to $\bar{X}$ and $\bar{Y}$ in the previous example; it contains all of the pixel averages in a single vector. After Ψ has been created, a new N × M matrix, Φ, is created from the difference of the columns of Γ and Ψ.
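A rough sketch of how the training matrix and average face could be assembled with numpy is shown below; random arrays stand in for the Yale images, and the variable names are ours rather than those of the authors' program.

```python
import numpy as np

# Stand-ins for the training images: M faces of 243 x 320 greyscale pixels.
# In practice each face would be read from the Yale image files instead.
M, height, width = 10, 243, 320
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(M, height, width)).astype(float)

# Each image is flattened into one N x 1 column of the training matrix Gamma.
N = height * width                     # 77,760 pixels
Gamma = faces.reshape(M, N).T          # N x M

# Psi, the average face, is the mean of the columns of Gamma.
Psi = Gamma.mean(axis=1)
```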

Figure 5: Ψ, the average face.

$$\Phi_i = \Gamma_i - \Psi, \quad \text{for } i = 1 \text{ to } M$$

Φ is a matrix, each column of which contains the difference between the corresponding original image and the average of the images. The next step in PCA is to find the eigenvalues and eigenvectors of the covariance matrix. The covariance matrix, C, is usually calculated as:

$$C = A^T A$$

The normal convention has the related data points running down the columns of A. In our case, however, the related data points are the pixels, which run across the rows. As a result, $A = \Phi^T$. This implies:

$$C = (\Phi^T)^T \Phi^T = \Phi\Phi^T$$

For facial recognition, the number of pixels is usually much larger than the number of images. In this case, the images are 320 × 243 pixels, making the covariance matrix 77,760 × 77,760 in size. This is a prohibitively large computation even with computer automation. The non-trivial eigenvalues and eigenvectors of the covariance matrix can be found another way. As stated by Sirovich and Kirby, "If the number in the ensemble M is less than the dimension of C, then C is singular and cannot be of order greater than M." [5] The rank of the covariance matrix is M. Although the covariance matrix is N × N, it was composed from an N × M matrix. This implies that many of the columns are linear combinations of other columns. Sirovich and Kirby developed a method to find the eigenvectors of C without computing them from C itself. By starting with the eigenvectors of the inner product:

$$\Phi^T \Phi v_i = \lambda_i v_i$$

they then multiply both sides on the left by Φ, giving:

$$\Phi\Phi^T \Phi v_i = \lambda_i \Phi v_i$$
$$C\,\Phi v_i = \lambda_i \Phi v_i$$

From this they found that the eigenvectors of the covariance matrix can be obtained by multiplying the eigenvectors of the $\Phi^T \Phi$ matrix on the left by Φ. This simplifies the process considerably, since $\Phi^T \Phi$ is M × M and much easier to find the eigenvalues for. The additional eigenvalues of $\Phi\Phi^T$ belong to the null space and can be ignored for the purpose of facial recognition. After multiplying the eigenvectors of $\Phi^T \Phi$ on the left by Φ, the vectors are no longer normalized. They can easily be renormalized by dividing them by their respective lengths. This allows us to replace $V^{-1}$ with $V^T$ and simplifies some calculations.

Each column of Φ can be found as a linear combination of the eigenvectors of the covariance matrix. Since the eigenvectors and the columns $\Phi_i$ are known, the specific linear combination can be found with:

$$\Phi_i = c_1 v_1 + c_2 v_2 + \cdots + c_M v_M = V\,[c_1, c_2, \ldots, c_M]_i^T$$

This implies:

$$[c_1, c_2, \ldots, c_M]_i^T = V^T \Phi_i$$
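The Sirovich and Kirby shortcut described above could be implemented along the following lines. This is only an illustrative sketch with random stand-in data, not the authors' actual program; note that the mean subtraction leaves one eigenvalue of the small matrix numerically zero, so that eigenvector is discarded.

```python
import numpy as np

# Random stand-ins for the mean-subtracted faces Phi (N x M).
M, N = 10, 243 * 320
rng = np.random.default_rng(0)
Gamma = rng.integers(0, 256, size=(N, M)).astype(float)
Psi = Gamma.mean(axis=1)
Phi = Gamma - Psi[:, None]

# Eigenvectors of the small M x M inner-product matrix Phi^T Phi ...
eigvals, v = np.linalg.eigh(Phi.T @ Phi)
order = np.argsort(eigvals)[::-1]
eigvals, v = eigvals[order], v[:, order]

# Keep only eigenvectors with non-negligible eigenvalues (the null-space
# directions carry no information about the faces).
keep = eigvals > 1e-8 * eigvals.max()
eigvals, v = eigvals[keep], v[:, keep]

# ... become eigenvectors of the N x N covariance matrix after multiplying
# on the left by Phi; renormalize them to unit length afterwards.
V = Phi @ v
V /= np.linalg.norm(V, axis=0)

# Project every training face into face space: column i holds V^T Phi_i.
coeffs = V.T @ Phi
```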

The vector of coefficients, $[c_1, c_2, \ldots, c_M]_i$, written $c_i$, represents the projection of $\Phi_i$ into face space, where $c_i$ corresponds to a point in face space.

To identify new faces that are not included in the training set, each face is projected into face space, creating a representation of the face as a linear combination of the eigenvectors of the covariance matrix. First the new images are placed into a matrix, Γ′, converting them to columns as was done to the images in the original Γ training set. Next the original Ψ vector is subtracted, such that $\Phi'_k = \Gamma'_k - \Psi$. Then Φ′ is projected onto face space with $V^T \Phi'$. This gives the linear combination, c′, in terms of the eigenvectors in V.

The last step is to identify the closest facial match, which is done by treating each face as a point in face space. The distance between faces is found by computing the Euclidean distance between them:

$$\sqrt{(c_{1i} - c'_1)^2 + (c_{2i} - c'_2)^2 + \cdots + (c_{Mi} - c'_M)^2}$$

The point $c_i$ that is closest to c′ is recognized as the same face.

The faces that have been projected into face space can be reconstructed. The reconstruction is a close approximation of the original face; the equation to reconstruct the face is $V c_k + \Psi$. Turk and Pentland found that the reconstructed face had about a 2% difference from the original face.

To make some of the computations faster, the eigenvectors can be sorted in descending order of the magnitude of their respective eigenvalues.
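The identification and reconstruction steps just described might be sketched as follows, again with random stand-in data; the helper function and its name are hypothetical and not part of the authors' code.

```python
import numpy as np

# Compact repeat of the training sketch: V (eigenfaces), Psi, coeffs.
M, N = 10, 243 * 320
rng = np.random.default_rng(0)
Gamma = rng.integers(0, 256, size=(N, M)).astype(float)
Psi = Gamma.mean(axis=1)
Phi = Gamma - Psi[:, None]
eigvals, v = np.linalg.eigh(Phi.T @ Phi)
keep = np.argsort(eigvals)[::-1][:-1]        # drop the smallest (null) eigenvalue
V = Phi @ v[:, keep]
V /= np.linalg.norm(V, axis=0)
coeffs = V.T @ Phi                           # known faces as points in face space

def identify(new_image):
    """Project a flattened image into face space and return the index of the
    closest known face together with the Euclidean distance to it."""
    c_new = V.T @ (new_image - Psi)
    distances = np.linalg.norm(coeffs - c_new[:, None], axis=0)
    best = int(np.argmin(distances))
    return best, distances[best]

# A slightly noisy copy of training face 3 should come back as face 3.
probe = Gamma[:, 3] + rng.normal(0.0, 5.0, size=N)
print(identify(probe))

# Reconstruction of the probe from its face-space coefficients: V c + Psi.
reconstructed = V @ (V.T @ (probe - Psi)) + Psi
```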

Figure 6: A reconstructed face.

Some of the eigenvectors contribute less to the faces than others; these can be omitted without losing too much information. In the Turk and Pentland study, about 40 images were needed to train the algorithm. The number of eigenvectors kept can easily be adjusted to balance the computing requirements against accuracy.

The program identifies the closest face in face space to a given input face as that face. If the program is running on an insufficient basis, if the input image varies too much from its correlated training set images, or if the input image was not part of the original training set, the possibility of a false positive arises. A false positive occurs when the program recognizes the wrong person. One way to confront this error is to check the distances between the points in face space while identifying the faces. If the distance between the new face and every previous face were too large, the program could add it to the training set as a new person. If the distance were too far to be certain of the identity but not large enough to be certain of a new face, the program could simply decline to identify that face.

Principal Component Analysis is an effective way to identify faces. It finds the areas of highest variance, creates a subspace, face space, from the pixels according to that variance, and places the images in that subspace. Many of the parameters of a PCA program, such as the number of eigenvectors used and the size of the training set, can be adjusted to match accuracy needs with computational limitations.

Figure 7: The finished program, using a graphical interface to control parameters and view images.

References

[1] H. Ganta, P. Tejwani. 2004. Face Recognition using Eigenfaces. Clemson University.

[2] NSTC. 2006. Face Recognition. National Science and Technology Council.

[3] K. Josic. 2003. Face Recognition. The Engines of Our Ingenuity.

[4] M. Turk, A. Pentland. 1991. Eigenfaces for Recognition. Journal of Cognitive Neuroscience, 3(1), pp. 71-86.

[5] L. Sirovich, M. Kirby. 1987. Low-Dimensional Procedure for the Characterization of Human Faces. Division of Applied Mathematics, Brown University, Providence, Rhode Island.

[6] L. Smith. 2002. A Tutorial on Principal Components Analysis. Cornell University.

[7] D. Pissarenko. 2003. Eigenface-based Facial Recognition.
