Performance Evaluation of Nonlinear Dimensionality Reduction Methods on the BANCA Database


CMPE 544 Pattern Recognition - Term Project Report
Bogazici University
Hasan Faik ALAN
January 5, 2013

Abstract

In this work, the performance of two nonlinear dimensionality reduction methods, Isomap and Locally Linear Embedding (LLE), is evaluated in the context of face verification. The BANCA database protocol is used for the performance evaluations, and a k-nearest neighbor classifier is used for classification. Because of the high dimensionality of the images, an initial dimensionality reduction is performed with PCA in a linear way before the nonlinear dimensionality reduction methods are applied. ISOMAP and LLE are evaluated on 7 distinct experimental configurations.

Contents

1 Introduction
2 BANCA Database Protocol
3 Methodology
4 ISOMAP
5 Locally Linear Embedding
6 Performance Evaluation
7 Conclusions
A BANCA Database Protocol Implementation
  A.1 Sample XML Configuration File
  A.2 XML Parser
  A.3 Experiment Configuration Request
  A.4 False Acceptance Ratio
  A.5 False Rejection Ratio
  A.6 Sample Experiment
B ISOMAP
  B.1 ISOMAP Experiment
  B.2 ISOMAP Implementation
C LLE
  C.1 LLE Experiment
  C.2 LLE Implementation
D PCA (Eigenfaces) Implementation
E Utility Methods Implementation
  E.1 Generate Image Panorama

1 Introduction

PCA and LDA are used effectively in face verification systems. However, they fail to capture nonlinear structure, if any, in the image space. In this work, two nonlinear dimensionality reduction methods are evaluated using the BANCA database protocol [1] in the context of face verification. In [7], various face verification algorithms are discussed. The authors used PCA and LDA spaces with Euclidean distance, normalized correlation and SVM classifiers. They determined the classifier parameters using the equal error rate criterion in the training stage, and they used a different set of face images than the one used in this work. Nevertheless, their results are useful for comparing the nonlinear methods discussed here with the linear methods discussed there.

2 BANCA Database Protocol

The BANCA database is a multi-modal database intended for training and testing biometric authentication systems. It contains video and speech data for 52 subjects in each of 4 languages (English, French, Italian and Spanish). In this work, the BANCA experimental protocol [1] on the English part of the face image database is used. Each subject in the database participated in 12 recording sessions, and the face image database contains frontal face images extracted from each video recording. Seven experimental configurations are defined over these images: Matched Controlled (MC), Matched Degraded (MD), Matched Adverse (MA), Unmatched Degraded (UD), Unmatched Adverse (UA), Pooled test (P) and Grand test (G). For each configuration, the images to be used for training and evaluation are specified. An example XML file for the MC configuration is given in the appendix, and the protocol implementation used in this work is listed in Appendix A; a short usage sketch is given after Figure 3. After client training, client access and impostor attack tests are applied. A verification system can produce two types of errors: false acceptance, where the system accepts an impostor, and false rejection, where a true client is rejected by the system. The training dataset for the MC configuration is shown in Figure 1; only 1 image out of 5 is shown for each person (i.e. 52 images are shown out of the 260 present in the MC training configuration). The client testing and impostor testing datasets for the same configuration are shown in Figure 2 and Figure 3, respectively.

Figure 1: MC Client Training Set

Figure 2: MC Client Testing Set

Figure 3: MC Impostor Testing Set
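The protocol configurations are loaded with the experimentbanca helper listed in Appendix A.3. A minimal usage sketch (the XML directory path is an assumption about the local layout, matching the one used in the appendix scripts):

% Load the Matched Controlled (MC) configuration; each output is an
% n-by-4 cell array of {faceid, actual_id, claimed_id, result} claims.
[training, client_testing, imposter_testing] = experimentbanca('MC', '../bancadata/xml/');
fprintf('training claims: %d\n', length(training));
fprintf('client test claims: %d\n', length(client_testing));
fprintf('imposter test claims: %d\n', length(imposter_testing));

The result column starts empty and is filled with '1' (verified) or '0' (rejected) during an experiment; these entries are what the far and frr helpers of Appendices A.4 and A.5 count.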

3 Methodology

It is advised in [2] to perform an initial dimensionality reduction with PCA [4], in a purely linear way, before applying nonlinear dimensionality reduction methods. In this work, before ISOMAP and LLE are applied, PCA is applied so that the chosen principal components explain more than 99% of the global variance; a short sketch of this selection step is given after Figure 4. The proportion of variance explained as a function of the number of eigenvectors is plotted in Figure 4, and Figure 5 shows the eigenfaces for the MC client training set. It is stated in [3] that ISOMAP does not learn a general mapping function, so the whole algorithm must be rerun for each new test point. In this work, client and impostor tests are therefore done by appending each test image vector separately to the training set and running ISOMAP after the test image is included.

Figure 4: Proportion of Variance Explained on MC Training Data
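A minimal sketch of the component-selection step described above (variable names are illustrative; X is assumed to be the matrix of mean-centered image vectors, one image per column, as in the appendix code):

% Pick the smallest number of principal components whose cumulative
% eigenvalue share exceeds 99% of the global variance.
C = double(X)' * double(X);          % images-by-images covariance trick
[v, d] = eig(C);
[d, ix] = sort(diag(d), 'descend');  % eigenvalues, largest first
explained = cumsum(d) ./ sum(d);     % proportion of variance explained
ncomp = find(explained > 0.99, 1);   % first index past the 99% threshold
pc = v(:, ix(1:ncomp));              % retained principal directions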

Figure 5: Controlled Data (MC) Client Training Set Eigenfaces

4 ISOMAP

ISOMAP [5] is a nonlinear dimensionality reduction method based on multidimensional scaling (MDS). It assumes that the data lie on a manifold and, instead of the Euclidean distance, uses the geodesic distance along the manifold, i.e. the length of the shortest path between two points through the neighborhood graph. MDS is then applied to the geodesic distances between all pairs of points, yielding a low-dimensional embedding. Below is the general procedure followed in this work for the verification of test images using ISOMAP; the ISOMAP implementation and face verification source code are given in the appendix, and a compact sketch of steps 2-4 is given at the end of this section.

1. Apply PCA to the dataset consisting of the training set and one test image. PCA is applied as an initial, linear dimensionality reduction so that more than 99% of the global variance is kept.
2. Find the n nearest neighbors of every data point in the high-dimensional space.
3. Compute the geodesic distances between all pairs of points.
4. Embed the data using MDS.
5. Classify the test point with the k-nearest neighbor classifier.

Figure 6 shows the MC training set face images lying on a graph after ISOMAP is applied.
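A compact sketch of steps 2-4, condensed from the implementation in Appendix B.2 (D is assumed to be the matrix of pairwise Euclidean distances in the PCA space, K the neighborhood size, and the neighborhood graph is assumed to be connected; the full implementation additionally keeps only the largest connected component):

% Step 2: K-nearest-neighbor graph: keep the K smallest distances per point.
N = size(D, 1);
[~, ind] = sort(D);                  % sort each column ascending
for i = 1:N
    D(i, ind((2+K):end, i)) = inf;   % drop all but the K nearest neighbors
end
D = min(D, D');                      % symmetrize the graph

% Step 3: geodesic distances via all-pairs shortest paths (Floyd's algorithm).
for k = 1:N
    D = min(D, repmat(D(:,k), [1 N]) + repmat(D(k,:), [N 1]));
end

% Step 4: classical MDS on the geodesic distances: double-center the
% squared distances and keep the top eigenvectors.
J = eye(N) - ones(N)/N;
B = -0.5 * J * (D.^2) * J;
[vec, val] = eigs(B, 2, 'LR');       % d = 2 embedding dimensions
Y = real(vec * sqrt(val));           % low-dimensional coordinates (N-by-2)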

Figure 6: Face Graph After ISOMAP Application on MC Training Set

5 Locally Linear Embedding

Locally Linear Embedding [6] assumes that the data lie on a manifold and that the manifold is made up of locally linear patches. Below is the general procedure followed in this work for the verification of test images using LLE; the LLE implementation and face verification source code are given in the appendix, and a sketch of the weight-fitting step is given at the end of this section.

1. Construct the dataset from the whole training set plus one test instance.
2. Apply PCA as an initial, purely linear dimensionality reduction that keeps more than 99% of the global variance.
3. Find the k nearest neighbors of every point in the dataset (the eigenface weights of the original images).
4. Calculate the reconstruction weights of each point from its neighbors.
5. Find the embedding of the high-dimensional points that is best reconstructed using the weights found in the previous step.
6. Classify the test point using the k-nearest neighbor classifier.
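A minimal sketch of the weight-fitting step (step 4), condensed from the loop in Appendix C.2 (data is assumed to be D-by-N with one point per column, and neighborhood a K-by-N matrix of nearest-neighbor indices):

% Solve for the weights that best reconstruct each point from its K
% neighbors, constrained to sum to one.
[D, N] = size(data);
W = zeros(K, N);
for ii = 1:N
    z = data(:, neighborhood(:,ii)) - repmat(data(:,ii), 1, K);  % shift point to origin
    C = z' * z;                              % local covariance (K-by-K)
    if K > D
        C = C + eye(K) * 1e-3 * trace(C);    % regularize ill-conditioned fits
    end
    w = C \ ones(K, 1);                      % solve C*w = 1
    W(:, ii) = w / sum(w);                   % enforce sum(w) = 1
end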

6 Performance Evaluation

False Acceptance Rate (FAR): the number of accepted impostors over the total number of impostor attacks.
False Rejection Rate (FRR): the number of rejected clients over the total number of client accesses.

It can be seen in Table 1 that, compared to ISOMAP, LLE achieves both lower FRR and lower FAR in the adverse conditions (MA, UA). The best FRR for LLE is obtained in the MC configuration, whereas for ISOMAP it is obtained in the MD configuration. Both ISOMAP and LLE obtain their best FAR in MC. The performance of the overall verification system degrades from the controlled configurations to the adverse configurations.

             FAR    FRR
MC   ISOMAP   -      -
     LLE      -      -
MD   ISOMAP   -      -
     LLE      -      -
MA   ISOMAP   -      -
     LLE      -      -
UD   ISOMAP   -      -
     LLE      -      -
UA   ISOMAP   -      -
     LLE      -      -
P    ISOMAP   -      -
     LLE      -      -
G    ISOMAP   -      -
     LLE      -      -

Table 1: Performance Evaluation of ISOMAP and LLE

7 Conclusions

In this work, two nonlinear dimensionality reduction methods are used in the context of face verification. The performance of the k-NN classifier in the reduced spaces is evaluated using the BANCA database protocol in 7 distinct configurations. The same experiment can be conducted with different classifiers; [7] reported better results using an SVM in LDA space.

References

[1] Bailly-Bailliere, E., Bengio, S., et al. (2003). The BANCA database and evaluation protocol. Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication.
[2] Lee, J. A., & Verleysen, M. (2007). Nonlinear Dimensionality Reduction.
[3] Alpaydin, E. (2010). Introduction to Machine Learning.
[4] Turk, M., & Pentland, A. (1991). Face recognition using eigenfaces. Proceedings of IEEE Computer Vision and Pattern Recognition.
[5] Tenenbaum, J. B., de Silva, V., & Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science.

[6] Roweis, S. T., & Saul, L. K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science.
[7] Sadeghi, M., Kittler, J., Kostin, A., & Messer, K. (2003). A comparative study of automatic face verification algorithms on the BANCA database. Proceedings of Audio- and Video-Based Biometric Person Authentication.

A BANCA Database Protocol Implementation

A.1 Sample XML Configuration File

<?xml version="1.0" encoding="UTF-8"?>
<?RAVL class="OmniSoft::ClaimSessionC"?>
<claimsession>
  <claim result="" actual_id="1002" claimed_id="1002">
    <faceid>1002_f_g1_s_en_1</faceid>
    <faceid>1002_f_g1_s_en_2</faceid>
    <faceid>1002_f_g1_s_en_3</faceid>
    <faceid>1002_f_g1_s_en_4</faceid>
    <faceid>1002_f_g1_s_en_5</faceid>
  </claim>
  <claim result="" actual_id="1033" claimed_id="1037">
    <faceid>1033_m_g1_s_en_1</faceid>
    <faceid>1033_m_g1_s_en_2</faceid>
    <faceid>1033_m_g1_s_en_3</faceid>
    <faceid>1033_m_g1_s_en_4</faceid>
    <faceid>1033_m_g1_s_en_5</faceid>
  </claim>
  ...
</claimsession>

A.2 XML Parser

%Bogazici University
%@author: Hasan Faik ALAN
%@date: 18/12/2012
function [suit] = parsebanca(xml_parsed)
%parsebanca: extract a BANCA database test configuration from parsed xml data
%(output of parsexml.m); this is a utility function, use experimentbanca.m
total_people = floor(length(xml_parsed(1,2).Children) / 2);
%filename, actual_id, claimed_id, result for each person
suit = cell(total_people, 4);
for i = 1:total_people
    %faceid: image file name
    suit{i,1} = xml_parsed(1,2).Children(1,2*i).Children(1,2).Children(1,1).Data;
    %actual_id
    suit{i,2} = xml_parsed(1,2).Children(1,2*i).Attributes(1,1).Value;
    %claimed_id (does not exist in training suit)
    if length(xml_parsed(1,2).Children(1,2*i).Attributes) > 1
        suit{i,3} = xml_parsed(1,2).Children(1,2*i).Attributes(1,2).Value;
    end
end

A.3 Experiment Configuration Request

%Bogazici University
%@author: Hasan Faik ALAN
%@date: 18/12/2012
function [training, client_testing, imposter_testing] = experimentbanca(configuration_name, banca_xml_path)
%extract training, client testing and imposter testing experimental
%configurations using the BANCA database protocol
%configuration_name: one of MC, MD, MA, UD, UA, P, G
%banca_xml_path: path to the xml files that describe the experimental configurations
%training, client_testing and imposter_testing: nx4 cell arrays where
%n: number of people
%4: imagename (faceid), actual_id, claimed_id, result
%result is always empty; it is filled in the experiment as 1 (verified)
%or 0 (not verified) by looking only at claimed_id
%training: claimed_id is empty as no testing will be done
%http://www.ee.surrey.ac.uk/CVSSP/banca/
%{
Sadeghi, M., Kittler, J., Kostin, A., & Messer, K. (2003). A comparative study
of automatic face verification algorithms on the banca database.
%}
path = banca_xml_path;
configuration_name = upper(configuration_name);
training_xml = strcat(configuration_name, '_Training.xml');
client_testing_xml = strcat(configuration_name, '_Testing_g1.xml');
imposter_testing_xml = strcat(configuration_name, '_Testing_g2.xml');
%parse xml files
training_xml = parsexml(strcat(path, training_xml));
client_testing_xml = parsexml(strcat(path, client_testing_xml));
imposter_testing_xml = parsexml(strcat(path, imposter_testing_xml));
%extract banca configuration
training = parsebanca(training_xml);
client_testing = parsebanca(client_testing_xml);
imposter_testing = parsebanca(imposter_testing_xml);

A.4 False Acceptance Ratio

function [far] = far(session)
%far: false acceptance ratio
%invalid inputs: actual_id ~= claimed_id
invalid_input = session(find(~strcmp(session(:,2), session(:,3))), :);
%accept one invalid input for testing purposes
%invalid_input{1,4} = '1';
%accepted invalid inputs: actual_id ~= claimed_id and result == 1
accepted_invalid_input = invalid_input(find(strcmp(invalid_input(:,4), '1')));
far = length(accepted_invalid_input) / length(invalid_input);

A.5 False Rejection Ratio

function [frr] = frr(session)
%frr: false rejection ratio
%valid inputs: actual_id == claimed_id
valid_input = session(find(strcmp(session(:,2), session(:,3))), :);
%reject one valid input for testing purposes
%valid_input{1,4} = '0';
%rejected valid inputs: actual_id == claimed_id and result == 0
rejected_valid_input = valid_input(find(strcmp(valid_input(:,4), '0')));
frr = length(rejected_valid_input) / length(valid_input);
%{
%reject all but 2 valid inputs for testing purposes
a = num2cell(repmat('0', length(valid_input), 1));
valid_input(:,4) = a;
valid_input{4,4} = '1';
valid_input{6,4} = '1';
%}

A.6 Sample Experiment

%Bogazici University
%@author: Hasan Faik ALAN
%@date: 18/12/2012
clear
clc
addpath ../bancadata
%Preparation for Experiment
[training, client_testing, imposter_testing] = experimentbanca('MC', '../bancadata/xml/');
%show one image from the training set
imshow(imread(strcat(training{1,1}, '.pgm')));
%Perform Experiment
%just use {i,1}: image and {i,3}: claimed_id; fill {i,4}: result
client_testing{1,1}
client_testing{1,4} = '0';      %one client is rejected
imposter_testing{100,4} = '1';  %one imposter is accepted
%Evaluate Performance
false_rejection = frr(client_testing)
false_acceptance = far(imposter_testing)

B ISOMAP

B.1 ISOMAP Experiment

%Bogazici University
%@author: Hasan Faik ALAN
%@date: 2/1/2013
clear
clc
addpath ../bancaprotocol
addpath ../bancadata
%Preparation for Experiment
[training, client_testing, imposter_testing] = experimentbanca('MC', '../bancadata/xml/');
%Perform Experiment
%Create a matrix (faces) that will hold the image vectors
imagecount = length(training);
tempimg = imread(strcat(training{1,1}, '.pgm'));
imgwidth = size(tempimg, 1);
imgheight = size(tempimg, 2);

imglength = length(tempimg(:));
faces = uint8(zeros(imglength, imagecount*5));
%Put image vectors into one matrix (faces)
k = 1;
for i = 1:imagecount
    imgname = training{i,1};
    imgnamebase = imgname(1:length(imgname)-1);
    for j = 1:5
        facetemplate = double(imread(strcat(imgnamebase, num2str(j), '.pgm')));
        faces(:,k) = facetemplate(:);
        k = k + 1;
    end
end
trainingFaces = faces;  %keep training faces
%Verify images in the client testing set and the imposter testing set
for i = 1:length(client_testing)
    %add one test image to the face set
    client_testing{i,1}
    img = imread(strcat(client_testing{i,1}, '.pgm'));
    faces(:,k) = img(:);
    %Calculate average face
    averageface = uint8(mean(faces, 2));
    averageface = reshape(averageface, size(tempimg));
    %Normalize each face vector
    %imshow(reshape(faces(:,i) - averageface(:), size(tempimg)))
    %imshow(reshape(faces(:,i), size(tempimg)))
    facesnormalized = faces - repmat(averageface(:), 1, imagecount*5+1);
    %Calculate covariance matrix
    C = double(facesnormalized)' * double(facesnormalized);
    [COEFF, latent, explained] = pcacov(C);
    %Find eigenvectors of the covariance matrix
    [v, d] = eig(C);
    d = diag(d);
    [d, ix] = sort(d, 'descend');
    pc = v(:, ix(1:160));
    explainedvariance = cumsum(d) ./ sum(d);
    explainedvariance(160)
    %pc = princomp(C);
    %Create eigenfaces (project images onto eigenvectors)
    eigenfaces = double(facesnormalized) * double(pc);
    %imshow(uint8(reshape(eigenfaces(:,1), size(tempimg))));
    %Construct weight vector for each image
    wtraining = double(eigenfaces)' * double(facesnormalized);
    %Calculate pairwise distances between all points
    D = pdist(wtraining');
    D = squareform(D);
    %Apply ISOMAP
    %[Y, R, E] = IsomapII(D, 'k', 5, options);  %authors' implementation
    [Y, E] = banca_isomap(D, 5);
    testIndex = k;
    neighbours = find(E(testIndex,:));
    temp = sqrt(sum((Y(:,find(E(testIndex,:))) - repmat(Y(:,1), 1, length(find(E(testIndex,:))))).^2));
    [a, ix] = sort(temp);
    neighbours(ix)
    %Classify the test image using KNN with K = 5
    mode(ceil(neighbours / 5))
    client_testing{i,4} = num2str(strcmp(training(mode(ceil(neighbours/5)), 2), client_testing(i,3)));
end
%Verify the imposter testing set
for i = 1:length(imposter_testing)
    imposter_testing{i,1}
    img = imread(strcat(imposter_testing{i,1}, '.pgm'));
    faces(:,k) = img(:);
    %Calculate average face
    averageface = uint8(mean(faces, 2));
    averageface = reshape(averageface, size(tempimg));
    %Normalize each face vector
    facesnormalized = faces - repmat(averageface(:), 1, imagecount*5+1);
    %Calculate covariance matrix
    C = double(facesnormalized)' * double(facesnormalized);
    [COEFF, latent, explained] = pcacov(C);
    %Find eigenvectors of the covariance matrix
    [v, d] = eig(C);
    d = diag(d);
    [d, ix] = sort(d, 'descend');
    pc = v(:, ix(1:160));
    explainedvariance = cumsum(d) ./ sum(d);
    explainedvariance(160)
    %Create eigenfaces (project images onto eigenvectors)
    eigenfaces = double(facesnormalized) * double(pc);
    %Construct weight vector for each image
    wtraining = double(eigenfaces)' * double(facesnormalized);
    %Calculate pairwise distances between all points
    D = pdist(wtraining');
    D = squareform(D);
    %Apply ISOMAP
    [Y, E] = banca_isomap(D, 5);
    testIndex = k;
    neighbours = find(E(testIndex,:));
    temp = sqrt(sum((Y(:,find(E(testIndex,:))) - repmat(Y(:,1), 1, length(find(E(testIndex,:))))).^2));
    [a, ix] = sort(temp);
    neighbours(ix)
    mode(ceil(neighbours / 5))
    imposter_testing{i,4} = num2str(strcmp(training(mode(ceil(neighbours/5)), 2), imposter_testing(i,3)));
end
%Evaluate Performance
false_rejection = frr(client_testing)
false_acceptance = far(imposter_testing)

B.2 ISOMAP Implementation

%reference: http://isomap.stanford.edu/
function [Y, E] = banca_isomap(D, k)
N = size(D, 1);
K = k;
INF = inf;
dims = 1:10;
comp = 1;
Y.coords = cell(length(dims), 1);
R = zeros(1, length(dims));
%Construct Neighborhood Graph
[tmp, ind] = sort(D);
for i = 1:N
    D(i, ind((2+K):end, i)) = INF;
end
D = min(D, D');  %% make sure the distance matrix is symmetric
E = int8(1 - (D == INF));  %% edge matrix of the neighborhood graph (returned to the caller)
%Compute Shortest Paths using Floyd's Algorithm
tic;
for k = 1:N
    D = min(D, repmat(D(:,k), [1 N]) + repmat(D(k,:), [N 1]));
end
%Remove Outliers From Graph
n_connect = sum(~(D == INF));   %% number of points each point connects to
[tmp, firsts] = min(D == INF);  %% first point each point connects to
[comps, I, J] = unique(firsts); %% represent each connected component once
size_comps = n_connect(comps);  %% size of each connected component

[tmp, comp_order] = sort(size_comps);  %% sort connected components by size
comps = comps(comp_order(end:-1:1));
size_comps = size_comps(comp_order(end:-1:1));
n_comps = length(comps);               %% number of connected components
if (comp > n_comps)
    comp = 1;                          %% default: use the largest component
end
Y.index = find(firsts == comps(comp));
D = D(Y.index, Y.index);
N = length(Y.index);
%Construct Embedding (MDS)
opt.disp = 0;
[vec, val] = eigs(-.5*(D.^2 - sum(D.^2)'*ones(1,N)/N - ones(N,1)*sum(D.^2)/N + sum(sum(D.^2))/(N^2)), max(dims), 'LR', opt);
h = real(diag(val));
[foo, sorth] = sort(h);
sorth = sorth(end:-1:1);
val = real(diag(val(sorth, sorth)));
vec = vec(:, sorth);
D = reshape(D, N^2, 1);
for di = 1:length(dims)
    if (dims(di) <= N)
        Y.coords{di} = real(vec(:, 1:dims(di)) .* (ones(N,1) * sqrt(val(1:dims(di)))'))';
    end
end
clear D;

C LLE

C.1 LLE Experiment

%Bogazici University
%@author: Hasan Faik ALAN
%@date: 5/1/2013
clear
clc
addpath ../bancaprotocol
addpath ../bancadata
%Preparation for Experiment
[training, client_testing, imposter_testing] = experimentbanca('MC', '../bancadata/xml/');
%Perform Experiment
%Create a matrix (faces) that will hold the image vectors
imagecount = length(training);
tempimg = imread(strcat(training{1,1}, '.pgm'));
imgwidth = size(tempimg, 1);
imgheight = size(tempimg, 2);
imglength = length(tempimg(:));
faces = uint8(zeros(imglength, imagecount*5));
%Put image vectors into one matrix (faces)
k = 1;
for i = 1:imagecount
    imgname = training{i,1};
    imgnamebase = imgname(1:length(imgname)-1);
    for j = 1:5
        facetemplate = double(imread(strcat(imgnamebase, num2str(j), '.pgm')));
        faces(:,k) = facetemplate(:);
        k = k + 1;
    end
end
trainingFaces = faces;  %keep training faces
%imshow(reshape(faces(:,1), size(tempimg)))
%Verify images in the client testing set
for i = 1:length(client_testing)
    client_testing{i,1}
    img = imread(strcat(client_testing{i,1}, '.pgm'));
    faces(:,k) = img(:);
    %Calculate average face
    averageface = uint8(mean(faces, 2));
    averageface = reshape(averageface, size(tempimg));
    %Normalize each face vector
    facesnormalized = faces - repmat(averageface(:), 1, imagecount*5+1);
    %Calculate covariance matrix
    C = double(facesnormalized)' * double(facesnormalized);
    [COEFF, latent, explained] = pcacov(C);
    %Find eigenvectors of the covariance matrix
    [v, d] = eig(C);
    d = diag(d);
    [d, ix] = sort(d, 'descend');
    pc = v(:, ix(1:160));
    explainedvariance = cumsum(d) ./ sum(d)
    explainedvariance(160)
    %pc = princomp(C);
    %Create eigenfaces (project images onto eigenvectors)
    eigenfaces = double(facesnormalized) * double(pc);
    %Construct weight vector for each image
    wtraining = double(eigenfaces)' * double(facesnormalized);
    K = 5;
    d = 2;
    %Apply LLE
    %Y = lle(wtraining, K, d);  %authors' original implementation of lle
    Y = banca_lle(wtraining, K, d);
    testIndex = k;
    N = 5;  %neighbours
    temp = sqrt(sum((Y - repmat(Y(:,k), 1, length(Y))).^2));
    [value, ix] = sort(temp);
    neighbours = ix(1:N);
    mode(ceil(neighbours / 5));  %Classify the point using the k-nearest neighbors
    client_testing{i,4} = num2str(strcmp(training(mode(ceil(neighbours/5)), 2), client_testing(i,3)));
end
%Verify the imposter testing set
for i = 1:length(imposter_testing)
    imposter_testing{i,1}
    img = imread(strcat(imposter_testing{i,1}, '.pgm'));
    faces(:,k) = img(:);
    %Calculate average face
    averageface = uint8(mean(faces, 2));
    averageface = reshape(averageface, size(tempimg));
    %Normalize each face vector
    facesnormalized = faces - repmat(averageface(:), 1, imagecount*5+1);
    %Calculate covariance matrix
    C = double(facesnormalized)' * double(facesnormalized);
    [COEFF, latent, explained] = pcacov(C);
    %Find eigenvectors of the covariance matrix
    [v, d] = eig(C);
    d = diag(d);
    [d, ix] = sort(d, 'descend');
    pc = v(:, ix(1:160));
    explainedvariance = cumsum(d) ./ sum(d);
    explainedvariance(160)
    %Create eigenfaces (project images onto eigenvectors)
    eigenfaces = double(facesnormalized) * double(pc);
    %Construct weight vector for each image
    wtraining = double(eigenfaces)' * double(facesnormalized);
    K = 5;
    d = 2;
    %Apply LLE
    Y = banca_lle(wtraining, K, d);
    testIndex = k;
    N = 5;  %neighbours
    temp = sqrt(sum((Y - repmat(Y(:,k), 1, length(Y))).^2));
    [value, ix] = sort(temp);
    neighbours = ix(1:N);
    mode(ceil(neighbours / 5));  %Classify the point using the k-nearest neighbors
    imposter_testing{i,4} = num2str(strcmp(training(mode(ceil(neighbours/5)), 2), imposter_testing(i,3)));
end
%Evaluate Performance
false_rejection = frr(client_testing)
false_acceptance = far(imposter_testing)

C.2 LLE Implementation

function [Y] = banca_lle(data, K, d)
%reference: http://www.cs.nyu.edu/~roweis/lle/code.html
[D, N] = size(data);
%Compute Pairwise Distances
X2 = sum(data.^2, 1);
distance = repmat(X2, N, 1) + repmat(X2', 1, N) - 2*data'*data;
[sorted, index] = sort(distance);
neighborhood = index(2:(1+K), :);
if (K > D)
    tol = 1e-3;  %regularizer in case constrained fits are ill conditioned
else
    tol = 0;
end
%Find Reconstruction Weights
W = zeros(K, N);
for ii = 1:N
    z = data(:, neighborhood(:,ii)) - repmat(data(:,ii), 1, K);  %shift ith pt to origin
    C = z'*z;                          %local covariance
    C = C + eye(K,K)*tol*trace(C);     %regularization (K>D)
    W(:,ii) = C \ ones(K,1);           %solve Cw=1
    W(:,ii) = W(:,ii) / sum(W(:,ii));  %enforce sum(w)=1
end
%Find Embedding
M = sparse(1:N, 1:N, ones(1,N), N, N, 4*K*N);

for ii = 1:N
    w = W(:,ii);
    jj = neighborhood(:,ii);
    M(ii,jj) = M(ii,jj) - w';
    M(jj,ii) = M(jj,ii) - w;
    M(jj,jj) = M(jj,jj) + w*w';
end
options.disp = 0;
options.isreal = 1;
options.issym = 1;
[Y, eigenvals] = eigs(M, d+1, 0, options);
Y = Y(:, 2:d+1)' * sqrt(N);
%Y = lle(data, K, d);

D PCA (Eigenfaces) Implementation

%Bogazici University
%@author: Hasan Faik ALAN
%@date: 29/12/2012
clear
clc
addpath ../bancaprotocol
addpath ../bancadata
%Preparation for Experiment
[training, client_testing, imposter_testing] = experimentbanca('MC', '../bancadata/xml/');
%Perform Experiment
%just use {i,1}: image and {i,3}: claimed_id; fill {i,4}: result
%Create a matrix (faces) that will hold the image vectors
imagecount = length(training);
tempimg = imread(strcat(training{1,1}, '.pgm'));
imglength = length(tempimg(:));
faces = uint8(zeros(imglength, imagecount));
%Put image vectors into one matrix (faces)
for i = 1:imagecount
    %Use the average of the 5 images belonging to each person to create a face template
    %TODO: what else can be done? add the 5 images separately?
    imgname = training{i,1};
    imgnamebase = imgname(1:length(imgname)-1);
    facetemplate = zeros(size(tempimg));
    for j = 1:5
        facetemplate = facetemplate + double(imread(strcat(imgnamebase, num2str(j), '.pgm')));
    end
    facetemplate = facetemplate / 5;
    facetemplate = uint8(facetemplate);
    %facetemplate = histeq(facetemplate);
    %add the image to faces
    faces(:,i) = facetemplate(:);
end
%Calculate average face
averageface = uint8(mean(faces, 2));
averageface = reshape(averageface, size(tempimg));
%Normalize each face vector
%imshow(reshape(faces(:,i) - averageface(:), size(tempimg)))
%imshow(reshape(faces(:,i), size(tempimg)))
facesnormalized = faces - repmat(averageface(:), 1, imagecount);
%Calculate covariance matrix
C = double(facesnormalized)' * double(facesnormalized);
%Find eigenvectors of the covariance matrix
[v, d] = eig(C);
d = diag(d);
[d, ix] = sort(d, 'descend');
pc = v(:, ix(1:40));
%pc = princomp(C);
%Create eigenfaces (project images onto eigenvectors)
eigenfaces = double(facesnormalized) * double(pc);
%imshow(uint8(reshape(eigenfaces(:,1), size(tempimg))));
%Construct weight vector for each training image
wtraining = double(eigenfaces)' * double(facesnormalized);
%Verify images in the client testing set
cost = cell(length(client_testing), 4);
for i = 1:length(client_testing)
    img = imread(strcat(client_testing{i,1}, '.pgm'));
    img_weight = double(eigenfaces)' * double(img(:) - averageface(:));
    temp = wtraining - repmat(img_weight, 1, imagecount);
    temp = temp .* temp;
    temp = sqrt(sum(temp));
    client_testing{i,4} = num2str(strcmp(training(find(temp == min(temp)), 2), client_testing(i,3)));
    cost{i,1} = client_testing{i,2};
    cost{i,2} = training{find(temp == min(temp)), 2};
    temp = sort(temp, 'descend');
    cost{i,3} = temp(1);
    cost{i,4} = temp(2);
end
%length(find(strcmp(client_testing(1:286,2), client_testing(1:286,4))))
%Verify images in the imposter testing set
for i = 1:length(imposter_testing)
    img = imread(strcat(imposter_testing{i,1}, '.pgm'));
    img_weight = double(eigenfaces)' * double(img(:) - averageface(:));
    temp = wtraining - repmat(img_weight, 1, imagecount);
    temp = temp .* temp;
    temp = sqrt(sum(temp));
    imposter_testing{i,4} = num2str(strcmp(training(find(temp == min(temp)), 2), imposter_testing(i,3)));
end
%client_testing{1,4} = '0';      %one client is rejected
%imposter_testing{100,4} = '1';  %one imposter is accepted
%Evaluate Performance
false_rejection = frr(client_testing)
false_acceptance = far(imposter_testing)

E Utility Methods Implementation

E.1 Generate Image Panorama

function [panoroma] = panoroma(imgset, img_width, img_height, linecount, imgperline)
%example: p = panoroma(faces, size(tempimg,1), size(tempimg,2), 13, 4);
panoroma = uint8(zeros(img_width*imgperline, img_height*linecount));
for i = 1:linecount
    for j = 1:imgperline
        %place image (i-1)*imgperline+j of imgset into the grid
        panoroma((j-1)*img_width+1 : j*img_width, (i-1)*img_height+1 : i*img_height) = ...
            reshape(imgset(:, (i-1)*imgperline+j), img_width, img_height);
    end
end


More information

Eigenimaging for Facial Recognition

Eigenimaging for Facial Recognition Eigenimaging for Facial Recognition Aaron Kosmatin, Clayton Broman December 2, 21 Abstract The interest of this paper is Principal Component Analysis, specifically its area of application to facial recognition

More information

Example: Face Detection

Example: Face Detection Announcements HW1 returned New attendance policy Face Recognition: Dimensionality Reduction On time: 1 point Five minutes or more late: 0.5 points Absent: 0 points Biometrics CSE 190 Lecture 14 CSE190,

More information

Linear and Non-Linear Dimensionality Reduction

Linear and Non-Linear Dimensionality Reduction Linear and Non-Linear Dimensionality Reduction Alexander Schulz aschulz(at)techfak.uni-bielefeld.de University of Pisa, Pisa 4.5.215 and 7.5.215 Overview Dimensionality Reduction Motivation Linear Projections

More information

Recognition Using Class Specific Linear Projection. Magali Segal Stolrasky Nadav Ben Jakov April, 2015

Recognition Using Class Specific Linear Projection. Magali Segal Stolrasky Nadav Ben Jakov April, 2015 Recognition Using Class Specific Linear Projection Magali Segal Stolrasky Nadav Ben Jakov April, 2015 Articles Eigenfaces vs. Fisherfaces Recognition Using Class Specific Linear Projection, Peter N. Belhumeur,

More information

Learning a kernel matrix for nonlinear dimensionality reduction

Learning a kernel matrix for nonlinear dimensionality reduction University of Pennsylvania ScholarlyCommons Departmental Papers (CIS) Department of Computer & Information Science 7-4-2004 Learning a kernel matrix for nonlinear dimensionality reduction Kilian Q. Weinberger

More information

Introduction to Signal Detection and Classification. Phani Chavali

Introduction to Signal Detection and Classification. Phani Chavali Introduction to Signal Detection and Classification Phani Chavali Outline Detection Problem Performance Measures Receiver Operating Characteristics (ROC) F-Test - Test Linear Discriminant Analysis (LDA)

More information

Facial Expression Recognition using Eigenfaces and SVM

Facial Expression Recognition using Eigenfaces and SVM Facial Expression Recognition using Eigenfaces and SVM Prof. Lalita B. Patil Assistant Professor Dept of Electronics and Telecommunication, MGMCET, Kamothe, Navi Mumbai (Maharashtra), INDIA. Prof.V.R.Bhosale

More information

Informative Laplacian Projection

Informative Laplacian Projection Informative Laplacian Projection Zhirong Yang and Jorma Laaksonen Department of Information and Computer Science Helsinki University of Technology P.O. Box 5400, FI-02015, TKK, Espoo, Finland {zhirong.yang,jorma.laaksonen}@tkk.fi

More information

Nonlinear Manifold Learning Summary

Nonlinear Manifold Learning Summary Nonlinear Manifold Learning 6.454 Summary Alexander Ihler ihler@mit.edu October 6, 2003 Abstract Manifold learning is the process of estimating a low-dimensional structure which underlies a collection

More information

COMP 408/508. Computer Vision Fall 2017 PCA for Recognition

COMP 408/508. Computer Vision Fall 2017 PCA for Recognition COMP 408/508 Computer Vision Fall 07 PCA or Recognition Recall: Color Gradient by PCA v λ ( G G, ) x x x R R v, v : eigenvectors o D D with v ^v (, ) x x λ, λ : eigenvalues o D D with λ >λ v λ ( B B, )

More information

Image Analysis & Retrieval Lec 14 - Eigenface & Fisherface

Image Analysis & Retrieval Lec 14 - Eigenface & Fisherface CS/EE 5590 / ENG 401 Special Topics, Spring 2018 Image Analysis & Retrieval Lec 14 - Eigenface & Fisherface Zhu Li Dept of CSEE, UMKC http://l.web.umkc.edu/lizhu Office Hour: Tue/Thr 2:30-4pm@FH560E, Contact:

More information

SPECTRAL CLUSTERING AND KERNEL PRINCIPAL COMPONENT ANALYSIS ARE PURSUING GOOD PROJECTIONS

SPECTRAL CLUSTERING AND KERNEL PRINCIPAL COMPONENT ANALYSIS ARE PURSUING GOOD PROJECTIONS SPECTRAL CLUSTERING AND KERNEL PRINCIPAL COMPONENT ANALYSIS ARE PURSUING GOOD PROJECTIONS VIKAS CHANDRAKANT RAYKAR DECEMBER 5, 24 Abstract. We interpret spectral clustering algorithms in the light of unsupervised

More information

COMP 551 Applied Machine Learning Lecture 13: Dimension reduction and feature selection

COMP 551 Applied Machine Learning Lecture 13: Dimension reduction and feature selection COMP 551 Applied Machine Learning Lecture 13: Dimension reduction and feature selection Instructor: Herke van Hoof (herke.vanhoof@cs.mcgill.ca) Based on slides by:, Jackie Chi Kit Cheung Class web page:

More information

Machine Learning. Chao Lan

Machine Learning. Chao Lan Machine Learning Chao Lan Clustering and Dimensionality Reduction Clustering Kmeans DBSCAN Gaussian Mixture Model Dimensionality Reduction principal component analysis manifold learning Other Feature Processing

More information

Lecture 13 Visual recognition

Lecture 13 Visual recognition Lecture 13 Visual recognition Announcements Silvio Savarese Lecture 13-20-Feb-14 Lecture 13 Visual recognition Object classification bag of words models Discriminative methods Generative methods Object

More information

Smart PCA. Yi Zhang Machine Learning Department Carnegie Mellon University

Smart PCA. Yi Zhang Machine Learning Department Carnegie Mellon University Smart PCA Yi Zhang Machine Learning Department Carnegie Mellon University yizhang1@cs.cmu.edu Abstract PCA can be smarter and makes more sensible projections. In this paper, we propose smart PCA, an extension

More information

Face Recognition. Lecture-14

Face Recognition. Lecture-14 Face Recognition Lecture-14 Face Recognition imple Approach Recognize faces (mug shots) using gray levels (appearance). Each image is mapped to a long vector of gray levels. everal views of each person

More information

Face Recognition. Lecture-14

Face Recognition. Lecture-14 Face Recognition Lecture-14 Face Recognition imple Approach Recognize faces mug shots) using gray levels appearance). Each image is mapped to a long vector of gray levels. everal views of each person are

More information

Graph Metrics and Dimension Reduction

Graph Metrics and Dimension Reduction Graph Metrics and Dimension Reduction Minh Tang 1 Michael Trosset 2 1 Applied Mathematics and Statistics The Johns Hopkins University 2 Department of Statistics Indiana University, Bloomington November

More information

PCA and LDA. Man-Wai MAK

PCA and LDA. Man-Wai MAK PCA and LDA Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: S.J.D. Prince,Computer

More information

Linear Subspace Models

Linear Subspace Models Linear Subspace Models Goal: Explore linear models of a data set. Motivation: A central question in vision concerns how we represent a collection of data vectors. The data vectors may be rasterized images,

More information

2D Image Processing Face Detection and Recognition

2D Image Processing Face Detection and Recognition 2D Image Processing Face Detection and Recognition Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Metric Learning. 16 th Feb 2017 Rahul Dey Anurag Chowdhury

Metric Learning. 16 th Feb 2017 Rahul Dey Anurag Chowdhury Metric Learning 16 th Feb 2017 Rahul Dey Anurag Chowdhury 1 Presentation based on Bellet, Aurélien, Amaury Habrard, and Marc Sebban. "A survey on metric learning for feature vectors and structured data."

More information

Part I Generalized Principal Component Analysis

Part I Generalized Principal Component Analysis Part I Generalized Principal Component Analysis René Vidal Center for Imaging Science Institute for Computational Medicine Johns Hopkins University Principal Component Analysis (PCA) Given a set of points

More information

A Unified Bayesian Framework for Face Recognition

A Unified Bayesian Framework for Face Recognition Appears in the IEEE Signal Processing Society International Conference on Image Processing, ICIP, October 4-7,, Chicago, Illinois, USA A Unified Bayesian Framework for Face Recognition Chengjun Liu and

More information

Statistical and Computational Analysis of Locality Preserving Projection

Statistical and Computational Analysis of Locality Preserving Projection Statistical and Computational Analysis of Locality Preserving Projection Xiaofei He xiaofei@cs.uchicago.edu Department of Computer Science, University of Chicago, 00 East 58th Street, Chicago, IL 60637

More information

What is Principal Component Analysis?

What is Principal Component Analysis? What is Principal Component Analysis? Principal component analysis (PCA) Reduce the dimensionality of a data set by finding a new set of variables, smaller than the original set of variables Retains most

More information