Bilinear Discriminant Analysis for Face Recognition


Muriel Visani 1, Christophe Garcia 1 and Jean-Michel Jolion 2

1 France Telecom Division R&D, TECH/IRIS, 4, rue du Clos Courtel, Cesson-Sevigne, France
2 Laboratoire LIRIS, INSA Lyon, 20, Avenue Albert Einstein, Villeurbanne cedex, France
{muriel.visani,christophe.garcia}@rd.francetelecom.com, Jean-Michel.Jolion@insa-lyon.fr

Abstract

In this paper, a new statistical projection method called Bilinear Discriminant Analysis (BDA) is presented. The proposed method efficiently combines two complementary versions of Two-Dimensional-Oriented Linear Discriminant Analysis (2DoLDA), namely Column-Oriented Linear Discriminant Analysis (CoLDA) and Row-Oriented Linear Discriminant Analysis (RoLDA), through an iterative algorithm using a generalized bilinear projection-based Fisher criterion. A series of experiments was performed on various international face image databases in order to evaluate and compare the effectiveness of BDA with respect to RoLDA and CoLDA. The experimental results indicate that BDA is more efficient than RoLDA, CoLDA and 2DPCA for the task of face recognition, while leading to a significant dimensionality reduction.

1 Introduction

In the eigenfaces [1] (resp. Fisherfaces [2]) method, the 2D face images of size h × w are first transformed into 1D image vectors of size h · w, and then a Principal Component Analysis (PCA) (resp. Linear Discriminant Analysis (LDA)) is applied in the high-dimensional image vector space, where statistical analysis is costly and may be unstable. To overcome these drawbacks, Yang et al. [3] proposed the Two-Dimensional PCA (2DPCA) method, which aims at performing PCA directly on the face image matrices. It has been shown that 2DPCA is more efficient [3] and more robust [4] than the eigenfaces when dealing with face segmentation inaccuracies, low-quality images and partial occlusions. In [5], we proposed the Two-Dimensional-Oriented Linear Discriminant Analysis (2DoLDA) approach, which consists in applying LDA to image matrices. We have shown on various face image databases that 2DoLDA is more efficient than both 2DPCA and the Fisherfaces method for the task of face recognition, and that it is more robust to variations in lighting conditions, facial expressions and head poses. In this paper, we propose a novel supervised projection method called Bilinear Discriminant Analysis (BDA) that achieves better recognition results than 2DoLDA while substantially reducing the computational cost of the recognition step, using a generalized Fisher criterion relying on bilinear projections.

The remainder of the paper is organized as follows. In section 2, we review the theory and algorithm of 2DoLDA. In section 3, we describe in detail the principle and algorithm of the proposed BDA method, pointing out its advantages over previous methods. In section 4, a series of three experiments on different international data sets is presented to demonstrate the effectiveness and robustness of BDA and to compare its performance with respect to RoLDA, CoLDA and 2DPCA. Finally, conclusions are drawn in section 5.

2 Two-Dimensional Oriented Linear Discriminant Analysis (2DoLDA)

In this section, we review the 2DoLDA theory and algorithm; for more details, refer to [5]. 2DoLDA may be implemented in two different ways: Row-Oriented LDA (RoLDA) and Column-Oriented LDA (CoLDA). Let us first present RoLDA. The model is constructed from a training set containing n face images stored as h × w matrices X_i, labelled by their corresponding identity Ω_c. The set of images corresponding to one person is called a class. The training set contains C classes.
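For concreteness, the code sketches interleaved with the next sections assume a toy version of this setup; all variable names and the random data below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# n face images stored as h x w matrices X_i, labelled by class (identity).
# The paper's experiments use faces cropped to h x w = 75 x 65 pixels;
# the class count C and the number of views per class are toy values.
h, w, C, views = 75, 65, 10, 5
X = rng.normal(size=(C * views, h, w))   # shape (n, h, w)
labels = np.repeat(np.arange(C), views)  # labels[i] = class index of X[i]
```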
Let us consider a projection matrix P of size w × k. The projection X_i^P of X_i with P is given by:

$$X_i^P = X_i P \qquad (1)$$

Using RoLDA, the signature of X_i is X_i^P, of size h × k. Our aim is to determine the matrix P jointly maximizing the separation between signatures from different classes and minimizing the separation between signatures from the same class. Under the assumptions that the rows of the images are multinormal and that rows from different classes have the same within-class covariance, P maximizes the generalized Fisher criterion [5]:

$$J(P) = \frac{|P^T S_b P|}{|P^T S_w P|} \qquad (2)$$

S_w and S_b being respectively the generalized within-class and between-class covariance matrices of the training set:

$$S_w = \sum_{c=1}^{C} \sum_{X_i \in \Omega_c} (X_i - \bar{X}_c)^T (X_i - \bar{X}_c) \quad \text{and} \quad S_b = \sum_{c=1}^{C} n_c\,(\bar{X}_c - \bar{X})^T (\bar{X}_c - \bar{X}) \qquad (3)$$

where X̄_c and X̄ are respectively the mean of the n_c samples of class Ω_c and the mean of the whole training set. If S_w is non-singular (which is generally the case since w ≪ n), the k columns of P maximizing criterion (2), named Row-Oriented Discriminant Components (RoDCs), are the eigenvectors of S_w^{-1} S_b with largest eigenvalues. A numerically stable way to compute them is given in [6].

Analogously, for Column-Oriented Linear Discriminant Analysis (CoLDA), the considered projection is:

$$X_i^Q = Q^T X_i \qquad (4)$$

where Q is a projection matrix of size h × k. Under the assumptions of multinormality and homoscedasticity of the columns, we can consider the following generalized Fisher criterion:

$$J(Q) = \frac{|Q^T \Sigma_b Q|}{|Q^T \Sigma_w Q|} \qquad (5)$$

where Σ_w and Σ_b are respectively the within-class and between-class covariance matrices of the transposed images (X_i^T)_{i ∈ {1,…,n}}:

$$\Sigma_w = \sum_{c=1}^{C} \sum_{X_i \in \Omega_c} (X_i - \bar{X}_c)(X_i - \bar{X}_c)^T \quad \text{and} \quad \Sigma_b = \sum_{c=1}^{C} n_c\,(\bar{X}_c - \bar{X})(\bar{X}_c - \bar{X})^T \qquad (6)$$

If Σ_w is non-singular, the k columns of Q maximizing criterion (5), named Column-Oriented Discriminant Components (CoDCs), are the k eigenvectors of Σ_w^{-1} Σ_b with largest eigenvalues.

There are at most C − 1 RoDCs and CoDCs corresponding to non-zero eigenvalues; their number k can be selected using the Wilks Lambda criterion, as in stepwise discriminant analysis [7]. This analysis shows that the number k of 2Do-DCs required by both methods is comparable and generally smaller than 15, even when the number of classes is large, as shown in Figure 2(a), which reports an experiment performed on 107 classes.

Recognition is performed in the projection space defined by RoLDA (resp. CoLDA): two face images X_a and X_b are compared using the Euclidean distance between their projections X_a^P and X_b^P (resp. X_a^Q and X_b^Q).

3 Bilinear Discriminant Analysis (BDA)

3.1 Why Combine RoLDA and CoLDA?

We conducted four experiments highlighting the complementarity of RoLDA and CoLDA. In the following, all the face images are centered and cropped to a size of h × w = 75 × 65 pixels. The first two experiments are performed on subsets of the Asian Face Image Database PF01 [8] (also referred to below as the IML database), which contains 107 persons; they illustrate the fact that, depending on the training and test data, either method can significantly outperform the other. In the first experiment, the training set (see Figure 1(a)) contains 5 near-frontal views per person (535 images). The test set (see Figure 1(b)) contains 4 views per person, with stronger non-frontal poses than in the training set.

Figure 1: Subsets of the IML database used for experiments 1-2. (a): extract of the training set used for the first experiment; (b): extract of the test set used for the first experiment; (c): extract of the training set used for the second experiment; (d): extract of the test set used for the second experiment.

Figure 2(a) shows that both RoLDA and CoLDA provide more than a 92% recognition rate and outperform 2DPCA; however, one of the two variants is clearly more efficient here, with a 4.5% improvement of the recognition rate between the respective maxima. In the second experiment, the training set (see Figure 1(c)) containing 3 views per person and the test set (see Figure 1(d)) containing 2 views per person are randomly taken from the subset of the IML database containing facial expressions: neutral, happy, surprised, irritated and closed eyes. Figure 2(b) shows that both RoLDA and CoLDA again outperform 2DPCA, and that this time the other variant is the more efficient one (with a 5.6% improvement of the recognition rate between the maxima) on this particular and complex subset of the IML database, on which the recognition rates stay below 60%.
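Before turning to the remaining experiments, here is a minimal NumPy sketch of the two 2DoLDA variants described in section 2 (our reading of equations (2)-(6), not the authors' implementation; `X` and `labels` are the toy arrays defined earlier):

```python
import numpy as np

def rolda(X, labels, k):
    """Row-oriented LDA sketch: returns P (w x k) maximizing criterion (2)."""
    _, _, w = X.shape
    Sw, Sb = np.zeros((w, w)), np.zeros((w, w))
    mean_all = X.mean(axis=0)
    for c in np.unique(labels):
        Xc = X[labels == c]
        mean_c = Xc.mean(axis=0)
        for Xi in Xc:
            D = Xi - mean_c
            Sw += D.T @ D                     # generalized within-class scatter
        M = mean_c - mean_all
        Sb += len(Xc) * (M.T @ M)             # generalized between-class scatter
    # RoDCs: eigenvectors of Sw^{-1} Sb with largest eigenvalues (k <= C - 1).
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-vals.real)[:k]
    return vecs[:, order].real

def colda(X, labels, k):
    """Column-oriented LDA: returns Q (h x k); RoLDA on transposed images."""
    return rolda(np.transpose(X, (0, 2, 1)), labels, k)

# h x k signatures X_i @ P for RoLDA; k x w signatures Q.T @ X_i for CoLDA.
P = rolda(X, labels, k=9)
Q = colda(X, labels, k=9)
```

Note that CoLDA reduces to RoLDA applied to the transposed images, which is why a single scatter computation suffices in this sketch.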
The third and fourth experiments provide a further comparison of the performance of RoLDA and CoLDA; they are performed on the Yale Face Database [2] (see Figure 3(a-b)), which contains 11 views of each of 15 persons.

Figure 2: Compared recognition rates of RoLDA, CoLDA and 2DPCA on subsets of the IML database showing (a) head pose changes and (b) facial expression changes, when varying the number k of projection vectors.

These views present occlusions ("glasses", "noglasses") and dissimilarities in the lighting conditions and facial expressions. In the third experiment, the Yale Face Database is randomly partitioned into a training set containing four views per person and a test set containing six views per person. To ensure homoscedasticity, the views of each set are consistent among the classes, e.g. all the "wink" views are included in the test set and all the "neutral" views in the training set. This operation is repeated several times, and from each partition we compute a confusion matrix. The confusion matrices from five different random partitions, with k = C − 1 = 14, are shown in Table 1. In each confusion matrix, the top left cell contains the number of faces correctly classified by both RoLDA and CoLDA, the top right and bottom left cells contain the numbers of faces correctly classified by one of the two methods but misclassified by the other, and the bottom right cell contains the number of faces misclassified by both RoLDA and CoLDA.

Table 1: Confusion matrices on five random partitions (a)-(e) of the Yale Face Database.

Table 1(a) shows that, on the first random partition of the Yale database, the performances of RoLDA and CoLDA are comparable (the recognition rates are respectively 70% and 71.1%). However, the classification results are very different: 21 samples (23.3% of the test set) are correctly classified by one method but not by the other, and 82.2% of the test faces are recognized by at least one of the two methods. Table 1(b-c) illustrates the fact that one of the two methods generally outperforms the other, while Table 1(d-e) shows that in some configurations where the rate of misclassification by both methods is high (respectively 17.8% and 20% for partitions (d) and (e)), the opposite ranking holds.

The fourth experiment provides a further qualitative analysis. The training set (see Figure 3(a)) contains four views for each of the 15 subjects, with variations in the lighting conditions and facial expressions. Then, seven test sets are built (see Figure 3(b)), corresponding to the remaining views. Figure 3(c) compares the performances of RoLDA and CoLDA on these test sets. It shows that, for a fixed training set containing few variations, one method is generally more efficient than the other, but that in some cases the ranking is drastically reversed, especially when the test set contains dissymmetries of the image along the vertical axis ("leftlight" and "rightlight"). The ranking can also be slightly reversed when the test set shows strong facial expression changes, e.g. "surprised".

Figure 3: (a): extract of the training set used for the fourth experiment ("normal", "centerlight", "happy" and "glasses"/"noglasses" views); (b): extract of the 7 test sets of the fourth experiment ("wink", "surprised", "leftlight", "rightlight", "sad", "sleepy" and "occlusion"). A subject not wearing eyeglasses in the training set wears eyeglasses in the "occlusion" set, and vice-versa. (c): compared recognition rates of RoLDA and CoLDA on these partitions of the Yale database.
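As an aside, the 2×2 counts reported in Table 1 are straightforward to rebuild from the predictions of any two classifiers on a common test set; a small hypothetical helper (function name and layout are ours, not the paper's):

```python
import numpy as np

def agreement_table(pred_a, pred_b, truth):
    """2x2 table in the spirit of Table 1: rows = method A correct/wrong,
    columns = method B correct/wrong (illustrative, not the paper's code)."""
    ok_a = np.asarray(pred_a) == np.asarray(truth)
    ok_b = np.asarray(pred_b) == np.asarray(truth)
    return np.array([[np.sum(ok_a & ok_b), np.sum(ok_a & ~ok_b)],
                     [np.sum(~ok_a & ok_b), np.sum(~ok_a & ~ok_b)]])
```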

Choosing between RoLDA and CoLDA therefore requires a preliminary qualitative analysis of the training and test sets, which is a difficult task. As both RoLDA and CoLDA perform very well but give different recognition results, efficiently combining them can lead to a highly effective method. In 2DoLDA, considering image matrices instead of vectors (as in the Fisherfaces method) when performing LDA leads to a reduced computational cost when building the model, and to a reduced storage cost [5]. But the size of the matrices X_a^P and X_b^P (resp. X_a^Q and X_b^Q) is h × k for RoLDA (resp. k × w for CoLDA) and may be large. As exposed in the following section, using BDA for recognition leads to a drastic reduction of the signature size, and therefore reduces the computational cost of the recognition step, which is often performed online.

3.2 Description of the Method

Let us consider two projection matrices Q ∈ R^{h×k} and P ∈ R^{w×k}. The projected sample X_i^{Q,P} of X_i on (Q, P) is given by:

$$X_i^{Q,P} = Q^T X_i P \qquad (7)$$

Let us define the signature of X_i as its bilinear projection X_i^{Q,P} (of size k × k) onto (Q, P). We are searching for the optimal pair of matrices (Q, P) maximizing the separation between signatures from different classes while minimizing the separation between signatures from the same class:

$$(Q^*, P^*) = \underset{(Q,P) \in \mathbb{R}^{h \times k} \times \mathbb{R}^{w \times k}}{\mathrm{Argmax}}\; \frac{|S_b^{Q,P}|}{|S_w^{Q,P}|} = \underset{(Q,P)}{\mathrm{Argmax}}\; \frac{\left|\sum_{c=1}^{C} n_c\,(\bar{X}_c^{Q,P} - \bar{X}^{Q,P})^T(\bar{X}_c^{Q,P} - \bar{X}^{Q,P})\right|}{\left|\sum_{c=1}^{C} \sum_{i \in \Omega_c} (X_i^{Q,P} - \bar{X}_c^{Q,P})^T(X_i^{Q,P} - \bar{X}_c^{Q,P})\right|} \qquad (8)$$

S_w^{Q,P} and S_b^{Q,P} being the within-class and between-class covariance matrices of the set (X_i^{Q,P})_{i ∈ {1,…,n}}. This objective function is biquadratic and has no analytical solution; we therefore propose an iterative procedure that we call Bilinear Discriminant Analysis. Let us expand expression (8):

$$(Q^*, P^*) = \underset{(Q,P)}{\mathrm{Argmax}}\; \frac{\left|\sum_{c=1}^{C} n_c\, P^T(\bar{X}_c - \bar{X})^T Q Q^T (\bar{X}_c - \bar{X}) P\right|}{\left|\sum_{c=1}^{C} \sum_{i \in \Omega_c} P^T(X_i - \bar{X}_c)^T Q Q^T (X_i - \bar{X}_c) P\right|} \qquad (9)$$

For any fixed Q ∈ R^{h×k}, using equation (9), the objective function (8) can be rewritten:

$$P^* = \underset{P \in \mathbb{R}^{w \times k}}{\mathrm{Argmax}}\; \frac{\left|P^T \left(\sum_{c=1}^{C} n_c\,(\bar{X}_c^{Q} - \bar{X}^{Q})^T(\bar{X}_c^{Q} - \bar{X}^{Q})\right) P\right|}{\left|P^T \left(\sum_{c=1}^{C} \sum_{i \in \Omega_c} (X_i^{Q} - \bar{X}_c^{Q})^T(X_i^{Q} - \bar{X}_c^{Q})\right) P\right|} = \underset{P \in \mathbb{R}^{w \times k}}{\mathrm{Argmax}}\; \frac{|P^T S_b^Q P|}{|P^T S_w^Q P|} \qquad (10)$$

S_w^Q and S_b^Q being respectively the generalized within-class and between-class covariance matrices of the set (X_i^Q)_{i ∈ {1,…,n}} (from equation (4)). Therefore the columns of the matrix P are the k eigenvectors of (S_w^Q)^{-1} S_b^Q with largest eigenvalues, obtained by applying RoLDA to the set of projected samples X_i^Q.

Let us denote A = P^T (X̄_c − X̄)^T Q, a matrix of size k × k. Given that |A^T A| = |A A^T| for every square matrix A, the objective function (8) can be rewritten:

$$(Q^*, P^*) = \underset{(Q,P)}{\mathrm{Argmax}}\; \frac{\left|\sum_{c=1}^{C} n_c\, Q^T(\bar{X}_c - \bar{X}) P P^T (\bar{X}_c - \bar{X})^T Q\right|}{\left|\sum_{c=1}^{C} \sum_{i \in \Omega_c} Q^T(X_i - \bar{X}_c) P P^T (X_i - \bar{X}_c)^T Q\right|} \qquad (11)$$

For a fixed P ∈ R^{w×k}, using equation (11), the objective function (8) can be rewritten:

$$Q^* = \underset{Q \in \mathbb{R}^{h \times k}}{\mathrm{Argmax}}\; \frac{|Q^T \Sigma_b^P Q|}{|Q^T \Sigma_w^P Q|}$$

Σ_w^P and Σ_b^P being the generalized within-class and between-class covariance matrices of the set ((X_i^P)^T)_{i ∈ {1,…,n}}. Therefore, the columns of Q are the k eigenvectors of (Σ_w^P)^{-1} Σ_b^P with largest eigenvalues, obtained by applying CoLDA to the set of projected samples (X_i^P)_{i ∈ {1,…,n}}.

Recognition is performed in the projection space defined by BDA: two face images X_a and X_b are compared using the Euclidean distance between their bilinear projections X_a^{Q,P} and X_b^{Q,P}. We can note that the computational cost of one comparison is O(k²) for BDA, versus O(h·k) for RoLDA and 2DPCA, and O(w·k) for CoLDA; therefore BDA drastically reduces the computational cost of the recognition step.
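Putting the two updates together, here is a minimal sketch of the alternating scheme just derived (it reuses the `rolda`/`colda` sketches above and, for simplicity, runs a fixed number of sweeps with a fixed k instead of the Wilks Lambda stopping rule described in section 3.3):

```python
import numpy as np

def bda(X, labels, k, n_iter=5):
    """Sketch of the alternating optimization of section 3.2.

    Fixed number of sweeps and fixed k, instead of the paper's Wilks
    Lambda stopping rule; rolda/colda are the sketches defined above.
    """
    n, h, w = X.shape
    Q = np.eye(h)                        # initialize Q_0 = I_h
    for _ in range(n_iter):
        XQ = np.matmul(Q.T, X)           # step 1: X_i^Q = Q^T X_i
        P = rolda(XQ, labels, k)         # step 2: RoLDA on the projected set
        XP = np.matmul(X, P)             # step 3: X_i^P = X_i P
        Q = colda(XP, labels, k)         # step 4: CoLDA on the projected set
    return Q, P

# k x k signatures and an O(k^2) comparison by Euclidean distance:
Q, P = bda(X, labels, k=9)
signature = lambda Xi: Q.T @ Xi @ P
d = np.linalg.norm(signature(X[0]) - signature(X[1]))
```

Each pass of the loop corresponds to steps 1-4 of the algorithm given next in section 3.3.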
3.3 Algorithm of the BDA Approach

Let us initialize Q_0 = I_h, I_h being the identity matrix of R^{h×h}, and k_0 = C − 1. The proposed algorithm for BDA is:

1. For i ∈ {1, …, n}, compute X_i^{Q_t} = (Q_t)^T X_i.
2. Apply RoLDA to (X_i^{Q_t})_{i ∈ {1,…,n}}: compute S_w^{Q_t} and S_b^{Q_t} and, from (S_w^{Q_t})^{-1} S_b^{Q_t}, compute P_t, of size w × k_t.
3. For i ∈ {1, …, n}, compute X_i^{P_t} = X_i P_t.
4. Apply CoLDA to (X_i^{P_t})_{i ∈ {1,…,n}}: compute Σ_w^{P_t} and Σ_b^{P_t} and, from (Σ_w^{P_t})^{-1} Σ_b^{P_t}, compute Q_t, of size h × k_t.
5. Compute $\chi^2 = -\left(n - 1 - \frac{h+C}{2}\right)\ln\left(\prod_{j=k_t+1}^{C-1}\frac{1}{1+\lambda_j}\right)$.
6. If $\chi^2 < \chi^2_{p\text{-value}}\big((h - k_t)(C - k_t - 1)\big)$, then t ← t + 1, k_t ← k_{t−1} − 1, and return to step 1;
7. else k_t ← k_{t−1}, Q ← Q_{t−1} and P ← P_{t−1}.

The stopping criterion (steps 6-7) derives from the Wilks Lambda criterion, testing the discriminatory power of the C − 1 − k_t eigenvectors of (Σ_w^{P_t})^{-1} Σ_b^{P_t} removed at step 4 when keeping only the first k_t columns of Q_t. We consider the following test: H_0: at least one of the eigenvectors k_t + 1, …, C − 1 is discriminative; H_1: not H_0. Under H_0, it can easily be shown that $-\left(n - 1 - \frac{h+C}{2}\right)\ln\left(\prod_{j=k_t+1}^{C-1}\frac{1}{1+\lambda_j}\right)$ follows a χ² distribution with (h − k_t)(C − k_t − 1) degrees of freedom. The p-value can be chosen at a confidence level of 5%. If χ² < χ²_{p-value}, the C − 1 − k_t last eigenvectors (those with smallest eigenvalues) can be removed and the stepwise analysis goes on. If χ² > χ²_{p-value}, the eigenvector k_t + 1 = k_{t−1} is discriminative and should be kept.

4 Experimental Results

Three experiments are performed on the Asian Face Image Database PF01 [8], the FERET face database [9]¹, and the ORL database [10], to assess the effectiveness of BDA and to compare it with RoLDA, CoLDA and 2DPCA.

The first experiment is performed on the subset of the IML database containing different facial expressions (see Figure 1(c-d)). Figure 4(d) shows that BDA outperforms RoLDA and CoLDA, with up to a 10% improvement of the recognition rate when k = 14.

The second experiment, performed on FERET, aims at testing the generalization power of BDA and comparing it to that of RoLDA and CoLDA. Indeed, LDA-based methods are known to be more efficient when comparing faces of known people, but to provide worse generalization results than unsupervised methods. The training set (see Figure 4(a)) contains 818 images of 152 persons, with at least 4 views per person, taken on different days and under different lighting conditions. Two test sets (see Figure 4(b-c)) are compared, each containing 200 persons with one view per person, taken from FERET but not present in the training set. From one test set to the other, the facial expressions vary. From Figure 4(e), we can conclude that when the training set contains many classes and important variations inside the classes, BDA provides better generalization than the other methods.

Figure 4: (a): extract of the training set used for the second experiment. (b-c): extracts of the two test sets compared in the second experiment. (d): compared recognition rates of BDA, RoLDA and CoLDA on the subset of IML with expression variations. (e): compared classification rates of BDA, RoLDA, CoLDA and 2DPCA on the subset of the FERET database, when varying the number k of projection vectors.

For the third experiment, the ORL database has been randomly partitioned into a training set containing 5 views and a test set containing the 5 remaining views, for each of the 40 persons. This operation has been repeated 7 times, and BDA, RoLDA and CoLDA have been applied. Figure 5 shows that BDA provides better recognition rates than RoLDA and CoLDA on all the random partitions, whichever of the two 2DoLDA variants outperforms the other (partitions (a-b) and (d-g) versus partition (c)). The results are computed with the optimal number of projection vectors, which is k = 14 for the three methods. For further analysis, the contingency table summed over partitions (a-g) is given in Table 2. The total number of tested faces is 7 × 200 = 1400. The logical symbol ¬ stands for "not"; i.e. the element corresponding to the second row and first column of the table is the number of faces recognized by one of the two 2DoLDA variants but misclassified by the other.

¹ Portions of the research in this paper use the FERET database of facial images collected under the FERET program.

Figure 5: Compared recognition rates of BDA, RoLDA and CoLDA on the 7 random partitions (a)-(g) of the ORL database.

Table 2: Contingency table summed over partitions (a-g) of the ORL database. For instance, the second row and second column element is the number of samples correctly classified by BDA and by one of the two 2DoLDA variants, but misclassified by the other.

From Table 2 we can see that BDA correctly classifies 99.6% of the samples that were correctly classified by both RoLDA and CoLDA. Moreover, it recognizes the major part of the samples that were correctly classified by only one of the two methods (72.7% for one method and 63.6% for the other). It also correctly classifies 35.4% of the samples that were misclassified by both methods, which shows the efficiency of the BDA iterative algorithm. It should be noted that the size of the signature for one sample is 75 × 14 = 1050 for RoLDA and 14 × 65 = 910 for CoLDA, but only 14² = 196 for BDA.

5 Conclusion

In this paper, we have proposed a new class-based projection method, called Bilinear Discriminant Analysis, that can be successfully applied to face recognition. This technique constructs a discriminant projection matrix by maximizing a generalized Fisher criterion relying on a bilinear projection and computed directly from the face image matrices. It has already been shown [3,4] that 2DPCA outperforms the traditional eigenfaces method, and we have highlighted in [5] that 2DoLDA outperforms 2DPCA and the Fisherfaces method. In this paper, we have shown on various international databases that BDA is more efficient than 2DoLDA. Moreover, by providing image signatures of reduced size, BDA allows an important computational gain compared to 2DoLDA and 2DPCA during the recognition step.

Acknowledgements

This research was supported by the European Commission under the FP6 aceMedia contract.

References

[1] M.A. Turk and A.D. Pentland, Eigenfaces for Recognition. Journal of Cognitive Neuroscience, 3(1), pages 71-86, 1991.
[2] P.N. Belhumeur, J.P. Hespanha and D.J. Kriegman, Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Special Issue on Face Recognition, 19(7), pages 711-720, 1997.
[3] J. Yang, D. Zhang and A.F. Frangi, Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(1), pages 131-137, Jan. 2004.
[4] M. Visani, C. Garcia and C. Laurent, Comparing Robustness of Two-Dimensional PCA and Eigenfaces for Face Recognition. In Proceedings of the 1st International Conference on Image Analysis and Recognition (ICIAR 04), Springer Lecture Notes in Computer Science (LNCS 3211), A. Campilho and M. Kamel (eds), Sept. 2004.
[5] M. Visani, C. Garcia and J.M. Jolion, Two-Dimensional-Oriented Linear Discriminant Analysis for Face Recognition. In Proceedings of the International Conference on Computer Vision and Graphics (ICCVG 2004), to appear in the Computational Imaging and Vision series, Warsaw, Poland, Sept. 2004.
[6] D.L. Swets and J. Weng, Using Discriminant Eigenfeatures for Image Retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8), pages 831-836, 1996.
[7] R.I. Jennrich, Stepwise Discriminant Analysis. In Statistical Methods for Digital Computers, A. Ralston and H.S. Wilf (eds), Wiley, New York, USA, 1977.
[8] B.W. Hwang, M.C. Roh and S.W. Lee, Performance Evaluation of Face Recognition Algorithms on Asian Face Database. In IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, Korea, May 2004.
[9] P.J. Phillips, H. Wechsler, J. Huang and P. Rauss, The FERET Database and Evaluation Procedure for Face Recognition Algorithms. Image and Vision Computing, 16(5), pages 295-306, 1998.
[10] F. Samaria and A. Harter, Parameterisation of a Stochastic Model for Human Face Identification. In Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, Sarasota, FL, Dec. 1994.
