A Comparative Study of Face Recognition System Using PCA and LDA


Raj Kumar Sahu, Research Scholar, J.N.U., Jodhpur, India; Dr. Yash Pal Singh, Director, (P.G.) I.T.M. Rewari, India; Dr. Abhijit Kulshrestha, Professor, J.N.U., Jodhpur, India

ABSTRACT
Face recognition has a major impact on security measures, which makes it one of the most appealing areas to explore. To perform face recognition, researchers adopt mathematical techniques to develop automatic recognition systems. Because a face recognition system has to perform over a wide range of databases, dimension reduction becomes a prime requirement to reduce computation time and increase accuracy. In this paper, face recognition is performed using Principal Component Analysis (PCA) followed by Linear Discriminant Analysis (LDA) based dimension reduction. The sequence of this paper is: preprocessing, dimension reduction of the training database by PCA, extraction of features for class separability by LDA, and finally testing by nearest-mean classification. The proposed method is tested on the ORL face database. The recognition rate on this database is found to be 96.00%, showing the efficiency of the proposed method over previously adopted face recognition methods.

Keywords: Face Recognition, PCA, LDA, City Block Distance, Recognition Rate

1. INTRODUCTION
Over the last few years, face recognition research has accelerated technological development in human identification for security and safety. A face recognition system, which performs face detection, human authentication and finally person recognition, has innumerable applications in defense, security, automatic access control and human-computer interfaces. Development of such systems is, however, very challenging because of variations in illumination, pose, facial expression, aging, and disguises such as moustaches, beards, glasses or cosmetics.
Among the various methods developed by researchers, appearance-based approaches to face recognition have yielded good results. They work directly on the face image, processing it as a two-dimensional pattern. The extracted image features are represented so that faces of one individual belong to one class and faces of different individuals are separated into different classes. Features with high separability are kept for further processing and the rest are discarded [26][14]. To reduce computational cost, face image data, generated after various linear or non-linear transformations, are represented in a lower-dimensional space. Successful appearance-based methods such as Principal Component Analysis, Linear Discriminant Analysis and Independent Component Analysis, which employ eigenfaces, Fisherfaces and independent components respectively, have been widely used to extract abstract facial features for face recognition. PCA gives class representations in an orthogonal linear space, whereas LDA generates class-discriminatory information in a linearly separable space that is not necessarily orthogonal [1]. Good results have been obtained with these techniques under varying conditions. The advantages of both can be combined by first reducing the dimension of the face images with eigenfaces and then applying the Fisher space for class separability.

2. LITERATURE STUDY AND RELATED WORK
In the feature-based approach, key information about facial features such as the nose, eyes, chin and mouth is collected with the help of deformable templates and extensive mathematics, and converted into a feature vector. Yuille et al. [23] proposed deformable template techniques, where facial features are determined by interaction with the face images.
In the image-based approach, information-theoretic concepts such as class information, class separability, independent components and image energy content are utilised. Since the whole face image is used, this method is also termed the holistic method. Turk et al. [19] developed Principal Component Analysis based face recognition using the eigenface technique; the algorithm uses eigenvectors to represent the principal components of the face, hence the name. It creates a unique weight vector for each face image, used to represent its eigenface features, and comparing these weights permits identification of individual faces in a database. Zhao et al. [25] use linear discriminant analysis to maximize the scatter between different classes and

minimize the scatter of the input data within the same class. PCA simplifies the input data by extracting features, while LDA distinguishes the input data through dimension reduction. By combining both techniques, a subspace-LDA based face recognition system is developed [7].

3. IDENTIFICATION OF THE PROBLEM AND PRESENT RESEARCH WORK
The aims are to study the comparison between PCA and LDA, and to find how the two can be integrated to develop a robust face recognition system.

4. PROPOSED FACE RECOGNITION ALGORITHM
A. PRINCIPAL COMPONENT ANALYSIS
A grey scale image is a two-dimensional entity. In the spatial domain its dimension is the number of pixels within the image, and each pixel value lies in the range 0 to 255. For a grey scale image of size (R x C), the dimension Q of the image space is R times C. Turk et al. [19] used eigenvalues for principal component analysis, so the method is also termed the eigenface method. In this method, variance due to non-face content is eliminated and only the variation between the face images is collected for analysis. Features are obtained by finding the deviation of each image from the mean image in the spatial domain. The eigenvectors of the covariance matrix of all the images are then calculated, and the training images are projected into the eigenface space. Likewise, a test image is projected into this eigenface space, and the distances between the projected test image and each projected training image are calculated to identify the test image. Figure 1 shows the flowchart of the PCA based face recognition system; the steps are explained with mathematical equations after the flowchart.
[Flowchart: Face Acquisition → Face Database formation → Training Dataset / Test Image → Mean Image calculation → Difference Matrices → Covariance Matrix → Eigenvector Calculation → Weight Matrices → Eigenface Space Training / Test Projection Matrices → Similarity Measurement (L1 norm, L2 norm or Covariance) → Output Face Image]
Fig.1: Flowchart of PCA Based Face Recognition System

4.11 Training Phase

Image: The image matrix I of size (R x C) pixels, where R is the number of rows and C the number of columns of a two-dimensional grey scale image, is transformed into a single column vector, Γ, of size (Q x 1), where Q = R x C.

4.12 Training Set: The image matrix is formed by appending the column vector of each image one after another,

Γ = [Γ_1 Γ_2 ... Γ_M]    (1)

and its size is (Q x M), where M is the number of training images.

4.13 Mean Face: The column-wise mean of this image matrix is the arithmetic average of the training image vectors at each pixel:

Ψ = (1/M) Σ_{i=1}^{M} Γ_i    (2)

Its size is (Q x 1). Figure 2 shows the mean image of the training database images used.
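As a minimal sketch (not from the paper), the training-set and mean-face steps of Eqs. (1)-(2) can be written in NumPy. All sizes and data below are hypothetical, chosen small for illustration.

```python
import numpy as np

# Hypothetical training set: M images of R x C pixels each.
R, C, M = 4, 3, 5
Q = R * C
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(M, R, C)).astype(float)

# Eq. (1): flatten each image to a (Q x 1) column and stack -> Gamma is (Q x M).
Gamma = images.reshape(M, Q).T

# Eq. (2): mean face Psi is the pixel-wise average of the M columns, size (Q x 1).
Psi = Gamma.mean(axis=1, keepdims=True)
```

Each column of `Gamma` is one training image in image space; `Psi` is subtracted from every column in the next step to form the difference matrix.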

Fig.2: Mean Face Image

4.14 Mean Subtracted Image: The difference of each training image vector from the mean image vector is

Φ_i = Γ_i − Ψ    (3)

Its size is (Q x 1).

4.15 Difference Matrix: The mean-subtracted training image vectors are appended column-wise to form the difference matrix

A = [Φ_1 Φ_2 Φ_3 Φ_4 ... Φ_M]    (4)

Its size is (Q x M); here also, M is the number of training images.

4.16 Covariance Matrix: The covariance of the difference matrix is its product with its own transpose:

X = A·Aᵀ = (1/M) Σ_{i=1}^{M} Φ_i Φ_iᵀ    (5)

Hence its size is (Q x Q).

4.17 Eigenvectors and Eigenvalues: In PCA, the eigenvectors of the covariance matrix are calculated. For a face image with Q pixels, the covariance matrix is of size (Q x Q). Recognition could be done with this covariance matrix directly, but because of its very large size the calculation would take a long time and the cost of recognition would increase tremendously. Hence PCA adopts the eigenface method, which works with a matrix of size (M x M), M being the number of training face images. Instead of X, another matrix Y is calculated:

Y = Aᵀ·A    (6)

which is of size (M x M). The eigenvectors v_i and eigenvalues μ_i of Y are obtained from

Y·v_i = μ_i·v_i    (7)
i.e. Aᵀ·A·v_i = μ_i·v_i    (8)

Multiplying both sides by A,

A·Aᵀ·A·v_i = A·μ_i·v_i    (9)

Since μ_i is a scalar, this can also be written as

A·Aᵀ·(A·v_i) = μ_i·(A·v_i)    (10)
X·(A·v_i) = μ_i·(A·v_i)    (11)

Substituting υ_i = A·v_i,

υ_i = A·v_i    (12)

is one of the eigenvectors of X = A·Aᵀ, and its size is (Q x 1). So the eigenvectors of X can be obtained from the eigenvectors of Y: instead of using the very big (Q x Q) covariance matrix X, the (M x M) matrix Y is used.

4.18 Selecting the highest eigenvectors: Most of the generalization power is contained in the first few eigenvectors.
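The derivation of Eqs. (6)-(12) can be checked numerically. The sketch below (not from the paper; sizes and data are hypothetical) diagonalizes the small (M x M) matrix Y = AᵀA and recovers eigenvectors of the large (Q x Q) matrix X = AAᵀ via υ_i = A·v_i.

```python
import numpy as np

# Hypothetical difference matrix A of mean-subtracted images: Q pixels, M images.
Q, M = 100, 8
rng = np.random.default_rng(1)
A = rng.standard_normal((Q, M))

# Eq. (6): the small M x M matrix Y, cheap to diagonalize.
Y = A.T @ A
mu, v = np.linalg.eigh(Y)        # eigenvalues mu_i, eigenvectors v_i of Y (Eq. 7)

# Eq. (12): u_i = A v_i is an eigenvector of X = A A^T with the same eigenvalue.
U = A @ v                        # Q x M matrix of eigenvectors of X
X = A @ A.T

# Verify X u_i = mu_i u_i (Eqs. 10-11) for every column.
for i in range(M):
    assert np.allclose(X @ U[:, i], mu[i] * U[:, i])
```

The key saving is that `eigh` runs on an (M x M) matrix rather than the (Q x Q) covariance matrix, with M typically far smaller than the number of pixels Q.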
Around 40% of the total eigenvectors carry a large share of the total generalization power [29]. So instead of using all M eigenfaces, M' ≤ M eigenfaces are sufficient. A larger eigenvalue signifies a larger variance represented by its eigenvector; hence the eigenvectors are arranged in descending order of their corresponding eigenvalues, so that the most generalizing eigenvector comes first in the eigenvector matrix.

4.19 Testing Phase: An image under test is also projected onto the eigenface space, like the training images. It is mean-subtracted, projected onto the eigenface space and then classified; that is, the test image is assigned to the nearest class by evaluating the shortest distance between the test image and all the training images in eigenface space.

4.20 Test image vector: The image under test, of size (R x C), is also converted into a single column vector, Γ_T, of size (Q x 1) in image space.

4.21 Mean subtracted image: It is the difference of the test image from the mean image:

Φ_T = Γ_T − Ψ    (15)

Its size is also (Q x 1).

Projected Test Image: The mean-subtracted test image is then projected onto the precalculated eigenface space ω, which is of size (Q x M'):

Ω_T = ωᵀ·Φ_T    (16)

Its size is (M' x 1).

4.22 Classification: Classification is performed by a similarity measure, defined as the distance between the projected test image and each image in the projection matrix:

δ_i = ‖Ω_T − Ω_i‖ = ( Σ_{k=1}^{M'} (Ω_T,k − Ω_i,k)² )^{1/2}    (17)
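The testing phase of Eqs. (15)-(17) can be sketched as follows. This is an illustrative example, not the paper's code; all sizes and data are hypothetical.

```python
import numpy as np

# Hypothetical precomputed quantities: Q pixels, M training images, M' eigenfaces.
Q, M, Mp = 50, 6, 4
rng = np.random.default_rng(2)
omega = rng.standard_normal((Q, Mp))         # eigenface space (columns = eigenfaces)
Omega_train = rng.standard_normal((Mp, M))   # projected training images
Psi = rng.standard_normal((Q, 1))            # mean face
Gamma_T = rng.standard_normal((Q, 1))        # test image as a column vector

Phi_T = Gamma_T - Psi                        # Eq. (15): mean-subtracted test image
Omega_T = omega.T @ Phi_T                    # Eq. (16): projection, size (M' x 1)

# Eq. (17): Euclidean (L2) distance to every projected training image.
delta = np.linalg.norm(Omega_train - Omega_T, axis=0)
best = int(np.argmin(delta))                 # index of the nearest training image
```

The index of the minimum distance identifies the training class assigned to the test image, as described in the classification step.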

It is the Euclidean distance (L2 norm) between projections. The test image is classified into the class of the training image at the location of the minimum value in this distance vector.

4.23 Reconstructed image vector: The training and test image vectors can be reconstructed by a back transformation from the eigenface space to the image vector space:

Γ_f = υ·Ω + Ψ = Φ_f + Ψ    (18)

Figure 4 shows the facial images of the first 16 individuals reconstructed from the eigenface-space projection matrix. A representative image vector can also be formed from the class of images for an individual.

4.24 Average Class Projection: The average class projection is the mean of all the projected image vectors of a class:

Ω_Ψ = (1/c) Σ_{i=1}^{c} Ω_i    (19)

This average class projection can be used as a vector representing an image class, instead of a single image vector, for comparison with the test image vector.

4.25 Distance Threshold: It is the maximum allowable distance, which is half of the distance between the two most distant classes:

Θ = (1/2) max_{i,j} ‖Ω_Ψi − Ω_Ψj‖    (20)

4.26 Distance measure: It is the distance between the mean subtracted image and the reconstructed image:

ε² = ‖Φ − Φ_f‖²    (21)

4.27 Criterion for inside-database and out-of-database images:
(a) If ε < Θ and δ_i > Θ for all i, the image is an unknown face.
(b) If ε < Θ and δ_i < Θ for some i, the image is a known face.

Fig.4: Reconstructed Images of first 16 Individuals

B.
SUBSPACE LINEAR DISCRIMINANT ANALYSIS
Linear discriminant analysis is used to overcome a drawback of Principal Component Analysis, whose application is restricted to small image databases. This is achieved by projecting the images onto the eigenface space by PCA and then applying pure LDA to classify the projected data. Linear Discriminant Analysis is also termed the Fisherface method. Belhumeur et al. [6] use linear discriminant analysis to maximize the scatter between different classes and minimize the scatter of the input data within the same class. Figure 5 is a flowchart of the LDA based face recognition system, followed by its mathematical explanation. The first part, PCA, simplifies the input data by extracting features; the next part, LDA, distinguishes the input data through dimension reduction. By combining both techniques, the subspace-LDA based face recognition system is developed.

Training Phase

4.28 Image: The image matrix I of size (R x C) pixels, where R is the number of rows and C the number of columns of a two-dimensional grey scale image. It is then transformed

into a single column vector, Γ, of size (Q x 1), where Q = R x C.

4.29 Training Set: The image matrix is formed by appending the column vector of each image one after another,

Γ = [Γ_1 Γ_2 ... Γ_M]    (22)

and its size is (Q x M), where M is the number of training images.

4.30 Mean Face: The column-wise mean of this image matrix is the arithmetic average of the training image vectors at each pixel:

Ψ = (1/M) Σ_{i=1}^{M} Γ_i    (23)

[Flowchart: Face acquisition → Face Database formation → Training Dataset / Test Image → Mean Image calculation → Difference Matrix / Difference Vector → Covariance Matrix → Eigenvector Calculation → Projection Vector and Weight Matrices → Training Database Projection Matrix (Eigenface Space) / Test Image Projection Matrix → Class Means → Within-Class and Between-Class Scatter Matrices → Eigenvector Calculation → Classification Space Training / Test Matrices → Similarity Measurement (L1 norm, L2 norm or Covariance) → Output Face Image]
Fig.5: Flowchart of Subspace LDA based Face Recognition System

4.31 Mean subtracted image: The difference of each training image vector from the mean image vector is

Φ_i = Γ_i − Ψ    (24)

Its size is (Q x 1).

4.32 Difference Matrix: The mean-subtracted training image vectors are appended column-wise to form the difference matrix

A = [Φ_1 Φ_2 Φ_3 Φ_4 ... Φ_M]    (25)

Its size is (Q x M); here also, M is the number of training images.

4.33 Covariance Matrix: The covariance of the difference matrix is its product with its own transpose.

X = A·Aᵀ = (1/M) Σ_{i=1}^{M} Φ_i Φ_iᵀ    (26)

Hence its size is (Q x Q).

4.34 Eigenvectors and Eigenvalues: As in Section A, the eigenvectors of the (Q x Q) covariance matrix are obtained through the smaller (M x M) matrix Y = Aᵀ·A, M being the number of training face images. The eigenvectors v_i and eigenvalues μ_i of Y are obtained from

Y·v_i = μ_i·v_i    (27)
i.e. Aᵀ·A·v_i = μ_i·v_i    (28)

Multiplying both sides by A,

A·Aᵀ·A·v_i = A·μ_i·v_i    (29)

Since μ_i is a scalar quantity,

A·Aᵀ·A·v_i = μ_i·A·v_i    (30)
X·A·v_i = μ_i·A·v_i    (31)

If υ_i = A·v_i, then

X·υ_i = μ_i·υ_i    (32)

So υ_i = A·v_i is one of the eigenvectors of X = A·Aᵀ, and its size is (Q x 1). Thus the eigenvectors of X can be obtained from those of Y: instead of the very big (Q x Q) covariance matrix X, the (M x M) matrix Y is used.

4.35 Selecting the highest eigenvectors: Most of the generalization power is contained in the first few eigenvectors; around 40% of the total eigenvectors carry a large share of the total generalization power. So instead of using all M eigenfaces, M' ≤ M eigenfaces are sufficient.

Eigenface Space: It is formed from the selected eigenvectors:

ω = [υ_1 υ_2 … υ_M']    (33)

Hence its size is (Q x M').

Projection Matrix: The training images are then projected onto the eigenface space:

Ω = ωᵀ·A    (34)

So its size is (M' x M). This transformation takes each image from image space to eigenface space.
Earlier, in image space, each image was of size (Q x 1); after the eigenface transformation each image is of size (M' x 1) in eigenface space. Hence dimension reduction is achieved, as M' < Q.

4.36 Linear Discriminant Analysis: Instead of the pixel values of the images, the eigenface projections of the PCA transformation are used in the subspace-LDA method. Suppose the training database holds C individual classes with q_i training images in each class, so that the total number of facial images is M = C x q_i.

Eigenface Class Mean: It is the arithmetic average of the eigenface-projected training image vectors of the same individual class:

m_i = (1/q_i) Σ_{k=1}^{q_i} Ω_k,  i = 1, 2, …, C    (35)

The size of each eigenface class mean is (M' x 1).

Eigenface Mean Face: It is the arithmetic average of all the eigenface-projected training image vectors:

m_0 = (1/M) Σ_{k=1}^{M} Ω_k    (36)

Its size is (M' x 1).

Within-Class Scatter Matrix: This matrix represents the average scattering of the projection matrix Ω of the different classes C_i around their respective class means m_i:

S_w = Σ_{i=1}^{C} P(C_i) Σ_i    (37)
Σ_i = E[(Ω − m_i)(Ω − m_i)ᵀ]    (38)

and the prior class probability is

P(C_i) = 1/C    (39)

chosen under the assumption that each class has equal prior probability. The size of S_w is (M' x M'), based on the number M' of eigenvectors selected for the eigenface space.

Between-Class Scatter Matrix: This matrix represents the scatter of each class mean m_i around the overall mean vector m_0:

S_b = Σ_{i=1}^{C} P(C_i)(m_i − m_0)(m_i − m_0)ᵀ    (40)

Its size is (M' x M'). For LDA, the discriminatory power is defined as

J(T) = (Tᵀ·S_b·T) / (Tᵀ·S_w·T)    (41)

where S_b is the between-class and S_w the within-class scatter matrix. The objective is to maximize J(T); this is achieved by obtaining a projection W which maximizes the between-class scatter and minimizes the within-class scatter.
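The LDA stage of Eqs. (35)-(42) can be sketched as below. This is an illustrative example, not the paper's implementation: class counts, feature sizes and data are hypothetical, and the generalized eigenvalue problem is solved as an ordinary eigenproblem on S_w⁻¹·S_b, which is one standard way to obtain the Fisher directions when S_w is invertible.

```python
import numpy as np

# Hypothetical setup: M' eigenface features, C classes, q images per class.
Mp, Cc, q, F = 4, 3, 5, 2                # F = number of Fisher directions kept
rng = np.random.default_rng(4)
Omega = rng.standard_normal((Mp, Cc * q))    # eigenface projections (one per column)
labels = np.repeat(np.arange(Cc), q)

m0 = Omega.mean(axis=1, keepdims=True)       # Eq. (36): overall mean m_0
Sw = np.zeros((Mp, Mp))
Sb = np.zeros((Mp, Mp))
for i in range(Cc):
    Oi = Omega[:, labels == i]
    mi = Oi.mean(axis=1, keepdims=True)      # Eq. (35): class mean m_i
    D = Oi - mi
    Sw += (1.0 / Cc) * (D @ D.T) / q         # Eqs. (37)-(39), P(C_i) = 1/C
    Sb += (1.0 / Cc) * (mi - m0) @ (mi - m0).T   # Eq. (40)

# Eq. (42): S_b W = S_w W lambda, here via eig(S_w^{-1} S_b).
lam, V = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(lam.real)[::-1]
W = V[:, order[:F]].real                     # keep the F largest Fisher directions
```

Projecting the eigenface projections with `W.T` then gives the classification-space vectors of Eqs. (43) and (47), and nearest-neighbour matching in that space implements Eq. (48).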
Then W can be obtained by solving the generalized eigenvalue problem

S_b·W = S_w·W·λ_w    (42)

Here also, some highest number of Fisher features, F, is selected, such that F < M'. Next, the eigenface projections of the training image vectors are projected into the Fisherface space by the dot

product of the optimum projection W and the eigenface projection matrix Ω.

Classification space projection matrix: This matrix is the projection of the training image vectors' eigenface projections into the classification space:

g(Ω) = Wᵀ·Ω    (43)

It is of size (F x M).

Testing Phase
Test image vector: The image under test, of size (R x C), is also converted into a single column vector, Γ_T, of size (Q x 1) in image space.

Mean subtracted image: It is the difference of the test image from the mean image:

Φ_T = Γ_T − Ψ    (44)

Its size is also (Q x 1).

Projected Test Image: The mean-subtracted test image is then projected onto each of the eigenvectors:

ω_k = υ_kᵀ·Φ_T = υ_kᵀ·(Γ_T − Ψ),  k = 1, 2, …, M'    (45)

Weight Matrix: It is the representation of the test image in the eigenface space:

Ω_T = [ω_1 ω_2 ω_3 … ω_M']ᵀ    (46)

Its size is (M' x 1).

Classification test space projection: This matrix is formed by projecting the eigenface projection of the test image vector (i.e. the weight matrix) into the classification space in the same manner:

g(Ω_T) = Wᵀ·Ω_T    (47)

It is of size (F x 1).

Distance measure: The distance between the classification test space projection and the classification space projection matrix is determined by the Euclidean distance

C_i = ‖g(Ω_T) − g(Ω_i)‖ = ( Σ_{k=1}^{F} (g_k(Ω_T) − g_k(Ω_i))² )^{1/2},  i = 1, 2, …, M    (48)

The location of the shortest Euclidean distance gives the class of the image under test.

5. RESULTS
Table 1 shows the correctly recognized facial images out of 38 different test images for the PCA and subspace-LDA algorithms, together with their recognition rates. In this comparison, the number of PCA features is taken as 80.
[Table 1 column headings: PCA (out of 59), LDA (out of 59), Recognition Rate (%) by PCA, Recognition Rate (%) by LDA]

Table 1: Class Wise Result Variation of PCA and LDA Based Face Recognition System

The feature-wise variation in the recognition rate of the PCA and LDA systems is shown in Figure 6. With an increase in features, the PCA recognition rate keeps increasing up to 60 features and then becomes constant. Moreover, the recognition rate of LDA is always greater than that of PCA.

[Figure: recognition rate (%) of PCA and LDA plotted against the number of PCA features]
Fig.6: Feature Based Recognition Rate Comparison of PCA & LDA

Table II: Comparison of Recognition Rate of the Proposed Algorithm with Other Methods

ICA [28]: 70.91%
Linear Subspace [27]: 79.40%
PCA with L2 Norm [19]: 84.00%
DCT Based Face Recognition [24]: 84.50%
K-means [10]: 86.75%
Kernel-Fisher [28]: 93.94%
PCA & LDA with L2 Norm [23][25]: 94.80%
Fuzzy Ant with fuzzy C-means [10]: 94.82%
Proposed Algorithm: 96.00%

6. CONCLUSION
In this paper a face recognition system based on dimension reduction by PCA followed by LDA, with classification by city block distance, is presented. The recognition rates are compared for varying numbers of PCA and LDA features. For the experiments the ORL face database is used, which provides face images under varying facial conditions [12]. It is found that increasing the PCA features and reducing the LDA features increases the recognition rate, as shown graphically in Figure 6. In our simulations, we considered 8 training images and 2 test images for each of 40 persons (in total 320 training and 80 test face images). Our experimental results show a recognition rate of 96.00%, which demonstrates an improvement over several previous methods, as shown in Table II. Hence, from these experiments, it can be concluded that when the number of face images per subject is small, PCA is better; but for large training sets and varied training data, its combination with LDA gives better results than pure PCA.

Future research work: Utilizing this comparative study of PCA and LDA, a robust face recognition system may be developed, which may open varied areas of study.

REFERENCES
[1] Belhumeur P.N., Hespanha J.P., and Kriegman D.J., "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.
19, pp. , 1997.
[2] Eleyan Alaa and Demirel Hasan, "PCA and LDA based Neural Networks for Human Face Recognition", Face Recognition Book, ISBN , pp. 558, I-Tech, Vienna, Austria, June.
[3] Etemad K. and Chellappa R., "Face Recognition Using Discriminant Eigenvectors", IEEE Transactions on Pattern Recognition, pp. , 1996.
[4] Hu H., Zhang P. and Torre F. De la, "Face recognition using enhanced linear discriminant analysis", IET Computer Vision, Vol. 4, Iss. 3, pp. .
[5] Laurens van der Maaten, Eric Postma, and Jaap van den Herik, "Dimensionality Reduction: A Comparative Review", Tilburg centre for Creative Computing, Technical Report, 2009.
[6] Lee S.J., Yung S.B., Kwon J.W., and Hong S.H., "Face Detection and Recognition Using PCA", IEEE TENCON, pp. .
[7] Lone Manzoor Ahmad, Zakariya S.M. and Ali Rashid, "Automatic Face Recognition System by Combining Four Individual Algorithms", Proceedings of the International Conference on Computational Intelligence and Communication Systems.
[8] Lu Juwei, Plataniotis K.N., and Venetsanopoulos A.N., "Face Recognition Using LDA Based Algorithms", IEEE Transactions on Neural Networks.
[9] Mahyabadi M.P., Soltanizadeh H., Shokouhi Sh.B., "Facial Detection based on PCA and Adaptive Resonance Theory 2A Neural Network", Proceedings of the IJME-INTERTECH Conference.
[10] Makdee S., Kimpan C., Pansang S., "Invariant range image multi pose face recognition using Fuzzy ant algorithm and membership matching score", Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, pp. , 2007.

[11] Meedeniya D.A., Ratnaweera D.A.A.C., "Enhanced Face Recognition Through Variation of Principal Component Analysis (PCA)", Proceedings of the Second International Conference on Industrial and Information Systems, ICIIS.
[12] Olivetti & Oracle Research Laboratory, The Olivetti & Oracle Research Laboratory Face Database.
[13] Pang S., Ozawa S., Kasabov N., "Incremental Linear Discriminant Analysis for Classification of Data Streams", IEEE Transactions on Systems, Man and Cybernetics, vol. 35, no. 5, pp. .
[14] Patil A.M., Kolhe S.R. and Patil P.M., "2D Face Recognition Techniques: A Survey", International Journal of Machine Intelligence, ISSN: , Volume 2, Issue 1, pp. 74-8.
[15] Phillips P.J., and Moon H., "Comparison of Projection-Based Face Recognition Algorithms", IEEE Transactions on Pattern Recognition, pp. .
[16] Poon Bruce, Amin M. Ashraful, Yan Hong, "PCA Based Face Recognition and Testing Criteria", Proceedings of the Eighth International Conference on Machine Learning and Cybernetics, Baoding.
[17] Sirovich L. and Kirby M., "Low-dimensional procedure for the characterization of human faces", Journal of the Optical Society of America A, 4, 3, pp. .
[18] Swets D.L. and Weng J.J., "Using Discriminant Eigen Features for Image Retrieval", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, pp. .
[19] Turk M., Pentland A., "Eigen Faces for Face Recognition", Journal of Cognitive Neuroscience, Vol. 3, No. 1.
[20] Turk M. and Pentland A., "Eigenfaces for recognition", Journal of Cognitive Neuroscience, 3, 72-86.
[21] Vijay Kumar B.G., Aravind R., "Computationally efficient algorithm for face super-resolution using (2D)²-PCA based prior", IET Image Processing, Vol. 4, Iss. 2, pp. .
[22] Wendy S. Yambor, "Analysis of PCA-Based and Fisher Discriminant-Based Image Recognition Algorithms", Computer Science Technical Report CS, Colorado State University.
[23] Yuille A.L., Cohen D.S., and Hallinan P.
W., "Feature Extraction From Faces Using Deformable Templates", Proceedings of CVPR.
[24] Zhao S., Grigat R.R., "Multi block fusion scheme for face recognition", International Conference on Pattern Recognition (ICPR), Vol. 1, pp. .
[25] Zhao W., Chellappa R., Krishnaswamy A., "Discriminant Analysis of Principal Components for Face Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8.
[26] Zhao W., Chellappa R., Phillips P.J. and Rosenfeld A., "Face Recognition: A Literature Survey", ACM Computing Surveys, Vol. 35, No. 4, pp. .
[27] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection", IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, pp. , Jul.
[28] M.-H. Yang, "Face recognition using kernel methods", in Advances in Neural Information Processing Systems, vol. 14, Cambridge, MA: MIT Press, 2002, pp. .
[29] Ahmet Bahtiyar Gul, "Holistic Face Recognition by Dimension Reduction", M.Sc. Thesis, EEE, Middle East Technical University.

ABOUT THE AUTHORS
1. Raj Kumar Sahu is working as Associate Professor with C.S.I.T., Durg, and is presently pursuing his research at Jodhpur National University, Jodhpur, Faculty of Electronics and Communication Engineering. His area of interest is face recognition systems.
2. Prof. Dr. Y.P. Singh is currently working as Director, Somany P.G. Institute of Technology and Management, Rewari, Haryana. He has worked for about 27 years as Lecturer, Dean of Academics, Principal and Director in many engineering institutions and organizations, and served with the Training and Technical Department, Govt. of Delhi, for almost 17 years. He has about 42 research papers published in national and 48 in international journals to his credit. He was selected and awarded by the Govt. of Delhi as Best Technical Teacher, and received the Outstanding Teacher Award in 2012 and 2013.
He is also an expert and Master Trainer for teachers, empanelled by SCERT/NCERT and CSTT, MHRD, Govt. of India, and guides research scholars for almost a dozen universities. His areas of research are mobile and wireless communication, digital signal processing, and the development of algorithms for data processing.
3. Dr. Abhijit Kulshrestha, Professor & Head, Department of Physics, Faculty of Engineering and Technology, Jodhpur National University, Jodhpur.


A Unified Bayesian Framework for Face Recognition

A Unified Bayesian Framework for Face Recognition Appears in the IEEE Signal Processing Society International Conference on Image Processing, ICIP, October 4-7,, Chicago, Illinois, USA A Unified Bayesian Framework for Face Recognition Chengjun Liu and

More information

Example: Face Detection

Example: Face Detection Announcements HW1 returned New attendance policy Face Recognition: Dimensionality Reduction On time: 1 point Five minutes or more late: 0.5 points Absent: 0 points Biometrics CSE 190 Lecture 14 CSE190,

More information

Comparative Assessment of Independent Component. Component Analysis (ICA) for Face Recognition.

Comparative Assessment of Independent Component. Component Analysis (ICA) for Face Recognition. Appears in the Second International Conference on Audio- and Video-based Biometric Person Authentication, AVBPA 99, ashington D. C. USA, March 22-2, 1999. Comparative Assessment of Independent Component

More information

Symmetric Two Dimensional Linear Discriminant Analysis (2DLDA)

Symmetric Two Dimensional Linear Discriminant Analysis (2DLDA) Symmetric Two Dimensional inear Discriminant Analysis (2DDA) Dijun uo, Chris Ding, Heng Huang University of Texas at Arlington 701 S. Nedderman Drive Arlington, TX 76013 dijun.luo@gmail.com, {chqding,

More information

Eigenfaces. Face Recognition Using Principal Components Analysis

Eigenfaces. Face Recognition Using Principal Components Analysis Eigenfaces Face Recognition Using Principal Components Analysis M. Turk, A. Pentland, "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, 3(1), pp. 71-86, 1991. Slides : George Bebis, UNR

More information

FACE RECOGNITION BY EIGENFACE AND ELASTIC BUNCH GRAPH MATCHING. Master of Philosophy Research Project First-term Report SUPERVISED BY

FACE RECOGNITION BY EIGENFACE AND ELASTIC BUNCH GRAPH MATCHING. Master of Philosophy Research Project First-term Report SUPERVISED BY FACE RECOGNITION BY EIGENFACE AND ELASTIC BUNCH GRAPH MATCHING Master of Philosophy Research Proect First-term Report SUPERVISED BY Professor LYU, Rung Tsong Michael PREPARED BY JANG KIM FUNG (01036550)

More information

COS 429: COMPUTER VISON Face Recognition

COS 429: COMPUTER VISON Face Recognition COS 429: COMPUTER VISON Face Recognition Intro to recognition PCA and Eigenfaces LDA and Fisherfaces Face detection: Viola & Jones (Optional) generic object models for faces: the Constellation Model Reading:

More information

PCA FACE RECOGNITION

PCA FACE RECOGNITION PCA FACE RECOGNITION The slides are from several sources through James Hays (Brown); Srinivasa Narasimhan (CMU); Silvio Savarese (U. of Michigan); Shree Nayar (Columbia) including their own slides. Goal

More information

Non-parametric Classification of Facial Features

Non-parametric Classification of Facial Features Non-parametric Classification of Facial Features Hyun Sung Chang Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Problem statement In this project, I attempted

More information

Reconnaissance d objetsd et vision artificielle

Reconnaissance d objetsd et vision artificielle Reconnaissance d objetsd et vision artificielle http://www.di.ens.fr/willow/teaching/recvis09 Lecture 6 Face recognition Face detection Neural nets Attention! Troisième exercice de programmation du le

More information

Discriminant Uncorrelated Neighborhood Preserving Projections

Discriminant Uncorrelated Neighborhood Preserving Projections Journal of Information & Computational Science 8: 14 (2011) 3019 3026 Available at http://www.joics.com Discriminant Uncorrelated Neighborhood Preserving Projections Guoqiang WANG a,, Weijuan ZHANG a,

More information

CS4495/6495 Introduction to Computer Vision. 8B-L2 Principle Component Analysis (and its use in Computer Vision)

CS4495/6495 Introduction to Computer Vision. 8B-L2 Principle Component Analysis (and its use in Computer Vision) CS4495/6495 Introduction to Computer Vision 8B-L2 Principle Component Analysis (and its use in Computer Vision) Wavelength 2 Wavelength 2 Principal Components Principal components are all about the directions

More information

Deriving Principal Component Analysis (PCA)

Deriving Principal Component Analysis (PCA) -0 Mathematical Foundations for Machine Learning Machine Learning Department School of Computer Science Carnegie Mellon University Deriving Principal Component Analysis (PCA) Matt Gormley Lecture 11 Oct.

More information

Kazuhiro Fukui, University of Tsukuba

Kazuhiro Fukui, University of Tsukuba Subspace Methods Kazuhiro Fukui, University of Tsukuba Synonyms Multiple similarity method Related Concepts Principal component analysis (PCA) Subspace analysis Dimensionality reduction Definition Subspace

More information

Lecture 13 Visual recognition

Lecture 13 Visual recognition Lecture 13 Visual recognition Announcements Silvio Savarese Lecture 13-20-Feb-14 Lecture 13 Visual recognition Object classification bag of words models Discriminative methods Generative methods Object

More information

Lecture 24: Principal Component Analysis. Aykut Erdem May 2016 Hacettepe University

Lecture 24: Principal Component Analysis. Aykut Erdem May 2016 Hacettepe University Lecture 4: Principal Component Analysis Aykut Erdem May 016 Hacettepe University This week Motivation PCA algorithms Applications PCA shortcomings Autoencoders Kernel PCA PCA Applications Data Visualization

More information

Enhanced Fisher Linear Discriminant Models for Face Recognition

Enhanced Fisher Linear Discriminant Models for Face Recognition Appears in the 14th International Conference on Pattern Recognition, ICPR 98, Queensland, Australia, August 17-2, 1998 Enhanced isher Linear Discriminant Models for ace Recognition Chengjun Liu and Harry

More information

Recognition Using Class Specific Linear Projection. Magali Segal Stolrasky Nadav Ben Jakov April, 2015

Recognition Using Class Specific Linear Projection. Magali Segal Stolrasky Nadav Ben Jakov April, 2015 Recognition Using Class Specific Linear Projection Magali Segal Stolrasky Nadav Ben Jakov April, 2015 Articles Eigenfaces vs. Fisherfaces Recognition Using Class Specific Linear Projection, Peter N. Belhumeur,

More information

Subspace Methods for Visual Learning and Recognition

Subspace Methods for Visual Learning and Recognition This is a shortened version of the tutorial given at the ECCV 2002, Copenhagen, and ICPR 2002, Quebec City. Copyright 2002 by Aleš Leonardis, University of Ljubljana, and Horst Bischof, Graz University

More information

Face Recognition. Lecture-14

Face Recognition. Lecture-14 Face Recognition Lecture-14 Face Recognition imple Approach Recognize faces (mug shots) using gray levels (appearance). Each image is mapped to a long vector of gray levels. everal views of each person

More information

Image Analysis & Retrieval. Lec 14. Eigenface and Fisherface

Image Analysis & Retrieval. Lec 14. Eigenface and Fisherface Image Analysis & Retrieval Lec 14 Eigenface and Fisherface Zhu Li Dept of CSEE, UMKC Office: FH560E, Email: lizhu@umkc.edu, Ph: x 2346. http://l.web.umkc.edu/lizhu Z. Li, Image Analysis & Retrv, Spring

More information

PCA and LDA. Man-Wai MAK

PCA and LDA. Man-Wai MAK PCA and LDA Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: S.J.D. Prince,Computer

More information

Multiple Similarities Based Kernel Subspace Learning for Image Classification

Multiple Similarities Based Kernel Subspace Learning for Image Classification Multiple Similarities Based Kernel Subspace Learning for Image Classification Wang Yan, Qingshan Liu, Hanqing Lu, and Songde Ma National Laboratory of Pattern Recognition, Institute of Automation, Chinese

More information

CITS 4402 Computer Vision

CITS 4402 Computer Vision CITS 4402 Computer Vision A/Prof Ajmal Mian Adj/A/Prof Mehdi Ravanbakhsh Lecture 06 Object Recognition Objectives To understand the concept of image based object recognition To learn how to match images

More information

Real Time Face Detection and Recognition using Haar - Based Cascade Classifier and Principal Component Analysis

Real Time Face Detection and Recognition using Haar - Based Cascade Classifier and Principal Component Analysis Real Time Face Detection and Recognition using Haar - Based Cascade Classifier and Principal Component Analysis Sarala A. Dabhade PG student M. Tech (Computer Egg) BVDU s COE Pune Prof. Mrunal S. Bewoor

More information

Face Detection and Recognition

Face Detection and Recognition Face Detection and Recognition Face Recognition Problem Reading: Chapter 18.10 and, optionally, Face Recognition using Eigenfaces by M. Turk and A. Pentland Queryimage face query database Face Verification

More information

Aruna Bhat Research Scholar, Department of Electrical Engineering, IIT Delhi, India

Aruna Bhat Research Scholar, Department of Electrical Engineering, IIT Delhi, India International Journal of Scientific Research in Computer Science, Engineering and Information Technology 2017 IJSRCSEIT Volume 2 Issue 6 ISSN : 2456-3307 Robust Face Recognition System using Non Additive

More information

Lecture 17: Face Recogni2on

Lecture 17: Face Recogni2on Lecture 17: Face Recogni2on Dr. Juan Carlos Niebles Stanford AI Lab Professor Fei-Fei Li Stanford Vision Lab Lecture 17-1! What we will learn today Introduc2on to face recogni2on Principal Component Analysis

More information

Image Analysis & Retrieval Lec 14 - Eigenface & Fisherface

Image Analysis & Retrieval Lec 14 - Eigenface & Fisherface CS/EE 5590 / ENG 401 Special Topics, Spring 2018 Image Analysis & Retrieval Lec 14 - Eigenface & Fisherface Zhu Li Dept of CSEE, UMKC http://l.web.umkc.edu/lizhu Office Hour: Tue/Thr 2:30-4pm@FH560E, Contact:

More information

An Efficient Pseudoinverse Linear Discriminant Analysis method for Face Recognition

An Efficient Pseudoinverse Linear Discriminant Analysis method for Face Recognition An Efficient Pseudoinverse Linear Discriminant Analysis method for Face Recognition Jun Liu, Songcan Chen, Daoqiang Zhang, and Xiaoyang Tan Department of Computer Science & Engineering, Nanjing University

More information

When Fisher meets Fukunaga-Koontz: A New Look at Linear Discriminants

When Fisher meets Fukunaga-Koontz: A New Look at Linear Discriminants When Fisher meets Fukunaga-Koontz: A New Look at Linear Discriminants Sheng Zhang erence Sim School of Computing, National University of Singapore 3 Science Drive 2, Singapore 7543 {zhangshe, tsim}@comp.nus.edu.sg

More information

PCA & ICA. CE-717: Machine Learning Sharif University of Technology Spring Soleymani

PCA & ICA. CE-717: Machine Learning Sharif University of Technology Spring Soleymani PCA & ICA CE-717: Machine Learning Sharif University of Technology Spring 2015 Soleymani Dimensionality Reduction: Feature Selection vs. Feature Extraction Feature selection Select a subset of a given

More information

PCA and LDA. Man-Wai MAK

PCA and LDA. Man-Wai MAK PCA and LDA Man-Wai MAK Dept. of Electronic and Information Engineering, The Hong Kong Polytechnic University enmwmak@polyu.edu.hk http://www.eie.polyu.edu.hk/ mwmak References: S.J.D. Prince,Computer

More information

Advanced Introduction to Machine Learning CMU-10715

Advanced Introduction to Machine Learning CMU-10715 Advanced Introduction to Machine Learning CMU-10715 Principal Component Analysis Barnabás Póczos Contents Motivation PCA algorithms Applications Some of these slides are taken from Karl Booksh Research

More information

Lecture 17: Face Recogni2on

Lecture 17: Face Recogni2on Lecture 17: Face Recogni2on Dr. Juan Carlos Niebles Stanford AI Lab Professor Fei-Fei Li Stanford Vision Lab Lecture 17-1! What we will learn today Introduc2on to face recogni2on Principal Component Analysis

More information

Course 495: Advanced Statistical Machine Learning/Pattern Recognition

Course 495: Advanced Statistical Machine Learning/Pattern Recognition Course 495: Advanced Statistical Machine Learning/Pattern Recognition Deterministic Component Analysis Goal (Lecture): To present standard and modern Component Analysis (CA) techniques such as Principal

More information

Face recognition Computer Vision Spring 2018, Lecture 21

Face recognition Computer Vision Spring 2018, Lecture 21 Face recognition http://www.cs.cmu.edu/~16385/ 16-385 Computer Vision Spring 2018, Lecture 21 Course announcements Homework 6 has been posted and is due on April 27 th. - Any questions about the homework?

More information

Two-Layered Face Detection System using Evolutionary Algorithm

Two-Layered Face Detection System using Evolutionary Algorithm Two-Layered Face Detection System using Evolutionary Algorithm Jun-Su Jang Jong-Hwan Kim Dept. of Electrical Engineering and Computer Science, Korea Advanced Institute of Science and Technology (KAIST),

More information

Principal Component Analysis

Principal Component Analysis B: Chapter 1 HTF: Chapter 1.5 Principal Component Analysis Barnabás Póczos University of Alberta Nov, 009 Contents Motivation PCA algorithms Applications Face recognition Facial expression recognition

More information

Robust Tensor Factorization Using R 1 Norm

Robust Tensor Factorization Using R 1 Norm Robust Tensor Factorization Using R Norm Heng Huang Computer Science and Engineering University of Texas at Arlington heng@uta.edu Chris Ding Computer Science and Engineering University of Texas at Arlington

More information

ECE 661: Homework 10 Fall 2014

ECE 661: Homework 10 Fall 2014 ECE 661: Homework 10 Fall 2014 This homework consists of the following two parts: (1) Face recognition with PCA and LDA for dimensionality reduction and the nearest-neighborhood rule for classification;

More information

A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier

A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier A Modified Incremental Principal Component Analysis for On-Line Learning of Feature Space and Classifier Seiichi Ozawa 1, Shaoning Pang 2, and Nikola Kasabov 2 1 Graduate School of Science and Technology,

More information

A Modified Incremental Principal Component Analysis for On-line Learning of Feature Space and Classifier

A Modified Incremental Principal Component Analysis for On-line Learning of Feature Space and Classifier A Modified Incremental Principal Component Analysis for On-line Learning of Feature Space and Classifier Seiichi Ozawa, Shaoning Pang, and Nikola Kasabov Graduate School of Science and Technology, Kobe

More information

Principal Component Analysis -- PCA (also called Karhunen-Loeve transformation)

Principal Component Analysis -- PCA (also called Karhunen-Loeve transformation) Principal Component Analysis -- PCA (also called Karhunen-Loeve transformation) PCA transforms the original input space into a lower dimensional space, by constructing dimensions that are linear combinations

More information

Face Recognition. Lauren Barker

Face Recognition. Lauren Barker Face Recognition Lauren Barker 24th April 2011 Abstract This report presents an exploration into the various techniques involved in attempting to solve the problem of face recognition. Focus is paid to

More information

Face detection and recognition. Detection Recognition Sally

Face detection and recognition. Detection Recognition Sally Face detection and recognition Detection Recognition Sally Face detection & recognition Viola & Jones detector Available in open CV Face recognition Eigenfaces for face recognition Metric learning identification

More information

Uncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization

Uncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization Uncorrelated Multilinear Principal Component Analysis through Successive Variance Maximization Haiping Lu 1 K. N. Plataniotis 1 A. N. Venetsanopoulos 1,2 1 Department of Electrical & Computer Engineering,

More information

Linear Subspace Models

Linear Subspace Models Linear Subspace Models Goal: Explore linear models of a data set. Motivation: A central question in vision concerns how we represent a collection of data vectors. The data vectors may be rasterized images,

More information

201 Broadway 20 Ames St. E (corresponding to variations between dierent. and P (j E ) which are derived from training data

201 Broadway 20 Ames St. E (corresponding to variations between dierent. and P (j E ) which are derived from training data Beyond Eigenfaces: Probabilistic Matching for ace Recognition Baback Moghaddam Wasiuddin Wahid and Alex Pentland Mitsubishi Electric Research Laboratory MIT Media Laboratory 201 Broadway 20 Ames St. Cambridge,

More information

Face Recognition and Biometric Systems

Face Recognition and Biometric Systems The Eigenfaces method Plan of the lecture Principal Components Analysis main idea Feature extraction by PCA face recognition Eigenfaces training feature extraction Literature M.A.Turk, A.P.Pentland Face

More information

Face Recognition. Lecture-14

Face Recognition. Lecture-14 Face Recognition Lecture-14 Face Recognition imple Approach Recognize faces mug shots) using gray levels appearance). Each image is mapped to a long vector of gray levels. everal views of each person are

More information

Face Recognition Using Multi-viewpoint Patterns for Robot Vision

Face Recognition Using Multi-viewpoint Patterns for Robot Vision 11th International Symposium of Robotics Research (ISRR2003), pp.192-201, 2003 Face Recognition Using Multi-viewpoint Patterns for Robot Vision Kazuhiro Fukui and Osamu Yamaguchi Corporate Research and

More information

Model-based Characterization of Mammographic Masses

Model-based Characterization of Mammographic Masses Model-based Characterization of Mammographic Masses Sven-René von der Heidt 1, Matthias Elter 2, Thomas Wittenberg 2, Dietrich Paulus 1 1 Institut für Computervisualistik, Universität Koblenz-Landau 2

More information

Pattern Recognition 2

Pattern Recognition 2 Pattern Recognition 2 KNN,, Dr. Terence Sim School of Computing National University of Singapore Outline 1 2 3 4 5 Outline 1 2 3 4 5 The Bayes Classifier is theoretically optimum. That is, prob. of error

More information

Locality Preserving Projections

Locality Preserving Projections Locality Preserving Projections Xiaofei He Department of Computer Science The University of Chicago Chicago, IL 60637 xiaofei@cs.uchicago.edu Partha Niyogi Department of Computer Science The University

More information

A Unified Framework for Subspace Face Recognition

A Unified Framework for Subspace Face Recognition 1222 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 26, NO. 9, SEPTEMBER 2004 A Unified Framework for Subspace Face Recognition iaogang Wang, Student Member, IEEE, and iaoou Tang,

More information

Face Recognition Technique Based on Eigenfaces Method

Face Recognition Technique Based on Eigenfaces Method Proceeding of 3 rd scientific conference of the College of Science, University of Baghdad Ali and AL-Phalahi 24 Proceeding to 26 arch of 2009 3 rd scientific conference, 2009, PP 781-786 Face Recognition

More information

Random Sampling LDA for Face Recognition

Random Sampling LDA for Face Recognition Random Sampling LDA for Face Recognition Xiaogang Wang and Xiaoou ang Department of Information Engineering he Chinese University of Hong Kong {xgwang1, xtang}@ie.cuhk.edu.hk Abstract Linear Discriminant

More information

2D Image Processing Face Detection and Recognition

2D Image Processing Face Detection and Recognition 2D Image Processing Face Detection and Recognition Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de

More information

Image Analysis. PCA and Eigenfaces

Image Analysis. PCA and Eigenfaces Image Analysis PCA and Eigenfaces Christophoros Nikou cnikou@cs.uoi.gr Images taken from: D. Forsyth and J. Ponce. Computer Vision: A Modern Approach, Prentice Hall, 2003. Computer Vision course by Svetlana

More information

Unsupervised Learning: K- Means & PCA

Unsupervised Learning: K- Means & PCA Unsupervised Learning: K- Means & PCA Unsupervised Learning Supervised learning used labeled data pairs (x, y) to learn a func>on f : X Y But, what if we don t have labels? No labels = unsupervised learning

More information

Eigenimages. Digital Image Processing: Bernd Girod, 2013 Stanford University -- Eigenimages 1

Eigenimages. Digital Image Processing: Bernd Girod, 2013 Stanford University -- Eigenimages 1 Eigenimages " Unitary transforms" Karhunen-Loève transform" Eigenimages for recognition" Sirovich and Kirby method" Example: eigenfaces" Eigenfaces vs. Fisherfaces" Digital Image Processing: Bernd Girod,

More information

Image Region Selection and Ensemble for Face Recognition

Image Region Selection and Ensemble for Face Recognition Image Region Selection and Ensemble for Face Recognition Xin Geng and Zhi-Hua Zhou National Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China E-mail: {gengx, zhouzh}@lamda.nju.edu.cn

More information

CS 4495 Computer Vision Principle Component Analysis

CS 4495 Computer Vision Principle Component Analysis CS 4495 Computer Vision Principle Component Analysis (and it s use in Computer Vision) Aaron Bobick School of Interactive Computing Administrivia PS6 is out. Due *** Sunday, Nov 24th at 11:55pm *** PS7

More information

Digital Image Processing Lectures 13 & 14

Digital Image Processing Lectures 13 & 14 Lectures 13 & 14, Professor Department of Electrical and Computer Engineering Colorado State University Spring 2013 Properties of KL Transform The KL transform has many desirable properties which makes

More information

Lecture 16: Small Sample Size Problems (Covariance Estimation) Many thanks to Carlos Thomaz who authored the original version of these slides

Lecture 16: Small Sample Size Problems (Covariance Estimation) Many thanks to Carlos Thomaz who authored the original version of these slides Lecture 16: Small Sample Size Problems (Covariance Estimation) Many thanks to Carlos Thomaz who authored the original version of these slides Intelligent Data Analysis and Probabilistic Inference Lecture

More information

Simultaneous and Orthogonal Decomposition of Data using Multimodal Discriminant Analysis

Simultaneous and Orthogonal Decomposition of Data using Multimodal Discriminant Analysis Simultaneous and Orthogonal Decomposition of Data using Multimodal Discriminant Analysis Terence Sim Sheng Zhang Jianran Li Yan Chen School of Computing, National University of Singapore, Singapore 117417.

More information

Face Recognition Technique Based on Eigenfaces Method

Face Recognition Technique Based on Eigenfaces Method Face Recognition Technique Based on Eigenfaces ethod S Ali * and KA AL-Phalahi ** * Remote Sensing Unit, College of Science, University of Baghdad, Iraq, Baghdad, Al- Jaderyia ** Department of Computer

More information

Face Recognition Using Global Gabor Filter in Small Sample Case *

Face Recognition Using Global Gabor Filter in Small Sample Case * ISSN 1673-9418 CODEN JKYTA8 E-mail: fcst@public2.bta.net.cn Journal of Frontiers of Computer Science and Technology http://www.ceaj.org 1673-9418/2010/04(05)-0420-06 Tel: +86-10-51616056 DOI: 10.3778/j.issn.1673-9418.2010.05.004

More information

Linear Algebra & Geometry why is linear algebra useful in computer vision?

Linear Algebra & Geometry why is linear algebra useful in computer vision? Linear Algebra & Geometry why is linear algebra useful in computer vision? References: -Any book on linear algebra! -[HZ] chapters 2, 4 Some of the slides in this lecture are courtesy to Prof. Octavia

More information

Evolutionary Pursuit and Its Application to Face Recognition

Evolutionary Pursuit and Its Application to Face Recognition IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, no. 6, pp. 50-52, 2000. Evolutionary Pursuit and Its Application to Face Recognition Chengjun Liu, Member, IEEE, and Harry Wechsler, Fellow,

More information

Principal Component Analysis (PCA)

Principal Component Analysis (PCA) Principal Component Analysis (PCA) Additional reading can be found from non-assessed exercises (week 8) in this course unit teaching page. Textbooks: Sect. 6.3 in [1] and Ch. 12 in [2] Outline Introduction

More information

Statistical Pattern Recognition

Statistical Pattern Recognition Statistical Pattern Recognition Feature Extraction Hamid R. Rabiee Jafar Muhammadi, Alireza Ghasemi, Payam Siyari Spring 2014 http://ce.sharif.edu/courses/92-93/2/ce725-2/ Agenda Dimensionality Reduction

More information

ROBERTO BATTITI, MAURO BRUNATO. The LION Way: Machine Learning plus Intelligent Optimization. LIONlab, University of Trento, Italy, Apr 2015

ROBERTO BATTITI, MAURO BRUNATO. The LION Way: Machine Learning plus Intelligent Optimization. LIONlab, University of Trento, Italy, Apr 2015 ROBERTO BATTITI, MAURO BRUNATO. The LION Way: Machine Learning plus Intelligent Optimization. LIONlab, University of Trento, Italy, Apr 2015 http://intelligentoptimization.org/lionbook Roberto Battiti

More information

COMP 408/508. Computer Vision Fall 2017 PCA for Recognition

COMP 408/508. Computer Vision Fall 2017 PCA for Recognition COMP 408/508 Computer Vision Fall 07 PCA or Recognition Recall: Color Gradient by PCA v λ ( G G, ) x x x R R v, v : eigenvectors o D D with v ^v (, ) x x λ, λ : eigenvalues o D D with λ >λ v λ ( B B, )

More information

Principal Component Analysis (PCA)

Principal Component Analysis (PCA) Principal Component Analysis (PCA) Salvador Dalí, Galatea of the Spheres CSC411/2515: Machine Learning and Data Mining, Winter 2018 Michael Guerzhoy and Lisa Zhang Some slides from Derek Hoiem and Alysha

More information

W vs. QCD Jet Tagging at the Large Hadron Collider

W vs. QCD Jet Tagging at the Large Hadron Collider W vs. QCD Jet Tagging at the Large Hadron Collider Bryan Anenberg: anenberg@stanford.edu; CS229 December 13, 2013 Problem Statement High energy collisions of protons at the Large Hadron Collider (LHC)

More information

Myoelectrical signal classification based on S transform and two-directional 2DPCA

Myoelectrical signal classification based on S transform and two-directional 2DPCA Myoelectrical signal classification based on S transform and two-directional 2DPCA Hong-Bo Xie1 * and Hui Liu2 1 ARC Centre of Excellence for Mathematical and Statistical Frontiers Queensland University

More information

System 1 (last lecture) : limited to rigidly structured shapes. System 2 : recognition of a class of varying shapes. Need to:

System 1 (last lecture) : limited to rigidly structured shapes. System 2 : recognition of a class of varying shapes. Need to: System 2 : Modelling & Recognising Modelling and Recognising Classes of Classes of Shapes Shape : PDM & PCA All the same shape? System 1 (last lecture) : limited to rigidly structured shapes System 2 :

More information

Linear Dimensionality Reduction

Linear Dimensionality Reduction Outline Hong Chang Institute of Computing Technology, Chinese Academy of Sciences Machine Learning Methods (Fall 2012) Outline Outline I 1 Introduction 2 Principal Component Analysis 3 Factor Analysis

More information

Group Sparse Non-negative Matrix Factorization for Multi-Manifold Learning

Group Sparse Non-negative Matrix Factorization for Multi-Manifold Learning LIU, LU, GU: GROUP SPARSE NMF FOR MULTI-MANIFOLD LEARNING 1 Group Sparse Non-negative Matrix Factorization for Multi-Manifold Learning Xiangyang Liu 1,2 liuxy@sjtu.edu.cn Hongtao Lu 1 htlu@sjtu.edu.cn

More information

Introduction to Machine Learning

Introduction to Machine Learning 10-701 Introduction to Machine Learning PCA Slides based on 18-661 Fall 2018 PCA Raw data can be Complex, High-dimensional To understand a phenomenon we measure various related quantities If we knew what

More information

Eigenfaces and Fisherfaces

Eigenfaces and Fisherfaces Eigenfaces and Fisherfaces Dimension Reduction and Component Analysis Jason Corso University of Michigan EECS 598 Fall 2014 Foundations of Computer Vision JJ Corso (University of Michigan) Eigenfaces and

More information

Orthogonal Laplacianfaces for Face Recognition

Orthogonal Laplacianfaces for Face Recognition 3608 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 15, NO. 11, NOVEMBER 2006 [29] G. Deng and J. C. Pinoli, Differentiation-based edge detection using the logarithmic image processing model, J. Math. Imag.

More information

COVARIANCE REGULARIZATION FOR SUPERVISED LEARNING IN HIGH DIMENSIONS

COVARIANCE REGULARIZATION FOR SUPERVISED LEARNING IN HIGH DIMENSIONS COVARIANCE REGULARIZATION FOR SUPERVISED LEARNING IN HIGH DIMENSIONS DANIEL L. ELLIOTT CHARLES W. ANDERSON Department of Computer Science Colorado State University Fort Collins, Colorado, USA MICHAEL KIRBY

More information

Fisher Linear Discriminant Analysis

Fisher Linear Discriminant Analysis Fisher Linear Discriminant Analysis Cheng Li, Bingyu Wang August 31, 2014 1 What s LDA Fisher Linear Discriminant Analysis (also called Linear Discriminant Analysis(LDA)) are methods used in statistics,

More information

SPECTRAL CLUSTERING AND KERNEL PRINCIPAL COMPONENT ANALYSIS ARE PURSUING GOOD PROJECTIONS

SPECTRAL CLUSTERING AND KERNEL PRINCIPAL COMPONENT ANALYSIS ARE PURSUING GOOD PROJECTIONS SPECTRAL CLUSTERING AND KERNEL PRINCIPAL COMPONENT ANALYSIS ARE PURSUING GOOD PROJECTIONS VIKAS CHANDRAKANT RAYKAR DECEMBER 5, 24 Abstract. We interpret spectral clustering algorithms in the light of unsupervised

More information

Machine Learning. Dimensionality reduction. Hamid Beigy. Sharif University of Technology. Fall 1395

Machine Learning. Dimensionality reduction. Hamid Beigy. Sharif University of Technology. Fall 1395 Machine Learning Dimensionality reduction Hamid Beigy Sharif University of Technology Fall 1395 Hamid Beigy (Sharif University of Technology) Machine Learning Fall 1395 1 / 47 Table of contents 1 Introduction

More information

Principal Component Analysis and Singular Value Decomposition. Volker Tresp, Clemens Otte Summer 2014

Principal Component Analysis and Singular Value Decomposition. Volker Tresp, Clemens Otte Summer 2014 Principal Component Analysis and Singular Value Decomposition Volker Tresp, Clemens Otte Summer 2014 1 Motivation So far we always argued for a high-dimensional feature space Still, in some cases it makes

More information