An Effective Approach for Facial Expression Recognition with Local Binary Pattern and Support Vector Machine
Cao Thi Nhan 1, Ton That Hoa An 2, Hyung Il Choi 3
*1 School of Media, Soongsil University, ctnhen@yahoo.com
2 School of Media, Soongsil University, an_tth@yahoo.com
3 School of Media, Soongsil University, hic@ssu.ac.kr

Abstract

Many methods have been developed that extract Local Binary Pattern features and combine them with different classification techniques to achieve increasingly better facial expression recognition. In this work, we propose a novel method for recognizing facial expressions based on Local Binary Pattern features and a Support Vector Machine with two effective improvements: the first is the preprocessing step, and the second is the method of dividing face images into non-overlapping square regions for extracting Local Binary Pattern features. The method was tested on three typical kinds of database: small (213 images), medium (1386 images) and large (5130 images). Experimental results show the effectiveness of our method in obtaining a remarkably better recognition rate than other methods. Besides, the proposed approach based on Local Binary Pattern features and a Support Vector Machine is simple, fast and well suited to real-time applications.

Keywords: Facial expression recognition, local binary pattern, support vector machine.

1. Introduction

In recent years, with the rapid development of intelligent communication systems and data-driven animation, Facial Expression Recognition (FER) has become an interesting problem, but many challenges remain. A facial expression recognition system has three essential steps: face acquisition, facial data extraction and representation, and facial expression recognition [1]. Face acquisition is a preprocessing stage that detects the face region of the input images, associated with certain normalization techniques to obtain standardized images before extracting facial expression features.
A widely used method is the Adaboost algorithm proposed by Viola and Jones [2], a fast face detection method based on a cascade of classifiers using Haar-like features. Experiments confirmed that this method detects the face region with high accuracy and fast computation. However, face images obtained from the Adaboost algorithm still need further processing to improve the accuracy of facial expression recognition. To remove undesirable effects or redundant image regions, the face image may be geometrically standardized or cropped prior to extracting features. This normalization is usually performed based on references provided by the eyes or nostrils as in [3][4]. In this work, we propose a technique for cropping face images that retains essential information and eliminates unnecessary regions or pixels carrying little information, as described in Section 3. Facial data extraction and representation, or face feature extraction, aims to find the most appropriate representation of face images for recognition, reducing the dimensionality of the input data while increasing classification accuracy. Generally, there are two approaches to facial representation: geometric features and appearance features [1]. Geometric features deal with the shape and locations of facial components (including mouth, eyes, brows, and nose), which are extracted to represent the face geometry [5]. Geometric feature-based representations commonly require accurate and reliable facial feature detection and tracking, which is difficult to achieve in many situations. Appearance features, in contrast, capture the appearance changes (skin texture) of the face (including wrinkles, bulges and furrows), and are extracted by applying image filters to either the whole face or specific facial regions [6].
Appearance features suffer less from issues of initialization and tracking errors, and can encode changes in skin texture that are critical for facial expression modeling.

Research Notes in Information Science (RNIS), Volume 14, June 2013

Many methods have been developed to find facial expression features that are both succinct and expressive. Among them, Local Binary Pattern (LBP), originally proposed for texture analysis, has emerged as an efficient non-parametric method summarizing the local structure of an image, and has been introduced for facial representation in recent years [7-11]. The most important properties of LBP features are their tolerance to illumination changes and their computational simplicity. To further improve facial expression recognition accuracy, several techniques have been combined with LBP features as in [12-14]. In this paper, we propose a new method of dividing face images into non-overlapping regions for extracting LBP features, described in Section 4. For facial expression recognition itself, many methods have been applied, such as Neural Networks [15], the nearest neighbor classifier [16], the K-nearest neighbor classifier [17], Support Vector Machines (SVM) [18][12][19][14], and Hidden Markov Models (HMM) [20][21]. In this paper, the support vector machine technique is used to classify LBP features combined with the two improvements, as presented in Section 5. The paper is organized as follows: Section 2 describes previous related works. Section 3 deals with preprocessing of face images. Section 4 introduces the extraction of Local Binary Pattern features. Section 5 presents the Support Vector Machine technique for facial expression recognition. Section 6 shows the experimental results. Finally, Section 7 gives the conclusions.

2. Previous Related Works

Facial expression recognition has attracted much attention from the behavioral research community in recent decades owing to its applications in intelligent interface systems and data-driven animation. Many achievements have been reported in published scientific works, and thorough surveys have been conducted as in [22-24].
In this work, we studied a large number of previous related works and carried out a large amount of experiments to develop techniques that effectively improve the facial expression recognition rate.

2.1. Extracting Facial Expression Features

Extracting facial expression features is a technique to derive a set of features from original face images for effectively recognizing facial expressions. The criterion for an optimal feature set is to minimize variations within a class while maximizing variations between classes. Since facial expression feature extraction plays an important role in facial expression recognition, many methods have been proposed. The optical flow analysis method [25-28] estimated the displacements of feature points, but it is easily disturbed by non-rigid motion and varying lighting, and is sensitive to errors in image registration and motion discontinuities [29]. AU (Action Unit) detection by classifying features calculated from tracked facial fiducial points was presented by Valstar et al. [30]. This method obtained similar or higher recognition rates than those reported in [31][32]. Although the authors avoided manual initialization of the facial points in the first frame of an input face video by a fully automatic AU detection system, which localizes facial points in the first frame and recognizes AU temporal segments using a subset of the most informative spatiotemporal features selected by AdaBoost, geometric feature-based representations still commonly require accurate and reliable facial feature detection and tracking, which is difficult to accommodate in many situations [24]. Another method, facial geometry analysis, has been widely exploited for facial representation [33][34][31][35-37]. In this method, the shapes and locations of facial components are extracted to represent the face geometry.
This method has some limitations, and its facial expression recognition rate is lower than that of other methods such as Gabor-wavelets [33]. Another way to represent faces is to model the appearance changes of faces by holistic spatial analysis, involving Principal Component Analysis (PCA) [38], Linear Discriminant Analysis (LDA) [39], Gabor wavelet analysis [16] and Independent Component Analysis (ICA) [40]. These methods have been used to extract facial appearance changes on either specific face regions or the whole face. In [41], different techniques were explored to represent face images for facial action recognition, including PCA, ICA, Local Feature Analysis (LFA), LDA and local schemes such as Gabor-wavelet
representation and local principal components (LPC). The best performance was obtained using the Gabor-wavelet representation and ICA. Owing to their superior performance, Gabor-wavelet representations have been widely adopted in face image analysis [33][16][3][42], but computing them requires a lot of time and memory. In recent years, an effective face descriptor called Local Binary Patterns (LBP) has attracted widespread interest for facial expression representation [9][10][43][11][44][12][45]. Based on its computational simplicity and high recognition performance, a method of appropriate image area division combined with LBP is proposed for extracting facial expression features in this paper.

2.2. Recognizing Facial Expressions

Facial expression recognition is performed by a classifier combined with a decision procedure. A wide range of classifiers have been applied to the automatic expression recognition problem as in [22]. Besides, many other classification methods or improvements of existing methods have been proposed. For example, Artificial Neural Networks (ANN) are used in [46][47][33][15][3], Bayesian Networks (BN) in [48][49][17], Support Vector Machines (SVM) in [11][30][42][24][50][51], and Rule-Based Classifiers in [34][35][37]. More recently, discriminant kernel locally linear embedding (DKLLE) was used in [45]. To capture the temporal behavior of facial expressions, techniques such as Hidden Markov Models (HMM) [48][27][52][53][20] and Dynamic Bayesian Networks (DBNs) [28][36][29][54] have been proposed. There are also several comparisons between different techniques. Cohen et al. compared different Bayes classifiers [48], among which Gaussian Tree-Augmented-Naive (TAN) Bayes classifiers performed best. Bartlett et al.
[42] performed a systematic comparison of different techniques including AdaBoost, SVM and LDA for facial expression recognition; the best results were obtained by selecting a subset of Gabor filters using AdaBoost and then training an SVM on the outputs of the selected filters [24].

3. Preprocessing Face Images

The image preprocessing stage detects the face region in the input images or sequences and normalizes the face images. Human face images from a camera or database commonly contain much redundant information, e.g. background or body regions. Figure 1 shows human face images from the JAFFE database.

(a) Disgust (b) Fear (c) Sadness (d) Anger (e) Joy (f) Neutral (g) Surprise
Figure 1. Face images from the JAFFE database

To eliminate redundant regions, the real-time face detection algorithm (Adaboost) developed by Viola and Jones is employed [2]. Figure 2 shows the Adaboost algorithm applied to a face image, and Figure 3 shows the human face images obtained from Figure 1 by the algorithm.
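The Viola-Jones detector mentioned above evaluates Haar-like features, which are differences of rectangle sums that an integral image makes computable in constant time per query. The sketch below is illustrative, not from the paper; the function names and the two-rectangle feature layout are our own choices.

```python
import numpy as np

def integral_image(img):
    """Cumulative row and column sums; ii[i, j] = sum of img[:i+1, :j+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w], read from the integral image in O(1)."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect_vertical(ii, top, left, h, w):
    """A two-rectangle Haar-like feature: left half-sum minus right half-sum."""
    half = w // 2
    return rect_sum(ii, top, left, h, half) - rect_sum(ii, top, left + half, h, half)
```

The cascade then thresholds many such features at increasing cost, rejecting most non-face windows early; that stage structure is what makes the detector fast.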
Figure 2. Face region cropped by the Adaboost algorithm (red square)

However, to improve recognition accuracy and processing speed, face images obtained from the Adaboost algorithm can be further processed with different resolution or cropping methods [3]. Here, we propose a cropping technique as in Figure 4. First, determine the size of the square S used for cropping the human face in the image. The side w2 of square S is set equal to the width of the human face. The size of square S depends on each database and even on each image. However, we tested all three databases and observed that the width of the human face accounts for 75% to 80% of the width of the face images obtained from the Adaboost algorithm. These percentages are calculated on each database to select the side w2 of the square in pixels. Values of w2 are treated as experimental parameters.

(a) Disgust (b) Fear (c) Sadness (d) Anger (e) Joy (f) Neutral (g) Surprise
Figure 3. Scaled face images obtained from the Adaboost algorithm

Figure 4. Face region cropped by the proposed method: square S of side w2 is placed at P(x, y), with x = (w1 - w2)/2 and y = d/6, on the face image of width w1 and height d obtained from the Adaboost algorithm

Then determine the coordinate P(x, y) from the upper-left corner of the image at which square S is applied to crop the human face. Let O(0, 0) be the coordinate of the upper-left corner of the human face image obtained from the Adaboost
algorithm, let d be the height of the face image, w1 the width of the face image and w2 the side of the square. Then the coordinates are y = d/6 and x = (w1 - w2)/2. The expression y = d/6 is based on the neutral expression face image. Normally, the forehead occupies about one-fourth of the human face height; thus the forehead takes up a considerable part of the face region but does not contain much essential facial expression information. For this reason, we trim the upper two-thirds (2/3) of the forehead region and retain the lower one-third (1/3) above the eyebrows. Finally, the human face image obtained from the Adaboost algorithm is cropped by square S at coordinate P(x, y). Figure 5 shows the proposed algorithm applied to a face image, and Figure 6 shows the human face images cropped by the proposed method.

Figure 5. Face region cropped by the Adaboost algorithm (red/large square) and by the proposed method (small square)

(a) Disgust (b) Fear (c) Sadness (d) Anger (e) Joy (f) Neutral (g) Surprise
Figure 6. Scaled face images after cropping

The objective of our cropping method is to eliminate, as much as possible, face image regions that contain little or no facial expression information, such as the upper brows, ears, hair or image background. In other words, the cropped face image region contains the maximum facial expression information on a minimum area. This reduces processing time in the feature extraction and facial expression recognition steps and, most importantly, improves the facial expression recognition rate.

4. Extracting Local Binary Pattern Features

4.1. Local Binary Pattern

The LBP (Local Binary Pattern) operator was first introduced as a complementary measure of local image contrast [8]. The original operator works on the circular eight-neighborhood of a pixel, using the value of the center pixel as a threshold.
If the value of a neighboring pixel is greater than or equal to the value of the center pixel, it is labeled 1; otherwise it is labeled 0. The result is an 8-digit binary number (the Local Binary Pattern, or LBP code, of the pixel), which is then converted to a decimal value for histogram calculation. Figure 7 shows an example of computing the basic LBP operator. Based on this operator, each pixel of an image is labeled with an LBP code from 0 to 255 according to its 3x3 neighborhood.
Figure 7. The basic LBP operator (the thresholded 3x3 neighborhood yields the binary number 10011001, i.e. the decimal LBP code 153)

The LBP codes codify local primitives, including different types of curved edges, spots, flat areas, etc., as in Figure 8, so each LBP code can be regarded as a micro-texton [10]. A 256-bin histogram of the LBP labels computed over a region is then used as a texture descriptor of that region.

Figure 8. Examples of texture primitives which can be detected by LBP (white circles represent ones and black circles represent zeros)

The basic LBP operator, restricted to its small 3x3 neighborhood, cannot capture dominant features with large-scale structures. The operator was therefore extended to neighborhoods of different sizes [8]. Using circular neighborhoods and bilinearly interpolating pixel values allows any radius and any number of sampling points in the neighborhood. Figure 9 shows examples of the extended LBP operators. The notation (P, R) denotes a neighborhood of P equally spaced sampling points on a circle of radius R, forming a circularly symmetric neighbor set.

Figure 9. Examples of the extended LBP

The parameters P and R influence the performance of the LBP operator. An operator with a large neighbor set or large radius can capture dominant features with large-scale structures, but it makes the histogram long, so more time is consumed calculating the distance matrix. On the other hand, a small neighbor set or small radius makes the feature vector shorter but can lead to loss of information. In facial expression recognition, face images are usually not large, and facial expression changes occur in local regions. For these reasons, the basic LBP operator is used for extracting facial expression features. Histograms of 256-bin LBP features are called the original (non-uniform) pattern operator; in many cases they consume much processing time. A further extension of the LBP operator is therefore to use uniform LBP patterns, as introduced in [8].
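The basic operator of Figure 7 can be sketched as follows. The clockwise bit order starting at the top-left neighbor is an assumed convention; the paper does not fix the bit order, and any fixed order works as long as it is used consistently.

```python
import numpy as np

def lbp_code(patch):
    """Basic LBP code of the center pixel of a 3x3 patch.
    Neighbors >= center become 1, others 0; bits are read clockwise
    from the top-left neighbor, most significant bit first."""
    center = patch[1, 1]
    # clockwise order: TL, T, TR, R, BR, B, BL, L
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:
            code |= 1 << (7 - bit)   # first neighbor = most significant bit
    return code
```

Sliding this over every interior pixel labels the whole image with codes in 0..255, which the histogram step below then counts.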
A Local Binary Pattern is called uniform if it contains at most two bitwise transitions from 0 to 1 or vice versa when the binary string is considered circular. For example, 00000000 (no transitions), 01110000 (two transitions) and 11001111 (two transitions) are uniform patterns. It is observed that uniform patterns account for 87.2% of all patterns in the (8, 1.0) neighborhood and for 66.9% in the (16, 2.0) neighborhood in texture images [8]. The notation LBP_(P,R)^u2 denotes a uniform LBP operator: the subscript describes an operator using a (P, R) neighborhood, and the superscript u2 indicates using only uniform patterns and labeling all remaining patterns with a single label. A histogram of a labeled image f_k(x, y) can be defined as follows:

H_i = sum over (x, y) of I{f_k(x, y) = i},   i = 0, ..., n-1,   (1)

where n is the number of different labels produced by the LBP operator and

I{A} = 1 if A is true, 0 if A is false.   (2)
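The uniformity test behind the u2 operator can be sketched as follows. For P = 8 it yields 58 uniform codes (this count follows from the definition: two constant patterns plus 8 x 7 single-run patterns) plus one shared label for all non-uniform codes, i.e. 59 histogram labels.

```python
def is_uniform(code, bits=8):
    """True if the circular binary string has at most two 0/1 transitions."""
    transitions = 0
    for i in range(bits):
        a = (code >> i) & 1
        b = (code >> ((i + 1) % bits)) & 1
        if a != b:
            transitions += 1
    return transitions <= 2

# LBP u2 labeling for P = 8: each uniform code gets its own label,
# and every non-uniform code shares one final label.
uniform_codes = [c for c in range(256) if is_uniform(c)]
label_of = {c: i for i, c in enumerate(uniform_codes)}
NONUNIFORM_LABEL = len(uniform_codes)
```

Mapping raw codes through `label_of` (falling back to `NONUNIFORM_LABEL`) shrinks the per-region histogram from 256 bins to 59.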
The histogram in Eq. (1) contains information about the distribution of local micro-patterns, e.g. spots, edges, corners and flat areas, over the whole image. To represent the face efficiently, the extracted features should retain spatial information. For this reason, the face image can be divided into m small regions R_0, R_1, ..., R_(m-1) as shown in Figure 9, and a spatially enhanced histogram is defined as

H_(i,j) = sum over (x, y) of I{f_k(x, y) = i} I{(x, y) in R_j},   i = 0, ..., n-1, j = 0, ..., m-1.   (3)

4.2. Extracting LBP Features for Face Expression Recognition

Several methods have been proposed for resizing and dividing face images, for example 110x150 pixels with 6x7 regions as shown in Figure 10.a [9][11], 256x256 pixels with 3x5 regions as shown in Figure 10.b [14], or 64x64 pixels with 8 regions as shown in Figure 10.c [12].

Figure 10. Previously proposed methods for resolution and region division

Here, we propose the following face image dividing method for extracting LBP features: after the face image is cropped, resize it to an appropriate square resolution whose side is a multiple of 8, then divide the resized face image into non-overlapping regions of 8x8 pixels for feature extraction. Figure 11 shows an example of a small face image of 48x48 pixels divided into 6x6 regions, each region being 8x8 pixels.

Figure 11. A face image resized to 48x48 pixels and divided into 6x6 regions of 8x8 pixels

For applications with high-resolution face images, the cropped face images can be resized and divided into regions whose sides are multiples of 8 (i.e. 8x8, 16x16, 24x24 pixels). Our experiments presented in Table 1 show that the facial expression recognition rates remain comparably high. However, this problem will be fully presented in our other work. Here, we apply the region division technique to extract LBP features with regions of 8x8 pixels; its effect is confirmed in the experiment section.

Table 1. Experimental results of 8-multiple region divisions at different resolution levels on the JAFFE database

Region size   | 96x96: Regions | 96x96: Average % | 120x120: Regions | 120x120: Average %
8x8 pixels    | 144 (12x12)    |                  | 225 (15x15)      |
16x16 pixels  | 36 (6x6)       |                  | N/A              | N/A
24x24 pixels  | 16 (4x4)       |                  | 25 (5x5)         |

The LBP histogram (LBP feature) of each region is then calculated as in Figure 12. The LBP features extracted from the regions are concatenated, left to right and top to bottom, into a single feature vector for the face image.

Figure 12. Calculating the LBP histogram of each region and concatenating them into a single feature vector
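The region division and concatenation of Figure 12 can be sketched as follows, assuming each pixel already carries a u2 label and 59 labels in total; the label count is a parameter, not a fixed requirement.

```python
import numpy as np

def region_histograms(label_image, region=8, n_labels=59):
    """Divide an image of per-pixel LBP labels into non-overlapping
    region x region blocks and concatenate one histogram per block,
    left to right and top to bottom (the spatially enhanced histogram
    of Eq. (3))."""
    h, w = label_image.shape
    assert h % region == 0 and w % region == 0, "side must be a multiple of the region size"
    features = []
    for top in range(0, h, region):
        for left in range(0, w, region):
            block = label_image[top:top + region, left:left + region]
            hist = np.bincount(block.ravel(), minlength=n_labels)
            features.append(hist)
    return np.concatenate(features)
```

For a 48x48 label image with 8x8 regions this produces 36 histograms of 59 bins each, i.e. a 2124-dimensional feature vector.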
The concatenated histogram gives an effective description of the face at three different levels of locality: the labels of the histogram contain information about the patterns at pixel level, the labels summed over a small region produce information at regional level, and the regional histograms concatenated together build a global description of the face [11]. The algorithm for extracting LBP features for facial expression recognition can be summarized as follows:

Face image registration for extracting LBP features:
- Apply the Adaboost algorithm to the face image
- Crop the face image as in Section 3
- Resize the face image to an appropriate resolution (a square whose side is a multiple of 8)
- Divide the face image into square regions of 8x8 pixels
- Calculate the LBP code of each pixel in each region
- Build the uniform LBP histogram of each region
- Concatenate the histograms of the regions, left to right and top to bottom, to obtain the LBP features (feature vector) of the face image

Figure 13. Experimental process

5. Support Vector Machine Technique for Facial Expression Recognition

The Support Vector Machine (SVM), proposed by Vladimir N. Vapnik [55], is a powerful, effective and popular machine learning technique for data classification. The method is based on statistical learning theory, with a solid mathematical foundation ensuring that the results are optimal. SVM performs an implicit mapping of the data into a higher (perhaps infinite) dimensional feature space and then constructs a separating hyperplane with maximal margin in this higher-dimensional space [56]. Many applications have confirmed that SVM obtains high accuracy in classifying facial expressions [51]. Given a training set of labeled examples {(x_i, y_i), i = 1, ..., l} where x_i in R^n and y_i in {-1, 1}, a new test example x is classified by the following function:

f(x) = sign( sum over i of u_i y_i K(x_i, x) + b ),   (4)

where u_i are the Lagrange multipliers of a dual optimization problem that describes the separating hyperplane, K(.,.)
is a kernel function, and b is the threshold parameter of the hyperplane. The training samples x_i with u_i > 0 are the support vectors. K(x_i, x_j) is a kernel based on a non-linear mapping phi that maps the input data into a higher-dimensional space, of the form phi(x_i) . phi(x_j). Frequently used kernel functions in SVM are the linear, polynomial and Radial Basis Function (RBF) kernels. SVM makes binary decisions, so multi-class classification is accomplished here with the one-against-rest technique, which trains binary classifiers to discriminate one expression from all the others and outputs the class with the largest binary classification output [24]. In this work, we used the SVM functions of OpenCV with Visual Studio 2008 for our
experiments and used the Radial Basis Function kernel. To choose optimal parameters, we implemented a grid-search approach as in [57].

Table 2. Confusion matrix of the JAFFE database (rows: true expression; columns: predicted expression, in %, for Anger, Disgust, Fear, Joy, Neutral, Sadness, Surprise; last row: average)

Table 3. Confusion matrix of the CK database (rows: true expression; columns: predicted expression, in %, for Anger, Disgust, Fear, Joy, Neutral, Sadness, Surprise; last row: average)

Table 4. Confusion matrix of the MUG database (rows: true expression; columns: predicted expression, in %, for Anger, Disgust, Fear, Joy, Neutral, Sadness, Surprise; last row: average)

6. Experiments

We used three typical databases for the experiments of our method: the Japanese Female Facial Expression (JAFFE) database with 213 images, the Cohn-Kanade (CK) database with 1386 images, and the Multimedia Understanding Group (MUG) database with 5130 images. The resolution of the images is 64x64 pixels, and each region is 8x8 pixels. We used 3-fold cross-validation for the experiments, implemented in C++. The experimental process on the databases is shown in Figure 13, and the result for each database is given below.

6.1 Experiments on the JAFFE database

The JAFFE database [57] includes 213 gray images of ten Japanese female subjects. Each person shows seven different facial expressions: anger, disgust, fear, joy, neutral, sadness and surprise. Most expressions of each subject have 3 different images, but there are three cases with two images and six cases with four images. The original images have a resolution of 256x256 pixels. In our experiments, we selected all 213 images as samples. Experimental results are shown in Table 2.

6.2 Experiments on the Cohn-Kanade database

The Cohn-Kanade database [58][59] is one of the most comprehensive databases in the current facial
expression research community. It consists of 100 university students aged from 18 to 30 years, of whom 65% were female, 15% African-American, and 3% Asian or Latino. Subjects were instructed to perform a series of 23 facial displays, six of which were based on descriptions of basic emotions (i.e., Anger, Disgust, Fear, Joy, Sadness and Surprise). Image sequences from neutral to target display were digitized into 640x490 pixel arrays with 8-bit precision for grayscale values.

Table 5. Comparison of the facial expression recognition accuracy of existing methods and the proposed method on the JAFFE database (classifying method; kind of feature; reference)

- Topographic Mask (TM); Expressive Texture (ET) or Active Texture (AT); Xiaozhou Wei et al., 2008 [61]
- Hidden Markov Model (HMM); Gabor wavelet; L. He et al., 2009 [53]
- Support Vector Machine (SVM); ALBP, Tsallis, and NLDAI; Shu Liao et al., 2006 [12]
- Support Vector Machine (SVM); 2D-LDA; Frank Y. Shih et al., 2008 [62]
- Support Vector Machine (SVM); 2DPCA and LBP; Daw-Tung Lin et al., 2009 [50]
- Nearest neighbor (1-NN); Discriminant Kernel Locally Linear Embedding (DKLLE); X. Zhao et al., 2012 [45]
- Proposed method (SVM); LBP; Cao Thi Nhan et al., 2013

Table 6. Comparison of the facial expression recognition accuracy of existing methods and the proposed method on the CK database (classifying method; kind of feature; reference)

- Topographic Mask (TM); Expressive Texture (ET) or Active Texture (AT); Xiaozhou Wei et al., 2008 [61]
- Hidden Markov Model (HMM); PCA, orientation histograms, optical flow; Miriam Schmidt et al., 2010 [20]
- Support Vector Machine (SVM); LBP-Histogram bins; Caifeng Shan et al., 2008 [13]
- Support Vector Machine (SVM); 2DPCA and LBP; Daw-Tung Lin et al., 2009 [50]
- Support Vector Machine (SVM); Active shape model, Gabor filter and Laplacian of Gaussian; Chen-Chiung Hsieh et al., 2011 [51]
- Nearest neighbor (1-NN); Discriminant Kernel Locally Linear Embedding (DKLLE); X.
Zhao et al., 2012 [45]
- Proposed method (SVM); LBP; Cao Thi Nhan et al., 2013

Table 7. Comparison of the facial expression recognition accuracy of existing methods and the proposed method on the MUG database (classifying method; kind of feature; reference; 7 facial expressions in both cases)

- Nearest neighbor (1-NN); Local Fisher Discriminant Analysis (LFDA); Yogachandran Rahulamathavan et al., Jan. 2013 [63]
- Proposed method (SVM); LBP; Cao Thi Nhan et al., 2013
For our experiments, we selected 50 subjects (38 females and 12 males) from the database. Each subject expresses the basic emotions in image sequences. For each sequence, two neutral face images at the beginning of the sequence and six expressive images were used for prototypic expression recognition, resulting in 1,386 images (150 face images of anger, 204 of disgust, 114 of fear, 258 of joy, 120 of sadness, 246 of surprise, and 294 neutral). Experimental results are shown in Table 3.

6.3 Experiments on the MUG database

The MUG database [60] was created by the Multimedia Understanding Group to overcome some limitations of similar pre-existing databases, offering high resolution, uniform lighting, many subjects and many takes per subject, with the aim of supporting research in the field of expression recognition. Images of 52 subjects are available to authorized internet users. This part of the database includes 52 Caucasian subjects, 22 females and 30 males (with or without beards), in the 20 to 35 age range. Each image is saved in JPEG format at 896x896 pixels, with a file size ranging from 240 to 340 KB. In this work, we selected 50 subjects (22 females and 28 males). Each expression includes 15 face images with expressions ranging from weak to strong. Because some subjects do not have all seven facial expressions, we selected a total of 5130 human face images: 750 images of anger, 735 of disgust, 705 of fear, 735 of joy, 750 of neutral, 705 of sadness and 750 of surprise. Experimental results are shown in Table 4. It is almost impossible to cover all published works; however, to sum up, we present several typical papers that represent state-of-the-art classification methods, with which we overview and compare the accuracy of existing facial emotion recognition methods.
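Before turning to the comparisons, the classification rule of Eq. (4) combined with the one-against-rest scheme of Section 5 can be sketched as follows. The support vectors, multipliers and gamma below are toy values for illustration, not trained ones, and the plain-Python evaluation stands in for the OpenCV SVM used in the experiments.

```python
import numpy as np

def rbf_kernel(xi, x, gamma=0.1):
    """RBF kernel K(xi, x) = exp(-gamma * ||xi - x||^2)."""
    d = xi - x
    return np.exp(-gamma * d.dot(d))

def svm_value(x, support_vectors, labels, alphas, b, gamma=0.1):
    """The sum inside Eq. (4); sign(...) of this value is the binary decision."""
    return sum(a * y * rbf_kernel(sv, x, gamma)
               for sv, y, a in zip(support_vectors, labels, alphas)) + b

def one_vs_rest_predict(x, binary_machines):
    """One-against-rest: evaluate every per-expression binary SVM and
    return the expression whose decision value is largest."""
    scores = {cls: fn(x) for cls, fn in binary_machines.items()}
    return max(scores, key=scores.get)
```

With seven expressions this means seven binary machines, each trained to separate one expression from the other six, and the predicted label is the arg-max of their decision values.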
Table 5 shows the proposed method in comparison with other methods on the JAFFE database. Table 6 presents the facial expression recognition results of methods on the Cohn-Kanade database. The MUG database has been released recently, so we have not yet found many papers using it for comparison; see Table 7.

7. Conclusion

We presented a novel approach for facial expression recognition based on LBP and SVM. The proposed method was tested on three typical kinds of database: small (JAFFE), medium (CK) and large (MUG). Experimental results show that our method obtains a remarkably more accurate recognition rate than other methods, even with small-scale images. In addition, the proposed method based on LBP and SVM is simple and fast. These advantages are of great significance for applications in intelligent communication systems, especially real-time applications.

8. Acknowledgment

This research was supported by the Seoul R&BD program (SS110013). We would like to thank Professor Michael J. Lyons for the use of the JAFFE database, Professor Jeffery Cohn for authorizing us to use the Cohn-Kanade database and Professor Anastasios Delopoulos for authorizing us to use the MUG database in this work.

9. References

[1] Y. L. Tian, T. Kanade, J. F. Cohn, Handbook of face recognition, chapter 11, Facial expression analysis, Springer, Heidelberg.
[2] P. Viola, M. Jones, Robust real-time face detection, International Journal of Computer Vision, 57(2).
[3] Y. Tian, Evaluation of face resolution for expression analysis, in Computer Vision and Pattern Recognition Workshop for Face Processing in Video, IEEE.
[4] M. R. Everingham, A. Zisserman, Regression and classification approaches to eye localization in face images, in IEEE International Conference on Automatic Face & Gesture Recognition (FG).
[5] M. Valstar, M. Pantic, Fully automatic facial action unit detection and temporal analysis, in IEEE Conference on Computer Vision and Pattern Recognition Workshop, page 149.
[6] G. Littlewort, M. Bartlett, I. Fasel, J. Susskind, J. Movellan, Dynamics of facial expression extracted automatically from video, Image and Vision Computing, 24(6), June.
[7] T. Ojala, M. Pietikainen, D. Harwood, A comparative study of texture measures with classification based on featured distribution, Pattern Recognition, 29(1).
[8] T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7).
[9] T. Ahonen, A. Hadid, M. Pietikainen, Face recognition with local binary patterns, in European Conference on Computer Vision (ECCV), LNCS 3021.
[10] A. Hadid, M. Pietikainen, T. Ahonen, A discriminative feature space for detecting and recognizing faces, in IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[11] C. Shan, S. Gong, P. W. McOwan, Robust facial expression recognition using local binary patterns, in IEEE International Conference on Image Processing (ICIP), Genoa, vol. 2.
[12] S. Liao, W. Fan, Albert C. S. Chung, D.-Y. Yeung, Facial expression recognition using advanced local binary patterns, tsallis entropies and global appearance features, in IEEE International Conference on Image Processing (ICIP).
[13] C. Shan, T. Gritti, Learning discriminative LBP-histogram bins for facial expression recognition, BMVC.
[14] Z. Ying, L. Cai, J. Gan, S.
He, Facial expression recognition with Local binary pattern and laplacian eigenmaps, The International Conference on Intelligent Computing (ICIC), LNCS 5754, pp , Springer Berlin Heidelberg, [15] Christine L. Lisetti, David E. Rumelhart, Facial Expression Recognition Using a Neural Network, Proceedings of the Eleventh International FLAIRS Conference, AAAI, [16] M. J. Lyons, J. Budynek, S. Akamatsu, Automatic classification of single facial images, IEEE Trans Pattern Analysis Machine Intelligent, 21 (12), , doi: / [17] N. Sebe, M. S. Lew, Y. Sun, I. Cohen, T. Gevers, T. S. Huang, Authentic facial expression analysis, Image Vision Computing 25 Vol. 12, , doi: /j.imavis [18] I. Kotsia, I Pitas, Facial expression recognition in image sequences using geometric deformation features and support vector machines. IEEE Trans on Image Process. 16(1), , [19] M. S. Bartlett, G. Littlewort, I. Fasel, J. R. Movellan, Real time face detection and facial expression recognition: development and application to human computer interaction, in: CVPR Workshop on CVPR for HCI, [20] M. Schmidt, M. Schels, F. Schwenker, A Hidden Markov Model Based Approach for Facial Expression Recognition in Image Sequences, in Proceeding ANNPR 2010, pp , Springer-Verlag Berlin Heidelberg [21] H. Tang, M. Hasegawa-Johnson, T. Huang, Non-Frontal View Facial Expression Recognition Based on Ergodic Hidden Markov Model Supervectors, ICME 2010, IEEE. [22] M. Pantic, L. Rothkrantz, Automatic analysis of facial expressions: the state of art, IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 22, No. 12, pp , [23] B. Fasel, J. Luettin, Automatic facial expression analysis: a survey, Pattern Recognition 36, pp , [24] C. Shan, S. Gong, P. W. McOwan, Facial expression recognition based on Local Binary Patterns: A comprehensive study, Image and Vision Computing 27, pp , [25] Y. Yacoob, L. S. 
Davis, Recognizing human facial expression from long image sequences using optical flow, IEEE Transactions on Pattern Analysis and Machine Intelligence 18 (6), pp , [26] I. Essa, A. Pentland, Coding, analysis, interpretation, and recognition of facial expressions, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (7), pp ,
[27] M. Yeasin, B. Bullot, R. Sharma, From facial expression to level of interests: a spatio-temporal approach, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, 2004.
[28] J. Hoey, J. J. Little, Value directed learning of gestures and facial displays, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2.
[29] Y. Zhang, Q. Ji, Active and dynamic information fusion for facial expression understanding from image sequences, IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(5), 2005.
[30] M. Valstar, I. Patras, M. Pantic, Facial action unit detection using probabilistic actively learned support vector machines on tracked facial point data, in: IEEE Conference on Computer Vision and Pattern Recognition Workshop, vol. 3, 2005.
[31] Y. Tian, T. Kanade, J. Cohn, Recognizing action units for facial expression analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2).
[32] M. Bartlett, G. Littlewort, C. Lainscsek, I. Fasel, J. Movellan, Machine learning methods for fully automatic recognition of facial expressions and facial actions, in: IEEE International Conference on Systems, Man & Cybernetics, Netherlands.
[33] Z. Zhang, M. J. Lyons, M. Schuster, S. Akamatsu, Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron, in: IEEE International Conference on Automatic Face & Gesture Recognition (FG).
[34] M. Pantic, L. Rothkrantz, Expert system for automatic analysis of facial expression, Image and Vision Computing, 18(11).
[35] M. Pantic, L. J. M. Rothkrantz, Facial action recognition for facial expression analysis from static face images, IEEE Transactions on Systems, Man, and Cybernetics, 34(3).
[36] R. E. Kaliouby, P. Robinson, Real-time inference of complex mental states from facial expressions and head gestures, in: IEEE CVPR Workshop on Real-time Vision for Human Computer Interaction.
[37] M. Pantic, I. Patras, Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences, IEEE Transactions on Systems, Man, and Cybernetics, 36(2).
[38] M. Turk, A. P. Pentland, Face recognition using eigenfaces, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[39] P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman, Eigenfaces vs. fisherfaces: recognition using class specific linear projection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 1997.
[40] M. S. Bartlett, J. R. Movellan, T. J. Sejnowski, Face recognition by independent component analysis, IEEE Transactions on Neural Networks, 13(6).
[41] G. Donato, M. Bartlett, J. Hager, P. Ekman, T. Sejnowski, Classifying facial actions, IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(10).
[42] M. S. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, J. Movellan, Recognizing facial expression: machine learning and application to spontaneous behavior, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[43] X. Feng, A. Hadid, M. Pietikainen, A coarse-to-fine classification scheme for facial expression recognition, in: International Conference on Image Analysis and Recognition (ICIAR), LNCS 3212, Springer.
[44] C. Shan, S. Gong, P. W. McOwan, Appearance manifold of facial expression, in: N. Sebe, M. S. Lew, T. S. Huang (Eds.), IEEE ICCV Workshop on Human Computer Interaction, Lecture Notes in Computer Science, Springer, Beijing.
[45] X. Zhao, S. Zhang, Facial expression recognition using local binary patterns and discriminant kernel locally linear embedding, EURASIP Journal on Advances in Signal Processing.
[46] H. Kobayashi, F. Hara, Recognition of six basic facial expressions and their strength by neural network, in: Robot and Human Communication, Proceedings, IEEE.
[47] C. Padgett, G. Cottrell, Representing face images for emotion classification, in: Advances in Neural Information Processing Systems (NIPS).
[48] I. Cohen, N. Sebe, A. Garg, L. Chen, T. S. Huang, Facial expression recognition from video sequences: temporal and static modeling, Computer Vision and Image Understanding, 91.
[49] I. Cohen, N. Sebe, F. G. Cozman, M. C. Cirelo, T. S. Huang, Learning Bayesian network classifiers for facial expression recognition using both labeled and unlabeled data, in: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2003, DOI: /CVPR.
[50] D.-T. Lin, D.-C. Pan, Integrating a mixed-feature model and multiclass support vector machine for facial expression recognition, Integrated Computer-Aided Engineering, 16, IOS Press, DOI: /ICA.
[51] C.-C. Hsieh, M.-K. Jiang, A facial expression classification system based on active shape model and support vector machine, in: IEEE International Symposium on Computer Science and Society.
[52] B. W. Miners, O. A. Basir, Dynamic facial expression recognition using fuzzy hidden Markov models, in: IEEE International Conference on Systems, Man and Cybernetics.
[53] L. He, X. Wang, C. Yu, K. Wu, Facial expression recognition using embedded hidden Markov model, in: IEEE International Conference on Systems, Man and Cybernetics (SMC).
[54] X. Li, Q. Ji, Active affective state detection and user assistance with dynamic Bayesian networks, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 35(1), January.
[55] C. Cortes, V. Vapnik, Support-vector networks, Machine Learning, 20.
[56] V. N. Vapnik, An overview of statistical learning theory, IEEE Transactions on Neural Networks, 10.
[57] C.-W. Hsu, C.-C. Chang, C.-J. Lin, A practical guide to support vector classification, Technical Report, Taipei.
[58] T. Kanade, J. F. Cohn, Y.-L. Tian, Comprehensive database for facial expression analysis, in: International Conference on Face and Gesture Recognition, vol. 4, IEEE Computer Society, France.
[59] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, The Extended Cohn-Kanade Dataset (CK+): a complete dataset for action unit and emotion-specified expression, in: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW).
[60] N. Aifanti, C. Papachristou, A. Delopoulos, The MUG facial expression database, in: Proc. 11th Int. Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), Desenzano, Italy, April.
[61] X. Wei, J. Loi, L. Yin, Classifying facial expressions based on topo-feature representation.
[62] F. Y. Shih, C.-F. Chuang, P. S. P. Wang, Performance comparisons of facial expression recognition in JAFFE database, International Journal of Pattern Recognition and Artificial Intelligence, 22(3), World Scientific Publishing Company.
[63] Y. Rahulamathavan, R. C.-W. Phan, J. A. Chambers, D. J. Parish, Facial expression recognition in the encrypted domain based on local Fisher discriminant analysis, IEEE Transactions on Affective Computing, 4(1).
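The pipeline summarized in the conclusion, per-region LBP histograms fed to an SVM, can be sketched as follows. This is our own illustration, not the authors' code: the 7x7 grid, the per-region histogram normalisation and the choice of classifier are assumptions made only to keep the example concrete.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code
    comparing it with its ring of neighbours (border pixels are skipped)."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]  # centre pixels
    # neighbour offsets in clockwise order starting top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_region_histograms(gray, grid=(7, 7)):
    """Divide the LBP code image into a grid of non-overlapping regions
    and concatenate the normalised per-region 256-bin histograms."""
    codes = lbp_image(gray)
    h, w = codes.shape
    ry, rx = grid
    feats = []
    for i in range(ry):
        for j in range(rx):
            block = codes[i * h // ry:(i + 1) * h // ry,
                          j * w // rx:(j + 1) * w // rx]
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)

# Example: a random 64x64 "face" yields a 7*7*256 = 12544-dim descriptor.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64))
vec = lbp_region_histograms(face)
print(vec.shape)  # (12544,)
```

One such descriptor per face image would then be used to train a multi-class SVM, e.g. `sklearn.svm.SVC` with an RBF kernel, as in the comparison with the SVM-based methods cited above.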
More informationOVERLAPPING ANIMAL SOUND CLASSIFICATION USING SPARSE REPRESENTATION
OVERLAPPING ANIMAL SOUND CLASSIFICATION USING SPARSE REPRESENTATION Na Lin, Haixin Sun Xiamen University Key Laboratory of Underwater Acoustic Communication and Marine Information Technology, Ministry
More informationStefanos Zafeiriou, Anastasios Tefas, and Ioannis Pitas
GENDER DETERMINATION USING A SUPPORT VECTOR MACHINE VARIANT Stefanos Zafeiriou, Anastasios Tefas, and Ioannis Pitas Artificial Intelligence and Information Analysis Lab/Department of Informatics, Aristotle
More informationBayesian Networks Inference with Probabilistic Graphical Models
4190.408 2016-Spring Bayesian Networks Inference with Probabilistic Graphical Models Byoung-Tak Zhang intelligence Lab Seoul National University 4190.408 Artificial (2016-Spring) 1 Machine Learning? Learning
More informationCourse 495: Advanced Statistical Machine Learning/Pattern Recognition
Course 495: Advanced Statistical Machine Learning/Pattern Recognition Deterministic Component Analysis Goal (Lecture): To present standard and modern Component Analysis (CA) techniques such as Principal
More informationGlobal Scene Representations. Tilke Judd
Global Scene Representations Tilke Judd Papers Oliva and Torralba [2001] Fei Fei and Perona [2005] Labzebnik, Schmid and Ponce [2006] Commonalities Goal: Recognize natural scene categories Extract features
More informationFinal Examination CS540-2: Introduction to Artificial Intelligence
Final Examination CS540-2: Introduction to Artificial Intelligence May 9, 2018 LAST NAME: SOLUTIONS FIRST NAME: Directions 1. This exam contains 33 questions worth a total of 100 points 2. Fill in your
More informationHYPERGRAPH BASED SEMI-SUPERVISED LEARNING ALGORITHMS APPLIED TO SPEECH RECOGNITION PROBLEM: A NOVEL APPROACH
HYPERGRAPH BASED SEMI-SUPERVISED LEARNING ALGORITHMS APPLIED TO SPEECH RECOGNITION PROBLEM: A NOVEL APPROACH Hoang Trang 1, Tran Hoang Loc 1 1 Ho Chi Minh City University of Technology-VNU HCM, Ho Chi
More informationFace Recognition from Video: A CONDENSATION Approach
1 % Face Recognition from Video: A CONDENSATION Approach Shaohua Zhou Volker Krueger and Rama Chellappa Center for Automation Research (CfAR) Department of Electrical & Computer Engineering University
More informationMachine Learning. CUNY Graduate Center, Spring Lectures 11-12: Unsupervised Learning 1. Professor Liang Huang.
Machine Learning CUNY Graduate Center, Spring 2013 Lectures 11-12: Unsupervised Learning 1 (Clustering: k-means, EM, mixture models) Professor Liang Huang huang@cs.qc.cuny.edu http://acl.cs.qc.edu/~lhuang/teaching/machine-learning
More informationCS 3710: Visual Recognition Describing Images with Features. Adriana Kovashka Department of Computer Science January 8, 2015
CS 3710: Visual Recognition Describing Images with Features Adriana Kovashka Department of Computer Science January 8, 2015 Plan for Today Presentation assignments + schedule changes Image filtering Feature
More informationLocality Preserving Projections
Locality Preserving Projections Xiaofei He Department of Computer Science The University of Chicago Chicago, IL 60637 xiaofei@cs.uchicago.edu Partha Niyogi Department of Computer Science The University
More informationAn Automatic Face Recognition System from Frontal Face Images using Local Binary Pattern Feature Space and K-Nearest Neighbors Classification
An Automatic Face Recognition System from Frontal Face Images using Local Binary Pattern Feature Space and K-Nearest Neighbors Classification Dhiraj Kumar M. Tech, Software Engineering Rungta College of
More informationEECS490: Digital Image Processing. Lecture #26
Lecture #26 Moments; invariant moments Eigenvector, principal component analysis Boundary coding Image primitives Image representation: trees, graphs Object recognition and classes Minimum distance classifiers
More informationAn Efficient Pseudoinverse Linear Discriminant Analysis method for Face Recognition
An Efficient Pseudoinverse Linear Discriminant Analysis method for Face Recognition Jun Liu, Songcan Chen, Daoqiang Zhang, and Xiaoyang Tan Department of Computer Science & Engineering, Nanjing University
More informationSubspace Methods for Visual Learning and Recognition
This is a shortened version of the tutorial given at the ECCV 2002, Copenhagen, and ICPR 2002, Quebec City. Copyright 2002 by Aleš Leonardis, University of Ljubljana, and Horst Bischof, Graz University
More information