Discriminant Uncorrelated Neighborhood Preserving Projections


Journal of Information & Computational Science 8: 14 (2011) 3019-3026
Available at http://www.joics.com

Discriminant Uncorrelated Neighborhood Preserving Projections

Guoqiang WANG (a), Weijuan ZHANG (a), Dianting LIU (b)

(a) Department of Computer and Information Engineering, Luoyang Institute of Science and Technology, 471023 Luoyang, China
(b) Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL 33124, USA

This work is supported by the Young Core Instructor Funding Scheme of Colleges and Universities in Henan Province (No. 2011GGJS-173). Corresponding author. Email address: wgq2211@163.com (Guoqiang WANG).

Abstract

Dimensionality reduction is a crucial step for pattern recognition. Recently, a new family of dimensionality reduction methods, manifold learning, has attracted much attention. Among them, Neighborhood Preserving Projections (NPP) is one of the most promising techniques. In this paper, a novel manifold learning method called Discriminant Uncorrelated Neighborhood Preserving Projections (DUNPP) is proposed. Based on NPP, DUNPP takes the between-class information into account and designs a new difference-based optimization objective function with an uncorrelated constraint. DUNPP not only preserves the within-class neighboring geometry but also maximizes the between-class distance. Moreover, the features extracted via DUNPP are statistically uncorrelated with minimum redundancy, which is desirable for many pattern analysis applications. Thus it obtains stronger discriminant power. Experimental results on a standard face database demonstrate the effectiveness of the proposed algorithm.

Keywords: Manifold Learning; Within-Class Neighboring Geometry; Between-Class Distance; Uncorrelated Constraint; Face Recognition

1 Introduction

In the past few years, manifold learning algorithms, based on local geometry analysis, have drawn much attention. Unlike Principal Component Analysis (PCA) [1] and Linear Discriminant Analysis (LDA) [2], which aim to preserve the global Euclidean structure of the data space, manifold learning algorithms aim to preserve the inherent manifold structure. The most representative algorithms include Isomap [3], Locally Linear Embedding (LLE) [4], Laplacian Eigenmaps [5], Local Tangent Space Alignment (LTSA) [6], Local Coordinates Alignment (LCA) [7], Local Spline Embedding (LSE) [8], and so on. Their performance on many real data sets shows that they are effective methods for discovering the underlying structure hidden in high-dimensional data [4, 5, 6, 7, 8].

Nevertheless, the mappings derived from these methods are defined only on the training set, and how to embed a novel test point remains unclear. Therefore, these methods cannot be applied directly to pattern classification problems. To overcome this drawback, linear manifold embedding methods have been proposed, such as Neighborhood Preserving Projections (NPP) [9, 10] and Locality Preserving Projections (LPP) [11, 12], which find a mapping defined on the whole data space, not just on the training data. However, these manifold learning methods do not consider class label information, which inevitably weakens their performance on pattern recognition tasks. To take advantage of class separability, discriminant manifold learning methods that integrate local geometric information and class label information have been proposed, such as Marginal Fisher Analysis (MFA) [13] and Locality Sensitive Discriminant Analysis (LSDA) [14]. All the methods mentioned above produce statistically correlated features, so the extracted features contain redundancy, which may distort the distribution of the features and even dramatically degrade recognition performance. Although LPP shares the locality-preserving characteristic of NPP, LPP simply sets the weights between points to 0, 1, or the value of a kernel function, whereas NPP obtains the reconstruction weights by solving a least-squares optimization problem, so the weights are always optimal and better characterize the local neighborhood geometry.

In this paper, based on NPP, we propose a Discriminant Uncorrelated Neighborhood Preserving Projections (DUNPP) method by designing a new difference-based optimization objective function with an uncorrelated constraint for face recognition. The method achieves good discrimination ability by preserving the within-class neighboring geometry while maximizing the between-class distance. In addition, it produces statistically uncorrelated features with minimum redundancy. The effectiveness of the proposed method is verified by experiments on the ORL face database.

2 A Brief Review of NPP

The generic problem of linear dimensionality reduction is as follows: given $N$ points $X = [x_1, x_2, \ldots, x_N]$ in $D$-dimensional space, find a transformation matrix $V$ that maps these points to new points $Y = [y_1, y_2, \ldots, y_N]$ in $d$-dimensional space ($d \ll D$) such that $y_i$ represents $x_i$, where $y_i = V^T x_i$.

Neighborhood Preserving Projections (NPP) is a linear approximation of the original nonlinear Locally Linear Embedding (LLE). NPP aims to find the optimal transformation $V$ that preserves the local manifold structure of the data $X$ by minimizing the following cost function:

$$J_1(V) = \sum_{i=1}^{N} \Big\| y_i - \sum_{j=1}^{N} W_{ij} y_j \Big\|^2 = \big\| Y(I - W)^T \big\|^2 = \mathrm{tr}\{Y (I - W)^T (I - W) Y^T\} = \mathrm{tr}\{V^T X M X^T V\} \qquad (1)$$

with the constraint $(1/N) Y Y^T = (1/N) V^T X X^T V = I$, where $\mathrm{tr}$ denotes the trace operation, $M = (I - W)^T (I - W)$, and $I$ is an identity matrix. The basic idea behind NPP is that the same weights $W_{ij}$ that reconstruct the $i$-th point $x_i$ in the $D$-dimensional space should also reconstruct its embedded counterpart $y_i$ in the $d$-dimensional space. The weight matrix $W$ can be computed by minimizing the reconstruction error:

$$\varepsilon(W) = \sum_{i=1}^{N} \Big\| x_i - \sum_{j=1}^{K} W_{ij} x_j \Big\|^2 \qquad (2)$$

subject to $\sum_j W_{ij} = 1$, and $W_{ij} = 0$ if $x_j$ is not one of the $K$ nearest neighbors of $x_i$.

Finally, the minimization problem can be converted to the following generalized eigenvalue problem:

$$X M X^T v = \lambda X X^T v \qquad (3)$$

Let the column vectors $v_0, v_1, \ldots, v_{d-1}$ be the solutions of Eq. (3), ordered according to their eigenvalues $\lambda_0 \le \lambda_1 \le \ldots \le \lambda_{d-1}$, and let $V = [v_0, v_1, \ldots, v_{d-1}]$. The embedding is then

$$x_i \mapsto y_i = V^T x_i \qquad (4)$$

For more detailed information about NPP, please refer to [9, 10].
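The two steps just reviewed, the local least-squares problem of Eq. (2) and the generalized eigenvalue problem of Eq. (3), translate into a few lines of linear algebra. Below is a minimal NumPy/SciPy sketch of NPP under our own conventions (samples as columns, a small regularization term added to both linear systems for numerical stability); the function name and these choices are illustrative assumptions, not details prescribed by [9, 10].

```python
import numpy as np
from scipy.linalg import eigh, solve

def npp(X, d, K=5, reg=1e-3):
    """Neighborhood Preserving Projections (illustrative sketch).
    X: (D, N) data matrix, one sample per column; d: target dimension."""
    D, N = X.shape
    W = np.zeros((N, N))
    for i in range(N):
        # K nearest neighbors of x_i (excluding x_i itself)
        dist = np.linalg.norm(X - X[:, [i]], axis=0)
        idx = np.argsort(dist)[1:K + 1]
        # Local least-squares problem of Eq. (2) with the sum-to-one constraint
        Z = X[:, idx] - X[:, [i]]               # neighbors shifted to the origin
        C = Z.T @ Z
        C += reg * np.trace(C) * np.eye(K)      # regularization (our assumption)
        w = solve(C, np.ones(K))
        W[i, idx] = w / w.sum()                 # enforce sum_j W_ij = 1
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    # Generalized eigenvalue problem of Eq. (3): X M X^T v = lambda X X^T v;
    # keep the d eigenvectors with the smallest eigenvalues.
    A = X @ M @ X.T
    B = X @ X.T + reg * np.eye(D)               # small ridge keeps the right side definite
    _, vecs = eigh(A, B)                        # eigenvalues returned in ascending order
    return vecs[:, :d]                          # embedding: y = V.T @ x  (Eq. (4))
```

A test point x is then embedded simply as V.T @ x, which is what makes NPP, unlike LLE, directly applicable to unseen samples.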

3 Discriminant Uncorrelated Neighborhood Preserving Projections

NPP is an unsupervised manifold learning method. As mentioned above, discriminant information is very important for recognition problems. Here, we modify NPP with class label information to obtain Discriminant Uncorrelated Neighborhood Preserving Projections (DUNPP). The goal of DUNPP is to preserve the within-class neighborhood geometry while maximizing the between-class distance.

Since DUNPP attempts to solve a classification problem, it must take class label information into account so that the projections of different classes lie far from each other. We therefore propose to maximize the between-class distance:

$$J_2(V) = \sum_{i,j=1}^{C} \| m_i - m_j \|^2 = \sum_{i,j=1}^{C} \Big\| \frac{1}{n_i} \sum_{x \in X_i} V^T x - \frac{1}{n_j} \sum_{x \in X_j} V^T x \Big\|^2 = \sum_{i,j=1}^{C} \| V^T (\mu_i - \mu_j) \|^2 = \mathrm{tr}\Big\{ V^T \sum_{i,j=1}^{C} (\mu_i - \mu_j)(\mu_i - \mu_j)^T V \Big\} = \mathrm{tr}\{V^T S_b V\} \qquad (5)$$

where $m_i = (1/n_i) \sum_{x \in X_i} V^T x$, $\mu_i = (1/n_i) \sum_{x \in X_i} x$, and $S_b$ is the between-class scatter matrix.

According to [12], the total scatter matrix $S_t$, the within-class scatter matrix $S_w$, and the between-class scatter matrix $S_b$ can be formulated, respectively, as follows:

$$S_t = \sum_{i=1}^{N} (x_i - \mu)(x_i - \mu)^T = X\big(I - \tfrac{1}{N} e e^T\big) X^T = X G X^T \qquad (6)$$

$$S_w = \sum_{i=1}^{C} \sum_{x \in X_i} (x - \mu_i)(x - \mu_i)^T = X(I - E) X^T = X L X^T \qquad (7)$$

$$S_b = S_t - S_w = X(G - L) X^T = X B X^T \qquad (8)$$

where $G = I - \frac{1}{N} e e^T$, $I$ is an identity matrix, $e = (1, 1, \ldots, 1)^T$, $L = I - E$, $E_{ij} = 1/N_c$ if $x_i$ and $x_j$ both belong to class $X_c$ and $E_{ij} = 0$ otherwise, and $B = G - L$.
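As a small illustration of Eqs. (6)-(8), the matrices G, L (through E) and B depend only on the label vector and can be assembled directly; this is a sketch under the assumption that labels are given as an integer array, and the helper name is our own.

```python
import numpy as np

def scatter_graph_matrices(labels):
    """Build G, L and B of Eqs. (6)-(8) from class labels (illustrative sketch),
    so that S_t = X G X^T, S_w = X L X^T and S_b = X B X^T."""
    labels = np.asarray(labels)
    N = labels.size
    G = np.eye(N) - np.ones((N, N)) / N                    # G = I - (1/N) e e^T
    E = np.zeros((N, N))
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        E[np.ix_(members, members)] = 1.0 / members.size   # E_ij = 1/N_c within class c
    L = np.eye(N) - E                                      # L = I - E
    B = G - L                                              # B = G - L
    return G, L, B
```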

Thus, Eq. (5) can be rewritten as:

$$J_2(V) = \mathrm{tr}\{V^T X B X^T V\} \qquad (9)$$

When class labels are available, each data point is reconstructed as a linear combination of the other points belonging to the same class. The weights $W_{ij}$ can be computed by minimizing Eq. (2) under the constraints $\sum_j W_{ij} = 1$ and $W_{ij} = 0$ if $x_j$ and $x_i$ are from different classes. Thus, the weight matrix not only reflects the geometric relation but also carries discriminant information. As a result, the within-class geometry structure is preserved in DUNPP.

Combining Eq. (1) and Eq. (9), the objective function of DUNPP can be defined as follows:

$$J(V) = \mathrm{tr}\{ V^T X B X^T V - V^T X M X^T V \} \qquad (10)$$

Next, the statistically uncorrelated constraint is considered. Requiring that any two different components $y_i$ and $y_j$ ($i \ne j$) of the extracted feature $y = V^T x$ be statistically uncorrelated means that:

$$E\big[(y_i - E(y_i))(y_j - E(y_j))^T\big] = v_i^T S_t v_j = 0 \qquad (11)$$

where $v_i$ and $v_j$ are two different columns of the matrix $V$, and $S_t$ is the total scatter matrix of the training set. In addition, each $v_i$ should be normalized; let $v_i$ satisfy:

$$v_i^T S_t v_i = 1 \qquad (12)$$

Eq. (11) and Eq. (12) can be summarized as:

$$V^T S_t V = I \qquad (13)$$

As a result, DUNPP can be formulated as the following constrained maximization problem:

$$\max J(V) = \max_{V^T S_t V = I} \mathrm{tr}\big(V^T X(B - M)X^T V\big) = \max_{V^T X G X^T V = I} \mathrm{tr}\big(V^T X(B - M)X^T V\big) \qquad (14)$$

Finally, the constrained maximization problem reduces to a generalized eigenvalue problem:

$$X(B - M)X^T v = \lambda X G X^T v \qquad (15)$$

The transformation matrix $V$ consists of the $d$ eigenvectors corresponding to the $d$ largest eigenvalues of Eq. (15). Once $V$ is obtained, the feature of any data point $x$ in the input space is given by $y = V^T x$.
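Putting the pieces together, the DUNPP projection amounts to one generalized eigendecomposition, Eq. (15). The following sketch assumes the within-class reconstruction weights W (Eq. (2) restricted to same-class neighbors) are already available, reuses the scatter_graph_matrices helper sketched above, and adds a small ridge term for numerical stability; these are our assumptions rather than details given in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def dunpp(X, labels, W, d, reg=1e-3):
    """DUNPP projection (illustrative sketch).
    X: (D, N) data, one sample per column; labels: length-N class labels;
    W: (N, N) within-class reconstruction weights (W_ij = 0 across classes);
    d: target dimension."""
    D, N = X.shape
    G, L, B = scatter_graph_matrices(labels)    # helper from the sketch above
    M = (np.eye(N) - W).T @ (np.eye(N) - W)
    # Generalized eigenvalue problem of Eq. (15): X (B - M) X^T v = lambda X G X^T v
    A = X @ (B - M) @ X.T
    C = X @ G @ X.T + reg * np.eye(D)           # ridge keeps X G X^T positive definite
    _, vecs = eigh(A, C)                        # eigenvalues in ascending order
    return vecs[:, ::-1][:, :d]                 # d eigenvectors with the largest eigenvalues

# Usage (hypothetical): V = dunpp(X_train, y_train, W_within, d=39)
# then Y_train = V.T @ X_train, and y = V.T @ x for a new sample x
```

Note that eigh normalizes the generalized eigenvectors so that V^T (X G X^T) V = I, which matches the uncorrelated constraint of Eq. (13) up to the small ridge term.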

4 Experimental Results

To demonstrate the effectiveness of the proposed method for face analysis (face representation, face distribution, and face recognition), experiments were conducted on the ORL face database. The ORL face database contains 40 different subjects, and each subject has ten different images. The images include variation in facial expression (smiling or not, open/closed eyes) and pose. The size of each cropped image in all experiments is 32 × 32 pixels, with 256 gray levels per pixel. Fig. 1 illustrates one sample subject of the ORL database along with variations in facial expression and pose. In the experiments, we compare the proposed algorithm with representative algorithms such as PCA, LDA, NPP, and LPP.

Fig. 1: Sample face images from the ORL database

4.1 Face representation using DUNPP

Affected by many complex factors such as expression, illumination, and pose, the original image space might not be an optimal space for visual representation, distribution, and recognition. Owing to their different theoretical foundations, PCA, LDA, NPP, LPP, and DUNPP generate different projected subspaces. The images in the training set are used to learn a face subspace spanned by the basis vectors of the corresponding algorithm; any other image can then be represented in the projected subspace as a linear combination of these basis vector images. For PCA, LDA, and LPP, these basis vector images are usually called Eigenfaces, Fisherfaces, and Laplacianfaces, since they are face-like. Using five randomly selected image samples of each person from the ORL database as the training set, we present the first five basis vector images of DUNPP in Fig. 2, together with the Eigenfaces, the Fisherfaces, the basis vector images of NPP, and the Laplacianfaces. It can be seen from Fig. 2 that the basis vector images computed by DUNPP differ from those of NPP.

Fig. 2: The images corresponding to the first five basis vectors of PCA, LDA, NPP, LPP and DUNPP
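For reference, basis vector images like those in Fig. 2 can be rendered by reshaping each column of a learned projection matrix back onto the 32 × 32 pixel grid; the following matplotlib sketch and its rescaling to [0, 1] are only display conventions we assume.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_basis_faces(V, n=5, shape=(32, 32)):
    """Display the first n columns of a (1024, d) projection matrix as 32 x 32
    basis vector images, in the spirit of Fig. 2 (illustrative sketch)."""
    fig, axes = plt.subplots(1, n, figsize=(2 * n, 2))
    for k, ax in enumerate(axes):
        face = V[:, k].reshape(shape)
        face = (face - face.min()) / (face.max() - face.min() + 1e-12)  # rescale for display
        ax.imshow(face, cmap='gray')
        ax.set_title('basis %d' % (k + 1))
        ax.axis('off')
    plt.show()
```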

4.2 Distribution of face patterns

The distribution of face patterns is highly non-convex and complex, especially when the patterns are subject to large variations in viewpoint, as is the case with the ORL database. Here, the experiment aims to provide insight into how the DUNPP algorithm simplifies the face pattern distribution. For simplicity of visualization, we use only a subset of the database containing 120 images, with 3 randomly selected images per individual. Five types of feature bases are generated from this ORL subset by applying the PCA, LDA, NPP, LPP, and DUNPP algorithms, respectively. Then, the remaining images of seven persons randomly selected from the database are projected onto the five subspaces. For each image, its projections onto the first two most significant feature bases of each subspace are visualized in Fig. 3, where images from the same class are marked with the same symbol. It can be seen from Fig. 3(a) that the faces are not separable in the PCA subspace; the reason is that the PCA-based features are optimized for object reconstruction. LDA optimizes the low-dimensional representation of the objects based on a separability criterion; as shown in Fig. 3(b), the classes become more separable. Unlike PCA and LDA, which are global methods, NPP and LPP possess the locality-preserving property. From Fig. 3(c) and Fig. 3(d) we observe that the separability of NPP and LPP is clearly better than that of PCA and LDA. In contrast to the other four algorithms, the linearization property of the DUNPP-based subspace can be seen in Fig. 3(e), where all classes are almost separable, with only a small amount of overlap.

Fig. 3: Distribution of seven subjects in the different subspaces

4.3 Face recognition using DUNPP

In this section, we investigate the use of DUNPP for face recognition. In this experiment, the training and testing sets are selected randomly for each subject from the ORL face database. The number of training samples per subject, n, increases from 3 to 6. In each round, the training samples are selected randomly and the remaining samples are used for testing. This procedure is repeated 10 times with different randomly chosen training and testing sets. Finally, a nearest neighbor classifier is employed for classification.
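The protocol just described (random splits of n training samples per subject, repeated 10 times, with a nearest neighbor classifier in the reduced space) can be sketched as follows. The fit_projection argument is a placeholder for any of the compared methods (for instance the dunpp sketch above), and the use of scikit-learn's KNeighborsClassifier is simply a convenience we assume.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def evaluate(X, y, fit_projection, d, n_train=3, n_rounds=10, seed=0):
    """Mean and std of the 1-NN recognition rate over random per-subject splits (sketch).
    X: (D, N) images as columns; y: length-N labels;
    fit_projection(X_tr, y_tr, d) -> V is a placeholder for the method under test."""
    y = np.asarray(y)
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(n_rounds):
        tr, te = [], []
        for c in np.unique(y):
            idx = rng.permutation(np.flatnonzero(y == c))
            tr.extend(idx[:n_train])
            te.extend(idx[n_train:])
        tr, te = np.array(tr), np.array(te)
        V = fit_projection(X[:, tr], y[tr], d)
        clf = KNeighborsClassifier(n_neighbors=1)
        clf.fit((V.T @ X[:, tr]).T, y[tr])               # 1-NN in the reduced space
        rates.append(clf.score((V.T @ X[:, te]).T, y[te]))
    return float(np.mean(rates)), float(np.std(rates))
```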

To evaluate the effectiveness of the DUNPP algorithm, we compare its recognition rate with that of the other algorithms, namely PCA, LDA, NPP, and LPP. Fig. 4 compares the average recognition rates of the five algorithms under different reduced dimensions. Table 1 reports the best average recognition rate and the corresponding standard deviation for each algorithm, with the associated reduced dimensionality in parentheses, on the ORL database. As can be seen, the DUNPP algorithm outperforms the other algorithms. The performance is improved because the DUNPP algorithm takes more geometric structure and more discriminant information into account; in addition, the extracted features are statistically uncorrelated.

Fig. 4: Recognition rate vs. dimensionality on the ORL face database. (a) Three samples for training. (b) Four samples for training. (c) Five samples for training. (d) Six samples for training

Table 1: Recognition accuracy (%) comparison on the ORL face database (reduced dimension in parentheses)

Method   3 Train             4 Train             5 Train             6 Train
PCA      79.00 ± 1.76 (119)  84.21 ± 1.65 (159)  89.75 ± 1.84 (199)  90.25 ± 1.39 (239)
LDA      86.14 ± 1.38 (39)   90.75 ± 1.73 (38)   93.55 ± 1.48 (39)   94.50 ± 2.16 (39)
NPP      87.21 ± 1.41 (39)   91.12 ± 1.69 (39)   93.85 ± 1.27 (39)   94.50 ± 2.22 (42)
LPP      87.03 ± 1.43 (39)   91.33 ± 1.36 (39)   93.55 ± 1.01 (38)   94.87 ± 1.76 (39)
DUNPP    90.78 ± 2.15 (39)   93.66 ± 1.48 (39)   96.65 ± 1.00 (39)   97.06 ± 1.25 (39)

5 Conclusion and Future Work

A new manifold learning algorithm called Discriminant Uncorrelated Neighborhood Preserving Projections (DUNPP) has been proposed. DUNPP takes both class label information and the manifold structure into account, so it can preserve the within-class neighborhood geometry and maximize the distance between different classes simultaneously. Moreover, an uncorrelated constraint is introduced to make the extracted features statistically uncorrelated. Experimental results on a face database show that DUNPP is indeed effective and efficient. However, the DUNPP algorithm is still linear; in future work, we will extend DUNPP to a nonlinear form via the kernel trick.

References

[1] M. Turk, A. Pentland, Eigenfaces for Recognition, Journal of Cognitive Neuroscience, 3 (1), 1991, 71-86.
[2] P. N. Belhumeur, J. P. Hespanha, D. J. Kriegman, Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, IEEE Trans. Pattern Analysis and Machine Intelligence, 19 (7), 1997, 711-720.
[3] J. B. Tenenbaum, V. de Silva, J. C. Langford, A Global Geometric Framework for Nonlinear Dimensionality Reduction, Science, 290 (5500), 2000, 2319-2323.
[4] L. Saul, S. Roweis, Think Globally, Fit Locally: Unsupervised Learning of Low Dimensional Manifolds, Journal of Machine Learning Research, 4, 2003, 119-155.
[5] M. Belkin, P. Niyogi, Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering, In: Advances in Neural Information Processing Systems, 2002.
[6] Z. Zhang, H. Zha, Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment, SIAM Journal on Scientific Computing, 26 (1), 2004, 313-338.
[7] T. H. Zhang, X. L. Li, D. C. Tao, J. Yang, Local Coordinates Alignment (LCA): A Novel Method for Manifold Learning, International Journal of Pattern Recognition and Artificial Intelligence, 22 (4), 2008, 667-690.
[8] S. M. Xiang, F. P. Nie, C. S. Zhang, C. X. Zhang, Nonlinear Dimensionality Reduction with Local Spline Embedding, IEEE Trans. on Knowledge and Data Engineering, 21 (9), 2009, 1285-1298.
[9] Y. W. Pang, L. Zhang, Z. K. Liu, Neighborhood Preserving Projections (NPP): A Novel Linear Dimension Reduction Method, In: Proceedings of the International Conference on Intelligent Computing, 2005, 3644, pp. 117-125.
[10] Y. W. Pang, N. H. Yu, H. Q. Li, R. Zhang, Z. K. Liu, Face Recognition Using Neighborhood Preserving Projections, In: Proceedings of the Pacific-Rim Conference on Multimedia, 2005, 3768, pp. 854-864.
[11] X. He, P. Niyogi, Locality Preserving Projections, In: Advances in Neural Information Processing Systems, 2003, 16, pp. 153-160.
[12] X. He, S. Yan, Y. Hu, P. Niyogi, H. J. Zhang, Face Recognition Using Laplacianfaces, IEEE Trans. Pattern Analysis and Machine Intelligence, 27 (3), 2005, 328-340.
[13] S. C. Yan, D. Xu, B. Y. Zhang, H. J. Zhang, Graph Embedding: A General Framework for Dimensionality Reduction, In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005, 2, pp. 20-25.
[14] D. Cai, X. F. He, K. Zhou, Locality Sensitive Discriminant Analysis, In: Proceedings of the International Joint Conference on Artificial Intelligence, 2007.