Kernel Uncorrelated and Orthogonal Discriminant Analysis: A Unified Approach

Tao Xiong, Department of ECE, University of Minnesota, Minneapolis, txiong@ece.umn.edu
Vladimir Cherkassky, Department of ECE, University of Minnesota, Minneapolis, cherkass@ece.umn.edu
Jieping Ye, Department of CSE, Arizona State University, Tempe, jieping.ye@asu.edu

Abstract

Several kernel algorithms have recently been proposed for nonlinear discriminant analysis. However, these methods mainly address the singularity problem in the high-dimensional feature space. Less attention has been paid to the properties of the resulting discriminant vectors and feature vectors in the reduced dimensional space. In this paper, we present a new formulation for kernel discriminant analysis. The proposed formulation includes, as special cases, kernel uncorrelated discriminant analysis (KUDA) and kernel orthogonal discriminant analysis (KODA). The feature vectors of KUDA are uncorrelated, while the discriminant vectors of KODA are orthogonal to each other in the feature space. We present theoretical derivations of the proposed KUDA and KODA algorithms. Experimental results show that both KUDA and KODA are very competitive with other nonlinear discriminant algorithms in terms of classification accuracy.

1. Introduction

In supervised learning, linear discriminant analysis (LDA) has become a classical statistical approach for dimension reduction and classification [3, 5]. LDA computes a linear transformation which maximizes the between-class scatter and minimizes the within-class scatter simultaneously, thus achieving maximum discrimination. Although it is conceptually simple, classical LDA requires the total scatter matrix to be nonsingular. In situations where the number of data points is smaller than the dimension of the data space, all scatter matrices are singular and we face the undersampled or singularity problem [8]. Moreover, classical LDA lacks the capacity to capture a nonlinearly clustered structure in the data.

Nonlinear extensions of LDA use the kernel trick originally introduced in support vector machines (SVMs) [15]. The main idea of kernel-based methods is to map the data from the original space to a feature space in which inner products can be computed by a kernel function, without requiring explicit knowledge of the nonlinear mapping. However, such a nonlinear mapping needs to be applied with caution to LDA, since the dimension of the feature space is often much larger than that of the original data space, and any singularity problem in the original data space becomes more severe. Several kernel LDA (KDA) algorithms have been proposed to address the singularity problem. Mika et al. [10] proposed the kernel Fisher discriminant algorithm, which regularizes the within-class scatter matrix. Xiong et al. [16] presented an efficient KDA algorithm based on a two-stage analysis framework and QR decomposition. Yang et al. [7] proposed the kernel principal component analysis (KPCA) [14] plus LDA framework, in which discriminant vectors are extracted from double discriminant subspaces. More recently, a KDA algorithm using the generalized singular value decomposition was proposed by Park and Park [11].

In many applications, it is desirable to remove the redundancy among the feature coordinates extracted in the reduced dimensional space and among the discriminant vectors. Motivated by extracting feature vectors with statistically uncorrelated attributes, uncorrelated LDA (ULDA) was recently proposed in [6]. A direct nonlinear extension using kernel functions is presented in [9]. However, that algorithm involves solving a sequence of generalized eigenvalue problems and is computationally expensive for large, high-dimensional data sets. Moreover, it does not address the singularity problem. Instead of adding constraints to the extracted coordinate vectors, Zheng et al. [18] extend the well-known Foley-Sammon optimal discriminant vectors (FSODV) using a kernel approach, which they call KFSODV. The resulting discriminant vectors are orthogonal to each other in the feature space. The KFSODV algorithm presented in [18] has the same problem of involving several generalized eigenvalue problems and may be computationally infeasible for large data sets. Recently, an efficient algorithm, called orthogonal linear discriminant analysis (OLDA), was proposed by Ye in [17].

In this paper, we propose a unified approach to general nonlinear discriminant analysis. We present a new formulation for kernel discriminant analysis which includes, as special cases, kernel uncorrelated discriminant analysis (KUDA) and kernel orthogonal discriminant analysis (KODA). As with their linear counterparts, the feature coordinates of KUDA are uncorrelated, while the discriminant vectors of KODA are orthogonal to each other in the feature space. We show how KUDA and KODA can be computed efficiently under the same framework. More specifically, the solution to KODA can be obtained with little extra cost from the solution to the KUDA problem. The KUDA and KODA algorithms proposed in this paper provide a simple and efficient way of computing uncorrelated and orthogonal transformations in the framework of KDA. We compare the proposed algorithms experimentally with other commonly used linear and nonlinear (KDA) discriminant analysis algorithms.

This paper is organized as follows. In Section 2, we present a new framework for kernel discriminant analysis. Section 3 derives the KUDA and KODA algorithms, respectively. Experimental results are presented in Section 4. Finally, we conclude in Section 5.

2. New Framework for Nonlinear Kernel Discriminant Analysis

Assume that X is a data set of n data vectors in an m-dimensional space, i.e.,

X = [x_1, ..., x_n] = [X_1, ..., X_k] \in \mathbb{R}^{m \times n},   (1)

where the data is clustered into k classes and each block X_i contains the n_i data vectors whose column indices belong to N_i. To extend LDA to the nonlinear case, X is first mapped into a feature space F \subseteq \mathbb{R}^M through a nonlinear mapping function Φ. Without knowing the nonlinear mapping function Φ or the feature space F explicitly, we can work in F using kernel functions, as long as the problem formulation involves only the inner products between the data points in F and not the data points themselves. This is based on the fact that for any kernel function κ satisfying Mercer's condition, there exists a mapping function Φ such that ⟨Φ(x), Φ(y)⟩ = κ(x, y), where ⟨·,·⟩ denotes the inner product in the feature space.

Let S_B^Φ, S_W^Φ and S_T^Φ denote the between-class, within-class and total scatter matrices, respectively. Then

S_B^Φ = \sum_{i=1}^{k} n_i (c_i^Φ - c^Φ)(c_i^Φ - c^Φ)^T,
S_W^Φ = \sum_{i=1}^{k} \sum_{j \in N_i} (Φ(x_j) - c_i^Φ)(Φ(x_j) - c_i^Φ)^T,
S_T^Φ = \sum_{i=1}^{k} \sum_{j \in N_i} (Φ(x_j) - c^Φ)(Φ(x_j) - c^Φ)^T,

where c_i^Φ and c^Φ are the centroid of the i-th class and the global centroid, respectively, in the feature space. To reduce the computation, instead of working on the scatter matrices S_B^Φ, S_W^Φ and S_T^Φ directly, we express them as

S_B^Φ = H_B H_B^T,   S_W^Φ = H_W H_W^T,   S_T^Φ = H_T H_T^T,   (2)

where

H_B = [\sqrt{n_1} (c_1^Φ - c^Φ), ..., \sqrt{n_k} (c_k^Φ - c^Φ)],   (3)
H_W = [Φ(X_1) - c_1^Φ e_1^T, ..., Φ(X_k) - c_k^Φ e_k^T],   (4)
H_T = [Φ(X_1) - c^Φ e_1^T, ..., Φ(X_k) - c^Φ e_k^T],   (5)

with e_i = [1, ..., 1]^T \in \mathbb{R}^{n_i \times 1} for i = 1, ..., k, and Φ(X_i) \in \mathbb{R}^{M \times n_i} obtained by mapping each column of X_i into the feature space.
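All of the above will be accessed only through kernel evaluations κ(x_i, x_j). As a concrete illustration, here is a minimal numpy sketch of the Gaussian RBF kernel matrix used later in Section 4; the function name and the samples-as-rows convention are ours, not the paper's.

import numpy as np

def rbf_kernel_matrix(X, Y=None, sigma=1.0):
    """Gaussian RBF kernel kappa(x, y) = exp(-||x - y||^2 / (2 sigma^2)).

    X: (n, m) array with samples as rows (the paper stores samples as columns);
    Y: (p, m) array, defaults to X. Returns the (n, p) kernel matrix.
    """
    if Y is None:
        Y = X
    # Squared Euclidean distances via ||x||^2 - 2 x.y + ||y||^2.
    sq = (np.sum(X**2, axis=1)[:, None]
          - 2.0 * X @ Y.T
          + np.sum(Y**2, axis=1)[None, :])
    return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))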
After mapping the data to the feature space, LDA can be computed by finding in F a transformation matrix Ψ = [ψ_1, ..., ψ_{k-1}] \in \mathbb{R}^{M \times (k-1)} whose columns are the generalized eigenvectors corresponding to the largest k-1 eigenvalues of

S_B^Φ ψ = λ S_T^Φ ψ.   (6)

To solve the problem in (6) without knowing the mapping function Φ and the feature space F explicitly, we need the following lemma.

Lemma 2.1 The discriminant vectors that solve the generalized eigenvalue problem (6) lie in the space spanned by the training points in the feature space, i.e.,

ψ = \sum_{i=1}^{n} γ_i Φ(x_i),   γ = [γ_1, ..., γ_n]^T.   (7)

Proof: Any vector ψ in the feature space can be represented as

ψ = ψ_s + ψ_⊥,   (8)

where ψ_s \in span{Φ(X)} and ψ_⊥ \in span{Φ(X)}^⊥, with ⊥ denoting the orthogonal complement. For any ψ_⊥ orthogonal to Φ(X), we have S_B^Φ ψ_⊥ = 0 and S_T^Φ ψ_⊥ = 0. Therefore, for any vector ψ that satisfies (6),

S_B^Φ ψ_s = S_B^Φ ψ = λ S_T^Φ ψ = λ S_T^Φ ψ_s.   (9)

Hence, without losing any information, we can restrict the solution of (6) to span{Φ(X)}, which completes the proof.

Next we show how the generalized eigenvalue problem (6) can be expressed through kernel functions.

Theorem 2.1 With ψ expressed as in (7) and K the kernel matrix, i.e., K(i, j) = κ(x_i, x_j), the generalized eigenvalue problem in (6) is equivalent to

S_B^K γ = λ S_T^K γ,   (10)

where S_B^K and S_T^K are the between-class and total scatter matrices of K, with each column of K viewed as a data point in the n-dimensional space.

Proof: For any ξ = \sum_{i=1}^{n} α_i Φ(x_i) with α = [α_1, ..., α_n]^T, left-multiplying both sides of (6) by ξ^T gives

ξ^T S_B^Φ ψ = λ ξ^T S_T^Φ ψ.   (11)

Substituting ξ = Φ(X)α and ψ = Φ(X)γ into (11), and using (2),

α^T Φ(X)^T H_B H_B^T Φ(X) γ = λ α^T Φ(X)^T H_T H_T^T Φ(X) γ.   (12)

Since (12) holds for any α, it follows that

Φ(X)^T H_B H_B^T Φ(X) γ = λ Φ(X)^T H_T H_T^T Φ(X) γ,   (13)

or, equivalently, K_B K_B^T γ = λ K_T K_T^T γ, where K_B = [b_{ij}] with

b_{ij} = \sqrt{n_j} \left( \frac{1}{n_j} \sum_{s \in N_j} κ(x_i, x_s) - \frac{1}{n} \sum_{s=1}^{n} κ(x_i, x_s) \right),   (14)

and K_T = [t_{ij}] with

t_{ij} = κ(x_i, x_j) - \frac{1}{n} \sum_{s=1}^{n} κ(x_i, x_s).   (15)

The derivations of K_B and K_T follow naturally from (3), (5) and (7). It can be seen that K_B K_B^T and K_T K_T^T are the same as the between-class matrix S_B^K and the total scatter matrix S_T^K of K, if we view each column of K as a data point in the n-dimensional space. This completes the proof.

Similarly, the within-class scatter matrix of K can be shown to be K_W K_W^T, where K_W = [w_{ij}] and

w_{ij} = κ(x_i, x_j) - \frac{1}{n_l} \sum_{s \in N_l} κ(x_i, x_s),   (16)

with l the class to which x_j belongs.

In summary, kernel discriminant analysis finds a coefficient matrix Γ = [γ_1, ..., γ_{k-1}] \in \mathbb{R}^{n \times (k-1)} satisfying

K_B K_B^T Γ = K_T K_T^T Γ Λ,   (17)

where Λ is a diagonal matrix containing the generalized eigenvalues. Solving the generalized eigenvalue problem in (17) is equivalent to finding a transformation matrix Γ that maximizes the following criterion [5]:

F(Γ) = trace( (Γ^T K_T K_T^T Γ)^{-1} (Γ^T K_B K_B^T Γ) ).   (18)

However, K_T K_T^T is usually singular, which means the optimization criterion in (18) cannot be applied directly. In this paper, we adopt a new criterion, defined as

F_1(Γ) = trace( (Γ^T K_T K_T^T Γ)^{+} (Γ^T K_B K_B^T Γ) ),   (19)

where (Γ^T K_T K_T^T Γ)^{+} denotes the pseudo-inverse of Γ^T K_T K_T^T Γ. The optimal transformation matrix Γ is computed so that F_1(Γ) is maximized. A similar criterion was used in [17] for linear discriminant analysis; the use of the pseudo-inverse in discriminant analysis can also be found in [12]. Once Γ is computed, for any test vector y \in \mathbb{R}^{m \times 1} in the original space, the reduced-dimensional representation of y is given by

[ψ_1, ..., ψ_{k-1}]^T Φ(y) = Γ^T [κ(x_1, y), ..., κ(x_n, y)]^T.   (20)

In order to solve the optimization problem (19), we need the following theorem.

Theorem 2.2 Let Z \in \mathbb{R}^{n \times n} be a matrix that diagonalizes K_B K_B^T, K_W K_W^T, and K_T K_T^T simultaneously, i.e.,

Z^T K_B K_B^T Z = D_b = diag(Σ_b, 0),   (21)
Z^T K_W K_W^T Z = D_w = diag(Σ_w, 0),   (22)
Z^T K_T K_T^T Z = D_t = diag(I_t, 0),   (23)

where Σ_b and Σ_w are diagonal matrices with the diagonal elements of Σ_b sorted in non-increasing order, and I_t is an identity matrix. Let r = rank(K_B K_B^T) and let Z_r be the matrix consisting of the first r columns of Z. Then Γ = Z_r M, for any nonsingular matrix M, maximizes F_1 defined in (19).
Proof: The proof follows the derivations in [17], where a similar result is obtained for generalized linear discriminant analysis.
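Equations (14)-(16) say that K_B, K_W and K_T are obtained from the kernel matrix by global and per-class centering of its entries. A minimal numpy sketch, assuming integer class labels (all names are ours):

import numpy as np

def kernel_scatter_factors(K, labels):
    """Form K_B (n x k), K_W (n x n) and K_T (n x n) from the kernel matrix K
    as in Eqs. (14)-(16), treating each column of K as a data point.

    K: (n, n) kernel matrix with K[i, j] = kappa(x_i, x_j);
    labels: length-n array of integer class labels.
    """
    n = K.shape[0]
    classes = np.unique(labels)
    global_mean = K.mean(axis=1, keepdims=True)        # (1/n) sum_s kappa(x_i, x_s)
    K_T = K - global_mean                              # Eq. (15)
    K_B = np.empty((n, len(classes)))
    K_W = np.empty_like(K)
    for j, c in enumerate(classes):
        idx = np.where(labels == c)[0]
        class_mean = K[:, idx].mean(axis=1)            # (1/n_j) sum_{s in N_j} kappa(x_i, x_s)
        K_B[:, j] = np.sqrt(len(idx)) * (class_mean - global_mean[:, 0])   # Eq. (14)
        K_W[:, idx] = K[:, idx] - class_mean[:, None]  # Eq. (16)
    return K_B, K_W, K_T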

Next, we present an algorithm to compute a Z that diagonalizes K_B K_B^T, K_W K_W^T and K_T K_T^T simultaneously. Let K_T = U Σ V^T be the SVD of K_T, where U and V are orthogonal, Σ = diag(Σ_t, 0), Σ_t \in \mathbb{R}^{t \times t} is diagonal, and t = rank(K_T). Then

K_T K_T^T = U Σ Σ^T U^T = U diag(Σ_t^2, 0) U^T.   (24)

Let U be partitioned as U = (U_1, U_2), where U_1 \in \mathbb{R}^{n \times t} and U_2 \in \mathbb{R}^{n \times (n-t)}. From the fact that

K_T K_T^T = K_B K_B^T + K_W K_W^T,   (25)

and U_2^T K_T K_T^T U_2 = 0, we have U_2^T K_B K_B^T U_2 = 0, U_2^T K_W K_W^T U_2 = 0, U_1^T K_B K_B^T U_2 = 0 and U_1^T K_W K_W^T U_2 = 0, since both K_B K_B^T and K_W K_W^T are positive semidefinite. Therefore

U^T K_B K_B^T U = diag(U_1^T K_B K_B^T U_1, 0),   (26)
U^T K_W K_W^T U = diag(U_1^T K_W K_W^T U_1, 0).   (27)

From (25)-(27), we have Σ_t^2 = U_1^T K_B K_B^T U_1 + U_1^T K_W K_W^T U_1. It follows that

I_t = Σ_t^{-1} U_1^T K_B K_B^T U_1 Σ_t^{-1} + Σ_t^{-1} U_1^T K_W K_W^T U_1 Σ_t^{-1} = B B^T + W W^T,   (28)

where B = Σ_t^{-1} U_1^T K_B and W = Σ_t^{-1} U_1^T K_W. Let B = P Σ_B Q^T be the SVD of B, where P and Q are orthogonal and Σ_B is diagonal. Then

Σ_t^{-1} U_1^T K_B K_B^T U_1 Σ_t^{-1} = B B^T = P Σ_b P^T,   (29)

where Σ_b = Σ_B Σ_B^T = diag(λ_1, ..., λ_t) with λ_1 ≥ ... ≥ λ_r > 0 = λ_{r+1} = ... = λ_t and r = rank(K_B K_B^T). It follows that I_t = Σ_b + P^T W W^T P. Hence P^T Σ_t^{-1} U_1^T K_W K_W^T U_1 Σ_t^{-1} P = I_t - Σ_b ≡ Σ_w is also diagonal. Therefore

Z = U diag(Σ_t^{-1} P, I)   (30)

diagonalizes K_B K_B^T, K_W K_W^T and K_T K_T^T as in (21)-(23), where I in (30) is an identity matrix of the proper size.

3. Kernel Uncorrelated and Orthogonal Discriminant Analysis

3.1. Kernel Uncorrelated Discriminant Analysis

Uncorrelated linear discriminant analysis (ULDA) was recently proposed in [6] to extract feature vectors with uncorrelated attributes. Statistically, a correlation describes the strength of an association between variables: the value of one variable can be predicted, to some extent, from the value of the other. Uncorrelated features therefore contain minimum redundancy, which is desirable in many applications.

Let S_b, S_w and S_t denote the between-class, within-class and total scatter matrices in the original data space. The algorithm in [6] finds, successively, discriminant vectors that are S_t-orthogonal. Recall that two vectors x and y are S_t-orthogonal if x^T S_t y = 0. The algorithm proceeds as follows: the i-th discriminant vector ψ_i of ULDA is the eigenvector corresponding to the maximum eigenvalue of the generalized eigenvalue problem

P_i S_b ψ_i = λ_i S_w ψ_i,

where P_1 = I, D_i = [ψ_1, ..., ψ_{i-1}], P_i = I - S_t D_i^T (D_i S_t S_w^{-1} S_t D_i^T)^{-1} D_i S_t S_w^{-1}, and I is the identity matrix. The limitation of the ULDA algorithm in [6] lies in the expensive computation of the s generalized eigenvalue problems, where s is the number of optimal discriminant vectors found by ULDA. The algorithms presented in this paper, besides other advantages, compute all the uncorrelated optimal discriminant vectors simultaneously in the feature space, thus saving a huge amount of computation.

The key to ULDA is the S_t-orthogonality of the discriminant vectors. In the feature space, S_t-orthogonality becomes ψ_i^T S_T^Φ ψ_j = 0 for two different discriminant vectors ψ_i and ψ_j in F. Using (2), (5), (7) and the equivalence in Theorem 2.1, we obtain a constrained optimization problem for kernel uncorrelated discriminant analysis (KUDA):

Γ* = arg max_{Γ^T K_T K_T^T Γ = I_r} F_1(Γ),   (31)

where F_1(Γ) is defined in (19) and I_r is an identity matrix of size r × r. In Theorem 2.2, we proved that Γ = Z_r M maximizes the F_1 criterion for any nonsingular matrix M. By choosing a specific M, we can solve the optimization problem in (31).

Corollary 3.1 Γ = Z_r, i.e., setting M = I_r, solves the optimization problem in (31).

Proof: From the derivation of Z in the last section, and equation (23) in particular, Z_r^T K_T K_T^T Z_r = I_r. Since Z_r also maximizes F_1, Z_r is an optimal solution to (31). This completes the proof.

The pseudo-code for KUDA is given in Algorithm 1.

Algorithm 1 (KUDA)
Input: data matrix X, test data y;
Output: reduced representation of y;
1. Construct the kernel matrix K;
2. Form the three matrices K_B, K_W, K_T as in Eqs. (14)-(16);
3. Compute the reduced SVD of K_T as K_T = U_1 Σ_t V_1^T;
4. B ← Σ_t^{-1} U_1^T K_B;
5. Compute the SVD of B as B = P Σ_B Q^T; r ← rank(B);
6. Z ← U_1 Σ_t^{-1} P;
7. Γ ← Z_r;
8. Reduced representation of y: Γ^T [κ(x_1, y), ..., κ(x_n, y)]^T.

3.2. Kernel Orthogonal Discriminant Analysis

LDA with orthogonal discriminant vectors is a natural alternative to ULDA. In the current literature, Foley-Sammon LDA (FSLDA) is well known for its orthogonal discriminant vectors [4, 2]. Specifically, assuming that i vectors ψ_1, ψ_2, ..., ψ_i have been obtained, the (i+1)-th vector ψ_{i+1} of FSLDA is the one that maximizes the criterion function (18) subject to the constraints ψ_{i+1}^T ψ_j = 0, j = 1, ..., i. It has been proved that ψ_{i+1} is the eigenvector corresponding to the maximum eigenvalue of the matrix

(I - S_w^{-1} D_i^T S_i^{-1} D_i) S_w^{-1} S_b,

where D_i = [ψ_1, ..., ψ_i]^T and S_i = D_i S_w^{-1} D_i^T.

The orthogonality constraint in the feature space becomes ψ_i^T ψ_j = 0 for two different discriminant vectors ψ_i and ψ_j in F. Using the definition in (7) and the equivalence in Theorem 2.1, we obtain a constrained optimization problem for kernel orthogonal discriminant analysis (KODA):

Γ* = arg max_{Γ^T K Γ = I_r} F_1(Γ),   (32)

where F_1(Γ) is defined in (19) and K is the kernel matrix defined in Theorem 2.1.

Corollary 3.2 Let Z_r^T K Z_r = U Σ U^T be the eigendecomposition of Z_r^T K Z_r. Then Γ = Z_r U Σ^{-1/2} solves the optimization problem in (32).

Proof: In Theorem 2.2, we proved that Γ = Z_r M maximizes the F_1 criterion for any nonsingular matrix M. We have the freedom to choose M so that the constraint in (32) is satisfied. With M = U Σ^{-1/2},

M^T Z_r^T K Z_r M = Σ^{-1/2} U^T (U Σ U^T) U Σ^{-1/2} = I_r.

This completes the proof.

The pseudo-code for KODA is given in Algorithm 2.

Algorithm 2 (KODA)
Input: data matrix X, test data y;
Output: reduced representation of y;
1. Compute the matrix Z_r as in steps 1-6 of Algorithm 1;
2. Compute the eigendecomposition of Z_r^T K Z_r as Z_r^T K Z_r = U Σ U^T;
3. Γ ← Z_r U Σ^{-1/2};
4. Reduced representation of y: Γ^T [κ(x_1, y), ..., κ(x_n, y)]^T.
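Putting the two algorithms together, a minimal numpy sketch might look as follows. It reuses kernel_scatter_factors from the earlier sketch; the rank tolerance is our own choice, and Z_r^T K Z_r is assumed to be nonsingular for the KODA step.

import numpy as np

def kuda_koda(K, labels, tol=1e-10):
    """Sketch of Algorithms 1 and 2: return the KUDA and KODA matrices Gamma (n x r)."""
    K_B, K_W, K_T = kernel_scatter_factors(K, labels)   # Eqs. (14)-(16)

    # Step 3: reduced SVD of K_T, keeping the t numerically nonzero singular values.
    U, s, _ = np.linalg.svd(K_T, full_matrices=False)
    t = int(np.sum(s > tol * s[0]))
    U1, s_t = U[:, :t], s[:t]

    # Steps 4-6: B = Sigma_t^{-1} U1^T K_B, its SVD, and Z = U1 Sigma_t^{-1} P.
    B = (U1 / s_t).T @ K_B
    P, sb, _ = np.linalg.svd(B, full_matrices=False)
    r = int(np.sum(sb > tol * sb[0]))
    Z_r = ((U1 / s_t) @ P)[:, :r]

    # Step 7 (KUDA): Gamma = Z_r.
    Gamma_kuda = Z_r

    # Algorithm 2 (KODA): Gamma = Z_r U Sigma^(-1/2) with Z_r^T K Z_r = U Sigma U^T.
    w, V = np.linalg.eigh(Z_r.T @ K @ Z_r)               # assumed positive definite
    Gamma_koda = Z_r @ (V / np.sqrt(w))
    return Gamma_kuda, Gamma_koda

def project(Gamma, K_new):
    """Step 8: reduced representation Gamma^T [kappa(x_1, y), ..., kappa(x_n, y)]^T.
    K_new is the (n, n_new) matrix of kernel values between training and new points."""
    return Gamma.T @ K_new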

4. Experimental Comparisons

In this section, we conduct experiments to demonstrate the effectiveness of the proposed KUDA and KODA algorithms in comparison with other kernel-based nonlinear discriminant analysis algorithms as well as kernel PCA (KPCA) [14]. In all experiments, for simplicity, we apply the nearest-neighbor classifier in the reduced dimensional space [1].

Table 1. Description of the UCI data sets.
Data set     Vehicle  Vowel  Car    Mfeature
#class       4        11     4      10
#attribute   18       10     6      649
#data        846      528    1728   2000

Table 2. Prediction accuracy on the UCI data sets, averaged over 30 runs. The number in parentheses for KRDA is the regularization value.
Methods      Vehicle  Vowel  Car    Mfeature
LDA          .735     .935   .91    .981
KPCA         .689     .939   .86    .89
KUDA         .769     .941   .953   .984
KODA         .784     .959   .961   .99
KRDA(.1)     .491     .923   .923   .942
KRDA(.5)     .48      .912   .953   .946
KRDA(1)      .734     .942   .962   .955
KRDA(3)      .541     .938   .96    .952
KRDA(5)      .535     .937   .959   .931

4.1. UCI data sets

In this experiment, we test the performance of the proposed algorithms on data with relatively low dimensionality. The data sets were collected from the UCI machine learning repository (http://www.ics.uci.edu/~mlearn/mlrepository.html); their statistics are summarized in Table 1. We randomly split the data into equally sized training and test sets, and cross-validation on the training data is used to tune the parameter σ of the Gaussian RBF kernel κ(x, y) = exp(-||x - y||^2 / (2σ^2)). The random split was repeated 30 times.

A commonly used approach to deal with the singularity problem in the generalized eigenvalue problem (17) is regularization, where a positive diagonal matrix ξI is added to K_T K_T^T to make it nonsingular. Then the eigenvalue problem

(K_T K_T^T + ξI)^{-1} K_B K_B^T γ = λ γ   (33)

is solved. We call this approach kernel regularized discriminant analysis (KRDA). A similar approach for the two-class classification problem was proposed in [10]. The performance of KRDA is compared with KUDA and KODA in our experiments. To make the comparison fair, we choose five different values of the regularization parameter ξ and report the prediction accuracies in Table 2. The prediction accuracies obtained by LDA and KPCA are also reported. In KPCA, for simplicity, we set the reduced dimension to the number of data points minus the number of classes.

The results in Table 2 show that both KUDA and KODA are very competitive with the other methods in terms of prediction accuracy. KRDA also gives reasonably good results. However, the optimal regularization parameter in KRDA usually needs to be determined experimentally, and this procedure can be expensive, whereas KUDA and KODA do not require any additional parameter optimization and can be computed efficiently.

4.2. Face Image Databases

To test the effectiveness of the proposed algorithms on undersampled problems, we use two benchmark face databases: PIX and ORL.

4.2.1 PIX database

PIX (http://peipa.essex.ac.uk/ipa/pix/faces/manchester/test-hard/) contains 300 face images of 30 persons. The original image size is 512 × 512; we subsample the images down to a size of 100 × 100 = 10000 pixels. We use two settings in our experiments. In Test A, a subset with 2 images per individual was randomly sampled to form the training set, and the remaining images form the test set. In Test B, the number of randomly sampled training images per individual was increased from 2 to 4. Again the Gaussian kernel was used, and σ was set to 2 since it gives overall good performance. We repeat the random split 30 times and report the average prediction error rates.

We compare KUDA and KODA with KRDA (using five different regularization parameter values), as well as KPCA and the PCA plus LDA approach [1]. For PCA plus LDA, we report the best result over different PCA dimensions. The result of orthogonal linear discriminant analysis (OLDA) [17] is also reported. Figures 1 and 2 show the relative performance of the different algorithms on the PIX data set: the vertical axis represents the error rate, the horizontal axis indexes the algorithms, and the name of each algorithm is listed on top of its bar. For KRDA, the numbers in parentheses are the regularization parameter values.

[Figure 1. Error rate of KUDA and KODA vs. other methods on the PIX dataset: Test A.]
[Figure 2. Error rate of KUDA and KODA vs. other methods on the PIX dataset: Test B.]

We can see that in both tests, KRDA with parameter .5 achieves the lowest prediction error rate, and the proposed algorithms come very close to the best KRDA classifier. Note that there is no extra parameter to tune in KUDA and KODA, whereas it may be expensive to find the optimal regularization parameter for KRDA.

4.2.2 ORL database

ORL (http://www.cam-orl.co.uk/facedatabase.html) is a well-known data set for face recognition [13]. It contains the face images of 40 persons, for a total of 400 images of size 92 × 112. The face images are accurately centered; the major challenge on this data set is the variation in face pose and facial expression. We use the whole image as an instance and follow the same experimental protocol as in Section 4.2.1.
The data set is randomly partitioned into training and test sets, with 2 images per individual used for training in Test A and 4 images per individual in Test B. The results in Figures 3 and 4 show that the proposed KUDA and KODA algorithms outperform all the other algorithms compared.
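For concreteness, one run of the random-split protocol used in Tests A and B might look like the sketch below, reusing the helper functions from the earlier sketches; the split sizes, σ and all names are placeholders of our own, not values prescribed by the paper.

import numpy as np

def evaluate_once(X, y, n_train_per_class, sigma, rng):
    """One random split: fit KUDA on n_train_per_class images per person and
    classify the remaining images with a 1-nearest-neighbor rule in the reduced
    space, returning the error rate. X: (n, m) samples as rows; y: integer labels.
    Reuses rbf_kernel_matrix, kuda_koda and project from the earlier sketches."""
    train_idx, test_idx = [], []
    for c in np.unique(y):
        idx = rng.permutation(np.where(y == c)[0])
        train_idx.extend(idx[:n_train_per_class])
        test_idx.extend(idx[n_train_per_class:])
    train_idx, test_idx = np.array(train_idx), np.array(test_idx)

    K = rbf_kernel_matrix(X[train_idx], sigma=sigma)
    K_test = rbf_kernel_matrix(X[train_idx], X[test_idx], sigma=sigma)
    Gamma, _ = kuda_koda(K, y[train_idx])      # KUDA; take the second output for KODA

    Z_train = project(Gamma, K).T              # (n_train, r)
    Z_test = project(Gamma, K_test).T          # (n_test, r)

    # 1-nearest-neighbor classification in the reduced space.
    d2 = ((Z_test[:, None, :] - Z_train[None, :, :]) ** 2).sum(axis=2)
    pred = y[train_idx][np.argmin(d2, axis=1)]
    return float(np.mean(pred != y[test_idx]))

# Example use (illustrative settings only):
# errors = [evaluate_once(X, y, 2, sigma=2.0, rng=np.random.default_rng(i)) for i in range(30)]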

[Figure 3. Error rate of KUDA and KODA vs. other methods on the ORL dataset: Test A.]
[Figure 4. Error rate of KUDA and KODA vs. other methods on the ORL dataset: Test B.]

Surprisingly, the relative ordering of the two proposed methods is reversed compared with our previous experiments. It is also interesting that OLDA performs very well and outperforms the PCA plus LDA method by a large margin.

5. Conclusions

In this paper, we proposed a unified approach for kernel uncorrelated and orthogonal discriminant analysis. The derived KUDA and KODA algorithms solve the undersampled problem in the feature space using a new optimization criterion. KUDA has the property that the features in the reduced space are uncorrelated, while KODA has the property that the discriminant vectors obtained in the feature space are orthogonal to each other. Both proposed algorithms are efficient. Experimental comparisons on a variety of real-world data sets and face image databases indicate that KUDA and KODA are very competitive with existing nonlinear dimension reduction algorithms in terms of generalization performance.

References

[1] P. Belhumeur, J. Hespanha, and D. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. In ECCV, pages 45-58, 1996.
[2] L. Duchene and S. Leclerq. An optimal transformation for discriminant and principal component analysis. IEEE TPAMI, 10(6):978-983, 1988.
[3] R. Duda, P. Hart, and D. Stork. Pattern Classification. Wiley, 2000.
[4] J. H. Friedman. Regularized discriminant analysis. JASA, 84(405):165-175, 1989.
[5] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, California, USA, 1990.
[6] Z. Jin, J.-Y. Yang, Z.-S. Hu, and Z. Lou. Face recognition based on the uncorrelated discriminant transformation. Pattern Recognition, 34:1405-1416, 2001.
[7] J. Yang, A. Frangi, J. Yang, and Z. Jin. KPCA plus LDA: A complete kernel Fisher discriminant framework for feature extraction and recognition. IEEE TPAMI, 27(2):230-244, 2005.
[8] W. Krzanowski, P. Jonathan, W. McCarthy, and M. Thomas. Discriminant analysis with singular covariance matrices: methods and applications to spectroscopic data. Applied Statistics, 44:101-115, 1995.
[9] Z. Liang and P. Shi. Uncorrelated discriminant vectors using a kernel method. Pattern Recognition, 38:307-310, 2005.
[10] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In IEEE Neural Networks for Signal Processing Workshop, pages 41-48, 1999.
[11] C. Park and H. Park. Nonlinear discriminant analysis using kernel functions and the generalized singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 27(1):87-102, 2005.
[12] S. Raudys and R. Duin. On expected classification error of the Fisher linear classifier with pseudo-inverse covariance matrix. Pattern Recognition Letters, 19:385-392, 1998.
[13] F. Samaria and A. Harter. Parameterisation of a stochastic model for human face identification. In Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pages 138-142, 1994.
[14] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299-1319, 1998.
[15] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.
[16] T. Xiong, J. Ye, Q. Li, R. Janardan, and V. Cherkassky. Efficient kernel discriminant analysis via QR decomposition. In NIPS, pages 1529-1536, 2005.
[17] J. Ye. Characterization of a family of algorithms for generalized discriminant analysis on undersampled problems. Journal of Machine Learning Research, 6:483-502, 2005.
[18] W. Zheng, L. Zhao, and C. Zou. Foley-Sammon optimal discriminant vectors using kernel approach. IEEE Transactions on Neural Networks, 16(1):1-9, 2005.