Uncorrelated Multilinear Discriminant Analysis with Regularization and Aggregation for Tensor Object Recognition

Haiping Lu, K. N. Plataniotis, and A. N. Venetsanopoulos
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, M5S 3G4, Canada, and Ryerson University, Toronto, Ontario, M5B 2K3, Canada
{haiping, kostas, anv}@comm.toronto.edu

Abstract
This paper proposes an uncorrelated multilinear discriminant analysis (UMLDA) framework for the recognition of multidimensional objects, known as tensor objects. Uncorrelated features are desirable in recognition tasks since they contain minimum redundancy and ensure independence of features. The UMLDA aims to extract uncorrelated discriminative features directly from tensorial data through solving a tensor-to-vector projection. The solution consists of sequential iterative processes based on the alternating projection method, and an adaptive regularization procedure is incorporated to enhance the performance in the small sample size scenario. A simple nearest-neighbor classifier is employed for classification. Furthermore, exploiting the complementary information from differently initialized and regularized UMLDA recognizers, an aggregation scheme is adopted to combine them at the matching score level, resulting in enhanced generalization performance while alleviating the regularization parameter selection problem. The UMLDA-based recognition algorithm is then empirically shown on face and gait recognition tasks to outperform four multilinear subspace solutions (MPCA, DATER, GTDA, TR1DA) and four linear subspace solutions (Bayesian, LDA, ULDA, R-JD-LDA).

Index Terms
Multilinear discriminant analysis (MLDA), tensor objects, dimensionality reduction, feature extraction, face recognition, gait recognition, regularization, fusion.

This paper was presented in part at the Biometrics Symposium 2007, Baltimore, Maryland, September 11-13, 2007.

I. INTRODUCTION

Nowadays, there is growing interest in the processing of multidimensional objects, formally known as tensor objects, in a large number of emerging applications. Tensors are considered to be the extensions of vectors and matrices. The elements of a tensor are addressed by a number of indices [1], where the number of indices used in the description defines the order of the tensor object and each index defines one mode. By this definition, vectors are first-order tensors and matrices are second-order tensors. For example, gray-level images are naturally second-order tensors with the column and row modes [2], [3], and color images are three-dimensional objects (third-order tensors) with the column, row and color modes [4]. Three-dimensional gray-level objects [5], such as 3-D gray-level faces [6], [7], are naturally third-order tensors with the column, row and depth modes, while the popular Gabor representation of gray-level images [8] is a third-order tensor with the column, row and Gabor modes. Many sequential (space-time) signals, such as surveillance video sequences [9], are naturally higher-order tensors. Gray-level video sequences can be viewed as third-order tensors with the column, row and time modes, and color video sequences are fourth-order tensors with an additional color mode. Among the wide range of applications involving tensor objects [10], feature extraction for recognition purposes is arguably the most important one. Therefore, this paper focuses on feature extraction for tensor object recognition.

In pattern recognition applications, the tensor space where a typical tensor object is specified is often high-dimensional, and recognition methods operating directly on this space suffer from the so-called curse of dimensionality [11]. On the other hand, the entries of a tensor object are often highly correlated with surrounding entries, and the samples from a particular tensor object class, such as face images, are usually highly constrained and belong to a subspace, a manifold of intrinsically low dimension [11], [12]. Feature extraction or dimensionality reduction is thus an attempt to transform a high-dimensional data set into a low-dimensional space of equivalent representation while retaining most of the underlying structure [13]. Traditional feature extraction algorithms, such as the classical principal component analysis (PCA) and linear discriminant analysis (LDA), are linear algorithms that operate on one-dimensional objects, i.e., first-order tensors (vectors). To apply these linear algorithms to higher-order (greater than one) tensor objects, such as images and videos, these tensor objects have to be reshaped (vectorized) into vectors first. However, it is well understood that such reshaping (vectorization) breaks the natural structure and correlation in the original data, reducing redundancies and/or higher-order dependencies present in the original data set, and losing potentially more compact or useful representations that can be obtained in the original tensorial forms [10]. Thus, dimensionality reduction algorithms operating directly on the tensor objects rather than their vectorized versions are desirable.

Recently, multilinear subspace feature extraction algorithms [2], [10], [14], [15] operating directly on the tensorial representations rather than their vectorized versions are emerging, especially in the popular area of biometrics-based human recognition, e.g., face and gait recognition. The multilinear principal component analysis (MPCA) framework [10], a multilinear extension of PCA, determines a multilinear projection that projects the original tensor objects into a lower-dimensional tensor subspace while preserving the variation in the original data. Like PCA, MPCA is an unsupervised method, and its feature extraction process does not make use of the class information. On the other hand, supervised multilinear feature extraction algorithms have also been developed. Since LDA is a classical algorithm that has been very successful and applied widely in various applications, several variants of its multilinear extension have been proposed, referred to generically as multilinear discriminant analysis (MLDA) in this paper. The two-dimensional LDA (2DLDA) first introduced in [16] was later extended to perform discriminant analysis on more general tensorial inputs [2]. In this so-called discriminant analysis with tensor representation (DATER)^1 approach of [2], a tensor-based scatter ratio criterion is maximized. A so-called general tensor discriminant analysis (GTDA) algorithm is proposed in [15], where a scatter difference criterion is maximized. DATER and GTDA are both based on the tensor-to-tensor projection (TTP) [17]. In contrast, the tensor rank-one discriminant analysis (TR1DA) algorithm [18], [19] obtains a number of rank-one projections with the scatter difference criterion from the repeatedly calculated residues of the original tensor data, which is in fact the heuristic method in [20] for tensor approximation, and it can be viewed as being based on the tensor-to-vector projection (TVP) [17]. The multilinear extensions of linear graph-embedding algorithms were introduced similarly in [21]-[25].

In the existing MLDA variants [2], [15], [18], [19], attention has focused mainly on the objective criterion in terms of (either the ratio of or the difference between) the between-class scatter and the within-class scatter, since it is well known that the classical LDA aims to maximize the Fisher discrimination criterion (FDC). However, these variants did not take the correlations among features into account. In other words, an important property of the classical LDA is ignored in these developments: the classical LDA derives uncorrelated features, as proved in [26], [27], where the uncorrelated LDA (ULDA) introduced in [28] is shown to be equivalent to the classical LDA. Uncorrelated features contain minimum redundancy and ensure independence of features [27], [29]. They are highly desirable in practical recognition tasks since the subsequent classification task can be greatly simplified.

^1 Here, we use the name given when the algorithm was first proposed, which is more commonly referred to in the literature.

Motivated by the discussions above, this paper aims to develop a new MLDA solution that extracts uncorrelated features, named the uncorrelated MLDA (UMLDA). The proposed UMLDA extracts discriminative features directly from tensorial data through solving a TVP so that the traditional FDC is maximized in each elementary projection, while the features extracted are constrained to be uncorrelated. The solution is iterative, based on the alternating projection method (APM), and an adaptive regularization factor is incorporated to enhance the performance in practical applications where the input dimensionality is very high but the sample size per class is often limited, such as face or gait recognition [30], [31]. The extracted features are classified through a simple classifier. Furthermore, as different initialization or regularization of UMLDA results in different features, an aggregation scheme that combines these features at the matching score level using the simple sum rule is adopted to enhance the recognition performance, with the regularization parameter selection problem alleviated at the same time.

The main contributions of this work are:

1) The introduction of a UMLDA algorithm for uncorrelated discriminative feature extraction from tensors. As a multilinear extension of LDA, the algorithm not only obtains discriminative features through maximizing the traditional scatter-ratio-based criterion, but also enforces a constraint so that the features derived are uncorrelated. This contrasts with the traditional approach of linear learning algorithms [30], [32], [33], where vector rather than tensor representation is used and thus the natural structural information is destroyed. It also differs from the MLDA variants in [2], [15], [18], [19], where there are correlations among the extracted features. Another difference from the works in [2], [15] is that a TVP rather than a TTP is used here, and this work takes a systematic approach to solving such a TVP, in contrast with the heuristic approach in [18], [19]. Furthermore, it provides a new approach with constraint enforcement for developing multilinear learning algorithms.

2) The incorporation of an adaptive regularization procedure where the within-class scatter estimate is increased through a data-independent regularization parameter. This takes into account the practical small sample size (SSS) problem, which often arises in biometrics applications, and the iterative nature of UMLDA, in contrast with the scatter estimation without regularization in [2], [15], [18], [19].

3) The adoption of an aggregation scheme that combines several differently initialized and differently regularized UMLDA feature extractors, which produce different features, at the matching score level, to achieve enhanced recognition performance while alleviating the regularization parameter selection problem faced by most regularization methods.

The rest of this paper is organized as follows: Section II introduces the notations and basic multilinear algebra operations, as well as the TVP. In Section III, the problem of UMLDA is formulated and an iterative solution is derived, with an adaptive regularization procedure introduced for better generalization in the SSS scenario. Next, the classifier employed is described and issues regarding the initialization method, projection order, termination criteria, and convergence are addressed, followed by the computational aspects and a brief discussion of the connections to LDA and other MLDA variants. Section IV presents the matching-score-level aggregation of multiple UMLDA feature extractors that are differently initialized and regularized to enhance the recognition performance. In Section V, experiments on two face databases and one gait database are reported. The properties of the proposed UMLDA solutions are first illustrated, and detailed recognition results are then compared against competing linear solutions as well as multilinear solutions. Finally, Section VI draws the conclusion of this work.

II. MULTILINEAR BASICS AND THE TENSOR-TO-VECTOR PROJECTION

This section introduces the foundations that are fundamentally important for the flow of this paper. Firstly, we review the notations and some basic multilinear operations that are necessary for presenting the proposed MLDA solution. Secondly, the TVP used in the proposed algorithm is described in detail. Table I summarizes the important symbols used in this paper for quick reference.

A. Notations and basic multilinear algebra concepts

The notations in this paper follow the conventions in the multilinear algebra, pattern recognition and adaptive learning literature. We denote vectors by lowercase boldface letters, e.g., $\mathbf{x}$; matrices by uppercase boldface letters, e.g., $\mathbf{U}$; and tensors by calligraphic letters, e.g., $\mathcal{A}$. Their elements are denoted with indices in parentheses. Indices are denoted by lowercase letters and span the range from 1 to the uppercase letter of the index, e.g., $n = 1, 2, ..., N$. Throughout this paper, the discussion is focused on real-valued vectors, matrices and tensors only, and the extension to complex-valued data is left for future work. An $N$th-order tensor is denoted as $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N}$. It is addressed by $N$ indices $i_n$, $n = 1, ..., N$, and each $i_n$ addresses the $n$-mode of $\mathcal{A}$. The $n$-mode product of a tensor $\mathcal{A}$ by a matrix $\mathbf{U} \in \mathbb{R}^{J_n \times I_n}$, denoted by $\mathcal{A} \times_n \mathbf{U}$, is a tensor with entries

$(\mathcal{A} \times_n \mathbf{U})(i_1, ..., i_{n-1}, j_n, i_{n+1}, ..., i_N) = \sum_{i_n} \mathcal{A}(i_1, ..., i_N) \cdot \mathbf{U}(j_n, i_n)$.   (1)
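To make these basic operations concrete, the following is a minimal NumPy sketch (not part of the original paper; the function names and the column ordering of the unfolding are my own choices) of the n-mode unfolding and the n-mode product in (1):

import numpy as np

def unfold(A, n):
    # n-mode unfolding A_(n): each column is an n-mode vector of A.
    # Mode n here is 0-based (mode n in the text is axis n-1); the column ordering
    # may differ from a specific convention, but the set of columns is the same.
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def mode_n_product(A, U, n):
    # n-mode product A x_n U for U of shape (J_n, I_n), as in Eq. (1).
    out = np.tensordot(U, A, axes=([1], [n]))   # sum over the n-mode index of A
    return np.moveaxis(out, 0, n)               # place the new J_n index at mode n

# Tiny usage example on a random third-order tensor
A = np.random.rand(4, 5, 6)
U = np.random.rand(3, 5)            # maps the 2nd mode (I_2 = 5) to J_2 = 3
B = mode_n_product(A, U, 1)
print(B.shape)                      # (4, 3, 6)
print(unfold(A, 0).shape)           # (4, 30): columns are the 1-mode vectors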

The scalar product of two tensors $\mathcal{A}, \mathcal{B} \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N}$ is defined as

$\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1} \sum_{i_2} ... \sum_{i_N} \mathcal{A}(i_1, i_2, ..., i_N) \cdot \mathcal{B}(i_1, i_2, ..., i_N)$.   (2)

The $n$-mode vectors of $\mathcal{A}$ are defined as the $I_n$-dimensional vectors obtained from $\mathcal{A}$ by varying the index $i_n$ while keeping all the other indices fixed. A rank-one tensor $\mathcal{A}$ equals the outer product of $N$ vectors, $\mathcal{A} = \mathbf{u}^{(1)} \circ \mathbf{u}^{(2)} \circ ... \circ \mathbf{u}^{(N)}$, which means that $\mathcal{A}(i_1, i_2, ..., i_N) = \mathbf{u}^{(1)}(i_1) \cdot \mathbf{u}^{(2)}(i_2) \cdot ... \cdot \mathbf{u}^{(N)}(i_N)$ for all values of the indices. Unfolding $\mathcal{A}$ along the $n$-mode is denoted as $\mathbf{A}_{(n)} \in \mathbb{R}^{I_n \times (I_1 ... I_{n-1} I_{n+1} ... I_N)}$, and the column vectors of $\mathbf{A}_{(n)}$ are the $n$-mode vectors of $\mathcal{A}$. An example of the 1-mode vectors of a third-order tensor $\mathcal{A}$ can be found in Fig. 1(a).

TABLE I
LIST OF SYMBOLS

$\mathcal{X}_m$: the $m$th input tensor sample, $m = 1, ..., M$
$\mathbf{u}_p^{(n)}$: the $n$-mode projection vector, $n = 1, ..., N$
$p = 1, ..., P$: the index of the EMP
$\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}$: the $p$th EMP
$P$: the number of EMPs in the TVP
$\mathbf{y}_m$: the projection of $\mathcal{X}_m$ by the TVP $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P$
$\mathbf{y}_m(p) = y_{m_p} = \mathbf{g}_p(m)$: the projection of the $m$th sample $\mathcal{X}_m$ by the $p$th EMP $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}$
$S_B^{y_p}$: the between-class scatter of the $p$th projected features $\{y_{m_p}, m = 1, ..., M\}$
$S_W^{y_p}$: the within-class scatter of the $p$th projected features $\{y_{m_p}, m = 1, ..., M\}$
$\mathbf{g}_p$: the $p$th coordinate vector
$F^{y_p} = S_B^{y_p} / S_W^{y_p}$: the Fisher discrimination criterion for the $p$th EMP
$k$: the iteration step index in the UMLDA algorithm
$K$: the maximum number of iterations in UMLDA
$\gamma$: the regularization parameter
$a = 1, ..., A$: the index of the feature extractor in aggregation
$A$: the number of feature extractors to be aggregated
$c_m$: the class label for the $m$th training sample
$vec(\mathcal{A})$: the vectorized representation of the tensor $\mathcal{A}$
$L$: the number of training samples for each class (subject)
$C$: the number of classes (subjects) in training

B. Tensor-to-Vector Projection

The UMLDA framework developed in this paper takes a multilinear subspace (or tensor subspace) [21] approach to feature extraction, where tensorial data is projected into a subspace for better discrimination.

Fig. 1. Illustration of (a) an elementary multilinear projection (EMP), and (b) a tensor-to-vector projection (TVP).

As discussed in Sec. I, there are two general forms of multilinear projection: the TTP [2], [10], [15] and the TVP [18], [19]. Since the projections obtained by TTP can be viewed as a set of interdependent projections [17], the features extracted through TTP are likely to be correlated rather than uncorrelated. Therefore, we choose to develop the UMLDA by determining a subspace of tensor objects through TVP rather than TTP. The TVP projects a tensor to a vector and it can be viewed as multiple projections from a tensor to a scalar, as illustrated in Fig. 1(b), where a TVP of a tensor $\mathcal{A}$ to a $P \times 1$ vector consists of $P$ projections from $\mathcal{A}$ to a scalar. Thus, the projection from a tensor to a scalar is considered first. A tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N}$ can be projected to a point $y$ through $N$ unit projection vectors $\{\mathbf{u}^{(1)T}, \mathbf{u}^{(2)T}, ..., \mathbf{u}^{(N)T}\}$ as

$y = \mathcal{X} \times_1 \mathbf{u}^{(1)T} \times_2 \mathbf{u}^{(2)T} ... \times_N \mathbf{u}^{(N)T}$, with $\|\mathbf{u}^{(n)}\| = 1$ for $n = 1, ..., N$,

where $\|\cdot\|$ is the Euclidean norm for vectors. Using the scalar product (2), this can be written as $y = \langle \mathcal{X}, \mathbf{u}^{(1)} \circ \mathbf{u}^{(2)} \circ ... \circ \mathbf{u}^{(N)} \rangle$. Denoting $\mathcal{U} = \mathbf{u}^{(1)} \circ \mathbf{u}^{(2)} \circ ... \circ \mathbf{u}^{(N)}$, we then have $y = \langle \mathcal{X}, \mathcal{U} \rangle$.

We name this multilinear projection $\{\mathbf{u}^{(1)T}, \mathbf{u}^{(2)T}, ..., \mathbf{u}^{(N)T}\}$ an elementary multilinear projection (EMP), which is the projection of a tensor on a single line (resulting in a scalar) and consists of one projection vector in each mode. Figure 1(a) illustrates an EMP of a third-order tensor $\mathcal{A}$. As pointed out in [17], an EMP can be viewed as a constrained linear projection, since $\langle \mathcal{X}, \mathcal{U} \rangle = \langle vec(\mathcal{X}), vec(\mathcal{U}) \rangle = [vec(\mathcal{U})]^T vec(\mathcal{X})$, where $vec(\mathcal{A})$ denotes the vectorized representation of the tensor $\mathcal{A}$ [34]. Thus, the TVP of a tensor object $\mathcal{X}$ to a vector $\mathbf{y} \in \mathbb{R}^P$ in a $P$-dimensional vector space consists of $P$ EMPs $\{\mathbf{u}_p^{(1)T}, \mathbf{u}_p^{(2)T}, ..., \mathbf{u}_p^{(N)T}\}$, $p = 1, ..., P$, which can be written concisely as $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P$. The TVP from $\mathcal{X}$ to $\mathbf{y}$ is then written as $\mathbf{y} = \mathcal{X} \times_{n=1}^N \{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P$, where the $p$th component of $\mathbf{y}$ is obtained from the $p$th EMP as $\mathbf{y}(p) = \mathcal{X} \times_1 \mathbf{u}_p^{(1)T} \times_2 \mathbf{u}_p^{(2)T} ... \times_N \mathbf{u}_p^{(N)T}$.

III. UNCORRELATED MLDA WITH REGULARIZATION FOR TENSOR OBJECT RECOGNITION

This section proposes the UMLDA-based tensor object recognition system, which is a typical recognition system as shown in Fig. 2. The normalization step is a standard processing step that ensures all input tensors have the same size [10]. At the core of this system is the UMLDA framework for feature extraction presented in this section: the UMLDA with regularization (R-UMLDA). The R-UMLDA produces vector features that can be fed into standard classifiers for classification. Besides the description of the fundamental units in Fig. 2, implementation issues, computational aspects, and the connections with other algorithms are discussed in this section as well.

Fig. 2. UMLDA-based tensor object recognition.

A. The uncorrelated multilinear discriminant analysis with regularization (R-UMLDA)

In the presentation, for the convenience of discussion, the training samples are assumed to be zero-mean^2 so that the constraint of uncorrelated features is the same as that of orthogonal features^3. Before formally stating the objective of the UMLDA, a number of definitions are needed.

^2 When the training sample mean is not zero, it can be subtracted to make the training samples zero-mean.
^3 Let $\mathbf{x}$ and $\mathbf{y}$ be vector observations of the variables x and y. Then, $\mathbf{x}$ and $\mathbf{y}$ are orthogonal iff $\mathbf{x}^T \mathbf{y} = 0$, and x and y are uncorrelated iff $(\mathbf{x} - \bar{\mathbf{x}})^T (\mathbf{y} - \bar{\mathbf{y}}) = 0$, where $\bar{\mathbf{x}}$ and $\bar{\mathbf{y}}$ are the means of $\mathbf{x}$ and $\mathbf{y}$, respectively [35]. Thus, two zero-mean (centered) vectors are uncorrelated when they are orthogonal [36].
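As an aside to Section II-B, a small NumPy sketch (hypothetical helper names, not from the paper) may help make the EMP and the TVP concrete:

import numpy as np
from functools import reduce

def emp_project(X, us):
    # One EMP {u^(1), ..., u^(N)}: y = <X, u^(1) o u^(2) o ... o u^(N)>, a scalar.
    U = reduce(np.multiply.outer, us)       # rank-one tensor from the outer products
    return float(np.sum(X * U))             # scalar product of Eq. (2)

def tvp_project(X, emps):
    # TVP: stack the outputs of the P EMPs into a length-P feature vector.
    return np.array([emp_project(X, us) for us in emps])

# Usage: a second-order tensor (image) projected by P = 2 EMPs
X = np.random.rand(8, 6)
emps = [[np.random.rand(8), np.random.rand(6)] for _ in range(2)]
emps = [[u / np.linalg.norm(u) for u in us] for us in emps]   # unit projection vectors
y = tvp_project(X, emps)
print(y.shape)                                                # (2,)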

The classical FDC in LDA [32] is defined as the scatter ratio for vector samples. Here, we adapt it to scalar samples, which can be viewed as the degenerate version. The $p$th projected (scalar) features are $\{y_{m_p}, m = 1, ..., M\}$, where $M$ is the number of training samples and $y_{m_p}$ is the projection of the $m$th sample $\mathcal{X}_m$ by the $p$th EMP $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}$: $y_{m_p} = \mathcal{X}_m \times_{n=1}^N \{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}$. The corresponding between-class scatter $S_B^{y_p}$ and within-class scatter $S_W^{y_p}$ are

$S_B^{y_p} = \sum_{c=1}^C N_c (\bar{y}_{c_p} - \bar{y}_p)^2$,  $S_W^{y_p} = \sum_{m=1}^M (y_{m_p} - \bar{y}_{c_m,p})^2$,   (3)

where $C$ is the number of classes, $N_c$ is the number of samples for class $c$, $c_m$ is the class label for the $m$th training sample, $\bar{y}_p = \frac{1}{M} \sum_m y_{m_p} = 0$, and $\bar{y}_{c_p} = \frac{1}{N_c} \sum_{m, c_m = c} y_{m_p}$. Thus, the FDC for the $p$th scalar samples is $F^{y_p} = S_B^{y_p} / S_W^{y_p}$. In addition, let $\mathbf{g}_p$ denote the $p$th coordinate vector, with its $m$th component $\mathbf{g}_p(m) = y_{m_p}$.

A formal definition of the multilinear feature extraction problem to be solved in UMLDA is then given in the following. A set of $M$ training tensor object samples $\{\mathcal{X}_1, \mathcal{X}_2, ..., \mathcal{X}_M\}$ (with zero mean) is available for training. Each tensor object $\mathcal{X}_m \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N}$ assumes values in the tensor space $\mathbb{R}^{I_1} \otimes \mathbb{R}^{I_2} \otimes ... \otimes \mathbb{R}^{I_N}$, where $I_n$ is the $n$-mode dimension of the tensor and $\otimes$ denotes the Kronecker product. The objective of the UMLDA is to find a TVP, which consists of $P$ EMPs $\{\mathbf{u}_p^{(n)} \in \mathbb{R}^{I_n \times 1}, n = 1, ..., N\}_{p=1}^P$, mapping from the original tensor space $\mathbb{R}^{I_1} \otimes \mathbb{R}^{I_2} \otimes ... \otimes \mathbb{R}^{I_N}$ into a vector subspace $\mathbb{R}^P$ (with $P < \prod_{n=1}^N I_n$):

$\mathbf{y}_m = \mathcal{X}_m \times_{n=1}^N \{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P$, $m = 1, ..., M$,   (4)

such that the FDC $F^{y_p}$ is maximized in each EMP direction, subject to the constraint that the $P$ coordinate vectors $\{\mathbf{g}_p \in \mathbb{R}^M, p = 1, ..., P\}$ are uncorrelated. In other words, the UMLDA objective is to determine a set of $P$ EMPs $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P$ that maximize the scatter ratio while producing features with zero correlation. Thus, the objective function for the $p$th EMP is

$\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\} = \arg\max F^{y_p}$, subject to $\frac{\mathbf{g}_p^T \mathbf{g}_q}{\|\mathbf{g}_p\| \|\mathbf{g}_q\|} = \delta_{pq}$, $p, q = 1, ..., P$,   (5)

where $\delta_{pq}$ is the Kronecker delta (defined as 1 for $p = q$ and as 0 otherwise).

To solve this problem, we follow the successive determination approach in the derivation of the ULDA in [28]. The $P$ EMPs $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P$ are determined sequentially (one by one) in $P$ steps, with the $p$th step obtaining the $p$th EMP. This stepwise process proceeds as follows:

Step 1: Determine the first EMP $\{\mathbf{u}_1^{(n)T}, n = 1, ..., N\}$ by maximizing $F^{y_1}$ without any constraint.

Step 2: Determine the second EMP $\{\mathbf{u}_2^{(n)T}, n = 1, ..., N\}$ by maximizing $F^{y_2}$ subject to the constraint that $\mathbf{g}_2^T \mathbf{g}_1 = 0$.

Step 3: Determine the third EMP $\{\mathbf{u}_3^{(n)T}, n = 1, ..., N\}$ by maximizing $F^{y_3}$ subject to the constraints that $\mathbf{g}_3^T \mathbf{g}_1 = 0$ and $\mathbf{g}_3^T \mathbf{g}_2 = 0$.

Step $p$ ($p = 4, ..., P$): Determine the $p$th EMP $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}$ by maximizing $F^{y_p}$ subject to the constraint that $\mathbf{g}_p^T \mathbf{g}_q = 0$ for $q = 1, ..., p-1$.

In the following, the algorithm to compute these EMPs is presented in detail; it is summarized in the pseudo-code in Fig. 3. In the figure, the stepwise process described above corresponds to the loop indexed by $p$.

Input: A set of zero-mean tensor samples $\{\mathcal{X}_m \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N}, m = 1, ..., M\}$ with class labels $\mathbf{c} \in \mathbb{R}^M$, the desired feature vector length $P$, the regularization parameter $\gamma$, the maximum number of iterations $K$, and a small number $\epsilon$ for testing convergence.
Output: The $P$ EMPs $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P$ that best separate classes in the projected space.
R-UMLDA algorithm:
For $p = 1 : P$ (step $p$: determine the $p$th EMP)
    If $p > 1$, calculate the coordinate vector $\mathbf{g}_{p-1}$: $\mathbf{g}_{p-1}(m) = \mathcal{X}_m \times_1 \mathbf{u}_{p-1}^{(1)T} \times_2 \mathbf{u}_{p-1}^{(2)T} ... \times_N \mathbf{u}_{p-1}^{(N)T}$.
    For $n = 1, ..., N$, initialize $\mathbf{u}_p^{(n)}(0) \in \mathbb{R}^{I_n}$.
    For $k = 1 : K$
        For $n = 1 : N$
            Calculate $\tilde{\mathbf{y}}_m^{(n)} = \mathcal{X}_m \times_1 \mathbf{u}_p^{(1)T}(k) ... \times_{n-1} \mathbf{u}_p^{(n-1)T}(k) \times_{n+1} \mathbf{u}_p^{(n+1)T}(k-1) ... \times_N \mathbf{u}_p^{(N)T}(k-1)$, for $m = 1, ..., M$.
            Calculate $\mathbf{R}_p^{(n)}$, $\mathbf{S}_{B_p}^{(n)}$, and $\mathbf{S}_{W_p}^{(n)}$. Set $\mathbf{u}_p^{(n)}(k)$ to be the (unit) eigenvector of $(\mathbf{S}_{W_p}^{(n)})^{-1} \mathbf{R}_p^{(n)} \mathbf{S}_{B_p}^{(n)}$ associated with the largest eigenvalue.
        If $k = K$ or dist$(\mathbf{u}_p^{(n)}(k), \mathbf{u}_p^{(n)}(k-1)) < \epsilon$ for all $n$, set $\mathbf{u}_p^{(n)} = \mathbf{u}_p^{(n)}(k)$ for all $n$ and break.
    Output $\{\mathbf{u}_p^{(n)}\}$. Go to step $p+1$ if $p < P$. Stop if $p = P$.

Fig. 3. The pseudo-code implementation of the R-UMLDA algorithm for feature extraction from tensor objects.

To solve for the $p$th EMP $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}$, there are $N$ sets of parameters corresponding to the $N$ projection vectors to be determined, $\mathbf{u}_p^{(1)}, \mathbf{u}_p^{(2)}, ..., \mathbf{u}_p^{(N)}$, one in each mode. Obviously, we would like to determine these $N$ sets of parameters ($N$ projection vectors) in all modes simultaneously so that $F^{y_p}$ is (globally) maximized, subject to the zero-correlation constraint. Unfortunately, this is a rather complicated nonlinear problem without an existing optimal solution, except when $N = 1$, which is the classical linear case where only one projection vector is to be solved.

Therefore, we derive a suboptimal solution instead, following the principle of the alternating least squares (ALS) algorithm [37]-[39], where a multilinear (least-squares) optimization problem is reduced to smaller conditional subproblems that can be solved through simple established methods employed in the linear case. Thus, for each EMP to be determined, the parameters of the projection vector for each mode $n^*$ are estimated one by one separately, conditioned on $\{\mathbf{u}_p^{(n)}, n \neq n^*\}$, the parameter values of the projection vectors for the other modes^4. By fixing $\{\mathbf{u}_p^{(n)}, n \neq n^*\}$, a new objective function depending only on $\mathbf{u}_p^{(n^*)}$ is formulated, and this conditional subproblem is linear and much simpler. The parameter estimates for each mode are obtained in this way sequentially (the $n$ loop) and iteratively (the $k$ loop) until a stopping criterion is met. We name this iterative method the alternating projection method (APM). It corresponds to the loop indexed by $k$ in Fig. 3, and in each iteration $k$, the loop indexed by $n$ in Fig. 3 consists of the $N$ conditional subproblems.

To solve for $\mathbf{u}_p^{(n^*)}$ in the $n^*$-mode, assuming that $\{\mathbf{u}_p^{(n)}, n \neq n^*\}$ is given, the tensor samples are first projected in these $(N-1)$ modes $\{n \neq n^*\}$ to obtain the vectors

$\tilde{\mathbf{y}}_m^{(n^*)} = \mathcal{X}_m \times_1 \mathbf{u}_p^{(1)T} ... \times_{n^*-1} \mathbf{u}_p^{(n^*-1)T} \times_{n^*+1} \mathbf{u}_p^{(n^*+1)T} ... \times_N \mathbf{u}_p^{(N)T}$,   (6)

where $\tilde{\mathbf{y}}_m^{(n^*)} \in \mathbb{R}^{I_{n^*}}$. This conditional subproblem then becomes determining the $\mathbf{u}_p^{(n^*)}$ that projects the vector samples $\{\tilde{\mathbf{y}}_m^{(n^*)}, m = 1, ..., M\}$ onto a line so that the scatter ratio is maximized, subject to the zero-correlation constraint. This is a (linear and simpler) ULDA problem with the input samples $\{\tilde{\mathbf{y}}_m^{(n^*)}, m = 1, ..., M\}$. The corresponding between-class scatter matrix $\mathbf{S}_{B_p}^{(n^*)}$ and (regularized) within-class scatter matrix $\mathbf{S}_{W_p}^{(n^*)}$ are then defined as

$\mathbf{S}_{B_p}^{(n^*)} = \sum_{c=1}^C N_c (\bar{\tilde{\mathbf{y}}}_c^{(n^*)} - \bar{\tilde{\mathbf{y}}}^{(n^*)})(\bar{\tilde{\mathbf{y}}}_c^{(n^*)} - \bar{\tilde{\mathbf{y}}}^{(n^*)})^T$,   (7)

$\mathbf{S}_{W_p}^{(n^*)} = \sum_{m=1}^M (\tilde{\mathbf{y}}_m^{(n^*)} - \bar{\tilde{\mathbf{y}}}_{c_m}^{(n^*)})(\tilde{\mathbf{y}}_m^{(n^*)} - \bar{\tilde{\mathbf{y}}}_{c_m}^{(n^*)})^T + \gamma \cdot \lambda_{max}(\breve{\mathbf{S}}_W^{(n^*)}) \cdot \mathbf{I}_{I_{n^*}}$,   (8)

where $\bar{\tilde{\mathbf{y}}}_c^{(n^*)} = \frac{1}{N_c} \sum_{m, c_m = c} \tilde{\mathbf{y}}_m^{(n^*)}$, $\bar{\tilde{\mathbf{y}}}^{(n^*)} = \frac{1}{M} \sum_m \tilde{\mathbf{y}}_m^{(n^*)} = \mathbf{0}$, $\gamma \geq 0$ is a regularization parameter, $\mathbf{I}_{I_{n^*}}$ is an identity matrix of size $I_{n^*} \times I_{n^*}$, and $\lambda_{max}(\breve{\mathbf{S}}_W^{(n^*)})$ is the maximum eigenvalue of $\breve{\mathbf{S}}_W^{(n^*)}$, the within-class scatter matrix for the $n^*$-mode vectors of the training samples, defined as

$\breve{\mathbf{S}}_W^{(n^*)} = \sum_{m=1}^M (\mathbf{X}_{m(n^*)} - \bar{\mathbf{X}}_{c_m(n^*)})(\mathbf{X}_{m(n^*)} - \bar{\mathbf{X}}_{c_m(n^*)})^T$,   (9)

where $\bar{\mathbf{X}}_{c(n^*)}$ is the $n^*$-mode unfolded matrix of the class mean tensor $\bar{\mathcal{X}}_c = \frac{1}{N_c} \sum_{m, c_m = c} \mathcal{X}_m$.

^4 It should be noted that this conditioning implies that the estimation of the parameters in one mode is dependent on the parameters in all the other modes.
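To make the mode-$n^*$ subproblem concrete, the following is a minimal NumPy sketch of the scatter computation in (7)-(9), including the adaptive regularization term (helper names are my own; the partial projections and the $n^*$-mode unfolded samples are assumed to be given):

import numpy as np

def mode_scatters(Y, labels, X_unf, gamma):
    # Between-class scatter (7) and regularized within-class scatter (8) for one
    # mode-n* conditional subproblem.
    #   Y:      (M, I_n*) array of partial projections ytilde_m^(n*)
    #   labels: length-M array of class labels c_m
    #   X_unf:  (M, I_n*, J) array of n*-mode unfolded training tensors, used only
    #           for the adaptive scaling term lambda_max of Eq. (9)
    M, I = Y.shape
    S_B = np.zeros((I, I))
    S_W = np.zeros((I, I))
    mean_all = Y.mean(axis=0)                       # ~0 for centered (zero-mean) data
    for c in np.unique(labels):
        Yc = Y[labels == c]
        d = (Yc.mean(axis=0) - mean_all)[:, None]
        S_B += len(Yc) * d @ d.T                    # Eq. (7)
        Dc = Yc - Yc.mean(axis=0)
        S_W += Dc.T @ Dc                            # first term of Eq. (8)
    # Adaptive regularization: gamma scaled by the largest eigenvalue of the
    # n*-mode within-class scatter of the unfolded samples, Eq. (9).
    S_W_unf = np.zeros((I, I))
    for c in np.unique(labels):
        Dc = X_unf[labels == c] - X_unf[labels == c].mean(axis=0)
        S_W_unf += np.einsum('mij,mkj->ik', Dc, Dc)
    S_W += gamma * np.linalg.eigvalsh(S_W_unf).max() * np.eye(I)
    return S_B, S_W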

In the following, the motivation for introducing the regularization factor is explained. In the targeted biometrics applications (and many other applications as well), the dimensionality of the input data is very high while, at the same time, the number of training samples for each class is often too small to represent the true characteristics of their classes, resulting in the well-known SSS problem [30]. Furthermore, our empirical study of the iterative UMLDA algorithm (i.e., $\gamma = 0$) under the SSS scenario indicates that the iterations tend to minimize the within-class scatter towards zero in order to maximize the scatter ratio (since the scatter ratio reaches its maximum of infinity when the within-class scatter is zero and the between-class scatter is nonzero). However, the within-class scatter estimated on the training data is usually much smaller than the real within-class scatter, due to the limited number of samples for each class. Therefore, regularization [40], which has been used to combat the singularity problem^5 in LDA-based algorithms under the SSS scenario [30], [41], is adopted here to improve the generalization capability of UMLDA under the SSS scenario, leading to the R-UMLDA. The regularization term is introduced in (8) so that, during the iteration, less focus is put on shrinking the within-class scatter. Moreover, the regularization introduced is adaptive, since $\gamma$ is the only regularization parameter and the regularization term in the $n^*$-mode is scaled by $\lambda_{max}(\breve{\mathbf{S}}_W^{(n^*)})$, which is an approximate estimate of the $n^*$-mode within-class scatter in the training data. The basic UMLDA is obtained by setting $\gamma = 0$.

With (7) and (8), we are ready to solve for the $P$ EMPs. For $p = 1$, the $\mathbf{u}_1^{(n^*)}$ that maximizes the FDC $\frac{\mathbf{u}_1^{(n^*)T} \mathbf{S}_{B_1}^{(n^*)} \mathbf{u}_1^{(n^*)}}{\mathbf{u}_1^{(n^*)T} \mathbf{S}_{W_1}^{(n^*)} \mathbf{u}_1^{(n^*)}}$ in the projected space is obtained as the unit eigenvector of $(\mathbf{S}_{W_1}^{(n^*)})^{-1} \mathbf{S}_{B_1}^{(n^*)}$ associated with the largest eigenvalue, for a nonsingular $\mathbf{S}_{W_1}^{(n^*)}$. Next, we show how to determine the $p$th ($p > 1$) EMP given the first $(p-1)$ EMPs. Given the first $(p-1)$ EMPs, the $p$th EMP aims to maximize the scatter ratio $F^{y_p}$, subject to the constraint that the features projected by the $p$th EMP are uncorrelated with those projected by the first $(p-1)$ EMPs. Let $\tilde{\mathbf{Y}}_p^{(n^*)} \in \mathbb{R}^{I_{n^*} \times M}$ be a matrix with its $m$th column equal to $\tilde{\mathbf{y}}_m^{(n^*)}$, i.e., $\tilde{\mathbf{Y}}_p^{(n^*)} = [\tilde{\mathbf{y}}_1^{(n^*)}, \tilde{\mathbf{y}}_2^{(n^*)}, ..., \tilde{\mathbf{y}}_M^{(n^*)}]$; then the $p$th coordinate vector is obtained as $\mathbf{g}_p = \tilde{\mathbf{Y}}_p^{(n^*)T} \mathbf{u}_p^{(n^*)}$. The constraint that $\mathbf{g}_p$ is uncorrelated with $\{\mathbf{g}_q, q = 1, ..., p-1\}$ can be written as

$\mathbf{g}_p^T \mathbf{g}_q = \mathbf{u}_p^{(n^*)T} \tilde{\mathbf{Y}}_p^{(n^*)} \mathbf{g}_q = 0$, $q = 1, ..., p-1$.   (10)

Thus, $\mathbf{u}_p^{(n^*)}$ ($p > 1$) can be determined by solving the following constrained optimization problem:

$\mathbf{u}_p^{(n^*)} = \arg\max \frac{\mathbf{u}_p^{(n^*)T} \mathbf{S}_{B_p}^{(n^*)} \mathbf{u}_p^{(n^*)}}{\mathbf{u}_p^{(n^*)T} \mathbf{S}_{W_p}^{(n^*)} \mathbf{u}_p^{(n^*)}}$, subject to $\mathbf{u}_p^{(n^*)T} \tilde{\mathbf{Y}}_p^{(n^*)} \mathbf{g}_q = 0$, $q = 1, ..., p-1$.   (11)

^5 While the numerical singularity problem is common for LDA-based algorithms, as pointed out in [2], this is not the case for MLDA. Therefore, the motivation for using regularization here is different from the linear case in this respect.

The solution is given by the following theorem for a nonsingular $\mathbf{S}_{W_p}^{(n^*)}$:

Theorem 1: When $\mathbf{S}_{W_p}^{(n^*)}$ is nonsingular, the solution to the problem (11) is the (unit-length) generalized eigenvector corresponding to the largest generalized eigenvalue of the following generalized eigenvalue problem:

$\mathbf{R}_p^{(n^*)} \mathbf{S}_{B_p}^{(n^*)} \mathbf{u} = \lambda \mathbf{S}_{W_p}^{(n^*)} \mathbf{u}$,   (12)

where

$\mathbf{R}_p^{(n^*)} = \mathbf{I}_{I_{n^*}} - \tilde{\mathbf{Y}}_p^{(n^*)} \mathbf{G}_{p-1} \left( \mathbf{G}_{p-1}^T \tilde{\mathbf{Y}}_p^{(n^*)T} \mathbf{S}_{W_p}^{(n^*)-1} \tilde{\mathbf{Y}}_p^{(n^*)} \mathbf{G}_{p-1} \right)^{-1} \mathbf{G}_{p-1}^T \tilde{\mathbf{Y}}_p^{(n^*)T} \mathbf{S}_{W_p}^{(n^*)-1}$,   (13)

$\mathbf{G}_{p-1} = [\mathbf{g}_1 \ \mathbf{g}_2 \ ... \ \mathbf{g}_{p-1}] \in \mathbb{R}^{M \times (p-1)}$.   (14)

Proof: The proof of Theorem 1 is given in Appendix A.

By setting $\mathbf{R}_1^{(n^*)} = \mathbf{I}_{I_{n^*}}$ and applying Theorem 1, we have a unified solution for R-UMLDA when $\mathbf{S}_{W_p}^{(n^*)}$ is nonsingular: for $p = 1, ..., P$, $\mathbf{u}_p^{(n^*)}$ is obtained as the unit eigenvector of $(\mathbf{S}_{W_p}^{(n^*)})^{-1} \mathbf{R}_p^{(n^*)} \mathbf{S}_{B_p}^{(n^*)}$ associated with the largest eigenvalue. Next, we describe the classification of R-UMLDA features and then give detailed discussions of implementation issues and connections to other algorithms.

B. Classification of R-UMLDA features

Despite working directly on tensorial data, the proposed R-UMLDA algorithm is a feature extraction algorithm that produces feature vectors like traditional linear algorithms (through the TVP). For recognition tasks, the extracted features are fed into a classifier to obtain the class label, as shown in Fig. 2. In this work, the feature vectors obtained through R-UMLDA are fed into the nearest-neighbor classifier (NNC) with the Euclidean distance measure for classification. It should be noted at this point that, since this paper focuses on multilinear feature extraction, a simple classifier is preferred so that the recognition performance is mainly contributed by the feature extraction algorithms rather than the classifier. The classification accuracy of the proposed method (and the other methods compared in Section V) is expected to improve if a more sophisticated classifier such as the support vector machine (SVM) is used instead of the NNC. However, such an experiment is beyond the scope of this paper.

To classify a test sample $\mathcal{X}$ using the NNC, $\mathcal{X}$ is first projected to a feature vector $\mathbf{y}$ through the TVP obtained by R-UMLDA: $\mathbf{y} = \mathcal{X} \times_{n=1}^N \{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P$. The nearest neighbor is then found as $m^* = \arg\min_m \|\mathbf{y} - \mathbf{y}_m\|$, where $\mathbf{y}_m$ is the feature vector for the $m$th training sample. The class label of the $m^*$th training sample, $c_{m^*}$, is then assigned to the test sample $\mathcal{X}$.
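Returning to the feature extraction step, the unified solution of Section III-A (the leading generalized eigenvector of (12) with $\mathbf{R}_p^{(n^*)}$ from (13)) could be computed as in the following NumPy/SciPy sketch; the helper name is my own, and a nonsingular within-class scatter is assumed:

import numpy as np
from scipy.linalg import eig

def constrained_emp_vector(S_B, S_W, Ytilde, G_prev=None):
    # Leading generalized eigenvector of Eq. (12): R S_B u = lambda S_W u,
    # with R from Eq. (13) enforcing zero correlation with the first p-1 EMPs.
    #   Ytilde: (I_n*, M) matrix of partial projections (columns ytilde_m^(n*))
    #   G_prev: (M, p-1) matrix [g_1 ... g_{p-1}] of Eq. (14), or None when p = 1
    I = S_B.shape[0]
    S_W_inv = np.linalg.inv(S_W)                    # assumes S_W is nonsingular
    if G_prev is None or G_prev.shape[1] == 0:
        R = np.eye(I)                               # p = 1: unconstrained case
    else:
        YG = Ytilde @ G_prev                        # I x (p-1)
        mid = np.linalg.inv(G_prev.T @ Ytilde.T @ S_W_inv @ YG)
        R = np.eye(I) - YG @ mid @ G_prev.T @ Ytilde.T @ S_W_inv   # Eq. (13)
    vals, vecs = eig(R @ S_B, S_W)                  # generalized eigenvalue problem
    u = np.real(vecs[:, np.argmax(np.real(vals))])
    return u / np.linalg.norm(u)                    # unit-length projection vector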

C. Initialization, projection order, termination, and convergence

In this subsection, we discuss the various implementation issues of R-UMLDA, in the order of the algorithm flow in Fig. 3: initialization, projection order, termination, and convergence.

As the determination of each EMP $\{\mathbf{u}_p^{(n)}, n = 1, ..., N\}$ is an iterative procedure due to the nature of the R-UMLDA, as in other multilinear learning algorithms [1], [2], [15], [42]-[44], initial estimates for the projection vectors $\{\mathbf{u}_p^{(n)}\}$ are necessary. However, there is no guidance from either the algorithm or the data on the best initialization that could result in the best separation of the classes in the feature space. Thus, the determination of the optimal initialization in R-UMLDA is still an open problem, as in most iterative algorithms, including other multilinear learning algorithms [2], [15], [18], [19]. In this work, we empirically study two simple and commonly used initialization methods [17]: uniform initialization and random initialization [18], [19], neither of which depends on the data. In the uniform initialization, all $n$-mode projection vectors are initialized to have unit length and the same value along the $I_n$ dimensions in the $n$-mode, which is equivalent to the all-ones vector $\mathbf{1}$ with proper normalization. In random initialization, each element of the $n$-mode projection vectors is drawn from a zero-mean uniform distribution on $[-0.5, 0.5]$, and the initialized projection vectors are normalized to have unit length. Our empirical studies in Sec. V-C indicate that the results of R-UMLDA are affected by initialization, and that the uniform initialization gives better results.

The mode ordering (the inner-most for loop in Fig. 3, indexed by $n$) in computing the projection vectors, named the projection order in this work, affects the solution as well. Similar to initialization, there is no way to determine the optimal projection order and it is considered to be an open problem too. Empirical studies on the effects of the projection order indicate that, with all the other algorithm settings fixed, altering the projection order does result in some performance differences, but there is no guidance from either the data or the algorithm on which projection order is best. Therefore, we have no preference for a particular projection order and, in practice, we solve the projection vectors sequentially (from the 1-mode to the $N$-mode), as in other multilinear algorithms [1], [2], [15], [19].

Remark 1: Although we are not able to determine the optimal initialization and the optimal projection order, the aggregation scheme suggested in Sec. IV reduces the significance of their optimal determination.

As seen from Fig. 3, the termination criterion can simply be set to a maximum number of iterations $K$, or it can be set by examining the convergence of the projection vectors: dist$(\mathbf{u}_p^{(n)}(k), \mathbf{u}_p^{(n)}(k-1)) < \epsilon$, where $\epsilon$ is a user-defined small threshold (e.g., $\epsilon = 10^{-3}$), and this distance is defined as

dist$(\mathbf{u}_p^{(n)}(k), \mathbf{u}_p^{(n)}(k-1)) = \min\left( \|\mathbf{u}_p^{(n)}(k) + \mathbf{u}_p^{(n)}(k-1)\|, \|\mathbf{u}_p^{(n)}(k) - \mathbf{u}_p^{(n)}(k-1)\| \right)$   (15)

since eigenvectors are unique up to sign.
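A small sketch of the two initialization schemes and the sign-invariant distance (15), under the same assumptions as the earlier snippets (function names are illustrative, not from the paper):

import numpy as np

def init_projection_vectors(dims, method="uniform", rng=None):
    # Uniform or random initialization of the n-mode projection vectors.
    rng = np.random.default_rng() if rng is None else rng
    us = []
    for I_n in dims:
        if method == "uniform":
            u = np.ones(I_n)                        # all-ones vector
        else:
            u = rng.uniform(-0.5, 0.5, size=I_n)    # zero-mean uniform on [-0.5, 0.5]
        us.append(u / np.linalg.norm(u))            # normalize to unit length
    return us

def proj_dist(u_new, u_old):
    # Sign-invariant distance of Eq. (15), since eigenvectors are unique up to sign.
    return min(np.linalg.norm(u_new + u_old), np.linalg.norm(u_new - u_old))

# Termination test for one EMP in the k-loop of Fig. 3 (epsilon = 1e-3):
# converged = all(proj_dist(u_new[n], u_old[n]) < 1e-3 for n in range(len(u_new)))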

As will be shown in Sec. V-C, the recognition performance increases slowly after the first few iterations. Therefore, in practice the iteration can be terminated by setting $K$ for convenience, especially when computational cost is a concern.

D. Computational aspects of UMLDA

Next, the computational complexity and memory requirements of UMLDA are analyzed, in a similar way as in [10]. In the analysis, it is assumed that $I_1 = I_2 = ... = I_N = \left( \prod_{n=1}^N I_n \right)^{\frac{1}{N}} = I$ for simplicity. From a computational complexity point of view, the most demanding steps involve the calculation of the projections $\tilde{\mathbf{y}}_m^{(n)}$, the computation of $\mathbf{S}_{B_p}^{(n)}$, $\mathbf{S}_{W_p}^{(n)}$ and $\mathbf{R}_p^{(n)}$, and the determination of the leading eigenvector of $(\mathbf{S}_{W_p}^{(n)})^{-1} \mathbf{R}_p^{(n)} \mathbf{S}_{B_p}^{(n)}$. The complexities of calculating $\tilde{\mathbf{y}}_m^{(n)}$ for $m = 1, ..., M$, $\mathbf{S}_{B_p}^{(n)}$, and $\mathbf{S}_{W_p}^{(n)}$ are of order $O(M \cdot \prod_{n=2}^N I_n)$, $O(C \cdot I^2)$, and $O(M \cdot I^2)$, respectively. The computation of $\mathbf{R}_p^{(n)}$ is of order

$O\left( I^3 + (p-1)\left[ IM + 2I^2 + (p-1)I + (p-1)^2 \right] \right)$.   (16)

Lastly, the computation of $(\mathbf{S}_{W_p}^{(n)})^{-1} \mathbf{R}_p^{(n)} \mathbf{S}_{B_p}^{(n)}$ and its eigendecomposition^6 can be completed in $O(2I^3)$ and $O(I^3)$, respectively. Therefore, the overall computational complexity per mode $n$ for a single iteration $k$ during step $p$ is

$O\left( M \prod_{n=2}^N I_n + (C + M)I^2 + (p-1)\left[ IM + 2I^2 + (p-1)^2 + 2I(p-1) \right] + 4I^3 \right)$.   (17)

With respect to the memory requirement, as in MPCA [10], the respective computations can be done incrementally by reading $\mathcal{X}_m$ sequentially. Hence, the memory needed for the UMLDA algorithm can be as low as $O(I^N)$, except for $N = 1$. It should be noted at this point that sequential reading may lead to a higher I/O cost.

From the analysis reported above, it can be seen that, as a sequential iterative solution, UMLDA may result in higher computational and I/O costs. Nonetheless, since the operations in solving the UMLDA projection are usually performed offline, only during the training phase, the additional computational and I/O costs due to iterations and sequential processing do not constitute a comparative disadvantage. During testing, the extraction (projection) of features from a test sample is a simple linear operation with a computational complexity similar to that of linear subspace projection algorithms.

^6 UMLDA needs only the largest eigenvalue and the corresponding eigenvector, so more efficient computational methods may be applied in practice.

E. Connections to DATER, GTDA, TR1DA and LDA

Detailed discussions of the relationships between UMLDA, DATER, GTDA, TR1DA and LDA are presented in [17] and summarized in Appendix B for completeness. Briefly, the UMLDA is an MLDA variant that maximizes the scatter ratio through a TVP. LDA is the special case of UMLDA with $N = 1$. DATER is an MLDA variant also based on the scatter ratio, but it solves a TTP rather than a TVP. GTDA is an MLDA variant that maximizes the scatter difference through a TTP. TR1DA also solves for a TVP, as UMLDA does; however, it maximizes the scatter difference and is a heuristic approach with residue calculation, originally proposed for tensor approximation.

IV. AGGREGATION OF R-UMLDA RECOGNIZERS

This section proposes the aggregation of a number of differently initialized and regularized UMLDA recognizers for enhanced performance, which is motivated by two properties of the basic R-UMLDA recognizer in Fig. 2. On one hand, as will be shown in Sec. V-C, the number of useful discriminative features that can be extracted by a single R-UMLDA is limited, partly due to the fact that the EMPs to be solved in R-UMLDA correspond to very constrained situations in the linear case. On the other hand, since the R-UMLDA is affected by initialization and regularization, which cannot be optimally determined, different initialization or regularization could result in different discriminative features (also see Sec. V-C). From the generalization theory explaining the success of the random subspace method [45], bagging and boosting [46]-[48], the sensitivity of the R-UMLDA to initialization and regularization suggests that R-UMLDA is not a very stable learner (feature extractor) and is thus well suited for ensemble-based learning. Therefore, we propose the aggregation of several differently initialized and regularized UMLDA feature extractors to obtain the regularized UMLDA with aggregation (R-UMLDA-A) recognition system, so that multiple R-UMLDA recognizers can work together to achieve better performance on tensor object recognition.

Remark 2: A different projection order could also result in different features, so R-UMLDA with different projection orders could be aggregated as well. However, since the effects of different projection orders are similar to those of different initializations and the number of possible projection orders (which is $N!$) is much smaller than the number of possible initializations (which is infinite), we fix the projection order and vary only the initialization and regularization in this work.

There are various ways to combine (or fuse) several extracted features, including feature-level fusion [49], fusion at the matching score level [50], [51], and more advanced ensemble-based learning such as boosting [46], [52], [53]. By a similar argument to the choice of a simple classifier, the simple sum rule for combining matching scores is used in this work, since the focus here is on feature extraction.

Although a more sophisticated method such as boosting is expected to achieve better results, the investigation of alternative combination methods, such as other combination rules and feature-level fusion, is beyond the scope of this paper and will be the topic of a forthcoming paper.

Since high diversity of the learners to be combined is preferred in ensemble-based learning [52], we choose to use both uniform and random initializations in R-UMLDA-A for more diversity. Thus, although we are not able to determine the best initialization, we aggregate several R-UMLDA extractors with different initializations so that complementary discriminative features work together to separate classes better. Furthermore, to introduce even more diversity and alleviate the problem of regularization parameter selection at the same time, we propose to sample the regularization parameter $\gamma_a$ from an interval $[10^{-7}, 10^{-2}]$, which is empirically chosen to cover a wide range of $\gamma$, uniformly in log scale so that each feature extractor is differently regularized, where $a = 1, ..., A$ is the index of the individual R-UMLDA feature extractor and $A$ is the number of R-UMLDA feature extractors to be aggregated.

Figure 4 provides the pseudo-code implementation of the R-UMLDA-A for tensor object recognition. The input training samples $\{\mathcal{X}_m\}$ are fed into $A$ differently initialized and regularized UMLDA feature extractors described in Fig. 3, with parameters $P$, $K$, and $\gamma_a$, to obtain a set of $A$ TVPs $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P(a)$, $a = 1, ..., A$. The training samples $\{\mathcal{X}_m\}$ are then projected to R-UMLDA feature vectors $\{\mathbf{y}_{m(a)}\}$ using the obtained TVPs. To classify a test sample $\mathcal{X}$, it is first projected to $A$ feature vectors $\{\mathbf{y}_{(a)}\}$ using the $A$ TVPs. Next, for the $a$th R-UMLDA feature extractor, we calculate the nearest-neighbor distance of the test sample $\mathcal{X}$ to each candidate class $c$ as

$d(\mathcal{X}, c, a) = \min_{m, c_m = c} \|\mathbf{y}_{(a)} - \mathbf{y}_{m(a)}\|$.   (18)

The range of $d(\mathcal{X}, c, a)$ is then matched to the interval $[0, 1]$ as

$\tilde{d}(\mathcal{X}, c, a) = \frac{d(\mathcal{X}, c, a) - \min_c d(\mathcal{X}, c, a)}{\max_c d(\mathcal{X}, c, a) - \min_c d(\mathcal{X}, c, a)}$.   (19)

Finally, the aggregated nearest-neighbor distance is obtained employing the simple sum rule as

$\bar{d}(\mathcal{X}, c) = \sum_{a=1}^A \tilde{d}(\mathcal{X}, c, a)$,   (20)

and the test sample $\mathcal{X}$ is assigned the label $c^* = \arg\min_c \bar{d}(\mathcal{X}, c)$.
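A compact sketch of this matching-score-level aggregation (18)-(20), assuming the $A$ sets of R-UMLDA features have already been extracted (illustrative helper names, not from the paper):

import numpy as np

def aggregate_label(test_feats, train_feats, train_labels):
    # Matching-score-level aggregation of A R-UMLDA recognizers with the sum rule,
    # following Eqs. (18)-(20).
    #   test_feats:   list of A feature vectors (length P) for the test sample
    #   train_feats:  list of A arrays of shape (M, P) with the training features
    #   train_labels: length-M array of class labels
    classes = np.unique(train_labels)
    total = np.zeros(len(classes))
    for a in range(len(test_feats)):
        # nearest-neighbor distance to each candidate class, Eq. (18)
        d = np.array([np.min(np.linalg.norm(train_feats[a][train_labels == c]
                                            - test_feats[a], axis=1))
                      for c in classes])
        spread = d.max() - d.min()
        if spread > 0:
            d = (d - d.min()) / spread               # match the range to [0, 1], Eq. (19)
        total += d                                   # sum rule, Eq. (20)
    return classes[np.argmin(total)]                 # label with the smallest aggregated distance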

Input: A set of zero-mean tensor samples $\{\mathcal{X}_m \in \mathbb{R}^{I_1 \times I_2 \times ... \times I_N}, m = 1, ..., M\}$ with class labels $\mathbf{c} \in \mathbb{R}^M$, a test tensor sample $\mathcal{X}$, the desired feature vector length $P$, the R-UMLDA feature extractor (Fig. 3), the maximum number of iterations $K$, and the number $A$ of R-UMLDA extractors to be aggregated.
Output: The class label for $\mathcal{X}$.
R-UMLDA-A algorithm:
Step 1. Feature extraction
For $a = 1 : A$
    Obtain the $a$th TVP $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P(a)$ from the $a$th R-UMLDA (Fig. 3) with the input $\{\mathcal{X}_m\}$, $P$, $K$, $\gamma_a$, using random or uniform initialization.
    Project $\{\mathcal{X}_m\}$ and $\mathcal{X}$ to $\{\mathbf{y}_{m(a)}\}$ and $\mathbf{y}_{(a)}$, respectively, using $\{\mathbf{u}_p^{(n)T}, n = 1, ..., N\}_{p=1}^P(a)$.
Step 2. Aggregation at the matching score level for classification
For $a = 1 : A$
    For $c = 1 : C$
        Obtain the nearest-neighbor distance $d(\mathcal{X}, c, a)$. Normalize $d(\mathcal{X}, c, a)$ to $[0, 1]$ to get $\tilde{d}(\mathcal{X}, c, a)$.
Obtain the aggregated distance $\bar{d}(\mathcal{X}, c)$.
Output $c^* = \arg\min_c \bar{d}(\mathcal{X}, c)$ as the class label for the test sample.

Fig. 4. The pseudo-code implementation of the R-UMLDA-A algorithm for tensor object recognition.

V. EXPERIMENTAL EVALUATION

In this section, a number of experiments are carried out on two biometrics applications in support of the following two objectives:

1) Investigate the various properties of the R-UMLDA algorithm.
2) Evaluate the R-UMLDA and R-UMLDA-A algorithms on two tensor object recognition problems, face recognition (FR) and gait recognition (GR), by comparing their performance with that of competing multilinear learning algorithms as well as linear learning algorithms.

Before presenting the experimental results, the experimental data and the algorithms to be compared are described first.

A. Experimental data

Three popular public databases are used in the experiments: the Pose, Illumination, and Expression (PIE) database from Carnegie Mellon University (CMU) [54], the Facial Recognition Technology (FERET) database [55], and the HumanID gait challenge data set version 1.7 (V1.7) from the University of South Florida (USF).

The CMU PIE database contains 68 individuals with face images captured under varying pose, illumination and expression.

We choose the seven poses (C05, C07, C09, C27, C29, C37, C11) with at most 45 degrees of pose variation, under the 21 illumination conditions (02 to 22). Thus, there are about 147 (7 x 21) samples per subject and a total of 9,987 face images (with nine faces missing). This face database has a large number of samples for each subject; therefore, it is used to study the properties of the proposed algorithm and the FR performance under a varying number of training samples per subject, denoted by $L$.

The FERET database is a standard testing database for FR performance evaluation, including 14,126 images from 1,199 individuals with views ranging from frontal to left and right profiles. The common practice is to use portions of the database for specific studies. Here, we select a subset composed of those subjects having at least six images with at most 45 degrees of pose variation, resulting in 2,803 face images from 335 subjects. The studies on the FR performance under a varying number of subjects $C$ are carried out on this face database since a large number of subjects is available. Face images from the PIE and FERET databases are manually aligned, cropped and normalized to a fixed size in pixels, with 256 gray levels per pixel.

The USF gait challenge data set V1.7 consists of 452 sequences from 74 subjects walking in elliptical paths in front of the camera, with two viewpoints (left or right), two shoe types (A or B) and two surface types (grass or concrete). Here, we choose those sequences on the grass surface only: the gallery set, and the probe sets A, B and C, with detailed information listed in Table II. The capturing condition for each set is summarized in the parentheses after the set name in the table, where G, A, B, L, and R stand for grass surface, shoe type A, shoe type B, left view, and right view, respectively. Each set has only one sequence per subject. Subjects are unique in the gallery and each probe set, and there are no common sequences between the gallery set and any of the probe sets. In addition, all the probe sets are distinct. These gait data sets are employed to demonstrate the performance on third-order tensors, since gait silhouette sequences are naturally 3-D data [10]. We follow the procedures in [10] to obtain gait samples from gait silhouette sequences, and each gait sample is resized to a third-order tensor of fixed size. The number of samples for each set is indicated in the parentheses following the number of sequences in Table II.

TABLE II
THE CHARACTERISTICS OF THE GAIT DATA FROM THE USF GAIT CHALLENGE DATA SET.

Gait data set:                  Gallery (GAR) | Probe A (GAL) | Probe B (GBR) | Probe C (GBL)
Number of sequences (samples):  71 (731)      | 71 (727)      | 41 (423)      | 41 (420)
Difference from the gallery:    -             | View          | Shoe          | Shoe, view

B. Performance Comparison Design

In the FR and GR experiments, we compare the performance of the proposed algorithms against four multilinear learning algorithms and four linear learning algorithms listed in Table III^7, where the LDA algorithm takes the Fisherface approach [32].

TABLE III
LIST OF ALGORITHMS TO BE COMPARED

Acronym   | Full name                                           | Linear/Multilinear | Reference
Bayes     | Maximum likelihood version of the Bayesian solution | Linear             | [56]
LDA       | Linear discriminant analysis                        | Linear             | [32]
ULDA      | Uncorrelated linear discriminant analysis           | Linear             | [33]
R-JD-LDA  | Regularized version of the revised direct LDA       | Linear             | [30], [57]
MPCA      | Multilinear principal component analysis            | Multilinear        | [10]
DATER     | Discriminant analysis with tensor representation    | Multilinear        | [2]
GTDA      | General tensor discriminant analysis                | Multilinear        | [15]
TR1DA     | Tensor rank-one discriminant analysis               | Multilinear        | [18], [19]

For classification of the extracted features, we use the NNC with the Euclidean distance measure, except for the Bayesian solution, which uses the NNC with the Mahalanobis distance [56]. The MPCA, DATER and GTDA algorithms produce features in tensor representation, which cannot be handled directly by the selected classifier. Since, from [17], the commonly used tensor distance measure, the Frobenius norm, is equivalent to the Euclidean distance between vectorized representations, the tensor features from MPCA, DATER and GTDA are rearranged into vectors for direct comparison. They obtain the highest-dimension projection ($P_n = I_n$ for $n = 1, ..., N$) first, and then the TTP is viewed as $\prod_{n=1}^N I_n$ EMPs. The discriminability of each such EMP is calculated on the training set and the EMPs are arranged in descending discriminability so that a feature vector is obtained, as in [10].

In the experiments, for fair comparison and for computational reasons, we set the number of iterations in the four MLDA algorithms to 10, unless otherwise stated. For MPCA, DATER, GTDA and TR1DA, up to 600 features were tested. The maximum number of features tested for LDA and ULDA is $C - 1$. For the Bayesian solution, the maximum number of features tested is $\min\{M - C, 600\}$. For the TR1DA algorithm, we tested several values of the tuning parameter $\zeta$ for each $L$, and the best one for each $L$ was used: $\zeta = 2$ for $L \leq 7$, $\zeta = 0.8$ for $8 \leq L \leq 15$, and $\zeta = 0.6$ for $L \geq 16$.

^7 Note that the ULDA compared here is different from the ULDA in [28]. Therefore, it is different from the classical LDA.

For R-JD-LDA, the default maximum number ($0.8(C-1)$) of features and the regularization parameter value originally suggested by the authors of [30], [57] are used. For the recognition experiments with only one R-UMLDA, uniform initialization is used and we empirically set $\gamma = 10^{-3}$, with up to 30 features tested. For R-UMLDA-A, up to 20 differently initialized and regularized UMLDA feature extractors are combined, each producing up to 30 features, also resulting in a total of 600 features. Uniform initialization is used for $a = 1, 5, 9, 13, 17$ with corresponding $\gamma_a = 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}$, and random initialization is used for the remaining values of $a$. In computing the matrix inverse of $\left( \mathbf{G}_{p-1}^T \tilde{\mathbf{Y}}_p^{(n^*)T} \mathbf{S}_{W_p}^{(n^*)-1} \tilde{\mathbf{Y}}_p^{(n^*)} \mathbf{G}_{p-1} \right)$ in (13), a small term $\kappa \mathbf{I}$ is added, where $\kappa = 10^{-3}$, in order to obtain a better-conditioned matrix for the inverse computation.

The recognition performance is measured by the correct recognition rate (CRR). The best recognition results reported are obtained by varying the number of features used and, for the R-UMLDA-A algorithm, the number of R-UMLDA recognizers aggregated. For all the other algorithms, the best results are obtained by varying the number of features used. For fair comparison, there is no further fine-tuning of other parameters (such as the regularization parameter) for optimal performance on the testing data, including for the proposed method. For better viewing, the top two recognition results in each experiment are shown in bold in the tables. In the comparative evaluation of recognition performance in Figs. 8, 9, and 10, a log scale has been used for the horizontal axis, which indicates the number of features used for recognition, in order to highlight the comparative performance in low-dimensional subspaces while showcasing the best overall performance achieved.

C. Empirical studies of the R-UMLDA properties on the PIE database

First, various properties of the R-UMLDA are studied on the PIE database: the effects of initialization and regularization, the convergence, the number of useful features, and the effects of aggregation. These experiments are performed on one random split of the database into training and testing samples.

1) The effects of initialization and regularization: Figure 5 illustrates the effects of initialization and regularization in two FR experiments: one with $L = 2$ and one with $L = 20$, corresponding to the SSS scenario and to the scenario where a large number of samples (per subject) are available for training. The CRRs for various $\gamma$s are depicted in Figs. 5(a) ($L = 2$) and 5(b) ($L = 20$) for the uniform initialization, and in Figs. 5(c) ($L = 2$) and 5(d) ($L = 20$) for the random initialization (averaged over 20 repeated trials). Figures 5(e) and 5(f) show the CRRs from six repetitions of the random initialization with $\gamma = 10^{-3}$. They demonstrate that the recognition results are affected by initialization and that different initializations give different results. By comparing Fig. 5(f) against Fig. 5(e), it can be seen that the sensitivity to initialization is smaller for a larger $L$.

Fig. 5. Illustration of the effects of initialization and regularization on recognition performance: uniform initialization with various $\gamma$s for (a) $L = 2$ and (b) $L = 20$; random initialization with various $\gamma$s averaged over 20 repetitions for (c) $L = 2$ and (d) $L = 20$; six repetitions of random initialization with $\gamma = 10^{-3}$ for (e) $L = 2$ and (f) $L = 20$.

Furthermore, by comparing Fig. 5(a) against Fig. 5(c), and Fig. 5(b) against Fig. 5(d), we observe that the uniform initialization outperforms the random initialization for both a small $L$ and a large $L$. Therefore, we use the uniform initialization when only one R-UMLDA feature extractor is employed. In the following discussions, we show the convergence and the number of useful features using the uniform initialization. The effects of regularization are also observed in Figs. 5(a), 5(b), 5(c), and 5(d). For a small $L$, UMLDA with a strong regularization (larger $\gamma$) can outperform UMLDA without regularization ($\gamma = 0$), while for a large $L$, too strong a regularization may result in poorer performance, as observed for other regularization algorithms [30].

2) Convergence: The convergence is illustrated in Fig. 6. Figures 6(a) and 6(b) depict two examples of the evolution of dist$(\mathbf{u}_p^{(1)}(k), \mathbf{u}_p^{(1)}(k-1))$ for $p = 1$ and $p = 8$, with various $\gamma$s, for up to 50 iterations. As seen in the figure, in the worst scenarios, the projection vector converges around $k = 15$ for $p = 1$ and around $k = 30$ for $p = 8$. In addition, a stronger regularization (larger $\gamma$) is more likely to result in faster convergence. Furthermore, the recognition performance is examined for various values of $K$, as shown in Figs. 6(c) and 6(d) with $L = 5$ for $\gamma = 0$ and $\gamma = 10^{-3}$, respectively. It indicates that the first few iterations improve the recognition performance the most, and further iterations give only slow improvement in the recognition rate, especially for a larger $\gamma$. Therefore, we set $K$ to a fixed number, $K = 10$, to terminate the iteration in practice. When computational efficiency is important, $K$ can be further reduced to improve the processing speed, while sacrificing some recognition performance.

3) The number of useful features and the effects of aggregation: The R-UMLDA is limited in the number of extracted features ($P$) useful for recognition, as depicted in Fig. 7(a), where the CRRs are shown for up to 60 features for $L = 5$ and various $\gamma$s. In particular, the first few features are very powerful, while beyond a certain number (e.g., 20), the performance varies very slowly with increased $P$. Fortunately, from the study of the effects of initialization and regularization, we find that different initializations or regularizations produce different results (Fig. 5). Thus, the proposed aggregation scheme makes use of this property and combines differently initialized and regularized R-UMLDA recognizers to achieve enhanced results. At the same time, the problem of regularization parameter selection is alleviated. The results of aggregation are shown in Fig. 7(b) for $L = 5$ and up to 20 R-UMLDA recognizers combined by the R-UMLDA-A described in Sec. V-B. The figure demonstrates that the aggregation is an effective procedure and that there is indeed complementary discriminative information from differently initialized and regularized R-UMLDA recognizers.

Fig. 6. Illustration of the convergence for $L = 5$: the evolution of dist$(\mathbf{u}_p^{(1)}(k), \mathbf{u}_p^{(1)}(k-1))$ for various $\gamma$s over 50 iterations for (a) $p = 1$ and (b) $p = 8$; the CRRs for various $K$s (the maximum number of iterations) for (c) $\gamma = 0$ and (d) $\gamma = 10^{-3}$.

D. Face recognition results

In the FR experiments, face images are input directly as second-order tensors to the multilinear algorithms, while for the linear algorithms they are vectorized as input. For each subject in an FR experiment, $L$ samples are randomly selected for training and the rest are used for testing. We report the results, including the mean and standard deviation (Std), averaged over 20 random splits.

1) FR results on the PIE database: In order to study the recognition performance with different values of $L$, we perform nine FR experiments on the PIE database with $L = 2, 3, 4, 5, 6, 8, 10, 20, 40$. The top CRRs are listed in Table IV, where our R-UMLDA-A performs the best except for $L = 40$.

Fig. 7. Demonstration of (a) the recognition performance for L = 5 as P increases for various γs, and (b) the effectiveness of aggregation.

TABLE IV. Face recognition results on the PIE database: the top CRRs (mean±Std %) of Bayes, LDA, ULDA, R-JD-LDA, MPCA, DATER, GTDA, TR1DA, R-UMLDA and R-UMLDA-A for L = 2, 3, 4, 5, 6, 8, 10, 20, 40. (Numerical entries not preserved in this transcription.)

From the figure, the first few features (around 10) extracted by the R-UMLDA are the most powerful features in recognition in all cases except when L = 40, where the LDA and ULDA features are the most discriminative ones. In addition, it should be noted that in each experiment, if we tune the regularization parameter for R-JD-LDA and R-UMLDA, and the range of γ for R-UMLDA-A, improved performance can be obtained, since stronger regularization results in better performance for a small L and weaker regularization is better for a larger L [30]. Nonetheless, with a fixed range of γ, our R-UMLDA-A still outperforms all the other algorithms for L ranging from 2 to 20.

Fig. 8. Face recognition results on the PIE database: correct recognition rate against the number of features used for (a) L = 2, (b) L = 5, (c) L = 20 and (d) L = 40.

2) FR results on the FERET database: It is argued in [52] that the learning capacity of any LDA-like algorithm is directly proportional to L, and reciprocally proportional to C. Thus, to evaluate the recognition performance with different Cs, we carry out four experiments on the FERET database with C = 80, 160, 240, 320 and fix L = 4 so that no more than half of the face images are used for training.

Fig. 9. Face recognition results on the FERET database: correct recognition rate against the number of features used for (a) C = 80, (b) C = 160, (c) C = 240 and (d) C = 320.

The numbers of training and testing faces for each experiment are detailed in Table V. Table VI lists the top CRRs, where R-UMLDA-A outperforms all the other methods in all cases. Moreover, we can observe that the recognition performance of R-UMLDA-A and R-JD-LDA is only slightly affected by C.

In contrast, LDA and ULDA are affected significantly by C. Detailed recognition results are shown in Fig. 9, where in all cases the first few (around 10) features extracted by R-UMLDA are again the most discriminative ones.

TABLE V. Details of the four experiments on the FERET database: for each C, the number of training faces and the number of testing faces. (Numerical entries not preserved in this transcription.)

TABLE VI. Face recognition results on the FERET database: the top CRRs (mean±Std %) of Bayes, LDA, ULDA, R-JD-LDA, MPCA, DATER, GTDA, TR1DA, R-UMLDA and R-UMLDA-A for C = 80, 160, 240, 320. (Numerical entries not preserved in this transcription.)

E. Gait recognition results

After two sets of experiments evaluating the performance on 2-D face images under varying L and C, we test the performance on 3-D gait data. In GR experiments, gait samples are input directly as third-order tensors to the multilinear algorithms, while for the linear algorithms, they are vectorized into vectors as input. We follow the standard testing procedures in GR, where the gallery set is used for training and the probe sets (A, B and C) are used for testing. Since R-UMLDA-A involves random initialization, we report the results (mean and standard deviation) averaged over 20 repeated experiments. In addition to the CRRs of the gait samples, the CRRs of gait sequences are also reported. The calculation of matching scores between two gait sequences follows that in [10]. Tables VII and VIII present the top CRRs for individual gait samples and gait sequences, respectively, for probes A, B, C and their average. R-UMLDA-A and R-JD-LDA achieve the best performance on recognizing individual gait samples, indicating that regularization is indeed effective for the challenging GR problem as well.
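The sequence-level matching mentioned above follows the scheme of [10], which is not reproduced in this section. Purely as an illustration of how sample-level features can be turned into a sequence-level matching score, one simple hypothetical rule is sketched below; it is not the rule of [10], and the names are ours.

```python
import numpy as np

def sequence_distance(probe_feats, gallery_feats):
    """One possible sequence-level matching score from sample-level features.

    probe_feats, gallery_feats: (num_samples, P) feature matrices for the two
    gait sequences. The rule below (median, over probe samples, of the distance
    to the closest gallery sample) is only an illustrative placeholder.
    """
    d = np.linalg.norm(probe_feats[:, None, :] - gallery_feats[None, :, :], axis=2)
    return float(np.median(d.min(axis=1)))
```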

Regarding the performance on recognizing gait sequences, R-UMLDA-A achieves the best results on average, showing that the proposed multilinear solution is indeed more powerful than the other solutions in recognizing tensorial signals. Figures 10(a) and 10(b) plot the detailed CRRs averaged over the three probes for individual gait samples and gait sequences, respectively. Figure 10(a) shows that the first few features (around 20) produced by R-UMLDA are again the most discriminative ones in classifying gait samples, while Fig. 10(b) illustrates that R-UMLDA still has the best performance when recognizing gait sequences using only 3 to 20 features. It is also noted that the Bayesian solution, which was originally proposed for FR, is not suitable for GR, and its recognition performance deteriorates when more than 70 features are used.

Fig. 10. Gait recognition results on the USF gait challenge data sets (average over probes A, B and C): correct recognition rate against the number of features used for (a) individual gait samples, and (b) gait sequences.

F. Discussions

We have performed a large number of experiments on face and gait recognition to evaluate our proposed algorithms. From the results presented above, the following observations are made:

1) The first few features extracted by the proposed R-UMLDA, obtained with fixed parameters, are the most discriminative ones in various scenarios, except in the FR experiment on the PIE database with L = 40. This demonstrates that extracting uncorrelated features directly from tensorial signals using R-UMLDA is indeed beneficial for recognition, especially in the low-dimensional space under the SSS scenario, although the overall recognition performance is limited by the number of useful features.

TABLE VII. Gait recognition results on the USF gait challenge data sets: the top CRRs (%) for individual gait samples, for probes A, B, C and their average, comparing Bayes, LDA, ULDA, R-JD-LDA, MPCA, DATER, GTDA, TR1DA, R-UMLDA and R-UMLDA-A. (Numerical entries not preserved in this transcription.)

TABLE VIII. Gait recognition results on the USF gait challenge data sets: the top CRRs (%) for gait sequences, for probes A, B, C and their average, comparing the same ten algorithms. (Numerical entries not preserved in this transcription.)

2) The proposed R-UMLDA-A overcomes the limitation of R-UMLDA in the number of useful features by aggregating several differently initialized and regularized UMLDA feature extractors. This scheme not only enhances the recognition performance but also alleviates the regularization parameter selection problem. Without tuning the regularization parameter, R-UMLDA-A achieves the best overall performance in all the FR and GR experiments except in FR with L = 40 on the PIE database, where it is slightly outperformed by LDA and the Bayesian solution. This comparative evaluation demonstrates that R-UMLDA-A is a robust and effective recognition algorithm for tensor objects.

3) Although random initialization is employed in R-UMLDA-A, the standard deviations based on 20 repeated trials reported in Tables IV, VI, VII and VIII show that the recognition results obtained by R-UMLDA-A have low variance, indicating that it is competitive in terms of stability as well.

4) Moreover, the best recognition performance of R-UMLDA-A is reached with approximately the same number of features in all the FR and GR experiments. The addition of extra features only marginally alters its recognition performance.

It should be noted at this point that, unlike R-UMLDA-A, the other algorithms exhibit poorer stability characteristics in the sense that variation in the number of features used may lead to large performance variation. As a case in point, the performance of the Bayesian solution starts to decrease significantly when more than 150 features are used in the FR experiments on the FERET database with C = 320, and when more than 70 features are used in the GR experiments on the USF database. LDA and ULDA significantly underperform the other algorithms on the FERET database with C = 320. On the PIE database with L = 2 and on the FERET database with C = 320, which are both very difficult learning problems [52], R-JD-LDA shows an abrupt decrease in recognition performance when the last few features are included. The recognition rates of MPCA, DATER, and GTDA start to decrease when more than 100 features are used in the FR experiments on the PIE database with L = 2 and L = 5. The investigation of the rather dramatic performance variations observed for some algorithms when the number of features used changes would be interesting, but it is beyond the scope of this paper.

Fig. 11. Illustration of the first three EMPs (in top-to-bottom order) obtained by R-UMLDA from (a) the PIE database with L = 20, (b) the FERET database with C = 320, and (c) the USF gait gallery sequences (1-mode unfolding is shown).

Finally, in order to provide some insights into the R-UMLDA algorithm, Figs. 11(a), 11(b), and 11(c) depict, as gray-level images, the first three EMPs obtained by R-UMLDA using the PIE database with L = 20, the FERET database with C = 320, and the USF gait gallery set, respectively. From the EMPs for the face data, it can be seen that there is a strong presence of structured information due to the multilinear nature, which is different from the information conveyed by the bases produced by linear algorithms such as eigenfaces [58] or fisherfaces [32]. It is interesting to note that the eye area is more evident in the second and third EMPs in Fig. 11(b). Figs. 11(a) and 11(b) indicate that the EMPs obtained by R-UMLDA encode discriminative information found in a large portion of the face, with each EMP capturing a particular pattern. In the EMPs for the gait data, which are third-order tensors displayed as their 1-mode unfolded matrices in Fig. 11(c), structure is again observed across the three modes (column, row, and time). The first gait EMP indicates that the most discriminative information encoded by R-UMLDA comes from the foot area (bottom), which is expected based on the physiology of the human walking cycle. The second gait EMP demonstrates that, in addition to the foot area, the head area (top) also provides important discriminative information. The third gait EMP encodes discriminative information from other parts of the body, such as the lower legs and the hip. These observations provide insights into the nature of the features encoded by R-UMLDA and offer a better understanding of this algorithm's performance when applied to certain data sets.
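A brief sketch of how such EMP visualizations can be generated from a trained TVP is given below: each EMP's mode vectors are combined into a rank-one tensor, which is then shown directly as a gray-level image (second-order case) or through its 1-mode unfolding (third-order case). The function names are ours, and the unfolding uses a simple reshape whose column ordering may differ from other unfolding conventions; it is only meant to reproduce the kind of display in Fig. 11.

```python
import numpy as np

def emp_tensor(mode_vectors):
    """Rank-one EMP tensor U = u(1) o u(2) o ... o u(N) from its mode vectors."""
    U = np.asarray(mode_vectors[0], dtype=float)
    for u in mode_vectors[1:]:
        U = np.multiply.outer(U, np.asarray(u, dtype=float))
    return U

def unfold_mode1(T):
    """Simple 1-mode unfolding: rows indexed by mode 1, columns by the rest."""
    T = np.asarray(T)
    return T.reshape(T.shape[0], -1)

# A 2nd-order (face) EMP can be displayed directly as a gray-level image;
# a 3rd-order (gait) EMP can be displayed via its 1-mode unfolded matrix:
#   face_img = emp_tensor([u_col, u_row])
#   gait_img = unfold_mode1(emp_tensor([u_col, u_row, u_time]))
```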

VI. CONCLUSIONS

In this paper, a novel UMLDA algorithm is proposed to extract uncorrelated discriminative features directly from tensorial data using the TVP of tensor objects, and a regularization term is incorporated to tackle the SSS problem, resulting in the regularized UMLDA. Furthermore, since the recognition performance is affected by initialization and regularization, an aggregation scheme is proposed to combine several differently initialized and regularized UMLDA feature extractors to achieve better recognition performance while mitigating the problem of regularization parameter selection. Experiments on biometric applications, tested on the PIE and FERET face databases and the USF gait database, demonstrate that the UMLDA-based recognizer achieves the best overall results compared with recognizers based on other multilinear subspace solutions as well as linear subspace algorithms.

ACKNOWLEDGMENTS

The authors would like to thank the anonymous reviewers for their insightful and constructive comments. The authors would also like to thank Dr. Jie Wang for her help in the experimental design, and the providers of the PIE, FERET and USF gait databases. This work is partially supported by the Ontario Centres of Excellence through the Communications and Information Technology Ontario Partnership Program and the Bell University Labs at the University of Toronto.

APPENDIX A
PROOF OF THEOREM 1

Proof: For a non-singular $\tilde{\mathbf{S}}_W^{(n)}$, any $\mathbf{u}_p^{(n)}$ can be normalized such that $\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}_p^{(n)} = 1$ while the ratio $\frac{\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)}}{\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}_p^{(n)}}$ remains unchanged. Therefore, the maximization of this ratio is equivalent to the maximization of $\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)}$ with the constraint that $\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}_p^{(n)} = 1$, and Lagrange multipliers can be used to transform the problem (11) into the following form that includes all the constraints:

$$F(\mathbf{u}_p^{(n)}) = \mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)} - \nu\left(\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}_p^{(n)} - 1\right) - \sum_{q=1}^{p-1}\mu_q\,\mathbf{u}_p^{(n)T}\tilde{\mathbf{Y}}^{(n)}\mathbf{g}_q, \qquad (21)$$

where $\nu$ and $\{\mu_q,\ q = 1, \ldots, p-1\}$ are Lagrange multipliers. The optimization is performed by setting the partial derivative of $F(\mathbf{u}_p^{(n)})$ with respect to $\mathbf{u}_p^{(n)}$ to zero:

$$\frac{\partial F(\mathbf{u}_p^{(n)})}{\partial \mathbf{u}_p^{(n)}} = 2\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)} - 2\nu\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}_p^{(n)} - \sum_{q=1}^{p-1}\mu_q\tilde{\mathbf{Y}}^{(n)}\mathbf{g}_q = \mathbf{0}. \qquad (22)$$

Multiplying (22) by $\mathbf{u}_p^{(n)T}/2$ and using the zero-correlation constraints $\mathbf{u}_p^{(n)T}\tilde{\mathbf{Y}}^{(n)}\mathbf{g}_q = 0$ results in

$$\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)} - \nu\,\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}_p^{(n)} = 0 \;\Rightarrow\; \nu = \frac{\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)}}{\mathbf{u}_p^{(n)T}\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}_p^{(n)}}, \qquad (23)$$

which indicates that $\nu$ is exactly the criterion to be maximized.

Next, a set of $(p-1)$ equations is obtained by multiplying (22) by $\mathbf{g}_q^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}$, $q = 1, \ldots, p-1$, respectively (the $\nu$ term vanishes by the same constraints):

$$2\mathbf{g}_q^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)} - \sum_{q'=1}^{p-1}\mu_{q'}\,\mathbf{g}_q^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\tilde{\mathbf{Y}}^{(n)}\mathbf{g}_{q'} = 0. \qquad (24)$$

Let

$$\boldsymbol{\mu}_{p-1} = [\mu_1\ \mu_2\ \ldots\ \mu_{p-1}]^T \qquad (25)$$

and use (14); then the $(p-1)$ equations of (24) can be represented in a single matrix equation as follows:

$$2\mathbf{G}_{p-1}^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)} - \mathbf{G}_{p-1}^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\tilde{\mathbf{Y}}^{(n)}\mathbf{G}_{p-1}\boldsymbol{\mu}_{p-1} = \mathbf{0}. \qquad (26)$$

Thus,

$$\boldsymbol{\mu}_{p-1} = 2\left(\mathbf{G}_{p-1}^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\tilde{\mathbf{Y}}^{(n)}\mathbf{G}_{p-1}\right)^{-1}\mathbf{G}_{p-1}^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)}. \qquad (27)$$

Since, from (14) and (25),

$$\sum_{q=1}^{p-1}\mu_q\tilde{\mathbf{Y}}^{(n)}\mathbf{g}_q = \tilde{\mathbf{Y}}^{(n)}\mathbf{G}_{p-1}\boldsymbol{\mu}_{p-1}, \qquad (28)$$

equation (22) can be written as

$$2\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)} - 2\nu\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}_p^{(n)} - \tilde{\mathbf{Y}}^{(n)}\mathbf{G}_{p-1}\boldsymbol{\mu}_{p-1} = \mathbf{0}$$
$$\Rightarrow\; \nu\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}_p^{(n)} = \tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)} - \frac{1}{2}\tilde{\mathbf{Y}}^{(n)}\mathbf{G}_{p-1}\boldsymbol{\mu}_{p-1}$$
$$= \tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)} - \tilde{\mathbf{Y}}^{(n)}\mathbf{G}_{p-1}\left(\mathbf{G}_{p-1}^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\tilde{\mathbf{Y}}^{(n)}\mathbf{G}_{p-1}\right)^{-1}\mathbf{G}_{p-1}^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)}$$
$$= \left[\mathbf{I}_{I_n} - \tilde{\mathbf{Y}}^{(n)}\mathbf{G}_{p-1}\left(\mathbf{G}_{p-1}^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\tilde{\mathbf{Y}}^{(n)}\mathbf{G}_{p-1}\right)^{-1}\mathbf{G}_{p-1}^T\tilde{\mathbf{Y}}^{(n)T}\tilde{\mathbf{S}}_W^{(n)-1}\right]\tilde{\mathbf{S}}_B^{(n)}\mathbf{u}_p^{(n)}.$$

Using the definition in (13), a generalized eigenvalue problem is obtained as $\mathbf{R}^{(n)}\tilde{\mathbf{S}}_B^{(n)}\mathbf{u} = \nu\tilde{\mathbf{S}}_W^{(n)}\mathbf{u}$. Since $\nu$ is the criterion to be maximized, the maximization is achieved by setting $\mathbf{u}_p^{(n)}$ to be the (unit) generalized eigenvector corresponding to the largest generalized eigenvalue of (12).
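Computationally, Theorem 1 says that each new projection vector is the leading generalized eigenvector of (12). A minimal numerical sketch of this step is given below; the variable names and shapes are assumptions made for illustration, and the adaptive regularization of $\tilde{\mathbf{S}}_W^{(n)}$ used by R-UMLDA is omitted.

```python
import numpy as np

def next_projection_vector(S_B, S_W, Y_tilde, G_prev):
    """Leading generalized eigenvector of R * S_B u = nu * S_W u (Theorem 1).

    S_B, S_W : (I_n, I_n) between/within-class scatter matrices for the mode.
    Y_tilde  : (I_n, M) matrix playing the role of \tilde{Y}^{(n)} in (13)-(14).
    G_prev   : (M, p-1) coordinate matrix of previously extracted features;
               pass None when p = 1 (no zero-correlation constraints yet).
    Shapes/names are illustrative; regularization of S_W is not included.
    """
    I_n = S_B.shape[0]
    S_W_inv = np.linalg.inv(S_W)
    if G_prev is None:
        R = np.eye(I_n)
    else:
        YG = Y_tilde @ G_prev
        # R = I - YG (YG^T S_W^{-1} YG)^{-1} YG^T S_W^{-1}, as in (13)
        R = np.eye(I_n) - YG @ np.linalg.solve(YG.T @ S_W_inv @ YG, YG.T @ S_W_inv)
    # R S_B u = nu S_W u  <=>  (S_W^{-1} R S_B) u = nu u
    vals, vecs = np.linalg.eig(S_W_inv @ R @ S_B)
    u = np.real(vecs[:, np.argmax(np.real(vals))])
    return u / np.linalg.norm(u)  # unit-norm projection vector
```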

APPENDIX B
CONNECTIONS AMONG VARIOUS MLDA ALGORITHMS AND LDA

The traditional linear projection is a vector-to-vector projection (VVP). The two multilinear projections are the TTP and the TVP. Thus, LDA is based on VVP. DATER and GTDA are MLDA through TTP (MLDA-TTP), and TR1DA and UMLDA are MLDA through TVP (MLDA-TVP). The DATER algorithm [2] is a specific realization of the MLDA-TTP, with the objective of maximizing the scatter ratio. The GTDA algorithm [15] is an MLDA-TTP variant maximizing the scatter difference. The TR1DA algorithm [18], [19] is an MLDA-TVP variant maximizing the scatter difference, with a heuristic residue-based approach. The UMLDA algorithm proposed in this paper is an MLDA-TVP variant maximizing the scatter ratio, while pursuing uncorrelated features.

The relationships between LDA, MLDA-TTP and MLDA-TVP are revealed by first examining the relationships between VVP, TTP and TVP. Obviously, VVP is the special case of TTP and TVP with N = 1. Each projected element in TTP can be viewed as the projection onto an EMP formed by taking one column from each projection matrix, so the tensor projected by TTP is effectively obtained through $\prod_{n=1}^{N} P_n$ interdependent EMPs, while in TVP, the P EMPs obtained successively are not interdependent in general. Moreover, the projection using an EMP $\{\mathbf{u}^{(1)T}, \mathbf{u}^{(2)T}, \ldots, \mathbf{u}^{(N)T}\}$ can be written as

$$y = \langle \mathcal{X}, \mathcal{U} \rangle = \langle \mathrm{vec}(\mathcal{X}), \mathrm{vec}(\mathcal{U}) \rangle = [\mathrm{vec}(\mathcal{U})]^T \mathrm{vec}(\mathcal{X}).$$

Therefore, an EMP is equivalent to a linear projection of $\mathrm{vec}(\mathcal{X})$ onto the vector $\mathrm{vec}(\mathcal{U})$. Since $\mathcal{U} = \mathbf{u}^{(1)} \circ \mathbf{u}^{(2)} \circ \cdots \circ \mathbf{u}^{(N)}$, the EMP is in effect a linear projection with a constraint on the projection vector such that it is the vectorized representation of a rank-one tensor. Compared with a projection vector of size $I \times 1$ in VVP, specified by $I$ parameters ($I = \prod_{n=1}^{N} I_n$ for an $N$th-order tensor), an EMP in TVP can be specified by $\sum_{n=1}^{N} I_n$ parameters. Hence, to project a tensor of size $\prod_{n=1}^{N} I_n$ to a vector of size $P \times 1$, the TVP needs to estimate only $P \cdot \sum_{n=1}^{N} I_n$ parameters, while the VVP needs to estimate $P \cdot \prod_{n=1}^{N} I_n$ parameters.
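The identity $y = \langle \mathcal{X}, \mathcal{U} \rangle = [\mathrm{vec}(\mathcal{U})]^T \mathrm{vec}(\mathcal{X})$ and the parameter counts above can be checked numerically. The snippet below is a small self-contained illustration; the tensor sizes and names are chosen arbitrarily and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
I = (5, 4, 3)                              # an I_1 x I_2 x I_3 tensor (arbitrary sizes)
X = rng.standard_normal(I)
u = [rng.standard_normal(i) for i in I]    # mode vectors u^(1), u^(2), u^(3) of an EMP

# <X, u1 o u2 o u3> computed multilinearly ...
y_multilinear = np.einsum('ijk,i,j,k->', X, u[0], u[1], u[2])
# ... equals the linear projection of vec(X) onto vec(U), U = u1 o u2 o u3
U = np.multiply.outer(np.multiply.outer(u[0], u[1]), u[2])
y_vectorized = U.ravel() @ X.ravel()
assert np.allclose(y_multilinear, y_vectorized)

# Parameter counts for projecting to a P-dimensional feature vector:
P = 10
print("TVP parameters:", P * sum(I))        # P * (I_1 + ... + I_N)
print("VVP parameters:", P * int(np.prod(I)))  # P * (I_1 * ... * I_N)
```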

The implication in pattern recognition problems is that the TVP has fewer parameters to estimate while being more constrained on the solutions, and the VVP has less constraint on the solutions sought while having more parameters to estimate.

Summing up, LDA is a special case of MLDA-TTP and MLDA-TVP when N = 1, with the scatter ratio as the separation criterion. On the other hand, the MLDA-TTP looks for interdependent EMPs, while the EMPs sought successively in the MLDA-TVP are generally not interdependent. Furthermore, for the same projected vector size, the MLDA-TVP has fewer parameters to estimate while the projection to be solved is more constrained, and LDA has more parameters to estimate while the projection is less constrained.

REFERENCES

[1] L. De Lathauwer, B. De Moor, and J. Vandewalle, "On the best rank-1 and rank-(R_1, R_2, ..., R_N) approximation of higher-order tensors," SIAM Journal of Matrix Analysis and Applications, vol. 21, no. 4.
[2] S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H. Zhang, "Multilinear discriminant analysis for face recognition," IEEE Trans. Image Process., vol. 16, no. 1, Jan.
[3] Z. Lei, R. Chu, R. He, S. Liao, and S. Z. Li, "Face recognition by discriminant analysis with Gabor tensor representation," in Proc. Int. Conf. on Biometrics, Aug. 2007.
[4] Y.-D. Kim and S. Choi, "Color face tensor factorization and slicing for illumination-robust recognition," in Proc. Int. Conf. on Biometrics, Aug. 2007.
[5] C. M. Cyr and B. B. Kimia, "3D object recognition using shape similarity-based aspect graph," in Proc. IEEE Conf. on Computer Vision, vol. 1, Jul. 2001.
[6] K. W. Bowyer, K. Chang, and P. Flynn, "A survey of approaches and challenges in 3D and multi-modal 3D + 2D face recognition," Computer Vision and Image Understanding, vol. 101, no. 1, pp. 1-15, Jan.
[7] S. Z. Li, C. Zhao, X. Zhu, and Z. Lei, "3D+2D face recognition by fusion at both feature and decision levels," in Proc. IEEE Int. Workshop on Analysis and Modeling of Faces and Gestures, Oct.
[8] C. Liu and H. Wechsler, "Independent component analysis of Gabor features for face recognition," IEEE Trans. Neural Netw., vol. 14, no. 4, Jul.
[9] R. Chellappa, A. Roy-Chowdhury, and S. Zhou, Recognition of Humans and Their Activities Using Video. Morgan & Claypool Publishers.
[10] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: Multilinear principal component analysis of tensor objects," IEEE Trans. Neural Netw., vol. 19, no. 1, Jan.
[11] G. Shakhnarovich and B. Moghaddam, "Face recognition in subspaces," in Handbook of Face Recognition, S. Z. Li and A. K. Jain, Eds. Springer-Verlag, 2004.
[12] J. Zhang, S. Z. Li, and J. Wang, "Manifold learning and applications in recognition," in Intelligent Multimedia Processing with Soft Computing, Y. P. Tan, K. H. Yap, and L. Wang, Eds. Springer-Verlag, 2004.
[13] M. H. C. Law and A. K. Jain, "Incremental nonlinear dimensionality reduction by manifold learning," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 3, Mar.

[14] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Gait recognition through MPCA plus LDA," in Proc. Biometrics Symposium 2006, Sep. 2006.
[15] D. Tao, X. Li, X. Wu, and S. J. Maybank, "General tensor discriminant analysis and Gabor features for gait recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 10, Oct.
[16] J. Ye, R. Janardan, and Q. Li, "Two-dimensional linear discriminant analysis," in Advances in Neural Information Processing Systems (NIPS), 2004.
[17] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "A taxonomy of emerging multilinear discriminant analysis solutions for biometric signal recognition," in Biometrics: Theory, Methods, and Applications, N. V. Boulgouris, K. Plataniotis, and E. Micheli-Tzanakou, Eds. Wiley/IEEE, 2008, submitted. [Online]. Available: http://…/haiping/publication/mldataxonomy.pdf
[18] Y. Wang and S. Gong, "Tensor discriminant analysis for view-based object recognition," in Proc. Int. Conf. on Pattern Recognition, vol. 3, Aug. 2006.
[19] D. Tao, X. Li, X. Wu, and S. J. Maybank, "Elapsed time in human gait recognition: A new approach," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, vol. 2, Apr. 2006.
[20] T. G. Kolda, "Orthogonal tensor decompositions," SIAM Journal of Matrix Analysis and Applications, vol. 23, no. 1.
[21] X. He, D. Cai, and P. Niyogi, "Tensor subspace analysis," in Advances in Neural Information Processing Systems 18 (NIPS).
[22] G. Dai and D. Y. Yeung, "Tensor embedding methods," in Proc. Twenty-First National Conference on Artificial Intelligence, Jul. 2006.
[23] S. Yan, D. Xu, B. Zhang, H. J. Zhang, Q. Yang, and S. Lin, "Graph embedding and extensions: A general framework for dimensionality reduction," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 1, Jan.
[24] D. Xu, S. Lin, S. Yan, and X. Tang, "Rank-one projections with adaptive margins for face recognition," IEEE Trans. Syst., Man, Cybern. B, vol. 37, no. 5, Oct.
[25] G. Hua, P. A. Viola, and S. M. Drucker, "Face recognition using discriminatively trained orthogonal rank one tensor projections," in Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, Jun. 2007.
[26] Z. Jin, J. Y. Yang, Z. M. Tang, and Z. S. Hu, "A theorem on the uncorrelated optimal discriminant vectors," Pattern Recognition, vol. 34, no. 10, Oct.
[27] J. Ye, R. Janardan, Q. Li, and H. Park, "Feature reduction via generalized uncorrelated linear discriminant analysis," IEEE Trans. Knowl. Data Eng., vol. 18, no. 10, Oct.
[28] Z. Jin, J. Y. Yang, Z. S. Hu, and Z. Lou, "Face recognition based on the uncorrelated discriminant transformation," Pattern Recognition, vol. 34.
[29] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear principal component analysis through successive variance maximization," in Proc. International Conference on Machine Learning, Jul. 2008.
[30] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Regularization studies of linear discriminant analysis in small sample size scenarios with application to face recognition," Pattern Recognition Letters, vol. 26, no. 2.
[31] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear discriminant analysis with regularization for gait recognition," in Proc. Biometrics Symposium 2007, Sep. 2007.
[32] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, Jul.

[33] J. Ye, "Characterization of a family of algorithms for generalized discriminant analysis on undersampled problems," Journal of Machine Learning Research, vol. 6, Apr.
[34] T. K. Moon and W. C. Stirling, Mathematical Methods and Algorithms for Signal Processing. Prentice Hall.
[35] J. L. Rodgers, W. A. Nicewander, and L. Toothaker, "Linearly independent, orthogonal and uncorrelated variables," The American Statistician, vol. 38, no. 2, May.
[36] Y. Koren and L. Carmel, "Robust linear dimensionality reduction," IEEE Trans. Vis. Comput. Graphics, vol. 10, no. 4, Jul.-Aug.
[37] J. D. Carroll and J. J. Chang, "Analysis of individual differences in multidimensional scaling via an n-way generalization of Eckart-Young decomposition," Psychometrika, vol. 35.
[38] R. A. Harshman, "Foundations of the PARAFAC procedure: Models and conditions for an explanatory multi-modal factor analysis," UCLA Working Papers in Phonetics, vol. 16, pp. 1-84.
[39] P. Kroonenberg and J. de Leeuw, "Principal component analysis of three-mode data by means of alternating least squares algorithms," Psychometrika, vol. 45, no. 1.
[40] J. H. Friedman, "Regularized discriminant analysis," Journal of the American Statistical Association, vol. 84, no. 405, Mar.
[41] C. Liu, "Capitalize on dimensionality increasing techniques for improving face recognition grand challenge performance," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, May.
[42] M. A. O. Vasilescu and D. Terzopoulos, "Multilinear analysis of image ensembles: TensorFaces," in Proc. 7th European Conference on Computer Vision, May 2002.
[43] S. W. Park and M. Savvides, "Individual kernel tensor-subspaces for robust face recognition: A computationally efficient tensor framework without requiring mode factorization," IEEE Trans. Syst., Man, Cybern. B, vol. 37, no. 5, Oct.
[44] K. Jia and S. Gong, "Multi-modal tensor face for simultaneous super-resolution and recognition," in Proc. Int. Conf. on Computer Vision, vol. 2, Oct. 2005.
[45] T. K. Ho, "The random subspace method for constructing decision forests," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 8, Aug.
[46] R. E. Schapire, "The boosting approach to machine learning: An overview," in Nonlinear Estimation and Classification, D. D. Denison, M. H. Hansen, C. Holmes, B. Mallick, and B. Yu, Eds. Springer.
[47] M. Skurichina and R. P. W. Duin, "Bagging, boosting and the random subspace method for linear classifiers," Pattern Analysis & Applications, vol. 5, no. 2.
[48] L. Breiman, "Arcing classifiers," The Annals of Statistics, vol. 26, no. 3.
[49] A. Ross and R. Govindarajan, "Feature level fusion of hand and face biometrics," in Proc. SPIE Conf. on Biometric Technology for Human Identification II, Mar. 2005.
[50] A. K. Jain, A. Ross, and J. Z. Qian, "Information fusion in biometrics," Pattern Recognition Letters, vol. 24.
[51] J. Kittler, M. Hatef, R. P. W. Duin, and J. Matas, "On combining classifiers," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 3, Mar.
[52] J. Lu, K. N. Plataniotis, A. N. Venetsanopoulos, and S. Z. Li, "Ensemble-based discriminant learning with boosting for face recognition," IEEE Trans. Neural Netw., vol. 17, no. 1, Jan.

[53] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Boosting LDA with regularization on MPCA features for gait recognition," in Proc. Biometrics Symposium 2007, Sep. 2007.
[54] T. Sim, S. Baker, and M. Bsat, "The CMU pose, illumination, and expression database," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 12, Dec.
[55] P. J. Phillips, H. Moon, S. A. Rizvi, and P. Rauss, "The FERET evaluation methodology for face recognition algorithms," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 10, Oct.
[56] B. Moghaddam, T. Jebara, and A. Pentland, "Bayesian face recognition," Pattern Recognition, vol. 33, no. 11.
[57] J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using LDA-based algorithms," IEEE Trans. Neural Netw., vol. 14, no. 1, Jan.
[58] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1.

Haiping Lu (S'02) received the B.Eng. and M.Eng. degrees in electrical and electronic engineering from Nanyang Technological University, Singapore, in 2001 and 2004, respectively, and the Ph.D. degree in electrical and computer engineering from the University of Toronto, Toronto, ON, Canada. Currently, he is a postdoctoral fellow in the Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada. His current research interests include statistical pattern recognition, machine learning, multilinear algebra, tensor object processing, biometric encryption, and data mining.

Konstantinos N. (Kostas) Plataniotis (S'90-M'92-SM'03) received his B.Eng. degree in Computer Engineering from the University of Patras, Greece, in 1988 and his M.S. and Ph.D. degrees in Electrical Engineering from the Florida Institute of Technology in Melbourne, Florida, in 1992 and 1994, respectively. Currently, he is a Professor with the Edward S. Rogers Sr. Department of Electrical and Computer Engineering at the University of Toronto in Toronto, ON, Canada, and an Adjunct Professor with the School of Computer Science at Ryerson University, Toronto, ON, Canada. He is a founding member and the inaugural Associate Director-Research of the Identity, Privacy and Security Initiative (IPSI) at the University of Toronto. He serves on the Executive Committee of the Knowledge Media Design Institute (KMDI) at the University of Toronto. Dr. Plataniotis was the Guest Editor for the March 2005 IEEE Signal Processing Magazine special issue on Surveillance Networks and Services. He is an Associate Editor for the IEEE Trans. on Neural Networks and the IEEE Signal Processing Letters. He is a member of the 2008 IEEE Educational Activities Board and the Chair of the IEEE EAB Continuing Professional Education Committee. Dr. Plataniotis is the 2008 representative of the Computational Intelligence Society to the IEEE Biometrics Council and a member of the Steering Committee for the IEEE Trans. on Mobile Computing. He will serve as the Editor-in-Chief for the IEEE Signal Processing Letters beginning January 1, 2009. He is a registered professional engineer in the Province of Ontario, and a member of the Technical Chamber of Greece. He is a contributor to ten books, the co-author of Color Image Processing and Applications (Springer-Verlag, 2000), and the co-editor of Color Imaging: Methods and Applications (CRC Press, September 2006). In 2005, he became the recipient of the IEEE Canada Engineering Educator Award for contributions to engineering education and inspirational guidance of graduate students. Dr. Plataniotis is the joint recipient of the 2006 IEEE Trans. on Neural Networks Outstanding Paper Award for the paper "Face Recognition Using Kernel Direct Discriminant Analysis Algorithms." He is the Managing Guest Editor of the IEEE/Wiley Press volume on Biometrics: Theory, Methods and Applications, to be published in 2009.

Anastasios N. Venetsanopoulos (S'66-M'69-SM'79-F'88) received the Bachelor of Electrical and Mechanical Engineering degree from the National Technical University of Athens (NTU), Greece, in 1965, and the M.S., M.Phil., and Ph.D. degrees in Electrical Engineering from Yale University in 1966, 1968 and 1969, respectively. He joined the Department of Electrical and Computer Engineering of the University of Toronto in September 1968 as a lecturer and was promoted to assistant professor in 1970, associate professor in 1973, and then professor. Prof. Venetsanopoulos has served as Chair of the Communications Group and Associate Chair of the Department of Electrical Engineering. Between July 1997 and June 2001, he was Associate Chair: Graduate Studies in the Department of Electrical and Computer Engineering and was Acting Chair during the spring term of 1998-1999. Prof. Venetsanopoulos served as the Inaugural Bell Canada Chair in Multimedia beginning in 1999. Between 2001 and 2006 he served as the 12th Dean of the Faculty of Applied Science and Engineering of the University of Toronto, and on October 1, 2006, he accepted the position of founding Vice-President Research and Innovation at Ryerson University. Prof. Venetsanopoulos was on research leave at the Imperial College of Science and Technology, the National Technical University of Athens, the Swiss Federal Institute of Technology, the University of Florence, and the Federal University of Rio de Janeiro, and has also served as Adjunct Professor at Concordia University. He has served as lecturer in 138 short courses to industry and continuing education programs and as consultant to numerous organizations; he is a contributor to 35 books and has published over 800 papers in refereed journals and conference proceedings on digital signal and image processing and digital communications. Prof. Venetsanopoulos was elected as a Fellow of the IEEE for "contributions to digital signal and image processing"; he is also a Fellow of the Engineering Institute of Canada (EIC) and was awarded an honorary doctorate from the National Technical University of Athens in October. In October 1996 he was awarded the Excellence in Innovation Award of the Information Technology Research Centre of Ontario and the Royal Bank of Canada for innovative work in color image processing and its industrial applications. In November 2003 he was the recipient of the Millennium Medal of the IEEE. In April 2001 he became a Fellow of the Canadian Academy of Engineering. In 2003 he was the recipient of the highest award of the Canadian IEEE, the MacNaughton Award, and in 2003 he served as the Chair of the Council of Deans of Engineering of the Province of Ontario (CODE). In 2006 he was selected as the joint recipient of the 2003 IEEE Transactions on Neural Networks Outstanding Paper Award.


RANDOM WALKS AND PERCOLATION: AN ANALYSIS OF CURRENT RESEARCH ON MODELING NATURAL PROCESSES RANDOM WALKS AND PERCOLATION: AN ANALYSIS OF CURRENT RESEARCH ON MODELING NATURAL PROCESSES AARON ZWIEBACH Abstract. In this aer we will analyze research that has been recently done in the field of discrete

More information

Brownian Motion and Random Prime Factorization

Brownian Motion and Random Prime Factorization Brownian Motion and Random Prime Factorization Kendrick Tang June 4, 202 Contents Introduction 2 2 Brownian Motion 2 2. Develoing Brownian Motion.................... 2 2.. Measure Saces and Borel Sigma-Algebras.........

More information

Principles of Computed Tomography (CT)

Principles of Computed Tomography (CT) Page 298 Princiles of Comuted Tomograhy (CT) The theoretical foundation of CT dates back to Johann Radon, a mathematician from Vienna who derived a method in 1907 for rojecting a 2-D object along arallel

More information

TIME-FREQUENCY BASED SENSOR FUSION IN THE ASSESSMENT AND MONITORING OF MACHINE PERFORMANCE DEGRADATION

TIME-FREQUENCY BASED SENSOR FUSION IN THE ASSESSMENT AND MONITORING OF MACHINE PERFORMANCE DEGRADATION Proceedings of IMECE 0 00 ASME International Mechanical Engineering Congress & Exosition New Orleans, Louisiana, November 17-, 00 IMECE00-MED-303 TIME-FREQUENCY BASED SENSOR FUSION IN THE ASSESSMENT AND

More information

Statics and dynamics: some elementary concepts

Statics and dynamics: some elementary concepts 1 Statics and dynamics: some elementary concets Dynamics is the study of the movement through time of variables such as heartbeat, temerature, secies oulation, voltage, roduction, emloyment, rices and

More information

On split sample and randomized confidence intervals for binomial proportions

On split sample and randomized confidence intervals for binomial proportions On slit samle and randomized confidence intervals for binomial roortions Måns Thulin Deartment of Mathematics, Usala University arxiv:1402.6536v1 [stat.me] 26 Feb 2014 Abstract Slit samle methods have

More information

SIMULATED ANNEALING AND JOINT MANUFACTURING BATCH-SIZING. Ruhul SARKER. Xin YAO

SIMULATED ANNEALING AND JOINT MANUFACTURING BATCH-SIZING. Ruhul SARKER. Xin YAO Yugoslav Journal of Oerations Research 13 (003), Number, 45-59 SIMULATED ANNEALING AND JOINT MANUFACTURING BATCH-SIZING Ruhul SARKER School of Comuter Science, The University of New South Wales, ADFA,

More information

arxiv: v2 [stat.me] 3 Nov 2014

arxiv: v2 [stat.me] 3 Nov 2014 onarametric Stein-tye Shrinkage Covariance Matrix Estimators in High-Dimensional Settings Anestis Touloumis Cancer Research UK Cambridge Institute University of Cambridge Cambridge CB2 0RE, U.K. Anestis.Touloumis@cruk.cam.ac.uk

More information

Developing A Deterioration Probabilistic Model for Rail Wear

Developing A Deterioration Probabilistic Model for Rail Wear International Journal of Traffic and Transortation Engineering 2012, 1(2): 13-18 DOI: 10.5923/j.ijtte.20120102.02 Develoing A Deterioration Probabilistic Model for Rail Wear Jabbar-Ali Zakeri *, Shahrbanoo

More information

ON THE DEVELOPMENT OF PARAMETER-ROBUST PRECONDITIONERS AND COMMUTATOR ARGUMENTS FOR SOLVING STOKES CONTROL PROBLEMS

ON THE DEVELOPMENT OF PARAMETER-ROBUST PRECONDITIONERS AND COMMUTATOR ARGUMENTS FOR SOLVING STOKES CONTROL PROBLEMS Electronic Transactions on Numerical Analysis. Volume 44,. 53 72, 25. Coyright c 25,. ISSN 68 963. ETNA ON THE DEVELOPMENT OF PARAMETER-ROBUST PRECONDITIONERS AND COMMUTATOR ARGUMENTS FOR SOLVING STOKES

More information

Generalized Coiflets: A New Family of Orthonormal Wavelets

Generalized Coiflets: A New Family of Orthonormal Wavelets Generalized Coiflets A New Family of Orthonormal Wavelets Dong Wei, Alan C Bovik, and Brian L Evans Laboratory for Image and Video Engineering Deartment of Electrical and Comuter Engineering The University

More information

An Ant Colony Optimization Approach to the Probabilistic Traveling Salesman Problem

An Ant Colony Optimization Approach to the Probabilistic Traveling Salesman Problem An Ant Colony Otimization Aroach to the Probabilistic Traveling Salesman Problem Leonora Bianchi 1, Luca Maria Gambardella 1, and Marco Dorigo 2 1 IDSIA, Strada Cantonale Galleria 2, CH-6928 Manno, Switzerland

More information

RUN-TO-RUN CONTROL AND PERFORMANCE MONITORING OF OVERLAY IN SEMICONDUCTOR MANUFACTURING. 3 Department of Chemical Engineering

RUN-TO-RUN CONTROL AND PERFORMANCE MONITORING OF OVERLAY IN SEMICONDUCTOR MANUFACTURING. 3 Department of Chemical Engineering Coyright 2002 IFAC 15th Triennial World Congress, Barcelona, Sain RUN-TO-RUN CONTROL AND PERFORMANCE MONITORING OF OVERLAY IN SEMICONDUCTOR MANUFACTURING C.A. Bode 1, B.S. Ko 2, and T.F. Edgar 3 1 Advanced

More information

A Parallel Algorithm for Minimization of Finite Automata

A Parallel Algorithm for Minimization of Finite Automata A Parallel Algorithm for Minimization of Finite Automata B. Ravikumar X. Xiong Deartment of Comuter Science University of Rhode Island Kingston, RI 02881 E-mail: fravi,xiongg@cs.uri.edu Abstract In this

More information

Various Proofs for the Decrease Monotonicity of the Schatten s Power Norm, Various Families of R n Norms and Some Open Problems

Various Proofs for the Decrease Monotonicity of the Schatten s Power Norm, Various Families of R n Norms and Some Open Problems Int. J. Oen Problems Comt. Math., Vol. 3, No. 2, June 2010 ISSN 1998-6262; Coyright c ICSRS Publication, 2010 www.i-csrs.org Various Proofs for the Decrease Monotonicity of the Schatten s Power Norm, Various

More information

Specialized and hybrid Newton schemes for the matrix pth root

Specialized and hybrid Newton schemes for the matrix pth root Secialized and hybrid Newton schemes for the matrix th root Braulio De Abreu Marlliny Monsalve Marcos Raydan June 15, 2006 Abstract We discuss different variants of Newton s method for comuting the th

More information

MULTIVARIATE STATISTICAL PROCESS OF HOTELLING S T CONTROL CHARTS PROCEDURES WITH INDUSTRIAL APPLICATION

MULTIVARIATE STATISTICAL PROCESS OF HOTELLING S T CONTROL CHARTS PROCEDURES WITH INDUSTRIAL APPLICATION Journal of Statistics: Advances in heory and Alications Volume 8, Number, 07, Pages -44 Available at htt://scientificadvances.co.in DOI: htt://dx.doi.org/0.864/jsata_700868 MULIVARIAE SAISICAL PROCESS

More information

Scaling Multiple Point Statistics for Non-Stationary Geostatistical Modeling

Scaling Multiple Point Statistics for Non-Stationary Geostatistical Modeling Scaling Multile Point Statistics or Non-Stationary Geostatistical Modeling Julián M. Ortiz, Steven Lyster and Clayton V. Deutsch Centre or Comutational Geostatistics Deartment o Civil & Environmental Engineering

More information

GAIT RECOGNITION THROUGH MPCA PLUS LDA. Haiping Lu, K.N. Plataniotis and A.N. Venetsanopoulos

GAIT RECOGNITION THROUGH MPCA PLUS LDA. Haiping Lu, K.N. Plataniotis and A.N. Venetsanopoulos GAIT RECOGNITION THROUGH MPCA PLUS LDA Haiping Lu, K.N. Plataniotis and A.N. Venetsanopoulos The Edward S. Rogers Sr. Department of Electrical and Computer Engineering University of Toronto, M5S 3G4, Canada

More information

#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS

#A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS #A64 INTEGERS 18 (2018) APPLYING MODULAR ARITHMETIC TO DIOPHANTINE EQUATIONS Ramy F. Taki ElDin Physics and Engineering Mathematics Deartment, Faculty of Engineering, Ain Shams University, Cairo, Egyt

More information

Yixi Shi. Jose Blanchet. IEOR Department Columbia University New York, NY 10027, USA. IEOR Department Columbia University New York, NY 10027, USA

Yixi Shi. Jose Blanchet. IEOR Department Columbia University New York, NY 10027, USA. IEOR Department Columbia University New York, NY 10027, USA Proceedings of the 2011 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelsach, K. P. White, and M. Fu, eds. EFFICIENT RARE EVENT SIMULATION FOR HEAVY-TAILED SYSTEMS VIA CROSS ENTROPY Jose

More information

arxiv: v1 [stat.ml] 10 Mar 2016

arxiv: v1 [stat.ml] 10 Mar 2016 Global and Local Uncertainty Princiles for Signals on Grahs Nathanael Perraudin, Benjamin Ricaud, David I Shuman, and Pierre Vandergheynst March, 206 arxiv:603.03030v [stat.ml] 0 Mar 206 Abstract Uncertainty

More information

Ensemble Forecasting the Number of New Car Registrations

Ensemble Forecasting the Number of New Car Registrations Ensemble Forecasting the Number of New Car Registrations SJOERT FLEURKE Radiocommunications Agency Netherlands Emmasingel 1, 9700 AL, Groningen THE NETHERLANDS sjoert.fleurke@agentschatelecom.nl htt://www.agentschatelecom.nl

More information

MODEL-BASED MULTIPLE FAULT DETECTION AND ISOLATION FOR NONLINEAR SYSTEMS

MODEL-BASED MULTIPLE FAULT DETECTION AND ISOLATION FOR NONLINEAR SYSTEMS MODEL-BASED MULIPLE FAUL DEECION AND ISOLAION FOR NONLINEAR SYSEMS Ivan Castillo, and homas F. Edgar he University of exas at Austin Austin, X 78712 David Hill Chemstations Houston, X 77009 Abstract A

More information

CMSC 425: Lecture 4 Geometry and Geometric Programming

CMSC 425: Lecture 4 Geometry and Geometric Programming CMSC 425: Lecture 4 Geometry and Geometric Programming Geometry for Game Programming and Grahics: For the next few lectures, we will discuss some of the basic elements of geometry. There are many areas

More information