Uncorrelated Multilinear Discriminant Analysis with Regularization and Aggregation for Tensor Object Recognition


Haiping Lu, K.N. Plataniotis and A.N. Venetsanopoulos
The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, M5S 3G4, Canada; Ryerson University, Toronto, Ontario, M5B 2K3, Canada
{haiping, kostas,

Abstract—This paper proposes an uncorrelated multilinear discriminant analysis (UMLDA) framework for the recognition of multidimensional objects, known as tensor objects. Uncorrelated features are desirable in recognition tasks since they contain minimum redundancy and ensure independence of features. The UMLDA aims to extract uncorrelated discriminative features directly from tensorial data through solving a tensor-to-vector projection. The solution consists of sequential iterative processes based on the alternating projection method, and an adaptive regularization procedure is incorporated to enhance the performance in the small sample size scenario. A simple nearest neighbor classifier is employed for classification. Furthermore, exploiting the complementary information from differently initialized and regularized UMLDA recognizers, an aggregation scheme is adopted to combine them at the matching score level, resulting in enhanced generalization performance while alleviating the regularization parameter selection problem. The UMLDA-based recognition algorithm is then empirically shown on face and gait recognition tasks to outperform four multilinear subspace solutions (MPCA, DATER, GTDA, TR1DA) and three linear subspace solutions (LDA, ULDA, R-JD-LDA).

Index Terms—Multilinear discriminant analysis (MLDA), tensor objects, dimensionality reduction, feature extraction, face recognition, gait recognition, regularization, fusion.

This paper was presented in part at Biometrics Symposium 2007, Baltimore, Maryland, September 11-13, 2007.

I. INTRODUCTION

Nowadays, there is growing interest in the processing of multidimensional objects, formally known as tensor objects, in a large number of emerging applications. Tensors are considered to be the extensions of vectors and matrices. The elements of a tensor are addressed by a number of indices [1], where the number of indices used in the description defines the order of the tensor object and each index defines one mode. By this definition, vectors are first-order tensors and matrices are second-order tensors. For example, gray-level images are naturally second-order tensors with the column and row modes [2], and color images are three-dimensional objects (third-order tensors) with the column, row and color modes [3]. Three-dimensional gray-level objects [4], such as 3-D gray-level faces [5], [6], are naturally third-order tensors with the column, row and depth modes, while the popular Gabor representation of gray-level images [7] is a third-order tensor with the column, row and Gabor modes. Many sequential (space-time) signals, such as surveillance video sequences [8], are naturally higher-order tensors. Gray-level video sequences can be viewed as third-order tensors with the column, row and time modes, and color video sequences are fourth-order tensors with an additional color mode. Among the wide range of applications involving tensor objects [9], feature extraction for recognition purposes is arguably the most important one. Therefore, this paper focuses on feature extraction for tensor object recognition. In pattern recognition applications, the tensor space where a typical tensor object is specified is often high-dimensional, and recognition methods operating directly on this space suffer from the so-called curse of dimensionality [10].
On the other hand, the entries of a tensor object are often highly correlated with surrounding entries, and the samples from a particular tensor object class, such as face images, are usually highly constrained and belong to a subspace, a manifold of intrinsically low dimension [10], [11]. Feature extraction or dimensionality reduction is thus an attempt to transform a high-dimensional data set into a low-dimensional space of equivalent representation while retaining most of the underlying structure [12]. Traditional feature extraction algorithms, such as the classical principal component analysis (PCA) and linear discriminant analysis (LDA), are linear algorithms that operate on one-dimensional objects, i.e., first-order tensors (vectors). To apply these linear algorithms to higher-order (greater than one) tensor objects, such as images and videos, these tensor objects have to be reshaped (vectorized) into vectors first. However, it is well understood that such reshaping (vectorization) breaks the natural structure and correlation in the original data, removes redundancies and/or higher-order dependencies present in the original data set, and loses potentially more compact or useful representations that can be obtained in the original tensorial forms [9]. Thus, dimensionality reduction algorithms operating directly on the tensor

objects rather than their vectorized versions are desirable. Recently, multilinear subspace feature extraction algorithms [2], [9], [13], [14] operating directly on the tensorial representations rather than their vectorized versions have been emerging, especially in the popular area of biometrics-based human recognition, e.g., face and gait recognition. The multilinear principal component analysis (MPCA) framework [9], a multilinear extension of the PCA, determines a multilinear projection that projects the original tensor objects into a lower-dimensional tensor subspace while preserving the variation in the original data. Like PCA, MPCA is an unsupervised method, and the feature extraction process does not make use of the class information. On the other hand, supervised multilinear feature extraction algorithms have also been developed. Since LDA is a classical algorithm that has been very successful and widely applied in various applications, several variants of its multilinear extension have been proposed, referred to collectively as multilinear discriminant analysis (MLDA) in this paper. The two-dimensional LDA (2DLDA) first introduced in [15] was later extended to perform discriminant analysis on more general tensorial inputs [2]. In this so-called discriminant analysis with tensor representation (DATER)¹ approach of [2], a tensor-based scatter ratio criterion is maximized. A so-called general tensor discriminant analysis (GTDA) algorithm is proposed in [14], where a scatter difference criterion is maximized. DATER and GTDA are both based on the tensor-to-tensor projection (TTP) [16]. In contrast, the tensor rank-one discriminant analysis (TR1DA) algorithm [17], [18] obtains a number of rank-one projections with the scatter difference criterion from the repeatedly calculated residues of the original tensor data, which is in fact the heuristic method in [19] for tensor approximation, and it can be viewed as being based on the tensor-to-vector projection (TVP) [16].
The multilinear extensions of linear graph-embedding algorithms were introduced similarly in [20]-[24]. In the existing MLDA variants [2], [14], [17], [18], attention has focused mainly on the objective criterion in terms of (either the ratio of or the difference between) the between-class scatter and the within-class scatter, since it is well known that the classical LDA aims to maximize the Fisher's discrimination criterion (FDC). However, these variants do not take the correlations among features into account. In other words, an important property of the classical LDA is ignored in these developments: the classical LDA derives uncorrelated features, as proved in [25], [26], where the uncorrelated LDA (ULDA) introduced in [27] is shown to be equivalent to the classical LDA. Uncorrelated features contain minimum redundancy and ensure independence of features, so they are highly desirable in many applications [26]. Motivated by the discussions above, this paper aims to develop a new MLDA solution that extracts

¹Here, we use the name given when the algorithm was first proposed, which is the name more commonly referred to in the literature.

uncorrelated features, named the uncorrelated MLDA (UMLDA). The proposed UMLDA extracts discriminative features directly from tensorial data through solving a TVP so that the traditional FDC is maximized in each elementary projection, while the features extracted are constrained to be uncorrelated. The solution is iterative, based on the alternating projection method (APM), and an adaptive regularization factor is incorporated to enhance the performance in practical applications where the input dimensionality is very high but the sample size per class is often limited, such as face or gait recognition [28], [29]. The extracted features are classified through a simple classifier. Furthermore, as different initialization or regularization of UMLDA results in different features, an aggregation scheme that combines these features at the matching score level using the simple sum rule is adopted to enhance the recognition performance, with the regularization parameter selection problem alleviated at the same time.

The main contributions of this work are:

1) The introduction of a UMLDA algorithm for uncorrelated discriminative feature extraction from tensors. As a multilinear extension of LDA, the algorithm not only obtains discriminative features through maximizing the traditional scatter-ratio-based criterion, but also enforces a constraint so that the features derived are uncorrelated. This contrasts with the traditional approach of linear learning algorithms [28], [30], [31], where vector rather than tensor representation is used and thus the natural structural information is destroyed. It also differs from the MLDA variants in [2], [14], [17], [18], where there are correlations among extracted features. Another difference from the works in [2], [14] is that a TVP rather than a TTP is used here, and this work takes a systematic approach to solve such a TVP, in contrast with the heuristic approach in [17], [18].
Furthermore, it provides a new approach with constraint enforcement in developing multilinear learning algorithms.

2) The incorporation of an adaptive regularization procedure where the within-class scatter estimation is increased through a data-independent regularization parameter. This takes into account the practical small sample size (SSS) problem, which often arises in biometrics applications, and the iterative nature of UMLDA, in contrast to the scatter estimation without regularization in [2], [14], [17], [18].

3) The adoption of an aggregation scheme that combines several differently initialized and differently regularized UMLDA feature extractors, which produce different features, at the matching score level to achieve enhanced recognition performance while alleviating the regularization parameter selection problem faced in most regularization methods.

The rest of this paper is organized as follows: Section II introduces the notations and basic multilinear

algebra operations, as well as the TVP. In Section III, the problem of UMLDA is formulated and an iterative solution is derived, with an adaptive regularization procedure introduced for better generalization in the SSS scenario. Next, the classifier employed is described, and issues regarding the initialization method, projection order, termination criteria and convergence are addressed in this section, followed by a brief discussion on the connections to LDA and other MLDA variants. Section IV presents the matching score level aggregation of multiple UMLDA feature extractors that are differently initialized and regularized to enhance the recognition performance. In Section V, experiments on two face databases and one gait database are reported. The properties of the proposed UMLDA solutions are first illustrated, and detailed recognition results are then compared against competing linear solutions as well as multilinear solutions. Finally, Section VI draws the conclusion of this work.

II. MULTILINEAR BASICS AND THE TENSOR-TO-VECTOR PROJECTION

This section introduces the foundations that are fundamentally important for the flow of this paper. Firstly, we review the notations and some basic multilinear operations that are necessary in presenting the proposed MLDA solution. Secondly, the TVP used in the proposed algorithm is described in detail. Table I summarizes the important symbols used in this paper for quick reference.

A. Notations and basic multilinear algebra concepts

The notations in this paper follow the conventions in the multilinear algebra, pattern recognition and adaptive learning literature. In this paper, we denote vectors by lowercase boldface letters, e.g., x; matrices by uppercase boldface letters, e.g., U; and tensors by calligraphic letters, e.g., A. Their elements are denoted with indices in parentheses. Indices are denoted by lowercase letters and span the range from 1 to the uppercase letter of the index, e.g., n = 1, 2, ..., N.
Throughout this paper, the discussion is focused on real-valued vectors, matrices and tensors only, and the extension to complex-valued data is left for future work. An Nth-order tensor is denoted as A ∈ R^{I_1 × I_2 × ... × I_N}. It is addressed by N indices i_n, n = 1, ..., N, and each i_n addresses the n-mode of A. The n-mode product of a tensor A by a matrix U ∈ R^{J_n × I_n}, denoted by A ×_n U, is a tensor with entries:

(A ×_n U)(i_1, ..., i_{n-1}, j_n, i_{n+1}, ..., i_N) = Σ_{i_n} A(i_1, ..., i_N) · U(j_n, i_n).   (1)

The scalar product of two tensors A, B ∈ R^{I_1 × I_2 × ... × I_N} is defined as:

<A, B> = Σ_{i_1} Σ_{i_2} ... Σ_{i_N} A(i_1, i_2, ..., i_N) · B(i_1, i_2, ..., i_N).   (2)
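As a concrete illustration of (1) and (2), the n-mode product and the scalar product can be sketched in NumPy as follows (the function names are ours, not from the paper):

```python
import numpy as np

def mode_n_product(A, U, n):
    """n-mode product A x_n U: contract mode n of tensor A with the
    rows of matrix U (shape J_n x I_n), per Eq. (1)."""
    A_n = np.moveaxis(A, n, 0)                   # bring mode n to the front
    out = np.tensordot(U, A_n, axes=([1], [0]))  # shape (J_n, ...)
    return np.moveaxis(out, 0, n)                # restore the mode order

def scalar_product(A, B):
    """Scalar product <A, B> of two same-sized tensors, per Eq. (2)."""
    return np.sum(A * B)

A = np.arange(24.0).reshape(2, 3, 4)   # a third-order tensor
U = np.ones((5, 3))                    # maps the 3-dim 2-mode to 5 dims
print(mode_n_product(A, U, 1).shape)   # (2, 5, 4)
```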

The n-mode vectors of A are defined as the I_n-dimensional vectors obtained from A by varying the index i_n while keeping all the other indices fixed. A rank-one tensor A equals the outer product of N vectors: A = u^(1) ∘ u^(2) ∘ ... ∘ u^(N), which means that A(i_1, i_2, ..., i_N) = u^(1)(i_1) · u^(2)(i_2) · ... · u^(N)(i_N) for all values of the indices. Unfolding A along the n-mode is denoted as A_(n) ∈ R^{I_n × (I_1 × ... × I_{n-1} × I_{n+1} × ... × I_N)}, and the column vectors of A_(n) are the n-mode vectors of A. An example of the 1-mode vectors of a third-order tensor A can be found in Fig. 1(a).

TABLE I
LIST OF SYMBOLS

X_m — the mth input tensor sample, m = 1, ..., M
u_p^(n) — the n-mode projection vector, n = 1, ..., N
p = 1, ..., P — the index of the EMP
P — the number of EMPs in the TVP
{u_p^(n)T, n = 1, ..., N} — the pth EMP
y_m — the projection of X_m on the TVP {u_p^(n)T, n = 1, ..., N}_{p=1}^P
y_m(p) = y_{m_p} = g_p(m) — the projection of the mth sample X_m on the pth EMP {u_p^(n)T, n = 1, ..., N}
S_B^{y_p} — the between-class scatter of the pth projected features {y_{m_p}, m = 1, ..., M}
S_W^{y_p} — the within-class scatter of the pth projected features {y_{m_p}, m = 1, ..., M}
g_p — the pth coordinate vector
F_p^y = S_B^{y_p} / S_W^{y_p} — the Fisher's discrimination criterion for the pth EMP
k — the iteration step index in the UMLDA algorithm
K — the maximum number of iterations in UMLDA
γ — the regularization parameter
a = 1, ..., A — the index of the feature extractor in aggregation
A — the number of feature extractors to be aggregated
c_m — the class label for the mth training sample
vec(A) — the vectorized representation of the tensor A
L — the number of training samples for each class (subject)
C — the number of classes (subjects) in training

B. Tensor-to-Vector Projection

The UMLDA framework developed in this paper takes a multilinear subspace (or tensor subspace) [20] approach to feature extraction, where tensorial data is projected into a subspace for better discrimination. As discussed in Sec. I, there are two general forms of multilinear projection: the TTP [2], [9], [14] and the TVP [17], [18].
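The n-mode unfolding introduced above can be sketched as follows (one common column-ordering convention is assumed here; conventions differ across the literature):

```python
import numpy as np

def unfold(A, n):
    """n-mode unfolding A_(n): an I_n x (product of the other dims)
    matrix whose columns are the n-mode vectors of A."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

A = np.arange(24.0).reshape(2, 3, 4)
A1 = unfold(A, 1)
print(A1.shape)   # (3, 8)
print(A1[:, 0])   # the first 1-mode vector, equal to A[0, :, 0]
```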
Since the projections obtained by TTP can be viewed as a set of interdependent

Fig. 1. Illustration of (a) an elementary multilinear projection (EMP), (b) a tensor-to-vector projection (TVP).

projections [16], the features extracted through TTP are likely to be correlated rather than uncorrelated. Therefore, we choose to develop the UMLDA by determining a subspace of tensor objects through TVP rather than TTP. The TVP projects a tensor to a vector, and it can be viewed as multiple projections from a tensor to a scalar, as illustrated in Fig. 1(b), where a TVP of a tensor A to a P × 1 vector consists of P projections from A to a scalar. Thus, the projection from a tensor to a scalar is considered first. A tensor X ∈ R^{I_1 × I_2 × ... × I_N} can be projected to a point y through N unit projection vectors {u^(1)T, u^(2)T, ..., u^(N)T} as:

y = X ×_1 u^(1)T ×_2 u^(2)T ... ×_N u^(N)T,   ||u^(n)|| = 1 for n = 1, ..., N,

where ||·|| is the Euclidean norm for vectors. Using the scalar product (2), this can be written as y = <X, u^(1) ∘ u^(2) ∘ ... ∘ u^(N)>. Denoting U = u^(1) ∘ u^(2) ∘ ... ∘ u^(N), we then have y = <X, U>. We name this multilinear projection {u^(1)T, u^(2)T, ..., u^(N)T} an elementary multilinear projection (EMP), which is the projection of a tensor on a single line (resulting in a scalar), and it consists of one projection vector in

each mode. Figure 1(a) illustrates an EMP of a tensor A. As pointed out in [16], an EMP can be viewed as a constrained linear projection, since <X, U> = <vec(X), vec(U)> = [vec(U)]^T vec(X), where vec(A) denotes the vectorized representation of the tensor A [32]. Thus, the TVP of a tensor object X to a vector y ∈ R^P in a P-dimensional vector space consists of P EMPs {u_p^(1)T, u_p^(2)T, ..., u_p^(N)T}, p = 1, ..., P, which can be written concisely as {u_p^(n)T, n = 1, ..., N}_{p=1}^P. The TVP from X to y is then written as y = X ×_{n=1}^N {u_p^(n)T, n = 1, ..., N}_{p=1}^P, where the pth component of y is obtained from the pth EMP as: y(p) = X ×_1 u_p^(1)T ×_2 u_p^(2)T ... ×_N u_p^(N)T.

III. UNCORRELATED MLDA WITH REGULARIZATION FOR TENSOR OBJECT RECOGNITION

This section proposes the UMLDA-based tensor object recognition system, which is a typical recognition system as shown in Fig. 2. The normalization step is a standard processing step to ensure all input tensors have the same size [9]. At the core of this system is the UMLDA framework for feature extraction to be presented in this section: the UMLDA with regularization (R-UMLDA). The R-UMLDA produces vector features that can be fed into standard classifiers for classification. Besides the description of the fundamental units in Fig. 2, implementation issues and the connections with other algorithms are discussed in this section as well.

Fig. 2. UMLDA-based tensor object recognition.

A. The uncorrelated multilinear discriminant analysis with regularization (R-UMLDA)

In the presentation, for convenience of discussion, the training samples are assumed to be zero-mean² so that the constraint of uncorrelated features is the same as that of orthogonal features³. Before formally stating the objective of the UMLDA, a number of definitions are needed. The classical FDC in LDA [30] is defined as the scatter ratio for vector samples. Here, we adapt it to scalar samples, which can be viewed as the degenerate version.
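A minimal NumPy sketch of an EMP and of a TVP built from P EMPs (the function names are ours, not from the paper):

```python
import numpy as np

def emp_project(X, us):
    """Elementary multilinear projection: project tensor X to a scalar
    through one unit projection vector per mode."""
    y = X
    for u in us:
        # Each contraction removes the current axis 0, so the next
        # original mode always sits at axis 0.
        y = np.tensordot(u, y, axes=([0], [0]))
    return float(y)

def tvp_project(X, emps):
    """Tensor-to-vector projection: P EMPs yield a P-dimensional vector."""
    return np.array([emp_project(X, us) for us in emps])

rng = np.random.default_rng(0)
X = rng.random((4, 5, 6))
emps = [[rng.random(d) for d in X.shape] for _ in range(3)]  # P = 3 EMPs
print(tvp_project(X, emps).shape)  # (3,)
```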
The pth projected (scalar) features

²When the training sample mean is not zero, it can be subtracted to make the training samples zero-mean.
³Let x and y be vector observations of the variables x and y. Then, x and y are orthogonal iff x^T y = 0, and x and y are uncorrelated iff (x − x̄)^T (y − ȳ) = 0, where x̄ and ȳ are the means of x and y, respectively [33]. Thus, two zero-mean (centered) vectors are uncorrelated when they are orthogonal [34].

are {y_{m_p}, m = 1, ..., M}, where M is the number of training samples and y_{m_p} is the projection of the mth sample X_m by the pth EMP {u_p^(n)T, n = 1, ..., N}: y_{m_p} = X_m ×_{n=1}^N {u_p^(n)T, n = 1, ..., N}. Their corresponding between-class scatter S_B^{y_p} and within-class scatter S_W^{y_p} are

S_B^{y_p} = Σ_{c=1}^C N_c (ȳ_{c_p} − ȳ_p)²,   S_W^{y_p} = Σ_{m=1}^M (y_{m_p} − ȳ_{c_m,p})²,   (3)

where C is the number of classes, N_c is the number of samples for class c, c_m is the class label for the mth training sample, ȳ_p = (1/M) Σ_m y_{m_p} = 0 and ȳ_{c_p} = (1/N_c) Σ_{m,c_m=c} y_{m_p}. Thus, the FDC for the pth scalar samples is F_p^y = S_B^{y_p} / S_W^{y_p}. In addition, let g_p denote the pth coordinate vector, with its mth component g_p(m) = y_{m_p}.

A formal definition of the multilinear feature extraction problem to be solved in UMLDA is then given in the following. A set of M training tensor object samples {X_1, X_2, ..., X_M} (with zero mean) is available for training. Each tensor object X_m ∈ R^{I_1 × I_2 × ... × I_N} assumes values in the tensor space R^{I_1} ⊗ R^{I_2} ⊗ ... ⊗ R^{I_N}, where I_n is the n-mode dimension of the tensor. The objective of the UMLDA is to find a TVP, which consists of P EMPs {u_p^(n) ∈ R^{I_n × 1}, n = 1, ..., N}_{p=1}^P, mapping from the original tensor space R^{I_1} ⊗ R^{I_2} ⊗ ... ⊗ R^{I_N} into a vector subspace R^P (with P < Π_{n=1}^N I_n):

y_m = X_m ×_{n=1}^N {u_p^(n)T, n = 1, ..., N}_{p=1}^P,   m = 1, ..., M,   (4)

such that the FDC F_p^y is maximized in each EMP direction, subject to the constraint that the P coordinate vectors {g_p ∈ R^M, p = 1, ..., P} are uncorrelated. In other words, the UMLDA objective is to determine a set of P EMPs {u_p^(n)T, n = 1, ..., N}_{p=1}^P that maximize the scatter ratio while producing features with zero correlation. Thus, the objective function for the pth EMP is

{u_p^(n)T, n = 1, ..., N} = arg max F_p^y,   subject to   (g_p^T g_q) / (||g_p|| · ||g_q||) = δ_{pq},   q = 1, ..., p,   (5)

where δ_{pq} is the Kronecker delta (defined as 1 for p = q and as 0 otherwise). To solve this problem, we follow the successive determination approach in the derivation of the ULDA in [27].
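The scalar scatters in (3) and the resulting FDC can be computed as in the following sketch (the function is ours; it uses the actual feature mean rather than assuming zero-mean features):

```python
import numpy as np

def fisher_scalar(y, labels):
    """Between-class scatter, within-class scatter and Fisher's
    discrimination criterion for scalar features, per Eq. (3)."""
    y_bar = y.mean()
    S_B = 0.0
    S_W = 0.0
    for c in np.unique(labels):
        y_c = y[labels == c]
        S_B += len(y_c) * (y_c.mean() - y_bar) ** 2  # N_c (ybar_c - ybar)^2
        S_W += np.sum((y_c - y_c.mean()) ** 2)       # within-class spread
    return S_B, S_W, S_B / S_W
```

For example, y = [1, 2, 3, 5] with labels [0, 0, 1, 1] gives S_B = 6.25, S_W = 2.5 and F = 2.5.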
The P EMPs {u_p^(n)T, n = 1, ..., N}_{p=1}^P are determined sequentially (one by one) in P steps, with the pth step obtaining the pth EMP. This stepwise process proceeds as follows:

Step 1: Determine the first EMP {u_1^(n)T, n = 1, ..., N} by maximizing F_1^y without any constraint.
Step 2: Determine the second EMP {u_2^(n)T, n = 1, ..., N} by maximizing F_2^y subject to the constraint that g_2^T g_1 = 0.

Step 3: Determine the third EMP {u_3^(n)T, n = 1, ..., N} by maximizing F_3^y subject to the constraints that g_3^T g_1 = 0 and g_3^T g_2 = 0.
Step p (p = 4, ..., P): Determine the pth EMP {u_p^(n)T, n = 1, ..., N} by maximizing F_p^y subject to the constraints that g_p^T g_q = 0 for q = 1, ..., p − 1.

In the following, the algorithm to compute these EMPs is presented in detail, and it is summarized in the pseudo-code in Fig. 3. In the figure, the stepwise process described above corresponds to the loop indexed by p.

Input: A set of zero-mean tensor samples {X_m ∈ R^{I_1 × I_2 × ... × I_N}, m = 1, ..., M} with class labels c ∈ R^M, the desired feature vector length P, the regularization parameter γ, the maximum number of iterations K and a small number ε for testing convergence.
Output: The P EMPs {u_p^(n)T, n = 1, ..., N}_{p=1}^P that best separate classes in the projected space.

R-UMLDA algorithm:
For p = 1 : P (step p: determine the pth EMP)
    If p > 1, calculate the coordinate vector g_{p−1}: g_{p−1}(m) = X_m ×_1 u_{p−1}^(1)T ×_2 u_{p−1}^(2)T ... ×_N u_{p−1}^(N)T.
    For n = 1, ..., N, initialize u_p^(n)(0) ∈ R^{I_n}.
    For k = 1 : K
        For n = 1 : N
            Calculate ỹ_m^(n) = X_m ×_1 u_p^(1)T(k) ... ×_{n−1} u_p^(n−1)T(k) ×_{n+1} u_p^(n+1)T(k−1) ... ×_N u_p^(N)T(k−1), for m = 1, ..., M.
            Calculate R_p^(n), S_B^(n) and S_W^(n).
            Set u_p^(n)(k) to be the (unit) eigenvector of (S_W^(n))^{−1} R_p^(n) S_B^(n) associated with the largest eigenvalue.
        If k = K or dist(u_p^(n)(k), u_p^(n)(k−1)) < ε for all n, set u_p^(n) = u_p^(n)(k) for all n, and break.
    Output {u_p^(n)}. Go to step p + 1 if p < P. Stop if p = P.

Fig. 3. The pseudo-code implementation of the R-UMLDA algorithm for feature extraction from tensor objects.

To solve for the pth EMP {u_p^(n)T, n = 1, ..., N}, there are N sets of parameters corresponding to N projection vectors to be determined, u_p^(1), u_p^(2), ..., u_p^(N), one in each mode. Ideally, we would like to determine these N sets of parameters (N projection vectors) in all modes simultaneously so that F_p^y is (globally) maximized, subject to the zero-correlation constraint.
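The key step inside the k-loop of Fig. 3 — projecting each sample in every mode except the current one — can be sketched as follows (the function name is ours):

```python
import numpy as np

def partial_project(X, us, n):
    """Project tensor X in all modes except mode n, yielding the
    I_n-dimensional vector used in the conditional subproblem."""
    y = X
    # Contract higher-numbered modes first so that the remaining
    # axis numbers stay valid as axes are removed.
    for mode in range(X.ndim - 1, -1, -1):
        if mode == n:
            continue
        y = np.tensordot(us[mode], y, axes=([0], [mode]))
    return y

rng = np.random.default_rng(0)
X = rng.random((4, 5, 6))
us = [rng.random(d) for d in X.shape]
print(partial_project(X, us, 1).shape)  # (5,)
```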
Unfortunately, this is a rather complicated nonlinear problem without an existing optimal solution, except when N = 1, which is the classical linear case where only one projection vector is to be solved. Therefore, we solve this typical multilinear problem by following the principle of the alternating least squares (ALS) algorithm [35]-[37], where a multilinear (least-squares) optimization problem is reduced into smaller conditional subproblems

that can be solved through simple established methods employed in the linear case. Thus, for each EMP to be determined, the parameters of the projection vector for each mode n* are estimated one by one separately, conditioned on {u_p^(n), n ≠ n*}, the parameter values of the projection vectors for the other modes. By fixing {u_p^(n), n ≠ n*}, a new objective function depending only on u_p^(n*) is formulated, and this conditional subproblem is linear and much simpler. The parameter estimations for each mode are obtained in this way sequentially (the n loop) and iteratively (the k loop) until a stopping criterion is met. We name this iterative method the alternating projection method (APM). It corresponds to the loop indexed by k in Fig. 3, and in each iteration k, the loop indexed by n in Fig. 3 consists of the N conditional subproblems. To solve for u_p^(n*) in the n*-mode, assuming that {u_p^(n), n ≠ n*} is given, the tensor samples are projected in these (N − 1) modes {n ≠ n*} first to obtain the vectors

ỹ_m^(n*) = X_m ×_1 u_p^(1)T ... ×_{n*−1} u_p^(n*−1)T ×_{n*+1} u_p^(n*+1)T ... ×_N u_p^(N)T,   (6)

where ỹ_m^(n*) ∈ R^{I_{n*}}. This conditional subproblem then becomes that of determining the u_p^(n*) that projects the vector samples {ỹ_m^(n*), m = 1, ..., M} onto a line so that the scatter ratio is maximized, subject to the zero-correlation constraint. This is a (linear and simpler) ULDA problem with the input samples {ỹ_m^(n*), m = 1, ..., M}.
The corresponding between-class scatter matrix S_B^(n*) and the (regularized) within-class scatter matrix S_W^(n*) are then defined as

S_B^(n*) = Σ_{c=1}^C N_c (ỹ̄_c^(n*) − ỹ̄^(n*))(ỹ̄_c^(n*) − ỹ̄^(n*))^T,   (7)

S_W^(n*) = Σ_{m=1}^M (ỹ_m^(n*) − ỹ̄_{c_m}^(n*))(ỹ_m^(n*) − ỹ̄_{c_m}^(n*))^T + γ · λ_max(S̃_W^(n*)) · I_{I_{n*}},   (8)

where ỹ̄_c^(n*) = (1/N_c) Σ_{m,c_m=c} ỹ_m^(n*), ỹ̄^(n*) = (1/M) Σ_m ỹ_m^(n*) = 0, γ ≥ 0 is a regularization parameter, I_{I_{n*}} is an identity matrix of size I_{n*} × I_{n*}, and λ_max(S̃_W^(n*)) is the maximum eigenvalue of S̃_W^(n*), the within-class scatter matrix for the n*-mode vectors of the training samples, defined as

S̃_W^(n*) = Σ_{m=1}^M (X_{m(n*)} − X̄_{c_m(n*)})(X_{m(n*)} − X̄_{c_m(n*)})^T,   (9)

where X̄_{c(n*)} is the n*-mode unfolded matrix of the class mean tensor X̄_c = (1/N_c) Σ_{m,c_m=c} X_m.

In the following, the motivation for introducing the regularization factor is explained. In the targeted biometrics applications (and many other applications as well), the dimensionality of the input data is very high while, at the same time, the number of training samples for each class is often too small to represent the true characteristics of their classes, resulting in the well-known SSS problem [28].
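A sketch of the scatter matrices (7) and (8) for the mode-n* projected samples; `lam_max` stands for λ_max(S̃_W^(n*)) from (9) and is assumed to be precomputed (the function name and signature are ours):

```python
import numpy as np

def scatter_matrices(Y, labels, gamma, lam_max):
    """Between-class scatter (7) and regularized within-class scatter (8)
    for the M x d matrix Y of mode-n* projected vector samples."""
    d = Y.shape[1]
    mean = Y.mean(axis=0)
    S_B = np.zeros((d, d))
    S_W = np.zeros((d, d))
    for c in np.unique(labels):
        Yc = Y[labels == c]
        diff = (Yc.mean(axis=0) - mean)[:, None]
        S_B += len(Yc) * diff @ diff.T            # N_c outer-product term
        centered = Yc - Yc.mean(axis=0)
        S_W += centered.T @ centered              # within-class term
    S_W += gamma * lam_max * np.eye(d)            # adaptive regularization
    return S_B, S_W
```

Setting gamma to zero recovers the unregularized scatter of the basic UMLDA.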

Furthermore, our empirical study of the iterative UMLDA algorithm (i.e., γ = 0) under the SSS scenario indicates that the iterations tend to minimize the within-class scatter towards zero in order to maximize the scatter ratio (since the scatter ratio reaches a maximum of infinity when the within-class scatter is zero and the between-class scatter is non-zero). However, the estimated within-class scatter on the training data is usually much smaller than the real within-class scatter, due to the limited number of samples for each class. Therefore, regularization [38], which has been used for combatting the singularity problem⁴ in LDA-based algorithms under the SSS scenario [28], [39], is adopted here to improve the generalization capability of UMLDA under the SSS scenario, leading to the R-UMLDA. The regularization term is introduced in (8) so that during the iteration, less focus is put on shrinking the within-class scatter. Moreover, the regularization introduced is adaptive, since γ is the only regularization parameter and the regularization term in the n*-mode is scaled by λ_max(S̃_W^(n*)), which is an approximate estimate of the n*-mode within-class scatter in the training data. The basic UMLDA is obtained by setting γ = 0.

With (7) and (8), we are ready to solve for the P EMPs. For p = 1, the u_1^(n*) that maximizes the FDC (u_1^(n*)T S_B^(n*) u_1^(n*)) / (u_1^(n*)T S_W^(n*) u_1^(n*)) in the projected space is obtained as the unit eigenvector of (S_W^(n*))^{−1} S_B^(n*) associated with the largest eigenvalue. Next, we show how to determine the pth (p > 1) EMP given the first (p − 1) EMPs. Given the first (p − 1) EMPs, the pth EMP aims to maximize the scatter ratio F_p^y, subject to the constraint that features projected by the pth EMP are uncorrelated with those projected by the first (p − 1) EMPs. Let Ỹ_p^(n*) ∈ R^{I_{n*} × M} be the matrix with its mth column being ỹ_m^(n*), i.e., Ỹ_p^(n*) = [ỹ_1^(n*), ỹ_2^(n*), ..., ỹ_M^(n*)]; then the pth coordinate vector is obtained as g_p = Ỹ_p^(n*)T u_p^(n*).
The constraint that g_p is uncorrelated with {g_q, q = 1, ..., p − 1} can be written as

g_p^T g_q = u_p^(n*)T Ỹ_p^(n*) g_q = 0,   q = 1, ..., p − 1.   (10)

Thus, u_p^(n*) (p > 1) can be determined by solving the following constrained optimization problem:

u_p^(n*) = arg max_u (u^T S_B^(n*) u) / (u^T S_W^(n*) u),   subject to   u^T Ỹ_p^(n*) g_q = 0,   q = 1, ..., p − 1.   (11)

The solution is given by the following theorem:

Theorem 1: The solution to the problem (11) is the (unit-length) generalized eigenvector corresponding to the largest generalized eigenvalue of the following generalized eigenvalue problem:

R_p^(n*) S_B^(n*) u = λ S_W^(n*) u,   (12)

⁴While the numerical singularity problem is common for LDA-based algorithms, as pointed out in [2], this is not the case for MLDA. Therefore, the motivation for using regularization here is different from the linear case in this aspect.
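In practice, (12) can be solved through the equivalent ordinary eigenproblem on (S_W)^{−1} R S_B; a sketch (the function name is ours):

```python
import numpy as np

def solve_emp_mode(S_B, S_W, R=None):
    """Unit eigenvector of (S_W)^-1 R S_B with the largest eigenvalue,
    solving the generalized eigenvalue problem (12); R = I when p = 1."""
    d = S_B.shape[0]
    R = np.eye(d) if R is None else R
    M = np.linalg.solve(S_W, R @ S_B)     # (S_W)^-1 R S_B without inverting
    w, V = np.linalg.eig(M)
    u = np.real(V[:, np.argmax(np.real(w))])
    return u / np.linalg.norm(u)

# With S_W = I and R = I, this reduces to the leading eigenvector of S_B.
u = solve_emp_mode(np.diag([3.0, 1.0]), np.eye(2))
print(np.abs(u))  # [1. 0.]
```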

where

R_p^(n*) = I_{I_{n*}} − Ỹ_p^(n*) G_{p−1} (G_{p−1}^T Ỹ_p^(n*)T (S_W^(n*))^{−1} Ỹ_p^(n*) G_{p−1})^{−1} G_{p−1}^T Ỹ_p^(n*)T (S_W^(n*))^{−1},   (13)

G_{p−1} = [g_1 g_2 ... g_{p−1}] ∈ R^{M × (p−1)}.   (14)

Proof: The proof of Theorem 1 is given in Appendix A.

By setting R_1^(n*) = I_{I_{n*}}, from Theorem 1 we have a unified solution for R-UMLDA: for p = 1, ..., P, u_p^(n*) is obtained as the unit eigenvector of (S_W^(n*))^{−1} R_p^(n*) S_B^(n*) associated with the largest eigenvalue. Next, we will describe the classification of R-UMLDA features and then give detailed discussions on implementation issues and connections to other algorithms.

B. Classification of R-UMLDA features

Despite working directly on tensorial data, the proposed R-UMLDA algorithm is a feature extraction algorithm that produces feature vectors like traditional linear algorithms (through TVP). For recognition tasks, the extracted features are fed into a classifier to get the class label, as shown in Fig. 2. In this work, the feature vectors obtained through R-UMLDA are fed into the nearest neighbor classifier (NNC) with the Euclidean distance measure for classification. It should be noted at this point that, since this paper focuses on multilinear feature extraction, a simple classifier is preferred so that the recognition performance is mainly contributed by the feature extraction algorithms rather than the classifier. The classification accuracy of the proposed method (and the other methods compared in Section V) is expected to improve if a more sophisticated classifier, such as the support vector machine (SVM), is used instead of the NNC. However, such an experiment is out of the scope of this paper.

To classify a test sample X using the NNC, X is first projected to a feature vector y through the TVP obtained by R-UMLDA: y = X ×_{n=1}^N {u_p^(n)T, n = 1, ..., N}_{p=1}^P. The nearest neighbor is then found as m* = arg min_m ||y − y_m||, where y_m is the feature vector for the mth training sample. The class label c_{m*} of the m*th training sample is then assigned to the test sample X.

C.
Initialization, projection order, termination and convergence

In this subsection, we discuss the various implementation issues of R-UMLDA, in the order of the algorithm flow in Fig. 3: initialization, projection order, termination and convergence. As the determination of each EMP {u_p^(n), n = 1, ..., N} is an iterative procedure due to the nature of the R-UMLDA, as in other multilinear learning algorithms [1], [2], [14], initial estimations for the

projection vectors {u_p^(n)} are necessary. However, there is no guidance from either the algorithm or the data on the best initialization that could result in the best separation of the classes in the feature space. Thus, the determination of the optimal initialization in R-UMLDA is still an open problem, as in most iterative algorithms, including other multilinear learning algorithms [2], [14], [17], [18]. In this work, we empirically study two simple and commonly used initialization methods [16]: uniform initialization and random initialization [17], [18], which do not depend on the data. In the uniform initialization, all n-mode projection vectors are initialized to have unit length and the same value along the I_n dimensions in the n-mode, which is equivalent to the all-ones vector 1 with proper normalization. In random initialization, each element of the n-mode projection vectors is drawn from a zero-mean uniform distribution between [−0.5, 0.5], and the initialized projection vectors are normalized to have unit length. Our empirical studies in Sec. V-C indicate that the results of R-UMLDA are affected by initialization, and the uniform initialization gives better results.

The mode ordering in computing the projection vectors (the innermost for loop in Fig. 3, indexed by n), named the projection order in this work, affects the solution as well. Similar to initialization, there is no way to determine the optimal projection order, and it is considered to be an open problem too. Empirical studies on the effects of the projection order indicate that, with all the other algorithm settings fixed, altering the projection order does result in some small performance differences, but there is no guidance from either the data or the algorithm on which projection order is the best in the iteration.
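The two initialization schemes can be sketched as follows (the function name is ours):

```python
import numpy as np

def init_projection_vectors(dims, method="uniform", rng=None):
    """Uniform or random initialization of the n-mode projection
    vectors, each normalized to unit length."""
    rng = np.random.default_rng() if rng is None else rng
    us = []
    for d in dims:
        if method == "uniform":
            u = np.ones(d)                       # equal values in all dims
        else:
            u = rng.uniform(-0.5, 0.5, size=d)   # zero-mean uniform draw
        us.append(u / np.linalg.norm(u))
    return us

us = init_projection_vectors([4, 5, 6])
print(all(np.isclose(np.linalg.norm(u), 1.0) for u in us))  # True
```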
Therefore, we have no preference for a particular projection order, and in practice we solve for the projection vectors sequentially (from 1-mode to N-mode), as in other multilinear algorithms [1], [2], [14], [18].

Remark 1: Although we are not able to determine the optimal initialization and the optimal projection order, the aggregation scheme proposed in Sec. IV reduces the significance of their optimal determination.

As seen from Fig. 3, the termination criterion can simply be set as a maximum number of iterations K, or it can be set by examining the convergence of the projection vectors: dist(u^(n)(k), u^(n)(k-1)) < ε, where ε is a user-defined small threshold (e.g., ε = 10^-3) and the distance is defined as

dist(u^(n)(k), u^(n)(k-1)) = min( ||u^(n)(k) + u^(n)(k-1)||, ||u^(n)(k) - u^(n)(k-1)|| ),   (15)

since eigenvectors are unique only up to sign. As shown in Sec. V-C, the recognition performance increases only slowly after the first few iterations. Therefore, in practice the iteration can be terminated by setting K, for convenience and especially when computational cost is a concern.

Remark 2: As a sequential iterative solution, R-UMLDA may have a higher cost in memory and computation than typical linear subspace algorithms, which are usually non-iterative.
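The sign-invariant convergence check of Eq. (15) can be sketched as follows (a minimal NumPy illustration; the function names are ours):

```python
import numpy as np

def sign_invariant_dist(u_new, u_old):
    """Distance between successive projection-vector estimates, Eq. (15).

    Eigenvectors are unique only up to sign, so take the smaller of the
    norms of the sum and of the difference of the two estimates.
    """
    return min(np.linalg.norm(u_new + u_old),
               np.linalg.norm(u_new - u_old))

def has_converged(u_new, u_old, eps=1e-3):
    """Terminate when the sign-invariant distance falls below eps."""
    return sign_invariant_dist(u_new, u_old) < eps
```

Note that a vector and its negation yield a distance of zero, so a sign flip between iterations is (correctly) not treated as a change.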

Nevertheless, since solving for the EMPs using R-UMLDA takes place only in the training phase of the targeted recognition tasks, it can be done offline, and the additional I/O and computational cost due to iterations is not considered a disadvantage of the proposed R-UMLDA solution.

D. Connections to DATER, GTDA, TR1DA and LDA

Detailed discussions on the relationships between UMLDA, DATER, GTDA, TR1DA and LDA are presented in [16] and summarized in Appendix B for completeness. Briefly, the UMLDA is an MLDA variant that maximizes the scatter ratio through a TVP. LDA is the special case of UMLDA with N = 1. DATER is an MLDA variant also based on the scatter ratio, but it solves a TTP rather than a TVP. GTDA is an MLDA variant that maximizes the scatter difference through a TTP. TR1DA also solves for a TVP, as UMLDA does; however, it maximizes the scatter difference and is a heuristic approach with residue calculation, originally proposed for tensor approximation.

IV. AGGREGATION OF R-UMLDA RECOGNIZERS

This section proposes the aggregation of a number of differently initialized and regularized UMLDA recognizers for enhanced performance, motivated by two properties of the basic R-UMLDA recognizer in Fig. 2. On the one hand, as shown in Sec. V-C, the number of useful discriminative features that can be extracted by a single R-UMLDA is limited, partly because the EMPs solved in R-UMLDA correspond to very constrained situations in the linear case. On the other hand, since R-UMLDA is affected by initialization and regularization, neither of which can be determined optimally, different initializations or regularizations can produce different discriminative features (see also Sec. V-C). From the generalization theory explaining the success of the random subspace method [40], bagging and boosting [41]-[43], the sensitivity of R-UMLDA to initialization and regularization suggests that R-UMLDA is not a very stable learner (feature extractor) and is therefore well suited to ensemble-based learning.
Therefore, we propose aggregating several differently initialized and regularized UMLDA feature extractors into the regularized UMLDA with aggregation (R-UMLDA-A) recognition system, so that multiple R-UMLDA recognizers work together to achieve better performance on tensor object recognition.

Remark 3: Different projection orders could also result in different features, so R-UMLDAs with different projection orders could be aggregated as well. However, since the effects of different projection orders are similar to those of different initializations, and the number of possible projection orders (N!) is much smaller than the number of possible initializations (infinite), we fix the projection order and vary only the initialization and regularization in this work.

There are various ways to combine (or fuse) several extracted features, including feature-level fusion [44], fusion at the matching score level [45], [46], and more advanced ensemble-based learning such as boosting [41], [47], [48]. By a similar argument to the choice of a simple classifier, the simple sum rule for combining matching scores is used in this work, since the focus here is on feature extraction. Although a more sophisticated method such as boosting is expected to achieve better results, the investigation of alternative combination methods, such as other combination rules and feature-level fusion, is beyond the scope of this paper and will be the topic of a forthcoming paper.

Since high diversity among the learners to be combined is preferred in ensemble-based learning [47], we use both uniform and random initializations in R-UMLDA-A for greater diversity. Thus, although we are not able to determine the best initialization, we aggregate several R-UMLDAs with different initializations so that complementary discriminative features work together to separate the classes better. Furthermore, to introduce even more diversity and, at the same time, alleviate the problem of regularization parameter selection, we propose to sample the regularization parameter γ_a uniformly in log scale from the interval [10^-7, 10^-2], which is empirically chosen to cover a wide range of γ, so that each feature extractor is differently regularized, where a = 1, ..., A is the index of the individual R-UMLDA feature extractor and A is the number of R-UMLDA feature extractors to be aggregated.

Figure 4 provides the pseudo-code implementation of R-UMLDA-A for tensor object recognition. The input training samples {X_m} are fed into A differently initialized and regularized UMLDA feature extractors described in Fig. 3 with parameters P, K and γ_a to obtain a set of A TVPs {u_p^(n)T, n = 1, ..., N; p = 1, ..., P}(a), a = 1, ..., A.
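Sampling the A regularization parameters uniformly in log scale over [10^-7, 10^-2] can be sketched in one line (NumPy; the function name is ours):

```python
import numpy as np

def sample_regularization_params(A, low=1e-7, high=1e-2):
    """Sample A regularization parameters gamma_a uniformly in log
    scale over [low, high], one per R-UMLDA feature extractor."""
    return np.logspace(np.log10(low), np.log10(high), num=A)
```

With A = 20 this yields 20 values spaced evenly on a log axis between 10^-7 and 10^-2, so each of the aggregated feature extractors is differently regularized.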
The training samples {X_m} are then projected to R-UMLDA feature vectors {y_m(a)} using the obtained TVPs. To classify a test sample X, it is first projected to A feature vectors {y(a)} using the A TVPs. Next, for the a-th R-UMLDA feature extractor, we calculate the nearest-neighbor distance of the test sample X to each candidate class c as

d(X, c, a) = min_{m: c_m = c} || y(a) - y_m(a) ||.   (16)

The range of d(X, c, a) is then matched to the interval [0, 1] as

d~(X, c, a) = [ d(X, c, a) - min_c d(X, c, a) ] / [ max_c d(X, c, a) - min_c d(X, c, a) ].   (17)

Finally, the aggregated nearest-neighbor distance is obtained by the simple sum rule as

d(X, c) = sum_{a=1}^{A} d~(X, c, a),   (18)

and the test sample X is assigned the label c* = arg min_c d(X, c).
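Eqs. (16)-(18) amount to per-extractor nearest-neighbor distances, min-max normalization of the scores, and the sum rule. A minimal sketch (the interface is ours, and a zero-range guard is added that the paper does not discuss):

```python
import numpy as np

def aggregated_nn_label(y_test, y_train, labels, classes):
    """Score-level aggregation of A nearest-neighbor recognizers.

    y_test : list of A feature vectors for the test sample.
    y_train: list of A arrays, each of shape (M, P), of training features.
    labels : length-M array of class labels for the training samples.
    classes: list of candidate class labels.
    Implements Eqs. (16)-(18): per-extractor NN distances, min-max
    normalization to [0, 1], then the sum rule; returns arg-min label.
    """
    total = np.zeros(len(classes))
    for a in range(len(y_test)):
        # Eq. (16): NN distance from the test sample to each class
        d = np.array([
            min(np.linalg.norm(y_test[a] - y_train[a][m])
                for m in range(len(labels)) if labels[m] == c)
            for c in classes
        ])
        # Eq. (17): min-max normalization to [0, 1]
        span = d.max() - d.min()
        d = (d - d.min()) / span if span > 0 else np.zeros_like(d)
        total += d  # Eq. (18): simple sum rule
    return classes[int(np.argmin(total))]
```

Each recognizer contributes a score in [0, 1] per class, so no single extractor can dominate the sum through its raw distance scale.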

Input: A set of zero-mean tensor samples {X_m ∈ R^{I_1 × I_2 × ... × I_N}, m = 1, ..., M} with class labels c ∈ R^M, a test tensor sample X, the desired feature vector length P, the R-UMLDA feature extractor (Fig. 3), the maximum number of iterations K, and the number A of R-UMLDAs to be aggregated.
Output: The class label for X.
R-UMLDA-A algorithm:
Step 1. Feature extraction
  For a = 1 : A
    Obtain the a-th TVP {u_p^(n)T, n = 1, ..., N; p = 1, ..., P}(a) from the a-th R-UMLDA (Fig. 3) with the input {X_m}, P, K, γ_a, using random or uniform initialization.
    Project {X_m} and X to {y_m(a)} and y(a), respectively, using the a-th TVP.
Step 2. Aggregation at the matching score level for classification
  For a = 1 : A
    For c = 1 : C
      Obtain the nearest-neighbor distance d(X, c, a).
      Normalize d(X, c, a) to [0, 1] to get d~(X, c, a).
  Obtain the aggregated distance d(X, c).
  Output c* = arg min_c d(X, c) as the class label for the test sample.

Fig. 4. The pseudo-code implementation of the R-UMLDA-A algorithm for tensor object recognition.

V. EXPERIMENTAL EVALUATION

In this section, a number of experiments are carried out on two biometric applications in support of the following two objectives: 1) investigate the various properties of the R-UMLDA algorithm; 2) evaluate the R-UMLDA and R-UMLDA-A algorithms on two tensor object recognition problems, face recognition (FR) and gait recognition (GR), by comparing their performance with that of competing multilinear learning algorithms as well as linear learning algorithms. Before presenting the experimental results, the experimental data and the algorithms to be compared are described.

A. Experimental data

Three popular public databases are used in the experiments: the Pose, Illumination, and Expression (PIE) database from Carnegie Mellon University (CMU) [49], the Facial Recognition Technology

(FERET) database [50], and the HumanID gait challenge data set version 1.7 (V1.7) from the University of South Florida (USF).

The CMU PIE database contains 68 individuals with face images captured under varying pose, illumination and expression. We choose the seven poses (C05, C07, C09, C27, C29, C37, C11) with at most 45 degrees of pose variation, under the 21 illumination conditions (02 to 22). Thus, there are about 147 (7 × 21) samples per subject and a total of 9,987 face images (with nine faces missing). This face database has a large number of samples per subject; therefore, it is used to study the properties of the proposed algorithm and the FR performance under a varying number of training samples per subject, denoted by L.

The FERET database is a standard testing database for FR performance evaluation, including 14,126 images from 1,199 individuals with views ranging from frontal to left and right profiles. The common practice is to use portions of the database for specific studies. Here, we select a subset composed of those subjects having at least six images with at most 45 degrees of pose variation, resulting in 2,803 face images from 335 subjects. The studies of FR performance under a varying number of subjects C are carried out on this face database, since a large number of subjects is available. Face images from the PIE and FERET databases are manually aligned, cropped and normalized, with 256 gray levels per pixel.

The USF gait challenge data set V1.7 consists of 452 sequences from 74 subjects walking in elliptical paths in front of the camera, with two viewpoints (left or right), two shoe types (A or B) and two surface types (grass or concrete). Here, we choose only those sequences on the grass surface: the gallery set and the probe sets A, B and C, with detailed information listed in Table II.
The capturing condition for each set is summarized in the parentheses after the set name in the table, where G, A, B, L, and R stand for grass surface, shoe type A, shoe type B, left view, and right view, respectively. Each set has only one sequence per subject. Subjects are unique in the gallery and in each probe set, and there are no common sequences between the gallery set and any of the probe sets. In addition, all the probe sets are distinct. These gait data sets are employed to demonstrate performance on third-order tensors, since gait silhouette sequences are naturally 3-D data [9]. We follow the procedures in [9] to obtain gait samples from the gait silhouette sequences, and each gait sample is resized to a third-order tensor. The number of samples for each set is indicated in the parentheses following the number of sequences in Table II.

TABLE II
THE CHARACTERISTICS OF THE GAIT DATA FROM THE USF GAIT CHALLENGE DATA SET.

Gait data set                  | Gallery (GAR) | Probe A (GAL) | Probe B (GBR) | Probe C (GBL)
Number of sequences (samples)  | 71 (731)      | 71 (727)      | 41 (423)      | 41 (420)
Difference from the gallery    | -             | View          | Shoe          | Shoe, view

B. Performance Comparison Design

In the FR and GR experiments, we compare the performance of the proposed algorithms against four multilinear learning algorithms and three linear learning algorithms, listed in Table III (see footnote 5), where the LDA algorithm takes the Fisherface approach [30].

TABLE III
LIST OF ALGORITHMS TO BE COMPARED

Acronym  | Full name                                      | Linear/Multilinear | Reference
LDA      | linear discriminant analysis                   | Linear             | [30]
ULDA     | uncorrelated linear discriminant analysis      | Linear             | [31]
R-JD-LDA | regularized version of the revised direct LDA  | Linear             | [28], [51]
MPCA     | multilinear principal component analysis       | Multilinear        | [9]
DATER    | discriminant analysis with tensor representation | Multilinear      | [2]
GTDA     | general tensor discriminant analysis           | Multilinear        | [14]
TR1DA    | tensor rank-one discriminant analysis          | Multilinear        | [17], [18]

For classification of the extracted features, we use the NNC with the Euclidean distance measure. The MPCA, DATER and GTDA algorithms produce features in tensor representation, which cannot be handled directly by the selected classifier. Since, from [16], the commonly used tensor distance measure, the Frobenius norm, is equivalent to the Euclidean distance between vectorized representations, the tensor features from MPCA, DATER and GTDA are rearranged into vectors for direct comparison. These algorithms obtain the highest-dimension projection (P_n = I_n for n = 1, ..., N) first, and the TTP is then viewed as ∏_{n=1}^{N} I_n EMPs. The discriminability of each such EMP is calculated on the training set, and the EMPs are arranged in descending discriminability so that a feature vector is obtained, as in [9].
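The rearrangement of tensor features into a vector ordered by discriminability can be sketched as below. The Fisher-like per-entry score (between-class over within-class scatter) is our reading of the discriminability criterion of [9], and the interface is ours:

```python
import numpy as np

def tensor_features_to_vector(F_train, labels, F_test):
    """Flatten tensor features and order entries by discriminability.

    F_train: array of shape (M, I_1, ..., I_N) with the tensor features
             of M training samples; F_test is one tensor feature.
    Each tensor entry is treated as one EMP feature; entries are sorted
    in descending order of a Fisher-like score computed on training data.
    Returns the reordered training matrix and test vector.
    """
    labels = np.asarray(labels)
    M = F_train.shape[0]
    X = F_train.reshape(M, -1)          # one column per EMP feature
    mean = X.mean(axis=0)
    Sb = np.zeros(X.shape[1])           # between-class scatter per entry
    Sw = np.zeros(X.shape[1])           # within-class scatter per entry
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * (mc - mean) ** 2
        Sw += ((Xc - mc) ** 2).sum(axis=0)
    score = Sb / (Sw + 1e-12)           # discriminability per entry
    order = np.argsort(-score)          # descending discriminability
    return X[:, order], F_test.reshape(-1)[order]
```

Truncating the reordered vector to its first P entries then gives the P most discriminative EMP features.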
In the experiments, for fair comparison and for computational reasons, we set the number of iterations in the four MLDA algorithms to 10, unless otherwise stated. (Footnote 5: Note that the ULDA compared here is different from the ULDA in [27]; therefore, it is different from the classical LDA.) For MPCA, DATER, GTDA and TR1DA, up to 600 features were tested. The maximum number of features tested for LDA and ULDA is C - 1. For the TR1DA algorithm, we tested several values of the tuning parameter ζ for each L, and the best one for each L was used: ζ = 2 for L ≤ 7, ζ = 0.8 for 8 ≤ L ≤ 15, and ζ = 0.6 for L ≥ 16. For R-JD-LDA, we use the default maximum number of features and the regularization parameter suggested by the authors. For the recognition experiments with only one R-UMLDA, uniform initialization is used and we empirically set γ = 10^-3, with up to 30 features tested. For R-UMLDA-A, up to 20 differently initialized and regularized UMLDA feature extractors are combined, each producing up to 30 features, also resulting in a total of 600 features. Uniform initialization is used for a = 1, 5, 9, 13, 17 with corresponding γ_a = 10^-2, 10^-3, 10^-4, 10^-5, 10^-6, and random initialization is used for the remaining values of a. In computing the matrix inverse of G_{p-1}^T Ỹ^(n) (S_W^(n))^{-1} Ỹ^(n)T G_{p-1} in (13), a small term κ I_{p-1} is added, where κ = 10^-3, in order to obtain a better-conditioned matrix for the inverse computation.

The recognition performance is measured by the correct recognition rate (CRR). The best recognition results reported are obtained by varying the number of features used and, for the R-UMLDA-A algorithm, the number of R-UMLDA recognizers aggregated. For all the other algorithms, the best results are obtained by varying the number of features used. For fair comparison, there is no further fine-tuning of other parameters (such as the regularization parameter) for optimal performance on the testing data, including for the proposed method. For better viewing, the top two recognition results in each experiment are shown in bold in the tables.

C. Empirical studies of the R-UMLDA properties on the PIE database

First, various properties of the R-UMLDA are studied on the PIE database: the effects of initialization and regularization, the convergence, the number of useful features, and the effects of aggregation.
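The better-conditioned inverse described above, adding a small κI before inversion, can be sketched as follows (a minimal NumPy illustration; the function name is ours):

```python
import numpy as np

def regularized_inverse(M, kappa=1e-3):
    """Invert M after adding kappa * I to its diagonal.

    The small ridge term makes a (near-)singular matrix
    better conditioned for the inverse computation.
    """
    return np.linalg.inv(M + kappa * np.eye(M.shape[0]))
```

For a singular matrix, plain `np.linalg.inv` raises an error or returns numerically meaningless values, while the ridged version stays well defined; kappa = 0 recovers the ordinary inverse.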
These experiments are performed on one random split of the database into training and testing samples.

1) The effects of initialization and regularization: Figure 5 illustrates the effects of initialization and regularization in two FR experiments: one with L = 2 and one with L = 20, corresponding to the SSS scenario and to the scenario in which a large number of samples (per subject) is available for training. The CRRs for various γ are depicted in Figs. 5(a) (L = 2) and 5(b) (L = 20) for the uniform initialization, and in Figs. 5(c) (L = 2) and 5(d) (L = 20) for the random initialization (averaged over 20 repeated trials). Figures 5(e) and 5(f) plot the CRRs from eight repetitions of the random initialization with γ = 10^-3. They demonstrate that the recognition results are affected by initialization: different initializations lead to different results. By comparing Fig. 5(f) against Fig. 5(e), it can be seen that the

Fig. 5. Illustration of the effects of initialization and regularization on recognition performance: uniform initialization with various γ for (a) L = 2 and (b) L = 20; random initialization with various γ, averaged over 20 repetitions, for (c) L = 2 and (d) L = 20; eight repetitions of random initialization with γ = 10^-3 for (e) L = 2 and (f) L = 20.


More information

Using a Computational Intelligence Hybrid Approach to Recognize the Faults of Variance Shifts for a Manufacturing Process

Using a Computational Intelligence Hybrid Approach to Recognize the Faults of Variance Shifts for a Manufacturing Process Journal of Industrial and Intelligent Information Vol. 4, No. 2, March 26 Using a Comutational Intelligence Hybrid Aroach to Recognize the Faults of Variance hifts for a Manufacturing Process Yuehjen E.

More information

Multi-Operation Multi-Machine Scheduling

Multi-Operation Multi-Machine Scheduling Multi-Oeration Multi-Machine Scheduling Weizhen Mao he College of William and Mary, Williamsburg VA 3185, USA Abstract. In the multi-oeration scheduling that arises in industrial engineering, each job

More information

Universal Finite Memory Coding of Binary Sequences

Universal Finite Memory Coding of Binary Sequences Deartment of Electrical Engineering Systems Universal Finite Memory Coding of Binary Sequences Thesis submitted towards the degree of Master of Science in Electrical and Electronic Engineering in Tel-Aviv

More information

GENERATING FUZZY RULES FOR PROTEIN CLASSIFICATION E. G. MANSOORI, M. J. ZOLGHADRI, S. D. KATEBI, H. MOHABATKAR, R. BOOSTANI AND M. H.

GENERATING FUZZY RULES FOR PROTEIN CLASSIFICATION E. G. MANSOORI, M. J. ZOLGHADRI, S. D. KATEBI, H. MOHABATKAR, R. BOOSTANI AND M. H. Iranian Journal of Fuzzy Systems Vol. 5, No. 2, (2008). 21-33 GENERATING FUZZY RULES FOR PROTEIN CLASSIFICATION E. G. MANSOORI, M. J. ZOLGHADRI, S. D. KATEBI, H. MOHABATKAR, R. BOOSTANI AND M. H. SADREDDINI

More information

Linear diophantine equations for discrete tomography

Linear diophantine equations for discrete tomography Journal of X-Ray Science and Technology 10 001 59 66 59 IOS Press Linear diohantine euations for discrete tomograhy Yangbo Ye a,gewang b and Jiehua Zhu a a Deartment of Mathematics, The University of Iowa,

More information

DIFFERENTIAL evolution (DE) [3] has become a popular

DIFFERENTIAL evolution (DE) [3] has become a popular Self-adative Differential Evolution with Neighborhood Search Zhenyu Yang, Ke Tang and Xin Yao Abstract In this aer we investigate several self-adative mechanisms to imrove our revious work on [], which

More information

Re-entry Protocols for Seismically Active Mines Using Statistical Analysis of Aftershock Sequences

Re-entry Protocols for Seismically Active Mines Using Statistical Analysis of Aftershock Sequences Re-entry Protocols for Seismically Active Mines Using Statistical Analysis of Aftershock Sequences J.A. Vallejos & S.M. McKinnon Queen s University, Kingston, ON, Canada ABSTRACT: Re-entry rotocols are

More information

Computer arithmetic. Intensive Computation. Annalisa Massini 2017/2018

Computer arithmetic. Intensive Computation. Annalisa Massini 2017/2018 Comuter arithmetic Intensive Comutation Annalisa Massini 7/8 Intensive Comutation - 7/8 References Comuter Architecture - A Quantitative Aroach Hennessy Patterson Aendix J Intensive Comutation - 7/8 3

More information

Evaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models

Evaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models Evaluating Circuit Reliability Under Probabilistic Gate-Level Fault Models Ketan N. Patel, Igor L. Markov and John P. Hayes University of Michigan, Ann Arbor 48109-2122 {knatel,imarkov,jhayes}@eecs.umich.edu

More information

Churilova Maria Saint-Petersburg State Polytechnical University Department of Applied Mathematics

Churilova Maria Saint-Petersburg State Polytechnical University Department of Applied Mathematics Churilova Maria Saint-Petersburg State Polytechnical University Deartment of Alied Mathematics Technology of EHIS (staming) alied to roduction of automotive arts The roblem described in this reort originated

More information

Research of PMU Optimal Placement in Power Systems

Research of PMU Optimal Placement in Power Systems Proceedings of the 5th WSEAS/IASME Int. Conf. on SYSTEMS THEORY and SCIENTIFIC COMPUTATION, Malta, Setember 15-17, 2005 (38-43) Research of PMU Otimal Placement in Power Systems TIAN-TIAN CAI, QIAN AI

More information

Research of power plant parameter based on the Principal Component Analysis method

Research of power plant parameter based on the Principal Component Analysis method Research of ower lant arameter based on the Princial Comonent Analysis method Yang Yang *a, Di Zhang b a b School of Engineering, Bohai University, Liaoning Jinzhou, 3; Liaoning Datang international Jinzhou

More information

Detection Algorithm of Particle Contamination in Reticle Images with Continuous Wavelet Transform

Detection Algorithm of Particle Contamination in Reticle Images with Continuous Wavelet Transform Detection Algorithm of Particle Contamination in Reticle Images with Continuous Wavelet Transform Chaoquan Chen and Guoing Qiu School of Comuter Science and IT Jubilee Camus, University of Nottingham Nottingham

More information

Observer/Kalman Filter Time Varying System Identification

Observer/Kalman Filter Time Varying System Identification Observer/Kalman Filter Time Varying System Identification Manoranjan Majji Texas A&M University, College Station, Texas, USA Jer-Nan Juang 2 National Cheng Kung University, Tainan, Taiwan and John L. Junins

More information

Hidden Predictors: A Factor Analysis Primer

Hidden Predictors: A Factor Analysis Primer Hidden Predictors: A Factor Analysis Primer Ryan C Sanchez Western Washington University Factor Analysis is a owerful statistical method in the modern research sychologist s toolbag When used roerly, factor

More information

KEY ISSUES IN THE ANALYSIS OF PILES IN LIQUEFYING SOILS

KEY ISSUES IN THE ANALYSIS OF PILES IN LIQUEFYING SOILS 4 th International Conference on Earthquake Geotechnical Engineering June 2-28, 27 KEY ISSUES IN THE ANALYSIS OF PILES IN LIQUEFYING SOILS Misko CUBRINOVSKI 1, Hayden BOWEN 1 ABSTRACT Two methods for analysis

More information

Statistical Models approach for Solar Radiation Prediction

Statistical Models approach for Solar Radiation Prediction Statistical Models aroach for Solar Radiation Prediction S. Ferrari, M. Lazzaroni, and V. Piuri Universita degli Studi di Milano Milan, Italy Email: {stefano.ferrari, massimo.lazzaroni, vincenzo.iuri}@unimi.it

More information

Preconditioning techniques for Newton s method for the incompressible Navier Stokes equations

Preconditioning techniques for Newton s method for the incompressible Navier Stokes equations Preconditioning techniques for Newton s method for the incomressible Navier Stokes equations H. C. ELMAN 1, D. LOGHIN 2 and A. J. WATHEN 3 1 Deartment of Comuter Science, University of Maryland, College

More information

John Weatherwax. Analysis of Parallel Depth First Search Algorithms

John Weatherwax. Analysis of Parallel Depth First Search Algorithms Sulementary Discussions and Solutions to Selected Problems in: Introduction to Parallel Comuting by Viin Kumar, Ananth Grama, Anshul Guta, & George Karyis John Weatherwax Chater 8 Analysis of Parallel

More information

CMSC 425: Lecture 4 Geometry and Geometric Programming

CMSC 425: Lecture 4 Geometry and Geometric Programming CMSC 425: Lecture 4 Geometry and Geometric Programming Geometry for Game Programming and Grahics: For the next few lectures, we will discuss some of the basic elements of geometry. There are many areas

More information

Efficient & Robust LK for Mobile Vision

Efficient & Robust LK for Mobile Vision Efficient & Robust LK for Mobile Vision Instructor - Simon Lucey 16-623 - Designing Comuter Vision As Direct Method (ours) Indirect Method (ORB+RANSAC) H. Alismail, B. Browning, S. Lucey Bit-Planes: Dense

More information

A Bound on the Error of Cross Validation Using the Approximation and Estimation Rates, with Consequences for the Training-Test Split

A Bound on the Error of Cross Validation Using the Approximation and Estimation Rates, with Consequences for the Training-Test Split A Bound on the Error of Cross Validation Using the Aroximation and Estimation Rates, with Consequences for the Training-Test Slit Michael Kearns AT&T Bell Laboratories Murray Hill, NJ 7974 mkearns@research.att.com

More information

MODEL-BASED MULTIPLE FAULT DETECTION AND ISOLATION FOR NONLINEAR SYSTEMS

MODEL-BASED MULTIPLE FAULT DETECTION AND ISOLATION FOR NONLINEAR SYSTEMS MODEL-BASED MULIPLE FAUL DEECION AND ISOLAION FOR NONLINEAR SYSEMS Ivan Castillo, and homas F. Edgar he University of exas at Austin Austin, X 78712 David Hill Chemstations Houston, X 77009 Abstract A

More information

Principles of Computed Tomography (CT)

Principles of Computed Tomography (CT) Page 298 Princiles of Comuted Tomograhy (CT) The theoretical foundation of CT dates back to Johann Radon, a mathematician from Vienna who derived a method in 1907 for rojecting a 2-D object along arallel

More information

Proximal methods for the latent group lasso penalty

Proximal methods for the latent group lasso penalty Proximal methods for the latent grou lasso enalty The MIT Faculty has made this article oenly available. Please share how this access benefits you. Your story matters. Citation As Published Publisher Villa,

More information

GAIT RECOGNITION THROUGH MPCA PLUS LDA. Haiping Lu, K.N. Plataniotis and A.N. Venetsanopoulos

GAIT RECOGNITION THROUGH MPCA PLUS LDA. Haiping Lu, K.N. Plataniotis and A.N. Venetsanopoulos GAIT RECOGNITION THROUGH MPCA PLUS LDA Haiping Lu, K.N. Plataniotis and A.N. Venetsanopoulos The Edward S. Rogers Sr. Department of Electrical and Computer Engineering University of Toronto, M5S 3G4, Canada

More information

Fault Tolerant Quantum Computing Robert Rogers, Thomas Sylwester, Abe Pauls

Fault Tolerant Quantum Computing Robert Rogers, Thomas Sylwester, Abe Pauls CIS 410/510, Introduction to Quantum Information Theory Due: June 8th, 2016 Sring 2016, University of Oregon Date: June 7, 2016 Fault Tolerant Quantum Comuting Robert Rogers, Thomas Sylwester, Abe Pauls

More information

Generalized Coiflets: A New Family of Orthonormal Wavelets

Generalized Coiflets: A New Family of Orthonormal Wavelets Generalized Coiflets A New Family of Orthonormal Wavelets Dong Wei, Alan C Bovik, and Brian L Evans Laboratory for Image and Video Engineering Deartment of Electrical and Comuter Engineering The University

More information

Combining Logistic Regression with Kriging for Mapping the Risk of Occurrence of Unexploded Ordnance (UXO)

Combining Logistic Regression with Kriging for Mapping the Risk of Occurrence of Unexploded Ordnance (UXO) Combining Logistic Regression with Kriging for Maing the Risk of Occurrence of Unexloded Ordnance (UXO) H. Saito (), P. Goovaerts (), S. A. McKenna (2) Environmental and Water Resources Engineering, Deartment

More information

Damage Identification from Power Spectrum Density Transmissibility

Damage Identification from Power Spectrum Density Transmissibility 6th Euroean Worksho on Structural Health Monitoring - h.3.d.3 More info about this article: htt://www.ndt.net/?id=14083 Damage Identification from Power Sectrum Density ransmissibility Y. ZHOU, R. PERERA

More information

Chapter 1 Fundamentals

Chapter 1 Fundamentals Chater Fundamentals. Overview of Thermodynamics Industrial Revolution brought in large scale automation of many tedious tasks which were earlier being erformed through manual or animal labour. Inventors

More information

RUN-TO-RUN CONTROL AND PERFORMANCE MONITORING OF OVERLAY IN SEMICONDUCTOR MANUFACTURING. 3 Department of Chemical Engineering

RUN-TO-RUN CONTROL AND PERFORMANCE MONITORING OF OVERLAY IN SEMICONDUCTOR MANUFACTURING. 3 Department of Chemical Engineering Coyright 2002 IFAC 15th Triennial World Congress, Barcelona, Sain RUN-TO-RUN CONTROL AND PERFORMANCE MONITORING OF OVERLAY IN SEMICONDUCTOR MANUFACTURING C.A. Bode 1, B.S. Ko 2, and T.F. Edgar 3 1 Advanced

More information

An Investigation on the Numerical Ill-conditioning of Hybrid State Estimators

An Investigation on the Numerical Ill-conditioning of Hybrid State Estimators An Investigation on the Numerical Ill-conditioning of Hybrid State Estimators S. K. Mallik, Student Member, IEEE, S. Chakrabarti, Senior Member, IEEE, S. N. Singh, Senior Member, IEEE Deartment of Electrical

More information

A Survey of Multilinear Subspace Learning for Tensor Data

A Survey of Multilinear Subspace Learning for Tensor Data A Survey of Multilinear Subspace Learning for Tensor Data Haiping Lu a, K. N. Plataniotis b, A. N. Venetsanopoulos b,c a Institute for Infocomm Research, Agency for Science, Technology and Research, #21-01

More information

Uncertainty Modeling with Interval Type-2 Fuzzy Logic Systems in Mobile Robotics

Uncertainty Modeling with Interval Type-2 Fuzzy Logic Systems in Mobile Robotics Uncertainty Modeling with Interval Tye-2 Fuzzy Logic Systems in Mobile Robotics Ondrej Linda, Student Member, IEEE, Milos Manic, Senior Member, IEEE bstract Interval Tye-2 Fuzzy Logic Systems (IT2 FLSs)

More information

ON THE DEVELOPMENT OF PARAMETER-ROBUST PRECONDITIONERS AND COMMUTATOR ARGUMENTS FOR SOLVING STOKES CONTROL PROBLEMS

ON THE DEVELOPMENT OF PARAMETER-ROBUST PRECONDITIONERS AND COMMUTATOR ARGUMENTS FOR SOLVING STOKES CONTROL PROBLEMS Electronic Transactions on Numerical Analysis. Volume 44,. 53 72, 25. Coyright c 25,. ISSN 68 963. ETNA ON THE DEVELOPMENT OF PARAMETER-ROBUST PRECONDITIONERS AND COMMUTATOR ARGUMENTS FOR SOLVING STOKES

More information

Brownian Motion and Random Prime Factorization

Brownian Motion and Random Prime Factorization Brownian Motion and Random Prime Factorization Kendrick Tang June 4, 202 Contents Introduction 2 2 Brownian Motion 2 2. Develoing Brownian Motion.................... 2 2.. Measure Saces and Borel Sigma-Algebras.........

More information

A SIMPLE PLASTICITY MODEL FOR PREDICTING TRANSVERSE COMPOSITE RESPONSE AND FAILURE

A SIMPLE PLASTICITY MODEL FOR PREDICTING TRANSVERSE COMPOSITE RESPONSE AND FAILURE THE 19 TH INTERNATIONAL CONFERENCE ON COMPOSITE MATERIALS A SIMPLE PLASTICITY MODEL FOR PREDICTING TRANSVERSE COMPOSITE RESPONSE AND FAILURE K.W. Gan*, M.R. Wisnom, S.R. Hallett, G. Allegri Advanced Comosites

More information

arxiv: v2 [stat.me] 3 Nov 2014

arxiv: v2 [stat.me] 3 Nov 2014 onarametric Stein-tye Shrinkage Covariance Matrix Estimators in High-Dimensional Settings Anestis Touloumis Cancer Research UK Cambridge Institute University of Cambridge Cambridge CB2 0RE, U.K. Anestis.Touloumis@cruk.cam.ac.uk

More information

Model checking, verification of CTL. One must verify or expel... doubts, and convert them into the certainty of YES [Thomas Carlyle]

Model checking, verification of CTL. One must verify or expel... doubts, and convert them into the certainty of YES [Thomas Carlyle] Chater 5 Model checking, verification of CTL One must verify or exel... doubts, and convert them into the certainty of YES or NO. [Thomas Carlyle] 5. The verification setting Page 66 We introduce linear

More information

A Closed-Form Solution to the Minimum V 2

A Closed-Form Solution to the Minimum V 2 Celestial Mechanics and Dynamical Astronomy manuscrit No. (will be inserted by the editor) Martín Avendaño Daniele Mortari A Closed-Form Solution to the Minimum V tot Lambert s Problem Received: Month

More information

Convex Optimization methods for Computing Channel Capacity

Convex Optimization methods for Computing Channel Capacity Convex Otimization methods for Comuting Channel Caacity Abhishek Sinha Laboratory for Information and Decision Systems (LIDS), MIT sinhaa@mit.edu May 15, 2014 We consider a classical comutational roblem

More information

Maximum Entropy and the Stress Distribution in Soft Disk Packings Above Jamming

Maximum Entropy and the Stress Distribution in Soft Disk Packings Above Jamming Maximum Entroy and the Stress Distribution in Soft Disk Packings Above Jamming Yegang Wu and S. Teitel Deartment of Physics and Astronomy, University of ochester, ochester, New York 467, USA (Dated: August

More information

Yixi Shi. Jose Blanchet. IEOR Department Columbia University New York, NY 10027, USA. IEOR Department Columbia University New York, NY 10027, USA

Yixi Shi. Jose Blanchet. IEOR Department Columbia University New York, NY 10027, USA. IEOR Department Columbia University New York, NY 10027, USA Proceedings of the 2011 Winter Simulation Conference S. Jain, R. R. Creasey, J. Himmelsach, K. P. White, and M. Fu, eds. EFFICIENT RARE EVENT SIMULATION FOR HEAVY-TAILED SYSTEMS VIA CROSS ENTROPY Jose

More information

EXACTLY PERIODIC SUBSPACE DECOMPOSITION BASED APPROACH FOR IDENTIFYING TANDEM REPEATS IN DNA SEQUENCES

EXACTLY PERIODIC SUBSPACE DECOMPOSITION BASED APPROACH FOR IDENTIFYING TANDEM REPEATS IN DNA SEQUENCES EXACTLY ERIODIC SUBSACE DECOMOSITION BASED AROACH FOR IDENTIFYING TANDEM REEATS IN DNA SEUENCES Ravi Guta, Divya Sarthi, Ankush Mittal, and Kuldi Singh Deartment of Electronics & Comuter Engineering, Indian

More information

Notes on Instrumental Variables Methods

Notes on Instrumental Variables Methods Notes on Instrumental Variables Methods Michele Pellizzari IGIER-Bocconi, IZA and frdb 1 The Instrumental Variable Estimator Instrumental variable estimation is the classical solution to the roblem of

More information

arxiv: v1 [stat.ml] 10 Mar 2016

arxiv: v1 [stat.ml] 10 Mar 2016 Global and Local Uncertainty Princiles for Signals on Grahs Nathanael Perraudin, Benjamin Ricaud, David I Shuman, and Pierre Vandergheynst March, 206 arxiv:603.03030v [stat.ml] 0 Mar 206 Abstract Uncertainty

More information

Improved Capacity Bounds for the Binary Energy Harvesting Channel

Improved Capacity Bounds for the Binary Energy Harvesting Channel Imroved Caacity Bounds for the Binary Energy Harvesting Channel Kaya Tutuncuoglu 1, Omur Ozel 2, Aylin Yener 1, and Sennur Ulukus 2 1 Deartment of Electrical Engineering, The Pennsylvania State University,

More information

A New Method of DDB Logical Structure Synthesis Using Distributed Tabu Search

A New Method of DDB Logical Structure Synthesis Using Distributed Tabu Search A New Method of DDB Logical Structure Synthesis Using Distributed Tabu Search Eduard Babkin and Margarita Karunina 2, National Research University Higher School of Economics Det of nformation Systems and

More information

An Ant Colony Optimization Approach to the Probabilistic Traveling Salesman Problem

An Ant Colony Optimization Approach to the Probabilistic Traveling Salesman Problem An Ant Colony Otimization Aroach to the Probabilistic Traveling Salesman Problem Leonora Bianchi 1, Luca Maria Gambardella 1, and Marco Dorigo 2 1 IDSIA, Strada Cantonale Galleria 2, CH-6928 Manno, Switzerland

More information

The Graph Accessibility Problem and the Universality of the Collision CRCW Conflict Resolution Rule

The Graph Accessibility Problem and the Universality of the Collision CRCW Conflict Resolution Rule The Grah Accessibility Problem and the Universality of the Collision CRCW Conflict Resolution Rule STEFAN D. BRUDA Deartment of Comuter Science Bisho s University Lennoxville, Quebec J1M 1Z7 CANADA bruda@cs.ubishos.ca

More information

A New Asymmetric Interaction Ridge (AIR) Regression Method

A New Asymmetric Interaction Ridge (AIR) Regression Method A New Asymmetric Interaction Ridge (AIR) Regression Method by Kristofer Månsson, Ghazi Shukur, and Pär Sölander The Swedish Retail Institute, HUI Research, Stockholm, Sweden. Deartment of Economics and

More information

Comparative study on different walking load models

Comparative study on different walking load models Comarative study on different walking load models *Jining Wang 1) and Jun Chen ) 1), ) Deartment of Structural Engineering, Tongji University, Shanghai, China 1) 1510157@tongji.edu.cn ABSTRACT Since the

More information