Subspace Learning Based on Tensor Analysis, by Deng Cai, Xiaofei He, and Jiawei Han
Report No. UIUCDCS-R UILU-ENG

Subspace Learning Based on Tensor Analysis

by Deng Cai, Xiaofei He, and Jiawei Han

May 2005
Subspace Learning Based on Tensor Analysis

Deng Cai, Xiaofei He, Jiawei Han
Department of Computer Science, University of Illinois at Urbana-Champaign
Department of Computer Science, University of Chicago

Abstract

Linear dimensionality reduction techniques have been widely used in pattern recognition and computer vision, such as face recognition, image retrieval, etc. The typical methods include Principal Component Analysis (PCA), which is unsupervised, and Linear Discriminant Analysis (LDA), which is supervised. Both of them consider an m1 × m2 image as a high-dimensional vector in R^(m1 m2). Such a vector representation fails to take into account the spatial locality of pixels in the image. An image is intrinsically a matrix. In this paper, we consider an image as a second-order tensor in R^m1 ⊗ R^m2 and propose two novel tensor subspace learning algorithms called TensorPCA and TensorLDA. Our algorithms explicitly take into account the relationship between the column vectors of the image matrix and the relationship between the row vectors of the image matrix. We compare our proposed approaches with PCA and LDA for face recognition on three standard face databases. Experimental results show that tensor analysis achieves better recognition accuracy, while being much more efficient.

The work was supported in part by the U.S. National Science Foundation NSF IIS /IIS. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.
1 Introduction

Real data of the natural and social sciences is often very high-dimensional. However, the underlying structure can in many cases be characterized by a small number of parameters. Reducing the dimensionality of such data is beneficial for visualizing the intrinsic structure, and it is also an important preprocessing step in many statistical pattern recognition problems, such as face recognition [8, 1] and image retrieval [7, 2].

One of the fundamental problems in data analysis is how to represent the data. In most previous works on face representation, the face is represented as a vector in a high-dimensional space. However, an image is intrinsically a matrix, or a second-order tensor. In vector representation, the face image has to be converted to a vector. A typical way to do this so-called matrix-to-vector alignment is to concatenate all the rows in the matrix together to get a single vector. In this way, an image of size m1 × m2 is identified with a point in R^(m1 m2). Thus, a linear transformation in R^(m1 m2) is characterized by y = A^T x, where A is an (m1 m2) × (m1 m2) matrix. To acquire such a linear transformation, traditional subspace learning methods (PCA and LDA) need to perform the eigendecomposition of matrices of size (m1 m2) × (m1 m2), which is computationally expensive. Moreover, the number of learning parameters in PCA and LDA (the dimension of the projection vectors) is very large. These methods may not achieve good performance when the number of training samples is small.

Recently, multilinear algebra, the algebra of higher-order tensors, was applied to analyzing the multifactor structure of image ensembles [10, 11, 12]. Vasilescu and Terzopoulos have proposed a novel face representation algorithm called Tensorface [10]. Tensorface represents the set of face images by a higher-order tensor and extends traditional PCA to higher-order tensor decomposition. In this way, the multiple factors related to expression, illumination and pose can be separated from the different dimensions of the tensor.
However, Tensorface still considers each face image as a vector instead of a 2-d tensor. Thus, Tensorface is computationally expensive. Moreover, it does not encode discriminating information, so it is not optimal for recognition.

In this paper, we systematically study tensor analysis for subspace learning and propose two novel algorithms called TensorPCA and TensorLDA for learning a tensor subspace. TensorPCA and TensorLDA are fundamentally based on the traditional Principal Component Analysis and Linear Discriminant Analysis algorithms. An image is represented as a matrix X ∈ R^m1 ⊗ R^m2, and the linear transformation in the tensor space is naturally represented as Y = U^T X V, where U is an m1 × m1 matrix and V is an m2 × m2 matrix. The new algorithm is distinct from several perspectives:

1. Compared to traditional subspace learning algorithms such as PCA and LDA, our algorithm is much more computationally efficient. This can be simply seen from the fact that the matrices U and V are much smaller than the matrix A.
2. Compared to the recently proposed Tensorface approach [10], our algorithm explicitly models each face image as a 2-D tensor, which takes the spatial locality of pixels in the face image into account. Also, TensorLDA incorporates the label information, which yields better performance for face recognition.

3. Our algorithm is especially suitable for the small sample size problem, since the number of parameters in our algorithm is much smaller than that of traditional subspace learning algorithms.

The rest of this paper is organized as follows. We give a brief review of PCA and LDA in Section 2. The TensorPCA and TensorLDA algorithms are introduced in Section 3. Experimental results are shown in Section 4. Finally, we provide some concluding remarks and suggestions for future work in Section 5.

2 A Brief Review of PCA and LDA

PCA is a canonical linear dimensionality reduction algorithm. The basic idea of PCA is to project the data along the directions of maximal variance so that the reconstruction error can be minimized. Given a set of data points x_1, x_2, ..., x_n, let w be the transformation vector and y_i = w^T x_i. The objective function of PCA is as follows:

    w_opt = argmax_w Σ_{i=1}^n (y_i − ȳ)^2 = argmax_w w^T C w

where ȳ = (1/n) Σ_i y_i and C is the data covariance matrix. The basis functions of PCA are the eigenvectors of the data covariance matrix associated with the largest eigenvalues.

While PCA seeks directions that are efficient for representation, Linear Discriminant Analysis (LDA) seeks directions that are efficient for discrimination. Suppose we have a set of n samples x_1, x_2, ..., x_n, belonging to k classes. The objective function of LDA is as follows:

    w_opt = argmax_w (w^T S_B w) / (w^T S_W w)

    S_B = Σ_{i=1}^k n_i (m^(i) − m)(m^(i) − m)^T

    S_W = Σ_{i=1}^k Σ_{j=1}^{n_i} (x_j^(i) − m^(i))(x_j^(i) − m^(i))^T

where m is the total sample mean vector, n_i is the number of samples in the i-th class, m^(i) is the average vector of the i-th class, and x_j^(i) is the j-th sample in the i-th class. We call S_W the within-class scatter matrix and S_B the between-class scatter matrix.
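As a minimal sketch of the two objectives above (our own NumPy code, not part of the original report; the function names are illustrative), the PCA directions are the top eigenvectors of the covariance matrix C, and the LDA directions are the top eigenvectors of S_W^{-1} S_B:

```python
import numpy as np

def pca_directions(X, d):
    """Top-d PCA directions: eigenvectors of the data covariance
    matrix C with the largest eigenvalues. X is n x m, one row per sample."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / len(X)                 # data covariance matrix
    _, evecs = np.linalg.eigh(C)           # eigh returns ascending eigenvalues
    return evecs[:, ::-1][:, :d]           # largest eigenvalues first

def lda_directions(X, y, d):
    """Top-d LDA directions: eigenvectors of S_W^{-1} S_B."""
    m = X.mean(axis=0)
    S_W = np.zeros((X.shape[1], X.shape[1]))
    S_B = np.zeros_like(S_W)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
        S_B += len(Xc) * np.outer(mc - m, mc - m)      # between-class scatter
    evals, evecs = np.linalg.eig(np.linalg.solve(S_W, S_B))
    order = np.argsort(-evals.real)
    return evecs[:, order[:d]].real
```

On a toy dataset whose variance (for PCA) or class separation (for LDA) is concentrated on one coordinate, the leading direction aligns with that coordinate, which is a quick way to check the sign conventions.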
3 Subspace Learning Based on Tensor Analysis

Let X ∈ R^(m1 × m2) denote an image of size m1 × m2. Mathematically, X can be thought of as a second-order tensor (or 2-tensor) in the tensor space R^m1 ⊗ R^m2. Let (u_1, ..., u_m1) be a set of orthonormal basis functions of R^m1, and let (v_1, ..., v_m2) be a set of orthonormal basis functions of R^m2. Thus, a 2-tensor X can be uniquely written as:

    X = Σ_i Σ_j (u_i^T X v_j) u_i v_j^T

This indicates that {u_i v_j^T} forms a basis of the tensor space R^m1 ⊗ R^m2. Define two matrices U = [u_1, ..., u_l1] ∈ R^(m1 × l1) and V = [v_1, ..., v_l2] ∈ R^(m2 × l2). Let U be the subspace of R^m1 spanned by {u_i}_{i=1}^{l1} and V be the subspace of R^m2 spanned by {v_j}_{j=1}^{l2}. Thus, the tensor product U ⊗ V is a subspace of R^m1 ⊗ R^m2. The projection of X ∈ R^(m1 × m2) onto the space U ⊗ V is

    Y = U^T X V ∈ R^(l1 × l2)

Suppose we have n images, X_1, ..., X_n ∈ R^(m1 × m2). These images belong to k categories C_1, ..., C_k. For the i-th category, there are n_i images. The mean M_i^(X) of each category is computed by taking the average of X over category i, i.e.,

    M_i^(X) = (1/n_i) Σ_{j ∈ C_i} X_j

and the global mean M^(X) is defined as

    M^(X) = (1/n) Σ_{j=1}^n X_j

Let Y_i = U^T X_i V ∈ R^(l1 × l2). Likewise, we can define M_i^(Y) = (1/n_i) Σ_{j ∈ C_i} Y_j and M^(Y) = (1/n) Σ_{j=1}^n Y_j. It is easy to check that M_i^(Y) = U^T M_i^(X) V and M^(Y) = U^T M^(X) V.

The tensor subspace learning problem aims at finding the (l1 × l2)-dimensional space U ⊗ V based on specific objective functions. In particular, we will introduce two novel algorithms called TensorPCA and TensorLDA in this section. It is important to note that the number of parameters in the tensor subspace learning problem is much smaller than that in the original subspace learning problems like PCA and LDA. Therefore, tensor analysis should be particularly applicable to the small sample size problem.

3.1 TensorPCA

TensorPCA is fundamentally based on PCA. It tries to project the data onto the tensor subspace of maximal variance so that the reconstruction error can be minimized. The objective function of TensorPCA can be described as follows:

    max_{U,V} Σ_i ||Y_i − M^(Y)||^2    (1)
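The tensor projection Y = U^T X V and the mean identity M^(Y) = U^T M^(X) V introduced above are just two small matrix multiplications; a minimal sketch (our own code, with illustrative names):

```python
import numpy as np

def tensor_project(X, U, V):
    """Project an m1 x m2 image X onto the tensor subspace spanned by
    the columns of U (m1 x l1) and V (m2 x l2), giving an l1 x l2 matrix."""
    return U.T @ X @ V
```

Because the projection is linear in X, the mean of the projected images equals the projection of the mean image, which is the identity M^(Y) = U^T M^(X) V used throughout this section.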
Note that we use the tensor norm of the difference of two tensors to measure the distance between two images. Since a second-order tensor is essentially a matrix, we use the Frobenius norm of a matrix as our 2-d tensor norm. Since ||A||^2 = tr(A A^T), we have

    Σ_{i=1}^n ||Y_i − M^(Y)||^2 = Σ_{i=1}^n tr((Y_i − M^(Y))(Y_i − M^(Y))^T)
        = Σ_{i=1}^n tr(U^T (X_i − M^(X)) V V^T (X_i − M^(X))^T U)
        = tr(U^T (Σ_{i=1}^n (X_i − M^(X)) V V^T (X_i − M^(X))^T) U)
        = tr(U^T M_V U)

where M_V = Σ_{i=1}^n (X_i − M^(X)) V V^T (X_i − M^(X))^T. Similarly, ||A||^2 = tr(A^T A), so we also have

    Σ_{i=1}^n ||Y_i − M^(Y)||^2 = tr(V^T (Σ_{i=1}^n (X_i − M^(X))^T U U^T (X_i − M^(X))) V)
        = tr(V^T M_U V)

where M_U = Σ_{i=1}^n (X_i − M^(X))^T U U^T (X_i − M^(X)). Thus, the optimal projection U should consist of the eigenvectors of M_V, and the optimal projection V should consist of the eigenvectors of M_U. One might notice that U and V cannot be computed independently. In our algorithm, we try to find an optimal coordinate system of R^m1 ⊗ R^m2. That is, we assume that both U and V are orthonormal, i.e., U^T U = U U^T = I and V^T V = V V^T = I. In such a case,

    M_V = Σ_{i=1}^n (X_i − M^(X))(X_i − M^(X))^T

and

    M_U = Σ_{i=1}^n (X_i − M^(X))^T (X_i − M^(X))

It is clear that M_V no longer depends on V, and M_U no longer depends on U. Therefore, the matrix U can simply be computed as the eigenvectors of M_V, and the matrix V as the eigenvectors of M_U. Note that both M_U and M_V are symmetric, hence their eigenvectors are orthonormal. This is consistent with our assumptions. If we try to reduce the original tensor space to an l1 × l2 tensor subspace, we choose the first l1 column vectors in U and the first l2 column vectors in V.

3.2 TensorLDA

TensorPCA is performed in unsupervised mode. Its goal is to minimize the reconstruction error. In this subsection, we introduce the TensorLDA algorithm, which is performed under supervised
mode. It tries to find the most discriminative tensor subspace. By projecting the data points into the tensor subspace, the data points of different classes are far from each other, while data points of the same class are close to each other. Similar to traditional Linear Discriminant Analysis, we try to maximize the following objective function:

    max_{U,V} (Σ_{i=1}^k n_i ||M_i^(Y) − M^(Y)||^2) / (Σ_{i=1}^k Σ_{j ∈ C_i} ||Y_j − M_i^(Y)||^2)    (2)

Since ||A||^2 = tr(A A^T), the denominator can be computed as

    Σ_{i=1}^k Σ_{j ∈ C_i} ||Y_j − M_i^(Y)||^2 = Σ_{i=1}^k Σ_{j ∈ C_i} tr((Y_j − M_i^(Y))(Y_j − M_i^(Y))^T)
        = tr(U^T (Σ_{i=1}^k Σ_{j ∈ C_i} (X_j − M_i^(X)) V V^T (X_j − M_i^(X))^T) U)
        = tr(U^T S_w^V U)

where S_w^V = Σ_{i=1}^k Σ_{j ∈ C_i} (X_j − M_i^(X)) V V^T (X_j − M_i^(X))^T. The numerator can be computed as

    Σ_{i=1}^k n_i ||M_i^(Y) − M^(Y)||^2 = Σ_{i=1}^k n_i tr((M_i^(Y) − M^(Y))(M_i^(Y) − M^(Y))^T)
        = tr(U^T (Σ_{i=1}^k n_i (M_i^(X) − M^(X)) V V^T (M_i^(X) − M^(X))^T) U)
        = tr(U^T S_b^V U)

where S_b^V = Σ_{i=1}^k n_i (M_i^(X) − M^(X)) V V^T (M_i^(X) − M^(X))^T. Similarly, ||A||^2 = tr(A^T A), so we also have

    Σ_{i=1}^k Σ_{j ∈ C_i} ||Y_j − M_i^(Y)||^2 = tr(V^T S_w^U V)

    S_w^U = Σ_{i=1}^k Σ_{j ∈ C_i} (X_j − M_i^(X))^T U U^T (X_j − M_i^(X))

and

    Σ_{i=1}^k n_i ||M_i^(Y) − M^(Y)||^2 = tr(V^T S_b^U V)

    S_b^U = Σ_{i=1}^k n_i (M_i^(X) − M^(X))^T U U^T (M_i^(X) − M^(X))
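As a numerical sanity check of the trace identity just derived (the toy data and variable names are ours), the within-class distance computed directly in the tensor subspace should match tr(U^T S_w^V U) computed from the scatter matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
labels = np.array([0, 0, 0, 1, 1, 1, 2, 2])
images = rng.normal(size=(8, 6, 5))
U = np.linalg.qr(rng.normal(size=(6, 3)))[0]   # orthonormal columns
V = np.linalg.qr(rng.normal(size=(5, 2)))[0]

lhs = 0.0                                      # sum of ||Y_j - M_i^(Y)||^2, computed directly
S_w_V = np.zeros((6, 6))                       # S_w^V built from the same deviations
for c in np.unique(labels):
    Xc = images[labels == c]
    Mc = Xc.mean(axis=0)
    for X in Xc:
        D = X - Mc
        lhs += np.linalg.norm(U.T @ D @ V, "fro") ** 2
        S_w_V += D @ V @ V.T @ D.T
rhs = np.trace(U.T @ S_w_V @ U)                # tr(U^T S_w^V U)
```

Up to floating-point rounding, lhs and rhs agree, which confirms the step from the Frobenius-norm sum to the trace of the scatter matrix.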
The optimal projections that preserve the given class structure should maximize tr(U^T S_b^V U) / tr(U^T S_w^V U) and tr(V^T S_b^U V) / tr(V^T S_w^U V) simultaneously. Thus U and V can be obtained by solving the following two generalized eigenvector problems:

    S_b^V a = λ S_w^V a    (3)

    S_b^U a = λ S_w^U a    (4)

Unfortunately, the matrices S_b^V, S_b^U, S_w^V, S_w^U are not fixed but depend on U or V. Therefore, U and V cannot be computed independently. To relax this problem, we add the constraint that U and V be orthonormal. That is, U^T U = U U^T = I and V^T V = V V^T = I. Thus,

    S_w^U = Σ_{i=1}^k Σ_{j ∈ C_i} (X_j − M_i^(X))^T (X_j − M_i^(X))

    S_b^U = Σ_{i=1}^k n_i (M_i^(X) − M^(X))^T (M_i^(X) − M^(X))

    S_w^V = Σ_{i=1}^k Σ_{j ∈ C_i} (X_j − M_i^(X))(X_j − M_i^(X))^T

    S_b^V = Σ_{i=1}^k n_i (M_i^(X) − M^(X))(M_i^(X) − M^(X))^T    (5)

It is clear that, with these constraints, the above four matrices can be computed independently, depending only on the training data points. In the following, we will describe how to compute U and V. It is important to note that the computations of U and V are essentially the same. Without loss of generality, we use S_w to represent S_w^U or S_w^V, and S_b to represent S_b^U or S_b^V. Now our problem becomes:

    a_1 = argmax_a (a^T S_b a) / (a^T S_w a)

    a_k = argmax_a (a^T S_b a) / (a^T S_w a)
          subject to a_k^T a_1 = a_k^T a_2 = ... = a_k^T a_{k−1} = 0

Since S_w is in general nonsingular, for any a we can always normalize it such that a^T S_w a = 1, and the ratio of a^T S_b a and a^T S_w a remains unchanged. Thus, the above maximization problem is equivalent to maximizing the value of a^T S_b a with the additional constraint

    a^T S_w a = 1

Note that the above normalization is only for simplifying the computation. Once we get the optimal solutions, we can re-normalize them to get orthonormal basis vectors.
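Under the orthonormality constraint, the four scatter matrices of Eq. (5) depend only on the training data, so they can be assembled directly; the following sketch (our own code, with illustrative names) builds them and extracts the first basis vector a_1 as the leading eigenvector of S_w^{-1} S_b:

```python
import numpy as np

def tensor_lda_scatter(images, labels):
    """The four scatter matrices of Eq. (5). With U and V constrained
    to be orthonormal, none of them depends on U or V."""
    images = np.asarray(images)
    n, m1, m2 = images.shape
    M = images.mean(axis=0)                      # global mean image
    S_w_V = np.zeros((m1, m1)); S_b_V = np.zeros((m1, m1))
    S_w_U = np.zeros((m2, m2)); S_b_U = np.zeros((m2, m2))
    for c in np.unique(labels):
        Xc = images[labels == c]
        Mc = Xc.mean(axis=0)                     # class mean image
        for X in Xc:
            S_w_V += (X - Mc) @ (X - Mc).T       # within-class, row side
            S_w_U += (X - Mc).T @ (X - Mc)       # within-class, column side
        S_b_V += len(Xc) * (Mc - M) @ (Mc - M).T # between-class, row side
        S_b_U += len(Xc) * (Mc - M).T @ (Mc - M) # between-class, column side
    return S_w_V, S_b_V, S_w_U, S_b_U

def first_basis_vector(S_w, S_b):
    """a_1: eigenvector of S_w^{-1} S_b with the largest eigenvalue."""
    evals, evecs = np.linalg.eig(np.linalg.solve(S_w, S_b))
    return evecs[:, np.argmax(evals.real)].real
```

Passing (S_w_V, S_b_V) gives u_1 and passing (S_w_U, S_b_U) gives v_1; the remaining basis vectors require the deflation step derived next in the text, which this sketch does not implement.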
It is easy to check that a_1 is the eigenvector of the generalized eigen-problem S_b a = λ S_w a associated with the largest eigenvalue. Since S_w is in general nonsingular, a_1 is the eigenvector of the matrix S_w^{−1} S_b associated with the largest eigenvalue. In order to get the k-th basis vector, we maximize the following objective function:

    f(a_k) = (a_k^T S_b a_k) / (a_k^T S_w a_k)    (6)

with the constraints:

    a_k^T a_1 = a_k^T a_2 = ... = a_k^T a_{k−1} = 0,    a_k^T S_w a_k = 1

We can use Lagrange multipliers to transform the above objective function so as to include all the constraints:

    C^(k) = a_k^T S_b a_k − λ(a_k^T S_w a_k − 1) − μ_1 a_k^T a_1 − ... − μ_{k−1} a_k^T a_{k−1}

The optimization is performed by setting the partial derivative of C^(k) with respect to a_k to zero:

    ∂C^(k)/∂a_k = 0  ⟹  2 S_b a_k − 2λ S_w a_k − μ_1 a_1 − ... − μ_{k−1} a_{k−1} = 0    (7)

Multiplying the left side of (7) by a_k^T, we obtain

    2 a_k^T S_b a_k − 2λ a_k^T S_w a_k = 0  ⟹  λ = (a_k^T S_b a_k) / (a_k^T S_w a_k)    (8)

Comparing to (6), λ exactly represents the expression to be maximized. Multiplying the left side of (7) successively by a_1^T S_w^{−1}, ..., a_{k−1}^T S_w^{−1}, we obtain a set of k − 1 equations:

    μ_1 a_1^T S_w^{−1} a_1 + ... + μ_{k−1} a_1^T S_w^{−1} a_{k−1} = 2 a_1^T S_w^{−1} S_b a_k
    μ_1 a_2^T S_w^{−1} a_1 + ... + μ_{k−1} a_2^T S_w^{−1} a_{k−1} = 2 a_2^T S_w^{−1} S_b a_k
    ...
    μ_1 a_{k−1}^T S_w^{−1} a_1 + ... + μ_{k−1} a_{k−1}^T S_w^{−1} a_{k−1} = 2 a_{k−1}^T S_w^{−1} S_b a_k

We define:

    μ^(k−1) = [μ_1, ..., μ_{k−1}]^T,    A^(k−1) = [a_1, ..., a_{k−1}]

    B^(k−1) = [B_ij^(k−1)] = [A^(k−1)]^T S_w^{−1} A^(k−1),    B_ij^(k−1) = a_i^T S_w^{−1} a_j
Using this simplified notation, the previous set of k − 1 equations can be represented in a single matrix relationship:

    B^(k−1) μ^(k−1) = 2 [A^(k−1)]^T S_w^{−1} S_b a_k

thus

    μ^(k−1) = 2 [B^(k−1)]^{−1} [A^(k−1)]^T S_w^{−1} S_b a_k    (9)

Let us now multiply the left side of (7) by S_w^{−1}:

    2 S_w^{−1} S_b a_k − 2λ a_k − μ_1 S_w^{−1} a_1 − ... − μ_{k−1} S_w^{−1} a_{k−1} = 0

This can be expressed using matrix notation as

    2 S_w^{−1} S_b a_k − 2λ a_k − S_w^{−1} A^(k−1) μ^(k−1) = 0

With equation (9), we obtain

    {I − S_w^{−1} A^(k−1) [B^(k−1)]^{−1} [A^(k−1)]^T} S_w^{−1} S_b a_k = λ a_k

As shown in (8), λ is just the criterion to be maximized, thus a_k is the eigenvector of

    M^(k) = {I − S_w^{−1} A^(k−1) [B^(k−1)]^{−1} [A^(k−1)]^T} S_w^{−1} S_b

associated with the largest eigenvalue of M^(k).

In summary, our TensorLDA algorithm is performed as follows:

1. Compute S_w^U, S_b^U, S_w^V, S_b^V as described in Eqn. (5).

2. Calculate the orthonormal matrix U = [u_1, ..., u_m1] as:
(a) u_1 is the eigenvector of (S_w^V)^{−1} S_b^V corresponding to the largest eigenvalue.
(b) u_k is the eigenvector of M_V^(k) corresponding to the largest eigenvalue, where

    M_V^(k) = {I − (S_w^V)^{−1} U^(k−1) [B^(k−1)]^{−1} [U^(k−1)]^T} (S_w^V)^{−1} S_b^V

    U^(k−1) = [u_1, ..., u_{k−1}],    B^(k−1) = [U^(k−1)]^T (S_w^V)^{−1} U^(k−1)

3. Calculate the orthonormal matrix V = [v_1, ..., v_m2] as:
(a) v_1 is the eigenvector of (S_w^U)^{−1} S_b^U corresponding to the largest eigenvalue.
(b) v_k is the eigenvector of M_U^(k) corresponding to the largest eigenvalue, where

    M_U^(k) = {I − (S_w^U)^{−1} V^(k−1) [B_U^(k−1)]^{−1} [V^(k−1)]^T} (S_w^U)^{−1} S_b^U

    V^(k−1) = [v_1, ..., v_{k−1}],    B_U^(k−1) = [V^(k−1)]^T (S_w^U)^{−1} V^(k−1)

If we try to reduce the original tensor space to an l1 × l2 tensor subspace, we choose the first l1 column vectors in U and the first l2 column vectors in V.

4 Experimental Results

In this section, we investigate the performance of our proposed TensorPCA and TensorLDA methods for face recognition. The system performance is compared with the Eigenfaces method (PCA) [9] and the Fisherfaces method (PCA+LDA) [1], two of the most popular linear methods in face recognition.

In this study, three face databases were tested. The first one is the Yale database, the second one is the ORL (Olivetti Research Laboratory) database, and the third one is the PIE (pose, illumination, and expression) database from CMU [6].

In all the experiments, preprocessing to locate the faces was applied. Original images were normalized (in scale and orientation) such that the two eyes were aligned at the same position. Then, the facial areas were cropped into the final images for matching. The size of each cropped image is 32 × 32 pixels in the Yale and PIE databases, and 64 × 64 pixels for the ORL database, with 256 gray levels per pixel. For Fisherfaces and Eigenfaces, each image is represented by a 1,024-dimensional (or 4,096-dimensional for the ORL database) vector in image space. For TensorPCA and TensorLDA, the images are represented as matrices.

Different pattern classifiers have been applied to face recognition, including nearest-neighbor [1], Bayesian [4], Support Vector Machine [5], etc. In this paper, we apply the nearest-neighbor classifier for its simplicity. The Euclidean metric is used as our distance measure. In short, the recognition process has three steps.
First, we calculate the face subspace from the training set of face images; then the new face image to be identified is projected into the d-dimensional subspace (PCA and LDA) or the (d × d)-dimensional tensor subspace (TensorPCA and TensorLDA); finally, the new face image is identified by a nearest-neighbor classifier.

4.1 Yale Database

The Yale face database was constructed at the Yale Center for Computational Vision and Control. It contains 165 grayscale images of 15 individuals. The images demonstrate variations in lighting
Figure 1: Sample face images from the Yale database. For each subject, there are 11 face images under different lighting conditions and with different facial expressions.

Table 1: Performance comparisons on the Yale database

    Method                    Dims (d)    Error Rate
    Baseline
    Eigenfaces (PCA)                      56.5%
    Fisherfaces (PCA+LDA)                 54.3%
    TensorPCA                             53.8%
    TensorLDA                             47.9%

condition (left-light, center-light, right-light), facial expression (normal, happy, sad, sleepy, surprised, and wink), and with/without glasses. Figure 1 shows the 11 images of one individual in the Yale database.

A random subset with 2 images per individual (hence, 30 images in total) was taken with labels to form the training set. The rest of the database was considered to be the testing set. Random selection of training examples was performed 20 times, and the average recognition result was recorded. The training samples were used to learn the subspace. The testing samples were then projected into the low-dimensional representation subspace. Recognition was performed using a nearest-neighbor classifier. Note that, for LDA, there are at most c − 1 nonzero generalized eigenvalues, so an upper bound on the dimension of the reduced space is c − 1, where c is the number of individuals [1].

In general, the performance of all these methods varies with the number of dimensions. We show the best results obtained by Fisherfaces, Eigenfaces, TensorPCA, TensorLDA and Baseline in Table 1. It is found that the TensorLDA method significantly outperforms both the Eigenfaces and Fisherfaces methods. The error rates of Eigenfaces, Fisherfaces, TensorPCA and TensorLDA are 56.5%, 54.3%, 53.8% and 47.9%, respectively. Figure 2 shows the plots of error rate versus dimensionality reduction. It is interesting to note that, when keeping c − 1 dimensions, Fisherfaces is even worse than the baseline. This result is consistent with the observation in [3] that Eigenfaces can outperform Fisherfaces when the training sample is small.

4.2 ORL Database

The ORL (Olivetti Research Laboratory) face database is a well-known dataset for face recognition. It contains 400 images of 40 individuals.
Some images were captured at different times and have different variations, including expression (open or closed eyes, smiling or non-smiling) and facial
Figure 2: Error rate vs. dimensionality reduction on the Yale database

Figure 3: Sample face images from the ORL database. For each subject, there are 10 face images with different facial expressions and facial details.

details (glasses or no glasses). The images were taken with a tolerance for some tilting and rotation of the face of up to 20 degrees. All images are grayscale. Sample images of one individual in the ORL database are displayed in Figure 3.

A random subset with 2 images per individual (hence, 80 images in total) was taken with labels to form the training set. The rest of the database was considered to be the testing set. Random selection of training examples was performed 20 times, and the average recognition result was recorded. The experimental protocol is the same as before. The recognition results are shown in Table 2. It is found that the TensorLDA method significantly outperforms both the Eigenfaces and Fisherfaces methods. The error rates of Eigenfaces, Fisherfaces, TensorPCA and TensorLDA are 33.5%, 29.84%, 30.78% and 23.41%, respectively. Figure 4 shows the plots of error rate versus dimensionality reduction.

4.3 PIE Database

The CMU PIE face database contains 68 individuals with 41,368 face images in total. The face images were captured by 13 synchronized cameras and 21 flashes, under varying pose, illumination, and expression. We choose the five near-frontal poses (C05, C07, C09, C27, C29) and use all the images under different illumination, lighting and expression, which leaves us 170 near-frontal face images for each individual. Figure 5 shows several sample images of one individual with different poses, expressions and illuminations. We randomly choose 20 images for each individual as training data. We repeat this process 20 times and compute the average performance. Table 3
Table 2: Performance comparisons on the ORL database

    Method                    Dims (d)    Error Rate
    Baseline
    Eigenfaces (PCA)                      33.5%
    Fisherfaces (PCA+LDA)                 29.84%
    TensorPCA                             30.78%
    TensorLDA                             23.41%

Figure 4: Error rate vs. dimensionality reduction on the ORL database

shows the recognition results. As can be seen, Fisherfaces performs a bit worse than our algorithms on this database, while Eigenfaces performs poorly. The error rates for TensorLDA, TensorPCA, Fisherfaces, and Eigenfaces are 13.32%, 38.15%, 15.44%, and 38.14%, respectively. Figure 6 shows a plot of error rate versus dimensionality reduction. As can be seen, the error rate of our TensorLDA method decreases quickly as the dimensionality of the face tensor subspace increases, and it achieves its best result with d = 13 dimensions. There is no significant improvement if more dimensions are used. For Fisherfaces, the dimension of the face subspace is bounded by c − 1, and it achieves its best result with c − 1 = 67 dimensions.

5 Conclusions

Tensor-based subspace learning is introduced in this paper. We propose two new linear dimensionality reduction algorithms called TensorPCA and TensorLDA. Different from traditional algorithms like PCA and LDA, our algorithms are performed in the tensor space rather than the vector space. As a result, they are especially suitable for face analysis, where an image is intrinsically a second-order tensor.
Figure 5: Sample face images from the CMU PIE database. For each subject, there are 170 near-frontal face images under varying pose, illumination, and expression.

Table 3: Performance comparisons on the PIE database

    Method                    Dims (d)    Error Rate
    Baseline
    Eigenfaces (PCA)                      38.14%
    Fisherfaces (PCA+LDA)                 15.44%
    TensorPCA                             38.15%
    TensorLDA                             13.32%

Several questions remain to be investigated in our future work:

1. In this paper, we applied our algorithms to face recognition, in which the face images are naturally represented as second-order tensors. It is unclear whether there are other application domains in which the tensor representation is essential.

2. Our algorithms are linear. Therefore, they cannot capture the nonlinear structure of the data manifold. It remains unclear how to generalize our algorithms to the nonlinear case.

References

[1] P. Belhumeur, J. Hespanha, and D. Kriegman. Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(7), 1997.

[2] J. Dy, C. E. Brodley, A. Kak, L. S. Broderick, and A. M. Aisen. Hierarchical feature selection applied to content-based image retrieval. IEEE Trans. on Pattern Analysis and Machine Intelligence, 25, 2003.

[3] A. M. Martinez and A. C. Kak. PCA versus LDA. IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(2), 2001.
Figure 6: Error rate vs. dimensionality reduction on the PIE database

[4] B. Moghaddam and A. Pentland. Probabilistic visual learning for object representation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(7), 1997.

[5] P. J. Phillips. Support vector machines applied to face recognition. In Advances in Neural Information Processing Systems 11.

[6] T. Sim, S. Baker, and M. Bsat. The CMU pose, illumination, and expression database. IEEE Trans. on Pattern Analysis and Machine Intelligence, 25(12), 2003.

[7] D. L. Swets and J. Weng. Using discriminant eigenfeatures for image retrieval. IEEE Trans. on Pattern Analysis and Machine Intelligence, 18(8), 1996.

[8] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86, 1991.

[9] M. Turk and A. P. Pentland. Face recognition using eigenfaces. In IEEE Conference on Computer Vision and Pattern Recognition, Maui, Hawaii, 1991.

[10] M. A. O. Vasilescu and D. Terzopoulos. Multilinear subspace analysis for image ensembles. In IEEE Conference on Computer Vision and Pattern Recognition, 2003.

[11] J. Yang, D. Zhang, A. Frangi, and J. Yang. Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence, 26(1), Jan. 2004.

[12] J. Ye, R. Janardan, and Q. Li. Two-dimensional linear discriminant analysis. In Advances in Neural Information Processing Systems 17, 2004.
More informationLecture 12: Discrete Laplacian
Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationDiscriminative Dictionary Learning with Low-Rank Regularization for Face Recognition
Dscrmnatve Dctonary Learnng wth Low-Rank Regularzaton for Face Recognton Langyue L, Sheng L, and Yun Fu Department of Electrcal and Computer Engneerng Northeastern Unversty Boston, MA 02115, USA {l.langy,
More informationThe Prncpal Component Transform The Prncpal Component Transform s also called Karhunen-Loeve Transform (KLT, Hotellng Transform, oregenvector Transfor
Prncpal Component Transform Multvarate Random Sgnals A real tme sgnal x(t can be consdered as a random process and ts samples x m (m =0; ;N, 1 a random vector: The mean vector of X s X =[x0; ;x N,1] T
More informationC/CS/Phy191 Problem Set 3 Solutions Out: Oct 1, 2008., where ( 00. ), so the overall state of the system is ) ( ( ( ( 00 ± 11 ), Φ ± = 1
C/CS/Phy9 Problem Set 3 Solutons Out: Oct, 8 Suppose you have two qubts n some arbtrary entangled state ψ You apply the teleportaton protocol to each of the qubts separately What s the resultng state obtaned
More informationSingular Value Decomposition: Theory and Applications
Sngular Value Decomposton: Theory and Applcatons Danel Khashab Sprng 2015 Last Update: March 2, 2015 1 Introducton A = UDV where columns of U and V are orthonormal and matrx D s dagonal wth postve real
More informationOn a direct solver for linear least squares problems
ISSN 2066-6594 Ann. Acad. Rom. Sc. Ser. Math. Appl. Vol. 8, No. 2/2016 On a drect solver for lnear least squares problems Constantn Popa Abstract The Null Space (NS) algorthm s a drect solver for lnear
More informationLecture 10: Dimensionality reduction
Lecture : Dmensonalt reducton g The curse of dmensonalt g Feature etracton s. feature selecton g Prncpal Components Analss g Lnear Dscrmnant Analss Intellgent Sensor Sstems Rcardo Guterrez-Osuna Wrght
More informationLectures - Week 4 Matrix norms, Conditioning, Vector Spaces, Linear Independence, Spanning sets and Basis, Null space and Range of a Matrix
Lectures - Week 4 Matrx norms, Condtonng, Vector Spaces, Lnear Independence, Spannng sets and Bass, Null space and Range of a Matrx Matrx Norms Now we turn to assocatng a number to each matrx. We could
More informationAPPENDIX A Some Linear Algebra
APPENDIX A Some Lnear Algebra The collecton of m, n matrces A.1 Matrces a 1,1,..., a 1,n A = a m,1,..., a m,n wth real elements a,j s denoted by R m,n. If n = 1 then A s called a column vector. Smlarly,
More informationCS 468 Lecture 16: Isometry Invariance and Spectral Techniques
CS 468 Lecture 16: Isometry Invarance and Spectral Technques Justn Solomon Scrbe: Evan Gawlk Introducton. In geometry processng, t s often desrable to characterze the shape of an object n a manner that
More informationGeneralized Linear Methods
Generalzed Lnear Methods 1 Introducton In the Ensemble Methods the general dea s that usng a combnaton of several weak learner one could make a better learner. More formally, assume that we have a set
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that
More informationTWO-DIMENSIONAL MAXIMUM MARGIN PROJECTION FOR FACE RECOGNITION
Journal of heoretcal and Appled Informaton echnology 10 th January 013. Vol. 47 No.1 005-013 JAI & LLS. All rghts reserved. ISSN: 199-8645.att.org E-ISSN: 1817-3195 WO-DIMENSIONAL MAXIMUM MARGIN PROJECION
More informationAssortment Optimization under MNL
Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.
More informationCS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015
CS 3710: Vsual Recognton Classfcaton and Detecton Adrana Kovashka Department of Computer Scence January 13, 2015 Plan for Today Vsual recognton bascs part 2: Classfcaton and detecton Adrana s research
More informationLinear Approximation with Regularization and Moving Least Squares
Lnear Approxmaton wth Regularzaton and Movng Least Squares Igor Grešovn May 007 Revson 4.6 (Revson : March 004). 5 4 3 0.5 3 3.5 4 Contents: Lnear Fttng...4. Weghted Least Squares n Functon Approxmaton...
More informationLECTURE 9 CANONICAL CORRELATION ANALYSIS
LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of
More information1 Matrix representations of canonical matrices
1 Matrx representatons of canoncal matrces 2-d rotaton around the orgn: ( ) cos θ sn θ R 0 = sn θ cos θ 3-d rotaton around the x-axs: R x = 1 0 0 0 cos θ sn θ 0 sn θ cos θ 3-d rotaton around the y-axs:
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More informationAdaptive Manifold Learning
Adaptve Manfold Learnng Jng Wang, Zhenyue Zhang Department of Mathematcs Zhejang Unversty, Yuquan Campus, Hangzhou, 327, P. R. Chna wroarng@sohu.com zyzhang@zju.edu.cn Hongyuan Zha Department of Computer
More informationPHYS 705: Classical Mechanics. Calculus of Variations II
1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary
More informationA new Approach for Solving Linear Ordinary Differential Equations
, ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of
More informationImage Clustering with Tensor Representation
Image Clusterng wth Tensor Representaton Xaofe He 1 Deng Ca Hafeng Lu 3 Jawe Han 1 Department of Computer Scence, Unversty of Chcago, Chcago, IL 637 xaofe@cs.uchcago.edu Department of Computer Scence,
More informationCSE 252C: Computer Vision III
CSE 252C: Computer Vson III Lecturer: Serge Belonge Scrbe: Catherne Wah LECTURE 15 Kernel Machnes 15.1. Kernels We wll study two methods based on a specal knd of functon k(x, y) called a kernel: Kernel
More informationHomework Assignment 3 Due in class, Thursday October 15
Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.
More informationGlobal Sensitivity. Tuesday 20 th February, 2018
Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values
More informationAutomatic Object Trajectory- Based Motion Recognition Using Gaussian Mixture Models
Automatc Object Trajectory- Based Moton Recognton Usng Gaussan Mxture Models Fasal I. Bashr, Ashfaq A. Khokhar, Dan Schonfeld Electrcal and Computer Engneerng, Unversty of Illnos at Chcago. Chcago, IL,
More informationHongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)
ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of
More informationA Tutorial on Data Reduction. Linear Discriminant Analysis (LDA) Shireen Elhabian and Aly A. Farag. University of Louisville, CVIP Lab September 2009
A utoral on Data Reducton Lnear Dscrmnant Analss (LDA) hreen Elhaban and Al A Farag Unverst of Lousvlle, CVIP Lab eptember 009 Outlne LDA objectve Recall PCA No LDA LDA o Classes Counter eample LDA C Classes
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have
More informationUsing Random Subspace to Combine Multiple Features for Face Recognition
Usng Random Subspace to Combne Multple Features for Face Recognton Xaogang Wang and Xaoou ang Department of Informaton Engneerng he Chnese Unversty of Hong Kong Shatn, Hong Kong {xgwang1, xtang}@e.cuhk.edu.hk
More informationNatural Language Processing and Information Retrieval
Natural Language Processng and Informaton Retreval Support Vector Machnes Alessandro Moschtt Department of nformaton and communcaton technology Unversty of Trento Emal: moschtt@ds.untn.t Summary Support
More informationLecture Notes on Linear Regression
Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume
More informationEcon107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)
I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes
More informationPattern Classification
Pattern Classfcaton All materals n these sldes ere taken from Pattern Classfcaton (nd ed) by R. O. Duda, P. E. Hart and D. G. Stork, John Wley & Sons, 000 th the permsson of the authors and the publsher
More informationP R. Lecture 4. Theory and Applications of Pattern Recognition. Dept. of Electrical and Computer Engineering /
Theory and Applcatons of Pattern Recognton 003, Rob Polkar, Rowan Unversty, Glassboro, NJ Lecture 4 Bayes Classfcaton Rule Dept. of Electrcal and Computer Engneerng 0909.40.0 / 0909.504.04 Theory & Applcatons
More informationSection 8.3 Polar Form of Complex Numbers
80 Chapter 8 Secton 8 Polar Form of Complex Numbers From prevous classes, you may have encountered magnary numbers the square roots of negatve numbers and, more generally, complex numbers whch are the
More informationThe Expectation-Maximization Algorithm
The Expectaton-Maxmaton Algorthm Charles Elan elan@cs.ucsd.edu November 16, 2007 Ths chapter explans the EM algorthm at multple levels of generalty. Secton 1 gves the standard hgh-level verson of the algorthm.
More informationComposite Hypotheses testing
Composte ypotheses testng In many hypothess testng problems there are many possble dstrbutons that can occur under each of the hypotheses. The output of the source s a set of parameters (ponts n a parameter
More informationChapter 8 Indicator Variables
Chapter 8 Indcator Varables In general, e explanatory varables n any regresson analyss are assumed to be quanttatve n nature. For example, e varables lke temperature, dstance, age etc. are quanttatve n
More informationHidden Markov Models & The Multivariate Gaussian (10/26/04)
CS281A/Stat241A: Statstcal Learnng Theory Hdden Markov Models & The Multvarate Gaussan (10/26/04) Lecturer: Mchael I. Jordan Scrbes: Jonathan W. Hu 1 Hdden Markov Models As a bref revew, hdden Markov models
More informationNon-linear Canonical Correlation Analysis Using a RBF Network
ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane
More information4DVAR, according to the name, is a four-dimensional variational method.
4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The
More informationThe Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction
ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also
More informationCHALMERS, GÖTEBORGS UNIVERSITET. SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS. COURSE CODES: FFR 135, FIM 720 GU, PhD
CHALMERS, GÖTEBORGS UNIVERSITET SOLUTIONS to RE-EXAM for ARTIFICIAL NEURAL NETWORKS COURSE CODES: FFR 35, FIM 72 GU, PhD Tme: Place: Teachers: Allowed materal: Not allowed: January 2, 28, at 8 3 2 3 SB
More informationBayesian predictive Configural Frequency Analysis
Psychologcal Test and Assessment Modelng, Volume 54, 2012 (3), 285-292 Bayesan predctve Confgural Frequency Analyss Eduardo Gutérrez-Peña 1 Abstract Confgural Frequency Analyss s a method for cell-wse
More informationHead Pose Estimation Using Spectral Regression Discriminant Analysis
Head Pose Estmaton Usng Spectral Regresson Dscrmnant Analyss Cafeng Shan and We Chen Phlps Research Hgh Tech Campus 3, AE Endhoven, The Netherlands {cafeng.shan, w.chen}@phlps.com Abstract In ths paper,
More informationFace Recognition CS 663
Face Recognton CS 663 Importance of face recognton The most common way for humans to recognze each other Study of the process of face recognton has applcatons n () securty/survellance/authentcaton, ()
More informationMathematical Preparations
1 Introducton Mathematcal Preparatons The theory of relatvty was developed to explan experments whch studed the propagaton of electromagnetc radaton n movng coordnate systems. Wthn expermental error the
More informationProblem Set 9 Solutions
Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem
More informationAffine transformations and convexity
Affne transformatons and convexty The purpose of ths document s to prove some basc propertes of affne transformatons nvolvng convex sets. Here are a few onlne references for background nformaton: http://math.ucr.edu/
More information2.3 Nilpotent endomorphisms
s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms
More informationPhysics 5153 Classical Mechanics. D Alembert s Principle and The Lagrangian-1
P. Guterrez Physcs 5153 Classcal Mechancs D Alembert s Prncple and The Lagrangan 1 Introducton The prncple of vrtual work provdes a method of solvng problems of statc equlbrum wthout havng to consder the
More informationx = , so that calculated
Stat 4, secton Sngle Factor ANOVA notes by Tm Plachowsk n chapter 8 we conducted hypothess tests n whch we compared a sngle sample s mean or proporton to some hypotheszed value Chapter 9 expanded ths to
More informationNorms, Condition Numbers, Eigenvalues and Eigenvectors
Norms, Condton Numbers, Egenvalues and Egenvectors 1 Norms A norm s a measure of the sze of a matrx or a vector For vectors the common norms are: N a 2 = ( x 2 1/2 the Eucldean Norm (1a b 1 = =1 N x (1b
More informationWhich Separator? Spring 1
Whch Separator? 6.034 - Sprng 1 Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng 3 Margn of a pont " # y (w $ + b) proportonal
More informationLecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.
prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove
More informationA Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach
A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland
More informationEfficient and Robust Feature Extraction by Maximum Margin Criterion
Effcent and Robust Feature Extracton by Maxmum Margn Crteron Hafeng L Tao Jang Department of Computer Scence Unversty of Calforna Rversde, CA 95 {hl,jang}@cs.ucr.edu Keshu Zhang Department of Electrcal
More informationSupport Vector Machines CS434
Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? Intuton of Margn Consder ponts A, B, and C We
More informationA Robust Method for Calculating the Correlation Coefficient
A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal
More informationDepartment of Statistics University of Toronto STA305H1S / 1004 HS Design and Analysis of Experiments Term Test - Winter Solution
Department of Statstcs Unversty of Toronto STA35HS / HS Desgn and Analyss of Experments Term Test - Wnter - Soluton February, Last Name: Frst Name: Student Number: Instructons: Tme: hours. Ads: a non-programmable
More informationA Local Variational Problem of Second Order for a Class of Optimal Control Problems with Nonsmooth Objective Function
A Local Varatonal Problem of Second Order for a Class of Optmal Control Problems wth Nonsmooth Objectve Functon Alexander P. Afanasev Insttute for Informaton Transmsson Problems, Russan Academy of Scences,
More informationChapter 13: Multiple Regression
Chapter 13: Multple Regresson 13.1 Developng the multple-regresson Model The general model can be descrbed as: It smplfes for two ndependent varables: The sample ft parameter b 0, b 1, and b are used to
More informationn α j x j = 0 j=1 has a nontrivial solution. Here A is the n k matrix whose jth column is the vector for all t j=0
MODULE 2 Topcs: Lnear ndependence, bass and dmenson We have seen that f n a set of vectors one vector s a lnear combnaton of the remanng vectors n the set then the span of the set s unchanged f that vector
More informationFUZZY GOAL PROGRAMMING VS ORDINARY FUZZY PROGRAMMING APPROACH FOR MULTI OBJECTIVE PROGRAMMING PROBLEM
Internatonal Conference on Ceramcs, Bkaner, Inda Internatonal Journal of Modern Physcs: Conference Seres Vol. 22 (2013) 757 761 World Scentfc Publshng Company DOI: 10.1142/S2010194513010982 FUZZY GOAL
More informationFor now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.
Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson
More informationarxiv:cs.cv/ Jun 2000
Correlaton over Decomposed Sgnals: A Non-Lnear Approach to Fast and Effectve Sequences Comparson Lucano da Fontoura Costa arxv:cs.cv/0006040 28 Jun 2000 Cybernetc Vson Research Group IFSC Unversty of São
More informationLecture 20: November 7
0-725/36-725: Convex Optmzaton Fall 205 Lecturer: Ryan Tbshran Lecture 20: November 7 Scrbes: Varsha Chnnaobreddy, Joon Sk Km, Lngyao Zhang Note: LaTeX template courtesy of UC Berkeley EECS dept. Dsclamer:
More information