Tensor Subspace Analysis


Xiaofei He (1), Deng Cai (2), Partha Niyogi (1)
(1) Department of Computer Science, University of Chicago. {xiaofei, niyogi}@cs.uchicago.edu
(2) Department of Computer Science, University of Illinois at Urbana-Champaign. dengcai@uiuc.edu

Abstract

Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low dimensional linear spaces. Typical linear subspace learning algorithms include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projection (LPP). All of these methods consider an n1 x n2 image as a high dimensional vector in R^{n1 n2}, while an image represented in the plane is intrinsically a matrix. In this paper, we propose a new algorithm called Tensor Subspace Analysis (TSA). TSA considers an image as a second order tensor in R^{n1} (x) R^{n2}, where R^{n1} and R^{n2} are two vector spaces. The relationship between the column vectors of the image matrix, and that between the row vectors, can be naturally characterized by TSA. TSA detects the intrinsic local geometrical structure of the tensor space by learning a lower dimensional tensor subspace. We compare our proposed approach with the PCA, LDA and LPP methods on two standard databases. Experimental results demonstrate that TSA achieves a better recognition rate while being much more efficient.

1 Introduction

There is currently a great deal of interest in appearance-based approaches to face recognition [1], [5], [8]. When using appearance-based approaches, we usually represent an image of size n1 x n2 pixels by a vector in R^{n1 n2}. Throughout this paper, we denote by "face space" the set of all face images. The face space is generally a low dimensional manifold embedded in the ambient space [6], [7], [10]. Typical linear algorithms for learning such a face manifold for recognition include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Locality Preserving Projection (LPP) [4]. Most previous work on statistical image analysis represents an image by a vector in a high-dimensional space. However, an image is intrinsically a matrix, or a second order tensor. The relationship between the row vectors of the matrix and that between the column vectors might be important for finding a projection, especially when the number of training samples is small. Recently, multilinear algebra, the algebra of higher-order tensors, has been applied to analyzing the multifactor structure of image ensembles [9], [11], [12]. Vasilescu and Terzopoulos have proposed a novel face representation algorithm called Tensorface [9]. Tensorface represents the set of face images by a higher-order tensor and extends Singular Value Decomposition (SVD) to higher-order tensor data. In this way, the multiple factors related to expression, illumination and pose can be separated into different dimensions of the tensor.

In this paper, we propose a new algorithm for image (human faces in particular) representation based on considerations from multilinear algebra and differential geometry. We call it Tensor Subspace Analysis (TSA). An image of size n1 x n2 is represented as a second order tensor (or matrix) in the tensor space R^{n1} (x) R^{n2}. The face space is generally a submanifold embedded in R^{n1} (x) R^{n2}. Given some images sampled from the face manifold, we can build an adjacency graph to model the local geometrical structure of the manifold. TSA finds a projection that respects this graph structure. The obtained tensor subspace provides an optimal linear approximation to the face manifold in the sense of local isometry. Vasilescu shows how to extend SVD (PCA) to higher order tensor data; we extend the Laplacian based idea to tensor data. It is worthwhile to highlight several aspects of the proposed approach here:

1. While traditional linear dimensionality reduction algorithms like PCA, LDA and LPP find a map from R^n to R^l (l < n), TSA finds a map from R^{n1} (x) R^{n2} to R^{l1} (x) R^{l2} (l1 < n1, l2 < n2). This leads to structured dimensionality reduction.
2. TSA can be performed in a supervised, unsupervised, or semi-supervised manner. When label information is available, it can easily be incorporated into the graph structure. Also, by preserving neighborhood structure, TSA is less sensitive to noise and outliers.
3. The computation of TSA is very simple. It can be obtained by solving two eigenvector problems. The matrices in the eigen-problems are of size n1 x n1 or n2 x n2, which are much smaller than the matrices of size n x n (n = n1 n2) in PCA, LDA and LPP. Therefore, TSA is much more efficient in time and storage. Few parameters need to be independently estimated, so performance on small data sets is very good.
4. TSA explicitly takes into account the manifold structure of the image space. The local geometrical structure is modeled by an adjacency graph.
5. This paper is primarily focused on second order tensors (matrices). However, the algorithm and analysis presented here can also be applied to higher order tensors.

2 Tensor Subspace Analysis

In this section, we introduce a new algorithm called Tensor Subspace Analysis for learning a tensor subspace which respects the geometrical and discriminative structures of the original data space.

2.1 Laplacian based Dimensionality Reduction

The problem of dimensionality reduction has been widely considered. One general approach is based on the graph Laplacian [2]. The objective function of the Laplacian eigenmap is

    min_f sum_{ij} ( f(x_i) - f(x_j) )^2 S_{ij}

where S is a similarity matrix. The optimal functions are nonlinear but may be expensive to compute. A class of algorithms may be obtained by restricting the problem to more tractable families of functions. One natural approach restricts attention to linear functions, giving rise to LPP [4]. In this paper we consider a more structured subset of linear functions that arise out of tensor analysis. This provides greater computational benefits.

2.2 The Linear Dimensionality Reduction Problem in Tensor Space

The generic problem of linear dimensionality reduction in second order tensor space is the following. Given a set of data points X_1, ..., X_m in R^{n1} (x) R^{n2}, find two transformation matrices, U of size n1 x l1 and V of size n2 x l2, that map these m points to a set of points Y_1, ..., Y_m in R^{l1} (x) R^{l2} (l1 < n1, l2 < n2), such that Y_i "represents" X_i, where Y_i = U^T X_i V. Our method is of particular applicability in the special case where X_1, ..., X_m lie on a nonlinear submanifold M embedded in R^{n1} (x) R^{n2}.

2.3 Optimal Linear Embeddings

As described previously, the face space is probably a nonlinear submanifold embedded in the tensor space. One hopes to estimate geometrical and topological properties of the submanifold from random points ("scattered data") lying on this unknown submanifold. In this section, we consider the particular question of finding a linear subspace approximation to the submanifold in the sense of local isometry. Our method is fundamentally based on LPP [4]. Given m data points X = {X_1, ..., X_m} sampled from the face submanifold M in R^{n1} (x) R^{n2}, one can build a nearest neighbor graph G to model the local geometrical structure of M. Let S be the weight matrix of G. A possible definition of S is as follows:

    S_{ij} = exp( -||X_i - X_j||^2 / t ), if X_i is among the k nearest neighbors of X_j, or X_j is among the k nearest neighbors of X_i; 0, otherwise.    (1)

where t is a suitable constant. The function exp( -||X_i - X_j||^2 / t ) is the so called heat kernel, which is intimately related to the manifold structure. ||.|| denotes the Frobenius norm of a matrix, i.e. ||A||^2 = sum_{ij} a_{ij}^2. When label information is available, it can easily be incorporated into the graph as follows:

    S_{ij} = exp( -||X_i - X_j||^2 / t ), if X_i and X_j share the same label; 0, otherwise.    (2)

Let U and V be the transformation matrices.
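As a concrete illustration, the weight matrix of Eqs. (1) and (2) can be sketched in code. This is a minimal numpy sketch under our own naming (the function `heat_kernel_weights` is not from the paper); each image is flattened only to compute the pairwise Frobenius distances:

```python
import numpy as np

def heat_kernel_weights(X, k=5, t=1.0):
    """Weight matrix of Eq. (1): a symmetrized k-NN graph with
    heat-kernel weights exp(-||Xi - Xj||_F^2 / t).

    X: array of shape (m, n1, n2) -- m images as second order tensors.
    """
    m = X.shape[0]
    flat = X.reshape(m, -1)  # Frobenius norm of Xi - Xj == 2-norm of flattened difference
    d2 = np.square(flat[:, None, :] - flat[None, :, :]).sum(-1)  # pairwise ||Xi - Xj||^2
    # k nearest neighbors of each point (column 0 of the argsort is the point itself)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]
    S = np.zeros((m, m))
    for i in range(m):
        for j in nn[i]:
            w = np.exp(-d2[i, j] / t)
            S[i, j] = w
            S[j, i] = w  # the "or" in Eq. (1) makes the graph symmetric
    return S
```

The supervised variant of Eq. (2) would simply replace the k-NN condition with a same-label test.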
A reasonable transformation respecting the graph structure can be obtained by solving the following objective function:

    min_{U,V} sum_{ij} || U^T X_i V - U^T X_j V ||^2 S_{ij}    (3)

The objective function incurs a heavy penalty if neighboring points X_i and X_j are mapped far apart. Minimizing it is therefore an attempt to ensure that if X_i and X_j are close, then U^T X_i V and U^T X_j V are close as well. Let Y_i = U^T X_i V, and let D be the diagonal matrix with D_{ii} = sum_j S_{ij}. Since ||A||^2 = tr(A A^T), we see that:

    (1/2) sum_{ij} || U^T X_i V - U^T X_j V ||^2 S_{ij}
      = (1/2) sum_{ij} tr( (Y_i - Y_j)(Y_i - Y_j)^T ) S_{ij}
      = (1/2) sum_{ij} tr( Y_i Y_i^T + Y_j Y_j^T - Y_i Y_j^T - Y_j Y_i^T ) S_{ij}
      = tr( sum_i D_{ii} Y_i Y_i^T - sum_{ij} S_{ij} Y_i Y_j^T )
      = tr( sum_i D_{ii} U^T X_i V V^T X_i^T U - sum_{ij} S_{ij} U^T X_i V V^T X_j^T U )
      = tr( U^T (D_V - S_V) U )

where D_V = sum_i D_{ii} X_i V V^T X_i^T and S_V = sum_{ij} S_{ij} X_i V V^T X_j^T. Similarly, ||A||^2 = tr(A^T A), so we also have:

    (1/2) sum_{ij} || U^T X_i V - U^T X_j V ||^2 S_{ij}
      = (1/2) sum_{ij} tr( (Y_i - Y_j)^T (Y_i - Y_j) ) S_{ij}
      = (1/2) sum_{ij} tr( Y_i^T Y_i + Y_j^T Y_j - Y_i^T Y_j - Y_j^T Y_i ) S_{ij}
      = tr( sum_i D_{ii} Y_i^T Y_i - sum_{ij} S_{ij} Y_i^T Y_j )
      = tr( sum_i D_{ii} V^T X_i^T U U^T X_i V - sum_{ij} S_{ij} V^T X_i^T U U^T X_j V )
      = tr( V^T (D_U - S_U) V )

where D_U = sum_i D_{ii} X_i^T U U^T X_i and S_U = sum_{ij} S_{ij} X_i^T U U^T X_j. Therefore, we should simultaneously minimize tr( U^T (D_V - S_V) U ) and tr( V^T (D_U - S_U) V ).

In addition to preserving the graph structure, we also aim to maximize the global variance on the manifold. Recall that the variance of a random variable x can be written as follows:

    var(x) = int_M (x - mu)^2 dP(x),  mu = int_M x dP(x)

where M is the data manifold, mu is the expected value of x, and dP is the probability measure on the manifold. By spectral graph theory [3], dP can be discretely estimated by the diagonal matrix D (D_{ii} = sum_j S_{ij}) on the sample points. Let Y = U^T X V denote the random variable in the tensor subspace and suppose the data points have zero mean. The weighted variance can then be estimated as follows:

    var(Y) = sum_i ||Y_i||^2 D_{ii} = sum_i tr( Y_i^T Y_i ) D_{ii}
           = sum_i tr( V^T X_i^T U U^T X_i V ) D_{ii} = tr( V^T D_U V )

Similarly, ||Y_i||^2 = tr( Y_i Y_i^T ), so we also have:

    var(Y) = sum_i tr( Y_i Y_i^T ) D_{ii} = tr( U^T ( sum_i D_{ii} X_i V V^T X_i^T ) U ) = tr( U^T D_V U )

Finally, we get the following optimization problems:

    min_{U,V} tr( U^T (D_V - S_V) U ) / tr( U^T D_V U )    (4)

    min_{U,V} tr( V^T (D_U - S_U) V ) / tr( V^T D_U V )    (5)

The two minimization problems (4) and (5) depend on each other and hence cannot be solved independently. In the following subsection, we describe a simple computational method for solving them.

2.4 Computation

In this subsection, we discuss how to solve the optimization problems (4) and (5). It is easy to see that the optimal U is given by the generalized eigenvectors of (D_V - S_V, D_V) and the optimal V by the generalized eigenvectors of (D_U - S_U, D_U). However, it is difficult to compute the optimal U and V simultaneously, since the matrices D_V, S_V, D_U, S_U are not fixed. In this paper, we compute U and V iteratively as follows. We first fix U; then V can be computed by solving the following generalized eigenvector problem:

    (D_U - S_U) v = lambda D_U v    (6)

Once V is obtained, U can be updated by solving the generalized eigenvector problem:

    (D_V - S_V) u = lambda D_V u    (7)

Thus, the optimal U and V can be obtained by iteratively computing the generalized eigenvectors of (6) and (7). In our experiments, U is initially set to the identity matrix. It is easy to show that the matrices D_U, D_V, D_U - S_U, and D_V - S_V are all symmetric and positive semi-definite.

3 Experimental Results

In this section, several experiments are carried out to show the efficiency and effectiveness of our proposed TSA algorithm for face recognition. We compare our algorithm with the Eigenface (PCA) [8], Fisherface (LDA) [1], and Laplacianface (LPP) [5] methods, three of the most popular linear methods for face recognition. Two face databases were used. The first is the PIE (Pose, Illumination, and Expression) database from CMU, and the second is the ORL database. In all the experiments, preprocessing to locate the faces was applied. Original images were normalized (in scale and orientation) such that the two eyes were aligned at the same position. Then the facial areas were cropped into the final images for matching. The size of each cropped image in all the experiments is 32 x 32 pixels, with 256 gray levels per pixel.
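The iterative procedure of Section 2.4 can be sketched in code. This is a minimal numpy sketch, not the authors' implementation: the function names are ours, the small ridge term added to keep the Cholesky factorization well defined is our assumption (D_U and D_V are only guaranteed positive semi-definite), and so is the default iteration count:

```python
import numpy as np

def gen_eigh(A, B):
    """Generalized symmetric eigenproblem A v = lam B v (B positive
    definite), solved by whitening with the Cholesky factor of B.
    Eigenvalues are returned in ascending order."""
    L = np.linalg.cholesky(B)
    Li = np.linalg.inv(L)
    w, Y = np.linalg.eigh(Li @ A @ Li.T)
    return w, Li.T @ Y

def tsa_fit(X, S, l1, l2, n_iter=3):
    """Alternate between the generalized eigenproblems (6) and (7).

    X: (m, n1, n2) images as second order tensors; S: (m, m) graph
    weight matrix from Eq. (1) or (2); l1, l2: target dimensions.
    The default n_iter is an assumption (the paper fixes a small count).
    """
    m, n1, n2 = X.shape
    d = S.sum(axis=1)                      # D_ii = sum_j S_ij
    U, V = np.eye(n1), np.eye(n2)          # U starts at the identity
    for _ in range(n_iter):
        # (6): with U fixed, V spans the l2 smallest generalized
        # eigenvectors of (D_U - S_U, D_U).
        DU = sum(d[i] * X[i].T @ U @ U.T @ X[i] for i in range(m))
        SU = sum(S[i, j] * X[i].T @ U @ U.T @ X[j]
                 for i in range(m) for j in range(m))
        _, W = gen_eigh(DU - SU, DU + 1e-8 * np.eye(n2))  # ridge: D_U is only PSD
        V = W[:, :l2]
        # (7): with V fixed, U spans the l1 smallest generalized
        # eigenvectors of (D_V - S_V, D_V).
        DV = sum(d[i] * X[i] @ V @ V.T @ X[i].T for i in range(m))
        SV = sum(S[i, j] * X[i] @ V @ V.T @ X[j].T
                 for i in range(m) for j in range(m))
        _, W = gen_eigh(DV - SV, DV + 1e-8 * np.eye(n1))
        U = W[:, :l1]
    return U, V
```

Note how the eigenproblems involve only n1 x n1 and n2 x n2 matrices, which is exactly the efficiency advantage claimed over vector-space methods.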
No further preprocessing is done. For the Eigenface, Fisherface, and Laplacianface methods, the image is represented as a 1024-dimensional vector, while in our algorithm the image is represented as a (32 x 32)-dimensional matrix, i.e. a second order tensor. The nearest neighbor classifier is used for classification for its simplicity. In short, the recognition process has three steps. First, we calculate the face subspace from the training set of face images; then the new face image to be identified is projected into the d-dimensional subspace (PCA, LDA, and LPP) or the (d x d)-dimensional tensor subspace (TSA); finally, the new face image is identified by the nearest neighbor classifier. In our TSA algorithm, the number of iterations is fixed to a small constant.

3.1 Experiments on the PIE Database

The CMU PIE face database contains 68 subjects with 41,368 face images in total. The face images were captured by 13 synchronized cameras and 21 flashes, under varying pose, illumination and expression. We choose the five near-frontal poses (C05, C07, C09, C27, C29) and use all the images under different illuminations and expressions, giving 170 images for each individual. For each individual, l (= 5, 10, 20, 30) images are randomly selected for training and the rest are used for testing. The training set is utilized to learn the subspace representation of the face manifold using the Eigenface, Fisherface, Laplacianface and TSA algorithms. The testing images are projected into the face subspace, in which recognition is then performed. For each given l, we average the results over repeated random splits. It is important to note that the Laplacianface algorithm and our TSA algorithm share the same graph structure, as defined in Eqn. (2).

Figure 1 shows the plots of error rate versus dimensionality reduction for the Eigenface, Fisherface, Laplacianface, TSA and baseline methods. For the baseline method, recognition is simply performed in the original 1024-dimensional image space without any dimensionality reduction. Note that the upper bound on the dimensionality of Fisherface is c - 1, where c is the number of individuals. For our TSA algorithm, we only show its performance in the (d x d)-dimensional tensor subspaces, i.e. 1, 4, 9, etc. As can be seen, the performance of the Eigenface, Fisherface, Laplacianface, and TSA algorithms varies with the number of dimensions.

Figure 1: Error rate vs. dimensionality reduction on the PIE database. Panels (a)-(d) plot error rate against the reduced dimensionality d (d x d for TSA) for TSA, Laplacianfaces (PCA+LPP), Fisherfaces (PCA+LDA), Eigenfaces (PCA) and the baseline, for increasing numbers of training samples per individual.

Table 1: Performance comparison on the PIE database. For each method (Baseline, Eigenfaces, Fisherfaces, Laplacianfaces, TSA) and each training-set size (5, 10, 20, 30 samples per individual), the table reports the lowest error rate, the corresponding optimal dimensionality, and the running time in seconds.

We show the best results obtained by each method in Table 1; the corresponding face subspaces are called the optimal face subspaces. Our TSA method outperforms the other four methods for all numbers of training samples per individual. The Eigenface method performs the worst; it obtains no improvement over the baseline method. The Fisherface and Laplacianface methods perform comparably to each other. The dimensions of the optimal subspaces are also given in Table 1. As we have discussed, TSA can be implemented very efficiently. We show the running time in seconds for each method in Table 1. As can be seen, TSA is much faster than the Eigenface, Fisherface and Laplacianface methods. All the algorithms were implemented in Matlab 6.5 and run on an Intel Pentium 4 2.566 GHz PC with 1 GB of memory.

3.2 Experiments on the ORL Database

The ORL (Olivetti Research Laboratory) face database is used in this test. It consists of a total of 400 face images of 40 people (10 samples per person). The images were captured at different times and exhibit variations in expression (open or closed eyes, smiling or not smiling) and facial details (glasses or no glasses). The images were taken with a tolerance for some tilting and rotation of the face of up to 20 degrees. For each individual, l (= 2, 3, 4, 5) images are randomly selected for training and the rest are used for testing. The experimental design is the same as in the last subsection. For each given l, we average the results over repeated random splits.

Figure 2: Error rate vs. dimensionality reduction on the ORL database. Panels (a)-(d) plot error rate against the reduced dimensionality d for TSA, Laplacianfaces (PCA+LPP), Fisherfaces (PCA+LDA), Eigenfaces (PCA) and the baseline, for 2, 3, 4 and 5 training samples per individual.

Table 2: Performance comparison on the ORL database. For each method and each training-set size (2, 3, 4, 5 samples per person), the table reports the lowest error rate, the corresponding optimal dimensionality, and the running time.

Figure 2 shows the plots of error rate versus dimensionality reduction for the Eigenface, Fisherface, Laplacianface, TSA and baseline methods. Note that the presentation of the TSA results differs from the last subsection: here, for a given d, we show its performance in the (d x d)-dimensional tensor subspace. This allows a better comparison, since the Eigenface and Laplacianface methods start to converge after 70 dimensions and there is no need to show their performance beyond that.
The best result obtained in the optimal subspace and the running time (in milliseconds) of computing the eigenvectors for each method are shown in Table 2. As can be seen, our TSA algorithm performs best in all cases. The Fisherface and Laplacianface methods perform comparably to our method, while the Eigenface method performs poorly.
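The three-step recognition process used in both experiments (project each image into the learned tensor subspace via Y = U^T X V, then match by nearest neighbor) can be sketched as follows. The function name and interface are ours; the distance is the Frobenius norm of the difference of the projected tensors:

```python
import numpy as np

def nn_recognize(U, V, train_X, train_labels, test_X):
    """Project every image X into the tensor subspace via Y = U^T X V,
    then classify each test image by its nearest training neighbor
    under the (squared) Frobenius distance."""
    train_Y = np.stack([U.T @ X @ V for X in train_X])
    preds = []
    for X in test_X:
        Y = U.T @ X @ V
        d = np.square(train_Y - Y).sum(axis=(1, 2))  # squared Frobenius distances
        preds.append(train_labels[int(np.argmin(d))])
    return preds
```

With U and V learned as in Section 2.4, this is the entire test-time pipeline; only the small projected tensors need to be stored per gallery image.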

4 Conclusions and Future Work

Tensor based face analysis (representation and recognition) is introduced in this paper in order to detect the underlying nonlinear face manifold structure in the manner of tensor subspace learning. The manifold structure is approximated by the adjacency graph computed from the data points. The optimal tensor subspace respecting the graph structure is then obtained by solving an optimization problem. We call this the Tensor Subspace Analysis (TSA) method. Most traditional appearance based face recognition methods (i.e. Eigenface, Fisherface, and Laplacianface) consider an image as a vector in a high dimensional space. Such a representation ignores the spatial relationships between the pixels in the image. In our work, an image is naturally represented as a matrix, or a second order tensor. The tensor representation makes our algorithm much more computationally efficient than PCA, LDA, and LPP. Experimental results on the PIE and ORL databases demonstrate the efficiency and effectiveness of our method.

TSA is linear. Therefore, if the face manifold is highly nonlinear, TSA may fail to discover the intrinsic geometrical structure, and it remains unclear how to generalize our algorithm to the nonlinear case. Also, in our algorithm the adjacency graph is induced from the local geometry and class information, and different graph structures lead to different projections. It remains unclear how to define the optimal graph structure in the sense of discrimination.

References

[1] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, July 1997.
[2] M. Belkin and P. Niyogi, "Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering," Advances in Neural Information Processing Systems 14, 2001.
[3] Fan R. K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics, no. 92, 1997.
[4] X. He and P. Niyogi, "Locality Preserving Projections," Advances in Neural Information Processing Systems 16, Vancouver, Canada, December 2003.
[5] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, "Face Recognition using Laplacianfaces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 3, 2005.
[6] S. Roweis and L. K. Saul, "Nonlinear Dimensionality Reduction by Locally Linear Embedding," Science, vol. 290, December 2000.
[7] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A Global Geometric Framework for Nonlinear Dimensionality Reduction," Science, vol. 290, December 2000.
[8] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, 3(1):71-86, 1991.
[9] M. A. O. Vasilescu and D. Terzopoulos, "Multilinear Subspace Analysis for Image Ensembles," IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[10] K. Q. Weinberger and L. K. Saul, "Unsupervised Learning of Image Manifolds by Semidefinite Programming," IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, 2004.
[11] J. Yang, D. Zhang, A. Frangi, and J. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 1, 2004.
[12] J. Ye, R. Janardan, and Q. Li, "Two-Dimensional Linear Discriminant Analysis," Advances in Neural Information Processing Systems 17, 2004.


More information

CSE 252C: Computer Vision III

CSE 252C: Computer Vision III CSE 252C: Computer Vson III Lecturer: Serge Belonge Scrbe: Catherne Wah LECTURE 15 Kernel Machnes 15.1. Kernels We wll study two methods based on a specal knd of functon k(x, y) called a kernel: Kernel

More information

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.

For now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results. Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson

More information

1 Convex Optimization

1 Convex Optimization Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,

More information

CS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015

CS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015 CS 3710: Vsual Recognton Classfcaton and Detecton Adrana Kovashka Department of Computer Scence January 13, 2015 Plan for Today Vsual recognton bascs part 2: Classfcaton and detecton Adrana s research

More information

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity

LINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have

More information

Multilayer Perceptron (MLP)

Multilayer Perceptron (MLP) Multlayer Perceptron (MLP) Seungjn Cho Department of Computer Scence and Engneerng Pohang Unversty of Scence and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjn@postech.ac.kr 1 / 20 Outlne

More information

Multigradient for Neural Networks for Equalizers 1

Multigradient for Neural Networks for Equalizers 1 Multgradent for Neural Netorks for Equalzers 1 Chulhee ee, Jnook Go and Heeyoung Km Department of Electrcal and Electronc Engneerng Yonse Unversty 134 Shnchon-Dong, Seodaemun-Ku, Seoul 1-749, Korea ABSTRACT

More information

18.1 Introduction and Recap

18.1 Introduction and Recap CS787: Advanced Algorthms Scrbe: Pryananda Shenoy and Shjn Kong Lecturer: Shuch Chawla Topc: Streamng Algorthmscontnued) Date: 0/26/2007 We contnue talng about streamng algorthms n ths lecture, ncludng

More information

Unsupervised Feature Extraction Inspired by Latent Low-Rank Representation

Unsupervised Feature Extraction Inspired by Latent Low-Rank Representation Unsupervsed Feature Extracton Inspred by Latent Low-Rank Representaton Yamng Wang, Vlad I. Moraru, Larry S. Davs Unversty of Maryland, College Park, MD, 20742, USA {wym, moraru, lsd}@umacs.umd.edu Abstract

More information

Using T.O.M to Estimate Parameter of distributions that have not Single Exponential Family

Using T.O.M to Estimate Parameter of distributions that have not Single Exponential Family IOSR Journal of Mathematcs IOSR-JM) ISSN: 2278-5728. Volume 3, Issue 3 Sep-Oct. 202), PP 44-48 www.osrjournals.org Usng T.O.M to Estmate Parameter of dstrbutons that have not Sngle Exponental Famly Jubran

More information

Head Pose Estimation Using Spectral Regression Discriminant Analysis

Head Pose Estimation Using Spectral Regression Discriminant Analysis Head Pose Estmaton Usng Spectral Regresson Dscrmnant Analyss Cafeng Shan and We Chen Phlps Research Hgh Tech Campus 3, AE Endhoven, The Netherlands {cafeng.shan, w.chen}@phlps.com Abstract In ths paper,

More information

Lecture Notes on Linear Regression

Lecture Notes on Linear Regression Lecture Notes on Lnear Regresson Feng L fl@sdueducn Shandong Unversty, Chna Lnear Regresson Problem In regresson problem, we am at predct a contnuous target value gven an nput feature vector We assume

More information

Supporting Information

Supporting Information Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to

More information

Non-Negative Graph Embedding

Non-Negative Graph Embedding Non-Negatve Graph Embeddng Janchao Yang, Shucheng Yang 2, Yun Fu, Xuelong L 3, Thomas Huang ECE Department, Unversty of Illnos at Urbana-Champagn, USA 2 ECE Department, Natonal Unversty of Sngapore, Sngapore

More information

Lecture 17: Lee-Sidford Barrier

Lecture 17: Lee-Sidford Barrier CSE 599: Interplay between Convex Optmzaton and Geometry Wnter 2018 Lecturer: Yn Tat Lee Lecture 17: Lee-Sdford Barrer Dsclamer: Please tell me any mstake you notced. In ths lecture, we talk about the

More information

LECTURE 9 CANONICAL CORRELATION ANALYSIS

LECTURE 9 CANONICAL CORRELATION ANALYSIS LECURE 9 CANONICAL CORRELAION ANALYSIS Introducton he concept of canoncal correlaton arses when we want to quantfy the assocatons between two sets of varables. For example, suppose that the frst set of

More information

Probability Theory (revisited)

Probability Theory (revisited) Probablty Theory (revsted) Summary Probablty v.s. plausblty Random varables Smulaton of Random Experments Challenge The alarm of a shop rang. Soon afterwards, a man was seen runnng n the street, persecuted

More information

The Order Relation and Trace Inequalities for. Hermitian Operators

The Order Relation and Trace Inequalities for. Hermitian Operators Internatonal Mathematcal Forum, Vol 3, 08, no, 507-57 HIKARI Ltd, wwwm-hkarcom https://doorg/0988/mf088055 The Order Relaton and Trace Inequaltes for Hermtan Operators Y Huang School of Informaton Scence

More information

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD

Matrix Approximation via Sampling, Subspace Embedding. 1 Solving Linear Systems Using SVD Matrx Approxmaton va Samplng, Subspace Embeddng Lecturer: Anup Rao Scrbe: Rashth Sharma, Peng Zhang 0/01/016 1 Solvng Lnear Systems Usng SVD Two applcatons of SVD have been covered so far. Today we loo

More information

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)

Econ107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4) I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes

More information

Difference Equations

Difference Equations Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1

More information

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction

The Multiple Classical Linear Regression Model (CLRM): Specification and Assumptions. 1. Introduction ECONOMICS 5* -- NOTE (Summary) ECON 5* -- NOTE The Multple Classcal Lnear Regresson Model (CLRM): Specfcaton and Assumptons. Introducton CLRM stands for the Classcal Lnear Regresson Model. The CLRM s also

More information

MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN

MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN MULTISPECTRAL IMAGE CLASSIFICATION USING BACK-PROPAGATION NEURAL NETWORK IN PCA DOMAIN S. Chtwong, S. Wtthayapradt, S. Intajag, and F. Cheevasuvt Faculty of Engneerng, Kng Mongkut s Insttute of Technology

More information

INF 5860 Machine learning for image classification. Lecture 3 : Image classification and regression part II Anne Solberg January 31, 2018

INF 5860 Machine learning for image classification. Lecture 3 : Image classification and regression part II Anne Solberg January 31, 2018 INF 5860 Machne learnng for mage classfcaton Lecture 3 : Image classfcaton and regresson part II Anne Solberg January 3, 08 Today s topcs Multclass logstc regresson and softma Regularzaton Image classfcaton

More information

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique

Outline and Reading. Dynamic Programming. Dynamic Programming revealed. Computing Fibonacci. The General Dynamic Programming Technique Outlne and Readng Dynamc Programmng The General Technque ( 5.3.2) -1 Knapsac Problem ( 5.3.3) Matrx Chan-Product ( 5.3.1) Dynamc Programmng verson 1.4 1 Dynamc Programmng verson 1.4 2 Dynamc Programmng

More information

Module 9. Lecture 6. Duality in Assignment Problems

Module 9. Lecture 6. Duality in Assignment Problems Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept

More information

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 )

Yong Joon Ryang. 1. Introduction Consider the multicommodity transportation problem with convex quadratic cost function. 1 2 (x x0 ) T Q(x x 0 ) Kangweon-Kyungk Math. Jour. 4 1996), No. 1, pp. 7 16 AN ITERATIVE ROW-ACTION METHOD FOR MULTICOMMODITY TRANSPORTATION PROBLEMS Yong Joon Ryang Abstract. The optmzaton problems wth quadratc constrants often

More information

Numerical Heat and Mass Transfer

Numerical Heat and Mass Transfer Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and

More information

Global Sensitivity. Tuesday 20 th February, 2018

Global Sensitivity. Tuesday 20 th February, 2018 Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values

More information

Chapter 11: Simple Linear Regression and Correlation

Chapter 11: Simple Linear Regression and Correlation Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests

More information

A new Approach for Solving Linear Ordinary Differential Equations

A new Approach for Solving Linear Ordinary Differential Equations , ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of

More information

BACKGROUND SUBTRACTION WITH EIGEN BACKGROUND METHODS USING MATLAB

BACKGROUND SUBTRACTION WITH EIGEN BACKGROUND METHODS USING MATLAB BACKGROUND SUBTRACTION WITH EIGEN BACKGROUND METHODS USING MATLAB 1 Ilmyat Sar 2 Nola Marna 1 Pusat Stud Komputas Matematka, Unverstas Gunadarma e-mal: lmyat@staff.gunadarma.ac.d 2 Pusat Stud Komputas

More information

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:

More information

Linear Regression Analysis: Terminology and Notation

Linear Regression Analysis: Terminology and Notation ECON 35* -- Secton : Basc Concepts of Regresson Analyss (Page ) Lnear Regresson Analyss: Termnology and Notaton Consder the generc verson of the smple (two-varable) lnear regresson model. It s represented

More information

Natural Images, Gaussian Mixtures and Dead Leaves Supplementary Material

Natural Images, Gaussian Mixtures and Dead Leaves Supplementary Material Natural Images, Gaussan Mxtures and Dead Leaves Supplementary Materal Danel Zoran Interdscplnary Center for Neural Computaton Hebrew Unversty of Jerusalem Israel http://www.cs.huj.ac.l/ danez Yar Wess

More information

Inexact Newton Methods for Inverse Eigenvalue Problems

Inexact Newton Methods for Inverse Eigenvalue Problems Inexact Newton Methods for Inverse Egenvalue Problems Zheng-jan Ba Abstract In ths paper, we survey some of the latest development n usng nexact Newton-lke methods for solvng nverse egenvalue problems.

More information

Lecture 12: Discrete Laplacian

Lecture 12: Discrete Laplacian Lecture 12: Dscrete Laplacan Scrbe: Tanye Lu Our goal s to come up wth a dscrete verson of Laplacan operator for trangulated surfaces, so that we can use t n practce to solve related problems We are mostly

More information

Lecture 4: Constant Time SVD Approximation

Lecture 4: Constant Time SVD Approximation Spectral Algorthms and Representatons eb. 17, Mar. 3 and 8, 005 Lecture 4: Constant Tme SVD Approxmaton Lecturer: Santosh Vempala Scrbe: Jangzhuo Chen Ths topc conssts of three lectures 0/17, 03/03, 03/08),

More information

Problem Set 9 Solutions

Problem Set 9 Solutions Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem

More information

CSC 411 / CSC D11 / CSC C11

CSC 411 / CSC D11 / CSC C11 18 Boostng s a general strategy for learnng classfers by combnng smpler ones. The dea of boostng s to take a weak classfer that s, any classfer that wll do at least slghtly better than chance and use t

More information

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS

More information

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach

A Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland

More information

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography

CSci 6974 and ECSE 6966 Math. Tech. for Vision, Graphics and Robotics Lecture 21, April 17, 2006 Estimating A Plane Homography CSc 6974 and ECSE 6966 Math. Tech. for Vson, Graphcs and Robotcs Lecture 21, Aprl 17, 2006 Estmatng A Plane Homography Overvew We contnue wth a dscusson of the major ssues, usng estmaton of plane projectve

More information

Report on Image warping

Report on Image warping Report on Image warpng Xuan Ne, Dec. 20, 2004 Ths document summarzed the algorthms of our mage warpng soluton for further study, and there s a detaled descrpton about the mplementaton of these algorthms.

More information

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k)

Hongyi Miao, College of Science, Nanjing Forestry University, Nanjing ,China. (Received 20 June 2013, accepted 11 March 2014) I)ϕ (k) ISSN 1749-3889 (prnt), 1749-3897 (onlne) Internatonal Journal of Nonlnear Scence Vol.17(2014) No.2,pp.188-192 Modfed Block Jacob-Davdson Method for Solvng Large Sparse Egenproblems Hongy Mao, College of

More information

Grover s Algorithm + Quantum Zeno Effect + Vaidman

Grover s Algorithm + Quantum Zeno Effect + Vaidman Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the

More information

CS4495/6495 Introduction to Computer Vision. 3C-L3 Calibrating cameras

CS4495/6495 Introduction to Computer Vision. 3C-L3 Calibrating cameras CS4495/6495 Introducton to Computer Vson 3C-L3 Calbratng cameras Fnally (last tme): Camera parameters Projecton equaton the cumulatve effect of all parameters: M (3x4) f s x ' 1 0 0 0 c R 0 I T 3 3 3 x1

More information

Prof. Dr. I. Nasser Phys 630, T Aug-15 One_dimensional_Ising_Model

Prof. Dr. I. Nasser Phys 630, T Aug-15 One_dimensional_Ising_Model EXACT OE-DIMESIOAL ISIG MODEL The one-dmensonal Isng model conssts of a chan of spns, each spn nteractng only wth ts two nearest neghbors. The smple Isng problem n one dmenson can be solved drectly n several

More information

Chat eld, C. and A.J.Collins, Introduction to multivariate analysis. Chapman & Hall, 1980

Chat eld, C. and A.J.Collins, Introduction to multivariate analysis. Chapman & Hall, 1980 MT07: Multvarate Statstcal Methods Mke Tso: emal mke.tso@manchester.ac.uk Webpage for notes: http://www.maths.manchester.ac.uk/~mkt/new_teachng.htm. Introducton to multvarate data. Books Chat eld, C. and

More information

MIMA Group. Chapter 2 Bayesian Decision Theory. School of Computer Science and Technology, Shandong University. Xin-Shun SDU

MIMA Group. Chapter 2 Bayesian Decision Theory. School of Computer Science and Technology, Shandong University. Xin-Shun SDU Group M D L M Chapter Bayesan Decson heory Xn-Shun Xu @ SDU School of Computer Scence and echnology, Shandong Unversty Bayesan Decson heory Bayesan decson theory s a statstcal approach to data mnng/pattern

More information

Markov Chain Monte Carlo Lecture 6

Markov Chain Monte Carlo Lecture 6 where (x 1,..., x N ) X N, N s called the populaton sze, f(x) f (x) for at least one {1, 2,..., N}, and those dfferent from f(x) are called the tral dstrbutons n terms of mportance samplng. Dfferent ways

More information

Norms, Condition Numbers, Eigenvalues and Eigenvectors

Norms, Condition Numbers, Eigenvalues and Eigenvectors Norms, Condton Numbers, Egenvalues and Egenvectors 1 Norms A norm s a measure of the sze of a matrx or a vector For vectors the common norms are: N a 2 = ( x 2 1/2 the Eucldean Norm (1a b 1 = =1 N x (1b

More information

Notes on Frequency Estimation in Data Streams

Notes on Frequency Estimation in Data Streams Notes on Frequency Estmaton n Data Streams In (one of) the data streamng model(s), the data s a sequence of arrvals a 1, a 2,..., a m of the form a j = (, v) where s the dentty of the tem and belongs to

More information

Linear Classification, SVMs and Nearest Neighbors

Linear Classification, SVMs and Nearest Neighbors 1 CSE 473 Lecture 25 (Chapter 18) Lnear Classfcaton, SVMs and Nearest Neghbors CSE AI faculty + Chrs Bshop, Dan Klen, Stuart Russell, Andrew Moore Motvaton: Face Detecton How do we buld a classfer to dstngush

More information

Lecture 10 Support Vector Machines. Oct

Lecture 10 Support Vector Machines. Oct Lecture 10 Support Vector Machnes Oct - 20-2008 Lnear Separators Whch of the lnear separators s optmal? Concept of Margn Recall that n Perceptron, we learned that the convergence rate of the Perceptron

More information

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2

Salmon: Lectures on partial differential equations. Consider the general linear, second-order PDE in the form. ,x 2 Salmon: Lectures on partal dfferental equatons 5. Classfcaton of second-order equatons There are general methods for classfyng hgher-order partal dfferental equatons. One s very general (applyng even to

More information

Eigenvalues of Random Graphs

Eigenvalues of Random Graphs Spectral Graph Theory Lecture 2 Egenvalues of Random Graphs Danel A. Spelman November 4, 202 2. Introducton In ths lecture, we consder a random graph on n vertces n whch each edge s chosen to be n the

More information

Lecture 10: May 6, 2013

Lecture 10: May 6, 2013 TTIC/CMSC 31150 Mathematcal Toolkt Sprng 013 Madhur Tulsan Lecture 10: May 6, 013 Scrbe: Wenje Luo In today s lecture, we manly talked about random walk on graphs and ntroduce the concept of graph expander,

More information

C4B Machine Learning Answers II. = σ(z) (1 σ(z)) 1 1 e z. e z = σ(1 σ) (1 + e z )

C4B Machine Learning Answers II. = σ(z) (1 σ(z)) 1 1 e z. e z = σ(1 σ) (1 + e z ) C4B Machne Learnng Answers II.(a) Show that for the logstc sgmod functon dσ(z) dz = σ(z) ( σ(z)) A. Zsserman, Hlary Term 20 Start from the defnton of σ(z) Note that Then σ(z) = σ = dσ(z) dz = + e z e z

More information

2.3 Nilpotent endomorphisms

2.3 Nilpotent endomorphisms s a block dagonal matrx, wth A Mat dm U (C) In fact, we can assume that B = B 1 B k, wth B an ordered bass of U, and that A = [f U ] B, where f U : U U s the restrcton of f to U 40 23 Nlpotent endomorphsms

More information