Learning with Tensor Representation


Report No. UIUCDCS-R / UILU-ENG

Learning with Tensor Representation

by Deng Cai, Xiaofei He, and Jiawei Han

April 2006

Learning with Tensor Representation

Deng Cai, Xiaofei He, Jiawei Han
Department of Computer Science, University of Illinois at Urbana-Champaign
Yahoo! Research Labs

Abstract

Most of the existing learning algorithms take vectors as their input data. A function is then learned in such a vector space for classification, clustering, or dimensionality reduction. However, in some situations there is reason to consider data as tensors. For example, an image can be considered as a second-order tensor and a video can be considered as a third-order tensor. In this paper, we propose two novel algorithms called Support Tensor Machines (STM) and Tensor Least Square (TLS). These two algorithms operate in the tensor space. Specifically, we represent data as second-order tensors (or, matrices) in R^{n_1} ⊗ R^{n_2}, where R^{n_1} and R^{n_2} are two vector spaces. STM aims at finding a maximum-margin classifier in the tensor space, while TLS aims at finding a minimum residual sum-of-squares classifier. With tensor representation, the number of parameters estimated by STM (TLS) can be greatly reduced. Therefore, our algorithms are especially suitable for small-sample cases. We compare our proposed algorithms with SVM and the ordinary Least Square method on six databases. Experimental results show the effectiveness of our algorithms.

1 Introduction

The problem of data representation has been at the core of machine learning. Most of the traditional learning algorithms are based on the Vector Space Model. That is, the data are represented as vectors x ∈ R^n. The learning algorithms aim at finding a linear (or nonlinear) function f(x) = w^T x according to some pre-defined criteria, where w = (w_1, ..., w_n)^T are the parameters to estimate. However, in some situations there might be reason to consider data as tensors. For example, an image is essentially a second-order tensor, or matrix. It is reasonable to consider that pixels close to each other are correlated to some extent. Similarly, a video is essentially a third-order tensor with the third dimension being time. Also, two consecutive frames in a video are probably correlated.

The work was supported in part by the U.S. National Science Foundation grants NSF IIS / IIS. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies.

In supervised learning settings with many input features, overfitting is usually a potential problem unless there is ample training data. For example, it is well known that for unregularized discriminative models fit via training-error minimization, sample complexity (i.e., the number of training examples needed to learn well) grows linearly with the Vapnik-Chervonenkis (VC) dimension. Further, the VC dimension for most models grows about linearly in the number of parameters (Vapnik, 1982), which typically grows at least linearly in the number of input features. All these reasons lead us to consider new representations and corresponding learning algorithms with fewer parameters.

In this paper, we propose two novel supervised learning algorithms for data classification, called Support Tensor Machines (STM) and Tensor Least Square (TLS). Different from most previous classification algorithms, which take vectors in R^n as inputs, STM and TLS take second-order tensors in R^{n_1} ⊗ R^{n_2} as inputs, where n_1 × n_2 ≈ n. For example, a vector x ∈ R^n can be transformed by some means to a second-order tensor X ∈ R^{n_1} ⊗ R^{n_2}. A linear classifier in R^n can be represented as w^T x + b, in which there are n + 1 (≈ n_1 n_2 + 1) parameters (b, w_i, i = 1, ..., n). Similarly, a linear classifier in the tensor space R^{n_1} ⊗ R^{n_2} can be represented as u^T X v + b, where u ∈ R^{n_1} and v ∈ R^{n_2}. Thus, there are only n_1 + n_2 + 1 parameters. This property makes STM and TLS especially suitable for small-sample cases.

Recently there has been a lot of interest in tensor-based approaches to data analysis in high-dimensional spaces. Vasilescu and Terzopoulos (2003) have proposed a novel face representation algorithm called Tensorface. Tensorface represents the set of face images by a higher-order tensor and extends Singular Value Decomposition (SVD) to higher-order tensor data. Some other researchers have also shown how to extend Principal Component Analysis, Linear Discriminant Analysis, Locality Preserving Projection, and Non-negative Matrix Factorization to higher-order tensor data (Cai et al., 2005; He et al., 2005; Shashua & Hazan, 2005; Ye et al., 2004). Most of the previous tensor-based learning algorithms focus on dimensionality reduction. In this paper, we extend SVM and Least Square based ideas to tensor data for classification.

The rest of this paper is organized as follows. Section 2 describes the tensor space model for data representation. The Tensor Least Square (TLS) and Support Tensor Machines (STM) approaches for classification in tensor space are described in Section 3. Experimental results on six datasets from the UCI Repository are presented in Section 4. In Section 5, we provide a general algorithm for learning functions in tensor space. Finally, we provide some concluding remarks and suggestions for future work in Section 6.

2 Tensor-based Data Representation

Traditionally, a data sample is represented by a vector in a high-dimensional space. Some learning algorithms are then applied in such a vector space for classification. In this section, we introduce a new data representation model: the Tensor Space Model (TSM).

Figure 1: Vector to tensor conversion. 1-9 denote the positions in the vector and tensor formats. (a) and (b) are two possible tensors. The x in tensor (b) is a padding constant.

In the Tensor Space Model, a data sample is represented as a tensor. Each element in the tensor corresponds to a feature. For a data sample x ∈ R^n, we can convert it to a second-order tensor (or matrix) X ∈ R^{n_1 × n_2}, where n_1 × n_2 ≈ n. Figure 1 shows an example of converting a vector to a tensor.

There are two issues in converting a vector to a tensor. The first one is how to choose the size of the tensor, i.e., how to select n_1 and n_2. In Figure 1, we present two possible tensors for a 9-dimensional vector. Suppose n_1 ≤ n_2; in order to have at least n entries in the tensor while minimizing the size of the tensor, we require (n_1 − 1) n_2 < n ≤ n_1 n_2. With such a requirement, there are still many choices of n_1 and n_2, especially when n is large. Generally, all these (n_1, n_2) combinations can be used. However, it is worth noticing that the number of parameters of a linear function in the tensor space is n_1 + n_2. Therefore, one may try to minimize n_1 + n_2. In other words, n_1 and n_2 should be as close as possible.

The second issue is how to sort the features in the tensor. In the vector space model, we implicitly assume that the features are independent. A linear function in the vector space can be written as g(x) = w^T x + b. Clearly, changing the order of the features has no impact on the function learning. In the tensor space model, a linear function can be written as f(X) = u^T X v + b. Thus, the independence assumption on the features no longer holds for learning algorithms in the tensor space model. Different feature orderings will lead to different learning results in the tensor space model. A possible approach to sorting the features is to use the w learned by a vector classifier. Each w_i in w, where i = 1, ..., n, corresponds to a feature. Thus, the problem of sorting the features in the tensor can be converted to filling these n elements w_i into the n_1 × n_2 tensor. We divide the w_i into three groups: positive, negative, and zero. The corresponding features in the same group tend to be correlated. Suppose each group has l_k elements, where k = 1, 2, 3. For each group, we sort the l_k elements by their absolute value and get (|w^k_1| ≥ ... ≥ |w^k_{l_k}|). We take the first n_1 elements to form the first column of the tensor, the next n_1 elements to form the second column of the tensor, and so on. For each group, we will fill ⌊l_k / n_1⌋ columns. The remainders in each group are put together to fill the remaining entries in the tensor.

In this paper, we take n_1 ≈ n_2, which we call square tensors. The features are filled into the tensor entries in the way described above. Better ways of converting a data vector to a data tensor, with theoretical guarantees, are left for future work.

Note that in this paper our primary interest is focused on second-order tensors. However, the TSM presented here and the algorithms presented in the next section can also be applied to higher-order tensors.
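The following is a minimal sketch of the conversion just described, assuming NumPy; it is an illustration rather than the authors' code. It picks a near-square shape satisfying (n_1 − 1) n_2 < n ≤ n_1 n_2 and fills the entries column by column, padding unused entries with a constant (the feature ordering based on a learned w is omitted).

    import numpy as np

    def tensor_shape(n):
        """Choose (n1, n2) with (n1-1)*n2 < n <= n1*n2 and n1 + n2 as small as possible."""
        best = None
        for n1 in range(1, int(np.ceil(np.sqrt(n))) + 1):
            n2 = int(np.ceil(n / n1))          # smallest n2 with n1*n2 >= n
            if (n1 - 1) * n2 < n <= n1 * n2:
                if best is None or n1 + n2 < sum(best):
                    best = (n1, n2)
        return best

    def vector_to_tensor(x, pad_value=0.0):
        """Fill the vector into an n1 x n2 matrix column by column, padding the tail."""
        x = np.asarray(x, dtype=float)
        n1, n2 = tensor_shape(x.size)
        buf = np.full(n1 * n2, pad_value)
        buf[:x.size] = x                       # feature sorting by a learned w is omitted here
        return buf.reshape((n2, n1)).T         # column-by-column fill

    x = np.arange(1, 10)                       # the 9-dimensional example of Figure 1
    print(vector_to_tensor(x))                 # a 3 x 3 tensor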

3 Classification with Tensor Representation

We are given a set of training samples {X_i, y_i}, i = 1, ..., m, where X_i is a data point in the order-2 tensor space, X_i ∈ R^{n_1} ⊗ R^{n_2}, and y_i ∈ {−1, 1} is the label associated with X_i. Let (u_1, ..., u_{n_1}) be a set of orthonormal basis vectors of R^{n_1} and (v_1, ..., v_{n_2}) be a set of orthonormal basis vectors of R^{n_2}. Thus, X can be uniquely written as

    X = Σ_i Σ_j (u_i^T X v_j) u_i v_j^T.

A linear classifier in the tensor space can be written as f(X) = u^T X v + b, where u ∈ R^{n_1} and v ∈ R^{n_2} are two vectors. The problem of linear classification in the tensor space model is to find u and v based on a specific objective function. In the following two subsections, we introduce two novel classifiers in the tensor space model based on different objective functions: the Tensor Least Square classifier and Support Tensor Machines. One is the tensor generalization of the least square classifier and the other is the tensor generalization of support vector machines.

3.1 Tensor Least Square Analysis

The least square classifier might be one of the most well known classifiers (Duda et al., 2000; Hastie et al., 2001). In this subsection, we extend the least square idea to the tensor space model and develop a Tensor Least Square classifier (TLS). The objective function in tensor least square analysis is as follows (note that we need to add a constant column (or row) to each data tensor to fit the intercept):

    min_{u,v} Σ_{i=1}^{m} (u^T X_i v − y_i)^2.    (1)

By simple algebra, we see that

    Σ_i (u^T X_i v − y_i)^2
      = Σ_i (u^T X_i v v^T X_i^T u − 2 y_i u^T X_i v + y_i^2)
      = u^T (Σ_i X_i v v^T X_i^T) u − u^T (2 Σ_i y_i X_i) v + Σ_i y_i^2.    (2)

Similarly, we also have

    Σ_i (u^T X_i v − y_i)^2
      = Σ_i (v^T X_i^T u u^T X_i v − 2 y_i v^T X_i^T u + y_i^2)
      = v^T (Σ_i X_i^T u u^T X_i) v − v^T (2 Σ_i y_i X_i^T) u + Σ_i y_i^2.    (3)

Requiring the derivative of Eqn. (2) with respect to u to be zero, we get

    2 (Σ_i X_i v v^T X_i^T) u = 2 (Σ_i y_i X_i) v.

Thus, we get

    u = (Σ_i X_i v v^T X_i^T)^{-1} (Σ_i y_i X_i) v.    (4)

Similarly, we can get

    v = (Σ_i X_i^T u u^T X_i)^{-1} (Σ_i y_i X_i^T) u.    (5)

Notice that u and v depend on each other and cannot be solved independently. We can use an iterative approach to compute the optimal u and v: we first fix u and compute v by Equation (5); then we fix v and compute u by Equation (4). The convergence proof of such an iterative approach is given in Section 3.3.
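The following is a minimal sketch of the alternating updates (4) and (5), assuming NumPy; it is not the authors' implementation. For numerical stability it uses a least-squares solve (whose normal equations are exactly (4) and (5)) rather than an explicit matrix inverse, and the intercept column suggested by the footnote above is omitted for brevity.

    import numpy as np

    def tls_fit(X, y, n_iter=20):
        """Alternating least squares for min_{u,v} sum_i (u^T X_i v - y_i)^2.

        X: array of shape (m, n1, n2); y: array of shape (m,) with labels in {-1, +1}.
        """
        m, n1, n2 = X.shape
        u = np.ones(n1)
        for _ in range(n_iter):
            # Fix u, update v via Eq. (5): least-squares fit of x_i = X_i^T u to y
            xs = np.einsum('ijk,j->ik', X, u)           # shape (m, n2)
            v = np.linalg.lstsq(xs, y, rcond=None)[0]
            # Fix v, update u via Eq. (4): least-squares fit of x_i = X_i v to y
            xs = np.einsum('ijk,k->ij', X, v)           # shape (m, n1)
            u = np.linalg.lstsq(xs, y, rcond=None)[0]
        return u, v

    def tls_predict(X, u, v):
        """Sign of the bilinear score u^T X_i v."""
        return np.sign(np.einsum('ijk,j,k->i', X, u, v))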

3.2 Support Tensor Machines

Support Vector Machines are a family of pattern classification algorithms developed by Vapnik (1995) and collaborators. SVM training algorithms are based on the idea of structural risk minimization rather than empirical risk minimization, and give rise to new ways of training polynomial, neural network, and radial basis function (RBF) classifiers. SVM has proven to be effective for many classification tasks (Joachims, 1998; Ronfard et al., 2002). Specifically, SVM tries to find a decision surface that maximizes the margin between the data points in a training set. The objective function of linear SVM can be stated as:

    min_{w,b,ξ} (1/2) w^T w + C Σ_i ξ_i
    subject to y_i (w^T x_i + b) ≥ 1 − ξ_i,    (6)
               ξ_i ≥ 0, i = 1, ..., m.

Our Support Tensor Machine (STM) is fundamentally based on the same idea. As we discussed before, a linear classifier in the tensor space can be naturally represented as follows:

    f(X) = u^T X v + b,  u ∈ R^{n_1}, v ∈ R^{n_2}.    (7)

Equation (7) can be rewritten through the matrix inner product as follows:

    f(X) = ⟨X, u v^T⟩ + b,  u ∈ R^{n_1}, v ∈ R^{n_2}.    (8)

Thus, the large-margin optimization problem in the tensor space reduces to the following:

    min_{u,v,b,ξ} (1/2) ||u v^T||^2 + C Σ_i ξ_i
    subject to y_i (u^T X_i v + b) ≥ 1 − ξ_i,    (9)
               ξ_i ≥ 0, i = 1, ..., m.

We will now switch to a Lagrangian formulation of the problem. We introduce positive Lagrange multipliers α_i, μ_i, i = 1, ..., m, one for each of the inequality constraints in (9). This gives the Lagrangian:

    L_P = (1/2) ||u v^T||^2 + C Σ_i ξ_i − Σ_i α_i [y_i (u^T X_i v + b) − 1 + ξ_i] − Σ_i μ_i ξ_i.

Note that

    (1/2) ||u v^T||^2 = (1/2) trace(u v^T v u^T) = (1/2) (v^T v) trace(u u^T) = (1/2) (v^T v)(u^T u).
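A quick numerical check of this identity (a sketch, assuming NumPy) confirms that the squared Frobenius norm of the rank-one matrix u v^T factorizes into the two vector norms:

    import numpy as np

    rng = np.random.default_rng(0)
    u, v = rng.standard_normal(5), rng.standard_normal(3)

    lhs = np.linalg.norm(np.outer(u, v), 'fro') ** 2   # ||u v^T||_F^2
    rhs = (v @ v) * (u @ u)                            # (v^T v)(u^T u)
    assert np.isclose(lhs, rhs)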

Thus, we have

    L_P = (1/2) (v^T v)(u^T u) + C Σ_i ξ_i − Σ_i α_i [y_i (u^T X_i v + b) − 1 + ξ_i] − Σ_i μ_i ξ_i.

Requiring that the gradient of L_P with respect to u, v, b, and ξ vanish gives the conditions:

    u = (Σ_i α_i y_i X_i v) / (v^T v),    (10)
    v = (Σ_i α_i y_i X_i^T u) / (u^T u),    (11)
    Σ_i α_i y_i = 0,    (12)
    C − α_i − μ_i = 0,  i = 1, ..., m.    (13)

From Equations (10) and (11), we see that u and v depend on each other and cannot be solved independently. In the following, we describe a simple yet effective computational method to solve this optimization problem.

We first fix u. Let β_1 = ||u||^2 and x_i = X_i^T u. The optimization problem (9) can then be rewritten as follows:

    min_{v,b,ξ} (1/2) β_1 ||v||^2 + C Σ_i ξ_i
    subject to y_i (v^T x_i + b) ≥ 1 − ξ_i,    (14)
               ξ_i ≥ 0, i = 1, ..., m.

It is clear that the new optimization problem (14) is identical to the standard SVM optimization problem. Thus, we can use the same computational methods as for SVM to solve (14), such as (Graf et al., 2004; Platt, 1998; Platt, 1999). Once v is obtained, let β_2 = ||v||^2 and x̃_i = X_i v. Then u can be obtained by solving the following optimization problem:

    min_{u,b,ξ} (1/2) β_2 ||u||^2 + C Σ_i ξ_i
    subject to y_i (u^T x̃_i + b) ≥ 1 − ξ_i,    (15)
               ξ_i ≥ 0, i = 1, ..., m.

Again, we can use the standard SVM computational methods to solve this optimization problem. Thus, v and u can be obtained by iteratively solving the optimization problems (14) and (15). In our experiments, u is initially set to the vector of all ones.
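Below is a minimal sketch of this alternating scheme, not the authors' code, assuming scikit-learn's linear SVM as the inner solver (the paper itself used LIBSVM). The scaling β in (14) and (15) is absorbed into the standard SVM by dividing the objective through by β, i.e., by passing an effective penalty C/β to the inner solver.

    import numpy as np
    from sklearn.svm import SVC

    def stm_fit(X, y, C=1.0, n_iter=10):
        """Alternating SVM solves for the STM problem (9).

        X: array of shape (m, n1, n2); y: labels in {-1, +1}.
        """
        m, n1, n2 = X.shape
        u, b = np.ones(n1), 0.0
        for _ in range(n_iter):
            # Fix u: problem (14) is a standard SVM on x_i = X_i^T u with penalty C / ||u||^2
            xs = np.einsum('ijk,j->ik', X, u)
            svm = SVC(kernel='linear', C=C / (u @ u)).fit(xs, y)
            v, b = svm.coef_.ravel(), float(svm.intercept_[0])
            # Fix v: problem (15) is a standard SVM on x_i = X_i v with penalty C / ||v||^2
            xs = np.einsum('ijk,k->ij', X, v)
            svm = SVC(kernel='linear', C=C / (v @ v)).fit(xs, y)
            u, b = svm.coef_.ravel(), float(svm.intercept_[0])
        return u, v, b

    def stm_predict(X, u, v, b):
        return np.sign(np.einsum('ijk,j,k->i', X, u, v) + b)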

3.3 Convergence Proof

In this section, we provide a convergence proof of the iterative computational method used in the STM and TLS algorithms. We have the following theorem:

Theorem 1. The iterative procedure that solves the optimization problems (14) and (15) monotonically decreases the objective function value in (9), and hence the STM algorithm converges.

Proof. Define

    f(u, v) = (1/2) ||u v^T||^2 + C Σ_i ξ_i.

Let u_0 be the initial value. Fixing u_0, we get v_0 by solving the optimization problem (14). Likewise, fixing v_0, we get u_1 by solving the optimization problem (15). Notice that the optimization problem of SVM is convex, so the solution of SVM is globally optimal (Burges, 1998; Fletcher, 1987). Specifically, the solutions of (14) and (15) are globally optimal. Thus, we have

    f(u_0, v_0) ≥ f(u_1, v_0).

Finally, we get

    f(u_0, v_0) ≥ f(u_1, v_0) ≥ f(u_1, v_1) ≥ f(u_2, v_1) ≥ ⋯.

Since f is bounded from below by 0, the procedure converges.

Similarly, for the Tensor Least Square algorithm, since the solution of each least square problem is globally optimal, the iterative procedure over (4) and (5) monotonically decreases the objective function value in (1), and hence the TLS algorithm converges.

3.4 From Matrix to High-Order Tensor

The TLS and STM algorithms described above take order-2 tensors, i.e., matrices, as input data. However, these algorithms can also be extended to high-order tensors. In this section, we briefly describe the extension of these algorithms to high-order (> 2) tensors. We take STM as an example. Let (T_i, y_i), i = 1, ..., m denote the training samples, where T_i ∈ R^{n_1} ⊗ ⋯ ⊗ R^{n_k}. The decision function of STM is

    f(T) = T(a_1, a_2, ..., a_k) + b,  a_1 ∈ R^{n_1}, a_2 ∈ R^{n_2}, ..., a_k ∈ R^{n_k},

where

    T(a_1, a_2, ..., a_k) = Σ_{i_1, ..., i_k} T_{i_1, ..., i_k} a_{1, i_1} ⋯ a_{k, i_k},  1 ≤ i_1 ≤ n_1, ..., 1 ≤ i_k ≤ n_k.
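As an illustration (a sketch assuming NumPy, not code from the report), the multilinear form T(a_1, ..., a_k) can be evaluated by contracting the tensor with one vector per mode:

    import numpy as np

    def multilinear_form(T, vectors):
        """Evaluate T(a_1, ..., a_k) = sum over all indices of T[i1,...,ik] * a_1[i1] * ... * a_k[ik]."""
        out = np.asarray(T, dtype=float)
        for a in vectors:
            out = np.tensordot(out, a, axes=([0], [0]))  # contract the current first mode with a
        return float(out)

    def stm_decision(T, vectors, b):
        """High-order STM decision function f(T) = T(a_1, ..., a_k) + b."""
        return multilinear_form(T, vectors) + b

    # Example: an order-3 tensor contracted with three all-ones vectors gives the sum of its entries.
    T = np.arange(24, dtype=float).reshape(2, 3, 4)
    a1, a2, a3 = np.ones(2), np.ones(3), np.ones(4)
    print(stm_decision(T, [a1, a2, a3], b=0.0))  # 276.0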

As before, a_1, ..., a_k can also be computed iteratively. We first introduce the l-mode product of a tensor T and a vector a, which we denote as T ×_l a. The result of the l-mode product of a tensor T ∈ R^{n_1} ⊗ ⋯ ⊗ R^{n_k} and a vector a ∈ R^{n_l}, 1 ≤ l ≤ k, is a new tensor B ∈ R^{n_1} ⊗ ⋯ ⊗ R^{n_{l−1}} ⊗ R^{n_{l+1}} ⊗ ⋯ ⊗ R^{n_k}, where

    B_{i_1, ..., i_{l−1}, i_{l+1}, ..., i_k} = Σ_{i_l=1}^{n_l} T_{i_1, ..., i_{l−1}, i_l, i_{l+1}, ..., i_k} a_{i_l}.

Thus, the decision function in the higher-order tensor space can also be written as

    f(T) = T ×_1 a_1 ×_2 a_2 ⋯ ×_k a_k + b.

The optimization problem of STM for high-order tensors is

    min_{a_1, ..., a_k, b, ξ} (1/2) ||a_1 ⊗ ⋯ ⊗ a_k||^2 + C Σ_i ξ_i
    subject to y_i (T_i(a_1, a_2, ..., a_k) + b) ≥ 1 − ξ_i,    (16)
               ξ_i ≥ 0, i = 1, ..., m.

Here ||a_1 ⊗ ⋯ ⊗ a_k|| denotes the tensor norm of a_1 ⊗ ⋯ ⊗ a_k (Lee, 2002). First, to compute a_1, we fix a_2, ..., a_k. Let β_2 = ||a_2||^2, ..., β_k = ||a_k||^2. We then define t_i = T_i ×_2 a_2 ⋯ ×_k a_k. Thus, the optimization problem (16) reduces to the following:

    min_{a_1, b, ξ} (1/2) β_2 ⋯ β_k ||a_1||^2 + C Σ_i ξ_i
    subject to y_i (a_1^T t_i + b) ≥ 1 − ξ_i,    (17)
               ξ_i ≥ 0, i = 1, ..., m.

Again, we can use the standard SVM computational methods to solve this optimization problem. Once a_1 is computed, we can fix a_1, a_3, ..., a_k to compute a_2. In this way, all the a_l can be computed in an iterative manner.

4 Experiments

To evaluate the performance of tensor-based classifiers, we performed experiments on six datasets from the UCI Repository and report results for all of them. These six datasets include BREAST-CANCER, DIABETES, IONOSPHERE, MUSHROOMS, SONAR, as well as the ADULT data in a representation produced by Platt (1999). All of these datasets are binary classification problems and the features were scaled to [−1, 1]. The preprocessed datasets can also be downloaded from the LIBSVM dataset web page.

Table 1: Classification accuracies on the various data sets. n is the number of features and m is the total number of examples. For each of the data sets Adult, Breast-cancer, Diabetes, Ionosphere, Mushrooms, and Sonar, the table reports n, m, the training percentage, the accuracies of TLS, LS, STM, and SVM, and the significance comparisons TLS vs. LS and STM vs. SVM (numeric entries omitted).
≪ or ≫ means P-value ≤ 0.01 in the paired t-test on the 50 random splits; > or < means 0.01 < P-value ≤ 0.05; ∼ means P-value > 0.05.

All the datasets were randomly split into training and testing sets. Notice that the training sets are quite small, since we are particularly interested in the performance in small-training-size cases. We averaged the results over 50 random splits and report the average performance.

4.1 Experimental Results on TLS and LS

We used the regress function in the statistics toolbox of Matlab 7.04 to solve the least square problems in both the Tensor Least Square classifier (TLS) and the Least Square classifier (LS). Table 1 summarizes the results. Besides the average accuracy, a paired t-test on the 50 random splits is also reported, which indicates whether the difference between the two systems is significant. We can see that on 3 out of 6 datasets, TLS is better than LS. On the remaining 3 datasets, both algorithms are comparable.

To get a more detailed picture of the performance of TLS with respect to the training set size, we tested TLS and LS over various training sizes (from 1% to 10%) on the BREAST-CANCER dataset. The results are shown in Figure 2. As can be seen, when the training set is small (1%, 2%, and 3%), TLS outperforms LS. As the number of training samples increases, TLS tends to give the same results as LS.
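The following is a minimal sketch of the evaluation protocol described at the beginning of this section (random splits, averaged accuracy, paired t-test), not the authors' scripts; it assumes scikit-learn and SciPy, with generic placeholder classifiers clf_a and clf_b. The tensor methods would additionally apply the vector-to-tensor conversion to each split.

    import numpy as np
    from scipy.stats import ttest_rel
    from sklearn.model_selection import train_test_split

    def compare(clf_a, clf_b, X, y, train_fraction=0.05, n_splits=50, seed=0):
        """Average accuracy over random splits and a paired t-test between two classifiers."""
        acc_a, acc_b = [], []
        for s in range(n_splits):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, train_size=train_fraction, random_state=seed + s, stratify=y)
            acc_a.append(clf_a.fit(X_tr, y_tr).score(X_te, y_te))
            acc_b.append(clf_b.fit(X_tr, y_tr).score(X_te, y_te))
        t_stat, p_value = ttest_rel(acc_a, acc_b)
        return np.mean(acc_a), np.mean(acc_b), p_value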

Figure 2: Classification accuracy with respect to the training sample size of TLS and LS on the BREAST-CANCER dataset (accuracy (%) vs. training sample ratio (%)). As can be seen, when the training set is small (1%, 2%, and 3%), TLS outperforms LS. As the number of training samples increases, TLS tends to give the same results as LS.

4.2 Experimental Results on STM and SVM

In this experiment, we compared the classification performance of Support Tensor Machines (STM) and traditional Support Vector Machines (SVM). We used the LIBSVM system (Chang & Lin, 2001) and tested it with the linear model. For both STM and SVM, there is a parameter C that needs to be set. We use cross-validation on the training set to find the best parameter C. Table 1 summarizes the results. We can see that on 4 out of 6 datasets, STM is better than SVM. On the remaining 2 datasets, both algorithms are comparable.

We also tested STM and SVM over various training sizes (from 1% to 10%) on the BREAST-CANCER dataset. The results are shown in Figure 3. As can be seen, when the training set is small (1%, 2%, and 3%), STM outperforms SVM. As the number of training samples increases, STM tends to give the same results as SVM. In all the cases, the large-margin classifiers (STM and SVM) are better than the least square classifiers (TLS and LS).

Both of these experiments suggest that classifiers based on tensor representation are particularly suitable for small-sample problems. This might be due to the fact that the number of parameters that need to be estimated in tensor classifiers is n_1 + n_2, which can be much smaller than the n_1 × n_2 of vector classifiers.

5 The General Algorithm for Learning Functions in Tensor Space

In the above sections, we have developed an iterative algorithm for solving the optimization problems of STM and TLS. It can be noticed that these two algorithms share a similar iterative procedure. Actually, such an iterative algorithm can be generalized to solve a broad family of tensor versions of traditional vector-based learning algorithms. Suppose the linear classifier in vector space is g(x) = w^T x + b and its objective function is

    min V(w, x_i, y_i, i = 1, ..., m).    (18)

Figure 3: Classification accuracy with respect to the training sample size of STM and SVM on the BREAST-CANCER dataset (accuracy (%) vs. training sample ratio (%)). As can be seen, when the training set is small (1%, 2%, and 3%), STM outperforms SVM. As the number of training samples increases, STM tends to give the same results as SVM.

The corresponding linear classifier in tensor space is f(X) = u^T X v + b with objective function

    min T(u, v, X_i, y_i, i = 1, ..., m).    (19)

The algorithmic procedure for solving the optimization problem (19) is formally stated below; a code-level sketch of the same scheme follows the list.

1. Initialization: Let u = (1, ..., 1)^T.

2. Computing v: Let x̃_i = X_i^T u. The tensor classifier can be rewritten as

       f(X) = u^T X v + b = x̃^T v + b = g_1(x̃).

   Thus, the optimization problem (19) can be rewritten as an optimization problem in vector space:

       min V(v, x̃_i, y_i, i = 1, ..., m).    (20)

   Note: Any computational method for solving (18) can also be used here.

3. Computing u: Once v is obtained, let x̂_i = X_i v. Similarly, the tensor classifier can be rewritten as

       f(X) = u^T X v + b = u^T x̂ + b = g_2(x̂).

   Thus, u can be computed by solving the same vector space optimization problem:

       min V(u, x̂_i, y_i, i = 1, ..., m).    (21)

   Note: As above, any computational method for solving (18) can also be used to solve (21).

4. Iteratively computing u and v: By steps 2 and 3, we can iteratively compute u and v until they converge.

   Note: As long as the solution of the optimization problem (18) is globally optimal, the iterative procedure described here will converge. The convergence proof is similar to the proof given in Section 3.3.
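The sketch below, assuming NumPy and written around a generic vector_solver placeholder (any routine that fits a linear function (w, b) on vector data, e.g., least squares or a linear SVM), illustrates the generic alternating scheme above; it is an illustration, not code from the report.

    import numpy as np

    def tensor_learn(X, y, vector_solver, n_iter=10):
        """Generic alternating scheme for learning f(X) = u^T X v + b in tensor space.

        X: array (m, n1, n2); y: labels; vector_solver(features, y) -> (w, b) is any
        vector-space learner (placeholder).
        """
        m, n1, n2 = X.shape
        u, b = np.ones(n1), 0.0
        for _ in range(n_iter):
            v, b = vector_solver(np.einsum('ijk,j->ik', X, u), y)  # step 2: solve for v with u fixed
            u, b = vector_solver(np.einsum('ijk,k->ij', X, v), y)  # step 3: solve for u with v fixed
        return u, v, b

    # Example plug-in solver (ordinary least squares with an intercept column), a stand-in for any learner.
    def least_squares_solver(F, y):
        F1 = np.hstack([F, np.ones((F.shape[0], 1))])
        w = np.linalg.lstsq(F1, y, rcond=None)[0]
        return w[:-1], float(w[-1])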

6 Conclusions

In this paper we have introduced a tensor framework for data representation and classification. In particular, we have proposed two new classification algorithms called Support Tensor Machines (STM) and Tensor Least Square (TLS) for learning a linear classifier in tensor space. Our experimental results on 6 databases from the UCI Repository demonstrate that tensor-based classifiers are especially suitable for small-sample cases. This is due to the fact that the number of parameters estimated by a tensor classifier is much smaller than that estimated by a traditional vector classifier.

There are several interesting problems that we are going to explore in future work:

1. In this paper, we construct the tensor empirically. Better ways of converting a data vector to a data tensor, with theoretical guarantees, need to be studied.

2. Both STM and TLS are linear methods. Thus, they fail to discover the nonlinear structure of the data space. It remains unclear how to generalize our algorithms to the nonlinear case. A possible way of nonlinear generalization is to use kernel techniques.

3. In this paper, we use an iterative computational method for solving the optimization problems of STM and TLS. We expect that more efficient computational methods exist.

References

Burges, C. J. C. (1998). A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2.

Cai, D., He, X., & Han, J. (2005). Subspace learning based on tensor analysis (Technical Report UIUCDCS-R-2005-2572). Computer Science Department, UIUC.

Chang, C.-C., & Lin, C.-J. (2001). LIBSVM: a library for support vector machines. Software available online.

Duda, R. O., Hart, P. E., & Stork, D. G. (2000). Pattern classification (2nd ed.). Hoboken, NJ: Wiley-Interscience.

Fletcher, R. (1987). Practical methods of optimization (2nd ed.). John Wiley and Sons.

Graf, H. P., Cosatto, E., Bottou, L., Dourdanovic, I., & Vapnik, V. (2004). Parallel support vector machines: The cascade SVM. Advances in Neural Information Processing Systems 17.

Hastie, T., Tibshirani, R., & Friedman, J. H. (2001). The elements of statistical learning. Springer-Verlag.

He, X., Cai, D., & Niyogi, P. (2005). Tensor subspace analysis. Advances in Neural Information Processing Systems 18.

Joachims, T. (1998). Text categorization with support vector machines: Learning with many relevant features. ECML '98.

Lee, J. M. (2002). Introduction to smooth manifolds. Springer-Verlag New York.

Platt, J. C. (1998). Using sparseness and analytic QP to speed training of support vector machines. Advances in Neural Information Processing Systems.

Platt, J. C. (1999). Fast training of support vector machines using sequential minimal optimization. Cambridge, MA, USA: MIT Press.

Ronfard, R., Schmid, C., & Triggs, B. (2002). Learning to parse pictures of people. ECCV '02.

Shashua, A., & Hazan, T. (2005). Non-negative tensor factorization with applications to statistics and computer vision. Proc. Int. Conf. Machine Learning (ICML '05).

Vapnik, V. N. (1982). Estimation of dependences based on empirical data. Springer-Verlag.

Vapnik, V. N. (1995). The nature of statistical learning theory. Springer-Verlag.

Vasilescu, M. A. O., & Terzopoulos, D. (2003). Multilinear subspace analysis for image ensembles. IEEE Conference on Computer Vision and Pattern Recognition.

Ye, J., Janardan, R., & Li, Q. (2004). Two-dimensional linear discriminant analysis. Advances in Neural Information Processing Systems 17.
