Feature Selection in Multi-instance Learning
The Ninth International Symposium on Operations Research and Its Applications (ISORA'10), Chengdu-Jiuzhaigou, China, August 19-23, 2010. Copyright 2010 ORSC & APORC, pp.

Feature Selection in Multi-instance Learning

Chun-Hua Zhang 1, Jun-Yan Tan 2, Nai-Yang Deng 2

1 Information School, Renmin University of China, Beijing, China; 2 College of Science, China Agricultural University, Beijing, China

Abstract This paper focuses on feature selection in multi-instance learning. A new version of the support vector machine, named p-MISVM, is proposed. In the p-MISVM model, the problem to be solved is non-differentiable and non-convex. By using the constrained concave-convex procedure (CCCP), a linearization algorithm is presented that solves a succession of fast linear programs and converges to a local optimal solution. Furthermore, lower bounds on the absolute values of the nonzero components of every local optimal solution are established, which can eliminate zero components in any numerical solution. Numerical experiments show that the p-MISVM is effective in selecting relevant features, compared with the popular MICA.

Keywords Support vector machine; feature selection; p-norm; multi-instance learning

1 Introduction

Feature selection is very important in many applications of data mining. By restricting the input space to a small subset of input variables, it has obvious benefits in terms of data storage, computational requirements, and the cost of future data collection. This paper focuses on feature selection in multi-instance learning via a new version of the support vector machine (SVM).

Multi-instance learning (MIL) is a growing field of research in data mining. In the MIL problem, the training set is composed of many bags, each of which contains many instances. A bag is positively labeled if it contains at least one positive instance; otherwise it is labeled as a negative bag. The task is to find a decision function from the training set for correctly labeling unseen bags. The MIL problem was first introduced by Dietterich et al. [1] in drug activity prediction.
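The bag-labeling rule just described (a bag is positive iff at least one of its instances is positive) can be sketched directly; the linear instance scorer and the toy bags below are hypothetical illustrations, not the classifier derived later in the paper:

```python
def bag_label(bag, score):
    """A bag is positive iff its best-scoring instance scores positive,
    so the bag-level label is the sign of the maximum instance score."""
    return 1 if max(score(x) for x in bag) > 0 else -1

# Hypothetical linear instance scorer g(x) = w.x + b on R^2.
w, b = (1.0, -1.0), -0.5

def score(x):
    return w[0] * x[0] + w[1] * x[1] + b

# One instance of the first bag scores 1.5 > 0, so the bag is positive;
# every instance of the second bag scores below 0, so it is negative.
positive_bag = [(0.0, 0.0), (2.0, 0.0)]
negative_bag = [(0.0, 1.0), (0.0, 2.0)]
```

Note that the max over instances is exactly what reappears later in the bag constraints of the p-MISVM optimization problem.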
So far, MIL has been applied to many fields such as image retrieval ([2]), face detection ([3]), scene classification, text categorization, etc., and is often found to be superior to conventional supervised learning approaches. Maron and Lozano-Perez [4] proposed a framework called the Diverse Density algorithm. Since then, various variants of standard single-instance learning algorithms, such as Boosting ([3], [5]), SVM ([2], [6]), Logistic Regression ([7]), nearest neighbor ([8]), etc., have been modified to adapt to the MIL problem.

This work is supported by the Key Project of the National Natural Science Foundation of China (No. ) and the National Natural Science Foundation of China (No. ).
Corresponding author. E-mail: tanjunyan0@126.com
Corresponding author. E-mail: dengnaiyang@cau.edu.cn
Based on the standard SVM, some methods including MI-SVM and mi-SVM [9] have been proposed for the MIL problem. There are few works on feature selection in MIL. In [10], the MICA algorithm is introduced, which employs the 1-norm rather than the 2-norm used in MI-SVM and mi-SVM. Because the 1-norm SVM formulation is known to lead to sparse solutions ([11], [12]), MICA can select few features when a linear classifier is used. Recently, an effective method named the p-norm SVM (0 < p < 1) was proposed for feature selection in standard classification problems in [13], which motivates us to apply it to the MIL problem.

This paper proposes the p-norm multi-instance SVM (p-MISVM), which replaces the 2-norm penalty by the p-norm (0 < p < 1) penalty in the objective function of the primal problem of MI-SVM. The p-MISVM conducts feature selection and classification simultaneously. However, there are two difficulties in solving the p-MISVM model: (i) it is impossible to solve the primal problem via its dual problem, and the primal problem itself is hard to solve, because it is neither differentiable nor convex; (ii) feature selection needs to find the nonzero components of the solution to the primal problem, yet algorithms usually provide only an approximate solution whose nonzero components cannot be identified theoretically. Firstly, for difficulty (i), by using the constrained concave-convex procedure (CCCP) ([14], [15]), a linearization algorithm is presented that solves a succession of fast linear programs and converges to a local optimal solution of the primal problem. Furthermore, for difficulty (ii), lower bounds on the absolute values of the nonzero entries of every local optimal solution are established, which can eliminate zero entries in any numerical solution. Lastly, the performance of p-MISVM is illustrated on simulated datasets.

Now we describe our notation. For a vector x in R^n, [x]_i (i = 1,2,...,n) denotes the i-th component of x. |x| denotes the vector in R^n of absolute values of the components of x.
||x||_p denotes (|[x]_1|^p + ... + |[x]_n|^p)^{1/p}. Strictly speaking, ||x||_p is not a norm when 0 < p < 1, but we still use the term "p-norm", because the form is the same except that the value of p differs. ||x||_0 is the number of nonzero components of x.

This paper is organized as follows. In Section 2, the p-MISVM for feature selection is introduced. In Section 3, the CCCP is applied to solve the p-MISVM model. In Section 4, the absolute lower bounds of the local optimal solutions are established. In Section 5, numerical experiments are given to demonstrate the effectiveness of our method. We conclude the paper in Section 6.

2 p-norm multi-instance support vector machine

For feature selection, p-MISVM is an embedded method: training data are given to a learning machine, which returns a predictor together with a subset of features on which it performs predictions. In fact, feature selection is performed in the process of learning. Consider the multi-instance classification problem with the training set T given by

{(X_1, y_1), ..., (X_l, y_l)},   (1)

where X_i = {x_{i1}, ..., x_{il_i}}, x_{ij} in R^n (i = 1,...,l, j = 1,...,l_i), y_i in {-1, 1}. Here, when y_i = 1, X_i is called a positive bag, and (X_i, y_i) implies that there exists at least one instance with positive label in X_i; when y_i = -1, X_i is called a negative bag, and no instance in X_i has a positive label. The task is to find a function g(x) such that
the label of any instance in R^n can be deduced by the decision function f(x) = sgn(g(x)). For convenience, the training set (1) is represented as

{(X_1, y_1), ..., (X_q, y_q), (x_{r+1}, y_{r+1}), ..., (x_{r+s}, y_{r+s})},   (2)

where y_1 = ... = y_q = 1 and y_{r+1} = ... = y_{r+s} = -1; (X_i, 1) implies that X_i contains at least one instance with positive label, and (x_i, -1) implies that the label of the instance x_i is negative. All instances in the positive bags X_1, ..., X_q are x_1, ..., x_r, and I(i) (i = 1,...,q) denotes the index set of the instances in X_i. The feature vector

g_i = ([x_1]_i, [x_2]_i, ..., [x_{r+s}]_i)^T,  i = 1,...,n,   (3)

collects the values of the i-th feature over all instances. Suppose the decision function is given by f(x) = sgn((w.x) + b); the p-MISVM solves the optimization problem:

min_{w,b,xi}  ||w||_p^p + C_1 sum_{i=1}^{q} xi_i + C_2 sum_{i=r+1}^{r+s} xi_i,
s.t.  max_{j in I(i)} ((w.x_j) + b) >= 1 - xi_i,  i = 1,...,q,   (4)
      (w.x_i) + b <= -1 + xi_i,  i = r+1,...,r+s,
      xi_i >= 0,  i = 1,...,q, r+1,...,r+s,

where C_1 (C_1 > 0), C_2 (C_2 > 0) and p (0 < p < 1) are parameters. Our new method is as follows:

Algorithm 1. (p-MISVM)
(1) Given a training set (2), select the parameters C_1 (C_1 > 0), C_2 (C_2 > 0) and p (0 < p < 1);
(2) Solve the optimization problem (4) and get its global solution (w*, b*, xi*);
(3) Select the feature set {i : [w*]_i != 0, i = 1,...,n};
(4) Construct the decision function f(x) = sgn((w*.x) + b*).

Note that Algorithm 1 faces the two difficulties (i) and (ii) raised in Section 1, so the following sections address them respectively.

3 CCCP for the p-MISVM model

The constrained concave-convex procedure (CCCP) ([14], [15]) is an optimization tool for problems whose objective and constraint functions can be expressed as differences of convex functions. Consider the following optimization problem:

min_x  f_0(x) - g_0(x),
s.t.  f_i(x) - g_i(x) <= c_i,  i = 1,...,m,   (5)

where f_i, g_i (i = 0,...,m) are real-valued, convex and differentiable functions on R^n, and c_i in R.
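To see concretely that the p-MISVM model fits this template, note that each bag constraint in (4) can be rearranged as a difference of convex functions: for each positive bag X_i,

(-xi_i - b) - max_{j in I(i)} (w.x_j) <= -1,

where f_i(w, b, xi) = -xi_i - b is affine (hence convex), g_i(w) = max_{j in I(i)} (w.x_j) is convex as a pointwise maximum of linear functions, and c_i = -1.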
Given an initial x^(0), CCCP computes x^(t+1) from x^(t) by replacing g_i(x) with its first-order Taylor expansion at x^(t), and then setting x^(t+1) to the solution of the following optimization problem:

min_x  f_0(x) - [g_0(x^(t)) + grad g_0(x^(t)).(x - x^(t))],
s.t.  f_i(x) - [g_i(x^(t)) + grad g_i(x^(t)).(x - x^(t))] <= c_i,  i = 1,...,m.   (6)
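As a minimal runnable sketch of this iteration, consider the toy one-dimensional DC program (not the p-MISVM model itself) of minimizing f_0(x) - g_0(x) with f_0(x) = x^2 and g_0(x) = 2|x|: each CCCP step replaces g_0 by its linearization at x^(t) and solves the resulting convex subproblem, here in closed form.

```python
def cccp_1d(x0, iters=50):
    """CCCP for min f0(x) - g0(x) with f0(x) = x**2, g0(x) = 2*abs(x).
    Linearizing g0 at x_t (subgradient s = 2*sign(x_t)) gives the convex
    subproblem min_x x**2 - s*x, whose closed-form minimizer is x = s/2."""
    x = x0
    for _ in range(iters):
        s = 2.0 if x > 0 else -2.0   # subgradient of g0 at x (pick -2 at 0)
        x_next = s / 2.0             # argmin of the linearized subproblem
        if x_next == x:              # iterate unchanged: converged
            break
        x = x_next
    return x
```

Starting from x0 = 0.3 the iteration converges to x = 1, a local minimum of x^2 - 2|x|; starting from x0 = -0.3 it converges to x = -1. Algorithm 2 below follows the same pattern, except that the convex subproblem is a linear program rather than a closed-form minimization.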
Here, grad g(x~) is the gradient of the function g at x~. For non-smooth functions, the gradient should be replaced by a subgradient. It is shown in [15] that CCCP converges to a local minimum of (5).

Consider problem (4). We first introduce the variable v = ([v]_1, ..., [v]_n) to eliminate the absolute values from the objective function, which leads to the following equivalent problem:

min_{w,b,xi,v}  ||v||_p^p + C_1 sum_{i=1}^{q} xi_i + C_2 sum_{i=r+1}^{r+s} xi_i,   (7)
s.t.  max_{j in I(i)} ((w.x_j) + b) >= 1 - xi_i,  i = 1,...,q,   (8)
      (w.x_i) + b <= -1 + xi_i,  i = r+1,...,r+s,   (9)
      xi_i >= 0,  i = 1,...,q, r+1,...,r+s,   (10)
      -v <= w <= v,   (11)

where ||v||_p^p = [v]_1^p + ... + [v]_n^p, due to the last constraint (11). Furthermore, the objective function and the constraint functions of problem (7)-(11) can be regarded as differences of two convex functions, hence problem (7)-(11) can be solved with CCCP.

Note that max_{j in I(i)} (w.x_j) in (8) is convex but non-smooth in w. To use the CCCP, we replace the gradient by a subgradient. It is easy to obtain that, for i = 1,...,q, the subdifferential is

d max_{j in I(i)} (w.x_j) = { sum_{j in I(i)} beta_j x_j : beta_j >= 0, sum_{j in I(i)} beta_j = 1, beta_j = 0 if (w.x_j) < max_{k in I(i)} (w.x_k) }.

At the k-th iteration, denote the current estimates of w, b, xi, v and the corresponding beta_j by w^(k), b^(k), xi^(k), v^(k) and beta_j^(k), respectively. In the experiments, we initialized beta_j^(0) = 1/|I(i)| for j in I(i). For convenience, we pick the subgradient with

beta_j^(k) = 1 if j = argmax_{k' in I(i)} (w^(k).x_{k'}), and beta_j^(k) = 0 otherwise,

and write x_i^(k) = sum_{j in I(i)} beta_j^(k) x_j. The optimization problem then becomes

min_{w,b,xi,v}  p sum_{i=1}^{n} ([v^(k)]_i)^{p-1} [v]_i + C_1 sum_{i=1}^{q} xi_i + C_2 sum_{i=r+1}^{r+s} xi_i,
s.t.  (w.x_i^(k)) + b >= 1 - xi_i,  i = 1,...,q,
      (w.x_i) + b <= -1 + xi_i,  i = r+1,...,r+s,   (12)
      xi_i >= 0,  i = 1,...,q, r+1,...,r+s,
      -v <= w <= v,

which is a standard linear program. The following algorithm is then established:

Algorithm 2.
(1) Given a training set (2), select the parameters C_1 (C_1 > 0), C_2 (C_2 > 0) and p (0 < p < 1);
(2) Let k = 0, select x_i^(0) = (1/|I(i)|) sum_{j in I(i)} x_j for i = 1,...,q, and v^(0) = 0;
(3) Solve the optimization problem (12) and get its solution (w^(k+1), b^(k+1), xi^(k+1), v^(k+1));
(4) Compute g(x_j) = (w^(k+1).x_j) + b^(k+1) for j in I(i) and i = 1,...,q, and select x_i^(k+1) = argmax_{j in I(i)} g(x_j);
(5) If ||x_i^(k) - x_i^(k+1)|| = 0 for i = 1,...,q, then let w* = w^(k+1), b* = b^(k+1), xi* = xi^(k+1) and stop; otherwise, let k = k + 1 and go to step (3).

4 The lower bounds of the nonzero entries of local optimal solutions to problem (4)

Using the same strategy as [16], we obtain the following Theorem 1, which can be used to identify the nonzero components of the local optimal solutions to problem (4), even though Algorithm 2 can only find an approximate local optimal solution.

Theorem 1 For any local optimal solution (w*, b*, xi*) to problem (4), if

|[w*]_i| <= ( p / (sqrt(q C_1^2 + s C_2^2) ||g_i||) )^{1/(1-p)},

then [w*]_i = 0, i = 1,2,...,n, where g_i is defined in (3).

Proof: Suppose ||w*||_0 = k. Without loss of generality, let w* = ([w*]_1, [w*]_2, ..., [w*]_k, 0, 0, ..., 0)^T and z* = ([w*]_1, [w*]_2, ..., [w*]_k)^T. Restrict each instance to its first k components, x = ([x]_1, [x]_2, ..., [x]_k) with slight abuse of notation, and consider the following optimization problem:

min_{z,b,xi}  ||z||_p^p + C_1 sum_{i=1}^{q} xi_i + C_2 sum_{i=r+1}^{r+s} xi_i,
s.t.  max_{j in I(i)} ((z.x_j) + b) >= 1 - xi_i,  i = 1,...,q,   (13)
      (z.x_i) + b <= -1 + xi_i,  i = r+1,...,r+s,
      xi_i >= 0,  i = 1,...,q, r+1,...,r+s.

It has been pointed out in [10] that the constraint max_{j in I(i)} ((z.x_j) + b) >= 1 - xi_i is equivalent to the existence of convex combination coefficients v_{ij} >= 0, sum_{j in I(i)} v_{ij} = 1, such that (z. sum_{j in I(i)} v_{ij} x_j) + b >= 1 - xi_i. The above problem (13) is then equivalent to:

min_{z,b,xi,v}  ||z||_p^p + C_1 sum_{i=1}^{q} xi_i + C_2 sum_{i=r+1}^{r+s} xi_i,
s.t.  (z. sum_{j in I(i)} v_{ij} x_j) + b >= 1 - xi_i,  i = 1,...,q,
      (z.x_i) + b <= -1 + xi_i,  i = r+1,...,r+s,   (14)
      xi_i >= 0,  i = 1,...,q, r+1,...,r+s,
      v_{ij} >= 0,  j in I(i), i = 1,...,q,
      sum_{j in I(i)} v_{ij} = 1,  i = 1,...,q.

The Lagrange function of (14) is

L(z, b, xi, v, alpha, zeta, lambda, mu) = ||z||_p^p + C_1 sum_{i=1}^{q} xi_i + C_2 sum_{i=r+1}^{r+s} xi_i - sum_{i=1}^{q} alpha_i ((z. sum_{j in I(i)} v_{ij} x_j) + b - 1 + xi_i) + sum_{i=r+1}^{r+s} alpha_i ((z.x_i) + b + 1 - xi_i) - sum_i zeta_i xi_i - sum_{i,j} lambda_{ij} v_{ij} - sum_i mu_i (sum_{j in I(i)} v_{ij} - 1).
Since (z*, b*, xi*, v*) is a local optimal solution of (14), the KKT conditions give

p |z*|^{p-1} sgn(z*) = sum_{i=1}^{q} alpha_i sum_{j in I(i)} v_{ij} x_j - sum_{i=r+1}^{r+s} alpha_i x_i,   (15)
0 <= alpha_i <= C_1,  i = 1,...,q,   (16)
0 <= alpha_i <= C_2,  i = r+1,...,r+s.   (17)

According to (15)-(17) and the Cauchy-Schwarz inequality, for each component i we have

p |[z*]_i|^{p-1} = | sum_{i'=1}^{q} alpha_{i'} [sum_{j in I(i')} v_{i'j} x_j]_i - sum_{i'=r+1}^{r+s} alpha_{i'} [x_{i'}]_i |   (18)
  <= ||(alpha_1, ..., alpha_q, alpha_{r+1}, ..., alpha_{r+s})|| . ||([sum_{j in I(1)} v_{1j} x_j]_i, ..., [x_{r+s}]_i)||   (19)
  <= sqrt(q C_1^2 + s C_2^2) ||g_i||,   (20)

which means |[z*]_i| >= ( p / (sqrt(q C_1^2 + s C_2^2) ||g_i||) )^{1/(1-p)} for every nonzero component, and the conclusion follows.

According to Theorem 1, we can identify the nonzero components of a local optimal solution to (4). Based on Algorithm 2 and Theorem 1, Algorithm 3 is established as follows:

Algorithm 3.
(1) Given a training set (2), select the parameters C_1 (C_1 > 0), C_2 (C_2 > 0) and p (0 < p < 1);
(2) Use Algorithm 2 to get the local optimal solution (w*, b*, xi*) to problem (4);
(3) Compute L_i = ( p / (sqrt(q C_1^2 + s C_2^2) ||g_i||) )^{1/(1-p)} for i = 1,...,n, and select the feature set {i : |[w*]_i| > L_i, i = 1,...,n};
(4) Construct the decision function f(x) = sgn((w~*.x~) + b*), where the components of w~* are the nonzero components of w* and the components of x~ correspond to the nonzero components of w*.

The experiments in the following section are conducted according to Algorithm 3.

5 Numerical experiments

In this section, experiments on four simulated datasets are conducted, comparing p-MISVM with MICA. The four simulated datasets (I, II, III, IV) are generated by the following steps:

According to two different distributions, independently generate n_0 positive and negative feature vectors g_i^+ in R^{l_+}, g_i^- in R^{l_-}, i = 1,2,...,n_0, where l_+ and l_- are respectively the numbers of positive and negative points;
According to other distributions, independently generate some stochastic vectors that are irrelevant to the class;
Each positive bag contains three points generated stochastically in a rectangular region of radius 2 centered at a positive point generated in the first step; each negative bag is just a negative point generated in the first step.

The four datasets are described in Table 1.

Table 1: Four simulated datasets
Data | features | relevant features | Distribution of g_i^+ | Distribution of g_i^-
I    | 20 | 3 | N(1, 0.5)  | N(-1, 0.5)
II   | 20 | 3 | U(-0.5, 1) | U(-1, 0)
III  |    |   | N(1, 0.5)  | N(-1, 0.5)
IV   |    |   | U(-0.5, 1) | U(-1, 0)

Table 2: Results on the four simulated datasets
Dataset | Method  | No. of selected features | Percent of relevant features (%) | Average accuracy (%) | Parameters
I       | p-MISVM |  |  | | p = 0.5, C = 2.8
I       | MICA    |  |  | | C = 2
II      | p-MISVM |  |  | | p = 0.5, C = 1
II      | MICA    |  |  | | C = 0.7
III     | p-MISVM |  |  | | p = 0.6, C = 0.57
III     | MICA    |  |  | | C = 2
IV      | p-MISVM |  |  | | p = 0.6, C = 0.7
IV      | MICA    |  |  | | C = 0.7

According to Algorithm 3, 100 experiments are conducted for every dataset. There are three parameters C_1, C_2 and p in Algorithm 3; we set C_1 = C_2 = C in our experiments, and the best values of these parameters are chosen by ten-fold cross-validation. Our experimental results are given in Table 2, where the best results are shown in bold. The fourth column of Table 2 shows the percentage of correctly selected features among the selected features, so larger values are better. The average accuracy is computed by averaging the test accuracy over the 100 experiments. p-MISVM performs the best of the two methods: it selects the fewest features while keeping high accuracy, compared with MICA.

6 Conclusion

Feature selection is very important in many applications of data mining. This paper introduces a new version of the SVM, named p-MISVM, for feature selection and multi-instance
classification. By using the CCCP method, a linearization algorithm is proposed to obtain an approximate local optimal solution to the p-MISVM. Lower bounds on the absolute values of the nonzero components of every local optimal solution are established, which can eliminate zero components in any numerical solution. The numerical experiments show that the p-MISVM is effective in selecting relevant features, compared with the popular MICA.

References

[1] Dietterich, T. G., Lathrop, R. H., Lozano-Perez, T. Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89, 31-71.
[2] Andrews, S., Tsochantaridis, I., Hofmann, T. Support vector machines for multiple instance learning. In Advances in Neural Information Processing Systems 15.
[3] Viola, P., Platt, J., Zhang, C. Multiple instance boosting for object detection. In Advances in Neural Information Processing Systems 18.
[4] Maron, O., Lozano-Perez, T. A framework for multiple-instance learning. In Advances in Neural Information Processing Systems 10.
[5] Xin, X., Frank, E. Logistic regression and boosting for labeled bags of instances. In Proc. 8th Pacific-Asia Conference on Knowledge Discovery and Data Mining.
[6] Fung, G., Dundar, M., Krishnapuram, B., Rao, R. B. Multiple instance learning for computer aided diagnosis. In Advances in Neural Information Processing Systems 19.
[7] Settles, B., Craven, M., Ray, S. Multiple instance active learning. In Advances in Neural Information Processing Systems 20.
[8] Wang, J., Zucker, J. Solving the multiple-instance problem: A lazy learning approach. In Proceedings of the 17th International Conference on Machine Learning.
[9] Andrews, S., Tsochantaridis, I., Hofmann, T. Support vector machines for multiple-instance learning. Advances in Neural Information Processing Systems 15. Cambridge, MA: MIT Press.
[10] Mangasarian, O. L., Wild, E. W. Multiple instance classification via successive linear programming. Journal of Optimization Theory and Applications, 137(1), 2008.
[11] Bradley, P. S., Mangasarian, O. L.
Feature selection via concave minimization and support vector machines. In Proc. 13th ICML, 82-90.
[12] Zhu, J., Rosset, S., Hastie, T., Tibshirani, R. 1-norm SVMs. Advances in Neural Information Processing Systems 16.
[13] Tan, J. Y., Zhang, C. H., Deng, N. Y. Cancer gene identification via p-norm support vector machine. ISB2010, to be accepted.
[14] Yuille, A., Rangarajan, A. The concave-convex procedure. Neural Computation, 15.
[15] Smola, A. J., Vishwanathan, S. V. N., Hofmann, T. Kernel methods for missing variables. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, Barbados.
[16] Chen, X. J., Xu, F. M., Ye, Y. Y. Lower bound theory of nonzero entries in solutions of l2-lp minimization. Technical report, Department of Applied Mathematics, The Hong Kong Polytechnic University, 2009.
VARIATION OF CONSTANT SUM CONSTRAINT FOR INTEGER MODEL WITH NON UNIFORM VARIABLES BÂRZĂ, Slvu Faculty of Mathematcs-Informatcs Spru Haret Unversty barza_slvu@yahoo.com Abstract Ths paper wants to contnue
More informationA New Evolutionary Computation Based Approach for Learning Bayesian Network
Avalable onlne at www.scencedrect.com Proceda Engneerng 15 (2011) 4026 4030 Advanced n Control Engneerng and Informaton Scence A New Evolutonary Computaton Based Approach for Learnng Bayesan Network Yungang
More informationProblem Set 9 Solutions
Desgn and Analyss of Algorthms May 4, 2015 Massachusetts Insttute of Technology 6.046J/18.410J Profs. Erk Demane, Srn Devadas, and Nancy Lynch Problem Set 9 Solutons Problem Set 9 Solutons Ths problem
More informationCHAPTER-5 INFORMATION MEASURE OF FUZZY MATRIX AND FUZZY BINARY RELATION
CAPTER- INFORMATION MEASURE OF FUZZY MATRI AN FUZZY BINARY RELATION Introducton The basc concept of the fuzz matr theor s ver smple and can be appled to socal and natural stuatons A branch of fuzz matr
More informationPower law and dimension of the maximum value for belief distribution with the max Deng entropy
Power law and dmenson of the maxmum value for belef dstrbuton wth the max Deng entropy Bngy Kang a, a College of Informaton Engneerng, Northwest A&F Unversty, Yanglng, Shaanx, 712100, Chna. Abstract Deng
More informationConvexity preserving interpolation by splines of arbitrary degree
Computer Scence Journal of Moldova, vol.18, no.1(52), 2010 Convexty preservng nterpolaton by splnes of arbtrary degree Igor Verlan Abstract In the present paper an algorthm of C 2 nterpolaton of dscrete
More informationLinear Classification, SVMs and Nearest Neighbors
1 CSE 473 Lecture 25 (Chapter 18) Lnear Classfcaton, SVMs and Nearest Neghbors CSE AI faculty + Chrs Bshop, Dan Klen, Stuart Russell, Andrew Moore Motvaton: Face Detecton How do we buld a classfer to dstngush
More information10) Activity analysis
3C3 Mathematcal Methods for Economsts (6 cr) 1) Actvty analyss Abolfazl Keshvar Ph.D. Aalto Unversty School of Busness Sldes orgnally by: Tmo Kuosmanen Updated by: Abolfazl Keshvar 1 Outlne Hstorcal development
More informationSupporting Information
Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationRBF Neural Network Model Training by Unscented Kalman Filter and Its Application in Mechanical Fault Diagnosis
Appled Mechancs and Materals Submtted: 24-6-2 ISSN: 662-7482, Vols. 62-65, pp 2383-2386 Accepted: 24-6- do:.428/www.scentfc.net/amm.62-65.2383 Onlne: 24-8- 24 rans ech Publcatons, Swtzerland RBF Neural
More informationThe Geometry of Logit and Probit
The Geometry of Logt and Probt Ths short note s meant as a supplement to Chapters and 3 of Spatal Models of Parlamentary Votng and the notaton and reference to fgures n the text below s to those two chapters.
More informationMDL-Based Unsupervised Attribute Ranking
MDL-Based Unsupervsed Attrbute Rankng Zdravko Markov Computer Scence Department Central Connectcut State Unversty New Brtan, CT 06050, USA http://www.cs.ccsu.edu/~markov/ markovz@ccsu.edu MDL-Based Unsupervsed
More informationA Robust Method for Calculating the Correlation Coefficient
A Robust Method for Calculatng the Correlaton Coeffcent E.B. Nven and C. V. Deutsch Relatonshps between prmary and secondary data are frequently quantfed usng the correlaton coeffcent; however, the tradtonal
More informationCollege of Computer & Information Science Fall 2009 Northeastern University 20 October 2009
College of Computer & Informaton Scence Fall 2009 Northeastern Unversty 20 October 2009 CS7880: Algorthmc Power Tools Scrbe: Jan Wen and Laura Poplawsk Lecture Outlne: Prmal-dual schema Network Desgn:
More informationGeneral viscosity iterative method for a sequence of quasi-nonexpansive mappings
Avalable onlne at www.tjnsa.com J. Nonlnear Sc. Appl. 9 (2016), 5672 5682 Research Artcle General vscosty teratve method for a sequence of quas-nonexpansve mappngs Cuje Zhang, Ynan Wang College of Scence,
More information8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS
SECTION 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS 493 8.4 COMPLEX VECTOR SPACES AND INNER PRODUCTS All the vector spaces you have studed thus far n the text are real vector spaces because the scalars
More informationWe present the algorithm first, then derive it later. Assume access to a dataset {(x i, y i )} n i=1, where x i R d and y i { 1, 1}.
CS 189 Introducton to Machne Learnng Sprng 2018 Note 26 1 Boostng We have seen that n the case of random forests, combnng many mperfect models can produce a snglodel that works very well. Ths s the dea
More informationA PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS
HCMC Unversty of Pedagogy Thong Nguyen Huu et al. A PROBABILITY-DRIVEN SEARCH ALGORITHM FOR SOLVING MULTI-OBJECTIVE OPTIMIZATION PROBLEMS Thong Nguyen Huu and Hao Tran Van Department of mathematcs-nformaton,
More informationApplication of B-Spline to Numerical Solution of a System of Singularly Perturbed Problems
Mathematca Aeterna, Vol. 1, 011, no. 06, 405 415 Applcaton of B-Splne to Numercal Soluton of a System of Sngularly Perturbed Problems Yogesh Gupta Department of Mathematcs Unted College of Engneerng &
More informationComparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method
Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method
More informationMultilayer Perceptron (MLP)
Multlayer Perceptron (MLP) Seungjn Cho Department of Computer Scence and Engneerng Pohang Unversty of Scence and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjn@postech.ac.kr 1 / 20 Outlne
More informationCSCI B609: Foundations of Data Science
CSCI B609: Foundatons of Data Scence Lecture 13/14: Gradent Descent, Boostng and Learnng from Experts Sldes at http://grgory.us/data-scence-class.html Grgory Yaroslavtsev http://grgory.us Constraned Convex
More informationSolving Nonlinear Differential Equations by a Neural Network Method
Solvng Nonlnear Dfferental Equatons by a Neural Network Method Luce P. Aarts and Peter Van der Veer Delft Unversty of Technology, Faculty of Cvlengneerng and Geoscences, Secton of Cvlengneerng Informatcs,
More informationNeural networks. Nuno Vasconcelos ECE Department, UCSD
Neural networs Nuno Vasconcelos ECE Department, UCSD Classfcaton a classfcaton problem has two types of varables e.g. X - vector of observatons (features) n the world Y - state (class) of the world x X
More informationSupplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso
Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed
More informationThe Two-scale Finite Element Errors Analysis for One Class of Thermoelastic Problem in Periodic Composites
7 Asa-Pacfc Engneerng Technology Conference (APETC 7) ISBN: 978--6595-443- The Two-scale Fnte Element Errors Analyss for One Class of Thermoelastc Problem n Perodc Compostes Xaoun Deng Mngxang Deng ABSTRACT
More informationChapter Newton s Method
Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve
More informationMACHINE APPLIED MACHINE LEARNING LEARNING. Gaussian Mixture Regression
11 MACHINE APPLIED MACHINE LEARNING LEARNING MACHINE LEARNING Gaussan Mture Regresson 22 MACHINE APPLIED MACHINE LEARNING LEARNING Bref summary of last week s lecture 33 MACHINE APPLIED MACHINE LEARNING
More informationEcon107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)
I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes
More informationA Method for Filling up the Missed Data in Information Table
A Method for Fllng up the Mssed Data Gao Xuedong, E Xu, L Teke & Zhang Qun A Method for Fllng up the Mssed Data n Informaton Table Gao Xuedong School of Management, nversty of Scence and Technology Beng,
More informationUsing T.O.M to Estimate Parameter of distributions that have not Single Exponential Family
IOSR Journal of Mathematcs IOSR-JM) ISSN: 2278-5728. Volume 3, Issue 3 Sep-Oct. 202), PP 44-48 www.osrjournals.org Usng T.O.M to Estmate Parameter of dstrbutons that have not Sngle Exponental Famly Jubran
More informationSupport Vector Novelty Detection
Support Vector Novelty Detecton Dscusson of Support Vector Method for Novelty Detecton (NIPS 2000) and Estmatng the Support of a Hgh- Dmensonal Dstrbuton (Neural Computaton 13, 2001) Bernhard Scholkopf,
More information829. An adaptive method for inertia force identification in cantilever under moving mass
89. An adaptve method for nerta force dentfcaton n cantlever under movng mass Qang Chen 1, Mnzhuo Wang, Hao Yan 3, Haonan Ye 4, Guola Yang 5 1,, 3, 4 Department of Control and System Engneerng, Nanng Unversty,
More informationSparse Gaussian Processes Using Backward Elimination
Sparse Gaussan Processes Usng Backward Elmnaton Lefeng Bo, Lng Wang, and Lcheng Jao Insttute of Intellgent Informaton Processng and Natonal Key Laboratory for Radar Sgnal Processng, Xdan Unversty, X an
More informationLearning with Tensor Representation
Report No. UIUCDCS-R-2006-276 UILU-ENG-2006-748 Learnng wth Tensor Representaton by Deng Ca, Xaofe He, and Jawe Han Aprl 2006 Learnng wth Tensor Representaton Deng Ca Xaofe He Jawe Han Department of Computer
More informationAn Admission Control Algorithm in Cloud Computing Systems
An Admsson Control Algorthm n Cloud Computng Systems Authors: Frank Yeong-Sung Ln Department of Informaton Management Natonal Tawan Unversty Tape, Tawan, R.O.C. ysln@m.ntu.edu.tw Yngje Lan Management Scence
More informationDiscretization of Continuous Attributes in Rough Set Theory and Its Application*
Dscretzaton of Contnuous Attrbutes n Rough Set Theory and Its Applcaton* Gexang Zhang 1,2, Lazhao Hu 1, and Wedong Jn 2 1 Natonal EW Laboratory, Chengdu 610036 Schuan, Chna dylan7237@sna.com 2 School of
More informationA new Approach for Solving Linear Ordinary Differential Equations
, ISSN 974-57X (Onlne), ISSN 974-5718 (Prnt), Vol. ; Issue No. 1; Year 14, Copyrght 13-14 by CESER PUBLICATIONS A new Approach for Solvng Lnear Ordnary Dfferental Equatons Fawz Abdelwahd Department of
More informationEnsemble Methods: Boosting
Ensemble Methods: Boostng Ncholas Ruozz Unversty of Texas at Dallas Based on the sldes of Vbhav Gogate and Rob Schapre Last Tme Varance reducton va baggng Generate new tranng data sets by samplng wth replacement
More informationMATH 829: Introduction to Data Mining and Analysis The EM algorithm (part 2)
1/16 MATH 829: Introducton to Data Mnng and Analyss The EM algorthm (part 2) Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 20, 2016 Recall 2/16 We are gven ndependent observatons
More informationADVANCED MACHINE LEARNING ADVANCED MACHINE LEARNING
1 ADVANCED ACHINE LEARNING ADVANCED ACHINE LEARNING Non-lnear regresson technques 2 ADVANCED ACHINE LEARNING Regresson: Prncple N ap N-dm. nput x to a contnuous output y. Learn a functon of the type: N
More informationMultiple Sound Source Location in 3D Space with a Synchronized Neural System
Multple Sound Source Locaton n D Space wth a Synchronzed Neural System Yum Takzawa and Atsush Fukasawa Insttute of Statstcal Mathematcs Research Organzaton of Informaton and Systems 0- Mdor-cho, Tachkawa,
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More information