Ensemble of GA based Selective Neural Network Ensembles


Jian-Xin WU    Zhi-Hua ZHOU    Zhao-Qian CHEN
National Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, P.R. China
{zhouzh, ...}

Abstract

Neural network ensemble is a learning paradigm where several neural networks are jointly used to solve a problem. In this paper, e-GASEN, a two-layer neural network ensemble architecture, is proposed, in which the base learners of the final ensemble are themselves ensembles. Experimental results show that e-GASEN generalizes better than a popular ensemble method. The reason why e-GASEN works is also discussed. We believe that the different layers of e-GASEN attain good generalization ability for different reasons: the first-layer ensembles profit from the selected individual neural networks that are moderately divergent but generalize well, while the second-layer ensemble profits from the divergence among the first-layer ensembles.

1 Introduction

Since neural computing has no rigorous theoretical framework until now, whether a neural network based application will be successful or not is almost fully determined by the practitioner. In general, the more experienced the practitioner is, the more likely the application is to succeed. However, users often have little knowledge of neural computing, and therefore the rewards that neural network techniques may return do not always appear. In the beginning of the 1990s, Hansen and Salamon showed that the generalization ability of a neural network system can be significantly improved by ensembling individual neural networks, i.e. training several neural networks and combining their results in some way [1]. Later, Sollich and Krogh defined a neural network ensemble as a collection of a (finite) number of neural networks that are trained for the same task [2]. Since it behaves remarkably well and is very easy to use, the neural network ensemble is regarded as a promising methodology that can benefit not only experts in neural computing but also ordinary engineers. Neural network ensembles have already been used in many real domains such as handwritten digit recognition [3], scientific image analysis [4], face recognition [5][6], OCR [7], and seismic signal classification [8].

Much work has been done to investigate why and how neural network ensembles work. The classical result is that of Krogh and Vedelsby [9], who derived the famous equation $E = \bar{E} - \bar{A}$. It clearly demonstrates that the generalization ability of the ensemble is determined by the average generalization ability and the average ambiguity (divergence) of the individual neural networks that constitute the ensemble.

Many ensemble methods have been proposed in the literature. The most attractive methods mainly include the simple ensemble [1], AdaBoost [10], and bagging [11]. These methods combine the outputs of all the base learners at hand; usually, the base learner is a neural network or a classification tree. If a base learner has high generalization error and low ambiguity, adding it to the ensemble will definitely deteriorate the ensemble's generalization ability, and there is no guarantee that such a bad base learner will never appear. This means that in some circumstances using all the base learners at hand may not be the best choice. GASEN (Genetic Algorithm based Selective ENsemble) was proposed in [12]; it trains several neural networks and then employs a genetic algorithm to select an optimum subset of those networks to constitute an ensemble. Experiments show that GASEN is superior to the simple ensemble, even though it tends to select only a small number of neural networks.
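For illustration, a minimal Python sketch of such an ensemble is given below: several networks are trained independently and their outputs are combined by simple averaging. The use of scikit-learn's MLPRegressor as the individual learner, the Friedman#1-style synthetic data, and all hyper-parameters are assumptions of the sketch, not details taken from the text.

```python
# Minimal sketch of a simple neural network ensemble for regression.
# Assumptions: scikit-learn's MLPRegressor stands in for the BP networks
# discussed in the paper; data, sizes and hyper-parameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 5))
y = (10 * np.sin(np.pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2
     + 10 * X[:, 3] + 5 * X[:, 4] + rng.normal(0.0, 1.0, 200))  # Friedman#1-style target

# Train several single-hidden-layer networks with different random seeds.
nets = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                     random_state=i).fit(X, y) for i in range(5)]

# Simple ensemble: the output is the plain average of the individual outputs.
def ensemble_predict(nets, X):
    return np.mean([net.predict(X) for net in nets], axis=0)

print("ensemble prediction on 3 points:", ensemble_predict(nets, X[:3]))
```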

In this paper, we argue that if we employ a two-layer ensemble architecture, i.e. if the base learner itself is also an ensemble, the final ensemble will have better generalization ability. The architecture used in this paper is a simple ensemble of GASENs: the final ensemble is composed of several GASENs, and its output is the average of the individual GASENs' outputs. This architecture is abbreviated as e-GASEN. By analyzing the experimental results, we believe that GASEN and e-GASEN promote the final ensemble's generalization ability in different ways.

The rest of this paper is organized as follows. In Section 2, the equation $E = \bar{E} - \bar{A}$, i.e. the relation between the generalization ability of the ensemble, the generalization ability of the individual base learners, and the average ambiguity of the base learners, is first explained; then GASEN is briefly introduced. In Section 3, e-GASEN is proposed and some experiments are reported. In Section 4, the facts revealed by the experiments are discussed, and the reason why such a two-layer ensemble architecture works is analyzed. Finally, in Section 5, conclusions are drawn and several issues for future work are indicated.

2 GASEN

Suppose the learning task is to use an ensemble comprising N base learners to approximate a function f: R^m -> R^n. The predictions of the base learners are combined through weighted averaging, where a weight $w_i$ ($i = 1, 2, \ldots, N$) is assigned to the individual base learner $f_i$, and the weights satisfy equations (1) and (2):

$0 < w_i < 1$   (1)

$\sum_{i=1}^{N} w_i = 1$   (2)

The output of the ensemble is computed according to equation (3), where $f_i$ is the output of the i-th base learner:

$\bar{f}(x) = \sum_{i=1}^{N} w_i f_i(x)$   (3)

For convenience of discussion, here we assume that each base learner has only one output component, i.e. the function to be approximated is f: R^m -> R. Note that the discussion can easily be generalized to situations where each base learner has multiple output components.

Suppose $x \in R^m$ is randomly sampled according to a distribution $p(x)$, and the expected output for x is $d(x)$. Then the error $E_i(x)$ of the i-th base learner on input x and the error $E(x)$ of the ensemble on input x are respectively:

$E_i(x) = \left( f_i(x) - d(x) \right)^2$   (4)

$E(x) = \left( \bar{f}(x) - d(x) \right)^2$   (5)

The generalization error $E_i$ of the i-th base learner on the distribution $p(x)$ and the generalization error $E$ of the ensemble on the distribution $p(x)$ are respectively:

$E_i = \int dx\, p(x)\, E_i(x)$   (6)

$E = \int dx\, p(x)\, E(x)$   (7)

The average error of the base learners on input x is:

$\bar{E}(x) = \sum_{i=1}^{N} w_i E_i(x)$   (8)

Then the average generalization error of the base learners on the distribution $p(x)$ is:

$\bar{E} = \int dx\, p(x)\, \bar{E}(x)$   (9)

Accordingly, the ambiguity of the i-th base learner on input x, the ambiguity of the i-th base learner on the distribution $p(x)$, the average ambiguity of the base learners on input x, and the average ambiguity of the base learners on the distribution $p(x)$ are defined respectively as:

$A_i(x) = \left( f_i(x) - \bar{f}(x) \right)^2$   (10)

$A_i = \int dx\, p(x)\, A_i(x)$   (11)

$\bar{A}(x) = \sum_{i=1}^{N} w_i A_i(x)$   (12)

$\bar{A} = \int dx\, p(x)\, \bar{A}(x)$   (13)

After a few algebraic manipulations, Krogh and Vedelsby reached the famous formula (14), which states that the generalization ability of the ensemble is determined by the average generalization ability and the average ambiguity of the base learners that constitute the ensemble:

$E = \bar{E} - \bar{A}$   (14)
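The decomposition can be checked numerically. The following sketch is a small illustration under the assumption that a finite random sample stands in for the distribution p(x); it computes E, $\bar{E}$, and $\bar{A}$ for a randomly generated ensemble and verifies that $E = \bar{E} - \bar{A}$.

```python
# Numerical check of the error-ambiguity decomposition E = E_bar - A_bar
# for a weighted-average ensemble (Krogh & Vedelsby). All quantities are
# averaged over a finite sample standing in for the distribution p(x).
import numpy as np

rng = np.random.default_rng(1)
n_points, n_learners = 1000, 4
d = rng.normal(size=n_points)                                # target d(x)
f = d + rng.normal(scale=0.5, size=(n_learners, n_points))   # learner outputs f_i(x)

w = rng.random(n_learners)
w /= w.sum()                                                 # weights satisfy (1) and (2)

f_bar = w @ f                                                # ensemble output, eq. (3)
E = np.mean((f_bar - d) ** 2)                                # ensemble error, eqs. (5), (7)
E_bar = np.sum(w * np.mean((f - d) ** 2, axis=1))            # average error, eqs. (4), (8), (9)
A_bar = np.sum(w * np.mean((f - f_bar) ** 2, axis=1))        # average ambiguity, eqs. (10)-(13)

print(E, E_bar - A_bar)   # the two numbers agree up to floating-point error
```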

Now we define the correlation between the i-th and the j-th individual base learners as:

$C_{ij} = \int dx\, p(x) \left( f_i(x) - d(x) \right) \left( f_j(x) - d(x) \right)$   (15)

Then, according to [12] and [13], we have:

$E = \sum_{i=1}^{N} \sum_{j=1}^{N} w_i w_j C_{ij}$   (16)

When the base learners are combined using the simple ensemble method, i.e. $w_i = 1/N$ for every i, we have:

$E = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} C_{ij}$   (17)

It has been proved that when the simple ensemble method is used and formula (18) is satisfied, omitting the k-th base learner will improve the ensemble's generalization ability [12]:

$(2N-1) \sum_{i=1, i \neq k}^{N} \sum_{j=1, j \neq k}^{N} C_{ij} < 2(N-1)^2 \sum_{i=1, i \neq k}^{N} C_{ik} + (N-1)^2 E_k$   (18)

So we arrive at the conclusion that after the neural networks are trained, in some cases ensembling an appropriate subset of the neural networks is superior to ensembling all of them; the individual neural networks that should be omitted satisfy equation (18). This statement is also partly supported by Liu, Yao, and Higuchi [14]. After training several neural networks with negative correlation learning and evolutionary computation, they used the k-means algorithm to divide the individuals into different clusters. In every cluster, the fittest individual network was selected as a representative of the cluster. They compared the ensemble formed of these representatives with the ensemble formed of all the networks, and no statistically significant difference was observed between them in their experiments. This observation implies that the ensemble does not have to use all the networks to achieve good performance.

GASEN was proposed based on this conclusion. It first trains several individual neural networks independently and then employs a genetic algorithm to select an optimum subset of the individual networks to constitute an ensemble. The selected neural networks are combined using simple averaging. Experimental data show that GASEN is superior to using all the available networks at hand [12]. Although a genetic algorithm is used in both [14] and [12], the two methods are quite different: in [14] the genetic algorithm is used to evolve a population of neural networks that are negatively correlated, while in [12] the genetic algorithm is used to select a subset of neural networks to constitute the ensemble.

3 e-GASEN

It is well known that in order for an ensemble to work well, the individual neural networks should respond as independently as possible to an input. If this independence requirement is satisfied, the ensemble's generalization error will decrease when more neural networks are added to it. However, the marginal error reduction contributed by every newly added neural network tends to decrease as the ensemble grows larger and larger [13][15].

The GASEN method applies a genetic algorithm based selection process. After selection, GASEN's size, i.e. the number of neural networks that survive the selection process, is rather small. Experimental data in [12] show that if N neural networks are trained, GASEN will on average select only about N/4 of them to form an ensemble. The benefits brought by the genetic selection process are appealing. However, we believe that if more neural networks are included, in some cases the generalization error of the ensemble may be further reduced. This is the motivation of e-GASEN, which is a natural extension of GASEN. Given a learning task, we first train several ensembles using the GASEN algorithm. Then an e-GASEN is formed by combining these GASENs with the simple ensemble method, i.e. averaging the outputs of the GASENs on an input to form the e-GASEN's output. e-GASEN is thus a two-layer ensemble architecture.
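Condition (18) can be evaluated on a held-out sample once the matrix C_ij has been estimated. The sketch below is an illustration under our own assumptions (synthetic predictions and a deliberately biased sixth learner); it is not the selection procedure of GASEN, which uses a genetic algorithm instead, but it shows how formula (18) flags a learner whose omission would help a simply averaged ensemble.

```python
# Estimate C_ij on a held-out sample and test condition (18): omitting the
# k-th learner helps a simply averaged ensemble of N learners when
# (2N - 1) * sum_{i!=k} sum_{j!=k} C_ij < 2 * (N-1)^2 * sum_{i!=k} C_ik + (N-1)^2 * E_k.
import numpy as np

def correlation_matrix(preds, d):
    """C_ij averaged over the sample; preds has shape (N, n_points)."""
    resid = preds - d
    return resid @ resid.T / d.shape[0]

def should_omit(C, k):
    N = C.shape[0]
    keep = [i for i in range(N) if i != k]
    S = C[np.ix_(keep, keep)].sum()      # sum over i != k, j != k of C_ij
    T = C[keep, k].sum()                 # sum over i != k of C_ik
    E_k = C[k, k]
    return (2 * N - 1) * S < (N - 1) ** 2 * (2 * T + E_k)

rng = np.random.default_rng(2)
d = rng.normal(size=500)
preds = d + rng.normal(scale=0.4, size=(6, 500))
preds[5] += 1.0                          # make the last learner poor and biased
C = correlation_matrix(preds, d)
print([should_omit(C, k) for k in range(6)])  # only the biased learner is flagged
```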
Since an e-GASEN is formed by averaging several GASENs and every GASEN is constructed by averaging several single neural networks, an e-GASEN may be viewed as averaging a set of selected single neural networks.
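A rough sketch of the two-layer idea follows. It is a simplification made for illustration: GASEN as described in [12] evolves real-valued weight vectors, whereas the sketch below evolves binary inclusion masks with a plain generational genetic algorithm (tournament selection, uniform crossover, bit-flip mutation) and uses the validation mean squared error of the simple average of the selected networks as the fitness; the second layer then averages several such selective ensembles.

```python
# Sketch: GA-based selective ensemble (GASEN-style) and a two-layer
# ensemble of such ensembles (e-GASEN-style). Simplifications: binary
# inclusion masks instead of evolved real-valued weights; a basic
# generational GA; validation MSE of the simple average as the fitness.
import numpy as np

def subset_mse(mask, preds, y):
    if not mask.any():                                  # empty subsets are invalid
        return np.inf
    return np.mean((preds[mask].mean(axis=0) - y) ** 2)

def ga_select(preds, y, pop_size=30, generations=50, p_mut=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n = preds.shape[0]
    pop = rng.random((pop_size, n)) < 0.5               # random initial masks
    for _ in range(generations):
        fit = np.array([subset_mse(ind, preds, y) for ind in pop])
        new_pop = [pop[fit.argmin()].copy()]            # elitism: keep the best mask
        while len(new_pop) < pop_size:
            i = rng.choice(pop_size, 2, replace=False)  # two binary tournaments
            j = rng.choice(pop_size, 2, replace=False)
            p1 = pop[i[0]] if fit[i[0]] < fit[i[1]] else pop[i[1]]
            p2 = pop[j[0]] if fit[j[0]] < fit[j[1]] else pop[j[1]]
            cross = rng.random(n) < 0.5                 # uniform crossover
            child = np.where(cross, p1, p2)
            child = child ^ (rng.random(n) < p_mut)     # bit-flip mutation
            new_pop.append(child)
        pop = np.array(new_pop)
    fit = np.array([subset_mse(ind, preds, y) for ind in pop])
    return pop[fit.argmin()]

def gasen_predict(preds, mask):
    return preds[mask].mean(axis=0)                     # first layer: selected average

def e_gasen_predict(pred_sets, masks):
    # second layer: simple average of the first-layer GASEN outputs
    return np.mean([gasen_predict(p, m) for p, m in zip(pred_sets, masks)], axis=0)
```

A call such as `mask = ga_select(val_preds, y_val)` assumes that `val_preds` holds the validation-set outputs of the trained networks, one row per network.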

Table 1. Experimental results of the simple ensemble, GASEN, and e-GASEN on the Friedman#1, Boston Housing, Ozone, and Servo data sets (mean squared error and standard deviation for each method).

Table 2. The mean error-ambiguity decomposition of the generalization error (E, $\bar{E}$, and $\bar{A}$ for each method) on the same four data sets.

So we may define the size of an e-GASEN as the number of single neural networks contained in it. In this sense, the size of an e-GASEN equals the sum of the sizes of all its component GASENs.

We use four regression problems that were used in [12] to compare the performance of the simple ensemble, GASEN, and e-GASEN.

The first problem is Friedman#1, proposed by Friedman [16]. There are 5 continuous attributes. The data set is generated according to equation (19), where the noise term ε follows the normal distribution N(0, 1) and each $x_i$ ($i = 1, 2, \ldots, 5$) follows the uniform distribution U[0, 1]. In our experiments the generated data are split into a training set and a test set.

$t = 10 \sin(\pi x_1 x_2) + 20 (x_3 - 0.5)^2 + 10 x_4 + 5 x_5 + \varepsilon$   (19)

The second problem is Boston Housing from the UCI machine learning repository [17]. There are 12 continuous attributes and 1 categorical attribute. The data set comprises 506 examples, among which 400 examples make up the training set and the remaining 106 examples make up the test set in our experiments.

The third problem is Ozone, proposed by Breiman and Friedman [18]. There are 9 continuous attributes. The data set comprises 366 examples. Since the intention of the experiments is not to compare the ability of dealing with missing values, the attribute and the 36 examples with missing values are omitted. Therefore, in our experiments there are 8 continuous attributes and 330 examples, among which 250 examples make up the training set and the remaining 80 examples make up the test set.

The fourth problem is Servo from the UCI machine learning repository. There are 4 categorical attributes. The data set comprises 167 examples, among which 130 examples make up the training set and the remaining 37 examples make up the test set in our experiments. Note that some researchers [19] believe that this problem is very difficult because it involves some kind of extreme nonlinearity.

For each problem we use bagging on the training set to generate 20 single-hidden-layer BP networks. The simple ensemble is formed by averaging these networks. After performing genetic selection, a GASEN is constructed by averaging the selected networks. For every problem we perform 20 runs and record the average mean squared error and the standard deviation of these errors on the test set. An e-GASEN is formed by averaging 4 GASENs, so there are in total 5 runs of e-GASEN; the average mean squared error and the corresponding standard deviation are also recorded. Experimental results are shown in Table 1.

Statistical tests show that on the Friedman#1, Boston Housing, and Ozone data sets, GASEN's generalization error is significantly lower than that of the simple ensemble, and e-GASEN attains still lower generalization error than GASEN. On the Servo data set, GASEN is slightly inferior to the simple ensemble, while e-GASEN's performance shows no significant difference from that of the simple ensemble. From these statistics we may conclude that e-GASEN is superior to both GASEN and the simple ensemble.
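As a hedged reconstruction of one run of the above pipeline (not the code used for the reported experiments), the sketch below generates Friedman#1-style data, trains bagged single-hidden-layer networks with scikit-learn's MLPRegressor, builds GASEN-style ensembles with the `ga_select` routine sketched earlier, and averages four of them into an e-GASEN-style predictor; the data sizes, network sizes, and the use of training-set predictions as the GA fitness are assumptions of the sketch.

```python
# Sketch of one run: bagging -> single-hidden-layer BP networks ->
# GASEN-style selection -> e-GASEN as the average of 4 GASENs.
# Assumes ga_select() from the earlier sketch is available; all sizes
# and hyper-parameters below are illustrative choices.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.utils import resample

def friedman1(n, rng):
    X = rng.uniform(0.0, 1.0, size=(n, 5))
    t = (10 * np.sin(np.pi * X[:, 0] * X[:, 1]) + 20 * (X[:, 2] - 0.5) ** 2
         + 10 * X[:, 3] + 5 * X[:, 4] + rng.normal(0.0, 1.0, n))
    return X, t

def build_gasen(X, y, n_nets=20, seed=0):
    """Train bagged networks, then keep the GA-selected subset."""
    nets = []
    for i in range(n_nets):
        Xb, yb = resample(X, y, replace=True, random_state=seed + i)
        nets.append(MLPRegressor(hidden_layer_sizes=(5,), max_iter=2000,
                                 random_state=seed + i).fit(Xb, yb))
    preds = np.array([net.predict(X) for net in nets])          # fitness on training data
    mask = ga_select(preds, y, rng=np.random.default_rng(seed))  # from the earlier sketch
    return [n for n, keep in zip(nets, mask) if keep]

def predict_avg(nets, X):
    return np.mean([net.predict(X) for net in nets], axis=0)

rng = np.random.default_rng(0)
X_tr, y_tr = friedman1(200, rng)     # illustrative split sizes
X_te, y_te = friedman1(1000, rng)

gasens = [build_gasen(X_tr, y_tr, seed=100 * s) for s in range(4)]
e_gasen_pred = np.mean([predict_avg(g, X_te) for g in gasens], axis=0)
print("e-GASEN test MSE:", np.mean((e_gasen_pred - y_te) ** 2))
```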

4 Discussion

Until now, we are not entirely clear about the mechanism through which e-GASEN works so well. Following formula (14), the generalization error E of an ensemble can be decomposed into the difference between the mean error part $\bar{E}$ and the mean ambiguity part $\bar{A}$. The mean error-ambiguity decompositions of the simple ensemble, GASEN, and e-GASEN on the four data sets are tabulated in Table 2. The mean error part $\bar{E}$ is calculated by averaging, on the test set, the errors of the individual neural networks that constitute the ensemble; the mean ambiguity part $\bar{A}$ is then obtained from formula (14) with the help of E.

It is clear that the mean error part $\bar{E}$ of GASEN is considerably smaller than that of the simple ensemble. The mean ambiguity part $\bar{A}$ of GASEN is also smaller than that of the simple ensemble, but the decrease in $\bar{E}$ is more significant. This means that GASEN may attain better generalization ability by selecting base learners that are of only moderate ambiguity but generalize well. This analysis is somewhat different from our previous one in [12], in which we believed that GASEN's genetic selection process would increase ambiguity.

Since every GASEN already has a relatively small generalization error, it is hard for e-GASEN to obtain a smaller error by further lowering the mean error part $\bar{E}$. From Table 2 we find that the mean errors of GASEN and e-GASEN show no obvious difference, while e-GASEN has a higher mean ambiguity. Therefore we believe that e-GASEN mainly profits from the divergence among the different GASENs.

5 Conclusions and future work

In this paper, we re-examined GASEN, i.e. the Genetic Algorithm based Selective ENsemble method, and proposed e-GASEN, a natural extension of GASEN that combines several GASENs by simple averaging. Through analysis of the experimental results, a conjecture on how and why e-GASEN works is proposed: we believe that GASEN works by selecting neural networks that are of only moderate ambiguity but generalize well, while e-GASEN gains from the divergence among the different GASENs.

However, the analyses presented in this paper are preliminary. More experiments and theoretical work are still needed to clarify the rules behind GASEN and e-GASEN. For theoretical analysis, we believe that the bias-variance decomposition may be helpful [20]. Moreover, whether other kinds of ensembles, e.g. weighted-average ensembles, can play the role of GASEN in e-GASEN is an interesting issue for future exploration.

Acknowledgements

The National Natural Science Foundation of P.R. China and the Natural Science Foundation of Jiangsu Province, P.R. China, supported this research.

References

[1] L. K. Hansen and P. Salamon, Neural network ensembles, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993-1001, 1990.

[2] P. Sollich and A. Krogh, Learning with ensembles: how over-fitting can be useful, in Advances in Neural Information Processing Systems 8, pp. 190-196, 1996.

[3] L. K. Hansen, L. Liisberg, and P. Salamon, Ensemble methods for handwritten digit recognition, in Proc. IEEE-SP Workshop on Neural Networks for Signal Processing, 1992, IEEE Computer Society.

[4] K. J. Cherkauer, Human expert-level performance on a scientific image analysis task by a system using combined artificial neural networks, in Proc. 13th AAAI Workshop on Integrating Multiple Learned Models for Improving and Scaling Machine Learning Algorithms, pp. 15-21, 1996, AAAI.

[5] S. Gutta and H. Wechsler, Face recognition using hybrid classifier systems, in Proc. IEEE Int. Conf. on Neural Networks, 1996, IEEE Computer Society.

[6] F. J. Huang, Z.-H. Zhou, H.-J. Zhang, and T. Chen, Pose invariant face recognition, in Proc. 4th IEEE Int. Conf. on Automatic Face and Gesture Recognition, Grenoble, France, 2000, IEEE Computer Society.

[7] J. Mao, A case study on bagging, boosting and basic ensembles of neural networks for OCR, in Proc. IEEE Int. Joint Conf. on Neural Networks, vol. 3, 1998, IEEE Computer Society.

[8] Y. Shimshoni and N. Intrator, Classification of seismic signals by integrating ensembles of neural networks, IEEE Trans. Signal Processing, vol. 46, no. 5, pp. 1194-1201, 1998.

[9] A. Krogh and J. Vedelsby, Neural network ensembles, cross validation, and active learning, in Advances in Neural Information Processing Systems 7, pp. 231-238, 1995, MIT Press.

[10] Y. Freund and R. E. Schapire, A decision-theoretic generalization of on-line learning and an application to boosting, Journal of Computer and System Sciences, vol. 55, no. 1, pp. 119-139, 1997.

[11] L. Breiman, Bagging predictors, Machine Learning, vol. 24, no. 2, pp. 123-140, 1996.

[12] Z.-H. Zhou, J.-X. Wu, Y. Jiang, and S.-F. Chen, Genetic Algorithm based Selective Neural Network Ensemble, to appear in Proc. IJCAI'01, Seattle, WA, USA, 2001.

[13] M. P. Perrone and L. N. Cooper, When networks disagree: ensemble method for neural networks, in R. J. Mammone, ed., Artificial Neural Networks for Speech and Vision, Chapman-Hall, London, pp. 126-142, 1993.

[14] Y. Liu, X. Yao, and T. Higuchi, Evolutionary ensembles with negative correlation learning, IEEE Trans. Evolutionary Computation, vol. 4, no. 4, pp. 380-387, 2000.

[15] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee, Boosting the margin: a new explanation for the effectiveness of voting methods, The Annals of Statistics, vol. 26, no. 5, pp. 1651-1686, 1998.

[16] J. Friedman, Multivariate adaptive regression splines (with discussion), Annals of Statistics, vol. 19, no. 1, pp. 1-141, 1991.

[17] C. Blake, E. Keogh, and C. J. Merz, UCI Repository of machine learning databases [http://www.ics.uci.edu/~mlearn/MLRepository.html], Dept. of Information and Computer Science, University of California, Irvine, CA, 1998.

[18] L. Breiman and J. Friedman, Estimating optimal transformations in multiple regression and correlation (with discussion), Journal of the American Statistical Association, vol. 80, pp. 580-598, 1985.

[19] J. R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, CA, 1993.

[20] L. Breiman, Bias, variance and arcing classifiers, Technical Report 460, Statistics Department, University of California at Berkeley, 1996. (ftp://ftp.stat.berkeley.edu/users/breiman/arcall.ps.Z)
