HIERARCHICAL RANK DENSITY GENETIC ALGORITHM FOR RADIAL-BASIS FUNCTION NEURAL NETWORK DESIGN
Gary G. Yen and Haiming Lu
Intelligent Systems and Control Laboratory
School of Electrical and Computer Engineering
Oklahoma State University, Stillwater, OK

Abstract — In this paper, we propose a genetic algorithm based design procedure for a radial-basis function neural network. A Hierarchical Rank Density Genetic Algorithm (HRDGA) is used to evolve both the neural network's topology and parameters. In addition, the rank-density based fitness assignment technique is used to optimize the performance and topology of the evolved neural network, resolving the conflict between training performance and network complexity. Instead of producing a single optimal network, HRDGA provides a set of near-optimal neural networks to the designers or decision makers, giving them more flexibility in the final decision-making based on their preferences. In terms of searching for a near-complete set of candidate networks with high performance, the networks designed by the proposed algorithm prove to be competitive, or even superior, to three selected traditional radial-basis function networks for predicting the Mackey-Glass chaotic time series.

I. INTRODUCTION

Since the original emergence of Artificial Neural Networks in the 1940s, there has been extensive qualitative and quantitative analysis of different classes of neural networks possessing various architectures and training algorithms. Without a proven guideline, the design of an optimal neural network for a given problem is often regarded as an ad hoc process. Given a sufficient number of neurons, more than one neural network structure (i.e., with different weighting coefficients and numbers of neurons) can be trained to solve a given problem within an error bound, given enough training time. Which network is best is often decided by which network better meets the user's needs for the given problem. It is known that the performance of a neural network is sensitive to the number of hidden neurons.
Too few neurons can result in underfitting (poor approximation), while too many neurons may contribute to overfitting. Obviously, achieving better network performance and simplifying the network topology are two conflicting objectives. This has promoted research on how to identify an optimal and efficient neural network structure. AIC (Akaike Information Criterion) [1] and PMDL (Predictive Minimum Description Length) [2] are two well-adopted approaches. However, AIC can be inconsistent and has a tendency to overfit a model, while PMDL has only succeeded on relatively simple neural network structures and seems very difficult to extend to complex structure optimization problems. Moreover, all of these approaches tend to produce a single neural network per run, which does not offer the designers alternative choices. Over the past decade, evolutionary algorithms have been successfully applied to the design of network topologies and the choice of learning parameters [3], with encouraging results that are comparable with conventional neural network design approaches. However, the multiobjective trade-off characteristic of neural network design has not been well studied or applied in real-world applications. In this paper, we propose a Hierarchical Rank Density Genetic Algorithm (HRDGA) for neural network design in order to evolve a set of near-optimal neural networks. Without loss of generality, we restrict our discussion to the radial-basis function neural network. The remainder of this paper is organized as follows. Section II discusses the neural network design dilemma and the difficulty of finding a single optimal neural network. Section III reviews the Hierarchical Genetic Algorithm based Neural Network (HGA-NN) design approach and applies hierarchical genotype representation to Radial-Basis Function (RBF) neural network design. Section IV introduces the proposed rank-density fitness assignment technique for multiobjective genetic algorithms and describes the HRDGA parameters and design procedure.
Section V presents a feasibility study on Mackey-Glass chaotic time series prediction using HRDGA-evolved neural networks. A time series with a specific chaotic character is trained, and the performance is compared with those of the k-nearest neighbors, generalized regression, and orthogonal least squares training algorithms. Finally, Section VI provides some concluding remarks along with pertinent observations.

II. NEURAL NETWORK DESIGN DILEMMA

To generate a neural network that possesses practical applicability, several essential conditions need to be considered:

1) a training algorithm that can search for the optimal parameters (i.e., weights and biases) for the specified network structure and training task;
2) a rule or algorithm that can determine the network complexity and ensure it is sufficient for solving the given training problem;
3) a metric or measure to evaluate the reliability and generalization of the produced neural network.

The design of an optimal neural network involves all three of these problems. As given in [4], the ultimate goal of the
construction of a neural network with the input-output relation y = f(x, ω) is the minimization of the expectation of a cost function g(f(X), Y):

E[g(f(X), Y)] = ∫∫ g(f(x), y) f_{x,y}(x, y) dx dy,   (1)

where f_{x,y}(x, y) denotes the joint pdf that depends on the input vector x and the target output vector y. Given a network structure S, a family of input-output relations F_S = {f(x, ω)}, parameterized by ω and consisting of all network functions that may be formed with different choices of the weights, can be assigned. A structure S' is said to be dominated by S'' if F_{S'} ⊆ F_{S''}. In order to choose the optimal neural network, we need to determine the network function f*(x) (i.e., the respective weights ω*) that gives the minimal cost value within the family F_S,

f*(x) = f(x, ω*),  ω* = arg min_ω E[g(f(X, ω), Y)],   (2)

and to determine the network structure S* that realizes the minimal cost value within a set of structures {S}:

S* = arg min_S E[g(f_S(X), Y)].   (3)

Obviously, the solutions of this task need not result in a unique network. In [5], if several structures S_1, ..., S_L meet the criterion shown in Equation (3), the one with the minimal number of hidden neurons is defined as optimal. However, since a neural network can only tune its weights using the given training data sets, and these data sets are always finite, there will be a trade-off between the network's learning capability and the number of hidden neurons. A network with insufficient neurons might not be able to approximate well enough the functional relationship between input and target output. On the other hand, if the number of neurons is excessive, the realized network function will depend greatly on the particular realization of the given limited training set. This trade-off implies that a single optimal neural network is very difficult to find, as extracting f*(x) from F_S by using a finite training data set is a difficult task, if not impossible [5].
Therefore, instead of trying to obtain a single optimal neural network, finding a set of near-optimal networks with different structures seems more feasible. Each individual in this neural network set may provide different training and testing performances on different training and testing data sets. Moreover, providing a set of candidate networks to the decision makers offers more flexibility in selecting an appropriate network judged by their own preferences. For this reason, genetic algorithms and multiobjective optimization techniques can be introduced into neural network design problems to evolve network topology along with parameters and to present a set of alternative candidate networks.

III. RBF NEURAL NETWORK DESIGN

In the literature on using genetic algorithms to assist neural network design, several approaches have been proposed for evolving the network structure together with weights and biases [3, 6-7]. Among these methods, we incorporate a hierarchical genotype representation into RBF neural network design. The Hierarchical Genetic Algorithm (HGA) was first proposed by Ke et al. [8] for fuzzy controller design, using two layers of genes to evolve membership functions. Based on this idea, Yen and Lu [7] designed an HGA Neural Network (HGA-NN). In the HGA-NN, a three-level HGA is used to evolve a multi-layer perceptron neural network. Each candidate chromosome corresponds to a neural network, and the first, second and third level genes represent the network layers, the neurons in each layer, and the parameter values of each neuron, respectively. By using this hierarchical genotype coding, the problem of one phenotype mapping to different genotypes can be prevented [7]. In a similar spirit, HGA is tailored in this paper to evolve an RBF (Radial-Basis Function) neural network. A radial-basis function network can be formed as

f(x) = Σ_{i=1}^{m} ω_i exp(−||x − c_i||²),   (4)

where c_i denotes the center of the i-th localized function, ω_i is the weighting coefficient connecting the i-th Gaussian neuron to the output neuron, and m is the number of Gaussian neurons in the hidden layer.
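As a concrete illustration, the network function of Eq. (4) can be sketched in a few lines. This is an illustrative sketch, not the authors' code; unit variance is assumed for each Gaussian neuron, as chosen in the paper.

```python
import numpy as np

def rbf_forward(x, centers, weights):
    """Eq. (4): f(x) = sum_i w_i * exp(-||x - c_i||^2), unit-variance Gaussians."""
    x = np.asarray(x, dtype=float)
    centers = np.asarray(centers, dtype=float)    # shape (m, d): one center per hidden neuron
    weights = np.asarray(weights, dtype=float)    # shape (m,): output-layer weights
    sq_dist = np.sum((centers - x) ** 2, axis=1)  # ||x - c_i||^2 for every neuron
    return float(weights @ np.exp(-sq_dist))      # weighted sum of Gaussian activations
```

With a single neuron centered exactly at the input, the output reduces to that neuron's weight, which gives a quick sanity check of the formula.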
Without loss of generality, we choose the variance as unity for each Gaussian neuron. In HGA based RBF neural network design, genes in the genotype are classified into three categories: control genes, weight genes and center genes. The lengths of these three kinds of genes are the same. The value of each control gene (0 or 1) determines the activation status (off or on) of the corresponding weight gene and center gene. The weight genes and center genes are represented by real values. Control genes and weight genes are randomly initialized, and the center genes are randomly selected from the given training data samples. Figure 1 shows the genotype and phenotype of the HGA based RBF neural network.

Figure 1. Genotype (control genes, weight genes, center genes) and phenotype of the HGA based RBF neural network

IV. MULTIOBJECTIVE GENETIC ALGORITHMS

As discussed in Section II, neural network design problems have a multiobjective trade-off characteristic in
terms of optimizing network topology and performance. Therefore, a multiobjective genetic algorithm can be implemented in the NN design procedure.

A. Multiobjective Genetic Algorithms (MOGAs)

Since the 1980s, several Multiobjective Genetic Algorithms (MOGAs) have been proposed and applied to Multiobjective Optimization Problems (MOPs) [9]. These algorithms share the same purpose: searching for a uniformly distributed, near-optimal and near-complete family of non-dominated individuals, the so-called Pareto front [9], which describes the trade-off among conflicting objectives, as shown in Figure 2. For example, considering the NN design dilemma introduced in Section II, a neural network design problem can be regarded as a class of MOPs with two conflicting objectives: minimizing network structure and improving network performance. Therefore, searching for a near-complete set of non-dominated and near-optimal candidate networks as the design solutions (i.e., the Pareto front) is our goal.

To maintain the diversity of the obtained Pareto front, we adopt an adaptive cell density evaluation scheme as shown in Figure 3. The cell width in each objective dimension is formed as

d_i = (max_{x∈X} f_i(x) − min_{x∈X} f_i(x)) / K_i,  i = 1, ..., n,   (6)

where d_i is the width of the cell in the i-th dimension, K_i denotes the number of cells designated for the i-th dimension (e.g., K_1 = K_2 = 8 in Figure 4), and X denotes the decision vector space. As the maximum and minimum fitness values in objective space change from generation to generation, the cell size varies accordingly to maintain the resolution of the density calculation. The density value of an individual is defined as the number of individuals located in the same cell.

Figure 3. Density map and density grid

Figure 2. Graphical illustration of Pareto optimality

B. Rank-density based fitness assignment MOGA

As MOGAs are designed to find a near-optimal and near-complete set of Pareto solutions, the fitness assignment scheme is quite different from that of generic GAs, which are designed to search for a single optimal solution.
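The density bookkeeping of Eq. (6) can be sketched as follows. This is an illustrative re-implementation, not the authors' code, and it assumes a uniform K_i = n_cells for all objective dimensions.

```python
import numpy as np
from collections import Counter

def cell_density(fitnesses, n_cells=8):
    """Eq. (6): grid each objective dimension into n_cells cells between the
    current min and max fitness values, then count how many individuals
    share each individual's cell."""
    f = np.asarray(fitnesses, dtype=float)        # shape (N, n_objectives)
    lo, hi = f.min(axis=0), f.max(axis=0)
    width = (hi - lo) / n_cells                   # d_i, recomputed every generation
    width[width == 0] = 1.0                       # guard for a degenerate dimension
    cells = np.minimum(((f - lo) / width).astype(int), n_cells - 1)
    keys = [tuple(c) for c in cells]              # cell index of each individual
    counts = Counter(keys)                        # individuals per occupied cell
    return np.array([counts[k] for k in keys])    # density value of each individual
```

Because the grid is rebuilt from the current minima and maxima, the same individual can receive a different density value in a later generation, which is exactly the adaptive behavior described above.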
In this paper, based on the hierarchical genotype formulation, we propose a new rank-density based fitness assignment scheme in a multiobjective genetic algorithm to assist neural network design. Three essential steps are applied in this technique.

First, an Automatic Accumulated Ranking Strategy (AARS) is applied to calculate each individual's Pareto rank value, which represents the domination relationships among individuals. In AARS, assume that at generation t, individual y is dominated by p(t) individuals y_1, ..., y_{p(t)}, whose rank values are already known as r(y_1, t), ..., r(y_{p(t)}, t). Its rank value can then be computed as

r(y, t) = 1 + Σ_{j=1}^{p(t)} r(y_j, t).   (5)

By AARS, an individual's domination information is thus included in its rank value, and population diversity is maintained by penalizing dominated individuals. Second, the adaptive cell density evaluation scheme described above is used to measure population density.

Third, because rank and density values represent fitness and population diversity, respectively, the new rank-density fitness formulation can convert any multiobjective optimization problem into a bi-objective optimization problem. Here, population rank and density values are designated as the two fitness values for the GA to minimize. Before fitness evaluation, the entire population is divided into two subpopulations of equal size; the subpopulations are filled with individuals randomly chosen from the current population according to rank and density value, respectively. Afterwards, the entire population is shuffled, and crossover and mutation are then performed. Meanwhile, since we take minimization of the population density value as one of the objectives, the population can be expected to move in the direction opposite to the Pareto front while density is being minimized. Although moving away from the true Pareto front reduces the population density value, such individuals obviously harm the population's convergence to the Pareto front.
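The accumulated ranking of Eq. (5) can be sketched as below. This is an illustrative re-implementation assuming minimization of all objectives; by transitivity, every dominator of y has strictly fewer dominators than y itself, so processing individuals in order of their dominator counts guarantees that the dominators' ranks are known before Eq. (5) is applied.

```python
import numpy as np

def aars_ranks(fitnesses):
    """Eq. (5): r(y) = 1 + sum of the rank values of all individuals dominating y."""
    f = np.asarray(fitnesses, dtype=float)
    n = len(f)

    def dominates(a, b):
        # a dominates b (minimization): no worse in every objective, better in one
        return bool(np.all(f[a] <= f[b]) and np.any(f[a] < f[b]))

    dominators = [[j for j in range(n) if dominates(j, i)] for i in range(n)]
    ranks = np.ones(n, dtype=int)                  # non-dominated individuals keep rank 1
    for i in sorted(range(n), key=lambda k: len(dominators[k])):
        ranks[i] = 1 + sum(ranks[j] for j in dominators[i])
    return ranks
```

Note how heavily dominated individuals accumulate large rank values, which is the penalty mechanism described above.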
To prevent harmful offspring from surviving and affecting the evolutionary direction and speed, a "forbidden region" concept is proposed in the replacement scheme for the density subpopulation, thereby preventing this backward effect. The forbidden region includes all the cells dominated by the selected parent. Offspring located in the forbidden region will not survive to the next generation, and thus the selected parent will not be replaced. As shown in Figure 4, assuming our goal is to minimize objectives f_1 and f_2, if a resulting offspring of the selected parent p is located in the
forbidden region, that offspring is eliminated even if it reduces the population density, because such offspring tend to push the entire population away from the desired evolutionary direction.

Figure 4. Illustration of the valid range and the forbidden region: the forbidden region comprises the cells dominated by the selected parent p, where p's offspring cannot survive; offspring may only be located in the valid range.

Finally, a simple elitism scheme [9] is also applied to bookkeep the Pareto individuals obtained in each generation. These individuals are compared to produce the final Pareto front after the evolution process has stopped.

C. HRDGA for NN design

To assist RBF network design, the proposed Hierarchical Rank Density based Genetic Algorithm (HRDGA) is applied to carry out the fitness evaluation and mating selection schemes. The HRDGA operators are designed as follows.

1) Chromosome representation. In HRDGA, each individual (chromosome) represents a candidate neural network. The control genes are binary bits (0 or 1). For the weight and center genes, real values are adopted as the gene representation to reduce the length of the chromosome. The population size is fixed and chosen ad hoc according to the difficulty of the problem to be solved.

2) Crossover and mutation. We used one-point crossover in the control gene segment and two-point crossover in the other two gene segments. The crossover points were randomly selected, and the crossover rates were chosen to be 0.8, 0.7 and 0.7 for the control, weight and center genes, respectively. One-point mutation was applied in each segment. In the control gene segment, common binary value mutation was adopted. In the weight and center gene segments, real value mutation was performed by adding Gaussian noise N(0, 1), i.e., a Gaussian with zero mean and unit variance. The mutation rates were set to 0.1, 0.05 and 0.05 for the control, weight and center genes, respectively.

3) Fitness evaluation and mating selection. Since we use HRDGA to optimize the neural network topology along with its performance, these objectives must be converted into the rank-density domain. The original fitness values — network performance and number of neurons — of each individual in a generation are evaluated and ranked, and the density value is calculated. Then the new rank and density fitness values of each individual are evaluated. After crossover, the offspring replace the low-fitness parents and a new generation is formed. Mating is then iteratively processed.

4) Stopping criteria. When the desired number of generations is met, the evolutionary process stops.

V. TIME SERIES PREDICTION

Since the proposed HRDGA was designed to evolve the neural network topology together with its best performance, it proves useful in solving complex problems such as time series prediction or pattern classification. As a feasibility check, we use the HRDGA-assisted NN design to predict the Mackey-Glass chaotic time series:

dx(t)/dt = a·x(t − τ) / (1 + x^c(t − τ)) − b·x(t),   (7)

where τ = 150, a = 0.2, b = 0.1 and c = 10. The network is set to predict x(t + 6) based on x(t), x(t − 6), x(t − 12) and x(t − 18). In the proposed HRDGA, 150 initial center genes are selected, and 150 control genes and 150 weight genes are initially generated as well. The population size was set to 400. For comparison, we applied three other center selection methods — KNN (K-Nearest Neighbors) [10], GRNN (Generalized Regression Neural Network) [11] and OLS (Orthogonal Least Squares) [12] — to the same time series prediction problem. For the KNN and GRNN types of networks, 70 networks are generated with the number of neurons increasing from 11 to 80 with a step size of one. Each of these networks is trained by the KNN and GRNN methods, with the same stopping criteria as applied in HRDGA. For the OLS method, the selection of the tolerance parameter ρ determines the trade-off between the performance and complexity of the network.
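The series of Eq. (7) can be generated numerically, for instance with simple Euler integration. This is an illustrative sketch; the step size dt and the constant initial history x0 are assumptions of this sketch, not values taken from the paper.

```python
def mackey_glass(n_steps, tau=150, a=0.2, b=0.1, c=10, dt=1.0, x0=1.2):
    """Euler-integrate Eq. (7): dx/dt = a*x(t-tau)/(1 + x(t-tau)**c) - b*x(t)."""
    hist = int(tau / dt)
    x = [x0] * (hist + 1)            # constant history for t <= 0
    for _ in range(n_steps):
        x_tau = x[-hist - 1]         # delayed state x(t - tau)
        dx = a * x_tau / (1.0 + x_tau ** c) - b * x[-1]
        x.append(x[-1] + dt * dx)
    return x[hist:]                  # drop the artificial start-up history

series = mackey_glass(500)
```

The delay term is what makes the dynamics chaotic for τ = 150; a short warm-up before sampling training pairs (x(t), x(t−6), x(t−12), x(t−18)) → x(t+6) helps wash out the artificial constant history.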
A smaller ρ value produces a neural network with more neurons, whereas a larger ρ generally results in a network with fewer neurons. Therefore, by using different ρ values, we generated a group of neural networks with various training performances and numbers of hidden neurons. For the given Mackey-Glass time series prediction problem, we selected 40 different ρ values, from 0.01 to 0.4 with a step size of 0.01. The optimal k value in KNN is determined according to reference [10]. The stopping criterion for the KNN, GRNN and OLS algorithms is either that the number of epochs exceeds 5,000, or that the training Sum Square Error (SSE) between two sequential generations falls below a preset threshold. For HRDGA, the stopping generation is set to 5,000. We used the first 50 seconds of the data as the training data set, and data from three later intervals were used as the testing data sets (labeled #1, #2 and #3) to be predicted by the four different approaches. Each
approach was run 30 times with different parameter initializations to obtain average results. Figure 5(a)-(d) shows the average SSEs on the training data set and the three testing data sets for the resulting neural networks with different numbers of hidden neurons. Figure 6(a)-(d) shows the corresponding approximated Pareto fronts (i.e., non-dominated sets) produced by the four selected approaches. Table 1 shows the best training and testing performances and their corresponding numbers of hidden neurons over the 30 runs. From Figures 5 and 6 we can see that, compared with KNN and GRNN, the HRDGA and OLS algorithms achieve much smaller training and testing errors for the same network structures. KNN-trained networks produce the worst performance, because the RBF centers of the KNN algorithm are randomly selected, which causes KNN to settle on locally optimal solutions. Since the GA always seeks the global optimum, and the orthogonal result is near-optimal, the performance of OLS is comparable to that of HRDGA.

Figure 5. Training and testing performances for the resulting neural networks with different numbers of hidden neurons: (a) training data set; (b)-(d) testing data sets #1-#3.

Figure 6. The corresponding Pareto fronts (non-dominated sets): (a) training data set; (b)-(d) testing data sets #1-#3.
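The approximated Pareto fronts of Figure 6 are simply the non-dominated subsets of each algorithm's (number of neurons, SSE) pairs. Such a filter can be sketched as follows — an illustrative re-implementation with both objectives minimized, not the authors' code.

```python
import numpy as np

def nondominated(points):
    """Keep only the points not dominated by any other point (minimize all columns)."""
    pts = np.asarray(points, dtype=float)
    keep = [
        i for i, p in enumerate(pts)
        if not any(
            np.all(q <= p) and np.any(q < p)   # q dominates p
            for j, q in enumerate(pts) if j != i
        )
    ]
    return pts[keep]
```

Applied to a cloud of (complexity, error) pairs collected over all runs, this yields the trade-off curve that a designer would then pick from.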
Table 1. Structure and performance comparison among the four algorithms (KNN, GRNN, OLS and HRDGA): best SSE and corresponding number of hidden neurons for the training set and testing sets #1-#3.

Moreover, from Figure 5 we can see that, for all training algorithms, the training error decreases as network complexity increases. However, this relationship only partially holds between testing performance and network complexity. Before the number of hidden neurons reaches a certain threshold, the testing error decreases as the network complexity increases; beyond it, the testing error tends to fluctuate even as the number of hidden neurons keeps growing. This behavior indicates that the resulting networks are overfitted. The network with the best testing performance before overfitting occurs is called the optimal network and is judged to be the final, single solution by conventional NN design algorithms [4]. However, from Figures 5 and 6 and Table 1, it is very difficult to find a single optimal network that offers the best performance on all the testing data sets, since these data sets possess different traits. Therefore, instead of searching for a single optimal neural network, HRDGA is a more reasonable and applicable option, since it produces a near-complete set of near-optimal networks.

Figure 7. Relationship between ρ values and network complexity

From the simulation results, although KNN and GRNN provide worse training and testing results than the other two approaches, they have the advantage that the designer can control the network complexity by increasing or decreasing the number of neurons at will. On the other hand, although the OLS algorithm always provides near-optimal network solutions with good performance, the designer cannot manage the network structure directly. The trade-off between network performance and complexity depends entirely on the value of the tolerance parameter ρ, and the same ρ value can mean completely different trade-off features for different NN design problems.
In addition, as shown in Figure 7, the relationship between the ρ value and network topology is a nonlinear, many-to-one mapping, which may cause redundant computational effort when generating a near-complete neural network solution set. Compared with the other three training approaches, HRDGA has no trade-off parameters to design, because it treats each objective equally and independently, and its population diversity preserving techniques help build a near-uniformly distributed non-dominated solution set.

VI. CONCLUSION

From the results presented above, HRDGA shows potential for estimating neural network topology and weighting parameters for complex problems where a heuristic estimate of the neural network structure is not readily available. For the given Mackey-Glass chaotic time series prediction, HRDGA shows competitive, or even superior, performance compared with the other three selected training algorithms in terms of searching for a set of non-dominated, near-complete neural network solutions with high-quality training and testing performance. While we considered radial-basis function neural networks, the proposed hierarchical genetic algorithm may easily be extended to the design of other neural networks (e.g., feedforward, feedback, or self-organized).

REFERENCES

[1] N. Murata, S. Yoshizawa and S. Amari, "Network information criterion — determining the number of hidden units for an artificial neural network model," IEEE Trans. Neural Networks, vol. 5, 1994.
[2] X. M. Gao, S. J. Ovaska and Z. O. Hartimo, "Speech signal restoration using an optimal neural network structure," in Proc. IEEE Int. Conf. Neural Networks.
[3] X. Yao, "Evolving artificial neural networks," International Journal of Neural Systems, vol. 4, pp. 203-222, 1993.
[4] A. Doering, M. Galicki and H. Witte, "Structure optimization of neural networks with the A*-Algorithm," IEEE Trans. Neural Networks, vol. 8, 1997.
[5] S. Geman, E. Bienenstock and R. Doursat, "Neural networks and the bias/variance dilemma," Neural Computation, vol. 4, pp. 1-58, 1992.
[6] B. Zhang and D.
Cho, "Evolving neural trees for time series prediction using Bayesian evolutionary algorithms," in Proc. 1st IEEE Symp. Combination of Evolutionary Computation and Neural Networks, pp. 17-23, 2000.
[7] G. G. Yen and H. Lu, "Hierarchical genetic algorithm based neural network design," in Proc. 1st IEEE Symp. Combination of Evolutionary Computation and Neural Networks, 2000.
[8] T. Y. Ke, K. S. Tang, K. F. Man and P. C. Luk, "Hierarchical genetic fuzzy controller for a solar power plant," in Proc. IEEE Int. Symp. Industrial Electronics.
[9] C. M. Fonseca and P. J. Fleming, "An overview of evolutionary algorithms in multiobjective optimization," Evolutionary Computation, vol. 3, pp. 1-16, 1995.
[10] T. Kaylani and S. Dasgupta, "A new method for initializing radial basis function classifiers," in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics.
[11] P. D. Wasserman, Advanced Methods in Neural Computing, New York: Van Nostrand Reinhold, 1993.
[12] S. Chen, C. F. Cowan and P. M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks," IEEE Trans. Neural Networks, vol. 2, pp. 302-309, 1991.
More informationClassification as a Regression Problem
Target varable y C C, C,, ; Classfcaton as a Regresson Problem { }, 3 L C K To treat classfcaton as a regresson problem we should transform the target y nto numercal values; The choce of numercal class
More informationLossy Compression. Compromise accuracy of reconstruction for increased compression.
Lossy Compresson Compromse accuracy of reconstructon for ncreased compresson. The reconstructon s usually vsbly ndstngushable from the orgnal mage. Typcally, one can get up to 0:1 compresson wth almost
More informationA Bayes Algorithm for the Multitask Pattern Recognition Problem Direct Approach
A Bayes Algorthm for the Multtask Pattern Recognton Problem Drect Approach Edward Puchala Wroclaw Unversty of Technology, Char of Systems and Computer etworks, Wybrzeze Wyspanskego 7, 50-370 Wroclaw, Poland
More informationFor now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.
Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson
More informationMultigradient for Neural Networks for Equalizers 1
Multgradent for Neural Netorks for Equalzers 1 Chulhee ee, Jnook Go and Heeyoung Km Department of Electrcal and Electronc Engneerng Yonse Unversty 134 Shnchon-Dong, Seodaemun-Ku, Seoul 1-749, Korea ABSTRACT
More informationNumerical Heat and Mass Transfer
Master degree n Mechancal Engneerng Numercal Heat and Mass Transfer 06-Fnte-Dfference Method (One-dmensonal, steady state heat conducton) Fausto Arpno f.arpno@uncas.t Introducton Why we use models and
More informationAn Extended Hybrid Genetic Algorithm for Exploring a Large Search Space
2nd Internatonal Conference on Autonomous Robots and Agents Abstract An Extended Hybrd Genetc Algorthm for Explorng a Large Search Space Hong Zhang and Masum Ishkawa Graduate School of L.S.S.E., Kyushu
More informationStatistics for Economics & Business
Statstcs for Economcs & Busness Smple Lnear Regresson Learnng Objectves In ths chapter, you learn: How to use regresson analyss to predct the value of a dependent varable based on an ndependent varable
More informationAnnexes. EC.1. Cycle-base move illustration. EC.2. Problem Instances
ec Annexes Ths Annex frst llustrates a cycle-based move n the dynamc-block generaton tabu search. It then dsplays the characterstcs of the nstance sets, followed by detaled results of the parametercalbraton
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More informationAppendix B: Resampling Algorithms
407 Appendx B: Resamplng Algorthms A common problem of all partcle flters s the degeneracy of weghts, whch conssts of the unbounded ncrease of the varance of the mportance weghts ω [ ] of the partcles
More informationInteractive Bi-Level Multi-Objective Integer. Non-linear Programming Problem
Appled Mathematcal Scences Vol 5 0 no 65 3 33 Interactve B-Level Mult-Objectve Integer Non-lnear Programmng Problem O E Emam Department of Informaton Systems aculty of Computer Scence and nformaton Helwan
More informationSolving of Single-objective Problems based on a Modified Multiple-crossover Genetic Algorithm: Test Function Study
Internatonal Conference on Systems, Sgnal Processng and Electroncs Engneerng (ICSSEE'0 December 6-7, 0 Duba (UAE Solvng of Sngle-objectve Problems based on a Modfed Multple-crossover Genetc Algorthm: Test
More informationOn the Multicriteria Integer Network Flow Problem
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 5, No 2 Sofa 2005 On the Multcrtera Integer Network Flow Problem Vassl Vasslev, Marana Nkolova, Maryana Vassleva Insttute of
More information2016 Wiley. Study Session 2: Ethical and Professional Standards Application
6 Wley Study Sesson : Ethcal and Professonal Standards Applcaton LESSON : CORRECTION ANALYSIS Readng 9: Correlaton and Regresson LOS 9a: Calculate and nterpret a sample covarance and a sample correlaton
More informationChapter - 2. Distribution System Power Flow Analysis
Chapter - 2 Dstrbuton System Power Flow Analyss CHAPTER - 2 Radal Dstrbuton System Load Flow 2.1 Introducton Load flow s an mportant tool [66] for analyzng electrcal power system network performance. Load
More information4 Analysis of Variance (ANOVA) 5 ANOVA. 5.1 Introduction. 5.2 Fixed Effects ANOVA
4 Analyss of Varance (ANOVA) 5 ANOVA 51 Introducton ANOVA ANOVA s a way to estmate and test the means of multple populatons We wll start wth one-way ANOVA If the populatons ncluded n the study are selected
More informationAtmospheric Environmental Quality Assessment RBF Model Based on the MATLAB
Journal of Envronmental Protecton, 01, 3, 689-693 http://dxdoorg/10436/jep0137081 Publshed Onlne July 01 (http://wwwscrporg/journal/jep) 689 Atmospherc Envronmental Qualty Assessment RBF Model Based on
More informationDUE: WEDS FEB 21ST 2018
HOMEWORK # 1: FINITE DIFFERENCES IN ONE DIMENSION DUE: WEDS FEB 21ST 2018 1. Theory Beam bendng s a classcal engneerng analyss. The tradtonal soluton technque makes smplfyng assumptons such as a constant
More informationMDL-Based Unsupervised Attribute Ranking
MDL-Based Unsupervsed Attrbute Rankng Zdravko Markov Computer Scence Department Central Connectcut State Unversty New Brtan, CT 06050, USA http://www.cs.ccsu.edu/~markov/ markovz@ccsu.edu MDL-Based Unsupervsed
More informationChapter 6. Supplemental Text Material
Chapter 6. Supplemental Text Materal S6-. actor Effect Estmates are Least Squares Estmates We have gven heurstc or ntutve explanatons of how the estmates of the factor effects are obtaned n the textboo.
More informationWhich Separator? Spring 1
Whch Separator? 6.034 - Sprng 1 Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng Whch Separator? Mamze the margn to closest ponts 6.034 - Sprng 3 Margn of a pont " # y (w $ + b) proportonal
More informationComparison of the Population Variance Estimators. of 2-Parameter Exponential Distribution Based on. Multiple Criteria Decision Making Method
Appled Mathematcal Scences, Vol. 7, 0, no. 47, 07-0 HIARI Ltd, www.m-hkar.com Comparson of the Populaton Varance Estmators of -Parameter Exponental Dstrbuton Based on Multple Crtera Decson Makng Method
More informationCIS526: Machine Learning Lecture 3 (Sept 16, 2003) Linear Regression. Preparation help: Xiaoying Huang. x 1 θ 1 output... θ M x M
CIS56: achne Learnng Lecture 3 (Sept 6, 003) Preparaton help: Xaoyng Huang Lnear Regresson Lnear regresson can be represented by a functonal form: f(; θ) = θ 0 0 +θ + + θ = θ = 0 ote: 0 s a dummy attrbute
More informationThe Convergence Speed of Single- And Multi-Objective Immune Algorithm Based Optimization Problems
The Convergence Speed of Sngle- And Mult-Obectve Immune Algorthm Based Optmzaton Problems Mohammed Abo-Zahhad Faculty of Engneerng, Electrcal and Electroncs Engneerng Department, Assut Unversty, Assut,
More information4DVAR, according to the name, is a four-dimensional variational method.
4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The
More informationChapter 9: Statistical Inference and the Relationship between Two Variables
Chapter 9: Statstcal Inference and the Relatonshp between Two Varables Key Words The Regresson Model The Sample Regresson Equaton The Pearson Correlaton Coeffcent Learnng Outcomes After studyng ths chapter,
More informationLinear Feature Engineering 11
Lnear Feature Engneerng 11 2 Least-Squares 2.1 Smple least-squares Consder the followng dataset. We have a bunch of nputs x and correspondng outputs y. The partcular values n ths dataset are x y 0.23 0.19
More informationCluster Validation Determining Number of Clusters. Umut ORHAN, PhD.
Cluster Analyss Cluster Valdaton Determnng Number of Clusters 1 Cluster Valdaton The procedure of evaluatng the results of a clusterng algorthm s known under the term cluster valdty. How do we evaluate
More informationAn improved multi-objective evolutionary algorithm based on point of reference
IOP Conference Seres: Materals Scence and Engneerng PAPER OPEN ACCESS An mproved mult-objectve evolutonary algorthm based on pont of reference To cte ths artcle: Boy Zhang et al 08 IOP Conf. Ser.: Mater.
More informationSupporting Information
Supportng Informaton The neural network f n Eq. 1 s gven by: f x l = ReLU W atom x l + b atom, 2 where ReLU s the element-wse rectfed lnear unt, 21.e., ReLUx = max0, x, W atom R d d s the weght matrx to
More informationDifferential Evolution Algorithm with a Modified Archiving-based Adaptive Tradeoff Model for Optimal Power Flow
1 Dfferental Evoluton Algorthm wth a Modfed Archvng-based Adaptve Tradeoff Model for Optmal Power Flow 2 Outlne Search Engne Constrant Handlng Technque Test Cases and Statstcal Results 3 Roots of Dfferental
More informationCHAPTER 2 MULTI-OBJECTIVE GENETIC ALGORITHM (MOGA) FOR OPTIMAL POWER FLOW PROBLEM INCLUDING VOLTAGE STABILITY
26 CHAPTER 2 MULTI-OBJECTIVE GENETIC ALGORITHM (MOGA) FOR OPTIMAL POWER FLOW PROBLEM INCLUDING VOLTAGE STABILITY 2.1 INTRODUCTION Voltage stablty enhancement s an mportant tas n power system operaton.
More informationChapter 2 Real-Coded Adaptive Range Genetic Algorithm
Chapter Real-Coded Adaptve Range Genetc Algorthm.. Introducton Fndng a global optmum n the contnuous doman s challengng for Genetc Algorthms (GAs. Tradtonal GAs use the bnary representaton that evenly
More informationAssortment Optimization under MNL
Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.
More informationNon-linear Canonical Correlation Analysis Using a RBF Network
ESANN' proceedngs - European Smposum on Artfcal Neural Networks Bruges (Belgum), 4-6 Aprl, d-sde publ., ISBN -97--, pp. 57-5 Non-lnear Canoncal Correlaton Analss Usng a RBF Network Sukhbnder Kumar, Elane
More informationA Hybrid Variational Iteration Method for Blasius Equation
Avalable at http://pvamu.edu/aam Appl. Appl. Math. ISSN: 1932-9466 Vol. 10, Issue 1 (June 2015), pp. 223-229 Applcatons and Appled Mathematcs: An Internatonal Journal (AAM) A Hybrd Varatonal Iteraton Method
More informationBasic Statistical Analysis and Yield Calculations
October 17, 007 Basc Statstcal Analyss and Yeld Calculatons Dr. José Ernesto Rayas Sánchez 1 Outlne Sources of desgn-performance uncertanty Desgn and development processes Desgn for manufacturablty A general
More informationData Mining Using Surface and Deep Agents Based on Neural Networks
Assocaton for Informaton Systems AIS Electronc Lbrary (AISeL) AMCIS 2010 Proceedngs Amercas Conference on Informaton Systems (AMCIS) 8-2010 Based on Neural Networks Subhash Kak Oklahoma State Unversty,
More informationLINEAR REGRESSION ANALYSIS. MODULE IX Lecture Multicollinearity
LINEAR REGRESSION ANALYSIS MODULE IX Lecture - 30 Multcollnearty Dr. Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur 2 Remedes for multcollnearty Varous technques have
More informationResource Allocation with a Budget Constraint for Computing Independent Tasks in the Cloud
Resource Allocaton wth a Budget Constrant for Computng Independent Tasks n the Cloud Wemng Sh and Bo Hong School of Electrcal and Computer Engneerng Georga Insttute of Technology, USA 2nd IEEE Internatonal
More informationMMA and GCMMA two methods for nonlinear optimization
MMA and GCMMA two methods for nonlnear optmzaton Krster Svanberg Optmzaton and Systems Theory, KTH, Stockholm, Sweden. krlle@math.kth.se Ths note descrbes the algorthms used n the author s 2007 mplementatons
More informationChapter 8 Indicator Variables
Chapter 8 Indcator Varables In general, e explanatory varables n any regresson analyss are assumed to be quanttatve n nature. For example, e varables lke temperature, dstance, age etc. are quanttatve n
More informationThin-Walled Structures Group
Thn-Walled Structures Group JOHNS HOPKINS UNIVERSITY RESEARCH REPORT Towards optmzaton of CFS beam-column ndustry sectons TWG-RR02-12 Y. Shfferaw July 2012 1 Ths report was prepared ndependently, but was
More informationNUMERICAL DIFFERENTIATION
NUMERICAL DIFFERENTIATION 1 Introducton Dfferentaton s a method to compute the rate at whch a dependent output y changes wth respect to the change n the ndependent nput x. Ths rate of change s called the
More informationChapter 11: Simple Linear Regression and Correlation
Chapter 11: Smple Lnear Regresson and Correlaton 11-1 Emprcal Models 11-2 Smple Lnear Regresson 11-3 Propertes of the Least Squares Estmators 11-4 Hypothess Test n Smple Lnear Regresson 11-4.1 Use of t-tests
More informationSIMULTANEOUS TUNING OF POWER SYSTEM STABILIZER PARAMETERS FOR MULTIMACHINE SYSTEM
SIMULTANEOUS TUNING OF POWER SYSTEM STABILIZER PARAMETERS FOR MULTIMACHINE SYSTEM Mr.M.Svasubramanan 1 Mr.P.Musthafa Mr.K Sudheer 3 Assstant Professor / EEE Assstant Professor / EEE Assstant Professor
More informationFuzzy Boundaries of Sample Selection Model
Proceedngs of the 9th WSES Internatonal Conference on ppled Mathematcs, Istanbul, Turkey, May 7-9, 006 (pp309-34) Fuzzy Boundares of Sample Selecton Model L. MUHMD SFIIH, NTON BDULBSH KMIL, M. T. BU OSMN
More informationHomework Assignment 3 Due in class, Thursday October 15
Homework Assgnment 3 Due n class, Thursday October 15 SDS 383C Statstcal Modelng I 1 Rdge regresson and Lasso 1. Get the Prostrate cancer data from http://statweb.stanford.edu/~tbs/elemstatlearn/ datasets/prostate.data.
More informationCHAPTER 7 STOCHASTIC ECONOMIC EMISSION DISPATCH-MODELED USING WEIGHTING METHOD
90 CHAPTER 7 STOCHASTIC ECOOMIC EMISSIO DISPATCH-MODELED USIG WEIGHTIG METHOD 7.1 ITRODUCTIO early 70% of electrc power produced n the world s by means of thermal plants. Thermal power statons are the
More informationWeek 5: Neural Networks
Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple
More informationLecture 10 Support Vector Machines II
Lecture 10 Support Vector Machnes II 22 February 2016 Taylor B. Arnold Yale Statstcs STAT 365/665 1/28 Notes: Problem 3 s posted and due ths upcomng Frday There was an early bug n the fake-test data; fxed
More informationEntropy Generation Minimization of Pin Fin Heat Sinks by Means of Metaheuristic Methods
Indan Journal of Scence and Technology Entropy Generaton Mnmzaton of Pn Fn Heat Snks by Means of Metaheurstc Methods Amr Jafary Moghaddam * and Syfollah Saedodn Department of Mechancal Engneerng, Semnan
More informationEnsemble Methods: Boosting
Ensemble Methods: Boostng Ncholas Ruozz Unversty of Texas at Dallas Based on the sldes of Vbhav Gogate and Rob Schapre Last Tme Varance reducton va baggng Generate new tranng data sets by samplng wth replacement
More informationEcon107 Applied Econometrics Topic 3: Classical Model (Studenmund, Chapter 4)
I. Classcal Assumptons Econ7 Appled Econometrcs Topc 3: Classcal Model (Studenmund, Chapter 4) We have defned OLS and studed some algebrac propertes of OLS. In ths topc we wll study statstcal propertes
More informationChapter Newton s Method
Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve
More informationAn Improved Clustering Based Genetic Algorithm for Solving Complex NP Problems
Journal of Computer Scence 7 (7): 1033-1037, 2011 ISSN 1549-3636 2011 Scence Publcatons An Improved Clusterng Based Genetc Algorthm for Solvng Complex NP Problems 1 R. Svaraj and 2 T. Ravchandran 1 Department
More informationCOMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
Avalable onlne at http://sck.org J. Math. Comput. Sc. 3 (3), No., 6-3 ISSN: 97-537 COMPARISON OF SOME RELIABILITY CHARACTERISTICS BETWEEN REDUNDANT SYSTEMS REQUIRING SUPPORTING UNITS FOR THEIR OPERATIONS
More informationPattern Classification
Pattern Classfcaton All materals n these sldes ere taken from Pattern Classfcaton (nd ed) by R. O. Duda, P. E. Hart and D. G. Stork, John Wley & Sons, 000 th the permsson of the authors and the publsher
More informationThe Second Anti-Mathima on Game Theory
The Second Ant-Mathma on Game Theory Ath. Kehagas December 1 2006 1 Introducton In ths note we wll examne the noton of game equlbrum for three types of games 1. 2-player 2-acton zero-sum games 2. 2-player
More informationGlobal Sensitivity. Tuesday 20 th February, 2018
Global Senstvty Tuesday 2 th February, 28 ) Local Senstvty Most senstvty analyses [] are based on local estmates of senstvty, typcally by expandng the response n a Taylor seres about some specfc values
More informationWavelet chaotic neural networks and their application to continuous function optimization
Vol., No.3, 04-09 (009) do:0.436/ns.009.307 Natural Scence Wavelet chaotc neural networks and ther applcaton to contnuous functon optmzaton Ja-Ha Zhang, Yao-Qun Xu College of Electrcal and Automatc Engneerng,
More informationDr. Shalabh Department of Mathematics and Statistics Indian Institute of Technology Kanpur
Analyss of Varance and Desgn of Experment-I MODULE VII LECTURE - 3 ANALYSIS OF COVARIANCE Dr Shalabh Department of Mathematcs and Statstcs Indan Insttute of Technology Kanpur Any scentfc experment s performed
More informationCS 3710: Visual Recognition Classification and Detection. Adriana Kovashka Department of Computer Science January 13, 2015
CS 3710: Vsual Recognton Classfcaton and Detecton Adrana Kovashka Department of Computer Scence January 13, 2015 Plan for Today Vsual recognton bascs part 2: Classfcaton and detecton Adrana s research
More informationChapter 5. Solution of System of Linear Equations. Module No. 6. Solution of Inconsistent and Ill Conditioned Systems
Numercal Analyss by Dr. Anta Pal Assstant Professor Department of Mathematcs Natonal Insttute of Technology Durgapur Durgapur-713209 emal: anta.bue@gmal.com 1 . Chapter 5 Soluton of System of Lnear Equatons
More informationMLE and Bayesian Estimation. Jie Tang Department of Computer Science & Technology Tsinghua University 2012
MLE and Bayesan Estmaton Je Tang Department of Computer Scence & Technology Tsnghua Unversty 01 1 Lnear Regresson? As the frst step, we need to decde how we re gong to represent the functon f. One example:
More informationMultilayer Perceptrons and Backpropagation. Perceptrons. Recap: Perceptrons. Informatics 1 CG: Lecture 6. Mirella Lapata
Multlayer Perceptrons and Informatcs CG: Lecture 6 Mrella Lapata School of Informatcs Unversty of Ednburgh mlap@nf.ed.ac.uk Readng: Kevn Gurney s Introducton to Neural Networks, Chapters 5 6.5 January,
More information