Electromagnetic Algorithm for tuning the structure and parameters of Neural Networks


Ayad Mashaan Turky, Salwan Abdullah and Nasser R. Sabar

Ayad Mashaan Turky and Salwan Abdullah are with Universiti Kebangsaan Malaysia, UKM Bangi, Selangor, Malaysia (e-mail: ayadalrashid@gmail.com, salwan@ftsm.ukm.my). Ayad is also affiliated with Swinburne University of Technology, Victoria, Australia. Nasser R. Sabar is with The University of Nottingham Malaysia Campus, Jalan Broga, Semenyih, Selangor, Malaysia (e-mail: Nasser.Sabar@nottingham.edu.my).

Abstract: The electromagnetic algorithm is a population-based meta-heuristic which imitates the attraction and repulsion of sample points. In this paper, we propose an electromagnetic algorithm that simultaneously tunes the structure and the parameters of a feed-forward neural network. Each solution in the electromagnetic algorithm contains both the design structure and the parameter values of the neural network; the neural network later uses this solution as its configuration. The classification accuracy returned by the neural network represents the quality of the solution. The performance of the proposed method is verified on well-known classification benchmarks and compared against the latest methodologies in the literature. Empirical results demonstrate that the proposed algorithm obtains competitive results when compared to the best-known results in the literature.

I. INTRODUCTION

Neural networks (NNs) are computing techniques inspired by nature which have been successfully used to solve a wide variety of problems such as pattern recognition [1], signal processing [2] and optimisation [3]. The use of a neural network to solve real-world problems requires some critical decisions, which may have a negative effect on solving a given problem. For example, the optimum network structure and parameters, which have a direct effect on the solution quality, are the most important attributes of the neural network to be determined [3]. In multidimensional space, tuning the neural network structure and parameters can be considered a complex optimisation problem, since each point represents a potential neural network with a different network structure and link weights [3]. Hence, the use of fixed parameters and a fixed structure for the overall connectivity between the neurons may not produce good results. A network with few neurons may not achieve good performance due to its limited information-processing power. On the other hand, a network that consists of a large number of neurons may contain many redundant connections and is also computationally expensive [4, 5].

During the last decade, several population-based methods have been developed to generate the appropriate structure and parameters of a neural network. Ilonen et al. [6] employed differential evolution for training the feed-forward neural network. The proposed algorithm produced promising results when tested on pattern classification and function approximation. Leung et al. [4] proposed a genetic algorithm to tune the structure and parameter values of a neural network. In this approach, a fully connected three-layer feed-forward neural network with switches is used, and the number of hidden nodes is manually determined, starting with a small number and iteratively increasing it until the desired learning performance is achieved. The algorithm was tested on sunspot forecasting and associative memory. Tsai et al. [7] employed a hybrid Taguchi-genetic algorithm to tune the structure and the parameter values of a neural network. In this approach, the same model proposed by Leung et al. [4] is used.
The proposed algorithm showed excellent results when used to estimate the number of sunspots and to realise associative memory. Dai et al. [8] introduced a new population-based heuristic search algorithm, the seeker optimisation algorithm, which is used to tune the structure and the parameter values of a neural network. The proposed algorithm showed good results when tested on pattern classification and function approximation. Zhao et al. [9] applied a cooperative binary-real particle swarm optimisation to tune the structure and parameter values of a neural network. In this method, binary particle swarm optimisation (PSO) is used to tackle the set of switches, where each switch has either a 0 or a 1 value, whilst the basic particle swarm optimisation is used to optimise the weight values. The proposed binary-real PSO algorithm achieved state-of-the-art results when tested on estimating the number of sunspots.

The successes of the above population-based methods are the main motivating factors for proposing a new population-based method, based on the electromagnetic algorithm (EM), for tuning both the structure and the parameter values of a feed-forward neural network. EM is a population-based meta-heuristic method that imitates the attraction and repulsion of sample points and moves them towards high-quality solutions while avoiding local optima [10]. It has been successfully employed to solve several optimisation problems such as examination timetabling problems [11], vehicle routing problems [12] and job shop scheduling [13], which makes it a worthy candidate for solving real-world problems. In addition, to our knowledge, it has not been comprehensively studied in the context of tuning both the structure and the parameter values of a feed-forward neural network [14-16].

This paper is organised as follows. Section 2 presents the proposed algorithm. Experimental results are discussed in Section 3. Finally, some brief concluding comments are provided in Section 4.

II. THE PROPOSED ALGORITHM

In this section, we present the NN with link switches, followed by a description of the EM algorithm and its application to tuning the structure and parameters of the neural network.

A. The NN with link switches

In general, the structure of a neural network starts with a fixed number of input, hidden and output nodes. These include the set of parameters and the network structure. Hence, the use of fixed parameters and a fixed structure may not yield good results within a given training period. Small networks fall into local minima too easily and may not achieve good results due to their limited information-processing power. On the other hand, large networks may take a long time to learn the characteristics of the data and are computationally too expensive [4] [7] [8]. In this study, we use the fully connected three-layer neural network with link switches proposed by Leung et al. [4]. In order to choose an optimal number of hidden nodes, their number is first fixed between three and seven in order to test the learning performance. The input-output relationship can be defined as follows:

$$ y_k = \mathrm{logsig}\Bigg( \sum_{j=1}^{n_h} w_{jk}\, t_{jk} \cdot \mathrm{logsig}\Bigg( \sum_{i=1}^{n_i} s_{ij}\, v_{ij}\, z_i - s_j^{b}\, b_j^{h} \Bigg) - t_k^{b}\, b_k^{o} \Bigg), \qquad k = 1, 2, \ldots, n_o \tag{1} $$

where $z_1, z_2, \ldots, z_{n_i}$ and $y_1, \ldots, y_{n_o}$ are the inputs and outputs of the neural network, respectively; $n_i$ represents the number of inputs; $n_o$ represents the number of outputs; $n_h$ represents the number of hidden nodes; $v_{ij}$ denotes the weight of the link between the $j$th hidden node and the $i$th input node; $w_{jk}$ denotes the weight of the link between the $k$th output node and the $j$th hidden node; $b_j^{h}$ and $b_k^{o}$ denote the biases for the hidden and output nodes, respectively; $s_{ij}$ represents the switch on the link between the $i$th input node and the $j$th hidden node; $t_{jk}$ represents the switch on the link between the $j$th hidden node and the $k$th output node; and $s_j^{b}$ and $t_k^{b}$ denote the bias switches of the hidden and output nodes, respectively. If a switch value is equal to 1, there is a link between two nodes from different layers; otherwise there is no link between these two nodes. logsig(.) is a sigmoid function that is defined in Eq. (2):

$$ \mathrm{logsig}(x) = \frac{1}{1 + e^{-x}} \tag{2} $$

For each dimension, the connection weight values are tuned to be within [-1, 1] and each link switch bit is either 0 or 1, as in [4]. Following [4], a unit step function is introduced on each link, defined as in Eq. (3):

$$ u(\alpha) = \begin{cases} 1, & \text{if } \alpha \ge 0.5 \\ 0, & \text{if } \alpha < 0.5 \end{cases} \tag{3} $$
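To make Eqs. (1)-(3) concrete, the following Python sketch evaluates the switched network for one input vector. It is an illustration written for this text, not code from the paper; the array names mirror the symbols defined above, and NumPy is assumed.

```python
import numpy as np

def logsig(x):
    # Sigmoid activation of Eq. (2): 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def step(alpha):
    # Unit step of Eq. (3): maps a real value to a switch bit in {0, 1}
    return (np.asarray(alpha) >= 0.5).astype(float)

def forward(z, V, W, bh, bo, s, t, sb, tb):
    """Eq. (1): fully connected three-layer NN with link switches.

    z      : inputs, shape (n_i,)
    V, W   : input-to-hidden and hidden-to-output weights, (n_h, n_i), (n_o, n_h)
    bh, bo : hidden and output biases, (n_h,), (n_o,)
    s, t   : link switch bits matching V and W elementwise
    sb, tb : bias switch bits, (n_h,), (n_o,)
    """
    hidden = logsig((s * V) @ z - sb * bh)      # inner sum of Eq. (1)
    return logsig((t * W) @ hidden - tb * bo)   # outer sum of Eq. (1)

# Example: a random 4-3-2 network whose switch bits come from step()
rng = np.random.default_rng(0)
n_i, n_h, n_o = 4, 3, 2
y = forward(rng.random(n_i),
            V=rng.uniform(-1, 1, (n_h, n_i)), W=rng.uniform(-1, 1, (n_o, n_h)),
            bh=rng.uniform(-1, 1, n_h), bo=rng.uniform(-1, 1, n_o),
            s=step(rng.random((n_h, n_i))), t=step(rng.random((n_o, n_h))),
            sb=step(rng.random(n_h)), tb=step(rng.random(n_o)))
```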
B. Electromagnetic Algorithm (EM) for tuning the neural networks

Nature-inspired algorithms have proven to be an effective solution method for various optimisation problems [7, 8]. The electromagnetic algorithm (EM) is a recent nature-inspired, population-based meta-heuristic algorithm introduced by Birbil and Fang [10]. EM simulates the attraction-repulsion mechanism of electromagnetic theory in exploring the multi-dimensional solution space of a given problem. Each point represents a solution, and each solution is associated with a charge that represents its quality. The solutions exert forces (attraction or repulsion) on other solutions, and this attraction-repulsion force is used to explore the solution search space. The main idea behind attraction-repulsion is that a bad-quality solution repels other solutions from moving towards its direction, whereas a good-quality solution attracts other solutions to move towards its direction. EM has four steps, shown in the flowchart of Fig. 1.

Fig. 1. Flowchart of the EM algorithm for tuning neural networks (EM-NN): Start, then Initialisation (set the population size M, the number of iterations used as the stopping criterion MAXITER, and the number of iterations for the local search LSITER; generate a population of solutions at random), then Population evaluation (evaluate each solution by calling the NN to solve the given problem dataset using the structure and parameter values of that solution), then Local search (at each iteration, the modified solution is evaluated in the same way), then Total force calculation, then Movement (each moved solution is evaluated in the same way). If the stopping criterion is not satisfied, the loop repeats; otherwise, the best solution is returned.
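Read as code, the flowchart translates into a short driver loop. The sketch below is our illustration, not the authors' implementation: the helper routines local_search, total_forces and move are sketched after the descriptions of steps 2-4 below, the default delta = 0.05 is an assumed value (the paper does not report it), and the probabilistic movement of the binary part is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def em_nn(fitness, n_bin, n_real, lo, hi,
          M=100, MAXITER=100, LSITER=10, delta=0.05):
    """Skeleton of the EM-NN loop of Fig. 1 (illustration only).

    fitness(b, x) scores one solution by building the NN from the switch
    bits b and the weights x and classifying the given dataset.
    """
    # Step 1, initialisation: random switch bits, random weights (Eq. (4) below)
    B = rng.integers(0, 2, size=(M, n_bin)).astype(float)
    X = lo + rng.random((M, n_real)) * (hi - lo)
    # Population evaluation
    f = np.array([fitness(B[i], X[i]) for i in range(M)])

    for _ in range(MAXITER):
        B, X, f = local_search(B, X, f, fitness, LSITER, delta)  # step 2
        F = total_forces(X, cost=-f)   # step 3; fitness is maximised, so cost = -f
        X = move(X, F, lo, hi, keep=int(np.argmax(f)))           # step 4 (real part)
        f = np.array([fitness(B[i], X[i]) for i in range(M)])    # re-evaluate
    best = int(np.argmax(f))
    return B[best], X[best]
```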

1. Initialisation: In this step, the EM parameters are initialised, i.e., the population size (M), the number of iterations (MAXITER) and the number of iterations for the local search (LSITER). A population of solutions is randomly created. In this study, a one-dimensional vector represents the solution. The size of the vector is equal to the number of decision variables in the given problem, and each cell in the vector represents one decision variable. The solution is divided into two parts: the network structure and the connection weights. Two types of representation (binary and real) are used: the binary representation encodes the network structure, whilst the real representation encodes the regularisation parameter and the connection weights. For the binary representation, the solution is generated by randomly assigning either zero or one to each variable. For the real representation, the solution is randomly generated by assigning to each decision variable a random value within its upper and lower bounds, as calculated in Eq. (4):

$$ x = L_x + \mathrm{rand}[0,1] \cdot (U_x - L_x) \tag{4} $$

where rand[0,1] returns a random number in [0,1], and $U_x$ and $L_x$ are the upper and lower bounds of the decision variable, respectively.

The proposed method is employed to learn the neural network model approximating the given input-output relationships:

$$ y^d(t) = g\big(z^d(t)\big), \qquad t = 1, 2, \ldots, n_d \tag{5} $$

where $z^d(t) = [z_1^d(t)\; z_2^d(t)\; \cdots\; z_{n_i}^d(t)]$ and $y^d(t) = [y_1^d(t)\; y_2^d(t)\; \cdots\; y_{n_o}^d(t)]$ represent the inputs and the desired outputs of an unknown nonlinear function $g(\cdot)$, respectively, and $n_d$ is the number of input-output pairs. The quality of each solution is calculated by creating the neural network using the structure and the weight values encoded in the current solution. The objective is to maximise the fitness value defined in Eq. (6) and thereby to minimise the error rate defined in Eq. (7):

$$ f = \frac{1}{1 + \mathrm{err}} \tag{6} $$

$$ \mathrm{err} = \frac{1}{n_d} \sum_{t=1}^{n_d} \frac{1}{n_o} \sum_{k=1}^{n_o} \big| y_k^d(t) - y_k(t) \big| \tag{7} $$

where err represents the mean absolute error (MAE). A sample of the solution representation is given in Fig. 2.

Fig. 2. The representation of the solution, showing the weight values and the switches.

2. Local search: In this step, a local search procedure is conducted to improve the quality of the solutions generated in the initialisation step. The local search procedure has two parameters, i.e., the number of iterations (LSITER) and the multiplier for the neighbourhood search (δ), which is the amount by which a solution is changed when it is updated. At each iteration, for the real-coded representation, the local search procedure generates a neighbouring solution by adding or subtracting δ to or from the current solution. Adding and subtracting have the same probability, fixed to 0.5: if the generated random number is less than 0.5, then δ is subtracted from the current solution; otherwise, δ is added to the current solution (Birbil and Fang [10]). For the binary representation, the neighbouring solution is generated by flipping a decision variable from zero to one or from one to zero. If the neighbouring solution is better than the current one, it is accepted and becomes the current solution for the next iteration; otherwise, it is rejected. This process is repeated for a pre-defined number of iterations (LSITER).
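A minimal sketch of this local search, assuming the two-part encoding above (one binary switch vector and one real weight vector per solution); flipping a single randomly chosen bit per move is our reading of the text, and fitness is the same hypothetical scoring callable as in the driver sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def local_search(B, X, f, fitness, LSITER, delta):
    """Step 2: first-improvement local search (illustrative sketch).

    B, X : binary and real parts of the population, shapes (M, n_bin), (M, n_real)
    f    : current fitness values, shape (M,)
    """
    M = len(f)
    for _ in range(LSITER):
        for i in range(M):
            nb, nx = B[i].copy(), X[i].copy()
            # Real part: add or subtract delta with equal probability 0.5
            # (applied per variable in this sketch)
            sign = np.where(rng.random(nx.shape) < 0.5, -1.0, 1.0)
            nx += sign * delta
            # Binary part: flip one randomly chosen switch bit
            j = rng.integers(nb.size)
            nb[j] = 1.0 - nb[j]
            nf = fitness(nb, nx)
            if nf > f[i]:                 # accept only improvements
                B[i], X[i], f[i] = nb, nx, nf
    return B, X, f
```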
3. Total force calculation: In this step, the charge of each solution is calculated based on the objective function. Based on its charge value, a solution will exert either attraction or repulsion. The charge of each solution is calculated as stated in Eq. (8):

$$ q^i = \exp\left( -n \,\frac{f(x^i) - f(x^{\mathrm{best}})}{\sum_{k=1}^{m} \big( f(x^k) - f(x^{\mathrm{best}}) \big)} \right) \tag{8} $$

where $n$ is the dimension of the solutions, $m$ is the population size and $x^{\mathrm{best}}$ is the best solution in the population. This implies that good-quality solutions have a higher charge and consequently a stronger attraction. Thus, the solutions will be attracted towards the good-quality solutions and repelled from the bad-quality solutions. Based on the calculated charges, the total force $F^i$ exerted on each solution is calculated as shown in Eq. (9):

$$ F^i = \sum_{j \ne i}^{m} \begin{cases} (x^j - x^i)\, \dfrac{q^i q^j}{\lVert x^j - x^i \rVert^2}, & \text{if } f(x^j) < f(x^i) \\[2ex] (x^i - x^j)\, \dfrac{q^i q^j}{\lVert x^j - x^i \rVert^2}, & \text{if } f(x^j) \ge f(x^i) \end{cases} \tag{9} $$

From Eq. (9), the total force between two solutions is proportional to the product of their charges and inversely proportional to the distance between them.
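The charge and force computations translate directly into array code. The sketch below is again our illustration: it treats the objective as a cost to be minimised, following Birbil and Fang's original formulation [10], and it also includes a simple version of the movement step of Eq. (10) described next, in which the feasible-range term RNG is simplified to a scaling by the bounds followed by clipping (our assumption).

```python
import numpy as np

rng = np.random.default_rng(2)

def total_forces(X, cost):
    """Eqs. (8)-(9): charges and total forces (illustrative sketch).

    X    : real-coded solutions, shape (M, n)
    cost : objective values to be minimised, shape (M,)
    """
    M, n = X.shape
    best = cost.min()
    denom = np.sum(cost - best) + 1e-12        # guard against division by zero
    q = np.exp(-n * (cost - best) / denom)     # Eq. (8)
    F = np.zeros_like(X)
    for i in range(M):
        for j in range(M):
            if i == j:
                continue
            d = X[j] - X[i]
            fij = q[i] * q[j] / (np.dot(d, d) + 1e-12)
            # A better solution attracts, a worse one repels (Eq. (9))
            F[i] += d * fij if cost[j] < cost[i] else -d * fij
    return F

def move(X, F, lo, hi, keep):
    """Eq. (10): move each solution along its normalised total force by a
    random step length lambda; the best solution (index keep) stays fixed."""
    lam = rng.random(X.shape)
    norm = np.linalg.norm(F, axis=1, keepdims=True) + 1e-12
    Xn = np.clip(X + lam * (F / norm) * (hi - lo), lo, hi)  # simplified RNG handling
    Xn[keep] = X[keep]
    return Xn
```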

4. Movement: Based on the total force calculated in Eq. (9), each solution is moved in the direction of the total force by a random step length, using Eq. (10). The random step length λ takes a random value between zero and one. A positive force moves the solution towards the upper bound, whilst a negative force moves the solution towards the lower bound. Note that the best solution does not move and only attracts other solutions. For the real-coded representation, the movement is calculated based on Eq. (10):

$$ x^i = x^i + \lambda \, \frac{F^i}{\lVert F^i \rVert} \,(\mathrm{RNG}), \qquad i = 1, 2, \ldots, m \tag{10} $$

where RNG denotes the allowed range of movement towards the corresponding bound. For the binary representation, a solution is moved based on a given probability, as follows. For the attraction case, a solution is moved according to the attraction probability (PA), which is fixed to a large number between zero and one. For the repulsion case, a solution is repelled based on the repulsion probability (PR), which is fixed to a small number between zero and one. For example, assume that PA = 0.8 and PR = 0.1. If the state is attraction, the solution is moved as follows: generate a random number between zero and one; if it is less than PA, flip the current decision variable; otherwise, keep it unchanged.

III. EXPERIMENTAL RESULTS

In this section, the performance of the proposed EM-NN algorithm is analysed using five datasets obtained from the University of California at Irvine (UCI) Machine Learning Repository. These datasets are chosen because they represent a variety of important real-world problems and because many researchers have used them to evaluate the performance of their algorithms. The characteristics of these datasets are presented in Table 1.

TABLE 1
CHARACTERISTICS OF THE DATASETS USED

Dataset         Attributes   Instances   Classes
Australian      14           690         2
Breast cancer   9            699         2
German          24           1000        2
Iris            4            150         3
Pima            8            768         2

In order to determine appropriate values for the EM parameters (i.e., the population size, the stopping criterion and the number of iterations for the local search), some preliminary experiments were conducted. The preliminary results in terms of classification accuracy are presented in Table 2, where the algorithm performs best with a population size of 100 (presented in bold) on the two datasets.

TABLE 2
RESULTS OF USING DIFFERENT POPULATION SIZES

Dataset         Population size, M
Pima
Breast cancer

We then tuned the number of iterations for the stopping criterion (MAXITER) and for the local search (LSITER), fixing the population size at 100. We examined MAXITER with three different values, i.e., 50, 100 and 150, and LSITER with 5, 10 and 15, as shown in Table 3.

TABLE 3
CLASSIFICATION ACCURACY WITH DIFFERENT VALUES OF MAXITER AND LSITER

Dataset         MAXITER      LSITER
Pima
Breast cancer

Table 3 presents the classification accuracy, with the best results in bold. From Table 3, it is clear that the best classification accuracy is obtained when MAXITER = 100 and LSITER = 10. The final parameter settings of the EM algorithm are presented in Table 4.

TABLE 4
PARAMETER SETTINGS

Parameter                                          Value
Population size, M                                 100
No. of iterations for the local search (LSITER)    10
EM stopping condition (MAXITER)                    100

In our experiments, each dataset is divided into training data (75%) and a test set (25%), as in [8]. The experiments were executed 50 times with different seed numbers on a PC with a 2.4 GHz processor and 4 GB RAM under the Windows 7 operating system. The performance of the proposed algorithm is compared with the state-of-the-art approaches summarised in Table 5. Note that these approaches are chosen based on their ability to produce the best-known results in the literature.
TABLE 5
ACRONYMS OF THE STATE-OF-THE-ART APPROACHES IN THE COMPARISONS

Symbol              Reference   Description
PSO+SVM             [19]        Particle swarm optimization for parameter determination and feature selection of support vector machines.
GA+SVM              [20]        A GA-based feature selection and parameters optimization for support vector machines.
SS-based ensemble   [21]        Enhancing the classification accuracy by scatter-search-based ensemble approach.
SA-SVM              [22]        Parameter determination of support vector machine and feature selection using simulated annealing approach.
Ensembles           [23]        Creating diversity in ensembles using artificial data.

The results of the comparison are presented in Table 6, where the best results are shown in bold. The comparison shows that our proposed approach (EM-NN) obtains three new best results out of the five tested datasets. The attraction-repulsion mechanism within the EM algorithm, which aims to move the solutions towards the high-quality solutions and to keep a solution from being trapped in a local optimum, helps in achieving higher classification accuracy on the tested datasets. This shows that EM is an appropriate method for simultaneously tuning the structure and parameters of the NN for classification problems.

TABLE 6
CLASSIFICATION ACCURACY FOR EM-NN AND THE OTHER METHODS

Dataset         EM-NN   PSO+SVM   GA+SVM   SS-based ensemble   SA-SVM   Ensembles
Australian
Breast cancer
German
Iris
Pima

A dash indicates that a dataset has not been attempted.

The results are further analysed by conducting Friedman's multiple-comparison statistical test at a 95% significance level (α = 0.05) to see whether there is any significant difference between EM-NN and the compared methods (PSO+SVM, GA+SVM and SS-based ensemble) [24]. Note that only those methods that have been tested on all datasets are considered in this statistical test. If significant differences are detected by Friedman's test, post hoc methods (Holm's and Hochberg's tests) are conducted to obtain the adjusted p-values for each comparison between the control algorithm (the best-performing one, i.e., the one with the 1st rank according to Friedman's test) and the rest of the algorithms. The p-value computed by Friedman's test is 0.000, which is below the significance level (α = 0.05). This indicates that there is a significant difference among the observed results. Table 7 summarises the rankings obtained by Friedman's test, where EM-NN is ranked first.

TABLE 7
AVERAGE RANKINGS OF FRIEDMAN'S TEST

#   Algorithm           Ranking
1   EM-NN               1.4
2   PSO+SVM             3.5
3   GA+SVM              3.5
4   SS-based ensemble   1.6

A post hoc method is conducted to obtain the adjusted p-values for each comparison between EM-NN (as the control method) and the PSO+SVM, GA+SVM and SS-based ensemble algorithms. Table 8 shows the adjusted p-values, which reveal that EM-NN is better than PSO+SVM and GA+SVM at α = 0.05. However, there is no significant difference between EM-NN and the SS-based ensemble (the adjusted p-value is higher than 0.05). Nevertheless, the results reported in Table 6 clearly show that EM-NN obtains three new best results out of the five tested datasets, whereas the SS-based ensemble is only marginally better on two datasets.

TABLE 8
ADJUSTED P-VALUES OF THE COMPARED METHODS

Algorithm           Unadjusted P    P Holm    P Hochberg
PSO+SVM
GA+SVM
SS-based ensemble
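For readers who wish to reproduce this kind of analysis, the Friedman test is available in SciPy. The sketch below is an illustration: the accuracy vectors are made-up placeholders, not the paper's results, and since SciPy does not provide Holm's or Hochberg's adjustments, a small Holm step-down correction is written by hand.

```python
from scipy.stats import friedmanchisquare

# Hypothetical per-dataset accuracies (one entry per dataset, 5 datasets)
# for the four methods compared on all datasets -- NOT the paper's data.
em_nn   = [0.87, 0.97, 0.78, 0.99, 0.80]
pso_svm = [0.85, 0.95, 0.75, 0.97, 0.77]
ga_svm  = [0.84, 0.95, 0.76, 0.96, 0.77]
ss_ens  = [0.86, 0.97, 0.77, 0.98, 0.79]

stat, p = friedmanchisquare(em_nn, pso_svm, ga_svm, ss_ens)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

def holm(pvalues):
    """Holm step-down adjustment of pairwise p-values vs. the control method."""
    order = sorted(range(len(pvalues)), key=lambda i: pvalues[i])
    adjusted, running = [0.0] * len(pvalues), 0.0
    for rank, i in enumerate(order):
        running = max(running, (len(pvalues) - rank) * pvalues[i])
        adjusted[i] = min(1.0, running)
    return adjusted

print(holm([0.001, 0.004, 0.2]))  # e.g. unadjusted pairwise p-values
```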
IV. CONCLUSION

In this paper, we have presented a methodology that handles the problem of tuning the structure and the parameters of a three-layer, fully connected feed-forward neural network with link switches, based on the principle of the electromagnetism-like mechanism. The performance of the approach is tested on the classification of benchmark datasets, and comparisons are made with a set of state-of-the-art approaches from the literature. The approach produces three best results and is consistently good across all the benchmark problems in comparison with the other approaches studied in the literature. With the help of the electromagnetism-like mechanism, which moves sample points (solutions) towards high-quality solutions while avoiding local optima by utilising a calculated force value, our approach is capable of finding better solutions for classification problems. We are confident that a significant contribution towards producing high-quality solutions to classification problems has been made.

REFERENCES
