
A PARALLEL APPROACH FOR BACKPROPAGATION LEARNING OF NEURAL NETWORKS

CRESPO, M., PICCOLI, F., PRINTISTA, M. 1, GALLARD, R. 2

Grupo de Interés en Sistemas de Computación 3
Departamento de Informática
Universidad Nacional de San Luis
Ejército de los Andes, San Luis, Argentina
{mcrespo,mpiccoli,mprinti,rgallard}@inter2.unsl.edu.ar
{mcrespo,mpiccoli,mprinti,rgallard}@unsl.edu.ar
Phone: , Fax: ,

ABSTRACT

Learning algorithms for neural networks involve CPU-intensive processing, and consequently great effort has been devoted to developing parallel implementations intended to reduce the learning time. This work briefly describes parallel schemes for a backpropagation algorithm and proposes a distributed system architecture for parallel training with a pattern partitioning scheme. Under this approach, weight changes are computed concurrently, exchanged between system components and adjusted accordingly until the whole parallel learning process is completed. Some comparative results are also shown.

KEYWORDS: Neural networks, parallelised backpropagation, partitioning schemes, pattern partitioning, system architecture.

1 Members of the Research Group.
2 Full Professor at the Informatics Department. Head of the Research Group.
3 The Research Group is supported by the Universidad Nacional de San Luis and the CONICET (Science and Technology Research Council).

Departamento de Informática - Facultad de Ciencias Exactas

1. INTRODUCTION

Neural nets learn by training, not by being programmed. Learning is the process of adjustment of the neural network to external stimuli. After learning, it is expected that the network will show recall and generalization abilities. By recall we mean the capability to recognise inputs from the training set, that is to say, those patterns presented to the network during the learning process. By generalization we mean the ability to produce reasonable outputs associated with new inputs from the same total pattern space. These properties are attained during the slow process of learning. Many approaches to speed up the training process have been devised by means of parallelism. The backpropagation algorithm (BP) is one of the most popular learning algorithms, and many approaches to parallel implementations have been studied [7], [12], [13]. To parallelise BP, either the network or the training pattern space is partitioned. In network partitioning, the nodes and weights of the neural network are distributed among diverse processors; hence the computations due to node activations, node errors and weight changes are parallelised. In pattern partitioning, the whole neural net is replicated in different processors and the weight changes due to distinct training patterns are parallelised. This paper shows the design of a distributed support for parallel learning of neural networks using a pattern partitioning approach. Results on speedup in learning and its impact on recall and generalization are shown.

2. BACKPROPAGATION NETWORKS

A backpropagation neural network is composed of at least three unit layers: an input layer, an output layer and one or more hidden (intermediate) layers. Figure 1 shows a three-layer BP network.

Fig. 1 - A Backpropagation Network (input units x_1, ..., x_n; one hidden layer; output units y_1, ..., y_m)

Given a set of p 2-tuples of vectors (x_1, y_1), (x_2, y_2), ..., (x_p, y_p), which are samples of a functional mapping y = φ(x): x ∈ R^N, y ∈ R^M, the goal is to train the network so that it learns an approximation o = y' = φ'(x).
The learning process is carried out by using a two-phase cycle: the propagation or forward phase and the adaptation or backward phase [6], [11], [12]. Once an input pattern is applied as an excitation to the input layer, it is propagated throughout the remaining layers up to the output layer, where the current output of the network is generated. This output is contrasted against the desired output and an error value is computed as a function of the outcome error in each output unit. This makes up the forward phase. The learning process includes the adjustment of the weights (adaptation) in the network in order

to minimise the error function. For this reason, the error obtained is propagated back from each node of the output layer to the corresponding (contributing) nodes of the intermediate layer. However, each of these intermediate units receives only a portion of the total error, according to its relative contribution to the current output. This process is repeated from one layer to the previous one until each node in the network receives an error signal describing its relative contribution to the total error. Based on this signal, the weights are corrected in a proportion directly related to the error in the connected units. This makes up the backward phase. During this process, as the training progresses, the nodes in the intermediate levels become organised in such a way that different nodes recognise different features of the total training space. After training, when a new input pattern is supplied, units in the hidden layers will generate active outputs if such an input pattern preserves the features they (individually) learnt. Conversely, if such an input pattern does not contain those known features, then these units will be inclined to inhibit their outputs. It has been shown that during training backpropagation neural nets tend to develop internal relationships between units, so as to categorise the training data into pattern classes. This association may or may not be evident to the observer. The point here is that, firstly, the net finds an internal representation which enables it to generate appropriate responses when the patterns used during training are subsequently submitted to the network and, secondly, the network classifies patterns never seen before according to the features they share (their resemblance) with the training patterns.

3. PATTERN PARTITIONING FOR PARALLEL BACKPROPAGATION

Pattern partitioning replicates the neural net structure (units, edges and associated weights) at each processor, and the training set is equally distributed among the processors. Each processor performs the propagation and the adaptation phases for the local set of patterns.
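As an illustration, the two phases each processor applies to a local pattern can be sketched as follows. This is a minimal sketch, not the authors' implementation: the function name, the omission of bias terms and the use of the logistic sigmoid (whose derivative is o(1 - o)) are assumptions made for compactness.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def forward_backward(x, y, W_h, W_o, eta=0.1):
    """One propagation (forward) + adaptation (backward) cycle for a single
    pattern x with desired output y. W_h is the hidden-layer weight matrix
    (L rows of N weights); W_o is the output-layer matrix (M rows of L)."""
    # Forward phase: hidden activations, then the current network output.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_h]
    out = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in W_o]
    # Backward phase: output error terms (sigmoid derivative is o * (1 - o)).
    d_out = [(yk - ok) * ok * (1.0 - ok) for yk, ok in zip(y, out)]
    # Each hidden unit receives a portion of the error proportional to its
    # contribution, i.e. weighted by its connections to the output units.
    d_hid = [h * (1.0 - h) * sum(d * W_o[k][j] for k, d in enumerate(d_out))
             for j, h in enumerate(hidden)]
    # Correct the weights in proportion to the error in the connected units.
    for k, d in enumerate(d_out):
        for j, h in enumerate(hidden):
            W_o[k][j] += eta * d * h
    for j, d in enumerate(d_hid):
        for i, xi in enumerate(x):
            W_h[j][i] += eta * d * xi
    return out
```

Repeated over a training set, these cycles drive the error down; the parallel scheme of Section 4 differs only in deferring the application of the weight changes to the end of each epoch.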
Also, each processor accumulates the weight changes produced by the local patterns, which are afterwards exchanged with the other processors to update the weight values. This scheme is suitable for problems with a large set of training patterns and fits properly on local-memory architectures [1], [8], [10]. Different training techniques, or regimes, can be implemented in a backpropagation algorithm to carry out the learning process [6], [11], [12]. In this work, because it is appropriate for a distributed environment, the choice was the set-training regime. Under this technique, weight changes are accumulated for all training patterns before updating any of them, with a subsequent update-all stage each time all the patterns in the training set have been presented.

Fig. 2 - A Pattern Partitioning Scheme (nodes I, J, K and F exchanging accumulated weight changes)

4. THE PARALLEL LEARNING ALGORITHM

To implement a pattern partitioning scheme, the whole neural network is replicated among P processors and each processor carries out the learning process using Ts/P patterns, where Ts is the size of the training set. As shown in figure 2, the weight changes are computed in parallel and then the corresponding accumulated weight-change vectors, awc, are exchanged between the processors. We now describe the basic steps of a backpropagation learning algorithm under the set-training regime, also known as per-epoch training 4. The superindexes h and o identify hidden and output units respectively.

Batch-Training Algorithm

Repeat
  1. For each pattern x_p = (x_p1, x_p2, ..., x_pN):
     1.1 Compute the output of the units in the hidden layer:
         net^h_pj = Σ_{i=1..N} w^h_ji x_pi + Θ^h_j ;  i_pj = f(net^h_pj)
     1.2 Compute the output of the units in the output layer:
         net^o_pk = Σ_{j=1..L} w^o_kj i_pj + Θ^o_k ;  o_pk = f(net^o_pk)
     1.3 Compute the error terms for the units in the output layer:
         δ^o_pk = (y_pk - o_pk) f'(net^o_pk)
     1.4 Compute the error terms for the units in the hidden layer:
         δ^h_pj = f'(net^h_pj) Σ_k δ^o_pk w^o_kj
     1.5 Accumulate the weight changes in the output layer:
         Δc^o_kj = Δc^o_kj + η δ^o_pk i_pj
     1.6 Accumulate the weight changes in the hidden layer:
         Δc^h_ji = Δc^h_ji + η δ^h_pj x_pi
  2. Send the local Δc^o and Δc^h and receive the remote Δc^o and Δc^h.
  3. Update the weights in the output layer:
     w^o_kj = w^o_kj + (Δc^o_kj(local) + Δc^o_kj(remote))
  4. Update the weights in the hidden layer:
     w^h_ji = w^h_ji + (Δc^h_ji(local) + Δc^h_ji(remote))
Until (E = Σ_{p=1..Ts} ½ Σ_{k=1..M} (y_pk - o_pk)² < maximum accepted error) or
      (number of iterations ≥ maximum number of iterations)

4 During an epoch, or batch, the submission of all the patterns in the partition, the corresponding computations and the accumulation of the weight changes must be performed before the weight update takes place; then the next epoch begins.

The subindexes p, i, j and k identify the p-th input pattern, the i-th input unit, the j-th hidden unit and the k-th output unit respectively. w_ji is the weight corresponding to the connection between unit j and unit i, and c_ji is the accumulated change for the weight
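A compact sketch of steps 1 to 4 follows. It is an illustration rather than the authors' code: the partitions are processed in a loop instead of on separate processors, biases are omitted, and the logistic sigmoid with derivative o(1 - o) plays the role of f. Each call to parallel_epoch accumulates one awc vector per partition (steps 1.1 to 1.6), "exchanges" them, and applies the local plus remote changes to every weight (steps 2 to 4).

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def local_awc(patterns, W_h, W_o, eta):
    """Steps 1.1-1.6: accumulate the weight-change vector (awc) over the
    local partition; the shared weights are NOT modified here."""
    awc_h = [[0.0] * len(W_h[0]) for _ in W_h]
    awc_o = [[0.0] * len(W_o[0]) for _ in W_o]
    for x, y in patterns:
        hid = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_h]
        out = [sigmoid(sum(w * h for w, h in zip(row, hid))) for row in W_o]
        d_o = [(yk - ok) * ok * (1 - ok) for yk, ok in zip(y, out)]
        d_h = [h * (1 - h) * sum(d * W_o[k][j] for k, d in enumerate(d_o))
               for j, h in enumerate(hid)]
        for k in range(len(W_o)):
            for j in range(len(hid)):
                awc_o[k][j] += eta * d_o[k] * hid[j]
        for j in range(len(W_h)):
            for i in range(len(x)):
                awc_h[j][i] += eta * d_h[j] * x[i]
    return awc_h, awc_o

def parallel_epoch(partitions, W_h, W_o, eta=0.1):
    """Steps 2-4: each replica computes its awc against the same starting
    weights; the vectors are then exchanged and every replica adds the
    local + remote changes, so all copies stay identical."""
    awcs = [local_awc(part, W_h, W_o, eta) for part in partitions]
    for W, idx in ((W_h, 0), (W_o, 1)):
        for r in range(len(W)):
            for c in range(len(W[0])):
                W[r][c] += sum(a[idx][r][c] for a in awcs)
```

Because every replica starts an epoch from the same weights and adds the same local-plus-remote sum, the P copies of the network remain synchronised without any central coordinator.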

corresponding to the connection between unit j and unit i, Θ is the bias, and N, L and M identify the number of input, hidden and output units respectively. f denotes the sigmoid function, which is used to compute the activation of each node.

5. PARALLEL LEARNING SUPPORT

In this section we present a brief description of the underlying system architecture and of the procedures supporting the parallel learning process, which runs on processors distributed over a LAN of workstations [3]. Each processor is allocated a replicated neural network. The parallel learning support presented here is independent of the neural network structure.

5.1 System Architecture

Figure 3 shows the support system architecture, its processes and their interactions. The parallel learning algorithm is independently initiated by a process at each LAN node. The ProcW process is responsible for the local learning processing and, when needed, requests a service to exchange the awc vectors. ProcW forks twice to create the child processes ProcHI and ProcHJ for the communication with the other LAN nodes.

Fig. 3 - Support System Architecture (ProcW computes the new weight vector for the neural net NN from the local awc vector and the current AWC vector; through IPC1, ProcHI delivers the remote awc vectors it receives from the network; through IPC2, ProcHJ takes the local awc vectors and sends them over the network)

The tasks performed by each process are:
- The main process ProcW runs the learning algorithm for the neural net NN. After each epoch, ProcW requests the following services: the sending of the local awc vector, and the receiving of the remote awc vectors. To update the weights for every pattern in the partition, it creates a current AWC vector by gathering the local and remote awc vectors.
- Process ProcHI receives the awc vectors sent by the remote ProcHJ processes.
- Process ProcHJ sends the local awc vectors generated during one or more epochs to the remote ProcHI processes.
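The three-process structure can be mimicked in a few lines. This is a toy model under stated assumptions, not the original support code: Python threads and queues stand in for the forked processes, the IPCs and the network links, and a single number stands in for a whole awc vector (the local "training" result is faked). What it shows is the interaction pattern: after each epoch, ProcW hands its local awc to ProcHJ, obtains the remote awc through ProcHI, and applies the gathered AWC.

```python
import threading, queue

def start_node(node_id, net_in, net_out, n_epochs, results):
    """One LAN node: ProcW plus its two children. net_in/net_out model the
    network links to the remote node; ipc1/ipc2 model the local IPCs."""
    ipc1, ipc2 = queue.Queue(), queue.Queue()

    def proc_hj():   # sends the local awc vectors to the remote ProcHI
        for _ in range(n_epochs):
            net_out.put(ipc2.get())

    def proc_hi():   # receives the awc vectors sent by the remote ProcHJ
        for _ in range(n_epochs):
            ipc1.put(net_in.get())

    def proc_w():    # runs the learning algorithm for the replicated net
        weight = 0.0
        for _ in range(n_epochs):
            local_awc = 1.0 + node_id         # stand-in for a real epoch
            ipc2.put(local_awc)               # service: send local awc
            remote_awc = ipc1.get()           # service: receive remote awc
            weight += local_awc + remote_awc  # apply the gathered AWC
        results[node_id] = weight

    threads = [threading.Thread(target=t) for t in (proc_hj, proc_hi, proc_w)]
    for t in threads:
        t.start()
    return threads
```

With two nodes cross-connected by a pair of queues, both replicas end every epoch with identical weights, which is the invariant the real support must preserve.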

These three processes execute independently, but they communicate to exchange information. In order to allow the concurrent execution of the learning and of the external communication with the remote processes, a set of non-blocking mechanisms was used [4], [9], [14], [15]. This communication is handled via interprocess communication mechanisms (IPCs). The interaction between the processes ProcW and ProcHJ is established via IPC2 when, after an epoch, the learning process requests the distribution of its awc vector. The interaction between the processes ProcHI and ProcW is established via IPC1 when, after an epoch, the learning process requests the remote awc vectors in order to subsequently determine the current AWC vector.

6. THE PARALLEL VERSION OF THE NND SUPERVISOR

In our research, Neural Network Devices (NND) are used to supervise the load distribution in distributed systems. In particular, for parallelising BP, the test case was chosen as a reduced version of the one discussed elsewhere [2]. The neural network was built as a Feed-Forward Neural Net (FFNN) of 14 input nodes, 2 output nodes and one hidden layer of 30 nodes, with a total number of 527 interconnections. The pattern space contains a total number of 2,376 patterns. For the experiments discussed later, only one third (about 800 samples) of the pattern space was enough to obtain satisfactory results. In previous works, by building prototypes, we corroborated that claimed neural network features such as fast response, storage efficiency, fault tolerance and graceful degradation in the face of scarce or spurious inputs make them appropriate tools for Intelligent Computer Systems. Now, parallel learning of prototypes reflecting different allocation policies is being implemented, and is in a testing stage, by means of a pattern partitioning scheme with a set-training regime as explained in the previous sections.

7. EXPERIMENTS DESCRIPTION

Our initial experiments in parallelising neural nets were divided into two stages:
1. To establish whether a strong correlation exists between data dependencies and the goodness of results, expressed as the recall and generalisation abilities.
2.
To compare the goodness of results and the speedup for diverse sizes of the training set.

In both analyses, to compare results, the neural net was trained sequentially and in parallel. Under pattern partitioning, the pattern space was divided into three sets and each set was assigned to one processor. Many series were performed, and each one consisted of many runs for η values of 0.05 and 0.1. The following relevant performance variables were examined:

T: the running time of the learning algorithm.
R = r/Size: the recall ability of the neural net, where Size is the size of the submitted training set and r is the total number of patterns recognised when only the patterns of the training set are presented to the network after learning.
G = g/S: the combined recall and generalization ability, where S is the size of the total pattern space and g is the total number of patterns recognised when all the patterns are presented to the network after learning.
Sp = T_seq / T_par: the ratio between the sequential and the parallel learning times.
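As a small sketch (the function names and the way "recognised" patterns are counted are illustrative, not taken from the paper), the first three performance variables can be computed as:

```python
def recall(r, size):
    """R = r / Size: fraction of the submitted training set recognised."""
    return r / size

def combined_generalization(g, s):
    """G = g / S: fraction of the whole pattern space recognised."""
    return g / s

def speedup(t_seq, t_par):
    """Sp = T_seq / T_par: sequential over parallel learning time."""
    return t_seq / t_par
```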

Rec B/C = Sp/(R_seq - R_par): the benefit-cost ratio for recall. It indicates the benefit of speeding up the learning process, which is paid for by the cost of losing recall ability.
Gen B/C = Sp/(G_seq - G_par): the benefit-cost ratio for generalization. It indicates the benefit of speeding up the learning process, which is paid for by the cost of losing generalization ability.

Partitioning schemes were applied as follows (see figures 4.a and 4.b). In the first analysis stage, for sequential backpropagation (SBP), the training set Ts was built by uniform selection of 30% of the pattern space S. As we decided to dedicate only three separate processors, for parallel training (PBP) three random disjoint subsets were selected according to the following criteria:
- One third of Ts for each subset (experiment PBP-10-D).
- 10% of S for each subset (experiment PBP-10-I).
In the second analysis stage, for sequential backpropagation (SBP), the training set Ts was built by uniform selection of X% of the pattern space S. For parallel training (PBP), three disjoint subsets of X/3% of S were selected. X was chosen as 30, 45 and 60 (experiments PBP-X/3).

Fig. 4.a - Dependent-Data Partitioning Approach (the SBP training set, 30% of the total pattern set, is split into three 10% subsets, one per PBP processor)
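The two benefit-cost ratios follow the same pattern; the sketch below (illustrative names, with recall and generalization expressed as fractions) makes the definition concrete. Note that, as defined, the ratio is undefined when the parallel run loses no ability at all, and it grows as the loss shrinks: a large value means much speed-up bought at little cost.

```python
def benefit_cost(sp, ability_seq, ability_par):
    """Rec B/C or Gen B/C = Sp / (ability_seq - ability_par): the speed-up
    obtained per unit of recall (or generalization) ability given up."""
    loss = ability_seq - ability_par
    if loss == 0:
        raise ValueError("no ability loss: benefit-cost ratio is undefined")
    return sp / loss
```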

Fig. 4.b - Independent-Data Partitioning Approach (three disjoint subsets of X/3% of the total pattern set, drawn independently, one per PBP processor)

8. RESULTS

We now show some results from both investigation stages.

8.1 Stage I

To determine the possible existence of a strong correlation between data dependencies and the goodness of results, the experiments SBP, PBP-10-D and PBP-10-I were conducted. The issues studied through a large number of runs were the learning time, the recall, the generalization and the number of iterations needed to reach an acceptable error value while training. The following figures and tables show the corresponding mean values.

Table 1 - Summary of T, R and G results in Stage I for the experiments SBP, PBP-10-D and PBP-10-I [the numeric entries were lost in extraction].

Fig. 5 - Values of T for Stage I under sequential and parallel processing.
Fig. 6 - Values of R and G for Stage I under sequential and parallel processing.

As can be observed in the above table and figures, an important learning-time reduction is achieved without significant loss of the recall and generalization capabilities of the neural network. These results are attained by using either parallel partitioning approach. However, the independent-data criterion (PBP-10-I) seems to provide better performance values, possibly due to a more uniform distribution of the gathered data.

Fig. 7 - Progress of the learning process (error versus number of iterations) under sequential (SBP) and parallel (PBP-10-D, PBP-10-I) processing.

As shown in figure 7, the substantial time reduction obtained under either parallel approach arises from the smaller number of iterations needed to arrive at an acceptable error value. When the learning process is performed in parallel, each replication of the neural net is trained with a smaller number of patterns, and it learns not only from its individual work but also from the experience shared with the other replicas.

8.2 Stage II

To compare the goodness of results and the speedup for diverse sizes of the training set, many tests were performed in the experiments SBP-X and PBP-X/3, addressing the learning time, recall, generalization, speed-up and benefit-cost ratio. Here, parallel processing was implemented under the independent-data approach.

Table 2 - Summary of T, R and G results in Stage II for the SBP and PBP experiments [the numeric entries were lost in extraction].

Fig. 8 - Values of T for Stage II under sequential and parallel processing (SBP-30 vs. PBP-10, SBP-45 vs. PBP-15, SBP-60 vs. PBP-20).
Fig. 9 - Values of R for Stage II under sequential and parallel processing.
Fig. 10 - Values of G for Stage II under sequential and parallel processing.

As can be observed in Table 2 and Figure 8, a considerable learning-time reduction is also achieved for diverse sizes of the training set throughout the experiments of Stage II. These results are attained by using the independent-data approach. Figures 9 and 10 show the associated loss in recall and generalization of the neural network for the different sizes of the training set. As anticipated, the amount of capability detriment decreases as the training-set size is incremented, and its values range from 2.9% to 5.6% for recall and from 2.1% to 5.3% for generalization.

Fig. 11 - Speed-up values achieved through parallel processing (SBP-30 vs. PBP-10, SBP-45 vs. PBP-15, SBP-60 vs. PBP-20).

The above figure indicates the speed-up attained through parallel processing when different sizes of the portions of the pattern space S are selected for training the neural network. As the size of Ts increases, the speed-up decreases, but this consequence is compensated by an improvement in the quality of the results.

Fig. 12 - Benefit-Cost Ratio for Recall through parallel processing (SBP-30 vs. PBP-10, SBP-45 vs. PBP-15, SBP-60 vs. PBP-20).
Fig. 13 - Benefit-Cost Ratio for Generalization through parallel processing (same comparisons).

The benefit-cost ratio gives an indication of a benefit (speed-up) obtained by paying a cost (a detriment in recall or generalization). This performance variable can be of great help when designing a parallel approach for training neural nets. It is clear that when larger sizes of the training set are used the benefit decreases and, consequently, so does the cost. In any case, the decision will depend on the application requirements.

9. CONCLUSIONS

The time that it takes to train a neural network has long been an issue of research. The length of the training time depends, essentially, upon the number of iterations required. This number depends on several interrelated factors, among them the size and topology of the network, the initialisation of the weights and the amount of training data used. A means of training acceleration based on the last-mentioned factor is the parallel approach known as pattern partitioning.

In this paper we presented a feasible architecture for a system supporting parallel learning of backpropagation neural networks using a pattern partitioning scheme with a set-training regime. A preliminary set of experiments in our investigation revealed that the beneficial effects of parallel processing can be achieved with a minor capability loss. For the small neural net selected as the test case, a substantial acceleration ranging from 4 to more than ten times was attained by using as few as three processors. As the distributed approach to parallelising the learning process showed its effectiveness, tests with a larger number of processors, different training-set sizes and variable communication intervals are at present being performed for different neural networks.

10. ACKNOWLEDGEMENTS

We acknowledge the cooperation of the project group for providing new ideas and constructive criticisms, and also the Universidad Nacional de San Luis and the CONICET, from which we receive continuous support.

11. REFERENCES

[1] Berman, F., Snyder, L. - On Mapping Parallel Algorithms into Parallel Architectures. Parallel and Distributed Computing.
[2] Cena, M., Crespo, M. L., Gallard, R. - Transparent Remote Execution in LAHNOS by Means of a Neural Network Device. Operating Systems Review, Vol. 29, Nr. 1, ACM Press.
[3] Coulouris, G., Dollimore, J., Kindberg, T. - Distributed Systems: Concepts and Design. Addison-Wesley.
[4] Comer, D. E., Stevens, D. L. - Internetworking with TCP/IP, Vol. III. Prentice Hall.
[5] Foster, Ian T. - Designing and Building Parallel Programs. Addison-Wesley.
[6] Freeman, J., Skapura, D. - Neural Networks: Algorithms, Applications and Programming Techniques. Addison-Wesley, Reading, MA.
[7] Girau, B. - Mapping Neural Network Back-Propagation onto Parallel Computers with Computation/Communication Overlapping.
[8] Kumar, V., Shekhar, S., Amin, M. - A Scalable Parallel Formulation of the Backpropagation Algorithm for Hypercubes and Related Architectures. IEEE Transactions on Parallel and Distributed Systems, Vol. 5, Nr. 10, October.
[9] McEntire, P. L., O'Reilly, J. G., Larson, R. E. (Editors) - Distributed Computing: Concepts and Implementations. Addison-Wesley.
[10] Petrowski, A., Dreyfus, G., Girault, C. - Performance Analysis of a Pipelined Backpropagation Parallel Algorithm. IEEE Transactions on Neural Networks, Vol. 4, November.
[11] Plaut, D., Nowlan, S., Hinton, G. - Experiments on Learning by Backpropagation. Tech. Report CMU-CS, Carnegie Mellon University, Pittsburgh, PA.
[12] Rumelhart, D., Hinton, G., Williams, R. - Learning Internal Representations by Error Propagation. MIT Press, Cambridge, MA.
[13] Rumelhart, D., McClelland, J. - Parallel Distributed Processing, Vols. 1 and 2. MIT Press, Cambridge, MA.
[14] Stevens, R. W. - Advanced Programming in the UNIX Environment. Addison-Wesley.
[15] Stevens, R. W. - UNIX Network Programming. Prentice Hall, Englewood Cliffs.


More information

Tree Structured Classifier

Tree Structured Classifier Tree Structured Classifier Reference: Classificatin and Regressin Trees by L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stne, Chapman & Hall, 98. A Medical Eample (CART): Predict high risk patients

More information

SUPPLEMENTARY MATERIAL GaGa: a simple and flexible hierarchical model for microarray data analysis

SUPPLEMENTARY MATERIAL GaGa: a simple and flexible hierarchical model for microarray data analysis SUPPLEMENTARY MATERIAL GaGa: a simple and flexible hierarchical mdel fr micrarray data analysis David Rssell Department f Bistatistics M.D. Andersn Cancer Center, Hustn, TX 77030, USA rsselldavid@gmail.cm

More information

Collocation Map for Overcoming Data Sparseness

Collocation Map for Overcoming Data Sparseness Cllcatin Map fr Overcming Data Sparseness Mnj Kim, Yung S. Han, and Key-Sun Chi Department f Cmputer Science Krea Advanced Institute f Science and Technlgy Taejn, 305-701, Krea mj0712~eve.kaist.ac.kr,

More information

Fall 2013 Physics 172 Recitation 3 Momentum and Springs

Fall 2013 Physics 172 Recitation 3 Momentum and Springs Fall 03 Physics 7 Recitatin 3 Mmentum and Springs Purpse: The purpse f this recitatin is t give yu experience wrking with mmentum and the mmentum update frmula. Readings: Chapter.3-.5 Learning Objectives:.3.

More information

A Frequency-Based Find Algorithm in Mobile Wireless Computing Systems

A Frequency-Based Find Algorithm in Mobile Wireless Computing Systems A Frequency-Based Find Algrithm in Mbile Wireless Cmputing Systems Seung-yun Kim and Waleed W Smari Department f Electrical Cmputer Engineering University f Daytn 300 Cllege Park Daytn, OH USA 45469 Abstract

More information

k-nearest Neighbor How to choose k Average of k points more reliable when: Large k: noise in attributes +o o noise in class labels

k-nearest Neighbor How to choose k Average of k points more reliable when: Large k: noise in attributes +o o noise in class labels Mtivating Example Memry-Based Learning Instance-Based Learning K-earest eighbr Inductive Assumptin Similar inputs map t similar utputs If nt true => learning is impssible If true => learning reduces t

More information

Determining the Accuracy of Modal Parameter Estimation Methods

Determining the Accuracy of Modal Parameter Estimation Methods Determining the Accuracy f Mdal Parameter Estimatin Methds by Michael Lee Ph.D., P.E. & Mar Richardsn Ph.D. Structural Measurement Systems Milpitas, CA Abstract The mst cmmn type f mdal testing system

More information

Document for ENES5 meeting

Document for ENES5 meeting HARMONISATION OF EXPOSURE SCENARIO SHORT TITLES Dcument fr ENES5 meeting Paper jintly prepared by ECHA Cefic DUCC ESCOM ES Shrt Titles Grup 13 Nvember 2013 OBJECTIVES FOR ENES5 The bjective f this dcument

More information

MODULE FOUR. This module addresses functions. SC Academic Elementary Algebra Standards:

MODULE FOUR. This module addresses functions. SC Academic Elementary Algebra Standards: MODULE FOUR This mdule addresses functins SC Academic Standards: EA-3.1 Classify a relatinship as being either a functin r nt a functin when given data as a table, set f rdered pairs, r graph. EA-3.2 Use

More information

Administrativia. Assignment 1 due thursday 9/23/2004 BEFORE midnight. Midterm exam 10/07/2003 in class. CS 460, Sessions 8-9 1

Administrativia. Assignment 1 due thursday 9/23/2004 BEFORE midnight. Midterm exam 10/07/2003 in class. CS 460, Sessions 8-9 1 Administrativia Assignment 1 due thursday 9/23/2004 BEFORE midnight Midterm eam 10/07/2003 in class CS 460, Sessins 8-9 1 Last time: search strategies Uninfrmed: Use nly infrmatin available in the prblem

More information

ROUNDING ERRORS IN BEAM-TRACKING CALCULATIONS

ROUNDING ERRORS IN BEAM-TRACKING CALCULATIONS Particle Acceleratrs, 1986, Vl. 19, pp. 99-105 0031-2460/86/1904-0099/$15.00/0 1986 Grdn and Breach, Science Publishers, S.A. Printed in the United States f America ROUNDING ERRORS IN BEAM-TRACKING CALCULATIONS

More information

Broadcast Program Generation for Unordered Queries with Data Replication

Broadcast Program Generation for Unordered Queries with Data Replication Bradcast Prgram Generatin fr Unrdered Queries with Data Replicatin Jiun-Lng Huang and Ming-Syan Chen Department f Electrical Engineering Natinal Taiwan University Taipei, Taiwan, ROC E-mail: jlhuang@arbr.ee.ntu.edu.tw,

More information

Data Mining: Concepts and Techniques. Classification and Prediction. Chapter February 8, 2007 CSE-4412: Data Mining 1

Data Mining: Concepts and Techniques. Classification and Prediction. Chapter February 8, 2007 CSE-4412: Data Mining 1 Data Mining: Cncepts and Techniques Classificatin and Predictin Chapter 6.4-6 February 8, 2007 CSE-4412: Data Mining 1 Chapter 6 Classificatin and Predictin 1. What is classificatin? What is predictin?

More information

Perfrmance f Sensitizing Rules n Shewhart Cntrl Charts with Autcrrelated Data Key Wrds: Autregressive, Mving Average, Runs Tests, Shewhart Cntrl Chart

Perfrmance f Sensitizing Rules n Shewhart Cntrl Charts with Autcrrelated Data Key Wrds: Autregressive, Mving Average, Runs Tests, Shewhart Cntrl Chart Perfrmance f Sensitizing Rules n Shewhart Cntrl Charts with Autcrrelated Data Sandy D. Balkin Dennis K. J. Lin y Pennsylvania State University, University Park, PA 16802 Sandy Balkin is a graduate student

More information

Building research leadership consortia for Quantum Technology Research Hubs. Call type: Expression of Interest

Building research leadership consortia for Quantum Technology Research Hubs. Call type: Expression of Interest Building research leadership cnsrtia fr Quantum Technlgy Research Hubs Call type: Expressin f Interest Clsing date: 17:00, 07 August 2018 Hw t apply: Expressin f Interest (EI) fr research leaders t attend

More information

The blessing of dimensionality for kernel methods

The blessing of dimensionality for kernel methods fr kernel methds Building classifiers in high dimensinal space Pierre Dupnt Pierre.Dupnt@ucluvain.be Classifiers define decisin surfaces in sme feature space where the data is either initially represented

More information

Engineering Decision Methods

Engineering Decision Methods GSOE9210 vicj@cse.unsw.edu.au www.cse.unsw.edu.au/~gs9210 Maximin and minimax regret 1 2 Indifference; equal preference 3 Graphing decisin prblems 4 Dminance The Maximin principle Maximin and minimax Regret

More information

Analysis on the Stability of Reservoir Soil Slope Based on Fuzzy Artificial Neural Network

Analysis on the Stability of Reservoir Soil Slope Based on Fuzzy Artificial Neural Network Research Jurnal f Applied Sciences, Engineering and Technlgy 5(2): 465-469, 2013 ISSN: 2040-7459; E-ISSN: 2040-7467 Maxwell Scientific Organizatin, 2013 Submitted: May 08, 2012 Accepted: May 29, 2012 Published:

More information

Resampling Methods. Chapter 5. Chapter 5 1 / 52

Resampling Methods. Chapter 5. Chapter 5 1 / 52 Resampling Methds Chapter 5 Chapter 5 1 / 52 1 51 Validatin set apprach 2 52 Crss validatin 3 53 Btstrap Chapter 5 2 / 52 Abut Resampling An imprtant statistical tl Pretending the data as ppulatin and

More information

Math Foundations 20 Work Plan

Math Foundations 20 Work Plan Math Fundatins 20 Wrk Plan Units / Tpics 20.8 Demnstrate understanding f systems f linear inequalities in tw variables. Time Frame December 1-3 weeks 6-10 Majr Learning Indicatrs Identify situatins relevant

More information

Methods for Determination of Mean Speckle Size in Simulated Speckle Pattern

Methods for Determination of Mean Speckle Size in Simulated Speckle Pattern 0.478/msr-04-004 MEASUREMENT SCENCE REVEW, Vlume 4, N. 3, 04 Methds fr Determinatin f Mean Speckle Size in Simulated Speckle Pattern. Hamarvá, P. Šmíd, P. Hrváth, M. Hrabvský nstitute f Physics f the Academy

More information

Multiple Source Multiple. using Network Coding

Multiple Source Multiple. using Network Coding Multiple Surce Multiple Destinatin Tplgy Inference using Netwrk Cding Pegah Sattari EECS, UC Irvine Jint wrk with Athina Markpulu, at UCI, Christina Fraguli, at EPFL, Lausanne Outline Netwrk Tmgraphy Gal,

More information

How do scientists measure trees? What is DBH?

How do scientists measure trees? What is DBH? Hw d scientists measure trees? What is DBH? Purpse Students develp an understanding f tree size and hw scientists measure trees. Students bserve and measure tree ckies and explre the relatinship between

More information

CHAPTER 3 INEQUALITIES. Copyright -The Institute of Chartered Accountants of India

CHAPTER 3 INEQUALITIES. Copyright -The Institute of Chartered Accountants of India CHAPTER 3 INEQUALITIES Cpyright -The Institute f Chartered Accuntants f India INEQUALITIES LEARNING OBJECTIVES One f the widely used decisin making prblems, nwadays, is t decide n the ptimal mix f scarce

More information

Comprehensive Exam Guidelines Department of Chemical and Biomolecular Engineering, Ohio University

Comprehensive Exam Guidelines Department of Chemical and Biomolecular Engineering, Ohio University Cmprehensive Exam Guidelines Department f Chemical and Bimlecular Engineering, Ohi University Purpse In the Cmprehensive Exam, the student prepares an ral and a written research prpsal. The Cmprehensive

More information

COMP9444 Neural Networks and Deep Learning 3. Backpropagation

COMP9444 Neural Networks and Deep Learning 3. Backpropagation COMP9444 Neural Netwrks and Deep Learning 3. Backprpagatin Tetbk, Sectins 4.3, 5.2, 6.5.2 COMP9444 17s2 Backprpagatin 1 Outline Supervised Learning Ockham s Razr (5.2) Multi-Layer Netwrks Gradient Descent

More information

Homology groups of disks with holes

Homology groups of disks with holes Hmlgy grups f disks with hles THEOREM. Let p 1,, p k } be a sequence f distinct pints in the interir unit disk D n where n 2, and suppse that fr all j the sets E j Int D n are clsed, pairwise disjint subdisks.

More information

CHAPTER 4 DIAGNOSTICS FOR INFLUENTIAL OBSERVATIONS

CHAPTER 4 DIAGNOSTICS FOR INFLUENTIAL OBSERVATIONS CHAPTER 4 DIAGNOSTICS FOR INFLUENTIAL OBSERVATIONS 1 Influential bservatins are bservatins whse presence in the data can have a distrting effect n the parameter estimates and pssibly the entire analysis,

More information

Combining Dialectical Optimization and Gradient Descent Methods for Improving the Accuracy of Straight Line Segment Classifiers

Combining Dialectical Optimization and Gradient Descent Methods for Improving the Accuracy of Straight Line Segment Classifiers Cmbining Dialectical Optimizatin and Gradient Descent Methds fr Imprving the Accuracy f Straight Line Segment Classifiers Rsari A. Medina Rdriguez and Rnald Fumi Hashimt University f Sa Paul Institute

More information

A New Approach to Increase Parallelism for Dependent Loops

A New Approach to Increase Parallelism for Dependent Loops A New Apprach t Increase Parallelism fr Dependent Lps Yeng-Sheng Chen, Department f Electrical Engineering, Natinal Taiwan University, Taipei 06, Taiwan, Tsang-Ming Jiang, Arctic Regin Supercmputing Center,

More information

Differentiation Applications 1: Related Rates

Differentiation Applications 1: Related Rates Differentiatin Applicatins 1: Related Rates 151 Differentiatin Applicatins 1: Related Rates Mdel 1: Sliding Ladder 10 ladder y 10 ladder 10 ladder A 10 ft ladder is leaning against a wall when the bttm

More information

Five Whys How To Do It Better

Five Whys How To Do It Better Five Whys Definitin. As explained in the previus article, we define rt cause as simply the uncvering f hw the current prblem came int being. Fr a simple causal chain, it is the entire chain. Fr a cmplex

More information

EXPERIMENTAL STUDY ON DISCHARGE COEFFICIENT OF OUTFLOW OPENING FOR PREDICTING CROSS-VENTILATION FLOW RATE

EXPERIMENTAL STUDY ON DISCHARGE COEFFICIENT OF OUTFLOW OPENING FOR PREDICTING CROSS-VENTILATION FLOW RATE EXPERIMENTAL STUD ON DISCHARGE COEFFICIENT OF OUTFLOW OPENING FOR PREDICTING CROSS-VENTILATION FLOW RATE Tmnbu Gt, Masaaki Ohba, Takashi Kurabuchi 2, Tmyuki End 3, shihik Akamine 4, and Tshihir Nnaka 2

More information

Module 4: General Formulation of Electric Circuit Theory

Module 4: General Formulation of Electric Circuit Theory Mdule 4: General Frmulatin f Electric Circuit Thery 4. General Frmulatin f Electric Circuit Thery All electrmagnetic phenmena are described at a fundamental level by Maxwell's equatins and the assciated

More information

Drought damaged area

Drought damaged area ESTIMATE OF THE AMOUNT OF GRAVEL CO~TENT IN THE SOIL BY A I R B O'RN EMS S D A T A Y. GOMI, H. YAMAMOTO, AND S. SATO ASIA AIR SURVEY CO., l d. KANAGAWA,JAPAN S.ISHIGURO HOKKAIDO TOKACHI UBPREFECTRAl OffICE

More information

The Research on Flux Linkage Characteristic Based on BP and RBF Neural Network for Switched Reluctance Motor

The Research on Flux Linkage Characteristic Based on BP and RBF Neural Network for Switched Reluctance Motor Prgress In Electrmagnetics Research M, Vl. 35, 5 6, 24 The Research n Flux Linkage Characteristic Based n BP and RBF Neural Netwrk fr Switched Reluctance Mtr Yan Cai, *, Siyuan Sun, Chenhui Wang, and Cha

More information

INTERNAL AUDITING PROCEDURE

INTERNAL AUDITING PROCEDURE Yur Cmpany Name INTERNAL AUDITING PROCEDURE Originatin Date: XXXX Dcument Identifier: Date: Prject: Dcument Status: Dcument Link: Internal Auditing Prcedure Latest Revisin Date Custmer, Unique ID, Part

More information

A Quick Overview of the. Framework for K 12 Science Education

A Quick Overview of the. Framework for K 12 Science Education A Quick Overview f the NGSS EQuIP MODULE 1 Framewrk fr K 12 Science Educatin Mdule 1: A Quick Overview f the Framewrk fr K 12 Science Educatin This mdule prvides a brief backgrund n the Framewrk fr K-12

More information

Finite State Automata that Recurrent Cascade-Correlation Cannot Represent

Finite State Automata that Recurrent Cascade-Correlation Cannot Represent Finite State Autmata that Recurrent Cascade-Crrelatin Cannt Represent Stefan C. Kremer Department f Cmputing Science University f Alberta Edmntn, Alberta, CANADA T6H 5B5 Abstract This paper relates the

More information

5 th grade Common Core Standards

5 th grade Common Core Standards 5 th grade Cmmn Cre Standards In Grade 5, instructinal time shuld fcus n three critical areas: (1) develping fluency with additin and subtractin f fractins, and develping understanding f the multiplicatin

More information

Slide04 (supplemental) Haykin Chapter 4 (both 2nd and 3rd ed): Multi-Layer Perceptrons

Slide04 (supplemental) Haykin Chapter 4 (both 2nd and 3rd ed): Multi-Layer Perceptrons Slide04 supplemental) Haykin Chapter 4 bth 2nd and 3rd ed): Multi-Layer Perceptrns CPSC 636-600 Instructr: Ynsuck Che Heuristic fr Making Backprp Perfrm Better 1. Sequential vs. batch update: fr large

More information

Experimental Study on Time and Space Sharing on the PowerXplorer

Experimental Study on Time and Space Sharing on the PowerXplorer Experimental Study n Time and Space Sharing n the PwerXplrer Sulieman Bani-Ahmad CWRU, Ohi, USA. sulieman@case.edu Ismail Ababneh AABU, Mafraq, Jrdan Ismail@aabu.edu.j ABSTRACT Scheduling algrithms in

More information

Assessment Primer: Writing Instructional Objectives

Assessment Primer: Writing Instructional Objectives Assessment Primer: Writing Instructinal Objectives (Based n Preparing Instructinal Objectives by Mager 1962 and Preparing Instructinal Objectives: A critical tl in the develpment f effective instructin

More information

Associated Students Flacks Internship

Associated Students Flacks Internship Assciated Students Flacks Internship 2016-2017 Applicatin Persnal Infrmatin: Name: Address: Phne #: Years at UCSB: Cumulative GPA: E-mail: Majr(s)/Minr(s): Units Cmpleted: Tw persnal references (Different

More information

Early detection of mining truck failure by modelling its operation with neural networks classification algorithms

Early detection of mining truck failure by modelling its operation with neural networks classification algorithms RU, Rand GOLOSINSKI, T.S. Early detectin f mining truck failure by mdelling its peratin with neural netwrks classificatin algrithms. Applicatin f Cmputers and Operatins Research ill the Minerals Industries,

More information

Department of Electrical Engineering, University of Waterloo. Introduction

Department of Electrical Engineering, University of Waterloo. Introduction Sectin 4: Sequential Circuits Majr Tpics Types f sequential circuits Flip-flps Analysis f clcked sequential circuits Mre and Mealy machines Design f clcked sequential circuits State transitin design methd

More information

Emphases in Common Core Standards for Mathematical Content Kindergarten High School

Emphases in Common Core Standards for Mathematical Content Kindergarten High School Emphases in Cmmn Cre Standards fr Mathematical Cntent Kindergarten High Schl Cntent Emphases by Cluster March 12, 2012 Describes cntent emphases in the standards at the cluster level fr each grade. These

More information

4th Indian Institute of Astrophysics - PennState Astrostatistics School July, 2013 Vainu Bappu Observatory, Kavalur. Correlation and Regression

4th Indian Institute of Astrophysics - PennState Astrostatistics School July, 2013 Vainu Bappu Observatory, Kavalur. Correlation and Regression 4th Indian Institute f Astrphysics - PennState Astrstatistics Schl July, 2013 Vainu Bappu Observatry, Kavalur Crrelatin and Regressin Rahul Ry Indian Statistical Institute, Delhi. Crrelatin Cnsider a tw

More information

Coalition Formation and Data Envelopment Analysis

Coalition Formation and Data Envelopment Analysis Jurnal f CENTRU Cathedra Vlume 4, Issue 2, 20 26-223 JCC Jurnal f CENTRU Cathedra Calitin Frmatin and Data Envelpment Analysis Rlf Färe Oregn State University, Crvallis, OR, USA Shawna Grsspf Oregn State

More information

A Novel Isolated Buck-Boost Converter

A Novel Isolated Buck-Boost Converter vel slated uck-st Cnverter S-Sek Kim *,WOO-J JG,JOOG-HO SOG, Ok-K Kang, and Hee-Jn Kim ept. f Electrical Eng., Seul atinal University f Technlgy, Krea Schl f Electrical and Cmputer Eng., Hanyang University,

More information

Kinetic Model Completeness

Kinetic Model Completeness 5.68J/10.652J Spring 2003 Lecture Ntes Tuesday April 15, 2003 Kinetic Mdel Cmpleteness We say a chemical kinetic mdel is cmplete fr a particular reactin cnditin when it cntains all the species and reactins

More information

Eric Klein and Ning Sa

Eric Klein and Ning Sa Week 12. Statistical Appraches t Netwrks: p1 and p* Wasserman and Faust Chapter 15: Statistical Analysis f Single Relatinal Netwrks There are fur tasks in psitinal analysis: 1) Define Equivalence 2) Measure

More information

Training Algorithms for Recurrent Neural Networks

Training Algorithms for Recurrent Neural Networks raining Algrithms fr Recurrent Neural Netwrks SUWARIN PAAAVORAKUN UYN NGOC PIEN Cmputer Science Infrmatin anagement Prgram Asian Institute f echnlgy P.O. Bx 4, Klng Luang, Pathumthani 12120 AILAND http://www.ait.ac.th

More information

CHAPTER 24: INFERENCE IN REGRESSION. Chapter 24: Make inferences about the population from which the sample data came.

CHAPTER 24: INFERENCE IN REGRESSION. Chapter 24: Make inferences about the population from which the sample data came. MATH 1342 Ch. 24 April 25 and 27, 2013 Page 1 f 5 CHAPTER 24: INFERENCE IN REGRESSION Chapters 4 and 5: Relatinships between tw quantitative variables. Be able t Make a graph (scatterplt) Summarize the

More information

Name: Block: Date: Science 10: The Great Geyser Experiment A controlled experiment

Name: Block: Date: Science 10: The Great Geyser Experiment A controlled experiment Science 10: The Great Geyser Experiment A cntrlled experiment Yu will prduce a GEYSER by drpping Ments int a bttle f diet pp Sme questins t think abut are: What are yu ging t test? What are yu ging t measure?

More information

David HORN and Irit OPHER. School of Physics and Astronomy. Raymond and Beverly Sackler Faculty of Exact Sciences

David HORN and Irit OPHER. School of Physics and Astronomy. Raymond and Beverly Sackler Faculty of Exact Sciences Cmplex Dynamics f Neurnal Threshlds David HORN and Irit OPHER Schl f Physics and Astrnmy Raymnd and Beverly Sackler Faculty f Exact Sciences Tel Aviv University, Tel Aviv 69978, Israel hrn@neurn.tau.ac.il

More information

A mathematical model for complete stress-strain curve prediction of permeable concrete

A mathematical model for complete stress-strain curve prediction of permeable concrete A mathematical mdel fr cmplete stress-strain curve predictin f permeable cncrete M. K. Hussin Y. Zhuge F. Bullen W. P. Lkuge Faculty f Engineering and Surveying, University f Suthern Queensland, Twmba,

More information

Recurrent Neural-Network.Learning of Phonological Regularities m Turkish

Recurrent Neural-Network.Learning of Phonological Regularities m Turkish Recurrent Neural-Netwrk.Learning f Phnlgical Regularities m Turkish Jennifer Rdd Centre fr Cgnitive Science University f Edinburgh 2 Buccleugh Place Edinburgh, EH8 9LW, UK j enni~cgsci, ed. ac. uk Abstract

More information

BASD HIGH SCHOOL FORMAL LAB REPORT

BASD HIGH SCHOOL FORMAL LAB REPORT BASD HIGH SCHOOL FORMAL LAB REPORT *WARNING: After an explanatin f what t include in each sectin, there is an example f hw the sectin might lk using a sample experiment Keep in mind, the sample lab used

More information

o o IMPORTANT REMINDERS Reports will be graded largely on their ability to clearly communicate results and important conclusions.

o o IMPORTANT REMINDERS Reports will be graded largely on their ability to clearly communicate results and important conclusions. BASD High Schl Frmal Lab Reprt GENERAL INFORMATION 12 pt Times New Rman fnt Duble-spaced, if required by yur teacher 1 inch margins n all sides (tp, bttm, left, and right) Always write in third persn (avid

More information

How T o Start A n Objective Evaluation O f Your Training Program

How T o Start A n Objective Evaluation O f Your Training Program J O U R N A L Hw T Start A n Objective Evaluatin O f Yur Training Prgram DONALD L. KIRKPATRICK, Ph.D. Assistant Prfessr, Industrial Management Institute University f Wiscnsin Mst training m e n agree that

More information

ChE 471: LECTURE 4 Fall 2003

ChE 471: LECTURE 4 Fall 2003 ChE 47: LECTURE 4 Fall 003 IDEL RECTORS One f the key gals f chemical reactin engineering is t quantify the relatinship between prductin rate, reactr size, reactin kinetics and selected perating cnditins.

More information

NGSS High School Physics Domain Model

NGSS High School Physics Domain Model NGSS High Schl Physics Dmain Mdel Mtin and Stability: Frces and Interactins HS-PS2-1: Students will be able t analyze data t supprt the claim that Newtn s secnd law f mtin describes the mathematical relatinship

More information

THERMAL-VACUUM VERSUS THERMAL- ATMOSPHERIC TESTS OF ELECTRONIC ASSEMBLIES

THERMAL-VACUUM VERSUS THERMAL- ATMOSPHERIC TESTS OF ELECTRONIC ASSEMBLIES PREFERRED RELIABILITY PAGE 1 OF 5 PRACTICES PRACTICE NO. PT-TE-1409 THERMAL-VACUUM VERSUS THERMAL- ATMOSPHERIC Practice: Perfrm all thermal envirnmental tests n electrnic spaceflight hardware in a flight-like

More information

ALE 21. Gibbs Free Energy. At what temperature does the spontaneity of a reaction change?

ALE 21. Gibbs Free Energy. At what temperature does the spontaneity of a reaction change? Name Chem 163 Sectin: Team Number: ALE 21. Gibbs Free Energy (Reference: 20.3 Silberberg 5 th editin) At what temperature des the spntaneity f a reactin change? The Mdel: The Definitin f Free Energy S

More information

Relationship Between Amplifier Settling Time and Pole-Zero Placements for Second-Order Systems *

Relationship Between Amplifier Settling Time and Pole-Zero Placements for Second-Order Systems * Relatinship Between Amplifier Settling Time and Ple-Zer Placements fr Secnd-Order Systems * Mark E. Schlarmann and Randall L. Geiger Iwa State University Electrical and Cmputer Engineering Department Ames,

More information

UN Committee of Experts on Environmental Accounting New York, June Peter Cosier Wentworth Group of Concerned Scientists.

UN Committee of Experts on Environmental Accounting New York, June Peter Cosier Wentworth Group of Concerned Scientists. UN Cmmittee f Experts n Envirnmental Accunting New Yrk, June 2011 Peter Csier Wentwrth Grup f Cncerned Scientists Speaking Ntes Peter Csier: Directr f the Wentwrth Grup Cncerned Scientists based in Sydney,

More information

DISTRIBUTED PRODUCTION SYSTEM: A FRAMEWORK TO SOlVE THE PRINTED CIRCUIT BOARD ROUTING

DISTRIBUTED PRODUCTION SYSTEM: A FRAMEWORK TO SOlVE THE PRINTED CIRCUIT BOARD ROUTING Cybernetics and Systems: An Internatinal Jurnal, 21: 181-193, 1990 DISTRIBUTED PRODUCTION SYSTEM: A FRAMEWORK TO SOlVE THE PRINTED CIRCUIT BOARD ROUTING MARíA TERESA DE PEDRO ANGELA RIBEIRO JOSÉ Luís PEDRAZA

More information

AMERICAN PETROLEUM INSTITUTE API RP 581 RISK BASED INSPECTION BASE RESOURCE DOCUMENT BALLOT COVER PAGE

AMERICAN PETROLEUM INSTITUTE API RP 581 RISK BASED INSPECTION BASE RESOURCE DOCUMENT BALLOT COVER PAGE Ballt ID: Title: USING LIFE EXTENSION FACTOR (LEF) TO INCREASE BUNDLE INSPECTION INTERVAL Purpse: 1. Prvides a methd t increase a bundle s inspectin interval t accunt fr LEF. 2. Clarifies Table 8.6.5 Als

More information

8 th Grade Math: Pre-Algebra

8 th Grade Math: Pre-Algebra Hardin Cunty Middle Schl (2013-2014) 1 8 th Grade Math: Pre-Algebra Curse Descriptin The purpse f this curse is t enhance student understanding, participatin, and real-life applicatin f middle-schl mathematics

More information

Admin. MDP Search Trees. Optimal Quantities. Reinforcement Learning

Admin. MDP Search Trees. Optimal Quantities. Reinforcement Learning Admin Reinfrcement Learning Cntent adapted frm Berkeley CS188 MDP Search Trees Each MDP state prjects an expectimax-like search tree Optimal Quantities The value (utility) f a state s: V*(s) = expected

More information