PERFORMANCE COMPARISON BETWEEN BACK PROPAGATION, RPE AND MRPE ALGORITHMS FOR TRAINING MLP NETWORKS


Mohd Yusoff Mashor
School of Electrical and Electronic Engineering, University Science Malaysia, Perak Branch Campus, 31750 Tronoh, Perak, Malaysia.

ABSTRACT

This paper presents a performance comparison between the back propagation, recursive prediction error (RPE) and modified recursive prediction error (MRPE) algorithms for training multilayered perceptron networks. Back propagation is a steepest descent type algorithm that normally has a slow convergence rate, and its search for the global minimum often becomes trapped at poor local minima. RPE and MRPE are based on a Gauss-Newton type algorithm that generally provides better performance. The current study investigates the performance of the three algorithms for training MLP networks. Two real data sets were used for the comparison. It was found that RPE and MRPE are much better than the back propagation algorithm.

1. INTRODUCTION

Nowadays, artificial neural networks are studied and applied in various disciplines such as neurobiology, psychology, computer science, cognitive science, engineering, economics, medicine, etc. Tan et al. (1992) used neural networks for forecasting the US-Singapore dollar exchange rate. Linkens and Nie (1993) applied a neuro-fuzzy controller to a problem of multivariable blood pressure control. Yu et al. (1993) used neural networks to solve the travelling salesman problem and the map-colouring problem. Arad et al. (1994) used RBF networks to recognise human facial expressions based on 2D and 3D models of the face. Rosenblum and Davis (1994) applied RBF networks to a vehicle visual autonomous road following system. Many applications of artificial neural networks are inspired by the

ability of the networks to demonstrate brain-like behaviour. Applications of artificial neural networks in these diverse fields have made it possible to tackle some problems that were previously considered very difficult or unsolvable.

Multilayered perceptron (MLP) networks trained using the back propagation (BP) algorithm are the most popular choice in neural network applications, and it has been shown that the network can provide satisfactory results. However, the MLP network and BP algorithm can be considered as the basis of neural network studies; for example, RBF and HMLP networks have been proved to provide much better performance than the MLP network (Chen and Billings, 1992; Mashor, 1999a). The current study compares the performance of BP, RPE and MRPE for training MLP networks. The comparison was carried out by using MLP networks, trained with each of the three algorithms, to perform non-linear system identification.

2. MULTILAYERED PERCEPTRON NETWORKS

MLP networks are feed-forward neural networks with one or more hidden layers. Cybenko (1989) and Funahashi (1989) have proved that the MLP network is a general function approximator and that one-hidden-layer networks will always be sufficient to approximate any continuous function up to a certain accuracy. An MLP network with two hidden layers is shown in Figure 1. The input layer acts as an input data holder that distributes the input to the first hidden layer. The outputs from the first hidden layer then become the inputs to the second layer, and so on. The last layer acts as the network output layer.

A hidden neuron performs two functions: the combining function and the activation function. The output of the j-th neuron of the k-th hidden layer is given by

$v_j^k(t) = F\left( \sum_{i=1}^{n_{k-1}} w_{ij}^k \, v_i^{k-1}(t) + b_j^k \right)$, for $1 \le j \le n_k$   (1)

and if the m-th layer is the output layer, then the output of the l-th neuron $\hat{y}_l$ of the output layer is given by

$\hat{y}_l(t) = \sum_{j=1}^{n_{m-1}} w_{lj}^m \, v_j^{m-1}(t)$, for $1 \le l \le n_o$   (2)

where $n_k$, $n_o$, the $w$'s, the $b$'s and $F(\cdot)$ are the number of neurons in the k-th layer, the number of neurons in the output layer, the weights, the thresholds and an activation function respectively.

[Figure 1: Multilayered perceptron network, showing the input layer, two hidden layers and the output layer producing $\hat{y}_1, \ldots, \hat{y}_{n_o}$.]

In the current study, a network with a single output node and a single hidden layer is used, i.e. $m = 2$ and $n_o = 1$. With these simplifications the network output is

$\hat{y}(t) = \sum_{i=1}^{n_1} w_i^2 \, v_i^1(t) = \sum_{i=1}^{n_1} w_i^2 \, F\left( \sum_{j=1}^{n_r} w_{ij}^1 \, v_j(t) + b_i^1 \right)$   (3)

where $n_r$ is the number of nodes in the input layer. The activation function $F(\cdot)$ is selected to be

$F(v(t)) = \dfrac{1}{1 + e^{-v(t)}}$   (4)

The weights $w$ and thresholds $b$ are unknown and should be selected to minimise the prediction errors, defined as

$\varepsilon(t) = y(t) - \hat{y}(t)$   (5)

where $y(t)$ is the actual output and $\hat{y}(t)$ is the network output.
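To make the structure of equations (3) to (5) concrete, a minimal sketch in Python with NumPy follows. The array names W1, b1 and w2 are hypothetical, not from the paper; the code simply evaluates the single-hidden-layer network and its prediction error.

    import numpy as np

    def sigmoid(v):
        # Activation function of equation (4)
        return 1.0 / (1.0 + np.exp(-v))

    def mlp_output(v_in, W1, b1, w2):
        """Network output of equation (3).
        v_in : (n_r,) input vector v(t)
        W1   : (n_h, n_r) hidden-layer weights w_ij^1
        b1   : (n_h,) hidden-layer thresholds b_i^1
        w2   : (n_h,) linear output weights w_i^2
        """
        v1 = sigmoid(W1 @ v_in + b1)   # hidden-layer outputs v_i^1(t)
        return w2 @ v1                 # linear output neuron, equation (2)

    def prediction_error(y, v_in, W1, b1, w2):
        # Prediction error of equation (5)
        return y - mlp_output(v_in, W1, b1, w2)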

3. TRAINING ALGORITHMS

This section briefly presents the back propagation, RPE and MRPE algorithms. The back propagation and RPE algorithms are based on the paper by Chen et al. (1990), and MRPE is based on the paper by Mashor (1999b).

3.1 Back Propagation Algorithm

The back propagation algorithm was initially introduced by Werbos (1974) and further developed by Rumelhart and McClelland (1986). Back propagation is a steepest descent type algorithm in which the weight connecting the j-th neuron of the (k-1)-th layer to the i-th neuron of the k-th layer, and the corresponding threshold, are updated according to

$w_{ij}^k(t) = w_{ij}^k(t-1) + \Delta w_{ij}^k(t)$
$b_i^k(t) = b_i^k(t-1) + \Delta b_i^k(t)$   (6)

with the increments $\Delta w_{ij}^k(t)$ and $\Delta b_i^k(t)$ given by

$\Delta w_{ij}^k(t) = \eta_w \, \rho_i^k(t) \, v_j^{k-1}(t) + \alpha_w \, \Delta w_{ij}^k(t-1)$
$\Delta b_i^k(t) = \eta_b \, \rho_i^k(t) + \alpha_b \, \Delta b_i^k(t-1)$   (7)

where the subscripts w and b represent the weight and threshold respectively; $\alpha_w$ and $\alpha_b$ are momentum constants which determine the influence of the past parameter changes on the current direction of movement in the parameter space; $\eta_w$ and $\eta_b$ represent the learning rates; and $\rho_i^k(t)$ is the error signal of the i-th neuron of the k-th layer, which is back propagated through the network. Since the activation function of the output neuron is linear, the error signal at the output node is

$\rho^m(t) = y(t) - \hat{y}(t)$   (8)

and for the neurons in the hidden layers

$\rho_i^k(t) = F'\left(v_i^k(t)\right) \sum_j \rho_j^{k+1}(t) \, w_{ji}^{k+1}(t)$, for $k = m-1, \ldots, 2, 1$   (9)

where $F'(v_i^k(t))$ is the first derivative of $F(v_i^k(t))$ with respect to $v_i^k(t)$. Since back propagation is a steepest descent type algorithm, it suffers from a slow convergence rate, its search for the global minimum may become trapped at local minima, and the algorithm can be sensitive to the user-selectable parameters.
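As an illustration, here is a minimal sketch of one BP update with momentum (equations (6) to (9)) for the single-hidden-layer network above. It reuses the hypothetical names from the previous sketch; the default learning rates and momentum constants are placeholders, not values from the paper. For the sigmoid of equation (4), $F'(v) = F(v)(1 - F(v))$, which the code exploits.

    def bp_step(v_in, y, W1, b1, w2, dW1, db1, dw2,
                eta_w=0.01, eta_b=0.01, alpha_w=0.9, alpha_b=0.9):
        """One back propagation update with momentum, equations (6)-(9).
        dW1, db1, dw2 hold the previous increments (momentum terms)."""
        v1 = sigmoid(W1 @ v_in + b1)       # hidden-layer outputs
        y_hat = w2 @ v1
        rho_out = y - y_hat                # output error signal, eq. (8)
        # back-propagated hidden error signals, eq. (9)
        rho_hid = v1 * (1.0 - v1) * (w2 * rho_out)
        # increments with momentum, eq. (7)
        dw2 = eta_w * rho_out * v1 + alpha_w * dw2
        dW1 = eta_w * np.outer(rho_hid, v_in) + alpha_w * dW1
        db1 = eta_b * rho_hid + alpha_b * db1
        # parameter updates, eq. (6)
        return W1 + dW1, b1 + db1, w2 + dw2, dW1, db1, dw2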

3.2 RPE and MRPE Algorithms

The recursive prediction error (RPE) algorithm was originally derived by Ljung and Soderstrom (1983) and modified by Chen et al. (1990) to train MLP networks. The RPE algorithm is a Gauss-Newton type algorithm that will generally give better performance than a steepest descent type algorithm such as back propagation. In the present study, the convergence rate of the RPE algorithm is further improved by using an optimised momentum and learning rate (Mashor, 1999b): the momentum and learning rate are varied, in contrast to the constant values used by Chen et al. (1990). This modified RPE was originally proposed by Mashor (1999b) and is referred to as the modified recursive prediction error (MRPE) algorithm.

The RPE algorithm as modified by Chen et al. (1990) minimises the following cost function:

$J(\hat{\Theta}) = \dfrac{1}{2N} \sum_{t=1}^{N} \varepsilon^T(t, \hat{\Theta}) \, \Lambda^{-1} \, \varepsilon(t, \hat{\Theta})$   (10)

by updating the estimated parameter vector $\hat{\Theta}$ (consisting of the w's and b's) recursively using the Gauss-Newton algorithm:

$\hat{\Theta}(t) = \hat{\Theta}(t-1) + P(t) \, \Delta(t)$   (11)

and

$\Delta(t) = \alpha_m(t) \, \Delta(t-1) + \alpha_g(t) \, \psi(t) \, \varepsilon(t)$   (12)

where $\varepsilon(t)$ is the prediction error, $\Lambda$ is an $m \times m$ symmetric positive definite matrix, m is the number of output nodes, and $\alpha_m(t)$ and $\alpha_g(t)$ are the momentum and learning rate respectively. $\alpha_m(t)$ and $\alpha_g(t)$ can be arbitrarily assigned values between 0 and 1; typical values of $\alpha_m(t)$ and $\alpha_g(t)$ are close to 1 and 0 respectively. In the present study, $\alpha_m(t)$ and $\alpha_g(t)$ are varied to further improve the convergence rate of the RPE algorithm according to (Mashor, 1999b):

$\alpha_m(t) = \alpha_m(t-1) + a$   (13)

and

$\alpha_g(t) = \alpha_m(t) \left(1 - \alpha_m(t)\right)$   (14)

where a is a small constant (typically a = 0.01) and the momentum is normally initialised to $0 \le \alpha_m(0) < 1$. $\psi(t)$ represents the gradient of the one-step-ahead predicted output with respect to the network parameters:

$\psi(t, \Theta) = \dfrac{d\hat{y}(t, \Theta)}{d\Theta}$   (15)
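The varying momentum and learning-rate schedule of equations (13) and (14), as reconstructed above, can be sketched in a few lines of plain Python. The function name mrpe_schedule is hypothetical; the cap b anticipates step (v) of the algorithm listing below.

    def mrpe_schedule(alpha_m_prev, a=0.01, b=0.85):
        """Varying momentum and learning rate, equations (13)-(14)."""
        alpha_m = alpha_m_prev + a if alpha_m_prev < b else alpha_m_prev  # eq. (13)
        alpha_g = alpha_m * (1.0 - alpha_m)                               # eq. (14)
        return alpha_m, alpha_g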

$P(t)$ in equation (11) is updated recursively according to

$P(t) = \dfrac{1}{\lambda(t)} \left[ P(t-1) - P(t-1) \, \psi(t) \left[ \lambda(t) I + \psi^T(t) P(t-1) \psi(t) \right]^{-1} \psi^T(t) P(t-1) \right]$   (16)

where $\lambda(t)$ is the forgetting factor, $0 < \lambda(t) < 1$, normally updated using the following scheme (Ljung and Soderstrom, 1983):

$\lambda(t) = \lambda_0 \, \lambda(t-1) + (1 - \lambda_0)$   (17)

where $\lambda_0$ and the initial forgetting factor $\lambda(0)$ are design values. The initial value of the P(t) matrix, P(0), is normally set to $\alpha I$, where I is the identity matrix and $\alpha$ is a constant, typically between 100 and 10000. A small value of $\alpha$ will cause slow learning, whereas a too large $\alpha$ may cause the estimated parameters not to converge properly. Hence, it should be selected as a compromise between these two points; $\alpha = 1000$ is adequate for most cases.

The gradient matrix $\psi(t)$ for the one-hidden-layer MLP network is obtained by differentiating equation (3) with respect to the parameters, $\theta_c$, to yield:

$\psi_c(t) = \dfrac{d\hat{y}}{d\theta_c} = \begin{cases} v_j^1(t) & \text{if } \theta_c = w_j^2, & 1 \le j \le n_h \\ w_j^2 \, v_j^1 (1 - v_j^1) & \text{if } \theta_c = b_j^1, & 1 \le j \le n_h \\ w_j^2 \, v_j^1 (1 - v_j^1) \, v_i^0 & \text{if } \theta_c = w_{ij}^1, & 1 \le j \le n_h, \ 1 \le i \le n \\ 0 & \text{otherwise} \end{cases}$   (18)

The above gradient matrix is derived based on the sigmoid function; therefore, if other activation functions are used, the matrix should be changed accordingly. The modified RPE algorithm for a one-hidden-layer MLP network can be implemented as follows (Mashor, 1999b):

i. Initialise the weights, thresholds, P(0), a, b, $\alpha_m(0)$, $\lambda(0)$ and $\lambda_0$.
ii. Present the inputs to the network and compute the network outputs according to equation (3).
iii. Calculate the prediction error according to equation (5) and compute the matrix $\psi(t)$ according to equation (18). Note that the elements of $\psi(t)$ should be calculated from the output layer down to the hidden layer.
iv. Compute $\lambda(t)$ and P(t) according to equations (17) and (16) respectively.
v. If $\alpha_m(t) < b$, update $\alpha_m(t)$ according to equation (13).
vi. Update $\alpha_g(t)$ and then $\Delta(t)$ according to equations (14) and (12) respectively.
vii. Update the parameter vector $\hat{\Theta}(t)$ according to equation (11).
viii. Repeat steps (ii) to (vii) for each training data sample.
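Combining steps (ii) to (vii), the following is a minimal single-output sketch of one MRPE recursion, reusing the sigmoid helper defined earlier. The parameter packing order (w2, b1, W1), the name mrpe_step and the default design values are illustrative assumptions; with one output node, the matrix inverse in equation (16) reduces to a scalar division.

    def mrpe_step(v_in, y, theta, P, Delta, alpha_m, lam, n_h,
                  a=0.01, b=0.85, lam0=0.99):
        """One recursion of the MRPE algorithm, steps (ii)-(vii).
        theta packs [w2 (n_h), b1 (n_h), W1 (n_h*n_r)] (assumed order)."""
        n_r = v_in.size
        w2 = theta[:n_h]
        b1 = theta[n_h:2*n_h]
        W1 = theta[2*n_h:].reshape(n_h, n_r)
        v1 = sigmoid(W1 @ v_in + b1)                   # step (ii), eq. (3)
        eps = y - w2 @ v1                              # step (iii), eq. (5)
        # gradient psi(t), eq. (18): output layer first, then hidden layer
        dv = w2 * v1 * (1.0 - v1)
        psi = np.concatenate([v1, dv, np.outer(dv, v_in).ravel()])
        lam = lam0 * lam + (1.0 - lam0)                # step (iv), eq. (17)
        Ppsi = P @ psi                                 # eq. (16), scalar form
        P = (P - np.outer(Ppsi, Ppsi) / (lam + psi @ Ppsi)) / lam
        if alpha_m < b:                                # step (v), eq. (13)
            alpha_m += a
        alpha_g = alpha_m * (1.0 - alpha_m)            # step (vi), eq. (14)
        Delta = alpha_m * Delta + alpha_g * psi * eps  # eq. (12)
        theta = theta + P @ Delta                      # step (vii), eq. (11)
        return theta, P, Delta, alpha_m, lam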

The design parameter b in step (v) is the upper limit of the momentum, with a typical value between 0.8 and 0.9. The momentum is thus increased for each data sample from a small initial value (normally close to 0) up to this limit.

4. MODELLING NON-LINEAR SYSTEMS USING MLP NETWORKS

Modelling using MLP networks can be considered as fitting a surface in a multidimensional space to represent the training data set, and then using that surface to predict over the testing data set. Therefore, MLP networks require all the future data of the system to lie within the domain of the fitted surface to ensure a correct mapping, so that good predictions can be achieved. This is normal for non-linear modelling, where the model is only valid over a certain amplitude range.

A wide class of non-linear systems can be represented by the non-linear auto-regressive moving average with exogenous input (NARMAX) model (Leontaritis and Billings, 1985). The NARMAX model can be expressed in terms of a non-linear function expansion of lagged input, output and noise terms as follows:

$y(t) = f_s\big( y(t-1), \ldots, y(t-n_y), \, u(t-1), \ldots, u(t-n_u), \, e(t-1), \ldots, e(t-n_e) \big) + e(t)$   (19)

where

$y(t) = \begin{bmatrix} y_1(t) \\ \vdots \\ y_m(t) \end{bmatrix}$, $u(t) = \begin{bmatrix} u_1(t) \\ \vdots \\ u_r(t) \end{bmatrix}$ and $e(t) = \begin{bmatrix} e_1(t) \\ \vdots \\ e_m(t) \end{bmatrix}$

are the system output, input and noise vectors respectively, and $n_y$, $n_u$ and $n_e$ are the maximum lags in the output, input and noise respectively. The non-linear function $f_s(\cdot)$ is normally very complicated and rarely known a priori for practical systems. If the mechanisms of a system are known, the function $f_s(\cdot)$ can be derived from the functions that govern those mechanisms. In the case of an unknown system, $f_s(\cdot)$ is normally constructed based on observation of the input and output data. In the present study, MLP networks are used to model the input-output relationship; in other words, $f_s(\cdot)$ is approximated using equation (3), where $F(\cdot)$ is selected to be the sigmoid function. The network input vector $v(t)$ is formed from the lagged input, output and noise terms, denoted as $u(t-1) \ldots u(t-n_u)$, $y(t-1) \ldots y(t-n_y)$ and $e(t-1) \ldots e(t-n_e)$ respectively in equation (19).
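As an illustration of how the network input vector v(t) is assembled from lagged terms, a small helper is sketched below. The name narmax_regressor is hypothetical, and the use of stored residuals in place of the unmeasurable noise terms follows the usual practice reflected in the OSA predictor of equation (20) in the next section.

    def narmax_regressor(u, y, eps, t, u_lags, y_lags, e_lags):
        """Form the network input vector v(t) from lagged terms as in eq. (19).
        u, y, eps are 1-D sample arrays; *_lags are lists of lag values,
        e.g. u_lags=[1, 2], y_lags=[1, 4], e_lags=[3, 4, 5] as in Example 1.
        The unmeasurable noise terms e(t-k) are replaced by residuals eps."""
        v = [u[t - k] for k in u_lags] + \
            [y[t - k] for k in y_lags] + \
            [eps[t - k] for k in e_lags] + [1.0]   # bias input
        return np.array(v)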

The final stage in system identification is model validation. There are several ways of testing a model, such as one-step-ahead predictions (OSA), model predicted outputs (MPO), mean squared error (MSE), correlation tests and chi-squares tests. In the present study, only the OSA and MSE tests are used to judge the performance of the fitted network models, since it is not easy to see the performance difference using the other tests.

OSA is a common measure of the predictive accuracy of a model and has been considered by many researchers. OSA can be expressed as

$\hat{y}(t) = \hat{f}_s\big( u(t-1), \ldots, u(t-n_u), \, y(t-1), \ldots, y(t-n_y), \, \hat{\varepsilon}(t-1, \hat{\theta}), \ldots, \hat{\varepsilon}(t-n_e, \hat{\theta}) \big)$   (20)

and the residual or prediction error is defined as

$\hat{\varepsilon}(t, \hat{\theta}) = y(t) - \hat{y}(t)$   (21)

where $\hat{f}_s(\cdot)$ is a non-linear function, in this case the MLP network. A good model will normally give a good prediction; however, a model that has a good one-step-ahead prediction and model predicted output may not always be unbiased. The model may be significantly biased, and prediction over a different set of data often reveals this problem. Splitting the data into two sets, a training set and a testing set, can test this condition.

MSE is an iterative method of model validation where the model is tested by calculating the mean squared error after each training step. The MSE test indicates how fast the prediction error or residual converges with the number of training data. The MSE at the t-th training step is given by

$MSE\big(t, \hat{\Theta}(t)\big) = \dfrac{1}{n_d} \sum_{i=1}^{n_d} \big( y(i) - \hat{y}(i, \hat{\Theta}(t)) \big)^2$   (22)

where $MSE(t, \hat{\Theta}(t))$ and $\hat{y}(i, \hat{\Theta}(t))$ are the MSE and OSA for a given set of estimated parameters $\hat{\Theta}(t)$ after t training steps respectively, and $n_d$ is the number of data used to calculate the MSE.
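A sketch of the MSE test of equation (22) follows, computing one-step-ahead predictions of equation (20) over a chosen data segment. It reuses the hypothetical helpers above and unpacks the parameter vector in the same assumed order as mrpe_step.

    def mse_over_data(u, y, eps, theta, n_h, lags, t0, t1):
        """MSE of equation (22) over samples t0..t1-1 for fixed theta.
        lags = (u_lags, y_lags, e_lags); eps is updated in place so that
        the residuals feed the noise-lag regressors, as in eq. (20)."""
        u_lags, y_lags, e_lags = lags
        err2 = 0.0
        for t in range(t0, t1):
            v = narmax_regressor(u, y, eps, t, u_lags, y_lags, e_lags)
            n_r = v.size
            w2, b1 = theta[:n_h], theta[n_h:2*n_h]
            W1 = theta[2*n_h:].reshape(n_h, n_r)
            y_hat = w2 @ sigmoid(W1 @ v + b1)   # OSA, eq. (20)
            eps[t] = y[t] - y_hat               # residual, eq. (21)
            err2 += eps[t] ** 2
        return err2 / (t1 - t0)                 # eq. (22)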

5. PERFORMANCE COMPARISON

The performance of MLP networks trained using the BP, RPE and MRPE algorithms presented in section 3 was compared. Two real data sets were used for this comparison. The networks were used to perform system identification, and the resulting models were used to produce the OSA and MSE tests.

Example 1

The first data set was taken from a heat exchanger system and consists of 1000 samples. The first 500 data were used to train the network and the remaining 500 data were used to test the fitted network model. The network was trained using the following specification:

$v(t) = [\, u(t-1) \;\; u(t-2) \;\; y(t-1) \;\; y(t-4) \;\; e(t-3) \;\; e(t-4) \;\; e(t-5) \,]$ and a bias input.

All the network models have the same structure but a different training algorithm. The design parameters for the BP, RPE and MRPE algorithms were set as follows:

BP algorithm: $\eta_w = \eta_b = \ldots$ and $\alpha_w = \alpha_b = \ldots$
RPE algorithm: $P(0) = 1000I$, $\alpha_m = 0.85$, $\alpha_g = 0.1$, $\lambda_0 = \ldots$ and $\lambda(0) = \ldots$
MRPE algorithm: $P(0) = 1000I$, $\alpha_m(0) = 0.6$, $a = 0.01$, $b = 0.85$, $\lambda_0 = \ldots$ and $\lambda(0) = \ldots$

The MSE calculated over the training and testing data sets for the network models trained using the BP, RPE and MRPE algorithms are shown in Figures 2 and 3 respectively. Both figures indicate that MRPE produces an MSE that is significantly better than RPE and much better than the BP algorithm. These figures also suggest that the network trained using the BP algorithm does not have good generalisation, since the performance difference over the testing data set is larger than the one over the training data set.

[Figure 2: MSE calculated over the training data set]
[Figure 3: MSE calculated over the testing data set]
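For concreteness, the sketches above can be wired together for Example 1 roughly as follows. The hidden-layer size, the forgetting-factor values and the data arrays here are placeholders and assumptions for illustration, not values from the paper; only P(0) = 1000I, the lag structure, $\alpha_m(0) = 0.6$, a = 0.01 and b = 0.85 come from the specification above.

    # Hypothetical wiring of the sketches above for Example 1.
    u = np.random.randn(1000)           # placeholder for the measured input
    y = np.random.randn(1000)           # placeholder for the measured output
    u_lags, y_lags, e_lags = [1, 2], [1, 4], [3, 4, 5]
    n_h = 5                             # assumed hidden-layer size
    n_r = len(u_lags) + len(y_lags) + len(e_lags) + 1   # + bias input
    n_params = n_h + n_h + n_h * n_r
    theta = 0.1 * np.random.randn(n_params)
    P = 1000.0 * np.eye(n_params)       # P(0) = 1000 I
    Delta = np.zeros(n_params)
    alpha_m, lam = 0.6, 0.95            # alpha_m(0) = 0.6; lambda(0) assumed
    eps = np.zeros(1000)                # residual store for the noise lags
    for t in range(5, 500):             # first 500 samples for training
        v = narmax_regressor(u, y, eps, t, u_lags, y_lags, e_lags)
        theta, P, Delta, alpha_m, lam = mrpe_step(
            v, y[t], theta, P, Delta, alpha_m, lam, n_h,
            a=0.01, b=0.85, lam0=0.99)  # lambda_0 assumed
        # store the residual so later noise-lag regressors can use it
        w2, b1 = theta[:n_h], theta[n_h:2*n_h]
        W1 = theta[2*n_h:].reshape(n_h, n_r)
        eps[t] = y[t] - w2 @ sigmoid(W1 @ v + b1)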

[Figure 4: OSA test for the MLP network trained using the BP algorithm]
[Figure 5: OSA test for the MLP network trained using the RPE algorithm]

OSA tests over the training and testing data sets for the network models trained using BP, RPE and MRPE are shown in Figures 4, 5 and 6 respectively. The result in Figure 4 reconfirms that the network trained using the BP algorithm cannot predict properly and does not have good generalisation, since the prediction over the testing data set is not satisfactory. The networks trained using the RPE and MRPE algorithms, on the other hand, predict very well over both the training and testing data sets. Referring to Figures 5 and 6, it can be said that the network trained using the MRPE algorithm gives a significantly better prediction (OSA test) compared to the network trained using the RPE algorithm.

[Figure 6: OSA test for the MLP network trained using the MRPE algorithm]

Example 2

The second system is a tension legs data set consisting of 1000 data samples, of which the first 600 were used for training and the next 400 for testing. The network was trained using the following specification:

$v(t) = [\, u(t-1) \; \ldots \; u(t-8) \;\; y(t-1) \; \ldots \; y(t-4) \;\; e(t-3) \;\; e(t-5) \,]$ and a bias input.

BP algorithm: $\eta_w = \eta_b = 0.001$ and $\alpha_w = \alpha_b = \ldots$
RPE algorithm: $P(0) = 1000I$, $\alpha_m = 0.85$, $\alpha_g = 0.07$, $\lambda_0 = \ldots$ and $\lambda(0) = \ldots$
MRPE algorithm: $P(0) = 1000I$, $\alpha_m(0) = 0$, $a = 0.01$, $b = 0.85$, $\lambda_0 = \ldots$ and $\lambda(0) = \ldots$

The MSE calculated over the training and testing data sets for the network models trained using the BP, RPE and MRPE algorithms are shown in Figures 7 and 8 respectively. In this example the network models trained using the MRPE and RPE algorithms produced MSEs that are much better than the one trained using the BP algorithm. These figures also indicate that MRPE is significantly better than the RPE algorithm. Figures 7 and 8 also suggest that the network trained using the BP algorithm does not have good generalisation, since the performance difference over the testing data set is larger than the one over the training data set.

[Figure 7: MSE calculated over the training data set]
[Figure 8: MSE calculated over the testing data set]

[Figure 9: OSA test for the MLP network trained using the BP algorithm]
[Figure 10: OSA test for the MLP network trained using the RPE algorithm]

OSA tests over the training and testing data sets for the network models trained using BP, RPE and MRPE are shown in Figures 9, 10 and 11 respectively. For this example it is quite hard to distinguish any performance advantage between the networks trained using the BP and RPE algorithms. However, the OSA test produced by the network trained using MRPE is significantly better than those of the network models trained using the BP and RPE algorithms.

[Figure 11: OSA test for the MLP network trained using the MRPE algorithm]

6. CONCLUSION

The BP, RPE and MRPE algorithms have been briefly discussed and used to train MLP networks to perform system identification. Two real data sets were used to test the performance of the algorithms. The MSE and OSA tests for both examples indicated that MRPE significantly improves the performance of the RPE algorithm, and that both the RPE and MRPE algorithms are much better than the BP algorithm. The results also suggest that a network trained using the BP algorithm does not normally have good generalisation, since the prediction over the testing data set is not as good as over the training data set.

REFERENCES

[1] Arad, N., Dyn, N., Reisfeld, D., and Yeshurun, Y., 1994, Image warping by radial basis functions: application to facial expressions, CVGIP: Graphical Models and Image Processing, 56(2).
[2] Chen, S., Cowan, C.F.N., Billings, S.A., and Grant, P.M., 1990, A parallel recursive prediction error algorithm for training layered neural networks, Int. J. Control, 51(6).
[3] Chen, S., and Billings, S.A., 1992, Neural networks for non-linear dynamic system modelling and identification, Int. J. Control, 56(2).
[4] Cybenko, G., 1989, Approximations by superposition of a sigmoidal function, Mathematics of Control, Signals and Systems, 2.
[5] Funahashi, K., 1989, On the approximate realisation of continuous mappings by neural networks, Neural Networks, 2.
[6] Leontaritis, I.J., and Billings, S.A., 1985, Input-output parametric models for non-linear systems. Part I: Deterministic non-linear systems. Part II: Stochastic non-linear systems, Int. J. Control, 41.
[7] Linkens, D.A., and Nie, J., 1993, Fuzzified RBF network-based learning control: structure and self-construction, IEEE Int. Conf. on Neural Networks, 2.
[8] Ljung, L., and Soderstrom, T., 1983, Theory and Practice of Recursive Identification, MIT Press, Cambridge.
[9] Mashor, M.Y., 1999a, Performance comparison between HMLP and MLP networks, Int. Conf. on Robotics, Vision & Parallel Processing for Automation (ROVPIA 99).
[10] Mashor, M.Y., 1999b, Hybrid multilayered perceptron networks, accepted for publication by Int. J. of Systems Science.

[11] Rosenblum, M., and Davis, L.S., 1994, An improved radial basis function network for visual autonomous road following, Proc. of the SPIE - The Int. Society for Optical Eng., 2103.
[12] Rumelhart, D.E., and McClelland, J.L., 1986, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols. I & II, MIT Press, Cambridge, MA.
[13] Tan, P.Y., Lim, G., Chua, K., Wong, F.S., and Neo, S., 1992, Comparative studies among neural nets, radial basis functions and regression methods, ICARCV '92, Second Int. Conf. on Automation, Robotics and Computer Vision, NW-3.3/1-6.
[14] Yu, D.H., Jia, J., and Mori, S., 1993, A new neural network algorithm with the orthogonal optimised parameters to solve the optimal problems, IEICE Trans. on Fundamentals of Electronics, Comm. and Computer Sciences, E76-A(9).
[15] Werbos, P.J., 1974, Beyond Regression: New Tools for Prediction and Analysis in the Behavioural Sciences, Ph.D. Thesis, Harvard University.
