Using collocation neural network to solve nonlinear singular differential equations

Luma N. M. Tawfiq
Department of Mathematics, College of Education Ibn Al-Haitham, Baghdad University, Iraq

Abstract
The aim of this paper is to design a collocation neural network to solve second order nonlinear boundary value problems for singular ordinary differential equations. The proposed network is trained by back propagation with different training algorithms (quasi-Newton, Levenberg-Marquardt, and Bayesian Regularization), where the designed network is trained with those algorithms using the available experimental data as the training set; the proposed network contains a single hidden layer of five nodes. A further objective of this paper is to compare the performance of the aforementioned training algorithms with regard to predictive ability.

Keywords: ANN, back propagation, training algorithm, ordinary differential equations

1. Introduction
Many methods have been developed so far for solving differential equations. Some of them produce a solution in the form of an array that contains the value of the solution at a selected group of points [1]. Others use basis functions to represent the solution in analytic form and transform the original problem, usually into a system of algebraic equations [2]. Most of the previous work on solving differential equations using artificial neural networks (ANN) is restricted to the case of solving the (linear) systems of algebraic equations which result from the discretization of the domain; the minimization of the network's energy function then provides the solution to the system of equations [3, 4]. Lagaris et al. [5] employed two networks, a multilayer perceptron and a radial basis function network, to solve partial differential equations (PDE) with boundary conditions (Dirichlet or Neumann) defined on boundaries with complex geometry. McFall et al. [6] compared weight reuse for two existing methods of defining the network error function; weight reuse is shown to accelerate training for ODEs, while the second method fails unpredictably when weight reuse is applied to accelerate solution of the diffusion equation. Tawfiq [7] proposed a radial basis function neural network (RBFNN) and a Hopfield neural network (an unsupervised training network) as designed networks to solve ODEs and PDEs and compared the two. Malek et al. [8] reported a novel hybrid method, based on optimization techniques and neural network methods, for the solution of high order ODEs, using a three-layered perceptron network. Akca et al. [9] discussed different approaches to using wavelets in the solution of boundary value problems (BVP) for ODEs, introduced convenient wavelet representations for the derivatives of certain functions, and discussed a wavelet network algorithm. McFall [10] presented multi-layer perceptron networks to solve BVPs of PDEs on arbitrary irregular domains, using the logsig transfer function in the hidden layer, purelin in the output layer, and a gradient descent training algorithm; he also used an RBFNN for the same problem and compared the two. Junaid et al. [11] used an ANN with a genetic training algorithm and log-sigmoid activation function for solving first order ODEs. Zahoor et al. [12] used an evolutionary technique for the solution of nonlinear Riccati differential equations of fractional order, where the learning of the unknown parameters in the neural network is achieved with hybrid intelligent algorithms mainly based on a genetic algorithm (GA).

Abdul Samath et al. [13] suggested the solution of the matrix Riccati differential equation (MRDE) for nonlinear singular systems using an ANN. Ibraheem et al. [14] proposed a shooting neural network algorithm for solving two point second order BVPs in ODEs, which reduces the equation to a system of two first order equations. Hoda et al. [4] described a numerical solution with neural networks for solving PDEs with mixed boundary conditions. Majidzadeh [15] suggested a new approach for reducing the inverse problem for a domain to an equivalent problem in a variational setting using a radial basis function neural network; he also used a cascade feed forward network to solve the two-dimensional Poisson equation with back propagation and the Levenberg-Marquardt training algorithm, with a three-layer architecture of 12 input nodes, 18 tansig transfer functions in the hidden layer, and 3 linear nodes in the output layer. Oraibi [16] designed feed forward neural networks (FFNN) for solving IVPs of ODEs. Ali [17] designed a fast FFNN to solve two point BVPs. This paper proposes an FFNN to solve two point singular boundary value problems (TPSBVP) with the back propagation (BP) training algorithm.

2. Singular Boundary Value Problem
The general form of a second order two point boundary value problem (TPBVP) is:

y'' + P(x) y' + Q(x) y = 0, a ≤ x ≤ b (1)
y(a) = A and y(b) = B, where A, B ∈ R.

There are two types of points x0 ∈ [a, b]: ordinary points and singular points. A function y(x) is analytic at x0 if it has a power series expansion at x0 that converges to y(x) on an open interval containing x0. A point x0 is an ordinary point of the ODE (1) if the functions P(x) and Q(x) are analytic at x0; otherwise x0 is a singular point of the ODE. In other words, if P(x) or Q(x) is not analytic at x0, then x0 is said to be a singular point [18, 19]. There is at present no theoretical work justifying numerical methods for solving problems with singular points; the main practical approach to such problems seems to be the semi-analytic technique [20].

3. Artificial Neural Network
An ANN is a simplified mathematical model of the human brain. It can be implemented by both electric elements and computer software. It is a parallel distributed processor with a large number of connections; it is an information processing system that has certain performance characteristics in common with biological neural networks [21]. The arriving signals, called inputs, multiplied by the (adjusted) connection weights, are first summed (combined) and then passed through a transfer function to produce the output of that neuron. The activation (transfer) function acts on the weighted sum of the neuron's inputs, and the most commonly used transfer function is the sigmoid function (tansig) [17]. There are two main connection types: feedback (recurrent) and feed forward. Feedback is a type of connection where the output of one layer routes back to the input of a previous layer, or to the same layer. A feed forward network (FFNN) does not have a connection back from the output to the input neurons [22]. There are many different training algorithms, but the most often used is the delta rule or back propagation (BP) rule. A neural network is trained to map a set of input data by iterative adjustment of the weights. Information from the inputs is fed forward through the network to optimize the weights between neurons. Optimization of the weights is made by backward propagation of the error during the training phase. The ANN reads the input and output values in the training data set and changes the values of the weighted links to reduce the difference between the predicted and target (observed) values. The error in prediction is minimized across many training cycles (iterations or epochs) until the network reaches a specified level of accuracy. A complete round of forward-backward passes and weight adjustments using all input-output pairs in the data set is called an epoch or iteration. If a network is left to train for too long, however, it will be overtrained and will lose the ability to generalize. In this paper we focus on the training situation known as supervised training, in which a set of input/output data patterns is available, so the ANN has to be trained to produce the desired output according to the examples. In order to perform supervised training we need a way of evaluating the ANN output error between the actual and the expected output; a popular measure is the mean squared error (MSE) or root mean squared error (RMSE) [23].

4. Description of the Method
In the proposed approach the model function is expressed as the sum of two terms: the first term satisfies the boundary conditions (BC) and contains no adjustable parameters; the second term is found by using an ANN which is trained so as to satisfy the differential equation. Such a technique is called a collocation neural network. In this section we illustrate how our approach can be used to find the approximate solution of the general form of a 2nd order TPSBVP:

x^m y''(x) = F(x, y(x), y'(x)) (2)

subject to certain BCs, where m ∈ Z, x ∈ D ⊂ R, D denotes the domain, and y(x) is the solution to be computed. If y_t(x, p) denotes a trial solution with adjustable parameters p, the problem is transformed into the discretized form:

min_p Σ_{x_i ∈ D̂} G(x_i, y_t(x_i, p), y_t'(x_i, p), y_t''(x_i, p))^2 = min_p Σ_{x_i ∈ D̂} [x_i^m y_t''(x_i, p) − F(x_i, y_t(x_i, p), y_t'(x_i, p))]^2 (3)

subject to the constraints imposed by the BCs, where D̂ is the set of discrete (training) points in D.
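To make the discretized objective (3) concrete, the following short Python sketch evaluates it over a set of collocation points. It is an illustration only, not the code used in the paper; the names collocation_objective, trial and F, and the assumption that the trial-solution routine returns the value and its first two derivatives, are ours.

```python
def collocation_objective(params, trial, F, xs, m):
    """Discretized objective of Eq. (3): the sum over the collocation points
    x_i of (x_i^m * y_t''(x_i, p) - F(x_i, y_t(x_i, p), y_t'(x_i, p)))^2.
    `trial(x, params)` is assumed to return (y_t, y_t', y_t'') at x."""
    total = 0.0
    for x in xs:
        y, dy, d2y = trial(x, params)
        total += (x**m * d2y - F(x, y, dy))**2
    return total
```

Any standard minimization routine can then be applied to this scalar objective; how the trial solution is built so that the BC constraints are satisfied automatically is described next.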

In our proposed approach the trial solution y_t employs an ANN, and the parameters p correspond to the weights and biases of the neural architecture. We choose a form for the trial function y_t(x) such that it satisfies the BCs. This is achieved by writing it as a sum of two terms:

y_t(x, p) = A(x) + G(x, N(x, p)) (4)

where N(x, p) is a single-output ANN with parameters p and n input units fed with the input vector x. The term A(x) contains no adjustable parameters and satisfies the BCs. The second term G is constructed so as not to contribute to the BCs, so that y_t(x) satisfies them. This term is formed using an ANN whose weights and biases are adjusted in order to deal with the minimization problem.

5. Computation of the Gradient
An efficient minimization of (3) can be considered as a procedure of training the network, where the error corresponding to each input x_i is the value E(x_i), which has to be forced near zero. Computation of this error value involves not only the ANN output but also the derivatives of the output with respect to its inputs. Therefore, in computing the gradient of the error with respect to the network weights, consider a multilayer ANN with n input units (where n is the dimension of the domain), one hidden layer with H sigmoid units, and a linear output unit. For a given input x the output of the ANN is:

N = Σ_{i=1}^{H} v_i σ(z_i), where z_i = Σ_{j=1}^{n} w_ij x_j + b_i

w_ij denotes the weight connecting input unit j to hidden unit i, v_i denotes the weight connecting hidden unit i to the output unit, b_i denotes the bias of hidden unit i, and σ(z) is the sigmoid transfer function (tansig). The gradient of the ANN output with respect to the parameters of the ANN can easily be obtained as:

∂N/∂v_i = σ(z_i) (5)
∂N/∂b_i = v_i σ'(z_i) (6)
∂N/∂w_ij = v_i σ'(z_i) x_j (7)

Once the derivative of the error with respect to the network parameters has been defined, it is straightforward to employ any minimization technique. It must also be noted that the batch mode of weight updates may be employed.

6. Illustration of the Method
In this section we describe the solution of a TPSBVP using an ANN. To illustrate the method, we consider the 2nd order TPSBVP:

x^m d^2y(x)/dx^2 = f(x, y, y') (8)

where x ∈ [a, b], with the BC: y(a) = A, y(b) = B (Dirichlet case), or y'(a) = A, y'(b) = B (Neumann case), or y(a) = A, y'(b) = B (mixed case). A trial solution can be written as:

y_t(x, p) = (bA − aB)/(b − a) + (B − A)x/(b − a) + (x − a)(x − b) N(x, p) (9)

where N(x, p) is the output of an FFNN with one input unit for x and weights p. Note that y_t(x) satisfies the BCs by construction. The error quantity to be minimized is given by:

E[p] = Σ_{i=1}^{n} { d^2y_t(x_i, p)/dx^2 − f(x_i, y_t(x_i, p), dy_t(x_i, p)/dx) }^2 (10)

where the points x_i lie in [a, b].
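As a minimal sketch of the network of Section 5 in the one-dimensional case (n = 1), assuming a tansig (tanh) hidden layer, the following Python function returns N(x, p), the input derivatives dN/dx and d^2N/dx^2 needed in (10), and the parameter gradients of Eqs. (5)-(7); the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def network_outputs(x, w, v, b):
    """Single-input, single-hidden-layer network with H tansig (tanh) units:
    N(x, p) = sum_i v[i] * sigma(z[i]),  z[i] = w[i] * x + b[i]."""
    z = w * x + b
    s = np.tanh(z)              # sigma(z_i)
    ds = 1.0 - s**2             # sigma'(z_i) for tanh
    dds = -2.0 * s * ds         # sigma''(z_i)

    N = v @ s                   # network output
    dN_dx = (v * w) @ ds        # dN/dx
    d2N_dx2 = (v * w**2) @ dds  # d^2N/dx^2

    dN_dv = s                   # Eq. (5): dN/dv_i = sigma(z_i)
    dN_db = v * ds              # Eq. (6): dN/db_i = v_i * sigma'(z_i)
    dN_dw = v * ds * x          # Eq. (7): dN/dw_i = v_i * sigma'(z_i) * x (n = 1)
    return N, dN_dx, d2N_dx2, (dN_dv, dN_db, dN_dw)
```

With these quantities available, the derivatives of the trial solution (9) follow from the product rule.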

Since

dy_t(x, p)/dx = (B − A)/(b − a) + {(x − a) + (x − b)} N(x, p) + (x − a)(x − b) dN(x, p)/dx

and

d^2y_t(x, p)/dx^2 = 2N(x, p) + 2{(x − a) + (x − b)} dN(x, p)/dx + (x − a)(x − b) d^2N(x, p)/dx^2,

it is straightforward to compute the gradient of the error with respect to the parameters p using (5)-(7). The same holds for all subsequent model problems.

7. Example
In this section we report numerical results using a multi-layer ANN with one hidden layer of 5 hidden units (neurons) and one linear output unit. The sigmoid activation of each hidden unit is tansig, and the analytic solution y_a(x) is known in advance, so we test the accuracy of the obtained solutions by computing the deviation Δy(x) = |y_t(x) − y_a(x)|. In order to illustrate the characteristics of the solutions provided by the neural network method, we provide figures displaying the corresponding deviation Δy(x) both at the few points (training points) that were used for training and at many other points (test points) of the domain of the equation. The latter kind of figure is of major importance since it shows the interpolation capability of the neural solution, which appears to be superior compared with solutions obtained by other methods. Moreover, we can consider points outside the training interval in order to obtain an estimate of the extrapolation performance of the obtained numerical solution.

Example 1
Consider the following 2nd order TPSBVP:

y'' = 5x^3 (5x^5 e^y − x − 5)/(4 + x^5) − (1/x + 1) y'

with BC: y(0) = 0, y(1) = 1 and x ∈ [0, 1]. This problem is an application of oxygen diffusion, with analytic solution y_a(x) = −ln(x^5 + 4). According to (9), the trial neural form of the solution is taken to be:

y_t(x) = x + x(x − 1) N(x, p).

The ANN was trained using a grid of ten equidistant points in [0, 1]. Figure 1 displays the analytic and neural solutions obtained with the different training algorithms. The neural results with the different training algorithms, namely Levenberg-Marquardt (trainlm), quasi-Newton (trainbfg), and Bayesian Regularization (trainbr), are introduced in Table 1 and their errors are given in Table 2; Table 3 gives the performance of the training with epochs and time, and Table 4 gives the weights and biases of the designed network.

Fig 1: Analytic and neural solutions of example 1 using the trainbfg, trainbr, and trainlm training algorithms.
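To show how the pieces of Example 1 fit together in practice, here is a minimal end-to-end sketch in Python, an illustration under our own assumptions rather than the author's MATLAB implementation: it uses the trial form y_t(x) = x + x(x − 1)N(x, p), a hidden layer of H = 5 tanh units, ten collocation points, and scipy.optimize.minimize with the BFGS method as a rough stand-in for the trainbfg algorithm. Excluding the singular point x = 0 from the grid is our choice, and all names in the listing are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

H = 5                                    # hidden units, as in Section 7

def net(x, p):
    """N(x, p) and its first two x-derivatives for a tanh hidden layer."""
    w, v, b = p[:H], p[H:2 * H], p[2 * H:]
    z = w * x + b
    s = np.tanh(z)
    ds = 1.0 - s**2
    dds = -2.0 * s * ds
    return v @ s, (v * w) @ ds, (v * w**2) @ dds

def trial(x, p):
    """Trial solution of Example 1: y_t(x) = x + x(x - 1) N(x, p),
    which satisfies y_t(0) = 0 and y_t(1) = 1 by construction."""
    N, dN, d2N = net(x, p)
    y = x + x * (x - 1) * N
    dy = 1.0 + (2 * x - 1) * N + x * (x - 1) * dN
    d2y = 2 * N + 2 * (2 * x - 1) * dN + x * (x - 1) * d2N
    return y, dy, d2y

def f(x, y, dy):
    """Right-hand side of the Example 1 equation."""
    return 5 * x**3 * (5 * x**5 * np.exp(y) - x - 5) / (4 + x**5) - (1 / x + 1) * dy

xs = np.linspace(0.1, 1.0, 10)           # ten collocation points, x = 0 excluded

def error(p):
    """Discretized error (10)."""
    total = 0.0
    for x in xs:
        y, dy, d2y = trial(x, p)
        total += (d2y - f(x, y, dy))**2
    return total

p0 = 0.1 * np.random.default_rng(0).standard_normal(3 * H)
result = minimize(error, p0, method="BFGS")   # rough stand-in for trainbfg
```

The same skeleton covers Example 2 below by changing only the trial form and the residual, and swapping the optimizer would mimic the Levenberg-Marquardt and Bayesian-Regularization runs reported in the tables.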

Table 1: Analytic and neural solutions of example 1 (input x, the analytic solution y_a(x), and the output y_t(x) of the suggested FFNN for the trainlm, trainbfg, and trainbr training algorithms).

Table 2: Accuracy of the solutions for example 1: the error E(x) = |y_t(x) − y_a(x)|, where y_t(x) is computed by the trainlm, trainbfg, and trainbr training algorithms.

Table 3: Performance of the training for each training function (trainlm, trainbfg, trainbr): final training performance, number of epochs, training time, and MSE.

Table 4: Weights and biases of the network for the different training algorithms (net.IW{1,1}, net.LW{2,1}, net.B{1} for trainlm, trainbfg, and trainbr).

Rasheed [20] solved this problem using a semi-analytic technique and gave a series solution P15 of degree 15 in x. Abukhaled et al. [24] applied L'Hopital's rule to overcome the singularity at x = 0 and then used a modified spline approach, obtaining a maximum error of 7.79e-4; solving the same problem with a finite difference method gave a maximum error of 1.46e-3.

Example 2
Consider the following 2nd order TPSBVP:

y'' + (1/x) y' + exp(y) = 0

with BC: y'(0) = 0, y(1) = 0 and x ∈ [0, 1]. The analytic solution is y_a(x) = 2 ln((B + 1)/(B x^2 + 1)) with B = 3 − 2√2. According to (9), the trial neural form of the solution is:

y_t(x) = x(x − 1) N(x, p).

The ANN was trained using a grid of ten equidistant points in [0, 1]. Figure 2 displays the analytic and neural solutions obtained with the different training algorithms. The neural results with the different training algorithms, namely Levenberg-Marquardt (trainlm), quasi-Newton (trainbfg), and Bayesian Regularization (trainbr), are introduced in Table 5 and their errors are given in Table 6; Table 7 gives the performance of the training with epochs and time, and Table 8 gives the weights and biases of the designed network.
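The constant in the closed form of Example 2 is not fully legible in the source; the value B = 3 − 2√2 quoted above is the standard choice (a root of B^2 − 6B + 1 = 0) and can be checked numerically with a few lines of Python, as in the hypothetical snippet below.

```python
import numpy as np

# Closed form quoted for Example 2, with B = 3 - 2*sqrt(2) (our assumption).
B = 3 - 2 * np.sqrt(2)
y = lambda x: 2 * np.log((B + 1) / (B * x**2 + 1))
dy = lambda x: -4 * B * x / (B * x**2 + 1)
d2y = lambda x: (4 * B**2 * x**2 - 4 * B) / (B * x**2 + 1)**2

xs = np.linspace(0.1, 0.9, 9)
residual = d2y(xs) + dy(xs) / xs + np.exp(y(xs))   # should vanish: y'' + y'/x + e^y = 0
print(np.max(np.abs(residual)), y(1.0))            # residual near machine precision, y(1) = 0
```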

Fig 2: Analytic and neural solutions of example 2 using the trainbfg, trainbr, and trainlm training algorithms.

Table 5: Analytic and neural solutions of example 2 (input x, the analytic solution y_a(x), and the output y_t(x) of the suggested FFNN for the trainlm, trainbfg, and trainbr training algorithms).

Table 6: Accuracy of the solutions for example 2: the error E(x) = |y_t(x) − y_a(x)|, where y_t(x) is computed by the trainlm, trainbfg, and trainbr training algorithms.

Table 7: Performance of the training for each training function (trainlm, trainbfg, trainbr): final training performance, number of epochs, training time, and MSE.

Table 8: Weights and biases of the network for the different training algorithms (net.IW{1,1}, net.LW{2,1}, net.B{1} for trainlm, trainbfg, and trainbr).

Ramos [25] solved this example using a C1-linearization method, and Kumar [26] solved it by a three-point finite difference technique; the absolute errors they report are larger than those obtained here by the proposed network.

8. Conclusion
From the above problems it is clear that the proposed network can handle TPSBVPs effectively and provide accurate approximate solutions throughout the whole domain, not only at the training points. As is evident from the tables, the results of the proposed network are more precise than those of the methods suggested in [20, 24, 25, 26]. In general, practical results show that for ANNs containing up to a few hundred weights the Levenberg-Marquardt algorithm (trainlm) has the fastest convergence, followed by trainbfg and then trainbr; however, trainbr does not perform well on function approximation problems. The performance of the various algorithms can also be affected by the accuracy required of the approximation.

9. References
1. Mohammed KM. On Solution of Two Point Second Order Boundary Value Problems by Using Semi-Analytic Method. MSc thesis, University of Baghdad, College of Education Ibn Al-Haitham.
2. LeVeque RJ. Finite Difference Methods for Differential Equations. University of Washington, AMath 585, Winter Quarter.
3. Agatonovic-Kustrin S, Beresford R. Basic concepts of artificial neural network modeling and its application in pharmaceutical research. J. Pharm. Biomed. Anal. 2000; 22.
4. Hoda SA, Nagla HA. On Neural Network Methods for Mixed Boundary Value Problems. International Journal of Nonlinear Science. 2011; 11(3).
5. Lagaris IE, Likas AC, Papageorgiou DG. Neural-Network Methods for Boundary Value Problems with Irregular Boundaries. IEEE Transactions on Neural Networks. 2000; 11(5).
6. McFall KS, Mahan JS. Investigation of Weight Reuse in Multi-Layer Perceptron Networks for Accelerating the Solution of Differential Equations. IEEE International Conference on Neural Networks. 2004; 14.
7. Tawfiq LNM. Design and Training Artificial Neural Networks for Solving Differential Equations. PhD thesis, University of Baghdad, College of Education Ibn Al-Haitham.
8. Malek A, Beidokhti RS. Numerical solution for high order differential equations using a hybrid neural network-optimization method. Applied Mathematics and Computation. 2006; 183.
9. Akca H, Al-Lail MH, Covachev V. Survey on Wavelet Transform and Application in ODE and Wavelet Networks. Advances in Dynamical Systems and Applications. 2006; 1(2).
10. McFall KS. An Artificial Neural Network Method for Solving Boundary Value Problems with Arbitrary Irregular Boundaries. PhD thesis, Georgia Institute of Technology.
11. Junaid A, Raja MAZ, Qureshi IM. Evolutionary Computing Approach for the Solution of Initial Value Problems in Ordinary Differential Equations. World Academy of Science, Engineering and Technology. 2009; 55.
12. Zahoor RMA, Khan JA, Qureshi IM. Evolutionary Computation Technique for Solving Riccati Differential Equation of Arbitrary Order. World Academy of Science, Engineering and Technology. 2009; 58.
13. Abdul Samath J, Kumar PS, Begum A. Solution of Linear Electrical Circuit Problem using Neural Networks. International Journal of Computer Applications. 2010; 2(1).
14. Ibraheem KI, Khalaf BM. Shooting Neural Networks Algorithm for Solving Boundary Value Problems in ODEs. Applications and Applied Mathematics: An International Journal. 2011; 6(11).
15. Majidzadeh K. Inverse Problem with Respect to Domain and Artificial Neural Network Algorithm for the Solution. Mathematical Problems in Engineering. 2011.
16. Oraibi YA. Design Feed Forward Neural Networks for Solving Ordinary Initial Value Problems. MSc thesis, University of Baghdad, College of Education Ibn Al-Haitham.
17. Ali MH. Design Fast Feed Forward Neural Networks to Solve Two Point Boundary Value Problems. MSc thesis, University of Baghdad, College of Education Ibn Al-Haitham.
18. Rachůnková I, Staněk S, Tvrdý M. Solvability of Nonlinear Singular Problems for Ordinary Differential Equations. New York, USA.
19. Shampine LF, Kierzenka J, Reichelt MW. Solving Boundary Value Problems for Ordinary Differential Equations in MATLAB with bvp4c.
20. Rasheed HW. Efficient Semi-Analytic Technique for Solving Second Order Singular Ordinary Boundary Value Problems. MSc thesis, University of Baghdad, College of Education Ibn Al-Haitham.
21. Galushkin AI. Neural Networks Theory. Berlin Heidelberg.
22. Mehrotra K, Mohan CK, Ranka S. Elements of Artificial Neural Networks. Springer.
23. Ghaffari A, Abdollahi H, Khoshayand MR, Soltani Bozchalooi I, Dadgar A, Rafiee-Tehrani M. Performance comparison of neural network training algorithms in modeling of bimodal drug delivery. International Journal of Pharmaceutics. 2006; 327.
24. Abukhaled M, Khuri SA, Sayfy A. A Numerical Approach for Solving a Class of Singular Boundary Value Problems Arising in Physiology. International Journal of Numerical Analysis and Modeling. 2011; 8(2).
25. Ramos JI. Piecewise quasilinearization techniques for singular boundary value problems. Computer Physics Communications. 2004; 158.
26. Kumar M. A three-point finite difference method for a class of singular two-point boundary value problems. J. Comput. Appl. Math. 2002; 145.
