

International Journal of Computer Mathematics

RESEARCH ARTICLE

Solving Polynomial Systems Using a Fast Adaptive Back Propagation-type Neural Network Algorithm

KONSTANTINOS GOULIANAS (a), ATHANASIOS MARGARIS (b,*), IOANNIS REFANIDIS (c), KONSTANTINOS DIAMANTARAS (d)

(a,d) ATEI of Thessaloniki, Department of Information Technology, Thessaloniki, Greece
(b,c) University of Macedonia, Department of Applied Informatics, Thessaloniki, Greece

(Submitted January 2016)

This paper proposes a neural network architecture for solving systems of nonlinear equations. A back propagation algorithm is applied to solve the problem, using an adaptive learning rate procedure based on the minimization of the mean squared error function defined by the system, as well as the network activation function, which can be linear or nonlinear. The results obtained are compared with some of the standard global optimization techniques used for solving nonlinear equation systems. The method was tested on some well-known and difficult applications (such as the Gauss-Legendre 2-point formula for numerical integration, a chemical equilibrium application, a kinematics application, a neurophysiology application, a combustion application and an interval arithmetic benchmark) in order to evaluate the performance of the new approach. Empirical results reveal that the proposed method is characterized by fast convergence and is able to deal with high-dimensional equation systems.

1. Introduction and review of previous work

Polynomial systems of equations are of major interest and are heavily used in many disciplines of science such as mathematics, physics and engineering. In the last decades a lot of algorithms have been developed for solving such systems (see for example [6] and [7], as well as [8], [ ] and [ ]). According to [3], the approaches for solving polynomial systems of equations can be classified in three main categories:

(1) Symbolic methods that stem from algebraic geometry and are able to perform variable elimination. However, the currently available methods are efficient only for sets of low-degree systems of polynomials.
(2) Numerical methods that are mainly based on iterative procedures. These methods are suitable for local analysis only and perform well only if the initial guess is good enough, a condition that is generally rather difficult to satisfy.

(3) Geometric methods, such as subdivision-based algorithms for intersection and ray tracing of curves and surfaces. They are characterized by slow convergence and have limited applications.

Since polynomial systems can be considered as generalizations of systems of linear equations, it is tempting to attempt to solve them using the well-known iterative procedures of numerical linear algebra, after their appropriate modification. The methods described by Ortega and Rheinboldt ([4],[5]) are examples of such an approach. On the other hand, the so-called continuation methods [6] begin with a starting system with a

* Corresponding author. e-mail: amarg@uom.gr. © Taylor & Francis

known solution that is gradually transformed into the nonlinear system to be solved. This is a multistage process in which, at each stage, the current system is solved by Newton-type methods to identify a starting point for the next stage system since, as the system changes, the solution moves on a path that joins a solution of the starting system with the solution of the desired system.

There are two other efficient approaches for solving polynomial systems of equations, as well as nonlinear algebraic systems of any type, based on the use of neural networks and genetic algorithms. Since the proposed method is a neural-based one, the neural-based methods are described in a lot of detail, while the description of the genetic algorithm approaches is much shorter. The main motivation for using neural networks in an attempt to solve nonlinear algebraic systems is that neural networks are universal approximators, in the sense that they have the ability to simulate any function of any type with a predefined degree of accuracy. Towards this direction, Nguyen [ ] proposed a neural network architecture capable of implementing the well-known Newton-Raphson algorithm for solving multivariate nonlinear systems, using the iterative equation x_{p+1} = x_p + Δx_p in the p-th step, with J(x_p) Δx_p = −f(x_p), where x is the solution vector of the appropriate dimensions and J is the Jacobian matrix. Nguyen defined the quadratic form

E_p = [J(x_p) Δx_p + f(x_p)]^T [J(x_p) Δx_p + f(x_p)]

characterized by a zero minimal value (the notation A^T represents the transpose of the matrix A) and a gradient of the form

∂E_p/∂(Δx_p) = 2 [J(x_p)]^T J(x_p) Δx_p + 2 [J(x_p)]^T f(x_p)

The proposed neural network that deals with this situation uses Δx_p as the output vector and satisfies the equation

d(Δx_p)/dt = k ∂E_p/∂(Δx_p)

where k is a negative scalar constant. The stability of this system is guaranteed by the fact that the Hessian matrix of the system is a positive definite matrix. The proposed structure is a feed-forward two-layered neural network in which the weighting coefficients are the elements of the transpose of the Jacobian matrix.
The output of this network implements the vector-matrix product [J(x_p)]^T J(x_p) Δx_p. Each neuron is associated with a linear processing function, while the total number of processing nodes is equal to the dimension N of the nonlinear system of equations. On the other hand, Mathia & Saeks [ ] solved nonlinear equations using recurrent neural networks (more specifically, linear Hopfield networks) in conjunction with multilayered perceptrons that are trained first. The multilayered perceptrons they used are composed of an input layer of m neurons, a hidden layer of q neurons and an output layer of n neurons, with the matrix W_1 containing the synaptic weights between the input and the hidden layer, and the matrix W_2 containing the synaptic weights between the hidden and the output layer. The neurons of the hidden layer use the sigmoidal function, while the neurons of the output layer are linear units. This MLP can be viewed as a parametrized nonlinear mapping of the form f : R^m → R^n with

y = f(x) = W_2 g(W_1 x + b_1) = W_2 g(α)

where α = W_1 x + b_1 and g : R^q → R^q with z = g(α) = [g(α_1), g(α_2), ..., g(α_q)]^T is the hidden layer representative function. On the other hand, the linear Hopfield network has the well-known recurrent structure with a constant input and state y, and in this work it uses linear activation functions instead of the commonly used nonlinear functions (such as the step or sigmoidal function). Mathia & Saeks used this structure to implement the Newton method defined above, which guarantees quadratic convergence given a sufficiently accurate initial condition and a nonsingular Jacobian matrix J for all the iteration steps. In this approach, the Jacobian of the MLP is obtained via the chain rule

J = ∂f/∂x = (∂f/∂z)(∂z/∂α)(∂α/∂x) = W_2 [G(α)(I − G(α))] W_1

It is clear that, since this method requires the inversion of the Jacobian matrix, this matrix must be square, and therefore only the case m = n is supported by this architecture. Another prerequisite for the method to work is that the number of hidden neurons must be greater than the number of input and output neurons. Based on all these evidences, Mathia & Saeks defined the recurrent network as

x_{p+1} = x_p − α [W_2 (G_p (I − G_p)) W_1]^{-1} (W_2 g(W_1 x_p + b_1) − y)

where G_p = G(α_p) is a constant diagonal matrix. The solution of the nonlinear equation can now be estimated via an iterative procedure that terminates when a predefined tolerance value is reached by the network.

To evaluate the performance of the proposed method for various system dimensions, it is compared to four well-known methods found in the literature. The first one of them is Newton's method [6], which allows the approximation of the function F(x) by its first-order Taylor expansion in the neighborhood of a point x = (x_1, x_2, x_3, ..., x_n)^T ∈ R^n. This is an iterative method that uses as input an initial guess

x(0) = [x_1(0), x_2(0), x_3(0), ..., x_n(0)]^T   (1)

and generates a vector sequence {x_0, x_1, ..., x_k, ...}, with the vector x_k associated with the k-th iteration of the algorithm, estimated as

x_{k+1} = x_k − J(x_k)^{-1} F(x_k)   (2)

where J(x_k) ∈ R^{n×n} is the Jacobian matrix of the function F = (f_1, f_2, ..., f_n)^T estimated at the vector x_k.
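Newton's iteration (2) is easy to state concretely. The following is a minimal sketch (not code from the paper), applied to a hypothetical 2×2 test system; note that each step solves an n × n linear system instead of inverting the Jacobian explicitly:

```python
import numpy as np

def newton_solve(F, J, x0, tol=1e-10, max_iter=100):
    """Newton's method: x_{k+1} = x_k - J(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        # Solve J(x_k) dx = -F(x_k); cheaper and more stable than inverting J.
        x = x + np.linalg.solve(J(x), -Fx)
    return x

# Hypothetical test system: x1^2 - x2 = 0, x1 + x2 - 2 = 0 (one root at (1, 1)).
F = lambda v: np.array([v[0]**2 - v[1], v[0] + v[1] - 2.0])
J = lambda v: np.array([[2*v[0], -1.0], [1.0, 1.0]])
root = newton_solve(F, J, [2.0, 3.0])  # converges to (1.0, 1.0)
```

Note how the cost per step (a full Jacobian evaluation plus a linear solve) grows with n, which is exactly the inefficiency discussed next.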
Note that even though the method converges fast to the solution provided that the initial guess (namely the starting point x_0) is a good one, it is not considered an efficient algorithm, since it requires at each step the evaluation (or approximation) of n² partial derivatives and the solution of an n × n linear system. A performance evaluation of Newton's method, as well as of other similar direct methods, shows that these methods are impractical for large-scale problems, due to the large computational cost and memory requirements. In this work, besides the classical Newton's method, the fixed Newton's method was also used. As is well known, the difference between these variations is that in the fixed method the matrix of derivatives is not updated during iterations, and therefore the algorithm always uses the derivative matrix associated with the initial condition. An improvement to the classical Newton's method can be found in the work of Broyden [7] (see also [8] as well as [9] for the description of the secant method, another well-

known method of solution), in which the computation at each step is reduced significantly without a major decrease of the convergence speed; however, a good initial guess is still required. This prerequisite is not necessary in the well-known steepest descent method, which unfortunately does not give a rapidly convergent sequence of vectors towards the system solution. The Broyden methods used in this work are the following:

Broyden method 1. This method allows the update of the Jacobian approximation B_i during the i-th step in a stable and efficient way, and is related to the equation

B_i = B_{i−1} + [(y_i − B_{i−1} δ_i) δ_i^T] / (δ_i^T δ_i)   (3)

where x_i and x_{i−1} are the current and the previous steps of the iterative algorithm, and furthermore we define y_i = f(x_i) − f(x_{i−1}) and δ_i = x_i − x_{i−1}.

Broyden method 2. This method allows the elimination of the requirement of a linear solver to compute the step direction, and is related to the equation

B_i = B_{i−1} + [(δ_i − B_{i−1} y_i) δ_i^T B_{i−1}] / (δ_i^T B_{i−1} y_i)   (4)

with the parameters of this equation defined as previously.

The last described neural network approach used for solving nonlinear algebraic equations is the modified Hopfield network model of Mishra & Kalra [3], which can solve a nonlinear system of equations with n unknowns. This neural network is composed of n product units whose outputs are linearly summed via synaptic weights. The state equation of each one of those neurons is formed by applying the well-known Kirchhoff law to the electric circuit that represents the network, and it is proven to have the form

C_i d[φ^{-1}(x_i)]/dt + φ^{-1}(x_i)/R_i = Σ_j w_ij f_j(x_1, x_2, x_3, ..., x_n) + I_i,   i,j ∈ {1,2,...,n}

where φ is the sigmoidal activation function of the n neurons. The energy function of the modified Hopfield network is defined as

E = Σ_i (1/R_i) ∫ φ^{-1}(s) ds − Σ_i Σ_j w_ij f_j(x_1, x_2, x_3, ..., x_n) − Σ_i I_i x_i

and it is a bounded function, since it can be proven that it satisfies the inequality dE(t)/dt ≤ 0, where dE(t)/dt is the derivative of the energy function E(t).
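Stepping back to Broyden method 1, equation (3) above replaces the per-step Jacobian evaluation by a rank-1 update. A minimal sketch (not the paper's implementation), using a hypothetical 2×2 test system and evaluating the true Jacobian only once, at the starting point:

```python
import numpy as np

def broyden1(F, x0, B0, tol=1e-10, max_iter=200):
    """Broyden method 1: B_i = B_{i-1} + (y_i - B_{i-1} d_i) d_i^T / (d_i^T d_i)."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        d = np.linalg.solve(B, -Fx)                 # step delta_i
        x = x + d
        F_new = F(x)
        y = F_new - Fx                              # y_i = f(x_i) - f(x_{i-1})
        B = B + np.outer(y - B @ d, d) / (d @ d)    # rank-1 Jacobian update
        Fx = F_new
    return x

# Hypothetical test system: x1^2 - x2 = 0, x1 + x2 - 2 = 0.
F = lambda v: np.array([v[0]**2 - v[1], v[0] + v[1] - 2.0])
x0 = np.array([2.0, 3.0])
B0 = np.array([[4.0, -1.0], [1.0, 1.0]])  # Jacobian at x0, evaluated once
x = broyden1(F, x0, B0)
```

Only one Jacobian evaluation is needed; every later step costs a linear solve plus a rank-1 update, which is the per-step saving the text refers to.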
In order to use the Mishra & Kalra model for solving nonlinear algebraic systems, the energy function is defined as the sum of the squares of the differences between the left- and the right-hand sides of each equation, while the number n of neurons is equal to the number of unknowns of the nonlinear algebraic system under consideration. It can be proven that this system is stable in the Lyapunov sense and is able to identify the solution vector of the above system of equations.

On the other hand, in the genetic algorithm approach, a population of candidate solutions (known as individuals or phenotypes) to an optimization problem is evolved towards better solutions. Each candidate solution is associated with a set of properties (known as chromosomes or genotypes) that can be mutated and altered. In applying this technique, a population of individuals is randomly selected and evolved in time, forming generations, whereas a fitness value is estimated for each one of them. In the next step, the fittest individuals are selected via a stochastic process and their genomes are modified to form

a new population that is used in the next generation of the algorithm. The procedure is terminated when either a maximum number of generations has been produced or a satisfactory fitness level has been reached. The application of genetic algorithms to solving nonlinear algebraic systems allows the simultaneous consideration of multiple solutions, due to their population approach [ ] (see also [ ] and [5]). Furthermore, since they do not use derivative information at all, they do not tend to get caught in local minima. However, they do not always converge to the true minimum; in fact, their strength is that they rapidly converge to near-optimal solutions. The most commonly used fitness function has the form

g = max{abs(f_i)}   (i = 1,2,3,...,n)

where max{abs(f_i)} is the maximum absolute value of the individual equations of the system and n is the number of the system equations.

The hybrid approach of Karr et al. [5] attempts to deal with the weakness of the classical Newton-Raphson method, according to which, when the initial guess is not close to the root (or, even worse, is completely unknown), the method does not behave well. In this case, a genetic algorithm can be used to locate regions in which roots of the system of equations are likely to exist. The fitness function is the function g defined above, while the coding scheme for string representation is the well-known concatenated binary linearly mapped coding scheme. The last work cited in this paper is the work of Grosan & Abraham [6] (see also [7]), which deals with the system of nonlinear equations using a modified line search approach, as well as a multiobjective evolutionary algorithm that transforms the system of equations into an optimization problem.

Returning to the proposed neural-based nonlinear system solver, the main idea is to construct a neural network-like architecture that can reproduce or implement the structure of the system of polynomials to be solved, by assigning the values of the polynomial coefficients, as well as the components of their solution (x_1, x_2, x_3, ..., x_n), to fixed and variable synaptic weights in an appropriate way.
If this network is trained such that its output vector is the null vector then, by construction, the values of the variable synaptic weights will be the components of one of the system roots. This idea, together with representative examples, is explained completely in [7] (see also [8]).

2. Problem formulation

As is well known from nonlinear algebra, the structure of a typical nonlinear algebraic system of n equations with n unknowns has the form

f_1(x_1, x_2, x_3, ..., x_n) = 0
f_2(x_1, x_2, x_3, ..., x_n) = 0
f_3(x_1, x_2, x_3, ..., x_n) = 0   (5)
...
f_n(x_1, x_2, x_3, ..., x_n) = 0

or, using vector notation, F(x) = 0, where F is a vector of nonlinear functions

F = (f_1, f_2, f_3, ..., f_n)^T   (6)

with

f_i(x) = f_i(x_1, x_2, x_3, ..., x_n)   (7)

each one of them being defined in the vector space

Ω = ∏_{i=1}^{n} [α_i, β_i] ⊂ R^n   (8)

of all real-valued continuous functions, and

x = (x_1, x_2, x_3, ..., x_n)^T   (9)

is the solution vector of the system. For nonlinear polynomial systems, each equation is a sum of terms, every term being a coefficient multiplied by a product of powers of the unknowns, so the system of equations can be written as

f_i(x) = α_i1 x_1^{e^(i)_11} x_2^{e^(i)_12} ··· x_n^{e^(i)_1n} + α_i2 x_1^{e^(i)_21} x_2^{e^(i)_22} ··· x_n^{e^(i)_2n} + ··· + α_ik_i x_1^{e^(i)_{k_i 1}} x_2^{e^(i)_{k_i 2}} ··· x_n^{e^(i)_{k_i n}} − β_i = 0   (i = 1,2,...,n)

or, in a more compact form,

f_i(x) = Σ_{j=1}^{k_i} ( α_ij ∏_{l=1}^{n} x_l^{e^(i)_jl} ) − β_i = 0   (i = 1,2,...,n)   (10)

where in every exponent e^(i)_jl the superscript i denotes the equation, the first subscript j denotes the term of the summation in equation i, and the second subscript l denotes the corresponding unknown.

3. The architecture of the proposed neural nonlinear system solver

The architecture of the neural network-like structure that can solve a complete n × n system of polynomial equations is characterized by four layers with the following structure:

Layer 0 is the single input layer. This layer does not contribute at all to the back propagation training; it simply participates in the variable-weight synapses whose values after training are the components of the system roots. In other words, in this procedure there is no training set, since the single input is always the value of unity, while the associated desired output is the zero vector of n elements.

Layer 1 contains n neurons, each one of them connected to the single input unit of the first layer. As has been mentioned, the weights of the n synapses defined in this way are the only variable weights of the network. During network training, their values are updated according to the equations of the back propagation algorithm, and after a successful training these weights contain the n components of a system root

r = (x_1, x_2, x_3, ..., x_n)   (11)

Layer 2 is composed of n blocks of neurons, with the i-th block containing k_i neurons, namely one neuron for each one of the k_i products of powers of the x's associated with the i-th equation of the nonlinear system. The neurons of this layer, as well as the activation functions associated with them, are therefore described using the double index notation (i,j) [for the values i = 1,2,...,n and j = 1,2,...,k_i]. The structure of the i-th block of this layer, as well as its connections with Layers 1 and 3, is shown in Figure 1. In order to describe the Layer 2 neurons, the shorthand notation N(i,j) is used, based on the fact that the total output of the j-th neuron of the i-th block is estimated as

x_1^{w_j1} x_2^{w_j2} x_3^{w_j3} ··· x_n^{w_jn} = ∏_{µ=1}^{n} x_µ^{w_jµ} = x^{w_j}   (12)

(i = 1,2,...,n, j = 1,2,...,k_i). In the above expression, x^{w_j} is just a symbolic notation and does not describe some valid vector operation. To express the fact that these neurons implement the product of the x components, each one of them raised to the power of the corresponding weight component, we characterize these neurons as Π (power) neurons.
Layer 3 contains an output neuron for each equation, namely a total number of n neurons, which use either the identity function y = f(x) = x or the hyperbolic tangent function y = f(x) = tanh(x) as the activation function. The complete neural network-like architecture is shown in Figure 2. Note that each neuron F_i (i = 1,2,...,n) of the output layer is associated with a bias unit, the weight of this bias synapse having the value −β_i, where β_i is the fixed term of the i-th nonlinear polynomial equation. This bias synapse is not shown in the figure.

The weight vector x = [x_1, x_2, ..., x_n] connecting Layers 0 and 1 is the only variable weight matrix, whose elements (after the successful training of the network) are the components of one of the system roots.

The matrix W connecting Layers 1 and 2 is composed of n rows, with the i-th row associated with the variable x_i (i = 1,2,...,n). The values of this row are the weights of the synapses joining the i-th neuron of the second layer with the complete set of neurons of the third layer. There is a total number of k = k_1 + k_2 + ··· + k_n neurons in this layer, and therefore the dimensions of the matrix W are n × k = n × (k_1 + k_2 + ··· + k_n). The values of these weights are the exponents of the x's in every term of each equation.

Figure 1. The structure of the i-th block of Layer 2 (the k_i neurons N(i,j), j = 1,2,...,k_i, associated with the i-th equation) and its connections with Layers 1 and 3.

Therefore, we have

w_ij(l) = e^(i)_jl   (i = 1,2,...,n,  j = 1,2,...,k_i,  l = 1,2,...,n)   (13)

From the programming point of view, it is not difficult to note that the matrix W is a two-dimensional matrix in which the weight w_ij is associated with the cell W[i][σ+j], where σ = k_1 + k_2 + ··· + k_{i−1}. Finally, the matrix A connecting Layers 2 and 3 has dimensions k × n = (k_1 + k_2 + ··· + k_n) × n, with non-zero elements only in the connections between the k_i neurons of the i-th block of Layer 2 and the i-th neuron of Layer 3. The values of this matrix are the coefficients α_ij of the polynomial system of equations. It is not difficult to note that the parameter value α_ij is stored in the cell A[i][σ+j], where the σ parameter is estimated as previously. Regarding the network architecture, the values of the i-th column of the A matrix are the fixed weights of the input synapses of the i-th output neuron (i = 1,2,...,n). In mathematical language, the coefficient matrix A is defined as

A_ij = α_ij   (i = 1,2,...,n,  j = 1,2,...,k_i)
A_kj = 0   (k = 1,2,...,n,  k ≠ i)   (14)

Since the unique neuron of the first layer does not participate in the calculations, it is not included in the index notation. Therefore, if we use the symbol u to describe a neuron input and the symbol v to describe a neuron output, the symbols u1 and v1 are associated with the n neurons of the second layer, the symbols u2 and v2 are associated with the k neurons of the third layer, and the symbols u3 and v3 are associated with the n neurons of the fourth (output) layer. These symbols are accompanied by additional indices that identify a specific neuron inside a layer, and this notation is used throughout the remaining part of the paper.
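In programming terms, the computation this architecture encodes is F_i(x) = Σ_j α_ij ∏_l x_l^{e_ijl} − β_i, with Layer 2 holding the exponents (matrix W) and Layer 3 holding the coefficients (matrix A) and the biases −β_i. The following is a minimal sketch of this evaluation (illustrative names, not the paper's code), using ragged per-equation storage instead of the flat W[i][σ+j] cells described above:

```python
import numpy as np

def forward_pass(x, exponents, coeffs, betas):
    """Evaluate F_i(x) = sum_j a_ij * prod_l x_l^e_ijl - beta_i for every equation i.

    exponents[i]: (k_i, n) array, one row of exponents per term (Layer 2 weights).
    coeffs[i]:    length-k_i array of coefficients a_ij (Layer 2 -> Layer 3 weights).
    betas[i]:     fixed term of equation i (bias of output neuron i).
    """
    x = np.asarray(x, dtype=float)
    return np.array([
        np.dot(coeffs[i], np.prod(x ** exponents[i], axis=1)) - betas[i]
        for i in range(len(betas))
    ])

# Hypothetical 2x2 system: f1 = 2*x1*x2 - 2, f2 = x1^2 + x2 - 2 (root at (1, 1)).
exponents = [np.array([[1, 1]]), np.array([[2, 0], [0, 1]])]
coeffs    = [np.array([2.0]),    np.array([1.0, 1.0])]
betas     = [2.0, 2.0]
out = forward_pass([1.0, 1.0], exponents, coeffs, betas)  # -> [0.0, 0.0]
```

Each inner `np.prod(x ** exponents[i], axis=1)` is exactly the output of one block of Π power neurons.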

Figure 2. The complete architecture of the polynomial nonlinear system solver.

Example 3.1. To understand the architecture of the neural network-like system and the way it is formed, consider a polynomial system of two equations with two unknowns (n = 2), in which the first equation contains two terms (k_1 = 2) and the second equation contains three terms (k_2 = 3). Using the above notation, this system is expressed as

f_1(x_1, x_2) = α_11 x_1^{e^(1)_11} x_2^{e^(1)_12} + α_12 x_1^{e^(1)_21} x_2^{e^(1)_22} − β_1
f_2(x_1, x_2) = α_21 x_1^{e^(2)_11} x_2^{e^(2)_12} + α_22 x_1^{e^(2)_21} x_2^{e^(2)_22} + α_23 x_1^{e^(2)_31} x_2^{e^(2)_32} − β_2

and its solution is the vector x = (x_1, x_2)^T. The final form of the system depends on the values of the above parameters: choosing specific integer values for the exponents e^(i)_jl and specific values for the coefficients α_ij and the fixed terms β_1 and β_2 fixes the system completely. The neural network that solves this example system is shown in Figure 3.

Example 3.2. The neural network architecture for the system of polynomials

f_1(x_1, x_2) = … = 0
f_2(x_1, x_2) = … = 0

is shown in Figure 4, while the contents of the matrices W (which stores the exponents of the two equations) and A (which stores their coefficients, with zero entries for the synapses that do not exist) follow the storage scheme described above.

Figure 3. The structure of the neural solver for Example 3.1. In the figure, w_jk is the synaptic weight from the i-th neuron of Layer 1 to the j-th neuron of the k-th block in Layer 2, while the bias weights −β_1 and −β_2 feed the output neurons F_1 and F_2.

Figure 4. The structure of the neural solver for Example 3.2. The dashed lines represent synapses with zero weights.

3.1. Deriving the back propagation equations

At this point we can proceed to the development of the back propagation equations that implement the three well-known stages of this algorithm, namely the forward pass, the backward pass associated with the estimation of the delta parameters, and the second forward pass related to the weight adaptation process. In a more detailed description, these stages are performed as follows:

3.1.1. Forward pass

The inputs and the outputs of the network neurons during the forward pass stage are computed as follows:

LAYER 1. u1_i = x_i · 1 = x_i and v1_i = u1_i = x_i, for i = 1,2,...,n.

LAYER 2. The inputs to the neurons of Layer 2 are

u2_ij = ∏_{l=1}^{n} (v1_l)^{w_ij(l)} = ∏_{l=1}^{n} x_l^{e^(i)_jl}   (15)

while their associated outputs have the form v2_ij = u2_ij (i = 1,2,...,n, j = 1,2,...,k_i).

LAYER 3. The inputs of the Layer 3 neurons are

u3_i = Σ_{j=1}^{k_i} v2_ij A_ij − β_i = Σ_{j=1}^{k_i} α_ij ∏_{l=1}^{n} x_l^{e^(i)_jl} − β_i = F_i(x) = F_i(x_1, x_2, ..., x_n)   (16)

for the values i = 1,2,...,n and j = 1,2,...,k_i. Regarding the output v3_i, it depends on the activation function associated with the output neurons. In this paper, the simulator uses two different functions of interest: the identity function y = f(x) = x (Case I) and the hyperbolic tangent function y = f(x) = tanh(x) (Case II). The corresponding output of the Layer 3 neurons is therefore defined as

v3_i = u3_i,   i = 1,2,...,n   (Case I)
v3_i = tanh(u3_i),   i = 1,2,...,n   (Case II)   (17)

3.1.2. Backward pass – estimation of the δ parameters

In this phase of the back propagation algorithm, the values of the δ parameters are estimated. Using the notation δ1, δ2 and δ3 to denote these parameters for the Layer 1, Layer 2 and Layer 3 neurons, their estimation equations are the following:

Case I (identity function)

δ3_i = F_i(x)   (i = 1,2,...,n)   (18)

δ2^(k)_ij = δ3_i A_ij (∂v2_ij/∂x_k) = F_i(x) α_ij e^(i)_jk x_k^{e^(i)_jk − 1} ∏_{l=1, l≠k}^{n} x_l^{e^(i)_jl}

for the values i,k = 1,2,...,n and j = 1,2,...,k_i, and

δ1_k = Σ_{i=1}^{n} Σ_{j=1}^{k_i} δ2^(k)_ij = Σ_{i=1}^{n} F_i(x) Σ_{j=1}^{k_i} α_ij e^(i)_jk x_k^{e^(i)_jk − 1} ∏_{l=1, l≠k}^{n} x_l^{e^(i)_jl}

(k = 1,2,...,n). The parameter δ2^(k)_ij is clearly associated with the unknown x_k.

Case II (hyperbolic tangent function)

δ3_i = v3_i (1 − v3_i²) = tanh[F_i(x)] (1 − tanh²[F_i(x)])   (i = 1,2,...,n)

δ2^(k)_ij = δ3_i A_ij (∂v2_ij/∂x_k) = tanh[F_i(x)] (1 − tanh²[F_i(x)]) α_ij e^(i)_jk x_k^{e^(i)_jk − 1} ∏_{l=1, l≠k}^{n} x_l^{e^(i)_jl}

for the values i,k = 1,2,...,n and j = 1,2,...,k_i, and

δ1_k = Σ_{i=1}^{n} Σ_{j=1}^{k_i} δ2^(k)_ij = Σ_{i=1}^{n} tanh[F_i(x)] (1 − tanh²[F_i(x)]) Σ_{j=1}^{k_i} α_ij e^(i)_jk x_k^{e^(i)_jk − 1} ∏_{l=1, l≠k}^{n} x_l^{e^(i)_jl}

(k = 1,2,...,n). Again, the parameter δ2^(k)_ij is associated with the unknown x_k.

4. Convergence analysis and update of the synaptic weights

Case I (identity function)

We define the energy function as

E(x) = (1/2) Σ_{i=1}^{n} (d_i − v3_i)² = (1/2) Σ_{i=1}^{n} (0 − v3_i)² = (1/2) Σ_{i=1}^{n} (v3_i)² = (1/2) Σ_{i=1}^{n} [F_i(x)]²

[here d_i = 0 (i = 1,2,...,n) is the desired output of the i-th neuron of the output layer and v3_i (i = 1,2,...,n) is the corresponding real output], and therefore we have

∂E(x)/∂x_k = Σ_{i=1}^{n} F_i(x) (∂F_i(x)/∂x_k) = Σ_{i=1}^{n} F_i(x) Σ_{j=1}^{k_i} α_ij e^(i)_jk x_k^{e^(i)_jk − 1} ∏_{l=1, l≠k}^{n} x_l^{e^(i)_jl} = δ1_k

(k = 1,2,...,n). By applying the weight update equation of the back propagation, we get the expression

x_k^(m+1) = x_k^(m) − β(m) ∂E(x)/∂x_k = x_k^(m) − β(m) δ1_k   (19)

(k = 1,2,...,n), where β is the learning rate of the back propagation algorithm, and x_k^(m) and x_k^(m+1) are the values of the synaptic weight w_k = x_k during the m-th and (m+1)-th iterations respectively.

Case II (hyperbolic tangent function)

Working in the same way, this time we have

E(x) = (1/2) Σ_{i=1}^{n} (d_i − v3_i)² = (1/2) Σ_{i=1}^{n} (0 − v3_i)² = (1/2) Σ_{i=1}^{n} (v3_i)² = (1/2) Σ_{i=1}^{n} tanh²[F_i(x)]

In this case, the partial derivative of the mean square error with respect to x_k has the

form

∂E(x)/∂x_k = Σ_{i=1}^{n} tanh[F_i(x)] (∂tanh[F_i(x)]/∂x_k) = Σ_{i=1}^{n} tanh[F_i(x)] (1 − tanh²[F_i(x)]) (∂F_i(x)/∂x_k) = Σ_{i=1}^{n} tanh[F_i(x)] (1 − tanh²[F_i(x)]) Σ_{j=1}^{k_i} α_ij e^(i)_jk x_k^{e^(i)_jk − 1} ∏_{l=1, l≠k}^{n} x_l^{e^(i)_jl} = δ1_k

(k = 1,2,...,n), and the weight adaptation equation is given again by the expression

x_k^(m+1) = x_k^(m) − β(m) ∂E(x)/∂x_k = x_k^(m) − β(m) δ1_k   (20)

for the values k = 1,2,...,n.

5. The case of adaptive learning rate

The adaptive learning rate is one of the most interesting features of the proposed simulator, since it allows each neuron of Layer 1 to be trained with its own learning rate value β(k) (k = 1,2,...,n). However, the values of these individual learning rates must allow the algorithm to converge, and in what follows the required convergence conditions are established. The energy function associated with the m-th iteration is defined as

E_m(x) = (1/2) Σ_{i=1}^{n} [F_i^m(x)]²   (21)

and therefore the difference between the energies associated with the m-th and (m+1)-th iterations has the form

ΔE_m(x) = E_{m+1}(x) − E_m(x) = (1/2) Σ_{i=1}^{n} { [F_i^m(x) + ΔF_i^m(x)]² − [F_i^m(x)]² } = Σ_{i=1}^{n} ΔF_i^m(x) [ F_i^m(x) + ΔF_i^m(x)/2 ]   (22)

as can easily be proven via simple mathematical operations. From the weight update equation of the back propagation algorithm it can easily be seen that, when the weight x_k is updated,

Δx_k = −β(k) ∂E_m(x)/∂x_k = −β(k) Σ_{i=1}^{n} F_i^m(x) (∂F_i^m(x)/∂x_k)   (23)

and therefore

ΔF_i^m(x) = (∂F_i^m(x)/∂x_k) Δx_k = −β(k) (∂F_i^m(x)/∂x_k) Σ_{λ=1}^{n} F_λ^m(x) (∂F_λ^m(x)/∂x_k)

(k = 1,2,...,n). Using this expression in the equation of ΔE_m(x), after the appropriate

mathematical manipulations, it gets the form

ΔE_m(x) = −β(k) [ Σ_{i=1}^{n} F_i^m(x) (∂F_i^m(x)/∂x_k) ]² [ 1 − (β(k)/2) Σ_{i=1}^{n} (∂F_i^m(x)/∂x_k)² ]

The convergence condition of the back propagation algorithm is expressed as ΔE_m(x) < 0, an inequality that leads directly to the required criterion of convergence

β(k) < 2 / Σ_{i=1}^{n} (∂F_i^m(x)/∂x_k)²   (24)

Defining the adaptive learning rate parameter (ALRP) µ, this expression is written as

β(k) = µ / ||C_k^m(J)||²   (25)

where C_k^m(J) is the k-th column of the Jacobian matrix

J = [ ∂F_1/∂x_1   ∂F_1/∂x_2   ...   ∂F_1/∂x_n
      ∂F_2/∂x_1   ∂F_2/∂x_2   ...   ∂F_2/∂x_n
      ...
      ∂F_n/∂x_1   ∂F_n/∂x_2   ...   ∂F_n/∂x_n ]   (26)

for the m-th iteration. Using this notation, the back propagation algorithm converges for adaptive learning rate parameter (ALRP) values µ < 2.

The above description is associated with Case I, which uses the identity function, while for Case II (hyperbolic tangent function) the appropriate equations are

E_m(x) = (1/2) Σ_{i=1}^{n} (0 − v3_i^m)² = (1/2) Σ_{i=1}^{n} [v3_i^m]² = (1/2) Σ_{i=1}^{n} tanh²[F_i^m(x)]   (27)

and, by performing a lengthy (but quite similar) analysis, the convergence criterion is proven to have the form

β(j) < 2 / Σ_{i=1}^{n} [ (1 − tanh²[F_i^m(x)]) (∂F_i^m(x)/∂x_j) ]² = 2 / Σ_{i=1}^{n} [ (1 − [v3_i^m]²) (∂F_i^m(x)/∂x_j) ]²

with the µ parameter to be defined accordingly.
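The adaptive rule (25) is simple to state in code: each unknown x_k gets its own learning rate, µ divided by the squared norm of the k-th Jacobian column, which keeps every step inside the bound (24) as long as µ < 2. The following is a minimal sketch of one such update for the identity-activation (Case I) mode, on a hypothetical 2×2 system that is not one of the paper's benchmarks:

```python
import numpy as np

def adaptive_lr_step(x, F, J, mu=0.5):
    """One Case I update: x_k <- x_k - beta(k) * dE/dx_k, with E = (1/2) sum_i F_i^2.

    beta(k) = mu / ||J[:, k]||^2 satisfies the convergence bound
    beta(k) < 2 / sum_i (dF_i/dx_k)^2 whenever mu < 2.
    """
    Fx, Jx = F(x), J(x)
    grad = Jx.T @ Fx                   # dE/dx_k = sum_i F_i * dF_i/dx_k = delta1_k
    beta = mu / np.sum(Jx**2, axis=0)  # per-unknown adaptive learning rate
    return x - beta * grad

# Hypothetical system: x1^2 - x2 = 0, x1 + x2 - 2 = 0 (one root at (1, 1)).
F = lambda v: np.array([v[0]**2 - v[1], v[0] + v[1] - 2.0])
J = lambda v: np.array([[2*v[0], -1.0], [1.0, 1.0]])
x = np.array([2.0, 3.0])
for _ in range(500):
    x = adaptive_lr_step(x, F, J)
```

Unlike a single global learning rate, the per-column scaling automatically slows the step down along directions where the residuals are most sensitive.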

6. Experimental results

To examine and evaluate the validity, accuracy and performance of the proposed neural solver, selected polynomial algebraic systems were solved and the simulation results were compared against those obtained by other methods. In these simulations, all three modes of training were examined, namely:

Linear Adaptive Learning Rate (LALR), which uses as activation function the identity function y = x and minimizes the sum Σ F_i²(x).

Nonlinear Adaptive Learning Rate (NLALR), which uses as activation function the hyperbolic tangent function y = tanh(x) and minimizes the sum Σ tanh²(F_i(x)).

Nonlinear Modified Adaptive Learning Rate (NLMALR), which uses as activation function the hyperbolic tangent function y = tanh(x) and minimizes the sum Σ F_i²(x).

Even though the classical back propagation algorithm was used in the theoretical analysis and the construction of the associated equations, the simulations showed that the execution time can be further decreased if, in each cycle, the synaptic weights are updated one after the other and the new output values are used as input parameters in the corrections associated with the next weight adaptation. To evaluate the accuracy of the results and to perform a valid comparison with the results found in the literature, different tolerance values were used, with a value of tol = 10^(-12) giving an accuracy of 6 decimal digits. Therefore, the tolerance value used in each example has been selected such that the achieved accuracy is the same as in the literature, in order to make comparisons possible. Since the convergence condition of the proposed neural solver has the form µ < 2, where µ is the adaptive learning rate parameter (ALRP), the performed simulations used the values µ = 0.1 to 1.9 with a variation step equal to 0.1 (in some cases the value µ = 2.0 was also tested). The maximum allowed number of iterations was set to a fixed value N, while the vector of initial conditions [x_1(0), x_2(0), ..., x_n(0)] and the n-dimensional search region −α ≤ x_i ≤ α were chosen as functions of the dimension n of the problem.
The main graphical representation of the results shows the variation of the minimum and the mean iteration number with respect to the value of the adaptive learning rate parameter µ, while the remaining results are shown in tabulated form. In these results, the parameter SR describes the success rate, namely the percentage of the tested systems (i.e. initial condition combinations) that converged to some of the system roots, while SR(i) is the success rate associated with the i-th root.

After the description of the experimental conditions, let us now present eight example systems, as well as the experimental results that emerged for each one of them. These example systems are polynomial systems of n equations with n unknowns, with increasing size n = 2,3,4,5,6,8,10. The example systems with dimensions n = 2 and n = 3 have been borrowed from [7] and [8] respectively, in order to compare the proposed method with the neural methods presented there, while the remaining examples are real engineering applications found in the literature, and more specifically a Gauss-Legendre 2-point formula for numerical integration (n = 4), a chemical equilibrium application (n = 5), a neurophysiology application (n = 6), a kinematics application (n = 8), a combustion application (n = 10) and an interval arithmetic application (n = 10). Note that, of course, the proposed neural solver can handle any dimension; the maximum dimension d = 10 has been selected in accordance with the studied literature (all the studied simulations were applied for solving systems up to this dimension). In the following presentation, the roots of the example systems are identified and compared with the roots estimated by the other methods.
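The experimental protocol described above (scan a grid of initial conditions, iterate until the squared-error tolerance is met or the iteration budget is exhausted, and report the success rate SR) can be sketched as follows. This is an illustration on a hypothetical 2×2 system with illustrative parameter values, not a reproduction of the paper's benchmarks:

```python
import numpy as np

def solve_from(x0, F, J, mu=0.5, tol=1e-12, max_iter=10000):
    """Run the LALR iteration from one initial condition.

    Returns (converged, iterations, x); tol bounds the squared error sum_i F_i^2.
    """
    x = np.asarray(x0, dtype=float)
    for it in range(max_iter):
        Fx = F(x)
        if np.sum(Fx**2) < tol:
            return True, it, x
        Jx = J(x)
        x = x - (mu / np.sum(Jx**2, axis=0)) * (Jx.T @ Fx)
    return False, max_iter, x

# Hypothetical test system: x1^2 - x2 = 0, x1 + x2 - 2 = 0.
F = lambda v: np.array([v[0]**2 - v[1], v[0] + v[1] - 2.0])
J = lambda v: np.array([[2*v[0], -1.0], [1.0, 1.0]])

# Scan a 9x9 grid of initial conditions and measure the success rate (SR).
grid = np.linspace(-2.0, 2.0, 9)
runs = [solve_from([a, b], F, J) for a in grid for b in grid]
sr = 100.0 * sum(ok for ok, _, _ in runs) / len(runs)
```

The minimum and the average of the iteration counts over the converged runs correspond to the MIN and AVG quantities reported in the tables that follow.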

Figure 5. The variation of the overall minimum and the average iteration number with respect to the value of the ALRP for the Example 1 System, for the LALR, NLMALR and NLALR modes of operation.

Example 1. Consider the following polynomial system of two equations with two unknowns, defined as

F_1(x_1, x_2) = … = 0
F_2(x_1, x_2) = … = 0

(this is Example 1 from [7]; see also [9]). The system has two distinct roots, ROOT1 (x_1, y_1) and ROOT2 (x_2, y_2), as well as a double root ROOT3 (x_3, y_3). To study the system, the network ran 1681 times, with the synaptic weights varying in the interval [−2, 2] with a variation step h = Δx_1 = Δx_2 = 0.1 and a tolerance value tol = 10^(-12). The variation of the overall (i.e. regardless of the root) minimum and the average iteration number for the LALR, NLMALR and NLALR modes of operation is shown in Figure 5. Table 1 shows the simulation results for the Example 1 System and for the best run, namely for the minimum iteration number for Root i (i = 1,2,3) and for the modes LALR, NLMALR and NLALR. The columns of this table from left to right are the value

Table 1. The experimental results for the Example System 1, for the LALR, NLMALR and NLALR modes of operation (the columns hold µ, Root, MIN, AVG, SR, x1(0), x2(0), x1, x2, F1 and F2).

of the µ parameter, the root id, the minimum (MIN) and the average (AVG) iteration number, the success rate (SR), the initial conditions x1(0) and x2(0), the identified root components x1 and x2, and the values of the functions F1(x1, x2) and F2(x1, x2) estimated for the identified root. In this simulation the absolute error value was of the order of 10⁻⁶, since the used tolerance value was tol = 10⁻¹². The simulator was also tested using the initial values x1(0) = 0.7 and x2(0) = 0.3 which, according to the literature (see [7]), resulted in a minimum number of iterations equal to 345. The simulation converged to exactly the same root, but in only 6-8 iterations, a fact that is associated with the adaptive learning rate feature.

Example 2. Consider a polynomial system of three equations F1(x1, x2, x3) = 0, F2(x1, x2, x3) = 0 and F3(x1, x2, x3) = 0 with three unknowns x1, x2, x3 (this is an example taken from [8]). It can be proven that this system has four distinct roots, ROOT1 to ROOT4, the last of which is the trivial root ROOT4(x1, x2, x3) = (0, 0, 0). The neural solver was able to identify all four roots with an absolute error of the order of 10⁻⁶ due to the used tolerance value. The root estimation procedure was performed in the search region [-3, 3] with the same variation step for all three variables, and all three modes of operation (LALR, NLMALR, NLALR) were tested. The variation of the minimum and the average iteration number for each one of the four roots is shown in Figure 6. A comparison of the proposed method to the one presented in [8] gave the results of Table 2, which reveal the superiority of the proposed method over the method described in [8], where the learning rate value was not adaptive but always had the same fixed value. In this table, the first column includes the results reported in [8], while the second and the third columns include the minimum iteration number and the associated

Table 2. A comparison of the results emerged by applying the method described in [8] and the proposed method for the Example System 2, together with the simulation results associated with the same initial values used in [8] (columns: method of Ref. [8], proposed method, ALRP value and mode used, which was NLALR for ROOT1 and ROOT2 and LALR for ROOT3 and ROOT4).

Figure 6. The variation of the minimum and the average iteration number with respect to the value of the ALRP for the four roots of the Example System 2 and for the NLALR mode of operation.

ALRP for the proposed method.

Example 3 (n = 4 - Gauss-Legendre 2-point formula for numerical integration). The next example is from the field of numerical integration, where the Gauss-Legendre N-point iteration formula for N = 2 results in the following nonlinear algebraic system of four equations with four unknowns (x1, x2, x3, x4), in which x1, x2 are the integration nodes and x3, x4 the associated weights:

F1(x1, x2, x3, x4) = x3 + x4 - 2 = 0
F2(x1, x2, x3, x4) = x3x1 + x4x2 = 0
F3(x1, x2, x3, x4) = x3x1² + x4x2² - (2/3) = 0
F4(x1, x2, x3, x4) = x3x1³ + x4x2³ = 0

The system has two symmetric roots in the search region -1.5 ≤ xi ≤ 1.5 (i = 1, 2, 3, 4), whose values were identified by the neural network as

ROOT1(x1, x2, x3, x4) = (-0.5773503, +0.5773503, +1.0, +1.0)
ROOT2(x1, x2, x3, x4) = (+0.5773503, -0.5773503, +1.0, +1.0)

using a variation step Δxi = 0.3 (i = 1, 2, 3, 4) and a tolerance value tol = 10⁻¹². Karr et al. [5] find one of the roots, while El-Emary and El-Kareem find the other root. The

Figure 7. The variation of the overall success rate for the Example System 3 and for the interval 0.5 ≤ µ ≤ 0.9, for the LALR, NLMALR and NLALR modes.

proposed neural solver was able to identify both roots after 34 iterations. The variation of the overall success rate with respect to the ALRP in the interval 0.5 ≤ µ ≤ 0.9 for the operation modes LALR, NLMALR and NLALR is shown in Figure 7.

Example 4 (n = 5 - Chemical equilibrium application). The next system is associated with a chemical engineering application. It has five equations with five unknowns and it is defined as

F1 = F1(x1, x2, x3, x4, x5) = x1x2 + x1 - 3x5 = 0
F2 = F2(x1, x2, x3, x4, x5) = 2x1x2 + x1 + x2x3² + R8x2 - Rx5 + 2R10x2² + R7x2x3 + R9x2x4 = 0
F3 = F3(x1, x2, x3, x4, x5) = 2x2x3² + 2R5x3² - 8x5 + R6x3 + R7x2x3 = 0
F4 = F4(x1, x2, x3, x4, x5) = R9x2x4 + 2x4² - 4Rx5 = 0
F5 = F5(x1, x2, x3, x4, x5) = x1(x2 + 1) + R10x2² + x2x3² + R8x2 + R5x3² + x4² - 1 + R6x3 + R7x2x3 + R9x2x4 = 0

where the constants that appear in the above equations are defined as R = 10, R5 = 0.193, R6 = 0.002597/√40, R7 = 0.003448/√40, R8 = 0.00001799/40, R9 = 0.0002155/√40 and R10 = 0.00003846/40 (this is the Example 5 from the paper of Oliveira & Petraglia, see also [6]). This system has a lot of roots. The method of Oliveira & Petraglia was able to identify seven roots, while the method of Grosan et al. [6] was able to identify eight roots; however, no root values are reported. To deal

Table 3. Simulation results for the Example System 4 for initial values xi(0) = ±1 (i = 1, ..., 5), h = 2 and tol = 10⁻¹³ (columns: ALRP value, roots found, roots in the domain, overall average, overall min, success rate, mean global absolute error).

Table 4. The 18 roots associated with the Example System 4 for the value µ = 1.4 and for tolerance tol = 10⁻¹⁵.

with this system, the neural network ran with initial conditions [x1(0), x2(0), x3(0), x4(0), x5(0)] = (±1, ±1, ±1, ±1, ±1) with a variation step h = Δxi = 2 [this means that each xi (i = 1, ..., 5) got the values ±1 only, leading thus to a number of 2⁵ = 32 different examined combinations]. The algorithm worked in the LALR, NLMALR and NLALR modes of operation with a lot of tolerance values. The minimum number of iterations was 6 for tol = 10⁻¹³, 7-9 for tol = 10⁻¹⁴ and 13 for tol = 10⁻¹⁵, and for the value tol = 10⁻¹⁵, regarding the global absolute error, the accuracy of the results was superior with respect to Oliveira & Petraglia and [6]. Restricting to this case, as an example, the results for the NLALR method are shown in Table 3. The values of the 18 roots associated with the value µ = 1.4 (see the fourth row of Table 3) are shown in Table 4, while the variation of the overall success rate for the three modes of operation and for the values 0.5 ≤ µ ≤ 1.5 is shown in Figure 8. It is interesting to note that the roots of this system are located at the edges of hypercubes (i.e. they are combinations of the same values), as is shown in Table 4 for the roots 4-5, 6-7, 8-9, 10-11, 15-16 and 17-18.
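Stepping back to the Gauss-Legendre system of Example 3: its 2-point exactness conditions are standard, so they can be cross-checked independently of the neural solver. The sketch below uses a plain Newton iteration (a generic check, not the paper's method) and assumes the nodes-then-weights variable ordering; it recovers the nodes ±1/√3 ≈ ±0.5773503 and the unit weights reported in the text.

```python
import numpy as np

# 2-point Gauss-Legendre exactness conditions in the ordering assumed here
# (x1, x2 = nodes, x3, x4 = weights); exact for f(t) = 1, t, t^2, t^3 on [-1, 1].
def F(v):
    x1, x2, x3, x4 = v
    return np.array([x3 + x4 - 2.0,
                     x3 * x1 + x4 * x2,
                     x3 * x1**2 + x4 * x2**2 - 2.0 / 3.0,
                     x3 * x1**3 + x4 * x2**3])

def J(v):
    # Analytic Jacobian of F with respect to (x1, x2, x3, x4).
    x1, x2, x3, x4 = v
    return np.array([[0.0, 0.0, 1.0, 1.0],
                     [x3, x4, x1, x2],
                     [2 * x3 * x1, 2 * x4 * x2, x1**2, x2**2],
                     [3 * x3 * x1**2, 3 * x4 * x2**2, x1**3, x2**3]])

# Plain Newton iteration from a rough initial guess.
v = np.array([0.5, -0.5, 1.2, 0.8])
for _ in range(50):
    v = v - np.linalg.solve(J(v), F(v))
    if np.max(np.abs(F(v))) < 1e-12:
        break
print(v)   # nodes +/- 1/sqrt(3) ~ 0.5773503, weights 1.0, 1.0
```

Swapping the two node components of the initial guess yields the second, symmetric root, which is why the text counts two roots for this system.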

Figure 8. The variation of the overall success rate for the Example System 4 and for the interval 0.5 ≤ µ ≤ 1.5.

Example 5 (n = 6 - Neurophysiology application). The next system is associated with a neurophysiology application and has six equations with six unknowns (x1, x2, x3, x4, x5, x6), defined as

F1(x1, ..., x6) = x1² + x3² - 1 = 0
F2(x1, ..., x6) = x2² + x4² - 1 = 0
F3(x1, ..., x6) = x5x3³ + x6x4³ = 0
F4(x1, ..., x6) = x5x1³ + x6x2³ = 0
F5(x1, ..., x6) = x5x1x3² + x6x2x4² = 0
F6(x1, ..., x6) = x5x1²x3 + x6x2²x4 = 0

(this is the Example 4 from the paper of Oliveira & Petraglia, see also [6]). It should be noted that, for the sake of simplicity, each equation of the system was divided by the value d, namely the maximum value associated with each equation for the used initial condition vector [x1(0), x2(0), x3(0), x4(0), x5(0), x6(0)]. The neural simulator was tested using four tolerance values for ALRP values 0.5 ≤ µ ≤ 0.9, and the simulation results are shown in Table 5. It seems that the simulation results are better compared with the results of [6] and Oliveira & Petraglia. From this table it also seems that, even though all algorithms returned the same root number with the same overall success rates, the LALR is more efficient, since it identified more roots than the NLMALR and NLALR approaches. In Table 5, RN is the number of identified roots for each case, while the initials MGAE stand for Mean Global Absolute Error. The identified roots of the system are given for the three best cases associated with the tolerance

Table 5. Simulation results for the Example System 5, for the LALR, NLMALR and NLALR modes of operation (for each mode the columns hold tol, µ, MIN, RN, SR and MGAE).

value tol = 10⁻⁸ (these values are typed in bold in Table 5).

Example 6 (n = 8 - Kinematic application). The next system is part of the description of a kinematic application and is composed of 8 nonlinear equations with 8 unknowns (x1, x2, x3, x4, x5, x6, x7, x8), defined as

xj² + x(j+1)² - 1 = 0
a1jx1x3 + a2jx1x4 + a3jx2x3 + a4jx2x4 + a5jx2x7 + a6jx5x8 + a7jx6x7 + a8jx6x8 + a9jx1 + a10jx2 + a11jx3 + a12jx4 + a13jx5 + a14jx6 + a15jx7 + a16jx8 + a17j = 0  (1 ≤ j ≤ 4)

with the coefficients aij having the values of Table 6. Note that, for the sake of simplicity and better performance, each parameter value has been divided by a constant d. The neural solver was tested in the search region -2 ≤ xi ≤ 2 (i = 1, 2, ..., 8) with a fixed variation step h and global absolute error tolerance values GAE_TOL = 10⁻⁸, 10⁻¹⁰, 10⁻¹¹, 10⁻¹², 10⁻¹⁵ and 10⁻¹⁶. The best results with respect to the total number of identified roots are shown in Table 7. It is interesting to note that in all cases this maximum number of identified roots is associated with a single ALRP value of µ = 1.5. The variation of the average and the minimum iteration number for all roots for a tolerance value GAE_TOL = 10⁻¹⁶ is shown in Figure 9. A comparison of the results emerged from the proposed neural method and the method of Oliveira & Petraglia is shown in Table 8. In the method of Oliveira & Petraglia, the roots are identified with a global absolute error tolerance with values 10⁻⁸, 10⁻¹⁰, 10⁻¹¹, 10⁻¹², 10⁻¹⁵ and 10⁻¹⁶. On the other hand, the proposed method is capable of identifying the same roots with any tolerance value up to 10⁻¹⁶. Note that the neural-based solver identified six roots for this

Figure 9. The variation of the overall minimum and average number of iterations for a tolerance GAE_TOL = 10⁻¹⁶ and for 0.6 ≤ µ ≤ 1.8, for the Example System 6.

Table 6. The coefficients aij (1 ≤ i ≤ 17, 1 ≤ j ≤ 4) for the Example System 6.

Table 7. The best results regarding the maximum number of identified roots for the Example System 6 (columns: GAE_TOL, ALRP, roots, MIN, AVERAGE, SR, average global absolute error).
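The structure of the kinematic equations of Example 6 can be made explicit in code. The sketch below is only illustrative: the coefficient values of Table 6 are not reproduced here, so random placeholders stand in for the aij, and the term pairings follow our reading of the equation family (four circle conditions plus four quadratic equations with 17 coefficients each).

```python
import numpy as np

# Placeholder coefficients standing in for Table 6's a_ij (17 per equation,
# one column per equation j = 1..4); the published values were not available.
rng = np.random.default_rng(0)
alpha = rng.standard_normal((17, 4))

def G(j, x):
    """Quadratic equation j (1 <= j <= 4) of the kinematic family."""
    a = alpha[:, j - 1]
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    quad = (a[0]*x1*x3 + a[1]*x1*x4 + a[2]*x2*x3 + a[3]*x2*x4 +
            a[4]*x2*x7 + a[5]*x5*x8 + a[6]*x6*x7 + a[7]*x6*x8)
    lin = (a[8]*x1 + a[9]*x2 + a[10]*x3 + a[11]*x4 +
           a[12]*x5 + a[13]*x6 + a[14]*x7 + a[15]*x8)
    return quad + lin + a[16]

def circles(x):
    """The four circle conditions x_j^2 + x_(j+1)^2 - 1 = 0."""
    return [x[j - 1]**2 + x[j]**2 - 1.0 for j in (1, 2, 3, 4)]

# A point satisfying all four circle conditions exactly (up to rounding):
x = np.ones(8) / np.sqrt(2.0)
print(max(abs(c) for c in circles(x)))
```

With the real Table 6 coefficients in place of `alpha`, a root of the full system is any x that drives the four circle residuals and the four G(j, x) values to zero simultaneously.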

Table 8. A comparison between the results emerged from the proposed method and the method of Oliveira & Petraglia, regarding the values of the identified roots associated with the Example System 6. The notation RiN (i = 1, 2, 4, 5, 6) describes the roots identified by the neural method, while RiP is the corresponding root identified by Oliveira & Petraglia.

system, while the method of Oliveira & Petraglia identified only five roots (the Roots 2 and 3 of Table 7 in their paper are actually the same root with a different accuracy).

Example 7 (n = 10 - Combustion application). Let us now solve a large system of 10 equations F1(x) = 0, F2(x) = 0, ..., F10(x) = 0 with 10 unknowns, where x = (x1, x2, x3, x4, x5, x6, x7, x8, x9, x10)ᵀ (this is the Example 7 from the paper of Oliveira & Petraglia, see also [6]). To reproduce the experimental results of the literature, the neural solver ran in the interval [-1, 1] and, since the function y(x) = tanh(x) does not work correctly for large values of its argument, only the identity function y = x was used. Furthermore, to simplify the calculations, each equation was divided by the value d = 10⁸, the largest value that each equation can take for the initial vector used. This system has a lot of roots, and the cited works identify different roots with different accuracy. For example, in [6] the roots are identified with an accuracy of two decimal digits, while in Oliveira & Petraglia the accuracy of estimating the absolute error was five decimal digits. The neural solver is able to reach better accuracy for a tolerance value of tol = 10⁻¹², which gives an accuracy of six decimal digits. The variation of the identified root number in the search interval and the minimum iteration number for ALRP values 0.1 ≤ µ ≤ 1.8 and for the identity activation function are shown in Figure 10. On the other hand, Figure

Figure 10. The variation of the identified root number in the search interval and the minimum iteration number for ALRP values 0.1 ≤ µ ≤ 1.8 for the identity activation function, for the Example System 7.

Figure 11. The total number of identified roots with respect to the identified roots that belong to the search interval, for the Example System 7.

11 shows the relationship between the total roots identified by the network and the number of those roots that belong to the search interval (this means that the remaining roots are located outside this 10-dimensional interval).
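The remark above that y(x) = tanh(x) "does not work correctly for large values of its argument" is consistent with saturation, which a quick numerical check makes visible: in double precision the factor 1 - tanh²(x) that back-propagation multiplies into the gradient dies out rapidly and underflows to exactly zero, freezing the corresponding weight updates (whether this was the paper's exact reason is our assumption).

```python
import numpy as np

# Derivative of tanh, 1 - tanh(x)^2, at increasingly large arguments:
# it decays like 4*exp(-2x) and underflows to exactly 0.0 once tanh(x)
# rounds to 1.0 in double precision.
x = np.array([1.0, 5.0, 10.0, 50.0])
dtanh = 1.0 - np.tanh(x) ** 2
print(dtanh)   # roughly [4.2e-01, 1.8e-04, 8.2e-09, 0.0]
```

The identity activation y = x has a constant derivative of 1, so the gradient never vanishes this way, at the cost of losing the squashing behaviour.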

Figure 12. The variation of the average and the minimum iteration number with respect to the value of the ALRP (tol = 10⁻⁹) for the NLMALR method and for the Example System 8.

Example 8 (n = 10 - Interval arithmetic application). The last system examined here is another system of 10 equations F1(x) = 0, F2(x) = 0, ..., F10(x) = 0 with 10 unknowns, where x = (x1, x2, x3, x4, x5, x6, x7, x8, x9, x10)ᵀ (this is the Example 3 from the paper of Oliveira & Petraglia, see also [6]); it is associated with an interval arithmetic application and, according to Oliveira & Petraglia, it has only one solution. To achieve a solution similar to the one reported there, a tolerance value tol = 10⁻⁹ has to be used, with an accuracy regarding the absolute error of 4-5 decimal digits. The neural solver was run 1024 times, working in all three modes of operation in the search region -2 ≤ xi ≤ 2 (i = 1, 2, ..., 10) with a variation step h = 4 and for ALRP values 0.1 ≤ µ ≤ 1.9. The variation of the average and the minimum iteration number with respect to the value of the ALRP for tolerance tol = 10⁻⁹ and for the NLMALR method is shown in Figure 12. Regarding the global absolute error, the cited method gave a larger error value, while the error value associated with the proposed method is of the order of 10⁻⁶.
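A quick back-of-the-envelope check ties these tolerance values to the reported decimal accuracies. Assuming (our reading, since the exact definition of the error function appears earlier in the paper) that the solver stops when a sum-of-squares error E = ½ Σ Fi² drops below tol, each residual then satisfies |Fi| ≤ √(2·tol):

```python
import math

# |F_i| <= sqrt(2 * tol) under an E = 0.5 * sum(F_i^2) < tol stopping rule.
# tol = 1e-12 bounds the residuals near 1.4e-6 (an accuracy of about six
# decimal digits), and tol = 1e-9 near 4.5e-5 (the 4-5 decimal digits
# quoted for this example).
for tol in (1e-12, 1e-9):
    print(f"tol={tol:g}  max |F_i| ~ {math.sqrt(2 * tol):.3g}")
```

This is only an upper bound on each residual; the actual per-equation errors reported in the tables can be smaller.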


More information

Neural networks. Nuno Vasconcelos ECE Department, UCSD

Neural networks. Nuno Vasconcelos ECE Department, UCSD Neural networs Nuno Vasconcelos ECE Department, UCSD Classfcaton a classfcaton problem has two types of varables e.g. X - vector of observatons (features) n the world Y - state (class) of the world x X

More information

ON THE BEHAVIOR OF THE CONJUGATE-GRADIENT METHOD ON ILL-CONDITIONED PROBLEMS

ON THE BEHAVIOR OF THE CONJUGATE-GRADIENT METHOD ON ILL-CONDITIONED PROBLEMS ON THE BEHAVIOR OF THE CONJUGATE-GRADIENT METHOD ON I-CONDITIONED PROBEM Anders FORGREN Technca Report TRITA-MAT-006-O Department of Mathematcs Roya Insttute of Technoogy January 006 Abstract We study

More information

IDENTIFICATION OF NONLINEAR SYSTEM VIA SVR OPTIMIZED BY PARTICLE SWARM ALGORITHM

IDENTIFICATION OF NONLINEAR SYSTEM VIA SVR OPTIMIZED BY PARTICLE SWARM ALGORITHM Journa of Theoretca and Apped Informaton Technoogy th February 3. Vo. 48 No. 5-3 JATIT & LLS. A rghts reserved. ISSN: 99-8645 www.att.org E-ISSN: 87-395 IDENTIFICATION OF NONLINEAR SYSTEM VIA SVR OPTIMIZED

More information

Review of Taylor Series. Read Section 1.2

Review of Taylor Series. Read Section 1.2 Revew of Taylor Seres Read Secton 1.2 1 Power Seres A power seres about c s an nfnte seres of the form k = 0 k a ( x c) = a + a ( x c) + a ( x c) + a ( x c) k 2 3 0 1 2 3 + In many cases, c = 0, and the

More information

Typical Neuron Error Back-Propagation

Typical Neuron Error Back-Propagation x Mutayer Notaton 1 2 2 2 y Notaton ayer of neuron abeed 1,, N neuron n ayer = vector of output from neuron n ayer nput ayer = x (the nput pattern) output ayer = y (the actua output) = weght between ayer

More information

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm

Design and Optimization of Fuzzy Controller for Inverse Pendulum System Using Genetic Algorithm Desgn and Optmzaton of Fuzzy Controller for Inverse Pendulum System Usng Genetc Algorthm H. Mehraban A. Ashoor Unversty of Tehran Unversty of Tehran h.mehraban@ece.ut.ac.r a.ashoor@ece.ut.ac.r Abstract:

More information

A MODIFIED METHOD FOR SOLVING SYSTEM OF NONLINEAR EQUATIONS

A MODIFIED METHOD FOR SOLVING SYSTEM OF NONLINEAR EQUATIONS Journal of Mathematcs and Statstcs 9 (1): 4-8, 1 ISSN 1549-644 1 Scence Publcatons do:1.844/jmssp.1.4.8 Publshed Onlne 9 (1) 1 (http://www.thescpub.com/jmss.toc) A MODIFIED METHOD FOR SOLVING SYSTEM OF

More information

Lower bounds for the Crossing Number of the Cartesian Product of a Vertex-transitive Graph with a Cycle

Lower bounds for the Crossing Number of the Cartesian Product of a Vertex-transitive Graph with a Cycle Lower bounds for the Crossng Number of the Cartesan Product of a Vertex-transtve Graph wth a Cyce Junho Won MIT-PRIMES December 4, 013 Abstract. The mnmum number of crossngs for a drawngs of a gven graph

More information

DISTRIBUTED PROCESSING OVER ADAPTIVE NETWORKS. Cassio G. Lopes and Ali H. Sayed

DISTRIBUTED PROCESSING OVER ADAPTIVE NETWORKS. Cassio G. Lopes and Ali H. Sayed DISTRIBUTED PROCESSIG OVER ADAPTIVE ETWORKS Casso G Lopes and A H Sayed Department of Eectrca Engneerng Unversty of Caforna Los Angees, CA, 995 Ema: {casso, sayed@eeucaedu ABSTRACT Dstrbuted adaptve agorthms

More information

1 Convex Optimization

1 Convex Optimization Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,

More information

Development of whole CORe Thermal Hydraulic analysis code CORTH Pan JunJie, Tang QiFen, Chai XiaoMing, Lu Wei, Liu Dong

Development of whole CORe Thermal Hydraulic analysis code CORTH Pan JunJie, Tang QiFen, Chai XiaoMing, Lu Wei, Liu Dong Deveopment of whoe CORe Therma Hydrauc anayss code CORTH Pan JunJe, Tang QFen, Cha XaoMng, Lu We, Lu Dong cence and technoogy on reactor system desgn technoogy, Nucear Power Insttute of Chna, Chengdu,

More information

QUARTERLY OF APPLIED MATHEMATICS

QUARTERLY OF APPLIED MATHEMATICS QUARTERLY OF APPLIED MATHEMATICS Voume XLI October 983 Number 3 DIAKOPTICS OR TEARING-A MATHEMATICAL APPROACH* By P. W. AITCHISON Unversty of Mantoba Abstract. The method of dakoptcs or tearng was ntroduced

More information

MODEL TUNING WITH THE USE OF HEURISTIC-FREE GMDH (GROUP METHOD OF DATA HANDLING) NETWORKS

MODEL TUNING WITH THE USE OF HEURISTIC-FREE GMDH (GROUP METHOD OF DATA HANDLING) NETWORKS MODEL TUNING WITH THE USE OF HEURISTIC-FREE (GROUP METHOD OF DATA HANDLING) NETWORKS M.C. Schrver (), E.J.H. Kerchoffs (), P.J. Water (), K.D. Saman () () Rswaterstaat Drecte Zeeand () Deft Unversty of

More information

Elastic Collisions. Definition: two point masses on which no external forces act collide without losing any energy.

Elastic Collisions. Definition: two point masses on which no external forces act collide without losing any energy. Elastc Collsons Defnton: to pont asses on hch no external forces act collde thout losng any energy v Prerequstes: θ θ collsons n one denson conservaton of oentu and energy occurs frequently n everyday

More information

From Biot-Savart Law to Divergence of B (1)

From Biot-Savart Law to Divergence of B (1) From Bot-Savart Law to Dvergence of B (1) Let s prove that Bot-Savart gves us B (r ) = 0 for an arbtrary current densty. Frst take the dvergence of both sdes of Bot-Savart. The dervatve s wth respect to

More information

VQ widely used in coding speech, image, and video

VQ widely used in coding speech, image, and video at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng

More information

An Effective Space Charge Solver. for DYNAMION Code

An Effective Space Charge Solver. for DYNAMION Code A. Orzhehovsaya W. Barth S. Yaramyshev GSI Hemhotzzentrum für Schweronenforschung (Darmstadt) An Effectve Space Charge Sover for DYNAMION Code Introducton Genera space charge agorthms based on the effectve

More information

L-Edge Chromatic Number Of A Graph

L-Edge Chromatic Number Of A Graph IJISET - Internatona Journa of Innovatve Scence Engneerng & Technoogy Vo. 3 Issue 3 March 06. ISSN 348 7968 L-Edge Chromatc Number Of A Graph Dr.R.B.Gnana Joth Assocate Professor of Mathematcs V.V.Vannaperuma

More information

Key words. corner singularities, energy-corrected finite element methods, optimal convergence rates, pollution effect, re-entrant corners

Key words. corner singularities, energy-corrected finite element methods, optimal convergence rates, pollution effect, re-entrant corners NESTED NEWTON STRATEGIES FOR ENERGY-CORRECTED FINITE ELEMENT METHODS U. RÜDE1, C. WALUGA 2, AND B. WOHLMUTH 2 Abstract. Energy-corrected fnte eement methods provde an attractve technque to dea wth eptc

More information

4DVAR, according to the name, is a four-dimensional variational method.

4DVAR, according to the name, is a four-dimensional variational method. 4D-Varatonal Data Assmlaton (4D-Var) 4DVAR, accordng to the name, s a four-dmensonal varatonal method. 4D-Var s actually a drect generalzaton of 3D-Var to handle observatons that are dstrbuted n tme. The

More information

A MIN-MAX REGRET ROBUST OPTIMIZATION APPROACH FOR LARGE SCALE FULL FACTORIAL SCENARIO DESIGN OF DATA UNCERTAINTY

A MIN-MAX REGRET ROBUST OPTIMIZATION APPROACH FOR LARGE SCALE FULL FACTORIAL SCENARIO DESIGN OF DATA UNCERTAINTY A MIN-MAX REGRET ROBST OPTIMIZATION APPROACH FOR ARGE SCAE F FACTORIA SCENARIO DESIGN OF DATA NCERTAINTY Travat Assavapokee Department of Industra Engneerng, nversty of Houston, Houston, Texas 7704-4008,

More information

LECTURE 21 Mohr s Method for Calculation of General Displacements. 1 The Reciprocal Theorem

LECTURE 21 Mohr s Method for Calculation of General Displacements. 1 The Reciprocal Theorem V. DEMENKO MECHANICS OF MATERIALS 05 LECTURE Mohr s Method for Cacuaton of Genera Dspacements The Recproca Theorem The recproca theorem s one of the genera theorems of strength of materas. It foows drect

More information

Numerical Investigation of Power Tunability in Two-Section QD Superluminescent Diodes

Numerical Investigation of Power Tunability in Two-Section QD Superluminescent Diodes Numerca Investgaton of Power Tunabty n Two-Secton QD Superumnescent Dodes Matta Rossett Paoo Bardea Ivo Montrosset POLITECNICO DI TORINO DELEN Summary 1. A smpfed mode for QD Super Lumnescent Dodes (SLD)

More information

CHAPTER III Neural Networks as Associative Memory

CHAPTER III Neural Networks as Associative Memory CHAPTER III Neural Networs as Assocatve Memory Introducton One of the prmary functons of the bran s assocatve memory. We assocate the faces wth names, letters wth sounds, or we can recognze the people

More information

Predicting Model of Traffic Volume Based on Grey-Markov

Predicting Model of Traffic Volume Based on Grey-Markov Vo. No. Modern Apped Scence Predctng Mode of Traffc Voume Based on Grey-Marov Ynpeng Zhang Zhengzhou Muncpa Engneerng Desgn & Research Insttute Zhengzhou 5005 Chna Abstract Grey-marov forecastng mode of

More information

Distributed Moving Horizon State Estimation of Nonlinear Systems. Jing Zhang

Distributed Moving Horizon State Estimation of Nonlinear Systems. Jing Zhang Dstrbuted Movng Horzon State Estmaton of Nonnear Systems by Jng Zhang A thess submtted n parta fufment of the requrements for the degree of Master of Scence n Chemca Engneerng Department of Chemca and

More information

The equation of motion of a dynamical system is given by a set of differential equations. That is (1)

The equation of motion of a dynamical system is given by a set of differential equations. That is (1) Dynamcal Systems Many engneerng and natural systems are dynamcal systems. For example a pendulum s a dynamcal system. State l The state of the dynamcal system specfes t condtons. For a pendulum n the absence

More information

Chapter Newton s Method

Chapter Newton s Method Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve

More information

2.29 Numerical Fluid Mechanics

2.29 Numerical Fluid Mechanics REVIEW Lecture 10: Sprng 2015 Lecture 11 Classfcaton of Partal Dfferental Equatons PDEs) and eamples wth fnte dfference dscretzatons Parabolc PDEs Ellptc PDEs Hyperbolc PDEs Error Types and Dscretzaton

More information

Lower Bounding Procedures for the Single Allocation Hub Location Problem

Lower Bounding Procedures for the Single Allocation Hub Location Problem Lower Boundng Procedures for the Snge Aocaton Hub Locaton Probem Borzou Rostam 1,2 Chrstoph Buchhem 1,4 Fautät für Mathemat, TU Dortmund, Germany J. Faban Meer 1,3 Uwe Causen 1 Insttute of Transport Logstcs,

More information

Greyworld White Balancing with Low Computation Cost for On- Board Video Capturing

Greyworld White Balancing with Low Computation Cost for On- Board Video Capturing reyword Whte aancng wth Low Computaton Cost for On- oard Vdeo Capturng Peng Wu Yuxn Zoe) Lu Hewett-Packard Laboratores Hewett-Packard Co. Pao Ato CA 94304 USA Abstract Whte baancng s a process commony

More information

[WAVES] 1. Waves and wave forces. Definition of waves

[WAVES] 1. Waves and wave forces. Definition of waves 1. Waves and forces Defnton of s In the smuatons on ong-crested s are consdered. The drecton of these s (μ) s defned as sketched beow n the goba co-ordnate sstem: North West East South The eevaton can

More information

MATH 567: Mathematical Techniques in Data Science Lab 8

MATH 567: Mathematical Techniques in Data Science Lab 8 1/14 MATH 567: Mathematcal Technques n Data Scence Lab 8 Domnque Gullot Departments of Mathematcal Scences Unversty of Delaware Aprl 11, 2017 Recall We have: a (2) 1 = f(w (1) 11 x 1 + W (1) 12 x 2 + W

More information

Support Vector Machines CS434

Support Vector Machines CS434 Support Vector Machnes CS434 Lnear Separators Many lnear separators exst that perfectly classfy all tranng examples Whch of the lnear separators s the best? Intuton of Margn Consder ponts A, B, and C We

More information

Analysis of Bipartite Graph Codes on the Binary Erasure Channel

Analysis of Bipartite Graph Codes on the Binary Erasure Channel Anayss of Bpartte Graph Codes on the Bnary Erasure Channe Arya Mazumdar Department of ECE Unversty of Maryand, Coege Par ema: arya@umdedu Abstract We derve densty evouton equatons for codes on bpartte

More information

DIOPHANTINE EQUATIONS WITH BINOMIAL COEFFICIENTS AND PERTURBATIONS OF SYMMETRIC BOOLEAN FUNCTIONS

DIOPHANTINE EQUATIONS WITH BINOMIAL COEFFICIENTS AND PERTURBATIONS OF SYMMETRIC BOOLEAN FUNCTIONS DIOPHANTINE EQUATIONS WITH BINOMIAL COEFFICIENTS AND PERTURBATIONS OF SYMMETRIC BOOLEAN FUNCTIONS FRANCIS N CASTRO, OSCAR E GONZÁLEZ, AND LUIS A MEDINA Abstract Ths work presents a study of perturbatons

More information

IV. Performance Optimization

IV. Performance Optimization IV. Performance Optmzaton A. Steepest descent algorthm defnton how to set up bounds on learnng rate mnmzaton n a lne (varyng learnng rate) momentum learnng examples B. Newton s method defnton Gauss-Newton

More information

Admin NEURAL NETWORKS. Perceptron learning algorithm. Our Nervous System 10/25/16. Assignment 7. Class 11/22. Schedule for the rest of the semester

Admin NEURAL NETWORKS. Perceptron learning algorithm. Our Nervous System 10/25/16. Assignment 7. Class 11/22. Schedule for the rest of the semester 0/25/6 Admn Assgnment 7 Class /22 Schedule for the rest of the semester NEURAL NETWORKS Davd Kauchak CS58 Fall 206 Perceptron learnng algorthm Our Nervous System repeat untl convergence (or for some #

More information

Lecture 23: Artificial neural networks

Lecture 23: Artificial neural networks Lecture 23: Artfcal neural networks Broad feld that has developed over the past 20 to 30 years Confluence of statstcal mechancs, appled math, bology and computers Orgnal motvaton: mathematcal modelng of

More information

Week 5: Neural Networks

Week 5: Neural Networks Week 5: Neural Networks Instructor: Sergey Levne Neural Networks Summary In the prevous lecture, we saw how we can construct neural networks by extendng logstc regresson. Neural networks consst of multple

More information

Adaptive LRBP Using Learning Automata for Neural Networks

Adaptive LRBP Using Learning Automata for Neural Networks Adaptve LRBP Usng Learnng Automata for eura etworks *B. MASHOUFI, *MOHAMMAD B. MEHAJ (#, *SAYED A. MOTAMEDI and **MOHAMMAD R. MEYBODI *Eectrca Engneerng Department **Computer Engneerng Department Amrkabr

More information

Grid Generation around a Cylinder by Complex Potential Functions

Grid Generation around a Cylinder by Complex Potential Functions Research Journal of Appled Scences, Engneerng and Technolog 4(): 53-535, 0 ISSN: 040-7467 Mawell Scentfc Organzaton, 0 Submtted: December 0, 0 Accepted: Januar, 0 Publshed: June 0, 0 Grd Generaton around

More information

Solving Nonlinear Differential Equations by a Neural Network Method

Solving Nonlinear Differential Equations by a Neural Network Method Solvng Nonlnear Dfferental Equatons by a Neural Network Method Luce P. Aarts and Peter Van der Veer Delft Unversty of Technology, Faculty of Cvlengneerng and Geoscences, Secton of Cvlengneerng Informatcs,

More information