CHAPTER II Recurrent Neural Networks
In this chapter, first the dynamics of continuous-space recurrent neural networks will be examined in a general framework. Then the Hopfield network will be introduced as a special case of this kind of network.

2.1. Dynamical Systems

The dynamics of a large class of neural network models may be represented by a set of first-order differential equations of the form

$$\frac{dx_i(t)}{dt} = F_i(x_1(t), x_2(t), \ldots, x_i(t), \ldots, x_N(t)), \quad i = 1, \ldots, N \tag{2.1.1}$$

where F_i is a nonlinear function of its arguments. In more compact form it may be reformulated as

$$\frac{d\mathbf{x}(t)}{dt} = \mathbf{F}(\mathbf{x}(t)) \tag{2.1.2}$$

where the nonlinear function F operates on the elements of the state vector x(t) in an autonomous way, that is, F(x(t)) does not depend explicitly on time t. F(x) is a vector field in an N-dimensional state space. Such an equation is called a state space equation, and x(t) is called the state of the system at a particular time t.

In order for the state space equation (2.1.2) to have a solution, and for the solution to be unique, we have to impose certain restrictions on the vector function F(x(t)). For a solution to exist, it is sufficient that F(x) is continuous in all of its arguments. However, this restriction by itself does not guarantee uniqueness of the solution, so we have to impose a further restriction, known as the Lipschitz condition. Let ||x|| denote a norm, which may be the Euclidean length, the Hamming distance, or any other norm, depending on the purpose. Let x and y be a pair of vectors in an open set M in a vector space. Then, according to the Lipschitz condition, there exists a constant κ such that

$$\|\mathbf{F}(\mathbf{x}) - \mathbf{F}(\mathbf{y})\| \le \kappa \|\mathbf{x} - \mathbf{y}\| \tag{2.1.3}$$

for all x and y in M. A vector function F(x) that satisfies equation (2.1.3) is said to be Lipschitz. Note that Eq. (2.1.3) also implies continuity of the function with respect to x. Therefore, in the case of autonomous systems, the Lipschitz condition guarantees both the existence and the uniqueness of solutions for the state space equation (2.1.2). In particular, if all partial derivatives ∂F_i(x)/∂x_j are finite everywhere, then the function F(x) satisfies the Lipschitz condition [Haykin 94].

Exercise: Compare the definitions of Euclidean length and Hamming distance.
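As a quick illustration of the two norms mentioned in the exercise, the following minimal sketch (with hypothetical example vectors) computes both for a pair of bipolar vectors: the Euclidean length of the difference, and the Hamming distance as the count of differing components.

```python
import numpy as np

# Two bipolar state vectors (hypothetical example values).
x = np.array([ 1, -1,  1,  1, -1])
y = np.array([ 1,  1,  1, -1, -1])

# Euclidean distance: square root of the sum of squared differences.
euclidean = np.linalg.norm(x - y)   # sqrt(0 + 4 + 0 + 4 + 0) ~ 2.83

# Hamming distance: number of components in which the vectors differ.
hamming = int(np.sum(x != y))       # 2

print(f"Euclidean: {euclidean:.3f}, Hamming: {hamming}")
```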
2.2. Phase Space

Regardless of the exact form of the nonlinear function F, the state vector x(t) varies with time; that is, the point representing x(t) in the N-dimensional space changes its position in time.
While the behavior of x(t) may be thought of as a flow, the vector function F(x) may be thought of as a velocity vector in an abstract sense. For visualization of the motion of the states in time, it is helpful to use the phase space of the dynamical system, which describes the global characteristics of the motion rather than the detailed aspects of analytic or numeric solutions of the equation. At a particular instant of time t, a single point in the N-dimensional phase space represents the observed state of the state vector, that is x(t). Changes in the state of the system with time t are represented as a curve in the phase space, each point on the curve carrying (explicitly or implicitly) a label that records the time of observation. This curve is called a trajectory or orbit of the system. Figure 2.1.a illustrates a trajectory in a two-dimensional system. The family of trajectories, each obtained for a different initial condition x(0), is called the phase portrait of the system (Figure 2.1.b). The phase portrait includes all those points in the phase space where the field vector F(x) is defined. For an autonomous system, there is one and only one trajectory passing through a given initial state [Abraham and Shaw 92, Haykin 94]. The tangent vector, that is dx(t)/dt, represents the instantaneous velocity F(x(t)) of the trajectory. We may thus derive a velocity vector for each point of the trajectory.

Figure 2.1 a) A two-dimensional trajectory b) Phase portrait
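The following sketch traces a few trajectories of a simple two-dimensional autonomous system by forward Euler integration, one way to build a phase portrait numerically; the particular vector field F (a damped linear system) and the step size are illustrative assumptions, not part of the notes.

```python
import numpy as np

def F(x):
    """Velocity field of a hypothetical damped 2-D linear system dx/dt = A x."""
    A = np.array([[-0.5, -1.0],
                  [ 1.0, -0.5]])
    return A @ x

dt, steps = 0.05, 200
# Several initial conditions, each giving one trajectory of the phase portrait.
for x0 in ([1.0, 0.0], [0.0, 1.5], [-1.0, -1.0]):
    x = np.array(x0)
    trajectory = [x.copy()]
    for _ in range(steps):
        x = x + dt * F(x)          # Euler step: x(t+dt) ~ x(t) + F(x(t)) dt
        trajectory.append(x.copy())
    print(f"x(0) = {x0} -> final state {trajectory[-1].round(4)}")
```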
2.3. Major Forms of Dynamical Systems

For fixed weights and inputs, we distinguish three major forms of dynamical system [Bressloff and Weir 91]. Each is characterized by the behavior of the network when t is large, so that any transients are assumed to have disappeared and the system has settled into some steady state (Figure 2.2):

Figure 2.2 Three major forms of dynamical systems: a) Convergent b) Oscillatory c) Chaotic

a) Convergent: every trajectory x(t) converges to some fixed point, which is a state that does not change over time (Figure 2.2.a). These fixed points are called the attractors of the system. The set of initial states x(0) that evolves to a particular attractor is called its basin of attraction. The locations of the attractors and the basin boundaries change as the dynamical system parameters change. For example, by altering the external inputs or connection weights in a recurrent neural network, the basins of attraction of the system can be adjusted.

b) Oscillatory: every trajectory converges either to a cycle or to a fixed point. A cycle of period T satisfies x(t+T) = x(t) for all times t (Figure 2.2.b).

c) Chaotic: most trajectories do not tend to cycles or fixed points. One of the characteristics of chaotic systems is that the long-term behavior of trajectories is extremely sensitive to initial conditions; that is, a slight change in the initial state x(0) can lead to very different behaviors as t becomes large, as the small numerical illustration below shows.
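As a hedged illustration of this sensitivity (the logistic map is a standard chaotic example, not one discussed in these notes), the sketch below iterates two almost identical initial states and shows them diverging:

```python
# Logistic map x(k+1) = r * x(k) * (1 - x(k)); chaotic for r = 4.
r = 4.0
x_a, x_b = 0.300000, 0.300001     # two initial states differing by 1e-6

for k in range(1, 31):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if k % 10 == 0:
        # The gap grows from 1e-6 to order 1 within a few dozen iterations.
        print(f"k={k:2d}  x_a={x_a:.6f}  x_b={x_b:.6f}  |diff|={abs(x_a - x_b):.2e}")
```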
2.4. Gradient, Conservative and Dissipative Systems

For a vector field F(x) on the state space x(t) ∈ R^N, the operator ∇ helps in the formal description of the system. In fact, ∇ is an operational vector defined as

$$\nabla = \left[ \frac{\partial}{\partial x_1}\;\; \frac{\partial}{\partial x_2}\;\; \ldots\;\; \frac{\partial}{\partial x_N} \right]. \tag{2.4.1}$$

When the operator ∇ is applied to a scalar function E of the vector x(t), the result,

$$\nabla E = \left[ \frac{\partial E}{\partial x_1}\;\; \frac{\partial E}{\partial x_2}\;\; \ldots\;\; \frac{\partial E}{\partial x_N} \right], \tag{2.4.2}$$

is called the gradient of the function E; it extends in the direction of the greatest rate of change of E and has that rate of change as its length. If we set E(x) = c, we obtain a family of surfaces known as the level surfaces of E as c takes on different values. Due to the assumption that E is single valued at each point, one and only one level surface passes through any given point P [Wylie and Barrett 85]. The gradient of E(x) at any point P is perpendicular to the level surface of E that passes through that point (Figure 2.3).

Figure 2.3 a) Energy landscape b) a slice c) level surfaces d) (−) gradient
For a vector field

$$\mathbf{F}(\mathbf{x}) = [F_1(\mathbf{x})\;\; F_2(\mathbf{x})\;\; \ldots\;\; F_N(\mathbf{x})], \tag{2.4.3}$$

the inner product
$$\nabla \cdot \mathbf{F} = \frac{\partial F_1}{\partial x_1} + \frac{\partial F_2}{\partial x_2} + \ldots + \frac{\partial F_N}{\partial x_N} \tag{2.4.4}$$

is called the divergence of F, and it has a scalar value.

Consider a region of volume V and surface S in the phase space of an autonomous system, and assume a flow of points from this region. From our earlier discussion, we recognize that the velocity vector dx/dt is equal to the vector field F(x). Provided that the vector field F(x) within the volume V is "well behaved", we may apply the divergence theorem from vector calculus [Wylie and Barrett 85, Haykin 94]. Let n denote a unit vector normal to the surface element dS, pointing outward from the enclosed volume. Then, according to the divergence theorem, the relation

$$\oint_S (\mathbf{F}(\mathbf{x}) \cdot \mathbf{n})\, dS = \int_V (\nabla \cdot \mathbf{F}(\mathbf{x}))\, dV \tag{2.4.5}$$

holds between the volume integral of the divergence of F(x) and the surface integral of the outwardly directed normal component of F(x). The quantity on the left-hand side of Eq. (2.4.5) is recognized as the net flux flowing out of the region surrounded by the closed surface S. If this quantity is zero, the system is conservative; if it is negative, the system is dissipative. In the light of Eq. (2.4.5), we may state equivalently that if the divergence satisfies

$$\nabla \cdot \mathbf{F}(\mathbf{x}) = 0 \tag{2.4.6}$$

then the system is conservative, and if

$$\nabla \cdot \mathbf{F}(\mathbf{x}) < 0 \tag{2.4.7}$$

the system is dissipative, which implies the stability of the system [Haykin 94].
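To make the criterion concrete, a minimal sketch: for a linear field F(x) = Ax the divergence is the trace of A, so the conservative/dissipative test of Eqs. (2.4.6)-(2.4.7) reduces to a trace computation. The matrices below are illustrative assumptions.

```python
import numpy as np

def classify(A):
    """For F(x) = A x, div F = trace(A); apply Eqs. (2.4.6)-(2.4.7)."""
    div = np.trace(A)
    if div == 0:
        return "conservative"
    return "dissipative" if div < 0 else "expanding (neither)"

rotation = np.array([[0.0, -1.0], [1.0,  0.0]])   # pure rotation, trace 0
damped   = np.array([[-0.5, -1.0], [1.0, -0.5]])  # damped spiral, trace -1

print(classify(rotation))  # conservative
print(classify(damped))    # dissipative
```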
2.5. Equilibrium States

A constant vector x* satisfying the condition

$$\mathbf{F}(\mathbf{x}^*) = \mathbf{0} \tag{2.5.1}$$

is called an equilibrium state (stationary state or fixed point) of the dynamical system defined by Eq. (2.1.2). Since it results in

$$\left.\frac{dx_i}{dt}\right|_{\mathbf{x}^*} = 0 \quad \text{for } i = 1, \ldots, N, \tag{2.5.2}$$

the constant function x(t) = x* is a solution of the dynamical system. If the system is operating at an equilibrium point, then the state vector stays constant, and the trajectory with initial state x(0) = x* degenerates to a single point.

We are frequently interested in the behavior of the system around the equilibrium points, and try to investigate whether the trajectories around an equilibrium point converge to it, diverge from it, stay in an orbit around it, or exhibit some combination of these. The use of a linear approximation of the nonlinear function F(x) makes it easier to understand the behavior of the system around the equilibrium points. Let x = x* + Δx be a point around x*. If the nonlinear function F(x) is smooth and the disturbance Δx is small enough, then F can be approximated by the first two terms of its Taylor expansion around x*:

$$\mathbf{F}(\mathbf{x}^* + \Delta\mathbf{x}) \approx \mathbf{F}(\mathbf{x}^*) + \mathbf{F}'(\mathbf{x}^*)\, \Delta\mathbf{x} \tag{2.5.3}$$
where

$$\mathbf{F}'(\mathbf{x}^*) = \left.\frac{\partial \mathbf{F}}{\partial \mathbf{x}}\right|_{\mathbf{x}=\mathbf{x}^*}, \tag{2.5.4}$$

that is, in particular,

$$F'_{ij}(\mathbf{x}^*) = \left.\frac{\partial F_i(\mathbf{x})}{\partial x_j}\right|_{\mathbf{x}=\mathbf{x}^*}. \tag{2.5.5}$$

Notice that F(x*) and F'(x*) in Eq. (2.5.3) are constant, so it is a linear equation in terms of Δx. Furthermore, since an equilibrium point satisfies Eq. (2.5.1), we obtain

$$\mathbf{F}(\mathbf{x}^* + \Delta\mathbf{x}) \approx \mathbf{F}'(\mathbf{x}^*)\, \Delta\mathbf{x}. \tag{2.5.6}$$

On the other hand, since

$$\frac{d}{dt}(\mathbf{x}^* + \Delta\mathbf{x}) = \frac{d}{dt}\Delta\mathbf{x}, \tag{2.5.7}$$

Eq. (2.1.2) becomes

$$\frac{d}{dt}\Delta\mathbf{x} = \mathbf{F}'(\mathbf{x}^*)\, \Delta\mathbf{x}. \tag{2.5.8}$$

Since Eq. (2.5.8) defines a homogeneous differential equation with constant real coefficients, the eigenvalues of the matrix F'(x*) determine the behavior of the system.

Exercise: Give the general form of the solution of the system defined by Eq. (2.5.8).

Notice that, in order to have Δx(t) diminish as t → ∞, we need the real parts of all the eigenvalues to be negative.
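A short sketch of this test: compute the Jacobian F'(x*) numerically at an equilibrium and inspect the real parts of its eigenvalues. The example system and its fixed point at the origin are assumptions for illustration.

```python
import numpy as np

def F(x):
    """Hypothetical nonlinear system with an equilibrium at the origin."""
    return np.array([-x[0] + np.tanh(x[1]),
                     -x[1] + 0.5 * np.tanh(x[0])])

def jacobian(F, x_star, h=1e-6):
    """Finite-difference approximation of F'(x*) from Eq. (2.5.5)."""
    n = len(x_star)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (F(x_star + e) - F(x_star - e)) / (2 * h)
    return J

x_star = np.zeros(2)                    # F(0) = 0, so 0 is an equilibrium
eigvals = np.linalg.eigvals(jacobian(F, x_star))
print(eigvals)                          # all real parts < 0 => asymptotically stable
```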
2.6. Stability

An equilibrium state x* of an autonomous nonlinear dynamical system is called stable if, for any given positive ε, there exists a positive δ such that

$$\|\mathbf{x}(0) - \mathbf{x}^*\| < \delta \;\Rightarrow\; \|\mathbf{x}(t) - \mathbf{x}^*\| < \varepsilon \quad \text{for all } t > 0. \tag{2.6.1}$$

If x* is a stable equilibrium point, then any trajectory described by the state vector x(t) of the system can be made to stay within a small neighborhood of the equilibrium state x* by choosing an initial state x(0) close enough to x*.

An equilibrium point x* is said to be asymptotically stable if it is also convergent, where convergence requires the existence of a positive δ such that

$$\|\mathbf{x}(0) - \mathbf{x}^*\| < \delta \;\Rightarrow\; \lim_{t \to \infty} \mathbf{x}(t) = \mathbf{x}^*. \tag{2.6.2}$$

If the equilibrium point is convergent, the trajectory can be made to approach x* as t goes to infinity by choosing, again, an initial state x(0) close enough to x*. Notice that asymptotically stable states correspond to attractors of the system.

For an autonomous nonlinear dynamical system, the asymptotic stability of an equilibrium state x* can be decided through the existence of energy functions. Such energy functions are also called Lyapunov functions, since they were introduced by Aleksandr Lyapunov in the early 1900s to prove the stability of differential equations.
A continuous function L(x) with a continuous time derivative L'(x) = dL(x)/dt is a definite Lyapunov function if it satisfies:

a) L(x) is bounded;

b) L'(x) is negative definite, that is,

$$L'(\mathbf{x}) < 0 \quad \text{for } \mathbf{x} \ne \mathbf{x}^* \tag{2.6.3}$$

and

$$L'(\mathbf{x}) = 0 \quad \text{for } \mathbf{x} = \mathbf{x}^*. \tag{2.6.4}$$

If condition (2.6.3) is relaxed to the form

$$L'(\mathbf{x}) \le 0 \quad \text{for } \mathbf{x} \ne \mathbf{x}^*, \tag{2.6.5}$$

the Lyapunov function is called semidefinite.

Having defined the Lyapunov function, the stability of an equilibrium point can be decided by using the following theorem:

Lyapunov's Theorem: The equilibrium state x* is stable (asymptotically stable) if there exists a semidefinite (definite) Lyapunov function in a small neighborhood of x*.

The use of Lyapunov functions makes it possible to decide the stability of equilibrium points without solving the state-space equation of the system. Unfortunately, there is no formal way to find a Lyapunov function; mostly it is determined by trial and error. If we are able to find a Lyapunov function, then we can state the stability of the system. However, the inability to find a Lyapunov function does not imply instability of the system.
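The theorem can be exercised numerically: simulate the system and check that a candidate L(x) never increases along the trajectory. The sketch below does this for an assumed damped linear system with the quadratic candidate L(x) = ||x||²; both choices are illustrative, not part of the notes.

```python
import numpy as np

A = np.array([[-0.5, -1.0], [1.0, -0.5]])   # assumed stable linear system
F = lambda x: A @ x
L = lambda x: float(x @ x)                  # candidate Lyapunov function ||x||^2

x, dt = np.array([1.0, 1.0]), 0.01
values = [L(x)]
for _ in range(1000):
    x = x + dt * F(x)
    values.append(L(x))

# L should be non-increasing along the trajectory if it is a Lyapunov function.
print("L never increased:", bool(np.all(np.diff(values) <= 1e-12)))
```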
Often the convergence of a neural network is guaranteed by introducing an energy function together with the network itself. In fact, such energy functions are Lyapunov functions, so they are non-increasing along trajectories. Therefore the dynamics of the network can be visualized in terms of a multidimensional 'energy landscape', as given previously in Figure 2.3. The attractors of the dynamical system are the local minima of the energy function, surrounded by 'valleys' corresponding to the basins of attraction (Figure 2.4).

Figure 2.4 Energy landscape and basins of attraction

2.7. Effect of Input and Initial State on the Attraction

The convergence of a network to an attractor of the activation dynamics may be viewed as a retrieval process in which the fixed point is interpreted as the output of the neural network. As an example, consider the following network dynamics:

$$\frac{dx_i(t)}{dt} = -x_i(t) + f\Big(\sum_j w_{ij}\, x_j + \theta_i\Big) \tag{2.7.1}$$
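Before discussing the roles of θ and x(0), here is a minimal simulation sketch of Eq. (2.7.1), integrated with forward Euler; the tanh activation, the weight matrix, and the thresholds are assumed example values.

```python
import numpy as np

def simulate(W, theta, x0, f=np.tanh, dt=0.05, steps=2000):
    """Forward-Euler integration of dx/dt = -x + f(W x + theta), Eq. (2.7.1)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + f(W @ x + theta))
    return x                                 # approximate fixed point

W = np.array([[0.0, 2.0],                    # symmetric example weights
              [2.0, 0.0]])
theta = np.zeros(2)

print(simulate(W, theta, [ 0.1,  0.2]))      # converges to one attractor
print(simulate(W, theta, [-0.1, -0.2]))      # converges to the opposite one
```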
Assume that the weight matrix W is fixed and that the network is specified through θ and the initial state x(0). Both θ and x(0) are ways of introducing an input pattern into the network, although they play distinct dynamical roles [Bressloff and Weir 91]. We then distinguish two modes of operation, depending on whether the network has fixed x(0) but θ = u, or fixed θ but x(0) = u.

In the first case, the vector u acts as the input and the initial state is set to some constant vector for all inputs. In general, the values of the attractors vary smoothly as the vector u is varied, hence the network provides a continuous mapping between the input and output spaces (Figure 2.5.a). However, this will break down if, for example, the initial point x(0) crosses the boundary of a basin of attraction for some input. Such a scenario can be avoided by making the network globally convergent, which means that all trajectories converge to a unique attractor. In such a network, if the initial state is not set to the same fixed vector, the network may give different responses to the same input pattern on different occasions. Although such a feature may be desirable when considering temporal sequences, it makes the network unsuitable as a classifier.

Figure 2.5 Both the external input u and the initial value x(0) affect the final state: a) the same initial value x(0) may result in different fixed points as final values for different u; b) different x(0) may converge to different fixed points although u is the same
In the second mode of operation, the input pattern is presented to the network through the initial state x(0), while θ is kept fixed. The attractors of the dynamics may be used to represent items in a memory, while the initial states are the stimuli used to recall the stored memory items. Initial states that contain incomplete or erroneous information may be considered as queries to the memory. The network then converges to the complete memory item that best fits the stimulus (Figure 2.5.b). Thus, in contrast to the first mode of operation, which ideally uses a globally convergent network, this form of operation exploits the fact that there are many basins of attraction, so that the network acts as a content addressable memory. However, as a result of inappropriate choices for the weights, a complication may arise concerning the fixed points of the network where the memory items are intended to reside: although the intention is to have fixed points corresponding only to the stored memory items, undesired fixed points, called spurious states, may appear [Bressloff and Weir 91].

2.8. Cohen-Grossberg Theorem

The idea of using energy functions to analyze the behavior of neural networks was introduced during the first half of the 1970s, independently, in [Amari 72], [Grossberg 72] and [Little 74]. A general principle, known as the Cohen-Grossberg theorem, is based on Grossberg's studies during the previous decade. As described in [Cohen and Grossberg 83], it is used to decide the stability of a certain class of neural networks.

Theorem: Consider a neural network with N processing elements having output signals f_i(a_i) and transfer functions of the form

$$\frac{da_j}{dt} = \alpha_j(a_j)\Big(\beta_j(a_j) - \sum_{i=1}^{N} w_{ji}\, f_i(a_i)\Big), \quad j = 1, \ldots, N \tag{2.8.1}$$

satisfying the constraints:

1) the matrix W = [w_ij] is symmetric, that is w_ij = w_ji, and all w_ii > 0;

2) the function α_i(a) is continuous and α_i(a) > 0 for a > 0;
3) the function f_i(a) is differentiable and f'_i(a) = (d/da) f_i(a) ≥ 0 for a ≥ 0;

4) (β_i(a) − w_ii f_i(a)) < 0 as a → ∞;

5) the function β_i(a) is continuous for a ≥ 0;

6) either $\lim_{a \to 0^+} \beta_i(a) = +\infty$, or $\lim_{a \to 0^+} \beta_i(a) < +\infty$ but $\int_0^a \frac{ds}{\alpha_i(s)} = +\infty$ for some a > 0.

If the network's state a(0) at time 0 is in the positive orthant of R^N (that is, a_i(0) > 0 for i = 1..N), then the network will almost certainly converge to some stable point also in the positive orthant. Furthermore, there will be at most a countable number of such stable points.

Here the statement that the network will "almost certainly" converge to a stable point means that this will happen except for certain rare choices of the weight matrix. That is, if the weight matrix W is chosen at random among all possible choices, then it is virtually certain that a bad one will never be chosen.

In Eq. (2.8.1), which describes the dynamic behavior of the system, α_i(a_i) is the control parameter for the convergence rate. The decay function β_i(a_i) allows us to place the system's stable attractors in appropriate positions of the state space.

In order to prove the part of the theorem related to the stability of the system, it uses the Lyapunov function approach by showing that, under the given conditions, the function

$$E = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} w_{ij}\, f_i(a_i)\, f_j(a_j) - \sum_{i=1}^{N} \int_0^{a_i} \beta_i(s)\, f'_i(s)\, ds \tag{2.8.2}$$

is an energy function of the system; that is, E has a negative time derivative on every possible trajectory that the network's state can follow.
By using condition (1), that W is symmetric, the time derivative of the energy function can be written as

$$\frac{dE}{dt} = -\sum_{i=1}^{N} \alpha_i(a_i)\, f'_i(a_i) \Big( \beta_i(a_i) - \sum_{j=1}^{N} w_{ij}\, f_j(a_j) \Big)^{2}, \tag{2.8.3}$$

and it has a negative value for a ≠ a* whenever conditions (2) and (3) are satisfied. Condition (4) guarantees that any a* has a finite value, preventing the states from approaching infinity. The remaining conditions (the condition w_ii > 0 in (1), and conditions (5) and (6)) are requirements for proving that the solution always stays in the positive orthant, together with some other detailed mathematical requirements that are not easy to show and require some sophisticated mathematics.

While convergence to a stable point in the positive orthant is important for a model resembling a biological neuron, such a condition is not required for artificial neurons, as long as they converge to some stable point with a finite value. Whenever the function f_i is bounded, that is |f_i(a)| < c for some positive constant c, no state can take an infinite value. However, for the stability of the system there still remain the constraints:

a) Symmetry:

$$w_{ij} = w_{ji}, \quad i, j = 1, \ldots, N \tag{2.8.4}$$

b) Nonnegativity:

$$\alpha_i(a) \ge 0, \quad i = 1, \ldots, N \tag{2.8.5}$$
c) Monotonicity:

$$f'_i(a) = \frac{d}{da} f_i(a) \ge 0 \quad \text{for } a \ge 0 \tag{2.8.6}$$

This form of the Cohen-Grossberg theorem states that if the system of nonlinear equations satisfies the conditions of symmetry, nonnegativity and monotonicity, then the energy function defined by Eq. (2.8.2) is a Lyapunov function of the system satisfying

$$\frac{dE}{dt} < 0 \quad \text{for } \mathbf{a} \ne \mathbf{a}^*, \tag{2.8.7}$$

and the global system is therefore asymptotically stable [Haykin 94].
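As a numerical sanity check of the theorem, a sketch with an assumed small system of the form (2.8.1), taking α_i(a) = 1, β_i(a) = −a and f_i(a) = tanh(a): the energy (2.8.2) is evaluated along a simulated trajectory and verified to be non-increasing.

```python
import numpy as np

W = np.array([[1.0, 0.5],
              [0.5, 1.0]])                      # symmetric, w_ii > 0
f  = np.tanh                                    # output function
fp = lambda s: 1.0 - np.tanh(s) ** 2            # its derivative
beta = lambda a: -a                             # decay function (alpha_i = 1)

def energy(a, n=400):
    """Eq. (2.8.2), the integral evaluated with the trapezoidal rule."""
    integrals = sum(np.trapz(beta(s) * fp(s), s)
                    for s in (np.linspace(0.0, ai, n) for ai in a))
    return 0.5 * f(a) @ W @ f(a) - integrals

a, dt = np.array([0.8, -0.3]), 0.01
energies = [energy(a)]
for _ in range(500):
    a = a + dt * (beta(a) - W @ f(a))           # Eq. (2.8.1) with alpha = 1
    energies.append(energy(a))

print("energy non-increasing:", bool(np.all(np.diff(energies) <= 1e-6)))
print("final state:", a.round(4))
```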
2.9. Hopfield Network

The continuous deterministic Hopfield model, which is based on continuous variables and responses, was proposed in [Hopfield 84] to extend the discrete model of the processing elements [Hopfield 82] so as to resemble actual neurons more closely. In this extended model, the neurons are modeled as amplifiers in conjunction with feedback circuits made up of wires, resistors and capacitors, which suggests the possibility of building these circuits using VLSI technology. The circuit diagram of the continuous Hopfield network is given in Figure 2.6. This circuit has a neurobiological interpretation, as explained below.

Figure 2.6 Hopfield network made of electronic components

C_i is the total input capacitance of amplifier i, representing the capacitance of the cell membrane of neuron i;

ρ_i is the input conductance of amplifier i, representing the transmembrane conductance of neuron i;

w_ij is the value of the conductance of the connection from the output of the j-th amplifier to the input of the i-th amplifier, representing the finite conductance between the output of neuron j and the cell body of neuron i;

a_i(t) is the voltage at the input of amplifier i, representing the soma potential of neuron i;

x_i(t) is the output voltage of the i-th amplifier, representing the short-term average of the firing rate of neuron i;

θ_i is the current due to the external input feeding the input of amplifier i, representing the threshold for activation of neuron i.

The output of the amplifier, x_i, is a continuous, monotonically increasing function of the instantaneous input a_i to the i-th amplifier. The input-output relation of the i-th amplifier is given by

$$f(a) = \tanh(\kappa a) \tag{2.9.1}$$
where κ is a constant called the gain parameter. Notice that, since

$$\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \tag{2.9.2}$$

the amplifier transfer function is in fact a sigmoid function,

$$f(a) = \frac{1 - e^{-2\kappa a}}{1 + e^{-2\kappa a}} = \frac{2}{1 + e^{-2\kappa a}} - 1, \tag{2.9.3}$$

as given in equation (1.2.8) with κ' = 2κ, but shifted so as to take values between −1 and +1. In Figure 2.7 the transfer function is illustrated for several values of κ. This function is differentiable at each point and always has a positive derivative. In particular, its derivative at the origin gives the gain κ, that is,

$$\kappa = \left.\frac{df}{da}\right|_{a=0}. \tag{2.9.4}$$

Figure 2.7 Output function used in the Hopfield network
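A small sketch of Eqs. (2.9.1)-(2.9.4): evaluate f(a) = tanh(κa) for a few gains and confirm numerically that the slope at the origin equals κ.

```python
import numpy as np

def f(a, kappa):
    """Amplifier transfer function of Eq. (2.9.1)."""
    return np.tanh(kappa * a)

h = 1e-6
for kappa in (0.5, 1.0, 5.0):
    # Central-difference slope at the origin should equal the gain (Eq. 2.9.4).
    slope = (f(h, kappa) - f(-h, kappa)) / (2 * h)
    print(f"kappa={kappa}: f'(0) ~ {slope:.6f}")
```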
The amplifiers in the Hopfield circuit correspond to the neurons. A set of nonlinear differential equations describes the dynamics of the network. The input voltage a_i of the i-th amplifier is determined by the equation

$$C_i \frac{da_i(t)}{dt} = -\frac{a_i(t)}{R_i} + \sum_j w_{ij}\, f_j(a_j(t)) + \theta_i, \tag{2.9.5}$$

while the output voltage is

$$x_i = f_i(a_i). \tag{2.9.6}$$

In Eq. (2.9.5), R_i is determined as

$$\frac{1}{R_i} = \rho_i + \sum_j w_{ij}.$$

The state of the network is described by an N-dimensional state vector, where N is the number of neurons in the network. The i-th component of the state vector is given by the output value of the i-th amplifier, taking real values between −1 and 1. The state of the network moves in the state space in a direction determined by the nonlinear dynamic equation (2.9.5) above. Based on the neuron characteristics given above, the Hopfield network can be represented by a neural network as shown in Figure 2.8.

Figure 2.8 Hopfield network made of neurons
The energy function for the continuous Hopfield model is given by the formula

$$E = -\frac{1}{2} \sum_i \sum_j w_{ij}\, x_i x_j + \sum_i \frac{1}{R_i} \int_0^{x_i} f^{-1}(x)\, dx - \sum_i \theta_i x_i \tag{2.9.7}$$

where f⁻¹ is the inverse of the function f, that is,

$$f^{-1}(x_i) = a_i. \tag{2.9.8}$$

In particular, for the transfer function defined by equation (2.9.3), we have

$$f^{-1}(x) = \frac{1}{2\kappa} \ln\frac{1 + x}{1 - x}, \tag{2.9.9}$$

which is shown in Figure 2.9.

Figure 2.9 Inverse of the output function
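A sketch evaluating the energy (2.9.7) for a given state: for f(a) = tanh(κa) the integral of the inverse has the closed form ∫₀ˣ f⁻¹(s) ds = (1/κ)[x·artanh(x) + ½ln(1−x²)], which the code uses. The weights, thresholds and resistances are assumed example values.

```python
import numpy as np

def inv_integral(x, kappa):
    """Closed form of the integral of f^{-1} in Eq. (2.9.7) for f = tanh(kappa*a)."""
    return (x * np.arctanh(x) + 0.5 * np.log(1.0 - x**2)) / kappa

def hopfield_energy(x, W, theta, R, kappa):
    """Continuous Hopfield energy, Eq. (2.9.7)."""
    return (-0.5 * x @ W @ x
            + np.sum(inv_integral(x, kappa) / R)
            - theta @ x)

W     = np.array([[0.0, 1.0], [1.0, 0.0]])   # symmetric weights (assumed)
theta = np.zeros(2)
R     = np.ones(2)
print(hopfield_energy(np.array([0.5, -0.5]), W, theta, R, kappa=2.0))
```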
Exercise: What happens to the system's energy if the sigmoid function is used instead of the tanh function?

In [Hopfield 84] it is shown that the energy function given by Eq. (2.9.7) is an appropriate Lyapunov function for the system, ensuring that the system eventually reaches a stable configuration if the network has symmetric connections, that is w_ij = w_ji. Such a network always converges to a stable equilibrium state, where the outputs of the units remain constant.

For the energy E of the Hopfield network to be a Lyapunov function, it should satisfy the following constraints:

a) E(x) is bounded;

b) dE/dt ≤ 0.

Because the function tanh(κa) is used in the system as the output function, it limits the state variables to values −1 < x_i < 1. Furthermore, because the integral of the inverse of this function is bounded for −1 < x_i < 1, the energy function given by Eq. (2.9.7) is bounded.

In order to show that the time derivative of the energy function is always less than or equal to zero, we differentiate E with respect to time:

$$\frac{dE}{dt} = -\frac{1}{2} \sum_i \sum_j w_{ij}\Big(\frac{dx_i}{dt}\, x_j + x_i\, \frac{dx_j}{dt}\Big) + \sum_i \frac{1}{R_i}\, f^{-1}(x_i)\, \frac{dx_i}{dt} - \sum_i \theta_i\, \frac{dx_i}{dt}. \tag{2.9.10}$$

Since we assumed w_ij = w_ji, we have

$$\frac{dE}{dt} = -\sum_i \frac{dx_i}{dt} \Big( \sum_j w_{ij}\, x_j - \frac{f^{-1}(x_i)}{R_i} + \theta_i \Big). \tag{2.9.11}$$
By the use of equations (2.9.6) and (2.9.8) in (2.9.5), we obtain

$$\sum_j w_{ij}\, x_j - \frac{f^{-1}(x_i)}{R_i} + \theta_i = C_i \frac{da_i}{dt}. \tag{2.9.12}$$

Therefore Eq. (2.9.11) results in

$$\frac{dE}{dt} = -\sum_i C_i\, \frac{da_i}{dt}\, \frac{dx_i}{dt}. \tag{2.9.13}$$

On the other hand, notice that by the use of Eq. (2.9.8) we have

$$\frac{da_i}{dt} = \frac{d\, f^{-1}(x_i)}{dx_i}\, \frac{dx_i}{dt}, \tag{2.9.14}$$

so

$$\frac{dE}{dt} = -\sum_i C_i\, \frac{d\, f^{-1}(x_i)}{dx_i} \Big(\frac{dx_i}{dt}\Big)^{2}. \tag{2.9.15}$$

Due to equation (2.9.9), we have

$$\frac{d\, f^{-1}(x)}{dx} \ge 0 \tag{2.9.16}$$

for any value of x. So Eq. (2.9.15) implies that

$$\frac{dE}{dt} \le 0. \tag{2.9.17}$$
Therefore the energy function described by equation (2.9.7) is a Lyapunov function for the Hopfield network when the connection weights are symmetric. This means that, whatever the initial state of the network is, it will converge to one of the equilibrium states, depending on the basin of attraction in which the initial state lies.

Another way to show that the Hopfield network is stable is to apply the Cohen-Grossberg theorem given in Section 2.8. For this purpose we reorganize Eq. (2.9.5) as

$$\frac{da_i(t)}{dt} = \frac{1}{C_i} \Big( \Big( -\frac{a_i(t)}{R_i} + \theta_i \Big) - \sum_j (-w_{ij})\, f_j(a_j(t)) \Big). \tag{2.9.18}$$

If we compare Eq. (2.9.18) with Eq. (2.8.1), we recognize that the Hopfield network is in fact a special case of the system defined in the Cohen-Grossberg theorem, with the correspondences

$$\alpha_i(a_i) \leftrightarrow \frac{1}{C_i} \tag{2.9.19}$$

and

$$\beta_i(a_i(t)) \leftrightarrow -\frac{a_i(t)}{R_i} + \theta_i \tag{2.9.20}$$

and, for the weights appearing in Eq. (2.8.1),

$$\bar{w}_{ij} \leftrightarrow -w_{ij}, \tag{2.9.21}$$

satisfying the conditions on

a) symmetry, because

$$w_{ij} = w_{ji} \text{ implies } -w_{ij} = -w_{ji}; \tag{2.9.22}$$

b) nonnegativity, because
$$\alpha_i(a_i) = \frac{1}{C_i} \ge 0; \tag{2.9.23}$$

c) monotonicity, because

$$f'_i(a) = \frac{d}{da} \tanh(\kappa a) \ge 0. \tag{2.9.24}$$

Therefore, according to the Cohen-Grossberg theorem, the energy function defined as

$$E = \frac{1}{2} \sum_i \sum_j (-w_{ij})\, f_i(a_i)\, f_j(a_j) - \sum_i \int_0^{a_i} \Big( -\frac{a}{R_i} + \theta_i \Big) f'_i(a)\, da \tag{2.9.25}$$

is a Lyapunov function of the Hopfield network, and the network is globally asymptotically stable. In fact, the energy equation defined by Eq. (2.9.25) may be reorganized as

$$E = -\frac{1}{2} \sum_i \sum_j w_{ij}\, f_i(a_i)\, f_j(a_j) + \sum_i \frac{1}{R_i} \int_0^{a_i} a\, f'_i(a)\, da - \sum_i \theta_i \int_0^{a_i} f'_i(a)\, da. \tag{2.9.26}$$

Notice that

$$\int_0^{a_i} f'_i(a)\, da = f_i(a_i) - f_i(0) = f_i(a_i) \tag{2.9.27}$$

and also

$$\int_0^{a_i} a\, f'_i(a)\, da = \int_0^{a_i} a\, d(f_i(a)) = \int_{f_i(0)}^{f_i(a_i)} f^{-1}(x)\, dx = \int_0^{x_i} f^{-1}(x)\, dx. \tag{2.9.28}$$

Therefore the energy equation defined by Eq. (2.9.25) becomes
$$E = -\frac{1}{2} \sum_i \sum_j w_{ij}\, x_i x_j + \sum_i \frac{1}{R_i} \int_0^{x_i} f^{-1}(x)\, dx - \sum_i \theta_i x_i, \tag{2.9.29}$$

which is the same as the energy function defined by Eq. (2.9.7).

As the time derivative of the energy function is negative, the change in the state of the network is in a direction in which the energy decreases. The behavior of a Hopfield network of two neurons is demonstrated in Figure 2.10 [Hopfield 84]. In the figure, the ordinate and abscissa are the outputs of the two neurons. The network has two stable states, located near the upper left and lower right corners, marked by × in the figure.

Figure 2.10 Energy contour map for a two-neuron, two-stable-state system
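A sketch reproducing the spirit of Figure 2.10: tabulate the energy (2.9.7) of an assumed two-neuron network over a grid of output values and locate the low-energy corner. The weight choice w_12 = w_21 = −1 is an assumption that places the two minima at opposite corners, as in the figure.

```python
import numpy as np

kappa = 5.0
W = np.array([[0.0, -1.0], [-1.0, 0.0]])   # assumed weights favoring opposite signs
R = np.ones(2)

def inv_int(x):
    # Integral of f^{-1} for f = tanh(kappa * a).
    return (x * np.arctanh(x) + 0.5 * np.log(1 - x**2)) / kappa

def E(x1, x2):
    x = np.array([x1, x2])
    return -0.5 * x @ W @ x + np.sum(inv_int(x) / R)   # Eq. (2.9.7), theta = 0

grid = np.linspace(-0.99, 0.99, 9)
vals = np.array([[E(x1, x2) for x2 in grid] for x1 in grid])
i, j = np.unravel_index(np.argmin(vals), vals.shape)
print(f"lowest energy near (x1, x2) = ({grid[i]:.2f}, {grid[j]:.2f})")
```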
The second term of the energy function in Eq. (2.9.7), which is

$$\sum_{i=1}^{N} \frac{1}{R_i} \int_0^{x_i} f^{-1}(x)\, dx, \tag{2.9.30}$$

alters the energy landscape. The value of the gain parameter determines how close the stable points come to the hypercube corners. In the limit of very high gain, κ → ∞, this term approaches zero and the stable points of the system lie exactly at the corners of the Hamming hypercube, where the value of each state component is either −1 or 1. For finite gain, the stable points move toward the interior of the hypercube. As the gain becomes smaller, these stable points move closer together, and when κ = 0 only a single stable point exists for the system. Therefore the choice of the gain parameter is quite important for the success of the operation [Freeman 91].
2.10. Discrete-Time Representation of Recurrent Networks

Consider the dynamical system defined by Eq. (2.1.2). The change Δx(t) in the value of x(t) over a small amount of time Δt can be approximated as

$$\Delta \mathbf{x}(t) \approx \mathbf{F}(\mathbf{x}(t))\, \Delta t. \tag{2.10.1}$$

Hence the value of x(t+Δt), in terms of this amount of change,

$$\mathbf{x}(t+\Delta t) = \mathbf{x}(t) + \Delta\mathbf{x}(t), \tag{2.10.2}$$

becomes

$$\mathbf{x}(t+\Delta t) = \mathbf{x}(t) + \mathbf{F}(\mathbf{x}(t))\, \Delta t. \tag{2.10.3}$$

Therefore, if we start at t = 0 and observe the output value after each time interval Δt, then the value of x(t) at the k-th observation may be expressed by using the value of the previous observation as

$$\mathbf{x}(k) = \mathbf{x}(k-1) + \mathbf{F}(\mathbf{x}(k-1))\, \Delta t, \quad k = 1, 2, \ldots \tag{2.10.4}$$

or equivalently

$$\mathbf{x}(k) = \mathbf{x}(k-1) + \eta\, \mathbf{F}(\mathbf{x}(k-1)), \quad k = 1, 2, \ldots \tag{2.10.5}$$

where η is used instead of Δt to represent the approximation step size; it should be assigned a small value for a good approximation. However, depending on the properties of the function F, the system may also be represented by other discrete-time equations.
For example, for the continuous-time, continuous-state Hopfield network described by equation (2.9.5), we may use the following discrete-time approximation:

$$\mathbf{x}(k) = \mathbf{x}(k-1) + \eta\,\big[-\mathbf{x}(k-1) + \tanh\!\big(\kappa(\mathbf{W}^{T} \mathbf{x}(k-1) + \boldsymbol{\theta})\big)\big], \quad k = 1, 2, \ldots \tag{2.10.6}$$

In Section 4.3, however, we will examine a special case of the Hopfield network in which the state variables are forced to take discrete values in a binary state space. There the discrete-time representation given by Eq. (2.10.6) will be further modified by using sgn(Wᵀx(k−1) + θ), where sgn is a special case of the sigmoid function in which the gain is infinite.

We have observed that the stability of the continuous-time dynamical systems described by Eq. (2.1.2) is implied by the existence of a bounded energy function whose time derivative is always less than or equal to zero. The states of the network resulting in zero derivative are the equilibrium states. Analogously, for a discrete-time neural network with an excitation

$$\mathbf{x}(k) = \mathbf{x}(k-1) + \mathbf{G}(\mathbf{x}(k-1)), \quad k = 1, 2, \ldots \tag{2.10.7}$$

the stability of the system is implied by the existence of a bounded energy function such that the difference in the value of the energy is always negative as the system changes state. When G(x) = ηF(x), the stable states of the continuous- and discrete-time systems described by equations (2.1.2) and (2.10.7) respectively are almost the same for small values of η. However, if η is not small enough, some stable states of the continuous system may become unreachable in discrete time.
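A minimal sketch of the discrete-time update (2.10.6); W, θ, κ, η and the initial states are assumed example values.

```python
import numpy as np

def discrete_hopfield(x0, W, theta, kappa=2.0, eta=0.1, iterations=200):
    """Iterate Eq. (2.10.6): x(k) = x(k-1) + eta*(-x(k-1) + tanh(kappa*(W^T x + theta)))."""
    x = np.array(x0, dtype=float)
    for _ in range(iterations):
        x = x + eta * (-x + np.tanh(kappa * (W.T @ x + theta)))
    return x

W = np.array([[0.0, 1.0],
              [1.0, 0.0]])          # symmetric weights, so convergence is expected
theta = np.zeros(2)

print(discrete_hopfield([ 0.2,  0.1], W, theta))   # both components settle near +0.96
print(discrete_hopfield([-0.2, -0.1], W, theta))   # both components settle near -0.96
```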