CHAPTER III Neural Networks as Associative Memory
Introduction

One of the primary functions of the brain is associative memory. We associate faces with names, letters with sounds, or we can recognize people even if they are wearing sunglasses or have aged. In this chapter, first the basic definitions concerning associative memory will be given, and then it will be explained how neural networks can be made linear associators so as to perform as interpolative memory. Next it will be explained how the Hopfield network can be used as autoassociative memory, and then the Bidirectional Associative Memory network, which is designed to operate as heteroassociative memory, will be introduced.

EE543 - ANN - CHAPTER 3
3.1 Associative Memory

In an associative memory we store a set of patterns µ^k, k = 1..K, so that the network responds by producing whichever of the stored patterns most closely resembles the one presented to it.

Suppose that the stored patterns, which are called exemplars or memory elements, are in the form of pairs of associations, µ^k = (u^k, y^k), where u^k ∈ R^N, y^k ∈ R^M, k = 1..K. According to the mapping ϕ: R^N → R^M that they implement, we distinguish the following types of associative memories:

- Interpolative associative memory
- Accretive associative memory

3.1 Associative Memory: Interpolative Memory

In interpolative associative memory, when u = u^r is presented to the memory it responds by producing y^r of the stored association. However, if u differs from u^r by an amount ε, that is, if u = u^r + ε is presented to the memory, then the response differs from y^r by some amount ε^r. Therefore in interpolative associative memory we have

    ϕ(u^r + ε) = y^r + ε^r  such that  ε → 0 ⇒ ε^r → 0,  r = 1..K    (3.1.1)
3.1 Associative Memory: Accretive Memory

In accretive associative memory, when u is presented to the memory it responds by producing the y^r of the stored association whose u^r is the one closest to u among the u^k, k = 1..K, that is,

    ϕ(u) = y^r  such that  ‖u^r − u‖ = min_k ‖u^k − u‖,  k = 1..K    (3.1.2)

3.1 Associative Memory: Heteroassociative vs. Autoassociative

The accretive associative memory in the form given above, in which u^k and y^k are different, is called heteroassociative memory. However, if the stored exemplars have the special form in which the desired patterns and the input patterns are the same, that is y^k = u^k for k = 1..K, then it is called autoassociative memory. In such a memory, whenever u is presented, the memory responds with u^r, the closest one to u among the u^k, k = 1..K, that is,

    ϕ(u) = u^r  such that  ‖u^r − u‖ = min_k ‖u^k − u‖,  k = 1..K    (3.1.3)
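The accretive rule (3.1.2) is, on its own, just nearest-neighbour recall over the stored pairs. A minimal sketch in plain Python, with hypothetical exemplars chosen only for illustration:

```python
import math

def accretive_recall(pairs, u):
    """Accretive recall: return y^r of the pair whose key u^r is
    closest to the probe u in Euclidean norm (Eq. 3.1.2)."""
    _, y_r = min(pairs, key=lambda p: math.dist(p[0], u))
    return y_r

# Hypothetical stored associations (u^k, y^k).
pairs = [([1, 1, -1], "A"), ([-1, 1, 1], "B")]
print(accretive_recall(pairs, [0.9, 1, -1]))   # noisy u^1 -> "A"
```

Storing pairs with y^k = u^k turns the same function into the autoassociative case of Eq. (3.1.3).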
3.1 Associative Memory

While interpolative memories can be implemented by using feed-forward neural networks, it is more appropriate to use recurrent networks as accretive memories. The advantage of using recurrent networks as associative memory is their convergence to one of a finite number of stable states when started at some initial state. The basic goals are:

- to be able to store as many exemplars as we need, each corresponding to a different stable state of the network,
- to have no other stable states,
- to have the stable state that the network converges to be the one closest to the applied pattern.

The problems that we are faced with are:

- the capacity of the network is restricted, depending on the number and properties of the patterns to be stored,
- some of the exemplars may not be among the stable states,
- some spurious stable states different from the exemplars may arise by themselves,
- the converged stable state may be other than the one closest to the applied pattern.
3.1 Associative Memory

One way of using recurrent neural networks as associative memory is to fix the external input of the network and present the input pattern u^r to the system by setting x(0) = u^r. If we relax such a network, it will converge to the attractor x* for which x(0) is within the basin of attraction, as explained in Chapter 2. If we are able to place each µ^k as an attractor of the network by a proper choice of the connection weights, then we expect the network to relax to the attractor x* = µ^r related to the initial state x(0) = u^r. For good performance, we need the network to converge only to one of the stored patterns µ^k, k = 1..K.

Unfortunately, some initial states may converge to spurious states, which are undesired attractors of the network representing none of the stored patterns. Spurious states may arise by themselves, depending on the model used and the patterns stored. The capacity of neural associative memories is restricted by the size of the network: if we increase the number of stored patterns for a fixed-size network, spurious states arise inevitably. Sometimes the network may converge not to a spurious state but to a memory pattern that is not so close to the pattern presented.
3.1 Associative Memory

What we expect for feasible operation is that, at least for the memory patterns themselves, if any stored pattern is presented to the network by setting x(0) = µ^k, then the network should stay converged at x* = µ^k (Figure 3.1).

Figure 3.1. In associative memory each memory element is assigned to an attractor

A second way to use recurrent networks as associative memory is to present the input pattern u^r to the system as an external input. This can be done by setting θ = u^r, where θ is the threshold vector whose i-th component corresponds to the threshold of neuron i. After setting x(0) to some fixed value, we relax the network and then wait until it converges to an attractor x*. For good performance, we desire the network to have a single attractor x* = µ^k for each stored input pattern u^k, so that the network converges to this attractor independent of the initial state of the network. Another solution to the problem is to have predetermined initial values, so that these initial values lie within the basin of attraction of µ^k whenever u^k is applied. We will consider this kind of network in more detail in Chapter 7, where we will examine how these recurrent networks are trained.
3.2 Linear Associators: Orthonormal Patterns

It is quite easy to implement an interpolative associative memory when the set of input memory elements {u^k} constitutes an orthonormal set of vectors, that is,

    u^iT u^j = 1 if i = j, 0 otherwise    (3.2.1)

where T denotes the transpose. Using the Kronecker delta, we write simply

    u^iT u^j = δ_ij    (3.2.2)

The mapping function ϕ(u) defined below may be used to establish an interpolative associative memory:

    ϕ(u) = W^T u    (3.2.3)

where

    W = Σ_k u^k ⊗ y^k    (3.2.4)

Here the symbol ⊗ is used to denote the outer product of the vectors u ∈ R^N and y ∈ R^M, which is defined as

    u ⊗ y = u y^T = (y u^T)^T    (3.2.5)

resulting in a matrix of size N by M.
3.2 Linear Associators: Orthonormal Patterns

By defining the matrices [Haykin 94]

    U = [u^1 u^2 .. u^k .. u^K]    (3.2.6)

and

    Y = [y^1 y^2 .. y^k .. y^K]    (3.2.7)

the weight matrix can be formulated as

    W^T = Y U^T    (3.2.8)

If the network is going to be used as autoassociative memory we have Y = U, so

    W^T = U U^T    (3.2.9)

For a function ϕ(u) to constitute an interpolative associative memory, it should satisfy the condition

    ϕ(u^r) = y^r,  r = 1..K    (3.2.10)

We can check this simply as

    ϕ(u^r) = W^T u^r    (3.2.11)

which is

    W^T u^r = Y U^T u^r    (3.2.12)
3.2 Linear Associators: Orthonormal Patterns

Since the set {u^k} is orthonormal, we have

    Y U^T u^r = Σ_k δ_kr y^k = y^r    (3.2.13)

which results in

    ϕ(u^r) = Y U^T u^r = y^r    (3.2.14)

as we desired.

Furthermore, if an input pattern u = u^r + ε different from the stored patterns is applied as input to the network, we obtain

    ϕ(u) = W^T (u^r + ε) = W^T u^r + W^T ε    (3.2.15)

Using equations (3.2.12) and (3.2.13) results in

    ϕ(u) = y^r + W^T ε    (3.2.16)

Therefore we have

    ϕ(u) = y^r + ε^r    (3.2.17)

in the required form, where

    ε^r = W^T ε    (3.2.18)
3.2 Linear Associators: Orthonormal Patterns

Such a memory can be implemented by using M neurons, each having N inputs, as shown in Figure 3.2.

Figure 3.2 Linear Associator

The connection weight vector of neuron i is assigned the value W_i, which is the i-th column vector of the matrix W. Here each neuron has a linear output transfer function f(a) = a. When a stored pattern u^k is applied as input to the network, the desired value y^k is observed at the output of the network as:

    x = W^T u^k    (3.2.19)
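As a quick numerical check of Eqs. (3.2.8) and (3.2.19), here is a sketch with NumPy, using two hypothetical orthonormal input patterns (the specific vectors are assumptions for illustration):

```python
import numpy as np

# Hypothetical exemplars: orthonormal inputs u^k (columns of U, N=3)
# associated with targets y^k (columns of Y, M=2).
U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
Y = np.array([[2.0, -1.0],
              [0.5,  3.0]])

W_T = Y @ U.T                     # Eq. (3.2.8): W^T = Y U^T

# Exact recall, Eq. (3.2.19): x = W^T u^k equals y^k.
for k in range(U.shape[1]):
    assert np.allclose(W_T @ U[:, k], Y[:, k])

# A perturbed input gives y^r + W^T eps, Eqs. (3.2.17)-(3.2.18);
# this eps is orthogonal to span{u^k}, so W^T eps = 0.
eps = np.array([0.0, 0.0, 0.1])
print(W_T @ (U[:, 0] + eps))      # -> y^1 = [2.0, 0.5]
```

The assertion loop is exactly the interpolation condition (3.2.10).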
3.2 Linear Associators: General Case

Until now we have investigated the use of the linear mapping W^T = Y U^T as associative memory, which works well when the input patterns are orthonormal. When the input patterns are not orthonormal, the linear associator cannot map some input patterns to the desired output patterns without error. In the following we will investigate the conditions necessary to minimize the output error for the exemplar patterns.

For a given set of exemplars µ^k = (u^k, y^k), u^k ∈ R^N, y^k ∈ R^M, k = 1..K, our purpose is to find a linear mapping A* among A: R^N → R^M such that

    A* = argmin_A Σ_k ‖y^k − A u^k‖    (3.2.20)

where ‖.‖ is chosen as the Euclidean norm. The problem may be reformulated by using the matrices U and Y [Haykin 94]:

    A* = argmin_A ‖Y − A U‖    (3.2.21)
3.2 Linear Associators: General Case

The pseudoinverse method [Kohonen 76], based on least-squares estimation, provides a solution for the problem, in which A* is determined as

    A* = Y U^+    (3.2.22)

where U^+ is the pseudoinverse of U, that is, a matrix satisfying the condition

    U^+ U = I    (3.2.23)

where I is the identity matrix. A perfect match is obtained by using A* = Y U^+, since

    A* U = Y U^+ U = Y    (3.2.24)

resulting in no error due to the fact that

    Y − A* U = 0    (3.2.25)
3.2 Linear Associators: Linearly Independent Patterns

In the case that the input patterns are linearly independent, that is, none of them can be obtained as a linear combination of the others, a matrix U^+ satisfying Eq. (3.2.23) can be obtained by applying the formula [Golub and Van Loan 89, Haykin 94]

    U^+ = (U^T U)^{-1} U^T    (3.2.26)

Notice that for the input patterns, which are the columns of the matrix U, to be linearly independent, the number of columns should not be more than the number of rows, that is K ≤ N; otherwise U^T U will be singular and no inverse will exist. The condition K ≤ N means that the number of entries constituting the patterns restricts the capacity of the memory: at most N patterns can be stored in such a memory.

This memory can be implemented by a neural network for which W^T = Y U^+. The desired value y^k appears at the output of the network as x when u^k is applied as input:

    x = W^T u^k    (3.2.27)

as explained previously.
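The pseudoinverse formula (3.2.26) is easy to verify numerically. A sketch with NumPy, using hypothetical patterns that are linearly independent but not orthonormal:

```python
import numpy as np

# Hypothetical inputs: linearly independent, not orthonormal (K=2 <= N=3).
U = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])
Y = np.array([[1.0, -1.0]])                    # targets, M=1

U_plus = np.linalg.inv(U.T @ U) @ U.T          # Eq. (3.2.26)
assert np.allclose(U_plus @ U, np.eye(2))      # Eq. (3.2.23): U^+ U = I

W_T = Y @ U_plus                               # A* = Y U^+, Eq. (3.2.22)
for k in range(2):                             # perfect recall, Eq. (3.2.24)
    assert np.allclose(W_T @ U[:, k], Y[:, k])

# For full-column-rank U this agrees with NumPy's general pseudoinverse.
assert np.allclose(U_plus, np.linalg.pinv(U))
print("pseudoinverse recall is exact")
```

`np.linalg.pinv` handles the rank-deficient case as well, where (3.2.26) would fail because U^T U is singular.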
3.2 Linear Associators: Linearly Independent Patterns

Notice that for the special case of orthonormal patterns that we examined previously in this section, we have

    U^T U = I    (3.2.28)

which results in a pseudoinverse of the form

    U^+ = U^T    (3.2.29)

and therefore

    W^T = Y U^+ = Y U^T    (3.2.30)

as we derived previously.

3.3 Hopfield Autoassociative Memory

In this section we will investigate how the Hopfield network can be used as autoassociative memory. For this purpose, some modifications are made to the continuous Hopfield network so that it works in discrete state space and discrete time.

Figure 3.3 Hopfield Associative Memory
3.3 Hopfield Autoassociative Memory

Note that whenever the patterns to be stored in the Hopfield network are from the N-dimensional bipolar space constituting a hypercube, that is u^k ∈ {−1, 1}^N, k = 1..K, it is convenient to have the stable states of the network on the corners of the hypercube. If we let the output transfer function of the neurons have very high gain, in the extreme case

    f(a) = lim_{κ→∞} tanh(κa)    (3.3.1)

we obtain

    f(a) = sgn(a) = { 1 for a > 0;  0 for a = 0;  −1 for a < 0 }    (3.3.2)

Furthermore, note that the second term of the energy function (which was given previously in Chapter 2)

    E = −(1/2) Σ_i Σ_j w_ij x_i x_j + Σ_i (1/R_i) ∫_0^{x_i} f^{−1}(x) dx − Σ_i θ_i x_i    (3.3.3)

approaches zero. Therefore the stable states of the network correspond to the local minima of the function

    E = −(1/2) Σ_i Σ_j w_ij x_i x_j − Σ_i θ_i x_i    (3.3.4)

so that they lie on the corners of the hypercube, as explained previously.
3.3 Hopfield Autoassociative Memory

The discrete-time state excitation [Hopfield 82] of the network is provided in the following:

    x_i(k+1) = f(a_i(k)) = { 1 for a_i(k) > 0;  x_i(k) for a_i(k) = 0;  −1 for a_i(k) < 0 }    (3.3.5)

where a_i(k) is defined as usual, that is,

    a_i(k) = Σ_j w_ij x_j(k) + θ_i    (3.3.6)

The processing elements of the network are updated one at a time, such that all of the processing elements are updated at the same average rate.

For stability of the bipolar discrete Hopfield network, it is further required to have w_ii = 0 in addition to the constraint w_ij = w_ji. In order to use the discrete Hopfield network as autoassociative memory, its weights are fixed to

    W^T = U U^T    (3.3.8)

where U is the input pattern matrix as defined in Eq. (3.2.6), and then the w_ii are set to 0. Remember that in autoassociative memory we have Y = U, where Y is the matrix of desired output patterns as defined in Eq. (3.2.7).
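The role of the constraints w_ij = w_ji and w_ii = 0 is that they make the energy (3.3.4) non-increasing under the asynchronous update (3.3.5). This can be checked empirically; a sketch with NumPy and hypothetical random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical random symmetric weights with zero diagonal, as required
# for stability of the discrete Hopfield network.
N = 8
W = rng.normal(size=(N, N))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
theta = rng.normal(size=N)

def energy(x):
    """Eq. (3.3.4): E = -1/2 sum_ij w_ij x_i x_j - sum_i theta_i x_i."""
    return -0.5 * x @ W @ x - theta @ x

x = rng.choice([-1.0, 1.0], size=N)
energies = [energy(x)]
for _ in range(100):                   # asynchronous: one unit at a time
    i = rng.integers(N)
    a = W[i] @ x + theta[i]            # Eq. (3.3.6)
    if a != 0:
        x[i] = 1.0 if a > 0 else -1.0  # Eq. (3.3.5)
    energies.append(energy(x))

# Energy never increases along the trajectory.
assert all(e2 <= e1 + 1e-12 for e1, e2 in zip(energies, energies[1:]))
print("asynchronous updates never increase E")
```

Flipping unit i to sign(a_i) changes the energy by −(x_i_new − x_i_old) a_i ≤ 0, which is why the assertion holds; with w_ii ≠ 0 this bound would no longer be exact.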
3.3 Hopfield Autoassociative Memory

If all the states of the network are updated at once, the next state of the system may be represented in the form

    x(k+1) = f(W^T x(k))    (3.3.9)

For the special case in which the exemplars are orthonormal, we have

    f(W^T u^r) = f(u^r) = u^r    (3.3.10)

which means each exemplar is a stable state of the network. Whenever the initial state is set to one of the exemplars, the system remains there. However, if the initial state is set to some arbitrary input, then the network converges to one of the stored exemplars, depending on the basin of attraction in which x(0) lies.

In general, however, the input patterns are not orthonormal, so there is no guarantee that each exemplar corresponds to a stable state. Therefore the problems that we mentioned in Section 3.1 arise. The capacity of the Hopfield network is less than 0.138 N patterns, where N is the number of units in the network [Lippmann 89]. It is shown in the lecture notes that the energy function always decreases as the states of the processing elements are changed one by one (asynchronous update).
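A minimal recall sketch of Eqs. (3.3.8)–(3.3.9), with two hypothetical bipolar patterns; the update is applied synchronously as in (3.3.9), and the zero-activation case keeps the previous state as in (3.3.5):

```python
import numpy as np

# Store two hypothetical bipolar patterns (columns of U) in a
# discrete Hopfield network.
U = np.array([[ 1,  1],
              [ 1, -1],
              [-1,  1],
              [-1, -1],
              [ 1,  1]], dtype=float)          # N=5, K=2

W = U @ U.T                                    # Eq. (3.3.8): W^T = U U^T
np.fill_diagonal(W, 0.0)                       # w_ii = 0

def step(x, W):
    """One synchronous update, Eqs. (3.3.5)/(3.3.9), theta = 0."""
    a = W @ x
    return np.where(a > 0, 1.0, np.where(a < 0, -1.0, x))

# A probe with one flipped bit relaxes back to the stored exemplar u^1.
x = U[:, 0].copy()
x[0] = -x[0]
for _ in range(10):
    x = step(x, W)
print(x)   # -> u^1 = [ 1.  1. -1. -1.  1.]
```

With only two well-separated patterns in five units we are far below the 0.138 N capacity bound, so the exemplar is recovered exactly; crowding in more patterns would start producing spurious attractors.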
3.4 Bidirectional Associative Memory

The Bidirectional Associative Memory (BAM) introduced in [Kosko 88] is a recurrent network (Figure 3.4) designed to work as heteroassociative memory [Nielsen 90].

Figure 3.4: Bidirectional Associative Memory

The BAM network consists of two sets of neurons whose outputs are represented by the vectors x ∈ R^N and v ∈ R^M respectively, having activations defined by the pair of equations

    da_{x_i}/dt = −α_i a_{x_i} + Σ_{j=1..M} w_ij f(a_{v_j}) + θ_i,  i = 1..N    (3.4.1)

    da_{v_j}/dt = −β_j a_{v_j} + Σ_{i=1..N} w_ij f(a_{x_i}) + φ_j,  j = 1..M    (3.4.2)

where α_i, β_j, θ_i, φ_j are positive constants for i = 1..N, j = 1..M, f is the tanh function, and W = [w_ij] is any N×M real matrix.
3.4 Bidirectional Associative Memory

The stability of the BAM network can be proved easily by applying the Cohen-Grossberg theorem, defining a state vector z ∈ R^{N+M} such that

    z_i = x_i for 1 ≤ i ≤ N,  z_{N+j} = v_j for 1 ≤ j ≤ M    (3.4.3)

that is, z is obtained through the concatenation of x and v. Since the BAM is a special case of the network defined by the Cohen-Grossberg theorem, it has a Lyapunov energy function, as provided in the following:

    E(x, v) = −Σ_{i=1..N} Σ_{j=1..M} w_ij f(a_{x_i}) f(a_{v_j}) + Σ_{i=1..N} α_i ∫_0^{a_{x_i}} b f′(b) db + Σ_{j=1..M} β_j ∫_0^{a_{v_j}} b f′(b) db − Σ_{i=1..N} θ_i f(a_{x_i}) − Σ_{j=1..M} φ_j f(a_{v_j})    (3.4.4)
3.4 Bidirectional Associative Memory

The discrete BAM model is defined in a manner similar to the discrete Hopfield network. The output functions are chosen as f(a) = sgn(a) and the states are excited as:

    x_i(k+1) = f(a_{x_i}(k)) = { 1 for a_{x_i}(k) > 0;  x_i(k) for a_{x_i}(k) = 0;  −1 for a_{x_i}(k) < 0 }    (3.4.5)

where

    a_{x_i}(k) = Σ_{j=1..M} w_ij f(a_{v_j}(k)) + θ_i,  i = 1..N    (3.4.6)

and

    v_j(k+1) = f(a_{v_j}(k)) = { 1 for a_{v_j}(k) > 0;  v_j(k) for a_{v_j}(k) = 0;  −1 for a_{v_j}(k) < 0 }    (3.4.7)

where

    a_{v_j}(k) = Σ_{i=1..N} w_ij f(a_{x_i}(k)) + φ_j,  j = 1..M    (3.4.8)

In compact matrix notation (with W of size N×M) this is shortly

    x(k+1) = f(W v(k))    (3.4.9)

and

    v(k+1) = f(W^T x(k+1))    (3.4.10)
3.4 Bidirectional Associative Memory

In the discrete BAM, the energy function becomes

    E(x, v) = −Σ_{i=1..N} Σ_{j=1..M} w_ij f(a_{x_i}) f(a_{v_j}) − Σ_{i=1..N} θ_i f(a_{x_i}) − Σ_{j=1..M} φ_j f(a_{v_j})    (3.4.11)

satisfying the condition

    ΔE ≤ 0    (3.4.12)

which implies the stability of the system.

The weights of the BAM are determined by the equation

    W^T = Y U^T    (3.4.13)

For the special case of orthonormal input and output patterns we have

    f(W^T u^r) = f(Y U^T u^r) = f(y^r) = y^r    (3.4.14)

and

    f(W y^r) = f(U Y^T y^r) = f(u^r) = u^r    (3.4.15)

indicating that the exemplars are stable states of the network.
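A discrete-BAM recall sketch of Eqs. (3.4.9)–(3.4.10) and (3.4.13), with hypothetical bipolar pairs whose input patterns are mutually orthogonal (not normalized; the sign function makes recall exact anyway):

```python
import numpy as np

# Hypothetical bipolar exemplar pairs (u^k, y^k); the two input
# patterns are mutually orthogonal, so sign recall is exact.
U = np.array([[ 1, -1],
              [ 1,  1],
              [-1,  1],
              [-1, -1]], dtype=float)   # N = 4, K = 2
Y = np.array([[ 1,  1],
              [-1,  1],
              [ 1, -1]], dtype=float)   # M = 3

W = U @ Y.T                              # N x M, so W^T = Y U^T (Eq. 3.4.13)

def f(a, prev):
    """sgn(a), holding the previous state where a = 0 (cf. Eq. 3.4.5)."""
    return np.where(a > 0, 1.0, np.where(a < 0, -1.0, prev))

x = U[:, 0].copy()                       # present u^1 on the x layer
v = np.zeros(3)
for _ in range(5):                       # alternate the two layers
    v = f(W.T @ x, v)                    # v(k+1) = f(W^T x), Eq. (3.4.10)
    x = f(W @ v, x)                      # x(k+1) = f(W v),  Eq. (3.4.9)
print(v)   # recalls y^1 = [ 1. -1.  1.]
```

The (x, v) pair settles into a joint fixed point, which is exactly the stability expressed by (3.4.14)–(3.4.15).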
3.4 Bidirectional Associative Memory

Whenever the initial state is set to one of the exemplars, the system remains there. For arbitrary initial states the network converges to one of the stored exemplars, depending on the basin of attraction in which x(0) lies. For input patterns that are not orthonormal, the network behaves as explained for the Hopfield network.
Feb 4: Spatal analyss of data felds Mappng rregularly sampled data onto a regular grd Many analyss technques for geophyscal data requre the data be located at regular ntervals n space and/or tme. hs s
More informationModelli Clamfim Equazione del Calore Lezione ottobre 2014
CLAMFIM Bologna Modell 1 @ Clamfm Equazone del Calore Lezone 17 15 ottobre 2014 professor Danele Rtell danele.rtell@unbo.t 1/24? Convoluton The convoluton of two functons g(t) and f(t) s the functon (g
More informationFoundations of Arithmetic
Foundatons of Arthmetc Notaton We shall denote the sum and product of numbers n the usual notaton as a 2 + a 2 + a 3 + + a = a, a 1 a 2 a 3 a = a The notaton a b means a dvdes b,.e. ac = b where c s an
More informationBOUNDEDNESS OF THE RIESZ TRANSFORM WITH MATRIX A 2 WEIGHTS
BOUNDEDNESS OF THE IESZ TANSFOM WITH MATIX A WEIGHTS Introducton Let L = L ( n, be the functon space wth norm (ˆ f L = f(x C dx d < For a d d matrx valued functon W : wth W (x postve sem-defnte for all
More informationFirst day August 1, Problems and Solutions
FOURTH INTERNATIONAL COMPETITION FOR UNIVERSITY STUDENTS IN MATHEMATICS July 30 August 4, 997, Plovdv, BULGARIA Frst day August, 997 Problems and Solutons Problem. Let {ε n } n= be a sequence of postve
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 13
CME 30: NUMERICAL LINEAR ALGEBRA FALL 005/06 LECTURE 13 GENE H GOLUB 1 Iteratve Methods Very large problems (naturally sparse, from applcatons): teratve methods Structured matrces (even sometmes dense,
More informationOpen Systems: Chemical Potential and Partial Molar Quantities Chemical Potential
Open Systems: Chemcal Potental and Partal Molar Quanttes Chemcal Potental For closed systems, we have derved the followng relatonshps: du = TdS pdv dh = TdS + Vdp da = SdT pdv dg = VdP SdT For open systems,
More informationRandom Walks on Digraphs
Random Walks on Dgraphs J. J. P. Veerman October 23, 27 Introducton Let V = {, n} be a vertex set and S a non-negatve row-stochastc matrx (.e. rows sum to ). V and S defne a dgraph G = G(V, S) and a drected
More informationVQ widely used in coding speech, image, and video
at Scalar quantzers are specal cases of vector quantzers (VQ): they are constraned to look at one sample at a tme (memoryless) VQ does not have such constrant better RD perfomance expected Source codng
More information1 Convex Optimization
Convex Optmzaton We wll consder convex optmzaton problems. Namely, mnmzaton problems where the objectve s convex (we assume no constrants for now). Such problems often arse n machne learnng. For example,
More information10. Canonical Transformations Michael Fowler
10. Canoncal Transformatons Mchael Fowler Pont Transformatons It s clear that Lagrange s equatons are correct for any reasonable choce of parameters labelng the system confguraton. Let s call our frst
More informationCOS 521: Advanced Algorithms Game Theory and Linear Programming
COS 521: Advanced Algorthms Game Theory and Lnear Programmng Moses Charkar February 27, 2013 In these notes, we ntroduce some basc concepts n game theory and lnear programmng (LP). We show a connecton
More informationMultilayer Perceptron (MLP)
Multlayer Perceptron (MLP) Seungjn Cho Department of Computer Scence and Engneerng Pohang Unversty of Scence and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea seungjn@postech.ac.kr 1 / 20 Outlne
More informationSolutions to exam in SF1811 Optimization, Jan 14, 2015
Solutons to exam n SF8 Optmzaton, Jan 4, 25 3 3 O------O -4 \ / \ / The network: \/ where all lnks go from left to rght. /\ / \ / \ 6 O------O -5 2 4.(a) Let x = ( x 3, x 4, x 23, x 24 ) T, where the varable
More informationMaximizing the number of nonnegative subsets
Maxmzng the number of nonnegatve subsets Noga Alon Hao Huang December 1, 213 Abstract Gven a set of n real numbers, f the sum of elements of every subset of sze larger than k s negatve, what s the maxmum
More informationFREQUENCY DISTRIBUTIONS Page 1 of The idea of a frequency distribution for sets of observations will be introduced,
FREQUENCY DISTRIBUTIONS Page 1 of 6 I. Introducton 1. The dea of a frequency dstrbuton for sets of observatons wll be ntroduced, together wth some of the mechancs for constructng dstrbutons of data. Then
More informationAssortment Optimization under MNL
Assortment Optmzaton under MNL Haotan Song Aprl 30, 2017 1 Introducton The assortment optmzaton problem ams to fnd the revenue-maxmzng assortment of products to offer when the prces of products are fxed.
More informationPARTICIPATION FACTOR IN MODAL ANALYSIS OF POWER SYSTEMS STABILITY
POZNAN UNIVE RSITY OF TE CHNOLOGY ACADE MIC JOURNALS No 86 Electrcal Engneerng 6 Volodymyr KONOVAL* Roman PRYTULA** PARTICIPATION FACTOR IN MODAL ANALYSIS OF POWER SYSTEMS STABILITY Ths paper provdes a
More informationThe equation of motion of a dynamical system is given by a set of differential equations. That is (1)
Dynamcal Systems Many engneerng and natural systems are dynamcal systems. For example a pendulum s a dynamcal system. State l The state of the dynamcal system specfes t condtons. For a pendulum n the absence
More informationChapter Newton s Method
Chapter 9. Newton s Method After readng ths chapter, you should be able to:. Understand how Newton s method s dfferent from the Golden Secton Search method. Understand how Newton s method works 3. Solve
More informationMATH 5630: Discrete Time-Space Model Hung Phan, UMass Lowell March 1, 2018
MATH 5630: Dscrete Tme-Space Model Hung Phan, UMass Lowell March, 08 Newton s Law of Coolng Consder the coolng of a well strred coffee so that the temperature does not depend on space Newton s law of collng
More informationU.C. Berkeley CS294: Beyond Worst-Case Analysis Luca Trevisan September 5, 2017
U.C. Berkeley CS94: Beyond Worst-Case Analyss Handout 4s Luca Trevsan September 5, 07 Summary of Lecture 4 In whch we ntroduce semdefnte programmng and apply t to Max Cut. Semdefnte Programmng Recall that
More informationarxiv: v1 [quant-ph] 6 Sep 2007
An Explct Constructon of Quantum Expanders Avraham Ben-Aroya Oded Schwartz Amnon Ta-Shma arxv:0709.0911v1 [quant-ph] 6 Sep 2007 Abstract Quantum expanders are a natural generalzaton of classcal expanders.
More informationModule 9. Lecture 6. Duality in Assignment Problems
Module 9 1 Lecture 6 Dualty n Assgnment Problems In ths lecture we attempt to answer few other mportant questons posed n earler lecture for (AP) and see how some of them can be explaned through the concept
More information332600_08_1.qxp 4/17/08 11:29 AM Page 481
336_8_.qxp 4/7/8 :9 AM Page 48 8 Complex Vector Spaces 8. Complex Numbers 8. Conjugates and Dvson of Complex Numbers 8.3 Polar Form and DeMovre s Theorem 8.4 Complex Vector Spaces and Inner Products 8.5
More information5 The Rational Canonical Form
5 The Ratonal Canoncal Form Here p s a monc rreducble factor of the mnmum polynomal m T and s not necessarly of degree one Let F p denote the feld constructed earler n the course, consstng of all matrces
More information9 Characteristic classes
THEODORE VORONOV DIFFERENTIAL GEOMETRY. Sprng 2009 [under constructon] 9 Characterstc classes 9.1 The frst Chern class of a lne bundle Consder a complex vector bundle E B of rank p. We shall construct
More informationSupplement: Proofs and Technical Details for The Solution Path of the Generalized Lasso
Supplement: Proofs and Techncal Detals for The Soluton Path of the Generalzed Lasso Ryan J. Tbshran Jonathan Taylor In ths document we gve supplementary detals to the paper The Soluton Path of the Generalzed
More informationExample: (13320, 22140) =? Solution #1: The divisors of are 1, 2, 3, 4, 5, 6, 9, 10, 12, 15, 18, 20, 27, 30, 36, 41,
The greatest common dvsor of two ntegers a and b (not both zero) s the largest nteger whch s a common factor of both a and b. We denote ths number by gcd(a, b), or smply (a, b) when there s no confuson
More informationCanonical transformations
Canoncal transformatons November 23, 2014 Recall that we have defned a symplectc transformaton to be any lnear transformaton M A B leavng the symplectc form nvarant, Ω AB M A CM B DΩ CD Coordnate transformatons,
More informationPHYS 705: Classical Mechanics. Calculus of Variations II
1 PHYS 705: Classcal Mechancs Calculus of Varatons II 2 Calculus of Varatons: Generalzaton (no constrant yet) Suppose now that F depends on several dependent varables : We need to fnd such that has a statonary
More informationGrover s Algorithm + Quantum Zeno Effect + Vaidman
Grover s Algorthm + Quantum Zeno Effect + Vadman CS 294-2 Bomb 10/12/04 Fall 2004 Lecture 11 Grover s algorthm Recall that Grover s algorthm for searchng over a space of sze wors as follows: consder the
More informationFeature Selection: Part 1
CSE 546: Machne Learnng Lecture 5 Feature Selecton: Part 1 Instructor: Sham Kakade 1 Regresson n the hgh dmensonal settng How do we learn when the number of features d s greater than the sample sze n?
More informationBasic Regular Expressions. Introduction. Introduction to Computability. Theory. Motivation. Lecture4: Regular Expressions
Introducton to Computablty Theory Lecture: egular Expressons Prof Amos Israel Motvaton If one wants to descrbe a regular language, La, she can use the a DFA, Dor an NFA N, such L ( D = La that that Ths
More informationFormulas for the Determinant
page 224 224 CHAPTER 3 Determnants e t te t e 2t 38 A = e t 2te t e 2t e t te t 2e 2t 39 If 123 A = 345, 456 compute the matrx product A adj(a) What can you conclude about det(a)? For Problems 40 43, use
More information8.6 The Complex Number System
8.6 The Complex Number System Earler n the chapter, we mentoned that we cannot have a negatve under a square root, snce the square of any postve or negatve number s always postve. In ths secton we want
More informationLecture 3: Shannon s Theorem
CSE 533: Error-Correctng Codes (Autumn 006 Lecture 3: Shannon s Theorem October 9, 006 Lecturer: Venkatesan Guruswam Scrbe: Wdad Machmouch 1 Communcaton Model The communcaton model we are usng conssts
More informationLecture 20: Lift and Project, SDP Duality. Today we will study the Lift and Project method. Then we will prove the SDP duality theorem.
prnceton u. sp 02 cos 598B: algorthms and complexty Lecture 20: Lft and Project, SDP Dualty Lecturer: Sanjeev Arora Scrbe:Yury Makarychev Today we wll study the Lft and Project method. Then we wll prove
More informationDifference Equations
Dfference Equatons c Jan Vrbk 1 Bascs Suppose a sequence of numbers, say a 0,a 1,a,a 3,... s defned by a certan general relatonshp between, say, three consecutve values of the sequence, e.g. a + +3a +1
More informationFor now, let us focus on a specific model of neurons. These are simplified from reality but can achieve remarkable results.
Neural Networks : Dervaton compled by Alvn Wan from Professor Jtendra Malk s lecture Ths type of computaton s called deep learnng and s the most popular method for many problems, such as computer vson
More information