Neural Networks & Learning

1. Introduction

The basic preliminaries involved in Artificial Neural Networks (ANN) are described in this section. An Artificial Neural Network (ANN) is an information-processing paradigm inspired by biological nervous systems, such as the brain. It is composed of a large number of highly interconnected processing elements working in unison to solve specific problems. An ANN is a type of artificial intelligence that attempts to imitate the way a human brain works. We advise the reader to read S. Haykin, Neural Networks: A Comprehensive Foundation, 1999.

Fig. 0 A biological neuron

Most ANN structures commonly used in applications consider the behavior of a single neuron as the basic computing unit describing neural information-processing operations. Each computing unit, i.e. each artificial neuron in the neural network, is based on the concept of an ideal neuron. A biological neuron, or nerve cell, consists of synapses, dendrites and the axon; the functions of the main elements are:
- Dendrite: Receives signals from other neurons.
- Soma: Sums all the incoming signals.
- Axon: When a particular amount of input is received, the cell fires and transmits the signal through the axon to other cells.

For the purposes of the course we will look at neural networks as function approximators. As shown in the figure below, we have some unknown function that we wish to approximate. We want to adjust the parameters of the network so that it will produce the same response as the unknown function, if the same input is applied to both systems.
Fig. Neural Network as Function Approximator

For our applications, the unknown function may correspond to a system we are trying to control, in which case the neural network will be the identified plant model. The unknown function could also represent the inverse of a system we are trying to control, in which case the neural network can be used to implement the controller. At the end of this tutorial we will present several control architectures demonstrating a variety of uses for function-approximator neural networks. In the next section we will present the multilayer perceptron neural network, and will demonstrate how it can be used as a function approximator.

2. Artificial Neural Networks

Artificial neural networks are nonlinear information-processing devices built from interconnected elementary processing units called neurons. An ANN is thus an information-processing system in which the elements, called neurons, process the information. The signals are transmitted by means of connection links. Each link possesses an associated weight, which is multiplied with the incoming signal to form the net input in any typical neural net. The output signal is obtained by applying an activation function to the net input. An artificial neural network is characterized by:
1. Architecture (the connections between neurons)
2. Training or learning (determining the weights on the connections)
3. Activation function
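As a minimal sketch of a single artificial neuron as just described, a weighted net input followed by an activation function; the sigmoid activation and the example weights are illustrative assumptions, not values from the text:

```python
import math

def neuron_output(inputs, weights, bias):
    """Artificial neuron: net input = bias + weighted sum; output = activation(net)."""
    net = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid activation (an assumed choice)

# Illustrative example: two inputs with hand-picked weights
y = neuron_output([1.0, 0.5], weights=[0.4, -0.2], bias=0.1)
print(y)  # sigmoid(0.4) ≈ 0.5987
```

With zero weights and bias the net input is 0 and the sigmoid returns 0.5, which is a quick sanity check on the activation.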
3. Basic Network Structures

a. Historically, the earliest ANN is the perceptron, proposed by the psychologist Frank Rosenblatt.
b. The Adaline (Adaptive Linear Neuron). It is a single neuron, not a network.
c. The Madaline (Many Adalines). This is an ANN formulation based on the Adaline above.
d. The Multi-Layer Perceptron. This is a generalized architecture of the perceptron. This net is used for function approximation problems.
e. The Hopfield Network. This network has an important distinguishing feature: recurrent feedback between neurons. This net provides an efficient solution for the Traveling Salesman Problem.
f. The Self-Organizing Map, which is utilized to facilitate unsupervised learning. These nets are applied to many recognition problems.

4. Feedback Neural Networks Architecture

This type of network was described by J.J. Hopfield in 1982. The topology of a Hopfield network is very simple: it has n neurons, which are all networked with each other.
The architecture shown in the previous figure consists of 4 input neurons and 4 output neurons. It should be noted that, apart from receiving a signal from the input, the first neuron also receives signals from the other output neurons. The same holds for all the other output neurons. Thus, there exists a feedback output being returned to each output neuron. That is why the Hopfield network is called a feedback network.

5. Feedforward Neural Networks Architecture

The MLP and RBF neural network architectures may be viewed as practical vehicles for performing a nonlinear input-output mapping of a general nature.

5.1 MLP

We study multilayer feedforward networks, an important class of neural networks. Typically, the network consists of a set of sensory units that constitute the input layer, one or more hidden layers of computation nodes, and an output layer of computation nodes. The input signal propagates through the network in a forward direction, on a layer-by-layer basis. These neural networks are commonly referred to as Multi-Layer Perceptrons.
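The layer-by-layer forward propagation just described can be sketched as follows. This is a minimal illustration only, not the training procedure; the sigmoid activation and the small example weights are assumptions made for demonstration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    """One layer: each node applies the activation to its bias plus weighted sum."""
    return [sigmoid(b + sum(w * x for w, x in zip(row, inputs)))
            for row, b in zip(weights, biases)]

def mlp_forward(x, layers):
    """Propagate the input through each (weights, biases) layer in turn."""
    for weights, biases in layers:
        x = layer_forward(x, weights, biases)
    return x

# 2 inputs -> 3 hidden nodes -> 1 output (illustrative weights)
hidden = ([[0.2, -0.5], [0.7, 0.1], [-0.3, 0.4]], [0.1, -0.2, 0.0])
output = ([[0.5, -0.6, 0.3]], [0.05])
y = mlp_forward([1.0, 0.0], [hidden, output])
```

Each layer's output list becomes the next layer's input, which is exactly the layer-by-layer forward direction of the text.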
Figure 1.1 MLPNN architecture (inputs x_1, ..., x_n; a hidden layer with weights v; outputs y_1, ..., y_m with weights w)

Figure 1.1 shows the architectural graph of an MLP with 2 hidden layers and an output layer. An MLP has three distinctive characteristics:

1. The model of each neuron in the network includes a nonlinear activation function. The important point is that the nonlinearity is smooth; a commonly used form of nonlinearity that satisfies this requirement is the sigmoidal nonlinearity, defined as follows:

  y = 1 / (1 + e^{-x})
2. The neural network contains one or more layers of hidden neurons.

3. The network exhibits a high degree of connectivity, determined by the synapses of the network. A change in the connectivity of the network requires a change in the population of synaptic connections or their weights.

Back-propagation algorithm

MLPs have been applied successfully to solve some difficult and diverse problems by training them in a supervised manner with the highly popular algorithm known as the error back-propagation algorithm, which is based on the error-correction learning rule. Basically, the error BP algorithm consists of two passes through the different layers of the network: a forward pass and a backward pass. In the forward pass, an input vector is applied to the input layer of the network, and its effect propagates through the network layer by layer. During the backward pass, on the other hand, the synaptic weights of the network are all adjusted in accordance with the error-correction rule. Specifically, the actual response of the network is subtracted from a desired response to produce an error signal. This error is then propagated backward through the network. The training algorithm of back-propagation can be described as follows:

Initialization of the weights
Step 1: Initialize the weights to small random values.
Step 2: While the stopping condition is false, do Steps 3-10.
Step 3: For each training pair, do Steps 4-9.

Feedforward pass
Step 4: Each input unit receives the input signal x_i and transmits this signal to all units in the hidden layer.
Step 5: Each hidden unit z_j, j = 1, ..., p, sums its weighted input signals,

  z_in_j = v_{0j} + Σ_{i=1}^{n} x_i v_{ij}

applies the activation function, z_j = f(z_in_j), and sends this to all units in the output layer.
Step 6: Each output unit y_k, k = 1, ..., m, sums its weighted input signals,

  y_in_k = w_{0k} + Σ_{j=1}^{p} z_j w_{jk}

and applies its activation function to calculate the output signal, y_k = f(y_in_k).

Backward pass
Step 7: Each output unit y_k, k = 1, ..., m, receives a target pattern t_k corresponding to the input pattern; the error information term is calculated as

  δ_k = (t_k - y_k) f'(y_in_k)
Step 8: Each hidden unit z_j, j = 1, ..., p, sums its delta inputs from the units in the layer above,

  δ_in_j = Σ_{k=1}^{m} δ_k w_{jk}

and the error information term is calculated as

  δ_j = δ_in_j f'(z_in_j)

Updating weights and biases
Step 9: Each output unit y_k, k = 1, ..., m, updates its weights and bias (j = 0, ..., p). The weight correction term is given by

  ΔW_{jk} = α δ_k z_j

and the bias correction term is given by

  ΔW_{0k} = α δ_k

Therefore, W_{jk}(new) = W_{jk}(old) + ΔW_{jk} and W_{0k}(new) = W_{0k}(old) + ΔW_{0k}.

Each hidden unit z_j, j = 1, ..., p, updates its weights and bias (i = 0, ..., n). The weight correction term is

  ΔV_{ij} = α δ_j x_i

and the bias correction term is

  ΔV_{0j} = α δ_j

Therefore, V_{ij}(new) = V_{ij}(old) + ΔV_{ij} and V_{0j}(new) = V_{0j}(old) + ΔV_{0j}.

Step 10: Test the stopping condition; the stopping condition may be the minimization of the error, the number of iterations, etc.

5.2 RBF

A radial basis function network can be used for approximating functions too. It uses Gaussian kernel functions. The architecture of a radial basis function network consists of three layers, the input, the hidden and the output layer, as shown in the figure below. The radial basis function network is a multilayer feed-forward network. The output of the RBFNN is a linear combination of basis functions,

  y = Σ_{i=1}^{n} w_i φ(||x - x_i||)

where φ is a mapping from R_+ to R and the norm is the Euclidean distance.
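A minimal sketch of this RBFNN output computation, assuming a Gaussian kernel (as the text mentions) and illustrative centers, weights and width; all numeric values here are hypothetical:

```python
import math

def gaussian_rbf(x, center, sigma):
    """Gaussian basis function phi(||x - center||) with width sigma."""
    r2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-r2 / (2.0 * sigma ** 2))

def rbf_output(x, centers, weights, sigma):
    """Linear combination of basis functions: y = sum_i w_i * phi(||x - x_i||)."""
    return sum(w * gaussian_rbf(x, c, sigma)
               for w, c in zip(weights, centers))

# Two centers chosen (hypothetically) from the input vectors
y = rbf_output([0.5, 0.5], centers=[[0.0, 0.0], [1.0, 1.0]],
               weights=[1.0, -1.0], sigma=0.7)
print(y)  # 0.0: the point is equidistant from both centers, so the weights cancel
```

Note that each basis function responds only to inputs near its own center, which is why the choice of centers (Step 6 of the training algorithm below) matters so much.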
Figure 1.2 RBFNN architecture (input layer; hidden layer of radial basis functions; linear output weights)

The following forms have been considered as radial basis functions:
a. the multi-quadric function φ(r) = (r^2 + c^2)^{1/2}, where c is a positive constant;
b. φ(r) = r;
c. φ(r) = r^2;
d. φ(r) = r^3;
e. the Gaussian φ(r) = e^{-r^2/2}.

Training algorithm for an RBFNN

The training algorithm for the radial basis function network is given below:
Step 1: Initialize the weights (set to small random values).
Step 2: While the stopping condition is false, do Steps 3-10.
Step 3: For each input, do Steps 4-9.
Step 4: Each input unit x_i, i = 1, ..., n, receives the input signal and transmits it to all units in the layer above.
Step 5: Calculate the radial basis function.
Step 6: Choose the centers for the radial basis functions. The centers are chosen from the set of input vectors. A sufficient number of centers has to be selected in order to ensure adequate sampling of the input vector space.
Step 7: The output v_i(x) of the i-th unit in the hidden layer is
  v_i(x) = exp( -||x - x̂_i||^2 / (2σ^2) )

where x̂_i is the center of the i-th RBF neuron, σ is the width of the RBF and x is the input variable.
Step 8: Initialize the weights in the output layer of the network to some small random values.
Step 9: Calculate the output of the neural network,

  y_net = Σ_{i=1}^{H} w_{im} v_i(x) + w_0

where H is the number of hidden neurons, y_net is the output value of the m-th output neuron, w_{im} is the weight between the i-th RBF unit and the m-th output node, and w_0 is the biasing term at the m-th output node.
Step 10: Calculate the error and test the stopping condition; the stopping condition may be the weight change, the number of iterations, etc.

6. How to design an ANN

- Choose the appropriate ANN needed to solve your problem (MLP, RBF, Hopfield, ...).
- Choose the number of hidden layers needed.
- Choose the number of neurons.
- Choose the training algorithm.
- Validate your results on new examples.

7. Application to Control

Neural networks have been applied very successfully in the identification and control of dynamic systems. The universal approximation capabilities of the multilayer perceptron have made it a popular choice for modeling nonlinear systems and for implementing general-purpose nonlinear controllers. In the last chapter of these lecture notes, we will introduce some of the more popular neural network architectures for system identification and control.
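To close, the error back-propagation procedure described earlier (Steps 1-10) can be sketched end to end. This is a minimal illustration under assumed choices, not a reference implementation: sigmoid activations, a single output unit, online weight updates, a learning rate α = 0.5, and the two-input OR function as training data; all names and values are illustrative.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, t, V, W, alpha):
    """One forward pass plus one backward pass for a single training pair.
    V: hidden-layer weight rows (last entry of each row is the bias v_0j).
    W: output weights (last entry is the bias w_0k); single output unit."""
    # Feedforward (Steps 4-6)
    z_in = [v[-1] + sum(vi * xi for vi, xi in zip(v[:-1], x)) for v in V]
    z = [sigmoid(s) for s in z_in]
    y = sigmoid(W[-1] + sum(wj * zj for wj, zj in zip(W[:-1], z)))
    # Backward pass (Steps 7-8); for the sigmoid, f'(net) = f(net) * (1 - f(net))
    d_k = (t - y) * y * (1.0 - y)
    d_j = [d_k * wj * zj * (1.0 - zj) for wj, zj in zip(W[:-1], z)]
    # Weight and bias updates (Step 9)
    for j in range(len(z)):
        W[j] += alpha * d_k * z[j]
    W[-1] += alpha * d_k
    for j, v in enumerate(V):
        for i in range(len(x)):
            v[i] += alpha * d_j[j] * x[i]
        v[-1] += alpha * d_j[j]
    return (t - y) ** 2  # squared error before the update

random.seed(0)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR (illustrative)
V = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # 4 hidden units
W = [random.uniform(-1, 1) for _ in range(5)]
errors = [sum(train_step(x, t, V, W, alpha=0.5) for x, t in data)
          for _ in range(2000)]  # total squared error per epoch
```

The total squared error per epoch should shrink as training proceeds; in practice the stopping condition of Step 10 (an error threshold or an iteration limit) would terminate the loop instead of a fixed epoch count.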