CHAPTER 3 ARTIFICIAL NEURAL NETWORKS AND LEARNING ALGORITHM

3.1 ARTIFICIAL NEURAL NETWORKS

Introduction

The notion of computing takes many forms. Historically, the term computing has been dominated by the concept of programmed computing rather than neural computing. In programmed computing, algorithms are designed and subsequently implemented using the currently dominant architecture, whereas in neural computing, learning replaces a priori program development. Neural computing offers a potential solution to many currently unsolved problems in conventional computing. Artificial neural networks provide new tools and new foundations for solving practical problems in prediction, decision and control, signal separation, state estimation, pattern recognition, data mining, etc. Traditional statistics tries to collect a huge library of different methods for different tasks, but the brain is living proof that one system can do it all, if there is data. It proves that a system can manage millions of variables without being confused. Nowadays, engineers and scientists are trying to develop intelligent machines. Artificial Neural Systems (ANS) are examples of such machines that have great potential to further improve the quality of human life. Artificial neural networks are collections of mathematical models that emulate some of the observed properties of biological nervous systems and

draw on the analogies of adaptive biological learning. They are capable of developing their behavior through learning. They learn through experience like the human brain. They are dynamical systems in the learning/training phase of their operation, and convergence is an essential feature. There are different types of ANNs. Some of the popular models include the BPN, which is generally trained with the Generalized Delta Rule (GDR), Learning Vector Quantization (LVQ), Radial Basis Function (RBF), Hopfield, Adaptive Resonance Theory (ART) and Kohonen's Self-Organizing Feature Map (SOM) networks. Some ANNs are classified as feedforward while others are recurrent (i.e., implement feedback), depending on how data is processed through the network. The synaptic weight update of ANNs can be carried out by supervised methods, by unsupervised methods, or by fixed weight association network methods. In the case of the supervised methods, inputs and outputs are used, whereas in the unsupervised methods, only the inputs are used. In the fixed weight association network methods, inputs and outputs are used along with pre-computed and pre-stored weights. Artificial Neural Networks are used where a conventional process is not suitable, where the conventional method cannot be easily delivered, where the conventional method cannot fully capture the complexity in the data and the stochastic behavior is important, and where an explanation of the network's decision is not required. The function of a single neuron in an ANN is shown in Figure 3.1. Each neuron receives inputs from other neurons and/or from external sources.

Like a real neuron, the processing element has many inputs but has only a single output, which is connected to many other processing elements in the network. Each processing element is numbered. It applies a linear/nonlinear function on its net input to compute the output.

Figure 3.1 Function of a single neuron

In the above figure, the input pattern is represented by

    X = [x_1, x_2, ..., x_n]^T                                   (3.1)

Each artificial neuron's input has an associated weight which indicates the fraction (or amount) of the transfer of one neuron's output to another neuron's input. The inputs for an artificial neuron are from external sources or from other neurons. The notation w_ij is used to represent the weight on the interconnection link from neuron j to neuron i. The weight vector W is represented as

    W = [w_1, w_2, ..., w_n]^T                                   (3.2)

The net input value of a single neuron is determined by the weighted sum of its inputs, as given by

    net(i) = net_i = w_1 x_1 + w_2 x_2 + ... + w_n x_n           (3.3)
                   = Σ (k = 1 to n) w_k x_k                      (3.4)

where n is the number of inputs. A neuron (or unit) fires if the sum of its inputs exceeds some threshold value. If it fires, it produces an output, which is sent to the neurons of the next layer. In vector notation, the net input value is given by

    net = W^T X                                                  (3.5)

The output value of a single neuron is obtained as

    o(i) = f(net_i)                                              (3.6)

The objective of neural network design is to determine an optimal set of weights w*. Therefore, the artificial neurons involve two important processes:
(i) determining the net input value by combining the inputs, and
(ii) mapping the net input value into the neuron's output. This mapping may be as simple as using the identity function or as complex as using a nonlinear function.
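As an illustration of Equations (3.3) to (3.6), the following MATLAB fragment is a minimal sketch of a single neuron; the input values, weight values and the choice of a sigmoid activation are illustrative assumptions, not values taken from the experiments in this chapter.

    % Minimal sketch of one artificial neuron (Equations 3.3 - 3.6)
    X = [0.2; 0.7; 0.1];              % example input pattern [x_1, x_2, ..., x_n]'
    W = [0.5; -0.3; 0.8];             % example weight vector [w_1, w_2, ..., w_n]'

    net_i = W' * X;                   % weighted sum of the inputs, Equation (3.5)
    o_i = 1 / (1 + exp(-net_i));      % output through a sigmoid activation, Equation (3.6)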

Backpropagation Neural Network

The area of speech signal separation and recovery of original signals from a mixed signal is a challenging domain for information and system processing. Continued attempts at using artificial neural networks in speech signal separation are taking place (Cichocki and Unbehauen 1996; Amari and Cichocki 1998; Meyer et al 2006). They have ensured the separation of extremely weak or badly scaled stationary signals, as well as successful separation even if the mixing matrix is very ill-conditioned. The Backpropagation (BPN) neural network is a multilayer supervised neural network which uses the steepest descent method to update weights. Its architecture, as given in Figure 3.2, has an input layer with I nodes, an output layer with K nodes and one hidden layer with J nodes. Each neuron in the hidden layer has its own input weights, and the output of a neuron in a layer goes to all neurons in the next layer.

Figure 3.2 Architecture of backpropagation neural network

The initial weight values are randomly generated by the Matlab function rand(). The input layer does not process the mixture input; it just distributes the input samples to all the neurons in the hidden layer. The output of each hidden layer neuron is obtained by applying the sigmoid function to its net

input value. Each hidden neuron output is fed to all the neurons in the output layer. Each neuron in the output layer first calculates the net input value and then applies a nonlinear function to the net input to produce an output value m_i. In the earlier stage of this work, an attempt has been made to extract the individual speech signals from an artificially mixed speech signal using the Backpropagation neural network, and its performance is compared with that of the RBF neural network. To compare their performance, the speech waveforms of only two persons are recorded in a closed environment for two seconds at the rate of 8 kHz. The schematic diagram of speech signal recovery by the BPN neural network is shown in Figure 3.3.

Figure 3.3 Schematic diagram of speech signal recovery (training and testing)

4000 samples of each speech signal shown in Figure 3.4 are mixed and are used for training the BPN network.
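The preparation of this training data can be sketched in MATLAB as follows; the file names and the mixing coefficients are illustrative assumptions only, not the recordings or coefficients used in the experiments.

    % Assumed sketch of preparing the BPN training data: read 4000 samples of
    % two recorded speech signals and mix them into one observed signal.
    num_samples = 4000;
    s1 = wavread('speech_person1.wav', num_samples);   % speech of person 1 (8 kHz)
    s2 = wavread('speech_person2.wav', num_samples);   % speech of person 2 (8 kHz)
    mixture = 0.6*s1 + 0.4*s2;                         % artificially mixed training signal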

Figure 3.4 Two speech signals and mixed signal: (a) speech signals of two persons, (b) mixed speech signal

Figure 3.5 Recovery of speech signals by BPN neural network: (a) recovery of speech 1, (b) recovery of speech 2

Figure 3.6 Recovery of speech signals by RBF neural network: (a) recovery of speech 1, (b) recovery of speech 2

From the simulation result shown in Figure 3.5, it has been observed that the signals recovered by the BPN are distorted, with poor speech quality, for a limited number of iterations (the number at which the RBF neural network has converged). The BPN network, which has 15 hidden neurons, takes 65 minutes to reduce the error. The computational load is higher since both the weights between the input and hidden layers and the weights between the hidden and output layers are updated. The major limitation of the backpropagation neural network is its slow convergence (i.e., long training time), and it can end up in local minima. Moreover, there is no proof of convergence, although it seems to perform well in practice. Due to stochastic gradient descent on a nonlinear error surface, it is likely that most of the time the result converges to some local minimum on the error surface. Another major limitation is the problem of scaling: when the complexity of the problem is increased, there is no guarantee that good generalization will result, so the network size, such as the number of hidden layers, has to be increased. This places a heavy burden on the computation and network complexity. So the RBF neural network is trained with the same data and, from Figure 3.6, it is observed that the signals recovered by the RBF network are obtained without distortion. The RBF network, which has two hidden neurons, takes 52 seconds to reduce the error. The computational load is lower since only the weights between the hidden and output layers are updated.

Training Strategy of Backpropagation Neural Network

For the network to learn patterns, the weight updating algorithm, the Unsupervised Stochastic Gradient Descent Algorithm (USGDA), has been used. The present work involves modification of weights to extract the

independent signals from mixed signals. The function of the network is based on an unsupervised learning strategy. The inputs of a pattern are presented, the output of the network in the output layer is computed, and the weights in the output layer and hidden layer are updated by the weight update equation and compared with their previous values. The total error, i.e., the difference between the previous weight value and the current weight value, is determined. The total error for all the patterns presented is calculated and, if this total error is greater than zero, the learning rate parameter is varied by Equation (4.1) and the weights are updated by the weight update Equations (3.7) and (3.8). At each iteration, this process decreases the total error of the network for all the patterns presented. To minimize the total error to zero, the network is presented with all the training patterns many times. This procedure is repeated until the error becomes sufficiently small. As given in Figure 3.2, R is the weight matrix between the input and hidden layers and Z is the weight matrix between the hidden and output layers. The weight value z_kj on the interconnection from neuron j to neuron k is updated by the weight update equation

    z_kj(t+1) = z_kj(t) + lrp (d_1(t) - d_2(t)) O_p^T + ε        (3.7)

where
    t      - time step,
    lrp    - learning rate parameter, = 0.99,
    ε      - constant parameter, = 0.05,
    d_1(t) = inv(det(Z(t))),
    d_2(t) = [3m_k + 4m_k - 2.92m_k + 5m_k - 3.417m_k - 0.78m_k - 0.056m_k],

    O_p^T  - transpose of the mixture signal p.

Similarly, all the remaining weights between the hidden and output layers are updated by Equation (3.7). The weight value r_ij on the interconnection from neuron i to neuron j is updated by the weight update equation

    r_ij(t+1) = r_ij(t) + lrp (d_1(t) - d_2(t)) O_p^T z_kj(t) [1 / (1 + exp(-net_pj))] + ε        (3.8)

Similarly, all the remaining weights between the input and hidden layers are updated by Equation (3.8). To implement this algorithm, the speeches of two persons are recorded in a closed environment for two seconds at the rate of 4 kHz. In this way, 25 speeches of males and 25 speeches of females are recorded and stored as .wav files. Fifty combinations (one male and one female; two males; two females) of different speech waveforms (S_1 and S_2) are mixed artificially by multiplying the speech signals with various coefficients, as given by

    O_1 = 0.3 S_1 + S_2                                          (3.9)

500 samples of each mixture signal are preprocessed by normalization so that the inputs to the nodes of the input layer lie between zero and one. 40 mixture signals are used for training both the BPN and RBF neural networks by the unsupervised stochastic gradient descent algorithm, and 10 mixture signals are used for testing the networks. From Figure 3.7, it is found that after a certain number of iterations (2223 iterations, at which the RBF network has converged), the BPN network is able to converge only to some extent. The network is found to converge

slowly due to local minima and slow training. When the number of nodes in the hidden layer is 15, the BPN network takes 13 hours and 5 minutes to reduce the error. So, it has been observed that the signals recovered by the BPN are distorted for a limited number of iterations (the number at which the ASN-RBF neural network has converged), and the contents of the speeches are retained with distortions in the quality of speech.

Figure 3.7 Graph of error versus iterations for sample size = 500 (50 mixture signals), MSE = 0.01 and η = 0.99

When the number of nodes in the hidden layer is 2, the RBF neural network takes 4 hours and 33 minutes to reduce the error to 0.01 in 2223 iterations. So, the training time of the RBF network is 8 hours and 32 minutes less than that of the BPN network. Thus, the performance of the RBF network is found to be much superior to that of the BPN in terms of recovering the original signals with less training time.
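The normalization mentioned above can be sketched as follows; the min-max scaling shown is an assumed illustration of mapping the mixture samples into [0, 1], not the exact preprocessing code used in the experiments.

    % Assumed sketch of the normalization preprocessing: scale a 500-sample
    % mixture signal so that the inputs to the input-layer nodes lie in [0, 1].
    O = randn(500, 1);                            % stand-in for one mixture signal
    O_norm = (O - min(O)) ./ (max(O) - min(O));   % normalized samples between zero and one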

Adaptive Self-Normalized Radial Basis Function (ASN-RBF) Neural Network

In recent years, there has been increasing interest in using the Radial Basis Function neural network for many problems. Like the Backpropagation and Counterpropagation neural networks, it is a feedforward neural network that is capable of modeling a nonlinear relationship between the input and output vector spaces. RBF and BPN are both universal approximators, i.e., when they are designed with enough hidden layer neurons, they approximate any continuous function with arbitrary accuracy (Girosi and Poggio 1989; Hartman et al 1990). This is a property they share with other feedforward networks having one hidden layer of nonlinear neurons. Hornik et al (1989) have shown that the nonlinearity need not be sigmoid and can be any of a wide range of functions. It is therefore not surprising to find that there always exists an RBF network capable of accurately mimicking a specified BPN, or vice-versa. The RBF network is found to be suitable for the BSS problem since it has the following characteristics:
1. It has a faster learning capability and is good at handling nonlinear data.
2. It finds the input to output mapping using local approximators, which require fewer training samples.
3. It provides smaller interpolation errors, higher reliability and a more well-developed theoretical analysis than the BPN.

As shown in Figure 3.8, the ASN-RBF neural network consists of three layers: an input layer with 500 neurons, a single layer of nonlinear processing neurons, and an output layer with 2, 3 or 4 neurons depending on the number of sources. In the Backpropagation neural network, the weights between the hidden and output layers and also the weights between the input and hidden layers are updated during training. But in the RBF neural network, only the weights between the hidden and output layers are updated, and the RBF network does not end up in local minima. The outputs of the hidden layer neurons are calculated by

    p_i = f(o) = Σ (k = 1 to m) u_k φ_k(o, c_k) = Σ (k = 1 to m) u_k φ_k(||o - c_k||),   for i = 1, 2, ..., N        (3.10)

and the outputs of the output layer neurons are calculated by

    m_j = β Σ (i = 1 to N) w_ij p_i,   where β = 1/α             (3.11)

where p_i is the output of the i-th hidden neuron, α is the convergence parameter used in the network, o is an input vector, and φ_k(.) is a radial basis function given by exp(-D / (2λ^2)), where D = (O - W_j)^T (O - W_j) and λ is the spread factor which controls the width of the radial basis function. U_k is the weight matrix between the input and hidden layers, W is the weight matrix between the hidden and output layers, N is the number of neurons in the hidden layer, and the c_k are the RBF centers in the input vector space. For each neuron in the hidden layer, the Euclidean

distance between its associated center and the input to the network is computed. The convergence parameter α is used in the network for faster convergence of the proposed learning algorithm. During training, if it is very low, the total error becomes NaN (Not a Number) and the network does not converge. So, the convergence parameter is gradually increased from a lower value such that the network does not encounter NaN, and the network converges for a particular value. Therefore, the total error is reduced to the tolerance value after a finite number of iterations.

Figure 3.8 Topology of ASN-RBF neural network (m = 500; N = 20-2; n = 2, 3, 4)

The convergence parameters used for the different experimental set-ups are given in Table 3.1. The ASN-RBF neural network architecture is capable of modeling a nonlinear relationship between the input and output vector spaces. The scaling parameter β is used for post-processing to obtain the correct output data.
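The procedure of gradually raising the convergence parameter can be sketched as follows; this is only an assumed illustration of the search described above, and the function train_asn_rbf is hypothetical, standing in for one training run of the network that returns its total error.

    % Assumed sketch: increase the convergence parameter alpha from a low value
    % until training no longer produces a NaN total error.
    alpha = 0.1;                            % illustrative starting value
    total_error = train_asn_rbf(alpha);     % hypothetical training run returning the total error
    while isnan(total_error)
        alpha = alpha + 0.1;                % gradually increase the convergence parameter
        total_error = train_asn_rbf(alpha);
    end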

The centers c_k are assumed to perform an adequate sampling of the input vector space. They are usually chosen as a subset of the input data. The weight vector W_k determines the value of O which produces the maximum output from the neuron. The response at other values of O drops quickly as O deviates from W_k, becoming negligible in value when O is far from W_k.

Table 3.1 Convergence parameters used for experimental set-ups

    Source signals                                     No. of samples   Convergence parameter (α)   Max. iterations
    crow2.wav, song1.wav
    crow2.wav, ssong1.wav, wsong2.wav
    male1.wav, female1.wav, male2.wav
    male1.wav, female1.wav, male2.wav, female2.wav

Training Strategy of ASN-RBF Neural Network

There are two sets of parameters governing the mapping properties of the RBF neural network: the weights W_k in the output layer and the centers c_k of the radial basis functions. The ASN-RBF neural network is trained with fixed centers, which are chosen in a random manner as a subset of the input data set. After the network has been trained, some of the centers are removed

in a systematic manner without significant degradation of the system performance. The location of the centers of the receptive fields is a critical issue and there are many alternatives for their determination. In the learning algorithm, a center and corresponding hidden layer neuron are located at each input vector in the training set. The diameter of the receptive region, determined by the value of the spread factor λ (set at 0.01), has a profound effect upon the accuracy of the system. The objective is to cover the input space with receptive fields as uniformly as possible. If the spacing between centers is not uniform, it is necessary for each hidden neuron to have its own value of λ. 100 inputs (birds' voices) are used for this proposed network. Out of these, 80 inputs are used for training the ASN-RBF neural network and 20 inputs are used for testing the network. Each input corresponds to a different combination of birds' voices downloaded from the website. Different combinations of the birds' voices are artificially mixed and preprocessed by normalization so that the inputs to the nodes of the input layer lie between zero and one. The preprocessed input is fed to the network in the form of 500 samples, corresponding to the 500 neurons in the input layer. The number of source signals is varied from two to four. The ASN-RBF neural network is also trained and tested for separation of speech signals which are recorded for about 2 seconds in a closed environment at the rate of 8 kHz. The proposed ASN-RBF neural network and learning algorithm perform well under non-stationary environments and when the number of source signals is unknown and dynamically changing.
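The fixed-center selection described above can be sketched in MATLAB as follows; the array sizes and the random-subset strategy shown are illustrative assumptions for a training set of 80 inputs of 500 samples each.

    % Assumed sketch of fixed-centre selection for the ASN-RBF network: pick the
    % centres at random as a subset of the training input vectors and use one
    % spread factor lambda for every hidden neuron.
    X_train = rand(80, 500);                  % 80 training inputs, 500 samples each
    N = 20;                                   % number of hidden neurons / centres
    lambda = 0.01;                            % spread factor of the radial basis function
    idx = randperm(size(X_train, 1), N);      % random subset of the input data set
    centers = X_train(idx, :);                % one centre per hidden neuron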

UNSUPERVISED STOCHASTIC GRADIENT DESCENT LEARNING ALGORITHM

To separate independent components from the observed signal, an objective function is required. The objective function is chosen such that it yields the original signals when it is minimized. During training of the ASN-RBF neural network, one set of the free parameters, i.e., the interconnection weights between the hidden layer and the output layer, is adjusted to minimize the objective function. In signal processing (Taleb and Jutten 1998), when the components of the output vector become independent, its joint probability density function factorizes into marginal pdfs, given by

    f_M(M, W) = Π (i = 1 to k) f_Mi(m_i, W)                      (3.12)

where f_Mi(m_i, W) is the marginal pdf of M, m_i is the i-th component of the output signal M and k is the number of source signals. Equation (3.12) is a constraint imposed on the learning algorithm. The joint pdf of M parameterized by W is written as

    f_M(M, W) = f_o(O) / |B|                                     (3.13)

where |B| is the determinant of the Jacobian matrix B. It is defined as

        | ∂m_1/∂o_1   ∂m_1/∂o_2   ...   ∂m_1/∂o_k |
        | ∂m_2/∂o_1   ∂m_2/∂o_2   ...   ∂m_2/∂o_k |
    B = |    ...          ...      ...      ...   |              (3.14)
        | ∂m_k/∂o_1   ∂m_k/∂o_2   ...   ∂m_k/∂o_k |
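Anticipating Equation (3.15) below, where each element of B is simply a weight w_ij, the following assumed MATLAB fragment illustrates Equations (3.13) and (3.14) for a two-source case; the numerical weight values are arbitrary examples.

    % Assumed two-source illustration of Equations (3.13) and (3.14): for the
    % linear map m = W*o the Jacobian matrix B equals W, so the joint pdf of M
    % is the pdf of O divided by the magnitude of det(W).
    W = [0.9 0.4; 0.3 0.8];       % example 2 x 2 separating weight matrix
    B = W;                        % Jacobian of m with respect to o, Equation (3.14)
    detB = abs(det(B));           % the |B| that scales f_o(O) in Equation (3.13)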

Referring to Equation (1.4), each element in Equation (3.14) is represented in terms of w_ij as

    ∂m_i / ∂o_j = w_ij                                           (3.15)

Therefore, Equation (3.14) is written as

        | w_11   w_12   ...   w_1k |
        | w_21   w_22   ...   w_2k |
    W = |  ...    ...    ...   ... |                             (3.16)
        | w_k1   w_k2   ...   w_kk |

Now, Equation (3.13) is written as

    f_M(M, W) = f_o(O) / |W|                                     (3.17)

To extract independent components from the observed signal, the difference between the joint pdf and the product of the marginal pdfs is determined. When the components become independent, the difference becomes zero (i.e., the joint pdf becomes equal to the product of the marginal pdfs of the separated signals). It is expressed as

    f_M(M, W) - Π (i = 1 to k) f_Mi(m_i, W) = 0                  (3.18)

Since the logarithm provides computational simplicity, it is taken on both sides of Equation (3.18) and it becomes

    log(f_M(M, W)) = Σ (i = 1 to k) log(f_Mi(m_i, W))            (3.19)

Substituting the value of f_M(M, W) from Equation (3.17) in Equation (3.19),

    log( f_o(O) / |W| ) = Σ (i = 1 to k) log f_Mi(m_i, W)        (3.20)

Because the pdf of the input vector is independent of the parameter vector W, the objective function for optimization becomes

    Φ(W) = -log|W| - Σ (i = 1 to k) log f_Mi(m_i, W)             (3.21)

Now, the Edgeworth series is used to expand the second term in Equation (3.21). The Edgeworth series expansion of the random variable M about the Gaussian approximation α(m) is given by

    f_M(m) / α(m) = 1 + (k_3 / 3!) H_3(m) + (k_4 / 4!) H_4(m) + (10 k_3^2 / 6!) H_6(m) + ...        (3.22)

where α(m) denotes the probability density function of a random variable M normalized to zero mean and unit variance, k_i denotes the cumulant of order i of the standardized scalar random variable M, and H_i denotes the Hermite polynomial of order i. The third and fourth order cumulants and Hermite polynomials are given by

    k_3 = c_3                                                    (3.23)
    H_3(m) = m^3 - 3m                                            (3.24)
    H_4(m) = m^4 - 6m^2 + 3                                      (3.25)
    k_4 = c_4 - 3c_2^2                                           (3.26)
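As a small, assumed illustration of Equations (3.23) to (3.26), the following MATLAB fragment estimates the cumulants from sample moments of a standardized signal and evaluates the two Hermite polynomials; the variable names and the random stand-in signal are not from the thesis code.

    % Sketch: third and fourth order cumulants and Hermite polynomials
    % (Equations 3.23 - 3.26) estimated from a standardized signal m.
    m = randn(1, 4000);               % stand-in for one output signal
    m = (m - mean(m)) / std(m);       % standardize: zero mean, unit variance

    c2 = mean(m.^2);                  % second order moment
    c3 = mean(m.^3);                  % third order moment
    c4 = mean(m.^4);                  % fourth order moment

    k3 = c3;                          % Equation (3.23)
    k4 = c4 - 3*c2^2;                 % Equation (3.26)

    H3 = m.^3 - 3*m;                  % Equation (3.24)
    H4 = m.^4 - 6*m.^2 + 3;           % Equation (3.25)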

The cumulants are expressed in terms of moments. The r-th order moment of m_i is given by

    c_i,r = E[ m_i^r ]                                           (3.27)
          = E[ ( Σ (j = 1 to n) w_ij o_j )^r ]                   (3.28)

Substituting the values of the cumulants and Hermite polynomials from Equations (3.23), (3.24), (3.25) and (3.26) in Equation (3.22) and taking the logarithm on both sides, it becomes

    log( f_M(m_i) / α(m_i) ) = 0.75m_i - 0.365m_i + 0.5m_i - 0.285m_i - ...        (3.29)

Differentiating Equation (3.29) with respect to w_ik,

    ∂/∂w_ik [ log( f_M(m_i) / α(m_i) ) ] = [3m_i + 4m_i - 2.92m_i + 5m_i - 3.417m_i - 0.78m_i - 0.056m_i] O_k        (3.30)

Therefore, the optimization function becomes

    Ψ(m_i(t)) = 3m_i + 4m_i - 2.92m_i + 5m_i - 3.417m_i - 0.78m_i - 0.056m_i        (3.31)

After simplification, the gradient of Equation (3.21) now becomes

    ∂Φ(W)/∂W = -[ W^(-T) - Ψ(m) O^T ]                            (3.32)

The stochastic gradient descent algorithm for weight update is written as

    W(t+1) = W(t) - η ∂Φ(W)/∂W                                   (3.33)

Substituting the gradient of the cost function from Equation (3.32) in Equation (3.33), the weight update rule is written as

    W(t+1) = W(t) + η(t) [ W^(-T) - Ψ(m(t)) O^T(t) ]             (3.34)

The Edgeworth series is used for the approximation of probability density functions since its coefficients decrease uniformly and the error is controlled, so that it is a true asymptotic expansion. On the other hand, the terms in the Gram-Charlier expansion do not tend uniformly to zero from the viewpoint of numerical errors; i.e., in general, no term is negligible compared to a preceding term.

Algorithm Description

Once the centers and spread factors have been chosen, the output layer weight matrix W is optimized by unsupervised learning using the stochastic gradient descent technique. The training process consists of the following sequence of steps, as given in Figure 3.9.

Step 1: Initialize the parameters.
        a) Assign weights between the input and hidden layers.
        b) Assign weights between the hidden and output layers.
        c) Set η = 0.99, λ = 0.09, ε = 0.05 and set M(t) = O(t).
Step 2: Read the input signals.
Step 3: Generate the mixing matrix A.
Step 4: Obtain the observed mixture signal O.
Step 5: Preprocess (i.e., normalize) the mixture signals.

Step 6: Recover the source signals.
        (i) Apply the observed signal to the input layer neurons.
        % Forward operation %
        For each pattern in the training set:
            a) Find the hidden layer output.
            b) Find the inputs to the nodes in the output layer.
            c) Compute the actual output of the output layer neurons.
            d) Determine delta W.
            e) Update the weights between the hidden and output layers.
            f) If the difference between the previous weight value and the current weight value is not equal to zero, then go to Step 5, else stop training.
Step 7: Postprocess (i.e., denormalize) the output data.
Step 8: Store and display the separated signals.

Thus, the development of this algorithm involves maximization of the statistical independence between the output vectors M. It is equivalent to minimizing the divergence between the two distributions:
(i) the joint probability density function f_M(m, W) parameterized by W, and

(ii) the product of the marginal density functions of M, Π (i = 1 to n) f_Mi(m_i, W).

Figure 3.9 Block diagram of source signal recovery

Implementation of USGDA

Step 1: Get Source 1.
    [s1,srate,no_bits]=wavread('nukeanthem');
    % Returns the sample rate in Hertz and the number of bits per sample (NBITS)
    % used to encode the data in the file.
    s1=wavread('nukeanthem.wav',num_samples);

    sources(1,:)=s1;

Step 2: Get Source 2.
    s2=wavread('dspafxf.wav',num_samples);
    % Returns only the first N samples from each channel in the file.
    sources(2,:)=s2;

Step 3: Get Source 3.
    s3=wavread('utopia.wav',num_samples);
    sources(3,:)=s3;

Step 4: Initialize the parameters.
    a) Assign weights between the input and hidden layers.
    b) Assign weights between the hidden and output layers.
    c) Set η = 0.99, λ = 0.09, ε = 0.05 and set M(t) = O(t).

Step 5: Generate the mixing matrix A.
    A=rand(num_mixtures,num_sources);

Step 6: Obtain the observed mixture signal O.
    O = A * sources;

Step 7: Preprocess the mixture signal.

Step 8: Recover the source signals.
    % Forward operation %
    % For each pattern in the training set
    (i) Find h (the hidden layer output).
    for j=1:p
        dis=0;
        for k=1:l
            x=O(j,k);
            c=center(i,k);
            diff = x-c;
            dis = dis + diff.^2;

        end
        ph(j)=exp((-dis)/(2*sg^2));
        output_of_hiddenn(j)=ph(j);
    end

    (ii) Find the inputs to the nodes in the output layer.
    input_to_outputn=output_of_hiddenn*hou;

    (iii) Compute the actual output (for the output layer neurons).
    for b = 1:ol
        output_of_outputn(b)=(input_to_outputn(b)/α);
    end

    (iv) Find the difference between the previous weight value and the current weight value.

    (v) If the difference is equal to zero, stop training; else find the weight update value, i.e., delta.
    d1 = inv(det(hou_old));
    for k=1:output_neurons
        d2(k) = 3m_k + 4m_k - 2.92m_k + 5m_k - 3.417m_k - 0.78m_k - 0.056m_k;
    end
    deloutput = (d1 - d2)*O';

    (vi) Update the weights between the hidden and output layers by the equation
    W(t+1) = W(t) + lrp*deloutput(t) + ε;

Step 9: Evaluate M(t) and postprocess it to obtain the original signals.

Step 10: Repeat Steps 8 and 9 until the difference between the previous weight value and the current weight value is not equal to zero.
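Step 9's postprocessing can be illustrated with the inverse of the min-max normalization sketched earlier; this is an assumed illustration only, using stand-in signals rather than the thesis data.

    % Assumed sketch of Step 9: undo the min-max normalization applied during
    % preprocessing so that a recovered signal is restored to its original scale.
    O_raw = randn(500, 1);                          % stand-in for one raw mixture signal
    o_min = min(O_raw); o_max = max(O_raw);         % extremes stored during preprocessing
    M_norm = rand(500, 1);                          % stand-in for a recovered (normalized) signal
    M_denorm = M_norm .* (o_max - o_min) + o_min;   % denormalized output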

The radial basis function ph = exp(-d^2 / (2σ^2)) was evaluated over the interval -1 < x < 1 and -1 < y < 1, as shown in the graph in Figure 3.10.

Figure 3.10 Evaluation of the radial basis function over the interval -1 < x < 1 and -1 < y < 1

Thus, the implemented algorithm, which enforces the independence condition by varying the weights of the RBF neural network, successfully separates the signals from the mixed input signal. The ASN-RBF neural network and the proposed learning algorithm perform well under non-stationary environments and when the number of source signals is unknown and dynamically changing.
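The evaluation behind Figure 3.10 can be reproduced with a few lines of MATLAB; the grid resolution and the value of σ below are illustrative assumptions.

    % Assumed sketch: evaluate the Gaussian radial basis function
    % ph = exp(-d^2/(2*sigma^2)) over -1 < x < 1 and -1 < y < 1 (cf. Figure 3.10).
    sigma = 0.5;                            % illustrative spread value
    [x, y] = meshgrid(-1:0.05:1, -1:0.05:1);
    d2 = x.^2 + y.^2;                       % squared distance from the centre (0, 0)
    ph = exp(-d2 ./ (2*sigma^2));           % radial basis function values
    surf(x, y, ph);                         % surface plot of the basis function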
