Neural Networks with Wavelet Based Denoising Layers for Time Series Prediction


UROS LOTRIC¹ AND ANDREJ DOBNIKAR
University of Ljubljana, Faculty of Computer and Information Science, Slovenia, {uros.lotric, andrej.dobnikar}@fri.uni-lj.si

Abstract

To avoid preprocessing of noisy data, two special denoising layers based on wavelet multiresolution analysis are integrated into layered neural networks. A gradient based learning algorithm is developed which uses the same cost function for setting both the neural network weights and the free parameters of the denoising layers. The proposed layers, integrated into feedforward and recurrent neural networks, are validated on time series prediction problems: the Feigenbaum sequence, the rubber hardness time series and the yearly average sunspot number. It is shown that the introduced denoising layers improve the prediction accuracy in both cases.

Keywords: feedforward and recurrent neural networks, wavelet multiresolution analysis, denoising, gradient based threshold adaptation, time series prediction

¹ Corresponding author: Uros Lotric, University of Ljubljana, Faculty of Computer and Information Science, Trzaska 25, 1000 Ljubljana, Slovenia, uros.lotric@fri.uni-lj.si, phone: , fax:

1. Introduction

Models of dynamical systems in which the structure and parameters are extracted directly from real data cannot avoid the problem of noise. Therefore, noise reduction has become an important issue in modeling with neural networks [1]. Usually, noise reduction is treated as a separate problem, and preprocessing methods such as filtering [1, 2] based on statistical criteria are used. However, attempts have been made to achieve noise reduction by integrating various performance measures into the cost function without influencing the input data [3, 4]. Another approach proposes to include methods for input data reduction in the neural network learning process [5].

In this paper two special denoising layers are introduced that enable neural networks to deal with noise without separate data preprocessing. These layers apply novel filtering methods originating in wavelet multiresolution analysis [6]. Such methods have already been reported in neural network prediction problems [7], however, only as a preprocessing technique and not integrated into the model itself. At the same time, models which integrate wavelet features into neural networks can be found in the literature [8, 9].

The design of the proposed denoising layers enables their integration into the majority of layered neural networks. By this integration, the same cost function is used as the model performance criterion and as the noise level estimation criterion. The parameters of the denoising layers are set in the process of learning, and any gradient based learning algorithm can be applied. In contrast to methods using cost functions with additional terms [3, 4], the proposed denoising layers can be added to any neural network without rewriting its learning process equations.

2. Wavelet Based Denoising

In wavelet multiresolution analysis a data sample is treated on different scales. It is decomposed into coefficients of approximation on a coarser scale and coefficients of the remaining details on finer scales [10]. One-dimensional wavelet multiresolution analysis is based on the scaling function φ(t) and the

corresponding mother wavelet ψ(t), fulfilling certain technical conditions. By dilation and translation of both functions on the abscissa the basis functions

    φ_{j,k}(t) = 2^{j/2} φ(2^j t − k)   and   ψ_{j,k}(t) = 2^{j/2} ψ(2^j t − k),   j, k ∈ Z,

are derived. An arbitrary continuous data sample can be written as a superposition of these basis functions,

    p(t) = Σ_{k∈Z} a_{J,k} φ_{J,k}(t) + Σ_{j=1}^{J} Σ_{k∈Z} d_{j,k} ψ_{j,k}(t),    (1)

where the sum with coefficients a_{J,k}, k ∈ Z, represents the approximation of the data sample on the scale J and the sums with coefficients d_{j,k}, k ∈ Z, represent the details of the data sample on scales j.

In further discussion, only discrete data samples p = (p_1, p_2, …, p_N)^T are considered. The discrete wavelet transform and its inverse are linear operations, and as such, they can be written in matrix form as

    c = W^D p   and   p = W^R c,    (2)

respectively, where the C-dimensional vector c = (d_{1,1}, …, d_{1,C_1}, …, d_{J,1}, …, d_{J,C_J}, a_{J,1}, …, a_{J,C_J})^T represents the discrete wavelet transform of the data sample. The decomposition matrix W^D and the reconstruction matrix W^R can be constructed by applying the pyramidal convolution schemes [11].

As the wavelet coefficients of details having small absolute values are likely to contribute to noise, Donoho and Johnstone [6] proposed denoising by thresholding. Denoising is a three-step process, composed of the discrete wavelet transformation of a data sample to the space of wavelet coefficients, scale dependent thresholding of the wavelet coefficients of details, and construction of a denoised data sample from the thresholded wavelet coefficients in terms of the inverse discrete wavelet transform. Following the work of Donoho and Johnstone [6], the generalized soft thresholding function

    T(d_{j,k}, τ_j) = d_{j,k} + (1/2) [ √((d_{j,k} − τ_j)² + s) − √((d_{j,k} + τ_j)² + s) ]    (3)

was introduced [12], with the parameter s ≥ 0 determining the level of smoothness (Figure 1).

Figure 1

The function removes the wavelet coefficients of details d_{j,k} smaller than the threshold τ_j and reduces the absolute values of the larger wavelet coefficients of details. With the threshold τ_j any level of denoising can be chosen, from no denoising by setting τ_j = 0 up to complete removal of information by setting τ_j = d_{j,max}, with d_{j,max} being the value of the absolutely largest wavelet coefficient of details on the j-th scale [12].

3. Denoising Layers

In neural network terminology the wavelet denoising process can be described by two layers, the thresholding layer and the reconstruction layer, presented in Figure 2.

Figure 2

The thresholding layer has C computational units or thresholding neurons, split into J + 1 groups differing in thresholds τ_j. The number of scales J equals the largest integer smaller than or equal to log₂ N. While the thresholds of the first J groups, processing the wavelet coefficients of details, are allowed to change, the threshold of the last group, representing the approximation on scale J, is fixed to zero. The thresholding neuron is essentially the standard neuron [13] with the nonlinear sigmoid activation function replaced by the generalized soft thresholding function. When the input data sample p(q) at some discrete time q

is presented to the denoising layers, the output of the k-th thresholding neuron belonging to the group j is given by the equations

    c̃_k(q) = T(c_k(q), τ_j),   c_k(q) = Σ_{l=1}^{N} W^D_{k,l} p_l(q),    (4)

where the weighted sum c_k(q) represents one of the wavelet coefficients (a_{J,k} or d_{j,k}, j, k ∈ Z) and the thresholding neuron output c̃_k(q) represents the matching thresholded wavelet coefficient. The thresholded wavelet coefficients are further passed to the reconstruction layer having N neurons with linear activation functions, which generate the denoised sample p̃(q) with elements

    p̃_k(q) = Σ_{l=1}^{C} W^R_{k,l} c̃_l(q),   k = 1, …, N.    (5)

3.1. Gradient Based Learning Algorithm for Thresholds

The goal of learning is to adapt the model's free parameters in order to minimize a cost function. When gradient based algorithms are used, the gradients of the cost function or its additive parts with respect to the model's free parameters need to be expressed analytically [14]. Opposed to standard neurons with adaptable weights, the weights of the thresholding neurons are fixed, as given by the decomposition matrix W^D. Similarly, the weights of the linear neurons in the reconstruction layer are given by the reconstruction matrix W^R. Hence, the only adaptable parameters of the denoising layers are the thresholds, which determine the shape of the thresholding neurons' activation functions.

Although the thresholding function with parameter s = 0 exhibits nice statistical properties [6], it can lead to a situation where the gradient based adaptation of thresholds is unintentionally stopped [12]. Therefore, the parameter s = 0.01 d²_{j,max} > 0, found empirically, is used in the thresholding neuron activation functions. Moreover, to comply with the gradient based algorithms, unconstrained thresholds τ̃_j, j = 1, …, J, which are mapped to the bounded thresholds τ_j by the sigmoid function τ_j = d_{j,max} / (1 + e^{−τ̃_j}), are introduced [12].
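The whole denoising pass of Equations (3)-(5), including the sigmoid mapping of the unconstrained thresholds, can be sketched in code. This is a minimal illustration with some stated assumptions: it uses the orthonormal Haar wavelet on N = 4 (the paper uses symlets), so the reconstruction matrix is simply the transpose of the decomposition matrix, and a single global coefficient maximum stands in for the per-scale d_{j,max}:

```python
import numpy as np

# Haar decomposition matrix for N = 4; coefficient ordering
# c = (d_{1,1}, d_{1,2}, d_{2,1}, a_{2,1}), as in Equation (2).
r = 1 / np.sqrt(2)
W_D = np.array([[r, -r, 0.0, 0.0],
                [0.0, 0.0, r, -r],
                [0.5, 0.5, -0.5, -0.5],
                [0.5, 0.5, 0.5, 0.5]])
W_R = W_D.T                      # orthonormal wavelet: W^R = (W^D)^T
group = np.array([1, 1, 2, 3])   # scale group of each coefficient (3 = approximation)

def T(d, tau, s):                # generalized soft thresholding, Equation (3)
    return d + 0.5 * (np.sqrt((d - tau) ** 2 + s) - np.sqrt((d + tau) ** 2 + s))

def denoise(p, tau_tilde):
    """Equations (4)-(5): threshold the detail coefficients, then reconstruct."""
    c = W_D @ p                          # thresholding layer weighted sums
    d_max = np.abs(c).max()              # simplification: one global d_max
    s = 0.01 * d_max ** 2
    taus = d_max / (1.0 + np.exp(-tau_tilde))  # sigmoid threshold mapping
    taus = np.append(taus, 0.0)          # approximation group: threshold fixed at 0
    c_t = T(c, taus[group - 1], s)
    return W_R @ c_t                     # reconstruction layer, Equation (5)

p = np.array([4.0, 2.0, 5.0, 7.0])
print(denoise(p, tau_tilde=np.array([-1.0, -3.0])))
```

Driving the unconstrained thresholds toward minus infinity sends the bounded thresholds to zero, so the layer pair reduces to the identity, which is the "no denoising" limit described above.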

Let us suppose that the cost function depends on some function ε, which in turn depends on the denoised data sample, ε = ε(p̃(q), …). To calculate the gradients of the function ε with respect to the thresholds, an approach similar to the one of Trentin [15] has been used. Having in mind that each element of the denoised data sample is obtained from all wavelet coefficients, one can write

    ∂ε/∂τ̃_j = Σ_{i=1}^{N} (∂ε/∂p̃_i(q)) (∂p̃_i(q)/∂τ_j) (∂τ_j/∂τ̃_j).    (6)

The first factor in the sum is obtained from the model to which the denoised data sample is passed. Considering Equation (5) and the mapping of the thresholds, the second and the third factor can be expressed as

    ∂p̃_i(q)/∂τ_j = Σ_{l=1}^{C} W^R_{i,l} ∂T(c_l(q), τ_j)/∂τ_j   and   ∂τ_j/∂τ̃_j = τ_j (1 − τ_j / d_{j,max}),    (7)

with

    ∂T(c_l(q), τ_j)/∂τ_j = −(1/2) [ (c_l(q) − τ_j)/√((c_l(q) − τ_j)² + s) + (c_l(q) + τ_j)/√((c_l(q) + τ_j)² + s) ].    (8)

4. Neural Network Model with Denoising Layers

The denoising layers are usually placed between the neural network model inputs and its first nonlinear layer. To demonstrate their universality, the integration of the denoising layers into the feedforward two-layered perceptron and the two-layered perceptron with global recurrent connections is presented. Only the equations of the recurrent model are given, since they can easily be reduced to the feedforward version. The original models and their modifications are shown in Figure 3.

Figure 3
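The derivatives in Equations (7) and (8) are easy to sanity-check numerically. The sketch below compares the analytic expressions with central finite differences; the values of c, τ̃, d_max and s are arbitrary illustrative choices:

```python
import numpy as np

s = 1e-2          # smoothing parameter of the thresholding function

def T(d, tau):    # generalized soft thresholding, Equation (3)
    return d + 0.5 * (np.sqrt((d - tau) ** 2 + s) - np.sqrt((d + tau) ** 2 + s))

def dT_dtau(d, tau):   # Equation (8)
    return -0.5 * ((d - tau) / np.sqrt((d - tau) ** 2 + s)
                   + (d + tau) / np.sqrt((d + tau) ** 2 + s))

d, tau, eps = 1.7, 0.4, 1e-6
numeric = (T(d, tau + eps) - T(d, tau - eps)) / (2 * eps)
print(abs(dT_dtau(d, tau) - numeric))          # close to zero

# Third factor of Equation (6): the sigmoid threshold mapping and its
# derivative tau_j (1 - tau_j / d_max) from Equation (7).
d_max, tt = 2.0, 0.3
tau_of = lambda t: d_max / (1.0 + np.exp(-t))
analytic = tau_of(tt) * (1.0 - tau_of(tt) / d_max)
numeric2 = (tau_of(tt + eps) - tau_of(tt - eps)) / (2 * eps)
print(abs(analytic - numeric2))                # close to zero
```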

4.1. Two-Layered Perceptron with Recurrent Connections

The two-layered perceptron with global recurrent connections starts by merging the inputs x(q) = (x_1(q), …, x_N(q))^T with the last known model outputs y^o(q − 1). The obtained vector z(q) = (y^o(q − 1)^T, x(q)^T)^T is passed further to the N_h nonlinear neurons in the hidden layer with outputs

    y^h_k = φ^h(s^h_k(q)),   s^h_k(q) = Σ_{l=0}^{N_o + N} ω^h_{k,l} z_l(q),   k = 1, …, N_h,    (9)

where φ^h(s) = tanh(s) is the nonlinear activation function, ω^h_{k,l} are the adaptive weights and the bias is z_0(q) = 1. The outputs of the hidden layer are further fed to the N_o neurons in the output layer, giving the model outputs

    y^o_k = φ^o(s^o_k(q)),   s^o_k(q) = Σ_{l=0}^{N_h} ω^o_{k,l} y^h_l(q),   k = 1, …, N_o,    (10)

with the nonlinear activation function φ^o(s) = tanh(s), the adaptive weights ω^o_{k,l} and the bias y^h_0(q) = 1.

Neural network models are commonly trained on a set of known input-output pairs, {p(q), t(q)}, with t(q) being the vector of target values. The cost function usually depends on the errors of each input-output pair, e_k = t_k − y^o_k, k = 1, …, N_o, and therefore its gradients can be expressed with the gradients of the model outputs. According to the derivation of the Real Time Recurrent Learning [16], two dynamical systems are obtained for the two-layered perceptron with global recurrent connections, determining the gradients of the outputs,

    ρ^k_{i,j}(q) = φ^{o′}(s^o_k(q)) Σ_{l=1}^{N_h} ω^o_{k,l} φ^{h′}(s^h_l(q)) { δ_{il} z_j(q) + Σ_{m=1}^{N_o} ω^h_{l,m} ρ^m_{i,j}(q − 1) },    (11)

    π^k_{i,j}(q) = φ^{o′}(s^o_k(q)) { δ_{ik} y^h_j(q) + Σ_{l=1}^{N_h} ω^o_{k,l} φ^{h′}(s^h_l(q)) Σ_{m=1}^{N_o} ω^h_{l,m} π^m_{i,j}(q − 1) },    (12)

with ρ^k_{i,j}(q) = ∂y^o_k(q)/∂ω^h_{i,j} and π^k_{i,j}(q) = ∂y^o_k(q)/∂ω^o_{i,j}. In Equations (11) and (12), φ^{h′}(s) and φ^{o′}(s) denote the derivatives of the corresponding activation functions and δ_{ij} is the Kronecker symbol. At the beginning of the learning process ρ^k_{i,j}(0) = 0 and π^k_{i,j}(0) = 0 is set.

By setting the number of outputs N_o to zero in Equations (9), (11) and (12), the equations governing the two-layered perceptron without recurrent connections are obtained.

4.2. Integration of Denoising Layers

The two-layered perceptron with global recurrent connections and integrated denoising layers is shown in Figure 3b. The input sample p(q) is first passed to the thresholding layer and then further to the reconstruction layer in order to obtain the denoised sample p̃(q). The process is governed by Equations (4) and (5), respectively. The inputs of the two-layered perceptron with global recurrent connections are then substituted with the denoised data sample, x(q) = p̃(q). Applying Equations (9) and (10), the model outputs are finally computed.

As pointed out in Section 4.1, the gradients of a cost function can be obtained from the gradients of the model outputs. The gradients of the model outputs with respect to the weights of the neurons in the hidden layer and to the weights of the neurons in the output layer are given by Equations (11) and (12). The gradients of the outputs with respect to the thresholds are obtained from Equation (6) by taking the model output y^o_k(q) as the function ε. In this case, the first factor in Equation (6) becomes

    ∂y^o_k(q)/∂p̃_i(q) = φ^{o′}(s^o_k(q)) Σ_{l=1}^{N_h} ω^o_{k,l} φ^{h′}(s^h_l(q)) ω^h_{l, N_o + i}.    (13)

Again, the equations of the feedforward two-layered perceptron with the denoising layers are obtained by setting N_o to zero.
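Equation (13) can likewise be checked against a finite difference for the feedforward case (N_o = 0, so the input vector is just the denoised sample). The weights below are random illustrative values, not trained ones:

```python
import numpy as np

# Numerical check of Equation (13), feedforward case: derivative of the model
# output with respect to one denoised input element.
rng = np.random.default_rng(1)
N, N_h = 4, 3
W_h = rng.normal(size=(N_h, 1 + N))      # hidden weights, bias in column 0
W_o = rng.normal(size=(1, 1 + N_h))      # output weights, bias in column 0

def model(p):
    y_h = np.tanh(W_h @ np.concatenate(([1.0], p)))        # Equation (9)
    return np.tanh(W_o @ np.concatenate(([1.0], y_h)))[0]  # Equation (10)

def grad_input(p, i):
    """Equation (13) with phi'(s) = 1 - tanh(s)^2."""
    s_h = W_h @ np.concatenate(([1.0], p))
    y_h = np.tanh(s_h)
    s_o = (W_o @ np.concatenate(([1.0], y_h)))[0]
    return (1.0 - np.tanh(s_o) ** 2) * np.sum(
        W_o[0, 1:] * (1.0 - y_h ** 2) * W_h[:, 1 + i])

p, i, eps = rng.normal(size=N), 2, 1e-6
e_i = np.eye(N)[i]
numeric = (model(p + eps * e_i) - model(p - eps * e_i)) / (2 * eps)
print(abs(grad_input(p, i) - numeric))   # close to zero
```

This is the quantity the denoising layers receive from the network above them; multiplying it by the factors in Equations (7) and (8) yields the threshold gradients.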

5. Results

The proposed denoising layers were integrated into the two-layered perceptron (TLP) and the two-layered perceptron with global recurrent connections (RTLP). The performance of the models enhanced with the denoising layers, the DTLP model and the DRTLP model, respectively, was tested on three one-step-ahead prediction problems: the logistic map, the hardness of a rubber compound and the yearly average sunspot number.

A time series model usually connects a value of a time series on the model output with N previous values presented on the model inputs. To establish such a connection, the model is trained on a set of known input-output pairs. From each time series, the first 85% of the input-output pairs were included in the training set, used to set the free parameters of the models, while the remaining 15% of the input-output pairs formed the testing set, used for model comparison. The optimal values of the topological parameters, which cannot be included in the gradient based algorithms, were found by inspecting the free parameter space. The model configurations were allowed to have from 4 to 20 inputs (N), with the number of free parameters not exceeding 35% of the number of all input-output pairs. The wavelets from the symlet family [10], S_M, 2M ≤ N + 1, were used in the construction of the denoising layers.

As the performance measures, the root mean squared error normalized to the standard deviation of a time series (NRMSE) and the mean absolute percentage error (MAPE) were considered. On each model configuration, the Levenberg-Marquardt gradient algorithm [14] was used to adapt the weights and thresholds. The learning was repeated 20 times, and only the configurations with the smallest NRMSE were used in further analysis.

5.1. Logistic Map

The logistic map is known as a simple model which yields chaos [17]. It is given by the recursive relation x_t = 4 x_{t−1} (1 − x_{t−1}). In the following experiments, 250 values were used, beginning with x_1 =
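The experimental setup described above can be sketched as follows: generate the logistic map series, form one-step-ahead input-output pairs with N previous values as inputs, split them 85%/15%, and score a prediction with the two measures. The initial value x_1, N = 4 and the exact definitions of NRMSE and MAPE are illustrative assumptions here, as the printed specifics are not all reproduced in the text:

```python
import numpy as np

# Logistic map series; x[0] is an illustrative assumption.
x = np.empty(250)
x[0] = 0.2
for t in range(1, 250):
    x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])

# One-step-ahead input-output pairs: N previous values predict the next one.
N = 4
inputs = np.array([x[t - N:t] for t in range(N, len(x))])
targets = x[N:]

# First 85% of the pairs for training, the remaining 15% for testing.
split = int(0.85 * len(targets))
test_in, test_t = inputs[split:], targets[split:]

# Performance measures as commonly defined (an assumption, not quoted from
# the paper): RMSE normalized to the standard deviation, and MAPE.
def nrmse(t, y):
    return np.sqrt(np.mean((t - y) ** 2)) / np.std(t)

def mape(t, y):
    return 100.0 * np.mean(np.abs((t - y) / t))

# Naive baseline: persistence prediction (next value = last input value).
pred = test_in[:, -1]
print(nrmse(test_t, pred), mape(test_t, pred))
```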

A detailed comparison of the applied models, each having 4 inputs, 10 neurons in the hidden layer and 1 neuron in the output layer, is given in Table 1.

Table 1

The number of neurons in the thresholding and reconstruction layers is established by the number of inputs N and the wavelet order M; in these models the numbers are 9 and 4, respectively. All models succeeded in determining the relationship between the consecutive values. The small values of the thresholds, i.e., τ_1 = 10⁻⁵ d_{1,max} and τ_2 = 10⁻⁴ d_{2,max} for the DRTLP model, indicate that the models with denoising layers are capable of distinguishing between chaotic data and noise, which is not the case when denoising methods based on statistical criteria are applied [12]. The slightly higher errors observed with the models with the denoising layers originate in the mapping of the unconstrained thresholds τ̃_j to the bounded thresholds τ_j, which does not allow the latter to become zero in a finite number of learning steps.

5.2. Hardness of Rubber Compound

The quality of rubber products, such as bicycle and motorcycle tubes, greatly depends on the quality of their components, particularly of the rubber compound. Thus, in the rubber industry the quality of the rubber compound after each mixing is permanently monitored. An important indicator of the quality of a rubber compound is its hardness. The variation of hardness in 199 successive mixings, together with some denoised data samples obtained by the denoising layers of the DTLP and the DRTLP models, is shown in Figure 4. The quite substantial denoising performed by both the DTLP model and the DRTLP model reveals the noisy nature of the time series.

Figure 4

The prediction errors given in Table 2 show that the denoising layers improve the prediction: the DTLP model has an advantage over the TLP model, and the DRTLP model over the RTLP model. Within the testing set, the NRMSE measure is decreased by more than 6% and the MAPE measure by at least 3%. The larger prediction errors on the training set may be explained by the higher variations of the training set data.

Table 2

The success of the DTLP and the DRTLP models can be attributed to the fact that the denoising layers have simplified the relations between the input and output data samples by removing some noise from the input data samples. A mutual comparison of the TLP and RTLP models shows that recurrent connections do not necessarily improve the prediction. This is most probably due to the learning algorithms, which are far more complex for recurrent neural network models.

5.3. Sunspot Number

The number of sunspots, dark areas of concentrated magnetic field on the Sun, is highly correlated with the Sun's activity. The yearly average of the sunspot number, collected between the years 1700 and 2001, was used in the analysis [18]. Figure 5 shows the variations of the yearly average of the sunspot number and some denoised data samples obtained by the DTLP and DRTLP models.

Figure 5

The denoised data samples differ substantially. The DTLP model found almost no noise in the time series, i.e., the details were removed mainly on the first scale, τ_1 = 0.87 d_{1,max}, while the details on the other scales were practically left intact. The DRTLP model, on the other hand, performed denoising on all three scales,

i.e., τ_1 = 0.11 d_{1,max}, τ_2 = 0.18 d_{2,max} and τ_3 = 0.35 d_{3,max}, which is reflected in the reduction of the amplitudes in the denoised data samples.

Table 3

The detailed comparison of the models given in Table 3 shows that the prediction improvement of the DTLP model compared to the TLP model is similar to the prediction improvement of the DRTLP model over the RTLP model, regardless of the completely different concepts of denoising. In both cases, the NRMSE and MAPE performance measures are reduced from 5% and even up to 25% on the training sets. Furthermore, the recurrent connections of the RTLP and DRTLP models reduced the prediction errors on the training set by at least 7% in comparison to the TLP and DTLP models, respectively. The very large MAPE errors on the training set, compared to the MAPE errors on the test set, are due to the nature of the time series, which has many small values in the training set. Namely, the MAPE performance measure extremely penalizes errors in the prediction of small values.

6. Conclusion

The denoising based on wavelet multiresolution analysis can be formulated as a generalization of neural network layers. Opposed to standard neurons, in which the free parameters determine the weighted sum, the free parameters of the thresholding neurons determine the shape of the thresholding functions. A gradient based algorithm for setting the free parameters of the thresholding neurons was developed. This algorithm enables the same cost function to be used for setting both the weights of the standard neurons and the thresholds of the thresholding neurons. In this way, the denoising layers can be completely integrated into many types of layered neural networks, and denoising is not treated as a separate process. Although the learning algorithm is designed for gradient based methods, it can easily be generalized to various non-gradient algorithms, including evolutionary ones.

The presented examples of application to one-step-ahead prediction problems revealed that the denoising layers improve prediction on noisy data sets. The utilization of the cost function which minimizes the prediction error ensures that denoising is performed in such a way that the optimal prediction is achieved. Unlike statistical methods, the neural networks with denoising layers can distinguish between noise and chaos, and practically no denoising is performed in the latter case.

Acknowledgements

The project has been funded in part by the Slovenian Ministry of Education, Science and Sport under Grant No. Z

References

1. Masters T (1995) Neural, Novel & Hybrid Algorithms for Time Series Prediction. John Wiley & Sons, Toronto.
2. Weigend A S, Gershenfeld N A (1994) Time Series Prediction: Forecasting the Future and Understanding the Past. Addison-Wesley, Reading.
3. Foresee F D, Hagan M T (1997) Gauss-Newton Approximation to Bayesian Regularization. In: Proceedings of the 1997 IEEE International Joint Conference on Neural Networks, Houston, pp.
4. Drucker H, Le Cun Y (1992) Improving Generalization Performance Using Double Backpropagation. IEEE Trans. Neural Netw. 3 (6):
5. Ster B (2003) Latched recurrent neural network. Electrotechnical Review 70 (1-2): 46-51.
6. Donoho D L (1995) De-Noising by Soft-Thresholding. IEEE Trans. Inf. Theory 41:
7. Prochazka A, Mudrova M, Storek M (1998) Wavelet Use for Noise Rejection and Signal Modelling. In: Prochazka A et al. (eds) Signal Analysis and Prediction. Birkhauser, Boston, pp.
8. Zhang Q, Benveniste A (1992) Wavelet Networks. IEEE Trans. Neural Netw. 3:
9. Quipeng L, Xiaoling Y, Quanke F (2003) Fault Diagnosis Using Wavelet Neural Networks. Neural Processing Letters 18:
10. Daubechies I (1992) Ten Lectures on Wavelets. SIAM, Philadelphia.
11. Lotric U, Dobnikar A (2003) Matrix Formulation of Multilayered Perceptron with Denoising Unit. Electrotechnical Review 70, in press.
12. Lotric U (2000) Using Wavelet Analysis and Neural Networks for Time Series Prediction, Ph.D. Thesis. University of Ljubljana, Faculty of Computer and Information Science, Ljubljana.
13. Haykin S (1999) Neural Networks: A Comprehensive Foundation, 2nd edition. Prentice-Hall, New Jersey.
14. Press W H, Teukolsky S A, Vetterling W T, Flannery B P (1992) Numerical Recipes in C, 2nd edition. Cambridge University Press, Cambridge.

15. Trentin E (2001) Networks with Trainable Amplitude of Activation Functions. Neural Netw. 14:
16. Williams R J, Zipser D (1989) A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Neural Computation 1:
17. Schuster H G (1984) Deterministic Chaos. An Introduction. Physik-Verlag, Weinheim.
18. Sunspot Index Data Center, Royal Observatory of Belgium (2001) Yearly Definitive Sunspot Number.

Tables:

Table 1. Model comparison in prediction of the logistic map.

  Model   Structure, wavelet [free parameters]   NRMSE train/test   MAPE train/test
  TLP     4-10-1 [61]
  DTLP    4-(9-4)-10-1, S_2 [64]
  RTLP    4-10-1 [71]
  DRTLP   4-(9-4)-10-1, S_2 [74]

Table 2. Model comparison in prediction of rubber hardness.

  Model   Structure, wavelet [free parameters]   NRMSE train/test   MAPE train/test
  TLP     [21]
  DTLP    19-(53-19)-1-1, S_2 [27]
  RTLP    [15]
  DRTLP   18-(76-18)-1-4, S_8 [36]

Table 3. Model comparison in prediction of the yearly average sunspot number.

  Model   Structure, wavelet [free parameters]   NRMSE train/test   MAPE train/test
  TLP     [34]
  DTLP    9-(36-9)-8-1, S_5 [93]
  RTLP    [55]
  DRTLP   11-(37-11)-3-9, S_5 [103]

Figure legends:

Figure 1. Influence of the parameter s on the shape of the generalized soft thresholding function.

Figure 2. Wavelet based denoising layers.

Figure 3. a) The two-layered perceptron with global recurrent connections, N - N_h - N_o, and b) its enhancement with the wavelet based denoising layers, N - (C - N) - N_h - N_o. When the recurrent connections (dashed) are omitted, the feedforward two-layered perceptron (a) and the two-layered perceptron with the denoising layers (b) are obtained.

Figure 4. Variation of the rubber hardness with examples of the denoised data samples p̃(q), obtained by the denoising layers of the DTLP and DRTLP models.

Figure 5. Variation of the yearly average sunspot number with examples of the denoised data samples p̃(q), obtained by the denoising layers of the DTLP and DRTLP models.

Figures

[Figure 1: the generalized soft thresholding function T(d_{j,k}, τ_j) for s = 0 and s > 0.]

[Figure 2: the thresholding layer, with groups 1, …, J+1, applied to the N-dimensional data sample p(q), followed by the reconstruction layer producing the N-dimensional denoised sample p̃(q).]

[Figure 3: a) the two-layered perceptron with global recurrent connections and b) its enhancement with the denoising layers inserted between the input sample p(q) and the nonlinear layers.]

[Figure 4]

[Figure 5]


More information

A Matrix Representation of Panel Data

A Matrix Representation of Panel Data web Extensin 6 Appendix 6.A A Matrix Representatin f Panel Data Panel data mdels cme in tw brad varieties, distinct intercept DGPs and errr cmpnent DGPs. his appendix presents matrix algebra representatins

More information

Homology groups of disks with holes

Homology groups of disks with holes Hmlgy grups f disks with hles THEOREM. Let p 1,, p k } be a sequence f distinct pints in the interir unit disk D n where n 2, and suppse that fr all j the sets E j Int D n are clsed, pairwise disjint subdisks.

More information

Pattern Recognition 2014 Support Vector Machines

Pattern Recognition 2014 Support Vector Machines Pattern Recgnitin 2014 Supprt Vectr Machines Ad Feelders Universiteit Utrecht Ad Feelders ( Universiteit Utrecht ) Pattern Recgnitin 1 / 55 Overview 1 Separable Case 2 Kernel Functins 3 Allwing Errrs (Sft

More information

Lecture 2: Supervised vs. unsupervised learning, bias-variance tradeoff

Lecture 2: Supervised vs. unsupervised learning, bias-variance tradeoff Lecture 2: Supervised vs. unsupervised learning, bias-variance tradeff Reading: Chapter 2 STATS 202: Data mining and analysis September 27, 2017 1 / 20 Supervised vs. unsupervised learning In unsupervised

More information

Sections 15.1 to 15.12, 16.1 and 16.2 of the textbook (Robbins-Miller) cover the materials required for this topic.

Sections 15.1 to 15.12, 16.1 and 16.2 of the textbook (Robbins-Miller) cover the materials required for this topic. Tpic : AC Fundamentals, Sinusidal Wavefrm, and Phasrs Sectins 5. t 5., 6. and 6. f the textbk (Rbbins-Miller) cver the materials required fr this tpic.. Wavefrms in electrical systems are current r vltage

More information

3.4 Shrinkage Methods Prostate Cancer Data Example (Continued) Ridge Regression

3.4 Shrinkage Methods Prostate Cancer Data Example (Continued) Ridge Regression 3.3.4 Prstate Cancer Data Example (Cntinued) 3.4 Shrinkage Methds 61 Table 3.3 shws the cefficients frm a number f different selectin and shrinkage methds. They are best-subset selectin using an all-subsets

More information

Theoretical study of third virial coefficient with Kihara potential

Theoretical study of third virial coefficient with Kihara potential Theretical study f third virial cefficient with Kihara ptential Jurnal: Manuscript ID cjp-017-0705.r Manuscript Type: Article Date Submitted by the Authr: 6-Dec-017 Cmplete List f Authrs: Smuncu E.; Giresun

More information

Biplots in Practice MICHAEL GREENACRE. Professor of Statistics at the Pompeu Fabra University. Chapter 13 Offprint

Biplots in Practice MICHAEL GREENACRE. Professor of Statistics at the Pompeu Fabra University. Chapter 13 Offprint Biplts in Practice MICHAEL GREENACRE Prfessr f Statistics at the Pmpeu Fabra University Chapter 13 Offprint CASE STUDY BIOMEDICINE Cmparing Cancer Types Accrding t Gene Epressin Arrays First published:

More information

Data Mining: Concepts and Techniques. Classification and Prediction. Chapter February 8, 2007 CSE-4412: Data Mining 1

Data Mining: Concepts and Techniques. Classification and Prediction. Chapter February 8, 2007 CSE-4412: Data Mining 1 Data Mining: Cncepts and Techniques Classificatin and Predictin Chapter 6.4-6 February 8, 2007 CSE-4412: Data Mining 1 Chapter 6 Classificatin and Predictin 1. What is classificatin? What is predictin?

More information

Computational modeling techniques

Computational modeling techniques Cmputatinal mdeling techniques Lecture 2: Mdeling change. In Petre Department f IT, Åb Akademi http://users.ab.fi/ipetre/cmpmd/ Cntent f the lecture Basic paradigm f mdeling change Examples Linear dynamical

More information

Learning to Control an Unstable System with Forward Modeling

Learning to Control an Unstable System with Forward Modeling 324 Jrdan and Jacbs Learning t Cntrl an Unstable System with Frward Mdeling Michael I. Jrdan Brain and Cgnitive Sciences MIT Cambridge, MA 02139 Rbert A. Jacbs Cmputer and Infrmatin Sciences University

More information

ChE 471: LECTURE 4 Fall 2003

ChE 471: LECTURE 4 Fall 2003 ChE 47: LECTURE 4 Fall 003 IDEL RECTORS One f the key gals f chemical reactin engineering is t quantify the relatinship between prductin rate, reactr size, reactin kinetics and selected perating cnditins.

More information

Lyapunov Stability Stability of Equilibrium Points

Lyapunov Stability Stability of Equilibrium Points Lyapunv Stability Stability f Equilibrium Pints 1. Stability f Equilibrium Pints - Definitins In this sectin we cnsider n-th rder nnlinear time varying cntinuus time (C) systems f the frm x = f ( t, x),

More information

CS 477/677 Analysis of Algorithms Fall 2007 Dr. George Bebis Course Project Due Date: 11/29/2007

CS 477/677 Analysis of Algorithms Fall 2007 Dr. George Bebis Course Project Due Date: 11/29/2007 CS 477/677 Analysis f Algrithms Fall 2007 Dr. Gerge Bebis Curse Prject Due Date: 11/29/2007 Part1: Cmparisn f Srting Algrithms (70% f the prject grade) The bjective f the first part f the assignment is

More information

SUPPLEMENTARY MATERIAL GaGa: a simple and flexible hierarchical model for microarray data analysis

SUPPLEMENTARY MATERIAL GaGa: a simple and flexible hierarchical model for microarray data analysis SUPPLEMENTARY MATERIAL GaGa: a simple and flexible hierarchical mdel fr micrarray data analysis David Rssell Department f Bistatistics M.D. Andersn Cancer Center, Hustn, TX 77030, USA rsselldavid@gmail.cm

More information

COMP 551 Applied Machine Learning Lecture 4: Linear classification

COMP 551 Applied Machine Learning Lecture 4: Linear classification COMP 551 Applied Machine Learning Lecture 4: Linear classificatin Instructr: Jelle Pineau (jpineau@cs.mcgill.ca) Class web page: www.cs.mcgill.ca/~jpineau/cmp551 Unless therwise nted, all material psted

More information

Multiple Source Multiple. using Network Coding

Multiple Source Multiple. using Network Coding Multiple Surce Multiple Destinatin Tplgy Inference using Netwrk Cding Pegah Sattari EECS, UC Irvine Jint wrk with Athina Markpulu, at UCI, Christina Fraguli, at EPFL, Lausanne Outline Netwrk Tmgraphy Gal,

More information

Building to Transformations on Coordinate Axis Grade 5: Geometry Graph points on the coordinate plane to solve real-world and mathematical problems.

Building to Transformations on Coordinate Axis Grade 5: Geometry Graph points on the coordinate plane to solve real-world and mathematical problems. Building t Transfrmatins n Crdinate Axis Grade 5: Gemetry Graph pints n the crdinate plane t slve real-wrld and mathematical prblems. 5.G.1. Use a pair f perpendicular number lines, called axes, t define

More information

MODULAR DECOMPOSITION OF THE NOR-TSUM MULTIPLE-VALUED PLA

MODULAR DECOMPOSITION OF THE NOR-TSUM MULTIPLE-VALUED PLA MODUAR DECOMPOSITION OF THE NOR-TSUM MUTIPE-AUED PA T. KAGANOA, N. IPNITSKAYA, G. HOOWINSKI k Belarusian State University f Infrmatics and Radielectrnics, abratry f Image Prcessing and Pattern Recgnitin.

More information

Distributions, spatial statistics and a Bayesian perspective

Distributions, spatial statistics and a Bayesian perspective Distributins, spatial statistics and a Bayesian perspective Dug Nychka Natinal Center fr Atmspheric Research Distributins and densities Cnditinal distributins and Bayes Thm Bivariate nrmal Spatial statistics

More information

Verification of Quality Parameters of a Solar Panel and Modification in Formulae of its Series Resistance

Verification of Quality Parameters of a Solar Panel and Modification in Formulae of its Series Resistance Verificatin f Quality Parameters f a Slar Panel and Mdificatin in Frmulae f its Series Resistance Sanika Gawhane Pune-411037-India Onkar Hule Pune-411037- India Chinmy Kulkarni Pune-411037-India Ojas Pandav

More information

MINIMIZATION OF ACTUATOR REPOSITIONING USING NEURAL NETWORKS WITH APPLICATION IN NONLINEAR HVAC 1 SYSTEMS

MINIMIZATION OF ACTUATOR REPOSITIONING USING NEURAL NETWORKS WITH APPLICATION IN NONLINEAR HVAC 1 SYSTEMS MINIMIZATION OF ACTUATOR REPOSITIONING USING NEURAL NETWORKS WITH APPLICATION IN NONLINEAR HVAC SYSTEMS M. J. Yazdanpanah *, E. Semsar, C. Lucas * yazdan@ut.ac.ir, semsar@chamran.ut.ac.ir, lucas@ipm.ir

More information

Part 3 Introduction to statistical classification techniques

Part 3 Introduction to statistical classification techniques Part 3 Intrductin t statistical classificatin techniques Machine Learning, Part 3, March 07 Fabi Rli Preamble ØIn Part we have seen that if we knw: Psterir prbabilities P(ω i / ) Or the equivalent terms

More information

Pressure And Entropy Variations Across The Weak Shock Wave Due To Viscosity Effects

Pressure And Entropy Variations Across The Weak Shock Wave Due To Viscosity Effects Pressure And Entrpy Variatins Acrss The Weak Shck Wave Due T Viscsity Effects OSTAFA A. A. AHOUD Department f athematics Faculty f Science Benha University 13518 Benha EGYPT Abstract:-The nnlinear differential

More information

Determining the Accuracy of Modal Parameter Estimation Methods

Determining the Accuracy of Modal Parameter Estimation Methods Determining the Accuracy f Mdal Parameter Estimatin Methds by Michael Lee Ph.D., P.E. & Mar Richardsn Ph.D. Structural Measurement Systems Milpitas, CA Abstract The mst cmmn type f mdal testing system

More information

Training Algorithms for Recurrent Neural Networks

Training Algorithms for Recurrent Neural Networks raining Algrithms fr Recurrent Neural Netwrks SUWARIN PAAAVORAKUN UYN NGOC PIEN Cmputer Science Infrmatin anagement Prgram Asian Institute f echnlgy P.O. Bx 4, Klng Luang, Pathumthani 12120 AILAND http://www.ait.ac.th

More information

Early detection of mining truck failure by modelling its operation with neural networks classification algorithms

Early detection of mining truck failure by modelling its operation with neural networks classification algorithms RU, Rand GOLOSINSKI, T.S. Early detectin f mining truck failure by mdelling its peratin with neural netwrks classificatin algrithms. Applicatin f Cmputers and Operatins Research ill the Minerals Industries,

More information

Particle Size Distributions from SANS Data Using the Maximum Entropy Method. By J. A. POTTON, G. J. DANIELL AND B. D. RAINFORD

Particle Size Distributions from SANS Data Using the Maximum Entropy Method. By J. A. POTTON, G. J. DANIELL AND B. D. RAINFORD 3 J. Appl. Cryst. (1988). 21,3-8 Particle Size Distributins frm SANS Data Using the Maximum Entrpy Methd By J. A. PTTN, G. J. DANIELL AND B. D. RAINFRD Physics Department, The University, Suthamptn S9

More information

Time-domain lifted wavelet collocation method for modeling nonlinear wave propagation

Time-domain lifted wavelet collocation method for modeling nonlinear wave propagation Lee et al.: Acustics Research Letters Online [DOI./.] Published Online 8 August Time-dmain lifted wavelet cllcatin methd fr mdeling nnlinear wave prpagatin Kelvin Chee-Mun Lee and Wn-Seng Gan Digital Signal

More information

A.H. Helou Ph.D.~P.E.

A.H. Helou Ph.D.~P.E. 1 EVALUATION OF THE STIFFNESS MATRIX OF AN INDETERMINATE TRUSS USING MINIMIZATION TECHNIQUES A.H. Helu Ph.D.~P.E. :\.!.\STRAC'l' Fr an existing structure the evaluatin f the Sti"ffness matrix may be hampered

More information

Design and Simulation of Dc-Dc Voltage Converters Using Matlab/Simulink

Design and Simulation of Dc-Dc Voltage Converters Using Matlab/Simulink American Jurnal f Engineering Research (AJER) 016 American Jurnal f Engineering Research (AJER) e-issn: 30-0847 p-issn : 30-0936 Vlume-5, Issue-, pp-9-36 www.ajer.rg Research Paper Open Access Design and

More information

SAMPLING DYNAMICAL SYSTEMS

SAMPLING DYNAMICAL SYSTEMS SAMPLING DYNAMICAL SYSTEMS Melvin J. Hinich Applied Research Labratries The University f Texas at Austin Austin, TX 78713-8029, USA (512) 835-3278 (Vice) 835-3259 (Fax) hinich@mail.la.utexas.edu ABSTRACT

More information

Kinetic Model Completeness

Kinetic Model Completeness 5.68J/10.652J Spring 2003 Lecture Ntes Tuesday April 15, 2003 Kinetic Mdel Cmpleteness We say a chemical kinetic mdel is cmplete fr a particular reactin cnditin when it cntains all the species and reactins

More information

David HORN and Irit OPHER. School of Physics and Astronomy. Raymond and Beverly Sackler Faculty of Exact Sciences

David HORN and Irit OPHER. School of Physics and Astronomy. Raymond and Beverly Sackler Faculty of Exact Sciences Cmplex Dynamics f Neurnal Threshlds David HORN and Irit OPHER Schl f Physics and Astrnmy Raymnd and Beverly Sackler Faculty f Exact Sciences Tel Aviv University, Tel Aviv 69978, Israel hrn@neurn.tau.ac.il

More information

3D FE Modeling Simulation of Cold Rotary Forging with Double Symmetry Rolls X. H. Han 1, a, L. Hua 1, b, Y. M. Zhao 1, c

3D FE Modeling Simulation of Cold Rotary Forging with Double Symmetry Rolls X. H. Han 1, a, L. Hua 1, b, Y. M. Zhao 1, c Materials Science Frum Online: 2009-08-31 ISSN: 1662-9752, Vls. 628-629, pp 623-628 di:10.4028/www.scientific.net/msf.628-629.623 2009 Trans Tech Publicatins, Switzerland 3D FE Mdeling Simulatin f Cld

More information

CAUSAL INFERENCE. Technical Track Session I. Phillippe Leite. The World Bank

CAUSAL INFERENCE. Technical Track Session I. Phillippe Leite. The World Bank CAUSAL INFERENCE Technical Track Sessin I Phillippe Leite The Wrld Bank These slides were develped by Christel Vermeersch and mdified by Phillippe Leite fr the purpse f this wrkshp Plicy questins are causal

More information

NEURAL NETWORKS. Neural networks

NEURAL NETWORKS. Neural networks NEURAL NETWORKS Neural netwrks Mtivatin Humans are able t prcess cmplex tasks efficiently (perceptin, pattern recgnitin, reasning, etc.) Ability t learn frm examples Adaptability and fault tlerance Engineering

More information

A mathematical model for complete stress-strain curve prediction of permeable concrete

A mathematical model for complete stress-strain curve prediction of permeable concrete A mathematical mdel fr cmplete stress-strain curve predictin f permeable cncrete M. K. Hussin Y. Zhuge F. Bullen W. P. Lkuge Faculty f Engineering and Surveying, University f Suthern Queensland, Twmba,

More information

7 TH GRADE MATH STANDARDS

7 TH GRADE MATH STANDARDS ALGEBRA STANDARDS Gal 1: Students will use the language f algebra t explre, describe, represent, and analyze number expressins and relatins 7 TH GRADE MATH STANDARDS 7.M.1.1: (Cmprehensin) Select, use,

More information

Internal vs. external validity. External validity. This section is based on Stock and Watson s Chapter 9.

Internal vs. external validity. External validity. This section is based on Stock and Watson s Chapter 9. Sectin 7 Mdel Assessment This sectin is based n Stck and Watsn s Chapter 9. Internal vs. external validity Internal validity refers t whether the analysis is valid fr the ppulatin and sample being studied.

More information

Current/voltage-mode third order quadrature oscillator employing two multiple outputs CCIIs and grounded capacitors

Current/voltage-mode third order quadrature oscillator employing two multiple outputs CCIIs and grounded capacitors Indian Jurnal f Pure & Applied Physics Vl. 49 July 20 pp. 494-498 Current/vltage-mde third rder quadrature scillatr emplying tw multiple utputs CCIIs and grunded capacitrs Jiun-Wei Hrng Department f Electrnic

More information

IN a recent article, Geary [1972] discussed the merit of taking first differences

IN a recent article, Geary [1972] discussed the merit of taking first differences The Efficiency f Taking First Differences in Regressin Analysis: A Nte J. A. TILLMAN IN a recent article, Geary [1972] discussed the merit f taking first differences t deal with the prblems that trends

More information

Application Of Mealy Machine And Recurrence Relations In Cryptography

Application Of Mealy Machine And Recurrence Relations In Cryptography Applicatin Of Mealy Machine And Recurrence Relatins In Cryptgraphy P. A. Jytirmie 1, A. Chandra Sekhar 2, S. Uma Devi 3 1 Department f Engineering Mathematics, Andhra University, Visakhapatnam, IDIA 2

More information

MODULE FOUR. This module addresses functions. SC Academic Elementary Algebra Standards:

MODULE FOUR. This module addresses functions. SC Academic Elementary Algebra Standards: MODULE FOUR This mdule addresses functins SC Academic Standards: EA-3.1 Classify a relatinship as being either a functin r nt a functin when given data as a table, set f rdered pairs, r graph. EA-3.2 Use

More information

Thermodynamics and Equilibrium

Thermodynamics and Equilibrium Thermdynamics and Equilibrium Thermdynamics Thermdynamics is the study f the relatinship between heat and ther frms f energy in a chemical r physical prcess. We intrduced the thermdynamic prperty f enthalpy,

More information

The standards are taught in the following sequence.

The standards are taught in the following sequence. B L U E V A L L E Y D I S T R I C T C U R R I C U L U M MATHEMATICS Third Grade In grade 3, instructinal time shuld fcus n fur critical areas: (1) develping understanding f multiplicatin and divisin and

More information

Perfrmance f Sensitizing Rules n Shewhart Cntrl Charts with Autcrrelated Data Key Wrds: Autregressive, Mving Average, Runs Tests, Shewhart Cntrl Chart

Perfrmance f Sensitizing Rules n Shewhart Cntrl Charts with Autcrrelated Data Key Wrds: Autregressive, Mving Average, Runs Tests, Shewhart Cntrl Chart Perfrmance f Sensitizing Rules n Shewhart Cntrl Charts with Autcrrelated Data Sandy D. Balkin Dennis K. J. Lin y Pennsylvania State University, University Park, PA 16802 Sandy Balkin is a graduate student

More information

Eric Klein and Ning Sa

Eric Klein and Ning Sa Week 12. Statistical Appraches t Netwrks: p1 and p* Wasserman and Faust Chapter 15: Statistical Analysis f Single Relatinal Netwrks There are fur tasks in psitinal analysis: 1) Define Equivalence 2) Measure

More information

Application of APW Pseudopotential Form Factor in the Calculation of Liquid Metal Resistivities.

Application of APW Pseudopotential Form Factor in the Calculation of Liquid Metal Resistivities. Internatinal Jurnal f Pure and Applied Physics. ISSN 097-1776 Vlume 8, Number (01), pp. 11-117 Research India Publicatins http://www.ripublicatin.cm/pap.htm Applicatin f APW Pseudptential Frm Factr in

More information

a(k) received through m channels of length N and coefficients v(k) is an additive independent white Gaussian noise with

a(k) received through m channels of length N and coefficients v(k) is an additive independent white Gaussian noise with urst Mde Nn-Causal Decisin-Feedback Equalizer based n Sft Decisins Elisabeth de Carvalh and Dirk T.M. Slck Institut EURECOM, 2229 rute des Crêtes,.P. 93, 694 Sphia ntiplis Cedex, FRNCE Tel: +33 493263

More information

Comparison of two variable parameter Muskingum methods

Comparison of two variable parameter Muskingum methods Extreme Hydrlgical Events: Precipitatin, Flds and Drughts (Prceedings f the Ykhama Sympsium, July 1993). IAHS Publ. n. 213, 1993. 129 Cmparisn f tw variable parameter Muskingum methds M. PERUMAL Department

More information

Combining Dialectical Optimization and Gradient Descent Methods for Improving the Accuracy of Straight Line Segment Classifiers

Combining Dialectical Optimization and Gradient Descent Methods for Improving the Accuracy of Straight Line Segment Classifiers Cmbining Dialectical Optimizatin and Gradient Descent Methds fr Imprving the Accuracy f Straight Line Segment Classifiers Rsari A. Medina Rdriguez and Rnald Fumi Hashimt University f Sa Paul Institute

More information

ON THE COMPUTATIONAL DESIGN METHODS FOR IMPROOVING THE GEAR TRANSMISSION PERFORMANCES

ON THE COMPUTATIONAL DESIGN METHODS FOR IMPROOVING THE GEAR TRANSMISSION PERFORMANCES ON THE COMPUTATIONAL DESIGN METHODS FOR IMPROOVING THE GEAR TRANSMISSION PERFORMANCES Flavia Chira 1, Mihai Banica 1, Dinu Sticvici 1 1 Assc.Prf., PhD. Eng., Nrth University f Baia Mare, e-mail: Flavia.Chira@ubm.r

More information

Feedforward Neural Networks

Feedforward Neural Networks Feedfrward Neural Netwrks Yagmur Gizem Cinar, Eric Gaussier AMA, LIG, Univ. Grenble Alpes 17 March 2017 Yagmur Gizem Cinar, Eric Gaussier Multilayer Perceptrns (MLP) 17 March 2017 1 / 42 Reference Bk Deep

More information

ENG2410 Digital Design Sequential Circuits: Part A

ENG2410 Digital Design Sequential Circuits: Part A ENG2410 Digital Design Sequential Circuits: Part A Fall 2017 S. Areibi Schl f Engineering University f Guelph Week #6 Tpics Sequential Circuit Definitins Latches Flip-Flps Delays in Sequential Circuits

More information

5 th grade Common Core Standards

5 th grade Common Core Standards 5 th grade Cmmn Cre Standards In Grade 5, instructinal time shuld fcus n three critical areas: (1) develping fluency with additin and subtractin f fractins, and develping understanding f the multiplicatin

More information

Computational modeling techniques

Computational modeling techniques Cmputatinal mdeling techniques Lecture 4: Mdel checing fr ODE mdels In Petre Department f IT, Åb Aademi http://www.users.ab.fi/ipetre/cmpmd/ Cntent Stichimetric matrix Calculating the mass cnservatin relatins

More information

Drought damaged area

Drought damaged area ESTIMATE OF THE AMOUNT OF GRAVEL CO~TENT IN THE SOIL BY A I R B O'RN EMS S D A T A Y. GOMI, H. YAMAMOTO, AND S. SATO ASIA AIR SURVEY CO., l d. KANAGAWA,JAPAN S.ISHIGURO HOKKAIDO TOKACHI UBPREFECTRAl OffICE

More information

The Research on Flux Linkage Characteristic Based on BP and RBF Neural Network for Switched Reluctance Motor

The Research on Flux Linkage Characteristic Based on BP and RBF Neural Network for Switched Reluctance Motor Prgress In Electrmagnetics Research M, Vl. 35, 5 6, 24 The Research n Flux Linkage Characteristic Based n BP and RBF Neural Netwrk fr Switched Reluctance Mtr Yan Cai, *, Siyuan Sun, Chenhui Wang, and Cha

More information

A New Evaluation Measure. J. Joiner and L. Werner. The problems of evaluation and the needed criteria of evaluation

A New Evaluation Measure. J. Joiner and L. Werner. The problems of evaluation and the needed criteria of evaluation III-l III. A New Evaluatin Measure J. Jiner and L. Werner Abstract The prblems f evaluatin and the needed criteria f evaluatin measures in the SMART system f infrmatin retrieval are reviewed and discussed.

More information

COMP 551 Applied Machine Learning Lecture 11: Support Vector Machines

COMP 551 Applied Machine Learning Lecture 11: Support Vector Machines COMP 551 Applied Machine Learning Lecture 11: Supprt Vectr Machines Instructr: (jpineau@cs.mcgill.ca) Class web page: www.cs.mcgill.ca/~jpineau/cmp551 Unless therwise nted, all material psted fr this curse

More information

Churn Prediction using Dynamic RFM-Augmented node2vec

Churn Prediction using Dynamic RFM-Augmented node2vec Churn Predictin using Dynamic RFM-Augmented nde2vec Sandra Mitrvić, Jchen de Weerdt, Bart Baesens & Wilfried Lemahieu Department f Decisin Sciences and Infrmatin Management, KU Leuven 18 September 2017,

More information

Biocomputers. [edit]scientific Background

Biocomputers. [edit]scientific Background Bicmputers Frm Wikipedia, the free encyclpedia Bicmputers use systems f bilgically derived mlecules, such as DNA and prteins, t perfrm cmputatinal calculatins invlving string, retrieving, and prcessing

More information

2.161 Signal Processing: Continuous and Discrete Fall 2008

2.161 Signal Processing: Continuous and Discrete Fall 2008 MIT OpenCurseWare http://cw.mit.edu 2.161 Signal Prcessing: Cntinuus and Discrete Fall 2008 Fr infrmatin abut citing these materials r ur Terms f Use, visit: http://cw.mit.edu/terms. Massachusetts Institute

More information

IAML: Support Vector Machines

IAML: Support Vector Machines 1 / 22 IAML: Supprt Vectr Machines Charles Suttn and Victr Lavrenk Schl f Infrmatics Semester 1 2 / 22 Outline Separating hyperplane with maimum margin Nn-separable training data Epanding the input int

More information

1 The limitations of Hartree Fock approximation

1 The limitations of Hartree Fock approximation Chapter: Pst-Hartree Fck Methds - I The limitatins f Hartree Fck apprximatin The n electrn single determinant Hartree Fck wave functin is the variatinal best amng all pssible n electrn single determinants

More information

Lecture 17: Free Energy of Multi-phase Solutions at Equilibrium

Lecture 17: Free Energy of Multi-phase Solutions at Equilibrium Lecture 17: 11.07.05 Free Energy f Multi-phase Slutins at Equilibrium Tday: LAST TIME...2 FREE ENERGY DIAGRAMS OF MULTI-PHASE SOLUTIONS 1...3 The cmmn tangent cnstructin and the lever rule...3 Practical

More information

, which yields. where z1. and z2

, which yields. where z1. and z2 The Gaussian r Nrmal PDF, Page 1 The Gaussian r Nrmal Prbability Density Functin Authr: Jhn M Cimbala, Penn State University Latest revisin: 11 September 13 The Gaussian r Nrmal Prbability Density Functin

More information

Module 4: General Formulation of Electric Circuit Theory

Module 4: General Formulation of Electric Circuit Theory Mdule 4: General Frmulatin f Electric Circuit Thery 4. General Frmulatin f Electric Circuit Thery All electrmagnetic phenmena are described at a fundamental level by Maxwell's equatins and the assciated

More information

Relationship Between Amplifier Settling Time and Pole-Zero Placements for Second-Order Systems *

Relationship Between Amplifier Settling Time and Pole-Zero Placements for Second-Order Systems * Relatinship Between Amplifier Settling Time and Ple-Zer Placements fr Secnd-Order Systems * Mark E. Schlarmann and Randall L. Geiger Iwa State University Electrical and Cmputer Engineering Department Ames,

More information

On classifier behavior in the presence of mislabeling noise

On classifier behavior in the presence of mislabeling noise Data Min Knwl Disc DOI 10.1007/s10618-016-0484-8 On classifier behavir in the presence f mislabeling nise Katsiaryna Mirylenka 1 Gerge Giannakpuls 2 Le Minh D 3 Themis Palpanas 4 Received: 12 Nvember 2015

More information

arxiv: v1 [physics.comp-ph] 21 Feb 2018

arxiv: v1 [physics.comp-ph] 21 Feb 2018 arxiv:182.7486v1 [physics.cmp-ph] 21 Feb 218 rspa.ryalscietypublishing.rg Research Article submitted t jurnal Subject Areas: Mechanical Engineering Keywrds: Data-driven frecasting, Lng-Shrt Term Memry,

More information

Sequential Allocation with Minimal Switching

Sequential Allocation with Minimal Switching In Cmputing Science and Statistics 28 (1996), pp. 567 572 Sequential Allcatin with Minimal Switching Quentin F. Stut 1 Janis Hardwick 1 EECS Dept., University f Michigan Statistics Dept., Purdue University

More information