Simplified Gating in Long Short-term Memory (LSTM) Recurrent Neural Networks

Yuzhen Lu and Fathi M. Salem
Circuits, Systems, and Neural Networks (CSANN) Lab
Department of Biosystems and Agricultural Engineering
Department of Electrical and Computer Engineering
Michigan State University
East Lansing, Michigan 48824, USA

Abstract: The standard LSTM recurrent neural networks, while very powerful in long-range dependency sequence applications, have a highly complex structure and relatively many (adaptive) parameters. In this work, we present an empirical comparison between the standard LSTM recurrent neural network architecture and three new parameter-reduced variants obtained by eliminating combinations of the input signal, bias, and hidden unit signals from the individual gating signals. The experiments on two sequence datasets show that the three new variants, called simply LSTM1, LSTM2, and LSTM3, can achieve performance comparable to the standard LSTM model with fewer (adaptive) parameters.

Index Terms: Recurrent Neural Networks (RNN), Long Short-term Memory (LSTM), Stochastic Gradient Descent

I. INTRODUCTION

Recurrent neural networks (RNN) have recently shown great promise in tackling various sequence modeling tasks in machine learning, such as automatic speech recognition [1-2], language translation [3-4], and the generation of language descriptions for images [5-6]. Simple RNNs, however, are difficult to train using stochastic gradient descent and have been reported to exhibit the so-called vanishing gradient and/or exploding gradient phenomena [7-8]. This has limited the ability of simple RNNs to learn sequences with relatively long dependencies. To address this limitation, researchers have developed a number of techniques in network architectures and optimization algorithms [9-11], among which the most successful in applications is the Long Short-term Memory (LSTM) unit in RNNs [9, 12]. An LSTM unit utilizes a memory cell that may maintain its state value over a long time, and a gating mechanism that contains three non-linear gates, namely an input, an output, and a forget gate. The intended role of the gates is to regulate the flow of signals into and out of the cell, in order to be effective in regulating long-range dependencies and achieve successful RNN training.

Since the inception of the LSTM unit, many modifications have been introduced to improve performance. Gers et al. [13] introduced peephole connections to the LSTM unit that connect the memory cell to the gates so as to infer the precise timing of the outputs. Sak et al. [14-15] introduced two recurrent and non-recurrent projection layers between the LSTM layer and the output layer, which resulted in significantly improved performance in a large vocabulary speech recognition task. Adding more components to the LSTM unit's architecture, however, may complicate the learning process and hinder understanding of the role of the individual components.

Recently, researchers have proposed a number of simplified variants of the LSTM-based RNN. Cho et al. [3] proposed a related two-gate architecture, called the Gated Recurrent Unit (GRU) RNN, in which the input, forget, and output gates are replaced by an update gate and a reset gate. Chung et al. [16] presented performance comparisons between LSTM and GRU RNNs, and observed that the latter performed comparably to, or even exceeded, the former on the specific datasets used. These conclusions, however, are still being further evaluated using more experiments over broader datasets. In exploring eight architectural variants of the LSTM RNN, Greff et al. [17] found that coupling the input and forget gates, as in the GRU model, and removing peephole connections did not significantly impair performance.
Furthermore, they report that the forget gate and the output activation are critical components. These findings were corroborated by the work of Jozefowicz et al. [18], who evaluated an extensive set of architectural designs over ten thousand different RNNs. In [18], the authors observed that the output gate was the least important compared to the input and forget gates, and suggested adding a bias of 1 to the forget gate to improve the performance of the LSTM RNN. Zhou et al. [19] proposed a Minimal Gate Unit (MGU), which has a minimum of one gate, namely the forget gate, created by merging the update and reset gates of the GRU model. Through evaluations on four different sequence datasets, the authors found that an RNN with the fewer-parameter MGU model was at par with the GRU model in terms of (testing) accuracy. The authors, however, did not explicitly perform comparisons against the standard LSTM RNN.

Recently, Salem [20] introduced a simple approach to simplifying the standard LSTM model, focusing only on the generation of the gating signals. The gating signals can be viewed as general control signals to be specified by minimizing the loss function/criterion. Specifically, all three gating equations are retained, but their parameters are reduced by eliminating one or more of the signals driving the gates. For simplicity, we call these three variants LSTM1, LSTM2, and LSTM3; they are detailed in Section III below. The paper presents a comparative evaluation of the standard LSTM RNN model against the three new LSTM model variants. The evaluation and test results, demonstrated on two public datasets, reveal that the LSTM model variants are comparable to the standard LSTM

RNN model in testing accuracy performance. We remark that these are initial tests, and further evaluations and comparisons need to be conducted among the standard LSTM RNN and the three LSTM variants.

The remainder of the paper is organized as follows. Section II specifies the standard LSTM RNN architecture with its three gating signals. Section III describes the three LSTM variants, called LSTM1, LSTM2, and LSTM3, respectively. Section IV presents the experiments considered in this study. Section V details the comparative performance results. Finally, Section VI summarizes the main conclusions.

II. THE RNN LSTM ARCHITECTURE

The LSTM architecture considered here is similar to that in Graves et al. [2, 16-19] but without peephole connections. It is referred to as the standard LSTM architecture and will be used for comparison with its simplified LSTM variants [20]. The (dynamic) equations for the LSTM memory blocks are given as follows:

i_t = σ(U_i h_{t-1} + W_i x_t + b_i)    (1)
f_t = σ(U_f h_{t-1} + W_f x_t + b_f)    (2)
o_t = σ(U_o h_{t-1} + W_o x_t + b_o)    (3)
c_t = f_t * c_{t-1} + i_t * tanh(U_c h_{t-1} + W_c x_t + b_c)    (4)
h_t = o_t * tanh(c_t)    (5)

In these equations, the n-d vectors i_t, f_t, and o_t are the input gate, forget gate, and output gate at time t, eqns (1)-(3). Note that these gate signals include the logistic nonlinearity, σ, and thus their values range between 0 and 1. The n-d cell state vector, c_t, and its n-d activation hidden unit, h_t, at the current time t, are in eqns (4)-(5). The input vector, x_t, is an m-d vector, tanh is the hyperbolic tangent function, and * in eqns (4)-(5) denotes a point-wise (Hadamard) multiplication operator. Note that the gates, cell, and activation all have the same dimension (n). The parameters of the LSTM model are the matrices (U, W) and biases (b) in eqns (1)-(5). The total number of parameters (i.e., the number of all the elements in W, U, and b), say N, for the standard LSTM can be calculated to be

N = 4(mn + n^2 + n)    (6)

where, again, m is the input dimension and n is the cell dimension. This constitutes a four-fold increase in parameters in comparison to the simple RNN [16-20].

III. THE RNN LSTM VARIANTS

While the LSTM model has demonstrated impressive performance in applications involving sequence-to-sequence relationships, a criticism of the standard LSTM resides in its relatively complex model structure, with 3 gating signals, and its relatively large number of parameters [see eqn (6)]. The gates in fact replicate the parameters in the cell. It is observed that the gates serve as control signals and that the forms in eqns (1)-(3) are redundant [20]. Here, three simplifications of the standard LSTM result in three LSTM variants; we refer to them here simply as LSTM1, LSTM2, and LSTM3. These variants are obtained by removing signals, and their associated parameters, from the gating eqns (1)-(3). For uniformity and simplicity, we apply the changes to all the gates identically; a code sketch follows the variant definitions below.

1) The LSTM1 model: No Input Signal

Here the input signal and its associated parameter matrix are removed from the gating signals (1)-(3). We thus obtain the new gating equations:

i_t = σ(U_i h_{t-1} + b_i)    (7)
f_t = σ(U_f h_{t-1} + b_f)    (8)
o_t = σ(U_o h_{t-1} + b_o)    (9)

2) The LSTM2 model: No Input Signal and No Bias

The gating signals contain only the hidden activation unit, in all three gates identically:

i_t = σ(U_i h_{t-1})    (10)
f_t = σ(U_f h_{t-1})    (11)
o_t = σ(U_o h_{t-1})    (12)

3) The LSTM3 model: No Input Signal and No Hidden Unit Signal

The gating signals contain only the bias term. Note that, as the bias is adaptive during training, it will include information about the state via the backpropagation learning algorithms or the co-state [24]:

i_t = σ(b_i)    (13)
f_t = σ(b_f)    (14)
o_t = σ(b_o)    (15)
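To make the four gate configurations concrete, the following is a minimal NumPy sketch of a single forward step implementing eqns (1)-(5) with the gating forms (7)-(15). The function and parameter names (lstm_step, the dictionary p, the variant switch) are our own illustrative choices, not the released implementation linked in Section IV.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, p, variant="standard"):
    """One LSTM time step. x: (m,), h_prev and c_prev: (n,).
    p holds U* (n x n), W* (n x m), b* (n,) for * in {i, f, o, c}."""
    def gate(g):
        if variant == "LSTM3":    # bias only, eqns (13)-(15)
            pre = p["b" + g]
        elif variant == "LSTM2":  # hidden unit only, eqns (10)-(12)
            pre = p["U" + g] @ h_prev
        elif variant == "LSTM1":  # hidden unit and bias, eqns (7)-(9)
            pre = p["U" + g] @ h_prev + p["b" + g]
        else:                     # standard gates, eqns (1)-(3)
            pre = p["U" + g] @ h_prev + p["W" + g] @ x + p["b" + g]
        return sigmoid(pre)

    i, f, o = gate("i"), gate("f"), gate("o")
    # The cell update (4) and output (5) are identical across all variants.
    c = f * c_prev + i * np.tanh(p["Uc"] @ h_prev + p["Wc"] @ x + p["bc"])
    h = o * np.tanh(c)
    return h, c

# Tiny smoke test with random parameters (m = 4 inputs, n = 3 cells).
rng = np.random.default_rng(0)
m, n = 4, 3
p = {}
for g in ("i", "f", "o", "c"):
    p["U" + g] = 0.1 * rng.standard_normal((n, n))
    p["W" + g] = 0.1 * rng.standard_normal((n, m))
    p["b" + g] = np.zeros(n)
x, h0, c0 = rng.standard_normal(m), np.zeros(n), np.zeros(n)
for v in ("standard", "LSTM1", "LSTM2", "LSTM3"):
    print(v, lstm_step(x, h0, c0, p, variant=v)[0].round(3))
```

Note that with zero biases the LSTM3 gates all evaluate to 0.5 before training; only through the adaptation of b_i, b_f, and b_o do they acquire state-dependent information, as discussed above.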
Compared to the standard LSTM, it can be seen that the three variants result in 3mn, 3(mn + n), and 3(mn + n^2) fewer parameters, respectively, consequently reducing the computational expense.
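As a quick check on eqn (6) and the reductions just stated, the following helper (our own sketch; the m and n pairs follow the experimental settings of Section IV) reproduces the parameter counts reported in Tables I-III:

```python
def n_params(m, n, variant="standard"):
    """Eqn (6) for the standard LSTM, minus the per-variant gate reductions."""
    N = 4 * (m * n + n * n + n)          # W, U, b over the 3 gates + cell
    if variant == "LSTM1":
        N -= 3 * m * n                   # drop W in the 3 gates
    elif variant == "LSTM2":
        N -= 3 * (m * n + n)             # drop W and b in the 3 gates
    elif variant == "LSTM3":
        N -= 3 * (m * n + n * n)         # drop W and U in the 3 gates
    return N

settings = {"pixel-wise MNIST": (1, 100),    # m = 1 pixel/step,  n = 100
            "row-wise MNIST":   (28, 50),    # m = 28 pixels/row, n = 50
            "IMDB":             (128, 128)}  # m = 128-d embedding, n = 128
for name, (m, n) in settings.items():
    print(name, [n_params(m, n, v)
                 for v in ("standard", "LSTM1", "LSTM2", "LSTM3")])
# pixel-wise MNIST [40800, 40500, 40200, 10500]
# row-wise MNIST   [15800, 11600, 11450, 4100]
# IMDB             [131584, 82432, 82048, 33280]
```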

IV. EXPERIMENTS

The effectiveness of the three proposed variants was evaluated on two public datasets, MNIST and IMDB. The focus here is to demonstrate the comparative performance of the standard LSTM RNN and the variants rather than to achieve state-of-the-art results. Only the standard LSTM RNN [2, 16-19] was used as the baseline model and compared with its three variants.

A. Experiments on the MNIST dataset:

This dataset contains a training set of 60,000 and a testing set of 10,000 handwritten images of the digits (0-9). The training set contains the labelled class of each image, available for training. Each image has a size of 28 × 28 pixels. The image data were pre-processed to have zero mean and unit variance. As in the work of Zhou et al. [19], the dataset was organized in two manners as input to an LSTM-based network. The first was to reshape each image into a one-dimensional vector, with pixels scanned row by row from the top left corner to the bottom right corner, resulting in a long input sequence of length 784. The second requires no image reshaping but treats each row of an image as a vector input, thus giving a much shorter input sequence of length 28. The two types of data organization are referred to as pixel-wise and row-wise sequence inputs, respectively. It is noted that the pixel-wise sequence is more time-consuming in training.

In the two training tasks, 100 hidden units and 100 training epochs were used for the pixel-wise sequence input, while 50 hidden units and 200 training epochs were used for the row-wise sequence input. Other network settings were kept the same throughout, including a batch size of 32, the RMSprop optimizer, the cross-entropy loss, a dynamic learning rate (η), and an early stopping strategy. In particular, the learning rate was set to be an exponential function of the training loss to speed up training, specifically η = η0 exp(C), where η0 is a constant coefficient and C is the training loss. For the pixel-wise sequence, two learning rate coefficients, η0 = 1e-3 and 1e-4, were considered, as it takes a relatively long time to train, while for the row-wise sequence, four η0 values of 1e-2, 1e-3, 1e-4, and 1e-5 were considered. The dynamic learning rate is thus directly related to the training performance. At the initial stage, the training loss is typically large, resulting in a large learning rate (η), which in turn lengthens the gradient steps away from the present parameter location. The learning rate decreases only as the loss function decreases towards lower loss levels, and eventually towards an acceptable minimum in the parameter space. This was found to achieve faster convergence to an acceptable solution. For the early stopping criterion, the training process was terminated if there was no improvement on the test data over a number of consecutive epochs; in our case we chose 25 epochs.

B. Experiments on the IMDB dataset:

This dataset consists of 50,000 movie reviews from IMDB, labelled into two classes according to the reviews' sentiment, positive or negative. Both the training and test sets contain 25,000 reviews. The reviews are encoded as sequences of word indices based on the overall word frequency in the dataset. The maximum sequence length was set to 80 over the top 20,000 most common words (longer sequences were truncated, while shorter ones were zero-padded at the end). Following an example in the Keras library [21], an embedding layer with an output dimension of 128 was added as the input to the LSTM layer, which contained 128 hidden units. The dropout technique [22] was implemented to randomly zero 20% of the signals in the embedding layer and 20% of the rows in the weight matrices (i.e., U and W) of the LSTM layer. The model was trained for 100 epochs. Other settings remained the same as those in the MNIST experiments. Training for the two datasets was implemented using the Keras package in conjunction with the Theano library (the implementation code and results are available at: https://github.com/jingweimo/Modified-LSTM).
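For reference, the following is a sketch of the IMDB baseline setup described above, written for a current tf.keras (TensorFlow 2.x) environment rather than the Keras/Theano stack the paper used. The DynamicLR callback is our reconstruction of the η = η0 exp(C) schedule, the Dropout layer approximates the described embedding-layer dropout, and the stock LSTM layer stands in for the baseline only (the three variants would require a custom cell such as the one sketched in Section III).

```python
import math
from tensorflow import keras

max_features, maxlen = 20000, 80   # top 20,000 words; sequences cut/padded to 80
(x_train, y_train), (x_test, y_test) = keras.datasets.imdb.load_data(
    num_words=max_features)
x_train = keras.preprocessing.sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = keras.preprocessing.sequence.pad_sequences(x_test, maxlen=maxlen)

model = keras.Sequential([
    keras.layers.Embedding(max_features, 128),   # 128-d embedding
    keras.layers.Dropout(0.2),                   # ~20% embedding-signal dropout
    keras.layers.LSTM(128, dropout=0.2, recurrent_dropout=0.2),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

class DynamicLR(keras.callbacks.Callback):
    """Learning-rate schedule from Section IV: eta = eta0 * exp(C),
    where C is the training loss of the epoch just completed."""
    def __init__(self, eta0):
        super().__init__()
        self.eta0 = eta0
    def on_epoch_end(self, epoch, logs=None):
        new_lr = self.eta0 * math.exp(logs["loss"])
        keras.backend.set_value(self.model.optimizer.learning_rate, new_lr)

# The test set doubles as the early-stopping monitor, as in the paper.
model.fit(x_train, y_train, batch_size=32, epochs=100,
          validation_data=(x_test, y_test),
          callbacks=[DynamicLR(eta0=1e-3),
                     keras.callbacks.EarlyStopping(monitor="val_accuracy",
                                                   patience=25)])
```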
V. RESULTS AND DISCUSSION

A. The MNIST dataset:

Table I summarizes the accuracies on the test dataset for the pixel-wise sequence. At η0 = 1e-3, the standard LSTM produced the highest accuracy, while at η0 = 1e-4, both LSTM1 and LSTM2 achieved accuracies slightly higher than that of the standard LSTM. LSTM3 performed the worst in both cases.

TABLE I
THE BEST ACCURACIES OF THE DIFFERENT LSTM NETWORKS ON THE TEST SET AND THE CORRESPONDING PARAMETER SIZES OF THE LSTM LAYERS

Network          η0 = 1e-3   η0 = 1e-4   #Params
Standard LSTM        --          --       40,800
LSTM1                --          --       40,500
LSTM2                --          --       40,200
LSTM3                --          --       10,500

Examining the training curves revealed the importance of η0 and the different responses of the networks. As shown in Fig. 1, the standard LSTM performed well in both cases, while LSTM1 and LSTM2 performed similarly poorly at η0 = 1e-3, where both suffered serious fluctuations at the beginning and dramatically lowered accuracies at the end. However, decreasing η0 to 1e-4 circumvented the problem fairly well for LSTM1 and LSTM2. For LSTM3, neither η0 = 1e-3 nor 1e-4 achieved successful training because of the fluctuation issue, suggesting that η0 should be decreased further. As shown in Fig. 2, where 200 training epochs were executed, choosing η0 = 1e-5 provided a steadily increasing accuracy, although the highest test accuracy reached was still lower than those of the other variants. It is expected that LSTM3 would achieve higher accuracies if a longer training time were allowed. In essence, LSTM3 has the fewest parameters but needs more training epochs to improve its (testing) accuracy.

The fluctuation phenomenon observed above is a typical issue caused by a large learning rate; it is likely due to numerical instability, whereby the (stochastic) gradient can no longer be approximated, and it can readily be resolved by decreasing the learning coefficient, at the price of slowing down training. From the results, the standard LSTM seemed more resistant to fluctuations in modeling long-sequence data than the three variants, most likely due to the suitability of the learning rate coefficient. LSTM3 was the most susceptible to the fluctuation issue, however, and requires a lower coefficient.
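The size of the coefficient matters here because the schedule multiplies it by exp(C). As an illustrative calculation on our part (assuming C is the mean categorical cross-entropy over the 10 digit classes, so that an untrained network starts near chance level):

$$ C_{\mathrm{init}} \approx \ln 10 \approx 2.30 \quad\Rightarrow\quad \eta = \eta_0\, e^{C_{\mathrm{init}}} \approx 10\,\eta_0 $$

Thus η0 = 1e-3 gives an effective initial rate near 1e-2, while η0 = 1e-5 keeps it near 1e-4, consistent with LSTM3 training stably only at the smallest coefficients.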

The optimal coefficient for LSTM3 in this study appears to lie between η0 = 1e-5 and η0 = 1e-4. Thus, further investigation may make it beneficial to use LSTM3, to reap the benefit of its dramatically reduced number of model parameters (see Table I). Overall, these findings show that the three LSTM variants were capable of handling long-range-dependency sequences comparably to the standard LSTM. Due attention should be paid to tuning the learning rate to achieve higher accuracies.

Fig. 1: Accuracies vs. epochs on the train/test datasets obtained by the standard LSTM (top), LSTM1 (middle), and LSTM2 (bottom), with the learning rate coefficients η0 = 1e-3 (left) and η0 = 1e-4 (right). The difference in epochs is due to the response of the implemented early stopping criterion.

Fig. 2: Accuracies vs. epochs on the train/test datasets obtained by LSTM3 with the learning rate coefficients η0 = 1e-4 (left) and η0 = 1e-5 (right). The difference in epochs is due to the response of the early stopping criterion.

Compared to the pixel-wise sequence of length 784, the row-wise sequence is 28 in length and was much easier (and faster) to train. Table II summarizes the results. All the networks achieved high accuracies at the four different η0 values. The standard LSTM, LSTM1, and LSTM2 performed similarly, and they all slightly outperformed LSTM3. No fluctuation issues were encountered in any of the cases. These experiments used networks with 50 hidden units.

TABLE II
THE BEST ACCURACIES OF THE DIFFERENT LSTM NETWORK VARIANTS ON THE TEST SET AND THEIR CORRESPONDING PARAMETER SIZES

Network          η0 = 1e-2   η0 = 1e-3   η0 = 1e-4   η0 = 1e-5   #Params
Standard LSTM        --          --          --          --       15,800
LSTM1                --          --          --          --       11,600
LSTM2                --          --          --          --       11,450
LSTM3                --          --          --          --        4,100

Among the four η0 values, η0 = 1e-3 gave the best results for all the networks except LSTM2, which performed best at η0 = 1e-2. Fig. 3 shows the corresponding learning curves at η0 = 1e-3. All the networks exhibited similar training pattern profiles, which demonstrates the efficacy of the three LSTM variants.

Fig. 3: Accuracies vs. epochs on the train/test dataset obtained by the standard LSTM (a), LSTM1 (b), LSTM2 (c), and LSTM3 (d) with the learning rate coefficient η0 = 1e-3. The difference in epochs is due to the response of the early stopping criterion.

From the results on the pixel-wise (long) and row-wise (short) sequence data, it is noted that the three LSTM variants, especially LSTM3, performed closely to the standard LSTM in handling the short sequence data.

B. The IMDB dataset:

For this dataset, the input sequence from the embedding layer to the LSTM layer is of the intermediate length 128. Table III lists the testing results for the various learning coefficients. The standard LSTM and the three variants show similar accuracies, except that LSTM1 and LSTM2 show slightly lower performance at η0 = 1e-2. Similar to the row-wise MNIST sequence case study, no noticeable fluctuations were observed for any of the four values of η0.

TABLE III
THE BEST ACCURACIES OF THE DIFFERENT LSTM NETWORKS ON THE TEST SET AND THE CORRESPONDING PARAMETER SIZES OF THE LSTM LAYERS

Network          η0 = 1e-2   η0 = 1e-3   η0 = 1e-4   η0 = 1e-5   #Params
Standard LSTM        --          --          --          --      131,584
LSTM1                --          --          --          --       82,432
LSTM2                --          --          --          --       82,048
LSTM3                --          --          --          --       33,280

The case η0 = 1e-5 consistently produced the best results for all the networks, which exhibited very similar training/testing profile curves, as depicted in Fig. 4.

Fig. 4: Accuracies against epochs on the test dataset obtained by the standard LSTM (a), LSTM1 (b), LSTM2 (c), and LSTM3 (d) with the learning rate coefficient η0 = 1e-5.

The main benefit of the three LSTM variants is the reduction in the number of parameters involved, and thus in the computational expense. This has been confirmed by the experiments, as summarized in the three tables above. LSTM1 and LSTM2 show a small difference in the number of parameters, and both contain the hidden unit signal in their gates, which explains their similar performance. LSTM3 has a dramatically reduced parameter size, since it only uses the bias, an indirectly retained, delayed version of the hidden unit signal via the gradient descent update equations. This may explain its relatively lagging performance, especially on long sequences. The actual reduction in parameters depends on the structure (i.e., dimension) of the input sequences and on the number of hidden units in the LSTM layer.

VI. CONCLUSIONS

In this paper, three simplified LSTM variants, defined by eliminating the input signal, bias, and/or hidden unit signals from the gating signals of the standard LSTM RNN, were evaluated on the task of modeling sequence data of varied lengths. The results confirm the utility of the three LSTM variants with reduced parameters, which, at proper learning rates, were capable of achieving performance comparable to the standard LSTM model. This work represents a preliminary study, and further work is needed to evaluate the three LSTM variants on more extensive datasets of varied sequence lengths.

REFERENCES

[1] A. Graves, Supervised Sequence Labelling with Recurrent Neural Networks. Berlin, Heidelberg: Springer-Verlag, 2012.
[2] A. Graves, "Speech recognition with deep recurrent neural networks," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), 2013.
[3] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," in Proc. 2014 Conf. on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, 2014.
[4] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems 27 (NIPS), 2014.
[5] A. Karpathy and F. F. Li, "Deep visual-semantic alignments for generating image descriptions," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.
[6] K. Xu, J. L. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio, "Show, attend and tell: Neural image caption generation with visual attention," in Proc. 32nd Int. Conf. on Machine Learning (ICML), vol. 37, 2015.
[7] Y. Bengio, P. Simard, and P. Frasconi, "Learning long-term dependencies with gradient descent is difficult," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 157-166, 1994.
[8] R. Pascanu, T. Mikolov, and Y. Bengio, "On the difficulty of training recurrent neural networks," in Proc. 30th Int. Conf. on Machine Learning (ICML), JMLR: W&CP vol. 28, 2013.
[9] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[10] Q. V. Le, N. Jaitly, and G. E. Hinton, "A simple way to initialize recurrent networks of rectified linear units," arXiv preprint arXiv:1504.00941, 2015.
[11] J. Martens and I. Sutskever, "Training deep and recurrent neural networks with Hessian-free optimization," in Neural Networks: Tricks of the Trade, Springer, 2012.
[12] F. A. Gers, J. Schmidhuber, and F. Cummins, "Learning to forget: Continual prediction with LSTM," in Proc. 9th Int. Conf. on Artificial Neural Networks (ICANN), IEEE, vol. 2, 1999.
[13] F. A. Gers, N. N. Schraudolph, and J. Schmidhuber, "Learning precise timing with LSTM recurrent networks," Journal of Machine Learning Research, vol. 3, pp. 115-143, 2002.
[14] H. Sak, A. Senior, and F. Beaufays, "Long short-term memory recurrent neural network architectures for large scale acoustic modeling," in Proc. INTERSPEECH, ISCA, 2014.
[15] H. Sak, A. Senior, and F. Beaufays, "Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition," arXiv preprint arXiv:1402.1128, 2014.
[16] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[17] K. Greff, R. K. Srivastava, J. Koutnik, B. R. Steunebrink, and J. Schmidhuber, "LSTM: A search space odyssey," arXiv preprint arXiv:1503.04069, 2015.
[18] R. Jozefowicz, W. Zaremba, and I. Sutskever, "An empirical exploration of recurrent network architectures," in Proc. 32nd Int. Conf. on Machine Learning (ICML), vol. 37, 2015.
[19] G. B. Zhou, J. Wu, C. L. Zhang, and Z. H. Zhou, "Minimal gated unit for recurrent neural networks," International Journal of Automation and Computing, 2016.
[20] F. Salem, "Reduced parameterization in gated recurrent neural networks," MSU Memorandum, Nov. 2016.
[21] https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py
[22] W. Zaremba, I. Sutskever, and O. Vinyals, "Recurrent neural network regularization," arXiv preprint arXiv:1409.2329, 2014.
[23] G. Hinton, N. Srivastava, and K. Swersky, "Lecture 6a: Overview of mini-batch gradient descent," Coursera lecture slides.
[24] F. Salem, "A basic recurrent neural network model," arXiv preprint arXiv:1612.09022, Dec. 2016.
