Chapter: Kalman Filters

.1 Introduction

We describe Bayesian learning for the sequential estimation of parameters (e.g. means, AR coefficients). The update procedures are known as Kalman filters. We show how Dynamic Linear Models, Recursive Least Squares and Steepest Descent algorithms are all special cases of the Kalman filter.

.2 Sequential Estimation of a Nonstationary Mean

In the lecture on Bayesian methods we described the sequential estimation of a stationary mean. We now extend that analysis to the nonstationary case. A reasonable model of a time-varying mean is that it can drift from sample to sample. If the drift is random (later on we will also consider deterministic drifts) then we have

\theta_t = \theta_{t-1} + w_t    (.1)

where the random drift is Gaussian, p(w_t) = N(w_t; 0, \sigma_w^2), with drift variance \sigma_w^2. The data points are then Gaussian about the mean \theta_t. If they have a fixed variance \sigma_x^2 (later on we will also consider time-varying variance) then

x_t = \theta_t + e_t    (.2)

where e_t = x_t - \theta_t. Hence p(e_t) = N(e_t; 0, \sigma_x^2).

At time t-1 our estimate of \theta has a Gaussian distribution with mean \hat{\theta}_{t-1} and variance \hat{\sigma}_{t-1}^2. We stress that this is the variance of our mean estimate and not the variance of the data. The standard error estimate for this variance (\sigma_x^2 / t) is no longer valid as we have nonstationary data. We therefore have to estimate it as we go along. This means we keep running estimates of the distribution of the mean. At time t-1 this distribution has a mean \hat{\theta}_{t-1} and a variance \hat{\sigma}_{t-1}^2. The distribution at time t is

then found from Bayes' rule. Specifically, the prior distribution is given by

p(\theta_t) = N(\theta_t; \hat{\theta}_{t-1}, r_t)    (.3)

where r_t is the prior variance (we add the random drift variance on to the variance from the previous time step)

r_t = \hat{\sigma}_{t-1}^2 + \sigma_w^2    (.4)

and the likelihood is

p(x_t | \theta_t) = N(x_t; \hat{\theta}_{t-1}, \sigma_x^2)    (.5)

The posterior is then given by

p(\theta_t | x_t) = N(\theta_t; \hat{\theta}_t, \hat{\sigma}_t^2)    (.6)

where the mean is

\hat{\theta}_t = \hat{\theta}_{t-1} + (r_t / (\sigma_x^2 + r_t)) (x_t - \hat{\theta}_{t-1})    (.7)

and the variance is

\hat{\sigma}_t^2 = r_t \sigma_x^2 / (r_t + \sigma_x^2)    (.8)

We now write the above equations in a slightly different form to allow for comparison with later estimation procedures

\hat{\theta}_t = \hat{\theta}_{t-1} + K_t e_t    (.9)
\hat{\sigma}_t^2 = r_t (1 - K_t)

where

K_t = r_t / (\sigma_x^2 + r_t)    (.10)

and

e_t = x_t - \hat{\theta}_{t-1}    (.11)

In the next section we will see that our update equations are a special case of a Kalman filter, where e_t is the prediction error and K_t is the Kalman gain.

In Figure .1 we give a numerical example where data points were generated, the first batch having a mean of 4 and the next batch a different mean. The update equations have two parameters which we must set: (i) the data variance \sigma_x^2 and (ii) the drift variance \sigma_w^2. Together, these parameters determine (a) how responsive the tracking will be and (b) how stable it will be. The two plots are for two different values of \sigma_w^2 with \sigma_x^2 fixed. Later we will see how these two parameters can be learnt.

.2.1 A single state variable

We now look at a general methodology for the sequential estimation of a nonstationary parameter (this can be anything, not necessarily the data mean).
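The scalar updates (.9)-(.11) are short enough to state directly in code. The following NumPy sketch applies them to a mean that jumps mid-sequence; the drift and noise variances, and the second-level mean of 10, are assumed values for illustration, not the figure's settings.

```python
import numpy as np

def update_mean(theta_hat, sigma2_hat, x, sigma2_w, sigma2_x):
    """One step of the sequential update for a drifting mean, equations (.9)-(.11)."""
    r = sigma2_hat + sigma2_w   # prior variance: previous variance plus drift variance
    K = r / (sigma2_x + r)      # Kalman gain
    e = x - theta_hat           # prediction error
    return theta_hat + K * e, r * (1.0 - K)

# Track a mean that jumps from 4 to a new level (assumed to be 10) halfway through.
rng = np.random.default_rng(0)
data = np.concatenate([4 + rng.normal(0, 1, 50), 10 + rng.normal(0, 1, 50)])
theta, var = 0.0, 10.0
for x in data:
    theta, var = update_mean(theta, var, x, sigma2_w=0.1, sigma2_x=1.0)
# theta ends up near the second-level mean; var settles to a small steady value
```

A non-zero sigma2_w keeps the posterior variance, and hence the gain, from collapsing to zero, which is what lets the estimate follow the jump.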

Figure .1: Sequential estimation of a nonstationary mean. The graphs plot data values x_t (crosses) and estimated mean values \hat{\theta}_t (circles), along with error bars \hat{\sigma}_t (vertical lines), versus iteration number, for two different drift noise values \sigma_w^2 in panels (a) and (b).

The parameter's evolution is modelled as a linear dynamical system. The state-space equations are

\theta_t = g_t \theta_{t-1} + w_t,   w_t ~ N(w_t; 0, \sigma_w^2)    (.12)
x_t = f_t \theta_t + e_t,   e_t ~ N(e_t; 0, \sigma_x^2)

The value of the parameter at time t is referred to as the state of the system, \theta_t. This state can change deterministically, by being multiplied by g_t, and stochastically, by the addition of a random drift w_t. This drift is referred to as state noise. The observed data (e.g. time series values) are referred to as observations, x_t, which are generated from the state according to the second equation. This allows for a linear transformation plus the addition of observation noise.

At time t-1 our estimate of \theta has a Gaussian distribution with mean \hat{\theta}_{t-1} and variance \hat{\sigma}_{t-1}^2. The prior distribution is therefore given by

p(\theta_t) = N(\theta_t; g_t \hat{\theta}_{t-1}, r_t)    (.13)

where r_t is the prior variance

r_t = g_t^2 \hat{\sigma}_{t-1}^2 + \sigma_w^2    (.14)

and the likelihood is

p(x_t | \theta_t) = N(x_t; f_t g_t \hat{\theta}_{t-1}, \sigma_x^2)    (.15)

The posterior is then given by

p(\theta_t | x_t) = N(\theta_t; \hat{\theta}_t, \hat{\sigma}_t^2)    (.16)

where

\hat{\theta}_t = g_t \hat{\theta}_{t-1} + K_t e_t    (.17)
\hat{\sigma}_t^2 = r_t (1 - K_t f_t)

and

K_t = r_t f_t / (\sigma_x^2 + f_t r_t f_t)    (.18)
e_t = x_t - f_t g_t \hat{\theta}_{t-1}
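The one-dimensional filter transcribes directly. As a sanity check on the sketch below (parameter values are assumed), setting g_t = f_t = 1 with zero state noise collapses it back to stationary mean estimation.

```python
import numpy as np

def kalman_1d(theta_hat, sigma2_hat, x, g, f, sigma2_w, sigma2_x):
    """One step of the one-dimensional Kalman filter, equations (.13)-(.18)."""
    r = g * g * sigma2_hat + sigma2_w    # prior variance
    K = r * f / (sigma2_x + f * r * f)   # Kalman gain
    e = x - f * g * theta_hat            # prediction error
    return g * theta_hat + K * e, r * (1.0 - K * f)

# With g = f = 1 and no state noise this is sequential estimation of a
# stationary mean (the true mean of 5.0 is an assumed test value).
rng = np.random.default_rng(1)
theta, var = 0.0, 10.0
for x in 5.0 + rng.normal(0.0, 0.5, 200):
    theta, var = kalman_1d(theta, var, x, g=1.0, f=1.0, sigma2_w=0.0, sigma2_x=0.25)
```

With sigma2_w = 0 the posterior variance shrinks roughly like \sigma_x^2 / t, recovering the stationary behaviour described earlier.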

The above equations constitute a one-dimensional Kalman filter (the state is one-dimensional because there is only one state variable). Next we consider many state variables.

.3 Multiple state variables

We now consider linear dynamical systems where data are generated according to the model

\theta_t = G_t \theta_{t-1} + w_t,   w_t ~ N(w_t; 0, W_t)    (.19)
y_t = F_t \theta_t + v_t,   v_t ~ N(v_t; 0, V_t)

where \theta_t are `state' or `latent' variables, G_t is a `flow' matrix, w_t is `state noise' distributed according to a normal distribution with zero mean and covariance matrix W_t, y_t are the multivariate observations, F_t is a transformation matrix and v_t is `observation noise' distributed according to a normal distribution with zero mean and covariance matrix V_t. The model is parameterised by the matrices G_t, W_t, F_t and V_t. These parameters may depend on t (as indicated by the subscript).

The Kalman filter is a recursive procedure for estimating the latent variables [9]. Meinhold and Singpurwalla [4] show how this estimation procedure is derived (also see the lecture on Bayesian methods). The latent variables are normally distributed with a mean and covariance that can be estimated with the following recursive formulae

\hat{\theta}_t = G_t \hat{\theta}_{t-1} + K_t e_t    (.20)
\Sigma_t = R_t - K_t F_t R_t

where K_t is the `Kalman gain' matrix, e_t is the prediction error and R_t is the `prior covariance' of the latent variables (that is, prior to y_t being observed). These quantities are calculated as follows

K_t = R_t F_t^T (V_t + F_t R_t F_t^T)^{-1}    (.21)
e_t = y_t - F_t G_t \hat{\theta}_{t-1}
R_t = G_t \Sigma_{t-1} G_t^T + W_t

To apply these equations you need to know the parameters G_t, W_t, F_t and V_t and make initial guesses for the state mean and covariance, \hat{\theta}_0 and \Sigma_0. Equations (.20) and (.21) can then be applied to estimate the state mean and covariance at the next time step. The equations are then applied recursively.
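The recursion is compact in matrix form. Below is a minimal NumPy sketch of equations (.20)-(.21); the two-state tracking demo and all parameter values are assumptions for illustration.

```python
import numpy as np

def kalman_step(theta_hat, Sigma, y, G, W, F, V):
    """One recursion of the multivariate Kalman filter, equations (.20)-(.21)."""
    R = G @ Sigma @ G.T + W                        # prior covariance
    K = R @ F.T @ np.linalg.inv(V + F @ R @ F.T)   # Kalman gain
    e = y - F @ G @ theta_hat                      # prediction error
    return G @ theta_hat + K @ e, R - K @ F @ R

# Estimate a fixed two-dimensional state observed directly with noise.
rng = np.random.default_rng(2)
true_theta = np.array([1.0, -2.0])
G, W = np.eye(2), np.zeros((2, 2))
F, V = np.eye(2), 0.1 * np.eye(2)
theta, Sigma = np.zeros(2), 10.0 * np.eye(2)
for _ in range(300):
    y = true_theta + rng.normal(0.0, np.sqrt(0.1), 2)
    theta, Sigma = kalman_step(theta, Sigma, y, G, W, F, V)
```

With W = 0 and G = I the state is constant, so the covariance \Sigma_t shrinks steadily and the estimate converges on the true state.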

A useful quantity is the likelihood of an observation given the model parameters before they are updated

p(y_t) = N(y_t; F_t G_t \hat{\theta}_{t-1}, V_t + F_t G_t \Sigma_{t-1} G_t^T F_t^T)    (.22)

In Bayesian terminology this likelihood is known as the evidence for the data point [4]. Data points with low evidence correspond to periods when the statistics of the underlying system are changing (nonstationarity) or, less consistently, to data points having large observation noise components.

The state-space equations may be viewed as a dynamic version of factor analysis where the factor, \theta_t, evolves over time according to linear dynamics. Shumway and Stoffer [56] derive an Expectation-Maximisation (EM) algorithm (see next lecture) in which the parameters of the model, G, W and V, can all be learnt. Only F is assumed known. Note that these parameters are no longer dependent on t. This does not, however, mean that the model is no longer dynamic; the state, \theta_t, is still time dependent. Ghahramani and Hinton [] have recently extended the algorithm to allow F to be learnt as well. These learning algorithms are batch learning algorithms rather than recursive update procedures. They are therefore not suitable for `on-line' learning (where the learning algorithm has only one `look' at each observation).

In the engineering and statistical forecasting literature [44] [] the transformation matrix, F_t, is known. It is related to the observed time series (or other observed time series) according to a known deterministic function set by the statistician or `model builder'. Assumptions are then made about the flow matrix, G_t. Assumptions are also made about the state noise covariance, W_t, and the observation noise covariance, V_t, or they are estimated on-line. We now look at a set of assumptions which reduces the Kalman filter to a `Dynamic Linear Model'.

.4 Dynamic Linear Models

In this section we consider Dynamic Linear Models (DLMs) [], which for a univariate time series are

\theta_t = \theta_{t-1} + w_t,   w_t ~ N(w_t; 0, W_t)    (.23)
y_t = F_t \theta_t + v_t,   v_t ~ N(v_t; 0, \sigma_t^2)

This is a linear regression model with time-varying coefficients. It is identical to the generic Kalman filter model with G_t = I. Substituting this into the update equations gives

\hat{\theta}_t = \hat{\theta}_{t-1} + K_t e_t    (.24)
\Sigma_t = R_t - K_t F_t R_t

where

K_t = R_t F_t^T / \hat{\sigma}_{y_t}^2    (.25)
R_t = \Sigma_{t-1} + W_t
\hat{\sigma}_{y_t}^2 = \sigma_t^2 + \hat{\sigma}_{\theta_t}^2
\hat{\sigma}_{\theta_t}^2 = F_t R_t F_t^T

e_t = y_t - \hat{y}_t    (.26)
\hat{y}_t = F_t \hat{\theta}_{t-1}

where \hat{y}_t is the prediction and \hat{\sigma}_{y_t}^2 is the estimated prediction variance. This is composed of two terms: the observation noise, \sigma_t^2, and the component of prediction variance due to state uncertainty, \hat{\sigma}_{\theta_t}^2. The likelihood of a data point under the old model (or evidence) is

p(y_t) = N(y_t; \hat{y}_t, \hat{\sigma}_{y_t}^2)    (.27)

If we make the further assumption that the transformation vector (it is no longer a matrix because we have univariate predictions) is equal to F_t = [y_{t-1}, y_{t-2}, ..., y_{t-p}] then we have a Dynamic Autoregressive (DAR) model.

To apply the model we make initial guesses for the state (AR parameters) mean and covariance (\hat{\theta}_0 and \Sigma_0) and use the above equations. We must also plug in guesses for the state noise covariance, W_t, and the observation noise variance, \sigma_t^2. In a later section we show how these can be estimated on-line. It is also often assumed that the state noise covariance matrix is the isotropic matrix, W = q I. Next, we look at a set of assumptions that reduce the Kalman filter to Recursive Least Squares.

.5 Recursive least squares

If there is no state noise (w_t = 0, W_t = 0) and no state flow (G_t = I) then the linear dynamical system in equation (.19) reduces to a static linear system (\theta_t = \theta). If we further assume that our observations are univariate we can rewrite the state-space equations as

y_t = F_t \theta + v_t,   v_t ~ N(v_t; 0, \sigma^2)    (.28)

This is a regression model with constant coefficients. We can, however, estimate these coefficients in a recursive manner by substituting our assumptions about W_t, G_t and \theta_t into the Kalman filter update equations. This gives
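As a concrete sketch, the DLM updates with the DAR choice of F_t can be coded directly. The sine-wave test signal, the order p = 2, and the noise values below are assumptions for illustration, not the chapter's example. A noiseless sinusoid obeys y_t = 2 cos(w) y_{t-1} - y_{t-2}, so the fitted AR(2) coefficients should approach [2 cos(w), -1].

```python
import numpy as np

def dlm_step(theta_hat, Sigma, y, F, W, sigma2):
    """One DLM update (Kalman filter with G = I, univariate y), equations (.24)-(.26)."""
    R = Sigma + W              # prior covariance
    y_hat = F @ theta_hat      # prediction
    s2_y = sigma2 + F @ R @ F  # predicted variance: noise plus state uncertainty
    K = R @ F / s2_y           # Kalman gain (a vector for univariate y)
    theta_hat = theta_hat + K * (y - y_hat)
    return theta_hat, R - np.outer(K, F) @ R

# Dynamic AR (DAR) model: F_t holds the p previous samples.
t = np.arange(200)
y = np.sin(2 * np.pi * 5 * t / 100)   # an assumed 5 Hz sine sampled at 100 Hz
p = 2
theta, Sigma = np.zeros(p), np.eye(p)
W = 1e-6 * np.eye(p)                  # small assumed state noise
for n in range(p, len(y)):
    F = y[n - p:n][::-1]              # [y_{t-1}, y_{t-2}]
    theta, Sigma = dlm_step(theta, Sigma, y[n], F, W, sigma2=0.01)
```

Because the regression relation holds exactly for a clean sinusoid, the coefficients settle quickly; the tiny W merely keeps the gain from freezing entirely.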

\hat{\theta}_t = \hat{\theta}_{t-1} + K_t e_t    (.29)
\Sigma_t = \Sigma_{t-1} - K_t F_t \Sigma_{t-1}    (.30)

where

K_t = \Sigma_{t-1} F_t^T / \hat{\sigma}_{y_t}^2    (.31)
\hat{\sigma}_{y_t}^2 = \sigma^2 + F_t \Sigma_{t-1} F_t^T

e_t = y_t - \hat{y}_t    (.32)
\hat{y}_t = F_t \hat{\theta}_{t-1}

where \hat{y}_t is the prediction and \hat{\sigma}_{y_t}^2 is the estimated prediction variance. This is composed of two terms: the observation noise, \sigma^2, and the component of prediction variance due to state uncertainty. The above equations are identical to the update equations for recursive least squares (RLS) as defined by Abraham and Ledolter (equation (8.6) in []). The likelihood of a data point under the old model (or evidence) is

p(y_t) = N(y_t; \hat{y}_t, \hat{\sigma}_{y_t}^2)    (.33)

If we make the further assumption that the transformation vector (it is no longer a matrix because we have univariate predictions) is equal to F_t = [y_{t-1}, y_{t-2}, ..., y_{t-p}] then we have a recursive least squares estimation procedure for an autoregressive (AR) model.

To apply the model we make initial guesses for the state (AR parameters) mean and covariance (\hat{\theta}_0 and \Sigma_0) and use the above equations. We must also plug in our guess for the observation noise variance, \sigma^2. In a later section we show how this can be estimated on-line.

.6 Estimation of noise parameters

To use the DLM update equations it is necessary to make guesses for the state noise covariance, W_t, and the observation noise variance, \sigma^2. In this section we show how these can be estimated on-line. Note, we either estimate the state noise or the observation noise, not both.
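The same code pattern with W = 0 gives RLS. In the sketch below, the two-coefficient regression problem and the noise values are assumptions for illustration.

```python
import numpy as np

def rls_step(theta_hat, Sigma, y, F, sigma2):
    """One recursive least squares update, equations (.29)-(.32)."""
    s2_y = sigma2 + F @ Sigma @ F   # estimated prediction variance
    K = Sigma @ F / s2_y            # gain
    e = y - F @ theta_hat           # prediction error
    return theta_hat + K * e, Sigma - np.outer(K, F) @ Sigma

# Constant-coefficient regression: y = 2*x1 - 3*x2 + noise.
rng = np.random.default_rng(3)
theta, Sigma = np.zeros(2), 100.0 * np.eye(2)
for _ in range(500):
    F = rng.normal(0.0, 1.0, 2)
    y = 2.0 * F[0] - 3.0 * F[1] + rng.normal(0.0, 0.1)
    theta, Sigma = rls_step(theta, Sigma, y, F, sigma2=0.01)
```

Note how \Sigma_t only ever shrinks here: with no state noise there is nothing to reinflate it, which is exactly the stationarity property discussed later.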

Jazwinski's method for estimating state noise

This method, reviewed in [4], is ultimately due to Jazwinski [8], who derives the following equations using the ML-II approach (see Bayes lecture). We assume that the state noise covariance matrix is the isotropic matrix, W = q I. The parameter q can be updated according to

q_t = h( (e_t^2 - \hat{\sigma}_{q=0,t}^2) / (F_t F_t^T) )    (.34)

where h(x) is the `ramp' function

h(x) = x if x >= 0, and 0 otherwise    (.35)

and \hat{\sigma}_{q=0,t}^2 is the estimated prediction variance assuming that q = 0

\hat{\sigma}_{q=0,t}^2 = \sigma^2 + F_t \Sigma_{t-1} F_t^T    (.36)

Thus, if our estimate of prediction error assuming no state noise is smaller than our observed error (e_t^2), we should infer that the state noise is non-zero. This will happen when we transit from one stationary regime to another; our estimate of q will increase. This, in turn, will increase the learning rate (see later section). A smoothed estimate is

q_t = \gamma q_{t-1} + (1 - \gamma) h( (e_t^2 - \hat{\sigma}_{q=0,t}^2) / (F_t F_t^T) )    (.37)

where \gamma is a smoothing parameter. Alternatively, equation (.34) can be applied to a window of samples [4].

Jazwinski's method for estimating observation noise

This method, reviewed in [4], is ultimately due to Jazwinski [8], who derives the following equations by applying the ML-II framework (see Bayes lecture). Equation (.26) shows that the estimated prediction variance is composed of two components: the observation noise and the component due to state uncertainty. Thus, to estimate the observation noise one needs to subtract the second component from the measured squared error

\sigma_t^2 = h( e_t^2 - F_t R_t F_t^T )    (.38)
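A sketch of the smoothed state-noise update (.37); the toy numbers and the gamma = 0.5 default are assumed values.

```python
import numpy as np

def jazwinski_q(e, F, Sigma, sigma2, q_prev, gamma=0.5):
    """Smoothed Jazwinski estimate of the state noise parameter q, equations (.34)-(.37)."""
    s2_q0 = sigma2 + F @ Sigma @ F               # predicted variance assuming q = 0
    ramp = max((e * e - s2_q0) / (F @ F), 0.0)   # h() is the ramp function
    return gamma * q_prev + (1.0 - gamma) * ramp

# Small errors are explained by the q = 0 model, so q stays at zero;
# a sudden large error drives q up (all numbers are illustrative).
Sigma = 0.01 * np.eye(2)
F = np.array([1.0, 1.0])
q_small = jazwinski_q(e=0.05, F=F, Sigma=Sigma, sigma2=0.01, q_prev=0.0)
q_big = jazwinski_q(e=2.0, F=F, Sigma=Sigma, sigma2=0.01, q_prev=q_small)
```

The ramp guarantees q never goes negative: excess error inflates the state noise, while error already accounted for by the q = 0 model leaves it untouched.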

This estimate can be derived by setting \sigma_t^2 so as to maximise the evidence (likelihood) of a new data point (equation (.27)). A smoothed estimate is

\sigma_t^2 = \gamma \sigma_{t-1}^2 + (1 - \gamma) h( e_t^2 - F_t R_t F_t^T )    (.39)

where \gamma is a smoothing parameter. Alternatively, equation (.38) can be applied to a window of samples [4].

For RLS these update equations can be used by substituting R_t = \Sigma_{t-1}. We stress, however, that this estimate is especially unsuitable for RLS applied to nonstationary data (but then you should only use RLS for stationary data, anyway). This is because the learning rate becomes dramatically decreased. We also stress that Jazwinski's methods cannot both be applied at the same time; the `extra' prediction error is explained either as greater observation noise or as greater state noise.

Skagen's method

Skagen [57] lets W = \kappa \sigma^2 I, i.e. assumes the state noise covariance is isotropic with a variance that is proportional to the observation noise \sigma^2. He observes that if \kappa is kept fixed then varying \sigma^2 over six orders of magnitude has little or no effect on the Kalman filter updates. He therefore sets \sigma^2 to an arbitrary value. He then defines a measure R as the relative reduction in prediction error due to adaptation and chooses \kappa to give a value of R = 0.5.

.7 Comparison with steepest descent

For a linear predictor, the learning rule for `on-line' steepest descent is [3]

\hat{\theta}_t = \hat{\theta}_{t-1} + \alpha F_t^T e_t    (.40)

where \alpha is the learning rate, which is fixed and chosen arbitrarily beforehand. This method is otherwise known as Least Mean Squares (LMS). Haykin [7] (page 36) discusses the conditions on \alpha which lead to a convergent learning process. Comparison of the above rule with the DLM learning rule in equation (.25) shows that DLM has a learning rate matrix equal to

\alpha_t = (\Sigma_{t-1} + q I) / (\sigma^2 + \hat{\sigma}_{\theta_t}^2)    (.41)

The average learning rate, averaged over all state variables, is given by

\alpha_{DLM} = Tr( \Sigma_{t-1} + q I ) / ( p (\sigma^2 + \hat{\sigma}_{\theta_t}^2) )    (.42)

where Tr() denotes the trace of the covariance matrix and p is the number of state variables. DLM thus uses a learning rate which is directly proportional to the variance of the state variables and inversely proportional to the estimated prediction variance.

If the prediction variance due to state uncertainty is significantly smaller than the prediction variance due to state noise, as it will be once the filter has reached a steady solution, then increasing the state noise parameter, q, will increase the learning rate. This is the mechanism by which DLM increases its learning rate when a new dynamic regime is encountered.

The average learning rate for the RLS filter is

\alpha_{RLS} = Tr( \Sigma_{t-1} ) / ( p (\sigma^2 + \hat{\sigma}_{\theta_t}^2) )    (.43)

As there is no state noise (q = 0) there is no mechanism by which the learning rate can be increased when a new dynamic regime is encountered. This underlines the fact that RLS is a stationary model.

In fact, RLS behaves particularly poorly when given nonstationary data. When a new dynamic regime is encountered, the prediction error will increase (and so may \sigma^2 if we're updating it online). This leads not to the desired increase in learning rate, but to a decrease. For stationary data, however, the RLS model behaves well. As the model encounters more data the parameter covariance matrix decreases, which in turn leads to a decrease in learning rate. In on-line gradient descent learning it is desirable to start with a high learning rate (to achieve faster convergence) but end with a low learning rate (to prevent oscillation). RLS exhibits the desirable property of adapting its learning rate in exactly this manner. DLM also exhibits this property when given stationary data but, when given nonstationary data, has the added property of being able to increase its learning rate when necessary.
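The two average rates differ only in the q term. A small sketch makes the comparison concrete (all numbers are assumed):

```python
import numpy as np

def avg_lr(Sigma, q, sigma2, s2_theta):
    """Average learning rate, equations (.42)-(.43); q = 0 gives the RLS rate."""
    p = Sigma.shape[0]
    return np.trace(Sigma + q * np.eye(p)) / (p * (sigma2 + s2_theta))

Sigma = 0.01 * np.eye(4)   # a settled filter: small state covariance
lr_rls = avg_lr(Sigma, q=0.0, sigma2=0.1, s2_theta=0.05)
lr_dlm = avg_lr(Sigma, q=0.5, sigma2=0.1, s2_theta=0.05)
# a non-zero state noise q lifts the DLM rate above the RLS rate
```

Once \Sigma_{t-1} has shrunk, q is the only term that can push the numerator, and hence the learning rate, back up.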
We conclude this section by noting that DLM and RLS may be viewed as linear online gradient descent estimators with variable learning rates: RLS for stationary data and DLM for nonstationary data.

.8 Other algorithms

The Least Mean Squares (LMS) algorithm [7] (Chapter 9) is identical to the steepest-descent method (as described in this paper); both methods have constant learning rates.

Our comments on the RLS algorithm are relevant to RLS as defined by Abraham and Ledolter []. There are, however, a number of variants of RLS. Haykin [7] (page 564) defines an exponentially weighted RLS algorithm, where past samples are given exponentially less attention than more recent samples. This gives rise to a limited tracking ability (see chapter 6 in [7]). The tracking ability can be further improved by adding state noise (Extended RLS-1 [7], page 76) or a non-constant state transition matrix (Extended RLS-2 [7], page 77). The Extended RLS-1 algorithm is therefore similar to the DAR model described in this paper.

.9 An example

This example demonstrates the basic functioning of the dynamic AR model and compares it to RLS. A time series was generated consisting of a sine wave of one frequency in the first second, a second frequency in the second second and a 3Hz sine wave in the third second. All signals contained additive Gaussian noise. One hundred samples were generated per second.

A DAR model with p = 8 AR coefficients was trained on the data. The algorithm was given a fixed value of observation noise \sigma^2. The state noise was initially set to zero and was adapted using Jazwinski's algorithm described in equation (.34), using the smoothed estimate of equation (.37). The model was initialised using linear regression: the first p data points were regressed onto the (p+1)th data point using an SVD implementation of least squares, resulting in the linear regression weight vector w_LR. The state at time step t = p+1 was initialised to this weight vector, \hat{\theta}_{p+1} = w_LR. The initial state covariance matrix was set to the linear regression covariance matrix. Model parameters before time p+1 were set to zero.

An RLS model (with p = 8 AR coefficients) was also trained on the data and given the same fixed value of observation noise. The model was initialised by setting \hat{\theta}_{p+1} = w_LR and \Sigma_{p+1} = I (setting \Sigma_{p+1} to the linear regression covariance matrix resulted in an initial learning rate that wasn't sufficiently large for the model to adapt to the data; see later).

Figure .2 shows the original time series and the evidence of each point in the time series under the DAR model. Data points occurring at the transitions between different dynamic regimes have low evidence.

Figure .3 shows that the state noise parameter, q, increases by an amount necessary for the estimated prediction error to equal the actual prediction error. The state noise is high at transitions between different dynamic regimes. Within each dynamic regime the state noise is zero.

Figure .4 shows that the variance of the state variables reduces as the model is exposed to more data from the same stationary regime. When a new stationary regime is encountered the state variance increases (because q increases).
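The experiment can be reproduced in miniature. The sketch below uses assumed frequencies of 10, 20 and 30 Hz, noise of standard deviation 0.1, a fixed sigma2 = 0.01 and a smoothing value of 0.5, and skips the linear-regression initialisation; it only checks the headline result, that the adaptive (DAR) model accumulates less squared prediction error than RLS across the regime changes.

```python
import numpy as np

def dlm_step(theta, Sigma, y, F, W, sigma2):
    # one DLM update (G = I); returns new state, covariance and prediction error
    R = Sigma + W
    K = R @ F / (sigma2 + F @ R @ F)
    e = y - F @ theta
    return theta + K * e, R - np.outer(K, F) @ R, e

# Nonstationary test signal: a different (assumed) sine frequency in each second.
fs, p, sigma2 = 100, 8, 0.01
t = np.arange(fs) / fs
y = np.concatenate([np.sin(2 * np.pi * f * t) for f in (10, 20, 30)])
y = y + np.random.default_rng(4).normal(0.0, 0.1, len(y))

sq_err = {}
for name, adapt_q in (("DAR", True), ("RLS", False)):
    theta, Sigma, q, total = np.zeros(p), np.eye(p), 0.0, 0.0
    for n in range(p, len(y)):
        F = y[n - p:n][::-1]                       # the p previous samples
        W = q * np.eye(p) if adapt_q else np.zeros((p, p))
        theta, Sigma, e = dlm_step(theta, Sigma, y[n], F, W, sigma2)
        if adapt_q:                                # Jazwinski's smoothed q update
            ramp = max((e * e - (sigma2 + F @ Sigma @ F)) / (F @ F), 0.0)
            q = 0.5 * q + 0.5 * ramp
        total += e * e
    sq_err[name] = total
# sq_err["DAR"] should come out smaller than sq_err["RLS"]
```

The mechanism is the one described above: at each frequency change the DAR model's q spikes, reinflating \Sigma and the learning rate, while the RLS covariance keeps shrinking.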

Figure .2: (a) Original time series. (b) Log evidence of data points under the DAR model, log p(y_t).

Figure .5 shows that the learning rate of the DAR model increases when the system enters a new stationary regime, whereas the learning rate of RLS actually decreases. The RLS learning rate is initially higher because the state covariance matrix was initialised differently (initialising it in the same way gave much poorer RLS spectral estimates). Figure .6 shows the spectral estimates obtained from the DAR and RLS models. The learning rate plots and spectrogram plots show that DAR is suitable for nonstationary data whereas RLS is not.

.10 Discussion

Dynamic Linear Models, Recursive Least Squares and Steepest-Descent Learning are special cases of linear dynamical systems and their learning rules are special cases of the Kalman filter.

Steepest-Descent Learning is suitable for modelling stationary data. It uses a learning rate parameter which needs to be high at the beginning of learning (to ensure fast learning) but low at the end of learning (to prevent oscillations). The learning rate parameter is usually hand-tuned to fulfil these criteria.

Recursive Least Squares is also suitable for modelling stationary data. It has the advantage of having an adaptive learning rate that reduces gradually as learning proceeds. It reduces in response to a reduction in the uncertainty (covariance) of the model parameters.

Dynamic Linear Models are suitable for stationary and nonstationary environments. The models possess state noise and observation noise parameters which can be updated on-line so as to maximise the evidence of the observations.

Figure .3: (a) Squared prediction error, e_t^2. (b) Estimated prediction error with q = 0, \hat{\sigma}_{q=0,t}^2. (c) Estimated prediction error, \hat{\sigma}_{y_t}^2 (the baseline level is due to the fixed observation noise component, \sigma^2). (d) Estimate of the state noise variance, q. The state noise, q, increases by an amount necessary for the estimated prediction error (plot c) to equal the actual prediction error (plot a); see equation (.34).

Figure .4: Average prior variance of the state variables, Tr(R_t)/p. As the model is exposed to more data from the same stationary regime the estimates of the state variables become more accurate (less variance). When a new stationary regime is encountered the state variance increases (because q increases).

Figure .5: Average learning rates for (a) the DAR model and (b) the RLS model. The learning rate for RLS is set to a higher initial value (indirectly, by setting \Sigma_{p+1} to have larger entries) to give it a better chance of tracking the data. The DAR model responds to a new dynamic regime by increasing the learning rate. RLS responds by decreasing the learning rate and is therefore unable to track the nonstationarity.

Figure .6: Spectrograms (time in seconds versus frequency) for (a) the DAR model and (b) the RLS model.


More information

10. State Space Methods

10. State Space Methods . Sae Space Mehods. Inroducion Sae space modelling was briefly inroduced in chaper. Here more coverage is provided of sae space mehods before some of heir uses in conrol sysem design are covered in he

More information

Temporal probability models. Chapter 15, Sections 1 5 1

Temporal probability models. Chapter 15, Sections 1 5 1 Temporal probabiliy models Chaper 15, Secions 1 5 Chaper 15, Secions 1 5 1 Ouline Time and uncerainy Inerence: ilering, predicion, smoohing Hidden Markov models Kalman ilers (a brie menion) Dynamic Bayesian

More information

Time series model fitting via Kalman smoothing and EM estimation in TimeModels.jl

Time series model fitting via Kalman smoothing and EM estimation in TimeModels.jl Time series model fiing via Kalman smoohing and EM esimaion in TimeModels.jl Gord Sephen Las updaed: January 206 Conens Inroducion 2. Moivaion and Acknowledgemens....................... 2.2 Noaion......................................

More information

STRUCTURAL CHANGE IN TIME SERIES OF THE EXCHANGE RATES BETWEEN YEN-DOLLAR AND YEN-EURO IN

STRUCTURAL CHANGE IN TIME SERIES OF THE EXCHANGE RATES BETWEEN YEN-DOLLAR AND YEN-EURO IN Inernaional Journal of Applied Economerics and Quaniaive Sudies. Vol.1-3(004) STRUCTURAL CHANGE IN TIME SERIES OF THE EXCHANGE RATES BETWEEN YEN-DOLLAR AND YEN-EURO IN 001-004 OBARA, Takashi * Absrac The

More information

On Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature

On Measuring Pro-Poor Growth. 1. On Various Ways of Measuring Pro-Poor Growth: A Short Review of the Literature On Measuring Pro-Poor Growh 1. On Various Ways of Measuring Pro-Poor Growh: A Shor eview of he Lieraure During he pas en years or so here have been various suggesions concerning he way one should check

More information

Deep Learning: Theory, Techniques & Applications - Recurrent Neural Networks -

Deep Learning: Theory, Techniques & Applications - Recurrent Neural Networks - Deep Learning: Theory, Techniques & Applicaions - Recurren Neural Neworks - Prof. Maeo Maeucci maeo.maeucci@polimi.i Deparmen of Elecronics, Informaion and Bioengineering Arificial Inelligence and Roboics

More information

ODEs II, Lecture 1: Homogeneous Linear Systems - I. Mike Raugh 1. March 8, 2004

ODEs II, Lecture 1: Homogeneous Linear Systems - I. Mike Raugh 1. March 8, 2004 ODEs II, Lecure : Homogeneous Linear Sysems - I Mike Raugh March 8, 4 Inroducion. In he firs lecure we discussed a sysem of linear ODEs for modeling he excreion of lead from he human body, saw how o ransform

More information

Testing the Random Walk Model. i.i.d. ( ) r

Testing the Random Walk Model. i.i.d. ( ) r he random walk heory saes: esing he Random Walk Model µ ε () np = + np + Momen Condiions where where ε ~ i.i.d he idea here is o es direcly he resricions imposed by momen condiions. lnp lnp µ ( lnp lnp

More information

Modeling Economic Time Series with Stochastic Linear Difference Equations

Modeling Economic Time Series with Stochastic Linear Difference Equations A. Thiemer, SLDG.mcd, 6..6 FH-Kiel Universiy of Applied Sciences Prof. Dr. Andreas Thiemer e-mail: andreas.hiemer@fh-kiel.de Modeling Economic Time Series wih Sochasic Linear Difference Equaions Summary:

More information

T L. t=1. Proof of Lemma 1. Using the marginal cost accounting in Equation(4) and standard arguments. t )+Π RB. t )+K 1(Q RB

T L. t=1. Proof of Lemma 1. Using the marginal cost accounting in Equation(4) and standard arguments. t )+Π RB. t )+K 1(Q RB Elecronic Companion EC.1. Proofs of Technical Lemmas and Theorems LEMMA 1. Le C(RB) be he oal cos incurred by he RB policy. Then we have, T L E[C(RB)] 3 E[Z RB ]. (EC.1) Proof of Lemma 1. Using he marginal

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Spike-count autocorrelations in time.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Spike-count autocorrelations in time. Supplemenary Figure 1 Spike-coun auocorrelaions in ime. Normalized auocorrelaion marices are shown for each area in a daase. The marix shows he mean correlaion of he spike coun in each ime bin wih he spike

More information

NCSS Statistical Software. , contains a periodic (cyclic) component. A natural model of the periodic component would be

NCSS Statistical Software. , contains a periodic (cyclic) component. A natural model of the periodic component would be NCSS Saisical Sofware Chaper 468 Specral Analysis Inroducion This program calculaes and displays he periodogram and specrum of a ime series. This is someimes nown as harmonic analysis or he frequency approach

More information

Lecture 4 Notes (Little s Theorem)

Lecture 4 Notes (Little s Theorem) Lecure 4 Noes (Lile s Theorem) This lecure concerns one of he mos imporan (and simples) heorems in Queuing Theory, Lile s Theorem. More informaion can be found in he course book, Bersekas & Gallagher,

More information

Institute for Mathematical Methods in Economics. University of Technology Vienna. Singapore, May Manfred Deistler

Institute for Mathematical Methods in Economics. University of Technology Vienna. Singapore, May Manfred Deistler MULTIVARIATE TIME SERIES ANALYSIS AND FORECASTING Manfred Deisler E O S Economerics and Sysems Theory Insiue for Mahemaical Mehods in Economics Universiy of Technology Vienna Singapore, May 2004 Inroducion

More information

Estimation of Poses with Particle Filters

Estimation of Poses with Particle Filters Esimaion of Poses wih Paricle Filers Dr.-Ing. Bernd Ludwig Chair for Arificial Inelligence Deparmen of Compuer Science Friedrich-Alexander-Universiä Erlangen-Nürnberg 12/05/2008 Dr.-Ing. Bernd Ludwig (FAU

More information

Speaker Adaptation Techniques For Continuous Speech Using Medium and Small Adaptation Data Sets. Constantinos Boulis

Speaker Adaptation Techniques For Continuous Speech Using Medium and Small Adaptation Data Sets. Constantinos Boulis Speaker Adapaion Techniques For Coninuous Speech Using Medium and Small Adapaion Daa Ses Consaninos Boulis Ouline of he Presenaion Inroducion o he speaker adapaion problem Maximum Likelihood Sochasic Transformaions

More information

The Rosenblatt s LMS algorithm for Perceptron (1958) is built around a linear neuron (a neuron with a linear

The Rosenblatt s LMS algorithm for Perceptron (1958) is built around a linear neuron (a neuron with a linear In The name of God Lecure4: Percepron and AALIE r. Majid MjidGhoshunih Inroducion The Rosenbla s LMS algorihm for Percepron 958 is buil around a linear neuron a neuron ih a linear acivaion funcion. Hoever,

More information

ACE 562 Fall Lecture 4: Simple Linear Regression Model: Specification and Estimation. by Professor Scott H. Irwin

ACE 562 Fall Lecture 4: Simple Linear Regression Model: Specification and Estimation. by Professor Scott H. Irwin ACE 56 Fall 005 Lecure 4: Simple Linear Regression Model: Specificaion and Esimaion by Professor Sco H. Irwin Required Reading: Griffihs, Hill and Judge. "Simple Regression: Economic and Saisical Model

More information

Unit Root Time Series. Univariate random walk

Unit Root Time Series. Univariate random walk Uni Roo ime Series Univariae random walk Consider he regression y y where ~ iid N 0, he leas squares esimae of is: ˆ yy y y yy Now wha if = If y y hen le y 0 =0 so ha y j j If ~ iid N 0, hen y ~ N 0, he

More information

Maximum Likelihood Parameter Estimation in State-Space Models

Maximum Likelihood Parameter Estimation in State-Space Models Maximum Likelihood Parameer Esimaion in Sae-Space Models Arnaud Douce Deparmen of Saisics, Oxford Universiy Universiy College London 4 h Ocober 212 A. Douce (UCL Maserclass Oc. 212 4 h Ocober 212 1 / 32

More information

Air Traffic Forecast Empirical Research Based on the MCMC Method

Air Traffic Forecast Empirical Research Based on the MCMC Method Compuer and Informaion Science; Vol. 5, No. 5; 0 ISSN 93-8989 E-ISSN 93-8997 Published by Canadian Cener of Science and Educaion Air Traffic Forecas Empirical Research Based on he MCMC Mehod Jian-bo Wang,

More information

Vectorautoregressive Model and Cointegration Analysis. Time Series Analysis Dr. Sevtap Kestel 1

Vectorautoregressive Model and Cointegration Analysis. Time Series Analysis Dr. Sevtap Kestel 1 Vecorauoregressive Model and Coinegraion Analysis Par V Time Series Analysis Dr. Sevap Kesel 1 Vecorauoregression Vecor auoregression (VAR) is an economeric model used o capure he evoluion and he inerdependencies

More information

WEEK-3 Recitation PHYS 131. of the projectile s velocity remains constant throughout the motion, since the acceleration a x

WEEK-3 Recitation PHYS 131. of the projectile s velocity remains constant throughout the motion, since the acceleration a x WEEK-3 Reciaion PHYS 131 Ch. 3: FOC 1, 3, 4, 6, 14. Problems 9, 37, 41 & 71 and Ch. 4: FOC 1, 3, 5, 8. Problems 3, 5 & 16. Feb 8, 018 Ch. 3: FOC 1, 3, 4, 6, 14. 1. (a) The horizonal componen of he projecile

More information

ECON 482 / WH Hong Time Series Data Analysis 1. The Nature of Time Series Data. Example of time series data (inflation and unemployment rates)

ECON 482 / WH Hong Time Series Data Analysis 1. The Nature of Time Series Data. Example of time series data (inflation and unemployment rates) ECON 48 / WH Hong Time Series Daa Analysis. The Naure of Time Series Daa Example of ime series daa (inflaion and unemploymen raes) ECON 48 / WH Hong Time Series Daa Analysis The naure of ime series daa

More information

Math 333 Problem Set #2 Solution 14 February 2003

Math 333 Problem Set #2 Solution 14 February 2003 Mah 333 Problem Se #2 Soluion 14 February 2003 A1. Solve he iniial value problem dy dx = x2 + e 3x ; 2y 4 y(0) = 1. Soluion: This is separable; we wrie 2y 4 dy = x 2 + e x dx and inegrae o ge The iniial

More information

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19

m = 41 members n = 27 (nonfounders), f = 14 (founders) 8 markers from chromosome 19 Sequenial Imporance Sampling (SIS) AKA Paricle Filering, Sequenial Impuaion (Kong, Liu, Wong, 994) For many problems, sampling direcly from he arge disribuion is difficul or impossible. One reason possible

More information

Solutions to Odd Number Exercises in Chapter 6

Solutions to Odd Number Exercises in Chapter 6 1 Soluions o Odd Number Exercises in 6.1 R y eˆ 1.7151 y 6.3 From eˆ ( T K) ˆ R 1 1 SST SST SST (1 R ) 55.36(1.7911) we have, ˆ 6.414 T K ( ) 6.5 y ye ye y e 1 1 Consider he erms e and xe b b x e y e b

More information

Types of Exponential Smoothing Methods. Simple Exponential Smoothing. Simple Exponential Smoothing

Types of Exponential Smoothing Methods. Simple Exponential Smoothing. Simple Exponential Smoothing M Business Forecasing Mehods Exponenial moohing Mehods ecurer : Dr Iris Yeung Room No : P79 Tel No : 788 8 Types of Exponenial moohing Mehods imple Exponenial moohing Double Exponenial moohing Brown s

More information

Ensamble methods: Bagging and Boosting

Ensamble methods: Bagging and Boosting Lecure 21 Ensamble mehods: Bagging and Boosing Milos Hauskrech milos@cs.pi.edu 5329 Senno Square Ensemble mehods Mixure of expers Muliple base models (classifiers, regressors), each covers a differen par

More information

Some Basic Information about M-S-D Systems

Some Basic Information about M-S-D Systems Some Basic Informaion abou M-S-D Sysems 1 Inroducion We wan o give some summary of he facs concerning unforced (homogeneous) and forced (non-homogeneous) models for linear oscillaors governed by second-order,

More information

Licenciatura de ADE y Licenciatura conjunta Derecho y ADE. Hoja de ejercicios 2 PARTE A

Licenciatura de ADE y Licenciatura conjunta Derecho y ADE. Hoja de ejercicios 2 PARTE A Licenciaura de ADE y Licenciaura conjuna Derecho y ADE Hoja de ejercicios PARTE A 1. Consider he following models Δy = 0.8 + ε (1 + 0.8L) Δ 1 y = ε where ε and ε are independen whie noise processes. In

More information

Lecture Notes 2. The Hilbert Space Approach to Time Series

Lecture Notes 2. The Hilbert Space Approach to Time Series Time Series Seven N. Durlauf Universiy of Wisconsin. Basic ideas Lecure Noes. The Hilber Space Approach o Time Series The Hilber space framework provides a very powerful language for discussing he relaionship

More information

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010

Simulation-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Simulaion-Solving Dynamic Models ABE 5646 Week 2, Spring 2010 Week Descripion Reading Maerial 2 Compuer Simulaion of Dynamic Models Finie Difference, coninuous saes, discree ime Simple Mehods Euler Trapezoid

More information

DEPARTMENT OF STATISTICS

DEPARTMENT OF STATISTICS A Tes for Mulivariae ARCH Effecs R. Sco Hacker and Abdulnasser Haemi-J 004: DEPARTMENT OF STATISTICS S-0 07 LUND SWEDEN A Tes for Mulivariae ARCH Effecs R. Sco Hacker Jönköping Inernaional Business School

More information

Tom Heskes and Onno Zoeter. Presented by Mark Buller

Tom Heskes and Onno Zoeter. Presented by Mark Buller Tom Heskes and Onno Zoeer Presened by Mark Buller Dynamic Bayesian Neworks Direced graphical models of sochasic processes Represen hidden and observed variables wih differen dependencies Generalize Hidden

More information

Homework 10 (Stats 620, Winter 2017) Due Tuesday April 18, in class Questions are derived from problems in Stochastic Processes by S. Ross.

Homework 10 (Stats 620, Winter 2017) Due Tuesday April 18, in class Questions are derived from problems in Stochastic Processes by S. Ross. Homework (Sas 6, Winer 7 Due Tuesday April 8, in class Quesions are derived from problems in Sochasic Processes by S. Ross.. A sochasic process {X(, } is said o be saionary if X(,..., X( n has he same

More information

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon

3.1.3 INTRODUCTION TO DYNAMIC OPTIMIZATION: DISCRETE TIME PROBLEMS. A. The Hamiltonian and First-Order Conditions in a Finite Time Horizon 3..3 INRODUCION O DYNAMIC OPIMIZAION: DISCREE IME PROBLEMS A. he Hamilonian and Firs-Order Condiions in a Finie ime Horizon Define a new funcion, he Hamilonian funcion, H. H he change in he oal value of

More information

Introduction D P. r = constant discount rate, g = Gordon Model (1962): constant dividend growth rate.

Introduction D P. r = constant discount rate, g = Gordon Model (1962): constant dividend growth rate. Inroducion Gordon Model (1962): D P = r g r = consan discoun rae, g = consan dividend growh rae. If raional expecaions of fuure discoun raes and dividend growh vary over ime, so should he D/P raio. Since

More information

3.1 More on model selection

3.1 More on model selection 3. More on Model selecion 3. Comparing models AIC, BIC, Adjused R squared. 3. Over Fiing problem. 3.3 Sample spliing. 3. More on model selecion crieria Ofen afer model fiing you are lef wih a handful of

More information

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits DOI: 0.545/mjis.07.5009 Exponenial Weighed Moving Average (EWMA) Char Under The Assumpion of Moderaeness And Is 3 Conrol Limis KALPESH S TAILOR Assisan Professor, Deparmen of Saisics, M. K. Bhavnagar Universiy,

More information

Comparing Means: t-tests for One Sample & Two Related Samples

Comparing Means: t-tests for One Sample & Two Related Samples Comparing Means: -Tess for One Sample & Two Relaed Samples Using he z-tes: Assumpions -Tess for One Sample & Two Relaed Samples The z-es (of a sample mean agains a populaion mean) is based on he assumpion

More information

Time series Decomposition method

Time series Decomposition method Time series Decomposiion mehod A ime series is described using a mulifacor model such as = f (rend, cyclical, seasonal, error) = f (T, C, S, e) Long- Iner-mediaed Seasonal Irregular erm erm effec, effec,

More information

2.160 System Identification, Estimation, and Learning. Lecture Notes No. 8. March 6, 2006

2.160 System Identification, Estimation, and Learning. Lecture Notes No. 8. March 6, 2006 2.160 Sysem Idenificaion, Esimaion, and Learning Lecure Noes No. 8 March 6, 2006 4.9 Eended Kalman Filer In many pracical problems, he process dynamics are nonlinear. w Process Dynamics v y u Model (Linearized)

More information

Physics 127b: Statistical Mechanics. Fokker-Planck Equation. Time Evolution

Physics 127b: Statistical Mechanics. Fokker-Planck Equation. Time Evolution Physics 7b: Saisical Mechanics Fokker-Planck Equaion The Langevin equaion approach o he evoluion of he velociy disribuion for he Brownian paricle migh leave you uncomforable. A more formal reamen of his

More information

Math 334 Fall 2011 Homework 11 Solutions

Math 334 Fall 2011 Homework 11 Solutions Dec. 2, 2 Mah 334 Fall 2 Homework Soluions Basic Problem. Transform he following iniial value problem ino an iniial value problem for a sysem: u + p()u + q() u g(), u() u, u () v. () Soluion. Le v u. Then

More information

) were both constant and we brought them from under the integral.

) were both constant and we brought them from under the integral. YIELD-PER-RECRUIT (coninued The yield-per-recrui model applies o a cohor, bu we saw in he Age Disribuions lecure ha he properies of a cohor do no apply in general o a collecion of cohors, which is wha

More information

Financial Econometrics Kalman Filter: some applications to Finance University of Evry - Master 2

Financial Econometrics Kalman Filter: some applications to Finance University of Evry - Master 2 Financial Economerics Kalman Filer: some applicaions o Finance Universiy of Evry - Maser 2 Eric Bouyé January 27, 2009 Conens 1 Sae-space models 2 2 The Scalar Kalman Filer 2 21 Presenaion 2 22 Summary

More information

Probabilistic Robotics

Probabilistic Robotics Probabilisic Roboics Bayes Filer Implemenaions Gaussian filers Bayes Filer Reminder Predicion bel p u bel d Correcion bel η p z bel Gaussians : ~ π e p N p - Univariae / / : ~ μ μ μ e p Ν p d π Mulivariae

More information

Ordinary dierential equations

Ordinary dierential equations Chaper 5 Ordinary dierenial equaions Conens 5.1 Iniial value problem........................... 31 5. Forward Euler's mehod......................... 3 5.3 Runge-Kua mehods.......................... 36

More information

An EM algorithm for maximum likelihood estimation given corrupted observations. E. E. Holmes, National Marine Fisheries Service

An EM algorithm for maximum likelihood estimation given corrupted observations. E. E. Holmes, National Marine Fisheries Service An M algorihm maimum likelihood esimaion given corruped observaions... Holmes Naional Marine Fisheries Service Inroducion M algorihms e likelihood esimaion o cases wih hidden saes such as when observaions

More information

Regression with Time Series Data

Regression with Time Series Data Regression wih Time Series Daa y = β 0 + β 1 x 1 +...+ β k x k + u Serial Correlaion and Heeroskedasiciy Time Series - Serial Correlaion and Heeroskedasiciy 1 Serially Correlaed Errors: Consequences Wih

More information

Zürich. ETH Master Course: L Autonomous Mobile Robots Localization II

Zürich. ETH Master Course: L Autonomous Mobile Robots Localization II Roland Siegwar Margaria Chli Paul Furgale Marco Huer Marin Rufli Davide Scaramuzza ETH Maser Course: 151-0854-00L Auonomous Mobile Robos Localizaion II ACT and SEE For all do, (predicion updae / ACT),

More information

Navneet Saini, Mayank Goyal, Vishal Bansal (2013); Term Project AML310; Indian Institute of Technology Delhi

Navneet Saini, Mayank Goyal, Vishal Bansal (2013); Term Project AML310; Indian Institute of Technology Delhi Creep in Viscoelasic Subsances Numerical mehods o calculae he coefficiens of he Prony equaion using creep es daa and Herediary Inegrals Mehod Navnee Saini, Mayank Goyal, Vishal Bansal (23); Term Projec

More information

Properties of Autocorrelated Processes Economics 30331

Properties of Autocorrelated Processes Economics 30331 Properies of Auocorrelaed Processes Economics 3033 Bill Evans Fall 05 Suppose we have ime series daa series labeled as where =,,3, T (he final period) Some examples are he dail closing price of he S&500,

More information

Object tracking: Using HMMs to estimate the geographical location of fish

Object tracking: Using HMMs to estimate the geographical location of fish Objec racking: Using HMMs o esimae he geographical locaion of fish 02433 - Hidden Markov Models Marin Wæver Pedersen, Henrik Madsen Course week 13 MWP, compiled June 8, 2011 Objecive: Locae fish from agging

More information

Linear Surface Gravity Waves 3., Dispersion, Group Velocity, and Energy Propagation

Linear Surface Gravity Waves 3., Dispersion, Group Velocity, and Energy Propagation Chaper 4 Linear Surface Graviy Waves 3., Dispersion, Group Velociy, and Energy Propagaion 4. Descripion In many aspecs of wave evoluion, he concep of group velociy plays a cenral role. Mos people now i

More information

Data Assimilation. Alan O Neill National Centre for Earth Observation & University of Reading

Data Assimilation. Alan O Neill National Centre for Earth Observation & University of Reading Daa Assimilaion Alan O Neill Naional Cenre for Earh Observaion & Universiy of Reading Conens Moivaion Univariae scalar) daa assimilaion Mulivariae vecor) daa assimilaion Opimal Inerpoleion BLUE) 3d-Variaional

More information

CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK

CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK 175 CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK 10.1 INTRODUCTION Amongs he research work performed, he bes resuls of experimenal work are validaed wih Arificial Neural Nework. From he

More information

Chapter 2. First Order Scalar Equations

Chapter 2. First Order Scalar Equations Chaper. Firs Order Scalar Equaions We sar our sudy of differenial equaions in he same way he pioneers in his field did. We show paricular echniques o solve paricular ypes of firs order differenial equaions.

More information

Introduction to Mobile Robotics

Introduction to Mobile Robotics Inroducion o Mobile Roboics Bayes Filer Kalman Filer Wolfram Burgard Cyrill Sachniss Giorgio Grisei Maren Bennewiz Chrisian Plagemann Bayes Filer Reminder Predicion bel p u bel d Correcion bel η p z bel

More information

Augmented Reality II - Kalman Filters - Gudrun Klinker May 25, 2004

Augmented Reality II - Kalman Filters - Gudrun Klinker May 25, 2004 Augmened Realiy II Kalman Filers Gudrun Klinker May 25, 2004 Ouline Moivaion Discree Kalman Filer Modeled Process Compuing Model Parameers Algorihm Exended Kalman Filer Kalman Filer for Sensor Fusion Lieraure

More information

SEIF, EnKF, EKF SLAM. Pieter Abbeel UC Berkeley EECS

SEIF, EnKF, EKF SLAM. Pieter Abbeel UC Berkeley EECS SEIF, EnKF, EKF SLAM Pieer Abbeel UC Berkeley EECS Informaion Filer From an analyical poin of view == Kalman filer Difference: keep rack of he inverse covariance raher han he covariance marix [maer of

More information

KINEMATICS IN ONE DIMENSION

KINEMATICS IN ONE DIMENSION KINEMATICS IN ONE DIMENSION PREVIEW Kinemaics is he sudy of how hings move how far (disance and displacemen), how fas (speed and velociy), and how fas ha how fas changes (acceleraion). We say ha an objec

More information