A Sequential Smoothing Algorithm with Linear Computational Cost

Paul Fearnhead, David Wyncoll, Jonathan Tawn

May 9, 2008

Abstract

In this paper we propose a new particle smoother that has a computational complexity of O(N), where N is the number of particles. This compares favourably with the O(N²) computational cost of most smoothers and will result in faster rates of convergence for fixed computational cost. The new method also overcomes some of the degeneracy problems we identify in many existing algorithms. Through simulation studies we show that substantial gains in efficiency are obtained for practical amounts of computational cost. It is shown both through these simulation studies, and on the analysis of an athletics data set, that our new method also substantially outperforms the simple Filter-Smoother (the only other smoother with computational cost that is linear in the number of particles).

1 Introduction

State space models provide a flexible framework to handle non-linear time series. These models assume a time series with observations Y_t that are conditionally independent given a hidden Markov process X_t. Formally the model is given by a state equation and an observation equation, which can be represented in terms of conditional distributions

X_{t+1} | {X_{1:t} = x_{1:t}, Y_{1:t} = y_{1:t}} ~ f(· | x_t),
Y_t | {X_{1:t} = x_{1:t}, Y_{1:t-1} = y_{1:t-1}} ~ g(· | x_t),

where we use the notation x_{1:t} = (x_1, ..., x_t), and similarly for y_{1:t}. The model is completed through specifying an initial distribution for X_0. When the observations arrive sequentially we are often interested in the current value of the state X_t given all the available data. For this filtering problem, interest lies in estimating the posterior distribution p(x_t | y_{1:t}). Sequential Monte Carlo algorithms, known generically as particle filters, have recently emerged as a solution to this problem. These filters approximate p(x_t | y_{1:t}) by a set of N weighted particles, and are based on steps for sequentially producing a set of weighted particles that approximate p(x_t | y_{1:t}) given a set that approximates p(x_{t-1} | y_{1:t-1}).
In their simplest form these algorithms produce equally weighted particles, and the particles can be viewed as an approximate sample from p(x_t | y_{1:t}) (see Gordon et al. (1993) or Kitagawa (1996)). The computational complexity of particle filters is usually linear in the number of particles, N, and as such large numbers of them can be chosen to best approximate the target posterior.

Our interest in this paper lies in the related smoothing problem, whose aim is to obtain estimates of previous states given a block of observations y_1, ..., y_T. While this problem can be theoretically solved with a slight modification of the particle filter (see Kitagawa (1996)), it is easy to show that this produces a poor approximation of the smoothing density p(x_t | y_{1:T}) for t << T. With this in mind, alternative algorithms to sequentially approximate p(x_t | y_{1:T}) after a particle filter have been developed (see for example Kitagawa (1996), Hürzeler and Künsch (1998), Doucet et al. (2000), Godsill et al. (2004) and Briers et al. (2004)). All these methods involve a step to re-weight particles that approximate a filter distribution so that the re-weighted particles approximate p(x_t | y_{1:T}). While they perform comparably well for a fixed number of particles, N, these algorithms have a complexity of O(N²) and therefore their use is restricted to smaller N than is used for particle filtering. Also, for some state-space models, particularly those for which a component of X_t is uniquely determined by the previous state, x_{t-1}, these alternative smoothing algorithms can degenerate; either they become equivalent to the O(N) smoother of Kitagawa (1996) or cannot be applied at all.

We present a new smoothing algorithm. The basic idea is to allow the smoother to simulate new particles which will be used to approximate p(x_t | y_{1:T}), rather than just re-weighting existing particles. This approach avoids some of the degeneracy issues of existing particle smoothers. However, the most important feature of our new smoothing algorithm is that there is a set of models for which the computational complexity is linear in the number of particles. This set includes all models with a linear-Gaussian state equation, and all models for which the likelihood g(y_t | x_t) is integrable in x_t.
This covers a wide variety of models such as the bearings-only tracking model (Gordon et al., 1993), factor stochastic volatility models (Liu and West, 2001), time-varying autoregressive coefficient models (Kitagawa and Gersch, 1985) and ARCH models (Fearnhead, 2005) amongst many others.

Our work itself was motivated by problems of non-stationarity when modelling the extremes of a time series. For example, in Section 5, we consider data on the fastest times for the women's 3000m (see Figure 4). Whilst up to 1982 there is evidence of a year-on-year improvement in times for this event, since 1982 times have plateaued; if anything, times worsened in the latter half of the 1980s, perhaps due to increased regulation and testing for the use of performance-enhancing drugs. It is natural to model the data from each year using a distribution which is motivated by asymptotic extreme value theory; and we incorporate non-stationarity into this distribution through allowing its location parameter to vary in a non-parametric way. We can thus obtain a state-space model for the data, where the state is the location parameter. Standard non-parametric models, such as random walks and integrated random walks, can be formulated as linear-Gaussian models for the location parameter. See Smith and Miller (1986) for an example of state space models being used to analyse athletics records.

The article is organised as follows. We begin Section 2 by describing particle filtering and go on to review the current methods for sequential smoothing while demonstrating their flaws. In Section 3 we derive our new algorithm, which attempts to overcome these. Section 4 contains a simulation study with a multivariate Normal model, which shows the substantial improvements our new smoother gives for models with a linear-Gaussian state equation. Section 5 compares relative efficiencies of the methods at analysing the athletics dataset, and addresses the question as to how extreme the 1993 world record of Wang Junxia was. To analyse these data, we develop an efficient EM algorithm that utilises our new smoother to estimate fixed parameters in the extreme value distribution. Our analysis suggests that a new world record as or more extreme than that of Wang Junxia's would happen less than once every 5000 years, based on the evidence about the population of the other top athletes in this event.

2 Current methods for particle smoothing

2.1 Particle filtering

The aim of Bayesian filtering is to calculate sequentially the filter distributions p(x_t | y_{1:t}) upon receipt of observations y_t. The analytical solution to this problem is given by

p(x_t | y_{1:t}) ∝ g(y_t | x_t) ∫ f(x_t | x_{t-1}) p(x_{t-1} | y_{1:t-1}) dx_{t-1},   (1)

which relates p(x_t | y_{1:t}) to p(x_{t-1} | y_{1:t-1}) and y_t. This recursion is intractable in general, although an important exception to this is when both the state density f and the likelihood g are linear-Gaussian and the prior is also Gaussian. In this case the solution is given by the Kalman filter recursions of Kalman (1960).

Particle filters aim to overcome the intractability of (1) by using potential draws of the state to approximate the unknown filter distributions. In general, we approximate the distribution p(x_t | y_{1:t}) by a discrete distribution with support {x_t^(i)}_{i=1}^N and probability masses {w_t^(i)}_{i=1}^N. Applying this to p(x_{t-1} | y_{1:t-1}) in (1) gives the approximation

p(x_t | y_{1:t}) ≈ c g(y_t | x_t) Σ_{i=1}^N f(x_t | x_{t-1}^(i)) w_{t-1}^(i),   (2)

where c is a normalising constant. A particle filter algorithm gives steps for producing weighted particles to approximate this. For reviews of various filtering methods see for example Liu and Chen (1998), Doucet et al. (2000) and Fearnhead (2008). We focus our attention on the auxiliary particle filter of Pitt and Shephard (1999). This is a general method through which many simpler particle filters can be defined as special cases.
In this approach we aim to approximate

c g(y_t | x_t) f(x_t | x_{t-1}^(i)) w_{t-1}^(i)   (3)

by

q(x_t | x_{t-1}^(i), y_t) β_t^(i),

where q(· | x_{t-1}^(i), y_t) is a distribution we can sample from and {β_t^(i)}_{i=1}^N are normalised weights which sum to 1. We then use a combination of re-sampling and importance sampling to generate a weighted sample approximating (2).
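To make this combination of re-sampling and importance sampling concrete, one step of such a filter might be sketched as follows. This is a minimal illustration only: the scalar random-walk state f(x_t | x_{t-1}) = N(x_{t-1}, ν²), the Gaussian likelihood g(y_t | x_t) = N(x_t, τ²), and the simple choices q = f and β_t^(i) = w_{t-1}^(i) are all assumptions made for the sketch, not requirements of the method.

```python
import numpy as np

rng = np.random.default_rng(0)

def apf_step(x_prev, w_prev, y, nu=1.0, tau=1.0):
    """One re-sample / propagate / re-weight step of an auxiliary particle filter.

    Illustrative special case: random-walk state f(x_t | x_{t-1}) = N(x_{t-1}, nu^2),
    likelihood g(y_t | x_t) = N(x_t, tau^2), proposal q = f and beta^(i) = w^(i)_{t-1},
    for which the generic weight g*f*w / (q*beta) reduces to g(y_t | x^(i)_t).
    """
    N = len(x_prev)
    beta = w_prev / w_prev.sum()                 # (a) re-sampling probabilities
    j = rng.choice(N, size=N, p=beta)            #     sample indices j_1, ..., j_N
    x = x_prev[j] + nu * rng.standard_normal(N)  # (b) propagate from q = f
    w = np.exp(-0.5 * (y - x) ** 2 / tau ** 2)   # (c) re-weight
    return x, w / w.sum()                        #     normalise to sum to 1

x = rng.standard_normal(1000)   # particles approximating an assumed prior p(x_0)
w = np.full(1000, 1.0 / 1000)
for y in [0.5, 0.7, 0.4]:       # a few synthetic observations
    x, w = apf_step(x, w, y)
```

With better-adapted choices of q and β_t^(i) the final weights become more even, at the cost of model-specific calculations.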

Algorithm 1 gives the general algorithm for sequentially sampling weighted particles {(x_t^(i), w_t^(i))} approximating p(x_t | y_{1:t}). While the simplest way to initialise the algorithm is to sample from the prior p(x_0) and propagate from t = 1, it is usually possible and more accurate to sample from p(x_1 | y_1) directly using standard importance sampling.

Algorithm 1 Auxiliary particle filter of Pitt and Shephard (1999)

1. Initialisation: Sample {x_0^(i)} from the prior p(x_0) and set w_0^(i) = 1/N for all i.

2. For t = 1, 2, ...

(a) Re-sample: Use the {β_t^(i)} as probabilities to sample N indices j_1, ..., j_N from {1, ..., N}.

(b) Propagate: Sample the new particles x_t^(i) independently from q(· | x_{t-1}^(j_i), y_t).

(c) Re-weight: Assign each particle x_t^(i) the corresponding importance weight

w_t^(i) ∝ g(y_t | x_t^(i)) f(x_t^(i) | x_{t-1}^(j_i)) w_{t-1}^(j_i) / [ q(x_t^(i) | x_{t-1}^(j_i), y_t) β_t^(j_i) ]

and normalise them to sum to 1.

The efficiency of the particle filter rests primarily on the choice of proposal density q and re-sampling weights β_t^(i). In the simplest case we could have q(x_t | x_{t-1}, y_t) = f(x_t | x_{t-1}) and β_t^(i) = w_{t-1}^(i), which is essentially the algorithm of Gordon et al. (1993). Such a choice would place mass unevenly on the particles, thus wasting those with small weights. To rectify this, the auxiliary approach produces more even weights if q and β_t^(i) are chosen so that (3) is well approximated. In particular, if q(x_t | x_{t-1}^(i), y_t) = p(x_t | x_{t-1}^(i), y_t) and β_t^(i) ∝ p(y_t | x_{t-1}^(i)) w_{t-1}^(i), then the final weights w_t^(i) will all equal 1/N and we say the filter is adapted. In most cases this optimal choice of q and β_t^(i) is not possible, but we can still obtain and use good approximations to them.

Many authors have suggested further enhancements to the standard particle filter. The re-sampling step is optional and can be omitted by setting j_i = i, thus propagating each particle x_{t-1}^(i) once. This eliminates any extra noise from re-sampling but gives uneven weights. Liu and Chen (1995) propose a measure of the effective sample size and resample only when it falls below a fixed threshold. Carpenter et al. (1999) show that the re-sampling noise is minimised by producing a stratified sample of the indices j_i and give an O(N) algorithm to achieve this.

A further enhancement, which takes advantage of a linear-Gaussian state, is Rao-Blackwellisation; see Casella and Robert (2001) for an introduction to the topic and Doucet et al. (2000) for an application to particle filtering. The idea is that for some models it is possible to integrate out part of the state analytically. This enables the integrable part of the state to be represented by a distribution rather than a specific value. The advantage of Rao-Blackwellisation is that less Monte Carlo error is accrued in each update and so the variance of estimates is reduced. An example of the application of Rao-Blackwellisation is given in Section 4.

2.2 Smoothing while filtering

In its simplest form, smoothing can be achieved from a simple extension to the particle filter as shown by Kitagawa (1996), and we call the resulting algorithm the Filter-Smoother. As with the filter distribution p(x_t | y_{1:t}) in (1), we have a recursive solution for the joint smoothing distribution:

p(x_{1:t} | y_{1:t}) ∝ g(y_t | x_t) f(x_t | x_{t-1}) p(x_{1:t-1} | y_{1:t-1}).   (4)

By comparing (1) and (4) it is easy to show that the particle filter steps can be used to update weighted paths {(x_{1:t}^(i), w_t^(i))}_{i=1}^N approximating p(x_{1:t} | y_{1:t}). Doing so simply requires keeping track of the inheritance of the newly sampled particle x_t^(i) by setting x_{1:t}^(i) = (x_{1:t-1}^(j_i), x_t^(i)). This means that any filtering algorithm can be used, and the method inherits the O(N) computational complexity of the filter, making large numbers of particles feasible.

While this Filter-Smoother approach can produce an accurate approximation of the filtering distribution p(x_t | y_{1:t}), it gives a poor representation of previous states. To see this we note that whenever we resample the paths {x_{1:t-1}^(i)} by sampling the auxiliary variables {j_i}, we end up with multiple copies of some paths but lose others altogether. Therefore the number of distinct particles at any given time decreases monotonically the more times we resample. Also, with multiple copies of some particles, their weights are effectively added together on a single point, so that marginally the weights become more uneven as we look back in time.

Figure 1: Plot showing how the simple smoother re-weights filter particles. The arrows represent the dependencies between the particles at times t and t−1 due to re-sampling. The size of a particle represents its total weight as a draw from the smoothed distribution.
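In code, turning a particle filter into the Filter-Smoother only requires storing the re-sampling indices, from which each surviving particle can be traced back to a complete path. The sketch below (assuming, purely for illustration, a scalar random-walk state and Gaussian likelihood) also makes the path degeneracy visible by counting the distinct ancestors at the first time step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Bootstrap-style filter that stores ancestor indices, turning it into the
# Filter-Smoother: each time-T particle then indexes a complete path x_{1:T}.
# The scalar random-walk state and Gaussian likelihood are illustrative only.
T, N, nu, tau = 50, 200, 1.0, 1.0
ys = np.cumsum(rng.standard_normal(T))      # synthetic observations

particles = np.empty((T, N))
ancestors = np.empty((T, N), dtype=int)
x = rng.standard_normal(N)                  # draws from an assumed prior p(x_0)
w = np.full(N, 1.0 / N)
for t in range(T):
    a = rng.choice(N, size=N, p=w)          # re-sample; indices stored for smoothing
    x = x[a] + nu * rng.standard_normal(N)  # propagate
    w = np.exp(-0.5 * (ys[t] - x) ** 2 / tau ** 2)
    w /= w.sum()
    particles[t], ancestors[t] = x, a

# Trace each final particle back through its ancestors: the smoothed paths.
idx = np.arange(N)
paths = np.empty((T, N))
for t in range(T - 1, -1, -1):
    paths[t] = particles[t, idx]
    idx = ancestors[t, idx]

# Path degeneracy: far fewer distinct values survive at early times.
print(len(np.unique(paths[0])), "distinct particles at t=1, versus", N, "at t=T")
```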

This can be seen in Figure 1, which represents 10 smoothed paths x_{1:6}^(i) showing how they re-weight filter particles. As can be seen, particles which are lost due to re-sampling receive no weight, and particles with many offspring have large weights. While the filter approximation at time 6 is good, the weights become more uneven as the number of weighted particles decreases going back in time. This is not surprising since the particles at times t < 6 are drawn to approximate p(x_t | y_{1:t}), so they must be unevenly weighted if they are to represent a different distribution.

As a final point we note that re-sampling more infrequently can improve this method of smoothing, although there is a limit to how much this can help. Even with no re-sampling, the approximation to p(x_t | y_{1:T}) will deteriorate as T gets large: the particle approximation tends to give negligible weight to all but a small subset of particles, and eventually only one particle has a non-negligible weight.

2.3 Other smoothing algorithms

Several algorithms have been proposed to improve on the simple Filter-Smoother. A common requirement is that a particle filter is run first to give weighted particles {(x_t^(i), w_t^(i))}_{i=1}^N approximating p(x_t | y_{1:t}) for t = 1, ..., T.

2.3.1 Forward-Backward smoothing

The Forward-Backward Smoother of Doucet et al. (2000), as well as the related algorithms of Tanizaki and Mariano (1994) and Hürzeler and Künsch (1998), is based around the backwards recursion

p(x_t | y_{1:T}) = p(x_t | y_{1:t}) ∫ [ f(x_{t+1} | x_t) / p(x_{t+1} | y_{1:t}) ] p(x_{t+1} | y_{1:T}) dx_{t+1},   for t < T.

The unknown densities can be approximated using filter particles from the current time and smoother particles from time t+1 to obtain

p(x_t | y_{1:T}) ≈ Σ_{i=1}^N δ(x_t − x_t^(i)) w_{t|T}^(i),

where

w_{t|T}^(i) := Σ_{j=1}^N [ f(x_{t+1|T}^(j) | x_t^(i)) w_t^(i) / Σ_{k=1}^N f(x_{t+1|T}^(j) | x_t^(k)) w_t^(k) ] w_{t+1|T}^(j)   (5)

and δ(·) is the Dirac delta function. This approximation can be used to sequentially re-weight the filter particles backwards in time so that they represent the marginal smoothing densities.

2.3.2 Two-Filter smoothing

The Two-Filter Smoother of Briers et al. (2004) combines samples from a particle filter with those from a backwards information filter to produce estimates of p(x_t | y_{1:T}).

The backwards information filter produces sequential approximations of the likelihood p(y_{t:T} | x_t) backwards through time and is based on the following recursion:

p(y_{t:T} | x_t) = g(y_t | x_t) ∫ f(x_{t+1} | x_t) p(y_{t+1:T} | x_{t+1}) dx_{t+1},   for t < T.   (6)

Since p(y_{t:T} | x_t) is not a probability density function in x_t, it may not have a finite integral over x_t, in which case a particle representation will not work. The smoothing algorithm in Kitagawa (1996) assumes implicitly that this is not the case, but Briers et al. (2004) propose the following construction, which will always give a finite measure. They introduce an artificial prior distribution γ_0(x_0) which, when substituted for p(x_0), yields a backwards filter density

p̃(x_t | y_{t:T}) ∝ γ_t(x_t) p(y_{t:T} | x_t),   (7)

where γ_t(x_t) = ∫ f(x_t | x_{t-1}) γ_{t-1}(x_{t-1}) dx_{t-1} is derived recursively from γ_0(x_0). An artificial prior is introduced so that γ_t(x_t) is available in closed form, which is only possible when the state is linear-Gaussian. If the prior p(x_0) is also Gaussian then this can be used instead of γ_0(x_0). If however the state is not linear-Gaussian but the likelihood g(y_t | x_t) is integrable, we can instead propagate a particle representation of p(y_{t:T} | x_t) by assuming γ_t(x_t) ≡ 1 throughout the following derivation.

Following on from (6), the backwards filter is derived from

p̃(x_t | y_{t:T}) ∝ γ_t(x_t) g(y_t | x_t) ∫ [ f(x_{t+1} | x_t) / γ_{t+1}(x_{t+1}) ] p̃(x_{t+1} | y_{t+1:T}) dx_{t+1}
              ≈ γ_t(x_t) g(y_t | x_t) Σ_{k=1}^N [ f(x̃_{t+1}^(k) | x_t) / γ_{t+1}(x̃_{t+1}^(k)) ] w̃_{t+1}^(k),

where the weighted particles {(x̃_{t+1}^(k), w̃_{t+1}^(k))} approximate p̃(x_{t+1} | y_{t+1:T}). This is very similar to the derivation of the forwards filter, and as such many filtering algorithms and enhancements can be modified for this purpose. For example, an auxiliary backwards filter in the style of Pitt and Shephard (1999) can be made by finding a distribution q̃(· | y_t, x̃_{t+1}^(k)) we can sample from such that

q̃(x_t | y_t, x̃_{t+1}^(k)) β̃_t^(k) ≈ γ_t(x_t) g(y_t | x_t) f(x̃_{t+1}^(k) | x_t) w̃_{t+1}^(k) / γ_{t+1}(x̃_{t+1}^(k)).

We then proceed analogously to Algorithm 1 for t = T, ..., 1 after initialising the algorithm with particles drawn from γ_{T+1}(x_{T+1}).
An adapted backwards filter giving even weights w̃_t^(k) = 1/N is achieved with q̃(x_t | y_t, x̃_{t+1}^(k)) = p̃(x_t | y_t, x̃_{t+1}^(k)) and β̃_t^(k) ∝ p̃(y_t | x̃_{t+1}^(k)) w̃_{t+1}^(k), where we again use p̃ to denote a distribution which uses γ_0(x_0) as the prior instead of p(x_0).

Having run a forwards particle filter and a backwards information filter, it is possible to combine the two to estimate p(x_t | y_{1:T}). The Two-Filter Smoother is based upon writing the target density as

p(x_t | y_{1:T}) ∝ p(x_t | y_{1:t-1}) p(y_{t:T} | x_t)
              ∝ [ ∫ f(x_t | x_{t-1}) p(x_{t-1} | y_{1:t-1}) dx_{t-1} ] p̃(x_t | y_{t:T}) / γ_t(x_t).

Therefore filter particles {(x_{t-1}^(j), w_{t-1}^(j))} approximating p(x_{t-1} | y_{1:t-1}) and backwards filter particles {(x̃_t^(k), w̃_t^(k))} approximating p̃(x_t | y_{t:T}) are used to obtain

p(x_t | y_{1:T}) ≈ Σ_{k=1}^N δ(x_t − x̃_t^(k)) w̃_{t|T}^(k),

where

w̃_{t|T}^(k) ∝ [ w̃_t^(k) / γ_t(x̃_t^(k)) ] Σ_{j=1}^N f(x̃_t^(k) | x_{t-1}^(j)) w_{t-1}^(j).   (8)

Thus particles from a forwards filter are used to re-weight those from a backwards filter so that they represent the target distribution.

2.4 Comparison of current particle smoothers

Both the Forward-Backward and Two-Filter smoothers aim to improve on the simple Filter-Smoother by removing its dependence on the inheritance paths of the particle filter. Forward-Backward smoothing does this by re-weighting the filter particles, while Two-Filter smoothing re-weights particles sampled from a backwards filter. However, both algorithms are O(N²), as the calculation of each particle's weight is an O(N) operation[1]. Thus, while variants of these particle smoothers produce better estimates for a fixed particle number N, far fewer particles can be used for these algorithms than can for the Filter-Smoother in a fixed amount of time.

Another advantage of the Filter-Smoother is that it gives draws from the joint smoothing distribution p(x_{1:T} | y_{1:T}) rather than only the marginal distributions. It is possible to adapt the Forward-Backward Smoother to also draw samples from the joint smoothing distribution, as shown in Hürzeler and Künsch (1998) and Godsill et al. (2004). Their derivations are similar to that of the Forward-Backward Smoother above and as such share its complexity, and the samples are determined by the re-sampling of the filter. They therefore achieve better samples of the joint distribution than the Filter-Smoother for a fixed N, but give a slightly worse representation of the marginal distributions than the Forward-Backward Smoother.

Since the Forward-Backward Smoother and the Filter-Smoother rely on the support of the filter particles, we may expect them to approximate p(x_t | y_{1:T}) best for t close to T, where the target is most similar to p(x_t | y_{1:t}).
Likewise the Two-Filter Smoother may do best for small t, when the backwards filter distribution p̃(x_t | y_{t:T}) is likely to be closest to our target. However, when there is a large discrepancy between these distributions, the particles will be weighted very unevenly as they will not be located in the right position to represent the smoothed distribution. Ideally we would like an algorithm which samples particles in the correct position for the smoothed distribution.

[1] The overall cost of calculating the weight (5) in the Forward-Backward Smoother is O(N) as each of the terms in the denominator needs to be calculated only once and can then be stored.
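The backward re-weighting (5) is straightforward to sketch in code; the nested particle sums make the O(N²) cost per time step explicit, while the denominator of (5) is computed once per particle and stored, as noted in the footnote. The Gaussian random-walk transition at the end is an illustrative assumption for the demo, not part of the method.

```python
import numpy as np

def forward_backward_reweight(particles, weights, trans_pdf):
    """Backward re-weighting pass of the Forward-Backward Smoother.

    particles[t], weights[t]: filter particles and normalised weights, t = 0..T-1.
    trans_pdf(x_next, x): the transition density f(x_next | x), vectorised.
    Returns the smoothing weights w_{t|T}^(i); each step costs O(N^2).
    """
    T = len(particles)
    smooth = [None] * T
    smooth[T - 1] = weights[T - 1]      # at time T the filter weights are kept
    for t in range(T - 2, -1, -1):
        # F[i, j] = f(x_{t+1}^(j) | x_t^(i)); the denominator of (5),
        # sum_k f(x_{t+1}^(j) | x_t^(k)) w_t^(k), is computed once per j and reused.
        F = trans_pdf(particles[t + 1][None, :], particles[t][:, None])
        denom = weights[t] @ F                              # shape (N,)
        smooth[t] = weights[t] * (F @ (smooth[t + 1] / denom))
        smooth[t] /= smooth[t].sum()
    return smooth

# Illustrative random-walk transition f(x' | x) = N(x, 1), assumed for the demo.
gauss = lambda xn, x: np.exp(-0.5 * (xn - x) ** 2) / np.sqrt(2 * np.pi)
```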

2.5 Degeneracy of the Forward-Backward and Two-Filter smoothers

As a final point we note that the Forward-Backward and Two-Filter smoothers' reliance on the form of the state density causes degeneracy problems with certain models and filters. Specifically, this happens whenever f(x_t | x_{t-1}) is zero, or approximately so, for most combinations of possible x_t and x_{t-1}.

As an example, consider the simple AR(2) process z_t = φ_1 z_{t-1} + φ_2 z_{t-2} + ε_t with ε_t ~ N(0, ν²). The model can be written as a two-dimensional Markov process by defining the state as x_t = (x_{t,1}, x_{t,2}) where x_{t,1} = z_t and x_{t,2} = z_{t-1}. This gives the state transition density

f(x_t | x_{t-1}) = N(x_{t,1} | φ_1 x_{t-1,1} + φ_2 x_{t-1,2}, ν²) δ(x_{t,2} − x_{t-1,1}),

where we write N(z | µ, ν²) for the density of N(µ, ν²) evaluated at z. This density is zero whenever the second component of x_t does not equal the first component of x_{t-1}. This means that for two sets of particles {x_{t-1}^(j)} and {x_t^(i)}, f(x_t^(i) | x_{t-1}^(j)) is likely to be zero unless x_t^(i) was generated from x_{t-1}^(j). Since the Forward-Backward Smoother relies on comparing particles sampled from the filter at time t with those at time t+1, it can be shown that the weight (5) reduces to the effective weight given to each particle by the Filter-Smoother. However, the situation is worse for Two-Filter smoothing, which fails completely as the forwards and backwards filter particles were sampled independently. With probability 1, no pairs of forwards and backwards filter particles match, and so all the weights (8) will be zero.

3 New smoothing algorithm

We now describe our new smoothing algorithm, which attempts to overcome the weaknesses of the current methods. Our primary aim is to draw new particles from the marginal smoothing densities directly, rather than re-weight those drawn from another distribution. We describe the basic idea first, and then look at how the smoother can be implemented so that its computational cost is linear in the number of particles.
We start with a similar derivation to the Two-Filter Smoother, which gives

p(x_t | y_{1:T}) ∝ p(x_t | y_{1:t-1}) g(y_t | x_t) p(y_{t+1:T} | x_t)
  ∝ [ ∫ f(x_t | x_{t-1}) p(x_{t-1} | y_{1:t-1}) dx_{t-1} ] g(y_t | x_t) ∫ [ f(x_{t+1} | x_t) / γ_{t+1}(x_{t+1}) ] p̃(x_{t+1} | y_{t+1:T}) dx_{t+1},

where we use the artificial prior and backwards filter in (7) above. These integrals can be approximated using weighted particles from a particle filter at time t−1 and from a backwards information filter at time t+1 to obtain

p(x_t | y_{1:T}) ≈ c Σ_{j=1}^N Σ_{k=1}^N f(x_t | x_{t-1}^(j)) w_{t-1}^(j) g(y_t | x_t) [ f(x̃_{t+1}^(k) | x_t) / γ_{t+1}(x̃_{t+1}^(k)) ] w̃_{t+1}^(k),   (9)

where c is a normalising constant. Though this formula can be written as the product of two sums, we write it as a double sum to emphasise that there are N² (j, k) pairs. We also note that any filtering algorithm can be used to generate {x_{t-1}^(j)} and {x̃_{t+1}^(k)}, as long as the artificial prior γ_{t+1}(x_{t+1}) here is the same one used to sample {(x̃_{t+1}^(k), w̃_{t+1}^(k))} in the backwards information filter. As before, we assume γ_{t+1}(x_{t+1}) ≡ 1 throughout if the backwards filter approximates p(y_{t+1:T} | x_{t+1}) instead of p̃(x_{t+1} | y_{t+1:T}).

To sample from this approximation, we start by mirroring the auxiliary particle filter of Pitt and Shephard (1999) by finding a sampling distribution q and weights β_t^(j,k) such that

q(x_t | x_{t-1}^(j), y_t, x̃_{t+1}^(k)) β_t^(j,k) ≈ f(x_t | x_{t-1}^(j)) g(y_t | x_t) f(x̃_{t+1}^(k) | x_t) w_{t-1}^(j) w̃_{t+1}^(k) / γ_{t+1}(x̃_{t+1}^(k)).

Algorithm 2 gives the algorithm that results from using the β_t^(j,k)'s to sample (j, k) pairs before using q to sample new particles x_t^(i).

Algorithm 2 New O(N²) smoothing algorithm

1. Filter forwards: Run a particle filter to generate {(x_t^(j), w_t^(j))} approximating p(x_t | y_{1:t}) for t = 0, ..., T.

2. Filter backwards: Run a backwards information filter to generate {(x̃_t^(k), w̃_t^(k))} approximating p̃(x_t | y_{t:T}) ∝ γ_t(x_t) p(y_{t:T} | x_t) for t = T, ..., 1.

3. Smooth: For t = 1, ..., T − 1

(a) Re-sample: Calculate the β_t^(j,k)'s and use them as probabilities to sample N pairs {(j_i, k_i)}_{i=1}^N.

(b) Propagate: Sample the new particles x_t^(i) independently from q(· | x_{t-1}^(j_i), y_t, x̃_{t+1}^(k_i)).

(c) Re-weight: Assign each particle x_t^(i) the weight

w_t^(i) ∝ f(x_t^(i) | x_{t-1}^(j_i)) g(y_t | x_t^(i)) f(x̃_{t+1}^(k_i) | x_t^(i)) w_{t-1}^(j_i) w̃_{t+1}^(k_i) / [ q(x_t^(i) | x_{t-1}^(j_i), y_t, x̃_{t+1}^(k_i)) β_t^(j_i,k_i) γ_{t+1}(x̃_{t+1}^(k_i)) ]

and normalise them to sum to 1.

Note that the output of Algorithm 2 is a set of triples, (x_{t-1}^(j_i), x_t^(i), x̃_{t+1}^(k_i)), with associated weights w_t^(i). These can be viewed as a particle approximation to p(x_{t-1:t+1} | y_{1:T}). If our interest solely lies in the marginal p(x_t | y_{1:T}), we just keep the particles, x_t^(i), and their associated weights, w_t^(i).
We note further that the optimal choice of propagation density is q(x_t | x_{t-1}^(j), y_t, x̃_{t+1}^(k)) = p(x_t | x_{t-1}^(j), y_t, x̃_{t+1}^(k)), while the optimal re-sampling probabilities are given by

β_t^(j,k) ∝ [ ∫ f(x_t | x_{t-1}^(j)) g(y_t | x_t) f(x̃_{t+1}^(k) | x_t) dx_t ] w_{t-1}^(j) w̃_{t+1}^(k) / γ_{t+1}(x̃_{t+1}^(k)).   (10)

We do not require our algorithm to generate samples for time T since these are available from the filter. Similarly, particles for time 1 are available from the backwards filter if we use γ_0(x_0) = p(x_0) for the artificial prior.

Algorithm 2 overcomes the degeneracy problem of the Forward-Backward and Two-Filter smoothers when there is a deterministic relationship between the states at successive time-points, as demonstrated in Section 2.5 with the AR(2) model. Algorithm 2 will still have degeneracy problems where there is a deterministic relationship between components of states separated by two or more time-points. However, it is simple, at least in theory, to extend our method so that we jointly sample a block (x_t, ..., x_{t+n}) given filter particles {x_{t-1}^(j)} and backwards filter particles {x̃_{t+n+1}^(k)} (see Doucet et al. (2006) for an example of block sampling in particle filters). By choosing n sufficiently large such that there is no deterministic relationship between components of x_t and x_{t+n}, our approach to smoothing can then be applied in these cases.

Like the Two-Filter Smoother in Section 2, our smoothing step is not sequential and can be performed independently for each time. Also, the computational complexity of each step is O(N²), which is comparable with all but the simplest Filter-Smoother. However, as it stands we have N² β_t^(j,k)'s to calculate, making it O(N²) in memory also, which could mean that it is impractical for even modest sample sizes.

3.1 Making Algorithm 2 O(N)

The above smoothing algorithm has a computational cost that is O(N²), that is, quadratic in the number of particles, due to the need to calculate the N² probabilities β_t^(j,k). A simple approach to reduce the computational cost of the smoothing algorithm is to choose these probabilities so that they correspond to choosing particles at time t−1 and backward-filter particles at time t+1 independently of each other. Our algorithm will then be O(N) in computational complexity as well as memory, and as such will be much faster for large N.
Now the optimal distribution from which to choose the particles at time t−1 will be the corresponding marginal distribution of the optimal probabilities β_t^(j,k) given in (10). Marginalising we get:

Σ_{k=1}^N β_t^(j,k) ∝ Σ_{k=1}^N [ ∫ f(x_t | x_{t-1}^(j)) g(y_t | x_t) f(x̃_{t+1}^(k) | x_t) dx_t ] w_{t-1}^(j) w̃_{t+1}^(k) / γ_{t+1}(x̃_{t+1}^(k))
  ≈ [ ∫∫ f(x_t | x_{t-1}^(j)) g(y_t | x_t) f(x_{t+1} | x_t) [ p̃(x_{t+1} | y_{t+1:T}) / γ_{t+1}(x_{t+1}) ] dx_t dx_{t+1} ] w_{t-1}^(j)
  ∝ p(y_{t:T} | x_{t-1}^(j)) w_{t-1}^(j).

Calculating this analytically will be impossible, but it suggests two simple approximations. The first is to sample particles at time t−1 according to their filtering weights w_{t-1}^(j). However, a better approach will be to sample according to an approximation of p(y_t | x_{t-1}^(j)) w_{t-1}^(j), as it includes the information in the observation at time t. Now, in performing the particle filter we used the auxiliary filter, which sampled particle x_{t-1}^(j) with a probability β_t^(j) chosen to be an approximation to p(y_t | x_{t-1}^(j)) w_{t-1}^(j). Thus we suggest using exactly the same probabilities to sample the particles within one iteration of our smoothing algorithm.

By similar calculations, it can be shown that we should optimally choose the backward-filter particles at time t+1 with probability proportional to p(y_{1:t} | x̃_{t+1}^(k)) w̃_{t+1}^(k). Again, we cannot calculate these exactly, but a simple idea is to use probabilities that approximate p(y_t | x̃_{t+1}^(k)) w̃_{t+1}^(k). Thus we can simply use the probabilities β̃_t^(k) that were used in the backward filter, as these were chosen to be an approximation to p(y_t | x̃_{t+1}^(k)) w̃_{t+1}^(k).

We thus obtain a similar algorithm to before, but with particles at times t−1 and t+1 sampled independently, and with β_t^(j,k) replaced by β_t^(j) β̃_t^(k) in the calculation of the weight. Thus we have an O(N) version of our smoothing algorithm, shown in Algorithm 3. We note that we can speed up the algorithm further, as the probabilities β_t^(j) and β̃_t^(k) (or even the auxiliary variables {j_i} and {k_i}) can be saved from the filters to reduce the number of calculations in the smoothing step.

Algorithm 3 New O(N) smoothing algorithm

Proceed as Algorithm 2 but substitute steps 3(a) and 3(c) for

3. (a) Re-sample: Use {β_t^(j)} from the filter to sample j_1, ..., j_N and {β̃_t^(k)} from the backwards filter to sample k_1, ..., k_N from {1, ..., N}.

(c) Re-weight: Assign each particle x_t^(i) the weight

w_t^(i) ∝ f(x_t^(i) | x_{t-1}^(j_i)) g(y_t | x_t^(i)) f(x̃_{t+1}^(k_i) | x_t^(i)) w_{t-1}^(j_i) w̃_{t+1}^(k_i) / [ q(x_t^(i) | x_{t-1}^(j_i), y_t, x̃_{t+1}^(k_i)) β_t^(j_i) β̃_t^(k_i) γ_{t+1}(x̃_{t+1}^(k_i)) ]

and normalise them to sum to 1.

4 Simulation study

We now compare the efficiency of our new algorithm against the currently available methods. Our simulations are based on a model with linear-Gaussian state and observation equations. The specific state model used is chosen to be the same as for our athletics application in Section 5. We have chosen a linear-Gaussian observation model so that we can compare the results of different particle smoothers with the true smoothing distributions obtained from the Kalman filter and smoother (see Kalman (1960) and Anderson and Moore (1979)).
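As an illustration, one time step of the O(N) smoother can be sketched in code under simplifying assumptions: a scalar random-walk state f(x_t | x_{t-1}) = N(x_{t-1}, ν²), a Gaussian likelihood g(y_t | x_t) = N(x_t, τ²), the proposal q(x_t | x_{t-1}, y_t, x̃_{t+1}) = f(x_t | x_{t-1}), γ_t ≡ 1 throughout, and the filter and backwards-filter weights used directly as the re-sampling probabilities β and β̃. None of these choices is prescribed by the algorithm; they merely keep the sketch short.

```python
import numpy as np

rng = np.random.default_rng(3)

def norm_pdf(z, mu, var):
    return np.exp(-0.5 * (z - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def on_smoother_step(xf, wf, beta_f, xb, wb, beta_b, y, nu=1.0, tau=1.0, N=500):
    """One time step of an O(N) smoother of this type, under illustrative
    assumptions: random-walk state f = N(x_{t-1}, nu^2), likelihood
    g = N(x_t, tau^2), proposal q = f, and gamma_t = 1 throughout.

    (xf, wf, beta_f): forward-filter particles, weights and re-sampling
    probabilities at time t-1; (xb, wb, beta_b): the same quantities for the
    backwards filter at time t+1.
    """
    j = rng.choice(len(xf), size=N, p=beta_f)   # (a) forward indices ...
    k = rng.choice(len(xb), size=N, p=beta_b)   #     ... and backward indices, independently
    x = xf[j] + nu * rng.standard_normal(N)     # (b) propagate from q = f
    # (c) generic weight with q = f: g(y|x) f(xb|x) wf wb / (beta_f beta_b)
    w = (norm_pdf(y, x, tau ** 2) * norm_pdf(xb[k], x, nu ** 2)
         * wf[j] * wb[k] / (beta_f[j] * beta_b[k]))
    return x, w / w.sum()

# Toy inputs standing in for filter and backwards-filter output at one time step.
xf = rng.standard_normal(500); wf = np.full(500, 1.0 / 500)
xb = 0.5 + rng.standard_normal(500); wb = np.full(500, 1.0 / 500)
x_s, w_s = on_smoother_step(xf, wf, wf, xb, wb, wb, y=0.2)
```

Because j and k are drawn independently, only O(N) density evaluations and O(N) memory are needed per time step.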
Formally, we consider the model:

X_{t+1} | {X_{1:t} = x_{1:t}, Y_{1:t} = y_{1:t}} ~ N(F x_t, Q),
Y_t | {X_{1:t} = x_{1:t}, Y_{1:t-1} = y_{1:t-1}} ~ N(G x_t, R),
X_0 ~ N(µ_0, Σ_0),

where

F = [ 1 1 ; 0 1 ],   Q = ν² [ 1/3 1/2 ; 1/2 1 ],   G = (1, 0),   R = τ².

The state is derived from the pair of stochastic differential equations (SDEs) dX_{t,1} = X_{t,2} dt and dX_{t,2} = ν dB_t, and so the first component X_{t,1} is the integrated path of the random walk X_{t,2}. A noisy observation of the first component is made at each time step. The parameter ν² determines the smoothness of the state over time. With a large value of ν² the state can move freely and thus follows the observations. When ν² is small, however, the model makes a linear fit to the observations.

We compare the two versions of our new algorithm with the simple Filter-Smoother of Section 2.2, the Forward-Backward Smoother of Section 2.3.1 and the Two-Filter Smoother of Section 2.3.2. We also look at how the relative performance of the algorithms is affected by the ratio of the state noise ν² to the observation noise τ². The details of our particle filter, backwards filter and smoothing algorithms for this model are given in Appendix A.1.

To compare the accuracy of our smoothing algorithms' estimates of X_{t,d}, we estimate the effective sample size N_eff(X_{t,d}). Motivated by the fact that

E[ (X̄ − µ)² / σ² ] = 1/N   when X^(1), ..., X^(N) are i.i.d. N(µ, σ²) and X̄ is their sample mean,

we take

N_eff(X_{t,d}) = ( E[ (x̂_{t,d} − µ_{t,d})² / σ²_{t,d} ] )^{-1},   (11)

where µ_{t,d} and σ²_{t,d} are the true mean and variance of X_{t,d} | y_{1:T} obtained from the Kalman smoother and x̂_{t,d} is the random estimate from a particle smoother. We can therefore crudely say that the weighted sample produced by our smoother is as accurate at estimating X_{t,d} as an independent sample of size N_eff(X_{t,d}). To estimate the expectation in (11) we use the mean value from 100 repetitions of each algorithm.

We first compare the smoothing algorithms using model parameters of ν² = τ² = 1 with µ_0 = (0, 0) and Σ_0 = I_2 for the prior. We generated 20 datasets, each of length 200, and averaged the effective sample sizes to remove effects caused by a single dataset. We chose different numbers of particles for each algorithm to try to reflect the varying complexities of each method.
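In code, the effective-sample-size diagnostic (11) might be estimated as follows, with the expectation in (11) replaced by an average over repeated runs. The sanity check at the end uses made-up numbers: for the sample mean of an i.i.d. N(µ, σ²) sample of size N, the estimated N_eff should be close to N.

```python
import numpy as np

rng = np.random.default_rng(4)

def n_eff(estimates, mu, sigma2):
    """Estimate (11): N_eff = ( E[ (xhat - mu)^2 / sigma^2 ] )^(-1), with the
    expectation replaced by an average over repeated smoother runs.

    estimates: estimates of E[X | y_{1:T}] from independent runs of a smoother;
    mu, sigma2: the true posterior mean and variance (e.g. from a Kalman smoother).
    """
    return 1.0 / np.mean((estimates - mu) ** 2 / sigma2)

# Sanity check: sample means of i.i.d. N(mu, sigma2) samples of size N should
# give an effective sample size close to N.
mu, sigma2, N = 0.0, 2.0, 100
xbars = np.array([rng.normal(mu, np.sqrt(sigma2), N).mean() for _ in range(2000)])
print(round(n_eff(xbars, mu, sigma2)))   # close to 100
```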
We started by choosing 10,000 particles for the Filter-Smoother and 3,000 for the O(N) version of our new algorithm, since they then took approximately the same amount of time to run. We would have liked to scale the O(N²) algorithms to take the same time to run, but their speeds varied greatly. Part of this may be due to how the algorithms are implemented in R. We therefore fixed the number of particles for these three algorithms at 300. This made the O(N²) version of our new algorithm faster, but the other two methods slower, than the Filter-Smoother. The average time taken by each algorithm per run is shown in Table 1.

Algorithm     | Filter  | Forward-Backward | Two-Filter | New O(N²) | New O(N)
N             | 10,000  | 300              | 300        | 300       | 3,000
Run time (s)  | –       | –                | –          | –         | –

Table 1: Number of particles used and average run time of each algorithm.

Figure 2 shows how the average effective number of particles for estimating X_{t,1} varies through time for the five algorithms considered. The results for X_{t,2} (not shown) are very similar.

Figure 2: Average effective sample size N_eff for each of the 200 time steps using the Filter-Smoother, the Forward-Backward and Two-Filter smoothers, and the O(N²) and O(N) versions of our new algorithm.

We can see that the Filter-Smoother does very well for times close to T = 200, as this filter has by far the most particles and the filter and smoothing distributions are similar at this stage of the process. As predicted, however, this algorithm gets progressively worse as it goes backwards through time. This is not necessarily the case with the other algorithms, whose efficiencies remain roughly constant over time when averaged over the 20 datasets. Of the two O(N) algorithms, we see that our new method vastly outperforms the Filter-Smoother for all but the final few time steps, despite taking a similar amount of time to run.

From Figure 2 we can also see that the three O(N²) algorithms have near identical efficiencies for this particular model. This may be because they are all derived in some way from the same formula, p(x_t | y_{1:T}) ∝ p(x_t | y_{1:t-1}) p(y_{t:T} | x_t), and all combine filter particles with an O(N²) approximation of p(y_{t:T} | x_t). We recall that these were run with the same number of particles N, though in our implementation our new algorithm was faster than the other two here. However, even with this taken into account, the O(N) version is many times more efficient for even these modest sample sizes.

To see how these results are affected by the ratio of the state noise ν² to the observation noise τ², we repeat the experiment first with ν² = 100 while keeping τ² = 1. This gives the state freedom to follow the observations, which helps the algorithms to perform well. The results are shown in Figure 3a below.

Those for ν^2 = 1 and τ^2 = 1/100 gave very similar results.

Figure 3: Average effective sample sizes as in Figure 2 with different ratios of the state noise ν^2 to the observation noise τ^2: (a) ν^2/τ^2 = 100; (b) ν^2/τ^2 = 1/100.

We see that the accuracy of the Filter-Smoother still diminishes as it progresses backwards through time, but all the other methods are close to their optimal efficiency of an effective sample size equal to N. This is particularly the case with our new O(N^2) algorithm, which outperforms the other O(N^2) methods at every time step. Our new O(N) algorithm however is by far the fastest, allowing it to have 10 times as many particles as the slower methods. Its efficiency also suggests that our choice of re-sampling weights is reasonable.

We finally repeat the experiment with ν^2/τ^2 = 1/100, which makes the state highly dependent through time and causes all the particle methods to struggle. This can be seen from the low effective sample sizes in Figure 3b. Even though the Filter-Smoother diminishes at a faster rate than before, it does better than the other algorithms for a large number of time steps. This is possibly due to the total accumulation of error in the filter, backwards filter and smoother, each of which performs badly in this case, which hinders the other methods. The Filter-Smoother eventually drops below the accuracy of our O(N) method, showing that our O(N) algorithm can give stronger estimates of the earliest smoothing densities in even the toughest situations.

5 Athletics records

We use our smoothing algorithm to analyse data from the women's 3000m running event. Robinson and Tawn (1995) first studied the fastest times from 1972 to 1992 to assess whether Wang Junxia's record in 1993 was consistent with the previous data. They used an extreme value likelihood with a parametric trend to conclude that cutting 16.51s off the record, though unusual, was not exceptionally so.
Smith (1997) outlined the benefits of a Bayesian analysis for calculating the probability of beating Wang Junxia's record given that a new record is set, and Gaetan and Grigoletto (2004) extended this by using particle methods to model a dynamic trend. While Gaetan and Grigoletto (2004) presented an attractive model for the data, it is our belief that the particle methods they used for their inference are

highly inefficient. They used the smoothing algorithm of Tanizaki and Mariano (1994) with an AR(2) state, which causes the smoother to degrade to the Filter-Smoother as shown in Section 2.5. They also introduced two random walks with extremely small variances to the state, which causes all particle methods to struggle, as was shown by the small effective sample sizes in Figure 3b in Section 4. We therefore believe that the conclusions they drew from using only N = 1000 particles are unreliable and we aim to produce a more robust analysis.

Whereas Gaetan and Grigoletto (2004) used the annual minimum running times, we use the r fastest annual times following the initial analysis of Robinson and Tawn (1995). Large amounts of data are now available on-line (for example from Larsson (2008)), from which the five fastest times by different athletes per year are shown in Figure 4.

The natural choice for modelling the fastest annual running times is the generalised extreme value distribution for minima (GEVm), as motivated by asymptotic extreme value theory (see Coles (2001) for details as well as an introduction to extreme value techniques). This distribution has location and scale parameters µ ∈ R and σ > 0 as well as a shape parameter ξ ∈ R which allows for many types of tail behaviour. Its cdf is given by

G(y | µ, σ, ξ) = 1 - exp{ -[1 - ξ(y - µ)/σ]_+^{-1/ξ} },

where we define [y]_+ := max(y, 0). This distribution naturally extends to the r-smallest order statistic model, which we use for the ith fastest time y_{t,i} per year. This model has a likelihood given by

g(y_{t,1:r} | µ, σ, ξ) = Ḡ(y_{t,r} | µ, σ, ξ) ∏_{i=1}^{r} [ g(y_{t,i} | µ, σ, ξ) / Ḡ(y_{t,i} | µ, σ, ξ) ],

where we write Ḡ(y | µ, σ, ξ) := 1 - G(y | µ, σ, ξ) for the GEVm survivor function and g(y | µ, σ, ξ) for the derivative of G(y | µ, σ, ξ) with respect to y.

For the state, Gaetan and Grigoletto (2004) used independent random walks for each of the three likelihood parameters. They used a second order random walk for µ_t to model the clear trend seen in the location of the data, and for σ and ξ they used first order random walks with extremely small state variances to make them roughly constant in time.
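A minimal sketch of the GEVm cdf and the r-smallest order statistic log-likelihood above (assuming ξ ≠ 0; the function names are ours, not from any particular package):

```python
import numpy as np

def gevm_cdf(y, mu, sigma, xi):
    """CDF of the GEV distribution for minima (xi != 0 assumed):
    G(y) = 1 - exp(-[1 - xi*(y - mu)/sigma]_+^(-1/xi))."""
    u = 1.0 - xi * (np.asarray(y, float) - mu) / sigma
    pos = u > 0
    t = np.where(pos, np.where(pos, u, 1.0) ** (-1.0 / xi), 0.0)
    # outside the support: G = 1 above an upper endpoint (xi > 0),
    # G = 0 below a lower endpoint (xi < 0)
    return np.where(pos, 1.0 - np.exp(-t), 1.0 if xi > 0 else 0.0)

def gevm_logpdf(y, mu, sigma, xi):
    """Log-density g = dG/dy on the support."""
    u = 1.0 - xi * (np.asarray(y, float) - mu) / sigma
    t = u ** (-1.0 / xi)
    return -np.log(sigma) + (-1.0 / xi - 1.0) * np.log(u) - t

def r_smallest_loglik(y, mu, sigma, xi):
    """Log-likelihood of the r smallest order statistics:
    log Gbar(y_r) + sum_i [log g(y_i) - log Gbar(y_i)],
    using log Gbar(y) = -[1 - xi*(y - mu)/sigma]^(-1/xi)."""
    y = np.sort(np.asarray(y, float))
    u = 1.0 - xi * (y - mu) / sigma
    if np.any(u <= 0):
        return -np.inf                   # outside the support
    t = u ** (-1.0 / xi)                 # equals -log Gbar(y_i)
    log_g = gevm_logpdf(y, mu, sigma, xi)
    return -t[-1] + np.sum(log_g + t)
```

The returned value is -inf whenever any observation falls outside the support, which is exactly the constraint that defines the set A_t used later in Appendix A.2.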
Since such state variances cause poor performance with the filters and smoothers, we assume for simplicity that these parameters are fixed and known. We later estimate them using an EM algorithm based on our smoothing algorithm, although we would have preferred to add them to the state to fully account for their uncertainty. Since a second order random walk for µ_t is an AR(2) model that causes degeneracy problems with some smoothers, we instead adopt the smooth second order random walk given in the earlier simulation study of Section 4. We therefore augment the state with µ̇_t, the velocity of µ_t, giving us the two-dimensional state x_t = (µ_t, µ̇_t).

Finally, for the prior we follow Gaetan and Grigoletto (2004) and use an uninformative normal distribution. Since the likelihood only depends on µ_t and the prior is Gaussian, we used Rao-Blackwellisation to marginalise µ̇_t, thus improving the accuracy of the methods. Details of this step and the particle algorithms we used to achieve this are given in Appendix A.2.

For a fixed value of r and ν^2 we can estimate the likelihood parameters σ and ξ using an EM algorithm constructed using our new smoother (see Appendix B

for details). Simultaneously estimating ν^2 requires particles approximating the joint distribution p(x_{t-1}, x_t | y_{1:T}), which is possible using our approach (as our algorithm gives approximations to p(x_{t-1:t+1} | y_{1:T}), see Section 3). It is simpler however to select among a few possible ν^2 by maximising the model likelihood p(y_{1972:2007} | ν^2), which we estimate using the following formula of Kitagawa (1996):

p(y_{1972:2007} | ν^2) ≈ ∏_{t=1972}^{2007} Σ_{i=1}^{N} g(y_{t,1:r} | µ̃_t^{(i)}, σ, ξ) w_{t-1}^{(i)},

where µ̃_t^{(i)} is the first component of a predictive particle x̃_t^{(i)} sampled from the state density f(· | x_{t-1}^{(i)}, ν^2) and {(x_{t-1}^{(i)}, w_{t-1}^{(i)})} are sampled from a particle filter given ν^2.

Figure 4: Five fastest times for the women's 3000m race between 1972 and 2007, with Wang Junxia's time in 1993. The two fastest annual times used for our fit are coloured black. Also shown are the mean and central 95% probability interval of the fitted predictive distribution for the fastest time per year.

Table 2 shows a selection of ν^2 values with the corresponding EM estimates of σ and ξ and the model likelihood when we take r = 2.

ν^2    σ    ξ    Likelihood

Table 2: Model likelihood with σ and ξ estimates for different values of the smoothing parameter ν^2 and r = 2.

To select the number of observations r to include per year we constructed probability-probability and quantile-quantile plots to assess the model fit. Looking at r = 1, ..., 5 we concluded that the best fit was obtained from only two

observations per year. As we see from Table 2, this leads us to select ν^2 = 1, with σ estimated to be 4.22.

To estimate the probability of a new record in 1993 beating Wang Junxia's, we use the r = 2 fastest times from 1972 to 2007 excluding 1993, denoted y'_{1972:2007}, to estimate the predictive distribution of the fastest time in 1993. Given the parameters µ_1993, σ and ξ, the probability of Y_1993, a new record in 1993, beating Wang Junxia's time of 486.11s is given by

P{Y_1993 ≤ 486.11 | Y_1993 ≤ 502.62, µ_1993, σ, ξ} = G(486.11 | µ_1993, σ, ξ) / G(502.62 | µ_1993, σ, ξ),

where 502.62s was the world record prior to 1993. Unconditioning on µ_1993, we estimate the overall probability p_rec with

p_rec = ∫ [G(486.11 | µ_1993, σ, ξ) / G(502.62 | µ_1993, σ, ξ)] p(µ_1993 | y'_{1972:2007}) dµ_1993
      ≈ Σ_{i=1}^{N} [G(486.11 | µ_1993^{(i)}, σ, ξ) / G(502.62 | µ_1993^{(i)}, σ, ξ)] w^{(i)},

where we use weighted particles to approximate p(µ_1993 | y'_{1972:2007}).

To compare the algorithms' efficiencies at approximating p(µ_1993 | y'_{1972:2007}) we run a simulation to estimate the effective sample size N_eff(X_{t,1}) as in Section 4, using 300 repetitions of each algorithm. However, since the true mean and variance of the target density are now unknown, we first estimate the true distribution using the Filter-Smoother with 750,000 particles. Since our primary interest is to estimate the probability of a new record beating Wang Junxia's, we also calculate the sample variance of our estimate of this over the 300 repetitions used to estimate N_eff.

For this simulation we chose to compare only the O(N) algorithms, as both the Forward-Backward and the Two-Filter smoothers suffer problems of degeneracy when applied to the Rao-Blackwellised filter described in Appendix A.2. These smoothers could be applied to a non-Rao-Blackwellised filter but, given the simulation results in Section 4, it appears that these smoothers would not be competitive with our new smoother.
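The particle approximation of p_rec can be sketched as follows; the particle values and the shape parameter ξ here are illustrative, with only the two times 486.11s and 502.62s and the estimate σ = 4.22 taken from the text:

```python
import numpy as np

def gevm_cdf(y, mu, sigma, xi):
    """GEVm cdf, scalar version (xi != 0 assumed)."""
    u = 1.0 - xi * (y - mu) / sigma
    if u <= 0.0:
        return 1.0 if xi > 0 else 0.0
    return 1.0 - np.exp(-u ** (-1.0 / xi))

def record_probability(mu_particles, weights, sigma, xi,
                       wang=486.11, prev_record=502.62):
    """p_rec ~= sum_i [G(486.11 | mu^(i)) / G(502.62 | mu^(i))] w^(i):
    the probability that a new record (a time below 502.62s) also
    beats Wang Junxia's 486.11s."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    ratios = np.array([gevm_cdf(wang, m, sigma, xi)
                       / gevm_cdf(prev_record, m, sigma, xi)
                       for m in mu_particles])
    return float(np.sum(ratios * w))

# Illustrative smoothed particles for mu_1993 (xi = -0.1 is a guess):
p = record_probability(np.linspace(505.0, 515.0, 11), np.ones(11),
                      sigma=4.22, xi=-0.1)
```

Each particle contributes the conditional probability of beating Wang's time given a new record, weighted by its smoothing weight.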
Since we only require the marginal smoothing distribution for 1993, our new algorithm only requires the particle filter up to 1992 and the backwards filter back to 1994. We therefore chose the same number of particles, 10,000, for both our algorithm and the Filter-Smoother, and observed that they took roughly the same amount of time to run.

The results of the simulation are shown in Table 3. We can see that our new algorithm has an effective sample size over 8 times as large as that of the Filter-Smoother, giving similarly less variable estimates. Of course, to calculate the marginals for every time step within the same amount of time our method could only use a third of the particles, but it would still outperform the Filter-Smoother for the majority of estimates.
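The Kitagawa (1996) likelihood estimate used above to select ν^2 can be sketched with a toy bootstrap particle filter; the random-walk model, noise levels and function name below are illustrative stand-ins, not the athletics model itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_log_likelihood(y, f_std, g_std, n=2000):
    """Bootstrap particle filter estimate of log p(y_{1:T}) for the toy
    model x_t = x_{t-1} + N(0, f_std^2), y_t = x_t + N(0, g_std^2).
    Each factor p(y_t | y_{1:t-1}) is estimated by
    sum_i g(y_t | x~_t^(i)) w_{t-1}^(i); after multinomial resampling
    the weights are uniform, so the sum reduces to a mean."""
    x = rng.normal(0.0, 1.0, n)                      # particles from the prior
    total = 0.0
    for obs in y:
        x = x + rng.normal(0.0, f_std, n)            # predictive particles
        g = np.exp(-0.5 * ((obs - x) / g_std) ** 2) / (np.sqrt(2 * np.pi) * g_std)
        total += np.log(np.mean(g))                  # log p(y_t | y_{1:t-1}) estimate
        x = rng.choice(x, size=n, p=g / g.sum())     # multinomial resampling
    return total
```

Running the filter once per candidate ν^2 and comparing the resulting log-likelihoods mirrors the model selection carried out in Table 2.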

Algorithm        N_eff   Var(p̂_rec)
Filter-Smoother
New O(N)

Table 3: Comparison of the efficiencies of the Filter-Smoother and our new algorithm for approximating p(µ_1993 | y'_{1972:2007}) and the probability of a new record beating Wang Junxia's time. In both cases the average probability estimate was the same.

Our analysis estimates the probability of a new record in 1993 beating Wang's to be small. This conflicts with the analysis of Gaetan and Grigoletto (2004), who showed Wang's record well within the reach of their boxplots of the conditional distribution. Apart from our doubts about the accuracy of their results, the main difference in the two analyses is that Gaetan and Grigoletto (2004) only used data on the fastest race for years up to 1992. Thus it may be the information in the extra data we use that leads us to a different conclusion about how extreme the world record of Wang is. We also admit that our analysis fails to account for the uncertainty in σ and ξ, which could cause us to under-estimate this probability. However, while Gaetan and Grigoletto (2004) attempted to account for this by augmenting the state with σ and ξ, this leads to poor performance of the particle methods, so a new approach is required.

Appendix A Implementation of particle filters and smoothers

A.1 Multivariate Normal model

To implement the various smoothing algorithms we need to choose propagation densities for a particle filter, backwards information filter and the smoother itself. Using auxiliary algorithms throughout, the linear-Gaussian model assumption allows us to calculate the optimal densities and re-sampling probabilities. Using these we have adapted algorithms giving even weights of 1/N whenever we re-sample.

Writing N(x | µ, Σ) for the density of N(µ, Σ) evaluated at x, it is easy to show that the optimal filter is given by

q(x_t | x_{t-1}^{(j)}, y_t) β_t^{(j)} ∝ f(x_t | x_{t-1}^{(j)}) g(y_t | x_t) w_{t-1}^{(j)}
  = N(x_t | µ_{t|t-1}^{(j)}, Σ_{t|t-1}) N(y_t | GF x_{t-1}^{(j)}, R + GQG') w_{t-1}^{(j)},

where Σ_{t|t-1} = (Q^{-1} + G'R^{-1}G)^{-1} and µ_{t|t-1}^{(j)} = Σ_{t|t-1}(Q^{-1}F x_{t-1}^{(j)} + G'R^{-1}y_t). This is used for each algorithm, but we only need to keep track of our trajectories for the simple Filter-Smoother.
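A sketch of this optimal proposal and re-sampling weight for a generic linear-Gaussian model; the matrices F, G, Q, R below are illustrative stand-ins, not the values used in Section 4:

```python
import numpy as np

def optimal_proposal(x_prev, y, F, G, Q, R):
    """Optimal auxiliary-filter proposal for the linear-Gaussian model
    x_t = F x_{t-1} + N(0, Q), y_t = G x_t + N(0, R):
      Sigma = (Q^-1 + G' R^-1 G)^-1
      mu    = Sigma (Q^-1 F x_prev + G' R^-1 y)
    with resampling weight proportional to N(y | G F x_prev, R + G Q G')."""
    Qi, Ri = np.linalg.inv(Q), np.linalg.inv(R)
    Sigma = np.linalg.inv(Qi + G.T @ Ri @ G)
    mu = Sigma @ (Qi @ F @ x_prev + G.T @ Ri @ y)
    m = G @ F @ x_prev                    # predictive mean of y_t
    S = R + G @ Q @ G.T                   # predictive covariance of y_t
    d = y - m
    log_beta = -0.5 * (d @ np.linalg.solve(S, d)
                       + np.log(np.linalg.det(2.0 * np.pi * S)))
    return mu, Sigma, log_beta

# Illustrative matrices (a smooth second-order random walk shape):
F = np.array([[1.0, 1.0], [0.0, 1.0]])
G = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2)
R = np.array([[1.0]])
mu, Sigma, lb = optimal_proposal(np.array([0.0, 0.1]), np.array([0.2]), F, G, Q, R)
```

Conditioning on y_t shrinks the variance of the observed component below its prior value Q[0, 0], while the unobserved component keeps its prior variance here because G only measures the first component.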
For the backwards information filter we can use the actual prior γ_t(x_t) = p(x_t) = N(x_t | µ_t, Σ_t), whose mean and covariance can be calculated sequentially using the prediction step of the Kalman filter. This gives

p(x_t | x_{t+1}) = N(x_t | F̃_t x_{t+1} + Q̃_t Σ_t^{-1} µ_t, Q̃_t),

where we define F̃_t := Σ_t F' Σ_{t+1}^{-1} and Q̃_t := Σ_t F' Σ_{t+1}^{-1} Q F'^{-1}. We then obtain

q(x_t | y_t, x̃_{t+1}^{(k)}) β̃_t^{(k)} ∝ p(x_t) g(y_t | x_t) f(x̃_{t+1}^{(k)} | x_t) w̃_{t+1}^{(k)} / p(x̃_{t+1}^{(k)})
  = N(x_t | µ̃_{t|t+1}^{(k)}, Σ̃_{t|t+1}) N(y_t | G(F̃_t x̃_{t+1}^{(k)} + Q̃_t Σ_t^{-1} µ_t), R + G Q̃_t G') w̃_{t+1}^{(k)} / p(x̃_{t+1}^{(k)}),

where Σ̃_{t|t+1} = (Σ_t^{-1} + G'R^{-1}G + F'Q^{-1}F)^{-1} and µ̃_{t|t+1}^{(k)} = Σ̃_{t|t+1}(Σ_t^{-1}µ_t + G'R^{-1}y_t + F'Q^{-1}x̃_{t+1}^{(k)}).

Finally, for our new smoothing algorithm we have

q(x_t | x_{t-1}^{(j)}, y_t, x̃_{t+1}^{(k)}) ∝ f(x_t | x_{t-1}^{(j)}) g(y_t | x_t) f(x̃_{t+1}^{(k)} | x_t) = N(x_t | µ_{t|T}^{(j,k)}, Σ_{t|T}),

where Σ_{t|T} = (Q^{-1} + G'R^{-1}G + F'Q^{-1}F)^{-1} and µ_{t|T}^{(j,k)} = Σ_{t|T}(Q^{-1}F x_{t-1}^{(j)} + G'R^{-1}y_t + F'Q^{-1}x̃_{t+1}^{(k)}). The optimal re-sampling weights can be shown to be

β_t^{(j,k)} ∝ p(x̃_{t+1}^{(k)}, y_t | x_{t-1}^{(j)}) w_{t-1}^{(j)} w̃_{t+1}^{(k)} / p(x̃_{t+1}^{(k)})
  = N( (x̃_{t+1}^{(k)}, y_t)' | (F^2, GF)' x_{t-1}^{(j)}, [ Q + FQF'  FQG' ; GQF'  R + GQG' ] ) w_{t-1}^{(j)} w̃_{t+1}^{(k)} / p(x̃_{t+1}^{(k)}),

which we can see does not factorise. Therefore, for the O(N) version of our algorithm we use β_t^{(j)} and β̃_t^{(k)} from the filters, as suggested in Section 3.1, as this should be a good approximation of the optimal weights.

A.2 Athletics records

Adapted auxiliary algorithms for this model will not be possible as the likelihood in µ_t is very complex. We therefore approximate the log likelihood l_t(µ_t) by a second-order Taylor approximation about an estimated mode µ̂_t, which leads to a normal approximation of the form

g(y_{t,1:r} | µ_t) ≈ N( µ_t | µ̂_t - l_t'(µ̂_t)/l_t''(µ̂_t), -1/l_t''(µ̂_t) ) restricted to A_t,   (12)

where the distribution is restricted to the likelihood's support of A_t := {µ_t : σ - ξ(y_{t,i} - µ_t) > 0 for all i}. In practice, we used the optimize function in R to estimate the mode at each time step.

To make the algorithms as efficient as possible we use Rao-Blackwellisation to reduce the variance of our estimates. For this we can marginalise the second component of the state, µ̇_t, as the likelihood only depends on µ_t, so the distribution of µ̇_t | µ_t can be updated using only its mean and variance. This improves the

overall approximation by allowing the second component of each particle to act as a normal distribution rather than a point mass. We therefore have particles of the form x_t^{(i)} = (µ_t^{(i)}, ṁ_t^{(i)}, τ_t^{2(i)}), where µ̇_t | {µ_t = µ_t^{(i)}} ~ N(ṁ_t^{(i)}, τ_t^{2(i)}).

To create a marginalised particle filter it helps to think of each particle x_{t-1}^{(i)} as a kernel approximation to p(µ_{t-1}, µ̇_{t-1} | y_{1:t-1}) of the form

φ_{t-1}^{(i)}(µ_{t-1}, µ̇_{t-1}) := N( (µ_{t-1}, µ̇_{t-1})' | η_{t-1}^{(i)}, K_{t-1}^{(i)} ),

with

η_{t-1}^{(i)} := (µ_{t-1}^{(i)}, ṁ_{t-1}^{(i)})',   K_{t-1}^{(i)} := [ 0  0 ; 0  τ_{t-1}^{2(i)} ].

This leads to the approximation of p(µ_t, µ̇_t | y_{1:t-1}) by

π_t^{(i)}(µ_t, µ̇_t) := ∫∫ f(µ_t, µ̇_t | µ_{t-1}, µ̇_{t-1}) φ_{t-1}^{(i)}(µ_{t-1}, µ̇_{t-1}) dµ_{t-1} dµ̇_{t-1}
  = N( (µ_t, µ̇_t)' | F η_{t-1}^{(i)}, Q + F K_{t-1}^{(i)} F' ).

To create the new particle x_t^{(i)} we therefore use standard auxiliary particle filtering with target density

q_opt(µ_t | x_{t-1}^{(i)}, y_t) β_t^{(i)} ∝ π_t^{(i)}(µ_t) g(y_t | µ_t) w_{t-1}^{(i)}

to sample µ_t^{(i)}, and then update the mean and variance of µ̇_t | {µ_t = µ_t^{(i)}} with that of π_t^{(i)}(µ̇_t | µ_t^{(i)}). For this we replace the likelihood by the approximation (12) to give us a constrained normal sampling density q(µ_t | x_{t-1}^{(i)}, y_t) for µ_t^{(i)}, and approximate the optimal re-sampling weights with

β_t^{(i)} ∝ π_t^{(i)}(µ̂_t) g(y_t | µ̂_t) w_{t-1}^{(i)} / q(µ̂_t | x_{t-1}^{(i)}, y_t),

where µ̂_t is the mean of the sampling density q(µ_t | x_{t-1}^{(i)}, y_t).

For the backwards filter we again start by defining F̃_t := Σ_t F' Σ_{t+1}^{-1} and Q̃_t := Σ_t F' Σ_{t+1}^{-1} Q F'^{-1}, where Σ_t is the variance of the normal prior at time t. It can then be shown that p(µ_t, µ̇_t | µ_{t+1}, µ̇_{t+1}) is equal to

N( (µ_t, µ̇_t)' | F̃_t (µ_{t+1}, µ̇_{t+1})' + Q̃_t Σ_t^{-1} (µ̂_t, µ̂̇_t)', Q̃_t ),

where (µ̂_t, µ̂̇_t) is the mean of the prior at time t. We then combine this with a kernel φ_{t+1}^{(i)}(µ_{t+1}, µ̇_{t+1}) created from x̃_{t+1}^{(i)} to give the density

π̃_t^{(i)}(µ_t, µ̇_t) := N( (µ_t, µ̇_t)' | F̃_t η_{t+1}^{(i)} + Q̃_t Σ_t^{-1} (µ̂_t, µ̂̇_t)', Q̃_t + F̃_t K_{t+1}^{(i)} F̃_t' ).

We now proceed in exactly the same way as with the forwards filter, using π̃ instead of π to sample x̃_t^{(i)}.

Finally, for our new smoothing algorithm, it can be shown that our target for µ_t^{(i)} in this marginalised setting is

q_opt(µ_t | x_{t-1}^{(j)}, y_t, x̃_{t+1}^{(k)}) β_t^{(j,k)} ∝ π_t^{(j)}(µ_t) w_{t-1}^{(j)} g(y_t | µ_t) π̃_t^{(k)}(µ_t) w̃_{t+1}^{(k)} / p(µ_t).
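The mode-plus-curvature construction behind the approximation (12) can be sketched as follows; we substitute a simple golden-section search for R's optimize, and the quadratic example is purely illustrative:

```python
import numpy as np

def find_mode(loglik, lo, hi, iters=80):
    """Golden-section search for the maximiser of a unimodal log-likelihood."""
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if loglik(c) < loglik(d):
            a = c       # maximiser lies in [c, b]
        else:
            b = d       # maximiser lies in [a, d]
    return 0.5 * (a + b)

def laplace_approx(loglik, lo=-100.0, hi=100.0, h=1e-4):
    """Normal approximation from a second-order Taylor expansion of the
    log-likelihood l about its mode mu_hat, as in equation (12):
    mean = mu_hat - l'(mu_hat)/l''(mu_hat), variance = -1/l''(mu_hat).
    Derivatives are taken by central finite differences."""
    m = find_mode(loglik, lo, hi)
    d1 = (loglik(m + h) - loglik(m - h)) / (2.0 * h)
    d2 = (loglik(m + h) - 2.0 * loglik(m) + loglik(m - h)) / h ** 2
    return m - d1 / d2, -1.0 / d2

# For an exactly quadratic log-likelihood the approximation is exact:
mean, var = laplace_approx(lambda m: -0.25 * (m - 3.0) ** 2)
```

In the athletics model the resulting normal density would additionally be truncated to the support set A_t before sampling.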

This leads us to sample µ_t^{(i)} as before, using the density proportional to the product of π_t^{(j)}(µ_t), π̃_t^{(k)}(µ_t) and p(µ_t)^{-1} in place of π_t^{(j)}(µ_t). We can then calculate the mean and variance of µ̇_t | µ_t^{(i)} from the distribution proportional to

π_t^{(j)}(µ̇_t | µ_t^{(i)}) π̃_t^{(k)}(µ̇_t | µ_t^{(i)}) / p(µ̇_t | µ_t^{(i)}).

The filter and backwards filter re-sampling weights were used again for the sub-optimal O(N) version of our algorithm.

For both the filter and the backwards filter the initial step was sampled using standard importance sampling, as the target density is available in closed form and using it rather than propagating the prior greatly improves the algorithm. We also used the stratified sampling algorithm of Carpenter et al. (1999) in both the filters and our new algorithm to reduce the Monte Carlo error of re-sampling. Since we chose not to include the data from 1993, for this time step in each of the above algorithms we proceed without the likelihood term g(y_t | µ_t).

Appendix B EM algorithm for parameter estimates

For our athletics model we require estimates of the fixed likelihood parameters θ = (σ, ξ), which we intend to obtain from the EM algorithm of Dempster et al. (1977). To do this we aim to maximise the likelihood p(y_{1:T} | θ) by iteratively maximising

Q(θ | θ^{(n)}) := E( log p(X_{0:T}, y_{1:T} | θ) | y_{1:T}, θ^{(n)} )

to give θ^{(n+1)}. Since the parameters θ do not appear in the state density, the joint log likelihood can be written as

log p(x_{0:T}, y_{1:T} | θ) = log p(x_0) + Σ_{t=1}^{T} log f(x_t | x_{t-1}) + Σ_{t=1}^{T} log g(y_t | x_t, θ).

We therefore have

Q(θ | θ^{(n)}) = const + Σ_{t=1}^{T} E( log g(y_t | X_t, θ) | y_{1:T}, θ^{(n)} )
  ≈ const + Σ_{t=1}^{T} Σ_{i=1}^{N} log g(y_t | x_t^{(i)}, θ) w_t^{(i)},

where (x_t, w_t)^{(i)} are weighted particles approximating p(x_t | y_{1:T}, θ^{(n)}). Thus we only require particles from the marginal smoothing densities to estimate the expectation, so our new algorithm can be used directly. To estimate parameters from the state density with the EM algorithm, pairs of particles approximating p(x_{t-1}, x_t | y_{1:T}, θ^{(n)}) are required, which we note are available from our algorithm as either (x̃_{t-1}^{(i)}, x_t^{(k_i)}) at time t-1 or as (x_{t-1}^{(j_i)}, x̃_t^{(i)}) at time t.
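A sketch of the resulting particle M-step; the grid search and the Gaussian observation density in the toy check are our illustrative choices, not the GEVm likelihood:

```python
import numpy as np

def em_m_step(smoothed_particles, observations, loglik, theta_grid):
    """One EM M-step using the particle approximation
    Q(theta) ~= sum_t sum_i log g(y_t | x_t^(i), theta) w_t^(i),
    maximised here by a simple grid search over candidate theta values.
    smoothed_particles holds (particles, weights) pairs, one per time
    step, drawn from the marginal smoothing densities."""
    def q_value(theta):
        return sum(np.sum(w * loglik(y, x, theta))
                   for (x, w), y in zip(smoothed_particles, observations))
    return max(theta_grid, key=q_value)

# Toy check with a Gaussian observation density g(y | x, s) = N(y; x, s^2):
rng = np.random.default_rng(0)

def gauss_ll(y, x, s):
    return -0.5 * ((y - x) / s) ** 2 - np.log(s)

parts = [(rng.normal(0.0, 0.01, 500), np.full(500, 1.0 / 500)) for _ in range(50)]
obs = rng.normal(0.0, 2.0, 50)          # true observation s.d. is 2
est = em_m_step(parts, obs, gauss_ll, np.linspace(0.5, 4.0, 36))
```

Iterating this step with re-run smoothing between iterations gives the EM estimates of σ and ξ reported in Table 2; a numerical optimiser could replace the grid search.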


More information

Some Basic Information about M-S-D Systems

Some Basic Information about M-S-D Systems Some Basic Informaion abou M-S-D Sysems 1 Inroducion We wan o give some summary of he facs concerning unforced (homogeneous) and forced (non-homogeneous) models for linear oscillaors governed by second-order,

More information

Online Appendix to Solution Methods for Models with Rare Disasters

Online Appendix to Solution Methods for Models with Rare Disasters Online Appendix o Soluion Mehods for Models wih Rare Disasers Jesús Fernández-Villaverde and Oren Levinal In his Online Appendix, we presen he Euler condiions of he model, we develop he pricing Calvo block,

More information

INTRODUCTION TO MACHINE LEARNING 3RD EDITION

INTRODUCTION TO MACHINE LEARNING 3RD EDITION ETHEM ALPAYDIN The MIT Press, 2014 Lecure Slides for INTRODUCTION TO MACHINE LEARNING 3RD EDITION alpaydin@boun.edu.r hp://www.cmpe.boun.edu.r/~ehem/i2ml3e CHAPTER 2: SUPERVISED LEARNING Learning a Class

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Spike-count autocorrelations in time.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Spike-count autocorrelations in time. Supplemenary Figure 1 Spike-coun auocorrelaions in ime. Normalized auocorrelaion marices are shown for each area in a daase. The marix shows he mean correlaion of he spike coun in each ime bin wih he spike

More information

Section 3.5 Nonhomogeneous Equations; Method of Undetermined Coefficients

Section 3.5 Nonhomogeneous Equations; Method of Undetermined Coefficients Secion 3.5 Nonhomogeneous Equaions; Mehod of Undeermined Coefficiens Key Terms/Ideas: Linear Differenial operaor Nonlinear operaor Second order homogeneous DE Second order nonhomogeneous DE Soluion o homogeneous

More information

GMM - Generalized Method of Moments

GMM - Generalized Method of Moments GMM - Generalized Mehod of Momens Conens GMM esimaion, shor inroducion 2 GMM inuiion: Maching momens 2 3 General overview of GMM esimaion. 3 3. Weighing marix...........................................

More information

Navneet Saini, Mayank Goyal, Vishal Bansal (2013); Term Project AML310; Indian Institute of Technology Delhi

Navneet Saini, Mayank Goyal, Vishal Bansal (2013); Term Project AML310; Indian Institute of Technology Delhi Creep in Viscoelasic Subsances Numerical mehods o calculae he coefficiens of he Prony equaion using creep es daa and Herediary Inegrals Mehod Navnee Saini, Mayank Goyal, Vishal Bansal (23); Term Projec

More information

Biol. 356 Lab 8. Mortality, Recruitment, and Migration Rates

Biol. 356 Lab 8. Mortality, Recruitment, and Migration Rates Biol. 356 Lab 8. Moraliy, Recruimen, and Migraion Raes (modified from Cox, 00, General Ecology Lab Manual, McGraw Hill) Las week we esimaed populaion size hrough several mehods. One assumpion of all hese

More information

Christos Papadimitriou & Luca Trevisan November 22, 2016

Christos Papadimitriou & Luca Trevisan November 22, 2016 U.C. Bereley CS170: Algorihms Handou LN-11-22 Chrisos Papadimiriou & Luca Trevisan November 22, 2016 Sreaming algorihms In his lecure and he nex one we sudy memory-efficien algorihms ha process a sream

More information

Comparing Means: t-tests for One Sample & Two Related Samples

Comparing Means: t-tests for One Sample & Two Related Samples Comparing Means: -Tess for One Sample & Two Relaed Samples Using he z-tes: Assumpions -Tess for One Sample & Two Relaed Samples The z-es (of a sample mean agains a populaion mean) is based on he assumpion

More information

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still.

Lecture 2-1 Kinematics in One Dimension Displacement, Velocity and Acceleration Everything in the world is moving. Nothing stays still. Lecure - Kinemaics in One Dimension Displacemen, Velociy and Acceleraion Everyhing in he world is moving. Nohing says sill. Moion occurs a all scales of he universe, saring from he moion of elecrons in

More information

Rao-Blackwellized Auxiliary Particle Filters for Mixed Linear/Nonlinear Gaussian models

Rao-Blackwellized Auxiliary Particle Filters for Mixed Linear/Nonlinear Gaussian models Rao-Blackwellized Auxiliary Paricle Filers for Mixed Linear/Nonlinear Gaussian models Jerker Nordh Deparmen of Auomaic Conrol Lund Universiy, Sweden Email: jerker.nordh@conrol.lh.se Absrac The Auxiliary

More information

Random Walk with Anti-Correlated Steps

Random Walk with Anti-Correlated Steps Random Walk wih Ani-Correlaed Seps John Noga Dirk Wagner 2 Absrac We conjecure he expeced value of random walks wih ani-correlaed seps o be exacly. We suppor his conjecure wih 2 plausibiliy argumens and

More information

SMC in Estimation of a State Space Model

SMC in Estimation of a State Space Model SMC in Esimaion of a Sae Space Model Dong-Whan Ko Deparmen of Economics Rugers, he Sae Universiy of New Jersey December 31, 2012 Absrac I briefly summarize procedures for macroeconomic Dynamic Sochasic

More information

In this chapter the model of free motion under gravity is extended to objects projected at an angle. When you have completed it, you should

In this chapter the model of free motion under gravity is extended to objects projected at an angle. When you have completed it, you should Cambridge Universiy Press 978--36-60033-7 Cambridge Inernaional AS and A Level Mahemaics: Mechanics Coursebook Excerp More Informaion Chaper The moion of projeciles In his chaper he model of free moion

More information

1 Review of Zero-Sum Games

1 Review of Zero-Sum Games COS 5: heoreical Machine Learning Lecurer: Rob Schapire Lecure #23 Scribe: Eugene Brevdo April 30, 2008 Review of Zero-Sum Games Las ime we inroduced a mahemaical model for wo player zero-sum games. Any

More information

STATE-SPACE MODELLING. A mass balance across the tank gives:

STATE-SPACE MODELLING. A mass balance across the tank gives: B. Lennox and N.F. Thornhill, 9, Sae Space Modelling, IChemE Process Managemen and Conrol Subjec Group Newsleer STE-SPACE MODELLING Inroducion: Over he pas decade or so here has been an ever increasing

More information

EKF SLAM vs. FastSLAM A Comparison

EKF SLAM vs. FastSLAM A Comparison vs. A Comparison Michael Calonder, Compuer Vision Lab Swiss Federal Insiue of Technology, Lausanne EPFL) michael.calonder@epfl.ch The wo algorihms are described wih a planar robo applicaion in mind. Generalizaion

More information

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits

Exponential Weighted Moving Average (EWMA) Chart Under The Assumption of Moderateness And Its 3 Control Limits DOI: 0.545/mjis.07.5009 Exponenial Weighed Moving Average (EWMA) Char Under The Assumpion of Moderaeness And Is 3 Conrol Limis KALPESH S TAILOR Assisan Professor, Deparmen of Saisics, M. K. Bhavnagar Universiy,

More information

Unsteady Flow Problems

Unsteady Flow Problems School of Mechanical Aerospace and Civil Engineering Unseady Flow Problems T. J. Craf George Begg Building, C41 TPFE MSc CFD-1 Reading: J. Ferziger, M. Peric, Compuaional Mehods for Fluid Dynamics H.K.

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION SUPPLEMENTARY INFORMATION DOI: 0.038/NCLIMATE893 Temporal resoluion and DICE * Supplemenal Informaion Alex L. Maren and Sephen C. Newbold Naional Cener for Environmenal Economics, US Environmenal Proecion

More information

4.1 Other Interpretations of Ridge Regression

4.1 Other Interpretations of Ridge Regression CHAPTER 4 FURTHER RIDGE THEORY 4. Oher Inerpreaions of Ridge Regression In his secion we will presen hree inerpreaions for he use of ridge regression. The firs one is analogous o Hoerl and Kennard reasoning

More information

Chapter 2. First Order Scalar Equations

Chapter 2. First Order Scalar Equations Chaper. Firs Order Scalar Equaions We sar our sudy of differenial equaions in he same way he pioneers in his field did. We show paricular echniques o solve paricular ypes of firs order differenial equaions.

More information

The Arcsine Distribution

The Arcsine Distribution The Arcsine Disribuion Chris H. Rycrof Ocober 6, 006 A common heme of he class has been ha he saisics of single walker are ofen very differen from hose of an ensemble of walkers. On he firs homework, we

More information

Application of a Stochastic-Fuzzy Approach to Modeling Optimal Discrete Time Dynamical Systems by Using Large Scale Data Processing

Application of a Stochastic-Fuzzy Approach to Modeling Optimal Discrete Time Dynamical Systems by Using Large Scale Data Processing Applicaion of a Sochasic-Fuzzy Approach o Modeling Opimal Discree Time Dynamical Sysems by Using Large Scale Daa Processing AA WALASZE-BABISZEWSA Deparmen of Compuer Engineering Opole Universiy of Technology

More information

Lecture Notes 2. The Hilbert Space Approach to Time Series

Lecture Notes 2. The Hilbert Space Approach to Time Series Time Series Seven N. Durlauf Universiy of Wisconsin. Basic ideas Lecure Noes. The Hilber Space Approach o Time Series The Hilber space framework provides a very powerful language for discussing he relaionship

More information

Recursive Least-Squares Fixed-Interval Smoother Using Covariance Information based on Innovation Approach in Linear Continuous Stochastic Systems

Recursive Least-Squares Fixed-Interval Smoother Using Covariance Information based on Innovation Approach in Linear Continuous Stochastic Systems 8 Froniers in Signal Processing, Vol. 1, No. 1, July 217 hps://dx.doi.org/1.2266/fsp.217.112 Recursive Leas-Squares Fixed-Inerval Smooher Using Covariance Informaion based on Innovaion Approach in Linear

More information

Testing the Random Walk Model. i.i.d. ( ) r

Testing the Random Walk Model. i.i.d. ( ) r he random walk heory saes: esing he Random Walk Model µ ε () np = + np + Momen Condiions where where ε ~ i.i.d he idea here is o es direcly he resricions imposed by momen condiions. lnp lnp µ ( lnp lnp

More information

L07. KALMAN FILTERING FOR NON-LINEAR SYSTEMS. NA568 Mobile Robotics: Methods & Algorithms

L07. KALMAN FILTERING FOR NON-LINEAR SYSTEMS. NA568 Mobile Robotics: Methods & Algorithms L07. KALMAN FILTERING FOR NON-LINEAR SYSTEMS NA568 Mobile Roboics: Mehods & Algorihms Today s Topic Quick review on (Linear) Kalman Filer Kalman Filering for Non-Linear Sysems Exended Kalman Filer (EKF)

More information

ACE 562 Fall Lecture 8: The Simple Linear Regression Model: R 2, Reporting the Results and Prediction. by Professor Scott H.

ACE 562 Fall Lecture 8: The Simple Linear Regression Model: R 2, Reporting the Results and Prediction. by Professor Scott H. ACE 56 Fall 5 Lecure 8: The Simple Linear Regression Model: R, Reporing he Resuls and Predicion by Professor Sco H. Irwin Required Readings: Griffihs, Hill and Judge. "Explaining Variaion in he Dependen

More information

Linear Response Theory: The connection between QFT and experiments

Linear Response Theory: The connection between QFT and experiments Phys540.nb 39 3 Linear Response Theory: The connecion beween QFT and experimens 3.1. Basic conceps and ideas Q: How do we measure he conduciviy of a meal? A: we firs inroduce a weak elecric field E, and

More information

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model

Modal identification of structures from roving input data by means of maximum likelihood estimation of the state space model Modal idenificaion of srucures from roving inpu daa by means of maximum likelihood esimaion of he sae space model J. Cara, J. Juan, E. Alarcón Absrac The usual way o perform a forced vibraion es is o fix

More information

hen found from Bayes rule. Specically, he prior disribuion is given by p( ) = N( ; ^ ; r ) (.3) where r is he prior variance (we add on he random drif

hen found from Bayes rule. Specically, he prior disribuion is given by p( ) = N( ; ^ ; r ) (.3) where r is he prior variance (we add on he random drif Chaper Kalman Filers. Inroducion We describe Bayesian Learning for sequenial esimaion of parameers (eg. means, AR coeciens). The updae procedures are known as Kalman Filers. We show how Dynamic Linear

More information

Empirical Process Theory

Empirical Process Theory Empirical Process heory 4.384 ime Series Analysis, Fall 27 Reciaion by Paul Schrimpf Supplemenary o lecures given by Anna Mikusheva Ocober 7, 28 Reciaion 7 Empirical Process heory Le x be a real-valued

More information

Monte Carlo Filter Particle Filter

Monte Carlo Filter Particle Filter 205 European Conrol Conference (ECC) July 5-7, 205. Linz, Ausria Mone Carlo Filer Paricle Filer Masaya Muraa, Hidehisa Nagano and Kunio Kashino Absrac We propose a new realizaion mehod of he sequenial

More information

2.7. Some common engineering functions. Introduction. Prerequisites. Learning Outcomes

2.7. Some common engineering functions. Introduction. Prerequisites. Learning Outcomes Some common engineering funcions 2.7 Inroducion This secion provides a caalogue of some common funcions ofen used in Science and Engineering. These include polynomials, raional funcions, he modulus funcion

More information

2.160 System Identification, Estimation, and Learning. Lecture Notes No. 8. March 6, 2006

2.160 System Identification, Estimation, and Learning. Lecture Notes No. 8. March 6, 2006 2.160 Sysem Idenificaion, Esimaion, and Learning Lecure Noes No. 8 March 6, 2006 4.9 Eended Kalman Filer In many pracical problems, he process dynamics are nonlinear. w Process Dynamics v y u Model (Linearized)

More information

1. VELOCITY AND ACCELERATION

1. VELOCITY AND ACCELERATION 1. VELOCITY AND ACCELERATION 1.1 Kinemaics Equaions s = u + 1 a and s = v 1 a s = 1 (u + v) v = u + as 1. Displacemen-Time Graph Gradien = speed 1.3 Velociy-Time Graph Gradien = acceleraion Area under

More information

Tracking. Announcements

Tracking. Announcements Tracking Tuesday, Nov 24 Krisen Grauman UT Ausin Announcemens Pse 5 ou onigh, due 12/4 Shorer assignmen Auo exension il 12/8 I will no hold office hours omorrow 5 6 pm due o Thanksgiving 1 Las ime: Moion

More information

ACE 562 Fall Lecture 4: Simple Linear Regression Model: Specification and Estimation. by Professor Scott H. Irwin

ACE 562 Fall Lecture 4: Simple Linear Regression Model: Specification and Estimation. by Professor Scott H. Irwin ACE 56 Fall 005 Lecure 4: Simple Linear Regression Model: Specificaion and Esimaion by Professor Sco H. Irwin Required Reading: Griffihs, Hill and Judge. "Simple Regression: Economic and Saisical Model

More information

Temporal probability models. Chapter 15, Sections 1 5 1

Temporal probability models. Chapter 15, Sections 1 5 1 Temporal probabiliy models Chaper 15, Secions 1 5 Chaper 15, Secions 1 5 1 Ouline Time and uncerainy Inerence: ilering, predicion, smoohing Hidden Markov models Kalman ilers (a brie menion) Dynamic Bayesian

More information

Matlab and Python programming: how to get started

Matlab and Python programming: how to get started Malab and Pyhon programming: how o ge sared Equipping readers he skills o wrie programs o explore complex sysems and discover ineresing paerns from big daa is one of he main goals of his book. In his chaper,

More information

An recursive analytical technique to estimate time dependent physical parameters in the presence of noise processes

An recursive analytical technique to estimate time dependent physical parameters in the presence of noise processes WHAT IS A KALMAN FILTER An recursive analyical echnique o esimae ime dependen physical parameers in he presence of noise processes Example of a ime and frequency applicaion: Offse beween wo clocks PREDICTORS,

More information

14 Autoregressive Moving Average Models

14 Autoregressive Moving Average Models 14 Auoregressive Moving Average Models In his chaper an imporan parameric family of saionary ime series is inroduced, he family of he auoregressive moving average, or ARMA, processes. For a large class

More information

Two Coupled Oscillators / Normal Modes

Two Coupled Oscillators / Normal Modes Lecure 3 Phys 3750 Two Coupled Oscillaors / Normal Modes Overview and Moivaion: Today we ake a small, bu significan, sep owards wave moion. We will no ye observe waves, bu his sep is imporan in is own

More information

Time series model fitting via Kalman smoothing and EM estimation in TimeModels.jl

Time series model fitting via Kalman smoothing and EM estimation in TimeModels.jl Time series model fiing via Kalman smoohing and EM esimaion in TimeModels.jl Gord Sephen Las updaed: January 206 Conens Inroducion 2. Moivaion and Acknowledgemens....................... 2.2 Noaion......................................

More information

Class Meeting # 10: Introduction to the Wave Equation

Class Meeting # 10: Introduction to the Wave Equation MATH 8.5 COURSE NOTES - CLASS MEETING # 0 8.5 Inroducion o PDEs, Fall 0 Professor: Jared Speck Class Meeing # 0: Inroducion o he Wave Equaion. Wha is he wave equaion? The sandard wave equaion for a funcion

More information

CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK

CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK 175 CHAPTER 10 VALIDATION OF TEST WITH ARTIFICAL NEURAL NETWORK 10.1 INTRODUCTION Amongs he research work performed, he bes resuls of experimenal work are validaed wih Arificial Neural Nework. From he

More information

Mathcad Lecture #8 In-class Worksheet Curve Fitting and Interpolation

Mathcad Lecture #8 In-class Worksheet Curve Fitting and Interpolation Mahcad Lecure #8 In-class Workshee Curve Fiing and Inerpolaion A he end of his lecure, you will be able o: explain he difference beween curve fiing and inerpolaion decide wheher curve fiing or inerpolaion

More information

12: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME. Σ j =

12: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME. Σ j = 1: AUTOREGRESSIVE AND MOVING AVERAGE PROCESSES IN DISCRETE TIME Moving Averages Recall ha a whie noise process is a series { } = having variance σ. The whie noise process has specral densiy f (λ) = of

More information

13.3 Term structure models

13.3 Term structure models 13.3 Term srucure models 13.3.1 Expecaions hypohesis model - Simples "model" a) shor rae b) expecaions o ge oher prices Resul: y () = 1 h +1 δ = φ( δ)+ε +1 f () = E (y +1) (1) =δ + φ( δ) f (3) = E (y +)

More information

Module 2 F c i k c s la l w a s o s f dif di fusi s o i n

Module 2 F c i k c s la l w a s o s f dif di fusi s o i n Module Fick s laws of diffusion Fick s laws of diffusion and hin film soluion Adolf Fick (1855) proposed: d J α d d d J (mole/m s) flu (m /s) diffusion coefficien and (mole/m 3 ) concenraion of ions, aoms

More information