Time series model fitting via Kalman smoothing and EM estimation in TimeModels.jl

Gord Stephen

Last updated: January 2016

Contents

1 Introduction
  1.1 Motivation and Acknowledgements
  1.2 Notation
2 Kalman Smoothing
  2.1 Prediction and Filtering
  2.2 Smoothing
  2.3 Lag-1 Covariance Smoother
3 Linear Constraint Parameterization
  3.1 System Matrix Decomposition
  3.2 Linear Constraint Formulation
  3.3 System Equation Representation
4 Expectation-Maximization Parameter Estimation
  4.1 Basic Log-likelihood Derivation
  4.2 Log-likelihood Derivation with Zero-variance States or Observations
  4.3 Solving for $p_A$
  4.4 Solving for $p_B$
  4.5 Solving for $p_Q$
  4.6 Solving for $p_C$
  4.7 Solving for $p_D$
  4.8 Solving for $p_R$
  4.9 Solving for $p_{x_0}$
  4.10 Solving for $p_S$

1 Introduction

1.1 Motivation and Acknowledgements

While the expectation-maximization (EM) optimization approach is well-recognized in time-series analysis texts as a useful method for obtaining local maximum-likelihood (ML) parameter estimates of linear state-space model system matrices, avoiding the need to resort to general gradient-free optimization techniques, the procedure is generally only described as a means to generate maximum likelihood estimates of full system matrices. In many practical cases, however, most elements of a state-space model's system matrices are set deterministically, with only a small subset requiring estimation (one notable example of such a case is the fitting of ARIMA models). In such situations, allowing all elements of the system matrices to vary during the fitting process is clearly undesirable.

A common solution to this challenge is to simply disregard analytical knowledge of the system and fall back on gradient-free nonlinear optimization techniques to compute maximum-likelihood estimates for the unknown parameters. This can be avoided, however, by parameterizing system matrices as reshaped linear transformations of smaller parameter vectors, then applying the EM estimation algorithm to the resulting system equations to generate ML estimates for the parameter vectors alone. This model fitting approach was first outlined by Holmes and implemented as the MARSS R package (https://cran.r-project.org/web/packages/MARSS/index.html). This document re-derives these techniques and describes their implementation in the TimeModels.jl Julia package (https://github.com/JuliaStats/TimeModels.jl).

The results presented here are based very heavily on Holmes' original derivations (available at https://cran.r-project.org/web/packages/MARSS/vignettes/EMDerivation.pdf), with some gaps in derivation steps filled, and changes to notation and the treatment of external system inputs that will be more familiar to readers approaching state-space modelling from a control-theory background.

Like TimeModels.jl, this document is very much a work in progress. In particular, general constraints for solvable systems (while usually alluded to in the various derivations) are not yet detailed explicitly. The reader is instead referred to section 10 of Holmes' derivations, which provides a relevant summary of such constraints, albeit with the different notation and external input expressions mentioned above. In some cases where derivations are identical to those given by Holmes (notational differences aside), only the final results are given.

1.2 Notation

We define a general multivariate state-space process with Gaussian noise according to the following notation:

$$x_t = A_{t-1} x_{t-1} + B_{t-1} u_{t-1} + v_t, \quad v_t \sim N(0, V_{t-1}), \quad t = 2 \ldots n$$

$$y_t = C_t x_t + D_t u_t + w_t, \quad w_t \sim N(0, W_t), \quad t = 1 \ldots n$$
$$x_1 \sim N(x_0, P_0)$$

Here, $x_t \in \mathbb{R}^{n_x}$ is an unobserved state vector, $y_t \in \mathbb{R}^{n_y}$ is an observation vector with possible missing values, and $u_t \in \mathbb{R}^{n_u}$ is a deterministic input vector. $A_t$ is the state transition matrix, $B_t$ is the control input matrix, and $V_t$ is the state transition noise covariance matrix. $C_t$ is the observation matrix, $D_t$ the feed-forward matrix, and $W_t$ the observation noise covariance matrix. Each system matrix can be time-varying. $x_0$ and $P_0$ are initial state value and covariance conditions needed to initiate the recursive process evolution.

2 Kalman Smoothing

Given the observation time series $y_t$ (possibly containing missing values), the input time series $u_t$, and values for the system matrices, we can apply Kalman smoothing to compute the probabilistic distribution of the latent state time series $x_t$. After Kalman smoothing, the estimated latent state time series values will be distributed as $x_t \sim N(\hat{x}_t, \hat{P}_t)$. This distribution is obtained by taking into account all observations in $y$. Obtaining these estimates also requires computing and storing a predicted state distribution $x_{t+1} \sim N(x_{t+1}, P_{t+1})$, based only on observations $y_1 \ldots y_t$.

2.1 Prediction and Filtering

$x_{t+1}$, $P_{t+1}$, the Kalman gain matrix $K_t$, and the marginal log-likelihood $ll$ of the observed series given the system matrices can be computed via forward recursion starting from initial conditions $x_1 = x_0$, $P_1 = P_0$:

$$\epsilon_t = y_t - C_t x_t - D_t u_t$$
$$\Sigma_t = C_t P_t C_t^T + W_t$$
$$K_t = A_t P_t C_t^T \Sigma_t^{-1}$$
$$ll_t = ll_{t-1} - \epsilon_t^T \Sigma_t^{-1} \epsilon_t - \ln|\Sigma_t|$$
$$x_{t+1} = A_t x_t + B_t u_t + K_t \epsilon_t$$
$$P_{t+1} = A_t P_t (A_t - K_t C_t)^T + V_t$$
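To make the recursion concrete, the following is a minimal plain-Julia sketch of these prediction and filtering equations for a time-invariant system with fully-observed data (an illustrative re-implementation under those simplifying assumptions, not the TimeModels.jl internals; all names are chosen here for the example):

```julia
using LinearAlgebra

# Forward prediction/filtering pass for a time-invariant system
# x_{t+1} = A x_t + B u_t + v_t,  y_t = C x_t + D u_t + w_t.
# Columns of y and u are time steps; missing-data handling is omitted.
function kalman_filter(y, u, A, B, C, D, V, W, x0, P0)
    n  = size(y, 2)
    nx = length(x0)
    xs = zeros(nx, n + 1); Ps = [zeros(nx, nx) for _ in 1:n+1]
    xs[:, 1] = x0; Ps[1] = P0
    Ks    = Vector{Matrix{Float64}}(undef, n)
    Sig   = Vector{Matrix{Float64}}(undef, n)
    innov = zeros(size(y, 1), n)
    ll = 0.0                                     # log-likelihood, up to constants as in section 2.1
    for t in 1:n
        x, P = xs[:, t], Ps[t]
        e = y[:, t] - C * x - D * u[:, t]        # innovation ε_t
        S = C * P * C' + W                       # innovation covariance Σ_t
        K = A * P * C' / S                       # Kalman gain K_t = A P C' Σ⁻¹
        ll -= e' * (S \ e) + logdet(S)           # accumulate log-likelihood contribution
        xs[:, t + 1] = A * x + B * u[:, t] + K * e   # predicted state x_{t+1}
        Ps[t + 1] = A * P * (A - K * C)' + V         # predicted covariance P_{t+1}
        innov[:, t] = e; Sig[t] = S; Ks[t] = K
    end
    return xs, Ps, Ks, innov, Sig, ll
end
```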

2.2 Smoothing

Now, given $x_t$, $P_t$, $K_t$, and $\Sigma_t$, smoothed values $\hat{x}_t$ and $\hat{P}_t$ can be computed via backwards recursion, with initial conditions $r_n = 0_{n_x}$ and $N_n = 0_{n_x,n_x}$:

$$L_t = A_t - K_t C_t$$
$$r_{t-1} = C_t^T \Sigma_t^{-1} \epsilon_t + L_t^T r_t$$
$$N_{t-1} = C_t^T \Sigma_t^{-1} C_t + L_t^T N_t L_t$$
$$\hat{x}_t = x_t + P_t r_{t-1}$$
$$\hat{P}_t = P_t - P_t N_{t-1} P_t$$

2.3 Lag-1 Covariance Smoother

The EM parameter estimation procedure described below also requires $\hat{P}_{t,t-1}$, the lag-1 state covariance between $x_t$ and $x_{t-1}$. This can be computed by smoothing a corresponding stacked state-space model given as follows:

$$\tilde{x}_t = \begin{bmatrix} x_t \\ x_{t-1} \end{bmatrix}, \quad \tilde{A}_t = \begin{bmatrix} A_t & 0_{n_x,n_x} \\ I_{n_x} & 0_{n_x,n_x} \end{bmatrix}, \quad \tilde{B}_t = \begin{bmatrix} B_t \\ 0_{n_x,n_u} \end{bmatrix}, \quad \tilde{V}_t = \begin{bmatrix} V_t & 0_{n_x,n_x} \\ 0_{n_x,n_x} & 0_{n_x,n_x} \end{bmatrix}, \quad \tilde{C}_t = \begin{bmatrix} C_t & 0_{n_y,n_x} \end{bmatrix}$$

$$\tilde{x}_t = \tilde{A}_{t-1}\tilde{x}_{t-1} + \tilde{B}_{t-1} u_{t-1} + \tilde{v}_t, \quad \tilde{v}_t \sim N(0, \tilde{V}_{t-1})$$
$$y_t = \tilde{C}_t \tilde{x}_t + D_t u_t + w_t, \quad w_t \sim N(0, W_t)$$

The resulting $\tilde{x}_t$ state covariance $\tilde{P}_t$ will be a block matrix of the form

$$\tilde{P}_t = \begin{bmatrix} \hat{P}_t & \hat{P}_{t,t-1} \\ \hat{P}_{t,t-1}^T & \hat{P}_{t-1} \end{bmatrix}$$

from which $\hat{P}_{t,t-1}$ can be easily extracted.
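A corresponding plain-Julia sketch of the backwards smoothing recursion, reusing the quantities returned by the filtering sketch above (again illustrative only, not the package implementation):

```julia
using LinearAlgebra

# Backwards smoothing pass using the predicted states/covariances, gains,
# innovations and innovation covariances produced by kalman_filter above.
function kalman_smooth(xs, Ps, Ks, innov, Sig, A, C)
    n  = size(innov, 2)
    nx = size(A, 1)
    xhat = zeros(nx, n); Phat = [zeros(nx, nx) for _ in 1:n]
    r = zeros(nx); N = zeros(nx, nx)
    for t in n:-1:1
        L = A - Ks[t] * C
        r = C' * (Sig[t] \ innov[:, t]) + L' * r   # r_{t-1}
        N = C' * (Sig[t] \ C) + L' * N * L         # N_{t-1}
        xhat[:, t] = xs[:, t] + Ps[t] * r          # smoothed mean x̂_t
        Phat[t] = Ps[t] - Ps[t] * N * Ps[t]        # smoothed covariance P̂_t
    end
    return xhat, Phat
end
```

Applying the same two passes to the stacked system defined above yields the lag-1 covariances $\hat{P}_{t,t-1}$ as the off-diagonal blocks of the stacked smoothed covariances.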

3 Linear Constraint Parameterization

3.1 System Matrix Decomposition

To parameterize and estimate the time-dependent system matrices, we first factor them into components that are either time-independent and parameterized, or time-dependent and explicitly defined:

$$A_t = A_{1,t} A_{2,p_A} A_{3,t}, \quad B_t = B_{1,t} B_{2,p_B}, \quad V_t = G_t Q_{p_Q} G_t^T$$
$$C_t = C_{1,t} C_{2,p_C} C_{3,t}, \quad D_t = D_{1,t} D_{2,p_D}, \quad W_t = H_t R_{p_R} H_t^T$$

where $p_A$, $p_B$, $p_Q$, $p_C$, $p_D$, and $p_R$ are parameter vectors to be estimated. We require that $Q_{p_Q}$ and $R_{p_R}$ be invertible in addition to being valid covariance matrices (symmetric and positive-semidefinite).

3.2 Linear Constraint Formulation

We formalize the linear constraints on the parameterized matrices as:

$$\mathrm{vec}(Z_{p_Z}) = f_Z + D_Z p_Z$$

where $f_Z$ and $D_Z$ represent the constant and parameter coefficient terms making up $Z$.

3.3 System Equation Representation

We define $\Phi_t = (G_t^T G_t)^{-1} G_t^T$ such that:

$$\Phi_{t-1} x_t = \Phi_{t-1} A_{t-1} x_{t-1} + \Phi_{t-1} B_{t-1} u_{t-1} + \Phi_{t-1} G_{t-1} q_t$$
$$\Phi_{t-1} x_t = \Phi_{t-1} A_{t-1} x_{t-1} + \Phi_{t-1} B_{t-1} u_{t-1} + q_t$$

Using Kronecker product identities, we can now represent the general system equations in terms of the parameters to be estimated:

$$\mathrm{vec}(\Phi_{t-1} x_t) = \mathrm{vec}(\Phi_{t-1} A_{t-1} x_{t-1}) + \mathrm{vec}(\Phi_{t-1} B_{t-1} u_{t-1}) + \mathrm{vec}(q_t)$$

Using $Ab = \mathrm{vec}(Ab) = (b^T \otimes I_{n_a})\mathrm{vec}(A)$, where $A$ is an $n_a \times n_b$ matrix and $b$ is a length-$n_b$ vector:

$$\Phi_{t-1} x_t = \Phi_{t-1}\mathrm{vec}(A_{t-1} x_{t-1}) + \Phi_{t-1}\mathrm{vec}(B_{t-1} u_{t-1}) + q_t$$

$$\Phi_{t-1} x_t = \Phi_{t-1}(x_{t-1}^T \otimes I_{n_x})\mathrm{vec}(A_{t-1}) + \Phi_{t-1}(u_{t-1}^T \otimes I_{n_x})\mathrm{vec}(B_{t-1}) + q_t$$
$$\Phi_{t-1} x_t = \Phi_{t-1}(x_{t-1}^T \otimes I_{n_x})\mathrm{vec}(A_{1,t-1} A_{2,p_A} A_{3,t-1}) + \Phi_{t-1}(u_{t-1}^T \otimes I_{n_x})\mathrm{vec}(B_{1,t-1} B_{2,p_B}) + q_t$$

Using $\mathrm{vec}(ABC) = (C^T \otimes A)\mathrm{vec}(B)$:

$$\Phi_{t-1} x_t = \Phi_{t-1}(x_{t-1}^T \otimes I_{n_x})(A_{3,t-1}^T \otimes A_{1,t-1})\mathrm{vec}(A_{2,p_A}) + \Phi_{t-1}(u_{t-1}^T \otimes I_{n_x})\mathrm{vec}(B_{1,t-1} B_{2,p_B}) + q_t$$

Using $\mathrm{vec}(AB) = (I_{n_c} \otimes A)\mathrm{vec}(B)$ where $B$ is an $n_b \times n_c$ matrix:

$$\Phi_{t-1} x_t = \Phi_{t-1}(x_{t-1}^T \otimes I_{n_x})(A_{3,t-1}^T \otimes A_{1,t-1})\mathrm{vec}(A_{2,p_A}) + \Phi_{t-1}(u_{t-1}^T \otimes I_{n_x})(I_{n_u} \otimes B_{1,t-1})\mathrm{vec}(B_{2,p_B}) + q_t$$

Using the linear constraint definition $\mathrm{vec}(Z_{p_Z}) = f_Z + D_Z p_Z$:

$$\Phi_{t-1} x_t = \Phi_{t-1}(x_{t-1}^T \otimes I_{n_x})(A_{3,t-1}^T \otimes A_{1,t-1})(f_A + D_A p_A) + \Phi_{t-1}(u_{t-1}^T \otimes I_{n_x})(I_{n_u} \otimes B_{1,t-1})(f_B + D_B p_B) + q_t$$

Finally, we can consolidate the deterministic elements by defining

$$f_{A,t} = (A_{3,t}^T \otimes A_{1,t})f_A \qquad D_{A,t} = (A_{3,t}^T \otimes A_{1,t})D_A$$
$$f_{B,t} = (I_{n_u} \otimes B_{1,t})f_B \qquad D_{B,t} = (I_{n_u} \otimes B_{1,t})D_B$$

which results in

$$\Phi_{t-1} x_t = \Phi_{t-1}(x_{t-1}^T \otimes I_{n_x})(f_{A,t-1} + D_{A,t-1} p_A) + \Phi_{t-1}(u_{t-1}^T \otimes I_{n_x})(f_{B,t-1} + D_{B,t-1} p_B) + q_t$$

The second system equation follows from the same process and gives:

$$\Xi_t = (H_t^T H_t)^{-1} H_t^T$$
$$f_{C,t} = (C_{3,t}^T \otimes C_{1,t})f_C \qquad D_{C,t} = (C_{3,t}^T \otimes C_{1,t})D_C$$
$$f_{D,t} = (I_{n_u} \otimes D_{1,t})f_D \qquad D_{D,t} = (I_{n_u} \otimes D_{1,t})D_D$$

$$\Xi_t y_t = \Xi_t(x_t^T \otimes I_{n_y})(f_{C,t} + D_{C,t} p_C) + \Xi_t(u_t^T \otimes I_{n_y})(f_{D,t} + D_{D,t} p_D) + r_t$$
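As a small numerical illustration of the constraint formulation and the Kronecker identities used above, the following plain-Julia snippet builds a constrained $A_{2,p_A}$ from a parameter vector and checks that $\mathrm{vec}(A_1 A_{2,p_A} A_3) = (A_3^T \otimes A_1)(f_A + D_A p_A)$ and that $Ax = (x^T \otimes I)\mathrm{vec}(A)$; the particular matrices are made up for the example:

```julia
using LinearAlgebra

# Constrained 2x2 matrix A2 = [p1  0.5; 0.0  p2]: two free parameters, two fixed
# elements, expressed as vec(A2) = f_A + D_A * p_A (column-major vec ordering).
f_A = [0.0, 0.0, 0.5, 0.0]
D_A = [1.0 0.0;      # element (1,1) <- p1
       0.0 0.0;      # element (2,1) fixed at 0
       0.0 0.0;      # element (1,2) fixed at 0.5
       0.0 1.0]      # element (2,2) <- p2
p_A = [0.9, -0.2]
A2  = reshape(f_A + D_A * p_A, 2, 2)

# Arbitrary known factors A1, A3 and a state vector x for the check.
A1 = [1.0 0.1; 0.0 1.0]
A3 = [0.7 0.0; 0.2 1.0]
x  = [2.0, -1.0]

# vec(A1 * A2 * A3) = (A3' ⊗ A1) * (f_A + D_A * p_A)
@assert vec(A1 * A2 * A3) ≈ kron(A3', A1) * (f_A + D_A * p_A)

# A * x = (x' ⊗ I) * vec(A), the identity used to pull the parameters out of A x.
A = A1 * A2 * A3
@assert A * x ≈ kron(x', Matrix(1.0I, 2, 2)) * vec(A)
```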

The parameterized initial state and error covariances are given similarly, as follows:

$$x_1 = x_0 + \zeta, \quad \zeta \sim N(0, P_0), \quad P_0 = J S J^T$$
$$x_1 = x_0 + J s, \quad s \sim N(0, S)$$

Introducing the usual linear parameter constraints we get

$$x_0 = f_{x_0} + D_{x_0} p_{x_0}, \qquad \mathrm{vec}(S) = f_S + D_S p_S$$

Like $Q$ and $R$, $S$ must be both invertible and a valid covariance matrix. Finally, defining $\Pi = (J^T J)^{-1} J^T$ gives

$$\Pi x_1 = \Pi x_0 + s = \Pi(f_{x_0} + D_{x_0} p_{x_0}) + s$$

4 Expectation-Maximization Parameter Estimation

4.1 Basic Log-likelihood Derivation

Given the parameterized system equations and taking errors to be normally-distributed, the log-likelihood of the states and observations given the matrix parameters can be represented:

$$\epsilon_t = \Phi_{t-1} x_t - \Phi_{t-1}(x_{t-1}^T \otimes I_{n_x})(f_{A,t-1} + D_{A,t-1} p_A) - \Phi_{t-1}(u_{t-1}^T \otimes I_{n_x})(f_{B,t-1} + D_{B,t-1} p_B)$$
$$\epsilon_t = \Phi_{t-1}\left(x_t - (x_{t-1}^T \otimes I_{n_x})f_{A,t-1} - (x_{t-1}^T \otimes I_{n_x})D_{A,t-1} p_A - (u_{t-1}^T \otimes I_{n_x})f_{B,t-1} - (u_{t-1}^T \otimes I_{n_x})D_{B,t-1} p_B\right)$$

$$f_t = x_t - (x_{t-1}^T \otimes I_{n_x})f_{A,t-1} - (u_{t-1}^T \otimes I_{n_x})f_{B,t-1}$$
$$a_t = (x_{t-1}^T \otimes I_{n_x})D_{A,t-1} \qquad b_t = (u_{t-1}^T \otimes I_{n_x})D_{B,t-1}$$
$$\epsilon_t = \Phi_{t-1}(f_t - a_t p_A - b_t p_B)$$

$$\eta_t = \Xi_t y_t - \Xi_t(x_t^T \otimes I_{n_y})(f_{C,t} + D_{C,t} p_C) - \Xi_t(u_t^T \otimes I_{n_y})(f_{D,t} + D_{D,t} p_D)$$
$$\eta_t = \Xi_t\left(y_t - (x_t^T \otimes I_{n_y})f_{C,t} - (x_t^T \otimes I_{n_y})D_{C,t} p_C - (u_t^T \otimes I_{n_y})f_{D,t} - (u_t^T \otimes I_{n_y})D_{D,t} p_D\right)$$

$$g_t = y_t - (x_t^T \otimes I_{n_y})f_{C,t} - (u_t^T \otimes I_{n_y})f_{D,t}$$
$$c_t = (x_t^T \otimes I_{n_y})D_{C,t} \qquad d_t = (u_t^T \otimes I_{n_y})D_{D,t}$$

$$\eta_t = \Xi_t(g_t - c_t p_C - d_t p_D)$$
$$\xi = \Pi x_1 - \Pi(f_{x_0} + D_{x_0} p_{x_0})$$

$$LL = -\frac{1}{2}\sum_{t=2}^{n}\epsilon_t^T Q^{-1}\epsilon_t - \frac{1}{2}\sum_{t=2}^{n}\ln|Q| - \frac{1}{2}\sum_{t=1}^{n}\eta_t^T R^{-1}\eta_t - \frac{1}{2}\sum_{t=1}^{n}\ln|R| - \frac{1}{2}\xi^T S^{-1}\xi - \frac{1}{2}\ln|S| - \frac{n\ln 2\pi}{2}$$

For this and the following sections, it will be useful to define $\tilde{V}_t = \Phi_t^T Q^{-1}\Phi_t$, $\tilde{W}_t = \Xi_t^T R^{-1}\Xi_t$, and $\tilde{P}_0 = \Pi^T S^{-1}\Pi$. Expanding the above gives:

$$-LL = \frac{1}{2}\sum_{t=2}^{n}(f_t - a_t p_A - b_t p_B)^T\tilde{V}_{t-1}(f_t - a_t p_A - b_t p_B) + \frac{1}{2}\sum_{t=1}^{n}(g_t - c_t p_C - d_t p_D)^T\tilde{W}_t(g_t - c_t p_C - d_t p_D) + \frac{1}{2}\sum_{t=2}^{n}\ln|Q| + \frac{1}{2}\sum_{t=1}^{n}\ln|R| + \frac{1}{2}(x_1 - f_{x_0} - D_{x_0}p_{x_0})^T\tilde{P}_0(x_1 - f_{x_0} - D_{x_0}p_{x_0}) + \frac{1}{2}\ln|S| + \frac{n\ln 2\pi}{2}$$

4.2 Log-likelihood Derivation with Zero-variance States or Observations

We also need to account for the possibility that some state or observation rows are deterministically defined (with $G_t$ or $H_t$ having one or more all-zero rows), and thus do not make any probabilistic contribution to the log-likelihood calculation and cannot be estimated via maximum-likelihood methods. While the values of elements of $G_t$ and $H_t$ can vary in time, it is assumed their overall rows will remain consistently either all-zero or not-all-zero. There is additional nuance needed in considering that errors introduced in non-deterministic state or observation rows have the potential to propagate to seemingly-deterministic rows via the $A_t$ and $C_t$ matrices. Holmes provides the full details of this derivation (along with explanations for associated estimation limitations), and so for brevity only the broad strokes are presented here.

Given an $n_x \times n_x$ selection matrix $I_t^d$ that zeros out any state rows whose corresponding rows in $G_t$ are not all-zero, the fully-deterministic states $x_2^d$ can be expressed as:

$$x_2^d = I_2^d x_2 = I_2^d(A_1 x_1 + B_1 u_1)$$
$$x_2^d = I_2^d\left(A_1 x_1 + (u_1^T \otimes I_{n_x})f_{B,1} + (u_1^T \otimes I_{n_x})D_{B,1} p_B\right)$$

Similarly:

$$x_3^d = I_3^d\left(A_2 x_2 + (u_2^T \otimes I_{n_x})f_{B,2} + (u_2^T \otimes I_{n_x})D_{B,2} p_B\right)$$

Now, recursing on $x_2$:

$$x_3^d = I_3^d\left(A_2\left(A_1 x_1 + (u_1^T \otimes I_{n_x})f_{B,1} + (u_1^T \otimes I_{n_x})D_{B,1} p_B\right) + (u_2^T \otimes I_{n_x})f_{B,2} + (u_2^T \otimes I_{n_x})D_{B,2} p_B\right)$$
$$x_3^d = I_3^d\left(A_2 A_1 x_1 + (u_2^T \otimes I_{n_x})f_{B,2} + A_2(u_1^T \otimes I_{n_x})f_{B,1} + \left((u_2^T \otimes I_{n_x})D_{B,2} + A_2(u_1^T \otimes I_{n_x})D_{B,1}\right)p_B\right)$$

More generally, $x_t^d$ can be expressed via the recursion relations

$$x_t^d = I_t^d\left(A_{t-1}^* x_1 + f_{Bu,t-1} + D_{Bu,t-1} p_B\right)$$
$$A_t^* = A_t A_{t-1}^*, \quad A_0^* = I_{n_x}$$
$$f_{Bu,t} = (u_t^T \otimes I_{n_x})f_{B,t} + A_t f_{Bu,t-1}, \quad f_{Bu,0} = 0$$
$$D_{Bu,t} = (u_t^T \otimes I_{n_x})D_{B,t} + A_t D_{Bu,t-1}, \quad D_{Bu,0} = 0$$

With this recursion we will also need to account for the probabilistic and non-deterministic components of the expectation of $x_1$. Specifically, defining $I_{x_0}^d$ as a selection matrix zeroing out the non-deterministic rows of $x_1$ (corresponding to non-zero rows of $J$):

$$\hat{x}_1 = (I_{n_x} - I_{x_0}^d)\hat{x}_1 + I_{x_0}^d x_0 = (I_{n_x} - I_{x_0}^d)\hat{x}_1 + I_{x_0}^d(f_{x_0} + D_{x_0} p_{x_0})$$

We can now consider the general log-likelihood formula with deterministic rows set to zero. $\Phi_t$ will automatically zero out likelihood contributions from deterministic $x_t$ rows in the process evolution terms, while $\Xi_t$ will do the same for $y_t$ in the observation terms and $\Pi$ for the initial state $x_1$. We can apply the $x_t^d$ relation derived above to replace $x_{t-1}$ and $x_t$ in the process and observation terms respectively.

$$\epsilon_t = \Phi_{t-1}(x_t - A_{t-1} x_{t-1} - B_{t-1} u_{t-1})$$
$$\epsilon_t = \Phi_{t-1}\left(x_t - A_{t-1}(I_{n_x} - I_{t-1}^d)x_{t-1} - A_{t-1} I_{t-1}^d x_{t-1} - B_{t-1} u_{t-1}\right)$$
$$\epsilon_t = \Phi_{t-1}\left(x_t - A_{t-1}(I_{n_x} - I_{t-1}^d)x_{t-1} - A_{t-1} x_{t-1}^d - B_{t-1} u_{t-1}\right)$$
$$\epsilon_t = \Phi_{t-1}\left(x_t - A_{t-1}(I_{n_x} - I_{t-1}^d)x_{t-1} - A_{t-1} I_{t-1}^d(A_{t-2}^* x_1 + f_{Bu,t-2} + D_{Bu,t-2} p_B) - B_{t-1} u_{t-1}\right)$$

$$\eta_t = \Xi_t(y_t - C_t x_t - D_t u_t)$$
$$\eta_t = \Xi_t\left(y_t - C_t(I_{n_x} - I_t^d)x_t - C_t I_t^d x_t - D_t u_t\right)$$
$$\eta_t = \Xi_t\left(y_t - C_t(I_{n_x} - I_t^d)x_t - C_t x_t^d - D_t u_t\right)$$
$$\eta_t = \Xi_t\left(y_t - C_t(I_{n_x} - I_t^d)x_t - C_t I_t^d(A_{t-1}^* x_1 + f_{Bu,t-1} + D_{Bu,t-1} p_B) - D_t u_t\right)$$

$$\xi = \Pi(x_1 - f_{x_0} - D_{x_0} p_{x_0})$$

$$-LL = \frac{1}{2}\sum_{t=2}^{n}\epsilon_t^T Q^{-1}\epsilon_t + \frac{1}{2}\sum_{t=2}^{n}\ln|Q| + \frac{1}{2}\sum_{t=1}^{n}\eta_t^T R^{-1}\eta_t + \frac{1}{2}\sum_{t=1}^{n}\ln|R| + \frac{1}{2}\xi^T S^{-1}\xi + \frac{1}{2}\ln|S| + \frac{n\ln 2\pi}{2}$$

From this point we can compute derivatives with respect to the to-be-estimated parameter vectors, and set the expected values (expectation) to zero (maximization). Solving for all parameters and using them to recompute expected values gives a new model with improved fit log-likelihood. This process can be repeated iteratively until the log-likelihood value converges sufficiently.

4.3 Solving for $p_A$

Same for finite-variance rows; doesn't work for fixed rows. In the following, $\langle x_t x_t^T\rangle = \hat{P}_t + \hat{x}_t\hat{x}_t^T$ and $\langle x_t x_{t-1}^T\rangle = \hat{P}_{t,t-1} + \hat{x}_t\hat{x}_{t-1}^T$ denote the smoothed second moments. The final result is:

$$p_A = \left(\sum_{t=2}^{n} D_{A,t-1}^T\left(\langle x_{t-1} x_{t-1}^T\rangle \otimes Q^{-1}\right)D_{A,t-1}\right)^{-1}\sum_{t=2}^{n} D_{A,t-1}^T\left(\mathrm{vec}(Q^{-1}\langle x_t x_{t-1}^T\rangle) - \left(\langle x_{t-1} x_{t-1}^T\rangle \otimes Q^{-1}\right)f_{A,t-1} - \mathrm{vec}(Q^{-1} B_{t-1} u_{t-1}\hat{x}_{t-1}^T)\right)$$

4.4 Solving for $p_B$

We start by taking the derivative of the negative log-likelihood with respect to $p_B$. Using the matrix derivative rule $\frac{\partial}{\partial a}(a^T C a) = 2a^T C$ (for symmetric $C$):

$$\frac{\partial(-LL)}{\partial p_B} = \frac{1}{2}\sum_{t=2}^{n}\frac{\partial}{\partial p_B}\left(\epsilon_t^T Q^{-1}\epsilon_t\right) + \frac{1}{2}\sum_{t=1}^{n}\frac{\partial}{\partial p_B}\left(\eta_t^T R^{-1}\eta_t\right) = \sum_{t=2}^{n}\epsilon_t^T Q^{-1}\frac{\partial\epsilon_t}{\partial p_B} + \sum_{t=1}^{n}\eta_t^T R^{-1}\frac{\partial\eta_t}{\partial p_B}$$

$$\frac{\partial\epsilon_t}{\partial p_B} = -\Phi_{t-1}\left(A_{t-1} I_{t-1}^d D_{Bu,t-2} + (u_{t-1}^T \otimes I_{n_x})D_{B,t-1}\right)$$
$$\frac{\partial\eta_t}{\partial p_B} = -\Xi_t C_t I_t^d D_{Bu,t-1}$$

$$\frac{\partial(-LL)}{\partial p_B} = -\sum_{t=2}^{n}\epsilon_t^T Q^{-1}\Phi_{t-1}\left(A_{t-1} I_{t-1}^d D_{Bu,t-2} + (u_{t-1}^T \otimes I_{n_x})D_{B,t-1}\right) - \sum_{t=1}^{n}\eta_t^T R^{-1}\Xi_t C_t I_t^d D_{Bu,t-1}$$

Substituting the expressions for $\epsilon_t$ and $\eta_t$ (with $B_{t-1} u_{t-1}$ written in its parameterized form) and using $\tilde{V}_{t-1}$ and $\tilde{W}_t$:

$$\frac{\partial(-LL)}{\partial p_B} = -\sum_{t=2}^{n}\left(x_t - A_{t-1}(I_{n_x} - I_{t-1}^d)x_{t-1} - A_{t-1} I_{t-1}^d(A_{t-2}^* x_1 + f_{Bu,t-2} + D_{Bu,t-2} p_B) - (u_{t-1}^T \otimes I_{n_x})f_{B,t-1} - (u_{t-1}^T \otimes I_{n_x})D_{B,t-1} p_B\right)^T\tilde{V}_{t-1}\left(A_{t-1} I_{t-1}^d D_{Bu,t-2} + (u_{t-1}^T \otimes I_{n_x})D_{B,t-1}\right)$$
$$\qquad - \sum_{t=1}^{n}\left(y_t - C_t(I_{n_x} - I_t^d)x_t - C_t I_t^d(A_{t-1}^* x_1 + f_{Bu,t-1} + D_{Bu,t-1} p_B) - D_t u_t\right)^T\tilde{W}_t C_t I_t^d D_{Bu,t-1}$$

Taking expectations and setting $E\left[\frac{\partial(-LL)}{\partial p_B}\right] = 0$:

$$0 = \sum_{t=2}^{n}\left(\hat{x}_t - A_{t-1}(I_{n_x} - I_{t-1}^d)\hat{x}_{t-1} - A_{t-1} I_{t-1}^d(A_{t-2}^*\hat{x}_1 + f_{Bu,t-2} + D_{Bu,t-2} p_B) - (u_{t-1}^T \otimes I_{n_x})f_{B,t-1} - (u_{t-1}^T \otimes I_{n_x})D_{B,t-1} p_B\right)^T\tilde{V}_{t-1}\left(A_{t-1} I_{t-1}^d D_{Bu,t-2} + (u_{t-1}^T \otimes I_{n_x})D_{B,t-1}\right)$$
$$\qquad + \sum_{t=1}^{n}\left(\hat{y}_t - C_t(I_{n_x} - I_t^d)\hat{x}_t - C_t I_t^d(A_{t-1}^*\hat{x}_1 + f_{Bu,t-1} + D_{Bu,t-1} p_B) - D_t u_t\right)^T\tilde{W}_t C_t I_t^d D_{Bu,t-1}$$

Separating the terms that depend on $p_B$ from those that do not:

$$0 = \sum_{t=2}^{n}\left(\hat{x}_t - A_{t-1}(I_{n_x} - I_{t-1}^d)\hat{x}_{t-1} - A_{t-1} I_{t-1}^d(A_{t-2}^*\hat{x}_1 + f_{Bu,t-2}) - (u_{t-1}^T \otimes I_{n_x})f_{B,t-1}\right)^T\tilde{V}_{t-1}\left(A_{t-1} I_{t-1}^d D_{Bu,t-2} + (u_{t-1}^T \otimes I_{n_x})D_{B,t-1}\right)$$
$$\qquad - p_B^T\sum_{t=2}^{n}\left(A_{t-1} I_{t-1}^d D_{Bu,t-2} + (u_{t-1}^T \otimes I_{n_x})D_{B,t-1}\right)^T\tilde{V}_{t-1}\left(A_{t-1} I_{t-1}^d D_{Bu,t-2} + (u_{t-1}^T \otimes I_{n_x})D_{B,t-1}\right)$$
$$\qquad + \sum_{t=1}^{n}\left(\hat{y}_t - C_t(I_{n_x} - I_t^d)\hat{x}_t - C_t I_t^d(A_{t-1}^*\hat{x}_1 + f_{Bu,t-1}) - D_t u_t\right)^T\tilde{W}_t C_t I_t^d D_{Bu,t-1} - p_B^T\sum_{t=1}^{n}\left(C_t I_t^d D_{Bu,t-1}\right)^T\tilde{W}_t C_t I_t^d D_{Bu,t-1}$$

For clarity we can define:

$$\Delta_{1,t} = A_{t-1} I_{t-1}^d D_{Bu,t-2} + (u_{t-1}^T \otimes I_{n_x})D_{B,t-1}$$

$$\Delta_{2,t} = \hat{x}_t - A_{t-1}(I_{n_x} - I_{t-1}^d)\hat{x}_{t-1} - A_{t-1} I_{t-1}^d(A_{t-2}^*\hat{x}_1 + f_{Bu,t-2}) - (u_{t-1}^T \otimes I_{n_x})f_{B,t-1}$$
$$\Delta_{3,t} = C_t I_t^d D_{Bu,t-1}$$
$$\Delta_{4,t} = \hat{y}_t - C_t(I_{n_x} - I_t^d)\hat{x}_t - C_t I_t^d(A_{t-1}^*\hat{x}_1 + f_{Bu,t-1}) - D_t u_t$$

This gives:

$$0 = \sum_{t=2}^{n}\Delta_{2,t}^T\tilde{V}_{t-1}\Delta_{1,t} - p_B^T\sum_{t=2}^{n}\Delta_{1,t}^T\tilde{V}_{t-1}\Delta_{1,t} + \sum_{t=1}^{n}\Delta_{4,t}^T\tilde{W}_t\Delta_{3,t} - p_B^T\sum_{t=1}^{n}\Delta_{3,t}^T\tilde{W}_t\Delta_{3,t}$$

$$p_B = \left(\sum_{t=2}^{n}\Delta_{1,t}^T\tilde{V}_{t-1}\Delta_{1,t} + \sum_{t=1}^{n}\Delta_{3,t}^T\tilde{W}_t\Delta_{3,t}\right)^{-1}\left(\sum_{t=2}^{n}\Delta_{1,t}^T\tilde{V}_{t-1}\Delta_{2,t} + \sum_{t=1}^{n}\Delta_{3,t}^T\tilde{W}_t\Delta_{4,t}\right)$$

4.5 Solving for $p_Q$

The solution for $p_Q$ is not general, but works in a wide range of special cases, most notably when $Q$ is a simple diagonal matrix of parameters, which when multiplied with an arbitrary $G$ can serve most purposes. Aside from notational differences, the derivation is identical to the one presented in Holmes, so for brevity only the final result is presented here:

$$p_Q = \frac{1}{n-1}\left(D_Q^T D_Q\right)^{-1}D_Q^T\,\mathrm{vec}(S_Q)$$
$$S_Q = \sum_{t=2}^{n}\Phi_{t-1}\Big(\langle x_t x_t^T\rangle - \langle x_t x_{t-1}^T\rangle A_{t-1}^T - A_{t-1}\langle x_{t-1} x_t^T\rangle - \hat{x}_t u_{t-1}^T B_{t-1}^T - B_{t-1} u_{t-1}\hat{x}_t^T + A_{t-1}\langle x_{t-1} x_{t-1}^T\rangle A_{t-1}^T + A_{t-1}\hat{x}_{t-1} u_{t-1}^T B_{t-1}^T + B_{t-1} u_{t-1}\hat{x}_{t-1}^T A_{t-1}^T + B_{t-1} u_{t-1} u_{t-1}^T B_{t-1}^T\Big)\Phi_{t-1}^T$$

4.6 Solving for $p_C$

The solution for $p_C$ is analogous to $p_A$:

$$p_C = \left(\sum_{t=1}^{n} D_{C,t}^T\left(\langle x_t x_t^T\rangle \otimes R^{-1}\right)D_{C,t}\right)^{-1}\sum_{t=1}^{n} D_{C,t}^T\left(\mathrm{vec}(R^{-1}\hat{y}_t\hat{x}_t^T) - \left(\langle x_t x_t^T\rangle \otimes R^{-1}\right)f_{C,t} - \mathrm{vec}(R^{-1} D_t u_t\hat{x}_t^T)\right)$$
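To make the structure of these closed-form updates concrete, the following plain-Julia sketch assembles the $p_A$ update from section 4.3 in the simplest setting (time-invariant matrices with $A_1 = A_3 = I$ so that $f_{A,t} = f_A$ and $D_{A,t} = D_A$, no deterministic state rows, no missing observations). The argument layout and helper names are assumptions for this example, not the TimeModels.jl API:

```julia
using LinearAlgebra

# p_A = (Σ D_A'(⟨x_{t-1}x_{t-1}'⟩ ⊗ Q⁻¹)D_A)⁻¹ Σ D_A'[vec(Q⁻¹⟨x_t x_{t-1}'⟩)
#        - (⟨x_{t-1}x_{t-1}'⟩ ⊗ Q⁻¹)f_A - vec(Q⁻¹ B u_{t-1} x̂_{t-1}')]
# xhat: nx × n smoothed means; Phat: smoothed covariances P̂_t;
# Plag: lag-1 covariances, with Plag[t] holding P̂_{t,t-1}; u: nu × n inputs.
function update_pA(xhat, Phat, Plag, u, f_A, D_A, B, Q)
    n  = size(xhat, 2)
    Qi = inv(Q)
    lhs = zeros(size(D_A, 2), size(D_A, 2))
    rhs = zeros(size(D_A, 2))
    for t in 2:n
        Exx  = Phat[t-1] + xhat[:, t-1] * xhat[:, t-1]'   # ⟨x_{t-1} x_{t-1}'⟩
        Exx1 = Plag[t]   + xhat[:, t]   * xhat[:, t-1]'   # ⟨x_t x_{t-1}'⟩
        M = kron(Exx, Qi)
        lhs += D_A' * M * D_A
        rhs += D_A' * (vec(Qi * Exx1) - M * f_A -
                       vec(Qi * B * u[:, t-1] * xhat[:, t-1]'))
    end
    return lhs \ rhs
end
```

The $p_C$ update above has exactly the same shape, with the observation-side moments and $R^{-1}$ taking the place of the state-side moments and $Q^{-1}$.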

4.7 Solving for $p_D$

Finite-variance rows work as normal, with constraints. This derivation isn't in Holmes:

$$p_D = \left(\sum_{t=1}^{n} D_{D,t}^T(u_t \otimes I_{n_y})R^{-1}(u_t^T \otimes I_{n_y})D_{D,t}\right)^{-1}\sum_{t=1}^{n} D_{D,t}^T(u_t \otimes I_{n_y})R^{-1}\left(\hat{y}_t - C_t\hat{x}_t - (u_t^T \otimes I_{n_y})f_{D,t}\right)$$

4.8 Solving for $p_R$

$p_R$ is fully analogous to $p_Q$, and the same matrix property constraints apply.

$$p_R = \frac{1}{n}\left(D_R^T D_R\right)^{-1}D_R^T\,\mathrm{vec}(T)$$
$$T = \sum_{t=1}^{n}\Xi_t\Big(\langle y_t y_t^T\rangle - \hat{y}_t\hat{x}_t^T C_t^T - C_t\hat{x}_t\hat{y}_t^T - \hat{y}_t u_t^T D_t^T - D_t u_t\hat{y}_t^T + C_t\langle x_t x_t^T\rangle C_t^T + C_t\hat{x}_t u_t^T D_t^T + D_t u_t\hat{x}_t^T C_t^T + D_t u_t u_t^T D_t^T\Big)\Xi_t^T$$

4.9 Solving for $p_{x_0}$

Solving for $p_{x_0}$ is particularly messy given how often it appears in the log-likelihood equation! The derivation for the final result is very similar to the one used to solve for $p_B$:

$$\frac{\partial(-LL)}{\partial p_{x_0}} = \frac{1}{2}\sum_{t=2}^{n}\frac{\partial}{\partial p_{x_0}}\left(\epsilon_t^T Q^{-1}\epsilon_t\right) + \frac{1}{2}\sum_{t=1}^{n}\frac{\partial}{\partial p_{x_0}}\left(\eta_t^T R^{-1}\eta_t\right) + \frac{1}{2}\frac{\partial}{\partial p_{x_0}}\left(\xi^T S^{-1}\xi\right) = \sum_{t=2}^{n}\epsilon_t^T Q^{-1}\frac{\partial\epsilon_t}{\partial p_{x_0}} + \sum_{t=1}^{n}\eta_t^T R^{-1}\frac{\partial\eta_t}{\partial p_{x_0}} + \xi^T S^{-1}\frac{\partial\xi}{\partial p_{x_0}}$$

$$\frac{\partial\epsilon_t}{\partial p_{x_0}} = -\Phi_{t-1} A_{t-1} I_{t-1}^d A_{t-2}^* I_{x_0}^d D_{x_0} \equiv -\Phi_{t-1}\Delta_{5,t}$$
$$\frac{\partial\eta_t}{\partial p_{x_0}} = -\Xi_t C_t I_t^d A_{t-1}^* I_{x_0}^d D_{x_0} \equiv -\Xi_t\Delta_{7,t}$$
$$\frac{\partial\xi}{\partial p_{x_0}} = -\Pi D_{x_0}$$

$$\frac{\partial(-LL)}{\partial p_{x_0}} = -\sum_{t=2}^{n}\epsilon_t^T Q^{-1}\Phi_{t-1}\Delta_{5,t} - \sum_{t=1}^{n}\eta_t^T R^{-1}\Xi_t\Delta_{7,t} - \xi^T S^{-1}\Pi D_{x_0}$$

Taking expectations and setting $E\left[\frac{\partial(-LL)}{\partial p_{x_0}}\right] = 0$:

$$0 = -\sum_{t=2}^{n}E[\epsilon_t]^T Q^{-1}\Phi_{t-1}\Delta_{5,t} - \sum_{t=1}^{n}E[\eta_t]^T R^{-1}\Xi_t\Delta_{7,t} - E[\xi]^T S^{-1}\Pi D_{x_0}$$

$$E[\epsilon_t] = \Phi_{t-1}\Big(\hat{x}_t - A_{t-1}(I_{n_x} - I_{t-1}^d)\hat{x}_{t-1} - A_{t-1} I_{t-1}^d\big(A_{t-2}^*\big((I_{n_x} - I_{x_0}^d)\hat{x}_1 + I_{x_0}^d(f_{x_0} + D_{x_0} p_{x_0})\big) + f_{Bu,t-2} + D_{Bu,t-2} p_B\big) - B_{t-1} u_{t-1}\Big)$$
$$E[\eta_t] = \Xi_t\Big(\hat{y}_t - C_t(I_{n_x} - I_t^d)\hat{x}_t - C_t I_t^d\big(A_{t-1}^*\big((I_{n_x} - I_{x_0}^d)\hat{x}_1 + I_{x_0}^d(f_{x_0} + D_{x_0} p_{x_0})\big) + f_{Bu,t-1} + D_{Bu,t-1} p_B\big) - D_t u_t\Big)$$
$$E[\xi] = \Pi(\hat{x}_1 - f_{x_0} - D_{x_0} p_{x_0})$$

Substituting these expressions, separating out the terms that depend on $p_{x_0}$, and defining

$$\Delta_{6,t} = \hat{x}_t - A_{t-1}(I_{n_x} - I_{t-1}^d)\hat{x}_{t-1} - A_{t-1} I_{t-1}^d\big(A_{t-2}^*\big((I_{n_x} - I_{x_0}^d)\hat{x}_1 + I_{x_0}^d f_{x_0}\big) + f_{Bu,t-2} + D_{Bu,t-2} p_B\big) - B_{t-1} u_{t-1}$$

$$\Delta_{8,t} = \hat{y}_t - C_t(I_{n_x} - I_t^d)\hat{x}_t - C_t I_t^d\big(A_{t-1}^*\big((I_{n_x} - I_{x_0}^d)\hat{x}_1 + I_{x_0}^d f_{x_0}\big) + f_{Bu,t-1} + D_{Bu,t-1} p_B\big) - D_t u_t$$

gives

$$0 = -\sum_{t=2}^{n}\Delta_{6,t}^T\tilde{V}_{t-1}\Delta_{5,t} + p_{x_0}^T\sum_{t=2}^{n}\Delta_{5,t}^T\tilde{V}_{t-1}\Delta_{5,t} - \sum_{t=1}^{n}\Delta_{8,t}^T\tilde{W}_t\Delta_{7,t} + p_{x_0}^T\sum_{t=1}^{n}\Delta_{7,t}^T\tilde{W}_t\Delta_{7,t} - (\hat{x}_1 - f_{x_0})^T\tilde{P}_0 D_{x_0} + p_{x_0}^T D_{x_0}^T\tilde{P}_0 D_{x_0}$$

Transposing the equation and isolating $p_{x_0}$:

$$\left(\sum_{t=2}^{n}\Delta_{5,t}^T\tilde{V}_{t-1}\Delta_{5,t} + \sum_{t=1}^{n}\Delta_{7,t}^T\tilde{W}_t\Delta_{7,t} + D_{x_0}^T\tilde{P}_0 D_{x_0}\right)p_{x_0} = \sum_{t=2}^{n}\Delta_{5,t}^T\tilde{V}_{t-1}\Delta_{6,t} + \sum_{t=1}^{n}\Delta_{7,t}^T\tilde{W}_t\Delta_{8,t} + D_{x_0}^T\tilde{P}_0(\hat{x}_1 - f_{x_0})$$

$$p_{x_0} = \left(\sum_{t=2}^{n}\Delta_{5,t}^T\tilde{V}_{t-1}\Delta_{5,t} + \sum_{t=1}^{n}\Delta_{7,t}^T\tilde{W}_t\Delta_{7,t} + D_{x_0}^T\tilde{P}_0 D_{x_0}\right)^{-1}\left(\sum_{t=2}^{n}\Delta_{5,t}^T\tilde{V}_{t-1}\Delta_{6,t} + \sum_{t=1}^{n}\Delta_{7,t}^T\tilde{W}_t\Delta_{8,t} + D_{x_0}^T\tilde{P}_0(\hat{x}_1 - f_{x_0})\right)$$

4.10 Solving for $p_S$

$p_S$ can be solved for in a very similar fashion to $p_Q$ and $p_R$, although this is not done explicitly in Holmes:

$$p_S = \left(D_S^T D_S\right)^{-1}D_S^T\,\mathrm{vec}\left(\Pi\left(\langle x_1 x_1^T\rangle - \hat{x}_1 x_0^T - x_0\hat{x}_1^T + x_0 x_0^T\right)\Pi^T\right)$$
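Putting the pieces together, the overall fitting procedure alternates Kalman smoothing (expectation) with the closed-form parameter updates derived above (maximization). A schematic plain-Julia sketch of that loop is shown below; `assemble`, `smooth_moments`, and the various `update_*` functions are placeholders standing in for the smoother of section 2 and the solutions of sections 4.3 through 4.10, not actual TimeModels.jl functions:

```julia
# Schematic EM loop: alternate Kalman smoothing with the closed-form M-step updates.
function fit_em(y, u, params, fixed; maxiter = 200, tol = 1e-6)
    ll_prev = -Inf
    for iter in 1:maxiter
        sys = assemble(params, fixed)        # build A, B, C, D, Q, R, x_0, S from parameter vectors
        mom, ll = smooth_moments(y, u, sys)  # E-step: smoothed means, covariances, lag-1 covariances
        params = (p_A  = update_pA(mom, sys),   # M-step: closed-form updates from section 4
                  p_B  = update_pB(mom, sys),
                  p_Q  = update_pQ(mom, sys),
                  p_C  = update_pC(mom, sys),
                  p_D  = update_pD(mom, sys),
                  p_R  = update_pR(mom, sys),
                  p_x0 = update_px0(mom, sys),
                  p_S  = update_pS(mom, sys))
        if abs(ll - ll_prev) < tol           # stop once the log-likelihood has converged
            return params, ll
        end
        ll_prev = ll
    end
    return params, ll_prev
end
```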
